url: string, lengths 15 to 1.13k
text: string, lengths 100 to 1.04M
metadata: string, lengths 1.06k to 1.1k
https://www.imlearningmath.com/in-scrabble-which-of-these-letters-is-worth-five-points/
# In Scrabble, which of these letters is worth five points? W, J, K, Q. The Answer: The correct answer is K.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9182052612304688, "perplexity": 4170.151309895098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585201.94/warc/CC-MAIN-20211018093606-20211018123606-00203.warc.gz"}
http://projecteuler.net/problem=323
## Bitwise-OR operations on random integers ### Problem 323 Published on Sunday, 6th February 2011, 07:00 am; Solved by 1571 Let $y_0, y_1, y_2, \ldots$ be a sequence of random unsigned 32-bit integers (i.e. $0 \le y_i < 2^{32}$, every value equally likely). For the sequence $x_i$ the following recursion is given: • $x_0 = 0$ and • $x_i = x_{i-1} | y_{i-1}$, for $i > 0$. ( $|$ is the bitwise-OR operator) It can be seen that eventually there will be an index $N$ such that $x_i = 2^{32} - 1$ (a bit-pattern of all ones) for all $i \ge N$. Find the expected value of $N$.
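A minimal sketch (not from the problem page; the helper name is mine) of how one might approach this numerically. Each bit of $x_i$ gets set independently with probability 1/2 per step, so $N$ is the maximum of 32 geometric variables, giving $P(N \le n) = (1 - 2^{-n})^{32}$ and hence $E[N] = \sum_{n \ge 0} \left[1 - (1 - 2^{-n})^{32}\right]$:

```python
# Sketch: estimate E[N] for Problem 323 by direct simulation, then
# compare with the exact series E[N] = sum_{n>=0} (1 - (1 - 2^-n)^32).
import random

def sample_N(bits=32):
    """Run the OR-recursion once; return the first index with all bits set."""
    mask = (1 << bits) - 1
    x, n = 0, 0
    while x != mask:
        x |= random.getrandbits(bits)  # one random unsigned 32-bit y
        n += 1
    return n

trials = 100_000
estimate = sum(sample_N() for _ in range(trials)) / trials
exact = sum(1 - (1 - 2.0 ** -n) ** 32 for n in range(200))  # tail is negligible
print(estimate, round(exact, 10))
```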
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9433654546737671, "perplexity": 2385.307215511516}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
https://hal.inria.fr/hal-03024618
# Robustness of the Young/Daly formula for stochastic iterative applications ROMA - Optimisation des ressources : modèles, algorithmes et ordonnancement, Inria Grenoble - Rhône-Alpes, LIP - Laboratoire de l'Informatique du Parallélisme; TADAAM - Topology-Aware System-Scale Data Management for High-Performance Computing, LaBRI - Laboratoire Bordelais de Recherche en Informatique, Inria Bordeaux - Sud-Ouest. Abstract: The Young/Daly formula for periodic checkpointing is known to hold for a divisible-load application where one can checkpoint at any time-step. In a nutshell, the optimal period is $P_{YD} = \sqrt{2 \mu_f C}$, where $\mu_f$ is the Mean Time Between Failures (MTBF) and $C$ is the checkpoint time. This paper assesses the accuracy of the formula for applications decomposed into computational iterations where: (i) the duration of an iteration is stochastic, i.e., obeys a probability distribution law $D$ of mean $\mu_D$; and (ii) one can checkpoint only at the end of an iteration. We first consider static strategies where checkpoints are taken after a given number of iterations $k$ and provide a closed-form, asymptotically optimal formula for $k$, valid for any distribution $D$. We then show that using the Young/Daly formula to compute $k$ (as $k \cdot \mu_D = P_{YD}$) is a first-order approximation of this formula. We also consider dynamic strategies where one decides to checkpoint at the end of an iteration only if the total amount of work since the last checkpoint exceeds a threshold $W_{th}$, and otherwise proceeds to the next iteration. Similarly, we provide a closed-form formula for this threshold and show that $P_{YD}$ is a first-order approximation of $W_{th}$. Finally, we provide an extensive set of simulations where $D$ is either Uniform, Gamma or truncated Normal, which shows the global accuracy of the Young/Daly formula, even when the distribution $D$ has a large standard deviation (and when one cannot use a first-order approximation). Hence we establish that the relevance of the formula goes well beyond its original framework. Citation: Yishu Du, Loris Marchal, Yves Robert, Guillaume Pallez. Robustness of the Young/Daly formula for stochastic iterative applications. ICPP 2020 - 49th International Conference on Parallel Processing, Aug 2020, Edmonton / Virtual, Canada. pp.1-11, ⟨10.1145/3404397.3404419⟩. ⟨hal-03024618⟩
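A minimal numeric sketch of the two quantities named in the abstract; the three platform numbers below are invented for illustration and are not from the paper:

```python
# Sketch: the Young/Daly period and the first-order static rule k * mu_D ≈ P_YD.
from math import sqrt

mu_f = 24 * 3600.0  # MTBF in seconds (assumed)
C = 60.0            # checkpoint cost in seconds (assumed)
mu_D = 100.0        # mean iteration length in seconds (assumed)

P_YD = sqrt(2 * mu_f * C)        # optimal period for a divisible-load application
k = max(1, round(P_YD / mu_D))   # checkpoint every k iterations, first-order rule
print(P_YD, k)                   # ≈ 3220 seconds, k = 32
```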
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9355192184448242, "perplexity": 2797.39848751941}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703547475.44/warc/CC-MAIN-20210124075754-20210124105754-00055.warc.gz"}
http://terrytao.wordpress.com/tag/equidistribution/
You are currently browsing the tag archive for the ‘equidistribution’ tag. In Notes 5, we saw that the Gowers uniformity norms on vector spaces ${{\bf F}^n}$ in high characteristic were controlled by classical polynomial phases ${e(\phi)}$. Now we study the analogous situation on cyclic groups ${{\bf Z}/N{\bf Z}}$. Here, there is an unexpected surprise: the polynomial phases (classical or otherwise) are no longer sufficient to control the Gowers norms ${U^{s+1}({\bf Z}/N{\bf Z})}$ once ${s}$ exceeds ${1}$. To resolve this problem, one must enlarge the space of polynomials to a larger class. It turns out that there are at least three closely related options for this class: the local polynomials, the bracket polynomials, and the nilsequences. Each of the three classes has its own strengths and weaknesses, but in my opinion the nilsequences seem to be the most natural class, due to the rich algebraic and dynamical structure coming from the nilpotent Lie group undergirding such sequences. For reasons of space we shall focus primarily on the nilsequence viewpoint here. Traditionally, nilsequences have been defined in terms of linear orbits ${n \mapsto g^n x}$ on nilmanifolds ${G/\Gamma}$; however, in recent years it has been realised that it is convenient for technical reasons (particularly for the quantitative “single-scale” theory) to generalise this setup to that of polynomial orbits ${n \mapsto g(n) \Gamma}$, and this is the perspective we will take here. A polynomial phase ${n \mapsto e(\phi(n))}$ on a finite abelian group ${H}$ is formed by starting with a polynomial ${\phi: H \rightarrow {\bf R}/{\bf Z}}$ to the unit circle, and then composing it with the exponential function ${e: {\bf R}/{\bf Z} \rightarrow {\bf C}}$. To create a nilsequence ${n \mapsto F(g(n) \Gamma)}$, we generalise this construction by starting with a polynomial ${g \Gamma: H \rightarrow G/\Gamma}$ into a nilmanifold ${G/\Gamma}$, and then composing this with a Lipschitz function ${F: G/\Gamma \rightarrow {\bf C}}$. (The Lipschitz regularity class is convenient for minor technical reasons, but one could also use other regularity classes here if desired.) These classes of sequences certainly include the polynomial phases, but are somewhat more general; for instance, they almost include bracket polynomial phases such as ${n \mapsto e( \lfloor \alpha n \rfloor \beta n )}$. (The “almost” here is because the relevant functions ${F: G/\Gamma \rightarrow {\bf C}}$ involved are only piecewise Lipschitz rather than Lipschitz, but this is primarily a technical issue and one should view bracket polynomial phases as “morally” being nilsequences.) In these notes we set out the basic theory for these nilsequences, including their equidistribution theory (which generalises the equidistribution theory of polynomial flows on tori from Notes 1) and show that they are indeed obstructions to the Gowers norm being small. This leads to the inverse conjecture for the Gowers norms that shows that the Gowers norms on cyclic groups are indeed controlled by these sequences. In the previous lectures, we have focused mostly on the equidistribution or linear patterns on a subset of the integers ${{\bf Z}}$, and in particular on intervals ${[N]}$. The integers are of course a very important domain to study in additive combinatorics; but there are also other fundamental model examples of domains to study. One of these is that of a vector space ${V}$ over a finite field ${{\bf F} = {\bf F}_p}$ of prime order. 
Such domains are of interest in computer science (particularly when ${p=2}$) and also in number theory; but they also serve as an important simplified “dyadic model” for the integers. See this survey article of Green for further discussion of this point. The additive combinatorics of the integers ${{\bf Z}}$, and of vector spaces ${V}$ over finite fields, are analogous, but not quite identical. For instance, the analogue of an arithmetic progression in ${{\bf Z}}$ is a subspace of ${V}$. In many cases, the finite field theory is a little bit simpler than the integer theory; for instance, subspaces are closed under addition, whereas arithmetic progressions are only “almost” closed under addition in various senses. (For instance, ${[N]}$ is closed under addition approximately half of the time.) However, there are some ways in which the integers are better behaved. For instance, because the integers can be generated by a single generator, a homomorphism from ${{\bf Z}}$ to some other group ${G}$ can be described by a single group element ${g}$: ${n \mapsto g^n}$. However, to specify a homomorphism from a vector space ${V}$ to ${G}$ one would need to specify one group element for each dimension of ${V}$. Thus we see that there is a tradeoff when passing from ${{\bf Z}}$ (or ${[N]}$) to a vector space model; one gains a bounded torsion property, at the expense of conceding the bounded generation property. (Of course, if one wants to deal with arbitrarily large domains, one has to concede one or the other; the only additive groups that have both bounded torsion and boundedly many generators, are bounded.) The starting point for this course (Notes 1) was the study of equidistribution of polynomials ${P: {\bf Z} \rightarrow {\bf R}/{\bf Z}}$ from the integers to the unit circle. We now turn to the parallel theory of equidistribution of polynomials ${P: V \rightarrow {\bf R}/{\bf Z}}$ from vector spaces over finite fields to the unit circle. Actually, for simplicity we will mostly focus on the classical case, when the polynomials in fact take values in the ${p^{th}}$ roots of unity (where ${p}$ is the characteristic of the field ${{\bf F} = {\bf F}_p}$). As it turns out, the non-classical case is also of importance (particularly in low characteristic), but the theory is more difficult; see these notes for some further discussion. (Linear) Fourier analysis can be viewed as a tool to study an arbitrary function ${f}$ on (say) the integers ${{\bf Z}}$, by looking at how such a function correlates with linear phases such as ${n \mapsto e(\xi n)}$, where ${e(x) := e^{2\pi i x}}$ is the fundamental character, and ${\xi \in {\bf R}}$ is a frequency. These correlations control a number of expressions relating to ${f}$, such as the expected behaviour of ${f}$ on arithmetic progressions ${n, n+r, n+2r}$ of length three. In this course we will be studying higher-order correlations, such as the correlation of ${f}$ with quadratic phases such as ${n \mapsto e(\xi n^2)}$, as these will control the expected behaviour of ${f}$ on more complex patterns, such as arithmetic progressions ${n, n+r, n+2r, n+3r}$ of length four. In order to do this, we must first understand the behaviour of exponential sums such as $\displaystyle \sum_{n=1}^N e( \alpha n^2 ).$ Such sums are closely related to the distribution of expressions such as ${\alpha n^2 \hbox{ mod } 1}$ in the unit circle ${{\bf T} := {\bf R}/{\bf Z}}$, as ${n}$ varies from ${1}$ to ${N}$. 
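The behaviour of these sums is easy to see numerically. Below is a small illustration (my own, not from the post): the normalized sum $\frac{1}{N}\sum_{n=1}^N e(\alpha n^2)$ stays bounded away from zero for a rational $\alpha$ such as $1/3$, but exhibits near-total cancellation for an irrational $\alpha$ such as $\sqrt{2}$:

```python
# Sketch: normalized quadratic Weyl sums (1/N) sum_{n<=N} e(alpha n^2)
# for rational versus irrational alpha.
import cmath

def weyl_sum(alpha, N):
    s = sum(cmath.exp(2j * cmath.pi * alpha * n * n) for n in range(1, N + 1))
    return abs(s) / N

N = 10_000
print(weyl_sum(1 / 3, N))      # rational: stabilizes near sqrt(3)/3 ≈ 0.577
print(weyl_sum(2 ** 0.5, N))   # irrational: small, roughly N^(-1/2) cancellation
```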
More generally, one is interested in the distribution of polynomials ${P: {\bf Z}^d \rightarrow {\bf T}}$ of one or more variables taking values in a torus ${{\bf T}}$; for instance, one might be interested in the distribution of the quadruplet ${(\alpha n^2, \alpha (n+r)^2, \alpha(n+2r)^2, \alpha(n+3r)^2)}$ as ${n,r}$ both vary from ${1}$ to ${N}$. Roughly speaking, once we understand these types of distributions, then the general machinery of quadratic Fourier analysis will then allow us to understand the distribution of the quadruplet ${(f(n), f(n+r), f(n+2r), f(n+3r))}$ for more general classes of functions ${f}$; this can lead for instance to an understanding of the distribution of arithmetic progressions of length ${4}$ in the primes, if ${f}$ is somehow related to the primes. More generally, to find arithmetic progressions such as ${n,n+r,n+2r,n+3r}$ in a set ${A}$, it would suffice to understand the equidistribution of the quadruplet ${(1_A(n), 1_A(n+r), 1_A(n+2r), 1_A(n+3r))}$ in ${\{0,1\}^4}$ as ${n}$ and ${r}$ vary. This is the starting point for the fundamental connection between combinatorics (and more specifically, the task of finding patterns inside sets) and dynamics (and more specifically, the theory of equidistribution and recurrence in measure-preserving dynamical systems, which is a subfield of ergodic theory). This connection was explored in one of my previous classes; it will also be important in this course (particularly as a source of motivation), but the primary focus will be on finitary, and Fourier-based, methods. The theory of equidistribution of polynomial orbits was developed in the linear case by Dirichlet and Kronecker, and in the polynomial case by Weyl. There are two regimes of interest; the (qualitative) asymptotic regime in which the scale parameter ${N}$ is sent to infinity, and the (quantitative) single-scale regime in which ${N}$ is kept fixed (but large). Traditionally, it is the asymptotic regime which is studied, which connects the subject to other asymptotic fields of mathematics, such as dynamical systems and ergodic theory. However, for many applications (such as the study of the primes), it is the single-scale regime which is of greater importance. The two regimes are not directly equivalent, but are closely related: the single-scale theory can be usually used to derive analogous results in the asymptotic regime, and conversely the arguments in the asymptotic regime can serve as a simplified model to show the way to proceed in the single-scale regime. The analogy between the two can be made tighter by introducing the (qualitative) ultralimit regime, which is formally equivalent to the single-scale regime (except for the fact that explicitly quantitative bounds are abandoned in the ultralimit), but resembles the asymptotic regime quite closely. We will view the equidistribution theory of polynomial orbits as a special case of Ratner’s theorem, which we will study in more generality later in this course. For the finitary portion of the course, we will be using asymptotic notation: ${X \ll Y}$, ${Y \gg X}$, or ${X = O(Y)}$ denotes the bound ${|X| \leq CY}$ for some absolute constant ${C}$, and if we need ${C}$ to depend on additional parameters then we will indicate this by subscripts, e.g. ${X \ll_d Y}$ means that ${|X| \leq C_d Y}$ for some ${C_d}$ depending only on ${d}$. In the ultralimit theory we will use an analogue of asymptotic notation, which we will review later in these notes. Today, Prof. 
Margulis continued his lecture series, focusing on two specific examples of homogeneous dynamics applications to number theory, namely counting lattice points on algebraic varieties, and quantitative versions of the Oppenheim conjecture.  (Due to lack of time, the third application mentioned in the previous lecture, namely metric theory of Diophantine approximation, was not covered.) The final distinguished lecture series for the academic year here at UCLA is being given this week by Gregory Margulis, who is giving three lectures on “homogeneous dynamics and number theory”.  In his first lecture, Prof. Margulis surveyed some classical problems in number theory that turn out, rather surprisingly, to have more or less equivalent counterparts in homogeneous dynamics – the theory of dynamical systems on homogeneous spaces $G/\Gamma$. As usual, any errors in this post are due to my transcription of the talk. This week I was in Columbus, Ohio, attending a conference on equidistribution on manifolds. I talked about my recent paper with Ben Green on the quantitative behaviour of polynomial sequences in nilmanifolds, which I have blogged about previously. During my talk (and inspired by the immediately preceding talk of Vitaly Bergelson), I stated explicitly for the first time a generalisation of the van der Corput trick which morally underlies our paper, though it is somewhat buried there as we specialised it to our application at hand (and also had to deal with various quantitative issues that made the presentation more complicated). After the talk, several people asked me for a more precise statement of this trick, so I am presenting it here, and as an application reproving an old theorem of Leon Green that gives a necessary and sufficient condition as to whether a linear sequence $(g^n x)_{n=1}^\infty$ on a nilmanifold $G/\Gamma$ is equidistributed, which generalises the famous theorem of Weyl on equidistribution of polynomials. UPDATE, Feb 2013: It has been pointed out to me by Pavel Zorin that this argument does not fully recover the theorem of Leon Green; to cover all cases, one needs the more complicated van der Corput argument in our paper. Ben Green and I have just uploaded our joint paper, “The distribution of polynomials over finite fields, with applications to the Gowers norms“, to the arXiv, and submitted to Contributions to Discrete Mathematics. This paper, which we first announced at the recent FOCS meeting, and then gave an update on two weeks ago on this blog, is now in final form. It is being made available simultaneously with a closely related paper of Lovett, Meshulam, and Samorodnitsky. In the previous post on this topic, I focused on the negative results in the paper, and in particular the fact that the inverse conjecture for the Gowers norm fails for certain degrees in low characteristic. Today, I’d like to focus instead on the positive results, which assert that for polynomials in many variables over finite fields whose degree is less than the characteristic of the field, one has a satisfactory theory for the distribution of these polynomials. Very roughly speaking, the main technical results are: • A regularity lemma: Any polynomial can be expressed as a combination of a bounded number of other polynomials which are regular, in the sense that no non-trivial linear combination of these polynomials can be expressed efficiently in terms of lower degree polynomials. • A counting lemma: A regular collection of polynomials behaves as if the polynomials were selected randomly. 
In particular, the polynomials are jointly equidistributed. Ben Green and I have just uploaded our paper “The quantitative behaviour of polynomial orbits on nilmanifolds” to the arXiv (and shortly to be submitted to a journal, once a companion paper is finished). This paper grew out of our efforts to prove the Möbius and Nilsequences conjecture MN(s) from our earlier paper, which has applications to counting various linear patterns in primes (Dickson’s conjecture). These efforts were successful – as the companion paper will reveal – but it turned out that in order to establish this number-theoretic conjecture, we had to first establish a purely dynamical quantitative result about polynomial sequences in nilmanifolds, very much in the spirit of the celebrated theorems of Marina Ratner on unipotent flows; I plan to discuss her theorems in more detail in a followup post to this one. In this post I will not discuss the number-theoretic applications or the connections with Ratner’s theorem, and instead describe our result from a slightly different viewpoint, starting from some very simple examples and gradually moving to the general situation considered in our paper. To begin with, consider an infinite linear sequence $(n \alpha + \beta)_{n \in {\Bbb N}}$ in the unit circle ${\Bbb R}/{\Bbb Z}$, where $\alpha, \beta \in {\Bbb R}/{\Bbb Z}$. (One can think of this sequence as the orbit of $\beta$ under the action of the shift operator $T: x \mapsto x +\alpha$ on the unit circle.) This sequence can do one of two things: 1. If $\alpha$ is rational, then the sequence $(n \alpha + \beta)_{n \in {\Bbb N}}$ is periodic and thus only takes on finitely many values. 2. If $\alpha$ is irrational, then the sequence $(n \alpha + \beta)_{n \in {\Bbb N}}$ is dense in ${\Bbb R}/{\Bbb Z}$. In fact, it is not just dense, it is equidistributed, or equivalently that $\displaystyle\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N F( n \alpha + \beta ) = \int_{{\Bbb R}/{\Bbb Z}} F$ for all continuous functions $F: {\Bbb R}/{\Bbb Z} \to {\Bbb C}$. This statement is known as the equidistribution theorem. We thus see that infinite linear sequences exhibit a sharp dichotomy in behaviour between periodicity and equidistribution; intermediate scenarios, such as concentration on a fractal set (such as a Cantor set), do not occur with linear sequences. This dichotomy between structure and randomness is in stark contrast to exponential sequences such as $( 2^n \alpha)_{n \in {\Bbb N}}$, which can exhibit an extremely wide spectrum of behaviours. For instance, the question of whether $(10^n \pi)_{n \in {\Bbb N}}$ is equidistributed mod 1 is an old unsolved problem, equivalent to asking whether $\pi$ is normal base 10. Intermediate between linear sequences and exponential sequences are polynomial sequences $(P(n))_{n \in {\Bbb N}}$, where P is a polynomial with coefficients in ${\Bbb R}/{\Bbb Z}$. A famous theorem of Weyl asserts that infinite polynomial sequences enjoy the same dichotomy as their linear counterparts, namely that they are either periodic (which occurs when all non-constant coefficients are rational) or equidistributed (which occurs when at least one non-constant coefficient is irrational). Thus for instance the fractional parts $\{ \sqrt{2}n^2\}$ of $\sqrt{2} n^2$ are equidistributed modulo 1. This theorem is proven by Fourier analysis combined with non-trivial bounds on Weyl sums. For our applications, we are interested in strengthening these results in two directions. 
Firstly, we wish to generalise from polynomial sequences in the circle ${\Bbb R}/{\Bbb Z}$ to polynomial sequences $(g(n)\Gamma)_{n \in {\Bbb N}}$ in other homogeneous spaces, in particular nilmanifolds. Secondly, we need quantitative equidistribution results for finite orbits $(g(n)\Gamma)_{1 \leq n \leq N}$ rather than qualitative equidistribution for infinite orbits $(g(n)\Gamma)_{n \in {\Bbb N}}$.
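The linear dichotomy described above is also easy to see on a computer. A toy illustration (mine, not the authors'): bin the orbit $(n\alpha + \beta \bmod 1)$ and watch it either concentrate on finitely many values or spread out uniformly.

```python
# Sketch: periodicity vs. equidistribution for the orbit n*alpha + beta mod 1.
import math

def orbit_bin_counts(alpha, beta=0.0, N=100_000, bins=10):
    counts = [0] * bins
    for n in range(N):
        counts[int(((n * alpha + beta) % 1.0) * bins)] += 1
    return counts

print(orbit_bin_counts(3 / 7))          # rational alpha: only 7 values ever occur
print(orbit_bin_counts(math.sqrt(2)))   # irrational alpha: about N/bins in every bin
```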
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 115, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.887397050857544, "perplexity": 244.73349283085517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703108201/warc/CC-MAIN-20130516111828-00020-ip-10-60-113-184.ec2.internal.warc.gz"}
https://hilbertthm90.wordpress.com/category/math/number-theory/
# An Application of p-adic Volume to Minimal Models Today I’ll sketch a proof of Ito that birational smooth minimal models have all of their Hodge numbers exactly the same. It uses the ${p}$-adic integration from last time plus one piece of heavy machinery. First, the piece of heavy machinery: If ${X, Y}$ are finite type schemes over the ring of integers ${\mathcal{O}_K}$ of a number field whose generic fibers are smooth and proper, and if ${|X(\mathcal{O}_K/\mathfrak{p})|=|Y(\mathcal{O}_K/\mathfrak{p})|}$ for all but finitely many prime ideals ${\mathfrak{p}}$, then the generic fibers ${X_\eta}$ and ${Y_\eta}$ have the same Hodge numbers. If you’ve seen these types of hypotheses before, then there’s an obvious set of theorems that will probably be used to prove this (Chebotarev + Hodge-Tate decomposition + Weil conjectures). Let’s first restrict our attention to a single prime. Since we will be able to throw out bad primes, suppose we have ${X, Y}$ smooth, proper varieties over ${\mathbb{F}_q}$ of characteristic ${p}$. Proposition: If ${|X(\mathbb{F}_{q^r})|=|Y(\mathbb{F}_{q^r})|}$ for all ${r}$, then ${X}$ and ${Y}$ have the same ${\ell}$-adic Betti numbers. This is a basic exercise in using the Weil conjectures. First, ${X}$ and ${Y}$ clearly have the same Zeta functions, because the Zeta function is defined entirely by the number of points over ${\mathbb{F}_{q^r}}$. But the Zeta function decomposes $\displaystyle Z(X,t)=\frac{P_1(t)\cdots P_{2n-1}(t)}{P_0(t)\cdots P_{2n}(t)}$ where ${P_i}$ is the characteristic polynomial of Frobenius acting on ${H^i(X_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)}$. The Weil conjectures tell us we can recover the ${P_i(t)}$ if we know the Zeta function. But now $\displaystyle \dim H^i(X_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)=\deg P_i(t)=\dim H^i(Y_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)$ and hence the Betti numbers are the same. Now let’s go back and notice the magic of ${\ell}$-adic cohomology. Suppose ${X}$ and ${Y}$ are as before over the ring of integers of a number field. Our assumption about the number of points over finite fields being the same for all but finitely many primes implies that we can pick a prime of good reduction and get that the ${\ell}$-adic Betti numbers of the reductions are the same ${b_i(X_p)=b_i(Y_p)}$. One of the main purposes of ${\ell}$-adic cohomology is that it is “topological.” By smooth, proper base change we get that the ${\ell}$-adic Betti numbers of the geometric generic fibers are the same $\displaystyle b_i(X_{\overline{\eta}})=b_i(X_p)=b_i(Y_p)=b_i(Y_{\overline{\eta}}).$ By the standard characteristic ${0}$ comparison theorem we then get that the singular cohomology is the same when base changing to ${\mathbb{C}}$, i.e. $\displaystyle \dim H^i(X_\eta\otimes \mathbb{C}, \mathbb{Q})=\dim H^i(Y_\eta \otimes \mathbb{C}, \mathbb{Q}).$ Now we use the Chebotarev density theorem. The Galois representations on each cohomology have the same traces of Frobenius for all but finitely many primes by assumption and hence the semisimplifications of these Galois representations are the same everywhere! Lastly, these Galois representations are coming from smooth, proper varieties and hence the representations are Hodge-Tate. You can now read the Hodge numbers off of the Hodge-Tate decomposition of the semisimplification and hence the two generic fibers have the same Hodge numbers. 
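To make the point-counting mechanism in the Proposition concrete, here is a toy check (my own illustration, with an arbitrarily chosen curve): brute-force counts of an elliptic curve over F_5 and F_25 must satisfy the Weil relation $N_r = q^r + 1 - (\alpha^r + \beta^r)$ coming from the characteristic polynomial of Frobenius on $H^1$.

```python
# Sketch: counts of E: y^2 = x^3 + x + 1 over F_5 and F_25 determine, and are
# determined by, the trace of Frobenius a on H^1 (so P_1(t) = 1 - a*t + 5*t^2).
p = 5

def count_affine_Fp():
    return sum(1 for x in range(p) for y in range(p)
               if (y * y - (x**3 + x + 1)) % p == 0)

# F_25 = F_5[t]/(t^2 - 3), since 3 is a non-square mod 5; elements are (a, b) <-> a + b*t.
def mul(u, v):
    a, b = u; c, d = v
    return ((a * c + 3 * b * d) % p, (a * d + b * c) % p)

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

def count_affine_Fp2():
    F = [(a, b) for a in range(p) for b in range(p)]
    total = 0
    for x in F:
        rhs = add(add(mul(mul(x, x), x), x), (1, 0))  # x^3 + x + 1
        total += sum(1 for y in F if mul(y, y) == rhs)
    return total

N1 = count_affine_Fp() + 1   # +1 for the point at infinity
N2 = count_affine_Fp2() + 1
a = p + 1 - N1               # trace of Frobenius
# Weil: N_2 = p^2 + 1 - (alpha^2 + beta^2) with alpha + beta = a, alpha*beta = p.
assert N2 == p**2 + 1 - (a * a - 2 * p)
print(N1, N2, a)             # expect 9, 27, -3
```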
Alright, in some sense that was the “uninteresting” part, because it just uses a bunch of machines and is a known fact (there’s also a lot of stuff to fill in to the above sketch to finish the argument). Here’s the application of ${p}$-adic integration. Suppose ${X}$ and ${Y}$ are smooth birational minimal models over ${\mathbb{C}}$ (for simplicity we’ll assume they are Calabi-Yau, Ito shows how to get around not necessarily having a non-vanishing top form). I’ll just sketch this part as well, since there are some subtleties with making sure you don’t mess up too much in the process. We can “spread out” our varieties to get our setup in the beginning. Namely, there are proper models over some ${\mathcal{O}_K}$ (of course they aren’t smooth anymore), where the base change of the generic fibers are isomorphic to our original varieties. By standard birational geometry arguments, there is some big open locus (the complement has codimension greater than ${2}$) where these are isomorphic and this descends to our model as well. Now we are almost there. We have an etale isomorphism ${U\rightarrow V}$ over all but finitely many primes. If we choose nowhere vanishing top forms on the models, then the restrictions to the fibers are ${p}$-adic volume forms. But our standard trick works again here. The isomorphism ${U\rightarrow V}$ pulls back the volume form on ${Y}$ to a volume form on ${X}$ over all but finitely many primes and hence they differ by a function which has ${p}$-adic absolute value ${1}$ everywhere. Thus the two models have the same volume over all but finitely many primes, and as was pointed out last time the two must have the same number of ${\mathbb{F}_{q^r}}$-valued points over these primes since we can read this off from knowing the volume. The machinery says that we can now conclude the two smooth birational minimal models have the same Hodge numbers. I thought that was a pretty cool and unexpected application of this idea of ${p}$-adic volume. It is the only one I know of. I’d be interested if anyone knows of any other. I came across this idea a long time ago, but I needed the result that uses it in its proof again, so I was curious about figuring out what in the world is going on. It turns out that you can make “${p}$-adic measures” to integrate against on algebraic varieties. This is a pretty cool idea that I never would have guessed possible. I mean, maybe complex varieties or something, but over ${p}$-adic fields? Let’s start with a pretty standard setup in ${p}$-adic geometry. Let ${K/\mathbb{Q}_p}$ be a finite extension and ${R}$ the ring of integers of ${K}$. Let ${\mathbb{F}_q=R/\mathfrak{m}}$ be the residue field. If this scares you, then just take ${K=\mathbb{Q}_p}$ and ${R=\mathbb{Z}_p}$. Now let ${X\rightarrow Spec(R)}$ be a smooth scheme of relative dimension ${n}$. The picture to have in mind here is some smooth ${n}$-dimensional variety over a finite field ${X_0}$ as the closed fiber and a smooth characteristic ${0}$ version of this variety, ${X_\eta}$, as the generic fiber. This scheme is just interpolating between the two. Now suppose we have an ${n}$-form ${\omega\in H^0(X, \Omega_{X/R}^n)}$. We want to say what it means to integrate against this form. Let ${|\cdot |_p}$ be the normalized ${p}$-adic absolute value on ${K}$. We want to consider the ${p}$-adic topology on the set of ${R}$-valued points ${X(R)}$. This can be a little weird if you haven’t done it before. It is a totally disconnected, compact space. 
The idea for the definition is the exact naive way of converting the definition from a manifold to this setting. Consider some point ${s\in X(R)}$. Locally in the ${p}$-adic topology we can find a “disk” containing ${s}$. This means there is some open ${U}$ about ${s}$ together with a ${p}$-adic analytic isomorphism ${U\rightarrow V\subset R^n}$ to some open. In the usual way, we now have a choice of local coordinates ${x=(x_i)}$. This means we can write ${\omega|_U=fdx_1\wedge\cdots \wedge dx_n}$ where ${f}$ is ${p}$-adic analytic on ${V}$. Now we just define $\displaystyle \int_U \omega= \int_V |f(x)|_p dx_1 \cdots dx_n.$ Now maybe it looks like we’ve converted this to another weird ${p}$-adic integration problem that we don’t know how to do, but the right hand side makes sense because ${R^n}$ is a compact topological group so we integrate with respect to the normalized Haar measure. Now we’re done, because modulo standard arguments that everything patches together we can define ${\int_X \omega}$ in terms of these local patches (the reason for being able to patch without bump functions will be clear in a moment, but roughly on overlaps the form will differ by a unit with absolute value ${1}$). This allows us to define a “volume form” for smooth ${p}$-adic schemes. We will call an ${n}$-form a volume form if it is nowhere vanishing (i.e. it trivializes ${\Omega^n}$). You might be scared that the volume you get by integrating isn’t well-defined. After all, on a real manifold you can just scale a non-vanishing ${n}$-form to get another one, but the integral will be scaled by that constant. We’re in luck here, because if ${\omega}$ and ${\omega'}$ are both volume forms, then there is some non-vanishing function such that ${\omega=f\omega'}$. Since ${f}$ is never ${0}$, it is invertible, and hence is a unit. This means ${|f(x)|_p=1}$, so since we can only get other volume forms by scaling by a function with ${p}$-adic absolute value ${1}$ everywhere, the volume is a well-defined notion under this definition! (A priori, there could be a bunch of “different” forms, though). It turns out to actually be a really useful notion as well. If we want to compute the volume of ${X/R}$, then there is a natural way to do it with our set-up. Consider the reduction mod ${\mathfrak{m}}$ map ${\phi: X(R)\rightarrow X(\mathbb{F}_q)}$. The fiber over any point is a ${p}$-adic open set, and they partition ${X(R)}$ into a disjoint union of ${|X(\mathbb{F}_q)|}$ mutually isomorphic sets (recall the reduction map is surjective here by the relevant variant on Hensel’s lemma). Fix one point ${x_0\in X(\mathbb{F}_q)}$, and define ${U:=\phi^{-1}(x_0)}$. Then by the above analysis we get $\displaystyle Vol(X)=\int_X \omega=|X(\mathbb{F}_q)|\int_{U}\omega$ All we have to do is compute this integral over one open now. By our smoothness hypothesis, we can find a regular system of parameters ${x_1, \ldots, x_n\in \mathcal{O}_{X, x_0}}$. This is a legitimate choice of coordinates because they define a ${p}$-adic analytic isomorphism with ${\mathfrak{m}^n\subset R^n}$. Now we use the same silly trick as before. Suppose ${\omega=fdx_1\wedge \cdots \wedge dx_n}$, then since ${\omega}$ is a volume form, ${f}$ can’t vanish and hence ${|f(x)|_p=1}$ on ${U}$. 
Thus $\displaystyle \int_{U}\omega=\int_{\mathfrak{m}^n}dx_1\cdots dx_n=\frac{1}{q^n}$ This tells us that no matter what ${X/R}$ is, if there is a volume form (which often there isn’t), then the volume $\displaystyle Vol(X)=\frac{|X(\mathbb{F}_q)|}{q^n}$ just suitably multiplies the number of ${\mathbb{F}_q}$-rational points there are by a factor dependent on the size of the residue field and the dimension of ${X}$. Next time we’ll talk about the one place I know of that this has been a really useful idea. # BSD for a Large Class of Elliptic Curves I’m giving up on the p-divisible group posts for a while. It would be too technical and tedious to write anything interesting about enlarging the base. It is pretty fascinating stuff, but not blog material at the moment. I’ve been playing around with counting fibration structures on K3 surfaces, and I just noticed something I probably should have been aware of for a long time. This is totally well-known, but I’ll give a slightly anachronistic presentation so that we can use results from 2013 to prove the Birch and Swinnerton-Dyer conjecture!! … Well, only in a case that has been known since 1973 when it was published by Artin and Swinnerton-Dyer. Let’s recall the Tate conjecture for surfaces. Let ${k}$ be a finite field and ${X/k}$ a smooth, projective surface. We’ve written this down many times now, but the long exact sequence associated to the Kummer sequence $\displaystyle 0\rightarrow \mu_{\ell}\rightarrow \mathbb{G}_m\rightarrow \mathbb{G}_m\rightarrow 0$ (for ${\ell\neq \text{char}(k)}$) gives us a cycle class map $\displaystyle c_1: Pic(X_{\overline{k}})\otimes \mathbb{Q}_{\ell}\rightarrow H^2_{et}(X_{\overline{k}}, \mathbb{Q}_\ell(1))$ In fact, we could take Galois invariants to get our standard $\displaystyle 0\rightarrow Pic(X)\otimes \mathbb{Q}_{\ell}\rightarrow H^2_{et}(X_{\overline{k}}, \mathbb{Q}_\ell(1))^G\rightarrow Br(X)[\ell^\infty]\rightarrow 0$ The Tate conjecture is in some sense the positive characteristic version of the Hodge conjecture. It conjectures that the first map is surjective. In other words, whenever an ${\ell}$-adic class “looks like” it could come from an honest geometric thing, then it does. But if the Tate conjecture is true, then this implies the ${\ell}$-primary part of ${Br(X)}$ is finite. We could spend some time worrying about independence of ${\ell}$, but it works, and hence the Tate conjecture is actually equivalent to finiteness of ${Br(X)}$. Suppose now that ${X}$ is an elliptic K3 surface. This just means that there is a flat map ${X\rightarrow \mathbb{P}^1}$ where the fibers are elliptic curves (there are some degenerate fibers, but after some heavy machinery we could always put this into some nice form, we’re sketching an argument here so we won’t worry about the technical details of what we want “fibration” to mean). The generic fiber ${X_\eta}$ is a genus ${1}$ curve that does not necessarily have a rational point and hence is not necessarily an elliptic curve. But we can just use a relative version of the Jacobian construction to produce a new fibration ${J\rightarrow \mathbb{P}^1}$ where ${J}$ is a K3 surface fiberwise isomorphic to ${X}$, but now ${J_\eta=Jac(X_\eta)}$ and hence is an elliptic curve. Suppose we want to classify elliptic fibrations that have ${J}$ as the relative Jacobian. We have two natural ideas to do this. The first is that etale locally such a fibration is trivial, so you could consider all glueing data to piece such a thing together. 
The obstruction will be some Cech class that actually lives in ${H^2(X, \mathbb{G}_m)=Br(X)}$. In fancy language, you make these things as ${\mathbb{G}_m}$-gerbes which are just twisted relative moduli of sheaves. The class in ${Br(X)}$ is giving you the obstruction to the existence of a universal sheaf. A more number theoretic way to think about this is that rather than think about surfaces over ${k}$, we work with the generic fiber ${X_\eta/k(t)}$. It is well-known that the Weil-Chatelet group: ${H^1(Gal(k(t)^{sep}/k(t)), J_\eta)}$ gives you the possible genus ${1}$ curves that could occur as generic fibers of such fibrations. This group is way too big though, because we only want ones that are locally trivial everywhere (otherwise it won’t be a fibration). So it shouldn’t be surprising that the classification of such things is given by the Tate-Shafarevich group: Ш $\displaystyle (J_\eta /k(t))=ker ( H^1(G, J_\eta)\rightarrow \prod H^1(G_v, (J_\eta)_v))$ Very roughly, I’ve now given a heuristic argument (namely that they both classify the same set of things) that ${Br(X)\simeq}$ Ш ${(J_\eta)}$, and it turns out that Grothendieck proved the natural map that comes from the Leray spectral sequence ${Br(X)\rightarrow}$ Ш${(J_\eta)}$ is an isomorphism (this rigorous argument might actually have been easier than the heuristic one because we’ve computed everything involved in previous posts, but it doesn’t give you any idea why one might think they are the same). Theorem: If ${E/\mathbb{F}_q(t)}$ is an elliptic curve of height ${2}$ (occurring as the generic fiber of an elliptic K3 surface), then ${E}$ satisfies the Birch and Swinnerton-Dyer conjecture. Idea: Using the machinery alluded to before, we spread out ${E}$ to an elliptic K3 surface ${X\rightarrow \mathbb{P}^1}$ over a finite field. As of this year, it seems the Tate conjecture is true for K3 surfaces (the proofs are all there, I’m not sure if they have been double checked and published yet). Thus ${Br(X)}$ is finite. Thus Ш${ (E)}$ is finite. But now it is well-known that Ш${ (E)}$ being finite is equivalent to the Birch and Swinnerton-Dyer conjecture. # Newton Polygons of p-Divisible Groups I really wanted to move on from this topic, because the theory gets much more interesting when we move to ${p}$-divisible groups over some larger rings than just algebraically closed fields. Unfortunately, while looking over how Demazure builds the theory in Lectures on ${p}$-divisible Groups, I realized that it would be a crime to bring you this far and not concretely show you the power of thinking in terms of Newton polygons. As usual, let’s fix an algebraically closed field of positive characteristic to work over. I was vague last time about the anti-equivalence of categories between ${p}$-divisible groups and ${F}$-crystals mostly because I was just going off of memory. When I looked it up, I found out I was slightly wrong. Let’s compute some examples of some slopes. Recall that ${D(\mu_{p^\infty})\simeq W(k)}$ and ${F=p\sigma}$. In particular, ${F(1)=p\cdot 1}$, so in our ${F}$-crystal theory we get that the normalized ${p}$-adic valuation of the eigenvalue ${p}$ of ${F}$ is ${1}$. Recall that we called this the slope (it will become clear why in a moment). Our other main example was ${D(\mathbb{Q}_p/\mathbb{Z}_p)\simeq W(k)}$ with ${F=\sigma}$. In this case we have ${1}$ is “the” eigenvalue which has ${p}$-adic valuation ${0}$. 
These slopes totally determine the ${F}$-crystal up to isomorphism, and the category of ${F}$-crystals (with slopes in the range ${0}$ to ${1}$) is anti-equivalent to the category of ${p}$-divisible groups. The Dieudonné-Manin decomposition says that we can always decompose ${H=D(G)\otimes_W K}$ as a direct sum of vector spaces indexed by these slopes. For example, if I had a height three ${p}$-divisible group, ${H}$ would be three dimensional. If it decomposed as ${H_0\oplus H_1}$ where ${H_0}$ was ${2}$-dimensional (there is a repeated ${F}$-eigenvalue of slope ${0}$), then ${H_1}$ would be ${1}$-dimensional, and I could just read off that my ${p}$-divisible group must be isogenous to ${G\simeq \mu_{p^\infty}\oplus (\mathbb{Q}_p/\mathbb{Z}_p)^2}$. In general, since we have a decomposition ${H=H_0\oplus H' \oplus H_1}$ where ${H'}$ is the part with slopes strictly in ${(0,1)}$ we get a decomposition ${G\simeq (\mu_{p^\infty})^{r_1}\oplus G' \oplus (\mathbb{Q}_p/\mathbb{Z}_p)^{r_0}}$ where ${r_j}$ is the dimension of ${H_j}$ and ${G'}$ does not have any factors of those forms. This is where the Newton polygon comes in. We can visually arrange this information as follows. Put the slopes of ${F}$ in increasing order ${\lambda_1, \ldots, \lambda_r}$. Make a polygon in the first quadrant by plotting the points ${P_0=(0,0)}$, ${P_1=(\dim H_{\lambda_1}, \lambda_1 \dim H_{\lambda_1})}$, … , ${\displaystyle P_j=\left(\sum_{l=1}^j\dim H_{\lambda_l}, \sum_{l=1}^j \lambda_l\dim H_{\lambda_l}\right)}$. This might look confusing, but all it says is to get from ${P_{j-1}}$ to ${P_{j}}$ make a line segment of slope ${\lambda_j}$ and make the segment go to the right for ${\dim H_{\lambda_j}}$. This way you visually encode the slope with the actual slope of the segment, and the longer the segment is the bigger the multiplicity of that eigenvalue. But this way of encoding the information gives us something even better, because it turns out that all these ${P_i}$ must have integer coordinates (a highly non-obvious fact proved in the book by Demazure listed above). This greatly restricts our possibilities for Dieudonné ${F}$-crystals. Consider the height ${2}$ case. We have ${H}$ is two dimensional, so we have ${2}$ slopes (possibly the same). The maximal ${y}$ coordinate you could ever reach is if both slopes were maximal which is ${1}$. In that case you just get the line segment from ${(0,0)}$ to ${(2,2)}$. The lowest you could get is if the slopes were both ${0}$ in which case you get a line segment ${(0,0)}$ to ${(2,0)}$. Every other possibility must be a polygon between these two with integer breaking points and increasing order of slopes. Draw it (or if you want to cheat look below). You will see that there are obviously only two other possibilities. The one that goes ${(0,0)}$ to ${(1,0)}$ to ${(2,1)}$ which is a slope ${0}$ and slope ${1}$ and corresponds to ${\mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$ and the one that goes ${(0,0)}$ to ${(2,1)}$. This corresponds to a slope ${1/2}$ with multiplicity ${2}$. This corresponds to the ${E[p^\infty]}$ for supersingular elliptic curves. That recovers our list from last time. We now just have a bit of a game to determine all height ${3}$ ${p}$-divisible groups up to isogeny (and it turns out in this small height case that determines them up to isomorphism). 
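As a concrete version of this bookkeeping, here is a small sketch (the helper is hypothetical, not from the post) that builds the vertices $P_0, P_1, \ldots$ from a list of slopes with multiplicities and checks integrality of the breakpoints:

```python
# Sketch: Newton polygon vertices from (slope, multiplicity) pairs,
# slopes in increasing order, with a breakpoint-integrality check.
from fractions import Fraction

def newton_polygon(slopes):
    pts = [(0, Fraction(0))]
    for lam, mult in slopes:
        x, y = pts[-1]
        pts.append((x + mult, y + lam * mult))  # segment of slope lam, width mult
    return pts

# Height 2, supersingular E[p^infinity]: slope 1/2 with multiplicity 2.
print(newton_polygon([(Fraction(1, 2), 2)]))                 # (0,0) -> (2,1)
# Height 2, mu + Q_p/Z_p: slopes 0 and 1, each with multiplicity 1.
print(newton_polygon([(Fraction(0), 1), (Fraction(1), 1)]))  # (0,0) -> (1,0) -> (2,1)
# Height 3, G_{1/3}: triple slope 1/3; the breakpoints are still integral.
print(all(y.denominator == 1 for _, y in newton_polygon([(Fraction(1, 3), 3)])))
```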
You can just draw all the possibilities for Newton polygons as in the height ${2}$ case to see that the only ${6}$ possibilities are ${(\mu_{p^\infty})^3}$, ${(\mu_{p^\infty})^2\oplus \mathbb{Q}_p/\mathbb{Z}_p}$, ${\mu_{p^\infty}\oplus (\mathbb{Q}_p/\mathbb{Z}_p)^2}$, ${(\mathbb{Q}_p/\mathbb{Z}_p)^3}$, and then two others: ${G_{1/3}}$ which corresponds to the thing with a triple eigenvalue of slope ${1/3}$ and ${G_{2/3}}$ which corresponds to the thing with a triple eigenvalue of slope ${2/3}$. To finish this post (and hopefully topic!) let’s bring this back to elliptic curves one more time. It turns out that ${D(E[p^\infty])\simeq H^1_{crys}(E/W)}$. Without reminding you of the technical mumbo-jumbo of crystalline cohomology, let’s think why this might be reasonable. We know ${E[p^\infty]}$ is always height ${2}$, so ${D(E[p^\infty])}$ is rank ${2}$. But if we consider that crystalline cohomology should be some sort of ${p}$-adic cohomology theory that “remembers topological information” (whatever that means), then we would guess that some topological ${H^1}$ of a “torus” should be rank ${2}$ as well. Moreover, the crystalline cohomology comes with a natural Frobenius action. But if we believe there is some sort of Weil conjecture magic that also applies to crystalline cohomology (I mean, it is a Weil cohomology theory), then we would have to believe that the product of the eigenvalues of this Frobenius equals ${p}$. Recall in the “classical case” that the characteristic polynomial has the form ${x^2-a_px+p}$. So there are actually only two possibilities in this case, both slope ${1/2}$ or one of slope ${1}$ and the other of slope ${0}$. As we’ve noted, these are the two that occur. In fact, this is a more general phenomenon. When thinking about ${p}$-divisible groups arising from algebraic varieties, because of these Weil conjecture type considerations, the Newton polygons must actually fit into much narrower regions and sometimes this totally forces the whole thing. For example, the enlarged formal Brauer group of an ordinary K3 surface has height ${22}$, but the whole Newton polygon is fully determined by having to fit into a certain region and knowing its connected component. # More Classification of p-Divisible Groups Today we’ll look a little more closely at ${A[p^\infty]}$ for abelian varieties and finish up a different sort of classification that I’ve found more useful than the one presented earlier as triples ${(M,F,V)}$. For safety we’ll assume ${k}$ is algebraically closed of characteristic ${p>0}$ for the remainder of this post. First, let’s note that we can explicitly describe all ${p}$-divisible groups over ${k}$ up to isomorphism (of any dimension!) up to height ${2}$ now. This is basically because height puts a pretty tight constraint on dimension: ${ht(G)=\dim(G)+\dim(G^D)}$. If we want to make this convention, we’ll say ${ht(G)=0}$ if and only if ${G=0}$, but I’m not sure it is useful anywhere. For ${ht(G)=1}$ we have two cases: If ${\dim(G)=0}$, then its dual must be the unique connected ${p}$-divisible group of height ${1}$, namely ${\mu_{p^\infty}}$ and hence ${G=\mathbb{Q}_p/\mathbb{Z}_p}$. The other case we just said was ${\mu_{p^\infty}}$. For ${ht(G)=2}$ we finally get something a little more interesting, but not too much more. From the height ${1}$ case we know that we can make three such examples: ${(\mu_{p^\infty})^{\oplus 2}}$, ${\mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$, and ${(\mathbb{Q}_p/\mathbb{Z}_p)^{\oplus 2}}$. 
These are dimensions ${2}$, ${1}$, and ${0}$ respectively. The first and last are dual to each other and the middle one is self-dual. Last time we said there was at least one more: ${E[p^\infty]}$ for a supersingular elliptic curve. This was self-dual as well and the unique one-dimensional connected height ${2}$ ${p}$-divisible group. Now just playing around with the connected-étale decomposition, duals, and numerical constraints we get that this is the full list! If we could get a bit better feel for the weird supersingular ${E[p^\infty]}$ case, then we would have a really good understanding of all ${p}$-divisible groups up through height ${2}$ (at least over algebraically closed fields). There is an invariant called the ${a}$-number for abelian varieties defined by ${a(A)=\dim Hom(\alpha_p, A[p])}$. This essentially counts the number of copies of ${\alpha_p}$ sitting inside the truncated ${p}$-divisible group. Let’s consider the elliptic curve case again. If ${E/k}$ is ordinary, then we know ${E[p]}$ explicitly and hence can argue that ${a(E)=0}$. For the supersingular case we have that ${E[p]}$ is actually a non-split semi-direct product of ${\alpha_p}$ by itself and we get that ${a(E)=1}$. This shows that the ${a}$-number is an invariant that is equivalent to knowing ordinary/supersingular. This is a phenomenon that generalizes. For an abelian variety ${A/k}$ we get that ${A}$ is ordinary if and only if ${a(A)=0}$ in which case the ${p}$-divisible group is a bunch of copies of ${E[p^\infty]}$ for an ordinary elliptic curve, i.e. ${A[p^\infty]\simeq E[p^\infty]^g}$. On the other hand, ${A}$ is supersingular if and only if ${A[p^\infty]\simeq E[p^\infty]^g}$ for ${E/k}$ supersingular (these two facts are pretty easy if you use the ${p}$-rank as the definition of ordinary and supersingular because it tells you the étale part and you mess around with duals and numerics again). Now that we’ve beaten that dead horse beyond recognition, I’ll point out one more type of classification which is the one that comes up most often for me. In general, there is not redundant information in the triple ${(M, F, V)}$, but for special classes of ${p}$-divisible groups (for example the ones I always work with explained here) all you need to remember is the ${(M, F)}$ to recover ${G}$ up to isomorphism. A pair ${(M,F)}$ of a free, finite rank ${W}$-module equipped with a ${\phi}$-linear endomorphism ${F}$ is sometimes called a Cartier module or ${F}$-crystal. Every Dieudonné module of a ${p}$-divisible group is an example of one of these. We could also consider ${H=M\otimes_W K}$ where ${K=Frac(W)}$ to get a finite dimensional vector space in characteristic ${0}$ with a ${\phi}$-linear endomorphism preserving the ${W}$-lattice ${M\subset H}$. Passing to this vector space we would expect to lose some information and this is usually called the associated ${F}$-isocrystal. But doing this gives us a beautiful classification theorem which was originally proved by Dieudonné and Manin. We have that ${H}$ is naturally an ${A}$-module where ${A=K[T]}$ is the noncommutative polynomial ring with ${T\cdot a=\phi(a)\cdot T}$. The classification is to break up ${H\simeq \oplus H_\alpha}$ into a slope decomposition. These ${\alpha}$ are just rational numbers corresponding to the slopes of the ${F}$ operator. The eigenvalues ${\lambda_1, \ldots, \lambda_n}$ of ${F}$ are not necessarily well-defined, but if we pick the normalized valuation ${ord(p)=1}$, then the valuations of the eigenvalues are well-defined. 
Knowing the slopes and multiplicities completely determines ${H}$ up to isomorphism, so we can completely capture the information of ${H}$ in a simple Newton polygon. Note that when ${H}$ is the ${F}$-isocrystal of some Dieudonné module, then the relation ${FV=VF=p}$ forces all slopes to be between 0 and 1. Unfortunately, knowing ${H}$ up to isomorphism only determines ${M}$ up to equivalence. This equivalence is easily seen to be the same as an injective map ${M\rightarrow M'}$ whose cokernel is a torsion ${W}$-module (that way it becomes an isomorphism when tensoring with ${K}$). But then by the anti-equivalence of categories two ${p}$-divisible groups (in the special subcategory that allows us to drop the ${V}$) ${G}$ and ${G'}$ have equivalent Dieudonné modules if and only if there is a surjective map ${G' \rightarrow G}$ whose kernel is finite, i.e. ${G}$ and ${G'}$ are isogenous as ${p}$-divisible groups. Despite the annoying subtlety in fully determining ${G}$ up to isomorphism, this is still really good. It says that just knowing the valuation of some eigenvalues of an operator on a finite dimensional characteristic ${0}$ vector space allows us to recover ${G}$ up to isogeny. # A Quick User’s Guide to Dieudonné Modules of p-Divisible Groups Last time we saw that if we consider a ${p}$-divisible group ${G}$ over a perfect field of characteristic ${p>0}$, there wasn’t a whole lot of information that went into determining it up to isomorphism. Today we’ll make this precise. It turns out that up to isomorphism we can translate ${G}$ into a small amount of (semi-)linear algebra. I’ve actually discussed this before here. But let’s not get bogged down in the details of the construction. The important thing is to see how to use this information to milk out some interesting theorems fairly effortlessly. Let’s recall a few things. The category of ${p}$-divisible groups is (anti-)equivalent to the category of Dieudonné modules. We’ll denote this functor ${G\mapsto D(G)}$. Let ${W:=W(k)}$ be the ring of Witt vectors of ${k}$ and ${\sigma}$ be the natural Frobenius map on ${W}$. There are only a few important things that come out of the construction from which you can derive tons of facts. First, the data of a Dieudonné module is a free ${W}$-module, ${M}$, of finite rank with a Frobenius ${F: M\rightarrow M}$ which is ${\sigma}$-linear and a Verschiebung ${V: M\rightarrow M}$ which is ${\sigma^{-1}}$-linear satisfying ${FV=VF=p}$. Fact 1: The rank of ${D(G)}$ is the height of ${G}$. Fact 2: The dimension of ${G}$ is the dimension of ${D(G)/FD(G)}$ as a ${k}$-vector space (dually, the dimension of ${D(G)/VD(G)}$ is the dimension of ${G^D}$). Fact 3: ${G}$ is connected if and only if ${F}$ is topologically nilpotent (i.e. ${F^nD(G)\subset pD(G)}$ for ${n>>0}$). Dually, ${G^D}$ is connected if and only if ${V}$ is topologically nilpotent. Fact 4: ${G}$ is étale if and only if ${F}$ is bijective. Dually, ${G^D}$ is étale if and only if ${V}$ is bijective. These facts alone allow us to really get our hands dirty with what these things look like and how to get facts back about ${G}$ using linear algebra. Let’s compute the Dieudonné modules of the two “standard” ${p}$-divisible groups: ${\mu_{p^\infty}}$ and ${\mathbb{Q}_p/\mathbb{Z}_p}$ over ${k=\mathbb{F}_p}$ (recall in this situation that ${W(k)=\mathbb{Z}_p}$). 
Before starting, we know that the standard Frobenius ${F(a_0, a_1, \ldots, )=(a_0^p, a_1^p, \ldots)}$ and Verschiebung ${V(a_0, a_1, \ldots, )=(0, a_0, a_1, \ldots )}$ satisfy the relations to make a Dieudonné module (the relations are a little tricky to check because constant multiples ${c\cdot (a_0, a_1, \ldots )}$ for ${c\in W}$ involve Witt multiplication and should be done using universal properties). In this case ${F}$ is bijective so the corresponding ${G}$ must be étale. Also, ${VW\subset pW}$ so ${V}$ is topologically nilpotent which means ${G^D}$ is connected. Thus we have a height one, étale ${p}$-divisible group with one-dimensional, connected dual which means that ${G=\mathbb{Q}_p/\mathbb{Z}_p}$. Now we’ll do ${\mu_{p^\infty}}$. Fact 1 tells us that ${D(\mu_{p^\infty})\simeq \mathbb{Z}_p}$ because it has height ${1}$. We also know that ${F: \mathbb{Z}_p\rightarrow \mathbb{Z}_p}$ must have the property that ${\mathbb{Z}_p/F(\mathbb{Z}_p)=\mathbb{F}_p}$ since ${\mu_{p^\infty}}$ has dimension ${1}$. Thus ${F=p\sigma}$ and hence ${V=\sigma^{-1}}$. The proof of the anti-equivalence proceeds by working at finite stages and taking limits. So it turns out that the theory encompasses a lot more at the finite stages because ${\alpha_{p^n}}$ are perfectly legitimate finite, ${p}$-power rank group schemes (note the system does not form a ${p}$-divisible group because multiplication by ${p}$ is the zero morphism). Of course taking the limit ${\alpha_{p^\infty}}$ is also a formal ${p}$-torsion group scheme. If we wanted to we could build the theory of Dieudonné modules to encompass these types of things, but in the limit process we would have finite ${W}$-modules which are not necessarily free and we would get an extra “Fact 5” that ${D(G)}$ is free if and only if ${G}$ is ${p}$-divisible. Let’s do two more things which are difficult to see without this machinery. For these two things we’ll assume ${k}$ is algebraically closed. There is a unique connected, ${1}$-dimensional ${p}$-divisible group of height ${h}$ over ${k}$. I imagine without Dieudonné theory this would be quite difficult, but it just falls right out by playing with these facts. Since ${D(G)/FD(G)\simeq k}$ we can choose a basis, ${D(G)=We_1\oplus \cdots \oplus We_h}$, so that ${F(e_j)=e_{j+1}}$ and ${F(e_h)=pe_1}$. Up to change of coordinates, this is the only way that eventually ${F^nD(G)\subset pD(G)}$ (in fact ${h}$ is the smallest such ${n}$: ${F^hD(G)\subset pD(G)}$). This also determines ${V}$ (note these two things need to be justified, I’m just asserting it here). But all the phrase “up to change of coordinates” means is that any other such ${(D(G'),F',V')}$ will be isomorphic to this one and hence by the equivalence of categories ${G\simeq G'}$. Suppose that ${E/k}$ is an elliptic curve. Now we can determine ${E[p^\infty]}$ up to isomorphism as a ${p}$-divisible group, a task that seemed out of reach last time. We know that ${E[p^\infty]}$ always has height ${2}$ and dimension ${1}$. In previous posts, we saw that for an ordinary ${E}$ we have ${E[p^\infty]^{et}\simeq \mathbb{Q}_p/\mathbb{Z}_p}$ (we calculated the reduced part by using flat cohomology, but I’ll point out why this step isn’t necessary in a second). Thus for an ordinary ${E/k}$ we get that ${E[p^\infty]\simeq E[p^\infty]^0\oplus \mathbb{Q}_p/\mathbb{Z}_p}$ by the connected-étale decomposition. But height and dimension considerations tell us that ${E[p^\infty]^0}$ must be the unique height ${1}$, connected, ${1}$-dimensional ${p}$-divisible group, i.e. 
${\mu_{p^\infty}}$. But of course we’ve been saying this all along: ${E[p^\infty]\simeq \mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$. If ${E/k}$ is supersingular, then we’ve also calculated previously that ${E[p^\infty]^{et}=0}$. Thus by the connected-étale decomposition we get that ${E[p^\infty]\simeq E[p^\infty]^0}$ and hence must be the unique, connected, ${1}$-dimensional ${p}$-divisible group of height ${2}$. For reference, since ${ht(G)=\dim(G)+\dim(G^D)}$ we see that ${G^D}$ is also of dimension ${1}$ and height ${2}$. If it had an étale part, then it would have to be ${\mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$ again, so ${G^D}$ must be connected as well and hence is the unique such group, i.e. ${G\simeq G^D}$. It is connected with connected dual. This gives us our first non-obvious ${p}$-divisible group since it is not just some split extension of ${\mu_{p^\infty}}$‘s and ${\mathbb{Q}_p/\mathbb{Z}_p}$‘s. If we hadn’t done these previous calculations, then we could still have gotten these results by a slightly more general argument. Given an abelian variety ${A/k}$ we have that ${A[p^\infty]}$ is a ${p}$-divisible group of height ${2g}$ where ${g=\dim A}$. Using Dieudonné theory we can abstractly argue that ${A[p^\infty]^{et}}$ must have height less than or equal to ${g}$. So in the case of an elliptic curve it is ${1}$ or ${0}$ corresponding to the ordinary or supersingular case respectively, and the proof would be completed because ${\mathbb{Q}_p/\mathbb{Z}_p}$ is the unique étale, height ${1}$, ${p}$-divisible group. # p-Divisible Groups Revisited 1 I’ve posted about ${p}$-divisible groups all over the place over the past few years (see: here, here, and here). I’ll just do a quick recap here on the “classical setting” to remind you of what we know so far. This will kick-start a series on some more subtle aspects I’d like to discuss which are kind of scary at first. Suppose ${G}$ is a ${p}$-divisible group over ${k}$, a perfect field of characteristic ${p>0}$. We can be extremely explicit in classifying all such objects. Recall that ${G}$ is just an injective limit of group schemes ${G=\varinjlim G_\nu}$ where we have an exact sequence ${0\rightarrow G_\nu \rightarrow G_{\nu+1}\stackrel{p^\nu}{\rightarrow} G_{\nu+1}}$ and there is a fixed integer ${h}$ such that group schemes ${G_{\nu}}$ are finite of rank ${p^{\nu h}}$. As a corollary to the standard connected-étale sequence for group schemes we get a canonical decomposition called the connected-étale sequence: $\displaystyle 0\rightarrow G^0 \rightarrow G \rightarrow G^{et} \rightarrow 0$ where ${G^0}$ is connected and ${G^{et}}$ is étale. Since ${k}$ was assumed to be perfect, this sequence actually splits. Thus ${G}$ is a semi-direct product of an étale ${p}$-divisible group and a connected ${p}$-divisible group. If you’ve seen the theory for finite, flat group schemes, then you’ll know that we usually decompose these two categories even further so that we get a piece that is connected with connected dual, connected with étale dual, étale with connected dual, and étale with étale dual. The standard examples to keep in mind for these four categories are ${\alpha_p}$, ${\mu_p}$, ${\mathbb{Z}/p}$, and ${\mathbb{Z}/\ell}$ for ${\ell\neq p}$ respectively. When we restrict ourselves to ${p}$-divisible groups the last category can’t appear in the decomposition of ${G_\nu}$ (since étale things are dimension 0, if something and its dual are both étale, then it would have to have height 0). 
I think it is not a priori clear, but the four category decomposition is a direct sum decomposition, and hence in this case we get that ${G\simeq G^0\oplus G^{et}}$ giving us a really clear idea of what these things look like. As usual we can describe étale group schemes in a nice way because they are just constant after base change. Thus the functor ${G^{et}\mapsto G^{et}(\overline{k})}$ is an equivalence of categories between étale ${p}$-divisible groups and the category of inverse systems of ${Gal(\overline{k}/k)}$-sets of order ${p^{\nu h}}$. Thus, after sufficient base change, we get an abstract isomorphism with the constant group scheme ${\prod \mathbb{Q}_p/\mathbb{Z}_p}$ for some product (for the ${p}$-divisible group case it will be a finite direct sum). All we have left now is to describe the possibilities for ${G^0}$, but this is a classical result as well. There is an equivalence of categories between the category of divisible, commutative, formal Lie groups and connected ${p}$-divisible groups given simply by taking the colimit of the ${p^n}$-torsion ${A\mapsto \varinjlim A[p^n]}$. The canonical example to keep in mind is ${\varinjlim \mathbb{G}_m[p^n]=\mu_{p^\infty}}$. This is connected only because in characteristic ${p}$ we have ${(x^p-1)=(x-1)^p}$, so ${\mu_{p^n}=Spec(k[x]/(x-1)^{p^n})}$. In any other characteristic this group scheme would be étale and totally disconnected. This brings us to the first subtlety which can cause a lot of confusion because of the abuse of notation. A few times ago we talked about the fact that ${E[p]}$ for an elliptic curve was either ${\mathbb{Z}/p}$ or ${0}$ depending on whether or not it was ordinary or supersingular (respectively). It is dangerous to write this, because here we mean ${E}$ as a group (really ${E(\overline{k})}$) and ${E[p]}$ the ${p}$-torsion in this group. When talking about the ${p}$-divisible group ${E[p^\infty]=\varinjlim E[p^n]}$ we are referring to ${E/k}$ as a group scheme and ${E[p^n]}$ as the (always!) non-trivial, finite, flat group scheme which is the kernel of the isogeny ${p^n: E\rightarrow E}$. The first way kills off the infinitesimal part so that we are just left with some nice reduced thing, and that’s why we can get ${0}$, because for a supersingular elliptic curve the group scheme ${E[p^n]}$ is purely infinitesimal, i.e. has trivial étale part. Recall also that we pointed out that ${E[p]\simeq \mathbb{Z}/p}$ for an ordinary elliptic curve by using some flat cohomology trick. But this trick is only telling us that the reduced group is cyclic of order ${p}$, but it does not tell us the scheme structure. In fact, in this case ${E[p^n]\simeq \mu_{p^n}\oplus \mathbb{Z}/p^n}$ giving us ${E[p^\infty]\simeq \mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$. So this is a word of warning that when working these things out you need to be very careful that you understand whether or not you are figuring out the full group scheme structure or just reduced part. It can be hard to tell sometimes. # Frobenius Semi-linear Algebra 2 Recall our setup. We have an algebraically closed field ${k}$ of characteristic ${p>0}$. We let ${V}$ be a finite dimensional ${k}$-vector space and ${\phi: V\rightarrow V}$ a ${p}$-linear map. Last time we left unfinished the Jordan decomposition that says that ${V=V_s\oplus V_n}$ where the two components are stable under ${\phi}$ and ${\phi}$ acts bijectively on ${V_s}$ and nilpotently on ${V_n}$. We then considered a strange consequence of what happens on the part on which it acts bijectively. 
If ${\phi}$ is bijective, then there always exists a full basis ${v_1, \ldots, v_n}$ that are fixed by ${\phi}$, i.e. ${\phi(v_i)=v_i}$. This is strange indeed, because in linear algebra this would force our operator to be the identity. There is one more slightly more disturbing consequence of this. If ${\phi}$ is bijective, then ${\phi-Id}$ is always surjective. This is a trivial consequence of having a fixed basis. Let ${w\in V}$. We want to find some ${z}$ such that ${\phi(z)=w}$. Well, we just construct the coefficients in the fixed basis by hand. We know ${w=\sum c_i v_i}$ for some ${c_i\in k}$. If ${z=\sum a_i v_i}$ really satisfies ${\phi(z)-z=w}$, then by comparing coefficients such an element exists if and only if we can solve ${a_i^p-a_i=c_i}$. These are just polynomial equations, so we can solve this over our algebraically closed field to get our coefficients. Strangely enough we really require algebraically closed and not merely perfect again, but the papers I’ve been reading explicitly require these facts over finite fields. Since they don’t give any references at all and just call these things “standard facts about ${p}$-linear algebra,” I’m not sure if there is a less stupid way to prove these things which work for arbitrary perfect fields. This is why you should give citations for things you don’t prove!! Why do I call this disturbing? Well, these maps really do appear when doing long exact sequences in cohomology. Last time we saw that we could prove that ${E[p]\simeq \mathbb{Z}/p}$ for an ordinary elliptic curve from computing the kernel of ${C-Id}$ where ${C}$ was the Cartier operator. But we have to be really, really careful to avoid linear algebra tricks when these maps come up, because in this situation we have ${\phi -Id}$ is always a surjective map between finite dimensional vector spaces of the same dimension, but also always has a non-trivial kernel isomorphic to ${\mathbb{Z}/p\oplus \cdots \oplus \mathbb{Z}/p}$ where the number of factors is the dimension of ${V}$. Even though we have a surjective map in the long exact sequence between vector spaces of the same dimension, we cannot conclude that it is bijective! Since everything we keep considering as real-life examples of semi-linear algebra has automatically been bijective (i.e. no nilpotent part), I haven’t actually been too concerned with the Jordan decomposition. But we may as well discuss it to round out the theory since people who work with ${p}$-Lie algebras care … I think? The idea of the proof is simple and related to what we did last time. We look at iterates ${\phi^j}$ of our map. We get a descending chain ${\phi^j(V)\supset \phi^{j+1}(V)}$ and hence it stabilizes somewhere, since even though ${\phi}$ is not a linear map, the image is still a vector subspace of ${V}$. Let ${r}$ be the smallest integer such that ${\phi^r(V)=\phi^{r+1}(V)}$. This means that ${r}$ is also the smallest integer such that ${\ker\phi^r=\ker \phi^{r+1}}$. Now we just take as our definition ${V_s=\phi^r(V)}$ and ${V_n=\ker \phi^r}$. Now by definition we get everything we want. It is just the kernel/image decomposition and hence a direct sum. By the choice of ${r}$ we certainly get that ${\phi}$ maps ${V_s}$ to ${V_s}$ and ${V_n}$ to ${V_n}$. Also, ${\phi|_{V_s}}$ is bijective by construction. Lastly, if ${v\in V_n}$, then ${\phi^j(v)=0}$ for some ${0\leq j\leq r}$ and hence ${\phi}$ is nilpotent on ${V_n}$. This is what we wanted to show. Here’s how this comes up for ${p}$-Lie algebras. 
Suppose you have some Lie group ${G/k}$ with Lie algebra ${\mathfrak{g}}$. You have the standard ${p}$-power map which is ${p}$-linear on ${\mathfrak{g}}$. By the structure theorem above ${\mathfrak{g}\simeq \mathfrak{h}\oplus \mathfrak{f}}$. The Lie subalgebra ${\mathfrak{h}}$ is the part the ${p}$-power map acts bijectively on and is called the core of the Lie algebra. Let ${X_1, \ldots, X_d}$ be a fixed basis of the core. We get a nice combinatorial classification of the Lie subalgebras of ${\mathfrak{h}}$. Let ${V=Span_{\mathbb{F}_p}\langle X_1, \ldots, X_d\rangle}$. The Lie subalgebras of ${\mathfrak{h}}$ are in bijective correspondence with the vector subspaces of ${V}$. In particular, the number of Lie subalgebras is finite and each occurs as a direct summand. The proof of this fact is to just repeat the argument of the Jordan decomposition for a Lie subalgebra and look at coefficients of the fixed basis. # Frobenius Semi-linear Algebra: 1 Today I want to explain some "well-known" facts in semilinear algebra. Here's the setup. For safety we'll assume ${k}$ is algebraically closed of characteristic ${p>0}$ (but merely being perfect should suffice for the main point later). Let ${V}$ be a finite dimensional vector space over ${k}$. Consider some ${p}$-semilinear operator on ${V}$, say ${\phi: V\rightarrow V}$. The fact that we are working with ${p}$ instead of ${p^{-1}}$ is mostly to not scare people. I think ${p^{-1}}$ actually appears more often in the literature and the theory is equivalent by "dualizing." All this means is that it is an operator satisfying the usual linearity properties ${\phi(v+w)=\phi(v)+\phi(w)}$, etc., except for the scalar rule, where the scalar comes out raised to the ${p}$-th power: ${\phi(av)=a^p\phi(v)}$. This situation comes up surprisingly often in positive characteristic geometry, because often you want to analyze some long exact sequence in cohomology associated to a short exact sequence which involves the Frobenius map or the Cartier operator. The former will induce a ${p}$-linear map of vector spaces and the latter induces a ${p^{-1}}$-linear map. The facts we're going to look at I've found in three or so papers just saying "from a well-known fact about ${p^{-1}}$-linear operators…" I wish there was a book out there that developed this theory like a standard linear algebra text so that people could actually give references. The proof today is a modification of that given in Dieudonné's Lie Groups and Lie Hyperalgebras over a Field of Characteristic ${p>0}$ II (section 10). Let's start with an example. In the one-dimensional case we have the following ${\phi: k\rightarrow k}$. If the map is non-trivial, then it is bijective. More importantly we can just write down every one of these because if ${\phi(1)=a}$, then $\displaystyle \begin{array}{rcl} \phi(x) & = & \phi(x\cdot 1) \\ & = & x^p\phi(1) \\ & = & ax^p \end{array}$ In fact, we can always find some non-zero fixed element, because this amounts to solving ${ax^p-x=x(ax^{p-1}-1)=0}$, i.e. finding a solution to ${ax^{p-1}-1=0}$ which we can do by being algebraically closed. This element ${b}$ obviously serves as a basis for ${k}$, but to set up an analogy we also see that ${Span_{\mathbb{F}_p}(b)}$ are all of the fixed points of ${\phi}$. In general ${V}$ will break up into parts. The part that ${\phi}$ acts bijectively on will always have a basis of fixed elements whose ${\mathbb{F}_p}$-span consists of exactly the fixed points of ${\phi}$. 
Of course, this could never happen in linear algebra because finding a fixed basis implies the operator is the identity. Let’s start by proving this statement. Suppose ${\phi: V\rightarrow V}$ is a ${p}$-semilinear automorphism. We want to find a basis of fixed elements. We essentially mimic what we did before in a more complicated way. We induct on the dimension of ${V}$. If we can find a single ${v_1}$ fixed by ${\phi}$, then we would be done for the following reason. We kill off the span of ${v_1}$, then by the inductive hypothesis we can find ${v_2, \ldots, v_n}$ a fixed basis for the quotient. Together these make a fixed basis for all of ${V}$. Now we need to find a single fixed ${v_1}$ by brute force. Consider any non-zero ${w\in V}$. We start taking iterates of ${w}$ under ${\phi}$. Eventually they will become linearly dependent, so we consider ${w, \phi(w), \ldots, \phi^k(w)}$ for the minimal ${k}$ such that this is a linearly dependent set. This means we can find some coefficients that are not all ${0}$ for which ${\sum a_j \phi^j(w)=0}$. Let’s just see what must be true of some fictional ${v_1}$ in the span of these elements such that ${\phi(v_1)=v_1}$. Well, ${v_1=\sum b_j \phi^j(w)}$ must satisfy ${v_1=\phi(v_1)=\sum b_j^p \phi^{j+1}(w)}$. To make this easier to parse, let’s specialize to the case that ${k=3}$. This means that ${a_0 w+a_1\phi(w)+a_2\phi^2(w)=0}$ and by assumption the coefficient on this top power can’t be zero, so we rewrite the top power ${\phi^2(w)=-(a_0/a_2)w - (a_1/a_2)\phi(w)}$. The other equation is $\displaystyle \begin{array}{rcl} b_0w+b_1\phi(w) & = & b_0^p\phi(w)+b_1^p\phi^2(w)\\ & = & -(a_0/a_2)b_1^pw +(b_0^p-(a_1/a_2)b_1^p)\phi(w) \end{array}$ Comparing coefficients ${b_0=-(a_0/a_2)b_1^p}$ and then forward substituting ${b_1=-(a_0/a_2)^pb_1^{p^2}-(a_1/a_2)b_1^p}$. Ah, but we know the ${a_j}$ and this only involves the unknown ${b_1}$. So since ${k}$ is algebraically closed we can solve to find such a ${b_1}$. Then since we wrote all our other coefficients in terms of ${b_1}$ we actually can produce a fixed ${v_1}$ by brute force determining the coefficients of the vector in terms of our linear dependence coefficients. There was nothing special about ${k=3}$ here. In general, this trick will work because it only involves the fact that applying ${\phi}$ cycled the vectors forward by one which allows us to keep forward substituting all the equations from the comparison of coefficients to get everything in terms of the highest one including the highest one which transformed the problem into solving a single polynomial equation over our algebraically closed field. This completes the proof that if ${\phi}$ is bijective, then there is a basis of fixed vectors. The fact that ${V^\phi=Span_{\mathbb{F}_p}(v_1, \ldots, v_n)}$ is pretty easy after that. Of course, the ${\mathbb{F}_p}$-span is contained in the fixed points because by definition the prime subfield of ${k}$ is exactly the fixed elements of ${x\mapsto x^p}$. On the other hand, if ${c=\sum a_jv_j}$ is fixed, then ${c=\phi(c)=\sum a_j^p \phi(v_j)=\sum a_j^p v_j}$ shows that all the coefficients must be fixed by Frobenius and hence in ${\mathbb{F}_p}$. Here’s how this is useful. Recall the post on the fppf site. We said that if we wanted to understand the ${p}$-torsion of certain cohomology with coefficients in ${\mathbb{G}_m}$ (Picard group, Brauer group, etc), then we should look at the flat cohomology with coefficients in ${\mu_p}$. 
If we specialize to the case of curves we get an isomorphism ${H^1_{fl}(X, \mu_p)\simeq Pic(X)[p]}$. Recall the exact sequence at the end of that post. It told us that via the ${d\log}$ map ${H^1_{fl}(X, \mu_p)=ker(C-I)=H^0(X, \Omega^1)^C}$. Now we have a ridiculously complicated way to prove the following well-known fact. If ${E}$ is an ordinary elliptic curve over an algebraically closed field of characteristic ${p>0}$, then ${E[p]\simeq \mathbb{Z}/p}$. In fact, we can prove something slightly more general. By definition, a curve is of genus ${g}$ if ${H^0(X, \Omega^1)}$ is ${g}$-dimensional. We'll say ${X}$ is ordinary if the Cartier operator ${C}$ is a ${p^{-1}}$-linear automorphism (I'm already sweeping something under the rug, because to even think of the Cartier operator acting on this cohomology group we need a hypothesis like ordinary to naturally identify some cohomology groups). By the results in this post we know that the structure of ${H^0(X, \Omega^1)^C}$ as an abelian group is ${\mathbb{Z}/p\oplus \cdots \oplus \mathbb{Z}/p}$ where there are ${g}$ copies. Thus in more generality this tells us that ${Jac(X)[p]\simeq Pic(X)[p]\simeq H^0(X, \Omega^1)^C\simeq \mathbb{Z}/p\oplus \cdots \oplus \mathbb{Z}/p}$. In particular, since for an elliptic curve (genus 1) we have ${Jac(E)=E}$, this statement is exactly ${E[p]\simeq \mathbb{Z}/p}$. This point is a little silly, because Silverman seems to just use this as the definition of an ordinary elliptic curve. Hartshorne uses the Hasse invariant in which case it is quite easy to derive that the Cartier operator is an automorphism (proof: it is Serre dual to the Frobenius which by the Hasse invariant definition is an automorphism). Using this definition, I'm actually not sure I've ever seen a derivation that ${E[p]\simeq \mathbb{Z}/p}$. I'd be interested if there is a lower-level way of seeing it than going through this flat cohomology argument (Silverman cites a paper of Deuring, but it's in German). # Serre-Tate Theory 2 I guess this will be the last post on this topic. I'll explain a tiny bit about what goes into the proof of this theorem and then why anyone would care that such canonical lifts exist. On the first point, there are tons of details that go into the proof. For example, Nick Katz's article, Serre-Tate Local Moduli, is 65 pages. It is quite good if you want to learn more about this. Also, Messing's book The Crystals Associated to Barsotti-Tate Groups is essentially building the machinery for this proof which is then knocked off in an appendix. So this isn't quick or easy by any means. On the other hand, I think the idea of the proof is fairly straightforward. Let's briefly recall last time. The situation is that we have an ordinary elliptic curve ${E_0/k}$ over an algebraically closed field of characteristic ${p>2}$. We want to understand ${Def_{E_0}}$, but in particular whether or not there is some distinguished lift to characteristic ${0}$ (this will be an element of ${Def_{E_0}(W(k))}$). To make the problem more manageable we consider the ${p}$-divisible group ${E_0[p^\infty]}$ attached to ${E_0}$. In the ordinary case this is the enlarged formal Picard group. It has height ${2}$, and its connected component is ${\widehat{Pic}_{E_0}\simeq\mu_{p^\infty}}$. There is a natural map ${Def_{E_0}\rightarrow Def_{E_0[p^\infty]}}$ just by mapping ${E/R \mapsto E[p^\infty]}$. Last time we said the main theorem was that this map is an isomorphism. 
To tie this back to the flat topology stuff, ${E_0[p^\infty]}$ is the group representing the functor ${A\mapsto H^1_{fl}(E_0\otimes A, \mu_{p^\infty})}$. The first step in proving the main theorem is to note two things. In the (split) connected-etale sequence $\displaystyle 0\rightarrow \mu_{p^\infty}\rightarrow E_0[p^\infty]\rightarrow \mathbb{Q}_p/\mathbb{Z}_p\rightarrow 0$ we have that ${\mu_{p^\infty}}$ is height one and hence rigid. We have that ${\mathbb{Q}_p/\mathbb{Z}_p}$ is etale and hence rigid. Thus given any deformation ${G/R}$ of ${E_0[p^\infty]}$ we can take the connected-etale sequence of this and see that ${G^0}$ is the unique deformation of ${\mu_{p^\infty}}$ over ${R}$ and ${G^{et}=\mathbb{Q}_p/\mathbb{Z}_p}$. Thus the deformation functor can be redescribed in terms of extension classes of two rigid groups ${R\mapsto Ext_R^1(\mathbb{Q}_p/\mathbb{Z}_p, \mu_{p^\infty})}$. Now we see what the canonical lift is. Supposing our isomorphism of deformation functors, it is the lift that corresponds to the split and hence trivial extension class. So how do we actually check that this is an isomorphism? Like I said, it is kind of long and tedious. Roughly speaking you note that both deformation functors are prorepresentable by formally smooth objects of the same dimension. So we need to check that the differential is an isomorphism on tangent spaces. Here’s where some cleverness happens. You rewrite the differential as a composition of a whole bunch of maps that you know are isomorphisms. In particular, it is the following string of maps: The Kodaira-Spencer map ${T\stackrel{\sim}{\rightarrow} H^1(E_0, \mathcal{T})}$ followed by Serre duality (recall the canonical is trivial on an elliptic curve) ${H^1(E_0, \mathcal{T})\stackrel{\sim}{\rightarrow} Hom_k(H^1(E_0, \Omega^1), H^1(E_0, \mathcal{O}_{E_0}))}$. The hardest one was briefly mentioned a few posts ago and is the dlog map which gives an isomorphism ${H^2_{fl}(E_0, \mu_{p^\infty})\stackrel{\sim}{\rightarrow} H^1(E_0, \Omega^1)}$. Now noting that ${H^2_{fl}(E_0, \mu_{p^\infty})=\mathbb{Q}_p/\mathbb{Z}_p}$ and that ${T_0\mu_{p^\infty}\simeq H^1(E_0, \mathcal{O}_{E_0})}$ gives us enough compositions and isomorphisms that we get from the tangent space of the versal deformation of ${E_0}$ to the tangent space of the versal deformation of ${E_0[p^\infty]}$. As you might guess, it is a pain to actually check that this is the differential of the natural map (and in fact involves further decomposing those maps into yet other ones). It turns out to be the case and hence ${Def_{E_0}\rightarrow Def_{E_0[p^\infty]}}$ is an isomorphism and the canonical lift corresponds to the trivial extension. But why should we care? It turns out the geometry of the canonical lift is very special. This may not be that impressive for elliptic curves, but this theory all goes through for any ordinary abelian variety or K3 surface where it is much more interesting. It turns out that you can choose a nice set of coordinates (“canonical coordinates”) on the base of the versal deformation and a basis of the de Rham cohomology of the family that is adapted to the Hodge filtration such that in these coordinates the Gauss-Manin connection has an explicit and nice form. Also, the canonical lift admits a lift of the Frobenius which is also nice and compatible with how it acts on the above chosen basis on the de Rham cohomology. These coordinates are what give the base of the versal deformation the structure of a formal torus (product of ${\widehat{\mathbb{G}_m}}$‘s). 
One can then exploit all this nice structure to prove large open problems like the Tate conjecture in the special cases of the class of varieties that have these canonical lifts.
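As a footnote to the Frobenius semi-linear algebra posts above: the role of algebraic closedness is easy to see by direct computation. Below is a small toy check of my own (not from the original posts) in GF(4), the field with four elements. For ${\phi(x)=a x^2}$ the fixed points form an ${\mathbb{F}_2}$-line exactly as claimed, but ${\phi-Id}$ fails to be surjective over this finite field, which is precisely why the surjectivity argument needs an algebraically closed base.

```python
# Toy model of p-linear algebra in GF(4) = F_2[w]/(w^2 + w + 1).
# Elements are ints 0..3: bit 0 = constant term, bit 1 = coefficient of w.

def add(x, y):
    return x ^ y            # addition is XOR in characteristic 2

def mul(x, y):              # multiply polynomials modulo w^2 = w + 1
    a0, a1 = x & 1, x >> 1
    b0, b1 = y & 1, y >> 1
    c0 = (a0 & b0) ^ (a1 & b1)               # constant term
    c1 = (a0 & b1) ^ (a1 & b0) ^ (a1 & b1)   # coefficient of w
    return c0 | (c1 << 1)

def frob(x):                # Frobenius x -> x^2
    return mul(x, x)

GF4 = range(4)
for a in range(1, 4):
    phi = {x: mul(a, frob(x)) for x in GF4}  # phi(x) = a * x^2, a 2-linear map
    fixed = [x for x in GF4 if phi[x] == x]  # should be the F_2-span of a^{-1}
    image = {add(phi[x], x) for x in GF4}    # image of phi - id (char 2: -1 = +1)
    print(f"a={a}: fixed points {fixed}, (phi - id) surjective? {image == set(GF4)}")
```

Each run reports a two-element fixed-point set ${\{0, a^{-1}\}}$ and `surjective? False`; over the algebraic closure the map ${\phi-Id}$ would become surjective, with kernel growing to the full ${\mathbb{F}_2}$-span of a fixed basis.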
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 746, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9501590132713318, "perplexity": 147.89497977988447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049277475.33/warc/CC-MAIN-20160524002117-00061-ip-10-185-217-139.ec2.internal.warc.gz"}
https://www.cfd-online.com/Wiki/Vorticity_transport_equation
# Vorticity transport equation

The vorticity transport equation governs the evolution of the vorticity. It is obtained by taking the curl of the momentum equation

$\frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j} + \frac{1}{\rho} \frac{\partial p}{\partial x_i} = \frac{1}{\rho}\frac{\partial \tau_{ij}}{\partial x_j}$

For incompressible flow with kinematic viscosity $\nu$ the vorticity transport equation reduces to

$\frac{\partial \omega_i}{\partial t} + u_j \frac{\partial \omega_i}{\partial x_j} = \omega_j \frac{\partial u_i}{\partial x_j} + \nu \Delta \omega_i$

The term $\omega_j \partial u_i/\partial x_j$ on the right is the vortex-stretching term; in two dimensions it vanishes identically and the vorticity reduces to the scalar $\omega = \partial v/\partial x - \partial u/\partial y$.
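For readers who want to poke at this numerically, here is a minimal 2-D finite-difference sketch (my own construction, not part of the wiki page). The Taylor-Green vortex used below satisfies $\partial\omega/\partial t = -2\nu\omega$ exactly, so the discrete right-hand side of the transport equation should reproduce $-2\nu\omega$ up to $O(\Delta x^2)$:

```python
import numpy as np

n, L, nu = 64, 2 * np.pi, 1e-2
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
dx = L / n

# A smooth test velocity field: the Taylor-Green vortex
u = np.cos(X) * np.sin(Y)
v = -np.sin(X) * np.cos(Y)

def ddx(f):  # centered differences on a periodic grid
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)

def ddy(f):
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)

w = ddx(v) - ddy(u)                   # scalar vorticity in 2-D
lap_w = ddx(ddx(w)) + ddy(ddy(w))     # Laplacian of w
dwdt = -(u * ddx(w) + v * ddy(w)) + nu * lap_w   # RHS of the transport equation

# For this field dw/dt = -2*nu*w exactly, so the residual is discretization error
print(float(np.abs(dwdt + 2 * nu * w).max()))
```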
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9624321460723877, "perplexity": 345.3145849872583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00461-ip-10-171-10-108.ec2.internal.warc.gz"}
http://www.computer.org/csdl/trans/tp/1990/04/i0321-abs.html
Issue No. 04 - April (1990, vol. 12), pp. 321-344

ABSTRACT

The general principles of detection, classification, and measurement of discontinuities are studied. The following issues are discussed: detecting the location of discontinuities; classifying discontinuities by their degrees; measuring the size of discontinuities; and coping with random noise and designing optimal discontinuity detectors. An algorithm is proposed for discontinuity detection from an input signal S. For degree k discontinuity detection and measurement, a detector (P, Φ) is used, where P is the pattern and Φ is the corresponding filter. If there is a degree k discontinuity at location t_0, then in the filter response there is a scaled pattern αP at t_0, where α is the size of the discontinuity. This reduces the problem to searching for the scaled pattern in the filter response. A statistical method is proposed for the approximate pattern matching. To cope with the random noise, a study is made of optimal detectors, which minimize the effects of noise.

INDEX TERMS

discontinuities; computer vision; detection; classification; random noise; optimal discontinuity detectors; scaled pattern; statistical method; approximate pattern matching; statistics

CITATION

D. Lee, "Coping with Discontinuities in Computer Vision: Their Detection, Classification, and Measurement", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 12, no. 4, pp. 321-344, April 1990, doi:10.1109/34.50620
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.866123616695404, "perplexity": 3083.393858060679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299121.41/warc/CC-MAIN-20150323172139-00223-ip-10-168-14-71.ec2.internal.warc.gz"}
https://verification.asmedigitalcollection.asme.org/GT/proceedings-abstract/GT2015/56659/V02CT42A016/237118
The performance of a compressor is known to be affected by the ingestion of liquid droplets. Heat, mass and momentum transfer as well as the droplet dynamics are some of the important mechanisms that govern the two-phase flow. This paper presents numerical investigations of three-dimensional two-phase flow in a two-stage centrifugal compressor, incorporating the effects of the above mentioned mechanisms. The results of the two-phase flow simulations are compared with the simulation involving only the gaseous phase. The implications for the compressor performance, viz. the pressure ratio, the power input and the efficiency are discussed. The role played by the droplet-wall interactions on the rate of vaporization, and on the compressor performance is also highlighted.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9365805983543396, "perplexity": 448.0740419891523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571086.77/warc/CC-MAIN-20220809185452-20220809215452-00421.warc.gz"}
http://mathoverflow.net/questions/64725/satisfiable-polynomial-equations-for-given-free-coefficients
## Satisfiable polynomial equations for given free coefficients

Let $F$ be a finite field and let $n, k, m$ be natural numbers. I give you $m$ vectors $c^{(1)},\ldots,c^{(m)}\in F^n$. I ask for polynomials $p_1,\ldots,p_n$ in $k$ variables over $F$ such that the system of polynomial equations $p_i(t_1,\ldots,t_k)=c^{(j)}_i$ for $i=1,\ldots,n$ is satisfiable for every $1\leq j\leq m$. Such polynomials can be found with degree $1$ if $k=n$: just take $p_i(t_1,\ldots,t_{k}) = t_i$. Can one find such polynomials when $k=n^{\epsilon}$ for a small $\epsilon>0$ and with degree depending only on $1/\epsilon$?

A simple observation: If $k = 1$ and $m > |F|$, then there are no solutions, because for all $p_1, \ldots, p_n \in F[t]$, $|\lbrace (p_1(t), \ldots, p_n(t)): t \in F \rbrace| \leq |F| < m$. By the same argument, in general there can be no solution if $m > |F|^k$. – auniket May 17 2011 at 14:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9237026572227478, "perplexity": 107.87396895663754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705543116/warc/CC-MAIN-20130516115903-00042-ip-10-60-113-184.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/6974/historical-question-cauchy-crofton-theorem-vs-radon-transform?answertab=oldest
# Historical question: Cauchy-Crofton theorem vs. Radon transform

The Radon transform was apparently discovered around 1917, if Wikipedia is to be believed. The Cauchy-Crofton theorem is a much older theorem (mid-19th century). But both ideas are more or less the same. Did Radon consider his transform as a generalization of the Cauchy-Crofton theorem? Did he not know about the Cauchy-Crofton theorem?

http://en.wikipedia.org/wiki/Crofton_formula
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9659988284111023, "perplexity": 870.1131106742804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098849.37/warc/CC-MAIN-20150627031818-00200-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/one-more-quantum-matrix-question.46672/
# One more Quantum Matrix question

1. Oct 8, 2004

### Ed Quanta

Let A be a Hermitian nxn matrix. Let the column vectors of the nxn matrix S be comprised of the orthonormalized eigenvectors of A

Again, Sinv is the inverse of S

a) Show that S is unitary

b) Show that Sinv(A)S is a diagonal matrix comprised of the eigenvalues of A

No idea how to start this one off.

2. Oct 8, 2004

### Wong

a) U is a unitary matrix <=> U*U = I, where "*" denotes conjugate transpose <=> $$\sum_{j} u_{ji}^{*}u_{jk} = \delta_{ik}$$ <=> $$u_{i}^{*}u_{k}=\delta_{ik}$$, where $$u_{i}$$ is the ith column of U. The last relation implies orthogonality of columns of U.

b) This one needs a little thought. If u is an eigenvector of A, then $$Au=\lambda u$$. Then what is AS? Remember that each column of S is just an eigenvector of A. Also note that Sinv*S=I.

Last edited: Oct 8, 2004

3. Oct 9, 2004

### Ed Quanta

Sorry, I am still not sure how to find AS without knowing the eigenvectors of A.

4. Oct 9, 2004

### Wong

First try to think about what you want to prove. That is, $$S^{-1}AS=D$$, where D is a diagonal matrix. This is equivalent to proving AS=DS, where D is diagonal. Now each column of S is an eigenvector of A. So A acting on S should produce something quite simple. (Try to think of what is the defining eigenvalue equation for A.) May you put the result in the form DS, where D is a diagonal matrix?

5. Mar 6, 2005

### erraticimpulse

Wong Wrong

I have doubts that either of you guys will read this anytime soon. I had this same problem and the conclusion that Wong tried to provide is incorrect. Instead of $$AS=DS$$ it's actually $$AS=SD$$. The product DS will produce the correct entries along the diagonal but false entries elsewhere (really think about what you're doing here). But if you use the product SD it will provide the correct eigenvalue for every eigenvector.
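A quick NumPy sanity check of both parts, and of erraticimpulse's correction (my own snippet; the random Hermitian matrix is just an arbitrary example, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                      # Hermitian n x n matrix

eigvals, S = np.linalg.eigh(A)          # columns of S: orthonormal eigenvectors
D = np.diag(eigvals)

print(np.allclose(S.conj().T @ S, np.eye(4)))    # (a) S is unitary
print(np.allclose(np.linalg.inv(S) @ A @ S, D))  # (b) S^-1 A S is diagonal
print(np.allclose(A @ S, S @ D))                 # AS = SD holds
print(np.allclose(A @ S, D @ S))                 # AS = DS is False in general
```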
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9880890250205994, "perplexity": 774.0007984966481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170425.26/warc/CC-MAIN-20170219104610-00263-ip-10-171-10-108.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/925518/proof-of-taylors-theorem-with-wirtinger-derivatives-complex-coordinates
# Proof of Taylor's Theorem with Wirtinger Derivatives (Complex coordinates) Suppose that $f$, defined in $D_1(0)$, is infinitely differentiable. Show that for each $n \in \mathbb{N}$ we have \begin{equation*} f(z,\bar{z}) = \sum\limits_{0 \leq j + k \leq n} \frac{\partial_z^j\partial_{\bar{z}}^kf(0,0) }{j!k!}z^j\bar{z}^k + \mathcal{O}(|z|^{n+1}). \end{equation*} I've tried to expand Taylor's theorem for reals to get this result, but everything I've tried has worked out badly. I'm sure there's an elegant way to do this that I'm just not seeing. This IS a homework problem, so feel free to give partial solutions/hints if you prefer. Edit: My starting point was that we know: \begin{equation*} f(x,y) = \sum\limits_{0 \leq j + k \leq n} \frac{\partial_x^j\partial_{y}^kf(0,0) }{j!k!}x^jy^k + \mathcal{O}(\sqrt{x^2 + y^2}^{n+1}). \end{equation*} from Taylor's theorem for two variables. It's likely possible to get one from the other from the extremely ugly, brute-force method of substituting in $z = x + iy,~\bar{z} = x - iy$ and $\partial_z = \frac{1}{2}(\partial_x - i\partial_y),~\partial_{\bar{z}} = \frac{1}{2}(\partial_x + i\partial_y)$. My gut feeling tells me that there must be a better way to solve this problem then that. I just can't figure it out. Any help would be extremely appreciated. Starting from the Taylor formula for functions of a real variable, $$g(x) = \sum_{k=0}^n \frac{g^{(k)}(0)}{k!}x^k + \frac{1}{n!}\int_0^x (x-t)^n \cdot g^{(n+1)}(t)\,dt,$$ we can obtain the result by considering $g_\varphi(r) = f(re^{i\varphi},re^{-i\varphi})$ and expressing the derivatives of $g_\varphi$ in terms of the Wirtinger derivatives of $f$. Inductively, we have \begin{align} g_\varphi^{(k+1)}(t) &= \frac{\partial}{\partial t} g_\varphi^{(k)}(t)\\ &= \frac{\partial}{\partial t} \sum_{m=0}^k \binom{k}{m} \partial_z^m\partial_{\overline{z}}^{k-m}f(te^{i\varphi},te^{-i\varphi})e^{im\varphi}e^{-i(k-m)\varphi}\\ &= \sum_{m=0}^k \binom{k}{m} \partial_z^{m+1}\partial_{\overline{z}}^{k-m}f(te^{i\varphi},te^{-i\varphi})e^{i(m+1)\varphi}e^{-i(k-m)\varphi}\\ &\quad + \sum_{m=0}^k \binom{k}{m} \partial_z^m \partial_{\overline{z}}^{k+1-m}f(te^{i\varphi},te^{-i\varphi})e^{im\varphi}e^{-i(k+1-m)\varphi}\\ &= \sum_{m=0}^{k+1} \binom{k}{m-1} \partial_z^m\partial_{\overline{z}}^{k+1-m}f(te^{i\varphi},te^{-i\varphi})e^{im\varphi}e^{-i(k+1-m)\varphi}\\ &\quad + \sum_{m=0}^{k+1} \binom{k}{m} \partial_z^m\partial_{\overline{z}}^{k+1-m}f(te^{i\varphi},te^{-i\varphi})e^{im\varphi}e^{-i(k+1-m)\varphi}\\ &= \sum_{m=0}^{k+1}\binom{k+1}{m} \partial_z^m\partial_{\overline{z}}^{k+1-m}f(te^{i\varphi},te^{-i\varphi})e^{im\varphi}e^{-i(k+1-m)\varphi}\\ \end{align} by the chain rule just like for the real partial derivatives $\partial_x,\,\partial_y$, and so for $z = \lvert z\rvert e^{i\varphi}$ we obtain \begin{align} f(z,\overline{z}) &= g_\varphi(\lvert z\rvert)\\ &= \sum_{k=0}^n \frac{g_\varphi^{(k)}(0)}{k!}\lvert z\rvert^k + \underbrace{\frac{1}{n!}\int_0^{\lvert z\rvert} (\lvert z\rvert-t)^n g_\varphi^{(n+1)}(t)\,dt}_{R_n(z,\overline{z})}\\ &= \sum_{j+m\leqslant n} \frac{\partial_z^j\partial_{\overline{z}}^m f(0,0)}{j!m!} e^{ij\varphi}e^{-im\varphi}\lvert z\rvert^{j+m} + R_n(z,\overline{z})\\ &= \sum_{j+m\leqslant n} \frac{\partial_z^j\partial_{\overline{z}}^m f(0,0)}{j!m!}z^j\overline{z}^m + R_n(z,\overline{z}), \end{align} with \begin{align} \lvert R_n(z,\overline{z})\rvert &= \frac{1}{n!} \left\lvert \int_0^{\lvert z\rvert} (\lvert z\rvert-t)^n g_\varphi^{(n+1)}(t)\,dt\right\rvert\\ &\leqslant 
\frac{1}{n!}\sum_{j=0}^{n+1}\binom{n+1}{j}\int_0^{\lvert z\rvert} (\lvert z\rvert-t)^n \left\lvert \partial_z^j\partial_{\overline{z}}^{n+1-j}f(te^{i\varphi},te^{-i\varphi})\right\rvert\,dt\\ &\leqslant \left(\sum_{j=0}^{n+1} \frac{\lVert \partial_z^j\partial_{\overline{z}}^{n+1-j} f\rVert_{R}}{j!(n+1-j)!}\right)\lvert z\rvert^{n+1} \end{align} where $R$ is arbitrary between $\lvert z\rvert$ and $1$, and $\lVert h\rVert_R = \sup \{ \lvert h(z,\overline{z})\rvert : \lvert z\rvert \leqslant R\}$. Note: we cannot have a bound $C\cdot \lvert z\rvert^{n+1}$ for the remainder term uniformly on all of $D_1(0)$, since $f$ could be unbounded on the disk, but a polynomial always is bounded on bounded subsets of $\mathbb{C}$. We can only expect to have for every compact $K\subset D_1(0)$ a constant $C_K$ such that $\lvert R_n(z,\overline{z})\rvert \leqslant C_K\cdot \lvert z\rvert^{n+1}$ holds for all $z\in K$. The expression with the $\lVert\cdot\rVert_R$ gives exactly that. You may have noticed that the proof is exactly like the/a standard proof of the Taylor formula for a function of several (in this case two) real variables. The point is the formula for the higher derivatives of $g_\varphi$, which matches exactly the formula for the derivatives expressed in terms of the real partial derivatives. That they behave just like true partial derivatives in many ways (chain rule, product rule, ...) makes the Wirtinger derivatives useful. So, you know there is a polynomial $P$ of degree $\le n$ such that $$f(x,y) = P(x,y) + \mathcal{O}((x^2 + y^2)^{(n+1)/2}) \tag{1}$$ Note that the error term has all derivatives of orders $\le n$ vanishing at the origin. Plug $x=(z+\bar z)/2$ and $y=(z-\bar z)/(2i)$ in (1). Treating $z$ and $\bar z$ as abstract variables for the moment, observe that this is a linear invertible change of variables: a polynomial becomes another polynomial $Q$ of same degree. So, $$f(z) = Q(z,\bar z) + \mathcal{O}(|z|^{n+1}) \tag{2}$$ As before, the error term has all derivatives of orders $\le n$ vanishing at the origin. Assuming as known that $$\frac{\partial }{\partial z}(z^m \bar z^n)=mz^{m-1} \bar z^n,\qquad \frac{\partial }{\partial \bar z}(z^m \bar z^n)=nz^{m} \bar z^{n-1} \tag{3}$$ we find that the coefficients of $Q$ are what is claimed by taking derivatives on both sides and evaluating them at $0$. One way to prove (3) is to • check that Wirtinger derivatives satisfy the product rule (easy, since they are just the sum of two things that satisfy it) • check that $\frac{\partial }{\partial z} z =1$, $\frac{\partial }{\partial z} \bar z =0$, $\frac{\partial }{\partial \bar z} z =0$, $\frac{\partial }{\partial \bar z} \bar z =1$. (Something that should be done to motivate said derivatives, anyway.) • Thanks for the comment, but I'm having a bit of trouble understanding what you mean by "we find that the coefficients of $Q$ are what is claimed by taking derivatives on both sides and evaluating them at $0$." What exactly do you mean by that? – user165388 Sep 12 '14 at 20:03 • Say, you differentiated both sides of (2) twice in $z$ and three times in $\bar z$. Then on the right you have a polynomial where every monomial lost two factors of $z$ and three factors of $\bar z$ (and gained some coefficient). Now when we plug $0$ in there, the only monomial that survives is the one that came from $z^2\bar z^3$. This gives a relation between the derivatives of $f$ and coefficients of $Q$. – user147263 Sep 12 '14 at 21:41
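As a concrete spot check of the formula (my own toy example, not from the answers above), SymPy can verify that the Wirtinger derivatives at the origin reproduce the coefficient of a monomial $z^j\bar z^k$:

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
z, zb = x + sp.I * y, x - sp.I * y
f = sp.expand(z**2 * zb)          # test function f = z^2 * zbar, written in x, y

dz  = lambda g: (sp.diff(g, x) - sp.I * sp.diff(g, y)) / 2   # Wirtinger d/dz
dzb = lambda g: (sp.diff(g, x) + sp.I * sp.diff(g, y)) / 2   # Wirtinger d/dzbar

# d_z^2 d_zbar f (0,0) / (2! * 1!) should equal the coefficient of z^2 * zbar
coeff = dz(dz(dzb(f))).subs({x: 0, y: 0}) / (sp.factorial(2) * sp.factorial(1))
print(sp.simplify(coeff))         # expect 1
```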
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9975202083587646, "perplexity": 272.34263652063356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316549.78/warc/CC-MAIN-20190821220456-20190822002456-00537.warc.gz"}
https://www.physicsforums.com/threads/normal-force-at-the-bottom-of-a-ferris-wheel.216271/
# Homework Help: Normal Force at the bottom of a Ferris Wheel

1. Feb 18, 2008

### AnkhUNC

1. The problem statement, all variables and given/known data

A student of weight 678 N rides a steadily rotating Ferris wheel (the student sits upright). At the highest point, the magnitude of the normal force N on the student from the seat is 565 N. (a) What is the magnitude of N at the lowest point? If the wheel's speed is doubled, what is the magnitude FN at the (b) highest and (c) lowest point?

2. Relevant equations

3. The attempt at a solution

So M = 678N, NTop = 565N. Fc = mg - Ntop = 6079.4 So Nbottom = Nbottom - mg = 6079.4 which leads Nbottom to = 12723.8 but this is incorrect. Where am I going wrong?

2. Feb 18, 2008

### Staff: Mentor

At the top, the normal force from the seat points up, while weight and the (centripetal) acceleration point down: mg - N = mv^2/r; so N = mg - mv^2/r

At the bottom, normal force and acceleration point up, but weight points down: N - mg = mv^2/r; so N = mv^2/r + mg

3. Feb 18, 2008

### AnkhUNC

I really don't need all that though do I? If I do how am I going to solve for v^2 or r? I only have one equation and two unknowns. At best I'd have Ntop+Nbottom = mv^2/r.

4. Feb 18, 2008

### Staff: Mentor

Yep. It's the easy way! No need to solve for those. Examining those expressions for N, how does Nbottom compare to Ntop? (Hint: What's Nbottom + Ntop?)
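Putting numbers to the relations in post #2 (my own arithmetic, only a quick check): since the weight mg = 678 N is given directly, mv²/r = mg − N_top = 113 N, and doubling the speed quadruples mv²/r.

```python
# Quick numeric check, following the relations in post #2 above.
W = 678.0                  # weight mg in newtons (the mass itself never enters)
N_top = 565.0              # seat force at the highest point

Fc = W - N_top             # mv^2/r = mg - N_top = 113 N
print("(a) N_bottom =", W + Fc)     # mg + mv^2/r -> 791.0 N

Fc2 = 4 * Fc               # doubling v quadruples mv^2/r
print("(b) N_top    =", W - Fc2)    # -> 226.0 N
print("(c) N_bottom =", W + Fc2)    # -> 1130.0 N
```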
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8536773920059204, "perplexity": 2870.481053918855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746110.52/warc/CC-MAIN-20181119192035-20181119214035-00145.warc.gz"}
https://collaborate.princeton.edu/en/publications/particle-number-fluctuations-r%C3%A9nyi-entropy-and-symmetry-resolved-
# Particle number fluctuations, Rényi entropy, and symmetry-resolved entanglement entropy in a two-dimensional Fermi gas from multidimensional bosonization

Mao Tian Tan, Shinsei Ryu

Research output: Contribution to journal › Article › peer-review

## Abstract

We revisit the computation of particle number fluctuations and the Rényi entanglement entropy of a two-dimensional Fermi gas using multidimensional bosonization. In particular, we compute these quantities for a circular Fermi surface and a circular entangling surface. Both quantities display a logarithmic violation of the area law, and the Rényi entropy agrees with the Widom conjecture. Lastly, we compute the symmetry-resolved entanglement entropy for the two-dimensional circular Fermi surface and find that, while the total entanglement entropy scales as R ln R, the symmetry-resolved entanglement scales as √(R ln R), where R is the radius of the subregion of our interest.

Original language: English (US)
Article number: 235169
Journal: Physical Review B, vol. 101, no. 23
DOI: https://doi.org/10.1103/PhysRevB.101.235169
Published: Jun 15 2020

## All Science Journal Classification (ASJC) codes

• Electronic, Optical and Magnetic Materials
• Condensed Matter Physics
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9098526239395142, "perplexity": 2791.2887281441144}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037649.11/warc/CC-MAIN-20220626071255-20220626101255-00157.warc.gz"}
https://www.physicsforums.com/threads/velocity-along-a-frictionless-surface.866839/
# Velocity along a frictionless surface

• #1

## Homework Statement

A body moves down along an inclined plane from A(top) to B(bottom), and then moves on the floor in continuation to some point C. (All surfaces are frictionless.) After reaching B, body is having some acceleration. But while moving from B to C, a) will it keep on accelerating, b) or will its acceleration be zero (constant velocity) from B to C?

2. The attempt at a solution

Frictionless surfaces don't interfere with the motion of the body, so whatever state the body possesses at B (some velocity) will continue to hold, and the body will move with zero acceleration from B to C.

• #2

Merlin3189

Homework Helper
Gold Member

The question sounded a bit odd, "After reaching B, body is having some acceleration." I would say, "Up to point B, body is having some acceleration."

• #3

The body is having acceleration at B as it has accelerated from A to B; the question is about from B to C.

• #4

The speed will remain constant from B to C. Why? Because B and C are on the same horizontal level, and thus there's no question of vertical motion here (they surely aren't going to break the floor and move). And since the horizontal components of the forces acting on the block (gravity and the normal force) are zero, and since there's no friction, the block will keep moving with a constant velocity from B to C.

UchihaClan13

• #5

I was confused about the acceleration part.

• #6

Don't be then :) The block accelerates from A to B because, as there's no friction, the force mgsinθ which acts down the incline accelerates the block over the entire distance the block traverses/moves. Once it reaches B, there is a momentary transition and some acceleration right at B, but it's momentary and thus it can be neglected!

UchihaClan13

• #7

Like others have said, the speed of the body from B to C will remain constant, since gravity is balanced by the normal force and no other force acts on it.
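A tiny numeric illustration of the consensus in this thread (my own made-up numbers; only the height drop h of A above B matters on a frictionless incline):

```python
import math

g, h = 9.8, 5.0              # gravity (m/s^2) and assumed drop height (m)
v_B = math.sqrt(2 * g * h)   # energy conservation: m*g*h = (1/2)*m*v^2
# No net horizontal force from B to C, so the speed stays the same.
print(f"speed at B: {v_B:.2f} m/s; speed at C: {v_B:.2f} m/s (unchanged)")
```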
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8469623327255249, "perplexity": 1871.4306155525164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072175.30/warc/CC-MAIN-20210413062409-20210413092409-00330.warc.gz"}
https://mathoverflow.net/questions/195826/quantum-field-theory-integral-notation
# Quantum Field theory - integral notation

I have a problem with understanding how the resolution of the identity of an operator is presented in some literature for physicists. I'm a student of mathematics, and I understand the notion of a spectral measure (which is sometimes called the resolution of identity) and also have some knowledge in spectral theory (for normal operators).

# Here is my brief explanation of what I understand:

Let $$H$$ be a Hilbert space (with an inner product linear w.r.t. the 2nd coordinate) with an orthonormal basis $$(e_{n})$$ and define a linear operator (diagonal operator) $$A = \sum_{i \geq 1} \lambda_i \left|e_i\right>\left<e_i\right|$$ where $$(\lambda_i)$$ is a sequence of complex numbers (the properties of this sequence determine the properties of $$A$$ such as boundedness, selfadjointness, compactness etc.); to make sense of $$A$$ we assume that the above series converges in SOT. I also used Dirac bra-ket notation. The associated spectral measure of $$A$$ is defined via $$E(\Delta) = \sum_i \mathbf{1}_{\Delta}(\lambda_i) \left|e_i\right>\left<e_i\right|$$ where $$\Delta$$ is an element of the Borel sigma field over the spectrum of $$A$$, and we have that $$\left<x, Ay\right> = \int_{\sigma(A)} \lambda \left<x, E(d\lambda)y\right> \ \ (x,y \in H).$$ Very often physicists would use the following notation for $$A$$ which acts on an element $$\psi \in H$$: $$A\left|\psi\right> = \sum_{i} \lambda_i \left|i\right>\left<i|\psi\right>.$$

# My problems with notation

I started reading some notes and books about quantum field theory, and often it is written that the identity operator $$I$$ on some (separable) Hilbert space $$H$$ has the expansion, called the resolution of the identity, $$I= \int dq^{\prime} \left|q^{\prime}\right>\left<q^{\prime}\right|.$$ I don't know whether it matters here, but $$\{\left|q\right>\}$$ is supposed to be a complete set of states. Reference: http://eduardo.physics.illinois.edu/phys582/582-chapter5.pdf bottom of p. 129.

# My question

Is the notion of the above resolution of the identity the same as an integral w.r.t. a spectral measure ($$I$$ is a diagonal operator)? If yes, how should I understand the above notation? If no, what do they actually mean by this resolution of the identity and how do they define this integral? I noticed that in a lot of books concerning quantum mechanics there are many calculations, but not very many definitions and assumptions, which makes stuff hard to understand for a mathematician.

The answer is Yes. The interpretation of the notation is quite straightforward: $dq'|q'\rangle\langle q'| = E(dq')$. We need to presume that $E$ is the spectral measure of an operator $Q' = \int q' E(dq') = \int dq'\, q' |q'\rangle\langle q'|$. The only aspect that doesn't necessarily mesh well with your question, as written, is the fact that you've defined a spectral measure $E$ only for operators whose spectrum consists of eigenvalues, since $A|e_i\rangle = \lambda_i |e_i\rangle$. Spectral measures can be defined for operators with any kind of spectrum, including continuous.

• Thanks, I know that you have the spectral theorem for any normal operator defined on a Hilbert space; however, the notation for the resolution of identity in the example which I gave above was for the identity operator, which is in fact diagonal. One quick question: is it supposed to be $$I= \int d q^{\prime}\, q^{\prime} \left|q^{\prime} \right> \left<q^{\prime}\right|$$ instead of $$I= \int d q^{\prime} \left|q^{\prime} \right> \left<q^{\prime}\right|$$? 
– Eric Feb 6 '15 at 16:06 • Your first formula gives $Q'$ and not $I$, with $I$ given correctly by your second formula. The prototypical example of $Q'$ is the position operator in quantum mechanics. It has a simple continuous spectrum (in 1-dimension, that is) and that is why its "eigenvectors" are convenient for writing down a resolution of the identity. Essentially, you are representing identity using functional calculus $I = f(Q) = \int dq' f(q') |q'\rangle\langle q'|$, where $f(x) \equiv 1$. – Igor Khavkine Feb 6 '15 at 19:32 • Great answer! Thank you. For the identity we don't even need the functional calculus; we know that $Q^{\prime}$ as a selfadjoint operator admits a unique spectral measure $E$, thus by using the properties of a spectral measure we get $I=E(\sigma(Q^{\prime}))= \int_{\sigma(Q^{\prime})} 1 E(d\lambda)$, which can of course be written in a different notation involving bras and kets. – Eric Feb 6 '15 at 19:47 See $\S$ 4.4 of de la Madrid's "The role of the rigged Hilbert space in quantum mechanics"
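The finite-dimensional picture from the question can be checked numerically. Here is a small NumPy sketch (my addition, not part of the thread), with a made-up 3x3 diagonal operator: summing the projections |e_i><e_i| over the whole spectrum resolves the identity, and integrating lambda against the spectral measure reproduces A.

```python
import numpy as np

# Toy 3x3 example: eigenvalues lambda_i and the standard basis as the e_i.
eigvals = np.array([0.5, 2.0, 2.0])
basis = np.eye(3)  # rows are the orthonormal basis vectors |e_i>

# A = sum_i lambda_i |e_i><e_i|
A = sum(lam * np.outer(e, e) for lam, e in zip(eigvals, basis))

def E(delta):
    """Spectral projection E(delta) = sum over {i : lambda_i in delta} of |e_i><e_i|."""
    return sum(np.outer(e, e) for lam, e in zip(eigvals, basis) if lam in delta)

# E over the whole spectrum is the identity: the resolution of the identity.
assert np.allclose(E({0.5, 2.0}), np.eye(3))

# "Integrating" lambda against E (a finite sum here) gives back A.
assert np.allclose(sum(lam * E({lam}) for lam in set(eigvals)), A)
```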
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9908544421195984, "perplexity": 1160.3295495924679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988828.76/warc/CC-MAIN-20210507211141-20210508001141-00504.warc.gz"}
https://wiki.lyx.org/Tips/Beamer
# Beamer

Tips for using the Beamer presentation class.

### Enumerations

• To customize the labels for an enumeration list, put the cursor at the start of the first item in the list and click `Insert > Short Title`. This will create an inset labeled 'opt' for optional arguments. Insert the label you want for the first item there. Beamer will automatically replace any occurrence of 1, i or I with the index of each item in Arabic, lower or upper case Roman numerals respectively. Be sure to include any punctuation you want. For example, XY1: would produce item labels 'XY1:', 'XY2:' etc.

• If you want a label that contains the letter i or I (or a numeral that stays fixed), you need to enclose that part of the label in braces. For instance, '{Hint} I:' will generate labels 'Hint I:', 'Hint II:' etc. but 'Hint I:' will generate labels 'HInt I:', 'HIInt II:' etc. The braces cannot be entered directly; use the TEX button, `Insert > TeX Code` or `Ctrl-L` to add TeX insets to the optional argument inset, then type the braces in them.

• The labels in subitems restart as 1, 2, 3, etc. To get a, b, c, etc., insert the following command in the LaTeX preamble:

```
\setbeamertemplate{enumerate subitem}{\alph{enumii})}
```

### Repeating the Title Slide

To repeat the title slide at the end of the presentation (or anywhere in between):

1. add `\renewcommand\makebeamertitle{\frame[label=mytitle]{\maketitle}}` to the document preamble;
2. at the point where you want the title slide to repeat, create a new frame using the AgainFrame environment and type in `mytitle` as the label.

### Versions for Note-taking

The `handout` class option tells Beamer to create a version of the presentation with each frame on a single page. To create a handout with space on each page for the audience to take notes, you can use the `handoutWithNotes` package, available from http://www.guidodiepen.nl/2009/07/creating-latex-beamer-handouts-with-notes/ (with instructions there) (and apparently not available from CTAN). Install the style file into your local `texmf` tree (somewhere under `tex/latex`) and update the LaTeX file database (typically by running `texhash`, but somewhat distribution-specific). Then add the following two lines to your document preamble:

```
\usepackage{handoutWithNotes}
\pgfpagesuselayout{1 on 1 with notes}[letterpaper,border shrink=5mm]
```

You can do various customizations in the second line (`a4paper` rather than `letterpaper` to change the paper size, `2 on 1` rather than `1 on 1` to reduce the number of pages, `landscape` (inside the optional argument) to switch from portrait to landscape mode, and so on). You still need to specify `handout` in the class options field to print one entry per frame, rather than one per overlay.

### Uncovering a Table Row-wise

(This is covered in the Beamer user guide; what follows is mainly adjustments for use within LyX.) To uncover one row of a table at a time, end the last cell in each row (other than the final row and any headings) with `\pause` in a TeX Code (ERT) inset. For more granular control, replace `\pause` with `\uncover<?>{` in ERT at the end of the row above the one you will be uncovering and `}` in ERT at the end of the row being uncovered, where "?" is a valid overlay specification. The Beamer user guide also offers a tip for using a dark/light alternating background color in the rows of the table. 
To use it in LyX, add `table` to `Document > Settings > Document Class > Class options > Custom` and something like `\rowcolors[]{1}{blue!20}{blue!10}` in the preamble. That color scheme is the one suggested in the Beamer guide, but you can season it to taste. If you want to use a larger color palette, add `dvipsnames` alongside `table` in the custom class options (separated by a comma). ### Suppressing a Logo on One Slide This tip is based on an answer posted by Alan Munn at StackExchange. To suppress a logo on selected slides, add the following command to the document preamble: `\newcommand{\nologo}{\setbeamertemplate{logo}{}}`. At the end of the frame prior to the one where you want to remove the logo, add an `EndFrame` environment followed by `{\nologo` in a TeX Code inset (ERT), using a standard environment. Next, build the frame as usual, starting with `BeginFrame` or one of the other frame creation environments. End that frame with another `EndFrame` environment, followed by `}` in ERT. Start the next frame as usual. To suppress the logo from a sequence of consecutive frames, just move the second `EndFrame` and closing `}` to the last frame in the group.
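For readers working outside LyX, here is a minimal self-contained Beamer sketch of the logo-suppression pattern described above (my illustration; the frame titles and logo file name are made up). The key idea is that `\nologo` empties the logo template only inside a TeX group, so the logo returns automatically afterwards:

```
\documentclass{beamer}
\logo{\includegraphics[height=0.8cm]{mylogo}} % hypothetical logo file
% Command from the tip above: empties the logo template.
\newcommand{\nologo}{\setbeamertemplate{logo}{}}

\begin{document}

\begin{frame}{With logo}
Content of a normal frame.
\end{frame}

{\nologo % group: the redefinition applies only to frames inside the braces
\begin{frame}{Without logo}
This frame renders without the logo.
\end{frame}
}

\begin{frame}{Logo is back}
Outside the group, the logo template is restored.
\end{frame}

\end{document}
```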
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8570958971977234, "perplexity": 1825.653143334826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585209.43/warc/CC-MAIN-20211018190451-20211018220451-00551.warc.gz"}
https://tex.stackexchange.com/questions/100323/uncovering-items-with-changing-bullet-color
# Uncovering items with changing bullet color

In this case consecutive items get covered. How could I change this behaviour so that all items would be black and only bullets would get different colours?

\documentclass{beamer}
\setbeamercovered{transparent}
\begin{document}
\begin{frame}
\frametitle{Title}
\begin{itemize}
\item<1-> First
\item<2-> Second
\item<3-> Third
\end{itemize}
\end{frame}
\end{document}

• Check the beamer user guide for alert. Like \begin{itemize}[<alert@+>] – bloodworks Feb 28 '13 at 15:24

## 1 Answer

My answer, adapted from the example in the Beamer user guide, p82:

\documentclass{beamer}
\def\colorize<#1>{%
\temporal<#1>{%
\setbeamercolor{item}{fg=blue}%
}{%
\setbeamercolor{item}{fg=red}%
}{%
\setbeamercolor{item}{fg=blue}%
}
}
\setbeamertemplate{itemize item}[triangle]
\begin{document}
\begin{frame}
\frametitle{Title}
\begin{itemize}
\colorize<1>
\item First
\begin{itemize}
\colorize<2>
\item First a
\colorize<3>
\item First b
\end{itemize}
\colorize<4>
\item Second
\colorize<5>
\item Third
\end{itemize}
\end{frame}
\end{document}

There must be a better way of doing it though. Does anyone know how to redefine \item so as to get the desired output without having to use an extra command (\colorize here) in front of each \item?

EDIT: \colorize is now compatible with all levels of itemize environments.

• what about the itemize inside itemize? it's not working! – liberias Feb 28 '13 at 19:59
• I'm not sure I approve of your exclamation mark... Is it supposed to convey urgency or irritation? Anyway, you get the result you want by substituting item for itemize item in the definition of `\colorize`. – jub0bs Feb 28 '13 at 20:09
• sorry for the exclamation mark, tomorrow i have to present my thesis and i'm a bit nervous :] – liberias Feb 28 '13 at 20:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9623299241065979, "perplexity": 3084.3632694214853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250611127.53/warc/CC-MAIN-20200123160903-20200123185903-00165.warc.gz"}
http://math.stackexchange.com/questions/433880/prove-that-for-any-invertable-n-times-n-matrix-a-and-any-b-in-mathbbrn
# Prove that for any invertible $n\times n$ matrix A, and any $b\in\mathbb{R}^n$, there exists a unique solution to $Ax=b$

I think I've got the two ideas needed to solve this, but it feels like they're not tied together properly. I'm not sure if I'm allowed to do something like this:

Let $A$ be an invertible $n\times n$ matrix, and $b$ be an n-dimensional vector. \begin{align} Ax=b&\Longrightarrow A^{-1}Ax=A^{-1}b\\ &\Longrightarrow x=A^{-1}b \end{align} Therefore, there exists at least one solution to the equation $Ax=b$.

Additionally, for the equation $Ay=b$: \begin{align} Ay=b&\Longrightarrow A^{-1}Ay=A^{-1}b\\ &\Longrightarrow y=A^{-1}b\\ &\Longrightarrow y=x \end{align} Therefore, for any given $A$ and $b$, there is a unique $x$ with $Ax=b$.

The problem I feel exists with this is that I'm doing two separate proofs and referencing one in the other, when I feel like I can only do that if they're combined into one single proof. Am I mistaken?

- Curiously when I visited this question the only upvoted (and accepted) answer was the unique (sic) one that does not correctly address the interrogation that OP expressed. The point is that unique existence really has two different aspects, and that showing them separately is quite normal. Although one can present them in a combined fashion (see the answer by copper.hat) as an equivalence between two equations (but the equivalence still involves separate implications in two directions). – Marc van Leeuwen May 14 at 11:07

A handy way to deal with uniqueness proofs is to assume by contradiction that there exist distinct solutions. Assume that $x_1$ and $x_2$ are distinct solutions to $Ax=b$. Then, $Ax_1 = b$ and $Ax_2 = b$. Since $A$ is invertible, we have $x_1 = A^{-1}b$ and $x_2 = A^{-1}b$. Thus, because $A^{-1}b = A^{-1}b$, we have by transitivity $x_1 = x_2$, but we assumed they are distinct. Therefore, the solution must be unique. This is essentially what you're trying to do, but it is not two different proofs. Instead, we leverage the power of transitivity and reflexivity of equality to show that distinct solutions cannot exist.

- Basically, you're just showing that if $x_1$ and $x_2$ are solutions of the system, they must be equal. Note that this actually is not a proof by contradiction, since the assumption that they are distinct is unnecessary. That of course does not take away the fact that it is a handy way of thinking about it. – Eric Spreen Jul 1 '13 at 19:01 @EricSpreen Of course, which is why I worded it like that. It is a natural thing to think "well, what if there were two solutions?" The remainder of the proof follows a bit more naturally from there, and it removes some of the uneasiness that comes from just stating equality and hoping it works. – Arkamis Jul 1 '13 at 19:10 This is just doing half the work. Showing unique existence requires showing uniqueness and showing existence. You did not do the latter, whereas the proof OP presents properly does both aspects. Therefore I find this as an answer to the question OP posed quite misleading. – Marc van Leeuwen May 14 at 11:03 @Marcvanleeuwen The implication here was more to clean up the OP's second part, not that there didn't need to be two parts. My comments were maybe misleading. "Not two different proofs" wasn't meant to imply there weren't two steps, but rather that it wasn't necessarily the existence proof applied twice. I'll clean up the wording. – Arkamis May 14 at 12:39

If $A$ is invertible and $b$ is given, then $Ax=b$ iff $x = A^{-1}b$. 
- If $A$ is invertible, left multiplication by $A$ is an isomorphism on $\mathbb{R}^n$. An isomorphism is a bijective linear map. For the linear system $Ax = b$, surjectivity tells us that a solution exists, and by injectivity the solution is unique. - Actually, your first calculation shows uniqueness, as starting from $Ax=b$ you infer that $x=A^{-1}b$. But by simply plugging in the value $A^{-1}b$ for $x$ you also get existence, as for this choice of $x$ you get $Ax=AA^{-1}b=b$. Note that the first step used the existence of a left inverse (i.e. you made use of the fact that $A^{-1}A$ is the identity), whereas the existence made use of the right inverse property (i.e. that $AA^{-1}$ is the identity). -
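A quick numerical illustration of the two aspects (my addition, not part of the thread; the matrix and vector are arbitrary choices):

```python
import numpy as np

# Existence and uniqueness of the solution of Ax = b for invertible A.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # det = 5, so A is invertible
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)    # the unique solution A^{-1} b

assert np.allclose(A @ x, b)                  # existence: x really solves Ax = b
assert np.allclose(x, np.linalg.inv(A) @ b)   # it equals A^{-1} b, hence unique
```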
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8815475702285767, "perplexity": 264.30972698622537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446300.49/warc/CC-MAIN-20151124205406-00025-ip-10-71-132-137.ec2.internal.warc.gz"}
http://www.mathworks.com/help/symbolic/mupad_ug/z-transforms.html?requestedDomain=www.mathworks.com&nocookie=true
## Z-Transforms

The Z-transform `F(z)` of a function `f(k)` is defined as follows:

$$F(z)=\sum_{k=0}^{\infty}\frac{f(k)}{z^{k}}$$

If `R` is a positive number such that the function `F(z)` is analytic on and outside the circle `|z| = R`, then the inverse Z-transform is defined as follows:

$$f(k)=\frac{1}{2\pi i}\oint_{|z|=R}F(z)\,z^{k-1}\,dz,\qquad k=0,1,2,\ldots$$

You can consider the Z-transform as a discrete equivalent of the Laplace transform.

To compute the Z-transform of an arithmetical expression, use the `ztrans` function. For example, compute the Z-transform of the following expression: `S := ztrans(sinh(n), n, z)`

If you know the Z-transform of an expression, you can find the original expression or a mathematically equivalent form by computing the inverse Z-transform. To compute the inverse Z-transform, use the `iztrans` function. For example, compute the inverse Z-transform of the expression `S`: `iztrans(S, z, n)`

Suppose you compute the Z-transform of an expression, and then compute the inverse Z-transform of the result. In this case, MuPAD® can return an expression that is mathematically equivalent to the original one, but presented in a different form. For example, compute the Z-transform of the following expression: `C := ztrans(exp(n), n, z)`

Now, compute the inverse Z-transform of the resulting expression `C`. The result differs from the original expression: `invC := iztrans(C, z, n)`

Simplifying the resulting expression `invC` gives the original expression: `simplify(invC)`

Besides arithmetical expressions, the `ztrans` and `iztrans` functions also accept matrices of arithmetical expressions. For example, compute the Z-transform of the following matrix:

```
A := matrix(2, 2, [1, n, n + 1, 2*n + 1]):
ZA := ztrans(A, n, z)
```

Computing the inverse Z-transform of `ZA` gives the original matrix `A`: `iztrans(ZA, z, n)`

The `ztrans` and `iztrans` functions let you evaluate the transforms of an expression or a matrix at a particular point. For example, evaluate the Z-transform of the following expression for the value `z = 2`: `ztrans(1/n!, n, 2)`

Evaluate the inverse Z-transform of the following expression for the value `n = 10`: `iztrans(z/(z - exp(x)), z, 10)`

If MuPAD cannot compute the Z-transform or the inverse Z-transform of an expression, it returns an unresolved transform: `ztrans(f(n), n, z)` `iztrans(F(z), z, n)`
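As a cross-check of the first example outside MuPAD (my addition): the Z-transform of sinh(n), derived by hand from the defining sum, is z·sinh(1)/(z² − 2·cosh(1)·z + 1) for |z| > e, and a truncated partial sum of the definition agrees with it numerically:

```python
import math

# Hand-derived closed form for ztrans(sinh(n), n, z), valid for |z| > e.
def F_closed(z):
    return z * math.sinh(1) / (z**2 - 2 * math.cosh(1) * z + 1)

def F_partial(z, terms=200):
    # Truncation of the defining sum F(z) = sum_{k>=0} f(k) / z**k.
    return sum(math.sinh(k) / z**k for k in range(terms))

z = 4.0  # any point with |z| greater than e works
assert abs(F_closed(z) - F_partial(z)) < 1e-9
print(F_closed(z))
```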
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851475954055786, "perplexity": 473.18699353362115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00407-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.unisannio.it/it/biblio?f%5Bauthor%5D=13091
UNIVERSITÀ DEGLI STUDI DEL SANNIO   Benevento

# University Publications

Found 33 results, sorted by year. Filter: Author is Silvestrini, P.

2018
Physica C: Superconductivity and its Applications, vol. 555, pp. 35-38, 2018.
IEEE Transactions on Applied Superconductivity, vol. 28, no. 4, 2018.
2016
IEEE Transactions on Applied Superconductivity, vol. 26, no. 3, 2016.
2012
Physics Procedia, vol. 36, pp. 371-376, 2012.
2010
Journal of Physics: Conference Series, vol. 234, no. PART 4, 2010.
2007
Open Systems and Information Dynamics, vol. 14, no. 2, pp. 209-216, 2007.
Physics Letters, Section A: General, Atomic and Solid State Physics, vol. 370, no. 5-6, pp. 499-503, 2007.
IEEE Transactions on Applied Superconductivity, vol. 17, no. 2, pp. 132-135, 2007.
2006
Quantum Computing in Solid State Systems, pp. 103-110, 2006.
Quantum Computing in Solid State Systems, pp. 1-337, 2006.
Physics Letters, Section A: General, Atomic and Solid State Physics, vol. 356, no. 6, pp. 435-438, 2006.
Journal of Physics: Conference Series, vol. 43, no. 1, pp. 1401-1404, 2006.
Journal of Physics: Conference Series, vol. 43, no. 1, pp. 1405-1408, 2006.
2005
Applied Physics Letters, vol. 87, no. 17, pp. 1-3, 2005.
Physics Letters, Section A: General, Atomic and Solid State Physics, vol. 336, no. 1, pp. 71-75, 2005.
2004
Institute of Physics Conference Series, vol. 181, pp. 101-107, 2004.
Superconductor Science and Technology, vol. 17, no. 5, pp. S385-S388, 2004.
Physical Review B - Condensed Matter and Materials Physics, vol. 70, no. 17, pp. 1-4, 2004.
2003
IEEE Transactions on Applied Superconductivity, vol. 13, no. 2 I, pp. 1001-1004, 2003.
International Journal of Modern Physics B, vol. 17, no. 4-6 II, pp. 762-767, 2003.
2002
Applied Physics Letters, vol. 80, no. 16, pp. 2952-2954, 2002.
Physica C: Superconductivity and its Applications, vol. 372-376, no. PART 1, pp. 185-188, 2002.
2001
IEEE Transactions on Applied Superconductivity, vol. 11, no. 1 I, pp. 994-997, 2001.
Applied Physics Letters, vol. 79, no. 8, pp. 1145-1147, 2001.
2000
International Journal of Modern Physics B, vol. 14, no. 25-27, pp. 3050-3055, 2000.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9719936847686768, "perplexity": 2014.0341883040294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315544.11/warc/CC-MAIN-20190820133527-20190820155527-00367.warc.gz"}
http://kg15.herokuapp.com/abstracts/253
# Even orientations of graphs.

### Domenico Labbate

Dipartimento di Matematica, Informatica ed Economia - Università degli Studi della Basilicata - Potenza (Italy)

#### John Sheehan

Department of Mathematical Sciences, King's College, Aberdeen (Scotland)

Minisymposium: GENERAL SESSION TALKS

Content: A graph $G$ is $1$-extendable if every edge belongs to at least one $1$-factor. An orientation of a graph $G$ is an assignment of a *direction* to each edge of $G$. Let $G$ be a graph with a $1$-factor $F$. Then an *even $F$-orientation* of $G$ is an orientation in which each $F$-alternating cycle has an even number of edges directed in the same fixed direction around the cycle. We examine the structure of $1$-extendable graphs $G$ which have no even $F$-orientation, where $F$ is a fixed $1$-factor of $G$, and we give a characterization for $k$-regular graphs with $k\ge 3$ and graphs with connectivity at least four. Moreover, we will point out a relationship between our results on even orientations and Pfaffian graphs.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8163431286811829, "perplexity": 1013.1870017308454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056325.1/warc/CC-MAIN-20210416100222-20210416130222-00342.warc.gz"}
https://www.jcdp.or.kr/journal/view.php?number=43
Journal of Coastal Disaster Prevention 2015;2(3):107-112. Published online July 30, 2015.

Risk Analysis of Breakwater Caisson Under Wave Attack, Part II: Load Surface Approximation

Dong-Hyawn Kim

Abstract

A new load-surface-based approach to reliability analysis of caisson-type breakwaters is proposed. Uncertainties of the horizontal and vertical wave loads acting on the breakwater are considered by using so-called load surfaces, which can be estimated as functions of wave height, water level, etc. Then, gradient-based reliability analysis such as the First-Order Reliability Method (FORM) can be applied to find the probability of failure under wave action. Therefore, reliability analysis of breakwaters with uncertainties in both wave height and water level becomes possible. In addition, uncertainty in wave breaking can be taken into account by using the wave height ratio, which relates the significant wave height to the maximum one. In numerical examples, the proposed approach was applied to reliability analysis of a caisson breakwater under wave attack which may undergo partial or full wave breaking.

Key Words: Load surface; Reliability; Caisson; Breakwater; Wave breaking; FORM
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9390890598297119, "perplexity": 3562.231840166621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662533972.17/warc/CC-MAIN-20220520160139-20220520190139-00346.warc.gz"}
https://chem.libretexts.org/Courses/Santa_Barbara_City_College/SBCC_Chem_101%3A_Introductory_Chemistry/00%3A_Front_Matter/03%3A_Table_of_Contents
This is a LibreText ebook used to support CHEM101. It maps open source information onto the outline of Tro's Introductory Chemistry text but has been slightly modified to reflect the order of topics taught in the course.

• ## 1: The Chemical World

Chemistry is the study of matter and the ways in which different forms of matter combine with each other. You study chemistry because it helps you to understand the world around you. Everything you touch or taste or smell is a chemical, and the interactions of these chemicals with each other define our universe.

• ## 2: Measurement and Problem Solving

Chemistry, like all sciences, is quantitative. It deals with quantities, things that have amounts and units. Dealing with quantities is very important in chemistry, as is relating quantities to each other. In this chapter, we will discuss how we deal with numbers and units, including how they are combined and manipulated.

• ## 5: Molecules and Compounds

There are many substances that exist as two or more atoms connected together so strongly that they behave as a single particle. These multiatom combinations are called molecules. A molecule is the smallest part of a substance that has the physical and chemical properties of that substance. In some respects, a molecule is similar to an atom. A molecule, however, is composed of more than one atom.

• ## 7: Chemical Reactions

How do we compare amounts of substances to each other in chemical terms when it is so difficult to count to a hundred billion billion? Actually, there are ways to do this, which we will explore in this chapter. In doing so, we will increase our understanding of stoichiometry, which is the study of the numerical relationships between the reactants and the products in a balanced chemical reaction.

• ## 8: Gases

Gases have no definite shape or volume; they tend to fill whatever container they are in. They can compress and expand, sometimes to a great extent. Gases have extremely low densities, one-thousandth or less the density of a liquid or solid. Combinations of gases tend to mix together spontaneously; that is, they form solutions. Air, for example, is a solution of mostly nitrogen and oxygen. Any understanding of the properties of gases must be able to explain these characteristics.

• ## 10: Chemical Bonding

How do atoms make compounds? Typically they join together in such a way that they lose their identities as elements and adopt a new identity as a compound. These joins are called chemical bonds. But how do atoms join together? Ultimately, it all comes down to electrons. Before we discuss how electrons interact, we need to introduce a tool to simply illustrate electrons in an atom.

• ## 11: Liquids, Solids, and Intermolecular Forces

In the chapter on gases, we discussed the properties of gases. Here, we consider some properties of liquids and solids. As a review, the Table below lists some general properties of the three phases of matter.

• ## 12: Solubility & Reaction Types

A chemical reaction is a process that leads to the transformation of one set of chemical substances to another. Chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei (no change to the elements present), and can often be described by a chemical equation. 
• ## 13: Solutions

Solutions play a very important role in many biological, laboratory, and industrial applications of chemistry. Of particular importance are solutions involving substances dissolved in water, or aqueous solutions. Solutions represent equilibrium systems, and the lessons learned in our last unit will be of particular importance again. Quantitative measurements of solutions are another key component of this unit.

• ## 14: Acids and Bases

Acids and bases are common substances found in many everyday items, from fruit juices and soft drinks to soap. In this unit we'll examine the properties of acids and bases, and learn about the chemical nature of these important compounds. You'll learn what pH is and how to calculate the pH of a solution.

• ## 15: Radioactivity and Nuclear Chemistry

Radioactivity has a colorful history and clearly presents a variety of social and scientific dilemmas. In this chapter we will introduce the basic concepts of radioactivity, nuclear equations and the processes involved in nuclear fission and nuclear fusion.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8141797184944153, "perplexity": 815.530903968242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153816.3/warc/CC-MAIN-20210729043158-20210729073158-00119.warc.gz"}
http://www.computer.org/csdl/trans/tp/2009/08/ttp2009081502-abs.html
Issue No.08 - August (2009 vol.31) pp: 1502-1509

Rozenn Dahyot, Trinity College Dublin, Dublin

ABSTRACT

The Standard Hough Transform is a popular method in image processing and is traditionally estimated using histograms. Densities modeled with histograms in high-dimensional space and/or with few observations can be very sparse and highly demanding in memory. In this paper, we propose first to extend the formulation to continuous kernel estimates. Second, when dependencies between variables are well taken into account, the estimated density is also robust to noise and insensitive to the choice of the origin of the spatial coordinates. Finally, our new statistical framework is unsupervised (all needed parameters are automatically estimated) and flexible (priors can easily be attached to the observations). We show experimentally that our new modeling encodes the alignment content of images better.

INDEX TERMS

Hough transform, Radon transform, kernel probability density function, uncertainty, line detection.

CITATION

Rozenn Dahyot, "Statistical Hough Transform", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.31, no. 8, pp. 1502-1509, August 2009, doi:10.1109/TPAMI.2008.288
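The paper contrasts histogram accumulators with kernel estimates. For orientation, here is a sketch of the baseline histogram-style (Standard Hough Transform) accumulator for lines; this is my own illustration of the classical method, not the paper's statistical variant. Each edge point (x, y) votes for all parameters (theta, rho) satisfying rho = x·cos(theta) + y·sin(theta):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=200.0):
    """Classical Hough accumulator: each point votes along its sinusoid
    rho = x*cos(theta) + y*sin(theta) in (theta, rho) space."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (bins >= 0) & (bins < n_rho)
        acc[np.arange(n_theta)[ok], bins[ok]] += 1
    return acc, thetas

# Collinear points on the line y = x should produce a strong peak.
pts = [(t, t) for t in range(50)]
acc, thetas = hough_lines(pts)
i, j = np.unravel_index(acc.argmax(), acc.shape)
print("peak theta (deg):", np.degrees(thetas[i]))  # 135 degrees for y = x, rho near 0
```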
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8258708715438843, "perplexity": 4946.413725223631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737893676.56/warc/CC-MAIN-20151001221813-00091-ip-10-137-6-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/calculating-omega-as-a-function-of-time-for-a-flywheel.556868/
# Calculating omega as a function of time for a flywheel

1. Dec 4, 2011

### dannyR

Hiya all,

I've done an experiment which involved hanging a mass from a light string wrapped around the axle of a flywheel. The mass was released and the flywheel began to rotate. During the calculations I've found it would be great to have ω as a function of time, and I've been stuck on how to get this.

Could I do a force diagram using F = ma? But then I'm unsure of the mass "m". Would I use the mass which is falling and add the moment of inertia of the flywheel, or is this very wrong? :(

Or could I use energy stored, such as mgh = (1/2)Iω² + (1/2)mr²ω² + K, where mgh is the loss in potential energy of the falling mass, (1/2)Iω² the kinetic energy of the flywheel, (1/2)mr²ω² the kinetic energy of the falling mass, and K the frictional loss (I think it would be proportional to ωr).

Could I replace h, the height the mass has fallen, by using the F = ma bit I talked about above, then substitute h = (1/2)at² and solve for t or ω?

I've thought about this a lot and have always been stopped by not knowing how to calculate something or how to use F = ma with moment-of-inertia stuff. Could someone please point me in the right direction?

Thanks a lot,
Danny
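One standard way to answer the question (my sketch, neglecting friction): treat the falling mass and the flywheel separately and couple them through the string tension $T$; here $I$ is the flywheel's moment of inertia, $m$ the falling mass, $r$ the axle radius, and $a = r\alpha$ the no-slip condition.

```latex
% Newton's second law for the mass, rotational analogue for the flywheel:
\begin{align*}
  mg - T &= ma, \\
  Tr &= I\alpha, \qquad a = r\alpha, \\
  \Rightarrow\quad \alpha &= \frac{mgr}{I + mr^{2}},
  \qquad \omega(t) = \alpha t = \frac{mgr}{I + mr^{2}}\, t .
\end{align*}
```

With friction neglected ($K = 0$) the energy route gives the same answer: $mgh = \tfrac{1}{2}(I + mr^{2})\omega^{2}$ together with $h = \tfrac{1}{2} r\alpha t^{2}$ reproduces the linear $\omega(t) = \alpha t$.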
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132792353630066, "perplexity": 633.3197962672048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494125.62/warc/CC-MAIN-20190220003821-20190220025821-00595.warc.gz"}
https://skerritt.blog/this-simple-trick-will-save-you-hours-of-expanding-binomials/
Ever wanted to know how to expand (a+b)¹⁸⁷? Well now you can!

What is a Binomial Coefficient?

First, let's start with a binomial. A binomial is a polynomial with two terms, typically in the form (a+b).

A binomial coefficient is one of the coefficients that appears when you raise a binomial to the power of n and expand, like so: (a+b)^n

We all remember from school that (a+b)² = a² + 2ab + b², but what is (a+b)⁸? This is where the binomial formula comes in handy.

Binomial Theorem

The Binomial Theorem is the expected method to use for finding binomial coefficients because it is how a computer would compute it. The theorem is as follows:

$$(a+b)^n = \sum_{k=0}^{n} \binom{n}{k} a^{n-k} b^{k}$$

Luckily for us, the coefficient here is the same as another formula we've seen: the combinations formula! Let's try an example.

Example

What is the coefficient of x⁶ in (1+x)⁸? Simply plug this into the formula like so:

$$\binom{8}{6} = \frac{8!}{6!\,2!} = 28$$

Something that may confuse people is: how do we work out what n and k are? Well, we have n objects overall and we want to choose k of them. For binomial/combinatorics sums it helps to think "(combinations of) X taken in sets of Y" where X > Y for obvious reasons, in this case "(combinations of) 8 taken in sets of 6".

Pascal's Triangle

Pascal's triangle is a triangle created by starting off with a 1, starting every line and ending every line with a 1, and adding the two numbers above to make each new number.

No one could ever explain a maths topic as well as Numberphile, so here's a Numberphile video on it:

Example

Let's solve the example from earlier using Pascal's triangle. Pascal's triangle always starts counting from 0, so to solve 8C6 (8 choose 6) we simply count 8 rows down, then 6 across. Row 0 is the single 1 at the top, and within each row we also start counting entries from 0. So the eighth row is the one that starts 1, 8. Notice how the second entry of each row tells you which row you're on. Now we count 6 across, which is… 28.

We just found the binomial coefficient using a super neat and easy-to-draw triangle. Of course, the hardest part is adding together all the numbers, and if the coefficient is large it may be easier to just use the Binomial Theorem, but this method still exists and is useful if you've forgotten the binomial theorem.
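For anyone who wants to check the arithmetic, here is a short Python sketch (my addition to the post): `math.comb` computes the combinations formula directly, and `pascal_row` builds a row of Pascal's triangle, counting rows and entries from 0 as described above.

```python
from math import comb

# Binomial coefficient via n! / (k!(n-k)!): 8 choose 6.
print(comb(8, 6))  # 28

def pascal_row(n):
    """Row n of Pascal's triangle (rows counted from 0)."""
    row = [1]
    for k in range(n):
        # Each entry follows from the previous one: C(n,k+1) = C(n,k)*(n-k)/(k+1).
        row.append(row[-1] * (n - k) // (k + 1))
    return row

print(pascal_row(8))      # [1, 8, 28, 56, 70, 56, 28, 8, 1]
print(pascal_row(8)[6])   # 28: count 8 rows down, 6 across
```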
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8645629286766052, "perplexity": 593.3846311284693}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255165.2/warc/CC-MAIN-20190519201521-20190519223521-00340.warc.gz"}
http://mathhelpforum.com/calculus/280935-limits.html
1. ## Limits

I'm learning about limits and I have realised that if you graph the functions used in the limit examples, the function often seems not to exist where x = a, i.e. where x = the number you're trying to calculate the limit at. However, in the same tutorial it mentions that "many of the functions don't exist at x = a". This seems wrong to me; it seems to me that functions never exist at x = a. Am I correct?

2. ## Re: Limits

No. $$\lim_{x \to 4} (x-2) = 2$$ Indeed, the definition of the continuity of a function at a given point is that the function is equal to the limit of the function.

3. ## Re: Limits

It depends on the function whose limit you are analyzing and whether that function is actually defined at x = a. For example: Suppose you are given $\displaystyle f(x)=x^2$ $\displaystyle \lim_{x\to1}\left(f(x)\right)=1$ This comes directly from the fact that $\displaystyle f(1)=1$. Now consider: $\displaystyle g(x)=\frac{x^2-2x+1}{x-1}$ Here, we would find: $\displaystyle \lim_{x\to1}\left(g(x)\right)=0$ Even though $\displaystyle g(1)$ is undefined.

4. ## Re: Limits

The examples you are given have that property because they are more interesting. But continuous functions, which by definition have the property that $\displaystyle \lim_{x\to a} f(x)= f(a)$, are the most useful functions.

5. ## Re: Limits

I think where my confusion arose from was the fact that I was introduced to limits through derivatives. So I thought all limits were of the derivative form. Looks like the derivative is just a special case of limits.

Yes.
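The g(x) example from reply 3 can be checked in SymPy (my addition, not part of the thread): the limit exists even though direct substitution fails.

```python
from sympy import symbols, limit

x = symbols('x')
g = (x**2 - 2*x + 1) / (x - 1)

print(limit(g, x, 1))   # 0: the limit exists
print(g.subs(x, 1))     # nan: substituting x = 1 gives 0/0, g is undefined there
```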
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9888389706611633, "perplexity": 409.2301350613483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583745010.63/warc/CC-MAIN-20190121005305-20190121031305-00501.warc.gz"}
https://www.ideals.illinois.edu/handle/2142/17362/browse?type=title
# Browse Dissertations and Theses - Statistics by Title

• (1989) This work deals with a decision-theoretic evaluation of p-value rules. A test statistic is judged on the behavior of its p-value with the loss function being an increasing function G of the p-value.
• (1959)
• (1996) The identifiability and estimability of the parameters for the Unified Cognitive/IRT Model are studied. A calibration procedure for the Unified Model is then proposed. This procedure uses the marginal maximum likelihood ...
• (2010-08-20) The statistical inference based on the ordinary least squares regression is sub-optimal when the distributions are skewed or when the quantity of interest is the upper or lower tail of the distributions. For example, the ...
• (2000) Using results from He & Shao (2000), a proof of the consistency and asymptotic normality of item parameter estimates obtained from the Marginal Maximum Likelihood Estimation (Bock & Lieberman, 1970) procedure as both the ...
• (1989) In many areas of application of statistics one has a relevant parametric family of densities and wishes to estimate the density from a random sample. In such cases one can use the family to generate an estimator. We fix a ...
• (1967)
• (1989) Many authors, for example, Fisher (1950), Pearson (1938), Birnbaum (1954), Good (1955), Littell and Folks (1971, 1973), Berk and Cohen (1979), and Koziol, Perlman, and Rasmussen (1988), have studied the problem of combining ...
• (2012-02-01) Bayesian inference provides a flexible way of combining data with prior information. However, quantile regression is not equipped with a parametric likelihood, and therefore, Bayesian inference for quantile regression ...
• (2002) This thesis presents a progression from theory development to real-data application. Chapter 1 gives a literature review of other psychometric models for formative assessment, or cognitive diagnosis models, as an introduction ...
• (1993) We consider the problem of regressing a dichotomous response variable on a predictor variable. Our interest is in modelling the probability of occurrence of the response as a function of the predictor variable, and in ...
• (2011-05-25) The latent class model (LCM) is a statistical method that introduces a set of latent categorical variables. The main advantage of LCM is that conditional on latent variables, the manifest variables are mutually independent ...
• (2011-05-25) Quantile regression, as a supplement to the mean regression, is often used when a comprehensive relationship between the response variable and the explanatory variables is desired. The traditional frequentists' approach ...
• (2007) Clustering and classification have been important tools to address a broad range of problems in fields such as image analysis, genomics, and many other areas. Basically, these clustering problems can be simplified as two ...
• (2000) To effectively build a regression model with a large number of covariates is no easy task. We consider using dimension reduction before building a parametric or spline model. The dimension reduction procedure is based on ...
• (2000) Motivated by consulting in infrastructure studies, we consider the estimation and inference for regression models where the response variable is bounded or censored. In these conditions, least squares methods are not ...
• (2006) The classical approaches to clustering are hierarchical and k-means. They are popular in practice. However, they cannot address the issue of determining the number of clusters within the data. In this dissertation, we ...
• (1991) Consider the model $y_{lj} = \mu_{l}(t_{j}) + \varepsilon_{lj}$, $l = 1,\ldots,m$ and $j = 1,\ldots,n$, where $\varepsilon_{lj}$ are independent mean zero finite variance random variables. Under the above setting we ...
• (1990) Two-stage Bayes procedures, also known as Bayes double sample procedures, for estimating the mean of exponential family distributions are given by Cohen and Sackrowitz (1984). In their study, they develop double sample ...
• (2004) The flexible forms of nonparametric IRT models make test equating more challenging. Though linear equating under parametric IRT models is obvious and appropriate, it might not be appropriate for nonparametric models. Two ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.81325763463974, "perplexity": 1675.815670708438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824225.41/warc/CC-MAIN-20171020135519-20171020155519-00086.warc.gz"}
http://mathhelpforum.com/calculus/182795-series-expansion.html
# Math Help - Series expansion?

1. ## Series expansion?

My problem is.... Use the series expansions for sin x and cos x to find the first two terms of a series expansion for tan x. But which series do I use? Power, Maclaurin? Also, how do I find tan x (I know sin x/cos x = tan x), but how do I get there using series? Many thanks

2. Use polynomial division.

3. I'm having some trouble using polynomial division. What is the method when there is a fraction in the numerator/denominator? When I divide straight down I get one thing, but I know the answer is $x+\frac{x^3}{3}+\cdots$. If someone could help me with the method please.

4. Originally Posted by decoy808
I'm having some trouble using polynomial division. What is the method when there is a fraction in the numerator/denominator? ...
You should learn the polynomial long division method. Please refer to Polynomial Long Division. I hope you will find it simple and illustrative.

5. Originally Posted by decoy808
My problem is.... Use the series expansions for sin x and cos x to find the first two terms of a series expansion for tan x. ...
$\sin x=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots$ $\cos x=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots$ $\tan x=\frac{\sin x}{\cos x}\Rightarrow\ \sin x=\cos x\tan x$ $\Rightarrow\ x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots=\left(1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots\right)\tan x$ Therefore the first term of $\tan x$ is $x$, and the 2nd term involves $x^3$, as an $x^2$ would give even powers of $x$. $x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots=\left(1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots\right)\left(x+\frac{x^3}{k}+\cdots\right)$ Multiplying out and comparing terms to find $k$, $x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots=x-\frac{x^3}{2!}+\frac{x^5}{4!}+\cdots+\frac{x^3}{k}-\frac{x^5}{(k)2!}+\frac{x^7}{(k)4!}-\cdots$ $\Rightarrow\frac{x^3}{k}-\frac{x^3}{2!}=-\frac{x^3}{3!}$ $\Rightarrow\frac{2x^3-kx^3}{(k)2!}=-\frac{x^3}{3!}$ which gives $k=3$ and therefore the second term $\frac{x^3}{3}$, so $\tan x = x + \frac{x^3}{3}+\cdots$
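As a check on the result (my addition, not part of the thread), SymPy's `series` reproduces the expansion, both directly and via sin/cos:

```python
from sympy import symbols, sin, cos, tan, series

x = symbols('x')

# Maclaurin series of tan x, confirming the first two terms x + x**3/3.
print(series(tan(x), x, 0, 6))          # x + x**3/3 + 2*x**5/15 + O(x**6)
print(series(sin(x)/cos(x), x, 0, 6))   # same expansion
```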
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8262230753898621, "perplexity": 865.7133800847511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927824.26/warc/CC-MAIN-20150521113207-00305-ip-10-180-206-219.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/220677-field-vector-space-understood-reals.html
# Thread: Is the field of this vector space understood to be the reals? 1. ## Is the field of this vector space understood to be the reals? "Let V denote the set of all differentiable real-valued functions defined on the real line." Does this automatically mean that this vector space is over the field of reals? Why or why not? I ask because I need to prove this is a vector space. But, if I pick some element a from F (the field), then the scalar multiplication of a and an element of V is only real-valued if a is a real. This would make this scalar multiplication not an element of V, making it not a vector space, if F were a field that contained non-real elements. So, I must assume that this V is over the field of reals in order to prove it is a vector space, but why am I warranted to make that claim? Sorry if this is a dumb question, I am just starting LA independently. Thanks 2. ## Re: Is the field of this vector space understood to be the reals? Yes, your argument is correct. If a is a "scalar" and v is a "vector" then av must be a vector. If you multiply a "differentiable real-valued function defined on the real line" by a complex number, the result would no longer be a "differentiable real-valued function defined on the real line". Now, it would be possible to have the space of "differentiable real-valued functions defined on the real line" over the field of rational numbers, since the product of a rational and a real number is a real number, but that would be very unusual. 3. ## Re: Is the field of this vector space understood to be the reals? Originally Posted by HallsofIvy Yes, your argument is correct. If a is a "scalar" and v is a "vector" then av must be a vector. If you multiply a "differentiable real-valued function defined on the real line" by a complex number, the result would no longer be a "differentiable real-valued function defined on the real line". Now, it would be possible to have the space of "differentiable real-valued functions defined on the real line" over the field of rational numbers, since the product of a rational and a real number is a real number, but that would be very unusual. Hey HoI, So from your response, when being asked to prove that V is or is not a vector space, if the field is not mentioned, I should assume the field is one that would not make it immediately impossible for V to be a vector space. So there is nothing about "the set of all differentiable real-valued functions defined on the real line" that inherently makes the field R. And this is a matter of taking it for granted that the field is an appropriate one that wouldn't trivialize the exercise. Is that correct? Thank you. 4. ## Re: Is the field of this vector space understood to be the reals? Yes, that seems like a reasonable interpretation.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.963447093963623, "perplexity": 211.11455539315747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319636.73/warc/CC-MAIN-20170622161445-20170622181445-00226.warc.gz"}
http://math.stackexchange.com/questions/34365/show-that-x2-3y2-n-either-has-no-solutions-or-infinitely-many-solutions/34371
# Show that $x^2 - 3y^2 = n$ either has no solutions or infinitely many solutions I have a question that I am having trouble with, in number theory, about Diophantine and Pell's equations. Any help is appreciated! We suppose $n$ is a fixed non-zero integer, and suppose that $x^2_0 - 3 y^2_0 = n$, where $x_0$ and $y_0$ are bigger than or equal to zero. Let $x_1 = 2 x_0 + 3 y_0$ and $y_1 = x_0 + 2 y_0$. We need to show that we have $x^2_1 - 3 y^2_1 = n$, with $x_1>x_0$, and $y_1>y_0$. Also, we then need to show that given $n$, the equation $x^2 - 3 y^2 = n$ has either no solutions or infinitely many solutions. Thank you very much! - Just out of curiosity: How long did you spend trying to do this problem on your own before posting? –  Arturo Magidin Apr 21 '11 at 20:02 You should substitute $x=2x_0+3y_0$, $y=x_0+2y_0$ in the expression $x^2-3y^2$, simplify, see what happens. –  André Nicolas Apr 21 '11 at 20:02 @Arturo: I did try but I think I made a mistake somewhere because I couldn't simplify the equation. Thanks! –  kira Apr 21 '11 at 20:11 @user6312: I did the same thing but got a problem. I'll try later again. Thanks! –  kira Apr 21 '11 at 20:11 Next time, please say what you tried and why things are not working out. Here, you could easily have posted your attempt, and people could have pointed out if (or where) there was a mistake. You'd learn a lot more that way. –  Arturo Magidin Apr 21 '11 at 20:12 The fact that if $x_1=2x_0+3y_0$ then $x_1\gt x_0$ is immediate: you cannot have both $x_0$ and $y_0$ zero; likewise with $y_1$. That $x_1^2-3y_1^2$ is also equal to $n$ if you assume that $x_0^2 - 3y_0^2=n$ should follow by simply plugging in the definitions of $x_1$ and $y_1$ (in terms of $x_0$ and $y_0$), and chugging. Finally, what you have just done is show that if you have one solution, you can come up with another solution. Do you see how this implies the final thing you "need to show"? - How to show it has no solutions or infinitely many solutions? Thanks! –  kira Apr 24 '11 at 23:15 @kira: The entire process tells you how to go from one solution to another. Keep going. If you have at least one solution, how many different solutions will you have? –  Arturo Magidin Apr 25 '11 at 2:00 If we have a solution, then we can find another one with $x_1>x_0$, and $y_1>y_0$ in the same quadrant. Thus, we have infinitely many. Is anything to be added, since we can find a new solution every time with bigger $x_i$ and $y_i$? –  user9636 Apr 25 '11 at 4:11 Thank you! –  kira Apr 25 '11 at 4:20 HINT $\:$ Put $\rm\: z = x+\sqrt{3}\ y\:,\:$ norm $\rm\:N(z)\: = z\:z' = x^2 - 3\ y^2\:.\:$ Then $\rm u = 2 + \sqrt{3}\ \Rightarrow\ N(u) = u\:u' = 1\:$ so $\rm\ N(u\:z)\ =\ (u\:z)\:(u\:z)' =\ u\:u'\:z\:z'\ =\ z\:z'\:,\:$ where $\rm\ u\:z\ =\ 2\:x+3\:y + (x+2\:y)\ \sqrt{3}\:.\:$ Therefore the composition law (symmetry) $\rm\ z\to u\:z\$ on the solution space $\rm\:\{z\ :\ N(z) = n\}$ arises simply by multiplying by an element $\rm\:u\:$ of norm $1\:,\:$ using the multiplicativity of the norm: $$\rm\ N(u) = 1\ \ \Rightarrow\ \ N(u\:z)\ =\ N(u)\:N(z)\ =\ N(z) = n$$ -
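A small numeric sketch of the recurrence from the question (the starting solution (2, 1) for n = 1 is an illustrative example, not from the post):

```python
def solutions(x, y, count):
    """Generate solutions of x^2 - 3y^2 = n from one known solution."""
    for _ in range(count):
        yield x, y
        x, y = 2 * x + 3 * y, x + 2 * y  # the map from the question

n = 1
for x, y in solutions(2, 1, 5):
    assert x * x - 3 * y * y == n
    print(x, y)  # (2, 1), (7, 4), (26, 15), ...
```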
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9690213203430176, "perplexity": 220.22139670215276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644068749.35/warc/CC-MAIN-20150827025428-00035-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/final-concentrations-unknown-volume.624107/
# Final concentrations unknown volume 1. ### keen55 1 Hi, I have a known volume of water with a known concentration of calcium. I want to bring that volume up to a new (slightly) higher volume with a new (slightly) higher concentration. To do this I am adding a solution with a known concentration of Ca, but I cannot remember how to calculate the volume (of the second solution) I need to get to the final concentration. I cannot adjust the concentration of the second substance; I can only adjust the volume. Thanks 2. ### mycotheology 91 Use the equation: C1V1 = C2V2
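For the mixing problem as stated, a mass balance on calcium gives C1·V1 + C2·V2 = Cf·(V1 + V2), so V2 = V1·(Cf - C1)/(C2 - Cf), valid when C1 < Cf < C2. A minimal sketch (the function name and example numbers are illustrative, not from the thread):

```python
def added_volume(v1, c1, c2, cf):
    """Volume of stock at concentration c2 to add to v1 of solution at c1
    so that the final concentration is cf (requires c1 < cf < c2)."""
    return v1 * (cf - c1) / (c2 - cf)

# Example: 1.0 L at 40 mg/L Ca, stock at 200 mg/L, target 50 mg/L
print(added_volume(1.0, 40.0, 200.0, 50.0))  # ~0.067 L of stock needed
```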
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9183661937713623, "perplexity": 899.4204477279056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987174.71/warc/CC-MAIN-20150728002307-00000-ip-10-236-191-2.ec2.internal.warc.gz"}
http://umj.imath.kiev.ua/article/?lang=en&article=5329
On the problem of approximation of functions by algebraic polynomials with regard for the location of a point on a segment Motornyi V. P. Abstract We obtain a correction of an estimate of the approximation of functions from the class $W^rH^\omega$ (here, $\omega(t)$ is a convex modulus of continuity such that $t\omega'(t)$ does not decrease) by algebraic polynomials with regard for the location of a point on an interval. English version (Springer): Ukrainian Mathematical Journal 60 (2008), no. 8, pp. 1270-1284. Citation Example: Motornyi V. P. On the problem of approximation of functions by algebraic polynomials with regard for the location of a point on a segment // Ukr. Mat. Zh. - 2008. - 60, № 8. - pp. 1087–1098.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9260474443435669, "perplexity": 694.4852845676968}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859766.6/warc/CC-MAIN-20180618105733-20180618125733-00082.warc.gz"}
https://www.physicsforums.com/threads/measuring-gravitational-redshift-due-to-galaxies-without-gr.813046/
# Measuring Gravitational Redshift due to Galaxies without GR 1. May 9, 2015 ### quantumfoam Hi guys. How do astrophysicists measure the redshift of electromagnetic waves from galaxies due to gravity without the use of General Relativity? If I can be more specific, how do astrophysicists know that the gravitational redshift of light emitted from some part of a galaxy or galaxy cluster is small relative to kinematic redshifts (if these light-emitting components of a galaxy or galaxy cluster are moving away from us, of course) without using General Relativity to prove that such a redshift is small? For example, when creating the rotation curves for galaxies, it is often claimed that the redshifts measured are significantly due to kinematic effects rather than due to gravitational redshifts. How do astrophysicists know this without using General Relativity to show that this is true? 2. May 10, 2015 ### Orodruin Staff Emeritus Well, to start with, it does not matter for the rotation curves, as you are looking at differences of redshift rather than absolute values. You can also estimate the amount of redshift by estimating the mass. 3. May 10, 2015 ### quantumfoam I'm sorry. I don't think I understand how it doesn't matter for rotation curves. Could you please explain it a little more? 4. May 10, 2015 ### Bandersnatch When you measure rotation, you look at red- and blue-shifted lines in the galactic (or stellar) spectrum spread symmetrically around the expected line position. It'll produce a symmetrical spread of a certain width, corresponding to the difference in velocities between the limb rotating towards you (blue-shifted) and the one rotating away (red-shifted). It doesn't matter where exactly the whole thing is in the spectrum (i.e., how shifted by gravity), since it's the width that gives you the rotation data, and it doesn't change. 5. May 10, 2015 ### quantumfoam Thank you very much!
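An order-of-magnitude sketch of why the gravitational term can be neglected (all numbers below are illustrative assumptions, not from the thread): in the weak-field limit the gravitational redshift from a mass M at radius R is z ≈ GM/(Rc²), which for a typical galaxy is far below kinematic shifts of order v/c:

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

M = 1e11 * M_sun     # assumed galaxy mass
R = 10 * 3.086e19    # assumed radius: 10 kpc in metres

z_grav = G * M / (R * c**2)
z_kin = 200e3 / c    # an assumed 200 km/s rotation speed
print(z_grav, z_kin)  # ~5e-7 vs ~7e-4: the kinematic shift dominates
```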
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9290589690208435, "perplexity": 869.4372765382378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509170.2/warc/CC-MAIN-20181015100606-20181015122106-00193.warc.gz"}
http://mathhelpforum.com/differential-geometry/181768-vector-space-null-space.html
# Thread: Vector space and null space 1. ## Vector space and null space Please can you guys help me to solve the following question: Q. Let Z be a proper subspace of an n-dimensional vector space X, and let $x_0 \in X \setminus Z$. Show that there is a linear functional f on X such that $f(x_0)=1$ and $f(x)=0$ for all $x \in Z$. 2. Originally Posted by kinkong Please can you guys help me to solve the following question: Q. Let Z be a proper subspace of an n-dimensional vector space X, and let $x_0 \in X \setminus Z$. Show that there is a linear functional f on X such that $f(x_0)=1$ and $f(x)=0$ for all $x \in Z$. You may be overthinking this. To specify any linear transformation between two vector spaces one needs only specify the action of the map on a basis. So, let $\{x_1,\cdots,x_m\}$ be a basis for $Z$; now, since $x_0$ is independent of this set, you know that $\{x_0,x_1,\cdots,x_m\}$ can be extended to some basis $\{x_0,x_1,\cdots,x_m,x_{m+1},\cdots,x_{n-1}\}$ for $X$. Define your linear functional however you want, perhaps $\varphi:X\to F$ given by $\varphi(x_k)=\delta_{k,0}$ (the Kronecker delta function), and extend by linearity.
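A finite-dimensional numeric sketch of this construction (the specific vectors below are made-up examples): impose f(x_0) = 1 and f(z) = 0 on a basis of Z, then solve the resulting linear system for the coefficient vector of f.

```python
import numpy as np

z1 = np.array([1.0, 1.0, 0.0])   # a basis vector of Z in R^3 (example)
x0 = np.array([0.0, 1.0, 1.0])   # a vector outside Z (example)

A = np.vstack([x0, z1])          # rows encode f(x0) = 1, f(z1) = 0
b = np.array([1.0, 0.0])
f, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f @ x0, f @ z1)            # 1.0 and 0.0 (up to rounding)
```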
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9990462064743042, "perplexity": 228.57489343972463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806720.32/warc/CC-MAIN-20171123031247-20171123051247-00006.warc.gz"}
http://alfan-farizki.blogspot.jp/2015/07/pymc-tutorial-bayesian-parameter.html
## Sunday, 26 July 2015 ### PyMC Tutorial #1: Bayesian Parameter Estimation for Bernoulli Distribution Suppose we have a Coin which consists of two sides, namely Head (H) and Tail (T). All of you might know that we can model a toss of a Coin using a Bernoulli distribution, which takes the value of $$1$$ (if H appears) with probability $$\theta$$ and $$0$$ (if T appears) with probability $$1 - \theta$$. In this case, $$\theta$$ is also called the parameter of a Bernoulli distribution, since knowing the value of $$\theta$$ is sufficient for determining $$P(H)$$ and $$P(T)$$. For a fair Coin, $$\theta$$ is set to $$0.5$$, which means that we have equal degrees of belief for both sides. This time, we aim at estimating the parameter $$\theta$$ of a particular Coin. To do that, first, we need to collect the data sample, which serves as our evidence, from an experiment. Second, we use that data to estimate the parameter $$\theta$$. Suppose, to collect the data, we toss the Coin 10 times and record the outcomes. We get a sequence of $$\{H, H, T, H, ..., T\}$$ which consists of 10 elements, in which each element represents the outcome of a single coin toss. By assuming that the preceding data sample is independent and identically distributed (often referred to as i.i.d.), we then perform statistical computation to determine the estimate of $$\theta$$. There are two broad approaches to estimating the parameter of a known probability distribution. The first one is the so-called Maximum Likelihood Estimation (MLE) and the second one is Bayesian parameter estimation. We will examine both methods briefly in this post. In the end, we will focus on Bayesian parameter estimation and show the usage of PyMC (a Python library for MCMC) to estimate the parameter of a Bernoulli distribution. Maximum Likelihood Estimation (MLE) Please do not be afraid when you hear the name of this method! Even though the name of this method sounds “long-and-complicated”, the opposite is actually true. MLE often involves basic counting of events in our data. As an example, MLE estimates the parameter θ of the Coin using the following, “surprisingly simple”, statistic $\hat{\theta} = \frac{\# Heads}{\# Heads + \# Tails}$ Because of that, people usually refer to MLE as a “Frequentist approach”. In general, MLE aims at seeking a set of parameters which maximizes the likelihood of seeing our data. $\hat{\theta} = \substack{argmax \\ \theta} P(x_1, x_2, ..., x_n|\theta)$ Now, let us try to implement MLE for estimating the parameter of a Bernoulli distribution (using the Python programming language). We simulate the experiment of tossing a Coin N times using a list of integer values, in which 1 and 0 represent Head and Tail, respectively. Each value is generated randomly from a Bernoulli distribution. $P(H) = P(1) = \theta$ $P(T) = P(0) = 1 - \theta$ We use the Bernoulli distribution provided by the SciPy library, so we need to import this library as the first step. -code1- from scipy.stats import bernoulli Next, we generate sample data using the following code. -code2- sampleSize = 20 theta = 0.2 def generateSample(t, s): return bernoulli.rvs(t, size=s) data = generateSample(theta, sampleSize) The preceding code will assign “data” the following value -code3- array([1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0]) We can see that setting theta to 0.2 makes the number of 0’s much larger than the number of 1’s, which means that Tail has a higher probability of occurring than Head.
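For completeness, the “surprisingly simple” statistic quoted above follows from maximizing the Bernoulli likelihood; a short derivation (writing h for the number of Heads and t for the number of Tails, notation introduced here for illustration): the likelihood is $L(\theta)=\theta^{h}(1-\theta)^{t}$, so $\log L(\theta)=h\log\theta+t\log(1-\theta)$, and setting $\frac{d}{d\theta}\log L(\theta)=\frac{h}{\theta}-\frac{t}{1-\theta}=0$ gives $\hat{\theta}=\frac{h}{h+t}$.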
Now, we pretend that we do not know the parameter $$\theta$$ and we only know the data. Given that data, we are going to estimate the value of $$\theta$$, which is unknown to us. We use MLE, which means that we need to implement the aforementioned statistic -code4- def thetaMLE(data): count = 0 for i in data: count+=i return count/float(len(data)) Now, let's see several estimates when we use different sample sizes. -code5- def showSeveralEstimates(sampleSizes): for size in sampleSizes: estimate = thetaMLE(generateSample(0.2, size)) print("using sample with size %i : theta = %f" % (size,estimate)) showSeveralEstimates([10,100,1000,2000,5000,10000]) The preceding code yields the following results (the results may differ each time you run this program since it involves random sampling): -code6- using sample with size 10 : theta = 0.3 using sample with size 100 : theta = 0.23 using sample with size 1000 : theta = 0.194 using sample with size 2000 : theta = 0.1965 using sample with size 5000 : theta = 0.1982 using sample with size 10000 : theta = 0.2006 Look! We can see that as the size of the data increases, the estimate gets closer to the real value of $$\theta$$, that is, $$0.2$$. This shows that if you want to obtain a better estimate, you need more data. Bayesian Parameter Estimation Although MLE is often easy to prepare as well as to compute, it has several limitations. One of them is that MLE cannot leverage prior information or knowledge regarding the parameter itself. For example, based on our experience, we may be really certain that a Coin is fair. Unfortunately, when we try to estimate the parameter using MLE, we cannot incorporate such knowledge into our computation. On the other hand, Bayesian Parameter Estimation takes into account prior knowledge regarding the parameter, which allows it to provide more realistic and accurate estimates [1][2]. Sometimes we have a prior belief about something before observing the data or evidence. But once we finally see the evidence or data, we may change our belief [1][2]. Instead of directly estimating $$P(data|parameter)$$, Bayesian Parameter Estimation estimates $$P(parameter|data)$$.  Here, prior information about the parameter $$\theta$$ is encoded as a probability distribution $$P(\theta)$$, which means that we consider $$\theta$$ to be the value of a random variable. When we quantify uncertainty about $$\theta$$, it becomes easy for us to encode our prior belief. After we observe our data, we update our prior belief about $$\theta$$ into our posterior belief, denoted as $$P(\theta|X)$$. $P(\theta|x_1, x_2, ..., x_n) \propto P(\theta) P(x_1, x_2, ..., x_n|\theta)$ In the preceding formula, $$P(\theta)$$ is the prior distribution of $$\theta$$. $$P(X|\theta)$$ is the likelihood of our observed data. The likelihood represents how likely we are to see the observed data when we already know the parameter $$\theta$$. $$P(\theta |X)$$ is the posterior distribution that represents the belief about $$\theta$$ after taking both the data and prior knowledge into account (after we see our data). We usually use the expected value to give the best estimate of $$\theta$$. In other words, given the data $$X$$, the estimate of $$\theta$$ is obtained by calculating $$E[\theta |X]$$. $$E[\theta |X]$$ is usually called the Bayes estimator.
$\hat{\theta} = E[\theta |x_1, x_2, ..., x_n]$ Hierarchical Bayesian Model The prior distribution $$P(\theta)$$ may be estimated using so-called hyperprior distributions. This kind of model is known as a Hierarchical Bayesian Model. Furthermore, we can also estimate the hyperprior distribution itself, using a hyper-hyperprior distribution, and so on. The reason behind using a hyperprior distribution is that, instead of directly using a fixed distribution for $$\theta$$, which may be available (from a previous experiment), why don’t we let the “present data tell us about $$\theta$$ by themselves” [2]. Let us see the previous example, in which we try to estimate the Bernoulli parameter $$\theta$$, given the data collected by conducting several tosses of a Coin $$\{H, T, H, H, H, T, ..., T\}.$$ Suppose $$x_i$$ represents the value of a single Coin toss. $x_i \sim Ber(\theta)$ Now, we can model the parameter $$\theta$$ using a Beta distribution. In other words, $$\theta$$ is a random variable that follows a Beta distribution with parameters $$\alpha$$ and $$\beta$$. $$\alpha$$ and $$\beta$$ are called hyper-parameters. We use the Beta distribution since it is the conjugate prior of the Bernoulli distribution. We will not elaborate more on the notion of conjugacy in this post. However, there are several mathematical reasons behind the use of conjugate prior distributions. One of them is that a conjugate prior distribution makes our computation easier. $\theta \sim Beta(\alpha, \beta)$ The posterior distribution of $$\theta$$ can then be denoted as follows $P(\theta |x_1, ..., x_n, \alpha, \beta) \propto P(\theta |\alpha, \beta) P(x_1, ..., x_n |\theta, \alpha, \beta)$ We can also represent the preceding model using the well-known plate notation, where $$N$$ represents the number of tosses that we perform (the size of the sample data). We get back to our main goal: estimating $$\theta$$ (the posterior distribution of $$\theta$$) using Bayesian parameter estimation. We have just learnt that the estimation task involves computing the expectation value of $$\theta$$ ($$E[\theta |X]$$), which means that we might need to perform a number of integrations. Unfortunately, in some cases, performing the integrations will not be feasible, or at least it will be difficult to achieve a specified accuracy. Thus, we need some approximation method to back up our plan. There are many types of numerical approximations for Bayesian parameter estimation. One of them (the most common) is Markov Chain Monte Carlo (MCMC). MCMC estimates the posterior distribution of $$\theta$$ by performing a number of sampling iterations. In each iteration, we improve the quality of our approximation of the target distribution using the sampled data, hoping that it will eventually arrive at the “true” posterior distribution of $$\theta$$. PyMC: A Python Library for MCMC Framework Now, we are ready to play with the programming problem. Python has a library that provides an MCMC framework for our problem. This library is called PyMC. You can go directly to its official website if you want to know more about it. First, let’s import several libraries that we need, including PyMC and pymc.Matplot for drawing histograms. -code7- import pymc as pc import pymc.Matplot as pt import numpy as np from scipy.stats import bernoulli Next, we need to create our model.
-code8- def model(data): theta_prior = pc.Beta('theta_prior', alpha=1.0, beta=1.0) coin = pc.Bernoulli('coin', p=theta_prior, value=data, observed=True) mod = pc.Model([theta_prior, coin]) return mod In the preceding code, we represent $$\theta$$ as “theta_prior”, which follows a Beta distribution with parameters $$\alpha$$ and $$\beta$$. Here, we set both $$\alpha$$ and $$\beta$$ to 1.0. “coin” represents a sequence of coin tosses (NOT a single toss), in which each toss follows a Bernoulli distribution (this corresponds to $$X$$ in the preceding plate notation). We set “observed=True” since this is our observed data. “p=theta_prior” means that the parameter of “coin” is “theta_prior”. Here, our goal is to estimate the expected value of “theta_prior”, which is unknown. MCMC will perform several iterations to generate samples from “theta_prior”, in which each iteration will improve the quality of the sample. Finally, we wrap all of our random variables using the Model class. Like the previous one, we need a module that can generate our toy sample: -code9- def generateSample(t, s): return bernoulli.rvs(t, size=s) Suppose we have already generated a sample, and we pretend that we do not know the parameter of the distribution it comes from. We then use the generated sample to estimate $$\theta$$. -code10- def mcmcTraces(data): mod = model(data) mc = pc.MCMC(mod) mc.sample(iter=5000, burn=1000) return mc.trace('theta_prior')[:] The preceding procedure/function will produce traces, i.e., MCMC samples generated over a number of iterations. Based on that code, MCMC will iterate 5000 times. “burn” specifies the number of initial iterations that are discarded before we trust that the chain has reached the “true” posterior distribution of $$\theta$$. The function yields the traces of MCMC (excluding the samples generated during the burn-in period). Now, let’s perform the MCMC run on our model, and plot the posterior distribution of $$\theta$$ on a histogram. -code11- sample = generateSample(0.7, 100) trs = mcmcTraces(sample) pt.histogram(trs, "theta prior; size=100", datarange=(0.2,0.9)) Suppose the data is generated from a Bernoulli distribution with parameter $$\theta = 0.7$$ (size = 100). If we draw the traces of $$\theta$$ using a histogram, we will get the following figure. We can see that the distribution is centered in the area 0.65 – 0.80. We are most likely happy with this result (since the prediction is somewhat close to 0.70), yet the variance is still very high. Now, let's see what happens when we increase the size of our observed data! The following histogram was generated when we set size to 500: The following histogram was generated when we set size to 5000: See! When we increase the size of our data, the variance of the distribution gets lower. Thus, we are more confident about our prediction! If we need the estimated value of $$\theta$$, we can use the expected value (mean) of that distribution. We can use the numpy library to get the mean of our sample. -code12- #estimated theta est_theta = np.mean(trs) print(est_theta) Main References: [1] Building Probabilistic Graphical Models with Python, Kiran R. Karkera, PACKT Publishing 2014 [2] Bayesian Inference, Byron Hall (STATISTICAT, LLC) Alfan Farizki Wicaksono (firstname [at] cs [dot] ui [dot] ac [dot] id) Faculty of Computer Science, Universitas Indonesia. Written in Tambun, Bekasi, 26 July 2015
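Because the Beta prior is conjugate to the Bernoulli likelihood, the posterior in this tutorial is also available in closed form, $$Beta(\alpha + \#heads, \beta + \#tails)$$, which makes a convenient cross-check on the MCMC result; a minimal sketch (reusing SciPy's bernoulli sampler; not part of the original tutorial):

```python
import numpy as np
from scipy.stats import bernoulli

alpha, beta = 1.0, 1.0
data = bernoulli.rvs(0.7, size=5000)

heads = data.sum()
tails = len(data) - heads

# Posterior is Beta(alpha + heads, beta + tails); its mean is:
posterior_mean = (alpha + heads) / (alpha + beta + len(data))
print(posterior_mean)  # close to 0.7, and to the MCMC estimate above
```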
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9059616923332214, "perplexity": 698.0311032763543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104636.62/warc/CC-MAIN-20170818121545-20170818141545-00342.warc.gz"}
http://www.cut-the-knot.org/m/Geometry/ThreeSquares.shtml
Extras in Bottema's Configuration 22 April 2015, Created with GeoGebra Problem 1 Given three squares $BCDE,$ $ABIF,$ and $ACJG,$ the latter two with centers $O$ and $O'.$ Let $P$ be the midpoint of $DG,$ $N$ that of $EF,$ $S$ the intersection of $NO'$ and $OP.$ Prove that $AS$ passes through $M,$ the midpoint of $DE.$ Solution We'll use complex numbers and shall not distinguish between points and the associated complex numbers. Scaling if necessary, we may define $B=1,$ $C=-1,$ $A=a-bi,$ where $a,b$ are arbitrary real numbers. With this, we obtain further values: $D=-1-2i,$ $E=1-2i,$ $M=-2i,$ \begin{align} G&=i(C-A)+A\\ &=i(-1-a+bi)+(a-bi)\\ &=(a-b)-i(1+a+b). \end{align} Similarly, \begin{align} F&=-i(B-A)+A\\ &=-i(1-a+bi)+(a-bi)\\ &=(a+b)+i(-1+a-b). \end{align} Now we can find all the midpoints: \begin{align} O&=\frac{1}{2}(B+F)=\frac{1}{2}[(1+a+b)+i(-1+a-b)],\\ O'&=\frac{1}{2}(C+G)=\frac{1}{2}[(-1+a-b)-i(1+a+b)]. \end{align} (Note that $iO'=O$.) Further, \begin{align} P&=\frac{1}{2}(D+G)=\frac{1}{2}[(-1+a-b)-i(3+a+b)],\\ N&=\frac{1}{2}(E+F)=\frac{1}{2}[(1+a+b)-i(3-a+b)]. \end{align} Finally, we'll show that $S$ is the midpoint of both $NO'$ and $OP:$ \begin{align} \frac{1}{2}(N+O')&=\frac{1}{2}[a-i(2+b)],\\ \frac{1}{2}(O+P)&=\frac{1}{2}[a-i(2+b)], \end{align} which proves that $ONPO'$ is a parallelogram. But there is more: $S$ happens to be the midpoint of $A$ and $M.$ Indeed, directly \begin{align} \frac{1}{2}(A+M)&=\frac{1}{2}[a-i(2+b)]. \end{align} Thus not only does $AS$ pass through $M$; $AM$ is divided by $S$ in half, as are $NO'$ and $PO,$ making $AO'PMNO$ a parahexagon. Note also that $ONPO'$ becomes a rectangle for $A$ on the perpendicular bisector of $BC,$ i.e., when the two small squares are equal. It is a square (equal to the two small squares at that) when $A$ lies on $BC.$ Problem 2 Given two squares $ABIF$ and $ACJG,$ join $I$ and $J,$ and erect perpendiculars to $BC$ at $I,$ $B,$ $C,$ and $J.$ Let $X,Y,U,W$ be the intersections as shown below. Then $JU=IW.$ Solution Add a perpendicular to $BC$ through $A:$ Then $\Delta CJX=\Delta ACZ,$ implying $CX=AZ.$ Also, $\Delta BIY=\Delta ABZ,$ implying $BY=AZ.$ It follows that $BY=CX$ and, therefore, $IW=JU$ as having equal projections on the same line $BC.$ Acknowledgment The two problems, which are due to Ruben Dario from the Peru Geometrico group, were communicated to me by Leo Giugiuc. I decided to place them on the same page since both relate to Bottema's theorem. Solution to the first is by Leo Giugiuc.
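A quick numeric check of Problem 1, following the coordinates in the solution with Python's complex arithmetic (random a, b; an illustrative sketch):

```python
import random

a, b = random.uniform(-5, 5), random.uniform(-5, 5)
A, B, C = complex(a, -b), 1 + 0j, -1 + 0j
D, E = -1 - 2j, 1 - 2j
G = 1j * (C - A) + A
F = -1j * (B - A) + A
O, Op = (B + F) / 2, (C + G) / 2
P, N = (D + G) / 2, (E + F) / 2
M = (D + E) / 2

# The three midpoints coincide, so S bisects NO', OP and AM
mids = [(N + Op) / 2, (O + P) / 2, (A + M) / 2]
print(all(abs(m - mids[0]) < 1e-12 for m in mids))  # True
```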
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9522141218185425, "perplexity": 532.5058289133092}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825436.78/warc/CC-MAIN-20171022184824-20171022204824-00893.warc.gz"}
https://foster-family.jp/chihou/chihou17/gaiyou/21gifuken.htm
FY2005 (Heisei 17) figures on children in social care - No. 21: Gifu Prefecture

As of March 2006: in Gifu Prefecture, 5.6% of the children who cannot be raised by their own parents grow up in foster families, which ranks 43rd among the 61 prefectures and designated cities nationwide.

Comparison of foster placements and institutional care

Region | Foster families | Institutions | Infant homes etc. | Total | Rank
Gifu | 34 (5.6%) | 536 (88.6%) | 35 (5.8%) | 605 (100%) | 43/61
Nationwide | 3,293 (9.1%) | 29,850 (82.6%) | 3,008 (8.3%) | 36,151 (100%) | -

The road to 15% foster placement

The Ministry of Health, Labour and Welfare's "Children and Child-Rearing Support Plan" sets a numerical target of raising the foster-placement rate to 15.0% by FY2009 (Heisei 21). If each jurisdiction's own rate reaches 15%, the plan's target will be achieved; comparing the yearly figures below shows how far that is.

Gifu (foster rate; foster / institutions / other; total): FY2002 5.1% (27 / 472 / 26; 525), FY2003 5.0% (28 / 504 / 32; 564), FY2004 4.3% (25 / 520 / 36; 581), FY2005 5.6% (34 / 536 / 35; 605). To reach 15%, at least 69 more children would need to be placed with foster families. Projections: FY2006 5.2% (33 / 600; 633), FY2007 5.3% (35 / 624; 659), FY2008 5.3% (37 / 648; 684), FY2009 5.4% (38 / 672; 710).

Nationwide: FY2002 7.4% (2,517 / 28,983 / 2,689; 34,189), FY2003 8.1% (2,811 / 29,134 / 2,746; 34,691), FY2004 8.4% (3,022 / 29,809 / 2,934; 35,765), FY2005 9.1% (3,293 / 29,850 / 3,008; 36,151). To reach 15%, at least 1,547 more children would need to be placed. Projections: FY2006 9.6% (3,546 / 33,394; 36,939), FY2007 10.1% (3,799 / 33,836; 37,635), FY2008 10.6% (4,053 / 34,278; 38,331), FY2009 11.0% (4,307 / 34,720; 39,027). (Projected values computed with Excel's TREND function.)

Registered foster parents and placements

Region | Registered households | Placing households | Placement rate | Avg. children per placing household | Children placed
Gifu | 151 | 28 | 18.5% | 1.2 | 34
Nationwide | 7,737 | 2,370 | 30.6% | 1.4 | 3,293

Capacity and occupancy of institutions

Region | Capacity | Occupancy | Occupancy rate
Gifu (children's homes etc.) | 586 | 568 | 96.9%
Nationwide | 33,676 | 30,830 | 91.5%
Gifu (infant homes) | 35 | 35 | 100.0%
Nationwide | 3,669 | 3,077 | 83.9%

2007/10/28 by sido ( http://foster-family.jp/ )
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000085830688477, "perplexity": 650.4522767643049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363689.56/warc/CC-MAIN-20211209061259-20211209091259-00375.warc.gz"}
http://www.ck12.org/geometry/Applications-of-the-Pythagorean-Theorem/studyguide/Pythagorean-Theorem-Study-Guide/r1/
<meta http-equiv="refresh" content="1; url=/nojavascript/"> # Applications of the Pythagorean Theorem % Progress Practice Applications of the Pythagorean Theorem Progress % Pythagorean Theorem Study Guide Student Contributed This study guide is an overview of the Pythagorean theorem and its converse, Pythagorean triples, and proving the distance formula.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8822125196456909, "perplexity": 3775.248440580431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430459748987.54/warc/CC-MAIN-20150501055548-00056-ip-10-235-10-82.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/9663/is-it-pions-or-gluons-that-mediate-the-strong-force-between-nucleons/20499
# Is it pions or gluons that mediate the strong force between nucleons? From my recent experience teaching high school students I've found that they are taught that the strong force between nucleons is mediated by virtual-pion exchange, whereas between quarks it's gluons. They are not, however, taught anything about colour or quark-confinement. At a more sophisticated level of physics, is it just that the maths works equally well for either type of boson, or is one (type of boson) in fact more correct than the other? - See the answer by Lubos at physics.stackexchange.com/questions/9661/… . The correct type is the gluon. –  anna v May 10 '11 at 12:04 @anna I posed this question after having read @Lubosh's answer. I don't feel that it answers my question and, either way, I was kind of hoping for a slightly more expansive answer. When I get a chance I'll add an edit, containing some LaTex, that should better describe why I posted this query. –  qftme May 10 '11 at 12:08 Lubos gave a complete answer, but one could add that nuclear forces are in analogy with the electromagnetic forces between molecules, the Van der Waals forces. There the mediator is the photon, but the moments of the charge distributions are what control the forces exerted between molecules. In a similar way the strong nuclear forces are such a spillover, except that in contrast to the photon the gluon carries color and couples to itself, so it is much more complicated. –  anna v May 10 '11 at 13:42 Yes. Depending on the energy and distance scale in question. –  dmckee May 10 '11 at 14:18 ## 2 Answers Dear qftme, I agree that your question deserves a more expansive answer. The answer, "pions" or "gluons", depends on the accuracy with which you want to describe the strong force. Historically, people didn't know about quarks and gluons in the 1930s when they began to study the forces in the nuclei for the first time. In 1935, Hideki Yukawa made the most important early contribution of Japanese science to physics when he proposed that there may be short-range forces otherwise analogous to long-range electromagnetism whose potential is $$V(r) = K\frac{e^{-\mu r}}{r}$$ The Fourier transform of this potential is simply $1/(p^2+\mu^2)$ which is natural - an inverted propagator of a massive particle. (The exponential was added relative to the Coulomb potential; and in the Fourier transform, it's equivalent to the addition of $\mu^2$ in the denominator.) The Yukawa particle (a spinless boson) mediated a force between particles that was significantly nonzero only for short enough distances. The description agreed with the application to protons, neutrons, and the forces among them. So the mediator of the strong force was thought to be a pion and the model worked pretty well. (In the 1930s, people were also confusing muons and pions in the cosmic rays, using names that sound bizarre to the contemporary physicists' ears - such as a mesotron, a hybrid of pion and muon, but that's another story.) The pion model was viable even when the nuclear interactions were understood much more quantitatively in the 1960s. The pions are "pseudo-Goldstone bosons". They're spinless (nearly) massless bosons whose existence is guaranteed by the existence of a broken symmetry - in this case, it was the $SU(3)$ symmetry rotating the three flavors we currently know as flavors of the $u,d,s$ light quarks. The symmetry is approximate, which is why the pseudo-Goldstone bosons, the pions (and kaons), are not exactly massless.
But they're still significantly lighter than the protons and neutrons. However, the theory with the fundamental pion fields is not renormalizable - it boils down to the Lagrangian's being highly nonlinear and complicated. It inevitably produces absurd predictions at short enough distances or high enough energies - distances that are shorter than the proton radius. A better theory was needed. Finally, it was found in Quantum Chromodynamics that explains all protons, neutrons, and even pions and kaons (and hundreds of others) as bound states of quarks (and gluons and antiquarks). In that theory, all the hadrons are described as complicated composite particles and all the forces ultimately boil down to the QCD Lagrangian where the force is due to the gluons. So whenever you study the physics at high enough energy or resolution so that you see "inside" the protons and you see the quarks, you must obviously use gluons as the messengers. Pions as messengers are only good in approximate theories in which the energies are much smaller than the proton mass. This condition also pretty much means that the velocities of the hadrons have to be much smaller than the speed of light. - @Lubosh, Thanks. I read your answer with great interest (3 times!) Would I be correct to summarize that pion-exchange is merely a crude (non-renormalizable) approximation and it is closer to the truth to teach that it is instead just gluon exchange? (I say closer to the truth because I'm sure that if sea- and valence-quarks, and their associated PDFs, are properly taken into account the situation must become significantly more complex.) –  qftme May 10 '11 at 13:21 ( ), sound of two hands clapping. –  anna v May 10 '11 at 13:28 @qftme: you should use the appropriate level of detail to explain things. Trying to model inter-nucleon interactions with quarks and gluons directly is messy, difficult and clouds the problems. The nucleon/pion model is simple and quantitatively precise up to orders of inverse energy. It is the appropriate model for that scale. –  genneth May 10 '11 at 13:45 @genneth, I completely agree, in general. The student in question, however, is particularly inquisitive and specifically asked for an answer that was not 'dumbed-down' due to it being too complex. One of my old Professors used to say "Simplify, simplify, but don't throw the baby out with the bathwater." I think in this instance, whether or not to settle on the pion model is equivalent to the baby sitting on the rim of the bath.. –  qftme May 10 '11 at 13:59 @Lubosh, belay that last request, I just found a 12-page study into whether or not the pion model is appropriate to teach at a pre-University level. For anyone interested it's here: teachers.web.cern.ch/teachers/archiv/HST2002/feynman/… –  qftme May 13 '11 at 14:52 Gluons mediate the strong force between quarks. Pions mediate the nuclear force or nucleon-nucleon interaction or RESIDUAL strong force. So, the answer to your question is BOTH. In different measure, but both. See Wikipedia: http://en.wikipedia.org/wiki/Nuclear_force
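To put a number on Yukawa's short range, one can estimate it from the pion mass via the reduced Compton wavelength ħ/(m_π c); a rough sketch (the constants are approximate values, hard-coded here):

```python
hbar_c = 197.327  # MeV*fm
m_pi = 139.57     # charged pion mass in MeV/c^2

print(hbar_c / m_pi)  # ~1.4 fm, roughly the size of a nucleon
```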
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.852775514125824, "perplexity": 850.426790224962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997877644.62/warc/CC-MAIN-20140722025757-00123-ip-10-33-131-23.ec2.internal.warc.gz"}
http://mylime.info/physik/ph-electricity-and-magnetism-magnetic-fields.php
Magnetic fields are almost everywhere: automotive drives, power supplies or magnetic resonance imaging. Without magnetic fields the world wouldn't be like it is today. In this section we will explain and examine the magnetic flux density, coils, electromagnets and magnetic force. 1 The effect of a bar magnet on a welding arc A bar magnet approaches a welding arc. As a result of this the welding arc is deflected. Watch it yourself in the short film. Magnetic forces always occur pairwise. So besides the bar magnet there must be a second magnetic field. That is the point. Where is it? By the way: There is an attraction between the N-pole and the S-pole. But like poles repel each other. The earth acts as if there is a huge bar magnet inside it. The N-pole of a bar magnet will be attracted toward the northern hemisphere. So there must be an imaginary S-pole far north. Are you interested? Here you will find more answers ... 2 Magnetic field pattern Even though some people claim to be sensitive to magnetic fields, humans do not have any sensors for them. But we are able to recognize the forces of a magnetic field: a compass, or a magnet holding something on the steel door of the fridge. The magnetic field pattern can be made visible with iron filings on top of a horse-shoe magnet. In between is a homogeneous field; outside we speak of an inhomogeneous field. The field lines do have a direction, since the iron filings will move as you start to move the magnet. The stronger the magnetic field, the more densely packed the lines of flux. So we describe the strength of the magnetic field by the magnetic flux density B, which is measured in Tesla (T). 3 The magnetic effect of a current Around a long straight wire with a current flowing inside we observe a circular magnetic field. However, to observe this a current of at least 20 A is necessary. If you use magnetic needles instead of iron filings, you will notice a change in direction if the direction of the current changes. This leads to the right-hand grip rule: If your right thumb points in the direction of the current, your fingers then curl in the direction of the lines of flux. Around a wire the magnetic flux density decreases with the distance $r$: $B=\frac{\mu_0 I}{2\pi r}$, in which the constant $\mu_0=4\pi \cdot 10^{-7}~TmA^{-1}$ is the permeability of free space. 4 magnetic field pattern and strength of a solenoid If you wind a long wire into loops, we speak of a coil. A solenoid is a long coil with a large number of turns of wire. The result of the superposition of the magnetic fields of the individual turns is a homogeneous field pattern inside the solenoid. The pattern is identical to the field of a bar magnet. The difference: We can switch it on/off and adjust the strength of the magnetic field. The stronger the current $I$ and the larger the number of turns per unit length $n$, the stronger the magnetic flux density. If a solenoid is in a vacuum or air we can write: $$B=\mu_0 \cdot n \cdot I$$ 5 Magnetic materials Electrons spin in each atom. So each electron acts like a tiny electric current and produces a tiny magnetic field. In some materials the magnetic effects of all electrons cancel; in others they line up. In ferromagnetic materials like iron, cobalt and nickel the tiny magnetic fields line up to a strong magnetic field. So a ferromagnetic core in a solenoid increases the magnetic strength.
With the relative permeability of materials $µ_r$, which is up to 10,000 for iron, 1 in vacuum and less than 1 for copper, we can write: $$B=\mu_0 \cdot \mu_r \cdot n \cdot I$$ 6 Magnetic force A wire carrying current placed in a magnetic field feels a force. The two magnetic fields interact with each other. If the directions of the two patterns are opposite, the fields cancel (destructive). If the directions are identical, they line up constructively to a strong field, resulting in a force on the wire. We can construct the direction of the force with the right-hand rule and the knowledge that densely packed field lines result in a pushing force. The stronger the magnetic flux density $B$, the higher the current $I$, the larger the length $l$ of the conductor in the field and the closer the angle $\theta$ between the magnetic field and the conductor is to perpendicular, the greater the force $F$: $$F=B\cdot I\cdot l\cdot sin~\theta$$ 7 Magnetic force on a moving charge Charged particles from outer space get trapped by the magnetic field of the earth and produce the spectacular glow in the sky shown in the video. The force (Lorentz force) on a moving charge with charge $Q$, velocity $v$ and angle $\theta$ can be written as: $$F=B\cdot Q \cdot v \cdot sin~\theta$$ ideas of R. Brugger, FTA15 Elektronikschule Tettnang K. Johnson et al., "Advanced Physics for You" #### Question 1 construction of magnetic field pattern Construct the magnetic field pattern around the current-carrying wires #### Question 2 magnetic force Which description fits? 1. Around a current-carrying wire, a magnetic field is ... 2. The direction of the magnetic field ... 1. The magnetic field between two poles is directed from ... 2. opposite poles ... 1. On the right side of the wire, both magnetic fields interact ... 2. The direction of force is towards the ... #### Experiment 1 make the magnetic field pattern visible All you need is a bar magnet, acrylic glass and iron filings. Before you put the iron filings on the bar magnet, put an acrylic glass in between. Hint: make a video of the experiment. • What does the magnetic field pattern look like? • Where is the field homogeneous and where inhomogeneous? • Does the direction change if we flip the magnet? • Add a second bar magnet and repeat the experiment. • magnetic field pattern of a horse-shoe magnet: ### First relax ... Here you can find the world's simplest electric train. Can you explain how it works? #### Question 3 flux density Calculate the magnetic flux density 1. Calculate the magnetic flux density at a distance of 2.5 cm from a long straight wire carrying a current of 2.0 A. 2. Close to a wire carrying a current of 4.0 A the magnetic flux density is $4.0\cdot 10^{-5}$ T. At what distance from the wire? 3. A solenoid of length 12 cm has 4800 turns; the current through it is 2.5 A. Calculate the magnetic flux density at the centre of the solenoid. 1. magnetic flux density: $B=\frac{\mu_0I}{2\pi r}=1.6 \cdot 10^{-5}~T$ 2. distance from the wire: $r=\frac{\mu_0I}{2\pi B}=2~cm$ 3. magnetic flux density: $B=\mu_0 n I = \mu_0 \frac{4800}{0.12} I = 0.126~T$ #### Question 4 construction of magnetic force Construct the resulting magnetic field and force 1. magnetic field (blue) to the right, current (red) into the board 2. magnetic field (blue) and current (red) to the right 1. magnetic field (blue) into the board, current (red) to the right 2.
magnetic field (blue), current (red) upwards Resulting magnetic field and force: #### Question 5 The magnetic force on a moving charge An electron with a charge of $Q=e=-1.6 \cdot 10^{-19}~As$ is moving with $v=5 \cdot 10^{7}~m/s$ perpendicular to a uniform magnetic field with flux density $B=6.25 \cdot 10^{-2}~T$. The mass of the electron is $m=9.1 \cdot 10^{-31}~kg$. 1. Construct the direction of the force. 2. Calculate the force. 3. Calculate the radius of the circular path. 4. Calculate the time period T. 5. Explain how mass spectrometers are used to identify different isotopes in a sample of material. 1. Resulting magnetic field and force: 2. magnetic force: $F=BQv= -5 \cdot 10^{-13}~N$ 3. radius: $r=\frac{mv}{BQ}= 4.55 \cdot 10^{-3}~m$ 4. time period: $T=\frac{2\pi m}{BQ}= 5.72 \cdot 10^{-10}~s$
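The numerical answers above can be reproduced with a few lines of Python (a sketch using the constants given in the problems):

```python
from math import pi

mu0 = 4 * pi * 1e-7  # permeability of free space, T*m/A

# Question 3
print(mu0 * 2.0 / (2 * pi * 0.025))   # 1.6e-5 T
print(mu0 * 4.0 / (2 * pi * 4.0e-5))  # 0.02 m = 2 cm
print(mu0 * (4800 / 0.12) * 2.5)      # 0.126 T

# Question 5
B, Q, v, m = 6.25e-2, 1.6e-19, 5e7, 9.1e-31
print(B * Q * v)             # 5.0e-13 N
print(m * v / (B * Q))       # 4.55e-3 m
print(2 * pi * m / (B * Q))  # 5.72e-10 s
```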
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8510938882827759, "perplexity": 647.7445580949578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823236.2/warc/CC-MAIN-20181210013115-20181210034615-00167.warc.gz"}
https://wiraelectrical.com/inductance-formula-circuits/
# Inductance Formula of an Inductor – Explanation and Example Contents The inductance formula is quite similar to the resistance formula. The way we calculate the inductance of a single inductor and the resistance of a single resistor is related to the cross-section area and material. Not only that, we can also calculate series and parallel inductors easily, just as we do with series and parallel resistors. We will mention both inductors and coils a lot here, but don't get confused. Both of them still have the same equation and formula. ## What is an Inductor An inductor is one of the most popular passive elements for an electrical circuit. Why is it called a passive element instead of a passive component? Because an inductor provides inductance in the circuit, but inductance may be generated without a single inductor in the circuit. Keep in mind, inductance can be found in a single conductor wire, especially if it is wound on a core like a coil. Every coil will likely produce inductance in the circuit. A conductor wire will produce a magnetic field when electric current passes through it. An inductor will produce a self-induced EMF with opposite polarity while the current through it changes (that is why the EMF is known as back-EMF). An inductor will have a changing magnetic field as long as there is a change of current flowing through it. When EMF is induced in an electrical circuit where the inductor is used, it is called Self Induction (L). Self-induction is found when an inductor is used in an electrical circuit with no other inductor in the same magnetic field. When EMF is induced in an adjacent pair of inductors placed in the same magnetic field, it is called Mutual Induction (M). Mutual induction is mainly found in a transformer, relay, electric motor, and everything that has a pair of coils wrapped together. The inductance we have talked about until now is self-induction. We will talk about mutual inductance later. ## What is an Inductance Just as a resistor provides a resistance against current in the circuit, an inductor behaves quite similarly. An inductor is a conductor wire wrapped around a core. The core may be air, ferrite, etc. Of course, a coil of conducting wire is also considered as an inductor. An inductor is a passive element that stores energy in the form of a magnetic field and can be found almost everywhere in electronic circuits, power supply circuits, communication systems, and especially transformers. Moving on, an inductor provides inductance in the circuit. Any conducting wire that is inductive in the circuit is also considered as an inductor. What is an inductance? Inductance is the measure of an inductor's opposition to a change of the current flowing through it. From the illustration above, inductance is calculated from its length, cross-section area, material of the core, and number of turns. Mathematically, we can use the equation: $$L=\frac{\mu N^2 A}{l}$$ Where: L = inductance, measured in Henry (H) N = number of turns μ = permeability of the core A = cross-section area l = length of the inductor Looking at the equation above, the core material, which has a specific permeability, plays a key role in the inductance value. There will be different values for an air core and a ferrite core. The measurement unit Henry (H) for inductance is taken from Joseph Henry, an American physicist who contributed greatly to electromagnets. Another measurement unit for inductance is the Weber per Ampere, which is equal to the Henry: 1 H is equal to 1 Wb/A.
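To make the roles of the variables concrete, here is a small Python sketch (ours, not from the article) that evaluates the formula above with μ = μ₀·μᵣ; the relative permeability values and coil dimensions are illustrative assumptions:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability in H/m

def inductance(n_turns, area, length, mu_r=1.0):
    """Coil inductance L = mu_0 * mu_r * N^2 * A / l."""
    return MU_0 * mu_r * n_turns**2 * area / length

# 100 turns, 1 cm^2 cross-section, 5 cm long
air_core  = inductance(100, 1e-4, 0.05)             # ~2.5e-5 H
iron_core = inductance(100, 1e-4, 0.05, mu_r=1000)  # ~2.5e-2 H
print(air_core, iron_core)
```

The core swap is the whole story here: the same winding gains three orders of magnitude of inductance from a high-permeability core.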
## Inductance of an Inductor Why is the EMF produced by self-inductance called back-EMF? We can answer this from Lenz's law. According to Lenz's law: The direction of the electric current induced in a conductor by a changing magnetic field is such that the magnetic field created by the induced current opposes the changing initial magnetic field. Furthermore, we can define that: One Henry is generated by a single coil when an EMF of one volt is induced in the coil while the current through it changes at a rate of 1 Ampere per second. Summarized, an inductance (L) of one Henry is present when a change of current of 1 A/s flowing through the coil induces a voltage (VL) of one volt. Mathematically, the change of current in a time for a coil is $$\frac{di}{dt}$$ Where: di = change of current (A) dt = time needed to achieve the di, measured in seconds Combined with inductance (L) and voltage (v) we get $$v=L\frac{di}{dt}$$ Where: v = induced voltage in the coil (V) L = inductance (H) Doing a little repositioning we get $$L=\frac{v}{di/dt}$$ Where: L = inductance (Henry) v = voltage across the inductor (V) di/dt = change of current per second (A/s) Just like a resistor that "resists" current in the circuit, an inductor "resists" the change of current in the circuit. The more Henries, the lower the rate of change of current for a given voltage, and vice versa. ## Self Induction of Inductor Formula We can say that an inductor is a looped conductor wire wrapped around a core. This device can store energy in the form of a magnetic field. We can increase the inductance by increasing the loops or turns of the wire for the coil or inductor. If the inductance increases, the magnetic flux also increases for the same amount of current. Observe the self-induction equation below: $$L=\frac{N\Phi}{I}$$ Where: L = inductance (H) N = number of turns Φ = magnetic flux I = current (A) The equation above is also known as the magnetic flux linkage divided by the current flowing in each loop of the coil (NΦ/I). Let's do a simple example of the self-inductance of an inductor below: Assume that we have an air-core inductor with: • 100 turns of copper wire. • 5 mWb of magnetic flux. • 2 Ampere DC current flowing through it. Then, using the self-inductance formula and substituting the known variables into the equation results in: $$L=\frac{N\Phi}{I}=\frac{100 \cdot 5\cdot10^{-3}}{2}=0.25~H$$ ## Inductance Formula of an Inductor There is another inductance formula besides the self-inductance. We will find it step-by-step to make sure you understand where it comes from (even if it is not that important to most of us who only need to know how to use it properly). The magnetic flux that we used earlier is determined by the construction and characteristics of the coil or inductor. The construction is built from the length of the inductor, size, number of turns, materials, core, etc. Among all the factors, the permeability of the core and the number of turns are the key factors here. Using a different core will change the coil's dimensions, especially the number of turns. A high-permeability core and a high number of turns produce a high self-induction coefficient of an inductor. The magnetic flux produced by the core is equal to the flux density times the cross-sectional area: $$\Phi=B \cdot A$$ Where: Φ = magnetic flux B = flux density A = cross-sectional area Going deeper, the flux density depends on the permeability of the core, the number of turns, the flowing current, and the length: $$B=\frac{\mu N I}{l}$$
Substituting the flux density into the inductance formula we knew before produces: $$L=\frac{N\Phi}{I}=\frac{N \cdot B \cdot A}{I}=\frac{N \cdot \frac{\mu N I}{l} \cdot A}{I}$$ Simplifying the equation above gives an inductance formula consisting of core material, number of turns, cross-sectional area, and length: $$L=\frac{\mu N^2 A}{l}$$ Where: L = inductance (H) μ = permeability of the core N = number of turns A = cross-sectional area l = length of the inductor ## Inductance Formula Summary Before closing our study here, let us mention some important things: 1. As in the voltage formula above, the voltage across an inductor depends on the rate of change of current. 2. The value v will be zero if the current is steady. This means that, since the voltage is zero, an inductor acts as a short circuit in a DC circuit. 3. An instantaneous change of current is not possible, since a sudden discontinuity of current would require infinite voltage. The opposite behavior is possible for its voltage, which may change abruptly. 4. An ideal inductor doesn't dissipate energy. 5. Inductors store energy by taking power from the circuit. 6. Inductors return energy when delivering power to the circuit. 7. Actual inductors have a resistive element since they are made from conductors such as copper wire. Going deeper, we will continue this topic with the series and parallel inductor formulas. ### How do you find inductance? Calculating inductance can be done using the formula L = μ N^2 A / l, where L is inductance, μ is permeability, N is number of turns, A is cross-sectional area, and l is the length. ### What is N in the inductance formula? N in the inductance formula indicates the number of turns the inductor or coil has, while l is the length of the coil or inductor. ### What is the unit of inductance? Inductance is measured in Henry in honor of Joseph Henry. One Henry is the self-inductance of a coil or inductor in which a current change of one ampere per second induces one volt. ### Why is L used for inductance? L is used in honor of Heinrich Lenz, who pioneered the study of electromagnetism. ### What is the symbol for inductance? Inductance is represented by the symbol L (for Lenz) while its measurement unit is H (for Henry).
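As a small numerical illustration of v = L·di/dt (our sketch; the current-ramp values are made up), consider the 0.25 H result from the example above:

```python
def induced_voltage(inductance, di, dt):
    """Back-EMF magnitude across an inductor: v = L * di/dt."""
    return inductance * di / dt

# 0.25 H inductor, current ramping linearly from 0 A to 2 A in 10 ms
print(induced_voltage(0.25, 2.0, 0.010))  # 50.0 V
```

Halving the ramp time doubles the induced voltage, which is exactly the "resistance to change of current" described above.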
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9178646802902222, "perplexity": 946.479247216883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572198.93/warc/CC-MAIN-20220815175725-20220815205725-00650.warc.gz"}
https://iwaponline.com/wst/article-abstract/51/11/131/11694/Evaluation-of-denitrification-potential-of?redirectedFrom=fulltext
In this study the effect of retention time and rotation speed on the denitrification process in two full-scale rotating biological contactors (RBC), which were operated in parallel and fed with municipal wastewater, is evaluated. Each rotating biological contactor was covered to prevent oxygen input. The discs were 40% submerged. On the axle of one of the rotating biological contactors lamellas were placed (RBC1). During the experiments the nitrate removal performance of the rotating biological contactor with lamellas was observed to be less than that of the other (RBC2), since the lamellas caused oxygen diffusion through their movement. The highest nitrate removal observed was 2.06 g/m²·d, achieved at a contact time of 28.84 minutes and a recycle flow of 1 l/s. The rotation speed during this set had the constant value of 0.8 min⁻¹. Nitrate removal efficiency on RBC1 decreased with increasing rotation speed. On the rotating biological contactor without lamellas no effect on denitrification could be determined within a speed range from 0.67 to 2.1 min⁻¹. If operated under proper conditions, denitrification on an RBC is a very suitable alternative for nitrogen removal that can easily fulfil the nutrient limitations in coastal areas, owing to the rotating biological contactor's economical benefits and uncomplicated handling.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8010960221290588, "perplexity": 4441.376303276477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574039.24/warc/CC-MAIN-20190920134548-20190920160548-00502.warc.gz"}
https://fr.maplesoft.com/support/help/view.aspx?path=evalc/functions&L=F
functions - Maple Help Functions Known to evalc Description • The following functions are known to evalc, in the sense that their real and imaginary parts are known for all complex arguments in their domains. sin cos tan csc sec cot sinh cosh tanh csch sech coth arcsin arccos arctan arccsc arcsec arccot arcsinh arccosh arctanh arccsch arcsech arccoth exp ln sqrt ^ abs conjugate polar argument signum csgn Re Im • The following functions are partially known to evalc, in the sense that their real and imaginary parts are known for some complex arguments in their domains, and/or it is known that the functions are not real valued everywhere on the real line. Ei LambertW Psi dilog surd Ci Si Chi Shi Ssi • If evalc is applied to an expression involving RootOfs of polynomials, the polynomials are split into pairs of polynomials whose roots include the real and imaginary parts of the roots of the original polynomials. • If evalc is applied to an expression involving ints (or sums), each such integral (or sum) is split into two integrals (or sums) of real functions, giving the real and imaginary parts of the original integral (or sum). • evalc assumes that all variables represent real-valued quantities. evalc further assumes that unknown functions of real variables are real valued.
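evalc itself is Maple-specific, but the underlying idea (splitting an expression into real and imaginary parts under the assumption that all free variables are real) can be illustrated in Python with sympy; this is our illustration, not part of the Maple help page:

```python
from sympy import symbols, I, exp, sin, re, im, expand_complex

# evalc-style assumption: the free variables are real valued
x, y = symbols('x y', real=True)

z = exp(I * x) * sin(y)
print(expand_complex(z))  # prints something equivalent to sin(y)*cos(x) + I*sin(x)*sin(y)
print(re(z), im(z))       # real part sin(y)*cos(x), imaginary part sin(x)*sin(y)
```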
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9678046703338623, "perplexity": 1724.2372260157995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948976.45/warc/CC-MAIN-20230329120545-20230329150545-00645.warc.gz"}
http://blog.sigfpe.com/2010/07/automatic-divided-differences.html?showComment=1284832200458
# A Neighborhood of Infinity ## Saturday, July 31, 2010 ### Automatic Divided Differences Divided Differences I've previously talked about automatic differentiation here a few times. One of the standard arguments for using automatic differentiation is that it is more accurate than numeric differentiation implemented via divided differences. We can approximate f'(x) by using (f(x)-f(y))/(x-y) with a value of y near x. Accuracy requires y to be close to x, and that requires computing the difference between two numbers that are very close. But subtracting close numbers is itself a source of numerical error when working with finite precision. So you're doomed to error no matter how close you choose x and y to be. However, the accuracy problem with computing divided differences can itself be fixed. In fact, we can adapt the methods behind automatic differentiation to work with divided differences too. (This paragraph can be skipped. I just want to draw a parallel with what I said here. Firstly I need to correct the title of that article. I should have said it was about *divided differences*, not *finite differences*. The idea in that article was that the notion of a divided difference makes sense for types because for a large class of functions you can define divided differences without using either differencing or division. You just need addition and multiplication. That's the same technique I'll be using here. I think it's neat to see the same trick being used in entirely different contexts.) The Direct Approach Firstly, here's a first attempt at divided differencing: > diff0 f x y = (f x - f y)/(x - y) We can try it on the function f: > f x = (3*x+1/x)/(x-2/x) diff0 f 1 1.000001 gives -14.0000350000029. Repeating the calculation with an arbitrary precision package (I used CReal) gives -14.000035000084000. We are getting nowhere near the precision we'd like when working with double precision floating point. The Indirect Approach Automatic differentiation used a bunch of properties of differentiation: linearity, the product rule and the chain rule. Similar rules hold for divided differences. First let me introduce some notation. If f is a function then I'll use f(x) for normal function application. But I'll use f[x,y] to mean the divided difference (f(x)-f(y))/(x-y). We have (f+g)[x,y] = f[x,y]+g[x,y] (fg)[x,y] = f(x)g[x,y]+f[x,y]g(y) h[x,y] = f[g(x),g(y)]g[x,y] when h(x)=f(g(x)) We can modify the product rule to make it more symmetrical, though it's not strictly necessary: (fg)[x,y] = 0.5(f(x)+f(y))g[x,y]+0.5f[x,y](g(x)+g(y)) (I got that from this paper by Kahan.) In each case, given f evaluated at x and y, and its divided difference at [x, y], and the same for g, we can compute the corresponding quantities for the sum and product of f and g.
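(To see why the unsymmetrical product rule holds, add and subtract f(x)g(y) in the numerator: (fg)(x)-(fg)(y) = f(x)(g(x)-g(y)) + (f(x)-f(y))g(y); dividing through by x-y gives (fg)[x,y] = f(x)g[x,y]+f[x,y]g(y). Averaging this with the mirror-image identity (fg)[x,y] = f(y)g[x,y]+f[x,y]g(x) gives the symmetrical form above.)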
So we can store f(x), f(y) and f[x,y] together in a single structure: > data D a = D { fx :: a, fy :: a, fxy :: a } deriving (Eq, Show, Ord) And now we can implement arithmetic on these structures using the rules above: > instance Fractional a => Num (D a) where > fromInteger n = let m = fromInteger n in D m m 0 > D fx fy fxy + D gx gy gxy = D (fx+gx) (fy+gy) (fxy+gxy) > D fx fy fxy * D gx gy gxy = D (fx*gx) (fy*gy) (0.5*(fxy*(gx+gy) + (fx+fy)*gxy)) > negate (D fx fy fxy) = D (negate fx) (negate fy) (negate fxy) I'll leave as an exercise the proof that this formula for division works: > instance Fractional a => Fractional (D a) where > fromRational n = let m = fromRational n in D m m 0 > D fx fy fxy / D gx gy gxy = D (fx/gx) (fy/gy) (0.5*(fxy*(gx+gy) - (fx+fy)*gxy)/(gx*gy)) For the identity function, i, we have i(x)=x, i(y)=y and i[x,y]=1. So for any x and y, the evaluation of the identity function at x, y and [x,y] is represented as D x y 1. To compute divided differences for any function f making use of addition, subtraction, multiplication and division we need to simply apply f to D x y 1. We pick off the divided difference from the fxy element of the structure. Here's our replacement for diff0. > diff1 f x y = fxy $ f (D x y 1) This is all mimicking the construction for automatic differentiation. Evaluating diff1 f 1 1.000001 gives -14.000035000083997. Much closer to the result derived using CReal. One neat thing about this is that we have a function that's well defined even in the limit as x tends to y. When we evaluate diff1 f 1 1 we get the derivative of f at 1 (here exactly -14, the value of f'(1)). I thought that this was a novel approach but I found it sketched at the end of this paper by Reps and Rall. (Though their sketch is a bit vague so it's not entirely clear what they intend.) Both the Kahan paper and the Reps and Rall papers give some applications of computing divided differences this way. It's not clear how to deal with the standard transcendental functions. They have divided differences that are very complex compared to their derivatives. Aside There is a sense in which divided differences are uncomputable(!) and that what we've had to do is switch from an extensional description of functions to an intensional description to compute them. I'll write about this some day. Note that the ideas here can be extended to higher order divided differences and that there are some really nice connections with type theory. I'll try to write about these too. Update: I found another paper by Reps and Rall that uses precisely the method described here. Trevor said... Is there a 0.5 missing from the (*) definition for the Num instance of D? sigfpe said... Trevor, I factored out a 0.5 which may be what is leading you to think the definition of (*) differs from the formula I gave a few lines earlier. So I think the code is correct. Trevor said... Ah, I didn't match parentheses. Thanks! Nimish said... Have you looked at differential Galois theory? It seems that could provide answers since the type derivation that makes zippers "obvious" fits the criteria for a derivation on the ring (field?) of Haskell types.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9034131169319153, "perplexity": 1123.3717668841275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770403.126/warc/CC-MAIN-20141217075250-00153-ip-10-231-17-201.ec2.internal.warc.gz"}
http://www.gradesaver.com/textbooks/science/chemistry/chemistry-a-molecular-approach-3rd-edition/chapter-1-sections-1-1-1-8-exercises-problems-by-topic-page-39/66
## Chemistry: A Molecular Approach (3rd Edition) $$Density = 4.49g/cm^{3}$$ Since we are told to find the answer in $g/cm^{3}$, we must first convert our units. 1 kg = 1000 g 1.41 kg = 1410 g 1 L = $1000 cm^{3}$ 0.314 L = $314 cm^{3}$ Now we can use the density formula to calculate the answer: $$D=\frac{m}{v}$$ $$D=\frac{1410~g}{314~cm^{3}}$$ $$D=4.49g/cm^{3}$$
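A quick Python check of the same unit conversion and division (our sketch, not part of the textbook solution):

```python
mass_g = 1.41 * 1000       # 1.41 kg -> 1410 g
volume_cm3 = 0.314 * 1000  # 0.314 L -> 314 cm^3
density = mass_g / volume_cm3
print(round(density, 2))   # 4.49 g/cm^3
```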
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.914310872554779, "perplexity": 824.9627924190164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517917.20/warc/CC-MAIN-20171212192750-20171212212750-00726.warc.gz"}
http://dspace.nwu.ac.za/handle/10394/1865/browse?value=Active+galaxies&type=subject
• #### Search for Lorentz invariance breaking with a likelihood fit of the PKS 2155-304 flare data taken on MJD 53944 (Elsevier Science, 2011) Several models of Quantum Gravity predict Lorentz Symmetry breaking at energy scales approaching the Planck scale (∼10^19 GeV). With present photon data from the observations of distant astrophysical sources, it is possible ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9253155589103699, "perplexity": 4633.65639777675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738659512.19/warc/CC-MAIN-20160924173739-00195-ip-10-143-35-109.ec2.internal.warc.gz"}
https://en.wikisource.org/wiki/The_Direction_of_Force_and_Acceleration
# The Direction of Force and Acceleration Non-Newtonian Mechanics :— The Direction of Force and Acceleration. By Richard C. Tolman, Ph.D., Instructor in Physical Chemistry at the University of Michigan[1]. If force is defined as the rate of increase of momentum, the equation $\mathsf{F}=\frac{d}{dt}(m\mathsf{u})=m\frac{d\mathsf{u}}{dt}+\frac{dm}{dt}\mathsf{u}$ (1) allows for a change in mass as well as a change in velocity. This is the fundamental equation of non-Newtonian mechanics[2]. It has been shown from the principle of relativity[3] that the mass of a moving body is given by the equation $m=\frac{m_{0}}{\sqrt{1-\frac{u^{2}}{c^{2}}}},$ where $m_0$ is the mass of the body at rest and $c$ is the velocity of light. Substituting in equation (1) we obtain $\mathsf{F}=\frac{d}{dt}\left(\frac{m_{0}}{\sqrt{1-\frac{u^{2}}{c^{2}}}}\mathsf{u}\right)=\frac{m_{0}}{\sqrt{1-\frac{u^{2}}{c^{2}}}}\frac{d\mathsf{u}}{dt}+\frac{d}{dt}\left(\frac{m_{0}}{\sqrt{1-\frac{u^{2}}{c^{2}}}}\right)\mathsf{u}$ (2) From an inspection of equations (1) and (2) it is evident that the force acting on a body is equal to the sum of two vectors, one of which is in the direction of the acceleration $d\mathsf{u}/dt$ and the other in the direction of the existing velocity u, so that in general the force and the acceleration it produces are not in the same direction. If the force which does produce acceleration in a given direction be resolved perpendicular and parallel to the acceleration, it may be shown that the two components are connected by a definite relation. Relation between the Components of Force Parallel and Perpendicular to the Acceleration. Consider a body (fig. 1) moving with the velocity $\mathsf{u}=u_{x}\mathsf{i}+u_{y}\mathsf{j}.$ Let it be accelerated in the Y direction by the action of the component forces $\mathsf{F}_{y}$ and $\mathsf{F}_{x}$. From equation (2) we have $\mathsf{F}_{x}=\frac{m_{0}}{\sqrt{1-\frac{u^{2}}{c^{2}}}}\frac{du_{x}}{dt}+\frac{d}{dt}\left(\frac{m_{0}}{\sqrt{1-\frac{u^{2}}{c^{2}}}}\right)u_{x}$ (3) $\mathsf{F}_{y}=\frac{m_{0}}{\sqrt{1-\frac{u^{2}}{c^{2}}}}\frac{du_{y}}{dt}+\frac{d}{dt}\left(\frac{m_{0}}{\sqrt{1-\frac{u^{2}}{c^{2}}}}\right)u_{y}$ (4) Introducing the condition that there is no acceleration in the X direction, which makes $du_{x}/dt=0$, further noting that $u^{2}=u_{x}^{2}+u_{y}^{2}$, by the division of equation (3) by (4) we obtain $\frac{\mathsf{F}_{x}}{\mathsf{F}_{y}}=\frac{u_{x}u_{y}}{c^{2}-u_{x}^{2}},$ $\mathsf{F}_{x}=\frac{u_{x}u_{y}}{c^{2}-u_{x}^{2}}\mathsf{F}_{y}$ (5) Hence in order to accelerate a body in a given direction, we may apply any force $\mathsf{F}_{y}$ in the desired direction, but must at the same time apply at right angles another force $\mathsf{F}_{x}$ whose magnitude is given by equation (5). From a qualitative consideration, it is also possible to see the necessity of a component of force, perpendicular to the desired acceleration. Referring again to fig. 1, since the body is being accelerated in the Y direction, its total velocity and hence its mass are increasing. This increasing mass is accompanied by increasing momentum in the X direction even when the velocity in that direction remains constant. The component force $\mathsf{F}_{x}$ is necessary for the production of this increase in X-momentum. In predicting the path of moving electrons with the help of the fifth equation of electromagnetic theory, $\mathsf{F}=\mathsf{E}+\frac{1}{c}\mathsf{v}\times\mathsf{H},$ we find an interesting application of equation (5).
Application in Electromagnetic Theory. Consider a charge $\epsilon$ constrained to move in the X direction with the velocity $v$ and let it be the origin of a system of moving coordinates Y$\epsilon$X (fig. 2). Suppose now a test electron $t$, of unit charge, situated at the point $x=0$, $y=y$, moving in the X direction with the same velocity $v$ as the charge $\epsilon$, and also having a component velocity in the Y direction $u_y$. Let us predict the nature of its motion under the influence of the charge $\epsilon$. The moving charge $\epsilon$ will be surrounded by electric and magnetic fields whose intensities at any point are given by the following expressions[4], obtained by integrating Maxwell's four field equations, for the case of a moving point charge: $\mathsf{E}=\left(1-\frac{v^{2}}{c^{2}}\right)\frac{\epsilon\mathsf{R}}{\mathsf{R}^{3}\left(1-\frac{v^{2}}{c^{2}}\sin^{2}\psi\right)^{\frac{3}{2}}}$ (6) $\mathsf{H}=\frac{1}{c}\mathsf{v}\times\mathsf{E},$ (7) where R is the radius vector connecting the moving charge with the point in question and $\psi$ is the angle between R and v. For the field acting on the test electron $t$, situated at the point $x=0$, $y=y$, we may substitute $\mathsf{R}=y\mathsf{j}$ and $\sin\psi=1$, giving us, $\mathsf{E}=\frac{\epsilon}{y^{2}\left(1-\frac{v^{2}}{c^{2}}\right)^{\frac{1}{2}}}\mathsf{j}$ (8) and $\mathsf{H}=\frac{v}{c}\frac{\epsilon}{y^{2}\left(1-\frac{v^{2}}{c^{2}}\right)^{\frac{1}{2}}}\mathsf{k},$ (9) Substituting into the fifth fundamental equation of electromagnetic theory, $\mathsf{F}=\mathsf{E}+\frac{1}{c}\mathsf{v}\times\mathsf{H},$ (10) we obtain the force acting on the unit test electron $t$. [Note in the above equation that v, the velocity of the electron, is for our case $v\mathsf{i}+u_{y}\mathsf{j}$.] $\mathsf{F}=\frac{\epsilon}{y^{2}\left(1-\frac{v^{2}}{c^{2}}\right)^{\frac{1}{2}}}\mathsf{j}-\frac{1}{c^{2}}\frac{v^{2}\epsilon}{y^{2}\left(1-\frac{v^{2}}{c^{2}}\right)^{\frac{1}{2}}}\mathsf{j}+\frac{1}{c^{2}}\frac{vu_{y}\epsilon}{y^{2}\left(1-\frac{v^{2}}{c^{2}}\right)^{\frac{1}{2}}}\mathsf{i},$ (11) or $\mathsf{F}_{x}=\frac{\epsilon}{y^{2}\left(1-\frac{v^{2}}{c^{2}}\right)^{\frac{1}{2}}}\frac{vu_{y}}{c^{2}},$ (12) $\mathsf{F}_{y}=\frac{\epsilon}{y^{2}\left(1-\frac{v^{2}}{c^{2}}\right)^{\frac{1}{2}}}\left(1-\frac{v^{2}}{c^{2}}\right)$ (13) Under the action of the component force $\mathsf{F}_{x}$ we might at first sight expect the electron $t$ to acquire an acceleration in the X direction. Such a condition, however, would not be in agreement with the principle of relativity, since from the point of view of an observer who is moving along with the charge $\epsilon$, the phenomenon is merely one of ordinary electrostatic repulsion and the test electron should experience no change in velocity in the X direction but should be accelerated merely in the Y direction. If, however, we divide equation (12) by (13) we obtain $\mathsf{F}_{x}=\frac{vu_{y}}{c^{2}-v^{2}}\mathsf{F}_{y},$ (14) which agrees with equation (5), the necessary relation for zero acceleration in the X direction. The application of equation (5) thus removes a discrepancy which could not be accounted for in any system of mechanics in which force and acceleration are in the same direction. Summary. For non-Newtonian mechanics, it has been pointed out that force and the acceleration it produces are not in general in the same direction. A definite relation (equation 5) has been derived connecting the components of force parallel and perpendicular to the acceleration.
For a special problem, the application of this relation has removed an apparent discrepancy between the predictions based on the electromagnetic theory and on the principle of relativity. Ann Arbor, Mich. March 25th, 1911. 1. Communicated by the Author. 2. This definition of force was first used by Lewis (Phil. Mag. xvi. p. 705 (1908)). In Einstein's later treatment of the principle of relativity, Jahrbuch der Radioaktivität, iv. p. 411 (1907), he defines force by the equations $\mathsf{F}_{x}=\frac{d}{dt}\left(\frac{m_{0}u_{x}}{\sqrt{1-\frac{u^{2}}{c^{2}}}}\right),\ \mathsf{F}_{y}=\frac{d}{dt}\left(\frac{m_{0}u_{y}}{\sqrt{1-\frac{u^{2}}{c^{2}}}}\right),\ \mathsf{F}_{z}=\frac{d}{dt}\left(\frac{m_{0}u_{z}}{\sqrt{1-\frac{u^{2}}{c^{2}}}}\right).$ He there states that this definition has in general no physical meaning. We see, however, that these are merely the scalar equations corresponding to equation (2) above and hence derivable from equation (1), which is an obvious definition of force and has a physical meaning. In further support of this definition of force, it has recently been pointed out by the writer, Phil. Mag. xxi. p. 296 (1911), that, combined with the principle of relativity, it leads to a derivation of the fifth fundamental equation of electromagnetic theory in its exact form $\mathsf{F}=\mathsf{E}+\frac{1}{c}\mathsf{v}\times\mathsf{H},$ there being no necessity for distinguishing between longitudinal and transverse mass. 3. Lewis & Tolman, Proc. Amer. Acad. xliv. p. 711 (1909); Phil. Mag. p. 510 (1909). 4. Abraham, Theorie der Elektrizität, vol. ii. p. 86 et seq. (B. G. Teubner, Leipzig and Berlin, 1908).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 52, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9485068917274475, "perplexity": 276.22550707864315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701161775.86/warc/CC-MAIN-20160205193921-00264-ip-10-236-182-209.ec2.internal.warc.gz"}
http://math.libretexts.org/TextMaps/Precalculus_Textmaps/Map%3A_Precalculus_(OpenStax)/01%3A_Functions/1.6%3A_Absolute_Value_Functions
# 1.6: Absolute Value Functions Until the 1920s, the so-called spiral nebulae were believed to be clouds of dust and gas in our own galaxy, some tens of thousands of light years away. Then, astronomer Edwin Hubble proved that these objects are galaxies in their own right, at distances of millions of light years. Today, astronomers can detect galaxies that are billions of light years away. Distances in the universe can be measured in all directions. As such, it is useful to consider distance as an absolute value function. In this section, we will investigate absolute value functions. Figure 1.6.1: Distances in deep space can be measured in all directions. As such, it is useful to consider distance in terms of absolute values. (credit: "s58y"/Flickr) ### Understanding Absolute Value Recall that in its basic form, $$f(x)=|x|$$, the absolute value function is one of our toolkit functions. The absolute value function is commonly thought of as providing the distance the number is from zero on a number line. Algebraically, for whatever the input value is, the output is the value without regard to sign. Absolute value function The absolute value function can be defined as a piecewise function $f(x)=|x|= \begin{cases} x & \text{ if }x{\geq}0 \\ -x & \text{ if } x<0 \end{cases}$ Example 1.6.1: Determine a Number within a Prescribed Distance Describe all values $$x$$ within or including a distance of 4 from the number 5. Solution We want the distance between $$x$$ and 5 to be less than or equal to 4. We can draw a number line, such as the one in Figure 1.6.2, to represent the condition to be satisfied. Figure 1.6.2: Number line describing the difference of the distance of 4 away from 5 The distance from $$x$$ to 5 can be represented using the absolute value as $$|x−5|$$. We want the values of $$x$$ that satisfy the condition $$| x−5 |\leq4$$. Analysis Note that \begin{align} -4&{\leq}x-5 & x-5&\leq4 \\1&{\leq}x & x&{\leq}9 \end{align} So $$|x−5|\leq4$$ is equivalent to $$1{\leq}x\leq9$$. However, mathematicians generally prefer absolute value notation. 1.6.1: Describe all values $$x$$ within a distance of 3 from the number 2. Solution $$|x−2|\leq3$$ Example 1.6.2: Resistance of a Resistor Electrical parts, such as resistors and capacitors, come with specified values of their operating parameters: resistance, capacitance, etc. However, due to imprecision in manufacturing, the actual values of these parameters vary somewhat from piece to piece, even when they are supposed to be the same. The best that manufacturers can do is to try to guarantee that the variations will stay within a specified range, often ±1%, ±5%, or ±10%. Suppose we have a resistor rated at 680 ohms, ±5%. Use the absolute value function to express the range of possible values of the actual resistance. Solution 5% of 680 ohms is 34 ohms. The absolute value of the difference between the actual and nominal resistance should not exceed the stated variability, so, with the resistance $$R$$ in ohms, $|R−680|\leq34$ 1.6.2: Students who score within 20 points of 80 will pass a test. Write this as a distance from 80 using absolute value notation.
Solution Using the variable $$p$$ for passing, $$| p−80 |\leq20$$ ### Graphing an Absolute Value Function The most significant feature of the absolute value graph is the corner point at which the graph changes direction. This point is shown at the origin in Figure 1.6.3. Figure 1.6.3: Graph of an absolute function. Figure 1.6.4 shows the graph of $$y=2|x–3|+4$$. The graph of $$y=|x|$$ has been shifted right 3 units, vertically stretched by a factor of 2, and shifted up 4 units. This means that the corner point is located at $$(3,4)$$ for this transformed function. Figure 1.6.4: Graph of the different types of transformations for an absolute function. Example 1.6.3: Writing an Equation for an Absolute Value Function Write an equation for the function graphed in Figure 1.6.5. Figure 1.6.5: Graph of an absolute function. Solution The basic absolute value function changes direction at the origin, so this graph has been shifted to the right 3 units and down 2 units from the basic toolkit function. See Figure 1.6.6. Figure 1.6.6: Graph of two transformations for an absolute function at $$(3, -2)$$. We also notice that the graph appears vertically stretched, because the width of the final graph on a horizontal line is not equal to 2 times the vertical distance from the corner to this line, as it would be for an unstretched absolute value function. Instead, the width is equal to 1 times the vertical distance as shown in Figure 1.6.7. Figure 1.6.7: Graph of two transformations for an absolute function at $$(3, -2)$$ and the ratios between the two different transformations. From this information we can write the equation \begin{align} f(x)&=2|x-3|-2, &\text{treating the stretch as a vertical stretch, or} \\ f(x)&=|2(x-3)|-2, &\text{treating the stretch as a horizontal compression.} \end{align} Analysis Note that these equations are algebraically equivalent—the stretch for an absolute value function can be written interchangeably as a vertical or horizontal stretch or compression. If we couldn't observe the stretch of the function from the graphs, could we algebraically determine it? Yes. If we are unable to determine the stretch based on the width of the graph, we can solve for the stretch factor by putting in a known pair of values for $$x$$ and $$f(x)$$. $f(x)=a|x−3|−2$ Now substituting in the point $$(1, 2)$$ \begin{align} 2&=a|1-3|-2 \\ 4&=2a \\ a&=2 \end{align} 1.6.3: Write the equation for the absolute value function that is horizontally shifted left 2 units, is vertically flipped, and vertically shifted up 3 units. Solution $$f(x)=−| x+2 |+3$$ Do the graphs of absolute value functions always intersect the vertical axis? The horizontal axis? Yes, they always intersect the vertical axis. The graph of an absolute value function will intersect the vertical axis when the input is zero. No, they do not always intersect the horizontal axis. The graph may or may not intersect the horizontal axis, depending on how the graph has been shifted and reflected. It is possible for the absolute value function to intersect the horizontal axis at zero, one, or two points (see Figure 1.6.8). Figure 1.6.8: (a) The absolute value function does not intersect the horizontal axis. (b) The absolute value function intersects the horizontal axis at one point. (c) The absolute value function intersects the horizontal axis at two points. ### Solving an Absolute Value Equation Now that we can graph an absolute value function, we will learn how to solve an absolute value equation.
To solve an equation such as $$8=|2x−6|$$, we notice that the absolute value will be equal to 8 if the quantity inside the absolute value is 8 or -8. This leads to two different equations we can solve independently. $2x-6=8 \text{ or } 2x-6=-8$ \begin{align} 2x&=14 & 2x&=-2 \\x&=7 & x&=-1 \end{align} Knowing how to solve problems involving absolute value functions is useful. For example, we may need to identify numbers or points on a line that are at a specified distance from a given reference point. An absolute value equation is an equation in which the unknown variable appears in absolute value bars. For example, $|x|=4,$ $|2x−1|=3,$ $|5x+2|−4=9.$ Solutions to Absolute Value Equations For real numbers $$A$$ and $$B$$, an equation of the form $$|A|=B$$, with $$B\geq0$$, will have solutions when $$A=B$$ or $$A=−B$$. If $$B<0$$, the equation $$|A|=B$$ has no solution. Given the formula for an absolute value function, find the horizontal intercepts of its graph. 1. Isolate the absolute value term. 2. Use $$|A|=B$$ to write $$A=B$$ or $$−A=B$$, assuming $$B>0$$. 3. Solve for $$x$$. Example 1.6.4: Finding the Zeros of an Absolute Value Function For the function $$f(x)=|4x+1|−7$$, find the values of $$x$$ such that $$f(x)=0$$. Solution \begin{align} 0&=|4x+1|-7 & & &\text{Substitute 0 for f(x).} \\ 7&=|4x+1| & & &\text{Isolate the absolute value on one side of the equation.} \\ 7&=4x+1 &\text{or} -7&=4x+1 &\text{Break into two separate equations and solve.} \\ 6&=4x & -8&=4x & \\ x&=\frac{6}{4}=1.5 & x&=\frac{-8}{4}=-2 \end{align} The function outputs 0 when $$x=1.5$$ or $$x=−2$$. See Figure 1.6.8. Figure 1.6.8: Graph of an absolute function with x-intercepts at -2 and 1.5. 1.6.4: For the function $$f(x)=|2x−1|−3$$, find the values of $$x$$ such that $$f(x)=0$$. Solution $$x=−1$$ or $$x=2$$ Should we always expect two answers when solving $$|A|=B$$? No. We may find one, two, or even no answers. For example, there is no solution to $$2+|3x−5|=1$$. Given an absolute value equation, solve it. 1. Isolate the absolute value term. 2. Use $$|A|=B$$ to write $$A=B$$ or $$A=−B$$. 3. Solve for $$x$$. Example 1.6.5: Solving an Absolute Value Equation Solve $$1=4|x−2|+2$$. Solution Isolating the absolute value on one side of the equation gives the following. \begin{align} 1&=4|x-2|+2 \\ -1&=4|x-2| \\ -\frac{1}{4}&=|x-2| \end{align} The absolute value always returns a positive value, so it is impossible for the absolute value to equal a negative value. At this point, we notice that this equation has no solutions. In Example 1.6.5, if $$f(x)=1$$ and $$g(x)=4|x−2|+2$$ were graphed on the same set of axes, would the graphs intersect? No. The graphs of $$f$$ and $$g$$ would not intersect, as shown in Figure 1.6.9. This confirms, graphically, that the equation $$1=4|x−2|+2$$ has no solution. Figure 1.6.9: Graph of $$g(x)=4|x-2|+2$$ and $$f(x)=1$$. Find where the graph of the function $$f(x)=−| x+2 |+3$$ intersects the horizontal and vertical axes. $$f(0)=1$$, so the graph intersects the vertical axis at $$(0,1)$$. $$f(x)=0$$ when $$x=−5$$ and $$x=1$$ so the graph intersects the horizontal axis at $$(−5,0)$$ and $$(1,0)$$. ### Solving an Absolute Value Inequality Absolute value equations may not always involve equalities. Instead, we may need to solve an equation within a range of values. We would use an absolute value inequality to solve such an equation.
An absolute value inequality is an equation of the form $|A|<B,\; |A|{\leq}B,\; |A|>B, \text{ or } |A|{\geq}B$, where an expression $$A$$ (and possibly but not usually $$B$$) depends on a variable $$x$$. Solving the inequality means finding the set of all $$x$$ that satisfy the inequality. Usually this set will be an interval or the union of two intervals. There are two basic approaches to solving absolute value inequalities: graphical and algebraic. The advantage of the graphical approach is we can read the solution by interpreting the graphs of two functions. The advantage of the algebraic approach is it yields solutions that may be difficult to read from the graph. For example, we know that all numbers within 200 units of 0 may be expressed as $|x|<200 \text{ or } −200<x<200$ Suppose we want to know all possible returns on an investment if we could earn some amount of money within $200 of $600. We can solve algebraically for the set of values $$x$$ such that the distance between $$x$$ and 600 is less than 200. We represent the distance between $$x$$ and 600 as $$|x−600|$$. $|x−600|<200 \text{ or } −200<x−600<200$ $−200+600<x−600+600<200+600$ $400<x<800$ This means our returns would be between $400 and $800. Sometimes an absolute value inequality problem will be presented to us in terms of a shifted and/or stretched or compressed absolute value function, where we must determine for which values of the input the function's output will be negative or positive. Given an absolute value inequality of the form $$|x−A|{\leq}B$$ for real numbers $$A$$ and $$B$$ where $$B$$ is positive, solve the absolute value inequality algebraically. 1. Find boundary points by solving $$|x−A|=B$$. 2. Test intervals created by the boundary points to determine where $$|x−A|{\leq}B$$. 3. Write the interval or union of intervals satisfying the inequality in interval, inequality, or set-builder notation. Example 1.6.6: Solving an Absolute Value Inequality Solve $$|x −5|{\leq}4$$. Solution With both approaches, we will need to know first where the corresponding equality is true. In this case we first will find where $$|x−5|=4$$. We do this because the absolute value is a function with no breaks, so the only way the function values can switch from being less than 4 to being greater than 4 is by passing through where the values equal 4. Solve $$|x−5|=4$$. \begin{align} x-5&=4 & x-5&=-4 \\ x&=9 &\text{ or }\;\;\;\; x&=1 \end{align} After determining that the absolute value is equal to 4 at $$x=1$$ and $$x=9$$, we know the graph can change only from being less than 4 to greater than 4 at these values. This divides the number line up into three intervals: $x<1,\; 1<x<9, \text{ and } x>9.$ To determine when the function is less than 4, we could choose a value in each interval and see if the output is less than or greater than 4, as shown in Table 1.6.1. Interval $$x<1$$: test $$x=0$$, $$f(0)=|0-5|=5$$, greater than 4. Interval $$1<x<9$$: test $$x=6$$, $$f(6)=|6-5|=1$$, less than 4. Interval $$x>9$$: test $$x=11$$, $$f(11)=|11-5|=6$$, greater than 4. Table 1.6.1: Test values in each interval Because $$1{\leq}x{\leq}9$$ is the only interval in which the output at the test value is less than 4, we can conclude that the solution to $$|x−5|{\leq}4$$ is $$1{\leq}x{\leq}9$$, or $$[1,9]$$. To use a graph, we can sketch the function $$f(x)=|x−5|$$. To help us see where the outputs are 4, the line $$g(x)=4$$ could also be sketched as in Figure 1.6.10. Figure 1.6.10: Graph to find the points satisfying an absolute value inequality. We can see the following: • The output values of the absolute value are equal to 4 at $$x=1$$ and $$x=9$$.
• The graph of $$f$$ is below the graph of $$g$$ on $$1<x<9$$. This means the output values of $$f(x)$$ are less than the output values of $$g(x)$$. • The absolute value is less than or equal to 4 between these two points, when $$1{\leq}x\leq9$$. In interval notation, this would be the interval $$[1,9]$$. Analysis For absolute value inequalities, $|x−A|<C \text{ is equivalent to } −C<x−A<C,$ $|x−A|>C \text{ is equivalent to } x−A<−C \text{ or } x−A>C.$ The $$<$$ or $$>$$ symbol may be replaced by $$\leq$$ or $$\geq$$. So, for this example, we could use this alternative approach. \begin{align} |x−5|&{\leq}4 \\ −4&{\leq}x−5{\leq}4 &\text{Rewrite by removing the absolute value bars.} \\ −4+5&{\leq}x−5+5{\leq}4+5 &\text{Isolate the x.} \\ 1&{\leq}x\leq9 \end{align} 1.6.5: Solve $$|x+2|\leq6$$. Solution $$−8{\leq}x\leq4$$ Given an absolute value function, solve for the set of inputs where the output is positive (or negative). 1. Set the function equal to zero, and solve for the boundary points of the solution set. 2. Use test points or a graph to determine where the function's output is positive or negative. Example 1.6.7: Using a Graphical Approach to Solve Absolute Value Inequalities Given the function $$f(x)=−\frac{1}{2}|4x−5|+3$$, determine the $$x$$-values for which the function values are negative. Solution We are trying to determine where $$f(x)<0$$, which is when $$−\frac{1}{2}|4x−5|+3<0$$. We begin by isolating the absolute value. \begin{align} -\frac{1}{2}|4x−5|&<−3 \;\;\; \text{Multiply both sides by –2, and reverse the inequality.} \\ |4x−5|&>6\end{align} Next we solve for the equality $$|4x−5|=6$$. \begin{align} 4x-5&=6 & 4x-5&=-6 \\ 4x&=11 \;\; &\text{or } \;\;\; 4x&=-1 \\ x&=\frac{11}{4} & x&=-\frac{1}{4} \end{align} Now, we can examine the graph of $$f$$ to observe where the output is negative. We will observe where the branches are below the $$x$$-axis. Notice that it is not even important exactly what the graph looks like, as long as we know that it crosses the horizontal axis at $$x=−\frac{1}{4}$$ and $$x=\frac{11}{4}$$ and that the graph has been reflected vertically. See Figure 1.6.11. Figure 1.6.11: Graph of an absolute function with x-intercepts at -0.25 and 2.75. We observe that the graph of the function is below the $$x$$-axis left of $$x=−\frac{1}{4}$$ and right of $$x=\frac{11}{4}$$. This means the function values are negative to the left of the first horizontal intercept at $$x=−\frac{1}{4}$$, and negative to the right of the second intercept at $$x=\frac{11}{4}$$. This gives us the solution to the inequality. $x<−\frac{1}{4} \text{ or } x>\frac{11}{4}$ In interval notation, this would be $$( −\infty,−0.25 )\cup( 2.75,\infty)$$. 1.6.6: Solve $$−2|k−4|\leq−6$$. Solution $$k\leq1$$ or $$k\geq7$$; in interval notation, this would be $$\left(−\infty,1\right]\cup\left[7,\infty\right)$$ ### Key Concepts • The absolute value function is commonly used to measure distances between points. • Applied problems, such as ranges of possible values, can also be solved using the absolute value function. • The graph of the absolute value function resembles a letter V. It has a corner point at which the graph changes direction. • In an absolute value equation, an unknown variable is the input of an absolute value function. • If the absolute value of an expression is set equal to a positive number, expect two solutions for the unknown variable. • An absolute value equation may have one solution, two solutions, or no solutions.
• An absolute value inequality is similar to an absolute value equation but takes the form | A |<B, | A |≤B, | A |>B, or | A |≥B. It can be solved by determining the boundaries of the solution set and then testing which segments are in the set. • Absolute value inequalities can also be solved graphically. ### Section Exercises Verbal Exercise 1.6.1 How do you solve an absolute value equation? Solution Isolate the absolute value term so that the equation is of the form $$|A|=B$$. Form one equation by setting the expression inside the absolute value symbol, $$A$$, equal to the expression on the other side of the equation, $$B$$. Form a second equation by setting $$A$$ equal to the opposite of the expression on the other side of the equation, $$−B$$. Solve each equation for the variable. Exercise 1.6.2 How can you tell whether an absolute value function has two x-intercepts without graphing the function? Exercise 1.6.3 When solving an absolute value function, the isolated absolute value term is equal to a negative number. What does that tell you about the graph of the absolute value function? Solution The graph of the absolute value function does not cross the x-axis, so the graph is either completely above or completely below the x-axis. Exercise 1.6.4 How can you use the graph of an absolute value function to determine the x-values for which the function values are negative? Exercise 1.6.5 How do you solve an absolute value inequality algebraically? Solution First determine the boundary points by finding the solution(s) of the equation. Use the boundary points to form possible solution intervals. Choose a test value in each interval to determine which values satisfy the inequality. Algebraic Exercise 1.6.6 Describe all numbers $$x$$ that are at a distance of 4 from the number 8. Express this using absolute value notation. Exercise 1.6.7 Describe all numbers $$x$$ that are at a distance of $$\dfrac{1}{2}$$ from the number −4. Express this using absolute value notation. Solution $$|x+4|= \dfrac{1}{2}$$ Exercise 1.6.8 Describe the situation in which the distance that point $$x$$ is from 10 is at least 15 units. Express this using absolute value notation. Exercise 1.6.9 Find all function values $$f(x)$$ such that the distance from $$f(x)$$ to the value 8 is less than 0.03 units. Express this using absolute value notation. Solution $$|f(x)−8|<0.03$$ For the following exercises, solve the equations below and express the answer using set notation. Exercise 1.6.10 $$|x+3|=9$$ Exercise 1.6.11 $$|6−x|=5$$ Solution $$\{1,11\}$$ Exercise 1.6.12 $$|5x−2|=11$$ Exercise 1.6.13 $$|4x−2|=11$$ Solution $$\{-\dfrac{9}{4}, \dfrac{13}{4}\}$$ Exercise 1.6.14 $$2|4−x|=7$$ Exercise 1.6.15 $$3|5−x|=5$$ Solution $$\{\dfrac{10}{3},\dfrac{20}{3}\}$$ Exercise 1.6.16 $$3|x+1|−4=5$$ Exercise 1.6.17 $$5|x−4|−7=2$$ Solution $$\{\dfrac{11}{5}, \dfrac{29}{5}\}$$ Exercise 1.6.18 $$0=−|x−3|+2$$ Exercise 1.6.19 $$2|x−3|+1=2$$ Solution $$\{\dfrac{5}{2}, \dfrac{7}{2}\}$$ Exercise 1.6.20 $$|3x−2|=7$$ Exercise 1.6.21 $$|3x−2|=−7$$ Solution No solution Exercise 1.6.22 $$|\dfrac{1}{2}x−5|=11$$ Exercise 1.6.23 $$| \dfrac{1}{3}x+5|=14$$ Solution $$\{−57,27\}$$ Exercise 1.6.24 $$−|\dfrac{1}{3}x+5|+14=0$$ For the following exercises, find the x- and y-intercepts of the graphs of each function.
Exercise 1.6.25 \(f(x)=2|x+1|−10\)
Solution \((0,−8)\); \((−6,0)\), \((4,0)\)
Exercise 1.6.26 \(f(x)=4|x−3|+4\)
Exercise 1.6.27 \(f(x)=−3|x−2|−1\)
Solution \((0,−7)\); no x-intercepts
Exercise 1.6.28 \(f(x)=−2|x+1|+6\)

For the following exercises, solve each inequality and write the solution in interval notation.

Exercise 1.6.29 \(| x−2 |>10\)
Solution \((−\infty,−8)\cup(12,\infty)\)
Exercise 1.6.30 \(2|v−7|−4\geq42\)
Exercise 1.6.31 \(|3x−4|\geq8\)
Solution \(\left(−\infty,−\dfrac{4}{3}\right]\cup\left[4,\infty\right)\)
Exercise 1.6.32 \(|x−4|\geq8\)
Exercise 1.6.33 \(|3x−5|\geq13\)
Solution \(\left(−\infty,− \dfrac{8}{3}\right]\cup\left[6,\infty\right)\)
Exercise 1.6.34 \(|3x−5|\geq−13\)
Exercise 1.6.35 \(|\dfrac{3}{4}x−5|\geq7\)
Solution \(\left(-\infty,-\dfrac{8}{3}\right]\cup\left[16,\infty\right)\)
Exercise 1.6.36 \(|\dfrac{3}{4}x−5|+1\leq16\)

Graphical

For the following exercises, graph the absolute value function. Plot at least five points by hand for each graph.

Exercise 1.6.37 \(y=|x−1|\)
Solution
Exercise 1.6.38 \(y=|x+1|\)
Exercise 1.6.39 \(y=|x|+1\)

For the following exercises, graph the given functions by hand.

Exercise 1.6.40 \(y=|x|−2\)
Exercise 1.6.41 \(y=−|x|\)
Solution
Exercise 1.6.42 \(y=−|x|−2\)
Exercise 1.6.43 \(y=−|x−3|−2\)
Solution
Exercise 1.6.44 \(f(x)=−|x−1|−2\)
Exercise 1.6.45 \(f(x)=−|x+3|+4\)
Solution
Exercise 1.6.46 \(f(x)=2|x+3|+1\)
Exercise \(f(x)=3|x−2|+3\)
Solution
Exercise 1.6.47 \(f(x)=|2x−4|−3\)
Exercise 1.6.48 \(f(x)=|3x+9|+2\)
Solution
Exercise 1.6.49 \(f(x)=−|x−1|−3\)
Exercise 1.6.50 \(f(x)=−|x+4|−3\)
Solution
Exercise 1.6.51 \(f(x)=\dfrac{1}{2}|x+4|−3\)

Technology

Exercise 1.6.52 Use a graphing utility to graph \(f(x)=10|x−2|\) on the viewing window \([0,4]\). Identify the corresponding range. Show the graph.
Solution range: \([0,20]\)
Exercise 1.6.53 Use a graphing utility to graph \(f(x)=−100|x|+100\) on the viewing window \([−5,5]\). Identify the corresponding range. Show the graph.

For the following exercises, graph each function using a graphing utility. Specify the viewing window.

Exercise 1.6.54 \(f(x)=−0.1|0.1(0.2−x)|+0.3\)
Solution x-intercepts:
Exercise 1.6.55 \(f(x)=4 \times10^{9}|x−(5 \times 10^9)|+2 \times10^9\)

Extensions

For the following exercises, solve the inequality.

Exercise 1.6.56 \(|−2x− \dfrac{2}{3}(x+1)|+3>−1\)
Solution \((−\infty,\infty)\)
Exercise 1.6.57 If possible, find all values of a such that there are no x-intercepts for \(f(x)=2|x+1|+a\).
Exercise 1.6.58 If possible, find all values of a such that there are no y-intercepts for \(f(x)=2|x+1|+a\).
Solution There is no such value of a: the graph of the absolute value function always intersects the y-axis at \(x=0\), so it always has a y-intercept.

Real-World Applications

Exercise 1.6.59 Cities A and B are on the same east-west line. Assume that city A is located at the origin. If the distance from city A to city B is at least 100 miles and \(x\) represents the distance from city B to city A, express this using absolute value notation.
Exercise 1.6.60 The true proportion \(p\) of people who give a favorable rating to Congress is 8% with a margin of error of 1.5%. Describe this statement using an absolute value equation.
Solution \(|p−0.08|\leq0.015\)
Exercise 1.6.61 Students who score within 18 points of the number 82 will pass a particular test. Write this statement using absolute value notation and use the variable \(x\) for the score.
Exercise 1.6.62 A machinist must produce a bearing that is within 0.01 inches of the correct diameter of 5.0 inches. 
Using \(x\) as the diameter of the bearing, write this statement using absolute value notation.
Solution \(|x−5.0|\leq0.01\)
Exercise 1.6.63 The tolerance for a ball bearing is 0.01. If the true diameter of the bearing is to be 2.0 inches and the measured value of the diameter is \(x\) inches, express the tolerance using absolute value notation.

### Glossary

absolute value equation: an equation of the form \(|A|=B\), with \(B\geq0\); it will have solutions when \(A=B\) or \(A=−B\)

absolute value inequality: a relationship in the form \(|A|<B\), \(|A|{\leq}B\), \(|A|>B\), or \(|A|{\geq}B\)
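To make the boundary-point method used throughout these exercises concrete, here is a small Python check of Example 1.6.7 above (a sketch of ours, not part of the original text; the function name and test values are illustrative):

```python
# Verify the sign analysis of f(x) = -1/2 |4x - 5| + 3 from Example 1.6.7.

def f(x):
    return -0.5 * abs(4 * x - 5) + 3

# Boundary points found by solving |4x - 5| = 6.
boundaries = [-1 / 4, 11 / 4]

# One test point in each interval determined by the boundaries.
for x in [-1, 1, 3]:
    sign = "negative" if f(x) < 0 else "non-negative"
    print(f"f({x}) = {f(x):+.2f} -> {sign}")
# Expected output: negative, non-negative, negative,
# matching the solution (-inf, -1/4) U (11/4, inf).
```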
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 10, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9587843418121338, "perplexity": 416.87616457306024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00549-ip-10-171-10-70.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/74752/overrightarrow-with-garamond-gap-in-the-arrow
# \overrightarrow with garamond: gap in the arrow

Typesetting $\overrightarrow{OQ}$ yields a very unpleasant gap in the arrow when used with garamond. Here's a minimal working example that shows the problem. It looks okay without the garamond line.

```latex
\documentclass{article}
\usepackage[garamond]{mathdesign}
\begin{document}
$\overrightarrow{OQ}.$
\end{document}
```

This is clearly an unwanted feature. Is it possible to fix this? (Discarding garamond is not an option: I've already typeset and printed several hundreds of pages with it, and I want uniformity in my documents for my students.)

-

The problem is that the minus sign and the arrow in the math font that's used with the garamond option are shorter than usual, and this breaks \rightarrowfill, which is used in \overrightarrow. You should repair the glitch by redefining \rightarrowfill:

```latex
\documentclass{article}
\usepackage[garamond]{mathdesign}
\makeatletter
\def\rightarrowfill{%
  $\m@th\smash-\mkern-9mu
   \cleaders\hbox{$\mkern-2mu\smash-\mkern-2mu$}\hfill
   \mkern-9mu\mathord\rightarrow$}
\makeatother
\begin{document}
$\overrightarrow{OQ}$

\makebox[1.0em]{\rightarrowfill}
\makebox[1.1em]{\rightarrowfill}
\makebox[1.2em]{\rightarrowfill}
\makebox[1.3em]{\rightarrowfill}
\makebox[1.4em]{\rightarrowfill}
\makebox[1.5em]{\rightarrowfill}
\makebox[1.6em]{\rightarrowfill}
\makebox[1.7em]{\rightarrowfill}
\makebox[1.8em]{\rightarrowfill}
\makebox[1.9em]{\rightarrowfill}
\makebox[2.0em]{\rightarrowfill}
\makebox[2.1em]{\rightarrowfill}
\end{document}
```

Using amsmath (which is implicitly loaded by amsart and amsbook) the patch should also be applied to \arrowfill@; it's simpler to use etoolbox:

```latex
\usepackage{etoolbox}
\makeatletter
\patchcmd\arrowfill@{-7mu}{-9mu}{}{}
\patchcmd\arrowfill@{-7mu}{-9mu}{}{}
\patchcmd\rightarrowfill{-7mu}{-9mu}{}{}
\patchcmd\rightarrowfill{-7mu}{-9mu}{}{}
\makeatother
```

- Thanks so much, it works fine! Well, in fact I'm using the amsbook class, which uses \rightarrowfill@ instead of \rightarrowfill... so I redefined \overrightarrow as found in base/fontmath.ltx and with your fix it seems to work perfectly! – gniourf_gniourf Sep 30 '12 at 16:26 @gniourf_gniourf I've added the patch for amsmath – egreg Sep 30 '12 at 16:32 fantastic! I'm now using your patch for amsmath, it's much nicer than my solution. Thanks a lot! – gniourf_gniourf Sep 30 '12 at 16:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9630304574966431, "perplexity": 2026.9053694485804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447906.82/warc/CC-MAIN-20151124205407-00158-ip-10-71-132-137.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/52732-am-i-doing-calculus-problem-right-related-rates.html
Math Help - Am I doing this calculus problem right?? (related rates)? 1. Am I doing this calculus problem right?? (related rates)? Here is the question #42 part A and B This is the work I have done so far and I would like to know if it is right If I am doing anything wrong please correct me. Also when I replace r with (3/5)h I think it's supposed to be (3/6). 2. Originally Posted by imbored205 Here is the question #42 part A and B This is the work I have done so far and I would like to know if it is right If I am doing anything wrong please correct me. Also when I replace r with (3/5)h I think it's supposed to be (3/6). All looks well...until you replace r in part (b). It should be $r=\frac{2.5}{5}h$. You can make this adjustment by keeping in mind that the radius has a value of 3 only when the height is 6. It is also good to take note that the radius is half the value of the height withing the cone... --Chris
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8870801329612732, "perplexity": 258.6184850451282}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701152959.66/warc/CC-MAIN-20160205193912-00001-ip-10-236-182-209.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/9594/disagreement-between-two-definitions-of-the-singular-boundary-map
# disagreement between two definitions of the singular boundary map

Hi everyone, I have a little problem with the definition of the singular boundary map in singular homology theory. There appears to be some disagreement between two authors. The first one is Hatcher in his 'Algebraic Topology', who uses a very intuitive and natural analogy with the definition of the boundary operator in simplicial homology (this is the simple one, page 108). On the other hand we have Rotman in his 'An Introduction to Homological Algebra', who states that the definition used by Hatcher is wrong because the images under this operator aren't singular simplexes (page 29). I think this issue must have something to do with the barycentric coordinates, but I'm not sure. In fact, I don't understand his alternative definition that uses face maps, where he puts down σε. Is that some kind of product? If anyone could give me an example for n=2, that would be awesome. However, the main question is about the differences between these two definitions. Any help is welcome. Thanks.

- Both authors are defining exactly the same thing, just with slight differences of notation. – Reid Barton Dec 23 '09 at 6:23

The point Rotman is trying to make is the following: if you have a singular $q$-simplex $\sigma:\Delta^q\to X$, then for example the restriction $\sigma|_{[e_0,e_2,\dots,e_q]}$ is not a singular $(q-1)$-simplex, simply because its domain $[e_0,e_2,\dots,e_q]$ is not the standard simplex $\Delta^{q-1}$, which is instead $[e_0,e_1,\dots,e_{q-1}]$. He fixes this by composing with the face maps $\varepsilon$, so as to get the domains right.
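For readers puzzled by the σε notation: it denotes composition with a face map, not a product. Here is the standard formula being described (our rendering in LaTeX, not a quotation from either book):

```latex
% The i-th face map \varepsilon_i^q : \Delta^{q-1} \to \Delta^q hits every
% vertex except e_i, so \sigma\varepsilon_i^q means the composite
% \sigma \circ \varepsilon_i^q, and the singular boundary map is
\[
  \partial\sigma \;=\; \sum_{i=0}^{q} (-1)^i \,\sigma \circ \varepsilon_i^q .
\]
% For q = 2 this reads
\[
  \partial\sigma \;=\; \sigma\varepsilon_0 - \sigma\varepsilon_1 + \sigma\varepsilon_2 ,
\]
% each \sigma\varepsilon_i being a genuine singular 1-simplex \Delta^1 \to X.
```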
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9377800226211548, "perplexity": 206.45025787663826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398456975.30/warc/CC-MAIN-20151124205416-00347-ip-10-71-132-137.ec2.internal.warc.gz"}
https://arxiv.org/abs/1708.01062v2
Title: Search for single production of a vector-like T quark decaying to a Z boson and a top quark in proton-proton collisions at $\sqrt{s} = 13$ TeV

Abstract: A search is presented for single production of a vector-like quark (T) decaying to a Z boson and a top quark, with the Z boson decaying leptonically and the top quark decaying hadronically. The search uses data collected by the CMS experiment in proton-proton collisions at a center-of-mass energy of 13 TeV in 2016, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. The presence of forward jets is a particular characteristic of single production of vector-like quarks that is used in the analysis. For the first time, different T quark width hypotheses are studied, from negligibly small to 30% of the new particle mass. At the 95% confidence level, the product of cross section and branching fraction is excluded above values in the range 0.26-0.04 pb for T quark masses in the range 0.7-1.7 TeV, assuming a negligible width. A similar sensitivity is observed for widths of up to 30% of the T quark mass. The production of a heavy Z' boson decaying to Tt, with T $\rightarrow$ tZ, is also searched for, and limits on the product of cross section and branching fractions for this process are set between 0.13 and 0.06 pb for Z' boson masses in the range from 1.5 to 2.5 TeV.

Comments: Replaced with the published version. Added the journal reference. All the figures and tables can be found at this http URL (CMS Public Pages)
Subjects: High Energy Physics - Experiment (hep-ex)
Journal reference: Phys. Lett. B 781 (2018) 574
DOI: 10.1016/j.physletb.2018.04.036
Report number: CMS-B2G-17-007, CERN-EP-2017-155
Cite as: arXiv:1708.01062 [hep-ex] (or arXiv:1708.01062v2 [hep-ex] for this version)
Submission history From: The CMS Collaboration [view email] [v1] Thu, 3 Aug 2017 09:09:06 UTC (482 KB) [v2] Fri, 15 Jun 2018 12:11:21 UTC (483 KB)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9743157625198364, "perplexity": 1751.3251579997057}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371829677.89/warc/CC-MAIN-20200409024535-20200409055035-00105.warc.gz"}
https://aitopics.org/mlt?cdid=arxivorg%3A09D1E0C3&dimension=concept-tags
### Learning to Control in Metric Space with Optimal Regret

We study online reinforcement learning for finite-horizon deterministic control systems with *arbitrary* state and action spaces. Suppose that the transition dynamics and reward function are unknown, but the state and action space is endowed with a metric that characterizes the proximity between different states and actions. We provide a surprisingly simple upper-confidence reinforcement learning algorithm that uses a function approximation oracle to estimate optimistic Q functions from experiences. We show that the regret of the algorithm after $K$ episodes is $O(HL(KH)^{\frac{d-1}{d}})$ where $L$ is a smoothness parameter, and $d$ is the doubling dimension of the state-action space with respect to the given metric. We also establish a near-matching regret lower bound. The proposed method can be adapted to work for more structured transition systems, including the finite-state case and the case where value functions are linear combinations of features, where the method also achieves the optimal regret.

### Towards An Understanding of What is Learned: Extracting Multi-Abstraction-Level Knowledge from Learning Agents

Machine Learning approaches used in the context of agents (like Reinforcement Learning) commonly result in weighted state-action pair representations (where the weights determine which action should be performed, given a perceived state). The weighted state-action pairs are stored, e.g., in tabular form or as approximated functions, which makes the learned knowledge hard for humans to comprehend, since the number of state-action pairs can be extremely high. In this paper, a knowledge extraction approach is presented which extracts compact and comprehensible knowledge bases from such weighted state-action pairs. For this purpose, so-called Hierarchical Knowledge Bases are described which allow for a top-down view on the learned knowledge at an adequate level of abstraction. The approach can be applied to gain structural insights into a problem and its solution, and it can be easily transformed into common knowledge representation formalisms, like normal logic programs.

### Reinforcement Learning for Mixed Open-loop and Closed-loop Control

Closed-loop control relies on sensory feedback that is usually assumed to be free. But if sensing incurs a cost, it may be cost-effective to take sequences of actions in open-loop mode. We describe a reinforcement learning algorithm that learns to combine open-loop and closed-loop control when sensing incurs a cost. Although we assume reliable sensors, use of open-loop control means that actions must sometimes be taken when the current state of the controlled system is uncertain. This is a special case of the hidden-state problem in reinforcement learning, and to cope, our algorithm relies on short-term memory.

### Asynchronous n-steps Q-learning

Q-learning is the most famous Temporal Difference algorithm. The original Q-learning algorithm tries to determine the state-action value function that minimizes the temporal-difference error. We will use an optimizer (the simplest one, gradient descent) to compute the values of the state-action function. First of all we need to compute the gradient of the loss function. Gradient descent finds the minimum of a function by subtracting the gradient, with respect to the parameters of the function, from the parameters.
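As a minimal illustration of the update described in the Q-learning abstract above, here is a tabular semi-gradient step in Python (the state indices, rewards and hyperparameters are placeholders of ours):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q[s, a] toward the TD target.

    This is the gradient-descent view from the abstract: the update
    subtracts the (semi-)gradient of the squared TD error 0.5 * delta**2
    with respect to Q[s, a], which is simply -delta (the target is
    treated as a constant).
    """
    td_target = r + gamma * np.max(Q[s_next])   # bootstrap from the best next action
    delta = td_target - Q[s, a]                 # TD error
    Q[s, a] += alpha * delta                    # gradient step on 0.5 * delta**2
    return Q

# Toy usage: 4 states, 2 actions.
Q = np.zeros((4, 2))
Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q[0, 1])   # 0.1
```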
### Pretraining Deep Actor-Critic Reinforcement Learning Algorithms With Expert Demonstrations

Pretraining with expert demonstrations has been found useful in speeding up the training process of deep reinforcement learning algorithms, since less online simulation data is required. Some approaches use supervised learning to speed up the process of feature learning; others pretrain the policies by imitating expert demonstrations. However, these methods are unstable and not suitable for actor-critic reinforcement learning algorithms. Also, some existing methods rely on the global-optimum assumption, which is not true in most scenarios. In this paper, we employ expert demonstrations in an actor-critic reinforcement learning framework, and meanwhile ensure that the performance is not affected by the fact that expert demonstrations are not globally optimal. We theoretically derive a method for computing policy gradients and value estimators with only expert demonstrations. Our method is theoretically plausible for actor-critic reinforcement learning algorithms that pretrain both policy and value functions. We apply our method to two of the typical actor-critic reinforcement learning algorithms, DDPG and ACER, and demonstrate with experiments that our method not only outperforms the RL algorithms without the pretraining process, but is also more simulation efficient.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8802433609962463, "perplexity": 558.7809220435631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314752.21/warc/CC-MAIN-20190819134354-20190819160354-00305.warc.gz"}
http://www.physicspages.com/2016/04/13/interacting-einstein-solids/
# Interacting Einstein solids Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 2.8. We’ve seen how to count micro- and macrostates in an Einstein solid. In a solid containing ${N}$ oscillators and ${q}$ quanta of energy, there are ${\binom{q+N-1}{q}}$ possible microstates. Consider now what happens if we have two such solids, ${A}$ and ${B}$, containing ${N_{A}}$ and ${N_{B}}$ oscillators and ${q_{A}}$ and ${q_{B}}$ quanta of energy. Each solid has its own set of microstates, but suppose that the solids can exchange energy quanta on a timescale that is quite long compared with the times over which quanta can travel between oscillators within each solid. We’re assuming that total energy ${q_{A}+q_{B}}$ is conserved in this process, but that ${q_{A}}$ and ${q_{B}}$ each can vary within this constraint (that is, the solids can exchange energy quanta between them). We’d like to investigate the probabilities of the various divisions of energy between the two solids. For any particular partition of the quanta, that is, for particular values of ${q_{A}}$ and ${q_{B}}$, the total number of microstates available to the compound system is $\displaystyle \Omega_{total}=\Omega_{A}\Omega_{B}=\binom{q_{A}+N_{A}-1}{q_{A}}\binom{q_{B}+N_{B}-1}{q_{B}} \ \ \ \ \ (1)$ This is true because for each microstate in solid ${A}$, we could have any of the ${\Omega_{B}}$ microstates in system ${B}$. If we consider all the possible partitions of quanta, the total number of microstates available to the compound system is the sum of this quantity over all possible values of ${q_{A}}$ (remember ${q_{B}=q_{total}-q_{A}}$ so ${q_{B}}$ isn’t an independent variable since ${q_{total}}$ is a constant). Looked at another way, we can view the compound solid as a single solid with ${N_{A}+N_{B}}$ oscillators and ${q_{A}+q_{B}}$ quanta, so the overall number of microstates is $\displaystyle \Omega_{overall}=\binom{q_{A}+q_{B}+N_{A}+N_{B}-1}{q_{A}+q_{B}} \ \ \ \ \ (2)$ The fundamental assumption of statistical mechanics is that if we look at the system at any instant of time, we are equally likely to find it in any one of these ${\Omega_{overall}}$ microstates. The question then becomes: given the division of the solid into two systems with the number of oscillators ${N_{A}}$ and ${N_{B}}$ in each solid fixed, what is the most likely distribution of the energy quanta between the two solids? That is, what is the most likely value of ${q_{A}}$? For relatively small systems, we can calculate these probabilities by brute force by just working out the binomial coefficients. For larger systems (ones containing a number of particles typical of macroscopic objects), this is no longer feasible so we need to resort to approximations. But for now, we can work out an example with manageable numbers. Before we begin, we need one final bit of terminology. We’ll refer to the macrostate of a compound solid as a particular division of the quanta between the two solids, without regard to how the quanta within each solid are distributed among the oscillators in that solid. In other words, each possible value of ${q_{A}}$ defines one macrostate. Since the possible values of ${q_{A}}$ are ${0,1,\ldots,q_{A}+q_{B}}$, there are ${q_{A}+q_{B}+1}$ possible macrostates in such a system. Example Suppose ${N_{A}=N_{B}=10}$ and ${q_{A}+q_{B}=20}$. There are therefore 21 possible macrostates. 
The number of microstates is $\displaystyle \Omega_{overall}=\binom{20+20-1}{20}=6.89\times10^{10} \ \ \ \ \ (3)$ The probability that all the energy is in solid ${A}$ is $\displaystyle \frac{1}{\Omega_{overall}}\binom{20+10-1}{20}\binom{0+10-1}{0}=\frac{10^{7}}{6.89\times10^{10}}=1.45\times10^{-4} \ \ \ \ \ (4)$ The probability that ${q_{A}=q_{B}=10}$ (that is, the energy is evenly distributed) is $\displaystyle \frac{1}{\Omega_{overall}}\binom{10+10-1}{10}\binom{10+10-1}{10}=0.1238 \ \ \ \ \ (5)$ The probability for a general value of ${q_{A}}$ is $\displaystyle Prob=\frac{1}{\Omega_{overall}}\binom{10+q_{A}-1}{q_{A}}\binom{10+20-q_{A}-1}{20-q_{A}} \ \ \ \ \ (6)$ A bar chart of the probabilities is shown here: [Bar chart: probability of each macrostate ${q_{A}=0,\ldots,20}$, sharply peaked at ${q_{A}=10}$.] It's much more likely that the quanta will distribute themselves equally between the two solids, and once such a state is achieved, it's unlikely that it will return to a state where one solid has a lot more quanta than the other. That is, a state where the quanta are distributed equally is said to be irreversible. ## 6 thoughts on “Interacting Einstein solids” 1. jaydeep singh THANKyou SIR JI SIR ONE THING IS MUCH CONFUSING TO ME, you have mentioned it in above paragraph that equally likely state are irreversible. but why these state are irreversible.??
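As a quick numerical check of the example in the post, the following Python sketch reproduces its numbers using only the formulas above (the variable names are ours):

```python
from math import comb

N_A = N_B = 10
q_total = 20

def omega(N, q):
    """Number of microstates of an Einstein solid: C(q + N - 1, q)."""
    return comb(q + N - 1, q)

omega_total = omega(N_A + N_B, q_total)
print(f"Omega_overall = {omega_total:.3e}")            # ~6.89e10, eq. (3)

# Probability of each macrostate q_A (eq. (6)).
probs = [omega(N_A, qA) * omega(N_B, q_total - qA) / omega_total
         for qA in range(q_total + 1)]
print(f"P(q_A = 20) = {probs[20]:.3e}")                # ~1.45e-4, eq. (4)
print(f"P(q_A = 10) = {probs[10]:.4f}")                # ~0.1238, eq. (5)
print(f"sum of probabilities = {sum(probs):.6f}")      # 1.000000, as it must be
```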
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 42, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9500304460525513, "perplexity": 249.98342067857678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120187.95/warc/CC-MAIN-20170423031200-00110-ip-10-145-167-34.ec2.internal.warc.gz"}
https://hal-mines-paristech.archives-ouvertes.fr/hal-01493631
# Yearly changes in solar radiation over New Caledonia and relations with changing atmospheric properties

Abstract : New Caledonia, a large island in the Tropical Pacific Ocean, experiences very sunny weather. On average, approximately 60 % of the solar radiation available at the top of the atmosphere reaches the ground. Solar radiation is an option for energy production. Because of the low cloudiness, direct solar radiation received on a plane normal to the sun rays (DNI) is large, and this raises interest in the exploitation of concentrating solar technologies (CST) that concentrate sun rays to produce electricity. A preliminary study has been performed to assess the potential of DNI. Local measurements reveal a decrease in direct and global solar radiation since 2004. DNI has decreased by 15 % over 10 years and the global radiation on a horizontal surface by 10 %. One reason is an increase in cloudiness. The ICOADS (International Comprehensive Ocean-Atmosphere Data Set) of the NOAA shows an increase from 2004 to 2006 but then a slightly declining plateau of the cloud cover for the area comprised between −18° and −22° N and 157° to 163° E. Cloud cover cannot be the sole cause of the decrease in solar radiation. Another reason is an increase in aerosol load. The MACC (Monitoring Atmosphere Composition and Climate) projects, funded by the European Commission, provide data sets on aerosol properties, from 2004 till present, as well as total column contents in water vapor and ozone. These data sets are a valuable tool to describe the dynamics of aerosols from year to year. Analysis of these data and estimates of the DNI and global radiation in clear-sky conditions provided by the McClear model exploiting the MACC data sets reveal an increase in the optical depth of the aerosols that yields a decrease of the DNI under clear-sky conditions, related to the decrease of the observed DNI.

Document type: Conference paper
https://hal-mines-paristech.archives-ouvertes.fr/hal-01493631
Contributor: Lucien Wald
Submitted on: Saturday, March 25, 2017 - 03:24:50
Last modified on: Tuesday, July 21, 2020 - 03:19:26

### Identifiers
• HAL Id : hal-01493631, version 1

### Citation
Philippe Blanc, Lucien Wald. Yearly changes in solar radiation over New Caledonia and relations with changing atmospheric properties. 14th EMS annual meeting, Oct 2014, Prague, Czech Republic. pp.2014 - 441. ⟨hal-01493631⟩
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8186593651771545, "perplexity": 3653.748593285349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737206.16/warc/CC-MAIN-20200807172851-20200807202851-00251.warc.gz"}
https://byjus.com/physics/stress/
# Stress - Definition And Types

In physics, stress is the force acting on unit area of a material. The effect of stress on a body is called strain. Stress can deform the body, and stress units measure how much force the material experiences. Depending on the direction of the deforming forces acting on the body, stress can be categorised into several types. Let us study them one by one.

## What is Stress?

When a deforming force is applied to an object, the object deforms. In order to bring the object back to its original shape and size, an opposing force is generated inside the object. This restoring force is equal in magnitude and opposite in direction to the applied deforming force. The measure of this restoring force generated per unit area of the material is called stress.

Thus, stress is defined as "the restoring force per unit area of the material". It is a tensor quantity, denoted by the Greek letter σ and measured in pascal or N/m2. Mathematically it is expressed as –

$\sigma =\frac{F}{A}$

Where,
• F is the restoring force measured in newton (N).
• A is the area of cross-section measured in m2.
• σ is the stress measured in N/m2 or Pa.

## Stress Units

Stress can be expressed using multiple units. Refer to the table given below for stress units.

| System of units | Stress units |
| --- | --- |
| Fundamental units | kg·m−1·s−2 |
| SI (derived units) | N/m2 |
| SI (derived units) | Pa (pascal) |
| SI (mm) (derived units) | MPa or N/(mm)2 |
| US unit (ft) | lbf/ft2 |
| US unit (inch) | psi (lbf/inch2) |

## Types of Stress

There are several different types of stress in physics, but mainly it is categorised into two forms: normal stress and tangential (shearing) stress. Some stress types are discussed in the points below.

## Normal Stress

As the name suggests, stress is said to be normal stress when the direction of the deforming force is perpendicular to the cross-sectional area of the body. When the length of a wire or the volume of a body changes, the stress is normal. Normal stress can be further classified into two types based on the dimension of the force:
• Longitudinal stress
• Bulk stress or volumetric stress

## Longitudinal Stress

Consider a cylinder. When the two cross-sectional faces of the cylinder are subjected to equal and opposite forces, the stress experienced by the cylinder is called longitudinal stress.

Longitudinal stress = Deforming force / Area of cross-section = F/A

As the name suggests, when the body is under longitudinal stress –
• the deforming force acts along the length of the body;
• longitudinal stress results in a change in the length of the body, and hence also causes a slight change in its diameter.

Longitudinal stress either stretches the object or compresses the object along its length. It can thus be further classified into two types based on the direction of the deforming force:
• Tensile stress
• Compressive stress

### Tensile Stress

If the deforming force or applied force results in an increase in the object's length, the resulting stress is termed tensile stress. For example: when a rod or wire is stretched by pulling it with equal and opposite forces (outwards) at both ends.

### Compressive Stress

If the deforming force or applied force results in a decrease in the object's length, the resulting stress is termed compressive stress. For example: when a rod or wire is compressed/squeezed by pushing it with equal and opposite forces (inwards) at both ends.
## Bulk Stress or Volume Stress

When the deforming force or applied force acts from all directions, resulting in a change of volume of the object, the stress is called volumetric stress or bulk stress. In short, when the volume of the body changes due to the deforming force, it is termed volume stress.

## Shearing Stress or Tangential Stress

When the direction of the deforming force or external force is parallel to the cross-sectional area, the stress experienced by the object is called shearing stress or tangential stress. This results in a change in the shape of the body.

## Summary

In short, stress can be visualised as: [Chart: stress splits into normal stress (longitudinal stress – tensile or compressive – and bulk stress) and tangential or shearing stress.]

## Practice Questions For Stress

Q1: What is stress? Ans: Stress is the measure of the restoring force per unit area.

Q2: What is the unit of stress? Ans: The unit of stress is the pascal (Pa), i.e. N/m2.

Q3: Is stress a vector quantity? Ans: No. As noted above, stress is a tensor quantity.

Q4: What is the effect of a deforming force? Ans: The deforming force can change the shape, volume or size of the object.

Q5: What is the direction of the deforming force in the case of shearing stress? Ans: The deforming force is parallel to the area of cross-section.

Q6: What is the nature of the restoring force? Ans: The restoring force is equal in magnitude and opposite in direction to the deforming force or external force.

Q7: Name the types of normal stress. Ans: Longitudinal stress and bulk or volume stress are the two types of normal stress.

Q8: What is the direction of the deforming force in the case of longitudinal stress? Ans: The deforming force is perpendicular to the area of cross-section.

Q9: Name the types of longitudinal stress. Ans: Tensile stress and compressive stress are the two types of longitudinal stress.

Q10: Define longitudinal stress. Ans: Stress experienced by an object along its length, due to the presence of equal and opposite deforming forces perpendicular to the area of cross-section, is called longitudinal stress.

Q11: What does bulk stress do to an object? Ans: Bulk stress results in a change in the volume of the object.

Q12: What does tangential stress do to an object? Ans: Tangential stress results in a change in the shape of the object.

Q13: Define tangential or shear stress. Ans: When the direction of the deforming force or external force is parallel to the cross-sectional area, the stress experienced by the object is called shearing stress or tangential stress.

Q14: Give the expression for stress and explain the terms. Ans: The expression for stress is given by $\sigma =\frac{F}{A}$ where F is the restoring force, A is the area of cross-section, and σ is the stress.

Q15: A rod is stretched by pulling at both ends. Name the type of stress experienced by the rod. Ans: Tensile stress.

Hope you have understood stress: its definition, formula, units, and types – normal stress, shear or tangential stress, longitudinal stress, bulk or volume stress, tensile stress and compressive stress.
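As a small worked example of the formula σ = F/A above (the numbers below are illustrative and not from the article):

```python
def stress(force_newton, area_m2):
    """Stress = restoring force per unit area, in pascal (N/m^2)."""
    return force_newton / area_m2

# A 2000 N force on a 4 cm^2 cross-section (4e-4 m^2):
sigma = stress(2000.0, 4e-4)
print(f"{sigma:.3e} Pa = {sigma / 1e6:.1f} MPa")   # 5.000e+06 Pa = 5.0 MPa
```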
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8566714525222778, "perplexity": 1071.8600484945423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371893683.94/warc/CC-MAIN-20200410075105-20200410105605-00120.warc.gz"}
https://www.math3ma.com/archive/december-2015
# A Recipe for the Universal Cover of X⋁Y Below is a general method —a recipe, if you will —for computing the universal cover of the wedge sum $X\vee Y$ of arbitrary topological spaces $X$ and $Y$. This is simply a short-and-quick guideline that my prof mentioned in class, and I thought it'd be helpful to share on the blog. To help illustrate each step, we'll consider the case when $X=T^2$ is the torus and $Y=S^1$ is the circle. Welcome to part five of a six-part series where we prove that the fundamental group of the circle $\pi_1(S^1)$ is isomorphic to $\mathbb{Z}$. In this post we prove that our homomorphism from $\mathbb{Z}$ to $\pi_1(S^1)$ is injective. The proof follows that found in Hatcher's Algebraic Topology section 1.1.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9691799879074097, "perplexity": 189.24198837648376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371656216.67/warc/CC-MAIN-20200406164846-20200406195346-00057.warc.gz"}
https://en.m.wikipedia.org/wiki/Quantum_harmonic_oscillator
# Quantum harmonic oscillator Some trajectories of a harmonic oscillator according to Newton's laws of classical mechanics (A–B), and according to the Schrödinger equation of quantum mechanics (C–H). In A–B, the particle (represented as a ball attached to a spring) oscillates back and forth. In C–H, some solutions to the Schrödinger Equation are shown, where the horizontal axis is position, and the vertical axis is the real part (blue) or imaginary part (red) of the wavefunction. C, D, E, F, but not G, H, are energy eigenstates. H is a coherent state—a quantum state that approximates the classical trajectory. The quantum harmonic oscillator is the quantum-mechanical analog of the classical harmonic oscillator. Because an arbitrary potential can usually be approximated as a harmonic potential at the vicinity of a stable equilibrium point, it is one of the most important model systems in quantum mechanics. Furthermore, it is one of the few quantum-mechanical systems for which an exact, analytical solution is known.[1][2][3] ## One-dimensional harmonic oscillator ### Hamiltonian and energy eigenstates Wavefunction representations for the first eight bound eigenstates, n = 0 to 7. The horizontal axis shows the position x. Note: The graphs are not normalized, and the signs of some of the functions differ from those given in the text. Corresponding probability densities. The Hamiltonian of the particle is: ${\displaystyle {\hat {H}}={\frac {{\hat {p}}^{2}}{2m}}+{\frac {1}{2}}k{\hat {x}}^{2}={\frac {{\hat {p}}^{2}}{2m}}+{\frac {1}{2}}m\omega ^{2}{\hat {x}}^{2}\,,}$ where m is the particle's mass, k is the force constant, ${\displaystyle \omega ={\sqrt {\frac {k}{m}}}}$  is the angular frequency of the oscillator, ${\displaystyle {\hat {x}}}$  is the position operator (given by x), and ${\displaystyle {\hat {p}}}$  is the momentum operator (given by ${\displaystyle {\hat {p}}=-i\hbar {\partial \over \partial x}\,}$ ). The first term in the Hamiltonian represents the kinetic energy of the particle, and the second term represents its potential energy, as in Hooke's law. One may write the time-independent Schrödinger equation, ${\displaystyle {\hat {H}}\left|\psi \right\rangle =E\left|\psi \right\rangle ~,}$ where E denotes a to-be-determined real number that will specify a time-independent energy level, or eigenvalue, and the solution |ψ denotes that level's energy eigenstate. One may solve the differential equation representing this eigenvalue problem in the coordinate basis, for the wave function x|ψ⟩ = ψ(x), using a spectral method. It turns out that there is a family of solutions. In this basis, they amount to Hermite functions, ${\displaystyle \psi _{n}(x)={\frac {1}{\sqrt {2^{n}\,n!}}}\cdot \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\cdot e^{-{\frac {m\omega x^{2}}{2\hbar }}}\cdot H_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),\qquad n=0,1,2,\ldots .}$ The functions Hn are the physicists' Hermite polynomials, ${\displaystyle H_{n}(z)=(-1)^{n}~e^{z^{2}}{\frac {d^{n}}{dz^{n}}}\left(e^{-z^{2}}\right).}$ The corresponding energy levels are ${\displaystyle E_{n}=\hbar \omega \left(n+{1 \over 2}\right)=(2n+1){\hbar \over 2}\omega ~.}$ This energy spectrum is noteworthy for three reasons. First, the energies are quantized, meaning that only discrete energy values (integer-plus-half multiples of ħω) are possible; this is a general feature of quantum-mechanical systems when a particle is confined. 
Second, these discrete energy levels are equally spaced, unlike in the Bohr model of the atom, or the particle in a box. Third, the lowest achievable energy (the energy of the n = 0 state, called the ground state) is not equal to the minimum of the potential well, but ħω/2 above it; this is called zero-point energy. Because of the zero-point energy, the position and momentum of the oscillator in the ground state are not fixed (as they would be in a classical oscillator), but have a small range of variance, in accordance with the Heisenberg uncertainty principle. The ground state probability density is concentrated at the origin, which means the particle spends most of its time at the bottom of the potential well, as one would expect for a state with little energy. As the energy increases, the probability density peaks at the classical "turning points", where the state's energy coincides with the potential energy. (See the discussion below of the highly excited states.) This is consistent with the classical harmonic oscillator, in which the particle spends more of its time (and is therefore more likely to be found) near the turning points, where it is moving the slowest. The correspondence principle is thus satisfied. Moreover, special nondispersive wave packets, with minimum uncertainty, called coherent states, oscillate very much like classical objects, as illustrated in the figure; they are not eigenstates of the Hamiltonian. Probability densities |ψn(x)|2 for the bound eigenstates, beginning with the ground state (n = 0) at the bottom and increasing in energy toward the top. The horizontal axis shows the position x, and brighter colors represent higher probability densities. The "ladder operator" method, developed by Paul Dirac, allows extraction of the energy eigenvalues without directly solving the differential equation. It is generalizable to more complicated problems, notably in quantum field theory. Following this approach, we define the operators a and its adjoint a†, {\displaystyle {\begin{aligned}a&={\sqrt {m\omega \over 2\hbar }}\left({\hat {x}}+{i \over m\omega }{\hat {p}}\right)\\a^{\dagger }&={\sqrt {m\omega \over 2\hbar }}\left({\hat {x}}-{i \over m\omega }{\hat {p}}\right)\end{aligned}}} This leads to the useful representation of ${\displaystyle {\hat {x}}}$  and ${\displaystyle {\hat {p}}}$ , {\displaystyle {\begin{aligned}{\hat {x}}&={\sqrt {{\frac {\hbar }{2}}{\frac {1}{m\omega }}}}(a^{\dagger }+a)\\{\hat {p}}&=i{\sqrt {{\frac {\hbar }{2}}m\omega }}(a^{\dagger }-a)~.\end{aligned}}} The operator a is not Hermitian, since it and its adjoint a† are not equal. The energy eigenstates |n⟩, when operated on by these ladder operators, give {\displaystyle {\begin{aligned}a^{\dagger }|n\rangle &={\sqrt {n+1}}|n+1\rangle \\a|n\rangle &={\sqrt {n}}|n-1\rangle .\end{aligned}}} It is then evident that a†, in essence, appends a single quantum of energy to the oscillator, while a removes a quantum. For this reason, they are sometimes referred to as "creation" and "annihilation" operators. 
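A quick numerical illustration of these ladder operators, using a finite matrix truncation of the Fock space (an assumption of this sketch; the commutator identity then fails only in the last diagonal entry):

```python
import numpy as np

n_max = 6                                   # truncation of the Fock space
n = np.arange(1, n_max)

# Annihilation operator a: a|n> = sqrt(n)|n-1>, i.e. sqrt(n) on the superdiagonal.
a = np.diag(np.sqrt(n), k=1)
a_dag = a.conj().T                          # creation operator a^dagger

# Canonical commutator [a, a^dagger] = 1 (identity, up to the truncation edge).
comm = a @ a_dag - a_dag @ a
print(np.diag(comm))                        # [1, 1, 1, 1, 1, 1 - n_max] -- edge artifact

# The number operator a^dagger a has eigenvalues 0, 1, 2, ...
print(np.diag(a_dag @ a))                   # [0, 1, 2, 3, 4, 5]
```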
From the relations above, we can also define a number operator N, which has the following property: {\displaystyle {\begin{aligned}N&=a^{\dagger }a\\N\left|n\right\rangle &=n\left|n\right\rangle .\end{aligned}}} The following commutators can be easily obtained by substituting the canonical commutation relation, ${\displaystyle [a,a^{\dagger }]=1,\qquad [N,a^{\dagger }]=a^{\dagger },\qquad [N,a]=-a,}$ And the Hamilton operator can be expressed as ${\displaystyle {\hat {H}}=\hbar \omega \left(N+{\frac {1}{2}}\right),}$ so the eigenstate of N is also the eigenstate of energy. The commutation property yields {\displaystyle {\begin{aligned}Na^{\dagger }|n\rangle &=\left(a^{\dagger }N+[N,a^{\dagger }]\right)|n\rangle \\&=\left(a^{\dagger }N+a^{\dagger }\right)|n\rangle \\&=(n+1)a^{\dagger }|n\rangle ,\end{aligned}}} and similarly, ${\displaystyle Na|n\rangle =(n-1)a|n\rangle .}$ This means that a acts on |n⟩ to produce, up to a multiplicative constant, |n–1⟩, and a† acts on |n⟩ to produce |n+1⟩. For this reason, a is called an annihilation operator ("lowering operator"), and a† a creation operator ("raising operator"). The two operators together are called ladder operators. In quantum field theory, a and a† are alternatively called "annihilation" and "creation" operators because they destroy and create particles, which correspond to our quanta of energy. Given any energy eigenstate, we can act on it with the lowering operator, a, to produce another eigenstate with ħω less energy. By repeated application of the lowering operator, it seems that we can produce energy eigenstates down to E = −∞. However, since ${\displaystyle n=\langle n|N|n\rangle =\langle n|a^{\dagger }a|n\rangle ={\Bigl (}a|n\rangle {\Bigr )}^{\dagger }a|n\rangle \geqslant 0,}$ the smallest eigen-number is 0, and ${\displaystyle a\left|0\right\rangle =0.}$ In this case, subsequent applications of the lowering operator will just produce zero kets, instead of additional energy eigenstates. Furthermore, we have shown above that ${\displaystyle {\hat {H}}\left|0\right\rangle ={\frac {\hbar \omega }{2}}\left|0\right\rangle }$ Finally, by acting on |0⟩ with the raising operator and multiplying by suitable normalization factors, we can produce an infinite set of energy eigenstates ${\displaystyle \left\{\left|0\right\rangle ,\left|1\right\rangle ,\left|2\right\rangle ,\ldots ,\left|n\right\rangle ,\ldots \right\},}$ such that ${\displaystyle {\hat {H}}\left|n\right\rangle =\hbar \omega \left(n+{\frac {1}{2}}\right)\left|n\right\rangle ,}$ which matches the energy spectrum given in the preceding section. Arbitrary eigenstates can be expressed in terms of |0⟩, ${\displaystyle |n\rangle ={\frac {(a^{\dagger })^{n}}{\sqrt {n!}}}|0\rangle .}$ Proof: {\displaystyle {\begin{aligned}\langle n|aa^{\dagger }|n\rangle &=\langle n|\left([a,a^{\dagger }]+a^{\dagger }a\right)|n\rangle =\langle n|(N+1)|n\rangle =n+1\\\Rightarrow a^{\dagger }|n\rangle &={\sqrt {n+1}}|n+1\rangle \\\Rightarrow |n\rangle &={\frac {a^{\dagger }}{\sqrt {n}}}|n-1\rangle ={\frac {(a^{\dagger })^{2}}{\sqrt {n(n-1)}}}|n-2\rangle =\cdots ={\frac {(a^{\dagger })^{n}}{\sqrt {n!}}}|0\rangle .\end{aligned}}} #### Analytical questions The preceding analysis is algebraic, using only the commutation relations between the raising and lowering operators. Once the algebraic analysis is complete, one should turn to analytical questions. First, one should find the ground state, that is, the solution of the equation ${\displaystyle a\psi _{0}=0}$ . 
In the position representation, this is the first-order differential equation ${\displaystyle \left(x+{\frac {\hbar }{m\omega }}{\frac {d}{dx}}\right)\psi _{0}=0}$ , whose solution is easily found to be the Gaussian[4] ${\displaystyle \psi _{0}(x)=Ce^{-{\frac {m\omega x^{2}}{2\hbar }}}}$ . Conceptually, it is important that there is only one solution of this equation; if there were, say, two linearly independent ground states, we would get two independent chains of eigenvectors for the harmonic oscillator. Once the ground state is computed, one can show inductively that the excited states are Hermite polynomials times the Gaussian ground state, using the explicit form of the raising operator in the position representation. One can also prove that, as expected from the uniqueness of the ground state, the energy eigenstates ${\displaystyle \psi _{n}}$  constructed by the ladder method form a complete orthonormal set of functions.[5] Explicitly connecting with the previous section, the ground state |0⟩ in the position representation is determined by ${\displaystyle a|0\rangle =0}$ , ${\displaystyle \left\langle x\mid a\mid 0\right\rangle =0\qquad \Rightarrow \left(x+{\frac {\hbar }{m\omega }}{\frac {d}{dx}}\right)\left\langle x\mid 0\right\rangle =0\qquad \Rightarrow }$ ${\displaystyle \left\langle x\mid 0\right\rangle =\left({\frac {m\omega }{\pi \hbar }}\right)^{\frac {1}{4}}\exp \left(-{\frac {m\omega }{2\hbar }}x^{2}\right)=\psi _{0}~,}$ hence ${\displaystyle \langle x\mid a^{\dagger }\mid 0\rangle =\psi _{1}(x)~,}$ so that ${\displaystyle \psi _{1}(x,t)=\langle x\mid e^{-3i\omega t/2}a^{\dagger }\mid 0\rangle }$ , and so on. ### Natural length and energy scales The quantum harmonic oscillator possesses natural scales for length and energy, which can be used to simplify the problem. These can be found by nondimensionalization. The result is that, if energy is measured in units of ħω and distance in units of ${\textstyle {\sqrt {\hbar /(m\omega )}}}$ , then the Hamiltonian simplifies to ${\displaystyle H=-{\frac {1}{2}}{d^{2} \over dx^{2}}+{\frac {1}{2}}x^{2},}$ while the energy eigenfunctions and eigenvalues simplify to Hermite functions and integers offset by a half, ${\displaystyle \psi _{n}(x)=\left\langle x\mid n\right\rangle ={1 \over {\sqrt {2^{n}n!}}}~\pi ^{-1/4}\exp(-x^{2}/2)~H_{n}(x),}$ ${\displaystyle E_{n}=n+{\tfrac {1}{2}}~,}$ where Hn(x) are the Hermite polynomials. To avoid confusion, these "natural units" will mostly not be adopted in this article. However, they frequently come in handy when performing calculations, by bypassing clutter. For example, the fundamental solution (propagator) of H−i∂t, the time-dependent Schrödinger operator for this oscillator, simply boils down to the Mehler kernel,[6][7] ${\displaystyle \langle x\mid \exp(-itH)\mid y\rangle \equiv K(x,y;t)={\frac {1}{\sqrt {2\pi i\sin t}}}\exp \left({\frac {i}{2\sin t}}\left((x^{2}+y^{2})\cos t-2xy\right)\right)~,}$ where K(x,y;0) =δ(xy). The most general solution for a given initial configuration ψ(x,0) then is simply ${\displaystyle \psi (x,t)=\int dy~K(x,y;t)\psi (y,0)~.}$ ### Coherent states The coherent states of the harmonic oscillator are special nondispersive wave packets, with minimum uncertainty σx σp = ħ/2, whose observables' expectation values evolve like a classical system. They are eigenvectors of the annihilation operator, not the Hamiltonian, and form an overcomplete basis which consequentially lacks orthogonality. 
The coherent states are indexed by α ∈ ℂ and expressed in the |n⟩ basis as ${\displaystyle |\alpha \rangle =\sum _{n=0}^{\infty }|n\rangle \langle n|\alpha \rangle =e^{-{\frac {1}{2}}|\alpha |^{2}}\sum _{n=0}^{\infty }{\frac {\alpha ^{n}}{\sqrt {n!}}}|n\rangle =e^{-{\frac {1}{2}}|\alpha |^{2}}e^{\alpha a^{\dagger }}|0\rangle }$ . Because ${\displaystyle a\left|0\right\rangle =0}$  and via the Kermack-McCrae identity, the last form is equivalent to a unitary displacement operator acting on the ground state: ${\displaystyle |\alpha \rangle =e^{\alpha {\hat {a}}^{\dagger }-\alpha ^{*}{\hat {a}}}|0\rangle =D(\alpha )|0\rangle }$ . The position space wave functions are ${\displaystyle \psi _{\alpha }(x')=\left({\frac {m\omega }{\pi \hbar }}\right)^{\frac {1}{4}}e^{{\frac {i}{\hbar }}\langle {\hat {p}}\rangle _{\alpha }x'-{\frac {m\omega }{2\hbar }}(x'-\langle {\hat {x}}\rangle _{\alpha })^{2}}}$ . ### Highly excited states Excited state with n=30, with the vertical lines indicating the turning points When n is large, the eigenstates are localized into the classical allowed region, that is, the region in which a classical particle with energy En can move. The eigenstates are peaked near the turning points: the points at the ends of the classically allowed region where the classical particle changes direction. This phenomenon can be verified through asymptotics of the Hermite polynomials, and also through the WKB approximation. The frequency of oscillation at x is proportional to the momentum p(x) of a classical particle of energy En and position x. Furthermore, the square of the amplitude (determining the probability density) is inversely proportional to p(x), reflecting the length of time the classical particle spends near x. The system behavior in a small neighborhood of the turning point does not have a simple classical explanation, but can be modeled using an Airy function. Using properties of the Airy function, one may estimate the probability of finding the particle outside the classically allowed region, to be approximately ${\displaystyle {\frac {2}{n^{1/3}3^{2/3}\Gamma ^{2}({\tfrac {1}{3}})}}={\frac {1}{n^{1/3}\cdot 7.46408092658...}}}$ This is also given, asymptotically, by the integral ${\displaystyle {\frac {1}{2\pi }}\int _{0}^{\infty }e^{(2n+1)\left(x-{\tfrac {1}{2}}\sinh(2x)\right)}dx~.}$ ### Phase space solutions In the phase space formulation of quantum mechanics, solutions to the quantum harmonic oscillator in several different representations of the quasiprobability distribution can be written in closed form. The most widely used of these is for the Wigner quasiprobability distribution, which has the solution ${\displaystyle F_{n}(u)={\frac {(-1)^{n}}{\pi \hbar }}L_{n}\left(4{\frac {u}{\hbar \omega }}\right)e^{-2u/\hbar \omega }~,}$ where ${\displaystyle u={\frac {1}{2}}m\omega ^{2}x^{2}+{\frac {p^{2}}{2m}}}$ , and Ln are the Laguerre polynomials. This example illustrates how the Hermite and Laguerre polynomials are linked through the Wigner map. Meanwhile, the Husimi Q function of the harmonic oscillator eigenstates have an even simpler form. If we work in the natural units described above, we have ${\displaystyle Q(\psi _{n})(x,p)={\frac {(x^{2}+p^{2})^{n}}{n!}}{\frac {e^{-(x^{2}+p^{2})}}{\pi }}}$ This claim can be verified using the Segal–Bargmann transform. 
Specifically, since the raising operator in the Segal–Bargmann representation is simply multiplication by ${\displaystyle z=x+ip}$  and the ground state is the constant function 1, the normalized harmonic oscillator states in this representation are simply ${\displaystyle z^{n}/{\sqrt {n!}}}$  . At this point, we can appeal to the formula for the Husimi Q function in terms of the Segal–Bargmann transform. ## N-dimensional harmonic oscillator The one-dimensional harmonic oscillator is readily generalizable to N dimensions, where N = 1, 2, 3, ... . In one dimension, the position of the particle was specified by a single coordinate, x. In N dimensions, this is replaced by N position coordinates, which we label x1, ..., xN. Corresponding to each position coordinate is a momentum; we label these p1, ..., pN. The canonical commutation relations between these operators are {\displaystyle {\begin{aligned}{[}x_{i},p_{j}{]}&=i\hbar \delta _{i,j}\\{[}x_{i},x_{j}{]}&=0\\{[}p_{i},p_{j}{]}&=0\end{aligned}}} The Hamiltonian for this system is ${\displaystyle H=\sum _{i=1}^{N}\left({p_{i}^{2} \over 2m}+{1 \over 2}m\omega ^{2}x_{i}^{2}\right).}$ As the form of this Hamiltonian makes clear, the N-dimensional harmonic oscillator is exactly analogous to N independent one-dimensional harmonic oscillators with the same mass and spring constant. In this case, the quantities x1, ..., xN would refer to the positions of each of the N particles. This is a convenient property of the ${\displaystyle r^{2}}$  potential, which allows the potential energy to be separated into terms depending on one coordinate each. This observation makes the solution straightforward. For a particular set of quantum numbers {n} the energy eigenfunctions for the N-dimensional oscillator are expressed in terms of the 1-dimensional eigenfunctions as: ${\displaystyle \langle \mathbf {x} |\psi _{\{n\}}\rangle =\prod _{i=1}^{N}\langle x_{i}\mid \psi _{n_{i}}\rangle }$ In the ladder operator method, we define N sets of ladder operators, {\displaystyle {\begin{aligned}a_{i}&={\sqrt {m\omega \over 2\hbar }}\left(x_{i}+{i \over m\omega }p_{i}\right),\\a_{i}^{\dagger }&={\sqrt {m\omega \over 2\hbar }}\left(x_{i}-{i \over m\omega }p_{i}\right).\end{aligned}}} By an analogous procedure to the one-dimensional case, we can then show that each of the ai and ai operators lower and raise the energy by ℏω respectively. The Hamiltonian is ${\displaystyle H=\hbar \omega \,\sum _{i=1}^{N}\left(a_{i}^{\dagger }\,a_{i}+{\frac {1}{2}}\right).}$ This Hamiltonian is invariant under the dynamic symmetry group U(N) (the unitary group in N dimensions), defined by ${\displaystyle U\,a_{i}^{\dagger }\,U^{\dagger }=\sum _{j=1}^{N}a_{j}^{\dagger }\,U_{ji}\quad {\text{for all}}\quad U\in U(N),}$ where ${\displaystyle U_{ji}}$  is an element in the defining matrix representation of U(N). The energy levels of the system are ${\displaystyle E=\hbar \omega \left[(n_{1}+\cdots +n_{N})+{N \over 2}\right].}$ ${\displaystyle n_{i}=0,1,2,\dots \quad ({\text{the energy level in dimension }}i).}$ As in the one-dimensional case, the energy is quantized. The ground state energy is N times the one-dimensional ground energy, as we would expect using the analogy to N independent one-dimensional oscillators. There is one further difference: in the one-dimensional case, each energy level corresponds to a unique quantum state. In N-dimensions, except for the ground state, the energy levels are degenerate, meaning there are several states with the same energy. 
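Before the degeneracies are counted explicitly just below, a small numerical sketch (an editorial addition with assumed parameters, not from the article) can make them visible: in natural units the 2-dimensional Hamiltonian is a Kronecker sum of two 1-dimensional oscillators, so its spectrum is (n₁ + ½) + (n₂ + ½), and the level n = n₁ + n₂ appears with multiplicity n + 1.

```python
import numpy as np

N = 8                                  # 1-D truncation (an assumption)
H1 = np.diag(np.arange(N) + 0.5)       # 1-D spectrum n + 1/2, in units of hbar*omega
I = np.eye(N)
H2d = np.kron(H1, I) + np.kron(I, H1)  # two independent 1-D oscillators

levels, counts = np.unique(np.diag(H2d), return_counts=True)
for E, g in zip(levels, counts):
    print(E, g)  # E = n + 1 with degeneracy n + 1 for n <= N - 1
                 # (levels beyond the cutoff are distorted by truncation)
```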
The degeneracy can be calculated relatively easily. As an example, consider the 3-dimensional case: Define n = n1 + n2 + n3. All states with the same n will have the same energy. For a given n, we choose a particular n1. Then n2 + n3 = n − n1. There are n − n1 + 1 possible pairs {n2n3}. n2 can take on the values 0 to n − n1, and for each n2 the value of n3 is fixed. The degree of degeneracy therefore is: ${\displaystyle g_{n}=\sum _{n_{1}=0}^{n}n-n_{1}+1={\frac {(n+1)(n+2)}{2}}}$ Formula for general N and n [gn being the dimension of the symmetric irreducible nth power representation of the unitary group U(N)]: ${\displaystyle g_{n}={\binom {N+n-1}{n}}}$ The special case N = 3, given above, follows directly from this general equation. This is however, only true for distinguishable particles, or one particle in N dimensions (as dimensions are distinguishable). For the case of N bosons in a one-dimension harmonic trap, the degeneracy scales as the number of ways to partition an integer n using integers less than or equal to N. ${\displaystyle g_{n}=p(N_{-},n)}$ This arises due to the constraint of putting N quanta into a state ket where ${\displaystyle \sum _{k=0}^{\infty }kn_{k}=n}$  and ${\displaystyle \sum _{k=0}^{\infty }n_{k}=N}$ , which are the same constraints as in integer partition. ### Example: 3D isotropic harmonic oscillator Schrödinger 3D spherical harmonic orbital solutions in 2D density plots; the Mathematica source code that used for generating the plots is at the top The Schrödinger equation of a spherically-symmetric three-dimensional harmonic oscillator can be solved explicitly by separation of variables; see this article for the present case. This procedure is analogous to the separation performed in the hydrogen-like atom problem, but with the spherically symmetric potential ${\displaystyle V(r)={1 \over 2}\mu \omega ^{2}r^{2},}$ where μ is the mass of the problem. Because m will be used below for the magnetic quantum number, mass is indicated by μ, instead of m, as earlier in this article. ${\displaystyle \psi _{klm}(r,\theta ,\phi )=N_{kl}r^{l}e^{-\nu r^{2}}L_{k}^{(l+{1 \over 2})}(2\nu r^{2})Y_{lm}(\theta ,\phi )}$ where ${\displaystyle N_{kl}={\sqrt {{\sqrt {\frac {2\nu ^{3}}{\pi }}}{\frac {2^{k+2l+3}\;k!\;\nu ^{l}}{(2k+2l+1)!!}}}}~~}$  is a normalization constant; ${\displaystyle \nu \equiv {\mu \omega \over 2\hbar }~}$ ; ${\displaystyle {L_{k}}^{(l+{1 \over 2})}(2\nu r^{2})}$ are generalized Laguerre polynomials; The order k of the polynomial is a non-negative integer; ${\displaystyle Y_{lm}(\theta ,\phi )\,}$  is a spherical harmonic function; ħ is the reduced Planck constant:   ${\displaystyle \hbar \equiv {\frac {h}{2\pi }}~.}$ The energy eigenvalue is ${\displaystyle E=\hbar \omega \left(2k+l+{\frac {3}{2}}\right)~.}$ The energy is usually described by the single quantum number ${\displaystyle n\equiv 2k+l~.}$ Because k is a non-negative integer, for every even n we have ℓ = 0, 2, ..., n − 2, n and for every odd n we have ℓ = 1, 3, ..., n − 2, n . The magnetic quantum number m is an integer satisfying −ℓ ≤ m ≤ ℓ, so for every n and ℓ there are 2 + 1 different quantum states, labeled by m . Thus, the degeneracy at level n is ${\displaystyle \sum _{l=\ldots ,n-2,n}(2l+1)={(n+1)(n+2) \over 2}~,}$ where the sum starts from 0 or 1, according to whether n is even or odd. This result is in accordance with the dimension formula above, and amounts to the dimensionality of a symmetric representation of SU(3),[9] the relevant degeneracy group. 
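As an illustrative cross-check (an editorial addition, not from the article), the counting formulas above can be verified by brute force for small N and n:

```python
from itertools import product
from math import comb

def g_bruteforce(N, n):
    """Count N-tuples of nonnegative integers (n_1, ..., n_N) with sum n."""
    return sum(1 for t in product(range(n + 1), repeat=N) if sum(t) == n)

# General formula: g_n = binom(N + n - 1, n).
for N in (2, 3, 4):
    for n in range(6):
        assert g_bruteforce(N, n) == comb(N + n - 1, n)

# The special case N = 3 matches (n+1)(n+2)/2 as derived above.
assert all(g_bruteforce(3, n) == (n + 1) * (n + 2) // 2 for n in range(6))
print("degeneracy formulas agree on the tested range")
```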
## Applications ### Harmonic oscillators lattice: phonons We can extend the notion of a harmonic oscillator to a one-dimensional lattice of many particles. Consider a one-dimensional quantum mechanical harmonic chain of N identical atoms. This is the simplest quantum mechanical model of a lattice, and we will see how phonons arise from it. The formalism that we will develop for this model is readily generalizable to two and three dimensions. As in the previous section, we denote the positions of the masses by x1,x2,..., as measured from their equilibrium positions (i.e. xi = 0 if the particle i is at its equilibrium position). In two or more dimensions, the xi are vector quantities. The Hamiltonian for this system is ${\displaystyle \mathbf {H} =\sum _{i=1}^{N}{p_{i}^{2} \over 2m}+{1 \over 2}m\omega ^{2}\sum _{\{ij\}(nn)}(x_{i}-x_{j})^{2}~,}$ where m is the (assumed uniform) mass of each atom, and xi and pi are the position and momentum operators for the i th atom and the sum is made over the nearest neighbors (nn). However, it is customary to rewrite the Hamiltonian in terms of the normal modes of the wavevector rather than in terms of the particle coordinates so that one can work in the more convenient Fourier space. We introduce, then, a set of N "normal coordinates" Qk, defined as the discrete Fourier transforms of the xs, and N "conjugate momenta" Π defined as the Fourier transforms of the ps, ${\displaystyle Q_{k}={1 \over {\sqrt {N}}}\sum _{l}e^{ikal}x_{l}}$ ${\displaystyle \Pi _{k}={1 \over {\sqrt {N}}}\sum _{l}e^{-ikal}p_{l}~.}$ The quantity kn will turn out to be the wave number of the phonon, i.e. 2π divided by the wavelength. It takes on quantized values, because the number of atoms is finite. This preserves the desired commutation relations in either real space or wave vector space {\displaystyle {\begin{aligned}\left[x_{l},p_{m}\right]&=i\hbar \delta _{l,m}\\\left[Q_{k},\Pi _{k'}\right]&={1 \over N}\sum _{l,m}e^{ikal}e^{-ik'am}[x_{l},p_{m}]\\&={i\hbar \over N}\sum _{m}e^{iam(k-k')}=i\hbar \delta _{k,k'}\\\left[Q_{k},Q_{k'}\right]&=\left[\Pi _{k},\Pi _{k'}\right]=0~.\end{aligned}}} From the general result {\displaystyle {\begin{aligned}\sum _{l}x_{l}x_{l+m}&={1 \over N}\sum _{kk'}Q_{k}Q_{k'}\sum _{l}e^{ial\left(k+k'\right)}e^{iamk'}=\sum _{k}Q_{k}Q_{-k}e^{iamk}\\\sum _{l}{p_{l}}^{2}&=\sum _{k}\Pi _{k}\Pi _{-k}~,\end{aligned}}} it is easy to show, through elementary trigonometry, that the potential energy term is ${\displaystyle {1 \over 2}m\omega ^{2}\sum _{j}(x_{j}-x_{j+1})^{2}={1 \over 2}m\omega ^{2}\sum _{k}Q_{k}Q_{-k}(2-e^{ika}-e^{-ika})={1 \over 2}m\sum _{k}{\omega _{k}}^{2}Q_{k}Q_{-k}~,}$ where ${\displaystyle \omega _{k}={\sqrt {2\omega ^{2}(1-\cos(ka))}}~.}$ The Hamiltonian may be written in wave vector space as ${\displaystyle \mathbf {H} ={1 \over {2m}}\sum _{k}\left({\Pi _{k}\Pi _{-k}}+m^{2}\omega _{k}^{2}Q_{k}Q_{-k}\right)~.}$ Note that the couplings between the position variables have been transformed away; if the Qs and Πs were hermitian(which they are not), the transformed Hamiltonian would describe N uncoupled harmonic oscillators. The form of the quantization depends on the choice of boundary conditions; for simplicity, we impose periodic boundary conditions, defining the (N + 1)th atom as equivalent to the first atom. Physically, this corresponds to joining the chain at its ends. 
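To make the dispersion relation above concrete, here is a brief numerical sketch (an editorial addition; the chain length, spacing, and spring frequency are assumed values). It evaluates ω_k on the wavenumbers allowed by the periodic boundary conditions, which are written out explicitly in the next paragraph.

```python
import numpy as np

N_atoms = 8   # number of atoms in the ring (assumption)
a = 1.0       # lattice spacing (assumption)
omega = 1.0   # frequency of a single spring (assumption)

n = np.arange(-N_atoms // 2, N_atoms // 2 + 1)
k = 2 * np.pi * n / (N_atoms * a)                      # quantized wavenumbers
omega_k = np.sqrt(2 * omega**2 * (1 - np.cos(k * a)))  # dispersion relation

for kn, w in zip(k, omega_k):
    print(f"k = {kn:+.3f}   omega_k = {w:.3f}")
# omega_k vanishes at k = 0 (a uniform translation of the chain) and reaches
# its maximum 2*omega at the zone boundary k = +/- pi/a.
```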
The resulting quantization is ${\displaystyle k=k_{n}={2n\pi \over Na}\quad {\hbox{for}}\ n=0,\pm 1,\pm 2,\ldots ,\pm {N \over 2}.\ }$ The upper bound to n comes from the minimum wavelength, which is twice the lattice spacing a, as discussed above. The harmonic oscillator eigenvalues or energy levels for the mode ωk are ${\displaystyle E_{n}=\left({1 \over 2}+n\right)\hbar \omega _{k}\quad {\hbox{for}}\quad n=0,1,2,3,\ldots }$ If we ignore the zero-point energy then the levels are evenly spaced at ${\displaystyle \hbar \omega ,\,2\hbar \omega ,\,3\hbar \omega ,\,\ldots }$ So an exact amount of energy ħω, must be supplied to the harmonic oscillator lattice to push it to the next energy level. In comparison to the photon case when the electromagnetic field is quantised, the quantum of vibrational energy is called a phonon. All quantum systems show wave-like and particle-like properties. The particle-like properties of the phonon are best understood using the methods of second quantization and operator techniques described later.[10] In the continuum limit, a→0, N→∞, while Na is held fixed. The canonical coordinates Qk devolve to the decoupled momentum modes of a scalar field, ${\displaystyle \phi _{k}}$ , whilst the location index i (not the displacement dynamical variable) becomes the parameter x argument of the scalar field, ${\displaystyle \phi (x,t)}$ . ### Molecular vibrations • The vibrations of a diatomic molecule are an example of a two-body version of the quantum harmonic oscillator. In this case, the angular frequency is given by ${\displaystyle \omega ={\sqrt {\frac {k}{\mu }}}}$ where ${\displaystyle \mu ={\frac {m_{1}m_{2}}{m_{1}+m_{2}}}}$  is the reduced mass and ${\displaystyle m_{1}}$  and ${\displaystyle m_{2}}$  are the masses of the two atoms.[11] • The Hooke's atom is a simple model of the helium atom using the quantum harmonic oscillator. • Modelling phonons, as discussed above. • A charge ${\displaystyle q}$ , with mass ${\displaystyle m}$ , in a uniform magnetic field ${\displaystyle \mathbf {B} }$ , is an example of a one-dimensional quantum harmonic oscillator: the Landau quantization. ## References 1. ^ Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 978-0-13-805326-0. 2. ^ Liboff, Richard L. (2002). Introductory Quantum Mechanics. Addison–Wesley. ISBN 978-0-8053-8714-8. 3. ^ Rashid, Muneer A. (2006). "Transition amplitude for time-dependent linear harmonic oscillator with Linear time-dependent terms added to the Hamiltonian" . M.A. Rashid – Center for Advanced Mathematics and Physics. National Center for Physics. Retrieved 19 October 2010. 4. ^ The normalization constant is ${\displaystyle C=\left({\frac {m\omega }{\pi \hbar }}\right)^{\frac {1}{4}}}$ , and satisfies the normalization condition ${\displaystyle \int _{-\infty }^{\infty }\psi _{0}(x)^{*}\psi _{0}(x)dx=1}$ . 5. ^ See Theorem 11.4 in Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, 267, Springer, ISBN 978-1461471158 6. ^ Pauli, W. (2000), Wave Mechanics: Volume 5 of Pauli Lectures on Physics (Dover Books on Physics). ISBN 978-0486414621 ; Section 44. 7. ^ Condon, E. U. (1937). "Immersion of the Fourier transform in a continuous group of functional transformations", Proc. Natl. Acad. Sci. USA 23, 158–164. online 8. ^ Albert Messiah, Quantum Mechanics, 1967, North-Holland, Ch XII,  § 15, p 456.online 9. ^ Fradkin, D. M. "Three-dimensional isotropic harmonic oscillator and SU3." 
American Journal of Physics 33 (3) (1965) 207–211. 10. ^ Mahan, G. D. (1981). Many-Particle Physics. New York: Springer. ISBN 978-0306463389. 11. ^ "Quantum Harmonic Oscillator". Hyperphysics. Retrieved 24 September 2009.
https://math.stackexchange.com/questions/890814/residue-integral-int-0-infty-fracxn-2x-1x2n-1-mathrmdx
# Residue Integral: $\int_0^\infty \frac{x^n - 2x + 1}{x^{2n} - 1} \mathrm{d}x$ Inspired by some of the greats on this site, I've been trying to improve my residue theorem skills. I've come across the integral $$\int_0^\infty \frac{x^n - 2x + 1}{x^{2n} - 1} \mathrm{d}x,$$ where $n$ is a positive integer that is at least $2$, and I'd like to evaluate it with the residue theorem. Through non-complex methods, I know that the integral is $0$ for all $n \geq 2$. But I know that it can be done with the residue theorem. The trouble comes in choosing a contour. We're probably going to do some pie-slice contour, perhaps small enough to avoid any of the $2n$th roots of unity, and it's clear that the outer-circle will vanish. But I'm having trouble evaluating the integral on the contour, or getting cancellation. Can you help? (Also, do you have a book reference for collections of calculations of integrals with the residue theorem that might have similar examples?) • was it from mathematics magazine? – DeepSea Aug 8 '14 at 7:35 • Yes, that's right! It's problem 1912 from February, 2013. – BigThumb Aug 8 '14 at 7:37 • How exactly were you able to show that its value is $0$ for $n>1$ ? – Lucian Aug 8 '14 at 8:02 We want to prove that the integral is $0$ for $n>1$, it is the same thing as $$\int_0^{\infty} \frac{\mathrm{d}x}{x^n+1} = 2\int_0^{\infty} \frac{x-1}{x^{2n}-1} \ \mathrm{d}x.$$ The left hand integral is widely known to be $\frac{\pi}{n} \csc \frac{\pi}{n}$, we want to calculate the right hand integral. let $f(x)=\frac{x-1}{x^{2n}-1}$, and consider the contour $C=C_1\cup C_2\cup C_3$ where $$C_1=[0,r],\ C_2=\left\{z \in \mathbb{C} | |z|=r,\ \arg(z) \in \left[0,\frac{\pi}{2n}\right]\right\},\ \ C_3 =e^{\frac{\pi i}{2n}} C_1.$$ Here's what the contour look like Notice that $\int_C f(z) \ \mathrm{d}z=0$ (the integral is taken counter clockwise always) since $f$ is holomorphic inside $C$. and $$\left|\int_{C_2} f(x)\ \mathrm{d}x \right| =\mathcal{O}(r^{-1}) \to 0.$$ And \begin{align*} \int_{C_3}f(z) \ \mathrm{d}z &= e^{\frac{\pi i}{2n}}\int_0^r f\left(x e^{\frac{\pi i }{2n}}\right) \ \mathrm{d}x \\ &=e^{\frac{\pi i}{2n}}\int_0^r \frac{e^{\frac{\pi i}{2n}}x -1}{x^{2n}+1} \ \mathrm{d}x \\ &= e^{\frac{\pi i}{n}}\int_0^r \frac{x }{x^{2n}+1} \ \mathrm{d}x-e^{\frac{\pi i}{2n}}\int_0^r \frac{1}{x^{2n}+1} \ \mathrm{d}x. \end{align*} Note that $\int_{0}^{\infty} \frac{x}{x^{2n}+1} \ \mathrm{d}x = \frac{\pi }{2n} \csc \frac{\pi}{n}$, then by taking $r\to \infty$ we get $$\int_0^{\infty} f(x) \ \mathrm{d}x =-e^{\frac{\pi i}{n}}\cdot \frac{\pi }{2n} \csc \frac{\pi}{n} + e^{\frac{\pi i}{2n}} \frac{\pi }{2n} \csc \frac{\pi}{2n} = \frac{\pi}{2n} \csc \frac{\pi}{n}.$$ Which is what we were looking for. • Hi, I am wondering how the last $=$ works, which seems to imply $-e^{i\pi/n} + e^{i\pi/(2n)} = 1$? – Taozi Apr 26 '16 at 15:08 By factorization what we are trying to prove is: $$\int_0^\infty \frac{1}{x^n-1} dx = 2 \int_0^\infty \frac{x}{x^{2n}-1} dx$$ On the right hand side let $x \rightarrow \sqrt{t}$.
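As a numerical sanity check (an addendum by the editor, not part of the original thread), one can confirm the key identity above with scipy. The second integrand has a removable singularity at $x = 1$, where its limit is $1/(2n)$ by L'Hôpital, so it is patched by hand:

```python
import numpy as np
from scipy.integrate import quad

for n in range(2, 7):
    lhs = quad(lambda x: 1.0 / (x**n + 1), 0, np.inf)[0]

    def g(x, n=n):
        if abs(x - 1.0) < 1e-9:          # removable singularity at x = 1
            return 1.0 / (2.0 * n)       # limit of (x-1)/(x^{2n}-1) there
        return (x - 1.0) / (x**(2 * n) - 1.0)

    # Split at the (patched) singular point, then integrate out to infinity.
    rhs = 2 * (quad(g, 0, 2, points=[1.0])[0] + quad(g, 2, np.inf)[0])
    exact = (np.pi / n) / np.sin(np.pi / n)
    print(n, lhs, rhs, exact)            # the three values agree closely
```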
https://homotopytypetheory.org/blog/page/2/
## HoTTSQL: Proving Query Rewrites with Univalent SQL Semantics

SQL is the lingua franca for retrieving structured data. Existing semantics for SQL, however, either do not model crucial features of the language (e.g., relational algebra lacks bag semantics, correlated subqueries, and aggregation), or make it hard to formally reason about SQL query rewrites (e.g., the SQL standard's English is too informal). This post focuses on the ways that HoTT concepts (e.g., Homotopy Types, the Univalence Axiom, and Truncation) enabled us to develop HoTTSQL — a new SQL semantics that makes it easy to formally reason about SQL query rewrites. Our paper also details the rich set of SQL features supported by HoTTSQL.

Posted in Applications | 5 Comments

## Combinatorial Species and Finite Sets in HoTT

(Post by Brent Yorgey)

My dissertation was on the topic of combinatorial species, and specifically on the idea of using species as a foundation for thinking about generalized notions of algebraic data types. (Species are sort of dual to containers; I think both have interesting and complementary things to offer in this space.) I didn't really end up getting very far into practicalities, instead getting sucked into a bunch of more foundational issues. To use species as a basis for computational things, I wanted to first "port" the definition from traditional, set-theory-based, classical mathematics into a constructive type theory. HoTT came along at just the right time, and seems to provide exactly the right framework for thinking about a constructive encoding of combinatorial species.

For those who are familiar with HoTT, this post will contain nothing all that new. But I hope it can serve as a nice example of an "application" of HoTT. (At least, it's more applied than research in HoTT itself.)

# Combinatorial Species

Traditionally, a species is defined as a functor $F : \mathbb{B} \to \mathbf{FinSet}$, where $\mathbb{B}$ is the groupoid of finite sets and bijections, and $\mathbf{FinSet}$ is the category of finite sets and (total) functions. Intuitively, we can think of a species as mapping finite sets of "labels" to finite sets of "structures" built from those labels. For example, the species of linear orderings (i.e. lists) maps the finite set of labels $\{1,2, \dots, n\}$ to the size-$n!$ set of all possible linear orderings of those labels. Functoriality ensures that the specific identity of the labels does not matter—we can always coherently relabel things.

# Constructive Finiteness

So what happens when we try to define species inside a constructive type theory? The crucial piece is $\mathbb{B}$: the thing that makes species interesting is that they have built into them a notion of bijective relabelling, and this is encoded by the groupoid $\mathbb{B}$. The first problem we run into is how to encode the notion of a finite set, since the notion of finiteness is nontrivial in a constructive setting.

One might well ask why we even care about finiteness in the first place. Why not just use the groupoid of all sets and bijections? To be honest, I have asked myself this question many times, and I still don't feel as though I have an entirely satisfactory answer. But what it seems to come down to is the fact that species can be seen as a categorification of generating functions.
Generating functions over the semiring $R$ can be represented by functions $\mathbb{N} \to R$, that is, each natural number maps to some coefficient in $R$; each natural number, categorified, corresponds to (an equivalence class of) finite sets. Finite label sets are also important insofar as our goal is to actually use species as a basis for computation. In a computational setting, one often wants to be able to do things like enumerate all labels (e.g. in order to iterate through them, to do something like a map or fold). It will therefore be important that our encoding of finiteness actually has some computational content that we can use to enumerate labels. Our first attempt might be to say that a finite set will be encoded as a type $A$ together with a bijection between $A$ and a canonical finite set of a particular natural number size. That is, assuming standard inductively defined types $\mathbb{N}$ and $\mathsf{Fin}$, $\displaystyle \Sigma (A:U). \Sigma (n : \mathbb{N}). A \cong \mathsf{Fin}(n).$ However, this is unsatisfactory, since defining a suitable notion of bijections/isomorphisms between such finite sets is tricky. Since $\mathbb{B}$ is supposed to be a groupoid, we are naturally led to try using equalities (i.e. paths) as morphisms—but this does not work with the above definition of finite sets. In $\mathbb{B}$, there are supposed to be $n!$ different morphisms between any two sets of size $n$. However, given any two same-size inhabitants of the above type, there is only one path between them—intuitively, this is because paths between $\Sigma$-types correspond to tuples of paths relating the components pointwise, and such paths must therefore preserve the particular relation to $\mathsf{Fin}(n)$. The only bijection which is allowed is the one which sends each element related to $i$ to the other element related to $i$, for each $i \in \mathsf{Fin}(n)$. So elements of the above type are not just finite sets, they are finite sets with a total order, and paths between them must be order-preserving; this is too restrictive. (However, this type is not without interest, and can be used to build a counterpart to L-species. In fact, I think this is exactly the right setting in which to understand the relationship between species and L-species, and more generally the difference between isomorphism and equipotence of species; there is more on this in my dissertation.) # Truncation to the Rescue We can fix things using propositional truncation. In particular, we define $\displaystyle U_F := \Sigma (A:U). \|\Sigma (n : \mathbb{N}). A \cong \mathsf{Fin}(n)\|.$ That is, a “finite set” is a type $A$ together with some hidden evidence that $A$ is equivalent to $\mathsf{Fin}(n)$ for some $n$. (I will sometimes abuse notation and write $A : U_F$ instead of $(A, p) : U_F$.) A few observations: • First, we can pull the size $n$ out of the propositional truncation, that is, $U_F \cong \Sigma (A:U). \Sigma (n: \mathbb{N}). \|A \cong \mathsf{Fin}(n)\|$. Intuitively, this is because if a set is finite, there is only one possible size it can have, so the evidence that it has that size is actually a mere proposition. • More generally, I mentioned previously that we sometimes want to use the computational evidence for the finiteness of a set of labels, e.g. enumerating the labels in order to do things like maps and folds. It may seem at first glance that we cannot do this, since the computational evidence is now hidden inside a propositional truncation. 
But actually, things are exactly the way they should be: the point is that we can use the bijection hidden in the propositional truncation as long as the result does not depend on the particular bijection we find there. For example, we cannot write a function which returns the value of type $A$ corresponding to $0 : \mathsf{Fin}(n)$, since this reveals something about the underlying bijection; but we can write a function which finds the smallest value of $A$ (with respect to some linear ordering), by iterating through all the values of $A$ and taking the minimum. • It is not hard to show that if $A : U_F$, then $A$ is a set (i.e. a 0-type) with decidable equality, since $A$ is equivalent to the 0-type $\mathsf{Fin}(n)$. Likewise, $U_F$ itself is a 1-type. • Finally, note that paths between inhabitants of $U_F$ now do exactly what we want: a path $(A,p) = (B,q)$ is really just a path $A = B$ between 0-types, that is, a bijection, since $p = q$ trivially. # Constructive Species We can now define species in HoTT as functions of type $U_F \to U$. The main reason I think this is the Right Definition ™ of species in HoTT is that functoriality comes for free! When defining species in set theory, one must say “a species is a functor, i.e. a pair of mappings satisfying such-and-such properties”. When constructing a particular species one must explicitly demonstrate the functoriality properties; since the mappings are just functions on sets, it is quite possible to write down mappings which are not functorial. But in HoTT, all functions are functorial with respect to paths, and we are using paths to represent the morphisms in $U_F$, so any function of type $U_F \to U$ automatically has the right functoriality properties—it is literally impossible to write down an invalid species. Actually, in my dissertation I define species as functors between certain categories built from $U_F$ and $U$, but the point is that any function $U_F \to U$ can be automatically lifted to such a functor. Here’s another nice thing about the theory of species in HoTT. In HoTT, coends whose index category are groupoids are just plain $\Sigma$-types. That is, if $\mathbb{C}$ is a groupoid, $\mathbb{D}$ a category, and $T : \mathbb{C}^{\mathrm{op}} \times \mathbb{C} \to \mathbb{D}$, then $\int^C T(C,C) \cong \Sigma (C : \mathbb{C}). T(C,C)$. In set theory, this coend would be a quotient of the corresponding $\Sigma$-type, but in HoTT the isomorphisms of $\mathbb{C}$ are required to correspond to paths, which automatically induce paths over the $\Sigma$-type which correspond to the necessary quotient. Put another way, we can define coends in HoTT as a certain HIT, but in the case that $\mathbb{C}$ is a groupoid we already get all the paths given by the higher path constructor anyway, so it is redundant. So, what does this have to do with species, I hear you ask? Well, several species constructions involve coends (most notably partitional product); since species are functors from a groupoid, the definitions of these constructions in HoTT are particularly simple. We again get the right thing essentially “for free”. There’s lots more in my dissertation, of course, but these are a few of the key ideas specifically relating species and HoTT. I am far from being an expert on either, but am happy to entertain comments, questions, etc. I can also point you to the right section of my dissertation if you’re interested in more detail about anything I mentioned above. 
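As a small addendum (mine, not the post author's): the shape of the finite-set type $U_F$ above can be mimicked in Lean 4 with Mathlib, with the important caveat (stated here as an explicit assumption) that Lean/Mathlib is a proof-irrelevant, non-univalent system, so only the bare structure of the definition survives, not its homotopical content. `Nonempty` stands in for the propositional truncation $\|\cdot\|$.

```lean
import Mathlib

-- The carrier together with merely-propositional evidence of finiteness.
structure UF where
  carrier : Type
  finite  : Nonempty (Σ n : ℕ, carrier ≃ Fin n)

-- Example: Bool is finite, via Mathlib's `finTwoEquiv : Fin 2 ≃ Bool`.
example : UF := ⟨Bool, ⟨⟨2, finTwoEquiv.symm⟩⟩⟩
```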
## Parametricity and excluded middle

Exercise 6.9 of the HoTT book tells us that, assuming LEM, we can exhibit a function $f:\Pi_{X:\mathcal{U}}(X\to X)$ such that $f_\mathbf{2}$ is a non-identity function $\mathbf{2}\to\mathbf{2}.$ I have proved the converse of this. Like in exercise 6.9, we assume univalence.

## Parametricity

In a typical functional programming career, at some point one encounters the notions of parametricity and free theorems. Parametricity can be used to answer questions such as: is every function f : forall x. x -> x equal to the identity function? Parametricity tells us that this is true for System F. However, this is a metatheoretical statement. Parametricity gives properties about the terms of a language, rather than proving internally that certain elements satisfy some properties.

So what can we prove internally about a polymorphic function $f:\Pi_{X:\mathcal{U}}X\to X$? In particular, we can see that internal proofs (claiming that $f$ must be the identity function for every type $X$) cannot exist: exercise 6.9 of the HoTT book tells us that, assuming LEM, we can exhibit a function $f:\Pi_{X:\mathcal{U}}(X\to X)$ such that $f_\mathbf{2}$ is $\mathsf{flip}:\mathbf{2}\to\mathbf{2}.$ (Notice that the proof of this is not quite as trivial as it may seem: LEM only gives us $P+\neg P$ if $P$ is a (mere) proposition (a.k.a. subsingleton). Hence, simple case analysis on $X\simeq\mathbf{2}$ does not work, because this is not necessarily a proposition.) And given the fact that LEM is consistent with univalent foundations, this means that a proof that $f$ is the identity function cannot exist.

I have proved that LEM is exactly what is needed to get a polymorphic function that is not the identity on the booleans.

Theorem. If there is a function $f:\Pi_{X:\mathcal U}X\to X$ with $f_\mathbf2\neq\mathsf{id}_\mathbf2,$ then LEM holds.

## Proof idea

If $f_\mathbf2\neq\mathsf{id}_\mathbf2,$ then by simply trying both elements $0_\mathbf2,1_\mathbf2:\mathbf2,$ we can find an explicit boolean $b:\mathbf2$ such that $f_\mathbf2(b)\neq b.$ Without loss of generality, we can assume $f_\mathbf2(0_\mathbf2)\neq 0_\mathbf2.$

For the remainder of this analysis, let $P$ be an arbitrary proposition. Then we want to achieve $P+\neg P,$ to prove LEM. We will consider a type with three points, where we identify two points depending on whether $P$ holds. In other words, we consider the quotient of a three-element type, where the relation between two of those points is the proposition $P.$ I will call this space $\mathbf{3}_P,$ and it can be defined as $\Sigma P+\mathbf{1},$ where $\Sigma P$ is the suspension of $P.$ This particular way of defining the quotient, which is equivalent to a quotient of a three-point set, will make case analysis simpler to set up. (Note that suspensions are not generally quotients: we use the fact that $P$ is a proposition here.) Notice that if $P$ holds, then $\mathbf{3}_P\simeq\mathbf{2},$ and also $(\mathbf{3}_P\simeq\mathbf{3}_P)\simeq\mathbf{2}.$ We will consider $f$ at the type $(\mathbf{3}_P\simeq\mathbf{3}_P)$ (not $\mathbf{3}_P$ itself!).
Now the proof continues by defining $g:=f_{\mathbf{3}_P\simeq\mathbf{3}_P}(\mathsf{ide}_{\mathbf{3}_P}):\mathbf{3}_P\simeq\mathbf{3}_P$ (where $\mathsf{ide_{\mathbf3_P}}$ is the equivalence given by the identity function on $\mathbf3_P$) and doing case analysis on $g(\mathsf{inr}(*)),$ and if necessary also on $g(\mathsf{inl}(x))$ for some elements $x:\Sigma P.$ I do not believe it is very instructive to spell out all cases explicitly here. I wrote a more detailed note containing an explicit proof. Notice that doing case analysis here is simply an instance of the induction principle for $+.$ In particular, we do not require decidable equality of $\mathbf3_P$ (which would already give us $P+\neg P,$ which is exactly what we are trying to prove). For the sake of illustration, here is one case:

• $g(\mathsf{inr}(*))= \mathsf{inr}(*):$ Assume $P$ holds. Then, since $(\mathbf{3}_P\simeq\mathbf{3}_P)\simeq\mathbf{2},$ by transporting along an appropriate equivalence (namely the one that identifies $0_\mathbf2$ with $\mathsf{ide}_{\mathbf3_P}),$ we get $f_{\mathbf{3}_P\simeq\mathbf{3}_P}(\mathsf{ide}_{\mathbf{3}_P})\neq\mathsf{ide}_{\mathbf{3}_P}.$ But since $g$ is an equivalence for which $\mathsf{inr}(*)$ is a fixed point, $g$ must be the identity everywhere, that is, $g=\mathsf{ide}_{\mathbf{3}_P},$ which is a contradiction.

I formalized this proof in Agda using the HoTT-Agda library.

## Acknowledgements

Thanks to Martín Escardó, my supervisor, for his support. Thanks to Uday Reddy for giving the talk on parametricity that inspired me to think about this.

Posted in Foundations | 13 Comments

## Colimits in HoTT

In this post, I want to present two things:

1. the small library about colimits that I formalized in Coq,
2. a construction of the image of a function as a colimit, which is essentially a sliced version of the result that Floris van Doorn talked about on this blog recently, and further improvements.

I present my hott-colimits library in the first part. This part is quite easy but I hope that the library could be useful to some people. The second part is more original. Let's sketch it.

Given a function $f_0:\ A \rightarrow B$ we can construct a diagram where the HIT $\mathbf{KP}$ is defined by:

```
HIT KP f :=
| kp : A -> KP f
| kp_eq : forall x x', f(x) = f(x') -> kp(x) = kp(x').
```

and where $f_{n+1}$ is defined recursively from $f_n$. We call this diagram the iterated kernel pair of $f_0$. The result is that the colimit of this diagram is $\Sigma_{y:B} \parallel \mathbf{fib}_{f_0}\ y \parallel$, the image of $f_0$ ($\mathbf{fib}_{f_0}\ y$ is $\Sigma_{x:A}\ f_0(x) = y$, the homotopy fiber of $f_0$ at $y$). It generalizes Floris' result in the following sense: if we consider the unique arrow $f_0: A \rightarrow \mathbf{1}$ (where $\mathbf{1}$ is Unit) then $\mathbf{KP}(f_0)$ is $\{ A \}$, the one-step truncation of $A$, and the colimit is equivalent to $\parallel A \parallel$, the truncation of $A$.

We then go further. Indeed, this HIT doesn't respect the homotopy levels at all: even $\{\mathbf{1}\}$ is the circle. We try to address this issue by considering an HIT that takes care of already existing paths:

```
HIT KP' f :=
| kp : A -> KP' f
| kp_eq : forall x x', f(x) = f(x') -> kp(x) = kp(x').
| kp_eq_1 : forall x, kp_eq (refl (f x)) = refl (kp x)
```

This HIT avoids adding new paths when two elements are already equal, and turns out to respect homotopy levels better: it at least respects hProps. See below for the details.
Besides, there is another interesting thing about this HIT: we can sketch a link between the iterated kernel pair using $\mathbf{KP'}$ and the Čech nerve of a function. We outline this in the last paragraph.

All the following is joint work with Kevin Quirin and Nicolas Tabareau (from the CoqHoTT project), but also with Egbert Rijke, who visited us. All our results are formalized in Coq. The library is available here: https://github.com/SimonBoulier/hott-colimits

# Colimits in HoTT

In homotopy type theory, Type, the type of all types, can be seen as an ∞-category. We seek to calculate some homotopy limits and colimits in this category. The article of Jeremy Avigad, Krzysztof Kapulkin and Peter LeFanu Lumsdaine explains how to calculate limits over graphs using sigma types. For instance, an equalizer of two functions $f$ and $g$ is $\Sigma_{x:A} f(x) = g(x)$. The colimits over graphs are computed in the same way with Higher Inductive Types instead of sigma types. For instance, the coequalizer of two functions is

```
HIT Coeq (f g: A -> B) : Type :=
| coeq : B -> Coeq f g
| cp : forall x, coeq (f x) = coeq (g x).
```

In both cases there is a severe restriction: we don't know how to compute limits and colimits over diagrams which are much more complicated than those generated by some graphs (below we use an extension to "graphs with compositions" which is proposed in exercise 7.16 of the HoTT book, but those diagrams remain quite poor).

We first define the type of graphs and diagrams, as in the HoTT book (exercise 7.2) or in the hott-limits library of Lumsdaine et al.:

```
Record graph :=
  { G_0 :> Type ;
    G_1 :> G_0 -> G_0 -> Type }.

Record diagram (G : graph) :=
  { D_0 :> G -> Type ;
    D_1 : forall {i j : G}, G i j -> (D_0 i -> D_0 j) }.
```

And then, a cocone over a diagram into a type $Q$:

```
Record cocone {G: graph} (D: diagram G) (Q: Type) :=
  { q : forall (i: G), D i -> Q ;
    qq : forall (i j: G) (g: G i j) (x: D i), q j (D_1 g x) = q i x }.
```

Let $C:\mathrm{cocone}\ D\ Q$ be a cocone into $Q$ and $f$ be a function $Q \rightarrow Q'$. Then we can extend $C$ to a cocone into $Q'$ by postcomposition with $f$. It gives us a function

$\mathrm{postcompose} :\ (\mathrm{cocone}\ D\ Q) \rightarrow (Q': \mathrm{Type}) \rightarrow (Q \rightarrow Q')\rightarrow (\mathrm{cocone}\ D\ Q')$

A cocone $C$ is said to be universal if every other cocone $C'$ over the same diagram can be obtained uniquely by extension of $C$, which we translate as:

```
Definition is_universal (C: cocone D Q)
  := forall (Q': Type), IsEquiv (postcompose_cocone C Q').
```

Last, a type $Q$ is said to be a colimit of the diagram $D$ if there exists a universal cocone over $D$ into $Q$.

## Existence

The existence of the colimit over a diagram is given by the HIT:

```
HIT colimit (D: diagram G) : Type :=
| colim : forall (i: G), D i -> colimit D
| eq : forall (i j: G) (g: G i j) (x: D i), colim j (D_1 g x) = colim i x
```

Of course, $\mathrm{colimit}\ D$ is a colimit of $D$.

## Functoriality and Uniqueness

### Diagram morphisms

Let $D$ and $D'$ be two diagrams over the same graph $G$. A morphism of diagrams is defined by:

```
Record diagram_map (D1 D2 : diagram G) :=
  { map_0: forall i, D1 i -> D2 i ;
    map_1: forall i j (g: G i j) x, D_1 D2 g (map_0 i x) = map_0 j (D_1 D1 g x) }.
```

We can compose diagram morphisms and there is an identity morphism. We say that a morphism $m$ is an equivalence of diagrams if all the functions $m_i$ are equivalences.
In that case, we can define the inverse of $m$ (reversing the proofs of commutation), and check that it is indeed an inverse for the composition of diagram morphisms.

### Precomposition

We have already defined the forward extension of a cocone by postcomposition; we now define backward extension. Given a diagram morphism $m: D \Rightarrow D'$, we can make every cocone over $D'$ into a cocone over $D$ by precomposition with $m$. It gives us a function

$\mathrm{precompose} :\ (D \Rightarrow D') \rightarrow (Q : \mathrm{Type})\rightarrow (\mathrm{cocone}\ D'\ Q) \rightarrow (\mathrm{cocone}\ D\ Q)$

We check that precomposition and postcomposition respect the identity and the composition of morphisms. And then, we can show that the notions of universality and colimit are stable under equivalence.

### Functoriality of colimits

Let $m: D \Rightarrow D'$ be a diagram morphism and $Q$ and $Q'$ two colimits of $D$ and $D'$. Let $C$ and $C'$ denote the universal cocones into $Q$ and $Q'$. Then, we can get a function $Q \rightarrow Q'$ given by:

$(\mathrm{postcompose}\ C\ Q)^{-1}\ (\mathrm{precompose}\ m\ Q'\ C')$

We check that if $m$ is an equivalence of diagrams then the function $Q' \rightarrow Q$ given by $m^{-1}$ is indeed an inverse of $Q \rightarrow Q'$. As a consequence, we get:

The colimits of two equivalent diagrams are equivalent.

### Uniqueness

In particular, if we consider the identity morphism $D \Rightarrow D$ we get:

Let $Q_1$ and $Q_2$ be two colimits of the same diagram; then $Q_1~\simeq~Q_2~$.

So, if we assume univalence, the colimit of a diagram is truly unique!

## Commutation with sigmas

Let $B$ be a type and, for all $y:B$, $D^y$ a diagram over a graph $G$. We can then build a new diagram over $G$ whose objects are the $\Sigma_y D_0^y(i)$ and whose functions $\Sigma_y D_0^y(i) \rightarrow \Sigma_y D_0^y(j)$ are induced by the identity on the first component and by $D_1^y(g) : D_0^y(i) \rightarrow D_0^y(j)$ on the second one. We denote this diagram by $\Sigma D$. Similarly, from a family of cocones $C:\Pi_y\mathrm{cocone}\ D^y\ Q_y$, we can make a cocone over $\Sigma D$ into $\Sigma_y Q_y$. We proved the following result, which we believe to be quite nice:

If, for all $y:B$, $Q_y$ is a colimit of $D^y$, then $\Sigma_y Q_y$ is a colimit of $\Sigma D$.

# Iterated Kernel Pair

## First construction

Let's first recall the result of Floris. An attempt to define the propositional truncation is the following:

```
HIT {_} (A: Type) :=
| α : A -> {A}
| e : forall (x x': A), α x = α x'.
```

Unfortunately, in general $\{ A \}$ is not a proposition: the path constructor $\mathrm{e}$ is not strong enough. But we have the following result:

Let $A$ be a type. Let's consider the following diagram: $A \rightarrow \{A\} \rightarrow \{\{A\}\} \rightarrow \dots$ Then, $\parallel A \parallel$ is a colimit of this diagram.

Let's generalize this result to a function $f: A \rightarrow B$ (we will recover the theorem by considering the unique function $A \rightarrow \mathbf{1}$). Let $f: A \rightarrow B$. We write $\mathbf{KP}(f)$ for the colimit of the kernel pair of $f$, where the pullback $A \times_B A$ is given by $\Sigma_{x,\, x'}\, f(x) = f(x')$. Hence, $\mathbf{KP}(f)$ is the following HIT:

```
Inductive KP f :=
| kp : A -> KP f
| kp_eq : forall x x', f(x) = f(x') -> kp(x) = kp(x').
```

Let's consider the following cocone: we get a function $\mathrm{lift}_f: \mathbf{KP}(f) \rightarrow B$ by universality (another point of view is to say that $\mathrm{lift}_f$ is defined by $\mathbf{KP\_rec}(f, \lambda\ p.\ p)$).
Then, iteratively, we can construct the following diagram: where $f_0 := f :\ A \rightarrow B$ and $f_{n+1} := \mathrm{lift}_{f_n} :\ \mathbf{KP}(f_n) \rightarrow B$. The iterated kernel pair of $f$ is the subdiagram We proved the following result: The colimit of this diagram is $\Sigma_{y:B}\parallel \mathbf{fib}_f\ y\parallel \$, the image of $f$. The proof is a slicing argument to come down to Floris’ result. It uses all properties of colimits that we talked above. The idea is to show that those three diagrams are equivalent. Going from the first line to the second is just apply the equivalence $A\ \simeq\ \Sigma_{y:B}\mathbf{fib}_f\ y$ (for $f: A \rightarrow B$) at each type. Going from the second to the third is more involved, we don’t detail it here. And $\Sigma_{y:B}\parallel \mathbf{fib}_f\ y\parallel \$ is well the colimit of the last line: by commutation with sigmas it is sufficient to show that for all $y$, $\parallel \mathbf{fib}_f\ y\parallel \$ is the colimit of the diagram which is exactly Floris’ result! The details are available here. ## Second construction The previous construction has a small defect: it did not respect the homotopy level at all. For instance $\{\mathbf{1}\}$ is the circle $\mathbb{S}^1$. Hence, to compute $\parallel \mathbf{1}\parallel$ (which is $\mathbf{1}$ of course), we go through very complex types. We found a way to improve this: adding identities! Indeed, the proof keeps working if we replace $\mathbf{KP}$ by $\mathbf{KP'}$ which is defined by: Inductive KP' f := | kp : A -> KP' f | kp_eq : forall x x', f(x) = f(x') -> kp(x) = kp(x'). | kp_eq_1 : forall x, kp_eq (refl (f x)) = refl (kp x) $\mathbf{KP'}$ can be seen as a “colimit with identities” of the following diagram : (♣) with $\pi_i \circ \delta = \mathrm{id}$. In his article, Floris explains that, when $p:\ a =_A b$ then $\mathrm{ap}_\alpha(p)$ and $\mathrm{t\_eq}\ a\ b$ are not equal. But now they become equal: by path induction we bring back to $\mathrm{kp\_eq\_1}$. That is, if two elements are already equal, we don’t add any path between them. And indeed, this new HIT respects the homotopy level better, at least in the following sense: 1. $\mathbf{KP'}(\mathbf{1} \rightarrow \mathbf{1})$ is $\mathbf{1}$ (meaning that the one-step truncation of a contractible type is now $\mathbf{1}$), 2. If $f: A \rightarrow B$ is an embedding (in the sense that $\mathrm{ap}(f) : x = y \rightarrow f(x) = f(y)$ is an equivalence for all $x, y$) then so is $\mathrm{lift}_f : \mathbf{KP'}(f) \rightarrow B$. In particular, if $A$ is hProp then so is $\mathbf{KP'}(A \rightarrow \mathbf{1})$ (meaning that the one-step truncation of an hProp is now itself). ## Toward a link with the Čech nerve Although we don’t succeed in making it precise, there are several hints which suggest a link between the iterated kernel pair and the Čech nerve of a function. The Čech nerve of a function $f$ is a generalization of his kernel pair: it is the simplicial object (the degeneracies are not dawn but they are present). We will call n-truncated Čech nerve the diagram restricted to the n+1 first objects: (degeneracies still here). The kernel pair (♣) is then the 1-truncated Čech nerve. We wonder to which extent $\mathbf{KP}(f_n)$ could be the colimit of the (n+1)-truncated Čech nerve. We are far from having such a proof but we succeeded in proving : 1. That $\mathbf{KP'}(f_0)$ is the colimit of the kernel pair (♣), 2. 
and that there is a cocone over the 2-trunated Čech nerve into $\mathbf{KP'}(f_1)$ (both in the sense of “graphs with compositions”, see exercise 7.16 of the HoTT book). The second point is quite interesting because it makes the path concatenation appear. We don’t detail exactly how, but to build a cocone over the 2-trunated Čech nerve into a type $C$, $C$ must have a certain compatibility with the path concatenation. $\mathbf{KP'}(f)$ doesn’t have such a compatibility: if $p:\ f(a) =_A f(b)$ and $q:\ f(b) =_A f(c)$, in general we do not have $\mathrm{kp\_eq}_f\ (p \centerdot q)\ =\ \mathrm{kp\_eq}_f\ p\ \centerdot\ \mathrm{kp\_eq}_f\ q$     in     $\mathrm{kp}(a)\ =_{\mathbf{KP'}(f)}\ \mathrm{kp}(c)$. On the contrary, $\mathbf{KP'}(f_1)$ have the require compatibility: we can prove that $\mathrm{kp\_eq}_{f_1}\ (p \centerdot q)\ =\ \mathrm{kp\_eq}_{f_1}\ p\ \centerdot\ \mathrm{kp\_eq}_{f_1}\ q$     in     $\mathrm{kp}(\mathrm{kp}(a))\ =_{\mathbf{KP'}(f_1)}\ \mathrm{kp}(\mathrm{kp}(c))$. ($p$ has indeed the type $f_1(\mathrm{kp}(a)) = f_1(\mathrm{kp}(b))$ because $f_1$ is $\mathbf{KP\_rec}(f, \lambda\ p.\ p)$ and then $f_1(\mathrm{kp}(x)) \equiv x$.) This fact is quite surprising. The proof is basically getting an equation with a transport with apD and then making the transport into a path concatenation (see the file link_KPv2_CechNerve.v of the library for more details). ## Questions Many questions are left opened. To what extent $\mathbf{KP}(f_n)$ is linked with the (n+1)-truncated diagram? Could we use the idea of the iterated kernel pair to define a groupoid object internally? Indeed, in an ∞-topos every groupoid object is effective (by Giraud’s axioms) an then is the Čech nerve of his colimit… Posted in Code, Higher Inductive Types | 14 Comments ## The Lean Theorem Prover Lean is a new player in the field of proof assistants for Homotopy Type Theory. It is being developed by Leonardo de Moura working at Microsoft Research, and it is still under active development for the foreseeable future. The code is open source, and available on Github. You can install it on Windows, OS X or Linux. It will come with a useful mode for Emacs, with syntax highlighting, on-the-fly syntax checking, autocompletion and many other features. There is also an online version of Lean which you can try in your browser. The on-line version is quite a bit slower than the native version and it takes a little while to load, but it is still useful to try out small code snippets. You are invited to test the code snippets in this post in the on-line version. You can run code by pressing shift+enter. In this post I’ll first say more about the Lean proof assistant, and then talk about the considerations for the HoTT library of Lean (Lean has two libraries, the standard library and the HoTT library). I will also cover our approach to higher inductive types. Since Lean is not mature yet, things mentioned below can change in the future. Update January 2017: the newest version of Lean currently doesn’t support HoTT, but there is a frozen version which does support HoTT. The newest version is available here, and the frozen version is available here. To use the frozen version, you will have to compile it from the source code yourself. Posted in Code, Higher Inductive Types, Programming | 48 Comments ## Real-cohesive homotopy type theory Two new papers have recently appeared online: Both of them have fairly chatty introductions, so I’ll try to restrain myself from pontificating at length here about their contents. 
Just go read the introductions. Instead I’ll say a few words about how these papers came about and how they are related to each other. Posted in Applications, Foundations, Paper | 12 Comments ## A new class of models for the univalence axiom First of all, in case anyone missed it, Chris Kapulkin recently wrote a guest post at the n-category cafe summarizing the current state of the art regarding “homotopy type theory as the internal language of higher categories”. I’ve just posted a preprint which improves that state a bit, providing a version of “Lang(C)” containing univalent strict universes for a wider class of (∞,1)-toposes C: Posted in Models, Paper, Univalence | 4 Comments
http://codeforces.com/blog/zscoder
### zscoder's blog By zscoder, history, 9 days ago, , Bank Robbery Cutting Carrot Naming Company Labelling Cities Choosing Carrot Leha and security system Replace All • • +126 • By zscoder, 11 days ago, , Hi all! On May 13, 12:35 MSK, Tinkoff Challenge — Final Round will be held. Standings of the official finalists are availiable here. The authors of the round are me (zscoder, Zi Song Yeoh), AnonymousBunny (Sreejato Kishor Bhattacharya), hloya_ygrt (Yury Shilyaev). Special thanks to KAN (Nikolay Kalinin) for coordinating the round, winger (Vladislav Isenbaev) and AlexFetisov (Alex Fetisov) for testing the problems. Also, thanks to MikeMirzayanov (Mike Mirzayanov) for the Codeforces and Polygon system. There are seven problems and the duration is two hours. Scoring will be announced before the round. Top 20 participants of the Elimination Round will compete in the Tinkoff Office. The round is rated. Division 1 and Division 2 will have the same problemset with seven problems. We hope everyone will find interesting problems and get high rating! UPD : Scoring Distribution : 500 — 1000 — 1750 — 2000 — 2500 — 2750 — 3500 UPD2 : The editorial is out! UPD3 : Congratulations to the top 10 : • • +404 • By zscoder, history, 5 weeks ago, , Hi everyone! Malaysian Computing Olympiad 2017 (also known as MCO 2017) has just ended a few days ago. You can find the problems in this group. There are 6 problems and each problem is divided into several subtasks. • • +86 • By zscoder, history, 4 months ago, , Weekly Training Farm 22 is over. Congratulations to the winners : 1. W4yneb0t (perfect score in < 1 hour!) 2. aaaaajack (perfect score) 3. eddy1021 Here is the editorial : ### Problem A This problem can be solved by greedy. We list down the positive integers one by one. We keep a pointer that initially points to the first letter of s. Whenever the pointed character in the string s matches the corresponding digit of the integer, we move the pointer one step to the right and continue. Repeat this process until the pointer reaches the end. However, we still need to know whether the answer can be large. The key is to note that the answer will never exceed 106, because after writing down 10 consecutive numbers, at least one of them has last digit equals to the current digit, so the pointer will move to the right at least once when we write down 10 consecutive numbers. Thus, in the worse case, we'll only list down the numbers from 1 to 106, which is definitely fast enough. Code ### Problem B This problem can be solved using dynamic programming. Firstly, observe that if we already determine which set of problems to solve, then it's best to solve the problem in increasing order of time needed to solve in order to minimize the time penalty. Thus, we can first sort the problems in increasing order of time needed, breaking ties arbitarily. Let dp[i][j] denote the maximum number of problems solved and minimum time penalty acquired when doing so by using exactly j minutes and only solving problems among the first i ones. dp[0][0] = (0, 0) (the first integer denotes the number of problems solved and the second integer denotes the time penalty in order to do so). The transitions can be handled easily by simply considering whether to solve the i-th problem or not. The time complexity of this solution is O(nT) (T is the duration of the contest) Code ### Problem C This is an ad hoc problem. Firstly, we can use two moves to determine what the value of the first bit is. (simply flipping it twice will tell you its value. 
Now, if the bit is 1, you don't need to flip it anymore. If it's 0, you'll need to flip it. In any case, we'll flip the second bit as well. (if the first bit needs to be flipped, we'll flip [1, 2] and flip [2, 2] otherwise) After flipping the second bit, we can determine whether it's a 1 or 0 by calculating from the total number of 1s of the string before the flip and after the flip. We can repeat this for every 2 consecutive bits until we arrive at the last two bits. At this point, we know what the second last bit is, and we also know the total number of 1 bits. So, we can easily deduce the value of the last bit from the information as well. Now, we just need to perform one last flip to make the last 2 bits become 1. The total number of moves made is n + 1. Code ### Problem D1 First, we can use 18 moves to determine the value of a, by asking 2 to 19 in increasing order and the first yes answer will be the value of a. If there're no "yes" answers, then the value of a is 20. Call a number good if it can be represented as the sum of nonnegative multiples of as and b. Note that if x is good, then x + a, x + b are both good. Now that we have the value of a, let's think about what b is. Consider the numbers ka + 1, ka + 2, ..., ka + (a - 1) for a fixed k. If none of these numbers are good, we can immediately say that b is larger than (k + 1)a. Why? Suppose b = qa + r. Clearly, r ≠ 0 since a and b are coprime. Note that xa + r for all x ≥ q will be the good, since xa + r = (qa + r) + (x - q)a = b + (x - q)a. So, b cannot be less than any of the numbers ka + 1, ka + 2, ..., ka + (a - 1), or else one of these numbers would've been good, a contradiction. Note that this also means that if y is the smallest integer such that ya + 1, ya + 2, ..., ya + (a - 1) are not all bad, then there will be exactly one good number, which will be b. Also note that for all integers k > y, there will have at least one good number among ka + 1, ka + 2, ..., ka + (a - 1). Thus, we can now binary search for the value of y. In each iteration of the binary search, we need to ask at most a - 1 ≤ 19 questions, and there are at most iterations, so the maximum number of operations needed is 19·19 + 18 = 379 < 380. Code ### Problem D2 This problem is the same as D1, but with higher constraints. Firstly, we find the value of a in 18 moves as in problem D. To proceed, we need to think about this problem from another angle. Suppose we know a number N that is good and not a multiple of a, and we can find the maximum number k such that N - ka is good, then what does this tell us? This means that N - ka is a multiple of b. Why? We know that N - ka = ax + by for some nonnegative integers x and y since N - ka is good. If x > 0, then N - (k + 1)a = a(x - 1) + by is also good, contradicting the maximality of k. Thus, x = 0 and so N - ka = by. Note that b > 0 since we choose N so that it's not a multiple of a. To find a value of N such that N is good and not a multiple of a, it is sufficient to take 500000a - 1, since any number greater than ab - a - b is guaranteed to be good. (this is a well-known fact) We can find the largest k such that N - ka is good via binary search, because if N - ma is not good then N - (m + 1)a can't be good. (or else if N - (m + 1)a = ax + by, then N - ma = a(x + 1) + by) This takes at most 19 questions. What to do after finding a value which is a multiple of b? Let C = N - ka. We consider the prime factorization of C. The main claim is that if is good, then x must be a multiple of b. 
The reasoning is the same as what we did before. So, we can find the prime factorization of C, and divide the prime factors out one by one. If the number becomes bad, we know that this prime factor cannot be removed, and we proceed to the next prime factor. Since a number less than 10^7 can have at most 23 prime factors (the extreme case being 2^23 = 8388608), this takes another 23 questions. Thus, we only used at most 18 + 19 + 23 = 60 questions to find the values of a and b.

Code

### Problem E

Firstly, note that a connected graph on n vertices with n edges contains exactly 1 cycle. Call the vertices on the cycle the cycle vertices. From each cycle vertex, there's a tree rooted at it. Thus, call the remaining vertices the tree vertices. Note that the number of useless edges is equal to the length of the cycle. Now, we do some casework :

• u is equal to a tree vertex

Note that this will not change the length of the cycle. Thus, we just have to count how many ways there are to change the value of au such that the graph remains connected. The observation is that for each tree node u, the only possible values of au are the nodes which are not in the subtree of u in the tree u belongs to. Thus, the number of possibilities can be calculated with a tree dp. For each tree, we calculate the subtree size of each node, add up all these subtree sizes, and subtract the sum from the total number of ways of choosing a tree vertex u and a value of au. This part can be done in O(n) time.

• u is equal to a cycle vertex

For two cycle vertices u and v, let d(u, v) be the directed distance from u to v (we consider distances in the functional graph, which has an edge i → ai for all 1 ≤ i ≤ n). Note that if we change au to x, and the root of the tree x is in is v (x = v if x is a cycle vertex), then the length of the cycle after the change will be d(v, u) + 1 + h[x], where h[x] is the height of x in its tree. The key is that instead of fixing u and iterating through all other nodes x, we iterate through all endpoints x and see how each of them changes our answer. Note that if x is fixed, which also means that v is fixed, then we just have to add 1 to the answer for c = d(v, u) + 1 + h[x] for all cycle vertices u. However, note that d(v, u) ranges from 0 to C - 1 (where C denotes the length of the original cycle), so this is equivalent to adding 1 to the answer for c = h[x] + 1, h[x] + 2, ..., h[x] + C. Now, we can iterate through all vertices x and add 1 to the answer for c = h[x] + 1, h[x] + 2, ..., h[x] + C. To do this quickly, we can employ the "+1, -1" method. Whenever we want to add 1 to a range [l, r], we add 1 to ans[l] and subtract 1 from ans[r + 1]. Then, to find the actual values of the ans array, we just have to take its prefix sums. Finally, do not forget to subtract the cases where v = au from the answer. The total complexity is O(n).

Code
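The "+1, -1" bookkeeping is worth seeing in code. A small sketch (here n bounds the possible cycle lengths, and h[] and C are as in the text above):

```cpp
// difference-array form of "add 1 to ans[c] for every c in [l, r]"
vector<long long> ans(n + 2, 0);
auto rangeAdd = [&](int l, int r) {
    ans[l] += 1;        // contribution starts at l
    ans[r + 1] -= 1;    // and stops after r
};
// for every vertex x: rangeAdd(h[x] + 1, h[x] + C);
for (int c = 1; c <= n; ++c) ans[c] += ans[c - 1];   // prefix sums recover the real counts
```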
By zscoder, history, 4 months ago,

Hi everyone! I would like to invite you to the Weekly Training Farm 22! The problemsetter is me (zscoder) and the tester and quality controller is dreamoon. It will be a contest in ACM-ICPC style and contains 6 problems. The difficulty is around 500-1500-1500-1750-2500-2500 (compared to Div. 2 contests). The contest begins at 19:30 UTC+8 and lasts for two hours. To join the contest, join this group (as participant) first and find Weekly Training Farm 22 on the Group Contest tab. In addition, there will be a few interactive problems in this round. Please check the Interactive Problems Guide if you're not familiar with interactive problems. Good luck and hope you enjoy the problems!

UPD : Contest starts in around 4.5 hours.

UPD : You can find the editorial here.

UPD : Since next week will be the lunar new year, there'll be no Weekly Training Farm next week. It will resume in February.

By zscoder, history, 4 months ago,

Congratulations to the winners! Also special props to biGinNer for solving the last 3 problems (and being the only one to solve F during the contest). Here are the editorials :

## Problem A.

This is a simple problem. First, we calculate the position Harry ends in after making the moves 1 time. This can be done by directly simulating the moves Harry makes. Now, suppose Harry is at (x, y) after 1 iteration. Note that after every iteration, Harry will move x units to the right and y units up, so after n moves he will end up at (nx, ny). The complexity of the solution is O(|s|).

## Problem B.

This is a dp problem. Let dp[i] be the maximum possible sum of the remaining numbers in the range [1..i]. For 1 ≤ i ≤ k - 1 the value is just the sum of the numbers in the range. Let dp[0] = 0. For i ≥ k, we may choose to keep the element ai or remove a subsegment of length k which ends at ai. Thus, we arrive at the recurrence dp[i] = max(dp[i - 1] + ai, dp[i - k]). We can calculate the dp values in O(n).

## Problem C.

Observe that we can consider each prime factor separately. For each prime p that appears in N, let's see what prime power p^ki we should pick from each number ai so that the sum of the ki is equal to the power of p in the prime factorization of N. Firstly, we need to prime factorize all the numbers ai. We can use a sieve to find the primes, after which the factorizations can be computed quickly (for example, with a smallest-prime-factor sieve each number is factorized in O(log ai) steps). From now on, we'll focus on a specific prime p. Now, we know the maximum prime power mi we can take from each number ai (so ki ≤ mi). From here, we can use a greedy method to decide what to take from each number ai. Note that mi ≤ 20 because 2^20 = 1048576 > 10^6. So, for each number ai, we know the cost needed if we take 1, 2, ..., mi factors of p from ai. We can store a vector, and for each ai we push wi·p, wi·(p^2 - p), wi·(p^3 - p^2), ..., wi·(p^mi - p^(mi - 1)) into the vector. Now, we sort the vector and take the first x elements, where x is the power of the prime p in the prime factorization of N. If we can't take x elements, the answer is -1. We can repeat this for all primes and solve the whole problem efficiently (sieving and sorting the candidate costs dominate the running time).

## Problem D.

To solve this problem, you need to know a bit about permutations. First, we need to determine how to find the minimum number of swaps to sort a permutation. This is a well-known problem. Let the permutation be P = p1, p2, ..., pn. Construct a graph by drawing an edge from i to pi for all 1 ≤ i ≤ n. Note that the graph is formed by disjoint cycles. You can easily see that swapping two elements either splits a cycle into two smaller cycles, or merges two cycles into one. Since the identity permutation is formed by n cycles, the optimal way is to keep splitting cycles into two, increasing the total number of cycles by 1 at each step. Thus, if we denote by c the number of cycles in the current permutation, the number of moves needed to sort the permutation is n - c. Harry wins if and only if n - c is odd. The key observation is that whenever there are exactly two question marks left, the first player will always win. Why? Consider how the current graph of the permutation looks. It will be a union of a few cycles and 2 chains (we consider a singleton, a component formed by a single vertex, as a chain). Now, the first player can either choose to close off one of the chains, or join the two chains together. The latter leaves exactly one fewer cycle than the former. So, one of these choices guarantees that the value of n - c is odd; the first player only has to choose the correct move. This implies that if the number of question marks m is at least 2, then Harry wins if m is even and loses otherwise. Now, the only case left is when there is only 1 question mark in the beginning. This means that Harry has only 1 possible move, and we're left with the problem of deciding whether the final permutation has n - c odd. Thus, it is enough to count the number of cycles in the formed graph. This can be done by dfs. The complexity of the solution is O(n).
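The cycle counting itself is only a few lines; here p is the completed permutation, 1-indexed:

```cpp
// count the cycles of permutation p[1..n]; Harry wins iff n - countCycles(p, n) is odd
int countCycles(const vector<int> &p, int n) {
    vector<bool> vis(n + 1, false);
    int cycles = 0;
    for (int i = 1; i <= n; ++i) {
        if (vis[i]) continue;
        ++cycles;
        for (int j = i; !vis[j]; j = p[j]) vis[j] = true;  // walk each cycle exactly once
    }
    return cycles;
}
```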
## Problem E.

First, we form a trie of the given words. Now, the game is equivalent to the following :

1. Start from the root of the trie.
2. Each player can either move to one of the children of the current node, or delete one edge connecting the current node to one of its children. The one who can't move loses.

This reduced game can be solved with tree DP. Let dp[u] denote the winner of the game if the first player starts at node u. The leaves have dp[u] = 2. Our goal is to find dp[0] (where 0 is the root). The recurrence is simple. Suppose we're finding dp[u] and the children of u are v1, v2, ..., vk. If one of the children has dp value 2, then Player 1 can just move to that child and win. Otherwise, all children have dp value 1. Thus, both players will try not to move down unless forced to, so they'll keep deleting edges. If there is an even number of children, Player 2 will win, as he will either delete the last edge or force Player 1 to move down. Otherwise, Player 1 wins. This gives a simple O(n) tree dp solution.

## Problem F.

Firstly, we make the same observations as in problem D. Swapping two elements will either split a cycle into two or merge two cycles: if we swap two elements from the same cycle, the cycle splits into two, and if we swap two elements from different cycles, the two cycles combine. Also note that for a cycle of size c, we can always split it into two cycles of sizes a and b with a, b > 0 and a + b = c, by choosing the appropriate two elements of the cycle to swap. Now, the game reduces to: choose 2 (possibly equal) elements from one cycle, swap them, and delete one of the resulting cycles. So, for a given permutation, if the cycle sizes are c1, c2, ..., ck, then in each move we can choose one of the sizes and change it into any nonnegative number strictly smaller than it. Thus, we have reduced the problem to playing a game of Nim on c1, c2, ..., ck. Since Harry goes second, he wins if and only if the xor of all the cycle sizes is 0 (this is a well-known fact). Thus, we've reduced the problem to finding the number of permutations of length n which have the xor of all cycle sizes equal to 0. To do so, let dp[i][j] denote the number of permutations of length i with xor of all cycle sizes equal to j. The dp transitions can be done by iterating through all possible sizes s of the cycle containing i. For each s, there are C(i - 1, s - 1) ways to choose the remaining elements of the cycle containing i, and (s - 1)! ways to arrange them in the cycle. Thus, we sum up the values of C(i - 1, s - 1)·(s - 1)!·dp[i - s][j xor s] over all 1 ≤ s ≤ i. The whole solution works in O(n^3) time.
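A compact sketch of this counting dp; the modulus is a placeholder, since the group task's exact output format isn't reproduced here:

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long MOD = 1e9 + 7;   // assumption: substitute whatever modulus the task specifies

int main() {
    int n;
    scanf("%d", &n);
    int B = 1;
    while (B <= n) B <<= 1;      // all xor values of cycle sizes fit below B

    vector<vector<long long>> C(n + 1, vector<long long>(n + 1, 0));
    for (int i = 0; i <= n; ++i) {
        C[i][0] = 1;
        for (int j = 1; j <= i; ++j) C[i][j] = (C[i - 1][j - 1] + C[i - 1][j]) % MOD;
    }
    vector<long long> fact(n + 1, 1);
    for (int i = 1; i <= n; ++i) fact[i] = fact[i - 1] * i % MOD;

    // dp[i][j] = number of permutations of length i whose cycle sizes xor to j
    vector<vector<long long>> dp(n + 1, vector<long long>(B, 0));
    dp[0][0] = 1;
    for (int i = 1; i <= n; ++i)
        for (int s = 1; s <= i; ++s) {                         // size of the cycle containing i
            long long ways = C[i - 1][s - 1] * fact[s - 1] % MOD;
            for (int j = 0; j < B; ++j)
                if (dp[i - s][j])
                    dp[i][j ^ s] = (dp[i][j ^ s] + ways * dp[i - s][j]) % MOD;
        }
    printf("%lld\n", dp[n][0]);  // permutations on which the second player (Harry) wins
}
```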
By zscoder, history, 5 months ago,

Hi everyone! I would like to invite you to the Weekly Training Farm 20! The problemsetter is me (zscoder) and the testers and quality controllers are dreamoon and drazil. It will be a contest in ACM-ICPC style and contains 6 problems. The difficulty is around 500-1250-1750-2000-2250-2250 (compared to Div. 2 contests). The contest begins at 20:00 UTC+8 and lasts for two hours. To join the contest, join this group (as participant) first and find Weekly Training Farm 20 on the Group Contest tab.

Reminder : The contest will start in around 5 hours from now.

Update : Less than 1 hour before the start. Good luck!

Here's the editorial.

Announcement of Weekly Training Farm 20

By zscoder, history, 7 months ago,

Codechef October Challenge ended just a few hours ago. Every time, I find that my weakest spot is solving those approximation problems. How do you start solving them? There are people who get very high points and I'm curious how they manage to do that.

By zscoder, history, 7 months ago,

Hi everyone! Following my last article, today I'm writing about a not-so-common trick that has nevertheless appeared in some problems before and might be helpful to you. I'm not sure if this trick has been given a name yet, so I'll refer to it as the "Slope Trick" here.

Disclaimer : It would be helpful to have a pen and paper with you to sketch the graphs so that you can visualize these claims more easily.

Example Problem 1 : 713C - Sonya and Problem Wihtout a Legend

This solution originated from koosaga's comment in the editorial post here. The solution below solves this problem in O(n log n), whereas the intended solution is O(n^2). So, the first step is to get rid of the strictly increasing condition. To do so, we apply a[i] -= i for all i, and thus we just have to find the minimum number of moves to change the array into a non-decreasing sequence. Define fi(x) as the minimum number of moves to change the first i elements into a non-decreasing sequence such that ai ≤ x. It is easy to see that by definition we have the recurrences f1(X) = min over Y ≤ X of |a1 - Y|, and fi(X) = min over Y ≤ X of (fi-1(Y) + |ai - Y|). Now, note that fi(X) is non-increasing, since by definition it is at most the minimum over all smaller X. We store a set of integers that records where the function fi changes slope. More formally, we consider the function gi(X) = fi(X + 1) - fi(X). The last element of the set will be the smallest j such that gi(j) = 0, the second last element will be the smallest j such that gi(j) = -1, and so on. (Note that the set of slope changing points is bounded.) Let Opt(i) denote a position where fi(X) achieves its minimum (i.e. gi(Opt(i)) = 0). The desired answer will be fn(Opt(n)). We'll see how to update these values quickly. Now, suppose we already have everything for fi-1 and we want to update the data for fi. First, note that all the values x < ai have their slope decreased by 1. Also, every value x ≥ ai has its slope increased by 1, unless we have reached the slope = 0 point, after which the graph never goes up again. There are two cases to consider :

Case 1 : Opt(i - 1) ≤ ai

Here, the slope at every point before ai decreases by 1. Thus, we push ai into the slope set, as this indicates that we decrease the slope at all the slope changing points by 1, and the slope changing point for slope = 0 is ai, i.e. Opt(i) = ai. Thus, this case is settled.

Case 2 : Opt(i - 1) > ai

Now, we insert ai into the set, since it decreases the slope at all the slope changing points before ai by 1. Furthermore, we insert ai again, because it increases the slope at the slope changing points between ai and Opt(i - 1) by 1. Now, we can just take Opt(i) = Opt(i - 1), since the slope at Opt(i - 1) is still 0. Finally, we remove Opt(i - 1) from the set, because it's no longer the first point where the slope changes to 0 (it's the previous point where the slope changes to -1, and the slope there becomes 0 because of the addition of ai). Thus, the set of slope changing points is maintained. We have fi(Opt(i)) = fi-1(Opt(i - 1)) + |Opt(i - 1) - ai|. Thus, we can just use a priority queue to store the slope changing points, and it is easy to see that the priority queue can handle all these operations efficiently (in O(n log n) time in total). Here's the implementation of this idea by koosaga : 20623607
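For reference, the whole idea compresses into a few lines; this is the standard priority_queue formulation of the solution just described:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    priority_queue<long long> pq;    // max-heap of slope-changing points
    long long ans = 0;
    for (int i = 0; i < n; ++i) {
        long long a;
        scanf("%lld", &a);
        a -= i;                      // reduce "strictly increasing" to "non-decreasing"
        pq.push(a);
        if (pq.top() > a) {          // Case 2: Opt(i-1) > a[i]
            ans += pq.top() - a;     // cost grows by |Opt(i-1) - a[i]|
            pq.pop();                // remove the old Opt(i-1)...
            pq.push(a);              // ...and insert a[i] a second time
        }
    }
    printf("%lld\n", ans);
}
```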
This trick is called the "Slope Trick" because we're considering the general function and analyzing how its slope changes at different points to find the minimum or maximum value.

The next example is APIO 2016 P2 — Fireworks

This problem was the "killer" problem of APIO 2016, and was solved by merely 4 contestants in the actual contest. I'll explain the solution, which is relatively simple and demonstrates the idea of the slope trick. So, the idea is similar to the last problem. For each node u, we store a function f(x) which denotes the minimum cost to change the weights of the edges in the entire subtree rooted at u, including the parent edge of u, such that the sum of weights on each path from u to the leaves is equal to x. We'll store the slope changing points of the function in a container (which we'll determine later) again. In addition, we store two integers a, b, which denote that for all x ≥ X, where X is the largest slope changing point, the value of the function is ax + b. (Clearly such a linear tail exists, since beyond X one can always just keep increasing the parent edge by 1.) Now, for a leaf node i, it is clear that a = 1, b = -ci, where ci is the cost of the parent edge of i, and the slope changing points are {ci, ci}. For a non-leaf node u, we have to combine the functions from its children first. Firstly, we set the function as the sum of all the functions of its children, and we'll correct it later. We set the value a of this node as the sum of all the a values of its children, and similarly for b. Also, we combine all the slope-changing points together. It is important that we merge the smaller sets into the larger set (see dsu on tree, a.k.a. the small-to-large technique). Now, the function is still incorrect. Firstly, note that all the slope-changing points with slope > 1 are meaningless, because we can just increase the parent edge by 1 to increase the sum of the whole subtree, so we can remove these slope-changing points while updating the values of a, b. Suppose we remove a slope-changing point x at slope a; then we decrement a, increase b by x, and remove x from the set (this is because ax + b = (a - 1)x + (b + x)). Repeat this till a becomes at most 1.
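In code, this cleanup step is just (with pq, a, b as defined above):

```cpp
// drop slope-changing points while the final slope a exceeds 1;
// the value is unchanged because ax + b == (a - 1)x + (b + x) at the removed point x
while (a > 1) {
    long long x = pq.top();
    pq.pop();
    a -= 1;
    b += x;
}
```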
Next, since the cost of the parent edge is ci, we have to shift the slope-0 and slope-1 changing points to the right by ci. Note that the slope -1 changing point doesn't change, because we can just reduce the weight of ci until it reaches 0 (note that the condition that the weights can be reduced to 0 helped here). Finally, we have to decrease b by ci, since we shifted the points to the right by ci. Thus, the function for this node is now complete, and we can do a dfs and keep merging functions until we get the function for the root node. Then, we just have to find the value of the function when a = 0 (using the same method: we keep removing slope-changing points to decrease a until it reaches 0). Finally, the answer will be the updated value of b at the root node, and we're done. We'll use a priority queue to store the slope changing points as it is the most convenient option.

Official Solution

## Beyond APIO 2016 Fireworks

Now, the next example is a generalization of this problem. It came from the Codechef October Challenge — Tree Balancing. We'll solve this using the slope trick as well. The Codechef problem is the same as the last problem, except :

1. The weights of the edges can be changed to negative values
2. You must output a possible construction aside from the minimum cost needed
3. The edges now have a cost wi, and when you change the value of an edge by 1, your total cost increases by wi.

However, it is still possible to solve this using the Slope Trick. Firstly, we suppose that wi = 1, to simplify the problem. Now, since the edges can be changed to negative values, at each node there is no point with slope of absolute value greater than 1, since changing the parent edge would always yield a better result. Thus, each node actually has only 2 slope-changing points: the point where the slope changes from -1 to 0, and the point where the slope changes from 0 to 1. This means that we have to pop slope-changing points from the front as well as the back of the set. The best way to store the data is to use a multiset. With this modification, we can find the minimum cost needed as before. Now, the second part of the question is: how do we reconstruct the answer? This part is not hard if you understand what we're doing here. The problem reduces to solving, for each node u: given that the sum of weights from the parent of u to all leaves must equal a given x, what should the parent edge weight be? We start from the children of the root, with the value x equal to the point where the slope changes from 0 to 1 (i.e. the point that yields the minimum value). For each node we store the 2 slope-changing points li, ri in an array while we compute the minimum cost. Now, if li ≤ x ≤ ri, then the best thing to do is to not change the parent edge. If x > ri, then we should increase the parent edge value by x - ri. Otherwise, we should decrease the parent edge value by li - x. Thus, we can find the required weights of the parent edges, and it remains to push the remaining sum of weights needed down to the children and recurse until we get all the weights of the edges. The time complexity is the same. To get the full AC, we need to solve the cost-weighted case. It is actually similar, but we have to modify the solution a bit. The idea is still the same. However, the number of slope changing points has increased by a lot. To store these slope points efficiently, we store the compressed form of the set. For example, the set {3, 4, 5, 5, 5, 5, 6, 6} will be stored as {(3, 1), (4, 1), (5, 4), (6, 2)}. Basically, we store the number of occurrences of each integer instead of storing the copies one by one. We can use a map to handle this. The base case is a bit different now. Suppose the leaf node is u and the cost of its parent edge is du. Then, a = du, b = -cu × du, where cu is the weight of its parent edge. The slope changing points are {(cu, 2du)}. Merging the functions into the parent will be the same. Now, we have to update the slope changing points and the function ax + b.
First, we remove all points with slope > du and < -du, as we can just change the parent edge instead. Then, we have to shift every slope changing point by cu. However, shifting the whole map naively is inefficient. The trick here is to store a counter shift for each node that denotes the amount to add to each slope changing point. Now, the shifting part is equivalent to just adding cu to the counter shift. Finally, we update a and b as before. To recover the solution, we use the same method as above, with some changes. Firstly, l and r will be the minimum slope changing point and the maximum slope changing point of the function, respectively. Secondly, if the sum of the di of all children is less than the di of the parent edge, then we do not change the weight of the parent edge, as it is sufficient to just update all the children edges.

My implementation of this solution (100 points)

That's it for this post. If you know any other applications of this trick, feel free to post them in the comments.

By zscoder, history, 7 months ago,

Hi everyone! Today I want to share some DP tricks and techniques that I have seen in some problems. I think this will be helpful for those who have just started doing DP. Sometimes the tutorials are very brief and assume the reader already understands the technique, so it can be hard for people who are new to the technique to understand it.

Note : You should know how to do basic DP before reading the post.

## DP + Bitmasks

This is actually a very well-known technique and most people should already know this. This trick is usually used when one of the variables has very small constraints that allow exponential solutions. The classic example is applying it to solve the Travelling Salesman Problem in O(n^2·2^n) time. We let dp[i][j] be the minimum time needed to visit the vertices in the set denoted by i, ending at vertex j. Note that i iterates through all possible subsets of the vertices, and thus the number of states is O(2^n·n). We can go from every state to the next states in O(n) by considering all possible next vertices to go to. Thus, the time complexity is O(2^n·n^2). Usually, when doing DP + Bitmasks problems, we store the subsets as an integer from 0 to 2^n - 1. How do we know which elements belong to the subset denoted by i? We write i in its binary representation, and for each bit j that is 1, the j-th element is included in the set. For example, i = 35 = 100011 in binary denotes the set {0, 1, 5} (the bits are 0-indexed from the right). Thus, to test if the j-th element is in the subset denoted by i, we can test whether i & (1<<j) is positive. (Why? Recall that (1<<j) is 2^j, and how the & operator works.)
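Both the membership test and the "iterate through all subsets of a subset" idiom used in the next example look like this:

```cpp
// is element j in the subset encoded by mask i?
bool inSubset(int i, int j) { return (i >> j) & 1; }

// visit every submask s of a mask x (the loop ends with the empty set)
for (int s = x; ; s = (s - 1) & x) {
    // ... process subset s ...
    if (s == 0) break;
}
```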
Now, let's look at an example problem : 453B - Little Pony and Harmony Chest

So, the first step is to establish an upper bound on the bi. We prove that bi < 2ai. Assume otherwise; then we can replace bi with 1 and get a smaller answer (and clearly this preserves the coprimality). Thus, bi < 60. Note that there are 17 primes less than 60, which prompts us to apply dp + bitmask here. Note that for any pair bi, bj with i ≠ j, their sets of prime factors must be disjoint, since they're coprime. Now, we let dp[i][j] be the minimum answer one can get by changing the first i elements such that the set of primes used (i.e. the set of prime factors of the numbers b1, b2, ..., bi) is equal to the subset denoted by j. Let f[x] denote the set of prime factors of x. Since bi < 60, we iterate through all possible values of bi, and for a fixed bi, let F = f[bi]. Then, let x be the complement of the set F, i.e. the set of primes not used by bi. We iterate through all subsets of x (using the submask enumeration shown above). For each s which is a subset of x, we set dp[i][s|F] = min(dp[i][s|F], dp[i - 1][s] + abs(a[i] - b[i])). This completes the dp. We can reconstruct the solution by storing, for each state, the position where the dp achieves its minimum value, as usual. This solution is enough to pass the time limits.

Here are some other problems that use bitmask dp : 678E - Another Sith Tournament, 662C - Binary Table

## Do we really need to visit all the states?

Sometimes, the naive dp solution to a problem takes too long and too much memory. However, it is sometimes worth noting that most of the states can be ignored because they will never be reached, and this can reduce your time and memory complexity.

Example Problem : 505C - Mr. Kitayuta, the Treasure Hunter

The most direct way of doing dp would be to let dp[i][j] be the number of gems Mr. Kitayuta can collect after he jumps to island i with the length of his last jump equal to j. Then, the dp transitions are quite obvious, because we only need to test all possible jumps and take the one that yields the maximum result. If you have trouble with the naive dp, you can read the original editorial. However, the naive method is too slow, because it would take O(m^2) time and memory. The key observation here is that most of the states will never be visited; more precisely, j can only lie in a certain range. These bounds can be obtained by greedily trying to maximize and minimize j, and we can see that its value always stays within O(√m) of the initial jump length. This type of intuition might come in handy to optimize your dp and turn the naive dp into an AC solution.

## Change the object to dp

Example Problem : 559C - Gerald and Giant Chess

This is a classic example. If the board were smaller, say 3000 × 3000, then the normal 2D dp would work. However, the dimensions of the grid are too large here. Note that the number of blocked cells is not too large, though, so we can try to dp on them. Let S be the set of blocked cells. We add the ending cell to S for convenience. We sort S in increasing order of x-coordinate, breaking ties by increasing order of y-coordinate. As a result, the ending cell will always be the last element of S. Now, let dp[i] be the number of ways to reach the i-th blocked cell (assuming it is not blocked). Our goal is to find dp[s], where s = |S|. Note that since we have sorted S in increasing order, the j-th blocked cell does not affect the number of ways to reach the i-th blocked cell if i < j (there is no path that visits the j-th blocked cell before visiting the i-th blocked cell). The number of ways from square (x1, y1) to (x2, y2) without any blocked cells is C((x2 - x1) + (y2 - y1), x2 - x1) (if x2 > x1, y2 > y1; the cases where some coordinates are equal can be handled trivially). Let f(P, Q) denote the number of ways to reach Q from P. We can calculate f(P, Q) in O(1) by precomputing factorials and their inverses. The base case dp[1] can be calculated as the number of ways to reach S1 from the starting square. Similarly, we initialize every dp[i] as the number of ways to reach Si from the starting square. Now, we have to subtract the number of paths that pass through some of the blocked cells. Assume we have already fixed the values of dp[1], dp[2], ..., dp[i - 1]. For a fixed blocked cell Si, we do this by dividing the paths into groups according to the first blocked cell they encounter. The number of ways for each possible first blocked cell j is equal to dp[j]·f(Sj, Si), so we can subtract this from dp[i]. Thus, this dp works in O(n^2).
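Here is a sketch of the whole 559C solution (answer modulo 10^9 + 7; input is the board size h × w, then the n blocked cells, 1-indexed):

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long MOD = 1e9 + 7;
const int MX = 200005;
long long f[MX], inv[MX];

long long pw(long long b, long long e) {
    long long r = 1; b %= MOD;
    for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD;
    return r;
}
long long C(long long n, long long k) {
    if (k < 0 || k > n) return 0;
    return f[n] * inv[k] % MOD * inv[n - k] % MOD;
}
// number of monotone paths from (x1, y1) to (x2, y2)
long long ways(int x1, int y1, int x2, int y2) {
    if (x2 < x1 || y2 < y1) return 0;
    return C((x2 - x1) + (y2 - y1), x2 - x1);
}

int main() {
    f[0] = 1;
    for (int i = 1; i < MX; ++i) f[i] = f[i - 1] * i % MOD;
    inv[MX - 1] = pw(f[MX - 1], MOD - 2);
    for (int i = MX - 1; i >= 1; --i) inv[i - 1] = inv[i] * i % MOD;

    int h, w, n;
    scanf("%d %d %d", &h, &w, &n);
    vector<pair<int, int>> S(n);
    for (auto &c : S) scanf("%d %d", &c.first, &c.second);
    S.push_back({h, w});               // treat the goal as the last "blocked" cell
    sort(S.begin(), S.end());          // by x, ties by y

    vector<long long> dp(S.size());
    for (size_t i = 0; i < S.size(); ++i) {
        dp[i] = ways(1, 1, S[i].first, S[i].second);
        for (size_t j = 0; j < i; ++j)  // subtract paths whose first blocked cell is S[j]
            dp[i] = (dp[i] - dp[j] * ways(S[j].first, S[j].second,
                                          S[i].first, S[i].second)) % MOD;
    }
    printf("%lld\n", (dp.back() % MOD + MOD) % MOD);
}
```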
Another problem using this idea : 722E - Research Rover

## Open and Close Interval Trick

Example Problem : 626F - Group Projects

First, note that the order doesn't matter, so we can sort the ai in non-decreasing order. Now, note that every group's imbalance is determined by its largest and smallest value. We add the elements to the sets from smallest to largest, in order. Suppose we're adding the i-th element. Some of the current sets are open, i.e. have a minimum value but are not complete yet (do not have a maximum). Suppose there are j open sets. When we add ai, the difference ai - ai-1 contributes to each of the j open sets, so we increase the current imbalance by j(ai - ai-1). Let dp[i][j][k] be the number of ways such that, after we have inserted the first i elements, there are j open sets and the total imbalance so far is k. Now, let's see how to do the state transitions. Let v = dp[i - 1][j][k]. We analyze which states involve v. Firstly, the imbalance of the new state must be val = k + j(ai - ai-1), as noted above. Now, there are a few cases :

1. We place the current number ai in its own group : Then, dp[i][j][val] += v.
2. We place the current number ai in one of the open groups, but do not close it : Then, dp[i][j][val] += j·v (we choose one of the open groups to add ai to).
3. Open a new group with minimum = ai : Then, dp[i][j + 1][val] += v.
4. Close an open group by inserting ai into one of them and closing it : Then, dp[i][j - 1][val] += j·v.

The answer can be found as dp[n][0][0] + dp[n][0][1] + ... + dp[n][0][k].
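A sketch of this dp with the first dimension rolled (in 626F, n ≤ 200, the imbalance bound k ≤ 1000, and the answer is taken modulo 10^9 + 7):

```cpp
#include <bits/stdc++.h>
using namespace std;
const long long MOD = 1e9 + 7;

int main() {
    int n, K;
    scanf("%d %d", &n, &K);
    vector<long long> a(n + 1);
    for (int i = 1; i <= n; ++i) scanf("%lld", &a[i]);
    sort(a.begin() + 1, a.end());

    // cur[j][k] = ways with j open groups and total imbalance k so far
    vector<vector<long long>> cur(n + 2, vector<long long>(K + 1, 0));
    cur[0][0] = 1;
    for (int i = 1; i <= n; ++i) {
        vector<vector<long long>> nxt(n + 2, vector<long long>(K + 1, 0));
        long long d = (i >= 2) ? a[i] - a[i - 1] : 0;
        for (int j = 0; j < i; ++j)               // open groups before adding a[i]
            for (int k = 0; k <= K; ++k) {
                long long v = cur[j][k];
                if (!v) continue;
                long long val = k + j * d;        // every open group gains d
                if (val > K) continue;
                nxt[j][val]     = (nxt[j][val] + v) % MOD;              // own group
                nxt[j][val]     = (nxt[j][val] + v * j) % MOD;          // join an open group
                nxt[j + 1][val] = (nxt[j + 1][val] + v) % MOD;          // open a new group
                if (j) nxt[j - 1][val] = (nxt[j - 1][val] + v * j) % MOD; // close one
            }
        cur = move(nxt);
    }
    long long ans = 0;
    for (int k = 0; k <= K; ++k) ans = (ans + cur[0][k]) % MOD;
    printf("%lld\n", ans);
}
```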
Related Problems : 466D - Increase Sequence, 367E - Sereja and Intervals

## "Connected Component" DP

Example Problem : JOI 2016 Open Contest — Skyscrapers

Previously, I made a blog post here asking for a more detailed solution. With some hints from Reyna, I finally figured it out, and I've seen this trick appear a number of times since.

Abridged Statement : Given a1, a2, ..., an, find the number of permutations of these numbers such that |a1 - a2| + |a2 - a3| + ... + |a_{n-1} - a_n| ≤ L, where L is a given integer.

Constraints : n ≤ 100, L ≤ 1000, ai ≤ 1000

Now, we sort the values ai and add them to the permutation one by one. At each point, we will have some connected components of values (for example, the current state could look like 2, ?, 1, 5, ?, ?, 3, ?, 4). Now, suppose we have already added the first i elements. We treat each ? as a_{i+1} and calculate the cost; when we add a new number, we increase the values of the ?s and update the cost accordingly. Let dp[i][j][k][l] be the number of ways to insert the first i elements such that :

• There are j connected components
• The total cost is k (assuming the ?s are a_{i+1})
• l of the two ends of the permutation have been filled (so 0 ≤ l ≤ 2).

I will not describe the entire state transitions here as they would be very long. If you want the complete transitions, you can view the code below, where I commented what each transition means. Some key points to note :

• Each time you add a new element, you have to update the total cost by a_{i+1} - a_i times the number of filled spaces adjacent to an empty space.
• When you add a new element, it can either combine 2 connected components, create a new connected component, or be appended to the front or end of one of the connected components.

A problem that uses this idea can be seen here : 704B - Ant Man

## × 2,  + 1 trick

This might not be a very common trick, and indeed I've only seen it once and applied it myself once. This is a special case of the "Do we really need to visit all the states" example.

Example 1 : Perfect Permutations, Subtask 4

My solution only works up to Subtask 4. The official solution uses a different method, but the point here is to demonstrate this trick.

Abridged Statement : Find the number of permutations of length N with exactly K inversions. (K ≤ N, N ≤ 10^9, K ≤ 1000 (for subtask 4))

You might be wondering: how can we apply dp when N is as huge as 10^9? We'll show how below. The trick is to skip the unused states. First, we look at how to solve this when N, K are small. Let dp[i][j] be the number of permutations of length i with j inversions. Then, dp[i][j] = dp[i - 1][j] + dp[i - 1][j - 1] + ... + dp[i - 1][j - (i - 1)]. Why? Again, we consider building the permutation by adding the numbers from 1 to i in this order. When we add the element i, placing it before k of the current elements increases the number of inversions by k. So, we sum over all possibilities 0 ≤ k ≤ i - 1. We can calculate the whole table in O(NK) using a sliding window/prefix sums. How do we get rid of the N factor and replace it with K instead? We will use the following trick. Suppose we have calculated dp[i][j] for all 0 ≤ j ≤ K; we already know how to calculate dp[i + 1][j] for all 0 ≤ j ≤ K in O(K). The trick is that we can also calculate dp[2i][j] from dp[i][j] for all j in O(K^2). How? We will count pairs of permutations of 1, 2, ..., n and of n + 1, n + 2, ..., 2n and combine them together. Suppose the first permutation has x inversions and the second permutation has y inversions. What will the total number of inversions be when we merge them? Clearly, there will be at least x + y inversions. Now, call the numbers from 1 to n small and n + 1 to 2n large. Suppose we have already fixed the relative orders of the small and of the large numbers. Then we can replace the small numbers with the letter 'S' and the large numbers with the letter 'L'. Each L increases the number of inversions by the number of Ss to the right of it. Thus, to find the number of ways this interleaving can increase the number of inversions by k, we just have to find the number of unordered tuples of nonnegative integers (a1, a2, ..., an) that sum to k (we can view ai as the number of Ss to the right of the i-th L). How do we count this value? We'll instead count the number of such tuples where each element is positive and at most k and the elements sum to k, regardless of length. This value is precisely what we want for large enough n, because there can be at most k positive elements, and thus the length will not exceed n when n > k. We can handle the values for small n with the naive O(n^2) dp manually, so there's no need to worry about them. Thus, it remains to count the number of tuples where each element is positive, at most k, and the elements sum to S = k. Denote this count by f(S, k); we want to find f(k, k). We can derive the recurrence f(S, k) = f(S, k - 1) + f(S - k, k), according to whether we use a part equal to k in the sum. Thus, we can precompute these values in O(K^2).
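The precomputation is a textbook partition dp; in code (MOD is a placeholder for whatever modulus the task asks for):

```cpp
// f[S][k] = number of multisets of positive parts, each at most k, summing to S
vector<vector<long long>> f(K + 1, vector<long long>(K + 1, 0));
for (int k = 0; k <= K; ++k) f[0][k] = 1;                      // the empty sum
for (int S = 1; S <= K; ++S)
    for (int k = 1; k <= K; ++k) {
        f[S][k] = f[S][k - 1];                                 // no part equal to k
        if (S >= k) f[S][k] = (f[S][k] + f[S - k][k]) % MOD;   // use a part equal to k
    }
// f[K][K] is the value f(k, k) used in the text
```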
Now, let g0, g1, g2, ..., gK be the numbers of permutations of length n with number of inversions equal to 0, 1, 2, ..., K. To complete this step, we can multiply the polynomial g0 + g1·x + ... + gK·x^K by itself (in O(K^2), or with FFT, though that doesn't really change the complexity since the precomputation already takes O(K^2)) to obtain the number of pairs of permutations of {1, 2, ..., n} and {n + 1, n + 2, ..., 2n} with total number of inversions i, for all 0 ≤ i ≤ K. Next, we just have to multiply this with f(0, 0) + f(1, 1)·x + ... + f(K, K)·x^K, and we get the desired answer for permutations of length 2n, as noted above. Thus, we have found a way to obtain dp[2i][·] from dp[i][·] in O(K^2). To complete the solution, we first write N in its binary representation and compute the dp values naively for the number formed by the first ~10 bits (until the number is greater than K). Then, we can update the dp values when the current number is multiplied by 2 or increased by 1 in O(K^2) time, so we can find the value dp[N][K] in O(K^2 log N), which fits in the time limit for this subtask.

Example 2 : Problem Statement in Mandarin

This solution originated from the comment by WJMZBMR here.

Problem Statement : A sequence a1, a2, ..., an is valid if all its elements are pairwise distinct and 1 ≤ ai ≤ A for all i. We define value(S) of a valid sequence S as the product of its elements. Find the sum of value(S) over all possible valid sequences S, modulo p, where p is a prime.

Constraints : A, p ≤ 10^9, n ≤ 500, p > A > n + 1

Firstly, we can ignore the order of the sequence and multiply the answer by n! in the end, because the numbers are distinct. First, we look at the naive solution. Let dp[i][j] be the sum of values of all valid sequences of length j whose elements are taken from 1 to i inclusive. The recurrence is dp[i][j] = dp[i - 1][j] + i·dp[i - 1][j - 1], depending on whether i is used. This gives a complexity of O(A·n), which is clearly insufficient. Now, we'll use the idea from the last example. We already know how to calculate dp[i + 1][·] from dp[i][·] in O(n) time, so we just have to calculate dp[2i][·] from dp[i][·] fast. Suppose we want to calculate dp[2A][n]. Then, we consider, for all possible a, the sum of the values of all sequences where a of the elements are selected from 1, 2, ..., A and the remaining n - a are from A + 1, A + 2, ..., 2A. Firstly, note that every element of {A + 1, A + 2, ..., 2A} can be written as A + x for some x in {1, 2, ..., A}. Now, let ai denote the sum of the values of all sequences of length i whose elements are chosen from {1, 2, ..., A}, i.e. dp[A][i]. Let bi denote the same value, but with the elements chosen from {A + 1, A + 2, ..., 2A}. Now, we claim that bi is the sum over 0 ≤ j ≤ i of C(A - j, i - j)·A^(i - j)·aj. Indeed, this is just a result of expanding the products of the terms (A + x), where we iterate through all possible subset sizes j. Note that the term C(A - j, i - j) is the number of sets of size i which contain a given subset of size j, with all elements chosen from 1, 2, ..., A. (Take a moment to convince yourself of this formula.) Now, computing the value of C(A - j, i - j) modulo p isn't hard (you can write out the binomial coefficient and multiply its terms one by one with some precomputation; see the formula in the original pdf if you're stuck), and once you have that, you can calculate the values of bi in O(n^2). Finally, with the values of bi, we can calculate dp[2A][·] the same way as in the last example, as dp[2A][n] is just the sum of ai·b(n - i) over all 0 ≤ i ≤ n, and we can calculate this by multiplying the two polynomials formed by [ai] and [bi]. Thus, the entire step can be done in O(n^2). Thus, we can calculate dp[2i][·] and dp[i + 1][·] in O(n^2) and O(n) respectively from dp[i][·].
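Putting the two operations together, the driver is a walk over the binary digits of the target. Everything named below (initialState, doubleStep, plusOneStep, remainingBits) is a hypothetical helper standing in for the updates described above:

```cpp
// build dp[N][.] from dp[v][.], where v is the value of N's leading bits
vector<long long> dp = initialState(v);          // hypothetical: naive dp table for small v
for (int bit = remainingBits - 1; bit >= 0; --bit) {
    dp = doubleStep(dp);                         // dp[2i][.] from dp[i][.]  (O(K^2) / O(n^2))
    if ((N >> bit) & 1)
        dp = plusOneStep(dp);                    // dp[i+1][.] from dp[i][.] (O(K) / O(n))
}
// dp now represents dp[N][.]
```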
Thus, we can write A in binary as in the last example and compute the answers step by step, using O(log A) steps. Thus, the total time complexity is O(n^2 log A), which can pass.

This is the end of this post. I hope you benefited from it, and please share your own dp tricks with us in the comments.

By zscoder, history, 7 months ago,

Announcement

Start time : 21:00 JST as usual.

Reminder that this contest actually exists on Atcoder :)

Let's discuss the problems after the contest.

By zscoder, history, 7 months ago,

I don't know about others, but recently I've been getting quite a number of private messages on CF and Hackerrank (well, basically anywhere with a PM system) that sound like this :

"Hi, regarding codechef long challenge october.. How to do that power sum problwm.. did u get any idea.. if so. then please drop me a hint.. thanks"

"Hi, I was trying POWSUMS in this month's codechef long challenge. Can I get a hint for that problem? Thanks"

"Can you send me the code for Simplified Chess engine or give me how to solve it ?"

"Hi Zi, Any hints for Shashank and the Palindromic Strings Thanks."

and more. (FYI, "POWSUMS", "Simplified Chess engine" and "Shashank and the Palindromic Strings" are live contest problems.)

Is anyone else getting these PMs too? I find it annoying to see "You received 2 new messages" and discover that all of them are asking for hints/solutions/code for a live contest problem. Why do people do this? It's not like anyone is going to tell them the solution anyway.

By zscoder, history, 8 months ago,

We hope everyone enjoyed the problems. Here is the editorial for the problems. I tried to make it more detailed, but there might be some parts that are not explained clearly.

## Div. 2 A — Crazy Computer

Prerequisites : None

This is a straightforward implementation problem. Iterate through the times in order, keeping track of the last time a word was typed and a counter for the number of words appearing on the screen. For each new time, first reset the counter to 0 if the difference from the previous word's time is greater than c; then increment the counter by 1.

Time Complexity : O(n), since the times are already sorted.

Code (O(n))
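For completeness, a full solution fits in a dozen lines:

```cpp
#include <bits/stdc++.h>

int main() {
    long long n, c;
    scanf("%lld %lld", &n, &c);
    long long prev = -1, cnt = 0;
    for (long long i = 0; i < n; ++i) {
        long long t;
        scanf("%lld", &t);
        if (prev != -1 && t - prev > c) cnt = 0;  // screen cleared before this word
        ++cnt;                                    // the new word appears on screen
        prev = t;
    }
    printf("%lld\n", cnt);
}
```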
## Div. 2 B — Complete The Word

Prerequisites : None

Firstly, if the length of the string is less than 26, output -1 immediately. We want to make a substring of length 26 contain all the letters of the alphabet. Thus, the simplest way is to iterate through all substrings of length 26 (there are O(n) such substrings), and for each substring count the number of occurrences of each letter, ignoring the question marks. After that, if there exists a letter that occurs twice or more, this substring cannot contain all letters of the alphabet, and we process the next substring. Otherwise, we can fill in the question marks with the letters that have not appeared in the substring, obtaining a substring of length 26 which contains all letters of the alphabet. After iterating through all substrings, either there is no solution, or we have already created a nice substring. In the former case, output -1. Otherwise, fill in the remaining question marks with arbitrary letters and output the string. Note that one can optimize the solution above by noting that we don't need to iterate through all 26 letters of each substring we consider: we can iterate through the substrings from left to right, and when we move to the next substring, remove the first letter of the current substring and add the last letter of the next one. This optimization is not required to pass. We can optimize even further and make the complexity a pure O(|s|). We use the same trick as above: when we move to the next substring, we remove the old letter and add the new letter. We store a frequency array counting how many times each letter appears in the current substring. Additionally, we store a counter which we will use to detect in O(1) whether the current substring can contain all the letters of the alphabet. When a letter first appears in the frequency array, increment the counter by 1. When a letter disappears (is removed) from the frequency array, decrement the counter by 1. When we add a new question mark, increment the counter by 1. When we remove a question mark, decrement the counter by 1. To check whether a substring can work, we just have to check whether the counter is equal to 26. This solution works in O(|s|).

Time Complexity : O(|s|·26^2), O(|s|·26) or O(|s|)

Code (O(26^2·|s|)) Code (O(26·|s|)) Code (O(|s|))

## Div. 2 C/Div. 1 A — Plus and Square Root

Prerequisites : None

Firstly, let ai (1 ≤ i ≤ n) be the number on the screen before we level up from level i to i + 1. Thus, we require all the ai to be perfect squares, and additionally, to be able to reach the next ai by pressing the plus button, we require ai+1 ≥ sqrt(ai) and (i + 1) | (ai+1 - sqrt(ai)) for all 1 ≤ i < n. Additionally, we also require ai to be a multiple of i. Thus, we just need to construct such a sequence of integers so that the output numbers do not exceed the limit 10^18. There are many ways to do this. The third sample actually gives a large hint about my approach. If you were to find the values of ai from the second sample, you'd realize they equal 4, 36, 144, 400. You can try to find the pattern from here. My approach is to use ai = [i(i + 1)]^2. Clearly, it is a perfect square for all 1 ≤ i ≤ n, and when n = 100000, the output values can be checked to be less than 10^18. Note that sqrt(ai) = i(i + 1), which is a multiple of i + 1, and ai+1 - sqrt(ai) = (i + 1)·((i + 1)(i + 2)^2 - i) is also a multiple of i + 1. The constraint that ai must be a multiple of i was added to make the problem easier for Div. 1 A.

Time Complexity : O(n)

Code (O(n))

## Div. 2 D/Div. 1 B — Complete The Graph

Prerequisites : Dijkstra's Algorithm

This problem is actually quite simple once you rule out the impossible conditions. Call the edges that do not have a fixed weight variable edges. First, we'll determine when a solution exists. Firstly, we ignore the variable edges. Now, find the length of the shortest path from s to e. If this length is < L, there is no solution, since even if we replace the 0 weights with any positive weights, the shortest path will never exceed this shortest path. Thus, if the length of this shortest path is < L, there is no solution. (If no path exists, we treat the length as ∞.) Next, we replace the edges with 0 weight with weight 1. Clearly, among all the possible graphs you can generate by replacing the weights, this graph gives the minimum possible shortest path from s to e, since increasing any weight will not decrease the length of the shortest path. Thus, if the shortest path of this graph is > L, there is no solution, since the shortest path will always be > L. (If no path exists, we treat the length as ∞.)
Other than these two conditions, there will always be a way to assign the weights so that the shortest path from s to e is exactly L! How do we prove this? First, consider only the paths from s to e that have at least one 0-weight edge, as changing weights won't affect the other paths. Now, we repeat this algorithm. Initially, assign all the weights as 1. Then, sort the paths in increasing order of length. If the length of the shortest path is equal to L, we're done. Otherwise, increase the weight of one of the variable edges on the shortest path by 1. Note that this will increase the lengths of some of the paths by 1. It is not hard to see that by repeating these operations the shortest path will eventually have length L, so an assignment indeed exists. Now, we still have to find a valid assignment of weights. We can use a similar algorithm to our proof above. Assign 1 to all variable edges first. Next, we find and keep track of the shortest path from s to e. Note that if this path has no variable edges, it must have length exactly L or strictly more than L, so either we're already done or the shortest path contains variable edges and its length is strictly less than L (otherwise we're done). From now on, whenever we assign a weight to a variable edge (after the initial assignment of 1 to every variable edge), we call the edge assigned. Now, mark all variable edges not on the shortest path we found with weight ∞ (we can choose any number greater than L as ∞). Next, we find the shortest path from s to e, and replace the weight of an unassigned variable edge on it such that the length of the path becomes equal to L. From now on, we don't touch the assigned edges again. While the shortest path from s to e is still strictly less than L, we repeat the process and replace the weight of an unassigned variable edge such that the path length becomes equal to L. Note that this is always possible, since otherwise this path would've been the shortest path in one of the previous steps. Eventually, the shortest path from s to e will have length exactly L. It is easy to see that we repeat this process at most n times, because we only replace edges which are on the initial shortest path we found, and there are fewer than n edges to replace (we touch each edge at most once). Thus, we can find a solution after fewer than n iterations, and the complexity becomes O(nm log n). This is sufficient to pass all tests. What if the constraints were n, m ≤ 10^5? Can we do better? Yes! Thanks to HellKitsune who found this solution during testing. First, we rule out the impossible conditions as we did above. Then, we assign weight ∞ to all the variable edges. We enumerate the variable edges arbitrarily. Now, we binary search for the minimal value p such that, if we make the variable edges numbered from 1 to p have weight 1 and the rest ∞, then the shortest path from s to e has length ≤ L. Now, note that if we change the weight of edge p to ∞, the length of the shortest path will be more than L. (If p equals the number of variable edges, the length of the shortest path would still be more than L, contradicting the impossible conditions.) If the weight is 1, the length of the shortest path is ≤ L. So, if we increase the weight of edge p by 1 repeatedly, the length of the shortest path from s to e will eventually reach L, since this length increases by at most 1 in each move. So, since the length of the shortest path is non-decreasing as we increase the weight of this edge, we can binary search for the correct weight. This gives an O(m log n (log m + log L)) solution.
Time Complexity : O(nm log n) or O(m log n (log m + log L))

Code (O(nm log n)) Code (O(m log n (log m + log L)))

## Div. 2 E/Div. 1 C — Digit Tree

Prerequisites : Tree DP, Centroid Decomposition, Math

Compared to the other problems, this one is more standard. The trick is to first solve the problem where we have a fixed vertex r as root and we want to count the paths passing through r that work. This can be done with a simple tree dp. For each node u, compute the number obtained when going from r down to u and the number obtained when going from u up to r, where each number is taken modulo M. This can be done with a simple dfs. To calculate the down value, just multiply the value of the parent node by 10 and add the digit on the edge. To calculate the up value, we also need to know the height of the node (i.e. the distance from u to r). If we let h be the height of u, d be the digit on the edge connecting u to its parent, and val be the up value of the parent of u, then the up value of u is equal to 10^(h - 1)·d + val. Thus, we can calculate the up and down values of every node with a single dfs. Next, we have to figure out how to combine the up values and down values to count the paths passing through r that are divisible by M. For this, note that each path is the concatenation of a path from u to r and a path from r to v, where u and v are vertices from different subtrees, plus the paths that start or end at r itself. For the paths that start or end at r, the answer can be easily calculated with the up and down values (just iterate through all nodes as the other endpoint). For the other paths, we iterate through all possible v, and find the number of vertices u such that going from u to v gives a multiple of M. Since v is fixed, we know its height and down value, which we denote by h and d respectively. So, if the up value of u is equal to up, then up·10^h + d must be a multiple of M. So, we can solve for up: it must be -d·10^(-h) modulo M. Note that in this case the multiplicative inverse of 10 modulo M is well-defined, as we have the condition gcd(10, M) = 1. To find the multiplicative inverse of 10, we can find φ(M); by Euler's formula we have x^φ(M) ≡ 1 (mod M) whenever gcd(x, M) = 1, so x^(φ(M) - 1) ≡ x^(-1) (mod M) is the multiplicative inverse of x (in our case x = 10) modulo M. After that, finding the required up value can be done with binary exponentiation.
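The inverse-of-10 computation sketched in code (M need not be prime; it only has to be coprime to 10):

```cpp
long long pw(long long b, long long e, long long m) {   // binary exponentiation: b^e mod m
    long long r = 1 % m; b %= m;
    for (; e; e >>= 1, b = b * b % m)
        if (e & 1) r = r * b % m;
    return r;
}
long long phi(long long m) {                            // Euler's totient by trial division
    long long res = m;
    for (long long p = 2; p * p <= m; ++p)
        if (m % p == 0) { res -= res / p; while (m % p == 0) m /= p; }
    if (m > 1) res -= res / m;
    return res;
}
// inv10 = pw(10, phi(M) - 1, M) is 10^{-1} (mod M); then 10^{-h} is pw(inv10, h, M)
```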
Thus, we can find the unique value of up such that the path from u to v gives a multiple of M. This means that we can just use a map to store the up values of all nodes, and also the up values within each subtree. Then, to find the number of viable nodes u, look up the required value of up and subtract the number of suitable nodes that are in the same subtree as v from the total number of suitable nodes. Thus, for each node v, we can find the number of suitable nodes u in O(log n) time. Now, we have to generalize this to the whole tree. We can use centroid decomposition. We pick the centroid as the root r and count the paths passing through r as above. Then, the other paths don't pass through r, so we can remove r, splitting the tree into smaller subtrees, and solve each subtree recursively. Since each subtree is at most half the size of the original tree, and solving the through-the-root problem for a single tree takes time near-proportional to the size of the tree, this solution works in O(n log^2 n) time, where the extra log comes from using maps.

Time Complexity : O(n log^2 n)

Code

## Div. 1 D — Create a Maze

Prerequisites : None

The solution to this problem is quite simple, if you get the idea. Thanks to dans for improving the solution to the current constraints, which are much harder than my original proposal. Note that to calculate the difficulty of a given maze, we can just use dp. We write in each square (room) the number of ways to get from the starting square to it; the number written in (i, j) is the sum of the numbers written in (i - 1, j) and (i, j - 1), except that if the edge between (i - 1, j) and (i, j) is blocked, we don't add the number written in (i - 1, j), and similarly for (i, j - 1). We'll call the rooms squares and the doors edges, and we'll call locking doors edge deletions. First, we look at several attempts that do not work. Write t in its binary representation. To solve the problem, we just need to know how to construct a maze with difficulty 2x, and one with difficulty x + 1, from a given maze with difficulty x. The most direct way to get from x to 2x is to increase both dimensions of the maze by 1. Say the bottom-right square of the grid was (n, n), and it increases to (n + 1, n + 1); the number x is written at (n, n). Then, we can block off the edge to the left of (n + 1, n) and the edge above (n, n + 1). This makes the numbers in these two squares equal to x, so the number in square (n + 1, n + 1) is 2x, as desired. To create x + 1 from x, we can increase both dimensions by 1 and remove edges such that (n + 1, n) contains x while (n, n + 1) contains 1 (this requires deleting most of the edges joining the n-th column and the (n + 1)-th column). Thus, the number in (n + 1, n + 1) would be x + 1. This would've used way too many edge deletions, and the size of the grid would be too large. This was the original proposal. There's another way to do it with the binary representation. We construct grids with difficulty 2x and 2x + 1 from a grid with difficulty x. The key idea is to make use of surrounding 1s, maintained with some walls, so that 2x + 1 can be easily constructed. This method is shown in the picture below. This method would've used around a 120 × 120 grid and 480 edge deletions, which is still too large to pass. Now, what follows is the AC solution. Since it's quite easy once you get the idea, I recommend you try again after reading the hint. To read the full solution, click on the spoiler tag.

Hint : Binary can't work since there can be up to 60 binary digits in t and our grid size can be at most 50. In our binary solution we used a 2 × 2 grid to multiply the number of ways by 2. What about using other grid sizes instead?

Full Solution

Of course, this might not be the only way to solve this problem. Can you come up with other ways of solving it or reducing the constraints even further? (Open Question)

Time Complexity :

Code

## Div. 1 E — Complete The Permutations

Prerequisites : Math, Graph Theory, DP, Any fast multiplication algorithm

We'll slowly unwind the problem and reduce it to something easier to count. First, we need to determine when the distance between p and q is exactly k. This is a classic problem, but I'll include it here for completeness. Let f denote the inverse permutation of q. The minimum number of swaps to transform p into q equals the minimum number of swaps to transform p∘f into the identity permutation. Construct the graph with an edge (qi → pi) for all 1 ≤ i ≤ n. Now, note that this graph is exactly the functional graph of p∘f, and it is composed of disjoint cycles once the qi and pi are filled in completely.
Note that the direction of the edges doesn't matter for the cycle structure, so from now on we write the edges as (pi → qi) for all 1 ≤ i ≤ n. Note that if the number of cycles of the graph is t, then the minimum number of swaps needed to transform p into q is n - t (each swap can break one cycle into two). This means we just need to find the number of ways to fill in the empty spaces such that the number of cycles is exactly i, for all 1 ≤ i ≤ n. Now, some of the values pi and qi are known. The edges can be classified into four types :

A-type : edges of the form (x → ?), i.e. pi is known, qi isn't.
B-type : edges of the form (? → x), i.e. qi is known, pi isn't.
C-type : edges of the form (x → y), i.e. both pi and qi are known.
D-type : edges of the form (? → ?), i.e. both pi and qi are unknown.

Now, the problem reduces to finding the number of ways to assign values to the question marks such that the number of cycles of the graph is exactly i, for all 1 ≤ i ≤ n. First, we'll simplify the graph slightly. While there exists a number x that appears twice among the edges (clearly it can't appear more than twice), we combine the two edges containing x. If there's an edge (x → x), we increment the total number of cycles by 1 and remove this edge from the graph. If there are edges (a → x) and (x → b), where a and b may be given numbers or question marks, then we merge them into the single edge (a → b). Clearly, these are the only cases in which x appears twice. Hence, after doing all the reductions, we're left with edges in which every known number appears at most once, i.e. all the known numbers are distinct. We'll do this step in O(n^2). For each number x, store the position i such that pi = x and the position j such that qj = x, if given, and -1 otherwise. We need to remove a number exactly when both stored positions are positive. We iterate through the numbers from 1 to n. If we need to remove a number, we go to the two positions where it occurs and replace the two edges with the new merged one. Then, we recompute the positions of all numbers (which takes O(n) time). So, for each number, we use O(n) time (to remove naively and update positions). Thus, the whole complexity of this part is O(n^2). (It is possible to do it in O(n) with a simple dfs as well. Basically, almost any correct way of doing this part that is at most O(n^3) works, since the constraint on n is low.) Now, suppose there are m edges left and p known numbers remain. Note that in the end, when we form the graph, we may join edges of the form (a → ?) and (? → b) together, and the choice for the joined ? can be any of the m - p remaining unused numbers. Note that there will always be m - p such pairs, so we need to multiply our answer by (m - p)! in the end. Also, note that the ?s are distinguishable, and order is important when filling in the blanks. So, we can actually reduce the problem to the following: given integers a, b, c, d denoting the numbers of A-type, B-type, C-type and D-type edges respectively, find the number of ways to create k cycles using them, for all 1 ≤ k ≤ n. Note that the answer depends only on the values of a, b, c, d, as the numbers are all distinct after the reduction. First, we'll look at how to solve the problem for k = 1, i.e. fitting all the edges into a single cycle. First, we investigate what happens when d = 0. Note that we cannot have a B-type or C-type edge directly before an A-type or C-type edge, since all known numbers are distinct, so these edges can't be joined together.
Similarly, an A- or C-type edge cannot come directly after a B- or C-type edge. Thus, with these restrictions, it is easy to see that the cycle must consist of either all A-type edges or all B-type edges. So, the answer can be easily calculated. It is also important to note that if we ignore the cyclic property, then a contiguous string of edges without D must be of the form AA...BB... or AA...CBB..., where there is only one C and zero or more As and Bs.

Now, if d ≥ 1, we can fix one of the D-type edges as the front of the cycle. This helps a lot, because now we can ignore the cyclic properties. (We can place anything at the end of the cycle, because D-type edges can connect with any type of edge.) So, we just need to find the number of ways to make a string of length a + b + c + (d - 1) with a As, b Bs, c Cs and d - 1 Ds. In fact, we can ignore the fact that the A-type, B-type, C-type and D-type edges are distinguishable, and after that multiply the answer by a!b!c!(d - 1)!.

We can easily find the number of valid strings we can make. First, place all the Ds. Now, we're trying to insert the As, Bs and Cs into the d empty spaces between, after and before the Ds. The key is that, by our observation above, we only care about how many As, Bs and Cs we insert into each space, since after that the way to put them in is uniquely determined. So, to place the As and Bs, we can use the balls-in-urns formula to find that the number of ways to place the As is C(a + d - 1, d - 1) and the number of ways to place the Bs is C(b + d - 1, d - 1). The number of ways to place the Cs is C(d, c), since we choose which of the d spaces receive a C (each space holds at most one C). Thus, it turns out that we can find the answer in O(1) (with precomputed binomial coefficients and factorials) when k = 1. We'll use this to find the answer for all k.

In the general case, there might be cycles that consist entirely of As or entirely of Bs, and those that contain at least one D. We call them the A-cycles, B-cycles and D-cycles respectively. Now, we precompute f(n, k), the number of ways to form k cycles using n distinguishable As (these are exactly the unsigned Stirling numbers of the first kind). This can be done with a simple dp in O(n³). We iterate through the number of As we're using for the first cycle. Suppose we use m As. The number of ways to choose which m As to use is C(n, m), and we can permute them in (m - 1)! ways inside the cycle (not m!, because we have to account for the cyclic rotations). Also, after summing this over all m, we have to divide the answer by k, to account for overcounting the candidates for the first cycle (the order of the k cycles is not important). Thus, f(n, k) can be computed in O(n³).

First, we see how to compute the answer for a single k. Fix x, y, e, f: the number of A-cycles, the number of B-cycles, the total number of As among the A-cycles, and the total number of Bs among the B-cycles. Then, since k is fixed, we know that the number of D-cycles is k - x - y. Now, we can find the answer in O(1). First, we can use the values of f(e, x), f(f, y), f(d, k - x - y) to determine the number of ways to place the Ds, and the As and Bs that are in the A-cycles and B-cycles. Then, to place the remaining As, Bs and Cs, we can use the same method as we did for k = 1 in O(1), since the number of spaces to place them is still the same. (You can think of it as each D leaving an empty space to place As, Bs and Cs to the right of it.) After that, we multiply the answer by C(a, e)·C(b, f) to account for the choice of the sets of As and Bs used in the A-only and B-only cycles. Thus, the complexity of this method is O(n⁴) for each k and O(n⁵) in total, which is clearly too slow.
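Since f(n, k) counts arrangements of n distinguishable items into k cycles (the unsigned Stirling numbers of the first kind, as noted above), here is a minimal sketch of the precomputation using the equivalent textbook recurrence — an O(n²) alternative to the O(n³) dp described above. The modulus 998244353 is an assumption (a common NTT-friendly prime; the round's actual modulus may differ).

```cpp
#include <vector>
using std::vector;

const long long MOD = 998244353; // assumption: an NTT-friendly prime

// f[n][k] = number of ways to arrange n distinguishable items into k cycles
// (unsigned Stirling numbers of the first kind), via the recurrence
// f[n][k] = f[n-1][k-1] + (n-1) * f[n-1][k]: item n either starts a new
// cycle or is inserted directly after any of the n-1 existing items.
vector<vector<long long>> cycle_counts(int N) {
    vector<vector<long long>> f(N + 1, vector<long long>(N + 1, 0));
    f[0][0] = 1;
    for (int n = 1; n <= N; ++n)
        for (int k = 1; k <= n; ++k)
            f[n][k] = (f[n - 1][k - 1] + (n - 1) * f[n - 1][k]) % MOD;
    return f;
}
```

Either way, the table is cheap to precompute; the bottleneck is the summation over x, y, e, f, which is sped up next.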
We can improve this by iterating through all x + y, e, f instead. For this to work, we need to precompute f(e, 0)f(f, x + y) + f(e, 1)f(f, x + y - 1) + ... + f(e, x + y)f(f, 0), which we can write as g(x + y, e, f). Naively doing this precomputation takes O(n⁴). Then, we can calculate the answer by iterating through all x + y, e, f, thus getting O(n³) per value of k and O(n⁴) for all k. This is still too slow to pass n = 250. We should take a closer look at what we're actually calculating.

Note that for a fixed pair e, f, the values of g(x + y, e, f) can be calculated for all possible x + y in O(n log n) or O(n^1.58) by using the Number Theoretic Transform or Karatsuba's algorithm, respectively. (Note that the modulus has been chosen for NFT to work.) This is because if we fix e, f, then we're precisely finding the coefficients of the polynomial (f(e, 0)x^0 + f(e, 1)x^1 + ... + f(e, n)x^n)(f(f, 0)x^0 + f(f, 1)x^1 + ... + f(f, n)x^n), so this can be handled with NFT/Karatsuba. Thus, the precomputation of g(x + y, e, f) can be done in O(n³ log n) or O(n^3.58).

Next, suppose we fix e and f. We will calculate the answer for all possible k in O(n log n), similar to how we calculated g(x + y, e, f). This time, we're multiplying the following two polynomials : f(d, 0)x^0 + f(d, 1)x^1 + ... + f(d, n)x^n and g(0, e, f)x^0 + g(1, e, f)x^1 + ... + g(n, e, f)x^n. Again, we can calculate this using any fast multiplication method, so the entire solution takes O(n³ log n) or O(n^3.58), depending on which algorithm is used to multiply polynomials.

Note that if you're using NFT/FFT, there is a small trick that can save some time. When we precompute the values of g(x + y, e, f), we don't need to apply the inverse FFT to the result and can leave it in the FFTed form. After that, when we want to find the convolution of f(d, i) and g(i, e, f), we just need to apply the FFT to the first polynomial and multiply them pointwise. This reduces the number of FFTs, and it cut my solution's runtime in half.

Time Complexity : O(n³ log n) or O(n^3.58), depending on whether NFT or Karatsuba is used.

Code (NFT)

Code (Karatsuba)

By zscoder, history, 8 months ago,

Hi everyone, it's me again!

Codeforces Round #372 (Div. 1 + Div. 2) will take place on 17 September 2016 at 16:35 MSK. After my last round, this will be my second round on Codeforces. I believe you'll find the problems interesting, and I hope you'll enjoy the round.

This round would not be possible without dans, who improved one of the problems and thereby made this round possible, and who also helped in preparing and testing the round. Also, thanks to all the testers, IlyaLos, HellKitsune and phobos, and thanks to MikeMirzayanov for the awesome Codeforces and Polygon platforms.

ZS the Coder and Chris the Baboon's trip in Udayland is over. In this round, you'll help ZS the Coder solve the problems he has randomly come up with. Do you have what it takes to solve them all?

The problems are sorted by difficulty, but as always it's recommended to read all the problems. We wish you many solutions and hope you enjoy the problems. :)

As usual, the scoring will be published right before the contest.

UPD : There will be 5 problems in both divisions as usual. Scoring :

Div. 2 : 500 — 1000 — 1500 — 2000 — 2500

Div. 1 : 500 — 1000 — 1500 — 2500 — 2750

Good luck and I hope you enjoy the problems!

UPD : Contest is over. I hope you enjoyed the contest and problems :) I'm sure some of you want to see the editorial now, so here it is while we wait for the system tests to start.

UPD : System tests are over. Here are the winners :

Division 1 :

Division 2 :

Congratulations to them!
By zscoder, history, 8 months ago,

Here are the editorials for all the problems. Hope you enjoyed them and found them interesting!

Code

Code

Code (O(nkm^2))

Code (O(nkm))

Code

Code

By zscoder, history, 9 months ago,

Important Update: Our friends have noticed that the upcoming round collides with their contest, and the weekend is also full of many other contests, so the round is now moved to Monday, 29 August 2016 15:05 MSK. We are sorry for the inconvenience caused and hope that you'll understand us.

Hi everyone! Codeforces Round #369 (Div. 2) will take place on 27 August 2016 at 16:05 MSK. As usual, Div. 1 participants can join out of competition.

I would like to thank dans for helping me with the preparation of the round, MikeMirzayanov for the amazing Codeforces and Polygon platforms, and also Phyto for testing the problems. I am the author of all the problems, and dans also helped make one of the problems harder. This is my first round on Codeforces!

Hope everyone will enjoy the problems and find them interesting. It is advisable to read all the problems ;)

In this round, you will help ZS the Coder and Chris the Baboon while they are on an adventure in Udayland. Can you help them solve their problems? :)

Good luck, have fun, and we wish everyone many Accepted solutions. :)

UPD : Also thanks to IlyaLos and HellKitsune for testing the problems too.

UPD 2 : There will be 5 problems and the scoring is standard : 500-1000-1500-2000-2500.

UPD 3 : Editorial

UPD 4 : Congratulations to the winners :

Div. 1 winners :

Div. 2 winners :

By zscoder, history, 10 months ago,

Hi everyone! I created a small group here which is open to the public. There will be 3 five-hour contests held there, featuring 3 problems each; the problems are taken from olympiads of different countries as well as from other sites (though these are rare). The contests will be in ACM-ICPC mode (since this is the default CF mode). Since almost all of the problems are unoriginal, it is very likely that you have seen some of them before. Everyone is welcome to join the group and participate in any contest anytime. The schedule of the contests has been posted in the group. Additionally, ziadxkabakibi told me he has uploaded some Croatian OI problems before, so he might add them to the group as well.

By zscoder, history, 10 months ago,

Problem statement : There are N planets in the galaxy. Each planet i has two possible states : it is empty, or it is connected to some other planet j ≠ i by a one-way path. Initially, each planet is in the empty state (i.e. there are no paths between any pair of planets). There are three types of operations :

1. If planet i is in the empty state, connect it to planet j with a one-way path (i and j are distinct).

2. If planet i is currently connected to planet j by a one-way path, remove that path. Consequently, planet i is in the empty state again.

3. For a pair of planets u, v, find a planet x such that one can travel from u to x and from v to x. If there are multiple such planets, find the one for which the total distance travelled by u and v is minimized (distance is the number of paths passed through). If there are no solutions, output -1.

Q, N ≤ 10^6. The time limit is 10 seconds.

One of the official solutions uses a splay tree to solve this problem, but I have no idea how it works (I haven't used splay trees before). Can anyone tell me how to use a splay tree on this problem? Thanks.
By zscoder, history, 10 months ago,

Recently I encountered a problem which is very similar to RMQ.

Abridged statement : First, you have an empty array. You have two operations :

1. Insert a value v between positions x - 1 and x (1 ≤ x ≤ k + 1), where k is the current size of the array (positions are 1-indexed).

2. Find the maximum in the range [l, r] (i.e. the maximum over positions l to r inclusive).

There are at most Q ≤ 250000 queries. My current solution uses sqrt decomposition, but it was not fast enough to get AC. I was wondering if there is a faster solution.

Edit : Forgot to mention that the queries must be answered online (it's actually a function call), so offline solutions don't work.

By zscoder, history, 11 months ago,

Problem Statement

Abridged problem statement : Given a_1, a_2, ..., a_n, find the number of permutations of these numbers such that |a_1 - a_2| + |a_2 - a_3| + ... + |a_{n-1} - a_n| ≤ L, where L is a given integer.

The editorial given is quite brief and the sample code is nearly unreadable. I have no idea how to do the dp. Can anyone explain the solution? Thanks.

UPD : Thanks to ffao for the hint! I finally got how the dp works. The unobfuscated code with comments is here.

By zscoder, history, 12 months ago,

Reminder that Google Distributed Code Jam Online Round 1 starts at this time. Good luck to all people participating!

By zscoder, history, 12 months ago,

I was trying to solve this problem. I could only figure out the naive solution (DFS from each vertex). I think I have encountered similar problems before, but I couldn't solve them either. How do I solve this kind of problem?

By zscoder, history, 13 months ago,

I keep getting this message when I try to package the problem in Polygon : PackageException: There exists a test where checker crashes. Why does this error show up? (I'm using the standard checker that compares sequences of integers.)

By zscoder, history, 13 months ago,

Reminder that Ad Infinitum 15 — Math Programming Contest starts at Apr 15 2016, 11:30 pm CST. T-shirts for the top 10 ranks on the leaderboard.

By zscoder, history, 13 months ago,

## 1. COLOR

This problem is trivial. Note that we want the resulting string to be monochromatic, so we can just choose the color with the maximal number of occurrences and change the remaining letters into that color.

## 2. CHBLLNS

This problem is also trivial. For each color, we take k - 1 balls, or, if there are fewer than k - 1 balls, we take all of them. At this point, there are at most k - 1 balls of each color. Then, take one more ball. By the pigeonhole principle, there exist k balls of the same color. So, this is our answer.

## 3. CHEFPATH

This problem is not hard. WLOG, assume n ≤ m. If n = 1, then the task is possible if and only if m = 2, for obvious reasons. Otherwise, if m, n are both odd, then coloring the board in a checkerboard fashion shows that no such path exists. If one of m, n is even, then such a path exists (it's not hard to construct such a path).

## 4. BIPIN3

In fact, the answer is the same for any tree. For the root vertex we can select k possible colors. For each child, note that we can select any color except the color of the parent vertex, so there are k - 1 choices. So, in total there are k·(k - 1)^{n - 1} possible colorings. To evaluate this value, we use modular exponentiation.

## 5. FIBQ

The idea of this problem is to convert the sum into matrix language.
Let T be the Fibonacci matrix

$T = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$

Additionally, let I be the identity matrix. Then, our desired answer will be the top right element of (T^{a_l} + I)(T^{a_{l+1}} + I)...(T^{a_r} + I). Now, to support the update queries, just use a segment tree. The segment tree from this link is perfect for the job.

## 6. DEVGOSTR

Despite its appearance, this problem is actually very simple. For A = 1, there is at most 1 possible string, namely the string consisting of all 'a's. The background of this problem is Van der Waerden's theorem. According to the Wikipedia page, the maximal length of a good string for A = 2 is 8 and the maximal length of a good string for A = 3 is 26. Now, for A = 2 we can brute-force all possible strings. However, for A = 3 we need to brute-force more cleverly. The key is to note that if a string s is not good, then appending any letter to s will not make it good. So, we just need to attempt to append letters to good strings. This pruning will easily AC the problem.

## 7. AMAEXPER

Call a point that minimizes the answer for the subtree rooted at r the king of r. If there are multiple kings, choose the one closer to r. The problem revolves around an important lemma :

Lemma : For the subtree rooted at r, consider all children of r. Let l be the child such that the distance from r to the furthest node in the subtree rooted at l is the longest. If there are multiple such l, then r is the king. Otherwise, either r is the king, or a node in the subtree rooted at l is the king.

Sketch of proof : Suppose not; then the maximal distance from some other vertex will be the distance from that node to the root + the distance from the root to the furthest node in the subtree rooted at l, so choosing r is more optimal.

Thus, we can actually divide the tree into chains. A chain starts from some vertex v, and we keep going down to the child l. For each node r, we store the distance to the root, the maximal distance from r to any vertex in its subtree, as well as the maximal distance IF we don't go down to the child l (this distance is 0 if r has only 1 child). Now, for each chain, we start from the bottom. We also maintain a king counter starting from the bottom. At first, the answer for the bottom node is the maximal distance stored for that node. Then, as we go up to the top of the chain, note that the possible places for the king are on the chain and will not go below the king of the node below. Thus, we can update the king counter in a sliding-window-like fashion. How do we find the answer for each node? This value can be calculated in terms of the numbers stored on each node, and using an RMQ we can find the desired answer. The details for this will not be included here.

## 8. FURGRAPH

The key observation for this problem is the following : Instead of weighing the edges, for each edge with weight w, we add w to both endpoints of the edge (note that for self-loops the weight is added to the same vertex twice). Then, the difference between the scores of Mario and Luigi is just the difference of the sums of weights on the vertices they have chosen, divided by 2. Why? If an edge has both its endpoints picked by Mario (or Luigi), then Mario's (or Luigi's) score will increase by 2w. If each person picks one of the endpoints, then the difference of scores is unchanged, as desired.

Now, Mario and Luigi's strategy is obvious : they will choose the vertex with the maximal possible weight at the time, so letting a_1 ≤ a_2 ≤ ... ≤ a_n be the weights of the vertices in sorted order, the answer is a_n - a_{n-1} + a_{n-2} - ...
+ (-1)^{n-1} a_1, i.e. the alternating sum of the weights in sorted order.

Using this observation alone can easily give us Subtask 2. We can naively store the vertex weights in a map and update as necessary. For each query, just loop through the map and calculate the alternating sum.

Subtask 3 requires more optimization. We will use sqrt decomposition together with an order statistic tree to store the vertex weights in sorted order. Note that for each update, we're cyclically shifting the values of a_l to a_r, for some l, r which can be found from our order statistic tree; then, we set a_r to (the old) a_l + w. To efficiently perform these queries, we divide the array (of vertex weights) into blocks of roughly √n elements each. We start from element l. For each block we store a deque containing the elements, the sum of the elements (for convenience, we store an element as negative if its value is to be subtracted when calculating the alternating sum), and the sign of the elements in the block. We iterate through the elements and perform the necessary updates until we reach the end of the block. Then, for updating entire blocks, we perform the necessary updates on the values, negate the sign, and, since we're using a deque, we can pop the front and push back the desired element (namely the next element). Then, when we reach the block containing r, we iterate naively again. The total complexity is O(q√n). My solution with this algorithm ACs the problem.

## 9. CHNBGMT

Unfortunately, I only got the first 3 subtasks of this problem. This problem is similar to this one, except it's harder. The solution for that problem uses the Lindström–Gessel–Viennot lemma. We can apply that lemma to this problem as well. You can see how to apply the lemma from the old CF problem. Now, the problem reduces to finding the number of ways to go from a_i to b_j for 1 ≤ i, j ≤ 2, and finding the determinant of the resulting 2 by 2 matrix.

For subtasks 1 and 2, M and N are small, so we can find these values using a simple dp. In particular, we can use something like dp[i][j][k] = number of ways to get to (i, j) passing through exactly k carrots. Since M, N ≤ 60, this solution can easily pass.

For subtask 3, N, M ≤ 10^5. However, we are given that C = 0. For N = 2 or M = 2 the answer can be calculated manually. So, from now onwards, we assume M, N ≥ 3. Then finding the values is equivalent to computing binomial coefficients (the details are trivial). So, all that remains is to know how to compute binomial coefficients mod MOD, where MOD ≤ 10^9 is not guaranteed to be prime. Firstly, the obvious step is to factor MOD into its prime factors. Thus, we can compute the answer mod all the prime powers that are factors of MOD and combine the results with the Chinese Remainder Theorem. How do we find the result mod p^k? This is trivial: we just need to store v_p of each number (the exponent of p in it) and the remainder of the number mod p^k after dividing out all factors of p. Now, computing a modular inverse mod p^k, assuming that the value is not divisible by p, can be done using Euler's theorem, the same way we find a modular inverse mod p (a small sketch follows below). I would like to know how to get the other subtasks, where C > 0.
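For that last step, here is a minimal sketch (illustrative helper names, not from the editorial) of modular exponentiation and the Euler-theorem inverse modulo a prime power, using φ(p^k) = p^k − p^{k−1}:

```cpp
// Fast exponentiation: b^e mod m. With m up to ~1e9, every intermediate
// product fits comfortably in a 64-bit integer.
long long mod_pow(long long b, long long e, long long m) {
    long long r = 1 % m;
    b %= m;
    for (; e > 0; e >>= 1) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
    }
    return r;
}

// Inverse of a modulo pk = p^k, valid when gcd(a, p) = 1: by Euler's
// theorem a^phi(pk) = 1 (mod pk), so a^(phi(pk) - 1) is the inverse,
// where phi(p^k) = p^k - p^(k-1) = pk - pk / p.
long long inv_mod_prime_power(long long a, long long p, long long pk) {
    return mod_pow(a % pk, pk - pk / p - 1, pk);
}
```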
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8077941536903381, "perplexity": 335.6705620626613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463605485.49/warc/CC-MAIN-20170522171016-20170522191016-00541.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/particle-confined-rigid-one-dimensional-box-length-10-rm-fm-energy-level-en-329-mev-adjace-q2787266
A particle confined in a rigid one-dimensional box of length 10 fm has an energy level E_n = 32.9 MeV and an adjacent energy level E_{n+1} = 51.4 MeV.

Part A: Determine the values of n and n + 1.

Choices:
n = 4 and n + 1 = 5
n = 7 and n + 1 = 8
n = 3 and n + 1 = 4
n = 2 and n + 1 = 3

What is the wavelength of a photon emitted in the n + 1 → n transition?
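A worked check of the intended values, assuming the standard infinite-well relation $E_n \propto n^2$ and the usual constant $hc \approx 1240\ \text{MeV}\cdot\text{fm}$:

$$\frac{E_{n+1}}{E_n} = \left(\frac{n+1}{n}\right)^2 = \frac{51.4}{32.9} \approx 1.56 \approx \left(\frac{5}{4}\right)^2,$$

so n = 4 and n + 1 = 5. For the emitted photon,

$$\lambda = \frac{hc}{E_5 - E_4} \approx \frac{1240\ \text{MeV}\cdot\text{fm}}{18.5\ \text{MeV}} \approx 67\ \text{fm}.$$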
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9587801694869995, "perplexity": 3717.973953290185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500815991.16/warc/CC-MAIN-20140820021335-00035-ip-10-180-136-8.ec2.internal.warc.gz"}
https://searxiv.org/search?author=H.%20Atmacan
### Results for "H. Atmacan" total 80173took 0.23s Evidence for the decay $B^{0}\to ηη$Sep 12 2016We report a search for $B^{0}\to \eta \eta$ with a data sample corresponding to an integrated luminosity of $698 \,{\rm fb}^{-1}$ containing $753 \times 10^{6}$ $B\bar{B}$ pairs collected at the $\Upsilon(4S)$ resonance with the Belle detector at the ... More Measurement of the $τ$ lepton polarization in the decay ${\bar B} \rightarrow D^* τ^- {\bar ν_τ}$Aug 23 2016We report the first measurement of the $\tau$ lepton polarization in the decay ${\bar B} \rightarrow D^* \tau^- {\bar\nu_{\tau}}$ as well as a new measurement of the ratio of the branching fractions $R(D^{*}) = \mathcal{B}({\bar B} \rightarrow D^* \tau^- ... More Precise determination of the CKM matrix element$\left| V_{cb}\right|$with$\bar B^0 \to D^{*\,+} \, \ell^- \, \bar ν_\ell$decays with hadronic tagging at BelleFeb 06 2017Feb 14 2017The precise determination of the CKM matrix element$\left| V_{cb}\right|$is important for carrying out tests of the flavour sector of the Standard Model. In this article we present a preliminary analysis of the$\bar B^0 \to D^{*\,+} \, \ell^- \, \bar ... More Study of χ_{bJ}(1P) Properties in the Radiative Υ(2S) DecaysJun 03 2016We report a study of radiative decays of \chi_{bJ}(1P)(J=0,1,2) mesons into 74 hadronic final states comprising charged and neutral pions, kaons, protons; out of these, 41 modes are observed with at least 5 standard deviation significance. Our measurements ... More Measurement of the branching ratio of $\bar{B}^0 \rightarrow D^{*+} τ^- \barν_τ$ relative to $\bar{B}^0 \rightarrow D^{*+} \ell^- \barν_{\ell}$ decays with a semileptonic tagging methodMar 22 2016We report a measurement of ratio ${\cal R}(D^*) = {\cal B}(\bar{B}^0 \rightarrow D^{*+} \tau^- \bar{\nu}_{\tau})/{\cal B}(\bar{B}^0 \rightarrow D^{*+} \ell^- \bar{\nu}_{\ell})$, where $\ell$ denotes an electron or a muon. The results are based on a data ... More Measurement of CKM Matrix Element $|V_{cb}|$ from $\bar{B} \to D^{*+} \ell^{-} \barν_\ell$Sep 10 2018Nov 19 2018We present a new measurement of the CKM matrix element $|V_{cb}|$ from $B^{0} \rightarrow D^{*}\ell \nu$ decays, reconstructed with full Belle data set ($711 \, \rm fb^{-1}$). Two form factor parameterisations, based on work by the CLN and BGL groups, ... More Measurements of branching fraction and $CP$ asymmetry of the $\bar{B}^{0}(B^{0})\to K^{0}_{S}K^{\mp}π^{\pm}$ decay at BelleJul 18 2018Sep 03 2018We report the measurement of the branching fraction and $CP$ asymmetry for the $\bar{B}^{0}(B^{0})\to K^{0}_{S}K^{\mp}\pi^{\pm}$ decay. The analysis is performed on a data sample of 711 $\rm{fb}^{-1}$ collected at the $\Upsilon(4S)$ resonance with the ... More Study of charmless decays $B^{\pm} \to K^{0}_{S} K^{0}_{S} h^{\pm}$ ($h=K,π$) at BelleAug 01 2018We report a search for charmless hadronic decays of charged $B$ mesons to the final states $K^{0}_{S} K^{0}_{S} K^{\pm}$ and $K^{0}_{S} K^{0}_{S} \pi^{\pm}$ . The results are based on a $711 {fb}^{-1}$ data sample that contains $772 \times 10^6$ $B \bar{B}$ ... More Evidence for the h_b(1P) meson in the decay Upsilon(3S) --> pi0 h_b(1P)Feb 22 2011Oct 17 2011Using a sample of 122 million Upsilon(3S) events recorded with the BaBar detector at the PEP-II asymmetric-energy e+e- collider at SLAC, we search for the $h_b(1P)$ spin-singlet partner of the P-wave chi_{bJ}(1P) states in the sequential decay Upsilon(3S) ... 
More Search for hadronic decays of a light Higgs boson in the radiative decay Upsilon --> gamma A0Aug 17 2011We search for hadronic decays of a light Higgs boson (A0) produced in radiative decays of an Upsilon(2S) or Upsilon(3S) meson, Upsilon --> gamma A0. The data have been recorded by the BABAR experiment at the Upsilon(3S) and Upsilon(2S) center of mass ... More Emergent states in dense systems of active rods: from swarming to turbulenceApr 02 2012Dense suspensions of self-propelled rod-like particles exhibit a fascinating variety of non-equilibrium phenomena. By means of computer simulations of a minimal model for rigid self-propelled colloidal rods with variable shape we explore the generic diagram ... More Aggregation of self-propelled colloidal rods near confining wallsSep 03 2008Non-equilibrium collective behavior of self-propelled colloidal rods in a confining channel is studied using Brownian dynamics simulations and dynamical density functional theory. We observe an aggregation process in which rods self-organize into transiently ... More The Belle II Physics BookAug 31 2018Oct 24 2018We present the physics program of the Belle II experiment, located on the intensity frontier SuperKEKB e+e- collider. Belle II collected its first collisions in 2018, and is expected to operate for the next decade. It is anticipated to collect 50/ab of ... More Rhythmic cluster generation in strongly driven colloidal dispersionsJun 28 2006We study the response of a nematic colloidal dispersion of rods to a driven probe particle which is dragged with high speed through the dispersion perpendicular to the nematic director. In front of the dragged particle, clusters of rods are generated ... More Superradiant cascade emissions in an atomic ensemble via four-wave mixingJan 21 2015May 04 2015We investigate superradiant cascade emissions from an atomic ensemble driven by two-color classical fields. The correlated pair of photons (signal and idler) is generated by adiabatically driving the system with large-detuned light fields via four-wave ... More Spectral analysis for cascade-emission-based quantum communication in atomic ensemblesMar 11 2014The ladder configuration of atomic levels provides a source for telecom photons (signal) from the upper atomic transition. \ For rubidium and cesium atoms, the signal field has the range around 1.3-1.5 $\mu$m that can be coupled to an optical fiber and ... More Positive-P phase space method simulation in superradiant emission from a cascade atomic ensembleJan 12 2012The superradiant emission properties from an atomic ensemble with cascade level configuration is numerically simulated. The correlated spontaneous emissions (signal then idler fields) are purely stochastic processes which are initiated by quantum fluctuations. ... More AdS/CFT correspondence in the Euclidean contextNov 02 2006May 23 2007We study two possible prescriptions for AdS/CFT correspondence by means of functional integrals. The considerations are non-perturbative and reveal certain divergencies which turn out to be harmless, in the sense that reflection-positivity and conformal ... More Correlation Energy Estimators based on Møller-Plesset Perturbation TheoryMar 05 1996Some methods for the convergence acceleration of the M{\o}ller-Plesset perturbation series for the correlation energy are discussed. The order-by-order summation is less effective than the Feenberg series. The latter is obtained by renormalizing the unperturbed ... 
More N=8 matter coupled AdS_3 supergravitiesJun 18 2001Following the recent construction of maximal (N=16) gauged supergravity in three dimensions, we derive gauged D=3, N=8 supergravities in three dimensions as deformations of the corresponding ungauged theories with scalar manifolds SO(8,n)/(SO(8)x SO(n)). ... More Maximal gauged supergravity in three dimensionsOct 11 2000Jan 07 2001We construct maximally supersymmetric gauged N=16 supergravity in three dimensions, thereby obtaining an entirely new class of AdS supergravities. These models are not derivable from any known higher-dimensional theory, indicating the existence of a new ... More Whitham Prepotential and SuperpotentialDec 30 2003Jan 24 2004N=2 supersymmetric U(N) Yang-Mills theory softly broken to N=1 by the superpotential of the adjoint scalar fields is discussed from the viewpoint of the Whitham deformation theory for prepotential. With proper identification of the superpotential we derive ... More Surface anisotropy in nanomagnets: transverse or Néel ?Jul 17 2003Mar 31 2004Through the hysteresis loop and magnetization spatial distribution we study and compare two models for surface anisotropy in nanomagnets: a model with transverse anisotropy axes and N\'eel's model. While surface anisotropy in the transverse model induces ... More String-Scale BaryogenesisMar 25 1997Mar 26 1997Baryogenesis scenarios at the string scale are considered. The observed baryon to entropy ratio, $n_B /s \sim 10^{-10}$, can be explained in these scenarios. Thermodynamic of universe with a varying dark energy componentMay 04 2015Aug 03 2015We consider a FRW universe filled by a dark energy candidate together with other possible sources which may include the baryonic and non-baryonic matters. Thereinafter, we consider a situation in which the cosmos sectors do not interact with each other. ... More Analysis of the thermal cross section of the capture reaction 13C(n,gamma)14COct 02 1997We investigate the thermal cross section of the reaction 13C(n,gamma)14}C which takes place in the helium burning zones of red giant star as well as in the nucleosynthesis of Inhomogeneous Big Bang models. We find that we can reproduce the experimentally ... More Erroneous solution of three-dimensional (3D) simple orthorhombic Ising latticesSep 04 2012Jun 18 2013The first paper is an invited comment on arXiv:1110.5527 presented at Hypercomplex Seminar 2012 and on sixteen earlier published papers by Zhidong Zhang and Norman H. March. All these works derive from an erroneous solution of the three-dimensional Ising ... More Rejoinder on "Conjectures on exact solution of three-dimensional (3D) simple orthorhombic Ising lattices"Jan 19 2009Mar 30 2009It is shown that the arguments in the reply of Z.-D. Zhang (arXiv:0812.0194) to the comment arXiv:0811.1802 defending his conjectures in arXiv:0705.1045 are invalid. His conjectures have been thoroughly disproved. The two-fermion vector potential of constraint theory from Feynman diagramsOct 16 1995The relativistic fermion-antifermion bound state vector potential of constraint theory is calculated, in perturbation theory, by means of the Lippmann-Schwinger type equation that relates it to the scattering amplitude. Leading contributions of n-photon ... More Energy and decay width of the pi-K atomMay 24 2006The energy and decay width of the pi-K atom are evaluated in the framework of the quasipotential-constraint theory approach. 
The main electromagnetic and isospin symmetry breaking corrections to the lowest-order formulas for the energy shift from the ... More Molecular dynamics simulation of aging in amorphous silicaDec 06 1999By means of molecular dynamics simulations we examine the aging process of a strong glass former, a silica melt modeled by the BKS potential. The system is quenched from a temperature above to one below the critical temperature, and the potential energy ... More An indefinite metric model for interacting quantum fields with non-stationary background gravitationAug 25 2004We consider a relativistic Ansatz for the vacuum expectation values (VEVs) of a quantum field on a globally hyperbolic space-time which is motivated by certain Euclidean field theories. The Yang-Feldman asymptotic condition w.r.t. a "in"-field in a quasi-free ... More Series Prediction based on Algebraic ApproximantsJul 12 2011It is described how the Hermite-Pad\'e polynomials corresponding to an algebraic approximant for a power series may be used to predict coefficients of the power series that have not been used to compute the Hermite-Pad\'e polynomials. A recursive algorithm ... More Boundary blow-up solutions of elliptic equations involving regional fractional LaplacianFeb 09 2016In this paper, we study existence of boundary blow-up solutions for elliptic equations involving regional fractional Laplacian. We also discuss the optimality of our results. A 5D noncompact Kaluza -Klein cosmology in the presence of Null perfect fluidMay 18 2010Jun 08 2011For the description of the early inflation, and acceleration expansion of the Universe, compatible with observational data, the 5D noncompact Kaluza--Klein cosmology is investigated. It is proposed that the 5D space is filled with a null perfect fluid, ... More Schwinger-Dyson and Large $N_{c}$ Loop Equation for Supersymmetric Yang-Mills TheoryApr 04 1996We derive an infinite sequence of Schwinger-Dyson equations for $N=1$ supersymmetric Yang-Mills theory. The fundamental and the only variable employed is the Wilson-loop geometrically represented in $N=1$ superspace: it organizes an infinite number of ... More On pseudo B-Weyl operators and generalized Drazin invertibility for operator matricesMar 23 2015We introduce a new class which generalizes the class of B-Weyl operators. We say that $T\in L(X)$ is pseudo B-Weyl if $T=T_1\oplus T_2$ where $T_1$ is a Weyl operator and $T_2$ is a quasi-nilpotent operator. We show that the corresponding pseudo B-Weyl ... More The infinite mass limit of the two-particle Green's function in QEDApr 07 1997The behavior of the two-particle Green's function in QED is analyzed in the limit when one of the particles becomes infinitely massive. It is found that the dependences of the Green's function on the relative times of the ingoing and outgoing particles ... More Incorporation of anomalous magnetic moments in the two-body relativistic wave equations of constraint theoryJun 19 1996Using a Dirac-matrix substitution rule, applied to the electric charge, the anomalous magnetic moments of fermions are incorporated in local form in the two-body relativistic wave equations of constraint theory. The structure of the resulting potential ... More The Early History of the Integrable Chiral Potts Model and the Odd-Even ProblemNov 26 2015Jan 18 2016In the first part of this paper I shall discuss the round-about way of how the integrable chiral Potts model was discovered about 30 years ago. 
As there should be more higher-genus models to be discovered, this might be of interest. In the second part ... More Supereigenvalue Model and Dijkgraaf-Vafa ProposalApr 22 2003We present a variant of the supereigenvalue model proposed before by Alvarez-Gaume, Itoyama, Manes, and Zadra. This model derives a set of three planar loop equations which takes the same form as the set of three anomalous Ward-Takahashi identities on ... More Scalar Levin-Type Sequence TransformationsMay 22 2000Sequence transformations are important tools for the convergence acceleration of slowly convergent scalar sequences or series and for the summation of divergent series. Transformations that depend not only on the sequence elements or partial sums $s_n$ ... More Entropy of entanglement in continuous frequency space of the biphoton state from multiplexed cold atomic ensemblesJan 05 2016We consider a scheme of multiplexed cold atomic ensembles that generate a frequency-entangled biphoton state with controllable entropy of entanglement. The biphoton state consists of a telecommunication photon (signal) immediately followed by an infrared ... More On K(E_9)Jul 08 2004We study the maximal compact subgroup K(E_9) of the affine Lie group E_9(9) and its on-shell realization as an R symmetry of maximal N=16 supergravity in two dimensions. We first give a rigorous definition of the group K(E_9), which lives on the double ... More Compact and Noncompact Gauged Maximal Supergravities in Three DimensionsMar 06 2001Apr 21 2001We present the maximally supersymmetric three-dimensional gauged supergravities. Owing to the special properties of three dimensions -- especially the on-shell duality between vector and scalar fields, and the purely topological character of (super)gravity ... More A nontrivial solvable noncommutative φ^3 model in 4 dimensionsMar 07 2006May 24 2006We study the quantization of the noncommutative selfdual \phi^3 model in 4 dimensions, by mapping it to a Kontsevich model. The model is shown to be renormalizable, provided one additional counterterm is included compared to the 2-dimensional case which ... More Renormalization of the noncommutative phi^3 model through the Kontsevich modelDec 16 2005We point out that the noncommutative selfdual phi^3 model can be mapped to the Kontsevich model, for a suitable choice of the eigenvalues in the latter. This allows to apply known results for the Kontsevich model to the quantization of the field theory, ... More Transition Amplitudes within the Stochastic Quantization SchemeSep 30 1993Quantum mechanical transition amplitudes are calculated within the stochastic quantization scheme for the free nonrelativistic particle, the harmonic oscillator and the nonrelativistic particle in a constant magnetic field; we close with free Grassmann ... More Independent Component Analysis of Spatiotemporal ChaosMay 13 2005Two types of spatiotemporal chaos exhibited by ensembles of coupled nonlinear oscillators are analyzed using independent component analysis (ICA). For diffusively coupled complex Ginzburg-Landau oscillators that exhibit smooth amplitude patterns, ICA ... More Pentaquark $Θ^+$ in nuclear matter and $Θ^+$ hypernucleiOct 17 2004Jul 05 2005We study the properties of the $\Theta^+$ in nuclear matter and $\Theta^+$ hypernuclei within the quark mean-field (QMF) model, which has been successfully used for the description of ordinary nuclei and $\Lambda$ hypernuclei. With the assumption that ... 
More Gauge transformations in relativistic two-particle constraint theorySep 16 1996Using connection with quantum field theory, the infinitesimal covariant abelian gauge transformation laws of relativistic two-particle constraint theory wave functions and potentials are established and weak invariance of the corresponding wave equations ... More Relativistic effects in the pionium lifetimeJun 23 1997The pionium decay width is evaluated in the framework of chiral perturbation theory and the relativistic bound state formalism of constraint theory. Corrections of order O(\alpha) are calculated with respect to the conventional lowest-order formula, in ... More Comment on "Conjectures on exact solution of three-dimensional (3D) simple orthorhombic Ising lattices" [arXiv:0705.1045]Nov 12 2008Nov 22 2008It is shown that a recent article by Z.-D. Zhang [arXiv:0705.1045] is in error and violates well-known theorems. A Simple Method to Reduce Thermodynamic Derivatives by ComputerJan 09 2014Studies in thermodynamics often require the reduction of some first or second order partial derivatives in terms of a smaller basic set. A simple algorithm to perform such a reduction is presented here, together with a review of earlier related works. ... More Variational Calculation of Effective Classical Potential at $T \neq 0$ to Higher OrdersApr 16 1995Using the new variational approach proposed recently for a systematic improvement of the locally harmonic Feynman-Kleinert approximation to path integrals we calculate the partition function of the anharmonic oscillator for all temperatures and coupling ... More Superradiant laser: Effect of long-ranged dipole-dipole interactionSep 02 2016We theoretically investigate the effect of long-ranged dipole-dipole interaction (LRDDI) on a superradiant laser (SL). This effect is induced from the atom-photon interaction in the dissipation process. In the bad-cavity limit usually performed to initiate ... More Cooperative single-photon subradiant states in a three-dimensional atomic arrayJun 21 2016We propose a complete superradiant and subradiant states that can be manipulated and prepared in a three-dimensional atomic array. These subradiant states can be realized by absorbing a single photon and imprinting the spatially-dependent phases on the ... More Effects of Spin Fluctuations in Quasi-One-Dimensional Organic SuperconductorsMay 05 1999We study the electronic states of quasi-one-dimensional organic conductors using the single band Hubbard model at half-filling. We treat the effects of the on-site Coulomb interaction by the fluctuation-exchange (FLEX) method, and calculate the phase ... More Application of the Limit Cycle Model to Star Formation Histories in Spiral Galaxies: Variation among Morphological TypesMay 04 2000We propose a limit-cycle scenario of star formation history for any morphological type of spiral galaxies. It is known observationally that the early-type spiral sample has a wider range of the present star formation rate (SFR) than the late-type sample. ... More On the extrapolation of perturbation seriesDec 21 2002We discuss certain special cases of algebraic approximants that are given as zeroes of so-called "effective characteristic polynomials" and their generalization to a multiseries setting. These approximants are useful for the convergence acceleration or ... 
More The size-extensitivity of correlation energy estimators based on effective characteristic polynomialsApr 08 1997Estimators $\Pi n$ for the correlation energy can be computed as roots of effective characteristic polynomials of degree $n$. The coefficients of these polynomials are derived from the terms of the perturbation series of the energy. From a fourth-order ... More Integrability and Canonical Structure of d=2, N=16 SupergravityApr 23 1998Jul 01 1998The canonical formulation of d=2, N=16 supergravity is presented. We work out the supersymmetry generators (including all higher order spinor terms) and the N=16 superconformal constraint algebra. We then describe the construction of the conserved non-local ... More A focusable, convergent fast-electron beam from ultra-high-intensity laser-solid interactionsJan 29 2015A novel scheme for the creation of a convergent, or focussing, fast-electron beam generated from ultra-high-intensity laser-solid interactions is described. Self-consistent particle-in-cell simulations are used to demonstrate the efficacy of this scheme ... More Inherent global stabilization of unstable local behavior in coupled map latticesJul 05 2004The behavior of two-dimensional coupled map lattices is studied with respect to the global stabilization of unstable local fixed points without external control. It is numerically shown under which circumstances such inherent global stabilization can ... More Quark mean field model for nuclear matter and finite nucleiNov 15 1999We study nuclear matter and finite nuclei in terms of the quark mean field (QMF) model, in which we describe the nucleon using the constituent quark model. The meson mean fields, in particular the sigma meson, created by other nucleons act on quarks inside ... More The relativistic two-body potentials of constraint theory from summation of Feynman diagramsFeb 07 1996The relativistic two-body potentials of constraint theory for systems composed of two spin-0 or two spin-1/2 particles are calculated, in perturbation theory, by means of the Lippmann-Schwinger type equation that relates them to the scattering amplitude. ... More Theory of the Hall Coefficient and the Resistivity on the Layered Organic Superconductors κ-(BEDT-TTF)Nov 20 2000Feb 22 2001In the organic superconducting \kappa-(BEDT-TTF) compounds, various transport phenomena exhibit striking non-Fermi liquid behaviors, which should be the important clues to understanding the electronic state of this system. Especially, the Hall coefficient ... More Thermodynamical description of modified generalized Chaplygin gas model of dark energyApr 10 2015May 17 2016We consider a universe filled by a modified generalized Chaplygin gas together with a pressureless dark matter component. We get a thermodynamical interpretation for the modified generalized Chaplygin gas confined to the apparent horizon of FRW universe, ... More Stabilization of causally and non-causally coupled map latticesJul 07 2004Two-dimensional coupled map lattices have global stability properties that depend on the coupling between individual maps and their neighborhood. The action of the neighborhood on individual maps can be implemented in terms of "causal" coupling (to spatially ... More Asymptotic function for multi-growth surfaces using power-law noiseNov 06 2002Numerical simulations are used to investigate the multiaffine exponent $\alpha_q$ and multi-growth exponent $\beta_q$ of ballistic deposition growth for noise obeying a power-law distribution. 
The simulated values of $\beta_q$ are compared with the asymptotic ... More Study of $Λ$ hypernuclei in the quark mean field modelApr 24 2001Jul 11 2001We extend the quark mean field model to the study of $\Lambda$ hypernuclei. Without adjusting parameters, the properties of $\Lambda$ hypernuclei can be described reasonably well. The small spin-orbit splittings for $\Lambda$ in hypernuclei are achieved, ... More Comment on Mathematical structure of the three-dimensional (3D) Ising model'Jul 06 2013The review paper by Zhang Zhi-Dong contains many errors and is based on several earlier works that are equally wrong. Scalings between Physical and their Observationally related Quantities of Merger RemnantsSep 07 2005We present scaling relations between the virial velocity (V) and the one-dimensional central velocity dispersion (Sig0); the gravitational radius (Rv) and the effective radius (Re); and the total mass (M) and the luminous mass (ML) found in N-body simulations ... More B2BII - Data conversion from Belle to Belle IISep 28 2018We describe the conversion of simulated and recorded data by the Belle experiment to the Belle~II format with the software package \texttt{b2bii}. It is part of the Belle~II Analysis Software Framework. This allows the validation of the analysis software ... More Characterizing arbitrarily slow convergence in the method of alternating projectionsOct 12 2007In 1997, Bauschke, Borwein, and Lewis have stated a trichotomy theorem that characterizes when the convergence of the method of alternating projections can be arbitrarily slow. However, there are two errors in their proof of this theorem. In this note, ... More Weak Quantum Theory: Complementarity and Entanglement in Physics and BeyondApr 23 2001Nov 21 2001The concepts of complementarity and entanglement are considered with respect to their significance in and beyond physics. A formally generalized, weak version of quantum theory, more general than ordinary quantum theory of material systems, is outlined ... More CP Asymmetry for Inclusive Decay $B \to X_d + γ$ in the Minimal Supersymmetric Standard ModelJun 02 1999Aug 20 1999We study the inclusive rare decay $B\to X_d+\gamma$ in the supergravity inspired Minimal Supersymmetric Standard Model and compute the CP-asymmetry in the decay rates. We show that there exist two phenomenologically acceptable sets of SUSY parameters: ... More Abrupt Emergence of Pressure-Induced Superconductivity of 34 K in SrFe2As2: A Resistivity Study under PressureOct 27 2008Nov 19 2008We report resistivity measurement under pressure in single crystals of SrFe_2As_2, which is one of the parent materials of Fe-based superconductors. The structural and antiferromagnetic (AFM) transition of T_0 = 198 K at ambient pressure is suppressed ... More Feynman graphs for non-Gaussian measuresJan 12 2005Nov 30 2006Partition- and moment functions for a general (not necessarily Gaussian) functional measure that is perturbed by a Gibbs factor are calculated using generalized Feynman graphs. From the graphical calculus, a new notion of Wick ordering arises, that coincides ... More Feynman graph representation of the perturbation series for general functional measuresAug 20 2004A representation of the perturbation series of a general functional measure is given in terms of generalized Feynman graphs and -rules. The graphical calculus is applied to certain functional measures of L\'evy type. A graphical notion of Wick ordering ... 
More Asymptotic Analysis of High-Contrast Phononic Crystals and a Criterion for the Band-Gap OpeningSep 30 2006We investigate the band-gap structure of the frequency spectrum for elastic waves in a high-contrast, two-component periodic elastic medium. We consider two-dimensional phononic crystals consisting of a background medium which is perforated by an array ... More The System of Multi Color-flux-tubes in the Dual Ginzburg-Landau TheoryFeb 27 1996We study the system of multi color-flux-tubes in terms of the dual Ginzburg -Landau theory. We consider two ideal cases, where the directions of all the color-flux-tubes are the same in one case and alternative in the other case for neighboring flux-tubes. ... More Search for $η$-bound NucleiAug 17 2010The $\eta$ meson can be bound to atomic nuclei. Experimental search is discussed in the form of final state interaction for the reactions $dp\to{^3\text{He}}\eta$ and $dd\to{^4\text{He}}\eta$. For the latter case tensor polarized deuterons were used in ... More Physics at COSYNov 21 2004The COSY accelerator in J\'ulich is presented together with its internal and external detectors. The physics programme performed recently is discussed with emphasis on strangeness physics. Vortex Origin of Tricritical Point in Ginzburg-Landau TheorySep 16 2005Motivated by recent experimental progress in the critical regime of high-$T_c$ superconductors we show how the tricritical point in a superconductor can be derived from the Ginzburg-Landau theory as a consequence of vortex fluctuations. Our derivation ... More Vortex Line Nucleation of First-Order Transition U(1)-Symmetric Field SystemsNov 20 1998We show that in field systems with U(1)-symmetry, first-order transitions are nucleated by vortex lines, not bubbles, thus calling for a reinvestigation of the Kibble mechanism for the phase transition of the early universe. Hubbard-Stratonovich Transformation: Successes, Failure, and CureApr 27 2011We recall the successes of the Hubbard-Stratonovich Transformation (HST) of many-body theory, point out its failure to cope with competing channels of collective phenomena and show how to overcome this by Variational Perturbation Theory. That yields exponentially ... More Status and perspectives of sin2alpha measurementsJul 23 2003In the neutral B meson system, it is possible to measure the CKM angle alpha using the decay mode b -> u ubar d in the presence of pollution from gluonic b -> d penguin decays. Here the recent status of the measurements of CP-violating asymmetry parameters ... More Self--Dual Supergravity and Supersymmetric Yang--Mills Coupled to Green--Schwarz SuperstringNov 10 1992Nov 10 1992We present the {\it canonical} set of superspace constraints for self-dual supergravity, a self-dual'' tensor multiplet and a self-dual Yang-Mills multiplet with $~N=1~$ supersymmetry in the space-time with signature $(+,+,-,-)$. For this set of constraints, ... More Transmission delay times of localized wavesMar 05 2001We investigate the effects of wave localization on the delay time tau (frequency sensitivity of the scattering phase shift) of a wave transmitted through a disordered wave guide. Localization results in a separation tau=chi+chi' of the delay time into ... More Didactic derivation of the special theory of relativity from the Klein-Gordon equationJun 27 2014Jul 31 2014We present a didactic derivation of the special theory of relativity in which Lorentz transformations are discovered' as symmetry transformations of the Klein-Gordon equation. 
The interpretation of Lorentz boosts as transformations to moving inertial ... More Novel exact charged mass distribution in classical field theory and the notion of point-like elementary electric chargeAug 20 2014Feb 23 2016The existence of stable, charged elementary 'point particles' still is a basically unsolved puzzle in theoretical physics. E.g., in quantum electrodynamics the infinite self-energy of the Dirac point electron is 'swept under the carpet' by renormalizing ... More Valuation of path-dependent American options using a Monte Carlo approachJan 12 1998It is shown how to obtain accurate values for American options using Monte Carlo simulation. The main feature of the novel algorithm consists of tracking the boundary between exercise and hold regions via optimization of a certain payoff function. We ... More Recent studies of Charmonium Decays at CLEONov 13 2007Nov 16 2007Recent results on Charmonium decays are reviewed which includes two-, three- and four-body decays of $\chi_{cJ}$ states, observations of Y(4260) through $\pi\pi J/\psi$ transitions, precise measurements of $M(D^0)$, $M(\eta)$ as well as \$\mathcal{B}(\eta\to ... More The Fundamental Constants in PhysicsFeb 17 2009We discuss the fundamental constants of physics in the Standard Model and possible changes of these constants on the cosmological time scale. The Grand Unification of the strong, electromagnetic and weak interactions implies relations between the time ... More Flavor Symmetries, Neutrino Masses and Neutrino MixingFeb 07 2008We discuss the neutrino mixing, using the texture 0 mass matrices, which work very well for the quarks. The solar mixing angle is directly linked to the mass ratio of the first two neutrinos. The neutrino masses are hierarchical, but the mass ratios turn ... More In-Medium Similarity Renormalization Group for Closed and Open-Shell NucleiJul 23 2016We present a pedagogical introduction to the In-Medium Similarity Renormalization Group (IM-SRG) framework for ab initio calculations of nuclei. The IM-SRG performs continuous unitary transformations of the nuclear many-body Hamiltonian in second-quantized ... More On gravitational dressing of renormalization group beta-functions beyond lowest order of perturbation theoryOct 12 1994Based on considerations in conformal gauge I derive up to nextleading order a relation between the coefficients of beta-functions in 2D renormalizable field theories before and after coupling to gravity. The result implies a coupling constant dependence ... More The Density Profile of Massive Galaxy Clusters from Weak LensingOct 20 2003We use measurements of weak gravitational shear around a sample of massive galaxy clusters at z = 0.3 to constrain their average radial density profile. Our results are consistent with the density profiles of CDM halos in numerical simulations and inconsistent ... More Scaling of N-body calculationsDec 14 2000We report results of collisional N-body simulations aimed to study the N-dependance of the dynamical evolution of star clusters. Our clusters consist of equal-mass stars and are in virial equilibrium. Clusters moving in external tidal fields and clusters ... More
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9187623858451843, "perplexity": 2315.385328611132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247515149.92/warc/CC-MAIN-20190222094419-20190222120419-00206.warc.gz"}
https://ai.nuhil.net/vector-and-matrix/scaler-vs-vector
# Vector

A vector has a magnitude and a direction. The length of the line shows its magnitude and the arrowhead points in the direction. We can add two vectors by joining them head-to-tail, and it doesn't matter in which order we add them; we get the same result. We can also subtract one vector from another: first, we reverse the direction of the vector we want to subtract, then add them as usual. For example, with $a = (8,13)$ and $b = (26,7)$: $c = a + b = (8, 13) + (26, 7) = (8+26, 13+7) = (34, 20)$

# Magnitude of a Vector

$|a| = \sqrt{x^2 + y^2}$

For example, the magnitude of the vector $b = (6,8)$ is $|b| = \sqrt{6^2 + 8^2} = \sqrt{36+64} = \sqrt{100} = 10$

# Multiplying a Vector by a Vector

• Dot Product - Result is a Scalar
• $a \cdot b = \lvert a \rvert \times \lvert b \rvert \times \cos(\theta)$
• Multiply the length of a times the length of b, then multiply by the cosine of the angle between a and b.
• Or, we can use the formula $a \cdot b = a_x \times b_x + a_y \times b_y$
• Multiply the x's, multiply the y's, then add.
• Cross Product - Result is a Vector
• The cross product a × b of two vectors is another vector that is at right angles to both.
• $a \times b = \lvert a \rvert \times \lvert b \rvert \times \sin(\theta) \times n$, where $n$ is the unit vector at right angles to both $a$ and $b$.
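To make the formulas above concrete, here is a minimal Python sketch (standard library only; the function names are illustrative, not from any particular package) that computes the vector sum, magnitude, dot product, and 3-D cross product for the example values used above:

```python
import math

def add(a, b):
    # Component-wise sum: (a_x + b_x, a_y + b_y, ...)
    return tuple(x + y for x, y in zip(a, b))

def magnitude(a):
    # |a| = sqrt(a_x^2 + a_y^2 + ...)
    return math.sqrt(sum(x * x for x in a))

def dot(a, b):
    # a . b = a_x*b_x + a_y*b_y + ...
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # 3-D cross product; the result is at right angles to both a and b
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

print(add((8, 13), (26, 7)))        # (34, 20)
print(magnitude((6, 8)))            # 10.0
print(dot((1, 2, 3), (4, -5, 6)))   # 4 - 10 + 18 = 12
print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1), i.e. i x j = k
```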
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9843882918357849, "perplexity": 540.477618103704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00302.warc.gz"}
http://philpapers.org/s/Johan%20Benthem
## Works by Johan Benthem

44 found. Disambiguations: Johan Van Benthem [29], Johan Benthem [15].

1. Johan Benthem, Davide Grossi & Fenrong Liu (2014). Priority Structures in Deontic Logic. Theoria 80 (2):116-152. This article proposes a systematic application of recent developments in the logic of preference to a number of topics in deontic logic. The key junction is the well-known Hansson conditional for dyadic obligations. These conditionals are generalized by pairing them with reasoning about syntactic priority structures. The resulting two-level approach to obligations is tested first against standard scenarios of contrary-to-duty obligations, leading also to a generalization for the Kanger-Anderson reduction of deontic logic. Next, the priority framework is applied to model (...)

2. Johan Benthem (2012). The Logic of Empirical Theories Revisited. Synthese 186 (3):775-792. Logic and philosophy of science share a long history, though contacts have gone through ups and downs. This paper is a brief survey of some major themes in logical studies of empirical theories, including links to computer science and current studies of rational agency. The survey has no new results: we just try to make some things into common knowledge.

3. Johan Benthem, Nick Bezhanishvili & Ian Hodkinson (2012). Sahlqvist Correspondence for Modal Mu-Calculus. Studia Logica 100 (1-2):31-60. We define analogues of modal Sahlqvist formulas for the modal mu-calculus, and prove a correspondence theorem for them.

4. Johan Benthem & Ştefan Minică (2012). Toward a Dynamic Logic of Questions. Journal of Philosophical Logic 41 (4):633-669. Questions are triggers for explicit events of 'issue management'. We give a complete logic in dynamic-epistemic style for events of raising, refining, and resolving an issue, all in the presence of information flow through observation or communication. We explore extensions of the framework to multiagent scenarios and long-term temporal protocols. We sketch a comparison with some alternative accounts.

5. Johan Benthem & Sonja Smets (2012). New Logical Perspectives on Physics. Synthese 186 (3):615-617.

6. Johan Van Benthem (2011). Logic in a Social Setting. Episteme 8 (3):227-247. Taking Backward Induction as its running example, this paper explores avenues for a logic of information-driven social action. We use recent results on limit phenomena in knowledge updating and belief revision, procedural rationality, and a 'Theory of Play' analyzing how games are played by different agents.

7. Johan van Benthem & Fernando R. Velázquez-Quesada (2010). The Dynamics of Awareness. Synthese 177 (S1):5-27. Classical epistemic logic describes implicit knowledge of agents about facts and knowledge of other agents based on semantic information. The latter is produced by acts of observation or communication that are described well by dynamic epistemic logics. What these logics do not describe, however, is how significant information is also produced by acts of inference, and key axioms of the system merely postulate "deductive closure". In this paper, we take the view that all information is produced by acts, and hence (...)

8. Thomas Ågotnes, Johan van Benthem & Eric Pacuit (2009). Logic and Intelligent Interaction. Synthese 169 (2):219-221.

9. Johan Van Benthem (2009). The Information in Intuitionistic Logic. Synthese 167 (2):251-270. Issues about information spring up wherever one scratches the surface of logic. Here is a case that raises delicate issues of 'factual' versus 'procedural' information, or 'statics' versus 'dynamics'. What does intuitionistic logic, perhaps the earliest source of informational and procedural thinking in contemporary logic, really tell us about information? How does its view relate to its 'cousin' epistemic logic? We discuss connections between intuitionistic models and recent protocol models for dynamic-epistemic logic, as well as more general issues that emerge.

10. Johan Van Benthem, Jelle Gerbrandy, Tomohiro Hoshi & Eric Pacuit (2009). Merging Frameworks for Interaction. Journal of Philosophical Logic 38 (5):491-526. A variety of logical frameworks have been developed to study rational agents interacting over time. This paper takes a closer look at one particular interface, between two systems that both address the dynamics of knowledge and information flow. The first is Epistemic Temporal Logic (ETL) which uses linear or branching time models with added epistemic structure induced by agents' different capabilities for observing events. The second framework is Dynamic Epistemic Logic (DEL) that describes interactive processes in terms of epistemic event (...)

11. Johan van Benthem, Jelle Gerbrandy & Barteld Kooi (2009). Dynamic Update with Probabilities. Studia Logica 93 (1):67-96. Current dynamic-epistemic logics model different types of information change in multi-agent scenarios. We generalize these logics to a probabilistic setting, obtaining a calculus for multi-agent update with three natural slots: prior probability on states, occurrence probabilities in the relevant process taking place, and observation probabilities of events. To match this update mechanism, we present a complete dynamic logic of information change with a probabilistic character. The completeness proof follows a compositional methodology that applies to a much larger class of dynamic-probabilistic (...)

12. Johan Van Benthem, Patrick Girard & Olivier Roy (2009). Everything Else Being Equal: A Modal Logic for Ceteris Paribus Preferences. Journal of Philosophical Logic 38 (1):83-125. This paper presents a new modal logic for ceteris paribus preferences understood in the sense of "all other things being equal". This reading goes back to the seminal work of Von Wright in the early 1960's and has returned in computer science in the 1990's and in more abstract "dependency logics" today. We show how it differs from ceteris paribus as "all other things being normal", which is used in contexts with preference defeaters. We provide a semantic analysis and (...)

13. Johan Van Benthem (2008). The Many Faces of Interpolation. Synthese 164 (3):451-460. We present a number of, somewhat unusual, ways of describing what Craig's interpolation theorem achieves, and use them to identify some open problems and further directions.

14. Johan Van Benthem, Sujata Ghosh & Fenrong Liu (2008). Modelling Simultaneous Games in Dynamic Logic. Synthese 165 (2):247-268. We make a proposal for formalizing simultaneous games at the abstraction level of player's powers, combining ideas from dynamic logic of sequential games and concurrent dynamic logic. We prove completeness for a new system of 'concurrent game logic' CDGL with respect to finite non-determined games. We also show how this system raises new mathematical issues, and throws light on branching quantifiers and independence-friendly evaluation games for first-order logic.

15. Johan Van Benthem (2006). Epistemic Logic and Epistemology: The State of Their Affairs. Philosophical Studies 128 (1):49-76.

16. Johan Van Benthem (2006). Modal Frame Correspondences and Fixed-Points. Studia Logica 83 (1/3):133-155. Taking Löb's Axiom in modal provability logic as a running thread, we discuss some general methods for extending modal frame correspondences, mainly by adding fixed-point operators to modal languages as well as their correspondence languages. Our suggestions are backed up by some new results -- while we also refer to relevant work by earlier authors. But our main aim is advertising the perspectives, showing how modal languages with fixed-point operators are a natural medium to work with.

17. Johan Van Benthem (2005). Minimal Predicates, Fixed-Points, and Definability. Journal of Symbolic Logic 70 (3):696-712. Minimal predicates P satisfying a given first-order description φ(P) occur widely in mathematical logic and computer science. We give an explicit first-order syntax for special first-order 'PIA conditions' φ(P) which guarantees unique existence of such minimal predicates. Our main technical result is a preservation theorem showing PIA-conditions to be expressively complete for all those first-order formulas that are preserved under a natural model-theoretic operation of 'predicate intersection'. Next, we show how iterated predicate minimization on PIA-conditions yields a language MIN(FO) equal (...)

18. Johan Van Benthem (2004). What One May Come to Know. Analysis 64 (2):95-105.

19. Johan van Benthem & Fenrong Liu (2004). Diversity of Logical Agents in Games. Philosophia Scientiae 8:163-178.

20. Johan Van Benthem (2003). Logic Games Are Complete for Game Logics. Studia Logica 75 (2):183-203. Game logics describe general games through powers of players for forcing outcomes. In particular, they encode an algebra of sequential game operations such as choice, dual and composition. Logic games are special games for specific purposes such as proof or semantical evaluation for first-order or modal languages. We show that the general algebra of game operations coincides with that over just logical evaluation games, whence the latter are quite general after all. The main tool in proving this is a representation (...)

21. Johan Van Benthem, Guram Bezhanishvili & Mai Gehrke (2003). Euclidean Hierarchy in Modal Logic. Studia Logica 75 (3):327-344. For a Euclidean space ${\Bbb R}^{n}$, let $L_{n}$ denote the modal logic of chequered subsets of ${\Bbb R}^{n}$. For every n ≥ 1, we characterize $L_{n}$ using the more familiar Kripke semantics, thus implying that each $L_{n}$ is a tabular logic over the well-known modal system Grz of Grzegorczyk. We show that the logics $L_{n}$ form a decreasing chain converging to the logic $L_{\infty}$ of chequered subsets of ${\Bbb R}^{\infty}$. As a result, we obtain that $L_{\infty}$ is (...)

22. Jon Barwise & Johan van Benthem (1999). Interpolation, Preservation, and Pebble Games. Journal of Symbolic Logic 64 (2):881-903. Preservation and interpolation results are obtained for $L_{\infty\omega}$ and sublogics $\mathscr{L} \subseteq L_{\infty\omega}$ such that equivalence in $\mathscr{L}$ can be characterized by suitable back-and-forth conditions on sets of partial isomorphisms.

23. Johan van Benthem (1999). Resetting the Bounds of Logic. European Review of Philosophy 12 (4).

24. Johan van Benthem (1999). The Range of Modal Logic. Journal of Applied Non-Classical Logics 9 (2-3).

25. Hajnal Andréka, István Németi & Johan van Benthem (1998). Modal Languages and Bounded Fragments of Predicate Logic. Journal of Philosophical Logic 27 (3):217-274.

26. Johan Van Benthem (1998). Program Constructions That Are Safe for Bisimulation. Studia Logica 60 (2):311-330. It has been known since the seventies that the formulas of modal logic are invariant for bisimulations between possible worlds models -- while conversely, all bisimulation-invariant first-order formulas are modally definable. In this paper, we extend this semantic style of analysis from modal formulas to dynamic program operations. We show that the usual regular operations are safe for bisimulation, in the sense that the transition relations of their values respect any given bisimulation for their arguments. Our main result is a (...)

27. In this paper, we generalize the set-theoretic translation method for polymodal logic introduced in [11] to extended modal logics. Instead of devising an ad-hoc translation for each logic, we develop a general framework within which a number of extended modal logics can be dealt with. We first extend the basic set-theoretic translation method to weak monadic second-order logic through a suitable change in the underlying set theory that connects up in interesting ways with constructibility; then, we show how to tailor (...)

28. Johan Benthem & Jan Bergstra (1994). Logic of Transition Systems. Journal of Logic, Language and Information 3 (4):247-283. Labeled transition systems are key structures for modeling computation. In this paper, we show how they lend themselves to ordinary logical analysis (without any special new formalisms), by introducing their standard first-order theory. This perspective enables us to raise several basic model-theoretic questions of definability, axiomatization and preservation for various notions of process equivalence found in the computational literature, and answer them using well-known logical techniques (including the Compactness theorem, Saturation and Ehrenfeucht games). Moreover, we consider what happens to this (...)

29. Johan Van Benthem (1993). Modelling the Kinematics of Meaning. Proceedings of the Aristotelian Society 93:105-122.

30. Johan Benthem (1991). Language in Action. Journal of Philosophical Logic 20 (3):225-263. A number of general points behind the story of this paper may be worth setting out separately, now that we have come to the end. There is perhaps one obvious omission to be addressed right away. Although the word "information" has occurred throughout this paper, it must have struck the reader that we have had nothing to say on what information is. In this respect, our theories may be like those in physics: which do not explain what "energy" is (a notion (...)

31. Johan Benthem (1990). Categorial Grammar and Type Theory. Journal of Philosophical Logic 19 (2):115-168.

32. Johan Benthem (1989). Polyadic Quantifiers. Linguistics and Philosophy 12 (4):437-464.

33. Johan Benthem (1987). Meaning: Interpretation and Inference. Synthese 73 (3):451-470.

34. Johan Van Benthem (1986). Review: G. E. Hughes, M. J. Cresswell, A Companion to Modal Logic. Journal of Symbolic Logic 51 (3):824-826.

35. Johan Benthem (1985). Situations and Inference. Linguistics and Philosophy 8 (1):3-8.

36. Johan Benthem (1985). The Variety of Consequence, According to Bolzano. Studia Logica 44 (4):389-403. Contemporary historians of logic tend to credit Bernard Bolzano with the invention of the semantic notion of consequence, a full century before Tarski. Nevertheless, Bolzano's work played no significant rôle in the genesis of modern logical semantics. The purpose of this paper is to point out three highly original, and still quite relevant themes in Bolzano's work, being a systematic study of possible types of inference, of consistency, as well as their meta-theory. There are certain analogies with Tarski's concerns here, (...)

37. Johan Benthem (1984). Foundations of Conditional Logic. Journal of Philosophical Logic 13 (3):303-349.

38. Johan Benthem (1984). Possible Worlds Semantics: A Research Program That Cannot Fail? Studia Logica 43 (4):379-393. Providing a possible worlds semantics for a logic involves choosing a class of possible worlds models, and setting up a truth definition connecting formulas of the logic with statements about these models. This scheme is so flexible that a danger arises: perhaps, any (reasonable) logic whatsoever can be modelled in this way. Thus, the enterprise would lose its essential tension. Fortunately, it may be shown that the so-called incompleteness-examples from modal logic resist possible worlds modelling, even in the above wider (...)

39. Johan Van Benthem (1984). Analytic/Synthetic: Sharpening a Philosophical Tool. Theoria 50 (2-3):106-137.

40. Johan Van Benthem (1984). Questions About Quantifiers. Journal of Symbolic Logic 49 (2):443-466.

41. Johan Van Benthem & David Pearce (1984). A Mathematical Characterization of Interpretation Between Theories. Studia Logica 43 (3):295-303. Of the various notions of reduction in the logical literature, relative interpretability in the sense of Tarski et al. [6] appears to be the central one. In the present note, this syntactic notion is characterized semantically, through the existence of a suitable reduction functor on models. The latter mathematical condition itself suggests a natural generalization, whose syntactic equivalent turns out to be a notion of interpretability quite close to that of Ershov [1], Szczerba [5] and Gaifman [2].

42. Johan Van Benthem (1983). Logical Semantics as an Empirical Science. Studia Logica 42 (2/3):299-313. Exact philosophy consists of various disciplines scattered and separated. Formal semantics and philosophy of science are good examples of two such disciplines. The aim of this paper is to show that it is possible to find some integrating bridge topics between the two fields, and to show how insights from the one are illuminating and suggestive in the other.

43. Johan van Benthem (1983). The Logic of Natural Language. Philosophical Books 24 (2):99-102.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8025643229484558, "perplexity": 3504.5443147935266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936468536.58/warc/CC-MAIN-20150226074108-00094-ip-10-28-5-156.ec2.internal.warc.gz"}
https://math.eretrandre.org/tetrationforum/archive/index.php?thread-1033.html
# Tetration Forum

Full Version: Superroots and a generalization for the Lambert-W

I experimented a bit with the problem of finding x in equations like $y = \;^2 x$ and $y = \;^3 x$ and $y = \;^n x$ where y is given. This was mainly motivated by frequent questions in MSE and/or MO for solutions where $y=-1$ or $y=i$. What I mainly found was the requirement for a generalization of the Lambert-W (but a nicely straightforward one!) and some insight into the occurring power series. Although having now an accessible entry-point into the general problem, I did not yet find explicit, simple closed-form expressions for the occurring coefficients except when $n=2$ (but those are already well known...), so it's an open field for pattern-detection and research on radii of convergence. It is too much to write it here in this limited box, so I made a pdf-file. I upload it as attachment but put it also on my webspace, see http://go.helms-net.de/math/tetdocs/Wexz...erroot.pdf Collaboration is appreciated... Gottfried

I'm interested. Can you solve x^^(1/2) = 3? but... using the "new" formula

Conjecture 3.1 fails because the left hand side has radius going to 0. There are 2 ways to show that. On the other hand conjecture 3.1 does have meaning considering the first n derivatives of the LHS = RHS. It is a mystery how you intend to solve the cases x^^(3/2) = v though. Regards Tommy1729

(11/10/2015, 09:38 AM)tommy1729 Wrote: [ -> ]Conjecture 3.1 fails because the left hand side has radius going to 0.

Hmm, I've not yet settled everything about this in my mind. I've of course seen that with increasing n the convergence-radius of the function $\;^n W$ decreases. However, as usual, if a function can be analytically continued (beyond its radius of convergence), for instance by Euler-summation, I assume that the result is still meaningful. And we have here the possibility for Euler-summation, so I think there is a true analytic continuation. However, I don't know yet whether this can be correctly inserted in my conjecture-formula for the limit-case.

Quote: It is a mystery how you intend to solve the cases x^^(3/2) = v though.

As I understand this, this is using fractional iteration heights. As I described my exercises, I'm concerned with the unknown bases, and am using integer heights so far, not fractional heights (superroot, not superlog). Gottfried

(11/09/2015, 11:27 PM)nuninho1980 Wrote: [ -> ]I'm interested. Can you solve x^^(1/2) = 3? but... using the "new" formula

Not with this elaboration. I'm on superroots of powertowers of integer heights so far. I've given no thought so far to analysis with fractional heights, except for some lazy tries to find an interpolation-formula for the rows in table 3.1, but without easy success.... Mind you to step in for this? Gottfried

[update] perhaps this is of interest: see MSE http://math.stackexchange.com/questions/...550#133550

A lot has already been said about the superroots. We know that lim n -> oo for x_n^^n = y for y > exp(1/e) [eta] gives x_n = eta. Also all results about slog and sexp relate.

PROOF SKETCHES --- First I point out that when you have a nonzero radius, the Euler sum = analytic continuation whenever and wherever both converge. BUT analytic continuation is USUALLY NOT the correct solution. For instance x^x^x^... = x^[oo] = y has solution x = y^(1/y) IFF Dom y, range x are in the Shell-Thron region.
Clearly x = y^(1/y) is the analytic continuation, but thus false.

Tommy's lemma: for all n -> W•n(0) = 0.

Conjecture 3.1 conjectures lim n -> oo W•n(v) = -v exp(-v). Clearly the RHS has radius oo. But the algebra dictates that the radius can be at most: x^(1/x) = exp(ln(x) / x). Now v = ln(x) so x = exp(v). Therefore exp(ln(x) / x) = exp(v exp(-v)). Hence v exp(-v) = W•n(v) = -v exp(-v) => contradiction, unless for v satisfying v exp(-v) = -v exp(-v). The solution set is v = {0, oo}. So the radius of lim W•n = 0. v = 0 implies x = 1. x = 1 is within the Shell-Thron region so v = 0 is valid. QED

A second proof. Let n = oo. The fractal argument: W•n(v) = a <=> v = (b exp(b exp(...(*)) = (b exp(b exp(...(a)). So v = (b exp(*))^oo. A fractal within the Shell-Thron region. v = fixpoint [b exp(*)] ==> solve b exp(A) = A ==> A = -W(-b) => -W(-b) = v --> b = v exp(-v) = W•n(v). Similar to the previous proof; v must be 0 ==> radius = 0. QED.

To explain the fractal argument: notice (b exp(a))^[2] for fixed b and variable a is NOT equal to (a exp(a))^[2], even if we set a = b! For a = b, the difference is that in the second case we get a exp(a) exp(a exp(a)), whereas in the first case a exp(a exp(a)). That is why I use a and b and then set them equal. That looks confusing but is correct.

--- So we get 2 proofs with W•n(v) = v exp(-v). However that is only valid within the Shell-Thron region. v = ln(x) -> 0 = ln(1). Min(|ln(1) - ln(e^(-1/e))|, |ln(1) - ln(e^(1/e))|) = 1/e. So W•n(v) = v exp(-v) is valid within "radius" 1/e. And analytic continuation does not help. Hope that helps. Regards Tommy1729

I'm not sure if this should be in a different thread, but I just found a website with programming puzzles, and one of the puzzles is super-roots: http://www.checkio.org/mission/super-root/ this "mission" (programming puzzle) has a very large number of "solutions" (Python implementations) of super-roots. After you login and solve the mission, it will show you other people's solutions along with your own. I solved it using the infinite tetrate (x^x^x^...) which is probably not the fastest method, but I think it's more beautiful than using Newton's method with an unknown number of iterations until you find the right number. Most of the solutions are based on Newton's method or a similar bisection method, but I was considering going through them to see if there's any new methods we didn't know about...

(11/13/2015, 05:58 PM)andydude Wrote: [ -> ]I'm not sure if this should be in a different thread, but I just found a website with programming puzzles, and one of the puzzles is super-roots:

Hi Andy, nice to read from you... Yes the page looks interesting, I'll try to get logged in another day (I'm in bed because of some bacteria or whatever and am just lurking around here a bit). I had already other posts with superroots, for instance that one about the power series of (1+x)^(1+x)^...^(1+x). At the moment I was involved in that question of the superroots -1 = x^x^x and had much fun already (see http://math.stackexchange.com/questions/...38#1415538 ) but again - cannot yet complete the discussion. Cordially - Gottfried

An actual discussion/application is on MSE, see the question of Vladimir Reshetnikov where I tried to find an explanation/ansatz for a proof in my answer http://math.stackexchange.com/a/1530136/1714 Perhaps someone can help to make more progress... Gottfried

I agree that it is a long-researched problem; trying to find a closed form for super-roots, or anything for that matter.
Using a combination of known facts from the Tetration Ref I collected, I was able to find a simpler expression for the logarithmic power series expansion of $y = x^{x^x}$ than I remember from before. I think the ideal solution would be to find a recurrence equation similar to the one we know for n-th tetrates. I've attached a short discussion of the things we know that might help in finding a closed form.
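As an illustration of the bisection approach mentioned by andydude above, here is a minimal Python sketch for integer-height super-roots (names and tolerance are illustrative; it only covers the integer-height case discussed in this thread, not fractional heights such as x^^(1/2) = 3):

```python
def tetrate(x, n):
    # x^^n for integer n >= 1: x^x^...^x (n copies, right-associative)
    t = x
    for _ in range(n - 1):
        t = x ** t
    return t

def super_root(y, n, tol=1e-12):
    # Solve x^^n = y for x >= 1 (with y >= 1) by bisection.
    # tetrate(., n) is increasing on [1, oo), so bisection is safe here.
    lo, hi = 1.0, max(y, 2.0)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tetrate(mid, n) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = super_root(3.0, 2)
print(x, x ** x)  # about 1.8254..., and x^x recovers 3
```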
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 8, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8108950257301331, "perplexity": 1922.5996594237238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305242.48/warc/CC-MAIN-20220127072916-20220127102916-00220.warc.gz"}
https://dcc-backup.ligo.org/LIGO-P2000021/public
# Properties and astrophysical implications of the 150 Msun binary black hole merger GW190521

Document #: LIGO-P2000021-v12
Document type: P - Publications

Abstract: The gravitational-wave signal GW190521 is consistent with a binary black hole merger source at redshift 0.8 with unusually high component masses, $$85^{+21}_{-14}\,M_{\odot}$$ and $$66^{+17}_{-18}\,M_{\odot}$$, compared to previously reported events, and shows mild evidence for spin-induced orbital precession. The primary falls in the mass gap predicted by (pulsational) pair-instability supernova theory, in the approximate range $$65 - 120\,M_{\odot}$$. The probability that at least one of the black holes in GW190521 is in that range is 99.0%. The final mass of the merger ($$142^{+28}_{-16}\,M_{\odot}$$) classifies it as an intermediate-mass black hole. Under the assumption of a quasi-circular binary black hole coalescence, we detail the physical properties of GW190521's source binary and its post-merger remnant, including component masses and spin vectors. Three different waveform models, as well as direct comparison to numerical solutions of general relativity, yield consistent estimates of these properties. Tests of strong-field general relativity targeting the merger-ringdown stages of coalescence indicate consistency of the observed signal with theoretical predictions. We estimate the merger rate of similar systems to be $$0.13^{+0.30}_{-0.11}\,{\rm Gpc}^{-3}\,\rm{yr}^{-1}$$. We discuss the astrophysical implications of GW190521 for stellar collapse, and for the possible formation of black holes in the pair-instability mass gap through various channels: via (multiple) stellar coalescence, or via hierarchical merger of lower-mass black holes in star clusters or in active galactic nuclei. We find it to be unlikely that GW190521 is a strongly lensed signal of a lower-mass black hole binary merger. We also discuss more exotic possible sources for GW190521, including a highly eccentric black hole binary, or a primordial black hole binary.

Notes and Changes: Fix typo. Add author. Changes made to ApJL as well.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8456245064735413, "perplexity": 2196.076179015666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401598891.71/warc/CC-MAIN-20200928073028-20200928103028-00368.warc.gz"}
http://sandbaronfalseriver.com/epub/an-introduction-to-metric-spaces-and-fixed-point-theory
# An Introduction to Metric Spaces and Fixed Point Theory by Mohamed A. Khamsi

By Mohamed A. Khamsi

Contents: Chapter 1 Introduction (pages 1–11); Chapter 2 Metric Spaces (pages 13–40); Chapter 3 Metric Contraction Principles (pages 41–69); Chapter 4 Hyperconvex Spaces (pages 71–99); Chapter 5 "Normal" Structures in Metric Spaces (pages 101–124); Chapter 6 Banach Spaces: Introduction (pages 125–170); Chapter 7 Continuous Mappings in Banach Spaces (pages 171–196); Chapter 8 Metric Fixed Point Theory (pages 197–241); Chapter 9 Banach Space Ultrapowers (pages 243–271)

Similar linear books

LAPACK95 users' guide: LAPACK95 is a Fortran 95 interface to the Fortran 77 LAPACK library. It is relevant for anyone who writes in the Fortran 95 language and needs reliable software for basic numerical linear algebra. It improves upon the original user-interface to the LAPACK package, taking advantage of the considerable simplifications that Fortran 95 allows.

Semi-Simple Lie Algebras and Their Representations (Dover Books on Mathematics): Designed to acquaint students of particle physics already familiar with SU(2) and SU(3) with techniques applicable to all simple Lie algebras, this text is especially suited to the study of grand unification theories. Topics include simple roots and the Cartan matrix, the classical and exceptional Lie algebras, the Weyl group, and more.

Lectures on Tensor Categories and Modular Functors: This book presents an exposition of the relations among the following three topics: monoidal tensor categories (such as a category of representations of a quantum group), three-dimensional topological quantum field theory, and two-dimensional modular functors (which naturally arise in two-dimensional conformal field theory).

Additional info for An Introduction to Metric Spaces and Fixed Point Theory

Example text

14 (The metric transform $\varphi$) Let $(M,d)$ be a metric space, and define the metric space $(M, d_\varphi)$ by taking for $x, y \in M$: $d_\varphi(x,y) = \varphi(d(x,y))$, where $\varphi : [0, \infty) \to [0, \infty)$ is increasing, concave downward, and satisfies $\varphi(0) = 0$.

15 (The Hausdorff metric) Let $(M,d)$ be a metric space and let $\mathcal{M}$ denote the family of all nonempty bounded closed subsets of $M$. For $A \in \mathcal{M}$ and $\varepsilon > 0$ define the $\varepsilon$-neighborhood of $A$ to be the set $N_\varepsilon(A) = \{x \in M : \mathrm{dist}(x, A) < \varepsilon\}$, where $\mathrm{dist}(x, A) = \inf_{y \in A} d(x,y)$. Now for $A, B \in \mathcal{M}$, $H(A,B) = \inf\{\varepsilon > 0 : A \subseteq N_\varepsilon(B) \text{ and } B \subseteq N_\varepsilon(A)\}$. Then $(\mathcal{M}, H)$ (...)

Hence if $|t - t_0| < \delta$, $|f(t) - f(t_0)| \le |f(t) - f_N(t)| + |f_N(t) - f_N(t_0)| + |f_N(t_0) - f(t_0)| < \varepsilon$. This proves continuity of $f$. To see that $\lim_{n \to \infty} d(f_n, f) = 0$, let $\varepsilon > 0$ and observe that since $\{f_n\}$ is a Cauchy sequence there is an integer $N$ such that if $m, n > N$ then $\sup_{t \in [0,1]} |f_n(t) - f_m(t)| < \varepsilon$, that is, $f_n(t) - \varepsilon \le f_m(t) \le f_n(t) + \varepsilon$. Letting $m \to \infty$ we see that for any $t \in [0,1]$ and $n > N$, $f_n(t) - \varepsilon \le f(t) \le f_n(t) + \varepsilon$; hence $|f(t) - f_n(t)| \le \varepsilon$, from which $d(f_n, f) \le \varepsilon$. Since $\varepsilon > 0$ is arbitrary we conclude $\lim_{n \to \infty} d(f_n, f) = 0$.

Let $\{x_\alpha\}_{\alpha \in I}$ be any chain in $(M, \ge)$, and for $\alpha, \beta \in I$ set $\beta \ge \alpha \iff x_\beta \ge x_\alpha$. Then $\{\psi(f(x_\alpha))\}$ is a nonincreasing net in $\mathbb{R}^+$, so there exists $r \ge 0$ such that $\lim_\alpha \psi(f(x_\alpha)) = r$. Let $\varepsilon > 0$. Then there exists $\alpha_0 \in I$ such that $\alpha \ge \alpha_0$ implies $r \le \psi(f(x_\alpha))$ (...)
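For finite point sets, the Hausdorff metric defined in the excerpt can be computed directly from the equivalent sup/inf formulation $H(A,B) = \max(\sup_{x \in A} \mathrm{dist}(x,B), \sup_{y \in B} \mathrm{dist}(y,A))$. The following Python sketch is one way to do it (illustrative names, Euclidean underlying metric; math.dist requires Python 3.8+):

```python
import math

def dist_point_to_set(x, A):
    # dist(x, A) = inf over y in A of d(x, y); here d is Euclidean and A finite
    return min(math.dist(x, y) for y in A)

def hausdorff(A, B):
    # H(A, B) = max( sup_{x in A} dist(x, B), sup_{y in B} dist(y, A) ),
    # which agrees with the epsilon-neighborhood definition for nonempty
    # bounded closed sets; A and B here are finite point sets.
    return max(max(dist_point_to_set(x, B) for x in A),
               max(dist_point_to_set(y, A) for y in B))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
print(hausdorff(A, B))  # 1.4142... = sqrt(2), realized by (2,1) against A
```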
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.928000807762146, "perplexity": 3790.9381409959256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794872114.89/warc/CC-MAIN-20180528072218-20180528092218-00306.warc.gz"}
http://sepwww.stanford.edu/data/media/public/docs/sep124/biondo1/paper_html/node5.html
## Synthetic data examples

To test the analytical results presented in the previous section, I modeled and migrated several data sets that combined SODCIGs extracted from the prestack images of the synthetic data set described above. The SODCIGs were uniformly spaced, with four choices for their distance: 640 meters, 320 meters, 160 meters, and 80 meters. The original midpoint spacing of the image was 10 meters. Therefore, the first data set required 64 independent modeling experiments, the second 32, the third 16 and the fourth 8.

Figure 8 compares the SODCIG extracted from the starting prestack image (Figure 8a) with the corresponding SODCIGs extracted from the images obtained by migrating the four combined data sets with the correct velocity. All the SODCIGs have been extracted at the same horizontal location. As predicted by equation 23, the images obtained by the combined data sets are affected by cross talk along the offset domain. The image obtained from the smallest data set, which had only 8 independent experiments (Figure 8e), is completely degraded by the cross-talk, whereas the larger data sets (spacing equal to 320 and 640 meters) preserve the velocity information present in the original SODCIG and allow the computation of ADCIGs uncontaminated by artifacts, after the cross-talk artifacts are removed by limiting the offset aperture. Figure 9 shows the same SODCIGs shown in Figure 8 after the larger subsurface offsets are zeroed. Because the distance between cross-talks decreases with decreasing spacing, the window around zero offset also decreases in width. For Figure 9b the window was 410 meters wide, for Figure 9c it was 170 meters wide, for Figure 9d it was 110 meters wide, and for Figure 9e it was 70 meters wide.

Migs-nowind-overn Figure 8: Panel a): SODCIG extracted from source-receiver migration of synthetic data set migrated with correct velocity. Panels b) to e): SODCIGs obtained from migration of data sets modeled using the proposed method, with spacing respectively 640 m, 320 m, 160 m and 80 m.

Migs-wind-overn Figure 9: Same panels shown in Figure 8 after zeroing the larger subsurface offsets to maximally eliminate the cross-talk before transformation to angle domain (Figure 10).

Figure 10 shows the ADCIGs obtained by transforming into the angle domain the SODCIGs shown in Figure 9. The ADCIGs computed by imaging the larger data sets (spacing equal to 320 and 640 meters) preserve the velocity information contained in the original ADCIG (Figure 10a), whereas the ADCIG computed from the data set with only 8 independent experiments (Figure 10e) is completely overwhelmed by artifacts.

Angs-wind-overn Figure 10: ADCIGs obtained by transformation of the windowed SODCIGs shown in Figure 9. Panel a): ADCIG computed from source-receiver migration of synthetic data set migrated with correct velocity. Panels b) to e): ADCIGs obtained from the migration of data sets modeled using the proposed method, with spacing respectively 640 m, 320 m, 160 m and 80 m.

The amount of interference caused by the cross-talk also depends on how well the SODCIGs are focused around zero subsurface offset, in addition to the spacing between SODCIGs. When the initial migration is not perfectly focused because of velocity inaccuracies, more experiments are needed to preserve the velocity information than when the starting image is well focused.
Figure 11, illustrating this concept, shows the SODCIGs obtained starting from the prestack image computed by source-receiver migration using a migration velocity too low by 10%. Figure 11a shows the original SODCIG, whereas the other panels show the SODCIGs obtained with increasingly smaller data sets, as in Figure 8. Because of the velocity error the SODCIGs are not well focused at zero offset. In this case, only the data set with 64 independent experiments produces a SODCIG with the cross-talk sufficiently separated from zero offset not to interfere with the desired image. This result is confirmed by the transformation to angle domain. Figure 12 shows the ADCIGs obtained after windowing the SODCIGs shown in Figure 11. The ADCIG obtained by migrating all the 64 independent experiments (Figure 12b) contains the same velocity information as the original ADCIG (Figure 12a), whereas the others are affected by artifacts caused by the cross-talks, increasingly so going from left to right in the figure.

Migs-slow-nowind-overn Figure 11: Panel a): SODCIG extracted from source-receiver migration of synthetic data set migrated with velocity too slow by 10%. Panels b) to e): SODCIGs obtained from migration of data sets modeled using the proposed method, with spacing respectively 640 m, 320 m, 160 m and 80 m. The modeling and the migration velocities were the same and both too slow by 10%.

Angs-slow-wind-overn Figure 12: ADCIGs obtained by transformation of the SODCIGs shown in Figure 11 after windowing. Panel a): ADCIG computed from source-receiver migration of synthetic data set migrated with velocity too slow by 10%. Panels b) to e): ADCIGs obtained from migration of data sets modeled using the proposed method, with spacing respectively 640 m, 320 m, 160 m and 80 m.

The two previous examples display the imaging results when the modeling and migration velocity were the same. However, because the proposed modeling method would be used for MVA, which requires iterative migrations with different velocities, it is useful to evaluate the results when the modeling and migration velocities differ. Therefore, I modeled four data sets, again with decreasing spacing; I started as before with 640 meters, and went down to 320 meters, 160 meters and 80 meters. The starting image was obtained by source-receiver migration with velocity too slow by 10%. The data were modeled assuming the same low velocity, but they were migrated using the correct velocity, and thus the SODCIGs after migration are now well focused. Figure 13 shows the resulting SODCIGs and compares them with the well-focused SODCIGs obtained by source-receiver migration of the original data set with the correct velocity (Figure 13a). As before, the cross-talk artifacts in the SODCIGs obtained by migrating the data sets formed by 32 and 64 independent experiments are sufficiently far from zero offset to be easily zeroed before transformation to angle domain. Figure 14 shows the corresponding ADCIGs, which show flat moveout for the deep flat reflector. A small residual moveout can be observed for the shallow dipping reflector that is probably related to staircase artifacts in the initial modeling. In other words, because of the coarseness of the modeling grid, the dipping reflector behaves as a sequence of short segments of flat reflectors, instead of as a continuous planar reflector dipping at 10 degrees. All ADCIGs, except the ones shown in Figures 14d and 14e, are free from artifacts and provide useful velocity information.
The last example illustrates the idea that the interference between SODCIGs depends on the amount of focusing of the SODCIGs after migration, not in the starting image. In other words, the "residual propagation" operator present in equation 22 may decrease, or increase, the amount of cross-talk artifacts, depending on whether it improves, or degrades, the focusing of the image.

Migs-slow-slow-nowind-overn Figure 13: Panel a): SODCIG extracted from source-receiver migration of synthetic data set migrated with velocity too slow by 10%. Panels b) to e): SODCIGs obtained from migration of data sets modeled using the proposed method, with spacing respectively 640 m, 320 m, 160 m and 80 m. The modeling velocity was too slow by 10%, but the migration velocity equaled the correct velocity.

Angs-slow-slow-wind-overn Figure 14: ADCIGs obtained by transformation of the SODCIGs shown in Figure 13 after windowing. Panel a): ADCIG computed from source-receiver migration of synthetic data set migrated with velocity too slow by 10%. Panels b) to e): ADCIGs obtained from migration of data sets modeled using the proposed method, with spacing respectively 640 m, 320 m, 160 m and 80 m. The modeling velocity was too slow by 10%, but the migration velocity equaled the correct velocity.

Stanford Exploration Project 4/5/2006
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8630740642547607, "perplexity": 2576.9938843399877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825399.73/warc/CC-MAIN-20171022165927-20171022185927-00754.warc.gz"}
http://mathhelpforum.com/advanced-algebra/144683-group-modulo-additive-reals-isomorphic-multiplicative-reals.html
# Math Help - Which group modulo the additive reals is isomorphic to the multiplicative reals.

1. ## Which group modulo the additive reals is isomorphic to the multiplicative reals.

Hi There. Simple question to pose, maybe not too easy to answer. Let G be a group such that its quotient over the additive reals is isomorphic to the multiplicative reals. Which familiar group is G isomorphic to? Equivalently, what is the product group of the additive reals with the multiplicative reals isomorphic to?

2. Originally Posted by Kep
Hi There. Simple question to pose, maybe not too easy to answer. Let G be a group such that its quotient over the additive reals is isomorphic to the multiplicative reals. Which familiar group is G isomorphic to? Equivalently, what is the product group of the additive reals with the multiplicative reals isomorphic to?

I would guess the complex numbers under addition. You are quotienting out a copy of the reals (under +) to get a copy of the reals (under *). Apparently the group you start with is one you are 'familiar' with. Well, you only know a couple of groups which are uncountable... I hope that helps for the moment!

EDIT: Although $(\mathbb{C}, +) \cong (\mathbb{R} \times \mathbb{R}, +)$ so you will have to look quite hard for this copy of the reals, it certainly isn't obvious...

EDIT2: Powers seem to work. You are wanting to turn addition into multiplication, so powers seem to be a sensible choice: $\phi : (a, b) \mapsto e^{a+b}$. Clearly $e^{0+0} = e^0 = 1$ and $(a,b)\phi * (c,d)\phi = e^{a+b}e^{c+d} = e^{a+b+c+d} = (a+c, b+d)\phi = ((a, b)+(c, d))\phi$. You now just need to prove that the kernel is isomorphic to the reals. However, the kernel is the set $\{(a, -a) : a \in \mathbb{R}\}$, which is isomorphic to the reals under the isomorphism $(a, -a) \mapsto a$.
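To complement the algebra above, here is a small numerical sanity check of the proposed map (a Python sketch; it only probes the identities at random points and is of course not a proof):

```python
import math
import random

# The map proposed in the thread: phi(a, b) = e^(a+b),
# from (R x R, +) into the positive reals under multiplication.
def phi(a, b):
    return math.exp(a + b)

random.seed(0)
for _ in range(5):
    a, b, c, d = (random.uniform(-3, 3) for _ in range(4))
    lhs = phi(a + c, b + d)        # phi applied to the sum
    rhs = phi(a, b) * phi(c, d)    # product of the images
    assert math.isclose(lhs, rhs), (lhs, rhs)

# Kernel check: phi(a, -a) = e^0 = 1 for every a,
# so the kernel {(a, -a)} is a line isomorphic to (R, +).
assert all(math.isclose(phi(a, -a), 1.0) for a in (-2.0, 0.5, 7.0))
print("homomorphism and kernel checks pass")
```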
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9329987168312073, "perplexity": 1379.9787446038729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824109.37/warc/CC-MAIN-20160723071024-00289-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/electric-flux-from-line-charge-through-plane-strip.197055/
# Electric flux from line charge through plane strip

1. Nov 9, 2007

### Natique

Hey there! :)

1. The problem statement, all variables and given/known data
A uniform line charge with linear charge density λ = 6 nC/m is situated coincident with the x-axis. Find the electric flux per unit length of line passing through a plane strip extending in the x direction with edges at y=1, z=0 and y=1, z=5. The final answer is 1.31 nC/m. Only problem is I have no idea how to get it.

2. Relevant equations
$\oint \mathbf{D} \cdot d\mathbf{S} = \text{electric flux} = Q_{\text{enclosed}}$ (integral over a closed surface)

3. The attempt at a solution
Attached. I'm sure the answer is really obvious, but I'm just not seeing it. I attached two solutions, but I actually attempted about 6 other ways, all of which are so pathetically illogical I'd really rather not post them. Anyway any help would be reeeeeeeeally appreciated! And it's not a homework question, so it'd be awesome if you could walk me through it step by step.

Edit: Do you think this should be in the advanced physics subforum? :S

#### Attached Files:
• ###### my solution.zip File size: 6.4 KB Views: 122

Last edited: Nov 9, 2007
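The stated answer can be reproduced with a symmetry argument: by Gauss's law all of the line's flux leaves radially, so the fraction passing through the strip is the azimuthal angle the strip subtends at the x-axis, divided by 2π. A minimal Python check of this reasoning (an illustrative sketch, not the poster's attached solution):

```python
import math

lam = 6.0  # linear charge density, nC/m

# The strip edges run along x at (y, z) = (1, 0) and (1, 5).
# Angle of each edge direction in the y-z plane, seen from the line charge:
phi1 = math.atan2(0, 1)  # 0 rad
phi2 = math.atan2(5, 1)  # about 1.3734 rad

# Flux per unit length through the strip = subtended-angle fraction
# of the total flux lambda leaving the line:
flux_per_length = lam * (phi2 - phi1) / (2 * math.pi)
print(round(flux_per_length, 2))  # 1.31 nC/m
```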
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8726707696914673, "perplexity": 1237.7069819918083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647681.81/warc/CC-MAIN-20180321180325-20180321200325-00000.warc.gz"}
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=LATEX-L;927910e5.9811&FT=M&P=T&H=N&S=b
## [email protected]

Re: pattern matching in LaTeX
Marcel Oliver <[log in to unmask]>
Mon, 9 Nov 1998 13:34:28 +0100
text/plain (28 lines)

Sebastian Rahtz wrote:
> not being a mathematician, i am not sure how to comment on this. i had
> just assumed that publishing "logical" math is a Good Thing

I guess the problem is not whether math is logical, but that published math contains a lot of omissions or jumps which people who work in the field know how to fill in. This is a necessary mechanism for optimal human-to-human communication.

> > ones that drive mathematical innovation. Thus, having data formats
> > which are optimized for presentation, and others which are optimized for
> > machine processing of the logical content is,
> i take the point that you need both, that we cannot get rid of
> presentation math, because it performs a valuable function. but for
> the *default*, low-class, mass-market math, surely you'd agree that
> content markup is desirable? surely school textbook math should have
> not have \hspace s in?

Absolutely. I guess on this level things are, at least in principle, taken care of by LaTeX, so I assume we are talking about one step further: That, at present, I cannot take a piece of LaTeX code, read it unambiguously into Mathematica and work with it. (I am using this just as an example because I am most familiar with the two, but I guess everybody can replace these by their favorite systems...)

Marcel
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8632389307022095, "perplexity": 1914.0756470376941}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573744.90/warc/CC-MAIN-20220819161440-20220819191440-00726.warc.gz"}
http://aga2012.wikidot.com/study-coordinates
Study Coordinates

Recall that a pair $(e,g)\in \mathbb{H}\times\mathbb{H}$ lying on the Study quadric $S\subset \mathbb{R}\mathbb{P}^7$ defines the pair $(p,C) \in SE(3) = \mathbb{R}^3\times SO(3)$:

(1)
\begin{align}
p &= ge'/ee' \\
Cv &= eve'/ee'
\end{align}

1. Find the matrix $C$ in terms of $e_0,e_1,e_2,e_3$.
2. Verify that the above construction gives an isomorphism of $S\setminus\mathbb{V}(ee')$ and $SE(3)$. (Express $(e,g)$ in terms of $(p,C)$.)

### Hints

1. Several lines of code in M2 using the quaternion type in platforms.m2 should do this: see what $Cv$ is for $v=i,j,k$.
2. Note that $g$ can be found as long as $e$ is known.

### Solution

1. Set $e = e_{0} + e_{1} i + e_{2} j + e_{3} k$, $g = g_{0} + g_{1} i + g_{2} j + g_{3} k$, and $v = v_{1} i + v_{2} j + v_{3} k$. Compute the RHS of the equation $Cv = eve'/ee'$ using Mathematica. It has a Quaternions package that works well with symbolic computation. The LHS is matrix multiplication, where we view $v \in \mathbb{R}^{3}$ as a column vector. Denote $C \in \mathbb{R}^{3 \times 3}$ as:

(2)
$$C = \begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{bmatrix}$$

We will have three linear equations in the variables $v_{1}, v_{2}, v_{3}$. The defining equations for $C$ in terms of $e_{0},e_{1},e_{2},e_{3}$ are found by comparing coefficients. Thus,

(3)
$$C = \frac{1}{e_{0}^{2} + e_{1}^{2} + e_{2}^{2} + e_{3}^{2}} \begin{bmatrix} e_{0}^{2} + e_{1}^{2} - (e_{2}^{2} + e_{3}^{2}) & 2(e_{1}e_{2}-e_{0}e_{3}) & 2(e_{0}e_{2} + e_{1}e_{3})\\ 2(e_{1}e_{2} + e_{0}e_{3}) & e_{0}^{2}-e_{1}^{2} + e_{2}^{2} -e_{3}^{2} & 2(e_{2}e_{3} - e_{0}e_{1})\\ 2(e_{1}e_{3} - e_{0}e_{2}) & 2(e_{0}e_{1} + e_{2}e_{3}) & e_{0}^{2} - e_{1}^{2} - e_{2}^{2} + e_{3}^{2} \end{bmatrix}$$

### Discussion

• What is a Study quadric? $e_0g_0+e_1g_1+e_2g_2+e_3g_3 = 0$
• What does the apostrophe mean (as in $e'$)? It means quaternion conjugation, i.e. given $(q_0,q_1,q_2,q_3)\in\mathbb{R}^4$,

(4)
$$(q_0+q_1 i+q_2 j + q_3 k)'=q_0-q_1 i-q_2 j - q_3 k$$
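As a cross-check of equation (3), here is a small sketch in Python/SymPy (my addition; the page's hints assume Macaulay2 or Mathematica) that builds $C$ column by column from $Cv = eve'/ee'$ for $v = i, j, k$:

```python
# Sketch: recover C symbolically from the quaternion action e*v*e'/ee'.
from sympy import symbols, simplify, Matrix
from sympy.algebras.quaternion import Quaternion

e0, e1, e2, e3 = symbols('e0 e1 e2 e3', real=True)
e = Quaternion(e0, e1, e2, e3)

def conj(q):                          # quaternion conjugation q -> q'
    return Quaternion(q.a, -q.b, -q.c, -q.d)

ee = (e * conj(e)).a                  # ee' = e0**2 + e1**2 + e2**2 + e3**2

def image(v):                         # the vector e*v*e'/ee' in R^3
    w = e * v * conj(e)
    return Matrix([w.b, w.c, w.d]) / ee

i, j, k = Quaternion(0, 1, 0, 0), Quaternion(0, 0, 1, 0), Quaternion(0, 0, 0, 1)
C = simplify(Matrix.hstack(image(i), image(j), image(k)))
print(C)                              # should reproduce equation (3)
```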
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9994169473648071, "perplexity": 1402.5330491763186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257197.14/warc/CC-MAIN-20190523083722-20190523105722-00127.warc.gz"}
https://www.bankofcanada.ca/2004/11/working-paper-2004-43/
# Real Return Bonds, Inflation Expectations, and the Break-Even Inflation Rate

Available as: PDF

According to the Fisher hypothesis, the gap between Canadian nominal and Real Return Bond yields (or break-even inflation rate) should be a good measure of inflation expectations. The authors find that this measure was higher, on average, and more variable than survey measures of inflation expectations between 1992 and 2003. They examine whether risk premiums and distortions embedded in this interest rate gap can account for these facts. Their results indicate that distortions were likely an important reason for the high level and variation of this measure over much of the 1990s. There is little evidence that the distortions examined were as important between 2000 and 2003, but the high level of the break-even inflation rate in 2004 may be evidence of their return. Given the potential distortions, and the difficulty in identifying them, the authors conclude that it is premature to consider this measure a reliable gauge of monetary policy credibility. In addition, it is not as useful as competing tools for short- and medium-term inflation forecasting.

JEL Code(s): E, E3, E31, E4, E43
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9521141648292542, "perplexity": 1456.1926566099319}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141202590.44/warc/CC-MAIN-20201129184455-20201129214455-00047.warc.gz"}
http://mathhelpforum.com/number-theory/222141-rational-number-proof-print.html
# Rational number proof • September 21st 2013, 09:33 AM euphony Rational number proof I'm supposed to prove that log(m)/log(n) is rational if and only if there is some integer k such that m and n (which are integers) are powers of k. It's an "if and only if" proof, so there are two parts: a) Show that the existence of said integer k implies that log(m)/log(n) is rational. I did this, but I don't understand how to do the second part: b) Show that log(m)/log(n) being rational implies that there is some integer k such that m and n are powers of k. If log(m)/log(n) is rational, it can be represented as a/b where a and b are integers and b is not 0. log(m)/log(n) = a/b implies that m^b = n^a. But I don't know where to go from there. • September 21st 2013, 10:49 AM HallsofIvy Re: Rational number proof Quote: Originally Posted by euphony I'm supposed to prove that log(m)/log(n) is rational if and only if there is some integer k such that m and n (which are integers) are powers of k. It's an "if and only if" proof, so there are two parts: a) Show that the existence of said integer k implies that log(m)/log(n) is rational. I did this, but I don't understand how to do the second part: b) Show that log(m)/log(n) being rational implies that there is some integer k such that m and n are powers of k. If log(m)/log(n) is rational, it can be represented as a/b where a and b are integers and b is not 0. And, a/b is reduced to lowest terms. That is, a and b have no factors in common. Quote: log(m)/log(n) = a/b implies that m^b = n^a. But I don't know where to go from there. m^b= n^a is the same as saying m= n^(a/b). Since m is an integer, so is n^(a/b) and since a and b have no common factors, so is n^(1/b). Let k= n^(1/b). • September 21st 2013, 11:07 AM euphony Re: Rational number proof Quote: and since a and b have no common factors, so is n^(1/b) I don't quite understand this. • September 21st 2013, 02:40 PM Plato Re: Rational number proof Quote: Originally Posted by euphony I don't quite understand this. You you understand why $m=n^{\frac{a}{b}}~?$ If so, $m$ is an integer, therefore $n^{\frac{a}{b}}$ is also an integer. But because $\frac{a}{b}}$ is reduced form that means $n^{\frac{1}{b}}$ must an integer. Let $k=n^{\frac{1}{b}}$ so $m=k^a~\&~n=k^b$. • September 22nd 2013, 06:48 AM euphony Re: Rational number proof I don't understand why a and b having no common factors implies that n^(1/b) has to be an integer. • September 22nd 2013, 09:02 AM Plato Re: Rational number proof Quote: Originally Posted by euphony I don't understand why a and b having no common factors implies that n^(1/b) has to be an integer. Oh come on. If you are going to work at this level then think at this level. It is a matter of standard practice. If $\rho$ is a rational number then $\exists\{a,b\}\subset\mathbb{Z}$ such that $\text{GCD}(a,b)=1$ and $\rho=\frac{a}{b}$. • September 22nd 2013, 09:28 AM johng Re: Rational number proof Hi, Maybe what you're missing is this fact: if b and c are integers greater than 1, then the bth root of c is integral iff for any prime p with pk exactly dividing c, b divides k. From this it immediately follows that if a and b are relatively prime with na/b an integer, then n1/b is an integer. By the way, your original problem statement should have included the statement that m > 1, otherwise of course it's false. I think you should foster the habit of specifying the exact hypotheses of a statement.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9423953294754028, "perplexity": 542.1150375782112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982937576.65/warc/CC-MAIN-20160823200857-00224-ip-10-153-172-175.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/304396/diophantine-sets
Diophantine sets I'm trying to show that the following sets are Diophantine: 1. $\{(x,y)\mid x \leq y\}$ 2. $\{(x,y)\mid x < y\}$ 3. $\{(x,y)\mid x\text{ divides }y\}$ 4. $\{(x,y,z)\mid x\equiv y \pmod z\}$ 5. $\{(x,y,z)\mid x = \gcd(y,z)\}$ So, the definition I am using is that a Set $S$ is diophantine if i) it is a subset of $n$ , the set of all $n$-tuples of positive integers, and ii) there is a polynomial $p$ over in $n+k$ variables, $k0$ , such that $x$ is an element of the set $S$ iff there is some y an element of Naturals^k , such that $p(x,y)=0$ So, my answer is that set 1 isn't diophantine since it is not a subset of n, since if we let $y = -2$ for example. For set 2), it also isn't diophantine by the same reason as in #1 For set 3), it is since we let $k*x = y$, where $k$ is a positive number, but how to take care of the polynomial $p$ over $n+k$ variables? For set 4, condition 1 is met, but I need some justification and for set 5, it is also diophantine by the same reason as set 4. Thanks. - If you are going to post here regularly, may I suggest you learn a bit about how to format mathematics on this site (see the faq). For example, if I put dollar signs on either side of \{{(x,y)|x\le y\}}, I get $\{{(x,y)|x\le y\}}$. –  Gerry Myerson Feb 14 '13 at 23:21 Also, is that supposed to be $k_0$? –  Sniper Clown Feb 14 '13 at 23:22 I am surprised at the choice of positive integers. Almost universally in the field, variables range over the non-negative integers. Are you sure? Still doable, but things become more complicated. –  André Nicolas Feb 14 '13 at 23:23 @Asaf, why have you discorrected my edits of the spelling of diophantine? –  Gerry Myerson Feb 14 '13 at 23:29 @Asaf, you don't know what you're missing. My "Applications of higher cohomology to Latvian folkmusic of the 13th century" is a classic. –  Gerry Myerson Feb 15 '13 at 2:26 As is traditional in the field, we let variables range over the non-negative integers. The definitions should not be hard to modify if we are restricted to quantification over the positive integers. Note that the answers use polynomial equations $P=0$, where some of the coefficients of $P$ may be negative. If we want to avoid negative coefficients, we can, by bringing all the negative stuff to one side, use $P^+=P^-$. $1.$ For $x\le y$, use the formula $\exists u(x+u-y=0$. If you really insist on quantifying over positive integers only, say that $x=y$ or there exists a $u$ such that $x+u-y=0$. This can be expressed as $\exists u((x-y)(x+u-y)=0)$. From here on we don't type the existential quantifiers. $2.$ For $x\lt y$ use $x+1+u-y=0$. Here if we want $u$ to range over the positive integers, we can use the simpler $x+u-y=0$. $3.$ For $x$ divides $y$, use $ux=y$. $4.$ For congruence, there is the annoyance that $x-y$ may be positive, negative, or $0$. We can say that there exists $u$ such that $uz=x-y$ or $uz=y-x$. This can be written as there exists a $u$ such that $(uz-x+y)(uz+x-y)=0$. $5.$ For gcd, say that $x$ divides $y$ and $x$ divides $z$ (we already know how to do these) and $x$ can be written as a linear combination of $y$ and $z$ (Bezout's Theorem). To say linear combination, we can't quite say that there are $s$ and $t$ such that $sy+tz=x$, because almost always one of $s$ or $t$ will be negative. But we can sneak around that by saying there exist $s$ and $t$ such that $(sy-tz-x)(sy-tz-x)=0$. Note that we have three conditions whose conjunction we want to assert. 
Use the fact that the polynomials $P$, $Q$, and $R$ are all $0$ at a certain place iff $P^2+Q^2+R^2=0$ at that place. Remark: Your $x$, $y$ and so on implicitly range over the non-negative integers or the positive integers, according to local definition. So your choice of $y=-2$ is not allowed. It turns out that all recursively enumerable sunsets of $\mathbb{N}^n$ are Diophantine, so in particular all of the sets in your list will be. But what is asked for is an explicit construction for each. Added: For the $\gcd$ predicate, putting the pieces together, a formula that one can use is $$\exists u\exists v\exists s\exists t\left((ux-y)^2 +(vx-z)^2 +((sy-tz-x)(sy-tz-x))^2=0\right).$$ Note again the use of product to say "or" and of sum of squares to say "and." - @AndréNicolasI dont know what Bezout's theorem is. That was not in my book. Can you please help explain the problem and provide an explicit construction? –  mary Feb 17 '13 at 7:57 The theorem is not always given a name. It says that if $\gcd(a,b)=d$ then there exist integers $x$ and $y$ such that $ax+b y=d$. The most frequently used case says that if the $\gcd$ is $1$, there exist integers $x$ and $y$ such that $ax+by=1$. I thought I wrote out instructions for explicit construction. Will add a line at end. For more detail about Bezout's Theorem, see any beginning number theory book, or Wikipedia. –  André Nicolas Feb 17 '13 at 15:32 Still confused at how the work you did related to Diophatine characteristic. This is more of a computability question, can you please explain it along those lines? Thanks –  mary Feb 18 '13 at 12:15 I do not understand your question. Assume it is about the gcd problem. I gave an explicit existential (Diophantine) definition of your gcd predicate. It is not a computability question, it is a question of representability in a certain form. Of course all computable predicates are so representable, by the result of M., but they want an explicit formula. The reason the expression works is because the gcd can be represented as a linear combination, and nothing smaller can be. –  André Nicolas Feb 18 '13 at 12:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9183494448661804, "perplexity": 219.18049162230778}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500831098.94/warc/CC-MAIN-20140820021351-00171-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/newtons-law-of-cooling-help.140286/
Homework Help: Newton's Law of Cooling Help

1. Oct 28, 2006
prace

I have a question about Newton's Law of Cooling. Basically I understand that the equation

$$T(t) = T_0 + Ae^{kt}$$

comes from the DE

$$\frac{dT}{dt} = k(T - T_0)$$

Using this, I am to solve this problem: A thermometer is taken from an inside room to the outside, where the air temperature is 5 °F. After 1 minute, the thermometer reads 55 °F, and after 5 minutes the reading is 30 °F. What is the initial temperature?

So to start, I solved for e^k: from $T(1)=55$ and $T(5)=30$, $e^{4k} = (30-5)/(55-5) = 1/2$, so $e^k = (1/2)^{1/4}$.

So now that I have e^k, what do I do? My guess is that A is the initial temperature? But I am not sure and my text does not really explain it too well. So, basically, I guess I am asking, what does the constant A in the general formula mean? And if it is not the initial temperature, or initial condition, then what can I do next with this problem?

2. Oct 28, 2006

Rewrite it as $$y(t) = y_{0}e^{kt}$$ where $$y = T - 5$$. So
$$y(t) = y_{0}\left(\tfrac{1}{2}\right)^{\frac{1}{4}t}, \qquad y(1) = y_{0}\left(\tfrac{1}{2}\right)^{\frac{1}{4}}$$
Solve for $$y_{0}$$ and then get $$T_{0}$$.

3. Oct 28, 2006
arildno

You now have:
$$A=50\cdot\left(\tfrac{1}{2}\right)^{-\frac{1}{4}}$$
The initial temperature is now found by computing T(0). As for what A is, it is the DIFFERENCE between the initial temperature and the ambient temperature.

4. Oct 28, 2006
prace

So what you are saying here is that $$y_{0}$$ in your equation is $$T_{0}$$, which is the initial temperature?

5. Oct 28, 2006
prace

I don't think I am understanding this at all here... Sorry to put you through this, but, if A is the difference between the initial temperature and the ambient temperature, what is the variable for the initial temperature if $$T_{0}$$ is not the initial temperature, but the ambient temperature that arises as time gets very large or goes to infinity?

6. Oct 28, 2006
arildno

Let's start with the diff.eq, with an assigned initial temperature $T_{i}=T(0)$ and an ambient temperature $T_{0}$. We have the diff.eq:
$$\frac{dT}{dt}=k(T-T_{0}), \qquad T(0)=T_{i}$$
Introduce the new variable:
$$y(t)=T(t)-T_{0}\;\Rightarrow\;\frac{dy}{dt}=\frac{dT}{dt}, \qquad y(0)=T_{i}-T_{0}$$
Thus, we have the diff.eq problem:
$$\frac{dy}{dt}=ky, \quad y(0)=T_{i}-T_{0}\;\Rightarrow\;y(t)=(T_{i}-T_{0})e^{kt}$$
Thus, solving for T(t), we get:
$$T(t)=T_{0}+(T_{i}-T_{0})e^{kt}$$
or more obscurely:
$$T(t)=T_{0}+Ae^{kt}$$
where $A=T_{i}-T_{0}$.

7. Oct 28, 2006
prace

$$T(t)=T_{0}+(T_{i}-T_{0})e^{kt}$$
Wow... This really made it clear here. Sorry for the obscure questions, but you really nailed it for me here. I am going to try a few problems in my text and see how they work out. Thanks again!!

8. Oct 29, 2006
prace

Ok, so I worked it out and I got ~64.5°. If anyone has the time, would you mind checking this for me as I don't have the answer to this in my text. Thanks!!

9. Oct 29, 2006
arildno

I haven't worked it out, but: Start having confidence in yourself! I'm sure you managed it all right.

10. Feb 19, 2010
Sarah12345

I don't understand how we find k in problems like this where no initial temperature is given. Do you have to compare the temps at t=1 and t=5?
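For completeness, a short script (added here; the thread itself works by hand) reproducing the ~64.5 °F answer:

```python
# Check of the thread's numbers: ambient 5 F, T(1) = 55, T(5) = 30.
from math import exp, log

T_amb = 5.0
y1, y5 = 55.0 - T_amb, 30.0 - T_amb     # y(t) = T(t) - T_amb = A e^{kt}
k = log(y5 / y1) / (5 - 1)              # y(5)/y(1) = e^{4k} = 25/50 = 1/2
A = y1 * exp(-k)                        # from y(1) = A e^{k}
print(exp(k))                           # (1/2)**0.25 ~ 0.8409
print(T_amb + A)                        # T(0) ~ 64.46, i.e. "~64.5 F"
```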
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9195470213890076, "perplexity": 681.7808485452125}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589557.39/warc/CC-MAIN-20180717031623-20180717051623-00033.warc.gz"}
https://davidetorlo.it/talks/2020-06-02-PhD-defense
# Hyperbolic problems: high order methods and model order reduction

Date: PhD Defense at UZH Zurich.

The talk summarizes the main topics of the thesis: high order methods for ODEs, hyperbolic PDEs and model order reduction techniques. Slides

Numerical simulations are extremely important to forecast physical events. In particular, this is true when experiments are too expensive or unfeasible. The field of numerical analysis studies how to obtain reliable simulations of physical phenomena. Physics provides the modeling equations, e.g. partial differential equations (PDEs); numerical analysis then creates numerical methods that approximate the solutions of such equations.

In this manuscript, we focus on numerical methods for ordinary differential equations (ODEs) and hyperbolic PDEs. ODEs can model many chemical and biological processes, and the numerical methods to solve them are also fundamental for solving PDEs. Hyperbolic PDEs comprise many physical models, including fluid dynamics, transport equations, kinetic models and wave equations. The numerical methods for this kind of problem are vital for many engineering applications.

The schemes that we aim to obtain must satisfy many properties. They should converge to the analytical solution as the discretization scale decreases, they should be stable so as not to produce spurious oscillations, they should guarantee a certain level of accuracy and they should be computable in reasonable times. Often, these last two factors are in contradiction, as more accurate solutions require more computational time. To tackle this problem we propose in this thesis some possible solutions.

The first one is to speed up the convergence process by using high order accurate schemes. These schemes obtain much more accurate solutions with fewer refinements of the discretization scale with respect to low order accurate solutions. Hence, the computational cost needed to reach a certain error threshold is lower a priori. Another technique that we will use is implicit schemes. These schemes do not need to follow the restriction that explicit schemes have on the time discretization, allowing the use of fewer time steps. Finally, model order reduction techniques are tools that create a smaller discrete model, which represents, up to a certain error, an approximation of the solution manifold for parametric problems.

For high order accurate ODE solvers, we present in this work a class of arbitrarily high order schemes, called deferred correction (DeC) methods, which consist of an iterative procedure that, in a fixed number of loops, reaches an approximation of the required order. We study their A-stability for many possible orders of accuracy. In order to preserve positivity and conservation of physical quantities in production–destruction systems, we create a modified version of the DeC, which guarantees all these properties. This is possible thanks to the so-called Patankar trick, which makes the scheme linearly implicit. So far, the modified Patankar schemes were developed only up to third order of accuracy. The method we propose is arbitrarily high order accurate and unconditionally positivity preserving and conservative.

The rest of the thesis is focused on hyperbolic PDEs. We consider the residual distribution (RD) schemes as a high order accurate spatial discretization technique, in combination with the DeC for the time discretization.
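To make the Patankar idea concrete, here is a minimal first-order sketch (my addition; this is the classical modified Patankar-Euler step, not the arbitrarily high order scheme of the thesis) for a linear production-destruction system:

```python
# Linear production-destruction system: du/dt = -a*u + b*v, dv/dt = a*u - b*v.
# The Patankar trick weights production/destruction terms by c^{n+1}/c^n; for
# this linear model the weights cancel exactly, leaving the linear system
# below. Its matrix M has unit column sums (conservation) and is an M-matrix
# (positivity of the solution).
import numpy as np

def mpe_step(c, dt, a, b):
    M = np.array([[1 + dt*a, -dt*b],
                  [-dt*a,    1 + dt*b]])
    return np.linalg.solve(M, c)

c = np.array([1.0, 0.0])
for _ in range(20):
    c = mpe_step(c, dt=0.5, a=2.0, b=1.0)
print(c, c.sum())   # components positive, total mass still 1.0
```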
As a first step, we show a von Neumann stability analysis of the combination of these two methods, which suggests the optimal value of the stabilization parameters to maximize the time steps. This analysis uses Kreiss' theorem as a tool to verify the stability of the family of matrices that evolve the Fourier coefficients of the solutions. The complications of this analysis are due to the different nature of different degrees of freedom inside the polynomial reconstruction.

Furthermore, we extend the RD DeC method to an implicit–explicit version for kinetic models. Kinetic models contain a source term that, in the asymptotic limit, becomes stiff. To deal with it, an implicit treatment of such a term is necessary. We propose an implicit–explicit RD DeC scheme that solves this type of models. Moreover, the proposed scheme is arbitrarily high order and asymptotic preserving, i.e., in the asymptotic regime the numerical solution converges to the analytical asymptotic limit. We prove these properties and we validate the theoretical results with numerical simulations.

Next, we study the model order reduction (MOR) algorithms for parametric hyperbolic problems. These techniques were originally developed for elliptic and parabolic problems, and not all the algorithms can be extended to the hyperbolic framework. We propose an uncertainty quantification application of a MOR benchmark algorithm for hyperbolic problems. We show how the reduction can save computational time, and we compute some statistical quantities, like mean and variance, of stochastic hyperbolic PDEs.

Finally, we extend this algorithm in order to gain more compression in the reduced model. Indeed, MOR algorithms are badly suited for advection dominated problems, and most hyperbolic problems are of this kind. Even for the simplest wave transport problems, the classical MOR techniques fail to obtain a reasonable reduction, since they try to express the solution manifold as a linear combination of modes. What we propose in the last part of this thesis is to contextualize the PDEs into an arbitrary Lagrangian–Eulerian framework, which allows us, through a transformation map, to align the advected features and to strongly compress the relevant information of the solution manifold. The transformation map must also be quickly computable in the reduced model; to do so, we use different regression techniques, such as polynomial regression and artificial neural networks, and we compare their performances. All the algorithms and schemes are validated through adequate numerical simulations.
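A toy illustration (my own, not from the thesis) of why advection-dominated snapshots resist linear compression, and why aligning them helps:

```python
# Singular values of a snapshot matrix decay slowly for a traveling wave
# (Eulerian frame) but collapse to rank one once the snapshots are aligned,
# which is what the ALE-based reduction exploits.
import numpy as np

x = np.linspace(0.0, 1.0, 400)
times = np.linspace(0.0, 0.5, 60)
bump = lambda x0: np.exp(-200.0 * (x - x0)**2)

S_eulerian = np.stack([bump(0.25 + t) for t in times], axis=1)
S_aligned  = np.stack([bump(0.25) for _ in times], axis=1)

for name, S in [("Eulerian", S_eulerian), ("aligned", S_aligned)]:
    s = np.linalg.svd(S, compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    print(name, "modes for 99.99% energy:", int(np.searchsorted(energy, 0.9999)) + 1)
```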
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8812095522880554, "perplexity": 301.6208524579958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00225.warc.gz"}
https://www.ramsay-maunder.co.uk/knowledge-base/glossary/hyperstatic-stress-fields/
# Hyperstatic Stress Fields

## Introduction

Hyperstatic stress fields are self-balancing stress fields found in structural members or mechanical components. They are self-balancing in the sense that they satisfy the relevant equations of equilibrium with zero body forces and boundary tractions. A classic example of a hyperstatic stress field is that induced in a component subjected to a temperature field. The internal stresses are not zero, but they are generated without the need for associated body forces or boundary tractions. Other examples include residual stresses due, for example, to manufacturing processes.

Hyperstatic stress fields are, though, more general than indicated above. Irrespective of the presence of thermally induced or residual stresses, they often exist quite normally within structural members or components, as they are required, e.g., in a linear elastic structure, to ensure that the compatibility conditions are satisfied. In a plastic analysis, where compatibility conditions are no longer of importance, the hyperstatic stress fields are arranged so as to maximise the load carrying capacity of the member or component.

## Hyperstatic Stress Field for a Plate Membrane Problem

A hyperstatic stress field for a plate membrane (planar elasticity) problem is shown in the following figure. The equations are used to define a set of equilibrating boundary tractions. Whilst statically admissible (SA), this stress field is not kinematically admissible (KA), i.e., the corresponding strains do not satisfy the compatibility relations. The SA column plots the stresses given by the equations. The KA column is from a 2x2 mesh of compatible finite elements (CFE) model and so is kinematically admissible. The SAKA contours are from a highly refined CFE model. If the SA field is subtracted from the SAKA field, then the hyperstatic stress field required to make the SA field satisfy the compatibility relations is obtained.

Whilst not the correct linear elastic stress field, the SA field has a useful property since, through the lower bound plasticity theorem, it may be used to provide a safe estimate of the ultimate limit state. Provided the stresses are kept within the relevant yield criterion, the structure is safe from plastic collapse. This property is utilised in Equilibrium Finite Element (EFE) analysis, which offers a useful alternative to conventional CFE analysis.

## Hyperstatic Moment Field for a Kirchhoff Plate Bending Problem

The solutions shown in the following figure are theoretical solutions for a Kirchhoff plate. The solution for zero Poisson's ratio is identical to that obtained from a beam solution: no transverse moments are generated. With Poisson's ratio equal to 0.3, transverse moments are generated. The difference between solutions 1 and 2 is the hyperstatic moment field shown.

## Hyperstatic Stress Resultants for a Beam Problem

In the third example of hyperstatic stress fields a statically indeterminate beam is considered. There are two hyperstatic stress resultant fields (bending moments and shear forces) which can be added to the particular solution in various combinations. The particular solution balances the applied load, whereas the hyperstatic fields are, as usual, in equilibrium with no applied load. The normalised bending moment diagram (normalised by dividing by the plastic moment) shows three solutions, one of which is the elastic solution scaled so that the maximum moment just causes first yield.
The amplitudes of the hyperstatic fields are those required to make the total solution satisfy the kinematic boundary conditions of zero rotation at both ends of the beam. If plastic hinges are allowed to develop then the applied load can be increased beyond that required to develop first yield. A lower bound prediction of collapse is shown where a plastic hinge has been allowed to develop under the load. The exact collapse load though is obtained when additional plastic hinges are allowed to develop at the supports.
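To make the SA-versus-KA distinction concrete, here is an illustrative SymPy sketch (an addition; the article's actual plate field is defined in its figure): any Airy stress function yields a self-balancing plane stress field, but it is kinematically admissible only if the function is biharmonic.

```python
# A non-biharmonic Airy function phi therefore carries a hyperstatic part.
import sympy as sp

x, y = sp.symbols('x y')
phi = x**4 * y**2                          # hypothetical, non-biharmonic choice
sxx = sp.diff(phi, y, 2)
syy = sp.diff(phi, x, 2)
sxy = -sp.diff(phi, x, 1, y, 1)

# Equilibrium, div(sigma) = 0 with zero body force, holds for any phi:
print(sp.simplify(sp.diff(sxx, x) + sp.diff(sxy, y)))    # 0
print(sp.simplify(sp.diff(sxy, x) + sp.diff(syy, y)))    # 0

# Compatibility requires the biharmonic equation, which fails here:
biharmonic = sp.diff(phi, x, 4) + 2*sp.diff(phi, x, 2, y, 2) + sp.diff(phi, y, 4)
print(sp.simplify(biharmonic))             # 48*x**2 + 24*y**2 != 0, so not KA
```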
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8307432532310486, "perplexity": 1191.7069323629248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204885.27/warc/CC-MAIN-20190326075019-20190326101019-00412.warc.gz"}
http://mathhelpforum.com/advanced-algebra/196248-odd-degree-polynomials-fundamental-theorem-algebra.html
# Math Help - Odd Degree Polynomials and the Fundamental Theorem of Algebra

1. ## Odd Degree Polynomials and the Fundamental Theorem of Algebra

This is my problem:

1. How can the Fundamental Theorem of Algebra be used to show that any polynomial of odd degree has at least one root? Make sure that: 1. every claim is justified 3. vocabulary is used correctly 4. the solution is vivid (there are no missing details)

I don't even know where to start. Can someone please give me a step by step on the proof, as well as any tips? Thanks in advance.

2. ## Re: Odd Degree Polynomials and the Fundamental Theorem of Algebra

Quote: Originally Posted by Freshmanmath

By the Fundamental Theorem of Algebra, every polynomial with real coefficients has exactly as many complex roots as its degree (some of which may be repeated). Non-real roots always appear as complex conjugates, which means there is always an even number of non-real roots. So an odd degree polynomial must have an extra root which does not have a complex conjugate, and so it must be real. Therefore, every polynomial with real coefficients of odd degree has at least one real root.

3. ## Re: Odd Degree Polynomials and the Fundamental Theorem of Algebra

Thanks, that's very informative. Would it also be possible for you to, say, give me a simple example and apply what you said step by step? I'm more of a visual learner, and I really want to get this down solid. Excellent post, nonetheless.

5. ## Re: Odd Degree Polynomials and the Fundamental Theorem of Algebra

let's suppose that p(x) is in R[x], of odd degree. there are two cases:

1. p(x) has a real root
2. p(x) does not have a real root

we want to show that (1) always happens, or equivalently, that (2) never happens. p(x) is a complex polynomial as well, so by the fundamental theorem of algebra, p(x) has n linear complex factors (some of these factors may be repeated, that is, correspond to multiple roots).

now first (and we don't need to use FTA to prove this), let's show that if z is a complex number with z* its complex conjugate, p(z) = 0 implies p(z*) = 0. recall that for two complex numbers z and w, z = a+ib, w = x+iy:

(z+w)* = (a+x + i(b+y))* = a+x - i(b+y) = a - ib + x - iy = (a+ib)* + (x+iy)* = z* + w*

(zw)* = [(a+ib)(x+iy)]* = [(ax - by) + i(ay + bx)]* = ax - by - i(ay + bx) = ax - (-b)(-y) + i(a(-y) + b(-x)) = (a - ib)(x - iy) = z*w*

in particular, the last formula means that (z^n)* = (z*)^n, for all natural numbers n. also recall that z* = z if and only if b = 0, that is: z is real.

suppose that p(x) = a_0 + a_1x + a_2x^2 + ... + a_nx^n, with real coefficients. then

p(z*) = a_0 + a_1z* + a_2(z*)^2 + ... + a_n(z*)^n = a_0* + a_1*z* + a_2*(z^2)* + ... + a_n*(z^n)* = a_0* + (a_1z)* + (a_2z^2)* + ... + (a_nz^n)* = (a_0 + a_1z + a_2z^2 + ... + a_nz^n)* = [p(z)]* = 0* = 0.

so if p(x) splits over C (which is what FTA states), then the roots of p(x) (for REAL polynomials only, not complex ones) occur in conjugate pairs.
note that (x - z)(x - z*) = x^2 - (z+z*)x + zz*, and z+z* and zz* are both real: (a+ib) + (a-ib) = 2a, and (a+ib)(a-ib) = a^2 - (-b^2) + i(a(-b) + ab) = a^2 + b^2. so the conjugate pairs produce real quadratic factors of p(x), unless z = z*. but z = z* iff z is real, and case (2) asserts this never happens. so, in that case, we get that p(x) is a product of quadratic polynomials; in particular, p(x) is of even degree. since p(x) is assumed to be of odd degree, case (2) leads to a contradiction.

6. ## Re: Odd Degree Polynomials and the Fundamental Theorem of Algebra

Another great answer! I'm going to try to put that in idiot's terms as best as possible.
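A quick numerical illustration of the conjugate pairing (my addition, not part of the thread; the 1e-9 cutoff for "real" is an arbitrary numerical tolerance):

```python
# The non-real roots of a random real polynomial pair up as conjugates,
# so an odd degree forces an odd (hence nonzero) number of real roots.
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.normal(size=8)                 # random degree-7 (odd) polynomial
roots = np.roots(coeffs)
is_real = np.abs(roots.imag) < 1e-9
print(len(roots), "roots,", int(is_real.sum()), "of them real")

nonreal = np.sort_complex(roots[~is_real])
assert np.allclose(np.sort_complex(nonreal.conj()), nonreal)  # conjugate pairs
```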
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9055543541908264, "perplexity": 786.6804757146659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207926964.7/warc/CC-MAIN-20150521113206-00201-ip-10-180-206-219.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-equation-of-the-parabola-vertex-at-the-origin-and-the-direct
How do you find the equation of the parabola vertex at the origin and the directrix at x=7?

Jan 10, 2016

With a vertex $= \left(0 , 0\right)$ and directrix $x = 7$, this parabola opens to the left and will be of the form $x = \left(\frac{1}{4 c}\right) {\left(y - k\right)}^{2} + h$

Explanation:

The absolute distance between the directrix and vertex is $c = 7 - 0 = 7$. So, the coefficient $\frac{1}{4 c} = \frac{1}{4 \times 7} = \frac{1}{28}$. The sign of the coefficient must be NEGATIVE because the parabola opens to the left. vertex $= \left(0 , 0\right) = \left(h , k\right)$. Finally, substitute the values into the equation...

Equation: $x = \left(- \frac{1}{28}\right) {\left(y - 0\right)}^{2} + 0 = - \frac{{y}^{2}}{28}$

hope that helped
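A quick check of the focus-directrix property for the derived equation (my addition; the sample y values are arbitrary):

```python
# Points on x = -y^2/28 are equidistant from the focus (-7, 0) and the
# directrix x = 7.
from math import hypot

for y in (-5.0, -1.0, 0.0, 2.0, 6.0):
    x = -y*y / 28.0
    assert abs(hypot(x + 7.0, y) - (7.0 - x)) < 1e-12
print("focus-directrix property verified for x = -y^2/28")
```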
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8098264336585999, "perplexity": 655.5825946054404}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987763641.74/warc/CC-MAIN-20191021070341-20191021093841-00123.warc.gz"}
http://math.stackexchange.com/questions/106940/computing-gradient-and-hessian-of-a-vector-function
# Computing Gradient and Hessian of a vector function

I'm wondering how to compute the gradient and Hessian for this function $$f(\textbf{x}) = ||\textbf{x}||_2^p$$ where $\textbf{x}$ is a vector and $p$ is a constant with $p>1$. This is a homework question. As I'm unfamiliar with vector calculus, which is the prerequisite of my class, I'm having a difficult time finding the solution. I'll appreciate it if you can give me references to materials on vector calculus that help with finding the solution of this problem. The original homework question is to perform Newton's method to minimize $f(x)$. So I'm thinking of computing the gradient and Hessian. Any hints on the original question will be appreciated. Thanks

- $f(x_1,...,x_n)=(x_1^2+...+x_n^2)^{\frac p 2}$ – azarel Feb 8 '12 at 4:59

As $$f(\mathbf{x}):=||\mathbf{x}||_{2}^{p}=\left(\sum_{i=1}^{n}x_{i}^{2}\right)^{p/2}$$ and $$\nabla f(\mathbf{x}) := \left(\frac{\partial f}{\partial x_{1}},\frac{\partial f}{\partial x_{2}}, \ldots, \frac{\partial f}{\partial x_{n}} \right)$$ and noting that $$\frac{\partial f}{\partial x_{j}} = \frac{p}{2}\left(\sum_{i=1}^{n} x_{i}^{2} \right)^{p/2-1}\cdot 2x_{j} = px_{j}||\mathbf{x}||_{2}^{p-2}$$ then $$\nabla f(\mathbf{x})=p||\mathbf{x}||_{2}^{p-2}\left(x_{1},x_{2},\ldots, x_{n}\right)$$

As for the Hessian, $$\nabla^{2}f := \begin{pmatrix} \frac{\partial^{2} f}{\partial x_{1}^{2}} & \frac{\partial^{2} f}{\partial x_{1} \partial x_{2}} & \cdots &\frac{\partial^{2} f}{\partial x_{1}\partial x_{n}} \\ \frac{\partial^{2} f}{\partial x_{2} \partial x_{1}} & \frac{\partial^{2} f}{\partial x_{2}^{2}} & \cdots & \frac{\partial^{2} f}{\partial x_{2}\partial x_{n}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^{2} f}{\partial x_{n} \partial x_{1}} & \frac{\partial^{2} f}{\partial x_{n} \partial x_{2}} & \cdots & \frac{\partial^{2} f}{\partial x_{n}^{2}} \end{pmatrix}$$ so we consider two cases: the diagonal elements and the off-diagonal elements. These entries are computed easily from standard rules of calculus; I'm too worn out to compute them explicitly.

A good book on vector calculus is Div, Grad, Curl, And All That: An Informal Text on Vector Calculus by H.M. Schey.

Newton's Method in the multivariate case is a pretty straightforward generalization of the single-variable case, noting that $$f(\mathbf{x}+\mathbf{h})\approx f(\mathbf{x})+\left<\nabla f(\mathbf{x}),\mathbf{h} \right> + \frac{1}{2}\left<\mathbf{h},\nabla^{2}f(\mathbf{x})\mathbf{h}\right>$$ where $\left<\cdot, \cdot\right>$ denotes the ordinary dot product on $\mathbb{R}^{n}$.

- Thanks Nick. I see where this is going. But I have another question: is it easy to get the inverse of the Hessian? – SeeBees Feb 8 '12 at 5:44
- Absolutely not. There are schemes for getting around having to invert the Hessian which I have forgotten; I believe they can be found in Kendall Atkinson's book Theoretical Numerical Analysis. – Nick Thompson Feb 8 '12 at 5:47
- But if I want to compute $H^{-1} * g$, where $H$ is the Hessian and $g$ is the gradient, would there be an easy way? – SeeBees Feb 8 '12 at 5:54
- Hit both sides with $H$ then use Cholesky decomposition to solve the resulting system. – Nick Thompson Feb 8 '12 at 7:34
- @SeeBees: Take a look at Pearlmutter and Schraudolph's work on the R technique that approximates exactly that in linear time. – Neil G Oct 31 '14 at 12:11
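A minimal sketch (my addition, not from the thread) of the resulting Newton iteration, taking p = 4 so that f is smooth, and solving H d = -g at each step rather than forming the inverse:

```python
import numpy as np

p = 4.0
grad = lambda x: p * (x @ x)**(p/2 - 1) * x
hess = lambda x: p * (x @ x)**(p/2 - 2) * ((x @ x) * np.eye(len(x))
                                           + (p - 2) * np.outer(x, x))
x = np.array([1.0, -2.0, 0.5])
for it in range(10):
    d = np.linalg.solve(hess(x), -grad(x))   # Newton direction
    x = x + d
    print(it, np.linalg.norm(x))   # shrinks by a factor 2/3 per iteration
```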
The second derivatives are $$\frac{\partial^2 f}{\partial x_i^2}=\frac{\partial}{\partial x_i}\left(px_i\left(\mathbf x^2\right)^{p/2-1}\right)=p\left(\left(\mathbf x^2\right)^{p/2-1}+(p-2)x_i^2\left(\mathbf x^2\right)^{p/2-2}\right)$$ and $$\frac{\partial^2 f}{\partial x_i\partial x_j}=\frac{\partial}{\partial x_i}\left(px_j\left(\mathbf x^2\right)^{p/2-1}\right)=p(p-2)x_ix_j\left(\mathbf x^2\right)^{p/2-2}$$ for $i\ne j$, so the Hessian matrix $H$ is given by $$H=p\left(\mathbf x^2\right)^{p/2-2}\left((\mathbf x^\top\mathbf x) I+(p-2)\mathbf x\mathbf x^\top\right)\;,$$ where $I$ is the identity matrix. Symmetry suggests that its inverse should then also be a linear combination of $I$ and $\mathbf x\mathbf x^\top$, and you can find it by using that as an ansatz and determining the coefficients of the linear combination from the condition that the product is the identity matrix. (You'll need to use $(\mathbf x\mathbf x^\top)(\mathbf x\mathbf x^\top)=\mathbf x(\mathbf x^\top\mathbf x)\mathbf x^\top=(\mathbf x^\top\mathbf x)\mathbf x\mathbf x^\top$ in the process.)

- Thanks. Is there any good tutorial on matrix calculus? I'm unfamiliar with the basic rules. – SeeBees Feb 9 '12 at 20:14
- @SeeBees: This isn't what I'd call matrix calculus (on which there is, by the way, a Wikipedia article). It's just calculus that leads to a matrix, and then inverting that matrix involves only ordinary matrix operations, not calculus. In case you mean that you're unfamiliar with the basic rules of matrix operations, this Wikipedia section might be a good place to start. I can't recommend any books because I learned linear algebra from a German book way back when :-) – joriki Feb 9 '12 at 20:24

-

I was able to derive the gradient by doing the following:

$f(\mathbf x) = (\mathbf x ^\top \mathbf x)^{p/2}$

$f'(\mathbf x) = \frac{p}{2} (\mathbf x^\top \mathbf x)^{(p-2)/2} \cdot 2\mathbf x = p (\mathbf x^\top \mathbf x)^{(p-2)/2} \mathbf x$

I'm wondering if this is what's called matrix calculus.
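A finite-difference cross-check of these closed-form expressions (my addition; the test point and p are arbitrary):

```python
import numpy as np

def f(x, p):    return np.linalg.norm(x)**p
def grad(x, p): return p * np.linalg.norm(x)**(p - 2) * x
def hess(x, p):
    n2 = x @ x
    return p * n2**(p/2 - 2) * (n2 * np.eye(len(x)) + (p - 2) * np.outer(x, x))

x, p, h = np.array([0.7, -1.2, 0.4]), 3.5, 1e-6
E = np.eye(3)
g_fd = np.array([(f(x + h*e, p) - f(x - h*e, p)) / (2*h) for e in E])
H_fd = np.array([(grad(x + h*e, p) - grad(x - h*e, p)) / (2*h) for e in E]).T
print(np.max(np.abs(g_fd - grad(x, p))))   # tiny: gradient formula agrees
print(np.max(np.abs(H_fd - hess(x, p))))   # tiny: Hessian formula agrees
```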
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9531483054161072, "perplexity": 296.84232948024965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097710.32/warc/CC-MAIN-20150627031817-00187-ip-10-179-60-89.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3188106/directional-derivative-in-the-direction-in-which-z-is-growing
# Directional Derivative in the direction in which $z$ is growing Find the directional derivative of $$f(x, y, z) = xy + 2xz - y^2 + z^2$$, at the point $$P = (1, -2, 1)$$, passing through the curve $$x = t, y = t -3, z = t^2$$, in the direction in which $$z$$ is growing. The work I've done: $$\nabla f = (y + 2z, x - 2y, 2x + 2z) \Rightarrow \nabla f \rvert _P = (0, 5, 4)$$ Now I'm not sure of what I'm doing. Substituting the values of $$P$$ in the parametric equation of the curve, you get $$(1, 1, 1)$$. What I mean is, $$1 = t, -2 = t - 3, 1 = t^2$$ When you solve for $$t$$ in each equation you get $$(1, 1, 1)$$. So my guess is that $$\vec{v} = (1, 1, 1)$$ Therefore the directional derivative is $$\nabla f \rvert _P \cdot \frac{\vec{v}}{|\vec{v}|} = \frac{4 + 5}{\sqrt{3}} = 3\sqrt{3}$$ Is that correct? No, you get $$t= 1$$, not three different values. Each value of $$t$$ gives a point on the curve. $$t= 1$$ gives $$x= 1, y= 1- 3= -2, z= 1^2$$ or the point $$(1, -2, 1)$$ that you were given to begin with. The "directional derivative", also called a "tangent vector" is the function you got, $$(0, 5, 4)$$. • thanks for clearing it up.. what about the part "in the direction in which $z$ is growing?" – Victor S. Apr 15 at 0:01 • The tangent vector could also be given as $(0,-5,-4)$ but that would not be in the correct direction it would be $z$ decreasing (pointing downwards in the 3d plane). – Peter Foreman Apr 15 at 0:31 • Well the official answer is $13\sqrt{6}/6$. This is why I think I must be wrong. – Victor S. Apr 15 at 0:40
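A short numerical check (my addition): projecting the gradient onto the unit tangent of the curve at $t = 1$, oriented so that $z$ increases, reproduces the official answer $13\sqrt{6}/6$ quoted in the comments.

```python
import numpy as np

grad_P = np.array([0.0, 5.0, 4.0])       # gradient of f at P = (1, -2, 1)
tangent = np.array([1.0, 1.0, 2.0])      # r'(t) = (1, 1, 2t) at t = 1
u = tangent / np.linalg.norm(tangent)    # z-component positive: z is growing
print(grad_P @ u, 13*np.sqrt(6)/6)       # both ~5.3072
```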
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9312314987182617, "perplexity": 153.77597378238667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258453.85/warc/CC-MAIN-20190525224929-20190526010929-00041.warc.gz"}
https://www.mezzacotta.net/100proofs/archives/tag/electromagnetism
## 11. Auroral ovals

Aurorae are visible light phenomena observed in the night sky, mostly at high latitudes corresponding to Arctic and Antarctic regions. An aurora can appear as an indistinct glow from a distance or as distinct shifting curtain-like formations of light, in various colours, when seen from nearby.

An aurora, observed near Eielson Air Force Base, near Fairbanks, Alaska. (Public domain image by Senior Airman Joshua Strang, United States Air Force.)

Aurorae are caused by the impact on Earth's atmosphere of charged particles streaming from the sun, known as the solar wind.

Schematic representation of the solar wind streaming from the sun and interacting with the Earth's magnetic field. The dashed lines indicate paths of solar particles towards Earth. The solid blue lines show Earth's magnetic field. (Public domain image by NASA.)

The Earth's magnetic field captures the particles and deflects them (according to the well-known laws of electromagnetism) so that they spiral downwards around magnetic field lines. The result is that the particles hit the atmosphere near the Earth's magnetic poles.

Diagram of the solar wind interacting with Earth's magnetic field (field lines in red). The magnetic field deflects the incoming particles around the Earth, except for a fraction of the particles that enter the magnetic polar funnels and spiral down towards Earth's magnetic poles. (Public domain image by NASA, modified.)

The incoming high energy particles ionise nitrogen atoms in the upper atmosphere, as well as exciting oxygen atoms and nitrogen molecules into high energy states. The recombination of nitrogen and the relaxation of the high energy states results in the emission of photons. The light is produced between about 90 km and 150 km above the surface of the Earth, as shown by triangulating the positions of aurorae from multiple observing locations.

Observations of aurorae have established that they occur in nearly-circular elliptical rings of width equivalent to a few degrees of latitude (i.e. a few hundred kilometres), usually between 10° and 20° from the Earth's magnetic poles. These rings, in the northern and southern hemispheres, are called the auroral ovals.

Northern auroral oval observed on 22 January 2004. Figure reproduced from [1].

The auroral ovals are not precisely centred on the magnetic poles, but rather are pushed a few degrees towards the Earth's night side. This is caused by the diurnal deflection of the Earth's magnetic field by pressure from the charged particles of the solar wind.

Northern auroral oval observed in 1983 by Dynamics Explorer 1 satellite. The large bright patch at left is the daylight side of Earth. (Public domain image by NASA.)

The auroral ovals also expand when solar activity increases, particularly during solar storms, when increased particle emission from the sun and the resulting stronger solar wind compresses the Earth's magnetic field, forcing field lines to move away from the poles. But despite these variations, the auroral ovals in the northern and southern hemispheres move and change sizes more or less in unison, and are always of similar size.

Southern auroral oval observed in 2005 by IMAGE satellite, overlaid on a Blue Marble image of Earth. (Public domain image by NASA.)
You can see the current locations and sizes of both the northern and southern auroral ovals as forecast based on the solar wind and interplanetary magnetic field conditions as measured by the Deep Space Climate Observatory satellite at https://www.spaceweatherlive.com/en/auroral-activity/auroral-oval.

Current northern and southern auroral ovals as forecast by spaceweatherlive.com on 21 April, 2019. The auroral ovals are the same size and shape.

Earth is not the only planet to display aurorae. Jupiter has a strong magnetic field, which acts to funnel the solar wind towards its polar regions in the same way as Earth's field does on Earth. Jupiter, we can establish by simple observation from ground-based telescopes, is close to spherical in shape and not a flat disc. Auroral ovals are observed on Jupiter around both the northern and southern magnetic poles, exactly analogously to on Earth: of close to the same size and shape.

Auroral ovals on Jupiter observed in the northern and southern polar regions by the Hubble Space Telescope, using the Wide Field Planetary Camera (1996) and the Space Telescope Imaging Spectrograph (1997-2001). Figure reproduced from [2].

Similar auroral ovals are also seen on Saturn, in both the northern and southern hemispheres [3][4]. And just for the record, Saturn is also easily shown to be spherical in shape, and not a flat disc.

Now, we have established that auroral ovals appear on three different planets, with the southern and northern ovals of close to the same sizes and shapes on each individual planet. Everything is consistent and readily understandable – as long as you assume that the Earth is spherical like Jupiter and Saturn. If the Earth is flat, however, then the distributions of aurorae in the north and south map to very different shapes and sizes – with no ready explanation for either the shapes or their differences. In particular, large parts of the southern auroral oval end up being extremely far from the southern magnetic pole, in defiance of the electromagnetic mechanism that causes aurorae in the first place.

Auroral ovals in their observed locations, mapped onto a flat disc Earth. The ovals are vastly different sizes.

So the positions of aurorae on a flat Earth cannot be readily explained by known laws of physics, and they also do not resemble the locations and sizes of auroral ovals as observed on other planets. All of these problems go away and become self-consistent if the Earth is a globe.

References:

[1] Safargaleev, V., Sergienko, T., Nilsson, H., Kozlovsky, A., Massetti, S., Osipenko, S., Kotikov, A. "Combined optical, EISCAT and magnetic observations of the omega bands/Ps6 pulsations and an auroral torch in the late morning hours: a case study". Annales Geophysicae, 23, p. 1821-1838, 2005. https://doi.org/10.5194/angeo-23-1821-2005

[2] Grodent, D., Clarke, J. T., Kim, J., Waite Jr., J. H., Cowley, S. W. H. "Jupiter's main auroral oval observed with HST-STIS". Journal of Geophysical Research, 108, p. 1389-1404, 2003. https://doi.org/10.1029/2003JA009921

[3] Cowley, S. W. H., Bunce, E. J., Prangé, R. "Saturn's polar ionospheric flows and their relation to the main auroral oval". Annales Geophysicae, 22, p. 1379-1394, 2004. https://doi.org/10.5194/angeo-22-1379-2004

[4] Nichols, J. D., Clarke, J. T., Cowley, S. W. H., Duval, J., Farmer, A. J., Gérard, J.-C., Grodent, D., Wannawichian, S. "Oscillation of Saturn's southern auroral oval". Journal of Geophysical Research, 113, A11205, 2008. https://doi.org/10.1029/2008JA013444
## 8. Earth’s magnetic field

Magnetic fields have both a strength and a direction at each point in space. The strength is a measure of how strong a force a magnet feels when in the field, and the direction is the direction of the force on a magnetic north pole. North poles of magnets on Earth tend to be pulled towards the Earth’s North Magnetic Pole (which is in fact a magnetic south pole, but called “the North Magnetic Pole” because it is in the northern hemisphere), while south poles are pulled towards the South Magnetic Pole (similarly, actually a magnetic north pole, called “the South Magnetic Pole” because it’s in the south). Humans have used this property of magnets for thousands of years to navigate, with magnetic compasses.

The simplest magnetic field is what’s known as a dipole, because it has two poles: a north pole and a south pole. You can think of this as the magnetic field of a simple bar magnet. The magnetic field lines are loops, with the field direction pointing out of the north pole and into the south pole, and the loops closing inside of the magnet.

Illustration of magnetic field lines around a magnetic dipole. The north and south poles of the magnet are marked.

It’s straightforward to measure both the strength and the direction of the Earth’s magnetic field at any point on the surface, using a device known as a magnetometer. So what does it look like? Here are some contour maps showing the Earth’s magnetic field strength and the inclination – the angle the field lines make to the ground.

Earth’s magnetic field strength. The minimum field strength occurs over South America; the maximum field strengths occur just off Antarctica, south of Australia, and in the broad patch covering both central Russia and northern Canada. (Public domain image by the US National Ocean and Atmospheric Administration.)

Earth’s magnetic field inclination. The field direction is parallel to the ground at points along the green line, points into the ground in the red region, and points out of the ground in the blue region. The field emerges vertically at the white mark off the coast of Antarctica, south of Australia – this is the Earth’s South Magnetic Pole. The field points straight down at the North Magnetic Pole, north of Canada – not shown in this Mercator projection map, which omits areas with latitude greater than 70° north or south. (Public domain image by the US National Ocean and Atmospheric Administration.)

Now, how can we explain these observations with either a spherical Earth or flat Earth model? Let’s start with the spherical model. You may notice a few things about the maps above. The Earth’s magnetic field is not symmetrical at the surface. The lowest intensity point over South America is not mirrored anywhere in the northern hemisphere. And the South Magnetic Pole is at a latitude of about 64°S, while the North Magnetic Pole is at latitude 82°N.

As it happens, this observed magnetic field is to a first approximation the field of a magnetic dipole – just not a dipole that is centred at the centre of the Earth. The dipole is tilted with respect to Earth’s rotation axis, and is offset a bit to one side – towards south-east Asia and away from South America. This explains the minimum intensity in South America, and the asymmetry of the magnetic poles.

The Earth’s magnetic field is approximated by a dipole, offset from the centre of the Earth. The rotational axis is the light blue line, with geographic north and south poles marked. The red dots are the equivalent magnetic poles. The North Magnetic Pole is much closer to the geographic north pole than the South Magnetic Pole is to the geographic south pole. (As stated in the text, the “North Magnetic Pole” of the Earth is actually a magnetic south pole, and vice versa.)
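The two contour maps above have a simple closed-form counterpart for an idealised centred dipole (ignoring the tilt and offset just discussed): the surface field strength scales as √(1 + 3 sin²λ) with magnetic latitude λ, and the inclination obeys tan(I) = 2 tan(λ). The short Python sketch below evaluates both; the equatorial strength B0 ≈ 30 µT is an assumed round number, not a fitted value.

```python
import math

B0 = 3.0e-5  # T, assumed equatorial surface strength of the dipole term

def dipole_field(mag_lat_deg):
    """Surface field of a centred dipole at a given magnetic latitude (degrees).

    Returns (strength in tesla, inclination in degrees). These are the
    standard dipole results: B = B0 * sqrt(1 + 3 sin^2(lat)) and
    tan(inclination) = 2 tan(lat).
    """
    lam = math.radians(mag_lat_deg)
    strength = B0 * math.sqrt(1.0 + 3.0 * math.sin(lam) ** 2)
    inclination = math.degrees(math.atan(2.0 * math.tan(lam)))
    return strength, inclination

for lat in (0, 30, 60, 90):
    b, inc = dipole_field(lat)
    print(f"magnetic latitude {lat:2d} deg: B = {b*1e6:5.1f} uT, "
          f"inclination = {inc:5.1f} deg")
```

This reproduces the qualitative features of the maps: roughly a factor of two between equatorial and polar field strength, a horizontal field at the magnetic equator, and a vertical field at the magnetic poles.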
Models of the interior of the Earth suggest that there are circulating electrical currents in the molten core, which is composed mostly of iron. These currents are caused by thermal convection, and twisted into helices by the Coriolis force produced by the Earth’s rotation, both well understood physical processes. Circulating electrical currents are exactly what causes magnetic fields. The simplest version of this so-called dynamo theory model is one in which there is a single giant loop of current, generating a simple magnetic dipole. And in fact this dipole fits the Earth’s magnetic field to an average deviation of 16% [1]. This is not a perfect fit, but it’s not too bad. The adjustments needed to better fit Earth’s measured field are relatively small, and can also be understood as the effects of circulating currents in the Earth’s core, causing additional components of the field with smaller magnitudes. (The Earth’s magnetic field also changes over time, but we’ll discuss that another day.)

If the Earth is flat, however, there is no such relatively simple way to understand the strength and direction of Earth’s magnetic field using standard electromagnetic theory. Even the gross overall structure—which is readily explained by a magnetic dipole for the spherical Earth—has no such simple explanation. The shape of the field on a flat Earth would require either multiple electrical dynamos or large deposits of magnetic materials under the Earth’s crust, and they would have to be fortuitously arranged in such a way that they closely mimic the field that a simple dipole would produce on a spherical Earth. For any random arrangement of magnetic field-inducing structures on a flat Earth to happen to mimic the field of a spherical planet so closely is highly unlikely. Potentially it could happen, but the Earth actually being a sphere is a much more likely explanation.

That the simpler model is more likely to be true than the one requiring many ad-hoc assumptions is a case of Occam’s razor. In science, particularly, a simpler theory is more easily testable than one with a large number of ad-hoc assumptions. Occam’s razor will come up a lot, and I should probably write a sidebar article about it.

References:

[1] Nevalainen, J.; Usoskin, I.G.; Mishev, A. “Eccentric dipole approximation of the geomagnetic field: Application to cosmic ray computations”. Advances in Space Research, 52, p. 22-29, 2013. https://doi.org/10.1016/j.asr.2013.02.020
https://matlab-monkey.com/celestialMechanics/CRTBP/LagrangePoints/LagrangePoints.html
# CRTBP Pseudo-Potential and Lagrange Points

#### 1. Pseudo-Potential

When the motion of the test particle is confined to the plane containing the massive bodies, the acceleration in the rotating reference frame may be written in the form:

where U is defined as the pseudo-potential. We monkeys define the potential the way physicists do, i.e. in an isolated gravitational well the potential is negative. The celestial dynamics literature often defines U as becoming more positive the deeper one drops into a well. So, if you are a dynamicist, the sign of the potential will look wrong to you.

The first term on the right-hand side generates the centrifugal force. The second and third terms are the gravitational potentials for masses m1 and m2. The following figure shows the potential represented as a surface plot for a mass ratio of m2/m1 = 0.1:

• crtbpPotentialSurface.m - Plots the surface of the pseudo-potential for the circular, restricted three-body problem. Requires crtbpPotential.m to run. Dependent files:
• crtbpPotential.m - returns the pseudo-potential for the circular, restricted three-body problem

#### 2. Jacobi Integral and Zero-Velocity Curves

While neither energy nor angular momentum is conserved in the rotating reference frame of the CRTBP, there is a quantity that is a constant of the motion. This quantity is called the Jacobi integral:

The form of the Jacobi integral is similar to the total energy: it has two terms, one a pseudo-potential and the other a quadratic velocity term like the kinetic energy. The Jacobi integral for a particle will remain constant as it orbits the system. This property may be exploited to place bounds on the particle's motion. For a given Jacobi integral, one can calculate the curve in space where the velocity would go to zero. Such curves are called zero-velocity curves and are equivalent to turning points for potential wells in inertial frames of reference.

The following figure shows zero-velocity curves for different Jacobi integrals. The zero-velocity curves bound the shaded 'forbidden' regions where a particle with the specified Jacobi integral can not venture. For example, if a particle with CJ = 4 is initially in orbit around the green planet, it will be stuck there forever (unless it is given a velocity boost by some means). However, the zero-velocity curve for CJ = 3.92 encompasses both m1 and m2. Therefore a particle with this value of CJ can transition back and forth between orbits around each object. This plot also shows the positions of the five Lagrange points (see next section).

• crtbpZeroVel.m - Plots zero-velocity curves for different values of the Jacobi integral. Also marks the positions of the Lagrange points with "+" symbols. Requires crtbpPotential.m and lagrangePoints.m to run. Dependent files:
• crtbpPotential.m - returns the pseudo-potential for the circular, restricted three-body problem
• lagrangePoints.m - returns a matrix containing the (x,y,z) coordinates of the five Lagrange points for given values of m1 and m2. This routine assumes that G = 1 and that the distance between the primary objects is 1.
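The MATLAB sources referenced above are not reproduced on this page, so the following Python sketch shows one plausible implementation of the pseudo-potential and the Jacobi integral, in the same nondimensional units (G = 1, unit separation between the primaries) and with the physicist sign convention described above. The Jacobi integral is written here as C_J = -2U - v², which reduces to the usual dynamicists' form C_J = 2Ω - v² for Ω = -U; treat the exact normalisation as an assumption.

```python
import numpy as np

def crtbp_potential(x, y, mu):
    """Pseudo-potential of the planar CRTBP in the rotating frame.

    Nondimensional units: G = 1, m1 + m2 = 1, separation = 1, and
    mu = m2 / (m1 + m2); m1 sits at (-mu, 0) and m2 at (1 - mu, 0).
    Physicist sign convention (negative in the wells), as in the text.
    """
    r1 = np.hypot(x + mu, y)
    r2 = np.hypot(x - (1.0 - mu), y)
    return -0.5 * (x**2 + y**2) - (1.0 - mu) / r1 - mu / r2

def jacobi_integral(x, y, vx, vy, mu):
    """Jacobi integral C_J = -2 U - v^2, constant along a trajectory."""
    return -2.0 * crtbp_potential(x, y, mu) - (vx**2 + vy**2)

mu = 0.1 / 1.1  # mass ratio m2/m1 = 0.1 as in the surface plot, so mu = 0.1/1.1
print(jacobi_integral(0.5, 0.0, 0.0, 0.0, mu))
```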
#### 3. Lagrange Points

Lagrange points (a.k.a. libration points) are equilibrium points in the rotating frame. They correspond to places where the pseudo-potential is locally flat. Lagrange showed that there are five such points for any mass ratio m2/m1. Two Lagrange points, L4 and L5, form an equilateral triangle with the two primary masses, one above the masses and the other below. The remaining three Lagrange points L1, L2, L3 lie along the line containing m1 and m2. Locating the collinear points requires finding the roots of a set of fifth-order polynomials. The MATLAB program lagrangePoints.m determines the solution numerically for given masses m1 and m2.
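Instead of solving the quintics symbolically, the three collinear equilibria can be found numerically by bracketing the sign changes of the on-axis derivative of the pseudo-potential. The Python sketch below is a stand-in for lagrangePoints.m (whose source is not shown on this page) and assumes the same conventions as the previous snippet.

```python
from scipy.optimize import brentq

def dUdx_on_axis(x, mu):
    """x-derivative of the pseudo-potential along the line joining m1 and m2
    (y = 0); the collinear Lagrange points are its zeros."""
    r1 = abs(x + mu)
    r2 = abs(x - (1.0 - mu))
    return -x + (1.0 - mu) * (x + mu) / r1**3 + mu * (x - (1.0 - mu)) / r2**3

def collinear_points(mu, eps=1e-6):
    """L1, L2, L3 found by bracketing the sign changes of dU/dx.

    m1 sits at x = -mu and m2 at x = 1 - mu; eps keeps the brackets away
    from the singularities at the two bodies.
    """
    x2 = 1.0 - mu  # position of m2
    L1 = brentq(dUdx_on_axis, -mu + eps, x2 - eps, args=(mu,))  # between bodies
    L2 = brentq(dUdx_on_axis, x2 + eps, 2.0, args=(mu,))        # beyond m2
    L3 = brentq(dUdx_on_axis, -2.0, -mu - eps, args=(mu,))      # beyond m1
    return L1, L2, L3

mu = 0.1 / 1.1
print(collinear_points(mu))
# L4 and L5 need no root-finding: they sit at (0.5 - mu, +-sqrt(3)/2)
# in these units, completing the equilateral triangles with m1 and m2.
```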
https://arxiv.org/abs/0806.3145
quant-ph

# Operator quantum error correction for continuous dynamics

Abstract: We study the conditions under which a subsystem code is correctable in the presence of noise that results from continuous dynamics. We consider the case of Markovian dynamics as well as the general case of Hamiltonian dynamics of the system and the environment, and derive necessary and sufficient conditions on the Lindbladian and system-environment Hamiltonian, respectively. For the case when the encoded information is correctable during an entire time interval, the conditions we obtain can be thought of as generalizations of the previously derived conditions for decoherence-free subsystems to the case where the subsystem is time dependent. As a special case, we consider conditions for unitary correctability. In the case of Hamiltonian evolution, the conditions for unitary correctability concern only the effect of the Hamiltonian on the system, whereas the conditions for general correctability concern the entire system-environment Hamiltonian. We also derive conditions on the Hamiltonian which depend on the initial state of the environment, as well as conditions for correctability at only a particular moment of time. We discuss possible implications of our results for approximate quantum error correction.

Comments: 11 pages, no figures, essentially the published version, includes a new section on correctability at only a particular moment of time
Subjects: Quantum Physics (quant-ph)
Journal reference: Phys. Rev. A 78, 022333 (2008)
DOI: 10.1103/PhysRevA.78.022333
Cite as: arXiv:0806.3145 [quant-ph] (or arXiv:0806.3145v2 [quant-ph] for this version)

## Submission history

From: Ognyan Oreshkov
[v1] Thu, 19 Jun 2008 08:31:48 UTC (17 KB)
[v2] Sat, 23 Aug 2008 12:33:56 UTC (19 KB)
https://brilliant.org/problems/keeps-sliding/
# Keeps sliding!

Algebra Level 3

A man wants to climb a 50 m tall pole. At first, he climbs 1 m but slides back down 1 m to the ground. Then he climbs 2 m, but slides down 1 m. Then he climbs 3 m, but slides down 1 m. Then he climbs 4 m, but slides down 1 m, and so on. If the man spends 10 seconds for each metre he climbs up and 5 seconds for each metre he slides down, how many seconds will it take him to reach the top of the pole?
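A direct simulation settles the arithmetic. The sketch below assumes the usual reading of the puzzle: in round k the man climbs k metres, and he stops the instant he touches the top, so there is no slide in the final round.

```python
def climb_time(height=50, up_rate=10, down_rate=5):
    """Simulate the climb: in round k he climbs k metres (stopping if he
    reaches the top mid-climb), then slides back 1 metre. Assumes he stops
    the moment he touches the top, so the final round has no slide."""
    position, seconds, k = 0, 0, 0
    while True:
        k += 1
        ascent = min(k, height - position)  # metres actually climbed this round
        position += ascent
        seconds += ascent * up_rate
        if position == height:
            return seconds
        position -= 1          # slide back one metre
        seconds += down_rate

print(climb_time())  # 650
```

Rounds 1 through 10 leave him at 45 m after 55 m of climbing and 10 m of sliding; climbing the final 5 m in round 11 then gives 60·10 + 10·5 = 650 seconds.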
https://www.arxiv-vanity.com/papers/1502.05976/
# On Supersymmetry, Boundary Actions and Brane Charges Lorenzo Di Pietro, Nizan Klinghoffer and Itamar Shamir Weizmann Institute of Science Rehovot 76100, Israel , , . ###### Abstract: Supersymmetry transformations change the Lagrangian into a total derivative . On manifolds with boundaries the total derivative term is an obstruction to preserving supersymmetry. Such total derivative terms can be canceled by a boundary action without specifying boundary conditions, but only for a subalgebra of supersymmetry. We study compensating boundary actions for supersymmetry in 4d, and show that they are determined independently of the details of the theory and of the boundary conditions. Two distinct classes of boundary actions exist, which correspond to preserving either a linear combination of supercharges of opposite chirality (called A-type) or supercharges of opposite chirality independently (B-type). The first option preserves a subalgebra isomorphic to in 3d, while the second preserves only a 2d subgroup of the Lorentz symmetry and a subalgebra isomorphic to in 2d. These subalgebras are in one to one correspondence with half-BPS objects: the A-type corresponds to domain walls while the B-type corresponds to strings. We show that integrating the full current algebra and taking into account boundary contributions leads to an energy-momentum tensor which contains the boundary terms. The boundary terms come from the domain wall and string currents in the two respective cases. preprint: WIS/02/15-FEB-DPPA ## 1 Introduction The problem of preserving supersymmetry on space-time manifolds with boundaries has a long history in the literature. Most notably it has been extensively studied in the context of open strings and D-branes (see for instance [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]). Much attention was also given to the study of supergravity in spaces with boundaries. This includes applications to the strong coupling limit of heterotic string theory [12], supersymmetric Randall-Sundrum models [13] and general study of supergravity in various dimensions [14, 15, 16, 17, 18]. In field theory, a classification of the half-BPS supersymmetric boundary conditions (BC) for Super Yang-Mills was obtained in [19], and the behavior of these BC under S-duality was analyzed in a subsequent paper [20]. With fewer supersymmetries, the general BC and their interplay with dualities are still largely unexplored. (See [21, 22, 23] for the 3d case.) Furthermore, recently there has been a great progress, initiated in [24, 25, 26], in understanding how supersymmetry can be preserved on curved manifolds. Advances in localization suggest that partition functions factorize on some curved backgrounds, and the factors have the interpretations of partition functions on manifold with boundaries (see for instance [27, 28, 29]). Motivated by this set of questions, in this paper we consider theories on a 4d space-time with a boundary. A supersymmetric Lagrangian transforms under supersymmetry into a total derivative . When there is a boundary, the variation of the action is a boundary term δ∫ML=∫∂MVn . (1.1) Here is space-time, is the normal to the boundary and . We will consider this as the basic obstruction to preserving supersymmetry. We will show how to construct boundary Lagrangians for which so that δ(∫ML+∫∂MΔ) =0 . (1.2) In this way we can construct actions which are invariant under supersymmetry, independently of the choice of BC. This idea was suggested by several authors [15, 30, 31, 32, 33, 34, 35, 36]. 
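Much of this paper's inline mathematics did not survive extraction. Where the surviving display equations make the content unambiguous, a reconstruction is possible; the following LaTeX block restates the chain of relations just described in words, with the symbols taken from the surviving equations (1.1), (1.2), (2.4) and (2.14). Treat it as a hedged transcription, not the authors' exact typesetting.

```latex
% Hedged reconstruction of the setup described in words above; symbols
% (\mathcal{L}, V^\mu, \Delta, K^{\hat\mu}) follow the surviving display
% equations (1.1), (1.2), (2.4) and (2.14).
\delta_{\mathrm{sym}}\mathcal{L} = \partial_\mu V^\mu , \qquad
\delta \int_M \mathcal{L} = \int_{\partial M} V^n , \qquad
\delta\!\left( \int_M \mathcal{L} + \int_{\partial M} \Delta \right) = 0
\quad \text{provided} \quad
V^n + \delta_{\mathrm{sym}}\Delta = \partial_{\hat{\mu}} K^{\hat{\mu}} .
```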
In this paper we explore this idea systematically for in 4d. Let us demonstrate how this works in an example, given by [35] (see also [36]). Consider a superpotential, which comes from a chiral multiplet with supersymmetry variations δw=√2ζψw , (1.3) δFw=√2i¯ζ¯σμ∂μψw . Clearly is a supersymmetric bulk action. We can use as a compensating boundary Lagrangian if we restrict to variations for which . This defines a subalgebra isomorphic to in 3d. It follows that ∫MFw+i∫∂Mw (1.4) is invariant under this subalgebra without using BC. We see in this example that is exact only with respect to a subalgebra of the supersymmetry transformations. This corresponds to the fact that we cannot preserve all the supersymmetries of the bulk theory. Importantly, we note that the boundary action follows only from the structure of the chiral multiplet. It is independent of the details of the theory and of the specific BC we choose. This universality of the boundary action is the first of the two central points of this paper. Focusing on 4d it is possible to classify all the ’s which solve for any supersymmetric Lagrangian. This leads to a classification of the subalgebras that can be preserved in this way. We obtain that they are isomorphic to one of the following 1. in 3d : by preserving a linear combination of the supercharges and of opposite chirality. This breaks the -symmetry. 2. in 2d : by preserving a single component of each chirality independently together with the -symmetry and breaking to 2d Lorentz symmetry on the boundary. 3. in 2d : the intersection of the two options above. Option (1.) corresponds to the solution of [35, 36], while (2.) is to the best of our knowledge novel. They are related by dimensional reduction to the familiar - and -type branes in in 2d [1, 2]. The third subalgebra is the intersection of the first two. It comes about if we introduce two terms in the boundary action, each preserving only one of the two subalgebras above. In each case, after the boundary action is introduced, one can have various choices for BC which are compatible with the preserved subalgebra. Interestingly, the conditions under which the boundary Lagrangians are well-defined operators are exactly the same as the criteria in [37] for the existence of certain supersymmetric multiplets of the energy-momentum tensor. For example, for Abelian gauge theories with a Fayet-Iliopoulos (FI) term the A-type boundary Lagrangian is not gauge invariant. In these theories, the Ferrara-Zumino (FZ) multiplet of the energy-momentum tensor is not defined. Similarly, a theory must have a preserved -symmetry in order to construct the B-type boundary action. Exactly in this case one can define the -multiplet of the energy-momentum tensor. Moreover, the subalgebras above are in one-to-one correspondence with those preserved by BPS domain walls (case 1.) strings (2.) or both (3.). In fact, we will see that there is a relation between the boundary Lagrangian and the brane charges appearing in the supersymmetry algebra. These in turn are related to the multiplets of the energy-momentum tensor [38]. However, it is important to note that the failure of a certain boundary action to exist does not immediately lead to obstructions on preserving the subalgebras above in presence of the boundary. This is because it may be possible to choose appropriate BC that make the operators in the boundary Lagrangian well-defined (we will give an example of how that can happen in the main body). 
It only represents an obstruction to preserve supersymmetry independently of the choice of BC. The relation between a nontrivial and brane charges in the algebra has been known for a long time [39]. These brane charges appear in the supersymmetry variation of the supercurrent as follows {Qα,¯S˙αν} =2σμα˙α(Tμν+Cμν)+… , (1.5) {Qα,Sβρ} =σμναβCρμν+… , where and are the supercurrents, is the string current, is the domain wall current and the ellipses are Schwinger terms. If is exact with respect to a certain subalgebra, then all the brane currents which do not respect this subalgebra must drop. For A-type (B-type) the string (domain wall, respectively) current must vanish up to Schwinger terms. The non-vanishing brane current contributes a boundary term when the current algebra is integrated. We will show that it gives exactly the correct boundary action.111This bears some resemblance to partial supersymmetry breaking in [40]. It would be interesting to explore the relation with anomaly inflow and generalized symmetries, see for example [41, 42]. The interpretation of the boundary actions as arising from brane charges together with the relation to the FZ- and -multiplets is the second key point of the paper. It is our hope that this understanding will facilitate the study background supergravity in theories defined on a manifold with a boundary. The remainder of the paper is organized as follows. In section 2 we will review the basics of symmetries in quantum field theories with a boundary and explain the idea of compensating boundary actions. In section 3 we focus on in 4d and explain how to construct boundary actions. In section 4 we show that these results can be interpreted in terms of the brane currents of supersymmetry. ## 2 On Symmetries and Boundaries In this section we review some basic aspects of theories with boundaries and symmetries. In particular we discuss compensating boundary Lagrangians, and emphasize that they give rise to improvements of certain symmetry currents. Consider a space-time with a boundary . (The convention for the metric is mostly plus.) The boundary is specified by an outward normal vector which is normalized to unit length. Only cases in which is a constant space-like vector are discussed in this paper. We use the index for coordinates in the bulk and for coordinates on the boundary.222We use as an index in the obvious way . We will also consider constant time slices, which we denote by . The index is designated for coordinates on and for its boundary (see figure). The theories we consider are specified by a 4d bulk action and possibly also a 3d boundary action . Taking the variation we get δ(S+S∂M) =∫M(∂L∂Φ−∂μ∂L∂(∂μΦ))δΦ+∫∂M(∂L∂(∂nΦ)δΦ+δL∂M) . (2.1) Here represents all the fields in the theory. The BC are relations of the form G(Φ,∂nΦ)|∂M=0 , (2.2) and stationarity of the action on the equations of motion requires that (∂L∂(∂nΦ)δΦ+δL∂M)∣∣ ∣∣G=0=∂^μ(…) . (2.3) For simplicity we are not including additional dynamical fields on the boundary. Symmetries are transformations of the fields which leave the action invariant. (We denote symmetry variations with to distinguish from generic variations .) When there are no boundaries, a symmetry is required to satisfy δsymL =∂μVμ . (2.4) By Noether’s theorem this implies the existence of a current333 We choose this rather unconventional sign of for consistency with the conventions of [37] and [38] for the supercurrent and the energy-momentum tensor. Jμ =−∂L∂(∂μΦ)δsymΦ+Vμ (2.5) satisfying on-shell. 
This implies that is time independent. (2.5) is said to be the canonical form of the current. Suppose now that there is a boundary, and a transformation satisfying (2.4) is given. In this context, may be called a bulk symmetry. Under what conditions does this lead to the implications of a symmetry? Let us mention two aspects of this problem. Firstly, the time-derivative of the charge contains a boundary term ∂0Q=−∫∂ΣJn . (2.6) This means that the conservation may fail because charge can leak through the boundary. The equation is usually emphasized as the basic requirement of the BC for preserving the symmetry. A different starting point is to demand that the BC (2.2) are invariant under the symmetry transformation, a criterion that we call symmetric BC. This amounts to imposing444For space-time symmetries such as supersymmetry which acts with derivatives on fields, we can only demand that (2.7) holds up to equations of motion. δsymG|G=0 =0 . (2.7) We will explain below how this leads to the existence of a conserved charge. This condition was discussed by several authors (see for instance [2, 5, 6, 8, 9]), mainly as a consistency requirement for supersymmetric BC. Secondly, in presence of a boundary, a bulk symmetry gives rise to a boundary term in the variation of the action δsym(S+S∂M)=∫∂MVn+∫∂MδsymL∂M . (2.8) This obstruction to the invariance of the action can be removed without invoking the BC (or rather a priori to fixing the BC), by choosing a boundary term which cancels the bulk variation.555Of course, if we assume that the BC are symmetric then it follows from the stationarity of the action (2.3) that the boundary term vanishes on-shell. Notably, these two aspects are related because the boundary term that cancels (2.8) appears in the stationarity condition (2.3), in a way which makes it consistent with symmetric BC. Before coming to this point, we proceed to show how symmetric BC lead to vanishing flux. ### 2.1 Constructing Conserved Charges It follows from (2.6) that the charge is conserved if and only if, for our choice of BC, the normal component of the current is a total derivative on , i.e. Jn|∂Σ=∂^aK^a (2.9) for some (recall ). Let us show how this is obtained from symmetric BC. Consider a bulk symmetry as in (2.4). Using the equations of motion we can write the variation of the bulk action as δsymS|on−shell=∫∂M∂L∂(∂nΦ)δsymΦ . (2.10) Note that this is valid only if belongs to the field variations which are allowed by the BC, i.e. we have to consider symmetric BC. Comparing this with (2.4) we get that Jn|∂M=∂^μK^μ (2.11) for some (recall ). This looks very similar to the condition (2.9), but not quite since it includes a time derivative and so does not vanish when integrated on . This is easily corrected. One modifies the definition of the charge by including a boundary term Q′ =∫ΣJ0+∫∂ΣK0 . (2.12) It is easy to check that . In fact, this can be understood as an improvement of the current. We can find an anti-symmetric tensor such that . One then constructs an improved current J′μ =Jμ+∂νLμν (2.13) for which and . Let us stress that it is the canonical current (2.5) which is improved here. ### 2.2 Compensating Boundary Lagrangians We now turn to a discussion of the boundary terms that can be added to make the action invariant under a symmetry transformation. This idea was first applied to supersymmetry a long time ago [30, 31]. More recently it was expounded by Belyaev and van Nieuwenhuizen [15, 35, 43]. (See also [21, 34, 36].) 
Suppose that there exists a such that Vn+δsymΔ=∂^μK^μ . (2.14) In other words is exact in the symmetry variation, up to a total derivative on the boundary. Then adding ensures that the action is invariant without reference to BC. We call a compensating boundary Lagrangian. Beyond the compensating term we have the freedom to add any symmetric boundary action, i.e. a term which is invariant by itself. (Note that is only defined up to such “closed” terms.) This leads to the general form S+∫∂MΔ+∫∂ML′∂M , (2.15) where . We can use the form of the action in (2.15) to determine explicitly the required improvement which corresponds to a conserved charge. Let us assume for simplicity that the boundary Lagrangian consists only of the compensating term . Note that equation (2.3) holds with if we have symmetric BC. Then, using (2.3) and (2.14), we find666We consider a case where the total derivative in (2.3) vanishes for simplicity. Jn|∂M =−∂L∂(∂nΦ)δsymΦ+Vn=∂^μK^μ . (2.16) If one introduces in addition a boundary term as in (2.15) then the effect is to change (2.16) by . Note that, with a compensating boundary Lagrangian, the stationarity condition is manifestly consistent with symmetric BC. This suggests that it is always sufficient to consider actions adhering to the form (2.15). One should keep in mind that it is possible to add a boundary term which vanishes trivially on the BC and is not symmetric. The claim is modulo such terms. The mismatch goes also in the other direction. Given an action of the form (2.15), it may be possible to choose BC which are not symmetric but still respect the stationarity condition of the action. Let us now summarise the discussion above by the following comments. We emphasize that in what follows we will not attempt at finding general solutions of , rather we will focus on equation (2.14). The point is that while there are many solutions of , the possible solutions of (2.14) are finite, each corresponding to a whole family of BC. Moreover, the ’s which solve (2.14) are universal in that they are determined independently of the theory and of the BC. ### 2.3 The Energy-Momentum Tensor Let us look closer at the case of translational symmetries, specified by a constant vector . Since the Lagrangian is a scalar, it follows that . This gives rise to the canonical energy-momentum tensor ϵνˆTνμ =ϵν(−∂L∂(∂μΦ)∂νΦ+δμνL) . (2.17) Here the index is the direction of the translation, and is the current index (i.e. ) with respect to which it is conserved. In this convention and . In general, the canonical energy-momentum tensor is not symmetric. We will use a hat to distinguish it from the symmetric energy-momentum tensor. Now suppose that there is a boundary. This explicitly breaks translations for which . For the remaining translations with we have that and thus they do not require compensating boundary actions. Suppose that the definition of the theory includes a boundary Lagrangian . If a translation by is preserved, we must have that . As explained above, this implies an improvement of . The precise form depends on , and is necessarily not symmetric (unlike the canonical current which can always be symmetrized). This is linked with the breakdown of Lorentz invariance ensued by the boundary. In the discussion above it is important to notice that the energy-momentum tensor that we are improving is the canonical one. This will be important for us because in the context of supersymmetry one usually considers multiplets in which the energy-momentum tensor is symmetric. 
We will have to take this discrepancy into account. ## 3 Boundary Actions in Supersymmetry In this section we shall begin our investigation of supersymmetry. The basic constraint on the supercharges that can be preserved in flat space with boundaries arises because supersymmetry transformations anti-commute to translations, some of which are inevitably broken by the boundary. This implies that only a subset of the supersymmetries can survive in presence of boundaries.777Note that on curved manifolds it is sometimes possible to introduce a boundary without breaking any of the supersymmetries preserved by the background. This is because in general the Killing vectors associated to the isometries that appear on the r.h.s. of the supersymmetry algebra on curved space do not form a basis of the tangent space (see for instance [25]). Therefore, one can introduce a boundary that is left invariant by all the isometries appearing in the algebra. Focusing on the case of supersymmetry in 4d, there are two maximal subalgebras that can be preserved, one isomorphic to in 3d and the other one to in 2d. These options correspond to the possible compensating boundary actions that one can construct. We shall refer to these two cases as A-type and B-type respectively. We find these names appropriate because they are related by dimension reduction to the BC in in 2d bearing the same name. Note that in the case of B-type the 3d Lorentz invariance on the boundary is broken by the boundary action. In supersymmetry in 4d there are two ways to build bulk actions. One can construct a supersymmetric Lagrangian as the -component of a real multiplet or as the -component (-component) of a chiral (anti-chiral) superfield. The basic idea is to use the other bosonic fields in the multiplet to construct compensating boundary terms. We will see below that this follows straightforwardly from the supersymmetry variations which relate the components of the multiplet. We will use the conventions of [44], except that we take the Killing spinors and to be commuting. ### 3.1 A-type Boundary Actions This is the solution given by Belyaev and van Nieuwenhuizen [35] and later elaborated by Bilal [36], which we now review. (A 2d analogue can be found in [3, 45].) In addition, we derive the improvement which follows from the -term action. It will play an important role in section 5. Let us begin by recalling the example which appeared in the introduction, i.e. the compensating term for the superpotential. The supersymmetric Lagrangian comes from the -component of a chiral multiplet . As explained before, it follows from the structure of the chiral multiplet that the boundary term is δ¯ζ∫MFw=√2i∫∂M¯ζ¯σnψw . (3.1) To obtain the compensating action we restrict to a subalgebra defined by the relation . If the theory has an -symmetry we can set (as assumed in the introduction for simplicity), otherwise it is a free parameter. Equivalently, we consider supercharges which take the form ˜Qα=1√2(e−iγ/2Qα+eiγ/2(σn¯Q)α) . (3.2) The supersymmetry transformations thus generated are denoted by . The supercharges satisfy the reality condition (σn˜Q†)α=˜Qα . (3.3) The bulk action supplemented by the boundary term is SF,A−type=∫MFw+ieiγ∫∂Mw . (3.4) One can verify that with no information assumed about the value of on the boundary. Note that the boundary action breaks -symmetry explicitly. 
The subalgebra we obtained is in fact isomorphic to supersymmetry in 3d {˜Qα,˜Qβ}=2(Γ^μ)αβP^μ , (3.5) where we defined the 3d gamma matrices by (recall that ), so that . Only momenta tangent to the boundary appear in this algebra. We are now ready to consider the -term action. As noted above, the -term resides in a real multiplet whose components are . They are related by the following transformations δC=iζχ−i¯ζ¯χ , (3.6) δχα=ζαM+(σμ¯ζ)α(ivμ+∂μC) , δ¯χ˙α=¯ζ˙α¯M+(¯σμζ)˙α(ivμ−∂μC) , δM=2¯ζ¯λ+2i¯ζ¯σμ∂μχ , δ¯M=2ζλ+2iζσμ∂μ¯χ , δvμ=iζσμ¯λ+i¯ζ¯σμλ+∂μ(ζχ+¯ζ¯χ) , δλα=iζαD+2(σμνζ)α∂μvν , δ¯λ˙α=−i¯ζ˙αD+2(¯σμν¯ζ)˙α∂μvν , δD=−ζσμ∂μ¯λ+¯ζ¯σμ∂μλ . The top component is a bulk supersymmetric Lagrangian. Restricting as above to the supercharges we arrive at the following formula for the -term action supplemented by boundary terms SD,A−type=∫MD+12∫∂M(e−iγM+eiγ¯M)+∫∂M∂nC . (3.7) It is important to note that, unlike the previous case, the boundary terms compensate the bulk variation up to a total derivative on the boundary. Explicitly, we have that Vn+˜δ(e−iγM+eiγ¯M2+∂nC)=i∂^μ(e−iγ¯ζ¯σ^μχ+eiγζσ^μ¯χ) . (3.8) The significance of this was explained in section 2; a specific improvement of the canonical current is required in order to get a conserved supercharge. Using again the relation , we find that the improvement of the canonical supercurrent is ζ˜Sμ→ζ˜Sμ−2i∂ν(ζσμνχ−¯ζ¯σμν¯χ) . (3.9) There is a relation between the -term boundary action and -term boundary action. This comes about because a -term Lagrangian can always be written as a superpotential up to boundary terms. More precisely, given a real superfield , we can define a chiral superfield , whose -component is and bottom component is . Using expression (3.4) for the -term action with the boundary term, combined with the complex conjugate, leads exactly to the action (3.7). ### 3.2 B-type Boundary Actions We have presented above the construction of compensating boundary actions which correspond to the 3d subalgebra. It is natural to ask if it is possible to preserve supercharges of opposite chirality in an independent way, thus also preserving the -symmetry. Naively the answer to this question appears to be negative: on the boundary we expect to find a supersymmetry algebra with 2 supercharges and the only candidate seems to be the 3d algebra, whose supercharges are real Majorana fermions and which has no -charge. However, this line of reasoning includes the assumption that 3d Lorentz invariance is maintained. Relaxing this assumption, we are allowed to preserve only one component of and one of . This is implemented by choosing Killing spinors and . Without loss of generality we will place along one of the axes, by choosing . Let us consider again the -term action. The variations are written as δ∫MD =−∫∂Mζσ2¯λ and¯δ∫MD =∫∂M¯ζ¯σ2λ . (3.10) To find compensating boundary actions we choose the Killing spinors and , which satisfy the identities and . Using (3.6) we then find that a B-type modified -term is given by (3.11) It is easy to check that this boundary action does not lead to a time derivative on the boundary, so no improvement of the canonical current is needed. The boundary action can also be written as a bulk term with . This makes manifest the invariance under shifts of by a total derivative.888One might wonder whether it is possible to preserve two supercharges corresponding to the two components of , while breaking (or viceversa). 
Indeed one can see from (3.6) that an additional possibility for the -term compensating boundary action exists (3.12) which exactly corresponds to preserving only . (Preserving would be achieved by changing the sign of the boundary term.) This subalgebra is not compatible with the requirement that , which is satisfied in Lorentzian signature, and therefore we reject this possibility. The clash with unitarity is reflected in the boundary action being not real. The boundary action explicitly breaks the three dimensional Lorentz invariance by picking a preferred direction on the boundary. We remain with 2d Lorentz invariance in the plane. Defining and the preserved subalgebra is {Q−,¯Q−}=2(P0+P3) . (3.13) This subalgebra is isomorphic to supersymmetry in the two dimensions spanned by and . Changing the sign of the boundary action in (3.11) changes the 2d chirality leading to instead. We now explain how to find B-type compensating boundary action for an -term bulk Lagrangian. To this end, we will see that it is necessary to invoke the existence of an -symmetry. Moreover, differently from all the previous cases, in this case the cancellation of the boundary term will rely on the equations of motion. (It is however independent of the choice of boundary conditions.) For definiteness, we focus on a (-symmetric) superpotential in a Wess-Zumino model. Consider then a set of chiral fields of -charges , a Kähler potential and a superpotential . The equations of motion are given by ¯D2∂aK=4∂aW . (3.14) The superpotential must have -charge 2 in order to preserve the -symmetry, i.e. it must satisfy the constraint ∑aRaΦa∂aW=2W . (3.15) Likewise, the -neutrality of the Kähler potential means that ∑aiRa(Φa∂aK−¯Φ¯a∂¯aK)=0 (3.16) (up to a Kähler transformation which we disregard for brevity). One can then define a real multiplet by V′=12∑aRaΦa∂aK . (3.17) Using the equations of motion one obtains , which leads to the relation . We saw in the study of the -term that the variation of is compensated by adding on the boundary; gives rise to an additional boundary term. Hence, we obtain the following form for the -term and the relative compensating boundary Lagrangian SF,B−type=∫M(Fw+¯F¯w)+∫∂M(∂nC′+v′1) . (3.18) To find the corresponding improvement it is useful to note that the fermionic fields of and are related by . This leads to Vn=√2i¯ζ¯σnψw=−¯δ(∂nC′+v′1)−2i¯ζ¯σnμ∂μ¯χ′ , (3.19) and similarly for the variation. We then find that the supercurrents should be improved according to ζSμ→ζSμ−2iζσμν∂νχ′ ,¯ζ¯Sμ→¯ζ¯Sμ+2i¯ζ¯σμν∂ν¯χ′ . (3.20) ### 3.3 Discussion We would now like to look closer at the boundary actions obtained above, focusing on the cases of a Wess-Zumino model and a gauge theory. This will expose an intriguing relation to the supersymmetry multiplets of the energy-momentum tensor. Requiring that the boundary actions are well-defined presents nontrivial constraints on the underlying field theory, which will be shown to be equivalent to the existence of those multiplets. For a Wess-Zumino model the -term Lagrangian comes from the real superfield . The A-type compensating boundary Lagrangian for this -term contains the term . This makes sense only if the Kähler potential is well-defined up to an additive constant. Equivalently, the Kähler connection −i2(∂aKdΦa−∂¯aK∂¯Φ¯a) (3.21) must be globally well-defined. Note that this is never the case if the target space is compact. Another example comes from the Fayet-Iliopoulos term (FI) in Abelian gauge theories. 
The real superfield associated to such a -term action is the elementary Abelian vector superfield. Its bottom component is shifted by an arbitrary real function under a gauge transformation, making the would-be compensating action not gauge invariant. On the contrary, the B-type boundary action (3.11) for the -term is not affected by any ambiguity in the examples that we have just considered. Both under Kähler transformations in the Wess-Zumino model, and under gauge transformations in the gauge theory with an FI term, the boundary Lagrangian changes into a total derivative on the boundary; hence the action is well-defined. On the other hand, we showed that the construction of the B-type boundary actions requires the existence of an -symmetry. (Note that without a superpotential there is always an -symmetry that assigns charge 0 to all the chiral superfields.) When the boundary Lagrangian does not exist in some theory, it is not possible to obtain a total action that is invariant under the associated subalgebra independently of the BC. Let us stress that this does not mean that the subalgebra cannot be preserved in this theory. This is because we also need to specify some BC to fully define the theory, and it may be possible that the boundary operator becomes well-defined (or vanish altogether) when evaluated on the BC. An example will help clarify this issue. Consider a single chiral superfield , whose components we denote by , with a canonical Kähler potential . Suppose we identify , i.e. we take the target space to be cylinder. In this case the Kähler form (3.21) is not globally well-defined, and the term in the boundary action is not a well-defined operator. Nevertheless, consider the following BC ϕ =¯ϕ , (3.22) ∂nϕ =−∂n¯ϕ , ψ =σn¯ψ . Note that these BC respect the identification on the target space. A short computation reveals that the BC are symmetric with respect to the subalgebra given by the relation , i.e. an A-type subalgebra. Consistently, note that the boundary action vanishes identically when evaluated on (3.22). This means that given the BC (3.22) the boundary term is not required for the stationarity of the action, and hence it is redundant. Bearing in mind this caveat, we note that the conditions that allow to define the A-type and B-type boundary actions are in one-to-one correspondence with those found by [37] for the existence of the FZ- and -multiplets, respectively. These are supersymmetric multiplets of operators that include the energy-momentum tensor. This relation will be elucidated in the next section by a calculation of the current algebra in Wess-Zumino models. In preparation for the next section, let us discuss some relevant aspects of the supercurrent multiplets and their expression in Wess-Zumino models. The basic fact is that both the FZ-multiplet and the -multiplet can only be defined in a restricted class of 4d supersymmetric field theories. A third, larger multiplet which exists in general was introduced in [37] and dubbed -multiplet. The FZ-multiplet and the -multiplet are naturally embedded into the -multiplet. When either of the two shorter multiplets is defined, it can be obtained from the -multiplet via an improvement transformation (that sets to zero some of its components). A short review of the -multiplet and its improvements is given in appendix A. In Wess-Zumino models, given a Kähler potential and a superpotential the -multiplet is given by Sα˙α=2∂a∂¯aKDαΦa¯D˙α¯Φ¯a , (3.23) χα=¯D2DαK , (3.24) Yα=4DαW . 
Using the improvement (A.13) we can set if we choose , and we reduce to the FZ-multiplet. This is an allowed improvement only if the Kähler potential is well-defined up to an additive constant. On the other hand, to obtain the -multiplet we must demand that the theory has an -symmetry. Similarly to the comments in the previous section, when this is the case is a real multiplet and the equations of motion imply that . The improvement by sets and gives the -multiplet, whose bottom component is the conserved -current. It is interesting to compare the improvements of the supercurrent which are implied by the above choices of with the improvements (3.9) and (3.20) coming from the compensating boundary actions. Consider first the -multiplet, compared to the improvement that results from the B-type superpotential. Looking at the -component of , we see that the improvement which follows from the -multiplet turns out to be twice the B-type improvement (3.20). We will have to wait until the next section to see how this discrepancy is resolved. It will turn out that the -multiplet formulas have to be modified due to boundary effects. Now consider the case with well-defined FZ-multiplet and compare to the improvement for the A-type boundary action. We obtained the A-type compensating boundary action for the -term by first rewriting the -term as an integral over only half superspace, and then applying the result for the -term. The resulting -term for a Wess-Zumino model comes from the chiral superfield . Therefore, we have to improve in such a way that Yα=4DαW→Dα(4W−12¯D2K) . (3.25) This correspond to an improvement with . (Note that this is different from the improvement that sets to .) Comparing to the improvement that was obtained from the A-type boundary action (3.9), we find again the same discrepancy by a factor of 2. ## 4 Boundary Actions and Brane Charges In this section we will show that the compensating boundary actions can be interpreted in terms of brane charges of the supersymmetry algebra in 4d. From this point of view, a supersymmetric boundary is analogous to a BPS extended object. The algebra admits two kinds of half-BPS extended objects, namely domain walls and strings, (and quarter-BPS configurations obtained by combining the previous two, i.e. domain wall junctions) [46, 47, 48, 49]. As we will see, they correspond to A-type and B-type compensating boundary actions, respectively. In order to give a self-contained presentation, we will start by briefly reviewing brane charges and BPS objects (see [50] for more details). ### 4.1 Brane Charges and BPS Branes in N=1 in 4d The most general supersymmetry algebra in 4d which takes into account brane charges is {Qα,¯Q˙α} =2σμα˙α(Pμ+Zμ), (4.1) {Qα,Qβ} =σμναβZμν . (4.2) The structure of the brane charges and is fixed by Lorentz invariance. The real vector is a string charge and the complex two-form a domain wall charge. The corresponding conserved currents are a two-form current and a three-form current which are related to the charges by Zμ=∫Σd3xCμ0 , (4.3) Zμν=∫Σd3xCμν0 . (4.4) In flat space without a boundary the corresponding charge will vanish in any configuration with fields approaching zero sufficiently fast at infinity. This is how one recovers the usual supersymmetry algebra. States carrying brane charges can sometimes be annihilated by a subalgebra of the initial 4d supersymmetry algebra. In this case the brane is called BPS. 
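Equations (4.1) and (4.2) above arrive with their indices flattened; a hedged LaTeX transcription, assuming standard Weyl-spinor index placement, is given below. Here, as the text states, the real vector $Z_\mu$ is the string charge and the complex two-form $Z_{\mu\nu}$ is the domain wall charge.

```latex
% Hedged transcription of the brane-charge-extended algebra (4.1)-(4.2),
% with standard Weyl-spinor index placement assumed.
\{ Q_\alpha , \bar{Q}_{\dot{\alpha}} \}
  = 2\, \sigma^\mu_{\alpha\dot{\alpha}} \left( P_\mu + Z_\mu \right) ,
\qquad
\{ Q_\alpha , Q_\beta \}
  = \sigma^{\mu\nu}_{\alpha\beta}\, Z_{\mu\nu} .
```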
For instance, for a domain wall with normal vector , we can go to the rest frame in which , being the energy of the configuration. The brane charge in this frame can be written as Zμν=2iZϵ0μνρnρ , (4.5) where is a complex number. and are formally infinite, but the energy and charge per unit volume are finite. Consider the supercharges ˜Qα=1√2(e−iγ/2Qα+eiγ/2(σn¯Q)α) (4.6) that appeared in (3.2). Computing their anticommutators in the rest frame, we find {~Qα,~Qβ}=−Γ0αβ(2E−e−iγZ−eiγZ∗)=−2Γ0αβ(E−|Z|) . (4.7) In the last equality we fixed to cancel the phase of . The reality condition of in (3.3) implies the BPS bound . When , the supercharges annihilate the state of the domain wall, and the configuration is half-BPS. Note that, if we consider fluctuations around the state of the domain wall, it is natural to consider a shifted momentum P′^μ=P^μ+|Z|η^μ0. (4.8) The supercharges then generate an algebra isomorphic to in 3d {~Qα,~Qβ}=2Γ^μαβP′^μ . (4.9) Analogous statements hold for the BPS string associated with the charge . In that case we have a real two-form normal to the two-dimensional world-sheet. In the rest frame the charge can be written as Zμ=−12Zϵ0μνρnνρ (4.10) for a real constant , and we fixed the normalization so that and . We can introduce the chiral projectors (P±) βα =12(δ βα∓i(σμν) βαnμν), (4.11) (P†±)˙α ˙β =12(δ˙α ˙β∓i(¯σμν)˙α ˙βnμν) . (4.12) The anticommutator of the projected supercharges (, ) is {Q±,¯Q±}=2(E∓Z) . (4.13) If we take , depending on the sign of the string will be invariant under the supercharges or . Shifting the momentum , the preserved supercharges will generate an algebra isomorphic to in 2d (or for the opposite sign of ). If both domain walls and strings are present, at most a superalgebra isomorphic to the (or ) in 2d can be preserved, and the corresponding state is quarter-BPS. As we have already stressed, the algebras of the supercharges which are symmetries of the BPS domain wall, or the BPS string, are exactly the same algebras which are preserved by the A-type compensating boundary action, or the B-type, respectively. Indeed, we will see in the following subsections that we can interpret such boundary Lagrangians as brane currents supported on the boundary. Taking this point of view, the shift in the momentum reflects the addition of a new term proportional to to the action (recall that, as discussed in section 2, adding the boundary Lagrangian affects the energy-momentum tensor.) This is the boundary term necessary to obtain an action which is invariant under the preserved algebra. Therefore, this approach will lead to an independent computation of the compensating boundary action, based on the algebra of charges rather than on the variation of the action. ### 4.2 Current Algebra of Supersymmetry and Boundaries Consider the full current algebra of supersymmetry – the equal time commutation relations of the supercurrents. Schematically, it takes the form {¯S0˙α(t,y),Sμα(t,x)}=2σνα˙αTνμδ(3)(y−x)+… , (4.14) {S0α(t,y),Sμβ(t,x)}=0+… , (4.15) where the ellipses represent total derivative terms, usually referred to as Schwinger terms. It is not known in general how to fix the form of all these terms. Note that when there are no boundaries and no extended objects this equation can be straightforwardly integrated to yield the 4d supersymmetry algebra . Integrating the anticommutators (4.14) only over the coordinate on a fixed time slice, one obtains the anticommutator of the supercharge with the supercurrent operator, known as the half-integrated algebra. 
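The rest-frame anticommutator (4.7) above encodes the BPS bound mentioned in the text. A hedged restatement of it, and of the bound it implies, is:

```latex
% Hedged restatement of (4.7): the reality condition (3.3) makes the left
% side positive semidefinite, so E >= |Z|, with equality for the half-BPS
% domain wall annihilated by the preserved supercharges.
\{ \tilde{Q}_\alpha , \tilde{Q}_\beta \}
  = -2\, \Gamma^0_{\alpha\beta} \left( E - |Z| \right)
\quad \Longrightarrow \quad
E \;\geq\; |Z| .
```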
When there is no boundary, the result of integrating (4.14) once is universal for any theory in 4d [37]. The following half-integrated current algebra is obtained {¯Q˙α,Sαμ}=σνα˙α(2Tνμ+2Cνμ−12ϵνμρσ∂ρjσ+i∂νjμ−iηνμ∂ρjρ) , (4.16) {Qβ,Sαρ}=σμναβCρμν . Here and are respectively the string and domain wall currents introduced above. Besides the brane currents, an additional operator appears in the algebra. The operators in (4.16) form the -multiplet (reviewed in appendix A). Let us emphasize that the energy-momentum tensor in (4.16) is symmetric. As explained in the appendix A, improvements of the -multiplet are parametrized by a real superfield U=u+θη+¯θ¯η+θ2N+¯θ2¯N−θσμ¯θVμ+… . (4.17) Here we follow the conventions of [38]. This leads to improvements of the energy-momentum tensor and the supercurrent given by Sαμ→Sαμ+∂ν(2σμνη)α , (4.18) Tμν→Tμν+12(∂μ∂ν−ημν∂2)u . Other operators in the -multiplet transform as jμ→jμ+Vμ , (4.19) Cνμ→Cνμ+34ϵνμρσ∂ρVσ , (4.20) Cνμρ→Cνμρ+2ϵνμρσ∂σN . Note that the improvement preserves the symmetry of the energy-momentum tensor. Under such improvements the half-integrated current algebra (4.16) is covariant – it retains its form when the improvements form a multiplet. In some cases, the improvements can be used to set to zero some of the Schwinger terms. If the brane currents can be improved to , the multiplet is reduced to a shorter one. In particular, when the string current is set to 0, the shortened multiplet is the FZ-multiplet, while when the domain wall current is set to 0 we obtain the -multiplet. Consider now the current algebra for theories with a boundary. We wish to integrate (4.14) carefully taking into account all the total derivative terms. This will introduce contributions in the integrated algebra of supercharges which have the structure of the brane charges in (4.1) and (4.2). In analogy with the BPS states, only a subalgebra which is blind to the brane charges can be preserved. Unlike the case with no boundary, the charges are now sensitive to improvements. We must choose the improvements in such a way that the resulting charges are time independent. There are several subtleties in realising the idea just presented. Naively, one could just integrate (4.16), with the correct improvement taken into account. However, this does not work for the following reason. The problem is that to obtain (4.16) from (4.14), one needs to integrate some total derivative terms in (4.14). We could set their contribution to zero by choosing appropriate BC. However this approach does not allow us to obtain information about the boundary terms. We wish to remain agnostic about a specific choice of BC and keep track of all the boundary contributions. The following simple example will help explain how boundary terms appear in the algebra, and their relation to the boundary Lagrangian. Consider a real scalar with a free Lagrangian. The canonical Hamiltonian (density) is given by T00=12(∂0φ)2+12(∂aφ)2 (4.21) and the canonical commutation relations by i[∂0φ(x),φ(y)]=δ(3)(x−y) . (4.22) If we pick time-translationally invariant BC, generates the symmetry of time translation. Indeed one readily checks that . However, consider also the action of on the canonical momenta. Using the equations of motion we obtain i[H,∂0φ(x)] =∂20φ(x)−δ(xn)∂nφ(x) . (4.23) Note the additional term localized on the boundary . Considering the canonical relation i[∂0φ,⋅]=δδφ(⋅) , (4.24) we recognize that this boundary term is analogous to the one coming from the variation of the action. 
Similarly to the latter, also the boundary term in (4.23) must be set to zero by the BC. In this case it is clear that Neumann BC are implied. We can have Dirichlet BC by adding a boundary term to $H$, which leads to

$$i\,[H, \partial_0\varphi(x)] = \partial_0^2\varphi(x) + \partial_n\delta(x_n)\,\varphi(x)\,. \qquad (4.25)$$

Recall that the boundary term affects the Hamiltonian via the improvement of the energy-momentum tensor discussed in section 2. We can see from this simple example how the boundary terms in commutation relations with the Hamiltonian are related to boundary terms in the Lagrangian. This will continue to be true for supersymmetry, albeit in a more convoluted way.

Let us address an objection which might be prompted by the discussion above. We have been using the naive canonical commutation relations, without taking into account how they are modified by the BC. Alternatively, one should first decide on BC and then formulate canonical commutation relations which are consistent with this choice. However, as mentioned before, doing that will prevent us from keeping track of the boundary terms. We will bypass this problem in the following way. We consider a theory which is defined in infinite flat space, such that the usual commutation relations hold everywhere. Now we focus our attention on a domain inside the infinite time slice. Charges formed by integration on this restricted domain are of course not guaranteed to be conserved, but there is no problem in computing their commutation relations. In this way we can now use the naive commutation relations and keep track of total derivative terms.

### 4.3 Wess-Zumino Model

In this section we consider a Wess-Zumino model with canonical Kähler potential and generic superpotential. We will compute the current algebra explicitly starting from the canonical commutation relations, and use it to show the relation between the brane charge and the boundary action. We work in this setup in order to compute the boundary terms explicitly. Consider a chiral superfield $\Phi$. The Kähler potential is
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9709676504135132, "perplexity": 415.6207502893798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710789.95/warc/CC-MAIN-20221201021257-20221201051257-00189.warc.gz"}
https://www.zora.uzh.ch/id/eprint/45616/
# Measurement of the charge ratio of atmospheric muons with the CMS detector

CMS Collaboration; Khachatryan, V; Amsler, C; De Visscher, S; Chiochia, V; et al (2010). Measurement of the charge ratio of atmospheric muons with the CMS detector. Physics Letters B, 692(2):83-104.

## Abstract

We present a measurement of the ratio of positive to negative muon fluxes from cosmic ray interactions in the atmosphere, using data collected by the CMS detector both at ground level and in the underground experimental cavern at the CERN LHC. Muons were detected in the momentum range from 5 GeV/c to 1 TeV/c. The surface flux ratio is measured to be 1.2766 ± 0.0032 (stat.) ± 0.0032 (syst.), independent of the muon momentum, below 100 GeV/c. This is the most precise measurement to date. At higher momenta the data are consistent with an increase of the charge ratio, in agreement with cosmic ray shower models and compatible with previous measurements by deep-underground experiments.
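If a single combined uncertainty on the quoted ratio is wanted, the conventional move is to add the two errors in quadrature (this assumes the statistical and systematic uncertainties are uncorrelated, which the abstract does not state):

```python
import math

r, stat, syst = 1.2766, 0.0032, 0.0032
total = math.hypot(stat, syst)  # quadrature sum of uncorrelated errors
print(f"R = {r:.4f} +/- {total:.4f}")  # R = 1.2766 +/- 0.0045
```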
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9871446490287781, "perplexity": 3582.1993876773727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141183514.25/warc/CC-MAIN-20201125154647-20201125184647-00372.warc.gz"}
https://homework.cpm.org/category/CCI_CT/textbook/pc/chapter/4/lesson/4.2.3/problem/4-100
### Home > PC > Chapter 4 > Lesson 4.2.3 > Problem 4-100

4-100. Verify that $\cos θ + \sin θ \tan θ = \sec θ$ by simplifying the left side and using the Fundamental Pythagorean Identity.

Hint: Simplify using the Fundamental Pythagorean Identity.
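One way the verification can go (a standard derivation; this is not CPM's official worked solution):

$$\cos\theta + \sin\theta\,\tan\theta = \cos\theta + \frac{\sin^2\theta}{\cos\theta} = \frac{\cos^2\theta + \sin^2\theta}{\cos\theta} = \frac{1}{\cos\theta} = \sec\theta\,.$$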
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.927590012550354, "perplexity": 2856.5158256728278}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00157.warc.gz"}
http://toc.cs.uchicago.edu/articles/v008a020/
Volume 8 (2012) Article 20 pp. 429-460 Special Issue in Honor of Rajeev Motwani

Budget-Constrained Auctions with Heterogeneous Items

Published: September 4, 2012

Keywords: mechanism design, approximation algorithms

ACM Classification: G.1.6, J.4 AMS Classification: 68W25

Abstract: We present the first approximation algorithms for designing revenue-optimal incentive-compatible mechanisms in the following setting: There are multiple (heterogeneous) items, and bidders have arbitrary demand and budget constraints (and additive valuations). Furthermore, the type of a bidder (which specifies her valuations for each item) is private knowledge, and the types of different bidders are drawn from publicly known mutually independent distributions. Our mechanisms are surprisingly simple. First, we assume that the type of each bidder is drawn from a discrete distribution with polynomially bounded support size. This restriction on the type-distribution, however, allows the random variables corresponding to a bidder's valuations for different items to be arbitrarily correlated. In this model, we describe a sequential all-pay mechanism that is truthful in expectation and Bayesian incentive compatible. The outcome of our all-pay mechanism can be computed in polynomial time, and its revenue is a $4$-approximation to the revenue of the optimal truthful-in-expectation Bayesian incentive-compatible mechanism. Next, we assume that the valuations of each bidder for different items are drawn from mutually independent discrete distributions satisfying the monotone hazard-rate condition. In this model, we present a sequential posted-price mechanism that is universally truthful and incentive compatible in dominant strategies. The outcome of the mechanism is computable in polynomial time, and its revenue is a $O(1)$-approximation to the revenue of the optimal truthful-in-expectation Bayesian incentive-compatible mechanism. If the monotone hazard-rate condition is removed, then we show a logarithmic approximation, and we complete the picture by proving that no sequential posted-price scheme can achieve a sub-logarithmic approximation. Finally, if the distributions are regular, and if the space of mechanisms is restricted to sequential posted-price schemes, then we show that there is a $O(1)$-approximation within this space. Our results are based on formulating novel LP relaxations for these problems, and developing generic rounding schemes from first principles. A preliminary version of this paper appeared in STOC 2010.
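As a toy illustration of the sequential posted-price idea described in the abstract (a generic sketch, not the paper's actual mechanism; all prices, values, and budgets below are invented):

```python
def sequential_posted_price(bidders, prices):
    """Offer items at posted prices to bidders in sequence.

    bidders: list of dicts with 'values' (per-item valuations) and 'budget'.
    prices:  posted price per item. The scheme is truthful in dominant
    strategies: a bidder simply buys items worth at least their price,
    budget permitting, regardless of what others do.
    """
    revenue, sold = 0.0, set()
    for b in bidders:
        for item, v in b["values"].items():
            p = prices[item]
            if item not in sold and v >= p and b["budget"] >= p:
                sold.add(item)
                b["budget"] -= p
                revenue += p
    return revenue

bidders = [{"values": {"a": 3.0, "b": 1.0}, "budget": 2.5},
           {"values": {"a": 2.0, "b": 2.0}, "budget": 4.0}]
print(sequential_posted_price(bidders, {"a": 2.0, "b": 1.5}))  # 3.5
```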
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.911803126335144, "perplexity": 1434.5509920917923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710691.77/warc/CC-MAIN-20221129100233-20221129130233-00329.warc.gz"}
https://quant.stackexchange.com/questions/24602/calculate-total-risk
# Calculate total risk

I have a question regarding how the risk is calculated, if I have only the returns. I think the risk premium (rp) is just the average of the returns and the Sharpe ratio is the risk premium divided by the total risk. Let me know if I am mistaken. But how do they calculate the risk? Thanks in advance!

PS: The exercise is in the attached pictures.

Notice that the problem does not give you a risk-free investment, so the computation of the Sharpe ratio becomes: $$SR = \frac{E(r)}{\sqrt{VAR(r)}}$$

Year 1:

$r_{p} = E(r) = \frac{1}{n}\sum_{i = 1}^{n}{r_{i}} = \frac{1}{4}(-2 + 6 - 2 + 6) = \frac{1}{4}(8) = 2$

$\sigma(r_{p}) = \sqrt{VAR(r)} = \sqrt{\frac{1}{n}\sum_{i = 1}^{n}{(r_{i} - r_{p})^{2}}} = \sqrt{\frac{1}{4}((-4)^{2} + 4^{2} + (-4)^{2} + 4^{2})} = \sqrt{\frac{1}{4}(16 + 16 + 16 + 16)} = \sqrt{\frac{1}{4}(64)} = \sqrt{16} = 4$

$SR = \frac{2}{4} = 0.5$

Year 2:

$r_{p} = E(r) = \frac{1}{n}\sum_{i = 1}^{n}{r_{i}} = \frac{1}{4}(-6 + 18 - 6 + 18) = \frac{1}{4}(24) = 6$

$\sigma(r_{p}) = \sqrt{VAR(r)} = \sqrt{\frac{1}{n}\sum_{i = 1}^{n}{(r_{i} - r_{p})^{2}}} = \sqrt{\frac{1}{4}((-12)^{2} + 12^{2} + (-12)^{2} + 12^{2})} = \sqrt{\frac{1}{4}(144 + 144 + 144 + 144)} = \sqrt{\frac{1}{4}(576)} = \sqrt{144} = 12$

$SR = \frac{6}{12} = 0.5$

Year 1+2:

$r_{p} = E(r) = \frac{1}{n}\sum_{i = 1}^{n}{r_{i}} = \frac{1}{8}(-2 + 6 - 2 + 6 - 6 + 18 - 6 + 18) = \frac{1}{8}(32) = 4$

$\sigma(r_{p}) = \sqrt{VAR(r)} = \sqrt{\frac{1}{n}\sum_{i = 1}^{n}{(r_{i} - r_{p})^{2}}} = \sqrt{\frac{1}{8}((-6)^{2} + 2^{2} + (-6)^{2} + 2^{2} + (-10)^{2} + 14^{2} + (-10)^{2} + 14^{2})} = \sqrt{\frac{1}{8}(36 + 4 + 36 + 4 + 100 + 196 + 100 + 196)} = \sqrt{\frac{1}{8}(672)} = \sqrt{84} = 9.165$

$SR = \frac{4}{9.165} = 0.436$
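A quick numerical check of the Year 1+2 numbers above (a sketch; note that the exercise uses the population standard deviation, i.e. `ddof=0`):

```python
import numpy as np

returns = np.array([-2, 6, -2, 6, -6, 18, -6, 18], dtype=float)
rp = returns.mean()          # risk premium (no risk-free rate is given)
risk = returns.std(ddof=0)   # "total risk": population standard deviation
print(rp, risk, rp / risk)   # 4.0  9.1651...  0.4364...
```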
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8732335567474365, "perplexity": 985.5174511616541}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574532.44/warc/CC-MAIN-20190921145904-20190921171904-00502.warc.gz"}
https://publications.hse.ru/en/articles/?mg=57961902
Of all publications in the section: 28

Article Maciel J., Bogomolov F. A. Central European Journal of Mathematics. 2009. No. 7:1. P. 61-65.

Article Tikhomirov A. S., Markushevich D., Trautmann G. Central European Journal of Mathematics. 2012. Vol. 19. No. 4. P. 1331-1355. We announce some results on compactifying moduli spaces of rank 2 vector bundles on surfaces by spaces of vector bundles on trees of surfaces. This is thought of as an algebraic counterpart of the so-called bubbling of vector bundles and connections in differential geometry. The new moduli spaces are algebraic spaces arising as quotients by group actions, according to a result of Kollár. As an example, the compactification of the space of stable rank 2 vector bundles with Chern classes $c_1=0, c_2=2$ on the projective plane is studied in more detail. Proofs are only indicated and will appear in separate papers.

Article Bogomolov F. A., Tschinkel Y. Central European Journal of Mathematics. 2009. No. 7:3. P. 382-386.

Article Bogomolov F. A., Rovinsky M. Central European Journal of Mathematics. 2013. Vol. 11. No. 1. P. 17-26. Let Ψ be the projectivization (i.e., the set of one-dimensional vector subspaces) of a vector space of dimension ≥ 3 over a field. Let H be a closed (in the pointwise convergence topology) subgroup of the permutation group $G_\Psi$ of the set Ψ. Suppose that H contains the projective group and an arbitrary self-bijection of Ψ transforming a triple of collinear points to a non-collinear triple. It is well-known from [9] that if Ψ is finite then H contains the alternating subgroup $A_\Psi$ of $G_\Psi$. We show in Theorem 3.1 below that $H = G_\Psi$ if Ψ is infinite.

Article Markushevich D., Tikhomirov A. S., Verbitsky M. Central European Journal of Mathematics. 2012. Vol. 10. No. 4. P. 1185-1187.

Article Tikhomirov A. S., Markushevich D., Verbitsky M. Central European Journal of Mathematics. 2012. Vol. 10. No. 4. P. 1185-1187. In this preface we give a short description of the current issue of the Central European Journal of Mathematics containing 22 papers which spin around the topics of the conference “Instantons in complex geometry”, held on March 14–18, 2011 in Moscow. The main goal of the conference was to bring together specialists in complex algebraic and analytic geometries whose research interests belong to this composite area between gauge theory, moduli spaces, derived categories, vector bundles and coherent sheaves. Besides the most relevant contributions to the conference, the issue contains miscellaneous articles by other authors that fit by subject and spirit.

Article Katzarkov L., Yotov M., Orlov D. O. et al. Central European Journal of Mathematics. 2009. No. 7:4.

Article Przyjalkowski V. V. Central European Journal of Mathematics. 2011. Vol. 9. No. 5. P. 972-977.

Article Verbitsky M. Central European Journal of Mathematics. 2011. Vol. 9. No. 3. P. 535-557.

Article Kuznetsov A. G. Central European Journal of Mathematics. 2012. Vol. 10. No. 4. P. 1198-1231. We introduce the notion of an instanton bundle on a Fano threefold of index 2. For such bundles we give an analogue of a monadic description and discuss the curve of jumping lines. The cases of threefolds of degree 5 and 4 are considered in greater detail.

Article Bogomolov F. A., Böhning C., Graf von Bothmer H. Central European Journal of Mathematics. 2012. Vol. 10. No. 2. P. 466-520.
Let $G$ be one of the groups $SL_n(\mathbb{C})$, $Sp_{2n}(\mathbb{C})$, $SO_m(\mathbb{C})$, $O_m(\mathbb{C})$, or $G_2$. For a generically free $G$-representation $V$, we say that $N$ is a level of stable rationality for $V/G$ if $V/G \times \mathbb{P}^N$ is rational. In this paper we improve known bounds for the levels of stable rationality for the quotients $V/G$. In particular, their growth as functions of the rank of the group is linear for $G$ being one of the classical groups.

Article Tikhomirov A. S., Bruzzo U., Markushevich D. Central European Journal of Mathematics. 2012. Vol. 10. No. 4. P. 1232-1245. Symplectic instanton vector bundles on the projective space $\mathbb{P}^3$ constitute a natural generalization of mathematical instantons of rank 2. We study the moduli space $I_{n;r}$ of rank-$2r$ symplectic instanton vector bundles on $\mathbb{P}^3$ with $r\ge2$ and second Chern class $n\ge r$, $n\equiv r \pmod 2$. We introduce the notion of tame symplectic instantons by excluding a kind of pathological monads and show that the locus $I_{n;r}^*$ of tame symplectic instantons is irreducible and has the expected dimension, equal to $4n(r+1)-r(2r+1)$.

Article Tschinkel Y., Bogomolov F. A. Central European Journal of Mathematics. 2008. No. 6:3. P. 343-350.

Article Drozd Y., Gavran V. Central European Journal of Mathematics. 2014. Vol. 12. No. 5. P. 675-687. We generalize the results of Kahn about a correspondence between Cohen-Macaulay modules and vector bundles to non-commutative surface singularities. As an application, we give examples of non-commutative surface singularities which are not Cohen-Macaulay finite, but are Cohen-Macaulay tame.

Article Arzhantsev I., Bazhov I. Central European Journal of Mathematics. 2013. Vol. 11. No. 10. P. 1713-1724. Let $X$ be an affine toric variety. The total coordinates on $X$ provide a canonical presentation $\overline{X} \to X$ of $X$ as a quotient of a vector space $\overline{X}$ by a linear action of a quasitorus. We prove that the orbits of the connected component of the automorphism group $\mathrm{Aut}(X)$ on $X$ coincide with the Luna strata defined by the canonical quotient presentation.

Article Bogomolov F. A., Prokhorov Y. Central European Journal of Mathematics. 2013. Vol. 11. No. 12. P. 2099-2105. We discuss the problem of stable conjugacy of finite subgroups of Cremona groups. We show that the group $H^1(G,\mathrm{Pic}(X))$ is a stable birational invariant and compute this group in some cases.

Article Bogomolov F. A., Kulikov V. S. Central European Journal of Mathematics. 2012. Vol. 10. No. 2. P. 521-529. We show that the diffeomorphic type of the complement to a line arrangement in a complex projective plane $\mathbb{P}^2$ depends only on the graph of line intersections if no line in the arrangement contains more than two points in which at least two lines intersect. This result also holds for some special arrangements which do not satisfy this property. However it is not true in general, see [Rybnikov G., On the fundamental group of the complement of a complex hyperplane arrangement, Funct.

Article Bogomolov F. A., Kulikov V. S. Central European Journal of Mathematics. 2013. Vol. 11. No. 2. P. 254-263. The article contains a new proof that the Hilbert scheme of irreducible surfaces of degree $m$ in $\mathbb{P}^{m+1}$ is irreducible except $m = 4$. In the case $m = 4$ the Hilbert scheme consists of two irreducible components explicitly described in the article. The main idea of our approach is to use the proof of the Chisini conjecture [Kulikov Vik. S., On Chisini's conjecture II, Izv.
Math., 2008, 72(5), 901-913 (in Russian)] for coverings of projective plane branched in a special class of rational curves.

Article Bogomolov F. A., Zarhin Y. Central European Journal of Mathematics. 2009. No. 7:2. P. 206-213. Let $\Bbbk$ be a field of characteristic zero and $G$ be a finite group of automorphisms of projective plane over $\Bbbk$. Castelnuovo's criterion implies that the quotient of projective plane by $G$ is rational if the field $\Bbbk$ is algebraically closed. In this paper we prove that $\mathbb{P}^2_{\Bbbk} / G$ is rational for an arbitrary field $\Bbbk$ of characteristic zero.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8691247701644897, "perplexity": 808.1773589806394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573121.4/warc/CC-MAIN-20190917203354-20190917225354-00458.warc.gz"}
http://www.dummies.com/how-to/content/string-theory-testing-supersymmetry.navId-811234.html
One major prediction of string theory is that a fundamental symmetry exists between bosons and fermions, called supersymmetry. For each boson there exists a related fermion, and for each fermion there exists a related boson. (Bosons and fermions are types of particles with different spins.)

## Finding the missing sparticles

Under supersymmetry, each particle has a superpartner. Every boson has a corresponding fermionic superpartner, just as every fermion has a bosonic superpartner. The naming convention is that fermionic superpartners end in “–ino,” while bosonic superpartners start with an “s.”

Finding these superpartners is a major goal of modern high-energy physics. The problem is that without a complete version of string theory, string theorists don’t know what energy levels to look at. Scientists will have to keep exploring until they find superpartners and then work backward to construct a theory that contains the superpartners. This seems only slightly better than the Standard Model of particle physics, where the properties of all 18 fundamental particles have to be entered by hand.

Also, there doesn’t appear to be any fundamental theoretical reason why scientists haven’t found superpartners yet. If supersymmetry does unify the forces of physics and solve the hierarchy problem, then scientists would expect to find low-energy superpartners. (The search for the Higgs boson has faced these same issues within the Standard Model framework for years; it has yet to be detected experimentally either.)

Instead, scientists have explored energy ranges into a few hundred GeV, but still haven’t found any superpartners. So the lightest superpartner would appear to be heavier than the 17 observed fundamental particles. Some theoretical models predict that the superpartners could be 1,000 times heavier than protons, so their absence is understandable (heavier particles tend to be more unstable and decay into lower-energy particles if possible) but still frustrating.

Right now, the best candidate for a way to find supersymmetric particles outside of a high-energy particle accelerator is the idea that the dark matter in our universe may actually be the missing superpartners.

## Testing implications of supersymmetry

If supersymmetry exists, then some physical process takes place that causes the symmetry to become spontaneously broken as the universe goes from a dense high-energy state into its current low-energy state. In other words, as the universe cooled down, the superpartners had to somehow decay into the particles we observe today. If theorists can model this spontaneous symmetry-breaking process in a way that works, it may yield some testable predictions.

The main problem is something called the flavor problem. In the Standard Model, there are three flavors (or generations) of particles. Electrons, muons, and taus are three different flavors of leptons. In the Standard Model, these particles don’t directly interact with each other. (They can exchange a gauge boson, so there’s an indirect interaction.)

Physicists assign each particle numbers based on its flavor, and these numbers are a conserved quantity in quantum physics. The electron number, muon number, and tau number don’t change, in total, during an interaction. An electron, for example, gets a positive electron number but gets 0 for both muon and tau numbers.
Because of this, a muon (which has a positive muon number but an electron number of zero) can never decay directly into an electron (with a positive electron number but a muon number of zero), or vice versa; in the actual decay of a muon, neutrinos carry off the flavor numbers. In the Standard Model and in supersymmetry, these numbers are conserved, and interactions between the different flavors of particles are prohibited.

However, our universe doesn’t have supersymmetry — it has broken supersymmetry. There is no guarantee that the broken supersymmetry will conserve the muon and electron number, and creating a theory of spontaneous supersymmetry breaking that keeps this conservation intact is actually very hard. Succeeding at it may provide a testable hypothesis, allowing for experimental support of string theory.
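To make the flavor-number bookkeeping concrete, here is a minimal sketch (an illustration assuming the standard lepton number assignments; it is not from the original article) checking that ordinary muon decay conserves the electron and muon numbers described above:

```python
# Lepton flavor numbers as (electron number, muon number)
L = {"e-": (1, 0), "mu-": (0, 1), "nu_e": (1, 0), "nu_mu": (0, 1),
     "anti-nu_e": (-1, 0)}

def conserved(initial, final):
    # Sum each flavor number over the particles and compare totals.
    tot = lambda ps: tuple(map(sum, zip(*(L[p] for p in ps))))
    return tot(initial) == tot(final)

# mu- -> e- + anti-nu_e + nu_mu : allowed, both numbers balance
print(conserved(["mu-"], ["e-", "anti-nu_e", "nu_mu"]))  # True
# mu- -> e- alone would violate both numbers
print(conserved(["mu-"], ["e-"]))                        # False
```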
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8012257218360901, "perplexity": 659.9030423403487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00172-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/forces-problem.185077/
# Forces Problem

1. Sep 16, 2007

### bob1182006

I have 2 problems I just can't seem to get so I'll post both here instead of making 2 threads. Both of these are from Halliday & Resnick 5th ed chapter 3 Problems. Problem #1. 1. The problem statement, all variables and given/known data A light beam from a satellite-carried laser strikes an object ejected from an accidentally launched ballistic missile. The beam exerts a force of $2.0 * 10^{-5}$ N on the target. If the "dwell time" of the beam on the target is 2.4s by how much is the object displaced if it is b) a 2.1-kg decoy? (These displacements can be measured by observing the reflected beam) 2. Relevant equations $$\triangle x=v_0 t +\frac{1}{2}at^2$$ F=ma 3. The attempt at a solution The missile experiences some unknown horizontal acceleration/velocity so I will just ignore those.. It also experiences a downward acceleration of g. So does the laser beam? I need to find the sum of the forces to find the acceleration. $$\frac{mg+2.7*10^{-5}}{m}$$ for a I get about 9.8 m/s^2. Plugging that into the equation I get a displacement of about 56m but the answer should be some micro-meters.. Problem #9. 1. The problem statement, all variables and given/known data A chain consisting of five links, each with mass 100g, is lifted vertically with a constant a =2.5m/s^2. Find b) the force F exerted on the top link by the agent lifting the chain c) the net force on each link 2. Relevant equations F=ma 3. The attempt at a solution 2.5m/s^2 up -9.8m/s^2 up a total of -7.3m/s^2 up for a. add up the masses of link+links beneath it and then multiply by 7.3m/s^2 to get a force of: .5kg*7.3m/s^2 = 3.65 N being exerted on the top link. Which is completely wrong :/. for b. find the total mass and multiply by 7.3m/s^2 F=.5kg*7.3m/s^2=3.65 N again completely wrong.. for c. Somehow I got this right... lowest link force acting upon it is: .1kg*2.5m/s^2 = .25 N ...for all others then subtracting the link force - link below it to get a net force of .25 N on each link. Any help on either problem is greatly appreciated.

2. Sep 16, 2007

### bob1182006

I'm mainly trying to get problem #9. I was working backwards from the answers the book gave. for a the lowest # and thus the lowest link in the chain I think. has a total force of 1.23N which requires a force of 12.3 m/s^2 (g+2.5) on a mass of 100g. Isn't this wrong though? since there's a downward force of .1g N but an upward of .1*2.5 N but somehow in the book they're adding them...

3. Sep 16, 2007

### learningphysics

For the first problem, I don't think you should use mg... we don't really know anything about the missile... it may not be accelerating downwards at g (it may have its own thrust or something). we also don't know that the laser is directed straight upwards...

4. Sep 16, 2007

### learningphysics

For problem 9, part b) Write this equation out: $$\Sigma\vec{F} = ma$$

5. Sep 16, 2007

### PhanthomJay

I don't think you are correctly applying newton 2 in your free body diagrams. Isolating the bottom link, there are 2 forces acting on it. You have correctly identified the downward force. The upward force is unknown...call it T. Now use newton 2nd law....F_net = ma. a is given, don't mess with it.

6. Sep 16, 2007

### bob1182006

Yea the first problem is weird especially the "These displacements can be measured by observing the reflected beam" o.o since it's barely chapter 3 and I have no knowledge of optics so far. So sticking to #9.
for part a, there are 2 forces acting on each link, 1 up and 1 down, I add them together right? .1 kg * 2.5 m/s^2=.25 N .1 kg * 9.8 m/s^2=.98 N total being 1.23 N 1.23 N + (.1kg * 2.5 m/s^2 + .2kg * 9.8 m/s^2) = 1.23 N + 1.23 N=2.46 and so forth for the rest correct? for part b) I would do almost the same as I did for part a correct? part a I stop adding @ the 4th link since it has force exerted by the 5th which is topmost and the 3rd beneath it. 7. Sep 16, 2007 ### learningphysics Not sure what happened above... : .1kg * 2.5 m/s^2 + .2kg * 9.8 m/s^2 is not 1.23... but 2.46 is the correct answer though. 8. Sep 16, 2007 ### bob1182006 sorry did it first as 2ma+2mg=2.46 N but split it up into (ma+mg)+(ma+mg)=2.46N just forgot to change that .2 to a .1. But I'm still curious, why is it that when I add ma and mg they are both positive? a is pointing up but g is pointing down so shouldn't they be subtracting?.... 9. Sep 16, 2007 ### learningphysics If you're lifting 1 kg object upwards at an acceleration 1.0m/s^2 in a gravityless environment... you only need to exert 1N of force. If you're lifting 1 kg object upwards at an acceleration 1.0m/s^2 in a 9.8m/s^2 gravity environment... you'll need to exert a greater force... you're exerting a force to compensate for gravity. Fnet = ma Fupwards - mg = ma Fupwards = ma + mg The greater the gravity... the greater the force required to compensate for gravity, and move the object at the same acceleration. 10. Sep 16, 2007 ### bob1182006 ok I think I get it now. so when I find ma for .1kg m the net force IS .1*2.5=.25N but that is the total of the force I want - mg so that's why I add the mg! Thanks alot!!
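To tie the thread's resolution together numerically, here is a small check (a sketch using the thread's own numbers: five 100 g links, a = 2.5 m/s², g = 9.8 m/s²):

```python
g, a = 9.8, 2.5          # m/s^2
m_link, n = 0.1, 5       # kg per link, number of links

# (b) force exerted on the top link by the lifting agent:
# it must support the whole chain, so F - n*m*g = n*m*a.
F_top = n * m_link * (g + a)
print(F_top)             # 6.15 N

# (c) the net force on each link is simply m*a:
print(m_link * a)        # 0.25 N
```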
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8835304975509644, "perplexity": 1872.0550572044472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719027.25/warc/CC-MAIN-20161020183839-00437-ip-10-171-6-4.ec2.internal.warc.gz"}
http://physics.stackexchange.com/questions/19860/can-plancks-constant-be-derived-from-maxwells-equations
# Can Planck's constant be derived from Maxwell's equations?

Can mathematics (including statistics, dynamical systems,...) combined with classical electromagnetism (using only the constants appearing in charge-free Maxwell equations) be used to derive the Planck constant? Can it be proven that Planck's constant is truly a new physical constant?

- Comment to the question (v3): Are you asking (i) if the value of Planck's constant $\hbar$ can be expressed in terms of quantities from classical theories, or (ii) are you asking if it is possible to infer the concept of a parameter $\hbar$ (without knowing its exact value) from classical theories? –  Qmechanic Sep 26 '13 at 22:33

Look, Dr. Zaslavsky is completely correct. But. The great mathematician Jean Leray once, after being asked to think about Maslov's work on asymptotic methods to approximate the solutions of partial differential equations which were generalisations of the WKB method, decided, in the 70's, to write an entire book titled Lagrangian Analysis and Quantum Mechanics (MIT Press); note he gives his own special meaning to «Lagrangian Analysis». See the nice abstract entitled «The meaning of Maslov's asymptotic method: the need of Planck's constant in mathematics». This is not a derivation of the magnitude of Planck's constant from Maxwell's equations, but it is a profound motivation for why there should be some finite, small, constant such as Planck's from the standpoint that the caustics you get in geometrical optics cannot be physical, and yet geometric optics ought to be a useful approximation to wave optics. From this point of view, there ought to be some constant like Planck's constant, at least in pure mathematics. It is, however, very advanced: inaccessible unless you already know about Fourier integral operators on symplectic manifolds, such as in Duistermaat's book or Guillemin and Sternberg, Symplectic Techniques in Physics. Maslov's original book is, although non-rigorous, very insightful and more accessible. For a physicist, though, perhaps just the basics of the Hamiltonian relationship between geometrical optics and wave optics, and the basics of the WKB method, would be more important.

- for now I only find a relatively short article by Jean Leray with the same title: projecteuclid.org/euclid.bams/1183548218 is this the entire book or should I search harder? thanks for the reference btw! –  propaganda Jan 23 '12 at 5:24

That's merely an abstract. The book is rather advanced, but yes it is an entire 200 page book. One should also read Maslov's original book which, although not rigorous, is tremendously insightful. the book by Guillemin and Sternberg (Symplectic Techniques in Physics) is also to be recommended, sort of, it is still more mathematical than physical, of course. –  joseph f. johnson Jan 23 '12 at 5:27

I can not find any reference to the book itself :( –  propaganda Jan 23 '12 at 5:31

It's on the shelf next to my bed right now. Look, information is not free. Leray and the translator put a lot of work into that book and they have to be paid for it...or their heirs or assigns... sorry. But the book by Maslov is a better introduction, and after that the abstract probably suffices. The book by Leray is very advanced and a little bit inaccessible unless you already know Fourier integral operators on symplectic manifolds, on the one hand, and Maslov's original work, on the other. So start there anyway, and put off Leray until you have got that far. And, for a physicist, the –  joseph f.
johnson Jan 23 '12 at 5:36 By the way, I'm not a professor ;-) –  David Z Jan 23 '12 at 5:49

If you're talking about deriving the value of Planck's constant, then no, that is not possible. The value is simply a consequence of our chosen unit system. If you're talking about deriving the fact that something analogous to Planck's constant has to exist at all, then I believe the answer is still no. To some extent that is also a consequence of our unit system, since if you use fully natural units, Planck's constant has a value of 1 and so it never shows up in the equations in the first place. But besides that, the original context in which the constant was proposed was the quantization of energy, namely that the energy of an EM wave is quantized in units of $hf$. This could be considered the foundational assumption of quantum mechanics. Planck's constant is part of this assumption, so you can't really call it a derived result.

- I realize that textbooks don't derive Planck's constant from Maxwell's equations, but can it be proven to be impossible to derive from Maxwell's equations using only more mathematics? –  propaganda Jan 23 '12 at 4:35

If you can express the proposition "Planck's constant is impossible to derive from Maxwell's equation" in proper mathematical language, then perhaps yes, it is possible. But that would be a question for the math site. The (summarized) physics answer is that Planck's constant cannot be derived from Maxwell's equations because (1) Planck's constant is not something that can be derived, and (2) they deal with different areas of physics. –  David Z Jan 23 '12 at 4:42

I was thinking perhaps similar to how hidden variable theories can be in some sense ruled out by Bell's theorems? quantum mechanics does seem to have a lot in common with Bayesian statistics: prior knowledge of one variable affects expected probabilities of another –  propaganda Jan 23 '12 at 4:43

also: using natural units shoves the value into the fine structure constant no? now you have a dimensionless unexplained constant –  propaganda Jan 23 '12 at 5:46

I don't see any connection between Bell's theorem (which is a precise statement about correlations of measurements) and any relationship that might have existed between Planck's constant and Maxwell's equations. Also, I'm really not sure what you mean about natural units and the fine structure constant... yes, $\alpha$ can be calculated using $\hbar$, but it's a unitless number and thus independent of the actual value of $\hbar$. If you would like to continue this, let's take it to Physics Chat. –  David Z Jan 23 '12 at 5:52

David Z and Joseph F Johnson give, in my opinion, good descriptions of how the Planck constant cannot be derived from Maxwell's equations (Joseph gives other arguments why a Planck-like constant should exist, though). However, looking at the question from a slightly different standpoint: if one decides that light is quantized, then there is a thought experiment in classical optics that motivates the form of the Planck law, i.e. that the light energy quantum has to be proportional to its frequency. Again, the value of the proportionality constant cannot be derived from Maxwell's equations, but I think the following is interesting insofar that it is Maxwell's equations together with special relativity that show the quantisation law has to have a certain form. Our thought experiment is about light in a perfect optical resonator comprising two perfectly parallel mirrors with plane waves bouncing between them.
We now "squash the light" by bringing the mirrors together: we accelerate the right hand one instantly to $v$ metres per second moving towards the other, which is kept still. Some time later, we stop the crush, again decelerating from $v$ metres per second to rest instantaneously. # Physical Overview If one works through the calculation one finds, of course, that the work done pushing the mirrors shows up as energy in the cavity field. But, at the same time, the pulse bouncing in cavity keeps its original functional form – but the argument of the functional form $k \,z - \omega \, t$ gets scaled up so that the constant pulse shape is shrunken to perfectly fit into the shrinking cavity. This is Doppler blueshifting in another guise - the Fourier (the covariant wavenumber space) representation is simply being uniformly dilated and the scale factor is the same scale factor applying to the energy of the field. Alternatively, we could imagine draining energy from the light by letting the cavity expand "adiabatically" and do work against the outside force. Then, of course, we'd get Doppler redshifting; again the Doppler scale factor is the same scale factor applying to the field's dwindling energy. This is the central point: The Doppler shifting factor is the same as the energy scaling factor. Now, supposing we think of this field’s classical energy as arising from any number of “photons” (say $N$) all in exactly the same state at the beginning of the experiment. Presumably if we squash slowly enough so that adiabaticity holds (see Wiki Page on the "Adiabaticity Theorem"), one might reasonably construe the field as still being in an $N$-photon number state afterwards. Whence: if we truly can assume the same number of photons, each in the same state which varies throughout the experiment, at the beginning and end, then: Each photon’s energy must be proportional to its frequency. And it all seems to come wholly from the form of the Lorentz transformation and Maxwell's equations. It's worth noting, when appealing to the Born Fock Adiabaticity Theorem, that this result is independent of the mirror speed $v$. We can wind the mirrors together as slowly as we like, so there is at least a plausibility to this idea. Of course, there is some circular reasoning here – one has to define quantum states properly to meaningfully talk about adiabaticity and, before that, one has to assume the Planck result – or some other postulate, to build a second quantised theory to make the idea of an $N$-photon number state rigorous; even once one has done that, I must admit I can’t even see how to go about writing a second quantised description of a cavity with a moveable mirror, maybe that's a new question. But, if one imagines going back in time to Planck’s day, one might imagine a thought experiment like this might have been taken as motivating $E = h \nu$. The idea of the electomagnetic field's second quantisation didn't begin to take shape until Dirac thought of it 26 years after Planck proposed his law in 1900. So, before Dirac's ideas, physics had to think in terms like the above thought experiement that seem from our hindsight-enlightened viewpoints to be begging the quesiton. Maybe indeed some early twentieth century worker came up with this thought experiment. # Some Details Here are some further details in my thought experiment. The calculations are straightforward, but complicated. Firstly we consider a one-dimensional electromagnetic wave scattering from a perfect reflector in the plane $z = 0$. 
To the left of the reflector, Maxwell's equations can be fulfilled by one-dimensional plane waves with the form:

$$\begin{array}{lcl} \mathbf{E}\left(z, t\right) &=& \left[\,f_0\left(z - c\,t\right) - f_0\left(- \left(z+ c\,t\right)\right)\,\right] \; \mathcal{U}\left(-z\right) \;\hat{\mathbf{x}}\\ \mathbf{B}\left(z, t\right) &=& \frac{1}{c}\left[\,f_0\left(z - c\,t\right) + f_0\left(- \left(z+ c\,t\right)\right)\,\right] \; \mathcal{U}\left(-z\right) \; \hat{\mathbf{y}}\\ \mathbf{J}_s\left(0, t\right) &=& 2 \;\sqrt{\frac{\epsilon_0}{\mu_0}} \,f_0\left( - c\,t\right) \hat{\mathbf{x}}\\ \mathbf{F}_s\left(0, t\right) &=& 2 \,\epsilon_0 \,\left(f_0\left( - c\,t\right)\right)^2 \hat{\mathbf{z}} \end{array}\quad\quad\quad\quad(1)$$

where $\mathbf{E}$ and $\mathbf{B}$ are respectively the electric field and magnetic induction, $f_0$ any arbitrary pulse shape, $c$ the freespace lightspeed, $\mathbf{J}_s$ surface current (in ampères per metre) in the perfect reflector, $\mathbf{F}_s$ force per unit area on the conductor and $\mathcal{U}$ the Heaviside step function. The force is most straightforwardly calculated by the method of virtual work; to understand the calculation from the Lorentz force formula, one must calculate the scattering from a metal with finite conductivity $\sigma$ as in Method 3 of my answer here, integrate the body force density $\mathbf{J} \wedge \mathbf{B}$ and then take the limit as $\sigma \rightarrow \infty$, the skin depth $\delta \rightarrow 0$ and the body current density thus becomes a surface current. This result differs by a factor of two from the "blithe" result $\mathbf{J}_s \wedge \mathbf{B}$ gotten by applying the Lorentz force formula without heed to the limiting process that defines a perfect conductor and current sheet. Tacitly, an assumption has been made that the plane's conductivity $\sigma$ fulfills $\sigma \gg \omega_{max} \epsilon$ where $\omega_{max}$ is the highest frequency of a "significant" Fourier component of $f_0()$.

Now we want to know what happens when the perfect reflector is shifted leftwards so that its velocity is $-v \, \hat{\mathbf{z}}$. The outcome can of course be found by calculating the fields seen by an observer moving uniformly at velocity $v\,\hat{\mathbf{z}}$. Upon making the relevant Lorentz transformation on Eq.(1), one finds:

$$\begin{array}{lcl} \mathbf{E}\left(z, t\right) &=& \left[\sqrt{\frac{c-v}{c+v}}\,f_0\left(\sqrt{\frac{c-v}{c+v}}\left(z - c\,t\right)\right) - \sqrt{\frac{c+v}{c-v}}\,f_0\left(- \sqrt{\frac{c+v}{c-v}} \left(z+ c\,t\right)\right)\,\right] \; \mathcal{U}\left(-\left(z+v\,t\right)\right) \; \hat{\mathbf{x}}\\ \mathbf{B}\left(z, t\right) &=& \frac{1}{c} \left[\sqrt{\frac{c-v}{c+v}}\,f_0\left(\sqrt{\frac{c-v}{c+v}}\left(z - c\,t\right)\right) + \sqrt{\frac{c+v}{c-v}}\,f_0\left(- \sqrt{\frac{c+v}{c-v}} \left(z+ c\,t\right)\right)\,\right] \; \mathcal{U}\left(-\left(z+v\,t\right)\right) \; \hat{\mathbf{y}}\\ \end{array} \quad\quad\quad\quad(2)$$

These equations are more meaningful if we rewrite them so that $f_1\left(u\right) = \sqrt{\frac{c-v}{c + v}} \; f_0\left(\sqrt{\frac{c-v}{c + v}} \; u\right)$, i.e.
we rescale amplitudes and arguments so that:

$$\begin{array}{lcl} \mathbf{E}\left(z, t\right) &=& \left[\,f_1\left(z - c\,t\right) - \frac{c+v}{c-v}\,f_1\left(-\frac{c+v}{c-v} \left(z+ c\,t\right)\right)\,\right] \; \mathcal{U}\left(-\left(z+v\,t\right)\right) \; \hat{\mathbf{x}}\\ \mathbf{B}\left(z, t\right) &=& \frac{1}{c} \left[\,f_1\left(z - c\,t\right) + \frac{c+v}{c-v}\,f_1\left(- \frac{c+v}{c-v} \left(z+ c\,t\right)\right)\,\right] \; \mathcal{U}\left(-\left(z+v\,t\right)\right) \; \hat{\mathbf{y}}\\ \end{array} \quad\quad\quad\quad(3)$$

and the reflected waves $\frac{c+v}{c-v}\,f_1\left(-\frac{c+v}{c-v} \left(z+ c\,t\right)\right)$ are given in terms of the incident waves $f_1\left(z- c\,t\right)$. This form of the equations underlies the wonted causal relationships in such a system: the rightwards running wave $f_1\left(z- c\,t\right)$ at any point in the region $z < 0$ will meet the reflector in the future, so that this wave must be uninfluenced by the reflector until that time of meeting. Its shape and scaling must therefore simply be a delayed version of what left its source somewhere far out in the region $z < 0$. The scattered wave $\frac{c+v}{c-v}\,f_1\left(-\frac{c+v}{c-v} \left(z+ c\,t\right)\right)$ has already met the reflector and has been Doppler shifted by it (witness that the argument has been multiplied by the squared Doppler factor $\frac{c+v}{c-v}$, so that wavelengths are shrunken by the factor $\frac{c-v}{c+v}$) and its intensity boosted by the factor $\left(\frac{c+v}{c-v}\right)^2$. Positive work must be done on the reflector to push it leftwards at constant speed against the photonic pressure.

Take heed that the wonted electromagnetic field boundary conditions do not hold for moving boundaries. The discontinuity in the tangential electric field components can be understood as follows: as the reflector and its surface current advances leftwards, it is quelling the field in its wake altogether. Thus, if we imagine a thin loop whose plane is normal to both the reflector and the magnetic induction and with width $\Delta z$ in the $z$ direction and length $\ell$ along the direction of the magnetic field, the magnetic flux through this loop goes from $\left|\mathbf{B}\right| \ell \Delta z$ to zero in time $\Delta z / v$ as the reflector passes by the loop, hence there must be a difference $\left|\Delta \mathbf{E}\right|$ between the electric fields along the loop's long sides, i.e. $\left|\Delta \mathbf{E}\right| \ell = \left|\mathbf{B}\right| \ell v$ as $\Delta z \rightarrow 0$, hence the discontinuity $2 \, v f(0) / (c-v)$ in the electric field. Again, the electrodynamics of this discontinuity are better understood by doing the calculations at a finite conductivity (thus removing the discontinuity) and passing to the infinite conductivity limit.
Now we shift the reflector to an arbitrary $z$-position $a$: $$\begin{array}{lcl} \mathbf{E}\left(z, t\right) &=& \left[\,f_1\left(z - c\,t - a\right) - \frac{c+v}{c-v}\,f_1\left(-\frac{c+v}{c-v} \left(z+ c\,t - a\right)\right)\,\right] \; \mathcal{U}\left(a-z-v\,t\right) \; \hat{\mathbf{x}}\\ \mathbf{B}\left(z, t\right) &=& \frac{1}{c} \left[\,f_1\left(z - c\,t - a\right) + \frac{c+v}{c-v}\,f_1\left(- \frac{c+v}{c-v} \left(z+ c\,t - a\right)\right)\,\right] \; \mathcal{U}\left(a-z-v\,t\right) \; \hat{\mathbf{y}}\\ \end{array} \quad\quad\quad\quad(4)$$ then transform the functional notation so that $f\left(t - \frac{z}{c}\right) = f_1\left(z - c\,t - a\right)$: $$\begin{array}{lcl} \mathbf{E}\left(z, t\right) &=& \left[\,f\left(t - \frac{z}{c}\right) - \frac{c+v}{c-v}\,f\left(\frac{c+v}{c-v} \left(t + \frac{z}{c}\right) - \frac{2 \,a}{c - v}\right)\,\right] \; \mathcal{U}\left(a-z-v\,t\right) \; \hat{\mathbf{x}}\\ \mathbf{B}\left(z, t\right) &=& \frac{1}{c} \left[\,f\left(t - \frac{z}{c}\right) + \frac{c+v}{c-v}\,f\left(\frac{c+v}{c-v} \left(t+ \frac{z}{c}\right) -\frac{2 \,a}{c - v}\right)\,\right] \; \mathcal{U}\left(a-z-v\,t\right) \; \hat{\mathbf{y}}\\ \end{array} \quad\quad\quad\quad(5)$$ and imagine a second, still reflector at $z = 0$ so as to consider a one-dimensional cavity resonator as shown in the drawing. The cavity resonator is "shrinking" and the light within it is being "squashed". Boundary conditions very like those in Eq.(1) hold, thus implying the "loop condition": $$f\left(\frac{c-v}{c+v}\, u + \frac{2}{c+v}\, a\right) = \frac{c+v}{c-v}\,f\left(u\right)\quad\quad\quad\quad(6)$$ and the field's intensity and frequency both grow exponentially together i.e. vary like $\left(\frac{c+v}{c-v}\right)^n$with the cavity circulation number $n$. Suppose at $t = 0$, the rightwards running cavity wave's functional form is $g_+(z), \;0\leq z \leq a$ and that there is no leftwards running wave. The wave's lagging (leftmost) edge meets the right reflector (i.e. that which was at position $z = a$ at time $t = 0$) at time $t = a / (c + v)$. Likewise, the wave's leading edge is boosted in amplitude by a factor $(c+v)/(c-v)$ and meets the left reflector (at $z = 0$) slightly later at time $t = a / c$. So, at this time, the wave is now wholly backwards (leftwards) running, its whole length still fits into the shortened cavity and it still has the same functional form, but with a "squashed" $z$-dependence; its functional form is now $\frac{c+v}{c-v} g_+\left(a - \frac{c+v}{c-v}\,z\right)$ for $0 \leq z \leq \frac{c-v}{c+v} a$, whilst the cavity's length is now $\frac{c-v}{c} a$, i.e. longer than the wave's extent. Now we repeat the reasoning for the wave scattering from the left reflector. This time there is no Doppler shift or amplitude boost, and the time taken for the wave's leading edge to run from the left to the right reflector is $\frac{c-v}{c+v} \frac{a}{c}$, i.e. exactly the wave's temporal duration and this duration in turn is exactly the time taken for the wave's lagging edge to reach $z = 0$. Thus, after a total time $t = 2\frac{a}{c + v}$ the wave has returned to its original shape, albeit that its amplitude has been boosted by a factor $\frac{c+v}{c-v}$, its functional form is now $g_+(\frac{c+v}{c-v}\,z),\; 0\leq z \leq \frac{c-v}{c+v}\, a$, the wave's length $\frac{c-v}{c+v}\, a$ so that it fits exactly into its new cavity length $a^\prime = \frac{c-v}{c+v}\, a$. 
We can repeat the analysis for a backwards running wave $g_-(z), \;0\leq z \leq a$ and assume that there is no forwards running wave. The result is naturally the same: after one circulation time $t = 2\frac{a}{c + v}$, the wave has returned to being a wholly backwards running wave, its amplitude has been boosted by the factor $\frac{c+v}{c-v}$ and its argument has been shrunken (blueshifted) so that it fits exactly into the shrunken cavity, which now has a length $a^\prime = \frac{c-v}{c+v}\, a$. Thus, if the cavity begins with forward and backwards running variations $g_+(z),\; g_-(z)$ respectively for $0\leq z \leq a$, the following parameters define the $n^{th}$ cavity round trip:

$$\begin{array}{llcl} n^{th}\, \mathrm{Round\,Trip\,Time}:& t_n & = & 2 \frac{a}{c + v} \left(\frac{c-v}{c+v}\right)^{n - 1}\\ \mathrm{Time\,Till\,Completion}:& T_n & = & \sum\limits_{j = 1}^n t_j = \frac{a}{v} \left(1 - \left(\frac{c-v}{c+v}\right)^n\right)\\ \mathrm{Blueshift\,(Frequency\,Scale)}:& \nu_n & = & \left(\frac{c+v}{c-v}\right)^n\\ \mathrm{Cavity\,Length\,Scale}:& L_n & = & \left(\frac{c-v}{c+v}\right)^n = \nu_n^{-1}\\ \mathrm{Amplitude\,Scale}:& a_n & = & \left(\frac{c+v}{c-v}\right)^n = \nu_n\\ \mathrm{Intensity\,Scale}:& i_n & = & \left(\frac{c+v}{c-v}\right)^{2\,n} = \nu_n^2\\ \mathrm{Total\,Cavity\,Energy}:& E_n & = & \frac{\epsilon_0}{2}\,\int_0^{a\,L_n} \nu_n^2 \left(g_+\left(\nu_n\,z\right)^2+g_-\left(\nu_n\,z\right)^2\right) \mathrm{d}z\\ & & = & \nu_n \frac{\epsilon_0}{2}\,\int_0^a \left(g_+\left(z\right)^2+g_-\left(z\right)^2\right) \mathrm{d}z = \nu_n\,E_0\\ \mathrm{Total\,Cavity\,Energy\,Scale}:& e_n & = & \left(\frac{c+v}{c-v}\right)^n = \nu_n\\ \mathrm{Photonic\,Pressure\,Scale}:& p_n & = & \left(\frac{c+v}{c-v}\right)^{2\,n} = \nu_n^2\\ \end{array} \quad\quad\quad\quad(7)$$

thus the light within the cavity is infinitely blueshifted, and the power and pressure needs of this process increase without bound as the cavity approaches zero length. Note that analogous results can be gotten for a reflector speed $v\left(t\right)$ that varies with time. In this case, the functional forms $g_+(z)$ and $g_-(z)$ are in general nonuniformly stretched and shrunken to account for the variation of speed within each circulation period. The results in Eq.(7) are replaced by effective average definitions, but the fundamental results that the total cavity energy and mean blueshift are both inversely proportional to the cavity length are the same and independent of the detailed time variation.

-

Maxwell assumes only that a particle has charge, not that an electron has a frequency that depends on its rest mass. So one cannot deduce Planck's constant from Maxwell. de Broglie fixed that.

-
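As a quick numerical cross-check of the scalings in Eq. (7) from the thought-experiment answer above (a sketch; variable names and the unit choice $c = 1$ are ours):

```python
c, v, a = 1.0, 0.1, 1.0
r = (c - v) / (c + v)            # per-round-trip length / redshift factor

T, length, energy = 0.0, a, 1.0
for n in range(1, 6):
    T += 2 * length / (c + v)    # round-trip time at the current length
    length *= r                  # cavity shrinks by r each circulation
    energy /= r                  # field energy blueshifts by 1/r

# closed forms from Eq. (7): T_n = (a/v)(1 - r**n), E_n = nu_n * E_0
print(abs(T - (a / v) * (1 - r**5)) < 1e-12)   # True
print(abs(energy - (1 / r)**5) < 1e-12)        # True
```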
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9258599877357483, "perplexity": 475.1118693164221}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930895.96/warc/CC-MAIN-20150521113210-00246-ip-10-180-206-219.ec2.internal.warc.gz"}
http://www.siive.de/how-to-reform-low-pressure-boiler-thermal-efficiency.html
# how to reform low pressure boiler thermal efficiency

##### Determining & Testing Boiler Efficiency for Commercial

2016-4-12 · Fuel-to-steam or fuel-to-water efficiency is a measure of the overall efficiency of the boiler. It accounts for the effectiveness of the heat exchanger as well as the radiation and convection losses. For space heating boilers and in the BTS-2000 testing procedure, this type of efficiency is called “thermal efficiency.” It is an …

##### BOILER EFFICIENCY GUIDE – cleaverbrooks

2018-8-5 · BOILER EFFICIENCY GUIDE. FACTS. Foreword. Today’s process and heating applications boiler with “designed-in” low maintenance and high efficiency can provide outstanding high pressure drop design, and simple, robust linkages, are easy to tune and accurately …

##### Calculating Boiler Efficiency – forbesmarshall

Thermal efficiency; Apart from these efficiencies, there are some other losses which also play a role in determining boiler efficiency and hence need to be considered when calculating it. Combustion Efficiency. The combustion efficiency of a boiler is …

##### Boiler Efficiency – Engineering ToolBox

2019-7-11 · Boiler efficiency may be indicated by: Combustion Efficiency – indicates a burner's ability to burn fuel, measured by unburned fuel and excess air in the exhaust; Thermal Efficiency – indicates the heat exchanger's effectiveness in transferring heat from the combustion process to the water or steam in the boiler, excluding radiation and convection losses; Fuel to Fluid Efficiency – indicates the …

##### What is the thermal efficiency of a gas fired boiler

2019-4-26 · What is the thermal efficiency of a gas fired boiler 2017-09-22 17:04:29. What is the thermal efficiency of a gas fired boiler? Generally speaking, for gas fired boilers as well as oil fired boilers, the thermal efficiency of the WNS oil and gas fired boiler and the SZS oil & gas boiler can reach at least 95%.

##### Calculator: Boiler Efficiency | TLV – A Steam Specialist

2019-5-14 · Online calculator to quickly determine Boiler Efficiency. Includes 53 different calculations. Equations displayed for easy reference.

##### High Pressure Boilers: Features and Advantages ~ ME …

A boiler is called a high-pressure boiler when it operates with a steam pressure above 80 bars. High-pressure boilers are widely used for power generation in thermal power plants. In a high-pressure boiler, if the feed-water pressure increases, the saturation temperature of water rises and the latent heat of vaporization decreases.

##### thermal efficiency of biomass boiler | Low Pressure …

Thermal power station – Wikipedia. 2018-6-26 · The energy efficiency of a conventional thermal power station, considered salable energy produced as a percent of the heating value of the fuel consumed, is typically 33% to 48%. Biomass heating system – Wikipedia 2018-6-19 · Benefits of biomass heating.

##### Boiler Calculations – energy.kth.se

2003-10-30 · Water boils under constant temperature and pressure, so a horizontal line inside the enclosed region represents a vaporization process in the T-s diagram. The steam/water heating process in the boiler represented by the diagram in figure 2 can also be drawn in a T-s diagram (figure 4), if the boiler pressure is assumed to be e.g. 10 MPa.
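The snippets above name "thermal efficiency" and "fuel-to-steam efficiency" without showing a computation. Below is a minimal sketch of the standard direct ("input–output") method that calculators like the TLV one implement; every numeric value is an illustrative assumption of mine, not a figure from any of the quoted pages.

```python
# Direct ("input-output") method for boiler thermal efficiency:
# efficiency = heat absorbed by the water/steam / heat released by the fuel.
# All numbers are illustrative assumptions.
steam_flow  = 5000.0    # kg/h of steam produced (assumed)
h_steam     = 2778.0    # kJ/kg, enthalpy of saturated steam at ~10 bar (assumed)
h_feedwater = 419.0     # kJ/kg, enthalpy of feed water at ~100 degC (assumed)
fuel_flow   = 350.0     # kg/h of fuel burned (assumed)
gcv         = 42000.0   # kJ/kg, gross calorific value of the fuel (assumed)

heat_output = steam_flow * (h_steam - h_feedwater)  # kJ/h into the steam
heat_input  = fuel_flow * gcv                       # kJ/h from the fuel

efficiency = 100.0 * heat_output / heat_input
print(f"Boiler thermal efficiency (direct method): {efficiency:.1f} %")  # ~80.2 %
```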
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9142463207244873, "perplexity": 2524.302834638854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669422.96/warc/CC-MAIN-20191118002911-20191118030911-00334.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-with-applications-10th-edition/chapter-2-nonlinear-functions-2-4-exponential-functions-2-4-exercises-page-86/14
## Calculus with Applications (10th Edition)

Published by Pearson

# Chapter 2 - Nonlinear Functions - 2.4 Exponential Functions - 2.4 Exercises - Page 86: 14

#### Answer

$x=3$

#### Work Step by Step

The exponential function is a one-to-one function; therefore, if $a^x=a^y$ with $a>0$ and $a\ne 1$, then $x=y$. Since $64=4^3$, we can rewrite the equation: $4^x=64$ $4^x=4^3$ $x=3$
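A one-line numerical check of the result (my addition, not part of the textbook solution):

```python
import math

# Check that x = 3 solves 4**x = 64, and recover x as the base-4 logarithm of 64.
assert 4 ** 3 == 64
print(math.log(64, 4))  # 3.0 (up to floating-point rounding)
```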
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8452585339546204, "perplexity": 1151.9771374727823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528702.42/warc/CC-MAIN-20190420060931-20190420082856-00062.warc.gz"}