https://forum.allaboutcircuits.com/threads/tunnel-diode-oscillator-with-series-rlc.71548/
# Tunnel diode oscillator with series RLC

#### hcc
Joined Jun 24, 2012 · 1

It is said that negative resistance devices must use a series LC oscillator, while a negative conductance device, e.g. a tunnel diode, must use a parallel LC oscillator. Focusing on the negative conductance device: the conductance vs. voltage (across the tunnel diode) may be modeled as a U-shaped parabola, as shown in 'g.png'. The minimum is the negative conductance of the device (when optimally biased). With a parallel LC oscillator (and the tunnel diode optimally biased), the oscillation amplitude will increase until the negative conductance decreases (i.e. becomes more positive) and is balanced by the positive conductance of the (lossy) oscillator. Inverting the conductance gives the resistance, shown in 'r.png'.

Now, with a series RLC oscillator, where R is smaller than the magnitude of the negative resistance (so oscillation should be able to start) and the tunnel diode is biased as in the parallel case above, will there be any oscillation with such a series RLC? If there is any response from the RLC at all, what will be observed? (If oscillation is possible, the negative resistance will increase with amplitude, and the amplitude would diverge.)

#### Attachments
- 6 KB · Views: 45
- 5.5 KB · Views: 41

#### mcasale
Joined Jul 18, 2011 · 210

If I recall from way back in my EE classes, I think there are no rules about series/parallel RLC circuits, but I could have forgotten. My advice is to model your circuit diagram (which you did not post), then break the feedback loop and write the system equations (loop gain, using complex variables). You will wind up with a real part and an imaginary part in the denominator (or determinant). Set each to zero, and that should give you the conditions for oscillation as well as the frequency. IMHO mathematics is an under-utilized tool in analog circuit design. It can be fun once you get a reasonable answer, but it can also be a pain in the buttocks when you don't get the right answer.
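mcasale's recipe can be made concrete for the simplest idealization of the original question: a series RLC loop closed through an ideal (amplitude-independent) negative resistance −Rn has the characteristic equation L·s² + (R − Rn)·s + 1/C = 0. Setting the real part to zero gives the steady-oscillation condition R = Rn (R < Rn for start-up), and the imaginary part gives ω₀ = 1/√(LC). A minimal sketch, with component values made up purely for illustration:

```python
import math

def series_rlc_negative_r(L, C, R, Rn):
    """Series RLC closed through an ideal negative resistance -Rn.
    Characteristic equation: L*s^2 + (R - Rn)*s + 1/C = 0.
    Returns (net_resistance, natural_frequency_hz)."""
    net = R - Rn  # real-part condition: net = 0 -> steady oscillation
    f0 = 1.0 / (2 * math.pi * math.sqrt(L * C))  # imaginary part: w0 = 1/sqrt(LC)
    return net, f0

# Hypothetical values: 1 uH, 100 pF, 5 ohm loss, 5 ohm negative resistance
net, f0 = series_rlc_negative_r(1e-6, 100e-12, 5.0, 5.0)
# net < 0 -> growing amplitude; net > 0 -> decaying; net == 0 -> steady sinusoid
```

This only answers the start-up question; whether the amplitude stabilizes or diverges depends on how Rn varies with amplitude, which is exactly the OP's concern.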
https://forum.allaboutcircuits.com/threads/can-anyone-read-burnt-resistor-colors.48702/
# Can anyone read burnt resistor colors?

#### Kyle Hunter
Joined Jan 20, 2011 · 3

I've got a burnt resistor here, and today is the first time I've ever even tried decoding a resistor. I am just not sure if I have the colors right.

#### beenthere
Joined Apr 20, 2004 · 15,819

Have you tried a meter yet? Sometimes they don't change value. If the orientation matches the 1/8 watt one visible, then black-black-white-gold makes no sense. Neither does brown-white-black-black. I suspect there is a problem that caused the resistor to burn. Be sure you find that.

#### thatoneguy
Joined Feb 19, 2009 · 6,359

What does the resistor do in the circuit? A rough guess of the value can be made from there, and tweaked later. A lot of base/gate resistors in amplifiers turn completely black when the transistor fails, but they are often small value (< 500 ohms). Resistors do not burn on their own; it's usually a $5 transistor that will die to protect the $0.05 fuse, which takes out the resistor with it.

#### Kyle Hunter
Joined Jan 20, 2011 · 3

Unfortunately I do not have any idea what the resistor does. It is on a climate control unit for my '97 Suburban and I can not find a schematic. I also have very limited knowledge of electronic circuitry, but I have a thirst for knowledge. The switch that controls the blower motor was the first thing to act up. After looking at component prices, I decided that it would be more fun to buy a solder station and resistor along with a new switch and still come out money ahead of buying a new control unit . . . as long as I can figure out this resistor. Thank you for all input so far, I certainly appreciate it.

#### thatoneguy
Joined Feb 19, 2009 · 6,359

Do you have a digital multimeter (DMM)? You'll need one to get it going. Pics of both sides of the board that resistor is on will help as well. First, try measuring the value of the burnt-out resistor; then the task is to find out what other part failed, causing the resistor to burn.

#### jpanhalt
Joined Jan 18, 2008 · 11,088

The multiplier band, presumably the band that appears white, is a place to start. Since it is from a car A/C system, I am assuming it operates at relatively low voltage. If it is a 1/2 W resistor, then you can start to put some limits on its values, assuming the resistor was OK before it was exposed to excessive voltage. E = IR and W (power) = EI = I²R. If the power rating was exceeded by a factor of 100 (i.e., 50 W), that would require roughly 4 A. At 12 V, 4 A would imply a 3 ohm resistor. Going through various combinations, I would assume the multiplier is not really big, i.e., not 10^4 or greater. In fact, I would assume it is more likely brown, black, or silver, possibly red. Gold appears to have held up, if the tolerance band is any indicator. Now, if the multiplier were silver, then the value would be 0.xx. Assume the two black-appearing bands were originally the same color, or one color plus black. You can see where this is going by now, I hope. Brown, black, silver is 0.1 ohm. That might be used in a current-sensing circuit. Are the copper traces going to the resistor fairly large? Of course, I am not saying it is 0.1 ohm; 0.22 (R, R, silver) is another option of many, but I think it is a small value. I am assuming that black and brown paint probably doesn't become white, but silver probably will (silver oxide). If you could dissolve flakes from the band, a good test for silver is the precipitate it forms with halides, like chloride (i.e., table salt). But then, that is a whole other post. If you have resistor(s) with brown, black, and red bands and can overheat them, you might determine which colors change to what.

John

Last edited:

#### debe
Joined Sep 21, 2010 · 1,251

What was the climate control not doing? The ones I've worked on in Fords in Australia control fan speed electronically, and temp is controlled with a servo-controlled mixer door. This mixes cold and hot air to get the desired temperature. A decent clear picture of both sides of the circuit board may be a help.

#### Kyle Hunter
Joined Jan 20, 2011 · 3

The blower motor switch broke, and I believe that is what led up to the resistor burning up. The harness connection to the blower motor switch is also a little on the toasty side.

#### Kermit2
Joined Feb 5, 2010 · 4,162

You've got a short in your blower motor! Plain and simple. Check the blower motor before giving the electronic gods another sacrificial resistor body.

#### thatoneguy
Joined Feb 19, 2009 · 6,359

It appears the failure is off-board: harness or blower motor. Check them out with a DMM, and repair the traces on this PCB when replacing the resistor. Can you get a bigger pic of the bottom with a bit more light and clear focus? Still trying to figure out what else may be wrong. If you could use Paint to label the harness connector, and which knobs do what, that'd help a bit as well.

#### jpanhalt
Joined Jan 18, 2008 · 11,088

Hi Kyle,

Do you have any resistors to bake to see what color the bands turn into? I was thinking 350 to 400 °F (same temp as you use for a frozen pizza) and check periodically. If that doesn't work, I would then turn the broiler on, but still leave the temp the same. Please let us know. I am curious.

John

#### beenthere
Joined Apr 20, 2004 · 15,819

Red always seemed to go black first on carbon resistors.

#### Kermit2
Joined Feb 5, 2010 · 4,162

Go to the junk yard.

#### debe
Joined Sep 21, 2010 · 1,251

Looking at the underside of the board, the blue component has 2×3 lots of pins and a separate 2 pins. I suspect this is a dual-ganged variable resistor with a switch. The burnt resistor is on one side of the switch. I suspect this board is only a control board for another module, which may be where the problem is.

#### beenthere
Joined Apr 20, 2004 · 15,819

Has the OP used an ohmmeter on the resistor yet?
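The band guessing in the thread can be checked mechanically. Below is a plain sketch of the standard 4-band color code (two significant digits plus a multiplier; the tolerance band is ignored), handy for testing candidate readings such as jpanhalt's brown-black-silver:

```python
# Standard resistor color-code digits.
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}

# Multiplier band reuses the digit table; gold and silver are fractional.
MULTIPLIER = dict(DIGITS, gold=-1, silver=-2)

def decode(band1, band2, multiplier):
    """Nominal value in ohms for a 4-band code (tolerance band ignored)."""
    value = 10 * DIGITS[band1] + DIGITS[band2]
    return value * 10 ** MULTIPLIER[multiplier]

decode("brown", "black", "silver")  # jpanhalt's 0.1 ohm candidate
decode("red", "red", "silver")      # the other candidate, 0.22 ohm
```

This only decodes a reading; as the thread points out, confirming the actual value still takes a meter or knowledge of the circuit.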
https://www.groundai.com/project/planar-brownian-motion-and-gaussian-multiplicative-chaos/
# Planar Brownian motion and Gaussian multiplicative chaos

Antoine Jego (Universität Wien; on leave from the University of Cambridge. E-mail address: [email protected])

###### Abstract

We construct the analogue of Gaussian multiplicative chaos measures for the local times of planar Brownian motion by exponentiating the square root of the local times of small circles. We also consider a flat measure supported on points whose local time is within a constant of the desired thickness level and show a simple relation between the two objects. Our results extend those of [BBK94] and in particular cover the entire -phase or subcritical regime. These results allow us to obtain a nondegenerate limit for the appropriately rescaled size of thick points, thereby considerably refining estimates of [DPRZ01].

## 1 Introduction

### 1.1 Main results

The Gaussian multiplicative chaos (GMC) introduced by Kahane [Kah85] consists in defining and studying the properties of random measures formally defined as the exponential of a log-correlated Gaussian field, such as the two-dimensional Gaussian free field (GFF). Since such a field is not defined pointwise but is rather a random generalised function, making sense of such a measure requires some nontrivial work. The theory has expanded significantly in recent years and is by now relatively well understood, at least in the subcritical case [RV10, DS11, RV11, Sha16, Ber17] and even in the critical case [DRSV14a, DRSV14b, JS17, JSW18, Pow18]. Furthermore, Gaussian multiplicative chaos appears to be a universal feature of log-correlated fields going beyond the Gaussian theory discussed in these papers. Establishing universality for naturally arising models is a very active and important area of research. We mention the work of [SW16] on the Riemann function on the critical line and the work of [FK14, Web15, NSW18, LOS18, BWW18] on large random matrices.
The goal of this paper is to study the Gaussian multiplicative chaos for another natural non-Gaussian log-correlated field: (the square root of) the local times of two-dimensional Brownian motion. Before stating our main results, we start by introducing a few notations. Let be the law under which is a planar Brownian motion starting from . Let be an open bounded simply connected domain, a starting point, and be the first exit time of :

$$\tau := \inf\{ t \ge 0 : B_t \notin D \}.$$

For all , define the local time of at up to time (here stands for the Euclidean norm):

$$L_{x,\varepsilon}(t) := \lim_{\substack{r \to 0 \\ r > 0}} \frac{1}{2r} \int_0^t \mathbf{1}_{\{\varepsilon - r \le |B_s - x| \le \varepsilon + r\}} \, ds.$$

One can use the classical theory of one-dimensional semimartingales to get existence, for fixed , of as a process. In this article, we need to make sense of jointly in and in . This is provided by Proposition 1.1, which we state at the end of this section. If the circle is not entirely included in , we will use the convention . For all we consider the sequence of random measures on defined by: for all Borel sets ,

$$\mu_\varepsilon^\gamma(A) := \sqrt{|\log \varepsilon|} \, \varepsilon^{\gamma^2/2} \int_A e^{\gamma \sqrt{\frac{1}{\varepsilon} L_{x,\varepsilon}(\tau)}} \, dx. \tag{1.1}$$

The presence of the square root in the exponential may appear surprising at first glance, but it is nevertheless natural in view of Dynkin-type isomorphisms (see [Ros14]). To capture the fractal geometric properties of a log-correlated field, another natural approach consists in encoding the so-called thick points (points where the field is unusually large) in flat measures supported on those thick points. At criticality, such measures are often called extremal processes. See for instance [BL16], [BL18] in the case of the discrete two-dimensional GFF; see also [Abe18] in the case of simple random walk on trees. In our case, we can consider for all the sequence of random measures on defined by: for all Borel sets and ,

$$\nu_\varepsilon^\gamma(A \times T) := |\log \varepsilon| \, \varepsilon^{-\gamma^2/2} \int_A \mathbf{1}_{\{\sqrt{\frac{1}{\varepsilon} L_{x,\varepsilon}(\tau)} - \gamma \log \frac{1}{\varepsilon} \in T\}} \, dx. \tag{1.2}$$

###### Theorem 1.1.
For all , the sequences of random measures and converge as in probability for the topology of vague convergence on and on respectively towards Borel measures and .

The measure can be decomposed as a product of a measure on and a measure on . Moreover, the component on agrees with and the component on is exponential:

###### Theorem 1.2.

For all , we have -a.s.,

$$\nu^\gamma(dx, dt) = (2\pi)^{-1/2} \, \mu^\gamma(dx) \, e^{-\gamma t} \, dt.$$

Moreover, by denoting the conformal radius of seen from and the Green function of in (see (1.8)), we have for all Borel sets ,

$$\mathbb{E}_{x_0}\left[\mu^\gamma(A)\right] = \sqrt{2\pi} \, \gamma \int_A R(x, D)^{\gamma^2/2} \, G_D(x_0, x) \, dx \in (0, \infty). \tag{1.3}$$

The decomposition of and (1.3) justify that the square root of the local times is the right object to consider. These two properties are very similar to the case of the two-dimensional GFF (see [BL16] and [Ber16], Theorem 2.1, for instance). Simulations of can be seen in Figure 1. They have been performed using simple random walk on the square lattice killed when it exits a square composed of vertices.

###### Remark 1.1.

We decided not to include the case to ease the exposition, but notice that is also a sensible measure in this case. By modifying very few arguments in the proofs of Theorems 1.1 and 1.2, one can show that this sequence of random measures converges for the topology of vague convergence on towards a measure which can be decomposed as

$$\nu^0(dx, dt) = \mu^0(dx) \, \mathbf{1}_{\{t = \infty\}}$$

for some random Borel measure on . With the help of (6.3) in Proposition 6.2 characterising the measure , it can be shown that is actually -a.s. absolutely continuous with respect to the occupation measure of Brownian motion, with a deterministic density. This last observation was already made in [AHS18], Section 7. See Section 1.2 for more details about the relation of our results to this paper.

Define the set of -thick points at level by

$$T_\varepsilon^\gamma := \left\{ x \in D : \frac{L_{x,\varepsilon}(\tau)}{\varepsilon (\log \varepsilon)^2} \ge \gamma^2 \right\}. \tag{1.4}$$

This is similar to the notion of thick points in [DPRZ01], except that they look at the occupation measure of small discs rather than small circles.
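The simulations in Figure 1 are described as simple random walk on the square lattice, killed when it exits a square. A minimal sketch of such an experiment follows; the grid size, seed, and threshold below are illustrative choices of ours, not the paper's, and the visit counts are only a crude discrete stand-in for the circle local times:

```python
import math
import random

def srw_local_times(n, seed=0):
    """Simple random walk started at the centre of an n-by-n box and
    killed when it exits; returns the field of visit counts, a discrete
    stand-in for the local times appearing in (1.1)."""
    rng = random.Random(seed)
    counts = [[0] * n for _ in range(n)]
    x = y = n // 2
    while 0 <= x < n and 0 <= y < n:
        counts[y][x] += 1
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return counts

def thick_points(counts, threshold):
    """Sites whose visit count reaches `threshold`, mimicking the
    thick-point set (1.4); in the discrete setting the interesting
    thresholds scale like a constant times (log n)^2."""
    return [(x, y) for y, row in enumerate(counts)
            for x, c in enumerate(row) if c >= threshold]

counts = srw_local_times(101)
pts = thick_points(counts, threshold=0.5 * math.log(101) ** 2)
```

A rendering of `counts` (or of the exponentiated square-root field) over many walks would reproduce the qualitative look of the figure.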
In [Jeg18], the question of showing the convergence of the rescaled number of thick points for the simple random walk on the two-dimensional square lattice was raised. As a direct corollary of Theorems 1.1 and 1.2, we answer the analogue of this question in the continuum:

###### Corollary 1.1.

For all , we have the following convergence in :

$$\lim_{\varepsilon \to 0} |\log \varepsilon| \, \varepsilon^{-\gamma^2/2} \, \mathrm{Leb}(T_\varepsilon^\gamma) = \frac{1}{\sqrt{2\pi} \, \gamma} \, \mu^\gamma(D)$$

where denotes the Lebesgue measure of .

As mentioned in [Jeg18], this shows a fundamental difference between the structure of the thick points of the GFF and those of planar Brownian motion. Indeed, for the GFF, the logarithmic power in the renormalisation is not the same (1/2 rather than 1). This difference is initially surprising in view of the strong links between the two objects; see e.g. [Jeg18] for more about this. As announced earlier, in order to define the measures in (1.1) and (1.2), we establish:

###### Proposition 1.1.

The local time process , , , possesses a jointly continuous modification . In fact, this modification is -Hölder for all .

The proof of this proposition will be given in Appendix C. In the rest of the paper, when we write we always mean its continuous modification .

### 1.2 Relation with other works and further results

The construction of measures supported on the thick points of planar Brownian motion was initiated by the work of Bass, Burdzy and Khoshnevisan [BBK94]. The notion of thick points therein is defined through the number of excursions from which hit the circle before the Brownian motion exits the domain : more precisely, for , they define the set

$$A_a := \left\{ x \in D : \lim_{\varepsilon \to 0} \frac{N_\varepsilon^x}{|\log \varepsilon|} = a \right\}. \tag{1.5}$$

Note that our parametrisation is somewhat different; it is chosen to match the GMC theory. Informally, the relation between the two is given by . Next, we recall that the carrying dimension of a measure is the infimum of those for which there exists a set such that and the Hausdorff dimension of is equal to . They showed:

###### Theorem A (Theorem 1.1 of [BBK94]).
Assume that the domain is the unit disc of and that the starting point is the origin. For all , with probability one there exists a random measure , which is carried by and whose carrying dimension is equal to . In [BBK94], the measure is constructed as the limit of measures as which are defined in a very similar manner as our measures using local times of circles (see the beginning of Section 3 of [BBK94]). We emphasise here the difference of renormalisation: the local times they consider are half of our local times. We also mention that the range for which they were able to show the convergence of is a strict subset of the so-called -phase of the GMC, which would correspond to or . This is the region where is bounded in , see Theorem 3.2 of [BBK94]. Bass, Burdzy and Khoshnevisan also gave an effective description of their measure in terms of a Poisson point process of excursions. More precisely, they define a probability distribution (written in [BBK94], defined just before Proposition 5.1 of [BBK94]) on continuous trajectories which can be understood heuristically as follows. The trajectory of a process under is composed of three independent parts. The first one is a Brownian motion starting from conditioned to visit before exiting and killed at the hitting time of . The third part is a Brownian motion starting from and killed when it exits for the first time . The second part is composed of an infinite number of excursions from generated by a Poisson point process with the intensity measure being the product of the Lebesgue measure on and an excursion law. In Proposition 5.1 of [BBK94], they roughly speaking show that for all , the law of the Brownian motion conditioned on the fact that is in the support of is . This characterises their measure (Theorem 5.2 of [BBK94]). 
Once Theorems 1.1 and 1.2 above are established, we can adapt their arguments for the proof of characterisation to conclude the same thing for our measure : see Proposition 6.2 for a precise statement. A consequence of Proposition 6.2 is the identification of our measure with their measure : ###### Corollary 1.2. If the domain is the unit disc, the origin, and , we have -a.s. . A consequence of Theorem A is a lower bound on the Hausdorff dimension of the set of thick points : for all , a.s. . The upper bound they obtained ([BBK94], Theorem 1.1 (ii)) is: for all , a.s. . They conjectured that the lower bound is sharp and holds for all . In 2001, Dembo, Peres, Rosen and Zeitouni [DPRZ01] answered positively the analogue of this question for thick points defined through the occupation measure of small discs: Ta:={x∈D:limε→01ε2(logε)2∫τ01{Bt∈D(x,ε)}dt=a}. (1.6) In particular, their result went beyond the -phase to cover the entire -phase. This allowed them to solve a conjecture by Erdős and Taylor [ET60]. Very recently, Aïdékon, Hu and Shi [AHS18] made a link between the definitions of thick points of [BBK94] and [DPRZ01] (defined in (1.5) and (1.6) respectively) by constructing measures supported on these two sets of thick points. Their approach is superficially very different from ours but we will see that the measure we obtained is, perhaps surprisingly, related to theirs in a strong way (Corollary 1.3 below). Their measure is defined through a martingale approach for which the interpretation of the approximation is not immediately transparent (see [AHS18] (4.1), (4.2) and Corollary 3.6). Let us describe this relation in more details. For technical reasons, in [AHS18], the boundary of is assumed to be a finite union of analytic curves. To compare our results with theirs, we will also make this extra assumption in the following and we will call such a domain a nice domain. 
Consider a boundary point such that the boundary of is analytic locally around ; we will call such a point a nice point. They denote by the law of a Brownian motion starting from and conditioned to exit through . They showed: ###### Theorem B (Theorem 1.1 of [Ahs18]). For all , with -probability one there exists a random measure which is carried by and by and whose carrying dimension is equal to . Their starting point is the interpretation of the measure of [BBK94] described above in terms of Poisson point process of excursions. For , they define a measure on trajectories similar to mentioned above: the only difference is that the last part of the trajectory is a Brownian motion conditioned to exit the domain through . In a nutshell, they show the absolute continuity of with respect to and define a sequence of measures using the Radon-Nikodym derivative. Their convergence relies on martingales argument rather than on computations on moments. As in [BBK94], they obtain a characterisation of their measure in terms of ([AHS18], Proposition 5.1) matching with ours (Proposition 6.2). As a consequence, we are able to compare their measure with ours. Before stating this comparison, let us notice that we can also make sense of our measure for the Brownian motion conditioned to exit through . Indeed, as noticed in [BBK94], Remark 5.1 (i), our measure is measurable with respect to the Brownian path and defined locally. is thus well defined for any process which is locally mutually absolutely continuous with respect to the two dimensional Brownian motion killed when it exits for the first time the domain . The Brownian motion conditioned to exit through being such a process, makes sense under as a measure on . ###### Corollary 1.3. Let be a nice point and denote by the Poisson kernel of from at , that is the density of the harmonic measure with respect to the Lebesgue measure of at . For all , if , we have -a.s., μγ(dx)=√2πγHD(x0,z)HD(x,z)Ma∞(dx). 
In particular, our measure inherits some properties of the measure obtained in [AHS18]. Recalling the definitions (1.5) and (1.6) of the two sets of thick points and , we have: ###### Corollary 1.4. For all , the following properties hold: (i) Non-degeneracy: with -probability one, . (ii) Thick points: with -probability one, is carried by and by . (iii) Hausdorff dimension: with -probability one, the carrying dimension of is . (iv) Conformal invariance: if is a conformal map between two nice domains, , and if we denote by and the measures built in Theorem 1 for the domains and respectively, we have (μγ,D∘ϕ−1)(dx)law=∣∣ϕ′(ϕ−1(x))∣∣2−γ2/2μγ,D′(dx). Let us mention that we present the previous properties (i)-(iii) as a consequence of Corollary 1.3 to avoid to repeat the arguments, but we could have obtained them without the help of [AHS18]: as in [BBK94], (i) and (ii) follow from the Poisson point process interpretation of the measure (Proposition 6.2) whereas (iii) follows from our second moment computations (Proposition 4.1). On the other hand, it is not clear that our approach yields the conformal invariance of the measure without the use of [AHS18]. Finally, while there are strong similarities between and the GMC measure associated to a GFF (indeed, our construction is motivated by this analogy), there are also essential differences. In fact, from the point of view of GMC theory, the measure is rather unusual in that it is carried by the random fractal set and does not need extra randomness to be constructed, unlike say Liouville Brownian motion or other instances of GMC on random fractals. ### 1.3 Organisation of the paper We now explain the main ideas of our proofs of Theorems 1.1 and 1.2 and how the paper is organised. The overall strategy of the proof is inspired by [Ber17]. 
To prove the convergence of the measures and , it is enough to show that for any suitable and , the real valued random variables and converge in probability which is the content of Proposition 6.1 (we actually show that they converge in ). As in [Ber17], we will consider modified versions and of and by introducing good events (see (2.11) and (2.13)): at a given , the local times are required to be never too thick around at every scale. We will show that introducing these good events does not change the behaviour of the first moment (Propositions 3.1 and 3.2, Section 3) and it makes the sequences and bounded in (Propositions 4.1 and 4.2, Section 4). Furthermore, we will see that these two sequences are Cauchy sequences in (Proposition 5.1, Section 5) implying in particular that they converge in . Section 6 finishes the proof of Theorems 1.1 and 1.2 and demonstrates the links of our work with the ones of [BBK94] and [AHS18] (Corollaries 1.2, 1.3 and 1.4). We now explain a few ideas underlying the proof. If the domain is a disc centred at , then it is easy to check (by rotational invariance of Brownian motion and second Ray-Knight isomorphism for local times of one-dimensional Brownian motion) that the local times have a Markovian structure. More precisely, for all and all , under and conditioned on , (Lx,r(τ)r,r=η′e−s,s≥0)law=(R2s,s≥0) (1.7) with being a zero-dimensional Bessel process starting from . This is an other clue that exponentiating the square root of the local times should yield an interesting object. In the case of a general domain , such an exact description is of course not possible, yet for small enough radii, the behaviour of can be seen to be approximatively given by the one in (1.7). If we assume (1.7) then the construction of is similar to the GMC construction for GFF, with the Brownian motions describing circle averages replaced by Bessel processes of suitable dimension. 
It seems intuitive that the presence of the drift term in a Bessel process should not affect significantly the picture in [Ber17]. To implement our strategy and use (1.7), we need an argument. In the first moment computations (Propositions 3.1 and 3.2), we will need a rough upper bound on the local times; an obvious strategy consists in stopping the Brownian motion when it exits a large disc containing the domain. For the second moment (Proposition 5.1), we will need a much more precise estimate. Let us assume for instance that . We can decompose the local times according to the different macroscopic excursions from to before exiting the domain . To keep track of the overall number of excursions, we will condition on their initial and final points. Because of this conditioning, the local times of a specific excursion are no longer related to a zero-dimensional Bessel process. But if we now condition further on the fact that the excursion went deep inside , it will have forgotten its initial point and those local times will be again related to a zero-dimensional Bessel process: this is the content of Lemma 5.1 and Appendix A is dedicated to its proof. Let us mention that the spirit of Lemma 5.1 can be tracked back to Lemma 7.4 of [DPRZ01]. As we have just explained, we will use (1.7) to transfer some computations from the local times to the zero-dimensional Bessel process. Throughout the paper, we will thus collect lemmas about this process (Lemmas 3.1, 3.2 and 5.2) that will be proven in Appendix B. Of course, we will not be able to transfer all the computations to the zero-dimensional Bessel process, for instance when we consider two circles which are not concentric. 
But we will be able to treat the local times as if they were the local times of a continuous time random walk: for a continuous time random walk starting at a given vertex and killed when it hits for the first time a given set , the time spent by the walk in is exactly an exponential variable which is independent of the hitting point of . We will show that it is also approximatively true for the local times of Brownian motion. This is the content of Section 2. We end this introduction with some notations which will be used throughout the paper. Notations: If , , , and , we will denote by: 1. the first hitting time of . In particular, ; 2. (resp. , ) the open disc (resp. closed disc, circle) with centre and radius ; 3. the Euclidean distance between and . If , we will simply write instead of ; 4. the conformal radius of seen from ; 5. the Green function in : GD(x,y):=π∫∞0ps(x,y)ds, (1.8) where is the transition probability of Brownian motion killed at . We recall its behaviour close to the diagonal (see Equation (1.2) of [Ber16] for instance): GD(x,y)=−log|x−y|+logR(x,D)+u(x,y) (1.9) where as ; 6. the law under which is a zero-dimensional Bessel process starting from ; 7. the set of integers . Finally, we will write , etc, positive constants which may vary from one line to another. We will also write (resp. ) real-valued sequences which go to zero as (resp. which are bounded). If we want to emphasise that such a sequence may depend on a parameter , we will write (resp. ). ## 2 Preliminaries We start off with some preliminary results that will be used throughout the paper. ### 2.1 Green’s function ###### Lemma 2.1. For all , so that and , we have: Ey[Lx,ε(τ∂D(x,r))] =2εlogrε, (2.1) Ey[Lx,ε(τ)] =2ε(log1ε+logR(x,D)+o(1)). (2.2) ###### Proof. We start by proving (2.1). By denoting the transition probability of Brownian motion killed at , we have: Ey[Lx,ε(τ∂D(x,r))]=∫∂D(x,ε)dz∫∞0ds ps(y,z)=1π∫∂D(x,ε)dz GD(x,r)(y,z). 
But the Green function of the disc is equal to: GD(x,r)(y,z)=log∣∣1−¯yz/r2∣∣|y−z|/r. Hence Ey[Lx,ε(τ∂D(x,r))]=2εlogrε+1π∫∂D(x,ε)logε|y−z|dz+1π∫∂D(x,ε)log∣∣1−¯yz/r2∣∣dz. Because the last two integrals vanish, this gives (2.1). The proof of (2.2) is very similar. The only difference is that we consider the Green function of the general domain . Using the asymptotic (1.9), we conclude in the same way. ∎ ### 2.2 Hitting probabilities We now turn to the study of hitting probabilities. The following lemma gives estimates on the probability to hit a small circle before exiting the domain , whereas the next one gives estimates on the probability to hit a small circle before hitting another circle and before exiting the domain . ###### Lemma 2.2. Let . For all small enough, for all such that and for all , we have: Py(τ∂D(x,ε)<τ)=(1+Oη(εlogε))GD(x,y)/log(R(x,D)ε). (2.3) ###### Proof. A similar but weaker statement can be found in [BBK94] (Lemma 2.1) and our proof is really close to theirs. We will take smaller than to ensure that the circle stays far away from . If the domain were the unit disc and the origin, then the probability we are interested in is the probability to hit a small circle before hitting the unit circle. The two circles being concentric, we can use the fact that is a martingale to find that this probability is equal to: Py(τ∂D(0,ε)<τ∂D)=log|y|/logε. (2.4) In general, we come back to the previous situation by mapping onto the unit disc and to the origin with a conformal map . By conformal invariance of Brownian motion, Py(τ∂D(x,ε)<τD)=Pfx(y)(τfx(∂D(x,ε))<τD). As is far away from the boundary of , the contour is included into a narrow annulus D(0,|f′x(x)|ε+cε2)∖D(0,|f′x(x)|ε−cε2) for some depending on . In particular, using (2.4), Py(τ∂D(x,ε)<τD) ≤Pfx(y)(τ∂D(0,∣∣f′x(x)∣∣ε+cε2)<τD) =log|fx(y)|log(|f′x(x)|ε+cε2)=log|fx(y)|log(|f′x(x)|ε)(1+Oη(εlogε)). The lower bound is obtained is a similar manner which yields the stated claim (2.3) noticing that and that . 
∎

###### Remark 2.1.

If are at least at a distance from the boundary of , the quantities $G_D(x,y) - \log|x-y|$, $R(x,D)$ and $R(y,D)$ are bounded away from 0 and from infinity uniformly in (depending on ). We thus obtain the simpler estimate:
$$\mathbb{P}_y(\tau_{\partial D(x,\varepsilon)} < \tau),\ \mathbb{P}_x(\tau_{\partial D(y,\varepsilon)} < \tau) = \Big(1 + O_\eta\Big(\frac{1}{\log\varepsilon}\Big)\Big)\frac{\log|x-y|}{\log\varepsilon}. \tag{2.5}$$
Depending on the level of accuracy we need, we will use either (2.3) or its rougher version (2.5).

For and define
$$p^-_{xy} := \min_{z\in\partial D(x,\varepsilon)} \mathbb{P}_z(\tau_{\partial D(y,\varepsilon)} < \tau) \quad\text{and}\quad p^+_{xy} := \max_{z\in\partial D(x,\varepsilon)} \mathbb{P}_z(\tau_{\partial D(y,\varepsilon)} < \tau),$$
$$p^-_{yx} := \min_{z\in\partial D(y,\varepsilon)} \mathbb{P}_z(\tau_{\partial D(x,\varepsilon)} < \tau) \quad\text{and}\quad p^+_{yx} := \max_{z\in\partial D(y,\varepsilon)} \mathbb{P}_z(\tau_{\partial D(x,\varepsilon)} < \tau).$$

###### Lemma 2.3.

For all , so that and are disjoint and included in , for all ,
$$\frac{\mathbb{P}_z(\tau_{\partial D(x,\varepsilon)} < \tau) - p^+_{yx}\,\mathbb{P}_z(\tau_{\partial D(y,\varepsilon)} < \tau)}{1 - p^+_{yx}p^-_{xy}} \le \mathbb{P}_z\big(\tau_{\partial D(x,\varepsilon)} < \tau \wedge \tau_{\partial D(y,\varepsilon)}\big) \le \frac{\mathbb{P}_z(\tau_{\partial D(x,\varepsilon)} < \tau) - p^-_{yx}\,\mathbb{P}_z(\tau_{\partial D(y,\varepsilon)} < \tau)}{1 - p^-_{yx}p^+_{xy}}. \tag{2.6}$$

###### Proof.

By Markov property and by definition of , we have:
$$\mathbb{P}_z(\tau_{\partial D(x,\varepsilon)} < \tau) = \mathbb{P}_z\big(\tau_{\partial D(x,\varepsilon)} < \tau \wedge \tau_{\partial D(y,\varepsilon)}\big) + \mathbb{P}_z\big(\tau_{\partial D(y,\varepsilon)} < \tau_{\partial D(x,\varepsilon)} < \tau\big) \le \mathbb{P}_z\big(\tau_{\partial D(x,\varepsilon)} < \tau \wedge \tau_{\partial D(y,\varepsilon)}\big) + \mathbb{P}_z\big(\tau_{\partial D(y,\varepsilon)} < \tau \wedge \tau_{\partial D(x,\varepsilon)}\big)\, p^+_{yx}.$$
Similarly,
$$\mathbb{P}_z(\tau_{\partial D(y,\varepsilon)} < \tau) \ge \mathbb{P}_z\big(\tau_{\partial D(y,\varepsilon)} < \tau \wedge \tau_{\partial D(x,\varepsilon)}\big) + \mathbb{P}_z\big(\tau_{\partial D(x,\varepsilon)} < \tau \wedge \tau_{\partial D(y,\varepsilon)}\big)\, p^-_{xy}.$$
Combining those two inequalities yields
$$\mathbb{P}_z(\tau_{\partial D(x,\varepsilon)} < \tau) - p^+_{yx}\,\mathbb{P}_z(\tau_{\partial D(y,\varepsilon)} < \tau) \le \big(1 - p^+_{yx}p^-_{xy}\big)\,\mathbb{P}_z\big(\tau_{\partial D(x,\varepsilon)} < \tau \wedge \tau_{\partial D(y,\varepsilon)}\big)$$
which is the first inequality stated in (2.6). The other inequality is similar. ∎

### 2.3 Approximation of local times by exponential variables

In this subsection, we explain how to approximate the local times by exponential variables. For and any event , define
$$H^y_{x,\varepsilon}(E) := \frac{1}{2}\lim_{\substack{z\in D(x,\varepsilon)\\ z\to y}} \mathbb{P}^*_z(E)/d\big(z,\partial D(x,\varepsilon)\big) + \frac{1}{2}\lim_{\substack{z\notin \overline{D}(x,\varepsilon)\\ z\to y}} \mathbb{P}^*_z(E)/d\big(z,\partial D(x,\varepsilon)\big)$$
where is the probability measure of Brownian motion starting at and killed when it hits for the first time the circle . For , we will denote the harmonic measure of from .

###### Lemma 2.4.

Let and . Assume that and that there exists such that for all and ,
$$(1-u)\,\omega_C(y,E) \le \omega_C(y',E) \le (1+u)\,\omega_C(y,E).$$
Then for all and ,
$$(1-u)\, e^{-\max_{z\in\partial D(x,\varepsilon)} H^z_{x,\varepsilon}(\tau_C<\infty)\, t} \le \mathbb{P}_y\big(L_{x,\varepsilon}(\tau_C) > t \,\big|\, B_{\tau_C}\big) \le (1+u)\, e^{-\min_{z\in\partial D(x,\varepsilon)} H^z_{x,\varepsilon}(\tau_C<\infty)\, t}.$$

###### Remark 2.2.

The previous lemma states that we can approximate by an exponential variable which is independent of
This is similar to the case of random walks on discrete graphs. If we did not condition on , it would not have been necessary to add the multiplicative errors and . This statement without conditioning is also a consequence of Lemma 2.2 (i) of [BBK94].

###### Proof.

Take small enough so that the annulus does not intersect . Consider the different excursions from to : denote and for all ,
$$\sigma^{(1)}_i := \inf\big\{t > \sigma^{(2)}_{i-1} : B_t \in \partial D(x,\varepsilon+r)\big\} \quad\text{and}\quad \sigma^{(2)}_i := \inf\big\{t > \sigma^{(1)}_i : B_t \in \partial D(x,\varepsilon-r)\big\}.$$
The number of excursions before is related to by:
$$L_{x,\varepsilon}(\tau_C) = 4\lim_{r\to 0} r N_r \quad \mathbb{P}_y\text{-a.s.}$$
Hence, for any continuous bounded function, we have by dominated convergence theorem
$$\mathbb{E}_y\big[\mathbf{1}_{\{L_{x,\varepsilon}(\tau_C) > t\}} f(B_{\tau_C})\big] = \lim_{r\to 0} \mathbb{E}_y\big[\mathbf{1}_{\{N_r > \lfloor t/(4r)\rfloor\}} f(B_{\tau_C})\big].$$
Because
$$\mathbb{E}_{B_{\sigma^{(2)}_{\lfloor t/(4r)\rfloor}}}\big[f(B_{\tau_C})\big] \le \big(1 + u + o_{r\to 0}(1)\big)\, \mathbb{E}_y\big[f(B_{\tau_C})\big] \quad \mathbb{P}_y\text{-a.s.},$$
and by a repeated application of Markov property, is at most
$$\big(1 + u + o_{r\to 0}(1)\big)\, \mathbb{E}_y\big[f(B_{\tau_C})\big] \max_{z\in\partial D(x,\varepsilon+r)} \mathbb{P}_z\big(\sigma^{(2)}_1 < \sigma^{(1)}_2 < \tau_C\big)^{\lfloor t/(4r)\rfloor}.$$
As
$$\max_{z\in\partial D(x,\varepsilon+r)} \mathbb{P}_z\big(\sigma^{(2)}_1 < \sigma^{(1)}_2 < \tau_C\big) = 1 - 4r \min_{z\in\partial D(x,\varepsilon)} H^z_{x,\varepsilon}(\tau_C < \infty) + O(r^2),$$
we have obtained
$$\mathbb{E}_y\big[\mathbf{1}_{\{L_{x,\varepsilon}(\tau_C) > t\}} f(B_{\tau_C})\big] \le (1+u)\, \mathbb{E}_y\big[f(B_{\tau_C})\big]\, e^{-\min_{z\in\partial D(x,\varepsilon)} H^z_{x,\varepsilon}(\tau_C < \infty)\, t}$$
which is the required upper bound. The lower bound is obtained in a similar way. ∎

The next lemma explains how to compute the quantities appearing in the previous lemma. Again, particular cases of this can be found in [BBK94] (Lemmas 2.3, 2.5).

###### Lemma 2.5.

Let and such that and denote the distance between and . Assume . Let be either or and denote
$$u = \begin{cases} \dfrac{\varepsilon}{\varepsilon+d} & \text{if } B = A, \\[2mm] \dfrac{\varepsilon}{\varepsilon+d} + \dfrac{\delta}{\varepsilon} & \text{if } B = A \cup \partial D(x,\delta). \end{cases}$$
We have for all , and
$$\omega_{B\cup\partial D}(y,E) = \big(1+O(u)\big)\, \omega_{B\cup\partial D}(y',E). \tag{2.7}$$
Moreover, denoting the first hitting time of after , we have for any ,
$$\frac{1}{H^z_{x,\varepsilon}(\tau \wedge \tau_B < \infty)} = \big(1+O(u)\big) \max_{y\in\partial D(x,\varepsilon)} \mathbb{E}_y\big[L_{x,\varepsilon}(\tau)\big] \Big(1 - \int_{\partial D(x,\varepsilon)} \frac{dy}{2\pi\varepsilon}\, \mathbb{P}_y\big(\tau^B_{\partial D(x,\varepsilon)} < \tau\big)\Big). \tag{2.8}$$

###### Proof.

In this proof, we will consider such that . Let us start by proving (2.7) for . Let . By Markov property applied to the first hitting time of , we have
$$\omega_{A\cup\partial D}(y,E) = \int_{\partial D(x,\varepsilon+\eta)} \omega_{\partial D(x,\varepsilon+\eta)}(y,d\xi)\, \mathbb{P}_\xi\big(B_{\tau_A\wedge\tau} \in E\big).$$
But the measure is explicit and its density with respect to the Lebesgue measure on the circle is equal to
$$\frac{1}{2\pi(\varepsilon+\eta)} \cdot \frac{(\varepsilon+\eta)^2 - |y-x|^2}{|y-\xi|^2} = \frac{1}{2\pi(\varepsilon+\eta)} \Big(1 + O\Big(\frac{\varepsilon}{\varepsilon+\eta}\Big)\Big).$$
Hence, up to a multiplicative error , is independent of
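For completeness, the concentric-circle identity (2.4) used in the proof of Lemma 2.2 can be recovered by optional stopping; a short derivation, assuming only the standard fact that $\log|B_t|$ is a local martingale for planar Brownian motion away from the origin:

```latex
% For y with \varepsilon < |y| < 1, stop \log|B_t| at
% T = \tau_{\partial D(0,\varepsilon)} \wedge \tau_{\partial D}, and write
% p = \mathbb{P}_y(\tau_{\partial D(0,\varepsilon)} < \tau_{\partial D}).
% Optional stopping (the stopped martingale is bounded) gives
\log|y| = \mathbb{E}_y\big[\log|B_T|\big]
        = p\,\log\varepsilon + (1-p)\,\log 1
        = p\,\log\varepsilon,
\qquad\text{hence}\qquad
p = \frac{\log|y|}{\log\varepsilon}.
```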
https://insignificancegalore.net/2007/03/all-gone/
# All gone Managed to find the structure of the poles in my marginal “cirkus tent” distribution. Before and after graphs:
https://www.physicsforums.com/threads/difficult-integral.718561/#post-4549105
# Difficult integral

Mod note: Moved from the math technical sections.

I need to show that $$\int_{-\infty}^\infty \frac{\sin^2 (pa/\hbar)}{p^2} \, dp = \frac{\pi a}{\hbar}.$$ I haven't got a clue how to integrate this function! Any help would be much appreciated, thanks.

Last edited by a moderator:

I've found through the transformation $u=pa/\hbar$ that it is equivalent to showing $$\int_{-\infty}^{\infty} \frac{\sin^2 u}{u^2}\,du = \pi,$$ if that helps anyone.

SteamKing (Staff Emeritus, Homework Helper): All you have to do is figure out p and dp in terms of u and du. The algebra is really simple.

Yeah, I did that substitution myself to make it easier... now how do I show that the integral is equal to pi?

Using the cosine double angle formula, and then integration by parts, this can be reduced to integrals of $$\frac {\sin x} {x}.$$

Under the change of variable x = pa/ħ (so dp = (ħ/a) dx and 1/p² = a²/(ħ²x²)): $$\frac{a}{\hbar}\int_{-\infty}^\infty \frac{\sin^2 x}{x^2}\,dx.$$ As suggested, you can apply integration by parts and then the sine double angle rule to obtain $$\frac{a}{\hbar}\int_{-\infty}^\infty \frac{\sin(2x)}{x}\,dx.$$ The integral below has many different proofs, a few of them are here: $$\int_{-\infty}^\infty \frac{\sin(\omega x)}{x}\,dx=\pi$$ for any ω > 0.

OFF-TOPIC: Can someone tell me what is the difference between a question asked here and a question asked on the homework forums? This is an honest question. For example, shouldn't this question ("Difficult integral") be in the homework section? Sorry for the off topic.

Mark44 (Mentor): Crake, you are correct. As the sticky says at the top of this forum section, "This forum is not for homework or any textbook-style questions." I am moving this thread to the Homework section.
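Both the normalized integral and the Dirichlet-type integral it reduces to can be double-checked symbolically; a quick SymPy sketch (not part of the thread, and it assumes SymPy evaluates both definite integrals in closed form):

```python
import sympy as sp

u = sp.symbols('u', real=True)

# The normalized integral obtained from the substitution u = pa/hbar ...
I = sp.integrate(sp.sin(u)**2 / u**2, (u, -sp.oo, sp.oo))

# ... and the Dirichlet-type integral it reduces to after
# integration by parts and the double-angle identity
J = sp.integrate(sp.sin(2*u) / u, (u, -sp.oo, sp.oo))

print(I, J)  # both should equal pi
```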
http://en.wikipedia.org/wiki/Talk:Ornstein%e2%80%93Uhlenbeck_process
# Talk:Ornstein–Uhlenbeck process

WikiProject Mathematics (Rated Start-class, Mid-importance). This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. Field: Probability and statistics.

WikiProject Finance (Rated Start-class, Low-importance).

WikiProject Statistics (Rated Start-class, Low-importance).

I don't think the AR(1) process is the discrete analog to the Ornstein-Uhlenbeck process, as it has no mean reverting characteristics. 212.39.192.3 (talk) 10:14, 30 December 2009 (UTC)

Can someone clarify how in the equation r_t goes from being in the integrand of a random variable to having a set value? I found this to be a bit confusing. —Preceding unsigned comment added by 209.6.49.37 (talk) 17:53, 23 November 2009 (UTC)

The definition of the Ornstein-Uhlenbeck process is not correct. The Ornstein-Uhlenbeck process does not have to be mean reverting; it can have a general drift process, i.e.
$dr_t = \mu\,dt + \sigma\, dW_t.\,$

However, it is possible to define mean-reverting Ornstein-Uhlenbeck processes like

$dr_t = -\theta (r_t-\mu)\,dt + \sigma\, dW_t.\,$

Usually an Ornstein-Uhlenbeck process refers to processes where the process itself does not appear in the stochastic part of the SDE which describes the dynamics of the process, i.e. σ does not depend on r.

I disagree. The simplest definition of an O-U process is given by $dr_t = -\theta r_t \,dt + \sigma\, dW_t,\,$ which is mean reverting to zero (did you have a typo in your definition of the OU process?). The definition in the article is just a more general case of the OU process than we are normally used to seeing. If you want, just set $\mu = 0$.

How about Stochastic Differential Equations by Bernt Oksendal? There O-U and mean-reverting O-U processes are distinguished. Your point above about setting $\mu=0$ is valid, but it makes pedagogic sense to introduce the $\mu=0$ case first, I think. —Preceding unsigned comment added by 131.111.16.20 (talk) 08:57, 13 April 2009 (UTC)

The covariance function for this process looks strange and quite different from earlier versions of this article, released last summer in particular. Indeed, when |s-t| -> oo, cov(r_s, r_t) should tend to 0, as the autocorrelation effect decreases with the time interval.

The article fails to mention what $dW_t$ is!

Indeed. I added that $W_t$ refers to the Wiener process. -- Jitse Niesen (talk) 05:49, 26 June 2006 (UTC)

## The text below the graph

The text below the graph should read $\theta\ = 1, \mu\ =1.2 , \sigma\ = 0.3$ to be consistent with the legend on the actual graph itself. I made this correction on Aug. 30, 2006 (ko4seki)

The text below the graph gives values for the parameter $a$ but fails to mention what it is. Tim (talk) 07:43, 7 January 2008 (UTC)

Does it really make sense to write a=0 (a.s.) for a simulated trajectory? Since it's a simulation you know the exact value of a?
—Preceding unsigned comment added by 85.24.189.238 (talk) 16:06, 29 December 2010 (UTC)

Also it is nonsense to label a single trajectory as "normally distributed". A single trajectory has a single starting point. An ensemble of trajectories has a distribution. — Preceding unsigned comment added by 199.89.103.11 (talk) 12:06, 17 August 2012 (UTC)

The whole point is that the "disturbance" that drives the trajectory has a distribution. That distribution could be uniform, two-sided exponential, two-sided triangular, Gaussian, or whatever. Please do not give us anything about "ensembles of trajectories has a distribution." That is beside the point. Since the process is Markov, you can pick out any point along the curve, anywhere, and then you have no way to use the past to predict the future of the curve. That is the whole idea of a Markov process. The only things that you know anything about are 1) the location of the point that you chose, and 2) the distribution of the disturbance that drives the curve along. 98.67.108.12 (talk) 01:47, 26 August 2012 (UTC)

## The O-U process is also the same as first-order, low-pass filtered white noise.

Just as the Wiener process is integrated white noise (non-leaky integrator), the Ornstein-Uhlenbeck process is RC low-pass filtered (a.k.a. "leaky integrator") white noise. This is electrical engineering language, but should somehow be included, right? 71.254.8.148 (talk) 04:29, 31 March 2009 (UTC)

It probably should be included. Michael Hardy (talk) 17:42, 13 April 2009 (UTC)

## Make article accessible to physicists, engineers, etc.

Dear mathematicians who wrote this article. As a mathematician, I appreciate your work. But I am also a physicist and have to say that this article is hard to understand for specialists from other sciences such as physics and engineering who also use OUPs. (We do not have to try to make this article accessible to just everybody, but we really should include the other specialists' viewpoints).
I have added a section on application of OUP in physics. This should give a start. Improvements are most welcome. If someone could also write a section on OUPs in engineering and signal processing (see above) that would be great. So that's the new section:

Application in physical sciences: The OUP is a prototype of a noisy relaxation process. Consider for example a Hookean spring with spring constant $k$ whose dynamics is highly overdamped with friction coefficient $\gamma$. In the presence of thermal fluctuations with temperature $T$, the length $x(t)$ of the spring will fluctuate stochastically around the spring rest length $x_0$; its stochastic dynamics is described by an OUP with $\theta=k/\gamma$, $\mu=x_0$, $\sigma=\sqrt{2k_B T/\gamma}$. In physical sciences, the stochastic differential equation of an OUP is rewritten as a Langevin equation $\gamma\dot{x} = -k( x - x_0 ) + \xi$ where $\xi(t)$ is Gaussian white noise with $\langle\xi(t_1)\xi(t_2)\rangle = 2 \gamma k_B T\, \delta(t_1-t_2)$. Benjamin.friedrich (talk) 20:14, 25 April 2010 (UTC)

## STATIONARY 1

This is supposed to be a stationary process. At the very least, that should mean that the probability distributions of xt and xs should be the same. I had imagined that the covariance between values of the process at different times would depend on those times only through how far apart those times were, i.e. it would depend on s and t only through |s − t|. But the article says $\operatorname{cov}(x_s,x_t) = \frac{\sigma^2}{2\theta}\left( e^{-\theta(t-s)} - e^{-\theta(t+s)} \right)$ for s < t, and I am suspicious. Should I be? (OK, next I'll try working through the details myself and maybe figure out what I'm missing.) Michael Hardy (talk) 00:46, 26 April 2010 (UTC)

The confusion arises because, in this part of the article, the process is not being treated as stationary. The workings-out in this section assume that the process starts from a value x0 at time zero. It would be good if something were done to be clearer about this.
193.62.153.194 (talk) 11:25, 28 April 2010 (UTC)

I see your point. The non-stationary part decays exponentially. Maybe there's something to be said for thinking about that, but the covariance formula for the stationary process has an appealing simplicity that helps one understand and remember what this process is about. Michael Hardy (talk) 01:48, 30 April 2010 (UTC)

This issue is very confusing. The second sentence says the process is stationary, but the definition given is of a non-stationary process. I suggest "stationary" -> "asymptotically stationary" to resolve this. Any thoughts? 128.243.253.117 (talk) 17:35, 26 July 2011 (UTC)

I suggest "Not stationary" as a reasonable approximation to the truth. — Preceding unsigned comment added by David in oregon (talkcontribs) 05:28, 25 April 2012 (UTC)

I wouldn't be so hasty; it appears from googling that it can be stationary or non-stationary. Probably better to expand on this. IRWolfie- (talk) 08:07, 25 April 2012 (UTC)

## STATIONARY 2

The presentation in the Wikipedia is very strange. A stationary process MUST have a mean value that is a constant, but the one given there is a function of time. There is no allowance for hand-waving that says, "Oh, the time-varying part decays away." Furthermore, the autocorrelation function MUST depend only on the time difference (it is shift-invariant), and the one here and now is a function of time. THIS IS ABSOLUTELY FORBIDDEN. Electrical engineers call this the "autocorrelation function", so ignore any business about "autocovariance". For a stochastic process that is Markov, Gaussian, and stationary, the autocorrelation function MUST work out to be K exp[-p|t - s|], where K and p are given constants, and t and s are two instants in time. This is the only way that it can work out. Furthermore, the power spectral density should exist in this case and it MUST not be a function of time.
That power spectral density is found by the Wiener-Khinchine theorem by taking the Fourier transform of the autocorrelation function. Some mathematicians, etc., might not give a hoot about the power spectral density, but many engineers, physicists, and chemists do. Besides electrical engineers, many chemical engineers, mechanical engineers, and nuclear engineers care very much about spectral densities. Note that I didn't write "most", but "many". Many physicists do, too. 98.67.108.12 (talk) 01:32, 26 August 2012 (UTC)

Other strange statements: "Over time, the process tends to drift towards its long-term mean: such a process is called mean-reverting." Stationary stochastic processes do not do this. They are just as likely to drift away from their mean values as they are to drift toward them. In fact, if a stationary stochastic process ever reaches its mean value, it is just as likely to "keep right on going" and drift away from its mean value in the opposite direction. There are no "magic pieces of string" that tell them which way to go. It is just that in the long run, they tend to spend about as much time above their mean values as below them. On the other hand, nonstationary processes can be different. However, this article says that it is all about stationary processes. 98.67.108.12 (talk) 02:19, 26 August 2012 (UTC)
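The stationarity questions raised in this discussion are easy to probe numerically: if the process is started from its stationary law N(μ, σ²/(2θ)), the empirical covariance should match (σ²/(2θ))·exp(−θ|t−s|), depending on t and s only through |t−s|. A self-contained NumPy sketch (parameter values are arbitrary, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, mu, sigma = 1.0, 0.0, 1.0
dt, n_steps, n_paths = 0.1, 50, 100_000

# Start in the stationary law N(mu, sigma^2 / (2 theta)) so the process is stationary.
x = mu + np.sqrt(sigma**2 / (2 * theta)) * rng.standard_normal(n_paths)

# Exact one-step OU transition (no time-discretization error).
a = np.exp(-theta * dt)
noise_sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
paths = [x.copy()]
for _ in range(n_steps):
    x = mu + a * (x - mu) + noise_sd * rng.standard_normal(n_paths)
    paths.append(x.copy())
paths = np.array(paths)

# Empirical vs. stationary covariance (sigma^2 / (2 theta)) * exp(-theta |t - s|).
s_idx, t_idx = 10, 25            # times s = 1.0 and t = 2.5
emp_cov = np.cov(paths[s_idx], paths[t_idx])[0, 1]
theo_cov = sigma**2 / (2 * theta) * np.exp(-theta * (t_idx - s_idx) * dt)
print(emp_cov, theo_cov)         # the two values should agree closely
```

Starting instead from a fixed x0 reproduces the extra e^{-θ(t+s)} term quoted from the article, which is exactly the non-stationary transient discussed above.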
https://deertales.blog/latex-vs-word-vs-etc/
The strength of Word is in writing short, relatively simple documents, since you immediately see what your text looks like (WYSIWYG). Writing raw LaTeX is somewhat slower, because you first write the contents only and then you generate an output file with layout in a separate step. LyX is an exception here.

Is LaTeX better than Word?

Yes, LaTeX is a better choice because it provides reliable typesetting of footnotes, bibliographies, images, captions, tables, and cross-references. Microsoft Word also has some similar features, but LaTeX does all of this in a flexible, intelligent, and aesthetically pleasing manner.

Why do people use Word instead of LaTeX?

There are reasons I’d choose to use Word or Docs instead of LaTeX: Short documents, lists, or simple projects are easier to handle in Word. If I’m making a grocery list, LaTeX is overkill. Word is easier to learn and it’s more widely used, especially outside of academia.

Is LaTeX still worth learning?

Yes, it is definitely worth learning TeX and its derivatives. Personally, I don’t think that this is the best way to get started. Instead, start gently by working with LaTeX, load packages and let them do the hard work for you.

Can you use LaTeX in Word?

Word users can also write directly in LaTeX syntax, and then click to convert it into a formatted equation. Microsoft says that “most” LaTeX expressions are supported, although its website lists 20 keywords that are not (such as \degree, the degree symbol).

Do engineers use LaTeX?

Who uses LaTeX? Engineers, physicists, mathematicians, economists, linguists, and more!

What are the disadvantages of LaTeX?

Cons

• Difficult to learn and use. True, but it will save you time in the long run, even if you’re writing a minor thesis.
• Not WYSIWYG. …
• Little support for physical markup. …
• Using non-standard fonts is difficult. …
• No spell checking. …
• Too many packages. …
• LaTeX is for techies only. …
• Encourages structured writing.
Is LaTeX good for math?

LaTeX 2e has a user-friendly interface and good documentation. LaTeX 2e is the TeX document format most commonly used among mathematicians at the present time; thus it is very likely that you will be able to collaborate easily with co-authors.

How long will it take to learn LaTeX?

I have personally concluded it takes about 2-10 hours of intentional use to be able to create acceptable documents for a math major. However, it takes about 200 hours of serious use to get the student to the point where LaTeX is as efficient as either handwriting or using a word processor.

Is LaTeX used in industry?

There are also professional LaTeX/TeX trainers and consultants. Also, some professional typesetters use TeX, LaTeX or ConTeXt for book production (for example, the Germany-based company Werksatz uses ConTeXt). So, yes, LaTeX knowledge can be leverage when applying to jobs, but these jobs are rare.

What are the advantages of LaTeX?

The Benefits of LaTeX

• Beautifully Typeset Output. LaTeX is designed by mathematicians for producing beautifully typeset mathematics. …
• Structured Files. …
• Management of Internal References. …
• Management of Citations. …
• Customizable. …
• User-Friendly. …
• Commonly Used. …
• Easily Converted Files.

How do I convert LaTeX to Word?

How to convert LaTeX to Word:

1. Open free LaTeX website and choose Convert application.
2. Click inside the file drop area to upload LaTeX files or drag & drop LaTeX files.
3. You can upload maximum 10 files for the operation.
4. Click on Convert button. …

What is LaTeX word processing?

LaTeX, which is pronounced «Lah-tech» or «Lay-tech» (to rhyme with «blech» or «Bertolt Brecht»), is a document preparation system for high-quality typesetting. It is most often used for medium-to-large technical or scientific documents but it can be used for almost any form of publishing. LaTeX is not a word processor!
Is LaTeX a free word processor?

LaTeX is not a word processor, but is used as a document markup language. LaTeX is free, open-source software. It was originally written by Leslie Lamport and is based on the TeX typesetting engine by Donald Knuth.

Is LaTeX a markup language?

LaTeX is a document preparation system and document markup language. LaTeX is not the name of a particular editing program, but refers to the encoding or tagging conventions that are used in LaTeX documents.

Which software is used for LaTeX?

Best LaTeX editor software, top pick: TeXmaker

• Free and cross-platform LaTeX editor
• Available for Windows, Mac & Linux
• Powerful and easy to use

Is LaTeX a programming language?

LaTeX is, strictly speaking, a programming language and Turing-complete. Or rather, LaTeX is a macro package for TeX, which is the actual Turing-complete programming language. The typesetting-specific tools LaTeX provides probably can’t, however, be considered a full programming language on their own anymore.

Which LaTeX is best for Windows?

5 Best LaTeX Editors for Windows in 2022:

• Overleaf – A lot of templates.
• TeXmaker – Cross-platform.
• TeXstudio – Integrated viewer.
• TeXnicCenter – Spell checking.
• LyX – Integrated equation editor.

Does LaTeX have a GUI?

TeXworks is a multi-platform, open-source LaTeX editor. It is a LaTeX editing tool that is based on another open-source LaTeX editor – TeXshop. It provides a GUI-based approach to LaTeX editing and features many of the key advantages found in the previously mentioned tools.

Can you use LaTeX offline?

“Run Overleaf offline” is impossible; you can run LaTeX offline (Overleaf runs LaTeX online). So if you correctly install MiKTeX and TeXmaker, you should be able to compile your project locally.

What is MiKTeX and LaTeX?
MiKTeX (made by MiKTeX.com) is an open-source software application that provides the tools necessary to prepare documents using the TeX/LaTeX markup language, as well as a simple TeX editor. It allows you to predefine a set of rules to format your document so that you can focus on the content rather than the appearance.

Which is best for LaTeX?

Best LaTeX Editors For Linux:

1. LyX. LyX is an open-source LaTeX editor. …
2. Texmaker. Texmaker is considered to be one of the best LaTeX editors for the GNOME desktop environment. …
3. TeXstudio. …
4. Gummi. …
5. TeXpen. …
6. Overleaf (ShareLaTeX + Overleaf) …
7. Authorea. …
8. Papeeria.

Is LaTeX a compiler?

Other compilers: The other possible compiler settings are pdfLaTeX (the default), XeLaTeX and LuaLaTeX. You can usually go with pdfLaTeX, but choosing a compiler depends on each project’s needs. LaTeX supports only .

Is LaTeX free software?

LaTeX is available as free software. You don’t have to pay for using LaTeX, i.e., there are no license fees, etc. But you are, of course, invited to support the maintenance and development efforts through a donation to the TeX Users Group (choose LaTeX Project contribution) if you are satisfied with LaTeX.

How do you use math in LaTeX?

To include mathematics in a document, you type the LaTeX source code for the math between dollar signs. For example, \$ax^2+bx+c=0\$ will be typeset as $ax^2+bx+c=0$. If you enclose the code between double dollar signs, the math will be displayed on a line by itself.

Should I use LaTeX?

LaTeX gives the user extremely good control over the formatting of documents. Once it is mastered, it can be much easier to work with than a mainstream word processor when complicated formatting is necessary. LaTeX code is typed into a text file.

Where do you write LaTeX?

To write in LaTeX, you’ll need to install a LaTeX editor.
I use a piece of free and open source software (FOSS) popular with academics called TeXstudio, which runs on Windows, Unix/Linux, BSD, and Mac OS X. You’ll also need to install a distribution of the TeX typesetting system.
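To make the inline and display math syntax described above concrete, here is a minimal complete document (compilable with pdfLaTeX; the quadratic-formula content is just an illustrative choice):

```latex
\documentclass{article}
\begin{document}
% Inline math goes between single dollar signs:
The roots of $ax^2 + bx + c = 0$ are given by
% Display math on its own line (\[ ... \] is the LaTeX form):
\[
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
\]
\end{document}
```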
https://toc.ui.ac.ir/article_24467.html
# The distance spectrum of two new operations of graphs

Document Type: Research Paper

Authors

Key Laboratory of Computing and Stochastic Mathematics (Ministry of Education), College of Mathematics and Statistics, Hunan Normal University, Changsha, Hunan 410081, P. R. China

DOI: 10.22108/toc.2020.116372.1634

Abstract

Let $G$ be a connected graph with vertex set $V(G)=\{v_1, v_2,\ldots,v_n\}$‎. ‎The distance matrix $D=D(G)$ of $G$ is defined so that its $(i,j)$-entry is equal to the distance $d_G(v_i,v_j)$ between the vertices $v_i$ and $v_j$ of $G$‎. ‎The eigenvalues ${\mu_1, \mu_2,\ldots,\mu_n}$ of $D(G)$ are the $D$-eigenvalues of $G$ and form the distance spectrum or the $D$-spectrum of $G$‎, ‎denoted by $Spec_D(G)$‎. ‎In this paper‎, ‎we introduce two new operations $G_1\blacksquare_k G_2$ and $G_1\blacklozenge_k G_2$ on graphs $G_1$ and $G_2$‎, ‎and describe the distance spectra of $G_1\blacksquare_k G_2$ and $G_1\blacklozenge_k G_2$ of regular graphs $G_1$ and $G_2$ in terms of their adjacency spectra‎. ‎By using these results‎, ‎we obtain some new integral adjacency spectrum graphs‎, ‎integral distance spectrum graphs and a number of families of sets of noncospectral graphs with equal distance energy‎.
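For readers unfamiliar with the objects in the abstract, a small illustrative computation (not from the paper): the distance spectrum of the 4-cycle $C_4$, built by running Floyd–Warshall on the adjacency matrix and then taking eigenvalues. $C_4$ happens to have an integral distance spectrum.

```python
import numpy as np
from itertools import product

# Adjacency matrix of the 4-cycle C4 (vertices 0-1-2-3-0).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
n = len(A)

# Floyd-Warshall: start from edge lengths, relax through every vertex k.
D = np.where(A > 0, 1.0, np.inf)
np.fill_diagonal(D, 0.0)
for k, i, j in product(range(n), repeat=3):   # k varies slowest, as required
    D[i, j] = min(D[i, j], D[i, k] + D[k, j])

# The distance spectrum: eigenvalues of the (symmetric) distance matrix.
eigs = np.sort(np.linalg.eigvalsh(D))
print(eigs)  # approximately [-2, -2, 0, 4]: an integral distance spectrum
```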
http://bandicoot.mit.edu/docs/reference/generated/bandicoot.network.assortativity_attributes.html
# assortativity_attributes

bandicoot.network.assortativity_attributes(user)

Computes the assortativity of the nominal attributes. This indicator measures the homophily between the current user and his or her contacts, for each attribute. It returns a value between 0 (no assortativity) and 1 (full assortativity): the fraction of contacts sharing the same attribute value as the user.
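The metric itself is simple to sketch outside the library. The helper below is an illustration of the definition above, not bandicoot's actual implementation; the function name and example values are made up.

```python
def nominal_assortativity(user_value, contact_values):
    """Fraction of contacts whose attribute value matches the user's:
    0.0 means no contact matches, 1.0 means they all do."""
    if not contact_values:
        return None  # undefined when the user has no contacts
    matches = sum(1 for value in contact_values if value == user_value)
    return matches / len(contact_values)

# e.g. a user labeled "student" whose contacts are labeled as below
share = nominal_assortativity("student", ["student", "student", "worker", "student"])
```

Here `share` is 0.75: three of the four contacts share the user's value.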
https://math.stackexchange.com/questions/2956359/probability-that-3-vertices-of-a-2n1-sided-polygon-chosen-at-random-form-vertic
# Probability that 3 vertices of a 2n+1 sided polygon chosen at random form vertices of an isosceles triangle

Consider a $$(2n+1)$$-sided regular polygon. Find the probability that three vertices chosen at random form the vertices of an isosceles triangle.

My Attempt: If I choose $$3$$ vertices containing two sets of $$r$$ consecutive sides (i.e. the $$2r$$ sides are all consecutive), then the triangle formed by the $$3$$ chosen vertices is clearly isosceles. The number of ways to do so is $$\sum_{r=1}^{n}r=\frac{n(n+1)}{2},$$ so the required probability is $$\frac{\sum_{r=1}^{n}r}{\binom{2n+1}{3}}.$$ Is this correct?

Next, drop an axis of symmetry through the chosen point. For each of the remaining $$2n$$ points, it suffices to pick a single vertex from one side of the axis of symmetry (the other vertex is then determined by reflecting across the axis of symmetry). The number of ways to choose 2 points at random from the remaining points is $$\binom{2n}{2},$$ which gives an overall probability of $$\boxed{\frac{n}{\binom{2n}{2}}}.$$
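Counting arguments like these are easy to sanity-check by brute force. The script below is an added illustration, not part of the post. It uses the fact that chord length in a regular $m$-gon is a strictly increasing function of the arc distance $\min(|i-j|,\ m-|i-j|)$, so a vertex triple is isosceles exactly when two of its three arc distances coincide; everything stays in exact integer/rational arithmetic.

```python
from fractions import Fraction
from itertools import combinations

def isosceles_probability(m):
    """Exact probability that 3 uniformly random vertices of a regular
    m-gon form an isosceles (including equilateral) triangle."""
    def arc(i, j):
        d = abs(i - j)
        return min(d, m - d)  # chord length grows with this arc distance

    hits = total = 0
    for a, b, c in combinations(range(m), 3):
        total += 1
        d1, d2, d3 = arc(a, b), arc(b, c), arc(a, c)
        if d1 == d2 or d2 == d3 or d1 == d3:
            hits += 1
    return Fraction(hits, total)
```

For $m = 5$ every triple is isosceles (probability $1$); for $m = 7$ the script returns $3/5$ (21 of the 35 triples).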
https://api-project-1022638073839.appspot.com/questions/how-do-you-solve-log-x-2-2
# How do you solve log x^2 = 2?

##### 1 Answer

Oct 5, 2015

$x = \pm 10$

#### Explanation:

Assuming that's the common (base-10) logarithm, exponentiate both sides with base 10, i.e. raise 10 to the power of each side.

$\log \left({x}^{2}\right) = 2$

${x}^{2} = {10}^{2}$

Take the square root:

$x = \pm 10$

Since ${x}^{2}$ is positive for every nonzero $x$, the only value of $x$ excluded from the domain is $0$, so both answers are valid.
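A quick numerical check (added here for illustration; not part of the original answer) confirms that both roots satisfy the equation, since squaring makes the logarithm's argument positive either way:

```python
import math

# log10(x^2) = 2 should hold for both x = 10 and x = -10.
roots = [x for x in (10, -10) if math.isclose(math.log10(x ** 2), 2.0)]
```

Both candidates survive the filter, while x = 0 would raise a ValueError (log of zero).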
http://gmatclub.com/forum/according-to-a-1996-survey-by-the-national-association-of-66395.html?fl=similar
# According to a 1996 survey by the National Association of… : GMAT Sentence Correction (SC)

According to a 1996 survey by the National Association of College and University Business Officers, more than three times as many independent institutions of higher education charge tuition and fees of under $8,000 a year than those that charge over $16,000.

A. than those that charge
B. than are charging
C. than to charge
D. as charge
E. as those charging

Reply (29 Jun 2008): (E) as those charging. The idiom is "as many ... as". Think of it like this (X, Y are used to simplify): as many X charging Y as those charging over $16,000.

Reply (30 Jun 2008): Straight E. Idiom: as many ... as. Parallelism: as many ... as those ...
https://bioinformatics.stackexchange.com/questions/17631/how-to-get-a-fasta-format-file-with-the-dna-sequences-of-all-annotated-genes/17643
# How to get a FASTA format file with the DNA sequences of all annotated genes?

I am analysing Pyrococcus furiosus DNA sequencing data by considering data published here in NCBI. When I click on "Send to" > "Gene Features" > "FASTA format" I download a file that has the sequences of the genes of this organism, but I realized that some sequences are doubled (that is, the same sequence is present twice with different identifiers). Is there a way to get a file with all annotated genes and their respective DNA sequences, without duplicate sequences, with all sequences written in the 5' to 3' direction (no reverse complements), in a well-ordered way? NCBI (in the link I reported before) indicates 2,128 genes, so I would like a file with all these 2,128 annotated genes and their respective DNA sequences in FASTA format. Do you know of another website, or another place in NCBI, where I can get this kind of file?

EDIT: if you follow the path "Send to" > "Gene Features" > "FASTA format", it now seems that there are exactly 2,128 genes, so this problem has vanished in this case. Anyway, I would appreciate an answer for the future, and for the reverse-complement gene sequences that are still there. Thank you very much.

## 1 Answer

From this SO post: you can use seqkit to remove duplicate sequences with the command below:

seqkit rmdup -s < sequences.txt > out.fa

The rmdup option removes duplicates, and the -s option calls duplicates on the basis of sequence, ignoring differences in headers.

• thank you @Throckmorton I will try! Even if my question was more focused on whether there exists some website (in general) that has a FASTA format file with all annotated genes, without doubled DNA sequences or reverse-complement sequences, so that if I remove the FASTA headers, I would obtain the complete genome DNA sequence. Sep 2 '21 at 13:53
• @Manuela does this species really have nothing but genes in its genome? And are all of its genes unique? In other words, are you sure that concatenating all genes will give you the genome? That seems very unlikely to me, but then I've spent most of my career working on eukaryotes. Sep 11 '21 at 13:03
• @terdon thank you for the observation! I assumed it was so... I am not a biologist :) Sep 11 '21 at 16:41
• Ah, then @Manuela it is almost certainly the case that the genome has a lot of DNA that isn't part of any gene. I don't know archaea at all, but if we take human as an example, genes are only around ~5% of the genome. If you want the genome, then look for the genome and not the genes. Sep 11 '21 at 16:58
• ok @terdon thank you. Sep 11 '21 at 17:56
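If seqkit is unavailable, the same sequence-level de-duplication is easy to sketch in plain Python. This is an added illustration, not part of the original answer; the helper name is made up, and it keeps the first record seen for each distinct sequence, like `rmdup -s` ignoring headers.

```python
def dedup_fasta(lines):
    """Parse FASTA lines into (header, sequence) records, then keep only
    the first record for each distinct sequence string."""
    records, header, seq = [], None, []
    for line in list(lines) + [">"]:  # sentinel flushes the final record
        line = line.strip()
        if line.startswith(">"):
            if header is not None:
                records.append((header, "".join(seq)))
            header, seq = line, []
        elif line:
            seq.append(line)

    seen, unique = set(), []
    for hdr, sequence in records:
        if sequence not in seen:
            seen.add(sequence)
            unique.append((hdr, sequence))
    return unique
```

Note this compares exact strings only; unlike tools built for the job, it will not detect a duplicate stored as a reverse complement.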
https://typeset.io/papers/solid-state-nmr-characterization-of-the-molecular-1sijvuamyi
Journal Article

# Solid-state NMR characterization of the molecular conformation in disordered methyl α-L-rhamnofuranoside

20 Jun 2013 — Journal of Physical Chemistry A (American Chemical Society), Vol. 117, Iss. 26, pp. 5534–5541

TL;DR: A concerted rearrangement of OH hydrogens is proposed to account for the observed dynamic disorder in disordered methyl α-L-rhamnofuranoside, and the relatively minor differences in non-hydrogen atom positions suggest that characterization of a complete crystal structure by X-ray powder diffraction may be feasible.

Abstract: A combination of solid-state 13C NMR tensor data and DFT computational methods is utilized to predict the conformation in disordered methyl α-L-rhamnofuranoside. This previously uncharacterized solid is found to be crystalline and consists of at least six distinct conformations that exchange on the kHz time scale. A total of 66 model structures were evaluated, and six were identified as being consistent with experimental 13C NMR data. All feasible structures have very similar carbon and oxygen positions and differ most significantly in OH hydrogen orientations. A concerted rearrangement of OH hydrogens is proposed to account for the observed dynamic disorder. This rearrangement is accompanied by smaller changes in ring conformation and is slow enough to be observed on the NMR time scale due to severe steric crowding among ring substituents. The relatively minor differences in non-hydrogen atom positions in the final structures suggest that characterization of a complete crystal structure by X-ray powder diffraction may be feasible.

### Introduction

• For the past century the insights provided by crystallography have helped guide the development of science in a remarkably wide range of disciplines.
• A key early development in the pursuit of crystal structure by NMR was the ability to characterize molecular conformation by solid-state NMR.
• Many materials form solids containing molecules that are partially disordered or that consist of mixtures of several lattice types (i.e. mixed-phase materials).
• Conformational characterization for each unique structure in these solids is valuable because such structures provide the initial models needed for crystal structure determination by powder diffraction methods.
• Accordingly, the aim of the present study is to characterize the molecular conformations of one such disordered solid, namely, methyl α-L-rhamnofuranoside.

### Experimental and Theoretical Methods

• Methyl α-L-rhamnofuranoside was synthesized by dissolving L-rhamnose (5.00 g, 30.5 mmol) in MeOH (15 mL) and adding Dowex-50 (H+ form).
• Hz per point was obtained in the acquisition dimension.
• Of equal importance, the C2 13C tensor data computed using this conformation was found to agree well with experimental data.
• To further establish conformations, all combinations of conformations about C4-O, C5-C6 and C6-O had to be considered because none can be considered isolated from the other sites.
• Overall, this process required consideration of 66 conformations rather than the 243 structures that would have been evaluated if isolated regions had not been utilized.

### Characterizing disorder

• Such NMR spectra can arise from either static or dynamic disorder of the individual molecules.
• Fortunately, solids with Z' > 1 can usually be distinguished from disordered materials since they contain resonances that are relatively insensitive to temperature variations and that occur in approximately 1:1 ratios.
• The disorder was found to be dynamic based upon spectra acquired over a range of spinning speeds from 2.3–4.8 kHz.
• In these experiments, the number of resonances per position was found to vary with spinning speed.
• This result is consistent with conformational exchanges that occur at rates comparable to the spinning speed, with significant differences observed at C2, C3, C4, and C8.

### Assignment of 1H and 13C chemical shifts

• Accurate structural studies require that all chemical shifts be correctly assigned to the corresponding nuclei.
• Since these assignments were based exclusively on 1D data, new solution-phase analyses were performed to verify all assignments.
• These new data were acquired in CD3OD and rely primarily on DQF-COSY data (see experimental).
• All assignments and important DQF-COSY correlations are listed in Table 1.
• All chemical shift assignments in solid methyl α-L-rhamnofuranoside were made by comparison to those in solution.

### Conformational predictions

• Prior work has demonstrated that accurate conformations can be predicted from 13C chemical shift tensor information.
• In disordered samples, comparison between experimental and calculated tensor shift values is challenging because multiple resonances are observed for each 13C site in the molecule, and identifying one particular set of lines arising from a single conformation can be difficult.
• Thus, most of the experimental values for a given position display differences less than the error in the calculated values.
• When each of these experimental datasets was compared to the model structures, six conformations were found to be compatible with the experimental data.
• The structures selected show that the disorder in methyl α-L-rhamnofuranoside consists primarily of disorder in hydrogen positions.

### Conclusions

• Solid-state NMR 13C tensor data are paired with computational methods to establish molecular conformation in a small carbohydrate that is dynamically disordered in the solid state and unsuitable for conventional single-crystal diffraction techniques.
• The solid consists of at least six conformations that interconvert on the kHz timescale.
• An evaluation of dozens of candidate structures by computational methods identifies six conformations that are consistent with the NMR data.
• These structures differ primarily at OH hydrogen positions, with heavier atoms exhibiting only minor differences that appear to arise from changes in ring conformation that accompany the OH hydrogen reorientations.
• This study serves to define the structure in the crystallographic asymmetric unit and to identify favorable intramolecular hydrogen-bonding arrangements.

Preprint (http://www.diva-portal.org): This is the submitted version of a paper published in Journal of Physical Chemistry A. Citation for the original published paper (version of record): Harper, J. K., Tishler, D., Richardson, D., Lokvam, J., Pendrill, R. et al. (2013). Solid-State NMR Characterization of the Molecular Conformation in Disordered Methyl alpha-L-Rhamnofuranoside. Journal of Physical Chemistry A, 117(26): 5534-5541. http://dx.doi.org/10.1021/jp4036666. N.B. When citing this work, cite the original published paper. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-92801

Solid-state NMR Characterization of Molecular Conformation in Disordered Methyl α-L-rhamnofuranoside.

James K. Harper,*a Derek Tishler,b David Richardson,a John Lokvam,c Robert Pendrill,d Göran Widmalm.d

a University of Central Florida, Department of Chemistry, 4000 Central Florida Blvd., Orlando, FL 32816, USA. b University of Central Florida, Department of Physics, Orlando, FL 32816, USA. c University of California Berkeley, Department of Biology, Berkeley, CA 94720, USA. d Department of Organic Chemistry, Arrhenius Laboratory, Stockholm University, S-106 91, Stockholm, Sweden.

Abstract. A combination of solid-state 13C NMR tensor data and DFT computational methods is utilized to predict conformation in disordered methyl α-L-rhamnofuranoside.
This previously uncharacterized solid is found to be crystalline and consists of at least six distinct conformations that exchange on the kHz timescale. A total of 66 model structures were evaluated and six were identified as being consistent with experimental 13C NMR data. All feasible structures have very similar heavy-atom positions and differ most significantly in OH hydrogen orientations. A concerted rearrangement of OH hydrogens is proposed to account for the observed dynamic disorder. This rearrangement is accompanied by smaller changes in ring conformation and is slow enough to be observed on the NMR timescale due to severe steric crowding among ring substituents. The relatively minor heavy-atom differences in the final structures suggest that characterization of a complete crystal structure by X-ray powder diffraction may be feasible.

Keywords: 13C tensor principal values, NMR crystallography.

Introduction. For the past century the insights provided by crystallography have helped guide the development of science in a remarkably wide range of disciplines. Several well-established diffraction techniques are now available for determining structure in materials that form crystals and, to a lesser extent, microcrystalline powders. Recently, the methods of solid-state NMR have been directed toward the problem of crystallographic characterization and the prospect of performing "NMR crystallography" has become feasible.1 Presently, most NMR crystallographic studies emphasize the NMR characterization of the molecular structure of an individual molecule or the repeating unit in framework materials.2 The longer-range lattice order needed to identify a space group is usually obtained independently from X-ray powder diffraction methods that rely on the NMR-determined structure as a starting model. However, alternative methods,3 including theoretical crystal structure prediction methods, have also been found to be capable of providing the lattice structure.
2b,4 Crystallographic analysis by NMR spectroscopy is appealing because NMR is capable of characterizing a diverse variety of solids that can be difficult to treat by conventional diffraction methods. A key early development in the pursuit of crystal structure by NMR was the ability to characterize molecular conformation by solid-state NMR.5 Such structural characterizations can now be achieved using a variety of methods and have been used to elucidate structure in proteins,6 inorganic materials2a,2b,2h,2i,2l,2m and smaller organic molecules.7 Presently, these studies have largely been limited to well-ordered crystalline solids and extension to more challenging materials is desirable. For example, many materials form solids containing molecules that are partially disordered or that consist of mixtures of several lattice types (i.e. mixed-phase materials). In these cases NMR has the potential to provide molecular conformation for each unique structure found in the solid because several resonances are usually observed for each atomic position due to the multiple distinct conformations present in the solid. Solid-state NMR is remarkably sensitive to even minor differences in structure and such variations, when present, often result in new resonances for a given site. Thus, disordered or mixed-phase solids usually exhibit several resonances for each atom in the molecule. Conformational characterization for each unique structure in these solids is valuable because such structures provide the initial models needed for crystal structure determination by powder diffraction methods. Accordingly, the aim of the present study is to characterize the molecular conformations of one such disordered solid, namely, methyl α-L-rhamnofuranoside (Figure 1). Presently, there is no known crystal structure for methyl α-L-rhamnofuranoside and this study will provide the structure of the crystallographic asymmetric unit for each conformation present.
Inspection of the 13 C NMR spectrum of solid methyl α–L–rhamnofuranoside shows disorder at several of the carbons and narrow lines characteristic of a crystalline solid. Here, 13 C tensor principal values are measured for all sites. Assignment of conformation is accomplished using a previously described approach 5a that evaluates a wide variety of possible conformations and retains structures having computed tensors that agree with experimental data. ##### Citations More filters Journal ArticleDOI Gregory J. O. Beran1Institutions (1) 23 Mar 2016-Chemical Reviews TL;DR: Electronic structure techniques used to model molecular crystals, including periodic density functional theory, periodic second-order Møller-Plesset perturbation theory, fragment-based electronic structure methods, and diffusion Monte Carlo are reviewed. Abstract: Interest in molecular crystals has grown thanks to their relevance to pharmaceuticals, organic semiconductor materials, foods, and many other applications. Electronic structure methods have become an increasingly important tool for modeling molecular crystals and polymorphism. This article reviews electronic structure techniques used to model molecular crystals, including periodic density functional theory, periodic second-order Moller-Plesset perturbation theory, fragment-based electronic structure methods, and diffusion Monte Carlo. It also discusses the use of these models for predicting a variety of crystal properties that are relevant to the study of polymorphism, including lattice energies, structures, crystal structure prediction, polymorphism, phase diagrams, vibrational spectroscopies, and nuclear magnetic resonance spectroscopy. Finally, tools for analyzing crystal structures and intermolecular interactions are briefly discussed. 
247 citations Journal ArticleDOI Paul Hodgkinson1Institutions (1) TL;DR: It is shown how "NMR crystallography" has been used in a spectrum of applications from resolving ambiguities in diffraction-derived structures to deriving complete structures in the absence of diffraction data. Abstract: Developments of NMR methodology to characterise the structures of molecular organic structures are reviewed, concentrating on the previous decade of research in which density functional theory-based calculations of NMR parameters in periodic solids have become widespread. With a focus on demonstrating the new structural insights provided, it is shown how "NMR crystallography" has been used in a spectrum of applications from resolving ambiguities in diffraction-derived structures (such as hydrogen atom positioning) to deriving complete structures in the absence of diffraction data. As well as comprehensively reviewing applications, the different aspects of the experimental and computational techniques used in NMR crystallography are surveyed. NMR crystallography is seen to be a rapidly maturing subject area that is increasingly appreciated by the wider crystallographic community. 41 citations Journal ArticleDOI TL;DR: This work investigates the solid-state 13C and 15N NMR spectra for multiple crystal forms of acetaminophen, phenobarbital, and testosterone and demonstrates that the use of the hybrid density functional instead of a GGA provides both higher accuracy in the chemical shifts and increased discrimination among the different crystallographic environments. Abstract: Chemical shift prediction plays an important role in the determination or validation of crystal structures with solid-state nuclear magnetic resonance (NMR) spectroscopy. One of the fundamental theoretical challenges lies in discriminating variations in chemical shifts resulting from different crystallographic environments. 
Fragment-based electronic structure methods provide an alternative to the widely used plane wave gauge-including projector augmented wave (GIPAW) density functional technique for chemical shift prediction. Fragment methods allow hybrid density functionals to be employed routinely in chemical shift prediction, and we have recently demonstrated appreciable improvements in the accuracy of the predicted shifts when using the hybrid PBE0 functional instead of generalized gradient approximation (GGA) functionals like PBE. Here, we investigate the solid-state 13C and 15N NMR spectra for multiple crystal forms of acetaminophen, phenobarbital, and testosterone. We demonstrate that the use of th... 24 citations Journal ArticleDOI Sean T. Holmes1, Robert W. Schurko1Institutions (1) Abstract: Nuclear electric field gradient (EFG) tensors obtained from solid-state NMR spectroscopy are highly responsive to variations in structural features. The orientations and principal components of EFG tensors show great variation between different molecular structures; hence, extraction of EFG tensor parameters, either experimentally or computationally, provides a powerful means for structure determination and refinement. Here, dispersion-corrected plane-wave density functional theory (DFT) is used to refine atomic coordinates in organic crystals determined initially through single-crystal X-ray diffraction (XRD) or neutron diffraction methods. To accomplish this, an empirical parametrization of a two-body dispersion force field is illustrated, in which comparisons of experimental and calculated 14N, 17O, and 35Cl EFG tensor parameters are used to assess the quality of energy-minimized structures. The parametrization is based on a training set of 17 organic solids. The analysis is applied subsequently to the... 
19 citations

Journal ArticleDOI TL;DR: It is found that the polymorphic Form A of AZD7624 is maintained at room temperature, although dynamic disorder is present on the NMR timescale, and a method is introduced to enhance confidence in NMR assignments by comparing experimental 13C isotropic chemical shifts against site-specific DFT-calculated shift distributions established using CSP-generated crystal structures.

Abstract: The crystal structure of the Form A polymorph of N-cyclopropyl-3-fluoro-4-methyl-5-[3-[[1-[2-[2-(methylamino)ethoxy]phenyl]cyclopropyl]amino]-2-oxo-pyrazin-1-yl]benzamide (i.e., AZD7624), determined using single-crystal X-ray diffraction (scXRD) at 100 K, contains two molecules in the asymmetric unit (Z' = 2) and has regions of local static disorder. This substance has been in phase IIa drug development trials for the treatment of chronic obstructive pulmonary disease, a disease which affects over 300 million people and contributes to nearly 3 million deaths annually. While attempting to verify the crystal structure using nuclear magnetic resonance crystallography (NMRX), we measured 13C solid-state NMR (SSNMR) spectra at 295 K that appeared consistent with Z' = 1 rather than Z' = 2. To understand this surprising observation, we used multinuclear SSNMR (1H, 13C, 15N), gauge-including projector augmented-wave density functional theory (GIPAW DFT) calculations, crystal structure prediction (CSP), and powder XRD (pXRD) to determine the room temperature crystal structure. Due to the large size of AZD7624 (ca. 500 amu, 54 distinct 13C environments for Z' = 2), static disorder at 100 K, and (as we show) dynamic disorder at ambient temperatures, NMR spectral assignment was a challenge. We introduce a method to enhance confidence in NMR assignments by comparing experimental 13C isotropic chemical shifts against site-specific DFT-calculated shift distributions established using CSP-generated crystal structures. The assignment and room temperature NMRX structure determination process also included measurements of 13C shift tensors and the observation of residual dipolar coupling between 13C and 14N. CSP generated ca. 90 reasonable candidate structures (Z' = 1 and Z' = 2), which, when coupled with GIPAW DFT results, room temperature pXRD, and the assigned SSNMR data, establish Z' = 2 at room temperature. We find that the polymorphic Form A of AZD7624 is maintained at room temperature, although dynamic disorder is present on the NMR timescale. Of the CSP-generated structures, 2 are found to be fully consistent with the SSNMR and pXRD data; within this pair, they are found to be structurally very similar (RMSD16 = 0.30 Å). We establish that the CSP structure in best agreement with the NMR data possesses the highest degree of structural similarity with the scXRD-determined structure (RMSD16 = 0.17 Å), and has the lowest DFT-calculated energy amongst all CSP-generated structures with Z' = 2.

18 citations

##### References

Journal ArticleDOI Abstract: Despite the remarkable thermochemical accuracy of Kohn–Sham density‐functional theories with gradient corrections for exchange‐correlation [see, for example, A. D. Becke, J. Chem. Phys. 96, 2155 (1992)], we believe that further improvements are unlikely unless exact‐exchange information is considered. Arguments to support this view are presented, and a semiempirical exchange‐correlation functional containing local‐spin‐density, gradient, and exact‐exchange terms is tested on 56 atomization energies, 42 ionization potentials, 8 proton affinities, and 10 total atomic energies of first‐ and second‐row systems. This functional performs significantly better than previous functionals with gradient corrections only, and fits experimental atomization energies with an impressively small average absolute deviation of 2.4 kcal/mol.

80,847 citations

Journal ArticleDOI — Chengteh Lee, Weitao Yang, Robert G.
Parr (15 Jan 1988, Physical Review B) TL;DR: Numerical calculations on a number of atoms, positive ions, and molecules, of both open- and closed-shell type, show that density-functional formulas for the correlation energy and correlation potential give correlation energies within a few percent.

Abstract: A correlation-energy formula due to Colle and Salvetti [Theor. Chim. Acta 37, 329 (1975)], in which the correlation energy density is expressed in terms of the electron density and a Laplacian of the second-order Hartree-Fock density matrix, is restated as a formula involving the density and local kinetic-energy density. On insertion of gradient expansions for the local kinetic-energy density, density-functional formulas for the correlation energy and correlation potential are then obtained. Through numerical calculations on a number of atoms, positive ions, and molecules, of both open- and closed-shell type, it is demonstrated that these formulas, like the original Colle-Salvetti formulas, give correlation energies within a few percent.

77,776 citations

Journal ArticleDOI — John P. Perdew, Yue Wang (15 Jun 1992, Physical Review B) TL;DR: A simple analytic representation of the correlation energy for a uniform electron gas, as a function of density parameter and relative spin polarization $\zeta$, which confirms the practical accuracy of the VWN and PZ representations and eliminates some minor problems.

Abstract: We propose a simple analytic representation of the correlation energy $\varepsilon_c$ for a uniform electron gas, as a function of density parameter $r_s$ and relative spin polarization $\zeta$. Within the random-phase approximation (RPA), this representation allows for the $r_s^{-3/4}$ behavior as $r_s \to \infty$. Close agreement with numerical RPA values for $\varepsilon_c(r_s, 0)$, $\varepsilon_c(r_s, 1)$, and the spin stiffness $\alpha_c(r_s) = \partial^2 \varepsilon_c(r_s, \zeta=0)/\partial \zeta^2$, and recovery of the correct $r_s \ln r_s$ term for $r_s \to 0$, indicate the appropriateness of the chosen analytic form. Beyond RPA, different parameters for the same analytic form are found by fitting to the Green's-function Monte Carlo data of Ceperley and Alder [Phys. Rev. Lett. 45, 566 (1980)], taking into account data uncertainties that have been ignored in earlier fits by Vosko, Wilk, and Nusair (VWN) [Can. J. Phys. 58, 1200 (1980)] or by Perdew and Zunger (PZ) [Phys. Rev. B 23, 5048 (1981)]. While we confirm the practical accuracy of the VWN and PZ representations, we eliminate some minor problems with these forms. We study the $\zeta$-dependent coefficients in the high- and low-density expansions, and the $r_s$-dependent spin susceptibility. We also present a conjecture for the exact low-density limit. The correlation potential $\mu_c^{\sigma}(r_s, \zeta)$ is evaluated for use in self-consistent density-functional calculations.

19,831 citations

Journal ArticleDOI Abstract: A unique mean plane is defined for a general monocyclic puckered ring. The geometry of the puckering relative to this plane is described by amplitude and phase coordinates which are generalizations of those introduced for cyclopentane by Kilpatrick, Pitzer, and Spitzer.
Unlike earlier treatments based on torsion angles, no mathematical approximations are involved. A short treatment of the four-, five-, and six-membered ring demonstrates the usefulness of this concept. Finally, an example is given of the analysis of crystallographic structural data in terms of these coordinates. Although the nonplanar character of closed rings in many cyclic compounds has been widely recognized for many years, there remain some difficulties in its quantitative specification. An important first step was taken by Kilpatrick, Pitzer, and Spitzer in their 1947 discussion of the molecular structure of cyclopentane. Starting with the normal modes of out-of-plane motions of a planar regular pentagon, they pointed out that displacement of the jth carbon atom perpendicular to the plane could be written

$$z_j = \left(\tfrac{2}{5}\right)^{1/2} q \cos\!\left(2\phi + \tfrac{4\pi j}{5}\right) \quad (1)$$

where $q$ is a puckering amplitude and $\phi$ is a phase angle describing various kinds of puckering. By considering changes in an empirical potential energy for displacements perpendicular to the original planar form, they gave reasons to believe that the lowest energy was obtained for a nonzero value of $q$ (finite puckering) but that this minimum was largely independent of $\phi$. Motion involving a change in $\phi$ at constant $q$ was described as pseudorotation. Subsequent refinement of this work has involved models in which constraints to require constant bond lengths are imposed, and extensions to larger rings and some heterocyclic systems are considered. Although the correctness of the model of Kilpatrick et al. and the utility of the $(q, \phi)$ coordinate system is generally accepted, application to a general five-membered ring with unequal bond lengths and angles is not straightforward. Given the Cartesian coordinates for the five atoms (as from a crystal structure), determination of puckering displacements $z_j$ requires specification of the plane $z = 0$. A least-squares choice (minimization of $\sum_j z_j^2$) is one possibility, but the five displacements relative to this plane cannot generally be expressed in terms of two parameters $q$ and $\phi$ according to eq 1. An attempt to define a generalized set of puckering coordinates which avoids these difficulties was made by Geise, Altona, Romers, and Sundaralingam. Their quantitative description of puckering in five-membered rings involves the five torsion angles $\theta_j$ rather than displacements perpendicular to some plane. These torsion angles are directly derivable from the atomic coordinates and are all zero in the planar form. They proposed a relationship of the form ...

6,267 citations

Journal ArticleDOI Abstract: A simple two pulse phase modulation (TPPM) scheme greatly reduces the residual linewidths arising from insufficient proton decoupling power in double resonance magic angle spinning (MAS) experiments. Optimization of pulse lengths and phases in the sequence produces substantial improvements in both the resolution and sensitivity of dilute spins (e.g., 13C) over a broad range of spinning speeds at high magnetic field. The theoretical complications introduced by large homo‐ and heteronuclear interactions among the spins, as well as the amplitude modulation imposed by MAS, are explored analytically and numerically. To our knowledge, this method is the first phase‐switched sequence to exhibit improvement over continuous‐wave (cw) decoupling in a strongly coupled homogeneous spin system undergoing sample spinning.

1,980 citations
https://www.gradesaver.com/textbooks/math/other-math/CLONE-547b8018-14a8-4d02-afd6-6bc35a0864ed/chapter-8-geometry-8-8-pythagorean-theorem-8-8-exercises-page-594/47
## Basic College Mathematics (10th Edition) The catcher throws the ball $127.3$ ft from home plate to second base. 1. Use the Pythagorean Theorem to solve for the distance from second base to home plate. Let $L =$ the distance between the two bases. $L = \sqrt{90^{2} + 90^{2}}$ $L = 127.27922...$ ft $L \approx 127.3$ ft
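The same arithmetic can be checked with a short script (a generic sketch, not part of the textbook solution):

```python
import math

# Both legs of the right triangle are the 90 ft base paths.
leg = 90

# Pythagorean theorem: L = sqrt(90^2 + 90^2)
L = math.sqrt(leg ** 2 + leg ** 2)  # equivalently math.hypot(leg, leg)

print(round(L, 1))  # 127.3
```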
http://math.stackexchange.com/questions/217570/show-that-a-function-is-uniformly-continuous-where-to-start
# Show that a function is uniformly continuous. Where to start?

Show that the following function is uniformly continuous on $(-1,1)$ $$f(x) = \begin{cases} {x \sin \frac{1} {x}}, & \text{ } x\in(-1,0)\cup(0,1) \\ 0, & \text{ }x = 0. \end{cases}$$ We cannot use the theorem that a continuous function on a compact set K is uniformly continuous on K, because we don't have a compact set. I was told the following hint: "if a function is uniformly continuous on a set then it is also uniformly continuous on any subset of this set". I don't know exactly what to do with this information, can you help me? :) I know the definition of uniform continuity, I (should) know what open, closed, compact sets are. -

Hint: Show that you can extend the definition to $[-1,1]$ and that it is continuous on the closed interval. Then use the theorem about uniform continuity. -

Very fast way to have the conclusion (+1). Good to know that $f(x)=\sin(1/x)$ fails to be U.C. in $(-1,1)$. – Babak S. Oct 20 '12 at 19:14

So you suggest that I define f(1)= sin(1) and f(-1)= - sin(-1), then show that the function is continuous. Continuous functions on a compact set are uniformly continuous, and then use the hint, namely (-1,1) is a subset of [-1,1]? – Joyeuse Saint Valentin Oct 20 '12 at 19:33

Yes, there is no problem with that. The "magic" happens at $x=0$ anyway. – Hagen von Eitzen Oct 20 '12 at 19:39

@Hempo: Yes, exactly. – Asaf Karagila Oct 20 '12 at 20:15

@Hempo: Is there a particular reason for which you unaccepted the answer? – Asaf Karagila Nov 3 '12 at 22:26

Define $f(1) = \sin(1)$ and $f(-1)=-\sin(-1)$

(1) The function is continuous at $x=0$. Proof: $|f(x)-f(0)|=|x \sin(1/x)-0| \le |x|$. Given $\epsilon \gt 0$, set $\delta=\epsilon$, so that whenever $|x-0| = |x| \lt \delta$ it follows that $|f(x) - f(0)| \lt \epsilon$.
Thus, f is continuous at $x=0$ $\square$

(2) The function is continuous everywhere when $x \not= 0$. Proof: $1/x$ is continuous when $x \not= 0$, while $\sin(u)$ is everywhere continuous. A composition of two continuous functions is continuous. $x$ is continuous everywhere. A product of two continuous functions is continuous. So our function $x \sin(1/x)$ is continuous everywhere if $x \not = 0$ $\square$

(3) $f(x)=x \sin(1/x)$ is continuous on $[-1,1]$: Proof: The combination of these two facts means that $f(x)$ is everywhere continuous. If a function is continuous everywhere, it is certainly continuous on the subset $[-1,1]$. So we conclude f is continuous on $[-1,1]$. Because f is continuous on a compact set, we can conclude that f is uniformly continuous on $[-1,1]$. If a function is uniformly continuous on a compact set, we may conclude that it is also uniformly continuous on a subset of that compact set. $(-1,1)$ is a subset of $[-1,1]$. Hence, f is uniformly continuous on $(-1,1)$ $\square$

(Please improve my proof if you think it's necessary) -
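The key estimate in step (1), $|f(x)-f(0)| \le |x|$, is easy to sanity-check numerically (a sketch to illustrate the proof, not a substitute for it):

```python
import math

def f(x):
    """x*sin(1/x), extended by f(0) = 0."""
    return x * math.sin(1.0 / x) if x != 0 else 0.0

# The proof chooses delta = epsilon because |f(x) - f(0)| <= |x|.
# Verify the bound on a grid of sample points in (-1, 1).
grid = [i / 1000.0 for i in range(-999, 1000)]
bound_holds = all(abs(f(x) - f(0)) <= abs(x) for x in grid)
print(bound_holds)  # True
```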
https://physics.stackexchange.com/tags/universe/hot
# Tag Info

Accepted ### Will a light come back within finite years? We don't know if the universe is finite or infinite. John Rennie's answer alludes to this in the latter parts - it is either infinite (and has always been infinite) or finite (but without an edge, so ...

Accepted ### What happens when the universe runs out of fuel? Then star formation ceases and the universe goes dark. At this stage of the universe's evolution, there'll still be plenty of hydrogen, they just don't form stars. In theory you can create hydrogen ...

### What happens when the universe runs out of fuel? Star formation will die out long before all the hydrogen runs out. Much of it will be trapped in very low mass stars and lots more will be in the very sparse intergalactic medium (where $\sim 50$% of ...

### What happens when the universe runs out of fuel? Well, the fate of the universe has a lot of possible scenarios, of which periodic expansion/contraction is very unlikely, because the universe expands in an accelerated fashion, and it doesn't look that ...

### Space-time continuum expansion But when we have an infinite number of points, changing the length between them simply does not make any sense. It absolutely does make sense. Infinite sets need care. Many of the ideas that work for ...

### Space-time continuum expansion An infinite number of points is not a definite number. Don't think of it like a normal number. No matter how small a distance you make between any two points there is always an infinite number of ...

### Will a light come back within finite years? Well, the two answers are not talking about the same thing. What Javier was describing is what the asker was inquiring about in their question: a "finite but unbounded" periodic universe. ...

### Hubble expansion rate and reaction rates This is a question that puzzled me once. This is simply a misleading statement.
For now, a more satisfying one would be: Once the interaction rate drops to $\Gamma\approx H$, the neutrino, for instance, ...

1 vote ### What is there at a point the universe hasn't expanded past yet? If the universe is constantly expanding that means that there is a point the universe hasn't expanded past... Your premise is not necessarily correct. with that what would be past that point? ...

1 vote Accepted ### If I moved in a straight line forever, would I hit something? No. The question is a bit vague, but I try to answer it in spirit. Remember Olbers' paradox, which asks why the sky is dark at night. In an infinitely large, infinitely old universe, one should see ...

1 vote You have the Hubble constant, but you have to integrate that to get the cosmological parameter: \begin{align} \frac{\dot a}{a} &= - \sqrt{\beta}\tan \left(\sqrt{\beta}t\right)\\ \ln a &= \ln ...

1 vote ### Are there any mathematically conceivable arrangements of an indeterministic universe that are not physically possible? The answer to your first question depends on what you mean by "mathematically conceivable". If your definition of "mathematically conceivable" is wide enough then, yes, there will ...

1 vote ### What is meant when it is said that the universe is homogeneous and isotropic? Most of modern cosmology is based on the Cosmological Principle, which states that the spatial distribution of matter in the Universe is homogeneous and isotropic when viewed at a sufficiently large ...
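The truncated integration in the last answer can be checked independently: if $\dot a/a = -\sqrt{\beta}\,\tan(\sqrt{\beta}\,t)$, then $a(t) \propto \cos(\sqrt{\beta}\,t)$ has exactly this logarithmic derivative. A quick numerical sketch (the value of $\beta$ is an arbitrary illustration, not from the answer):

```python
import math

beta = 0.3                  # arbitrary illustrative value
sb = math.sqrt(beta)

def a(t):
    # Candidate solution of a'/a = -sqrt(beta) * tan(sqrt(beta) * t)
    return math.cos(sb * t)

# Central-difference logarithmic derivative a'/a at a sample time.
t, h = 0.7, 1e-6
log_deriv = (a(t + h) - a(t - h)) / (2 * h) / a(t)

expected = -sb * math.tan(sb * t)
print(abs(log_deriv - expected) < 1e-6)  # True
```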
https://www.quizover.com/online/course/1-3-key-discrete-time-test-signals-by-openstax
# 1.3 Key discrete-time test signals

In our study of discrete-time signals and signal processing, there are six very important signals that we will use both to illustrate signal processing concepts and to probe or test signal processing systems: the delta function, the unit step function, the unit pulse function, the real exponential function, sinusoidal functions, and complex exponential functions. This module will consider the first four; sinusoids and complex exponentials are particularly important, so a separate module will cover them. Each of these signals will be introduced as infinite-length signals, but they all have straightforward finite-length equivalents.

## The discrete-time delta function

The delta function is probably the simplest nontrivial signal. It is represented mathematically with (no surprise) the Greek letter delta: $\delta[n]$. It takes the value 0 for all time points, except at the time point 0 where it peaks up to the value 1: $\delta[n]=\begin{cases}1&n=0 \\ 0&\textrm{otherwise}\end{cases}$ In a variety of important settings, we will often see the delta function shifted by a particular time value. The delta function $\delta[n-m]$ is 0, except for a peak of 1 at time $m$. One of the reasons the shifted delta function is so useful is that we can use it to select, or sample, a value of another signal at some defined time value. Suppose we have some signal $x[n]$, and we would like to isolate that signal's value at time $m$. What we can do is multiply that signal by a shifted delta signal. We can say $y[n]=x[n]\delta[n-m]$, but since that $y[n]$ will be zero for all $n$ except at $n=m$, it is equivalent to express it as $y[n]=x[m]\delta[n-m]$, where now $x[m]$ is no longer a function, but a constant. The following figure shows how this operation isolates a particular time sample of $x[n]$:

## The unit step function

The unit step function can be thought of like turning on a switch.
Usually identified as $u[n]$, it is $0$ for all $n \lt 0$, and then at $n=0$ it "switches on" and is $1$ for all $n \geq 0$: $u[n]=\begin{cases}0&n \lt 0\\ 1&n\geq 0\end{cases}$ As with the delta function, it will also be useful for us to shift the step function. And, as you might have guessed, we can use a shifted step function in a similar way to the delta function by multiplying it with another signal. Whereas the delta function selected a single value of a certain signal (zeroing out the rest), the step function isolates a portion of a signal after a given time. Below, a step function is used to zero out all the values of $x[n]$ for $n\lt 5$, keeping the rest. Supposing a signal $x[n]$ were not causal, setting $m$ to zero and performing the operation $x[n]u[n]$ would zero out all values of $x[n]$ before $n=0$, thereby making the result causal.

## The unit pulse function

The unit pulse $p[n]$ is very similar to the unit step function in how it "switches on" from 0 to 1, but then it also "switches off" at a later time. We will say it "switches on" at time $N_1$, and "off" at time $N_2$: $p[n] = \begin{cases}0&n\lt N_1 \\ 1&N_1 \leq n \leq N_2 \\ 0&n\gt N_2\\ \end{cases}$ Of course, rather than use the above piece-wise notation, it is also possible to express the pulse as the difference of two step functions: $p[n] = u[n-N_1]- u[n-(N_2+1)]$.

## The real exponential function

Finally, we have the real exponential function, which takes a real number $a$ (that we are going to assume is nonnegative) and raises it to the power of $n$, where $n$ is the time index: $r[n] = a^n$, $a\in R$, $a\geq 0$. So at $n=0$, $r[n]=a^0=1$, at $n=1$ it equals $a$, is $a^2$ at $n=2$, and so on. As the name suggests, the signal will exponentially increase or decrease, depending on the value of $a$.
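These four signals, and the sampling property $y[n]=x[n]\,\delta[n-m]=x[m]\,\delta[n-m]$, can be sketched in a few lines of Python (an illustrative sketch; the function names are ours, not from this module):

```python
def delta(n, m=0):
    """Shifted delta function: 1 at n = m, 0 elsewhere."""
    return 1 if n == m else 0

def u(n, m=0):
    """Shifted unit step: 0 for n < m, 1 for n >= m."""
    return 1 if n >= m else 0

def pulse(n, n1, n2):
    """Unit pulse: 'on' for n1 <= n <= n2, written as the
    difference of two steps, u[n - n1] - u[n - (n2 + 1)]."""
    return u(n, n1) - u(n, n2 + 1)

def r(n, a):
    """Real exponential a**n for a real, nonnegative a."""
    return a ** n

# Sampling property: multiplying by delta[n - m] isolates x[m].
x = [3, 1, 4, 1, 5, 9]
m = 2
y = [x[n] * delta(n, m) for n in range(len(x))]
print(y)                                   # [0, 0, 4, 0, 0, 0]
print([pulse(n, 1, 3) for n in range(5)])  # [0, 1, 1, 1, 0]
```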
http://mathhelpforum.com/advanced-applied-math/279859-how-find-integral-following-equation.html
# Thread: How to find integral of following equation?

1. ## How to find integral of following equation?

I am badly stuck in some integration here and will appreciate any help out of it. $$\int^\infty_0 f(r)\, dr = \int^\infty_0 \frac{Ar}{1+Cr^\alpha} e^{-Br^2}\, dr$$ If I let $u = Br^2$, then I get $$= \frac{A}{2B} \int^\infty_0 \frac{\exp(-u)}{1+C\,(u/B)^{\alpha/2}}\, du$$ But I am stuck while proceeding further. Any idea?

2. ## Re: How to find integral of following equation?

Hey sjaffry. Hint - Try doing integration by parts.
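The substitution itself can be verified numerically. With $u = Br^2$ we have $r = (u/B)^{1/2}$, so $r^\alpha = (u/B)^{\alpha/2}$ and $r\,dr = du/(2B)$, giving $\frac{A}{2B}\int_0^\infty \frac{e^{-u}}{1+C(u/B)^{\alpha/2}}\,du$. A sketch with arbitrary sample constants (not from the thread):

```python
import math

# Arbitrary sample constants, for illustration only.
A, B, C, alpha = 2.0, 1.5, 0.8, 3.0

def midpoint(g, lo, hi, n=100000):
    """Plain midpoint-rule quadrature on [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(g(lo + (k + 0.5) * h) for k in range(n))

# Original integrand in r; e^{-B r^2} makes the tail negligible.
f_r = lambda r: A * r / (1 + C * r ** alpha) * math.exp(-B * r ** 2)

# After u = B r^2:  (A / 2B) e^{-u} / (1 + C (u/B)^{alpha/2})
f_u = lambda u: A / (2 * B) * math.exp(-u) / (1 + C * (u / B) ** (alpha / 2))

I_r = midpoint(f_r, 0.0, 10.0)
I_u = midpoint(f_u, 0.0, 60.0)
print(abs(I_r - I_u) < 1e-5)  # True
```

Both quadratures agree, which confirms the change of variables; it does not, of course, produce a closed form.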
https://www.ideals.illinois.edu/handle/2142/111307
## Files in this item

application/pdf 5516.pdf (22kB) (no description provided) PDF

## Description

Title: Double-resonance Spectroscopy With A Continuum: Application To The Mg($3d_\delta$)Ar$^+$ $^2\Delta$ State Of MgAr$^+$

Author(s): Génévriez, Matthieu

Contributor(s): Merkt, Frédéric; Berglitsch, Thomas; Wehrli, Dominik

Subject(s): Ions

Abstract: Whereas the electronic ground states of a large number of small molecular cations have been spectroscopically characterized, much less is known concerning electronically excited states, in particular because of the low densities in which molecular ions can be formed and because of the high excitation energies required. Electronically excited states of molecular ions are commonly studied using resonance-enhanced multiphoton dissociation (REMPD) [1] or isolated-core multiphoton Rydberg dissociation (ICMRD) spectroscopy [2]. These techniques rely on the fact that the excited molecular ion either predissociates rapidly or can be efficiently excited to a dissociative state by further photoabsorption. The Mg($3d_\delta$)Ar$^+$ $^2\Delta$ state of MgAr$^+$ is an example of an electronic state that does not fulfill these conditions and cannot be studied with conventional REMPD or ICMRD. We will report on the experimental study of this state using a double-resonance spectroscopic technique we have developed. With this technique, MgAr$^+$ molecules were prepared in their electronic ground state and then coupled, *via* an intermediate state, to *both* the Mg($3d_\delta$)Ar$^+$ $^2\Delta$ state and a predissociation continuum. In contrast to double-resonance spectroscopy involving only bound states, the presence of a predissociation continuum leads to a rich variety of spectral lineshapes, which exhibit asymmetric profiles reminiscent of Fano lineshapes. We carried out detailed simulations of these lineshapes using a quantum-optics-based effective Hamiltonian and solving the time-dependent Schrödinger equation.
Agreement with experimental spectra is excellent and shows that the lineshapes are the result of quantum interferences between the different photoexcitation pathways leading to dissociation. We will discuss how the lineshapes can be controlled with external parameters such as laser pulse energies and wavenumbers in order, \textit{e.g.}, to facilitate spectroscopic analysis. The analysis of the rovibrational structure of the Mg($3d_\delta$)Ar$^+$ $^2\Delta$ electronic state will be presented, with particular emphasis on the anomalous behavior of the splitting between its two spin-orbit components. \noindent [1] P.O. Danis, T. Wyttenbach and J.P. Maier, J. Chem. Phys. \textbf{88}, 3451–3455 (1988) \noindent [2] M. Génévriez, D. Wehrli and F. Merkt, Mol. Phys. \textbf{118}, e1703051 (2019) Issue Date: 2021-06-22 Publisher: International Symposium on Molecular Spectroscopy Genre: Conference Paper / Presentation Type: Text Language: English URI: http://hdl.handle.net/2142/111307 Date Available in IDEALS: 2021-09-24 
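The asymmetric profiles the abstract compares to Fano lineshapes have a standard textbook closed form. As a quick illustration only (this is the generic Fano formula, not the paper's effective-Hamiltonian simulation), the profile in the reduced energy $\epsilon = 2(E - E_0)/\Gamma$ with asymmetry parameter $q$ is:

```python
import numpy as np

def fano(E, E0, gamma, q):
    """Generic Fano profile: sigma(eps) = (q + eps)^2 / (1 + eps^2),
    with reduced energy eps = 2*(E - E0)/gamma."""
    eps = 2.0 * (E - E0) / gamma
    return (q + eps) ** 2 / (1.0 + eps ** 2)

E = np.linspace(-5.0, 5.0, 1001)
profile = fano(E, E0=0.0, gamma=1.0, q=1.0)
# q = 1 gives a strongly asymmetric line: an interference zero at eps = -q
# and a maximum of 1 + q^2 at eps = 1/q
print(profile.min(), profile.max())
```

For large $|q|$ the profile tends to a symmetric Lorentzian; the zero at $\epsilon = -q$ is the signature of interference between the bound-state and continuum pathways.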
http://www.variousconsequences.com/2008/10/simple-bayes.html
## Sunday, October 26, 2008

### Simple Bayes

I like Bayes' theorem; it's really useful. The most intuitive and accessible explanation I've found of using Bayes' theorem to solve a problem is in Russell and Norvig's classic, Chapter 20 (pdf) (I just own the first edition; the second edition looks even better). The initial example they give is about pulling different flavoured candy out of a sack (remember the balls and urn from your basic stats?). They also provide a really good discussion showing how standard least-squares regression is a special case of maximum likelihood when the data are generated by a process with Gaussian noise of fixed variance. Their first example is about estimating parameters in a discrete distribution of candy, but we can apply the same math to estimating the variance of a continuous distribution. Estimating variance is important; lots of times in industrial or business settings the variance of a thing matters as much as or more than the average, just check out all of the press those Six Sigma guys get. That's because it gives us insight into our risk. It helps us answer questions like, "What's our probability of success?" And maybe, if we're lucky, "What things is that probability sensitive to?"

Bayes' theorem is a very simple equation:

P(h|d) = P(d|h) P(h) / P(d)

where P(h) is the prior probability of the hypothesis, P(d|h) is the likelihood of the data given the hypothesis, and P(h|d) is the posterior probability of the hypothesis given the data.

Octave has plenty of useful built-in functions that make it easy to play around with some Bayesian estimation. We'll set up a prior distribution for what we believe our variance to be with chi2pdf(x,4), which gives us a Chi-squared distribution with 4 degrees of freedom. We can draw a random sample from a normal distribution with the normrnd() function, and we'll use 5 as our "true" variance. That way we can see how our Bayesian and our standard frequentist estimates of the variance converge on the right answer.
The standard estimate of variance is just var(d), where d is the data vector. The likelihood part of Bayes' theorem is:

% likelihood( d | M ) = PI_i likelihood( d_i | M_j )
for j = 1:length(x)
  lklhd(j) = prod( normpdf( d, 0, sqrt( x(j) ) ) );
endfor
lklhd = lklhd / trapz(x, lklhd); % normalize it

Then the posterior distribution is:

% posterior( M | d ) = prior( M ) * likelihood( d | M )
post_p = prior_var .* lklhd;
post_p = post_p / trapz(x, post_p); % normalize it

Both estimates of the variance converge on the true answer as n approaches infinity. If you have a good prior, the Bayesian estimate is especially useful when n is small. It's interesting to watch how the posterior distribution changes as we add more samples from the true distribution.

The great thing about Bayes' theorem is that it provides a continuous bridge from what we think we know to reality. It allows us to build up evidence and describe our knowledge in a consistent way. It's based on the fundamentals of basic probability and was all set down in a few pages by a nonconformist Presbyterian minister and published after his death in 1763.

Octave file for the above calculations: simple_bayes.m
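For readers without Octave, the same estimate can be sketched in Python/NumPy (a rough translation, not the original simple_bayes.m; it works with the log-likelihood to dodge the underflow that a plain product of densities can suffer for larger n):

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 5.0
d = rng.normal(0.0, np.sqrt(true_var), size=50)   # data from N(0, 5)

v = np.linspace(0.01, 20.0, 400)                  # candidate variances
dv = v[1] - v[0]
prior = v * np.exp(-v / 2.0) / 4.0                # chi-squared pdf with 4 dof

# log-likelihood of the whole sample for each candidate variance
n = d.size
loglik = -0.5 * n * np.log(2.0 * np.pi * v) - d.dot(d) / (2.0 * v)
lik = np.exp(loglik - loglik.max())               # shift for numerical stability

post = prior * lik
post /= post.sum() * dv                           # normalize the posterior

bayes_est = (v * post).sum() * dv                 # posterior mean of the variance
print(f"Bayes estimate: {bayes_est:.2f}, sample variance: {d.var(ddof=1):.2f}")
```

Both numbers should land near the true variance of 5, with the Bayesian estimate pulled slightly toward the prior when the sample is small.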
http://mathhelpforum.com/trigonometry/180885-find-sine-function-through-two-points.html
# Math Help - Find sine function through two points.

1. ## Find sine function through two points.

Write an equation of the form $y = A\sin(Bx) + C$ whose graph is the sine wave shown above. The curve goes through the points $(0,4)$ and $(2,4)$. I forget how to do this.

2. There is more than one possible solution here that I can see. I'm happy to help, but we need to focus on the one you require. The first thing to do is draw a sine function that goes through these 2 points. I have one in mind. It has the y-intercept at (0,4) and half way through the cycle returns to (2,4). Can you draw a picture of that? If so, what is the period of the function?

3. You need to show us the actual graph; there are many many many many sine curves which pass through those two points (example: $y = sin(0.5 \pi x) + 4$). Is the method not in your class notes or text book? Edit: pickslides was faster :P

4. I can't add an image, I don't know why but it doesn't show.

5. Thanks for the link, the question now has only one solution. Looking at the graph, what is P, the period? You can then find B given $\displaystyle \frac{2\pi}{B} = P$

6. There is still an infinite number of possible functions. Here's one of them: $f(x) \:=\:-3\sin(\pi x) + 4$
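Putting the candidate from reply 6 together with the period relation from reply 5, a quick numerical check (plain Python; the name `f` is just for illustration) confirms the curve passes through both points:

```python
import math

def f(x):
    # candidate from reply 6: amplitude 3 (flipped), period 2, midline y = 4
    return -3.0 * math.sin(math.pi * x) + 4.0

# the curve must pass through (0, 4) and (2, 4)
assert abs(f(0.0) - 4.0) < 1e-12
assert abs(f(2.0) - 4.0) < 1e-12

# reply 5's relation: 2*pi/B = P; here the period P is 2, so B = pi
P = 2.0
B = 2.0 * math.pi / P
print(B)
```

Any amplitude and phase that keep f(0) = f(2) = 4 would also work, which is exactly why the thread insists on seeing the actual graph before pinning down a unique answer.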
https://www.physicsforums.com/threads/could-this-be-simplified.956235/
Could this be simplified?

I am sorry but I am not a mathematician. Could this be simplified?

sorry but the link isn't working

fresh_42, Mentor: ... or even one more step:
$$\dfrac{\sqrt{x^2+y^2}}{y}+1=1+\sqrt{\left(\dfrac{x}{y}\right)^2+1}$$
which gives a nice form with only one variable expression ##z:=\dfrac{x}{y}##.
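fresh_42's identity can be spot-checked numerically. Note it holds as written for $y > 0$; for negative $y$ the left-hand side changes sign because $\sqrt{y^2} = |y|$. A minimal check in Python:

```python
import math
import random

random.seed(1)
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    y = random.uniform(0.1, 10.0)        # restrict to y > 0
    lhs = math.sqrt(x * x + y * y) / y + 1.0
    z = x / y                            # the single variable z := x / y
    rhs = 1.0 + math.sqrt(z * z + 1.0)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
print("identity verified on 1000 random points with y > 0")
```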
https://tex.stackexchange.com/questions/548860/display-figures-at-center-in-a-two-column-format-document
# Display figures at center in a two-column format document

In my LaTeX document, no matter how much I change the width, the images are not displayed at the center. Below is the output that I am getting using the following command. Most of my figures are either getting pushed towards the left margin, coming outside the two-column area like Fig5, or getting shrunk like Fig4. What should I do so that the image gets displayed in the center and is not shifted around?

Preamble

\RequirePackage{fix-cm}
\documentclass[twocolumn]{svjour3} % twocolumn
% \smartqed % flush right qed marks, e.g. at end of proof
\usepackage{graphicx}
\usepackage{epstopdf}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{bm}
\usepackage{hhline}
\usepackage{amssymb}

\begin{figure}
\centering
\includegraphics[height=55mm,width=\columnwidth]{Fig5.eps}
\caption{test}
\end{figure}

Updated Images

• Why are you encasing the \includegraphics instruction in an adjustbox environment? – Mico Jun 11 '20 at 4:49
• Off-topic: Nowadays, the graphicx package loads the epstopdf package automatically. The only conceivable reason for having an explicit \usepackage{epstopdf} statement in the preamble is that your TeX distribution is at least eight years old and hasn't ever seen an update -- in which case you really ought to be thinking actively about updating your TeX distribution. – Mico Jun 11 '20 at 4:55
• I have been using this before as it worked for many documents. – Sm1 Jun 11 '20 at 4:56
• I use Overleaf and this preamble since old age. I guess I need to delete a few of them. Have been lazy. – Sm1 Jun 11 '20 at 4:57
• Also do not use both height= and width= as that risks distorting the document; just use one or the other – Jun 11 '20 at 7:32

Omitting the adjustbox wrappers should fix the issue you're experiencing.
If it doesn't, do check that the eps files' bounding boxes are set correctly, i.e., that there's no excess whitespace to the left and/or right of some of the graphs contained in the eps files.

\documentclass[twocolumn,demo]{svjour3} % omit 'demo' option in real doc.
\usepackage{graphicx} % for '\includegraphics' macro
%\usepackage{amsmath,amssymb,bm,hhline} % not needed for this example
\usepackage{lipsum} % filler text
\usepackage{microtype} % optional
\begin{document}
\addtocounter{figure}{3} % just for this example
\lipsum[2] % filler text
\begin{figure}[h]
\includegraphics[height=65mm,width=\columnwidth]{Fig4.eps}
\caption{A Test}
\end{figure}
\lipsum[1-4] % more filler text
\begin{figure}[h]
\includegraphics[height=45mm,width=\columnwidth]{Fig5.eps}
\caption{Another Test}
\end{figure}
\lipsum[2] % still more filler text
\end{document}

• Thank you for your answer. I applied the height only and removed the adjust command. But there was no change. I was still getting a lot of white space around Fig4 and Fig5 was still shifted out of the column. However, when I converted the images to .png and edited them via Paint, the white spaces went away and the image is able to span the entire width of the column. I have double-checked that none of my eps files have any white space around the margins. If I put the eps images inside *figure then these problems disappear. – Sm1 Jun 11 '20 at 14:41
• It is only when I insert them in 2 column, some of the eps files don't get displayed correctly. – Sm1 Jun 11 '20 at 14:41

You are setting both height and width in includegraphics. This causes the image aspect ratio to change. Try setting either the height or the width, but not both.
This worked for me (not having your source files, I used one of mine):

\begin{figure}
\centering
\includegraphics[height=0.5\columnwidth]{test.png}
\includegraphics[width=\columnwidth]{test.png}
\caption{test}
\end{figure}

• Are you sure that the OP's issue is related to fixing both the height and width of the displayed graphic? – Mico Jun 11 '20 at 5:11
• I couldn't get their MWE to work as is, so in mine, setting both height and width simultaneously caused similar symptoms to the OP's. Setting only one or the other fixed them in my MWE. In reworking the MWE I may have omitted or changed the case somewhat but am reasonably sure the answer fixes at least a major subset of their issues. (If not, I'll gladly edit or delete my answer.) Jun 11 '20 at 5:41
http://archive.numdam.org/item/AIHPA_1994__60_3_253_0/
Hamiltonians for systems of N particles interacting through point interactions
Annales de l'I.H.P. Physique théorique, Tome 60 (1994) no. 3, pp. 253-290.

@article{AIHPA_1994__60_3_253_0,
  author = {Dell'Antonio, G. F. and Figari, R. and Teta, A.},
  title = {Hamiltonians for systems of N particles interacting through point interactions},
  journal = {Annales de l'I.H.P. Physique th\'eorique},
  pages = {253--290},
  publisher = {Gauthier-Villars},
  volume = {60},
  number = {3},
  year = {1994},
  zbl = {0808.35113},
  mrnumber = {1281647},
  language = {en},
  url = {archive.numdam.org/item/AIHPA_1994__60_3_253_0/}
}

Dell'Antonio, G. F.; Figari, R.; Teta, A. Hamiltonians for systems of N particles interacting through point interactions. Annales de l'I.H.P. Physique théorique, Tome 60 (1994) no. 3, pp. 253-290. http://archive.numdam.org/item/AIHPA_1994__60_3_253_0/

[1] R.A. Minlos and L.D. Faddeev, On the Point Interaction for a Three-Particle System in Quantum Mechanics, Soviet Phys. Dokl., Vol. 6, 1962, pp. 1072-1074. | MR 147136
[2] R.A. Minlos and L.D. Faddeev, Comment on the Problem of Three Particles with Point Interactions, Soviet Phys. JETP, Vol. 14, 1962, pp. 1315-1316. | MR 151141
[3] S. Albeverio, J.R. Fenstad, R. Hoegh-Krohn, W. Karwowski and T. Lindstrom, Schrödinger Operators with Potentials Supported by Null Sets. To appear in: Ideas and Methods in Quantum and Statistical Physics, S. Albeverio, J.R. Fenstad, H. Holden, T. Lindstrom, Eds., Cambridge University Press. | MR 1190521 | Zbl 0795.35088
[4] E.C. Svendsen, The Effect of Submanifold upon Essential Self-Adjointness and Deficiency Indices, J. Math. Anal. and Appl., Vol. 80, 1981, pp. 551-565. | MR 614850 | Zbl 0473.47039
[5] S. Albeverio, F. Gesztesy, R. Hoegh-Krohn and H. Holden, Solvable Models in Quantum Mechanics, Springer-Verlag, New York, 1988. | MR 926273 | Zbl 0679.46057
[6] A. Teta, Quadratic Forms for Singular Perturbations of the Laplacian, Publ. R.I.M.S. Kyoto Univ., Vol. 26, 1990, pp. 803-817. | MR 1082317 | Zbl 0735.35048
[7] G. Dal Maso, An Introduction to Γ-Convergence, Preprint S.I.S.S.A., Trieste, 1992. | MR 1201152 | Zbl 0816.49001
[8] E. De Giorgi, G. Dal Maso, Γ-Convergence and Calculus of Variations. In: Mathematical Theories of Optimization, J.P. Cecconi, T. Zolezzi, Eds., Lect. Notes in Math., Vol. 979, Springer Verlag, Berlin, 1983. | MR 713808 | Zbl 0511.49007
[9] K.A. Ter Martirosyan and G.V. Skornyakov, The Three-Body Problem with Short-Range Forces. Scattering of Low-Energy Neutrons by Deuterons, Soviet Phys. JETP, Vol. 4, 1957, pp. 648-661. | MR 88334 | Zbl 0077.43304
[10] L.H. Thomas, The Interaction Between a Neutron and a Proton and the Structure of H3, Phys. Rev., Vol. 47, 1935, pp. 903-909. | JFM 61.1582.02 | Zbl 0011.42701
[11] L.W. Bruch, J.A. Tjon, Binding of Three Identical Bosons in Two Dimensions, Phys. Rev. A, Vol. 19, 1979, pp. 425-432.
[12] T.K. Lim, P.A. Maurone, Non-Existence of the Efimov Effect in Two Dimensions, Phys. Rev. B, Vol. 22, 1980, pp. 1467-1469. | MR 582759
[13] R.A. Minlos, M.Sh. Shermatov, On Pointlike Interaction of Three Quantum Particles, Vestnik Mosk. Univ. Ser. I Math. Mekh., Vol. 6, 1989, pp. 7-14. | MR 1065968 | Zbl 0707.70021
[14] R.A. Minlos, On the Point Interaction of Three Particles. In: Applications of Self-Adjoint Extensions in Quantum Physics, P. Exner, P. Seba, Eds., Lect. Notes in Phys., Vol. 324, Springer Verlag, Berlin, 1989. | MR 1009846 | Zbl 0738.47008
[15] I.S. Gradshteyn, I.M. Ryzhik, Table of Integrals, Series and Products, Academic Press, New York, 1980.
[16] A. Erdélyi et al., Tables of Integral Transforms, McGraw-Hill, New York, 1954.
[17] D.R. Jafaev, On the Theory of the Discrete Spectrum of the Three-Particle Schrödinger Operator, Mat. Sb., Vol. 94, 1974, pp. 567-593. | MR 356752 | Zbl 0342.35041
[18] S.A. Vulgater, G.M. Zhislin, On the Finiteness of the Discrete Spectrum of Hamiltonians for Quantum Systems of Three One- or Two-Dimensional Particles, Lett. Math. Phys., Vol. 25, 1992, pp. 299-306. | MR 1188809 | Zbl 0759.35033
[19] S. Albeverio, R. Høegh-Krohn, T.T. Wu, A Class of Exactly Solvable Three-Body Quantum Mechanical Problems and the Universal Low-Energy Behavior, Phys. Lett., Vol. 83 A, 1981, pp. 105-109. | MR 617170
[20] J. Dimock, The Non-Relativistic Limit of P(φ)₂ Quantum Field Theories: Two-Particle Phenomena, Comm. Math. Phys., Vol. 57, 1977, pp. 51-66. | MR 475455
https://www.physicsforums.com/threads/prove-that-the-product-of-two-n-qubits-hadamard-gates-is-identity.973987/
# Prove that the product of two n-qubit Hadamard gates is the identity

Haorong Wu

Homework Statement: Prove ##H^{\otimes n} \cdot H^{\otimes n} = I##

Relevant Equations: ##H^{\otimes n} = \frac 1 {\sqrt {2^n}} \sum_{x,y} {\left ( -1 \right )} ^{x \cdot y} \left | x \right > \left < y \right |##, where ##x## and ##y## run from ##00 \dots 00## to ##11 \dots 11##, and ##x \cdot y = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n##

From the properties of the tensor product, ##H^{\otimes n} \cdot H^{\otimes n} = \left ( H_1 \cdot H_1 \right ) \otimes \left ( H_2 \cdot H_2 \right ) \otimes \cdots \otimes \left ( H_n \cdot H_n \right ) = I \otimes I \otimes \cdots \otimes I = I##, where ##H_i## acts on the ##i^{th}## qubit. But I want to try another way, starting from the definition of ##H^{\otimes n}##:

##H^{\otimes n} \cdot H^{\otimes n} \\ = \left [ \frac 1 {\sqrt {2^n}} \sum_{x,y} {\left ( -1 \right )} ^{x \cdot y} \left | x \right > \left < y \right | \right ] \left [ \frac 1 {\sqrt {2^n}} \sum_{i,j} {\left ( -1 \right )} ^{i \cdot j} \left | i \right > \left < j \right | \right ] \\ = \frac 1 {2^n} \sum_x \sum_y \sum_j {\left ( -1 \right )} ^{x \cdot y} {\left ( -1 \right )} ^{y \cdot j} \left | x \right > \left < j \right | \\ = \frac 1 {2^n} \sum_x \sum_j \left ( \sum_y {\left ( -1 \right )} ^{x \cdot y} {\left ( -1 \right )} ^{y \cdot j} \right ) \left | x \right > \left < j \right |##

I'm stuck because I have no idea how to properly calculate ##\sum_y {\left ( -1 \right )} ^{x \cdot y} {\left ( -1 \right )} ^{y \cdot j}##. The answer should be ##0## if ##x \neq j##, and ##2^n## otherwise, so that the ##\frac 1 {2^n}## prefactor leaves exactly the identity. Any advice? Thanks!

## Answers and Replies

Mentor: I don't know the best solution here but perhaps this will help. Try bringing the ##y## power inside to give ##(-1)^y## and play with combinations of ##x##, ##j##, and ##y## being even or odd to see if you can find a pattern in the series.

Haorong Wu (quoting): I don't know the best solution here but perhaps this will help.
Try bringing the ##y## power inside to give ##(-1)^y## and play with combinations of ##x##, ##j##, and ##y## being even or odd to see if you can find a pattern in the series.

Thanks, jedishrfu. I'll try it.

Mentor: It might not be the right approach, but until someone posts otherwise it's something that I'd try. I noticed that when ##y## is even ##(-1)^y## evaluates to 1 and when odd to -1, so now you can look at how ##x+j## behaves.

Homework Helper: I think you could prove this by induction. ##HH=I## is trivial. Then you just need to write ##H^{\otimes (n+1)}## in terms of ##H^{\otimes n}## and do the matrix multiplication.

Haorong Wu (quoting): I think you could prove this by induction. ##HH=I## is trivial. Then you just need to write ##H^{\otimes (n+1)}## in terms of ##H^{\otimes n}## and do the matrix multiplication.

Brilliant! Thanks, tnich!
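Both the tensor-product argument and the stuck sum can be sanity-checked numerically for small ##n## (a quick NumPy sketch):

```python
import numpy as np
from functools import reduce

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

# check H^{otimes n} . H^{otimes n} = I for n = 1..4
for n in range(1, 5):
    Hn = reduce(np.kron, [H] * n)
    assert np.allclose(Hn @ Hn, np.eye(2 ** n))

# check the inner sum: sum_y (-1)^(x.y) (-1)^(y.j) = 2^n if x == j, else 0
n = 3
def dot(a, b):
    # bitwise dot product x.y = x_1 y_1 + ... + x_n y_n (only its parity matters)
    return bin(a & b).count("1")

for x in range(2 ** n):
    for j in range(2 ** n):
        s = sum((-1) ** (dot(x, y) + dot(y, j)) for y in range(2 ** n))
        assert s == (2 ** n if x == j else 0)
print("verified")
```

The sum collapses because ##(-1)^{x\cdot y}(-1)^{y\cdot j} = (-1)^{(x\oplus j)\cdot y}##, and for ##x \oplus j \neq 0## exactly half of the ##y## strings give each sign.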
https://www.projecteuclid.org/euclid.aoms/1177692469
## The Annals of Mathematical Statistics ### Approximation to Bayes Risk in Compound Decision Problems Allan Oaten #### Abstract We consider, simultaneously, $N$ statistical decision problems with identical generic structure: state space $\Omega$, action space $A$, sample space $\mathscr{X}$ and nonnegative loss function $L$ defined on $\Omega \times A \times \mathscr{X}$. With $x = (x_1, \cdots, x_N)$ distributed according to $\prod^N_{i=1} \mathbf{P}_{\theta_i} = \mathbf{P}_\theta$, a compound procedure is a vector, $\mathbf{\phi} = (\phi_1, \cdots, \phi_N)$, such that $\phi_i(\mathbf{x}) \in A$ for each $i$ and $\mathbf{x}$. The risk of the procedure $\mathbf{\phi}$ is $\mathbf{R}(\mathbf{\theta}, \mathbf{\phi}) = N^{-1} \sum^N_{r=1} \mathbf{\int L} (\theta_r, \phi_r(\mathbf{x}), x_r)\mathbf{P_\theta} (d\mathbf{x})$ and the modified regret is $\mathbf{D}(\mathbf{\theta, \phi}) = \mathbf{R}(\mathbf{\theta, \phi}) - R(G)$ where $G$ is the empirical distribution of $\theta_1, \cdots, \theta_N$, and $R(G)$ is the Bayes risk against $G$ in the component problem. We discuss quite wide classes of procedures, $\mathbf{\phi}$, which consist of using $\mathbf{x}$ to estimate $G$, and then playing $\varepsilon$-Bayes against the estimate in each component problem. For one class we establish a type of uniform convergence of the conditional risk in the $m \times n$ problem (i.e. $\Omega$ has $m$ elements, $A$ has $n$), and use this to get $\mathbf{D}(\mathbf{\theta, \phi}) < \varepsilon + o(1)$ for another class in the $m \times n$ and $m \times \infty$ problems. Similar, but weaker, results are given in part II for the case when $\Omega$ is infinite. #### Article information Source Ann. Math. Statist., Volume 43, Number 4 (1972), 1164-1184. 
Dates: First available in Project Euclid: 27 April 2007

https://projecteuclid.org/euclid.aoms/1177692469

Digital Object Identifier: doi:10.1214/aoms/1177692469

Mathematical Reviews number (MathSciNet): MR312612

Zentralblatt MATH identifier: 0241.62005
https://quant.stackexchange.com/questions/24836/deriving-the-yield-curve-from-the-hjm-dynamics
# Deriving the yield curve from the HJM dynamics

If I know that my model follows a no-arbitrage HJM model: $$df(\tau) = \left(\sigma(\tau)\int_0^{\tau}\sigma(u)du\right)dt +\sigma(\tau)dW_{\tau}$$ where $\tau:=T-t$ is the time until maturity $T$ of a bond, and $\sigma$ is a process adapted to the filtration of $W$. How do we derive the yield curve for such a model? The models I see all have a deterministic yield curve; is this the expectation of the above SDE?

• What do you mean by yield curve? That is, specifically, what quantity do you want to derive? – Gordon Mar 12 '16 at 21:32
• I want the forward rate curve's solution; i.e., I'm looking for a solution to the above SDE – AIM_BLB Mar 12 '16 at 21:39
• Other than $f(\tau) = \frac{1}{2}\big(\int_0^{\tau}\sigma(s) ds\big)^2 + \int_0^{\tau}\sigma(s) dW_s$, for your equation, I do not see any more compact analytical solution. – Gordon Mar 14 '16 at 18:48
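Gordon's closed form (which reads $f$ off by direct integration when $\sigma$ is deterministic and $f(0)=0$) can be checked against an Euler–Maruyama pass through the SDE. The sketch below assumes a constant $\sigma$, in which case the closed form reduces to $f(T) = \tfrac12\sigma^2 T^2 + \sigma W_T$:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, T, steps = 0.02, 1.0, 100_000   # constant volatility assumed for the check
dt = T / steps
dW = rng.normal(0.0, np.sqrt(dt), size=steps)

taus = np.arange(steps) * dt           # tau at the left endpoint of each step
# Euler-Maruyama on df = sigma * (integral_0^tau sigma du) dt + sigma dW
f_num = np.sum(sigma * (sigma * taus) * dt) + sigma * dW.sum()

# closed form: f(T) = 0.5 * (integral_0^T sigma ds)^2 + integral_0^T sigma dW_s
f_closed = 0.5 * (sigma * T) ** 2 + sigma * dW.sum()
print(f_num, f_closed)                 # agree up to O(dt) drift discretization error
```

The stochastic terms match exactly by construction, so the remaining difference is just the left-endpoint Riemann-sum error in the drift, of order $\sigma^2 T^2/(2\,\text{steps})$.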
https://www.semanticscholar.org/paper/Generative-Modeling-of-Turbulence-Drygala-Winhart/ff202d33d1037a8fbd5c09a7a101bb4e94c2aeb4
# Generative Modeling of Turbulence

@article{Drygala2022GenerativeMO,
  title={Generative Modeling of Turbulence},
  author={Claudia Drygala and Benjamin Winhart and Francesca di Mare and Hanno Gottschalk},
  journal={ArXiv},
  year={2022},
  volume={abs/2112.02548}
}

• Published 5 December 2021 • Computer Science • ArXiv

We present a mathematically well-founded approach for the synthetic modeling of turbulent flows using generative adversarial networks (GAN). Based on the analysis of chaotic, deterministic systems in terms of ergodicity, we outline a mathematical proof that GAN can actually learn to sample state snapshots from the invariant measure of the chaotic system. Based on this analysis, we study a hierarchy of chaotic systems starting with the Lorenz attractor and then carry on to the modeling of…

## 2 Citations

### Investigation of nonlocal data-driven methods for subgrid-scale stress modeling in large eddy simulation
• Physics • 2022
A nonlocal subgrid-scale stress (SGS) model is developed based on the convolutional neural network (CNN), which is a powerful supervised data-driven method and also an ideal approach to naturally…

### Constraining Gaussian Processes to Systems of Linear Ordinary Differential Equations
• Computer Science ArXiv • 2022
A novel algorithmic and symbolic construction for covariance functions of Gaussian Processes (GPs) with realizations strictly following a system of linear homogeneous ODEs with constant coefficients, which is called LODE-GPs.
http://mathhelpforum.com/trigonometry/17064-help-proving-double-angle-identities.html
2. $\sin 3x=\sin (2x+x)$ and use the formula $\sin(a+b)=\sin a\cos b+\sin b\cos a$ and $\sin 2x=2\sin x\cos x, \ \cos 2x=1-2\sin^2x$
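Carrying the hint through (this expansion is not in the original post, just the standard consequence of the formulas quoted above) gives the triple-angle identity:

```latex
\sin 3x = \sin 2x\cos x + \cos 2x\sin x
        = 2\sin x\cos^2 x + (1-2\sin^2 x)\sin x
        = 2\sin x(1-\sin^2 x) + \sin x - 2\sin^3 x
        = 3\sin x - 4\sin^3 x
```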
https://socratic.org/questions/what-is-the-molecular-shape-of-pf-3
# What is the molecular shape of PF_3?

$\text{Trigonal pyramidal}$

There are 4 regions of electron density around the central phosphorus atom, one of which is a lone pair. Given simple VSEPR, a structure based on a tetrahedron is specified. But because we describe structure in relation to atoms, and not electron pairs, the structure of $PF_3$ should be pyramidal, an analogue of ammonia. This site reports that $\angle F\text{-}P\text{-}F = 107^\circ$, so it seems that we are not too far off base. The $\angle F\text{-}N\text{-}F = 102.5^\circ$ in $NF_3$ is included for comparison.
https://cs3511.wordpress.com/homework/hw-4/
# HW 4 Due Friday, Mar 27th in class. Submissions: Please submit your printed or hand written solutions in class. If you have a compelling reason for not being able to attend that particular class, please email your submissions to the instructor or the TA. Policy: Write your own solutions independently; include the names of anyone you discussed the problems with. 1. Let $G=(V,E)$ be an undirected graph with capacities $c_e$ on each edge $e \in E$. Let $s,t$ be two of its vertices. Let ${\bf P}$ be the set of $s-t$ paths in $G$ and ${\bf C}$ be the subsets of edges that are $s-t$ cuts. Show that $\max_{P \in {\bf P}} \min_{e \in P} c_e = \min_{C \in {\bf C}} \max_{e \in C} c_e$ 2. Show that in any graph with a valid $s-t$ flow on its edges, (a) The flow from $s$ to $t$ can be decomposed into $s-t$ paths and simple cycles (no repeated vertices). In other words, each path $P$ from $s$ to $t$ carries a flow of value $f_P$ on its edges and each cycle $C$ carries a flow of value $f_C$ on its edges. The total flow on an edge is equal to the sum of the flow values of all paths and cycles containing the edge. (b) Given capacities on the edges, there is a maximum $s-t$ flow with no cycles. 3. Let $K$ be a closed convex set in ${\bf R}^n$. Show that for any point $b$ not in $K$, there is a vector $w \in {\bf R}^n$ and a real value $t$ s.t. $w^Tb \le t$ and $w^Tx > t$ for every $x \in K$. 4. Write a linear program for the maximum flow problem in a directed graph, using one flow variable for every path from the source s to the sink t (so the number of variables could be exponential in the size of the graph).  Then write the dual of the program and interpret the variables, constraints and objective value of the dual. 5. (Bonus) Write an LP for the maxflow problem using flow variables, one for each edge. Write the dual and argue that it represents a minimum s-t cut.
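Not part of the assignment, but for intuition about problems 4 and 5: the optimal value of either LP formulation equals the combinatorial max flow. A stdlib-only sketch on a small hypothetical graph (the graph, node names, and capacities are all made up for illustration) that computes that value with BFS augmenting paths:

```python
# Max flow via BFS augmenting paths (Edmonds-Karp) on a tiny made-up graph;
# the optimum of the edge-variable LP in problem 5 would equal this value.
from collections import deque

def max_flow(cap, s, t):
    # cap: dict u -> dict v -> residual capacity on edge (u, v)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:          # no augmenting path left: done
            return flow
        # recover the path, push the bottleneck amount along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            cap[v].setdefault(u, 0)
            cap[v][u] += b           # residual (backward) capacity
        flow += b

cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
```

On this graph `max_flow(cap, "s", "t")` returns 5, which matches the capacity of the cut `{(s,a), (s,b)}`, the min-cut certificate the dual in problem 5 asks you to interpret.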
https://mathematica.stackexchange.com/questions/158309/formula-for-sequence-of-rational-numbers
# Formula for Sequence of Rational Numbers

I am trying to find a formula for the $n$th term of a sequence of rational numbers less than 1. Regardless of how many terms I include in the table, FindSequenceFunction copies the input as the output. Perhaps my sequence is too complicated, but I would like to know if there are any other methods or functions which help find the $n$th term of a sequence. Here's the code for the sorted table:

```mathematica
sopf[n_] := Plus @@ (First /@ FactorInteger[n])
Composite[n_Integer] := FixedPoint[n + PrimePi[#] + 1 &, n]
Sort[Table[
  1 - ((1 + sopf[Composite[j]] - PrimeNu[Composite[j]])/Composite[j]),
  {j, 1, 30}]]
```

With output

```mathematica
{1/3, 2/5, 3/7, 5/11, 6/13, 8/17, 9/19, 1/2, 8/15, 4/7, 20/33, 8/13, 2/3, 2/3,
 24/35, 7/10, 5/7, 8/11, 11/15, 3/4, 16/21, 7/9, 4/5, 5/6, 38/45, 17/20, 7/8,
 8/9, 8/9, 15/16}
```

• try oeis.org perhaps Oct 21 '17 at 22:00
• @pdmclean that only works for integers. Oct 21 '17 at 22:26
• Yes, but you can search the sequence of numerators and the sequence of denominators separately. Oct 26 '17 at 13:11
• I tried, they didn't match any sequence in their database. Oct 27 '17 at 1:58

```mathematica
a = Sort[RandomInteger[{1, 100}, 20]];
```
https://www.shaalaa.com/question-bank-solutions/surface-area-right-circular-cone-there-are-two-cones-curved-surface-area-one-twice-that-other-slant-height-later-twice-that-former-find-ratio-their-radii_38051
# Solution: There Are Two Cones. The Curved Surface Area of One is Twice that of the Other. The Slant Height of the Latter is Twice that of the Former. Find the Ratio of Their Radii. - CBSE Class 9 - Mathematics

Concept: Surface Area of a Right Circular Cone

#### Question

There are two cones. The curved surface area of one is twice that of the other. The slant height of the latter is twice that of the former. Find the ratio of their radii.

#### Solution

Let the curved surface area of the 1st cone be $2x$ and the CSA of the 2nd cone be $x$; let the slant height of the 1st cone be $h$ and the slant height of the 2nd cone be $2h$.

$\frac{\text{CSA of 1st cone}}{\text{CSA of 2nd cone}} = \frac{2x}{x} = \frac{2}{1}$

$\Rightarrow \frac{\pi r_1 l_1}{\pi r_2 l_2} = \frac{2}{1} \Rightarrow \frac{r_1 h}{r_2 \cdot 2h} = \frac{2}{1} \Rightarrow \frac{r_1}{r_2} = \frac{4}{1}$

#### APPEARS IN

R.D. Sharma Mathematics for Class 9 by R D Sharma (2018-19 Session) (with solutions), Chapter 18: Surface Areas and Volume of a Cuboid and Cube, Q: 13
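A quick numerical sanity check of the result (the concrete values of `r2` and `l1` below are arbitrary, chosen only for illustration): with $r_1/r_2 = 4$ and $l_2 = 2l_1$, the curved surface areas $\pi r l$ should differ by exactly a factor of 2.

```python
# Verify the cone result: r1/r2 = 4 with l2 = 2*l1 gives CSA1/CSA2 = 2.
import math

r2, l1 = 1.0, 3.0          # arbitrary illustrative base values
r1, l2 = 4 * r2, 2 * l1    # the derived radius ratio and the given slant heights
csa1 = math.pi * r1 * l1
csa2 = math.pi * r2 * l2
ratio = csa1 / csa2        # should be 2, the given CSA ratio
```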
http://www.learn-math.top/generalization-of-holders-inequality-with-negative-exponent/
# Generalization of Hölder’s inequality with negative exponent

How can I prove that $\left(\frac1a+\frac1b+\frac1c\right)\left(\sqrt a+\sqrt b+\sqrt c\right)^2\ge 3^3$ using Hölder’s Inequality?

This cannot be done with the usual Hölder inequality: the exponents involved include a negative one and one between 0 and 1. And I couldn’t find any generalization with those exponents. (There’s one formula which holds for a different case here)

=================

1

The inequality itself can be proved by AM-GM: $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\geq 3(abc)^{-1/3}$ and $\sqrt{a}+\sqrt{b}+\sqrt{c}\geq 3(abc)^{1/6}$. – yurnero 2 days ago

@yurnero thanks but I want to apply Holder – user257 2 days ago

Yes I know, hence that was a comment only. – yurnero 2 days ago

=================

1

=================

Hölder is the following. (In the following form it is much easier to use Hölder!) Let $a_i>0$, $b_i>0$, $\alpha>0$ and $\beta>0$. Hence, we have:

$(a_1+a_2+\ldots+a_n)^{\alpha}(b_1+b_2+\ldots+b_n)^{\beta}\geq\left(\left(a_1^{\alpha}b_1^{\beta}\right)^{\frac{1}{\alpha+\beta}}+\left(a_2^{\alpha}b_2^{\beta}\right)^{\frac{1}{\alpha+\beta}}+\ldots+\left(a_n^{\alpha}b_n^{\beta}\right)^{\frac{1}{\alpha+\beta}}\right)^{\alpha+\beta}$

For your inequality $n=3$, $\alpha=1$ and $\beta=2$ and it’s just Hölder!
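For completeness (this step is left implicit in the answer above): taking $n=3$, $\alpha=1$, $\beta=2$, with the $a_i$ being $\frac1a,\frac1b,\frac1c$ and the $b_i$ being $\sqrt a,\sqrt b,\sqrt c$, each bracketed term collapses to 1:

```latex
\left(\frac1a+\frac1b+\frac1c\right)^{1}\left(\sqrt a+\sqrt b+\sqrt c\right)^{2}
\ge \left(\Big(\tfrac1a(\sqrt a)^2\Big)^{1/3}
       + \Big(\tfrac1b(\sqrt b)^2\Big)^{1/3}
       + \Big(\tfrac1c(\sqrt c)^2\Big)^{1/3}\right)^{3}
= (1+1+1)^3 = 3^3
```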
http://www.talkstats.com/forumdisplay.php/3-Probability/page20?sort=title&order=asc
# Forum: Probability

Probability course and homework discussion. Probability distributions. Probability theory, stochastic processes

1. ### Sticky: We only provide homework help to those who show effort Welcome to Talk Stats Forum. In order to keep homework help effective and efficient, we request all members to abide by these guidelines: 1)... • Replies: 0 • Views: 30,330 09-30-2005, 09:44 AM

1. ### Determining number of samples to collect Hi, I am doing a project in determining if there is a statistically significant difference between conducting a particular type of cleaning process... • Replies: 1 • Views: 3,520 08-03-2010, 12:11 PM
2. ### Determining on Probability from multiple associative probabilities Hi, I have a question regarding the associative laws of probability. Basic question is that I need to find the probability of a player X... • Replies: 0 • Views: 1,116 03-24-2012, 12:09 AM
3. ### Determining p-values, significant cutoffs and false discovery in non-normal data Hi, I have a set of data points from a genetic study. I wish to ascertain which of these data points is statistically significant. The data is not... • Replies: 2 • Views: 1,330 06-21-2013, 12:03 PM
4. ### determining the most probable permutation of a set Is there any way to determine the most probable permutation of any given set? For example, given the set of letters SKTANH, is there a way to compute... • Replies: 2 • Views: 162 09-30-2017, 02:37 PM
5. ### Determining the probability of an electric car charging station being available? Hi there, I'm just wondering if anybody would know of a way of determining the probability of an event. I'm trying to figure out how I could... • Replies: 0 • Views: 1,024 06-27-2012, 09:29 AM
6.
### Determining the probability that two data sources are the same I've been looking for the answer to this problem for a while now and I think a large part of it is that I don't know how to frame the question. Here... • Replies: 5 • Views: 2,018 05-10-2013, 03:50 AM
7. ### Determining the test to be applied Find the 96% confidence limits for the true mean weekly study time of students if a sample of 25 had an average of 20 hrs per week and a sd of 3 hrs. • Replies: 3 • Views: 1,757 11-22-2006, 11:20 AM
8. ### Dice and Don't Cares Hi. I'm working on deriving a probability formula to help in some research I'm doing, but I've hit a wall. I've managed to reduce my problem down... • Replies: 3 • Views: 1,969 BGM 10-17-2011, 08:19 AM
9. ### dice prob. of sum I have a question about the probability of a sum of dice. Let's say it's a die with 20 or more sides. For a normal die I just write for each sum of 2... • Replies: 4 • Views: 1,985 04-20-2012, 02:49 AM
10. ### Dice probability Hello people, I have a very simple probability question that I need to solve a problem (recreational mathematics). Anyway, I would like to know how... • Replies: 3 • Views: 2,013 BGM 09-08-2011, 11:52 AM
11. ### dice probability • Replies: 5 • Views: 1,661 04-13-2012, 05:01 AM
12. ### Dice Probability (2d6, 3d6, 4d6) I would like to find the probabilities of rolling a result using different numbers of six-sided dice. I am looking for the best way to figure out the... • Replies: 2 • Views: 6,004 BGM 05-10-2014, 12:50 AM
13. ### Dice Probability (Yahtzee Related) hi, I am currently doing my Computer Science Dissertation and it is to create a program that can play Yahtzee. My planning involves lots of... • Replies: 1 • Views: 9,106 10-16-2007, 02:28 PM
14. ### Dice probability - at least 2 6's out of ten Hi If I roll a fair die 10 times, what is the probability of getting at least two 6's? edit: Which method shall I use? Say, it's at least 3... • Replies: 1 • Views: 10,292 12-17-2008, 09:23 PM
15.
### dice probability question I have a game that requires rolling 16-sided dice numbered 0-15. I would like to know the probability of rolling a sum of less than 20 using four... • Replies: 0 • Views: 2,799 11-27-2007, 11:23 PM
16. ### Dice probability question What are the odds of rolling 2d6 (two 6-sided dice) three times and getting two 3s? How about rolling 4 times; 5 times? I know rolling a 2d6 has a... • Replies: 1 • Views: 1,090 04-30-2016, 01:36 PM
17. ### Dice probability question - Settlers of Catan Hello everyone, Let me begin by saying that this isn't a homework question. Rather, this is a question that relates to a personal inquiry into... • Replies: 3 • Views: 4,762 BGM 12-07-2013, 06:40 AM
18. ### Dice probability with limited reroll Hello, I'm having difficulty understanding this problem and I was wondering if someone had an answer. Let n = \text{Number of dice} = 3 p_s... • Replies: 2 • Views: 1,618 03-23-2016, 12:13 PM
19. ### Dice Probability I'm setting up a dice game. I'll try to keep its mechanics as simple as possible. You roll a six-sided die, then you can choose to roll more... • Replies: 1 • Views: 2,172 01-21-2009, 09:03 AM
20. ### Dice Problem Hi there, There's a problem I've been trying to solve for the past two days. I have 5 dice, each with a different number of sides. Dice A:... • Replies: 2 • Views: 1,986 05-02-2013, 05:46 PM
21. ### Dice Question Hey folks, this is a question I'm trying to solve: You roll a die, winning $1 for rolling a 2 or a 4, and $7 for a 6. You lose $2 if the number... • Replies: 1 • Views: 2,147 07-07-2009, 12:39 PM
22. ### Dice roll Sorry if this topic has been discussed before. I hate looking through forums randomly for an answer to a question. So, maybe one of you could help... • Replies: 6 • Views: 2,631 08-29-2008, 10:21 PM
23.
### Dice Roll Game Probabilities Hello everyone, I have a bit of a probability question for you as it has been quite a while since I have been in school, and I can’t remember the... • Replies: 3 • Views: 1,985 BGM 10-13-2013, 01:36 PM
Suppose you have a FIVE-sided die. One of the sides has a star on it, the other four sides are blank. Suppose you rolled it 10 times. What are the... • Replies: 1 • Views: 1,249 05-06-2015, 08:38 PM
25. ### Dice Throwing 2 dice thrown together 3 times, what is the probability that for any of the 3 throws, I will get the same number on both dice? Thx ! Andrew • Replies: 1 • Views: 3,375 12-16-2010, 08:56 AM
26. ### Dice Total Odds Calculator I have posted a calculator to the Google Play Store for computing the probability of obtaining a particular sum when throwing dice:... • Replies: 0 • Views: 1,752 01-29-2017, 06:32 PM
27. ### Die probabilistic I am reading Mosteller's book of 50 difficult probability questions. The question I am struggling with is: on the average, how many times must a die be... • Replies: 8 • Views: 1,957 08-02-2012, 04:57 PM
28. ### die probability Two fair dice are thrown and you are told that the sum of the upturned faces is equal to 7. What is the probability that neither face is equal to 6? Is it... • Replies: 0 • Views: 1,899 05-04-2009, 05:55 AM
29. ### Die throwing competition, probability of winning Suppose a friend and I are playing a game with a fair die. We throw alternately and whoever rolls a 6 first wins. I get to roll first. What is the... • Replies: 9 • Views: 2,541 02-26-2013, 08:16 PM
30. ### Die throwing game A fair die is thrown a number of times and every time we count the number of dots (cumulative), until the total number of dots is bigger than or... • Replies: 7 • Views: 4,826 09-10-2010, 02:37 PM
31. ### Difference between binary logistic regression analyses and multivariate regression I am new to this forum, hope this is in the correct place.
Can someone explain the difference between these two? • Replies: 1 • Views: 3,104 04-06-2013, 09:09 PM
32. ### difference between binomial exact test and fisher's exact test Hello all, So I know the general difference between these tests --binomial exact test is used to test for a significant difference between and... • Replies: 1 • Views: 18,463 01-31-2012, 11:04 AM
33. ### Difference between likelihood and the posterior probability (Bayesian statistics) Hello everyone. I am having a little bit of trouble wrapping my head around the difference between the likelihood and the posterior probability in a... • Replies: 4 • Views: 9,541 04-15-2013, 08:12 AM
34. ### Difference between mean and expected value Could someone please explain as simply as possible the difference between mean and expected value? Here is my problem: This is for my math stats... • Replies: 2 • Views: 4,646 12-07-2013, 11:06 AM
35. ### Difference between p(y=0|x=0) and p(x=0,y=0) Another trivial question: I have been asked to calculate the joint probability, i.e. p(X=0,Y=0), and I have been provided with P(Y=0|X=0); is it the... • Replies: 1 • Views: 1,475 10-06-2013, 10:18 PM
36. ### Difference between probability of an individual and of a population? I'm a little confused as to finding the probability of an individual to a population. My answers don't look correct. Could anyone help me out? A... • Replies: 1 • Views: 2,146 10-12-2013, 12:27 AM
37. ### Difference between two Normal Distributions I have a problem where I have two Normal distributions both with a standard deviation of 20 and a difference between the means of 17. I want to work... • Replies: 0 • Views: 8,071 06-13-2006, 10:41 AM
38. ### Difference of expectations of RV's given arbitrary pdfs I've been beating my head on this for a while... Let X and Y have pdf's f & g respectively such that \begin{cases} f(x) >= g(x) & \mbox{if } x... • Replies: 1 • Views: 1,440 BGM 10-09-2014, 04:23 AM
39.
### difference of two independent Poisson random variables Why doesn't the difference of two independent Poisson random variables have a Poisson distribution? Demonstrate it. • Replies: 6 • Views: 4,074 02-13-2015, 11:50 AM
40. ### Difference powerlaw, lognormal and stretched exponential (Weibull) function I am currently fitting the above-mentioned functions to my data and I can observe that both lognormal and Weibull are better fits than powerlaw. In... • Replies: 2 • Views: 2,722 10-08-2013, 07:40 AM
41. ### differences between one-tailed and two-tailed P-value I am doing hypothesis testing using t-test and p-value. I calculate my p-value based on the degrees of freedom and the t-test value. The p-value is a... • Replies: 9 • Views: 12,073 04-12-2010, 09:51 PM
42. ### Differences of proportions Please forgive what I assume is the simplicity of these related problems, but I don't seem to find the solution elsewhere. 1) 2 cards are... • Replies: 0 • Views: 1,511 05-16-2016, 08:52 AM
43. ### Different arrangements of n 0's and m 1's This simply stated problem is giving me some trouble: How many different ways can I arrange n 0's and m 1's? Perhaps someone would direct me... • Replies: 5 • Views: 1,667 06-06-2013, 05:54 AM
44. ### Difference between "X % more likely" vs. "X times more likely" I often see something like "smokers are 10% more likely to get cancer than non-smokers." Is this equivalent to saying smokers are "10 times more... • Replies: 0 • Views: 28,884 09-21-2011, 10:25 PM
45. ### Different Interpretations of the Central Limit Theorem My friend and I are having an argument about the meaning of CLT. It led me to build this javascript experiment which you may enjoy:... • Replies: 2 • Views: 3,492 06-18-2013, 07:33 PM
46. ### Different solutions when using direct calculation and Chebyshev's inequality Hi all, I have a question that I can't find an answer to: I have 10 random variables X1, X2....X10 which are all independent and exponentially...
• Replies: 1 • Views: 1,965 02-23-2017, 06:05 PM
47. ### Difficult Integral standard normal pdf/cdf I would like to get an explicit analytical form for the integral (below)...or something close to it with an associated (small) error term. (Note that... • Replies: 17 • Views: 13,945 BGM 05-31-2010, 09:35 AM
48. ### Difficult probability question Hi - I'm really stuck with this question. I can't find any example quite like it anywhere: Two friends, Ben and Jerry, agree to a random swap of... • Replies: 5 • Views: 3,289 10-28-2011, 11:42 AM
49. ### Difficult probability question I've gotten this off another forum (it hasn't been solved yet), and thought you guys might enjoy this problem. I don't need it solved as it's not my... • Replies: 7 • Views: 1,892 08-09-2012, 10:20 AM
50. ### Difficulty correlating terminology with symbols. The book we are using for this class uses words instead of the symbols in some of the homework problems we are using. My question is: Is P(A... • Replies: 2 • Views: 5,156 02-17-2006, 12:11 PM
51. ### Difficulty setting up double integrals for Joint Distributions I'm struggling to set up the correct range when calculating probabilities of joint distributions....anyone have any recommended reading or sites that... • Replies: 16 • Views: 9,028 09-30-2012, 03:24 AM
What statistics do I need to use to work out the probability of seeing a dolphin given that it has been seen before? i.e. If it was seen once, will it... • Replies: 1 • Views: 2,499 03-28-2010, 06:25 PM
53. ### Dirichlet distribution I computed mean, variance and covariance of the Dirichlet distribution. To do so, I computed E, E and E. This is the first time I've dealt with... • Replies: 2 • Views: 1,422 07-07-2013, 07:01 AM
54. ### Dirichlet distribution - moments (mean and variance) For a Dirichlet variable, I know the means and covariances, that is, E = \alpha_i/\alpha_0 Cov =\frac{ \alpha_i (\alpha_0 I -... • Replies: 12 • Views: 11,821 02-14-2012, 10:56 AM
55.
### discrepancy between summed binomial probability and expected value Hi all. I'm writing a program to calculate statistics for a tabletop game, and I'm confused by a discrepancy between a calculated expected value... • Replies: 1 • Views: 2,733 BGM 06-11-2010, 12:35 PM
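Several of the threads above ask for the distribution of a sum of dice. A minimal illustrative sketch (not taken from any thread; the function name and the 2d6 example are mine) that computes the exact distribution by repeated convolution:

```python
# Exact distribution of the sum of n fair dice via repeated convolution.
from collections import Counter

def dice_sum_dist(n_dice, sides):
    """Return a Counter mapping each possible sum to its probability."""
    dist = Counter({0: 1.0})            # distribution of the empty sum
    for _ in range(n_dice):
        new = Counter()
        for total, p in dist.items():   # convolve with one more die
            for face in range(1, sides + 1):
                new[total + face] += p / sides
        dist = new
    return dist

d2 = dice_sum_dist(2, 6)
p7 = d2[7]   # for 2d6 this is 6/36, the familiar most-likely sum
```

For the "sum of four 16-sided dice below 20" style of question, `sum(p for s, p in dice_sum_dist(4, 16).items() if s < 20)` gives the exact answer (shifting faces 0-15 vs. 1-16 as needed).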
2017-10-22 00:39:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3287513852119446, "perplexity": 2724.9184344666182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824931.84/warc/CC-MAIN-20171022003552-20171022023552-00616.warc.gz"}
https://www.deepdyve.com/lp/ou_press/when-to-drop-a-bombshell-ZPmftxi2bC
# When to Drop a Bombshell

Abstract. Sender, who is either good or bad, wishes to look good at an exogenous deadline. Sender privately observes if and when she can release a public flow of information about her private type. Releasing information earlier exposes her to greater scrutiny, but signals credibility. In equilibrium, bad Sender releases information later than good Sender. We find empirical support for the dynamic predictions of our model using data on the timing of U.S. presidential scandals and U.S. initial public offerings. In the context of elections, our results suggest that October Surprises are driven by the strategic behaviour of bad Sender.

1. Introduction

Election campaigns consist of promises, allegations, and scandals. While most of them are inconsequential, some are pivotal events that can sway elections. Rather than settling existing issues, these bombshells typically start new debates that, in time, provide voters with new information. When bombshells are dropped, their timing is hotly debated. Was the bombshell intentionally timed to sway the election? What else did media and politicians know, when they dropped the bombshell, that voters might only discover after the election? The 2016 U.S. presidential campaign between Democrat Hillary Clinton and Republican Donald Trump provides several examples. Just eleven days before the election, FBI director James Comey announced that his agency was reopening its investigation into Secretary Clinton’s emails. The announcement reignited claims that Clinton was not fit to be commander in chief because of her mishandling of classified information. Paul Ryan, the Republican Speaker of the House, went as far as to demand an end to classified intelligence briefings to Clinton.
Some commentators maintain that Comey’s announcement cost Clinton the election.1 While the announcement conveyed the impression of an emerging scandal, Clinton was confident that no actual wrongdoing would be revealed by the new investigation —there would be no real scandal. Comey’s letter to Congress stated that “the FBI cannot yet assess whether or not this material may be significant, and I cannot predict how long it will take us to complete this additional work”. The Clinton campaign—and Democrats generally—were furious, accusing Comey of interfering with the election. Comey wrote that he was briefed on the new material only the day before the announcement. But his critics maintained that the FBI had accessed the new emails weeks before the announcement and speculated about how long Comey sat on the new material and what he knew about it.2 Similarly, one month before the election, the Washington Post released a video featuring Donald Trump talking disparagingly about women.3 The video triggered a heated public debate about whether Trump was fit to be president. It revived allegations that he had assaulted women and even prominent Republicans called for Trump to end his campaign.4 Within a week of the video’s release, five women came forward accusing Trump of sexual assault. Trump himself denied all accusations and dismissed the video as “locker-room banter”, and “nothing more than a distraction from the important issues we are facing today.”5 Others echoed his statement that real scandals are about “actions” and not “words”, and took the media coverage of the video as proof of a conspiracy against Trump.6 The concentration of scandals in the last months of the 2016 campaign is far from an exception. Such October surprises are commonplace in U.S. presidential elections, as shown in Figure 1. Political commentators argue that such bombshells may be strategically dropped close to elections so that voters have not enough time to tell real from fake news. 
Yet, if all fake news were released just before an election, then voters may rationally discount October surprises as fake. Voters may not do so fully, however, since while some bombshells may be strategically timed, others are simply discovered close to the election.

Figure 1. Distribution of scandals implicating U.S. presidents running for reelection, from 1977 to 2008. Data from Nyhan (2015).

Therefore, the strategic decision of when to drop a bombshell is driven by a trade-off between credibility and scrutiny. On the one hand, dropping the bombshell earlier is more credible, in that it signals that its sender has nothing to hide. On the other hand, it exposes the bombshell to scrutiny for a longer period of time—possibly revealing that the bombshell is a fake. This credibility-scrutiny trade-off also drives the timing of announcements about candidacy, running mates, cabinet members, and details of policy platforms. An early announcement exposes the background of the candidate or her team to more scrutiny, but boosts credibility. The same trade-off is likely to drive the timing of information release in other contexts outside the political sphere. For instance, a firm going public can provide a longer or shorter time for the market to evaluate its prospectus before the firm’s shares are traded. This time can be dictated by the firm’s liquidity needs and development plans, but can also be chosen strategically to influence the market. A longer time allows the market to learn more about the firm’s prospective performance. Therefore, the market perceives a longer time as a signal of the firm’s credibility, increasing the share price. But a longer time also exposes the firm to more scrutiny, possibly revealing that the firm’s future profitability is low.
In all these situations, (1) an interested party has private information and (2) she cares about the public opinion at a given date. Crucially, (3) she can partially control how much time the public has to learn about her information. In this article we introduce a Sender-Receiver model of these dynamic information release problems. In our benchmark model of Section 2, (1) Sender privately knows her binary type, good or bad, and (2) wants Receiver to believe that she is good at an exogenous deadline; (3) Sender privately observes whether and when an opportunity to start a public flow of information about her type arrives and chooses when to exercise this opportunity. We call this opportunity an arm and say that Sender chooses when to pull the arm.7 In Section 3.1, we characterize the set of perfect Bayesian equilibria. Intuitively, bad Sender is willing to endure more scrutiny only if pulling the arm earlier boosts her credibility in the sense that Receiver holds a higher belief that Sender is good if the arm is pulled earlier. Therefore, bad Sender withholds the arm with strictly positive probability. Our main result is that, in all equilibria, bad Sender pulls the arm later than good Sender in the likelihood ratio order. We prove that there exists an essentially unique divine equilibrium (Cho and Kreps, 1987).8 In this equilibrium, good Sender immediately pulls the arm when it arrives and bad Sender is indifferent between pulling the arm at any time and not pulling it at all. Uniqueness allows us to analyse comparative statics in a tractable way in a special case of our model where the arm arrives according to a Poisson process and pulling the arm starts an exponential learning process in the sense of Keller et al. (2005). We do this in Section 4 and show that the comparative static properties of this equilibrium are very intuitive. Both good and bad Sender gain from a higher Receiver’s prior belief that Sender is good. 
Instead, whereas good Sender gains from a faster learning process and a faster arrival of the arm, bad Sender loses from these. When learning is faster and when the arm arrives more slowly, bad Sender delays pulling the arm for longer and pulls it with lower probability. In this case, the total probability that (good and bad) Sender pulls the arm is also lower. When Receiver’s prior belief is higher, withholding information is less damning, so bad Sender strategically pulls the arm with lower probability, but the probability that good Sender pulls the arm is mechanically higher. We show that the strategic effect dominates the mechanical effect if and only if Receiver’s prior belief is sufficiently low. We show that the probability density with which bad Sender pulls the arm is single-peaked in time, and derive the conditions under which it monotonically increases with time. We also characterize the shape of the probability density with which (good and bad) Sender pulls the arm, and show it has at most two peaks—an earlier peak driven by good Sender and a later peak driven by bad Sender. In Section 5, we apply our model to the strategic release of political scandals in U.S. presidential campaigns. In equilibrium, while real scandals are released as they are discovered, fake scandals are strategically delayed and concentrated towards the end of the campaign. In other words, our credibility-scrutiny trade-off predicts that the October surprise phenomenon is driven by fake scandals. Using data from Nyhan (2015), we find empirical support for this prediction. To the best of our knowledge, this is the first empirical evidence about the strategic timing of political scandals relative to the date of elections and the first direct evidence of an October Surprise effect. Finally, we apply our model to the timing of U.S. initial public offerings (IPOs). 
Our model links a stock’s long-run performance to the time gap between the announcement of an IPO and the initial trade date. Firms with higher long-run returns should choose longer time gaps in the likelihood ratio order. Using an approach developed by Dardanoni and Forcina (1998), we find empirical support for this prediction.

Related Literature. Grossman and Hart (1980), Grossman (1981), and Milgrom (1981) pioneered the study of verifiable information disclosure and established the unraveling result: if Sender’s preferences are common knowledge and monotonic in Receiver’s action (for all types of Sender) then Receiver learns Sender’s type in any sequential equilibrium. Dye (1985) first pointed out that the unraveling result fails if Receiver is uncertain about Sender’s information endowment.9 When Sender does not disclose information, Receiver is unsure as to why, and thus cannot conclude that the non-disclosure was strategic, and hence does not “assume the worst” about Sender’s type. Acharya et al. (2011) and Guttman et al. (2013) explore the strategic timing of information disclosure in a dynamic version of Dye (1985).10 Acharya et al. (2011) focus on the interaction between the timing of disclosure of private information relative to the arrival of external news, and clustering of the timing of announcements across firms. Guttman et al. (2013) analyse a setting with two periods and two signals and show that, in equilibrium, both what is disclosed and when it is disclosed matters. Strikingly, the authors show that later disclosures are received more positively. All these models are unsuited to study either the credibility or the scrutiny sides of our trade-off, because information in these models is verified instantly and with certainty once disclosed. In our motivating examples, information is not immediately verifiable: when Sender releases the information, Receiver only knows that “time will tell” whether the information released is reliable.
To capture this notion of partial verifiability, we model information as being verified stochastically over time in the sense that releasing information starts a learning process for Receiver akin to processes in Bolton and Harris (1999), Keller et al. (2005). In Brocas and Carrillo (2007), an uninformed Sender, wishing to influence Receiver’s beliefs, chooses when to stop a public learning process.11 In contrast, in our model Sender is privately informed and she chooses when to start rather than stop the process.12 Our application to U.S. presidential scandals also contributes to the literature on the effect of biased media and campaigns on voters’ behaviour (e.g. Mullainathan and Shleifer, 2005; Gentzkow and Shapiro, 2006; Duggan and Martinelli, 2011; Li and Li, 2013).13 DellaVigna and Kaplan (2007) provide evidence that biased media have a significant effect on the vote share in U.S. presidential elections. We focus on when a biased source chooses to release information and show that voters respond differently to information released at different times in the election campaign.

2. The Model

In our model, Sender’s payoff depends on Receiver’s posterior belief about Sender’s type at a deadline. We begin with a benchmark model in which (1) Sender’s payoff is equal to Receiver’s posterior belief, (2) Sender is perfectly informed, (3) Sender’s type does not affect when the arm arrives, and (4) the deadline is deterministic. Section 3.2 relaxes each of these assumptions and shows that our main results continue to hold.

2.1. Benchmark model

There are two players: Sender (she) and Receiver (he). Sender is one of two types $$\theta\in\left\{ G,B\right\}$$: good ($$\theta=G$$) or bad ($$\theta=B$$). Let $$\pi\in\left(0,1\right)$$ be the common prior belief that Sender is good. Time is discrete and indexed by $$t\in\left\{ 1,2,\ldots,T+1\right\}$$. Sender is concerned about being perceived as good at a deadline $$t=T$$.
In particular, the expected payoff of type $$\theta\in\left\{ G,B\right\}$$ is given by $$v_{\theta}\left(s\right)=s$$, where $$s$$ is Receiver’s posterior belief at $$t=T$$ that $$\theta=G$$. Time $$T+1$$ combines all future dates after the deadline, including never. An arm arrives to Sender at a random time according to distribution $$F$$ with support $$\left\{ 1,2,\ldots,T+1\right\}$$. If the arm has arrived, Sender privately observes her type and can pull the arm immediately or at any time after its arrival, including time $$T+1$$. Because Sender moves only after the arrival of the arm, it is immaterial for the analysis whether Sender learns her type when the arm arrives or when the game starts. Pulling the arm starts a learning process for Receiver. Specifically, if the arm is pulled at a time $$\tau$$ before the deadline ($$\tau\leq T$$), Receiver observes realizations of a stochastic process   $L=\left\{ L_{\theta}\left(t;\tau\right),\tau\leq t\leq T\right\} .$ The process $$L$$ can be viewed as a sequence of signals, one per each time from $$\tau$$ to $$T$$ with the precision of the signal at time $$t$$ possibly depending on $$\tau$$, $$t$$, and all previous signals. Notice that if the arm is pulled at $$\tau=T$$, Receiver observes the realization $$L_{\theta}\left(T,T\right)$$ before taking his action. For notational convenience, we assume that $$L$$ is either discrete or atomless. It is more convenient to work directly with the distribution of beliefs induced by the process $$L$$ rather than with the process itself. Recall that $$s$$ is Receiver’s posterior belief that Sender is good after observing all realizations of the process from $$\tau$$ to $$T$$. Let $$m$$ denote Receiver’s interim belief that Sender is good upon observing that she pulls the arm at time $$\tau$$ and before observing any realization of $$L$$. 
Given $$\tau$$ and $$m$$, the process $$L$$ generates a distribution $$H\left(.\mid\tau,m\right)$$ over Receiver’s posterior beliefs $$s$$; given $$\tau$$, $$m$$, and $$\theta$$, the process $$L$$ generates a distribution $$H_{\theta}\left(.\mid\tau,m\right)$$ over $$s$$. Notice that if the arm is pulled after the deadline ($$\tau=T+1$$), then the distributions $$H_{\theta}\left(.\mid\tau,m\right)$$ and $$H\left(.\mid\tau,m\right)$$ assign probability one to $$s=m$$. Assumption 1 says that (1) pulling the arm later reveals strictly less information about Sender’s type in Blackwell (1953)’s sense and (2) the learning process never fully reveals Sender’s type.

Assumption 1. (1) For all $$\tau,\tau^{\prime}\in\left\{ 1,2,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$, $$H\left(.\mid\tau,\pi\right)$$ is a strict mean-preserving spread of $$H\left(.\mid\tau^{\prime},\pi\right)$$. (2) The support of $$H\left(.\mid1,\pi\right)$$ is a subset of $$\left(0,1\right)$$.

For example, consider a set of (imperfectly informative) signals $$\mathcal{S}$$ with some joint distribution and suppose that pulling the arm at $$\tau$$ reveals to Receiver a set of signals $$\mathcal{S}_{\tau}\subset\mathcal{S}$$. Assumption 1 holds whenever $$\mathcal{S}_{\tau^{\prime}}$$ is a proper subset of $$\mathcal{S}_{\tau}$$ for all $$\tau<\tau^{\prime}$$. We characterize the set of perfect Bayesian equilibria, henceforth equilibria. Let $$\mu\left(\tau\right)$$ be Receiver’s equilibrium interim belief that Sender is good given that Sender pulls the arm at time $$\tau\in\left\{ 1,2,\dots,T+1\right\}$$. Also, let $$P_{\theta}$$ denote an equilibrium distribution of pulling time $$\tau$$ given Sender’s type $$\theta$$ (with the convention that $$P_{\theta}\left(0\right)=0$$).

2.2. Discussion

We now pause to interpret key ingredients of our model using our main application—the timing of U.S. presidential scandals in the lead-up to elections.
Receiver is the median voter and Sender is an opposition member or organization wishing to reduce a candidate’s chances of being elected. The candidate is either fit ($$\theta=B$$) or unfit ($$\theta=G$$) to run the country. The prior belief that the candidate is unfit is $$\pi$$. At a random time, the opposition may privately receive scandalous material against the candidate (arrival of the arm). The opposition can choose when and whether to release the material (pull the arm). After it is released, the material is subject to scrutiny, and the median voter gradually learns about the candidate’s type. Crucially, the opposition has private information about what the expected outcome of scrutiny is. We say that the scandal is real (fake) if further scrutiny is likely to reveal that the candidate is unfit (fit) to run the country. If, at the time of the election (deadline), the median voter believes that the candidate is likely to be unfit to run the country, the candidate’s chances of being elected are weak. Notice that releasing a scandal might backfire. For example, before the FBI reopened its investigation over Secretary Clinton’s emails, the median U.S. voter had some belief $$\pi$$ that Secretary Clinton had grossly mishandled classified information and was therefore unfit to be commander in chief. Further investigations could have revealed that her conduct was more than a mere procedural mistake. In this case, the median voter’s posterior belief $$s$$ would have been higher than $$\pi$$. On the contrary, the FBI might not have found any evidence of misconduct, despite investigating yet more emails. In this case, the median voter’s posterior belief $$s$$ would have been lower than $$\pi$$. In this application, Sender’s payoff depends on Receiver’s belief at the deadline because this belief affects the probability that the median voter elects the candidate. 
Specifically, suppose that the opposition is uncertain about the ideological position $$r$$ of the median voter, which is uniformly distributed on the unit interval. If the candidate is not elected, the median voter’s payoff is normalized to $$0$$. If the incumbent is elected, the median voter with position $$r$$ gets payoff $$r-1$$ if the candidate is unfit and payoff $$r$$ otherwise. The opposition gets payoff $$0$$ if the candidate is elected and $$1$$ otherwise. Therefore, Sender’s expected payoff is given by   $v_{\theta}\left(s\right)=\Pr\left(r\leq s\right)=s\text{ for }\theta\in\left\{ G,B\right\} .$ Furthermore, Receiver’s expected payoff $$u\left(s\right)$$ is given by   $u\left(s\right)=\int_{s}^{1}\left[s\left(r-1\right)+\left(1-s\right)r\right]dr=\frac{\left(1-s\right)^{2}}{2}.$ The Receiver’s ex-ante expected payoff is therefore given by   \begin{align} \mathbb{E}\left[u\left(s\right)\right] & =\frac{\left(1-\mathbb{E}\left[s\right]\right)^{2}+\mathbb{E}\left[\left(s-\mathbb{E}\left[s\right]\right)^{2}\right]}{2}=\frac{\left(1-\pi\right)^{2}+\mathrm{Var}\left[s\right]}{2}.\label{eq:ele} \end{align} (1)

3. Analysis

3.1. Equilibrium

We begin our analysis by deriving statistical properties of the model that rely only on players being Bayesian. These properties link the pulling time and Receiver’s interim belief to the expectation of Receiver’s posterior belief. First, from (good and bad) Sender’s perspective, keeping the pulling time constant, a higher interim belief results in a higher expected posterior belief. Furthermore, pulling the arm earlier reveals more information about Sender’s type. Therefore, from bad (good) Sender’s perspective, pulling the arm earlier decreases (increases) the expected posterior belief that Sender is good. In short, Lemma 1 says that credibility is beneficial for both types of Sender, whereas scrutiny is detrimental for bad Sender but beneficial for good Sender. Lemma 1.
(Statistical Properties). Let $$\mathbb{E}\left[s\mid\tau,m,\theta\right]$$ be the expectation of Receiver’s posterior belief $$s$$ conditional on the pulling time $$\tau$$, Receiver’s interim belief $$m$$, and Sender’s type $$\theta$$. For all $$\tau,\tau^{\prime}\in\left\{ 1,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$, and all $$m,m^{\prime}\in\left(0,1\right]$$ such that $$m<m^{\prime}$$, (1) $$\mathbb{E}\left[s\mid\tau,m^{\prime},\theta\right]>\mathbb{E}\left[s\mid\tau,m,\theta\right]$$ for $$\theta\in\left\{ G,B\right\}$$; (2) $$\mathbb{E}\left[s\mid\tau^{\prime},m,B\right]>\mathbb{E}\left[s\mid\tau,m,B\right]$$; (3) $$\mathbb{E}\left[s\mid\tau,m,G\right]>\mathbb{E}\left[s\mid\tau^{\prime},m,G\right]$$.

Proof. In Appendix A. ǁ

We now show that in any equilibrium, (1) good Sender strictly prefers to pull the arm whenever bad Sender weakly prefers to do so, and therefore (2) if the arm has arrived, good Sender pulls it with certainty whenever bad Sender pulls it with positive probability.

Lemma 2. (Good Sender’s Behaviour). In any equilibrium: (1) For all $$\tau,\tau^{\prime}\in\left\{ 1,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$ and $$\mu\left(\tau\right),\mu\left(\tau^{\prime}\right)\in\left(0,1\right)$$, if bad Sender weakly prefers to pull the arm at $$\tau$$ than at $$\tau^{\prime}$$, then $$\mu\left(\tau\right)>\mu\left(\tau^{\prime}\right)$$ and good Sender strictly prefers to pull the arm at $$\tau$$ than at $$\tau^{\prime}$$; (2) For all $$\tau\in\left\{ 1,\ldots,T\right\}$$ in the support of $$P_{B}$$, we have $$P_{G}\left(\tau\right)=F\left(\tau\right)$$.

Proof. In Appendix B. ǁ

The proof relies on the three statistical properties from Lemma 1. The key to Lemma 2 is that if bad Sender weakly prefers to pull the arm at some time $$\tau$$ than at $$\tau^{\prime}>\tau$$, then Receiver’s interim belief $$\mu\left(\tau\right)$$ must be greater than $$\mu\left(\tau^{\prime}\right)$$.
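As a concrete illustration of Lemma 1 (not part of the paper's formal argument), consider a toy learning process: conditionally i.i.d. binary signals, where a "favourable" signal arrives with hypothetical accuracy 0.7 for good Sender and 0.3 for bad Sender, and where pulling the arm earlier means Receiver observes more signals. Enumerating outcomes shows the three statistical properties directly.

```python
from math import comb

def expected_posterior(m, n, theta_p, p_good=0.7, p_bad=0.3):
    """E[Receiver's posterior s | interim belief m, n binary signals observed,
    true type emits favourable signals w.p. theta_p].
    Accuracies p_good, p_bad are illustrative assumptions, not from the paper."""
    total = 0.0
    for k in range(n + 1):  # k = number of favourable signals
        lik_g = comb(n, k) * p_good**k * (1 - p_good)**(n - k)  # likelihood if good
        lik_b = comb(n, k) * p_bad**k * (1 - p_bad)**(n - k)    # likelihood if bad
        s = m * lik_g / (m * lik_g + (1 - m) * lik_b)           # Bayes posterior
        total += comb(n, k) * theta_p**k * (1 - theta_p)**(n - k) * s
    return total

m = 0.5  # interim belief upon pulling
# More signals (earlier pulling, longer scrutiny) raises good Sender's
# expected posterior and lowers bad Sender's (Lemma 1, parts 3 and 2),
e_good = [expected_posterior(m, n, 0.7) for n in range(4)]
e_bad = [expected_posterior(m, n, 0.3) for n in range(4)]
# while the posterior averaged over types stays equal to m (a martingale).
```

For instance, `e_good` rises from 0.5 at `n = 0` while `e_bad` falls symmetrically, so scrutiny separates the types even though the unconditional posterior is a martingale.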
Intuitively, bad Sender is willing to endure more scrutiny only if pulling the arm earlier boosts her credibility. Since $$\mu\left(\tau\right)>\mu\left(\tau^{\prime}\right)$$, good Sender strictly prefers to pull the arm at the earlier time $$\tau$$, as she benefits from both scrutiny and credibility. Next, we show that bad Sender pulls the arm with positive probability whenever good Sender does, but bad Sender pulls the arm later than good Sender in the first-order stochastic dominance sense. Moreover, bad Sender pulls the arm strictly later unless no type pulls the arm. An immediate implication is that bad Sender always withholds the arm with positive probability.

Lemma 3. (Bad Sender’s Behaviour). In any equilibrium, $$P_{G}$$ and $$P_{B}$$ have the same supports and, for all $$\tau\in\left\{ 1,\ldots,T\right\}$$ with $$P_{G}\left(\tau\right)>0$$, we have $$P_{B}\left(\tau\right)<P_{G}\left(\tau\right)$$. Therefore, in any equilibrium, $$P_{B}\left(T\right)<F\left(T\right)$$.

Proof. In Appendix B. ǁ

Intuitively, if there were a time $$\tau\in\left\{ 1,\dots,T\right\}$$ at which only good Sender pulled the arm with positive probability, then, upon observing that the arm was pulled at $$\tau$$, Receiver would conclude that Sender was good. But then, to achieve this perfect credibility,14 bad Sender would want to mimic good Sender and therefore strictly prefer to pull the arm at $$\tau$$, contradicting that only good Sender pulled the arm at $$\tau$$. Nevertheless, bad Sender always delays relative to good Sender. Indeed, if bad and good Sender were to pull the arm at the same time, then Sender’s credibility would not depend on the pulling time. But with constant credibility, bad Sender would never pull the arm to avoid scrutiny. Therefore, good Sender must necessarily pull the arm earlier than bad Sender. We now show that, at any time when good Sender pulls the arm, bad Sender is indifferent between pulling and not pulling the arm.
That is, in equilibrium, pulling the arm earlier boosts Sender’s credibility as much as to exactly offset the expected cost of longer scrutiny for bad Sender. Thus, Receiver’s interim beliefs are determined by bad Sender’s indifference condition (2) and the consistency condition (3). The consistency condition follows from Receiver’s interim beliefs being determined by Bayes’s rule and Sender’s equilibrium strategy. Roughly, it says that a weighted average of interim beliefs is equal to the prior belief.

Lemma 4. (Receiver’s Beliefs). In any equilibrium,   $$\int v_{B}\left(s\right)dH_{B}\left(s|\tau,\mu\left(\tau\right)\right)=v_{B}\left(\mu\left(T+1\right)\right)\text{ for all }\tau\text{ in the support of }P_{G},\label{pit}$$ (2) $$\sum_{\tau\in\mathrm{supp}\left(P_{G}\right)}\frac{1-\mu\left(\tau\right)}{\mu\left(\tau\right)}\left(P_{G}\left(\tau\right)-P_{G}\left(\tau-1\right)\right)=\frac{1-\pi}{\pi}.\label{piT}$$ (3)

Proof. In Appendix B. ǁ

We now characterize the set of equilibria. Part 1 of Proposition 1 states that, for any set of times, there exists an equilibrium in which good Sender pulls the arm only at times in this set. Moreover, in any equilibrium, at any time when good Sender pulls the arm, she pulls it with probability $$1$$ and bad Sender pulls it with strictly positive probability. The probability with which bad Sender pulls the arm at any time is determined by the condition that the induced interim beliefs keep bad Sender exactly indifferent between pulling the arm then and not pulling it at all. Part 2 of Proposition 1 characterizes the set of divine equilibria of Banks and Sobel (1987) and Cho and Kreps (1987).15 In such equilibria, good Sender pulls the arm as soon as it arrives.

Proposition 1. (Equilibrium). (1) For any $$\mathcal{T}\subseteq\left\{ 1,\dots,T+1\right\}$$ with $$T+1\in\mathcal{T}$$, there exists an equilibrium in which the support of $$P_{G}$$ is $$\mathcal{T}$$.
In any equilibrium, $$P_{G}$$ and $$P_{B}$$ have the same supports, and for all $$\tau$$ in the support of $$P_{G}$$, $$P_{G}\left(\tau\right)=F\left(\tau\right)$$ and $$P_{B}\left(\tau\right)=\frac{\pi}{1-\pi}\sum_{t\in\mathrm{supp}\left(P_{G}\right)\text{ s.t. }t\leq\tau}\frac{1-\mu\left(t\right)}{\mu\left(t\right)}\left(P_{G}\left(t\right)-P_{G}\left(t-1\right)\right),\label{B}$$ (4) where $$\mu\left(\tau\right)\in\left(0,1\right)$$ is uniquely determined by (2) and (3). (2) There exists a divine equilibrium. In any divine equilibrium, for all $$\tau\in\left\{ 1,\ldots,T+1\right\}$$, $$P_{G}\left(\tau\right)=F\left(\tau\right)$$.

Proof. In Appendix B. ǁ

Although there exists a plethora of divine equilibria, in all such equilibria, pulling probabilities of good and bad Sender, as well as Receiver’s beliefs, are uniquely determined by $$P_{G}=F$$ and (2)-(4). In this sense, there exists an essentially unique divine equilibrium. Our main testable prediction is that bad Sender pulls the arm strictly later than good Sender in the likelihood ratio order sense.

Corollary 1. (Equilibrium Dynamics). In the divine equilibrium,   $\frac{P_{B}\left(\tau\right)-P_{B}\left(\tau-1\right)}{P_{G}\left(\tau\right)-P_{G}\left(\tau-1\right)}<\frac{P_{B}\left(\tau+1\right)-P_{B}\left(\tau\right)}{P_{G}\left(\tau+1\right)-P_{G}\left(\tau\right)}\text{ for all }\tau\in\left\{ 1,\dots,T\right\} .$

Proof. In Appendix B.
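To see how conditions (2)-(4) pin down the divine equilibrium in a concrete case, the following numerical sketch solves the fixed point for a hypothetical specification: binary favourable/unfavourable signals with assumed accuracies 0.7 (good) and 0.3 (bad), deadline $$T=3$$, prior $$\pi=1/2$$, and a uniform arrival distribution that leaves probability 1/4 that the arm never arrives. This is an illustration of the equilibrium construction, not the paper's general analysis.

```python
from math import comb

P_GOOD, P_BAD, PRIOR, T = 0.7, 0.3, 0.5, 3   # hypothetical parameters
ARRIVAL = [0.25, 0.25, 0.25]                 # F increments for t=1,2,3; Pr(no arm) = 0.25

def exp_posterior_bad(mu, n):
    """Bad Sender's expected posterior after n binary signals at interim belief mu."""
    total = 0.0
    for k in range(n + 1):
        lg = comb(n, k) * P_GOOD**k * (1 - P_GOOD)**(n - k)
        lb = comb(n, k) * P_BAD**k * (1 - P_BAD)**(n - k)
        total += lb * mu * lg / (mu * lg + (1 - mu) * lb)
    return total

def solve_mu(v, n):
    """Interim belief mu making bad Sender indifferent, as in (2): E[s|mu,n,B] = v."""
    lo, hi = v, 1.0 - 1e-12          # E[s|mu,B] < mu, so the root lies above v
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if exp_posterior_bad(mid, n) < v else (lo, mid)
    return (lo + hi) / 2

def no_pull_belief(v):
    """Given conjectured no-pull belief v = mu(T+1), back out interim beliefs,
    bad Sender's total pulling probability from (4), and the implied mu(T+1)."""
    mus = [solve_mu(v, T - t) for t in range(T)]  # pulling at t+1 yields T-t signals
    pb = sum(PRIOR / (1 - PRIOR) * (1 - m) / m * f for m, f in zip(mus, ARRIVAL))
    ng = PRIOR * (1 - sum(ARRIVAL))               # good Sender never gets the arm
    nb = (1 - PRIOR) * (1 - pb)                   # bad Sender withholds the arm
    return ng / (ng + nb), mus, pb

# The implied no-pull belief is decreasing in the conjectured v, so bisect.
lo, hi = 0.35, 0.5
for _ in range(60):
    v = (lo + hi) / 2
    implied, mus, pb = no_pull_belief(v)
    lo, hi = (v, hi) if implied > v else (lo, v)
```

In the resulting fixed point, interim credibility `mus` declines over time (as in Corollary 2) and bad Sender withholds the arm with strictly positive probability (`pb < 1`, as in Lemma 3); since the likelihood ratio of pulling increments is $$\frac{\pi}{1-\pi}\frac{1-\mu(t)}{\mu(t)}$$, declining `mus` is exactly the likelihood ratio ordering of Corollary 1.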
ǁ Corollary 1 implies that, conditional on pulling time $$\tau$$ being between any two times $$\tau^{\prime}$$ and $$\tau^{\prime\prime}$$, bad Sender pulls the arm strictly later than good Sender in the first-order stochastic dominance sense (Theorem 1.C.5, Shaked and Shanthikumar, 2007):   $\frac{P_{B}\left(\tau\right)-P_{B}\left(\tau^{\prime}\right)}{P_{B}\left(\tau^{\prime\prime}\right)-P_{B}\left(\tau^{\prime}\right)}<\frac{P_{G}\left(\tau\right)-P_{G}\left(\tau^{\prime}\right)}{P_{G}\left(\tau^{\prime\prime}\right)-P_{G}\left(\tau^{\prime}\right)}\text{ for all }\tau^{\prime}<\tau<\tau^{\prime\prime}.$ Our model also gives predictions about the evolution of Receiver’s beliefs. Pulling the arm earlier is more credible as Receiver’s interim beliefs $$\mu\left(\tau\right)$$ decrease over time. Moreover, pulling the arm instantaneously boosts credibility in the sense that Receiver’s belief at any time $$\tau$$ about Sender’s type is higher if Sender pulls the arm than if she does not.

Corollary 2. (Belief Dynamics). Let $$\tilde{\mu}\left(\tau\right)$$ denote Receiver’s interim belief that Sender is good given that she has not pulled the arm before or at $$\tau$$. In the divine equilibrium,   $\mu\left(\tau-1\right)>\mu\left(\tau\right)>\tilde{\mu}\left(\tau-1\right)>\tilde{\mu}\left(\tau\right)\text{ for all }\tau\in\left\{ 2,\dots,T\right\} .$

Proof. In Appendix B. ǁ

3.2. Discussion of model assumptions

We now discuss how our results change (or do not change) if we relax several of the assumptions made in our benchmark model. We discuss each assumption in a separate subsection. The reader may skip this section without any loss of understanding of subsequent sections.

3.2.1. Nonlinear Sender’s payoff

In the benchmark model, we assume that Sender’s payoff is linear in Receiver’s posterior belief: $$v_{G}\left(s\right)=v_{B}\left(s\right)=s$$ for all $$s$$.
In our motivating example, this linearity arises because the opposition is uncertain about the ideological position $$r$$ of the median voter. If there is no such uncertainty, then the median voter reelects the incumbent whenever $$s$$ is below $$r$$, where $$r\in\left(0,1\right)$$ is a constant. Therefore, Sender’s payoff is a step function:   $$v_{\theta}\left(s\right)=v\left(s\right)=\begin{cases} 0 & \text{if }s<r;\\ 1 & \text{if }s>r. \end{cases}$$ (5) We now allow for Sender’s payoff to be nonlinear in Receiver’s posterior belief and even type dependent. To understand how the shapes of the payoff functions $$v_{G}$$ and $$v_{B}$$ affect our analysis, we extend the statistical properties of Lemma 1, which describe the evolution of Receiver’s posterior belief from Sender’s perspective. First and not surprisingly, a more favourable interim belief results in more favourable posterior beliefs for all types of Sender and for all realizations of the process. Moreover, Receiver’s posterior belief follows a supermartingale (submartingale) process from bad (good) Sender’s perspective. Lemma 1′ formalizes these statistical properties, using standard stochastic orders (see, e.g., Shaked and Shanthikumar, 2007). Distribution $$Z_{2}$$ strictly dominates distribution $$Z_{1}$$ in the increasing convex (concave) order if there exists a distribution $$Z$$ such that $$Z_{2}$$ strictly first-order stochastically dominates $$Z$$ and $$Z$$ is a mean-preserving spread (reduction) of $$Z_{1}$$. Lemma 1′. 
(Statistical Properties).For all $$\tau,\tau^{\prime}\in\left\{ 1,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$, and all $$m,m^{\prime}\in\left(0,1\right]$$ such that $$m<m^{\prime}$$, (1)$$H_{\theta}\left(.\mid\tau,m^{\prime}\right)$$ strictly first-order stochastically dominates $$H_{\theta}\left(.\mid\tau,m\right)$$ for $$\theta\in\left\{ G,B\right\}$$; (2)$$H_{B}\left(.\mid\tau^{\prime},m\right)$$ strictly dominates $$H_{B}\left(.\mid\tau,m\right)$$ in the increasing concave order; (3)$$H_{G}\left(.\mid\tau,m\right)$$ strictly dominates $$H_{G}\left(.\mid\tau^{\prime},m\right)$$ in the increasing convex order. Proof. In Appendix A. ǁ To interpret Lemma 1′, we assume that the payoff of both types of Sender is a continuous strictly increasing function of Receiver’s posterior belief, so that both types of Sender want to look good.16 Part 1 says that credibility is beneficial for both types of Sender, regardless of the shape of their payoff functions. Part 2 (part 3) says that from bad (good) Sender’s perspective, pulling the arm earlier results in more spread out and less (more) favourable posteriors provided that the interim belief does not depend on the pulling time. So scrutiny is detrimental for bad Sender if her payoff is not too convex but beneficial for good Sender if her payoff is not too concave. Therefore, for a given process satisfying Assumption 1, Proposition 1 continues to hold if bad Sender is not too risk-loving and good Sender is not too risk-averse. In fact, Proposition 1 continues to hold verbatim if bad Sender’s payoff is weakly concave and good Sender’s payoff is weakly convex (the proof in Appendix B explicitly allows for this possibility).17 Much less can be said in general if the payoff functions $$v_{G}$$ and $$v_{B}$$ have an arbitrary shape. For example, if $$v_{G}$$ is sufficiently concave, then good Sender can prefer to delay pulling the arm to reduce the spread in posterior beliefs. 
Likewise, if $$v_{B}$$ is sufficiently convex, then bad Sender can prefer to pull the arm earlier than good Sender to increase the spread in posterior beliefs. These effects work against our credibility-scrutiny trade-off and Proposition 1 no longer holds.18 Nevertheless, bad Sender weakly delays pulling the arm relative to good Sender under the following single crossing assumption. Assumption 2. For all $$\tau,\tau^{\prime}\in\left\{ 1,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$ and $$\mu\left(\tau\right),\mu\left(\tau^{\prime}\right)\in\left(0,1\right)$$, if bad Sender weakly prefers to pull the arm at $$\tau$$ than at $$\tau^{\prime}$$, then good Sender strictly prefers to pull the arm at $$\tau$$ than at $$\tau^{\prime}$$. This assumption holds in the benchmark model by Lemma 2. This assumption also holds if Sender’s payoff is the step function in (5) whenever pulling the arm later reveals strictly less useful information about Sender’s type, in the sense that Receiver is strictly worse off.19 Lemma 2′. (Good Sender’s Behaviour).Let $$v_{\theta}$$ be given by (5). If for all $$\tau,\tau^{\prime}\in\left\{ 1,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$   $$\int_{r}^{1}H\left(s\mid\tau^{\prime},m\right)ds>\int_{r}^{1}H\left(s\mid\tau,m\right)ds\,\it{ for\, all }\,m\in\left(0,1\right),\label{eq:anton}$$ (6)then Assumption 2 holds. Proof. In Appendix B. ǁ If Assumption 2 holds and $$v_{\theta}$$ is strictly increasing, then in the unique divine equilibrium, good Sender pulls the arm as soon as it arrives and bad Sender pulls the arm weakly later than good Sender—there may exist an equilibrium in which both good and bad Sender pull the arm as soon as it arrives.20 3.2.2. Imperfectly informed Sender In many applications, Sender does not know with certainty whether pulling the arm would start a good or bad learning process for Receiver. 
For example, when announcing the reopening of the Clinton investigation, Director Comey could not know for certain what the results of the investigation would eventually be. We generalize our model to allow for Sender to only observe a signal $$\sigma\in\left\{ \sigma_{B},\sigma_{G}\right\}$$ about an underlying binary state $$\theta$$, with normalization   $\sigma_{G}=\Pr\left(\theta=G\mid\sigma_{G}\right)>\pi>\Pr\left(\theta=G\mid\sigma_{B}\right)=\sigma_{B}.$ The statistical properties of Lemma 1 still hold. Lemma 1″. (Statistical Properties).Let $$\mathbb{E}\left[s\mid\tau,m,{{\sigma}}\right]$$ be the expectation of Receiver’s posterior belief $$s$$ conditional on the pulling time $$\tau$$, Receiver’s interim belief $$m$$, and Sender’s signal $$\sigma$$. For all $$\tau,\tau^{\prime}\in\left\{ 1,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$, and all $$m,m^{\prime}\in\left(0,1\right]$$ such that $$m<m^{\prime}$$, (1)$$\mathbb{E}\left[s\mid\tau,m^{\prime},\sigma\right]>\mathbb{E}\left[s\mid\tau,m,\sigma\right]$$; (2)$$\mathbb{E}\left[s\mid\tau^{\prime},m,\sigma_{B}\right]>\mathbb{E}\left[s\mid\tau,m,\sigma_{B}\right]$$; (3)$$\mathbb{E}\left[s\mid\tau,m,\sigma_{G}\right]>\mathbb{E}\left[s\mid\tau^{\prime},m,\sigma_{G}\right]$$. Proof. In Appendix A. ǁ These statistical results ensure that credibility is always beneficial for Sender, whereas scrutiny is detrimental for Sender with signal $$\sigma_{B}$$ but beneficial for Sender with signal $$\sigma_{G}$$. Therefore, all our results carry over. Moreover, we can extend our analysis to allow for signal $$\sigma$$ to be continuously distributed on the interval $$\left[\underline{{\sigma}},\bar{\sigma}\right)$$, with normalization $$\sigma=\Pr\left(\theta=G\mid\sigma\right)$$. 
In particular, in this case, there exists a partition equilibrium with $$\bar{\sigma}=\sigma_{0}>\sigma_{1}>\dots>\sigma_{T+1}=\underline{{\sigma}}$$ such that Sender $$\sigma\in\left[\sigma_{t},\sigma_{t-1}\right)$$ pulls the arm as soon as it arrives unless it arrives before time $$t\in\left\{ 1,\dots,T+1\right\}$$ (and pulls the arm at time $$t$$ if it arrives before $$t$$). 3.2.3. Type-dependent arrival of the arm In many applications, it is more reasonable to assume that the distribution of the arrival of the arm differs for good and bad Sender. For example, fake scandals may be easy to fabricate, whereas real scandals need time to be discovered. We generalize the model to allow for different distributions of the arrival of the arm for good and bad Sender. In particular, the arm arrives at a random time according to distributions $$F_{G}=F$$ for good Sender and $$F_{B}$$ for bad Sender. The proof of Proposition 1 (in Appendix B) explicitly allows for the arm to arrive (weakly) earlier to bad Sender than to good Sender in the first-order stochastic dominance sense: $$F_{B}\left(t\right)\geq F_{G}\left(t\right)$$ for all $$t$$. This assumption is clearly satisfied if bad Sender has the arm from the outset or if bad and good Sender receive the arm at the same time. More generally, Proposition 1 continues to hold verbatim unless the arm arrives sufficiently later to bad Sender than to good Sender such that $$F_{B}\left(t\right)<P_{B}\left(t\right)$$ for some $$t$$, where $$P_{B}\left(t\right)$$ is given by (4). But even then, Corollary 1 still holds. That is, bad Sender pulls the arm strictly later than good Sender. Yet, bad Sender may do so for the simple mechanical (rather than strategic) reason that the arm arrives to her later than to good Sender.21 3.2.4. Stochastic deadline In the benchmark model, we assume that the deadline $$T$$ is fixed and common knowledge. In some applications, the deadline $$T$$ may be stochastic. 
In particular, suppose that $$T$$ is a random variable distributed on $$\left\{ 1,\dots,\bar{T}\right\}$$ where time runs from $$1$$ to $$\bar{T}+1$$. Now the process $$L$$ has $$T$$ as a random variable rather than a constant. For this process, we can define the ex-ante distribution $$H$$ of posteriors at $$T$$, where $$H$$ depends only on pulling time $$\tau$$ and interim belief $$m$$. Notice that Assumption 1 still holds for this ex-ante distribution of posteriors for any $$\tau,\tau^{\prime}\in\left\{ 1,\dots,\bar{T}+1\right\}$$. Therefore, from the ex-ante perspective, Sender’s problem is identical to the problem with a deterministic deadline and all results carry over. 4. Poisson Model To get more precise predictions about the strategic timing of information release, we now assume that the arrival of the arm and Receiver’s learning follow Poisson processes. In this Poisson model, time is continuous, $$t\in\left[0,T\right]$$.22 The arm arrives to Sender at Poisson rate $$\alpha$$, so that $$F\left(t\right)=1-e^{-\alpha t}$$. Once Sender pulls the arm, a breakdown occurs at Poisson rate $$\lambda$$ if Sender is bad, but never occurs if Sender is good, so that $$H\left(.\mid\tau,m\right)$$ puts probability $$\left(1-m\right)\left(1-e^{-\lambda\left(T-\tau\right)}\right)$$ on $$s=0$$ and the complementary probability on   $s=\frac{m}{m+\left(1-m\right)e^{-\lambda\left(T-\tau\right)}}.$ Returning to our main application, the Poisson model assumes that scandals can be conclusively debunked, but cannot be proven real. It also assumes that the opposition receives the scandalous material against the president at a constant rate, independent of whether it is real or fake. As discussed in Section 3.2.3, the results would not change if real documents take more time to be discovered than fake documents take to be fabricated. Our benchmark model does not completely nest the Poisson model.
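The distribution $$H\left(.\mid\tau,m\right)$$ just defined is simple enough to check numerically. The following sketch (Python; the parameter values are illustrative, not taken from the paper) verifies that Receiver’s posterior is a martingale from her own perspective, a supermartingale from bad Sender’s perspective, and a submartingale from good Sender’s perspective, mirroring Lemma 1:

```python
import math

def posterior_dist(tau, m, lam=2.0, T=1.0):
    """Distribution H(.|tau, m): Receiver's posterior s at the deadline T
    when the arm is pulled at tau with interim belief m (Poisson model)."""
    decay = math.exp(-lam * (T - tau))      # P(no breakdown | bad Sender)
    s_high = m / (m + (1 - m) * decay)      # posterior if no breakdown occurs
    p_breakdown = (1 - m) * (1 - decay)     # unconditional breakdown probability
    return [(0.0, p_breakdown), (s_high, 1 - p_breakdown)]

tau, m = 0.5, 0.4
dist = posterior_dist(tau, m)

# Receiver's perspective: posterior beliefs are a martingale, E[s] = m.
mean_s = sum(s * p for s, p in dist)
assert abs(mean_s - m) < 1e-12

# Bad Sender suffers a breakdown w.p. 1 - e^{-lam(T-tau)}, so pulling
# earlier (more scrutiny) lowers her expected posterior (supermartingale).
def e_s_bad(tau, m, lam=2.0, T=1.0):
    decay = math.exp(-lam * (T - tau))
    return decay * m / (m + (1 - m) * decay)

# Good Sender never suffers a breakdown, so earlier pulling helps her.
def e_s_good(tau, m, lam=2.0, T=1.0):
    decay = math.exp(-lam * (T - tau))
    return m / (m + (1 - m) * decay)

assert e_s_bad(0.2, m) < e_s_bad(0.8, m)    # bad Sender prefers less scrutiny
assert e_s_good(0.2, m) > e_s_good(0.8, m)  # good Sender prefers more scrutiny
```

The last two assertions are the Poisson-model counterpart of the credibility-scrutiny trade-off: holding the interim belief fixed, delay helps bad Sender and hurts good Sender.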
In fact, part (ii) of Assumption 1, that the learning process never fully reveals Sender’s type, fails in the Poisson model because a breakdown fully reveals that Sender is bad. Nevertheless, if only part (i) of Assumption 1 is satisfied, a version of Proposition 1 continues to hold, with the difference that bad Sender never pulls the arm before some time $$\bar{t}$$. Specifically, Proposition 1 holds for all $$\tau\geq\bar{t}$$, whereas $$\mu\left(\tau\right)=1$$ and $$P_{B}\left(\tau\right)=0$$ for all $$\tau<\bar{t}$$. Intuitively, even if Receiver believes that only good Sender pulls the arm before $$\bar{t}$$, bad Sender strictly prefers to pull the arm after $$\bar{t}$$ to reduce the risk that Receiver fully learns that Sender is bad. We can, therefore, explicitly characterize the divine equilibrium of the Poisson model. First, good Sender pulls the arm as soon as it arrives.23 Second, bad Sender is indifferent between pulling the arm at any time $$t\geq\bar{t}\geq0$$ and not pulling it at all. Third, bad Sender strictly prefers to delay pulling the arm if $$t<\bar{t}$$. In the divine equilibrium of the Poisson model, $$\mu\left(t\right)=1$$ for all $$t<\bar{t}$$, and equations (2) and (3) become   \begin{eqnarray*} \frac{\mu\left(t\right)e^{-\lambda\left(T-t\right)}}{\mu\left(t\right)+\left(1-\mu\left(t\right)\right)e^{-\lambda\left(T-t\right)}} & = & \mu\left(T\right)\text{ for all }t\ge\bar{t},\\ \int_{0}^{T}\alpha\frac{1-\mu\left(t\right)}{\mu\left(t\right)}e^{-\alpha t}dt+\frac{1-\mu\left(T\right)}{\mu\left(T\right)}e^{-\alpha T} & = & \frac{1-\pi}{\pi}. \end{eqnarray*} Adding the boundary condition $$\lim_{t\downarrow\bar{t}}\mu\left(t\right)=1$$ yields the explicit solution $$\mu\left(t\right)$$ and uniquely determines $$\bar{t}$$. Proposition 2. 
In the divine equilibrium, good Sender pulls the arm as soon as it arrives and Receiver’s interim belief that Sender is good given pulling time $$t$$ is:  $\mu\left(t\right)=\begin{cases} \frac{\mu\left(T\right)}{1-\mu\left(T\right)\left(e^{\lambda\left(T-t\right)}-1\right)} & {\it{if }}\,t\geq\bar{t};\\ 1 & {\it{otherwise,}} \end{cases}$ where $$\mu\left(T\right)$$ is Receiver’s posterior belief if the arm is never pulled and  $\bar{t}=\begin{cases} 0 & {\it{if }}\,\,\pi<\bar{\pi};\\ T-\frac{1}{\lambda}\ln\frac{1}{\mu\left(T\right)} & {\it{otherwise,}} \end{cases}$   \begin{eqnarray*} \mu\left(T\right) & = & \left\{ \begin{array}{l} \left[\frac{\alpha e^{\lambda T}+\lambda e^{-\alpha T}}{\alpha+\lambda}+\frac{1-\pi}{\pi}\right]^{-1}\,{\it{ if }}\,\,\pi<\bar{\pi};\\ \left[\frac{\left(\alpha+\lambda\right)\left(1-\pi\right)}{\lambda\pi}e^{\alpha T}+1\right]^{-\frac{\lambda}{\alpha+\lambda}}\,\it{ otherwise,} \end{array}\right.\\ \bar{\pi} & = & \left[1+\frac{\lambda}{\alpha+\lambda}\left(e^{\lambda T}-e^{-\alpha T}\right)\right]^{-1}. \end{eqnarray*} The parameters of the model affect welfare directly and through Sender’s equilibrium behavior. Proposition 3 says that, in the divine equilibrium, direct effects dominate. Specifically, a higher prior belief $$\pi$$ results in higher posterior beliefs, which increases both bad and good Sender’s welfare. Moreover, a higher breakdown rate $$\lambda$$ or a higher arrival rate $$\alpha$$ allows Receiver to learn more about Sender, which decreases (increases) bad (good) Sender’s welfare. Proposition 3 also derives comparative statics on Receiver’s welfare given by (1).24 Proposition 3. In the divine equilibrium, (1)the expected payoff of bad Sender increases with $$\pi$$ but decreases with $$\lambda$$ and $$\alpha$$; (2)the expected payoff of good Sender increases with $$\pi$$, $$\lambda$$, and $$\alpha$$; (3)the expected payoff of Receiver decreases with $$\pi$$ but increases with $$\lambda$$ and $$\alpha$$. Proof. 
In Appendix C. ǁ 4.1. Static analysis We now explore how the parameters of the model affect the probability that Sender releases information. The probability that bad Sender pulls the arm is   $$P_{B}\left(T\right)=1-\frac{\pi}{1-\pi}\frac{1-\mu\left(T\right)}{\mu\left(T\right)}e^{-\alpha T},\label{q}$$ (7) which follows from   $$\mu\left(T\right)=\frac{\pi e^{-\alpha T}}{\pi e^{-\alpha T}+\left(1-\pi\right)\left(1-P_{B}\left(T\right)\right)}.\label{eq:pi(T)}$$ (8) Proposition 4 says that bad Sender pulls the arm with a higher probability if the prior belief $$\pi$$ is lower, if the breakdown rate $$\lambda$$ is lower, or if the arrival rate $$\alpha$$ is higher. Proposition 4. In the divine equilibrium, the probability that bad Sender pulls the arm decreases with $$\pi$$ and $$\lambda$$ but increases with $$\alpha$$. Proof. In Appendix C. ǁ Intuitively, if the prior belief $$\pi$$ is higher, bad Sender has more to lose in case of a breakdown. Similarly, if the breakdown rate $$\lambda$$ is higher, pulling the arm is more likely to reveal that Sender is bad. In both cases, bad Sender is more reluctant to pull the arm. In contrast, if the arrival rate $$\alpha$$ is higher, good Sender is more likely to pull the arm and Receiver will believe that Sender is bad with higher probability if she does not pull the arm. In this case, bad Sender is more willing to pull the arm. The total probability $$P\left(T\right)$$ that Sender pulls the arm is given by the weighted sum of the probabilities $$P_{B}\left(T\right)$$ and $$P_{G}\left(T\right)$$ that bad Sender and good Sender pull the arm:   $$P\left(T\right)=\pi P_{G}\left(T\right)+\left(1-\pi\right)P_{B}\left(T\right)=1-\frac{\pi e^{-\alpha T}}{\mu\left(T\right)}.\label{P}$$ (9) A change in $$\lambda$$ affects $$P_{B}\left(T\right)$$, but not $$P_{G}\left(T\right)$$; a change in $$\alpha$$ affects both $$P_{B}\left(T\right)$$ and $$P_{G}\left(T\right)$$ in the same direction.
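The closed-form expressions above can be cross-checked numerically. The sketch below (Python; the parameter values are illustrative, taken from the caption of Figure 2) confirms that $$\mu\left(t\right)$$ from Proposition 2 satisfies bad Sender’s indifference condition (2) together with the boundary condition at $$\bar{t}$$, and that (7) and (8) are mutually consistent:

```python
import math

alpha, lam, pi, T = 1.0, 2.0, 0.5, 1.0   # illustrative parameters (as in Figure 2)

# Threshold prior pi_bar and no-pull posterior mu(T), from Proposition 2.
pi_bar = 1.0 / (1.0 + lam / (alpha + lam) * (math.exp(lam * T) - math.exp(-alpha * T)))
assert pi >= pi_bar  # these parameters fall in the "otherwise" branch
mu_T = ((alpha + lam) * (1 - pi) / (lam * pi) * math.exp(alpha * T) + 1) ** (-lam / (alpha + lam))
t_bar = T - math.log(1.0 / mu_T) / lam

def mu(t):
    """Receiver's interim belief at pulling time t (Proposition 2)."""
    if t < t_bar:
        return 1.0
    return mu_T / (1.0 - mu_T * (math.exp(lam * (T - t)) - 1.0))

# Indifference condition (2): pulling at any t >= t_bar and surviving
# until T yields the same posterior mu(T).
for t in [t_bar, 0.6, 0.8, T]:
    decay = math.exp(-lam * (T - t))
    posterior = mu(t) * decay / (mu(t) + (1 - mu(t)) * decay)
    assert abs(posterior - mu_T) < 1e-9

# Boundary condition: mu(t) -> 1 as t -> t_bar.
assert abs(mu(t_bar) - 1.0) < 1e-9

# Consistency of (7) and (8): plugging P_B(T) from (7) into Bayes'
# rule (8) recovers mu(T).
P_B = 1.0 - pi / (1 - pi) * (1 - mu_T) / mu_T * math.exp(-alpha * T)
bayes = pi * math.exp(-alpha * T) / (pi * math.exp(-alpha * T) + (1 - pi) * (1 - P_B))
assert abs(bayes - mu_T) < 1e-9
```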
Therefore, Sender pulls the arm with a higher total probability if the breakdown rate is lower or if the arrival rate is higher. The prior belief $$\pi$$ has a direct and an indirect effect on the total probability that Sender pulls the arm. On the one hand, holding $$P_{B}\left(T\right)$$ constant, $$P\left(T\right)$$ directly increases with $$\pi$$, because $$P_{G}\left(T\right)>P_{B}\left(T\right)$$. On the other hand, $$P\left(T\right)$$ indirectly decreases with $$\pi$$, because $$P_{B}\left(T\right)$$ decreases with $$\pi$$. Proposition 5 says that the indirect effect dominates the direct effect when $$\pi$$ is sufficiently low. Proposition 5. In the divine equilibrium, the total probability that Sender pulls the arm decreases with $$\lambda$$, increases with $$\alpha$$, and is quasiconvex in $$\pi$$: decreases with $$\pi$$ if  $\pi<\frac{\alpha e^{\alpha T}}{\alpha e^{\alpha T}+\lambda\left(e^{\alpha T}-1\right)}\in\left(0,1\right)$ and increases with $$\pi$$ otherwise. Proof. In Appendix C. ǁ The probabilities $$P_{G}\left(T\right)=1-e^{-\alpha T}$$ and $$P_{B}\left(T\right)$$ that good Sender and bad Sender pull the arm also determine Receiver’s posterior belief $$\mu\left(T\right)$$. By (8), $$\mu\left(T\right)$$ decreases with the breakdown rate $$\lambda$$, because $$P_{B}\left(T\right)$$ decreases with $$\lambda$$. Equation (8) also suggests that there are direct and indirect effects of the prior belief $$\pi$$ and the arrival rate $$\alpha$$ on $$\mu\left(T\right)$$. On the one hand, holding $$P_{B}\left(T\right)$$ constant, $$\mu\left(T\right)$$ directly increases with $$\pi$$ and decreases with $$\alpha$$. On the other hand, $$\mu\left(T\right)$$ indirectly decreases with $$\pi$$ and increases with $$\alpha$$, because $$P_{B}\left(T\right)$$ decreases with $$\pi$$ and increases with $$\alpha$$. Proposition 6 says that the direct effect always dominates the indirect effect in the Poisson model. Proposition 6.
In the divine equilibrium, Receiver’s posterior belief if the arm is never pulled increases with $$\pi$$ but decreases with $$\lambda$$ and $$\alpha$$. Proof. In Appendix C. ǁ 4.2. Dynamic analysis The Poisson model also allows for a more detailed analysis of the strategic timing of information release. By Proposition 2, bad Sender begins to pull the arm at time $$\bar{t}$$. In the spirit of Proposition 4, bad Sender begins to pull the arm later if the prior belief $$\pi$$ is higher, if the breakdown rate $$\lambda$$ is higher, or if the arrival rate $$\alpha$$ is lower. Proposition 7. In the divine equilibrium, $$\bar{t}$$ increases with $$\pi$$ and $$\lambda$$ but decreases with $$\alpha$$. Proof. In Appendix C. ǁ At each time $$t$$ after $$\bar{t}$$, bad Sender pulls the arm with a strictly positive probability density $$p_{B}\left(t\right)$$ (Figure 2a). Proposition 8 says that $$p_{B}\left(t\right)$$ first increases and then decreases with time. Figure 2. Pulling density and breakdown probability; $$\alpha=1$$, $$\lambda=2$$, $$\pi=.5$$, $$T=1$$. (a) dotted: $$p_{G}\left(t\right)$$; solid: $$p_{B}\left(t\right)$$; dashed: $$p\left(t\right)$$; (b) dotted: $$\lambda\left(T-t\right)$$; solid: $${p_{B}\left(t\right)}/{p_{G}\left(t\right)}$$; dashed: $$Q\left(t\right)$$. Proposition 8. In the divine equilibrium, the probability density that bad Sender pulls the arm at time $$t$$ is quasiconcave: increases with $$t$$ if  $t<t_{b}\equiv T-\frac{1}{\lambda}\ln\left(\frac{\alpha}{\alpha+\lambda}\frac{1}{\mu\left(T\right)}\right)$ and decreases with $$t$$ otherwise. Proof. In Appendix C.
ǁ Intuitively, the dynamics of $$p_{B}\left(t\right)$$ are driven by a strategic and a mechanical force. Strategically, as in Corollary 1, bad Sender delays pulling the arm with respect to good Sender, so that the likelihood ratio $$p_{B}\left(t\right)/p_{G}\left(t\right)$$ increases with time, where $$p_{G}\left(t\right)=\alpha e^{-\alpha t}$$ is the probability density that good Sender pulls the arm at time $$t$$. Mechanically, $$p_{B}\left(t\right)$$ roughly follows the dynamics of $$p_{G}\left(t\right)$$. If the arrival rate $$\alpha$$ is sufficiently small, so that the density $$p_{G}\left(t\right)$$ barely changes over time, the strategic force dominates and the probability that bad Sender pulls the arm monotonically increases with time ($$t_{b}>T$$). Instead, if the arrival rate $$\alpha$$ is sufficiently large, so that $$p_{G}\left(t\right)$$ rapidly decreases over time, the mechanical force dominates and the probability that bad Sender pulls the arm monotonically decreases with time ($$t_{b}<T$$). The total probability density $$p\left(t\right)$$ that Sender pulls the arm is a weighted sum of $$p_{G}\left(t\right)$$ and $$p_{B}\left(t\right)$$, so that $$p\left(t\right)=\pi p_{G}\left(t\right)+\left(1-\pi\right)p_{B}\left(t\right)$$ (Figure 2a). Therefore, until $$\bar{t}$$, $$p\left(t\right)=\pi p_{G}\left(t\right)$$, and thereafter, as in Proposition 8, $$p\left(t\right)$$ first increases and then decreases with time. Proposition 9. In the divine equilibrium, the total probability density that Sender pulls the arm at time $$t$$ decreases with $$t$$ from $$0$$ to $$\bar{t}$$ and is quasiconcave in $$t$$ on the interval $$\left[\bar{t},T\right]$$: increases with $$t$$ if  $\bar{t}<t<t_{s}\equiv T-\frac{1}{\lambda}\ln\left(\frac{\alpha}{\alpha+\lambda}\frac{1+\mu\left(T\right)}{\mu\left(T\right)}\right)$ and decreases with $$t$$ if $$t>t_{s}$$. Proof. In Appendix C.
ǁ Let the breakdown probability $$Q\left(t\right)$$ be the probability that a breakdown occurs before the deadline given that the arm is pulled at time $$t$$ (Figure 2b). Proposition 10 says that, as time passes, the breakdown probability first increases and then decreases.25 Proposition 10. In the divine equilibrium, the breakdown probability is quasiconcave: increases with $$t$$ if  $t<t_{q}\equiv T-\frac{1}{\lambda}\ln\left(\frac{1+\mu\left(T\right)}{2\mu\left(T\right)}\right)<T$ and decreases with $$t$$ otherwise. Proof. In Appendix C. ǁ Intuitively, the breakdown probability increases with the amount of scrutiny and with the likelihood ratio $$p_{B}\left(t\right)/p_{G}\left(t\right)$$. Obviously, Sender is exposed to more scrutiny if she pulls the arm earlier. But the likelihood ratio $$p_{B}\left(t\right)/p_{G}\left(t\right)$$ is lower earlier, because bad Sender strategically delays pulling the arm. Proposition 10 says that this strategic effect dominates for earlier times. 5. Applications 5.1. U.S. presidential scandals Returning to our presidential scandals example, the main prediction of our model is that fake scandals are released later than real scandals. We explore this prediction using Nyhan’s (2015) data on U.S. presidential scandals from 1977 to 2008. For each week, the data report whether a new scandal involving the current U.S. president was first mentioned in the Washington Post. Although scandals might have first appeared on other outlets, we agree with Nyhan that the Washington Post is likely to have mentioned such scandals immediately thereafter. As our model concerns scandals involving the incumbent in view of his possible reelection, we focus on all the presidential elections in which the incumbent was a candidate. Therefore, we consider only the first term of each president from 1977 to 2008, beginning on the first week of January after the president’s election.26,27 In all cases, the election was held on the 201st week after this date.
We construct the variable weeks to election as the difference between the election week at the end of the term and the release week of the scandal. For each scandal,28 we locate the original Washington Post article as well as other contemporary articles on The New York Times and the Los Angeles Times. We then search for subsequent articles on the same scandal in following years until 2016, as well as court decisions and scholarly books when possible. We check whether factual evidence of wrongdoing or otherwise reputationally damaging conduct was conclusively verified at a later time. If so, we check whether the evidence involved the president directly or close family members or political collaborators chosen or appointed by the president or his administration. We code these scandals as real. For the remaining scandals, we check whether a case for libel was successful or all political actors linked to the scandal were cleared of wrongdoings. We code these scandals as fake. The only scandal we were not able to code by this procedure is the “Banca Nazionale del Lavoro” scandal (also known as “Iraq-gate”). We code this scandal as real, but we check in Online Appendix A that all our qualitative results are robust to coding it as fake. In Online Appendix A, we report the complete list of scandals and a summary motivation of our coding decisions. Figure 3 shows the empirical distributions of the first mention of real and fake presidential scandals in the Washington Post as a function of weeks to election. Although we do not observe scandals released after the election ($$t=T+1$$ in our model) and cannot pinpoint the date at which the campaign begins ($$t=1$$ in our model), Corollary 1 implies that fake scandals are released later than real scandals conditional on any given time interval. The left panel covers the whole presidential term; the right panel focuses on the election campaign period only, which we identify with the last 60 weeks before the election. 
Both figures suggest that fake scandals are released later than real scandals. Because of the small sample size (only 15 scandals), formal tests have low power. Nevertheless, using the Dardanoni and Forcina (1998) test for the likelihood ratio order (which implies first-order stochastic dominance), we almost reject the hypothesis that the two distributions are equal in favour of the alternative hypothesis that fake scandals are released later ($$p$$-value: $$0.114$$); we cannot reject the hypothesis that fake scandals are released later in favour of the unrestricted hypothesis at all standard statistical significance levels ($$p$$-value: $$0.834$$).29 Figure 3. US presidential scandals and weeks to election. Distribution of real and fake scandals. (a) whole term; (b) last 60 weeks only. Our Poisson model offers a novel perspective on the October surprise concentration of scandals towards the end of the presidential election campaign (Figure 1). In equilibrium, real scandals are released as they are discovered by the media. Unless real scandals are more likely to be discovered towards the end of the first term of a president, we should not expect their release to be concentrated towards the end of the campaign (see $$p_{G}\left(t\right)$$ in Figure 2a). Instead, fake scandals are strategically delayed, and so they should be concentrated towards the end of the first term of the president and just before the election (see $$p_{B}\left(t\right)$$ in Figure 2a). In other words, our model predicts that the October surprise effect is driven by fake scandals. In contrast, were the October surprise effect driven by the desire to release scandals when they are most salient, the timing of release of real and fake scandals would be similar.
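This prediction can be illustrated with a short simulation at the parameter values of Figure 2 ($$\alpha=1$$, $$\lambda=2$$, $$\pi=.5$$, $$T=1$$). The density $$p_{B}$$ is not written out in the text; the sketch below recovers it from Bayes’ rule, $$\mu\left(t\right)=\pi p_{G}\left(t\right)/\left(\pi p_{G}\left(t\right)+\left(1-\pi\right)p_{B}\left(t\right)\right)$$, a reconstruction for illustration rather than a formula quoted from the paper:

```python
import math

alpha, lam, pi, T = 1.0, 2.0, 0.5, 1.0   # parameters of Figure 2

# mu(T) and t_bar from Proposition 2 (pi exceeds pi_bar at these values).
mu_T = ((alpha + lam) * (1 - pi) / (lam * pi) * math.exp(alpha * T) + 1) ** (-lam / (alpha + lam))
t_bar = T - math.log(1.0 / mu_T) / lam

def mu(t):
    return 1.0 if t < t_bar else mu_T / (1.0 - mu_T * (math.exp(lam * (T - t)) - 1.0))

def p_G(t):
    return alpha * math.exp(-alpha * t)   # good Sender pulls as the arm arrives

def p_B(t):
    # Recovered from Bayes' rule mu(t) = pi p_G / (pi p_G + (1 - pi) p_B).
    if t < t_bar:
        return 0.0
    return pi / (1 - pi) * p_G(t) * (1 - mu(t)) / mu(t)

def integrate(f, a, b, n=100_000):
    # Simple trapezoidal rule.
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

# Total probability that bad Sender pulls the arm matches equation (7).
P_B_total = integrate(p_B, 0.0, T)
P_B_eq7 = 1.0 - pi / (1 - pi) * (1 - mu_T) / mu_T * math.exp(-alpha * T)
assert abs(P_B_total - P_B_eq7) < 1e-3

# "October surprise": most of the fake-scandal mass falls in the last
# quarter of the term, while real scandals are spread out.
late_fake = integrate(p_B, 0.75 * T, T) / P_B_total
late_real = integrate(p_G, 0.75 * T, T) / integrate(p_G, 0.0, T)
assert late_fake > 0.5 > late_real
```

At these parameter values, over 60% of bad Sender’s pulling probability is concentrated in the last quarter of the term, while less than 20% of good Sender’s is, which is the asymmetry the text attributes to fake scandals.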
Figure 4 is a replica of Figure 1, but with scandals coded as real and fake. Fake scandals are concentrated close to the election, with a majority of them released in the last quarter before the election. In contrast, real scandals appear to be scattered across the entire presidential term. Figure 4. Distribution of real and fake scandals. Our Poisson model also predicts how different parameters affect the release of a U.S. presidential scandal. We now illustrate how Nyhan’s (2015) empirical findings may be interpreted using our model. Nyhan (2015) finds that scandals are more likely to appear when the president’s opposition approval rate is low. In our model, the approval rate is most naturally captured by the prior belief $$1-\pi$$ (the belief that the president is fit to run the country). In our Poisson model, a higher $$\pi$$ has a direct and an indirect effect on the probability of release of a scandal. On the one hand, a higher $$\pi$$ means that the president is more likely to be involved in a real scandal, thus directly increasing the probability that such a scandal is released. On the other hand, the opposition optimally resorts to fake scandals more when the president is so popular that only a scandal could prevent the president’s reelection. Therefore, a higher $$\pi$$ reduces the incentive for the opposition to release fake scandals, indirectly reducing the probability that a scandal is released. We can then interpret Nyhan’s finding as suggesting that the direct effect on average dominates the indirect effect.
But the president’s opposition approval rate also measures opposition voters’ hostility towards the president, which might be captured by the rate $$\lambda$$ at which voters learn that a scandal is fake.30 Indeed, Nyhan conjectures that when opposition voters are more hostile to the president, they are "supportive of scandal allegations against the president and less sensitive to the evidentiary basis for these claims [and] opposition elites will be more likely to pursue scandal allegations" (p. 6). Consistently, in our Poisson model, when voters take more time to tell real and fake scandals apart, the opposition optimally resorts to fake scandals. Nyhan (2015) also finds that fewer scandals involving the president are released when the news agenda is more congested. Such media congestion may have the following two effects. First, when the news agenda is congested, the opposition media has less time to devote to investigating the president. In our Poisson model, this is captured by a lower arrival rate $$\alpha$$, which in turn reduces the probability that a scandal is released. Second, when the news agenda is congested, public scrutiny of the scandal is slower as the attention of media, politicians, and voters is captured by other events. In our Poisson model, this is captured by a lower breakdown rate $$\lambda$$, which in turn increases the probability that a scandal is released. We can interpret Nyhan’s finding as suggesting that the media congestion effect through the arrival rate $$\alpha$$ dominates the effect through the breakdown rate $$\lambda$$.31 5.2. Initial public offerings We now apply our model to the timing of IPOs. Sender is a firm that needs liquidity in a particular time frame, and this time frame is private information of the firm. The need for liquidity could arise from the desire to grow the firm, expand into new products or markets, or because of operating expenses outstripping revenues.
It could also arise because of investors having so-called “drag-along rights”, where they can force founders and other shareholders to vote in favour of a liquidity event. When announcing the IPO, firms have private information regarding their prospective long-run performance. Good firms expect their business to out-perform the market’s prior expectation; bad firms expect their business to under-perform the market’s prior expectation. After a firm announces an IPO, the market scrutinizes the firm’s prospectus, and learns about the firm’s prospective performance. The initial trade closing price of the stock is determined by the market’s posterior belief at the initial trade date. Therefore, after the initial trade date, as the firms’ potential is gradually revealed to the market, good firms’ stocks out-perform bad firms’ stocks. Since the true time frame is private information, the firm can “pretend” to need liquidity faster than it actually does, and it has significant control over the time gap between the announcement of the IPO and the initial trade date. A shorter time gap decreases the amount of scrutiny the firm undergoes before going public, but also reduces credibility. Therefore, our model predicts that bad firms should choose a shorter time gap than good firms.32 We explore this prediction using data on U.S. IPOs from 1983 to 2016. For each IPO, we record the time gap and calculate the cumulative return of the stock, starting from the initial trade date. We measure the stock’s performance as its return relative to the market return over the same period. Following Loughran and Ritter (1995), we evaluate IPOs’ long-run performance $$y\in\left\{ 3,5\right\}$$ years after the initial trade date. For each value of $$y$$, we code as good (bad) those IPOs that performed above (below) market.33 Figure 5 shows the empirical distributions of time gap for good and bad IPOs evaluated at 3 and 5 years after the initial trade date. 
Both figures suggest that bad firms choose a shorter time gap in the first-order stochastic dominance sense, with the effect being more clearly visible after 5 years. This pattern is consistent with our idea that firms’ private information is only gradually (and slowly) revealed to the market once the period of intense scrutiny of the IPO ends.

Figure 5. US IPOs and time gap. Distributions for good and bad IPOs. (a) 3 years; (b) 5 years.

Our main prediction in Corollary 1 is that the distribution of time gap for good IPOs dominates the distribution of time gap for bad IPOs in the likelihood ratio order.34 We evaluate this prediction using an approach developed by Dardanoni and Forcina (1998). This approach tests (1) the hypothesis $$H_{0}$$ that the distributions are identical against the alternative $$H_{1}$$ that the distributions are ordered in the likelihood ratio order; as well as (2) the hypothesis $$H_{1}$$ against an unrestricted alternative $$H_{2}$$. The hypothesis of interest $$H_{1}$$ is accepted if the first test rejects $$H_{0}$$ and the second test fails to reject $$H_{1}$$. Following Roosen and Hennessy (2004), we partition the variable time gap into $$k$$ intervals that are equiprobable according to the empirical distribution of time gap. We report in Table 1 the $$p$$-values of the two statistics for the case of $$k=7$$.

Table 1. Dardanoni and Forcina test for likelihood ratio order ($$p$$-values)

                  3 years   5 years
H0 versus H1      0.001     0.000
H1 versus H2      0.000     0.361
Obs.              529       403

For both 3 and 5 years performance, we reject the hypothesis $$H_{0}$$ in favour of $$H_{1}$$ at the 1% significance level. Furthermore, for 5 years performance, we cannot reject the hypothesis $$H_{1}$$ in favour of $$H_{2}$$ at all standard significance levels. In Online Appendix B we give some further details about our data and the test, and we explore how the results of the test may change under alternative specifications.

6. Concluding Remarks

We have analysed a model in which the strategic timing of information release is driven by the trade-off between credibility and scrutiny. The analysis yields novel predictions about the dynamics of information release. We also offered supporting evidence for these predictions using data on the timing of U.S. presidential scandals and the announcement of IPOs. Our model can also be used to deliver normative implications for the design of a variety of institutions. In the context of election campaigns, our results could be employed to evaluate laws that limit the period in which candidates can announce new policies in their platforms or media can cover candidates. For example, more than a third of the world’s countries mandate a blackout period before elections: a ban on political campaigns or, in some cases, on any mention of a candidate’s name, for one or more days immediately preceding elections.35

The framework we have developed has further potential applications. For instance, the relationship between a firm’s management team and its board of directors often exhibits the core features of our model: management has private information and potentially different preferences than the board; the board’s view about a project or investment determines whether it is undertaken; and management can provide more or less time to the board in evaluating the project or investment.
The comparative statics of our model may speak to how this aspect of the management–board relationship may vary across industries and countries. Similarly, in various legal settings an interested party with private information may come forward sooner or later, notwithstanding an essentially fixed deadline for the legal decision-maker (due to institutional or resource constraints). A natural example is witnesses in a criminal investigation, but the same issues often arise in civil matters or even parliamentary inquiries. In each of these applications, the credibility-scrutiny trade-off plays an important role, and we hope our model, characterization of equilibrium, and comparative statics will serve as a useful framework for studying them in the future.

APPENDIX

A. Statistical Properties

Proof of Lemma 1. Follows from Lemma 1$$'$$. ∥

Proof of Lemma 1$$'$$. Part 1. By Blackwell (1953), Assumption 1 with $$\tau^{\prime}=T+1$$ implies that pulling the arm at $$\tau$$ is the same as releasing an informative signal $$y$$. By Bayes’s rule, posterior $$s$$ is given by:   $s=\frac{mq\left(y\mid G\right)}{mq\left(y\mid G\right)+\left(1-m\right)q\left(y\mid B\right)},$ where $$q\left(y\mid\theta\right)$$ is the density of $$y$$ given $$\theta$$. (If $$L$$ is discrete, then $$q\left(y\mid\theta\right)$$ is the discrete density of $$y$$ given $$\theta$$.) Therefore,   $$\frac{q\left(y\mid G\right)}{q\left(y\mid B\right)}=\frac{1-m}{m}\frac{s}{1-s}.$$ (A10) Writing (A10) for interim beliefs $$m$$ and $$m^{\prime}$$, we obtain the following relation for corresponding posterior beliefs $$s$$ and $$s^{\prime}$$:   $\frac{1-m^{\prime}}{m^{\prime}}\frac{s^{\prime}}{1-s^{\prime}}=\frac{1-m}{m}\frac{s}{1-s},$ which implies that   $$s^{\prime}=\frac{\frac{m^{\prime}s}{m}}{\frac{m^{\prime}s}{m}+\frac{\left(1-m^{\prime}\right)\left(1-s\right)}{1-m}}.$$ (A11) Therefore, $$s^{\prime}>s$$ for $$m^{\prime}>m$$; so part 1 follows. Part 2.
By Blackwell (1953), Assumption 1 implies that pulling the arm at $$\tau$$ is the same as pulling the arm at $$\tau^{\prime}$$ and then releasing an additional informative signal $$y$$ with conditional density $$q\left(y\mid\theta\right)$$. Part 2 holds because for any strictly increasing concave $$v_{B}$$, we have   \begin{eqnarray*} \mathbb{E}\left[v_{B}\left(s\right)\mid\tau,m,B\right] & = & \mathbb{E}\left[v_{B}\left(\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\right)\mid\tau^{\prime},m,B\right]\\ & = & \mathbb{E}\left[\mathbb{E}\left[v_{B}\left(\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\right)\mid\tau^{\prime},s,B\right]\mid\tau^{\prime},m,B\right]\\ & \leq & \mathbb{E}\left[v_{B}\left(\mathbb{E}\left[\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,B\right]\right)\mid\tau^{\prime},m,B\right]\\ & < & \mathbb{E}\left[v_{B}\left(\frac{s\mathbb{E} \left[\displaystyle\frac{q\left(y\mid G\right)}{q\left(y\mid B\right)}\mid\tau^{\prime},s,B\right]}{s\mathbb{E}\left[\displaystyle\frac{q\left(y\mid G\right)}{q\left(y\mid B\right)}\mid\tau^{\prime},s,B\right]+\left(1-s\right)}\right)\mid\tau^{\prime},m,B\right]\\ & = & \mathbb{E}\left[v_{B}\left(s\right)\mid\tau^{\prime},m,B\right], \end{eqnarray*} where the first line holds by Bayes’s rule, the second by the law of iterated expectations, the third by Jensen’s inequality applied to concave $$v_{B}$$, the fourth by strict monotonicity of $$v_{B}$$ and Jensen’s inequality applied to function $$sz/\left(sz+1-s\right)$$ which is strictly concave in $$z$$, and the last by definition of expectations. Part 3. 
Analogously to Part 2, Part 3 holds because for any strictly increasing convex $$v_{G}$$, we have   \begin{eqnarray*} \mathbb{E}\left[v_{G}\left(s\right)\mid\tau,m,G\right] & = & \mathbb{E}\left[v_{G}\left(\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\right)\mid\tau^{\prime},m,G\right]\\ & = & \mathbb{E}\left[\mathbb{E}\left[v_{G}\left(\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\right)\mid\tau^{\prime},s,G\right]\mid\tau^{\prime},m,G\right]\\ & \geq & \mathbb{E}\left[v_{G}\left(\mathbb{E}\left[\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,G\right]\right)\mid\tau^{\prime},m,G\right]\\ & > & \mathbb{E}\left[v_{G}\left(\frac{s}{s+\left(1-s\right)\mathbb{E}\left[\displaystyle\frac{q\left(y\mid B\right)}{q\left(y\mid G\right)}\mid\tau^{\prime},s,G\right]}\right)\mid\tau^{\prime},m,G\right]\\ & = & \mathbb{E}\left[v_{G}\left(s\right)\mid\tau^{\prime},m,G\right]. \end{eqnarray*} ∥ Proof of Lemma 1$$''$$. The proof of part 1 is the same as in Lemma 1$$'$$. As noted before, pulling the arm at $$\tau$$ is the same as pulling the arm at $$\tau^{\prime}$$ and then releasing an additional informative signal $$y$$ with conditional density $$q\left(y\mid\theta\right)$$. Let $$s_{\sigma}$$ be the probability that Sender is good given that Receiver’s posterior is $$s$$ and Sender’s signal is $$\sigma$$. 
By (A11),   $s_{\sigma}=\frac{\frac{\sigma s}{\pi}}{\frac{\sigma s}{\pi}+\frac{\left(1-\sigma\right)\left(1-s\right)}{1-\pi}}.$ We have,   \begin{eqnarray*} \mathbb{E}\left[s\mid\tau,m,\sigma\right] & = & \mathbb{E}\left[\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},m,\sigma\right]\\ & = & \mathbb{E}\left[\mathbb{E}\left[\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,\sigma\right]\mid\tau^{\prime},m,\sigma\right]\\ & = & \mathbb{E}\left[\left.\begin{array}{c} s_{\sigma}\mathbb{E}\left[\displaystyle\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,G\right]+\\ +\left(1-s_{\sigma}\right)\mathbb{E}\left[\displaystyle\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,B\right] \end{array}\right|\tau^{\prime},m,\sigma\right]\\ & = & \mathbb{E}\left[\left.\begin{array}{c} s_{\sigma}\mathbb{E}\left[\displaystyle\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,G\right]+\\ +\left(1-s_{\sigma}\right)\mathbb{E}\left[\displaystyle\frac{sq\left(y\mid B\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,G\right] \end{array}\right|\tau^{\prime},m,\sigma\right]\\ & = & \mathbb{E}\left[s\mathbb{E}\left[\displaystyle\frac{s_{\sigma}+\left(1-s_{\sigma}\right)\displaystyle\frac{q\left(y\mid B\right)} {q\left(y\mid G\right)}}{s+\left(1-s\right)\displaystyle\frac{q\left(y\mid B\right)}{q\left(y\mid G\right)}}\mid\tau^{\prime},s,G\right]\mid\tau^{\prime},m,\sigma\right]\\ & \gtrless & \mathbb{E}\left[s\mid\tau^{\prime},m,\sigma\right]\mbox{ whenever }s_{\sigma}\gtrless s, \end{eqnarray*} where the last line holds by Jensen’s inequality applied to function $$\left(s_{\sigma}+\left(1-s_{\sigma}\right)z\right)/\left(s+\left(1-s\right)z\right)$$, which is
strictly convex (concave) in $$z$$ whenever $$s_{\sigma}>s$$ ($$s_{\sigma}<s$$). Because $$\sigma_{G}>\pi>\sigma_{B}$$, we have $$s_{\sigma_{G}}>s>s_{\sigma_{B}}$$, so parts 2 and 3 follow. ∥

B. Benchmark Model

To facilitate our discussion in Section 3.2, we prove our results under more general assumptions than in our benchmark model. First, we assume that $$v_{G}\left(s\right)$$ is continuous, strictly increasing, and (weakly) convex, and $$v_{B}\left(s\right)$$ is continuous, strictly increasing, and (weakly) concave. Second, we assume that the arm arrives at a random time according to distributions $$F_{G}=F$$ for good Sender and $$F_{B}$$ for bad Sender, where $$F_{B}\left(t\right)\geq F_{G}\left(t\right)$$ for all $$t$$.

Proof of Lemma 2. Part 1. Suppose, on the contrary, that $$\mu\left(\tau\right)\leq\mu\left(\tau^{\prime}\right)$$. Then   \begin{eqnarray*} \int v_{B}\left(s\right)dH_{B}\left(s|\tau,\mu\left(\tau\right)\right) & < & \int v_{B}\left(s\right)dH_{B}\left(s|\tau^{\prime},\mu\left(\tau\right)\right)\\ & \leq & \int v_{B}\left(s\right)dH_{B}\left(s|\tau^{\prime},\mu\left(\tau^{\prime}\right)\right), \end{eqnarray*} where the first inequality holds by part 2 of Lemma 1$$'$$ and the second by part 1 of Lemma 1$$'$$. Therefore, bad Sender strictly prefers to pull the arm at $$\tau^{\prime}$$ than at $$\tau$$. A contradiction. Good Sender strictly prefers to pull the arm at $$\tau$$ because   \begin{eqnarray*} \int v_{G}\left(s\right)dH_{G}\left(s|\tau,\mu\left(\tau\right)\right) & > & \int v_{G}\left(s\right)dH_{G}\left(s|\tau^{\prime},\mu\left(\tau\right)\right)\\ & > & \int v_{G}\left(s\right)dH_{G}\left(s|\tau^{\prime},\mu\left(\tau^{\prime}\right)\right), \end{eqnarray*} where the first inequality holds by part 3 of Lemma 1$$'$$ and the second by $$\mu\left(\tau\right)>\mu\left(\tau^{\prime}\right)$$ and part 1 of Lemma 1$$'$$. Part 2.
By part 1 of this lemma applied to $$\tau$$ and $$\tau^{\prime}=T+1$$, it suffices to show that $$\mu\left(\tau\right),\mu\left(T+1\right)\in\left(0,1\right)$$ for all $$\tau$$ in the support of $$P_{B}$$. First, by Bayes’s rule, $$\tau$$ being in the support of $$P_{B}$$ implies $$\mu\left(\tau\right)<1$$. Second, by Bayes’s rule, $$F_{G}\left(T\right)<1$$ implies $$\mu\left(T+1\right)>0$$. Third, $$\mu\left(T+1\right)>0$$ implies $$\mu\left(\tau\right)>0$$, otherwise $$\tau$$ could not be in the support of $$P_{B}$$ because $$v_{B}\left(\mu\left(T+1\right)\right)>v_{B}\left(0\right)=\mathbb{E}\left[v_{B}\left(s\right)\mid\tau,0,B\right]$$. Finally, $$\mu\left(\tau\right)<1$$ implies $$\mu\left(T+1\right)<1$$, otherwise $$\tau$$ could not be in the support of $$P_{B}$$ because $$v_{B}\left(1\right)>\mathbb{E}\left[v_{B}\left(s\right)\mid\tau,\mu\left(\tau\right),B\right]$$. ∥ Proof of Lemma 3. By part 2 of Lemma 2, each $$t^{\prime}$$ in the support of $$P_{B}$$ is also in the support of $$P_{G}$$. We show that each $$t^{\prime}$$ in the support of $$P_{G}$$ is also in the support of $$P_{B}$$ by contradiction. Suppose that there exists $$t^{\prime}$$ in the support of $$P_{G}$$ but not in the support of $$P_{B}$$. Then, by Bayes’s rule $$\mu\left(t^{\prime}\right)=1$$; so bad Sender who receives the arm at $$t\leq t^{\prime}$$ gets the highest possible equilibrium payoff $$v_{B}\left(1\right)$$, because she can pull the arm at time $$t^{\prime}$$ and get payoff $$v_{B}\left(1\right)$$ (recall that, for all $$t$$, the support of $$H\left(.|t,\pi\right)$$ does not contain $$s=0$$ by part (ii) of Assumption (1)). Because bad Sender receives the arm at or before $$t^{\prime}$$ with a positive probability (recall that, for all $$t$$, $$F_{B}\left(t\right)\geq F_{G}\left(t\right)>0$$ by assumption), there exists time $$\tau$$ at which bad Sender pulls the arm with a positive probability and gets payoff $$v_{B}\left(1\right)$$. 
But then $$\mu\left(\tau\right)=1$$, contradicting that bad Sender pulls the arm at $$\tau$$ with a positive probability. It remains to show that $$P_{B}\left(\tau\right)<P_{G}\left(\tau\right)$$ for all $$\tau$$ such that $$P_{G}\left(\tau\right)>0$$. Suppose, on the contrary, that there exists $$\tau$$ such that $$P_{G}\left(\tau\right)>0$$ and $$P_{B}\left(\tau\right)\geq P_{G}\left(\tau\right)$$. Because $$P_{\theta}\left(\tau\right)=\sum_{t=1}^{\tau}\left(P_{\theta}\left(t\right)-P_{\theta}\left(t-1\right)\right)$$, there exists $$\tau^{\prime}\leq\tau$$ in the support of $$P_{B}$$ such that $$P_{B}\left(\tau^{\prime}\right)-P_{B}\left(\tau^{\prime}-1\right)\geq P_{G}\left(\tau^{\prime}\right)-P_{G}\left(\tau^{\prime}-1\right)$$. Similarly, because $$1-P_{\theta}\left(\tau\right)=\sum_{t=\tau+1}^{T+1}\left(P_{\theta}\left(t\right)-P_{\theta}\left(t-1\right)\right)$$ and $$1-P_{G}\left(\tau\right)>0$$ (recall that $$P_{G}\left(T\right)\leq F_{G}\left(T\right)<1$$), there exists $$\tau^{\prime\prime}>\tau$$ in the support of $$P_{G}$$ such that $$P_{G}\left(\tau^{\prime\prime}\right)-P_{G}\left(\tau^{\prime\prime}-1\right)\geq P_{B}\left(\tau^{\prime\prime}\right)-P_{B}\left(\tau^{\prime\prime}-1\right)$$.
By Bayes’s rule,   \begin{eqnarray*} \mu\left(\tau^{\prime}\right) & = & \frac{\pi\left(P_{G}\left(\tau^{\prime}\right)-P_{G}\left(\tau^{\prime}-1\right)\right)}{\pi\left(P_{G}\left(\tau^{\prime}\right)-P_{G}\left(\tau^{\prime}-1\right)\right)+\left(1-\pi\right)\left(P_{B}\left(\tau^{\prime}\right)-P_{B}\left(\tau^{\prime}-1\right)\right)}\leq\pi\\ & \leq & \frac{\pi\left(P_{G}\left(\tau^{\prime\prime}\right)-P_{G}\left(\tau^{\prime\prime}-1\right)\right)}{\pi\left(P_{G}\left(\tau^{\prime\prime}\right)-P_{G}\left(\tau^{\prime\prime}-1\right)\right)+\left(1-\pi\right)\left(P_{B}\left(\tau^{\prime\prime}\right)-P_{B}\left(\tau^{\prime\prime}-1\right)\right)}=\mu\left(\tau^{\prime\prime}\right)\text{.} \end{eqnarray*} Therefore, by Lemma 2, bad Sender strictly prefers to pull the arm at $$\tau^{\prime\prime}$$ than at $$\tau^{\prime}$$, contradicting that $$\tau^{\prime}$$ is in the support of $$P_{B}$$. ∥

Proof of Lemma 4. By Lemma 3, $$P_{G}$$ and $$P_{B}$$ have the same supports and therefore $$\mu\left(\tau\right)\in\left(0,1\right)$$. Let the support of $$P_{G}$$ be $$\left\{ \tau_{1},...,\tau_{n}\right\}$$. Notice that $$\tau_{n}=T+1$$ because $$P_{G}\left(T\right)\leq F_{G}\left(T\right)<1$$. Moreover, $$\tau_{n-1}$$ is in the support of $$P_{B}$$ and   $P_{B}\left(\tau_{n-1}\right)<P_{G}\left(\tau_{n-1}\right)=F_{G}\left(\tau_{n-1}\right)\leq F_{B}\left(\tau_{n-1}\right)\text{,}$ where the first inequality holds by Lemma 3, the equality by part 2 of Lemma 2, and the last inequality by assumption $$F_{B}\left(t\right)\geq F_{G}\left(t\right)$$. Therefore, bad Sender who receives the arm at $$\tau_{n-1}$$ must be indifferent between pulling the arm at $$\tau_{n-1}$$ or at $$\tau_{n}$$. Analogously, bad Sender who receives the arm at $$\tau_{n-k-1}$$ must be indifferent between pulling it at $$\tau_{n-k-1}$$ and at some $$\tau\in\left\{ \tau_{n-k},\ldots,\tau_{n}\right\}$$.
Thus, by mathematical induction on $$k$$, bad Sender is indifferent between pulling the arm at any $$\tau$$ in the support of $$P_{G}$$ and at $$T+1$$, which proves (2). By Bayes’s rule, for all $$\tau$$ in the support of $$P_{G}$$,   $$\frac{1-\pi}{\pi}\left(P_{B}\left(\tau\right)-P_{B}\left(\tau-1\right)\right)=\frac{1-\mu\left(\tau\right)}{\mu\left(\tau\right)}\left(P_{G}\left(\tau\right)-P_{G}\left(\tau-1\right)\right)\text{.}$$ (B12) Summing up over $$\tau$$ yields (3). ∥

Proof of Proposition 1. Part 1. We first show that, for each $$\mathcal{T}\subseteq\left\{ 1,\dots,T+1\right\}$$ with $$T+1\in\mathcal{T}$$ and each $$\tau\in\mathcal{T}$$, there exist unique $$P_{G}\left(\tau\right)$$, $$P_{B}\left(\tau\right)$$, and $$\mu\left(\tau\right)$$ given by part 1 of this proposition. It suffices to show that there exists a unique $$\left\{ \mu\left(\tau\right)\right\} _{\tau\in\mathcal{T}}\in\left[0,1\right]^{\left|\mathcal{T}\right|}$$ that solves (2) and (3). Using (A11) with $$m=\pi$$ and $$m^{\prime}=\mu\left(\tau\right)$$, the left-hand side of (2) can be rewritten as   $V_{B}\left(\mu\left(\tau\right)\right)\equiv\int v_{B}\left(\frac{\frac{\mu\left(\tau\right)s}{\pi}}{\frac{\mu\left(\tau\right)s}{\pi}+\frac{\left(1-\mu\left(\tau\right)\right)\left(1-s\right)}{1-\pi}}\right)dH_{B}\left(s|\tau,\pi\right).$ Because $$v_{B}$$ is continuous and strictly increasing, $$V_{B}$$ is also continuous and strictly increasing. Furthermore, $$V_{B}\left(0\right)=v_{B}\left(0\right)$$ and $$V_{B}\left(1\right)=v_{B}\left(1\right)$$. Therefore, for all $$\mu\left(T+1\right)\in\left[0,1\right]$$ and all $$\tau\in\mathcal{T}$$, there exists a unique $$\mu\left(\tau\right)$$ that solves (2). Moreover, for all $$\tau\in\mathcal{T}$$, $$\mu\left(\tau\right)$$ is continuous and strictly increasing in $$\mu\left(T+1\right)$$, is equal to $$0$$ if $$\mu\left(T+1\right)=0$$, and is equal to $$1$$ if $$\mu\left(T+1\right)=1$$.
The left-hand side of (3) is continuous and strictly decreasing in $$\mu\left(\tau\right)$$ for all $$\tau\in\mathcal{T}$$. Moreover, the left-hand side of (3) is $$0$$ when $$\mu\left(\tau\right)=1$$ for all $$\tau\in\mathcal{T}$$, and it approaches infinity when $$\mu\left(\tau\right)$$ approaches $$0$$ for all $$\tau\in\mathcal{T}$$. Therefore, substituting each $$\mu\left(\tau\right)$$ in (3) with a function of $$\mu\left(T+1\right)$$ obtained from (2), we conclude that there exists a unique $$\mu\left(T+1\right)$$ that solves (3).

We now construct an equilibrium for each $$\mathcal{T}\subseteq\left\{ 1,\dots,T+1\right\}$$ with $$T+1\in\mathcal{T}$$. Let $$P_{G}\left(\tau\right)$$ and $$P_{B}\left(\tau\right)$$ be given by part 1 of this proposition for all $$\tau\in\left\{ 1,\dots,T+1\right\}$$. Let $$\mu\left(\tau\right)$$ be given by part 1 of this proposition for all $$\tau\in\mathcal{T}$$ and $$\mu\left(\tau\right)=0$$ otherwise. Notice that, so constructed, $$P_{G}$$, $$P_{B}$$, and $$\mu$$ exist and are unique. $$P_{G}$$ is clearly a distribution. $$P_{B}$$ is also a distribution, because $$P_{B}\left(\tau\right)$$ increases with $$\tau$$ by (4) and $$P_{B}\left(T+1\right)=1$$ by (3) and (4). Furthermore, $$\mu$$ is a consistent belief because (B12) holds for all $$\tau\in\mathcal{T}$$ by (4). It remains to show that there exists an optimal strategy for Sender such that good and bad Sender’s distributions of pulling time are given by $$P_{G}$$ and $$P_{B}$$. First, both good and bad Sender strictly prefer not to pull the arm at any time $$\tau\notin\mathcal{T}$$, because, by part (ii) of Assumption 1, pulling the arm at $$\tau$$ gives Sender a payoff of $$v_{\theta}\left(0\right)<v_{\theta}\left(\mu\left(T+1\right)\right)$$. Second, by (2), pulling the arm at any time $$\tau\in\mathcal{T}$$ gives bad Sender the same expected payoff $$v_{B}\left(\mu\left(T+1\right)\right)$$.
Finally, by part 1 of Lemma 2, good Sender strictly prefers to pull the arm at time $$\tau\in\mathcal{T}$$ than at any other time $$\tau^{\prime}>\tau$$. Conversely, in any equilibrium, $$P_{G}$$ and $$P_{B}$$ have the same supports by Lemma 3. Moreover, for all $$\tau$$ in the support of $$P_{G}$$, $$P_{G}\left(\tau\right)=F\left(\tau\right)$$ by part 2 of Lemma 2, $$P_{B}\left(\tau\right)$$ satisfies (4) by (B12), $$\mu\left(\tau\right)\in\left(0,1\right)$$ by Lemma 3, and $$\mu\left(\tau\right)$$ satisfies (3) and (4) by Lemma 4.

Part 2. First, we notice that, by part 1 of Proposition 1, there exists an equilibrium with $$\mathcal{T}=\left\{ 1,\dots,T+1\right\}$$. In this equilibrium, there are no out-of-equilibrium events and therefore it is divine. Adapting Cho and Kreps (1987)’s definition to our setting (see, e.g., Maskin and Tirole, 1992), we say that an equilibrium is divine if $$\mu\left(\tau\right)=1$$ for any $$\tau\notin {\rm supp}\left(P_{G}\right)$$ at which condition D1 holds. D1 holds at $$\tau$$ if for all $$m\in\left[0,1\right]$$ that satisfy   $$\int v_{B}\left(s\right)dH_{B}\left(s|\tau,m\right)\geq\max_{t\in {\rm supp}\left(P_{G}\right),t>\tau}\int v_{B}\left(s\right)dH_{B}\left(s|t,\mu\left(t\right)\right)$$ (B13) the following inequality holds:   $$\int v_{G}\left(s\right)dH_{G}\left(s|\tau,m\right)>\max_{t\in {\rm supp}\left(P_{G}\right),t>\tau}\int v_{G}\left(s\right)dH_{G}\left(s|t,\mu\left(t\right)\right).$$ (B14) Suppose, on the contrary, that there exists a divine equilibrium in which $$P_{G}\left(\tau\right)<F_{G}\left(\tau\right)$$ for some $$\tau\in\left\{ 1,\ldots,T\right\}$$. By part 1 of Proposition 1, $$\tau\notin {\rm supp}\left(P_{G}\right)$$. Let $$t^{*}$$ denote the $$t$$ that maximizes the right-hand side of (B14). By Lemma 3, $$\mu\left(t^{*}\right)<1$$, and, by Lemma 4, $$t^{*}$$ maximizes the right-hand side of (B13). Therefore, by part 1 of Lemma 2, D1 holds at $$\tau$$; so $$\mu\left(\tau\right)=1$$.
But then $$\tau\notin {\rm supp}\left(P_{G}\right)$$ cannot hold, because   $\int v_{G}\left(s\right)dH_{G}\left(s|\tau,1\right)=v_{G}\left(1\right)>\max_{t\in {\rm supp}\left(P_{G}\right)}\int v_{G}\left(s\right)dH_{G}\left(s|t,\mu\left(t\right)\right).$ ∥ Proof of Corollary 2. By Lemma 4 and part 2 of Proposition 1, bad Sender is indifferent between pulling the arm at any time before the deadline and not pulling the arm at all. Then, by Lemma 2, $$\mu\left(\tau-1\right)>\mu\left(\tau\right)$$ for all $$\tau$$. Using (4) with $$P_{G}=F_{G}$$, we have that for all $$\tau<T$$,   \begin{eqnarray} \frac{1-\tilde{\mu}\left(\tau\right)}{\tilde{\mu}\left(\tau\right)} & = & \frac{1-\pi}{\pi}\frac{1-P_{B}\left(\tau\right)}{1-P_{G}\left(\tau\right)}\nonumber \\ & = & \frac{\sum_{t=\tau+1}^{T+1}\frac{1-\mu\left(t\right)}{\mu\left(t\right)}\left(F_{G}\left(t\right)-F_{G}\left(t-1\right)\right)}{1-F_{G}\left(\tau\right)}\\ & = & \mathbb{E}_{F}\left[\frac{1-\mu\left(t\right)}{\mu\left(t\right)}\mid t\ge\tau+1\right].\nonumber \end{eqnarray} (B15) Since $$\mu\left(\tau-1\right)>\mu\left(\tau\right)$$ for all $$\tau$$, (B15) implies that $$\tilde{\mu}\left(\tau-1\right)>\tilde{\mu}\left(\tau\right)$$ and $$\mu\left(\tau\right)>\tilde{\mu}\left(\tau-1\right)$$ for all $$\tau$$. ∥ Proof of Corollary 1. Using (4) with $$P_{G}=F_{G}$$, we have   $\frac{1-\mu\left(\tau\right)}{\mu\left(\tau\right)}=\frac{1-\pi}{\pi}\frac{P_{B}\left(\tau\right)-P_{B}\left(\tau-1\right)}{P_{G}\left(\tau\right)-P_{G}\left(\tau-1\right)}.$ To complete the proof, notice that, by Corollary 2, $$\mu\left(\tau\right)>\mu\left(\tau^{\prime}\right)$$ whenever $$\tau<\tau^{\prime}$$. ∥ Proof of Lemma 2$$'$$. 
Given Receiver’s interim belief $$m$$ and pulling times $$\tau$$ and $$\tau^{\prime}$$, we write $$H$$ and $$H^{\prime}$$ for distributions $$H\left(.\mid\tau,m\right)$$ and $$H\left(.\mid\tau^{\prime},m\right)$$ of Receiver’s posterior belief $$s$$ from Receiver’s perspective, and we write $$H_{\theta}$$ and $$H_{\theta}^{\prime}$$ for distributions $$H_{\theta}\left(.\mid\tau,m\right)$$ and $$H_{\theta}\left(.\mid\tau^{\prime},m\right)$$ of Receiver’s posterior belief $$s$$ from type-$$\theta$$ Sender’s perspective. For any interim belief $$m\in\left(0,1\right)$$ and pulling times $$\tau,\tau^{\prime}$$, by Bayes’s rule, we have   \begin{eqnarray*} dH\left(s\right) & = & mdH_{G}\left(s\right)+\left(1-m\right)dH_{B}\left(s\right),\\ s & = & \frac{mdH_{G}\left(s\right)}{mdH_{G}\left(s\right)+\left(1-m\right)dH_{B}\left(s\right)}, \end{eqnarray*} so that $$dH_{G}\left(s\right)=\frac{s}{m}dH\left(s\right)$$ and $$dH_{B}\left(s\right)=\frac{1-s}{1-m}dH\left(s\right)$$. Likewise, $$dH_{G}^{\prime}\left(s\right)=\frac{s}{m}dH^{\prime}\left(s\right)$$ and $$dH_{B}^{\prime}\left(s\right)=\frac{1-s}{1-m}dH^{\prime}\left(s\right)$$. For any pulling time $$\tau$$ and interim beliefs $$m,m^{\prime}\in\left(0,1\right)$$, each posterior belief $$s$$ under interim belief $$m$$ transforms into the posterior belief $$s^{\prime}$$ given by (A11) under interim belief $$m^{\prime}$$. Let $$m=\mu\left(\tau\right)$$ and $$m^{\prime}=\mu\left(\tau^{\prime}\right)$$. 
Bad Sender weakly prefers to pull the arm at $$\tau$$ than at $$\tau^{\prime}$$ if and only if   $\int_{0}^{1}v_{B}\left(s\right)dH_{B}\left(s\right)\geq\int_{0}^{1}v_{B}\left(\frac{\frac{m^{\prime}s}{m}}{\frac{m^{\prime}s}{m}+\frac{\left(1-m^{\prime}\right)\left(1-s\right)}{1-m}}\right)dH_{B}^{\prime}\left(s\right),$ which is equivalent to   $$\int_{0}^{1}v_{B}\left(s\right)\left(1-s\right)dH\left(s\right)\geq\int_{0}^{1}v_{B}\left(\frac{\frac{m^{\prime}s}{m}}{\frac{m^{\prime}s}{m}+\frac{\left(1-m^{\prime}\right)\left(1-s\right)}{1-m}}\right)\left(1-s\right)dH^{\prime}\left(s\right).$$ (B16) Similarly, good Sender strictly prefers to pull the arm at $$\tau$$ than at $$\tau^{\prime}$$ if and only if   $$\int_{0}^{1}v_{G}\left(s\right)sdH\left(s\right)>\int_{0}^{1}v_{G}\left(\frac{\frac{m^{\prime}s}{m}}{\frac{m^{\prime}s}{m}+\frac{\left(1-m^{\prime}\right)\left(1-s\right)}{1-m}}\right)sdH^{\prime}\left(s\right).$$ (B17) Because $$s$$ and $$s^{\prime}$$ are in $$\left(0,1\right)$$, and $$s^{\prime}$$ is strictly increasing in $$s$$, for any $$r\in\left(0,1\right)$$, we have that $$s^{\prime}>r$$ if and only if $$s>r^{\prime}$$ for some $$r^{\prime}\in\left(0,1\right)$$, which depends on $$m$$ and $$m^{\prime}$$. Thus, for $$v_{\theta}$$ given by (5), the inequalities (B16) and (B17) can be rewritten as   \begin{eqnarray} \int_{r}^{1}\left(1-s\right)dH\left(s\right) & \geq & \int_{r^{\prime}}^{1}\left(1-s\right)dH^{\prime}\left(s\right),\\ \end{eqnarray} (B18)  \begin{eqnarray} \int_{r}^{1}sdH\left(s\right) & > & \int_{r^{\prime}}^{1}sdH^{\prime}\left(s\right), \end{eqnarray} (B19) where $$\int_{r}^{1}$$ and $$\int_{r^{\prime}}^{1}$$ stand for the Lebesgue integrals over the sets $$\left(r,1\right]$$ and $$\left(r^{\prime},1\right]$$. Notice also that we are using a selection from Receiver’s best response correspondence for which $$v\left(r\right)=0$$. The proof goes through for other selections, after adding appropriate terms on both sides of (B18) and (B19).
Integrating by parts, we can rewrite (B18) and (B19) as   \begin{eqnarray} -\left(1-r\right)H\left(r\right)+\int_{r}^{1}H\left(s\right)ds & \geq & -\left(1-r^{\prime}\right)H^{\prime}\left(r^{\prime}\right)+\int_{r^{\prime}}^{1}H^{\prime}\left(s\right)ds,\\ \end{eqnarray} (B20)  \begin{eqnarray} -rH\left(r\right)-\int_{r}^{1}H\left(s\right)ds & > & -r^{\prime}H^{\prime}\left(r^{\prime}\right)-\int_{r^{\prime}}^{1}H^{\prime}\left(s\right)ds. \end{eqnarray} (B21) Suppose that (B20) and (6) hold and let us show that (B21) holds. We have $$H^{\prime}\left(r^{\prime}\right)>H\left(r\right)$$, because   \begin{eqnarray*} \left(1-r\right)\left(H^{\prime}\left(r^{\prime}\right)-H\left(r\right)\right) & = & \left(1-r^{\prime}\right)H^{\prime}\left(r^{\prime}\right)+\left(r^{\prime}-r\right)H^{\prime}\left(r^{\prime}\right)-\left(1-r\right)H\left(r\right)\\ & \geq & \left(1-r^{\prime}\right)H^{\prime}\left(r^{\prime}\right)+\int_{r}^{r^{\prime}}H^{\prime}\left(s\right)ds-\left(1-r\right)H\left(r\right)\\ & \geq & \int_{r}^{1}H^{\prime}\left(s\right)ds-\int_{r}^{1}H\left(s\right)ds>0, \end{eqnarray*} where the equality holds by rearrangement, the first inequality holds by monotonicity of $$H^{\prime}$$, the second by (B20), and the last by (6). The inequality (B21) then holds because   \begin{eqnarray*} r^{\prime}H^{\prime}\left(r^{\prime}\right)-rH\left(r\right)+\int_{r^{\prime}}^{1}H^{\prime}\left(s\right)ds-\int_{r}^{1}H\left(s\right)ds & > & r^{\prime}H^{\prime}\left(r^{\prime}\right)-rH\left(r\right)-\int_{r}^{r^{\prime}}H^{\prime}\left(s\right)ds\\ & \geq & r^{\prime}H^{\prime}\left(r^{\prime}\right)-rH\left(r\right)-H^{\prime}\left(r^{\prime}\right)\left(r^{\prime}-r\right)\\ & = & r\left(H^{\prime}\left(r^{\prime}\right)-H\left(r\right)\right)>0, \end{eqnarray*} where the first inequality holds by (6), the second by monotonicity of $$H^{\prime}$$, and the last by the established inequality $$H^{\prime}\left(r^{\prime}\right)>H\left(r\right)$$. ∥

C. Poisson Model

Proof of Proposition 6.
$$\underline{{\text{For}}\,\pi:}$$ Differentiating $$\mu\left(T\right)$$ in Proposition 2 with respect to $$\pi$$, we have   $\frac{d\mu\left(T\right)}{d\pi}=\left\{ \begin{array}{l} \frac{1}{\pi^{2}}\mu\left(T\right)^{2}\text{ if }\pi<\bar{\pi},\\ \frac{e^{\alpha T}}{\pi^{2}}\mu\left(T\right)^{2+\frac{\alpha}{\lambda}}\text{ otherwise,} \end{array}\right\} >0.$ $$\underline{{\text{For}}\,\lambda:}$$ First, when $$\pi<\bar{\pi}$$, $$\frac{d\mu\left(T\right)}{d\lambda}<0$$ since $$e^{-\left(\alpha+\lambda\right)T}>1-\left(\alpha+\lambda\right)T$$ for all $$\alpha,\lambda,T>0$$. Second, when $$\pi>\bar{\pi},$$  \begin{eqnarray*} \frac{d\mu\left(T\right)}{d\lambda} & = & \frac{d}{d\lambda}e^{-\frac{\lambda}{\alpha+\lambda}\ln\left(1+\phi\left(\lambda\right)\right)},\\ \phi & \equiv & \frac{\alpha+\lambda}{\lambda}\frac{1-\pi}{\pi}e^{\alpha T}>0. \end{eqnarray*} Thus, $$\frac{d\mu\left(T\right)}{d\lambda}<0$$, because   $\frac{d}{d\lambda}\frac{\lambda}{\alpha+\lambda}\ln\left(1+\phi\right)=\frac{\alpha}{\alpha+\lambda}\left[\frac{\ln\left(1+\phi\right)}{\alpha+\lambda}-\frac{1}{\lambda}\frac{1-\pi}{\pi}\frac{1}{1+\phi}\right]>0,$ where the inequality follows from $$\left(1+\phi\right)\ln\left(1+\phi\right)>\phi$$. $$\underline{{\text{For}}\,\alpha:}$$ First, when $$\pi<\bar{\pi}$$,   \begin{eqnarray*} \frac{d\mu\left(T\right)}{d\alpha} & = & -\left(\mu\left(T\right)\right)^{2}\frac{\chi}{\left(\alpha+\lambda\right)^{2}}<0,\\ \chi & \equiv & \lambda\left\{ e^{\lambda T}-\left[1+\left(\alpha+\lambda\right)T\right]e^{-\alpha T}\right\} >0, \end{eqnarray*} where the last passage follows from $$e^{\left(\alpha+\lambda\right)T}>1+\left(\alpha+\lambda\right)T$$ for all $$\alpha,\lambda,T>0$$.
Second, when $$\pi\geq\bar{\pi}$$, by log-differentiation,   $\frac{d\mu\left(T\right)}{d\alpha}=\mu\left(T\right)\frac{\lambda}{\alpha+\lambda}\left[\frac{\ln\left(1+\phi\right)}{\alpha+\lambda}-\frac{1}{1+\phi}\frac{d\phi}{d\alpha}\right].$ Thus,   $$\frac{d\mu\left(T\right)}{d\alpha}<0\iff\frac{\left(1+\phi\right)\ln\left(1+\phi\right)}{\phi}<1+T\left(\alpha+\lambda\right).$$ (C22) For $$\pi=\bar{\pi}$$, $$\phi=e^{\left(\alpha+\lambda\right)T}-1>0$$; so   $\frac{d\mu\left(T\right)}{d\alpha}<0\iff\ln\left(1+\phi\right)<\phi,$ which is true for all $$\phi>0$$. Then $$\frac{d\mu\left(T\right)}{d\alpha}<0$$ for $$\pi\geq\bar{\pi}$$ follows because $$\phi<e^{\left(\alpha+\lambda\right)T}-1$$ for $$\pi>\bar{\pi}$$ and the left hand side of (C22) increases with $$\phi$$ for $$\phi>0$$. ∥ Proof of Proposition 3. Part 1. Recall that (1) Sender’s payoff equals Receiver’s posterior belief about Sender at $$t=T$$ and (2) in equilibrium, bad Sender (weakly) prefers not to pull the arm at all than pulling it at any time $$t\in\left[0,T\right]$$. Therefore, bad Sender’s expected payoff equals Receiver’s belief about Sender at $$t=T$$ if the arm has not been pulled:   $$\mathbb{E}\left[v_{B}\right]=\mu\left(T\right).$$ (C23) Part $$1$$ then follows from Proposition 6. Part 2. By the law of iterated expectations,   \begin{eqnarray} \mathbb{E}\left[s\right] & = & \pi\mathbb{E}\left[v_{G}\right]+\left(1-\pi\right)\mathbb{E}\left[v_{B}\right]=\pi\nonumber \\ \Rightarrow\mathbb{E}\left[v_{G}\right] & = & 1-\frac{1-\pi}{\pi}\mu\left(T\right), \end{eqnarray} (C24) where $$s$$ is Receiver’s posterior belief about Sender at $$t=T$$ and we use (C23) in the last passage. Thus, good Sender’s expected payoff increases with $$\alpha$$ and $$\lambda$$ by Proposition 6. Finally, it is easy to see that $$\mathbb{E}\left[v_{G}\right]$$ increases in $$\pi$$ after substituting $$\mu\left(T\right)$$ in $$\mathbb{E}\left[v_{G}\right]$$. Part 3. 
We shall show that in the divine equilibrium   \begin{eqnarray} \mathbb{E}\left[u\right] & = & \frac{\left(1-\pi\right)\left(1-\mu\left(T\right)\right)}{2}. \end{eqnarray} (C25) Part 3 then follows from Proposition 6. Since $$\mathbb{E}\left[s\right]=\pi$$, by (1) and (C24), it is sufficient to prove that $$\mathbb{E}\left[s^{2}\right]=\pi\mathbb{E}\left[v_{G}\right]$$. We divide the proof into two cases: $$\pi\leq\bar{\pi}$$ and $$\pi>\bar{\pi}$$. If $$\pi\leq\bar{\pi},$$ Receiver’s expected payoff is given by the sum of four terms: (1) Sender is good and the arm does not arrive; (2) Sender is good and the arm arrives; (3) Sender is bad and she does not pull the arm; and (4) Sender is bad and she pulls the arm. Thus,   \begin{eqnarray*} \mathbb{E}\left[s^{2}\right] & = & \pi e^{-\alpha T}\left(\mu\left(T\right)\right)^{2}\\ & & +\pi\int_{0}^{T}\left(e^{\lambda\left(T-t\right)}\mu\left(T\right)\right)^{2}\alpha e^{-\alpha t}dt\\ & & +\left(1-\pi\right)\left(1-P_{B}\left(T\right)\right)\left(\mu\left(T\right)\right)^{2}\\ & & +\left(1-\pi\right)\int_{0}^{T}e^{-\lambda\left(T-t\right)}\left(e^{\lambda\left(T-t\right)}\mu\left(T\right)\right)^{2}\frac{\pi}{1-\pi}\left(\frac{1-\mu\left(t\right)}{\mu\left(t\right)}\right)\alpha e^{-\alpha t}dt. \end{eqnarray*} Solving all integrals and rearranging all common terms we get   \begin{eqnarray*} \mathbb{E}\left[s^{2}\right] & = & \pi\mathbb{E}\left[v_{G}\right]. \end{eqnarray*} If $$\pi>\bar{\pi}$$, Receiver’s expected payoff is given by the sum of five terms: (1) Sender is good and the arm does not arrive; (2) Sender is good and the arm arrives before $$\bar{t}$$; (3) Sender is good and the arm arrives between $$\bar{t}$$ and $$T$$; (4) Sender is bad and she does not pull the arm; (5) Sender is bad and she pulls the arm. 
Thus,   \begin{eqnarray*} \mathbb{E}\left[s^{2}\right] & = & \pi e^{-\alpha T}\left(\mu\left(T\right)\right)^{2}\\ & & +\pi\left(1-e^{-\alpha\bar{t}}\right)\\ & & +\pi\int_{\bar{t}}^{T}\left(e^{\lambda\left(T-t\right)}\mu\left(T\right)\right)^{2}\alpha e^{-\alpha t}dt\\ & & +\left(1-\pi\right)\left(1-P_{B}\left(T\right)\right)\left(\mu\left(T\right)\right)^{2}\\ & & +\left(1-\pi\right)\int_{\bar{t}}^{T}e^{-\lambda\left(T-t\right)}\left(e^{\lambda\left(T-t\right)}\mu\left(T\right)\right)^{2}\frac{\pi}{1-\pi}\left(\frac{1-\mu\left(t\right)}{\mu\left(t\right)}\right)\alpha e^{-\alpha t}dt. \end{eqnarray*} Solving all integrals and rearranging all common terms we again get   \begin{eqnarray*} \mathbb{E}\left[s^{2}\right] & = & \pi\mathbb{E}\left[v_{G}\right]. \end{eqnarray*} ∥ Proof of Proposition 4. $$\underline{{\text{For}}\,\pi:}$$ Differentiating $$P_{B}\left(T\right)$$ in (7) with respect to $$\pi$$, we have   \begin{align*} \frac{dP_{B}\left(T\right)}{d\pi} & =\frac{e^{-\alpha T}}{\mu\left(T\right)\left(1-\pi\right)}\times\left[\frac{\pi}{\mu\left(T\right)}\frac{d\mu\left(T\right)}{d\pi}-\frac{1-\mu\left(T\right)}{1-\pi}\right]\\ & =\frac{e^{-\alpha T}}{\mu\left(T\right)\left(1-\pi\right)}\times\left\{ \begin{array}{l} \left[\frac{\mu\left(T\right)}{\pi}-\frac{1-\mu\left(T\right)}{1-\pi}\right]\text{ if }\pi<\bar{\pi},\\ \left[e^{\alpha T}\frac{\mu\left(T\right)^{1+\frac{\alpha}{\lambda}}}{\pi}-\frac{1-\mu\left(T\right)}{1-\pi}\right]\text{ otherwise.} \end{array}\right\} \end{align*} First, when $$\pi<\bar{\pi}$$, $$\frac{dP_{B}\left(T\right)}{d\pi}<0$$ because $$\mu\left(T\right)<\pi$$. Second, when $$\pi\geq\bar{\pi}$$, $$\frac{dP_{B}\left(T\right)}{d\pi}<0$$ if and only if   \begin{align*} 1+\frac{\alpha}{\alpha+\lambda}\phi & >\left(1+\phi\right)^{\frac{\alpha}{\alpha+\lambda}},\\ \phi & \equiv\frac{\alpha+\lambda}{\lambda}\frac{1-\pi}{\pi}e^{\alpha T}>0. 
\end{align*} Thus, $$\frac{dP_{B}\left(T\right)}{d\pi}<0$$, because $$1+x\phi>\left(1+\phi\right)^{x}$$ for all $$\phi>0$$ and $$x\in\left(0,1\right)$$. $$\underline{{\text{For}}\,\lambda:}$$ Differentiating $$P_{B}\left(T\right)$$ in (7) with respect to $$\lambda$$, we have   \begin{align*} \frac{dP_{B}\left(T\right)}{d\lambda} & =\frac{\pi}{1-\pi}\frac{e^{-\alpha T}}{\mu\left(T\right)^{2}}\frac{d\mu\left(T\right)}{d\lambda}<0, \end{align*} where the inequality follows from Proposition 6. $$\underline{{\text{For}}\,\alpha:}$$ Without loss of generality we can set $$T=1$$. Differentiating $$P_{B}\left(T\right)$$ in (7) with respect to $$\alpha$$, we have   $\frac{dP_{B}\left(T\right)}{d\alpha}=\frac{\pi}{1-\pi}e^{-\alpha}\left[\frac{1-\mu\left(T\right)}{\mu\left(T\right)}+\frac{1}{\left(\mu\left(T\right)\right)^{2}}\frac{d\mu\left(T\right)}{d\alpha}\right].$ First, when $$\pi<\bar{\pi}$$,   \begin{eqnarray*} \frac{1-\pi}{\pi}e^{2\alpha}\frac{dP_{B}\left(T\right)}{d\alpha} & = & \left(\frac{1}{\pi}-2\right)e^{\alpha}+\frac{\left(\alpha\left(\alpha+\lambda\right)-\lambda\right)e^{\alpha+\lambda}+\lambda\left(1+2\left(\alpha+\lambda\right)\right)}{\left(\alpha+\lambda\right)^{2}}\\ & > & \left(\frac{1}{\bar{\pi}}-2\right)+\frac{\left(\alpha\left(\alpha+\lambda\right)-\lambda\right)e^{\alpha+\lambda}+\lambda\left(1+2\left(\alpha+\lambda\right)\right)}{\left(\alpha+\lambda\right)^{2}}\\ & = & \frac{1}{\left(\alpha+\lambda\right)^{2}}\left(\lambda\left(1+\left(\alpha+\lambda\right)\right)+\left(\left(\alpha+\lambda\right)^{2}-\lambda\right)e^{\left(\alpha+\lambda\right)}-\left(\alpha+\lambda\right)^{2}e^{\alpha}\right)\\ & = & \sum_{k=3}^{\infty}\left[\frac{\left(\alpha+\lambda\right)^{k}}{\left(k-2\right)!}-\lambda\frac{\left(\alpha+\lambda\right)^{k-1}}{\left(k-1\right)!}-\left(\alpha+\lambda\right)^{2}\frac{\alpha^{k-2}}{\left(k-2\right)!}\right]\equiv\sum_{k=3}^{\infty}c_{k}>0, \end{eqnarray*} where the inequality holds because each term $$c_{k}$$ in the sum is 
positive:   \begin{eqnarray*} c_{k} & = & \frac{\left(\alpha+\lambda\right)^{2}\left(\left(\alpha+\lambda\right)^{k-2}-\alpha^{k-2}\right)}{\left(k-2\right)!}-\frac{\left(\alpha+\lambda\right)^{2}\lambda\left(\alpha+\lambda\right)^{k-3}}{\left(k-1\right)!}\\ & = & \frac{\left(\alpha+\lambda\right)^{2}\lambda\left(\sum_{n=0}^{k-3}\left(\alpha+\lambda\right)^{k-3-n}\alpha^{n}\right)}{\left(k-2\right)!}-\frac{\left(\alpha+\lambda\right)^{2}\lambda\left(\alpha+\lambda\right)^{k-3}}{\left(k-1\right)!}\\ & \geq & \frac{\left(\alpha+\lambda\right)^{2}\lambda\left(\alpha+\lambda\right)^{k-3}}{\left(k-2\right)!}-\frac{\left(\alpha+\lambda\right)^{2}\lambda\left(\alpha+\lambda\right)^{k-3}}{\left(k-1\right)!}>0. \end{eqnarray*} Second, when $$\pi\ge\bar{\pi}$$, $$\frac{dP_{B}\left(T\right)}{d\alpha}>0$$ if and only if   $\frac{1+\phi}{\phi}\left[\ln\left(1+\phi\right)+\frac{\left(\alpha+\lambda\right)^{2}}{\lambda}\left(1-\left(1+\phi\right)^{-\frac{\alpha+\lambda}{\lambda}}\right)\right]-1-\alpha-\lambda>0$  $\phi\equiv\frac{\alpha+\lambda}{\lambda}\frac{1-\pi}{\pi}e^{\alpha T}.$ The left-hand side increases with $$\alpha$$, treating $$\phi$$ as a constant. Then the inequality holds because it holds for $$\alpha\rightarrow0:$$  \begin{eqnarray*} \frac{1+\phi}{\phi}\left[\ln\left(1+\phi\right)+\lambda\left(1-\left(1+\phi\right)^{-1}\right)\right]-1-\lambda & > & 0\\ \iff\frac{1+\phi}{\phi}\ln\left(1+\phi\right) & > & 1. \end{eqnarray*} ∥ Proof of Proposition 5. $$\underline{{\text{For}}\,\lambda:}$$ Differentiating $$P\left(T\right)$$ in (9) with respect to $$\lambda$$, we have   \begin{eqnarray*} \frac{dP\left(T\right)}{d\lambda} & = & \left(1-\pi\right)\frac{dP_{B}\left(T\right)}{d\lambda}<0, \end{eqnarray*} where the inequality follows from Proposition 4. 
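Several steps in the proofs above reduce to elementary inequalities in a single variable $$\phi>0$$ (with an exponent $$x\in\left(0,1\right)$$): $$\left(1+\phi\right)\ln\left(1+\phi\right)>\phi$$, $$\ln\left(1+\phi\right)<\phi$$, $$\frac{1+\phi}{\phi}\ln\left(1+\phi\right)>1$$, and $$1+x\phi>\left(1+\phi\right)^{x}$$. A quick numerical spot-check (a standalone sketch, not part of the original argument) confirms them on a random log-spaced grid:

```python
import math
import random

def check_inequalities(phi, xs=(0.1, 0.5, 0.9)):
    """Spot-check the elementary inequalities used in the proofs, for phi > 0."""
    log1p = math.log1p(phi)  # ln(1 + phi), accurate for small phi
    # (1 + phi) ln(1 + phi) > phi  (monotonicity of mu(T) in lambda)
    assert (1 + phi) * log1p > phi
    # ln(1 + phi) < phi  (monotonicity of mu(T) in alpha at pi = pi-bar)
    assert log1p < phi
    # ((1 + phi)/phi) ln(1 + phi) > 1  (the limiting case alpha -> 0)
    assert (1 + phi) / phi * log1p > 1
    # 1 + x*phi > (1 + phi)^x for x in (0, 1)  (monotonicity of P_B(T) in pi)
    for x in xs:
        assert 1 + x * phi > (1 + phi) ** x

random.seed(0)
for _ in range(10_000):
    check_inequalities(10 ** random.uniform(-3, 3))
print("all inequalities hold on the sampled grid")
```

Of course, the proofs establish these inequalities analytically for all $$\phi>0$$; the script merely guards against transcription slips.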
$$\underline{{\text{For}}\,\alpha:}$$ Differentiating $$P\left(T\right)$$ in (9) with respect to $$\alpha$$, we have   \begin{eqnarray*} \frac{dP\left(T\right)}{d\alpha} & > & \left(1-\pi\right)\frac{dP_{B}\left(T\right)}{d\alpha}>0, \end{eqnarray*} where the last inequality follows from Proposition 4. $$\underline{{\text{For}}\,\pi:}$$ Differentiating $$P\left(T\right)$$ in (9) with respect to $$\pi$$, we have   \begin{eqnarray*} \frac{dP\left(T\right)}{d\pi} & = & \frac{\pi e^{-\alpha T}}{\mu\left(T\right)^{2}}\left(\frac{d\mu\left(T\right)}{d\pi}-\frac{\mu\left(T\right)}{\pi}\right). \end{eqnarray*} We now show that   $\frac{dP\left(T\right)}{d\pi}\geq0\iff\pi\geq\frac{\alpha e^{\alpha T}}{\left(\alpha+\lambda\right)e^{\alpha T}-\lambda}.$ First, when $$\pi<\bar{\pi}$$, we have $$dP\left(T\right)/d\pi<0$$ because $$\mu\left(T\right)<\pi$$ and   $\frac{d\mu\left(T\right)}{d\pi}=\frac{\mu\left(T\right)^{2}}{\pi^{2}}<\frac{\mu\left(T\right)}{\pi}.$ Second, when $$\pi\ge\bar{\pi}$$, we have $$dP\left(T\right)/d\pi<0$$ if and only if   $\frac{d\mu\left(T\right)}{d\pi}=e^{\alpha T}\frac{\mu\left(T\right){}^{2+\frac{\alpha}{\lambda}}}{\pi^{2}}<\frac{\mu\left(T\right)}{\pi}.$ Substituting $$\mu\left(T\right)$$, we get that this inequality is equivalent to   $\pi<\frac{\alpha e^{\alpha T}}{\alpha e^{\alpha T}+\lambda\left(e^{\alpha T}-1\right)}.$ It remains to show that   $\frac{\alpha e^{\alpha T}}{\alpha e^{\alpha T}+\lambda\left(e^{\alpha T}-1\right)}>\bar{\pi}.$ Substituting $$\bar{\pi}$$, we get that this inequality is equivalent to   $\frac{e^{\left(\alpha+\lambda\right)T}-1}{\alpha+\lambda}>\frac{e^{\alpha T}-1}{\alpha},$ which is satisfied because the function $$\left(e^{x}-1\right)/x$$ increases with $$x$$. ∥ Proof of Proposition 7. First, for $$\pi<\bar{\pi}$$, $$\bar{t}=0$$. Second, for $$\pi\ge\bar{\pi}$$, $$\bar{t}$$ increases with $$\pi$$ and decreases with $$\alpha$$ because $$\mu\left(T\right)$$ increases with $$\pi$$ and decreases with $$\alpha$$. 
Furthermore,   \begin{align*} \frac{d\bar{t}}{d\lambda} & =\frac{1}{\alpha+\lambda}\left(\frac{1}{\alpha+\lambda}\ln\left(1+\phi\right)+\frac{\alpha}{\lambda^{2}}\frac{1-\pi}{\pi}e^{\alpha T}\frac{1}{1+\phi}\right)>0,\\ \phi & \equiv\frac{\alpha+\lambda}{\lambda}\frac{1-\pi}{\pi}e^{\alpha T}. \end{align*} ∥ Proof of Proposition 8. The density $$p_{B}\left(t\right)$$ is equal to $$0$$ for $$t\leq\bar{t}$$ and is given by   $p_{B}\left(t\right)\equiv\frac{dP_{B}\left(t\right)}{dt}=\frac{\pi}{1-\pi}\frac{\alpha e^{-\alpha t}\left(1-\mu\left(T\right)e^{\lambda\left(T-t\right)}\right)}{\mu\left(T\right)}$ for $$t>\bar{t}$$. Differentiating $$p_{B}\left(t\right)$$ with respect to $$t$$ for $$t>\bar{t}$$, we get   $\frac{dp_{B}\left(t\right)}{dt}=\frac{\pi}{1-\pi}\frac{\alpha e^{-\alpha t}}{\mu\left(T\right)}\left[\left(\alpha+\lambda\right)\mu\left(T\right)e^{\lambda\left(T-t\right)}-\alpha\right]>0$ if and only if   $t<T-\frac{1}{\lambda}\ln\left(\frac{\alpha}{\alpha+\lambda}\frac{1}{\mu\left(T\right)}\right).$ We can therefore conclude that $$p_{B}\left(t\right)$$ is quasiconcave on the interval $$\left[\bar{t},T\right]$$. ∥ Proof of Proposition 9. The density $$p\left(t\right)$$ is given by   \begin{align*} p\left(t\right) & =\begin{cases} \pi\alpha e^{-\alpha t} & \mbox{if }t<\bar{t}\\ \pi\alpha e^{-\alpha t}+\pi\alpha e^{-\alpha t}\frac{1-\mu\left(T\right)e^{\lambda\left(T-t\right)}}{\mu\left(T\right)} & \mbox{if }t\geq\bar{t}. \end{cases} \end{align*} Obviously, for $$t\leq\bar{t}$$, $$p\left(t\right)$$ is decreasing in $$t$$. For $$t>\bar{t}$$, differentiating $$p\left(t\right)$$ with respect to $$t$$, we get   $\frac{dp\left(t\right)}{dt}=\pi\alpha e^{-\alpha t}\left[\left(\alpha+\lambda\right)e^{\lambda\left(T-t\right)}-\alpha\frac{1+\mu\left(T\right)}{\mu\left(T\right)}\right]>0$ if and only if   $t<T-\frac{1}{\lambda}\ln\left(\frac{\alpha}{\alpha+\lambda}\frac{1+\mu\left(T\right)}{\mu\left(T\right)}\right).$ ∥ Proof of Proposition 10. 
The breakdown probability at $$t$$ is given by   \begin{eqnarray*} Q\left(t\right) & \equiv & \left(1-e^{-\lambda\left(T-t\right)}\right)\left[1-\mu\left(t\right)\right]. \end{eqnarray*} Notice that $$Q\left(t\right)$$ is continuous in $$t$$ because $$\mu\left(t\right)$$ is continuous in $$t$$. Also, $$Q\left(t\right)$$ equals $$0$$ for $$t\leq\bar{t}$$, is strictly positive for all $$t\in\left(\bar{t},T\right)$$, and equals $$0$$ for $$t=T$$. Substituting $$\mu\left(t\right)$$ and differentiating $$Q\left(t\right)$$ with respect to $$t$$ for $$t\geq\bar{t},$$ we get   $\frac{dQ\left(t\right)}{dt}=-\lambda\frac{e^{-\lambda\left(T-t\right)}\left(1+\mu\left(T\right)\right)-2\mu\left(T\right)}{\left[1-\mu\left(T\right)\left(e^{\lambda\left(T-t\right)}-1\right)\right]^{2}}>0$ if and only if   $t<T-\frac{1}{\lambda}\ln\left(\frac{1+\mu\left(T\right)}{2\mu\left(T\right)}\right).$ ∥ Acknowledgments. We are grateful to the editor, four anonymous referees, Alessandro Bonatti, Steven Callander, Yeon-Koo Che, Wouter Dessein, William Fuchs, Drew Fudenberg, Robert Gibbons, Navin Kartik, Keiichi Kawai, Hongyi Li, Jin Li, Brendan Nyhan, Carlos Pimienta, Andrea Prat, and participants at various seminars and conferences for helpful comments and suggestions. Valentino Dardanoni, Antonio Forcina, and Jutta Roosen kindly shared their codes for the likelihood ratio order test. We especially thank Aleksandra Balyanova and Barton Lee for excellent research assistance. We acknowledge support from the Australian Research Council (ARC) including the ARC Discovery Project DP140102426 (Gratton), the ARC Future Fellowship FT130101159 (Holden), and the ARC Discovery Early Career Research Award DE160100964 (Kolotilin). Supplementary Data Supplementary data are available at Review of Economic Studies online. Footnotes 1. For example, Paul Krugman wrote that the announcement “very probably installed Donald Trump in the White House” (New York Times, January 13, 2017). 2. 
Matt Zapotosky, Ellen Nakashima, and Rosalind S. Helderman, Washington Post, October 30, 2016. 3. Although the video was filmed eleven years prior to the release, raising the question of whether it was strategically timed, the Washington Post maintains it obtained the unedited video only a few hours before its online release (Farhi, Paul, Washington Post, October 7, 2016). 4. Aaron Blake, Washington Post, October 9, 2016. 5. Los Angeles Times, transcript of Trump’s video statement, October 7, 2016. 6. Susan Page and Karina Shedrofsky, USA TODAY, October 26, 2016. 7. In Section 3.2, we generalize the model in several directions allowing for more general utility functions, for Sender to be imperfectly informed, for Sender’s type to affect when the arm arrives, and for the deadline to be stochastic. 8. The equilibrium is essentially unique in the sense that the probability with which each type of Sender pulls the arm at any time is uniquely determined. 9. See also Jung and Kwon (1988), Shin (1994), and Dziuda (2011). The unraveling result might also fail if disclosure is costly (Jovanovic, 1982) or information acquisition is costly (Shavell, 1994). 10. Shin (2003, 2006) also studies dynamic verifiable information disclosure, but he does not allow Sender to choose when to disclose. A series of recent papers consider dynamic information disclosure with focuses different from ours, including: Ely et al. (2015); Ely (2016); Grenadier et al. (2016); Hörner and Skrzypacz (2016); Bizzotto et al. (2017); Che and Hörner (2017); Orlov et al. (2017). 11. Brocas and Carrillo (2007) also show that if the learning process is privately observed by Sender but the stopping time is observed by Receiver, then in equilibrium Receiver learns Sender’s information (akin to the unraveling result), as if the learning process was public. Gentzkow and Kamenica (2017) generalize this result. 12. 
In our model Sender can influence only the starting time of the experimentation process, but not the design of the process itself. Instead, in the Bayesian persuasion literature (e.g. Rayo and Segal, 2010; Kamenica and Gentzkow, 2011), Sender fully controls the design of the experimentation process. 13. See also Prat and Stromberg (2013) for a review of this literature in the broader context of the relationship between media and politics. 14. By part (ii) of Assumption 1, such perfect credibility can never be dented: $$H_{\theta}\left(.\mid\tau,1\right)$$ assigns probability $$1$$ to $$s=1$$ for all $$\theta$$ and $$\tau$$. 15. Divinity is a standard refinement used by the signalling literature. It requires Receiver to attribute a deviation to those types of Sender who would choose it for the widest range of Receiver’s interim beliefs. In our setting, the set of divine equilibria coincides with the set of monotone equilibria in which Receiver’s interim belief about Sender is non-increasing in the pulling time. Specifically, divinity rules out all equilibria in which both types of Sender do not pull the arm at some times, because Receiver’s out-of-equilibrium beliefs for those times are sufficiently unfavourable. 16. It is sufficient for our results to assume that Sender’s payoff is an upper hemicontinuous correspondence (rather than a continuous function) of Receiver’s posterior belief. For example, this is the case if Sender’s and Receiver’s payoffs depend on Receiver’s action and Sender’s type, and Receiver’s action set is finite. In the above example with constant ideological position, Sender’s payoff in (5) is a correspondence with $$v\left(r\right)=\left[0,1\right]$$, because it is optimal for Receiver to randomize between the two actions when $$s=r$$. 17. 
More generally, Proposition 1 holds whenever $$sv_{G}\left(s\right)$$ is strictly convex and $$\left(1-s\right)v_{B}\left(s\right)$$ is strictly concave, that is, for all $$s$$, Sender’s Arrow-Prat coefficient of absolute risk aversion $$-v_{\theta}^{\prime\prime}\left(s\right)/v_{\theta}^{\prime}\left(s\right)$$ is less than $$2/s$$ for good Sender and more than $$-2/\left(1-s\right)$$ for bad Sender. For the Poisson model of Section 4, Proposition 1 continues to hold for any risk attitude of good Sender and only relies on bad Sender being not too risk-loving. 18. These effects are common in the Bayesian persuasion literature (Kamenica and Gentzkow, 2011). In this literature, Sender is uninformed. Therefore, from her perspective, Receiver’s beliefs follow a martingale process (Ely et al., 2015), so only convexity properties of Sender’s payoff affect the time at which she pulls the arm. 19. The inequality (6) holds if and only if $$\int_{x}^{1}H\left(s\mid\tau^{\prime},\pi\right)ds>\int_{x}^{1}H\left(s\mid\tau,\pi\right)ds$$ for all $$x\in\left(0,1\right)$$. In comparison, part (i) of Assumption 1 holds if and only if $$\int_{x}^{1}H\left(s\mid\tau^{\prime},\pi\right)ds\geq\int_{x}^{1}H\left(s\mid\tau,\pi\right)ds$$ for all $$x\in\left(0,1\right)$$ with strict inequality for some $$x\in\left(0,1\right)$$. 20. For $$v_{\theta}$$ given by (5), we can show that bad Sender withholds the arm with strictly positive probability, $$P_{B}\left(T\right)<F\left(T\right)$$, in all divine equilibria, if $$\pi>r$$. In this case, however, $$v_{\theta}$$ is not strictly increasing, and there exist divine equilibria in which good Sender does not always pull the arm as soon as it arrives. For example, there exists a divine equilibrium in which bad and good Sender never pull the arm by the deadline: $$P_{G}\left(T\right)=P_{B}\left(T\right)=0$$. In this equilibrium, both bad and good Sender enjoy the highest possible payoff, $$1$$. 21. 
In this case, there exists some time $$\tau$$ at which bad Sender strictly prefers to pull the arm and (2) no longer holds for $$\tau$$. 22. Technically, we use the results from Section 3.1 by treating continuous time as an appropriate limit of discrete time. 23. In every divine equilibrium, $$P_{G}\left(t\right)=F\left(t\right)$$ for all $$t\in\left[\bar{t},T\right]$$ and $$P_{B}\left(t\right)=0$$ for all $$t\in\left[0,\bar{t}\right]$$. But for each distribution $$\hat{P}$$ such that $$\hat{P}\left(t\right)\leq F\left(t\right)$$ for all $$t\in\left[0,\bar{t}\right)$$ and $$\hat{P}\left(t\right)=F\left(t\right)$$ for all $$t\in\left[\bar{t},T\right]$$, there exists a divine equilibrium with $$P_{G}=\hat{P}$$. For ease of exposition, we focus on the divine equilibrium in which $$P_{G}\left(t\right)=F\left(t\right)$$ for all $$t\in\left[0,T\right]$$. 24. For an arbitrary Receiver’s Bernoulli payoff function, which depends on Receiver’s action and Sender’s type, the second-order Taylor approximation of Receiver’s expected payoff increases with the variance of his posterior belief, and therefore with $$\lambda$$ and $$\alpha$$. In contrast, the comparative statics with respect to $$\pi$$ are less robust. For example, if Receiver’s Bernoulli payoff function is $$-\left(a-\theta\right)^{2}$$, where $$a\in\mathbb{R}$$ is Receiver’s action and $$\theta\in\left\{ 0,1\right\}$$ is Sender’s type, then Receiver’s expected payoff decreases with $$\pi$$ for $$\pi<1/2$$ and increases with $$\pi$$ for $$\pi>1/2$$. 25. If the arrival rate $$\alpha$$ is sufficiently small, then $$t_{b}$$ is negative and hence the breakdown probability monotonically decreases with time. 26. This corresponds to the first terms of five presidents: Jimmy Carter (1976–80), Ronald Reagan (1980–84), George H. W. Bush (1988–92), Bill Clinton (1992–96), and George W. Bush (2000–04). Each president ran for reelection and three (Reagan, Clinton, and Bush) served two full terms. 27. 
Nyhan (2015) does not provide data on scandals involving the president-elect between Election Day and the first week of January of the following year, but it contains data on scandals involving the president-elect between the first week of January and the date of his inauguration: there are no such scandals. 28. We omit from our sample the “GSA corruption” scandal during Jimmy Carter’s presidency as the allegations, explicit and implicit, of the scandal, while involving the federal administration, did not involve any of the members of Carter’s administration or their collaborators (if anything, as Carter ran with the promise to end corruption in the GSA, the scandal might have actually reinforced his position). In any case, we check in Online Appendix A that our qualitative results are robust to the inclusion of this scandal. 29. We discuss this test in greater detail in the context of the next application. For this application, we use $$k=3$$ equiprobable time intervals. For the election campaign period only (10 scandals), the $$p$$-values are $$0.003$$ and $$0.839$$, respectively. 30. The rate of learning $$\lambda$$ might also be related to the verifiability of information, which may depend on the scandal’s type ($$e.g.,$$ infidelity versus corruption). 31. Another possible explanation (not captured by our model) for Nyhan’s finding is that media organizations strategically avoid releasing scandals when voters’ attention is captured by other media events and scandals may be less effective (see Durante and Zhuravskaya, 2017). 32. One way to map this application into our model is as follows. Suppose that a firm learns at date $$t_{\ell}$$ that it needs liquidity in a time frame $$\Delta_{F}$$, meaning that the latest possible initial trade date is $$t_{\ell}+\Delta_{F}$$. Both $$t_{\ell}$$ and $$\Delta_{F}$$ are privately known by the firm. Date $$t_{\ell}$$ is drawn according to the (improper) uniform distribution on the set of integers $$\mathbb{Z}$$. 
The time frame is $$\Delta_{F}\equiv T-t$$, where $$t$$ has a distribution $$F$$ on $$\left\{ 1,\dots,T+1\right\}$$. The firm chooses a time gap $$\Delta_{G}\equiv T-\tau$$ subject to $$\tau\in\left\{ t,\dots,T+1\right\}$$, meaning that it announces an IPO at a date $$t_{a}\in\left\{ t_{\ell},\dots,t_{\ell}+\left(\Delta_{F}-\Delta_{G}\right)\right\}$$ with the initial trade date at $$t_{a}+\Delta_{G}\leq t_{\ell}+\Delta_{F}$$. Announcing an IPO at date $$t_{a}$$ with the initial trade date at $$t_{a}-1$$ means that the firm accesses liquidity through other channels than an IPO. With this mapping, all our results hold exactly with $$P_{G}$$ and $$P_{B}$$ being the distributions of $$\tau=T-\Delta_{G}$$ for good and bad firms, respectively. 33. Our model predicts that the time gap should not affect expected excess returns, because the price at the initial trade date takes into account the information contained in the time gap. Therefore, we cannot take a standard approach of regressing excess returns on the time gap to evaluate the main prediction of our model that bad firms choose a shorter time gap. 34. As we discuss in Section 3.2.3, bad Sender may pull the arm later than good Sender simply because she receives it later than good Sender (not because she strategically delays). Therefore, we do not empirically identify whether bad firms choose a shorter time gap for a strategic reason. 35. The 1992 U.S. Supreme Court case Burson v. Freeman, 504 U.S. 191, forbids such practices as violations of freedom of speech. References ACHARYA V. V., DeMARZO P. and KREMER I. (2011), “Endogenous Information Flows and the Clustering of Announcements”, American Economic Review, 101, 2955–2979. BANKS J. S. and SOBEL J. (1987), “Equilibrium Selection in Signaling Games”, Econometrica, 55, 647–661. BIZZOTTO J., RÜDIGER J. and VIGIER A. 
(2017), “How to Persuade a Long-Run Decision Maker” (University of Oxford Working Paper). BLACKWELL D. (1953), “Equivalent Comparisons of Experiments”, Annals of Mathematical Statistics, 24, 262–272. BOLTON P. and HARRIS C. (1999), “Strategic Experimentation”, Econometrica, 67, 349–374. BROCAS I. and CARRILLO J. D. (2007), “Influence through Ignorance”, RAND Journal of Economics, 38, 931–947. CHE Y.-K. and HÖRNER J. (2017), “Recommender Systems as Mechanisms for Social Learning”, Quarterly Journal of Economics, forthcoming. CHO I.-K. and KREPS D. M. (1987), “Signaling Games and Stable Equilibria”, Quarterly Journal of Economics, 102, 179–221. DARDANONI V. and FORCINA A. (1998), “A Unified Approach to Likelihood Inference on Stochastic Orderings in a Nonparametric Context”, Journal of the American Statistical Association, 93, 1112–1123. DELLAVIGNA S. and KAPLAN E. (2007), “The Fox News Effect: Media Bias and Voting”, Quarterly Journal of Economics, 122, 1187–1234. DUGGAN J. and MARTINELLI C. (2011), “A Spatial Theory of Media Slant and Voter Choice”, Review of Economic Studies, 78, 640–666. DURANTE R. and ZHURAVSKAYA E. (2017), “Attack when the World is Not Watching?: International Media and the Israeli-Palestinian Conflict”, Journal of Political Economy, forthcoming. DYE R. A. (1985), “Disclosure of Nonproprietary Information”, Journal of Accounting Research, 23, 123–145. DZIUDA W. (2011), “Strategic Argumentation”, Journal of Economic Theory, 146, 1362–1397. ELY J., FRANKEL A. and KAMENICA E. (2015), “Suspense and Surprise”, Journal of Political Economy, 123, 215–260. 
ELY J. C. (2016), “Beeps”, American Economic Review, 107, 31–53. GENTZKOW M. and KAMENICA E. (2017), “Disclosure of Endogenous Information”, Economic Theory Bulletin, 5, 47–56. GENTZKOW M. and SHAPIRO J. M. (2006), “Media Bias and Reputation”, Journal of Political Economy, 114, 280–316. GRENADIER S. R., MALENKO A. and MALENKO N. (2016), “Timing Decisions in Organizations: Communication and Authority in a Dynamic Environment”, American Economic Review, 106, 2552–2581. GROSSMAN S. J. (1981), “The Informational Role of Warranties and Private Disclosures about Product Quality”, Journal of Law and Economics, 24, 461–483. GROSSMAN S. J. and HART O. D. (1980), “Disclosure Laws and Takeover Bids”, Journal of Finance, 35, 323–334. GUTTMAN I., KREMER I. and SKRZYPACZ A. (2013), “Not Only What but also When: A Theory of Dynamic Voluntary Disclosure”, American Economic Review, 104, 2400–2420. HÖRNER J. and SKRZYPACZ A. (2016), “Selling Information”, Journal of Political Economy, 124, 1515–1562. JOVANOVIC B. (1982), “Truthful Disclosure of Information”, Bell Journal of Economics, 13, 36–44. JUNG W.-O. and KWON Y. K. (1988), “Disclosure when the Market is Unsure of Information Endowment of Managers”, Journal of Accounting Research, 26, 146–153. KAMENICA E. and GENTZKOW M. (2011), “Bayesian Persuasion”, American Economic Review, 101, 2590–2615. KELLER G., RADY S. and CRIPPS M. (2005), “Strategic Experimentation with Exponential Bandits”, Econometrica, 73, 39–68. LI H. 
and LI W. (2013), “Misinformation”, International Economic Review, 54, 253–277. LOUGHRAN T. and RITTER J. R. (1995), “The New Issues Puzzle”, Journal of Finance, 50, 23–51. MASKIN E. S. and TIROLE J. (1992), “The Principal-Agent Relationship with an Informed Principal, II: Common Values”, Econometrica, 60, 1–42. MILGROM P. R. (1981), “Good News and Bad News: Representation Theorems and Applications”, Bell Journal of Economics, 12, 380–391. MULLAINATHAN S. and SHLEIFER A. (2005), “The Market for News”, American Economic Review, 95, 1031–1053. NYHAN B. (2015), “Scandal Potential: How Political Context and News Congestion Affect the President’s Vulnerability to Media Scandal”, British Journal of Political Science, 45, 435–466. ORLOV D., SKRZYPACZ A. and ZRYUMOV P. (2017), “Persuading the Principal To Wait” (Stanford University Working Paper). PRAT A. and STROMBERG D. (2013), “The Political Economy of Mass Media”, in Advances in Economics and Econometrics: Theory and Applications, Proceedings of the Tenth World Congress of the Econometric Society, vol. II (Cambridge University Press), 135–187. RAYO L. and SEGAL I. (2010), “Optimal Information Disclosure”, Journal of Political Economy, 118, 949–987. ROOSEN J. and HENNESSY D. A. (2004), “Testing for the Monotone Likelihood Ratio Assumption”, Journal of Business & Economic Statistics, 22, 358–366. SHAKED M. and SHANTHIKUMAR G. (2007), Stochastic Orders (New York, NY: Springer). SHAVELL S. 
( 1994), “Acquisition and Discolsure of Information Prior to Sale”, RAND Journal of Economics , 25, 20– 36. Google Scholar CrossRef Search ADS   SHIN H.-S. ( 1994), “News Management and the Value of Firms”, RAND Journal of Economics , 25, 58– 71. Google Scholar CrossRef Search ADS   SHIN H.-S. ( 2003), “Disclosure and Asset Returns”, Econometrica , 71, 105– 133. Google Scholar CrossRef Search ADS   SHIN H.-S. ( 2006), “Disclosure Risk and Price Drift”, Journal of Accounting Research , 44, 351– 379. Google Scholar CrossRef Search ADS   © The Author 2017. Published by Oxford University Press on behalf of The Review of Economic Studies Limited. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png The Review of Economic Studies Oxford University Press # When to Drop a Bombshell , Volume Advance Article – Dec 11, 2017 34 pages /lp/ou_press/when-to-drop-a-bombshell-ZPmftxi2bC Publisher Oxford University Press © The Author 2017. Published by Oxford University Press on behalf of The Review of Economic Studies Limited. ISSN 0034-6527 eISSN 1467-937X D.O.I. 10.1093/restud/rdx070 Publisher site See Article on Publisher Site ### Abstract Abstract Sender, who is either good or bad, wishes to look good at an exogenous deadline. Sender privately observes if and when she can release a public flow of information about her private type. Releasing information earlier exposes to greater scrutiny, but signals credibility. In equilibrium bad Sender releases information later than good Sender. We find empirical support for the dynamic predictions of our model using data on the timing of U.S. presidential scandals and U.S. initial public offerings. In the context of elections, our results suggest that October Surprises are driven by the strategic behaviour of bad Sender. 1. Introduction Election campaigns consist of promises, allegations, and scandals. While most of them are inconsequential, some are pivotal events that can sway elections. 
Rather than settling existing issues, these bombshells typically start new debates that, in time, provide voters with new information. When bombshells are dropped, their timing is hotly debated. Was the bombshell intentionally timed to sway the election? What else did media and politicians know, when they dropped the bombshell, that voters might only discover after the election? The 2016 U.S. presidential campaign between Democrat Hillary Clinton and Republican Donald Trump provides several examples. Just eleven days before the election, FBI director James Comey announced that his agency was reopening its investigation into Secretary Clinton’s emails. The announcement reignited claims that Clinton was not fit to be commander in chief because of her mishandling of classified information. Paul Ryan, the Republican Speaker of the House, went as far as to demand an end to classified intelligence briefings to Clinton. Some commentators maintain that Comey’s announcement cost Clinton the election.1 While the announcement conveyed the impression of an emerging scandal, Clinton was confident that no actual wrongdoing would be revealed by the new investigation —there would be no real scandal. Comey’s letter to Congress stated that “the FBI cannot yet assess whether or not this material may be significant, and I cannot predict how long it will take us to complete this additional work”. The Clinton campaign—and Democrats generally—were furious, accusing Comey of interfering with the election. Comey wrote that he was briefed on the new material only the day before the announcement. 
But his critics maintained that the FBI had accessed the new emails weeks before the announcement and speculated about how long Comey sat on the new material and what he knew about it.2 Similarly, one month before the election, the Washington Post released a video featuring Donald Trump talking disparagingly about women.3 The video triggered a heated public debate about whether Trump was fit to be president. It revived allegations that he had assaulted women and even prominent Republicans called for Trump to end his campaign.4 Within a week of the video's release, five women came forward accusing Trump of sexual assault. Trump himself denied all accusations and dismissed the video as "locker-room banter", and "nothing more than a distraction from the important issues we are facing today."5 Others echoed his statement that real scandals are about "actions" and not "words", and took the media coverage of the video as proof of a conspiracy against Trump.6 The concentration of scandals in the last months of the 2016 campaign is far from an exception. Such October surprises are commonplace in U.S. presidential elections, as shown in Figure 1. Political commentators argue that such bombshells may be strategically dropped close to elections so that voters do not have enough time to tell real from fake news. Yet, if all fake news were released just before an election, then voters may rationally discount October surprises as fake. Voters may not do so fully, however, since while some bombshells may be strategically timed, others are simply discovered close to the election.

Figure 1: Distribution of scandals implicating U.S. presidents running up for reelection, from 1977 to 2008. Data from Nyhan (2015).
Therefore, the strategic decision of when to drop a bombshell is driven by a trade-off between credibility and scrutiny. On the one hand, dropping the bombshell earlier is more credible, in that it signals that its sender has nothing to hide. On the other hand, it exposes the bombshell to scrutiny for a longer period of time—possibly revealing that the bombshell is a fake. This credibility-scrutiny trade-off also drives the timing of announcements about candidacy, running mates, cabinet members, and details of policy platforms. An early announcement exposes the background of the candidate or her team to more scrutiny, but boosts credibility. The same trade-off is likely to drive the timing of information release in other contexts outside the political sphere. For instance, a firm going public can provide a longer or shorter time for the market to evaluate its prospectus before the firm’s shares are traded. This time can be dictated by the firm’s liquidity needs and development plans, but can also be chosen strategically to influence the market. A longer time allows the market to learn more about the firm’s prospective performance. Therefore, the market perceives a longer time as a signal of the firm’s credibility, increasing the share price. But a longer time also exposes the firm to more scrutiny, possibly revealing that the firm’s future profitability is low. In all these situations, (1) an interested party has private information and (2) she cares about the public opinion at a given date. Crucially, (3) she can partially control how much time the public has to learn about her information. In this article we introduce a Sender-Receiver model of these dynamic information release problems. 
In our benchmark model of Section 2, (1) Sender privately knows her binary type, good or bad, and (2) wants Receiver to believe that she is good at an exogenous deadline; (3) Sender privately observes whether and when an opportunity to start a public flow of information about her type arrives and chooses when to exercise this opportunity. We call this opportunity an arm and say that Sender chooses when to pull the arm.7 In Section 3.1, we characterize the set of perfect Bayesian equilibria. Intuitively, bad Sender is willing to endure more scrutiny only if pulling the arm earlier boosts her credibility in the sense that Receiver holds a higher belief that Sender is good if the arm is pulled earlier. Therefore, bad Sender withholds the arm with strictly positive probability. Our main result is that, in all equilibria, bad Sender pulls the arm later than good Sender in the likelihood ratio order. We prove that there exists an essentially unique divine equilibrium (Cho and Kreps, 1987).8 In this equilibrium, good Sender immediately pulls the arm when it arrives and bad Sender is indifferent between pulling the arm at any time and not pulling it at all. Uniqueness allows us to analyse comparative statics in a tractable way in a special case of our model where the arm arrives according to a Poisson process and pulling the arm starts an exponential learning process in the sense of Keller et al. (2005). We do this in Section 4 and show that the comparative static properties of this equilibrium are very intuitive. Both good and bad Sender gain from a higher Receiver’s prior belief that Sender is good. Instead, whereas good Sender gains from a faster learning process and a faster arrival of the arm, bad Sender loses from these. When learning is faster and when the arm arrives more slowly, bad Sender delays pulling the arm for longer and pulls it with lower probability. In this case, the total probability that (good and bad) Sender pulls the arm is also lower. 
When Receiver’s prior belief is higher, withholding information is less damning, so bad Sender strategically pulls the arm with lower probability, but the probability that good Sender pulls the arm is mechanically higher. We show that the strategic effect dominates the mechanical effect if and only if Receiver’s prior belief is sufficiently low. We show that the probability density with which bad Sender pulls the arm is single-peaked in time, and derive the conditions under which it monotonically increases with time. We also characterize the shape of the probability density with which (good and bad) Sender pulls the arm, and show it has at most two peaks—an earlier peak driven by good Sender and a later peak driven by bad Sender. In Section 5, we apply our model to the strategic release of political scandals in U.S. presidential campaigns. In equilibrium, while real scandals are released as they are discovered, fake scandals are strategically delayed and concentrated towards the end of the campaign. In other words, our credibility-scrutiny trade-off predicts that the October surprise phenomenon is driven by fake scandals. Using data from Nyhan (2015), we find empirical support for this prediction. To the best of our knowledge, this is the first empirical evidence about the strategic timing of political scandals relative to the date of elections and the first direct evidence of an October Surprise effect. Finally, we apply our model to the timing of U.S. initial public offerings (IPOs). Our model links a stock’s long-run performance to the time gap between the announcement of an IPO and the initial trade date. Firms with higher long-run returns should choose longer time gaps in the likelihood ratio order. Using an approach developed by Dardanoni and Forcina (1998), we find empirical support for this prediction. 
Related Literature. Grossman and Hart (1980), Grossman (1981), and Milgrom (1981) pioneered the study of verifiable information disclosure and established the unraveling result: if Sender's preferences are common knowledge and monotonic in Receiver's action (for all types of Sender) then Receiver learns Sender's type in any sequential equilibrium. Dye (1985) first pointed out that the unraveling result fails if Receiver is uncertain about Sender's information endowment.9 When Sender does not disclose information, Receiver is unsure as to why, and thus cannot conclude that the non-disclosure was strategic, and hence does not "assume the worst" about Sender's type. Acharya et al. (2011) and Guttman et al. (2013) explore the strategic timing of information disclosure in a dynamic version of Dye (1985).10 Acharya et al. (2011) focus on the interaction between the timing of disclosure of private information relative to the arrival of external news, and clustering of the timing of announcements across firms. Guttman et al. (2013) analyse a setting with two periods and two signals and show that, in equilibrium, both what is disclosed and when it is disclosed matters. Strikingly, the authors show that later disclosures are received more positively. All these models are unsuited to study either the credibility or the scrutiny sides of our trade-off, because information in these models is verified instantly and with certainty once disclosed. In our motivating examples, information is not immediately verifiable: when Sender releases the information, Receiver only knows that "time will tell" whether the information released is reliable. To capture this notion of partial verifiability, we model information as being verified stochastically over time in the sense that releasing information starts a learning process for Receiver akin to the processes in Bolton and Harris (1999) and Keller et al. (2005).
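To fix ideas, here is a minimal numerical sketch of such a learning process, assuming an exponential bad-news ("breakdown") specification in the spirit of Keller et al. (2005); the rate `lam` and the interim belief `m` are hypothetical, and the closed forms below follow from Bayes' rule for this assumed process, not from the paper itself.

```python
import math

# Hedged sketch: exponential "breakdown" learning. Only bad Sender's
# process can generate a breakdown, at hypothetical rate lam; good
# Sender's process never does. If the arm is pulled delta periods before
# the deadline, Receiver either sees a breakdown (posterior drops to 0)
# or sees none and updates upward from the interim belief m.

def posterior_no_breakdown(m, lam, delta):
    """Receiver's posterior after delta time with no breakdown,
    starting from interim belief m (Bayes' rule)."""
    return m / (m + (1 - m) * math.exp(-lam * delta))

def expected_posterior(m, lam, delta, good):
    """Expected posterior at the deadline, conditional on Sender's type."""
    p_survive = 1.0 if good else math.exp(-lam * delta)
    return p_survive * posterior_no_breakdown(m, lam, delta)

m, lam = 0.5, 1.0
# Longer scrutiny raises good Sender's expected posterior ...
assert expected_posterior(m, lam, 2.0, good=True) > expected_posterior(m, lam, 1.0, good=True)
# ... and lowers bad Sender's, holding the interim belief fixed.
assert expected_posterior(m, lam, 2.0, good=False) < expected_posterior(m, lam, 1.0, good=False)
```

Holding the interim belief fixed, longer scrutiny helps good Sender and hurts bad Sender, which is exactly the credibility-scrutiny trade-off described above.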
In Brocas and Carrillo (2007), an uninformed Sender, wishing to influence Receiver's beliefs, chooses when to stop a public learning process.11 In contrast, in our model Sender is privately informed and she chooses when to start rather than stop the process.12 Our application to U.S. presidential scandals also contributes to the literature on the effect of biased media and campaigns on voters' behaviour (e.g. Mullainathan and Shleifer, 2005; Gentzkow and Shapiro, 2006; Duggan and Martinelli, 2011; Li and Li, 2013).13 DellaVigna and Kaplan (2007) provide evidence that biased media have a significant effect on the vote share in U.S. presidential elections. We focus on when a biased source chooses to release information and show that voters respond differently to information released at different times in the election campaign.

2. The Model

In our model, Sender's payoff depends on Receiver's posterior belief about Sender's type at a deadline. We begin with a benchmark model in which (1) Sender's payoff is equal to Receiver's posterior belief, (2) Sender is perfectly informed, (3) Sender's type does not affect when the arm arrives, and (4) the deadline is deterministic. Section 3.2 relaxes each of these assumptions and shows that our main results continue to hold.

2.1. Benchmark model

There are two players: Sender (she) and Receiver (he). Sender is one of two types $$\theta\in\left\{ G,B\right\}$$: good ($$\theta=G$$) or bad ($$\theta=B$$). Let $$\pi\in\left(0,1\right)$$ be the common prior belief that Sender is good. Time is discrete and indexed by $$t\in\left\{ 1,2,\ldots,T+1\right\}$$. Sender is concerned about being perceived as good at a deadline $$t=T$$. In particular, the expected payoff of type $$\theta\in\left\{ G,B\right\}$$ is given by $$v_{\theta}\left(s\right)=s$$, where $$s$$ is Receiver's posterior belief at $$t=T$$ that $$\theta=G$$. Time $$T+1$$ combines all future dates after the deadline, including never.
An arm arrives to Sender at a random time according to distribution $$F$$ with support $$\left\{ 1,2,\ldots,T+1\right\}$$. If the arm has arrived, Sender privately observes her type and can pull the arm immediately or at any time after its arrival, including time $$T+1$$. Because Sender moves only after the arrival of the arm, it is immaterial for the analysis whether Sender learns her type when the arm arrives or when the game starts. Pulling the arm starts a learning process for Receiver. Specifically, if the arm is pulled at a time $$\tau$$ before the deadline ($$\tau\leq T$$), Receiver observes realizations of a stochastic process   $L=\left\{ L_{\theta}\left(t;\tau\right),\tau\leq t\leq T\right\} .$ The process $$L$$ can be viewed as a sequence of signals, one for each time from $$\tau$$ to $$T$$, with the precision of the signal at time $$t$$ possibly depending on $$\tau$$, $$t$$, and all previous signals. Notice that if the arm is pulled at $$\tau=T$$, Receiver observes the realization $$L_{\theta}\left(T;T\right)$$ before taking his action. For notational convenience, we assume that $$L$$ is either discrete or atomless. It is more convenient to work directly with the distribution of beliefs induced by the process $$L$$ rather than with the process itself. Recall that $$s$$ is Receiver's posterior belief that Sender is good after observing all realizations of the process from $$\tau$$ to $$T$$. Let $$m$$ denote Receiver's interim belief that Sender is good upon observing that she pulls the arm at time $$\tau$$ and before observing any realization of $$L$$. Given $$\tau$$ and $$m$$, the process $$L$$ generates a distribution $$H\left(.\mid\tau,m\right)$$ over Receiver's posterior beliefs $$s$$; given $$\tau$$, $$m$$, and $$\theta$$, the process $$L$$ generates a distribution $$H_{\theta}\left(.\mid\tau,m\right)$$ over $$s$$.
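One concrete process generating such distributions, sketched under assumed parameters (the signal accuracy q and the prior are hypothetical): pulling the arm earlier reveals more conditionally i.i.d. binary signals about Sender's type. The induced posteriors average to the prior and become more dispersed the more signals are revealed.

```python
from itertools import product

# Hedged illustration: pulling the arm earlier reveals a larger set of
# conditionally i.i.d. binary signals, each matching Sender's type with
# hypothetical accuracy q. We enumerate signal sequences and compute the
# induced distribution over Receiver's posterior beliefs.

def posterior_distribution(pi, q, n):
    """Distribution over posteriors after observing n signals."""
    dist = {}  # posterior -> probability
    for seq in product([0, 1], repeat=n):
        p_g, p_b = 1.0, 1.0  # likelihood of seq under good / bad type
        for x in seq:
            p_g *= q if x == 1 else 1 - q
            p_b *= 1 - q if x == 1 else q
        prob = pi * p_g + (1 - pi) * p_b
        post = round(pi * p_g / prob, 12)
        dist[post] = dist.get(post, 0.0) + prob
    return dist

def mean_var(dist):
    mean = sum(s * p for s, p in dist.items())
    var = sum((s - mean) ** 2 * p for s, p in dist.items())
    return mean, var

pi, q = 0.4, 0.7
m1, v1 = mean_var(posterior_distribution(pi, q, 1))  # pulled later: fewer signals
m3, v3 = mean_var(posterior_distribution(pi, q, 3))  # pulled earlier: more signals
assert abs(m1 - pi) < 1e-9 and abs(m3 - pi) < 1e-9   # posteriors average to the prior
assert v3 > v1                                       # more signals: more dispersed posteriors
assert all(0 < s < 1 for s in posterior_distribution(pi, q, 3))  # never fully revealing
```

More signals produce a mean-preserving spread of posteriors that never reaches 0 or 1, which is the kind of ordering that Assumption 1 below imposes in general.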
Notice that if the arm is pulled after the deadline ($$\tau=T+1$$), then the distributions $$H_{\theta}\left(.\mid\tau,m\right)$$ and $$H\left(.\mid\tau,m\right)$$ assign probability one to $$s=m$$. Assumption 1 says that (1) pulling the arm later reveals strictly less information about Sender’s type in Blackwell (1953)’s sense and (2) the learning process never fully reveals Sender’s type. Assumption 1. (1) For all $$\tau,\tau^{\prime}\in\left\{ 1,2,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$, $$H\left(.\mid\tau,\pi\right)$$ is a strict mean-preserving spread of $$H\left(.\mid\tau^{\prime},\pi\right)$$. (2) The support of $$H\left(.\mid1,\pi\right)$$ is a subset of $$\left(0,1\right)$$. For example, consider a set of (imperfectly informative) signals $$\mathcal{S}$$ with some joint distribution and suppose that pulling the arm at $$\tau$$ reveals to Receiver a set of signals $$\mathcal{S}_{\tau}\subset\mathcal{S}$$. Assumption 1 holds whenever $$\mathcal{S}_{\tau^{\prime}}$$ is a proper subset of $$\mathcal{S}_{\tau}$$ for all $$\tau<\tau^{\prime}$$. We characterize the set of perfect Bayesian equilibria, henceforth equilibria. Let $$\mu\left(\tau\right)$$ be Receiver’s equilibrium interim belief that Sender is good given that Sender pulls the arm at time $$\tau\in\left\{ 1,2,\dots,T+1\right\}$$. Also, let $$P_{\theta}$$ denote an equilibrium distribution of pulling time $$\tau$$ given Sender’s type $$\theta$$ (with the convention that $$P_{\theta}\left(0\right)=0$$). 2.2. Discussion We now pause to interpret key ingredients of our model using our main application—the timing of U.S. presidential scandals in the lead-up to elections. Receiver is the median voter and Sender is an opposition member or organization wishing to reduce a candidate’s chances of being elected. The candidate is either fit ($$\theta=B$$) or unfit ($$\theta=G$$) to run the country. The prior belief that the candidate is unfit is $$\pi$$. 
At a random time, the opposition may privately receive scandalous material against the candidate (arrival of the arm). The opposition can choose when and whether to release the material (pull the arm). After it is released, the material is subject to scrutiny, and the median voter gradually learns about the candidate’s type. Crucially, the opposition has private information about what the expected outcome of scrutiny is. We say that the scandal is real (fake) if further scrutiny is likely to reveal that the candidate is unfit (fit) to run the country. If, at the time of the election (deadline), the median voter believes that the candidate is likely to be unfit to run the country, the candidate’s chances of being elected are weak. Notice that releasing a scandal might backfire. For example, before the FBI reopened its investigation over Secretary Clinton’s emails, the median U.S. voter had some belief $$\pi$$ that Secretary Clinton had grossly mishandled classified information and was therefore unfit to be commander in chief. Further investigations could have revealed that her conduct was more than a mere procedural mistake. In this case, the median voter’s posterior belief $$s$$ would have been higher than $$\pi$$. On the contrary, the FBI might not have found any evidence of misconduct, despite investigating yet more emails. In this case, the median voter’s posterior belief $$s$$ would have been lower than $$\pi$$. In this application, Sender’s payoff depends on Receiver’s belief at the deadline because this belief affects the probability that the median voter elects the candidate. Specifically, suppose that the opposition is uncertain about the ideological position $$r$$ of the median voter, which is uniformly distributed on the unit interval. If the candidate is not elected, the median voter’s payoff is normalized to $$0$$. If the incumbent is elected, the median voter with position $$r$$ gets payoff $$r-1$$ if the candidate is unfit and payoff $$r$$ otherwise. 
The opposition gets payoff $$0$$ if the candidate is elected and $$1$$ otherwise. Therefore, Sender’s expected payoff is given by   $v_{\theta}\left(s\right)=\Pr\left(r\leq s\right)=s\text{ for }\theta\in\left\{ G,B\right\} .$ Furthermore, Receiver’s expected payoff $$u\left(s\right)$$ is given by   $u\left(s\right)=\int_{s}^{1}\left[s\left(r-1\right)+\left(1-s\right)r\right]dr=\frac{\left(1-s\right)^{2}}{2}.$ The Receiver’s ex-ante expected payoff is therefore given by   \begin{align} \mathbb{E}\left[u\left(s\right)\right] & =\frac{\left(1-\mathbb{E}\left[s\right]\right)^{2}+\mathbb{\mathbb{E}}\left[\left(s-\mathbb{E}\left[s\right]\right)^{2}\right]}{2}=\frac{\left(1-\pi\right)^{2}+\mathrm{Var\left[s\right]}}{2}.\label{eq:ele} \end{align} (1) 3. Analysis 3.1. Equilibrium We begin our analysis by deriving statistical properties of the model that rely only on players being Bayesian. These properties link the pulling time and Receiver’s interim belief to the expectation of Receiver’s posterior belief. First, from (good and bad) Sender’s perspective, keeping the pulling time constant, a higher interim belief results in a higher expected posterior belief. Furthermore, pulling the arm earlier reveals more information about Sender’s type. Therefore, from bad (good) Sender’s perspective, pulling the arm earlier decreases (increases) the expected posterior belief that Sender is good. In short, Lemma 1 says that credibility is beneficial for both types of Sender, whereas scrutiny is detrimental for bad Sender but beneficial for good Sender. Lemma 1. (Statistical Properties).Let $$\mathbb{E}\left[s\mid\tau,m,\theta\right]$$ be the expectation of Receiver’s posterior belief $$s$$ conditional on the pulling time $$\tau$$, Receiver’s interim belief $$m$$, and Sender’s type $$\theta$$. 
For all $$\tau,\tau^{\prime}\in\left\{ 1,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$, and all $$m,m^{\prime}\in\left(0,1\right]$$ such that $$m<m^{\prime}$$, (1)$$\mathbb{E}\left[s\mid\tau,m^{\prime},\theta\right]>\mathbb{E}\left[s\mid\tau,m,\theta\right]$$ for $$\theta\in\left\{ G,B\right\}$$; (2)$$\mathbb{E}\left[s\mid\tau^{\prime},m,B\right]>\mathbb{E}\left[s\mid\tau,m,B\right]$$; (3)$$\mathbb{E}\left[s\mid\tau,m,G\right]>\mathbb{E}\left[s\mid\tau^{\prime},m,G\right]$$. Proof. In Appendix A. ǁ We now show that in any equilibrium, (1) good Sender strictly prefers to pull the arm whenever bad Sender weakly prefers to do so, and therefore (2) if the arm has arrived, good Sender pulls it with certainty whenever bad Sender pulls it with positive probability. Lemma 2. (Good Sender’s Behaviour).In any equilibrium: (1)For all $$\tau,\tau^{\prime}\in\left\{ 1,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$ and $$\mu\left(\tau\right),\mu\left(\tau^{\prime}\right)\in\left(0,1\right)$$, if bad Sender weakly prefers to pull the arm at $$\tau$$ than at $$\tau^{\prime}$$, then $$\mu\left(\tau\right)>\mu\left(\tau^{\prime}\right)$$ and good Sender strictly prefers to pull the arm at $$\tau$$ than at $$\tau^{\prime}$$; (2)For all $$\tau\in\left\{ 1,\ldots,T\right\}$$ in the support of $$P_{B}$$, we have $$P_{G}\left(\tau\right)=F\left(\tau\right)$$. Proof. In Appendix B. ǁ The proof relies on the three statistical properties from Lemma 1. The key to Lemma 2 is that if bad Sender weakly prefers to pull the arm at some time $$\tau$$ than at $$\tau^{\prime}>\tau$$, then Receiver’s interim belief $$\mu\left(\tau\right)$$ must be greater than $$\mu$$$$\left(\tau^{\prime}\right)$$. Intuitively, bad Sender is willing to endure more scrutiny only if pulling the arm earlier boosts her credibility. 
Since $$\mu\left(\tau\right)>\mu\left(\tau^{\prime}\right)$$, good Sender strictly prefers to pull the arm at the earlier time $$\tau$$, as she benefits from both scrutiny and credibility. Next, we show that bad Sender pulls the arm with positive probability whenever good Sender does, but bad Sender pulls the arm later than good Sender in the first-order stochastic dominance sense. Moreover, bad Sender pulls the arm strictly later unless no type pulls the arm. An immediate implication is that bad Sender always withholds the arm with positive probability. Lemma 3. (Bad Sender's Behaviour).In any equilibrium, $$P_{G}$$ and $$P_{B}$$ have the same supports and, for all $$\tau\in\left\{ 1,\ldots,T\right\}$$ with $$P_{G}\left(\tau\right)>0$$, we have $$P_{B}\left(\tau\right)<P_{G}\left(\tau\right)$$. Therefore, in any equilibrium, $$P_{B}\left(T\right)<F\left(T\right)$$. Proof. In Appendix B. ǁ Intuitively, if there were a time $$\tau\in\left\{ 1,\dots,T\right\}$$ at which only good Sender pulled the arm with positive probability, then, upon observing that the arm was pulled at $$\tau$$, Receiver would conclude that Sender was good. But then, to achieve this perfect credibility,14 bad Sender would want to mimic good Sender and therefore strictly prefer to pull the arm at $$\tau$$, contradicting that only good Sender pulled the arm at $$\tau$$. Nevertheless, bad Sender always delays relative to good Sender. Indeed, if bad and good Sender were to pull the arm at the same time, then Sender's credibility would not depend on the pulling time. But with constant credibility, bad Sender would never pull the arm to avoid scrutiny. Therefore, good Sender must necessarily pull the arm earlier than bad Sender. We now show that, at any time when good Sender pulls the arm, bad Sender is indifferent between pulling and not pulling the arm.
That is, in equilibrium, pulling the arm earlier boosts Sender’s credibility as much as to exactly offset the expected cost of longer scrutiny for bad Sender. Thus, Receiver’s interim beliefs are determined by bad Sender’s indifference condition (2) and the consistency condition (3). The consistency condition follows from Receiver’s interim beliefs being determined by Bayes’s rule and Sender’s equilibrium strategy. Roughly, it says that a weighted average of interim beliefs is equal to the prior belief. Lemma 4. (Receiver’s Beliefs).In any equilibrium,   $$\int v_{B}\left(s\right)dH_{B}\left(s|\tau,\mu\left(\tau\right)\right)=v_{B}\left(\mu\left(T+1\right)\right)\it{ for\, all }\,\tau\it{ in\, the\, support\, of }\,P_{G},\label{pit}$$ (2)  $$\sum_{\tau\in {\rm supp}\left(P_{G}\right)}\frac{1-\mu\left(\tau\right)}{\mu\left(\tau\right)}\left(P_{G}\left(\tau\right)-P_{G}\left(\tau-1\right)\right)=\frac{1-\pi}{\pi}.\label{piT}$$ (3) Proof. In Appendix B. ǁ We now characterize the set of equilibria. Part 1 of Proposition 1 states that, for any set of times, there exists an equilibrium in which good Sender pulls the arm only at times in this set. Moreover, in any equilibrium, at any time when good Sender pulls the arm, she pulls it with probability $$1$$ and bad Sender pulls it with strictly positive probability. The probability with which bad Sender pulls the arm at any time is determined by the condition that the induced interim beliefs keep bad Sender exactly indifferent between pulling the arm then and not pulling it at all. Part 2 of Proposition 1 characterizes the set of divine equilibria of Banks and Sobel (1987) and Cho and Kreps (1987).15 In such equilibria, good Sender pulls the arm as soon as it arrives. Proposition 1. (Equilibrium). (1)For any $$\mathcal{T}\subseteq\left\{ 1,\dots,T+1\right\}$$ with $$T+1\in\mathcal{T}$$, there exists an equilibrium in which the support of $$P_{G}$$ is $$\mathcal{T}$$. 
In any equilibrium, $$P_{G}$$ and $$P_{B}$$ have the same supports, and for all $$\tau$$ in the support of $$P_{G}$$, $$P_{G}\left(\tau\right)=F\left(\tau\right)$$ and  $$P_{B}\left(\tau\right)=\frac{\pi}{1-\pi}\sum_{t\in supp\left(P_{G}\right)\text{ s.t. }t\leq\tau}\frac{1-\mu\left(t\right)}{\mu\left(t\right)}\left(P_{G}\left(t\right)-P_{G}\left(t-1\right)\right),\label{B}$$ (4)where $$\mu\left(\tau\right)\in\left(0,1\right)$$ is uniquely determined by (2) and (3). (2)There exists a divine equilibrium. In any divine equilibrium, for all $$\tau\in\left\{ 1,\ldots,T+1\right\}$$, $$P_{G}\left(\tau\right)=F\left(\tau\right)$$. Proof. In Appendix B. ǁ Although there exist a plethora of divine equilibria, in all such equilibria, pulling probabilities of good and bad Sender, as well as Receiver’s beliefs, are uniquely determined by $$P_{G}=F$$ and (2)-(4). In this sense, there exists an essentially unique divine equilibrium. Our main testable prediction is that bad Sender pulls the arm strictly later than good Sender in the likelihood ratio order sense. Corollary 1. (Equilibrium Dynamics).In the divine equilibrium,   $\frac{P_{B}\left(\tau\right)-P_{B}\left(\tau-1\right)}{P_{G}\left(\tau\right)-P_{G}\left(\tau-1\right)}<\frac{P_{B}\left(\tau+1\right)-P_{B}\left(\tau\right)}{P_{G}\left(\tau+1\right)-P_{G}\left(\tau\right)}\,\it{ for\, all }\,\tau\in\left\{ 1,\dots,T\right\} .$ Proof. In Appendix B. 
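As a toy numerical illustration of these formulas (a hedged sketch: the arrival distribution, the prior, and the interim beliefs below are hypothetical numbers chosen by hand to satisfy the consistency condition (3), rather than derived from an explicit learning process through the indifference condition (2)):

```python
pi = 0.5
# Uniform arrival over {1,2,3,4}, with 4 standing in for T+1, and P_G = F
# (in the divine equilibrium good Sender pulls the arm on arrival).
P_G = [0.25, 0.5, 0.75, 1.0]
dP_G = [0.25, 0.25, 0.25, 0.25]

# Hypothetical interim odds (1 - mu(t)) / mu(t), increasing in t, scaled so
# that the consistency condition (3) holds: sum of odds * dP_G = (1-pi)/pi.
odds = [0.4, 0.8, 1.2, 1.6]
mu = [1 / (1 + o) for o in odds]
assert all(mu[t] > mu[t + 1] for t in range(3))  # credibility falls over time
assert abs(sum(o * d for o, d in zip(odds, dP_G)) - (1 - pi) / pi) < 1e-12

# Bad Sender's pulling distribution from equation (4).
P_B, acc = [], 0.0
for o, d in zip(odds, dP_G):
    acc += pi / (1 - pi) * o * d
    P_B.append(acc)

assert abs(P_B[-1] - 1.0) < 1e-12                # probabilities sum to one
assert all(P_B[t] < P_G[t] for t in range(3))    # Lemma 3: bad Sender delays
# Corollary 1: the likelihood ratio dP_B / dP_G strictly increases over time,
dP_B = [P_B[0]] + [P_B[t] - P_B[t - 1] for t in range(1, 4)]
lr = [b / g for b, g in zip(dP_B, dP_G)]
assert all(lr[t] < lr[t + 1] for t in range(3))
# which implies the conditional first-order stochastic dominance stated
# in the text after Corollary 1.
for lo in range(4):
    for hi in range(lo + 2, 4):
        for t in range(lo + 1, hi):
            assert ((P_B[t] - P_B[lo]) / (P_B[hi] - P_B[lo])
                    < (P_G[t] - P_G[lo]) / (P_G[hi] - P_G[lo]))
```

With these hypothetical numbers, bad Sender's distribution lies strictly below good Sender's at every time before the deadline and the ratio of pulling probabilities rises over time, matching Lemma 3 and Corollary 1.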
ǁ Corollary 1 implies that, conditional on pulling time $$\tau$$ being between any two times $$\tau^{\prime}$$ and $$\tau^{\prime\prime}$$, bad Sender pulls the arm strictly later than good Sender in the first-order stochastic dominance sense (Theorem 1.C.5, Shaked and Shanthikumar, 2007):   $\frac{P_{B}\left(\tau\right)-P_{B}\left(\tau^{\prime}\right)}{P_{B}\left(\tau^{\prime\prime}\right)-P_{B}\left(\tau^{\prime}\right)}<\frac{P_{G}\left(\tau\right)-P_{G}\left(\tau^{\prime}\right)}{P_{G}\left(\tau^{\prime\prime}\right)-P_{G}\left(\tau^{\prime}\right)}\text{ for all }\tau^{\prime}<\tau<\tau^{\prime\prime}.$ Our model also gives predictions about the evolution of Receiver’s beliefs. Pulling the arm earlier is more credible as Receiver’s interim beliefs $$\mu\left(\tau\right)$$ decrease over time. Moreover, pulling the arm instantaneously boosts credibility in the sense that Receiver’s belief at any time $$\tau$$ about Sender’s type is higher if Sender pulls the arm than if she does not. Corollary 2. (Belief Dynamics).Let $$\tilde{\mu}\left(\tau\right)$$ denote Receiver’s interim belief that Sender is good given that she has not pulled the arm before or at $$\tau$$. In the divine equilibrium,   $\mu\left(\tau-1\right)>\mu\left(\tau\right)>\tilde{\mu}\left(\tau-1\right)>\tilde{\mu}\left(\tau\right)\,\it{ for\, all }\,\tau\in\left\{ 2,\dots,T\right\} .$ Proof. In Appendix B. ǁ 3.2. Discussion of model assumptions We now discuss how our results change (or do not change) if we relax several of the assumptions made in our benchmark model. We discuss each assumption in a separate subsection. The reader may skip this section without any loss of understanding of subsequent sections. 3.2.1. Nonlinear Sender’s payoff In the benchmark model, we assume that Sender’s payoff is linear in Receiver’s posterior belief: $$v_{G}\left(s\right)=v_{B}\left(s\right)=s$$ for all $$s$$. 
In our motivating example, this linearity arises because the opposition is uncertain about the ideological position $$r$$ of the median voter. If there is no such uncertainty, then the median voter reelects the incumbent whenever $$s$$ is below $$r$$, where $$r\in\left(0,1\right)$$ is a constant. Therefore, Sender’s payoff is a step function:   $$v_{\theta}\left(s\right)=v\left(s\right)=\begin{cases} 0 & \text{if }s<r;\\ 1 & \text{if }s>r. \end{cases}$$ (5) We now allow for Sender’s payoff to be nonlinear in Receiver’s posterior belief and even type dependent. To understand how the shapes of the payoff functions $$v_{G}$$ and $$v_{B}$$ affect our analysis, we extend the statistical properties of Lemma 1, which describe the evolution of Receiver’s posterior belief from Sender’s perspective. First and not surprisingly, a more favourable interim belief results in more favourable posterior beliefs for all types of Sender and for all realizations of the process. Moreover, Receiver’s posterior belief follows a supermartingale (submartingale) process from bad (good) Sender’s perspective. Lemma 1′ formalizes these statistical properties, using standard stochastic orders (see, e.g., Shaked and Shanthikumar, 2007). Distribution $$Z_{2}$$ strictly dominates distribution $$Z_{1}$$ in the increasing convex (concave) order if there exists a distribution $$Z$$ such that $$Z_{2}$$ strictly first-order stochastically dominates $$Z$$ and $$Z$$ is a mean-preserving spread (reduction) of $$Z_{1}$$. Lemma 1′. 
(Statistical Properties). For all $$\tau,\tau^{\prime}\in\left\{ 1,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$, and all $$m,m^{\prime}\in\left(0,1\right]$$ such that $$m<m^{\prime}$$,

(1) $$H_{\theta}\left(.\mid\tau,m^{\prime}\right)$$ strictly first-order stochastically dominates $$H_{\theta}\left(.\mid\tau,m\right)$$ for $$\theta\in\left\{ G,B\right\}$$;

(2) $$H_{B}\left(.\mid\tau^{\prime},m\right)$$ strictly dominates $$H_{B}\left(.\mid\tau,m\right)$$ in the increasing concave order;

(3) $$H_{G}\left(.\mid\tau,m\right)$$ strictly dominates $$H_{G}\left(.\mid\tau^{\prime},m\right)$$ in the increasing convex order.

Proof. In Appendix A. ǁ

To interpret Lemma 1′, we assume that the payoff of both types of Sender is a continuous strictly increasing function of Receiver’s posterior belief, so that both types of Sender want to look good.16 Part 1 says that credibility is beneficial for both types of Sender, regardless of the shape of their payoff functions. Part 2 (part 3) says that from bad (good) Sender’s perspective, pulling the arm earlier results in more spread out and less (more) favourable posteriors provided that the interim belief does not depend on the pulling time. So scrutiny is detrimental for bad Sender if her payoff is not too convex but beneficial for good Sender if her payoff is not too concave. Therefore, for a given process satisfying Assumption 1, Proposition 1 continues to hold if bad Sender is not too risk-loving and good Sender is not too risk-averse. In fact, Proposition 1 continues to hold verbatim if bad Sender’s payoff is weakly concave and good Sender’s payoff is weakly convex (the proof in Appendix B explicitly allows for this possibility).17 Much less can be said in general if the payoff functions $$v_{G}$$ and $$v_{B}$$ have an arbitrary shape. For example, if $$v_{G}$$ is sufficiently concave, then good Sender can prefer to delay pulling the arm to reduce the spread in posterior beliefs.
Likewise, if $$v_{B}$$ is sufficiently convex, then bad Sender can prefer to pull the arm earlier than good Sender to increase the spread in posterior beliefs. These effects work against our credibility-scrutiny trade-off and Proposition 1 no longer holds.18 Nevertheless, bad Sender weakly delays pulling the arm relative to good Sender under the following single crossing assumption.

Assumption 2. For all $$\tau,\tau^{\prime}\in\left\{ 1,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$ and $$\mu\left(\tau\right),\mu\left(\tau^{\prime}\right)\in\left(0,1\right)$$, if bad Sender weakly prefers to pull the arm at $$\tau$$ than at $$\tau^{\prime}$$, then good Sender strictly prefers to pull the arm at $$\tau$$ than at $$\tau^{\prime}$$.

This assumption holds in the benchmark model by Lemma 2. This assumption also holds if Sender’s payoff is the step function in (5) whenever pulling the arm later reveals strictly less useful information about Sender’s type, in the sense that Receiver is strictly worse off.19

Lemma 2′. (Good Sender’s Behaviour). Let $$v_{\theta}$$ be given by (5). If for all $$\tau,\tau^{\prime}\in\left\{ 1,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$   $$\int_{r}^{1}H\left(s\mid\tau^{\prime},m\right)ds>\int_{r}^{1}H\left(s\mid\tau,m\right)ds\,\it{ for\, all }\,m\in\left(0,1\right),\label{eq:anton}$$ (6) then Assumption 2 holds. Proof. In Appendix B. ǁ

If Assumption 2 holds and $$v_{\theta}$$ is strictly increasing, then in the unique divine equilibrium, good Sender pulls the arm as soon as it arrives and bad Sender pulls the arm weakly later than good Sender—there may exist an equilibrium in which both good and bad Sender pull the arm as soon as it arrives.20

3.2.2. Imperfectly informed Sender

In many applications, Sender does not know with certainty whether pulling the arm would start a good or bad learning process for Receiver.
For example, when announcing the reopening of the Clinton investigation, Director Comey could not know for certain what the results of the investigation would eventually be. We generalize our model to allow for Sender to only observe a signal $$\sigma\in\left\{ \sigma_{B},\sigma_{G}\right\}$$ about an underlying binary state $$\theta$$, with normalization   $\sigma_{G}=\Pr\left(\theta=G\mid\sigma_{G}\right)>\pi>\Pr\left(\theta=G\mid\sigma_{B}\right)=\sigma_{B}.$ The statistical properties of Lemma 1 still hold.

Lemma 1″. (Statistical Properties). Let $$\mathbb{E}\left[s\mid\tau,m,\sigma\right]$$ be the expectation of Receiver’s posterior belief $$s$$ conditional on the pulling time $$\tau$$, Receiver’s interim belief $$m$$, and Sender’s signal $$\sigma$$. For all $$\tau,\tau^{\prime}\in\left\{ 1,\ldots,T+1\right\}$$ such that $$\tau<\tau^{\prime}$$, and all $$m,m^{\prime}\in\left(0,1\right]$$ such that $$m<m^{\prime}$$,

(1) $$\mathbb{E}\left[s\mid\tau,m^{\prime},\sigma\right]>\mathbb{E}\left[s\mid\tau,m,\sigma\right]$$;

(2) $$\mathbb{E}\left[s\mid\tau^{\prime},m,\sigma_{B}\right]>\mathbb{E}\left[s\mid\tau,m,\sigma_{B}\right]$$;

(3) $$\mathbb{E}\left[s\mid\tau,m,\sigma_{G}\right]>\mathbb{E}\left[s\mid\tau^{\prime},m,\sigma_{G}\right]$$.

Proof. In Appendix A. ǁ

These statistical results ensure that credibility is always beneficial for Sender, whereas scrutiny is detrimental for Sender with signal $$\sigma_{B}$$ but beneficial for Sender with signal $$\sigma_{G}$$. Therefore, all our results carry over. Moreover, we can extend our analysis to allow for signal $$\sigma$$ to be continuously distributed on the interval $$\left[\underline{\sigma},\bar{\sigma}\right)$$, with normalization $$\sigma=\Pr\left(\theta=G\mid\sigma\right)$$.
In particular, in this case, there exists a partition equilibrium with $$\bar{\sigma}=\sigma_{0}>\sigma_{1}>\dots>\sigma_{T+1}=\underline{\sigma}$$ such that Sender $$\sigma\in\left[\sigma_{t},\sigma_{t-1}\right)$$ pulls the arm as soon as it arrives unless it arrives before time $$t\in\left\{ 1,\dots,T+1\right\}$$ (and pulls the arm at time $$t$$ if it arrives before $$t$$).

3.2.3. Type-dependent arrival of the arm

In many applications, it is more reasonable to assume that the distribution of the arrival of the arm differs for good and bad Sender. For example, fake scandals may be easy to fabricate, whereas real scandals need time to be discovered. We generalize the model to allow for different distributions of the arrival of the arm for good and bad Sender. In particular, the arm arrives at a random time according to distributions $$F_{G}=F$$ for good Sender and $$F_{B}$$ for bad Sender. The proof of Proposition 1 (in Appendix B) explicitly allows for the arm to arrive (weakly) earlier to bad Sender than to good Sender in the first-order stochastic dominance sense: $$F_{B}\left(t\right)\geq F_{G}\left(t\right)$$ for all $$t$$. This assumption is clearly satisfied if bad Sender has the arm from the outset or if bad and good Sender receive the arm at the same time. More generally, Proposition 1 continues to hold verbatim unless the arm arrives sufficiently later to bad Sender than to good Sender such that $$F_{B}\left(t\right)<P_{B}\left(t\right)$$ for some $$t$$, where $$P_{B}\left(t\right)$$ is given by (4). But even then, Corollary 1 still holds. That is, bad Sender pulls the arm strictly later than good Sender. Yet, bad Sender may do so for the simple mechanical (rather than strategic) reason that the arm arrives to her later than to good Sender.21

3.2.4. Stochastic deadline

In the benchmark model, we assume that the deadline $$T$$ is fixed and common knowledge. In some applications, the deadline $$T$$ may be stochastic.
In particular, suppose that $$T$$ is a random variable distributed on $$\left\{ 1,\dots,\bar{T}\right\}$$ where time runs from $$1$$ to $$\bar{T}+1$$. Now the process $$L$$ has $$T$$ as a random variable rather than a constant. For this process, we can define the ex-ante distribution $$H$$ of posteriors at $$T$$, where $$H$$ depends only on pulling time $$\tau$$ and interim belief $$m$$. Notice that Assumption 1 still holds for this ex-ante distribution of posteriors for any $$\tau,\tau^{\prime}\in\left\{ 1,\dots,\bar{T}+1\right\}$$. Therefore, from the ex-ante perspective, Sender’s problem is identical to the problem with a deterministic deadline and all results carry over.

4. Poisson Model

To get more precise predictions about the strategic timing of information release, we now assume that the arrival of the arm and Receiver’s learning follow Poisson processes. In this Poisson model, time is continuous $$t\in\left[0,T\right]$$.22 The arm arrives to Sender at Poisson rate $$\alpha$$, so that $$F\left(t\right)=1-e^{-\alpha t}$$. Once Sender pulls the arm, a breakdown occurs at Poisson rate $$\lambda$$ if Sender is bad, but never occurs if Sender is good, so that $$H\left(.\mid\tau,m\right)$$ puts probability $$\left(1-m\right)\left(1-e^{-\lambda\left(T-\tau\right)}\right)$$ on $$s=0$$ and the complementary probability on   $s=\frac{m}{m+\left(1-m\right)e^{-\lambda\left(T-\tau\right)}}.$

Returning to our main application, the Poisson model assumes that scandals can be conclusively debunked, but cannot be proven real. It also assumes that the opposition receives the scandalous material against the president at a constant rate, independent of whether it is real or fake. As discussed in Section 3.2.3, the results would not change if real documents take more time to be discovered than fake documents take to be fabricated. Our benchmark model does not completely nest the Poisson model.
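As a quick sanity check on this learning process, the following self-contained Python sketch (the parameter values are illustrative assumptions, not calibrated to anything in the article) verifies that the no-breakdown posterior is the Bayes update of the interim belief $$m$$, that surviving scrutiny lifts the belief above $$m$$, and that the posterior is a martingale: $$\mathbb{E}\left[s\mid\tau,m\right]=m$$.

```python
import math

def breakdown_prob(m, lam, T, tau):
    """Probability of a breakdown before the deadline T, given the arm is
    pulled at tau and Receiver's interim belief is m. Only bad Sender
    (probability 1 - m) generates breakdowns, at Poisson rate lam."""
    return (1 - m) * (1 - math.exp(-lam * (T - tau)))

def posterior_no_breakdown(m, lam, T, tau):
    """Receiver's posterior belief at T if no breakdown has occurred."""
    return m / (m + (1 - m) * math.exp(-lam * (T - tau)))

# Illustrative values: interim belief, breakdown rate, deadline, pulling time.
m, lam, T, tau = 0.6, 2.0, 1.0, 0.25
q = breakdown_prob(m, lam, T, tau)
s = posterior_no_breakdown(m, lam, T, tau)

# Surviving scrutiny raises the belief above m (a breakdown sends it to 0).
assert s > m
# Martingale property: q * 0 + (1 - q) * s equals the interim belief m.
assert abs((1 - q) * s - m) < 1e-12
# Pulling earlier (longer exposure to scrutiny) makes survival more informative.
assert posterior_no_breakdown(m, lam, T, 0.1) > s
```

The last assertion is the engine of the credibility-scrutiny trade-off: an earlier pulling time leaves more time for a breakdown, so surviving until $$T$$ is better news.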
In fact, part (ii) of Assumption 1, that the learning process never fully reveals Sender’s type, fails in the Poisson model because a breakdown fully reveals that Sender is bad. Nevertheless, if only part (i) of Assumption 1 is satisfied, a version of Proposition 1 continues to hold, with the difference that bad Sender never pulls the arm before some time $$\bar{t}$$. Specifically, Proposition 1 holds for all $$\tau\geq\bar{t}$$, whereas $$\mu\left(\tau\right)=1$$ and $$P_{B}\left(\tau\right)=0$$ for all $$\tau<\bar{t}$$. Intuitively, even if Receiver believes that only good Sender pulls the arm before $$\bar{t}$$, bad Sender strictly prefers to pull the arm after $$\bar{t}$$ to reduce the risk that Receiver fully learns that Sender is bad. We can, therefore, explicitly characterize the divine equilibrium of the Poisson model. First, good Sender pulls the arm as soon as it arrives.23 Second, bad Sender is indifferent between pulling the arm at any time $$t\geq\bar{t}\geq0$$ and not pulling it at all. Third, bad Sender strictly prefers to delay pulling the arm if $$t<\bar{t}$$. In the divine equilibrium of the Poisson model, $$\mu\left(t\right)=1$$ for all $$t<\bar{t}$$, and equations (2) and (3) become   \begin{eqnarray*} \frac{\mu\left(t\right)e^{-\lambda\left(T-t\right)}}{\mu\left(t\right)+\left(1-\mu\left(t\right)\right)e^{-\lambda\left(T-t\right)}} & = & \mu\left(T\right)\text{ for all }t\ge\bar{t},\\ \int_{0}^{T}\alpha\frac{1-\mu\left(t\right)}{\mu\left(t\right)}e^{-\alpha t}dt+\frac{1-\mu\left(T\right)}{\mu\left(T\right)}e^{-\alpha T} & = & \frac{1-\pi}{\pi}. \end{eqnarray*} Adding the boundary condition $$\lim_{t\downarrow\bar{t}}\mu\left(t\right)=1$$ yields the explicit solution $$\mu\left(t\right)$$ and uniquely determines $$\bar{t}$$. Proposition 2. 
In the divine equilibrium, good Sender pulls the arm as soon as it arrives and Receiver’s interim belief that Sender is good given pulling time $$t$$ is:  $\mu\left(t\right)=\begin{cases} \frac{\mu\left(T\right)}{1-\mu\left(T\right)\left(e^{\lambda\left(T-t\right)}-1\right)} & {\it{if }}\,t\geq\bar{t};\\ 1 & {\it{otherwise,}} \end{cases}$ where $$\mu\left(T\right)$$ is Receiver’s posterior belief if the arm is never pulled and  $\bar{t}=\begin{cases} 0 & {\it{if }}\,\,\pi<\bar{\pi};\\ T-\frac{1}{\lambda}\ln\frac{1}{\mu\left(T\right)} & {\it{otherwise,}} \end{cases}$   \begin{eqnarray*} \mu\left(T\right) & = & \left\{ \begin{array}{l} \left[\frac{\alpha e^{\lambda T}+\lambda e^{-\alpha T}}{\alpha+\lambda}+\frac{1-\pi}{\pi}\right]^{-1}\,{\it{ if }}\,\,\pi<\bar{\pi};\\ \left[\frac{\left(\alpha+\lambda\right)\left(1-\pi\right)}{\lambda\pi}e^{\alpha T}+1\right]^{-\frac{\lambda}{\alpha+\lambda}}\,\it{ otherwise,} \end{array}\right.\\ \bar{\pi} & = & \left[1+\frac{\lambda}{\alpha+\lambda}\left(e^{\lambda T}-e^{-\alpha T}\right)\right]^{-1}. \end{eqnarray*}

The parameters of the model affect welfare directly and through Sender’s equilibrium behaviour. Proposition 3 says that, in the divine equilibrium, direct effects dominate. Specifically, a higher prior belief $$\pi$$ results in higher posterior beliefs, which increases both bad and good Sender’s welfare. Moreover, a higher breakdown rate $$\lambda$$ or a higher arrival rate $$\alpha$$ allows Receiver to learn more about Sender, which decreases (increases) bad (good) Sender’s welfare. Proposition 3 also derives comparative statics on Receiver’s welfare given by (1).24

Proposition 3. In the divine equilibrium,

(1) the expected payoff of bad Sender increases with $$\pi$$ but decreases with $$\lambda$$ and $$\alpha$$;

(2) the expected payoff of good Sender increases with $$\pi$$, $$\lambda$$, and $$\alpha$$;

(3) the expected payoff of Receiver decreases with $$\pi$$ but increases with $$\lambda$$ and $$\alpha$$.

Proof.
In Appendix C. ǁ

4.1. Static analysis

We now explore how the parameters of the model affect the probability that Sender releases information. The probability that bad Sender pulls the arm is   $$P_{B}\left(T\right)=1-\frac{\pi}{1-\pi}\frac{1-\mu\left(T\right)}{\mu\left(T\right)}e^{-\alpha T},\label{q}$$ (7) which follows from   $$\mu\left(T\right)=\frac{\pi e^{-\alpha T}}{\pi e^{-\alpha T}+\left(1-\pi\right)\left(1-P_{B}\left(T\right)\right)}.\label{eq:pi(T)}$$ (8)

Proposition 4 says that bad Sender pulls the arm with a higher probability if the prior belief $$\pi$$ is lower, if the breakdown rate $$\lambda$$ is lower, or if the arrival rate $$\alpha$$ is higher.

Proposition 4. In the divine equilibrium, the probability that bad Sender pulls the arm decreases with $$\pi$$ and $$\lambda$$ but increases with $$\alpha$$. Proof. In Appendix C. ǁ

Intuitively, if the prior belief $$\pi$$ is higher, bad Sender has more to lose in case of a breakdown. Similarly, if the breakdown rate $$\lambda$$ is higher, pulling the arm is more likely to reveal that Sender is bad. In both cases, bad Sender is more reluctant to pull the arm. In contrast, if the arrival rate $$\alpha$$ is higher, good Sender is more likely to pull the arm and Receiver will believe that Sender is bad with higher probability if she does not pull the arm. In this case, bad Sender is more willing to pull the arm.

The total probability that Sender pulls the arm is given by the weighted sum of the probabilities $$P_{B}\left(T\right)$$ and $$P_{G}\left(T\right)$$ that bad Sender and good Sender pull the arm:   $$P\left(T\right)=\pi P_{G}\left(T\right)+\left(1-\pi\right)P_{B}\left(T\right)=1-\frac{\pi e^{-\alpha T}}{\mu\left(T\right)}.\label{P}$$ (9)

A change in $$\lambda$$ affects $$P_{B}\left(T\right)$$, but not $$P_{G}\left(T\right)$$; a change in $$\alpha$$ affects both $$P_{B}\left(T\right)$$ and $$P_{G}\left(T\right)$$ in the same direction.
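The last equality in (9) can be verified by direct substitution: using $$P_{G}\left(T\right)=1-e^{-\alpha T}$$ and equation (7) for the total probability $$P\left(T\right)$$ that Sender pulls the arm,   \begin{eqnarray*} P\left(T\right) & = & \pi\left(1-e^{-\alpha T}\right)+\left(1-\pi\right)\left[1-\frac{\pi}{1-\pi}\frac{1-\mu\left(T\right)}{\mu\left(T\right)}e^{-\alpha T}\right]\\  & = & 1-\pi e^{-\alpha T}\left[1+\frac{1-\mu\left(T\right)}{\mu\left(T\right)}\right]\,=\,1-\frac{\pi e^{-\alpha T}}{\mu\left(T\right)}. \end{eqnarray*}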
Therefore, Sender pulls the arm with a higher total probability if the breakdown rate is lower or if the arrival rate is higher. The prior belief $$\pi$$ has a direct and an indirect effect on the total probability that Sender pulls the arm. On the one hand, holding $$P_{B}\left(T\right)$$ constant, $$P\left(T\right)$$ directly increases with $$\pi$$, because $$P_{G}\left(T\right)>P_{B}\left(T\right)$$. On the other hand, $$P\left(T\right)$$ indirectly decreases with $$\pi$$, because $$P_{B}\left(T\right)$$ decreases with $$\pi$$. Proposition 5 says that the indirect effect dominates the direct effect when $$\pi$$ is sufficiently low.

Proposition 5. In the divine equilibrium, the total probability that Sender pulls the arm decreases with $$\lambda$$, increases with $$\alpha$$, and is quasiconvex in $$\pi$$: decreases with $$\pi$$ if  $\pi<\frac{\alpha e^{\alpha T}}{\alpha e^{\alpha T}+\lambda\left(e^{\alpha T}-1\right)}\in\left(0,1\right)$ and increases with $$\pi$$ otherwise. Proof. In Appendix C. ǁ

The probabilities $$P_{G}\left(T\right)=1-e^{-\alpha T}$$ and $$P_{B}\left(T\right)$$ that good Sender and bad Sender pull the arm also determine Receiver’s posterior belief $$\mu\left(T\right)$$. By (8), $$\mu\left(T\right)$$ decreases with the breakdown rate $$\lambda$$, because $$P_{B}\left(T\right)$$ decreases with $$\lambda$$. Equation (8) also suggests that there are direct and indirect effects of the prior belief $$\pi$$ and the arrival rate $$\alpha$$ on $$\mu\left(T\right)$$. On the one hand, holding $$P_{B}\left(T\right)$$ constant, $$\mu\left(T\right)$$ directly increases with $$\pi$$ and decreases with $$\alpha$$. On the other hand, $$\mu\left(T\right)$$ indirectly decreases with $$\pi$$ and increases with $$\alpha$$, because $$P_{B}\left(T\right)$$ decreases with $$\pi$$ and increases with $$\alpha$$. Proposition 6 says that the direct effect always dominates the indirect effect in the Poisson model. Proposition 6.
In the divine equilibrium, Receiver’s posterior belief if the arm is never pulled increases with $$\pi$$ but decreases with $$\lambda$$ and $$\alpha$$. Proof. In Appendix C. ǁ

4.2. Dynamic analysis

The Poisson model also allows for a more detailed analysis of the strategic timing of information release. By Proposition 2, bad Sender begins to pull the arm at time $$\bar{t}$$. In the spirit of Proposition 4, bad Sender begins to pull the arm later if the prior belief $$\pi$$ is higher, if the breakdown rate $$\lambda$$ is higher, or if the arrival rate $$\alpha$$ is lower.

Proposition 7. In the divine equilibrium, $$\bar{t}$$ increases with $$\pi$$ and $$\lambda$$ but decreases with $$\alpha$$. Proof. In Appendix C. ǁ

At each time $$t$$ after $$\bar{t}$$, bad Sender pulls the arm with a strictly positive probability density $$p_{B}\left(t\right)$$ (Figure 2a). Proposition 8 says that $$p_{B}\left(t\right)$$ first increases and then decreases with time.

Figure 2. Pulling density and breakdown probability; $$\alpha=1$$, $$\lambda=2$$, $$\pi=.5$$, $$T=1$$. (a) dotted: $$p_{G}\left(t\right)$$; solid: $$p_{B}\left(t\right)$$; dashed: $$p\left(t\right)$$; (b) dotted: $$\lambda\left(T-t\right)$$; solid: $${p_{B}\left(t\right)}/{p_{G}\left(t\right)}$$; dashed: $$Q\left(t\right)$$.

Proposition 8. In the divine equilibrium, the probability density that bad Sender pulls the arm at time $$t$$ is quasiconcave: increases with $$t$$ if  $t<t_{b}\equiv T-\frac{1}{\lambda}\ln\left(\frac{\alpha}{\alpha+\lambda}\frac{1}{\mu\left(T\right)}\right)$ and decreases with $$t$$ otherwise. Proof. In Appendix C.
ǁ Intuitively, the dynamics of $$p_{B}\left(t\right)$$ are driven by a strategic and a mechanical force. Strategically, as in Corollary 1, bad Sender delays pulling the arm with respect to good Sender, so that the likelihood ratio $$p_{B}\left(t\right)/p_{G}\left(t\right)$$ increases with time, where $$p_{G}\left(t\right)=\alpha e^{-\alpha t}$$ is the probability density that good Sender pulls the arm at time $$t$$. Mechanically, $$p_{B}\left(t\right)$$ roughly follows the dynamics of $$p_{G}\left(t\right)$$. If the arrival rate $$\alpha$$ is sufficiently small, so that the density $$p_{G}\left(t\right)$$ barely changes over time, the strategic force dominates and the probability that bad Sender pulls the arm monotonically increases with time ($$t_{b}>T$$). Instead, if the arrival rate $$\alpha$$ is sufficiently large, so that $$p_{G}\left(t\right)$$ rapidly decreases over time, the mechanical force dominates and the probability that bad Sender pulls the arm monotonically decreases with time ($$t_{b}<T$$).

The total probability density $$p\left(t\right)$$ that Sender pulls the arm is a weighted sum of $$p_{G}\left(t\right)$$ and $$p_{B}\left(t\right)$$, so that $$p\left(t\right)=\pi p_{G}\left(t\right)+\left(1-\pi\right)p_{B}\left(t\right)$$ (Figure 2a). Therefore, until $$\bar{t}$$, $$p\left(t\right)=\pi p_{G}\left(t\right)$$, and thereafter, as in Proposition 8, $$p\left(t\right)$$ first increases and then decreases with time.

Proposition 9. In the divine equilibrium, the total probability density that Sender pulls the arm at time $$t$$ decreases with $$t$$ from $$0$$ to $$\bar{t}$$ and is quasiconcave in $$t$$ on the interval $$\left[\bar{t},T\right]$$: increases with $$t$$ if  $\bar{t}<t<t_{s}\equiv T-\frac{1}{\lambda}\ln\left(\frac{\alpha}{\alpha+\lambda}\frac{1+\mu\left(T\right)}{\mu\left(T\right)}\right)$ and decreases with $$t$$ if $$t>t_{s}$$. Proof. In Appendix C.
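These dynamics can be reproduced numerically. The sketch below (plain Python, standard library only) evaluates the closed forms of Proposition 2 under the parameters of Figure 2, recovers $$p_{B}\left(t\right)$$ from $$p_{G}\left(t\right)$$ and $$\mu\left(t\right)$$ via Bayes’ rule ($$\mu/\left(1-\mu\right)=\left(\pi/\left(1-\pi\right)\right)p_{G}/p_{B}$$ at the pulling time), and checks the boundary condition $$\mu\left(\bar{t}\right)=1$$, the increasing likelihood ratio $$p_{B}\left(t\right)/p_{G}\left(t\right)$$ from Corollary 1, and that $$p_{B}$$ integrates to $$P_{B}\left(T\right)$$ in equation (7).

```python
import math

# Parameters of Figure 2: arrival rate, breakdown rate, prior belief, deadline.
alpha, lam, pi, T = 1.0, 2.0, 0.5, 1.0

# Threshold prior pi_bar and closed forms from Proposition 2 (here pi >= pi_bar,
# so the "otherwise" branches apply).
pi_bar = 1.0 / (1.0 + lam / (alpha + lam) * (math.exp(lam * T) - math.exp(-alpha * T)))
assert pi >= pi_bar
mu_T = ((alpha + lam) * (1.0 - pi) / (lam * pi) * math.exp(alpha * T) + 1.0) ** (-lam / (alpha + lam))
t_bar = T - math.log(1.0 / mu_T) / lam

def mu(t):
    """Receiver's interim belief at pulling time t (Proposition 2)."""
    if t < t_bar:
        return 1.0
    return mu_T / (1.0 - mu_T * (math.exp(lam * (T - t)) - 1.0))

assert abs(mu(t_bar) - 1.0) < 1e-9  # boundary condition: full credibility at t_bar

def p_G(t):
    return alpha * math.exp(-alpha * t)  # good Sender pulls as soon as the arm arrives

def p_B(t):
    # Bayes' rule at the pulling time: mu/(1-mu) = (pi/(1-pi)) * p_G/p_B.
    return pi / (1.0 - pi) * (1.0 - mu(t)) / mu(t) * p_G(t)

# Corollary 1: the likelihood ratio p_B/p_G increases over time on (t_bar, T].
grid = [t_bar + (T - t_bar) * k / 100 for k in range(1, 101)]
ratios = [p_B(t) / p_G(t) for t in grid]
assert all(a < b for a, b in zip(ratios, ratios[1:]))

# p_B integrates (midpoint rule) to P_B(T) from equation (7).
P_B_T = 1.0 - pi / (1.0 - pi) * (1.0 - mu_T) / mu_T * math.exp(-alpha * T)
n = 100_000
dt = (T - t_bar) / n
assert abs(sum(p_B(t_bar + (k + 0.5) * dt) for k in range(n)) * dt - P_B_T) < 1e-3
```

The integration check works because bad Sender pulls the arm only on $$\left[\bar{t},T\right]$$, so the density $$p_{B}$$ accumulates exactly the total probability $$P_{B}\left(T\right)$$.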
ǁ Let the breakdown probability $$Q\left(t\right)$$ be the probability that a breakdown occurs before the deadline given that the arm is pulled at time $$t$$ (Figure 2b). Proposition 10 says that, as time passes, the breakdown probability first increases and then decreases.25

Proposition 10. In the divine equilibrium, the breakdown probability is quasiconcave: increases with $$t$$ if  $t<t_{b}\equiv T-\frac{1}{\lambda}\ln\left(\frac{1+\mu\left(T\right)}{2\mu\left(T\right)}\right)<T$ and decreases with $$t$$ otherwise. Proof. In Appendix C. ǁ

Intuitively, the breakdown probability increases with the amount of scrutiny and with the likelihood ratio $$p_{B}\left(t\right)/p_{G}\left(t\right)$$. Obviously, Sender is exposed to more scrutiny if she pulls the arm earlier. But the likelihood ratio $$p_{B}\left(t\right)/p_{G}\left(t\right)$$ is lower earlier, because bad Sender strategically delays pulling the arm. Proposition 10 says that this strategic effect dominates for earlier times.

5. Applications

5.1. U.S. presidential scandals

Returning to our presidential scandals example, the main prediction of our model is that fake scandals are released later than real scandals. We explore this prediction using Nyhan’s (2015) data on U.S. presidential scandals from 1977 to 2008. For each week, the data report whether a new scandal involving the current U.S. president was first mentioned in the Washington Post. Although scandals might have first appeared on other outlets, we agree with Nyhan that the Washington Post is likely to have mentioned such scandals immediately thereafter. As our model concerns scandals involving the incumbent in view of his possible reelection, we focus on all the presidential elections in which the incumbent was a candidate. Therefore, we consider only the first term of each president from 1977 to 2008, beginning on the first week of January after the president’s election.26,27 In all cases, the election was held on the 201st week after this date.
We construct the variable weeks to election as the difference between the election week at the end of the term and the release week of the scandal. For each scandal,28 we locate the original Washington Post article as well as other contemporary articles on The New York Times and the Los Angeles Times. We then search for subsequent articles on the same scandal in following years until 2016, as well as court decisions and scholarly books when possible. We check whether factual evidence of wrongdoing or otherwise reputationally damaging conduct was conclusively verified at a later time. If so, we check whether the evidence involved the president directly or close family members or political collaborators chosen or appointed by the president or his administration. We code these scandals as real. For the remaining scandals, we check whether a case for libel was successful or all political actors linked to the scandal were cleared of wrongdoings. We code these scandals as fake. The only scandal we were not able to code by this procedure is the “Banca Nazionale del Lavoro” scandal (also known as “Iraq-gate”). We code this scandal as real, but we check in Online Appendix A that all our qualitative results are robust to coding it as fake. In Online Appendix A, we report the complete list of scandals and a summary motivation of our coding decisions. Figure 3 shows the empirical distributions of the first mention of real and fake presidential scandals in the Washington Post as a function of weeks to election. Although we do not observe scandals released after the election ($$t=T+1$$ in our model) and cannot pinpoint the date at which the campaign begins ($$t=1$$ in our model), Corollary 1 implies that fake scandals are released later than real scandals conditional on any given time interval. The left panel covers the whole presidential term; the right panel focuses on the election campaign period only, which we identify with the last 60 weeks before the election. 
Both figures suggest that fake scandals are released later than real scandals. Because of the small sample size (only 15 scandals), formal tests have low power. Nevertheless, using the Dardanoni and Forcina (1998) test for the likelihood ratio order (which implies first-order stochastic dominance), we almost reject the hypothesis that the two distributions are equal in favour of the alternative hypothesis that fake scandals are released later ($$p$$-value: $$0.114$$); we cannot reject the hypothesis that fake scandals are released later in favour of the unrestricted hypothesis at all standard statistical significance levels ($$p$$-value: $$0.834$$).29

Figure 3. US presidential scandals and weeks to election. Distribution of real and fake scandals. (a) whole term; (b) last 60 weeks only.

Our Poisson model offers a novel perspective over the October surprise concentration of scandals towards the end of the presidential election campaign (Figure 1). In equilibrium, real scandals are released as they are discovered by the media. Unless real scandals are more likely to be discovered towards the end of the first term of a president, we should not expect their release to be concentrated towards the end of the campaign (see $$p_{G}\left(t\right)$$ in Figure 2a). Instead, fake scandals are strategically delayed, and so they should be concentrated towards the end of the first term of the president and just before the election (see $$p_{B}\left(t\right)$$ in Figure 2a). In other words, our model predicts that the October surprise effect is driven by fake scandals. In contrast, were the October surprise effect driven by the desire to release scandals when they are most salient, then the timing of release of real and fake scandals would be similar.
Figure 4 is a replica of Figure 1, but with scandals coded as real and fake. Fake scandals are concentrated close to the election, with a majority of them released in the last quarter before the election. In contrast, real scandals appear to be scattered across the entire presidential term.

Figure 4. Distribution of real and fake scandals.

Our Poisson model also predicts how different parameters affect the release of a U.S. presidential scandal. We now illustrate how Nyhan’s (2015) empirical findings may be interpreted using our model. Nyhan (2015) finds that scandals are more likely to appear when the president’s opposition approval rate is low. In our model, the approval rate is most naturally captured by the prior belief $$1-\pi$$ (the belief that the president is fit to run the country). In our Poisson model, a higher $$\pi$$ has a direct and an indirect effect on the probability of release of a scandal. On the one hand, a higher $$\pi$$ means that the president is more likely to be involved in a real scandal, thus directly increasing the probability that such a scandal is released. On the other hand, the opposition optimally resorts to fake scandals more when the president is so popular that only a scandal could prevent the president’s reelection. Therefore, a higher $$\pi$$ reduces the incentive for the opposition to release fake scandals, indirectly reducing the probability that a scandal is released. We can then interpret Nyhan’s finding as suggesting that the direct effect on average dominates the indirect effect.
But the president’s opposition approval rate also measures opposition voters’ hostility towards the president, which might be captured by the rate $$\lambda$$ at which voters learn that a scandal is fake.30 Indeed, Nyhan conjectures that when opposition voters are more hostile to the president, they are “supportive of scandal allegations against the president and less sensitive to the evidentiary basis for these claims [and] opposition elites will be more likely to pursue scandal allegations” (p. 6). Consistently, in our Poisson model, when voters take more time to tell real and fake scandals apart, the opposition optimally resorts to fake scandals.

Nyhan (2015) also finds that fewer scandals involving the president are released when the news agenda is more congested. Such media congestion may have the following two effects. First, when the news agenda is congested, the opposition media has less time to devote to investigating the president. In our Poisson model, this is captured by a lower arrival rate $$\alpha$$, which in turn reduces the probability that a scandal is released. Second, when the news agenda is congested, public scrutiny of the scandal is slower as the attention of media, politicians, and voters is captured by other events. In our Poisson model, this is captured by a lower breakdown rate $$\lambda$$, which in turn increases the probability that a scandal is released. We can interpret Nyhan’s finding as suggesting that the media congestion effect through the arrival rate $$\alpha$$ dominates the effect through the breakdown rate $$\lambda$$.31

5.2. Initial public offerings

We now apply our model to the timing of IPOs. Sender is a firm that needs liquidity in a particular time frame, and this time frame is private information of the firm. The need for liquidity could arise from the desire to grow the firm, expand into new products or markets, or because of operating expenses outstripping revenues.
It could also arise because of investors having so-called “drag-along rights”, where they can force founders and other shareholders to vote in favour of a liquidity event. When announcing the IPO, firms have private information regarding their prospective long-run performance. Good firms expect their business to out-perform the market’s prior expectation; bad firms expect their business to under-perform the market’s prior expectation. After a firm announces an IPO, the market scrutinizes the firm’s prospectus, and learns about the firm’s prospective performance. The initial trade closing price of the stock is determined by the market’s posterior belief at the initial trade date. Therefore, after the initial trade date, as the firms’ potential is gradually revealed to the market, good firms’ stocks out-perform bad firms’ stocks. Since the true time frame is private information, the firm can “pretend” to need liquidity faster than it actually does, and it has significant control over the time gap between the announcement of the IPO and the initial trade date. A shorter time gap decreases the amount of scrutiny the firm undergoes before going public, but also reduces credibility. Therefore, our model predicts that bad firms should choose a shorter time gap than good firms.32 We explore this prediction using data on U.S. IPOs from 1983 to 2016. For each IPO, we record the time gap and calculate the cumulative return of the stock, starting from the initial trade date. We measure the stock’s performance as its return relative to the market return over the same period. Following Loughran and Ritter (1995), we evaluate IPOs’ long-run performance $$y\in\left\{ 3,5\right\}$$ years after the initial trade date. For each value of $$y$$, we code as good (bad) those IPOs that performed above (below) market.33 Figure 5 shows the empirical distributions of time gap for good and bad IPOs evaluated at 3 and 5 years after the initial trade date. 
Both figures suggest that bad firms choose a shorter time gap in the first-order stochastic dominance sense, with the effect more clearly visible after 5 years. This pattern is consistent with our idea that firms’ private information is only gradually (and slowly) revealed to the market once the period of intense scrutiny of the IPO ends.

Figure 5. US IPOs and time gap. Distributions for good and bad IPOs. (a) 3 years; (b) 5 years.

Our main prediction in Corollary 1 is that the distribution of time gap for good IPOs dominates the distribution of time gap for bad IPOs in the likelihood ratio order.34 We evaluate this prediction using an approach developed by Dardanoni and Forcina (1998). This approach tests (1) the hypothesis $$H_{0}$$ that the distributions are identical against the alternative $$H_{1}$$ that the distributions are ordered in the likelihood ratio order; as well as (2) the hypothesis $$H_{1}$$ against an unrestricted alternative $$H_{2}$$. The hypothesis of interest $$H_{1}$$ is accepted if the first test rejects $$H_{0}$$ and the second test fails to reject $$H_{1}$$. Following Roosen and Hennessy (2004), we partition the variable time gap into $$k$$ intervals that are equiprobable according to the empirical distribution of time gap. We report in Table 1 the $$p$$-values of the two statistics for the case of $$k=7$$.

Table 1. Dardanoni and Forcina test for likelihood ratio order ($$p$$-values)

                      3 years   5 years
  H0 versus H1        0.001     0.000
  H1 versus H2        0.000     0.361
  Obs.                529       403

For both 3 and 5 years performance, we reject the hypothesis $$H_{0}$$ in favour of $$H_{1}$$ at the 1% significance level. Furthermore, for 5 years performance, we cannot reject the hypothesis $$H_{1}$$ in favour of $$H_{2}$$ at all standard significance levels. In Online Appendix B we give some further details about our data and the test, and we explore how the results of the test may change under alternative specifications.

6. Concluding Remarks

We have analysed a model in which the strategic timing of information release is driven by the trade-off between credibility and scrutiny. The analysis yields novel predictions about the dynamics of information release. We also offered supporting evidence for these predictions using data on the timing of U.S. presidential scandals and the announcement of IPOs. Our model can also be used to deliver normative implications for the design of a variety of institutions. In the context of election campaigns, our results could be employed to evaluate laws that limit the period in which candidates can announce new policies in their platforms or media can cover candidates. For example, more than a third of the world’s countries mandate a blackout period before elections: a ban on political campaigns or, in some cases, on any mention of a candidate’s name, for one or more days immediately preceding elections.35 The framework we have developed has further potential applications. For instance, the relationship between a firm’s management team and its board of directors often exhibits the core features of our model: management has private information and potentially different preferences than the board; the board’s view about a project or investment determines whether it is undertaken; and management can provide more or less time to the board in evaluating the project or investment.
The comparative statics of our model may speak to how this aspect of the management–board relationship may vary across industries and countries. Similarly, in various legal settings an interested party with private information may come forward sooner or later, notwithstanding an essentially fixed deadline for the legal decision-maker (due to institutional or resource constraints). A natural example is witnesses in a criminal investigation, but the same issues often arise in civil matters or even parliamentary inquiries. In each of these applications, the credibility-scrutiny trade-off plays an important role, and we hope our model, characterization of equilibrium, and comparative statics will serve as a useful framework for studying them in the future. APPENDIX A. Statistical Properties Proof of Lemma 1. Follows from Lemma 1$$'$$. ∥ Proof of Lemma 1$$'$$. Part 1. By Blackwell (1953), Assumption 1 with $$\tau^{\prime}=T+1$$ implies that pulling the arm at $$\tau$$ is the same as releasing an informative signal $$y$$. By Bayes’s rule, posterior $$s$$ is given by:   $s=\frac{mq\left(y\mid G\right)}{mq\left(y\mid G\right)+\left(1-m\right)q\left(y\mid B\right)},$ where $$q\left(y\mid\theta\right)$$ is the density of $$y$$ given $$\theta$$. (If $$L$$ is discrete, then $$q\left(y\mid\theta\right)$$ is the discrete density of $$y$$ given $$\theta$$.) Therefore,   $$\frac{q\left(y\mid G\right)}{q\left(y\mid B\right)}=\frac{1-m}{m}\frac{s}{1-s}.$$ (A10) Writing (A10) for interim beliefs $$m$$ and $$m^{\prime}$$, we obtain the following relation for corresponding posterior beliefs $$s$$ and $$s^{\prime}$$:   $\frac{1-m^{\prime}}{m^{\prime}}\frac{s^{\prime}}{1-s^{\prime}}=\frac{1-m}{m}\frac{s}{1-s},$ which implies that   $$s^{\prime}=\frac{\frac{m^{\prime}s}{m}}{\frac{m^{\prime}s}{m}+\frac{\left(1-m^{\prime}\right)\left(1-s\right)}{1-m}}.$$ (A11) Therefore, $$s^{\prime}>s$$ for $$m^{\prime}>m$$; so part 1 follows. Part 2. 
By Blackwell (1953), Assumption 1 implies that pulling the arm at $$\tau$$ is the same as pulling the arm at $$\tau^{\prime}$$ and then releasing an additional informative signal $$y$$ with conditional density $$q\left(y\mid\theta\right)$$. Part 2 holds because for any strictly increasing concave $$v_{B}$$, we have   \begin{eqnarray*} \mathbb{E}\left[v_{B}\left(s\right)\mid\tau,m,B\right] & = & \mathbb{E}\left[v_{B}\left(\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\right)\mid\tau^{\prime},m,B\right]\\ & = & \mathbb{E}\left[\mathbb{E}\left[v_{B}\left(\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\right)\mid\tau^{\prime},s,B\right]\mid\tau^{\prime},m,B\right]\\ & \leq & \mathbb{E}\left[v_{B}\left(\mathbb{E}\left[\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,B\right]\right)\mid\tau^{\prime},m,B\right]\\ & < & \mathbb{E}\left[v_{B}\left(\frac{s\mathbb{E} \left[\displaystyle\frac{q\left(y\mid G\right)}{q\left(y\mid B\right)}\mid\tau^{\prime},s,B\right]}{s\mathbb{E}\left[\displaystyle\frac{q\left(y\mid G\right)}{q\left(y\mid B\right)}\mid\tau^{\prime},s,B\right]+\left(1-s\right)}\right)\mid\tau^{\prime},m,B\right]\\ & = & \mathbb{E}\left[v_{B}\left(s\right)\mid\tau^{\prime},m,B\right], \end{eqnarray*} where the first line holds by Bayes’s rule, the second by the law of iterated expectations, the third by Jensen’s inequality applied to concave $$v_{B}$$, the fourth by strict monotonicity of $$v_{B}$$ and Jensen’s inequality applied to function $$sz/\left(sz+1-s\right)$$ which is strictly concave in $$z$$, and the last by definition of expectations. Part 3. 
Analogously to Part 2, Part 3 holds because for any strictly increasing convex $$v_{G}$$, we have   \begin{eqnarray*} \mathbb{E}\left[v_{G}\left(s\right)\mid\tau,m,G\right] & = & \mathbb{E}\left[v_{G}\left(\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\right)\mid\tau^{\prime},m,G\right]\\ & = & \mathbb{E}\left[\mathbb{E}\left[v_{G}\left(\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\right)\mid\tau^{\prime},s,G\right]\mid\tau^{\prime},m,G\right]\\ & \geq & \mathbb{E}\left[v_{G}\left(\mathbb{E}\left[\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,G\right]\right)\mid\tau^{\prime},m,G\right]\\ & > & \mathbb{E}\left[v_{G}\left(\frac{s}{s+\left(1-s\right)\mathbb{E}\left[\displaystyle\frac{q\left(y\mid B\right)}{q\left(y\mid G\right)}\mid\tau^{\prime},s,G\right]}\right)\mid\tau^{\prime},m,G\right]\\ & = & \mathbb{E}\left[v_{G}\left(s\right)\mid\tau^{\prime},m,G\right]. \end{eqnarray*} ∥ Proof of Lemma 1$$''$$. The proof of part 1 is the same as in Lemma 1$$'$$. As noted before, pulling the arm at $$\tau$$ is the same as pulling the arm at $$\tau^{\prime}$$ and then releasing an additional informative signal $$y$$ with conditional density $$q\left(y\mid\theta\right)$$. Let $$s_{\sigma}$$ be the probability that Sender is good given that Receiver’s posterior is $$s$$ and Sender’s signal is $$\sigma$$. 
By (A11),   $s_{\sigma}=\frac{\frac{\sigma s}{\pi}}{\frac{\sigma s}{\pi}+\frac{\left(1-\sigma\right)\left(1-s\right)}{1-\pi}}.$ We have,   \begin{eqnarray*} \mathbb{E}\left[s\mid\tau,m,\sigma\right] & = & \mathbb{E}\left[\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},m,\sigma\right]\\ & = & \mathbb{E}\left[\mathbb{E}\left[\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,\sigma\right]\mid\tau^{\prime},m,\sigma\right]\\ & = & \mathbb{E}\left[\left.\begin{array}{c} s_{\sigma}\mathbb{E}\left[\displaystyle\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,G\right]+\\ +\left(1-s_{\sigma}\right)\mathbb{E}\left[\displaystyle\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,B\right] \end{array}\right|\tau^{\prime},m,\sigma\right]\\ & = & \mathbb{E}\left[\left.\begin{array}{c} s_{\sigma}\mathbb{E}\left[\displaystyle\frac{sq\left(y\mid G\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,G\right]+\\ +\left(1-s_{\sigma}\right)\mathbb{E}\left[\displaystyle\frac{sq\left(y\mid B\right)}{sq\left(y\mid G\right)+\left(1-s\right)q\left(y\mid B\right)}\mid\tau^{\prime},s,G\right] \end{array}\right|\tau^{\prime},m,\sigma\right]\\ & = & \mathbb{E}\left[s\mathbb{E}\left[\displaystyle\frac{s_{\sigma}+\left(1-s_{\sigma}\right)\displaystyle\frac{q\left(y\mid B\right)} {q\left(y\mid G\right)}}{s+\left(1-s\right)\displaystyle\frac{q\left(y\mid B\right)}{q\left(y\mid G\right)}}\mid\tau^{\prime},s,G\right]\mid\tau^{\prime},m,\sigma\right]\\ & \gtrless & \mathbb{E}\left[s\mid\tau^{\prime},m,\sigma\right]\mbox{ whenever }s_{\sigma}\gtrless s, \end{eqnarray*} where the last line holds by Jensen’s inequality applied to function $$\left(s_{\sigma}+\left(1-s_{\sigma}\right)z\right)/\left(s+\left(1-s\right)z\right)$$, which is
strictly convex (concave) in $$z$$ whenever $$s_{\sigma}>s$$ ($$s_{\sigma}<s$$). Because $$\sigma_{G}>\pi>\sigma_{B}$$, we have $$s_{\sigma_{G}}>s>s_{\sigma_{B}}$$, so parts 2 and 3 follow. ∥ B. Benchmark Model To facilitate our discussion in Section 3.2, we prove our results under more general assumptions than in our benchmark model. First, we assume that $$v_{G}\left(s\right)$$ is continuous, strictly increasing, and (weakly) convex, and $$v_{B}\left(s\right)$$ is continuous, strictly increasing, and (weakly) concave. Second, we assume that the arm arrives at a random time according to distributions $$F_{G}=F$$ for good Sender and $$F_{B}$$ for bad Sender, where $$F_{B}\left(t\right)\geq F_{G}\left(t\right)$$ for all $$t$$. Proof of Lemma 2. Part 1. Suppose, on the contrary, that $$\mu\left(\tau\right)\leq\mu\left(\tau^{\prime}\right)$$. Then   \begin{eqnarray*} \int v_{B}\left(s\right)dH_{B}\left(s|\tau,\mu\left(\tau\right)\right) & < & \int v_{B}\left(s\right)dH_{B}\left(s|\tau^{\prime},\mu\left(\tau\right)\right)\\ & \leq & \int v_{B}\left(s\right)dH_{B}\left(s|\tau^{\prime},\mu\left(\tau^{\prime}\right)\right), \end{eqnarray*} where the first inequality holds by part 2 of Lemma 1$$'$$ and the second by part 1 of Lemma 1$$'$$. Therefore, bad Sender strictly prefers to pull the arm at $$\tau^{\prime}$$ than at $$\tau$$. A contradiction. Good Sender strictly prefers to pull the arm at $$\tau$$ because   \begin{eqnarray*} \int v_{G}\left(s\right)dH_{G}\left(s|\tau,\mu\left(\tau\right)\right) & > & \int v_{G}\left(s\right)dH_{G}\left(s|\tau^{\prime},\mu\left(\tau\right)\right)\\ & > & \int v_{G}\left(s\right)dH_{G}\left(s|\tau^{\prime},\mu\left(\tau^{\prime}\right)\right), \end{eqnarray*} where the first inequality holds by part 3 of Lemma 1$$'$$ and the second by $$\mu\left(\tau\right)>\mu\left(\tau^{\prime}\right)$$ and part 1 of Lemma 1$$'$$. Part 2. 
By part 1 of this lemma applied to $$\tau$$ and $$\tau^{\prime}=T+1$$, it suffices to show that $$\mu\left(\tau\right),\mu\left(T+1\right)\in\left(0,1\right)$$ for all $$\tau$$ in the support of $$P_{B}$$. First, by Bayes’s rule, $$\tau$$ being in the support of $$P_{B}$$ implies $$\mu\left(\tau\right)<1$$. Second, by Bayes’s rule, $$F_{G}\left(T\right)<1$$ implies $$\mu\left(T+1\right)>0$$. Third, $$\mu\left(T+1\right)>0$$ implies $$\mu\left(\tau\right)>0$$, otherwise $$\tau$$ could not be in the support of $$P_{B}$$ because $$v_{B}\left(\mu\left(T+1\right)\right)>v_{B}\left(0\right)=\mathbb{E}\left[v_{B}\left(s\right)\mid\tau,0,B\right]$$. Finally, $$\mu\left(\tau\right)<1$$ implies $$\mu\left(T+1\right)<1$$, otherwise $$\tau$$ could not be in the support of $$P_{B}$$ because $$v_{B}\left(1\right)>\mathbb{E}\left[v_{B}\left(s\right)\mid\tau,\mu\left(\tau\right),B\right]$$. ∥ Proof of Lemma 3. By part 2 of Lemma 2, each $$t^{\prime}$$ in the support of $$P_{B}$$ is also in the support of $$P_{G}$$. We show that each $$t^{\prime}$$ in the support of $$P_{G}$$ is also in the support of $$P_{B}$$ by contradiction. Suppose that there exists $$t^{\prime}$$ in the support of $$P_{G}$$ but not in the support of $$P_{B}$$. Then, by Bayes’s rule $$\mu\left(t^{\prime}\right)=1$$; so bad Sender who receives the arm at $$t\leq t^{\prime}$$ gets the highest possible equilibrium payoff $$v_{B}\left(1\right)$$, because she can pull the arm at time $$t^{\prime}$$ and get payoff $$v_{B}\left(1\right)$$ (recall that, for all $$t$$, the support of $$H\left(.|t,\pi\right)$$ does not contain $$s=0$$ by part (ii) of Assumption (1)). Because bad Sender receives the arm at or before $$t^{\prime}$$ with a positive probability (recall that, for all $$t$$, $$F_{B}\left(t\right)\geq F_{G}\left(t\right)>0$$ by assumption), there exists time $$\tau$$ at which bad Sender pulls the arm with a positive probability and gets payoff $$v_{B}\left(1\right)$$. 
But then $$\mu\left(\tau\right)=1$$, contradicting that bad Sender pulls the arm at $$\tau$$ with a positive probability. It remains to show that $$P_{B}\left(\tau\right)<P_{G}\left(\tau\right)$$ whenever $$P_{G}\left(\tau\right)>0$$. Suppose, on the contrary, that there exists $$\tau$$ such that $$P_{G}\left(\tau\right)>0$$ and $$P_{B}\left(\tau\right)\geq P_{G}\left(\tau\right)$$. Because $$P_{\theta}\left(\tau\right)=\sum_{t=1}^{\tau}\left(P_{\theta}\left(t\right)-P_{\theta}\left(t-1\right)\right)$$, there exists $$\tau^{\prime}\leq\tau$$ in the support of $$P_{B}$$ such that $$P_{B}\left(\tau^{\prime}\right)-P_{B}\left(\tau^{\prime}-1\right)\geq P_{G}\left(\tau^{\prime}\right)-P_{G}\left(\tau^{\prime}-1\right)$$. Similarly, because $$1-P_{\theta}\left(\tau\right)=\sum_{t=\tau+1}^{T+1}\left(P_{\theta}\left(t\right)-P_{\theta}\left(t-1\right)\right)$$ and $$1-P_{G}\left(\tau\right)>0$$ (recall that $$P_{G}\left(T\right)\leq F_{G}\left(T\right)<1$$), there exists $$\tau^{\prime\prime}>\tau$$ in the support of $$P_{G}$$ such that $$P_{G}\left(\tau^{\prime\prime}\right)-P_{G}\left(\tau^{\prime\prime}-1\right)\geq P_{B}\left(\tau^{\prime\prime}\right)-P_{B}\left(\tau^{\prime\prime}-1\right)$$.
By Bayes’s rule,   \begin{eqnarray*} \mu\left(\tau^{\prime}\right) & = & \frac{\pi\left(P_{G}\left(\tau^{\prime}\right)-P_{G}\left(\tau^{\prime}-1\right)\right)}{\pi\left(P_{G}\left(\tau^{\prime}\right)-P_{G}\left(\tau^{\prime}-1\right)\right)+\left(1-\pi\right)\left(P_{B}\left(\tau^{\prime}\right)-P_{B}\left(\tau^{\prime}-1\right)\right)}\leq\pi\\ & \leq & \frac{\pi\left(P_{G}\left(\tau^{\prime\prime}\right)-P_{G}\left(\tau^{\prime\prime}-1\right)\right)}{\pi\left(P_{G}\left(\tau^{\prime\prime}\right)-P_{G}\left(\tau^{\prime\prime}-1\right)\right)+\left(1-\pi\right)\left(P_{B}\left(\tau^{\prime\prime}\right)-P_{B}\left(\tau^{\prime\prime}-1\right)\right)}=\mu\left(\tau^{\prime\prime}\right)\text{.} \end{eqnarray*} Therefore, by Lemma 2, bad Sender strictly prefers to pull the arm at $$\tau^{\prime\prime}$$ than at $$\tau^{\prime}$$, contradicting that $$\tau^{\prime}$$ is in the support of $$P_{B}$$. ∥ Proof of Lemma 4. By Lemma 3, $$P_{G}$$ and $$P_{B}$$ have the same supports and therefore $$\mu\left(\tau\right)\in\left(0,1\right)$$. Let the support of $$P_{G}$$ be $$\left\{ \tau_{1},...,\tau_{n}\right\}$$. Notice that $$\tau_{n}=T+1$$ because $$P_{G}\left(T\right)\leq F_{G}\left(T\right)<1$$. Since $$\tau_{n-1}$$ is in the support of $$P_{B}$$ and   $P_{B}\left(\tau_{n-1}\right)<P_{G}\left(\tau_{n-1}\right)=F_{G}\left(\tau_{n-1}\right)\leq F_{B}\left(\tau_{n-1}\right)\text{,}$ where the first inequality holds by Lemma 3, the equality by part 2 of Lemma 2, and the last inequality by assumption $$F_{B}\left(t\right)\geq F_{G}\left(t\right)$$. Therefore, bad Sender who receives the arm at $$\tau_{n-1}$$ must be indifferent between pulling the arm at $$\tau_{n-1}$$ or at $$\tau_{n}$$. Analogously, bad Sender who receives the arm at $$\tau_{n-k-1}$$ must be indifferent between pulling it at $$\tau_{n-k-1}$$ and at some $$\tau\in\left\{ \tau_{n-k},\ldots,\tau_{n}\right\}$$. 
Thus, by mathematical induction on $$k$$, bad Sender is indifferent between pulling the arm at any $$\tau$$ in the support of $$P_{G}$$ and at $$T+1$$, which proves (2). By Bayes’s rule, for all $$\tau$$ in the support of $$P_{G}$$,   $$\frac{1-\pi}{\pi}\left(P_{B}\left(\tau\right)-P_{B}\left(\tau-1\right)\right)=\frac{1-\mu\left(\tau\right)}{\mu\left(\tau\right)}\left(P_{G}\left(\tau\right)-P_{G}\left(\tau-1\right)\right)\text{.}$$ (B12) Summing up over $$\tau$$ yields (3). ∥ Proof of Proposition 1. Part 1. We first show that, for each $$\mathcal{T}\subseteq\left\{ 1,\dots,T+1\right\}$$ with $$T+1\in\mathcal{T}$$and each $$\tau\in\mathcal{T}$$, there exist unique $$P_{G} \left(\tau\right)$$, $$P_{B} \left(\tau\right)$$, and $$\mu\left(\tau\right)$$ given by part 1 of this proposition. It suffices to show that there exists a unique $$\left\{ \mu\left(\tau\right)\right\} _{\tau\in\mathcal{T}}\in\left[0,1\right]^{\left|\mathcal{T}\right|}$$ that solves (2) and (3). Using (A11) with $$m=\pi$$ and $$m^{\prime}=\mu\left(\tau\right)$$, the left hand side of (2) can be rewritten as   $V_{B}\left(\mu\left(\tau\right)\right)\equiv\int v_{B}\left(\frac{\frac{\mu\left(\tau\right)s}{\pi}}{\frac{\mu\left(\tau\right)s}{\pi}+\frac{\left(1-\mu\left(\tau\right)\right)\left(1-s\right)}{1-\pi}}\right)dH_{B}\left(s|\tau,\pi\right).$ Because $$v_{B}$$ is continuous and strictly increasing, $$V_{B}$$ is also continuous and strictly increasing. Furthermore, $$V_{B}\left(0\right)=v_{B}\left(0\right)$$ and $$V_{B}\left(1\right)=v_{B}\left(1\right)$$. Therefore, for all $$\mu\left(T+1\right)\in\left[0,1\right]$$ and all $$\tau\in\mathcal{T}$$, there exists a unique $$\mu\left(\tau\right)$$ that solves (2). Moreover, for all $$\tau\in\mathcal{T}$$, $$\mu\left(\tau\right)$$ is continuous and strictly increasing in $$\mu\left(T+1\right)$$, is equal to $$0$$ if $$\mu\left(T+1\right)=0$$, and is equal to $$1$$ if $$\mu\left(T+1\right)=1$$. 
The left-hand side of (3) is continuous and strictly decreasing in $$\mu\left(\tau\right)$$ for all $$\tau\in\mathcal{T}$$. Moreover, the left-hand side of (3) is $$0$$ when $$\mu\left(\tau\right)=1$$ for all $$\tau\in\mathcal{T}$$, and it approaches infinity when $$\mu\left(\tau\right)$$ approaches $$0$$ for all $$\tau\in\mathcal{T}$$. Therefore, substituting each $$\mu \left(\tau\right)$$ in (3) with a function of $$\mu\left(T+1\right)$$ obtained from (2), we conclude that there exists a unique $$\mu\left(T+1\right)$$ that solves (3). We now construct an equilibrium for each $$\mathcal{T}\subseteq\left\{ 1,\dots,T+1\right\}$$ with $$T+1\in\mathcal{T}$$. Let $$P_{G}\left(\tau\right)$$ and $$P_{B}\left(\tau\right)$$ be given by part 1 of this proposition for all $$\tau\in\left\{ 1,\dots,T+1\right\}$$. Let $$\mu\left(\tau\right)$$ be given by part 1 of this proposition for all $$\tau\in\mathcal{T}$$ and $$\mu\left(\tau\right)=0$$ otherwise. Notice that, so constructed, $$P_{G}$$, $$P_{B}$$, and $$\mu$$ exist and are unique. $$P_{G}$$ is clearly a distribution. $$P_{B}$$ is also a distribution, because $$P_{B}\left(\tau\right)$$ increases with $$\tau$$ by (4) and $$P_{B}\left(T+1\right)=1$$ by (3) and (4). Furthermore, $$\mu$$ is a consistent belief because (B12) holds for all $$\tau\in\mathcal{T}$$ by (4). It remains to show that there exists an optimal strategy for Sender such that good and bad Sender’s distributions of pulling time are given by $$P_{G}$$ and $$P_{B}$$. First, both good and bad Sender strictly prefer not to pull the arm at any time $$\tau\notin\mathcal{T}$$, because, by part (ii) of Assumption (1) , pulling the arm at $$\tau$$ gives Sender a payoff of $$v_{\theta}\left(0\right)<v_{\theta}\left(\mu\left(T+1\right)\right)$$. Second, by (2), pulling the arm at any time $$\tau\in\mathcal{T}$$ gives bad Sender the same expected payoff $$v_{\theta}\left(\mu\left(T+1\right)\right)$$. 
Finally, by part 1 of Lemma 2, good Sender strictly prefers to pull the arm at time $$\tau\in\mathcal{T}$$ than at any other time $$\tau^{\prime}>\tau$$. Finally, in any equilibrium, $$P_{G}$$ and $$P_{B}$$ have the same supports by Lemma 3. Moreover, for all $$\tau$$ in the support of $$P_{G}$$, $$P_{G}\left(\tau\right)=F\left(\tau\right)$$ by part 2 of Lemma 2, $$P_{B}\left(\tau\right)$$ satisfies (4) by (B12), $$\mu\left(\tau\right)\in\left(0,1\right)$$ by Lemma 3, and $$\mu\left(\tau\right)$$ satisfies (3) and (4) by Lemma 4. Part 2. First, we notice that, by part 1 of Proposition 1, there exists an equilibrium with $$\mathcal{T}=\left\{ 1,\dots,T+1\right\}$$. In this equilibrium, there are no out of equilibrium events and therefore it is divine. Adopting Cho and Kreps (1987)’s definition to our setting (see, $$e.g.$$, Maskin and Tirole, 1992), we say that an equilibrium is divine if $$\mu\left(\tau\right)=1$$ for any $$\tau\notin {\rm supp}\left(P_{G}\right)$$ at which condition D1 holds. D1 holds at $$\tau$$ if for all $$m\in\left[0,1\right]$$ that satisfy   $$\int v_{B}\left(s\right)dH_{B}\left(s|\tau,m\right)\geq\max_{t\in {\rm supp}\left(P_{G}\right),t>\tau}\int v_{B}\left(s\right)dH_{B}\left(s|t,\mu\left(t\right)\right)$$ (B13) the following inequality holds:   $$\int v_{G}\left(s\right)dH_{G}\left(s|\tau,m\right)>\max_{t\in {\rm supp}\left(P_{G}\right),t>\tau}\int v_{G}\left(s\right)dH_{G}\left(s|t,\mu\left(t\right)\right).$$ (B14) Suppose, on the contrary, that there exists a divine equilibrium in which $$P_{G}\left(\tau\right)<F_{G}\left(\tau\right)$$ for some $$\tau\in\left\{ 1,\ldots,T\right\}$$. By part 1 of Proposition 1, $$\tau\notin {\rm supp}\left(P_{G}\right)$$. Let $$t^{*}$$ denote $$t$$ that maximizes the right-hand side of (B14). By Lemma 3, $$\mu\left(t^{*}\right)<1$$ , and, by Lemma 4, $$t^{*}$$ maximizes the right-hand side of (B13). Therefore, by part 1 of Lemma 2, D1 holds at $$\tau$$; so $$\mu\left(\tau\right)=1$$. 
But then $$\tau\notin {\rm supp}\left(P_{G}\right)$$ cannot hold, because   $\int v_{G}\left(s\right)dH_{G}\left(s|\tau,1\right)=v_{G}\left(1\right)>\max_{t\in {\rm supp}\left(P_{G}\right)}\int v_{G}\left(s\right)dH_{G}\left(s|t,\mu\left(t\right)\right).$ ∥ Proof of Corollary 2. By Lemma 4 and part 2 of Proposition 1, bad Sender is indifferent between pulling the arm at any time before the deadline and not pulling the arm at all. Then, by Lemma 2, $$\mu\left(\tau-1\right)>\mu\left(\tau\right)$$ for all $$\tau$$. Using (4) with $$P_{G}=F_{G}$$, we have that for all $$\tau<T$$,   \begin{eqnarray} \frac{1-\tilde{\mu}\left(\tau\right)}{\tilde{\mu}\left(\tau\right)} & = & \frac{1-\pi}{\pi}\frac{1-P_{B}\left(\tau\right)}{1-P_{G}\left(\tau\right)}\nonumber \\ & = & \frac{\sum_{t=\tau+1}^{T+1}\frac{1-\mu\left(t\right)}{\mu\left(t\right)}\left(F_{G}\left(t\right)-F_{G}\left(t-1\right)\right)}{1-F_{G}\left(\tau\right)}\\ & = & \mathbb{E}_{F}\left[\frac{1-\mu\left(t\right)}{\mu\left(t\right)}\mid t\ge\tau+1\right].\nonumber \end{eqnarray} (B15) Since $$\mu\left(\tau-1\right)>\mu\left(\tau\right)$$ for all $$\tau$$, (B15) implies that $$\tilde{\mu}\left(\tau-1\right)>\tilde{\mu}\left(\tau\right)$$ and $$\mu\left(\tau\right)>\tilde{\mu}\left(\tau-1\right)$$ for all $$\tau$$. ∥ Proof of Corollary 1. Using (4) with $$P_{G}=F_{G}$$, we have   $\frac{1-\mu\left(\tau\right)}{\mu\left(\tau\right)}=\frac{1-\pi}{\pi}\frac{P_{B}\left(\tau\right)-P_{B}\left(\tau-1\right)}{P_{G}\left(\tau\right)-P_{G}\left(\tau-1\right)}.$ To complete the proof, notice that, by Corollary 2, $$\mu\left(\tau\right)>\mu\left(\tau^{\prime}\right)$$ whenever $$\tau<\tau^{\prime}$$. ∥ Proof of Lemma 2$$'$$. 
Given Receiver’s interim belief $$m$$ and pulling times $$\tau$$ and $$\tau^{\prime}$$, we write $$H$$ and $$H^{\prime}$$ for distributions $$H\left(.\mid\tau,m\right)$$ and $$H\left(.\mid\tau^{\prime},m\right)$$ of Receiver’s posterior belief $$s$$ from Receiver’s perspective, and we write $$H_{\theta}$$ and $$H_{\theta}^{\prime}$$ for distributions $$H_{\theta}\left(.\mid\tau,m\right)$$ and $$H_{\theta}\left(.\mid\tau^{\prime},m\right)$$ of Receiver’s posterior belief $$s$$ from type-$$\theta$$ Sender’s perspective. For any interim belief $$m\in\left(0,1\right)$$ and pulling times $$\tau,\tau^{\prime}$$, by Bayes’s rule, we have   \begin{eqnarray*} dH\left(s\right) & = & mdH_{G}\left(s\right)+\left(1-m\right)dH_{B}\left(s\right),\\ s & = & \frac{mdH_{G}\left(s\right)}{mdH_{G}\left(s\right)+\left(1-m\right)dH_{B}\left(s\right)}, \end{eqnarray*} so that $$dH_{G}\left(s\right)=\frac{s}{m}dH\left(s\right)$$ and $$dH_{B}\left(s\right)=\frac{1-s}{1-m}dH\left(s\right)$$. Likewise, $$dH_{G}^{\prime}\left(s\right)=\frac{s}{m}dH^{\prime}\left(s\right)$$ and $$dH_{B}^{\prime}\left(s\right)=\frac{1-s}{1-m}dH^{\prime}\left(s\right)$$. For any pulling time $$\tau$$ and interim beliefs $$m,m^{\prime}\in\left(0,1\right)$$, each posterior belief $$s$$ under interim belief $$m$$ transforms into the posterior belief $$s^{\prime}$$ given by (A11) under interim belief $$m^{\prime}$$. Let $$m=\mu\left(\tau\right)$$ and $$m^{\prime}=\mu\left(\tau^{\prime}\right)$$. 
Bad Sender weakly prefers to pull the arm at $$\tau$$ than at $$\tau^{\prime}$$ if and only if   $\int_{0}^{1}v_{B}\left(s\right)dH_{B}\left(s\right)\geq\int_{0}^{1}v_{B}\left(\frac{\frac{m^{\prime}s}{m}}{\frac{m^{\prime}s}{m}+\frac{\left(1-m^{\prime}\right)\left(1-s\right)}{1-m}}\right)dH_{B}^{\prime}\left(s\right),$ which is equivalent to   $$\int_{0}^{1}v_{B}\left(s\right)\left(1-s\right)dH\left(s\right)\geq\int_{0}^{1}v_{B}\left(\frac{\frac{m^{\prime}s}{m}}{\frac{m^{\prime}s}{m}+\frac{\left(1-m^{\prime}\right)\left(1-s\right)}{1-m}}\right)\left(1-s\right)dH^{\prime}\left(s\right).$$ (B16) Similarly, good Sender strictly prefers to pull the arm at $$\tau$$ than at $$\tau^{\prime}$$ if and only if   $$\int_{0}^{1}v_{G}\left(s\right)sdH\left(s\right)>\int_{0}^{1}v_{G}\left(\frac{\frac{m^{\prime}s}{m}}{\frac{m^{\prime}s}{m}+\frac{\left(1-m^{\prime}\right)\left(1-s\right)}{1-m}}\right)sdH^{\prime}\left(s\right).$$ (B17) Because $$s$$ and $$s^{\prime}$$ are in $$\left(0,1\right)$$, and $$s^{\prime}$$ is strictly increasing in $$s$$, for any $$r\in\left(0,1\right)$$, we have that $$s^{\prime}>r$$ if and only if $$s>r^{\prime}$$ for some $$r^{\prime}\in\left(0,1\right)$$, which depends on $$m$$ and $$m^{\prime}$$. Thus, for $$v_{\theta}$$ given by (5), the inequalities (B16) and (B17) can be rewritten as   \begin{eqnarray} \int_{r}^{1}\left(1-s\right)dH\left(s\right) & \geq & \int_{r^{\prime}}^{1}\left(1-s\right)dH^{\prime}\left(s\right),\\ \end{eqnarray} (B18)  \begin{eqnarray} \int_{r}^{1}sdH\left(s\right) & > & \int_{r^{\prime}}^{1}sdH^{\prime}\left(s\right), \end{eqnarray} (B19) where$$\int_{r}^{1}$$ and $$\int_{r^{\prime}}^{1}$$ stand for the Lebesgue integrals over the sets $$\left(r,1\right]$$ and $$\left(r^{\prime},1\right]$$. Notice also that we are using a selection from Receiver’s best response correspondence for which $$v\left(r\right)=0$$. The proof goes through for other selections, after adding appropriate terms on both sides of (B18) and (B19). 
Integrating by parts, we can rewrite (B18) and (B19) as   \begin{eqnarray} -\left(1-r\right)H\left(r\right)+\int_{r}^{1}H\left(s\right)ds & \geq & -\left(1-r^{\prime}\right)H^{\prime}\left(r^{\prime}\right)+\int_{r^{\prime}}^{1}H^{\prime}\left(s\right)ds,\\ \end{eqnarray} (B20)  \begin{eqnarray} -rH\left(r\right)-\int_{r}^{1}H\left(s\right)ds & > & -r^{\prime}H^{\prime}\left(r^{\prime}\right)-\int_{r^{\prime}}^{1}H^{\prime}\left(s\right)ds. \end{eqnarray} (B21) Suppose that (B20) and (6) hold and let us show that (B21) holds. We have $$H^{\prime}\left(r^{\prime}\right)>H\left(r\right)$$, because   \begin{eqnarray*} \left(1-r\right)\left(H^{\prime}\left(r^{\prime}\right)-H\left(r\right)\right) & = & \left(1-r^{\prime}\right)H^{\prime}\left(r^{\prime}\right)+\left(r^{\prime}-r\right)H^{\prime}\left(r^{\prime}\right)-\left(1-r\right)H\left(r\right)\\ & \geq & \left(1-r^{\prime}\right)H^{\prime}\left(r^{\prime}\right)+\int_{r}^{r^{\prime}}H^{\prime}\left(s\right)ds-\left(1-r\right)H\left(r\right)\\ & \geq & \int_{r}^{1}H^{\prime}\left(s\right)ds-\int_{r}^{1}H\left(s\right)ds>0, \end{eqnarray*} where the equality holds by rearrangement, the first inequality holds by monotonicity of $$H$$, the second by (B20), and the last by (6). The inequality (B21) then holds because   \begin{eqnarray*} r^{\prime}H^{\prime}\left(r^{\prime}\right)-rH\left(r\right)+\int_{r^{\prime}}^{1}H^{\prime}\left(s\right)ds-\int_{r}^{1}H\left(s\right)ds & > & r^{\prime}H^{\prime}\left(r^{\prime}\right)-rH\left(r\right)-\int_{r}^{r^{\prime}}H^{\prime}\left(s\right)ds\\ & \geq & r^{\prime}H^{\prime}\left(r^{\prime}\right)-rH\left(r\right)-H^{\prime}\left(r^{\prime}\right)\left(r^{\prime}-r\right)\\ & = & r\left(H^{\prime}\left(r^{\prime}\right)-H\left(r\right)\right)>0, \end{eqnarray*} where the first inequality holds by (6), the second by monotonicity of $$H$$, and the last by the established inequality $$H^{\prime}\left(r^{\prime}\right)>H\left(r\right)$$. ∥ C. Poisson Model Proof of Proposition 6. 
$$\underline{{\text{For}}\,\pi:}$$ Differentiating $$\mu\left(T\right)$$ in Proposition 2 with respect to $$\pi$$, we have   $\frac{d\mu\left(T\right)}{d\pi}=\left\{ \begin{array}{l} \frac{1}{\pi^{2}}\mu\left(T\right)^{2}\text{ if }\pi<\bar{\pi},\\ \frac{e^{\alpha T}}{\pi^{2}}\mu\left(T\right)^{2+\frac{\alpha}{\lambda}}\text{ otherwise,} \end{array}\right\} >0.$ $$\underline{{\text{For}}\,\lambda:}$$ First, when $$\pi<\bar{\pi}$$, $$\frac{d\mu\left(T\right)}{d\lambda}<0$$ since $$e^{-\left(\alpha+\lambda\right)T}>1-\left(\alpha+\lambda\right)T$$ for all $$\alpha,\lambda,T>0$$. Second, when $$\pi>\bar{\pi},$$  \begin{eqnarray*} \frac{d\mu\left(T\right)}{d\lambda} & = & \frac{d}{d\lambda}e^{-\frac{\lambda}{\alpha+\lambda}\ln\left(1+\phi\left(\lambda\right)\right)},\\ \phi & \equiv & \frac{\alpha+\lambda}{\lambda}\frac{1-\pi}{\pi}e^{\alpha T}>0. \end{eqnarray*} Thus, $$\frac{d\mu\left(T\right)}{d\lambda}<0$$, because   $\frac{d}{d\lambda}\frac{\lambda}{\alpha+\lambda}\ln\left(1+\phi\right)=\frac{\alpha}{\alpha+\lambda}\left[\frac{\ln\left(1+\phi\right)}{\alpha+\lambda}-\frac{1}{\lambda}\frac{1-\pi}{\pi}\frac{e^{\alpha T}}{1+\phi}\right]>0,$ where the inequality follows from $$\left(1+\phi\right)\ln\left(1+\phi\right)>\phi$$. $$\underline{{\text{For}}\,\alpha:}$$ First, when $$\pi<\bar{\pi}$$,   \begin{eqnarray*} \frac{d\mu\left(T\right)}{d\alpha} & = & -\left(\mu\left(T\right)\right)^{2}\frac{\chi}{\left(\alpha+\lambda\right)^{2}}<0,\\ \chi & \equiv & \lambda\left\{ e^{\lambda T}-\left[1+\left(\alpha+\lambda\right)T\right]e^{-\alpha T}\right\} >0, \end{eqnarray*} where the last passage follows from $$e^{\left(\alpha+\lambda\right)T}>1+\left(\alpha+\lambda\right)T$$ for all $$\alpha,\lambda,T>0$$.
Second, when $$\pi\geq\bar{\pi}$$, by log-differentiation,   $\frac{d\mu\left(T\right)}{d\alpha}=\mu\left(T\right)\frac{\lambda}{\alpha+\lambda}\left[\frac{\ln\left(1+\phi\right)}{\alpha+\lambda}-\frac{1}{1+\phi}\frac{d\phi}{d\alpha}\right].$ Thus,   $$\frac{d\mu\left(T\right)}{d\alpha}<0\iff\frac{\left(1+\phi\right)\ln\left(1+\phi\right)}{\phi}<1+T\left(\alpha+\lambda\right).$$ (C22) For $$\pi=\bar{\pi}$$, $$\phi=e^{\left(\alpha+\lambda\right)T}-1>0$$; so   $\frac{d\mu\left(T\right)}{d\alpha}<0\iff\ln\left(1+\phi\right)<\phi,$ which is true for all $$\phi>0$$. Then $$\frac{d\mu\left(T\right)}{d\alpha}<0$$ for $$\pi\geq\bar{\pi}$$ follows because $$\phi<e^{\left(\alpha+\lambda\right)T}-1$$ for $$\pi>\bar{\pi}$$ and the left-hand side of (C22) increases with $$\phi$$ for $$\phi>0$$. ∥

Proof of Proposition 3.

Part 1. Recall that (1) Sender’s payoff equals Receiver’s posterior belief about Sender at $$t=T$$ and (2) in equilibrium, bad Sender (weakly) prefers not pulling the arm at all to pulling it at any time $$t\in\left[0,T\right]$$. Therefore, bad Sender’s expected payoff equals Receiver’s belief about Sender at $$t=T$$ if the arm has not been pulled:   $$\mathbb{E}\left[v_{B}\right]=\mu\left(T\right).$$ (C23) Part 1 then follows from Proposition 6.

Part 2. By the law of iterated expectations,   \begin{eqnarray} \mathbb{E}\left[s\right] & = & \pi\mathbb{E}\left[v_{G}\right]+\left(1-\pi\right)\mathbb{E}\left[v_{B}\right]=\pi\nonumber \\ \Rightarrow\mathbb{E}\left[v_{G}\right] & = & 1-\frac{1-\pi}{\pi}\mu\left(T\right), \end{eqnarray} (C24) where $$s$$ is Receiver’s posterior belief about Sender at $$t=T$$ and we use (C23) in the last step. Thus, good Sender’s expected payoff increases with $$\alpha$$ and $$\lambda$$ by Proposition 6. Finally, it is easy to see that $$\mathbb{E}\left[v_{G}\right]$$ increases in $$\pi$$ after substituting $$\mu\left(T\right)$$ in $$\mathbb{E}\left[v_{G}\right]$$.

Part 3.
We shall show that in the divine equilibrium   \begin{eqnarray} \mathbb{E}\left[u\right] & = & \frac{\left(1-\pi\right)\left(1-\mu\left(T\right)\right)}{2}. \end{eqnarray} (C25) Part 3 then follows from Proposition 6. Since $$\mathbb{E}\left[s\right]=\pi$$, by (1) and (C24), it is sufficient to prove that $$\mathbb{E}\left[s^{2}\right]=\pi\mathbb{E}\left[v_{G}\right]$$. We divide the proof into two cases: $$\pi\leq\bar{\pi}$$ and $$\pi>\bar{\pi}$$.

If $$\pi\leq\bar{\pi},$$ Receiver’s expected payoff is given by the sum of four terms: (1) Sender is good and the arm does not arrive; (2) Sender is good and the arm arrives; (3) Sender is bad and she does not pull the arm; and (4) Sender is bad and she pulls the arm. Thus,   \begin{eqnarray*} \mathbb{E}\left[s^{2}\right] & = & \pi e^{-\alpha T}\left(\mu\left(T\right)\right)^{2}\\ & & +\pi\int_{0}^{T}\left(e^{\lambda\left(T-t\right)}\mu\left(T\right)\right)^{2}\alpha e^{-\alpha t}dt\\ & & +\left(1-\pi\right)\left(1-P_{B}\left(T\right)\right)\left(\mu\left(T\right)\right)^{2}\\ & & +\left(1-\pi\right)\int_{0}^{T}e^{-\lambda\left(T-t\right)}\left(e^{\lambda\left(T-t\right)}\mu\left(T\right)\right)^{2}\frac{\pi}{1-\pi}\left(\frac{1-\mu\left(t\right)}{\mu\left(t\right)}\right)\alpha e^{-\alpha t}dt. \end{eqnarray*} Solving all integrals and rearranging all common terms, we get   \begin{eqnarray*} \mathbb{E}\left[s^{2}\right] & = & \pi\mathbb{E}\left[v_{G}\right]. \end{eqnarray*}

If $$\pi>\bar{\pi}$$, Receiver’s expected payoff is given by the sum of five terms: (1) Sender is good and the arm does not arrive; (2) Sender is good and the arm arrives before $$\bar{t}$$; (3) Sender is good and the arm arrives between $$\bar{t}$$ and $$T$$; (4) Sender is bad and she does not pull the arm; (5) Sender is bad and she pulls the arm.
Thus,   \begin{eqnarray*} \mathbb{E}\left[s^{2}\right] & = & \pi e^{-\alpha T}\left(\mu\left(T\right)\right)^{2}\\ & & +\pi\left(1-e^{-\alpha\bar{t}}\right)\\ & & +\pi\int_{\bar{t}}^{T}\left(e^{\lambda\left(T-t\right)}\mu\left(T\right)\right)^{2}\alpha e^{-\alpha t}dt\\ & & +\left(1-\pi\right)\left(1-P_{B}\left(T\right)\right)\left(\mu\left(T\right)\right)^{2}\\ & & +\left(1-\pi\right)\int_{\bar{t}}^{T}e^{-\lambda\left(T-t\right)}\left(e^{\lambda\left(T-t\right)}\mu\left(T\right)\right)^{2}\frac{\pi}{1-\pi}\left(\frac{1-\mu\left(t\right)}{\mu\left(t\right)}\right)\alpha e^{-\alpha t}dt. \end{eqnarray*} Solving all integrals and rearranging all common terms, we again get   \begin{eqnarray*} \mathbb{E}\left[s^{2}\right] & = & \pi\mathbb{E}\left[v_{G}\right]. \end{eqnarray*} ∥

Proof of Proposition 4.

$$\underline{{\text{For}}\,\pi:}$$ Differentiating $$P_{B}\left(T\right)$$ in (7) with respect to $$\pi$$, we have   \begin{align*} \frac{dP_{B}\left(T\right)}{d\pi} & =\frac{e^{-\alpha T}}{\mu\left(T\right)\left(1-\pi\right)}\times\left[\frac{\pi}{\mu\left(T\right)}\frac{d\mu\left(T\right)}{d\pi}-\frac{1-\mu\left(T\right)}{1-\pi}\right]\\ & =\frac{e^{-\alpha T}}{\mu\left(T\right)\left(1-\pi\right)}\times\left\{ \begin{array}{l} \left[\frac{\mu\left(T\right)}{\pi}-\frac{1-\mu\left(T\right)}{1-\pi}\right]\text{ if }\pi<\bar{\pi},\\ \left[e^{\alpha T}\frac{\mu\left(T\right)^{1+\frac{\alpha}{\lambda}}}{\pi}-\frac{1-\mu\left(T\right)}{1-\pi}\right]\text{ otherwise.} \end{array}\right\} \end{align*} First, when $$\pi<\bar{\pi}$$, $$\frac{dP_{B}\left(T\right)}{d\pi}<0$$ because $$\mu\left(T\right)<\pi$$. Second, when $$\pi\geq\bar{\pi}$$, $$\frac{dP_{B}\left(T\right)}{d\pi}<0$$ if and only if   \begin{align*} 1+\frac{\alpha}{\alpha+\lambda}\phi & >\left(1+\phi\right)^{\frac{\alpha}{\alpha+\lambda}},\\ \phi & \equiv\frac{\alpha+\lambda}{\lambda}\frac{1-\pi}{\pi}e^{\alpha T}>0.
\end{align*} Thus, $$\frac{dP_{B}\left(T\right)}{d\pi}<0$$, because $$1+x\phi>\left(1+\phi\right)^{x}$$ for all $$\phi>0$$ and $$x\in\left(0,1\right)$$. $$\underline{{\text{For}}\,\lambda:}$$ Differentiating $$P_{B}\left(T\right)$$ in (7) with respect to $$\lambda$$, we have   \begin{align*} \frac{dP_{B}\left(T\right)}{d\lambda} & =\frac{\pi}{1-\pi}\frac{e^{-\alpha T}}{\mu\left(T\right)^{2}}\frac{d\mu\left(T\right)}{d\lambda}<0, \end{align*} where the inequality follows from Proposition 6. $$\underline{{\text{For}}\,\alpha:}$$ Without loss of generality we can set $$T=1$$. Differentiating $$P_{B}\left(T\right)$$ in (7) with respect to $$\alpha$$, we have   $\frac{dP_{B}\left(T\right)}{d\alpha}=\frac{\pi}{1-\pi}e^{-\alpha}\left[\frac{1-\mu\left(T\right)}{\mu\left(T\right)}+\frac{1}{\left(\mu\left(T\right)\right)^{2}}\frac{d\mu\left(T\right)}{d\alpha}\right].$ First, when $$\pi<\bar{\pi}$$,   \begin{eqnarray*} \frac{1-\pi}{\pi}e^{2\alpha}\frac{dP_{B}\left(T\right)}{d\alpha} & = & \left(\frac{1}{\pi}-2\right)e^{\alpha}+\frac{\left(\alpha\left(\alpha+\lambda\right)-\lambda\right)e^{\alpha+\lambda}+\lambda\left(1+2\left(\alpha+\lambda\right)\right)}{\left(\alpha+\lambda\right)^{2}}\\ & > & \left(\frac{1}{\bar{\pi}}-2\right)+\frac{\left(\alpha\left(\alpha+\lambda\right)-\lambda\right)e^{\alpha+\lambda}+\lambda\left(1+2\left(\alpha+\lambda\right)\right)}{\left(\alpha+\lambda\right)^{2}}\\ & = & \frac{1}{\left(\alpha+\lambda\right)^{2}}\left(\lambda\left(1+\left(\alpha+\lambda\right)\right)+\left(\left(\alpha+\lambda\right)^{2}-\lambda\right)e^{\left(\alpha+\lambda\right)}-\left(\alpha+\lambda\right)^{2}e^{\alpha}\right)\\ & = & \sum_{k=3}^{\infty}\left[\frac{\left(\alpha+\lambda\right)^{k}}{\left(k-2\right)!}-\lambda\frac{\left(\alpha+\lambda\right)^{k-1}}{\left(k-1\right)!}-\left(\alpha+\lambda\right)^{2}\frac{\alpha^{k-2}}{\left(k-2\right)!}\right]\equiv\sum_{k=3}^{\infty}c_{k}>0, \end{eqnarray*} where the inequality holds because each term $$c_{k}$$ in the sum is 
positive:   \begin{eqnarray*} c_{k} & = & \frac{\left(\alpha+\lambda\right)^{2}\left(\left(\alpha+\lambda\right)^{k-2}-\alpha^{k-2}\right)}{\left(k-2\right)!}-\frac{\left(\alpha+\lambda\right)^{2}\lambda\left(\alpha+\lambda\right)^{k-3}}{\left(k-1\right)!}\\ & = & \frac{\left(\alpha+\lambda\right)^{2}\lambda\left(\sum_{n=0}^{k-3}\left(\alpha+\lambda\right)^{k-3-n}\alpha^{n}\right)}{\left(k-2\right)!}-\frac{\left(\alpha+\lambda\right)^{2}\lambda\left(\alpha+\lambda\right)^{k-3}}{\left(k-1\right)!}\\ & > & \frac{\left(\alpha+\lambda\right)^{2}\lambda\left(\alpha+\lambda\right)^{k-3}}{\left(k-2\right)!}-\frac{\left(\alpha+\lambda\right)^{2}\lambda\left(\alpha+\lambda\right)^{k-3}}{\left(k-1\right)!}>0. \end{eqnarray*} Second, when $$\pi\ge\bar{\pi}$$, $$\frac{dP_{B}\left(T\right)}{d\alpha}>0$$ if and only if   $\frac{1+\phi}{\phi}\left[\ln\left(1+\phi\right)+\frac{\left(\alpha+\lambda\right)^{2}}{\lambda}\left(1-\left(1+\phi\right)^{-\frac{\alpha+\lambda}{\lambda}}\right)\right]-1-\alpha-\lambda>0$  $\phi\equiv\frac{\alpha+\lambda}{\lambda}\frac{1-\pi}{\pi}e^{\alpha T}.$ The left-hand side increases with $$\alpha$$, treating $$\phi$$ as a constant. Then the inequality holds because it holds for $$\alpha\rightarrow0:$$  \begin{eqnarray*} \frac{1+\phi}{\phi}\left[\ln\left(1+\phi\right)+\lambda\left(1-\left(1+\phi\right)^{-1}\right)\right]-1-\lambda & > & 0\\ \iff\frac{1+\phi}{\phi}\ln\left(1+\phi\right) & > & 1. \end{eqnarray*} ∥ Proof of Proposition 5. $$\underline{{\text{For}}\,\lambda:}$$ Differentiating $$P\left(T\right)$$ in (9) with respect to $$\lambda$$, we have   \begin{eqnarray*} \frac{dP\left(T\right)}{d\lambda} & = & \left(1-\pi\right)\frac{dP_{B}\left(T\right)}{d\lambda}<0, \end{eqnarray*} where the inequality follows from Proposition 4. 
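Several steps in these proofs lean on two elementary inequalities: $$\left(1+\phi\right)\ln\left(1+\phi\right)>\phi$$ for all $$\phi>0$$, and $$1+x\phi>\left(1+\phi\right)^{x}$$ for all $$\phi>0$$ and $$x\in\left(0,1\right)$$; both follow from strict concavity of the logarithm. A quick numerical spot-check over an illustrative grid (our choice of grid, for illustration only):

```python
import math

# Spot-check the two elementary inequalities on a grid of phi > 0
# and x in (0, 1); the grid values are illustrative.
phis = [10.0 ** k for k in range(-6, 7)]  # phi from 1e-6 to 1e6
xs = [i / 10 for i in range(1, 10)]       # x = 0.1, ..., 0.9

for phi in phis:
    # (1 + phi) * ln(1 + phi) > phi; log1p keeps accuracy for small phi
    assert (1 + phi) * math.log1p(phi) > phi
    for x in xs:
        # 1 + x*phi > (1 + phi)**x for x in (0, 1)
        assert 1 + x * phi > (1 + phi) ** x
print("both inequalities hold on the whole grid")
```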
$$\underline{{\text{For}}\,\alpha:}$$ Differentiating $$P\left(T\right)$$ in (9) with respect to $$\alpha$$, we have   \begin{eqnarray*} \frac{dP\left(T\right)}{d\alpha} & > & \left(1-\pi\right)\frac{dP_{B}\left(T\right)}{d\alpha}>0, \end{eqnarray*} where the last inequality follows from Proposition 4.

$$\underline{{\text{For}}\,\pi:}$$ Differentiating $$P\left(T\right)$$ in (9) with respect to $$\pi$$, we have   \begin{eqnarray*} \frac{dP\left(T\right)}{d\pi} & = & \frac{\pi e^{-\alpha T}}{\mu\left(T\right)^{2}}\left(\frac{d\mu\left(T\right)}{d\pi}-\frac{\mu\left(T\right)}{\pi}\right). \end{eqnarray*} We now show that   $\frac{dP\left(T\right)}{d\pi}\geq0\iff\pi\geq\frac{\alpha e^{\alpha T}}{\left(\alpha+\lambda\right)e^{\alpha T}-\lambda}.$ First, when $$\pi<\bar{\pi}$$, we have $$dP\left(T\right)/d\pi<0$$ because $$\mu\left(T\right)<\pi$$ and   $\frac{d\mu\left(T\right)}{d\pi}=\frac{\mu\left(T\right)^{2}}{\pi^{2}}<\frac{\mu\left(T\right)}{\pi}.$ Second, when $$\pi\ge\bar{\pi}$$, we have $$dP\left(T\right)/d\pi<0$$ if and only if   $\frac{d\mu\left(T\right)}{d\pi}=e^{\alpha T}\frac{\mu\left(T\right)^{2+\frac{\alpha}{\lambda}}}{\pi^{2}}<\frac{\mu\left(T\right)}{\pi}.$ Substituting $$\mu\left(T\right)$$, we get that this inequality is equivalent to   $\pi<\frac{\alpha e^{\alpha T}}{\alpha e^{\alpha T}+\lambda\left(e^{\alpha T}-1\right)}.$ Note that $$\alpha e^{\alpha T}+\lambda\left(e^{\alpha T}-1\right)=\left(\alpha+\lambda\right)e^{\alpha T}-\lambda$$, so this is the threshold stated above. It remains to show that   $\frac{\alpha e^{\alpha T}}{\alpha e^{\alpha T}+\lambda\left(e^{\alpha T}-1\right)}>\bar{\pi}.$ Substituting $$\bar{\pi}$$, we get that this inequality is equivalent to   $\frac{e^{\left(\alpha+\lambda\right)T}-1}{\alpha+\lambda}>\frac{e^{\alpha T}-1}{\alpha},$ which is satisfied because the function $$\left(e^{x}-1\right)/x$$ increases with $$x$$. ∥

Proof of Proposition 7.

First, for $$\pi<\bar{\pi}$$, $$\bar{t}=0$$. Second, for $$\pi\ge\bar{\pi}$$, $$\bar{t}$$ increases with $$\pi$$ and decreases with $$\alpha$$ because $$\mu\left(T\right)$$ increases with $$\pi$$ and decreases with $$\alpha$$.
Furthermore,   \begin{align*} \frac{d\bar{t}}{d\lambda} & =\frac{1}{\alpha+\lambda}\left(\frac{1}{\alpha+\lambda}\ln\left(1+\phi\right)+\frac{\alpha}{\lambda^{2}}\frac{1-\pi}{\pi}e^{\alpha T}\frac{1}{1+\phi}\right)>0,\\ \phi & \equiv\frac{\alpha+\lambda}{\lambda}\frac{1-\pi}{\pi}e^{\alpha T}. \end{align*} ∥ Proof of Proposition 8. The density $$p_{B}\left(t\right)$$ is equal to $$0$$ for $$t\leq\bar{t}$$ and is given by   $p_{B}\left(t\right)\equiv\frac{dP_{B}\left(t\right)}{dt}=\frac{\pi}{1-\pi}\frac{\alpha e^{-\alpha t}\left(1-\mu\left(T\right)e^{\lambda\left(T-t\right)}\right)}{\mu\left(T\right)}$ for $$t>\bar{t}$$. Differentiating $$p_{B}\left(t\right)$$ with respect to $$t$$ for $$t>\bar{t}$$, we get   $\frac{dp_{B}\left(t\right)}{dt}=\frac{\pi}{1-\pi}\frac{\alpha e^{-\alpha t}}{\mu\left(T\right)}\left[\left(\alpha+\lambda\right)\mu\left(T\right)e^{\lambda\left(T-t\right)}-\alpha\right]>0$ if and only if   $t<T-\frac{1}{\lambda}\ln\left(\frac{\alpha}{\alpha+\lambda}\frac{1}{\mu\left(T\right)}\right).$ We can therefore conclude that $$p_{B}\left(t\right)$$ is quasiconcave on the interval $$\left[\bar{t},T\right]$$. ∥ Proof of Proposition 9. The density $$p\left(t\right)$$ is given by   \begin{align*} p\left(t\right) & =\begin{cases} \pi\alpha e^{-\alpha t} & \mbox{if }t<\bar{t}\\ \pi\alpha e^{-\alpha t}+\pi\alpha e^{-\alpha t}\frac{1-\mu\left(T\right)e^{\lambda\left(T-t\right)}}{\mu\left(T\right)} & \mbox{if }t\geq\bar{t}. \end{cases} \end{align*} Obviously, for $$t\leq\bar{t}$$, $$p\left(t\right)$$ is decreasing in $$t$$. For $$t>\bar{t}$$, differentiating $$p\left(t\right)$$ with respect to $$t$$, we get   $\frac{dp\left(t\right)}{dt}=\pi\alpha e^{-\alpha t}\left[\left(\alpha+\lambda\right)e^{\lambda\left(T-t\right)}-\alpha\frac{1+\mu\left(T\right)}{\mu\left(T\right)}\right]>0$ if and only if   $t<T-\frac{1}{\lambda}\ln\left(\frac{\alpha}{\alpha+\lambda}\frac{1+\mu\left(T\right)}{\mu\left(T\right)}\right).$ ∥ Proof of Proposition 10. 
The breakdown probability at $$t$$ is given by   \begin{eqnarray*} Q\left(t\right) & \equiv & \left(1-e^{-\lambda\left(T-t\right)}\right)\left[1-\mu\left(t\right)\right]. \end{eqnarray*} Notice that $$Q\left(t\right)$$ is continuous in $$t$$ because $$\mu\left(t\right)$$ is continuous in $$t$$. Also, $$Q\left(t\right)$$ equals $$0$$ for $$t\leq\bar{t}$$, is strictly positive for all $$t\in\left(\bar{t},T\right)$$, and equals $$0$$ for $$t=T$$. Substituting $$\mu\left(t\right)$$ and differentiating $$Q\left(t\right)$$ with respect to $$t$$ for $$t\geq\bar{t},$$ we get   $\frac{dQ\left(t\right)}{dt}=-\lambda\frac{e^{-\lambda\left(T-t\right)}\left(1+\mu\left(T\right)\right)-2\mu\left(T\right)}{\left[1-\mu\left(T\right)\left(1-e^{\lambda\left(T-t\right)}\right)\right]^{2}}>0$ if and only if   $t<T-\frac{1}{\lambda}\ln\left(\frac{1+\mu\left(T\right)}{2\mu\left(T\right)}\right).$ ∥

Acknowledgments. We are grateful to the editor, four anonymous referees, Alessandro Bonatti, Steven Callander, Yeon-Koo Che, Wouter Dessein, William Fuchs, Drew Fudenberg, Robert Gibbons, Navin Kartik, Keiichi Kawai, Hongyi Li, Jin Li, Brendan Nyhan, Carlos Pimienta, Andrea Prat, and participants at various seminars and conferences for helpful comments and suggestions. Valentino Dardanoni, Antonio Forcina, and Jutta Roosen kindly shared their codes for the likelihood ratio order test. We especially thank Aleksandra Balyanova and Barton Lee for excellent research assistance. We acknowledge support from the Australian Research Council (ARC) including the ARC Discovery Project DP140102426 (Gratton), the ARC Future Fellowship FT130101159 (Holden), and the ARC Discovery Early Career Research Award DE160100964 (Kolotilin).

Supplementary Data

Supplementary data are available at Review of Economic Studies online.

Footnotes

1. For example, Paul Krugman wrote that the announcement “very probably installed Donald Trump in the White House” (New York Times, January 13, 2017).

2.
Matt Zapotosky, Ellen Nakashima, and Rosalind S. Helderman, Washington Post, October 30, 2016.

3. Although the video was filmed eleven years prior to the release, raising the question of whether it was strategically timed, the Washington Post maintains it obtained the unedited video only a few hours before its online release (Farhi, Paul, Washington Post, October 7, 2016).

4. Aaron Blake, Washington Post, October 9, 2016.

5. Los Angeles Times, transcript of Trump’s video statement, October 7, 2016.

6. Susan Page and Karina Shedrofsky, USA TODAY, October 26, 2016.

7. In Section 3.2, we generalize the model in several directions allowing for more general utility functions, for Sender to be imperfectly informed, for Sender’s type to affect when the arm arrives, and for the deadline to be stochastic.

8. The equilibrium is essentially unique in the sense that the probability with which each type of Sender pulls the arm at any time is uniquely determined.

9. See also Jung and Kwon (1988), Shin (1994), and Dziuda (2011). The unraveling result might also fail if disclosure is costly (Jovanovic, 1982) or information acquisition is costly (Shavell, 1994).

10. Shin (2003, 2006) also studies dynamic verifiable information disclosure, but he does not allow Sender to choose when to disclose. A series of recent papers consider dynamic information disclosure with different focuses from ours, including: Ely et al. (2015); Ely (2016); Grenadier et al. (2016); Hörner and Skrzypacz (2016); Bizzotto et al. (2017); Che and Hörner (2017); Orlov et al. (2017).

11. Brocas and Carrillo (2007) also show that if the learning process is privately observed by Sender but the stopping time is observed by Receiver, then in equilibrium Receiver learns Sender’s information (akin to the unraveling result), as if the learning process was public. Gentzkow and Kamenica (2017) generalize this result.

12.
In our model Sender can influence only the starting time of the experimentation process, but not the design of the process itself. Instead, in the Bayesian persuasion literature (e.g. Rayo and Segal, 2010; Kamenica and Gentzkow, 2011), Sender fully controls the design of the experimentation process.

13. See also Prat and Stromberg (2013) for a review of this literature in the broader context of the relationship between media and politics.

14. By part (ii) of Assumption 1, such perfect credibility can never be dented: $$H_{\theta}\left(.\mid\tau,1\right)$$ assigns probability $$1$$ to $$s=1$$ for all $$\theta$$ and $$\tau$$.

15. Divinity is a standard refinement used by the signalling literature. It requires Receiver to attribute a deviation to those types of Sender who would choose it for the widest range of Receiver’s interim beliefs. In our setting, the set of divine equilibria coincides with the set of monotone equilibria in which Receiver’s interim belief about Sender is non-increasing in the pulling time. Specifically, divinity rules out all equilibria in which both types of Sender do not pull the arm at some times, because Receiver’s out-of-equilibrium beliefs for those times are sufficiently unfavourable.

16. It is sufficient for our results to assume that Sender’s payoff is an upper hemicontinuous correspondence (rather than a continuous function) of Receiver’s posterior belief. For example, this is the case if Sender’s and Receiver’s payoffs depend on Receiver’s action and Sender’s type, and Receiver’s action set is finite. In the above example with constant ideological position, Sender’s payoff in (5) is a correspondence with $$v\left(r\right)=\left[0,1\right]$$, because it is optimal for Receiver to randomize between the two actions when $$s=r$$.

17.
More generally, Proposition 1 holds whenever $$sv_{G}\left(s\right)$$ is strictly convex and $$\left(1-s\right)v_{B}\left(s\right)$$ is strictly concave, that is, for all $$s$$, Sender’s Arrow-Prat coefficient of absolute risk aversion $$-v_{\theta}^{\prime\prime}\left(s\right)/v_{\theta}^{\prime}\left(s\right)$$ is less than $$2/s$$ for good Sender and more than $$-2/\left(1-s\right)$$ for bad Sender. For the Poisson model of Section 4, Proposition 1 continues to hold for any risk attitude of good Sender and only relies on bad Sender being not too risk-loving.

18. These effects are common in the Bayesian persuasion literature (Kamenica and Gentzkow, 2011). In this literature, Sender is uninformed. Therefore, from her perspective, Receiver’s beliefs follow a martingale process (Ely et al., 2015), so only convexity properties of Sender’s payoff affect the time at which she pulls the arm.

19. The inequality (6) holds if and only if $$\int_{x}^{1}H\left(s\mid\tau^{\prime},\pi\right)ds>\int_{x}^{1}H\left(s\mid\tau,\pi\right)ds$$ for all $$x\in\left(0,1\right)$$. In comparison, part (i) of Assumption 1 holds if and only if $$\int_{x}^{1}H\left(s\mid\tau^{\prime},\pi\right)ds\geq\int_{x}^{1}H\left(s\mid\tau,\pi\right)ds$$ for all $$x\in\left(0,1\right)$$ with strict inequality for some $$x\in\left(0,1\right)$$.

20. For $$v_{\theta}$$ given by (5), we can show that bad Sender withholds the arm with strictly positive probability, $$P_{B}\left(T\right)<F\left(T\right)$$, in all divine equilibria, if $$\pi>r$$. In this case, however, $$v_{\theta}$$ is not strictly increasing, and there exist divine equilibria in which good Sender does not always pull the arm as soon as it arrives. For example, there exists a divine equilibrium in which bad and good Sender never pull the arm by the deadline: $$P_{G}\left(T\right)=P_{B}\left(T\right)=0$$. In this equilibrium, both bad and good Sender enjoy the highest possible payoff, $$1$$.

21.
In this case, there exists some time $$\tau$$ at which bad Sender strictly prefers to pull the arm and (2) no longer holds for $$\tau$$.

22. Technically, we use the results from Section 3.1 by treating continuous time as an appropriate limit of discrete time.

23. In every divine equilibrium, $$P_{G}\left(t\right)=F\left(t\right)$$ for all $$t\in\left[\bar{t},T\right]$$ and $$P_{B}\left(t\right)=0$$ for all $$t\in\left[0,\bar{t}\right]$$. But for each distribution $$\hat{P}$$ such that $$\hat{P}\left(t\right)\leq F\left(t\right)$$ for all $$t\in\left[0,\bar{t}\right)$$ and $$\hat{P}\left(t\right)=F\left(t\right)$$ for all $$t\in\left[\bar{t},T\right]$$, there exists a divine equilibrium with $$P_{G}=\hat{P}$$. For ease of exposition, we focus on the divine equilibrium in which $$P_{G}\left(t\right)=F\left(t\right)$$ for all $$t\in\left[0,T\right]$$.

24. For an arbitrary Receiver’s Bernoulli payoff function, which depends on Receiver’s action and Sender’s type, the second-order Taylor approximation of Receiver’s expected payoff increases with the variance of his posterior belief, and therefore with $$\lambda$$ and $$\alpha$$. In contrast, the comparative statics with respect to $$\pi$$ are less robust. For example, if Receiver’s Bernoulli payoff function is $$-\left(a-\theta\right)^{2}$$, where $$a\in\mathbb{R}$$ is Receiver’s action and $$\theta\in\left\{ 0,1\right\}$$ is Sender’s type, then Receiver’s expected payoff decreases with $$\pi$$ for $$\pi<1/2$$ and increases with $$\pi$$ for $$\pi>1/2$$.

25. If the arrival rate $$\alpha$$ is sufficiently small, then $$t_{b}$$ is negative and hence the breakdown probability monotonically decreases with time.

26. This corresponds to the first terms of five presidents: Jimmy Carter (1976–80), Ronald Reagan (1980–84), George H. W. Bush (1988–92), Bill Clinton (1992–96), and George W. Bush (2000–04). Each president ran for reelection and three (Reagan, Clinton, and Bush) served two full terms.

27.
Nyhan (2015) does not provide data on scandals involving the president-elect between Election Day and the first week of January of the following year, but it contains data on scandals involving the president-elect between the first week of January and the date of his inauguration: there are no such scandals.

28. We omit from our sample the “GSA corruption” scandal during Jimmy Carter’s presidency as the allegations, explicit and implicit, of the scandal, while involving the federal administration, did not involve any of the members of Carter’s administration or their collaborators (if anything, as Carter ran with the promise to end corruption in the GSA, the scandal might have actually reinforced his position). In any case, we check in Online Appendix A that our qualitative results are robust to the inclusion of this scandal.

29. We discuss this test in greater detail in the context of the next application. For this application, we use $$k=3$$ equiprobable time intervals. For the election campaign period only (10 scandals), the $$p$$-values are $$0.003$$ and $$0.839$$, respectively.

30. The rate of learning $$\lambda$$ might also be related to the verifiability of information, which may depend on the scandal’s type (e.g., infidelity versus corruption).

31. Another possible explanation (not captured by our model) for Nyhan’s finding is that media organizations strategically avoid releasing scandals when voters’ attention is captured by other media events and scandals may be less effective (see Durante and Zhuravskaya, 2017).

32. One way to map this application into our model is as follows. Suppose that a firm learns at date $$t_{\ell}$$ that it needs liquidity in a time frame $$\Delta_{F}$$, meaning that the latest possible initial trade date is $$t_{\ell}+\Delta_{F}$$. Both $$t_{\ell}$$ and $$\Delta_{F}$$ are privately known by the firm. Date $$t_{\ell}$$ is drawn according to the (improper) uniform distribution on the set of integers $$\mathbb{Z}$$.
The time frame is $$\Delta_{F}\equiv T-t$$, where $$t$$ has a distribution $$F$$ on $$\left\{ 1,\dots,T+1\right\}$$. The firm chooses a time gap $$\Delta_{G}\equiv T-\tau$$ subject to $$\tau\in\left\{ t,\dots,T+1\right\}$$, meaning that it announces an IPO at a date $$t_{a}\in\left\{ t_{\ell},\dots,t_{\ell}+\left(\Delta_{F}-\Delta_{G}\right)\right\}$$ with the initial trade date at $$t_{a}+\Delta_{G}\leq t_{\ell}+\Delta_{F}$$. Announcing an IPO at date $$t_{a}$$ with the initial trade date at $$t_{a}-1$$ means that the firm accesses liquidity through channels other than an IPO. With this mapping, all our results hold exactly with $$P_{G}$$ and $$P_{B}$$ being the distributions of $$\tau=T-\Delta_{G}$$ for good and bad firms, respectively.

33. Our model predicts that the time gap should not affect expected excess returns, because the price at the initial trade date takes into account the information contained in the time gap. Therefore, we cannot take a standard approach of regressing excess returns on the time gap to evaluate the main prediction of our model that bad firms choose a shorter time gap.

34. As we discuss in Section 3.2.3, bad Sender may pull the arm later than good Sender simply because she receives it later than good Sender (not because she strategically delays). Therefore, we do not empirically identify whether bad firms choose a shorter time gap for a strategic reason.

35. The 1992 U.S. Supreme Court case Burson v. Freeman, 504 U.S. 191, forbids such practices as violations of freedom of speech.

References

ACHARYA V. V., DeMARZO P. and KREMER I. (2011), “Endogenous Information Flows and the Clustering of Announcements”, American Economic Review, 101, 2955–2979.

BANKS J. S. and SOBEL J. (1987), “Equilibrium Selection in Signaling Games”, Econometrica, 55, 647–661.

BIZZOTTO J., RÜDIGER J. and VIGIER A.
(2017), “How to Persuade a Long-Run Decision Maker” (University of Oxford Working Paper).

BLACKWELL D. (1953), “Equivalent Comparisons of Experiments”, Annals of Mathematical Statistics, 24, 262–272.

BOLTON P. and HARRIS C. (1999), “Strategic Experimentation”, Econometrica, 67, 349–374.

BROCAS I. and CARRILLO J. D. (2007), “Influence through Ignorance”, RAND Journal of Economics, 38, 931–947.

CHE Y.-K. and HÖRNER J. (2017), “Recommender Systems as Mechanisms for Social Learning”, Quarterly Journal of Economics, forthcoming.

CHO I.-K. and KREPS D. M. (1987), “Signaling Games and Stable Equilibria”, Quarterly Journal of Economics, 102, 179–221.

DARDANONI V. and FORCINA A. (1998), “A Unified Approach to Likelihood Inference on Stochastic Orderings in a Nonparametric Context”, Journal of the American Statistical Association, 93, 1112–1123.

DELLAVIGNA S. and KAPLAN E. (2007), “The Fox News Effect: Media Bias and Voting”, Quarterly Journal of Economics, 122, 1187–1234.

DUGGAN J. and MARTINELLI C. (2011), “A Spatial Theory of Media Slant and Voter Choice”, Review of Economic Studies, 78, 640–666.

DURANTE R. and ZHURAVSKAYA E. (2017), “Attack when the World is Not Watching?: International Media and the Israeli-Palestinian Conflict”, Journal of Political Economy, forthcoming.

DYE R. A. (1985), “Disclosure of Nonproprietary Information”, Journal of Accounting Research, 23, 123–145.

DZIUDA W. (2011), “Strategic Argumentation”, Journal of Economic Theory, 146, 1362–1397.

ELY J., FRANKEL A. and KAMENICA E. (2015), “Suspense and Surprise”, Journal of Political Economy, 123, 215–260.
ELY J. C. (2016), “Beeps”, American Economic Review, 107, 31–53.

GENTZKOW M. and KAMENICA E. (2017), “Disclosure of Endogenous Information”, Economic Theory Bulletin, 5, 47–56.

GENTZKOW M. and SHAPIRO J. M. (2006), “Media Bias and Reputation”, Journal of Political Economy, 114, 280–316.

GRENADIER S. R., MALENKO A. and MALENKO N. (2016), “Timing Decisions in Organizations: Communication and Authority in a Dynamic Environment”, American Economic Review, 106, 2552–2581.

GROSSMAN S. J. (1981), “The Informational Role of Warranties and Private Disclosures about Product Quality”, Journal of Law and Economics, 24, 461–483.

GROSSMAN S. J. and HART O. D. (1980), “Disclosure Laws and Takeover Bids”, Journal of Finance, 35, 323–334.

GUTTMAN I., KREMER I. and SKRZYPACZ A. (2013), “Not Only What but also When: A Theory of Dynamic Voluntary Disclosure”, American Economic Review, 104, 2400–2420.

HÖRNER J. and SKRZYPACZ A. (2016), “Selling Information”, Journal of Political Economy, 124, 1515–1562.

JOVANOVIC B. (1982), “Truthful Disclosure of Information”, Bell Journal of Economics, 13, 36–44.

JUNG W.-O. and KWON Y. K. (1988), “Disclosure when the Market is Unsure of Information Endowment of Managers”, Journal of Accounting Research, 26, 146–153.

KAMENICA E. and GENTZKOW M. (2011), “Bayesian Persuasion”, American Economic Review, 101, 2590–2615.

KELLER G., RADY S. and CRIPPS M. (2005), “Strategic Experimentation with Exponential Bandits”, Econometrica, 73, 39–68.

LI H.
and LI W. (2013), “Misinformation”, International Economic Review, 54, 253–277.

LOUGHRAN T. and RITTER J. R. (1995), “The New Issues Puzzle”, Journal of Finance, 50, 23–51.

MASKIN E. S. and TIROLE J. (1992), “The Principal-Agent Relationship with an Informed Principal, II: Common Values”, Econometrica, 60, 1–42.

MILGROM P. R. (1981), “Good News and Bad News: Representation Theorems and Applications”, Bell Journal of Economics, 12, 350–391.

MULLAINATHAN S. and SHLEIFER A. (2005), “The Market for News”, American Economic Review, 95, 1031–1053.

NYHAN B. (2015), “Scandal Potential: How Political Context and News Congestion Affect the President’s Vulnerability to Media Scandal”, British Journal of Political Science, 45, 435–466.

ORLOV D., SKRZYPACZ A. and ZRYUMOV P. (2017), “Persuading the Principal To Wait” (Stanford University Working Paper).

PRAT A. and STROMBERG D. (2013), “The Political Economy of Mass Media”, in Advances in Economics and Econometrics: Theory and Applications, Proceedings of the Tenth World Congress of the Econometric Society, vol. II (Cambridge University Press), 135–187.

RAYO L. and SEGAL I. (2010), “Optimal Information Disclosure”, Journal of Political Economy, 118, 949–987.

ROOSEN J. and HENNESSY D. A. (2004), “Testing for the Monotone Likelihood Ratio Assumption”, Journal of Business & Economic Statistics, 22, 358–366.

SHAKED M. and SHANTHIKUMAR G. (2007), Stochastic Orders (New York, NY: Springer).

SHAVELL S.
( 1994), “Acquisition and Discolsure of Information Prior to Sale”, RAND Journal of Economics , 25, 20– 36. Google Scholar CrossRef Search ADS   SHIN H.-S. ( 1994), “News Management and the Value of Firms”, RAND Journal of Economics , 25, 58– 71. Google Scholar CrossRef Search ADS   SHIN H.-S. ( 2003), “Disclosure and Asset Returns”, Econometrica , 71, 105– 133. Google Scholar CrossRef Search ADS   SHIN H.-S. ( 2006), “Disclosure Risk and Price Drift”, Journal of Accounting Research , 44, 351– 379. Google Scholar CrossRef Search ADS   © The Author 2017. Published by Oxford University Press on behalf of The Review of Economic Studies Limited. ### Journal The Review of Economic StudiesOxford University Press Published: Dec 11, 2017 ## You’re reading a free preview. Subscribe to read the entire article. ### DeepDyve is your personal research library It’s your single place to instantly that matters to you. over 18 million articles from more than 15,000 peer-reviewed journals. All for just $49/month ### Explore the DeepDyve Library ### Search Query the DeepDyve database, plus search all of PubMed and Google Scholar seamlessly ### Organize Save any article or search result from DeepDyve, PubMed, and Google Scholar... all in one place. ### Access Get unlimited, online access to over 18 million full-text articles from more than 15,000 scientific journals. ### Your journals are on DeepDyve Read from thousands of the leading scholarly journals from SpringerNature, Elsevier, Wiley-Blackwell, Oxford University Press and more. All the latest content is available, no embargo periods. DeepDyve ### Freelancer DeepDyve ### Pro Price FREE$49/month \$360/year Save searches from PubMed Create lists to Export lists, citations
http://mathhelpforum.com/differential-equations/115427-determing-points-singularity.html
# Thread: Determining points of singularity

1. ## Determining points of singularity

I do not know how to determine if $x=0$ is a regular or irregular singular point for $x^{2}y'' + 2(e^{x}-1)y' + (e^{-x}\cos(x))y=0$. How do I determine if $2\frac{e^{x}-1}{x}$ and $e^{-x}\cos(x)$ are analytic?

2. For $y''+P(x) y'+Q(x) y=0$, we wish to know if $P(x)$ has a pole of order greater than one or $Q(x)$ has a pole of order greater than two. If so, then the singular point is irregular. We can find out by simply taking limits that would cancel a simple pole in the former case or a double pole in the latter case. So in the case above with the singular point at $x=0$, we evaluate the following limits:

$\lim_{x\to 0} x P(x)$

$\lim_{x\to 0} x^2 Q(x)$

If both limits exist, then $P(x)$ cannot have a pole of order greater than one, and $Q(x)$ cannot have a pole of order greater than two. So just take the limits and see what happens.

3. Hmm, but my professor said that we can only take limits if all the coefficients are polynomials; if there are non-polynomial coefficients, then we must see if the functions $xp(x)$ and $x^{2}q(x)$ are analytic; according to him we cannot take the limits then.

4. You have:

$y''+\frac{2(e^x-1)}{x^2} y'+\frac{\cos(x)}{e^x x^2} y=0$

and $x\frac{e^x-1}{x^2}$ is analytic at zero. It has a removable singularity there. The term $x^2\frac{\cos(x)}{e^x x^2}$ is also analytic at zero. Your DE has a regular singular point at the origin. We can see a dramatic difference between regular and irregular singular points by considering the two DEs:

$y''+\frac{1}{x^3}y=0$

$y''+\frac{1}{x^2}y=0$

The first has an irregular singular point and the second has a regular one. I've plotted the real components of the complex solutions of both close to the origin. Note the qualitative differences: that folding pattern in the first continues infinitely often in any neighborhood of the origin and is the classic morphology of an essential singularity.
However, I'm not sure if all DEs with irregular singular points give rise to solutions with essential singularities. I suspect so though.

5. Originally Posted by Pinkk: "I do not know how to determine if $x=0$ is a regular or irregular singular point for $x^{2}y'' + 2(e^{x}-1)y' + (e^{-x}\cos(x))y=0$. How do I determine if $2\frac{e^{x}-1}{x}$ and $e^{-x}\cos(x)$ are analytic?"

Well, I would recommend using the definition of "analytic", which is that a (real) function $f$ is analytic at $x= a$ if and only if the Taylor series for $f(x)$ about $x= a$ exists and converges to $f(x)$ in some neighborhood of $a$. You should know that both $e^{-x}$ and $\cos(x)$ have Taylor series that converge to them for all $x$, and so their product does also.

As for $2\frac{e^x-1}{x}$, it is clearly analytic everywhere except, possibly, at $x= 0$. Now, strictly speaking it is not analytic at $x= 0$ because it is not defined there. But what you really mean is the function that is $2\frac{e^x-1}{x}$ for $x$ not equal to $0$ and is equal to $2$ when $x= 0$. The function $e^x$ has Taylor series, around $x= 0$, $1+ x+ \frac{1}{2}x^2+ \cdots+ \frac{1}{n!}x^n+ \cdots$. The function $e^x- 1$, then, has Taylor series, around $x= 0$, $x+ \frac{1}{2}x^2+ \cdots+ \frac{1}{n!}x^n+\cdots$. Finally, then, $\frac{e^x-1}{x}= 1+ \frac{1}{2}x+ \cdots+ \frac{1}{n!}x^{n-1}+ \cdots$.

6. Ah, okay. I think I understand now, thanks.
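The limit test described in post 2 is easy to check numerically. The following Python sketch (mine, not part of the original thread) evaluates $xP(x)$ and $x^2 Q(x)$ for this equation at points approaching zero:

```python
import math

# Standard form y'' + P(x) y' + Q(x) y = 0 of the equation in the thread:
# P(x) = 2(e^x - 1)/x^2,  Q(x) = e^(-x) cos(x) / x^2
P = lambda x: 2.0 * (math.exp(x) - 1.0) / x**2
Q = lambda x: math.exp(-x) * math.cos(x) / x**2

# Regular-singular-point test at x = 0: x*P(x) and x^2*Q(x) must stay finite.
for x in (1e-2, 1e-4, 1e-6):
    print(x, x * P(x), x**2 * Q(x))
# x*P(x) tends to 2 and x^2*Q(x) tends to 1, so both limits exist
# and x = 0 is a regular singular point, as concluded in the thread.
```

Both products converge (to 2 and 1 respectively), matching the Taylor-series argument in post 5, where $\frac{e^x-1}{x} \to 1$.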
https://isoreader.isoverse.org/reference/extract_substring.html
This is a convenience function to capture substrings from textual data. It uses str_match_all internally but, instead of returning everything, always returns only one single part of the match, depending on the parameters capture_n and capture_bracket.

extract_substring(
  string,
  pattern,
  capture_n = 1,
  capture_bracket = 0,
  missing = NA_character_
)

## Arguments

- string: string to extract from
- pattern: regular expression pattern to search for
- capture_n: within each string, which match of the pattern should be extracted? E.g. if the pattern searches for words, should the first, second or third word be captured?
- capture_bracket: for the captured match, which capture group should be extracted? I.e. which parentheses-enclosed segment of the pattern? By default captures the whole pattern (capture_bracket = 0).
- missing: what to replace missing values with? Note that values can be missing because there are not enough captured matches or because the actual capture_bracket is empty.

## Value

Character vector of the same length as string with the extracted substrings.

Other data extraction functions: extract_data(), extract_word()
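For readers more familiar with Python than R, a rough analogue of the documented semantics looks like this. This is an illustrative sketch only, not the actual isoreader implementation (which wraps str_match_all in R):

```python
import re

def extract_substring(strings, pattern, capture_n=1, capture_bracket=0, missing=None):
    """Rough Python analogue of the documented semantics (illustrative only)."""
    out = []
    for s in strings:
        matches = list(re.finditer(pattern, s))
        if len(matches) < capture_n:          # not enough matches in this string
            out.append(missing)
            continue
        group = matches[capture_n - 1].group(capture_bracket)
        out.append(missing if not group else group)   # empty capture -> missing
    return out

# Second match of "letter digit", second capture group (the digit):
print(extract_substring(["a1 b2", "c3"], r"([a-z])(\d)", capture_n=2, capture_bracket=2))
# -> ['2', None]
```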
https://tcs.rwth-aachen.de/lehre/Help-Knuth/WS2021/
# Helping Donald Knuth (WS 2021/2022)

### Important Dates

Biweekly meetings on Thursdays at 14:30 in the seminar room of the i1. Starting on 26th October.

### Contents

In this seminar we are going to check several exercises by Professor Knuth. Donald E. Knuth, one of computer science's most prolific voices, is the author of the renowned book series "The Art of Computer Programming", three volumes of which have been completed, with four more planned. In fact, "these books were named among the best twelve scientific monographs of the century by American Scientist, along with: Dirac on quantum mechanics, Einstein on relativity, Mandelbrot on fractals, Pauling on the chemical bond, Russell and Whitehead on foundations of mathematics, von Neumann and Morgenstern on game theory, Wiener on cybernetics, Woodward and Hoffmann on orbital symmetry, Feynman on quantum electrodynamics, Smith on the search for structure, and Einstein's collected papers." (see their homepage at Stanford)

Volume 4, Fascicle 6 has recently been published as a preliminary paperback. The book is about Satisfiability and contains many exercises along with their solutions. On his website (under Progress on Volume 4B) Professor Knuth issues the following request:

I worked particularly hard while preparing some of those exercises, attempting to improve on expositions that I found in the literature; and in several noteworthy cases, nobody has yet pointed out any errors. It would be nice to believe that I actually got the details right in my first attempt. But that seems unlikely, because I had hundreds of chances to make mistakes. So I fear that the most probable hypothesis is that nobody has been sufficiently motivated to check these things out carefully as yet. I still cling to a belief that these details are extremely instructive, and I'm uncomfortable with the prospect of printing a hardcopy edition with so many exercises unvetted.
Thus I would like to enter here a plea for some readers to tell me explicitly, "Dear Don, I have read exercise N and its answer very carefully, and I believe that it is 100% correct," where N is one of the following exercises in Volume 4 Fascicle 5:

• MPR-28-29: Prove basic inequalities for sums of independent binary random variables
• MPR-50: Prove that Ross's conditional expectation inequality is sharper than the second moment inequality
• MPR-59: Derive the four functions theorem
• MPR-61: Show that independent binary random variables satisfy the FKG inequality
• MPR-99: Generalize the Karp–Upfal–Wigderson bound on expected loop iterations
• MPR-103-104: Study ternary “coupling from the past”
• MPR-114: Prove Alon's “combinatorial nullstellensatz”
• MPR-121-122: Study the Kullback–Leibler divergence of one random variable from another
• MPR-127: Analyze the XOR of independent sparse binary vectors
• MPR-130-131: Derive paradoxical facts about the Cauchy distribution (which has “heavy tails”)
• 7.2.2-79: Analyze the sounds that are playable on the pipe organ in my home
• 7.2.2.1-29-30: Characterize all search trees that can arise with Algorithm X
• 7.2.2.1-53 (Nikola): Find every 4-clue instance of shidoku (4×4 sudoku)
• 7.2.2.1-55 (Viktor): Determine the fewest clues needed to force highly symmetric sudoku solutions
• 7.2.2.1-103 (Tomáš): List all of the 12-tone rows with the all-interval property, and study their symmetries
• 7.2.2.1-104: Construct infinitely many “perfect” n-tone rows
• 7.2.2.1-115: Find all hypersudoku solutions that are symmetric under transposition or under 90° rotation
• 7.2.2.1-121: Determine which of the 92 Wang tiles in exercise 2.3.4.3–5 can actually be used when tiling the whole plane
• 7.2.2.1-129: Enumerate all the symmetrical solutions to MacMahon's triangle-tiling problem
• 7.2.2.1-147: Construct all of the “bricks” that can be made with MacMahon's 30 six-colored cubes
• 7.2.2.1-151-152: Arrange all of the path dominoes into a single loop
• 7.2.2.1-172: Find the longest snake-in-the-box paths and cycles that can be made by kings, queens, rooks, bishops, or knights on a chessboard
• 7.2.2.1-189: Determine the asymptotic behavior of the Gould numbers
• 7.2.2.1-196: Analyze the running time of Algorithm X on bounded permutation problems
• 7.2.2.1-215: Show that exclusion of noncanonical bipairs can yield a dramatic speedup
• 7.2.2.1-262: Study the ZDDs for domino and diamond tilings that tend to have large “frozen” regions
• 7.2.2.1-305-306: Find optimum arrangements of the windmill dominoes
• 7.2.2.1-309 (Philipp): Find all ways to make a convex shape from the twelve hexiamonds
• 7.2.2.1-320: Find all ways to make a convex shape from the fourteen tetraboloes
• 7.2.2.1-323: Find all ways to make a skewed rectangle from the ten tetraskews
• 7.2.2.1-327: Analyze the Somap graphs
• 7.2.2.1-334: Build fake solutions for Soma-cube shapes
• 7.2.2.1-337: Design a puzzle that makes several kinds of “dice” from the same bent tricubes
• 7.2.2.1-346: Pack space optimally with small tripods
• 7.2.2.1-375: Determine the smallest incomparable dissections of rectangles into rectangles
• 7.2.2.1-387: Classify the types of symmetry that a polycube might have
• 7.2.2.1-394 (Erhard): Prove that every futoshiki puzzle needs at least six clues
• 7.2.2.1-415: Make an exhaustive study of homogenous 5×5 slitherlink
• 7.2.2.1-424 (Andreas): Make an exhaustive study of 6×6 masyu
• 7.2.2.1-432: Find the most interesting 3×3 kakuro puzzles
• 7.2.2.1-442: Enumerate all hitori covers of small grids

I still cling to a belief that these details are extremely instructive, and I'm uncomfortable with the prospect of printing a hardcopy edition with so many exercises unvetted.
Thus I would like to enter here a plea for some readers to tell me explicitly, "Dear Don, I have read exercise N and its answer very carefully, and I believe that it is 100% correct," where N is one of the following exercises in Volume 4 Fascicle 6:

• 6: Establish a (previously unpublished) lower bound on van der Waerden numbers W(3,k)
• 57: Find a 6-gate way to match a certain 20-variable Boolean function at 32 given points
• 165: Devise an algorithm to compute the largest positive autarky of given clauses
• 177 (Felix): Enumerate independent sets of flower snark edges
• 212: Prove that partial latin square construction is NP-complete
• 282: Find a linear certificate of unsatisfiability for the flower snark clauses
• 306-308: Study the reluctant doubling strategy of Luby, Sinclair, and Zuckerman
• 318: Find the best possible Local Lemma for d-regular dependency graphs with equal weights
• 322: Show that random-walk methods cannot always find solutions of locally feasible problems using independent random variables
• 335: Express the Möbius series of a cocomparability graph as a determinant
• 339: Relate generating functions for traces to generating functions for pyramids
• 347: Find the best possible Local Lemma for a given chordal graph with arbitrary weights
• 356: Prove the Clique Local Lemma
• 363: Study the stable partial assignments of a satisfiability problem
• 386: Prove that certain CDCL solvers will efficiently refute any clauses that have a short certificate of unsatisfiability
• 428 (Daniel): Show that Boolean functions don't always have forcing representations of polynomial size
• 442-444: Study the UC and PC hierarchy of progressively harder sets of clauses
• 518: Reduce 3SAT to testing the permanent of a {-1,0,1,2} matrix for zero

Please don't be alarmed by the highly technical nature of these examples; more than 100 of the other exercises are completely non-scary, indeed quite elementary.
But of course I do want to go into high-level details also, for the benefit of advanced readers; and those darker corners of my books are naturally the most difficult to get right. Hence this plea for help. Remember that you don't have to work the exercise first. You're allowed to peek at the answer; in fact, you're even encouraged to do so. Please send success reports to the usual address for bug reports ([email protected]), if you have time to provide this extra help. Thanks in advance!

### Material

TAOCP Volume 4, Fascicle 6 can be found in the computer science library. An earlier draft can be found in the archive of Knuth's website.

### Rules

In order to help Don Knuth, each participant is required to select at least one of the above exercises. They then proceed to write a short proposal outlining the background, question and answer to be presented. These proposals are reviewed by the staff and may be rejected, accepted, or accepted subject to conditions. The process will be visualized on this website by signs attached to the topics, meaning available / in review / accepted respectively. During the next semester, we expect each participant to refine their presentation in consultation with the organisers. This involves getting help where appropriate. Moreover, a detailed solution to the exercise is to be submitted. Finally, the seminar itself will be held in compact form. During the event, each student presents their topic, the posed question and its answer, and proposes a verdict concerning Professor Knuth's version. The other participants then vote whether or not to follow this advice. Opponents will demand clarification from supporters until the audience comes to a unanimous verdict. In this case, randomly selected students will be asked to substantiate their opinion. Thus, by preparation and discussion, everyone should in the end be convinced that the group may give a reasonable statement about the quality of the exercises in question.
### Prerequisites Good grade in the Data Structures and Algorithms lecture. Active interest in mathematics/theoretical computer science. Willingness to contribute. Command of the English language.
https://indico.cern.ch/event/344485/contributions/1744716/
# ICRC2015

July 29, 2015 to August 6, 2015, World Forum, Den Haag (Europe/Amsterdam timezone)

## Full-Sky Analysis of Cosmic-Ray Anisotropy with IceCube and HAWC

Jul 30, 2015, 11:30 AM (15 min), World Forum Theater, Churchillplein 10, 2517 JW Den Haag, The Netherlands

Oral contribution (CR-EX)

### Speaker

Juan Carlos Diaz Velez (University of Wisconsin-Madison)

### Description

During the past two decades, experiments in both the Northern and Southern hemispheres have observed a small but measurable energy-dependent sidereal anisotropy in the arrival direction distribution of galactic cosmic rays. The relative amplitude of the anisotropy is $10^{-4} - 10^{-3}$. However, each of these individual measurements is restricted by limited sky coverage, and so the pseudo-power spectrum of the anisotropy obtained from any one measurement displays a systematic correlation between different multipole modes $C_\ell$. To address this issue, we present the current state of a joint analysis of the anisotropy on all angular scales using cosmic-ray data from the IceCube Neutrino Observatory located at the South Pole (90° S) and the High-Altitude Water Cherenkov (HAWC) Observatory located at Sierra Negra, Mexico (19° N). We present a combined skymap and an all-sky power spectrum in the overlapping energy range of the two experiments at ~10 TeV. We describe the methods used to combine the IceCube and HAWC data, address the individual detector systematics and study the region of overlapping field of view between the two observatories.

Collaboration: IceCube

### Primary authors

Daniel Fiorino (University of Wisconsin-Madison), Juan Carlos Diaz Velez (University of Wisconsin-Madison)

### Presentation materials

There are no materials yet.
https://stats.stackexchange.com/questions/123345/closed-form-for-the-variance-of-a-sum-of-two-estimates-in-logistic-regression
# Closed form for the variance of a sum of two estimates in logistic regression?

In logistic regression with an intercept term and with at least one independent variable which is categorical, is there a closed form for the variance of the sum of the intercept and the coefficient of the categorical variable, or do you have to sample from a multivariate distribution with the means and variances of the intercept and the coefficient to get a reliable measure of the variance of this sum?

$P(y) = \frac{\exp(y)}{1 + \exp(y)}$

$y = \beta_{0} + \beta_{1}x + \epsilon$

where x is categorical. Would the formula for the variance of a sum (of two random variables) be applicable here?

$Var(\beta_{0} + \beta_{1}) = Var(\beta_{0}) + Var(\beta_{1}) + 2Cov(\beta_{0}, \beta_{1})$

The reason I ask is that in this comment I got the impression that no closed form existed for the variance in question, and the advice was to sample from a multivariate distribution with the means and variances of $\beta_{0}$ and $\beta_{1}$:

```r
set.seed(1)
dependent.var <- sample(c(TRUE, FALSE), 100, replace = TRUE, prob = c(0.3, 0.7))
independent.var <- ifelse(dependent.var,
                          sample(c("Red", "Blue"), replace = TRUE, size = 10, prob = c(0.8, 0.2)),
                          sample(c("Red", "Blue"), size = 10, replace = TRUE, prob = c(0.4, 0.6)))
table(dependent.var, independent.var)
##              independent.var
## dependent.var Blue Red
##         FALSE   42  26
##         TRUE     7  25

my.fit <- glm(dependent.var ~ 1 + independent.var, family = binomial(logit))
coef(summary(my.fit))
##                     Estimate Std. Error   z value     Pr(>|z|)
## (Intercept)        -1.791759  0.4082482 -4.388897 0.0000113927
## independent.varRed  1.752539  0.4951042  3.539737 0.0004005255

vcov(my.fit)
##                    (Intercept) independent.varRed
## (Intercept)          0.1666666         -0.1666666
## independent.varRed  -0.1666666          0.2451282
```

The logit of TRUE for a "Red" case is $\beta_{0}+\beta_{1} \approx -0.039$. Is the variance for this estimate exactly

```r
vcov(my.fit)[1,1] + vcov(my.fit)[2,2] + 2 * vcov(my.fit)[1,2]
## [1] 0.07846154
```

?
Or is this only an approximation, and a more accurate measure is to be found by sampling, e.g.

```r
library(MASS)
var(rowSums(mvrnorm(n = 1E7, mu = coef(my.fit), Sigma = vcov(my.fit))))
## [1] 0.07842985
```

?

In this simple example, the sampling method does not seem to provide more accurate estimates of the variance (using 1E7 samples). Here it is stated that "There is a correspondance between the covariance matrix of the fit parameters and Δχ2 confidence regions only for the case of Gaussian uncertainties on the input measurements.". Is that a reason against relying on the closed form above, or is there perhaps another reason for the advice to sample instead of deriving the variance analytically in cases like this?

EDIT (in response to the answer given by StasK): The advice I originally got was to simulate from the full model, not from the vcov(), so here is the code to simulate from the full model:

```r
library(arm)
sim.i <- sim(my.fit, 100000)
logit.for.TRUE.red <- sim.i@coef[,1] + sim.i@coef[,2]
var(logit.for.TRUE.red)
## [1] 0.07781206
```

• You should avoid conflating population parameters ($\beta$, fixed but unknown quantities with variance 0) with their estimates ($\hat \beta$, which are random variables). – Glen_b -Reinstate Monica Nov 10 '14 at 17:36

Adding simulation on top of that estimate is relatively pointless. Bayesians out there might argue that you get better approximation for the distribution of the estimates if you sample from the posterior, but I don't think that's your question, and that's not what you are doing. But other than that, you won't in any way be better off simulating if all you use is vcov(). (And if you are simulating from the multivariate normal, the mean and the variance are independent, so if you are only interested in the variance, it does not matter what mean you use; since you are interested in variance, the vcov() is the only relevant part.)

• Thanks for your answer.
As I read your last two sentences I realised that the advice I originally got was not simulating with vcov(), but simulating from the full model using sim() from the package arm. I think I must have "invented" simulating from vcov() by mistake :-) Is simulating from the full model better than deriving the standard error analytically? – Hans Ekbrand Nov 10 '14 at 8:55
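For what it's worth, the identity at the heart of the question is easy to verify outside R as well. The sketch below (Python rather than R, reusing the vcov values printed in the question) computes the closed form and cross-checks it with a quick Monte Carlo draw from the corresponding bivariate normal:

```python
import math
import random

# vcov(my.fit) as printed in the question
s11, s12, s22 = 0.1666666, -0.1666666, 0.2451282

# Closed form: Var(b0 + b1) = Var(b0) + Var(b1) + 2*Cov(b0, b1)
closed_form = s11 + s22 + 2 * s12
print(closed_form)  # ~0.0784616, matching the R output above

# Monte Carlo cross-check: draw (b0, b1) from N(0, Sigma) via a Cholesky factor.
# The means only shift the sum, so they do not affect its variance; use zeros.
L11 = math.sqrt(s11)
L21 = s12 / L11
L22 = math.sqrt(s22 - L21 ** 2)
random.seed(0)
sums = []
for _ in range(200_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    sums.append((L11 + L21) * z1 + L22 * z2)
mean = sum(sums) / len(sums)
mc_var = sum((s - mean) ** 2 for s in sums) / (len(sums) - 1)
print(mc_var)  # close to the closed form, up to Monte Carlo noise
```

This illustrates StasK's point: the sampling route can only reproduce (with noise) what the covariance matrix already gives in closed form.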
https://en.wikipedia.org/wiki/Biorthogonal_wavelet
# Biorthogonal wavelet

A biorthogonal wavelet is a wavelet where the associated wavelet transform is invertible but not necessarily orthogonal. Designing biorthogonal wavelets allows more degrees of freedom than orthogonal wavelets. One additional degree of freedom is the possibility to construct symmetric wavelet functions.

In the biorthogonal case, there are two scaling functions ${\displaystyle \phi ,{\tilde {\phi }}}$, which may generate different multiresolution analyses, and accordingly two different wavelet functions ${\displaystyle \psi ,{\tilde {\psi }}}$. So the numbers M and N of coefficients in the scaling sequences ${\displaystyle a,{\tilde {a}}}$ may differ. The scaling sequences must satisfy the following biorthogonality condition

${\displaystyle \sum _{n\in \mathbb {Z} }a_{n}{\tilde {a}}_{n+2m}=2\cdot \delta _{m,0}}$.

Then the wavelet sequences can be determined as

${\displaystyle b_{n}=(-1)^{n}{\tilde {a}}_{M-1-n}\quad \quad (n=0,\dots ,N-1)}$

${\displaystyle {\tilde {b}}_{n}=(-1)^{n}a_{M-1-n}\quad \quad (n=0,\dots ,N-1)}$.

## References

• Stéphane G. Mallat (1999). A Wavelet Tour of Signal Processing. Academic Press. ISBN 978-0-12-466606-1.
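The biorthogonality condition above is easy to check mechanically for a given pair of scaling sequences. The sketch below verifies it for the orthogonal special case $a = \tilde a = (1,1)$ (Haar) and for a 5-tap/3-tap pair; the latter coefficient values are one common normalization of the LeGall/CDF 5/3 pair, and since normalization conventions differ between sources they should be treated as an assumed example rather than a canonical reference:

```python
def biorthogonal(a, a_tilde, offset=0, offset_tilde=0, tol=1e-12):
    """Check sum_n a_n * a~_{n+2m} == 2*delta_{m,0} for every shift m.

    offset / offset_tilde give the index n of the first listed coefficient,
    so that sequences can be centred around n = 0."""
    idx_a = {offset + i: float(v) for i, v in enumerate(a)}
    idx_t = {offset_tilde + i: float(v) for i, v in enumerate(a_tilde)}
    for m in range(-(len(a) + len(a_tilde)), len(a) + len(a_tilde) + 1):
        s = sum(v * idx_t.get(n + 2 * m, 0.0) for n, v in idx_a.items())
        expected = 2.0 if m == 0 else 0.0
        if abs(s - expected) > tol:
            return False
    return True

# Orthogonal wavelets are the special case a == a~: Haar, a = (1, 1).
print(biorthogonal([1, 1], [1, 1]))  # True

# A 5/3 pair (LeGall/CDF 5/3 in the normalization where each sequence sums to 2):
a  = [-0.25, 0.5, 1.5, 0.5, -0.25]   # indices -2..2
at = [0.5, 1.0, 0.5]                 # indices -1..1
print(biorthogonal(a, at, offset=-2, offset_tilde=-1))  # True
```

Note that with these conventions M = 3 and N = 5: the two sequences have different lengths, which is exactly the extra freedom the article describes.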
https://physics.stackexchange.com/questions/192794/why-does-mechanical-energy-have-to-equal-zero-to-find-escape-velocity
# Why does mechanical energy have to equal zero to find escape velocity?

An object orbiting the earth has total mechanical energy equal to
\begin{align*} E^{mech} = \frac{1}{2} m v^2 - \frac{GMm}{r} \end{align*}
with $M$ the mass of the earth and $r$ the distance. My course notes say we have to set $E^{mech} = 0$ to find the escape velocity, which then gives
\begin{align*} v = \sqrt{\frac{2GM}{r}} \end{align*}
But I don't understand why we should do this. In general we have $E = K_1 + U_1 = K_2 + U_2$. Now I see that as $r \rightarrow \infty$, $U(r) \rightarrow 0$, so $U_2$ becomes zero. But why should $K_2$ ever be set to zero? That means the object would come to rest somewhere, which we cannot know.

• The escape velocity is defined to be the minimum velocity required to escape the gravitational well. What do you think the physical significance of "minimum" is? – lemon Jul 6 '15 at 12:12
• $K_2$ does not have to be set to zero. It represents the kinetic energy the particle would have once at infinity: if you want it to be $\neq 0$ then set it to be $\neq 0$. The usual terminology is, though, that the minimum energy is what's needed to be set at infinity with zero kinetic energy remaining. – gented Jul 6 '15 at 12:55

$$E = K_1 + U_1 = K_2 + U_2$$ where $K_1=\frac{mv^2}{2}$ and $U_1=- \frac{GMm}{r}$. Since the range of gravitational forces is infinite, you say (theoretically, not practically) that an object has escaped Earth's gravitation when it is infinitely far away, so $U_2 = 0$. Now, if the object reached velocity = 0 before it is infinitely far away, then (neglecting the rest of the universe) it would fall back to Earth and hence didn't escape. So it should still have a velocity when it is infinitely far away. This velocity may be as small as you want, so the border point between falling back to Earth and escaping is velocity = 0 at infinity. So take $v_2 =0$ and you find the minimal value such that the object's velocity doesn't become zero before reaching infinity.
When a rocket is fired from Earth with a sudden impulse, its total energy is given by: $$E_k \text{ (kinetic energy)} + E_p \text{ (potential energy)}= \frac{1}{2}mv^2 - \frac{GMm}{r} = \text{constant}$$ The potential energy here is negative because the reference point where potential energy is zero is chosen at infinite separation, where the rocket is unbound. Hence, after the rocket is fired (with no propulsion after the initial impulse), it is bound if $$E_{total} < 0$$ and unbound if $$E_{total} \geq 0$$ In your notation, $E^{mech}$ is $E_{total}$. Setting $E_{total} = 0$, the threshold between the two cases, lets you calculate what the escape velocity must be.
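The threshold condition $E_{total} = 0$ can be checked numerically. A minimal sketch using standard approximate values for Earth (the constants below are mine, not from the answers):

```python
import math

# v = sqrt(2*G*M/r) for launch from the Earth's surface, using standard
# approximate physical constants (assumed here, not given in the post).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of the Earth, kg
r = 6.371e6        # mean radius of the Earth, m

v_escape = math.sqrt(2 * G * M / r)
print(f"escape velocity ≈ {v_escape:.0f} m/s")   # about 11.2 km/s
```

Note that the mass of the rocket cancels, which is why the escape velocity depends only on $M$ and $r$.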
https://leanprover-community.github.io/archive/stream/116395-maths/topic/derived.20functors.html
## Stream: maths

### Topic: derived functors

#### Scott Morrison (Apr 29 2021 at 13:16):

I still have some moderate sorries to fill in, but I just got

def functor.left_derived (n : ℕ) (F : C ⥤ D) [F.additive] : C ⥤ D :=
  projective_resolutions C ⋙ F.map_homotopy_category _ _ ⋙ homotopy_category.homology_functor D _ n

to typecheck. :-)

#### Scott Morrison (Apr 29 2021 at 13:17):

I think I am going to keep going forward, and make sure that I can calculate this using a chosen projective resolution, before going back to the sorries.

#### Scott Morrison (Apr 29 2021 at 13:18):

Tor and Ext and group (co)homology, here we come. :-)

#### Oliver Nash (Apr 29 2021 at 13:20):

Oh wow, this is big. Nice one!

#### Johan Commelin (Apr 29 2021 at 13:43):

@Scott Morrison This is incredible! Really cool!

Last updated: Jun 17 2021 at 17:28 UTC
https://cstheory.stackexchange.com/questions/29271/quanitifier-free-presburger-arithmetic-upper-bound-on-solution-size
# Quantifier-Free Presburger Arithmetic: Upper bound on solution size?

DISCLAIMER: I had originally posted this to CS.SE, but I've deleted it and moved it here, since it received little attention and I think it is a research-level question.

According to this paper, if there is a solution to a quantifier-free Presburger formula, there is a solution whose size in bits is polynomial in the problem size. This puts the problem in $NP$, easier than arbitrary Presburger formulas. However, the paper doesn't explicitly reference where this bound comes from, only mentioning a connection to Integer Linear Programming. Does anyone know the exact bound on the solution size, or a reference for one? I have another problem which I can phrase as a QF Presburger formula, and I'd like to find a definite bound on the solution size. I'm also trying to work it into an actual software system, so simply knowing "there exists a polynomial" doesn't help me much.

A simple corollary of their result follows. Say we have two integer systems, $A x = b$ and $C x \ge d$, where we want $x \in \mathbb Z^n$. Consider the determinants of all square submatrices of the matrix $E = \left(\matrix{A & b \\ C & d}\right)$ and denote by $M$ an upper bound on their absolute values. (In other words, set $M$ to be larger than or equal to the absolute value of every minor of $E$.) Now, if there exists an $x \in \mathbb Z^n$ satisfying both $A x = b$ and $C x \ge d$, then there exists such an $x$ whose components are at most $(n + 1) M$ in absolute value. It remains to note that the determinant of a $k \times k$ matrix $S$ is at most $k!\, \|S\|_{\mathsf{max}}^k$, where $\|S\|_{\mathsf{max}} = \max |s_{i j}|$. Combining these two bounds gives a specific polynomial bound on the size of a smallest solution (this is already an easy exercise).
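For small systems the bound from the answer can be computed directly by brute force over all minors. A sketch; the tiny system $x + 2y = 3$, $x - y \ge 0$ below is a made-up example, and the Laplace-expansion determinant is only meant for such tiny matrices:

```python
from itertools import combinations

def det(m):
    """Integer determinant by Laplace expansion along the first row
    (exponential time, fine for the tiny matrices used here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def minor_bound(E, n_vars):
    """Return (M, (n_vars + 1) * M): the largest absolute value of a minor
    of E, and the resulting bound on the components of a smallest solution."""
    rows, cols = len(E), len(E[0])
    M = 0
    for k in range(1, min(rows, cols) + 1):
        for ri in combinations(range(rows), k):
            for ci in combinations(range(cols), k):
                M = max(M, abs(det([[E[r][c] for c in ci] for r in ri])))
    return M, (n_vars + 1) * M

# Hypothetical system: x + 2y = 3 (equality), x - y >= 0 (inequality), n = 2.
E = [[1, 2, 3],
     [1, -1, 0]]
M, bound = minor_bound(E, 2)
print(M, bound)   # M = 3, so components of some solution are bounded by 9
```

The brute-force enumeration is exponential in the matrix size; the point is only to make the statement of the bound concrete, not to compute it efficiently.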
https://www.envoyproxy.io/docs/envoy/v1.16.0/api-v2/config/trace/v2/datadog.proto
This extension may be referenced by the qualified name envoy.tracers.datadog

Note: This extension is intended to be robust against untrusted downstream traffic. It assumes that the upstream is trusted.

{
  "collector_cluster": "...",
  "service_name": "..."
}

collector_cluster (string, REQUIRED) The cluster to use for submitting traces to the Datadog agent.

service_name (string, REQUIRED) The name used for the service when traces are generated by envoy.
https://math.stackexchange.com/questions/1056228/applying-newton-raphson-method-to-a-cdot-b-2-c-cdot-d4e-cdot-fd
# Applying Newton-Raphson method to $a\cdot b^{-2}=c\cdot d^4+e\cdot f(d)$

I am familiar with the method and its application in classic problems, but I have trouble tackling the function I need to solve with it. The variables in the problem:

• Real numbers, all known constants: $a,c,e,h,i,k,l,m,n,o$,
• Positive real numbers, variables: $b,d$,
• Functions: $f,g,j$.

Given is the equation $a\cdot b^{-2}=c\cdot d^4+e\cdot f(d)$, where $f(d)=g(d)/\sqrt{2\cdot\pi\cdot h\cdot d}$ and $\log(g)=i/d$. The instructions say that solving the first equation for $d$ from a given $b$, and thus finding $f(b)$ (the essence of which confuses me), requires the Newton-Raphson method. They then say that in similar cases people often use an analytic formula approximating $f(b)$, normalized to unity at $b=1$: $j(b)=(k\cdot (b/l)^m)\cdot (1+(b/l)^n)^o$. They then say to use the approximation $f(b)=f_0\cdot j(b)$ and that I will find $f_0=f(1)$ (my second source of confusion) by Newton-Raphson iteration from the first equation.

What I basically need is $f_0$, and I do not understand how and where $f(b)$ comes into play, or how I get to a point where I can solve the problem with the Newton-Raphson method. I have looked for solutions elsewhere, but this seems to be a per-case problem, so I decided to ask. Not having had any mathematical instruction in English may be a contributing factor to the issue.

I should sleep more. The problem is a non-problem; all I had to do was plug what I've got into a single equation.

$\log(g)=i/d$ gives $g=10^{(i/d)}$, and $f(d)=g(d)/\sqrt{2\cdot\pi \cdot h\cdot d}$ then gives $f(d)=10^{(i/d)}/\sqrt{2\cdot\pi\cdot h\cdot d}$.

Plugged into the main equation, this gives us: $a\cdot b^{-2}=c\cdot d^4+e\cdot 10^{(i/d)}/\sqrt{2\cdot\pi\cdot h\cdot d}$

which yields a function solvable by Newton-Raphson (after moving everything to one side): $F(d)=c\cdot d^4+e\cdot 10^{(i/d)}/\sqrt{2\cdot\pi\cdot h\cdot d} - a\cdot b^{-2}=0$
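The final one-sided equation can be handed to Newton-Raphson directly. A minimal sketch with made-up constants ($a=b=c=e=h=1$, $i=0$, chosen only so the iteration has a nearby root), using a central-difference derivative instead of working out the derivative by hand:

```python
import math

# Residual of the equation a*b**-2 = c*d**4 + e*10**(i/d)/sqrt(2*pi*h*d),
# with all constants assumed equal to 1 (and i = 0) purely for illustration.
def residual(d, a=1.0, b=1.0, c=1.0, e=1.0, h=1.0, i=0.0):
    return c * d**4 + e * 10 ** (i / d) / math.sqrt(2 * math.pi * h * d) - a * b**-2

def newton(func, d0, tol=1e-12, max_iter=50):
    """Newton-Raphson with a central-difference estimate of the derivative."""
    d = d0
    for _ in range(max_iter):
        eps = 1e-6
        deriv = (func(d + eps) - func(d - eps)) / (2 * eps)
        step = func(d) / deriv
        d -= step
        if abs(step) < tol:
            break
    return d

root = newton(residual, 0.85)   # root of d**4 + 1/sqrt(2*pi*d) = 1
print(root, residual(root))
```

With real constants one would also want to check that the starting guess lies in the basin of attraction, since the $10^{i/d}$ term can make the residual very steep for small $d$.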
https://www.adriaansblog.com/post/create-fuel-file/
Create a fuel file in Ricardo WAVE

The following instructions can be used to create a fuel file for Ricardo WAVE. You can use this to create a .fue file consisting of either one component or a blend of several:

1. Ensure you have downloaded the .dat files of the fuels you want to blend.
2. Open a command prompt window (Start button -> Search -> "cmd" -> click on cmd.exe).
3. Ensure the command prompt is opened in the folder where the .dat files are saved. To change to the correct folder path:
   • Open the folder containing the files in Windows Explorer.
   • Click on the little folder icon on the left of the file path and copy the file path.
   • Use the command cd followed by the file path and press Enter.
   • The new line in the command prompt will now show that it is open in the folder where the .dat files are.
4. Now enter the following command:

   buildfuel -d diesel.dat ethanol.dat FAME.dat -f 0.89 0.09 0.02 -p Blended_Fuel

The above command creates a B2E9 blend. First the .dat files of all the fuels in the blend are listed, then the fraction of each fuel in the blend, and then the name you want to give the fuel file. For binary blends the command line is:

   buildfuel -d diesel.dat ethanol.dat -f 0.80 0.20 -p E20

The above command creates a binary E20 blend.
https://math.stackexchange.com/questions/2687788/what-is-the-sum-of-sum-n-1-infty-left-frac2n1n42n3n2-right
# What is the sum of $\sum_{n=1}^\infty\left(\frac{2n+1}{n^4+2n^3+n^2}\right)$?

$$\sum_{n=1}^\infty\left(\frac{2n+1}{n^4+2n^3+n^2}\right)=\sum_{n=1}^\infty\left(\frac{2n+1}{n^2}\frac1{{(n+1)}^2}\right)$$ I assume that I should get a telescoping sum in some way, but I couldn't find it yet.

Hint: $2n+1=(n+1)^2-n^2$, and telescope. Then for an integer $N>1$ we have $$\sum_{n=1}^N\left(\frac{2n+1}{n^4+2n^3+n^2}\right)=1-\frac1{(N+1)^2}$$

In detail: split the numerator as $$\frac{2n+1}{n^2}\frac{1}{(n+1)^2}=\frac{n+(n+1)}{n^2(n+1)^2}=\frac{1}{n(n+1)^2}+\frac{1}{n^2(n+1)}$$ Using the telescoping element $$\frac{1}{n(n+1)}=\frac{1}{n}-\frac{1}{n+1}$$ we get $$\frac{2n+1}{n^2}\frac{1}{(n+1)^2}=\frac{1}{n(n+1)}-\frac{1}{(n+1)^2}+\frac{1}{n^2}-\frac{1}{n(n+1)}=\frac{1}{n^2}-\frac{1}{(n+1)^2}$$ and the result is simply 1, as WolframAlpha confirms: https://www.wolframalpha.com/input/?i=sum((2*n%2B1)%2F((n%5E2(n%2B1)%5E2)
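The closed form for the partial sum is easy to check numerically:

```python
# Check the telescoping identity: the partial sum up to N should equal
# 1 - 1/(N+1)**2, so the infinite series converges to 1.
def partial_sum(N):
    return sum((2 * n + 1) / (n**4 + 2 * n**3 + n**2) for n in range(1, N + 1))

for N in (1, 10, 1000):
    print(N, partial_sum(N), 1 - 1 / (N + 1) ** 2)
```

For N = 1 both sides give 3/4, and as N grows the partial sums approach 1 as the identity predicts.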
http://nrich.maths.org/6798/solution?nomenu=1
The middle sized square passes through the centres of the four circles. Each side of the middle sized square together with the edges of the outer square creates a right angled isosceles triangle with angles of 45$^\circ$. Thus the angles these sides make with the inner square are also 45$^\circ$. Each side of the middle sized square bisects the area of the circle. The inner half of that circle is made up of two shaded segments with angles of 45$^\circ$ which together are equal in area to the unshaded right angled segment.
https://genieofhats.com/randfontein/continuous-rv-probability-functions-pdf.php
## probability Expected value of this continuous RV scipy.stats.rv_discrete — SciPy v1.3.2 Reference Guide. Probability Functions Discrete (pdf) Continuous (pdf) Basic probability Notation and definitions Conditional probability discrete RV's Definitions and Formulas (pdf) Tutorial (pdf) Discrete random variables Conditional probability continous RV's, When deflning a distribution for a continuous RV, the PMF approach won’t quite work since summations only work for a flnite or a countably inflnite number of items. Instead they are based on the following Deflnition: Let X be a continuous RV. The Probability Density Function (PDF) is a function f(x) on the range of X that satisfles the. ### Probability density function MATLAB pdf Continuous Random Variables University of Washington. computer functions before breaking down is a continuous random variable with probability density function given by f(x) = 8 <: λe−x/100 x ≥ 0 0 x < 0 Find the probability that (a) the computer will break down within the first 100 hours; (b) given that it it still working after 100 hours, it breaks down within the next 100 hours. Solution. 29, 2 Discrete Random Variables 3 Continuous Random Variables 4 Cumulative Distribution Functions 5 Great Expectations 6 Functions of a Random Variable 7 Bivariate Random Variables 8 Conditional Distributions 9 Independent Random Variables 10 Conditional Expectation 11 Covariance and Correlation 12 Moment Generating Functions Goldsman 5/21/14 24 / 147. 1 correspondence between the pdf and the mgf of a given random variable X. That is, pdf f (x ) 1 mgf M (t) m o X . Note: One could use this property to identify the probability distribution based on the moment generating function. Special mathematical expectations for the binomial RV. 1. Let X~B(n, p), please derive the moment generating scipy.stats.rv_continuous The type of generic moment calculation to use: 0 for pdf, 1 (default) for ppf. a float, optional. 
Lower bound of the support of the distribution, default is minus infinity. b float, optional. Probability density function at x of the given RV. Continuous RVs (Part I) Outline: We also learned how to work with a function of a discrete RV. † Continuous RVs on the other hand take values from uncountable sets, e.g., Sx is an uncount-able set. The function fX(x) is called probability density function (pdf) of the RV X. Therefore, the PDF is always a function which gives the probability of one event, x. If we denote the PDF as function f, then Pr(X = x) = f(x) A probability distribution will contain all the outcomes and their related probabilities, and the probabilities will sum to 1. Session 2: Probability distributionsand density functions – … Continuous RVs (Part I) Outline: We also learned how to work with a function of a discrete RV. † Continuous RVs on the other hand take values from uncountable sets, e.g., Sx is an uncount-able set. The function fX(x) is called probability density function (pdf) of the RV X. 17 Law of Total Variance. Var(X|Y ) is a random variable that is a function of Y. 18 Sum of a random number of iid RVs (the variance is taken with respect to X). 5-12-2012 · This is the third in a sequence of tutorials about continuous random variables. I explain how to calculate the mean (expected value) and variance of a continuous random variable. Tutorials on continuous random variables 1 correspondence between the pdf and the mgf of a given random variable X. That is, pdf f (x ) 1 mgf M (t) m o X . Note: One could use this property to identify the probability distribution based on the moment generating function. Special mathematical expectations for the binomial RV. 1. 
Let X~B(n, p), please derive the moment generating Continuous RVs probability density function I Possible values for continuous RV from ECE 440 at University of Rochester Probability Densities In a continuous space, we describe distributions with probability density functions (PDFs) rather than assigned probability values. A valid probability density of a continuous random variable X in R, f X(x), requires I Non-negativity: 8x 2R f X(x) 0 I Normalized: R R f X(x)dx = 1 6-11-2019 · The integral of the continuous density function integrated over all real numbers is 1. They may have nonzero probability at some real numbers. They have zero probability at every real number. The probability of an event is found by summing the values of the discrete pdf … Notes: Joint Probability and Independence for Continuous RV’s CS 3130 / ECE 3530: Probability and Statistics for Engineers October 23, 2014 scipy.stats.rv_continuous The type of generic moment calculation to use: 0 for pdf, 1 (default) for ppf. a float, optional. Lower bound of the support of the distribution, default is minus infinity. b float, optional. Probability density function at x of the given RV. 1 correspondence between the pdf and the mgf of a given random variable X. That is, pdf f (x ) 1 mgf M (t) m o X . Note: One could use this property to identify the probability distribution based on the moment generating function. Special mathematical expectations for the binomial RV. 1. Let X~B(n, p), please derive the moment generating Characterization Probability density function. The probability density function of the continuous uniform distribution is: = {− ≤ ≤, < > The values of f(x) at the two boundaries a and b are usually unimportant because they do not alter the values of the integrals of f(x) … Continuous RVs probability density function I Possible values for continuous RV from ECE 440 at University of Rochester a continuous random variable (RV) Probability density function (pdf) A curve, symbol f(x). 
Identify the continuous probability function that applies to the example below: The amount of time spouses shop for birthday gifts for their spouse for an average of eight minutes. Introduction to Random Variables (RVs) Outline: 1. informal deflnition of a RV, 2. three types of a RV: a discrete RV, a continuous RV, and a mixed RV, 3. a general rule to flnd probability of events concerning a RV, 4. cumulative distribution function (CDF) of a RV, 5. formal deflnition of a RV using CDF, scipy.stats.rv_continuous — SciPy v1.3.1 Reference Guide. Cumulative Distribution Functions and Expected Values The Cumulative Distribution Function (cdf) ! The cumulative distribution function F(x) for a continuous RV X is defined for every number x by: ! For each x, F(x) If X is a continuous RV with pdf f(x) and cdf F(x). !, 6-11-2019 · The integral of the continuous density function integrated over all real numbers is 1. They may have nonzero probability at some real numbers. They have zero probability at every real number. The probability of an event is found by summing the values of the discrete pdf …. ### 5. Cont. Rand. Vars. Sacramento State Probability Distributions for Discrete RV. DISTRIBUTION FUNCTIONS 9 1.4 Distribution Functions Definition 1.8. The probability of the event The function pX is called the probability mass function (pmf) of the random vari-able X, We define the probability density function (pdf) of a continuous rv as: fX(x) = d dx, Then the probability density function (pdf) of X is a function f(x) such that for any two numbers a and b with a ≤ b: a b A a. The expected or mean value of a continuous rv X with pdf f(x) is: Discrete Let X be a discrete rv that takes on values in the set D and has a pmf f(x).. Probability density function MATLAB pdf. 
computer functions before breaking down is a continuous random variable with probability density function given by f(x) = 8 <: λe−x/100 x ≥ 0 0 x < 0 Find the probability that (a) the computer will break down within the first 100 hours; (b) given that it it still working after 100 hours, it breaks down within the next 100 hours. Solution. 29, Notes on Continuous Distributions Summary The following table compares discrete and continuous distributions. There are also distributions that are not purely discrete nor purely continuous; see, e.g., Example 9a, page 184 of Ross (7th). Discrete r.v. Continuous r.v.. ### Chap. 5 Joint Probability Distributions scipy.stats.rv_discrete — SciPy v1.3.2 Reference Guide. 5-12-2012 · This is the third in a sequence of tutorials about continuous random variables. I explain how to calculate the mean (expected value) and variance of a continuous random variable. Tutorials on continuous random variables https://en.wikipedia.org/wiki/Expected_value a continuous random variable (RV) Probability density function (pdf) A curve, symbol f(x). Identify the continuous probability function that applies to the example below: The amount of time spouses shop for birthday gifts for their spouse for an average of eight minutes.. \$\begingroup\$ @Miguel Could you expand a bit? I am trying to reconcile what Max did and what I did (see attempt above). Since we just saw the theorem mentioned above in class and the function given was differentiable and invertible I was trying to do a direct application of the theorem, hence the thought of first completing the square. Probability Densities In a continuous space, we describe distributions with probability density functions (PDFs) rather than assigned probability values. A valid probability density of a continuous random variable X in R, f X(x), requires I Non-negativity: 8x 2R f X(x) 0 I Normalized: R R f X(x)dx = 1 1 correspondence between the pdf and the mgf of a given random variable X. 
That is, pdf f (x ) 1 mgf M (t) m o X . Note: One could use this property to identify the probability distribution based on the moment generating function. Special mathematical expectations for the binomial RV. 1. Let X~B(n, p), please derive the moment generating Chap. 5: Joint Probability Distributions A joint probability density function (pdf) of X and Y is a function f(x,y) Both RV are continuous (p. 193). Conditional Probability Density Function (pdf) of Y given X = x is as long as The point: marginal pdf of condition Notes on Continuous Distributions Summary The following table compares discrete and continuous distributions. There are also distributions that are not purely discrete nor purely continuous; see, e.g., Example 9a, page 184 of Ross (7th). Discrete r.v. Continuous r.v. Therefore, the PDF is always a function which gives the probability of one event, x. If we denote the PDF as function f, then Pr(X = x) = f(x) A probability distribution will contain all the outcomes and their related probabilities, and the probabilities will sum to 1. Session 2: Probability distributionsand density functions – … Continuous RVs (Part I) Outline: We also learned how to work with a function of a discrete RV. † Continuous RVs on the other hand take values from uncountable sets, e.g., Sx is an uncount-able set. The function fX(x) is called probability density function (pdf) of the RV X. Probability distribution of continuous random variable is called as Probability Density function or PDF. Given the probability function P(x) for a random variable X, the probability that X belongs to A, where A is some interval is calculated by integrating p(x) over the set A i.e. scipy.stats.rv_discrete This class is similar to rv_continuous. The main differences are: the support of the distribution is a set of integers. 
instead of the probability density function, pdf (and the corresponding private _pdf), this class defines the probability mass function, pmf.

–The cumulative distribution function (CDF) of a continuous RV is • the probability that the RV X is smaller than or equal to x. GAUSSIAN RV • Probability density function (pdf) vs. histogram – years worked of 1820 employees in a cereal factory – when the bin width goes to 0.

Cumulative Distribution Functions. Proposition: Let X be a continuous rv with pdf f(x) and cdf F(x). Then for any number a, P(X > a) = 1 − F(a), and for any two numbers a and b with a ≤ b, P(a ≤ X ≤ b) = F(b) − F(a).

Chap. 5: Joint Probability Distributions. A joint probability density function (pdf) of X and Y is a function f(x, y); both RV are continuous (p. 193). The conditional probability density function (pdf) of Y given X = x is defined as long as the marginal pdf of the conditioning value is positive.

There is a one-to-one correspondence between the pdf and the mgf of a given random variable X: pdf f(x) ↔ mgf M_X(t). Note: one could use this property to identify the probability distribution based on the moment generating function. Special mathematical expectations for the binomial RV: 1. Let X ~ B(n, p); please derive the moment generating function.

Notes on Continuous Distributions. Summary: the following table compares discrete and continuous distributions. There are also distributions that are not purely discrete nor purely continuous; see, e.g., Example 9a, page 184 of Ross (7th). Discrete r.v. / Continuous r.v.

scipy.stats.rv_continuous. The type of generic moment calculation to use: 0 for pdf, 1 (default) for ppf. a : float, optional — lower bound of the support of the distribution, default is minus infinity. b : float, optional — upper bound of the support of the distribution, default is plus infinity. pdf: probability density function at x of the given RV.

Probability Densities. In a continuous space, we describe distributions with probability density functions (PDFs) rather than assigned probability values. A valid probability density of a continuous random variable X in ℝ, f_X(x), requires: non-negativity, f_X(x) ≥ 0 for all x ∈ ℝ, and normalization, ∫_ℝ f_X(x) dx = 1.

When defining a distribution for a continuous RV, the PMF approach won't quite work, since summations only work for a finite or a countably infinite number of items. Instead they are based on the following definition: Let X be a continuous RV. The Probability Density Function (PDF) is a function f(x) on the range of X that satisfies the requirements above (non-negativity and total integral 1).

This MATLAB function returns the probability density function (pdf) for the one-parameter distribution family specified by 'name' and the distribution parameter A, evaluated at the values in x.

## Probability Density Function of a function of a continuous RV

Lesson 14, Continuous Random Variables, STAT 414 / 415.

1 Probability Density Function and Cumulative Distribution Function. Definition 1.1 (Probability density function). A rv is said to be (absolutely) continuous if there exists a real-valued function f_X such that, for any subset B ⊂ ℝ: P(X ∈ B) = ∫_B f_X(x) dx (1). Then f_X is called the probability density function (pdf) of the random variable X.

Probability Distributions for Continuous Variables. Definition: Let X be a continuous r.v. Then a probability distribution or probability density function (pdf) of X is a function f(x) such that for any two numbers a and b with a ≤ b, P(a ≤ X ≤ b) = ∫_a^b f(x) dx. The probability that X is in the interval [a, b] can be calculated by integrating the pdf of the r.v. over that interval.

### scipy.stats.rv_continuous — SciPy v1.3.1 Reference Guide

Continuous Probability Distributions, Milefoot.

Cumulative Distribution Functions and Expected Values. The Cumulative Distribution Function (cdf): the cumulative distribution function F(x) for a continuous RV X is defined for every number x by F(x) = P(X ≤ x). For each x, F(x) is the area under the density curve to the left of x, if X is a continuous RV with pdf f(x) and cdf F(x).

a continuous random variable (RV). Probability density function (pdf): a curve, symbol f(x). Identify the continuous probability function that applies to the example below: the amount of time spouses shop for birthday gifts for their spouse, for an average of eight minutes.

Introduction to Random Variables (RVs). Outline: 1. informal definition of a RV; 2. three types of a RV: a discrete RV, a continuous RV, and a mixed RV; 3. a general rule to find probability of events concerning a RV; 4. cumulative distribution function (CDF) of a RV; 5. formal definition of a RV using CDF.

1-11-2016 · Probability Density Function – Finding K, the missing value. 3.1c Continuous Random Variable part 3 – PDF – Finding K, Joe Birch.

Therefore, the PDF is always a function which gives the probability of one event, x. If we denote the PDF as function f, then Pr(X = x) = f(x). A probability distribution will contain all the outcomes and their related probabilities, and the probabilities will sum to 1. Session 2: Probability distributions and density functions – …

5-12-2012 · This is the third in a sequence of tutorials about continuous random variables. I explain how to calculate the mean (expected value) and variance of a continuous random variable.
Tutorials on continuous random variables.

Continuous RVs: probability density function; possible values for a continuous RV — from ECE 440 at University of Rochester.

Probability Distributions for Discrete RV. Definition: The probability distribution or probability mass function (pmf) of a discrete rv is defined for every number x by p(x) = P(X = x) = P(all s ∈ S : X(s) = x). In words, for every possible value x of the random variable, the pmf specifies the probability of observing that value when the experiment is performed.

Notes: Joint Probability and Independence for Continuous RV's. CS 3130 / ECE 3530: Probability and Statistics for Engineers, October 23, 2014.
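The "mean (expected value) and variance of a continuous random variable" calculation mentioned in the tutorial snippet above reduces to two integrals, E[X] = ∫ x f(x) dx and Var(X) = E[X²] − E[X]². A minimal numeric sketch with SciPy's quad, using the Uniform(0, 1) density purely as an illustration:

```python
from scipy.integrate import quad

# Uniform(0, 1) density, chosen only as an illustration.
f = lambda x: 1.0

mean, _ = quad(lambda x: x * f(x), 0, 1)      # E[X]   = integral of x f(x)
ex2, _  = quad(lambda x: x**2 * f(x), 0, 1)   # E[X^2] = integral of x^2 f(x)
var = ex2 - mean**2                           # Var(X) = E[X^2] - E[X]^2

print(mean, round(var, 6))  # 0.5 0.083333  (i.e. 1/2 and 1/12)
```

Any other density can be substituted for `f` as long as its support is used as the integration limits.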
Joint Probability Mass Function. Let X and Y be two discrete rv's defined on the sample space of an experiment. The joint probability mass function p(x, y) is defined for each pair (x, y). Let X and Y be two continuous rv's with joint pdf f(x, y) and marginal X pdf f_X(x). Then for any X value x for which f_X(x) > 0, the conditional pdf of Y given X = x is defined.

For a continuous r.v. X, we can only talk about probability within an interval X ∈ (x, x + Δx): p(x)Δx is the probability that X ∈ (x, x + Δx) as Δx → 0. The probability density p(x) satisfies p(x) ≥ 0 and ∫ p(x) dx = 1 (note: for a continuous r.v., p(x) can be > 1). (IITK) Basics of Probability and Probability …
continuous rv: Let X be a continuous rv. Then a probability distribution or probability density function (pdf) of X is a function f(x) such that for any two numbers a and b with a ≤ b, P(a ≤ X ≤ b) = ∫_a^b f(x) dx.

Then the probability density function (pdf) of X is a function f(x) such that for any two numbers a and b with a ≤ b, P(a ≤ X ≤ b) equals the area A under the density curve between a and b. The expected or mean value of a continuous rv X with pdf f(x) is E(X) = ∫ x f(x) dx. Discrete: Let X be a discrete rv that takes on values in the set D and has a pmf f(x).

A function f(x) that satisfies the above requirements is called a probability function or probability distribution for a continuous random variable, but it is more often called a probability density function or simply density function. Any function f(x) satisfying Properties 1 and 2 above will automatically be a density function.

6-11-2019 · The integral of the continuous density function integrated over all real numbers is 1. They may have nonzero probability at some real numbers. They have zero probability at every real number. The probability of an event is found by summing the values of the discrete pdf …

To learn a formal definition of the probability density function of a continuous uniform random variable. To learn a formal definition of the cumulative distribution function of a continuous uniform random variable. To learn key properties of a continuous uniform random variable, such as the mean, variance, and moment generating function.

Law of Total Variance: Var(X|Y) is a random variable that is a function of Y. Sum of a random number of iid RVs (the variance is taken with respect to X).
Characterization. Probability density function: the probability density function of the continuous uniform distribution is f(x) = 1/(b − a) for a ≤ x ≤ b, and f(x) = 0 for x < a or x > b. The values of f(x) at the two boundaries a and b are usually unimportant, because they do not alter the values of the integrals of f(x) over any interval.
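The defining requirements rehearsed above — non-negativity, total integral 1, and P(a ≤ X ≤ b) obtained by integrating the pdf over [a, b] — can be checked numerically for any candidate density. A minimal sketch using SciPy's quad; the exponential density f(x) = e^(−x) on [0, ∞) is an illustrative choice, not taken from any of the quoted sources:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative density: f(x) = exp(-x) on [0, inf).
f = lambda x: np.exp(-x)

# Normalization: the pdf must integrate to 1 over its support.
total, _ = quad(f, 0, np.inf)

# Interval probability: P(1 <= X <= 2) is the integral of f over [1, 2].
p, _ = quad(f, 1, 2)

print(round(total, 6), round(p, 6))  # 1.0 0.232544
```

The second value matches the closed form e⁻¹ − e⁻² for this density.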
### PROBABILITY ece.utah.edu

Continuous RVs (Part I), University of Rochester.

Continuous RVs probability density function I: possible values.

### Chapter 5 Continuous random variables (OpenStax)

Probability Density Function of a function of a continuous RV.
https://en.wikipedia.org/wiki/Expected_value

Chapter 7: Continuous Probability Distributions. Have the group try it a number of times to give about 100 results. Record the distance each coin lands away from the target line in centimetres, noting whether it is in front or behind with −/+ respectively. If you wanted to write a computer program to 'simulate' this …

Introduction to Probability and Statistics, Slides 4 – Chapter 4, Ammar M. Sarhan. Let X be a continuous rv. Then a probability distribution or probability density function (pdf) of X is a function …

@Miguel Could you expand a bit? I am trying to reconcile what Max did and what I did (see attempt above). Since we just saw the theorem mentioned above in class, and the function given was differentiable and invertible, I was trying to do a direct application of the theorem, hence the thought of first completing the square.
scipy.stats.rv_discrete. This class is similar to rv_continuous. The main differences are: the support of the distribution is a set of integers, and instead of the probability density function, pdf (and the corresponding private _pdf), this class defines the probability mass function, pmf.

The probability distribution of a continuous random variable is called the Probability Density Function, or PDF.
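The rv_continuous/rv_discrete machinery described in the SciPy snippets above is used by subclassing: supply only the private _pdf (or _pmf), and the framework derives the cdf, moments, and sampling numerically. A sketch of that pattern; the density f(x) = 2x on [0, 1] is an arbitrary illustrative choice:

```python
from scipy import stats

# Custom continuous distribution: only _pdf is supplied; cdf, ppf,
# moments and rvs are derived numerically by rv_continuous.
class LinearRV(stats.rv_continuous):
    def _pdf(self, x):
        return 2.0 * x  # valid density on [0, 1]: non-negative, integrates to 1

rv = LinearRV(a=0.0, b=1.0, name="linear")

print(round(rv.cdf(0.5), 6))  # cdf(x) = x^2, so 0.25
print(round(rv.mean(), 6))    # E[X] = integral of x * 2x dx = 2/3
```

The same subclassing pattern works for rv_discrete, with _pmf in place of _pdf and an integer support.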
Given the probability function P(x) for a random variable X, the probability that X belongs to A, where A is some interval, is calculated by integrating p(x) over the set A, i.e. P(X ∈ A) = ∫_A p(x) dx.
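The continuous uniform facts quoted earlier — density 1/(b − a) inside [a, b] and zero outside, with mean (a + b)/2 and variance (b − a)²/12 — can be confirmed with scipy.stats.uniform. Note SciPy's loc/scale parameterization (loc = a, scale = b − a); the endpoints a = 2, b = 6 are an arbitrary illustration:

```python
from scipy import stats

a, b = 2.0, 6.0
u = stats.uniform(loc=a, scale=b - a)  # Uniform on [a, b]

print(u.pdf(4.0))   # inside the support: 1/(b-a) = 0.25
print(u.pdf(1.0))   # outside the support: 0.0
print(u.mean())     # (a+b)/2 = 4.0
print(u.var())      # (b-a)^2 / 12 = 16/12
```

As the quoted text notes, the values at the boundary points themselves do not affect any interval probability.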
https://la.mathworks.com/matlabcentral/answers/500898-3d-plotting-in-matlab
# 3D Plotting in MATLAB

P Maity on 18 Jan 2020. Commented: P Maity on 23 Jan 2020.

I have got a function after integration in MATLAB, f($\theta$, $g$), which is a complex function. I would like to get a 3D plot of this function with respect to $\theta$ and $g$, where $g$ is a positive integer running from 0 and $\theta$ runs from 0 to $\pi$. The function is a very large and complicated one. Please help me to plot this.

Walter Roberson on 23 Jan 2020: It is disappointing that you deleted the further discussion and contributions from the volunteers :(

P Maity on 23 Jan 2020: @walter: ...I am really sorry for my nonsense activities. Actually, I did not understand what was actually going on. I thought that, as I had asked you people so many times about my corrections and rectifications, it would be better to clear away all this garbage so that only the important summary conversation remained. But I really didn't understand that it would actually cause you harm. Sorry for my activities once again. Please pardon me if possible.

Star Strider on 18 Jan 2020: I assume ‘complex’ just means complicated, and not a function with real and imaginary arguments or output. If it is actually complex in the mathematical sense, a slightly different procedure than that presented here would be necessary. Try something like this:

```matlab
theta = linspace(0, pi, 50);
g = linspace(0, 100, 50);
[Th, G] = ndgrid(theta, g);
F = f(Th, G);
figure
mesh(Th, G, F)
grid on
```

Make appropriate changes to get the result you want.

Star Strider on 18 Jan 2020: I am not following what you're doing in your code. The result needs to be a function of θ and g. Then you can use the techniques I described to plot it.

Walter Roberson on 19 Jan 2020: What result are you expecting for theta = 0 leading to division by sin(0)?
Walter Roberson on 19 Jan 2020:

%% Now I need to perform that integration of phi and make a 3d plot of that result with theta and alpha, alpha is a real integer runs from 0 to ... and theta from 0 to pi.

The integration of phi? But phi is one of your input variables, not an output formula. Which of your formulas is to be integrated with respect to what? What should be along the two independent axes? As you indicate that alpha is an integer, it sounds as if alpha is not one of your independent variables. Your r is complex valued. Are you looking to plot the real or the imaginary part, or are you looking to plot the absolute value?
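Star Strider's ndgrid/mesh recipe and Walter's closing question about plotting the absolute value translate directly to NumPy/Matplotlib. In this sketch, f is a made-up stand-in for the poster's complex-valued integrand (which was never posted), and the Agg backend renders off-screen:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen rendering; no display required
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-in for the complex-valued f(theta, g) in the thread.
def f(theta, g):
    return np.exp(1j * theta) / (1.0 + g)

theta = np.linspace(0.0, np.pi, 50)           # theta runs from 0 to pi
g = np.arange(0, 51)                          # g: non-negative integers 0..50
Th, G = np.meshgrid(theta, g, indexing="ij")  # like MATLAB's ndgrid

Z = np.abs(f(Th, G))  # plot the magnitude of the complex values

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(Th, G, Z)
fig.savefig("surface.png")
print(Z.shape)  # (50, 51)
```

Replace np.abs with np.real or np.imag to plot the real or imaginary part instead, which is exactly the choice Walter asks the poster to make.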
https://www.unibo.it/sitoweb/laura.fabbri11/pubblicazioni
# Laura Fabbri

Associate Professor, Department of Physics and Astronomy "Augusto Righi". Academic discipline: FIS/01 Experimental Physics.

## Publications

, A search for an unexpected asymmetry in the production of e+μ− and e−μ+ pairs in proton–proton collisions recorded by the ATLAS detector at √s = 13 TeV, «PHYSICS LETTERS. SECTION B», 2022, 830, Article number: 137106, pp. 1 - 22 [article] Open Access

, AtlFast3: The Next Generation of Fast Simulation in ATLAS, «COMPUTING AND SOFTWARE FOR BIG SCIENCE», 2022, 6, pp. 7-1 - 7-54 [article]

, Constraints on Higgs boson production with large transverse momentum using H → bb̄ decays in the ATLAS detector, «PHYSICAL REVIEW D», 2022, 105, pp. 092003-1 - 092003-37 [article]

, Constraints on Higgs boson properties using WW∗(→eνμν)jj production in 36.1 fb−1 of √s = 13 TeV pp collisions with the ATLAS detector, «THE EUROPEAN PHYSICAL JOURNAL. C, PARTICLES AND FIELDS», 2022, 82, Article number: 622, pp. 1 - 33 [article] Open Access

, Determination of the parton distribution functions of the proton using diverse ATLAS data from pp collisions at √s = 7, 8 and 13 TeV, «THE EUROPEAN PHYSICAL JOURNAL. C, PARTICLES AND FIELDS», 2022, 82, pp. 438-1 - 438-70 [article]

, Direct constraint on the Higgs-charm coupling from a search for Higgs boson decays into charm quarks with the ATLAS detector, «THE EUROPEAN PHYSICAL JOURNAL. C, PARTICLES AND FIELDS», 2022, 82, pp. 717-1 - 717-42 [article]

, Measurement of Higgs boson decay into b-quarks in associated production with a top-quark pair in pp collisions at √s = 13 TeV with the ATLAS detector, «JOURNAL OF HIGH ENERGY PHYSICS», 2022, 2022, pp. 97-1 - 97-63 [article]

, Measurement of the c-jet mistagging efficiency in tt̄ events using pp collision data at √s = 13 TeV collected with the ATLAS detector, «THE EUROPEAN PHYSICAL JOURNAL. C, PARTICLES AND FIELDS», 2022, 82, Article number: 95, pp. 1 - 27 [article] Open Access

, Measurement of the energy asymmetry in tt̄j production at 13 TeV with the ATLAS experiment and interpretation in the SMEFT framework, «THE EUROPEAN PHYSICAL JOURNAL. C, PARTICLES AND FIELDS», 2022, 82, Article number: 374, pp. 1 - 36 [article] Open Access

, Measurement of the nuclear modification factor for muons from charm and bottom hadrons in Pb+Pb collisions at 5.02 TeV with the ATLAS detector, «PHYSICS LETTERS. SECTION B», 2022, 829, Article number: 137077, pp. 1 - 23 [article] Open Access

, Measurements of azimuthal anisotropies of jet production in Pb+Pb collisions at √sNN = 5.02 TeV with the ATLAS detector, «PHYSICAL REVIEW C», 2022, 105, Article number: 064903, pp. 064903-1 - 064903-25 [article] Open Access

, Measurements of differential cross-sections in top-quark pair events with a high transverse momentum top quark and limits on beyond the Standard Model contributions to top-quark pair production with the ATLAS detector at √s = 13 TeV, «JOURNAL OF HIGH ENERGY PHYSICS», 2022, 2022, pp. 63-1 - 63-73 [article]

, Measurements of Higgs boson production cross-sections in the H → τ⁺τ⁻ decay channel in pp collisions at √s = 13 TeV with the ATLAS detector, «JOURNAL OF HIGH ENERGY PHYSICS», 2022, 2022, pp. 175-1 - 175-81 [article]

, Measurements of jet observables sensitive to b-quark fragmentation in tt̄ events at the LHC with the ATLAS detector, «PHYSICAL REVIEW D», 2022, 106, pp. 032008-1 - 032008-33 [article]

, Measurements of the Higgs boson inclusive and differential fiducial cross-sections in the diphoton decay channel with pp collisions at √s = 13 TeV with the ATLAS detector, «JOURNAL OF HIGH ENERGY PHYSICS», 2022, 2022, pp. 27-1 - 27-96 [article]

### Latest notices

No notices at present.
https://www.gamedev.net/forums/topic/446653-c--irrlitch-namespace-problem/
# [C++ / Irrlicht] Namespace problem

## Recommended Posts

Okay, I'm still learning things every day with C++ and GCC. I'm using Irrlicht, and I've been working with some of the examples. Variables were initially preceded by their respective namespace info, but I want to declare the namespaces instead. However I am running into a problem: after I declare the namespaces, they interfere with the std namespace, saying that the string type isn't defined, and I get the following compile error:

```
17 C:\Dev-Cpp\main.cpp expected constructor, destructor, or type conversion before '*' token
```

Any thoughts?

```cpp
#include <cstdlib>
#include <iostream>
#include <vector>
#include <irrlicht.h>

using namespace std;
using namespace irr::core;
using namespace irr::io;
using namespace irr::video;
using namespace irr::scene;
using namespace irr::gui;

IrrlichtDevice *IrrDevice;
IVideoDriver *IrrDriver;
ISceneManager *IrrSceneManager;
```

---

This situation is exactly what namespaces are intended to solve: libraries that use the same names. In this case, there's a class named "string" in both std and irr::core. So when you declare something as type "string", the compiler has no idea which one you're referring to. You'll need to explicitly tell the compiler which you want, by declaring the object as "irr::core::string" or "std::string". It sounds like you don't want to do this, but you don't have much choice for classes with the same name. Fortunately, you only need to do it in this case. Other classes with unique names don't need to be preceded by the namespace.

---

Oh the irony, lol. Thanks for your help, very appreciated. :) gharen2++;

---

Note: I bypassed the problem by not declaring the irr::core namespace.
---

`using <namespace>` directives are generally considered a Bad Idea, and your example showed exactly why. Of course, having to type irr::video::whatever might be a wee bit verbose, but then you could do something like this:

```cpp
namespace iv = irr::video;
```

and then you only have to prefix everything with iv::. Alternatively, include the irr namespaces, but don't do it with the std namespace.
http://www.icoachmath.com/topics/6th/Solving-One-Step-Inequalities.html
#### Solved Examples and Worksheet for Solving One Step Inequalities

**Q1.** Jimmy & Sunny have $12 each. Sunny spends $5 on jellybeans, while Jimmy spends less on jellybeans. Which of the inequalities describes the money m left with Jimmy?
A. m > 12  B. m > 7  C. m ≥ 12  D. m < 7

Step 1: Out of $12, Sunny spends $5 while Jimmy spends less than $5. [From the problem.]
Step 2: Sunny has $7 left with him and Jimmy has more than $7. [Money left = money initially had − money spent.]
Step 3: So, the inequality for the money left with Jimmy is m > 7. [m is the money left with Jimmy.]
Correct answer: m > 7

**Q2.** Sheela had more than $87 in her savings account. She withdrew $59 from her savings account. Which of the following inequalities best suits the balance amount m in her savings account?
A. m ≤ 28  B. m < 28  C. m ≥ 28  D. m > 28

Step 1: Sheela withdrew $59 from her savings account, which had more than $87.
Step 2: The balance amount in her savings account will be more than 87 − 59 = $28.
Step 3: The inequality for the balance amount m is m > 28.
Correct answer: m > 28

**Q3.** Andy delivers bread packets to more than 66 families living in Dugway each day. Which of the following inequalities best describes the situation?
A. b ≤ 66  B. b > 66  C. b < 66  D. b ≥ 66

Step 1: Let b be the number of bread packets delivered to the families living in Dugway.
Step 2: The bread packets b that Andy delivers to families living in Dugway are more than 66.
Step 3: The inequality is b > 66.
Correct answer: b > 66

**Q4.** Jeff bought a pair of Guess jeans, which cost less than $50. Which of the following inequalities best describes the situation when the cost price is represented by p?
A. p ≤ 50  B. p < 50  C. p > 50  D. p ≥ 50

Step 1: Jeff bought a pair of Guess jeans that costs less than $50.
Step 2: The inequality is p < 50.
Correct answer: p < 50

**Q5.** Which of the following is a solution for the inequality below? 10.7 + x > 4.5
A. −8.2  B. −7.2  C. −5.7  D. −9.2

Step 1: 10.7 + x > 4.5 [Given inequality.]
Step 2: 10.7 + x − 10.7 > 4.5 − 10.7 [Subtract 10.7 from each side.]
Step 3: x > −6.2 [Simplify.]
Step 4: The solution is all real numbers greater than −6.2.
Step 5: From the choices, the only value greater than −6.2 is −5.7.
Correct answer: −5.7

**Q6.** Which of the following is a solution for the inequality? −3.7 + x < −4.5
A. −0.7  B. 0.9  C. −0.9  D. 0.1

Step 1: −3.7 + x < −4.5 [Original inequality.]
Step 2: −3.7 + x + 3.7 < −4.5 + 3.7 [Add 3.7 to each side.]
Step 3: x < −0.8 [Simplify.]
Step 4: The solution is all real numbers less than −0.8.
Step 5: Among the choices, the solution is −0.9.
Correct answer: −0.9

**Q7.** Which of these is a solution for the inequality below? x + 3.5 ≥ 6.8
A. 3.1  B. 3.4  C. 3.0  D. 3.2

Step 1: x + 3.5 ≥ 6.8 [Original inequality.]
Step 2: x + 3.5 − 3.5 ≥ 6.8 − 3.5 [Subtract 3.5 from each side.]
Step 3: x ≥ 3.3 [Simplify.]
Step 4: The solution is all real numbers greater than or equal to 3.3.
Step 5: From the choices, the solution is 3.4.
Correct answer: 3.4

**Q8.** Which of the following is a solution for the inequality x + 8.5 ≤ 3.8?
A. −4.7  B. −4.4  C. −4.5  D. −4.3

Step 1: x + 8.5 ≤ 3.8 [Original inequality.]
Step 2: x + 8.5 − 8.5 ≤ 3.8 − 8.5 [Subtract 8.5 from each side.]
Step 3: x ≤ −4.7 [Simplify.]
Step 4: The solution is all real numbers less than or equal to −4.7.
Step 5: From the choices, the solution is −4.7.
Correct answer: −4.7

**Q9.** There were fewer than 10 students in an English class. Which of the following inequalities best describes the situation when the number of students is represented by s?
A. s ≥ 10  B. s < 10  C. s > 10  D. s ≤ 10

Step 1: There were fewer than 10 students in an English class.
Step 2: The number of students is represented by s.
Step 3: The inequality is s < 10.
Correct answer: s < 10

**Q10.** Annie had less than $106 in her savings account. She withdrew $68 from her savings account. Which of the following inequalities best suits the balance amount m in her savings account?
A. m ≤ 38  B. m > 38  C. m ≥ 38  D. m < 38

Step 1: Annie withdrew $68 from her savings account, which had less than $106.
Step 2: The balance amount in her savings account will be less than 106 − 68 = $38.
Step 3: The inequality for the balance amount m is m < 38.
Correct answer: m < 38

**Q11.** Which of the following is not a solution of the inequality x − 7 ≤ 2?
A. 8  B. 9  C. 7  D. 10

Step 1: x − 7 ≤ 2 [Original inequality.]
Step 2: x − 7 + 7 ≤ 2 + 7 [Add 7 to each side.]
Step 3: x ≤ 9 [Simplify.]
Step 4: Any real number less than or equal to 9 is a solution of the inequality.
Step 5: From the choices, 10 is the number which is not a solution. [10 is greater than 9.]
Correct answer: 10

**Q12.** Choose a solution for the inequality x − 2 ≥ 49.
A. 41  B. 49  C. 51  D. 46

Step 1: x − 2 ≥ 49 [Original inequality.]
Step 2: x − 2 + 2 ≥ 49 + 2 [Add 2 to each side.]
Step 3: x ≥ 51 [Simplify.]
Step 4: The solution is all real numbers greater than or equal to 51.
Step 5: From the choices, 51 is the correct answer.
Correct answer: 51

**Q13.** Which of the following is a solution for the inequality x + 3 ≥ 4?
A. −1  B. −2  C. 0  D. 1

Step 1: x + 3 ≥ 4 [Original inequality.]
Step 2: x + 3 − 3 ≥ 4 − 3 [Subtract 3 from each side.]
Step 3: x ≥ 1 [Simplify.]
Step 4: The solution is all real numbers greater than or equal to 1.
Step 5: From the choices, the number greater than or equal to 1 is 1.
Correct answer: 1

**Q14.** Which of the choices represents the solution for the inequality 9x > 81?
A. x < 9  B. x > 9  C. x ≥ 9  D. x ≤ 9

Step 1: 9x > 81 [Original inequality.]
Step 2: 9x/9 > 81/9 [Divide each side by 9; by the division property, if c > 0 and a > b, then a/c > b/c.]
Step 3: x > 9 [Simplify.]
Correct answer: x > 9

**Q15.** Choose the correct mathematical expression for the verbal description: M is less than 6 times the value of 1/5 the height, where the height is H.
A. M < (1/5)H  B. M < 6((1/5)H)  C. M = 6((1/5)H)  D. M > 6((1/5)H)

Step 1: The value of 1/5 the height = (1/5)H.
Step 2: 6 times the value of 1/5 the height = 6((1/5)H).
Step 3: "M is less than 6 times the value of 1/5 the height" can be written as M < 6((1/5)H).
Correct answer: M < 6((1/5)H)
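Every worked example above applies the same move: undo the one operation attached to x on both sides. That move can be scripted; a minimal sketch in Python (the helper `solve_one_step` is ours, not part of the worksheet; `Fraction` avoids decimal rounding):

```python
from fractions import Fraction

def solve_one_step(a, b, op):
    """Solve x + a (op) b for x by subtracting a from both sides.

    op is one of '<', '<=', '>', '>='. Adding or subtracting the same
    quantity on both sides never flips the inequality sign.
    """
    return op, b - a

# Q8: x + 8.5 <= 3.8  ->  x <= -4.7
op, bound = solve_one_step(Fraction(85, 10), Fraction(38, 10), '<=')
print(op, float(bound))  # <= -4.7

# Q5: 10.7 + x > 4.5  ->  x > -6.2, so -5.7 is a solution and -7.2 is not
op, bound = solve_one_step(Fraction(107, 10), Fraction(45, 10), '>')
print(-5.7 > float(bound), -7.2 > float(bound))  # True False
```

Multiplying or dividing by a negative number, which does flip the sign, is not handled here; the one-step problems above never require it.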
https://www.yaclass.in/p/mathematics-state-board/class-8/algebra-3091/factorization-17062/re-81bfe68e-6193-4d7c-9c24-c08b2cb8afa0
Consider the expression $3m^2 + mn + 3mn + n^2$. Grouping the first two and last two terms:

$3m^2 + mn + 3mn + n^2 = m(3m + n) + n(3m + n) = (3m + n)(m + n)$
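The grouping can be sanity-checked numerically. A small sketch in plain Python (not part of the lesson; the like terms mn + 3mn are already combined to 4mn in the first helper):

```python
# Check the grouping  3m^2 + mn + 3mn + n^2 = (3m + n)(m + n)  numerically.
def original(m, n):
    # like terms mn + 3mn combined: 3m^2 + 4mn + n^2
    return 3*m*m + 4*m*n + n*n

def factored(m, n):
    # the product obtained by grouping
    return (3*m + n) * (m + n)

# Two polynomials in two variables that agree on a grid of integer points
# this large are identical, so the factorisation is exact.
for m in range(-5, 6):
    for n in range(-5, 6):
        assert original(m, n) == factored(m, n)
print("both forms agree on all test points")
```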
http://www.lastfm.com.br/user/emskurtsies/library/music/Selena%2BGomez%2B%2526%2Bthe%2BScene/_/Tell+Me+Something+I+Don't+Know?setlang=pt
# Library

Music » Selena Gomez & the Scene »

## Tell Me Something I Don't Know

133 plays | Go to track page

Tracks (133): every row of the library table lists the same track, "Tell Me Something I Don't Know" (2:55), with an empty album field. Play dates and times:

- Apr 18 2015, 14:03
- Jul 18 2014, 01:08
- Jul 17 2014, 14:28
- Mar 24 2014, 22:44
- Feb 22 2014, 18:35
- Oct 25 2013, 21:28
- Mar 17 2013, 16:18
- Dec 11 2012, 06:45
- Dec 10 2012, 02:45
- Nov 23 2012, 06:07
- Oct 30 2012, 15:21
- Oct 28 2012, 23:09
- Oct 26 2012, 23:12
- Oct 9 2012, 16:22 and 16:16
- Oct 8 2012, 13:58
- Jul 31 2012, 22:22
- Jul 20 2012, 16:00
- Jun 3 2012, 14:10 and 13:43
- May 22 2012, 14:41
- May 20 2012 (59 plays): 22:32, 22:29, 22:26, 22:23, 22:20, 22:17, 22:14, 22:12, 22:09, 22:06, 22:03, 22:00, 21:57, 21:54, 21:51, 21:48, 21:45, 21:42, 21:39, 21:36, 21:32, 21:29, 21:26, 21:23, 21:20, 21:17, 21:14, 21:11, 21:08, 21:06, 21:03, 21:00, 20:57, 20:54, 20:51, 20:48, 20:45, 20:42, 20:39, 20:36, 20:33, 20:30, 20:28, 20:25, 20:22, 20:19, 20:16, 20:13, 20:10, 20:07, 20:04, 20:01, 19:58, 19:55, 19:52, 19:41, 19:38, 19:35, 19:31
- May 19 2012: 02:55, 00:33, 00:20
- Apr 19 2012, 06:17
- Apr 16 2012: 20:17, 11:48, 11:05, 10:52, 00:26
- Apr 15 2012 (12 plays): 23:13, 22:50, 22:13, 21:43, 21:23, 21:00, 19:58, 18:50, 18:07, 03:56, 03:23, 02:39
- Apr 14 2012, 00:35
- Apr 13 2012, 23:52
- Apr 10 2012, 15:19
- Apr 9 2012 (27 plays): 22:52, 22:49, 22:46, 22:43, 22:40, 22:37, 22:34, 22:31, 22:28, 22:25, 22:22, 22:19, 22:16, 22:14, 22:11, 22:08, 22:05, 22:02, 21:59, 21:56, 21:53, 21:50, 21:47, 21:44, 02:39, 01:14, 00:12
- Apr 5 2012: 21:22, 20:42
https://www.math.uni-magdeburg.de/mathreg/index.php?show=entry&type=Preprint&year=2012&id=445&number=2012-07
### 2012-07

#### Global analysis of the generalised Helfrich flow of closed curves immersed in $\mathbb{R}^n$

by Wheeler, G.

Series: 2012-07, Preprints

MSC:
53C42 Immersions (minimal, prescribed curvature, tight, etc.)
53A04 Curves in Euclidean space
49Q10 Optimization of shapes other than minimal surfaces
58D25 Equations in function spaces; evolution equations
35K30 Initial value problems for higher-order parabolic equations

Abstract: In this paper we consider the evolution of regular closed elastic curves $\gamma$ immersed in $\mathbb{R}^n$. Equipping the ambient Euclidean space with a vector field $V:\mathbb{R}^n\rightarrow\mathbb{R}^n$ and a function $f:\mathbb{R}^n\rightarrow\mathbb{R}$, we assume the energy of $\gamma$ is smallest when the curvature $\vec\kappa$ of $\gamma$ is parallel to $\vec c = (V\circ\gamma) + (f\circ\gamma)\tau$, where $\tau$ is the unit vector field spanning the tangent bundle of $\gamma$. This leads us to consider a generalisation of the Helfrich functional $\mathcal{H}$, defined as the sum of the integral of $|\vec\kappa-\vec c|^2$ and $\lambda$-weighted length. We primarily consider the case where $f:\mathbb{R}^n\rightarrow\mathbb{R}$ is uniformly bounded in $C^\infty(\mathbb{R}^n)$ and $V:\mathbb{R}^n\rightarrow\mathbb{R}^n$ is an affine transformation. Our first theorem is that the steepest descent $L^2$-gradient flow of $\mathcal{H}$ with smooth initial data exists for all time and subconverges to a smooth solution of the Euler–Lagrange equation for a limiting functional. We additionally perform some asymptotic analysis. In the broad class of gradient flows for which we obtain global existence and subconvergence, there exist many examples for which full convergence of the flow does not hold. This may manifest in its simplest form as solutions translating or spiralling off to infinity. We prove that if either $V$ and $f$ are constant, the derivative of $V$ is invertible and non-vanishing, or $(f,\gamma_0)$ satisfy a 'properness' condition, then one obtains full convergence of the flow and uniqueness of the limit. This last result strengthens a well-known theorem of Kuwert, Schätzle and Dziuk on the elastic flow of closed curves in $\mathbb{R}^n$, where $f$ is constant and $V$ vanishes.

Keywords: Helfrich flow, geometric analysis, geometric evolution equations, higher order system of quasilinear parabolic partial differential equations
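Written out, the generalised Helfrich energy described in the abstract takes the following form (our transcription, not a quote from the paper; here $\vec\kappa$ is the curvature vector, $\tau$ the unit tangent, $ds$ the arclength element, $L(\gamma)$ the length, and $V$, $f$ the ambient vector field and function — normalisation conventions may differ from the authors'):

```latex
\mathcal{H}_{\lambda}[\gamma]
  = \int_{\gamma} \lvert \vec{\kappa} - \vec{c}\, \rvert^{2} \, ds
    + \lambda \, L(\gamma),
\qquad
\vec{c} = (V \circ \gamma) + (f \circ \gamma)\, \tau .
```

The flow studied is the steepest-descent $L^2$-gradient flow of this functional.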
https://www.nature.com/articles/s41598-019-54416-3?error=cookies_not_supported&code=00e7fea8-ebb8-429d-9899-3b87f523cb24
# Demonstration of critical coupling in an active III-nitride microdisk photonic circuit on silicon

## Abstract

On-chip microlaser sources in the blue constitute an important building block for complex integrated photonic circuits on silicon. We have developed photonic circuits operating in the blue spectral range based on microdisks and bus waveguides in III-nitride on silicon. We report on the interplay between microdisk-waveguide coupling and its optical properties. We observe critical coupling and phase matching, i.e. the most efficient energy transfer scheme, for very short gap sizes and thin waveguides (g = 45 nm and w = 170 nm) in the spontaneous emission regime. Whispering gallery mode lasing is demonstrated for a wide range of parameters with a strong dependence of the threshold on the loaded quality factor. We show the dependence and high sensitivity of the output signal on the coupling. Lastly, we observe the impact of processing on the tuning of mode resonances due to the very short coupling distances. Such small footprint on-chip integrated microlasers providing maximum energy transfer into a photonic circuit have important potential applications for visible-light communication and lab-on-chip bio-sensors.

## Introduction

Microresonators, like microdisks, microrings or microspheres evanescently coupled to waveguides have been a field of intense study for nearly two decades1. These devices are basic building blocks for chip-scale integrated photonics and have enabled a large range of applications in telecommunication, optical computing, optical interconnects and sensing.
One of the key issues for such a system is the efficient energy transfer between the resonator and waveguide, which is achieved in the so-called critical coupling regime when the rate of energy decay from the cavity to the waveguide equals the intrinsic rate of energy decay from the uncoupled microresonator. All the energy can be transferred from the microresonator to the waveguide (or vice versa), by analogy with impedance matching in microwave devices. Experimental demonstrations of critical coupling were reported in several platforms, using silica microspheres coupled to fibers in the near-infrared2,3 or in the silicon photonics platform using silicon-on-insulator (SOI)4. Critical coupling is usually controlled through the coupling quality factor that depends on the cavity mode volume, the overlap, interaction length, and the effective mode index mismatch between the waveguide and the microresonator. Critical coupling is obtained by carefully tuning the distance between microresonator and waveguide and by engineering the mode index dispersion in order to reach the phase matching condition. In the literature, most of the demonstrations of critical coupling have been evidenced by injecting light into a waveguide coupled to a passive microresonator and looking at the transmission. At critical coupling, there is a perfect destructive interference between the transmitted field and the microresonator’s internal field coupled to the waveguide, thus leading to quenching of the transmission on resonance. Active microresonators under spontaneous or stimulated emission also constitute a scheme of interest. The cavity can consist of an individual microresonator or a microresonator coupled to waveguides and additional reflectors. Electrically injected microlasers coupled vertically or laterally to silicon bus waveguides have been achieved using indium phosphide (InP) and indium gallium arsenide (InGaAs) on SOI substrate with emission around 1.5 μm5,6,7.
Low threshold powers of around 10 mW at room temperature (RT) under continuous-wave (CW) operation were reported. Using quantum dots in III-arsenide mushroom-type disks and suspended waveguides, shorter-wavelength emission at around 850 nm was demonstrated under optical excitation8. A review of whispering gallery microcavity lasers can be found in ref. 9. In the near-infrared the fabrication constraints are rather relaxed, as typical distances between microresonators and waveguides are in the hundreds of nm range. One very attractive platform for integrated photonics in the visible spectral range is based on III-nitrides on silicon. Nanophotonics utilizing this platform is an emerging field that has raised considerable interest in the past decade. The main advantages of this material system are the possibility to have active monolithically integrated laser sources from the ultra-violet to visible (UV-VIS) spectral range and a large transparency window for energies smaller than 6 eV. There have been numerous demonstrations of microlasers10,11,12,13,14,15 and high quality (Q) factor microresonators using III-nitrides16,17,18. Several reports have been made on passive microdisks evanescently coupled to bus waveguides in the UV-VIS spectral range19,20,21, but only very few demonstrations of active microlaser photonic circuits have been reported22, while critical coupling has been observed in passive photonic circuits in the IR23,24,25,26. Active photonic circuits using light emitting diodes have been demonstrated, but using inefficiently large device dimensions27. III-nitride on silicon microdisks critically coupled to waveguides constitute an important building block for more complex integrated photonic circuits that will greatly benefit the fields of both III-nitride and silicon photonics28.
Potential applications range from visible-light communication29, to optical interconnects30, to on-chip quantum optics31 and lab-on-chip applications, such as bio-sensing32 and gene activation33. In this article, we report on critical coupling in an active III-nitride on silicon microdisk with a bus waveguide and show lasing for a wide range of device parameters in the blue, with threshold energy densities as low as 1.2 mJ/cm2 per pulse in the under-coupled regime. To the best of our knowledge this is the shortest wavelength demonstration of critical coupling and consequently the shortest coupling gap size reported. The fabrication of such small gaps poses significant technological challenges, as it is approaching the limit of what can be achieved with conventional fabrication means. We report on a robust suspended topology with a bent waveguide design22 to facilitate reaching critical coupling4,34. We demonstrate lasing over a wide range of gap sizes and waveguide bending angles, observing a strong dependence of the lasing threshold on gap size and loaded quality factor (Qloaded). We determine the intrinsic Q factor (Qint) to be 4700 and observe a reduction in Qloaded by a factor of 2 for a gap of 45 nm, indicating critical coupling. We model the output power as a function of the coupling Q factor QC and determine the maximum as a function of threshold power and pump power. This maximum is attained for values near the critical coupling point of the spontaneous emission regime. A shift in mode position at small gap size is observed due to a reduction in disk diameter caused by a proximity effect of the waveguide during e-beam lithography. The control of this shift is a key feature for the fine-tuning of the resonator modes.

## Results and Discussion

### Critical coupling in the spontaneous emission regime

The heterostructure of the sample investigated in this study is shown in Fig. 1(a) and described in detail in the Methods section.
We fabricated 5 μm diameter microdisks with bent bus waveguides with bending angles between A = 0° and 90°, a waveguide width of nominally w = 135 nm (measured: w = 170 nm), and nominal gap sizes between the disk and the waveguide of g = 40 to 120 nm (measured: g = 30 to 120 nm). A sketch of a device is shown in Fig. 1(b) highlighting the parameters A, g, and w. False color scanning electron microscopy (SEM) images of devices with g = 45 nm are shown in Fig. 1(c,d) for A = 0° and (e,f) for A = 90°. In the zoom-ins in Fig. 1(d,f) it is clearly visible that the small gaps are fully open. We will from now on refer only to the measured gap sizes. See Fig. S1 in the Supplementary Information for how these values were determined. Details on the processing can be found in the Methods section. In order to determine the contribution of the disk-waveguide coupling to the Q factor, we performed RT CW micro-photoluminescence (μ-PL) measurements on devices with A = 90° using a laser emitting at 355 nm and a 20x microscope objective. The microdisk is excited with an 8 μm diameter spot size and the emission is collected from the top, using a spectrometer and a charge-coupled device (CCD) as the detector. By defining areas on the CCD the position of the emission can be discerned (i.e. above the disk or at the end of the waveguide). Spectra of devices with g = 30 to 55 nm, measured at the end of the waveguide, are shown in Fig. 2(a). First-order radial whispering gallery modes (WGMs) are clearly visible and the azimuthal mode orders are identified in Figs. S2 and S3 in the Supplementary Information through finite-difference time-domain (FDTD) simulations. Qloaded is given by $$\frac{1}{{Q}_{{\rm{loaded}}}}=\frac{1}{{Q}_{{\rm{int}}}}+\frac{1}{{Q}_{C}(g)}+\frac{1}{{Q}_{{\rm{abs}},{\rm{QW}}}(\lambda )},$$ (1) where QC(g) is the coupling Q factor as a function of the gap g and Qabs,QW(λ) is the absorption Q factor of the QWs as a function of λ. 
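For bookkeeping purposes, Eq. (1) simply adds the individual loss channels as inverse Q factors, so the loaded Q is dominated by the lossiest channel. A minimal sketch of this relation, using the values quoted later in the text (Qint = 4700 and Qabs,QW ≈ 5000 at 420 nm) as illustrative inputs:

```python
def q_loaded(q_int, q_c, q_abs_qw=float("inf")):
    """Eq. (1): individual loss channels add as inverse Q factors."""
    return 1.0 / (1.0 / q_int + 1.0 / q_c + 1.0 / q_abs_qw)

q_int = 4700
# Critical coupling (Q_C = Q_int) halves the loaded Q when QW absorption
# is negligible, as on the long-wavelength tail of the spectrum:
print(q_loaded(q_int, q_int))        # ≈ 2350
# Near the gain peak (~420 nm) the absorption channel lowers it further:
print(q_loaded(q_int, q_int, 5000))  # ≈ 1599
```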
We define Qint as the Q factor of an uncoupled microdisk excluding quantum well absorption. This quantity takes factors like roughness and residual absorption into account. Qint is determined by investigating a device with g = 120 nm, where the coupling to the bus waveguide becomes negligible (Qloaded ≈ Qint in the under-coupled regime). Figure 2(b) shows the spectrum of a mode with Qint ≈ Qloaded = 4700, given by Q = λ/Δλ, where λ is the resonant wavelength and Δλ is the full-width at half maximum (FWHM) of the resonance, determined by a Lorentzian fit. The measurement was taken at the disk for a device with g = 120 nm, where the coupling is very weak, in the low-energy tail of the spectrum, where the contribution of the QW absorption is negligible, and using a 3600 grooves/mm grating. Qint = 4700 is state of the art for III-nitrides in the blue spectral range. Slightly higher values have been achieved using oxygen passivation18. The main limitation for the Q factor at short wavelength is scattering loss from sidewall roughness, which scales with λ⁻⁴. At λ = 420 nm we estimate Qabs,QW = 5000 in analogy to ref. 35. At longer wavelength, the quantum well absorption vanishes, as does its contribution to Qloaded. In Fig. 2(a), the linewidth of the whispering gallery modes does not change for λ > 430 nm, since the absorption is negligible. As will be shown later, QC is larger than 10⁵ for a gap size of 120 nm. At critical coupling Qloaded = 1/2 · Qint. The occurrence of critical coupling is associated with a maximum in energy transfer from the microdisk to the waveguide as a function of the gap. FDTD simulations show the sensitive dependence of the transmitted radiative flux for a mode at 441 nm in Fig. 2(c) for gaps between 30 and 120 nm for a 5 μm diameter disk. Experimental and simulated Qloaded factors are plotted as a function of gap in Fig. 2(d).
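The Q = λ/Δλ extraction described above can be illustrated on synthetic data. The sketch below samples an ideal Lorentzian line at 442 nm with Q = 4700 and recovers Q from the half-maximum crossings — a simplified stand-in for the Lorentzian fit used in the experiment; grid range and amplitude are arbitrary choices:

```python
import numpy as np

# Synthetic Lorentzian resonance at lam0 = 442 nm with Q = 4700
lam0, q_true = 442.0, 4700.0
fwhm = lam0 / q_true                       # ≈ 0.094 nm
lam = np.linspace(441.5, 442.5, 200001)
spec = (fwhm / 2) ** 2 / ((lam - lam0) ** 2 + (fwhm / 2) ** 2)

# FWHM from the half-maximum crossings of the sampled line shape
above = lam[spec >= 0.5 * spec.max()]
q_est = lam0 / (above[-1] - above[0])
print(round(q_est))  # -> 4700
```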
The experimental values are determined for the 442 nm mode where absorption is negligible, since this is the low-energy end of the QW emission/absorption spectrum. Critical coupling is attained at g ≈ 40–45 nm for both experiment and simulation. Phase matching of the modes in the disk and waveguide is a necessary condition for critical coupling. It is achieved by fine-tuning w and g to get the mode profiles in the waveguide and disk to match. Figure S3 in the Supplementary Information depicts FDTD simulations of the Hz field for 5 μm disks with A = 90° and g = 30 to 50 nm. Good phase matching can be observed for both g = 40 and 50 nm.

### Dependence of lasing threshold and output signal on Q factor and coupling distance

Using a pulsed 355 nm laser with 7 kHz repetition rate, 4 ns pulse width and a 20x microscope objective in a standard μ-PL setup, lasing was demonstrated at RT for all investigated values of A and g. Figure 3 shows pulse-energy-dependent spectra of a device with A = 90° and g = 45 nm, corresponding to the critical coupling parameters under CW excitation discussed in Fig. 2(d). The spectra in Fig. 3(a) are taken using top-collection above the disk and in Fig. 3(b) at the end of the waveguide. Two first-order radial modes at 416 and 420 nm are lasing. Their azimuthal numbers are determined to be m = 86 and 85 by comparison with FDTD simulations (see Figs. S2 and S3 in the Supplementary Information). The threshold energy density is determined as the value where the mode starts to become clearly visible in the spectrum. For the m = 86 mode Pth = 1.7 mJ/cm2 per pulse. The other below-threshold modes are not visible due to the pulsed excitation with top-collection, a configuration that does not allow for easy detection of modes below threshold15. Similar lasing spectra for g = 30 nm and g = 55 nm are shown in Fig. S6 in the Supplementary Information. The lasing mode integral vs. pulse energy is shown in Fig.
4(a,b) for devices with different gaps and for A = 90° and 0°, respectively. Figure 4(c) summarizes the thresholds for both angles. As previously observed in Fig. 2(a–c) in the spontaneous emission regime, the impact of the gap size on the lasing behavior is very strong for values near the spontaneous emission critical coupling point. A larger effect of the gap size on the threshold is observed for A = 90° than for 0°. A geometry with A = 90° allows for critical coupling at a larger distance than the straight waveguide configuration, due to the increased coupling length36. For A = 0°, the critical coupling distance is shorter and was not reached experimentally. The lowest threshold of 1.2 mJ/cm2 per pulse is observed for A = 90° and g = 120 nm (see Fig. S7(a) in the Supplementary Information). Linewidth narrowing of more than a factor of two is observed when approaching the threshold (Fig. S7(b)). Figure 4(d) shows the lasing threshold as a function of Qloaded, using the below-threshold CW Qloaded values from Fig. 2 and the threshold values from Fig. 4(c). It is interesting to quantitatively analyze the microlaser threshold and the collected laser emission as a function of the coupling strength. The threshold is given as37,38 $${P}_{th}\propto {G}_{th}=\frac{2\pi {n}_{g}}{\Gamma \lambda {Q}_{{\rm{loaded}}}}+{G}_{tr},$$ (2) $${P}_{th}=B\cdot (\frac{1}{{Q}_{{\rm{int}}}}+\frac{1}{{Q}_{C}})+F,$$ (3) with Gth the threshold gain, ng the group index in the material, Γ the energy confinement factor, λ the resonant wavelength, Gtr the gain needed to achieve transparency, and B = 2070 and F = 0.79 fit parameters. The red curve in Fig. 4(d) is given by Eq. (3) and matches well with the experimental data, showing that the laser threshold can be controlled by tuning Qloaded by adjusting the gap size. As seen in Fig. 4, the threshold depends on the coupling gap between the microdisk and the bus waveguide as well as on the Qloaded. 
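Eq. (3) with the quoted fit parameters (B = 2070, F = 0.79, Qint = 4700) reproduces the two threshold regimes discussed above; a quick numerical check (thresholds in mJ/cm2 per pulse):

```python
def p_th(q_c, q_int=4700.0, B=2070.0, F=0.79):
    """Eq. (3): lasing threshold vs coupling Q, with the fit parameters above."""
    return B * (1.0 / q_int + 1.0 / q_c) + F

# Deeply under-coupled (Q_C -> infinity): threshold floor B/Q_int + F
print(round(p_th(1e9), 2))   # 1.23, cf. the 1.2 mJ/cm2 measured at g = 120 nm
# At critical coupling (Q_C = Q_int) the loaded Q halves and the threshold rises
print(round(p_th(4700), 2))  # 1.67, cf. the 1.7 mJ/cm2 measured at g = 45 nm
```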
The energy extracted at the end of the waveguide depends also on QC and there is an optimal QC providing maximum extracted energy. In the case of standard ridge lasers the optimization of the output power is well-known and the maximum out-coupled power depends on the mirror reflectivity R39. For high-power lasers a small R is chosen, providing large out-coupled power but a higher threshold, while for low-power lasers R needs to be large and the out-coupled power and threshold are small. The cases of microsphere, microdisk or microring lasers coupled with bus waveguides have been thoroughly investigated as well. In most cases, the pump energy was injected through the bus waveguide2,4,40,41, which results in the laser characteristics being doubly dependent on the coupling to waveguides for both pump injection and laser emission. In the case investigated here, population inversion is achieved through an external pump that is not linked with the bus waveguide. For this type of configuration, which is also relevant for an electrical injection scheme, the know-how to achieve optimum out-coupled power is essential. This optimum depends on the pump energy as well as the microdisk to waveguide coupling. In the spontaneous emission regime the energy transfer is most efficient at critical coupling. However, in the lasing regime the gap distance providing maximum energy transfer does not necessarily coincide with the spontaneous emission critical coupling. It is thus necessary to investigate the dependence of the output signal on the pump energy and coupling. Figure 5 shows the output signal measured at the end of the waveguide and integrated over one mode as a function of QC for different pump powers Ppump for devices with A = 90°. This signal is proportional to the out-coupled power Pout and is integrated over the mode that shows lasing first. 
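The optimum coupling for out-coupled power can be located numerically. The sketch below assumes the standard rate-equation form Pout ∝ [(1/QC)/(1/Qint + 1/QC)]·(Ppump − Pth), with Pth from the threshold fit quoted in this work (Qint = 4700, B = 2070, F = 0.79); the overall prefactor only scales the curve and does not move the maximum:

```python
import numpy as np

Q_INT, B, F = 4700.0, 2070.0, 0.79      # fit parameters quoted in the text

def p_th(q_c):
    # threshold vs coupling Q (mJ/cm^2 per pulse)
    return B * (1.0 / Q_INT + 1.0 / q_c) + F

def p_out(q_c, p_pump):
    eta = (1.0 / q_c) / (1.0 / Q_INT + 1.0 / q_c)   # external efficiency
    return eta * (p_pump - p_th(q_c))

q_c = np.linspace(500.0, 20000.0, 100001)
q_opt = q_c[np.argmax(p_out(q_c, 4.73))]            # P_pump in mJ/cm^2 per pulse
print(round(q_opt))  # ~2360, near the Q_C = 2400 quoted for this pump level
```

The optimum lies below the spontaneous emission critical coupling value (QC = Qint = 4700) and moves to smaller QC as the pump increases, consistent with the trend described in the text.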
The experimental data points are fitted using9,40,42,43 $${P}_{{\rm{out}}}=C\cdot \frac{\frac{1}{{Q}_{C}}}{(\frac{1}{{Q}_{{\rm{int}}}}+\frac{1}{{Q}_{C}})}\cdot ({P}_{{\rm{pump}}}-{P}_{th}),$$ (4) where C = 66 and Pth is given by Eq. (3) and the fit in Fig. 4(d). This equation reflects the external quantum efficiency given by the ratio between coupling losses and global losses and the threshold condition of the laser. The QC values were determined using Eq. (1), where Qint = 4700 is taken from Fig. 2(b), and Qloaded values are taken from Fig. 2(d). The error bars are determined via an exponential fit between QC and gap and the ±10 nm error assumed for the gaps. The maximum of Pout, representing maximum energy transfer, shifts towards smaller QC with increasing Ppump. For large values of QC the power quickly drops towards zero. Experiment and calculation match rather well. The quality of the fit is limited by the accuracy of the gap values and by the limited number of experimental data points. Figure S5 in the Supplementary Information shows the maximum coupling gap as a function of Ppump calculated using Eq. (S3). The maximum energy transfer under lasing conditions does not occur for the same QC and gap as critical coupling (QC = Qint = 4700) in the spontaneous emission regime, but at slightly smaller values in the investigated range of Ppump (i.e. QC = 2400 for Ppump = 4.73 mJ/cm2 per pulse), and the Ppump dependence allows for fine-tuning of the coupling efficiency for a given QC and gap size.

### Proximity effect

As the coupling distances are in the tens of nm range for blue emitters, the proximity between waveguide and microresonator is an issue for monolithic integrated circuits, as opposed to systems like a microsphere coupled to an external tapered fiber. We have thus investigated the impact of this proximity on the spectral mode resonance. Figure 6(a) shows the 421 nm mode of devices with A = 90° for different gap sizes.
With decreasing gap size the mode position first red shifts and then blue shifts, as plotted in Fig. 6(b). The red shift is expected because of the change in the effective index seen by the whispering gallery mode as the coupling distance decreases. A 0.5 nm red-shift is observed going from a 120 to a 55 nm gap. A small red-shift is also observed in FDTD simulations. For small gaps between 55 and 30 nm a 0.8 nm blue-shift per 10 nm decrease in gap size is observed. This blue-shift is caused by a reduction in disk diameter during the fabrication process due to the proximity of the waveguide4. The FDTD simulations in Fig. S4 in the Supplementary Information show that a 10 nm decrease in diameter for a 5 μm disk (0.2%) causes a 0.8 nm blue shift. The reduction in disk diameter is thus roughly proportional to the decrease in gap size, in agreement with the experimental observation. The effect of the bending angle on mode position is investigated in Fig. S8 in the Supplementary Information. The proximity effect is less pronounced for small angles due to a reduced coupling length. These measurements highlight the strong sensitivity of the microresonator system to the coupling distances. A short distance, i.e. in the range of tens of nm, is mandatory for efficient coupling. At the same time, we reach a regime where the distances set new challenges for fabrication, as the gap needs to be open, and where the distances have an impact on the microresonator resonances through a loading effect. This effect needs to be taken into account for the design of integrated photonic circuits in the blue spectral range.

## Conclusion

In conclusion, we have demonstrated critical coupling and lasing in an active III-nitride on silicon nanophotonic circuit utilizing a microdisk and a bent bus waveguide in the blue spectral range. This study was meant to identify the salient features associated with critical coupling and lasing in the blue.
Low threshold energies of 1.7 mJ/cm2 per pulse at the critical coupling gap and 1.2 mJ/cm2 per pulse in the under-coupled regime have been observed. Large Qint factors of 4700 were measured in the under-coupled regime and a reduction in Qloaded by a factor of 2 was observed at critical coupling for a gap of 45 nm. A strong dependence of Pth on Qloaded was observed, in close agreement with the model of Eq. (3). Pout was investigated as a function of QC, and we analyzed the dependence on Ppump, finding good agreement between the experimental data and the analytical formula. A proximity effect of the waveguide on the disk was shown to cause a shift in mode position, emphasizing the impact of an optimized design and fabrication on the intrinsic properties of the microresonator. More complex photonic circuits using active microlasers operating near critical coupling parameters can be used for visible-light communication44 and lab-on-chip applications45. An important step will be to integrate electrical injection into this nanophotonic platform; we have recently developed a compatible technology46. Going towards the UV spectral range can also be interesting, for example for germicidal irradiation; however, smaller gap sizes will be necessary for critical coupling, which will be more difficult to master experimentally. Using a bonding process, III-nitride thin films can also be transferred onto SiO2-on-Si substrates, which would eliminate the necessity of underetching, allowing for the fabrication of ring resonators, for example.

## Methods

### Sample growth

The sample investigated in this study was grown by molecular beam epitaxy (MBE) on a silicon (111) substrate. First a 100 nm aluminum nitride (AlN) buffer layer was grown, followed by 300 nm GaN and 10 pairs of 2.2 nm In0.12Ga0.88N/9 nm GaN QWs, emitting at 430 nm. A sketch of the heterostructure is shown in Fig. 1(a).
When comparing this sample to the one investigated in our previous work22, we observe a factor of 4 higher PL intensity from the as-grown sample and a factor of 2 improvement in the microdisk Qint, likely because there is no silicon doping in this sample, which improves the material quality.

### Sample processing

We use a process similar to the one described previously22. Two layers of e-beam lithography using UVIII resist, reactive ion etching (RIE) of a plasma-enhanced chemical vapor deposited (PECVD) silicon dioxide (SiO2) hard mask, and inductively coupled plasma (ICP) etching of the III-nitride using chlorine (Cl2) and boron trichloride (BCl3) gases are used. In the first level the microdisk and bus waveguide are defined and the surrounding area is etched to the Si substrate. The end of the waveguide is tapered from 500 nm to 2 μm width. In the second level the waveguide is etched to the GaN buffer layer to avoid re-absorption of the emission, creating a 120 nm high edge at the end of the waveguide, which allows for light extraction by scattering. Further optimization of light extraction requires processing of grating couplers at the waveguide extremities22. As a final step, the devices are underetched using xenon difluoride (XeF2) gas, leaving the microdisk standing on a central pedestal and the waveguide suspended in air, held by tethers. The device length is 100 μm, as can be seen in Fig. 1(g).

## References

1. Yariv, A. Universal relations for coupling of optical power between microresonators and dielectric waveguides. Electron. Lett. 36, 321–322, https://doi.org/10.1049/el:20000340 (2000). 2. Cai, M., Painter, O. & Vahala, K. J. Observation of critical coupling in a fiber taper to a silica-microsphere whispering-gallery mode system. Phys. Rev. Lett. 85, 74–77, https://doi.org/10.1103/PhysRevLett.85.74 (2000). 3. Spillane, S. M., Kippenberg, T. J. & Vahala, K. J. Ultralow-threshold Raman laser using a spherical dielectric microcavity.
Nature 415, 621–623, https://doi.org/10.1038/415621a (2002). 4. Bogaerts, W. et al. Silicon microring resonators. Laser Photonics Rev 6, 47–73, https://doi.org/10.1002/lpor.201100017 (2012). 5. Fang, A. W. et al. Electrically pumped hybrid AlGaInAs-silicon evanescent laser. Opt. Express 14, 9203, https://doi.org/10.1364/OE.14.009203 (2006). 6. Van Campenhout, J. et al. Design and optimization of electrically injected InP-based microdisk lasers integrated on and coupled to a SOI waveguide circuit. J. Light. Technol 26, 52–63, https://doi.org/10.1109/JLT.2007.912107 (2008). 7. Liang, D. et al. Electrically-pumped compact hybrid silicon microring lasers for optical interconnects. Opt. Express 17, 20355–20364, https://doi.org/10.1364/OE.17.020355 (2009). 8. Koseki, S. Monolithic waveguide coupled GaAs microdisk microcavity containing InGaAs quantum dots. Ph.D. thesis, Stanford University (2008). 9. He, L., Özdemir, S. K. & Yang, L. Whispering gallery microcavity lasers. Laser Photonics Rev 7, 60–82, https://doi.org/10.1002/lpor.201100032 (2013). 10. Tamboli, A. C. et al. Room-temperature continuous-wave lasing in GaN/InGaN microdisks. Nature Photonics 1, 61–64, https://doi.org/10.1038/nphoton.2006.52 (2007). 11. Simeonov, D. et al. Blue lasing at room temperature in high quality factor GaN/AlInN microdisks with InGaN quantum wells. Appl. Phys. Lett. 90, 061106, https://doi.org/10.1063/1.2460234 (2007). 12. Aharonovich, I. et al. Low threshold, room-temperature microdisk lasers in the blue spectral range. Appl. Phys. Lett. 103, 021112, https://doi.org/10.1063/1.4813471 (2013). 13. Athanasiou, M., Smith, R., Liu, B. & Wang, T. Room temperature continuous-wave green lasing from an InGaN microdisk on silicon. Sci. Reports 4, 7250, https://doi.org/10.1038/srep07250 (2014). 14. Zhang, Y. et al. Advances in III-nitride semiconductor microdisk lasers. Phys. Status Solidi A 210, 960–973, https://doi.org/10.1002/pssa.201431745 (2014). 15. Sellés, J. et al. 
Deep-UV nitride-on-silicon microdisk lasers. Sci. Reports 6, 21650, https://doi.org/10.1038/srep21650 (2016). 16. Simeonov, D. et al. High quality nitride based microdisks obtained via selective wet etching of AlInN sacrificial layers. Appl. Phys. Lett. 92, 171102, https://doi.org/10.1063/1.2917452 (2008). 17. Mexis, M. et al. High quality factor nitride-based optical cavities: microdisks with embedded GaN/Al(Ga)N quantum dots. Opt. Lett. 36, 2203–2205, https://doi.org/10.1364/OL.36.002203 (2011). 18. Rousseau, I. et al. Optical absorption and oxygen passivation of surface states in III-nitride photonic devices. J. Appl. Phys. 123, 113103, https://doi.org/10.1063/1.5022150 (2018). 19. Stegmaier, M. et al. Aluminum nitride nanophotonic circuits operating at ultraviolet wavelengths. Appl. Phys. Lett. 104, 091108, https://doi.org/10.1063/1.4867529 (2014). 20. Lu, T.-J. et al. Aluminum nitride integrated photonics platform for the ultraviolet to visible spectrum. Opt. Express 26, 11147–11160, https://doi.org/10.1364/OE.26.011147 (2018). 21. Liu, X. et al. Ultra-high-Q UV microring resonators based on single-crystalline AlN platform. Optica 5, 1279–1282, https://doi.org/10.1364/OPTICA.5.001279 (2018). 22. Tabataba-Vakili, F. et al. Blue microlasers integrated on a photonic platform on silicon. ACS Photonics 5, 3643–3648, https://doi.org/10.1021/acsphotonics.8b00542 (2018). 23. Pernice, W. H., Xiong, C. & Tang, H. X. High Q micro-ring resonators fabricated from polycrystalline aluminum nitride films for near infrared and visible photonics. Opt. Express 20, 12261–12269, https://doi.org/10.1364/OE.20.012261 (2012). 24. Jung, H., Xiong, C., Fong, K. Y., Zhang, X. & Tang, H. X. Optical frequency comb generation from aluminum nitride microring resonator. Opt. Lett. 38, 2810–2813, https://doi.org/10.1364/OL.38.002810 (2013). 25. Bruch, A. W. et al. Broadband nanophotonic waveguides and resonators based on epitaxial GaN thin films. Appl. Phys. Lett. 
107, 141113, https://doi.org/10.1063/1.4933093 (2015). 26. Roland, I. et al. Phase-matched second harmonic generation with on-chip GaN-on-Si microdisks. Sci. Reports 6, 34191, https://doi.org/10.1038/srep34191 (2016). 27. Shi, Z. et al. Transferrable monolithic III-nitride photonic circuit for multifunctional optoelectronics. Appl. Phys. Lett. 111, 241104, https://doi.org/10.1063/1.5010892 (2017). 28. Zhou, Z., Yin, B. & Michel, J. On-chip light sources for silicon photonics. Light. Sci. & Appl. 4, e358, https://doi.org/10.1038/lsa.2015.131 (2015). 29. Xie, E. et al. High-speed visible light communication based on a III-nitride series-biased micro-LED array. J. Light. Technol 37, 1180–1186, https://doi.org/10.1109/JLT.2018.2889380 (2019). 30. Brubaker, M. D. et al. On-chip optical interconnects made with gallium nitride nanowires. Nano Lett. 13, 374–377, https://doi.org/10.1021/nl303510h (2013). 31. Jagsch, S. T. et al. A quantum optical study of thresholdless lasing features in high-β nitride nanobeam cavities. Nat. Commun. 9, 564, https://doi.org/10.1038/s41467-018-02999-2 (2018). 32. Lia, X. & Liu, X. Group III nitride nanomaterials for biosensing. Nanoscale 9, 7320, https://doi.org/10.1039/c7nr01577a (2017). 33. Polstein, L. R. & Gersbach, C. A. Light-inducible spatiotemporal control of gene activation by customizable zinc finger transcription factors. J. Am. Chem. Soc. 134, 16480–16483, https://doi.org/10.1021/ja3065667 (2012). 34. Bruch, A. W. et al. 17 000%/w second-harmonic conversion efficiency in single-crystalline aluminum nitride microresonators. Appl. Phys. Lett. 113, 131102, https://doi.org/10.1063/1.5042506 (2018). 35. Tabataba-Vakili, F. et al. Q factor limitation at short wavelength (around 300 nm) in III-nitride-on-silicon photonic crystal cavities. Appl. Phys. Lett. 111, 131103, https://doi.org/10.1063/1.4997124 (2017). 36. Hu, J. et al. Planar waveguide-coupled, high-index-contrast, high-Q resonators in chalcogenide glass for sensing. Opt. Lett. 
33, 2500–2502, https://doi.org/10.1364/OL.33.002500 (2008). 37. Baba, T. & Sano, D. Low-threshold lasing and Purcell effect in microdisk lasers at room temperature. IEEE J. Sel. Top. Quantum Electron. 9, 1340–1346, https://doi.org/10.1109/JSTQE.2003.819464 (2003). 38. Gargas, D. J. et al. Whispering gallery mode lasing from zinc oxide hexagonal nanodisks. ACS Nano 4, 3270–3276, https://doi.org/10.1021/nn9018174 (2009). 39. Rosencher, E. & Vinter, B. Optoélectronique (Sciences Sup, Dunod, 2002). 40. Min, B. et al. Erbium-implanted high-Q silica toroidal microcavity laser on a silicon chip. Phys. Rev. A 70, 033803, https://doi.org/10.1103/PhysRevA.70.033803 (2004). 41. Rasoloniaina, A. et al. Controlling the coupling properties of active ultrahigh-Q WGM microcavities from undercoupling to selective amplification. Sci. Reports 4, 4023, https://doi.org/10.1038/srep04023 (2014). 42. Siegman, A. E. Lasers (University Science Books, 1986). 43. Yariv, A. Quantum Electronics (John Wiley & Sons, Inc., 1989). 44. Islim, M. S. et al. Towards 10 Gb/s orthogonal frequency division multiplexing-based visible light communication using a GaN violet micro-LED. Photonics Res 5, A35–A43, https://doi.org/10.1364/PRJ.5.000A35 (2017). 45. Estevez, M.-C., Alvarez, M. & Lechuga, L. M. Integrated optical devices for lab-on-a-chip biosensing applications. Laser Photonics Rev 6, 463–487, https://doi.org/10.1002/lpor.201100025 (2012). 46. Tabataba-Vakili, F. et al. III-nitride on silicon electrically injected microrings for nanophotonic circuits. Opt. Express 27, 11800–11808, https://doi.org/10.1364/OE.27.011800 (2019).

## Acknowledgements

This work was supported by Agence Nationale de la Recherche under MILAGAN convention (ANR-17-CE08-0043-02). This work was also partly supported by the RENATECH network. We acknowledge support by a public grant overseen by the French National Research Agency (ANR) as part of the “Investissements d’Avenir” program: Labex GANEX (Grant No.
ANR-11-LABX-0014) and Labex NanoSaclay (reference: ANR-10-LABX-0035).

## Author information

### Contributions

F.T.V. designed and processed the devices, and did the FDTD simulations. B.D., S.R., E.F. and F.S. grew and characterized the sample. F.T.V., L.D., C.B., T.G. and B.G. did the optical spectroscopy. P.B. and X.C. worked out the equations. J.Y.D., M.E.K. and S.S. discussed the results. F.T.V. and P.B. wrote the manuscript. P.B. guided and supervised the project at every step. All authors gave valuable feedback on the results and on the manuscript.

### Corresponding author

Correspondence to Philippe Boucaud.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Tabataba-Vakili, F., Doyennette, L., Brimont, C. et al. Demonstration of critical coupling in an active III-nitride microdisk photonic circuit on silicon.
Sci Rep 9, 18095 (2019). https://doi.org/10.1038/s41598-019-54416-3
# Do the circles intersect?

This question is similar to Do the circles overlap?, except your program should return true if the circumference of one circle intersects the other, and false if they do not. If they touch, this is considered an intersection. Note that concentric circles only intersect when both radii are equal.

Input is two x, y coordinates for the circle centres and two radius lengths, as either 6 floats or ints, in any order, and the output is a boolean value (you may print True or False if you choose). Test cases are in the format (x1, y1, x2, y2, r1, r2):

these inputs should output true:

0 0 2 2 2 2
1 1 1 5 2 2
0.55 0.57 1.97 2.24 1.97 2.12

these should output false:

0 0 0 1 5 3
0 0 5 5 2 2
1.57 3.29 4.45 6.67 1.09 3.25

• So this is the same as the other one except for if one circle is entirely inside the other? – geokavel Jul 27 '17 at 3:05
• @geokavel essentially, yes – micsthepick Jul 27 '17 at 3:07
• The slight variation makes for a very different problem. Good job! – Adám Jul 27 '17 at 8:35
• Related. – Martin Ender Jul 27 '17 at 9:33
• You are using an unusual definition of intersection. In Mathematics, one circle being entirely within the other implies there is intersection. So you should perhaps rephrase, replacing "intersection" by "partial intersection" – Luis Mendo Jul 27 '17 at 10:19

# APL (Dyalog), 20 bytes

Uses Anders Kaseorg's formula: 0 ≤ (x₁−x₂)² − (r₁−r₂)² + (y₁−y₂)² ≤ 4r₁r₂. Takes (x₁, r₁, y₁) as left argument and (x₂, r₂, y₂) as right argument.

(-/2*⍨-)(≤∧0≤⊣)4×2⊃×

Try it online!

The overall function's structure is a fork (3-train) where the tines are -/2*⍨- and ≤∧0≤⊣ and 4×2⊃×*. The middle tine takes the results of the side tines as arguments. The side tines use the overall function's arguments.

Right tine:
× multiply the arguments (x₁x₂, r₁r₂, y₁y₂)
2⊃ pick the second element (r₁r₂)
4× multiply by four (4r₁r₂)

Left tine:
- subtract the arguments (x₁−x₂, r₁−r₂, y₁−y₂)
2*⍨ square ((x₁−x₂)², (r₁−r₂)², (y₁−y₂)²)
-/ minus reduction, i.e.
alternate sum* ((x₁−x₂)² − ((r₁−r₂)² − (y₁−y₂)²))

Now we use these results as arguments to the middle tine: 0≤⊣ is the left argument greater than or equal to zero, ∧ and, ≤ the left argument smaller than or equal to the right argument?

* due to APL's right associativity

# Mathematica, 67 bytes

This checks whether the area of the intersection of two disks is positive.

RegionMeasure@RegionIntersection[{#,#2}~Disk~#3,{#4,#5}~Disk~#6]>0&

• Mathematics. Isn't this Mathematica? – Mr. Xcoder Jul 27 '17 at 8:49

# APL (Dyalog Classic), 17 bytes

(+/∧.≥+⍨)⎕,|0j1⊥⎕

Try it online!

expects two lines of input: x1 y1 x2 y2 and r1 r2

⎕ read and evaluate a line
0j1 imaginary constant i = sqrt(-1)
0j1⊥ decode from base i, thus computing: i³·x1 + i²·y1 + i¹·x2 + i⁰·y2 = −i·x1 − y1 + i·x2 + y2 = i(x2−x1) + (y2−y1)
| magnitude of that complex number, i.e. distance between the two points
⎕, read the two radii and prepend them, so we have a list of 3 reals
(+/∧.≥+⍨) flat-tolerant triangle inequality: the sum (+/) must be greater than or equal to (≥) the double of each side (+⍨), and all these results must be true (∧.)

# Python 3, ~~56~~ 55 bytes

lambda X,Y,x,y,R,r:0<=(X-x)**2+(Y-y)**2-(R-r)**2<=4*r*R

Try it online!

• -1 byte. – notjagan Jul 27 '17 at 3:14
• @notjagan Sure, thanks! – Anders Kaseorg Jul 27 '17 at 3:28

# GolfScript, 20 bytes

~@- 2?@@- 2?+@@+2?>!

Try it online!

Takes arguments in the following form: r1 r2 x1 y1 x2 y2. Implements the formula above. GolfScript does not inherently support floating-point types; see this post for more details about passing floats: https://codegolf.stackexchange.com/a/26553. My original intention was to use GolfScript's zip and fold operations.

# GolfScript, 26 bytes

1:a;~zip{{a*+2?-1:a;}*}/+>

Try it online!

• @JamesHolderness I forgot to mention that little quirk in GolfScript. See the edit concerning this. – Marcos Dec 3 '17 at 15:11
• Just updated the input to be an implementation of the truthy float test. The values aren't exact, but it still works.
– Marcos Dec 3 '17 at 16:31 # JavaScript, 52 bytes (X,Y,x,y,R,r)=>4*r*R/((X-x)**2+(Y-y)**2-(R-r)**2)>=1 Some how Based on Anders Kaseorg Python solution # Scala, 59 bytes This might be golfable. val u=(a-x)*(a-x)+(b-y)*(b-y) (q-r)*(q-r)<=u&u<=(q+r)*(q+r) Try it online! # Scala, 66 bytes I do like this one, but sadly the previous was shorter. I hate Scala for forcing defining parameter type... var p=math.pow(_:Double,2) val u=p(a-x)+p(b-y) p(q-r)<=u&u<=p(q+r) Try it online!
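Most of the answers above implement the same inequality: with d the distance between the centres, the circumferences meet iff 0 <= d^2 - (r1 - r2)^2 <= 4*r1*r2. An ungolfed Python sketch of that test (the function name is mine), checked against the question's test cases:

```python
def circles_intersect(x1, y1, x2, y2, r1, r2):
    """True iff the circumferences of the two circles touch or cross.

    Equivalent to |r1 - r2| <= d <= r1 + r2, rearranged to avoid sqrt:
    squaring both sides gives 0 <= d^2 - (r1 - r2)^2 <= 4*r1*r2,
    since (r1 + r2)^2 - (r1 - r2)^2 = 4*r1*r2.
    """
    d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2
    return 0 <= d2 - (r1 - r2) ** 2 <= 4 * r1 * r2

print(circles_intersect(0, 0, 2, 2, 2, 2))  # True
print(circles_intersect(0, 0, 5, 5, 2, 2))  # False (too far apart)
print(circles_intersect(0, 0, 0, 1, 5, 3))  # False (one inside the other)
```

The lower bound rejects the concentric/contained case; the upper bound rejects circles too far apart.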
http://mathematica.stackexchange.com/questions?page=68&sort=faq
# All Questions 329 views ### Why doesn't Roots work on a certain quartic polynomial equation? In everything that follows I am using Roots to get the solutions to the equations. I start off with this equation: $$\frac{A}{3}x^4 - x^2 + 2ax - b^2 = 0$$ Now ... 118 views ### Comparison Operation for Nested Matrices I have a nested matrix n as bellow ... 176 views ### When to use indexed variables Still learning the fundamentals of the language I would like to ask you what advantages there might be in writing something like: a[1] = 2; a[2] = 4; a[3] = "x"; ... 561 views ### How to Make a change of variables I have the following expresion ... 126 views ### Is there a package or canonical way for writing test harnesses in Mathematica? I've written a complicated function foo.1 I'd now like to simplify this function's code, and later enhance it with more features. With each change, however, I'd ... 1k views ### Import Large Excel File Currently, I am working on a large time series dataset, which contains 928,586 rows and 35 time intervals in columns, in excel format (.xlsx) and using Import to ... 175 views ### Omitting singularities from plot by specifying them in range See the Mathematica 'basic plotting' documentation: http://reference.wolfram.com/mathematica/tutorial/BasicPlotting.html Scroll down to the third plot on that page, which says the following: "The ... 305 views ### Package relative file path specification I have some packages inside a folder (named Packs) that is in the same file level of some executed package. ... 200 views ### Manipulate keeps updating due to a function Edit: This is apparently only a problem on a mac running OSX. I am trying to use the 'stretchText' function by Jens. But when I use it in manipulate, the output keeps updating without changing ... 536 views ### Why is parallel slower? I always assumed that distributing a computation is faster, but it isn't necessarily true. 
When I do Sum[i,{i,10^4}] I retrieve an answer much faster then if I do ... 538 views 438 views ### How to sort my list? For example I have two lists of such structure L1 = {{"Australia",a1}, {"USA",a2}, {"Norway",a3},...} L2 = {{"Russia",b1}, {"Norway",b2}, {"Japan",b3},...} I ... 349 views ### How to solve an integral equation by iteration method? [duplicate] How I can obtain $n^{th}$ approximation of the following equation $f(t)=t+\int_0^tds f(s)$ by iteration method? 173 views ### How to input and output partitioned matrices that show partitions and compute as normal? I want to demonstrate multiplication of partitioned matrices as in the example here. Using the Insert Menu, you can build a matrix and draw lines between rows and columns. However, I want to be able ... 113 views ### Set number of columns in text-based interface Is it possible to set the number of columns in Mathematica's text-based interface? Or disable word wrapping in the text-based interface ? Specifically, my concern is output from ... 297 views ### How does epilog position work in logplots? I'm trying to use Epilog in my plots but the way that Epilog uses coordinate positions isn't making sense to me. As a minimal ... 299 views ### Butterworthfilter data filter I am currently doing a signal analysis and filtering using Matlab. The filter I am using is Butterworthfilter model to filter scattered data (vector data). I am using Matlab as follows: ... 630 views I want to do some large data analysis, and i want to be able to import files in a fast way. I tried Import, but it is quite slow; ReadList takes half the time. This is the code I was using: ... 748 views ### How to draw rectangle in mathematica with arrow I have a basic question but i cant seem to find an answer for it. How do i draw a rectangle in mathematica with either clockwise or counterclockwise arrow heads. In this picture i know two points ... 224 views ### Replace rule with function? 
Derivatives don't evaluate Say I have an expression (call it expr) involving a function, f[x]. I'd like to be able to evaluate that for a particular choice of f[x] without setting that choice for the whole session. I thought to ...
https://math.meta.stackexchange.com/questions/16258/can-not-formulate-title-for-the-question
# Cannot formulate title for the question [closed]

## INTRODUCTION:

I am a programmer who came to this site from StackOverflow in the hope of asking for help with a specific problem. Things developed reasonably well, but the answer I got did not entirely solve my question. Trying to solve this on my own, based on data from the provided answer, I got an idea for a new question, but I need the community's help to phrase it well:

I have 2 rectangles; let us mark them R1 and R2 for easier reading. R1 has constant dimensions, and R2 is calculated by a function that takes R1 as an input. R2 should have the same width as R1, but I have no control over the function, and the final result can return R2 with a shorter/longer width. The workaround I came up with is to "clip" the extra width, and to transform that part somehow (this is where you people "kick in") into a rectangle that has the same width as R1, thus simply "gluing" it to the bottom of R2. I have made a small image that should help you understand what I said above: Blue is the rectangle the function returns. The pink portion on the right is the part I want to "clip", "transform", and use to adjust the blue rectangle's width and height.

## QUESTION:

Since this is meta, I ask you to suggest a title and tag(s) that would best describe this problem to those interested in helping me. Thank you.

## closed as off-topic by Rahul, user61527, Asaf Karagila♦, Michael Albanese, gnometorule Jul 12 '14 at 17:23

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "This question does not appear to be about Mathematics Stack Exchange or the software that powers the Stack Exchange network within the scope defined in the help center." – Rahul, Community, Asaf Karagila, Michael Albanese, gnometorule

If this question can be reworded to fit the rules in the help center, please edit the question.

• -1 This question is off-topic for math.SE's meta.
– achille hui Jul 11 '14 at 16:17 • @achillehui: OK, thank you. – AlwaysLearningNewStuff Jul 11 '14 at 16:28 • Just give the title your best shot--if someone can think of a better one, they'll edit it in. – apnorton Jul 11 '14 at 17:16
https://myassignments-help.com/2022/10/28/calculus-of-variations-dai-xie-%D0%BC%D0%B0%D1%82h5451/
# Calculus of Variations (MATH5451)

## Variational Problems with Fixed Boundaries

The calculus of variations, also called variational methods or variational calculus, is a branch of mathematical analysis that began to develop at the end of the 17th century. It studies extrema of functionals of definite-integral type that depend on unknown functions. In short, the method of finding the extremum of a functional is called the calculus of variations; the problem of finding the extremum of a functional is called a variational problem, variation problem or variational principle. On February 5, 1733, Clairaut published the first treatise on variational methods, Sur quelques questions de maximis et minimis. The work published by Euler in 1744, Methodus Inveniendi Lineas Curvas Maximi Minimive Proprietate Gaudentes, Sive Solutio Problematis Isoperimetrici Latissimo Sensu Accepti (A method of finding curves which have a property of extremes, or solution of the isoperimetric problem if it is understood in the broadest sense of the word), marked the birth of the calculus of variations as a new branch of mathematics. The term "method of variation" was first proposed by Lagrange in August 1755 in a letter to Euler, while Euler, in a paper of 1756, proposed the term "calculus of variations". The calculus of variations is an important part of functional analysis, although the calculus of variations appeared first and functional analysis came later. This chapter, through several classical variational examples, illustrates the concepts of functional and variation.
It mainly discusses the first variation of a functional, necessary conditions for an extremum of a functional, the Euler equations and their special cases, and methods for solving the various types of extremal functions of functionals under fixed boundary conditions, with particular attention to the variational problems of the complete functional.

## Examples of the Classical Variational Problems

The basic problem of variational methods is to find extremal problems of functionals and the corresponding extremal functions. To show the research content of variational methods, we first raise the concept of a functional through several classical variational examples.

Example 2.1.1 The brachistochrone problem, or problem of the curve of steepest descent. This is one of the earliest variational problems in history, usually considered the beginning of the history of variational methods and a symbol of their development. It was first proposed by Galileo in 1630; he studied the problem systematically again in 1638, but at that time he gave the wrong result: he thought the curve was a circular arc. The substantial research on the variational method began when John Bernoulli, in an open letter to his brother Jacob Bernoulli in the June 1696 issue of the Leipzig Acta Eruditorum, asked for a solution to the problem. The formulation of the problem was: Assuming that $A$ and $B$ are two points not on the same vertical straight line in a vertical plane, among all plane curves joining point $A$ and point $B$, determine a curve such that a particle acted on only by gravity, starting with zero initial velocity, moves from point $A$ to point $B$ along the curve in the shortest time. This problem attracted the interest of many mathematicians at the time.
After Newton heard of the problem on January 29, 1697, he solved it the same day. Leibniz, the Bernoulli brothers, L'Hospital and others all studied this problem and obtained correct results in different ways; among them, Jacob Bernoulli, starting from geometric intuition, gave the more general solution, which took a big step in the direction of the variational methods. Except for Jacob Bernoulli's solution, the others' solutions were published in the May 1697 issue of the Acta Eruditorum.

Solution. The particle's travel time depends not only on the length of the path but also on the speed. Among all curves joining point $A$ and point $B$, the straight line $AB$ is the shortest (see the solution of Example 2.5.11), but it is not necessarily the path of shortest travel time. We now establish the mathematical model of this problem. As shown in Fig. 2.1, take $A$ as the origin of a plane rectangular coordinate system, with the $x$ axis horizontal and the $y$ axis directed downward. Obviously, the brachistochrone should lie in this plane, so the coordinates of point $A$ are $(0,0)$. Let the coordinates of point $B$ be $\left(x_1, y_1\right)$; the equation of a curve joining point $A$ and point $B$ is
$$y=y(x) \quad\left(0 \leq x \leq x_1\right)$$
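The excerpt stops just after introducing $y = y(x)$; the next standard step (hedged here, since it is not part of the quoted text) is to write down the functional to be minimised. With the axes chosen above, energy conservation gives the particle speed $v = \sqrt{2gy}$ at depth $y$, so the descent time along the curve is

```latex
% ds = \sqrt{1 + y'^2}\,dx and v = \sqrt{2 g y}, hence dt = ds / v:
T[y] = \int_0^{x_1} \sqrt{\frac{1 + y'(x)^2}{2\, g\, y(x)}}\; dx,
\qquad y(0) = 0, \quad y(x_1) = y_1 .
```

Minimising $T[y]$ over curves with these fixed endpoints is exactly the kind of fixed-boundary variational problem this chapter treats; its Euler equation yields the cycloid.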
https://www.mersenneforum.org/showthread.php?s=34db73728ecc2c4811af512eb988c08d&t=24797&goto=nextnewest
mersenneforum.org — Condition on composite numbers easily factored (Challenger)

2019-10-09, 19:46 #1 baih Jun 2019 32 Posts

q a prime number, p a prime number, and q > p. Let c = qp and e = 2^p mod q. If we know e, we can factor c.

OK, let's try with c and e.

Last fiddled with by baih on 2019-10-09 at 19:51

2019-10-10, 11:14 #2 R. Gerbicz "Robert Gerbicz" Oct 2005 Hungary 2×3×223 Posts

With "a mod n" we denote the smallest non-negative residue modulo n. We're given e = 2^p mod q and calculate (in polynomial time) r = 2^n mod n. What is r mod q? Easy: r mod q = (2^n mod n) mod q = 2^n mod q = (2^q)^p mod q = 2^p mod q = e (using Fermat's little theorem). So we know that r mod q = e, hence r − e is divisible by q, so q | gcd(r−e, n), and if this is not n, then we can factorize: q = gcd(r−e, n) and, with a division, p = n/q. The (only) hole in this proof is that (in rare cases) it is possible that n = gcd(r−e, n); for example, this is the case for n = 341.

To see that it is really working:
Code: f(n,e)={r=lift(Mod(2,n)^n);d=gcd(r-e,n);return(d)} p=nextprime(random(2^30)); q=nextprime(random(2^30)); n=p*q;e=lift(Mod(2,q)^p); print([p,q]); f(n,e) (one possible output) ? [623981947, 805922797] ? %42 = 805922797

ps. we have not used that q > p.

Last fiddled with by R. Gerbicz on 2019-10-10 at 11:16 Reason: typo
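The PARI/GP snippet above translates directly into Python; here is a sketch of the same procedure (the function name is mine), checked against the prime pair from the sample output in the post:

```python
from math import gcd

def recover_factor(n, e):
    """Given n = p*q and e = 2^p mod q, return q (for all but rare n).

    r = 2^n mod n satisfies r ≡ 2^n ≡ (2^q)^p ≡ 2^p ≡ e (mod q)
    by Fermat's little theorem, so q divides gcd(r - e, n).
    """
    r = pow(2, n, n)  # fast modular exponentiation, polynomial time
    return gcd(r - e, n)

# The pair from the post's sample output:
p, q = 623981947, 805922797
e = pow(2, p, q)
print(recover_factor(p * q, e))  # 805922797
```

As Gerbicz notes, in rare cases (e.g. when n is a base-2 pseudoprime such as 341) the gcd can come out as n itself, in which case nothing is learned.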
https://puzzling.stackexchange.com/questions/55522/megan-visits-all-the-stations
# Megan visits all the stations

The other day I was waiting for a train and had just popped into the nearby branch of Costa Bean for a coffee, when who should I run into but my old friend Megan and her partner Ricky, studying a travel brochure.

"We're planning to spend a few days to ride on this railway." Ricky replied, handing me the brochure. It was for The Diamond Of Wales Line. "The line's only recently been opened. I want to look round all the stations. So we've booked a hotel which is nice and convenient for the terminus at Abercydabra. We'll start there, then for each of the other stations, we'll take one train there, get off and have a look around, then get on a different train there, and so on until we get back to Abercydabra. We're going to visit all the stations."

I opened the brochure, which gave the timetable. Each column shows the first and last train of the day, with trains then every two hours until the last time shown:

    Abercydabra     08:00  16:00      Maenddygap      09:00  17:00
    Brynpen Hill    08:12  16:12      Llanllewllwyd   09:12  17:12
    Caerlesli       08:24  16:24      Kington         09:24  17:24
    Dynower         08:36  16:36      Joneston        09:36  17:36
    Egynbeicyn      08:48  16:48      Isicyl          09:48  17:48
    Fforestffair    09:00  17:00      Hardllyr        10:00  18:00
    Greenweli       09:12  17:12      Greenweli       10:12  18:12
    Hardllyr        09:24  17:24      Fforestffair    10:24  18:24
    Isicyl          09:36  17:36      Egynbeicyn      10:36  18:36
    Joneston        09:48  17:48      Dynower         10:48  18:48
    Kington         10:00  18:00      Caerlesli       11:00  19:00
    Llanllewllwyd   10:12  18:12      Brynpen Hill    11:12  19:12
    Maenddygap      10:24  18:24      Abercydabra     11:24  19:24

All trains are steam trains except the 10:00 from Abercydabra and the 17:00 from Maenddygap, which are diesel.

"That's not many trains per day and a lot of stations." I noted. "You're surely not going to manage to visit all of them in one day?"

"No, I don't think we can, but I want to work out an itinerary which minimises the number of days we'll need. Returning to our hotel at the end of each day."

"Of course! I'm not having us spending the night at a station!" Megan added. "Do the trains have dining-cars?"
"It doesn't look like it" her partner said, looking at the brochure again. "But I expect we can grab a bite to eat at a station café or somewhere near -- depending on where we happen to be at lunchtime. All the stations have cafés where we can have lunch." "I'll want a proper lunch, what with all the train-hopping we'll be doing. So let's allow at least an hour for that, and not on a train. And at a reasonable hour, too, say some time between 12 and 2. Can you allow for that on our itinerary?" "I expect so. Now I like steam trains but don't care so much for diesel ones. So let's see how we can do this in a way which minimises how much time we spend on diesels." So, how many days will they need? And what will be their daily itineraries? (I was inspired by this project.) (Here's Geoff Marshall, with Vicki Pipe, explaining the specific bit which inspired my puzzle.) • how long does it take from one station to another? an hour?and how the train station works exactly? let say how I can go to J from A? a map would be nice to think over. – Oray Oct 1 '17 at 21:36 • @Oray - My reading of the timetable says that the stations are all on a single line from A to M, and it takes exactly 12 minutes to go from station to station, which you can do once every 2 hours. – Bobson Oct 1 '17 at 22:15 • What are the priorities for requirements (days and diesel minimisation)? i.e. Is 1 day with 2 diesel trains better than 2 days with 1 diesel train? – Alpha Oct 2 '17 at 22:28 • @Alpha Minimise days. Having done that, minimise diesel. – Rosie F Oct 3 '17 at 5:54 It's possible to visit all stations in 2 days, which is clearly the minimum time it can take since visiting 13 stations means arriving at 13 stations on 13 different trains, and there are only 10 trains per day (the 08:00, 10:00, 12:00, 14:00, and 16:00 from Abercydabra, and the 09:00, 11:00, 13:00, 15:00, and 17:00 from Maenddygap), so it cannot be done in just 1 day. 
The following is one possible itinerary, although I don't know if it minimises the use of the diesel train. Moving horizontally in this tabulation corresponds to changing trains at the station on that row, arriving at the left hand time and leaving at the right hand time, and moving vertically corresponds to taking a train from the station on the upper row to the station on the lower row. An L denotes that lunch is eaten at the station on that row. Trains in bold travel towards Maenddygap, and trains not in bold travel towards Abercydabra. Trains in italics are diesel trains, and trains not in italics are steam trains. Day 1 Abercydabra 08:00 Maenddygap 10:24 11:00 Kington 11:24 12:00 Llanllewllwyd 12:12 L 13:12 Hardllyr 14:00 15:24 Isicyl 15:36 15:48 Fforestffair 16:24 17:00 Greenweli 17:12 18:12 Abercydabra 19:24 (This takes the morning diesel only one stop / 12 minutes, between Kington and Llanllewllyed, but takes the evening diesel six stops / 1 hour and 12 minutes, between Greenweli and Abercydabra.) Day 2 Abercydabra 08:00 Joneston 09:48 11:36 Caerlesli 13:00 L 14:24 Dynower 14:36 14:48 Brynpen Hill 15:12 16:12 Egynbeicyn 16:48 18:36 Abercydabra 19:24 (This completely avoids the morning diesel, but takes the evening diesel four stops / 48 minutes, between Egynbeicyn and Abercydabra.) • This itinerary travels on four out of the five scheduled trains in each direction on the first day, missing out the 12:00 from Abercydabra and the 09:00 from Maenddygap. On the second day, it travels on three of the five trains in each direction, and again the 12:00 from Abercydabra and the 09:00 from Maenddygap are among those missed out. I'm curious as to whether it's possible to visit all stations in 2 days, still adhere to the lunch requirements, and travel on each of the scheduled services at some point during the itinerary. – Neremanth Oct 2 '17 at 3:24
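The timetable is regular enough to generate programmatically, which makes itineraries like the one above easy to check mechanically. A small Python sketch (helper names are mine; it assumes, per the timetable, stations 12 minutes apart and departures every two hours):

```python
STATIONS = ["Abercydabra", "Brynpen Hill", "Caerlesli", "Dynower",
            "Egynbeicyn", "Fforestffair", "Greenweli", "Hardllyr",
            "Isicyl", "Joneston", "Kington", "Llanllewllwyd", "Maenddygap"]

def departures(station, towards_maenddygap=True):
    """Departure times (minutes past midnight) of the five daily trains."""
    i = STATIONS.index(station)
    # Down trains leave Abercydabra at 08:00; up trains leave Maenddygap at 09:00.
    offset = 12 * i if towards_maenddygap else 12 * (12 - i)
    start = 8 * 60 if towards_maenddygap else 9 * 60
    return [start + 120 * k + offset for k in range(5)]

def hhmm(m):
    return f"{m // 60:02d}:{m % 60:02d}"

# e.g. the down trains call at Kington at 10:00, 12:00, ..., 18:00:
print([hhmm(t) for t in departures("Kington")])
```

Checking a leg of the Day 1 itinerary, `departures("Abercydabra", towards_maenddygap=False)[-1]` gives 19:24, matching the final arrival time above.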
https://practice.geeksforgeeks.org/problems/primes-sum5827/1/
# Primes sum

Easy | Accuracy: 42.91% | Submissions: 2608 | Points: 2

Given a number N, find whether it can be expressed as the sum of two prime numbers.

Example 1:
Input: N = 34
Output: "Yes"
Explanation: 34 can be expressed as the sum of two prime numbers.

Example 2:
Input: N = 23
Output: "No"
Explanation: 23 cannot be expressed as the sum of two prime numbers.

You don't need to read input or print anything. Complete the function isSumOfTwo() which takes N as an input parameter and returns "Yes" if N can be expressed as the sum of two prime numbers, else returns "No".

Expected Time Complexity: O(N*sqrt(N))
Expected Auxiliary Space: O(1)

Constraints: 1 <= N <= 10^4
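One straightforward sketch that meets the stated O(N·sqrt(N)) bound is trial-division primality testing inside a single loop over candidate first addends (this is one possible implementation, not the site's reference solution):

```python
def isSumOfTwo(N):
    """Return "Yes" if N is a sum of two primes, else "No"."""
    def is_prime(k):
        # Trial division up to sqrt(k): O(sqrt(k)) per call.
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True

    # Try each first addend a; the loop runs O(N) times.
    for a in range(2, N // 2 + 1):
        if is_prime(a) and is_prime(N - a):
            return "Yes"
    return "No"

print(isSumOfTwo(34))  # Yes (e.g. 3 + 31)
print(isSumOfTwo(23))  # No  (odd N needs 2 + 21, and 21 is not prime)
```

Within the constraint N <= 10^4 this runs comfortably; a sieve would be faster but is not required by the stated bound.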
https://ftp.aimsciences.org/article/doi/10.3934/dcdsb.2021213
Article Contents

# Optimization of a control law to synchronize first-order dynamical systems on Riemannian manifolds by a transverse component

• * Corresponding author: Simone Fiori

• The present paper builds on the previous contribution by the second author, S. Fiori, Synchronization of first-order autonomous oscillators on Riemannian manifolds, Discrete and Continuous Dynamical Systems – Series B, Vol. 24, No. 4, pp. 1725 – 1741, April 2019.

The aim of the present paper is to optimize a previously-developed control law to achieve synchronization of first-order non-linear oscillators whose state evolves on a Riemannian manifold. The optimization of such a control law has been achieved by introducing a transverse control field, which guarantees reduced control effort without affecting the synchronization speed of the oscillators. The developed non-linear control theory has been analyzed from a theoretical point of view as well as through a comprehensive series of numerical experiments.

Mathematics Subject Classification: Primary: 53B50, 53Z30.

Figure 1.  Simulation of the evolution of the master system (in green color), controlled systems $\Sigma_L$ (26) (in red color) and $\Sigma_G$ (27) (in blue color) on the sphere $\mathbb{S}^2$, displayed in terms of state components, together with the values taken by the Lyapunov function (7) during evolution (in blue color for $\Sigma_G$ and in red color for $\Sigma_L$)

Figure 2.  Simulation of the evolution of the controlled systems $\Sigma_L$ (26) (in red color) and $\Sigma_G$ (27) (in blue color) on the sphere $\mathbb{S}^2$, displayed in terms of transverse control field components, together with the values taken by the control effort during evolution

Figure 3.
Synchronization of a master/slave pair oscillators on the sphere $\mathbb{S}^7$ illustrated in terms of state components as well as kinetic energy (in green color for the master oscillator, red color for the system $\Sigma_L$ and blue color for the system $\Sigma_G$) Figure 4.  Synchronization of a master/slave pair oscillators on the sphere $\mathbb{S}^7$ – with and without the transverse field $\tau_G$ – illustrated in terms of control efforts and Lypunov function values (in red color for the system without transverse component and blue color for the system with transverse component). The left-bottom panes shows the course of the difference $\|u\|_{z_s}^2-\|u+\tau_G\|_{z_s}^2$ which takes non-negative values Figure 5.  Synchronization of a master/slave pair oscillators on the sphere $\mathbb{S}^7$ illustrated in terms of transverse field components as well as control efforts and Lypunov function values (in red color for the system $\Sigma_L$ and blue color for the system $\Sigma_G$) Figure 6.  Synchronization of two master/slave oscillators on $\mathbb{SO}(3)$ by the control field (4). In the top panel, the evolution of the squared Riemannian distance $d^2(z^s, z^m)$ is represented versus time. In the bottom panel, the evolution of the squared control effort related to the control law $u$ is represented over times Figure 7.  Synchronization of two master/slave oscillators on $\mathbb{SO}(3)$ by the control field $\tilde{u}$ with (48) as transverse control field. In the top panel, the evolution of the squared Riemannian distance $d^2(z^s, z^m)$ is represented versus time. In the bottom panel, the evolution of the squared control effort associated to the control field $\tilde{u}$ is represented Figure 8.  Synchronization of two master/slave oscillators on $\mathbb{SO}(3)$: Comparison of the squared control effort resulting from the application of the control laws $\tilde{u}(t)$ (in blue color) and $u(t)$ (in red color) • [1] R. Albert and A.-L. 
Barabási, Statistical mechanics of complex networks, Reviews of Modern Physics, 74 (2002), 47-97.  doi: 10.1103/RevModPhys.74.47. [2] A. Arenas, A. Díaz-Guilera, J. Kurths, Y. Moreno and C. Zhou, Synchronization in complex networks, Physics Reports, 469 (2008), 93-153.  doi: 10.1016/j.physrep.2008.09.002. [3] Y. M. Baek, Y. Kozuka, N. Sugita, A. Morita, S. Sora, R. Mochizuki and M. Mitsuishi, Highly precise master-slave robot system for super micro surgery, in Proceedings of the 2010 IEEE International Conference on Biomedical Robotics and Biomechatronics, 2010,740–745. doi: 10.1109/BIOROB.2010.5625946. [4] S. Boccaletti, J. Kurths, G. Osipov, D. L. Valladares and C. S. Zhou, The synchronization of chaotic systems, Physics Reports, 366 (2002), 1-101.  doi: 10.1016/S0370-1573(02)00137-0. [5] A. K. Bondhus, K. Y. Pettersen and J. T. Gravdahl, Leader/follower synchronization of satellite attitude without angular velocity measurements, in Proceedings of the 44th IEEE Conference on Decision and Control, 2005, 7270–7277. [6] A. A. Castrejón-Pita and P. L. Read, Synchronization in a pair of thermally coupled rotating baroclinic annuli: Understanding atmospheric teleconnections in the laboratory, Physical Review Letters, 104 (2010), 204501. doi: 10.1103/PhysRevLett.104.204501. [7] I. Chueshov, P. E. Kloeden and M. Yang, Synchronization in coupled stochastic sine-Gordon wave model, Discrete & Continuous Dynamical Systems - B, 21 (2016), 2969-2990.  doi: 10.3934/dcdsb.2016082. [8] D. R. Creveling, P. E. Gill and H. D. I. Abarbanel, State and parameter estimation in nonlinear systems as an optimal tracking problem, Physics Letters A, 372 (2008), 2640-2644.  doi: 10.1016/j.physleta.2007.12.051. [9] K. M. Cuomo, A. V. Oppenheim and S. H. Strogatz, Synchronization of Lorenz-based chaotic circuits with applications to communications, IEEE Transactions on Circuits and Systems Ⅱ: Analog and Digital Signal Processing, 40 (1993), 626-633.  doi: 10.1109/82.246163. [10] K. 
Ding and Q.-L. Han, Master-slave synchronization of nonautonomous chaotic systems and its application to rotating pendulums, International Journal of Bifurcation and Chaos, 22 (2012), 1250147. doi: 10.1142/S0218127412501477. [11] F. Dörfler, M. Chertkov and F. Bullo, Synchronization in complex oscillator networks and smart grids, Proceedings of the National Academy of Sciences, 110 (2013), 2005-2010.  doi: 10.1073/pnas.1212134110. [12] M. D. Duong, C. Teraoka, T. Imamura, T. Miyoshi and K. Terashima, Master-slave system with teleoperation for rehabilitation, IFAC Proceedings Volumes, 38 (2005), 48-53.  doi: 10.3182/20050703-6-CZ-1902.01410. [13] R. Femat and G. Solís-Perales, On the chaos synchronization phenomena, Physics Letters A, 262 (1999), 50-60.  doi: 10.1016/S0375-9601(99)00667-2. [14] S. Fiori, Non-delayed synchronization of non-autonomous dynamical systems on Riemannian manifolds and its applications, Nonlinear Dynamics, 94 (2018), 3077-3100.  doi: 10.1007/s11071-018-4546-x. [15] S. Fiori, Synchronization of first-order autonomous oscillators on Riemannian manifolds, Discrete & Continuous Dynamical Systems - B, 24 (2019), 1725-1741.  doi: 10.3934/dcdsb.2018233. [16] S. Fiori, I. Cervigni, M. Ippoliti and C. Menotta, Extension of a PID control theory to Lie groups applied to synchronising satellites and drones, IET Control Theory & Applications, 14 (2020), 2628-2642.  doi: 10.1049/iet-cta.2020.0226. [17] I. Fischer, Y. Liu and P. Davis, Synchronization of chaotic semiconductor laser dynamics on subnanosecond time scales and its potential for chaos communication, Physical Review A, 62 (2000), 011801. doi: 10.1103/PhysRevA.62.011801. [18] J. M. Gonzalez-Miranda, Synchronization and Control of Chaos: An Introduction for Scientists and Engineers, World Scientific Publishing Company, 2004. doi: 10.1142/p352. [19] S. Guo, S. Zhang, Z. Song and M. 
Pang, Development of a human upper limb-like robot for master-slave rehabilitation, in Proceedings of the 2013 ICME International Conference on Complex Medical Engineering, 2013,693–696. [20] Z. Guo, S. Gong, S. Yang and T. Huang, Global exponential synchronization of multiple coupled inertial memristive neural networks with time-varying delay via nonlinear coupling, Neural Networks, 108 (2018), 260-271.  doi: 10.1016/j.neunet.2018.08.020. [21] F. C. Hoppensteadt and E. M. Izhikevich, Synchronization of MEMS resonators and mechanical neurocomputing, IEEE Transactions on Circuits and Systems Ⅰ: Fundamental Theory and Applications, 48 (2001), 133-138.  doi: 10.1109/81.904877. [22] A.-S. Hu and S. D. Servetto, Asymptotically optimal time synchronization in dense sensor networks, in Proceedings of the 2nd ACM International Conference on Wireless Sensor Networks and Applications, WSNA'03, New York, NY, USA, 2003, Association for Computing Machinery, 1–10. [23] A. Khan and S. Kumar, Measure of chaos and adaptive synchronization of chaotic satellite systems, International Journal of Dynamics and Control, 7 (2019), 536-546.  doi: 10.1007/s40435-018-0481-4. [24] J.-S. Li, I. Dasanayake and J. Ruths, Control and synchronization of neuron ensembles, IEEE Transactions on Automatic Control, 58 (2013), 1919-1930.  doi: 10.1109/TAC.2013.2250112. [25] X. Li and R. Rakkiyappan, Impulsive controller design for exponential synchronization of chaotic neural networks with mixed delays, Communications in Nonlinear Science and Numerical Simulation, 18 (2013), 1515-1523.  doi: 10.1016/j.cnsns.2012.08.032. [26] G. M. Mahmoud and E. E. Mahmoud, Complete synchronization of chaotic complex nonlinear systems with uncertain parameters, Nonlinear Dynamics, 62 (2010), 875-882.  doi: 10.1007/s11071-010-9770-y. [27] J. E. Marsden and T. S. Ratiu, Manifolds, Tensor Analysis, and Applications, Springer New York, 2012. doi: 10.1007/978-1-4614-1806-1_59. [28] M. Mitsuishi, A. Morita, N. Sugita, S. 
Sora, R. Mochizuki, K. Tanimoto, Y. M. Baek, H. Takahashi and K. Harada, Master-slave robotic platform and its feasibility study for micro-neurosurgery, The International Journal of Medical Robotics and Computer Assisted Surgery, 9 (2013), 180-189.  doi: 10.1002/rcs.1434. [29] T. E. Murphy, A. B. Cohen, B. Ravoori, K. R. B. Schmitt, A. V. Setty, F. Sorrentino, C. R. S. Williams, E. Ott and R. Roy, Complex dynamics and synchronization of delayed-feedback nonlinear oscillators, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 368 (2010), 343-366.  doi: 10.1098/rsta.2009.0225. [30] D. Sadaoui, A. Boukabou, N. Merabtine and M. Benslama, Predictive synchronization of chaotic satellites systems, Expert Systems with Applications, 38 (2011), 9041-9045.  doi: 10.1016/j.eswa.2011.01.117. [31] A. Sarlette and R. Sepulchre, Consensus optimization on manifolds, SIAM Journal on Control and Optimization, 48 (2009), 56-76.  doi: 10.1137/060673400. [32] S. J. Schiff, K. Jerger, D. H. Duong, T. Chang, M. L. Spano and W. L. Ditto, Controlling chaos in the brain, Nature, 370 (1994), 615-620.  doi: 10.1038/370615a0. [33] F. Sorrentino and E. Ott, Using synchronism of chaos for adaptive learning of time-evolving network topology, Physical Review E, 79 (2009), 016201. doi: 10.1103/PhysRevE.79.016201. [34] S. H. Strogatz, Exploring complex networks, Nature, 410 (2001), 268-276.  doi: 10.1038/35065725. [35] P. J. Uhlhaas and W. Singer, Neural synchrony in brain disorders: Relevance for cognitive dysfunctions and pathophysiology, Neuron, 52 (2006), 155-168.  doi: 10.1016/j.neuron.2006.09.020. [36] A. Vaccaro, V. Loia, G. Formato, P. Wall and V. Terzija, A self-organizing architecture for decentralized smart microgrids synchronization, control, and monitoring, IEEE Transactions on Industrial Informatics, 11 (2015), 289-298.  doi: 10.1109/TII.2014.2342876. [37] N. Wanichnukhrox, T. Maneewarn and S. 
Songschon, Master-slave control for walking rehabilitation robot, in Proceedings of the 6th International Conference on Rehabilitation Engineering & Assistive Technology, i-CREATe'12, Midview City, SGP, 2012, Singapore Therapeutic, Assistive & Rehabilitative Technologies (START) Centre. [38] C. W. Wu, Synchronization in Complex Networks of Nonlinear Dynamical Systems, World Scientific Publishing Company, 2007. [39] X. Wu, C. Xu and J. Feng, Complex projective synchronization in drive-response stochastic coupled networks with complex-variable systems and coupling time delays, Communications in Nonlinear Science and Numerical Simulation, 20 (2015), 1004-1014.  doi: 10.1016/j.cnsns.2014.07.003. [40] J.-P. Yeh and K.-L. Wu, A simple method to synchronize chaotic systems and its application to secure communications, Mathematical and Computer Modelling, 47 (2008), 894-902.  doi: 10.1016/j.mcm.2007.06.021.
http://www.self.gutenberg.org/articles/eng/Natural_logarithm
# Natural logarithm

Article Id: WHEBN0000021476. Title: Natural logarithm. Author: World Heritage Encyclopedia. Language: English. Publisher: World Heritage Encyclopedia.

Graph of the natural logarithm function. The function slowly grows to positive infinity as x increases and slowly goes to negative infinity as x approaches 0 ("slowly" as compared to any power law of x); the y-axis is an asymptote.

The natural logarithm of a number is its logarithm to the base e, where e is an irrational and transcendental constant approximately equal to 2.718281828459. The natural logarithm of x is generally written as ln x, log_e x, or sometimes, if the base e is implicit, simply log x.[1] Parentheses are sometimes added for clarity, giving ln(x), log_e(x) or log(x). This is done in particular when the argument to the logarithm is not a single symbol, to prevent ambiguity.

The natural logarithm of x is the power to which e would have to be raised to equal x. For example, ln(7.5) is 2.0149..., because e^2.0149... = 7.5. The natural log of e itself, ln(e), is 1, because e^1 = e, while the natural logarithm of 1, ln(1), is 0, since e^0 = 1.

The natural logarithm can be defined for any positive real number a as the area under the curve y = 1/x from 1 to a (the area being taken as negative when a < 1). The simplicity of this definition, which is matched in many other formulas involving the natural logarithm, leads to the term "natural". The definition of the natural logarithm can be extended to give logarithm values for negative numbers and for all non-zero complex numbers, although this leads to a multi-valued function: see Complex logarithm.
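The defining values quoted above are easy to check with a language's built-in natural logarithm; a quick sketch in Python (the `math` module's `log` is base e by default):

```python
import math

# ln(7.5) is the power to which e must be raised to obtain 7.5
x = math.log(7.5)          # natural log (base e by default)
print(x)                   # ≈ 2.0149...
print(math.e ** x)         # recovers 7.5 (up to rounding)

# Special values mentioned above
print(math.log(math.e))    # 1.0, since e^1 = e
print(math.log(1))         # 0.0, since e^0 = 1
```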
The natural logarithm function, if considered as a real-valued function of a real variable, is the inverse function of the exponential function, leading to the identities:

e^{\ln(x)} = x \qquad \mbox{if } x > 0

\ln(e^x) = x.

Like all logarithms, the natural logarithm maps multiplication into addition:

\ln(xy) = \ln(x) + \ln(y).

Thus, the logarithm function is a group isomorphism from positive real numbers under multiplication to the group of real numbers under addition, represented as a function:

\ln \colon \mathbb{R}^+ \to \mathbb{R}.

Logarithms can be defined to any positive base other than 1, not only e. However, logarithms in other bases differ only by a constant multiplier from the natural logarithm, and are usually defined in terms of the latter. For instance, the binary logarithm is the natural logarithm divided by ln(2), the natural logarithm of 2. Logarithms are useful for solving equations in which the unknown appears as the exponent of some other quantity. For example, logarithms are used to solve for the half-life, decay constant, or unknown time in exponential decay problems. They are important in many branches of mathematics and the sciences and are used in finance to solve problems involving compound interest.

Natural logarithm
• Representation: \ln x
• Inverse: e^x
• Derivative: \frac{1}{x}
• Indefinite integral: x\ln x - x + C

## History

The concept of the natural logarithm was worked out by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa before 1649.[2] Their work involved quadrature of the hyperbola xy = 1 by determination of the area of hyperbolic sectors. Their solution generated the requisite "hyperbolic logarithm" function having properties now associated with the natural logarithm.
An early mention of the natural logarithm was by Nicholas Mercator in his work Logarithmotechnia published in 1668,[3] although the mathematics teacher John Speidell had already in 1619 compiled a table of what in fact were effectively natural logarithms.[4] It is also sometimes referred to as the Napierian logarithm, named after John Napier, although Napier's original "logarithms" (from which Speidell's numbers were derived) were slightly different (see Logarithm: from Napier to Euler).

## Notational conventions

The notations "ln x" and "log_e x" both refer unambiguously to the natural logarithm of x. "log x" without an explicit base may also refer to the natural logarithm. This usage is common in mathematics and some scientific contexts as well as in many programming languages.[5] In some other contexts, however, "log x" can be used to denote the common (base 10) logarithm.

### Other notational use and conventions in history

The notations l. and l were in use at least since the 1730s,[6] and until at least the 1840s,[7] then log.[8] or log,[9] at least since the 1790s. Finally, in the twentieth century, the notations Log[10] and logh[11] are attested.

## Origin of the term natural logarithm

The graph of the natural logarithm function shown earlier on the right side of the page enables one to glean some of the basic characteristics that logarithms to any base have in common. Chief among them are: the logarithm of the number one is zero; and the logarithm of zero is negative infinity. What makes natural logarithms unique is to be found at the single point where all logarithms are zero, namely the logarithm of the number one. At that specific point the "slope" of the curve of the graph of the natural logarithm is also precisely one. Logarithms to a higher base than e, such as those to the base 10, exhibit a slope at that point less than one, while logarithms to a lower base than e, such as those to the base 2, exhibit a slope at that point greater than one.
While the methods for computing the "value" of e are fascinating from various mathematical perspectives, they all can be thought of as resulting from the pursuit of this condition. Another way of conceptualizing this is to realize that, for any numeric value close to the number one, the natural logarithm can be computed by subtracting the number one from the numeric value. For example, the natural logarithm of 1.01 is 0.01 to an accuracy better than 5 parts per thousand. With similar accuracy one can assert that the natural logarithm of 0.99 is minus 0.01. The accuracy of this concept increases as one approaches the number one ever more closely, and reaches completeness of accuracy precisely there.

To the same extent that the number one itself is a number common to all systems of counting, so also the natural logarithm is independent of all systems of counting. In the English language the term adopted to encapsulate this concept is the word "natural". Initially, it might seem that since the common numbering system is base 10, this base would be more "natural" than base e. But mathematically, the number 10 is not particularly significant. Its use culturally—as the basis for many societies' numbering systems—likely arises from humans' typical number of fingers.[12] Other cultures have based their counting systems on such choices as 5, 8, 12, 20, and 60.[13][14][15]

log_e is a "natural" log because it automatically springs from, and appears so often in, mathematics. For example, consider the problem of differentiating a logarithmic function:[16]

\frac{d}{dx}\log_b(x) = \frac{d}{dx} \left( \frac{1}{\ln(b)} \ln{x} \right) = \frac{1}{\ln(b)} \frac{d}{dx} \ln{x} = \frac{1}{x\ln(b)}.

If the base b equals e, then the derivative is simply 1/x, and at x = 1 this derivative equals 1. Another sense in which the base-e logarithm is the most natural is that it can be defined quite easily in terms of a simple integral or Taylor series and this is not true of other logarithms.
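The derivative formula for log_b above can be sanity-checked with a finite difference; a small sketch (the base and evaluation points are arbitrary choices):

```python
import math

def log_deriv_numeric(b, x, h=1e-6):
    """Central-difference estimate of d/dx log_b(x)."""
    return (math.log(x + h, b) - math.log(x - h, b)) / (2 * h)

b, x = 10, 1.0
numeric = log_deriv_numeric(b, x)
analytic = 1 / (x * math.log(b))   # the formula derived above
print(numeric, analytic)           # both ≈ 0.4343 = 1/ln(10)
```

With b = e the analytic value at x = 1 is exactly 1, matching the "slope one" observation in the text.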
Further senses of this naturalness make no use of calculus. As an example, there are a number of simple series involving the natural logarithm. Pietro Mengoli and Nicholas Mercator called it logarithmus naturalis a few decades before Newton and Leibniz developed calculus.[17] ## Definitions ln(a) illustrated as the area under the curve f(x) = 1/x from 1 to a. If a is less than 1, the area from a to 1 is counted as negative. The area under the hyperbola satisfies the logarithm rule. Here A(s,t) denotes the area under the hyperbola between s and t. Formally, ln(a) may be defined as the area under the hyperbola 1/x. This is the integral, \ln(a)=\int_1^a \frac{1}{x}\,dx. This function is a logarithm because it satisfies the fundamental property of a logarithm: \ln(ab)=\ln(a)+\ln(b). \,\! This can be demonstrated by splitting the integral that defines ln(ab) into two parts and then making the variable substitution x = ta in the second part, as follows: \begin{align} \ln(ab)= \int_1^{ab}\frac{1}{x} \; dx &=\int_1^a \frac{1}{x} \; dx \; + \int_a^{ab} \frac{1}{x} \; dx\\ &=\int_1^{a} \frac{1}{x} \; dx \; + \int_1^{b} \frac{1}{at} \; d(at)\\ &=\int_1^{a} \frac{1}{x} \; dx \; + \int_1^{b} \frac{1}{t} \; dt\\ &= \ln (a) + \ln (b). \end{align} In elementary terms, this is simply scaling by 1/a in the horizontal direction and by a in the vertical direction. Area does not change under this transformation, but the region between a and ab is reconfigured. Because the function a/(ax) is equal to the function 1/x, the resulting area is precisely ln(b). The number e is defined as the unique real number a such that ln(a) = 1. Alternatively, if the exponential function has been defined first, say by using an infinite series, the natural logarithm may be defined as its inverse function, i.e., ln is that function such that exp(ln(x)) = x. 
Since the range of the exponential function on real arguments is all positive real numbers and since the exponential function is strictly increasing, this is well-defined for all positive x.

## Properties

• \ln(1) = 0
• \ln(e) = 1
• \ln(xy) = \ln(x) + \ln(y), \quad \text{for}\quad x > 0, y > 0
• \lim_{x \to 0} \frac{\ln(1+x)}{x} = 1
• \ln{( 1+x^\alpha )} \leq \alpha x \quad{\rm for}\quad x \ge 0, \alpha \ge 1

## Derivative, Taylor series

The Taylor polynomials for ln(1 + x) only provide accurate approximations in the range −1 < x ≤ 1. Note that, for x > 1, the Taylor polynomials of higher degree are worse approximations.

The derivative of the natural logarithm is given by

\frac{d}{dx} \ln(x) = \frac{1}{x}.

Proof:

\frac{d}{dx} \ln (x) = \lim_{h \to 0} \frac{\ln(x+h)-\ln(x)}{h} = \lim_{h \to 0} \frac{1}{h}\ln\left(\frac{x+h}{x}\right) = \lim_{h \to 0} \ln\left( 1 + \frac{h}{x} \right)^{\frac{1}{h}}.

Let u = \frac{h}{x}, so that h = ux and \frac{1}{h} = \frac{1}{ux}. Then

\frac{d}{dx} \ln (x) = \lim_{u \to 0} \ln\left[(1+u)^{\frac{1}{u}} \right]^{\frac{1}{x}} = \frac{1}{x} \lim_{u \to 0} \ln(1+u)^{\frac{1}{u}}.

Let n = \frac{1}{u}, so that u = \frac{1}{n}. Then

\frac{d}{dx} \ln(x) = \frac{1}{x} \lim_{n \to \infty} \ln \left( 1 + \frac{1}{n} \right)^n = \frac{1}{x} \ln \left[ \lim_{n \to \infty}\left( 1 + \frac{1}{n} \right)^n \right] = \frac{1}{x} \ln e = \frac{1}{x}.

This leads to the Taylor series for ln(1 + x) around 0, also known as the Mercator series:

\ln(1+x) = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} x^n = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots

(Leonhard Euler[18] nevertheless boldly applied this series to x = −1, in order to show that the harmonic series equals the (natural) logarithm of 1/(1−1), that is, the logarithm of infinity. Nowadays, more formally but perhaps less vividly, we prove that the harmonic series truncated at N is close to the logarithm of N, when N is large.) At right is a picture of ln(1 + x) and some of its Taylor polynomials around 0.
These approximations converge to the function only in the region −1 < x ≤ 1; outside of this region the higher-degree Taylor polynomials are worse approximations for the function. Substituting x − 1 for x, we obtain an alternative form for ln(x) itself, namely \ln(x)=\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} (x-1) ^ n \ln(x)= (x - 1) - \frac{(x-1) ^ 2}{2} + \frac{(x-1)^3}{3} - \frac{(x-1)^4}{4} \cdots By using the Euler transform on the Mercator series, one obtains the following, which is valid for any x with absolute value greater than 1: \ln{x \over {x-1}} = \sum_{n=1}^\infty {1 \over {n x^n}} = {1 \over x}+ {1 \over {2x^2}} + {1 \over {3x^3}} + \cdots \,. This series is similar to a BBP-type formula. Also note that x \over {x-1} is its own inverse function, so to yield the natural logarithm of a certain number y, simply put in y \over {y-1} for x. \ln{x} = \sum_{n=1}^\infty {1 \over {n}} \left( {x - 1 \over x} \right)^n = \left( {x - 1 \over x} \right) + {1 \over 2} \left( {x - 1 \over x} \right)^2 + {1 \over 3} \left( {x - 1 \over x} \right)^3 + \cdots \, {\rm for }\quad \operatorname{Re} (x) \geq \frac12 \,. ## The natural logarithm in integration The natural logarithm allows simple integration of functions of the form g(x) = f '(x)/f(x): an antiderivative of g(x) is given by ln(|f(x)|). This is the case because of the chain rule and the following fact: \ {d \over dx}\ln \left| x \right| = {1 \over x}. In other words, \int { 1 \over x} dx = \ln|x| + C and \int { \frac{f'(x)}{f(x)}\, dx} = \ln |f(x)| + C. Here is an example in the case of g(x) = tan(x): \int \tan (x) \,dx = \int {\sin (x) \over \cos (x)} \,dx \int \tan (x) \,dx = \int {-{d \over dx} \cos (x) \over {\cos (x)}} \,dx. Letting f(x) = cos(x) and f'(x)= – sin(x): \int \tan (x) \,dx = -\ln{\left| \cos (x) \right|} + C \int \tan (x) \,dx = \ln{\left| \sec (x) \right|} + C where C is an arbitrary constant of integration. 
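The antiderivative of tan just obtained can be cross-checked numerically; a minimal sketch using a composite midpoint rule (the interval [0, 1] and step count are arbitrary choices):

```python
import math

def integrate(f, a, b, n=100_000):
    """Composite midpoint rule for a definite integral."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

a, b = 0.0, 1.0
numeric = integrate(math.tan, a, b)
# ln|sec x| evaluated at the bounds, per the antiderivative above
exact = math.log(1 / math.cos(b)) - math.log(1 / math.cos(a))
print(numeric, exact)   # both ≈ 0.61562647
```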
The natural logarithm can be integrated using integration by parts: \int \ln (x) \,dx = x \ln (x) - x + C. Let: u = \ln(x) \Rightarrow du = \frac{dx}{x} dv = dx \Rightarrow v = x\, then: \begin{align} \int \ln (x) \,dx & = x \ln (x) - \int \frac{x}{x} \,dx \\ & = x \ln (x) - \int 1 \,dx \\ & = x \ln (x) - x + C \end{align} ## Numerical value To calculate the numerical value of the natural logarithm of a number, the Taylor series expansion can be rewritten as: \ln(1+x)= x \,\left( \frac{1}{1} - x\,\left(\frac{1}{2} - x \,\left(\frac{1}{3} - x \,\left(\frac{1}{4} - x \,\left(\frac{1}{5}- \cdots \right)\right)\right)\right)\right) \quad{\rm for}\quad \left|x\right|<1.\,\! To obtain a better rate of convergence, the following identity can be used. \begin{align} \ln(x) = \ln\left(\frac{1+y}{1-y}\right) &= 2\,y\, \left( \frac{1}{1} + \frac{1}{3} y^{2} + \frac{1}{5} y^{4} + \frac{1}{7} y^{6} + \frac{1}{9} y^{8} + \cdots \right)\\ &= 2\,y\, \left( \frac{1}{1} + y^{2} \, \left( \frac{1}{3} + y^{2} \, \left( \frac{1}{5} + y^{2} \, \left( \frac{1}{7} + y^{2} \, \left( \frac{1}{9} + \cdots \right) \right) \right)\right) \right) \end{align} provided that y = (x−1)/(x+1) and Re(x) ≥ 0 but x ≠ 0. For ln(x) where x > 1, the closer the value of x is to 1, the faster the rate of convergence. The identities associated with the logarithm can be leveraged to exploit this: \begin{align} \ln(123.456) &= \ln(1.23456 \times 10^2)\\ &= \ln(1.23456) + \ln(10^2)\\ &= \ln(1.23456) + 2 \times \ln(10)\\ &\approx \ln(1.23456) + 2 \times 2.3025851. \end{align} Such techniques were used before calculators, by referring to numerical tables and performing manipulations such as those above. ### Natural logarithm of 10 The natural logarithm of 10, which has the decimal expansion 2.30258509...,[20] plays a role for example in the computation of natural logarithms of numbers represented in scientific notation, as a mantissa multiplied by a power of 10: \ln (a\times 10^n) = \ln a + n \ln 10. 
This means that one can effectively calculate the logarithms of numbers with very large or very small magnitude using the logarithms of a relatively small set of decimals in the range [1,10).

### High precision

To compute the natural logarithm with many digits of precision, the Taylor series approach is not efficient since its convergence is slow. If x is near 1, an alternative is to use Newton's method to invert the exponential function, whose series converges more quickly. The iteration simplifies to

y_{n+1} = y_n + 2 \cdot \frac{ x - \exp ( y_n ) }{ x + \exp ( y_n ) }

which has cubic convergence to ln(x).

Another alternative for extremely high precision calculation is the formula[21][22]

\ln x \approx \frac{\pi}{2 M(1,4/s)} - m \ln 2,

where M denotes the arithmetic-geometric mean of 1 and 4/s, and s = x \,2^m > 2^{p/2}, with m chosen so that p bits of precision is attained. (For most purposes, the value of 8 for m is sufficient.) In fact, if this method is used, Newton inversion of the natural logarithm may conversely be used to calculate the exponential function efficiently. (The constants ln 2 and π can be pre-computed to the desired precision using any of several known quickly converging series.)

### Computational complexity

The computational complexity of computing the natural logarithm (using the arithmetic-geometric mean) is O(M(n) ln n). Here n is the number of digits of precision at which the natural logarithm is to be evaluated and M(n) is the computational complexity of multiplying two n-digit numbers.
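The inversion iteration above is straightforward to try in code; a minimal double-precision sketch (starting guesses and iteration count chosen arbitrarily; this illustrates the iteration, not an arbitrary-precision implementation):

```python
import math

def ln_newton(x, y0=1.0, iterations=10):
    """Invert exp via y <- y + 2*(x - e^y)/(x + e^y),
    which converges cubically to ln(x)."""
    y = y0
    for _ in range(iterations):
        e = math.exp(y)
        y = y + 2 * (x - e) / (x + e)
    return y

print(ln_newton(2.0))         # ≈ 0.69314718... = ln 2
print(ln_newton(10.0, y0=2))  # ≈ 2.30258509... = ln 10
```

The correction term is bounded by 2 in magnitude, so the iteration is well-behaved even for rough starting guesses.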
## Continued fractions

While no simple continued fractions are available, several generalized continued fractions are, including:

\ln (1+x)=\frac{x^1}{1}-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\frac{x^5}{5}-\cdots= \cfrac{x}{1-0x+\cfrac{1^2x}{2-1x+\cfrac{2^2x}{3-2x+\cfrac{3^2x}{4-3x+\cfrac{4^2x}{5-4x+\ddots}}}}}

\ln \left( 1+\frac{x}{y} \right) = \cfrac{x} {y+\cfrac{1x} {2+\cfrac{1x} {3y+\cfrac{2x} {2+\cfrac{2x} {5y+\cfrac{3x} {2+\ddots}}}}}} = \cfrac{2x} {2y+x-\cfrac{(1x)^2} {3(2y+x)-\cfrac{(2x)^2} {5(2y+x)-\cfrac{(3x)^2} {7(2y+x)-\ddots}}}}

These continued fractions—particularly the last—converge rapidly for values close to 1. However, the natural logarithms of much larger numbers can easily be computed by repeatedly adding those of smaller numbers, with similarly rapid convergence. For example, since 2 = 1.25^3 × 1.024, the natural logarithm of 2 can be computed as:

\ln 2 = 3 \ln \left( 1+\frac{1}{4} \right) + \ln \left( 1+\frac{3}{125} \right) = \cfrac{6} {9-\cfrac{1^2} {27-\cfrac{2^2} {45-\cfrac{3^2} {63-\ddots}}}} + \cfrac{6} {253-\cfrac{3^2} {759-\cfrac{6^2} {1265-\cfrac{9^2} {1771-\ddots}}}}.

Furthermore, since 10 = 1.25^{10} × 1.024^{3}, even the natural logarithm of 10 similarly can be computed as:

\ln 10 = 10 \ln \left( 1+\frac{1}{4} \right) + 3\ln \left( 1+\frac{3}{125} \right) = \cfrac{20} {9-\cfrac{1^2} {27-\cfrac{2^2} {45-\cfrac{3^2} {63-\ddots}}}} + \cfrac{18} {253-\cfrac{3^2} {759-\cfrac{6^2} {1265-\cfrac{9^2} {1771-\ddots}}}}.

## Complex logarithms

The exponential function can be extended to a function which gives a complex number as e^x for any arbitrary complex number x; simply use the infinite series with x complex. This exponential function can be inverted to form a complex logarithm that exhibits most of the properties of the ordinary logarithm. There are two difficulties involved: no x has e^x = 0; and it turns out that e^{2\pi i} = 1 = e^0.
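The last continued fraction above can be evaluated from the bottom up at a fixed truncation depth; a sketch (a depth of 12 is an arbitrary but ample choice for these arguments):

```python
import math

def cf_ln1p_ratio(x, y, depth=12):
    """Evaluate ln(1 + x/y) via the continued fraction
    2x / (2y+x - (1x)^2/(3(2y+x) - (2x)^2/(5(2y+x) - ...)))."""
    b = 2 * y + x
    val = (2 * depth - 1) * b            # innermost partial denominator
    for k in range(depth - 1, 0, -1):
        val = (2 * k - 1) * b - (k * x) ** 2 / val
    return 2 * x / val

# ln 2 = 3 ln(1 + 1/4) + ln(1 + 3/125), since 2 = 1.25^3 * 1.024
ln2 = 3 * cf_ln1p_ratio(1, 4) + cf_ln1p_ratio(3, 125)
print(ln2, math.log(2))   # agree to machine precision
```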
Since the multiplicative property still works for the complex exponential function, e^z = e^{z+2n\pi i}, for all complex z and integers n. So the logarithm cannot be defined for the whole complex plane, and even then it is multi-valued – any complex logarithm can be changed into an "equivalent" logarithm by adding any integer multiple of 2πi at will. The complex logarithm can only be single-valued on the cut plane. For example, ln i = 1/2 πi or 5/2 πi or −3/2 πi, etc.; and although i^4 = 1, 4 log i can be defined as 2πi, or 10πi or −6πi, and so on.

## References

1. ^ , Extract of page 9
2. ^ R. P. Burn (2001) "Alphonse Antonio de Sarasa and Logarithms", Historia Mathematica 28:1 – 17
3. ^
4. ^
5. ^ Including C, C++, SAS, MATLAB, Mathematica, Fortran, and BASIC
6. ^ See for instance Leonhard Euler, Variae observationes circa series infinitas. Commentarii academiae scientarum Petropolitanae 9 (1737), 160–188; quoque in: Opera Omnia, Series Prima, Opera Mathematica, Volumen Quartum Decimum, Teubner, 1925.
7. ^ See for instance Augustin Cauchy, Exercices d'analyse et de physique mathématique, vol.3, p. 380.
8. ^ See for instance Adrien-Marie Legendre, Essai sur la théorie des nombres, A Paris, chez Duprat, libraire pour les mathématiques, quai des Augustins. An VI (1797 or 1798).
9. ^ See for instance Edmund Landau, Handbuch der Lehre von der Verteilung der Primzahlen. Berlin 1909 (second edition by Chelsea, New York 1953).
10. ^ See for instance Nikolai Piskounov, Calcul différentiel et intégral, 5ème édition 1972, Editions Mir, Moscou III.10 p.91.
12. ^
13. ^
14. ^
15. ^
16. ^ , Section 4.5, page 331
17. ^
18. ^ Leonhard Euler, Introductio in Analysin Infinitorum. Tomus Primus. Bousquet, Lausanne 1748. Exemplum 1, p. 228; quoque in: Opera Omnia, Series Prima, Opera Mathematica, Volumen Octavum, Teubner 1922
19. ^ "Logarithmic Expansions" at Math2.org
20. ^
21. ^
22. ^
https://studysoup.com/blog/uncategorized/applications-integrals-mathematics/
# Applications of Integrals in Mathematics

Applications of integrals can be tricky: they appear in math, science, engineering, economics, and more. If you don't have the right teacher or textbook survival guide, getting lost before an exam is easy. In this guide you're going to learn how integration is applied in various scenarios, the different types of integrals, and some examples.

## Applications of Integrals: What does it mean?

Integrals can be applied in many different areas like math, science, engineering, statistics, and economics. In math, we use them to find:

• The center of mass (centroid) of an area with curved sides
• The area between two curves
• The area under a curve
• The average value of a curve
• Integral transforms like Laplace and Fourier transforms.

When speaking about integrals, there are two different types: definite integrals (also called Riemann integrals) and indefinite integrals. In simple terms, a definite integral has definite limits (defined upper and lower limits/bounds), which can be represented by:

\int_a^b f(x)\,dx

In contrast to definite integrals, an indefinite integral is an integral which has no limits. This can be represented by:

\int f(x)\,dx = F(x) + C

(C is the constant of integration)

## Solved: Example Problem of Applications of Integrals

The given function is a type of parabola; the graph is as shown below. First, take a look at our notes below: we will integrate the given function to get f(x). We know an initial condition, using which we will compute the value of the constant of integration. Once we know the value of the constant of integration, the next step is to find f(4).

1) Knowing all that, let's integrate. To do that, we need to apply the power rule of integration:

\int x^n\,dx = \frac{x^{n+1}}{n+1} + C \quad (n \neq -1)

2) Using the power rule, we obtain the antiderivative. ……………(1)

Thus we got the result of integration.
3) Now we will find the value of the constant of integration, We know that , hence applying this value to equation(1) we get Therefore we found the value of the constant of integration which is (4) Now we will apply this value of integration constant to ## Conclusion This is just one example of many different scenarios. We have to remember in math alone there are numerous applications and in physics even more! Applications of integrals is a topic that takes practice with the right survival guides and experts to help you with solutions– you’ll be on the right track.
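Since the worked example's function and intermediate formulas were images in the original post and did not survive, here is a parallel miniature with an invented function, just to show the same three steps (integrate, solve for the constant, evaluate): suppose $f'(x) = 2x$ with $f(1) = 3$, and we want $f(4)$.

```latex
f(x) = \int 2x\,dx = x^2 + C \qquad \text{(power rule with } n = 1\text{)}

f(1) = 1^2 + C = 3 \;\Rightarrow\; C = 2

f(4) = 4^2 + 2 = 18
```

The same sequence applies to the original problem, whatever its parabola was.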
https://stats.stackexchange.com/questions/379150/2d-maximum-likelihood-fit
# $2D$ Maximum Likelihood Fit

I have read in a couple of places that it is possible to do a $$2D$$ (or $$3D$$) maximum likelihood fit, but I can't seem to understand how this would work. Suppose I'm considering a probability distribution function depending on $$2$$ observable variables, $$x$$ and $$y$$, given a multi-dimensional set of variable parameters, $$\alpha, \beta, \gamma$$, ..., PDF($$x_i,y_i$$|$$\alpha, \beta, \gamma, \ldots$$). I would say that I have a good understanding of the ideas behind a maximum likelihood fit, but for some reason I cannot wrap my head around a multi-dimensional maximum likelihood fit. How do I understand how a $$2D$$ maximum likelihood fit to determine the best estimate of the parameters $$\alpha, \beta, \gamma$$ works?

• Everything about it is the same as in the unidimensional case. What is unclear to you? – Tim Nov 28 '18 at 8:06

$$\hat{\theta} \in \{\arg\max_{\theta \in \Theta}\sum_{i=1}^n \ln f(x_i|\theta)\}$$

Notice that $$\Theta$$ need not be one-dimensional; it can have multiple dimensions.

• Here is an example where the parameters are $\mu$ and $\sigma$. In that example, while the derivative with respect to $\mu$ can be solved independently, we should view the two first-order conditions as a system of equations that we want to satisfy. – Siong Thye Goh Nov 28 '18 at 14:16
• Here, a more complicated example, where the MLE for the parameters of the beta distribution is stated in equations $(2.3)$ and $(2.4)$ and a numerical method is used to find them. – Siong Thye Goh Nov 28 '18 at 14:25
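To make the "same as the 1D case" point concrete, here is a small numerical sketch (invented for illustration, not from the original thread): fit a normal distribution by maximizing the joint log-likelihood over the two-dimensional parameter vector θ = (μ, σ). For the normal distribution the maximizer has a closed form, which we check against nearby parameter pairs.

```python
import numpy as np

# Invented illustration: fit N(mu, sigma^2) by maximizing the
# log-likelihood over the 2D parameter theta = (mu, sigma).
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)

def log_likelihood(mu, sigma, x):
    """Joint log-likelihood: sum over the sample of log N(x_i | mu, sigma^2)."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

# For the normal distribution the 2D maximizer has a closed form:
mu_hat = data.mean()
sigma_hat = data.std()  # the MLE uses the 1/n variance (ddof=0)

# Sanity check: the closed-form optimum beats nearby parameter pairs,
# i.e. we really are at a maximum of the 2D likelihood surface.
best = log_likelihood(mu_hat, sigma_hat, data)
for dmu in (-0.1, 0.1):
    for dsigma in (-0.1, 0.1):
        assert log_likelihood(mu_hat + dmu, sigma_hat + dsigma, data) < best

print(f"mu_hat={mu_hat:.3f}, sigma_hat={sigma_hat:.3f}")
```

With more parameters nothing changes conceptually: Θ just gains dimensions, and in practice one hands the negative log-likelihood to a numerical optimizer instead of solving the first-order conditions by hand.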
http://lambda-the-ultimate.org/node/1826
## SICP Translations

Being a slow news week (Ehud busy, chance for shameless plug, etc...), thought I'd take the opportunity to elevate the translation of SICP to the front page. Chapter 1 is mostly complete for Alice ML/SML-NJ, Oz, Haskell, O'Caml and Scala. Still a long way from done, though portions of Chapters 2 and 3 are there for Alice ML / SML-NJ.

Actually, the more interesting item I came across this week was in a visit to the Scala list. Seems that Martin Odersky has used many examples from SICP for a course on FP - specifically the Scala by Examples document. I knew I should've made Scala a higher priority - at minimum I can borrow ideas and learn Scala along the way.

## Comment viewing options

### Good job - long live diversity!

It's very good to see efforts like SICP translation and LiteratePrograms. Having equivalent code rendered in different programming languages will help people annoyed by language proliferation see through the syntactic differences. It can be hoped that it proves to be a useful step to understanding and teaching the essential code-structuring ideas these languages are based upon.

### LiteratePrograms

Eventually, I'd like to get involved in the LiteratePrograms project, which is novel in its approach to be an algorithm repository, language reference, and teaching/learning platform. Unfortunately, I'm much too terse (and lazy) to give a decent background explanation for the code. (Or to put it differently, coding is easy - explaining what the code does and why is the hard part.) :-)

There are a number of sites that give us a single problem solved in multiple languages (Hello World, 99 bottles, ROT13, Quine, OO Shapes, etc...). And there are sites that try to compare Syntax Across Languages. And there are sites that have multiple problems across multiple languages (PLEAC, PL Shootout, LiteratePrograms, etc...). I figure we could always use more of these for those that are interested in looking at expanding their PL range.
If I ever do get done with the SICP translations, I figure it will create a new category of "One Book, Many Languages". But that's a long way off (currently qualifying as "One Chapter, Many Languages"). :-)

### hmm, i thought it was

hmm, i thought it was translation to other (human) languages. I myself began one to portuguese, a few months ago, based on the html version. I don't know if it would be very popular, but such a classic deserves it. I had almost completed chapter one, but it's halted as of now for lack of time. Eventually, i'll get there.

### Can always use a wider audience

Lack of time always seems to be the fundamental constraint to getting anything useful done. :-)

### A JavaScript question

Python, Ruby and JavaScript translations of chapter #1 were added. I have a question about JavaScript though. The compiler seems to go through a two-stage process, first compiling the functions and then sequentially going through the code. The following produces the anomaly:

    // The following will print "second" two times
    function fg() { return "first"; }
    print(fg());
    function fg() { return "second"; }
    print(fg());

I'm wondering why JavaScript has this behavior? And what name/terminology one would use to describe it? And whether this is just unique to the script being embedded in the browser?

### Not unique

Perl does this too, because "sub f {...}" assigns to "f" at compile time:

    sub f { 1 }
    BEGIN { print "begin ", f }
    print "1 ", f;
    sub f { 2 }
    print "2 ", f;

    sub g;
    *g = sub { 1 };
    # "Undefined subroutine 'g'":
    BEGIN { eval { print "begin ", g } }
    print "3 ", g;
    *g = sub { 2 };
    print "4 ", g

### Evaluating definitions first

It's not unique to being embedded in the browser. The Spidermonkey Javascript interpreter does the same thing if you feed it a file containing that code. You can get the results you want with:

    function fg() { return "first"; }
    print(fg());
    fg = function () { return "second"; }
    print(fg());

(You could use "var fg = ..."
to define the first instance of the function, too.)

The advantage of treating named function definitions specially is that you don't need forward declarations. This allows you to e.g. write top-level code followed by the functions it depends on. This can be useful in typical scripting environments, where code reuse may be being done by file inclusion, which can create messy dependency problems otherwise.

### Lua?

BTW, an interesting alternative to the quirky implementations in Python and Ruby might be Lua, which from what I've seen should be more Right Thingish when it comes to lambdas, lexical scoping and so on. I'm not volunteering, though. :)

Is much cleaner, though I couldn't figure out how to do a simple let binding (that is an expression) alone. Faked by wrapping it in a function. Also haven't quite figured out when variables need to be declared local - though on the surface it seems sensible. [Edit Note: meant as reply to Anton]

### Your Lua code lacks local declarations

In Lua, any unbound variable is implicitly global, which means that it is a key in the current function environment. Also, the function statement is simply syntactic sugar for an assignment statement:

    function square(x) return x^2 end

is the same as:

    square = function(x) return x^2 end

and

    local function square(x) return x^2 end

is the same as:

    local square; square = function(x) return x^2 end

There is no let construct, so the idiom of wrapping the thunk in a function of no arguments and calling it is correct idiomatic Lua, although it's uncommon; a more usual way of doing it is to put the interior scope inside a do ... end block. Since all unbound variables are global, many of your examples pollute the global namespace (that is, the current function environment).
So, for example, it would be better to write the following:

    -- Block-structured
    function sqrt(x)
      local function good_enough(guess, x)
        return abs(square(guess) - x) < 0.001
      end
      local function improve(guess, x)
        return average(guess, x / guess)
      end
      local function sqrt_iter(guess, x)
        if (good_enough(guess, x)) then
          return guess
        else
          return sqrt_iter(improve(guess, x), x)
        end
      end
      return sqrt_iter(1.0, x)
    end

### I think I've got it cleaned up now

Thanks for the explanations and corrections.

[Edit Note] I guess I should add that it was nice having the example of:

    print (integral(cube, 0.0, 1.0, 0.001))

work after having seen the "too much recursion" errors in Python, Ruby and JavaScript. Of course, this is owing to the fact that Lua has TCO and the others don't (at least not on the versions I'm working with).

### A question on lists in Lua

Well, I ain't gonna get very far into chapter 2 in Lua if I can't figure out how Lua does lists: [1,4,5] etc... The Table dictionaries are lookup based. I'm guessing I could use arrays, and build up cons, car, and cdr routines around them. But before doing that, thought I'd ask what the idiomatic version would be in Lua? Perhaps I'm missing something even more obvious? Thanks.

### Lua uses tables instead of lists

Of course, that changes the way you write programs. For example, the most efficient way of extending a list is at the front, while the most efficient way of extending an array is at the end. Furthermore, immutable pairs offer possibilities that mutable tables don't (and vice versa). Nonetheless, it is common to represent lists as arrays (that is, tables whose keys are consecutive positive integers). If you want to simply reproduce Scheme examples, it is trivial to define a cons type as a table:

    function cons(h, t) return {head = h, tail = t} end

You could use car and cdr instead of head and tail although I'm not sure what the pedagogical value is.
To make the construction of lists a little easier, we need a list constructor function; the following uses Lua 5.1 varargs, which are a bit different from previous versions, and constructs a temporary table which is then converted into a cons-list:

    function list(...)
      local t = {...}
      local c = nil
      for i = #t, 1, -1 do c = cons(t[i], c) end
      return c
    end

Another way of making cons cells is with functional closures, as per section 2.1.3. Since Lua allows multiple return values, an even simpler implementation is possible, where the cons cell returns both head and tail:

    function cons(h, t) return function() return h, t end end
    function flip(x, y) return y, x end
    function tail(cell) return (flip(cell())) end

On the whole, I don't think this is a very natural way to program in Lua. It would be more natural to implement make-rat (and friends) directly as tables:

    function Rational(n, d) return {numer = n, denom = d} end

    function add(x, y)  -- header lost in extraction; reconstructed
      return Rational(x.numer*y.denom + y.numer*x.denom, x.denom*y.denom)
    end
    -- etc.

I didn't define a functional interface to get the numerator and denominator, because I normally wouldn't in Lua. It is possible to use metamethods to create tables with what are now, I believe, commonly referred to as "properties" -- i.e. computed field accessors and mutators -- and the field access syntax seems more natural to me. Obviously, it would be trivial to define the functions if desired.

### The challenge...

...is to make the code not look too much like Scheme embedded inside the Lua code. I'll probably go with a naive translation first (much as I did with Python and Ruby), and worry about an efficient solution afterwards. Thanks for the help.

### Thanks

Thanks for the Lua. It's interesting to see the issues that come up in these languages, especially since I lean towards the "I can write lambda calculus in any language" school of thought, and am constantly disappointed by the seemingly silly little things that get in the way of that.

### Erlang addition of chapter #1
Alice ML is the main language I'm trying to translate. I got stuck on section 2.5 (generic operations) a while back, and now I'm stuck on Section 3.3.5 (constraints). Alice has a nice constraint system (Gecode), but I really need to figure out the Scheme code there. Anyhow, when I get stuck on Alice ML, I have the bad habit of branching out into another language, hoping to find some insight. Latest addition is Erlang, which some might find of interest. (William Neumann has also done some work on chapter 3 in O'Caml that has helped me along on the Alice side.)

### E Tu

Managed a bit more work on the sicp translation. E was added to the mix. And the Python and Ruby translations for chapter 2 were modified with a more efficient use of lists (using chains rather than the built-in vector list type).
http://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-2-review-exercises-page-195/12
## College Algebra (10th Edition)

$(x+1)^2+(y+2)^2=1$

RECALL: The standard form of a circle's equation is given as: $(x-h)^2+(y-k)^2=r^2$ where $r$ = radius and $(h, k)$ is the center.

Thus, a circle with center at $(-1, -2)$ and $r=1$ has the equation: $[x-(-1)]^2+[y-(-2)]^2=1^2 \\ (x+1)^2+(y+2)^2=1$
http://gradestack.com/C-Programming-Language/Loops/nested-loops/19449-3977-40491-study-wtw
# Syntax

The C programming language allows you to use one loop inside another loop. The following section shows a few examples to illustrate the concept.

The syntax for a nested for loop statement in C is as follows:

    for ( init; condition; increment ) {
       for ( init; condition; increment ) {
          statement(s);
       }
       statement(s);
    }

The syntax for a nested while loop statement in C is as follows:

    while(condition) {
       while(condition) {
          statement(s);
       }
       statement(s);
    }

The syntax for a nested do...while loop statement in C is as follows:

    do {
       statement(s);
       do {
          statement(s);
       } while( condition );
    } while( condition );

A final note on loop nesting is that you can put any type of loop inside of any other type of loop. For example, a for loop can be inside a while loop or vice versa.

# Example

The following program uses a nested for loop to find the prime numbers from 2 to 100:

    #include <stdio.h>

    int main () {
       /* local variable definition */
       int i, j;

       for(i=2; i<100; i++) {
          for(j=2; j <= (i/j); j++)
             if(!(i%j)) break; // if factor found, not prime
          if(j > (i/j)) printf("%d is prime\n", i);
       }

       return 0;
    }

When the above code is compiled and executed, it produces the following result:

    2 is prime
    3 is prime
    5 is prime
    7 is prime
    11 is prime
    13 is prime
    17 is prime
    19 is prime
    23 is prime
    29 is prime
    31 is prime
    37 is prime
    41 is prime
    43 is prime
    47 is prime
    53 is prime
    59 is prime
    61 is prime
    67 is prime
    71 is prime
    73 is prime
    79 is prime
    83 is prime
    89 is prime
    97 is prime
https://www.bookstack.cn/read/cppbestpractices/02-Use_the_Tools_Available.md
# Use The Tools Available

An automated framework for executing these tools should be established very early in the development process. It should not take more than 2-3 commands to checkout the source code, build, and execute the tests. Once the tests are done executing, you should have an almost complete picture of the state and quality of the code.

## Source Control

Source control is an absolute necessity for any software development project. If you are not using one yet, start using one.

• GitHub - allows for unlimited public repositories, must pay for a private repository.
• Bitbucket - allows for unlimited private repositories with up to 5 collaborators, for free.
• SourceForge - open source hosting only.
• GitLab - allows for unlimited public and private repositories, unlimited CI Runners included, for free.
• Visual Studio Online (http://www.visualstudio.com/what-is-visual-studio-online-vs) - allows for unlimited public repositories, must pay for a private repository. Repositories can be git or TFVC. Additionally: issue tracking, project planning (multiple Agile templates, such as SCRUM), integrated hosted builds, integration of all this into Microsoft Visual Studio. Windows only.

## Build Tool

Use an industry standard widely accepted build tool. This prevents you from reinventing the wheel whenever you discover / link to a new library / package your product / etc. Examples include:

Remember, it's not just a build tool, it's also a programming language. Try to maintain good clean build scripts and follow the recommended practices for the tool you are using.

## Continuous Integration

Once you have picked your build tool, set up a continuous integration environment. Continuous Integration (CI) tools automatically build the source code as changes are pushed to the repository. These can be hosted privately or with a CI host.
• Travis CI
  • works well with C++
  • designed for use with GitHub
  • free for public repositories on GitHub
• AppVeyor
  • supports Windows, MSVC and MinGW
  • free for public repositories on GitHub
• Hudson CI / Jenkins CI
  • Java Application Server is required
  • supports Windows, OS X, and Linux
  • extendable with a lot of plugins
• TeamCity
  • has a free option for open source projects
• Decent CI
  • simple ad-hoc continuous integration that posts results to GitHub
  • supports Windows, OS X, and Linux
  • used by ChaiScript
• Visual Studio Online (http://www.visualstudio.com/what-is-visual-studio-online-vs)
  • Tightly integrated with the source repositories from Visual Studio Online
  • Uses MSBuild (Visual Studio's build engine), which is available on Windows, OS X and Linux
  • Provides hosted build agents and also allows for user-provided build agents
  • Can be controlled and monitored from within Microsoft Visual Studio
  • On-Premise installation via Microsoft Team Foundation Server
• GitLab
  • use custom Docker images, so can be used for C++
  • has free shared runners
  • has trivial processing of coverage analysis results

If you have an open source, publicly-hosted project on GitHub, these tools are all free and relatively easy to set up. Once they are set up you are getting continuous building, testing, analysis and reporting of your project. For free.

## Compilers

Use every available and reasonable set of warning options. Some warning options only work with optimizations enabled, or work better the higher the chosen level of optimization is, for example -Wnull-dereference with GCC.

You should use as many compilers as you can for your platform(s). Each compiler implements the standard slightly differently and supporting multiple compilers will help ensure the most portable, most reliable code.
### GCC / Clang

-Wall -Wextra -Wshadow -Wnon-virtual-dtor -pedantic

• -Wall -Wextra reasonable and standard
• -Wshadow warn the user if a variable declaration shadows one from a parent context
• -Wnon-virtual-dtor warn the user if a class with virtual functions has a non-virtual destructor. This helps catch hard to track down memory errors
• -Wold-style-cast warn for c-style casts
• -Wcast-align warn for potential performance problem casts
• -Wunused warn on anything being unused
• -Woverloaded-virtual warn if you overload (not override) a virtual function
• -Wpedantic warn if non-standard C++ is used
• -Wconversion warn on type conversions that may lose data
• -Wsign-conversion warn on sign conversions
• -Wmisleading-indentation warn if indentation implies blocks where blocks do not exist
• -Wduplicated-cond warn if an if / else chain has duplicated conditions
• -Wduplicated-branches warn if if / else branches have duplicated code
• -Wlogical-op warn about logical operations being used where bitwise were probably wanted
• -Wnull-dereference warn if a null dereference is detected
• -Wuseless-cast warn if you perform a cast to the same type
• -Wdouble-promotion warn if float is implicitly promoted to double
• -Wformat=2 warn on security issues around functions that format output (i.e. printf)

Consider using -Weverything and disabling the few warnings you need to on Clang.

-Weffc++ warning mode can be too noisy, but if it works for your project, use it also.

### MSVC

/permissive- - Enforces standards conformance.
/W4 /w14640 - use these and consider the following (see descriptions below)

• /W4 All reasonable warnings
• /w14242 'identifier': conversion from 'type1' to 'type2', possible loss of data
• /w14254 'operator': conversion from 'type1:field_bits' to 'type2:field_bits', possible loss of data
• /w14263 'function': member function does not override any base class virtual member function
• /w14265 'classname': class has virtual functions, but destructor is not virtual; instances of this class may not be destructed correctly
• /w14287 'operator': unsigned/negative constant mismatch
• /we4289 nonstandard extension used: 'variable': loop control variable declared in the for-loop is used outside the for-loop scope
• /w14296 'operator': expression is always 'boolean_value'
• /w14311 'variable': pointer truncation from 'type1' to 'type2'
• /w14545 expression before comma evaluates to a function which is missing an argument list
• /w14546 function call before comma missing argument list
• /w14547 'operator': operator before comma has no effect; expected operator with side-effect
• /w14549 'operator': operator before comma has no effect; did you intend 'operator'?
• /w14555 expression has no effect; expected expression with side-effect
• /w14619 pragma warning: there is no warning number 'number'
• /w14640 Enable warning on thread un-safe static member initialization
• /w14826 Conversion from 'type1' to 'type2' is sign-extended. This may cause unexpected runtime behavior.
• /w14905 wide string literal cast to 'LPSTR'
• /w14906 string literal cast to 'LPWSTR'
• /w14928 illegal copy-initialization; more than one user-defined conversion has been implicitly applied

Not recommended

• /Wall - Also warns on files included from the standard library, so it's not very useful and creates too many extra warnings.

### General

Start with very strict warning settings from the beginning. Trying to raise the warning level after the project is underway can be painful.
Consider using the treat warnings as errors setting: /WX with MSVC, -Werror with GCC / Clang.

## LLVM-based tools

LLVM-based tools work best with a build system (such as CMake) that can output a compile command database, for example:

    $ cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON .

If you are not using a build system like that, you can consider Build EAR, which will hook into your build system and generate a compile command database for you.

CMake now also comes with built-in support for calling clang-tidy during normal compilation.

## Static Analyzers

The best bet is a static analyzer that you can run as part of your automated build system. Cppcheck and clang meet that requirement for free options.

### Coverity Scan

Coverity has a free (for open source) static analysis toolkit that can work on every commit in integration with Travis CI and AppVeyor.

### PVS-Studio

PVS-Studio is a tool for bug detection in the source code of programs, written in C, C++ and C#. It is free for personal academic projects, open source non-commercial projects and independent projects of individual developers. It works in Windows and Linux environments.

### Cppcheck

Cppcheck is free and open source. It strives for 0 false positives and does a good job at it. Therefore all warnings should be enabled: --enable=all

Notes:

• For correct work it requires a well-formed path for headers, so before usage don't forget to pass --check-config.
• Finding unused headers does not work with -j greater than 1.
• Remember to add --force for code with a large number of #ifdefs if you need to check all of them.

### CppDepend

CppDepend simplifies managing a complex C/C++ code base by analyzing and visualizing code dependencies, by defining design rules, by doing impact analysis, and by comparing different versions of the code. It's free for OSS contributors.

### Clang's Static Analyzer

Clang's analyzer's default options are good for the respective platform. It can be used directly from CMake.
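As a sketch of the built-in CMake support for clang-tidy mentioned above (this fragment is illustrative: it assumes CMake 3.6 or later and a clang-tidy binary on the PATH, and the check list is just an example, not a recommendation):

```cmake
# Ask CMake to run clang-tidy on every C++ source as it is compiled.
# The variable is a semicolon list: the tool name followed by its arguments.
set(CMAKE_CXX_CLANG_TIDY
    clang-tidy
    -checks=-*,readability-*,bugprone-*)
```

The same thing can be scoped to a single target via the `CXX_CLANG_TIDY` target property instead of the global variable.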
They can also be called via clang-check and clang-tidy from the LLVM-based tools. Also, CodeChecker is available as a front-end to clang's static analysis.

clang-tidy can be easily used with Visual Studio via the Clang Power Tools extension.

### MSVC's Static Analyzer

Can be enabled with the /analyze command line option. For now we will stick with the default options.

### Flint / Flint++

Flint and Flint++ are linters that analyze C++ code against Facebook's coding standards.

### OCLint

OCLint is a free, libre and open source static code analysis tool for improving the quality of C++ code in many different ways.

### ReSharper C++ / CLion

Both of these tools from JetBrains offer some level of static analysis and automated fixes for common things that can be done better. They have options available for free licenses for open source project leaders.

### Cevelop

The Eclipse-based Cevelop IDE has various static analysis and refactoring / code fix tools available. For example, you can replace macros with C++ constexprs, refactor namespaces (extract/inline using, qualify name), and refactor your code to C++11's uniform initialization syntax. Cevelop is free to use.

### Qt Creator

Qt Creator can plug into the clang static analyzer.

### clazy

clazy is a clang-based tool for analyzing Qt usage.

## Runtime Checkers

### Code Coverage Analysis

A coverage analysis tool shall be run when tests are executed to make sure the entire application is being tested. Unfortunately, coverage analysis requires that compiler optimizations be disabled. This can result in significantly longer test execution times.

• Codecov
  • integrates with Travis CI and AppVeyor
  • free for open source projects
• Coveralls
  • integrates with Travis CI and AppVeyor
  • free for open source projects
• LCOV
  • very configurable
• Gcovr
• kcov
  • integrates with codecov and coveralls
  • performs code coverage reporting without needing special compiler flags, just by instrumenting debug symbols.
### Valgrind

Valgrind is a runtime code analyzer that can detect memory leaks, race conditions, and other associated problems. It is supported on various Unix platforms.

### Dr Memory

Similar to Valgrind. http://www.drmemory.org

### GCC / Clang Sanitizers

These tools provide many of the same features as Valgrind, but built into the compiler. They are easy to use and provide a report of what went wrong.

• AddressSanitizer
• MemorySanitizer
• ThreadSanitizer
• UndefinedBehaviorSanitizer

### Fuzzy Analyzers

If your project accepts user-defined input, consider running a fuzzy input tester. Such tools use coverage reporting to find new code execution paths and try to breed novel inputs for your code. They can find crashes, hangs, and inputs you didn't know were considered valid.

### Control Flow Guard

MSVC's Control Flow Guard adds high-performance runtime security checks.

## Ignoring Warnings

If it is determined by team consensus that the compiler or analyzer is warning on something that is either incorrect or unavoidable, the team will disable the specific error in as localized a part of the code as possible. Be sure to re-enable the warning after disabling it for a section of code. You do not want your disabled warnings to leak into other code.

## Testing

CMake, mentioned above, has a built-in framework for executing tests. Make sure whatever build system you use has a way to execute tests built in. To further aid in executing tests, consider a library such as Google Test, Catch, CppUTest or Boost.Test to help you organize the tests.

### Unit Tests

Unit tests are for small chunks of code, individual functions which can be tested standalone.

### Integration Tests

There should be a test enabled for every feature or bug fix that is committed. See also Code Coverage Analysis. These are tests that are higher level than unit tests. They should still be limited in scope to individual features.
### Negative Testing

Don't forget to make sure that your error handling is being tested and works properly as well. This will become obvious if you aim for 100% code coverage.

## Debugging

### uftrace

uftrace can be used to generate function call graphs of a program execution.

### rr

rr is a free (open source) reverse debugger that supports C++.

## Other Tools

### Metrix++

Metrix++ can identify and report on the most complex sections of your code. Reducing complex code helps you and the compiler understand it better and optimize it better.

### ABI Compliance Checker

ABI Compliance Checker (ACC) can analyze two library versions and generate a detailed compatibility report regarding API and C++ ABI changes. This can help a library developer spot unintentional breaking changes to ensure backward compatibility.

### CNCC

Customizable Naming Convention Checker can report on identifiers in your code that do not follow certain naming conventions.

### ClangFormat

ClangFormat can check and correct code formatting to match organizational conventions automatically. There is a multipart series on utilizing clang-format.

### SourceMeter

SourceMeter offers a free version which provides many different metrics for your code and can also call into cppcheck.

### Bloaty McBloatface

Bloaty McBloatface is a binary size analyzer/profiler for unix-like platforms.
https://gmatclub.com/forum/if-m-is-the-least-common-multiple-of-90-196-and-300-which-of-the-foll-274311.html
# If m is the least common multiple of 90, 196, and 300, which of the following is not a factor of m?

Intern
Joined: 31 Jul 2017
Posts: 11

### Show Tags

26 Aug 2018, 03:46

If m is the least common multiple of 90, 196, and 300, which of the following is not a factor of m?

A) 600
B) 700
C) 900
D) 2100
E) 4900

Senior Manager
Joined: 08 Jun 2013
Posts: 323
Location: India

### Show Tags

26 Aug 2018, 03:57

Manas1212 wrote:
> If m is the least common multiple of 90, 196, and 300, which of the following is not a factor of m?
> A) 600 B) 700 C) 900 D) 2100 E) 4900

Manas1212 Can you please check the OA?

LCM of (90, 196, 300) = $$(2^2)(3^2)(5^2)(7^2) = 44100$$, since 196 = $$(2^2)(7^2)$$ contributes $$7^2$$ but only $$2^2$$.

Choice A) 600 = $$(2^3)(3)(5^2)$$ requires $$2^3$$, which m does not contain; each of the other choices divides 44100.

Ans: A
Math Expert
Joined: 02 Sep 2009
Posts: 49206

### Show Tags

26 Aug 2018, 04:05

Manas1212 wrote:
> If m is the least common multiple of 90, 196, and 300, which of the following is not a factor of m?
> A) 600 B) 700 C) 900 D) 2100 E) 4900

Discussed here: https://gmatclub.com/forum/if-m-is-the- ... 23758.html

Hope it helps.
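The LCM and the factor check can be verified with a short script (Python used here purely for verification):

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    # LCM via the identity lcm(a, b) * gcd(a, b) = a * b
    return a * b // gcd(a, b)

m = reduce(lcm, [90, 196, 300])
print(m)  # 44100 = 2^2 * 3^2 * 5^2 * 7^2

# Check each answer choice for divisibility into m.
for choice in (600, 700, 900, 2100, 4900):
    print(choice, "divides m" if m % choice == 0 else "does NOT divide m")
```

Only 600 fails the divisibility check, because 600 = 2^3 · 3 · 5^2 needs a factor of 2^3 that m does not contain.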
2018-09-18 23:53:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26872313022613525, "perplexity": 6671.860705383423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155792.23/warc/CC-MAIN-20180918225124-20180919005124-00443.warc.gz"}
https://en.wikipedia.org/wiki/LHCb
# LHCb experiment

(Redirected from LHCb)

Coordinates: 46°14′28″N 06°05′49″E

The LHCb (Large Hadron Collider beauty) experiment is one of eight particle physics detector experiments collecting data at the Large Hadron Collider at CERN.[1] LHCb is a specialized b-physics experiment, designed primarily to measure the parameters of CP violation in the interactions of b-hadrons (heavy particles containing a bottom quark). Such studies can help to explain the matter-antimatter asymmetry of the Universe. The detector is also able to perform measurements of production cross sections, exotic hadron spectroscopy, charm physics and electroweak physics in the forward region. The LHCb collaboration, which built and operates the experiment and analyses its data, is composed of approximately 1260 people from 74 scientific institutes, representing 16 countries.[2] On July 1, 2020, Chris Parkes[3] succeeded Giovanni Passaleva (spokesperson 2017-2020) as spokesperson for the collaboration.[4] The experiment is located at point 8 on the LHC tunnel close to Ferney-Voltaire, France, just over the border from Geneva. The (small) MoEDAL experiment shares the same cavern.

## Physics goals

The experiment has a wide physics programme covering many important aspects of heavy flavour (both beauty and charm), electroweak and quantum chromodynamics (QCD) physics. Six key measurements have been identified involving B mesons. These are described in a roadmap document[5] that formed the core physics programme for the first high energy LHC running in 2010–2012.
They include:

- Measuring the branching ratio of the rare $B_s \to \mu^+\mu^-$ decay.
- Measuring the forward-backward asymmetry of the muon pair in the flavour-changing neutral current $B_d \to K^* \mu^+\mu^-$ decay. Such a flavour-changing neutral current cannot occur at tree level in the Standard Model of Particle Physics, and only occurs through box and loop Feynman diagrams; properties of the decay can be strongly modified by new physics.
- Measuring the CP violating phase in the decay $B_s \to J/\psi\, \phi$, caused by interference between the decays with and without $B_s$ oscillations. This phase is one of the CP observables with the smallest theoretical uncertainty in the Standard Model, and can be significantly modified by new physics.
- Measuring properties of radiative B decays, i.e. B meson decays with photons in the final states. Specifically, these are again flavour-changing neutral current decays.
- Tree-level determination of the unitarity triangle angle γ.
- Charmless charged two-body B decays.

## The LHCb detector

The fact that the two b-hadrons are predominantly produced in the same forward cone is exploited in the layout of the LHCb detector. The LHCb detector is a single-arm forward spectrometer with a polar angular coverage from 10 to 300 milliradians (mrad) in the horizontal and 250 mrad in the vertical plane. The asymmetry between the horizontal and vertical plane is determined by a large dipole magnet with the main field component in the vertical direction.

### Subsystems

The Vertex Locator (VELO) is built around the proton interaction region.[6][7] It is used to measure the particle trajectories close to the interaction point in order to precisely separate primary and secondary vertices. The detector operates at 7 millimetres (0.28 in) from the LHC beam. This implies an enormous flux of particles; VELO has been designed to withstand integrated fluences of more than 10¹⁴ p/cm² per year for a period of about three years.
The detector operates in vacuum and is cooled to approximately −25 °C (−13 °F) using a biphase CO₂ system. The data of the VELO detector are amplified and read out by the Beetle ASIC.

The RICH-1 detector (Ring imaging Cherenkov detector) is located directly after the vertex detector. It is used for particle identification of low-momentum tracks.

The main tracking system is placed before and after the dipole magnet. It is used to reconstruct the trajectories of charged particles and to measure their momenta. The tracker consists of three subdetectors:

- The Tracker Turicensis, a silicon strip detector located before the LHCb dipole magnet
- The Outer Tracker, a straw-tube based detector located after the dipole magnet covering the outer part of the detector acceptance
- The Inner Tracker, a silicon strip based detector located after the dipole magnet covering the inner part of the detector acceptance

Following the tracking system is RICH-2. It allows the identification of the particle type of high-momentum tracks.

The electromagnetic and hadronic calorimeters provide measurements of the energy of electrons, photons, and hadrons. These measurements are used at trigger level to identify the particles with large transverse momentum (high-pT particles).

The muon system is used to identify and trigger on muons in the events.

At the end of 2018, the LHC was shut down for upgrades, with a restart currently planned for early 2022. For the LHCb detector, almost all subdetectors are to be modernised or replaced.[8] It will get a fully new tracking system composed of a modernised vertex locator, upstream tracker (UT) and scintillating fibre tracker (SciFi). The RICH detectors will also be updated, as well as the whole detector electronics.
However, the most important change is the switch to a fully software-based trigger, which means that every recorded collision will be analysed by sophisticated software programmes without an intermediate hardware filtering step (which was found to be a bottleneck in the past).[9]

## Results

During the 2011 proton-proton run, LHCb recorded an integrated luminosity of 1 fb⁻¹ at a collision energy of 7 TeV. In 2012, about 2 fb⁻¹ was collected at an energy of 8 TeV.[10] During 2015-2018 (Run 2 of the LHC), about 6 fb⁻¹ was collected at a center-of-mass energy of 13 TeV. In addition, small samples were collected in proton-lead, lead-lead, and xenon-xenon collisions. The LHCb design also allowed the study of collisions of particle beams with a gas (helium or neon) injected inside the VELO volume, making it similar to a fixed-target experiment; this setup is usually referred to as "SMOG".[11] These datasets allow the collaboration to carry out the physics programme of precision Standard Model tests with many additional measurements. As of 2021, LHCb has published more than 500 scientific papers.[12]

LHCb is designed to study beauty and charm hadrons. In addition to precision studies of known particles such as the mysterious X(3872), a number of new hadrons have been discovered by the experiment. As of 2021, all four LHC experiments have discovered about 60 new hadrons in total, the vast majority of them by LHCb.[13] In 2015, analysis of the decay of bottom lambda baryons ($\Lambda_b^0$) in the LHCb experiment revealed the apparent existence of pentaquarks,[14][15] in what was described as an "accidental" discovery.[16] Other notable discoveries are those of the "doubly charmed" baryon $\Xi_{cc}^{++}$ in 2017, being the first known baryon with two heavy quarks; and of the fully-charmed tetraquark $T_{cccc}$ in 2020, made of two charm quarks and two charm antiquarks.
Hadrons discovered at LHCb.[17] The term 'excited' for baryons and mesons means existence of a state of lower mass with the same quark content and isospin.

| # | Quark content[i] | Particle name | Type | Year of discovery |
|---|---|---|---|---|
| 1 | $bud$ | $\Lambda_b(5912)^0$ | Excited baryon | 2012 |
| 2 | $bud$ | $\Lambda_b(5920)^0$ | Excited baryon | 2012 |
| 3 | $c\bar{u}$ | $D_J(2580)^0$ | Excited meson | 2013 |
| 4 | $c\bar{u}$ | $D_J(2740)^0$ | Excited meson | 2013 |
| 5 | $c\bar{d}$ | $D_J^*(2760)^+$ | Excited meson | 2013 |
| 6 | $c\bar{u}$ | $D_J(3000)^0$ | Excited meson | 2013 |
| 7 | $c\bar{u}$ | $D_J^*(3000)^0$ | Excited meson | 2013 |
| 8 | $c\bar{d}$ | $D_J^*(3000)^+$ | Excited meson | 2013 |
| 9 | $c\bar{s}$ | $D_{s1}^*(2860)^+$ | Excited meson | 2014 |
| 10 | $bsd$ | $\Xi_b'^-$ | Excited baryon | 2014 |
| 11 | $bsd$ | $\Xi_b^{*-}$ | Excited baryon | 2014 |
| 12 | $\bar{b}u$ | $B_J(5840)^+$ | Excited meson | 2015 |
| 13 | $\bar{b}d$ | $B_J(5840)^0$ | Excited meson | 2015 |
| 14 | $\bar{b}u$ | $B_J(5970)^+$ | Excited meson | 2015 |
| 15 | $\bar{b}d$ | $B_J(5970)^+$ | Excited meson | 2015 |
| 16[ii] | $c\bar{c}uud$ | $P_c(4380)^+$ | Pentaquark | 2015 |
| 17 | $c\bar{c}s\bar{s}$ | $X(4274)$ | Tetraquark | 2016 |
| 18 | $c\bar{c}s\bar{s}$ | $X(4500)$ | Tetraquark | 2016 |
| 19 | $c\bar{c}s\bar{s}$ | $X(4700)$ | Tetraquark | 2016 |
| 20 | $c\bar{u}$ | $D_3^*(2760)^0$ | Excited meson | 2016 |
| 21 | $cud$ | $\Lambda_c(2860)^+$ | Excited baryon | 2017 |
| 22 | $css$ | $\Omega_c(3000)^0$ | Excited baryon | 2017 |
| 23 | $css$ | $\Omega_c(3050)^0$ | Excited baryon | 2017 |
| 24 | $css$ | $\Omega_c(3066)^0$ | Excited baryon | 2017 |
| 25 | $css$ | $\Omega_c(3090)^0$ | Excited baryon | 2017 |
| 26 | $css$ | $\Omega_c(3119)^0$ | Excited baryon | 2017 |
| 27[iii] | $ccu$ | $\Xi_{cc}^{++}$ | Baryon | 2017 |
| 28 | $bsd$ | $\Xi_b(6227)^-$ | Excited baryon | 2018 |
| 29 | $buu$ | $\Sigma_b(6097)^+$ | Excited baryon | 2018 |
| 30 | $bdd$ | $\Sigma_b(6097)^-$ | Excited baryon | 2018 |
| 31 | $c\bar{c}$ | $\psi_3(3842)$[18] | Excited meson | 2019 |
| 32 | $c\bar{c}uud$ | $P_c(4312)^+$ | Pentaquark | 2019 |
| 33 | $c\bar{c}uud$ | $P_c(4440)^+$ | Pentaquark | 2019 |
| 34 | $c\bar{c}uud$ | $P_c(4457)^+$ | Pentaquark | 2019 |
| 35 | $bud$ | $\Lambda_b(6146)^0$ | Excited baryon | 2019 |
| 36 | $bud$ | $\Lambda_b(6152)^0$ | Excited baryon | 2019 |
| 37 | $bss$ | $\Omega_b(6340)^-$ | Excited baryon | 2020 |
| 38 | $bss$ | $\Omega_b(6350)^-$ | Excited baryon | 2020 |
| 39[iv] | $bud$ | $\Lambda_b(6070)^0$ | Excited baryon | 2020 |
| 40 | $csd$ | $\Xi_c(2923)^0$ | Excited baryon | 2020 |
| 41 | $csd$ | $\Xi_c(2939)^0$ | Excited baryon | 2020 |
| 42[v] | $cc\bar{c}\bar{c}$ | $T_{cccc}$ | Tetraquark | 2020 |
| 43[vi] | $\bar{c}d\bar{s}u$ | $X_0(2900)$ | Tetraquark | 2020 |
| 44 | $\bar{c}d\bar{s}u$ | $X_1(2900)$ | Tetraquark | 2020 |
| 45 | $bsu$ | $\Xi_b(6227)^0$ | Excited baryon | 2020 |
| 46 | $\bar{b}s$ | $B_s(6063)^0$ | Excited meson | 2020 |
| 47 | $\bar{b}s$ | $B_s(6114)^0$ | Excited meson | 2020 |
| 48 | $c\bar{s}$ | $D_{s0}(2590)^+$ | Excited meson | 2020 |
| 49 | $c\bar{c}s\bar{s}$ | $X(4630)$ | Tetraquark | 2021 |
| 50 | $c\bar{c}s\bar{s}$ | $X(4685)$ | Tetraquark | 2021 |
| 51 | $c\bar{c}u\bar{s}$ | $Z_{cs}(4000)^+$ | Tetraquark | 2021 |
| 52 | $c\bar{c}u\bar{s}$ | $Z_{cs}(4220)^+$ | Tetraquark | 2021 |

1. ^ Abbreviations are the first letter of the quark name (up = 'u', down = 'd', top = 't', bottom = 'b', charmed = 'c', strange = 's'). Antiquarks have overbars.
2. ^ Previously unknown combination of quarks.
3. ^ Previously unknown combination of quarks; first baryon with two charm quarks, and the only weakly-decaying particle discovered so far at the LHC.
4. ^ Simultaneous with CMS; CMS did not have enough data to claim the discovery.
5. ^ Previously unknown combination of quarks; first tetraquark made exclusively of charm quarks.
6. ^ Previously unknown combination of quarks; first tetraquark with all quarks being different.

### CP violation and mixing

Studies of charge-parity (CP) violation in B-meson decays are the primary design goal of the LHCb experiment. As of 2021, LHCb measurements confirm with remarkable precision the picture described by the CKM unitarity triangle. The angle $\gamma\ (\alpha_3)$ of the unitarity triangle is now known to about 4°, and is in agreement with indirect determinations.[19]

In 2019, LHCb announced the discovery of CP violation in decays of charm mesons.[20] This is the first time CP violation has been seen in decays of particles other than kaons or B mesons. The rate of the observed CP asymmetry is at the upper edge of existing theoretical predictions, which triggered some interest among particle theorists regarding the possible impact of physics beyond the Standard Model.[21]

In 2020, LHCb announced the discovery of time-dependent CP violation in decays of Bs mesons.[22] The oscillation frequency of Bs mesons into their antiparticles and vice versa was measured to great precision in 2021.

### Rare decays

Rare decays are decay modes harshly suppressed in the Standard Model, which makes them sensitive to potential effects from yet unknown physics mechanisms. In 2014, the LHCb and CMS experiments published a joint paper in Nature announcing the discovery of the very rare decay $B_s^0 \to \mu^+\mu^-$, the rate of which was found to be close to the Standard Model predictions.[23] This measurement has harshly limited the possible parameter space of supersymmetry theories, which had predicted a large enhancement in the rate. Since then, LHCb has published several papers with more precise measurements in this decay mode.

Anomalies were found in several rare decays of B mesons.
The most famous example, in the so-called $P_5'$ angular observable, was found in the decay $B^0 \to K^{*0}\mu^+\mu^-$, where the deviation between the data and the theoretical prediction has persisted for years.[24] The decay rates of several rare decays also differ from the theoretical predictions, though the latter have sizeable uncertainties.

### Lepton flavour universality

In the Standard Model, couplings of charged leptons (electron, muon and tau lepton) to the gauge bosons are expected to be identical, with the only difference emerging from the lepton masses. This postulate is referred to as "lepton flavour universality". As a consequence, in decays of b hadrons, electrons and muons should be produced at similar rates, and the small difference due to the lepton masses is precisely calculable. LHCb has found deviations from this prediction by comparing the rate of the decay $B^+ \to K^+\mu^+\mu^-$ to that of $B^+ \to K^+ e^+ e^-$,[25] and in similar processes.[26][27] However, as the decays in question are very rare, a larger dataset needs to be analysed in order to make definitive conclusions.
In March 2021, LHCb announced that the anomaly in lepton universality crossed the "3 sigma" statistical significance threshold, which translates to a p-value of 0.1%.[28] The measured value of
$$R_K = \frac{\mathcal{B}(B^+ \to K^+\mu^+\mu^-)}{\mathcal{B}(B^+ \to K^+ e^+ e^-)},$$
where the symbol $\mathcal{B}$ denotes the probability of a given decay to happen, was found to be $0.846^{+0.044}_{-0.041}$, while the Standard Model predicts it to be very close to unity.[29]

### Other measurements

LHCb has contributed to studies of quantum chromodynamics, electroweak physics, and provided cross-section measurements for astroparticle physics.[30]

## References

1. ^ Belyaev, I.; Carboni, G.; Harnew, N.; Matteuzzi, C.; Teubert, F. (2021-01-13). "The history of LHCb". The European Physical Journal H. 46 (1): 3. arXiv:2101.05331. Bibcode:2021EPJH...46....3B. doi:10.1140/epjh/s13129-021-00002-z. S2CID 231603240.
2. ^
3. ^ Ana Lopes (2020-06-30). "New spokesperson for the LHCb collaboration". CERN. Retrieved 2020-07-03.
4. ^ "Giovanni Passaleva". LHCb, CERN. Retrieved 2020-07-03.
5. ^ B. Adeva et al. (LHCb collaboration) (2009). "Roadmap for selected key measurements of LHCb". arXiv:0912.4179 [hep-ex].
6. ^ The LHCb VELO (from the VELO group)
7. ^ VELO Public Pages
8. ^ "Transforming LHCb: What's in store for the next two years?". CERN. Retrieved 2021-03-21.
9. ^ "Allen initiative – supported by CERN openlab – key to LHCb trigger upgrade". CERN. Retrieved 2021-03-21.
10. ^ "Luminosities Run1". 2012 LHC Luminosity Plots. Retrieved 14 Dec 2017.
11. ^ "New SMOG on the horizon". CERN Courier. 2020-05-08. Retrieved 2021-03-21.
12. ^ "LHCb - Large Hadron Collider beauty experiment". lhcb-public.web.cern.ch. Retrieved 2021-03-21.
13. ^ "59 new hadrons and counting". CERN. Retrieved 2021-03-21.
14.
^ "Observation of particles composed of five quarks, pentaquark-charmonium states, seen in $\Lambda_b^0 \to J/\psi p K^-$ decays". CERN/LHCb. 14 July 2015. Retrieved 2015-07-14.
15. ^ R. Aaij et al. (LHCb collaboration) (2015). "Observation of J/ψp resonances consistent with pentaquark states in $\Lambda_b^0 \to J/\psi K^- p$ decays". Physical Review Letters. 115 (7): 072001. arXiv:1507.03414. Bibcode:2015PhRvL.115g2001A. doi:10.1103/PhysRevLett.115.072001. PMID 26317714. S2CID 119204136.
16. ^ G. Amit (14 July 2015). "Pentaquark discovery at LHC shows long-sought new form of matter". New Scientist. Retrieved 2015-07-14.
17. ^ "New particles discovered at the LHC". www.nikhef.nl. Retrieved 2021-03-21.
18. ^ "pdgLive". pdglive.lbl.gov. Retrieved 2021-03-21.
19. ^ The LHCb Collaboration, ed. (2020). Updated LHCb combination of the CKM angle γ.
20. ^ "LHCb observes CP violation in charm decays". CERN Courier. 2019-05-07. Retrieved 2021-03-21.
21. ^ Dery, Avital; Nir, Yosef (December 2019). "Implications of the LHCb discovery of CP violation in charm decays". Journal of High Energy Physics. 2019 (12): 104. arXiv:1909.11242. Bibcode:2019JHEP...12..104D. doi:10.1007/JHEP12(2019)104. ISSN 1029-8479. S2CID 202750063.
22. ^ "LHCb sees new form of matter–antimatter asymmetry in strange beauty particles". CERN. Retrieved 2021-03-21.
23. ^ Khachatryan, V.; Sirunyan, A.M.; Tumasyan, A.; Adam, W.; Bergauer, T.; Dragicevic, M.; Erö, J.; Friedl, M.; Frühwirth, R.; Ghete, V.M.; Hartl, C. (June 2015). "Observation of the rare $B_s^0 \to \mu^+\mu^-$ decay from the combined analysis of CMS and LHCb data". Nature. 522 (7554): 68–72. doi:10.1038/nature14474. ISSN 1476-4687. PMID 26047778. S2CID 4394036.
24. ^ "New LHCb analysis still sees previous intriguing results". CERN. Retrieved 2021-03-21.
25. ^ "How universal is (lepton) universality?". CERN. Retrieved 2021-03-21.
26. ^ "LHCb explores the beauty of lepton universality". CERN. Retrieved 2021-03-21.
27. ^ "LHCb tests lepton universality in new channels". CERN Courier. 2021-10-19. Retrieved 2021-10-27.
28. ^ "Intriguing new result from the LHCb experiment at CERN". CERN. Retrieved 2021-03-23.
29. ^ LHCb collaboration; Aaij, R.; Beteta, C. Abellán; Ackernley, T.; Adeva, B.; Adinolfi, M.; Afsharnia, H.; Aidala, C. A.; Aiola, S.; Ajaltouni, Z.; Akar, S. (22 March 2022). "Test of lepton universality in beauty-quark decays". Nature Physics. 18 (3): 277–282. arXiv:2103.11769. doi:10.1038/s41567-021-01478-8. ISSN 1745-2473.
30. ^ Fontana, Marianna (2017-10-19). "LHCb inputs to astroparticle physics". Proceedings of the European Physical Society Conference on High Energy Physics. Venice, Italy: Sissa Medialab. p. 832. doi:10.22323/1.314.0832.
https://dml.cz/handle/10338.dmlcz/143635
# Article

Keywords: difference equation; forbidden set; periodic solution; unbounded solution

Summary: In this paper, we determine the forbidden set and give an explicit formula for the solutions of the difference equation $$x_{n+1}=\frac{ax_{n}x_{n-1}}{-bx_{n}+cx_{n-2}},\quad n\in \mathbb{N}_0$$ where $a$, $b$, $c$ are positive real numbers and the initial conditions $x_{-2}$, $x_{-1}$, $x_0$ are real numbers. We show that every admissible solution of that equation converges to zero if either $a<c$, or $a>c$ with $(a-c)/b<1$. When $a>c$ with $(a-c)/b>1$, we prove that every admissible solution is unbounded. Finally, when $a=c$, we prove that every admissible solution converges to zero.
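The convergence claim in the summary is easy to probe numerically. Below is a quick sketch (my own illustration, not taken from the paper); the parameter values $a=1$, $b=1$, $c=3$ are an arbitrary choice satisfying $a<c$.

```python
def iterate(a, b, c, x_init, n_steps=50, tol=1e-12):
    """Iterate x_{n+1} = a*x_n*x_{n-1} / (-b*x_n + c*x_{n-2})."""
    x = list(x_init)  # [x_{-2}, x_{-1}, x_0]
    for _ in range(n_steps):
        # x[-1] is x_n, x[-2] is x_{n-1}, x[-3] is x_{n-2}
        denom = -b * x[-1] + c * x[-3]
        if denom == 0 or abs(x[-1]) < tol:
            break  # forbidden set hit, or already numerically zero
        x.append(a * x[-1] * x[-2] / denom)
    return x

# a < c (here 1 < 3): the paper says every admissible solution converges to zero
xs = iterate(a=1.0, b=1.0, c=3.0, x_init=(1.0, 1.0, 1.0))
print(abs(xs[-1]) < 1e-6)  # True
```

With these values the iterates stay positive and shrink rapidly, consistent with the stated theorem; other admissible initial conditions can be explored the same way.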
2023-01-31 15:57:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7896220088005066, "perplexity": 1686.3381555294834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499888.62/warc/CC-MAIN-20230131154832-20230131184832-00076.warc.gz"}
https://admin.clutchprep.com/physics/practice-problems/95481/two-forces-have-the-same-magnitude-f-what-is-the-angle-between-the-two-vectors-i
# Problem

Two forces have the same magnitude F.

What is the angle between the two vectors if their sum has a magnitude of 2F?

What is the angle between the two vectors if their sum has a magnitude of F?

What is the angle between the two vectors if their sum has a magnitude of zero?
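A worked solution is not shown on the page, but all three cases follow from one identity: for two vectors of equal magnitude $F$ separated by angle $\theta$, $|R|^2 = 2F^2(1+\cos\theta) = (2F\cos(\theta/2))^2$, so $\theta$ is recovered by inverting a cosine. A small Python sketch of that formula:

```python
import math

def angle_between(resultant_over_F):
    """Angle in degrees between two equal-magnitude vectors F whose
    resultant has magnitude (resultant_over_F * F).
    From |R| = 2F*cos(theta/2)."""
    return math.degrees(2 * math.acos(resultant_over_F / 2))

for ratio in (2.0, 1.0, 0.0):
    print(ratio, "->", round(angle_between(ratio), 1), "degrees")
# sum 2F -> 0 degrees (parallel), sum F -> 120 degrees, sum 0 -> 180 degrees
```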
2020-04-09 11:55:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.892814576625824, "perplexity": 226.11769692584159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371833063.93/warc/CC-MAIN-20200409091317-20200409121817-00222.warc.gz"}
https://byjus.com/question-answer/describe-in-each-case-one-chemical-test-that-would-enable-you-to-distinguish-between-the-1/
# Question

Describe in each case, one chemical test that would enable you to distinguish between the following pairs of chemicals. Describe what happens with each chemical or state 'no visible reaction'.

(i) Sodium chloride solution and sodium nitrate solution.
(ii) Sodium sulphate solution and sodium chloride solution.
(iii) Calcium nitrate solution and zinc nitrate solution.

## Solution

i) Sodium chloride solution and sodium nitrate solution

On adding conc. sulphuric acid (H2SO4) to sodium chloride solution, a colourless gas, HCl, evolves. Evolution of HCl gas can be confirmed by bringing a glass rod dipped in ammonium hydroxide solution near the mouth of the test tube. Dense white fumes of ammonium chloride will be formed.

$$2\mathrm{NaCl} + \mathrm{H_2SO_4} \to \mathrm{Na_2SO_4} + 2\mathrm{HCl} \quad\text{(colourless gas)}$$
$$\mathrm{HCl} + \mathrm{NH_4OH} \to \mathrm{NH_4Cl} + \mathrm{H_2O} \quad\text{(dense white fumes)}$$

On adding conc. sulphuric acid (H2SO4) to sodium nitrate solution, reddish brown fumes are observed.

$$2\mathrm{NaNO_3} + \mathrm{H_2SO_4} \to \mathrm{Na_2SO_4} + 2\mathrm{HNO_3} \quad\text{(reddish brown fumes)}$$

ii) Sodium sulphate solution and sodium chloride solution

On adding barium chloride solution to sodium sulphate solution, a white precipitate is formed. On the other hand, barium chloride will have no visible reaction with sodium chloride solution.
$$\mathrm{BaCl_2} + \mathrm{Na_2SO_4} \to \mathrm{BaSO_4}\downarrow + 2\mathrm{NaCl} \quad\text{(white precipitate)}$$
$$\mathrm{BaCl_2} + \mathrm{NaCl} \to \text{no reaction}$$

iii) Calcium nitrate solution and zinc nitrate solution

On adding sodium hydroxide to both calcium nitrate and zinc nitrate, white precipitates are formed. For calcium nitrate, the precipitate will be insoluble in excess sodium hydroxide. But for zinc nitrate, the precipitate will be soluble in excess sodium hydroxide.

$$\mathrm{Ca(NO_3)_2} + 2\mathrm{NaOH} \to \mathrm{Ca(OH)_2}\downarrow + 2\mathrm{NaNO_3} \quad\text{(white precipitate)}$$
$$\mathrm{Ca(OH)_2} + 2\mathrm{NaOH}\ (\text{excess}) \to \text{insoluble}$$
$$\mathrm{Zn(NO_3)_2} + 2\mathrm{NaOH} \to \mathrm{Zn(OH)_2}\downarrow + 2\mathrm{NaNO_3} \quad\text{(white precipitate)}$$
$$\mathrm{Zn(OH)_2} + 2\mathrm{NaOH}\ (\text{excess}) \to \mathrm{Na_2[Zn(OH)_4]} \quad\text{(soluble)}$$
2023-03-22 18:42:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7817698121070862, "perplexity": 12395.200164854758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.74/warc/CC-MAIN-20230322180852-20230322210852-00188.warc.gz"}
http://tex.stackexchange.com/tags/line-spacing/new
# Tag Info 0 Updating the packages algorithmicx and algorithms did not help, but indeed the problem was that on one machine one of the packages was outdated. After a thorough search/comparison of the package dates in the MiKTeX Package Manager on both machines, I found that on one system the package caption had a different date than the other. Uninstalling the older ... 6 First of all, you absolutely need to tell whoever is in charge of the project what you’re almost saying in the comments: what you’re asking to do here is the consequence of bad decisions that you shouldn’t have to deal with. All you’re doing here is working around these bad decisions because you can’t tackle the main issue. Now, as long as you’re aware of ... 1 The \xyR macro requires an argument, the desired spacing between rows. Here's how you should use it, with also a simpler approach: if you follow \xymatrix with @R=<dimen>, the given length will be used as the space between rows; you can also use \xymatrix@R+<dimen> \xymatrix@R-<dimen> that respectively add or subtract the given length ... 4 By default, set in /tex/plain/base/plain.tex (or latex.ltx), the glue value is set (equivalently) to \baselineskip=12pt. i.e. it has no stretch or shrink, the same goes for \lineskip=1pt. So, with the defaults, intra-baseline distances can't flex in the way described. Spacing can look uneven if the boxes get too close together and \lineskiplimit is ... 5 Use \linewidth for the width of the current environment (in this case, the current column). Use \textwidth for the width of the whole text block on the page. 1 the only way i know to do this globally without checking to see what words are extending into the margins is to use \sloppy, and you really don't want to do that. the resulting interword spacing is often terrible, and it can change previously "good" paragraphs. the next easiest method is to look into the log file for overfull hbox messages, determine what ... 
1 How about including those culprit words into a hyphenation list? Look here: http://www.forkosh.com/latex/ltx-244.html Have you tried adding \hyphenation{...} to your preamble? 0 Just another solution, since I run into the some error, but with different cause. Using the setspace usepackage is just fine, e.g. \usepackage[ onehalfspacing %doublespacing ]{setspace} But in combination with ClassisThesis the setspace usepackage must be set after the call of \usepackage{classicthesis}. Otherwise only page margins are changed, but not the ... 0 A near-to-clean hack, showing what could be done: \documentclass{article} \usepackage{setspace} \usepackage[para]{ednotes} \DeclareNewFootnote[para]{B} \usepackage{kantlipsum} \makeatletter \newcommand*{\nobaselinestretch}{% \let\baselinestretch\@empty} %% or what you prefer %% %% Rather than redefining \@footnotetext and \@xfloat, %% setspace could ... 1 An actually two-line dirty trick: \documentclass{article} \usepackage{setspace} \usepackage[para]{ednotes} \DeclareNewFootnote[para]{B} \usepackage{kantlipsum} \makeatletter %% May still be useful if there are many of them: \newenvironment{editspacing} {\linenumbers\begingroup\doublespacing} {\endlinenumbers\restore@spacing} ... 1 My most recent solution has been inspired by @corporal's deleted suggestion. \doublespacing is used for \noormalsize which may have a clearly limited scope and won't affect baseline skips of any other font sizes: \documentclass{article} \usepackage{setspace} \usepackage[para]{ednotes} \DeclareNewFootnote[para]{B} \usepackage{kantlipsum} ... 3 You can also set line spacing locally in a clean way as this: \documentclass{article} \usepackage{longtable} \usepackage{setspace} \begin{document} \begin{spacing}{.7} \begin{longtable}{ l | l | p{5 cm} } a & b & a small phrase \\ a & b & a small phrase \\ a & b & here is a long sentence which wraps to the next line, ... 
1 Never use \\ at the end of a paragraph and do not change the \parskip manually (see http://tex.stackexchange.com/a/14565/43317). You could load package parskip \documentclass{article} \usepackage{parskip} \begin{document} \section{Introduction} In a distributed database system, data is replicate of the most important advantages of replication is that it ... 3 \documentclass[12pt,a4paper]{article} \setlength{\parskip}{2\baselineskip} \begin{document} \section{Introduction} In a distributed database system, data is replicate of the most important advantages of replication is that it masks and tolerates failures in the network gracefully and increases availability. In case of multiple access a problem that ... 1 You can add the following to the top of your file: \documentclass[parskip=full]{scrartcl} This way, the spacing will automatically update based upon the font size you choose. Here is a MWE: \setlength{\parskip}{12pt} \begin{document} \section{Introduction} In a distributed database system, data is replicate of the most important ... 1 You can use \setlength{\parskip}{12pt} to achieve a custom \parskip in your whole document. Adjust the value to your needs and place the command inside your document-environment. 4 Use \strut: The code: \documentclass{report} \usepackage{siunitx} \usepackage{microtype,textcomp,textgreek,mathspec} \usepackage{xpatch} % can exclude etoolbox. xpatch loads it anyway, since egreg (xpatch author) extends etoolbox \makeatletter %http://zoonek.free.fr/LaTeX/LaTeX_samples_chapter/0.html \def\thickhrulefill{\leavevmode \leaders \hrule ... 1 Here is a complete solution with lualatex using directlua: \documentclass[]{article} \usepackage{fontspec} \setromanfont{Rockwell Extra Bold} \newcommand\distributed[1]{% \makebox[\linewidth][s]{% \directlua{ letters = {} for letter in string.gmatch("#1", ".") do if letter == " " then table.insert(letters, "{}") ... 
1 Replace spaces by {} (so a double space will appear) and put a space after each letter. Caveat Accented letters won't work. For that much more work is needed unless you use XeLaTeX or LuaLaTeX. \documentclass{article} \usepackage{xparse} \ExplSyntaxOn \NewDocumentCommand{\widen}{mm} { \tl_set:Nx \l_tmpa_tl { #2 } \tl_replace_all:Nnn \l_tmpa_tl { ~ } ... 1 Adapting my answer at How to repeat over all characters in a string?. I had to modify it to not do the added \hfill prior to the first character. As Werner points out, this approach cannot in general accept a macro as part of its argument. \documentclass{article} \usepackage{lipsum} \newcommand\chariterate[1]{\chariteratehelpX#1 \relax\relax} ... 1 Here is one option using LaTeX's \makebox with a stretched alignment: \documentclass{article} \usepackage[paper=a6paper]{geometry}% Just for this example \setlength{\parindent}{0pt}% Just for this example \begin{document} \sffamily \makebox[\linewidth][s]{\LARGE\bfseries S O M E T H I N G {} B I G} \makebox[\linewidth][s]{s o m e t h i n g {} s m a l l} ...
2015-08-05 00:19:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9560131430625916, "perplexity": 4617.575433514037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042992543.60/warc/CC-MAIN-20150728002312-00053-ip-10-236-191-2.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-9-4-9-6
# How do you simplify 9^-4 / 9^-6?

$$\frac{9^{-4}}{9^{-6}} = 9^{-4-(-6)} = 9^{2} = 81$$
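The quotient-of-powers rule used here can be checked exactly with Python's `fractions` module (floats would only give an approximate 81):

```python
from fractions import Fraction

# Quotient rule for powers: b^m / b^n = b^(m-n), verified exactly
lhs = Fraction(9) ** -4 / Fraction(9) ** -6
print(lhs)                      # 81
print(lhs == 9 ** (-4 - (-6)))  # True
```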
2020-02-17 22:31:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.492885947227478, "perplexity": 4277.874243368767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143373.18/warc/CC-MAIN-20200217205657-20200217235657-00321.warc.gz"}
https://tex.stackexchange.com/questions/339535/what-to-put-into-the-intersection-of-the-row-column-labels-of-a-table/339539
# What to put into the intersection of the row/column labels of a table?

I have a grid/table/matrix with separate labels for the rows and the columns. However, I also want to label these labels themselves (by indicating the "type" of the labels, such as SI units, etc.) Here is a MWE, with three variants (check the entry abc/def in the upper left position, describing the row labels abc and the column labels def): \documentclass{article} \begin{document} \begin{table} \caption{Forward slash.} $\begin{array}{c|ccccc} abc/def & 1 & 2 & 3 & 4 & 5\\ \hline 1 & a & b & c & d & e\\ 2 & f & g & h & i & j\\ 3 & k & l & m & n & o\\ \end{array}$ \end{table} \begin{table} \caption{Vertical bar.} $\begin{array}{c|ccccc} abc|def & 1 & 2 & 3 & 4 & 5\\ \hline 1 & a & b & c & d & e\\ 2 & f & g & h & i & j\\ 3 & k & l & m & n & o\\ \end{array}$ \end{table} \begin{table} \caption{Backslash.} $\begin{array}{c|ccccc} abc\backslash def & 1 & 2 & 3 & 4 & 5\\ \hline 1 & a & b & c & d & e\\ 2 & f & g & h & i & j\\ 3 & k & l & m & n & o\\ \end{array}$ \end{table} \end{document} However, this MWE suffers from (at least) the following two problems: the first column (labelling the rows) is too wide, and the centered labels are far from the core of the array itself (right-aligning could help, but... that introduces other problems). Second, the spacing around abc/def looks incorrect. The question is, basically, how should I typeset this array nicely? What should I put in the upper left position, in place of abc/def? I do not wish to put abc and def into two distinct (multi)columns. Maybe stacking abc above (or under?) def is a start, but then I would still need some kind of visual separator (taking the role of the \|/ symbols) within that cell. I should probably also clarify that both labels abc and def are in practice very short, typically I would write x/y or n/m or n/k, so single-character mathematical variables, and not lengthy texts.
## 6 Answers I'm going to suggest "none of the three possibilities". Instead, consider using a classic "tableau" setup, with a clear hierarchical structure in the header. Such a setup helps avoid creating the "cramped" look that's almost inevitable with any one of the three possibilities you've offered. \documentclass{article} \usepackage{booktabs}% for \toprule, \midrule, \bottomrule, and \cmidrule macros \usepackage{amsmath} % for \text macro \begin{document} \begin{table} \caption{Still another approach} $\begin{array}{@{}l*{5}{c}@{}} \toprule \text{abc} & \multicolumn{5}{c@{}}{\text{def}}\\ \cmidrule(l){2-6} & 1 & 2 & 3 & 4 & 5\\ \midrule 1 & a & b & c & d & e\\ 2 & f & g & h & i & j\\ 3 & k & l & m & n & o\\ \bottomrule \end{array}$ \end{table} \end{document} • Thank you for your quick answer. I personally would probably center abc vertically to reduce the large white ocean right below it, but others might disagree with this. I also know that using vertical lines in table in general discouraged, and this proposal avoids them nicely. – Matsmath Nov 16 '16 at 22:06 • @Matsmath - You're welcome. The idea behind placing the "abc" and "def" labels on the same row is to indicate that they're hierarchically equal. The material in the second header row is then easily seen as (a) being secondary and (b) belonging solely to "def". – Mico Nov 16 '16 at 22:13 • To whoever downvoted this answer (which I wrote more than 2 years ago) earlier today: Care to express what my answer provoked your displeasure? Don't be shy. 
– Mico Dec 3 '18 at 8:17 Here are two options: \documentclass{article} \usepackage{mathtools,eqparbox} \newcommand{\indices}[2]{{% \indices{<rows>}{<columns>} \begin{array}{@{}r@{}} \scriptstyle #2~\smash{\eqmakebox[ind]{$\scriptstyle\rightarrow$}} \\[-\jot] \scriptstyle #1~\smash{\eqmakebox[ind]{$\scriptstyle\downarrow$}} \end{array}}} \begin{document} $\begin{array}{c|ccccc} \indices{\text{abc}}{\text{def}} & 1 & 2 & 3 & 4 & 5 \\ \hline 1 & a & b & c & d & e \\ 2 & f & g & h & i & j \\ 3 & k & l & m & n & o \end{array}$ $\begin{array}{cc|ccccc} &\multicolumn{1}{c}{} & \multicolumn{5}{c}{\text{def}} \\ && 1 & 2 & 3 & 4 & 5 \\ \cline{2-7} & 1 & a & b & c & d & e \\ \smash{\rotatebox[origin=c]{90}{\text{abc}}} & 2 & f & g & h & i & j \\ & 3 & k & l & m & n & o \end{array}$ If you're interested in a matrix-like command, there are some examples at Where is the \matrix command?. This includes using \bordermatrix, kbordermatrix and blkarray, all of which allow you to place indices to identify the rows/columns. A tabular is used to create a minimal box containing the two labels. Putting it into a savebox gives you the dimensions. Finally, TikZ is used to draw a diagonal line from the upper left to lower right. \documentclass{article} \usepackage{tikz} \newsavebox{\tempbox} \begin{document} \savebox{\tempbox}{\begin{tabular}{@{}r@{}l@{\space}} &abc\\def \end{tabular}} \begin{table} \caption{Diagonal Split} $\begin{array}{c|ccccc} \tikz[overlay]{\draw (0pt,\ht\tempbox) -- (\wd\tempbox,-\dp\tempbox);}% \usebox{\tempbox}\hspace{\dimexpr 1pt-\tabcolsep} & 1 & 2 & 3 & 4 & 5\\ \hline 1 & a & b & c & d & e\\ 2 & f & g & h & i & j\\ 3 & k & l & m & n & o\\ \end{array}$ \end{table} \end{document} • Do I lose antialiasing with this method? – Matsmath Nov 24 '16 at 4:08 • If you go straight to PDF (pdflatex), no. Otherwise, it depends on resolution of the medium. – John Kormylo Nov 24 '16 at 4:13 • You could use diagbox package for this kind of cell.
– Leo Liu Nov 28 '16 at 11:35 It is also possible to use TikZ directly inside the tabular, needing no savebox at all to get the lenghts: \documentclass{article} \usepackage{tikz} \begin{document} \begin{table} \caption{Diagonal Split} $\begin{array}{c|ccccc} \tikz{\node[below left, inner sep=1pt] (def) {def};% \node[above right,inner sep=1pt] (abc) {abc};% \draw (def.north west|-abc.north west) -- (def.south east-|abc.south east);} & 1 & 2 & 3 & 4 & 5\\ \hline 1 & a & b & c & d & e\\ 2 & f & g & h & i & j\\ 3 & k & l & m & n & o\\ \end{array}$ \end{table} \end{document} Then, inside the \tikz command you have the TikZ power to do whatever... Here is a version with repositioned nodes, a very thin shortened division line and fonts in \footnotesize. \documentclass{article} \usepackage{tikz} \usetikzlibrary{positioning} \begin{document} \begin{table} \caption{Diagonal Split} $\begin{array}{c|ccccc} \tikz[diag text/.style={inner sep=0pt, font=\footnotesize}, shorten/.style={shorten <=#1,shorten >=#1}]{% \node[below left, diag text] (def) {def}; \node[above right=2pt and -2pt, diag text] (abc) {abc}; \draw[shorten=4pt, very thin] (def.north west|-abc.north west) -- (def.south east-|abc.south east);} & 1 & 2 & 3 & 4 & 5\\ \hline 1 & a & b & c & d & e\\ 2 & f & g & h & i & j\\ 3 & k & l & m & n & o\\ \end{array}$ \end{table} \end{document} Sometimes I use diagonal lines or black boxes: \documentclass{article} \usepackage{fp} \usepackage{graphicx} \newbox\MytempboxA \newbox\MytempboxB \newcommand\myTempA{} \newcommand\myTempB{} \newcommand\myTempC{} \newcommand\myTempD{} \begin{document} \begin{table}% \caption{Diagonal line approach}% $% \setbox\MytempboxA\hbox{\mbox{abc}}% \setbox\MytempboxB\hbox{\mbox{def}}% \begin{array}{@{}|r|*{5}{c}|} \hline \multicolumn{1}{|l}{% \edef\myTempA{% \number\numexpr\dimexpr\wd\MytempboxA+2\arraycolsep\relax\relax }% \edef\myTempB{% \number \numexpr \dimexpr\dp\csname @arstrutbox\endcsname+% \ht\csname @arstrutbox\endcsname+% 
\arrayrulewidth \relax \relax }% \FPpow\myTempC\myTempA{2}% \FPpow\myTempD\myTempB{2}% \FPadd\myTempC\myTempC\myTempD \FProot\myTempC\myTempC{2}% length of diagonal line in sp \FPdiv\myTempD\myTempB\myTempA \FParctan\myTempD\myTempD% angle of diagonal line in rad \smash{% \kern-\arraycolsep \rlap{% \lower \dimexpr \dp\csname @arstrutbox\endcsname+\arrayrulewidth \relax \hbox{% \rotatebox[units=-6.283185,origin=br]{\myTempD}{% \rule{\myTempC sp}{\arrayrulewidth}% }% }% }% }% }&\multicolumn{5}{c|}{\copy\MytempboxB}\\% \cline{2-6}% \copy\MytempboxA& 1 & 2 & 3 & 4 & 5\\% \hline 1 & a & b & c & d & e\\% 2 & f & g & h & i & j\\% 3 & k & l & m & n & o\\% \hline \end{array}%$% \end{table} \begin{table} \caption{Black box approach}% \setbox\MytempboxA\hbox{\mbox{abc}}% \setbox\MytempboxB\hbox{\mbox{def}}% $% \begin{array}{@{}|r|*{5}{c}|}% \hline \multicolumn{1}{|l|}{% \smash{% \kern-\arraycolsep \rlap{% \rule[{-\dp\csname @arstrutbox\endcsname}]% {\dimexpr\wd\MytempboxA+2\arraycolsep\relax}% {% \dimexpr \dp\csname @arstrutbox\endcsname+% \ht\csname @arstrutbox\endcsname \relax }% }% }% }&\multicolumn{5}{c|}{\copy\MytempboxB}\\ \hline \copy\MytempboxA& 1 & 2 & 3 & 4 & 5\\% \hline 1 & a & b & c & d & e\\% 2 & f & g & h & i & j\\% 3 & k & l & m & n & o\\% \hline \end{array}%$% \end{table} \begin{table}% \caption{Tangram puzzle approach}% $% \setbox\MytempboxA\hbox{\mbox{abc}}% \setbox\MytempboxB\hbox{\mbox{def}}% \begin{array}{@{}|r|*{5}{c}|} \hline \multicolumn{1}{|l}{% \edef\myTempA{% \number\numexpr\dimexpr\wd\MytempboxA+2\arraycolsep\relax\relax }% \edef\myTempB{% \number \numexpr \dimexpr\dp\csname @arstrutbox\endcsname+% \ht\csname @arstrutbox\endcsname+% \arrayrulewidth \relax \relax }% \FPpow\myTempC\myTempA{2}% \FPpow\myTempD\myTempB{2}% \FPadd\myTempC\myTempC\myTempD \FProot\myTempC\myTempC{2}% length of diagonal line in sp \FPdiv\myTempD\myTempB\myTempA \FParctan\myTempD\myTempD% angle of diagonal line in rad \smash{% \kern-\arraycolsep \rlap{% \lower \dimexpr 
\dp\csname @arstrutbox\endcsname+\arrayrulewidth \relax \hbox{% \rotatebox[units=-6.283185,origin=br]{\myTempD}{% \rule{\myTempC sp}{\arrayrulewidth}% }% }% }% }% }&\multicolumn{5}{c|}{\copy\MytempboxB}\\% \copy\MytempboxA& 1 & 2 & 3 & 4 & 5\\% \cline{2-6}% 1 & a & b & c & d & e\\% 2 & f & g & h & i & j\\% 3 & k & l & m & n & o\\% \hline \end{array}%$% \end{table} \begin{table} \caption{squares approach}% \setbox\MytempboxA\hbox{\mbox{abc}}% \setbox\MytempboxB\hbox{\mbox{def}}% $% \begin{array}{@{}|r|*{5}{c}|}% \hline \multicolumn{1}{|l|}{}&\multicolumn{5}{c|}{\copy\MytempboxB}\\ \copy\MytempboxA& 1 & 2 & 3 & 4 & 5\\% \cline{2-6}% 1 & a & b & c & d & e\\% 2 & f & g & h & i & j\\% 3 & k & l & m & n & o\\% \hline \end{array}%$% \end{table} \end{document} And sometimes I use something like angle arrows: \documentclass{article} \usepackage{tikz} \newbox\MytempboxA \newbox\MytempboxB \newbox\MytempboxC \newcommand\Upbox[1]{% \lower\dimexpr-\ht\MytempboxA+\ht\MytempboxB\relax\hbox{#1}% %#1% }% \newcommand\leftbox[1]{% \hbox to\wd\MytempboxC{\hss#1\hss}\hbox to\wd\MytempboxB{\hfill}% }% \begin{document} \begin{table}% \caption{Angle arrow.}% \setbox\MytempboxB=\hbox{\mbox{def}}% \setbox\MytempboxC=\hbox{\mbox{abc}}% \setbox\MytempboxA=\hbox{\lower\dp\MytempboxC\vbox{% \hbox{% \begin{tikzpicture}[x=.25cm, y=.25cm, inner sep=0pt] \draw[->,thin] (0,0) -- (1,0) node[right]{\copy\MytempboxB}; \draw[->,thin] (0,0) -- (0,-1) node[below]{\copy\MytempboxC}; \end{tikzpicture}% }% }}% $% \begin{array}{c|ccccc}% \copy\MytempboxA&\Upbox{1}&\Upbox{2}&\Upbox{3}&\Upbox{4}&\Upbox{5}\\% \hline \leftbox{1}& a & b & c & d & e\\% \leftbox{2} & f & g & h & i & j\\% \leftbox{3} & k & l & m & n & o\\% \end{array}%$ \end{table} \begin{table} \caption{Another angle arrow.}% \setbox\MytempboxB=\hbox{\mbox{def}}% \setbox\MytempboxC=\hbox{\mbox{abc}}% $% \begin{array}{c@{}c|ccccc}% \smash{% \hbox{% \kern.5\wd\MytempboxC \lower.75\ht\MytempboxB \hbox{% \begin{tikzpicture}[x=.25cm, y=.25cm, inner 
sep=0pt] \draw[->,thin] (0,0) -- (1,0) node[right]{}; \draw[->,thin] (0,0) -- (0,-1) node[below]{}; \end{tikzpicture}% }% }% }&\copy\MytempboxB&1&2&3&4&5\\% \copy\MytempboxC&&\\% \hline 1&& a & b & c & d & e\\% 2&& f & g & h & i & j\\% 3&& k & l & m & n & o\\% \end{array}%$% \end{table} \end{document} You can also create the look and feel of coordinate system axes with horizontal and vertical lines of an array: \documentclass{article} \usepackage{tikz} \newbox\MytempboxA \newbox\MytempboxB \newbox\MytempboxC \begin{document} \begin{table}% \caption{Coordinate axes approach.}% \setbox\MytempboxC=\hbox{\mbox{abc}}% \setbox\MytempboxB=\hbox{\mbox{def}}% \setbox\MytempboxA\hbox{% \lower\arrayrulewidth\hbox{% \begin{tikzpicture}[x=.25cm, y=.05cm, inner sep=0pt]% \draw[->,line width=\arrayrulewidth] (0,1) -- (1,1) node[right]{}; \end{tikzpicture}% }% }% $% \begin{array}{c|cccccl}% &1&2&3&4&5&\\% \cline{1-6}% 1& a & b & c & d & e&% \kern\dimexpr-\arraycolsep-.5\arrayrulewidth\relax\null \smash{% \lower\dimexpr-\ht\csname @arstrutbox\endcsname+.75\arrayrulewidth+0pt\relax\copy\MytempboxA \lower\dimexpr-\ht\csname @arstrutbox\endcsname+0.5ex+.75\arrayrulewidth\relax\copy\MytempboxB }% \\% 2 & f & g & h & i & j&\\% 3 & k & l & m & n & o&\\% \multicolumn{1}{r}{}&% \multicolumn{6}{l}{% \kern\dimexpr-.5\wd\MytempboxC-\arraycolsep\relax\null \vbox{% \kern\dimexpr-.5\arrayrulewidth\relax\hbox{% \begin{tikzpicture}[x=.25cm, y=.075cm, inner sep=0pt]% \draw[->,line width=\arrayrulewidth] (0,1) -- (0,-1) node[below]{\copy\MytempboxC}; \end{tikzpicture}% }% }% }% \\% \end{array}%$ \end{table} \end{document} Many excellent answers has already been proposed. Actually, I ended up using a subscript and superscript based solution. The negative spacing required some fine-tuning. The advantage of this approach is that it does not rely on external packages (and hence journal-friendly). The disadvantage is that two different font sizes are used, which is, perhaps, typographically discouraged. 
\documentclass{article} \begin{document} \begin{table} \caption{Scripts.} $\begin{array}{c|ccccc} {}_{abc}\mkern-6mu\setminus\mkern-6mu{}^{def} & 1 & 2 & 3 & 4 & 5\\ \hline 1 & a & b & c & d & e\\ 2 & f & g & h & i & j\\ 3 & k & l & m & n & o\\ \end{array}$ \end{table} \end{document}
2019-08-24 02:15:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9700074791908264, "perplexity": 1794.3421997472697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319470.94/warc/CC-MAIN-20190824020840-20190824042840-00274.warc.gz"}
https://bernardklima.cz/i6xls8h/percent-composition-problems-38b284
Follow rounding directions. The resultant decimal will then be multiplied by 100% to yield the answer. If you hit a problem that just doesn't seem to be working out, go back and re-calculate with more precise atomic weights.

Calculate the percent by mass of each element present in carbon tetrachloride (CCl4). A solution of salt and water is 33.0% salt by mass and has a density of 1.50 g/mL.

Calculating Percentage Composition. 1) A student earned a grade of 80% on a math test that had 20 problems. The molar mass of water is 18 g/mol, and oxygen has a molar mass of 16 g/mol.

Percent Composition Worksheet - Solutions. Find the percent compositions of all of the elements in the following compounds:
1) CuBr2, copper(II) bromide: Cu 28.45%, Br 71.55%
2) NaOH, sodium hydroxide: Na 57.48%, O 39.99%, H 2.53%
3) (NH4)2S, ammonium sulfide: N 41.1%, H 11.8%, S 47.1%
4) N2S2, dinitrogen disulfide: N 30.4%, S 69.6%
5) KMnO4, potassium permanganate

For instance, the percent composition of oxygen in water is 89%. The percent composition of any compound expresses its composition in terms of all the elements present.

2. Solution: 1) Determine the molar mass of CH4: (4)(1.008 g) + (1)(12.011 g) = 16.043 g. 2) Determine the mass of the four moles of hydrogen present in one mole of methane. 16 divided by 18 is 0.89. H: 3.09%, P: 31.61%, O: 65.31%. You can find the empirical formula of a compound using percent composition data.

Percent Composition Problems. Regarding percent composition, the term simply means the amount of a given element in a compound, and the calculation is based on the atomic weight of the element divided by the molar mass of the compound. Calculate the percentage composition of magnesium carbonate, MgCO3. One must know the molar masses of the elements and the compound in order to get percent composition.
There are times when using 12.011 or 1.008 will be necessary. The easiest way to find the formula is:

Practice Problems: Chemical Formulas (Answer Key). H3PO4, phosphoric acid, is used in detergents, fertilizers, toothpastes and flavoring in carbonated beverages. Calculate the percent by mass of each element in cesium fluoride (CsF). The formula for percentage composition: calculate the percent composition by mass, to two decimal places, of H, P and O in this compound. A periodic table is necessary to complete the questions. Determining the mass percent of the elements in a compound is useful for finding the empirical and molecular formulas of the compound. A sample of cisplatin is 65.02% platinum, … If you know the total molar mass of the compound, the molecular formula usually can be determined as well. Determine its empirical formula. Multiply this by 100 and you get 89%.

Percent Word Problems Handout (Revised @2009 MLC, page 3 of 8). Percent Word Problems Directions: Set up a basic percent problem. 1. [Download the accompanying PDF worksheet.] Thus, it helps in chemical analysis of the given compound. These problems, however, are fairly uncommon. The answers appear after the final question. The percentage composition of a given element is expressed using the following formula: $$\%C_{E}=\frac{g^{E}}{g^{T}}\times 100$$

Formula mass of magnesium carbonate: 24.31 g + 12.01 g + 3(16.00 g) = 84.32 g.

Steps for Finding the Empirical Formula. Example #2: Calculate the percent composition of methane, CH4. A bright orange, crystalline substance is analyzed and determined to have the following mass percentages: 17.5% Na, 39.7% Cr, and 42.8% O. These problems will follow the same pattern of difficulty as those of density. Answers and solutions start on page 6. This collection of ten chemistry test questions deals with calculating and using mass percent. Generally speaking, in empirical formula problems, C = 12, H = 1, O = 16 and S = 32 are sufficient.
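Since the percent-composition recipe above is just "atomic weight × atom count ÷ molar mass × 100" for each element, the worksheet answers are easy to check with a short script. A minimal Python sketch, using the same atomic weights the text works with:

```python
# Standard atomic weights, as used in the worksheet examples.
ATOMIC_WEIGHT = {"H": 1.008, "C": 12.011, "O": 15.999, "Cl": 35.453}

def percent_composition(formula):
    """formula: dict of element -> atom count, e.g. {'C': 1, 'H': 4} for CH4.
    Returns each element's percent of the compound's molar mass."""
    molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())
    return {el: 100 * ATOMIC_WEIGHT[el] * n / molar_mass
            for el, n in formula.items()}

# Example #2 from the text: methane, CH4 (molar mass 16.043 g/mol).
ch4 = percent_composition({"C": 1, "H": 4})
print({el: round(p, 2) for el, p in ch4.items()})  # {'C': 74.87, 'H': 25.13}

# Water: oxygen is about 89% by mass (16/18 = 0.89 in the text's rounding).
h2o = percent_composition({"H": 2, "O": 1})
```

The percentages always sum to 100, which is a convenient sanity check when working through the worksheet by hand.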
https://plainmath.net/force-motion-and-energy/100539-what-would-increase-the-force
Jazlyn Durham 2022-12-30 What would increase the force of gravity between two objects? Expert The force is directly proportional to the product of the two masses and inversely proportional to the square of the distance between them: increasing either mass, or decreasing the distance, increases the force, while increasing the distance makes gravity decrease. Newton's law of gravitation governs this.
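A quick numerical check of Newton's law, F = G·m1·m2/r², makes the two effects concrete (a sketch; the masses and distance below are arbitrary illustrative values):

```python
G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def gravity(m1, m2, r):
    """Newton's law of gravitation: attractive force between two point masses."""
    return G * m1 * m2 / r**2

f = gravity(1000.0, 2000.0, 10.0)
# Doubling either mass doubles the force...
assert gravity(2000.0, 2000.0, 10.0) == 2 * f
# ...while doubling the distance cuts the force to a quarter.
assert gravity(1000.0, 2000.0, 20.0) == f / 4
```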
https://indico.cern.ch/event/349459/contributions/822806/
"ROOT Turns 20" Users' Workshop, 15-18 September 2015, Hotel Schweizerhof, Saas-Fee, Europe/Zurich timezone

Object oriented data analysis at the BGO–OD experiment
18 Sep 2015, 15:00, 20m, Hotel Schweizerhof, Haltenstrasse 10, Saas-Fee
Presentation

Speaker: Oliver Freyermuth (Universitaet Bonn (DE))

Description
The BGO–OD experiment at the ELSA accelerator facility in Bonn is built for the systematic investigation of meson photoproduction in the GeV region. It uniquely combines a central, highly segmented BGO crystal calorimeter covering almost $4\pi$ in acceptance and a forward magnetic spectrometer complemented by time-of-flight walls. Object orientation has been a requirement from the beginning to handle the diverse set of detectors involved. As a consequence, from the assembly of the event-based data during acquisition up to the level of physics analysis, ROOT-based data structures are in heavy use. All analysis steps are performed with the ROOT-based framework ExPlORA, which is optionally complemented by an event generator, Geant4, Geant-VMC, VGM and Genfit2 for Monte Carlo studies, geometry description and track fitting. ExPlORA follows the principles of a plugin- and container-based data analysis. It offers both a consistent interface structure for plugin development in C++ and a versatile, performant XML-based configuration language which abstracts all steps from filtering the data up to visualization with histograms or an event display. The very portable analysis software can interface with several SQL databases, is subject to continuous testing, and supports the developer with a large set of customized warnings facilitated by the reflection mechanisms offered by ROOT. Successive analysis and levels of data reduction are facilitated by making use of persistent references and a custom pruning procedure.
The framework is complemented by a set of Qt-ROOT based applications for specialized simulations and data calibrations. Primary author Oliver Freyermuth (Universitaet Bonn (DE))
https://www.gamedev.net/forums/topic/325481-how-to-export-symbols-from-a-unix-shared-library/
# How to Export Symbols from a Unix Shared Library?

## Recommended Posts

The title pretty much says it all. I see how to find and get the addresses of symbols at run time, but I don't know how to specify which symbols are to be exported, and what name to identify them with.

As far as I'm aware, normally most non-static global variables and functions will be accessible in a shared library. The thing to remember is that if you use C++, you need to be concerned with name mangling (this is true on win32 also, of course). Name mangling is where

void foo(void) { }

gets changed into foo__Fv or something similar. You can't easily work out what the name will be, so instead, use extern "C" when declaring your function (you don't need to do anything special when defining it).

extern "C" {
    void foo();
}

// .. somewhere else
void foo() {
    std::cout << "Hello, world" << std::endl;
}

Then you can use

g++ -shared -o myplugin.so myplugin.cpp

And something like

void * dlhandle = dlopen("./myplugin.so", RTLD_NOW);
foo = (footype) dlsym(dlhandle, "foo");
foo();

Declare footype to be the type of your function as a function pointer. If you're wondering how you declare a class in your main program (exe) and then subclass it, overriding virtual functions in your .so, then the answer is, definitely, that you should make a "factory" function (with C linkage as above) which calls new(thing) and returns a pointer. You can then cast the pointer to the base class and you're done. However, this typically only works if you use -Wl,--export-dynamic when linking your main exe, because this enables you to call functions in your exe from your dll (for example, the base class's constructor). You'll need this even if you don't have explicit calls back.

Mark
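As a side note, the same dlopen()/dlsym() machinery is exposed in Python through the standard ctypes module, which is a quick way to confirm that a symbol was exported un-mangled from a freshly built .so. A sketch, assuming a Linux system — libm.so.6 stands in here for myplugin.so:

```python
import ctypes

# ctypes.CDLL wraps dlopen(); attribute lookup on the handle wraps dlsym().
# If the symbol were C++-mangled, the lookup would fail with AttributeError.
libm = ctypes.CDLL("libm.so.6")

# Declare the signature, playing the same role as footype in the C++ snippet.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```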
https://newproxylists.com/tag/smaller/
## Complexity Theory – Master Theorem: when $f(n)$ is smaller or greater than $n^{\log_b a}$ by less than a polynomial factor

I was reviewing the master theorem at https://brilliant.org/wiki/master-theorem/ and I was trying to solve a question. Which of the following grows most rapidly asymptotically?

(a) $T(n) = 4T(n/2) + 10n$
(b) $T(n) = 8T(n/3) + 24n^2$
(c) $T(n) = 16T(n/4) + 10n^2$
(d) $T(n) = 25T(n/5) + 20(n\log n)^{1.99}$
(e) They are all asymptotically identical

My calculation says (a) is $\Theta(n^2)$, (b) is $\Theta(n^2)$, and (c) is $\Theta(n^2 \log n)$. Now, how can I evaluate (d)? If $f(n)$ is smaller or greater than $n^{\log_b a}$ by less than a polynomial factor, how can I solve $T(n)$?

## Is it a problem if , and are smaller than the normal text?

On many websites and website templates, the font size of ``` ```

## performance – Bullet Physics stepSimulation is really slow when a TriangleMeshShape is inside a BoxShape

I've observed that stepSimulation in Bullet Physics is very, very slow when a rigid body with a TriangleMeshShape is entirely inside a rigid body with a simple BoxShape. I noticed that when I move the triangle-mesh object a little outside the box-shaped one, my simulation starts to run as fast as usual. How can I improve the performance of this scenario? At some point in my simulation, I have to test that all triangle shapes are entirely contained in a bounding box. Until now, I have had to do the test on demand and cannot keep the shapes in the simulation the whole time. I've tried using ghost objects, but this has not helped performance. I suspect the problem lies in the internal operation of Bullet Physics and how DynamicsWorld performs collision tests. Does anyone have an idea of how to improve the performance of many small triangles inside a big box? All my shapes are kinematic.

## How to divide a FLAC file into a lot of smaller ones?

This is not music and there is no cue file.
The size of the FLAC file can be several GB. I want to break down this large FLAC file into several smaller files, based on a file-size setting, using a Linux terminal command. Thank you for your help!

## email – The Gmail app on iOS does not accept font sizes smaller than 12px for an emailer

How can I add something specifically to fix my iOS problem in the Gmail app? My is below

<!-- }
body { margin: 0 !important; background-color: #ffffff; }
table { border-spacing: 0; }
td { }
img { border: 0; }

STAY FOR 3 NIGHTS, PAY FOR 2. At The Leela Raviz Kovalam: book your stay for 3 nights and pay only for 2. Also enjoy free daily breakfast and a host of other services. Book online at www.theleela.com or by calling The Leela Reservations Worldwide*. BOOK NOW
* The Leela Reservations Worldwide (Toll Free): India 1 800 1031 444 | USA 8556 703 444 | UK 08000 261 111 | Hong Kong 800 906 444 | Singapore 1800 223 4444 | Others +91 124 4425 444 | Email: [email protected]

I have a repo on an old SVN server running on RHL9. The version of svn is 1.1.4. The repo is 1.1 GB (`du -sh $REPO`); its total size is 1.7 GB. I load the dump onto a recent svn server under Ubuntu 16.04, svn version 1.9.3. I run the following command:

`svnadmin load --bypass-prop-validation -q "/path/to/repo.svn" < "/path/to/repo.dump"`

Now the repo is only 412 MB (`du -sh`). I only administer the server; I do not use svn myself. When I look at the repo logs in TortoiseSVN, it looks like all the revisions and documents are there (impossible to check all manually: 3733 revs). But I do not know how to check whether this size difference has caused data loss. How can I tell? (Tortoise or server CLI; I'm root.) Does this difference in size shock you?

## Lens design – What allows the Canon RF 70-200 f/2.8 to be much smaller than the EF version?

It is possible to create an image using a single-element lens. Sorry to report, the resulting images will be second-rate.
This is because all lenses suffer from aberrations that degrade the image. Opticians reduce aberrations by combining many lens elements. Some are positive (convex) and some negative (concave) in power. In addition, some are cemented together; others are spaced apart. All this is needed to mitigate the aberrations. Nevertheless, residual aberrations still remain.

If the camera were equipped with a single-element lens and focused on a distant view, we could take a measurement from the center of the lens to the plane of the image. This value is the focal length. In a complex lens array, it is more difficult to find the point from which to take this measurement. The point we must find is called the rear nodal point. Opticians can and do move the position of the rear nodal point.

Now, a long lens is a lens that has a long focal length. The longer the focal length, the higher the magnification. A long lens is very desirable if you shoot sports or wildlife or the like. However, you might find a long lens a bit awkward. Opticians have a trick up their sleeve that physically shortens the lens barrel. This is accomplished by moving the rear nodal point forward. If the optician wishes, a complex set of lens elements can be constructed so that the rear nodal point falls in the air, in front of the front element. Remember that the focal length is a measurement taken from the rear nodal point to the image plane. The advantage of such a design is a shorter, less unwieldy barrel. Let me add that a true telephoto design differs from a long lens in that the telephoto's barrel is shortened relative to its focal length.

In addition, be aware that short wide-angle lenses often place the rear lens group too close to the plane of the image. If true, there is no room for the mirror mechanism of the SLR. The optician, desiring more back focus distance, will move the rear nodal point backwards.

## design – Is splitting a potentially monolithic application into smaller ones a way to avoid bugs?

Yes.
Generally, two smaller, less complex applications are much easier to manage than a single large one. However, you get a new type of bug when applications work together to achieve a goal. In order to make them work together, they have to exchange messages, and this orchestration can go wrong in different ways, even if each application works perfectly. Having a million tiny apps has its own particular problems. A monolithic application is really the default option that you end up with when you add more and more features to a single application. This is the easiest approach when you consider each feature separately. It's only once it has grown that you can look at it and say "you know what, it would work better if we separated X and Y".

## Is it possible to have a Hausdorff dimension smaller than the topological dimension?

"Normal" geometric shapes have Hausdorff dimensions equal to their topological dimensions. Mandelbrot defined fractals as shapes with a Hausdorff dimension greater than their topological dimension. Is there a class of shapes with a Hausdorff dimension smaller than the topological dimension, or is that impossible? If there is such a shape, what are the most common examples? If not, why is it impossible?

## Dribbble shot seems to be smaller

My shots seem to look smaller despite being the perfect size. Is there a reason for this?
http://www.ams.org/mathscinet-getitem?mr=1356164
MathSciNet bibliographic data MR1356164 55R35 (20J06 55P15) Martino, John; Priddy, Stewart. Unstable homotopy classification of $BG_p^{\wedge}$. Math. Proc. Cambridge Philos. Soc. 119 (1996), no. 1, 119–137. Article. For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
http://www.jaredlander.com/
The other night I attended a talk about the history of Brooklyn pizza at the Brooklyn Historical Society by Scott Wiener of Scott's Pizza Tours. Toward the end, a woman stated she had a theory that pizza slice prices stay in rough lockstep with New York City subway fares. Of course, this is a well known relationship that even has its own Wikipedia entry, so Scott referred her to a New York Times article from 1995 that mentioned the phenomenon. However, he wondered if the preponderance of dollar slice shops has dropped the price of a slice below that of the subway, and playfully joked that he wished there was a statistician in the audience.

Naturally, that night I set off to calculate the current price of a slice in New York City using listings from MenuPages. I used R's XML package to pull the menus for over 1,800 places tagged as "Pizza" in Manhattan, Brooklyn and Queens (there was no data for Staten Island or The Bronx) and find the price of a cheese slice. After cleaning up the data and doing my best to find prices for just cheese/plain/regular slices, I found that the mean price was $2.33 with a standard deviation of $0.52 and a median price of $2.45. The base subway fare is $2.50 but is actually $2.38 after the 5% bonus for putting at least $5 on a MetroCard.

So, even with the proliferation of dollar slice joints, the average slice of pizza ($2.33) lines up pretty nicely with the cost of a subway ride ($2.38).

Taking it a step further, I broke down the price of a slice in Manhattan, Queens and Brooklyn. The vertical lines represent the price of a subway ride with and without the bonus. We see that the price of a slice in Manhattan is perfectly in line with the subway fare. MenuPages even broke down Queens neighborhoods, so we can have a more specific plot.

After two years of writing and editing and proof reading and checking my book, R for Everyone is finally out!
There are so many people who helped me along the way, especially my editor Debra Williams, production editor Caroline Senay and the man who recruited me to write it in the first place, Paul Dix. Even more people helped throughout the long process, but with so many to mention I'll leave that to the acknowledgements page. Online resources for the book are available (http://www.jaredlander.com/r-for-everyone/) and will continue to be updated. As of now the three major sites to purchase the book are Amazon, Barnes & Noble (available in stores January 3rd) and InformIT. And of course digital versions are available.

A friend recently posted the following problem: There are 10 green balls, 20 red balls, and 25 blue balls in a jar. I choose a ball at random. If I choose a green ball then I take out all the green balls, if I choose a red ball then I take out all the red balls, and if I choose a blue ball then I take out all the blue balls. What is the probability that I will choose a red ball on my second try?

The math works out fairly easily. It's the probability of first drawing a green ball AND then drawing a red ball, OR the probability of first drawing a blue ball AND then drawing a red ball.

$\frac{10}{10+20+25} * \frac{20}{20+25} + \frac{25}{10+20+25} * \frac{20}{10+20} = 0.3838$

But I always prefer simulations over probability theory, so let's break out the R code like we did for the Monty Hall Problem and calculating lottery odds. The results are after the break.

For a d3 bar plot visit http://www.jaredlander.com/plots/PizzaPollPlot.html. I finally compiled the data from all the pizza polling I've been doing at the New York R meetups. The data are available as json at http://www.jaredlander.com/data/PizzaPollData.php. This is easy enough to plot in R using ggplot2.
require(rjson)
require(plyr)
pizzaJson <- fromJSON(file = "http://jaredlander.com/data/PizzaPollData.php")
pizza <- ldply(pizzaJson, as.data.frame)

## polla_qid Answer Votes pollq_id Question
## 1 2 Excellent 0 2 How was Pizza Mercato?
## 2 2 Good 6 2 How was Pizza Mercato?
## 3 2 Average 4 2 How was Pizza Mercato?
## 4 2 Poor 1 2 How was Pizza Mercato?
## 5 2 Never Again 2 2 How was Pizza Mercato?
## 6 3 Excellent 1 3 How was Maffei's Pizza?

## 1 Pizza Mercato 1.344e+09 13 0.0000
## 2 Pizza Mercato 1.344e+09 13 0.4615
## 3 Pizza Mercato 1.344e+09 13 0.3077
## 4 Pizza Mercato 1.344e+09 13 0.0769
## 5 Pizza Mercato 1.344e+09 13 0.1538
## 6 Maffei's Pizza 1.348e+09 7 0.1429

require(ggplot2)
ggplot(pizza, aes(x = Place, y = Percent, group = Answer, color = Answer)) +
    geom_line() +
    theme(axis.text.x = element_text(angle = 46, hjust = 1), legend.position = "bottom") +
    labs(x = "Pizza Place", title = "Pizza Poll Results")

But given this is live data that will change as more polls are added, I thought it best to use a plot that automatically updates and is interactive. So this gave me my first chance to need rCharts by Ramnath Vaidyanathan as seen at October's meetup.

require(rCharts)
pizzaPlot <- nPlot(Percent ~ Place, data = pizza, type = "multiBarChart", group = "Answer")
pizzaPlot$xAxis(axisLabel = "Pizza Place", rotateLabels = -45)
pizzaPlot$yAxis(axisLabel = "Percent")
pizzaPlot$chart(reduceXTicks = FALSE)
pizzaPlot$print("chart1", include_assets = TRUE)

Unfortunately I cannot figure out how to insert this in WordPress, so please see the chart at http://www.jaredlander.com/plots/PizzaPollPlot.html. Or see the badly sized one below. There are still a lot of things I am learning, including how to use a categorical x-axis natively on line charts and how to insert chart titles. I found a workaround for the categorical x-axis by using tickFormat, but that is not pretty. I also would like to find a way to quickly switch between a line chart and a bar chart.
Fitting more labels onto the x-axis or perhaps adding a scroll bar would be nice too.

Attending this week's Strata conference, it was easy to see quite how prolific the NYC Data Mafia is when it comes to writing. Some of the found books: And, of course, my book will be out soon to join them. We are fighting the large complex data war on many fronts, from theoretical statistics to distributed computing to our own large complex datasets. So time is tight.

The wonderful people at Gilt are having me teach an introductory course on R this Friday. The class starts with the very basics, such as variable types, vectors, data.frames and matrices. After that we explore munging data with aggregate, plyr and reshape2. Once the data is prepared we will use ggplot2 to visualize it and then fit models using lm, glm and decision trees. Most of the material comes from my upcoming book, R for Everyone. Participants are encouraged to bring computers so they can code along with the live examples. They should also have R and RStudio preinstalled.

Michael Malecki recently shared a link to a Business Insider article that discussed the Monty Hall Problem. The problem starts with three doors, one of which has a car and two of which have goats. You choose one door at random, and then the host reveals one door (not the one you chose) that holds a goat. You can then choose to stick with your door or choose the third, remaining door. Probability theory states that people who switch win the car two-thirds of the time and those who don't switch only win one-third of the time. But people often still do not believe they should switch based on the probability argument alone. So let's run some simulations. This function randomly assigns goats and cars behind three doors, chooses a door at random, reveals a goat door, then either switches doors or does not.
monty <- function(switch = TRUE)
{
    # randomly assign goats and cars
    doors <- sample(x = c("Car", "Goat", "Goat"), size = 3, replace = FALSE)
    # randomly choose a door
    doorChoice <- sample(1:3, size = 1)
    # get goat doors
    goatDoors <- which(doors == "Goat")
    # show a door with a goat
    goatDoor <- goatDoors[which(goatDoors != doorChoice)][1]
    if(switch)
    {
        # if we are switching, choose the other remaining door
        return(doors[-c(doorChoice, goatDoor)])
    } else
    {
        # otherwise keep the current door
        return(doors[doorChoice])
    }
}

Now we simulate switching 10,000 times and not switching 10,000 times.

withSwitching <- replicate(n = 10000, expr = monty(switch = TRUE), simplify = TRUE)
withoutSwitching <- replicate(n = 10000, expr = monty(switch = FALSE), simplify = TRUE)

head(withSwitching)
## [1] "Goat" "Car" "Car" "Goat" "Car" "Goat"
head(withoutSwitching)
## [1] "Goat" "Car" "Car" "Car" "Car" "Car"
mean(withSwitching == "Car")
## [1] 0.6678
mean(withoutSwitching == "Car")
## [1] 0.3408

Plotting the results really shows the difference.

require(ggplot2)
## Loading required package: ggplot2
require(scales)
## Loading required package: scales

qplot(withSwitching, geom = "bar", fill = withSwitching) +
    scale_fill_manual("Prize", values = c(Car = muted("blue"), Goat = "orange")) +
    xlab("Switch") + ggtitle("Monty Hall with Switching")
qplot(withoutSwitching, geom = "bar", fill = withoutSwitching) +
    scale_fill_manual("Prize", values = c(Car = muted("blue"), Goat = "orange")) +
    xlab("Don't Switch") + ggtitle("Monty Hall without Switching")

(How are these colors? I'm trying out some new combinations.)

This clearly shows that switching is the best strategy. The New York Times has a nice simulator that lets you play with actual doors.
https://math.stackexchange.com/questions/509674/prime-trios-of-the-form-p-p2-p4?noredirect=1
# Prime trios of the form $p$, $p+2$, $p+4$

Here is the question: Prove that there is only one prime trio of the form $p$, $p+2$, $p+4$. I mean, we can only find the primes 3, 5, 7 that satisfy the condition.

• Do you mean prime "trios"? That is a word similar to "pairs" for three objects. – Ross Millikan Sep 30 '13 at 21:37
• Yes, Ross, I mean "prime trios". – juliet Oct 1 '13 at 13:28

Let $n$ be an integer. Then one of $n$, $n+2$, $n+4$ is divisible by $3$. For the remainder when $n$ is divided by $3$ is $0$, $1$, or $2$. If it is $0$, then $n$ is divisible by $3$. If it is $1$, then $n+2$ is divisible by $3$. If it is $2$, then $n+4$ is divisible by $3$. A number $\gt 3$ which is divisible by $3$ cannot be prime.

Now suppose that $n,n+2,n+4$ are all prime. Then $n$ cannot be $1$, since $1$ is not prime. Also, $n$ cannot be $2$, since $4$ (and $6$) are not prime. Certainly $n$ can be $3$, giving the primes $3,5,7$ of the question. And $n$ cannot be greater than $3$, for then all of $n,n+2,n+4$ are $\gt 3$, and one of them is divisible by $3$, so not prime.
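The divisibility argument is easy to confirm by brute force; here is a small Python sketch (not part of the original answer, added purely as a check):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# One of n, n+2, n+4 is always divisible by 3, whatever n is.
always_div3 = all(any((n + k) % 3 == 0 for k in (0, 2, 4))
                  for n in range(1, 1000))

# Search for prime trios (p, p+2, p+4) with p below 10000.
trios = [p for p in range(2, 10_000)
         if is_prime(p) and is_prime(p + 2) and is_prime(p + 4)]
# trios contains only p = 3, giving the trio (3, 5, 7)
```

Raising the search bound changes nothing, which is exactly what the divisibility-by-3 argument guarantees.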
https://www.physicsforums.com/threads/atwood-machine-problem.206062/
# Atwood Machine Problem

## Homework Statement
In the three figures given in the attachment, consisting of three Atwood machines, the blocks A, B and C of mass m have accelerations a1, a2 and a3 respectively. F1 and F2 are external forces of magnitude 2mg and mg acting in the first and third diagrams respectively. How do the accelerations of the blocks differ, and why is that so?

## Homework Equations
Basic Newton's laws of motion.

## The Attempt at a Solution
The acceleration of the masses in an Atwood machine is given by:
$$a = (\frac{m_{2}-m_{1}}{m_{2}+m_{1}})g$$
I really don't get how the accelerations are not the same in each case, as the forces add up to the same?

#### Attachments
• pulley.PNG 32.9 KB · Views: 866

Doc Al (Mentor): Don't use a "canned" formula for Atwood's machine--that's only good in certain situations. Instead, derive the acceleration yourself using Newton's 2nd law. Hint: In the cases with two masses, analyze each mass separately. Then combine the resulting equations to solve for the acceleration.

For the second pulley we can straightforwardly apply the equation, and with that we get the acceleration a2 = g/3.

Yes, you were right. The accelerations have to be calculated individually. For the first pulley, let the tension in the string be T1, which equals the pulling force F1 = 2mg. For the block we can write ma = T - mg = 2mg - mg = mg, so a1 = g.

For the third pulley, let's see... F2 = mg:
$$F_{2}+mg-T=ma ...(1)$$
$$T-mg=ma ...(2)$$
Adding (1) and (2):
$$mg+mg-mg+T-T=2ma$$
$$mg=2ma$$
$$a=\frac{g}{2}$$
then $$a_{3}=\frac{g}{2}$$
Therefore, the correct option would be (b): a1 > a3 > a2.

Doc Al (Mentor): For the third pulley: What forces act on the second mass? Apply Newton! (Looks like you did it while I was typing. Good!)

Is this correct?

Doc Al (Mentor): Yes. You got it.

Thank you very much for your help and support!
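The three results can be collected numerically. This is a sketch under one standard reading of the figures (the attachment itself is not reproduced here): a block of mass m pulled via the pulley by F1 = 2mg, an ordinary Atwood machine with hanging masses m and 2m, and two masses m with F2 = mg applied to one side.

```python
from fractions import Fraction

g = Fraction(1)  # measure accelerations in units of g

# Case 1: force F1 = 2mg applied to the string, so tension T = 2mg
a1 = 2 * g - g                  # ma = T - mg  =>  a = g

# Case 2: ordinary Atwood machine with hanging masses m and 2m
a2 = (2 - 1) * g / (2 + 1)      # a = (m2 - m1) g / (m2 + m1) = g/3

# Case 3: adding equations (1) and (2) eliminates T: 2ma = F2 = mg
a3 = g / 2

# Ordering as in option (b): a1 > a3 > a2
```

The point of the exercise survives the arithmetic: a force of 2mg is not equivalent to a hanging mass 2m, because the hanging mass also has to be accelerated.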
https://www.neetprep.com/ncert-question/228528
37. The following data are given for the reaction ${\mathrm{CaCO}}_{3}\left(\mathrm{s}\right)\to \mathrm{CaO}\left(\mathrm{s}\right)+{\mathrm{CO}}_{2}\left(\mathrm{g}\right)$. Predict the effect of temperature on the equilibrium constant of the above reaction.

Because the $∆\mathrm{H}$ value for this reaction is positive, the reaction is endothermic. Hence, according to Le Chatelier's principle, the reaction will proceed in the forward direction on increasing the temperature. Thus, the value of the equilibrium constant for the reaction increases.
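The same conclusion follows quantitatively from the van 't Hoff equation (standard thermodynamics, not part of the original solution):

```latex
\frac{d\,\ln K}{dT} = \frac{\Delta H^{\ominus}}{R T^{2}}
```

Since $\Delta H^{\ominus} > 0$ for this endothermic decomposition, $\ln K$ (and hence $K$) increases with temperature.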
https://www.askiitians.com/forums/Differential-Calculus/find-the-derivative-of-abs-x-3-when-x-1-with-c_152658.htm
find the derivative of abs(x-3) when x = -1, with correct steps

Sourabh Singh (IIT Patna), 4 years ago: Hi, you have to draw the curve, and once you draw it you will see that the slope of the tangent is -1.

Ajay (209 Points), 4 years ago: By definition of the modulus function, $\left | x-3 \right | = -(x-3)\ if\ x< 3$, that is, $-x+3$. Hence the derivative is -1 for all values of x less than 3, which includes x = -1.
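A quick numerical check of the answer (a Python sketch, not from the original thread), using a central difference around x = -1:

```python
def f(x):
    return abs(x - 3)

# central-difference approximation of f'(-1)
h = 1e-6
deriv = (f(-1 + h) - f(-1 - h)) / (2 * h)
# For x < 3 we have |x - 3| = 3 - x, so the derivative there is exactly -1.
```

The finite difference lands on -1 because both sample points sit on the same linear branch of the absolute-value function.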
http://mathoverflow.net/questions/24974/coinciding-induced-maps
# Coinciding induced maps

Of course if two morphisms of complexes are homotopic, their induced maps coincide, but I'm wondering about the converse: if the induced maps on the cohomologies coincide, when does that imply that the morphisms are homotopic? I've played around with it a bit and I think it might be true for complexes of projective modules? But I'm not sure... are there any well-known results regarding this?

The answer is no. Consider the complex over the integers $A$ which is $\mathbb Z$ in degrees $0$ and $1$, with the only non-trivial differential being multiplication by $2$, and let $B$ be the same complex shifted once to the left (so that it is $\mathbb Z$ in degrees $-1$ and $0$). We have a map of complexes $A \to B$ which is the identity in degree $0$ and (necessarily) zero in all other degrees. This induces zero in cohomology (for trivial reasons) but is not null homotopic. (This is easily seen by an explicit calculation. Abstractly, however, it has to do with the fact that it realises the non-zero element of $\mathrm{Ext}^1_{\mathbb Z}(\mathbb Z/2,\mathbb Z/2)$.)

@adeel: Isn't $\mathbb{Z}$ a projective module? And aren't the above complexes (in Ekedahl's answer) bounded? – Qfwfq May 17 '10 at 10:41
Sorry, I mean complexes $0 \to I_0 \to I_1 \to \cdots$ with $I_i$ injective or $\cdots \to P_1 \to P_0 \to 0$ with $P_i$ projective, because then you can construct homotopy maps inductively. – Adeel May 17 '10 at 11:06
It is enough that the source complex is projective (or the target complex injective) to construct things inductively. If you like, you can replace $B$ in my example by $\mathbb Q/\mathbb Z$ in degrees $0$ and $1$ with a suitable map $A \to B$ and get the same counterexample. – Torsten Ekedahl May 17 '10 at 15:50
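The "explicit calculation" can be made concrete (this is my own sketch of the parity obstruction, under the identifications in the answer): a null homotopy would consist of integers $h_0\colon A^0 \to B^{-1}$ and $h_1\colon A^1 \to B^0$ with $1 = 2h_0 + 2h_1$, which is impossible since the right-hand side is always even.

```python
# The identity map in degree 0 would have to equal d_B h0 + h1 d_A,
# i.e. 1 = 2*h0 + 2*h1 for some integers h0, h1 -- impossible by parity.
# A small search makes the obstruction visible:
no_homotopy = not any(2 * h0 + 2 * h1 == 1
                      for h0 in range(-50, 51)
                      for h1 in range(-50, 51))
```

The search is of course redundant given the parity argument, but it spells out what "an explicit calculation" means here.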
http://mathoverflow.net/feeds/question/111893
# Cyclotomic polynomial question [closed]

Asked by Hauke Reddmann, 2012-11-09.

To avoid case distinction overload, I also call (say) $Z^3-Z^4$ cyclotomic. Just divide out the $Z=0$ solutions in the following if they offend. In the following, all exponents are assumed to be positive integers.

Assume that $P=Z^a+Z^b+Z^c-Z^d-Z^e-Z^f$ is cyclotomic. The "standard" form (since $Z-1$ factors out immediately) would be something like $P'=Z^A(Z-1)(1\pm{Z}^B+Z^{2B})$. But note that, say, $P''=(Z-1)(1+Z^3+Z^4+Z^7+Z^8+Z^9+Z^{10})$ telescopes to the same form as $P$. Surely, $P''$ is not cyclotomic, but it was just a random example anyway :-) So: Is there a cyclotomic $P$ that can NOT be written in the form $P'$? Or even some $1\pm{Z}+Z^n, n>2$ that is cyclotomic? (Although to this my instinct says no way - the proof is probably a one-liner in complex analysis...)

Answer by Aaron Meyerowitz, 2012-11-10:

To start with your second question, cyclotomic polynomials have a central symmetry or antisymmetry: if $f(Z)$ is a cyclotomic polynomial of degree $n$ then $f(Z)=\pm Z^nf(\frac1Z)$. So $Z^n\pm Z+1$ would only be cyclotomic if $n=2$.

In your construction $P'$ you could replace $Z-1$ by $Z^k-1$. Looking at products $(Z-1)\Phi_q\Phi_r\Phi_s$ which avoid being of this form, one finds numerous cases.
One is $q,r,s=2,5,8$:

$$Z^{10}+Z^9+Z^6-Z^4-Z-1=$$ $$(Z-1)(Z+1)(Z^4+Z^3+Z^2+Z+1)(Z^4+1)=$$ $$(Z-1)(Z^9+2Z^8+2Z^7+2Z^6+3Z^5+3Z^4+2Z^3+2Z^2+2Z+1)=$$ $$(Z^2-1)(Z^8+Z^7+Z^6+Z^5+2Z^4+Z^3+Z^2+Z+1)=$$ $$(Z^5-1)(Z^5+Z^4+Z+1)$$
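The displayed identity is easy to verify mechanically; here is a small sketch (not part of the answer) using coefficient-list multiplication:

```python
def polymul(p, q):
    """Multiply polynomials given as coefficient lists (index = power of Z)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Z^10 + Z^9 + Z^6 - Z^4 - Z - 1, lowest power first
target = [-1, -1, 0, 0, -1, 0, 1, 0, 0, 1, 1]

# (Z^5 - 1)(Z^5 + Z^4 + Z + 1)
form1 = polymul([-1, 0, 0, 0, 0, 1], [1, 1, 0, 0, 1, 1])

# (Z - 1)(Z + 1)(Z^4 + Z^3 + Z^2 + Z + 1)(Z^4 + 1), i.e. (Z-1) Phi_2 Phi_5 Phi_8
form2 = polymul(polymul([-1, 1], [1, 1]),
                polymul([1, 1, 1, 1, 1], [1, 0, 0, 0, 1]))
```

Both factorizations expand to the same coefficient list as the target polynomial.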
http://bib-pubdb1.desy.de/collection/PUB_CFEL-UDSS-20160914?ln=en
# CFEL-UDSS

2018-11-07 16:12 [PUBDB-2018-04297] Conference Presentation et al Towards Chirp Control of XUV Free Electron Lasers 8th EPS-QEOD EUROPHOTON CONFERENCE, Barcelona, Spain, 2 Sep 2018 - 7 Sep 2018   We use 267 nm femtosecond laser pulses to modulate electrons of an XUV free electron laser. By changing the chirp of these laser pulses we can influence the chirp of the generated XUV FEL pulses, which are characterized by THz streaking. [...] OpenAccess: Summary_Europhoton2018_MK20042018 - PDF PDF (PDFA); Towards Chirp Control of XUV Free Electron Lasers_MK_PUBDB - PPTX; 2018-10-30 15:47 [PUBDB-2018-04069] Book/Report/Internal Report Pfluger, J. Field accuracy requirements for the undulator systems of the X-ray FELs at TESLA [DESY-TESLA-FEL-2000-14] TESLA-FEL Reports 11 pp. (2000) [10.3204/PUBDB-2018-04069]   In SASE FELs, the radiation power has to saturate in a single pass through the undulator. In the VUV and X-ray regime, with undulators ranging in length from several tens to hundreds of meters, even a few percent increase in gain length could mean a lengthening of the undulator by several meters. [...] OpenAccess: PDF PDF (PDFA); 2018-10-14 14:32 [PUBDB-2018-03782] Poster et al PIPE: The Photon-Ion-Endstation at PETRA III for Experimental Studies of XUV-Photoprocesses in Small Quantum Systems Deutsche Tagung für Forschung mit Synchrotronstrahlung, Neutronen und Ionenstrahlen an Großgeräten, SNI 2018, Garching, Germany, 17 Sep 2018 - 19 Sep 2018   OpenAccess: PDF PDF (PDFA); 2018-01-17 15:59 [PUBDB-2018-00674] Journal Article et al Near-K-edge single, double, and triple photoionization of $C^{+}$ ions Physical review / A 97(1), 013409 (2018) [10.1103/PhysRevA.97.013409]   Single, double, and triple ionization of the $C^{+}$ ion by a single photon have been investigated in the energy range 286 to 326 eV around the K-shell single-ionization threshold at an unprecedented level of detail.
At energy resolutions as low as 12 meV, corresponding to a resolving power of 24 000, natural linewidths of the most prominent resonances could be determined. [...] OpenAccess: PDF PDF (PDFA); External link: Fulltext 2018-01-09 16:17 [PUBDB-2018-00296] Journal Article et al High-resolution resonant inelastic extreme ultraviolet scattering from orbital and spin excitations in a Heisenberg antiferromagnet Physical review / B 96(18), 184420 (2017) [10.1103/PhysRevB.96.184420]   We report a high-resolution resonant inelastic extreme ultraviolet (EUV) scattering study of the quantum Heisenberg antiferromagnet KCoF$_3$. By tuning the EUV photon energy to the cobalt M$_{23}$ edge, a complete set of low-energy 3d spin-orbital excitations is revealed. [...] OpenAccess: PDF PDF (PDFA); 2017-12-22 13:55 [PUBDB-2017-14056] Journal Article et al Imaging the square of the correlated two-electron wave function of a hydrogen molecule Nature Communications 8(1), 2266 (2017) [10.1038/s41467-017-02437-9]   The toolbox for imaging molecules is well-equipped today. Some techniques visualize the geometrical structure, others the electron density or electron orbitals. [...] Restricted: PDF PDF (PDFA); 2017-11-20 15:44 [PUBDB-2017-12132] Journal Article et al Quantum imaging with incoherently scattered light from a free-electron laser Nature physics 14, 126-129 (2017) [10.1038/nphys4301]   The advent of accelerator-driven free-electron lasers (FEL) has opened new avenues for high-resolution structure determination via diffraction methods that go far beyond conventional X-ray crystallography methods. These techniques rely on coherent scattering processes that require the maintenance of first-order coherence of the radiation field throughout the imaging procedure. [...]
Restricted: PDF PDF (PDFA); 2017-10-25 19:37 [PUBDB-2017-11558] Journal Article et al Near $L$-edge Single and Multiple Photoionization of Singly Charged Iron Ions The astrophysical journal 849(1), 5 (2017) [10.3847/1538-4357/aa8fcc]   Absolute cross sections for m-fold photoionization (m=1,...,6) of Fe+ by a single photon were measured employing the photon-ion merged-beams setup PIPE at the PETRA III synchrotron light source, operated by DESY in Hamburg, Germany. Photon energies were in the range 680-920 eV, which covers the photoionization resonances associated with 2p and 2s excitation to higher atomic shells as well as the thresholds for 2p and 2s ionization. [...] Restricted: PDF PDF (PDFA); External link: Fulltext 2017-09-12 14:07 [PUBDB-2017-10683] Journal Article et al Real-Time Elucidation of Catalytic Pathways in CO Hydrogenation on Ru The journal of physical chemistry letters 8(16), 3820 - 3825 (2017) [10.1021/acs.jpclett.7b01549]   The direct elucidation of the reaction pathways in heterogeneous catalysis has been challenging due to the short-lived nature of reaction intermediates. Here, we directly measured on ultrafast time scales the initial hydrogenation steps of adsorbed CO on a Ru catalyst surface, which is known as the bottleneck reaction in syngas and CO$_2$ reforming processes. [...] Published on 2017-07-31.
Available in OpenAccess from 2018-07-31: CO Hydrogenation Supplemental 2017.07.28 - Resubmitted JPCL - PDF PDF (PDFA); CO Hydrogenation 2017.07.30 - Resubmitted JPCL - PDF PDF (PDFA); Restricted: PDF PDF (PDFA); 2017-08-19 16:18 [PUBDB-2017-09552] Journal Article/Contribution to a conference proceedings et al Photoionization and photofragmentation of $\mathrm{Lu}_{3}\mathrm{N@C}_{80}^{\mathrm{q}+}$ ions (q = 1,2,3) XXX International Conference on Photonic, Electronic, and Atomic Collisions, Cairns, Australia, 25 Jul 2017 - 1 Aug 2017   Cross sections for photoionization and photofragmentation of singly, doubly and triply charged Lu3N@C80 ions have been measured using the photon-ion merged-beams technique at the PETRA III synchrotron light source. The measured spectra exhibit prominent resonance features at energies around the carbon K edge. [...] OpenAccess: 2017-ICPEAC-PIPE-Lu3N@C80 - PDF PDF (PDFA); Hellhund_2017_J._Phys.__Conf._Ser._875_032034 - PDF PDF (PDFA);
https://code.tutsplus.com/courses/advanced-vuejs-component-concepts/lessons/using-slots-to-facilitate-customization
# 3.1 Using Slots to Facilitate Customization

We rarely want static content in our components. Thankfully, we can use slots to allow our components to be used with the component consumer's own content.

In the previous lesson, we successfully wrapped Bootstrap's modal component as a Vue component. And it works, but it leaves a lot to be desired [LAUGH] because the content is static. It's hard-coded in the modal. And there might be some cases where we would want a modal with static content, but none really come to mind. I mean, the whole idea behind having a component like this would be to be able to supply whatever content that we wanted. So in this lesson, we are going to do that. And it's actually quite simple to do. All we have to do is use Vue's wonderful feature called slots.

Adding a slot to a component is very simple. We just have an element called slot. Now you can think of a slot as a placeholder. So that's what we want to do, then, is take the content that we want to supply to our modal and take it out so that all we have is our slot. And then inside of whatever is consuming our modal component, which is our app component, we will put our content in between the opening and closing modal tags. So let's get rid of this white space so that it's nice and pretty. And whenever we show our modal, we will see the same exact result. And to prove that, let's change the title so that instead of Modal title, it says This is the title. So whenever we save this, we should see that updated in the browser and, voila, there we go.

Now this is great. This is a huge step forward. However, if I'm using this modal component, I don't want to have to use all of this, what is essentially boilerplate.
I mean, if we have a header, then it's probably going to have a title, and it's probably going to have the button for closing the modal. So really, what I would want to do is just supply the title. And yes, we could do that with a prop very easily; however, that kind of steps away from this idea of supplying content to our modal component. So what we can do instead is use another slot, and we can give it a name (it's called a named slot) so that we can specify a slot for the title. In fact, it would look like this: we would have slot, it would have a name attribute, and we would just call it title. So let's take the whole header element and its children and put that inside of our modal component.

What this gives us is a placeholder that we can use for the title. So inside of our content, we can say div, give it the slot attribute, set it to whatever name we want, and then "This is the title". We essentially get the same result; if we show the modal, we see that happening.

However, let's inspect this. We are going to see that we have our h5 element, but we also have this div element, because we specified a div element here. Behind the scenes, Vue saw that this div element is the content for our title slot, so it took the whole element and its content and put it right where we had that named slot. If you want that behavior, that's great. In our case, though, we don't really want to show any extra HTML unless it is actually inside of our title. So what we are going to do is replace the div element with the template element. If we save this, over on the right-hand side we're going to see that the content automatically updated: we have the h5 element and just the content, and we don't have that div element anymore. So if you don't want any HTML output from using a named slot, you can use the template element. Let's do the same thing for the footer.
So let's take that element out, and then for the footer we'll have a slot with the name of footer, and we will supply the buttons there. It's essentially going to be the same thing: we will use the template element, give it the slot attribute with a value of footer, and then we will have our buttons. So once again, let's show our modal; we have the same results.

Now, as far as the body is concerned, the body is kind of the default content of our modal. I mean, it makes sense to specify the title, and it makes sense to specify the footer, but specifying the body doesn't make a whole lot of sense. It should just be automatic: whatever content we have that's not being used by a named slot should be the body. So what we can do is take out the Bootstrap markup for the body, and we are just going to put our default slot there, because that's what we have: we have named slots, and then we have our default slot.

So now, if we just get rid of everything except our named slots and our content, we can see that our content is right there, although the whole styling changed, didn't it? Let's make sure that everything is okay. I don't see why... okay, there we go, it was just a little glitch. So we can add another p element that has more content, and we will see that it displays right inside of our modal.

And the great thing about this is that it doesn't matter what order we specify this content in. We can have the title at the bottom, we can have the footer at the top. Well, let's move the default content up to the top. Now we have completely reorganized how we supplied the content, but that's okay, because the structure of our component is controlled within the component itself. So if we show the modal, everything looks like it did before, and that's exactly what we wanted.

So that's a great step forward. However, there are times when we would have a modal that doesn't have a footer.
And in those cases, if we don't supply a footer, well, we still have the markup of the footer because that is inside of our component. And that is definitely something that we want to address. And we will do that in the next lesson.
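Putting the lesson together, here is a sketch of the slot layout described above. It is a reconstruction, not the course's actual source: the `<modal>` tag name and the Bootstrap class names are assumed, and Vue 2's `slot` attribute syntax is used because that is what the transcript describes.

```html
<!-- Sketch of the modal component's template: two named slots plus the
     default slot for the body (structure assumed from the transcript) -->
<div class="modal-content">
  <div class="modal-header">
    <slot name="title"></slot>
    <button type="button" class="close" data-dismiss="modal">&times;</button>
  </div>
  <div class="modal-body">
    <slot></slot> <!-- default slot: anything not targeting a named slot -->
  </div>
  <div class="modal-footer">
    <slot name="footer"></slot>
  </div>
</div>

<!-- Consumer side: <template> targets a named slot without emitting any
     extra wrapper HTML, and the order of the pieces does not matter -->
<modal>
  <template slot="title"><h5 class="modal-title">This is the title</h5></template>
  <p>Body content falls into the default slot.</p>
  <template slot="footer">
    <button type="button" class="btn btn-secondary" data-dismiss="modal">Close</button>
  </template>
</modal>
```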
https://learn.careers360.com/ncert/question-check-whether-the-given-fractions-are-equivalent-7-by-13-5-by-11/
# Q6.(c) Check whether the given fractions are equivalent: $\frac{7}{13}, \frac{5}{11}$

Multiply both the numerator and the denominator of $\frac{7}{13}$ by $\frac{5}{7}$:

$\frac{7}{13}\times \frac{5/7}{5/7} = \frac{5}{65/7}$

A fraction with numerator 5 that is equivalent to $\frac{7}{13}$ must therefore have denominator $\frac{65}{7} \approx 9.29$, not 11. Hence these two fractions are not equivalent.
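The same conclusion follows from cross-multiplication, a different check from the scaling argument above: $\frac{a}{b} = \frac{c}{d}$ exactly when $ad = bc$. A minimal sketch:

```python
# Cross-multiplication check: a/b == c/d exactly when a*d == b*c
def equivalent(a, b, c, d):
    return a * d == b * c

print(equivalent(7, 13, 5, 11))  # 7*11 = 77, 13*5 = 65 -> False
```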
https://questions.examside.com/past-years/jee/question/x-different-wavelengths-may-be-observed-in-the-spectrum-from-jee-main-physics-units-and-measurements-hyvzfhaoispdtmn6
### 1. JEE Main 2021 (Online) 27th August Evening Shift (Numerical)

X different wavelengths may be observed in the spectrum from a hydrogen sample if the atoms are excited to states with principal quantum number n = 6. The value of X is ______________.

## Explanation

Number of different wavelengths = $${{n(n - 1)} \over 2} = {{6 \times (6 - 1)} \over 2} = {{6 \times 5} \over 2} = 15$$

### 2. JEE Main 2021 (Online) 27th July Morning Shift (Numerical)

A particle of mass 9.1 $$\times$$ 10$$^{-31}$$ kg travels in a medium with a speed of 10$$^{6}$$ m/s, and a photon of a radiation of linear momentum 10$$^{-27}$$ kg m/s travels in vacuum. The wavelength of the photon is __________ times the wavelength of the particle.

## Explanation

For the photon: $${\lambda _1} = {h \over P} = {{6.6 \times {{10}^{ - 34}}} \over {{{10}^{ - 27}}}}$$

For the particle: $${\lambda _2} = {h \over {mv}} = {{6.6 \times {{10}^{ - 34}}} \over {9.1 \times {{10}^{ - 31}} \times {{10}^6}}}$$

$$\therefore$$ $${{{\lambda _1}} \over {{\lambda _2}}} = 910$$

### 3. JEE Main 2021 (Online) 27th July Morning Shift (Numerical)

In Bohr's atomic model, the electron is assumed to revolve in a circular orbit of radius 0.5 $$\mathop A\limits^o$$. If the speed of the electron is 2.2 $$\times$$ 10$$^{6}$$ m/s, then the current associated with the electron will be _____________ $$\times$$ 10$$^{-2}$$ mA. [Take $$\pi$$ as $${{22} \over 7}$$]

## Explanation

$$I = {e \over T} = {{e\omega } \over {2\pi }} = {{ev} \over {2\pi r}}$$

$$I = {{1.6 \times {{10}^{ - 19}} \times 2.2 \times {{10}^6} \times 7} \over {2 \times 22 \times 0.5 \times {{10}^{ - 10}}}}$$ = 1.12 mA = 112 $$\times$$ 10$$^{-2}$$ mA

### 4. JEE Main 2021 (Online) 25th July Evening Shift (Numerical)

The nuclear activity of a radioactive element becomes $${\left( {{1 \over 8}} \right)^{th}}$$ of its initial value in 30 years.
The half-life of the radioactive element is _____________ years.

## Explanation

We know $$A = {A_0}{e^{ - \lambda t}}$$.

For the half-life: $${{{A_0}} \over 2} = {A_0}{e^{ - \lambda {t_{1/2}}}}$$ $$\Rightarrow$$ $$\lambda {t_{1/2}} = \ln 2$$ .....(1)

And since the radioactive element becomes $${\left( {{1 \over 8}} \right)^{th}}$$ of its initial value in 30 years:

$${{{A_0}} \over 8} = {A_0}{e^{ - \lambda \times 30}} \Rightarrow \lambda \times 30 = \ln 8 \Rightarrow 30\lambda = 3\ln 2 \Rightarrow \lambda = {{3\ln 2} \over {30}}$$ .....(2)

Putting the value of $$\lambda$$ from (2) into (1): $${{3\ln 2} \over {30}} \times {t_{1/2}} = \ln 2$$ $$\Rightarrow$$ $${t_{1/2}} = 10$$ years
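The four numerical answers above can be verified with a few lines of arithmetic. This is a sketch; the constants are the rounded values used in the solutions, not CODATA values.

```python
from math import log, pi

# Q1: number of distinct spectral lines from n = 6, using n(n-1)/2
X = 6 * (6 - 1) // 2                  # 15

# Q2: photon wavelength / particle wavelength (de Broglie: lambda = h/p)
h = 6.6e-34                           # Planck constant (J s), as used above
lam_photon = h / 1e-27                # photon: lambda = h/p
lam_particle = h / (9.1e-31 * 1e6)    # particle: lambda = h/(m*v)
ratio = lam_photon / lam_particle     # = m*v/p = 910

# Q3: current of an electron in a Bohr orbit, I = e*v/(2*pi*r)
e, v, r = 1.6e-19, 2.2e6, 0.5e-10
I = e * v / (2 * pi * r)              # ~1.12e-3 A, i.e. 112 x 10^-2 mA

# Q4: activity falls to 1/8 in 30 years -> t_half = 30*ln(2)/ln(8) = 10 years
t_half = 30 * log(2) / log(8)

print(X, ratio, I, t_half)
```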