# QM - Particle in Potential Well - Probability of states A particle is confined in a potential well such that its allowed energies are $$E_n = n^2\epsilon$$, where $$n = 1, 2, \dots$$ is an integer and $$\epsilon$$ a positive constant. The corresponding energy eigenstates are $$\lvert1\rangle, \lvert2\rangle, \dots , \lvert n\rangle, \dots$$ At t = 0 the particle is in the state: $$\lvert\psi(0)\rangle = 0.2\lvert1\rangle + 0.3\lvert2\rangle + 0.4\lvert3\rangle + 0.843\lvert4\rangle$$. If the energy is measured at $$t=0$$, what is the probability of finding a value smaller than $$6\epsilon$$? Am I right in saying this would just be the sum of the coefficients for $$n = 1$$ and $$n = 2$$, which is $$0.5$$? Then I'm wondering how you would calculate the mean value and rms deviation of the energy of the particle in the state $$\lvert\psi(0)\rangle$$. How do I find the state vector $$\lvert\psi\rangle$$ at any time $$t$$? And do the results calculated above remain valid for an arbitrary time? The last thing I'm stuck on is: let's say the energy is measured and found to be $$16\epsilon$$. After this measurement, what is the state of the system, and what result would you get if you measured the energy again? 1. For your first question, you actually have to square the coefficients and add them up. Notice that it is the sum of the squares of the coefficients which adds up to unity. The only energies below $$6\epsilon$$ are $$E_1 = \epsilon$$ and $$E_2 = 4\epsilon$$, so the required probability is $$(0.2)^2+(0.3)^2=0.13$$. 2. In general, energy superpositions are not stationary states, so your state vector will change with time. However, the probabilities will remain unchanged, because when you evolve the state vector in time, each term in its expansion only picks up a phase factor of the form $$e^{-iE_nt/\hbar}$$. You can see that the modulus remains the same, since $$|ce^{-iE_nt/\hbar}|^2=|c|^2.$$ 3. The mean value of the energy is just $$\langle E\rangle = \sum_n|c_n|^2E_n$$, since the state is normalized; the rms deviation is $$\Delta E = \sqrt{\langle E^2\rangle - \langle E\rangle^2}$$. 4. 
To answer your last question: Energy eigenstates are stationary states. A measurement result of $$16\epsilon$$ collapses the system into the eigenstate $$\lvert4\rangle$$; it will remain there, and you will get the same result, $$16\epsilon$$, if you measure the energy again. • Thank you so much this is super helpful! So how would I actually go about calculating the state vector at any time t? Is it just $|ψ(t)\rangle = e^{−iφ(t)} |ψ(0)\rangle$? – physconomic Dec 2 '20 at 16:22 • We usually separate the space and the time parts of the wavefunction, so for each stationary state one would write $\psi(x,t)=\phi(x)f(t)$, where $f(t)$ is just the phase factor evolving it in time; your superposition evolves term by term with these factors. By the way, do mark my answer as correct if you found that it sufficed. – A.D. Dec 2 '20 at 16:29
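These numbers can be checked with a few lines of plain Python (a sketch; energies are in units of $\epsilon$, and the coefficients are the ones given in the question):

```python
import math

# Expansion coefficients of |psi(0)> in the energy eigenbasis, from the question.
c = [0.2, 0.3, 0.4, 0.843]
E = [n**2 for n in range(1, 5)]          # E_n = n^2, in units of epsilon

p = [cn**2 for cn in c]                  # measurement probabilities |c_n|^2
# sum(p) is ~1, so the state is (almost exactly) normalized.

p_below_6 = p[0] + p[1]                  # only E_1 = 1 and E_2 = 4 lie below 6
mean_E = sum(pn * En for pn, En in zip(p, E))
rms_E = math.sqrt(sum(pn * En**2 for pn, En in zip(p, E)) - mean_E**2)

print(p_below_6)   # ~0.13
print(mean_E)      # ~13.2 (in units of epsilon)
print(rms_E)       # ~4.7 (in units of epsilon)
```

Note that the time evolution adds only phases $e^{-iE_nt/\hbar}$ to each $c_n$, so all three of these numbers are the same at any time $t$.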
# What happens to the complex part of the wave function when doing LCAO? I understand that in general the wave function can take complex values; however, when talking about combining atomic orbitals to form molecular orbitals we talk about the phase of the orbital being positive or negative to create constructive or destructive interference. It seems as if we have lost the aspect of the wave function taking on a complex phase. What happened to it? • It got very scared because sh*t just got real and squared away ;) – Martin - マーチン Sep 11 '15 at 5:33 • Also, generally chemists prefer working with real wavefunctions, which are obtained by taking appropriate linear combinations of the complex wavefunctions. en.wikipedia.org/wiki/… This is allowed because any linear combination of degenerate eigenfunctions is also an eigenfunction of the Hamiltonian with the same eigenvalue. Another benefit is that these real wavefunctions have convenient directional properties, e.g. the p orbitals are aligned in the x, y, and z directions. Same goes for the d orbitals. – orthocresol Sep 11 '15 at 6:02 I understand that in general the wave function can take complex numbers [...] It is not that it can; rather, it should: by the very postulates of quantum mechanics the wave function is a complex-valued function. So both the spin orbitals (which are one-electron wave functions) and the Slater determinant built out of them are in principle complex-valued functions. However, in solving the HF equations by the SCF procedure it is quite typical to impose some constraints on the spin orbitals, for instance, to restrict them to be real-valued functions rather than complex-valued ones. As with any other constraint, this restriction might lead to what are called SCF instabilities, i.e. situations where relaxing the constraints leads to a different variational solution of lower energy. 
To elaborate a bit more, let us mention a few other well-known (as well as lesser-known) constraints for the SCF procedure: • The infamous restricted Hartree-Fock (RHF) model, with the requirement that spin orbitals come in pairs: two spin orbitals corresponding to two different pure spin states are constructed from the same spatial orbital: \begin{aligned} \psi_{2i-1}(1) &= \phi_{i}(1) \alpha(1) \, , \\ \psi_{2i}(1) &= \phi_{i}(1) \beta(1) \, . \end{aligned} • In the unrestricted Hartree-Fock (UHF) model the requirement above is relaxed, and we exclusively use spatial orbitals from one set to construct $\alpha$ spin orbitals and spatial orbitals from another set to construct $\beta$ spin orbitals: \begin{aligned} \psi_{2i-1}(1) &= \phi_{i}^{\alpha}(1) \alpha(1) \, , \\ \psi_{2i}(1) &= \phi_{i}^{\beta}(1) \beta(1) \, . \end{aligned} • A bit less known is the fact that the unrestricted Hartree-Fock method is not so unrestricted. In fact, there is a constraint here: each and every spin orbital describes an electron in a pure spin state, either $\alpha$ or $\beta$, while in general (in accordance with the postulates of quantum mechanics) an electron can be in a superposition of these states: $$\psi_{i}(1) = \phi_{i}^{\alpha}(1) \alpha(1) + \phi_{i}^{\beta}(1) \beta(1)$$ This most general setting is known as the general Hartree-Fock (GHF) method. And all three of these methods (RHF, UHF, GHF) exist in two variants: the real one, in which spin orbitals are additionally required to be real-valued functions, and the more general complex one. All this gives rise to six variants of the HF method, with many possible instabilities between them, discussed in some detail in the seminal paper by Schlegel & McDouall.1 There is also a very similar earlier paper by Seeger & Pople,2 which sadly is not freely available. 
It's important to realize that any of the constraints mentioned above can only raise the electronic energy: with a great deal of certainty, we may expect that if any constraint is relaxed, the variational procedure will result in a lower energy due to the greater variational freedom. Thus, for instance, for the usual real variants of the Hartree-Fock method we have $$E_\mathrm{e}(\mathrm{RGHF}) \leq E_\mathrm{e}(\mathrm{RUHF}) \leq E_\mathrm{e}(\mathrm{RRHF}) \, .$$ The same, of course, is true for the corresponding complex variants of these methods, $$E_\mathrm{e}(\mathrm{CGHF}) \leq E_\mathrm{e}(\mathrm{CUHF}) \leq E_\mathrm{e}(\mathrm{CRHF}) \, ,$$ and for any of the three formalisms $$E_\mathrm{e}(\mathrm{C}x\mathrm{HF}) \leq E_\mathrm{e}(\mathrm{R}x\mathrm{HF}) \, , \quad \text{where} \quad x = \mathrm{G}, \mathrm{U}, \mathrm{R} \, .$$ RHF/UHF/GHF and correct spin symmetry The possible rise of electronic energy that accompanies the introduction of more and more constraints in the GHF -> UHF -> RHF sequence seems quite contrary to the goal of the variational method. However, the UHF and RHF constraints are nothing but symmetry constraints: they arise by requiring an approximate electronic wave function to have the same spin symmetry as the exact non-relativistic one, i.e. to be an eigenfunction of the spin operators, the total spin-squared operator $\hat{S}^2$ and the $z$-component of the total spin operator $\hat{S}_{z}$. • With respect to the $\hat{S}_{z}$ operator, it is well known that any Slater determinant built out of spin orbitals corresponding to pure spin states is an eigenfunction of $\hat{S}_{z}$. Thus, RHF and UHF wave functions are eigenfunctions of $\hat{S}_{z}$, but a GHF wave function is not. 
• The situation with $\hat{S}^{2}$ is a bit more complicated, but all in all, only in the RHF formalism (not in UHF or GHF) is it possible to construct an approximate electronic wave function which is an eigenfunction of $\hat{S}^{2}$.3 To conclude, as Löwdin pointed out, we always face a dilemma here: should we seek a solution that is a true variational minimum, or should we seek a solution with the correct spin symmetry? 1) H. B. Schlegel and J. J. W. McDouall, Do You Have SCF Stability and Convergence Problems?, in Computational Advances in Organic Chemistry: Molecular Structure and Reactivity, Springer Netherlands, 1991, pp. 167-185. DOI: 10.1007/978-94-011-3262-6_2. Free PDF from wayne.edu. 2) R. Seeger and J. A. Pople, Self-consistent molecular orbital methods. XVIII. Constraints and stability in Hartree-Fock theory, The Journal of Chemical Physics, 66(7), 1977, 3045-3050. DOI: 10.1063/1.434318. 3) This can be done even for an open-shell electron configuration, although it would require a linear combination of Slater determinants with constant coefficients, none of which alone is an eigenfunction of $\hat{S}^{2}$.
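As a small numerical illustration of the point in the comments about real combinations of complex orbitals: the real $p_x$ orbital is the combination $(Y_1^{-1} - Y_1^{1})/\sqrt{2}$ of complex spherical harmonics, and its imaginary parts cancel exactly. A sketch in Python (the closed forms of $Y_1^m$ are hard-coded, and the sample angles are arbitrary):

```python
import numpy as np

def Y1(m, theta, phi):
    """Complex spherical harmonics Y_1^m (theta = polar, phi = azimuthal),
    written out from their standard closed forms."""
    if m == 1:
        return -np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(1j * phi)
    if m == -1:
        return np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(-1j * phi)
    return np.sqrt(3 / (4 * np.pi)) * np.cos(theta)  # m == 0

def p_x(theta, phi):
    """Real p_x orbital: (Y_1^{-1} - Y_1^{1}) / sqrt(2)."""
    return (Y1(-1, theta, phi) - Y1(1, theta, phi)) / np.sqrt(2)

theta, phi = 0.7, 1.3
val = p_x(theta, phi)
# The e^{+i phi} and e^{-i phi} pieces combine into 2*cos(phi), so the result
# is real and proportional to sin(theta)*cos(phi), pointing along x.
print(np.isclose(val.imag, 0.0))   # True
```

Both $Y_1^{\pm1}$ are degenerate eigenfunctions of the Hamiltonian, which is exactly why this real combination is allowed.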
# Faculty Profile

Yaron Shaposhnik, Assistant Professor
Phone: 585.275.5592 Office: 3-343 Carol Simon Hall

#### Research Interests

Stochastic dynamic optimization with learning; applications of machine learning to model analysis; business analytics

### Professional History

Assistant Professor, University of Rochester, Simon Business School, September 2016 -

### Education

Massachusetts Institute of Technology, Ph.D. in Operations Research, 2016

### Publications

2021 - A Polynomial-Time Approximation Scheme for Sequential Batch-Testing of Series Systems. Journal article, Operations Research.
2020 - Mining Optimal Policies: A Pattern Recognition Approach to Model Analysis. INFORMS Journal on Optimization.
2019 - Scheduling with Testing of Heterogeneous Jobs. Management Science, Volume 65, Issue 2.

### Current Research Programs

A Generalized Pandora's Box Problem with Partially Open Boxes

Motivated by modern job recruiting practices, we study a generalization of Pandora's box problem (Weitzman 1979) in which boxes (candidates) can be partially opened (interviewed remotely) at a reduced cost before being fully opened (interviewed onsite). This allows the decision-maker to obtain information about boxes in the form of more accurate probability distributions without committing to fully opening them. We identify structural properties of the optimal policy and develop simple and intuitive algorithms with near-optimal performance guarantees.

A Holistic Approach to Interpretability in Financial Lending: Models, Visualizations, and Summary-Explanations

Lending decisions are usually made with proprietary models that provide minimally acceptable explanations to users. In a future world without such secrecy, what decision support tools would one want to use for justified lending decisions? 
This question is timely, since the economy has dramatically shifted due to a pandemic, and a massive number of new loans will be necessary in the short term. We propose a framework for such decisions, including a globally interpretable machine learning model, an interactive visualization of it, and several types of summaries and explanations for any given decision. The machine learning model is a two-layer additive risk model, which is decomposable into subscales, where each node in the second layer represents a meaningful subscale, and all of the nonlinearities are transparent. Our online visualization tool allows exploration of this model, showing precisely how it came to its conclusion. We provide three types of explanations that are simpler than, but consistent with, the global model: case-based reasoning explanations that use neighboring past cases, a set of features that were the most important for the model's prediction, and summary-explanations that provide a customized sparse explanation for any particular lending decision made by the model. Our framework earned the FICO recognition award for the Explainable Machine Learning Challenge, which was the first public challenge in the domain of explainable machine learning.

Crowdsourcing the Prediction of Sojourn Times: Methods and Feasibility Study

Mobile apps such as Google Maps, Bing Maps, or Waze are regularly used to predict travel times. This paper explores a similar application in the context of general service systems (rather than transportation problems, which have their own physical characteristics). We develop analytical and data-driven methods for predicting sojourn times in service systems by third parties that have partial information about users' interaction with the system, and conduct numerical experiments to assess the quality of predictions and the overall feasibility of this practice. 
Globally-Consistent Rule-Based Summary-Explanations for Machine Learning Models: Application to Credit-Risk Evaluation

We develop a method for interpreting specific predictions made by (global) predictive models by constructing (local) models tailored to each specific observation (these are also called "explanations" in the literature). Unlike existing work that "explains" specific observations by approximating global models in the vicinity of these observations, we fit models that are globally consistent with predictions made by the global model on past data. We focus on rule-based models (also known as association rules or conjunctions of predicates), which are interpretable and widely used in practice. We design multiple algorithms to extract such rules from discrete and continuous datasets, and study their theoretical properties. Finally, we apply these algorithms to multiple credit-risk models trained on real-world data from FICO and demonstrate that our approach effectively produces sparse summary-explanations of these models in a short period of time. Our approach is model-agnostic (that is, it can be used to interpret any predictive model), and solves a minimum set cover problem to construct its summaries.

Scheduling with Testing of Heterogeneous Jobs

This paper studies a canonical general scheduling model that captures a fundamental tradeoff between processing jobs and performing diagnostics (testing). In particular, testing reveals information regarding the required processing time and urgency of jobs awaiting scheduling, which informs future scheduling decisions. The model captures a range of important applications. Prior work focused on very special cases (e.g., jobs with i.i.d. processing times) to devise optimal policies. In contrast, the current paper studies the most general form of the model and describes a simple heuristic that is analytically guaranteed to have near-optimal performance and performs well in computational experiments. 
The design of the newly proposed heuristic, and the related worst-case performance analysis, rely on interesting connections to related stochastic optimization and bandit problems. Additionally, the paper devises optimal policies for several important extensions of previously studied models.

Simple Rules for Predicting Congestion Risk in Queueing Systems: Application to ICUs

We study the problem of predicting congestion risk in service systems. Congestion is associated with poor service experience and higher costs, and may even put users at risk, as in medical settings such as ICUs. By predicting future crowdedness, decision-makers can initiate preventive measures, such as rescheduling activities or increasing short-term capacities, in order to mitigate the effects of congestion. To this end, we define "high-risk states" in queueing models as system states that are likely to lead to a congested state in the near future, and strive to formulate simple rules for determining whether a given system state is high-risk. We show that for simple queueing systems, such as the $M/M/\infty$ queue with multiple user classes, such rules can be approximated by linear and quadratic functions on the state space. For more general queueing systems, we use methods from queueing theory, simulation, and machine learning to devise simple prediction rules, and demonstrate their effectiveness through an extensive computational study. Our study suggests that linear rules (which are widely considered to be interpretable) are very accurate in predicting congestion in ICUs.

Stochastic Selection Problems with Testing

We study the problem of a decision-maker having to select one of many competing alternatives (e.g., choosing between projects, designs, or suppliers) whose future revenues are a priori unknown and modeled as random variables of known probability distributions. 
The decision-maker can pay to test each alternative to reveal its specific revenue realization (e.g., by conducting market research), and her goal is to maximize the expected revenue of the selected alternative minus the testing costs. This model captures an interesting trade-off between gaining the revenue of a high-yield alternative and spending resources to reduce the uncertainty in selecting it. The combinatorial nature of the problem leads to a dynamic programming (DP) formulation with a high-dimensional state space that is computationally intractable. By characterizing the structure of the optimal policy, we derive efficient optimal and near-optimal policies that are simple and easy to compute. In fact, these policies are also myopic -- they only consider a limited horizon of one test. Moreover, our policies can be described using intuitive "testing intervals" around the expected revenue of each alternative, and in many cases, the dynamics of an optimal policy can be explained by the interaction between the testing intervals of various alternatives. Finally, we show that some of the insights and results obtained for the problem carry through to more general stochastic combinatorial optimization problems with testing.

Understanding How Data Visualization Tools Work

Dimensionality reduction (DR) techniques such as t-SNE, LargeVis, UMAP, and TriMap have demonstrated impressive visualization performance on many real-world data sets. One tension that has always faced these methods is the trade-off between preservation of global structure and preservation of local structure: methods can either handle one or the other, but not both. In this work, our main goal is to understand what aspects of DR methods are important for preserving structure: it is difficult to come up with a better method without a true understanding of the choices we make in our algorithms and their empirical impact on the embedding. 
We provide several useful design principles based on our new understanding of the mechanisms behind successful DR methods, and use them to design a new algorithm for DR that trades off gracefully between local and global structure preservation. Waiting-time prediction with invisible customers
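The ICU congestion work above leans on a standard queueing fact: the steady-state occupancy of an $M/M/\infty$ system is Poisson-distributed with mean $\lambda/\mu$. A toy sketch (all rates and the threshold are invented for illustration; this is not code from the paper) that turns this fact into a simple congestion-risk number:

```python
import math

def mm_inf_stationary(lam, mu, n_max):
    """Steady-state distribution of the occupancy of an M/M/infinity system:
    Poisson with mean rho = lam/mu (a standard queueing result), computed
    iteratively via p_n = p_{n-1} * rho / n to avoid huge factorials."""
    rho = lam / mu
    p = [math.exp(-rho)]
    for n in range(1, n_max + 1):
        p.append(p[-1] * rho / n)
    return p

def congestion_risk(lam, mu, threshold, n_max=200):
    """P(occupancy >= threshold) in steady state -- one crude proxy for a
    'high-risk' state in the sense discussed above."""
    p = mm_inf_stationary(lam, mu, n_max)
    return sum(p[threshold:])

# Hypothetical ICU-like numbers: 20 arrivals/day, mean stay of 1 day, 30 beds.
risk = congestion_risk(lam=20.0, mu=1.0, threshold=30)
print(risk)  # small but non-negligible tail probability
```

The research described above goes well beyond this stationary calculation (transient states, multiple user classes, learned rules), but the Poisson tail is the simplest baseline against which such rules can be compared.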
# Rodrigues Jr. + Wainer: The Relativistic Hamilton-Jacobi Equation for a Massive, Charged and Spinning Particle, its Equivalent Dirac Equation and the de Broglie-Bohm Theory by Waldyr A. Rodrigues Jr. and Samuel A. Wainer (submitted to arXiv on 11 Oct 2016 (v1), last revised 30 Oct 2016 (this version, v2)). PDF download: https://arxiv.org/pdf/1610.03310v2.pdf Abstract: Using the Clifford and the Spin-Clifford formalisms we prove that the classical relativistic Hamilton-Jacobi equation for a charged, massive (and spinning) particle interacting with an external electromagnetic field is equivalent to a Dirac-Hestenes equation satisfied by a class of spinor fields that we call classical spinor fields, characterized by having a constant Takabayashi angle function (equal to 0 or $\pi$). We also investigate a nonlinear Dirac-Hestenes-like equation that comes from a class of generalized classical spinor fields. Finally, we show that the general Dirac-Hestenes equation (which is a representative in the Clifford bundle of the usual Dirac equation) gives a generalized Hamilton-Jacobi equation where the quantum potential satisfies a severe constraint and the "mass of the particle" becomes a variable. Our results can then explain the experimental discrepancies found between predictions of the de Broglie-Bohm theory and recent experiments. We also briefly discuss de Broglie's double solution theory in view of our results, showing that it can be realized, at least in the case of spinning free particles. The paper contains several appendices where notation and proofs of some results of the text are presented. Comments: This version fixes some typos and misprints (in particular in Eq. (51)) and adds Remark 5 for completeness; new references included. Subjects: Mathematical Physics (math-ph) Source: Waldyr A. Rodrigues Jr., email of 02/11/2016 00:39, walrod_AT_mpc.com.br; https://arxiv.org/abs/1610.03310v2
7-47. Integrate.

1. $\int 10 ^ { \pi - 1 } d x$

   What does the graph of $y = 10^{π − 1}$ look like? What will its area function look like?

2. $\int ( 9 ^ { t - 1 } ) d t$

   $\text{Recall that }9^{t-1}=(9^{t})(9^{-1})=\frac{9^{t}}{9}.$

3. $\int \cos(4m − 3)dm = k\sin(4m − 3) + C$

   Find the value of the constant $k$.
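For reference, the three antiderivatives the hints point toward work out as follows (a worked sketch, with $C$ the constant of integration):

```latex
\begin{aligned}
\int 10^{\pi-1}\,dx &= 10^{\pi-1}\,x + C
  && \text{(the integrand is a constant)} \\
\int 9^{t-1}\,dt &= \frac{1}{9}\int 9^{t}\,dt
  = \frac{9^{t-1}}{\ln 9} + C \\
\int \cos(4m-3)\,dm &= \tfrac{1}{4}\sin(4m-3) + C,
  && \text{so } k = \tfrac{1}{4},
\end{aligned}
```

where the last value follows because $\frac{d}{dm}\left[k\sin(4m-3)\right] = 4k\cos(4m-3)$, which matches the integrand only when $4k = 1$.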
# nLab semisimple Lie group ## Examples ### $\infty$-Lie algebras nothing here yet. ## References • A. W. Knapp, Structure theory of semisimple Lie groups, 1997 (pdf) Last revised on June 19, 2014 at 21:20:51. See the history of this page for a list of all contributions to it.
# ComputeReadSpanCoverage

### 1. Introduction

The ComputeReadSpanCoverage walker traverses a set of BAM files to generate genome-wide statistics. The read span coverage is the count of bases between two paired-end reads, not counting the lengths of the reads themselves. For fixed-length reads of length L with ungapped alignments, this is InsertSize - 2*L. The read span coverage is used as an estimate of the power for detecting breakpoints using read pairs. This estimate assumes a model where the aligner is unlikely to align a read to a breakpoint unless the breakpoint is close to the end of the read. Read pairs where the ends align to different sequences are never counted. Read span coverage is computed and reported for each read group, but the output is keyed by sample and library to allow easy roll-up. See also MergeReadSpanCoverage.

### 2. Inputs / Arguments

• -I <bam-file> : The set of input BAM files.
• -md <directory> : The metadata directory. Insert size histograms are loaded from the default isd.hist.bin file in this directory. This argument is also used to load a default list of excluded read groups.
• -maxInsertSize <n> : Read pairs with an insert size greater than n are not counted in span coverage.
• -maxInsertSizeStandardDeviations <sd> : Read pairs with an insert size greater than the median plus sd robust standard deviations are not counted in span coverage.

### 3. Outputs

• -O <span-coverage-file> : Tab-delimited output file (default is stdout).
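The span definition above can be made concrete with a small sketch (not GATK code; the tuple format and function name are invented for illustration):

```python
from collections import defaultdict

def span_coverage(pairs, max_insert_size):
    """Accumulate read-span coverage per (sample, library) key.

    `pairs` is an iterable of (sample, library, insert_size, read_length)
    tuples -- a simplified, hypothetical stand-in for records parsed from a
    BAM file. Pairs whose insert size exceeds max_insert_size are skipped,
    mirroring the -maxInsertSize argument described above.
    """
    totals = defaultdict(int)
    for sample, library, insert_size, read_length in pairs:
        if insert_size > max_insert_size:
            continue
        # Span between the two reads: insert size minus both read lengths,
        # i.e. InsertSize - 2*L for fixed-length, ungapped alignments.
        span = insert_size - 2 * read_length
        if span > 0:
            totals[(sample, library)] += span
    return dict(totals)

pairs = [("s1", "lib1", 500, 100), ("s1", "lib1", 300, 100), ("s1", "lib1", 20000, 100)]
print(span_coverage(pairs, max_insert_size=10000))  # {('s1', 'lib1'): 400}
```

The outlier pair with insert size 20000 is dropped by the cutoff; the other two contribute spans of 300 and 100 bases.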
# I Qs re dark matter when all baryonic matter is in black holes 1. Jan 5, 2017 ### Buzz Bloom In my quote above from the thread I omitted consideration of dark matter because so very little is known about it. I am hoping someone here at the PF might be able to discuss the following questions. QUESTIONS Assume, for the sake of simplicity in the context of this discussion, that some time in the very distant future all the matter in the universe which is not gravitationally part of the Milky Way has moved, due to the universe's expansion, too far away to gravitationally affect the Milky Way's matter. For the reason discussed in the thread cited above (as well as the fact that baryonic matter interacts via photons in such a way that some of its gravitational potential energy is lost via photon radiation), all the Milky Way's baryonic matter will, after another very long period of time, collapse into a single black hole. Since dark matter does not lose gravitational potential energy via photon radiation, I am guessing that after all the baryonic matter is in a single black hole, a lot of the dark matter will remain outside this very large black hole. 1. Is this correct? 2. If so, what is the fate of this outside dark matter? 3. What happens to the mass of the dark matter which had entered the black hole during the evaporation process? Which of the following possibilities do you think is more plausible? a. The Hawking radiation process turns this mass into photons. b. The Hawking radiation process turns this mass into some currently unknown zero-mass particles which are of a type that is a constituent of dark matter. c. Something else happens. I am guessing that it is plausible that the dark matter will be captured by the black hole at a much slower rate than the black hole's matter will escape due to Hawking radiation. 
If so, I think that this means that eventually the black hole will completely evaporate while there is still a lot of dark matter that never got into the black hole. 5. Is this plausibly correct? If so, I think this implies that at that time the universe's contents will be just photons and dark matter. 6. Is this correct? 7. If so, will the dark matter form another black hole? I will much appreciate any responses. Last edited: Jan 5, 2017 2. Jan 5, 2017 ### Staff: Mentor I'm not sure we know enough about dark matter to give answers with much confidence. But I'll try below, using the basic assumption that dark matter interacts gravitationally, but not in any other way, and that it's cold (i.e., not relativistic); i.e., using the basic model of dark matter that is part of our best current cosmological model of the universe. I would say yes, because, as you say, baryonic matter can lose gravitational potential energy much faster than dark matter. It will still lose gravitational potential energy by emitting gravitational waves, so eventually it will all collapse into the black hole. Hawking radiation, at least according to our best current model, does not change its particle composition based on the properties of whatever originally collapsed into the black hole. A hole formed from dark matter will not have Hawking radiation that is any different from that of a hole of the same mass formed from baryonic matter. At least, that's our best current model; but we don't really understand Hawking radiation all that well either. None of them, because Hawking radiation does not just produce photons anyway; photons are the most likely particle to be radiated, but not the only one. The general answer is as given above. I think it's the other way around. You might not realize how long it takes for a black hole with the mass of a galaxy to evaporate by Hawking radiation. 
A hole with the mass of our sun takes around $10^{67}$ years to evaporate, and the time goes like the cube of the mass, so a hole with the mass of the Milky Way, i.e., around $10^{11}$ suns, would take around $10^{100}$ years to evaporate. Even if dark matter can only lose gravitational potential energy via gravitational radiation, it will still lose it many, many, many orders of magnitude faster than that. 3. Jan 5, 2017 ### Buzz Bloom Hi Peter: I thank you very much for your response. It did clear up some confusion I had. The reason I thought only photons would be created by Hawking radiation is what I have been reading in the 1975 Hawking paper, p. 211. (Underlining is mine for easy reference.) Similar results hold for the electromagnetic and linearised gravitational fields. The fields produced on J- by positive frequency waves from J+ have the same asymptotic form as (2.18) but with an extra blue shift factor in the amplitude. This extra factor cancels out in the definition of the scalar product so that the asymptotic forms of the coefficients $\alpha$ and $\beta$ are the same as in Eqs. (2.19) and (2.20). Thus one would expect the black hole also to radiate photons and gravitons thermally. Since gravitons have not yet been verified to exist, I left them out. I also understand that the temperature of the Hawking radiation is inversely proportional to the mass of the black hole. Therefore, near the end of the evaporation the mass will become small enough that massive particles would be created by the very high temperature of the radiation. However, I am guessing that the final ratio of the number of particles with non-zero mass (including neutrinos) to the number of photons would be infinitesimal. Is this correct? Regards, Buzz 4. Jan 5, 2017 ### Staff: Mentor Did you note the word "also" in what you quoted? He said the hole should also radiate photons (and gravitons). "Also" in addition to what? 
In other words, Hawking had already shown that the hole should radiate something besides photons (IIRC it was scalar particles, particles with spin zero). So his paper does not show that only photons (if we leave out gravitons) should be radiated. I also don't think gravitons should be left out, because gravitational waves have been verified to exist, and Hawking's paper was not making use of any particular particle-like aspects of the fields, so "gravitons" in his paper really means "gravitational waves". 5. Jan 5, 2017 ### Staff: Mentor I would expect the ratio of, say, electrons to photons to be very small. I'm not sure about neutrinos, because their masses are so much smaller than the mass of the electron; probably the ratio is still small (though much larger than the electron-to-photon ratio), but I would have to look at the math to be sure. 6. Jan 7, 2017 ### timmdeeg Does this necessarily require that clouds of dark matter are rotating asymmetrically with respect to their respective rotational axes? Or is there any other mechanism, such as contracting asymmetrically (but I think they don't contract anyway)? I mean, if such clouds just moved around randomly they wouldn't have a time-dependent mass quadrupole moment, correct? 7. Jan 7, 2017 ### Staff: Mentor Certainly they would. Not having a time-dependent mass quadrupole moment requires a very coordinated, symmetrical motion. It's certainly not something that's going to occur randomly, or even something that's going to occur non-randomly except in highly idealized models. In practical terms, any system of gravitating masses is going to emit some gravitational waves. Yes, in ordinary terms the emitted power is extremely small, way too small to matter; but it's still going to cause the systems to dissipate a lot, a lot sooner than $10^{100}$ years. 8. Jan 8, 2017 ### timmdeeg But this is not possible forever, right? Because the amount of potential energy which is convertible into radiation is finite. 
Would the particles of which dark matter consists asymptotically reach a final stage of "very coordinated, symmetrical motion"? If true, I can only think of comoving particles here, as they can't collapse to form a body. 9. Jan 8, 2017 ### Staff: Mentor Right; eventually the system will form a black hole and settle down to a stationary state in which no further GWs are emitted. That will happen in a time many orders of magnitude smaller than the time it takes for such a hole to evaporate by Hawking radiation. Not before they form a black hole. 10. Jan 9, 2017 ### timmdeeg Thanks, but this is surprising. Or do you say the dark matter particles will vanish over time because they fall into an already existing black hole formed by baryonic matter? But even then I'd expect still-stable orbits. I mean, due to the lack of electromagnetic forces between dark matter particles no "stickiness" should occur; their velocities therefore shouldn't decrease, and thus shouldn't allow them to form compact objects. 11. Jan 9, 2017 ### Staff: Mentor Not necessarily; they could, but they could also form black holes on their own. I think at this point you need to stop waving your hands and do some math. I've given you the time it takes for black holes of the relevant mass to evaporate by Hawking radiation. Can you show a reasonable calculation that gives a longer time for dark matter to collapse into black holes? EM forces are not the only forces. Dark matter is still "sticky" due to gravity and can still emit gravitational waves. You can't ignore that on the time scales we are considering. Stop waving your hands and do some math if you want to really consider the question. 12. Jan 9, 2017 ### timmdeeg Oh, I understand, I should have seen that myself. Sorry for taking your time unnecessarily, and thanks for clarifying. 13. Jan 11, 2017 ### Staff: Mentor Most of the stars (and gas) of the Milky Way will be ejected; only a tiny part will fall in - angular momentum conservation shows this quickly. 
For a binary system with circular orbits, the orbital decay due to gravitational waves is given by this formula; plugging in 1 billion solar masses for the central black hole, 1 billion solar masses for dark matter (a guessed number for the non-homogeneous component) and 20000 light years, we get an orbital decay of about 1 fm/s. Fast enough, give or take 20 orders of magnitude. If we take a single particle with a mass of 100 GeV, the orbital decay drops to $10^{-79}$ m/s, or $\sim 3 \times 10^{91}$ years to fall in. That is the lifetime of a black hole with 130 million solar masses. Will our central black hole, currently at 4 million solar masses, get that large? I don't know. 14. Jan 11, 2017 ### timmdeeg Your examples make me curious. What is the decay time for a pair of binary dark matter particles? I will try to calculate that, hopefully at the weekend. 15. Jan 14, 2017 ### timmdeeg If I didn't make a mistake, I have obtained these results (of course disregarding the accelerated expansion of the universe) for the orbital decay time of two masses separated by 100 million light years initially (roughly the diameter of the Milky Way): Two dark matter particles, 100 GeV each: $10^{223}$ years. A dark matter particle and a black hole with 180 billion solar masses (assuming the Milky Way has formed a black hole already): $10^{98}$ years. 16. Jan 14, 2017 ### Staff: Mentor 100 million light years is 1000 times the diameter of the Milky Way, and on the scale of superclusters. The first scenario won't happen: At a binding energy of $10^{-65}$ eV, the particles are bound too weakly to be actually bound: even very weak interactions with the CMB will make them move essentially randomly, which means they will probably move away from each other until expansion separates them forever. The accelerated expansion, assuming it stays like this, will also keep the CMB temperature finite. 180 billion solar masses looks way too high, as most stars will get ejected. 17. Jan 14, 2017 ### timmdeeg Sorry, and thanks for correcting.
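As an aside, "this formula" for circular-orbit decay is presumably the standard Peters (1964) expression, and the ~1 fm/s figure quoted above can be reproduced numerically. A rough sketch (constants and function names are mine, not from the thread):

```python
# Peters (1964) decay of the separation a for a circular binary:
#   da/dt = -(64/5) * G^3 * m1 * m2 * (m1 + m2) / (c^5 * a^3)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
LY = 9.461e15      # light year, m

def orbital_decay_rate(m1, m2, a):
    """Rate of change of the orbital separation (m/s) due to GW emission."""
    return -(64 / 5) * G**3 * m1 * m2 * (m1 + m2) / (C**5 * a**3)

# 1 billion solar masses orbiting 1 billion solar masses at 20000 light years:
rate = orbital_decay_rate(1e9 * M_SUN, 1e9 * M_SUN, 20000 * LY)
# |rate| comes out at a few 1e-15 m/s, i.e. a few fm/s, consistent with the post.
```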
I erroneously wrote 100 million but calculated using 100,000 light years. Yes, not in this universe. That is why I mentioned "disregarding the accelerated expansion". Do you consider the merger of the Milky Way and Andromeda? I was assuming the theoretical case that most of the stars in a galaxy like ours form a black hole, due to the emission of gravitational waves. 18. Jan 14, 2017 ### Staff: Mentor "Thermal" ejection of stars will happen on timescales shorter than gravitational-wave emission. Wikipedia references one arXiv article I can access and one book I don't have, claiming that 90%-99% of all stellar remnants will be ejected. That is not identical to 90%-99% of the mass, because lighter stellar remnants are more likely to get ejected. The rest falls into the central black hole. The arXiv article also discusses some points brought up here, like suppression of gravitational waves if the mass distribution is homogeneous. 19. Jan 14, 2017 ### timmdeeg I see, good to know, thanks. Very interesting article.
In yesterday's post, I presented a Python script to convert Pelican preamble files to YAML for Hugo. Some UTF-8 files have a BOM marker at the beginning of the file. The script (as a true quick and dirty solution) doesn't check for the presence of such a marker, so it cannot detect the Title element even if it exists. I added an fm = fm.strip('\ufeff') line to clear the BOM marker from a line if it exists. There is an editor called bvi to edit binary files in hex format, similar to the vi editor. It's possible to extract a section from a markdown file with a command like sed -n -e '/^#/,/^#/p'. It's possible to use line numbers instead of regexes as well, and the p at the end is the print command, which can be replaced by, e.g., d to delete the lines.
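The BOM fix can be sketched like this (a minimal illustration; the function name is mine, not from the original script). Opening the file with encoding="utf-8-sig" instead would strip the BOM automatically:

```python
def read_front_matter(path):
    """Read preamble lines from a file, stripping a UTF-8 BOM if present."""
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    if lines:
        # A file saved as UTF-8-with-BOM shows up with '\ufeff' glued to the
        # start of the first line; strip it so a prefix like 'Title:' matches.
        lines[0] = lines[0].lstrip("\ufeff")
    return lines
```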
## Dr. Ahmed G. Abo-Khalil

Electrical Engineering Department

## Fail-stop protocol

1. Round 1: Generate the state $|Coin_i\rangle = \frac{1}{\sqrt{2}}|0,0,\ldots,0\rangle + \frac{1}{\sqrt{2}}|1,1,\ldots,1\rangle$ on n qubits and send the kth qubit to the kth player, keeping one part.
2. Generate the state $|Leader_i\rangle = \frac{1}{n^{3/2}}\sum_{a=1}^{n^3}|a,a,\ldots,a\rangle$ on n qubits, an equal superposition of the numbers between 1 and $n^3$. Distribute the n qubits between all the players.
3. Receive the quantum messages from all players and wait for the next communication round, thus forcing the adversary to choose which messages were passed.
4. Round 2: Measure (in the standard basis) all $Leader_j$ qubits received in round 1. Select the player with the highest leader value (ties broken arbitrarily) as the "leader" of the round. Measure the leader's coin in the standard basis.
5. Set the output of the QuantumCoinFlip protocol: $v_i$ = measurement outcome of the leader's coin.
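Setting the quantum mechanics aside, the leader-election logic of rounds 1-2 can be mimicked classically. This toy sketch (names are mine) only illustrates the control flow, not the fault tolerance that the entangled states provide:

```python
import random

def coin_flip_round(n, seed=None):
    """Classically simulate one round: draw leader values in 1..n^3, pick the
    player with the highest value as leader (ties broken by lowest index),
    and output that player's coin."""
    rng = random.Random(seed)
    leader_values = [rng.randrange(1, n**3 + 1) for _ in range(n)]  # measured Leader values
    coins = [rng.randrange(2) for _ in range(n)]                    # measured coins
    leader = max(range(n), key=lambda i: leader_values[i])
    return coins[leader]
```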
# R script to calculate QIC for Generalized Estimating Equation (GEE) Model Selection

[UPDATE: IMPROVED CODE AND EXTENSIONS ARE NOW AVAILABLE ON https://github.com/djhocking/qicpack INCLUDING AS AN R PACKAGE]

Generalized Estimating Equations (GEE) can be used to analyze longitudinal count data; that is, repeated counts taken from the same subject or site. This is often referred to as repeated measures data, but longitudinal data often has more repeated observations. Longitudinal data arises from studies in virtually all branches of science. In psychology or medicine, repeated measurements are taken on the same patients over time. In sociology, schools or other socially distinct groups are observed over time. In my field, ecology, we frequently record data from the same plants or animals repeatedly over time. Furthermore, the repeated measures don't have to be separated in time. A researcher could take multiple tissue samples from the same subject at a given time. I often repeatedly visit the same field sites (e.g. same patch of forest) over time.

If the data are discrete counts of things (e.g. number of red blood cells, number of acorns, number of frogs), the data will generally follow a Poisson distribution. Longitudinal count data, following a Poisson distribution, can be analyzed with Generalized Linear Mixed Models (GLMM) or with GEE. I won't get into the computational or philosophical differences between the conditional, subject-specific estimates associated with GLMM and the marginal, population-level estimates obtained by GEE in this post. However, if you decide that GEE is right for you (I have a paper in preparation comparing GLMM and GEE), you may also want to compare multiple GEE models. Unlike GLMM, GEE does not use full likelihood estimates, but rather relies on a quasi-likelihood function. Therefore, the popular AIC approach to model selection doesn't apply to GEE models. Luckily, Pan (2001) developed an equivalent statistic, QIC, for model comparison.
Like AIC, it balances model fit with model complexity to pick the most parsimonious model. Unfortunately, there is currently no QIC package in R for GEE models. geepack is a popular R package for GEE analysis, so I wrote the short R script below to calculate Pan's QIC statistic from the output of a GEE model run in geepack using the geese function. It currently employs the Moore-Penrose generalized matrix inverse through the MASS package. I left in my original code using the identity matrix, but it is preceded by a pound sign so it doesn't run.

[edit: April 10, 2012] The input for the QIC function needs to come from the geeglm function (as opposed to geese) within geepack.

I hope you find it useful. I'm still fairly new to R and this is one of my first custom functions, so let me know if you have problems using it or if there are places it can be improved. If you decide to use this for analysis in a publication, please let me know, just for my own curiosity (and ego boost!).

######################################################################################
# QIC for GEE models
# Daniel J. Hocking
# 07 February 2012
# Refs:
#   Pan (2001)
#   Liang and Zeger (1986)
#   Zeger and Liang (1986)
#   Hardin and Hilbe (2003)
#   Dormann et al. (2007)
#   http://www.unc.edu/courses/2010spring/ecol/562/001/docs/lectures/lecture14.htm
######################################################################################

# Poisson QIC for geeglm{geepack} output
# Ref: Pan (2001)
QIC.pois.geeglm <- function(model.R, model.indep) {
  library(MASS)

  # Fitted and observed values for the quasi-likelihood
  mu.R <- model.R$fitted.values
  # alt: X <- model.matrix(model.R)
  #      names(model.R$coefficients) <- NULL
  #      beta.R <- model.R$coefficients
  #      mu.R <- exp(X %*% beta.R)
  y <- model.R$y

  # Quasi-likelihood for Poisson
  quasi.R <- sum((y * log(mu.R)) - mu.R) # poisson()$dev.resids with scale and weights = 1

  # Trace term (penalty for model complexity):
  # Omega-hat(I) via the Moore-Penrose generalized inverse (MASS package)
  AIinverse <- ginv(model.indep$geese$vbeta.naiv)
  # Alt: AIinverse <- solve(model.indep$geese$vbeta.naiv) # solve via identity
  Vr <- model.R$geese$vbeta
  trace.R <- sum(diag(AIinverse %*% Vr))
  px <- length(coef(model.R)) # number of non-redundant columns in the design matrix

  # QIC
  QIC <- (-2) * quasi.R + 2 * trace.R
  QICu <- (-2) * quasi.R + 2 * px # approximation assuming the model is structured correctly
  output <- c(QIC, QICu, quasi.R, trace.R, px)
  names(output) <- c('QIC', 'QICu', 'Quasi Lik', 'Trace', 'px')
  output
}

I am a USGS Mendenhall Postdoctoral Fellow at the S.O. Conte Anadromous Fish Research Center. I am interested in the use of statistical models in ecology and population biology. I model the abundance and occupancy of organisms in response to land-use and climate change. All opinions are my own and do not represent those of the government or any other organization.

Posted on March 24, 2012, in Uncategorized. Bookmark the permalink. 9 Comments.

1. Liz Hi, thanks for publishing this. But I'm confused, what is the second argument to the function? Thanks!
Liz • Hi Liz, I should have explained it in the post. Pan's (2001) QIC formulation compares the GEE fit with an autoregressive or exchangeable or other parametrization against the same GEE model fit assuming an independence correlation structure. The first argument is the parametrization of interest and the second argument is the same model fit with an "independence" correlation structure using geepack. I actually have a better function written now and am in the process of creating an R package for this, where the independent model is calculated within the function, so only one argument is required. I will post it on my blog when I get it finished (early this winter likely, now that the field season is winding down). Thanks for your interest. Let me know if you have other questions. Dan 2. Vinícius Menarin Hi Daniel, If I want to calculate the QIC for the independent correlation structure, how can I do it? Is it possible? qic.pois.geeglm(ajust_independent, ajust_independent) I obtained a QIC value using it, but I'm not sure about that. What I need is a set of QIC values for some different correlation structures. Thanks! Vinícius • Hi Vinicius, Thanks for the interest in the QIC code. It's still a work in progress. I just updated the code so it should be much improved and provided an example of how to use it. You can just run it with a geeglm model that used an "independence" correlation structure to get the QIC. The new post and code can be found at https://danieljhocking.wordpress.com/2012/11/15/gee-qic-update/ It shows how to run the QIC function on multiple models that use different correlation structures or different predictor variables using the sapply function. In the new code you don't have to run a separate independent model to input into the QIC function; the independent-structure model is automatically run within the QIC function. However, you can still run an independent correlation structure model through the QIC function to get a QIC value.
It seems circular, but in theory it should provide a reliable QIC value that you can compare with other models that differ in correlation structure. The new function also includes more distribution options (normal, binomial, Poisson, or gamma). Negative binomial is unnecessary in GEE models because there is already an inherent overdispersion term (Phi). Let me know if you have more questions or if this doesn't work for you. -Dan (Daniel J. Hocking, 114 James Hall, Department of Natural Resources & the Environment, University of New Hampshire, Durham, NH 03824, https://danieljhocking.wordpress.com/ "Somewhere, something incredible is waiting to be known" – Carl Sagan)

3. Hi Daniel. I want to ask about QIC in GEE. I computed QIC for my data, but the result is always that the independence correlation structure is best; my data is panel data. Why? Thank you, and sorry, I can't speak English well. Thanks, Yuni Y., Indonesia • Hi Yuni, Unfortunately, I haven't worked with GEE models enough to get a feel for when independence correlation structures are selected over exchangeable or autoregressive ones. I think I remember reading once that QIC shouldn't be used to select correlation structures, but should only be used for comparing different models once the error structure is specified. I'm not sure whether that's true or really a best practice. I'll let you know if I figure it out or if I figure out when the independence structure tends to be selected. Sorry I don't have more answers for you. I've just been dipping my feet in the GEE waters and still don't have a good feel for them. I've used mixed models and hierarchical Bayesian models more frequently for my research. Best of luck, Dan

4. The QIC statistic has been demonstrated in several journal articles to be in fact a poor selector of the appropriate GEE correlation structure. Two new statistics have been designed through simulations to better select the best structure.
See Hardin & Hilbe, Generalized Estimating Equations, 2nd ed., Chapman & Hall/CRC Press (2013) and Shults & Hilbe, Quasi-Least Squares Regression, Chapman & Hall/CRC Press (2014). • Thanks for the information. Does QIC seem appropriate for selection of the predictor variables if the correlation structure is "correct"? I'll take a look at the two books and the papers referenced therein. I'll have to see if I can find code for these new methods in R. Otherwise maybe I'll work something up. We'll see; I haven't been using GEE much lately. Mostly GLMM and other hierarchical regression models.
# Coulomb operator

Article Id: WHEBN0003426870 · Title: Coulomb operator · Author: World Heritage Encyclopedia · Language: English · Publisher: World Heritage Encyclopedia

### Coulomb operator

The Coulomb operator, named after Charles-Augustin de Coulomb, is a quantum mechanical operator used in the field of quantum chemistry. Specifically, it is a term found in the Fock operator. It is defined as:

$$\widehat{J}_j(1)\, f(1) = f(1) \int \frac{\left| \varphi_j(2) \right|^2}{r_{12}} \, dv_2$$

where

- $\widehat{J}_j(1)$ is the one-electron Coulomb operator defining the repulsion resulting from electron $j$,
- $f(1)$ is a one-electron wavefunction being acted upon by the Coulomb operator,
- $\varphi_j(2)$ is the one-electron wavefunction of the $j$th electron,
- $r_{12}$ is the distance between electrons 1 and 2.
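As a concrete check (my own illustration, not from the article): for a hydrogenic 1s orbital $\varphi(r) = e^{-r}/\sqrt{\pi}$ in atomic units, the integral has the closed form $V(r) = 1/r - e^{-2r}(1 + 1/r)$, which a direct radial quadrature reproduces:

```python
import math

def coulomb_potential_1s(r1, n=20000, rmax=20.0):
    """Evaluate V(r1) = integral of |phi_1s(2)|^2 / r12 dv2 on a radial grid.
    For a spherical density the angular integral reduces to the shell form
    V(r1) = (1/r1) * int_0^{r1} rho 4 pi r^2 dr + int_{r1}^inf rho 4 pi r dr."""
    h = rmax / n
    inner = outer = 0.0
    for i in range(n):
        r = (i + 0.5) * h                         # midpoint rule
        shell = 4.0 * r * r * math.exp(-2.0 * r)  # 4 pi r^2 |phi|^2, integrates to 1
        if r < r1:
            inner += shell * h
        else:
            outer += shell / r * h
    return inner / r1 + outer
```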
• CommentRowNumber1. • CommentAuthorUrs • CommentTimeMay 23rd 2018 • CommentRowNumber2. • CommentAuthorUrs • CommentTimeJul 23rd 2019 (hope the link for the first author is right?) • CommentRowNumber3. • CommentAuthorUrs • CommentTimeJul 23rd 2019 • CommentRowNumber4. • CommentAuthorUrs • CommentTimeDec 20th 2019 on potential issues with the non-abelian DBI action • CommentRowNumber5. • CommentAuthorUrs • CommentTimeDec 20th 2019 for more problems with the non-abelian DBI action • CommentRowNumber6. • CommentAuthorUrs • CommentTimeDec 20th 2019 • (edited Dec 20th 2019) • W. Chemissany, On the way of finding the non-Abelian Born-Infeld theory, 2004 (spire:1286212 pdf) • CommentRowNumber7. • CommentAuthorUrs • CommentTimeJan 1st 2020 • T. Daniel Brennan, Christian Ferko, Savdeep Sethi, _A Non-Abelian Analogue of DBI from $T \bar T$ (arXiv:1912.12389) • CommentRowNumber8. • CommentAuthorUrs • CommentTimeMar 16th 2020 Have added more original references, in particular the very first • Max Born, Leopold Infeld, Foundations of the New Field Theory, Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character, Vol. 144, No. 852 (Mar. 29, 1934), pp. 425-451 (jstor:2935568) I have also added pointer to which everyone cites. But looking through this I don’t see anything like the DBI action in there (?) • CommentRowNumber9. • CommentAuthorUrs • CommentTimeMar 16th 2020 I have given the entry more of an actual Idea-section.
Now it reads as follows: What is known as Born-Infeld theory (Born-Infeld 34, often also attributed to Dirac 62 and abbreviated “DBI theory”) is a deformation of the theory of electromagnetism which coincides with ordinary electromagnetism for small excitations of the electromagnetic field but is such that there is a maximal value for the field strength which can never be reached in a physical process. Just this theory happens to describe the Chan-Paton gauge field on single D-branes at low energy, as deduced from open string scattering amplitudes (Fradkin-Tseytlin 85, Abouelsaood-Callan-Nappi-Yost 87, Leigh 89). In this context the action functional corresponding to Born-Infeld theory arises as the low-energy effective action on the D-branes, and this is referred to as the DBI-action. This is part of the full Green-Schwarz action functional for super D-branes, being a deformation of the Nambu-Goto action-summand by the field strength of the Chan-Paton gauge fields. On coincident D-branes, where one expects gauge enhancement of the Chan-Paton gauge field to a non-abelian gauge group, a further generalization of the DBI-action to non-abelian gauge fields is expected to be an analogous deformation of that of non-abelian Yang-Mills theory. A widely used proposal is due to Tseytlin 97, Myers 99, but a derivation from string theory of this non-abelian DBI action is lacking; and it is in fact known to be in conflict, beyond the first few orders of correction terms, with effects argued elsewhere in the string theory literature (Hashimoto-Taylor 97, Bain 99, Bergshoeff-Bilal-Roo-Sevrin 01). The issue remains open. • CommentRowNumber10. 
• CommentAuthorUrs • CommentTimeMar 16th 2020 Have spelled out detailed proof/computation (here) that the determinant in the DBI action comes out as $det( \eta + F ) \;=\; - 1 + \tfrac{1}{6} \underset{ \mathclap{ {\color{blue}\text{Lagrangian of}} \atop {\color{blue}\text{elecromagnetism}} } }{ \underbrace{ (F \wedge \star F) } } / dvol + \underset{ {\color{blue}\text{correction}} \atop {\color{blue}\text{term}} }{ \underbrace{ \big( 4! (F\wedge F) / \mathrm{dvol} \big)^2 } } \,,$ (I have not yet found a single reference that would bother to go through this derivation. If anyone has a pointer to a reference that does, let’s add it.) • CommentRowNumber11. • CommentAuthorDavid_Corfield • CommentTimeMar 17th 2020 Put ’t’ in ’electromagnetism’. Just mentioning in case you have copied this formula in a paper. • CommentRowNumber12. • CommentAuthorUrs • CommentTimeMar 17th 2020 Thanks. And I just fixed a coefficient prefactor. • CommentRowNumber13. • CommentAuthorUrs • CommentTimeMar 17th 2020 I think I have now proof – at least for the special case of constant field strength – that the super-exceptional correction term to the M5-Lagrangian (second but last of the open issues listed at the end of arxiv:1908.00042) is indeed proportional to the first DBI-correction term. I put the computation in the Sandbox. If this is still true tomorrow morning, I’ll polish it up and expand. • CommentRowNumber14. • CommentAuthorDavid_Corfield • CommentTimeMar 18th 2020 Still true? • CommentRowNumber15. • CommentAuthorUrs • CommentTimeMar 18th 2020 Yes, I think so. Now to compute the first generalization, to field strengths that are not necessarily constant, but linear functions of the coordinates. This will pick up a “higher derivative correction”. So to check now if that is also as expected (here). • CommentRowNumber16. 
• CommentAuthorDavid_Corfield • CommentTimeMar 18th 2020 • (edited Mar 18th 2020) Haven’t any time to look at the moment, but the coefficient of the second term on RHS is 1/6 in Lemma 2.1, but then 1/2 in the proof. • CommentRowNumber17. • CommentAuthorUrs • CommentTimeMar 18th 2020 • (edited Mar 18th 2020) Thanks, fixed now. I had fixed it in the proof while writing the proof, forgetting to fix also in the statement. I have now also fixed the factor of $4!$ in front of $F \wedge F$ to $\tfrac{1}{2}$. (This comes from the formula for the Pfaffian, here) • CommentRowNumber18. • CommentAuthorUrs • CommentTimeMar 26th 2020 added this sentence to the end of the Idea-section: When the D-branes in question are interpreted as flavor branes, then the maximal/critical value of the electric field which arises from the DBI-action has been argued to reflect, via holographic QCD, the Schwinger limit beyond which the vacuum polarization caused by the electromagnetic field leads to deconfinement of quarks. And added pointer to relevant references: • Koji Hashimoto, Takashi Oka, Vacuum Instability in Electric Fields via AdS/CFT: Euler-Heisenberg Lagrangian and Planckian Thermalization, JHEP 10 (2013) 116 (arXiv:1307.7423) • Koji Hashimoto, Takashi Oka, Akihiko Sonoda, Magnetic instability in AdS/CFT : Schwinger effect and Euler-Heisenberg Lagrangian of Supersymmetric QCD, J. High Energ. Phys. 2014, 85 (2014) (arXiv:1403.6336) • Koji Hashimoto, Takashi Oka, Akihiko Sonoda, Electromagnetic instability in holographic QCD, J. High Energ. Phys. 2015, 1 (2015) (arXiv:1412.4254) • Xing Wu, Notes on holographic Schwinger effect, J. High Energ. Phys. 2015, 44 (2015) (arXiv:1507.03208, doi:10.1007/JHEP09(2015)044) • Kazuo Ghoroku, Masafumi Ishihara, Holographic Schwinger Effect and Chiral condensate in SYM Theory, J. High Energ. Phys. 2016, 11 (2016) (doi:10.1007/JHEP09(2016)011) • CommentRowNumber19. 
• CommentAuthorUrs • CommentTimeMar 26th 2020 and added pointer to the original: • CommentRowNumber20. • CommentAuthorUrs • CommentTimeMar 27th 2020 • (edited Mar 27th 2020) added expression for the Born-Infeld Lagrangian on 4d Minkowski spacetime in terms of the electric and magnetic field strengths: Consider now the Faraday tensor $F$ expressed in terms of the electric field $\vec E$ and magnetic field $\vec B$ as \begin{aligned} F_{0 i} & = \phantom{+} E_i \\ F_{i 0} & = - E_i \\ F_{i j} & = \epsilon_{i j k} B^k \end{aligned} Then the general expression for the DBI-Lagrangian reduces to (Born-Infeld 34, p. 437, review in Nastase 15, 9.4): $\mathbf{L}_{BI} \;=\; \sqrt{ - det( \eta + F ) } \, dvol_4 \;=\; \sqrt{ 1 - ( \vec E \cdot \vec E - \vec B \cdot \vec B ) - (\vec B \cdot \vec E)^2 } \, dvol_4$ Notice that this being well-defined, in that the square root is a real number, hence its argument a non-negative number, means that \begin{aligned} & - \mathrm{det} \big( (\eta_{\mu \nu}) + (F_{\mu \nu}) \big) \geq 0 \\ & \Leftrightarrow \; 1 \;-\; (E \cdot E - B \cdot B) \;-\; (B \cdot E)^2 \;\geq\; 0 \\ & \Leftrightarrow \; E^2 - B^2 + E^2 B_{\parallel}^2 \;\leq 1\; \\ & \Leftrightarrow \; E^2 \;\leq\; \frac{ 1 + B^2 }{ 1 + B_{\parallel}^2 } \end{aligned} where $B_{\parallel} \coloneqq \tfrac{1}{\sqrt{E\cdot E}} B \cdot E$ is the component of the magnetic field which is parallel to the electric field. The resulting maximal electric field strength $E_{crit} \;\coloneqq\; \sqrt{ \frac{ 1 + B^2 }{ 1 + B_{\parallel}^2 } }$ turns out to be the Schwinger limit beyond which the electric field would cause deconfining quark-pair creation (Hashimoto-Oka-Sonoda 14b, (2.17)). • CommentRowNumber21. • CommentAuthorUrs • CommentTimeMar 29th 2020 have instantiated the string-tension factor (previously suppressed) and made more explicit the cross-link between the DBI critical field strength and the Schwinger limit • CommentRowNumber22.
• CommentAuthorUrs • CommentTimeApr 2nd 2020 added pointer to more of the precursor proposals for the non-abelian DBI-action: Via a plain trace: • T. Hagiwara, A non-abelian Born-Infeld Lagrangian, J. Phys. A14:3059, 1981 (doi:10.1088/0305-4470/14/11/027) Via an antisymmetrized trace: • Philip Argyres, Chiara Nappi, Spin-1 effective actions from open strings, Nuclear Physics B, Volume 330, Issue 1, 22 January 1990, Pages 151-173 (doi:10.1016/0550-3213(90)90305-W) diff, v37, current • CommentRowNumber23. • CommentAuthorUrs • CommentTimeApr 2nd 2020 • CommentRowNumber24. • CommentAuthorUrs • CommentTimeApr 2nd 2020 and this one: • CommentRowNumber25. • CommentAuthorUrs • CommentTimeOct 21st 2020
# If a person stands at the origin and throws three balls X, Y, Z, each with equal velocity ‘u’ but making different angles with the horizontal, which ball spends the most time in the air? Which ball would probably have the maximum range? (Given: the angle made by Z is greater than that of Y, and Y greater than that of X.)

Grade: 11

## 1 Answer

Khimraj, 3007 Points, 3 years ago

Time of flight $T = 2u\sin\theta/g$. Since u is equal for all, the ball with the larger angle spends more time in the air, so the correct answer is ball Z.

Range $R = u^2\sin 2\theta/g$. Here, if $\theta < 45°$ for all three balls, the ball with the maximum angle has the maximum range; and if $\theta > 45°$ for all three balls, the ball with the minimum angle has the maximum range. Hope it clears.
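The two formulas in the answer are easy to check numerically (a quick sketch; the values of u and g are arbitrary illustration values):

```python
import math

g = 9.8  # m/s^2

def time_of_flight(u, theta_deg):
    """T = 2 u sin(theta) / g for a projectile launched from the ground."""
    return 2 * u * math.sin(math.radians(theta_deg)) / g

def horizontal_range(u, theta_deg):
    """R = u^2 sin(2 theta) / g."""
    return u**2 * math.sin(math.radians(2 * theta_deg)) / g

# With equal speed u, the largest launch angle stays in the air longest
# (ball Z), while the range is maximized at 45 degrees.
```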
# Hackerman's Hacking Tutorials

## The knowledge of anything, since all things have causes, is not acquired or complete unless it is known by its causes. - Avicenna

Dec 2, 2019 - 11 minute read - Comments - Burp Burp extension

# Developing and Debugging Java Burp Extensions with Visual Studio Code

A few days ago, I released the Bug Diaries Burp extension. It's a Burp extension that aims to mimic Burp issues for the community (free) version. For reasons, I decided to rewrite it in Java. This is the first part of my series on what I learned switching to Java. This part discusses how my environment is set up for development with Visual Studio Code: things like auto-completion, Gradle builds and, most importantly, debugging. To skip some of the steps in the blog, clone the following repository (I still recommend doing the steps yourself if you are not familiar with Gradle and Burp development):

# Bug Diaries in Python

The original extension was in Python. Until then, all of my Burp extensions had been in Python. I documented what I learned: I had a lot of problems enabling the right-click functionality on Burp's IMessageEditors. Long story short, I decided to rewrite the extension in Java instead. This is how my development VM is arranged.

# Install Visual Studio Code

1. Install VS Code.
2. Install the Java Extension Pack.

There is also a VS Code installer for Java developers at https://aka.ms/vscode-java-installer-win. I did not use it.

# Install OpenJDK

I use OpenJDK because of the shitty licensing requirements of Oracle.

1. Download OpenJDK (I used AdoptOpenJDK 11) and run the installer, or extract the archive manually.
2. If you are extracting the OpenJDK manually, modify the environment variables:
• Set JAVA_HOME to C:\Program Files\path\to\jdk\. (Do not include the bin directory).
• For my JDK it was C:\Program Files\AdoptOpenJDK\jdk-11.0.5.10-hotspot.
• Add the bin directory for the JDK installation to PATH.
Now java -version should return something like (remember to open a new command line after setting the PATH):

openjdk version "11.0.5" 2019-10-15
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.5+10)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.5+10, mixed mode)

Note: If you install JDK 13 or newer, you cannot use Burp's exe file to load your extension. As of December 2019, Burp's exe file uses a bundled JRE which is built with JDK 11 (class file version 55.0). If you try to load an extension that is built with a later Java version, you will get this error:

java.lang.UnsupportedClassVersionError: burp/BurpExtender has been compiled by a more recent version of the Java Runtime (class file version 57.0), this version of the Java Runtime only recognizes class file versions up to 55.0

Solution:

1. Use an earlier version to build your extension. Recommended.
2. Run Burp's jar file directly using your installed Java.
• I actually don't know if this works. If you try it and it works, please let me know.

# Install Gradle

Gradle does not have an installer either.

1. Download the Gradle binary distribution.
2. Extract it to C:\Program Files (the instructions say C:\ but I prefer Program Files).
• In my VM it ended up at C:\Program Files\gradle-6.0.1.
3. Add the bin directory to PATH.
• C:\Program Files\gradle-6.0.1\bin

Now gradle -version should return something like:

gradle -version
------------------------------------------------------------
Gradle 6.0.1
------------------------------------------------------------
Build time: 2019-11-18 20:25:01 UTC
Kotlin: 1.3.50
Groovy: 2.5.8
Ant: Apache Ant(TM) version 1.10.7 compiled on September 1 2019
OS: Windows 10 10.0 amd64

Create a directory for extension development. In the root of this directory run the following command:

• gradle init --type basic
• Press Enter twice to select the defaults.
• If you are creating an extension with a specific name, customize the project name here. You can later change it in settings.gradle.

This will create a bunch of directories and files.
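The UnsupportedClassVersionError mentioned above can be diagnosed without running Burp: a compiled .class file begins with the magic number 0xCAFEBABE followed by minor and major version fields, and the major version (55 = Java 11, 57 = Java 13) is what the bundled JRE checks. A small sketch (my own helper, not part of the blog's tooling):

```python
import struct

def class_file_major_version(path):
    """Return the class-file major version (55 = Java 11, 57 = Java 13)."""
    with open(path, "rb") as f:
        # Header layout: u4 magic, u2 minor_version, u2 major_version (big-endian).
        magic, minor, major = struct.unpack(">IHH", f.read(8))
    if magic != 0xCAFEBABE:
        raise ValueError(f"{path} is not a Java class file")
    return major
```

Running it on a class extracted from your extension's jar tells you which JDK it needs.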
Open build.gradle and paste the following. Read the comments inside the file to see what each section does. The most important section is adding the Maven repository for the Burp Extender interface. This gives us build support and the equally important code completion with IntelliCode (iT's Ai BaSeD!!1!). Any extra dependencies are added similarly to the Burp Extender interface. For example, Google's Gson version 2.8.6 can be added like this:

dependencies {
    // Add the Burp Extender interface
    compile 'net.portswigger.burp.extender:burp-extender-api:2.1'
    // Add Google's Gson
    compile 'com.google.code.gson:gson:2.8.6'
}

The Gradle Wrapper is a way to get reliable builds regardless of the local Gradle version. Note that you need Gradle installed to initiate the Wrapper. Run gradle wrapper in the extension directory. To build the project with the Wrapper, replace gradle in your commands with gradlew (*nix) or gradlew.bat (Windows). For example, gradlew.bat build.

# Creating a Skeleton Extension

1. Create the src\burp directory. This directory will contain the burp package.
• All packages will be under src.
2. In src\burp create a file named BurpExtender.java.
• This file will be the extension's entry point. Extension directory at this step
3. Edit BurpExtender.java and add this code.

Note: If your extension only has one package (or a few files), you can put all your files inside src directly.

# Setting up VS Code Tasks

To make our life easier, we are going to assign the bigJar Gradle task to the default build task in VS Code. This is important if your extension uses non-Burp dependencies (like gson above); in this case you need to publish this fat jar file.

1. Press Ctrl+Shift+P or F1 to open the VS Code command palette.
2. Type task and select Configure Default Build Task.
3. Select Create tasks.json file from template.
4. Select Others.
• VS Code will create the .vscode\tasks.json file.
5. Open .vscode\tasks.json and paste the following in it:

Now we can build our project with:

1.
Pressing Ctrl+Shift+B. Recommended; it's faster and looks 1337.
2. Terminal (menu) > Run Task (submenu) > gradle.
3. Opening the command palette, typing `tasks`, then selecting `Run Build Task`.

Run it once to download the Burp Extender interface and build the library. The output jar will be in `build\libs\burp-sample-extension-java-all.jar`.

# Setting Up IntelliCode

Our build works, but you might have noticed that VS Code does not recognize the interfaces imported from the `burp` package.

VS Code errors

Every time a new dependency is added (or we see the same error again), we need to clean the Java language server:

1. Open the VS Code command palette with Ctrl+Shift+P or F1.
2. Type `java clean` and select `Java: Clean the Java language server workspace`.
3. Restart VS Code when asked.
4. Now we have IntelliCode support.

IntelliCode support

Note: This is the solution to most vscode-java extension problems.

# Burp Setup

Let's add some code to the extension to show how I test it after each build. Modify `BurpExtender.java`. See how IntelliCode is making our life easier.

IntelliCode in action, woot!

This code prints the extension's file name to the console. Build the extension with Ctrl+Shift+B.

Extension built

The jar file will appear in `build\libs`.

Built jar

Visual setup:

1. Start Burp in a second monitor.
2. Detach the Extender window via Window (menu) > Detach Extender.
3. Press Windows+Left Arrow to send it to the corner of the screen.
4. Windows will show a list of other processes and ask me to select the other window in that screen.
5. Choose Burp so the Extender and Burp appear side by side in the second screen.
6. Grab the border between these two windows to adjust their sizes.

My extension development cycle is:

1. Edit code in monitor 1 in VS Code.
2. Press Ctrl+Shift+B to build.
3. Ctrl+Left-Click on the checkbox in front of the extension in Extender to reload it (this is in monitor 2).
4. Use the extension in Burp (monitor 2).
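The `BurpExtender.java` code referenced above is not reproduced in this excerpt, so here is a hedged sketch of what a minimal version looks like. The real file declares `package burp;` and compiles against the `net.portswigger.burp.extender` Maven artifact; the stand-in interfaces and the fake callbacks class below exist only so the sketch compiles and runs on its own:

```java
// Minimal stand-ins for the Burp Extender API (normally supplied by the
// burp-extender-api jar), declared here so the sketch is self-contained.
interface IBurpExtenderCallbacks {
    void setExtensionName(String name);
    void printOutput(String output);
    String getExtensionFilename();
}

interface IBurpExtender {
    void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks);
}

// In the real project this is `public class BurpExtender` in package burp.
class BurpExtender implements IBurpExtender {
    @Override
    public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks) {
        callbacks.setExtensionName("Sample Extension");
        // Print the extension's file name to the Extender console,
        // matching the behavior described in the post.
        callbacks.printOutput(callbacks.getExtensionFilename());
    }
}

// Tiny fake callbacks object so the sketch can be exercised without Burp.
class FakeCallbacks implements IBurpExtenderCallbacks {
    String name;
    String printed;
    public void setExtensionName(String n) { name = n; }
    public void printOutput(String o) { printed = o; }
    public String getExtensionFilename() { return "sample.jar"; }
}
```

When Burp loads the jar, it instantiates `burp.BurpExtender` and calls `registerExtenderCallbacks` itself; the fake callbacks class is only for poking at the sketch outside Burp.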
# Debugging the Extension with VS Code

This is the most important part of this post: how I debug extensions in VS Code. Looking around, I could only find a few references.

The VS Code Java extension pack comes with a Java debugger. To use it, we need to run Burp with this command-line option:

- `-agentlib:jdwp=transport=dt_socket,address=localhost:8000,server=y,suspend=n`

This runs the debug server at `localhost:8000`. Note that most examples on the internet pass just the port, so the server listens on `0.0.0.0`, which is obviously not good (unless you want to debug from a remote machine).

Next, we pass this parameter when running Burp's jar file, which is at this path in a default installation:

- `C:\Program Files\BurpSuiteCommunity\burpsuite_community.jar`

The complete command:

```
java -agentlib:jdwp=transport=dt_socket,address=localhost:8000,server=y,suspend=n -jar "C:\Program Files\BurpSuiteCommunity\burpsuite_community.jar"
```

- Hint: Make this a shortcut so you can always debug Burp in your development VM.
- You may get an error about our JDK not being tested with Burp. Ignore it.

## Using Burp's Bundled JRE

You might have seen the `BurpSuiteCommunity.vmoptions` file inside Burp's directory. We can add run-time parameters to it. Enable debugging by adding the following line to the file:

```
-agentlib:jdwp=transport=dt_socket,address=localhost:8000,server=y,suspend=n
```

Now we can run the exe and debug our extensions. I have included a sample `.vmoptions` file in the git repository.

Next, we have to launch the Java debugger in VS Code and connect to the debug port. Put a breakpoint on the `callbacks.printOutput(fileName);` line. Then select Debug (menu) > Start Debugging or press F5. This will create the `.vscode\launch.json` file and open it. Paste this code into it and save:

The file is very simple. The only important options are `hostName` and `port`, which should point to the debug port specified above (`localhost:8000`).
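The `launch.json` contents referenced above are not included in this excerpt. A minimal attach configuration for the vscode-java debugger, as a sketch (the `name` value is mine), would look something like:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "java",
            "name": "Attach to Burp",
            "request": "attach",
            "hostName": "localhost",
            "port": 8000
        }
    ]
}
```

The `request: "attach"` mode is what lets the debugger connect to the JDWP server that Burp was started with, instead of launching a new JVM.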
Start debugging again. A Windows Firewall dialog might pop up. If it does and you are not debugging a remote machine, press Cancel. If the debugger times out while the firewall dialog is active, debug again (F5). After the debugger is attached, reload the extension with Ctrl+Right-Click on the checkbox and see the debugger break.

Achievement unlocked: Debugging in VS Code

Pretty nifty and super useful.

# Storing the Extension's Jar in a Different Path

If you look inside the build directory, you will see a lot of class files. We do not want these in our source control, so it's wise to add the build directory to the .gitignore file. But this means our final jar file is also ignored, and we want the final jar in the repository so people can just grab it and use it. We can change the location of the extension's jar file by modifying the `libsDirName` property in `build.gradle`:

```
libsDirName = "../@jar"
```

This config puts the final jar file at `@jar\burp-sample-extension-java-all.jar`.

# What Did We Learn Here Today?

1. Create a simple Burp extension in Java.
2. Set up Gradle and build the extension.
3. Enable Java IntelliCode in VS Code.
4. Debug Java Burp extensions in VS Code.
5. Change the location of the final jar file.

Tags: Java
In this article, you will learn about matrix multiplication, identity matrices, and inverses. In the first article of this series, we learned how to conduct matrix multiplication, the operation (also known as the matrix product) that produces a single matrix from two matrices. There we only discussed one simple method; here we look at the identity matrix and the role it plays in that operation.

In normal arithmetic, we refer to 1 as the "multiplicative identity": any number times 1 is equal to itself. (Similarly, zero is the additive identity: when zero is added to any real number, the number does not change.) Square matrices (matrices which have the same number of rows as columns) also have a multiplicative identity. In linear algebra, the identity matrix of size n is the n × n square matrix with ones on the main diagonal and zeros elsewhere; its diagonal elements are the entries (1,1), (2,2), ..., (n,n). Its symbol is the capital letter I, written I_n when the dimension matters; in some fields, such as quantum mechanics, it is denoted by a boldface one, 1. Hence, I is known as the identity matrix under multiplication.

The "identity matrix" is the matrix equivalent of the number "1":

1. It is "square" (has the same number of rows as columns).
2. It can be large or small (2×2, 100×100, ... whatever).
3. It has 1s on the main diagonal and 0s everywhere else.
4. It is a special matrix, because when we multiply by it, the original is unchanged: A × I = A and I × A = A.

The three-dimensional identity matrix, for example, is

$$\mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Therefore, for an m×n matrix A, we say A I_n = I_m A = A. This shows that as long as the size of the matrix is considered, multiplying by the identity is like multiplying by 1 with numbers. Watch the dimensions: if A has 3 rows, the identity matrix multiplying it from the left is going to have to have 3 columns.

In MATLAB, U = eye(3) creates this 3×3 identity matrix, with the dimension given inside the brackets; eye(3, 'uint32') creates a 3-by-3 identity matrix whose elements are 32-bit unsigned integers. In NumPy, identity(n, dtype=None) returns the n × n identity matrix.

We next see two ways to generalize the identity matrix. The first is that if the ones are relaxed to arbitrary reals, the resulting diagonal matrix will rescale whole rows or columns. The second appears in computer graphics, where we work with (x, y, z, w) vectors: if w == 1, then the vector (x, y, z, 1) is a position in space; the identity matrix leaves such a point where it is, whereas a translation moves it.

Matrix multiplication shares some properties with usual multiplication: it is associative and distributive, and properties of matrix-matrix multiplication such as (AB)^T = B^T A^T can be identified, applied, and proved. However, matrix multiplication is not universally commutative for nonscalar inputs: A*B is typically not equal to B*A, although it is commutative if both matrices are diagonal and of the same dimension. (In MATLAB, mtimes(A, B) is the same as A*B, while element-wise multiplication, A.*B, is commutative. In R, matrix multiplication is the %*% symbol, not the * symbol.) Rotation matrices provide another exception: two rotation matrices with the same rotation axis spin the same way, and their multiplication is commutative. Two matrices are equal if and only if they have the same dimensions and all corresponding entries are equal.

Finally, back in ordinary multiplication, you know that 1 is the identity element. Whenever the identity element for an operation is the answer to a problem, the two items operated on to get that answer are inverses of each other. Some matrices are even their own inverses: multiplying such a matrix by itself produces the identity matrix.
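The identity property and the failure of commutativity described above can be checked with a short, self-contained Java sketch (the class and helper names are mine, for illustration):

```java
import java.util.Arrays;

// Naive matrix multiplication, enough to check the identity and
// commutativity claims from the text.
class IdentityDemo {
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                for (int t = 0; t < k; t++)
                    c[i][j] += a[i][t] * b[t][j];
        return c;
    }

    static double[][] identity(int n) {
        double[][] id = new double[n][n];
        for (int i = 0; i < n; i++) id[i][i] = 1.0; // ones on the main diagonal
        return id;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{0, 1}, {1, 0}};
        // A * I == A and I * A == A
        System.out.println(Arrays.deepEquals(multiply(a, identity(2)), a)); // true
        System.out.println(Arrays.deepEquals(multiply(identity(2), a), a)); // true
        // A * B != B * A in general
        System.out.println(Arrays.deepEquals(multiply(a, b), multiply(b, a))); // false
    }
}
```

Swapping the operands of `multiply` changes the answer for general matrices, which is exactly why order matters in matrix products even though it never matters for the identity.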
# Template:Skill

This template is used to create an inline link to any skill page with an icon of the skill. For example, {{Skill|Herblore}} becomes . For a complete list of options, please refer to Template:Icon.
# Tag Info

An algorithm for computing the hyperbolic "thinness" constant $\delta$ is described in the paper:

Epstein, David B. A.; Holt, Derek F. Computation in word-hyperbolic groups. Internat. J. Algebra Comput. 11 (2001), no. 4, 467–487.

and I did implement it. It works OK on reasonably straightforward examples like surface groups, hyperbolic triangle groups, etc, ...
### Of ellipses, hyperbolae and mugging

For as long as I can remember, I have had an unnatural inertia in studying coordinate geometry. It seemed to be a pursuit of rote learning and regurgitating requisite formulae, which is something I detested. My refusal to "mug up" formulae cost me heavily in my engineering entrance exams, and I was rather proud of myself for having stuck to my ideals in spite of not getting into the college of my dreams. However, now I realise what useful entities ellipses and hyperbolae are in reality. Hence, as a symbolic gesture, I will derive the formulae of both the ellipse and the hyperbola in the simplest setting: that of the centre being at the origin $(0,0)$.

1. Ellipse: the sum of distances from the two _foci_ is constant. Let the sum be "$L$". As the centre is at the origin, and we are free to take the foci along the x-axis, the coordinates of the foci are $(-c,0)$ and $(c,0)$. We thus have the equation $\sqrt{(x-c)^2 +y^2}+\sqrt{(x+c)^2+ y^2}=L$. On simplifying this, we get $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$, where $a^2=\frac{L^2}{4}$ and $b^2=\frac{L^2-4c^2}{4}$.

2. In the case of a hyperbola, where the difference of the distances from the two foci is constant, we have under similar conditions the equation $\sqrt{(x-c)^2 +y^2}-\sqrt{(x+c)^2+ y^2}=L$. This under simplification gives $\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$, where $a^2=\frac{L^2}{4}$ and $b^2=\frac{4c^2-L^2}{4}$.

### The utility of trigonometrical substitutions

Today we will discuss the power of trigonometrical substitutions. Let us take the expression

$\frac{\sum_{k=1}^{2499} \sqrt{10+\sqrt{50+\sqrt{k}}}}{\sum_{k=1}^{2499} \sqrt{10-\sqrt{50+\sqrt{k}}}}$

This is a math competition problem. One solution proceeds this way: let $p_k=\sqrt{50+\sqrt{k}}$ and $q_k=\sqrt{50-\sqrt{k}}$. Then, as $p_k^2+q_k^2=10^2$, we can write $p_k=10\cos x_k$ and $q_k=10\sin x_k$. This is an elementary fact. But what is the reason for doing so? Now we have $a_k=\sqrt{10+\sqrt{50+\sqrt{k}}}=\sqrt{10+10\cos x_k}=\sqrt{20}\cos \frac{x_k}{2}$.
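The last equality is the half-angle identity $1+\cos x=2\cos^2\frac{x}{2}$ at work; written out:

```latex
a_k=\sqrt{10+10\cos x_k}
   =\sqrt{10\left(1+\cos x_k\right)}
   =\sqrt{10\cdot 2\cos^2\tfrac{x_k}{2}}
   =\sqrt{20}\,\cos\tfrac{x_k}{2}
```

Note that $0\le x_k\le\frac{\pi}{2}$ here, so $\cos\frac{x_k}{2}$ is nonnegative and the square root can be taken without a sign ambiguity.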
Similarly, $b_k=\sqrt{10-\sqrt{50+\sqrt{k}}}=\sqrt{10-10\cos x_k}=\sqrt{20}\sin \frac{x_k}{2}$. The rest of the solution can be seen here. It mainly uses perfect-square identities of the form $\sin^2 A+2\sin A\cos B+\cos^2 B=(\sin A+\cos B)^2$ to remove the root sign.

What if we did not use trigonometric substitutions? What is the utility of this method? We will refer to this solution, and try to determine whether we'd have been able to solve the problem, using the same steps, but not using trigonometrical substitutions.

$a_{2500-k}=\sqrt{10+\sqrt{50+\sqrt{2500-k}}}=\sqrt{10+\sqrt{50+\sqrt{(50+\sqrt{k})(50-\sqrt{k})}}}$

$=\sqrt{10+\sqrt{50+100\frac{\sqrt{50+\sqrt{k}}}{10}\frac{\sqrt{50-\sqrt{k}}}{10}}}=\sqrt{10+\sqrt{50}\sqrt{1+2\frac{\sqrt{50+\sqrt{k}}}{10}\frac{\sqrt{50-\sqrt{k}}}{10}}}$

$=\sqrt{10+10(\frac{\sqrt{50+\sqrt{k}}}{10\sqrt{2}}+\frac{\sqrt{50-\sqrt{k}}}{10\sqrt{2}})}=\sqrt{10+10\times 2\times\frac{\frac{1}{2}(\frac{\sqrt{50+\sqrt{k}}}{10\sqrt{2}}+\frac{\sqrt{50-\sqrt{k}}}{10\sqrt{2}})}{\sqrt{\frac{50+\sqrt{k}}{20\sqrt{2}}-\frac{50-\sqrt{k}}{20\sqrt{2}}+\frac{1}{2}}}\times \sqrt{\frac{50+\sqrt{k}}{20\sqrt{2}}-\frac{50-\sqrt{k}}{20\sqrt{2}}+\frac{1}{2}}}$

As one might see here, our main aim is to remove the square root radicals, and forming squares becomes much easier when you have trigonometrical expressions. Every trigonometrical expression has a counterpart in a complex algebraic expression. It is only out of sheer habit that we're more comfortable with trigonometrical expressions and their properties.
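The substitution can also be sanity-checked numerically; here is a small stand-alone sketch (names mine) verifying that $\sqrt{10+p_k}=\sqrt{20}\cos\frac{x_k}{2}$ for a few values of $k$:

```java
// Numeric spot-check of the substitution: with p = sqrt(50 + sqrt(k)) and
// x = acos(p / 10), we should have sqrt(10 + p) == sqrt(20) * cos(x / 2).
class TrigSubCheck {
    static double lhs(double k) {
        return Math.sqrt(10 + Math.sqrt(50 + Math.sqrt(k)));
    }

    static double rhs(double k) {
        double x = Math.acos(Math.sqrt(50 + Math.sqrt(k)) / 10.0);
        return Math.sqrt(20) * Math.cos(x / 2.0);
    }

    public static void main(String[] args) {
        for (int k = 1; k <= 2499; k += 500)
            System.out.println(k + ": " + (Math.abs(lhs(k) - rhs(k)) < 1e-12));
    }
}
```

Every line prints `true`, confirming the half-angle rewriting is an identity over the whole range $1\le k\le 2499$, not a lucky coincidence.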
## Problem 7 Quiz 1 Preparation in the Workbook [ENDORSED]

$c=\lambda v$

AsthaPatel4B:

Can somebody explain to me how the final units of measurement come out to hertz?

7. If the wavelength of orange-yellow light is 6.2x10^2 nm, what is its frequency?

c = wavelength * frequency
frequency = c / wavelength
frequency = (2.99x10^8 m*(s^-1)) / (6.2x10^2 nm) = (2.99x10^8 m*(s^-1)) / (6.2x10^-7 m) = 4.8x10^14 s^-1

So, I did the problem above and got the right numerical answer, but I don't understand why the final units for the answer are in hertz. Please help.

### Re: Problem 7 Quiz 1 Preparation in the Workbook [ENDORSED]

Shirley_Zhang 3O:

Hertz = 1/second, or second^-1, so if we ignore the numbers, the calculation looks like the following:

(meter * second^-1) / (meter) = second^-1 = hertz

which corresponds to speed of light / wavelength = frequency.
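The same arithmetic, as a tiny sketch (class and method names are mine) for checking the numbers:

```java
// Quick numeric check of the worked problem: f = c / lambda.
class FrequencyCheck {
    static final double C = 2.99e8; // speed of light in m/s (value used in the post)

    static double frequencyHz(double wavelengthMeters) {
        return C / wavelengthMeters; // (m/s) / m = 1/s = Hz
    }

    public static void main(String[] args) {
        double lambda = 6.2e2 * 1e-9; // 6.2x10^2 nm converted to meters
        System.out.printf("%.2e Hz%n", frequencyHz(lambda)); // ~4.82e14 Hz
    }
}
```

The nm-to-m conversion is the step that usually trips people up: 6.2x10^2 nm is 6.2x10^-7 m, and only after that conversion do the meter units cancel cleanly.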
Chinese Journal of Chemical Physics 2016, Vol. 29 Issue (6): 671-680
http://dx.doi.org/10.1063/1674-0068/29/cjcp1604070

Effect of Mn Promoter on Structure and Performance of K-Co-Mo Catalyst for Synthesis of Higher Alcohols from CO Hydrogenation

Wei Xie(a), Li-li Ji(a), Ji-long Zhou(a), Hai-bin Pan(a), Jun-fa Zhu(a), Yi Zhang(b), Song Sun(a,c), Jun Bao(a,c), Chen Gao(a,c)

Received on April 8, 2016; accepted on May 10, 2016

a. National Synchrotron Radiation Laboratory, Collaborative Innovation Center of Chemistry for Energy Materials, University of Science and Technology of China, Hefei 230029, China;
b. State Key Laboratory of Organic-Inorganic Composites, Beijing University of Chemical Technology, Beijing 100029, China;
c. CAS Key Laboratory of Materials for Energy Conversion, Department of Materials Science and Engineering, University of Science and Technology of China, Hefei 230026, China

*Authors to whom correspondence should be addressed. Song Sun, E-mail: [email protected]; Jun Bao, E-mail: [email protected]; Chen Gao, E-mail: [email protected]

Abstract: A series of Mn-doped K-Co-Mo catalysts were prepared by a sol-gel method. The catalyst structure was well characterized by X-ray diffraction, N2 physisorption, NH3 temperature-programmed desorption, in situ diffuse reflectance infrared Fourier transform spectroscopy, and X-ray absorption fine structure spectroscopy. The catalytic performance for higher alcohol synthesis from syngas was measured.
It was found that the Mn-doped catalysts exhibited a much higher activity compared to the unpromoted catalyst; in particular, the C2+ alcohol selectivity increased significantly. The distribution of alcohol products deviated from the Anderson-Schulz-Flory law: the portion of methanol in total alcohol was suppressed remarkably and ethanol became the predominant product. Characterization results indicated that the incorporation of Mn enhanced the interaction of Co and Mo and thus led to the formation of Co-Mo-O species, which was regarded as the active site for alcohol synthesis. Secondly, the presence of Mn reduced the amount of strong acid sites significantly and meanwhile promoted the formation of weak acid sites, which had a positive effect on the synthesis of alcohol. Furthermore, it was found that the incorporation of Mn can enhance the adsorption of linear- and bridge-type CO significantly, which contributed to the formation of alcohol and the growth of the carbon chain and thus increased the selectivity to C2+OH.

Key words: CO hydrogenation    Sol-gel method    Mo-based catalyst    Mn promoter    Higher alcohols synthesis

Ⅰ. INTRODUCTION

In view of the increasing concerns over environmental pollution and the shortage of fossil fuel resources, the catalytic conversion of syngas into higher alcohols has drawn much attention for both industrial application and fundamental research in the past few decades [1-3]. Generally, syngas, mainly composed of CO and H2, can be derived from various carbon sources, such as natural gas, coal, and renewable biomass, via gasification. Higher alcohols have high octane numbers, so their most promising application is as an additive for gasoline, or as a replacement for methyl tert-butyl ether, to reduce exhaust emissions [4, 5]. Up to now, several catalyst systems for the conversion of syngas into higher alcohols have been developed.
In general, the potential catalysts can be classified into two broad categories: noble metal-based and non-noble metal-based catalysts [6, 7]. Among the noble metal-based systems, Rh-based catalysts have been reported to be the most selective for the synthesis of higher alcohols from CO hydrogenation, showing high selectivity to C2+OH (mainly ethanol). Nevertheless, rhodium is too expensive and too sensitive to sulfur poisoning to realize large-scale application in industry [8]. The non-noble catalysts for higher alcohol synthesis mainly include Cu-based catalysts and Mo-based catalysts. Cu-based catalysts exhibited a high selectivity to total alcohol; however, the selectivity to C2+OH was fairly low and the dominant product was methanol [9]. Mo-based catalysts have shown high selectivity to C2+OH, and in some cases low selectivity to hydrocarbons. In particular, they have advantages such as low cost and excellent resistance to sulfur poisoning and coke deposition [10, 11]. Increasing attempts have been made to optimize the higher alcohol synthesis performance of Mo-based catalysts and to enhance their potential for application [10, 12, 13]. Previous studies have shown that both the activity and selectivity for higher alcohol synthesis over Mo-based catalysts are strongly dependent on the type of promoters, the interaction between promoters and Mo species, the chemical state and dispersion of surface Mo species, the surface acidity of the catalysts, etc. [13-17]. Unpromoted Mo catalysts produced mainly light hydrocarbons [18]. With the addition of an alkali metal, such as potassium, the catalyst selectivity was greatly shifted from hydrocarbons to alcohols [19, 20]. The alkali metal promoters can neutralize the surface acidity of the catalyst, enhance the adsorption and insertion of CO, and inhibit the hydrogenation rate to promote the growth of the carbon chain [12, 21, 22].
Different alkali metals have different influences on the catalyst activity. The generally accepted order is Cs+ > Rb+ > K+ > Na+ > Li+, consistent with the sequence of decreasing basicity [23]. Some 3d transition metals, in particular Fischer-Tropsch (FT) elements such as Co or Ni, are effective promoters for improving the C2+OH selectivity as well as the alcohol yield of Mo-based catalysts. The Co or Ni promoters act as a synergistic system with the Mo species and promote the growth of the carbon chain. The structure and morphology of the promoters, and their interaction with the Mo species, have a significant impact on the promotional effects [24, 25]. As a potential element, Mn has been widely used as a promoter for CO hydrogenation. Morales et al. [26] investigated the electronic state and location of Mn in Co-based FT catalysts with a combination of EXAFS and STEM-EELS. They found that the presence of an MnO phase in the catalysts can improve the percentage of higher hydrocarbons and increase the selectivity in FT synthesis. For silica-supported Co-Mn catalysts, Mn promotion increased the rate of CO consumption, decreased the selectivity to methane, and increased the selectivity to C5+ products; the promotion effects were rationalized in terms of H availability and CO coverage [27]. More recently, Johnson et al. [28] reported that MnO would act as a Lewis acid to assist in CO dissociation as well as to increase the ratio of CO to H adsorbed on the Co catalyst surface. They demonstrated that active sites near the Co-MnO interface were responsible for the improved activity and C5+ selectivity. Over K/Ni/MoS2 catalysts, it was found that the addition of Mn significantly affected the dispersion of active components on the catalyst surface, improving the selectivity to C2+OH [29]. For Mn-Fe-Cu catalysts, there existed a strong interaction between the Mn and Fe components.
The addition of Mn could improve the surface concentration of Fe and Cu species, leading to an increase of active sites for higher alcohol synthesis [30]. Zhang et al. [31] employed density functional theory calculations to investigate the effect of Mn on ethanol formation on a Mn-promoted MnCu(211) surface. They suggested that with the addition of Mn, CH3 species were maximized and CH3OH formation was suppressed. The formation of C2 oxygenates via CHO insertion into CH3 was more favorable in both kinetics and thermodynamics when compared with the Cu(211) surface. Ojeda et al. [32] also compared CO hydrogenation over Rh/Al2O3 and Rh-Mn/Al2O3 catalysts and found that the higher activity of the Rh-Mn/Al2O3 catalyst was attributed to the decrease of the relative surface carbon coverage over the Rh particles. So far, there are few reports concerning the effects of Mn on reduced Mo-oxide catalysts. Considering the unique structural and electronic effects of Mn oxide, it was proposed that the addition of Mn could have a positive impact on the catalytic performance of Mo-oxide catalysts for higher alcohol synthesis. Herein, a series of Mn-doped K-Co-Mo catalysts were prepared by a sol-gel method. The activity of the catalysts for the synthesis of higher alcohols from syngas was evaluated and their structures were characterized by means of X-ray diffraction (XRD), N2 physisorption, NH3 temperature-programmed desorption (NH3-TPD), in situ diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS), and X-ray absorption fine structure spectroscopy (XAFS). Based on the characterization results, the promotional effects of Mn on the structure and performance of the Mo-based catalysts were discussed.

Ⅱ. EXPERIMENTS

A. Catalyst preparation

The Mn-K-Co-Mo catalysts were prepared by a sol-gel method with citric acid as a complexant.
A typical procedure is as follows. Firstly, an aqueous solution of Co(NO3)2 was added slowly to an aqueous solution of (NH4)6Mo7O24·6H2O. Then an aqueous solution of Mn(NO3)2 was added into the above solution. Subsequently, a certain amount of citric acid aqueous solution was added slowly into the mixed solution under constant stirring. The molar ratio of citric acid to metallic ions was 0.4. Finally, a K2CO3 aqueous solution containing a calculated amount of K was added dropwise to the solution. The pH value of the mixed solution was monitored using a pH meter and adjusted to 3.5 by addition of NH3·H2O or HCOOH solution. The mixed solution was then kept in a water bath at 70 ℃ until a gel was formed. The as-prepared gel was dried at 120 ℃ for about 15 h and then calcined in flowing nitrogen at 400 ℃ for 4 h. The atomic ratios of K/Mo and Co/Mo were 0.1 and 0.5, respectively. The Mn content in the samples, expressed as the Mn/Mo atomic ratio, ranged from 0.05 to 0.25. For comparison, a catalyst without the Mn promoter was also prepared by the same procedure.

B. Catalyst characterization

The phase composition and crystalline structure of the fresh catalysts were measured on a Rigaku TTR-Ⅲ X-ray powder diffractometer equipped with Cu Kα radiation (λ=0.15418 nm). Diffractograms were recorded at 2θ from 10° to 70° with a scan rate of 8°/min. The voltage and current were 40 kV and 200 mA, respectively. The BET surface area, pore volume, and average pore diameter of the catalysts were determined by nitrogen adsorption at -196 ℃ by means of a Micromeritics TriStar II 3020 instrument. Prior to the measurement, all the samples were degassed under vacuum at 150 ℃ for 2 h to remove volatile adsorbates from the surface. NH3-TPD was performed to determine the surface acid properties of the catalysts. Approximately 80 mg of the sample was loaded into a U-type quartz tube, which was mounted on the instrument.
Then the sample was pretreated in He at 200 ℃ for 0.5 h (heating rate 10 ℃/min). After cooling to room temperature, 0.5% NH3 in He was passed through the sample for 1 h, with subsequent flushing with helium at 100 ℃ for 1 h. The TPD analysis was carried out in flowing He from 100 ℃ to 850 ℃ at a heating rate of 10 ℃/min. In situ DRIFTS experiments were performed on a Bruker Vertex 70v FT-IR spectrometer equipped with a mercury cadmium telluride detector. A high-temperature chamber fitted with CaF2 windows was utilized as the sample cell. High-purity carbon monoxide was used as the probe gas. Nitrogen and hydrogen were used as the flushing gas and reducing gas, respectively. All steps were carried out at atmospheric pressure. The catalysts were reduced ex situ in flowing pure H2 at 400 ℃ for 12 h. After cooling to room temperature under pure H2, the catalysts were passivated with 1% O2 in N2 for 2 h. The catalysts were then placed into the IR sample cell and reduced again in a H2 flow at 400 ℃ for 3 h to remove any oxidized species produced by passivation or re-oxidation during sample handling. After the reductive pretreatment, the samples were cooled down to room temperature in flowing pure H2. At this temperature, a background spectrum was collected. Next, CO gas was fed to the cell and infrared spectra were taken at 4 cm-1 resolution with 32 scans. CO adsorption continued for about 40 min, with spectra being recorded about once every two minutes. After adsorption, the samples were flushed with nitrogen for 20 min (30 mL/min) to remove gaseous carbon monoxide from the chamber and the IR spectra were collected again. The X-ray absorption spectra at the Mo K-edge of the catalysts and standard references were measured at beamline 1W1B of the Beijing Synchrotron Radiation Facility, China. The electron storage ring was operated at 2.5 GeV with an average current of 200 mA.
A Si(111) double crystal was used as the monochromator, and the absorption data were collected in transmission mode. The energy scale was calibrated by measuring the X-ray absorption near-edge spectroscopy (XANES) of a Mo metal foil. For each measurement, the sample was ground into a fine powder and brushed onto adhesive tape. The thickness and homogeneity of the samples were optimized to obtain the best signal-to-noise ratio. The data were processed by established methods with the ATHENA software package [33]. The normalized extended X-ray absorption fine structure (EXAFS) was converted from energy to k-space, weighted by k3, and then Fourier transformed to R-space. Because no correction was made for photoelectron phase shifts, distances in R-space are 0.3-0.5 Å shorter than the actual bond distances.

C. Catalytic activity measurement

The catalytic performance tests were carried out in a fixed-bed pressurized flow reaction system with a conventionally designed microreactor (a stainless-steel tube of 600 mm length and 8 mm internal diameter). The feed gas was composed of 60% H2, 30% CO, and 10% N2, the last serving as an internal standard. The feed gas was introduced into the reactor at the desired rate using high-pressure mass flow controllers. Temperature was monitored with a K-type thermocouple inserted in the catalyst bed. The samples (0.5 g, 40-60 mesh particle size) were diluted with a certain amount of quartz sand to maintain isothermal conditions and then charged into the fixed-bed reactor, which was housed in an electric furnace controlled by a temperature controller. Prior to the reaction, the catalysts were reduced in situ in pure H2 at a flow rate of 40 mL/min under atmospheric pressure for 12 h, and the reactor was then cooled down. When the reaction temperature reached the desired value, the feed gas was introduced into the reactor. The effluent gas was cooled in an ice-water bath to separate it into gas and liquid phases.
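Since the feed contained 10% N2 as an inert internal standard, CO conversion is typically obtained from the change in the CO/N2 ratio between inlet and outlet. The sketch below illustrates this standard accounting; the outlet mole fractions are invented for illustration and are not data from this work.

```python
# Internal-standard CO conversion for a 60% H2 / 30% CO / 10% N2 feed.
# N2 is inert, so any change in the CO/N2 ratio reflects CO consumption.

def co_conversion(co_in, n2_in, co_out, n2_out):
    """Fractional CO conversion from inlet and outlet mole fractions."""
    return 1.0 - (co_out / n2_out) / (co_in / n2_in)

# Hypothetical outlet composition: 18% CO, 12% N2 (mole fractions change
# because the total molar flow shrinks as the reaction proceeds).
x_co = co_conversion(co_in=0.30, n2_in=0.10, co_out=0.18, n2_out=0.12)
print(f"CO conversion = {x_co:.1%}")  # -> CO conversion = 50.0%
```

Working with the CO/N2 ratio rather than the raw CO fraction makes the result insensitive to the volume contraction of the reacting mixture.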
Details of the product analysis procedure are described in our previous work [13]. For Mo-based catalysts, the higher alcohol synthesis reaction requires an induction period to reach steady state, so the activity data were measured after the reaction had run for 24 h.

Ⅲ. RESULTS AND DISCUSSION

A. XRD results

The XRD patterns of the catalysts with varying Mn amounts are shown in Fig. 1. For the Mn-free catalyst, the XRD pattern exhibited three diffraction peaks at 2θ of 26.1°, 37.0°, and 53.5°, corresponding to the (-111), (-211), and (-312) reflections of the MoO2 crystalline phase. These reflections were very broad and weak, suggesting that the MoO2 phase had a low degree of crystallization. The formation of MoO2 was attributed to the reducing effect of the decomposition of citric acid in nitrogen. Besides, no diffraction reflections related to MoO3 or Co-containing phases were observed, indicating that these species may exist in an amorphous state in this sample. After Mn was added, distinct changes occurred in the XRD profiles. At the lowest Mn content, five new weak diffraction peaks appeared at 2θ values of 18.2°, 35.7°, 52.1°, 56.3°, and 64.2°, which could be attributed to CoMoO3. As the Mn content increased, a series of new diffraction peaks appeared that can be assigned to the Co2Mo3O8 phase. The intensities of the diffraction peaks assigned to Co-Mo species increased with increasing Mn content, while the peak corresponding to MoO2 became weaker. These results suggested that the interaction between Mo and Co became stronger upon addition of the Mn promoter. It is noted that no peaks of Mn species were observed in the XRD patterns of the catalysts, which may be because the Mn species were either amorphous or present as very small, highly dispersed nanoparticles on the catalysts.

FIG. 1 XRD patterns of the catalysts with different Mn/Mo molar ratios.
(a) 0.00, (b) 0.05, (c) 0.10, (d) 0.15, (e) 0.20, and (f) 0.25.

B. Textural properties

The nitrogen adsorption-desorption isotherms and the pore size distribution curves of all samples are displayed in Fig. 2. The unpromoted catalyst exhibited a type IV isotherm with a hysteresis loop of type H3 according to the Brunauer-Deming-Deming-Teller (BDDT) classification. Capillary condensation in the pores occurred at relative pressures P/P0 > 0.40, indicating the presence of a large amount of mesopores. The isotherms of the Mn-doped catalysts were similar to that of the undoped sample. With increasing Mn content, the hysteresis loop shifted to higher relative pressure, indicative of a larger pore size. These results indicated that the pore diameter increased with increasing Mn content, which was further confirmed by the pore size distribution curves shown in the inset of Fig. 2.

FIG. 2 Nitrogen adsorption-desorption isotherms and pore size distributions (inset) of the catalysts with different Mn/Mo molar ratios. (a) 0.00, (b) 0.05, (c) 0.10, (d) 0.15, (e) 0.20, and (f) 0.25.

Table Ⅰ lists the textural properties of the samples. The BET surface area, pore volume, and average pore diameter of the unpromoted catalyst were 21.2 m2/g, 0.07 cm3/g, and 13.2 nm, respectively. The catalyst with a Mn/Mo ratio of 0.05 exhibited a higher BET surface area and pore volume than the unpromoted catalyst, which may be caused by the destruction of the MoO2 structure and the formation of poorly crystalline CoMoO3 phases. However, with further increase of the Mn content, the BET surface area and pore volume of the catalysts began to decrease. This may be ascribed to the formation of well-crystallized Co-Mo-O species, leading to a decrease of the BET surface area and blockage of the pores of the particle networks.

Table Ⅰ Textural properties of the catalysts with different Mn/Mo molar ratios.

C.
NH3-TPD results

The NH3 desorption profiles of all catalysts are presented in Fig. 3. It is generally accepted that the types of acid sites are related to the corresponding desorption temperatures, whereas the area of a desorption peak reflects the amount of acid sites [34, 35]. As shown in Fig. 3, the catalysts exhibited several NH3 desorption peaks in the range of 200-850 ℃. The low-temperature peaks located at 200-400 ℃ corresponded to the desorption of NH3 from weak acid sites, whereas the high-temperature desorption between 400 and 850 ℃ was attributed to NH3 coordinated to strong acid sites. For the unpromoted K-Co-Mo catalyst, the low-temperature desorption peak was very weak and three strong peaks were observed at high temperature, indicating that strong acid sites were formed on the surface of the unpromoted catalyst. After the addition of Mn, the high-temperature desorption peaks decreased sharply, while the low-temperature peaks increased with increasing Mn content. These results confirmed that the addition of Mn had a significant impact on the surface acidity of the K-Co-Mo catalyst: the strong acid sites were reduced significantly and weak acid sites became predominant. This effect may be due to the presence of a large amount of Co-Mo-O species in the Mn-promoted catalysts, as evidenced by the XRD results.

FIG. 3 NH3-TPD curves of the catalysts with different Mn/Mo molar ratios. (a) 0.00, (b) 0.05, (c) 0.10, (d) 0.15, (e) 0.20, and (f) 0.25.

D. In situ DRIFTS results

Figure 4 shows the in situ DRIFTS spectra of CO adsorbed on the reduced catalysts. Two broad peaks centered at around 2173 and 2115 cm-1 are attributed to the rotational/vibrational transitions of residual gaseous CO [36]. For the unpromoted catalyst, the peak at around 2010 cm-1 can be assigned to CO linearly coordinated to Co0 species on the surface [36].
This low-frequency band of CO linearly adsorbed on Co0 was attributed to low-index surface crystallographic planes or to corner and step sites with low-coordinated surface atoms [37]. The addition of the Mn promoter induced a shift of this peak toward higher wavenumber, such that the catalyst with a Mn/Mo ratio of 0.25 had a peak position of 2035 cm-1. The blue shift of the linearly adsorbed CO peak indicated weaker binding of CO to the Co surface, reflecting a decrease in π* back-bonding from the Co atoms to the carbonyl ligands. This may be because the Mn promoter acts as an electronic promoter that withdraws electron density from the Co0 phase. It has been reported that a decrease in surface electron density reduces the back-donation into the antibonding π* orbital of CO and weakens the Co-carbonyl bond. Consequently, the C≡O triple bond gains strength, leading to a shift to higher CO stretching frequencies [38]. Morales et al. reported similar results [39]: studying CO adsorption on Mn-promoted Co catalysts, they suggested that the withdrawal of electron density from Co by the MnO promoter decreased the extent of π* back-donation to the adsorbed CO and thus increased the carbonyl stretching frequency.

FIG. 4 In situ DRIFTS spectra of adsorbed CO on the reduced catalysts with different Mn/Mo molar ratios. (a) 0.00, (b) 0.05, (c) 0.10, (d) 0.15, (e) 0.20, and (f) 0.25.

The bands in the range of 1950-1700 cm-1 are commonly assigned to two-fold, three-fold, and possibly four-fold bridge-bonded CO on Co0 species according to the literature [39]. With increasing Mn content, the intensities of both the linear and bridge-type CO adsorption peaks increased, indicating that the presence of Mn created more active sites for the linear and bridge adsorption of CO.
The spectral region between 1700 and 1200 cm-1 contains the characteristic frequencies of carbonate species [40]. Specifically, the peak at around 1675 cm-1 is assigned to bidentate-bonded bicarbonate species; the peak at 1590 cm-1 and the shoulder at 1310 cm-1 correspond to bidentate-bonded carbonate species; and the peaks at 1540 and 1340 cm-1 can be attributed to monodentate-bonded carbonate species. For the unpromoted K-Co-Mo catalyst, the peaks related to carbonate species were very weak. When Mn was added, the band intensities of the carbonate species increased significantly with increasing Mn content, indicating that the addition of Mn promoted the formation of carbonate species, in accordance with a previous report [28].

E. XAFS results

XANES spectra are sensitive to the local symmetry around the central atom and are useful for revealing its local coordination structure. The near-edge region provides detailed information on the average valence and the coordination geometry around the central atom, and the shift of the absorption edge can be used to identify changes of valence state. The normalized Mo K-edge XANES spectra of the catalyst samples and of the two reference compounds MoO3 and MoO2 are shown in Fig. 5. A characteristic pre-edge peak was observed for all samples except MoO2, which can be attributed to the so-called "1s-4d bound state transition". The transition probability of this formally forbidden excitation depends on the local symmetry around the Mo atom: if the center of inversion symmetry at the molybdenum site is lost, the 1s-4d transition probability increases. In the tetrahedral Na2MoO4 structure, the p character in the final state becomes dominant through effective mixing of metal d-states with ligand p-orbitals, resulting in an intense pre-edge feature [41]. Thermodynamically stable MoO3 crystallizes in a unique orthorhombic, layered crystal structure.
The Mo atom is located at an off-center position in a heavily distorted MoO6 octahedron, which gives three crystallographically inequivalent oxygen sites [42]. This imperfect octahedral surrounding of the central Mo atom weakly allows the 1s-4d transition to occur; as a result, a weak pre-edge peak was observed in the XANES spectrum of MoO3. MoO2 has a monoclinic structure closely related to that of rutile. Each Mo atom in MoO2 is coordinated by six oxygen atoms with two distinct metal-oxygen bond lengths [42]. MoO2 thus has a more regular octahedron than MoO3, and the pre-edge peak becomes nearly invisible in its XANES spectrum.

FIG. 5 Mo K-edge XANES spectra of the catalysts and reference compounds.

As shown in Fig. 5, the XANES spectrum of the unpromoted catalyst possessed the highest pre-edge peak, and its absorption edge was close to that of MoO3. These results suggested that a less centro-symmetric surrounding was present around the Mo atom, and that not only octahedrally coordinated Mo6+ species but also tetrahedrally coordinated Mo6+ species were formed on the catalyst. It should be mentioned that no MoO3 diffraction peaks were observed in the XRD pattern of the unpromoted sample, which may be because this species existed in an amorphous state. For the Mn-doped catalysts, at low Mn content the intensity of the pre-edge peak decreased to a level similar to that of MoO3, and the absorption edge shifted to lower energy, indicating that a fraction of octahedrally coordinated Mo4+ species existed in the catalyst. With a further increase of Mn content, the pre-edge peak became weaker and the absorption edge shifted to lower energy, close to that of MoO2. These results indicated that the addition of the Mn promoter promoted the transformation of the coordination environment of the Mo atom from tetrahedral to octahedral and the reduction of Mo6+ to Mo4+ species.
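The k3-weighting and Fourier-transform step described in the characterization section (performed there with the ATHENA package) can be sketched numerically. Everything below is illustrative: the χ(k) signal is synthetic and the Mo-O distance is an assumed value, chosen only to show why the FT magnitude peaks near the (phase-uncorrected) neighbor distance.

```python
import numpy as np

# Synthetic single-shell EXAFS signal chi(k) ~ sin(2kR)/k^2 with a damping
# envelope; R_assumed is an illustrative Mo-O distance, not a fitted value.
k = np.linspace(2.0, 14.0, 600)            # photoelectron wavenumber, 1/Angstrom
R_assumed = 1.7                            # Angstrom
chi = 0.1 * np.sin(2.0 * k * R_assumed) * np.exp(-0.003 * k**2) / k**2

chi_k3 = chi * k**3                        # k^3 weighting boosts the high-k data

# Fourier transform to R-space; the EXAFS convention uses exp(2ikR)
# because the photoelectron path length is 2R.
R = np.linspace(0.5, 4.0, 700)
dk = k[1] - k[0]
ft = np.abs((chi_k3 * np.exp(2j * np.outer(R, k))).sum(axis=1) * dk)

R_peak = R[np.argmax(ft)]
print(f"FT magnitude peaks near R = {R_peak:.2f} Angstrom")
```

A real analysis additionally applies background subtraction and a window function before the transform, which is why dedicated software is used in practice.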
The Fourier transforms of the k3-weighted Mo K-edge EXAFS spectra of the catalysts and of the reference compounds MoO3 and MoO2 are displayed in Fig. 6. All samples exhibited coordination peaks at around 0.8-1.9 Å (without phase-shift corrections) due to the neighboring oxygen atoms (Mo-O). These peaks shifted slightly to higher R values upon addition of Mn, implying that the Mo-O bond length increased. This may be due to the change of the coordination environment around the Mo atom caused by the addition of Mn, as revealed by the XANES results. For the sample without Mn, a weak peak ascribed to the Mo-Mo bond was found at around 1.9-2.6 Å, similar in feature to MoO2. This result confirmed the presence of MoO2 species, consistent with the XRD and XANES results. After Mn was added to the catalyst, besides the Mo-Mo peak at 1.9-2.6 Å, a new peak attributed to Mo-Co coordination appeared at around 2.6-3.0 Å. Furthermore, the intensities of these two peaks increased with increasing Mn content, suggesting a stronger Mo-Co interaction at higher Mn content.

FIG. 6 Fourier transforms of the k3-weighted Mo K-edge EXAFS spectra of the catalysts with different Mn/Mo molar ratios and of the reference compounds. (a) 0.00, (b) 0.05, (c) 0.15, (d) 0.25, (e) MoO3, and (f) MoO2.

F. Catalytic performance

The catalytic performance of the catalysts with different Mn/Mo ratios for higher alcohol synthesis was measured under reaction conditions of 5.0 MPa, 300 ℃, 4800 h-1, and a H2 to CO molar ratio of 2. The measured CO conversion, total alcohol selectivity, total alcohol STY, alcohol distribution, and MeOH/C2+OH ratio after reaction for 24 h are summarized in Table Ⅱ. The unpromoted catalyst exhibited a very high CO conversion, but its selectivity and STY toward alcohols were extremely low, and the predominant alcohol product was methanol.
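The space-time yields quoted below combine feed rate, conversion, and selectivity. As a hedged sketch of how such a number is assembled (the accounting basis and every numeric input here are assumptions for illustration, not values from this work):

```python
# Hypothetical STY bookkeeping: grams of ethanol produced per kg of catalyst
# per hour, from an assumed molar CO feed rate, conversion, and a
# carbon-based ethanol selectivity. All numbers are invented.

M_ETOH = 46.07              # g/mol ethanol
CAT_MASS_KG = 0.5e-3        # 0.5 g of catalyst, as in the experimental section

mol_co_per_h = 0.05         # assumed CO feed, mol/h
x_co = 0.20                 # assumed fractional CO conversion
s_etoh_c = 0.15             # assumed carbon-based ethanol selectivity

# Ethanol contains two carbons, so each mol of EtOH accounts for 2 mol of CO.
mol_etoh_per_h = mol_co_per_h * x_co * s_etoh_c / 2.0
sty_etoh = mol_etoh_per_h * M_ETOH / CAT_MASS_KG
print(f"EtOH STY ~ {sty_etoh:.0f} g/(kg*h)")  # -> EtOH STY ~ 69 g/(kg*h)
```

The same per-product arithmetic, summed over all alcohols, gives the total alcohol STY reported in the tables.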
When the Mn promoter was added, alcohol formation was enhanced significantly although the CO conversion decreased. The alcohol STY and selectivity increased with increasing Mn content and reached their highest levels on the catalyst with a Mn/Mo ratio of 0.15. Over this catalyst, the alcohol STY was 91.4 g/(kg·h), about 6.2 times that of the unpromoted sample, and the alcohol selectivity increased significantly from 4.2% to 51.6%. More interestingly, the addition of Mn markedly promoted the formation of C2+ alcohols. As shown in Table Ⅱ, the MeOH/C2+OH ratio decreased from 2.38 to 0.19 when the Mn/Mo ratio increased to 0.25. The portion of methanol in the total alcohols was suppressed remarkably, and ethanol became the predominant product; the distribution of alcohol products deviated from the Anderson-Schulz-Flory (ASF) law.

Table Ⅲ summarizes the effect of reaction temperature on the catalytic performance of the Mn-promoted catalyst. With increasing temperature, the CO conversion increased while the selectivity to total alcohols decreased slightly. Furthermore, the MeOH/C2+OH ratio decreased gradually, indicating that a higher temperature facilitated the formation of higher alcohols. The total alcohol STY reached its highest level at a reaction temperature of 320 ℃, where the alcohol STY and selectivity were 148.3 g/(kg·h) and 58.6%, respectively, and the MeOH/C2+OH ratio decreased to 0.19.

For comparison, the catalytic performance of some similar catalysts reported in the recent literature is summarized in Table Ⅳ. Compared with the sulfided K-Co-MoS2 catalyst [43], the present Mn-K-Co-Mo catalyst exhibited a slightly lower alcohol selectivity; however, its alcohol STY, and in particular the C2+OH content of the total alcohols, was much higher. Furthermore, it should be noted that the reaction pressure for the K-Co-MoS2 catalyst was much higher.
It is well known that higher pressure can significantly enhance alcohol formation, especially the C2+OH selectivity. In comparison with the K-Co-Mo2C catalyst [44], the present catalyst showed a similar alcohol STY but a much higher alcohol selectivity and a lower MeOH/C2+OH ratio. Furthermore, the Mn-K-Co-Mo catalyst also exhibited better activity, especially C2+OH selectivity, than the K-Co-Mo/clay and the Ce-, Y-, and La-doped Mo-based catalysts [45, 46]. To the best of our knowledge, the selectivity to C2+OH over the present catalyst is the highest achieved to date for Mo-based catalysts.

Table Ⅱ Effect of Mn content on the catalytic performance toward higher alcohol synthesis from CO hydrogenation.

Table Ⅲ Effect of reaction temperature on the catalytic performance toward higher alcohol synthesis from CO hydrogenation.

Table Ⅳ Catalytic performance of Mn-K-Co-Mo in comparison with similar catalysts.

Over Mo-based catalysts for alcohol synthesis, the Co promoter is known to act synergistically with Mo species rather than independently. It has been reported that in some similar catalysts, Co3Mo3C, Ni6Mo6C, Co-Mo-S, and Ni-Mo-S phases are responsible for higher alcohol synthesis [44, 47, 48]. Liakakou et al. [17] studied the CO hydrogenation reaction over K-Ni-Mo/AC catalysts; the formation of oxygenates was attributed to the Ni-Mo synergetic interaction, and Ni-Mo-O sites were suggested to be the active phase for CO activation. Over the Mo2C catalyst, the interaction between Co and Mo species can enhance the overall reactivity and promote the formation of higher alcohols over methanol [24]. A density functional theory study also indicated that Co atoms can be incorporated at the S-edge sites of MoS2, which may generate active Co-Mo-S species and increase the activity of the catalyst [49].
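The deviation from the ASF law noted above can be made concrete. For a single chain-growth probability α, the ideal ASF weight distribution is W_n = n(1-α)²α^(n-1), which for α < 0.5 is always largest at n = 1 (methanol). The α value below is an assumption for illustration, not a fit to the present data.

```python
# Ideal Anderson-Schulz-Flory (ASF) weight fractions for an assumed
# chain-growth probability alpha (not fitted to the present catalysts).

def asf_weight(n: int, alpha: float) -> float:
    """Weight fraction of the C_n product: W_n = n (1-alpha)^2 alpha^(n-1)."""
    return n * (1.0 - alpha) ** 2 * alpha ** (n - 1)

alpha = 0.3
weights = {n: round(asf_weight(n, alpha), 3) for n in range(1, 6)}
print(weights)  # C1 (methanol) is the largest term for any alpha < 0.5
```

Since W2/W1 = 2α, no single α below 0.5 can place ethanol above methanol; an ethanol-dominant slate with MeOH/C2+OH well below 1 therefore cannot follow a single-α ASF distribution.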
For the present catalysts, as revealed by the XRD and XAFS results, the incorporation of the Mn promoter enhanced the interaction between Co and Mo and thus led to the formation of Co-Mo-O species, which are suggested to be active for alcohol synthesis. Furthermore, the good synergetic effect between Co and Mo also helped enhance the promotional effect of Co.

The acidity of the catalysts plays an important role in alcohol formation. Surface acidity is favorable for CO activation [17], but strong acid sites are also active sites for alcohol dehydration and thus decrease the alcohol selectivity [34]. Furthermore, reduced acidity can facilitate aldol-type condensation reactions, which are regarded as an intermediate step in the production of higher alcohols [9]. A correlation between acid-site concentration and CO conversion further revealed that decreasing the surface acidity of the catalyst is conducive to the formation of higher alcohols [9, 34]. The NH3-TPD results shown in Fig. 3 clearly indicated that the presence of Mn significantly reduced the amount of strong acid sites while promoting the formation of weak acid sites. According to the discussion above, this trend is expected to have a positive effect on the synthesis of higher alcohols from syngas.

The CO adsorption properties have a crucial impact on the CO hydrogenation reaction. The in situ DRIFTS results indicated that the incorporation of Mn enhanced both the linear and bridge adsorption of CO on the catalyst surface; moreover, the formation of carbonate species was also enhanced. Bridge-type adsorbed CO is easily dissociated into carbon and oxygen [50], and C-O bond cleavage leads to a more frequent appearance of C1 species that can scavenge H to promote the growth of the carbon chain [28].
Linearly adsorbed CO is beneficial for the formation of oxygenated compounds, because a linearly adsorbed CO molecule has a stronger C-O bond and is thus less easily dissociated [39]. For the reduced Mo-based catalysts, higher alcohol synthesis consists of several steps, including CO dissociation, hydrogenation, and CO insertion. CO dissociation and hydrogenation promote the formation of alkyl groups and the growth of the carbon chain; the alkyl group is then transformed into an acyl species by insertion of non-dissociated CO, and finally the acyl species is hydrogenated to form the corresponding alcohol product. From these perspectives, the enhanced adsorption of linear- and bridge-type CO due to the incorporation of Mn contributed to carbon-chain growth and alcohol formation, leading to the increased selectivity to C2+OH. This work developed a highly active Mo-based catalyst for the synthesis of higher alcohols and provides new insight into the promotional effect of Mn.

Ⅳ. CONCLUSION

Mn-K-Co-Mo catalysts with different Mn contents were prepared by a sol-gel method. It was found that the addition of Mn enhanced the interaction between Co and Mo, leading to the formation of active Co-Mo-O species. Meanwhile, the strong surface acidity of the catalyst was reduced significantly and the amount of weak acid sites increased. Furthermore, the incorporation of Mn significantly promoted the adsorption of linear and bridge-type CO. Benefiting from these features, the Mn-doped catalysts exhibited excellent catalytic activity for alcohol synthesis from syngas. In particular, the selectivity to C2+ alcohols increased greatly, the distribution of alcohol products deviated from the ASF law, the formation of methanol was suppressed significantly, and ethanol became the predominant product.

Ⅴ.
ACKNOWLEDGMENTS

This work was supported by the National Natural Science Foundation of China (No.11179034 and No.11205159) and the National Basic Research Program of China (No.2012CB922004).

References

[1] P. Forzatti, E. Tronconi, and I. Pasquon, Catal. Rev. Sci. Eng. 33, 109 (1991). DOI:10.1080/01614949108020298 [2] N. D. Subramanian, G. Balaji, C. S. S. R. Kumar, and J. J. Spivey, Catal. Today 147, 100 (2009). DOI:10.1016/j.cattod.2009.02.027 [3] V. R. Surisetty, A. K. Dalai, and J. Kozinski, Appl. Catal. A 404, 1 (2011). [4] R. G. Herman, Catal. Today 55, 233 (2000). DOI:10.1016/S0920-5861(99)00246-1 [5] J. M. Christensen, L. D. L. Duchstein, J. B. Wagner, P. A. Jensen, B. Temel, and A. D. Jensen, Ind. Eng. Chem. Res. 51, 4146 (2012). [6] J. Spivey and A. Egbebi, Chem. Soc. Rev. 36, 1514 (2007). DOI:10.1039/b414039g [7] V. Subramani and S. K. Gangwal, Energy Fuels 22, 814 (2008). DOI:10.1021/ef700411x [8] N. D. Subramanian, J. Gao, X. Mo, J. G. Goodwin, Jr., W. Torres, and J. J. Spivey, J. Catal. 272, 204 (2010). DOI:10.1016/j.jcat.2010.03.019 [9] E. Heracleous, E. T. Liakakou, A. A. Lappas, and A. A. Lemonidou, Appl. Catal. A 455, 145 (2013). DOI:10.1016/j.apcata.2013.02.001 [10] M. R. Morrill, N. T. Thao, H. Shou, R. J. Davis, D. G. Barton, D. Ferrari, P. K. Agrawal, and C. W. Jones, ACS Catal. 3, 1665 (2013). DOI:10.1021/cs400147d [11] G. Z. Bian, Y. L. Fu, and M. Yamada, Appl. Catal. A 144, 79 (1996). DOI:10.1016/0926-860X(96)00107-X [12] S. Zaman and K. J. Smith, Catal. Rev. Sci. Eng. 54, 41 (2012). DOI:10.1080/01614940.2012.627224 [13] M. M. Lv, W. Xie, S. Sun, G. M. Gai, L. R. Zheng, S. Q. Chu, C. Gao, and J. Bao, Catal. Sci. Technol. 5, 2925 (2015). DOI:10.1039/C5CY00083A [14] Y. Yang, Y. D. Wang, S. Liu, Q. Y. Song, Z. K. Xie, and Z. Gao, Catal. Lett. 127, 448 (2009). DOI:10.1007/s10562-008-9739-3 [15] M. Jiang, G. Z. Bian, and Y. L. Fu, J. Catal. 146, 144 (1994). DOI:10.1016/0021-9517(94)90017-5 [16] M. L. Xiang, D. B. Li, H. C.
Xiao, J. Zhang, H. J. Qi, W. L. Li, B. Zhong, and Y. H. Sun, Fuel 87, 599 (2008). DOI:10.1016/j.fuel.2007.01.041 [17] E. T. Liakakou, E. Heracleous, K. S. Triantafyllidis, and A. A. Lemonidou, Appl. Catal. B 165, 296 (2015). DOI:10.1016/j.apcatb.2014.10.027 [18] Z. Liu, X. Li, M. R. Close, E. L. Kugler, J. L. Petersen, and D. B. Dadyburjor, Ind. Eng. Chem. Res. 36, 3085 (1997). DOI:10.1021/ie960648d [19] J. S. Lee, S. Kim, K. H. Lee, I. S. Nam, J. S. Chung, Y. G. Kim, and H. C. Woo, Appl. Catal. A 110, 11 (1994). DOI:10.1016/0926-860X(94)80101-0 [20] H. C. Woo, I. S. Nam, J. S. Lee, J. S. Chung, and Y. G. Kim, J. Catal. 142, 672 (1993). DOI:10.1006/jcat.1993.1240 [21] V. P. Santos, B. V. D. Linden, A. Chojecki, G. Budroni, S. Corthals, H. Shibata, G. R. Meima, F. Kapteijn, M. Makkee, and J. Gascon, ACS Catal. 3, 1634 (2013). DOI:10.1021/cs4003518 [22] G. Z. Bian, L. Fan, Y. L. Fu, and K. Fujimoto, Ind. Eng. Chem. Res. 37, 1736 (1998). DOI:10.1021/ie970792e [23] X. M. Shi, X. Z. Yang, F. H. Bai, M. H. Nao, and H. Q. Su, Chem. Ind. Eng. Prog. 29, 2291 (2010). [24] K. H. Yin, H. Shou, D. Ferrari, C. W. Jones, and R. J. Davis, Top. Catal. 56, 1740 (2013). DOI:10.1007/s11244-013-0110-6 [25] D. B. Li, C. Yang, H. J. Qi, H. R. Zhang, W. H. Li, Y. H. Sun, and B. Zhong, Catal. Commun. 5, 605 (2004). DOI:10.1016/j.catcom.2004.07.011 [26] F. Morales, D. Grandjean, F. M. F. de Groot, O. Stephan, and B. M. Weckhuysen, Phys. Chem. Chem. Phys. 7, 568 (2005). DOI:10.1039/B418286C [27] A. Dinse, M. Aigner, M. Ulbrich, G. R. Johnson, and A. T. Bell, J. Catal. 288, 104 (2012). DOI:10.1016/j.jcat.2012.01.008 [28] G. R. Johnson, S. Werner, and A. T. Bell, ACS Catal. 5, 5888 (2015). DOI:10.1021/acscatal.5b01578 [29] H. J. Qi, D. B. Li, C. Yang, Y. G. Ma, W. H. Li, Y. H. Sun, and B. Zhong, Catal. Commun. 4, 339 (2003). DOI:10.1016/S1566-7367(03)00061-X [30] M. Y. Ding, M. H. Qiu, J. G. Liu, Y. P. Li, T. J. Wang, L. L. Ma, and C. Z. Wu, Fuel 109, 21 (2013).
DOI:10.1016/j.fuel.2012.06.034 [31] R. G. Zhang, G. R. Wang, and B. J. Wang, J. Catal. 305, 238 (2013). DOI:10.1016/j.jcat.2013.05.028 [32] M. Ojeda, M. L. Granados, S. Rojas, P. Terreros, F. J. Garcia-Garcia, and J. L. G. Fierro, Appl. Catal. A 261, 47 (2004). DOI:10.1016/j.apcata.2003.10.033 [33] B. Ravel and M. Newville, J. Synchrotron Radiat. 12, 537 (2005). DOI:10.1107/S0909049505012719 [34] T. Toyoda, T. Minami, and E. W. Qian, Energy Fuels 27, 3769 (2013). DOI:10.1021/ef400262a [35] S. S. R. Putluru, L. Schill, A. D. Jensen, B. Siret, F. Tabaries, and R. Fehrmann, Appl. Catal. B 165, 628 (2015). DOI:10.1016/j.apcatb.2014.10.060 [36] D. C. Song, J. J. Li, and Q. Cai, J. Phys. Chem. C 111, 18970 (2007). DOI:10.1021/jp0751357 [37] G. Prieto, A. Martinez, P. Concepcion, and R. Moreno-Tost, J. Catal. 266, 129 (2009). DOI:10.1016/j.jcat.2009.06.001 [38] N. Kumar, K. Jothimurugesan, G. G. Stanley, V. Schwartz, and J. J. Spivey, J. Phys. Chem. C 115, 990 (2011). [39] F. Morales, E. de Smit, F. M. F. de Groot, T. Visser, and B. M. Weckhuysen, J. Catal. 246, 91 (2007). DOI:10.1016/j.jcat.2006.11.014 [40] S. Gaur, H. Y. Wu, G. G. Stanley, K. More, C. S. S. R. Kumar, and J. J. Spivey, Catal. Today 208, 72 (2013). DOI:10.1016/j.cattod.2012.10.029 [41] R. G. Leliveld, A. J. van Dillen, J. W. Geus, and D. C. Koningsberger, J. Catal. 165, 184 (1997). DOI:10.1006/jcat.1997.1480 [42] D. O. Scanlon, G. W. Watson, D. J. Payne, G. R. Atkinson, R. G. Egdell, and D. S. L. Law, J. Phys. Chem. C 114, 4636 (2010). DOI:10.1021/jp9093172 [43] J. M. Christensen, P. M. Mortensen, R. Trane, P. A. Jensen, and A. D. Jensen, Appl. Catal. A 366, 29 (2009). DOI:10.1016/j.apcata.2009.06.034 [44] M. L. Xiang, D. B. Li, W. H. Li, B. Zhong, and Y. H. Sun, Catal. Commun. 8, 503 (2007). DOI:10.1016/j.catcom.2006.07.029 [45] G. M. Wu, J. L. Zhou, M. M. Lv, W. Xie, S. Sun, C. Gao, W. D. Wang, and J. Bao, Chin. J. Chem. Phys. 28, 604 (2015). DOI:10.1063/1674-0068/28/cjcp1504075 [46] Y.
Yang, Y. D. Wang, S. Liu, Q. Y. Song, Z. K. Xie, and Z. Gao, Catal. Lett. 127, 448 (2009). DOI:10.1007/s10562-008-9739-3 [47] M. Nagai, A. Md. Zahidul, and K. Matsuda, Appl. Catal. A 313, 137 (2006). DOI:10.1016/j.apcata.2006.07.006 [48] D. B. Li, C. Yang, W. H. Li, Y. H. Sun, and B. Zhong, Top. Catal. 32, 233 (2005). DOI:10.1007/s11244-005-2901-x [49] M. Y. Sun, A. E. Nelson, and J. Adjaye, J. Catal. 226, 32 (2004). DOI:10.1016/j.jcat.2004.05.005 [50] Y. Zhang, Y. Liu, G. H. Yang, S. L. Sun, and N. Tsubaki, Appl. Catal. A 321, 79 (2007). DOI:10.1016/j.apcata.2007.01.030

Effect of the Mn promoter on the structure of K-Co-Mo catalysts and their performance in higher alcohol synthesis from CO hydrogenation

a. National Synchrotron Radiation Laboratory, Collaborative Innovation Center of Chemistry for Energy Materials, University of Science and Technology of China, Hefei 230029; b. State Key Laboratory of Organic-Inorganic Composites, Beijing University of Chemical Technology, Beijing 100029; c. Department of Materials Science and Engineering, CAS Key Laboratory of Materials for Energy Conversion, University of Science and Technology of China, Hefei 230029
# Can someone write me a six-page story, urgently? Topic: "An unforgettable day in the forest." Thanks, it's urgent.

### The function f(x)=15000(0.96)^x represents the value in dollars of a vehicle x years after it has been purchased new. What is the average decrease in value per year between years 5 and 10? Please show all work!

### What are the three steps of the standard collision-prevention formula?

### Is sweetened iced tea an electrolyte mixture?

### Reduce the fraction (w^2+5w+6)/(w^2−w−12).

### Which American Revolution battle in Georgia boosted the morale of the Georgia militia?

### Solve for a: (1/5)(25−5a) = 4−a. Options: no solution; all real numbers; −1; 0.

### What fraction of an hour is 16 minutes?

### Which of the following expressions represents "subtract 17 from x"?

### What is the importance of the Help and Support feature of Windows?

### A region generally is higher than an adjacent region if ____ than in the adjacent region.

### How much is one ton in weight?

### Molecular compounds are usually a. composed of two or more transition elements. b. composed of positive and negative ions. c. composed of two or more nonmetallic elements. d. solids at room temperature.

### Tymere says that 7/10 × 7/10 equals 49/10. Which of the statements below is true?

### Find the area of the shaded region. (Figure attached.)

### An airplane travels about 1,500 km in a straight line between Toronto and Winnipeg. However, the plane must climb to 10,000 m during the first 200 km, then descend 10,000 m during the final 300 km. a) How much distance do the climb and descent add to the flight? b) The airplane averages a speed of 600 km/h. Do climbing and descending 10,000 m add hours, minutes, or seconds to the flight? Explain. Please help, I do not understand how to do this.

### The difference between a number and 10 is multiplied by 2. The answer is 6. What is the equation that represents this information?

### Indicate the amount (if any) that she can deduct as an ordinary and necessary business deduction in the situation: Michelle moves her food truck between various locations on a daily rotation. Last week, Michelle was stopped for speeding. She paid a fine of $220 for speeding, including $175 for legal advice in connection with the ticket.
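The vehicle-depreciation question earlier in this list is a direct average-rate-of-change computation; a minimal sketch, assuming only the stated model f(x) = 15000(0.96)^x:

```python
# Average decrease in value per year between years 5 and 10 for
# f(x) = 15000 * 0.96**x, the car's value in dollars after x years.
def f(x):
    return 15000 * 0.96 ** x

# Average rate of decrease = (value lost over the interval) / (years elapsed)
avg_decrease = (f(5) - f(10)) / (10 - 5)
print(round(f(5), 2), round(f(10), 2), round(avg_decrease, 2))
```

The value falls from about $12,230.59 at year 5 to about $9,972.49 at year 10, an average decrease of roughly $451.62 per year.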
## Zeta Function

A function satisfying certain properties which is computed as an infinite sum of negative powers. The most commonly encountered zeta function is the Riemann Zeta Function.

See also Dedekind Function, Dirichlet Beta Function, Dirichlet Eta Function, Dirichlet L-Series, Dirichlet Lambda Function, Epstein Zeta Function, Jacobi Zeta Function, Nint Zeta Function, Prime Zeta Function, Riemann Zeta Function.

References: Ireland, K. and Rosen, M. "The Zeta Function." Ch. 11 in A Classical Introduction to Modern Number Theory, 2nd ed. New York: Springer-Verlag, pp. 151-171, 1990.
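For the most common case named above, the Riemann zeta function, the "infinite sum of negative powers" takes the standard form (together with Euler's product over the primes), valid for $\operatorname{Re}(s) > 1$:

```latex
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}}
        \;=\; \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}},
\qquad \operatorname{Re}(s) > 1 .
```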
2015 Workshop on Geometric Analysis

Meeting notice for the 2015 Workshop on Geometric Analysis. Dear Professor: ... Respectfully yours, School of Mathematical Sciences, Xiamen University, April 15, 2015.

1. Conference dates: registration on May 16, 2015; the conference opens on May 17, 2015 and ends on May 22, 2015.
2. Venue: Bailuzhou Hotel, Xiamen.
3. Contact: 郭淑敏, School of Mathematical Sciences, Xiamen University; Tel: 86-592-2580602.

Organizing committee: 韩青, 李嘉禹, 邱春晖, 史宇光, 张明智, 钟春平.

Local arrangements:
- Registration and excursion: 郭淑敏, 13400785041
- Transportation: 雷玉娟, 18959212796
- Lodging and meals: 林智雄, 13599916563
- Venue and multimedia: 林煜, 18959219403

Schedule (all talks in Lecture Hall 117, Mathematics and Physics Building, Xiamen University)

May 17, Sunday
- 9:10–9:30 Opening ceremony
- 9:40–10:30 徐兴旺, Negative gradient flow and the scalar curvature function
- 11:00–11:50 葛剑, Distance function to the boundary of manifold
- 14:00–14:50 李海中, Self-shrinkers of the mean curvature flow in arbitrary codimension
- 15:20–16:10 周斌, A potential theory for Weingarten curvature equations
- 16:20–17:10 宋翀, Skew mean curvature flow

May 18, Monday
- 9:00–9:50 戎小春, Manifolds of Ricci curvature and volume of local covering bounded below
- 10:20–11:10 朱苗苗, Boundary value problems for Dirac equations and the heat flow for Dirac-harmonic maps
- 11:20–11:50 姜旭旻, Boundary expansion for the complex Monge–Ampère equation
- 14:00–14:50 陈群, Omori–Yau maximum principles, V-harmonic maps and self-shrinkers of mean curvature flows
- 15:20–16:10 张会春, Lipschitz regularity of harmonic maps from Alexandrov spaces to NPC spaces
- 16:20–17:10 葛化彬, A combinatorial Yamabe problem on two and three dimensional manifolds

May 19, Tuesday
- 9:00–9:50 傅吉祥, Teissier's proportionality problem
- 10:20–11:10 Isoperimetric type problems for quermassintegrals in hyperbolic space
- 11:20–11:50 楚建春, $C^{2,\alpha}$ estimates for some nonlinear elliptic equations in complex geometry
- 14:00–14:50 盛利, The exponential decay of gluing maps and Gromov-Witten invariants
- 15:20–16:10 刘佳堃, On the uniqueness of $L_p$-Minkowski problems
- 16:20–17:10 刘磊, Energy quantization and blow-up analysis for Dirac-harmonic maps with curvature term

May 20, Wednesday
- 9:00–9:50 陈兵龙, A survey on some aspects of the Ricci flow theory
- 10:20–11:10 华波波, Discrete harmonic function theory on graphs
- 11:20–11:50 戴嵩, Lower order tensors in non-Kähler geometry and non-Kähler geometric flow
- 14:00–14:50 张希, Some results on semistable Higgs bundle
- 15:20–16:10 王枫, Stability in Kähler–Ricci soliton
- 16:20–17:10 马世光, Constant mean curvature surfaces of Delaunay type along a closed geodesic

May 21, Thursday
- 9:00–9:50 关波, Fully nonlinear elliptic equations on Hermitian manifolds
- 10:20–11:10 陈竟一, Radially symmetric solutions to the graphic Willmore surface equation
- 11:20–11:50 鲍超, Topology of closed mean convex hypersurfaces with low entropy

Titles and Abstracts

Topology of closed mean convex hypersurfaces with low entropy
In this presentation, I will show that closed mean convex hypersurfaces with low entropy are diffeomorphic to round spheres; this work is inspired by the results of Colding, Minicozzi, et al. We will use techniques from mean curvature flow to get the main result.

A survey on some aspects of the Ricci flow theory
I will report briefly on the development of the Ricci flow during the past 30-some years, focusing on my joint works with Xi-Ping Zhu in this field over the past 15 years. This includes our joint works on the uniqueness theorem, pinching theorem, gap theorems, classification of four-manifolds with positive isotropic curvature, and the uniformization theorem in complex differential geometry.

Radially symmetric solutions to the graphic Willmore surface equation
We show that a smooth radially symmetric solution $u$ to the graphic Willmore surface equation is either a constant or the defining function of a half sphere in ${\mathbb R}^3$. In particular, radially symmetric entire Willmore graphs in ${\mathbb R}^3$ must be flat.
When $u$ is a smooth radial solution over a punctured disk $D(\rho)\backslash\{0\}$ and is in $C^1(D(\rho))$, we show that there exist a constant $\lambda$ and a function $\beta$ in $C^0(D(\rho))$ such that $u''(r) =\frac{\lambda}{2}\log r+\beta(r)$; moreover, the graph of $u$ is contained in a graphical region of an inverted catenoid which is uniquely determined by $\lambda$ and $\beta(0)$. It is also shown that a radial solution on the punctured disk extends to a $C^1$ function on the disk when the mean curvature is square integrable.

Omori–Yau maximum principles, V-harmonic maps and self-shrinkers of mean curvature flows
In this talk, we first introduce our recent results on Omori–Yau maximum principles; then, combining them with V-harmonic maps, we give some applications to rigidity problems for self-shrinkers and translating solitons of mean curvature flows.

$C^{2,\alpha}$ estimates for some nonlinear elliptic equations in complex geometry
In this talk, we will consider some nonlinear elliptic equations in complex geometry with Hölder-continuous right-hand side. We will present sharp $C^{2,\alpha}$ estimates for solutions of these equations, assuming a bound on the Laplacian of the solution. Our result is optimal regarding the Hölder exponent.

Lower order tensors in non-Kähler geometry and non-Kähler geometric flow
In recent years, Streets and Tian introduced a series of curvature flows to study non-Kähler geometry. In this talk, we discuss how to construct second order curvature flows from the viewpoint of tensors. By classifying lower order tensors, we classify second order almost Hermitian curvature flows under some natural conditions. As a corollary, we show that symplectic curvature flow is the unique way to deform an almost Kähler structure in some canonical sense.
Teissier's proportionality problem
In this talk, I will discuss Teissier's proportionality problem for transcendental nef classes over a compact Kähler manifold, which says that the equalities in the Khovanskii-Teissier inequalities hold for two nef and big classes if and only if the two classes are proportional. This is a joint work with my student Xiao Jian.

A combinatorial Yamabe problem on two and three dimensional manifolds
We introduce a new combinatorial curvature on two and three dimensional triangulated manifolds, which transforms in the same way as the smooth scalar curvature under scaling of the metric and can be used to approximate the Gauss curvature on two dimensional manifolds. Then we use the flow method to study the corresponding constant curvature problem, which is called the combinatorial Yamabe problem. This work is joint with Xu Xu.

Distance function to the boundary of manifold
We focus on the study of the distance function to the boundary of a Riemannian manifold. Various assumptions on the curvatures of the Riemannian manifold and the mean curvature of the boundary allow us to estimate this function. We will give some applications to eigenvalue estimates and contact metric geometry.

Fully nonlinear elliptic equations on Hermitian manifolds
We are concerned with a class of fully nonlinear elliptic equations which play important roles in complex geometry and analysis. In the talk we shall focus on second derivative estimates on closed Hermitian manifolds. As an application of our results, we can prove a conjecture of Gauduchon, building on results of Tosatti and Weinkove. The major part of this talk is based on joint work with Xiaolan Nie.

Discrete harmonic function theory on graphs
As discrete metric measure spaces, graphs play an important role in both theoretical and applied ways. Combinatorics is particularly helpful for understanding the structure of finite graphs.
While for infinite graphs, which resemble complete noncompact Riemannian manifolds, these combinatorial methods are no longer effective, geometric methods survive. We will talk about discrete harmonic function theory on infinite graphs in this talk.

Boundary expansion for the complex Monge–Ampère equation
In this talk, we discuss boundary expansions for complete Kähler metrics in bounded strictly pseudoconvex domains. We discuss the remainder estimates in the context of local finite regularity, and the convergence of the expansions under the natural condition that the boundary is analytic.

Self-shrinkers of the mean curvature flow in arbitrary codimension
In this talk, we will report our recent results in the study of self-shrinkers of the mean curvature flow in Euclidean space with arbitrary codimension, which include gap theorems about the norm square of the second fundamental form, classification results, rigidity results, F-stability, and volume estimates of self-shrinkers.

On the uniqueness of $L_p$-Minkowski problems
In this talk we first give a brief introduction to the $L_p$-Minkowski problem. Then we focus on the uniqueness results and show that in dimension two, either when $p \in [-1,0]$ or when $p \in (0,1)$ with an additional pinching condition, the solution must be the unit ball. This partially answers a conjecture of Lutwak, Yang and Zhang about the uniqueness of the $L_p$-Minkowski problem in dimension two. Moreover, we give an explicit pinching constant depending only on $p$ when $0<p<1$. This is a recent joint work with Yong Huang and Lu Xu.

Energy quantization and blow-up analysis for Dirac-harmonic maps with curvature term
For a sequence of Dirac-harmonic maps with curvature term from a closed Riemannian surface to a general compact Riemannian manifold with uniformly bounded energy, we prove that the energy identities and no-neck results hold during the blow-up process. This is a joint work with Prof. Jürgen Jost and Prof. Miaomiao Zhu.
Constant mean curvature surfaces of Delaunay type along a closed geodesic
Delaunay surfaces are a kind of periodic surface with constant mean curvature in Euclidean space $R^3$. In this talk I will show how to construct Delaunay-type constant mean curvature surfaces along a non-degenerate closed geodesic in 3-dimensional Riemannian manifolds. This is a joint work with Frank Pacard.

Manifolds of Ricci curvature and volume of local covering bounded below
This will be a preliminary report on an ongoing program with Jiayin Pan in the study of the manifolds in the title.

The exponential decay of gluing maps and Gromov-Witten invariants
There have been several different approaches to defining Gromov-Witten invariants for general symplectic manifolds, such as those of Fukaya-Ono, Li-Tian, Liu-Tian, Ruan, and Siebert, among others. The moduli space has various lower strata. We show that the relevant differential form decays at an exponential rate near the lower strata; the Gromov-Witten invariants can then be defined as an integral over the top strata of a virtual neighborhood.

Skew mean curvature flow
The skew mean curvature flow, or binormal flow, which originates from fluid dynamics, describes the evolution of a codimension-two submanifold along the binormal direction. We show the existence of a local solution to the initial value problem for the SMCF of surfaces in Euclidean space $R^4$. As a byproduct, we obtain a uniform Sobolev-type embedding theorem for the second fundamental forms of two-dimensional surfaces. I will also talk about the Hasimoto transformation if time permits. This is a joint work with Jun Sun.

Stability in Kähler–Ricci soliton
The notion of K-stability was introduced by Tian to study the existence of Kähler–Einstein metrics. This was generalized by Donaldson in a purely algebraic way. For the Kähler–Ricci soliton, Berman gave a definition of modified K-stability. From the equivariant point of view, we will see the similarity between these two notions of stability.
Isoperimetric type problems for quermassintegrals in hyperbolic space
In this talk, I will start from the classical Alexandrov-Fenchel type inequalities for convex domains in Euclidean space and turn to the recent development of such inequalities in hyperbolic space. We show that, for the class of horospherically convex hypersurfaces in hyperbolic space whose $l$-th quermassintegral is constant, the $k$-th $(k>l)$ quermassintegral attains its minimum at geodesic spheres. As a byproduct, several isoperimetric type problems about curvature integrals are solved in the class of horo-convex hypersurfaces. Our proof relies on a quermassintegral-preserving curvature flow. This is joint work with Guofang Wang.

Negative gradient flow and the scalar curvature function
In this talk, I will update the information on the scalar curvature function problem in a conformal class via the negative gradient flow method.

Lipschitz regularity of harmonic maps from Alexandrov spaces to NPC spaces
In 1997, J. Jost and F. H. Lin proved that every energy minimizing harmonic map from an Alexandrov space with curvature bounded from below to a non-positive-curvature (NPC) metric space is locally Hölder continuous. In [37], F. H. Lin conjectured that the Hölder continuity can be improved to Lipschitz continuity. J. Jost also asked a similar problem. In this talk, we will introduce a resolution of this problem of Lin's. This is a joint work with Prof. Xi-Ping Zhu.

Some results on semistable Higgs bundle
In this talk, I will talk about the heat flow on Higgs bundles and introduce our recent works (joint with Jiayu Li and Yechi Nie) on semistable Higgs bundles.

A potential theory for Weingarten curvature equations
In this talk, we will introduce a potential theory for the Weingarten curvature (or k-curvature) equation, which can also be seen as a PDE approach to curvature measures. In the case k=1, we extend the mean curvature measure to signed measures.
The related prescribed mean curvature equation will be solved. When k>1, we assign a measure to a bounded, upper semicontinuous function which is strictly subharmonic with respect to the k-curvature operator, and establish the weak continuity of the measure.

Boundary value problems for Dirac equations and the heat flow for Dirac-harmonic maps
In this talk, we shall first discuss the existence, uniqueness, and improved elliptic estimates for Dirac equations under a class of local elliptic boundary conditions. Then we introduce a heat flow approach to the existence of Dirac-harmonic maps from Riemannian spin manifolds with boundary and show the short-time existence of this flow. This is a joint work with Qun Chen, Juergen Jost, and Linlin Sun.

Directions:

1. Airport Terminal 4 → Bailuzhou Hotel: by taxi, about ¥40; by bus, take BRT Line 1 to Ershi station, then transfer to bus L5 to Tianhuyuan station.
2. Airport Terminal 3 → Bailuzhou Hotel: by taxi, about ¥30; by bus, take L19 to Xianhou station, transfer to BRT Line 1 to Ershi station, then transfer to bus L5 to Tianhuyuan station.
3. Xiamen Railway Station (on the island) → Bailuzhou Hotel: by taxi, about ¥12; by bus, take bus 122 to Feikuang station, or bus 958 to Tianhuyuan station.
4. Xiamen North Railway Station (off the island) → Bailuzhou Hotel: by taxi, about ¥60; by bus, take BRT Line 1 to Ershi station, then transfer to bus L5 to Tianhuyuan station.
5. Airport Terminal 4 → Xiamen University Haiyun campus (conference venue): by taxi, about ¥50; by bus, take BRT Line 1 to the Railway Station stop, then transfer to Tourist Line 2 to Haiyun station.
6. Airport Terminal 3 → Haiyun campus: by taxi, about ¥50; by bus, take L19 to Xianhou station, transfer to BRT Line 1 to the Railway Station stop, then Tourist Line 2 to Haiyun station.
7. Xiamen Railway Station (on the island) → Haiyun campus: by taxi, about ¥20; by bus, take Tourist Line 2 to Haiyun station.
8. Xiamen North Railway Station (off the island) → Haiyun campus: by taxi, about ¥80; by bus, take BRT Line 1 to the Railway Station stop, then Tourist Line 2 to Haiyun station.
9. Haiyun campus (venue) → Bailuzhou Hotel: by taxi, about ¥20; by bus, take bus 87 from Zhenzhuwan station to Tianhuyuan station.
Chemistry: The Molecular Science (5th Edition)

$6.022 \times 10^{23}$ atoms of $Ca$ has the greater number of atoms.

- 1 g of calcium: the molar mass of $Ca$ is 40.1 g/mol, so $1\,g \times \frac{1\,mol}{40.1\,g} \approx 0.025$ mol.
- $6.022 \times 10^{23}$ atoms of $Ca$: since $6.022 \times 10^{23}$ atoms = 1 mol, this sample is exactly 1 mol of $Ca$.
- Since we are comparing the same species ($Ca$), and 1 mol > 0.025 mol, the number of atoms is greater in the $6.022 \times 10^{23}$-atom sample of $Ca$.
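The mole comparison above can be reproduced numerically; a minimal sketch using the molar mass quoted in the answer (40.1 g/mol):

```python
# Convert both samples to moles and compare.
AVOGADRO = 6.022e23        # atoms per mole
MOLAR_MASS_CA = 40.1       # g/mol for calcium

moles_from_mass = 1.0 / MOLAR_MASS_CA       # moles in 1 g of Ca (~0.025 mol)
moles_from_count = 6.022e23 / AVOGADRO      # moles in 6.022e23 atoms (1 mol)
print(moles_from_mass, moles_from_count, moles_from_count > moles_from_mass)
```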
## HEAT TRANSFER OF TWO PHASES (WATER – AIR) IN HORIZONTAL SMOOTH AND RIBBED DUCTS

#### Riyadh S. Al-Turaihi, Sarah Hasan Oleiwi

Computational fluid dynamics (CFD) was used to investigate the flow of water and air in smooth and ribbed ducts. A temperature was applied to the top and the bottom of the duct, where the ribs are located. The heat transfer coefficient was calculated at different locations inside the ducts, and the results were validated against several heat transfer coefficient correlations developed by other researchers. Three rib shapes were studied: rectangular, trapezoidal, and triangular. Three water velocities (0.4, 0.6, and 0.8 m/s) and three air velocities (0.12, 0.15, and 0.18 m/s) were studied. The heat transfer coefficient increased when ribs were added, and it also increased as the flow velocity increased.

Keywords: Heat transfer, Ribbed duct, Two phase, Ansys Fluent, CFD

Research article. Publication date: December 1, 2016. International Journal of Energy Applications and Technologies, vol. 3, no. 2, pp. 41–49, e-ISSN 2548-060X.

Citation: Al-Turaihi, R., & Oleiwi, S. (2016). Heat transfer of two phases (water – air) in horizontal smooth and ribbed ducts. International Journal of Energy Applications and Technologies, 3(2), 41–49. Retrieved from https://dergipark.org.tr/en/pub/ijeat/issue/28204/299491
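The abstract above mentions validating the computed heat transfer coefficients against published correlations. One widely used single-phase correlation of that kind (not necessarily one of those used in the paper) is the Dittus-Boelter correlation for turbulent duct flow; a minimal sketch with assumed illustrative water properties:

```python
# Dittus-Boelter correlation (heating case): Nu = 0.023 * Re**0.8 * Pr**0.4,
# then h = Nu * k / D_h. Valid roughly for Re > 1e4 and 0.6 < Pr < 160.
def h_dittus_boelter(re, pr, k, d_h):
    """Return the heat transfer coefficient h in W/(m^2 K)."""
    nu = 0.023 * re ** 0.8 * pr ** 0.4   # Nusselt number
    return nu * k / d_h

# Illustrative values (assumed, not taken from the paper): water near room
# temperature (Pr ~ 6.1, k ~ 0.607 W/(m K)) in a duct with a 5 cm
# hydraulic diameter at Re = 2e4.
h = h_dittus_boelter(re=2.0e4, pr=6.1, k=0.607, d_h=0.05)
print(round(h, 1))
```

With these assumed inputs the correlation gives a coefficient on the order of 1.6 kW/(m^2 K), which is the kind of baseline a CFD result for a smooth duct would be compared against.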
# 0.8 Exponential functions and graphs (Page 2/2)

Therefore, if $a<0$, then the range is $(-\infty, q)$, meaning that $f(x)$ can be any real number less than $q$. Equivalently, one could write that the range is $\{y \in \mathbb{R} : y < q\}$.

For example, the domain of $g(x)=3\cdot 2^{x+1}+2$ is $\{x : x\in\mathbb{R}\}$. For the range,
$$2^{x+1}>0 \quad\Rightarrow\quad 3\cdot 2^{x+1}>0 \quad\Rightarrow\quad 3\cdot 2^{x+1}+2>2$$
Therefore the range is $\{g(x) : g(x)\in(2,\infty)\}$.

## Domain and range

1. Give the domain of $y=3^{x}$.
2. What is the domain and range of $f(x)=2^{x}$?
3. Determine the domain and range of $y=(1,5)^{x+3}$.

## Intercepts

For functions of the form $y=ab^{(x+p)}+q$, the intercepts with the $x$- and $y$-axis are calculated by setting $x=0$ for the $y$-intercept and by setting $y=0$ for the $x$-intercept.
The $y$-intercept is calculated as follows:
$$y = ab^{(x+p)}+q \quad\Rightarrow\quad y_{int} = ab^{(0+p)}+q = ab^{p}+q$$

For example, the $y$-intercept of $g(x)=3\cdot 2^{x+1}+2$ is given by setting $x=0$:
$$y_{int} = 3\cdot 2^{0+1}+2 = 3\cdot 2+2 = 8$$

The $x$-intercepts are calculated by setting $y=0$ as follows:
$$0 = ab^{(x_{int}+p)}+q \quad\Rightarrow\quad ab^{(x_{int}+p)} = -q \quad\Rightarrow\quad b^{(x_{int}+p)} = -\frac{q}{a}$$

Since $b>0$ (this is a requirement in the original definition) and a positive number raised to any power is always positive, the last equation only has a real solution if either $a<0$ or $q<0$ (but not both). Additionally, $a$ must not be zero for the division to be valid. If these conditions are not satisfied, the graph of the function of the form $y=ab^{(x+p)}+q$ does not have any $x$-intercepts.

For example, the $x$-intercept of $g(x)=3\cdot 2^{x+1}+2$ is given by setting $y=0$:
$$0 = 3\cdot 2^{x_{int}+1}+2 \quad\Rightarrow\quad 3\cdot 2^{x_{int}+1} = -2 \quad\Rightarrow\quad 2^{x_{int}+1} = -\frac{2}{3}$$
which has no real solution. Therefore, the graph of $g(x)=3\cdot 2^{x+1}+2$ does not have an $x$-intercept. You will notice that calculating $g(x)$ for any value of $x$ will always give a positive number, meaning that $y$ will never be zero and so the graph will never intersect the $x$-axis.

## Intercepts

1. Give the $y$-intercept of the graph of $y=b^{x}+2$.
2. Give the $x$- and $y$-intercepts of the graph of $y=\frac{1}{2}(1,5)^{x+3}-0,75$.
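The intercept computations above for $g(x)=3\cdot 2^{x+1}+2$ can be checked numerically; a minimal sketch:

```python
# y-intercept: evaluate g at x = 0. x-intercept: y = 0 would require
# 2**(x+1) = -2/3, which is impossible since 2**t > 0 for every real t.
def g(x):
    return 3 * 2 ** (x + 1) + 2

y_intercept = g(0)            # 3*2 + 2 = 8
required_power = -2 / 3       # the value 2**(x+1) would need to take
has_x_intercept = required_power > 0
print(y_intercept, has_x_intercept)
```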
## Asymptotes

Functions of the form $y=ab^{(x+p)}+q$ always have exactly one horizontal asymptote. When examining the range of these functions, we saw that we always have either $y<q$ or $y>q$ for all input values of $x$. Therefore the line $y=q$ is an asymptote.

For example, we saw earlier that the range of $g(x)=3\cdot 2^{x+1}+2$ is $(2,\infty)$ because $g(x)$ is always greater than 2. However, the value of $g(x)$ can get extremely close to 2, even though it never reaches it. For example, if you calculate $g(-20)$, the value is approximately 2.000006. Using larger negative values of $x$ will make $g(x)$ even closer to 2: the value of $g(-100)$ is so close to 2 that the calculator is not precise enough to know the difference, and will (incorrectly) show you that it is equal to exactly 2. From this we deduce that the line $y=2$ is an asymptote.

## Asymptotes

1. Give the equation of the asymptote of the graph of $y=3^{x}-2$.
2. What is the equation of the horizontal asymptote of the graph of $y=3(0,8)^{x-1}-3$?

## Sketching graphs of the form $f(x)=ab^{(x+p)}+q$

In order to sketch graphs of functions of the form $f(x)=ab^{(x+p)}+q$, we need to determine four characteristics:

1. domain and range
2. $y$-intercept
3. $x$-intercept
4. asymptote

For example, sketch the graph of $g(x)=3\cdot 2^{x+1}+2$. Mark the intercepts. We have determined the domain to be $\{x:x\in\mathbb{R}\}$ and the range to be $\{g(x):g(x)\in(2,\infty)\}$. The $y$-intercept is $y_{int}=8$ and there is no $x$-intercept.

## Sketching graphs

1. Draw the graphs of the following on the same set of axes. Label the horizontal asymptotes and $y$-intercepts clearly.
   1. $y=b^{x}+2$
   2. $y=b^{x+2}$
   3. $y=2b^{x}$
   4. $y=2b^{x+2}+2$
2. Draw the graph of $f(x)=3^{x}$, then:
Explain where a solution of $3^{x}=5$ can be read off the graph.

## End of chapter exercises

1. The following table of values has columns giving the $y$-values for the graphs $y=a^{x}$, $y=a^{x+1}$ and $y=a^{x}+1$. Match a graph to a column.

   | $x$ | A    | B    | C     |
   |-----|------|------|-------|
   | -2  | 7,25 | 6,25 | 2,5   |
   | -1  | 3,5  | 2,5  | 1     |
   | 0   | 2    | 1    | 0,4   |
   | 1   | 1,4  | 0,4  | 0,16  |
   | 2   | 1,16 | 0,16 | 0,064 |

2. The graph of $f(x)=1+a\cdot 2^{x}$ ($a$ is a constant) passes through the origin.
   1. Determine the value of $a$.
   2. Determine the value of $f(-15)$ correct to FIVE decimal places.
   3. Determine the value of $x$, if $P(x;0,5)$ lies on the graph of $f$.
   4. If the graph of $f$ is shifted 2 units to the right to give the function $h$, write down the equation of $h$.
3. The graph of $f(x)=a\cdot b^{x}$ $(a\ne 0)$ has the point P(2;144) on $f$.
   1. If $b=0,75$, calculate the value of $a$.
   2. Hence write down the equation of $f$.
   3. Determine, correct to TWO decimal places, the value of $f(13)$.
   4. Describe the transformation of the curve of $f$ to $h$ if $h(x)=f(-x)$.
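The numerical behaviour near the horizontal asymptote described in the Asymptotes section above ($g(-20) \approx 2.000006$, $g(-100)$ indistinguishable from 2 on a calculator) can be reproduced directly; a minimal sketch:

```python
# g(x) = 3 * 2**(x+1) + 2 approaches, but never reaches, its horizontal
# asymptote y = 2 as x decreases.
def g(x):
    return 3 * 2 ** (x + 1) + 2

close = g(-20)     # slightly above 2 (about 2.0000057)
closer = g(-100)   # so close that double precision rounds it to exactly 2.0
print(close, closer)
```

This mirrors the calculator observation in the text: at $x=-100$ the correction term $3\cdot 2^{-99}$ is far below the precision of a floating-point 2, so the machine reports exactly 2 even though mathematically $g(x)>2$.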
This content will become publicly available on January 17, 2023.

Characterizing the Fast Radio Burst Host Galaxy Population and its Connection to Transients in the Local and Extragalactic Universe

Abstract
We present the localization and host galaxies of one repeating and two apparently nonrepeating fast radio bursts (FRBs). FRB 20180301A was detected and localized with the Karl G. Jansky Very Large Array to a star-forming galaxy at z = 0.3304. FRB 20191228A and FRB 20200906A were detected and localized by the Australian Square Kilometre Array Pathfinder to host galaxies at z = 0.2430 and z = 0.3688, respectively. We combine these with 13 other well-localized FRBs in the literature and analyze the host galaxy properties. We find no significant differences in the host properties of repeating and apparently nonrepeating FRBs. FRB hosts are moderately star forming, with masses slightly offset from the star-forming main sequence. Star formation and low-ionization nuclear emission-line region emission are major sources of ionization in FRB host galaxies, with the former dominant in repeating FRB hosts. FRB hosts do not track stellar mass and star formation as seen in field galaxies (more than 95% confidence). FRBs are rare in massive red galaxies, suggesting that progenitor formation channels are not solely dominated by delayed channels which lag star formation by gigayears. The global properties of FRB hosts are indistinguishable from those of core-collapse supernova and short gamma-ray burst hosts, and …

NSF-PAR ID: 10350616. The Astronomical Journal, vol. 163, no. 2, article 69. ISSN 0004-6256. Supported by the National Science Foundation.

More like this:

1. ABSTRACT The physical properties of fast radio burst (FRB) host galaxies provide important clues towards the nature of FRB sources.
The 16 FRB hosts identified thus far span three orders of magnitude in mass and specific star formation rate, implicating a ubiquitously occurring progenitor object. FRBs localized with ∼arcsecond accuracy also enable effective searches for associated multiwavelength and multi-time-scale counterparts, such as the persistent radio source associated with FRB 20121102A. Here we present a localization of the repeating source FRB 20201124A, and its association with a host galaxy (SDSS J050803.48+260338.0, z = 0.098) and persistent radio source. The galaxy is massive (${\sim}3\times 10^{10}\, \text{M}_{\odot }$), star-forming (a few solar masses per year), and dusty. Very Large Array and Very Long Baseline Array observations of the persistent radio source measure a luminosity of $1.2 \times 10^{29}$ erg s$^{-1}$ Hz$^{-1}$, and show that it is extended on scales ≳50 mas. We associate this radio emission with the ongoing star formation activity in SDSS J050803.48+260338.0. Deeper, high-resolution optical observations are required to better utilize the milliarcsecond-scale localization of FRB 20201124A and determine the origin of the large dispersion measure (150–220 pc cm$^{-3}$) contributed by the host. SDSS J050803.48+260338.0 is an order of magnitude more massive than any galaxy or stellar system previously associated with a repeating FRB source, but …

2. Intense, millisecond-duration bursts of radio waves (named fast radio bursts) have been detected from beyond the Milky Way. Their dispersion measures (which are greater than would be expected if they had propagated only through the interstellar medium of the Milky Way) indicate extragalactic origins, and imply contributions from the intergalactic medium and perhaps from other galaxies. Although several theories exist regarding the sources of these fast radio bursts, their intensities, durations and temporal structures suggest coherent emission from highly magnetized plasma.
Two of these bursts have been observed to repeat, and one repeater (FRB 121102) has been localized to the largest star-forming region of a dwarf galaxy at a cosmological redshift of 0.19. However, the host galaxies and distances of the hitherto non-repeating fast radio bursts are yet to be identified. Unlike repeating sources, these events must be observed with an interferometer that has sufficient spatial resolution for arcsecond localization at the time of discovery. Here we report the localization of a fast radio burst (FRB 190523) to a few-arcsecond region containing a single massive galaxy at a redshift of 0.66. This galaxy is different from the host of FRB 121102, as it is a thousand times more massive, with a specific […]

3. ABSTRACT We constrain the Hubble constant H0 using Fast Radio Burst (FRB) observations from the Australian Square Kilometre Array Pathfinder (ASKAP) and Murriyang (Parkes) radio telescopes. We use the redshift-dispersion measure ('Macquart') relationship, accounting for the intrinsic luminosity function, cosmological gas distribution, population evolution, host galaxy contributions to the dispersion measure (DMhost), and observational biases due to burst duration and telescope beamshape. Using an updated sample of 16 ASKAP FRBs detected by the Commensal Real-time ASKAP Fast Transients (CRAFT) Survey and localized to their host galaxies, and 60 unlocalized FRBs from Parkes and ASKAP, our best-fitting value of H0 is calculated to be $73_{-8}^{+12}$ km s⁻¹ Mpc⁻¹. Uncertainties in FRB energetics and DMhost produce larger uncertainties in the inferred value of H0 compared to previous FRB-based estimates. Using a prior on H0 covering the 67–74 km s⁻¹ Mpc⁻¹ range, we estimate a median ${\rm DM}_{\rm host}= 186_{-48}^{+59}\,{\rm pc \, cm^{-3}}$, exceeding previous estimates. We confirm that the FRB population evolves with redshift similarly to the star-formation rate.
We use a Schechter luminosity function to constrain the maximum FRB energy to be $\log_{10} E_{\rm max} = 41.26_{-0.22}^{+0.27}$ erg assuming a characteristic FRB emission bandwidth of 1 GHz at 1.3 GHz, and the cumulative luminosity index to be $\gamma =-0.95_{-0.15}^{+0.18}$. We demonstrate with a sample […]

4. Abstract The first fast radio burst (FRB) to be precisely localized was associated with a luminous persistent radio source (PRS). Recently, a second FRB/PRS association was discovered for another repeating source of FRBs. However, it is not clear what makes FRBs or PRS or how they are related. We compile FRB and PRS properties to consider the population of FRB/PRS sources. We suggest a practical definition for PRS as FRB associations with luminosity greater than 10²⁹ erg s⁻¹ Hz⁻¹ that are not attributed to star formation activity in the host galaxy. We model the probability distribution of the fraction of FRBs with PRS for repeaters and nonrepeaters, showing there is not yet evidence for repeaters to be preferentially associated with PRS. We discuss how FRB/PRS sources may be distinguished by the combination of active repetition and an excess dispersion measure local to the FRB environment. We use CHIME/FRB event statistics to bound the mean per-source repetition rate of FRBs to be between 25 and 440 yr⁻¹. We use this to provide a bound on the density of FRB-emitting sources in the local universe of between 2.2 × 10² and 5.2 × 10⁴ Gpc⁻³ assuming a pulsar-like beamwidth for FRB emission. This density implies that PRS may […]

5. The dispersive sweep of fast radio bursts (FRBs) has been used to probe the ionized baryon content of the intergalactic medium, which is assumed to dominate the total extragalactic dispersion. While the host galaxy contributions to dispersion measure (DM) appear to be small for most FRBs, in at least one case there is evidence for an extreme magneto-ionic local environment and a compact persistent radio source.
Here we report the detection and localization of the repeating FRB 20190520B, which is co-located with a compact, persistent radio source and associated with a dwarf host galaxy of high specific star formation rate at a redshift z = 0.241 ± 0.001. The estimated host galaxy DM ≈ $903_{-111}^{+72}$ pc cm⁻³, nearly an order of magnitude higher than the average of FRB host galaxies, far exceeds the DM contribution of the intergalactic medium. Caution is thus warranted in inferring redshifts for FRBs without accurate host galaxy identifications. The dense FRB environment and the association with a compact persistent radio source may point to a distinctive origin or an earlier evolutionary stage for this FRB source.
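The host DM quoted above is what remains after subtracting the Milky Way and intergalactic contributions from the total observed dispersion measure. A minimal sketch of that budget, using a crude linear approximation to the Macquart relation (DM_cosmic ≈ 1000·z pc cm⁻³) and round illustrative numbers — none of the values below are taken from the paper:

```python
def host_dm(dm_total, dm_milky_way, z, dm_per_z=1000.0):
    """Estimate the host-galaxy dispersion measure (pc cm^-3) by
    subtracting the Galactic and cosmological contributions from the
    observed total. dm_per_z is a rough Macquart-relation slope."""
    dm_cosmic = dm_per_z * z          # crude linear approximation
    return dm_total - dm_milky_way - dm_cosmic

# Illustrative numbers only: a z = 0.241 burst with a large observed DM.
excess = host_dm(dm_total=1200.0, dm_milky_way=60.0, z=0.241)
```

A large positive excess, as here, is what motivates attributing most of the dispersion to the burst's local environment rather than to the intergalactic medium.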
# 30th Marian Smoluchowski Symposium on Statistical Physics

Sep 3 – 8, 2017 Poland timezone

## Pendular behaviour of public transport network

Not scheduled 15m poster

### Description

In this work, we propose a methodology that bears close resemblance to the Fourier analysis of the first harmonic to study networks subjected to pendular behaviour [1]. In this context, pendular behaviour is characterized by the movement of people from their homes to work in the morning and the movement in the opposite direction in the afternoon. Pendular behaviour is a relevant phenomenon in public transport networks because it may reduce the overall efficiency of the system as a result of the asymmetric utilization of the system in different directions. We apply this methodology to the bus transport system of Brasília (Brazil), a city that has commercial and residential activities in distinct boroughs. We show that this methodology can be used to characterize the pendular behaviour of this system, identifying the most critical nodes and the times of day when demand on the system is most severe [1,2].

[1] M. M. Izawa, F. A. Oliveira, D. O. Cajueiro, and B. A. Mello, Phys. Rev. E 96, 012309 (2017).
[2] Highlight of PRE https://physics.aps.org/synopsis-for/10.1103/PhysRevE.96.012309.
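The first-harmonic analysis mentioned in the abstract can be sketched as follows: project a 24-hour flow profile onto cosine and sine at the daily frequency; the resulting amplitude measures how pendular the flow is, and the phase gives the hour of peak flow. This is an illustrative reconstruction, not the authors' code, and the synthetic profile below is an assumption:

```python
import math

def first_harmonic(flow):
    """Amplitude and peak-hour phase of the first Fourier harmonic of a
    daily flow profile sampled once per hour (len(flow) == 24)."""
    n = len(flow)
    a = sum(f * math.cos(2 * math.pi * t / n) for t, f in enumerate(flow)) * 2 / n
    b = sum(f * math.sin(2 * math.pi * t / n) for t, f in enumerate(flow)) * 2 / n
    amplitude = math.hypot(a, b)
    phase_hours = (math.atan2(b, a) * n / (2 * math.pi)) % n
    return amplitude, phase_hours

# Synthetic pendular profile: flow towards work peaking at 8 h.
flow = [10 + 5 * math.cos(2 * math.pi * (t - 8) / 24) for t in range(24)]
amp, peak = first_harmonic(flow)
```

Comparing the phase of the morning-direction flow with that of the evening-direction flow is one way to quantify the asymmetric utilization the abstract describes.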
# Nyc Doe Payroll Calendar 2020 Pdf

Businesses generally pay their workers on a regular schedule, and state regulations determine the period within which workers must be paid. It is important for businesses to pay workers on time; otherwise there is a risk of workers quitting the firm. Payroll templates are a practical approach for companies because they cite all fiscal-year payment schedules, which simplifies payment processing and reduces worker confusion. A payroll design makes payment processing simple and transparent for both businesses and workers. Payroll departments are often exhausted because of tight schedules. The Nyc Doe Payroll Calendar 2020 Pdf cites the details of every pay cycle, such as the start date, the end date, and the number of working days. This clear and concise information removes any confusion connected to payments. The payroll templates also cite the submission deadlines that workers need to meet to ensure timely processing of payments.
# Interpretation of feature selection task So I am given the following question Data set sample5.txt has a 20-dimensional input $x$ in $\mathbb{R}^{20}$ but we suspect that many of these are actually irrelevant. Could you model the function $y = f(x)$ while - at the same time - figuring which dimensions contribute to the output? So it is a feature selection task - I understand that. But I'm sort of confused by the at the same time part. I know many feature selection algorithms but they do not actually produce models for the data, they just produce decisions regarding which features are important and which are not. Conversely, a model (alone) doesn't really give much information regarding which features are important and which are not. Perhaps you could do simple linear regression and then select features based on the weights (but I have never heard of anyone doing this). Or do you think that I am over-analyzing the question and what I should do is simply do feature selection first, then create the model?
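Regularized models are one standard answer to the "at the same time" part: an L1 (LASSO) penalty fits y = f(x) while driving the weights of irrelevant dimensions exactly to zero, so the fitted model and the feature-selection decision come out of the same optimization. A minimal sketch with a hand-rolled coordinate-descent LASSO on synthetic data — the data-generating function here is an assumption for illustration, not sample5.txt:

```python
import random

def soft_threshold(rho, lam):
    """The LASSO shrinkage operator S(rho, lam)."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam=0.05, iters=50):
    """Fit y ~ X.w with an L1 penalty via coordinate descent.
    Irrelevant features end up with weight (essentially) zero."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        for j in range(d):
            # correlation of feature j with the residual excluding j
            rho = sum(
                X[i][j] * (y[i] - sum(w[k] * X[i][k] for k in range(d) if k != j))
                for i in range(n)
            ) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / z
    return w

rng = random.Random(0)
n, d = 200, 6                                    # 6 dimensions, 2 relevant
X = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y = [3.0 * row[0] - 2.0 * row[1] for row in X]   # assumed ground truth
w = lasso_cd(X, y)
selected = [j for j in range(d) if abs(w[j]) > 0.1]
```

In practice one would use a library implementation (e.g. scikit-learn's `Lasso`) and sweep the penalty strength by cross-validation, but the point stands: the nonzero weights *are* the selected features, and the same weights *are* the model.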
# ABCD is a parallelogram. G is a point on AB such that AG = 2GB and E is a point

Question: ABCD is a parallelogram. G is a point on AB such that AG = 2GB, E is a point on DC such that CE = 2DE, and F is a point on BC such that BF = 2FC. Prove that: (i) ar(ADEG) = ar(GBCE) (ii) ar(ΔEGB) = (1/6) ar(ABCD) (iii) ar(ΔEFC) = (1/2) ar(ΔEBF) (iv) ar(ΔEGB) = (3/2) × ar(ΔEFC) (v) Find what portion of the area of the parallelogram is the area of ΔEFG.

Solution: Given: ABCD is a parallelogram, AG = 2GB, CE = 2DE and BF = 2FC.

To prove: (i) ar(ADEG) = ar(GBCE). (ii) ar(ΔEGB) = (1/6) ar(ABCD). (iii) ar(ΔEFC) = (1/2) ar(ΔEBF). (iv) ar(ΔEGB) = (3/2) × ar(ΔEFC). (v) Find what portion of the area of the parallelogram is the area of ΔEFG.

Construction: Draw a line through F parallel to AB, and the perpendicular to AB from C.

Proof:

(i) Since ABCD is a parallelogram, AB = CD and AD = BC. Consider the two trapeziums ADEG and GBCE. Since AB = DC, CE = 2DE and AG = 2GB:
DE = (1/3) CD = (1/3) AB and EC = (2/3) CD = (2/3) AB
AG = (2/3) AB and BG = (1/3) AB
So DE + AG = (1/3) AB + (2/3) AB = AB and EC + BG = (2/3) AB + (1/3) AB = AB.
The two trapeziums ADEG and GBCE therefore have the same height and equal sums of parallel sides. Since the area of a trapezium $=\frac{\text{sum of parallel sides}}{2} \times$ height,
ar(ADEG) = ar(GBCE).

(ii) Since we know from above that BG = (1/3) AB:
ar(ΔEGB) = (1/2) × GB × height
ar(ΔEGB) = (1/2) × (1/3) × AB × height
ar(ΔEGB) = (1/6) × AB × height
ar(ΔEGB) = (1/6) ar(ABCD).

(iii) Since the heights of triangles EFC and EBF (measured from E to line BC) are equal, and FC = (1/2) FB:
ar(ΔEFC) = (1/2) × FC × height
ar(ΔEFC) = (1/2) × (1/2) × FB × height
ar(ΔEFC) = (1/2) ar(ΔEBF)
Hence, ar(ΔEFC) = (1/2) ar(ΔEBF).
(iv) Consider the trapezium EGBC, in which ar(EGBC) = ar(ΔEGB) + ar(ΔEBF) + ar(ΔEFC)
⇒ (1/2) ar(ABCD) = (1/6) ar(ABCD) + 2 ar(ΔEFC) + ar(ΔEFC)
⇒ (1/3) ar(ABCD) = 3 ar(ΔEFC)
⇒ ar(ΔEFC) = (1/9) ar(ABCD)
Now from part (ii) we have ar(ΔEGB) = (1/6) ar(ABCD), so
ar(ΔEGB) = (3/2) × (1/9) ar(ABCD)
ar(ΔEGB) = (3/2) ar(ΔEFC)
∴ ar(ΔEGB) = (3/2) ar(ΔEFC)

(v) It is given that BF = 2CF. Let CF = x and FB = 2x. Let CH be the perpendicular from C to AB, meeting the line through F parallel to AB at I, and let S be the point where that parallel line meets EG. The triangles CFI and CBH are similar, so by similarity, if CI = k then IH = 2k.
Now consider triangle EFG:
ar(ΔEFG) = ar(ΔESF) + ar(ΔSGF)
ar(ΔEFG) = (1/2) SF × k + (1/2) SF × 2k
ar(ΔEFG) = (3/2) SF × k   ⋅⋅⋅ (1)
Also, ar(EGBC) = ar(SGBF) + ar(ESFC)
ar(EGBC) = (1/2)(SF + GB) × 2k + (1/2)(SF + EC) × k
ar(EGBC) = (3/2) k × SF + (GB + (1/2) EC) × k
ar(EGBC) = (3/2) k × SF + ((1/3) AB + (1/2) × (2/3) AB) × k
(1/2) ar(ABCD) = (3/2) k × SF + (2/3) AB × k
⇒ ar(ABCD) = 3k × SF + (4/3) AB × k       [multiplying both sides by 2]
⇒ ar(ABCD) = 3k × SF + (4/9) ar(ABCD)       [since the height of the parallelogram is CH = 3k, AB × k = (1/3) ar(ABCD)]
⇒ k × SF = (5/27) ar(ABCD)   ⋅⋅⋅ (2)
From (1) and (2) we have:
ar(ΔEFG) = (3/2) × (5/27) ar(ABCD)
ar(ΔEFG) = (5/18) ar(ABCD)
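All five results above can be sanity-checked numerically by placing the parallelogram on coordinates and computing areas with the shoelace formula; the particular vertices below are an arbitrary choice, and exact rationals keep the comparisons exact:

```python
from fractions import Fraction as Fr

def shoelace(pts):
    """Exact polygon area via the shoelace formula (vertices in order)."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

# An arbitrary parallelogram ABCD and the division points G, E, F.
A, B = (Fr(0), Fr(0)), (Fr(3), Fr(0))
C, D = (Fr(4), Fr(2)), (Fr(1), Fr(2))
G = (Fr(2), Fr(0))             # AG = 2 GB
E = (Fr(2), Fr(2))             # CE = 2 DE
Fp = (Fr(11, 3), Fr(4, 3))     # BF = 2 FC  (named Fp to avoid clashing)
par = shoelace([A, B, C, D])   # area of the parallelogram
```

With these coordinates the parallelogram has area 6, and each of (i) through (v) checks out exactly.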
Sciencemadness Discussion Board » Fundamentals » Beginnings » Conditions for hypochlorite formation

Author: Subject: Conditions for hypochlorite formation

Zandins
Harmless
Posts: 25
Registered: 18-6-2016
Member Is Offline
Mood: No Mood

Conditions for hypochlorite formation

I am planning to do a production run for NaClO by unseparated electrolysis of NaCl solution. NaCl + H2O -> NaClO + H2. My preliminary tests show carbon anodes erode excessively, so I have ordered a 1×2 inch platinized titanium anode. I am trying to find the right set of conditions to maximize hypochlorite production while minimizing oxygen evolution and chlorate production. To summarize my current findings:
1) The solution must be kept concentrated to prevent O2 production at the anode. Since a saturated solution is 26% NaCl by mass, I will be trying to keep the concentration above 20%;
2) Sources differ on the temperature that favors chlorate production; I've come across figures ranging from 40 to 70°C.
3) The pH must be kept above 7; therefore a base must be added. Would Na2CO3 be sufficient, or is NaOH necessary?
4) By adjusting the separation of the electrodes, the cell can draw up to 10 A at 12 V. This equals 120 W of Joule heating. Therefore, an ice bath must be used.
Does anyone in the community have experience with this? In that case, is there any information on the precise conditions of the setup?

NEMO-Chemistry
International Hazard
Posts: 1559
Registered: 29-5-2016
Location: UK
Member Is Offline
Mood: No Mood

Would a solution of sodium hydroxide and a chlorine generator be better?
bubble the chlorine into the sodium hydroxide and use the change in weight to calc the strength?

Zandins
Harmless
Posts: 25
Registered: 18-6-2016
Member Is Offline
Mood: No Mood

A chlorine generator would require NaClO, which is the same chemical I am trying to produce, according to the reaction: NaClO + 2 HCl -> Cl2 + NaCl + H2O. Besides, the electrochemical approach has a series of advantages, such as requiring only mundane substances (salt) and being able to work unattended. I have previously made NaClO in the process I described; I am looking for ways to improve the poor efficiency.

NEMO-Chemistry
International Hazard
Posts: 1559
Registered: 29-5-2016
Location: UK
Member Is Offline
Mood: No Mood

Quote: Originally posted by Zandins
A chlorine generator would require NaClO, which is the same chemical I am trying to produce, according to the reaction: NaClO + 2 HCl -> Cl2 + NaCl + H2O. Besides, the electrochemical approach has a series of advantages, such as requiring only mundane substances (salt) and being able to work unattended. I have previously made NaClO in the process I described; I am looking for ways to improve the poor efficiency.

Firmware21
Harmless
Posts: 33
Registered: 22-1-2015
Member Is Offline
Mood: Nitrates!!!

You can still produce Cl2 with TCCA and conc. HCl. I intended to use Nemo's method this summer to produce a little sodium hypochlorite. There must be threads on SM for this way of doing it; use the search engine. If you don't have access to TCCA, then you don't have other options.

hyfalcon
International Hazard
Posts: 1004
Registered: 29-3-2012
Member Is Offline
Mood: No Mood

Try a different way of searching the site. This topic has been covered quite a bit. Try this string in Google.
Code: site:sciencemadness.org sodium hypochlorite

[Edited on 10-7-2016 by hyfalcon]

hissingnoise
International Hazard
Posts: 3940
Registered: 26-12-2002
Member Is Offline
Mood: Pulverulescent!

Quote: By adjusting the separation of the electrodes, the cell can draw up to 10 A at 12 V. This equals 120 W of Joule heating. Therefore, an ice bath must be used.

12V is too high and will quickly corrode graphite ─ no more than 5V should be used! If you want chlorine, HCl oxidation by KMnO4 is a convenient method... As to pH, alkaline drift in electrolysis is troublesome and is remedied by drip additions of HCl!

yobbo II
International Hazard
Posts: 567
Registered: 28-3-2016
Member Is Offline
Mood: No Mood

info

Attachment: ullman.pdf (450kB)

hyfalcon
International Hazard
Posts: 1004
Registered: 29-3-2012
Member Is Offline
Mood: No Mood

It looks like laserred has more MMO for sale. Come and get it boys! http://www.ebay.com/itm/MMO-coated-expanded-titanium-mesh-an...

Zandins
Harmless
Posts: 25
Registered: 18-6-2016
Member Is Offline
Mood: No Mood

Definitely looks like a bargain, will order one today! On a slightly unrelated note - would an MMO mesh fail if it were sawed into a series of smaller electrodes? (6 × 10 inch is probably too much for one cell)

hyfalcon
International Hazard
Posts: 1004
Registered: 29-3-2012
Member Is Offline
Mood: No Mood

Nope. There are people who take these and shear them into 1 inch strips and resell them for a markup. As long as you don't run these in a low-chloride condition they should last quite a while. By the way, give these a good muriatic acid soak before putting them to use. I don't know exactly what process they have been in, but there is usually a brown deposit on them that will come off in your cell if you don't presoak them in acid first.
[Edited on 11-7-2016 by hyfalcon]
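As a rough check on the cell figures quoted in this thread (10 A at the anode), Faraday's law gives the theoretical hypochlorite yield. This is a back-of-the-envelope sketch assuming 100% current efficiency; real unseparated cells run far lower, which is exactly the inefficiency discussed above:

```python
FARADAY = 96485.0   # C per mole of electrons

def naclo_yield_grams(current_a, hours, efficiency=1.0):
    """Theoretical NaClO production for a chloride-oxidation cell:
    Cl- -> ClO- is a 2-electron oxidation, so 2 mol e- per mol NaClO."""
    mol_electrons = current_a * hours * 3600 / FARADAY
    mol_naclo = efficiency * mol_electrons / 2
    return mol_naclo * 74.44   # molar mass of NaClO in g/mol

grams_per_hour = naclo_yield_grams(10, 1)
```

So 10 A caps the cell at roughly 14 g of NaClO per hour even before losses to oxygen evolution and chlorate formation.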
# Problem: The O-O bonds in ozone are often described as "one-and-a-half" bonds. Is this description consistent with the idea of resonance?
# Magnetism and Matter

## Physics

### NCERT

1   Answer the following questions regarding earth's magnetism:
(a) A vector needs three quantities for its specification. Name the three independent quantities conventionally used to specify the earth's magnetic field.
(b) The angle of dip at a location in southern India is about $18^o$. Would you expect a greater or smaller dip angle in Britain?
(c) If you made a map of magnetic field lines at Melbourne in Australia, would the lines seem to go into the ground or come out of the ground?
(d) In which direction would a compass free to move in the vertical plane point to, if located right on the geomagnetic north or south pole?
(e) The earth's field, it is claimed, roughly approximates the field due to a dipole of magnetic moment $8 \times 10^{22}\, JT^{-1}$ located at its centre. Check the order of magnitude of this number in some way.
(f) Geologists claim that besides the main magnetic N-S poles, there are several local poles on the earth's surface oriented in different directions. How is such a thing possible at all?

(a) The three independent quantities conventionally used for specifying earth's magnetic field are:
(i) magnetic declination,
(ii) magnetic inclination or angle of dip, and
(iii) horizontal component of earth's magnetic field.
(b) The angle of dip at a point depends on how far the point is located with respect to the North Pole or the South Pole. The angle of dip would be greater in Britain (it is about $70^o$) than in southern India because Britain is located closer to the magnetic North Pole.
(c) It is hypothetically considered that a huge bar magnet is dipped inside earth with its north pole near the geographic South Pole and its south pole near the geographic North Pole. Magnetic field lines emanate from a magnetic north pole and terminate at a magnetic south pole.
Hence, in a map depicting earth's magnetic field lines, the field lines at Melbourne, Australia would seem to come out of the ground.
(d) If a compass is located on the geomagnetic North Pole or South Pole, then the compass will be free to move in the horizontal plane while earth's field is exactly vertical at the magnetic poles. In such a case, the compass can point in any direction.
(e) Magnetic moment, $M = 8 \times 10^{22}\, JT^{-1}$
Radius of earth, $r = 6.4 \times 10^6\, m$
Magnetic field strength, $B = \dfrac{\mu_0 M}{4 \pi r^3}$
where $\mu_0$ = permeability of free space $= 4\pi \times 10^{-7}\, TmA^{-1}$
$\therefore B = \dfrac{4\pi \times 10^{-7} \times 8 \times 10^{22}}{4\pi \times (6.4 \times 10^6)^3} \approx 0.3\, G$
This quantity is of the order of magnitude of the observed field on earth.
(f) Yes, there are several local poles on earth's surface oriented in different directions. A magnetized mineral deposit is an example of a local N-S pole.

2   Answer the following questions:
(a) The earth's magnetic field varies from point to point in space. Does it also change with time? If so, on what time scale does it change appreciably?
(b) The earth's core is known to contain iron. Yet geologists do not regard this as a source of the earth's magnetism. Why?
(c) The charged currents in the outer conducting regions of the earth's core are thought to be responsible for earth's magnetism. What might be the 'battery' (i.e., the source of energy) to sustain these currents?
(d) The earth may have even reversed the direction of its field several times during its history of 4 to 5 billion years. How can geologists know about the earth's field in such distant past?
(e) The earth's field departs from its dipole shape substantially at large distances (greater than about 30,000 km). What agencies may be responsible for this distortion?
(f) Interstellar space has an extremely weak magnetic field of the order of $10^{-12}\, T$. Can such a weak field be of any significant consequence?
Explain.
[Note: Exercise 5.2 is meant mainly to arouse your curiosity. Answers to some questions above are tentative or unknown. Brief answers wherever possible are given at the end. For details, you should consult a good text on geomagnetism.]

##### Solution :

(a) Earth's magnetic field changes with time. It takes a few hundred years to change by an appreciable amount. The variation of earth's magnetic field with time cannot be neglected.
(b) Earth's core contains molten iron. This form of iron is not ferromagnetic. Hence, this is not considered a source of earth's magnetism.
(c) The radioactivity in earth's interior is the source of energy that sustains the currents in the outer conducting regions of earth's core. These charged currents are considered to be responsible for earth's magnetism, but this is not certain.
(d) Earth reversed the direction of its field several times during its history of 4 to 5 billion years. These magnetic fields got weakly recorded in rocks during their solidification. One can get clues about the geomagnetic history from the analysis of this rock magnetism.
(e) Earth's field departs from its dipole shape substantially at large distances (greater than about 30,000 km) because of the presence of the ionosphere. In this region, earth's field gets modified because of the fields of single ions; while in motion, these ions produce the magnetic fields associated with them.
(f) An extremely weak magnetic field can bend charged particles moving in a circle. This may not be noticeable for a large-radius path. Over the gigantic distances of interstellar space, however, the deflection can affect the passage of charged particles.

3   A short bar magnet placed with its axis at $30^o$ with a uniform external magnetic field of 0.25 T experiences a torque of magnitude equal to $4.5 \times 10^{-2}\, J$. What is the magnitude of the magnetic moment of the magnet?
##### Solution :

Magnetic field strength, $B = 0.25\, T$
Torque on the bar magnet, $T = 4.5 \times 10^{-2}\, J$
Angle between the bar magnet and the external magnetic field, $\theta = 30^o$
Torque is related to magnetic moment $(M)$ as:
$T = MB \sin \theta$
$\therefore M = \dfrac{T}{B \sin \theta} = \dfrac{4.5 \times 10^{-2}}{0.25 \times \sin 30^o} = 0.36\, JT^{-1}$
Hence, the magnetic moment of the magnet is $0.36\, JT^{-1}$.

4   A short bar magnet of magnetic moment $M = 0.32\, JT^{-1}$ is placed in a uniform magnetic field of 0.15 T. If the bar is free to rotate in the plane of the field, which orientation would correspond to its (a) stable, and (b) unstable equilibrium? What is the potential energy of the magnet in each case?

##### Solution :

Moment of the bar magnet, $M = 0.32\, JT^{-1}$
External magnetic field, $B = 0.15\, T$
(a) When the bar magnet is aligned along the magnetic field, it is in stable equilibrium. Hence, the angle $\theta$ between the bar magnet and the magnetic field is $0^o$.
Potential energy of the system $= -MB \cos \theta = -0.32 \times 0.15 \cos 0^o = -4.8 \times 10^{-2}\, J$
(b) When the bar magnet is aligned opposite to the magnetic field, it is in unstable equilibrium, $\theta = 180^o$.
Potential energy $= -MB \cos \theta = -0.32 \times 0.15 \cos 180^o = 4.8 \times 10^{-2}\, J$

5   A closely wound solenoid of 800 turns and area of cross section $2.5 \times 10^{-4}\, m^2$ carries a current of 3.0 A. Explain the sense in which the solenoid acts like a bar magnet. What is its associated magnetic moment?
##### Solution :

Number of turns in the solenoid, $n = 800$
Area of cross-section, $A = 2.5 \times 10^{-4}\, m^2$
Current in the solenoid, $I = 3.0\, A$
A current-carrying solenoid behaves as a bar magnet because a magnetic field develops along its axis, i.e., along its length.
The magnetic moment associated with the given current-carrying solenoid is calculated as:
$M = nIA = 800 \times 3 \times 2.5 \times 10^{-4} = 0.6\, JT^{-1}$

6   If the solenoid in Exercise 5.5 is free to turn about the vertical direction and a uniform horizontal magnetic field of 0.25 T is applied, what is the magnitude of the torque on the solenoid when its axis makes an angle of $30^o$ with the direction of the applied field?

##### Solution :

Magnetic field strength, $B = 0.25\, T$
Magnetic moment, $M = 0.6\, JT^{-1}$
The angle $\theta$ between the axis of the solenoid and the direction of the applied field is $30^o$.
Therefore, the torque acting on the solenoid is given as:
$\tau = MB \sin \theta = 0.6 \times 0.25 \times \sin 30^o = 7.5 \times 10^{-2}\, J$

7   A bar magnet of magnetic moment $1.5\, JT^{-1}$ lies aligned with the direction of a uniform magnetic field of 0.22 T.
(a) What is the amount of work required by an external torque to turn the magnet so as to align its magnetic moment: (i) normal to the field direction, (ii) opposite to the field direction?
(b) What is the torque on the magnet in cases (i) and (ii)?
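The arithmetic in Exercises 5.5 and 5.6 can be verified in a few lines (moment in J T⁻¹ and torque in J, matching the units used above):

```python
import math

# Exercise 5.5: magnetic moment of the solenoid, M = n * I * A
n, I, A = 800, 3.0, 2.5e-4
M = n * I * A                      # J/T

# Exercise 5.6: torque in a 0.25 T field at 30 degrees, tau = M * B * sin(theta)
B, theta = 0.25, math.radians(30)
tau = M * B * math.sin(theta)      # J
```

Both values match the worked solutions: M = 0.6 J T⁻¹ and τ = 7.5 × 10⁻² J.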
Bitcoin's network velocity has been falling. This means that network activity has been slowing, which has historically been associated with price weakness. Velocity shows you how quickly the coins are circulating within the network. When velocity is high, say 1000%, it means an average bitcoin is circulating 10 times per year. At 450%, it is circulating 4.5 times per year. This is an average network measure; while many coins rarely move, others are circulating at a rapid rate. ByteTree 12-week velocity calculates the coins spent (transacted net of change) on a weekly basis, averaged over 12 weeks, and expressed on an annualised basis.

Source ByteTree.com: Bitcoin velocity measured over 12 weeks and the bitcoin price in USD since 2013.

At first glance, it is hard to see how price and velocity relate to one another, as some of the strongest price moves have been associated with falling velocity. Yet consider that this is a non-price indicator, and during those periods large sums of money were changing hands, even if that meant fewer coins at a higher price. It isn't so much the direction of velocity that matters as the level. In the next chart, velocity is overlaid with network health. This makes things clearer. The dark areas were associated with low levels of velocity, mainly in 2014 and 2018. These periods saw price weakness.

Source ByteTree.com: Bitcoin velocity measured over 12 weeks and ByteTree Market Health since 2013.

The 600% line is the approximate historic danger level. The Bitcoin Network has seen this breached four times:

1. 30th March 2014 at $460 - reversed on 12th December 2014 at $356
2. 1st August 2018 at $7,480 - reversed on 4th December 2018 at $3,900
3. 13th March 2019 at $3,852 - reversed on 2nd April 2019 at $4,802
4. 20th August 2019 at $10,708

The first and second signals were effective and correctly warned of trouble ahead.
The reversal buy signals came at lower prices than the previous sell signals, and in both cases the maximum drawdown was over 50%. The third signal was less effective, as it soon reversed. It also fell only to 583%, which is low but much higher than on the other occasions. The latest reading of 463% has not been seen since late 2014. It is a loud and clear message that the Bitcoin Network's velocity is weak. This may imply that a period of further price weakness lies ahead. But that is not a certainty, as it could turn up at any time. The sooner the better. You can follow network velocity on ByteTree.com. The 12-week measure gives a good balance between stability and timeliness. It is updated every block.
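The velocity measure described above can be reproduced from coins-spent data: average the weekly spend over 12 weeks, annualise it, and divide by circulating supply. A sketch with made-up figures; the numbers below are illustrative, not ByteTree data:

```python
def annualised_velocity(weekly_coins_spent, supply):
    """12-week velocity: average weekly coins spent (net of change),
    annualised (x52) and expressed as a percentage of supply."""
    avg_weekly = sum(weekly_coins_spent) / len(weekly_coins_spent)
    return avg_weekly * 52 / supply * 100

# Hypothetical 12 weekly readings (in coins) against an 18M-coin supply.
weeks = [1.6e6] * 12
v = annualised_velocity(weeks, 18e6)
```

A reading like this, sitting well below the 600% danger line, is the kind of level the article flags as historically weak.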
# Why do we never account for the extra mass carried by a gravitationally excited object?

The mass of a bound system is the mass of its parts minus the binding energy. For instance, if you put an electron and a proton together to form a hydrogen atom you get a $13.6 \text{ eV}$ photon, and the mass of the atom is lower than the masses of its parts by the same amount (divided by $c^2$). Another example: you put two protons and two neutrons together to form an alpha particle, you get $28 \text{ MeV}$ out, and the mass of the alpha particle is $28 \text{ MeV}/c^2$ less than the sum of the masses of two protons and two neutrons. The same should be true regardless of the nature of the force exerted between the objects (electromagnetic, nuclear, gravitational...). Now I propose the following thought experiment: consider the system of an object of mass $m$ at some far distance from a black hole of mass $M$. The mass of the whole system is $m+M$. Now let $m$ fall into the black hole. It will gain kinetic energy, and one can compute (using the Newtonian expression) that when it reaches the event horizon (viewed from outside, otherwise that never happens) the kinetic energy is exactly equal to $\frac{mc^2}{2}$. So one may surmise that the total energy of the object is $mc^2+\frac{mc^2}{2}=\frac{3mc^2}{2}$ when it merges with the black hole, and thus that the object increases the black hole's mass by $\frac{3}{2}m$. Now the total mass of the system is $\frac{3}{2}m+M$. Where was the extra $m/2$ in the initial situation? Was it somehow stored in the gravitational field (knowing that the notion of energy density of a gravitational field is ill-defined in general relativity, see this post)? Does any object really possess 1.5 times its own mass when considering that it is in an excited state since it may fall into a black hole? That makes no sense to me, so I will be grateful if someone could point out the mistake in my argument.
• black holes are relativistic, you cannot use a classical approximation for the kinetic energy – user83548 Oct 30 '15 at 15:29
• The subject line is a good question. The first paragraph is correct. The second paragraph is error ridden. :) The second paragraph should agree with the first paragraph. To answer the good question: We always get the same number when we measure the mass of a proton, it does not matter on what planet our laboratory is located. – stuffu Oct 30 '15 at 16:16
• Short answer: we do, see this answer. – DilithiumMatrix Oct 30 '15 at 17:06
• As far as the hydrogen atom goes, have you computed the fractional change? – dmckee Oct 30 '15 at 18:58
• Actually, the near inevitability of Hawking radiation (if for no other reason than the third law of thermodynamics) demonstrates almost beyond doubt that gravitating systems are not even bound. They are merely metastable. – CuriousOne Oct 30 '15 at 20:54

Now I propose the following thought experiment: consider the system of an object of mass $m$ at some far distance from a black hole of mass $M$. The mass of the whole system is $m+M$.

It's only because they are so far away from each other that the mass from even farther away looks so close to $m+M$. So that's not an entirely trivial thing.

Now let $m$ fall into the black hole. It will gain kinetic energy,

The two objects will move towards each other, and not just by getting closer but also by picking up relative velocity. Correct.

and one can compute that when it reaches the event horizon (viewed from outside, otherwise that never happens)

Now you've got that backwards. It's exactly from the outside that it never happens.

the kinetic energy is exactly equal to $\frac{mc^2}{2}$.

And that's completely and utterly wrong, even notwithstanding that from the outside you never see the mass reach the event horizon.

So one may surmise that the total energy of the object is $mc^2+\frac{mc^2}{2}=\frac{3mc^2}{2}$

No one ever said mass is additive.
You made that point right at the beginning. An alpha particle has less system mass than the sum of the masses of its parts. So if no one ever said mass is additive, why are you now acting like it is? I'd love to tell you what you are doing wrong, but it seems like you are just assuming something. So instead I'll cover the basics of how you measure the mass of a system in general relativity by covering a mathematically simpler example: a spherically collapsing star. It's like your example except not as extreme (just a star, not a black hole) and more symmetric (lots of small masses arranged in a spherical shell, all falling inwards). The first thing to note is that you can measure the mass by being far away and orbiting it. When you are far away from a mass $M$ in Newtonian mechanics you feel an acceleration $GM/r^2$, and you need an acceleration of $v^2/r$ to go in a circle. And that circle has a circumference of $C=2\pi r$, and you travel it in time $T=C/v.$ These last two things, $C$ and $T$, can be measured right there while staying far from the star, where things act similarly to Newtonian mechanics. And in Newtonian mechanics we have $GM/r^2=v^2/r$, so $M=v^2r/G.$ And $v=C/T$ and $r=C/2\pi$, so you get $M=C^3/(GT^22\pi).$ And that's what the $M$ means. It means that way far out there are nearly circular orbits that have $C^3/(GT^22\pi)$ be approximately the same for all those concentric circles, no matter how far out they are, as long as they stay far out. So that's what the $M$ is. It is not the sum of the masses of the things that went in. Or is it? See, spacetime naturally curves even far from any source, and what mass actually does is allow different vacuum-type curvatures to be sewn together. So when the outside all has the same $C^3/(GT^22\pi)$, everything is fine. But inside that outer shell of matter, since it is regular ordinary positive-energy-density matter, you end up with a smaller $C^3/(GT^22\pi)$ on the inside than on the outside.
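The operational definition above, $M=C^3/(GT^22\pi)$, is easy to sanity-check numerically. As an illustrative sketch (the solar-orbit numbers here are mine, not from the answer), plugging in the circumference and period of Earth's orbit recovers roughly one solar mass:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
r = 1.496e11            # Earth-Sun distance, m (illustrative value)
C = 2 * math.pi * r     # orbital circumference, m
T = 365.25 * 24 * 3600  # orbital period, s

# M = C^3 / (G * T^2 * 2*pi): the far-field mass read off from circular orbits
M = C**3 / (G * T**2 * 2 * math.pi)
print(f"{M:.3e} kg")    # close to the solar mass, ~2e30 kg
```

The point is that $C$ and $T$ are both measurable far from the source, so this defines the mass without ever adding up constituents.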
And that's fine. Positive matter allows a curvature of type $M+m=C^3/(GT^22\pi)$ on the outside of the spherical shell to be sewn up to a curvature of type $M=C^3/(GT^22\pi)$ on the inside of that spherical shell. It's like if you made a shallow funnel and a steep funnel, cut them each in two, and put the deeper one on the outside of the shallower one, right where they are the same size. So now that outer shell can contract. And you are correct that it picks up speed and its energy density increases because each part goes faster. And the energy density increases because the spherical shell reduces its surface area. But spacetime can only curve with a constant (vacuum) type when there isn't matter. So as the matter contracts it has to create more of the vacuum-type curvature of type $M+m=C^3/(GT^22\pi)$ on the outside of the spherical shell, and some of the old type of spacetime curvature of type $M=C^3/(GT^22\pi)$ gets destroyed. And it's spookier than that. Since the two types of curvature are different, you literally see more space of the new type get created outside the spherical shell than you see get destroyed on the inside. Your shell gets farther away from the outside things than it gets closer to the inside things. That's inevitable when you have two different types of curvature, a type $M+m=C^3/(GT^22\pi)$ on the outside and a type $M=C^3/(GT^22\pi)$ on the inside. The only way this is possible is for time inside to start ticking slower as seen by people on the outside. So people are disagreeing about a lot: how fast time is ticking, how much closer or farther away things are, who's moving, etcetera. And we have to check all our assumptions and baggage and expectations, and look at the math of what is happening in the model, to learn what happens in reality.
Now what is interesting is that even though the energy density of all the stuff increased, and even though the total energy of the entire shell increased, that larger energy on that smaller shell turns out to remain exactly the amount of energy you need to sew those two types of curvature together. And it has to be, for things to be consistent. You had some curvature of each type on either side of the shell, and each could only stay what it is on its side. But the boundary can move. And it can move more from the outside than from the inside, because it is possible to create space, and it turns out it is common to do so. That's how we get strongly curved spacetime. You take a type of spacetime and make more of that type, but make it in the direction where that type gets more curved. So the type with $M+m=C^3/(GT^22\pi)$ gets more of itself farther in, where it naturally is more curved (stronger gravity). And that's how spacetime gets curved in the first place. So you had some mass and spacetime was curved. As the mass contracts, the spacetime changes. It makes more spacetime, but makes it of the same type as was seen far outside, the type with parameter $M+m=C^3/(GT^22\pi).$ And that new spacetime (new as in more, but of that same old type) is more strongly curved, but still of the same type. There is not some other definition of mass. We say there is some mass $M$ in some region over there solely as a shorthand to say there is a certain curvature, a curvature of type $M$, over here. And the curvature gets frozen into one of these static vacuum-type curvatures.
If your system wasn't super symmetric it might have a dynamically changing spacetime for a while as gravitational waves are emitted (which would make the final mass less), and the masses might rub against each other and give up some energy as light (which means the shell has less than the right energy needed, so it has to make some slightly less curved type of spacetime outside it: you see type $M+m=C^3/(GT^22\pi)$ out beyond the light, type $M+fm=C^3/(GT^22\pi)$ between the light and the shell of hot matter, and you still see type $M=C^3/(GT^22\pi)$ inside the shell). That's real too. Pluto sees a more massive sun than the earth does, because the light reaching us now hasn't reached Pluto yet, so we start seeing $M+fm=C^3/(GT^22\pi)$ (with $0\lt f\lt 1$) when Pluto still sees $M+m=C^3/(GT^22\pi).$ And it could settle down to a different type, such as a type for a rotating source. But since gravity is about curvature, we have to learn how gravity allows spacetime to curve far from matter, and then we have to learn how matter allows these different types of vacuum curvature of spacetime to sew up. And none of it is as obvious as just adding up masses. But it's also not totally unexpected if you think it through. Let me try to answer my own question, helped by this post on the binding energy of neutron stars (many thanks DilithiumMatrix for pointing that out). Indeed the example of the neutron star, though not conceptually different from the example of the black hole which I used in my original question, is easier because we do not get distracted by the problems linked with black holes. So, just as in the case of the electron of an atom or a parton in a nucleus, if we lift a neutron from the surface of a neutron star we can surmise that we give it gravitational potential energy and in effect increase its mass -- the potential energy is stored as an extra excitation mass, leaving us with the usual neutron mass we know about and can measure here on Earth.
So the answer to the question would be: we do already account for it, it's part of the mass of the free object. Then in the case of an object falling into a black hole, the gained kinetic energy is obtained from the potential energy, which itself would be stored as part of the mass of the object -- so the falling object gains speed and at the same time loses most (perhaps all???) of its rest mass. While that answer seems satisfactory to me at first glance, it makes me uneasy. I am used to thinking about the mass of neutrons as arising from the strong field (QCD interactions between quarks). I can understand that the mass of a neutron gets lower when the neutron is bound to a nucleus -- we are still talking about the strong force (the nuclear force is a manifestation of the strong force), so I can see that the neutron finds itself in a more energetically favourable configuration in the nucleus, no problem. Now if you tell me that a significant part of the neutron mass is lost when the neutron binds to a neutron star -- a loss which has nothing to do with the strong force, but rather with the gravitational force -- it leaves me with more questions. How can the gravitational field influence the way the strong field generates the neutron mass? The same goes for any particle (e.g. an electron) falling into a black hole: we think that we understand the mechanism which generates the mass, but in the end the mass is basically just gravitational -- how can that be? (EDIT: this is now posted as a separate question). • Potential energy doesn't pick one of the objects at random and increase that object's mass. And when an object falls, its rest mass doesn't decrease. Everything about your answer is wrong. And it seems to be built on fundamental misunderstandings of more basic results, like thinking that in Newtonian gravity the potential energy belongs to a single object instead of to the system of multiple objects.
– Timaeus Nov 14 '15 at 18:12 • This answer is the only satisfactory one I could find, because otherwise, where would the object get its energy from when falling and gaining speed? Does it somehow instantaneously "suck" it from the mass of the whole system? I am very unfamiliar with the concept of energy being non-local on a large scale. – Philippe Mermod Nov 14 '15 at 18:56 • Being wrong shouldn't be satisfying, and if you need additional explanations about other answers you should ask. Newtonian gravity has non-local energy. If you want a theory with local energy, consider classical electrodynamics instead. In that theory there is energy in the fields, and it can travel from here to there and be located in between too. Infalling matter isn't more massive in its new comoving frame, and there's nothing instantaneous. The 4d spacetime it goes through is curved, so it might seem like it is speeding up, but it's going straight in a curved spacetime. – Timaeus Nov 15 '15 at 15:53 • Please rethink "So, just as in the case of the electron of an atom or a parton in a nucleus," physics.stackexchange.com/questions/149744/… . It is not true, so your analogy is not true – anna v Dec 10 '15 at 4:41
## Print version ISSN 0185-1101

### Rev. mex. astron. astrofis vol.54 n.2 México Oct. 2018

Articles

Estimation of the star formation rate using long gamma-ray bursts observed by Swift

1Benemérita Universidad Autónoma de Puebla, México.

Abstract

In this work we estimate the star formation rate (SFR) through 333 long GRBs detected by Swift. This investigation is based on the empirical model proposed by Yüksel et al. (2008). Basically, the SFR is estimated using long GRBs, considering that they have a stellar origin based on the collapsar model, i.e. the collapse of massive stars (hypernovae) with M > 20 M⊙. The analysis starts with the study of ε(z), which accounts for the long-GRB production rate and is parameterized by ε(z) = ε0(1 + z)^δ, where ε0 includes the absolute conversion of the SFR to the GRB rate in an already defined luminosity range, and δ is a dynamical parameter which changes in different regions of redshift; it accounts for the SFR slope, which is obtained by a linear regression analysis over our long-GRB sample. The results obtained provide evidence that supports our proposal to use long GRBs as tracers of the SFR.

Key Words: galaxies: star formation; gamma-ray burst: general; stars: massive

Resumen

In this work the star formation rate (SFR) is estimated through the analysis of a sample of 333 long gamma-ray bursts (GRBs) detected by Swift. This study is based on the empirical model proposed by Yüksel et al. (2008). Basically, the SFR is computed using long GRBs, taking into consideration that they originate, according to the collapsar model, from the collapse of massive hypernova-type stars (M > 20 M⊙).
The analysis starts from the study of ε(z), which represents the production rate of long GRBs, parameterized as ε(z) = ε0(1+z)^δ, where ε0 includes the absolute conversion from the SFR to the GRB rate in a given GRB luminosity range, and the index δ is a dynamical parameter which changes with z and represents the slope of the track left by the SFR. The results favor the proposal to use long GRBs as tracers of the SFR.

1. INTRODUCTION

Gamma-ray bursts are related to extremely energetic explosions in faraway galaxies (for reviews, see Wang et al. 2015; Wei & Wu 2017; Petitjean et al. 2016). Based on the collapsar model, which proposes the formation of long GRBs by the collapse of a rapidly rotating, very massive star (e.g. a Wolf-Rayet star, M > 20 M⊙; for cosmological implications of GRBs see Wei & Wu 2017 and Wei et al. 2016), we can trace and test the SFR related to these events (Yüksel et al. 2008; Kistler et al. 2008; Wang 2013). The study of the SFR through traditional tracers, such as the UV continuum (Cucciati et al. 2012; Schenker et al. 2013; Bouwens et al. 2014), recombination lines such as Hα, the far infrared (Magnelli et al. 2013; Gruppioni et al. 2013), and radio and X-ray emission, is inefficient at high redshift (z > 4) (Schneider 2015) due to their sensitivity to extinction by gas and dust and to the expansion of the universe. Star formation activity in the universe was very intense in the past, higher than now; at z ∼ 2.5 about 10% of all stars had been formed, and about 50% of the local universe's star formation took place at z ∼ 1 (Schneider 2015). The star formation rate density is a function which evolves with time: it increases by a factor of 10 between now and z ∼ 1, holds until z ∼ 3−4, and finally decreases at z > 4 (Hopkins & Beacom 2006; Carroll & Ostlie 2006; Schneider 2015). Figure 1 shows the distribution of our sample with redshift. The data present a mode at z ≈ 1.17 and a mean at z ≈ 2.06.
These results match the observational data. The paper is organized as follows. In § 2 we present the main properties of our long-GRB sample. In § 3 we develop the mathematical model to calculate the SFR using long GRBs as tracers. In § 4 we present the results based on the computation of δ obtained by a linear regression analysis over the long-GRB sample. Our conclusions are in § 5.

2. DESCRIPTION OF THE SAMPLE

The data sample used includes 959 GRBs observed by Swift supplied by Butler et al. (2017), plus 35 bursts detected by Fermi from Narayana Bhat et al. (2016) and Singer et al. (2015), by BeppoSAX from Frontera et al. (2009), and by ROTSE from Rykoff et al. (2009), giving a total of 994 GRBs; 333 are long GRBs with established T90 and z, and of these only 263 have a well-defined isotropic energy Eiso. We consider bursts up to 2017 June 4. Figure 1 shows the data considered, as observed by BATSE. The bimodal distribution allows us to define the short and long GRBs.

3. DERIVATION OF THE SFR USING GRBS

The conversion factor between the GRB rate and the SFR is hard to identify, but it is supported by an increasing amount of data on the cosmic star formation rate at low redshift z < 4 (Cucciati et al. 2012; Dahlen et al. 2007; Magnelli et al. 2013) and by the relationship between long GRBs and star formation. Based on the hypernova model we can relate the observed GRBs at low redshift with the SFR measurements, considering an additional evolution of the GRB rate with respect to the SFR (Kistler et al. 2008; Yüksel et al. 2008). The GRB distribution per unit redshift over the whole sky is given by:

$$\frac{d\dot{N}}{dz} = F(z)\,\varepsilon(z)\,\dot{\rho}_*(z)\,f_{\rm beam}\,\frac{dV_{\rm com}/dz}{1+z}, \tag{1}$$

where 0 < F(z) < 1 is the probability of obtaining the redshift associated with an afterglow from its host galaxy.
ε(z) is the long-GRB production rate with additional evolution effects, f_beam accounts for the fraction of GRBs that are observed due to their beaming, ρ̇*(z) is the SFR density in comoving coordinates, 1/(1+z) is a factor related to cosmological time dilation, and dV_com/dz¹ is the differential volume in comoving coordinates per unit redshift. ε(z) is parameterized as ε(z) = ε0(1 + z)^δ, where ε0 is a constant which includes the absolute conversion from the SFR to the GRB rate in a given GRB luminosity range, and δ is the slope left by the trace of the SFR in a redshift range. Table 1 presents 10 elements of the sample, listing spectral properties such as the Energy Fluence², the Peak Energy Flux³, the Peak Photon Flux⁴, Eiso, Ep and T90. Using Eiso we can obtain the isotropic luminosity Liso from equation 2:

$$L_{\rm iso} = \frac{E_{\rm iso}\,(1+z)}{T_{90}}. \tag{2}$$

Table 1: Spectral properties of the sample.

| # | GRB | z | T90 [s] | Ep [keV] | Energy Fluence² | Peak Energy Flux³ | Peak Photon Flux⁴ | Eiso [erg] |
|---|------------|------|--------|----------|----------|----------|----------|----------|
| 1 | GRB140512A | 0.73 | 158.76 | 270.4481 | 1.29E-05 | 5.69E-07 | 7.09467 | 5.47E+50 |
| 2 | GRB140518A | 4.71 | 61.32 | 46.5668 | 1.04E-06 | 5.38E-08 | 0.88978 | 4.98E+51 |
| 3 | GRB141225A | 0.92 | 40.77 | 132.6695 | 2.59E-06 | 1.06E-07 | 1.27368 | 3.86E+51 |
| 4 | GRB150301B | 1.52 | 13.23 | 106.8910 | 1.81E-06 | 2.14E-07 | 2.82063 | 1.14E+52 |
| 5 | GRB150323A | 0.59 | 150.4 | 81.3815 | 5.40E-06 | 2.98E-07 | 4.42309 | 9.30E+49 |
| 6 | GRB150403A | 2.06 | 38.28 | 227.8612 | 1.58E-05 | 1.48E-06 | 17.2206 | 3.07E+52 |
| 7 | GRB150413A | 3.14 | 264.29 | 63.1096 | 4.50E-06 | 6.83E-08 | 0.986981 | 5.04E+51 |
| 8 | GRB150818A | 0.28 | 134.39 | 74.8740 | 3.97E-06 | 1.12E-07 | 1.71705 | 3.31E+49 |
| 9 | GRB150821A | 0.76 | 149.93 | 197.5467 | 2.18E-05 | 4.24E-07 | 5.02955 | 5.70E+51 |
| 10 | GRB151029A | 1.42 | 9.28 | 31.3418 | 4.15E-07 | 8.87E-08 | 1.71218 | 9.01E+50 |

In Figure 2 we present the luminosity distribution of our sample, made up of 263 long GRBs. Here we observe the relation between Liso and redshift, considering that only highly luminous GRBs can be seen at high z; we use a luminosity threshold of Liso > 10^51 erg s⁻¹, established by Kistler et al. (2008).
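Equation 2 can be applied directly to the entries of Table 1. A minimal sketch (row 6, GRB150403A, with values copied from the table):

```python
def l_iso(e_iso, z, t90):
    """Isotropic luminosity from equation 2: L_iso = E_iso * (1 + z) / T90."""
    return e_iso * (1 + z) / t90

# GRB150403A from Table 1: z = 2.06, T90 = 38.28 s, E_iso = 3.07e52 erg
L = l_iso(3.07e52, 2.06, 38.28)
print(f"L_iso = {L:.2e} erg/s")  # ~2.5e51 erg/s
```

This burst clears the Liso > 10^51 erg s⁻¹ threshold of Kistler et al. (2008) by a factor of a few.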
The spatial distribution of the events is shown in 5 redshift bins, 1−4, 4−5, 5−6, 6−8 and 8−10, where we will calculate the SFR. The theoretical counts of GRBs in the redshift range from 1 to 4 are expressed by equation 3⁵:

$$N^{\rm teo}_{1-4} = \frac{\Delta t\,\Delta\Omega}{4\pi}\int_1^4 dz\, F(z)\,\varepsilon(z)\,\dot{\rho}_*(z)\, f_{\rm beam}\,\frac{dV_{\rm com}/dz}{1+z},$$

$$N^{\rm teo}_{1-4} = A\int_1^4 dz\,\dot{\rho}_*(z)\,(1+z)^{\delta}\,\frac{dV_{\rm com}/dz}{1+z}, \tag{3}$$

where

$$A = \frac{\Delta t\,\Delta\Omega\, F(z)\,\varepsilon_0\, f_{\rm beam}}{4\pi}.$$

$A$ depends on the total observing time $\Delta t$ of Swift and on the angular sky coverage $\Delta\Omega$. Using the SFR average density $\dot{\rho}_{*\,z_1-z_2}$ we compute the theoretical number of GRBs in a redshift range from $z_1$ to $z_2$, which is given by

$$N^{\rm teo}_{z_1-z_2} = \dot{\rho}_{*\,z_1-z_2}\, A \int_{z_1}^{z_2} dz\,(1+z)^{\delta}\,\frac{dV_{\rm com}/dz}{1+z}. \tag{4}$$

Taking the count of observed GRBs, $N^{\rm obs}_{z_1-z_2}$, we obtain the SFR in a specific range $z_1$−$z_2$; using the bin 1−4 we determine the SFR average density with equation 5:

$$\dot{\rho}_{*\,z_1-z_2} = \frac{N^{\rm obs}_{z_1-z_2}}{N^{\rm obs}_{1-4}}\; \frac{\int_1^4 dz\,\frac{dV_{\rm com}/dz}{1+z}\,(1+z)^{\delta}\,\dot{\rho}_*(z)}{\int_{z_1}^{z_2} dz\,\frac{dV_{\rm com}/dz}{1+z}\,(1+z)^{\delta}}. \tag{5}$$

4. DESCRIPTION OF THE SFR MODEL BY LONG-GRBS

Considering the results obtained by Hopkins & Beacom (2006) and the studies made by Yu et al. (2015) on the GRB rate compared with the SFR, we defined the best fit to δ in different ranges of z; the best fit to ρ̇* is given in Table 2.

Table 2: Summary of the different values of the SFR obtained in this work.

| Reference | Redshift range | log ρ̇* [M⊙ yr⁻¹ Mpc⁻³] | Symbol in Figures 3, 4 |
|---|---|---|---|
| This work (δ proposed) | 4−5 | −1.47 | red solid diamond |
| | 5−6 | −1.87 | |
| | 6−8 | −1.92 | |
| | 8−10 | −2.26 | |
| This work (δ calculated) | 4−5 | −1.67 | blue solid diamond |
| | 5−6 | −1.97 | |
| | 6−8 | −2.04 | |
| | 8−10 | −2.33 | |

| Redshift | δ proposed | δ calculated |
|---|---|---|
| 0−1 | 3 ± 0.43 | 2.3 ± 0.8 |
| 1−4 | −0.94 ± 0.11 | −1.1 ± 0.2 |
| 4−10 | −4.36 ± 0.48 | −4 ± 1.8 |

We calculate ρ̇*(z) parameterized as a function of redshift and δ using a power law, considering that we include a larger range of redshift and also a larger number of long GRBs than Yüksel et al. (2008).
We extend their model with equation 6, adding the term η, which represents the average count of long GRBs observed in the z bin (z1, z2) normalized by the number of long GRBs in the bin (1, 4):

$$\dot{\rho}_*(z) = \eta\,\dot{\rho}_+(z) = \left(1 + \frac{N^{\rm obs}_{1-4}}{N^{\rm obs}_{z_1-z_2}}\,\frac{z_1+z_2}{2}\right)\dot{\rho}_+(z), \tag{6}$$

where ρ̇+ is given by equation 7, proposed by Yüksel et al. (2008); in order not to lose consistency we write ρ̇+ as ρ̇*:

$$\dot{\rho}_*(z) = \dot{\rho}_0\left[(1+z)^{a\tau} + \left(\frac{1+z}{B}\right)^{b\tau} + \left(\frac{1+z}{C}\right)^{c\tau}\right]^{1/\tau}. \tag{7}$$

Here, the constants a, b and c are the logarithmic slopes δ of the track left by ρ̇*(z) (see Table 2); the normalization is ρ̇0 = 0.02 M⊙ yr⁻¹ Mpc⁻³ and τ ≈ −10 (see Yüksel et al. 2008 for more details). We define B and C with the following expressions:

$$B = (1+z_1)^{1-a/b}, \qquad C = (1+z_1)^{(b-a)/c}\,(1+z_2)^{1-b/c}.$$

Our first approximation of the density ρ̇*(z), using the best fit of δ from the literature (see Table 2), is

$$\dot{\rho}_*(z) = 0.02\left[(1+z)^{-30} + \left(\frac{1+z}{18.27}\right)^{9.4} + \left(\frac{1+z}{6.61}\right)^{43.6}\right]^{-1/10}. \tag{8}$$

Figure 3 shows the σ confidence interval. The update to the Yüksel et al. (2008) SFR in a specific range of z used in this work is described by the following equation:

$$\dot{\rho}_{*\,z_1-z_2} = \frac{N^{\rm obs}_{z_1-z_2} + N^{\rm obs}_{1-4}\,\frac{z_1+z_2}{2}}{N^{\rm obs}_{1-4}}\times \frac{\int_1^4 dz\,\frac{dV_{\rm com}/dz}{1+z}\,(1+z)^{\delta}\,\dot{\rho}_*(z)}{\int_{z_1}^{z_2} dz\,\frac{dV_{\rm com}/dz}{1+z}\,(1+z)^{\delta}}. \tag{9}$$

4.1. Statistical Analysis of the Model

Considering that δ, the slope left by the trace of the SFR function in a redshift range, is not constant, and taking into account the relation between a GRB and a stellar origin via the hypernova model (Schneider 2015; Carroll & Ostlie 2006), we calculate these δs directly from the sample through a linear regression over the z bins 0−1, 1−4 and 4−10, where the regions have 89, 214, and 30 events, respectively; since z has 3 significant digits, we did the analysis using grouped data. We calculated the frequency table of each bin and its respective histogram, which lets us obtain a linear regression over the data and the corresponding slope.
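The broken-power-law fit in equation 8 can be evaluated directly. A sketch (parameter values read off from equation 8, consistent with the δ values of Table 2; units M⊙ yr⁻¹ Mpc⁻³):

```python
def sfr_density(z):
    """Equation 8: rho*(z) = 0.02 [ (1+z)^-30 + ((1+z)/18.27)^9.4
    + ((1+z)/6.61)^43.6 ]^(-1/10), in M_sun yr^-1 Mpc^-3."""
    x = 1.0 + z
    return 0.02 * (x**-30 + (x / 18.27)**9.4 + (x / 6.61)**43.6)**(-0.1)

# At low z the first term dominates, so the density rises as (1+z)^3 (delta = 3);
# the z = 0 normalization is rho_0 = 0.02
print(sfr_density(0.0))
print(sfr_density(2.0))  # near the cosmic star-formation peak
```

Note that the break scales 18.27 and 6.61 follow from the B and C expressions with z1 = 1, z2 = 4 and the proposed slopes a = 3, b = −0.94, c = −4.36.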
In the bin 0−1, with 89 bursts, we obtain the linear equation y = 2.32x + 3.4286; in the bin 1−4, with 214 bursts, we obtain y = −1.0643x + 22.781; and in the bin 4−10, with 30 bursts, we obtain y = −4x + 18. Proceeding with the analysis, we calculated the confidence interval at one σ of significance, obtaining the best fit of the model in the different ranges of z; this is shown in Table 2. Based on the results of the statistical analysis we calculated the density ρ̇*(z) and the average density ρ̇* over z1−z2. In Figures 3 and 4 we compare the results with those obtained by traditional tracers, as summarized by Madau & Dickinson (2014).

5. DISCUSSION AND CONCLUSION

In this paper we presented the results of our work on the estimation of the SFR through a mathematical model which relates a GRB directly to a stellar origin. We used the latest Swift catalog supplied by Butler et al. (2017). Based on the distribution of Liso (see Figure 2), we first computed the SFR using the values of δ from the literature (see Figure 3). We then made a linear regression analysis with our long-GRB sample, reproducing the reported δ indexes (see Table 2). Using these results we computed new values for the SFR average density over each z1−z2 bin. We include a larger range of redshift than Yüksel et al. (2008) and a larger number of long GRBs than Wang (2013). We extended the model by adding a new term η (see equation 9). Our results were compared with the results from traditional tracers, such as UV and FIR (see Figure 4). Other studies, such as Robertson & Ellis (2012), found values of ρ̇* at z > 4 higher than and similar to ours, based on a modest and a strong evolution of the SFR with δ = 0.5 and δ = 1.5, respectively, and considering GRBs from low-metallicity host galaxies with 12 + log[O/H] ≈ 8.7 (Savaglio et al. 2005). Their results with δ = 0.5 and δ = 1.5 are shown as open black circles and solid gray circles in Figures 3 and 4.
Our results are marginally consistent with the gray circles. Wang (2013) used a sample of 110 luminous Swift GRBs to find an index value of δ ≈ 0.5, based on GRBs produced by rapidly rotating, metal-poor stars of low mass; their SFR is higher than our results. This may be a consequence of the updated Swift GRB sample used in our work and of the type of model proposed for the estimation of the SFR, considering that our model is highly dependent on the index value δ selected in the different redshift bins. With regard to the physical implications of the results obtained in this work, we conclude the following points.

• Considering that the index δ represents the slope of the SFR trace at different evolutionary stages of the universe, some previous studies have concluded that the star formation inferred from GRBs at high redshift would be sufficient to maintain cosmic reionization over 6 < z < 9 (e.g., Yüksel et al. 2008; Kistler et al. 2008). This possibility directly affects the index value δ, giving minimum and maximum values for this parameter. However, observational results show that GRBs are prone to appear in low-metallicity host galaxies (Savaglio et al. 2005), implying possible metallicity limits for a massive star to turn into a successful GRB. Since the decrease of cosmic metallicity may increase the relative number of GRBs at high redshift and decrease it in the local universe (Butler et al. 2017), these observational results constrain the values of δ obtained by our model, which uses regression analyses over our Swift GRB sample.

• Figure 1, the frequency distribution histogram of the 333 long GRBs over redshift, shows a Weibull distribution with a mode at z ≈ 1.17 and a mean at z ≈ 2.06. These values match the observational results on the SFR, considering that at z ∼ 2.5 about 10% of all stars had been formed and about 50% of the local universe's star formation took place at z ∼ 1 (Schneider 2015).
• We computed the values of log ρ̇* in each z1−z2 bin using both sets of values of δ. Our linear regression analysis with the best fit to δ, displayed in Table 2, shows that our results match the results from traditional tracers such as the UV and FIR. This provides evidence that supports our proposal to use long GRBs as tracers of the SFR.

• The isotropic luminosity distribution Liso (see Figure 2) presents one particular outlier, the long GRB 060218 at z = 0.03, with the lowest Liso and also the largest T90 (≈ 2100 s). These atypical values make this event a new topic to investigate, due to its strange properties.

References

Bouwens, R. J., Bradley, L., Zitrin, A., et al. 2014, ApJ, 795, 126
Burrows, D. N., Hill, J. E., Nousek, J. A., et al. 2005, SSRv, 120, 165
Butler 2017, Swift BAT Integrated Spectral Parameters, http://butler.lab.asu.edu/swift/bat_spec_table.html
Carroll, B. W., & Ostlie, D. A. 2006, An Introduction to Modern Astrophysics and Cosmology (2nd ed.; San Francisco, CA: Pearson, Addison-Wesley)
Cucciati, O., Tresse, L., Ilbert, O., et al. 2012, A&A, 539, A31
Dahlen, T., Mobasher, B., Dickinson, M., et al. 2007, ApJ, 654, 172
Frontera, F., Guidorzi, C., Montanari, E., et al. 2009, ApJS, 180, 192
Graziani, C. 2011, NewA, 16, 57
Gruppioni, C., Pozzi, F., Rodighiero, G., et al. 2013, MNRAS, 432, 23
Hopkins, A. M., & Beacom, J. F. 2006, ApJ, 651, 142
Kistler, M. D., Yüksel, H., Beacom, J. F., & Stanek, K. Z. 2008, ApJ, 673, L119
Madau, P., & Dickinson, M. 2014, ARA&A, 52, 415
Magnelli, B., Popesso, P., Berta, S., et al. 2013, A&A, 553, A132
Narayana Bhat, P., Meegan, C. A., von Kienlin, A., et al. 2016, ApJS, 223, 28
Petitjean, P., Wang, F. Y., Wu, X. F., & Wei, J. J. 2016, SSRv, 202, 195
Robertson, B. E., & Ellis, R. S. 2012, ApJ, 744, 95
Robotham, A. S. G., Norberg, P., Driver, S. P., et al.
2011, MNRAS, 416, 2640
Reddy, N. A., & Steidel, C. C. 2009, ApJ, 692, 778
Rykoff, E. S., Aharonian, F., Akerlof, C. W., et al. 2009, ApJ, 702, 489
Savaglio, S., Glazebrook, K., Le Borgne, D., et al. 2005, ApJ, 635, 260
Schenker, M. A., Robertson, B. E., Ellis, R. S., et al. 2013, ApJ, 768, 196
Schneider, P. 2015, Extragalactic Astronomy and Cosmology: An Introduction (Berlin, Heidelberg: Springer-Verlag)
Singer, L. P., Kasliwal, M. M., Cenko, S. B., et al. 2015, ApJ, 806, 52
Wang, F. Y. 2013, A&A, 556, A90
Wang, F. Y., Dai, Z. G., & Liang, E. W. 2015, NewAR, 67, 1
Wei, J.-J., Hao, J.-M., Wu, X.-F., & Yuan, Y.-F. 2016, JHEAp, 9, 1
Wei, J.-J., & Wu, X.-F. 2017, International Journal of Modern Physics D, 26, 1730002
Yüksel, H., Kistler, M. D., Beacom, J. F., & Hopkins, A. M. 2008, ApJ, 683, L5
Yu, H., Wang, F. Y., Dai, Z. G., & Cheng, K. S. 2015, ApJS, 218, 13

1 The comoving volume is given by $dV_{\rm com}/dz = 4\pi D_{\rm com}^2\,(dD_{\rm com}/dz)$; the comoving distance is given by $D_{\rm com} = \frac{c}{H_0}\int_0^z dz'\,[\Omega_m(1+z')^3+\Omega_\Lambda]^{-1/2}$.

2 (15 − 150 keV) [erg cm⁻²].

3 (15 − 150 keV) [erg cm⁻² s⁻¹].

4 (15 − 150 keV) [ph cm⁻² s⁻¹].

5 We use the values Ωm = 0.3, ΩΛ = 0.7, based on the latest studies of the Wilkinson Microwave Anisotropy Probe (WMAP) and the Hubble Key Project (HKP), in a flat universe.

Received: November 29, 2017; Accepted: March 13, 2018

M. Elías and O. M. Martínez: Benemérita Universidad Autónoma de Puebla, 4 Sur 104, Centro, 5013 Puebla, Puebla (Mau [email protected]).
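The comoving-volume factor of footnote 1 can be evaluated numerically with the flat cosmology of footnote 5. A sketch; note that H0 is not stated in the text, so a commonly used value of 70 km s⁻¹ Mpc⁻¹ is assumed here:

```python
import math

C_LIGHT = 299792.458          # speed of light, km/s
H0 = 70.0                     # km/s/Mpc, assumed (not given in the text)
OMEGA_M, OMEGA_L = 0.3, 0.7   # footnote 5, flat universe

def E(z):
    """Dimensionless Hubble parameter for flat LCDM."""
    return math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)

def d_com(z, steps=10000):
    """Comoving distance D_com = (c/H0) * integral_0^z dz'/E(z'), in Mpc
    (trapezoidal rule)."""
    h = z / steps
    s = 0.5 * (1 / E(0) + 1 / E(z)) + sum(1 / E(i * h) for i in range(1, steps))
    return (C_LIGHT / H0) * h * s

def dV_com_dz(z):
    """Footnote 1: dV_com/dz = 4*pi*D_com^2 * dD_com/dz,
    with dD_com/dz = (c/H0)/E(z). Result in Mpc^3."""
    return 4 * math.pi * d_com(z)**2 * (C_LIGHT / H0) / E(z)

print(f"{dV_com_dz(1.0):.3e} Mpc^3")
```

This factor weights the integrands of equations 3 to 5 and 9.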
# A plane containing the point (3, 2, 0) and the line also contains the point :

Option 1) (0, -3, 1)
Option 2) (0, 7, 10)
Option 3) (0, 7, -10)
Option 4) (0, 3, 1)

As we learnt in Normal form (cartesian form): $lx+my+nz=d$, where d is the distance from the origin, and

$\vec{r}= x\hat{i}+y\hat{j}+z\hat{k}$, $\hat{n}= l\hat{i}+m\hat{j}+n\hat{k}$.

Putting these in $\vec{r}\cdot \hat{n}= d$ we get $lx+my+nz= d$.

The normal vector of the plane is

$\begin{vmatrix} \hat{i} & \hat{j} &\hat{k} \\ 3-1& 2-2& 0-3\\ 1 & 5 & 4 \end{vmatrix} =\begin{vmatrix} \hat{i} & \hat{j}& \hat{k}\\ 2 & 0& -3\\ 1 & 5 & 4 \end{vmatrix} =15\hat{i}-11\hat{j}+10\hat{k}$

So the plane is $15x-11y+10z=23$, and it passes through (0, 7, 10).

Option 1) (0, -3, 1): incorrect.
Option 2) (0, 7, 10): correct.
Option 3) (0, 7, -10): incorrect.
Option 4) (0, 3, 1): incorrect.
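The cross product and the option check can be verified numerically. A minimal sketch (the point (1, 2, 3) on the line and the direction (1, 5, 4) are read off from the determinant rows in the solution):

```python
import numpy as np

p = np.array([3, 2, 0])   # given point on the plane
q = np.array([1, 2, 3])   # point on the line (from the determinant rows)
d = np.array([1, 5, 4])   # direction vector of the line

n = np.cross(p - q, d)    # normal vector of the plane
print(n)                  # [ 15 -11  10]

rhs = n.dot(p)            # plane equation: 15x - 11y + 10z = 23
print(rhs)                # 23

# Check which option satisfies the plane equation
for pt in [(0, -3, 1), (0, 7, 10), (0, 7, -10), (0, 3, 1)]:
    print(pt, n.dot(pt) == rhs)
```

Only (0, 7, 10) satisfies 15x − 11y + 10z = 23.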
Department of

# Mathematics

Seminar Calendar for events the day of Wednesday, November 30, 2005.

Wednesday, November 30, 2005

2:00 pm in 347 Altgeld Hall, Wednesday, November 30, 2005

#### Lie Pseudoalgebras

###### Bojko Bakolov (North Carolina State University)

Abstract: One of the algebraic structures that has emerged recently in the study of the operator product expansions of chiral fields in conformal field theory is that of a Lie conformal algebra. A Lie pseudoalgebra is a generalization of the notion of a Lie conformal algebra in which ${\mathbb{C}}[\partial]$ is replaced by the universal enveloping algebra $U({\mathfrak{d}})$ of a finite-dimensional Lie algebra ${\mathfrak{d}}$. I will review the classification of finite simple Lie pseudoalgebras, and I will discuss their relationship to solutions of the classical Yang--Baxter equation. I will also describe the irreducible representations of the Lie pseudoalgebra $W({\mathfrak{d}})$, which is closely related to the Lie--Cartan algebra $W_N$ of vector fields, where $N=\dim{\mathfrak{d}}$. (Based on joint work with A. D'Andrea and V. G. Kac.)

3:00 pm in 441 Altgeld Hall, Wednesday, November 30, 2005

#### No Seminar this week

###### (UIUC Math)

4:00 pm in 245 Altgeld Hall, Wednesday, November 30, 2005

#### Counting objects in infinite sets

###### Slawomir Solecki (Dept.
of Mathematics, UIUC) Abstract: I will talk about comparing the sizes of possibly infinite sets by counting objects in them. This will lead us to basic ideas of the modern theory of definable equivalence relations. On the way, we may encounter some measure theory, ergodicity, and strange curves called indecomposable continua.
# How to prove that $\mu$ is Borel-finite

I have a non-decreasing Borel measure $\mu$ such that $$\int_{-\infty}^{+\infty}\left|\sum_{i=1}^n \xi_i e^{-y_i t}\right|^2 d\mu(t)\geq 0$$ and finite for every $n\in\mathbb{N}$, every complex sequence $\{\xi_i\}_{i=1\ldots n}$ and every real sequence $\{y_i\}_{i=1\ldots n}$ different from $(0,\ldots,0)$. I can conclude that $\mu(\mathbb{R})>0$, but can I say that $\mu(\mathbb{R})<+\infty$? I would like to have finite $\mu$ in order to prove that $\mu$ is Borel-finite and non-negative. • What about $n=1$, $\xi_1 = 1$, $y_1 = 0$? Then by assumption $\int_{\mathbb R} 1\, d\mu = \mu(\mathbb R)$ is positive and finite?! – martini Jun 18 '13 at 15:52 • And if my assumption is true for every $\{y_i\}_{i=1\ldots n}$ different from the $(0,\ldots,0)$ vector? Can you still prove that $\mu$ is finite? – alemou Jun 18 '13 at 16:03 • Let $n = 1$, $\xi_1 = 1$, $y_1 = \pm 1$. Then $|\xi_1 e^{-ty_1}|^2 = e^{\pm 2t}$. As $1 \le e^{2t} + e^{-2t}$, we have $\int_{\mathbb R} 1 \, d\mu \le \int e^{2t} + e^{-2t} \, d\mu < \infty$. – martini Jun 18 '13 at 21:20
## mony01 2 years ago: Anyone know how to do this integral?

1. mony01: $\int \frac{x^2+x+1}{(x^2+1)^2}\,dx$
2. anonymous: Play around with the fraction.
3. anonymous: Maybe partial fraction decomposition.
4. myininaya: did you try a trig sub? i think that would work just fine
5. mony01: you think i can try sin?
6. myininaya: I think you can try tan
7. myininaya: the hint was the x^2+1 on bottom
8. myininaya: tan^2(theta)+1=sec^2(theta)
9. mony01: would the set up be integral x^2+x+1/sec^2 theta
10. myininaya: well you have to replace all the x's and the dx
11. myininaya: and also you are leaving off the square on bottom
12. myininaya: $x=\tan(\theta) \Rightarrow dx=\sec^2(\theta)\, d\theta$. Replace all the x's with tan(theta). Replace the dx with sec^2(theta) d theta.
13. mony01: is it integral tan^2theta+tan theta+1/sec^2 theta d(theta)
14. myininaya: If what you wrote is what I think, then yes. You mean the following, I assume: integral of (tan^2(theta)+tan(theta)+1)/sec^2(theta) d(theta)
15. myininaya: $\int\frac{\tan^2(\theta)}{\sec^2(\theta)}\, d\theta + \int\frac{\tan(\theta)}{\sec^2(\theta)}\, d\theta + \int\frac{1}{\sec^2(\theta)}\, d\theta$
16. myininaya: Look at them three separately.
17. gorv: $\int\left(\frac{x^2+1}{(x^2+1)^2} + \frac{x}{(x^2+1)^2}\right) dx$
18. gorv: $\int\frac{dx}{x^2+1} + \int\frac{x\, dx}{(x^2+1)^2}$
19. gorv: first one is a standard formula
20. mony01: how can i figure out the answer?
21. gorv: $\int\frac{x}{(x^2+1)^2}\, dx$: for this, let x^2+1 = t, so 2x dx = dt and x dx = dt/2
22. gorv: $\int \frac{dt}{2t^2}$
23. gorv: $\int\frac{dx}{x^2+1} + \int\frac{dt}{2t^2}$
24. gorv: can u solve it @mony01
25. mony01: is it sec^2(theta)/2(x^2+1)^2
26. myininaya: I do like gorv's way. But either way is fine. gorv's is simpler though.
27. gorv: actually it's less calculative... both ways the answers will be the same
28. mony01: how can i solve the rest of it?
29. myininaya: What question on what part do you have?
30. mony01: $\int \frac{dx}{x^2+1} + \int \frac{dt}{2t^2}$
31. myininaya: the first integral is just remembering that it is arctan(x). For the second one, 1/2 is a constant multiple and you should know how to integrate t^(-2) at this point in calculus.
32. mony01:
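To double-check where the thread is headed, here is a short SymPy sketch (not from the thread itself) verifying that the antiderivative assembled from gorv's two pieces differentiates back to the original integrand:

```python
# Hypothetical check: verify gorv's decomposition gives a valid antiderivative.
import sympy as sp

x = sp.symbols('x')
integrand = (x**2 + x + 1) / (x**2 + 1)**2

# arctan(x) from the dx/(x^2+1) piece, and -1/(2(x^2+1)) from the t = x^2+1 substitution
F = sp.atan(x) - 1 / (2 * (x**2 + 1))

# Differentiating the candidate antiderivative recovers the integrand exactly.
assert sp.simplify(sp.diff(F, x) - integrand) == 0
```

So the answer to post 28 is $\arctan(x) - \frac{1}{2(x^2+1)} + C$, up to the constant of integration.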
I don't understand this definition: We say that a functor $L:{\cal{C}}\rightarrow {\cal{D}}$ is left adjoint to a functor $R:{\cal{D}}\rightarrow {\cal{C}}$ iff there are natural isomorphisms $$\theta_{C,D}:\text{Hom}_{\cal{D}}(LC,D)\rightarrow \text{Hom}_{\cal{C}}(C,RD)$$ for all $C\in{\cal{C}},D\in{\cal{D}}$.

What does this mean? Is it in some sense a natural transformation between two functors, or are we just looking for an isomorphism between the two sets? (I wouldn't know what that means: just a bijection?)

EDIT: I got the definition from ಠ_ಠ. But now, to make sense of it: $$F:=\operatorname{Hom}(L(-), -): \mathsf{C}^{\mathrm{op}} \times \mathsf{D} \to \mathsf{Sets}$$ sends $(C,D)$ to $\operatorname{Hom}(L(C),D)$ and acts on morphisms as follows: I take $$(f,g):(C',D)\rightarrow (C,D')$$ Then $$F(f,g):F(C',D)\rightarrow F(C,D'):h\mapsto g\circ h\circ Lf$$ But how do I define $$G:= \text{Hom}(-,R(-))$$ on morphisms? The arrows go the wrong way.

• $\matrix{LC' & \to &D \\ \uparrow {\scriptstyle Lf} && {\scriptstyle g} \downarrow \\ LC && D'}$ – Berci Dec 3 '17 at 0:14
• Makes sense. How do I define $Hom(-,R(-))$ on $(f,g)$? The arrows go the wrong way. – tomak Dec 18 '17 at 9:57

$\operatorname{Hom}(L(-), -): \mathsf{C}^{\mathrm{op}} \times \mathsf{D} \to \mathsf{Sets}$ and $\operatorname{Hom}(-, R(-)): \mathsf{C}^{\mathrm{op}} \times \mathsf{D} \to \mathsf{Sets}$ are both functors, and the definition states that $L$ is left adjoint to $R$ if these aforementioned Hom functors are naturally isomorphic. Just draw out the naturality squares for naturality in each variable; they're printed on page 184 of Awodey's Category Theory text if you get stuck. And yes, an isomorphism of sets is a bijection.

Edit: to answer the second part of your question, $\mathrm{Hom}(f,g)=f^* g_*$, where the lower star indicates post-composition and the upper star indicates precomposition.

• Why $C^{op}$? Shouldn't it be Hom$(L(-),-):{\cal{C}}\times{\cal{D}}\rightarrow$ Set?
– tomak Dec 2 '17 at 11:22 • better :) but still why won't a simple $C$ instead of $C^{op}$ work? – tomak Dec 2 '17 at 11:26
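For concreteness, here is a sketch of the morphism action of $G := \operatorname{Hom}(-, R(-))$, unpacking the $f^* g_*$ notation from the answer:

```latex
% Given (f,g) : (C',D) -> (C,D') in C^op x D,
% i.e. f : C -> C' in C and g : D -> D' in D, set
G(f,g) : \operatorname{Hom}_{\mathsf{C}}(C', RD) \longrightarrow \operatorname{Hom}_{\mathsf{C}}(C, RD'),
\qquad h \longmapsto R(g) \circ h \circ f .
```

The point is that $R$ is applied to $g$ first, turning it into $R(g): RD \to RD'$, so the composite $C \xrightarrow{f} C' \xrightarrow{h} RD \xrightarrow{R(g)} RD'$ is well-defined: the arrows no longer go the wrong way.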
# What is the cube root of 343?

$\sqrt[3]{343} = \sqrt[3]{\left(7\right) \left(7\right) \left(7\right)} = 7$
# Math Help - [SOLVED] Prove this limit...

1. ## [SOLVED] Prove this limit...
Prove using the limit definition: lim 1/x = 1/c as x → c, where c is not equal to 0.

2. Originally Posted by thaliaj_df: Prove using the limit definition: lim 1/x = 1/c as x → c, where c is not equal to 0.
You need to show that for every $\epsilon > 0$, there is some $\delta > 0$, such that $|x - c| < \delta$ implies $\left| \frac 1x - \frac 1c \right| < \epsilon$. Now, $\left| \frac 1x - \frac 1c \right| = \left| \frac {c - x}{xc} \right| = \frac {|x - c|}{|xc|}$. Now, can you finish up?

3. ## Thnx
Thanks a lot...

4. ## Question
I wanted to ask a question though: is it correct to rearrange the values of x and c, them being inside absolute-value brackets? Does it make a difference? Please answer.

5. Originally Posted by thaliaj_df: Prove using the limit definition: lim 1/x = 1/c as x → c, where c is not equal to 0.
Also asked and replied to here: http://www.mathhelpforum.com/math-he...efinition.html

6. Originally Posted by thaliaj_df: I wanted to ask a question though: is it correct to rearrange the values of x and c, them being inside absolute-value brackets? Does it make a difference? Please answer.
|x - c| = |c - x| by definition of | |.
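One standard way to finish from the last displayed identity (a sketch; the constants chosen here are one common choice, not the only one): first force $x$ away from $0$, then bound the fraction.

```latex
\text{If } \delta \le \tfrac{|c|}{2}, \text{ then } |x - c| < \delta
  \;\Longrightarrow\; |x| \ge \tfrac{|c|}{2}, \text{ so }
\left| \tfrac1x - \tfrac1c \right|
  = \frac{|x-c|}{|x|\,|c|}
  \le \frac{2\,|x-c|}{c^2}
  < \frac{2\delta}{c^2}.
\qquad \text{Hence } \delta = \min\!\left( \tfrac{|c|}{2},\; \tfrac{\epsilon\, c^2}{2} \right) \text{ works.}
```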
## Group Actions

We define mathematical objects for our puzzles to allow easy discussion of the ideas needed to solve them.

Let $\Omega$ be the set of all colourings of a necklace. For example, with 3 colours and 6 beads we have $|\Omega| = 3^6$. Let $\alpha \in \Omega$ be a colouring of a necklace, such as RGBRGB, and $g$ be an element of its symmetry group $G$, such as the reflection that takes 123456 to 654321, where we have numbered the beads from 1 to 6. Then write $g(\alpha)$ to mean the resulting colouring after applying $g$ to $\alpha$. In our example, $g(\alpha) = BGRBGR$. This defines a group action on the necklace. Group actions can be defined more formally.

Define the orbit of $\alpha$ as $\mathrm{Orb}_G (\alpha) = \{ g(\alpha) : g \in G \},$ that is, the colourings you get when you rotate and reflect the necklace. For example: $\mathrm{Orb}_G(RGBRGB) = \{ RGBRGB, GBRGBR, BRGBRG, BGRBGR, RBGRBG, GRBGRB \}$. Each orbit contains colourings that we consider to be the same: a suitable rotation or reflection moves from one colouring to another. Distinct orbits represent distinct patterns on our necklace; that is, given a colouring in one orbit, it is impossible to reach a colouring in another orbit via rotation or reflection.

Example: our 3-colour 10-bead necklace puzzle is asking for the number of orbits that partition the set of all possible 3-colourings of a 10-bead necklace, acted upon by the dihedral group of order 20.

We'll need one basic fact about the size of an orbit, which we can derive with the same argument used to prove Lagrange's Theorem. Define the stabilizer of $\alpha$ as $G_\alpha = \{ g \in G : g(\alpha) = \alpha \},$ that is, the rotations and reflections that preserve a given colouring. For example, if $r$ is the rotation that takes 123456 to 234561, then $G_{RGBRGB} = \{ 1, r^3 \}$ where 1 is the identity element of $G$. Observe that $G_\alpha$ is a group.
By considering its left (or right) cosets, for any $\alpha$, we have $|G_\alpha| |\mathrm{Orb}_G(\alpha)| = |G| .$
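The running example can be checked mechanically. Below is a small Python sketch (function names are mine, not from the post) that enumerates the 12 elements of the dihedral group of a 6-bead necklace as rotations of the string and rotations of its reversal, then verifies both the orbit of RGBRGB and the orbit–stabilizer identity:

```python
# Sketch: the 12 symmetries of a 6-bead necklace are the 6 rotations
# plus the 6 reflections; each reflection acts as a rotation of the
# reversed string.

def group_images(colouring):
    """Images of the colouring under all 12 dihedral group elements."""
    rots = [colouring[i:] + colouring[:i] for i in range(len(colouring))]
    rev = colouring[::-1]
    refls = [rev[i:] + rev[:i] for i in range(len(rev))]
    return rots + refls

def orbit(colouring):
    return set(group_images(colouring))

def stabilizer_size(colouring):
    # Count group elements g with g(colouring) = colouring.
    return sum(1 for image in group_images(colouring) if image == colouring)

alpha = "RGBRGB"
assert orbit(alpha) == {"RGBRGB", "GBRGBR", "BRGBRG", "BGRBGR", "RBGRBG", "GRBGRB"}
assert stabilizer_size(alpha) == 2                        # {1, r^3}, as in the text
assert len(orbit(alpha)) * stabilizer_size(alpha) == 12   # |Orb| * |Stab| = |G|
```

The last assertion is exactly the orbit–stabilizer fact above, instantiated for the dihedral group of order 12.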
Thread: Simplify and present this fraction with only positive powers

1. Simplify and present this fraction with only positive powers
Simplify and present this fraction with only positive powers.

2. Re: Simplify and present this fraction with only positive powers
Start with \displaystyle \begin{align*} \frac{\sqrt[3]{27p^3q^2r^{-6}}}{q^{\frac{8}{3}}r^{-3}\sqrt{81p}} &= \frac{\left(27p^3q^2r^{-6}\right)^{\frac{1}{3}}}{q^{\frac{8}{3}}r^{-3}\left(81p\right)^{\frac{1}{2}}} \end{align*}
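The reply stops after the first rewriting step. As a check on where the algebra is headed, here is a SymPy sketch (assuming $p, q, r$ are positive so fractional powers combine freely) confirming the fully simplified form $\frac{r\sqrt{p}}{3q^2}$:

```python
# SymPy sketch: declaring the symbols positive lets fractional powers combine.
import sympy as sp

p, q, r = sp.symbols('p q r', positive=True)
expr = sp.cbrt(27 * p**3 * q**2 * r**-6) / (q**sp.Rational(8, 3) * r**-3 * sp.sqrt(81 * p))
simplified = sp.simplify(expr)

# Expected form with only positive powers: r * sqrt(p) / (3 q^2)
assert sp.simplify(simplified - sp.sqrt(p) * r / (3 * q**2)) == 0
```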
# help

Which equation can be simplified to find the inverse of y = x^2 – 7?

Apr 10, 2020, edited by izzy024 Apr 10, 2020

#1: This is the 4th time this question has been posted. Please don't keep posting the question if it is not answered. Don't swarm the forum. Apr 10, 2020

#2: Which equation can be simplified to find the inverse of y = x^2 – 7? Hello izzy024!

$$y = x^2 - 7\ \Rightarrow\ x = y^2 - 7$$
$$x = y^2 - 7\\ y^2 = x + 7$$
$$y = \pm\sqrt{x+7}$$

Apr 10, 2020
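As a quick sanity check of the inverse just derived (taking the positive branch), a few lines of Python:

```python
import math

def f(x):
    return x**2 - 7

def f_inverse(y):
    # Positive branch of y = +/- sqrt(x + 7), valid for y >= -7
    return math.sqrt(y + 7)

# Round-trip checks on the positive branch
assert f(f_inverse(9)) == 9
assert f_inverse(f(3)) == 3
```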
# Math Help - Tangent of...

1. ## Tangent of...
Check the image. I know how to get the answer, but when I use the Pythagorean theorem for cosine 22.5°, why is the answer $2+\sqrt{2}$? Why is it a PLUS instead of a minus? Please show me the steps.

2. ## Re: Tangent of...
What you have posted seems incomplete. You have given the value of sin(22.5°) and speak of the cosine of this angle, and your topic title is "Tangent of...". What is it you are trying to find or show?

3. ## Re: Tangent of...
$\sin^2\left(\frac{x}{2}\right) = \frac{1 - \cos{x}}{2}$
$\cos(45^\circ) = \frac{\sqrt{2}}{2}$
POSITIVE because $22.5^\circ$ is a quadrant I angle

4. ## Re: Tangent of...
So when I do the Pythagorean identity: $\frac{1 - \frac{\sqrt{2}}{2}}{2}$ equals $\frac{2-\sqrt{2}}{\sqrt{4}}$, but you're saying since it's in quadrant one it's automatically $\frac{2+\sqrt{2}}{4}$???

5. ## Re: Tangent of...
$\sin^2(22.5^\circ) = \frac{1 - \frac{\sqrt{2}}{2}}{2}$
$\sin^2(22.5^\circ) = \frac{2 - \sqrt{2}}{4}$
$\sin(22.5^\circ) = \sqrt{\frac{2 - \sqrt{2}}{4}} = \frac{\sqrt{2 - \sqrt{2}}}{2}$

6. ## Re: Tangent of...
Yeah, my hw says it's $+\sqrt{2}$, NOT that answer.

7. ## Re: Tangent of...
Originally Posted by Eraser147: Yeah, my hw says it's $+\sqrt{2}$, NOT that answer.
so check it with a calculator ...

8. ## Re: Tangent of...
The image you posted gives the same value for sin(22.5°) that skeeter has given. If you are trying to find $\cos(22.5^{\circ})$ then use:
$\cos^2\left(\frac{45^{\circ}}{2} \right)=\frac{1+\cos(45^{\circ})}{2}$
$\cos^2(22.5^{\circ})=\frac{1+\frac{1}{\sqrt{2}}}{2}=\frac{2+\sqrt{2}}{4}$
$\cos(22.5^{\circ})=\frac{\sqrt{2+\sqrt{2}}}{2}$
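Both half-angle computations in the thread can be confirmed symbolically. This SymPy sketch checks that the plus sign belongs with cosine and the minus sign with sine at 22.5°:

```python
# SymPy sketch confirming the sign choices at 22.5 degrees (= pi/8 radians).
import sympy as sp

cos45 = sp.cos(sp.pi / 4)                    # sqrt(2)/2
cos_half = sp.sqrt((1 + cos45) / 2)          # plus sign: cosine half-angle
sin_half = sp.sqrt((1 - cos45) / 2)          # minus sign: sine half-angle

assert sp.simplify(cos_half - sp.sqrt(2 + sp.sqrt(2)) / 2) == 0
assert sp.simplify(sin_half - sp.sqrt(2 - sp.sqrt(2)) / 2) == 0
assert sp.simplify(cos_half - sp.cos(sp.pi / 8)) == 0    # quadrant I, positive root
```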
Chapter 2 - Section 2.5 - An Introduction to Problem Solving - Exercise Set: 10

The number is $18$.

Work Step by Step

Let $x$ = the unknown number. Then "three times a number" means $3x$, and "sum of five and three times a number" means $5+3x$. Thus, the equation that represents the situation is: $5+3x=59$. Subtract $5$ from both sides of the equation to obtain: $3x=54$. Divide both sides by $3$ to obtain: $x=\dfrac{54}{3}=18$
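The two solution steps can be mirrored directly in code; a minimal Python check:

```python
# Mirror the worked steps: 5 + 3x = 59  ->  3x = 54  ->  x = 18
rhs = 59
rhs -= 5            # subtract 5 from both sides
x = rhs // 3        # divide both sides by 3

assert rhs == 54
assert x == 18
assert 5 + 3 * x == 59   # the number checks out in the original equation
```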
### Parallelizable Delegation from LWE

##### Abstract

We present the first non-interactive delegation scheme for P with time-tight parallel prover efficiency based on standard hardness assumptions. More precisely, in a time-tight delegation scheme – which we refer to as a SPARG (succinct parallelizable argument) – the prover's parallel running time is t + polylog(t), while using only polylog(t) processors, where t is the length of the computation. (In other words, the proof is computed essentially in parallel with the computation, with only some minimal additive overhead in terms of time.) Our main results show the existence of a publicly-verifiable, non-interactive SPARG for P assuming polynomial hardness of LWE. Our SPARG construction relies on the elegant recent delegation construction of Choudhuri, Jain, and Jin (FOCS'21) and combines it with techniques from Ephraim et al. (EuroCrypt'20). We next demonstrate how to make our SPARG time-independent – where the prover and verifier do not need to know the running time t in advance; as far as we know, this yields the first construction of a time-tight delegation scheme with time-independence based on any hardness assumption. We finally present applications of SPARGs to the construction of VDFs (Boneh et al., Crypto'18), resulting in the first VDF construction from standard polynomial hardness assumptions (namely LWE and the minimal assumption of a sequentially hard function).

Category: Foundations. Publication info: Preprint.
Keywords: succinct argument, verifiable delay function, delegation scheme

Contact author(s): cfreitag @ cs cornell edu, rafael @ cs cornell edu, nephraim @ cs cornell edu

History: 2022-08-09: approved

Short URL: https://ia.cr/2022/1025

License: CC BY

BibTeX:

@misc{cryptoeprint:2022/1025,
  author = {Cody Freitag and Rafael Pass and Naomi Sirkin},
  title = {Parallelizable Delegation from LWE},
  howpublished = {Cryptology ePrint Archive, Paper 2022/1025},
  year = {2022},
  note = {\url{https://eprint.iacr.org/2022/1025}},
  url = {https://eprint.iacr.org/2022/1025}
}
# How do you convert 9.34 g to oz?

Jun 3, 2018

$0.327$ ounces

#### Explanation:

$g$ stands for grams and $oz$ stands for ounces. $1$ gram is approximately equal to $0.035$ ounces. You can create two proportional fractions:

$\frac{1\,g}{0.035\,oz} = \frac{9.34\,g}{x}$

Cross-multiply the fractions:

$x \cdot 1\,g = \left(9.34\,g\right) \cdot \left(0.035\,oz\right)$

$x = 0.3269\,oz$

Since the number of grams has $3$ significant digits, round the final answer so it has $3$ as well: $0.327$ oz.
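The same proportion can be evaluated in a couple of lines of Python (using the answer's rounded factor of 0.035 oz per gram, so the result inherits its precision):

```python
def grams_to_ounces(grams, oz_per_gram=0.035):
    """Convert grams to ounces using the answer's rounded factor."""
    return grams * oz_per_gram

result = grams_to_ounces(9.34)
assert abs(result - 0.3269) < 1e-9
assert round(result, 3) == 0.327   # 3 significant digits, as in the answer
```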
# Chern character of canonical line bundle over $\mathbb{CP}^1$

Let $H \to \mathbb{CP}^1$ be the canonical line bundle over $\mathbb{CP}^1 =S^2$. Then from the text Vector Bundles and K-theory by Hatcher, given the Chern character, $ch$, and first Chern class $c_1(H)$, we get: $$ch(H)=e^{c_1(H)}=1+c_1(H)$$ I don't understand where all the higher order $c_1(H)$ terms go when expanding the exponential. Could anyone explain this please?

• Note, incidentally, that "canonical bundle" to a topologist might mean the tautological bundle $\mathcal{O}(-1)$ (whose fibre at each point $p$ is the line represented by $p$), while to an algebraic geometer it means the (top exterior power of the) cotangent bundle, here $\mathcal{O}(-2)$. Nowadays, I believe the algebro-geometric meaning is more widely used, but it's prudent to specify which you mean. :) – Andrew D. Hwang May 8 '16 at 16:24

In the proof of Corollary 4.4, on page 101, Hatcher says that for the canonical line bundle on $\textbf{CP}^n$ and $c$ the first Chern class, the Chern character is given by $$1+ c +\frac{c^2}{2}+\frac{c^3}{3!}+\cdots + \frac{c^n}{n!}.$$ The reason why this sum stops at $n$ is that the cohomology groups of $\textbf{CP}^n$ are trivial past $H^{2n}$, and the class $c^k$ lives in $H^{2k}$. If we apply the cup product to an element of $H^{2n}$ (such as $c^n$) and an element of $H^2$ (such as $c$), we should get an element of $H^{2n+2}$, but that group is trivial. Hence we get zero, so all products $c^k$ for $k>n$ are zero, and they disappear in the expansion of the exponential. Your question is the case $n=1$, so $c^k$ vanishes for all $k \ge 2$.
# WxWidgets and Visual Studio Problem

Good afternoon all, I'm trying to get a simple wxWidgets program working and having a bit of trouble. I have to #include <wx/wx.h> in my application. I have added the wxWidgets directory entry to the Visual Studio look-in directory listing. wxWidgets is installed at C:\wxWidgets-2.6.1, so I have added the directory path "C:\wxWidgets-2.6.1\include" to the directory list and put it at the top. I have done the same for the library folder. The compiler finds "C:\wxWidgets-2.6.1\include\wx\wx.h" but it won't find one of the files that this includes. It appears as though a header included in this header is including another one that doesn't exist. I have browsed the \wx\ directory and the file actually isn't there. Any ideas what I should do?
ace

The compile error is this:

c:\wxWidgets-2.6.1\include\wx\platform.h(260) : fatal error C1083: Cannot open include file: 'wx/setup.h': No such file or directory

platform.h appears to be looking for a sub directory that doesn't exist.
ace

Some applications, and I guess wxWidgets is one of them, require you to run a config util to generate a config header before you can compile... so there is no setup.h in "c:\wxWidgets-2.6.1\include\wx\", correct? Look in the wxWidgets documentation (or for an INSTALL or README) for info on the subject.

This is an absolute joke and a waste of time. I still haven't managed to get a Hello World for widgets installed yet and I haven't got a clue what I'm doing wrong. In the documentation it says to build the projects in the build directory in Debug and Release mode. I have done that, and when I include wx.h I get other errors saying that other headers can't find other headers. Has anyone here actually done this, ever?
ace

$(WX_DIR)\include isn't the only one that needs to be added to the compiler dirs. You also have to add $(WX_DIR)\lib\vc_dll\msw (check the vc_dll part; I'm not sure as I use gcc myself). HTH, Yiannis.
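For readers hitting the same error: wx/setup.h is generated per build configuration, which is why it is missing from the main include tree. Following the last reply, a typical additional-include-directories list for this 2.6-era install might look like the sketch below; the exact subfolder names (vc_dll vs vc_lib, msw vs mswd) depend on how the library was built, so treat them as assumptions to verify against your own lib directory:

```text
; Hypothetical Visual Studio include-directory list for C:\wxWidgets-2.6.1
C:\wxWidgets-2.6.1\include
C:\wxWidgets-2.6.1\lib\vc_dll\msw    ; release DLL build - contains wx\setup.h
C:\wxWidgets-2.6.1\lib\vc_dll\mswd   ; debug DLL build
```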
# Analysis of algorithms

In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm.
The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.

In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the size of the sorted list being searched, or in $O(\log n)$, colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a ''hidden constant''.

Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g. a Turing machine, and/or by postulating that certain operations are executed in unit time.
For example, if the sorted list to which we apply binary search has $n$ elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most $\log_2 n + 1$ time units are needed to return an answer.

# Cost models

Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the actual run-time, the time required to perform a step must be guaranteed to be bounded above by a constant. One must be careful here; for instance, some analyses count an addition of two numbers as one step. This assumption may not be warranted in certain contexts. For example, if the numbers involved in a computation may be arbitrarily large, the time required by a single addition can no longer be assumed to be constant.

Two cost models are generally used:

* the uniform cost model, also called uniform-cost measurement (and similar variations), assigns a constant cost to every machine operation, regardless of the size of the numbers involved
* the logarithmic cost model, also called logarithmic-cost measurement (and similar variations), assigns a cost to every machine operation proportional to the number of bits involved

The latter is more cumbersome to use, so it's only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography.

A key point which is often overlooked is that published lower bounds for problems are often given for a model of computation that is more restricted than the set of operations that you could use in practice and therefore there are algorithms that are faster than what would naively be thought possible.
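The logarithmic comparison count of binary search under the uniform cost model can be observed directly by counting comparisons; a Python sketch (the floor(log2 n) + 1 bound is the standard worst case):

```python
# Sketch: count list lookups made by iterative binary search and check
# them against the floor(log2(n)) + 1 worst-case bound.
import math

def binary_search_comparisons(sorted_list, target):
    """Return the number of list lookups an iterative binary search makes."""
    lo, hi, comparisons = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        comparisons += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return comparisons
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons

n = 1_000_000
data = list(range(n))
bound = math.floor(math.log2(n)) + 1      # 20 for n = 1,000,000
# Worst cases: first element, last element, and an unsuccessful search.
worst = max(binary_search_comparisons(data, t) for t in (0, n - 1, -1))
assert worst <= bound
```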
# Run-time analysis

Run-time analysis is a theoretical classification that estimates and anticipates the increase in ''running time'' (or run-time or execution time) of an algorithm as its ''input size'' (usually denoted as $n$) increases. Run-time efficiency is a topic of great interest in computer science: A program can take seconds, hours, or even years to finish executing, depending on which algorithm it implements. While software profiling techniques can be used to measure an algorithm's run-time in practice, they cannot provide timing data for all infinitely many possible inputs; the latter can only be achieved by the theoretical methods of run-time analysis.

## Shortcomings of empirical metrics

Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are additional significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms.

Take as an example a program that looks up a specific entry in a sorted list of size ''n''. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a linear search algorithm, and on Computer B, a much slower machine, using a binary search algorithm. Benchmark testing on the two computers running their respective programs might look something like the following:

Based on these metrics, it would be easy to jump to the conclusion that ''Computer A'' is running an algorithm that is far superior in efficiency to that of ''Computer B''.
However, if the size of the input-list is increased to a sufficient number, that conclusion is dramatically demonstrated to be in error. Computer A, running the linear search program, exhibits a linear growth rate. The program's run-time is directly proportional to its input size. Doubling the input size doubles the run-time, quadrupling the input size quadruples the run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a logarithmic growth rate. Quadrupling the input size only increases the run-time by a constant amount (in this example, 50,000 ns). Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time because it's running an algorithm with a much slower growth rate.

## Orders of growth

Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function if beyond a certain input size $n$, the function $f(n)$ times a positive constant provides an upper bound or limit for the run-time of that algorithm. In other words, for a given input size $n$ greater than some $n_0$ and a constant $c$, the run-time of that algorithm will never be larger than $c \cdot f(n)$. This concept is frequently expressed using Big O notation.
For example, since the run-time of insertion sort grows quadratically as its input size increases, insertion sort can be said to be of order $O(n^2)$. Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can also be used to express the average-case — for example, the worst-case scenario for quicksort is $O(n^2)$, but the average-case run-time is $O(n \log n)$.

## Empirical orders of growth

Assuming the run-time follows the power rule, $t \approx k n^b$, the coefficient $b$ can be found by taking empirical measurements of run-time $\{t_1, t_2\}$ at some problem-size points $\{n_1, n_2\}$, and calculating $t_2/t_1 = (n_2/n_1)^b$, so that $b = \log(t_2/t_1) / \log(n_2/n_1)$. In other words, this measures the slope of the empirical line on the log–log plot of run-time vs. input size, at some size point. If the order of growth indeed follows the power rule (and so the line on the log–log plot is indeed a straight line), the empirical value of $b$ will stay constant at different ranges, and if not, it will change (and the line is a curved line)—but still could serve for comparison of any two given algorithms as to their ''empirical local orders of growth'' behaviour. Applied to the above table:

It is clearly seen that the first algorithm exhibits a linear order of growth indeed following the power rule.
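The slope computation just described is one line of arithmetic; here is a Python sketch on synthetic timings (labeled as such, since the measured tables are not reproduced here):

```python
# Sketch of the local order-of-growth estimate b = log(t2/t1) / log(n2/n1).
import math

def empirical_order(n1, t1, n2, t2):
    """Local order of growth, assuming run-time follows t ~ k * n^b."""
    return math.log(t2 / t1) / math.log(n2 / n1)

# Linear algorithm: doubling n doubles t, so b should be 1.
assert abs(empirical_order(1000, 5.0, 2000, 10.0) - 1.0) < 1e-9

# Quadratic algorithm: doubling n quadruples t, so b should be 2.
assert abs(empirical_order(1000, 5.0, 2000, 20.0) - 2.0) < 1e-9
```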
The empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any case has much lower local orders of growth (and improving further still), empirically, than the first one.

## Evaluating run-time complexity

The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions. Consider the following pseudocode:

1 ''get a positive integer n from input''
2 if n > 10
3    print "This might take a while..."
4 for i = 1 to n
5    for j = 1 to i
6       print i * j
7 print "Done!"

A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out this algorithm. The specific amount of time to carry out a given instruction will vary depending on which instruction is being executed and which computer is executing it, but on a conventional computer, this amount will be deterministic. Say that the actions carried out in step 1 are considered to consume time ''T''1, step 2 uses time ''T''2, and so forth. In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that step 3 will be run as well. Thus the total amount of time to run steps 1–3 and step 7 is:

$T_1 + T_2 + T_3 + T_7.$

The loops in steps 4, 5 and 6 are trickier to evaluate.
The outer loop test in step 4 will execute ''n'' + 1 times (note that an extra step is required to terminate the for loop, hence ''n'' + 1 and not ''n'' executions), which will consume ''T''4(''n'' + 1) time. The inner loop, on the other hand, is governed by the value of j, which iterates from 1 to ''i''. On the first pass through the outer loop, j iterates from 1 to 1: the inner loop makes one pass, so running the inner loop body (step 6) consumes ''T''6 time, and the inner loop test (step 5) consumes 2''T''5 time. During the next pass through the outer loop, j iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2''T''6 time, and the inner loop test (step 5) consumes 3''T''5 time. Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression:

$T_6 + 2T_6 + 3T_6 + \cdots + (n-1)T_6 + nT_6$

which can be factored as

$\left[ \frac{1}{2}(n^2 + n) \right] T_6.$

The total time required to run the inner loop test can be evaluated similarly:

$2T_5 + 3T_5 + 4T_5 + \cdots + (n-1)T_5 + nT_5 + (n+1)T_5 = T_5 + 2T_5 + 3T_5 + \cdots + (n+1)T_5 - T_5$

which can be factored as

$\left[ \frac{1}{2}(n^2 + 3n) \right] T_5.$

Therefore, the total run-time for this algorithm is:

$f(n) = T_1 + T_2 + T_3 + T_7 + (n+1)T_4 + \left[ \frac{1}{2}(n^2 + n) \right] T_6 + \left[ \frac{1}{2}(n^2 + 3n) \right] T_5$

which reduces to

$f(n) = \left[ \frac{1}{2}(n^2 + n) \right] T_6 + \left[ \frac{1}{2}(n^2 + 3n) \right] T_5 + (n+1)T_4 + T_1 + T_2 + T_3 + T_7.$

As a rule of thumb, one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order. In this example, $n^2$ is the highest-order term, so one can conclude that $f(n) = O(n^2)$.
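The counting argument above can be verified directly. This minimal sketch tallies how often the inner loop body (step 6) executes and checks the tally against the closed form of the arithmetic series.

```python
def inner_body_executions(n):
    """Count executions of step 6 (the inner loop body) for input n."""
    count = 0
    for i in range(1, n + 1):        # step 4: outer loop, i = 1 .. n
        for j in range(1, i + 1):    # step 5: inner loop, j = 1 .. i
            count += 1               # step 6: the loop body
    return count

# The series 1 + 2 + ... + n sums to (n^2 + n) / 2, matching the factored
# coefficient of T6 in the derivation above:
assert inner_body_executions(100) == (100**2 + 100) // 2
```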
Formally, this can be proven by exhibiting constants $c$ and $n_0$ such that the total run-time above never exceeds $c n^2$ for all $n \geq n_0$. A more elegant approach to analyzing this algorithm would be to declare that ''T''1 … ''T''7 are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps. This would mean that the algorithm's run-time breaks down as follows:

$4 + \sum_{i=1}^{n} i \leq 4 + \sum_{i=1}^{n} n = 4 + n^2 \leq 5n^2 \ (\text{for } n \geq 1) = O(n^2).$

(This approach, unlike the one above, neglects the constant time consumed by the loop tests which terminate their respective loops, but it is trivial to prove that such an omission does not affect the final result.)

## Growth rate analysis of other resources

The methodology of run-time analysis can also be utilized for predicting other growth rates, such as consumption of memory space. As an example, consider the following pseudocode, which manages and reallocates memory usage by a program based on the size of a file which that program manages:

while ''file is still open:''
    let n = ''size of file''
    for ''every 100,000 kilobytes of increase in file size''
        ''double the amount of memory reserved''

In this instance, as the file size n increases, memory will be consumed at an exponential growth rate, which is order $O(2^n)$.
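The reallocation rule in the pseudocode can be sketched as follows; the initial reservation `base_kb` is an assumed value, not something the pseudocode specifies.

```python
def memory_reserved_kb(file_size_kb, base_kb=100):
    """Memory doubles for every full 100,000 KB of file size,
    mirroring the doubling rule in the pseudocode above.
    base_kb is a hypothetical initial reservation."""
    doublings = file_size_kb // 100_000
    return base_kb * 2 ** doublings
```

For a 350,000 KB file, three full 100,000 KB increments have occurred, so the reservation is `base_kb * 2**3`: exponential in the file size.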
This is an extremely rapid and most likely unmanageable growth rate for consumption of memory resources.

# Relevance

Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless.

# Constant factors

Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines $2^{32}$ = 4 GiB (greater if segmented memory is used) and on 64-bit machines $2^{64}$ = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are $O(1)$ for a large enough constant, or for small enough data. This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log*) is less than 5 for all practical data ($2^{65536}$ bits); (binary) log-log (log log ''n'') is less than 6 for virtually all practical data ($2^{64}$ bits); and binary log (log ''n'') is less than 64 for virtually all practical data ($2^{64}$ bits).
An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant-time algorithm results in a larger constant factor, e.g., one may have $K > k \log \log n$ so long as $K/k > 6$ and $n < 2^{2^6} = 2^{64}$. For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity $n \log n$), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity $n^2$) for small data, as the simpler algorithm is faster on small data.

# See also

* Amortized analysis
* Analysis of parallel algorithms
* Asymptotic computational complexity
* Best, worst and average case
* Big O notation
* Computational complexity theory
* Master theorem (analysis of algorithms)
* NP-complete
* Numerical analysis
* Polynomial time
* Program optimization
* Profiling (computer programming)
* Scalability
* Smoothed analysis
* Termination analysis — the subproblem of checking whether a program will terminate at all
* Time complexity — includes table of orders of growth for common algorithms
* Information-based complexity
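The small-data switch described under Constant factors (merge sort with an insertion-sort cutoff, as in hybrid algorithms like Timsort) can be sketched as below. This is an illustrative simplification, not Timsort's actual run-detection logic, and the cutoff of 32 is an arbitrary choice.

```python
def insertion_sorted(a):
    """O(n^2) insertion sort; low overhead, fast on small lists."""
    out = list(a)
    for i in range(1, len(out)):
        v, j = out[i], i - 1
        while j >= 0 and out[j] > v:
            out[j + 1] = out[j]   # shift larger elements right
            j -= 1
        out[j + 1] = v
    return out

def hybrid_sort(a, cutoff=32):
    """O(n log n) merge sort that falls back to insertion sort
    for slices no longer than `cutoff` (illustrative threshold)."""
    if len(a) <= cutoff:
        return insertion_sorted(a)
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid], cutoff), hybrid_sort(a[mid:], cutoff)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # standard two-way merge
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Below the cutoff, the quadratic algorithm's smaller constant factor wins; above it, the $n \log n$ growth rate dominates.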
# Review of Economics (Jahrbuch für Wirtschaftswissenschaften)

Editor-in-Chief: Michael Berlemann. Edited by Justus Haucap and Marcel Thum. 3 issues per year. Online ISSN 2366-035X.

Volume 68, Issue 3

# Beauty and Ugliness of Aggregation over Time: A Survey

Nlandu Mamingi, Department of Economics, The University of the West Indies, P.O. Box 64, Cave Hill Campus, Bridgetown BB11000, Barbados

Published Online: 2017-11-30 | DOI: https://doi.org/10.1515/roe-2017-0027

## Abstract

This paper delivers an up-to-date literature review dealing with aggregation over time of economic time series, e.g. the transformation of high-frequency data to low-frequency data, with a focus on its benefits (the beauty) and its costs (the ugliness). While there are some benefits associated with aggregating data over time, the negative effects are numerous. Aggregation over time is shown to have implications for inferences, public policy and forecasting.

JEL Classification: C10; C22; C32; C43

## 1 Introduction

Data are at the core of empirical or applied economics and econometrics. Data in this context can be characterized according to their sources (experimental, quasi-experimental or observational), their types (quantitative or qualitative), their configurations (time series, cross-section or panel) or their frequencies (high frequency or low frequency). This paper delivers an overview of the literature on the effects of aggregation over time, understood as the transformation of high frequency to low frequency data, on a wide range of economic or econometric undertakings. While the topic of temporal aggregation has been around for quite a while, the related full-fledged literature review papers on the topic have been rather scarce 1 and often provide only a partial perspective. The present literature review attempts to fill these gaps.
2 Precisely, the objective of the paper is twofold: (i) to reexamine the beauty (positive effects) of aggregation over time as well as its ugliness (negative effects) and (ii) to point out the findings that need further investigation as well as untreated issues. Methodologically, the paper emphasizes the message rather than the econometric or mathematical derivation of the message, as is appropriate for a survey article. We show that there are only a few positive effects of aggregation over time, while the negative effects are numerous. Empirical investigations need to consider, among other factors, the role of the data span in the power and size of some unit root/cointegration tests as well as the impact of structural change on test statistics under different types of aggregation over time. We also show that aggregation-over-time issues have far-reaching effects for inferences, public policy and forecasting. For example, the fact that aggregation over time generally alters causality relations between variables and the exogeneity status of variables might blur policy instruments useful to deal with economic issues such as inflation, budget deficits and output growth dynamics. The paper proceeds as follows. Section 2 introduces the concept of aggregation over time. Section 3 deals with the beauty of aggregation over time. Section 4 focuses on the literature concerned with the ugliness of aggregation over time. Section 5 essentially deals with issues needing further investigation. Section 6 contains concluding remarks.

## 2 Aggregation over time

It is often the case that temporally aggregated data are used in public policy evaluations and other empirical studies. This holds true especially for countries with low levels of development. The high costs of collecting frequently and processing (new) data are the major impediment to generating high-frequency data. In many situations, researchers and/or policy makers only have aggregated data to work with.
“Yet, the agent’s time decision interval and the data sampling interval do not necessarily coincide” (Mamingi, 1992, 95). For example, the agent’s decision interval may be monthly while the sample data interval is quarterly; that is, quarterly observations are used instead of monthly observations. This situation leads to many problematic issues which are due to aggregation over time (Mamingi, 1992, 2006a). Before discussing the effects of aggregation over time, it is useful to define the concept itself. Aggregation over time can take two forms. It can be a shift from continuous time to discrete time. The major contributions to this perspective are Phillips (1956), Sims (1971), Bergstrom (1984) and Phillips (1991). Aggregation over time can also be a shift from small discrete time units (high data frequency) to large discrete time units (low data frequency). In this paper, we concentrate on the discrete approach. Aggregation over time encompasses the following scenarios: temporal aggregation, systematic sampling and mixed aggregation. Temporal aggregation deals with time-dimension variables, i.e. variables whose values have been either averaged or summed over an interval of time. Examples of these variables, also known as flow variables, include consumption per year, income per year, saving per year, yearly fiscal deficits, investment per year, profits per month and rates of return. Systematic sampling, “a type of temporal aggregation appropriate for stock variables” (Granger, 1990, 26), deals with variables that are systematically sampled; that is, their values are recorded at particular points in time. Examples of these variables, also called stock variables or variables without time dimension, include money supply, stock of cash, savings, unemployment rate, wealth, labor force, inventory and national debt. Mixed aggregation arises in the context of relationships between variables.
For example, in the framework of a bivariate regression, mixed aggregation arises whenever one variable is temporally aggregated and the other one is systematically sampled. Two cases have to be distinguished. In the first case, the explained variable is a flow variable and the explanatory variable a stock variable; here, we refer to this relationship as mixed aggregation of type 1. The second case concerns the stock-flow relationship; here, we refer to the relationship as mixed aggregation of type 2. The stock adjustment model, where inventories and sales are stock and flow variables, respectively, as well as the growth-cum-debt model that treats debt and income as stock and flow variables, respectively, are typical examples of mixed aggregation (Mamingi, 2006a). One problem in this context is a lack of common terminology for the concepts explained above. Often aggregation over time is referred to as temporal aggregation. This may of course lead to confusion, as temporal aggregation in that broad sense encompasses temporal aggregation proper, systematic sampling and mixed aggregation. Other authors use the terms skip sampling or end-of-period sampling for systematic sampling. A harmonization of the terminology would be helpful. In this paper, aggregation over time encompasses temporal aggregation, systematic sampling and mixed aggregation.

## 3 The beauty of aggregation over time

There are a few positive effects attributed to aggregation over time, particularly in the context of the mean model. First, aggregation over time does not affect stationarity or non-stationarity 3 of time series. Thus, a time series which is stationary (non-stationary) at the disaggregated level remains so at the aggregated level.
This property has been directly or indirectly derived among others by Telser (1967), Amemiya and Wu (1972), Tiao (1972), Brewer (1973), Tiao and Wei (1976), Wei (1981), Harvey (1981), Ahsanullah and Wei (1984), Weiss (1984), Stram and Wei (1986), Christiano and Eichenbaum (1987), Rossana and Seater (1995) and Pierse and Snell (1995). Most of these studies resorted to analytical tools, accompanied by Monte Carlo experiments and empirical evidence to draw and back up their conclusions. Second, temporally aggregated data are less noisy than their disaggregated counterparts (Friend and Taubman, 1964; Haitovsky, Treyz, and Su, 1974; among others). Third, aggregation over time does not affect cointegratedness of variables. To recall, cointegration of variables refers to the long-run equilibrium between or among non-stationary variables or even among non-stationary and stationary variables. While Granger and Weiss (1983) conjectured the property (invariance of cointegratedness of variables under aggregation over time), Stock (1987), Phillips (1991) and Mamingi (2006a) provided the formal proof using either a continuous framework or a discrete context. Not only does cointegration survive aggregation over time but also the cointegrating vector remains unchanged under all types of aggregation. In Appendix A we provide the proof for mixed aggregation, a yet unresolved case (see Granger and Weiss, 1983, 264). Finally, in variance or conditional heteroscedasticity models there is a property which points to the closeness of weak GARCH processes at the univariate as well as multivariate levels. That is, a weak GARCH model at the disaggregate level remains so at the aggregate level (see Drost and Nijman, 1993; Hafner, 2004). Most of the above properties have been confirmed empirically by quite a number of studies. 
As an example, Marcellino (1999) used the term structure of interest rates for Canada, for which the disaggregated data are monthly observations on the Canadian 10-year government bond yield (RL) and the 90-day deposit rate (RS). He found that each variable was integrated of order one and uncovered one cointegration vector: $RS - \beta_1 RL + \beta_2$ with $\beta_1 = 1$ and $\beta_2 > 0$. He then constructed the quarterly and half-yearly systematic sampling counterparts of the monthly observations and likewise the average counterparts. He uncovered the unit root in each aggregate component and also found that cointegration holds for systematic sampling (quarterly and half-yearly) and temporal aggregation (quarterly and half-yearly).

## 4 The ugliness of aggregation over time

In comparison to the few positive effects of aggregation over time, there are many negative effects which, however, also depend on the sort of aggregation over time (temporal aggregation, systematic sampling and mixed aggregation). The negative effects, which will be discussed in the following, have been derived analytically and/or by Monte Carlo experiments, corroborated by empirical examples.
The negative effects include: lower precision of estimation and prediction (Wei, 1978; Zellner and Montmarquette, 1971), inability to make short-run forecasts (Zellner and Montmarquette, 1971), aggregation bias in distributed lag models (Brewer, 1973; Engle and Liu, 1972; Moriguchi, 1970; Mundlak, 1961; Tiao and Wei, 1976; Wei, 1978), OLS asymptotic biases of estimates of half-lives in purchasing power parity (Chambers, 2005), alterations of structures of time series (Telser, 1967; Amemiya and Wu, 1972; Tiao, 1972, among others), generation of time series correlation under temporal aggregation (Working, 1960), change in seasonal unit roots (Granger and Siklos, 1995), change in measures of persistence of shocks (Rossana and Seater, 1995), lower power of tests (see, for example, Teles and Wei, 2000; Zellner and Montmarquette, 1971), alterations of power of residual-based tests for cointegration (Mamingi, 1992, 2005b), distortions of empirical sizes 4 of residual-based tests for cointegration 5 (Mamingi, 1992, 1993, 2006b), distortions of causality relationships in multiple time series models (Geweke, 1978; Sims, 1971; Wei, 1982), vector autoregressive models (see, among others, Breitung and Swanson, 2002; Christiano and Eichenbaum, 1987; Marcet, 1987) and error correction models (Gulasekaran and Abeysinghe, 2003; Mamingi, 1992, 1996, 2006a), modification of exogeneity patterns (Campos, Ericsson, and Hendry, 1990; Hendry, 1992; Marcellino, 1999), alterations of impulse response functions (Swanson and Granger, 1997; Marcellino, 1999), change in trend-cycle decomposition (Lippi and Reichlin, 1991; Marcellino, 1999), change in nonlinearity patterns (Granger and Lee, 1999; Teles and Wei, 2000), change in quality of forecasts (Lütkepohl, 1987) and alterations of semi-strong and strong GARCH processes (Drost and Nijman, 1993; Hafner, 2004). For issues in aggregated GARCH models, it is worth consulting Silvestrini and Veredas (2008).
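One entry in this catalogue, the serial correlation that temporal aggregation induces in the differences of a random walk (Working, 1960), is easy to reproduce by simulation. The sketch below is illustrative and not taken from the paper: it aggregates a simulated random walk both ways and compares the lag-1 autocorrelation of the differenced series with Working's theoretical value $(m^2-1)/(2(2m^2+1))$.

```python
import numpy as np

def temporal_aggregate(x, m):
    """Flow-variable aggregation: non-overlapping sums over m periods."""
    return x[: len(x) // m * m].reshape(-1, m).sum(axis=1)

def systematic_sample(x, m):
    """Stock-variable aggregation: keep every m-th (end-of-period) value."""
    return x[m - 1 :: m]

def lag1_autocorr(d):
    d = d - d.mean()
    return (d[1:] @ d[:-1]) / (d @ d)

rng = np.random.default_rng(1)
m = 3
x = np.cumsum(rng.standard_normal(m * 100_000))   # a simulated random walk

r_agg = lag1_autocorr(np.diff(temporal_aggregate(x, m)))
r_smp = lag1_autocorr(np.diff(systematic_sample(x, m)))
# Working's (1960) value for the aggregate: (m^2 - 1) / (2 * (2 * m^2 + 1)),
# about 0.21 for m = 3. The sampled series remains a random walk, so its
# differences are approximately uncorrelated.
```

The temporally aggregated series behaves like an IMA(1,1) process (positive first-order autocorrelation in its differences), while the systematically sampled series keeps the random walk structure, consistent with the structure-change results discussed in Section 4.1.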
While we refrain here from discussing all the mentioned issues at length, we at least comment on a few problems of great importance.

## 4.1 Aggregation over time and time series structure

The general format of a time series is an ARIMA(p,d,q) process, where p is the autoregressive order, d represents the order of integration and q stands for the moving average order. Although aggregation over time does not change the status of stationarity/non-stationarity of time series variables, it generally leads to an alteration of time series structures. That is, the structure of a given series can be transformed into another structure with aggregation over time. For example, an autoregressive process of order one, AR(1), in a disaggregated model theoretically changes into an ARMA(1,1) process if temporally aggregated. By the same token, a random walk process generally becomes an integrated moving average process of order one, IMA(1,1), under temporal aggregation but remains a random walk process under systematic sampling (Amemiya and Wu, 1972; Telser, 1967; Tiao, 1972, among others). The limiting result of an ARIMA(p,d,q) process and an IMA(d,q) process is an IMA(d,l) process with $l\le d-1$ under systematic sampling (see Wei, 1978a) and an IMA(d,d) process under temporal aggregation (Stram and Wei, 1986). Rossana and Seater (1995) noted that the latter limiting process can become an IMA(d,d-1) process if the increase in the standard error is bigger than the increase in the estimated autocorrelation coefficients. Under systematic sampling, when d=0, the limiting model of a stationary process becomes white noise. The authors utilized a series of US economic variables obtained from Citibase to show how their structures change using three sets of frequencies: monthly, quarterly and annual. For example, durable consumption follows an ARI(4,1) process at monthly frequency and a random walk process at quarterly and annual frequencies.
Here, an IMA(1,1) process competed with a random walk process and only lost ground on the basis of the Schwarz criterion. Unemployment follows an ARI(24,1) process at monthly frequency, an ARIMA(4,1,4) process at quarterly frequency and a random walk process at annual frequency. For the latter frequency, an IMA(1,1) process was also acceptable but not preferable. The consumer price index follows an ARI(24,1) process at monthly frequency, an ARIMA(4,1,4) process at quarterly frequency and an IMA(1,1) process at annual frequency. For the latter frequency, a random walk process was acceptable but not preferred. It is worth noting that while the theoretical results are to a great extent not disputable, the empirical results are a different story, as the Box-Jenkins procedure teaches us. Indeed, it is known, for example, that the empirical correlogram often does not correspond exactly to the theoretical correlogram, for a number of reasons (common roots, etc.). Thus, caution should be exercised when dealing with the structure of either temporally aggregated or systematically sampled data. In addition, the change of structure of time series may, in quite a number of situations, change the power of some tests and distort the empirical sizes of some statistical tests.

## 4.2 Alteration of power of tests

In general, aggregation over time brings about a decrease in the power of tests (see, for example, Zellner and Montmarquette, 1971). Note that we care about the power of tests because a test with good power enables us to reject the null hypothesis when it is appropriate. The issue of decreasing power of test statistics under aggregation over time has been documented in the context of cointegration. In this context, at least four questions can be asked (see Mamingi, 2005b):

1. Do (residual-based) tests for cointegration preserve their power ranking under aggregation over time?
2. How do different (residual-based) tests for cointegration compare in terms of their powers across the different types of aggregation over time?
3. Does the degree of cointegration affect the power of (residual-based) tests for cointegration under aggregation over time?
4. How does the data span affect the power of (residual-based) tests for cointegration?

It should be noticed that the word “residual” was set in parentheses to highlight that the analysis can be generalized to the power of tests for cointegration in general. That said, a look at the literature reveals that while the first three questions have been systematically examined, this is not the case for the last question. Following the pioneering work of Shiller and Perron (1985) as well as Perron (1987, 1989) in the unit root context, Hakkio and Rush (1991), Mamingi (1992, 2005b), Hooker (1993), Lahiri and Mamingi (1995), Pierse and Snell (1995), Otero and Smith (2000) and Haug (2002) examined the impact of the data span on the power of tests for cointegration. However, with the exception of Mamingi (1992, 2005b), no study has examined explicitly the issue in the context of the three scenarios of aggregation over time. The findings with respect to the four earlier mentioned questions are the following (see especially Mamingi, 2005b):

1. Tests for cointegration do preserve their power ranking; that is, tests that are more powerful than others in the disaggregated model remain so under the aggregated models.
2. Under local alternatives, the power of tests for cointegration can vary substantially across types of aggregation.
3. The power of residual-based tests for cointegration is affected by the degree of cointegration: the higher the degree of cointegration, the higher the power of the test.
4. The data span does affect the power of test statistics through two channels. First, with the same number of observations, the larger the data span, the higher the power.
Second, a large data span with a small sample size yields, in general, higher power than a small data span with a large sample size, at least under local alternatives. Nevertheless, the second channel is less present in mixed aggregation as well as with some forms of the ADF test statistic. The power of the ADF test can substantially increase with a large data span. 6 To illustrate the importance of the data span in the context of residual-based tests for cointegration, Pierse and Snell (1995) use the relationship between real non-durable consumption and real net wealth for UK data over different data spans. The residual-based cointegration tests of interest are the CRDW (cointegration Durbin-Watson), the ADF (augmented Dickey-Fuller) and the $Z_t$ (Phillips-Ouliaris) tests. Table 1 indicates that while for quarterly data (1966–1981) and annual data (1966–1981) the lack of cointegration is not rejected by the ADF and $Z_t$ tests, the null hypothesis is convincingly rejected with annual data covering 1957 to 1981. This illustrates that boosting the data span increases the power of tests for cointegration. Table 1: Tests for the cointegration of UK non-durable consumption and wealth. In the context of question 4 above, Otero and Smith (2000) studied the effects of increasing the frequency of observations and the data span on the Johansen cointegration tests. They found that the power of the tests depends more on the total sample length than on the number of observations. To illustrate this theoretical finding, they examined the relationship between long-term and short-term interest rates for the US. More precisely, they considered monthly values of the 3-month Treasury bill rate in the secondary market (R3) and long-term US government securities (RL) over the 1959–1998 period. They then derived the quarterly and annual versions of the two interest rates by averaging and skip-sampling observations and tested for unit roots using the ADF and PP tests.
The series exhibit a unit root at all frequencies. Table 2 presents the cointegration results using the two versions of the Johansen test (the maximum eigenvalue LR test and the LR trace test). The VAR order is chosen by the Schwarz criterion and the constant term is included in the cointegration vector. Irrespective of the frequency of observations, the presence of one cointegration vector is acknowledged with the two longest sample periods (1959–1998 and 1969–1998). There is no evidence of the presence of cointegration between the two types of interest rates when the two shortest data samples are used (1989–1998 and 1979–1998), regardless of the type of data. Similar results are obtained when using systematically sampled observations, though cointegration only appears with annual frequency. This example illustrates the role of the data span in boosting the power of tests for cointegration. Table 2: Cointegration between US short-term and long-term interest rates using the Johansen tests. When examining the factors that affect the accuracy of estimation (and, to a larger extent, the power of tests), we can easily understand why the data span is important. As is well known (see e.g. Koop, 2000 or Mamingi, 2005a), the accuracy of estimation is affected in the first instance by the sample size, meaning that a regression with fewer observations is less reliable than a regression with a larger number of data points. Similarly, the quality of the information content matters. In general, the data span captures the quality of the information content; that is, the larger the data span, the better, in principle, the quality of the information content. In terms of our topic this means that a large data span helps boost the power of tests. Summing up, the data span, the degree of cointegration, the sample size and the type of aggregation are important determinants of the power of tests for cointegration.
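The span-versus-frequency point can be illustrated with a small Monte Carlo sketch in the spirit of the studies above. This is not the authors' design: the AR(1) coefficient of 0.95, the sample sizes, and the approximate 5 % Engle-Granger critical value of −3.37 are all assumptions. With 60 observations each, 60 systematically sampled annual observations spanning 60 years should reject "no cointegration" far more often than 60 monthly observations spanning 5 years.

```python
import numpy as np

rng = np.random.default_rng(0)

def eg_df_tstat(y, x):
    """Engle-Granger two steps: OLS of y on x (with intercept), then a
    no-constant Dickey-Fuller regression on the residuals."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    du, ul = np.diff(u), u[:-1]
    gamma = (ul @ du) / (ul @ ul)
    resid = du - gamma * ul
    se = np.sqrt((resid @ resid) / (len(du) - 1) / (ul @ ul))
    return gamma / se

def rejection_rate(span, step, reps=300, crit=-3.37):
    """Share of replications rejecting 'no cointegration'; `crit` is an
    assumed approximate 5% critical value for two variables."""
    hits = 0
    for _ in range(reps):
        x = np.cumsum(rng.standard_normal(span))   # I(1) regressor
        e = rng.standard_normal(span)
        u = np.zeros(span)                         # AR(1) equilibrium error
        for t in range(1, span):
            u[t] = 0.95 * u[t - 1] + e[t]
        y = x + u                                  # cointegrated by design
        if eg_df_tstat(y[::step], x[::step]) < crit:
            hits += 1
    return hits / reps

p_short = rejection_rate(60, 1)    # 60 monthly obs, 5-year span
p_long = rejection_rate(720, 12)   # 60 annual obs (sampled), 60-year span
```

The annual series covers twelve times the span, so the persistent equilibrium error (annual autocorrelation roughly $0.95^{12} \approx 0.54$) mean-reverts visibly within the sample, which is what drives the higher rejection rate.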
## 4.3 Distortion of the empirical size of residual-based tests for cointegration under aggregation over time
Most of the known tests are subject to empirical size distortions under aggregation over time. As far as cointegration tests are concerned, it is known at least for residual-based tests for cointegration that these distortions largely occur under temporal aggregation and mixed aggregation. Systematic sampling and the ADF test in general do not cause size distortions. Table 3 below, which concentrates on residual-based tests for cointegration, illustrates this finding. The results are based on the data generating process presented in Appendix B. As shown in the table, the size distortions of the test statistics of interest can reach alarming proportions in the context of temporal aggregation and mixed aggregation, at least within the realm of the data generation process used here. For example, at the 0.05 level of significance, the DF test has an empirical size of 0.006 for a sample size of 50 observations under temporal aggregation and of 0.585 under mixed aggregation. By contrast, it is 0.052 under systematic sampling, and the appropriate ADF test is likewise undistorted. This peculiar result for systematic sampling is due to the fact that, in theory, systematic sampling tends to preserve the time series structure more than temporal aggregation or mixed aggregation does. Thus, as seen above, a systematically sampled random walk remains a random walk. This means that the empirical size remains unchanged. Also, the ADF test largely confirms its good behavior in terms of size, as pointed out in the literature. Table 3: Empirical sizes for 5 % level residual-based tests for cointegration with 3,000 replications. Overall, the size distortion of residual-based tests for cointegration largely depends on the type of aggregation over time, with systematic sampling behaving well, and on the type of test used, with the ADF test being the most stable one.
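The good behavior of systematic sampling can be traced to the fact, noted above, that a skip-sampled random walk is still a random walk, whereas averaging induces serial correlation in the first differences (Working, 1960). A quick numerical check; the series length and aggregation order k = 12 are illustrative:

```python
import numpy as np

def lag1_autocorr(z):
    """Sample first-order autocorrelation."""
    z = z - z.mean()
    return (z[:-1] @ z[1:]) / (z @ z)

rng = np.random.default_rng(2)
w = np.cumsum(rng.normal(size=120_000))   # a long random walk
k = 12

skip = w[k - 1::k]                                        # systematic sampling
avg = w[: len(w) // k * k].reshape(-1, k).mean(axis=1)    # temporal aggregation

# Differences of the skip-sampled walk stay (approximately) white noise, so the
# process remains a random walk; differences of the averaged walk are serially
# correlated (close to 0.25 for large k, Working 1960), which is what distorts
# test sizes under temporal aggregation.
print(round(lag1_autocorr(np.diff(skip)), 3), round(lag1_autocorr(np.diff(avg)), 3))
```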
The size distortion of the Johansen tests for cointegration, in particular, turns out to be large. At least two pathways are available to deal with the described size distortions. First, one might simply use the ADF test, which has been shown to be well behaved in terms of size distortion. Second, since the issue of size distortion is due to the use of incorrect critical values, generating the correct critical values for a given test is recommended (see Mamingi, 1992).
## 4.4 Aggregation over time, error correction models and Granger causality distortion
The issue of Granger causality behavior is especially important in the context of public policy, where detecting the correct direction of causality between variables is often essential. How aggregation over time affects Granger causality is therefore one of the major concerns of this subsection. Mamingi (1992, 1996, 2006a) studied the impact of aggregation over time on the form of error correction models as well as on the causal relationship between variables. Among other things, he attempted to answer the following two key questions: 1. Does aggregation over time alter the form of error correction models? 2. Does aggregation over time alter the Granger causality relationship between cointegrated variables? Using Monte Carlo experiments and analytical tools, Mamingi (1992, 1996, 2006a) uncovered the following results: (i) as expected, the form of ECMs is often altered under aggregation over time; (ii) there are in general Granger causality distortions under aggregation over time, which depend on the type of aggregation over time, the data span, the sample size and the degree of cointegration. In particular, the lower the degree of cointegration, the higher the likelihood of distortion (change in causal relationships or ECM form); moreover, distortion is higher in the stock-flow relationship (mixed aggregation of type 2) than in the flow-stock relationship.
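The kind of Granger causality exercise underlying these Monte Carlo results can be sketched as follows. The bivariate DGP (x Granger-causes y, not the reverse) and the one-lag F-statistic are illustrative, not Mamingi's actual design; whether spurious reverse causality emerges after aggregation depends on the DGP and on the aggregation order:

```python
import numpy as np

def granger_F(y, x, p=1):
    """F-statistic for H0: lags of x do not help predict y (p = 1 lag, constant)."""
    Y = y[p:]
    Z_r = np.column_stack([np.ones(len(Y)), y[:-p]])   # restricted: own lag only
    Z_u = np.column_stack([Z_r, x[:-p]])               # unrestricted: add lag of x
    rss = lambda Z: ((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2).sum()
    rss_r, rss_u = rss(Z_r), rss(Z_u)
    return ((rss_r - rss_u) / p) / (rss_u / (len(Y) - Z_u.shape[1]))

rng = np.random.default_rng(3)
n, k = 2400, 12
x = np.zeros(n); y = np.zeros(n)
for t in range(1, n):               # x Granger-causes y, not vice versa
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

agg = lambda z: z[: n // k * k].reshape(-1, k).mean(axis=1)   # temporal aggregation

print("disaggregated:", granger_F(y, x), granger_F(x, y))
print("aggregated:   ", granger_F(agg(y), agg(x)), granger_F(agg(x), agg(y)))
```

At the disaggregated frequency the true one-way causality shows up as a very large F in the x-to-y direction only; rerunning the test on the aggregated series is the basic experiment behind the distortion results reported above.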
The last result, concerning the stock-flow versus the flow-stock relationship, needs further analysis since, economically, the stock-flow relationship is more pervasive than the flow-stock relationship. Surprisingly, systematic sampling brings about far fewer Granger causality distortions than the other types of aggregation over time. Thus, under systematic sampling there is more concordance of Granger causality results, both for variables which are stationary (Sims, 1971; Cunningham and Vilasuso, 1997, for example) and for variables which are non-stationary but cointegrated. Gulasekaran (2004) as well as Abeysinghe and Gulasekaran (2004), however, showed that while systematic sampling preserves Granger causality with stationary variables, this is not the case with nonstationary variables, for which spurious Granger causality (bi-directional causality instead of unidirectional causality) occurs. This issue needs further investigation. In any case, Gulasekaran and Abeysinghe (2003, 2008) devised a sign rule to remedy the distortion of the sign of the adjustment coefficient of an error correction model. By doing so, they claim to uncover the true causal relationship between cointegrated variables.
## 4.5 Exogeneity
Exogeneity, which is an important issue in the design of policies, is generally affected by aggregation over time. This is the case for both strict and strong exogeneity. In fact, the Lucas critique may be spuriously validated under aggregation over time. Marcellino (1999) delivers a good example of exogeneity alteration and of other topics discussed earlier. As mentioned earlier, Marcellino (1999) uses the term structure of interest rates to illustrate some of the disadvantages of temporal aggregation. The disaggregated data are monthly observations on the Canadian 10-year government bond yield (RL) and the 90-day deposit rate (RS). The model is a VAR(g) with a vector containing the two variables, with the number of lags g determined by a recursive F test of significance.
The author then studies the effects of different temporal aggregation schemes on exogeneity, Granger non-causality, the presence of common trends, and common cycles, having approximated the aggregated process by a VAR model. According to the results (see Marcellino (1999), Table 4), at the disaggregate level there is one cointegration relationship, with vector $RS - \beta_1 RL + \beta_2$, where $\beta_1 = 1$ and $\beta_2 > 0$. RL is weakly exogenous for the parameters of the cointegration vector. Since the hypothesis that the lags of $\Delta RS$ (with $\Delta$ the first difference operator) are insignificant is rejected in the error correction model for $\Delta RL$, RL is not a strongly exogenous variable. The presence of common cycles among the two interest rates is rejected, as is that of non-synchronous common cycles (NSCC). In the next step, quarterly aggregated variables are constructed from the corresponding disaggregated (monthly) series using point-in-time sampling (QP) and averaging (QA). The same cointegration vector is uncovered, and weak exogeneity of RL remains valid. However, because the lags of $\Delta RS$ in the error correction model for $\Delta RL$ are now insignificant, RL is not Granger-caused by RS, and RL becomes strongly exogenous for the long-run parameters. Moreover, some non-synchronous common cycles are now detected. For half-yearly data, there is still one cointegration vector, but RL is no longer weakly exogenous for the long-run coefficients and, by implication, no longer strongly exogenous. The number of cofeature 7 vectors does not decrease.
## 4.6 Aggregation over time and forecasts
Another issue of interest is whether temporally aggregated data have an impact on forecast accuracy. The answer depends on the scenario considered.
In a first scenario, where only temporally aggregated data are available, it is well documented that while the use of temporally aggregated data generally yields acceptable long-run forecasts, this is often not the case for short-run forecasts. Zellner and Montmarquette (1971), for example, underline the impossibility of making meaningful short-run forecasts with temporally aggregated data. By smoothing the series, temporally aggregated data concentrate on the long-run trend and thus deliver better long-run forecasts. The second scenario is characterized by the presence of multiple time series, temporally aggregated at different levels. Here, combining the different forecasts derived from these data yields superior forecasts. The burgeoning literature on the multiple aggregation prediction algorithm (MAPA) and on mixed data sampling (MIDAS) is at the forefront of forecasting and modeling multiple time series with different frequencies. Athanasopoulos et al. (2015), Kourentzes, Petropoulos, and Trapero (2014), and Petropoulos and Kourentzes (2014) are representative of MAPA. Guay and Maurin (2015), Bangwayo-Skeete and Skeete (2015), Ghysels and Miller (2014), Ghysels, Santa-Clara, and Valkanov (2004), Miller (2003) and their precursors Zellner and Montmarquette (1971), Hsiao (1979) and Palm and Nijman (1982) are representative of MIDAS.
## 5 Agenda for further research
Among the few issues or findings for which there is no firm consensus among researchers, two are particularly important and require further investigation. First, there is the role of the data span and the sample size in the power and size of tests for cointegration under aggregation over time, particularly questioned by Giles in his blog (Giles, Blogspot, Monday, May 26, 2014).
Based on Pierse and Snell (1995, 336), he argues that, asymptotically or even in finite samples, "temporally aggregating or selective sampling has no consequence of size distortion or loss of power for the ADF, Phillips-Perron test, or Hall's (1994) IV based unit root test". As seen above, quite a number of authors hold a view different from Giles'. Second, the role of structural breaks in the unit root/cointegration setting with data of diverse degrees of aggregation over time needs to be explored. The key question is how the power and the size of tests of unit roots/cointegration under aggregation over time are affected by the presence of structural breaks. Moreover, although the variance model was only a footnote here for reasons of choice and space, it would be interesting to study how EGARCH processes behave under aggregation over time.
## 6 Concluding remarks
This paper dealt with the advantages and problems surrounding data aggregated over time. A number of recommendations can be made concerning the issues discussed in this survey. Ideally, above all, it is advisable to use the data frequency that corresponds to the agent's decision interval. Since this is not always possible, particularly for many developing countries given the high cost of collecting information, at least three recommendations can be made. First, in some situations there is a need to use rules that may re-establish the true properties of the time series or relationships. Thus, the promising research by Gulasekaran and Abeysinghe (2003, 2008) on designing a rule that can help uncover the "true" relationship in the lower frequency data is, for example, a way forward to solving Granger causality distortions due to temporally aggregated data. Second, in some situations there is the possibility of temporally disaggregating data following some appropriate scheme (Chow-Lin, Fernandez, Litterman, Denton-Cholette, Denton, Lisman-Sandee, etc.; see Sax and Steiner, 2013).
However, these methods also have their problems because of the lack of knowledge about the data generating process. Third, under certain circumstances there is the possibility of resorting to innovative techniques that allow the use of both aggregate and disaggregate data at the same time. The burgeoning literature on MIDAS (mixed data sampling) can provide some insights on solving data configuration issues, at least in the multivariate context. Summing up, the overall lesson to be learned directly or indirectly from this paper is that in any empirical econometric undertaking it is imperative to understand the issues surrounding the data in use. Failure to examine data issues or properties properly may lead to wrong inferences and possibly wrong public policy prescriptions.
## Acknowledgements
I would like to thank the editor-in-chief of this review, his collaborators and Mahalia Jackman for ably editing the paper. I am also indebted to Stephen Harewood for useful comments. All remaining errors are my own.
## References
• Abeysinghe, T. and R. Gulasekaran (2004): The Consequences of Systematic Sampling on Granger Causality. Econometric Society 2004 Australasian Meetings 250, Econometric Society.
• Ahsanullah, M. and W. W. S. Wei (1984): The Effects of Time Aggregation of the AR(1) Process, Computational Statistics Quarterly 1, 343–352.
• Amemiya, T. and R. Y. Wu (1972): The Effect of Aggregation on Prediction in the Autoregressive Model, Journal of the American Statistical Association 67, 628–632.
• Athanasopoulos, G., R. J. Hyndman, N. Kourentzes and F. Petropoulos (2015): Forecasting with Temporal Hierarchies. Working Paper 2015:3, Lancaster University Management School, Working Paper Series.
• Bangwayo-Skeete, P. and R. W. Skeete (2015): Can Google Data Improve the Forecasting Performance of Tourist Arrivals? Mixed-Data Sampling Approach, Tourism Management 46, 454–464.
• Bergstrom, A. R. (1984): Continuous Time Stochastic Models and Issues of Aggregation over Time, in: Z. Griliches and M. D. Intriligator (eds.) Handbook of Econometrics. North Holland, Amsterdam, Vol. 2 (chap. 20).
• Breitung, J. and N. Swanson (2002): Temporal Aggregation and Spurious Instantaneous Causality in Multiple Time Series Models, Journal of Time Series Analysis 23, 651–665.
• Brewer, K. R. W. (1973): Some Consequences of Temporal Aggregation and Systematic Sampling for ARMA and ARMAX Models, Journal of Econometrics 1, 133–154.
• Campos, J., N. Ericsson and D. F. Hendry (1990): An Analogue Model of Phase-Averaging Procedures, Journal of Econometrics 43, 275–292.
• Chambers, M. J. (2005): The Purchasing Power Parity, Temporal Aggregation and Half-life Estimation, Economics Letters 86, 193–198.
• Choi, I. and B. S. Chung (1995): Sampling Frequency and the Power of Tests for Unit Root: A Simulation Study, Economics Letters 49, 131–136.
• Christiano, L. J. and M. Eichenbaum (1987): Temporal Aggregation and Structural Inference in Macroeconomics, Carnegie-Rochester Conference on Public Policy 26, 63–130.
• Cunningham, S. R. and R. J. Vilasuso (1997): Time Aggregation and the Money-Real Output Relationship, Journal of Macroeconomics 19, 675–695.
• Drost, F. C. and T. E. Nijman (1993): Temporal Aggregation of GARCH Processes, Econometrica 61, 909–927.
• Engle, R. F. and C. W. J. Granger (1987): Cointegration and Error Correction: Representation, Estimation and Testing, Econometrica 55, 251–276.
• Engle, R. F. and S. Kozicki (1993): Testing for Common Features, Journal of Business and Economic Statistics 11(4), 369–395.
• Engle, R. F. and T. C. Liu (1972): Effects of Aggregation over Time on Dynamic Characteristics of an Econometric Model, in: B. G. Hickman (ed.) Cyclical Behaviors. Columbia University Press, New York, 663–667.
• Friend, I. and P.
Taubman (1964): A Short-Run Forecasting Model, Review of Economics and Statistics 46, 229–236.
• Geweke, J. (1978): Temporal Aggregation in the Multiple Regression, Econometrica 46, 643–661.
• Ghysels, E. and I. J. Miller (2014): Testing for Cointegration with Temporally Aggregated and Mixed-frequency Time Series, mimeo.
• Ghysels, E., P. Santa-Clara and R. Valkanov (2004): The MIDAS Touch: Mixed Data Sampling Regression Models, UNC and UCLA Discussion Paper.
• Giles, D. E. (2014): The Econometrics of Temporal Aggregation: 1956–2014, The A.W.H. Phillips Memorial Lecture, N.Z. Association of Economists Annual Conference, Auckland, July.
• Granger, C. W. J. (1980): Aggregation of Time Series Variables: A Survey, in: T. Barker and H. Pesaran (eds.) Disaggregation in Econometric Modelling. Routledge, London, 17–34.
• Granger, C. W. J. and T. H. Lee (1999): The Effect of Aggregation on Nonlinearity, Econometric Reviews 18(3), 259–269.
• Granger, C. W. J. and A. J. Morris (1976): Time Series Modelling and Interpretation, Journal of the Royal Statistical Society A 139, 246–257.
• Granger, C. W. J. and P. L. Siklos (1995): Systematic Sampling, Temporal Aggregation, Seasonal Adjustment and Cointegration: Theory and Evidence, Journal of Econometrics 66, 357–369.
• Granger, C. W. J. and A. A. Weiss (1983): Time Series Analysis of Error Correction Models, in: S. Karlin, T. Amemiya and L. A. Goodman (eds.) Studies in Econometrics, Time Series, and Multivariate Analysis. Academic Press, New York, 255–278.
• Guay, A. and A. Maurin (2015): Disaggregation Methods Based on MIDAS Regression, Economic Modelling 50, 123–129.
• Gulasekaran, R. (2004): Impact of Systematic Sampling on Causality in the Presence of Unit Roots, Economics Letters 84, 127–132.
• Gulasekaran, R. and T. Abeysinghe (2003): Temporal Aggregation, Causality Distortions and a Sign Rule.
Departmental Working Paper WP0406, Department of Economics, National University of Singapore.
• Gulasekaran, R. and T. Abeysinghe (2008): Temporal Aggregation, Cointegration and Causality Inference, Economics Letters 101, 223–226.
• Hafner, C. M. (2004): Temporal Aggregation of Multivariate Processes. Econometric Institute Report 2004-29, Erasmus University Rotterdam, the Netherlands.
• Haitovsky, Y., G. Treyz and Y. Su (1974): Forecasts with Quarterly Macroeconomic Models. National Bureau of Economic Research, New York.
• Hakkio, C. S. and M. Rush (1991): Cointegration: How Short Is the Long Run?, Journal of International Money and Finance 10, 571–581.
• Hall, A. (1994): Testing for a Unit Root in Time Series with Pretest Data-based Model Selection, Journal of Business and Economic Statistics 12, 461–470.
• Harvey, A. C. (1981): Time Series Models. John Wiley, New York.
• Haug, A. (2002): Temporal Aggregation and the Power of Cointegration Tests: A Monte Carlo Study, Oxford Bulletin of Economics and Statistics 64, 389–412.
• Hendry, D. F. (1992): An Econometric Analysis of TV Advertising Expenditure in the United Kingdom, Journal of Policy Modelling 14, 281–311.
• Hooker, A. M. (1993): Testing for Cointegration: Power versus Frequency of Observations, Economics Letters 41, 359–362.
• Hsiao, C. (1979): Linear Regression Using Both Temporally Aggregated and Temporally Disaggregated Data, Journal of Econometrics 10, 243–252.
• Koop, G. (2000): Analysis of Economic Data. John Wiley & Sons, Chichester.
• Kourentzes, N., F. Petropoulos and J. R. Trapero (2014): Improving Forecasting by Estimating Time Series Structural Components across Multiple Frequencies, International Journal of Forecasting 30(2), 291–302.
• Lahiri, K. and N. Mamingi (1995): Testing for Cointegration: Power versus Frequency of Observation: Another View, Economics Letters 49, 121–124.
• Lippi, M.
and L. Reichlin (1991): Trend-Cycle Decompositions and Measures of Persistence: Does Time Aggregation Matter?, Economic Journal 101, 314–323.
• Lütkepohl, H. (1987): Forecasting Aggregated Vector ARMA Processes. Springer-Verlag, New York.
• Mamingi, N. (1992): Essays on the Effects of Misspecified Dynamics and Temporal Aggregation on Cointegrating Relationships. Unpublished Ph.D. thesis, State University of New York, Albany.
• Mamingi, N. (1993): Residual Based Tests for Cointegration: Their Actual Size under Aggregation over Time. Albany Discussion Papers 93-09, Department of Economics, State University of New York, Albany.
• Mamingi, N. (1996): Aggregation over Time, Error Correction Models and Granger Causality: A Monte Carlo Investigation, Economics Letters 52, 7–14.
• Mamingi, N. (2005a): Theoretical and Empirical Exercises in Econometrics. UWI Press, Kingston.
• Mamingi, N. (2005b): Power of Tests for Cointegration under Aggregation over Time (Temporal Aggregation, Systematic Sampling and Mixed Aggregation): A Monte Carlo Investigation, Asian-African Journal of Economics and Econometrics 4, 99–115.
• Mamingi, N. (2006a): Aggregation over Time, Cointegration, Error Correction Models and Granger Causality: An Extension, Asian-African Journal of Economics and Econometrics 6, 171–183.
• Mamingi, N. (2006b): Empirical Size Distortions of Residual Based Tests for Cointegration under Aggregation over Time: A Monte Carlo Investigation, Asian-African Journal of Economics and Econometrics 6, 13–26.
• Marcellino, M. (1999): Some Consequences of Temporal Aggregation in Empirical Analysis, Journal of Business and Economic Statistics 17(1), 129–136.
• Marcet, A. (1987): Temporal Aggregation and Economic Time Series. Unpublished Ph.D. Thesis, University of Minnesota.
• Miller, J. I.
(2003): Mixed-Frequency Cointegrating Regressions with Parsimonious Distributed Lag Structures, Journal of Financial Econometrics 12, 684–615.
• Moriguchi, C. (1970): Aggregation over Time in Macroeconomic Relationships, International Economic Review 11, 427–440.
• Mundlak, Y. (1961): Aggregation over Time in Distributed Lag Models, International Economic Review 2, 154–163.
• Otero, J. and J. Smith (2000): Testing for Cointegration: Power versus Frequency of Observation – Further Monte Carlo Results, Economics Letters 67, 5–9.
• Palm, F. C. and T. E. Nijman (1982): Linear Regression Using Both Temporally Aggregated and Temporally Disaggregated Data, Journal of Econometrics 19, 333–343.
• Perron, P. (1987): Test Consistency with Varying Sampling Frequency. Cahier de Recherche 4187, Université de Montréal.
• Perron, P. (1989): Testing for a Random Walk: A Simulation Experiment of Power When the Sampling Interval Is Varied, in: B. Raj (ed.) Advances in Econometrics and Modelling. Kluwer Academic Publishers, Dordrecht.
• Petropoulos, F. and N. Kourentzes (2014): Improving Forecasting via Multiple Temporal Aggregation, Foresight: The International Journal of Applied Forecasting 34, 12–17.
• Phillips, A. W. H. (1956): Some Notes on the Estimation of Time-Forms of Reactions in Interdependent Dynamic Systems, Economica 23, 99–113.
• Phillips, P. C. B. (1991): Error Correction and Long Run Equilibrium in Continuous Time, Econometrica 59, 967–980.
• Pierse, R. G. and A. J. Snell (1995): Temporal Aggregation and the Power of Tests for a Unit Root, Journal of Econometrics 65, 333–345.
• Rossana, R. J. and J. J. Seater (1995): Temporal Aggregation and Economic Time Series, Journal of Business and Economic Statistics 13, 441–451.
• Sax, C. and P. Steiner (2013): Temporal Disaggregation of Time Series, The R Journal 5, 80–87.
• Shiller, R. J. and P.
Perron (1985): Testing the Random Walk Hypothesis: Power versus Frequency of Observations, Economics Letters 18, 381–386.
• Silvestrini, A. and D. Veredas (2008): Temporal Aggregation of Univariate and Multivariate Time Series Models: A Survey, Journal of Economic Surveys 22, 458–495.
• Sims, C. A. (1971): Discrete Approximation to Continuous Time Distributed Lag in Econometrics, Econometrica 39, 545–563.
• Stock, J. H. (1987): Temporal Aggregation and Structural Inference in Macroeconomics: A Comment, Carnegie-Rochester Conference Series on Public Policy 26, 131–140.
• Stram, D. O. and W. W. S. Wei (1986): Temporal Aggregation in the ARIMA Process, Journal of Time Series Analysis 7, 279–292.
• Swanson, N. R. and C. W. J. Granger (1997): Impulse Response Functions Based on a Causal Approach to Residual Orthogonalization in Vector Autoregressions, Journal of the American Statistical Association 92, 357–367.
• Teles, P. and W. W. S. Wei (2000): The Effects of Temporal Aggregation on Tests of Linearity of a Time Series, Computational Statistics & Data Analysis 34, 91–103.
• Telser, L. G. (1967): Discrete Samples and Moving Sums in Stationary Stochastic Processes, Journal of the American Statistical Association 62(318), 484–499.
• Theil, H. (1954): Linear Aggregation of Economic Relations. North-Holland Publishing Company, Amsterdam.
• Tiao, G. C. (1972): Asymptotic Behavior of Temporal Aggregates of Time Series, Biometrika 59, 525–531.
• Tiao, G. C. and W. W. S. Wei (1976): Effect of Temporal Aggregation on the Dynamic Relationship between Two Time Series Variables, Biometrika 63, 513–523.
• Wei, W. W. S. (1978): The Effect of Temporal Aggregation on Parameter Estimation in Distributed Lag Model, Journal of Econometrics 8, 237–246.
• Wei, W. W. S. (1981): Effect of Systematic Sampling on ARIMA Models, Communications in Statistics – Theory and Methods A10(23), 2389–2398.
• Wei, W. W. S.
(1982): The Effect of Systematic Sampling and Temporal Aggregation on Causality: A Cautionary Note, Journal of the American Statistical Association 77(378), 316–319.
• Weiss, A. A. (1984): Systematic Sampling and Temporal Aggregation in Time Series Models, Journal of Econometrics 26, 271–287.
• Working, H. (1960): Note on the Correlation of First Differences of Averages in a Random Chain, Econometrica 28, 335–342.
• Zellner, A. and C. Montmarquette (1971): A Study of Some Aspects of Temporal Aggregation Problems in Econometric Analyses, Review of Economics and Statistics 53(3), 335–342.
## A Proof of cointegration invariance under mixed aggregation
Following Mamingi (2006a), consider the following relationship:
$y_t = \beta\, x_t + u_t, \qquad t = 1, 2, 3, \dots, N$ (1)
where $y_t \sim \mathrm{I}(1)$, $x_t \sim \mathrm{I}(1)$ and $u_t \sim \mathrm{I}(0)$. This means that $y_t$ and $x_t$ are cointegrated with $(1\ \ {-\beta})$ as the cointegration vector.
In addition, without loss of generality, assume that $u_t$ follows an AR(1) process:
$(1 - \rho L)\, u_t = e_t$ (2)
where $0 \le |\rho| < 1$, $L$ is the backward shift operator and $e_t$ is a white noise series. Proposition 1. Given the above conditions, the following is true: 1. the mixed aggregation counterpart of Equation (1) remains cointegrated; 2. the cointegrating vector, $(1\ \ {-\beta})$, remains invariant under mixed aggregation. The proof is adapted from Stram and Wei (1986), Wei (1981), Weiss (1984) and particularly Mamingi (2006a). Define the following filter
$P(L) = \left(M(L)\ \ S(L)\right)$ (3)
where
$S(L) = \frac{1 - \rho^k L^k}{1 - \rho L}$ (4)
$M(L) = S(L)\, S(L)$ (5)
$k$ is the sampling interval or order of aggregation over time and $L$ is defined as above. Using Equation (3) in Equation (1) with systematically sampled variables yields
$P(L) \begin{pmatrix} y_{kT} \\ -\beta\, x_{kT} \end{pmatrix} = P(L) \begin{pmatrix} u_{kT} \\ u_{kT} \end{pmatrix}$ (6)
where $T$ is the time index of aggregated variables.
Equation (6) can be rewritten as follows:
$\left(M(L)\ \ S(L)\right) \begin{pmatrix} y_{kT} \\ -\beta\, x_{kT} \end{pmatrix} = \left(M(L)\ \ S(L)\right) \begin{pmatrix} u_{kT} \\ u_{kT} \end{pmatrix}$ (7)
Expanding Equation (7) yields:
$M(L)\, y_{kT} - \beta\, S(L)\, x_{kT} = M(L)\, u_{kT} + S(L)\, u_{kT}$ (8)
Multiplying Equation (8) by $(1 - \rho L)$ gives rise to
$(1 - \rho L)\, M(L)\, y_{kT} - \beta\, (1 - \rho L)\, S(L)\, x_{kT} = (1 - \rho L)\, M(L)\, u_{kT} + (1 - \rho L)\, S(L)\, u_{kT}$ (9)
Part (a) At the outset, we develop the right-hand side of Equation (9) to prove Part (a) of the proposition.
Rewriting the right-hand side of Equation (9) and using Equations (4) and (5) yields
$(1 - \rho L)\, M(L)\, u_{kT} + (1 - \rho L)\, S(L)\, u_{kT} = (1 - \rho^k L^k)\, S(L)\, u_{kT} + (1 - \rho^k L^k)\, u_{kT}$ (10)
Inserting Equation (2) into Equation (10) yields
$(1 - \rho L)\, M(L)\, u_{kT} + (1 - \rho L)\, S(L)\, u_{kT} = M(L)\, e_{kT} + S(L)\, e_{kT}$ (11)
Combining Equations (10) and (11) gives rise to
$(1 - \rho^k L^k)\, S(L)\, u_{kT} + (1 - \rho^k L^k)\, u_{kT} = M(L)\, e_{kT} + S(L)\, e_{kT}$ (12)
or
$(1 - \rho^k L^k) \left( S(L)\, u_{kT} + u_{kT} \right) = M(L)\, e_{kT} + S(L)\, e_{kT}$ (13)
The left-hand side of Equation (13) is simply $U_T - \rho^k U_{T-1}$, where $U_T$ consists of temporally aggregated and systematically sampled parts, $S(L)\, u_{kT}$
and $u_{kT}$, respectively. Mamingi (2005b) has shown that $M(L)\,e_{kT}$ is an MA(1) process and $S(L)\,e_{kT}$ is a white noise series. It is known that the sum of an MA(1) process and a white noise process is an MA(1) process (see, for example, Granger and Morris, 1976). Hence, the mixed aggregated counterpart of $u_{t}$, that is, $U_{T}$, follows an ARMA(1,1) process. It means that the error remains stationary (I(0)). Thus, cointegration continues to hold with this type of aggregation over time. Q.E.D. Part (b) Using the left-hand side of Equation (9) as well as Equations (4) and (5) yields $$(1-\rho^{k}L^{k})\,S(L)\,y_{kT} - \beta\,(1-\rho^{k}L^{k})\,x_{kT} = (1-\rho^{k}L^{k})\,S(L)\,u_{kT} + (1-\rho^{k}L^{k})\,u_{kT} \tag{14}$$ where $S(L)\,y_{kT} = Y_{T}$ is the temporally aggregated counterpart of $y_{t}$ and $x_{kT} = X_{T}$ is the systematically sampled counterpart of $x_{t}$. 
Dividing Equation (14) by $(1-\rho^{k}L^{k})$ yields $$S(L)\,y_{kT} = \beta\,x_{kT} + U_{T} \tag{15}$$ or $$Y_{T} = \beta\,X_{T} + U_{T} \tag{16}$$ where $U_{T} = S(L)\,u_{kT} + u_{kT}$ is an ARMA(1,1) process, as shown in Part (a). Cointegration is thus preserved with the same cointegrating vector $(1\;\; -\beta)$. Q.E.D. The proof also holds if the roles of the variables are interchanged. An alternative proof can be found in Stock (1987). ## B DGP for Table 3 (see Mamingi, 2006b) The data generation process (DGP) due to Engle and Granger (1987) is of interest mainly for reasons of comparability with numerous studies that used it. 
It is defined as follows: $$\begin{array}{c} y_{t} + \alpha x_{t} = w_{1t}, \qquad w_{1t} = w_{1t-1} + \epsilon_{1t} \\ y_{t} + \beta x_{t} = w_{2t}, \qquad w_{2t} = \rho w_{2t-1} + \epsilon_{2t} \end{array} \tag{17}$$ where $y_{t}$ and $x_{t}$ are the variables of interest, the $w$'s are the error terms, the $\epsilon$'s are iid(0,1) and $|\rho| < 1$. The reduced form of system (17) shows that $y_{t}$ and $x_{t}$ are individually integrated of order one: $$\begin{array}{c} x_{t} = 1/(\alpha-\beta)\,(w_{2t} - w_{1t}) \\ y_{t} = 1/(\alpha-\beta)\,(\alpha w_{1t} - \beta w_{2t}) \end{array} \tag{18}$$ As in Engle and Granger (1987), $\alpha = 1$ and $\beta = 2$. Under the alternative hypothesis of cointegration, the coefficient of autocorrelation $\rho$ in Equation (17) satisfies $|\rho| < 1$. The disaggregated model is of the following type: $$y_{t} = c + a x_{t} + u_{t} \qquad t = 1,2,\cdots,N \tag{19}$$ where the two variables of interest are those from Equation (18). Note that while under the null hypothesis of no cointegration $u_{t} \equiv w_{1t}$, under the alternative hypothesis of cointegration $u_{t} \equiv w_{2t}$. 
Thus, under $H_{0}$, $u_{t}$ follows a random walk process, and under $H_{1}$ it follows an AR(1) process. The aggregated model, analogous to Equation (19), is: $$Y_{T} = c + a X_{T} + U_{T} \qquad T = 1,2,\cdots,M \tag{20}$$ where $T = kt$ is the time index in aggregated models, $k$ is the sampling interval or order of aggregation, and capital letters stand for aggregated variables (see the text for details of types of aggregation over time). For implementation, see Mamingi (2006b). ## Footnotes • 1 Theil (1954), Granger (1990) and Silverstrini and Veredas (2008) are among the few exceptions. • 2 I have just, while browsing the internet, come across a conference paper by Giles (2014) entitled “The Econometrics of Temporal Aggregation.” I acknowledge it. • 3 A time series is stationary in the second-order sense if its mean does not depend on time, its variance is finite and its covariance depends only on the distance between variables. • 4 Empirical size distortion means that the actual level of significance is different from the nominal level of significance. • 5 The Dickey-Fuller test (the Engle-Granger test), the Augmented Dickey-Fuller test (the Augmented Engle-Granger test) and the Phillips-Ouliaris test. • 6 In the context of unit roots, Choi and Chung (1995) emphasise sample size rather than data span in boosting the power of the ADF test. This calls for further investigation. • 7 Cofeature means common features or the simultaneous presence of common trends and common cycles. In fact, in many instances, it simply means common cycles (see Engle and Kozicki, 1993). Published Online: 2017-11-30 Published in Print: 2017-11-27 Citation Information: Review of Economics, Volume 68, Issue 3, Pages 205–227, ISSN (Online) 2366-035X, ISSN (Print) 0948-5139
Kattis # Juggling Sequence The Juggling Sequence is an integer sequence with $n$ elements, defined as follows: • $a_1 = 1$, • For every $i \geq 1$: • If $a_ i \leq i$, then $a_{i+1} = a_ i + i$, • If $a_ i > i$, then $a_{i+1} = a_ i - i$. Let’s sort the juggling sequence in non-decreasing order. What is the $m$-th number? ## Input The first line of the input contains a single integer $t$ $(1 \le t \le 10^4)$ — the number of test cases. $t$ test cases follow, each test case contains a single line with two integers $n$ and $m$ $(1 \le m \le n \le 10^{18})$. ## Output For each test case, print a single integer — the $m$-th number in the sorted juggling sequence. ## Explanation of the sample input With $n = 6$, the juggling sequence is $1, 2, 4, 1, 5, 10$. After sorting, the sequence becomes $1, 1, 2, 4, 5, 10$.

Sample Input 1:

3
6 1
6 2
6 6

Sample Output 1:

1
1
10
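For small $n$ the sequence, and hence the answer, can be computed directly. The brute-force sketch below (function names are mine) reproduces the sample; it obviously cannot handle $n$ up to $10^{18}$, where a pattern-based or closed-form approach is needed:

```python
def juggling_sequence(n):
    """First n terms: a_1 = 1; a_{i+1} = a_i + i if a_i <= i, else a_i - i."""
    a = [1]
    for i in range(1, n):        # i is the 1-based index of the last computed term
        a.append(a[-1] + i if a[-1] <= i else a[-1] - i)
    return a

def mth_sorted(n, m):
    """The m-th (1-based) element after sorting the first n terms."""
    return sorted(juggling_sequence(n))[m - 1]

print(juggling_sequence(6))   # [1, 2, 4, 1, 5, 10], as in the problem statement
print(mth_sorted(6, 6))       # 10
```

Running the three sample queries (6, 1), (6, 2) and (6, 6) gives 1, 1 and 10, matching the sample output.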
• Centered regular text.   Centering an equation by centering the LaTeX text: the above does not work with tables.  Multiple lines with \\ look awkward: the first line is centered, but later lines start from the same position as the first line.   Aligning equations using \begin{array}.  To center, begin the LaTeX with <p align=”center”> and close with </p>.  Will need to check by going between the Visual and Text modes.  Note that there will probably be strange behavior from the location of </p> in Text mode when going between the two modes, and it will probably have to be corrected multiple times.  Note that the lines need to be tight in Text mode – no extra line of… • Showing $$\label{eq:0.99=1} \tag{1} \large \bf 0.\bar{9} = 1$$
# OneClean Virus Removal Guide OneClean is a fake anti-spyware program that tries to trick users into paying for its software license. It gets installed on a user’s computer by Trojan viruses that get downloaded from shady websites on the internet. These Trojans download all the files necessary to install OneClean from the relevant websites and proceed to install OneClean without the user’s knowledge. It then proceeds to load itself as a startup program and performs a large number of fake scans on the system, reporting that the system is dangerously infected with malicious software. All this activity can drastically slow the user’s operating system. OneClean then urges the user to purchase a license to the ‘full’ version of the software, claiming that the currently installed ‘trial’ version is insufficient to remove the detected ‘threats’. However, it is important to remember that the so-called ‘full’ version of OneClean is just as incapable of scanning or cleaning any system as the ‘trial’ version is. As soon as you find a copy of this rogue program on your computer, you should take measures to delete OneClean. The process of OneClean removal involves stopping processes, unregistering DLLs, deleting files and folders, and removing registry entries. 
## OneClean Manual Removal Procedures The first step you must take in order to delete OneClean is to stop the following processes: • C:\Program Files\OneClean\oneclean.exe • C:\Program Files\OneClean\ocmon.exe Next, it is necessary to unregister the following DLL file: • C:\Program Files\OneClean\libmysql.dll The next step in OneClean removal is to delete the following files and folders: Windows XP: • C:\Program Files\OneClean • C:\Program Files\OneClean\DB • C:\Program Files\OneClean\oneclean.exe • C:\Program Files\OneClean\ocmon.exe Windows Vista/7: • C:\Program Files\OneClean • C:\Program Files\OneClean\DB • C:\Program Files\OneClean\oneclean.exe • C:\Program Files\OneClean\ocmon.exe ## OneClean Registry Removal Procedures File removal alone is not sufficient to properly delete OneClean. It is required to delete the following keys and settings for complete OneClean removal: • HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\oneclean • HKEY_CURRENT_USER\Software\OneClean • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\OCleanUpdate.exe • HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\oneclean After the registry cleaning step has been completed, it is safe to say that you have completely removed OneClean. Be sure to run a full security scan to ensure nothing else is on your computer. ## OneClean Directories: • C:\Program Files\OneClean\ Outside Resources: http://lookavirus.blogspot.com/2010/04/oneclean-virus.html RemoveVirus.org cannot be held liable for any damages that may occur from using our community virus removal guides. Viruses cause damage, and unless you know what you are doing you may lose your data. We strongly suggest you back up your data before you attempt to remove any virus. Each product or service is a trademark of their respective company. We do make a commission off of each product we recommend. This is how removevirus.org is able to keep writing our virus removal guides. 
All free antivirus scanners recommended on this site are limited, meaning they may not be fully functional. A free trial scan allows you to see whether that security client can detect the virus you are infected with.
# [BackupPC-users] excludes for smb HOWTO I had a lot of trouble when trying to exclude certain files and directories for a backup using smb as the transfer method. I was unable to find solutions in this mailing list or the samba mailing list, so I did a bunch of testing with smbclient. I found that for smbclient to properly match, it was necessary to use '\' for directory separators (not '/'); in addition, the last separator must be a double backslash ('\\'), and in addition to that, each backslash must be escaped with, of course, a backslash (a quadruple backslash in places). An example: $Conf{BackupFilesExclude} = { '*' => [ '\\\\Application Data', '\\Documents\\\\My Music', '\\\\ntuser.dat.LOG1', '*.lock', '*\\Thumbs.db', '*\\.*' ] }; In the above example, the "Application Data" directory is excluded. To exclude "\Documents\My Music", the quadruple backslash is placed before "My Music" and not "Documents". The next line excludes the file "ntuser.dat.LOG1" in the root directory. The next line excludes any ".lock" file. Note that the asterisk at the beginning not only matches the first part of the filename but also the directory tree. When matching files it is necessary to match the directories as well, which leads to the next line, which excludes the "Thumbs.db" file found in any directory. The last line excludes any file or directory which begins with '.'. Note that if you use the BackupPC GUI, the backslashes do not need to be escaped. You can use single and double backslashes instead of double and quadruple. This is how it worked for me using BackupPC 3.1.0 with samba 3.2.5 on Debian Lenny backing up a Windows Vista computer. Since I can't imagine that this is how it's supposed to work, you may or may not get the same results with different versions of samba or Windows or whatever. But hopefully others are able to make use of my own trials and heartache. 
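The double layer of escaping can be sanity-checked outside BackupPC. This sketch uses Python string literals, which collapse backslash pairs the same way Perl's single-quoted config strings do for these particular examples (`chr(92)` is a single literal backslash):

```python
# What you type in the config file vs. what smbclient actually receives.
BS = chr(92)  # one literal backslash

typed = '\\\\Application Data'       # quadruple backslash in the config source
assert typed == BS * 2 + 'Application Data'   # smbclient sees two backslashes

typed = '\\Documents\\\\My Music'    # double, then quadruple
assert typed == BS + 'Documents' + BS * 2 + 'My Music'

print('escaping checks passed')
```

So the quadruple backslash in the config is what delivers the required double backslash before the final path component.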
-- Chris Purves "I can calculate the motion of heavenly bodies, but not the madness of people." - Sir Isaac Newton ------------------------------------------------------------------------------ Start uncovering the many advantages of virtual appliances and start using them to simplify application deployment and accelerate your shift to cloud computing. http://p.sf.net/sfu/novell-sfdev2dev _______________________________________________ BackupPC-users mailing list [email protected] List: https://lists.sourceforge.net/lists/listinfo/backuppc-users Wiki: http://backuppc.wiki.sourceforge.net Project: http://backuppc.sourceforge.net/
# Angle Bisectors of Angles Formed by Intersecting Lines Go back to  'Straight Lines' $$\textbf{Art 9 :} \qquad\boxed{\text{Angle Bisectors}}$$ Consider two straight lines L1 and L2 with the equations \begin{align}{L_1}\,\,\,\,:\,\,\,{a_1}x + {b_1}y + {c_1} = 0\\{L_2}\,\,\,\,:\,\,\,{a_2}x + {b_2}y + {c_2} = 0\end{align} We intend to find the angle bisectors formed at the intersection point P of L1 and L2. Note that there will be two such angle bisectors. To write down the equations of the two angle bisectors, we first modify the equations of L1 and L2 so that c1 and c2 are, say, both negative in sign. This can always be done. Why this is done will soon become clear. We first write down the equation of A1, the angle bisector of the angle in which the origin lies. By virtue of being an angle bisector, if any point $$P(x',y')$$ lies on A1, the distances of P from L1 and L2 must be equal. Using the perpendicular distance formula of Art -7, we have \begin{align}&\Rightarrow\qquad \left| {\frac{{{a_1}x' + {b_1}y' + {c_1}}}{{\sqrt {a_1^{\,\,2} + b_1^{\,\,2}} }}} \right| = \left| {\frac{{{a_2}x' + {b_2}y' + {c_2}}}{{\sqrt {a_2^{\,\,2} + b_2^{\,\,2}} }}} \right|\\&\Rightarrow\qquad\frac{{{a_1}x' + {b_1}y' + {c_1}}}{{\sqrt {a_1^{\,\,2} + b_1^{\,\,2}} }} = \pm \frac{{{a_2}x' + {b_2}y' + {c_2}}}{{\sqrt {a_2^{\,\,2} + b_2^{\,\,2}} }} \qquad \qquad \qquad \qquad \dots \rm{(1)}\end{align} Which sign should we select, “+” or “–”, for the bisector of the angle containing the origin? Since P and the origin lie on the same side of $${L_1}$$, $${a_1}x' + {b_1}y' + {c_1}$$ and $${c_1}$$ must be of the same sign by Art - 6. Similarly, $${a_2}x' + {b_2}y' + {c_2}$$ and $${c_2}$$ must be of the same sign. But since we have already arranged c1 and c2 to be of the same sign (both negative), we must have $$\left( {{a_1}x' + {b_1}y' + {c_1}} \right)$$ and $$\left( {{a_2}x' + {b_2}y' + {c_2}} \right)$$ also of the same sign. 
Thus, it follows from (1) that to write the equation of the angle bisector of the angle containing the origin, we must select the “+” sign, since $$\left( {{a_1}x' + {b_1}y' + {c_1}} \right)$$ and $$\left( {{a_2}x' + {b_2}y' + {c_2}} \right)$$ are of the same sign. The “–” sign gives the angle bisector of the angle not containing the origin, i.e., the equation of A2. To summarize, we first arrange the equations of L1 and L2 so that c1 and c2 are both of the same sign. Subsequently, using the property of any angle bisector, we obtain \begin{align}&\boxed{{\frac{{{a_1}x + {b_1}y + {c_1}}}{{\sqrt {a_1^{\,\,2} + b_1^{\,\,2}} }} = + \frac{{{a_2}x + {b_2}y + {c_2}}}{{\sqrt {a_2^{\,\,2} + b_2^{\,\,2}} }}}}\qquad \qquad: \qquad \qquad \begin{array}{l}{\textbf{Angle bisector of angle}}\\{\textbf{containing the origin}}\end{array}\\\\& \qquad \qquad \text {and}\\\\ &\boxed{{\frac{{{a_1}x + {b_1}y + {c_1}}}{{\sqrt {a_1^{\,\,2} + b_1^{\,\,2}} }} = - \frac{{{a_2}x + {b_2}y + {c_2}}}{{\sqrt {a_2^{\,\,2} + b_2^{\,\,2}} }}}}\qquad \qquad: \qquad \qquad \begin{array}{l}{\textbf{Angle bisector of angle not}}\\{\textbf{containing the origin}}\end{array}\end{align}
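The recipe above translates directly into code. A small sketch (the function name and sign normalization are my own): it flips signs so that $c_1$ and $c_2$ are non-positive, after which the “+” choice gives the bisector of the angle containing the origin.

```python
import math

def angle_bisectors(a1, b1, c1, a2, b2, c2):
    """Coefficients (a, b, c) of the two angle bisectors of
    a1*x + b1*y + c1 = 0 and a2*x + b2*y + c2 = 0.
    Returns (origin_side, other_side)."""
    # Arrange both constants to have the same (non-positive) sign.
    if c1 > 0:
        a1, b1, c1 = -a1, -b1, -c1
    if c2 > 0:
        a2, b2, c2 = -a2, -b2, -c2
    n1 = math.hypot(a1, b1)   # sqrt(a1^2 + b1^2)
    n2 = math.hypot(a2, b2)
    # "+" sign: (L1)/n1 = +(L2)/n2  ->  move everything to one side.
    plus  = (a1/n1 - a2/n2, b1/n1 - b2/n2, c1/n1 - c2/n2)
    # "-" sign: (L1)/n1 = -(L2)/n2
    minus = (a1/n1 + a2/n2, b1/n1 + b2/n2, c1/n1 + c2/n2)
    return plus, minus
```

For the lines $3x + 4y - 6 = 0$ and $4x + 3y - 6 = 0$ the “+” bisector reduces to $y = x$, and any point on it, e.g. $(2, 2)$, is equidistant from both lines, as the defining property requires.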
### Low frequency radio observations of the ‘quiet’ corona during the descending phase of sunspot cycle 24 by R. Ramesh et al.* 2020-10-13 The shape, size and electron density (Ne) of the corona varies with the sunspot cycle, which is now very well established by white-light observations. In our study, we investigated the ‘quiet’ solar corona (i.e., the corona distinct from emission due to transient and long-lasting discrete sources) at radio frequencies during the descending phase of sunspot cycle 24. We used the radio images obtained with the Gauribidanur RAdioheliograPH (GRAPH; Ramesh1998) at […] ### Microwave Spectral Imaging of an Erupting Magnetic Flux Rope During a Large Solar Flare by B. Chen et al.* 2020-09-22 Magnetic flux ropes are believed to be the centerpiece of the three-part structure of coronal mass ejections. In the standard model of eruptive solar flares, flux rope eruption also induces the impulsive flare energy release through magnetic reconnection. Signatures of flare-associated flux ropes in the low solar corona have been frequently reported in extreme ultraviolet (EUV) wavelengths, particularly the so-called EUV “hot channel” structures (see, e.g., Cheng et al. 2017 […] ### Polarisation and source structure of solar stationary type IV radio bursts by C. Salas-Matamoros and L. Klein 2020-09-08 A coronal mass ejection (CME) is a phenomenon which produces large-scale ejections of mass and magnetic field from the lower corona into the interplanetary space (e.g. Forbes, 2000). The ejection of CMEs is often accompanied by the radio emission from non-thermal electrons. Particularly, stationary type IV continua are thought to be emitted by electrons in closed magnetic structures in the wake of the rising CME. 
Although basic properties, such as […] ### Radio echo in the turbulent corona and simulations of solar drift-pair radio bursts observed with LOFAR by Kuznetsov et al 2020-08-18 Drift-pair bursts are a rare and mysterious type of fine spectral structures in the low-frequency domain of solar radio emission. First identified by Roberts (1958), they appear in the dynamic spectrum as two parallel frequency-drifting bright stripes separated in time; the trailing stripe seems to repeat the morphology of the leading one with a typical delay of ~1–2 s (see, e.g., Figure 1). Recent imaging spectroscopy observations with the LOw-Frequency […] ### Observations of fragmented energy release during solar flare emission by R. Ramesh et al.* 2020-07-28 Type III radio bursts from the Sun are signatures of energetic (∼1–100 keV) electrons, accelerated at the reconnection sites, propagating upward through the corona into the interplanetary medium along open magnetic field lines. The emission mechanism of the bursts is widely believed to be due to coherent plasma processes. The bursts are observed typically in the frequency range $\approx 1\,$GHz$– 10\,$kHz, which corresponds to radial distance range between the […] ### Density and magnetic field turbulence in solar flares estimated from radio zebra observations by M. Karlicky and L. Yasnov 2020-07-14 Solar flares are characterized by fast plasma flows. In such plasma flows, magnetohydrodynamic turbulence can be generated owing to the Kelvin-Helmholtz instability. Although turbulence plays an important role in solar flares, e.g., in the acceleration of particles, knowledge about the level of this turbulence is still very limited. In the present study we estimate the levels of this turbulence. 
For this purpose we use observations of the so called zebra […] ### First radio evidence of impulsive heating contribution to the quiet solar corona by Surajit Mondal et al 2020-06-30 The solar community has been trying to understand the mechanism responsible for coronal heating for several decades now. In the past decade, a number of studies have shown that the active regions and the coronal loops are heated impulsively. However, such a consensus is yet to be reached for the quiet sun. Past searches for impulsive events in the EUV and X-ray are yet to provide conclusive evidence in favour […] ### Fast CME caused by the eruption of a quiescent prominence by V. Grechnev and I. Kuzmenko 2020-06-09 CMEs can be the sources of space-weather disturbances. Shock waves expanding ahead of fast CMEs and associated flares are the probable sources of energetic protons. Two categories of CMEs have traditionally been identified depending on the presence of conspicuous chromospheric emissions during their development. Flare-associated CMEs erupt from active regions, carry stronger magnetic fields, can reach higher speeds and cause stronger space-weather disturbances. Non-flare-associated CMEs develop in eruptions of large […]
# Approximation (Redirected from Approximate) Archimedes used polygons to approximate a circle, and to calculate an approximation for $\pi$.
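The classical construction can be reproduced numerically. This sketch (the naming is mine) starts from a regular hexagon inscribed in a unit circle, whose side length is exactly 1, and repeatedly doubles the number of sides using the half-angle identity $s_{2n} = \sqrt{2 - \sqrt{4 - s_n^2}}$; the half-perimeter then converges to $\pi$:

```python
import math

def archimedes_pi(doublings):
    """Half-perimeter of an inscribed regular polygon in a unit circle,
    starting from a hexagon (n = 6, side length 1) and doubling the
    side count the given number of times."""
    n, s = 6, 1.0
    for _ in range(doublings):
        s = math.sqrt(2 - math.sqrt(4 - s * s))  # side of the 2n-gon
        n *= 2
    return n * s / 2

print(archimedes_pi(0))   # 3.0 (the hexagon)
print(archimedes_pi(4))   # 96-gon, Archimedes' own polygon: about 3.14103
```

Archimedes stopped at the 96-gon; a few more doublings already give $\pi$ to several decimal places.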
These links were collected over March, so I will presumably be a month late with the various internet tomfoolery associated with this post’s publication date.

Berkson’s paradox is a counterintuitive result which most often manifests as falsely observing a negative correlation between independent variables when one mistakenly only sees cases when at least one of them occurs. Wikipedia’s example: ‘For example, a person may observe from their experience that fast food restaurants in their area which serve good hamburgers tend to serve bad fries and vice versa; but because they would likely not eat anywhere where both were bad, they fail to allow for the large number of restaurants in this category which would weaken or even flip the correlation.’ Probably a useful concept to have a handle for.

Colin Wright is mostly famous for inventing the juggling notation known as ‘siteswap’, but was in fact a maths PhD supervised by Béla Bollobás at Cambridge. He has a blog full of interesting puzzles and others, including a series of twists on the classical ‘rope around the Earth’ problem. This one is my favourite.

During the Second World War, the US was developing a revolutionary new method of incendiary bomb delivery: hibernating bats. It was only cancelled because the Manhattan project was progressing more quickly. I can only wonder at the world in which incendiary bat bombs were the threat on which the Cold War was built instead of nuclear weapons.

Why can’t font size be set on a:visited? Surprising things that have to be done in the name of privacy.

OpenType font shaping is Turing complete, although this requires a custom HarfBuzz build to increase the recursion limit past 6. Also Turing complete is the x86 instruction set, but using only mov. I am once again reminded that Turing-completeness is not a high bar.

Long post on Status as a Service, and how social capital is the real motive behind people’s behaviour on social networks. 
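The fast-food version of Berkson's paradox lends itself to a quick simulation (all numbers here are hypothetical: burger and fry quality are independent 50/50 coin flips, and "visited" restaurants are those where at least one is good):

```python
import random

random.seed(0)
# (good_burgers, good_fries): two independent Bernoulli(1/2) qualities.
restaurants = [(random.random() < 0.5, random.random() < 0.5)
               for _ in range(100_000)]

def correlation(pairs):
    """Pearson correlation of two 0/1 sequences given as pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

# Conditioning step: nobody eats where both burgers and fries are bad.
visited = [p for p in restaurants if p[0] or p[1]]

print(correlation(restaurants))  # close to 0: the qualities are independent
print(correlation(visited))      # clearly negative, close to -0.5
```

The analytic value of the conditional correlation here is exactly $-1/2$: conditioning on "at least one good" leaves the three cells (1,1), (1,0), (0,1) equally likely.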
NYT article on a carbon dioxide scrubbing startup. Encouraging ideas, but seems energetically implausible. John Baez writes about similar ideas. ‘Why are you reading about airships?’ ‘Because airships are cool.’ Back-of-the-envelope calculations (admittedly by an airship blog) suggest that giant cargo airships could be a trillion dollar industry (the calculations are rough enough that what we mean by that doesn’t matter: could be market cap, revenue, value of assets…). They offer a previously nonexistent low-cost, medium speed transport method that is agnostic of land or sea. From the point of view of carbon, they are encouraging in that they only require energy to go forwards, although drag is a big concern and hydrogen is fairly expensive to produce.
# In traveling from city A to city B, John drove for 1 hour at 50 mph an

Author Message

VP
Status: Preparing for the GMAT
Joined: 02 Nov 2016
Location: Pakistan
GPA: 3.4

In traveling from city A to city B, John drove for 1 hour at 50 mph an [#permalink]

### Show Tags

05 Sep 2017, 10:56

In traveling from city A to city B, John drove for 1 hour at 50 mph and for 3 hours at 60 mph. What was his average speed for the whole trip?

(A) 50
(B) 53.5
(C) 55
(D) 56
(E) 57.5

[Reveal] Spoiler: OA
_________________
Official PS Practice Questions

BSchool Forum Moderator
Joined: 26 Feb 2016
Location: India
WE: Sales (Retail)

Re: In traveling from city A to city B, John drove for 1 hour at 50 mph an [#permalink]

### Show Tags

05 Sep 2017, 11:01

In traveling from city A to city B, John drove for 1 hour at 50 mph and for 3 hours at 60 mph. What was his average speed for the whole trip? 
(A) 50
(B) 53.5
(C) 55
(D) 56
(E) 57.5

The total distance traveled from City A to City B is 50*1 + 60*3 = 230 miles.

Since John drove for 4 hours to travel this distance, the average speed for the whole trip is $$\frac{Distance}{Time} = \frac{230}{4}$$ = 57.5 mph (Option E)
_________________
Stay hungry, Stay foolish

Re: In traveling from city A to city B, John drove for 1 hour at 50 mph an   [#permalink] 05 Sep 2017, 11:01
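The calculation generalizes to any number of legs: average speed is total distance over total time, not the unweighted mean of the speeds (the trap answer (50 + 60) / 2 = 55, choice (C)). A quick check:

```python
# Average speed = total distance / total time.
legs = [(1, 50), (3, 60)]                  # (hours, mph) for each leg
distance = sum(t * v for t, v in legs)     # 1*50 + 3*60 = 230 miles
time = sum(t for t, _ in legs)             # 4 hours

print(distance / time)                     # 57.5
```

Because John spends three times as long at 60 mph, the average is pulled toward 60, landing above the naive 55.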
# Integral of the Square of a Probability Density Function 1. Jan 16, 2010 ### BoMa Hi, I'm looking for the value of the integral of the square of a probability density function on a bounded interval. Thanks 2. Jan 16, 2010 ### mathman It could be anything, depending on the density function itself. 3. Jan 17, 2010 ### BoMa Is there a way to characterise this "anything" you're talking about? $$\int \int f^{2}(x,y) dx\,dy$$ Some norm on the probability space? 4. Jan 17, 2010 ### winterfors You wouldn't use it as a norm (the L^2 norm); having a probability space implies that you are using the L^1 norm (and that this norm equals one). The integral of the squared PDF can, however, be interpreted as a measure of the average probability density of the PDF, i.e. how concentrated the probability is. Furthermore, you know that it will be larger than or equal to the average probability density of a constant PDF: $$\int \int f^{2}(x,y) dx\,dy \geq 1 / \int \int dx\,dy$$ 5. Jan 17, 2010 Just a question: you seem to be automatically assuming that the density is a joint density of two variables, hence you write $$\iint f^2(x,y) \, dxdy$$ If, however, you consider a univariate density, the proper expression for the integral of the density squared is $$\int f^2(x) \, dx$$ 6. Jan 18, 2010 ### BoMa Yes, I'm talking about a bivariate bounded probability density function (pdf) $$f(x,y).$$ Sorry, I can't understand the difference between the L1 and L2 norms on the probability space. About not using the L2 norm: I thought that the pdf could be written as $$\int^{b}_{a} \int^{b}_{a} f^{2}(x,y)dxdy =\int^{b}_{a} \int^{b}_{a} |f^{2}(x,y)|dxdy=||f||^2$$ which looks like the L2 norm more than the L1 norm. Could you please explain in more detail why this case can't be seen as the L2 norm? 7. Jan 18, 2010 ### frustr8photon what are L1 and L2? 8. Jan 18, 2010 ### mathman To get the L2 norm, take a square root. I really don't understand what you are trying to achieve. To give a simple example. 
Assume that f is constant over a square with area 1/A; then f = A over this area and the integral of f² is A. Since A is completely arbitrary, the integral can have any value. 9. Jan 18, 2010 ### BoMa I understood that the value will be a constant depending on the choice of f, which is arbitrarily chosen here. So I wanted to say that it is some L2 norm. But someone on the list said that it should be the L1 norm (not the L2 norm!), because the problem here is on a probability space. So I wonder why he said L1 norm instead of L2 norm? 10. Jan 19, 2010 ### mathman Sorry, I am not a mind reader. Just make sure you take the square root to get the norm. In the simple example the L2 norm is √A.
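Both points in the thread can be checked numerically. The sketch below uses a concrete density of my own choosing (not from the thread): $f(x,y) = 4xy$ on the unit square, which integrates to 1, while $\iint f^2 = 16/9$, comfortably above the lower bound $1/\text{area} = 1$:

```python
# Midpoint-rule check of the bound  ∫∫ f² dx dy  >=  1 / ∫∫ dx dy
# for the density f(x, y) = 4xy on [0,1] x [0,1].
n = 400
h = 1.0 / n
f = lambda x, y: 4 * x * y

cells = [((i + 0.5) * h, (j + 0.5) * h) for i in range(n) for j in range(n)]
mass = sum(f(x, y) for x, y in cells) * h * h           # total probability
avg_density = sum(f(x, y) ** 2 for x, y in cells) * h * h

print(round(mass, 6))         # 1.0, so f really is a density
print(round(avg_density, 3))  # 1.778 = 16/9, and indeed >= 1
```

Replacing f with a tall, narrow density (mathman's constant-on-a-small-square example) makes `avg_density` as large as you like, while `mass` stays at 1.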
The sign of the leading coefficient of the function … Powerful women's group has claimed that men and women differ in attitudes about sexual discrimination. Express the rule in equivalent factored form and c. Use ⇒ Last option is correct. They're customizable and designed to help you study and learn more effectively. Order Your Homework Today! Add your answer and earn points. Descartes' Rule of Signs has to do with the number of real roots possible for a given polynomial function f (x). Find the degree, leading term, leading coe cient and constant term of the fol-lowing polynomial functions. To recall, a polynomial is defined as an expression of more than two algebraic terms, especially the sum (or difference) of several terms that contain different powers of the same or different variable(s). Instead, they can (and usually do) turn around and head back the other way, possibly multiple times. Defines polynomials by showing the elements that make up a polynomial and rules regarding what's NOT considered a polynomial. A polynomial of degree n can have as many as n– 1 extreme values. degrees of 4 or greater even degrees of... And millions of other answers 4U without ads, Add a question text of at least 10 characters. Learn about different types, how to find the degree, and take a quiz to test your Cubic Polynomial Function: ax3+bx2+cx+d 5. This graph cannot possibly be of a degree-six polynomial. Same length is comparing because it’s saying its the same and not different. f(x) 2- Get more help from Chegg. End BehaviorMultiplicities"Flexing""Bumps"Graphing. Add your answer and earn points. Also, I'll want to check the zeroes (and their multiplicities) to see if they give me any additional information. Describe the end behavior and determine a possible degree of the polynomial function in the graph below. a. Expert Answer 100% (1 rating) Previous question Next question Transcribed Image Text from this Question. 
If a polynomial is of n degrees, its derivative has n – 1 degrees. Learn vocabulary, terms, and more with flashcards, games, and other study tools. Many transcendental functions (e.g. ezelle 2. Polynomial functions of degree 2 or more are smooth, continuous functions. у A х The least possible degree is Number Use the graph below to write the formula for a polynomial function of least degree. Variables are also sometimes called indeterminates. By using this website, you agree to our Cookie Policy. What are the possible degrees for the polynomial function? But extra pairs of factors (from the Quadratic Formula) don't show up in the graph as anything much more visible than just a little extra flexing or flattening in the graph. This comes in handy when finding extreme values. To graph polynomial functions, find the zeros and their multiplicities, determine the end behavior, and ensure that the final graph has at most $$n−1$$ turning Web Design by. (If you enter p(x)=a+bx+cx^2+dx^3+fx^4+gx^5 in Desmos 2 , you'll get prompted to add sliders that make it easy to explore a degree $$5$$ polynomial.) The graph below is a polynomial function c(x). This follows directly from the fact that at an extremum, the derivative of the function is zero. This might be the graph of a sixth-degree polynomial. y = -2x7 + 5x6 - 24. Suppose that 3% of all athletes are using the endurance-enhancing hormone epo (you should be able to simply compute the percentage of all athletes that are not using epo). degrees of 4 or greater even degrees of 4 or greater degrees of 5 or greater odd degrees of 5 or greater Answers: 2 "it's actually a chemistry question"... Where was George Washington born? Polynomial degree greater than Degree 7 have not been properly named due to the rarity of their use, but Degree 8 can be stated as octic, Degree 9 as nonic, and Degree 10 as decic. According to the Fundamental Theorem, every polynomial function has at least one complex zero. 
In mathematics, the degree of a polynomial is the highest of the degrees of its monomials (individual terms) with non-zero coefficients. In this case, the degree is 6, so the highest number of bumps the graph could have would be 6 − 1 = 5. The zeros of a polynomial equation are the solutions of the function f(x) = 0. The degree is odd, so the graph has ends that go in opposite directions. A standard exercise: determine a polynomial function from some information about it, for instance, find the polynomial function P of the lowest possible degree, having real coefficients, with the given zeros. This video explains how to determine an equation of a polynomial function from the graph of the function. The third-degree polynomials are those composed of terms where the major exponent of the variable is … In mathematics, a polynomial is an expression consisting of variables (also called indeterminates) and coefficients, that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponentiation of variables. What is the "best" polynomial approximation of $f$ of degree zero?
Homework statement: determine the least possible degree of the function corresponding to the graph shown below. Question: the finite difference of a polynomial function, whose leading coefficient is a whole number, is 144. If f(x) is a third-degree polynomial then, by a corollary to the Fundamental Theorem of Algebra, it must have 3 roots. If some row of finite differences is all zeros, then the next row up is fit by a constant polynomial, the one after by a linear polynomial, and so on. Another way to find the x-intercepts of a polynomial function is to graph the function and identify the points at which the graph crosses the x-axis. First-degree polynomial function: f(x) = ax + b. So I've determined that Graphs B, D, F, and G can't possibly be graphs of degree-six polynomials. Graph B: this has seven bumps, so it comes from a polynomial of degree at least 8, which is too high. The "nth" refers to the degree of the polynomial you're using to approximate the function.
What are the possible degrees for the polynomial function? The answer choices: degrees of 4 or greater; even degrees of 4 or greater; degrees of 5 or greater; odd degrees of 5 or greater. The function has five x-intercepts, therefore it has at least five solutions, so the degree of the function is 5 or more. Since the ends of the graph head off in opposite directions, the degree must also be odd. Hence, the function has odd degrees of 5 or greater. The sum of the multiplicities is the degree of the polynomial function. Since complex roots occur in conjugate pairs, there must be an even number of complex roots. The corollary to the Fundamental Theorem states that every polynomial of degree n > 0 has exactly n zeroes. How to: given a graph of a polynomial function of degree n, identify the zeros and their multiplicities. With the two other zeroes looking like multiplicity-1 zeroes, this is very likely a graph of a sixth-degree polynomial. This change of direction often happens because of the polynomial's zeroes or factors. Graph H: from the ends, I can see that this is an even-degree graph, and there aren't too many bumps, seeing as there's only the one. Write the polynomial equation given information about a graph.
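The reasoning above (five x-intercepts force a degree of at least 5; ends heading in opposite directions force an odd degree) can be captured in a few lines. The helper `minimum_degree` is invented for this sketch, not part of the original exercise:

```python
def minimum_degree(x_intercepts, ends_opposite):
    """Lower bound on a polynomial's degree from its graph:
    at least one zero per x-intercept, and ends going in
    opposite directions require an odd degree."""
    degree = x_intercepts
    if ends_opposite and degree % 2 == 0:
        degree += 1  # bump an even count up to the next odd degree
    return degree

# Five x-intercepts with ends in opposite directions:
# the possible degrees are the odd numbers 5, 7, 9, ...
print(minimum_degree(5, ends_opposite=True))  # 5
print(minimum_degree(4, ends_opposite=True))  # 5
```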
Zero polynomial function: P(x) = a = ax⁰. This polynomial function is of degree 4. Each time the graph goes down and hooks back up, or goes up and then hooks back down, this is a "turning" of the graph. Adding these up, the number of zeroes is at least 2 + 1 + 3 + 2 = 8, which is way too many for a degree-six polynomial. First-degree polynomials have terms with a maximum degree of 1; in other words, you wouldn't usually find any higher exponents in the terms of a first-degree polynomial. The x-intercepts can also be called the roots of the polynomial equation. The one bump is fairly flat, so this is more than just a quadratic. The lowest possible degree will be the same as the number of roots. What is the end behavior of a function with odd degree and positive leading coefficient? But this exercise is asking me for the minimum possible degree. Looking at the zeroes: the left-most zero is of even multiplicity; the next zero passes right through the horizontal axis, so it's probably of multiplicity 1; the next zero (to the right of the vertical axis) flexes as it passes through the horizontal axis, so it's of multiplicity 3 or more; and the zero at the far right is another even-multiplicity zero (of multiplicity two or four or …). Because a polynomial function written in factored form will have an x-intercept where each factor is equal to zero, we can form a function that will pass through a set of x-intercepts by introducing a … For instance, the following graph has three bumps, as indicated by the arrows. Below are graphs, grouped according to degree, showing the different sorts of "bump" collection each degree value, from two to six, can have.
The degree of a term is the sum of the exponents of the variables that appear in it, and thus is a non-negative integer. Example zeros for the lowest-degree-with-real-coefficients exercise: 3 + 2i, −2 and 1. That is, the degree of the polynomial gives you the upper limit (the ceiling) on the number of bumps possible for the graph (this upper limit being one less than the degree of the polynomial), and the number of bumps gives you the lower limit (the floor) on the degree of the polynomial (this lower limit being one more than the number of bumps). As the input values x get very large, the output values f(x) increase without bound. Naming polynomial degrees will help students and teachers alike determine the number of solutions to the equation as well as recognize how these operate on a graph. Graph D: this has six bumps, which is too many; this is from a polynomial of at least degree seven. This video explains how to determine the least possible degree of a polynomial based upon the graph of the function, by analyzing the intercepts and turns of the graph. Allowing for multiplicities, a polynomial function will have the same number of factors as its degree; each factor will be in the form (x − c), where c is a complex number. Graph each polynomial function and explain how you know. Graph C: this has three bumps (so not too many), it's an even-degree polynomial (being "up" on both ends), and the zero in the middle is an even-multiplicity zero. Label all roots with their degrees and mark all intercepts.
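For the "lowest possible degree with real coefficients" exercise with zeros 3 + 2i, −2 and 1: since complex roots come in conjugate pairs, 3 − 2i must also be a zero, so the lowest possible degree is 4. A sketch that expands the factors (the helper name `poly_from_zeros` is made up for this example):

```python
def poly_from_zeros(zeros):
    """Expand prod(x - r) into coefficients [a_n, ..., a_0], highest power first."""
    coeffs = [1 + 0j]
    for r in zeros:
        times_x = coeffs + [0j]                   # x * p(x)
        times_r = [0j] + [r * c for c in coeffs]  # r * p(x), shifted to align powers
        coeffs = [a - b for a, b in zip(times_x, times_r)]
    return coeffs

zeros = [3 + 2j, 3 - 2j, -2, 1]  # include the conjugate of 3 + 2i
coeffs = poly_from_zeros(zeros)
# The imaginary parts cancel, leaving the real degree-4 polynomial
# x^4 - 5x^3 + 5x^2 + 25x - 26.
print([round(c.real, 6) for c in coeffs])  # [1.0, -5.0, 5.0, 25.0, -26.0]
```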
As usual, correctly scale and label the graph and all axes. We can use this information to make some intelligent guesses about polynomials from their graphs, and about graphs from their polynomials. The number of turnings provides the smallest possible degree, but the degree could be larger. In particular, note the maximum number of "bumps" for each graph, as compared to the degree of the polynomial: you can see from these graphs that, for degree n, the graph will have, at most, n − 1 bumps. What can the possible degrees and leading coefficients of this function be? So going from your polynomial to your graph, you subtract, and going from your graph to your polynomial, you add. Just use the 'formula' for finding the degree of a polynomial. As such, it cannot possibly be the graph of an even-degree polynomial, of degree six or any other even number. So my answer is: the minimum possible degree is 5. Compare the numbers of bumps in the graphs below to the degrees of their polynomials; justify your answer with appropriate calculations and a brief explanation. Note that a polynomial of degree n doesn't necessarily have n − 1 extreme values; that's just the upper limit. The actual number of extreme values will always be n − a, where a is an odd number. So two distinct complex roots are possible in a third-degree polynomial. Polynomial regression can reduce the cost returned by the cost function.
By experimenting with coefficients in Desmos, find a formula for a polynomial function that has the stated properties, or explain why no such polynomial exists. This question asks me to say which of the graphs could represent the graph of a polynomial function of degree six. To help you keep straight when to add and when to subtract, remember your graphs of quadratics and cubics. For a univariate polynomial, the degree is simply the highest exponent occurring in the polynomial. For example, the following are first-degree polynomials: 2x + 1, xyz + 50, 10a + 4b + 20. If you know your quadratics and cubics very well, and if you remember that you're dealing with families of polynomials and their family characteristics, you shouldn't have any trouble with this sort of exercise. An example of a polynomial of a single indeterminate x is x² − 4x + 7. Since the ends head off in opposite directions, this is another odd-degree graph. Another exercise: construct a polynomial function of least degree possible using the given information: real roots −1, 1, 3 and (2, f(2)) = (2, 5). A higher-order polynomial offers a function with more complexity than a first-order one. This demonstrates the relationship between the turnings, or "bumps", on a graph and the degree of the associated polynomial. An example: y = x²(x − 2)(x + 3)(x + 5); here is a graph of a 7th-degree polynomial with a similar shape. Use the information from the graph to write a possible rule for c(x).
The bumps represent the spots where the graph turns back on itself and heads back the way it came. On top of that, this is an odd-degree graph, since the ends head off in opposite directions. For instance: given a polynomial's graph, I can count the bumps. To answer this question, I have to remember that the polynomial's degree gives me the ceiling on the number of bumps. Polynomials can be classified by degree. This is probably just a quadratic, but it might possibly be a sixth-degree polynomial (with four of the zeroes being complex). In polynomial regression, the fitted equation has k·d + 1 degrees of freedom, where k is the order of the polynomial. The bumps were right, but the zeroes were wrong. To find the zeros of a polynomial function, if it can be factored, factor the function and set each factor equal to zero. I'll consider each graph, in turn.
The term "order" has been used as a synonym of "degree" but, nowadays, may refer to several other concepts. Because pairs of factors have this habit of disappearing from the graph (or hiding in the picture as a little bit of extra flexing or flattening), the graph may have two fewer, or four fewer, or six fewer, etc., bumps than you might otherwise expect, or it may have flex points instead of some of the bumps. One good thing that comes from Definition 3.2 is that we can now think of linear functions as degree-1 (or "first degree") polynomial functions and quadratic functions as degree-2 (or "second degree") polynomial functions. That is, look for the value of the largest exponent; here the answer is 2, since the first term is squared. To keep track of the relationship, think about the simplest curvy line you've graphed: the parabola. It has degree two, and has one bump, being its vertex. Then, identify the degree of the polynomial function. Every polynomial function with degree greater than 0 has at least one complex zero, and a polynomial function of degree n has at most n − 1 turning points. Graph E: from the end-behavior, I can tell that this graph is from an even-degree polynomial. Write the equation of a polynomial function given its graph. First, identify the leading term of the polynomial function if the function were expanded. Explain how each of the added terms above would change the graph. Looking at the two zeroes, they both look like at least multiplicity-3 zeroes.
The degree of a polynomial is the highest power of the variable in a polynomial expression; it indicates the number of roots (real and complex) that a polynomial function has. The maximum number of turning points here is 4 − 1 = 3. "Bumps" isn't standard terminology, and you'll learn the proper terms (such as "local maximum" and "global extrema") when you get to calculus, but, for now, we'll talk about graphs, their degrees, and their "bumps". Find the y- and x-intercepts of … URL: https://www.purplemath.com/modules/polyends4.htm, © 2020 Purplemath. All rights reserved.
reshape_loadings(x, ...)

# S3 method for parameters_efa
# S3 method for data.frame
reshape_loadings(x, threshold = NULL, loadings_columns = NULL, ...)

## Arguments

x: A data frame or a statistical model.
...: Arguments passed to or from other methods.
threshold: A value between 0 and 1 indicates which (absolute) values from the loadings should be removed. An integer higher than 1 indicates the n strongest loadings to retain. Can also be "max", in which case it will only display the maximum loading per variable (the most simple structure).
loadings_columns: Vector indicating the columns corresponding to loadings.

## Examples

library(parameters)
library(psych)

pca <- model_parameters(psych::fa(attitude, nfactors = 3))
#> ----------------------------------------------------------
#> MR1 | rating | 0.90 | 1.02 | 0.23
#> MR1 | complaints | 0.97 | 1.01 | 0.10
#> MR1 | privileges | 0.44 | 1.64 | 0.65
#> MR1 | learning | 0.47 | 2.51 | 0.24
#> MR1 | raises | 0.55 | 2.35 | 0.23
#> MR1 | critical | 0.16 | 1.46 | 0.67
#> MR1 | advance | -0.11 | 1.04 | 0.22
#> MR2 | rating | -0.07 | 1.02 | 0.23
#> MR2 | complaints | -0.06 | 1.01 | 0.10
#> MR2 | privileges | 0.25 | 1.64 | 0.65
#> MR2 | learning | 0.54 | 2.51 | 0.24
#> MR2 | raises | 0.43 | 2.35 | 0.23
#> MR2 | critical | 0.17 | 1.46 | 0.67
#> MR2 | advance | 0.91 | 1.04 | 0.22
#> MR3 | rating | -0.05 | 1.02 | 0.23
#> MR3 | complaints | 0.04 | 1.01 | 0.10
#> MR3 | privileges | -0.05 | 1.64 | 0.65
#> MR3 | learning | -0.28 | 2.51 | 0.24
#> MR3 | raises | 0.25 | 2.35 | 0.23
#> MR3 | critical | 0.48 | 1.46 | 0.67
#> MR3 | advance | 0.07 | 1.04 | 0.22
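A short usage sketch based on the arguments documented above (it assumes the parameters and psych packages are installed; output omitted here):

```r
library(parameters)
library(psych)

efa <- model_parameters(psych::fa(attitude, nfactors = 3))

# Drop loadings with absolute value below 0.4
reshape_loadings(efa, threshold = 0.4)

# Keep only the single strongest loading per variable (the "most simple structure")
reshape_loadings(efa, threshold = "max")
```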
# Entropy has units of energy-per-temperature; what is the implied relation between energy and temperature?

Entropy has units of energy-per-temperature, e.g. $X$ joules per Kelvin. Can this be reduced to a simple English sentence that describes the relationship between those $X$ joules and 1 Kelvin? For example, when we say that an object's velocity is 10 meters per second north, we mean,

At that instant, if you measure a small enough change in the object's position, and divide by the corresponding change in time, the ratio approaches 10 meters north per second.

In every other SI unit I've encountered that was a ratio of one thing to some other thing, it was always possible to write a sentence like that to connect them. So, if a system's entropy is 10 joules per Kelvin, how would you complete this sentence:

If you _______________ small amount of energy, and _______________ small change in temperature, the ratio approaches 10 joules per Kelvin.

After all, heat capacity is also measured in joules per Kelvin, and that's much easier to understand: "5 joules per Kelvin means if you apply 5 joules of heat energy to the system you raise the temperature by 1 Kelvin." But I can't figure out a similar understanding of joules per Kelvin for entropy.

• Have you checked the Clausius theorem/inequality construction? It says that for a reservoir of constant temperature $T$ acting on a system, whether the process under way is reversible or irreversible, you can arrange several reversible machines acting between two points in the process to "carry" changes $\Delta Q_{rev}$ in reversible heat (summing up in a quasi-static process). Entropy comes as the relation between them, reversible heat divided by absolute temperature in a given reservoir. And the construction shows that this quotient is a thermodynamic variable (coordinate).
– chandrasekhar17 Jul 4 '18 at 3:19
• Just to note it, we qualify temperature units like Celsius and Fahrenheit with "degrees" to point out that their zeros aren't at absolute zero. By contrast, temperature units like Kelvin and Rankine have their zeros at absolute zero, such that they're not "degree" units. – Nat Jul 8 '18 at 20:13

The derivative encountered in calculus is the limit of the ratio of two distinct changes which are interdependent. For example, for a vehicle, the distance travelled $x$ is a function of time $t$, and the derivative $dx/dt$ gives its velocity $v$. But suppose you write this relation as $dx/v=dt$. Now we have a strange quantity $dx/v$, equal to the change in time $dt$, which cannot be interpreted as "change of something when something else changes". But the strangeness is only apparent; to make it look natural, rewrite the relation as $dx=v~dt$ or $dx/dt=v$. Same goes for the change in entropy, $dS=dQ/T$, in which $Q$ is heat energy, $T$ is temperature, and the heat transfer is reversible. If it feels strange then rewrite it as $dQ=T~dS$ and interpret accordingly. You can't interpret entropy change as "change of something when something else changes" simply because of the way it is defined.

• OK but I still don't understand how to interpret $dS=dQ/T$ in terms of what happens. Are you saying that if I add a small quantity of heat energy $dQ$, then the entropy change in the system will be $dS$, where $dS=dQ/T$? – Bennett Jul 4 '18 at 7:46
• @Bennett Yes, that's right. It is like saying, if I move a small distance $dx$ at velocity $v$ then the elapsed time will be $dt=dx/v$. – Deep Jul 4 '18 at 7:55
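To make the accepted answer's $dS=dQ/T$ concrete with numbers (the values below are made up for illustration): for heat transferred reversibly at an essentially constant temperature, $\Delta S = Q/T$; for heating an object of constant heat capacity $C$ from $T_1$ to $T_2$, integrating $dQ/T$ gives $\Delta S = C\ln(T_2/T_1)$.

```python
import math

# Reversible heat transfer at (nearly) constant temperature: dS = dQ / T
Q = 3000.0   # J of heat absorbed reversibly
T = 300.0    # K, temperature at which the transfer happens
print(Q / T)  # 10.0 -> the system gains "10 joules per kelvin" of entropy

# Contrast with heat capacity: object with C = 10 J/K heated from 300 K to 330 K
C, T1, T2 = 10.0, 300.0, 330.0
dS = C * math.log(T2 / T1)  # integral of dQ/T = C dT / T
print(round(dS, 3))  # 0.953, in J/K
```

Note how the same unit, J/K, plays two different roles: for heat capacity it relates heat added to temperature rise, while for entropy it is heat divided by the temperature at which that heat was transferred.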
# The end of 2018 It is almost the end of 2018. It is a good time to review what I have achieved during the year and look forward to a brand new 2019. I wrote a similar post for 2017 here. ### Some highlights of the year 2018: • My son Noah Tang was born in April. He is so lovely and we love him so much. Can't believe he is almost 9 months old. • Our epigenomic project was selected by the Data Science Road-Trip program by USC. I spent 2 weeks in PNNL and worked closely with Lisa Bramer and developed a pipeline to do feature selection using machine learning from a lot of chromHMM data sets. You can find the github repo here. I also kept daily notes of what I did here. I will think about writing it up. • I finally migrated my previous blog to blogdown, which you are reading now :) Oh my, I love it. It makes blogging so much fun. • I taught the ChIP-seq lesson for the 2018 ANGUS Next-Gen Sequence Analysis Workshop held at UC Davis from 7/1/2018 to 7/14/2018, and TAed for the rest of the sessions. It was a great teaching experience for me. I got to know many people and built connections. Most importantly, I enjoyed the teaching very much! Thanks Titus Brown for the invitation. I highly recommend attending this workshop if you want to start learning sequencing data analysis. The environment is so welcoming and Titus is so hilarious :) I was in the workshop as a learner in 2014 and now I am back to teach! • I went back to the University of Florida, where I did my PhD, to give a talk. It was very nice to be back home and catch up with my supervisor, other professors and some church friends! • Several co-author papers (see the publications section) are out in 2018. My first video shot for JOVE can be found at https://www.jove.com/video/56972/an-integrated-platform-for-genome-wide-mapping-chromatin-states-using. I was nervous but it was fun. • In addition, two papers are out on Biorxiv at the end of 2018. I am co-first author on one of them.
Both papers describe how the epigenetic regulator KMT2D mediates tumor progression in melanoma and lung cancer, respectively. We are in the process of submitting those papers to journals, but you can read more details at https://www.biorxiv.org/content/early/2018/12/28/507202 and https://www.biorxiv.org/content/early/2018/12/27/507327. The work was done in Kunal Rai's lab, where I had a chance to play with a large amount of ChIP-seq data. My Snakemake pipeline is being used in the lab by others and has processed thousands of ChIP-seq data sets. • My three most-starred GitHub repos were cited in a paper, GitHub Statistics as a Measure of the Impact of Open-Source Bioinformatics Software, by Mikhail G. Dozmorov. The table summarizing the popular GitHub repos can be found here. It's over three years' cumulative work for those repos. I am so glad that my notes are helpful for other researchers. I am always supportive of open science and believe sharing knowledge is the way to promote scientific progress. • I joined Harvard FAS Informatics as a bioinformatics scientist in October and started working on single-cell RNAseq with the Dulac lab, and will have a chance to play with other single-molecule transcriptome data generated from Xiaowei Zhuang's lab. I am really excited about learning more! • I had my green card approved. This is important so I can work in the US without worrying about my visa status. Thanks to my previous postdoc adviser Roel Verhaak, Luca Pinello, Istvan Albert, Praveen Sethupathy and Liang Cheng for writing recommendation letters. I will need to visit Luca's lab to say thanks personally; I have not had a chance to meet him since I moved to Harvard. • I finally got a chance to write my first R package, for single-cell cluster stability testing. It is on GitHub: scclusteval. I implemented some functions for visualizing single-cell data and evaluating cluster stability. I will make it public once I clean it up a bit.
I was so satisfied to have it installed by devtools::install_github(), with all functions and help pages readily available as a package. I mean, I am gradually transitioning from an R user to an R programmer. I know I still have a lot to learn, but this is exciting! By the way, I highly recommend the usethis package for writing R packages, and read the R Packages book by Hadley Wickham. Read this and this blog post to get started on how to write a minimal functional R package. ### What to expect in 2019 • I will have a lot of opportunities to teach workshops on R, Unix and single-cell RNAseq at my current position. • A lot to learn on neuroscience. I will audit classes taught by Catherine Dulac. • I will guest lecture a few lessons for STAT 115: Introduction to Computational Biology and Bioinformatics taught by Shirley Liu. • I would love to learn more about deep learning and apply it to single-cell data analysis. Currently, I am taking classes on Coursera. • Attend several conferences. Would love to catch up with the twitter-verse in person. • A few more papers to write. I have at least 2 first-author papers to finish. They have been hanging there forever. • Of course, spend time with the family.
# Can you find a discriminant for a linear equation? Yes, I would say so. If you thought of a linear function as $y = 0 {x}^{2} + b x + c$, then the discriminant would be $D = {b}^{2} - 4 \cdot 0 \cdot c = {b}^{2}$, which is never negative, so a linear equation with $b \neq 0$ always has exactly one real root.
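The degenerate-quadratic reading above can be checked in a couple of lines (the function name is my own):

```python
def discriminant(a, b, c):
    """Discriminant of the quadratic a*x^2 + b*x + c."""
    return b * b - 4 * a * c

# Treating y = b*x + c as a degenerate quadratic with a = 0, the
# discriminant collapses to b^2, which is never negative.
d = discriminant(0, 5, -7)   # = 5^2 = 25
```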
# median and mean of the sample mean of i.i.d. log-normal Let $y:=\frac1n\sum_{i=1}^n x_i$, where $\{x_i\}_{i=1}^n$ is a set of i.i.d. random variables, and every $x_i$ has a lognormal distribution $x_i \sim\text{Lognormal}(\mu,\sigma^2)$. Let $\text{Med}[y]$ be the median of $y$. Is the following inequality true $\forall (n,\mu,\sigma)$? $$\text{Med}[y]<\mathbf E[y]$$ Motivation: I am computing the sample mean of lognormal random variables via Monte Carlo. The sample mean seems to concentrate below the mean for large $\sigma$. I am wondering whether this is true in all cases. It is true for a single sample. However, there is no explicit formula for the distribution of the mean of a finite number (even two) of i.i.d. lognormal samples. I have no idea how to prove it. • Is E the true expectation or is it just the sample mean? Either way this seems badly posed: sometimes the median is less than the sample mean, sometimes more. – Ian May 31 '17 at 17:38 • @Ian: $y$ is the sample mean as defined in the question and therefore a random variable. $\mathbf E[y]$ is the expectation of the random variable $y$, the sample mean. Why is it badly posed? These are mathematically rigorous statements. Each is just a definite scalar. When is the median less and when is it more than the expectation of the sample mean? Could you please prove your claims? – Hans May 31 '17 at 17:48 • Oh, sorry, I missed some notation at the start that clarified what was going on; Med is not the median of the sample, it's the median of the distribution of the sample mean of the sample. Now it makes sense. – Ian May 31 '17 at 18:29 • In your edit do you mean to say "median" rather than "mean"? – Ian Jun 1 '17 at 19:44 • @Ian: I mean to say "mean". (Double meaning. Ha) The median being below a certain point is equivalent to the probability of the variable being below that point being greater than 0.5, i.e. "concentrating below" that point.
– Hans Jun 1 '17 at 19:56 Yes, it seems the result holds. ## Case: $n = 1$ Here, $Y = X_1$. First note that $$Med(X_i) = e^\mu$$ $$E(X_i) = e^\mu e^{\sigma^2/2}$$ Since $\sigma^2 > 0$, we know that $e^{\sigma^2/2} > 1$, which implies that $E(Y) > Med(Y)$. ## Case: $n = \infty$ In the limit, it is clear by the central limit theorem that $$y \stackrel{d}{\rightarrow} N(\mu^*, \sigma^*/\sqrt{n})$$ where $\mu^*$ and $\sigma^*$ are the mean and standard deviation of the $X_i$ respectively. Since this is converging to a normal distribution, in the limit we have $E(Y) = Med(Y)$. ## Case: $1 < n < \infty$ This is not at all rigorous, but when $n=1$ we have a positively skewed distribution and when $n\rightarrow\infty$ we converge to a symmetric (normal) distribution for $Y$. In between, the sampling distribution of $Y$ will remain positively skewed and hence $E(Y)$ will be greater than $Med(Y)$ for finite $n$. This plot, which was constructed via simulation, illustrates this point informally. The vertical lines represent the median (green) and mean (blue). ## A (Partial) More Rigorous Approach Define $D_n = E(Y_n) - Med(Y_n)$; we wish to show that $\forall \ n, \ \mu, \ \sigma$, it is the case that $D_n \geq 0$. By the extreme cases above, we have that: $$D_1 > 0$$ $$\lim_{n\rightarrow\infty}D_n = 0$$ Thus, if we can show that $D_{n} > D_{n+1}$, then by MCT we can argue that for any finite $n$ it must be the case that $D_n \geq 0$. In an attempt to show that $D_n - D_{n+1} > 0$, we can write: $$D_n - D_{n+1} = \left[E(Y_n) - E(Y_{n+1})\right] + \left[Med(Y_{n+1}) - Med(Y_n)\right]$$ Since $Y$ is the sample mean, the expected value will be the same as $X_1$, regardless of $n$ (as seen in the simulation above). Hence we need only show that: $$Med(Y_{n+1}) > Med(Y_n)$$ One way we can accomplish this is by showing that $$P(Y_{n+1} < M_n) < \frac{1}{2}$$ where $M_n = Med(Y_n)$.
We can write this as: \begin{align*} P(Y_{n+1} < M_n) &= P\left(\frac{n}{n+1}Y_n + \frac{x_{n+1}}{n+1} < M_n\right) \\ &= P\left(Y_n + \frac{X_{n+1} - Y_n}{n+1} < M_n\right) \end{align*} Informally, it seems that if $\frac{X_{n+1} - Y_n}{n+1}$ is positive more often than it is negative, the above probability should indeed be less than 1/2. Perhaps somebody smarter than me can figure out where to go from here. (: • Your implication "distribution of $Y$ will remain positively skewed and hence $E(Y)$ will be greater than $Med(Y)$ for finite $n$" does not hold. Also, have you tried different $\sigma$'s? Is there a scaling argument relating all $\sigma$'s? – Hans May 31 '17 at 19:46 • Hmm I agree with you that it is a leap. However, I suspect the implication holds under certain conditions (uni-modal, etc). The base case holds for any $\sigma$ since $\sigma^2 > 0$. It would be great to see a more rigorous approach, but I remain convinced that the difference should be monotone as a function of $n$ between the $n=1$ and $n=\infty$ cases. May 31 '17 at 21:49 • By the way, would you mind changing the phrase "this should hold" to "this seems to hold"? "Should" means "indeed does" while "seem" indicates there is evidence but great uncertainty. You have not proved the conjecture. It is more appropriate to use "seem" than "should". – Hans Jun 2 '17 at 3:25
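The heuristic picture in the answer can be probed numerically. Below is a Monte Carlo sketch (evidence, not a proof) comparing the median of the sample mean $Y_n$ against its expectation $E[Y_n] = e^{\mu + \sigma^2/2}$; the parameter values are arbitrary:

```python
import numpy as np

# Monte Carlo sketch: for the sample mean of n i.i.d. Lognormal(mu, sigma^2)
# variables, the median appears to sit below the expectation.
rng = np.random.default_rng(0)
mu, sigma, n, trials = 0.0, 1.5, 5, 200_000

samples = rng.lognormal(mean=mu, sigma=sigma, size=(trials, n))
y = samples.mean(axis=1)                # realizations of the sample mean Y_n

med = np.median(y)                      # estimate of Med[Y_n]
exact_mean = np.exp(mu + sigma**2 / 2)  # E[Y_n] = E[X_1], independent of n
```

Repeating this for various $(n, \mu, \sigma)$ gives the same qualitative picture as the simulation plot described in the answer.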
Q: A and B can do a piece of work in 40 and 50 days. If they work on it on alternate days with A beginning, in how many days will the work be finished? Q: A garrison of 2000 men has provisions for 54 days. At the end of 15 days, a reinforcement arrives, and it is now found that the provisions will last only for 20 days more. What is the reinforcement? A) 1900 B) 2100 C) 1700 D) 2000 Explanation: After 15 days the remaining provisions would feed the original 2000 men for 54 - 15 = 39 days. The same provisions must now feed 'm' men for only 20 days: 2000 x 39 = m x 20, so m = 3900. Of the 3900 men, 2000 are the old garrison, hence reinforcement = 3900 - 2000 = 1900. Q: A takes twice as much time as B and thrice as much time as C to finish a piece of work. Working together, they can finish the work in 2 hours. In how much time can B do the work alone? A) 3 hrs B) 6 hrs C) 7 hrs D) 5 hrs Explanation: Suppose A, B and C take x, x/2 and x/3 hours respectively to finish the work. Then, (1/x + 2/x + 3/x) = 1/2, i.e. 6/x = 1/2 => x = 12. So, B takes x/2 = 6 hours to finish the work. Q: A can complete a work in 12 days working 8 hours per day. B can complete the same work in 8 days working 10 hours a day. If A and B work together, working 8 hours a day, in how many days can the work be completed? A) 51/24 B) 87/5 C) 57/12 D) 60/11 Explanation: A can complete the work in 12 days working 8 hours a day => number of hours A needs = 12 x 8 = 96 hours => work done by A in 1 hour = 1/96. B can complete the work in 8 days working 10 hours a day => number of hours B needs = 8 x 10 = 80 hours => work done by B in 1 hour = 1/80. Work done by A and B in 1 hour = 1/96 + 1/80 = 11/480 => A and B can complete the work in 480/11 hours. A and B work 8 hours a day, hence total days to complete the work together = (480/11)/8 = 60/11 days. Q: K can build a wall in 30 days. L can demolish that wall in 60 days.
If K and L work on alternate days, when will the wall be completed? A) 120 days B) 119 days C) 118 days D) 117 days Explanation: K's work in a day (1st day) = 1/30. L's work in a day (2nd day) = -1/60 (demolishing). Hence in 2 days, the combined work = 1/30 - 1/60 = 1/60. Since they work alternately, K works on odd days and L on even days. 1/60 of the work is done every 2 days, so 58/60 of the work is done in 2 x 58 = 116 days. Remaining work = 1 - 58/60 = 2/60 = 1/30. The next day is K's turn, and K finishes the remaining 1/30. Hence total days = 116 + 1 = 117. Q: A contractor undertook to complete a piece of work in 120 days and employed 140 men on it. At the end of 66 days only half of the work was done, so he put on 25 extra men. By how much time did he exceed the specified time? A) 2 days B) 3 days C) 4 days D) 5 days Explanation: Work done = number of men x number of days. Half the work took 140 x 66 man-days. For the remaining half, 25 extra men are added, giving 140 + 25 = 165 men. Let the second half take K days: 140 x 66 = 165 x K, so K = 56. Total time = 66 + 56 = 122 days. Hence, extra days = 122 - 120 = 2 days.
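The alternate-day bookkeeping in the K and L problem is easy to get wrong by one day, so it can be checked with a short simulation using exact fractions:

```python
from fractions import Fraction

# Day-by-day bookkeeping for the K/L problem: K builds 1/30 per day,
# L demolishes 1/60 per day, K starts, stop once the wall is complete.
built, days = Fraction(0), 0
while built < 1:
    days += 1
    if days % 2 == 1:             # odd days: K builds
        built += Fraction(1, 30)
    else:                         # even days: L demolishes
        built -= Fraction(1, 60)
# The loop exits on day 117, matching the worked answer.
```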
## Category: Operator theory ### Michael Hartz awarded Zemanek prize in functional analysis Idly skimming through the September issue of the Newsletter of the European Mathematical Society, I stumbled upon the very happy announcement that the 2020 Jaroslav and Barbara Zemanek prize in functional analysis with emphasis on operator theory was awarded to Michael Hartz. The breakthrough result that every complete Nevanlinna-Pick space has the column-row property is one of his latest results and has appeared on the arxiv this May. Besides solving an interesting open problem, it is a really elegant and strong paper. It is satisfying to see a young and very talented mathematician get recognition! Full disclosure 😉 Michael is a sort of mathematical relative (he was a PhD student of my postdoc supervisor Ken Davidson), a collaborator (together with Ken Davidson we wrote the paper Multipliers of embedded discs) and a friend. I have to boast that from the moment that I heard about him I knew that he will do great things – in his first paper, which he wrote as a masters student, he ingeniously solved an open problem of Davidson, Ramsey and myself. Since then he has worked a lot on some problems that are close to my interests, and I have been following him with admiration. Congratulations Michael! ### New paper: Dilations of commuting unitaries Malte Gerhold, Satish Pandey, Baruch Solel and I have recently posted a new paper on the arxiv. Check it out here. Here is the abstract: Abstract: We study the space of all $d$-tuples of unitaries $u=(u_1,\ldots, u_d)$ using dilation theory and matrix ranges. Given two $d$-tuples $u$ and $v$ generating C*-algebras $\mathcal A$ and $\mathcal B$, we seek the minimal dilation constant $c=c(u,v)$ such that $u\prec cv$, by which we mean that $u$ is a compression of some $*$-isomorphic copy of $cv$. This gives rise to a metric $d_D(u,v)=\log\max\{c(u,v),c(v,u)\}$ on the set of equivalence classes of $*$-isomorphic tuples of unitaries. 
We also consider the metric $d_{HR}(u,v)$ $= \inf \{\|u'-v'\|:u',v'\in B(H)^d, u'\sim u$ and $v'\sim v\},$ and we show the inequality $d_{HR}(u,v) \leq K d_D(u,v)^{1/2}.$ Let $u_\Theta$ be the universal unitary tuple $(u_1,\ldots,u_d)$ satisfying $u_\ell u_k=e^{i\theta_{k,\ell}} u_k u_\ell$, where $\Theta=(\theta_{k,\ell})$ is a real antisymmetric matrix. We find that $c(u_\Theta, u_{\Theta'})\leq e^{\frac{1}{4}\|\Theta-\Theta'\|}$. From this we recover the result of Haagerup-Rordam and Gao that there exists a map $\Theta\mapsto U(\Theta)\in B(H)^d$ such that $U(\Theta)\sim u_\Theta$ and $\|U(\Theta)-U({\Theta'})\|\leq K\|\Theta-\Theta'\|^{1/2}.$ Of special interest are: the universal $d$-tuple of noncommuting unitaries ${\mathrm u}$, the $d$-tuple of free Haar unitaries $u_f$, and the universal $d$-tuple of commuting unitaries $u_0$. We obtain the bounds $2\sqrt{1-\frac{1}{d}}\leq c(u_f,u_0)\leq 2\sqrt{1-\frac{1}{2d}}.$ From this, we recover Passer’s upper bound for the universal unitaries $c({\mathrm u},u_0)\leq\sqrt{2d}$. In the case $d=3$ we obtain the new lower bound $c({\mathrm u},u_0)\geq 1.858$ improving on the previously known lower bound $c({\mathrm u},u_0)\geq\sqrt{3}$. ### Dilations of q-commuting unitaries Malte Gerhold and I just have just uploaded a revision of our paper “Dilations of q-commuting unitaries” to the arxiv. This paper has been recently accepted to appear in IMRN, and was previously rejected by CMP, so we have four anonymous referees and two handling editors to be thankful to for various corrections and suggested improvements (though, as you may understand, one editor and two referees have reached quite a wrong conclusion regarding our beautiful paper :-). 
This is a quite short paper (200 full pages shorter than the paper I recently announced), which tells a simple and interesting story: we find the optimal constant $c_\theta$ such that every pair of unitaries $u,v$ satisfying the q-commutation relation $vu = e^{i\theta} uv$ dilates to a pair of commuting normal operators with norm less than or equal to $c_\theta$ (this problem is related to the “complex matrix cube problem” that we considered in the summer project half a year ago and the one before). We provide a full solution. There are a few ramifications of this idea, as well as surprising connections and applications, so I invite you to check out the nice little introduction. ### A survey (another one!) on dilation theory I recently uploaded to the arxiv my new survey “Dilation theory: a guided tour“. I am pretty happy and proud of the result! Right now I feel like it is the best survey ever written (honest, that's how I feel, I know that it's an illusion), but experience tells me that two months from now I might be a little embarrassed (like: how could I be so vain to think that I could pull off a survey on this gigantic topic?). (Well, these are the usual highs and lows of being a mathematician, but since this is a survey paper and not a research paper, I feel comfortable enough to share these feelings). This survey was submitted to (and will hopefully appear in) the Proceedings of the International Workshop on Operator Theory and its Applications (IWOTA) 2019, Portugal. It is an expanded version of the semi-plenary talk that I gave in that conference. I used a preliminary version of this survey as lecture notes for the mini-course that I gave at the recent workshop “Noncommutative Geometry and its Applications” at NISER Bhubaneswar.
I hope somebody finds it useful or entertaining 🙂 ### New paper: “On the matrix range of random matrices” Malte Gerhold and I recently posted our new paper “On the matrix range of random matrices” on the arxiv, and I want to write a few words about it. Recall that the matrix range of a $d$-tuple of operators $A = (A_1, \ldots, A_d) \in B(H)^d$ is the noncommutative set $W(A) = \cup_n W_n(A)$, where $W_n(A) = \{ (\phi(A_1), \ldots, \phi(A_d)) : \phi : B(H) \to M_n$ is UCP $\}$. The matrix range appeared in several recent papers of mine (for example this one); it is a complete invariant for the unital operator space generated by $A_1, \ldots, A_d$, and within some classes it is also a unitary invariant. The idea for this paper came from my recent (last couple of years or so) flirt with numerical experiments. It has dawned on me that choosing matrices randomly from some ensembles, for example by setting G = randn(N); X = (G + G')/sqrt(2*N); (this is the GOE ensemble) is a rather bad way of testing “classical” conjectures in mathematics, such as what is the best constant for some inequality. Rather, as $N$ increases, random $N \times N$ matrices behave in a very “structured” way (at least in some sense). So we were driven to try to understand, roughly, what kind of operator-theoretic phenomena we tend to observe when choosing random matrices. The above paragraph is a confession of the origin of our motive, but at the end of the day we ask and answer honest mathematical questions with theorems and proofs. If $X^N = (X^N_1, \ldots, X^N_d)$ is a $d$-tuple of $N \times N$ matrices picked at random according to the Matlab code above, then experience with the law of large numbers, the central limit theorem, and Wigner's semicircle law suggests that $W(X^N)$ will “converge” to something. And by experience with free probability theory, if it converges to something, then it should be the matrix range of the free semicircular tuple. We find that this is indeed what happens.
Theorem: Let $X^N$ be as above, and let $s = (s_1, \ldots, s_d)$ be a semicircular family. Then for all $n$, $\lim_{N \to \infty} d_H(W_n(X^N),W(s)) = 0$ almost surely in the Hausdorff metric. The semicircular tuple $s = (s_1, \ldots, s_d)$ is a certain $d$-tuple of operators that can be explicitly described (see our paper, for example). We make heavy use of some fantastic results in free probability and random matrix theory, and our contribution boils down to finding the way to use existing results in order to understand what happens at the level of matrix ranges. This involves studying the continuity of matrix ranges for continuous fields of operators; in particular, we study the relationship between the convergence (*) $\lim_{N \to \infty} \|p(X^N)\| = \|p(X)\|$ (which holds for $X^N$ as above and $X = s$ by a result of Haagerup and Torbjornsen) and (**) $\lim_{N \to \infty} d_H(W_n(X^N),W(X)) = 0$. To move from (*) to (**), we needed to devise a certain quantitative Effros-Winkler Hahn-Banach type separation theorem for matrix convex sets.
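As an informal companion to the Matlab snippet quoted in the post, here is a Python translation of the GOE sampling that shows one facet of the "structured" large-$N$ behaviour: the spectral radius of a single GOE sample approaches 2, the norm of a semicircular element. The constants ($N$, seed) are of course just illustrative:

```python
import numpy as np

# Python version of the post's Matlab snippet G = randn(N); X = (G+G')/sqrt(2*N);
rng = np.random.default_rng(1)
N = 1000
G = rng.standard_normal((N, N))
X = (G + G.T) / np.sqrt(2 * N)        # GOE normalization, as in the post

eigs = np.linalg.eigvalsh(X)          # X is real symmetric, so eigvalsh applies
spectral_radius = max(abs(eigs[0]), abs(eigs[-1]))   # ~ 2 for large N
```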
In math, an integral assigns a value to a function, such as the area under its graph. For example, to calculate the amount of water a boat displaces, an integral would give the volume of the submerged hull. The term is used in the branch of math called calculus and comes from the word integration, because it combines infinitely many small contributions into a single quantity that would otherwise be impossible to compute. A definite integral is evaluated between fixed upper and lower limits of the independent variable, and its value is a single number. For the boat described above, the limits might be the two ends of the hull. An improper integral has a much wider reach: either one (or both) of its limits is infinite, or the integrand itself becomes infinite somewhere in the interval of integration. This concept is a little more challenging to deal with than most integrals, since the value may or may not exist at all. Calculating Improper Integrals The trickiest part of calculating an improper integral is that it isn't computed directly, but rather analyzed as a limit: you integrate up to a finite bound and then let that bound approach infinity (or approach the point where the integrand blows up). Writing the integral sign with an infinite limit is, strictly speaking, an abuse of notation. As bad as this may sound, it's simply a convenient shorthand for this limiting process, in the same way a definite integral is shorthand for a limit of sums. Integral Limits The limits of an integral are what set these cases apart. If an integral has two limits that are clearly defined and finite, it's a definite integral. If one or both limits approach infinity, it's an improper integral.
The notation makes the difference clear: in one case the limiting process settles on a definite value, while in the other the value grows without bound. In other words, the first problem can be evaluated outright, while the second can only be analyzed. Convergence and Divergence Of the two situations above, one has no finite answer, but that doesn't mean there isn't data to be found in the analysis. If an improper integral's limit exists and is finite, the integral converges. If the limit is infinite or does not exist, it diverges. Even a divergent integral says something about the function being graphed, for example how quickly it grows near an asymptote. It may not be much, but this could provide a starting point for understanding the function the integral gives value to. What's most confusing about this type of analysis is that there are still values that can be extracted even when the limits are infinite. It's all about adding perspective: when both limits are infinite there is no natural middle, so a point on the x-axis is chosen as the middle and the integral is split into two one-sided improper integrals. Hopefully this is enough to settle the question of convergence or divergence when such an integral is to be solved.
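The convergence/divergence distinction can be seen numerically: integrate up to larger and larger finite bounds and watch whether the estimates settle down. The helper below is a minimal midpoint-rule sketch; $\int_1^\infty x^{-2}\,dx$ converges to 1, while $\int_1^\infty x^{-1}\,dx$ grows like $\ln b$ without bound:

```python
# Numeric sketch: on [1, infinity), 1/x^2 has a convergent improper
# integral (value 1) while 1/x diverges like log(b).
def integrate(f, a, b, n=100_000):
    """Midpoint-rule estimate of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

converging = [integrate(lambda x: 1 / x**2, 1, b) for b in (10, 100, 1000)]
diverging = [integrate(lambda x: 1 / x, 1, b) for b in (10, 100, 1000)]
# converging approaches 1; diverging keeps growing by ln(10) per decade.
```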
# A diverging beam of light from a point source S falls symmetrically on a glass slab Q: A diverging beam of light from a point source S having divergence angle $\alpha$ falls symmetrically on a glass slab as shown. The angles of incidence of the two extreme rays are equal. If the thickness of the glass slab is $t$ and the refractive index is $n$, then the divergence angle of the emergent beam is A. zero B. $\alpha$ C. $\sin^{-1}(1/n)$ D. $2\sin^{-1}(1/n)$ Answer: Each extreme ray is incident at $i = \alpha/2$. A parallel-sided slab does not change ray directions, so the divergence angle of the emergent beam is $2i = \alpha$ (option B).
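The answer can be verified by tracing one extreme ray through both faces with Snell's law (the numeric values of $\alpha$ and $n$ below are arbitrary):

```python
import math

# A ray incident at angle i refracts to r inside the glass (sin i = n sin r)
# and refracts back to exactly i at the parallel exit face, so the beam's
# divergence angle alpha is unchanged by the slab.
def exit_angle(i, n):
    r = math.asin(math.sin(i) / n)     # entering the glass
    return math.asin(n * math.sin(r))  # leaving through the parallel face

alpha, n = math.radians(40), 1.5       # illustrative values
i = alpha / 2                          # incidence angle of each extreme ray
divergence_out = 2 * exit_angle(i, n)  # equals alpha, independent of n and t
```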
## Playing the Field Condition (ES2) for evolutionary stability is applicable to a two-player game. When the game is against the field, payoffs are non-linear functions of the second argument (Section 2.9) and condition (ES2)(ii) must be modified to (ii) $W\left(x^{\prime}, x_{\epsilon}\right)<W\left(x^{*}, x_{\epsilon}\right)$ for all sufficiently small positive $\epsilon$. Here, in an abuse of notation, the term $W\left(x, x_{\epsilon}\right)$ denotes the payoff to an individual with strategy $x$ when the resident population comprises a proportion $\epsilon$ of $x^{\prime}$ strategists and a proportion $1-\epsilon$ of $x^{*}$ strategists. One can then show that the original (ES2) condition is a special case of this more general condition when payoffs are linear in the second argument, as is the case in a two-player game (Exercise 4.4). In the sex-ratio game of Section $3.8$ and Box $3.1$ the unique Nash equilibrium strategy is to produce daughters and sons with equal probability. When this is the resident strategy, condition (ES2′) holds for all mutants, so the strategy is an ESS (Exercise 4.5). The intuition behind this result is similar to that for the Hawk-Dove game. For example, if the resident strategy is to produce sons and daughters with equal probability and a mutant arises that produces mainly daughters, then the breeding population becomes more female-biased if this mutant starts to become common, and the mutant strategy then does worse than the resident.
## Illustration of Stability for a Continuous Trait It is common for fish to approach a predator in order to gain valuable information on the threat posed (Pitcher et al., 1986). Approach is dangerous, so a fish often approaches together with another fish in order to dilute the risk to itself, and this is a form of cooperation. The phenomenon has been the focus of much empirical study. Figure $4.1$ illustrates a typical laboratory set-up. Under such circumstances we might expect the behaviour of each of the two fish to depend on the other. To model the game between the fish we loosely follow the approach of Parker and Milinski (1997). We are critical of the biological realism of our model (Section 8.4), but this simple game serves to illustrate the ESS analysis for a continuous trait. Two fish are initially a unit distance away from a predator. Each fish chooses how far to travel towards the predator. Thus an action is a distance $x$ in the range $0 \leq x \leq 1$, where $x=0$ is the action of not approaching the predator at all and $x=1$ the action of travelling right up to the predator. If a fish moves distance $x^{\prime}$ while its partner moves distance $x$, the focal fish survives with probability $S\left(x^{\prime}, x\right)$ and gains information of value $V\left(x^{\prime}\right)$ if it survives. Its payoff from the game is $W\left(x^{\prime}, x\right)=S\left(x^{\prime}, x\right) V\left(x^{\prime}\right)$.
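A numerical sketch can show how a candidate ESS for such a continuous trait might be located. The functional forms of $S$ and $V$ below are invented for illustration (the text only specifies $W(x', x) = S(x', x)\,V(x')$, not the forms themselves): survival falls as the focal fish approaches and improves when the partner approaches too, while information value grows with approach distance.

```python
import numpy as np

# Illustrative (assumed) payoff components for the inspection game.
def payoff(x_focal, x_partner):
    survive = 1 - 0.9 * x_focal * (1 - 0.4 * x_partner)  # assumed S(x', x)
    value = x_focal ** 0.5                               # assumed V(x')
    return survive * value                               # W(x', x) = S * V

xs = np.linspace(0, 1, 1001)   # grid of approach distances in [0, 1]

def best_response(x_partner):
    return xs[np.argmax(payoff(xs, x_partner))]

# Iterating the best response converges to a candidate symmetric equilibrium
# x*, a strategy that is a best response to itself (about 0.45 here).
x = 0.5
for _ in range(100):
    x = best_response(x)
```

Under these assumed forms the iteration settles on an interior equilibrium; the actual Parker and Milinski (1997) model would of course use their own $S$ and $V$.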
## Intermediate Algebra (12th Edition) Published by Pearson # Chapter 4 - Section 4.4 - Multiplying Polynomials - 4.4 Exercises - Page 305: 76 #### Answer $9m^2-12m+4+6mp-4p+p^2$ #### Work Step by Step Using $(a+b)^2=a^2+2ab+b^2$ or the square of a binomial, the given expression, $[(3m-2)+p]^2 ,$ is equivalent to \begin{array}{l}\require{cancel} (3m-2)^2+2(3m-2)(p)+(p)^2 \\\\= (3m-2)^2+2p(3m-2)+p^2 \\\\= (3m-2)^2+6mp-4p+p^2 \\\\= [(3m)^2+2(3m)(-2)+(-2)^2]+6mp-4p+p^2 \\\\= 9m^2-12m+4+6mp-4p+p^2 .\end{array}
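The expansion can be spot-checked by evaluating both sides of the identity on a grid of integer points:

```python
# Spot check of [(3m-2)+p]^2 = 9m^2 - 12m + 4 + 6mp - 4p + p^2.
def lhs(m, p):
    return ((3 * m - 2) + p) ** 2

def rhs(m, p):
    return 9 * m**2 - 12 * m + 4 + 6 * m * p - 4 * p + p**2

agree = all(lhs(m, p) == rhs(m, p) for m in range(-5, 6) for p in range(-5, 6))
```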
# Tag Info ## Hot answers tagged angular-velocity 20 Let me first list all of the possibilities I considered that I later rejected. This is far from exhaustive, and I'm looking forward to seeing other people's creativity. Bad Ideas Sit on a tire swing with the fan pointing to the side. Point the fan up, measure speed of rotation of the system on the tire swing. Get a laser or collimated flashlight. Point ... 10 There are no "other" examples. The condition that $\vec \omega$ and $$\vec L = I_{\rm tensor} \cdot \vec \omega$$ point in the same direction, i.e. $$(\vec L=) I_{\rm tensor} \cdot \vec \omega = k \vec \omega$$ where $k$ is a real number (and no longer a tensor), is a definition of an eigenvector of $I_{\rm tensor}$: both $\vec \omega$ and $\vec L$ are ... 10 This is a note on why angular velocities are vectors, to complement Matt and David's excellent explanations of why rotations are not. When we say something has a certain angular velocity $\vec{\omega_1}$, we mean that each part of the thing has a position-dependent velocity $\vec{v_1}(\vec{r}) = \vec{\omega_1} \times \vec{r}$. We might consider another ... 7 There are actually several different ways to interpret that question, depending on what you mean by "vector" and "rotation". But here's a sense that I've often wondered about myself: in introductory physics, the velocity vector is defined as the time derivative of the position vector (relative to some fixed point). Why is the same not true of angular ... 7 The direction of angular velocity is different from that of regular velocity for (arguably) two reasons. First, it points out of the plane because of the nature of angular velocity. It signifies a rotation; as such, there is no particular direction unit vector in any coordinate space that could represent it. In spherical or cylindrical coordinates, it ...
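The point that rotations compose rather than add (so finite rotations are not vectors, while angular velocities are) can be demonstrated with two 90-degree rotation matrices applied in opposite orders:

```python
import numpy as np

# Finite rotations do not commute: applying two 90-degree rotations in
# opposite orders gives different results, so they cannot be added like
# vectors, even though angular velocities can.
Rx = np.array([[1, 0, 0],
               [0, 0, -1],
               [0, 1, 0]], dtype=float)   # 90 degrees about the x axis
Ry = np.array([[0, 0, 1],
               [0, 1, 0],
               [-1, 0, 0]], dtype=float)  # 90 degrees about the y axis

x_then_y = Ry @ Rx   # rotate about x first, then about y
y_then_x = Rx @ Ry   # rotate about y first, then about x
commute = np.allclose(x_then_y, y_then_x)   # False: the order matters
```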
7 You made a mistake in assuming that the angular acceleration ($\alpha$) is equal to $v^2/r$ which actually is the centripetal acceleration. In simple words, angular acceleration is the rate of change of angular velocity, which further is the rate of change of the angle $\theta$. This is very similar to how the linear acceleration is defined. ... 7 There is indeed a term involving the time derivative of the changing coupling between the masses. First, let's derive the equation for a single mass. $$L = \frac{1}{2} I\, \dot{\theta}^2 - V(\theta)$$ $$\frac{\partial L}{\partial \dot{\theta}} = I\, \dot{\theta}$$ $$\frac{\partial L}{\partial \theta} = -\frac{dV}{d\theta} = \tau$$ $$\tau = \frac{d}{dt} ... 7 Here's a straightforward but somewhat computational way. There are three steps. (1) Show how to define the angular velocity vector in terms of rotation matrices. (2) Write a general rotation in terms of Euler angles. (3) Combine (1) and (2) to get an expression for the angular velocity vector in terms of Euler angles. Step 1. Recall that if $\mathbf x(t)$ is ... 6 Defining properties of vectors are that you can add them and multiply them by constants. These both make sense for angular velocities. On the other hand, adding rotations doesn't make sense. What you can do with two rotations is compose them: first rotate one way, then rotate another. This operation doesn't look like addition of any sort. For one thing, it ... 6 Two concentric and counterrotating flywheels preclude all precession forces regardless of which plane the axis is rotated in. This is assuming the connection between the two flywheels is sufficiently strong--it may break from tension/compression due to each flywheel experiencing its own forces. Refer to the diagram I just drew up. The black rectangles ... 5 Yes there will be a drag torque opposite the direction of spin. The name for this seems to be viscous torque.
According to this paper, the viscous torque on a spinning sphere of radius $R$ in a fluid with viscosity $\eta$ spinning with constant angular velocity $\vec\Omega$ is $$\vec\tau = -8\pi R^3\eta\vec\Omega$$ The paper goes on to describe how to ... 5 Angular speed is the rate of change of the angle (in radians) with time, and it has units 1/s, while tangential speed is the speed of a point on the surface of the spinning object, which is the angular speed times the distance from the point to the axis of rotation. 5 $a_c = \frac{v^2}{r}$ isn't angular acceleration. It's the magnitude of the linear acceleration towards the centre of an object following a circular path at constant angular velocity. Angular acceleration is the derivative of angular velocity, and the analogue of Newton's second law is that angular acceleration equals torque divided by moment of inertia. 5 Yes, B does rotate when seen from a static frame of coordinates outside the disk: As to velocities and accelerations, see the article in Wikipedia. It says, $$\vec {v_s} = \vec {v_r} + \vec {\Omega} \times \vec r,$$ where $v_s$ is the velocity in the static frame and $v_r$ in the rotating. If you apply this formula for both points A and B, their ... 5 If I understand your question correctly you are saying that: $$v = r\omega$$ and therefore: $$\begin{align} v_A &= r\omega \\ v_B &= \tfrac{1}{2}r\omega \\ v_A &= 2v_B \end{align}$$ but how can A and B have different velocities when they are both attached to the disk so the separation between them is fixed? The answer is that A and ... 5 I'm not an engineer, but this is how I'd do it. According to your rules, we can use a computer, and Audacity. You get a pair of headphones. You plug the headphones into the microphone jack on your computer. You open Audacity. You get a very small magnet. You glue the very small magnet to one of the fan blades. You turn the fan on. You hold one of the ...
5 I will attempt to answer this question with some basic dynamics and some contact mechanics. There are two special cases here. a) There is sufficient friction to keep the base of the pin A fixed (imparting a reaction impulse $J_A$ when hit by the ball), or b) The floor is smooth and the pin will translate and rotate at the same time with $J_A=0$. There is ... 5 The proper derivation of the centripetal acceleration—without assuming any kinematic variables are constant—requires a solid understanding of both the stationary Cartesian unit vectors $\hat{i}$ and $\hat{j}$ as well as the rotating polar unit vectors $\hat{e}_r$ and $\hat{e}_\theta$. The Cartesian unit vectors $\hat{i}$ and $\hat{j}$ are stationary and ... 5 A day is currently about 86400.002 seconds long. If we could just increase the Earth's rotation rate by a mere 2 milliseconds per day we would get rid of the need for those pesky leap seconds. No problem! We only need something that rotates with an angular momentum of $1.4\times10^{26}$ joule-seconds about an axis pointing due south. One way to do this would be to ... 4 I have reproduced your calculation. If $\theta$ is the angle from the central vertical axis of the hemisphere to the ball, the tangential downward force is $g \sin \theta = g\frac{r}{R}$. The tangential upward force due to rotation is $\omega^2 r \cos \theta = \omega^2 r \sqrt{1-\frac{r^2}{R^2}}$. When $\omega \lt \sqrt{\frac{g}{R}}$ the downward force is ... 4 If you solve the problem for the two forces the vertical and the horizontal force (which is required for the circular motion) you obtain the relation $$\omega^2=\frac{g}{L\cos\theta}$$ Hence the minimum required $\omega$ is $\sqrt{g/L}$, the same as the angular frequency for motion of a bob in a plane. 4 As far as we know and can test, space is continuous, not discrete. Since space is continuous, then the labels we associate with it (i.e., positions) are also continuous.
Calculus requires continuous functions to do the derivative and integral, so this implies that velocities and accelerations are also continuous because they are derivatives of positions:$$ ... 4 High angular momentum presents a barrier preventing collapse to a black hole (at least until this angular momentum is radiated away). The parameter on which the formation of a black hole depends is the ratio $q$ of angular momentum ($J$) to the square of mass ($M$). If $q=J/M^2 < 1$ (in relativistic units with $G=1$, $c=1$), then the black hole ... 4 The moment of inertia tensor is not constant in the external reference frame (http://en.wikipedia.org/wiki/Precession#Torque-free ) 4 For the person not to slip, there must be a centripetal force of $mv^2/r = m r \omega^2$ towards the centre. Since $v$ varies with $r$ while $\omega$ is fixed ($v=r\omega$), it is probably easier to take the second form, in which case this force has to increase as $r$ increases. This force comes from friction since there are no other forces in the plane ... 4 Assuming your rotating object (e.g. the Earth) is rotating at a steady speed the only way to change its apparent speed of rotation is if you're rotating around it. You give the example of a geostationary satellite. This rotates around the Earth at the same angular velocity as the Earth rotates, so the Earth appears to be stationary (hence the name ... 4 I assume you know about rotation matrices, and so for a sequence of rotations about Z-X-Z with angles $\phi$, $\theta$ and $\psi$ respectively you have $$\vec{\omega} = \dot{\phi} \hat{z} + T_1 \left( \dot{\theta} \hat{x} + T_2 \left( \dot{\psi} \hat{z} \right) \right)$$ The logic here is apply a local spin of $\dot{\phi}$, $\dot{\theta}$ and $\dot{\psi}$ ...
4 In the basic discussion of angular momentum where something is rotating around a fixed symmetrical axis $\vec{L}=\vec{r}\times\vec{p}$ reduces to $\vec{L}=I\,\vec{\omega}$ Like in this animation where each vector is colored appropriately: However angular velocity and angular momentum can have different directions in two cases: If the axis of ... 3 A velocity is a vector, so to describe it we want a magnitude and a direction. The magnitude bit is easy; the larger the velocity, the greater the magnitude. Like you said, for linear velocity the direction is just the direction of travel, but trying to do the same thing for an angular velocity fails - the direction of travel is changing. However, it's ...
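The note above on why angular velocities (unlike finite rotations) behave as vectors comes down to the velocity field $\vec{v}(\vec{r}) = \vec{\omega} \times \vec{r}$ being linear in $\vec{\omega}$. A quick numerical illustration of that linearity (the vectors below are chosen arbitrarily):

```python
import math

def cross(a, b):
    # 3-D cross product, components written out explicitly.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def velocity(omega, r):
    """Velocity of a point at position r in a body rotating with angular velocity omega."""
    return cross(omega, r)

# Superposing two rotations of the same body adds the angular velocities:
# (w1 + w2) x r = w1 x r + w2 x r, by linearity of the cross product.
w1, w2, r = (0.0, 0.0, 2.0), (0.0, 1.0, 0.0), (3.0, -1.0, 0.5)
w_sum = tuple(a + b for a, b in zip(w1, w2))
lhs = velocity(w_sum, r)
rhs = tuple(u + v for u, v in zip(velocity(w1, r), velocity(w2, r)))
assert all(math.isclose(x, y) for x, y in zip(lhs, rhs))
```

The same check fails for the composition of two finite rotations, which is exactly the point the answer makes.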
# BNF Grammars The input language for sh61 command lines is described in terms of a BNF grammar, where BNF stands for Backus–Naur Form, or Backus Normal Form. BNF is a declarative notation for describing a language, which in this context just means a set of strings. BNF notation consists of three pieces: • Terminals, such as "x", are strings of characters that must exactly match characters in the input. • Nonterminals (or symbols for short), such as lettera, represent sets of strings. One of the nonterminals is called the root or start symbol of the grammar. By convention, the root is the first nonterminal mentioned in the grammar. • Rules, such as lettera ::= "a" or word ::= letter word, define how nonterminals and strings relate. A string is in the language if it can be obtained by following the rules of the grammar starting from a single instance of the root symbol. Any context-free language can be described this way. Here’s an example grammar that can recognize exactly two sentences, namely "cs61 is awesome" and "cs61 is terrible": cs61 ::= "cs61 is " emotionword emotionword ::= "awesome" | "terrible" It’s useful to be able to read BNF grammars because they’re very common in computer science and programming. Many specification documents use BNF to define languages, including programming languages, configuration files, and even network packet formats. These exercises show features of BNF grammars using simple examples. Peek at some answers and then try to answer the questions on your own. Exercise: Define a grammar for the empty language, which is a language containing no strings. (There is no valid sentence in the empty language.) Exercise: Define a grammar for the children language (because children should be “not heard”). The empty string is the only valid sentence in this language. Exercise: Define a grammar for the letter language. A letter is a lower-case Latin letter between a and z. Exercise: Define a grammar for the word language. 
A word is a sequence of one or more letters. You may refer to the letter nonterminal defined above. Exercise: Define two different grammars for the word language. Exercise: Define a grammar for the parenthesis language, where all sentences in the parenthesis language consist of balanced pairs of left and right parentheses. Some examples: Valid Invalid () ())( (()) ( ()() )( (((()())()())) ))))))))))( ## The shell grammar This is the BNF grammar for sh61, as found in problem set 5. (Note that this grammar doesn’t define words, and doesn’t define where whitespace is required. That kind of looseness is typical in systems grammars; in your case helpers.cc already implements the necessary rules.) commandline ::= list | list ";" | list "&" list ::= conditional | list ";" conditional | list "&" conditional conditional ::= pipeline | conditional "&&" pipeline | conditional "||" pipeline pipeline ::= command | pipeline "|" command command ::= word | redirection | command word | command redirection redirection ::= redirectionop filename redirectionop ::= "<" | ">" | "2>" The recursive definitions allow for lists or trees of terms to be chained together. Taking list as an example: list ::= conditional | list ";" conditional | list "&" conditional This reads “a list is a conditional, or a list followed by a semicolon and then a conditional, or a list followed by an ampersand and then a conditional.” But this means that the list is just a bunch of conditionals, linked by semicolons or ampersands! Notice that the other recursive definitions in sh61’s grammar also follow this pattern. In other words: • A list is a series of conditionals, concatenated by ; or &. • A conditional is a series of pipelines, concatenated by && or ||. • A pipeline is a series of commands, concatenated by |. • A redirection is one of <, >, or 2>, followed by a filename. 
We’ve written out a full definition of command, but BNF-style grammars seen in the wild often use shorthand, such as: command ::= [word or redirection]... This shorthand, like the full definition, says that a command is a series of words representing the program name and its arguments, interspersed with redirection commands.
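The rule-to-code correspondence is easiest to see in a recursive-descent recognizer, where each grammar alternative becomes a branch of a function. The sketch below is in Python rather than the C++ of the problem set, and uses one possible grammar for the parenthesis-language exercise, paren ::= "" | "(" paren ")" paren (note this version also accepts the empty string):

```python
def parse_paren(s, i=0):
    """Try to match `paren` starting at index i; return the index after the match."""
    # Alternative 1: "(" paren ")" paren
    if i < len(s) and s[i] == "(":
        j = parse_paren(s, i + 1)
        if j < len(s) and s[j] == ")":
            return parse_paren(s, j + 1)
        # No matching ")": fall back to the empty alternative.
        return i
    # Alternative 2: the empty string.
    return i

def accepts(s):
    # A string is in the language if `paren` consumes all of it.
    return parse_paren(s, 0) == len(s)
```

Running `accepts` over the table of examples reproduces the valid/invalid split: `accepts("(())")` is true, `accepts("())(")` is false.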
## graphing solution set calculator problems? I am supposed to graph this and I have a TI-89; how exactly do I input this to be able to graph it, including the shading? $x^2+y^2>1$ $x^2+y^2<9$
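Not a TI-89 answer, but as a cross-check of what the shaded graph should look like: the solution set of the system is the open annulus between the circles of radius 1 and 3. A dependency-free Python sketch (the grid bounds and resolution are arbitrary choices):

```python
def in_region(x, y):
    # Both strict inequalities must hold, so the boundary circles
    # (radius 1 and radius 3) are excluded: draw them dashed.
    return 1 < x * x + y * y < 9

# Crude character plot of the region on [-4, 4] x [-4, 4].
for j in range(21):
    y = 4 - 0.4 * j
    print("".join("#" if in_region(-4 + 0.2 * i, y) else "." for i in range(41)))
```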
# Nyquist Diagram from Data Tags: 1. Aug 25, 2014 ### roam 1. The problem statement, all variables and given/known data I'm trying to make a Nyquist diagram based on experimental data obtained of the gain and phase of a high-pass RC filter which has the transfer function: $\frac{V_{out}}{V_{in}}=\frac{R}{R+1/(j \omega C)}=\frac{1}{1-j\omega_0/\omega}$ Here are the superimposed experimental and theoretical Bode plots: And here is the theoretical Nyquist plot I made by using the equation above with Matlab's nyquist() function: So how can I make an experimental Nyquist plot (using the two sets of data obtained on amplitude and phase of the filter)? 3. The attempt at a solution Unfortunately I couldn't get any help with this problem so far and I haven't found anything relevant online. So, I have 3 vectors containing the collected data (gain in dB, phase in degrees, frequency in Hertz). How can I plot the Nyquist diagram using the data and NOT the transfer function? Is there a simple way that Matlab can turn my data into a Nyquist plot? Or do I need to do everything manually? If it has to be done manually, then how should I approach this? Any explanation, sample code, or perhaps links would be greatly appreciated. P. S. I've attached my data just in case. #### Attached Files: • ###### Data.txt File size: 1.2 KB Views: 110 2. Aug 25, 2014 ### milesyoung If your system doesn't have any zeros or poles at the origin, then its Nyquist diagram is just its Bode plot in polar form with its complex conjugate superimposed on top of it, so if you have your gain (in linear magnitude) and phase (in radians) in a couple of vectors, then one half of the Nyquist diagram is simply given by: Code (Text): z = gain.*exp(j*phase); plot(z) You can complete it using: Code (Text): plot([z;conj(z(end:-1:1))]) You'll have to mentally add the direction, but that should be simple enough. 3. Aug 25, 2014 ### rude man For starters, this is not in the correct form. 
You need to rewrite your transfer function so ω appears in the denominator as a term in ω, not 1/ω. At least, that is the convention. Whether you can proceed the way you did is questionable to me. 4. Aug 25, 2014 ### milesyoung I assume you're talking about the form that shows break frequencies for use in, for instance, drawing Bode plots using asymptotic approximations. That form isn't necessarily "correct", or even good, for drawing Nyquist diagrams. You can often find a much more convenient polar form of the transfer function for these, i.e. it's not always better to draw Bode plots first and then use them to sketch the Nyquist diagram (Edit: Just to elaborate, straight lines in Bode plots aren't that great when drawing in polar form). In any case, the OP's question was about plotting a Nyquist diagram from a data set, not a transfer function. Last edited: Aug 25, 2014 5. Aug 25, 2014 ### roam Thank you for the helpful post Miles. But how would you modify the code if the transfer function contains a zero/pole at the origin? For instance I tried your method for an LC resonant circuit (the notch filter) which had a transfer function of: $\frac{V_{out}}{V_{in}} = \frac{R_2}{R_2+z_p}$ with $z_p = \frac{R_p}{1+jR_p(\omega C -\frac{1}{\omega L})}$ It simplifies to $\therefore \frac{R_2 s}{R_2 s + R_ps + (1/C) - (1/L)}$ (s=jω) Clearly it has a zero at the origin. In fact here is the diagram I got when I tried using that code (magenta is the experimental, blue is the theoretical): I have the same problem with plotting Nyquist diagrams of LC bandpass filters. I'm not quite sure why that happens and how to fix it. #### Attached Files: • ###### Notch.txt File size: 625 bytes Views: 73 6. Aug 27, 2014 ### roam Actually in the previous plot the value of Rp was not correct; when I correct that, the two plots almost completely overlap. However in the experimental diagram why do we have these straight lines tangential to the curve?
#### Attached Files: • ###### notch.jpg File size: 28.7 KB Views: 223 Last edited: Aug 27, 2014 7. Aug 27, 2014 ### roam Similarly, I tried to make a Nyquist diagram of a bandpass filter with transfer function: $\frac{R_1}{R_1 + R_s + sL+1/sC}$ (R1=83Ω, L=10mH, C=22nF, Rs=20Ω). Once again it has a zero at the origin. This is what I got using experimental data (not the TF): (Magenta is the experimental one) What is causing the straight line at the center, and how can we get rid of it? #### Attached Files: • ###### bandpass.txt File size: 725 bytes Views: 71 8. Aug 27, 2014 ### milesyoung You don't have to modify it, you just have to remember that the frequency response of the system only gives part of its Nyquist diagram. That fact is more important when you have a zero or pole at the origin, because you then have to change the Nyquist path to "go around" that zero or pole, which either gives an arc around the origin or out at infinity, respectively, in your Nyquist diagram: What does the Nyquist plot look like for a system with poles at the origin? I'm not exactly sure what you're doing in those plots - mine look very different. Maybe you could post your source code? (use CODE tags). They are, however, going to look a bit odd for the data sets you've provided. You only have a couple of data points at the resonance frequency of your filters, which isn't going to make for a great Bode plot or Nyquist diagram. You should have a higher resolution around where the magnitude/phase starts changing rapidly with frequency. 'plot' in MATLAB does linear interpolation between points, so you can get some odd looking graphs if you're not careful. On another note, when uploading data sets, it's a good idea to use a standard format, like delimiter-separated values. MATLAB, for instance, can export workspace variables as comma-separated values in an ASCII file using the command 'csvwrite' (not with the greatest accuracy, but still). 9. 
Aug 27, 2014 ### roam Here's my source code for the bandpass filter: Code (Text):

% Measured data (dB -> linear gain, degrees -> radians)
gain=10.^((1/20).*[-59.047 -57.164 etc...]);
phase=0.0174532.*[90.029 89.812 etc...];

% Experimental Nyquist plot
z = gain.*exp(j*phase);
plot([z;conj(z(end:-1:1))], 'm', 'LineWidth', 2);

% Theoretical plot for comparison
hold on
R1=83.02; Rs=19.49; L=10e-3; C=22e-9;
s = tf('s');
n=Rs*(1+(((s*L)/Rs)+(1/(s*Rs*C))));
sys = R1/(R1+n);
nyquist(sys, '--')
grid;

For phase and amplitude I just used the data attached to the previous post (the actual data are in degrees and dB respectively so they are multiplied by a conversion factor). Yes, I did use separated values when entering the data set. I've attached the experimental Bode plot (gain being on the vertical axis). So how is your plot different from mine? Did you use a different code?? P.S. The transfer function for this filter was: $T= \frac{R_1}{R_1 + z_s}$ where $z_s=R_s \left[ 1+j\left(\frac{\omega L}{R_s} - \frac{1}{\omega R_s C}\right) \right]$ The series resistance is $20 \Omega$ and $R_1=82 \Omega$, $L= 10 mH$, $C=22 nF$. #### Attached Files: • ###### expr.jpg File size: 11.5 KB Views: 176 Last edited: Aug 27, 2014
For row vectors, you can use this instead: Code (Text): plot([z conj(z(end:-1:1))]) This works for both cases: Code (Text): plot(z) hold on plot(conj(z)) But again, these are still going to look terrible since you have so few data points around the resonance frequencies of your filters. 11. Aug 28, 2014 ### roam Thanks for the remark, I made the correction to all my plots. But are you sure this is because there are not enough data points around resonance? Because for the previous RC circuits, I had the same amount of data, and yet the experimental Nyquist diagram closely matched the theory! Is there any way to somehow smooth out the diagram a little? With the correction this is how the plots look so far: Band-pass: Notch I don't see why there's a double line in the last one... 12. Aug 28, 2014 ### milesyoung As you can tell from your first diagram, there's only so much fidelity you can get using linear interpolation with so few data points around the resonance frequency. The problem is that the phase changes rapidly (at high magnitude) from something positive to something negative in a few data points, so you get those straight lines darting around and connecting those few points. It's not that big of a deal for a notch filter, however, since that phase transition occurs at very low magnitude, so it really doesn't alter the overall shape of the Nyquist locus. Compare how rapidly the magnitude/phase changes with frequency for your RC-circuit and your LC-circuits. You don't have the same amount of data around where it matters! If you don't want to measure any more data, then it might be an option to just superimpose your data on a Bode plot of a filter with a transfer function of the right form, but where you can adjust the parameters until "it fits". Just try and eyeball it for now - I don't think it's worth getting into system identification just for these simple systems. 13.
Aug 28, 2014 ### milesyoung It's because, for the ideal filter, "one half" of the Nyquist diagram is already an ellipse in the complex plane. When you superimpose its complex conjugate, you do so right on top of the locus that's already there. For your data set, that second "go around" isn't going to fit perfectly due to measurement errors and what not, so you get to see the whole shebang. 14. Aug 29, 2014 ### roam Thank you for the explanation (the system doesn't allow me to thank you twice). It makes a lot more sense now. 15. Aug 29, 2014 ### milesyoung You're very welcome. I should probably have mentioned something in my first post: 'Phase' in the complex plane is synonymous with 'angle'. In your original post, can you see how that Bode plot traces out the upper half of the Nyquist diagram? A magnitude and phase pair in the Bode plot gives a point in the complex plane with the same magnitude and phase (angle).
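The gain-and-phase reconstruction described in this thread — rebuild the complex response from the measured data, then append the reversed complex conjugate — translates directly out of MATLAB. A Python/numpy sketch, using synthetic data for the RC high-pass as a stand-in for the measured vectors (the cutoff and frequency grid are made-up values):

```python
import numpy as np

# Stand-in for the measured data: gain (dB) and phase (degrees) of the
# high-pass filter H = 1 / (1 - j*w0/w), sampled on a log frequency grid.
w0 = 2 * np.pi * 1e3                    # assumed cutoff, rad/s
w = 2 * np.pi * np.logspace(2, 5, 200)  # measurement frequencies, rad/s
H = 1 / (1 - 1j * w0 / w)
gain_db = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))

# Rebuild the complex frequency response from the data alone
# (no transfer function needed).
z = 10 ** (gain_db / 20) * np.exp(1j * np.radians(phase_deg))

# The frequency response traces one half of the Nyquist locus; the other
# half is its complex conjugate, traversed in reverse.
locus = np.concatenate([z, np.conj(z[::-1])])
```

Plotting `locus.real` against `locus.imag` then reproduces the full diagram, with the direction added mentally as noted above.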
# naginterfaces.library.sparseig.complex_​band_​init¶ naginterfaces.library.sparseig.complex_band_init(n, nev, ncv)[source] complex_band_init is a setup function for complex_band_solve() which may be used for finding some eigenvalues (and optionally the corresponding eigenvectors) of a standard or generalized eigenvalue problem defined by complex, banded, non-Hermitian matrices. The banded matrix must be stored using the LAPACK column ordered storage format for complex banded non-Hermitian matrices (see the F07 Introduction). For full information please refer to the NAG Library document for f12at https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/f12/f12atf.html Parameters nint The order of the matrix (and the order of the matrix for the generalized problem) that defines the eigenvalue problem. nevint The number of eigenvalues to be computed. ncvint The number of Lanczos basis vectors to use during the computation. At present there is no a priori analysis to guide the selection of relative to . However, it is recommended that . If many problems of the same type are to be solved, you should experiment with increasing while keeping fixed for a given test problem. This will usually decrease the required number of matrix-vector operations but it also increases the work and storage required to maintain the orthogonal basis vectors. The optimal ‘cross-over’ with respect to CPU time is problem dependent and must be determined empirically. Returns commdict, communication object Communication structure. Raises NagValueError (errno ) On entry, . Constraint: . (errno ) On entry, . Constraint: . (errno ) On entry, , and . Constraint: and . 
Notes The pair of functions complex_band_init and complex_band_solve() together with the option setting function complex_option() are designed to calculate some of the eigenvalues, , (and optionally the corresponding eigenvectors, ) of a standard eigenvalue problem , or of a generalized eigenvalue problem of order , where is large and the coefficient matrices and are banded complex and non-Hermitian. complex_band_init is a setup function which must be called before the option setting function complex_option() and the solver function complex_band_solve(). Internally, complex_band_solve() makes calls to complex_iter() and complex_proc(); the function documents for complex_iter() and complex_proc() should be consulted for details of the algorithm used. This setup function initializes the communication arrays, sets (to their default values) all options that can be set by you via the option setting function complex_option(), and checks that the lengths of the communication arrays as passed by you are of sufficient length. For details of the options available and how to set them, see Other Parameters for complex_option. References Lehoucq, R B, 2001, Implicitly restarted Arnoldi methods and subspace iteration, SIAM Journal on Matrix Analysis and Applications (23), 551–562 Lehoucq, R B and Scott, J A, 1996, An evaluation of software for computing eigenvalues of sparse nonsymmetric matrices, Preprint MCS-P547-1195, Argonne National Laboratory Lehoucq, R B and Sorensen, D C, 1996, Deflation techniques for an implicitly restarted Arnoldi iteration, SIAM Journal on Matrix Analysis and Applications (17), 789–821 Lehoucq, R B, Sorensen, D C and Yang, C, 1998, ARPACK Users’ Guide: Solution of Large-scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods, SIAM, Philadelphia
# As a rotation-less field can be expressed as the negative gradient of a potential This preview shows page 4 - 6 out of 26 pages. As a rotation-less field can be expressed as the negative gradient $-\nabla V$ of a potential function $V$, it follows that: $$\mathbf{E} = -\nabla V - \frac{\partial \mathbf{A}}{\partial t} \quad (2.18)$$ $$= \mathbf{E}_c + \mathbf{E}_m \quad (2.19)$$ in which $V$ is the electric potential function. The electric field is hence split up into components $\mathbf{E}_c$ and $\mathbf{E}_m$. The conservative electric field $\mathbf{E}_c$, fully defined by $V$, originates from positive and terminates on negative charges. These charges are described by the charge distribution function $\rho$ in Gauss's law (2.5). The other part of the electric field, $\mathbf{E}_m$, is magnetically induced and is divergence free. It does not originate from charge distributions, neither does it terminate on them. $\mathbf{E}_m$ forms self-closing field lines just like the magnetic field does. In this way, physical meaning can be given to $\mathbf{A}$, since its negative time derivative is the non-conservative component $\mathbf{E}_m$ of the electric field. 2.1.5 Current and Flux The macroscopic quantities electric current $I$ and magnetic flux $\phi$ through a certain surface $S_C$ are obtained by integration of their vector counterparts $\mathbf{J}$ and $\mathbf{B}$ over that surface: $$I = \int_{S_C} \mathbf{J} \cdot d\mathbf{S} \quad (2.20)$$ The surface integration of $\mathbf{B}$ can be replaced by a line integration of the vector potential $\mathbf{A}$ through Stokes' theorem [25]: $$\phi = \int_{S_C} \mathbf{B} \cdot d\mathbf{S} \quad (2.21)$$ $$= \oint_C \mathbf{A} \cdot d\mathbf{C} \quad (2.22)$$ where $C$ is the contour enclosing the surface $S_C$. 2.2 Conductive Wire Induction coils used for inductive powering, and practical inductors in general, are all made up out of metal wires or traces. Hence it is instructive to investigate the electromagnetic field configuration in and around a piece of conductive wire.
The dimensions of the wire and the coil constructed out of it are assumed to be much smaller than a wavelength, so that the electromagnetic field changes can be considered immediate over the volume of interest and hence that only the near field is important. Figure 2.1 depicts a piece of conductive wire. A potential difference $V$ is applied between both terminals. This gives rise to an electric field $\mathbf{E}_c$, also indicated on the figure, which is described by the voltage function $V$ through: $$\mathbf{E}_c = -\nabla V \quad (2.23)$$ $\mathbf{E}_c$ is a conservative vector field that relates to a certain charge density $\rho$ through Gauss's law (2.5). The distribution of these charges over the wire depends on the wire and winding geometry and possibly on the definition of given electric potentials in the direct environment. The charges are schematically indicated with + and − signs for a loop-shaped piece of wire in Fig. 2.1. Note that $\mathbf{E}_c$ may be much stronger in the medium in between terminals, where distances can be shorter, than along the wire. This is especially the case in coil windings where high-potential turns lie next to low-potential turns (in multiple-layer windings for example). These charges are directly responsible for the inter-winding capacitance of a coil (see Sect. 2.4 on inductor models of this chapter).
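Equations (2.21)–(2.22) can be checked numerically in the simplest case: a uniform field $\mathbf{B} = B_0\hat{z}$, for which one valid vector potential is $\mathbf{A} = \frac{1}{2}\mathbf{B}\times\mathbf{r}$. The flux through a disc should match the line integral of $\mathbf{A}$ around its rim (the field strength, radius, and step count below are arbitrary):

```python
import math

# Uniform field B = (0, 0, B0) with vector potential A = (B0/2) * (-y, x, 0).
B0, R, N = 2.0, 0.5, 2000

# Surface integral (2.21): uniform B through a disc of radius R.
flux_surface = B0 * math.pi * R**2

# Line integral (2.22) of A around the circle r(t) = (R cos t, R sin t, 0).
flux_line = 0.0
dt = 2 * math.pi / N
for i in range(N):
    t = i * dt
    x, y = R * math.cos(t), R * math.sin(t)
    Ax, Ay = -B0 * y / 2, B0 * x / 2
    dx, dy = -R * math.sin(t) * dt, R * math.cos(t) * dt
    flux_line += Ax * dx + Ay * dy

# Stokes' theorem: the two fluxes agree.
assert abs(flux_line - flux_surface) < 1e-9
```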
# Height of the water in a rotating water pot 1. Jan 21, 2013 ### Ezio3.1415 1. A pot of water of mass 'm' is rotating about the Y axis with angular velocity ω... The water is not compressible here... What's the height of the water? 2. Fc=m ω^2 r F=mg m/v=ρ=> 3. I solved the question for rotation around the X axis... Fc = mg From that I found out the h But then I found out that it's the Y axis... The small amount of data discourages me from looking for a solution using the Kutta–Joukowski theorem... I was thinking of adding the two forces F and Fc... Then the reaction... My question is: will the reaction on the water by the water pot change the h? Pretty much confused here... 2. Jan 21, 2013 ### haruspex What's Fc? If it's a force the RHS is dimensionally wrong. Edit: Just realised the following 'r' is part of the equation: Fc=m ω^2 r. Still like to know what Fc is. 3. Jan 21, 2013 ### tms Since $\omega = v / r$, $$m \omega^2 r = m (v/r)^2 r = m v^2 / r$$ which is the centripetal force. 4. Jan 21, 2013 ### Ezio3.1415 Fc is the centripetal force... If it was rotating around the X axis, I would just say Fc equals the weight of the object... so that the water remains in the bucket... 5. Jan 21, 2013 ### haruspex I'm puzzled by the discussion about X axis and Y axis. Sounds to me like it's rotating about a vertical axis, and we're looking for how high the water level goes at the side. Is that it? 6. Jan 22, 2013 ### 256bits It appears that the question is about a pot of water rotating about its axis of symmetry. The axis of rotation is the vertical y-axis. Axis of symmetry is important - the pot could be rotating about a vertical axis, but some distance ( r ) from the axis at the end of an arm or having the handle attached to a rope. 7. Jan 22, 2013 ### Ezio3.1415 256bits is right... Yes haruspex that's it... Now to solve the problem... I applied Bernoulli's theorem... But confused about my answer... P is constant... ρ g h1 = 1/2 ρ ω^2 r^2 + ρ g h2 h1 - ω^2 r^2 / (2g) = h2 I don't feel I am right...
I thought the water would be down in the middle and high at the sides... But this approach gives me a lower height at the farther side of the pot... the highest height at the side closer to the vertical axis... what do you think? 8. Jan 22, 2013 ### haruspex 9. Jan 23, 2013 ### Ezio3.1415 what was the conclusion of that thread? What is the shape of the water surface? 10. Jan 23, 2013 ### Staff: Mentor Suppose that the water has been rotating for a long time and has reached a stable equilibrium, wherein all relative motion of the water has ceased (no currents). In the rotating frame of reference we can consider the profile view of the water distribution, and let the curve representing the surface of the water be a function f(r). Now consider a small parcel of water on the surface of the water at some distance r from the axis of rotation. Since all relative motion has ceased and that parcel must be stationary, what must be the net force acting on it? More importantly, in what direction must the "weight" of the parcel be pointing? What can you then say about the slope of the curve f(r) at that point? #### Attached Files: • ###### Fig1.gif File size: 3.4 KB Views: 132 11. Jan 24, 2013 ### 256bits You are solving via energy balance, and the approach looks OK, but you have a sign problem. With ρgh, as h increases the potential energy increases due to the gravitational field. With rotation, as r increases from the axis of rotation, the potential energy DECREASES. So you should have U(0) = U(r) U(0) = ρgh(0) - 1/2 ρ ω^2 r(0)^2 ; where 1/2 ρ ω^2 r(0)^2 = 0 U(r) = ρgh(r) - 1/2 ρ ω^2 r^2 thus ρgh(0) - 0 = ρgh(r) - 1/2 ρ ω^2 r^2 h(0) + ω^2 r^2 / (2g) = h(r) Note that you are using a reference frame moving along with the water, with the fictitious centrifugal force providing the potential energy of an element of liquid at distance r from the axis. See if you can get the same answer, from the post of gneill, using forces in a co-moving reference frame. 12.
Jan 26, 2013 ### Ezio3.1415 Thank you very much for your help guys... :) 13. Jan 26, 2013 ### tms May I ask what you used to create this diagram? Thanks. 14. Jan 26, 2013 ### Staff: Mentor Sure. Microsoft Visio 2000.
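The corrected energy balance in post #11 gives a parabolic free surface, h(r) = h(0) + ω²r²/(2g): lowest on the axis, highest at the rim. A minimal numerical sketch of that result (the pot radius and spin rate below are assumed example values, not from the thread):

```python
def surface_height(r, omega, h0=0.0, g=9.81):
    """Free-surface height at radius r from the vertical axis:
    h(r) = h0 + omega**2 * r**2 / (2*g)  (the result derived in post #11)."""
    return h0 + (omega ** 2) * (r ** 2) / (2.0 * g)

# Assumed example: a pot of radius 0.1 m spun at 10 rad/s.
center = surface_height(0.0, 10.0)
rim = surface_height(0.1, 10.0)
print(f"rise from centre to rim: {rim - center:.4f} m")  # 0.0510 m
```

Note the sign: the surface rises with r², which matches the intuition in post #7 (water down in the middle, high at the sides) once the sign error in the original energy balance is fixed.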
# Illustration Linearly Polarized Plane Wave (E-Field)

Sharing and adapting of the illustration is allowed with indication of the link to the illustration (you may copy and redistribute the material in any medium or format, and remix, transform, and build upon it for any purpose, even commercially).

The E-field of a linearly polarized electromagnetic wave propagating in the $$z$$ direction is: $$\boldsymbol{E} ~=~ \boldsymbol{E}_0 \, \cos(\omega \, t - k\,z)$$ where $$\boldsymbol{E}_0 = (E_{0 \text x}, E_{0 \text y}, 0)$$ is the amplitude, $$\omega$$ the angular frequency, $$t$$ the time, $$k$$ the wavenumber, and $$z$$ the spatial coordinate along the propagation direction. For a linearly polarized wave the electric field $$\boldsymbol{E}$$ oscillates in a fixed plane (here the $$x$$-$$y$$-plane), and the components of the E-field have no phase shift.
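A short numerical check of the statement above: because both components share the identical phase cos(ωt − kz), the tip of the E-vector stays on one fixed line in the x-y plane, i.e. E_x E_{0y} − E_y E_{0x} vanishes at all z and t. The amplitude, frequency, and wavenumber below are assumed example values:

```python
import numpy as np

E0 = np.array([3.0, 4.0, 0.0])   # assumed amplitude (E0x, E0y, 0)
omega = 2 * np.pi * 1e9          # assumed angular frequency (rad/s)
k = 20.94                        # assumed wavenumber (rad/m)

def E_field(z, t):
    """E(z, t) = E0 * cos(omega*t - k*z), the field stated above."""
    return E0 * np.cos(omega * t - k * z)

# Sample the field at a fixed z over two periods and check linear polarization:
# Ex*E0y - Ey*E0x should be (numerically) zero everywhere.
t = np.linspace(0.0, 2e-9, 201)
fields = np.array([E_field(0.5, ti) for ti in t])
cross = fields[:, 0] * E0[1] - fields[:, 1] * E0[0]
print(np.max(np.abs(cross)))  # numerically zero: no phase shift between components
```

A circularly or elliptically polarized wave would fail this check, because a phase shift between E_x and E_y makes the cross term oscillate instead of vanishing.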
# All Questions 184 views ### Value of of gravity 9.8m/sec^2 and centrifugal force [closed] I had a doubt regarding centrifugal force on earth is not accounted in any equations as object falling towards earth.eg f=ma.i read questions asked by few about the same and got to a conclusion that ... 580 views ### Why aren't atoms affected by gravity, but molecules are? I read this book here: http://tiny.cc/Gravity-Atom-Myth It claims that while gravity does affect individual atoms, it's not quantified like molecular mass due to lack of binding proteins which render ... 157 views ### Does the big bang violate the conservation of energy? [duplicate] It is a fact that a thing is existing now because it had already been created. So why don't we take this to account to redefine law of conservation of energy. 178 views ### A ballpark figure on physicists [closed] 1) Out of the 7 billion+ people alive today, around how many have earned a PhD in physics? 2) Around how many physicists are working today? 3) Around how many physicists are being added to the work ... 220 views ### home made atom destruction unit [closed] Today we learnt at school that atoms can be destructed. I believe Physics is a great science to do experiment and I would like to try it at home. Could you tell me what I need to do it? and is it ... 251 views ### Conceptual doubts in EM waves and old quantum theory [closed] I have a few questions. I know that EM waves transfer energy. So when they are generated why do they form curves? Are energy packets moving in a curvy path, or energy packets (quanta) not considered ... 281 views ### GPS Working Principle [closed] Hand-held GPS units in modern phones identify your location by (A) transmitting their location and time to GPS satellites. (B) receiving location data of GPS satellites. (C) ... 124 views ### Is it possible to make a light beam act like a stream of water from a spining hose? 
[closed] If we had the ability to make an actuator that could turn around at or past the speed of light and I attached a high miliwatt laser to it, then spun the laser around on the actuator at the speed of ... 2k views ### man walking in rain:relative velocity [closed] A man walking in rain at a speed of 3kmph find rain to be falling vertically. When he increases his speed to 6 kmph, he finds rain meeting him at an angle of 45 deg to the vertical. What is the ... 92 views ### High School Physics student needs help [closed] A human cannon has a spring constant of 35 000 N/m. The spring can be extended up to 4.5m. How far horizontally would a 65kg clown be fired if the cannon is pointed upward at 45 degrees to the ... 253 views ### maybe the universe has already 'ended'? [closed] I aks you if this reasoning has a base in what we know of universe. There are many articles about so called "vacuum metastability event". As I understand this happens (can happen) with an enormous low ... 177 views ### True or false? Particle physics [closed] It is not possible to prove the point of origin of a photon It is not possible to prove the point of origin of a free electron It is not possible to prove that protons or neutrons exist inside a ... 259 views ### What makes a space a real space? By "real space" I mean a space in which physical particles move. Consider a color sphere and let a bunch of objects "move" on its surface. "Move" means "change colors". Let there be some rules ... 28 views ### Coefficient of Static friction [on hold] Kindly provide the reasoning for the direction of friction chosen... 41 views ### What causes a spacecraft entering the atmosphere to catch fire? [on hold] What causes a spacecraft entering the atmosphere to catch fire? A) surface tension of air B) viscosity of air C) high temperature of upper atmosphere D) greater proportion of ... 20 views ### How do I calculate the angle made by this falling rod? 
[closed] A uniform rod of length 1 is in contact with a smooth vertical wall and a smooth horizontal floor. Initially, the rod is kept almost vertical. Now the rod is released. Find the angle made by the rod ... 18 views ### Current intensity [closed] i want an answer for First question : a and b And Second One , Saying Why you said that answer 60 views ### maths question on levers [closed] Someone holds a 2kg weight in their hand at a distance of 35cm from the elbow (fulcrum) and the downward force (load) due to the weight is 20 Newtons. Calculate the effort in Newtons that the ... 138 views ### Recommended book for beginners on advanced science topics [duplicate] I have a background in engineering so I have some familiarity with basic math and science. I've recently been reading about other topics such as Einstein's relativity and have become interested in ... 283 views ### Alternative derivation for the capacitor energy equation [closed] I hope this is the right place for this kind of post. A friend is trying to derive the equation for the energy stored in a capacitor by analysing the change in potential on one plate when the ... 359 views ### Basic Physics Question [closed] I'm having a difficult time answering this question. I think I'm just converting the units wrong somewhere: You're the CEO of a courier company, and you decide to select an electric car for your ... 104 views ### General Relativity and Time Dilation [duplicate] Is time affected by the gravitational force? If so, what might be the effect on time at the centre or near centre of earth ? 8k views ### Why two balls of different mass dropped from the same height falls the ground at the same time? [duplicate] When two balls of different masses, thrown from equal height they reaches the ground at the same time. Can anyone explain this in terms of laws of Physics(or with mathematical equations)? 
86 views ### Photonics: Slab As a Lens [closed] The question can be found here: http://gyazo.com/fc4d26cd35e6ce368ad2a8ed504f1dcc The refractive index it references can be found here: http://gyazo.com/94fd2f3b5ea7da9226c3acd56b0024c1 I'm not ... 218 views ### Is the Graveyard Really so Serious? Calculations in relation to black holes are solely in consideration of spacetime curvature and its effects. They are in total alienation with respect to the action of inertial agents[external ... 109 views ### super-jump air balloon [closed] We have the following objects: a 80 kg person a rope of negligible weight a balloon filled with helium, which can lift for around the same weight, 80kg. My question is, which of the following ... 468 views ### What really is the smallest “mass” or “object” in the universe? Look at this here. With respect to the sciences, the atom is obviously not the smallest piece of mass. Apparently, if people have already broken down the atom in to particles smaller than so, why ... 242 views ### What made up light photons? [duplicate] mass is energy per c square $m=E/c^2$ energy is made up of photons but what made up photon itself? what made up a single photon? Replay to comment: but as we can see in history early phyisicists ... 833 views ### Is antigravity the source of accelerating expansion (dark energy)? Is antigravity the source of accelerating expansion(dark energy)? From the observation of 1998, we found that our universe has been continuing accelerating expansion, and the unknown cause for this ... 2k views ### Does Quantum Physics really suggests this universe as a computer simulation? [closed] I was reading about interesting article here which suggests that our universe is a big computer simulation and the proof of it is a Quantum Physics. I know quantum physics tries to provide some ... 2k views ### Do radio waves go faster than the speed of light? 
[closed] My science teacher used to say a lot of weird stuff, but I'm just making sure on this one. 172 views ### Does one second exist? [closed] Let us assume .5 second has passed. Now similarly let .9 seconds also passed.. Then let's say that .99 second has passed...we're still not done because 1 second hasn't passed. Then follows .999 ... 730 views ### Violation of Newton's Second Law (?) [closed] Here the big circle denotes the circular path of a stone (small circle on path) tied to a string from the centre of the circular path. This is COMPLETELY HORIZONTAL At an instant the velocity in ... 1k views ### Should linear algebra and vector calculus from traditional courses be replaced with geometric algebra? [closed] geometric algebra gives geometric meaning to linear algebra and much more. it can provide a coordinate free geometric interpretation of spaces. those who learn of ... 405 views ### Entanglement, really? [duplicate] If I have two "entangled" particles and I know the spin state of every one of them. Then, I change the spin state of one of the particles, will it affect the spin state of the other particle even if ... 149 views ### Street Light Interference Phenomenon [closed] Is there a scientific approach that can explain the street light interference phenomenon? Everytime I walk past a Streetlight it turns off. 140 views ### How do I publish a physical constant [closed] I think what i've found is a physical constant that is a physical quantity, universal in nature and constant in time. but It contrasted with a mathematical constant, which is a fixed mathematical ... 203 views ### $E=mc^2$ why is it $c^2$ and not just $c$? Why is constant for the conversion of mass to energy square of the ligths speed? is it bedside it's the fastest real matter? 195 views ### Do events exist after our death if we can't measure them? [closed] The great physicist Raphael Bousso predicted time will end in this article. 
We can't measure anything after our death in principle. So, does time end when we die? 40 views ### About how much force would something have to exert to be effectively unstoppable? [closed] Assuming an object is moving in a straight line propelled by a force. How much energy would that force have to exert so that there are no known forces or objects that could stop it from moving in that ... 11k views ### Is Athene's Theory of Everything a respectable theory? [closed] Athene's Theory of Everything is a very popular youtube video proposing a "theory of everything": How respectable is this video? Is it complete hogwash, or is there an element of truth to it? Can ... 346 views ### Could a bubble of photons make a spaceship massless? [on hold] I'm not sure how theoretically possible this is but my question is... If we could somehow make a perfect bubble of photons (a massless bubble) and put a spaceship inside it, could it therefore ... 52 views ### Temperature limit of the increase in heat If the sun is the hottest known thing to humans is it possible to have a temperature greater than the sun? 97 views ### What is an $n$ dimensional space? [closed] How an $n$ dimensional space looks like? Is it possible that we are really in a space of dimension greater than 3? 716 views ### How can Young's modulus be dimensionless, but still have units? [closed] According to this wikipedia entry: Young's modulus is the ratio of stress, which has units of pressure, to strain, which is dimensionless; therefore, Young's modulus has units of pressure. From ... 115 views
# Appendix A - Elastic Block Store (EBS)

EBS can be used to store HDB data, and is fully compliant with kdb+. It supports all of the POSIX semantics required. Three variants of the Elastic Block Store (EBS) are all qualified by kdb+: gp2 and io1 are both NAND Flash, but offer different price/performance points, and st1 is composed of traditional drives. Unlike ephemeral SSD storage, EBS-based storage can be dynamically provisioned to any other EC2 instance via operator control, so it is a candidate for on-demand HDB storage: assign the storage to an instance in build scripts and then spin the instances up. (Ref: Amazon EBS)

A disadvantage of EBS is that, even if the data is read-only (immutable), a specific volume cannot be simultaneously mounted and shared between two or more EC2 instances. Furthermore, an elastic volume would have to be migrated from one instance's ownership to another, either manually or with launch scripts. EBS snapshots can be used to regenerate an elastic volume, copied across to other freshly created EBS volumes, which are subsequently shared around with a new instance being deployed on demand. Therefore, users of EBS or direct attach holding significant volumes of historical data may need to replicate the data to avoid constraining it to just one node. You could also shard the data manually, then access the attached nodes via a kdb+ gateway UI.

EBS is carried over the local network within one availability zone. Between availability zones, IP L3 routing protocols are involved in moving the data between zones, so the latencies would be increased.

EBS may look like a disk, act like a disk, and walk like a disk, but it doesn't behave like a disk in the traditional sense. There are constraints on calculating the throughput gained from EBS:

• There is a max throughput to/from each physical EBS volume. This is set to 500 MB/sec for io1 and 160 MB/sec for gp2.
A gp2 volume can range in size from 1 GB to 16 TB. You can use multiple volumes per instance (and we would expect to see that in place with a HDB).

• There is a further limit to the volume throughput, based on its size at creation time. For example, a gp2 volume provides a baseline rate of IOPS geared to the size of the volume, calculated on the basis of 3 IOPS per GB. For a 200 GB volume we get 600 IOPS, and at 1 MB per operation that exceeds the per-volume cap above, so the lower value remains the cap. The burst peak IOPS figure is more meaningful for random, small reads of kdb+ data.

• For gp2 volumes there is a burst-level cap, but this increases as the volume gets larger. The burst level peaks at a volume size of 1 TB, at 3000 IOPS: that would be 384 MB/sec at 128 KB records, which again is in excess of the 160 MB/sec cap.

• There is a maximum network bandwidth per instance. In the case of the unit under test here, r4.4xlarge, this constrains the throughput to the instance to 3500 Mbps, or a wire speed of 430 MB/sec, capped. This would be elevated with larger instances, up to a maximum of 25 Gbps for a large instance such as r4.16xlarge.

• It is important to note that EBS scales linearly across an entire estate (e.g. parallel peach queries). There should be no constraints if you are accessing your data, splayed across distinct physical volumes on distinct instances; e.g. 10 nodes of r4.4xlarge are capable of reading 4300 MB/sec.

Kdb+ achieves or meets all of these advertised figures, so the EBS network bandwidth algorithms become the dominating factor in any final calculations for your environment. For consistency in all of these evaluations, we tested with a common baseline using an r4.4xlarge instance with four 200-GB volumes, each with one xfs file system per volume, therefore using four mount points (four partitions).
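The per-volume arithmetic above can be sketched in a few lines, using only the figures quoted in this appendix (3 IOPS/GB baseline, 3000-IOPS burst peak, 160 MB/sec gp2 volume cap, ~430 MB/sec r4.4xlarge wire cap). This is an illustration of the reasoning, not an AWS sizing tool:

```python
def gp2_baseline_iops(size_gb):
    """Baseline IOPS for a gp2 volume: 3 IOPS per provisioned GB."""
    return 3 * size_gb

def volume_throughput_mb(iops, record_mb, cap_mb=160):
    """Effective per-volume MB/sec: IOPS x record size, held to the gp2 volume cap."""
    return min(iops * record_mb, cap_mb)

# 200 GB volume: 600 baseline IOPS; at 1 MB records that is 600 MB/sec,
# so the 160 MB/sec per-volume cap dominates.
print(gp2_baseline_iops(200))             # 600
print(volume_throughput_mb(600, 1.0))     # 160
print(volume_throughput_mb(3000, 0.128))  # 160 (burst would be 384 MB/sec before the cap)

# Four such volumes on an r4.4xlarge: 4 x 160 = 640 MB/sec of volume bandwidth,
# but the ~430 MB/sec instance wire cap then dominates.
print(min(4 * volume_throughput_mb(600, 1.0), 430))  # 430
```

The same pattern (take the minimum of the volume cap, the IOPS-derived rate, and the instance wire cap) extends to the larger r4.16xlarge configuration described below.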
To show the scaling to higher throughputs we used an r4.16xlarge instance with more volumes: eight 500-GB targets (host max bandwidth there of 20 Gbps, compared with a max EBS bandwidth of 1280 MB/sec), and we ran the comparison on the gp2 and io1 versions of EBS storage. For the testing of st1 storage, we used four 6-TB volumes, as each of these could burst between 240 and 500 MB/sec. We then compared the delta between the two instance sizes.

## EBS-GP2

| function | latency (mSec) | function | latency (mSec) |
| --- | --- | --- | --- |
| hclose hopen | 0.004 | ();,;2 3 | 0.006 |
| hcount | 0.002 | read1 | 0.018 |

## EBS-IO1

| function | latency (mSec) | function | latency (mSec) |
| --- | --- | --- | --- |
| hclose hopen | 0.003 | ();,;2 3 | 0.006 |
| hcount | 0.002 | read1 | 0.017 |

## EBS-ST1

| function | latency (mSec) | function | latency (mSec) |
| --- | --- | --- | --- |
| hclose hopen | 0.003 | ();,;2 3 | 0.04 |
| hcount | 0.002 | read1 | 0.02 |

## Summary

Kdb+ matches the expected throughput of EBS for all classifications, with no major deviations across all classes of read patterns required. EBS-IO1 achieves slightly higher throughput metrics than GP2, and does so at a guaranteed IOPS rate. Its operational latency is very slightly lower for metadata and random reads. When considering EBS for kdb+, take the following into consideration:

• Due to the private-only presentation of EBS volumes, you may wish to consider EBS for solutions that shard/segment their HDB data between physical nodes in a cluster/gateway architecture. Or you may choose to use EBS for locally cached historical data, with other file systems behind EBS holding full or partial copies of the entire HDB.

• Fixed bandwidth per node: in our test cases, the instance throughput limit of circa 430 MB/sec for r4.4xlarge is easily achieved with these tests. Contrast that with the increased throughput gained with the larger r4.16xlarge instance. Use this precept in your calculations.

• There is a fixed throughput per gp2 volume, maxing at 160 MB/sec, but multiple volumes will increment that value up to the peak achievable in the instance definition.
Kdb+ achieves that instance peak throughput.

• Server-side kdb+ inline compression works very well for streaming and random 1-MB read throughputs: the CPU essentially keeps up with the lower rate of compressed data ingest from EBS. For random reads with many processes, read-ahead and decompression running in parallel can magnify the input bandwidth, roughly in line with the compression rate.

• st1 works well for streaming reads, but suffers high latencies for any form of random searching. Given the lower capacity cost of st1, you may wish to consider it for data intended for streaming reads only, e.g. older data.
# 8.5 Additional information and full hypothesis test examples

The next example is a poem written by a statistics student named Nicole Hart. The solution to the problem follows the poem. Notice that the hypothesis test is for a single population proportion. This means that the null and alternate hypotheses use the parameter p. The distribution for the test is normal. The estimated proportion p′ is the proportion of fleas killed out of the total fleas found on Fido. This is sample information. The problem gives a preconceived α = 0.01 for comparison, and a 95% confidence interval computation. The poem is clever and humorous, so please enjoy it!

My dog has so many fleas,
They do not come off with ease.
As for shampoo, I have tried many types
Even one called Bubble Hype,
Which only killed 25% of the fleas,
I've used all kinds of soap,
Until I had given up hope
Until one day I saw
An ad that put me in awe.
A shampoo used for dogs
Called GOOD ENOUGH to Clean a Hog
Guaranteed to kill more fleas.

I gave Fido a bath
And after doing the math
His number of fleas
Started dropping by 3's!
Before his shampoo
I counted 42.
At the end of his bath,
I redid the math
And the new shampoo had killed 17 fleas.
Now it is time for you to have some fun
With the level of significance being .01,
You must help me figure out
Use the new shampoo or go without?

Set up the hypothesis test: H₀: p ≤ 0.25    Hₐ: p > 0.25

Determine the distribution needed: In words, CLEARLY state what your random variable $\overline{X}$ or P′ represents.

P′ = the proportion of fleas that are killed by the new shampoo

State the distribution to use for the test: normal.

Test statistic: z = 2.3163

Calculate the p-value using the normal distribution for proportions: p-value = 0.0103

In one to two complete sentences, explain what the p-value means for this problem.
If the null hypothesis is true (the proportion is 0.25), then there is a 0.0103 probability that the sample (estimated) proportion is 0.4048 $\left(\frac{17}{42}\right)$ or more.

Use the previous information to sketch a picture of this situation. CLEARLY label and scale the horizontal axis and shade the region(s) corresponding to the p-value.

Compare α and the p-value: Indicate the correct decision ("reject" or "do not reject" the null hypothesis), the reason for it, and write an appropriate conclusion, using complete sentences.

| alpha | decision | reason for decision |
| --- | --- | --- |
| 0.01 | Do not reject ${H}_{0}$ | α < p-value |

Conclusion: At the 1% level of significance, the sample data do not show sufficient evidence that the percentage of fleas that are killed by the new shampoo is more than 25%.

Construct a 95% confidence interval for the true mean or proportion. Include a sketch of the graph of the situation. Label the point estimate and the lower and upper bounds of the confidence interval.

Confidence Interval: (0.26, 0.55) We are 95% confident that the true population proportion p of fleas that are killed by the new shampoo is between 26% and 55%.

## Note

This test result is not very definitive since the p-value is very close to alpha. In reality, one would probably do more tests by giving the dog another bath after the fleas have had a chance to return.
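The test statistic, p-value, and confidence interval quoted above can be reproduced in a few lines (a normal-approximation z-test for one proportion, using only the standard library; `erfc` gives the right-tail normal probability):

```python
import math

n, x, p0 = 42, 17, 0.25                       # total fleas, fleas killed, null proportion
p_hat = x / n                                 # sample proportion, ~0.4048
se = math.sqrt(p0 * (1 - p0) / n)             # standard error under H0
z = (p_hat - p0) / se                         # test statistic
p_value = 0.5 * math.erfc(z / math.sqrt(2))   # right-tail P(Z >= z)

# 95% confidence interval, using the sample proportion for the standard error
half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - half, p_hat + half)

print(f"z = {z:.4f}, p-value = {p_value:.4f}")  # z = 2.3163, p-value = 0.0103
print(f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")   # 95% CI = (0.26, 0.55)
```

Since α = 0.01 < 0.0103 = p-value, the numbers confirm the "do not reject H₀" decision in the table above.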
The Galaxy Morphology Network (GaMorNet) is a convolutional neural network that can classify galaxies as being disk-dominated, bulge-dominated or indeterminate based on their bulge-to-total light ratio. GaMorNet doesn't need a large amount of training data and can work across different data-sets. For more details about GaMorNet's design, how it was trained, etc., please refer to Publication & Other Data.

Schematic diagram of Galaxy Morphology Network.

## First contact with GaMorNet

GaMorNet's user-facing functions have been written in a way that makes them easy to start using, even if you have not dealt with convolutional neural networks before. For example, to perform predictions on an array of SDSS images using our trained models, the following line of code is all you need.

from gamornet.keras_module import gamornet_predict_keras

In order to start using GaMorNet, please first look at the Getting Started section for instructions on how to install GaMorNet. Thereafter, we recommend trying out the Tutorials in order to get a handle on how to use GaMorNet. Finally, you should have a look at the Public Data Release Handbook for our recommendations on how to use different elements of GaMorNet's public data release for your own work, and the API Documentation for detailed documentation of the different functions in the module.

## Publication & Other Data

You can look at this ApJ paper to learn the details about GaMorNet's architecture, how it was trained, and other details not mentioned in this documentation. We strongly suggest you read the above-mentioned publication if you are going to use our trained models for performing predictions or as the starting point for training your own models. All the different elements of the public data release (including the new Keras models) are summarized in the Public Data Release Handbook.

Please cite the above-mentioned publication if you make use of this software module or some code herein.
@article{Ghosh2020, doi = {10.3847/1538-4357/ab8a47}, url = {https://doi.org/10.3847/1538-4357/ab8a47}, year = {2020}, month = jun, publisher = {American Astronomical Society}, volume = {895}, number = {2}, pages = {112}, author = {Aritra Ghosh and C. Megan Urry and Zhengdong Wang and Kevin Schawinski and Dennis Turp and Meredith C. Powell}, title = {Galaxy Morphology Network: A Convolutional Neural Network Used to Study Morphology and Quenching in $\sim$100,000 {SDSS} and $\sim$20,000 {CANDELS} Galaxies}, journal = {The Astrophysical Journal} } Additionally, if you want, please include the following text in the Software/Acknowledgment section. This work uses trained models/software made available as a part of the Galaxy Morphology Network public data release. If you have a question, please first have a look at the FAQs section. If your question is not answered there, please send me an e-mail at this [email protected] GMail address.
JEE Main Mathematics Complex Numbers Previous Years Questions Let $$a,b$$ be two real numbers such that $$ab ... If the center and radius of the circle$$\left| {{{z - 2} \over {z - 3}}} \right| = 2$$are respectively$$(\alpha,\beta)$$and$$\gamma$$, then$$3(\... The complex number $z=\frac{i-1}{\cos \frac{\pi}{3}+i \sin \frac{\pi}{3}}$ is equal to : For all $$z \in C$$ on the curve $$C_{1}:|z|=4$$, let the locus of the point $$z+\frac{1}{z}$$ be the curve $$\mathrm{C}_{2}$$. Then : For two non-zero complex numbers $$z_{1}$$ and $$z_{2}$$, if $$\operatorname{Re}\left(z_{1} z_{2}\right)=0$$ and $$\operatorname{Re}\left(z_{1}+z_{2}\... Let$$z$$be a complex number such that$$\left| {{{z - 2i} \over {z + i}}} \right| = 2,z \ne - i$$. Then$$z$$lies on the circle of radius 2 and ce... Let$$\mathrm{z_1=2+3i}$$and$$\mathrm{z_2=3+4i}$$. The set$$\mathrm{S = \left\{ {z \in \mathbb{C}:{{\left| {z - {z_1}} \right|}^2} - {{\left| {z - ... The value of $${\left( {{{1 + \sin {{2\pi } \over 9} + i\cos {{2\pi } \over 9}} \over {1 + \sin {{2\pi } \over 9} - i\cos {{2\pi } \over 9}}}} \right)... Let$$\mathrm{p,q\in\mathbb{R}}$$and$${\left( {1 - \sqrt 3 i} \right)^{200}} = {2^{199}}(p + iq),i = \sqrt { - 1} $$then$$\mathrm{p+q+q^2}$$and ... If$$z \neq 0$$be a complex number such that$$\left|z-\frac{1}{z}\right|=2$$, then the maximum value of$$|z|$$is : Let$$\mathrm{S}=\{z=x+i y:|z-1+i| \geq|z|,|z|... If $$z=2+3 i$$, then $$z^{5}+(\bar{z})^{5}$$ is equal to : Let $$S_{1}=\left\{z_{1} \in \mathbf{C}:\left|z_{1}-3\right|=\frac{1}{2}\right\}$$ and $$S_{2}=\left\{z_{2} \in \mathbf{C}:\left|z_{2}-\right| z_{2}+1... Let S be the set of all$$(\alpha, \beta), \pi... Let the minimum value $$v_{0}$$ of $$v=|z|^{2}+|z-3|^{2}+|z-6 i|^{2}, z \in \mathbb{C}$$ is attained at $${ }{z}=z_{0}$$. Then $$\left|2 z_{0}^{2}-\b... If$$z=x+i y$$satisfies$$|z|-2=0$$and$$|z-i|-|z+5 i|=0$$, then Let O be the origin and A be the point$${z_1} = 1 + 2i$$. 
If B is the point$${z_2}$$,$${\mathop{\rm Re}\nolimits} ({z_2}) ... For $$z \in \mathbb{C}$$ if the minimum value of $$(|z-3 \sqrt{2}|+|z-p \sqrt{2} i|)$$ is $$5 \sqrt{2}$$, then a value Question: of $$p$$ is _________... If $$\alpha, \beta, \gamma, \delta$$ are the roots of the equation $$x^{4}+x^{3}+x^{2}+x+1=0$$, then $$\alpha^{2021}+\beta^{2021}+\gamma^{2021}+\delta... For$$\mathrm{n} \in \mathbf{N}$$, let$$\mathrm{S}_{\mathrm{n}}=\left\{z \in \mathbf{C}:|z-3+2 i|=\frac{\mathrm{n}}{4}\right\}$$and$$\mathrm{T}_{\m... The real part of the complex number $${{{{(1 + 2i)}^8}\,.\,{{(1 - 2i)}^2}} \over {(3 + 2i)\,.\,\overline {(4 - 6i)} }}$$ is equal to : Let arg(z) represent the principal argument of the complex number z. Then, |z| = 3 and arg(z $$-$$ 1) $$-$$ arg(z + 1) = $${\pi \over 4}$$ intersect... Let $$\alpha$$ and $$\beta$$ be the roots of the equation x2 + (2i $$-$$ 1) = 0. Then, the value of |$$\alpha$$8 + $$\beta$$8| is equal to:... The number of points of intersection of $$|z - (4 + 3i)| = 2$$ and $$|z| + |z - 4| = 6$$, z $$\in$$ C, is The area of the polygon, whose vertices are the non-real roots of the equation $$\overline z = i{z^2}$$ is : Let $$A = \left\{ {z \in C:\left| {{{z + 1} \over {z - 1}}} \right| ... Let z1 and z2 be two complex numbers such that$${\overline z _1} = i{\overline z _2}$$and$$\arg \left( {{{{z_1}} \over {{{\overline z }_2}}}} \righ... Let a circle C in complex plane pass through the points $${z_1} = 3 + 4i$$, $${z_2} = 4 + 3i$$ and $${z_3} = 5i$$. If $$z( \ne {z_1})$$ is a point on ... Let $$A = \{ z \in C:1 \le |z - (1 + i)| \le 2\}$$ and $$B = \{ z \in A:|z - (1 - i)| = 1\}$$. 
Then, B : If z is a complex number such that $${{z - i} \over {z - 1}}$$ is purely imaginary, then the minimum value of | z $$-$$ (3 + 3i) | is : If $$S = \left\{ {z \in C:{{z - i} \over {z + 2i}} \in R} \right\}$$, then : If $${\left( {\sqrt 3 + i} \right)^{100}} = {2^{99}}(p + iq)$$, then p and q are roots of the equation : The equation $$\arg \left( {{{z - 1} \over {z + 1}}} \right) = {\pi \over 4}$$ represents a circle with : Let C be the set of all complex numbers. LetS1 = {z$$\in$$C : |z $$-$$ 2| $$\le$$ 1} and S2 = {z$$\in$$C : z(1 + i) + $$\overline z$$(1 $$-$$ i) $$\g... Let C be the set of all complex numbers. Let$${S_1} = \{ z \in C||z - 3 - 2i{|^2} = 8\} {S_2} = \{ z \in C|{\mathop{\rm Re}\nolimits} (z) \ge 5\} ... Let n denote the number of solutions of the equation z2 + 3$$\overline z$$ = 0, where z is a complex number. Then the value of $$\sum\limits_{k = 0}^... If z and$$\omega$$are two complex numbers such that$$\left| {z\omega } \right| = 1$$and$$\arg (z) - \arg (\omega ) = {{3\pi } \over 2}$$, then$$... Let a complex number be w = 1 $$-$$ $${\sqrt 3 }$$i. Let another complex number z be such that |zw| = 1 and arg(z) $$-$$ arg(w) = $${\pi \over 2}$$. ... If the equation $$a|z{|^2} + \overline {\overline \alpha z + \alpha \overline z } + d = 0$$ represents a circle where a, d are real constants then w... Let S1, S2 and S3 be three sets defined asS1 = {z$$\in$$C : |z $$-$$ 1| $$\le$$ $$\sqrt 2$$}S2 = {z$$\in$$C : Re((1 $$-$$ i)z) $$\ge$$ 1}S3 = {z$... The area of the triangle with vertices A(z), B(iz) and C(z + iz) is : The least value of |z| where z is complex number which satisfies the inequality $$\exp \left( {{{(|z| + 3)(|z| - 1)} \over {||z| + 1|}}{{\log }_e}2} \... Let a complex number z, |z|$$\ne$$1, satisfy$${\log _{{1 \over {\sqrt 2 }}}}\left( {{{|z| + 11} \over {{{(|z| - 1)}^2}}}} \right) \le 2$$. Then, th... If$$\alpha$$,$$\beta\in$$R are such that 1$$-$$2i (here i2 =$$-$$1) is a root of z2 +$$\alpha$$z +$$\beta$$= 0, then ($$\alpha-$$... 
- Let the lines (2 $$-$$ i)z = (2 + i)$$\overline z$$ and (2 $$+$$ i)z + (i $$-$$ 2)$$\overline z$$ $$-$$ 4i = 0 (here i^2 = $$-$$ 1) be normal to a ci...
- Let z = x + iy be a non-zero complex number such that $${z^2} = i{\left| z \right|^2}$$, where i = $$\sqrt { - 1}$$, then z lies on the :
- The region represented by {z = x + iy $$\in$$ C : |z| – Re(z) $$\le$$ 1} is also given by the inequality: {z = x + iy $$\in$$ C : |z| – Re(z) $$...
- The value of $${\left( {{{ - 1 + i\sqrt 3 } \over {1 - i}}} \right)^{30}}$$ is :
- If the four complex numbers $$z,\overline z ,\overline z - 2{\mathop{\rm Re}\nolimits} \left( {\overline z } \right)$$ and $$z-2Re(z)$$ represent the...
- If a and b are real numbers such that $${\left( {2 + \alpha } \right)^4} = a + b\alpha$$ where $$\alpha = {{ - 1 + i\sqrt 3 } \over 2}$$, then a + b...
- Let $$u = {{2z + i} \over {z - ki}}$$, z = x + iy and k > 0. If the curve represented by Re(u) + Im(u) = 1 intersects the y-axis at the points P a...
- If z1, z2 are complex numbers such that Re(z1) = |z1 – 1|, Re(z2) = |z2 – 1|, and arg(z1 $$-$$ z2) = $${\pi \over 6}$$, then Im(z1 + z2) is equal t...
- The imaginary part of $${\left( {3 + 2\sqrt { - 54} } \right)^{{1 \over 2}}} - {\left( {3 - 2\sqrt { - 54} } \right)^{{1 \over 2}}}$$ can be :
- The value of $${\left( {{{1 + \sin {{2\pi } \over 9} + i\cos {{2\pi } \over 9}} \over {1 + \sin {{2\pi } \over 9} - i\cos {{2\pi } \over 9}}}} \right)...
- If z be a complex number satisfying |Re(z)| + |Im(z)| = 4, then |z| cannot be
- Let z be a complex number such that $$\left| {{{z - i} \over {z + 2i}}} \right| = 1$$ and |z| = $${5 \over 2}$$. Then the value of |z + 3i| is :...
- If the equation x^2 + bx + 45 = 0 (b $$\in$$ R) has conjugate complex roots and they satisfy |z + 1| = 2$$\sqrt {10}$$, then
- If $${{3 + i\sin \theta } \over {4 - i\cos \theta }}$$, $$\theta$$ $$\in$$ [0, 2$$\pi$$], is a real number, then an argument of sin$$\theta$$...
If$${\mathop{\rm Re}\nolimits} \left( {{{z - 1} \over {2z + i}}} \right) = 1$$, where z = x + iy, then the point (x, y) lies on a: Let z$$ \in $$C with Im(z) = 10 and it satisfies$${{2z - n} \over {2z + n}}$$= 2i - 1 for some natural number n. Then : The equation |z – i| = |z – 1|, i =$$\sqrt { - 1} $$, represents : If z and w are two complex numbers such that |zw| = 1 and arg(z) – arg(w) =$${\pi \over 2}$$, then: If a > 0 and z =$${{{{\left( {1 + i} \right)}^2}} \over {a - i}}$$, has magnitude$$\sqrt {{2 \over 5}} $$, then$$\overline z $$is equal to : Let z$$ \in $$C be such that |z| < 1. If$$\omega = {{5 + 3z} \over {5(1 - z)}}$$z, then:- All the points in the set$$S = \left\{ {{{\alpha + i} \over {\alpha - i}}:\alpha \in R} \right\}(i = \sqrt { - 1} )$$lie on a If$$z = {{\sqrt 3 } \over 2} + {i \over 2}\left( {i = \sqrt { - 1} } \right)$$, then (1 + iz + z5 + iz8)9 is equal to... If$$\alpha $$and$$\beta $$be the roots of the equation x2 – 2x + 2 = 0, then the least value of n for which$${\left( {{\alpha \over \beta }} \ri... Let z1 and z2 be two complex numbers satisfying | z1 | = 9 and | z2 – 3 – 4i | = 4. Then the minimum value of | z1 – z2 | is :... If $${{z - \alpha } \over {z + \alpha }}\left( {\alpha \in R} \right)$$ is a purely imaginary number and | z | = 2, then a value of $$\alpha$$ is : Let z be a complex number such that |z| + z = 3 + i (where i = $$\sqrt { - 1}$$). Then |z| is equal to Let $${\left( { - 2 - {1 \over 3}i} \right)^3} = {{x + iy} \over {27}}\left( {i = \sqrt { - 1} } \right),\,\,$$ where x and y are real numbers, then ... Let $$z = {\left( {{{\sqrt 3 } \over 2} + {i \over 2}} \right)^5} + {\left( {{{\sqrt 3 } \over 2} - {i \over 2}} \right)^5}.$$ If R(z) and 1(z) respec... Let z1 and z2 be any two non-zero complex numbers such that $$3\left| {{z_1}} \right| = 4\left| {{z_2}} \right|.$$ If &nbs... Let z0 be a root of the quadratic equation, x2 + x + 1 = 0, If z = 3 + 6iz$$_0^{81}$$ $$-$$ 3iz$$_0^{93}$$, then arg z is equal to : ... 
- Let $$\alpha$$ and $$\beta$$ be two roots of the equation x^2 + 2x + 2 = 0, then $$\alpha ^{15}$$ + $$\beta ^{15}$$ is equal to :
- Let A = $$\left\{ {\theta \in \left( { - {\pi \over 2},\pi } \right):{{3 + 2i\sin \theta } \over {1 - 2i\sin \theta }}\,is\,purely\,imaginary} \right\...
- The least positive integer n for which $${\left( {{{1 + i\sqrt 3 } \over {1 - i\sqrt 3 }}} \right)^n} = 1$$ is :
- If $$\alpha ,\beta \in C$$ are the distinct roots of the equation x^2 $$-$$ x + 1 = 0, then $${\alpha ^{101}} + {\beta ^{107}}$$ is equal to...
- If |z $$-$$ 3 + 2i| $$\le$$ 4, then the difference between the greatest value and the least value of |z| is :
- The set of all $$\alpha$$ $$\in$$ R, for which w = $${{1 + \left( {1 - 8\alpha } \right)z} \over {1 - z}}$$ is purely imaginary number, for all z ...
- Let $$\omega$$ be a complex number such that 2$$\omega$$ + 1 = z where z = $$\sqrt {-3}$$. If $$\left| {\matrix{ 1 & 1 & 1 \cr 1 &a...
- The point represented by 2 + i in the Argand plane moves 1 unit eastwards, then 2 units northwards and finally from there $$2\sqrt 2$$ units in the s...
- A value of $$\theta$$ for which $${{2 + 3i\sin \theta } \over {1 - 2i\sin \theta }}$$ is purely imaginary, is :
- A complex number z is said to be unimodular if $$\left| z \right| = 1$$. Suppose $${z_1}$$ and $${z_2}$$ are complex numbers such that $${{{z_1} - 2...
- If z is a complex number such that $$\left| z \right| \ge 2$$, then the minimum value of $$\left| {z + {1 \over 2}} \right|$$ :
- If z is a complex number of unit modulus and argument $$\theta$$, then arg$$\left( {{{1 + z} \over {1 + \overline z }}} \right)$$ equals :
- If $$z \ne 1$$ and $${{{z^2}} \over {z - 1}}$$ is real, then the point represented by the complex number z lies :
- Let $$\alpha ,\beta$$ be real and z be a complex number. If $${z^2} + \alpha z + \beta = 0$$ has two distinct roots on the line Re z = 1, then it ...
- If $$\omega ( \ne 1)$$ is a cube root of unity, and $${(1 + \omega )^7} = A + B\omega$$, then $$(A,B)$$ equals
- The number of complex numbers z such that $$\left| {z - 1} \right| = \left| {z + 1} \right| = \left| {z - i} \right|$$ equals
- If the conjugate of a complex number is $${1 \over {i - 1}}$$, then that complex number is
- Let R be the real line. Consider the following subsets of the plane $$R \times R$$ : $$S = \left\{ {(x,y):y = x + 1\,\,and\,\,0 < x < 2} \right...
- If $$\left| {z + 4} \right| \le 3$$, then the maximum value of $$\left| {z + 1} \right|$$ is
- If $${z^2} + z + 1 = 0$$, where z is a complex number, then the value of $${\left( {z + {1 \over z}} \right)^2} + {\left( {{z^2} + {1 \over {{z^2}}}} \right...
- The value of $$\sum\limits_{k = 1}^{10} {\left( {\sin {{2k\pi } \over {11}} + i\cos {{2k\pi } \over {11}}} \right)}$$ is
- If $${z_1}$$ and $${z_2}$$ are two non-zero complex numbers such that $$\left| {{z_1} + {z_2}} \right| = \left| {{z_1}} \right| + \left| {{z_2}} \ri...
- If the cube roots of unity are 1, $$\omega ,{\omega ^2}$$, then the roots of the equation $${(x - 1)^3} + 8 = 0$$ are
- If $$\omega = {z \over {z - {1 \over 3}i}}$$ and $$\left| \omega \right| = 1$$, then $$z$$ lies on
- Let z and w be complex numbers such that $$\overline z + i\overline w = 0$$ and arg zw = $$\pi$$. Then arg z equals
- If $$z = x - iy$$ and $${z^{{1 \over 3}}} = p + iq$$, then $${{\left( {{x \over p} + {y \over q}} \right)} \over {\left( {{p^2} + {q^2}} \right)}}$$ ...
- If $$\left| {{z^2} - 1} \right| = {\left| z \right|^2} + 1$$, then z lies on
- If $$z$$ and $$\omega$$ are two non-zero complex numbers such that $$\left| {z\omega } \right| = 1$$ and $$Arg(z) - Arg(\omega ) = {\pi \over 2}$$, ...
- Let $${Z_1}$$ and $${Z_2}$$ be two roots of the equation $${Z^2} + aZ + b = 0$$, Z being complex. Further, assume that the origin, $${Z_1}$$ and $${Z...
If$${\left( {{{1 + i} \over {1 - i}}} \right)^x} = 1$$then z and w are two nonzero complex numbers such that$$\,\left| z \right| = \left| w \right|$$and Arg z + Arg w =$$\pi $$then z equals If$$\left| {z - 4} \right| < \left| {z - 2} \right|$$, its solution is given by The locus of the centre of a circle which touches the circle$$\left| {z - {z_1}} \right| = a$$and$$\left| {z - {z_2}} \right| = b\,$$externally (... ## Numerical Let$$z=1+i$$and$$z_{1}=\frac{1+i \bar{z}}{\bar{z}(1-z)+\frac{1}{z}}$$. Then$$\frac{12}{\pi} \arg \left(z_{1}\right)$$is equal to __________.... Let$$\alpha = 8 - 14i,A = \left\{ {z \in c:{{\alpha z - \overline \alpha \overline z } \over {{z^2} - {{\left( {\overline z } \right)}^2} - 112i}}=... Let $$\mathrm{z}=a+i b, b \neq 0$$ be complex numbers satisfying $$z^{2}=\bar{z} \cdot 2^{1-z}$$. Then the least value of $$n \in N$$, such that $$z^... Let$$S=\left\{z \in \mathbb{C}: z^{2}+\bar{z}=0\right\}$$. Then$$\sum\limits_{z \in S}(\operatorname{Re}(z)+\operatorname{Im}(z))$$is equal to ____... Let$$S = \{ z \in C:|z - 2| \le 1,\,z(1 + i) + \overline z (1 - i) \le 2\} $$. Let$$|z - 4i|$$attains minimum and maximum values, respectively, at ... Sum of squares of modulus of all the complex numbers z satisfying$$\overline z = i{z^2} + {z^2} - z$$is equal to ___________. The number of elements in the set {z = a + ib$$\in$$C : a, b$$\in$$Z and 1 If$${z^2} + z + 1 = 0$$,$$z \in C$$, then$$\left| {\sum\limits_{n = 1}^{15} {{{\left( {{z^n} + {{( - 1)}^n}{1 \over {{z^n}}}} \right)}^2}} } \right... Let S = {z $$\in$$ C : |z $$-$$ 3| $$\le$$ 1 and z(4 + 3i) + $$\overline z$$(4 $$-$$ 3i) $$\le$$ 24}. If $$\alpha$$ + i$$\beta$$ is the point in S wh... If for the complex numbers z satisfying | z $$-$$ 2 $$-$$ 2i | $$\le$$ 1, the maximum value of | 3iz + 6 | is attained at a + ib, then a + b is equal ... A point z moves in the complex plane such that $$\arg \left( {{{z - 2} \over {z + 2}}} \right) = {\pi \over 4}$$, then the minimum value of $${\left|... 
- The least positive integer n such that $${{{{(2i)}^n}} \over {{{(1 - i)}^{n - 2}}}}$$, $$i = \sqrt { - 1}$$, is a positive integer, is ___________.
- Let $$z = {{1 - i\sqrt 3 } \over 2}$$, $$i = \sqrt { - 1}$$. Then the value of $$21 + {\left( {z + {1 \over z}} \right)^3} + {\left( {{z^2} + {1 \ove...
- If the real part of the complex number $$z = {{3 + 2i\cos \theta } \over {1 - 3i\cos \theta }},\theta \in \left( {0,{\pi \over 2}} \right)$$ is zero...
- The equation of a circle is Re(z^2) + 2(Im(z))^2 + 2Re(z) = 0, where z = x + iy. A line which passes through the center of the given circle and the vert...
- Let $$S = \left\{ {n \in N\left| {{{\left( {\matrix{ 0 & i \cr 1 & 0 \cr } } \right)}^n}\left( {\matrix{ a & b \cr c &...
- Let z1, z2 be the roots of the equation z^2 + az + 12 = 0 and z1, z2 form an equilateral triangle with the origin. Then, the value of |a| is...
- Let z and $$\omega$$ be two complex numbers such that $$\omega = z\overline z - 2z + 2$$, $$\left| {{{z + i} \over {z - 3i}}} \right| = 1$$ and Re($$\omeg...
- Let z be those complex numbers which satisfy | z + 5 | $$\le$$ 4 and z(1 + i) + $$\overline z$$(1 $$-$$ i) $$\ge$$ $$-$$10, i = $$\sqrt { - 1}$$....
- Let $$i = \sqrt { - 1}$$. If $${{{{\left( { - 1 + i\sqrt 3 } \right)}^{21}}} \over {{{(1 - i)}^{24}}}} + {{{{\left( {1 + i\sqrt 3 } \right)}^{21}}} \...
- If the least and the largest real values of $$\alpha$$, for which the equation z + $$\alpha$$|z – 1| + 2i = 0 (z $$\in$$ C and i = $$\sqrt { - 1}$$) has a ...
- If $${\left( {{{1 + i} \over {1 - i}}} \right)^{{m \over 2}}} = {\left( {{{1 + i} \over {1 - i}}} \right)^{{n \over 3}}} = 1$$, (m, n $$\in$$ N) the...
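As a quick numeric sanity check (not part of the original question set), two of the fully stated stems above can be verified directly in Python; the test point z = 3 + 4i is an arbitrary choice:

```python
# Check 1: the least positive integer n with ((1 + i*sqrt(3)) / (1 - i*sqrt(3)))**n = 1.
# The ratio equals e^{2*pi*i/3}, so the answer should be 3.
ratio = (1 + 1j * 3 ** 0.5) / (1 - 1j * 3 ** 0.5)
n = next(k for k in range(1, 20) if abs(ratio ** k - 1) < 1e-9)

# Check 2: the area of the triangle with vertices A(z), B(iz), C(z + iz),
# via the shoelace formula written with complex vertices; it comes out |z|^2 / 2.
def tri_area(p, q, r):
    return abs(((q - p).conjugate() * (r - p)).imag) / 2

z = 3 + 4j                                  # arbitrary test point with |z| = 5
area = tri_area(z, 1j * z, z + 1j * z)

print(n)      # least such n
print(area)   # equals abs(z)**2 / 2
```

The same pattern (evaluate the expression numerically, compare against the closed form) works for most of the single-answer stems in the list.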
# Posts Tagged place constraints

## Recent Postings from place constraints

### X-ray Transients: Hyper- or Hypo-Luminous?

The disk instability picture gives a plausible explanation for the behavior of soft X-ray transient systems if self-irradiation of the disk is included. We show that there is a simple relation between the peak luminosity (at the start of an outburst) and the decay timescale. We use this relation to place constraints on systems assumed to undergo disk instabilities. The observable X-ray populations of elliptical galaxies must largely consist of long-lived transients, as deduced on different grounds by Piro and Bildsten (2002). The strongly varying X-ray source HLX-1 in the galaxy ESO 243-49 can be modeled as a disk instability of a highly super-Eddington stellar-mass binary similar to SS433. A fit to the disk instability picture is not possible for an intermediate-mass black hole model for HLX-1. Other, recently identified, super-Eddington ULXs might be subject to disk instability.

### Constraints on Dark Matter sterile neutrino resonant production in the light of Planck

Recently, a few independent detections of a weak X-ray emission line at an energy of ~3.5 keV seen toward a number of astrophysical sites have been reported. If this signal is confirmed to be the signature of a decaying DM sterile neutrino with a mass of ~7.1 keV, then the cosmological observables should be consistent with its properties. In this paper we place constraints on the sterile neutrino resonant production parameters and the lepton asymmetry number by using most of the present cosmological measurements. We compute the radiation and matter perturbations including the full resonance sweep solution for active–sterile neutrino flavor conversion and place constraints on the cosmological parameters and sterile neutrino properties.
We find sterile neutrino upper limits for the mass and mixing angle of 7.86 keV (equivalent to a 2.54 keV thermal mass) and 9.41 x 10^{-9} (at 95% CL), respectively, for a lepton number per flavor of 0.0042, which is significantly higher than that inferred by Abazajian (2014) from the linear large-scale structure constraints. This reflects the sensitivity of the high-precision CMB anisotropies to the helium abundance yield, which in turn is set by the $\nu_e$ lepton number and the non-thermal active neutrino spectra. Other cosmological parameters are in agreement with the predictions of the minimal extension of the base $\Lambda$CDM model, except for the active neutrino total mass upper limit, which is decreased to 0.21 eV (95% CL).
### Chemical Enrichment RGS cluster sample (CHEERS): Constraints on turbulence

Feedback from AGN, galactic mergers, and sloshing are thought to give rise to turbulence, which may prevent cooling in clusters. We aim to measure the turbulence in clusters of galaxies and compare the measurements to some of their structural and evolutionary properties. It is possible to measure the turbulence of the hot gas in clusters by estimating the velocity widths of their X-ray emission lines. The RGS spectrometers aboard XMM-Newton are currently the only instruments provided with sufficient effective area and spectral resolution in this energy domain. We benefited from an excellent 1.6 Ms of new data provided by the CHEERS project. The new observations improve the quality of the archival data and allow us to place constraints for some clusters which were not accessible in previous work. One half of the sample shows upper limits on turbulence of less than 500 km/s. For several sources, our data are consistent with relatively strong turbulence, with upper limits on the velocity widths that are larger than 1000 km/s.
The NGC507 group of galaxies shows transonic velocities, which are most likely associated with the merging phenomena and bulk motions occurring in this object. Where both low- and high-ionization emission lines have good enough statistics, we find larger upper limits for the hot gas, which is partly due to the different spatial extents of the hot and cool gas phases. Our upper limits are larger than the Mach numbers required to balance cooling, suggesting that dissipation of turbulence may prevent cooling, although other heating processes could be dominant. The systematics associated with the spatial profile of the source continuum make this technique very challenging, though still powerful, for current instruments. The ASTRO-H and Athena missions will revolutionize the velocity estimates and discriminate between different spatial regions and temperature phases. ### Constraints on Galactic Wino Densities from Gamma Ray Lines [Cross-Listing] We systematically compute the annihilation rate for neutral winos into the final state gamma + X, including all leading radiative corrections. This includes both the Sommerfeld enhancement (in the decoupling limit for the Higgsino) and the resummation of the leading electroweak double logarithms. Adopting an analysis of the HESS experiment, we place constraints on the mass as a function of the wino fraction of the dark matter and the shape of the dark matter profile. We also determine how much coring is needed in the dark matter halo to make the wino a viable candidate as a function of its mass. Additionally, as part of our effective field theory formalism, we show that in the pure-Standard Model sector of our theory, emissions of soft Higgses are power-suppressed and that collinear Higgs emission does not contribute to leading double logs. 
### Homogeneity and isotropy in the 2MASS Photometric Redshift catalogue

Using the 2MASS Photometric Redshift catalogue we perform a number of statistical tests aimed at detecting possible departures from statistical homogeneity and isotropy in the large-scale structure of the Universe. Making use of the angular homogeneity index, an observable proposed in a previous publication, as well as studying the scaling of the angular clustering and number counts with magnitude limit, we place constraints on the fractal nature of the galaxy distribution. We find that the statistical properties of our sample are in excellent agreement with the standard cosmological model, and that it reaches the homogeneous regime significantly faster than fractal models with dimensions $D<2.75$. As part of our search for systematic effects, we also study the presence of hemispherical asymmetries in our data, finding no significant deviation beyond those allowed by the concordance model.
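The fractal test described in the abstract above rests on counts-in-spheres scaling, $N(<r) \propto r^D$, with $D = 3$ in the homogeneous limit and $D < 3$ for a fractal distribution. A minimal sketch of the idea on synthetic data (a uniform Poisson sample, not the 2MASS catalogue):

```python
import math
import random

random.seed(42)

# Homogeneous Poisson sample in the unit cube: counts in spheres around a
# centre scale as N(<r) ~ r^D with D = 3; a fractal sample would give D < 3.
pts = [(random.random(), random.random(), random.random()) for _ in range(20000)]
centre = (0.5, 0.5, 0.5)

def count_within(r):
    return sum(1 for p in pts if math.dist(p, centre) < r)

# Effective dimension from the log-log slope between two radii
# (both spheres lie fully inside the cube, so no edge correction is needed).
r1, r2 = 0.1, 0.2
D = math.log(count_within(r2) / count_within(r1)) / math.log(r2 / r1)
print(round(D, 2))  # close to 3 for a homogeneous sample, up to Poisson noise
```

A real analysis would average over many centres and correct for the survey mask, but the slope estimate is the core of the homogeneity-index approach.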
### Two- and Many-Body Decaying Dark Matter and Supernovae Type Ia [Cross-Listing]

We present a decaying dark matter scenario where the daughter products are a single massless relativistic particle and a single, massive but possibly relativistic particle. We calculate the velocity distribution of the massive daughter particle and its associated equation of state and derive its dynamical evolution in an expanding universe. In addition, we present a model of decaying dark matter where there are many massless relativistic daughter particles together with a massive particle at rest. We place constraints on these two models using Type Ia supernova observations. We find that for a daughter relativistic fraction of 1% and higher, lifetimes of less than 10 Gyr are excluded, with larger relativistic fractions constraining longer lifetimes.
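For the two-body channel above, the velocity of the massive daughter follows from energy–momentum conservation alone: a parent of mass M at rest gives the daughter momentum $p = (M^2 - m^2)/2M$ and energy $E = (M^2 + m^2)/2M$, hence $v = p/E$. A sketch in natural units ($c = 1$), with purely illustrative masses:

```python
# Parent of mass M at rest -> massless particle + massive daughter of mass m.
# Conservation of energy and momentum gives
#   p = (M**2 - m**2) / (2*M),   E = (M**2 + m**2) / (2*M),   v = p / E.
def daughter_velocity(M, m):
    p = (M ** 2 - m ** 2) / (2 * M)
    E = (M ** 2 + m ** 2) / (2 * M)
    return p / E

# Nearly degenerate masses -> slow daughter; light daughter -> relativistic.
print(daughter_velocity(100.0, 99.0))
print(daughter_velocity(100.0, 1.0))
```

This is why the daughter is "massive but possibly relativistic": the mass splitting between parent and daughter, not the absolute mass scale, sets the recoil velocity.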
### Observed parity-odd CMB temperature bispectrum [Replacement]

Parity-odd non-Gaussianities create a variety of temperature bispectra in the cosmic microwave background (CMB), defined in the domain $\ell_1 + \ell_2 + \ell_3 = {\rm odd}$. These models are as yet unconstrained in the literature, which has so far focused exclusively on the more common parity-even scenarios. In this work, we provide the first experimental constraints on parity-odd bispectrum signals in WMAP 9-year temperature data, using a separable modal parity-odd estimator. Comparing theoretical bispectrum templates to the observed bispectrum, we place constraints on the nonlinearity parameters of parity-odd tensor non-Gaussianities predicted by several Early Universe models. Our technique also generates a model-independent, smoothed reconstruction of the bispectrum of the data for parity-odd configurations.
Our technique also generates a model-independent, smoothed reconstruction of the bispectrum of the data for parity-odd configurations. ### Observed parity-odd CMB temperature bispectrum [Replacement] Parity-odd non-Gaussianities create a variety of temperature bispectra in the cosmic microwave background (CMB), defined in the domain: $\ell_1 + \ell_2 + \ell_3 = {\rm odd}$. These models are yet unconstrained in the literature, that so far focused exclusively on the more common parity-even scenarios. In this work, we provide the first experimental constraints on parity-odd bispectrum signals in WMAP 9-year temperature data, using a separable modal parity-odd estimator. Comparing theoretical bispectrum templates to the observed bispectrum, we place constraints on the so-called nonlineality parameters of parity-odd tensor non-Gaussianities predicted by several Early Universe models. Our technique also generates a model-independent, smoothed reconstruction of the bispectrum of the data for parity-odd configurations. ### Short-period $g$-mode pulsations in low-mass white dwarfs triggered by H shell burning The detection of pulsations in white dwarfs with low mass offers the possibility of probing their internal structure through asteroseismology and place constraints on the binary evolutionary processes involved in their formation. In this paper we assess the impact of stable H burning on the pulsational stability properties of low-mass He-core white dwarf models resulting from binary star evolutionary calculations. We found that, apart from a dense spectrum of unstable radial modes and nonradial $g$- and $p$-modes driven by the $\kappa$-mechanism due to the partial ionization of H in the stellar envelope, some unstable $g$-modes with short pulsation periods are powered also by H burning via the $\varepsilon$-mechanism of mode driving. This is the first time that $\varepsilon$-destabilized modes are found in models representative of cool white dwarf stars. 
The short periods recently detected in the pulsating low-mass white dwarf SDSS J111215.82+111745.0 could constitute the first evidence of stable H burning in these stars, in particular in the so-called extremely low-mass white dwarfs.

### Multi-redshift limits on the 21cm power spectrum from PAPER

The epoch of reionization (EoR) power spectrum is expected to evolve strongly with redshift, and it is this variation with cosmic history that will allow us to begin to place constraints on the physics of reionization. The primary obstacle to the measurement of the EoR power spectrum is bright foreground emission. We present an analysis of observations from the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER) telescope which places new limits on the HI power spectrum over the redshift range $7.5<z<10.5$, extending previously published single-redshift results to cover the full range accessible to the instrument. To suppress foregrounds, we use filtering techniques that take advantage of the large instrumental bandwidth to isolate and suppress foreground leakage into the interesting regions of $k$-space. Our 500-hour integration is the longest such integration yet recorded and demonstrates this method to a dynamic range of $10^4$. Power spectra at different points across the redshift range reveal the variable efficacy of the foreground isolation. Noise-limited measurements of $\Delta^2$ at $k = 0.2\,h\,{\rm Mpc}^{-1}$ and $z = 7.55$ reach as low as $(48\,{\rm mK})^2$ ($1\sigma$). We demonstrate that the size of the error bars in our power spectrum measurement, as generated by a bootstrap method, is consistent with the fluctuations due to thermal noise. Relative to this thermal noise, most spectra exhibit an excess of power at a few sigma. The likely sources of this excess include residual foreground leakage, particularly at the highest redshift, and unflagged RFI. We conclude by discussing data reduction improvements that promise to remove much of this excess.

### The Chandra Planetary Nebula Survey (ChanPlaNS). II.
X-ray Emission from Compact Planetary Nebulae

We present results from the most recent set of observations obtained as part of the Chandra X-ray Observatory Planetary Nebula Survey (ChanPlaNS), the first comprehensive X-ray survey of planetary nebulae (PNe) in the solar neighborhood (i.e., within ~1.5 kpc of the Sun). The survey is designed to place constraints on the frequency of appearance and range of X-ray spectral characteristics of X-ray-emitting PN central stars and the evolutionary timescales of wind-shock-heated bubbles within PNe. ChanPlaNS began with a combined Cycle 12 and archive Chandra survey of 35 PNe. ChanPlaNS continued via a Chandra Cycle 14 Large Program which targeted all (24) remaining known compact (R_neb <~ 0.4 pc), young PNe that lie within ~1.5 kpc. Results from these Cycle 14 observations include first-time X-ray detections of hot bubbles within NGC 1501, 3918, 6153, and 6369, and point sources in HbDs 1, NGC 6337, and Sp 1. The addition of the Cycle 14 results brings the overall ChanPlaNS diffuse X-ray detection rate to ~27% and the point source detection rate to ~36%. It has become clearer that diffuse X-ray emission is associated with young (<~5×10^3 yr) and likewise compact (R_neb <~ 0.15 pc) PNe with closed structures and high central electron densities (n_e >~ 1000 cm^-3), and is rarely associated with PNe that show H_2 emission and/or pronounced butterfly structures. Hb 5 is one such exception: a PN with a butterfly structure that hosts diffuse X-ray emission. Additionally, of the five new diffuse X-ray detections, two host [WR]-type CSPNe, NGC 1501 and NGC 6369, supporting the hypothesis that PNe with central stars of [WR]-type are likely to display diffuse X-ray emission.

### KIC 10526294: a slowly rotating B star with rotationally split, quasi-equally spaced gravity modes

Massive stars are important for the chemical enrichment of the universe.
Since internal mixing processes influence their lives, it is important to place constraints on the corresponding physical parameters, such as core overshooting and the internal rotation profile, so as to calibrate stellar structure and evolution models. Although asteroseismology has been shown to deliver the most precise constraints so far, the number of detailed seismic studies yielding quantitative results is limited. Our goal is to extend this limited sample with an in-depth case study and provide a well-constrained set of asteroseismic parameters, contributing to the ongoing mapping of the instability strips of the beta Cep and SPB stars. We derived fundamental parameters from high-resolution spectra using spectral synthesis techniques. We used custom masks to obtain optimal light curves from the original pixel-level data from the Kepler satellite. We used standard time-series analysis tools to construct a set of significant pulsation modes that provides the basis for the seismic analysis carried out afterwards. We find that KIC 10526294 is a cool SPB star, one of the slowest rotators ever found. Despite this, the length of the Kepler observations is sufficient to resolve narrow rotationally split multiplets for each of its 19 quasi-equally spaced dipole modes. The number of detected consecutive (in radial order) dipole modes in this series is higher than ever before. The observed amount of splitting shows an increasing trend towards longer periods, which – largely independent of the seismically calibrated stellar models – points towards a non-rigid internal rotation profile. From the average splitting we deduce a rotation period of ~188 d. From seismic modelling, we find that the star is young, with a central hydrogen mass fraction X_c > 0.64, and has a core overshooting alpha_ov <= 0.15.

### The Effect of Anisotropic Viscosity on Cold Fronts in Galaxy Clusters

Cold fronts – contact discontinuities in the intracluster medium (ICM) of galaxy clusters – should be disrupted by Kelvin-Helmholtz (K-H) instabilities due to the associated shear velocity. However, many observed cold fronts appear stable. This opens the possibility of placing constraints on the microphysical mechanisms that stabilize them, such as the ICM viscosity and/or magnetic fields. We performed exploratory high-resolution simulations of cold fronts arising from subsonic gas sloshing in cluster cores using the grid-based Athena MHD code, comparing the effects of isotropic Spitzer and anisotropic Braginskii viscosity (expected in a magnetized plasma). Magnetized simulations with full Braginskii viscosity or isotropic Spitzer viscosity reduced by a factor f ~ 0.1 are both in qualitative agreement with observations in terms of suppressing K-H instabilities. The RMS velocity of turbulence within the sloshing region is only modestly reduced by Braginskii viscosity. We also performed unmagnetized simulations with and without viscosity and find that magnetic fields have a substantial effect on the appearance of the cold fronts, even if the initial field is weak and the viscosity is the same. This suggests that determining the dominant suppression mechanism of a given cold front from X-ray observations (e.g. viscosity or magnetic fields) by comparison with simulations is not straightforward. Finally, we performed simulations including anisotropic thermal conduction, and find that including Braginskii viscosity in these simulations does not significantly affect the evolution of cold fronts; they are rapidly smeared out by thermal conduction, as in the inviscid case.
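The magnetized runs described above compare full Braginskii viscosity against an isotropic Spitzer viscosity suppressed by a factor f ~ 0.1. As a rough back-of-the-envelope illustration of the magnitudes involved (not the paper's actual setup), the sketch below evaluates the standard Spitzer (1962) dynamic viscosity of an unmagnetized hydrogen plasma, mu ~ 2.2e-15 T^{5/2}/ln(Lambda) g cm^-1 s^-1, at a typical cluster-core temperature; the Coulomb logarithm and the suppression factor are assumed values for illustration.

```python
# Rough magnitude estimate of ICM viscosity; illustrative only, not the
# simulation setup of the paper.
def spitzer_viscosity(T_K, coulomb_log=40.0):
    """Spitzer (1962) dynamic viscosity of an unmagnetized hydrogen
    plasma, in g cm^-1 s^-1, for ion temperature T_K in Kelvin."""
    return 2.2e-15 * T_K**2.5 / coulomb_log

# A cluster-core temperature of ~5 keV corresponds to ~5.8e7 K.
T = 5.8e7
mu_full = spitzer_viscosity(T)

# The abstract's qualitatively successful isotropic runs suppress this
# by a factor f ~ 0.1.
f = 0.1
mu_suppressed = f * mu_full

print(f"full Spitzer viscosity: {mu_full:.2e} g/cm/s")
print(f"suppressed (f = 0.1):   {mu_suppressed:.2e} g/cm/s")
```

The strong T^{5/2} scaling is why viscous suppression of Kelvin-Helmholtz rolls is plausible in hot clusters but negligible in cooler systems.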
### Herschel/PACS Observations of the Host Galaxy of GRB 031203

We present Herschel/PACS observations of the nearby (z=0.1055) dwarf galaxy that has hosted the long gamma ray burst (LGRB) 031203.
Using the PACS data we have been able to place constraints on the dust temperature, dust mass, total infrared luminosity, and infrared-derived star-formation rate (SFR) for this object. We find that the GRB host galaxy (GRBH) 031203 has a total infrared luminosity of 3×10^10 L_sun, placing it in the regime of the IR-luminous galaxy population. Its dust temperature and specific SFR are comparable to those of many high-redshift (z=0.3-2.5) infrared (IR)-detected GRB hosts (T_dust > 40 K; sSFR > 10 Gyr^-1); however, its dust-to-stellar mass ratio is lower than what is commonly seen in IR-luminous galaxies. Our results suggest that GRBH 031203 is undergoing a strong starburst episode and that its dust properties differ from those of local dwarf galaxies within the same metallicity and stellar mass range. Furthermore, our measurements place it in a distinct class from the well-studied nearby host of GRB 980425 (z=0.0085), confirming the notion that GRB host galaxies can span a large range in properties even at similar cosmological epochs, making LGRBs an ideal tool for selecting samples of star-forming galaxies up to high redshift.

### Constraining primordial vector mode from B-mode polarization

The B-mode polarization spectrum of the Cosmic Microwave Background (CMB) may be the smoking gun not only of the primordial tensor mode but also of the primordial vector mode. If there exist nonzero vector-mode metric perturbations in the early Universe, they are known to be supported by anisotropic stress fluctuations of free-streaming particles such as neutrinos, and to create characteristic signatures in the CMB temperature, E-mode, and B-mode polarization anisotropies. We place constraints on the properties of the primordial vector mode, characterized by the vector-to-scalar ratio $r_{v}$ and the spectral index $n_{v}$ of the vector-shear power spectrum, from the {\it Planck} and BICEP2 B-mode data. We find that, for scale-invariant initial spectra, the $\Lambda$CDM model including the vector mode fits the data better than the model including the tensor mode. The difference in $\chi^{2}$ between the vector and tensor models is $\Delta\chi^{2} = 3.294$ because, on large scales, the vector mode generates smaller temperature fluctuations than the tensor mode, which is preferred by the data. In contrast, the tensor mode can fit the data set equally well if we allow a significantly blue-tilted spectrum. We find that the best-fitting tensor mode has a large blue tilt and leads to an indistinct reionization bump on larger angular scales. The slightly red-tilted vector mode supported by the current data set can also create ${\cal O}(10^{-22})$-Gauss magnetic fields at cosmological recombination. Our constraints should motivate research into models of the early Universe that involve the vector mode.

### The Carnegie Supernova Project: Intrinsic Colors of Type Ia Supernovae

We present an updated analysis of the intrinsic colors of SNe Ia using the latest data release of the Carnegie Supernova Project.
We introduce a new light-curve parameter, very similar to stretch, that is better suited to fast-declining events, and find that these peculiar types can be seen as extensions of the population of "normal" SNe Ia. With a larger number of objects, an updated fit to the Lira relation is presented, along with evidence for a dependence of the late-time slope of the B-V color curves on stretch and color. Using the full wavelength range from the u to the H band, we place constraints on the reddening law for the sample as a whole and also for individual events/hosts based solely on the observed colors. The photometric data continue to favor low values of R_V, though with large variations from event to event, indicating an intrinsic distribution. We confirm the findings of other groups that there appears to be a correlation between the derived reddening law, R_V, and the color excess, E(B-V), such that larger E(B-V) tends to favor lower R_V. The intrinsic u-band colors show a relatively large scatter that cannot be explained by variations in R_V or by the Goobar (2008) power law for circumstellar dust, but rather is correlated with spectroscopic features of the supernova and is therefore likely due to metallicity effects.

### Parametric models of the periodogram

The maximum likelihood estimator is used to determine fit parameters for various parametric models of the Fourier periodogram, followed by selection of the best-fit model among competing models using the Akaike information criterion. This analysis, when applied to light curves of active galactic nuclei, can be used to infer the presence of quasi-periodicity and of break or knee frequencies. The extracted information can be used to place constraints on the mass, spin, and other properties of the putative central black hole and the region surrounding it through theoretical models involving disk and jet physics.
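The model-selection procedure just described (maximum-likelihood fits of competing parametric periodogram models, ranked by the Akaike information criterion) can be sketched as follows. The Whittle likelihood is a standard choice for fitting periodogram ordinates, but the specific models here (a pure power law versus a spectrum with a knee) and the synthetic data are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic periodogram: ordinates exponentially distributed about a
# "knee" spectrum (log-spaced frequency grid for illustration).
f = np.logspace(-3, -1, 200)                       # frequency, arbitrary units
true_spec = 10.0 / (1.0 + (f / 0.01) ** 2)         # flat below knee, f^-2 above
I = true_spec * rng.exponential(1.0, size=f.size)  # observed periodogram

def whittle_m2logL(S):
    """Whittle -2 ln(likelihood) of model spectrum S given periodogram I."""
    return 2.0 * np.sum(np.log(S) + I / S)

# Competing parametric models of the spectrum.
def powerlaw(p):   # p = (log10 normalization, slope)
    return 10.0 ** p[0] * f ** p[1]

def knee(p):       # p = (log10 normalization, log10 knee frequency, index)
    return 10.0 ** p[0] / (1.0 + (f / 10.0 ** p[1]) ** p[2])

# Fit each model by maximum likelihood, then rank by AIC = 2k - 2 ln L_max.
fits = {}
for name, model, p0 in [("power law", powerlaw, [0.0, -1.0]),
                        ("knee", knee, [1.0, -2.0, 2.0])]:
    res = minimize(lambda p: whittle_m2logL(model(p)), p0, method="Nelder-Mead")
    fits[name] = 2 * len(p0) + res.fun
    print(f"{name:9s} AIC = {fits[name]:10.1f}")

best = min(fits, key=fits.get)
print("AIC-preferred model:", best)
```

Because the synthetic spectrum genuinely bends, the extra parameter of the knee model is repaid by a much larger likelihood, so the AIC penalty of 2 per parameter does not change the ranking.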
### A Three-Year Multi-Wavelength Study of the Very High Energy Gamma-ray Blazar 1ES 0229+200

The high-frequency-peaked BL Lacertae object 1ES 0229+200 is a relatively distant (z = 0.1396), hard-spectrum (Gamma ~ 2.5), very-high-energy-emitting (E > 100 GeV) gamma-ray blazar. Very-high-energy measurements of this active galactic nucleus have been used to place constraints on the intensity of the extragalactic background light and the intergalactic magnetic field. A multi-wavelength study of this object centered on very-high-energy observations by VERITAS is presented. Over a period of three years, this study obtained an 11.7 standard deviation detection and an average integral flux F(E > 300 GeV) = (23.3 +- 2.8_stat +- 5.8_sys) x 10^-9 photons m^-2 s^-1, or 1.7% of the Crab Nebula's flux (assuming the Crab Nebula spectrum measured by H.E.S.S.). Supporting observations from Swift and RXTE are analyzed. The Swift observations are combined with previously published Fermi observations and the very-high-energy measurements to produce an overall spectral energy distribution, which is then modeled assuming one-zone synchrotron self-Compton emission. The chi^2 probability of the TeV flux being constant is 1.6%. This, considered in combination with the measured variability in the X-ray band and the demonstrated variability of many TeV blazars, suggests that the use of blazars such as 1ES 0229+200 for intergalactic magnetic field studies may not be straightforward, and challenges models that attribute hard TeV spectra to secondary gamma-ray production along the line of sight.

### Cosmological constraints from large-scale structure growth rate measurements

We compile a list of 14 independent measurements of the large-scale structure growth rate between redshifts $0.067 \leq z \leq 0.8$ and use them to place constraints on model parameters of constant and time-evolving general-relativistic dark energy cosmologies.
With the assumption that gravity is well-modeled by general relativity, we discover that growth-rate data provide restrictive cosmological parameter constraints. In combination with type Ia supernova apparent magnitude versus redshift data and Hubble parameter measurements, the growth rate data are consistent with the standard spatially-flat $\Lambda$CDM model, as well as with mildly evolving dark energy density cosmological models.

### Interstellar Absorption Lines in the Direction of the Cataclysmic Variable SS Cygni

We present an analysis of interstellar absorption lines in high-resolution optical echelle spectra of SS Cyg obtained during an outburst in 2013 June and in archival Hubble Space Telescope and Far Ultraviolet Spectroscopic Explorer data. The Ca II K and Na I D lines toward SS Cyg are compared with those toward nearby B and A stars in an effort to place constraints on the distance to SS Cyg. We find that the distance constraints are not very robust from this method due to the rather slow increase in neutral gas column density with distance and the scatter in the column densities from one sight line to another.
However, the optical absorption-line measurements allow us to derive a precise estimate for the line-of-sight reddening of E(B-V) = 0.020+/-0.005 mag. Furthermore, our analysis of the absorption lines of O I, Si II, P II, and Fe II seen in the UV spectra yields an estimate of the H I column density and depletion strength in this direction. ### The Kappa Andromedae System: New Constraints on the Companion Mass, System Age & Further Multiplicity Kappa Andromedae is a B9IVn star at 52 pc for which a faint substellar companion separated by 55 AU was recently announced. In this work, we present the first spectrum of the companion, "kappa And B," using the Project 1640 high-contrast imaging platform. Comparison of our low-resolution YJH-band spectra to empirical brown dwarf spectra suggests an early-L spectral type. Fitting synthetic spectra from PHOENIX model atmospheres to our observed spectrum allows us to constrain the effective temperature to ~2000K, as well as place constraints on the companion surface gravity. Further, we use previously reported log(g) and effective temperature measurements of the host star to argue that the kappa And system has an isochronal age of 220 +/- 100 Myr, older than the 30 Myr age reported previously. This interpretation of an older age is corroborated by the photometric properties of kappa And B, which appear to be marginally inconsistent with other 10-100 Myr low-gravity L-dwarfs for the spectral type range we derive. In addition, we use Keck aperture masking interferometry combined with published radial velocity measurements to rule out the existence of any tight stellar companions to kappa And A that might be responsible for the system’s overluminosity. Further, we show that luminosity enhancements due to a nearly "pole-on" viewing angle coupled with extremely rapid rotation is unlikely. 
Kappa And A is thus consistent with its slightly evolved luminosity class (IV), and we propose here that kappa And, with a revised age of 220 +/- 100 Myr, is an interloper to the 30 Myr Columba association with which it was previously associated. The photometric and spectroscopic evidence for kappa And B combined with our reassessment of the system age implies a substellar companion mass of 50^{+16}_{-13} Jupiter masses, consistent with a brown dwarf rather than a planetary-mass companion.

### Occultation of the T Tauri Star RW Aurigae A by its Tidally Disrupted Disk

RW Aur A is a classical T Tauri star, believed to have undergone a reconfiguration of its circumstellar environment as a consequence of a recent fly-by of its stellar companion, RW Aur B. This interaction stripped away part of the circumstellar disk of RW Aur A, leaving a tidally disrupted arm and a short truncated circumstellar disk. We present photometric observations of the RW Aur system from the Kilodegree Extremely Little Telescope (KELT) survey showing a long and deep dimming that occurred from September 2010 until March 2011. The dimming has a depth of ~2 magnitudes, a duration of ~180 days, and was confirmed by archival observations from the American Association of Variable Star Observers (AAVSO). We suggest that this event is the result of a portion of the tidally disrupted disk occulting RW Aur A, specifically a fragment of the tidally disrupted arm. The calculated transverse linear velocity of the occulter is in excellent agreement with the measured relative radial velocity of the tidally disrupted arm. Using simple kinematic and geometric arguments, we show that the occulter cannot be a feature of the RW Aur A circumstellar disk, and we consider and discount other hypotheses. We also place constraints on the thickness and semi-major axis of the portion of the arm that occulted the star.
### Placing Limits On The Transit Timing Variations Of Circumbinary Exoplanets

We present an efficient analytical method to predict the maximum transit timing variations of a circumbinary exoplanet, given some basic parameters of the host binary. We derive an analytical model giving limits on the potential location of transits for coplanar planets orbiting eclipsing binaries, then test it against numerical N-body simulations of a distribution of binaries and planets. We also show the application of the analytic model to Kepler-16b, -34b and -35b. The resulting method is fast, efficient, and accurate to approximately 1% in predicting limits on possible times of transits over a three-year observing campaign. The model can easily be used to, for example, place constraints on transit timing while performing circumbinary planet searches on large datasets. It is adaptable to use in situations where some or many of the planet and binary parameters are unknown.

### Constraining Superluminal Electron and Neutrino Velocities using the 2010 Crab Nebula Flare and the IceCube PeV Neutrino Events [Replacement]

The observation of two PeV-scale neutrino events reported by IceCube can, in principle, allow one to place constraints on Lorentz invariance violation (LIV) in the neutrino sector. After first arguing that at least one of the IceCube events was of extragalactic origin, I derive an upper limit for {\it the difference} between putative superluminal neutrino and electron velocities of $\le \sim 5.6 \times 10^{-19}$ in units where $c = 1$, confirming that the observed PeV neutrinos could have reached Earth from extragalactic sources. I further derive a new constraint on the superluminal electron velocity, obtained from the observation of synchrotron radiation in the Crab Nebula flare of September, 2010.
The inference that the $>$ 1 GeV $\gamma$-rays from synchrotron emission in the flare were produced by electrons of energy up to $\sim 5.1$ PeV indicates the non-occurrence of vacuum \v{C}erenkov radiation by these electrons. This implies a new, strong constraint on superluminal electron velocities $\delta_e \le \sim 5 \times 10^{-21}$. It immediately follows that one then obtains an upper limit on the superluminal neutrino velocity {\it alone} of $\delta_{\nu} \le \sim 5.6 \times 10^{-19}$, many orders of magnitude better than the time-of-flight constraint from the SN1987A neutrino burst. However, if the electrons are {\it subluminal} the constraint of $|\delta_e| \le \sim 8 \times 10^{-17}$, obtained from the Crab Nebula $\gamma$-ray spectrum, places a weaker constraint on the superluminal neutrino velocity of $\delta_{\nu} \le \sim 8 \times 10^{-17}$.
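The way the two bounds quoted in this abstract combine can be written out explicitly (a reconstruction from the numbers given above, not text from the paper):

$$\delta_\nu \;=\; (\delta_\nu - \delta_e) + \delta_e \;\le\; 5.6\times 10^{-19} + 5\times 10^{-21} \;\approx\; 5.6\times 10^{-19},$$

so the strong Crab-flare electron bound lets the limit on the velocity *difference* transfer essentially unchanged to the neutrino velocity alone; in the subluminal-electron case, the weaker bound $|\delta_e| \le \sim 8\times 10^{-17}$ dominates the sum and the neutrino limit relaxes accordingly.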
# Observational parameter

Observational parameters. Santosh Takale, B.Tech (Mech. Engg.), Scientific Officer, Bhabha Atomic Research Centre, Mumbai.

By comparing the median figures of merit calculated from simulated data sets with that of current type Ia supernova (SNIa) data, we find that as many as 64 further independent measurements of H(z) are needed to match the parameter-constraining power of SNIa.

Key difference, experimental vs. observational study: experimental and observational studies are two types of studies between which a number of differences can be identified. Sal determines if a statistical study was a sample study, an experiment, or an observational study. An observational study measures the characteristics of a population by studying individuals in a sample, but does not attempt to manipulate or influence the variables of interest; for a good example, try visiting the Pew Research Center.

Population parameter definition: a quantity or statistical measure that, for a given population, is fixed and that is used as the value of a variable in some general distribution or frequency function to make it descriptive of that population, e.g. the mean and variance of a population.

Observational parameters: classical cosmology reduces the universe to a few basic parameters. Modern cosmology adds a few more, but the fundamental idea is still the same: the fate and geometry of the universe.

The effects of observational learning on students' design products and processes: learners' aptitude and models' competence level should be taken into account when testing the effects of observational learning. During the problem/parameter exploration stage, the student explores the task (associative exploration in art education).

Inferring discrimination from statistical analysis of observational data: in this section, we discuss some of the more frequently encountered obstacles to causal inference.
2.1 The cosmological parameters: where n is known as the spectral index, always defined so that n = 1 for the Harrison–Zel'dovich spectrum, and k∗ is an arbitrarily chosen scale.

Observational immutability: suppose you've got an object which has the property that every time you call a method on it, look at a field, etc., you get the same result from the point of view of the caller; such an object would be immutable.

arXiv:0706.2737v1 [astro-ph] 19 Jun 2007: Constraints on the DGP universe using observational Hubble parameter, Hao-Yi Wan, Ze-Long Yi, Tong-Jie Zhang, and Jie Zhou.

Chapter 1, intro-to-statistics terms (study guide by marian_mendenhall, 27 questions covering vocabulary, terms, and more): is this an observational study or experiment? The value is a parameter because the men who had walked on the moon are a population.

The parameter of interest is µ, the average GPA of all college students in the United States today. The sample is a random selection of 100 college students in the United States. The statistic is the mean grade point average, $$\bar{x}$$, of the sample of 100 college students.

Understanding retrospective vs. prospective study designs (Andreas Kalogeropoulos, MD MPH PhD, Assistant Professor of Medicine, Cardiology): any measurable parameter of clinical interest; observational = you just observe what happens, but remember, that doesn't mean you do not perform.

Bodman et al., Observational constraints on parameter estimates for a simple climate model: by uniform vertical mixing, scaled by the ocean effective.

The correlation between aPTT and the changes of the CT parameter of the ROTEM with heparin dosage and infusion was the primary outcome. The correlation between heparin therapy and the changes of other parameters like MCF, CFT, and the number of platelets was the secondary outcome of the study.
## Observational parameter

Inflationary model building, reconstructing parameters and observational limits (Sayantan Choudhury).

This paper is a review on the observational Hubble parameter data that have gained increasing attention in recent years for their illuminating power on the dark side of the universe: the dark matter, dark energy, and the dark age. Currently, there are two major methods of independent observational H(z) measurement, which we summarize as the.

To improve the reporting of observational research, we developed a checklist of items that should be addressed: the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement. Items relate to title, abstract, introduction, methods, and results.

The observational study's bias by knowing whether a causal parameter is large or small. It should be stressed, however, that independence will give way to a negative correlation once the experimental and observational results become known.

In statistics, a parameter is an unknown characteristic of a population, for example, the number of women in a particular precinct who will vote Democratic. Note: the term is often mistakenly used to refer to the limits of possible values a variable can take.

Determine whether the numerical value is a parameter or a statistic (and explain): a) a recent survey by the alumni of a major university indicated that the average salary of.

Origin of the word parameter: this word is found in 1914 in E. Czuber, Wahrscheinlichkeitsrechnung, vol. I, and in 1922 in Ronald A. Fisher's "On the mathematical foundations of theoretical statistics." Fisher was an English statistician, biologist, and geneticist.
Observational studies: in order to study the relationships among variables, observational studies are performed. Unlike controlled experimental designs, where only certain variables are allowed to vary (at prespecified levels), in observational studies the variables are observed and recorded.

Abstract: estimation of causal effects using observational data continues to grow in popularity in the epidemiologic literature. While many applications of causal effect estimation use propensity score methods or g-computation, targeted maximum likelihood estimation (TMLE) is a well-established alternative method with desirable statistical properties.

A parameter of this population is the standard deviation of grade point averages of all high school seniors; a statistic is the standard deviation of the grade point averages of a sample.
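The parameter-versus-statistic distinction running through the passages above can be made concrete with a short simulation. The population, its size, and the GPA distribution below are all invented for the example:

```python
import numpy as np

# Toy illustration of the parameter/statistic distinction discussed above.
# The "population" of GPAs is entirely invented for the example.
rng = np.random.default_rng(1)
population = np.clip(rng.normal(3.0, 0.5, 1_000_000), 0.0, 4.0)

mu = population.mean()    # parameter: a fixed property of the whole population

# Each sample of 100 students yields its own value of the statistic x-bar.
xbars = [rng.choice(population, 100, replace=False).mean() for _ in range(5)]
# The x-bars scatter around mu (standard error ~ 0.05), while mu never changes.
```

Rerunning the sampling step gives different `xbars` every time, but `mu` stays put: that fixed-versus-varying behavior is exactly what separates a parameter from a statistic.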
# Properties Label 336.3.d.c Level $336$ Weight $3$ Character orbit 336.d Analytic conductor $9.155$ Analytic rank $0$ Dimension $4$ CM no Inner twists $2$ # Related objects ## Newspace parameters Level: $$N$$ $$=$$ $$336 = 2^{4} \cdot 3 \cdot 7$$ Weight: $$k$$ $$=$$ $$3$$ Character orbit: $$[\chi]$$ $$=$$ 336.d (of order $$2$$, degree $$1$$, not minimal) ## Newform invariants Self dual: no Analytic conductor: $$9.15533688251$$ Analytic rank: $$0$$ Dimension: $$4$$ Coefficient field: 4.0.65856.1 Defining polynomial: $$x^{4} + 14 x^{2} + 21$$ Coefficient ring: $$\Z[a_1, \ldots, a_{5}]$$ Coefficient ring index: $$1$$ Twist minimal: no (minimal twist has level 21) Sato-Tate group: $\mathrm{SU}(2)[C_{2}]$ ## $q$-expansion Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\beta_2,\beta_3$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form. $$f(q)$$ $$=$$ $$q + ( 1 - \beta_{1} + \beta_{3} ) q^{3} + ( 1 - \beta_{2} + 2 \beta_{3} ) q^{5} + \beta_{2} q^{7} + ( -4 + \beta_{1} + 2 \beta_{3} ) q^{9} +O(q^{10})$$ $$q + ( 1 - \beta_{1} + \beta_{3} ) q^{3} + ( 1 - \beta_{2} + 2 \beta_{3} ) q^{5} + \beta_{2} q^{7} + ( -4 + \beta_{1} + 2 \beta_{3} ) q^{9} + 2 \beta_{1} q^{11} + ( -9 + \beta_{2} ) q^{13} + ( -5 - \beta_{1} - 6 \beta_{2} + 4 \beta_{3} ) q^{15} + ( -2 + 2 \beta_{1} + 2 \beta_{2} - 4 \beta_{3} ) q^{17} + ( -3 + 5 \beta_{2} ) q^{19} + ( 4 + 2 \beta_{1} + \beta_{3} ) q^{21} + ( -2 + 8 \beta_{1} + 2 \beta_{2} - 4 \beta_{3} ) q^{23} + ( -3 - 10 \beta_{2} ) q^{25} + ( -2 + 5 \beta_{1} - 9 \beta_{2} + \beta_{3} ) q^{27} + ( 2 + 2 \beta_{1} - 2 \beta_{2} + 4 \beta_{3} ) q^{29} + ( -34 - 2 \beta_{2} ) q^{31} + ( 8 - 2 \beta_{1} - 6 \beta_{2} + 2 \beta_{3} ) q^{33} + ( 3 - 2 \beta_{1} - 3 \beta_{2} + 6 \beta_{3} ) q^{35} + ( 4 + 14 \beta_{2} ) q^{37} + ( -5 + 11 \beta_{1} - 8 \beta_{3} ) q^{39} + ( -8 + 22 \beta_{1} + 8 \beta_{2} - 16 \beta_{3} ) q^{41} + ( 40 + 6 \beta_{2} ) q^{43} + ( -37 - 2 
\beta_{1} - 9 \beta_{2} - 4 \beta_{3} ) q^{45} + ( 4 + 8 \beta_{1} - 4 \beta_{2} + 8 \beta_{3} ) q^{47} + 7 q^{49} + ( 18 + 6 \beta_{2} - 6 \beta_{3} ) q^{51} + ( 16 - 10 \beta_{1} - 16 \beta_{2} + 32 \beta_{3} ) q^{53} + ( -14 - 2 \beta_{2} ) q^{55} + ( 17 + 13 \beta_{1} + 2 \beta_{3} ) q^{57} + ( -1 - 26 \beta_{1} + \beta_{2} - 2 \beta_{3} ) q^{59} + ( -39 + 7 \beta_{2} ) q^{61} + ( 11 - 5 \beta_{1} - 9 \beta_{2} + 8 \beta_{3} ) q^{63} + ( -6 - 2 \beta_{1} + 6 \beta_{2} - 12 \beta_{3} ) q^{65} + ( 6 + 8 \beta_{2} ) q^{67} + ( 42 - 6 \beta_{1} - 12 \beta_{2} ) q^{69} + ( 6 + 18 \beta_{1} - 6 \beta_{2} + 12 \beta_{3} ) q^{71} + ( -8 + 26 \beta_{2} ) q^{73} + ( -43 - 17 \beta_{1} - 13 \beta_{3} ) q^{75} + ( 2 - 6 \beta_{1} - 2 \beta_{2} + 4 \beta_{3} ) q^{77} + ( -32 + 36 \beta_{2} ) q^{79} + ( -19 - 20 \beta_{1} - 18 \beta_{2} - 4 \beta_{3} ) q^{81} + ( -9 - 18 \beta_{1} + 9 \beta_{2} - 18 \beta_{3} ) q^{83} + ( 42 + 18 \beta_{2} ) q^{85} + ( -2 - 4 \beta_{1} - 18 \beta_{2} + 10 \beta_{3} ) q^{87} + ( 16 - 42 \beta_{1} - 16 \beta_{2} + 32 \beta_{3} ) q^{89} + ( 7 - 9 \beta_{2} ) q^{91} + ( -42 + 30 \beta_{1} - 36 \beta_{3} ) q^{93} + ( 12 - 10 \beta_{1} - 12 \beta_{2} + 24 \beta_{3} ) q^{95} + ( -2 + 8 \beta_{2} ) q^{97} + ( -26 - 16 \beta_{1} + 4 \beta_{3} ) q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$4q + 2q^{3} - 20q^{9} + O(q^{10})$$ $$4q + 2q^{3} - 20q^{9} - 36q^{13} - 28q^{15} - 12q^{19} + 14q^{21} - 12q^{25} - 10q^{27} - 136q^{31} + 28q^{33} + 16q^{37} - 4q^{39} + 160q^{43} - 140q^{45} + 28q^{49} + 84q^{51} - 56q^{55} + 64q^{57} - 156q^{61} + 28q^{63} + 24q^{67} + 168q^{69} - 32q^{73} - 146q^{75} - 128q^{79} - 68q^{81} + 168q^{85} - 28q^{87} + 28q^{91} - 96q^{93} - 8q^{97} - 112q^{99} + O(q^{100})$$ Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{4} + 14 x^{2} + 21$$: $$\beta_{0}$$ $$=$$ $$1$$ $$\beta_{1}$$ $$=$$ $$\nu$$ $$\beta_{2}$$ $$=$$ $$($$$$\nu^{2} + 7$$$$)/2$$ $$\beta_{3}$$ $$=$$ $$($$$$\nu^{3} + \nu^{2} + 13 \nu + 
5$$$$)/4$$

$$1 = \beta_0$$
$$\nu = \beta_{1}$$
$$\nu^{2} = 2 \beta_{2} - 7$$
$$\nu^{3} = 4 \beta_{3} - 2 \beta_{2} - 13 \beta_{1} + 2$$

## Character values

We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/336\mathbb{Z}\right)^\times$$.

| $$n$$ | $$85$$ | $$113$$ | $$127$$ | $$241$$ |
|---|---|---|---|---|
| $$\chi(n)$$ | $$1$$ | $$-1$$ | $$1$$ | $$1$$ |

## Embeddings

For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.

| Label | $$\iota_m(\nu)$$ | $$a_2$$ | $$a_3$$ | $$a_4$$ | $$a_5$$ | $$a_6$$ | $$a_7$$ | $$a_8$$ | $$a_9$$ | $$a_{10}$$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 113.1 | 3.50592i | 0 | −0.822876 − 2.88494i | 0 | 1.24197i | 0 | −2.64575 | 0 | −7.64575 + 4.74789i | 0 |
| 113.2 | −3.50592i | 0 | −0.822876 + 2.88494i | 0 | −1.24197i | 0 | −2.64575 | 0 | −7.64575 − 4.74789i | 0 |
| 113.3 | −1.30710i | 0 | 1.82288 − 2.38267i | 0 | −7.37953i | 0 | 2.64575 | 0 | −2.35425 − 8.68663i | 0 |
| 113.4 | 1.30710i | 0 | 1.82288 + 2.38267i | 0 | 7.37953i | 0 | 2.64575 | 0 | −2.35425 + 8.68663i | 0 |
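The basis relations and the embedding table can be cross-checked numerically. This is purely an illustration of the data shown on the page, using only the stated defining polynomial and the relation $\beta_2 = (\nu^2 + 7)/2$:

```python
import numpy as np

# Quick numerical check of the data above: the embedded values
# iota_m(nu) = +-1.30710i, +-3.50592i should be the roots of the defining
# polynomial x^4 + 14x^2 + 21, and beta_2 = (nu^2 + 7)/2 should give the
# real a_7 column +-2.64575... = +-sqrt(7).
roots = np.roots([1, 0, 14, 0, 21])       # x^4 + 14 x^2 + 21 = 0

# The quartic is biquadratic: x^2 = -7 +- 2*sqrt(7), so every root is
# purely imaginary, matching the iota_m(nu) column.
imag_magnitudes = sorted(abs(r.imag) for r in roots)

beta2 = (roots**2 + 7) / 2                # equals +-sqrt(7) for each root
```

Running this reproduces the four $\nu$ values of the embedding rows and the real $a_7$ column, a quick sanity check on the basis given above.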
## Inner twists Char Parity Ord Mult Type 1.a even 1 1 trivial 3.b odd 2 1 inner ## Twists By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 336.3.d.c 4 3.b odd 2 1 inner 336.3.d.c 4 4.b odd 2 1 21.3.b.a 4 8.b even 2 1 1344.3.d.b 4 8.d odd 2 1 1344.3.d.f 4 12.b even 2 1 21.3.b.a 4 20.d odd 2 1 525.3.c.a 4 20.e even 4 2 525.3.f.a 8 24.f even 2 1 1344.3.d.f 4 24.h odd 2 1 1344.3.d.b 4 28.d even 2 1 147.3.b.f 4 28.f even 6 2 147.3.h.c 8 28.g odd 6 2 147.3.h.e 8 36.f odd 6 2 567.3.r.c 8 36.h even 6 2 567.3.r.c 8 60.h even 2 1 525.3.c.a 4 60.l odd 4 2 525.3.f.a 8 84.h odd 2 1 147.3.b.f 4 84.j odd 6 2 147.3.h.c 8 84.n even 6 2 147.3.h.e 8 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 21.3.b.a 4 4.b odd 2 1 21.3.b.a 4 12.b even 2 1 147.3.b.f 4 28.d even 2 1 147.3.b.f 4 84.h odd 2 1 147.3.h.c 8 28.f even 6 2 147.3.h.c 8 84.j odd 6 2 147.3.h.e 8 28.g odd 6 2 147.3.h.e 8 84.n even 6 2 336.3.d.c 4 1.a even 1 1 trivial 336.3.d.c 4 3.b odd 2 1 inner 525.3.c.a 4 20.d odd 2 1 525.3.c.a 4 60.h even 2 1 525.3.f.a 8 20.e even 4 2 525.3.f.a 8 60.l odd 4 2 567.3.r.c 8 36.f odd 6 2 567.3.r.c 8 36.h even 6 2 1344.3.d.b 4 8.b even 2 1 1344.3.d.b 4 24.h odd 2 1 1344.3.d.f 4 8.d odd 2 1 1344.3.d.f 4 24.f even 2 1 ## Hecke kernels This newform subspace can be constructed as the kernel of the linear operator $$T_{5}^{4} + 56 T_{5}^{2} + 84$$ acting on $$S_{3}^{\mathrm{new}}(336, [\chi])$$.
## Hecke characteristic polynomials

| $p$ | $F_p(T)$ |
|---|---|
| $2$ | $1$ |
| $3$ | $$1 - 2 T + 12 T^{2} - 18 T^{3} + 81 T^{4}$$ |
| $5$ | $$1 - 44 T^{2} + 1034 T^{4} - 27500 T^{6} + 390625 T^{8}$$ |
| $7$ | $$( 1 - 7 T^{2} )^{2}$$ |
| $11$ | $$1 - 428 T^{2} + 74630 T^{4} - 6266348 T^{6} + 214358881 T^{8}$$ |
| $13$ | $$( 1 + 18 T + 412 T^{2} + 3042 T^{3} + 28561 T^{4} )^{2}$$ |
| $17$ | $$1 - 988 T^{2} + 407046 T^{4} - 82518748 T^{6} + 6975757441 T^{8}$$ |
| $19$ | $$( 1 + 6 T + 556 T^{2} + 2166 T^{3} + 130321 T^{4} )^{2}$$ |
| $23$ | $$1 - 1444 T^{2} + 980166 T^{4} - 404090404 T^{6} + 78310985281 T^{8}$$ |
| $29$ | $$1 - 2972 T^{2} + 3611558 T^{4} - 2102039132 T^{6} + 500246412961 T^{8}$$ |
| $31$ | $$( 1 + 68 T + 3050 T^{2} + 65348 T^{3} + 923521 T^{4} )^{2}$$ |
| $37$ | $$( 1 - 8 T + 1382 T^{2} - 10952 T^{3} + 1874161 T^{4} )^{2}$$ |
| $41$ | $$1 - 1292 T^{2} + 2832038 T^{4} - 3650883212 T^{6} + 7984925229121 T^{8}$$ |
| $43$ | $$( 1 - 80 T + 5046 T^{2} - 147920 T^{3} + 3418801 T^{4} )^{2}$$ |
| $47$ | $$1 - 6148 T^{2} + 19144326 T^{4} - 30000278788 T^{6} + 23811286661761 T^{8}$$ |
| $53$ | $$1 + 20 T^{2} - 13350138 T^{4} + 157809620 T^{6} + 62259690411361 T^{8}$$ |
| $59$ | $$1 - 3676 T^{2} + 15964266 T^{4} - 44543419036 T^{6} + 146830437604321 T^{8}$$ |
| $61$ | $$( 1 + 78 T + 8620 T^{2} + 290238 T^{3} + 13845841 T^{4} )^{2}$$ |
| $67$ | $$( 1 - 12 T + 8566 T^{2} - 53868 T^{3} + 20151121 T^{4} )^{2}$$ |
| $71$ | $$1 - 10588 T^{2} + 78813510 T^{4} - 269058878428 T^{6} + 645753531245761 T^{8}$$ |
| $73$ | $$( 1 + 16 T + 5990 T^{2} + 85264 T^{3} + 28398241 T^{4} )^{2}$$ |
| $79$ | $$( 1 + 64 T + 4434 T^{2} + 399424 T^{3} + 38950081 T^{4} )^{2}$$ |
| $83$ | $$1 - 13948 T^{2} + 141899946 T^{4} - 661948661308 T^{6} + 2252292232139041 T^{8}$$ |
| $89$ | $$1 - 11468 T^{2} + 120945830 T^{4} - 719528019788 T^{6} + 3936588805702081 T^{8}$$ |
| $97$ | $$( 1 + 4 T + 18374 T^{2} + 37636 T^{3} + 88529281 T^{4} )^{2}$$ |
# coil behaviour

#### Decesicum
Joined Mar 20, 2006
8

Why does a coil behave differently in DC and AC? (The formula Z = ω*L shows that it is like a short in DC and has a resistance that grows with frequency in AC.) But how about the current through the wire (supposing there's no core in the coil)? Why is it proportional to the derivative of the voltage applied? If I place a coil at the output of a DC source, do I short-circuit the outputs and probably destroy the source?

#### Papabravo
Joined Feb 24, 2006
15,554

It is the magnetic field created by the current flowing in the wire of the coil. In your formula for impedance you forgot the j or i representing the imaginary unit. The correct formula is:

Z = j*ω*L

It is a complex number. A real inductor will have a small DC resistance from the properties of the wire, and this is modeled as a small resistance in series with the inductance. The complex impedance of this non-ideal inductor will be

Z = R + j*ω*L

In a steady state where the current is not changing, which implies that di/dt = 0, the voltage across an ideal inductor will be zero. In a non-ideal inductor, let us say there is a constant current of 100 milliamperes flowing through a DC resistance of 0.2 ohms. The voltage drop would be 20 millivolts. You ask why the voltage is a function of the time rate of change of current, and the circular answer is that inductance is a property that resists changes in current.
It resists the change in current by developing a voltage across itself. A DC resistance resists the magnitude of the current, while the inductance resists the "velocity" of the current. A mechanical example may help. A spring is a device for which force is proportional to displacement. A shock absorber, dashpot, or damper is a device for which force is proportional to the velocity of displacement. If you consider the 2nd-order ordinary differential equation for a mechanical spring-mass-damper system, you quickly discover that it is exactly the same equation as the one that describes a series R-L-C circuit. You seem to be more than a little bit lost; is this of any help to you at all?

#### Decesicum
Joined Mar 20, 2006
8

It is helpful! I have many gaps on "how it really works" and I try to fill them. You're explaining very well, thanks a lot! (I know that there is a j in the formula; I only meant the inductive reactance, not the impedance.) When I was studying these at school we were mostly taught mathematically, and at that time I didn't ask too much why (my fault). Now I want and need to know why (I'll work with it). Now, while reading books and working with (simple!) circuits, I find out the lack of knowledge that I have. That's it! :|

#### lschul
Joined Mar 22, 2006
1

The first derivative shows the 90-degree phase difference between the current flowing through the coil and the voltage "induced" across the coil. Why is there a phase difference? An induced voltage requires a changing magnetic field. The greater the rate at which the field changes, the greater the magnitude of the voltage. If you look at the graph of a sinusoidal current, the slope (rate of change) is greatest at the zero point and least (zero) at the positive and negative peaks. Therefore, the current is 90 degrees out of phase with the induced voltage (which is not the same as the voltage of the source). The induced voltage opposes the current (Lenz's Law), thus the reactance. A DC current through the coil is steady, therefore there is no opposing voltage. The result is nearly a dead short. Formulas sometimes mix a person up more than they help.
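The 90-degree phase relation described in the post above can be checked numerically: for a sinusoidal current through an ideal inductor, v = L di/dt leads the current by a quarter cycle and peaks at ω·L times the peak current. The component values below are arbitrary assumptions for illustration, not from the thread.

```python
import numpy as np

# Numeric sketch: sinusoidal current through an ideal inductor.
# v = L di/dt leads i by 90 degrees and has peak magnitude omega*L*I_pk.
# L, f, and I_pk = 1 A are assumed values.
L = 10e-3                              # 10 mH
f = 50.0                               # Hz
t = np.linspace(0.0, 2.0 / f, 4000)    # two full cycles
i = np.sin(2 * np.pi * f * t)          # current, 1 A peak
v = L * np.gradient(i, t)              # induced voltage v = L di/dt

half = len(t) // 2                     # examine the first cycle only
t_v_peak = t[np.argmax(v[:half])]      # voltage peaks at t = 0 (cosine shape)
t_i_peak = t[np.argmax(i[:half])]      # current peaks a quarter cycle later
quarter_period = 1.0 / (4.0 * f)
vpk = v.max()                          # should be ~ 2*pi*f*L, i.e. |Z| = omega*L
```

The peak-voltage check is the same statement as the reactance formula from earlier in the thread: the magnitude of v/i for a sine wave is exactly ω·L.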
#### Papabravo Joined Feb 24, 2006 15,554 Originally posted by lschul@Mar 22 2006, 04:37 PM The first derivative shows the 90 degree phase difference between the current flowing through the coil and the voltage "induced" across the coil. Why is there a phase difference? - An induced voltage requires a changing magnetic field. The greater the rate at which the field changes, the greater the magnitude of the voltage. If you look at the graph of a sinusoidal current, the slope (rate of change) is greatest at the zero point and least (zero) at the positive and negative peaks. Therefore, the current is 90 degrees out of phase with the induced voltage (which is not the same as the voltage of the source). The induced voltage opposes the current (Lenz's Law), thus the reactance. A DC current through the coil has a steady current, therefore no opposing voltage. The result is nearly a dead short. Formulas sometimes mix a person up more than they help. [post=15308]Quoted post[/post]​ One of the really surprising things about inductors is their ability to produce large positive and negative voltages when switched on and off. A large di/dt will produce a large voltage across the inductor. This behavior is the principle behind switching regulators in the boost configuration, which produce a higher voltage from a lower voltage.
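The thread's impedance model Z = R + j*ω*L is easy to play with numerically. Below is a minimal Python sketch; the 0.2 Ω winding resistance comes from Papabravo's example, while the 100 mH inductance and 1 kHz frequency are illustrative values of my own, not from the thread:

```python
import cmath
import math

def inductor_impedance(R, L, f):
    """Complex impedance Z = R + j*omega*L of a non-ideal inductor,
    modeled as a DC resistance R in series with an ideal inductance L."""
    omega = 2 * math.pi * f          # angular frequency, rad/s
    return complex(R, omega * L)

# Example values: 0.2 ohm winding resistance, 100 mH, 1 kHz
Z = inductor_impedance(R=0.2, L=0.1, f=1000.0)
magnitude = abs(Z)                        # |Z| in ohms
phase_deg = math.degrees(cmath.phase(Z))  # approaches 90 deg as omega*L >> R

# At DC (f = 0) the reactive part vanishes and only the winding
# resistance remains, which is why a coil is "nearly a dead short" at DC.
Z_dc = inductor_impedance(R=0.2, L=0.1, f=0.0)
```

At 1 kHz the reactance (about 628 Ω) dwarfs the 0.2 Ω resistance, so the phase angle is very close to the 90 degrees lschul describes.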
# All Questions 1k views ### Text mode commands/symbols in math mode I am fairly new to LaTeX, and have run into a problem with commands apparently only available in text mode, where I would like to use them in math mode as well (e.g. integrated in equations). One of ... 213 views ### How to cut a horizontal line by a perpendicular line? I'd like to cut the line AC at the point p by a perpendicular line. \documentclass{article} \usepackage{xypic} \begin{document} \xymatrix{ & B\ar @{-}[d] &\\ A\ar @{-}[rr]_{P} & & C ... 2k views I have created a PDF using pdfx with this command: \usepackage[a-1b]{pdfx} I would like to add metadata but I don't really understand how to do it. I read the docs but I didn't understand the pdfx guide 197 views ### retain space before \section heading in Memoir class at top of page I am using xelatex and the Memoir class. By default, the empty vertical space before the section heading does not show after a pagebreak. How do I modify it if I want it to show instead, even if it ... 49 views ### How do I cite a range of references? [duplicate] I have to cite in LaTeX a range of sources (I'm using Miktex 2.9, TexnicCenter and Bibtex), all together. If I write ... in 1990 \cite{ref1,ref2,ref3,ref4,ref5} I would like to get "... in 1990 [... 979 views ### Add more space between cells in a table [duplicate] I'd like to add more space between the cells of the table below. I've tried the command [2ex] after \\ but this changes the vertical alignment, which I'd like to be centered. \documentclass{article} \... 101 views ### Modifying an individual subsection number I have an article divided into sections and subsections. I need to insert a subsection "5.2a" between "5.2" and "5.3". What is the easiest way to do this? 205 views ### How to vertically align the text in a table cell? I am trying to get a table cell with vertical text alignment.
My current code is: \begin{tabularx}{\linewidth}{|X|} \hline \cellcolor{gray!20}Hardware\\ \hline \end{tabularx} Is there an easy ... 781 views ### How does one change the babel language with the ClassicThesis style? It's all in the title. The issue is simple: I just wish to change the language so that I can type in French using babel. The original languages are ngerman and american. I just want to replace the ... 182 views ### Biblatex: No \postnotedelim for citations that aren't numerals When I am citing pages in books, I use the citation format (Author 1982:45) But when I am citing something like paragraphs or online data with an ID number (i.e. things that aren't a plain ... 563 views ### Spaces inside Cyrillic listings I want to include a source file using the listings package. In that file there are some Cyrillic symbols in comments. I set up listings like: \lstset{extendedchars=\true,basicstyle=\ttfamily} And in ... 822 views ### harvard-thesis template and error using bm package I have the following problem. I use the harvard-thesis package to write my thesis, and I have a problem when I want to use the bm package. Here is a minimal example. Using this code: %!TEX TS-program = ... 263 views ### pgfplots jump mid mark with two ends I'd like to make this plot in tikz-pgfplots. But I'm not sure how to get the jump mid marks with two ends. Any thoughts? Also are the coordinates correctly assigned in the proper format here as I'm ... 117 views ### Problem cross referencing when using the \include command to include multiple files I have been dealing with a LaTeX bug for the past 4 months that I have been unable to resolve. The issue is that a lot of my cross references come out as ?? when I use \include(unknown) from a ... 205 views ### Character strings in biblatex's PART and VOLUME fields When the PART and VOLUME fields in a bibliographic entry are numerals, biblatex puts out something very reasonable (see below).
But when these fields are character strings, which they often are for ... 4k views ### let operation in tikz How can I separate options within brackets [...] so that only the current command is affected? For example, I want dashed lines only here (0,0) |- (A) [dashed]. Could someone give a detailed ... 373 views ### Relative positioning of Lewis electrons in chemfig I am defining a submolecule in chemfig (2,2'-bipyridine) and I want to show the lone pair of electrons on both the Nitrogens. (For those who care, I wish to show a coordination bond between the lone ... 9k views ### Elsarticle - number bibliography not working I am trying to correctly format an article for Computers & Operations Research. They say I should use the "number" format to cite. I use elsarticle.cls and the documentation says that number ... 595 views ### How to install the Gregorio package on Ubuntu with TeXLive installed? Does anyone here have experience installing the gregorio packages to work in Ubuntu using the non-repo TeXLive? Gregorio isn't packaged on CTAN as far as I could tell, and I don't know enough to be ... 757 views ### list of todos (todonotes) is empty with llncs I am using todonotes together with Springer's llncs and I noticed that the list of todos created with \listoftodos remains empty, regardless of how often I run pdflatex. With other document classes its ... 198 views ### Error with \IEEEeqnarraymulticol [closed] I am trying to format a long equation with an overly long LHS, namely: \hat{N}_{A1}(c_A + \hat{N}_{A1})^2 + \hat{N}_{A1}^2a_A(c_A + \hat{N}_{A1})^2 + \hat{N}_{A1}^2b_A(c_A + \hat{N}_{A1}) & = &... 2k views ### Drawing an arrow around an automata Using Tikzpicture, I have the following automata: \documentclass{article} \usepackage{tikz} \begin{document} \usetikzlibrary{positioning,arrows,automata} \begin{center} \begin{tikzpicture}[>=...
162 views ### Because of a long row, table not set on the page cleanly I created a table, but some rows are very long. Because of that, the table couldn't be set on the page... What is a solution for that? My code is: \begin{table}[!ht] \renewcommand{\arraystretch}{1.5} \caption[Set of ... 707 views ### protocol message diagram - arrow colored \documentclass{article} \usepackage[margin=12mm]{geometry} \usepackage{hyperref} \usepackage[underline=true]{pgf-umlsd} \usetikzlibrary{calc} \begin{document} \begin{sequencediagram} \newinst{ue}{UE} ... 129 views ### Make a particular column of a longtable borderless I am using the longtable package to create tables, as the number of rows is large and the table doesn't fit on a single page. This works fine for me. I now need a particular column not to have a top and bottom border ... 564 views ### connect nodes so that it looks like a circle I have 3 nodes placed on a circle. These nodes are differently shaped. I want to connect them and want the arrows to look like a circle, because it is a repeating process. I can connect them with ... 116 views ### Export page numbers of references I would like to export the page numbers of all the inner references of my document. How can one achieve this? Equivalently, how can we export the section/chapter number of all the inner references? ... 282 views ### \ref, \autoref and hyperref expansion [duplicate] I'm trying to write section numbers to a file with the newfile package. In detail, this MWE basically tries to write to a .ref file with lines like A-B. The A and B are generated by \label and \ref ... 882 views ### Caption positioned at the side of the figure I would like to have the caption of the Figure on the side of the figure. Plus I want to control caption width. Also, caption and Figure should be aligned all the way to the left and right of the page,... 296 views ### How can I float molecules in \chemfig around an element?
(without the lines) For example: I just want to show that the H2O molecules are attracted to the Carbon atom. I can't find a way to remove the lines connecting the Oxygens to the Carbons or how to extend the Hydrogens ... 213 views ### vbox overflow when using TitleGM causing a blank page to preceed the title page I'm getting a blank page that precedes my title page, and I suspect that it is due to the vbox overflow warning that I'm getting. The problem is that I don't know how to properly adjust the vbox ... 35 views ### close appending to aliased newcommands [closed] I have: \newcommand{\alias}[0]{POS} and use it as: ... \alias ... I want to sometimes pluralize it, output as: ...POSs... but this (obviously) doesn't work: ... \aliass ... nor does this: ..... 560 views ### automatic PDF bookmark manipulation This is a follow-up question to PDF bookmark customization How is it possible, to set in an report/scrreprt all chapters with bold bookmarks in the pdf bookmarks? I've found a discussion about it (in ... 279 views ### Ceiling function within verbatim in latex I am preparing a latex document. Inside the body of a function I need to use the ceiling function. How will I use the ceiling function inside verbatim? \begin{verbatim} Method a(b) . . ... 245 views ### Two columns: one column for questions; the other for answers I am using the exam class. \documentclass{12pt,a3paper,landscape}{exam} I would like to split my paper into two columns. One column should be for the question and across from that question would ... 106 views ### Parsing various types of page ranges with a single command Having to treat a considerable amount of page ranges, I'd like to have a robust and simple command that would parse and print them correctly. The most simple case is e.g. 264,15-26 which means:... 2k views ### ShareLaTeX Bibliography in ApJ format I am trying to use ShareLaTex to compile my bibliography in the Astrophysical Journal (ApJ) format. 
The document compiles in ApJ format just fine, but the bibliography is compiled in some generic ... 128 views ### Writing macros on file defined at compilation time This is a follow-up to this question. I have a LaTeX3 code that reads a file formatted as follows: <numberA> "<nameA>" <numberB> "<nameB>" <numberC> "<nameC>" .... 314 views ### Compiling a Plain-TeX file that loads harvmac on TexWorks on windows I had asked this question and it was suggested that I use plain TeX. On TeXworks among the compiling options there is no such thing. When I try running the file through pdfTeX I get the message, ... 4k views ### Logo in footer with aligned text I want to create a latex class to create documents for my company. There is an example style for a word document but it looks more complex as most headers/footers I have found till now to re-create in ... 1k views ### centering a \hbox I'm trying to center a collection of nested \hbox and \vbox without any succes :-( Here is the code I use: \begin{figure}[h] \center{ \vbox{\hbox{% \includegraphics{imageA}~% \... 1k views ### Extra alignment tab with longtable I am using longtable and I am getting the error ! Extra alignment tab has been changed to \cr. I have attempted to fix it but nothing I have tried seems to be working. Below is the code for the table ... 237 views ### Add \par only if last paragraph did not end with displayed math Background: I have named "sections" (sections for rest of this question) which are conditionally displayed or suppressed based on parameters. To simplify the test case below, these are controlled by ... 373 views ### unattractive additive effect to opacity with pstricks I'm using pstricks to highlight by boxing or circling replacement text in my math equations. But, I don't want this highlighting to occlude what's behind it. So, I've been setting opacity or ... 5k views ### Fractions with \dfrac cause compile error I'm new to LyX. I'm writing my next homework with it to give it a try. 
My example code is below: % Preview source code %% LyX 2.0.5.1 created this file. For more info, see http://www.lyx.org/. %% ... 427 views ### LaTeX tikz-timing - adjust fontsize independently for each row label I would like to be able to make the label below (which I have made red) have text size equivalent to \tiny but I can not figure out how to do it? I have not been able to find anyone else trying to ... 348 views ### \showbox and log file I want to parse the output of the \showbox command externally. Is it possible to let \showbox write to another file than the .log file? If not, is it possible to temporarily change the name of the .... 992 views Consider the following: \documentclass{article} \usepackage{pstricks-add} \begin{document} \begin{figure} \centering \begin{pspicture}(2.4,0.6) \rput(1.2,0.25){Springboldene} % \psset{...
PREPRINT # BASS XXXIII: Swift-BAT blazars and their jets through cosmic time L. Marcotulli, M. Ajello, C. M. Urry, V. S. Paliya, M. Koss, K. Oh, G. Madejski, Y. Ueda, M. Baloković, B. Trakhtenbrot, F. Ricci, C. Ricci, D. Stern, F. Harrison, M. C. Powell, BASS Collaboration Submitted on 20 September 2022 ## Abstract We derive the most up-to-date Swift-Burst Alert Telescope (BAT) blazar luminosity function in the 14-195 keV range, making use of a clean sample of 118 blazars detected in the BAT 105-month survey catalog, with newly obtained redshifts from the BAT AGN Spectroscopic Survey (BASS). We determine the best-fit X-ray luminosity function for the whole blazar population, as well as for Flat Spectrum Radio Quasars (FSRQs) alone. The main results are: (1) at any redshift, BAT detects the most luminous blazars, above any possible break in their luminosity distribution, which means we cannot differentiate between density and luminosity evolution; (2) the whole blazar population, dominated by FSRQs, evolves positively up to redshift z~4.3, confirming earlier results and implying lower number densities of blazars at higher redshifts than previously estimated. The contribution of this source class to the Cosmic X-ray Background at 14-195 keV can range from 5-18%, while possibly accounting for 100% of the MeV background. We also derived the average 14 keV-10 GeV SED for BAT blazars, which allows us to predict the number counts of sources in the MeV range, as well as the expected number of high-energy (>100 TeV) neutrinos. A mission like COSI will detect 40 MeV blazars and 2 coincident neutrinos. Finally, taking into account beaming selection effects, the distribution and properties of the parent population of these extragalactic jets are derived. We find that the distribution of viewing angles is quite narrow, with most sources aligned within < 5° of the line of sight.
Moreover, the average Lorentz factor, $\langle \Gamma \rangle$ = 8-12, is lower than previously suggested for these powerful sources. ## Preprint Comment: Accepted for publication in the Astrophysical Journal; 33 pages; 8 Tables; 16 Figures Subject: Astrophysics - High Energy Astrophysical Phenomena
# 3 Dice Combinations

asked by Emily on December 7, 2018; Probability.

If Die A = 1 and Die B = 6, that is NOT the same combination as Die A = 6 and Die B = 1. Greed, also known as 10,000, is a dice game where each player competes to be the first to reach 10,000 points. If n is the number of dice, then the number of distinct combinations (outcomes, ignoring order) is given by: f(n) = C(6+n-1, n) = C(5+n, n) = (5+n)!/(n! 5!). If n = 1, f(1) = 6!/5! = 6 outcomes. Basically, the game is played in rounds. A probability of 0.5 means the event A is equally likely to occur or not to occur. A bet amount is decided upon and each player puts that amount in the pile or pot. The factorial function is often used when calculating combinations and permutations. Of these, the 2d6 die roll is the most common. If the probability of a successful trial is p, then the binomial distribution gives the probability of having x successful outcomes in an experiment of n independent trials. Use the dice button to throw your dice up to three times each round. Directions: apply the combination formula to solve the problems below. How many different committees of 4 students can be chosen from a group of 15? Your program should roll the dice 36,000 times. Some rule that if a player rolls the dice 3 times without getting a meaningful combination, they are out. This means that the number of different ordered outcomes achievable in one roll of five dice is 6 * 6 * 6 * 6 * 6 = 6^5 = 7776. Combinations sound simpler than permutations, and they are. Yahtzee Game.
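The combinations-with-repetition formula f(n) = C(6+n-1, n) above can be cross-checked by brute-force enumeration. A small Python sketch (the function names are my own, not from the original text):

```python
from itertools import product
from math import comb

def distinct_combinations(n_dice, sides=6):
    """Number of distinct unordered outcomes of n_dice dice,
    via the combinations-with-repetition formula C(sides + n - 1, n)."""
    return comb(sides + n_dice - 1, n_dice)

def distinct_combinations_brute(n_dice, sides=6):
    """Same count by enumerating all ordered rolls and collapsing
    each roll to a sorted tuple, so that order is ignored."""
    faces = range(1, sides + 1)
    return len({tuple(sorted(roll)) for roll in product(faces, repeat=n_dice)})
```

For one die this gives f(1) = 6, and for three dice C(8, 3) = 56 distinct unordered outcomes, against 6^3 = 216 ordered ones.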
The Three Dice Trick: ask a student or group of students to get out a calculator, a pen and paper. Ranking below the pairs are two different numbers on the dice, with the highest always being the decider and mentioned first: 6-5 is the highest combination here, followed by 6-4, 6-3, 6-2, 6-1, 5-4, 5-3 and so on all the way down to 3-1, which is the lowest possible throw. For each combination that uses two unique dice, there are 3 total arrangements of the dice. A k-combination with repetitions, or k-multicombination, or multisubset of size k from a set S is given by a sequence of k not necessarily distinct elements of S, where order is not taken into account: two sequences define the same multiset if one can be obtained from the other by permuting the terms. Take a look at the craps dice combination chart below to see all the possible outcomes that can be rolled. Since 4 distinct items can be ordered in 4! = 24 ways, I divide 5040 / 24 = 210. There are n! ways of arranging n distinct objects. Craps is undoubtedly the most popular dice game of all time and a favorite of millions of players across the world. The probability that the first die shows an odd number is 1/2, as is the probability that the second does. The 2 and 12 are the hardest to roll since only one combination of dice is possible for each. There are C(3,1) x 5^2 = 75 outcomes of three dice in which exactly one die shows a 6.
John did this 1000 times and obtained the following results: number of blue balls picked out: 300. Dice, as a game mechanic, have been around for a long time. Now we want to count simply how many combinations of numbers there are, with 6, 4, 1 now counting as the same combination as 4, 6, 1. In backgammon there is a 41.66% (15/36) probability of getting a 4-pip move (covered points permitting). Consider next the probability of E, P(E). We will be using the random module for this, since we want to randomize the numbers we get from the dice. Remember: only a 3% chance of a farkle! If you have 5 dice in play and less than 2000 points, roll again. The combinations-with-repetition formula addresses problems such as "find the number of possible ways to distribute 3 balls in 5 boxes" (allowing some box to be empty). Dungeons and Dragons, Yahtzee, and a huge number of other games all rely on throwing dice, from the 4-sided pyramid shape to the familiar 6-sided cube and the monster 20-sided variety. To refer to combinations in which repetition is allowed, the terms k-selection, k-multiset, or k-combination with repetition are often used. The number of ways to choose which two of four dice show a "6" is C(4,2) = 6. The remaining two dice can each land in one of five possible ways, so the total number of outcomes with exactly two "6"s is 6 x 5 x 5 = 150.
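The count of 150 outcomes with exactly two "6"s follows a binomial-style pattern (choose which dice show the face, let the rest take any other value), which a short Python check confirms by enumeration. The helper names are mine, not from the text:

```python
from itertools import product
from math import comb

def outcomes_with_exactly_k(face, k, n_dice, sides=6):
    """Count ordered rolls of n_dice dice showing `face` exactly k times:
    choose which k dice show it, the rest take any of the other sides."""
    return comb(n_dice, k) * (sides - 1) ** (n_dice - k)

def outcomes_brute(face, k, n_dice, sides=6):
    """Same count by checking every ordered roll directly."""
    rolls = product(range(1, sides + 1), repeat=n_dice)
    return sum(1 for roll in rolls if roll.count(face) == k)
```

The same helper reproduces the earlier figure of C(3,1) x 5^2 = 75 three-dice outcomes with exactly one 6.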
Two balls are selected one by one without replacement. Remind him that there are 6 options on both sides. A sum of 7 has the most combinations (1-6, 2-5, 3-4, 4-3, 5-2, 6-1). What if I roll 3 dice? I did a little math on my lunch break, wrote out the combinations, and figured that 10 and 11 are the most common sums for 3 dice. You can align them to make ideal combinations on the sides of the dice to produce the desired effect. There is a total of 6^3 = 216 ordered combinations if you roll 3 dice. Rolling two dice: here is the previous program adapted to rolling two dice and keeping track of the sum of the spots. Just memorize this, because I'm going to show you a cool trick. Each number (2 through 12) will come up with a certain frequency. You will need two dice to play this game.
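The sum frequencies for two and three dice described above can be tabulated in a few lines of Python; this sketch (function name mine) confirms that 7 is the most common two-dice sum and that 10 and 11 tie as the most common three-dice sums:

```python
from itertools import product
from collections import Counter

def sum_frequencies(n_dice, sides=6):
    """Map each possible sum to the number of ordered rolls producing it."""
    faces = range(1, sides + 1)
    return Counter(sum(roll) for roll in product(faces, repeat=n_dice))

two = sum_frequencies(2)    # 36 ordered outcomes in total
three = sum_frequencies(3)  # 216 ordered outcomes in total
```

For two dice, `two[7]` is 6 (out of 36) while `two[2]` and `two[12]` are 1 each; for three dice, `three[10]` and `three[11]` are each 27 (out of 216).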
The Precision Shooter arranges the dice into various "sets" or dice-face combinations. A few more examples of abilities: Hero 3: d12+d8. If the d8 result is greater than the d12 result and you win this turn, deal +2 damage. When using 3 dice, the total sum of pips must be between 3 and 18. Add, remove or set the number of dice to roll. 8C1: we chose one person out of the remaining 8 people. This often includes determining possible combinations, such as those involving dice, coins and playing cards. Other combinations are meaningless. So let's just take another example. Let us understand the sample space of rolling two dice. Each dice combination contains a single die type, a count (the number of dice rolled), and a bonus (a value added to or subtracted from the result). Only {a,b,c} is missing because that is the only one that has all 3 from the list a, b, c. The first to roll a three becomes the Three Man. They book a row of three seats on the plane. SOLUTION: to find the number of permutations of 3 horses chosen from 10, find 10P3.
Simplified craps game: roll two dice; if the combination is 2, 3, or 12, you lose; if the combination is 7 or 11, you win; on any other number, save it as the point and roll again, and if you roll the point again, you also win. The objective of the game is to accumulate points by rolling certain combinations. The possible sums range from 2 to 12. There is an equal probability of rolling each of the numbers 1-6 on a single die. Therefore, the total number of possible combinations is 49*48*47*46*45*44 (or 49!/43! on your calculator), which equals approximately 10 billion. The number of outcomes in which none of the 4 dice shows a 3 will be 5 x 5 x 5 x 5 = 625 outcomes. For the total number of non-repeatable combinations with 4 dice, use Pascal's triangle to get 126 unique combinations. Hero 2: 5d4. If the total score is below 10 and you win this turn, deal +5 damage.
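Both counts in the paragraph above, the 5^4 = 625 rolls of four dice with no 3 and the 126 distinct unordered outcomes from the Pascal's-triangle (stars-and-bars) argument, can be verified by direct enumeration. An illustrative sketch:

```python
from itertools import product
from math import comb

FACES = range(1, 7)

# Ordered rolls of four dice where no die shows a 3:
# each die independently has 5 allowed faces, so 5**4 in total.
no_threes = sum(1 for roll in product(FACES, repeat=4) if 3 not in roll)

# Distinct unordered outcomes of four dice: collapse each ordered roll
# to a sorted tuple; stars-and-bars predicts C(6 + 4 - 1, 4) of them.
unordered = len({tuple(sorted(roll)) for roll in product(FACES, repeat=4)})
```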
For example, a combination of 2 from 3 is AB. The chart illustrates the probability of combined dice scores from 2 dice. This game is based solely on chance, but players should not be tricked by its apparent simplicity. The lowest roll you can get is 2 (snake eyes) and the highest roll you can make is 12 (boxcars). To calculate the probability of winning, we must now find out how many total combinations of 4 numbers can be chosen from 10; to do so, we can use the combinations formula. Looking at the chart, you see that there are pairs of totals that have the same number of combinations: 2 & 12, 3 & 11, 4 & 10, 5 & 9, 6 & 8. Since there are 3 books, the first place may be filled in 3 ways. Here are the possible combinations summing to 7: 1 + 6 = 2 + 5 = 3 + 4 = 7. On this page we are proposing rules for various combinations to all those of you who enjoy trying out new things. Some combinations are exciting and make sense, others are not recommendable. Permutation and combination is a very important topic in any competitive exam. Another way to think about this is as follows. There are 13 rounds in the game, and the goal is to rack up the highest number of points by rolling certain combinations with the dice. Certain die combinations are worth points.
I added a picture of what set dice look like below. Throughout mathematics and statistics, we need to know how to count. The probability of rolling a given number on a die three times in a row is (1/6)^3, where 3 represents the number of throws and 6 is the number of faces. To get the actual probabilities of a given macrostate you have to figure out how many microstates correspond to it. In other words, this is the number of ways to sample k elements from a set of n elements allowing for repetition. The fundamental difference between a permutation and a combination is the order of the objects: in a permutation the order of the objects is very important. Roll D20, D100, D8, D10, D12, D4, and more. 10P4 / 10^4 = 5040 / 10000 = 0.504. We just need to count possibilities for two dice because the third die's value is fixed. In example 1, we counted the number of rolls of three dice with different numbers showing. Suppose we were to list all 120 possibilities from example 1. Each player takes 10 turns to score.
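The 10P4 / 10^4 ratio above (the probability that four digits drawn independently from 0-9 are all distinct) can be reproduced with Python's `math.perm`; a minimal sketch:

```python
from math import perm

# Favourable ordered draws: 10P4 = 10 * 9 * 8 * 7 ways to pick
# four all-different digits; total draws: 10**4.
favourable = perm(10, 4)
p_all_distinct = favourable / 10 ** 4
```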
The probability of Die 1 rolling a 1 is 1/6. There are only 3 combinations for this. As a simple example of a Monte Carlo simulation, consider calculating the probability of a particular sum of the throw of two dice (with each die having values one through six). Students roll 3 dice, looking for combinations to 10 using any combination of dice. Use nchoosek instead (ans: 56). The ratio of successful events A = 3 to the total number of possible combinations of the sample space S = 8 is the probability of 2 heads in 3 coin tosses. 3 and 11 have two. It serves no purpose in statistics to talk about 1-6 and 6-1 as being the SAME combination. By default it will roll 2 dice 1 time and the dice will be fair. The 6 and 8 have five total possibilities each, while the 11 and 3 have two possibilities each. With 3 dice, there are 216 possible outcomes (treating order of dice as significant). Is there a way to calculate two-dice combinations while throwing three dice?
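The Monte Carlo example described above might be sketched as follows; the trial count, seed, and target sum of 7 are arbitrary choices of mine:

```python
import random
from fractions import Fraction

def estimate_sum_probability(target, n_trials=100_000, seed=0):
    """Estimate P(two fair dice sum to `target`) by simulation."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_trials)
        if rng.randint(1, 6) + rng.randint(1, 6) == target
    )
    return hits / n_trials

# Exact answer for comparison: a sum of 7 occurs in 6 of the 36 ordered rolls.
exact = Fraction(6, 36)  # = 1/6
estimate = estimate_sum_probability(7)
```

With 100,000 trials the estimate lands within about a percentage point of the exact 1/6, which is the usual 1/sqrt(n) behaviour of Monte Carlo estimates.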
Show that the probability of rolling doubles with a non-fair ("fixed") die is greater than with a fair die. With two distinguishable dice there are 36 equally likely outcomes: in 6 of them ({1,1}, {2,2}, {3,3}, {4,4}, {5,5}, {6,6}) the numbers are the same, and in the other 30 the dice are different, so a fair die gives doubles with probability 6/36 = 1/6. In craps, the Field bet pays 1:1 (even money) if the next throw is a 3, 4, 9, 10, or 11, 2:1 (double the bet) on the 2, and 3:1 (triple the bet) on the 12. Orderings of all 12 numbers, for example 1,2,3,4,5,6,7,8,9,10,11,12 and 2,4,6,8,10,12,11,9,7,5,3,1, are what mathematicians call permutations of the 12 numbers rather than combinations, and there are 12! = 479,001,600 of them.
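The doubles claim can be sketched numerically: for any weighting p_1..p_6, the probability of doubles is the sum of the p_i squared, which is minimized exactly when the die is fair. A small check with exact fractions (the weight vector for the "fixed" die is an illustrative assumption):

```python
from fractions import Fraction

def p_doubles(weights):
    """P(two independent throws of the same die match) = sum of p_i^2."""
    total = sum(weights)
    return sum(Fraction(w, total) ** 2 for w in weights)

fair = p_doubles([1] * 6)              # 6 * (1/6)^2 = 1/6
fixed = p_doubles([3, 1, 1, 1, 1, 1])  # die biased toward one face
```

Any deviation from uniform weights pushes the sum of squares above 1/6, which is the statement being proved.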
By the multiplication principle, if there are 5 choices of pants and 7 choices of shirts, there are 5 × 7 = 35 unique combinations of pants and shirts Mark can wear. N! means N × (N−1) × ... × 2 × 1. If an 8-sided die is rolled, the primes among its faces are 2, 3, 5 and 7, so the probability of a prime is 4/8 = 1/2. The number of 3-letter words that can be formed from 4 distinct letters is 4P3 = 4! / (4 − 3)! = 24. Of the 36 possibilities with two distinguishable dice, 6 are pairs of one number (1 plus 1, etc.). One cee-lo variation assigns a point based on the pair rolled, rather than the singleton. With two dice the number of outcomes is 36, and with three dice you multiply by another six to get 216 outcomes; for example, the 6,5 pair occurs 27 times out of the 216 combinations.
To answer this question, first consider the set of all possible outcomes of the experiment of rolling two dice. Counting with constraints works the same way: with four dice, choose which die shows the single "6" (4 ways); the remaining three dice can each land in one of five possible ways, so the total number of outcomes with exactly one "6" is 4 × 5 × 5 × 5 = 500. The same product rule gives lock combinations: the count is (number of choices) ^ (number of digits), so a lock with 3 digits chosen from 0–9 has 10^3 = 1000 possible combinations, and one with 4 digits has 10^4 = 10000. For four six-sided dice, the most common roll is 14, with probability 73/648; the least common rolls are 4 and 24, both with probability 1/1296. With five dice, the number of different results achievable in one roll is 6 × 6 × 6 × 6 × 6 = 6^5 = 7776.
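Both four-dice counts above can be verified by exhaustive enumeration of the 6^4 = 1296 outcomes (a brute-force sketch; the variable names are ours):

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=4))   # all 6^4 = 1296 outcomes

# Exactly one die shows a 6: should match 4 * 5^3 = 500.
exactly_one_six = sum(1 for r in rolls if r.count(6) == 1)

# The most common four-dice total, 14: should appear 146 times,
# i.e. with probability 146/1296 = 73/648.
sum_14 = sum(1 for r in rolls if sum(r) == 14)
```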
First, you count up how many different ways the six numbers can be chosen. In a 6-from-49 lottery, the total number of ordered selections is 49 × 48 × 47 × 46 × 45 × 44 (or 49!/43! on your calculator), which equals approximately 10 billion; dividing by the 6! orderings of a ticket gives the number of combinations. Combinations with repetition address problems such as "find the number of possible ways to distribute 3 balls in 5 boxes" (allowing some box to be empty); the combinations-with-repetitions formula C(n + k − 1, k) answers this with C(5 + 3 − 1, 3) = 35. As you can see, 7 is the most common roll with two six-sided dice. A general version of these questions: given n dice, each with m faces numbered from 1 to m, find the number of ways to get sum X.
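The n-dice sum problem has a standard dynamic-programming solution: build up, one die at a time, the number of ways to reach each partial sum. A self-contained sketch (function name ours):

```python
def ways_to_sum(n, m, x):
    """Number of ways n dice with faces 1..m total exactly x (DP over dice)."""
    ways = [1] + [0] * x                 # ways[s] to reach sum s with 0 dice
    for _ in range(n):
        nxt = [0] * (x + 1)
        for s in range(x + 1):
            if ways[s]:
                for face in range(1, m + 1):
                    if s + face <= x:
                        nxt[s + face] += ways[s]
        ways = nxt
    return ways[x]
```

For example, ways_to_sum(2, 6, 7) recovers the 6 ways to roll a 7 with two dice, and ways_to_sum(4, 6, 14) recovers the 146 ways behind the 73/648 figure.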
In cee-lo, a "trip" is all three dice showing the same number, and it is the next best combination. The game is played with a set of perfectly balanced dice, each die having six white dots numbered 1 through 6. The naive approach to the dice-sum problem above is to generate all possible combinations of values from the n dice and count the results that sum to X; this takes on the order of m^n work, which is why the dynamic-programming approach is preferred. Yahtzee can be played solitaire or by a group: you have three rolls on each turn to get the combination you want, and you can keep as many dice from each roll as you want.
Yes, those 2 combinations look the same when you throw the dice, but they are distinct ordered outcomes. For three dice totalling 8, the possible unordered combinations are 1-2-5, 1-3-4, 1-1-6, 2-3-3, and 2-2-4. Let us understand the sample space of rolling two dice before counting such cases. In a class of 10 students, the number of ways a club of 4 students can be chosen is 10C4 = 210. We consider permutations in this section and combinations in the next section.
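Each unordered combination expands into a different number of ordered outcomes (a multiset permutation count), and the expansions can be cross-checked by brute force. A sketch, assuming the five listed combinations are for a three-dice total of 8:

```python
from itertools import product
from math import factorial
from collections import Counter

partitions = [(1, 2, 5), (1, 3, 4), (1, 1, 6), (2, 3, 3), (2, 2, 4)]

def orderings(part):
    """Distinct orderings of a 3-element multiset: 3! over repeats."""
    denom = 1
    for count in Counter(part).values():
        denom *= factorial(count)
    return factorial(3) // denom

ordered_total = sum(orderings(p) for p in partitions)   # 6+6+3+3+3 = 21

# Cross-check against all 216 ordered three-dice rolls.
brute = sum(1 for r in product(range(1, 7), repeat=3) if sum(r) == 8)
```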
Thus, the number of arrangements = 3! × 4! = 6 × 24 = 144. When you toss a fair coin once, the chance of a head is 1/2 and the chance of a tail is 1/2. When using 3 dice, the total sum of the eyes must be between 3 and 18; the probabilities are highest in the middle of the range and get lower as you move to the extremes. In a simplified craps game you roll two dice: if the combination is 2, 3, or 12, you lose; if it is 7 or 11, you win; any other number is saved as the point and you roll again, and if you roll the same as the point, you win. Take a look at a craps dice combination chart to see all the possible outcomes that can be rolled. The number of ways to order a set of items is a factorial. A certain event means an event which is sure to happen, i.e. one with probability 1.
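The craps description can be simulated directly. Note one assumption beyond the simplified rules above: once a point is set, the standard pass-line rule is that a 7 loses (otherwise the point phase never ends). A sketch with our own function name and seed:

```python
import random

def pass_line_wins(trials=100_000, seed=7):
    """Simulate the pass-line bet: 7/11 win, 2/3/12 lose on the come-out;
    otherwise the number becomes the point, and you re-roll until you
    repeat the point (win) or roll a 7 (lose, per the standard rule)."""
    rng = random.Random(seed)
    roll = lambda: rng.randint(1, 6) + rng.randint(1, 6)
    wins = 0
    for _ in range(trials):
        come_out = roll()
        if come_out in (7, 11):
            wins += 1
        elif come_out not in (2, 3, 12):
            point = come_out
            while True:
                r = roll()
                if r == point:
                    wins += 1
                    break
                if r == 7:
                    break
    return wins / trials
```

The estimate should come out just below 1/2 (the exact pass-line win probability is 244/495, about 0.493, which is where the house edge comes from).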
These totals range from 2 to 12. Adding one more die multiplies the number of possible outcomes by 6. For 6-sided dice, a quick and dirty way to find the most common result is to add the lowest possible result (all 1s) and the highest possible result (all 6s) and divide by two. There are 2 ways to roll a 3 — you can roll a 1 and a 2, or roll a 2 and a 1 — so the probability of rolling a 3 is 2 out of 36. The 2 and 12 are the hardest to roll, since only one combination of dice produces each. How many possible ways are there to roll a 6 with two dice? There are 5 (1-5, 2-4, 3-3, 4-2, 5-1), and with Boltzmann's constant k_B = 1.38 × 10⁻²³ J/K the entropy associated with that outcome is S = k_B ln 5. In dice setting, unlike the All Sevens set, only two of the on-axis combinations of some sets add up to seven — the 3-4 and the 4-3. For all of this we study the topics of permutations and combinations: for example, using a permutations formula, the number of ways ten horses running in a race can finish first, second and third is 10 × 9 × 8 = 720.
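The whole two-dice distribution can be tabulated in a few lines, confirming the counts just quoted (variable names ours):

```python
from itertools import product
from collections import Counter

# Ways to roll each total 2..12 with two fair dice.
totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))

most_common_total, ways = totals.most_common(1)[0]   # 7 occurs 6 ways
```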
Cee-lo (sometimes spelled cilo, celo, c-lo, or cee-low) is a game of chance played with three six-sided dice. As in Katie and Paul's "dead easy" solution to the three-dice puzzle, one can sort all the cases into columns according to the modulo sum (A + B + C modulo 6), which acts as the first virtual die. As the number of dice increases, the random mixed states start to dominate until the chance of the less-random, ordered states becomes negligible. If n = 2, f(2) = 7!/(2!·5!) = (7 × 6)/2 = 21. A probability such as 3/8 is 37.5%, which you can round to 38% if you wish. Each new term in the Fibonacci sequence is generated by adding the previous two terms; by starting with 1 and 2, the first 10 terms are 1, 2, 3, 5, 8, 13, 21, 34, 55, 89. "Combinations" gives the number of ways a subset of r items can be chosen out of a set of n items. In Yahtzee, the upper-section scores are summed, and if the total is 63 or more a bonus is awarded.
To calculate the probability of winning, we must now find out how many total combinations of 4 numbers can be chosen from 10; using the combinations formula, C(10,4) = 210. But what about a result like (3,2,2) when rolling four dice and dropping the lowest? Since we drop the lowest score, the dice results that can generate (3,2,2) include four separate ordered combinations when the extraneous die equals two, plus however many combinations there are when the extraneous die equals one. Your 5 dice can be rolled up to 3 times per turn to score in a category. For a controlled on-axis throw, only 2 of the 16 on-axis face combinations total seven; in other words, when rolled on axis the seven will only appear 12.5% of the time, versus 1/6 ≈ 16.7% for a random roll. Choosing three toppings for a pizza when there are nine choices is a combination, not a permutation, since the order of toppings does not matter. Since the coefficient of x^n is C(k, n) q^(k−n) p^n, the generating function is A(x) = Σ_{n≥0} C(k, n) q^(k−n) p^n x^n = (q + px)^k, using the binomial theorem again.
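Rather than reasoning case by case about the extraneous die, the whole "roll four, drop the lowest" distribution can be enumerated (a brute-force sketch over the 1296 ordered rolls; names ours):

```python
from itertools import product
from collections import Counter

# Distribution of "roll four dice, drop the lowest, sum the rest".
dist = Counter(
    sum(sorted(r)[1:]) for r in product(range(1, 7), repeat=4)
)

# A total of 18 requires the top three dice to all be 6s:
# 20 rolls with exactly three 6s plus 1 roll of four 6s.
p_18 = dist[18] / 6**4
```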
Suppose we plan to toss a coin 3 times and the outcome of interest is the number of heads. For a full house with five dice, three dice show the same number (i.e., three of a kind) and the remaining two dice show the same (different) number — a pair. Since only 1 of the 36 possible two-dice combinations totals 2, the probability of getting a 2 on a roll is 1 out of 36, or 35 to 1 in odds terms. If each of two dice must show one of 3 given faces, there are 3 × 3 = 9 favorable outcomes; for three dice, there are 3 × 3 × 3 = 27 favorable outcomes. How many different ways are there of selecting three balls from ten? 10C3 = 10!/(3!(10 − 3)!) = (10 × 9 × 8)/(3 × 2 × 1) = 120.
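The full-house count with five dice follows the same pattern: 6 choices for the triple's value, C(5,3) = 10 positions for it, and 5 remaining values for the pair, giving 300 of the 7776 outcomes. A brute-force check (function name ours):

```python
from itertools import product
from collections import Counter

def is_full_house(roll):
    """Three of a kind plus a pair of a different number."""
    counts = sorted(Counter(roll).values())
    return counts == [2, 3]

hits = sum(is_full_house(r) for r in product(range(1, 7), repeat=5))
p_full_house = hits / 6**5   # 300 / 7776, about 3.9%
```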
This means that for the example of the combination lock above, this calculator does not count cases with repeated values, such as 3-3-3. The two-dice totals 3 and 11 each have two combinations, while there is only one combination that yields a total of 12 — when each die displays a 6. The number of ways to choose two "6"s from four dice is C(4,2) = 6; the remaining two dice can each land in one of five possible ways, so the total number of outcomes with exactly two "6"s is 6 × 5 × 5 = 150. When you're rolling two dice, a Four and a Ten each have 3 different combinations out of the 36 total combinations possible.
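The "exactly k sixes" counts generalize to a single formula: choose which k of the n dice show a 6, then let each remaining die land one of the 5 other ways. A sketch (function name ours; math.comb needs Python 3.8+):

```python
from math import comb

def ways_exactly_k_sixes(n, k):
    """Outcomes of n fair dice showing exactly k sixes: C(n,k) * 5^(n-k)."""
    return comb(n, k) * 5 ** (n - k)
```

With n = 4 this recovers both counts used above: 500 outcomes with exactly one 6 and 150 with exactly two, and summing over k = 0..4 recovers all 6^4 = 1296 outcomes.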
The Straight Sixes dice set: the sixes are on both horizontal surfaces, the twos or fives on the vertical. How many arrangements of the four dice values 3, 2, 2, 1 are there? Since the two 2s are indistinguishable, there are 4!/2! = 12. The 36 ordered outcomes of two dice collapse to 21 unordered combinations: 1,1; 1,2; 2,2; 1,3; 2,3; 3,3; 1,4; 2,4; 3,4; 4,4; 1,5; 2,5; 3,5; 4,5; 5,5; 1,6; 2,6; 3,6; 4,6; 5,6; 6,6. Of these, 6 are doubles and the other 15 each arise from 2 orderings (15 × 2 + 6 = 36). For the letters a, b, c you have three choices for the first letter, two for the second and only one for the third, giving 3 × 2 × 1 = 6 orderings. With three dice we may also want to count simply how many combinations of numbers there are, with 6, 4, 1 now counting as the same combination as 4, 6, 1.
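Unordered dice combinations are exactly what itertools.combinations_with_replacement enumerates, and the counts match the multiset formula C(n + k − 1, k) — 21 for two dice and 56 for three:

```python
from itertools import combinations_with_replacement
from math import comb

# Unordered outcomes: order of the dice no longer matters.
two_dice = list(combinations_with_replacement(range(1, 7), 2))
three_dice = list(combinations_with_replacement(range(1, 7), 3))

formula_two = comb(6 + 2 - 1, 2)     # 21
formula_three = comb(6 + 3 - 1, 3)   # 56
```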
While the outcome of rolling two dice is not certain, the list of possible outcomes of rolling two dice is known. Ordered arrangements are called permutations: for example, the number of ways to fill 1st, 2nd and 3rd place from five entrants is 5 × 4 × 3 = 60. One cee-lo variation scores by the pair rather than the singleton; i.e., a 5-5-2 gives a five (also known by various slang terms such as "fevers"), which beats a 3-3-6 three. In that scoring, the relevant value becomes a property of one of the dice rather than a property of the combination of the two. Rolling three dice, the possible totals run from the minimum of 3 × 1 = 3 to the maximum of 3 × 6 = 18. For two dice, the probability that the sum is one is 0, since the smallest possible sum is 2.
In Dice With Buddies, as in Yahtzee, the objective is to score the most points by rolling different combinations. In the preface to his classic probability text, Feller wrote about his treatment of fluctuation in coin tossing: "The results are so amazing and so at variance with common intuition that even sophisticated colleagues doubted that coins actually misbehave as theory predicts." Dice setters choose among sets depending on variables such as shooting style or, more importantly, whether it is the come-out roll or the Point has already been established. A typical scoring table, as in Farkle: single ones are worth 100 points each and single fives 50 points each; three of a kind scores 1000 for three 1s, and 200, 300, 400, 500 or 600 for three 2s through three 6s respectively. If on one turn you roll a 2, three 3's and a 6 on the first roll, you might decide to keep the 3's and roll the other three dice again.
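That scoring table is easy to encode. A simplified single-roll scorer (it handles only singles and triples, ignoring straights and four-or-more of a kind; the function name is ours):

```python
from collections import Counter

TRIPLE_SCORE = {1: 1000, 2: 200, 3: 300, 4: 400, 5: 500, 6: 600}

def score_roll(dice):
    """Simplified Farkle-style scorer: one triple per face as listed
    above, then 100 per leftover 1 and 50 per leftover 5."""
    counts = Counter(dice)
    score = 0
    for face, n in counts.items():
        if n >= 3:
            score += TRIPLE_SCORE[face]
            n -= 3
        if face == 1:
            score += 100 * n
        elif face == 5:
            score += 50 * n
    return score
```

For example, a roll of three 3s, a 1 and a 5 scores 300 + 100 + 50 = 450.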
These days the plural word dice is often used for one die as well, and the dictionary recognises this. In the arrangement problem above, the 3 vowels may only occupy the 3 positions marked (1), (3) and (5), in any order, while the consonants fill the remaining places — hence the 3! × 4! count. In Farkle-style play, if you have 2 dice left and fewer than 400 points you should try another throw, and your chances of getting a Farkle on all 6 dice are slim, but it can happen. A hard way 8 wins if a 4-4 is rolled before a 7 or before the easy-way combinations 2-6, 6-2, 3-5 and 5-3. If three positions can each independently take any one of 3 colors, repeats allowed, there are 3^3 = 27 assignments. What is the probability that the sum of two dice is even and one die shows a 4? A 6 × 6 table of dice outcomes will help you to answer this question.
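The closing question can be answered by enumerating that 6 × 6 table (a sketch; variable names ours). The favorable cells are those where a 4 appears and the other die is even: (4,2), (2,4), (4,4), (4,6), (6,4), so the probability is 5/36:

```python
from itertools import product

# Enumerate the 6x6 table: sum even AND at least one die shows a 4.
favorable = [
    (a, b) for a, b in product(range(1, 7), repeat=2)
    if (a + b) % 2 == 0 and 4 in (a, b)
]
p = len(favorable) / 36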
Number of combinations with n = 10 and k = 4: C(10,4) = 210. When three coins are tossed and we count heads, the sample space is {0, 1, 2, 3}. Probability theory is an area of statistics concerned with the odds or chances of an event happening in a trial, e.g. getting a six when a die is thrown or drawing the ace of hearts from a pack of cards. What is the probability of getting a sum of 5 with two dice? Of the 36 equally likely outcomes, four give a sum of 5 (1-4, 2-3, 3-2, 4-1), so the probability is 4/36 = 1/9. Listing your 3 favorite desserts, in order, from a menu of 10 is a permutation, since the order matters: 10P3 = 720. It is also possible to emulate a 2-dice roll with other dice with identical probabilities, so that you're just as likely to end up with a 2 (or a 7, or a 10) as if you were rolling 2 dice.
LOOK AT THE TREE DIAGRAM ABOVE. A pair of dice, two different colors (for example, red and blue) A piece of paper; Some M&M's or another little treat; What You Do: Tell your child that he's going to learn all about probability using nothing but 2 dice. You'll notice when reading with Lenormand decks, that these cards tend to. 156 likes · 3 talking about this. The player must "hit" the Dice Block and advance the number of spaces it shows. Thus, the number of arrangements = 3! x 4! = 6 x 24 = 144. Commercial kitchen repair and restaurant equipment buyers in Charlotte, NC and Gastonia, NC. (v) A and F must sit at the end of each row Put A and F at the ends and B, C, D and E In between them. Just make sure you don't duplicate any combinations. Each turn consists of rolling the dice and choosing one combination from the score table. Students roll 3 dice, looking for combinations to 10 using any combination of dice. Super Mario Party Best Characters. You find pips on things like dice and dominoes. And let's do it with the formula first. The solution is just a simple lookup table that maps the sum of the 3 dice to a number between 2 and 12 (which are the values available from 2 dice). Why don't you shake them and give them another roll! How many dice would you like to roll? 1 dice - 2 dice - 3 dice - 4 dice - 5 dice - 6 dice - 7 dice - 8 dice - 9 dice - 10 dice - 11 dice - 12 dice - 13 dice. Welcome to our specialist dice range. How many possible ways are there to roll a 6? What is the entropy associated with an outcome of 6? KB=1. What is the. load is not required to sum to 1, but the elements will be divided by the sum of all the values. Example: For set of А, В, С number of combinations of 2 from 3 is 3!/(2!*1!) = 3. People who like Yahtzee also like Free Bejeweled. The objective of the game is to accumulate points by rolling certain combinations. 
Then ask them student to put the dice in a horizontal row and write down the three values shown on the dice to make a three digit number. Fives dice: how many different combinations? In Yahtzee you play with five dice, each with 6 numbers on it. This combination of dice scores 25 points. The objective of Poker Dice game is to score points by rolling dice and make certain combinations. There are many, many different dice sets to experiment with and they make certain numbers appear more efficiently than others.
# Why do some authors define the ordered pair as the set $(a,b)=\{\{a\},\{a,b\}\}$? [duplicate]

I am using a textbook called Foundations of Mathematical Analysis by Johnsonbaugh, and in it he defines the ordered pair of elements $a$ and $b$, written as $(a,b)$, as the set $(a,b)=\{\{a\},\{a,b\}\}$, where $a$ is called the first element of $(a,b)$ and $b$ is called the second element of $(a,b)$. This definition is a bit strange; how should I interpret it? Thank you!

## marked as duplicate by Henning Makholm, Pedro Tamaroff, egreg, mrf, Sami Ben Romdhane Apr 6 at 22:23

There's no need to keep alternating into and out of MathJax. I've set your notation entirely within MathJax. Take a look. – Michael Hardy Apr 6 at 21:19

Perhaps it is worth mentioning that originally this definition is due to Kazimierz Kuratowski. – Tomek Kania Apr 6 at 21:21

– Henning Makholm Apr 6 at 21:25

Sets don't have order. For example, $\{1, 2, 3\} = \{2, 3, 1\}$. One trick to encode order anyway is to use the definition above: the item that is an element of *both* elements of the set is the first element of the pair, and the item that appears in only one of them is the second element.

Edit 2: I figured I'd present two alternate definitions of an ordered pair that also work, but are a bit uglier to actually use:
$$(a, b) := \{\{a, 0\}, \{b, 1\}\} \\ (a, b) := \{a, \{a, b\} \}$$
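To make the recovery trick concrete, here is a small Python sketch (my addition, not from the question or answer; the names `kpair`, `first`, and `second` are invented for this illustration). It encodes Kuratowski pairs as `frozenset`s — which requires the elements to be hashable — and recovers both coordinates, including the degenerate case $(a,a)=\{\{a\}\}$:

```python
def kpair(a, b):
    """Kuratowski ordered pair: (a, b) = {{a}, {a, b}}, built from frozensets."""
    return frozenset([frozenset([a]), frozenset([a, b])])

def first(p):
    """The first coordinate is the item belonging to EVERY member of the pair."""
    return next(iter(frozenset.intersection(*p)))

def second(p):
    """The second coordinate is the item of the union missing from some member;
    if there is no such item, the pair is (a, a) and both coordinates coincide."""
    union = frozenset.union(*p)
    rest = union - {first(p)}
    return next(iter(rest)) if rest else first(p)
```

Because the encoding is a plain set, equality of pairs is just set equality, yet `kpair(1, 2) != kpair(2, 1)` — exactly the property the definition is engineered to provide.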
# Project Euler Problem 52

Problem 52 was too easy, so I took the time to use its simplicity to study how lazy sequences are chunked, and how they are grouped when `apply`'d. Because this is such a simple problem, I'll show you the step-by-step refactoring process I took as I tried to improve the performance and simplicity of the code.

Our goal in this problem was to find a number that, when multiplied by 2, 3, 4, 5, and 6, has the exact same digits, but in a different order.

First, let's write something that solves the problem.

    (defn digs [n] (sort (str n)))

    (defn euler-52 []
      (first (for [x (iterate inc 1)
                   :when (every? #(= (digs x) %)
                                 (map #(digs (* x %)) [2 3 4 5 6]))]
               x)))

    (time (euler-52))
    ;; "Elapsed time: 563.332529 msecs"

Since that was trivial to write, let's see if we can't make it more elegant and faster. First, let's make it faster by using the short-circuiting behavior of and:

    ;; Short-circuits using and
    (defn euler-52-b []
      (loop [n 1
             d (digs n)]
        (if (and (= d (digs (* 2 n)))
                 (= d (digs (* 3 n)))
                 (= d (digs (* 4 n)))
                 (= d (digs (* 5 n)))
                 (= d (digs (* 6 n))))
          n
          (recur (inc n) (digs (inc n))))))

    (time (euler-52-b))
    ;; "Elapsed time: 159.1435 msecs"

What if we wanted to improve the brevity? Maybe we can use apply and = to avoid treating the n case differently from the 2n, 3n, …, 6n cases.

    ;; Short-circuiting broken because of chunking
    (defn euler-52-c []
      (first (filter (fn [n]
                       (apply = (map #(digs (* n %)) [1 2 3 4 5 6])))
                     (iterate inc 1))))

    (time (euler-52-c))
    ;; "Elapsed time: 603.714745 msecs"

You may be wondering, "Why did the performance get so bad? Shouldn't the performance be nearly as good as euler-52-b?" The answer is that the chunking behavior of sequences (which takes things in groups of 32) meant that the entire vector [1 2 3 4 5 6] was being evaluated and passed through the map. Also, apply, when given a long list, takes its arguments in groups of four, which is more than is needed.
If you want to see this chunking behavior visually, try the following two lines of code in your REPL:

    (apply = (map #(do (print %) %) (iterate inc 1))) ;; Groups by 4 during apply
    (apply = (map #(do (print %) %) (range)))         ;; Seqs chunk by 32's

Thinking of workarounds for the natural behavior of apply (it may be instructive to look at clojure.core's source to see how apply is implemented, if you are curious why it acts this way), I tried to use take-while and count that all the elements were consumed, but the result is a little cumbersome.

    ;; Short-circuits because of take-while
    (defn euler-52-d []
      (first (filter (fn [n]
                       (let [d (digs n)]
                         (= 5 (count (take-while #(= d (digs (* n %)))
                                                 [2 3 4 5 6])))))
                     (iterate inc 1))))

    (time (euler-52-d))
    ;; "Elapsed time: 236.771 msecs"

Changing hats once again and thinking about brevity, let's use recur to shorten our definition…

    ;; Nice and short
    (defn euler-52-f [n]
      (if (apply = (map #(sort (str (* n %))) [1 2 3 4 5 6]))
        n
        (recur (inc n))))

    (time (euler-52-f 1))
    ;; "Elapsed time: 552.148607 msecs"

That worked well and is certainly clearer. Now let's combine it with our earlier take-while trick and see if we can improve performance again:

    (defn euler-52-g [n]
      (let [d (sort (str n))]
        (if (= 5 (count (take-while #(= d (sort (str (* n %)))) [2 3 4 5 6])))
          n
          (recur (inc n)))))

    (time (euler-52-g 1))
    ;; "Elapsed time: 187.475716 msecs"

Great! I think most people would just stop at euler-52-g in the refactoring process, but let's do one more to flex our computer-scientist muscles, and then we'll call it quits for today.

Theoretically, short-circuiting our sequence processing could improve performance by removing the unnecessary computation caused by chunking and the take-by-four behavior of apply. If we had a version of apply that short-circuited, we could write a function like euler-52-f that would be as performant as anything else.
Of course, we don't want to actually rewrite apply, and nothing in the standard sequence library quite fits. So, let's take the time to write two functions that may be useful in the future:

    (defn reducep
      "Like reduce, but for use with a predicate. Short-circuits on first false."
      ([p coll]
       (if-let [s (seq coll)]
         (reducep p (first s) (next s))
         (p)))
      ([p val coll]
       (if-let [s (seq coll)]
         (if-let [v (p val (first s))]
           (recur p (first s) (next s))
           false)
         true)))

    (defn unchunk
      "Forces a sequence to be evaluated element by element.
      Modified from code by Stuart Sierra"
      [s]
      (when-first [x s]
        (lazy-seq (cons x (unchunk (next s))))))

With those two functions in hand, we can have our cake and eat it too: brevity and reasonable performance. Using just unchunk, we can write:

    (defn euler-52-h [n]
      (let [d (sort (str n))]
        (if (every? #(= d %1) (map #(sort (str (* n %))) (unchunk [2 3 4 5 6])))
          n
          (recur (inc n)))))

Or better yet,

    (defn euler-52-z [n]
      (if (reducep = (map #(sort (str (* n %))) (unchunk [1 2 3 4 5 6])))
        n
        (recur (inc n))))

    (time (euler-52-z 1))
    ;; "Elapsed time: 336.205473 msecs"

Which version did you like best? The easy-to-understand euler-52-b? The brief definition in euler-52-f or its more performant sister euler-52-g? Or the reducep abstraction approach of euler-52-z? Ciao!
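As a closing cross-language aside (my addition, not part of the original post): Python generators are consumed one element at a time rather than in chunks, so a reducep-style short-circuiting pairwise reduce comes almost for free there. The names `reducep`, `same_digits`, and `euler_52` below are invented for this sketch, which mirrors the Clojure euler-52-z:

```python
def reducep(p, iterable):
    """Pairwise predicate reduce: True iff p(x, y) holds for every adjacent
    pair of elements, stopping at the first failure (vacuously True if the
    iterable has fewer than two elements)."""
    it = iter(iterable)
    try:
        prev = next(it)
    except StopIteration:
        return True
    for x in it:
        if not p(prev, x):
            return False  # short-circuit: later multiples are never computed
        prev = x
    return True

def same_digits(m, n):
    """True when m and n are digit permutations of one another."""
    return sorted(str(m)) == sorted(str(n))

def euler_52(n=1):
    """Smallest n such that 2n..6n all use exactly the digits of n."""
    # The generator below is lazy and unchunked, so reducep stops
    # multiplying as soon as one adjacent pair of multiples disagrees.
    while not reducep(same_digits, (n * k for k in range(1, 7))):
        n += 1
    return n
```

Since equality of sorted digit strings is transitive, checking adjacent pairs of the multiples 1n through 6n is equivalent to checking them all against n, just as in the Clojure version.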