url (string, lengths 15 to 1.13k)
text (string, lengths 100 to 1.04M)
metadata (string, lengths 1.06k to 1.1k)
http://math.stackexchange.com/questions/40688/localisation-of-an-ideal
# Localisation of an ideal This should be quite easy, but somehow I can't find the proof. Let $P\neq Q$ be two maximal ideals in the commutative ring $R$. Then $P_Q=R_Q$. ($P_Q$ is the localisation of the $R$-module $P$ at $Q$, and $R_Q$ is the localisation of $R$ at $Q$.) - Are you sure this is the question? What if $R=\mathbb{Z}$ and $P$, $Q$ are $(2)$, $(3)$? The localizations are not equal, and I don't think they're isomorphic. – Gadi A May 22 '11 at 16:05 Are they really distinct? An element of $\mathbb{Z}_{(3)}$ is a fraction $\frac{x}{y}$ with $y$ coprime to 3. This element is equal to $\frac{2x}{2y}$, which is an element of $(2)_{(3)}$. – Michalis May 22 '11 at 16:40 Michalis, your excellent response to Gadi's objection is easy to generalize to a proof of the general statement: just use that $P$ is not contained in $Q$. – Georges Elencwajg May 22 '11 at 17:10 @elgeorges: you're right :D I was a bit confused when I posed the question. – Michalis May 22 '11 at 17:15 Since $P$ and $Q$ are distinct maximal ideals, $P$ is not contained in $Q$ and thus there exists $x \in P \cap (R \setminus Q)$. This element becomes a unit in the localization, so the localized ideal contains a unit and is thus the entire localized ring $R_Q$. This is a special case of basic results on pushing forward and pulling back ideals under a localization map: see e.g. $\S 7.2$ of my commutative algebra notes for more details. (Or see any other commutative algebra text, of course.)
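The accepted answer's argument can be written up in a few lines; here is a compact LaTeX sketch of it (assuming an amsthm-style `proof` environment):

```latex
\begin{proof}
Since $P$ and $Q$ are distinct maximal ideals, $P \not\subseteq Q$
(containment would force $P = Q$ by maximality of $P$), so we may choose
$x \in P \setminus Q$. In the localization $R_Q$ the element $x/1$ is a
unit, because $x$ lies in the multiplicative set $R \setminus Q$ and
hence $1/x \in R_Q$. Therefore the ideal $P_Q$ of $R_Q$ contains a unit,
and so $P_Q = R_Q$.
\end{proof}
```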
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9172366261482239, "perplexity": 230.90702561091868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276304.88/warc/CC-MAIN-20160524002116-00075-ip-10-185-217-139.ec2.internal.warc.gz"}
http://comunidadwindows.org/sum-of/standard-error-sum-of-squares.php
# Standard Error Sum Of Squares

SS represents the sum of squared differences from the mean and is an extremely important term in statistics. It is calculated as a summation of the squares of the differences from the mean. That is, take the sum of the X's, find the mean, subtract the mean from each score, and square each deviation. In math rules, we square before we divide, and we divide before we subtract. The sum of the deviations themselves is always zero; this zero is an important check on calculations and is called the first moment. (The moments are used in the Pearson Product Moment Correlation calculation that is often used with method comparison data.)

The second use of the SS is to determine the standard deviation. It's important to recognize that it is the sum of squares that leads to variance, which in turn leads to standard deviation. Mathematically it is the square root of SS over N; statisticians take a short cut and call it s over the square root of N. Laboratorians tend to calculate the SD from a memorized formula, without making much note of the terms. You can also use the sum of squares (SSQ) function in the Calculator to calculate the uncorrected sum of squares for a column or row. Squared error loss is one of the most widely used loss functions in statistics, though its widespread use stems more from mathematical convenience than considerations of actual loss.

The observed difference is usually the difference between the mean values by the two methods. The questions of acceptable performance often depend on determining whether an observed difference is greater than that expected by chance. The change that would be important or significant depends on the standard error of the mean and the sampling distribution of the means. Note the similarity of the formula for σest to the formula for σ: it turns out that σest is the standard deviation of the errors of prediction (each Y − Y′). As an exercise, each of the 20 students in class can choose a device (ruler, scale, tape, or yardstick) and is allowed to measure the table 10 times.
Calculation of the mean of a "sample of 100" (here $\bar{X} = 94.3$):

| Column A: Value or Score (X) | Column B: Deviation Score (X − X̄) | Column C: Deviation Score² ((X − X̄)²) |
|---|---|---|
| 100 | 100 − 94.3 = 5.7 | (5.7)² = 32.49 |

Deviation scores are also sometimes called errors (as will be seen later in this lesson); errors of the mean are the deviations of the means from the "truth": EM = M − t. The deviation method is for teaching the concept of dispersion, and shows how the SD is calculated from the variance and SS.

The mean of the sampling distribution is always the same as the mean of the population from which the samples were drawn. If, from the prior example of 2000 patient results, all possible samples of 100 were drawn and all their means were calculated, we would be able to plot these values to form the sampling distribution of the means. However, in most applications the sampling distribution cannot be physically generated (too much work, time, effort, cost), so instead it is derived theoretically. In short, sampling distributions and their theorems help to assure that we are working with normal distributions and that we can use all the familiar "gates."

Variance: the usual estimator for the variance is the corrected sample variance $S_{n-1}^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$, so that $\frac{(n-1)S_{n-1}^2}{\sigma^2} \sim \chi_{n-1}^2$. However, a biased estimator may have lower MSE; one can use other estimators for $\sigma^2$ which are proportional to $S_{n-1}^2$, and an appropriate choice can always give a lower mean squared error. In statistics, the residual sum of squares (RSS), also known as the sum of squared residuals, is the sum of the squares of residuals (deviations of predicted values from actual empirical values of the data). Adjusted sums of squares do not depend on the order the factors are entered into the model.
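A short Python check of these quantities (a minimal sketch assuming NumPy; the five hypothetical scores are chosen so that the mean is 94.3, matching the table above):

```python
import numpy as np

scores = np.array([100, 95, 90, 92, 94.5])   # hypothetical sample, mean = 94.3

mean = scores.mean()
deviations = scores - mean                   # deviation scores (X - Xbar)
# first moment: the deviations always sum to zero (check on calculations)
assert abs(deviations.sum()) < 1e-9

SS = (deviations ** 2).sum()                 # sum of squared deviations
variance = SS / (len(scores) - 1)            # corrected sample variance S^2_{n-1}
sd = np.sqrt(variance)                       # standard deviation
sem = sd / np.sqrt(len(scores))              # standard error of the mean
print(mean, SS, variance, sd, sem)
```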
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8721306324005127, "perplexity": 1074.328533465557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891807825.38/warc/CC-MAIN-20180217204928-20180217224928-00776.warc.gz"}
https://learnzillion.com/lesson_plans/7338-find-the-mass-of-an-object-using-a-balance-scale
# Find the mass of an object using a balance scale teaches Common Core State Standards CCSS.Math.Content.3.MD.A.2 http://corestandards.org/Math/Content/3/MD/A/2
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8684579730033875, "perplexity": 4784.6341372531815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543315.68/warc/CC-MAIN-20161202170903-00064-ip-10-31-129-80.ec2.internal.warc.gz"}
http://mathhelpforum.com/number-theory/31669-n-n-uncountable.html
# Math Help - N^N is uncountable 1. ## N^N is uncountable hello, how do you prove that N^N is uncountable? couldn't find anything on the internet so far tx 2. Originally Posted by tbetra5 how do you prove that N^N is uncountable? couldn't find anything on the internet so far Cantor's diagonal argument.
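For completeness, here is the diagonal argument spelled out (a sketch in LaTeX):

```latex
Suppose $f_1, f_2, f_3, \dots$ were an enumeration of $\mathbb{N}^{\mathbb{N}}$.
Define $g \colon \mathbb{N} \to \mathbb{N}$ by
\[
  g(n) = f_n(n) + 1 .
\]
Then $g \in \mathbb{N}^{\mathbb{N}}$, yet $g \neq f_n$ for every $n$, since
$g(n) \neq f_n(n)$. No enumeration can be complete, so
$\mathbb{N}^{\mathbb{N}}$ is uncountable.
```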
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8455119729042053, "perplexity": 4432.397061771883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422118973352.69/warc/CC-MAIN-20150124170253-00017-ip-10-180-212-252.ec2.internal.warc.gz"}
http://mathoverflow.net/users/5506/abc
# ABC

reputation 819 · member for 4 years, 10 months · seen Feb 11 at 17:14 · profile views 1,138

Top answers: 36 How much reading do you do before you attack a problem? · 12 Examples of common false beliefs in mathematics · 11 Direct proof of irrationality? · 7 Proofs without words · 7 Most helpful math resources on the web

# 248 Reputation

+10 Kolmogorov superposition for smooth functions · +5 Does listing the prime factors always stop? · +10 Direct proof of irrationality? · +5 If $f$ is $C^{\infty}$ and $f^2$ is analytic, then $f$ is analytic

# 13 Questions

12 Does listing the prime factors always stop? · 9 Is PA consistent? do we know it? · 9 Finite generation and Henselization · 8 If $f$ is $C^{\infty}$ and $f^2$ is analytic, then $f$ is analytic · 6 What strict resolutions of singularities are needed?

# 54 Tags

18 ca.analysis-and-odes × 6 · 6 measure-theory × 2 · 4 real-analysis × 6 · 4 integration · 3 ag.algebraic-geometry × 7 · 2 ra.rings-and-algebras · 2 linear-algebra · 1 gn.general-topology × 2 · 1 factorization · 1 trinomial

# 15 Accounts

TeX - LaTeX 990 rep · Music: Practice & Theory 296 rep · MathOverflow 248 rep · English Language & Usage 141 rep · Linguistics 137 rep
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8008276224136353, "perplexity": 3248.1089494814737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461494.41/warc/CC-MAIN-20150226074101-00093-ip-10-28-5-156.ec2.internal.warc.gz"}
https://www.math.uic.edu/seminars/view_seminar?id=5615
## Departmental Colloquium Arend Bayer Edinburgh Surfaces from curves via derived categories Abstract: I will explain how the seemingly highly abstract machinery of derived categories can be used to answer fundamental and concrete questions in algebraic geometry. I will give several examples of this philosophy; the one alluded to in the title is due to Soheyla Feyzbakhsh, who showed that a generic K3 surface X can be geometrically reconstructed from any curve in X of minimal possible degree. Please note the unusual room. Friday December 7, 2018 at 3:00 PM in LC D5
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.931355357170105, "perplexity": 1423.645677528635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583656530.8/warc/CC-MAIN-20190115225438-20190116011438-00565.warc.gz"}
https://link.springer.com/article/10.1007%2Fs11633-015-0893-y
# Feature selection and feature learning for high-dimensional batch reinforcement learning: A survey • De-Rong Liu • Hong-Liang Li • Ding Wang Survey Paper ## Abstract Tremendous amounts of data are being generated and saved in many complex engineering and social systems every day. It is significant and feasible to utilize these big data to make better decisions by machine learning techniques. In this paper, we focus on batch reinforcement learning (RL) algorithms for discounted Markov decision processes (MDPs) with large discrete or continuous state spaces, aiming to learn the best possible policy given a fixed amount of training data. The batch RL algorithms with handcrafted feature representations work well for low-dimensional MDPs. However, for many real-world RL tasks which often involve high-dimensional state spaces, it is difficult and even infeasible to use feature engineering methods to design features for value function approximation. To cope with high-dimensional RL problems, the desire to obtain data-driven features has led to a lot of work on incorporating feature selection and feature learning into traditional batch RL algorithms. In this paper, we provide a comprehensive survey on automatic feature selection and unsupervised feature learning for high-dimensional batch RL. Moreover, we present recent theoretical developments on applying statistical learning to establish finite-sample error bounds for batch RL algorithms based on weighted $L_p$ norms. Finally, we outline some future directions in the research of RL algorithms, theories and applications. ## Keywords Intelligent control reinforcement learning adaptive dynamic programming feature selection feature learning big data ## 1 Introduction With the wide application of information technologies, large volumes of data are being generated in many complex engineering and social systems, such as power grid, transportation, health care, finance, Internet, etc. Machine learning techniques such as supervised learning and unsupervised learning have come to play a vital role in the area of big data. However, these techniques mainly focus on prediction tasks and automatic extraction of knowledge from data. Therefore, techniques which can learn how to utilize the big data to make better decisions are urgently required. As one of the most active research topics in machine learning, reinforcement learning (RL)[1] is a computational approach which can perform automatic goal-directed decision-making. The decision-making problem is usually described in the framework of Markov decision processes (MDPs)[2]. Dynamic programming[3] is a standard approach to solve MDPs, but it suffers from "the curse of dimensionality" and requires knowledge of the model. RL algorithms[4] are practical for MDPs with large discrete or continuous state spaces, and can also deal with the learning scenario when the model is unknown. A closely related area is adaptive or approximate dynamic programming[5, 6, 7, 8, 9, 10, 11, 12, 13, 14], which adopts a control-theoretic point of view and terminology. RL methods can be classified into offline or online methods based on whether data can be obtained in advance or not. Online RL algorithms like Q-learning learn by interacting with the environment, and hence may come up against inefficient use of data and stability issues. The convergence proof of online RL algorithms is usually given by the stochastic approximation method[15, 16].
Offline or batch RL[17] is a subfield of dynamic programming based RL, and can make more efficient use of data and avoid stability issues. Another advantage of batch RL algorithms over online RL algorithms is that they can be combined with many nonparametric approximation architectures. Batch RL refers to the learning scenario where only a fixed batch of data collected from the unknown system is given a priori. The goal of batch RL is to learn the best possible policy from the given training data. Batch RL methods are preferable to online RL methods in a context where more and more data are being gathered every day. A major challenge in RL is that it is infeasible to represent the solutions exactly for MDPs with large discrete or continuous state spaces. Approximate value iteration (AVI)[9] and approximate policy iteration (API)[18] are two classes of iterative algorithms to solve batch RL problems with large or continuous state spaces. AVI starts from an initial value function, and iterates between value function update and greedy policy update until the value function converges to a near-optimal one. API starts from an initial policy, and iterates between policy evaluation and policy improvement to find an approximate solution to the fixed point of the Bellman optimality equation. AVI or API with state aggregation is essentially a discretization method of the state space, and becomes intractable when the state space is high-dimensional. Function approximation methods[19, 20, 21] can provide a compact representation for the value function by storing only the parameters of the approximator, and thus hold great promise for high-dimensional RL problems. Fitted value iteration is a typical algorithm of AVI-based batch RL approaches. Gordon[22] first introduced the fitting idea into AVI and established the fitted value iteration algorithm, which has become the foundation of batch RL algorithms. Ormoneit and Sen[23] utilized the idea of fitted value iteration to develop a kernel-based batch RL algorithm, where kernel-based averaging was used to update the Q function iteratively. Ernst et al.[24] developed a fitted Q iteration algorithm which allows fitting any parametric or nonparametric approximation architecture to the Q function. They also applied several tree-based supervised learning methods and ensemble learning algorithms to the fitted Q iteration algorithm. Riedmiller[25] proposed a neural fitted Q iteration by using a multilayer perceptron as the approximator. The fitted Q iteration algorithm allows approximating the Q function from a given batch of data by solving a sequence of supervised learning problems, and thus it has become one of the most popular batch RL algorithms. Fitted policy iteration is another basic class of batch RL algorithms, constructed by combining function approximation architectures with API. Bradtke and Barto[26] proposed the popular least-squares temporal difference (LSTD) algorithm to perform policy evaluation. LSTD was extended to LSTD(λ) in [27, 28]. Lagoudakis and Parr[29] developed a least-squares policy iteration (LSPI) algorithm by extending the LSTD algorithm to control problems. LSPI is an off-policy, model-free algorithm that learns the Q function without a generative model of the MDP, and it is easy to implement because of the use of linear parametric architectures. Therefore, it has become the foundation of all the API-based batch RL algorithms.
Antos et al.[30] studied a model-free fitted policy iteration algorithm based on the idea of Bellman residual minimization, which avoided the direct use of the projection operator in LSPI. Because of the empirical risk minimization principle, existing tools of statistical machine learning can be applied directly to the theoretical analysis of batch RL algorithms. Antos et al.[31] developed a value-iteration based fitted policy iteration algorithm, where the policy evaluation was obtained by AVI. Approximate modified policy iteration[32, 33, 34] represents a spectrum of batch RL algorithms which contains both AVI and API. This algorithm is preferable to API when a nonlinear approximation architecture is used. The batch RL algorithms with hand-crafted representations work well for low-dimensional MDPs. However, as the dimension of the state space of an MDP increases, the number of features required will explode exponentially. It is difficult to design suitable features for high-dimensional RL problems, and when the features of an approximator are improperly designed, batch RL algorithms may have poor performance. It is a natural idea to develop RL algorithms by selecting or learning features automatically instead of designing features manually. Actually, there has been rapidly growing interest in automating feature selection for RL algorithms by regularization[35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60], which is a very effective tool in supervised learning. Furthermore, some nonparametric techniques like manifold learning and spectral learning have been used to learn features for RL algorithms[61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81]. Deep learning or representation learning[82, 83, 84, 85, 86, 87, 88, 89, 90] is now one of the hottest topics in machine learning, and has been successfully applied to image recognition and speech recognition. The core idea of deep learning is to use unsupervised or supervised learning methods to automatically learn representations or features from data. Recently, there have been a few pioneering research results on combining deep learning with RL to learn representations and controls in MDPs[91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101]. In this paper, we will provide a comprehensive survey on feature selection and feature learning for high-dimensional batch RL algorithms. Another hot topic in RL is to apply statistical learning to establish convergence analysis and performance analysis. Bertsekas[102] established error bounds for RL algorithms based on maximum ($L_\infty$) norms. The error bound in $L_\infty$ norms is expressed in terms of the uniform approximation error over the whole state space, hence it is difficult to guarantee for large discrete or continuous state spaces. Moreover, the $L_\infty$ norm is not very practical, since the $L_p$ norm is preferable for most function approximators, such as linear parametric architectures, neural networks, kernel machines, etc. Statistical machine learning can analyze the $L_p$-norm approximation errors in terms of the number of samples and a capacity measure of the function space. Therefore, some promising theoretical results[103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113] have been developed by establishing finite-sample error bounds for batch RL algorithms based on weighted $L_p$ norms.
In this paper, we consider the problem of finding a near-optimal policy for discounted MDPs with large discrete or continuous state spaces. We focus on batch RL techniques, i.e., learning the best possible policy given a fixed amount of training data. The remainder of this paper is organized as follows. Section 2 provides background on MDPs and batch RL. In Section 3, we provide recent results on feature selection and feature learning for high-dimensional batch RL problems. Section 4 presents recent theoretical developments on error bounds of batch RL algorithms and is followed by conclusions and future directions in Section 5. ## 2 Preliminaries In this section, we first give the background on MDPs and optimal control, and then present some basic batch RL algorithms. ### 2.1 Background on MDPs A discounted MDP is defined as a 5-tuple $(\mathcal{X}, \mathcal{A}, P, R, \gamma)$, where $\mathcal{X}$ is the finite or continuous state space, $\mathcal{A}$ is the finite action space, $P(\cdot\,\vert\,x, a)$ is the Markov transition model which gives the next-state distribution upon taking action $a$ at state $x$, $R: \mathcal{X} \times \mathcal{A} \times \mathcal{X} \to \mathbf{R}$ is the bounded deterministic reward function which gives an immediate reward $r_t = R(x_t, a_t, x_{t+1})$, and $\gamma \in [0, 1)$ is the discount factor. A mapping $\pi: \mathcal{X} \to \mathcal{A}$ is called a deterministic stationary Markov policy, and hence $\pi(x_t)$ indicates the action taken at state $x_t$. The state-value function $V^\pi$ of a policy $\pi$ is defined as the expected total discounted reward: $$V^\pi(x) = \mathbb{E}_\pi\left[\sum_{t=0}^\infty \gamma^t r_t \,\middle\vert\, x_0 = x\right].$$ (1) According to the Markov property, the value function $V^\pi$ satisfies the Bellman equation $$V^\pi(x) = \mathbb{E}_\pi[R(x,a,x') + \gamma V^\pi(x')]$$ (2) and $V^\pi$ is the unique solution of this equation. The goal of RL algorithms is to find a policy that attains the best possible values $$V^*(x) = \sup_\pi V^\pi(x), \quad \forall x \in \mathcal{X}$$ (3) where $V^*$ is called the optimal value function. A policy $\pi^*$ is called optimal if it attains the optimal value $V^*(x)$ for every state $x \in \mathcal{X}$, i.e., $V^{\pi^*}(x) = V^*(x)$. The optimal value function $V^*$ satisfies the Bellman optimality equation $$V^*(x) = \max_{a \in \mathcal{A}} \mathbb{E}[R(x,a,x') + \gamma V^*(x')].$$ (4) The Bellman optimality operator $\mathcal{T}^*$ is defined as $$(\mathcal{T}^* V)(x) = \max_{a \in \mathcal{A}} \mathbb{E}[R(x,a,x') + \gamma V(x')].$$ (5) The operator $\mathcal{T}^*$ is a contraction mapping in the $L_\infty$ norm with contraction rate $\gamma$, and $V^*$ is its unique fixed point, i.e., $V^* = \mathcal{T}^* V^*$.
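To make the contraction property of $\mathcal{T}^*$ concrete, here is a minimal tabular value-iteration sketch (Python/NumPy; the two-state, two-action MDP and all of its numbers are invented for illustration):

```python
import numpy as np

# Toy MDP: P[a][x][x'] is the transition probability, R[a][x] the expected reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # transitions under action 0
              [[0.5, 0.5], [0.1, 0.9]]])   # transitions under action 1
R = np.array([[1.0, 0.0],                  # expected rewards under action 0
              [0.0, 2.0]])                 # expected rewards under action 1
gamma = 0.9

V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * (P @ V)        # Bellman optimality operator, Eq. (5): Q[a, x]
    V_new = Q.max(axis=0)          # greedy maximization over actions
    if np.abs(V_new - V).max() < 1e-10:   # contraction guarantees convergence
        V = V_new
        break
    V = V_new

pi = Q.argmax(axis=0)              # greedy policy w.r.t. the fixed point V*
print(V, pi)
```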
To develop model-free batch RL algorithms, the action-value function (or Q function) is defined as $$Q^\pi(x,a) = \mathbb{E}_\pi\left[\sum_{t=0}^\infty \gamma^t r_t \,\middle\vert\, x_0 = x, a_0 = a\right].$$ (6) The action-value function $Q^\pi$ satisfies the Bellman equation $$Q^\pi(x,a) = \mathbb{E}_\pi[R(x,a,x') + \gamma Q^\pi(x',a')].$$ (7) A policy is called greedy with respect to the action-value function if $$\pi(x) = \arg\max_{a \in \mathcal{A}} Q(x,a).$$ (8) The optimal action-value function $Q^*(x,a)$ is defined as $$Q^*(x,a) = \sup_\pi Q^\pi(x,a), \quad \forall x \in \mathcal{X}, \forall a \in \mathcal{A}.$$ (9) The optimal state-value function $V^*$ and action-value function $Q^*$ are related by $$V^*(x) = \max_{a \in \mathcal{A}} Q^*(x,a).$$ (10) The optimal action-value function satisfies the Bellman optimality equation $$Q^*(x,a) = \mathbb{E}[R(x,a,x') + \gamma \max_{a'} Q^*(x',a')].$$ (11) The optimal policy $\pi^*$ can be obtained by $$\pi^*(x) = \arg\max_{a \in \mathcal{A}} Q^*(x,a).$$ (12) The Bellman optimality operator is defined as $$(\mathcal{T}^* Q)(x,a) = \mathbb{E}[R(x,a,x') + \gamma \max_{a'} Q(x',a')].$$ (13) The operator is also a contraction mapping in the $L_\infty$ norm with contraction rate $\gamma$, and it has the unique fixed point $Q^*$, i.e., $Q^* = \mathcal{T}^* Q^*$. ### 2.2 Batch RL The task of batch RL algorithms is to learn the best possible policy from a fixed batch of data which is given a priori[17]. For a discounted MDP $(\mathcal{X}, \mathcal{A}, P, R, \gamma)$, the transition model $P$ and the reward function $R$ are assumed to be unknown in advance, while the state space $\mathcal{X}$, the action space $\mathcal{A}$ and the discount factor $\gamma$ are available. The basic framework of batch RL is shown in Fig. 1. First, a batch of data is collected from the environment with an arbitrary sampling policy. Then, a batch RL algorithm is implemented in order to learn the best possible policy. Finally, the learned policy is applied to the real-world environment. The dashed line in Fig. 1 means that the algorithm is not allowed to interact with the environment during learning. The learning phase is separated from the data collection and application phases, hence there is no exploration-exploitation dilemma for batch RL algorithms. In the batch RL scenario, the training set is given in the form of a batch of data $$\mathcal{D} = \left\{(x_k, a_k, r_k, x'_k) \,\vert\, k = 1, \cdots, N\right\}$$ (14) sampled from the unknown environment. These samples may be collected by using a purely random policy or an arbitrary known policy. The states $x_k$ and $x'_k$ may be sampled independently, or may be sampled along a connected trajectory, i.e., $x_{k+1} = x'_k$. The samples need to cover the state-action space adequately, since the distribution of the training data will affect the performance of the learned policy. The batch of data $\mathcal{D}$ can be reused at each iteration, so batch RL algorithms can make efficient use of data. Batch RL algorithms implement a sequence of supervised learning problems, and thus enjoy the stability of the learning process. The LSPI algorithm[29] is the most important fitted policy iteration algorithm, and is shown in Algorithm 1.
It utilizes LSTD learning to evaluate the action-value function of a given policy (see Steps 4–9 of Algorithm 1). The action-value function is approximated by a linear parametric architecture, i.e., $\hat Q(x,a) = w^{\rm T}\phi(x,a)$, where $w$ is a parameter vector and $\phi(x,a)$ is a feature vector of basis functions. The basis functions can be selected as polynomial, radial basis function, wavelet, Fourier, etc. Since LSTD learning uses the linear parametric architecture, the policy evaluation can be solved by the least-squares method. Given the training set $\mathcal{D}$, LSTD learning finds the parameter vector $w$ such that the corresponding action-value function satisfies the Bellman equation approximately, by solving the fixed point $$w = \arg\min_u \|\Phi u - (R + \gamma\Phi' w)\|_2^2$$ (15) where $$\Phi = \begin{bmatrix} \phi(x_1,a_1)^{\rm T} \\ \vdots \\ \phi(x_N,a_N)^{\rm T} \end{bmatrix}, \quad \Phi' = \begin{bmatrix} \phi(x'_1,a'_1)^{\rm T} \\ \vdots \\ \phi(x'_N,a'_N)^{\rm T} \end{bmatrix}, \quad R = \begin{bmatrix} r_1 \\ \vdots \\ r_N \end{bmatrix}.$$ The problem (15) can also be written as $$u^* = \arg\min_u \|\Phi u - (R + \gamma\Phi' w)\|_2^2$$ (16) $$w^* = \arg\min_w \|\Phi w - \Phi u^*\|_2^2$$ (17) where (16) is the projection equation and (17) is the minimization equation. Therefore, the parameter vector $w$ has the closed-form solution $$w = (\Phi^{\rm T}(\Phi - \gamma\Phi'))^{-1}\Phi^{\rm T} R \buildrel \Delta \over = A^{-1}b.$$ (18) Actually, the result of Steps 4–9 in Algorithm 1 is to solve the policy evaluation equation $$\hat Q_i(x_k, a_k) = r_k + \gamma \hat Q_i(x'_k, \pi_i(x'_k)).$$ (19) It can be solved by AVI when using a nonlinear approximation architecture. The fitted Q iteration[24] is the most important fitted value iteration algorithm, and is shown in Algorithm 2. The dynamic programming operator (Step 5 in Algorithm 2) is separated from the fitting process (Step 7 in Algorithm 2). Therefore, the function "fit" allows using both linear and nonlinear approximation architectures, with all kinds of learning algorithms, such as gradient descent and conjugate gradient.
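A minimal NumPy sketch of the two building blocks just described, the closed-form LSTD solve of Eq. (18) and the LSPI outer loop of Algorithm 1, is given below (the feature map `phi`, the sample format, and the fixed iteration count are illustrative assumptions, and no convergence test is included):

```python
import numpy as np

def lstd(Phi, Phi_next, R, gamma, l2=0.0):
    """Closed-form LSTD solve, Eq. (18): w = (Phi^T (Phi - gamma*Phi'))^{-1} Phi^T R.
    Phi, Phi_next: (N, d) feature matrices for (x_k, a_k) and (x'_k, pi(x'_k));
    R: (N,) rewards. l2 > 0 gives the L2-regularized variant of Section 3.1."""
    A = Phi.T @ (Phi - gamma * Phi_next) + l2 * np.eye(Phi.shape[1])
    b = Phi.T @ R
    return np.linalg.solve(A, b)

def lspi(D, phi, actions, gamma, n_iter=20):
    """Sketch of Algorithm 1 (LSPI). D is a list of (x, a, r, x') samples and
    phi(x, a) a hand-designed feature map returning a length-d vector."""
    X, A_, R_, Xn = zip(*D)
    Phi = np.array([phi(x, a) for x, a in zip(X, A_)])
    R = np.array(R_)
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        # policy improvement: pi_i(x') = argmax_a w^T phi(x', a)
        Phi_next = np.array([max((phi(xn, a) for a in actions),
                                 key=lambda f: f @ w) for xn in Xn])
        w = lstd(Phi, Phi_next, R, gamma)   # policy evaluation (Steps 4-9)
    return w   # greedy policy: argmax_a w^T phi(x, a)
```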
## 3 Feature selection and feature learning for high-dimensional batch RL Since many real-world RL tasks often involve high-dimensional state spaces, it is difficult to use feature engineering methods to design features for function approximators. To cope with high-dimensional RL problems, the desire to design data-driven features has led to a lot of work on incorporating feature selection and feature learning into traditional batch RL algorithms. Automatic feature selection is to select features from a given set of features by using regularization, matching pursuit, random projection, etc. Automatic feature learning is to learn features from data by learning the structure of the state space using unsupervised learning methods, such as manifold learning, spectral learning, deep learning, etc. In this section, we present a comprehensive survey on these promising research works. ### 3.1 Batch RL based on feature selection Regularized approaches have been applied to batch RL to perform automatic feature selection and prevent over-fitting when the number of samples is small compared to the number of features. The basic idea is to solve $L_2$ or $L_1$ penalized least-squares problems, also known as ridge or Lasso regression, respectively. In this subsection, we introduce data-driven automatic feature selection for batch RL algorithms. Farahmand et al.[35] proposed two regularized policy iteration algorithms by adding $L_2$ regularization to two policy evaluation methods, i.e., Bellman residual minimization and LSTD learning. Farahmand et al.[36] presented a regularized fitted Q iteration algorithm based on $L_2$ regularization to control the complexity of the value function. Farahmand and Szepesvári[37] developed a complexity regularization-based algorithm to solve the problem of model selection in batch RL algorithms, which was formulated as finding an action-value function with a small Bellman error among a set of candidate functions. The $L_2$ regularized LSTD problem is obtained by adding an $L_2$ penalty term to the projection equation (16): $$u^* = \arg\min_u \|\Phi u - (R + \gamma\Phi' w)\|_2^2 + \beta\|u\|_2^2$$ (20) $$w^* = \arg\min_w \|\Phi w - \Phi u^*\|_2^2$$ (21) where $\beta \in [0,\infty)$ is a regularization parameter. This problem can be equivalently expressed as the fixed point $$w = \arg\min_u \|\Phi u - (R + \gamma\Phi' w)\|_2^2 + \beta\|u\|_2^2.$$ (22) The closed-form solution for the parameter vector $w$ can also be obtained as $$w = (\Phi^{\rm T}(\Phi - \gamma\Phi') + \beta I)^{-1}\Phi^{\rm T}R \buildrel \Delta \over = (A + \beta I)^{-1}b.$$ (23) The $L_1$ regularization can provide sparse solutions, and thus can achieve automatic feature selection in value function approximation. Loth et al.[38] proposed a sparse temporal difference (TD) learning method by applying the Lasso to Bellman residual minimization, and introduced an equigradient descent algorithm similar to least angle regression (LARS). Kolter and Ng[39] proposed an $L_1$ regularization framework for the LSTD algorithm based on the state-value function, and presented an LARS-TD algorithm to compute the fixed point of the $L_1$ regularized LSTD problem. Johns et al.[40] formulated the $L_1$ regularized linear fixed point problem as a linear complementarity (LC) problem, and proposed an LC-TD algorithm to solve it. Ghavamzadeh et al.[41] proposed a Lasso-TD algorithm by incorporating an $L_1$ penalty into the projection equation. Liu et al.[42] presented an $L_1$ regularized off-policy convergent TD-learning (RO-TD) method based on the primal-dual subgradient saddle-point algorithm. Mahadevan and Liu[43] proposed a sparse mirror-descent RL algorithm to find sparse fixed points of an $L_1$ regularized Bellman equation with only linear complexity in the number of features. The $L_1$ regularized LSTD problem is given by including an $L_1$ penalty term in the projection equation (16): $$u^* = \arg\min_u \|\Phi u - (R + \gamma\Phi' w)\|_2^2 + \beta\|u\|_1$$ (24) $$w^* = \arg\min_w \|\Phi w - \Phi u^*\|_2^2$$ (25) which is the same as $$w = \arg\min_u \|\Phi u - (R + \gamma\Phi' w)\|_2^2 + \beta\|u\|_1.$$ (26) This problem does not have a closed-form solution like the $L_2$ regularization problem, and cannot be expressed as a convex optimization. Petrik et al.[44] introduced an approximate linear programming algorithm to find the $L_1$ regularized solution of the Bellman equation.
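As an illustration of how sparsity enters, a naive way to approach the fixed point of the $L_1$ regularized problem (26) is to alternate a Lasso regression with a target update (a sketch only, assuming scikit-learn; unlike LARS-TD or LC-TD, this plain iteration carries no convergence guarantee):

```python
import numpy as np
from sklearn.linear_model import Lasso

def l1_lstd_fixed_point(Phi, Phi_next, R, gamma, beta, n_iter=50):
    """Iterate the map w -> argmin_u ||Phi u - (R + gamma*Phi' w)||_2^2 + beta*||u||_1."""
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        target = R + gamma * (Phi_next @ w)            # bootstrapped regression target
        lasso = Lasso(alpha=beta, fit_intercept=False, max_iter=10000)
        lasso.fit(Phi, target)                         # L1-penalized projection step
        if np.allclose(lasso.coef_, w, atol=1e-8):
            break
        w = lasso.coef_
    return w   # zero entries of w correspond to features left unselected
```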
Different from [36], Geist and Scherrer[45] added the $L_1$ penalty term to the minimization equation (17): $$u^* = \arg\min_u \|\Phi u - (R + \gamma\Phi' w)\|_2^2$$ (27) $$w^* = \arg\min_w \|\Phi w - \Phi u^*\|_2^2 + \beta\|w\|_1$$ (28) which actually penalizes the projected Bellman residual and yields a convex optimization problem. Geist et al.[46] proposed a Dantzig-LSTD algorithm by integrating LSTD with the Dantzig selector, and solved for the parameters by linear programming. Qin et al.[47] also proposed a sparse RL algorithm based on this kind of $L_1$ regularization, and used the alternating direction method of multipliers to solve the constrained convex optimization problem. Hoffman et al.[48] proposed an L21 regularized LSTD algorithm which adds an $L_2$ penalty to the projection equation (16) and an $L_1$ penalty to the minimization equation (17): $$u^* = \arg\min_u \|\Phi u - (R + \gamma\Phi' w)\|_2^2 + \beta\|u\|_2^2$$ (29) $$w^* = \arg\min_w \|\Phi w - \Phi u^*\|_2^2 + \beta'\|w\|_1.$$ (30) The above optimization problem can be reduced to a standard Lasso problem. An L22 regularized LSTD algorithm was also given in [48]: $$u^* = \arg\min_u \|\Phi u - (R + \gamma\Phi' w)\|_2^2 + \beta\|u\|_2^2$$ (31) $$w^* = \arg\min_w \|\Phi w - \Phi u^*\|_2^2 + \beta'\|w\|_2^2$$ (32) which has a closed-form solution. Besides applying the regularization technique to perform feature selection, matching pursuit can also find a sparse representation of the value function by greedily selecting features from a finite feature dictionary. Two variants of matching pursuit are orthogonal matching pursuit (OMP) and order recursive matching pursuit (ORMP). Johns and Mahadevan[49] presented and evaluated four sparse feature selection algorithms for LSTD, i.e., OMP, ORMP, Lasso, and LARS, based on graph-based basis functions. Painter-Wakefield and Parr[50] applied OMP to RL by proposing the OMP Bellman residual minimization and OMP TD learning algorithms. Farahmand and Precup[51] proposed a value pursuit iteration using a modified version of OMP, where some new features based on the currently learned value function were added to the feature dictionary at each iteration. As an alternative, random projection methods can also be used to perform feature selection for high-dimensional RL problems. Ghavamzadeh et al.[52] proposed an LSTD learning algorithm with random projections, where the value function of a given policy was learned in a low-dimensional subspace generated by linear random projection from the original high-dimensional feature space. The dimension of the subspace can be given in advance by the designer. Liu and Mahadevan[53] extended the results of [52], and proposed a compressive RL algorithm with oblique random projections. Kernelized RL[54] aims to obtain sparsity in the samples, which is different from regularized RL, which aims to obtain sparsity in the features given by the designer. Jung and Polani[55] proposed a sparse least-squares support vector machine framework for the LSTD method.
Xu et al.[56] presented a kernel-based LSPI algorithm, where the kernel-based feature vectors were automatically selected using the kernel sparsification approach based on approximate linear dependency. Compared with feature selection, there exists an opposite approach, which is automatic feature generation. Feature generation iteratively adds new basis functions to the current set based on the Bellman error of the current value estimate. Keller et al.[57] used neighborhood component analysis to map a high-dimensional state space to a low-dimensional space, and added new features in the low-dimensional space for the linear value function approximation. Parr et al.[58] provided a theoretical analysis of the effects of generating basis functions based on the Bellman error, and gave some insights on the feature generation method based on Bellman error basis functions in [59]. Fard et al.[60] presented a compressed Bellman error based feature generation approach for policy evaluation in sparse and high-dimensional state spaces by random projections. ### 3.2 Batch RL based on feature learning Recently, there has been rapidly growing interest in applying unsupervised feature learning to high-dimensional RL problems. The idea is to use an unsupervised learning method for learning a feature-extracting mapping from data automatically (see Fig. 2). This section covers linear nonparametric methods, such as manifold learning and spectral learning, and nonlinear parametric methods, such as deep learning. In pattern recognition, manifold learning (also referred to as nonlinear dimensionality reduction) is to develop low-dimensional representations for high-dimensional data. Some prominent manifold learning algorithms include Laplacian eigenmaps[61], locally linear embedding[62], and isometric mapping[63]. For many high-dimensional MDPs, the states often lie on an embedded low-dimensional manifold within the high-dimensional space. Therefore, it is quite promising to integrate manifold learning into RL. Mahadevan et al.[64, 65, 66, 67, 68, 69] introduced a spectral learning framework for learning representations and optimal policies in MDPs. The basic idea is to use spectral analysis of symmetric diffusion operators to construct nonparametric task-independent feature vectors which reflect the geometry of the state space. Compared to hand-engineered features (e.g., basis functions selected uniformly in all regions of the state space), this framework can extract significant topological information by building a graph based on samples. A representation policy iteration algorithm (see Algorithm 3) was developed in [68] by combining representation learning and policy learning. It includes three main processes: collect samples from the MDP; learn feature vectors from the training data; and learn an optimal policy. For MDPs with discrete state spaces, assume that the underlying state space is represented as an undirected graph $G = (V, E, W)$, where $V$ and $E$ are the sets of vertices and edges, and $W$ is the symmetric weight matrix with $W(i,j) > 0$ if $(i,j) \in E$. The diffusion model can be defined by the combinatorial graph Laplacian matrix $L = D - W$, where $D$ is the valency matrix. Another useful diffusion operator is the normalized Laplacian $\mathcal{L} = D^{-\frac{1}{2}} L D^{-\frac{1}{2}}$. Each eigenvector of the graph Laplacian is viewed as a proto-value function[64].
The basis functions for state-value functions can be constructed by computing the smoothest eigenvectors of the graph Laplacian. For scaling to large discrete or continuous state spaces, the k-nearest neighbor rule can be used to connect states and generate graphs, and the Nyström interpolation approach can be applied to extend eigenfunctions computed on sample states to new unexplored states[66]. To approximate the action-value function rather than the state-value function, proto-value functions can be computed on state-action graphs, in which vertices represent state-action pairs[70]. Johns and Mahadevan[71] extended the undirected graph Laplacian to the directed graph Laplacian for expressing state connectivity in both discrete and continuous domains. Johns et al.[72] used Kronecker factorization to construct compact spectral basis functions without significant loss in performance. Petrik[73] presented a weighted spectral method to construct basis functions from the eigenvectors of the transition matrix. Metzen[74] derived a heuristic method to learn representations of continuous environments based on the maximum graph likelihood. Xu et al.[75] presented a clustering-based (K-means clustering or fuzzy C-means clustering) graph Laplacian framework for automatic learning of features in MDPs with continuous state spaces. Rohanimanesh et al.[76] applied the graph Laplacian method to learn features for the actor-critic algorithm with function approximation architectures. Generated by diagonalizing symmetric diffusion operators, a proto-value function is actually a Laplacian eigenmaps embedding. Sprekeler[77] showed that Laplacian eigenmaps are closely related to slow feature analysis[78], an unsupervised learning method for learning invariant or slowly varying features from a vector input signal. Luciw and Schmidhuber[79] applied incremental slow feature analysis to learn proto-value functions directly from a high-dimensional sensory data stream. Legenstein et al.[80] proposed a hierarchical slow feature analysis to learn features for RL problems on high-dimensional visual input streams. Böhmer et al.[81] proposed a regularized sparse kernel slow feature analysis algorithm for LSPI in both discrete and continuous state spaces, and applied this algorithm to a robotic visual navigation task.
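A rough sketch of the proto-value function construction in code (assuming scikit-learn and SciPy; the k-NN graph parameters are illustrative, and the Nyström extension to unseen states is omitted):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def proto_value_features(states, k=10, n_features=20):
    """Build a k-NN graph over sampled states and return the smoothest
    eigenvectors of the normalized graph Laplacian as basis functions."""
    W = kneighbors_graph(states, k, mode='connectivity')
    W = 0.5 * (W + W.T)               # symmetrize the neighborhood graph
    L = laplacian(W, normed=True)     # normalized Laplacian of Section 3.2
    # eigenvectors with the smallest eigenvalues vary most smoothly on the graph
    _, vecs = eigsh(L, k=n_features, which='SA')
    return vecs                       # row i holds the features of sampled state i
```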
Deep learning[82, 83, 84] aims to learn high-level features from raw sensory data. Some prominent deep learning techniques include deep belief networks (DBNs)[85], deep Boltzmann machines[86], deep autoencoders[87], and convolutional neural networks (CNNs)[89]. To cope with the difficulty of optimization, deep neural networks are learned with greedy layer-wise unsupervised pretraining followed by a back-propagation fine-tuning phase. Although RL methods with linear parametric architectures and hand-crafted features have been very successful in many applications, learning control policies directly from high-dimensional sensory inputs is still a big challenge. It is natural to utilize the feature learning of deep neural networks for solving high-dimensional RL problems. Different from pattern classification problems, there exist some challenges when applying deep learning to RL: the RL agent only learns from a scalar delayed reward signal; the training data in RL may be imbalanced and highly correlated; and the data distribution in RL may be non-stationary. A restricted Boltzmann machine (RBM)[90] is an undirected graphical model (see Fig. 3) in which there are no connections between variables of the same layer. The top layer represents a vector of hidden random variables $h$ and the bottom layer represents a vector of visible random variables $v$. For high-dimensional RL problems, Sallans and Hinton[91] presented an energy-based TD learning framework, where the action-value function was approximated as the negative free energy of an RBM: $$\hat Q(x,a) = -F([x; a]) \buildrel \Delta \over = -F(v) = v^{\rm T} W \tilde h - \sum_{i=1}^M \tilde h_i \log \tilde h_i - \sum_{i=1}^M (1 - \tilde h_i)\log(1 - \tilde h_i).$$ The expected value of the hidden random variables $h$ is given by $\tilde h = \sigma(v^{\rm T} W)$, where $\sigma(\cdot)$ denotes the logistic function. Markov chain Monte Carlo sampling was used to select actions from the large action spaces. Otsuka et al.[92] extended the energy-based TD learning algorithm to partially observable MDPs by incorporating a recurrent neural network. Elfwing et al.[93] applied this algorithm to robot navigation problems with raw visual inputs. Heess et al.[94] proposed actor-critic algorithms with energy-based policies based on [91]. A DBN[85] is a probabilistic graphical model built by stacking up RBMs (see Fig. 4). The top two layers of a DBN form an undirected graph and the remaining layers form a belief network with directed, top-down connections. Abtahi and Fasel[95] incorporated the DBN into the neural fitted Q iteration algorithm for action-value function approximation in RL problems with continuous state spaces (see Algorithm 4). The unsupervised pretraining phase in DBNs can learn suitable features and capture the structural properties of the state-action space from the training data. The action-value function is approximated by adding an extra output layer to the DBN, and the network is trained by a supervised fine-tuning phase. To deal with the problem of imbalanced data, a hint-to-goal heuristic approach was used in [95], where samples from the desirable regions of the state space were added to the training data manually. In [96], a DBN based on conditional RBMs was proposed for modeling hierarchical RL policies. Faulkner and Precup[97] applied the DBN to learn a generative model of the environment for the Dyna-style RL architecture. A deep autoencoder[87, 88] is a multilayer neural network which can extract increasingly refined features and compact representations from the input data (see Fig. 5). It is generated by stacking shallow autoencoders on top of each other during layer-wise pretraining. Then, a fine-tuning phase is performed by unfolding the whole network and back-propagating the reconstruction errors. After the training process, the features can be generated in the output layer of the encoder network. Lange et al.[98, 99, 100] proposed a deep fitted Q iteration framework to learn a control policy for a visual navigation task directly from raw sensory input data. A deep autoencoder neural network was used to learn compact features out of raw images automatically, and the action-value function was approximated by adding one output layer after the encoder. A CNN is a multilayer neural network which reduces the number of weight parameters by sharing weights between local receptive fields; a pretraining phase is usually not required. Mnih et al.[101] presented a deep Q learning algorithm that successfully plays Atari 2600 games. This algorithm can learn control policies directly from high-dimensional, raw video data without hand-designed features. A CNN was used as the action-value function approximator. To scale to large data sets, stochastic gradient descent instead of batch updates was used to adapt the weights. An experience replay mechanism was used to deal with the problem of correlated data and non-stationary distributions. This algorithm outperformed all previous approaches on six of the games and even surpassed a human expert on three of them.
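Below is a minimal fitted Q iteration sketch with a neural-network regressor in the spirit of these approaches (assuming scikit-learn's MLPRegressor rather than a DBN or CNN, with no pretraining; the sample format, discrete scalar actions, network size, and iteration count are illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def neural_fitted_q(D, actions, gamma, n_iter=10):
    """D is a list of (x, a, r, x') samples with vector-valued states and
    discrete scalar actions; Q is approximated by an MLP on the input [x; a]."""
    X = np.array([np.append(x, a) for x, a, _, _ in D])
    R = np.array([r for _, _, r, _ in D])
    Xn = [xn for _, _, _, xn in D]
    q = None
    for _ in range(n_iter):
        if q is None:
            targets = R              # first pass: fit the immediate rewards
        else:
            # dynamic-programming step: r + gamma * max_a' Q(x', a')
            q_next = np.array([max(q.predict(np.append(xn, a)[None, :])[0]
                                   for a in actions) for xn in Xn])
            targets = R + gamma * q_next
        q = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
        q.fit(X, targets)            # the "fit" step of Algorithm 2
    return q
```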
## 4 Error bounds for batch RL Bertsekas[102] gave the error bounds in $L_\infty$ norms for AVI as $$\limsup_{i\to\infty} \|V^* - V^{\pi_i}\|_\infty \le \frac{2\gamma\epsilon}{(1-\gamma)^2}$$ where $\sup_i \|V_{i+1} - \mathcal{T}^* V_i\|_\infty \le \epsilon$, and gave the error bounds in $L_\infty$ norms for API as $$\limsup_{i\to\infty} \|V^* - V^{\pi_i}\|_\infty \le \frac{2\gamma\epsilon}{(1-\gamma)^2}$$ where $\sup_i \|V_{i+1} - V^{\pi_i}\|_\infty \le \epsilon$. The $L_\infty$ norm is expressed in terms of the uniform approximation error over all states, and is difficult to compute in practice for large or continuous state spaces. According to the equivalence of norms $\|h\|_p \le \|h\|_\infty \le \sqrt{N}\|h\|_p$, it is quite possible that the approximation errors have a small $L_p$ norm but a very large $L_\infty$ norm because of the factor $\sqrt{N}$. Moreover, most function approximators use the $L_p$ norm to express approximation errors. In this section, we will summarize the recent developments in establishing finite-sample error bounds in $L_p$ norms for batch RL algorithms. For discounted infinite-horizon optimal control problems with a large discrete state space and a finite action space, Munos[103] provided error bounds for API using weighted $L_2$ norms: $$\limsup_{i\to\infty} \|V^* - V^{\pi_i}\|_\mu \le \frac{2\gamma}{(1-\gamma)^2} \limsup_{i\to\infty} \|V_i - V^{\pi_i}\|_{\tilde\mu_k}.$$ Munos[104] provided performance bounds based on weighted $L_p$ norms for AVI: $$\limsup_{i\to\infty} \|V^* - V^{\pi_i}\|_{p,\nu} \le \frac{2\gamma}{(1-\gamma)^2}\,[C_2(\nu,\mu)]^{\frac{1}{p}}\,\epsilon$$ where $C_2(\nu,\mu)$ is the second-order discounted future-state distribution concentration coefficient, and $\|V_{i+1} - \mathcal{T}^* V_i\|_{p,\mu} \le \epsilon$. The new bounds consider a concentration coefficient $C(\nu,\mu)$ that estimates how much the discounted future-state distribution, starting from a probability distribution $\nu$ used to evaluate the performance of AVI, can possibly differ from the distribution $\mu$ used in the regression process. For MDPs with a continuous state space and a finite action space, Munos and Szepesvári[105] extended the results in [104] to finite-sample bounds for fitted value iteration based on weighted $L_p$ norms. Murphy[106] established finite-sample bounds of fitted Q iteration for finite-horizon undiscounted problems. Antos et al.[30] provided finite-sample error bounds in weighted $L_2$ norms for a model-free fitted policy iteration algorithm based on modified Bellman residual minimization.
## 4 Error bounds for batch RL

Bertsekas[102] gave error bounds in $L_\infty$ norms for AVI as

$$\limsup_{i\rightarrow\infty} \Vert V^{\ast} - V^{\pi_i}\Vert_\infty \leq \frac{2\gamma\epsilon}{(1-\gamma)^2}$$

where $\sup_i \Vert V_{i+1} - \mathcal{T}V_i\Vert_\infty \leq \epsilon$, and gave error bounds in $L_\infty$ norms for API as

$$\limsup_{i\rightarrow\infty} \Vert V^{\ast} - V^{\pi_i}\Vert_\infty \leq \frac{2\gamma\epsilon}{(1-\gamma)^2}$$

where $\sup_i \Vert V_{i+1} - V^{\pi_i}\Vert_\infty \leq \epsilon$. The $L_\infty$ norm is expressed in terms of the uniform approximation error over all states, and is difficult to compute in practice for large or continuous state spaces. By the equivalence of norms, $\Vert h\Vert_p \leq \Vert h\Vert_\infty \leq \sqrt{N}\,\Vert h\Vert_p$, it is quite possible for the approximation errors to have a small $L_p$ norm but a very large $L_\infty$ norm because of the factor $\sqrt{N}$. Moreover, most function approximators express approximation errors in the $L_p$ norm. In this section, we summarize recent developments in establishing finite-sample error bounds in $L_p$ norms for batch RL algorithms.

For discounted infinite-horizon optimal control problems with a large discrete state space and a finite action space, Munos[103] provided error bounds for API using weighted $L_2$ norms as

$$\limsup_{i\rightarrow\infty} \Vert V^{\ast} - V^{\pi_i}\Vert_\mu \leq \frac{2\gamma}{(1-\gamma)^2}\, \limsup_{i\rightarrow\infty} \Vert V_i - V^{\pi_i}\Vert_{\tilde\mu_k}.$$

Munos[104] provided performance bounds based on weighted $L_p$ norms for AVI as

$$\limsup_{i\rightarrow\infty} \Vert V^{\ast} - V^{\pi_i}\Vert_{p,\nu} \leq \frac{2\gamma}{(1-\gamma)^2}\,[C_2(\nu,\mu)]^{\frac{1}{p}}\,\epsilon$$

where $C_2(\nu,\mu)$ is the second-order discounted future-state distribution concentration coefficient, and $\Vert V_{i+1} - \mathcal{T}V_i\Vert_{p,\mu} \leq \epsilon$. The new bounds consider a concentration coefficient $C(\nu,\mu)$ that estimates how much the discounted future-state distribution, starting from the probability distribution ν used to evaluate the performance of AVI, can differ from the distribution μ used in the regression process.

For MDPs with a continuous state space and a finite action space, Munos and Szepesvári[105] extended the results in [104] to finite-sample bounds for fitted value iteration based on weighted $L_p$ norms. Murphy[106] established finite-sample bounds of fitted Q iteration for finite-horizon undiscounted problems. Antos et al.[30] provided finite-sample error bounds in weighted $L_2$ norms for a model-free fitted policy iteration algorithm based on modified Bellman residual minimization; the bounds consider the approximation power of the function set and the number of steps of policy iteration. Maillard et al.[107] derived finite-sample error bounds of API using empirical Bellman residual minimization. Antos et al.[31] established probably approximately correct finite-sample error bounds for value-iteration based fitted policy iteration, where the policies are evaluated by AVI, and analyzed how the errors in AVI propagate through fitted policy iteration. Lazaric et al.[108] derived a finite-sample analysis of a classification-based API algorithm. Farahmand et al.[109] provided finite-sample bounds for API/AVI by considering the propagation of approximation errors/Bellman residuals at each iteration. The results indicate that it is better to put more effort into achieving a lower approximation error or Bellman residual in later iterations, for example by gradually increasing the number of samples and using more powerful function approximators[110]. Scherrer et al.[34] provided an error propagation analysis for approximate modified policy iteration and established finite-sample error bounds in weighted $L_p$ norms for classification-based approximate modified policy iteration. For MDPs with a continuous state space and a continuous action space, Antos et al.[111] provided finite-sample performance bounds for a fitted actor-critic algorithm, where the action selection step is replaced by searching for a policy in a restricted set of candidate policies by maximizing the average action values.

Since LSTD is not derived from a risk minimization principle, the finite-sample bounds in [30, 107, 109] cannot be directly applied to the performance analysis of LSTD and LSPI. Lazaric et al.[112] established the first finite-sample performance analysis of the LSTD learning algorithm. Moreover, Lazaric et al.[113] provided finite-sample performance bounds for the LSPI algorithm and analyzed the error propagation through the iterations. Farahmand et al.[35, 36] provided finite-sample performance bounds and error propagation analysis for the $L_2$ regularized policy iteration algorithm and the $L_2$ regularized fitted Q iteration algorithm. Ghavamzadeh et al.[41] presented a finite-sample analysis of the $L_1$ regularized TD algorithm.

## 5 Conclusions and future directions

Batch RL is a model-free and data-efficient technique that can learn to make decisions from a large amount of data. For high-dimensional RL problems, it is necessary to develop RL algorithms which can select or learn features automatically from data. In this paper, we have provided a survey of recent progress in feature selection and feature learning for high-dimensional batch RL problems. Automatic feature selection techniques such as regularization, matching pursuit, and random projection can select suitable features for batch RL algorithms from a set of features given by the designer. Unsupervised feature learning methods, such as manifold learning, spectral learning, and deep learning, can learn representations or features directly, and thus hold great promise for high-dimensional RL algorithms. Combining unsupervised and supervised learning with RL promises an advanced class of intelligent control methods. Furthermore, we have also surveyed recent theoretical progress in applying statistical machine learning to establish rigorous convergence and performance analysis for batch RL algorithms with function approximation architectures.
To further promote the development of RL, we believe the following directions need to be considered in the near future. Most existing batch RL methods assume that the action space is finite, but many real-world systems have continuous action spaces. When the action space is large or continuous, it is difficult to compute the greedy policy at each iteration; it is therefore important to develop RL algorithms which can solve MDPs with large or continuous action spaces. RL has a strong relationship with supervised learning and unsupervised learning, so it is quite appealing to introduce more machine learning methods to RL problems. For example, there has been research on combining transfer learning with RL[114], aiming to solve different tasks with transferred knowledge. When the training data set is large, the computational cost of batch RL algorithms becomes a serious problem; it will be quite promising to parallelize existing RL algorithms in the framework of parallel or distributed computing to deal with large-scale problems. For example, the MapReduce framework[115] was used to design parallel RL algorithms. Last but not least, it is important to apply batch RL algorithms based on feature selection or feature learning to solve real-world problems in power grids, transportation, health care, etc.

## References

[1] R. S. Sutton, A. G. Barto. Reinforcement Learning: An Introduction, Cambridge, MA, USA: MIT Press, 1998.
[2] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming, New York, NY, USA: John Wiley & Sons, Inc., 1994.
[3] R. E. Bellman. Dynamic Programming, Princeton, NJ, USA: Princeton University Press, 1957.
[4] C. Szepesvári. Algorithms for Reinforcement Learning, San Mateo, CA, USA: Morgan & Claypool Publishers, 2010.
[5] P. J. Werbos. Approximate dynamic programming for real-time control and neural modeling. Handbook of Intelligent Control: Neural, Fuzzy, and Adaptive Approaches, D. A. White, D. A. Sofge, Eds., New York, USA: Van Nostrand Reinhold, 1992.
[6] D. P. Bertsekas, J. N. Tsitsiklis. Neuro-dynamic Programming, Belmont, MA, USA: Athena Scientific, 1996.
[7] J. Si, A. G. Barto, W. B. Powell, D. C. Wunsch. Handbook of Learning and Approximate Dynamic Programming, New York, USA: Wiley-IEEE Press, 2004.
[8] W. B. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality, New York, USA: Wiley-Interscience, 2007.
[9] F. Y. Wang, H. G. Zhang, D. R. Liu. Adaptive dynamic programming: An introduction. IEEE Computational Intelligence Magazine, vol. 4, no. 2, pp. 39–47, 2009.
[10] F. L. Lewis, D. R. Liu. Reinforcement Learning and Approximate Dynamic Programming for Feedback Control, Hoboken, NJ, USA: Wiley-IEEE Press, 2013.
[11] F. Y. Wang, N. Jin, D. R. Liu, Q. L. Wei. Adaptive dynamic programming for finite-horizon optimal control of discrete-time nonlinear systems with ε-error bound. IEEE Transactions on Neural Networks, vol. 22, no. 1, pp. 24–36, 2011.
[12] D. Wang, D. R. Liu, Q. L. Wei, D. B. Zhao, N. Jin. Optimal control of unknown nonaffine nonlinear discrete-time systems based on adaptive dynamic programming. Automatica, vol. 48, no. 8, pp. 1825–1832, 2012.
[13] D. R. Liu, D. Wang, X. Yang. An iterative adaptive dynamic programming algorithm for optimal control of unknown discrete-time nonlinear systems with constrained inputs. Information Sciences, vol. 220, pp. 331–342, 2013.
[14] H. Li, D. Liu. Optimal control for discrete-time affine non-linear systems using general value iteration. IET Control Theory and Applications, vol. 6, no. 18, pp. 2725–2736, 2012.
[15] A. Gosavi. Simulation-based Optimization: Parametric Optimization Techniques and Reinforcement Learning, Secaucus, NJ, USA: Springer Science & Business Media, 2003.
[16] V. S. Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint, Hindustan, India: Hindustan Book Agency, 2008.
[17] S. Lange, T. Gabel, M. Riedmiller. Batch reinforcement learning. Reinforcement Learning: State-of-the-Art, Adaptation, Learning, and Optimization, M. Wiering, M. van Otterlo, Eds., Berlin, Germany: Springer-Verlag, pp. 45–73, 2012.
[18] D. P. Bertsekas. Approximate policy iteration: A survey and some new methods. Journal of Control Theory and Applications, vol. 9, no. 3, pp. 310–335, 2011.
[19] L. Busoniu, R. Babuska, B. De Schutter, D. Ernst. Reinforcement Learning and Dynamic Programming Using Function Approximators (Automation and Control Engineering), Boca Raton, FL, USA: CRC Press, 2010.
[20] L. Busoniu, D. Ernst, B. De Schutter, R. Babuska. Approximate reinforcement learning: An overview. In Proceedings of IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, IEEE, Paris, France, 2011.
[21] M. Geist, O. Pietquin. Algorithmic survey of parametric value function approximation. IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 6, pp. 845–867, 2013.
[22] G. J. Gordon. Approximate Solutions to Markov Decision Processes, Ph.D. dissertation, Carnegie Mellon University, USA, 1999.
[23] D. Ormoneit, Ś. Sen. Kernel-based reinforcement learning. Machine Learning, vol. 49, no. 2–3, pp. 161–178, 2002.
[24] D. Ernst, P. Geurts, L. Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, vol. 6, pp. 503–556, 2005.
[25] M. Riedmiller. Neural fitted Q iteration – first experiences with a data efficient neural reinforcement learning method. In Proceedings of the 16th European Conference on Machine Learning, Springer, Porto, Portugal, pp. 317–328, 2005.
[26] S. J. Bradtke, A. G. Barto. Linear least-squares algorithms for temporal difference learning. Machine Learning, vol. 22, no. 1–3, pp. 33–57, 1996.
[27] J. A. Boyan. Technical update: Least-squares temporal difference learning. Machine Learning, vol. 49, no. 2–3, pp. 233–246, 2002.
[28] A. Nedić, D. P. Bertsekas. Least squares policy evaluation algorithms with linear function approximation. Discrete Event Dynamic Systems, vol. 13, no. 1–2, pp. 79–110, 2003.
[29] M. G. Lagoudakis, R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, vol. 4, pp. 1107–1149, 2003.
[30] A. Antos, C. Szepesvári, R. Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, vol. 71, no. 1, pp. 89–129, 2008.
[31] A. Antos, C. Szepesvári, R. Munos. Value-iteration based fitted policy iteration: Learning with a single trajectory. In Proceedings of IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning, IEEE, Honolulu, Hawaii, USA, pp. 330–337, 2007.
[32] M. Puterman, M. Shin. Modified policy iteration algorithms for discounted Markov decision problems. Management Science, vol. 24, no. 11, pp. 1127–1137, 1978.
[33] J. N. Tsitsiklis. On the convergence of optimistic policy iteration. Journal of Machine Learning Research, vol. 3, pp. 59–72, 2002.
[34] B. Scherrer, V. Gabillon, M. Ghavamzadeh, M. Geist. Approximate modified policy iteration. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, pp. 1207–1214, 2012.
[35] A. M. Farahmand, M. Ghavamzadeh, C. Szepesvári, S. Mannor. Regularized policy iteration. Advances in Neural Information Processing Systems, D. Koller, D. Schuurmans, Y. Bengio, L. Bottou, Eds., Cambridge, MA, USA: MIT Press, pp. 441–448, 2008.
[36] A. M. Farahmand, M. Ghavamzadeh, C. Szepesvári, S. Mannor. Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems. In Proceedings of American Control Conference, IEEE, St. Louis, MO, USA, pp. 725–730, 2009.
[37] A. M. Farahmand, C. Szepesvári. Model selection in reinforcement learning. Machine Learning, vol. 85, no. 3, pp. 299–332, 2011.
[38] M. Loth, M. Davy, P. Preux. Sparse temporal difference learning using LASSO. In Proceedings of IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, IEEE, Honolulu, Hawaii, USA, pp. 352–359, 2007.
[39] J. Z. Kolter, A. Y. Ng. Regularization and feature selection in least-squares temporal difference learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ACM, New York, NY, USA, pp. 521–528, 2009.
[40] J. Johns, C. Painter-Wakefield, R. Parr. Linear complementarity for regularized policy evaluation and improvement. In Proceedings of Neural Information Processing Systems, Curran Associates, New York, USA, pp. 1009–1017, 2010.
[41] M. Ghavamzadeh, A. Lazaric, R. Munos, M. W. Hoffman. Finite-sample analysis of Lasso-TD. In Proceedings of the 28th International Conference on Machine Learning, Bellevue, USA, pp. 1177–1184, 2011.
[42] B. Liu, S. Mahadevan, J. Liu. Regularized off-policy TD-learning. In Proceedings of Advances in Neural Information Processing Systems 25, pp. 845–853, 2012.
[43] S. Mahadevan, B. Liu. Sparse Q-learning with mirror descent. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence, Catalina Island, CA, USA, pp. 564–573, 2012.
[44] M. Petrik, G. Taylor, R. Parr, S. Zilberstein. Feature selection using regularization in approximate linear programs for Markov decision processes. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, pp. 871–878, 2010.
[45] M. Geist, B. Scherrer. $L_1$-penalized projected Bellman residual. In Proceedings of the 9th European Workshop on Reinforcement Learning, Athens, Greece, pp. 89–101, 2011.
[46] M. Geist, B. Scherrer, A. Lazaric, M. Ghavamzadeh. A Dantzig selector approach to temporal difference learning. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, pp. 1399–1406, 2012.
[47] Z. W. Qin, W. C. Li, F. Janoos. Sparse reinforcement learning via convex optimization. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, pp. 424–432, 2014.
[48] M. W. Hoffman, A. Lazaric, M. Ghavamzadeh, R. Munos. Regularized least squares temporal difference learning with nested $l_2$ and $l_1$ penalization. In Proceedings of the 9th European Conference on Recent Advances in Reinforcement Learning, Athens, Greece, pp. 102–114, 2012.
[49] J. Johns, S. Mahadevan. Sparse Approximate Policy Evaluation Using Graph-based Basis Functions, Technical Report UM-CS-2009-041, University of Massachusetts, Amherst, USA, 2009.
[50] C. Painter-Wakefield, R. Parr. Greedy algorithms for sparse reinforcement learning. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, pp. 1391–1398, 2012.
[51] A. M. Farahmand, D. Precup. Value pursuit iteration. In Proceedings of Advances in Neural Information Processing Systems 25, Stateline, NV, USA, pp. 1349–1357, 2012.
[52] M. Ghavamzadeh, A. Lazaric, O. A. Maillard, R. Munos. LSTD with random projections. In Proceedings of Advances in Neural Information Processing Systems 23, Vancouver, Canada, pp. 721–729, 2010.
[53] B. Liu, S. Mahadevan. Compressive Reinforcement Learning with Oblique Random Projections, Technical Report UM-CS-2011-024, University of Massachusetts, Amherst, USA, 2011.
[54] G. Taylor, R. Parr. Kernelized value function approximation for reinforcement learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ACM, New York, NY, USA, pp. 1017–1024, 2009.
[55] T. Jung, D. Polani. Least squares SVM for least squares TD learning. In Proceedings of the 17th European Conference on Artificial Intelligence, Trento, Italy, pp. 499–503, 2006.
[56] X. Xu, D. W. Hu, X. C. Lu. Kernel-based least squares policy iteration for reinforcement learning. IEEE Transactions on Neural Networks, vol. 18, no. 4, pp. 973–992, 2007.
[57] F. W. Keller, S. Mannor, D. Precup. Automatic basis function construction for approximate dynamic programming and reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, ACM, New York, NY, USA, pp. 449–456, 2006.
[58] R. Parr, C. Painter-Wakefield, L. H. Li, M. L. Littman. Analyzing feature generation for value-function approximation. In Proceedings of the 24th International Conference on Machine Learning, Corvallis, USA, pp. 737–744, 2007.
[59] R. Parr, L. Li, G. Taylor, C. Painter-Wakefield, M. L. Littman. An analysis of linear models, linear value-function approximation, and feature selection for reinforcement learning. In Proceedings of the 25th International Conference on Machine Learning, ACM, New York, NY, USA, pp. 752–759, 2008.
[60] M. M. Fard, Y. Grinberg, A. M. Farahmand, J. Pineau, D. Precup. Bellman error based feature generation using random projections on sparse spaces. In Proceedings of Advances in Neural Information Processing Systems 26, Stateline, NV, USA, pp. 3030–3038, 2013.
[61] M. Belkin, P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, vol. 15, no. 6, pp. 1373–1396, 2003.
[62] S. T. Roweis, L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, vol. 290, no. 5500, pp. 2323–2326, 2000.
[63] J. Tenenbaum, V. de Silva, J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, vol. 290, no. 5500, pp. 2319–2323, 2000.
[64] S. Mahadevan. Proto-value functions: Developmental reinforcement learning. In Proceedings of the 22nd International Conference on Machine Learning, Bonn, Germany, pp. 553–560, 2005.
[65] S. Mahadevan. Representation policy iteration. In Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence, Edinburgh, Scotland, pp. 372–379, 2005.
[66] S. Mahadevan, M. Maggioni, K. Ferguson, S. Osentoski. Learning representation and control in continuous Markov decision processes. In Proceedings of the 21st National Conference on Artificial Intelligence, Boston, USA, pp. 1194–1199, 2006.
[67] S. Mahadevan, M. Maggioni. Value function approximation with diffusion wavelets and Laplacian eigenfunctions. In Proceedings of Advances in Neural Information Processing Systems 18, Vancouver, Canada, pp. 843–850, 2005.
[68] S. Mahadevan, M. Maggioni. Proto-value functions: A Laplacian framework for learning representation and control in Markov decision processes. Journal of Machine Learning Research, vol. 8, no. 10, pp. 2169–2231, 2007.
[69] S. Mahadevan. Learning representation and control in Markov decision processes: New frontiers. Foundations and Trends in Machine Learning, vol. 1, no. 4, pp. 403–565, 2009.
[70] S. Osentoski, S. Mahadevan. Learning state-action basis functions for hierarchical MDPs. In Proceedings of the 24th International Conference on Machine Learning, ACM, New York, NY, USA, pp. 705–712, 2007.
[71] J. Johns, S. Mahadevan. Constructing basis functions from directed graphs for value function approximation. In Proceedings of the 24th International Conference on Machine Learning, Corvallis, USA, pp. 385–392, 2007.
[72] J. Johns, S. Mahadevan, C. Wang. Compact spectral bases for value function approximation using Kronecker factorization. In Proceedings of the 22nd National Conference on Artificial Intelligence, AAAI, California, USA, pp. 559–564, 2007.
[73] M. Petrik. An analysis of Laplacian methods for value function approximation in MDPs. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, pp. 2574–2579, 2007.
[74] J. H. Metzen. Learning graph-based representations for continuous reinforcement learning domains. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Czech Republic, pp. 81–96, 2013.
[75] X. Xu, Z. H. Huang, D. Graves, W. Pedrycz. A clustering-based graph Laplacian framework for value function approximation in reinforcement learning. IEEE Transactions on Cybernetics, vol. 44, no. 12, pp. 2613–2625, 2014.
[76] K. Rohanimanesh, N. Roy, R. Tedrake. Towards feature selection in actor-critic algorithms. In Proceedings of Workshop on Abstraction in Reinforcement Learning, Montreal, Canada, pp. 1–9, 2009.
[77] H. Sprekeler. On the relation of slow feature analysis and Laplacian eigenmaps. Neural Computation, vol. 23, no. 12, pp. 3287–3302, 2011.
[78] L. Wiskott, T. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, vol. 14, no. 4, pp. 715–770, 2002.
[79] M. Luciw, J. Schmidhuber. Low complexity proto-value function learning from sensory observations with incremental slow feature analysis. In Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning, Lausanne, Switzerland, pp. 279–287, 2012.
[80] R. Legenstein, N. Wilbert, L. Wiskott. Reinforcement learning on slow features of high-dimensional input streams. PLoS Computational Biology, vol. 6, no. 8, Article number e1000894, 2010.
[81] W. Böhmer, S. Grünewälder, Y. Shen, M. Musial, K. Obermayer. Construction of approximation spaces for reinforcement learning. Journal of Machine Learning Research, vol. 14, pp. 2067–2118, 2013.
[82] G. E. Hinton, R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, vol. 313, no. 5786, pp. 504–507, 2006.
[83] Y. Bengio, A. Courville, P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
[84] I. Arel, D. C. Rose, T. P. Karnowski. Deep machine learning — A new frontier in artificial intelligence research. IEEE Computational Intelligence Magazine, vol. 5, no. 4, pp. 13–18, 2010.
[85] G. E. Hinton, S. Osindero, Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
[86] R. Salakhutdinov, G. E. Hinton. A better way to pretrain deep Boltzmann machines. In Proceedings of Advances in Neural Information Processing Systems 25, MIT Press, Cambridge, MA, pp. 2456–2464, 2012.
[87] Y. Bengio, P. Lamblin, D. Popovici, H. Larochelle. Greedy layer-wise training of deep networks. In Proceedings of Advances in Neural Information Processing Systems 19, Stateline, NV, USA, pp. 153–160, 2007.
[88] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, P. A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, vol. 11, pp. 3371–3408, 2010.
[89] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[90] G. E. Hinton. A practical guide to training restricted Boltzmann machines. Neural Networks: Tricks of the Trade, 2nd ed., G. Montavon, G. B. Orr, K. R. Müller, Eds., Berlin, Germany: Springer, pp. 599–619, 2012.
[91] B. Sallans, G. E. Hinton. Reinforcement learning with factored states and actions. Journal of Machine Learning Research, vol. 5, pp. 1063–1088, 2004.
[92] M. Otsuka, J. Yoshimoto, K. Doya. Free-energy-based reinforcement learning in a partially observable environment. In Proceedings of the 18th European Symposium on Artificial Neural Networks, Bruges, Belgium, pp. 541–546, 2010.
[93] S. Elfwing, M. Otsuka, E. Uchibe, K. Doya. Free-energy based reinforcement learning for vision-based navigation with high-dimensional sensory inputs. In Proceedings of the 17th International Conference on Neural Information Processing: Theory and Algorithms, Sydney, Australia, pp. 215–222, 2010.
[94] N. Heess, D. Silver, Y. W. Teh. Actor-critic reinforcement learning with energy-based policies. In Proceedings of the 10th European Workshop on Reinforcement Learning, pp. 43–58, 2012.
[95] F. Abtahi, I. Fasel. Deep belief nets as function approximators for reinforcement learning. In Proceedings of IEEE ICDL-EPIROB, Frankfurt, Germany, 2011.
[96] P. D. Djurdjevic, D. M. Huber. Deep belief network for modeling hierarchical reinforcement learning policies. In Proceedings of IEEE International Conference on Systems, Man, and Cybernetics, IEEE, Manchester, UK, pp. 2485–2491, 2013.
[97] R. Faulkner, D. Precup. Dyna planning using a feature based generative model. In Proceedings of Neural Information Processing Systems Workshop on Deep Learning and Unsupervised Feature Learning, Vancouver, Canada, pp. 1–9, 2010.
[98] S. Lange, M. Riedmiller, A. Voigtlander. Autonomous reinforcement learning on raw visual input data in a real world application. In Proceedings of International Joint Conference on Neural Networks, Brisbane, Australia, pp. 1–8, 2012.
[99] S. Lange, M. Riedmiller. Deep auto-encoder neural networks in reinforcement learning. In Proceedings of International Joint Conference on Neural Networks, IEEE, Barcelona, Spain, 2010.
[100] J. Mattner, S. Lange, M. Riedmiller. Learn to swing up and balance a real pole based on raw visual input data. In Proceedings of Advances in Neural Information Processing, Springer-Verlag, Stateline, USA, pp. 126–133, 2012.
[101] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, M. Riedmiller. Playing Atari with deep reinforcement learning. In Proceedings of Neural Information Processing Systems Workshop on Deep Learning and Unsupervised Feature Learning, Nevada, USA, pp. 1–9, 2013.
[102] D. P. Bertsekas. Weighted Sup-norm Contractions in Dynamic Programming: A Review and Some New Applications, Technical Report LIDS-P-2884, Laboratory for Information and Decision Systems, MIT, USA, 2012.
[103] R. Munos. Error bounds for approximate policy iteration. In Proceedings of the 20th International Conference on Machine Learning, Washington DC, USA, pp. 560–567, 2003.
[104] R. Munos. Performance bounds in $L_p$-norm for approximate value iteration. SIAM Journal on Control and Optimization, vol. 46, no. 2, pp. 541–561, 2007.
[105] R. Munos, C. Szepesvári. Finite-time bounds for fitted value iteration. Journal of Machine Learning Research, vol. 9, pp. 815–857, 2008.
[106] S. A. Murphy. A generalization error for Q-learning. Journal of Machine Learning Research, vol. 6, pp. 1073–1097, 2005.
[107] O. Maillard, R. Munos, A. Lazaric, M. Ghavamzadeh. Finite-sample analysis of Bellman residual minimization. In Proceedings of the 2nd Asian Conference on Machine Learning, Tokyo, Japan, pp. 299–314, 2010.
[108] A. Lazaric, M. Ghavamzadeh, R. Munos. Analysis of classification-based policy iteration algorithms. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, pp. 607–614, 2010.
[109] A. Farahmand, R. Munos, C. Szepesvári. Error propagation for approximate policy and value iteration. In Proceedings of Advances in Neural Information Processing Systems 23, Vancouver, Canada, pp. 568–576, 2010.
[110] A. Almudevar, E. F. de Arruda. Optimal approximation schedules for a class of iterative algorithms, with an application to multigrid value iteration. IEEE Transactions on Automatic Control, vol. 57, no. 12, pp. 3132–3146, 2012.
[111] A. Antos, R. Munos, C. Szepesvári. Fitted Q-iteration in continuous action-space MDPs. In Proceedings of Advances in Neural Information Processing Systems 20, pp. 1–8, 2007.
[112] A. Lazaric, M. Ghavamzadeh, R. Munos. Finite-sample analysis of LSTD. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, pp. 615–622, 2010.
[113] A. Lazaric, M. Ghavamzadeh, R. Munos. Finite-sample analysis of least-squares policy iteration. Journal of Machine Learning Research, vol. 13, no. 1, pp. 3041–3074, 2012.
[114] A. Lazaric. Transfer in reinforcement learning: A framework and a survey. Reinforcement Learning: State-of-the-Art, Adaptation, Learning, and Optimization, M. Wiering, M. van Otterlo, Eds., Berlin, Germany: Springer-Verlag, pp. 143–173, 2012.
[115] Y. X. Li, D. Schuurmans. MapReduce for parallel reinforcement learning. In Proceedings of the 9th European Conference on Recent Advances in Reinforcement Learning, Athens, Greece, pp. 309–320, 2011.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8696088790893555, "perplexity": 1628.0986179215809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/warc/CC-MAIN-20190215183319-20190215205319-00150.warc.gz"}
https://jmlr.org/papers/v20/18-321.html
Prediction Risk for the Horseshoe Regression

Anindya Bhadra, Jyotishka Datta, Yunfan Li, Nicholas G. Polson, Brandon Willard; 20(78):1−39, 2019.

Abstract: We show that prediction performance for global-local shrinkage regression can overcome two major difficulties of global shrinkage regression: (i) the amount of relative shrinkage is monotone in the singular values of the design matrix, and (ii) the shrinkage is determined by a single tuning parameter. Specifically, we show that horseshoe regression, with heavy-tailed component-specific local shrinkage parameters in conjunction with a global parameter providing shrinkage towards zero, alleviates both difficulties and consequently results in improved prediction risk. Numerical demonstrations of improved prediction over competing approaches in simulations and in a pharmacogenomics data set confirm our theoretical findings.
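For context, a minimal sketch of the global-local structure of the standard horseshoe prior (the formulation of Carvalho, Polson, and Scott); this is illustrative background, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 10

# Half-Cauchy draws via |Cauchy|: one global scale tau, and heavy-tailed
# component-specific local scales lambda_i.
tau = np.abs(rng.standard_cauchy())
lam = np.abs(rng.standard_cauchy(p))

# Coefficients: the global parameter shrinks everything towards zero, while
# the heavy-tailed local parameters let individual signals escape shrinkage.
beta = rng.normal(0.0, tau * lam)
```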
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.820401668548584, "perplexity": 3179.2146658181364}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189038.24/warc/CC-MAIN-20201127015426-20201127045426-00506.warc.gz"}
https://kb.osu.edu/dspace/handle/1811/29722
# COMPARISON OF THE $\nu_{1} + \nu_{3}$ BAND INTENSITY OF $SO_{2}$ DETERMINED BY HIGH RESOLUTION MEASUREMENTS AND THE TOTAL INTEGRATED BAND INTENSITY TECHNIQUE

Creators: Lafferty, W. J.; Pine, A. S.; Sams, Robert L.; Flaud, J.-M.
Issue Date: 1995
Publisher: Ohio State University

Abstract: The $\nu_{1} + \nu_{3}$ combination band is the strongest absorption band of $SO_{2}$ that falls in an atmospheric window. We have studied this band with a difference-frequency laser spectrometer at low pressure (0.20 Torr, 8.25 m path) to minimize pressure broadening effects. Close to 2000 lines of the (101-000) and (111-010) bands have been assigned, as well as 100 lines of the (101-000) band of the $^{34}SO_{2}$ isotopic species. After correction for a very weak Fermi interaction of the energy levels of (101) with the nearby (120) state, the observed transition wavenumbers can be fit using a Watson Hamiltonian to within the experimental uncertainty ($\pm 0.00011\ cm^{-1}$). The observed peak absorptions, together with small corrections for pressure broadening and instrumental effects, were used to calculate individual line intensities for all unblended lines. These intensities were least-squares fit, and transition moments as well as their rotational corrections were obtained. These rotational and transition moment constants were then used to generate a listing of line positions and intensities, and the total band intensity was obtained by summing all the calculated intensities. The band intensities obtained in $cm^{-2} atm^{-1}$ at 296 K for $^{32}SO_{2}$ are $S_{\nu}(101-000) = 13.36$, $S_{\nu}(111-010) = 1.052$, and $S_{\nu}(120-000) = 0.170$. The uncertainty in these values is estimated to be $\pm 5\%$. The total integrated band intensity was obtained using a commercial FT spectrometer at a resolution of $0.11\ cm^{-1}$ using NIST primary standard gas mixtures of $SO_{2}$ in $N_{2}$ with a total pressure of 1 atm. Nine measurements were made using a variety of partial pressures of $SO_{2}$ and path lengths. The total integrated intensity obtained was $15.19(46)\ cm^{-2} atm^{-1}$ at 296 K. After correction for hot-band contributions to the intensity and the isotopic abundance of sulfur, a value for the total band intensity of the (101-000) band of $^{32}SO_{2}$ of $13.31(40)\ cm^{-2} atm^{-1}$ was obtained, in excellent agreement with the high resolution results.

Description: Author Institution: NIST, Gaithersburg, MD 20899; NIST, Gaithersburg, MD 20899; Univ. de P. et M. Curie, Tour 13 Bte. 76, 4 Place Jussieu, 75252 Paris Cedex 05, France.
URI: http://hdl.handle.net/1811/29722
Other Identifiers: 1995-RL-09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8364362120628357, "perplexity": 1986.714715828899}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438043062723.96/warc/CC-MAIN-20150728002422-00065-ip-10-236-191-2.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/259619/a-question-on-irreducible-markov-chains
A question on irreducible Markov chains

I have a question on irreducible Markov chains that has been bugging me for a few hours now. I have the Markov chain defined by $P(i, i-1) = 1 - P(i,i+1) = \frac{1}{2(i+1)}$ for $i \geq 1$, and $P(0,1) = 1$. This chain is irreducible, and I am asked to prove that, starting from state $i$, we almost surely hit the state $a$ with $a > i$ in a finite amount of time.

I think it can be proven by saying that for all states between $0$ and $a$, we have a probability $p > \frac{a+2}{2(a+1)} > 0.5$ of moving up by one, so I think I can compare the Markov chain to a random walk with uniform up-probability $\frac{a+2}{2(a+1)}$, which tends towards $+\infty$. Is my reasoning correct? I feel that either there is a much more direct method, or that what I am trying to prove is true in general as long as the chain is irreducible (regardless of the transition probabilities). Thanks!

- What do you mean by in a finite amount of time? Almost surely? Then indeed the specifics of the transition probabilities are irrelevant. – Did Dec 16 '12 at 9:18
- @Did: I mean that almost surely we hit the state a starting from state i. – lezebulon Dec 16 '12 at 14:56
- Then the chain starting from i, before it hits a > i, is irreducible on a finite state space, hence it visits each state almost surely, including state a. This proves that one hits a almost surely, irrespectively of the detail of the transition probabilities. – Did Dec 16 '12 at 18:02
- @Did: thanks, but I'm not entirely sure. Indeed it is not possible that we almost surely never hit state a. But does that prove that we almost surely hit state a? What if the probability to hit state a starting from i is something < 1? – lezebulon Dec 16 '12 at 20:12
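A quick simulation sketch (an empirical check, not a proof) of the chain described in the question; the function name and the step cutoff are illustrative:

```python
import random

def hit_time(start, target, max_steps=10**6):
    """Simulate the chain with P(i, i-1) = 1/(2(i+1)) and
    P(i, i+1) = 1 - 1/(2(i+1)) for i >= 1, and P(0, 1) = 1,
    until `target` is hit or max_steps elapse."""
    state, steps = start, 0
    while state != target and steps < max_steps:
        if state == 0 or random.random() >= 1.0 / (2 * (state + 1)):
            state += 1          # move up
        else:
            state -= 1          # move down, with probability 1/(2(i+1))
        steps += 1
    return steps if state == target else None

# Empirically, the chain started at 1 reaches 10 in every run:
times = [hit_time(1, 10) for _ in range(1000)]
assert all(t is not None for t in times)
```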
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9435192942619324, "perplexity": 182.4030321401494}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276131.97/warc/CC-MAIN-20160524002116-00036-ip-10-185-217-139.ec2.internal.warc.gz"}
http://forum.cosmoquest.org/showthread.php?168876-Van-der-Waals-and-Bremsshralung-radiation-from-Sonoluminscence&s=2d00b7c61e8246cfadbb2f6c9fa7b7d6
1. ## Van der Waals and Bremsstrahlung radiation from Sonoluminescence

abstract

After some derivations, I have reached some theoretical equations with which I wish to attempt to explain corrective energies in sonoluminescence. Taking into consideration that the bubble wall velocity increases, charged particles inside the bubble could very well be caught in this motion, collectively giving off an energy satisfying the equations of motion for the logarithmic spiral. We also study a form of the equation which takes into account any Van der Waals forces between the particles, before being dominated by the large thermal contribution.

My original derivations looked a bit messy so I sought a simplification. The ordinary equation was

Distributing the density and simplifying some terms, I get

Integrating the volume element, we obtain the simplified version of our equations

or

with being a charge density and

The dimensions of this equation are force over length, or energy over area. It has an ''acoustic energy'' part given by and a wall velocity term . This part can be seen in terms of an ''acoustic intensity'' term. The surface tension is also known to have a coefficient of where is the critical temperature (the Guggenheim–Katayama formula). As temperature increases, the surface tension decreases.

[1] - An alternate simplification from a previous Lagrangian of the theory we formalised requires only the additional binding or repulsive energies from Van der Waals forces (which is the Lagrangian)

The repulsive nature could temporarily explain the expanding of the bubble, but it seems more likely related to pressures and temperature.

[2] - Further, there is a part of this equation

Namely, this expression can be fashioned in a different way:

This is not too far from the difference of such a potential which actually gives rise to the Lamb shift, itself a direct consequence of the vacuum energy, i.e. the Casimir effect, up to powers of the fine structure constant.

This was all possible from theoretical assumptions added to the physics of the Rayleigh-Plesset equation, which took into account thermal variations - which we now give in the form

with additional assumptions made about the wall kinematic viscosity and the motion of charges inside the bubble:

The equations for the expanding and collapsing bubble were very similar to those of the Friedmann equation, and this gave me some possible insights into the suggested ''corrective terms.''

2. Dear Dubbelosix

You might want to consider giving a bit more information here, because (for me and I suppose others) the "introduction" leaves me bewildered about what is going to be presented, and then you jump into a set of equations of which I (and probably others) have no idea what they are supposed to describe and why the = signs that you give are valid.

3. Originally Posted by tusenfem
Dear Dubbelosix
You might want to consider giving a bit more information here, because (for me and I suppose others) the "introduction" leaves me bewildered about what is going to be presented, and then you jump into a set of equations of which I (and probably others) have no idea what they are supposed to describe and why the = signs that you give are valid.

Sure. I'll take people through the entire derivation.... Will take some time, but at least people won't be so clueless.
4. The Rayleigh-Plesset equation and the Friedmann equation share some common features; as I also explained, this is because they both describe similar physics, founded on the fluid expansion and collapse modes of the Friedmann equation for a spherical homogeneous distribution of matter. Let's draw in on those relationships first, then we will be able to proceed further with my ideas.

The Rayleigh-Plesset equation is

$$\frac{p_B(t) - p_\infty(t)}{\rho} = R\ddot{R} + \frac{3}{2}\dot{R}^2 + \frac{4\nu\dot{R}}{R} + \frac{2\gamma}{\rho R}$$

in which the following variables are defined:

- ρ is the density (https://en.wikipedia.org/wiki/Density) of the surrounding liquid, assumed to be constant
- R(t) is the radius of the bubble, which when taken as a ratio with itself is by definition related to the scale factor, which also features implicit time dependence
- ν is the kinematic viscosity (https://en.wikipedia.org/wiki/Kinematic_viscosity) of the surrounding liquid, assumed to be constant
- γ is the surface tension (https://en.wikipedia.org/wiki/Surface_tension) of the bubble-liquid interface

I have noticed, from my own work, that the fluid expansion plays the role of a coefficient on all the terms in the non-conserved definition of the equations. Nevertheless, the definition above also implements a fluid expansion of its own. Of course, expanding universes and expanding ''bubbles'' require the same base mathematics. Let's take a look at one formulation of the equations I have written in the past.

If we take one of the time derivatives (preferably the second term) as a curvature term, then we can modify the dynamics of the Rayleigh-Plesset equation in some unique ways. We can also add density and pressure terms, which, if you take relativity seriously enough like I do, should be in there. First let us look at a few equations I derived, and then afterwards let's assume an object can be created from it. The equations which feature similar terms are:

The last term refers to rotating systems and so will produce Larmor radiation, also known as cyclotron radiation.

**Since the viscosity actually refers to the motion of the fluid around the bubble, any charged particles compressed into the region [will] exhibit the behaviour of the equation above!**

Comparing these results with the Rayleigh-Plesset equation (since the surface tension is , we may derive from the last equation a direct proportionality with viscosity )

(which is another form to write it, with terms appearing on the right hand side of the equation) and, courtesy of wiki, I extract a quick set of definitions for the terms:

''This is an approximate equation that is derived from the Navier–Stokes equations (https://en.wikipedia.org/wiki/Navier...okes_equations) (written in spherical coordinate system (https://en.wikipedia.org/wiki/Spheri...rdinate_system)) and describes the motion of the radius of the bubble *R* as a function of time *t*. Here, *μ* is the viscosity (https://en.wikipedia.org/wiki/Viscosity), *p* the pressure (https://en.wikipedia.org/wiki/Pressure), and *γ* the surface tension (https://en.wikipedia.org/wiki/Surface_tension).''

It becomes a lot clearer why the dynamics are similar, at least in respect of like terms. We may use the previous equation later, but for now I want to concentrate on this form of the equation, simply because it formats the terms very clearly in relationship to each other, and also because it is from this form of the equation (when set to zero) that you can formulate a Lagrangian.
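For reference, a standard step behind such Lagrangian formulations (a textbook result, stated here under the incompressibility assumption above): integrating the kinetic energy density of the liquid, using the velocity field $u(r,t) = \dot{R}R^2/r^2$, gives

$$E_K = \frac{1}{2}\rho \int_R^{\infty} \left(\frac{\dot{R}\,R^{2}}{r^{2}}\right)^{2} 4\pi r^{2}\,\mathrm{d}r = 2\pi\rho\,R^{3}\dot{R}^{2}.$$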
Let's have a look at (what I will call this time) the Rayleigh-Plesset Lagrangian:

All we do is assume a distribution of a mass term

There are some obvious, crucial dynamical differences between this equation and the Friedmann equation, if only within a heuristic framework. In fact, drawing on sonoluminescence, we may think of the compression exerted by the forces imploding the material inside the bubble as nothing more than a cold fusion which releases a large amount of energy. There was a detection of excess neutrons from no external source, supporting the notion that some kind of fusion is occurring. There are also tantalizing arguments for the case of it being vacuum related; that the amount of energy released can perhaps exceed what can be taken from thermonuclear reactions has been postulated by a number of authors. The kinematic viscosity (https://en.wikipedia.org/wiki/Kinematic_viscosity) of the surrounding liquid is often assumed constant, but **it is very likely** that phase transitions do occur when the bubble expands and inexorably implodes.

5. Let's jump straight into it. From the last post, the two equations which interested me were:

With the last equation I stated:

The reason why this was stated is that it is believed by a number of scientists that Larmor radiation from accelerated charged particles may be a contributor to the phenomenon. Based on the fact that magnetic fields have been detected around the ''star in a jar'', I think this is possible, since a rotary feature gives rise to a closed current in which charged particles could be bound at high momentum, giving off Larmor radiation. Certainly the amount of energy from the source (if it cannot be described by nuclear fusion alone) could have other additional contributors.

Another equation which interested me was the proposed Lagrangian of the theory:

(the fluid equation, a relativistic consequence, where the density and the pressure follow standard notation). However, from this point on, if those who can use the equations find that Larmor radiation from accelerated particles within the framework of the ''reaction'' can fully and adequately explain the energies observed emitted from the source, then I would be inclined not to think of a nuclear reaction explanation but of one primarily induced from accelerated particles, from the weak equivalence of general relativity. For instance, the viscosity **should** have an effect on the bubble's interior particles if it contributes an angular coupling of momentum to the interior particle motions. It is similar to why we would expect systems inside a rotating universe to owe their own property of rotation to the same effect. In such cases, magnetic fields do in fact exist, even if the electric forces have tended to zero!

Now... there are a few ways to continue from here, but I think I have found the simplest way to continue: we've established, from my point of view, that if Larmor radiation is involved here, then this equation

will also require new terms to account for rotational radiation from the accelerated motion of charges.
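For reference, the standard non-relativistic Larmor formula for the power radiated by a charge $q$ undergoing acceleration $a$, which is the result being appealed to here, reads (in SI units)

$$P = \frac{q^2 a^2}{6\pi\varepsilon_0 c^3}.$$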
According to wiki: ''This equation, though approximate, has been shown to give good estimates on the motion of the bubble under the acoustically (https://en.wikipedia.org/wiki/Acoustics) driven field except during the final stages of collapse. Both simulation and experimental measurement show that during the critical final stages of collapse, the bubble wall velocity exceeds the speed of sound of the gas inside the bubble.''

So we already have an immediate connection between the viscosity of the surrounding fluid and possibly increasing temperatures from the accelerated charges contained inside the bubble, under tremendous pressure and possibly high density (compared to elements on the periodic table). Regardless, it should be obvious this is important, because the Rayleigh-Plesset equation is only approximate towards the final stages, in which the bubble implodes. If it had explained this phase transition, the phenomenon would not be such a mystery today.

6. Spiral trajectories for charged particles will follow on to the jerk and then higher derivatives such as and , and is a continuous function unless electrons collapse towards the center. Since we do not know the full dynamics, and some have suspected a nuclear fusion of sorts to be going on during the high temperature phase, anything is possible. If they were to collapse and add energy to fusion, then Larmor radiation is definitely involved.

The above has dimensions of power, so it can be written as such:

Ignoring this definition though, the path we take will be to find an analogous Friedmann ''set-up'' for the Rayleigh-Plesset equation of the form (ignoring additional constants):

featuring the charge to mass ratio coefficient. Either way, this is still not the equation we will investigate; we need to simplify a bit more:

This term describes the fluctuations of a vacuum energy contribution as

or in the form we suggest

which allows the dual production of photon pairs from the vacuum energy directly. It is also curvature dependent, because in this particular theory curvature is a mechanism for boosting virtual particles into real ones. It is also unlikely that I will take a vacuum energy explanation into too serious consideration.

7. Now... there are many forms of the Rayleigh-Plesset equation. Not only is this adding to the difficulty of me going forward; there are, in hindsight, loads of different variations of the Rayleigh-Plesset equation for different perspectives and different kinds of physics. Of course our ultimate goal is to solve one issue it has concerning a temperature gradient. The very structure of the equation seems inexorably more complicated than the Friedmann equation, because there are not only boundary conditions but also extra variables, making it a complicated equation overall. The more variables you have, the harder it is to solve an equation like this. For instance, we have new variables like: the material derivative; the sum of all steady and unsteady pressures outside of the bubble; the pressure at the boundary within the fluid; the sum of all steady and unsteady pressures in the gas; the sum of all steady and unsteady pressures in the bubble interior; a vapor pressure; the pressure in the liquid at the bubble wall; and the static pressure in the liquid outside the bubble wall. Then, in limits, there is the assumed magnitude of the pressure far from the bubble. A small $r$ will be reserved for the range from the bubble center and a large $R$ for the bubble radius itself. In common notation there is also the liquid particle velocity...
and so much more, which makes this one of the harder equations we will have investigated.

It seems like we should cover why the Rayleigh-Plesset equation is what it is, and work within a simple framework to start. To do this we will need to derive the equation along the thought-processes which led Rayleigh and Plesset to their discovery.

**Derivation of the Rayleigh-Plesset Equation in terms of Energy Balance**

A spherical gas ''bubble'' in an incompressible liquid has a fluid velocity which falls off as an inverse square law with range $r$, as a result of the liquid incompressibility (which is an assumption of the theory). The velocity is

$$u(r,t) = \frac{R^2\dot{R}}{r^2}$$

where $R$ is once again the bubble radius, with $\dot{R}$ this time interpreted as the wall velocity. The bubble can change; in a Friedmann equation this is also true as it pertains to an expanding or collapsing metric. As the bubble radius changes from an equilibrium value to some other, work is done on the bubble by the pressure which would exist at the location of the center of the bubble. There is experimental evidence that all the dynamics in fact result from a varying pressure inside the bubble, giving rise to the new physics found in sonoluminescence. If the spatial scales over which the pressure changes are much greater than the bubble radius, then this almost equals the liquid pressure far from the radius. It is said here that there is an included dynamic component, but really the object looks very similar to the effective density of a Friedmann equation; they arise as a relativistic consequence. In previous work, when another time derivative featured, giving the non-conserved solutions of a Friedmann equation, a temperature gradient could have been added directly from a fluid/state equation.

The difference between this work and the work done by the pressure at the bubble wall will equal the kinetic energy in the liquid. The balance equation states that

Differentiation of this with respect to the radius gives

The terms on the left arise from the difference in the work done at the bubble wall and remote from the bubble, and the terms on the right arise from the kinetic energy imparted to the liquid. Note also from this last object we can form:

If the pressure far from the bubble comprises both a static component $p_0$ and an applied driving pressure $P(t)$, then this can be expressed giving the Rayleigh-Plesset equation of motion, ignoring additional constants:

$$R\ddot{R} + \frac{3}{2}\dot{R}^2 = \frac{p_B(t) - p_0 - P(t)}{\rho}.$$

And we did what we wanted for this post, which was deriving the Rayleigh-Plesset equation; but really, we derived it also for the second-last equation, because it is likely we will come back to it with physics learned from Friedmann cosmology.

**References:**

http://brennen.caltech.edu/fluidbook...etequation.pdf
https://eprints.soton.ac.uk/45698/1/Pub9182.pdf
A simpler version, which we will look at, has been seen in the literature: https://www.researchgate.net/post/Ho...esset_equation

8. The wall velocity has to be proportional to a dragging or inertial effect on the surrounding gas bubble coupling to interior dynamics, similar again to Friedmann cosmology in the rotating model, where early galaxies tended to align themselves with the rotational motion of the horizon of the universe.
A good thought analogy, though with different physics, is how electrons align themselves in a Stern-Gerlach experiment, where electrons tend to align themselves with the magnetic lines of force. An early rotating universe is also expected to have primordial magnetic forces. The wall velocities are intrinsically part of the functions giving rise to the LHS of the equation. Those velocities also satisfy Larmor radiation from the spiral trajectories, or accelerated motions, which I have already given as a relationship satisfying:

where we define as a ''central potential'' related to rotating systems. Note also that the potential depends on the bubble radius. It may become an important addition to any equation we submit in this work as ''a candidate'' to explain dynamics outside of the normal Rayleigh-Plesset equation.

I have no idea whether the gradient of the temperature has been considered from the simplified form of the Rayleigh-Plesset equation of the form:

A temperature gradient allows a logical modification of the Rayleigh-Plesset equation; the best bit is, we did not use any ad hoc statements to get there, since within general relativity the relationship ratios hold in hydrodynamics combined. The equation of state in my original cosmology studies which allowed a Friedmann equation to explain anisotropies took the form:

As I said before, the equation confirms the existence of an effective density component based on the assumptions made about $p_{\infty}$, since, if the spatial scales over which the pressure changes are much greater than the bubble radius, then this almost equals the liquid pressure far from the radius, $p_{\infty} = p_0 + P(t)$. This result can be seen in complete analogy with the effective density found within Friedmann dynamics, $\rho + P$, which makes me wonder if we can extract that physics and redefine the Rayleigh-Plesset equation in a way that is hopefully dynamically interesting. The obvious way forward is that if $p_{\infty}$ arises dynamically, synonymous with the reasons the effective density arises in the Friedmann equation, then maybe the two could very well be replaced with each other. It does not seem to harm the physics, in my mind, so long as we do not forget the real reason why the equation's internal dynamics are actually rotating: simply the viscosity of the surrounding incompressible fluid.

In order to do that, we will need to take another look at some key equations that will provide insight into how to move forward; those are:

An obvious relationship we can construct first relies on the kinetic energy density of the system, which is proportional to the proper density

Also, from the equation with richer dynamics, we can construct the energy of the system as

(this is all just notes we are making right now; it is not leading anywhere specific). The Rayleigh-Plesset equation was:

In the Friedmann equation I have shown, like terms actually feature:

9. And finally, this sums it up. I suggested previously that the equation takes the form

with . In terms of its latent heat, a certain approach taken in the literature is that as the bubble shrinks and passes through its equilibrium radius, the condensate will be destroyed and discharge its energy.
This kind of model predicts that each condensate stores an amount of latent heat energy, released in the discharge, given by the following:

- constant-volume heat capacity per mole of the gas in the bubble
- the ideal gas constant
- the ambient atmospheric pressure
- the number of moles of gas in the bubble
- the Van der Waals excluded volume per mole (notation may vary)

One mole of gas under the ideal gas law is:

Next, assume that all particles are hard spheres of the same finite radius r (the van der Waals radius). The effect of the finite volume of the particles is to decrease the available void space in which the particles are free to move - something we briefly covered in previous parts - and the momentum is subject to the commutation property with the position of the particles. We must replace $V_m$ with $V_m - b'$; the corrected equation becomes one using the excluded volume $b'$, which is well known to be four times the proper volume of the particles. The excluded volume is an important feature in the Van der Waals formula

$\left(p + \frac{a}{V_m^2}\right)(V_m - b') = RT$

where $T$ is the temperature and $p$ is a pressure. The Van der Waals formula further contains another feature, which describes the attractive properties of the particles at certain distances, featuring the molar volume $V_m$, in which $a$ is a measure of the average attraction between the particles - it has the form of and . It also features Avogadro's constant with a Boltzmann constant coefficient. The Loschmidt constant (https://en.wikipedia.org/wiki/Loschmidt_constant) is

If we take a dynamic pressure inside of the bubble, then the modified Rayleigh-Plesset equation can take into account the additional interparticle forces, where is the number density. As you can see, there is no need to introduce any new additional features on the RHS when taking into consideration Van der Waals forces. Alternatively, the pressure can be entirely understood as

In which case another form of the equation to be studied will be

**References:**

Van der Waals equation - Wikipedia (https://en.wikipedia.org/wiki/Van_der_Waals_equation)

Last edited by Dubbelosix; 2018-Jun-01 at 06:50 PM.

10. Originally Posted by Dubbelosix

abstract After some derivations, I have reached some theoretical equations with which I wish to attempt to explain corrective energies in sonoluminescence. By taking into consideration that the bubble wall velocity increases, charged particles inside of the bubble could very well be caught in this motion, collectively giving off an energy satisfying the equations of motion for the logarithmic spiral.

Which logarithmic spiral? Is this an ATM (Against The Mainstream) theory?

We also study a form of the equation which takes into account any Van der Waals forces between the particles, before being dominated by the large thermal contribution. My original derivations looked a bit messy so I sought a simplification. The ordinary equation was

Distributing the density and simplifying some terms I get

It appears that you have multiplied both sides of the equation by , except the subscript "waals" has been removed for some reason.

Integrating the volume element we obtain the simplified version of our equations

Integrating the right hand side with respect to V does result in the "constant" terms just being multiplied by V, but that is *not* the case for the terms that have 1/V or 1/V² factors.

11. Banned Join Date May 2018 Posts 158

Originally Posted by grapes

Which logarithmic spiral? Is this an ATM (Against The Mainstream) theory?
It appears that you have multiplied both sides of the equation by , except the subscript "waals" has been removed for some reason.

Integrating the right hand side with respect to V does result in the "constant" terms just being multiplied by V, but that is *not* the case for the terms that have 1/V or 1/V² factors.

Yes - the subscript, I think, I just replaced with a bolded version to differentiate the two.

''Integrating the right hand side with respect to V does result in the "constant" terms just being multiplied by V, but that is *not* the case for the terms that have 1/V or 1/V² factors.''

My calculus is a bit rusty - can you explain why? The way I deduced this was through a dimensional analysis: the appearance of a volume element on the LHS would reduce the dimensions of all other terms on the RHS.

12. Banned Join Date May 2018 Posts 158

Originally Posted by grapes

Which logarithmic spiral? Is this an ATM (Against The Mainstream) theory?

I used a logarithmic spiral equation which calculates the power given off by charged particles on those types of trajectories. I don't know if it is against the mainstream; the problem is there is no consensus on anything here.

13. Banned Join Date May 2018 Posts 158

Oh right, yes, you can also write it as the log, Grapes, if this is what you meant. It actually never occurred to me to write it like that. Like this?

integrating the volume element gives:

using:

Last edited by Dubbelosix; 2018-Jun-01 at 06:15 PM.

14. Originally Posted by Dubbelosix

''Integrating the right hand side with respect to V does result in the "constant" terms just being multiplied by V, but that is *not* the case for the terms that have 1/V or 1/V² factors.''

My calculus is a bit rusty - can you explain why? The way I deduced this was through a dimensional analysis: the appearance of a volume element on the LHS would reduce the dimensions of all other terms on the RHS.

Rusty? Well, that's not the way integration works at all!

Originally Posted by Dubbelosix

Oh right, yes, you can also write it as the log, Grapes, if this is what you meant. It actually never occurred to me to write it like that. Like this?

integrating the volume element gives:

using:

That's headed in the right direction, but that equation is wrong, obviously. You may be missing a few symbols? Did you forget some? No, that's definitely wrong! Hey, this is the first few lines of your first post. You might want to oil up your calculus, if it is that rusty.

15. Banned Join Date May 2018 Posts 158

Originally Posted by grapes

Rusty? Well, that's not the way integration works at all!

That's headed in the right direction, but that equation is wrong, obviously. You may be missing a few symbols? Did you forget some? No, that's definitely wrong! Hey, this is the first few lines of your first post. You might want to oil up your calculus, if it is that rusty.

Maybe you could be a gentleman and refresh that memory? You said I was heading in the right direction, so since I am almost there, why not be a bit more clear? It's not like you are harboring secrets.

16. Banned Join Date May 2018 Posts 158

Originally Posted by grapes

That's headed in the right direction, but that equation is wrong, obviously. You may be missing a few symbols? Did you forget some?

Did I forget something, let's see? Yes I did,

Last edited by Dubbelosix; 2018-Jun-02 at 04:26 PM.

17. Established Member Join Date Oct 2009 Posts 1,698

Originally Posted by Dubbelosix

Did I forget something, let's see?
Yes I did,

Do you have an actual ATM idea that you are prepared to defend here? So far it seems that you are merely presenting a series of badly executed homework exercises. While no doubt thrilling to you, watching someone else work out first-year maths holds rather less interest for others.

18. Banned Join Date May 2018 Posts 158

Originally Posted by Geo Kaplan

Do you have an actual ATM idea that you are prepared to defend here? So far it seems that you are merely presenting a series of badly executed homework exercises. While no doubt thrilling to you, watching someone else work out first-year maths holds rather less interest for others.

That's very condescending of you, isn't it? Personally, just looking at some of the standards of the threads posted in this subforum, I would have thought this would be a refreshing change.

19. Banned Join Date May 2018 Posts 158

The ideas I am defending here are corrections to the Rayleigh-Plesset equation. It's an entirely new way to view this equation because it takes into account the cyclotron radiation from rotating charges inside of the bubble cavity. If you cannot appreciate a good discussion in physics, maybe you should not try.

20. Banned Join Date May 2018 Posts 158

Also I would like to say in my defence that there is nothing ''simple'' in what I have done. Just because it looks like I have used simple algebra does not take away the computation a person has to put in to make sure you are tracking dimensions. It takes study and an understanding of what you are doing to do this in the first place. I used intuition from my Friedmann studies to understand how to put in the pressure from Van der Waals forces and found a direct way to plug in equivalent terms for the radiation from cyclotron motion.

21. Established Member Join Date Oct 2009 Posts 1,698

Originally Posted by Dubbelosix

The ideas I am defending here are corrections to the Rayleigh-Plesset equation. It's an entirely new way to view this equation because it takes into account the cyclotron radiation from rotating charges inside of the bubble cavity. If you cannot appreciate a good discussion in physics, maybe you should not try.

Good discussions in physics are always welcome here at CQ. However, the ATM forum is for a very different and specific purpose. You are labouring under a misapprehension that perhaps stems from not having read the rules of the forum (link is in the mod's signature, for your convenience). Again, the fact that you are still attempting to get first-year calculus correct tells us that you are not near ready to defend your idea. Perhaps you would consider working out the details first, rather than expecting a collaborative development here.

22. Originally Posted by Dubbelosix

Also I would like to say in my defence that there is nothing ''simple'' in what I have done. Just because it looks like I have used simple algebra does not take away the computation a person has to put in to make sure you are tracking dimensions. It takes study and an understanding of what you are doing to do this in the first place. I used intuition from my Friedmann studies to understand how to put in the pressure from Van der Waals forces and found a direct way to plug in equivalent terms for the radiation from cyclotron motion.

Maybe a good way of starting a thread like this is by actually telling people what you want to discuss, because most likely few will know what the Rayleigh-Plesset equation is, let alone why it needs the correction that you claim to introduce here.
So, I would advise you, as I did in my first comment, to actually tell us what you are going for here. Also, it might be good not to immediately post 7 pages of calculus when you yourself claim to be rusty on the topic. Small steps also get you towards your goal.

23. Banned Join Date May 2018 Posts 158

Originally Posted by tusenfem

Maybe a good way of starting a thread like this is by actually telling people what you want to discuss, because most likely few will know what the Rayleigh-Plesset equation is, let alone why it needs the correction that you claim to introduce here. So, I would advise you, as I did in my first comment, to actually tell us what you are going for here. Also, it might be good not to immediately post 7 pages of calculus when you yourself claim to be rusty on the topic. Small steps also get you towards your goal.

I have explained what we are doing; I will explain it more simply - the Rayleigh-Plesset equation describes the physics of an expanding and collapsing bubble. The equation does not work well though: ''Some have argued that the Rayleigh–Plesset equation described above is unreliable for predicting bubble temperatures and that actual temperatures in sonoluminescing systems can be far higher than 20,000 kelvins. Some research claims to have measured temperatures as high as 100,000 kelvins, and speculates temperatures could reach into the millions of kelvins. Temperatures this high could cause thermonuclear fusion.'' So we have a temperature gradient, which is one unique change in our approach. Since magnetic fields have been detected around the phenomenon, it's very likely Larmor/cyclotron radiation will be present from accelerating charges. The dynamic view I have taken in this work is that the charges owe their radiation to coupling with the wall velocity, which, as the bubble collapses, goes much faster than the speed of sound. This is what the new extra terms on the RHS of the following equation do:

So what is it I am trying to achieve? Well, hopefully it's clear: it's about correcting the Rayleigh-Plesset equation to suit the new physics we suspect.

Last edited by Dubbelosix; 2018-Jun-02 at 08:54 PM.

24. Banned Join Date May 2018 Posts 158

Originally Posted by Geo Kaplan

Again, the fact that you are still attempting to get first-year calculus correct tells us that you are not near ready to defend your idea. Perhaps you would consider working out the details first, rather than expecting a collaborative development here.

No, not at all - you came in here with a very arrogant attitude, claiming the work was of high-school level. I really do challenge that. I don't think you understand how minor the objection was concerning the log of the volume, since it is a completely dimensionless component and has no dimensional significance whatsoever. You came in here rude, and it's our first meeting. Go figure. This isn't about defending the work, it's about you trying to insult me.

25. Originally Posted by Dubbelosix

The equation does not work well though: ''Some have argued that the Rayleigh–Plesset equation described above is unreliable for predicting bubble temperatures and that actual temperatures in sonoluminescing systems can be far higher than 20,000 kelvins. Some research claims to have measured temperatures as high as 100,000 kelvins, and speculates temperatures could reach into the millions of kelvins. Temperatures this high could cause thermonuclear fusion.''

Can you provide a reference/link for this quotation?

26.
Banned Join Date May 2018 Posts 158

Originally Posted by Strange

Can you provide a reference/link for this quotation?

sure: https://en.wikipedia.org/wiki/Sonoluminescence

27. Banned Join Date May 2018 Posts 158

I got the idea for the coupling of wall velocity to internal dynamics from my Friedmann cosmological studies. It is similar to how a hypothetical rotating Gödel metric tends to drag matter around with it in the early stages of cosmology.

28. Originally Posted by Geo Kaplan

Do you have an actual ATM idea that you are prepared to defend here? So far it seems that you are merely presenting a series of badly executed homework exercises. While no doubt thrilling to you, watching someone else work out first-year maths holds rather less interest for others.

Originally Posted by Dubbelosix

No, not at all - you came in here with a very arrogant attitude, claiming the work was of high-school level. I really do challenge that. I don't think you understand how minor the objection was concerning the log of the volume, since it is a completely dimensionless component and has no dimensional significance whatsoever. You came in here rude, and it's our first meeting. Go figure. This isn't about defending the work, it's about you trying to insult me.

Both of you knock it off. Stop making assumptions about other members' motivations, stop with the veiled insults, and stop calling out other members. If you have a problem with someone's post, you Report it; you do not start in-thread arguments about it. If you keep it up, we'll start giving out infractions. Stick to math and physics, questions and answers.

Last edited by Swift; 2018-Jun-03 at 03:36 AM. Reason: typo

29. Originally Posted by Dubbelosix

Maybe you could be a gentleman and refresh that memory? You said I was heading in the right direction, so since I am almost there, why not be a bit more clear? It's not like you are harboring secrets.

It is your derivation, with multiple errors. You seem to have found one of the errors. ATM is not the place for guiding you through calculus corrections, although we have started with avowed calculus illiterates and ended up with literates https://forum.cosmoquest.org/showthr...-x-y&p=1087190

Originally Posted by Dubbelosix

Did I forget something, let's see? Yes I did,

Yes, but now you've misapplied that to your equation

Originally Posted by Dubbelosix

Also I would like to say in my defence that there is nothing ''simple'' in what I have done. Just because it looks like I have used simple algebra does not take away the computation a person has to put in to make sure you are tracking dimensions. It takes study and an understanding of what you are doing to do this in the first place. I used intuition from my Friedmann studies to understand how to put in the pressure from Van der Waals forces and found a direct way to plug in equivalent terms for the radiation from cyclotron motion.

Dimensional analysis is necessary but not sufficient. 1 meter doesn't equal 10 meters even though the dimensional analysis is balanced.

Originally Posted by Dubbelosix

I have explained what we are doing; I will explain it more simply - the Rayleigh-Plesset equation describes the physics of an expanding and collapsing bubble. The equation does not work well though: ''Some have argued that the Rayleigh–Plesset equation described above is unreliable for predicting bubble temperatures and that actual temperatures in sonoluminescing systems can be far higher than 20,000 kelvins.
Some research claims to have measured temperatures as high as 100,000 kelvins, and speculates temperatures could reach into the millions of kelvins. Temperatures this high could cause thermonuclear fusion.'' So we have a temperature gradient, which is one unique change in our approach. Since magnetic fields have been detected around the phenomenon, it's very likely Larmor/cyclotron radiation will be present from accelerating charges. The dynamic view I have taken in this work is that the charges owe their radiation to coupling with the wall velocity, which, as the bubble collapses, goes much faster than the speed of sound. This is what the new extra terms on the RHS of the following equation do:

So what is it I am trying to achieve? Well, hopefully it's clear: it's about correcting the Rayleigh-Plesset equation to suit the new physics we suspect.

But that's the same equation that I first quoted, right? Without any correction at all.

30. Banned Join Date May 2018 Posts 158

Originally Posted by grapes

But that's the same equation that I first quoted, right? Without any correction at all.

Yes, no correction. As for multiple errors, you'll need to defend this.
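On the one clearly decidable technical point in the thread - grapes' remark about integrating terms with $1/V$ and $1/V^2$ factors - a symbolic check is straightforward. The coefficients below are placeholders, not the actual terms of the modified equation:

```python
# Check the integration point raised in the thread: integrating
# a + b/V + c/V**2 with respect to V does *not* just multiply every
# term by V. Coefficients a, b, c are placeholders for illustration.
import sympy as sp

V = sp.symbols('V', positive=True)
a, b, c = sp.symbols('a b c', positive=True)

expr = a + b / V + c / V**2
print(sp.integrate(expr, V))   # a*V + b*log(V) - c/V
```

Only the constant term picks up a plain factor of $V$; the $1/V$ term integrates to a logarithm and the $1/V^2$ term to $-c/V$, which is exactly grapes' objection.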
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8405035734176636, "perplexity": 910.3626571295775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510754.1/warc/CC-MAIN-20181016134654-20181016160154-00023.warc.gz"}
https://eccc.weizmann.ac.il/author/09702/
All reports by Author Phuong Nguyen:

TR11-155 | 22nd November 2011

We study the $k$-party `number on the forehead' communication complexity of composed functions $f \circ \vec{g}$, where $f:\{0,1\}^n \to \{\pm 1\}$, $\vec{g} = (g_1,\ldots,g_n)$, $g_i : \{0,1\}^k \to \{0,1\}$ and for $(x_1,\ldots,x_k) \in (\{0,1\}^n)^k$, $f \circ \vec{g}(x_1,\ldots,x_k) = f(\ldots,g_i(x_{1,i},\ldots,x_{k,i}), \ldots)$. When $\vec{g} = (g,g,\ldots,g)$ we denote $f \circ \vec{g}$ by ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992935061454773, "perplexity": 3292.1176269140215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657555.87/warc/CC-MAIN-20190116154927-20190116180927-00328.warc.gz"}
https://arxiv.org/abs/1305.3131
cs.LO

# Refinement in the Tableau Synthesis Framework

Abstract: This paper is concerned with the possibilities of refining and improving calculi generated in the tableau synthesis framework. A general method in the tableau synthesis framework allows to reduce the branching factor of tableau rules and preserves completeness if a general rule refinement condition holds. In this paper we consider two approaches to satisfy this general rule refinement condition.

Comments: 19 pages
Subjects: Logic in Computer Science (cs.LO)
Cite as: arXiv:1305.3131 [cs.LO] (or arXiv:1305.3131v1 [cs.LO] for this version)

## Submission history

From: Dmitry Tishkovsky
[v1] Tue, 14 May 2013 12:29:49 GMT (18kb)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.813273012638092, "perplexity": 4270.330003904874}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814700.55/warc/CC-MAIN-20180223115053-20180223135053-00369.warc.gz"}
https://bobsegarini.wordpress.com/tag/global-warming/
Roxanne Tellier – The Past is Prologue

Posted in Opinion, politics on November 3, 2019 by segarini

One of the few benefits of getting older is having not only a lot of past to remember, but for some, the time to do so in a leisurely fashion, and with a philosophical bent. If we are lucky, and if we look back with clear eyes, we may actually begin to see where we've been, and maybe even to see how our past has impacted upon our present.

Roxanne Tellier – Wild and Wacky Weathering

Posted in Opinion, Review on July 9, 2017 by segarini

In June 2012, Amazon picked The Age of Miracles by Karen Thompson Walker as one of the month's best reads. A combination coming-of-age story and apocalyptic page turner, the novel focused on how people would react to a changed world, where "the Earth's rotation slows, gradually stretching out days and nights and subtly affecting the planet's gravity."

Roxanne Tellier – Weather or Not We're Together

Posted in Opinion, Review on February 20, 2017 by segarini

I don't want to startle anyone … but there's been quite a lot of blue in the sky lately, and there's this big yellowy orange 'ball' up there as well …. and it's been getting kind of warmer, too. Should I worry?

Roxanne Tellier – Motown: the Musical

Posted in Opinion, Review on September 27, 2015 by segarini

The sixties were a glorious time, unlikely to ever be repeated or rivalled. The fifties had been a cautious decade, where women stayed home after marrying to take care of their men, kids didn't sass parents, and no one questioned authority in the family or in their country. Well, at least on the surface.

Darrell Vickers: 2014 – A Sentimental Look Back

Posted in Opinion on January 5, 2015 by segarini

Well, it's 2015 (It's so hard to believe that 1915 was a whole hundred years ago, isn't it?) and that epic New Year's hangover, that made you feel as though a diseased ice weasel was gnawing upon your temporal lobe, has at last begun to wane. The bulging puddles of vomit you pass on the way to work each day have dried into lumpy blobs of bilious concrete and the dead and the dying are finally being carted away by city workers.

JAIMIE VERNON – MY GLACIER WAS GONE

Posted in Opinion on May 31, 2014 by segarini

There is an old adage that says that you can't go home again. It has a double meaning – not only can't you relive the past but in many cases the places where the events in your past actually happened are no longer there.

Roxanne Tellier – Smarm and Self-Righteousness Rule 2013

Posted in Opinion on January 19, 2014 by segarini

History will not be kind to the memory of 2013. It was a year of meanness and spite from people in power, and a deepening of resentment towards politicians, as the ever present surveillance and social media exposed every little thing people never wanted known. As they say, a little knowledge is a dangerous thing … a lot of knowledge is Facebook.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.882862389087677, "perplexity": 1010.7118691293106}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945168.36/warc/CC-MAIN-20230323132026-20230323162026-00569.warc.gz"}
http://www.wikihow.com/Find-the-Equation-of-a-Tangent-Line
# wikiHow to Find the Equation of a Tangent Line

Unlike a straight line, a curve's slope constantly changes as you move along the graph. Calculus introduces students to the idea that each point on this graph could be described with a slope, or an "instantaneous rate of change." The tangent line is a straight line with that slope, passing through that exact point on the graph. To find the equation for the tangent, you'll need to know how to take the derivative of the original equation.

### Method 1 Finding the Equation of a Tangent Line

1. Sketch the function and tangent line (recommended). A graph makes it easier to follow the problem and check whether the answer makes sense. Sketch the function on a piece of graph paper, using a graphing calculator as a reference if necessary. Sketch the tangent line going through the given point. (Remember, the tangent line runs through that point and has the same slope as the graph at that point.)
 • Example 1: Sketch the graph of the parabola ${\displaystyle f(x)=0.5x^{2}+3x-1}$. Draw the tangent going through point (-6, -1). You don't know the tangent's equation yet, but you can already tell that its slope is negative, and that its y-intercept is negative (well below the parabola vertex with y value -5.5). If your final answer doesn't match these details, you'll know to check your work for mistakes.
2. Take the first derivative to find the equation for the slope of the tangent line. For function f(x), the first derivative f'(x) represents the equation for the slope of the tangent line at any point on f(x). There are many ways to take derivatives. Here's a simple example using the power rule:[1]
 • Example 1 (cont.): The graph is described by the function ${\displaystyle f(x)=0.5x^{2}+3x-1}$. Recall the power rule when taking derivatives: ${\displaystyle {\frac {d}{dx}}x^{n}=nx^{n-1}}$. The function's first derivative = f'(x) = (2)(0.5)x + 3 - 0. f'(x) = x + 3. Plug any value a for x into this equation, and the result will be the slope of the line tangent to f(x) at the point where x = a.
3. Enter the x value of the point you're investigating. Read the problem to discover the coordinates of the point for which you're finding the tangent line. Enter the x-coordinate of this point into f'(x). The output is the slope of the tangent line at this point.
 • Example 1 (cont.): The point mentioned in the problem is (-6, -1). Use the x-coordinate -6 as the input for f'(x): f'(-6) = -6 + 3 = -3. The slope of the tangent line is -3.
4. Write the tangent line equation in point-slope form. The point-slope form of a linear equation is ${\displaystyle y-y_{1}=m(x-x_{1})}$, where m is the slope and ${\displaystyle (x_{1},y_{1})}$ is a point on the line.[2] You now have all the information you need to write the tangent line's equation in this form.
 • Example 1 (cont.): ${\displaystyle y-y_{1}=m(x-x_{1})}$ The slope of the line is -3, so ${\displaystyle y-y_{1}=-3(x-x_{1})}$ The tangent line passes through (-6, -1), so the final equation is ${\displaystyle y-(-1)=-3(x-(-6))}$ Simplify to ${\displaystyle y+1=-3x-18}$ ${\displaystyle y=-3x-19}$
5. Confirm the equation on your graph. If you have a graphing calculator, graph the original function and the tangent line to check that you have the correct answer. If working on paper, refer to your earlier graph to make sure there are no glaring mistakes in your answer.
 • Example 1 (cont.): The initial sketch showed that the slope of the tangent line was negative, and the y-intercept was well below -5.5.
The tangent line equation we found is y = -3x - 19 in slope-intercept form, meaning -3 is the slope and -19 is the y-intercept. Both of these attributes match the initial predictions.
6. Try a more difficult problem. Here's a run-through of the whole process again. This time, the goal is to find the line tangent to ${\displaystyle f(x)=x^{3}+2x^{2}+5x+1}$ at x = 2:
 • Using the power rule, the first derivative ${\displaystyle f'(x)=3x^{2}+4x+5}$. This function will tell us the slope of the tangent.
 • Since x = 2, find ${\displaystyle f'(2)=3(2)^{2}+4(2)+5=25}$. This is the slope at x = 2.
 • Notice we do not have a point this time, only an x-coordinate. To find the y-coordinate, plug x = 2 into the initial function: ${\displaystyle f(2)=2^{3}+2(2)^{2}+5(2)+1=27}$. The point is (2,27).
 • Write the tangent line equation in point-slope form: ${\displaystyle y-y_{1}=m(x-x_{1})}$ ${\displaystyle y-27=25(x-2)}$ If required, simplify to y = 25x - 23.

### Method 2 Solving Related Problems

1. Find the extreme points on a graph. These are points where the graph reaches a local maximum (a point higher than the points on either side), or local minimum (lower than the points on either side). The tangent line always has a slope of 0 at these points (a horizontal line), but a zero slope alone does not guarantee an extreme point. Here's how to find them:[3]
 • Take the first derivative of the function to get f'(x), the equation for the tangent's slope.
 • Solve for f'(x) = 0 to find possible extreme points.
 • Take the second derivative to get f''(x), the equation that tells you how quickly the tangent's slope is changing.
 • For each possible extreme point, plug the x-coordinate a into f''(x). If f''(a) is positive, there is a local minimum at a. If f''(a) is negative, there is a local maximum. If f''(a) is 0, there is an inflection point, not an extreme point.
 • If there is a maximum or minimum at a, find f(a) to get the y-coordinate.
2. Find the equation of the normal. The "normal" to a curve at a particular point passes through that point, but has a slope perpendicular to a tangent. To find the equation for the normal, take advantage of the fact that (slope of tangent)(slope of normal) = -1, when they both pass through the same point on the graph.[4] In other words:
 • Find f'(x), the slope of the tangent line.
 • If the point is at x = a, find f'(a) to find the slope of the tangent at that point.
 • Calculate ${\displaystyle {\frac {-1}{f'(a)}}}$ to find the slope of the normal.
 • Write the normal equation in slope-point form.

## Community Q&A

• How do I find the equation of the line that is tangent to the graph of f(x) and parallel to the line y = 2x + 3? wikiHow Contributor Parallel lines always have the same slope, so since y = 2x + 3 has a slope of 2 (since it's in slope-intercept form), the tangent also has a slope of 2. Now you also know that f'(x) will equal 2 at the point the tangent line passes through. Differentiate to get the equation for f'(x), then set it equal to 2. Now you can solve for x to find your x-coordinate, plug that into f(x) to find the y-coordinate, and use all the information you've found to write the tangent line equation in point-slope form.
• How do I find the equations of 2 lines that are tangent to a graph given the slope? wikiHow Contributor The equation for a line is, in general, y=mx+c. To find the equations for lines, you need to find m and c. m is the slope.
For example, if your line goes up two units in the y direction, for every three units across in the x direction, then m=2/3. If you have the slope, m, then all you need now is c. To find c in any line, you can use any (x,y) points you know. In the case of a line that is tangent to a graph, you can use the point (x,y) where the line touches the graph. If you use that x and that y and the slope m, you can use algebra to find c. y=mx+c, so, c=y-mx. Once you have c, you have the equation of the line! Done.
• How do I calculate a linear equation that's perpendicular to the tangent line? Assuming that the perpendicular line is 90º to the x-axis, since the y-axis has infinite slope, y/x = y/0 (or the limit as x --> 0 very closely), then we know the slope (y2 - y1)/(x2 - x1) = 0 in the denominator. If you are given {x1, y1} as a point on the tangent line, then x2 will be easy to answer.
• My original equation f(x) contains a sine function. How do I find the tangent line? wikiHow Contributor Unless you are given the slope of the tangent line, you'll need to find it the same way you would for any other problem: finding the derivative f'(x). Trigonometric functions have their own rules for differentiation, which you can look up in your textbook or online. To get you started, the derivative of sin(x) is cos(x).

## Tips

• If necessary, start by rewriting the initial equation in standard form: f(x) = ... or y = ...
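Method 1 above is mechanical enough to automate. Here is a short SymPy sketch reproducing Example 1 from the article (function f(x) = 0.5x² + 3x - 1, point (-6, -1)):

```python
# Reproduce Example 1: tangent to f(x) = 0.5x^2 + 3x - 1 at (-6, -1)
import sympy as sp

x = sp.symbols('x')
f = sp.Rational(1, 2) * x**2 + 3 * x - 1

a = -6                            # x-coordinate of the point of tangency
m = sp.diff(f, x).subs(x, a)      # slope of the tangent: f'(-6) = -3
y0 = f.subs(x, a)                 # y-coordinate: f(-6) = -1

tangent = sp.expand(m * (x - a) + y0)   # point-slope form, expanded
print(m, y0, tangent)                   # -3 -1 -3*x - 19
```

The output matches the article's answer y = -3x - 19.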
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 17, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.83363276720047, "perplexity": 348.80023411991243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689775.73/warc/CC-MAIN-20170923194310-20170923214310-00713.warc.gz"}
http://www.physicsforums.com/printthread.php?t=593298
1. The problem statement, all variables and given/known data Find a function f(x,y,z) such that F = (gradient of F). 3. The attempt at a solution I don't know :( I'm so confused

tiny-tim Apr4-12 05:46 AM

Quote: Quote by calculusisrad (Post 3849202) Find a function f(x,y,z) such that F = (gradient of F). do you mean "Find a function f(x,y,z) such that F = (gradient of f)" ? i don't understand either :confused: is either f or F given in the question?

Sorry, yes you're right. The gradient of f should not be bolded.

1MileCrash Apr4-12 02:15 PM

Think about what a gradient is. If I told you to find the gradient of a function, what would you do? You would differentiate the function wrt x, and that is the i component of the gradient, you would differentiate the function wrt y, and that is the j component, and then you would differentiate the function wrt z, and that is the k component. Now, we are going in reverse. What is the reverse of differentiation?

I completely forgot the biggest part of the problem. WOW. Sorry about that!!! Let F = (2xye^z)i + ((e^z)(x^2))j + ((x^2)y(e^z)+(z^2))k NOW find a function f(x,y,z) such that F = Gradient of f.

DivisionByZro Apr4-12 10:53 PM

This was due last Thursday, I'm horribly behind on homework, I'm desperate here.

Dick Apr4-12 11:16 PM

Quote: Quote by calculusisrad (Post 3850566) This was due last Thursday, I'm horribly behind on homework, I'm desperate here. It's pretty easy to guess a form for f that works. Start guessing. That's often the easiest way to solve problems like this. What's a likely form for f given the first component of F?

HallsofIvy Apr5-12 08:09 AM

$\frac{\partial f}{\partial x}=$ what? $\frac{\partial f}{\partial y}=$ what? $\frac{\partial f}{\partial z}=$ what?
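Following the hints in the thread, one candidate obtained by guessing from the first component (as Dick suggests) is f(x,y,z) = x²y·e^z + z³/3; whether it works can be checked mechanically with SymPy:

```python
# Check that f = x**2*y*exp(z) + z**3/3 is a potential for
# F = (2xy e^z, x^2 e^z, x^2 y e^z + z^2) from the thread.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 * y * sp.exp(z) + z**3 / 3

F = (2*x*y*sp.exp(z), x**2*sp.exp(z), x**2*y*sp.exp(z) + z**2)
grad_f = (sp.diff(f, x), sp.diff(f, y), sp.diff(f, z))

# True iff every component of grad(f) matches the corresponding component of F
print(all(sp.simplify(g - h) == 0 for g, h in zip(grad_f, F)))  # True
```

Any constant may of course be added to f.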
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9237964153289795, "perplexity": 2185.1313056295667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163037568/warc/CC-MAIN-20131204131717-00000-ip-10-33-133-15.ec2.internal.warc.gz"}
https://proxies-free.com/tag/precalculus/
## Algebra precalculus – solving two equations in three variables

## Algebra Pre-calculus – The attempt to solve this inequality has not progressed at all

The problem is that $x, y, z$ are proper fractions, and each of them is greater than zero. Given: $x + y + z = 2$. Must prove: $$\frac{xyz}{(1-x)(1-y)(1-z)} \geq 8$$ I tried to solve this with the AM $\geq$ GM inequality. Attempt: $$\frac{\frac{1-x}{x} + \frac{1-y}{y} + \frac{1-z}{z}}{3} \geq \left(\frac{(1-x)(1-y)(1-z)}{xyz}\right)^{1/3}$$ What should I do to calculate the value of $\frac{1}{x} + \frac{1}{y} + \frac{1}{z}$?

## Algebra precalculus – $x_1 x_2 x_3 x_4 + x_2 x_3 x_4 x_5 + \ldots + x_n x_1 x_2 x_3 = 0$. What is $n$?

It's really hard for me to understand this problem. What I understood is that I have to find a natural number $n$ for which the equation holds, whatever the values of $x_i$. That's impossible. Because of this, it seems some conditions on the $x_i$s are necessary.

## Algebra Pre-calculus – Problems in interpreting information to develop mathematical functions

Context: a university assignment. So far I've developed two models based on this information: For the surface zone I have $$T(l) = \frac{-11}{45} l + 24$$ and for the deep zone I have $$T(l) = 2$$ where $T$ is the summer temperature in degrees Celsius and $l$ is the latitude in degrees (where $0$ is the equator and $90$ are the poles). I have trouble interpreting this information to develop a linear model for the summer temperature of seawater ($T$) in the thermocline zone. I know that this model will be different from the last two, because it depends on depth (not latitude), but the lack of data really makes me nervous. Any guidance would be greatly appreciated.

## Algebra Pre-calculus – How to multiply both sides of $\frac{5}{X_1-X_2} > 10$ by $X_1-X_2$ if the $X_i$ are independent random variables?

Suppose we have random variables $X_1$ and $X_2$ that are independent and identically distributed. Suppose I am interested in the inequality $\frac{5}{X_1-X_2} > 10$. How can I multiply both sides of this inequality by $X_1-X_2$? Especially since $X_1$ and $X_2$ are random variables, I do not know if $X_1 - X_2$ is positive or negative, so I do not know if I have to reverse the sign of the inequality or not. Furthermore, would the statement "$\frac{5}{X_1-X_2} > 10$ IF AND ONLY IF $0.5 > X_1 - X_2$" be right? Here is an attempt to prove it. Suppose that $0.5 > X_1 - X_2$. Then it has to be that $X_1 > X_2$, and we can rearrange to get $\frac{5}{X_1-X_2} > 10$ by $X_1 - X_2$. (So the IF direction applies.) And for the ONLY IF direction: again, if $\frac{5}{X_1-X_2} > 10$, then $X_1 - X_2$ must be positive, since a negative number cannot be greater than $10$, and we get the result by rearranging.

## Algebra Precalculus – Number of possible integer values of x for which a given expression is an integer

How many integers $x$ make the following an integer? $$\frac{x^3 + 2x^2 + 9}{x^2 + 4x + 5}$$ I did: $$\frac{x^3 + 2x^2 + 9}{x^2 + 4x + 5} = x - 2 + \frac{3x + 19}{x^2 + 4x + 5}$$ but I cannot continue.
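For the last question above, a brute-force check is a reasonable next step: since $x^2+4x+5$ grows quadratically while the remainder's numerator $3x+19$ grows only linearly, the fraction can be an integer only for small $|x|$, so a modest search window (an assumption, but a safe one) settles the count:

```python
# Brute force: count integers x for which (x^3 + 2x^2 + 9)/(x^2 + 4x + 5)
# is an integer. Note x**2 + 4*x + 5 = (x+2)**2 + 1 >= 1, so no zero division,
# and for large |x| the divisor exceeds |3x + 19|, so a window suffices.
hits = [x for x in range(-100, 101)
        if (x**3 + 2*x**2 + 9) % (x**2 + 4*x + 5) == 0]
print(hits, len(hits))   # [-3, -2, -1] 3
```

So three integer values work, each coming from $x^2+4x+5$ dividing $3x+19$.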
## Algebra precalculus – Complex number: $\frac{(3+i)^2}{(1+2i)^2}$ – cannot reach the textbook solution

I have a complex quotient $$\frac{(3+i)^2}{(1+2i)^2}$$ The solution in my textbook is $-2i$. I came to different solutions and would like to know where I went wrong. So far I've been working with the complex number i in my textbook chapter ($\sqrt{-1}$). I understand that one cannot leave a complex number in the denominator, so I have to multiply both the numerator and the denominator by the complex conjugate. However, I am confused in this exercise because my expression is nested in parentheses and is squared. For example, if my denominator were just $1+2i$, I know that the complex conjugate would be $1-2i$. So I'm confused about what to do because the whole denominator is in parentheses and squared. Using only what I know, I tried to resolve the square in the numerator and the denominator: $(3+i)^2 = 3^2 + i^2 = 9-1 = 8$. For the denominator: $(1+2i)^2 = 1^2 + 2^2 i^2 = 1 + 4 \cdot (-1) = 1-4 = -3$. Then I would arrive at $\frac{8}{-3}$. That's not the solution. How do I arrive at $-2i$?

## Algebra precalculus – Find the extremes of $\cos\left(\frac{\pi}{2}\cos x\right) + \cos\left(\frac{\pi}{2}\sin x\right)$ without differentiation

The question is: find the minimum and maximum of $f(x)$: $$f(x) = \cos\left(\frac{\pi}{2}\cos x\right) + \cos\left(\frac{\pi}{2}\sin x\right)$$ without differentiation. This problem should only be solved with precalculus knowledge, but I have no idea how to do it. $f(x)$ decreases monotonically from $\frac{n\pi}{2}$ to $\frac{n\pi}{2}+\frac{\pi}{4}$, and rises monotonically from $\frac{n\pi}{2}+\frac{\pi}{4}$ to $\frac{(n+1)\pi}{2}$, but how can one prove the monotonicity without calculus? I also tried to transform the expression into
\begin{align} f(x) = & 2\cos\left(\frac{\pi}{4}(\cos x + \sin x)\right)\cos\left(\frac{\pi}{4}(\cos x - \sin x)\right) \\ = & 2\cos\left(\frac{\sqrt{2}\pi}{4}\sin\left(x+\frac{\pi}{4}\right)\right)\cos\left(\frac{\sqrt{2}\pi}{4}\sin\left(-x+\frac{\pi}{4}\right)\right) \end{align}
but found it has the same problem of proving the monotonicity.

## Algebra precalculus – What expansion do I use for ln(x) if I do not know what x is greater or less than?

I am trying to do expansions for ln and log in honors Algebra 2, but I cannot figure this out. The equation I'm working on is $\ln 15x$. I got to $\ln 15 + \ln x$, but I do not know how to simplify it further. Note: This is probably a very simple concept, but I've been sick for a few months, so I have no idea what I'm doing, and the internet is not helping.

## Algebra precalculus – Solve the equation $13x + 2(3x+2)\sqrt{x+3} + 42 = 0$

Solve the equation $13x + 2(3x+2)\sqrt{x+3} + 42 = 0$. Let $y = \sqrt{x+3} \implies 3 = y^2 - x$.
\begin{align} & 13x + 2(3x+2)\sqrt{x+3} + 42 \\ = & 14(x+3) + (6x+4)y - x \\ = & 14y^2 + [6(x+3) - 14]y - x \\ = & 14y(y-1) - (y^2 - x - 9)y^3 - x \\ = & 14y(y-1) + x(y^3 - y) - y^3(y^2 - 1) + 8y^3 \\ = & 14y(y-1) + (xy^2 + xy + x)(y-1) - (y^4 + y^3)(y-1) + 8y^3 \\ = & (-y^4 + y^3 + xy^2 + xy + x + 14y)(y-1) + 8y^3 \end{align}
And I'm stuck here.
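For the complex-number question, the slip is expanding $(a+b)^2$ as $a^2+b^2$; with the cross terms kept, $(3+i)^2 = 8+6i$ and $(1+2i)^2 = -3+4i$, and the textbook answer follows. A one-line numerical confirmation:

```python
# Check the textbook answer for (3+i)^2 / (1+2i)^2.
# Note (3+i)^2 = 9 + 6i + i^2 = 8 + 6i, not 3^2 + i^2 = 8.
z = (3 + 1j)**2 / (1 + 2j)**2
print(z)   # (-0-2j), i.e. -2i, matching the textbook
```

Multiplying $(8+6i)/(-3+4i)$ by the conjugate $-3-4i$ gives $-50i/25 = -2i$ by hand as well.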
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 65, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8380208015441895, "perplexity": 361.340099091833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000367.74/warc/CC-MAIN-20190626154459-20190626180459-00006.warc.gz"}
https://tex.stackexchange.com/questions/401127/can-tex-luatex-render-documents-with-a-20-foot-page-size
# Can TeX/LuaTeX render documents with a 20 foot page size?

In the 2D computer game WordLand--which is just an idea--characters walk around on top of a very large document, something like 20'x20'. I don't intend to print this on a poster. How can I get tex/luatex/latex etc. to use a 20 foot page size?

• The TeXbook says \danger \TeX\ will not deal with dimensions whose absolute value is $\rm2^{30}\,sp$ or more. In other words, the ^{maximum legal dimension} is slightly less than $16384\pt$. This is a distance of about 18.892 feet (5.7583 meters), so it won't cramp your style. So 20 foot is a fraction too much, but just typeset to a 20cm page and scale the resulting pdf. – David Carlisle Nov 13 '17 at 17:43
• Surely you don't mean a 20 foot margin, but a 20 foot page size? The margin is the space around the text. – Alan Munn Nov 13 '17 at 18:11
• Yes that is what I meant. – selden Nov 13 '17 at 20:21

To quote TeX (which you can reproduce by typing \hsize=666in into a document and running tex on it):

! Dimension too large. l.1 \hsize=666in ? H I can't work with sizes bigger than about 19 feet. Continue and I'll use the largest value I can.

Or, if you look at the TeXbook, on page 58, you have

TeX will not deal with dimensions whose absolute value is $2^{30}$ sp or more. In other words, the maximum legal dimension is slightly less than 16384 pt. This is a distance of about 18.892 feet (5.7583 meters), so it won't cramp your style.

But in our case, it indeed cramps your style.

• It would be helpful if you provided a more precise reference, ideally with page number(s), from the TeXbook. – Mico Nov 13 '17 at 18:04
• To be fair, I'm quoting the TeX program itself, but i guess I should include a code snippet – A Gold Man Nov 13 '17 at 18:05
• @AGoldMan The point of providing a reference along is so that others can verify it… I've edited your answer to make it clear to the reader how they can obtain the same result from the TeX program (and also included the full quote). (Though feel free to revert.) – ShreevatsaR Nov 13 '17 at 18:49
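The quoted figures are easy to reproduce from TeX's units (65536 sp = 1 pt, 72.27 pt = 1 in):

```python
# Reproduce the TeXbook's "slightly less than 16384 pt" / "about 18.892 feet".
# TeX units: 65536 sp = 1 pt, 72.27 pt = 1 in, 12 in = 1 ft.
max_sp = 2**30 - 1            # largest representable dimension, in sp
pt = max_sp / 65536           # ~16383.99998 pt
ft = pt / 72.27 / 12          # ~18.892 ft
print(round(pt, 2), round(ft, 3))   # 16384.0 18.892
```

So a 20 ft page really does overshoot the limit by about 13 inches, which is why scaling a smaller page in the PDF is the practical workaround.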
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8524497747421265, "perplexity": 1346.4940141225138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986718918.77/warc/CC-MAIN-20191020183709-20191020211209-00385.warc.gz"}
http://quant.stackexchange.com/users/2261/eric?tab=activity&sort=comments
Eric

Apr11 comment A few questions about signs of the Greek letters I think what you meant is that under the risk-neutral probability, $F_T=E(S_T)=S_0{e}^{rT}$ is increasing in $r$. There's no dispute about that. But notice the ${e}^{-rT}$ term in $C={e}^{-rT}E[max(S_T-K,0)]$. So even if you can conclude that $E[max(S_T-K,0)]$ is increasing in $r$, can you really conclude ${e}^{-rT}E[max(S_T-K,0)]$ is increasing in $r$ from that?

Apr7 comment A few questions about signs of the Greek letters @Owe: I was not claiming that $C$ can go up or down (it can only go up in the BS model) following an increase in $r_f$. I meant that when $r_f$ increases, there are two contradicting forces on the movement of $C$. But in the BS model the latter force always prevails. I want to know why, and I want to know whether it holds true for any distribution of prices of the underlying.

Apr6 comment A few questions about signs of the Greek letters I understand your point. But as I emphasized in the question, this is not the whole story. As $r$ increases, the present value of the future payoff also gets discounted more. Why does the gain from the bond always outweigh the loss from the discounted payoff?

Apr6 comment A few questions about signs of the Greek letters By put-call parity, $C=S-K/{e}^{rT}+P$. But the value of the call is always equal to that of the leverage on the RHS. Why is buying the call more attractive? And what do you mean by "not paying for the underlying until a later date"?
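Eric's puzzle - why the forward-price effect always beats the extra discounting - can at least be seen numerically in the Black-Scholes model, where the call's rho is $K T e^{-rT} N(d_2) > 0$. A minimal sketch with illustrative parameters:

```python
# Black-Scholes call price as a function of r: the forward-price effect
# always beats the extra discounting, so the call price rises with r.
# The parameter values below are illustrative assumptions.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf   # standard normal CDF

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

S, K, T, sigma = 100.0, 100.0, 1.0, 0.2
for r in (0.00, 0.02, 0.05, 0.10):
    print(r, round(bs_call(S, K, T, r, sigma), 4))  # monotonically increasing
```

This only demonstrates the fact in the lognormal model; whether it holds for arbitrary distributions of the underlying is exactly the open part of Eric's question.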
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8940683007240295, "perplexity": 353.26883644068084}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246655962.81/warc/CC-MAIN-20150417045735-00289-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.varsitytutors.com/gre_math-help/coordinate-geometry/geometry/lines
# GRE Math : Lines

## Example Questions

### Example Question #1 : Parallel Lines

What is one possible equation for a line parallel to the one passing through the points (4,2) and (15,-4)?

Possible answers: y = -11/6x + 8.32; y = 15x + 12; y = 11/6x + 88; y = -6/11x + 57.4; y = 6/11x - 33

Correct answer: y = -6/11x + 57.4

Explanation: (4,2) and (15,-4). All that we really need to ascertain is the slope of our line. So long as a given answer has this slope, it will not matter what its y-intercept is (given the openness of our question). To find the slope, use the formula m = rise / run = (y1 - y2) / (x1 - x2): (2 - (-4)) / (4 - 15) = (2 + 4) / -11 = -6/11. Given this slope, our answer is: y = -6/11x + 57.4

### Example Question #1 : Coordinate Geometry

Lines m and n are parallel. What is the value of angle ?

Possible answers: 130; 115; 125; 180; 145

Correct answer: 145

Explanation: By using the complementary and supplementary rules of geometry (due to lines m and n being parallel), as well as the fact that the sum of all angles within a triangle is 180, we can carry through the operations through stepwise subtraction from 180. x = 125 → the angle directly below also = 125. Since a line is 180 degrees, 180 – 125 = 55. Since it is a right triangle, 90 + 55 = 145, so the rightmost angle of the triangle is 180 – 145 = 35, which is equal to the reflected angle. Use the supplementary rule again for 180 – 35 = 145 = y. One can also recognize that both a straight line and a triangle must sum to 180 degrees to skip the last step.

### Example Question #3 : Coordinate Geometry

What is the equation for the line running through  and parallel to ?

Explanation: To begin, solve the given equation for . This will give you the slope-intercept form of the line. Divide everything by : Therefore, the slope of the line is . Now, for a point , the point-slope form of a line is: , where  is the slope. For our point, this is: This is the same as: Distribute and solve for :

### Example Question #1 : Coordinate Geometry

What is the equation for the line running through  and parallel to ?

Explanation: To begin, solve the given equation for . This will give you the slope-intercept form of the line. Divide everything by : Therefore, the slope of the line is . Now, for a point , the point-slope form of a line is: , where  is the slope. For our point, this is: Distribute and solve for :

### Example Question #5 : Coordinate Geometry

Which of the following is parallel to the line running through the points  and ?

Explanation: To begin, it is necessary to find the slope of the line running through the two points. (A parallel line will have the same slope.) Recall that the slope is: Or, for two points  and : For our points this is: Now, to solve this problem, the easiest way is to solve each equation for the form . When you do this, the slope () will be very easy to calculate. The only option that reduces to the correct slope is  Notice what happens when you solve for : This shows that the slope of this line is .

### Example Question #6 : Coordinate Geometry

There is a line defined by the equation below:

3x + 4y = 12

There is a second line that passes through the point (1, 2) and is parallel to the line given above. What is the equation of this second line?

Explanation: Parallel lines have the same slope. Solve for the slope in the first line by converting the equation to slope-intercept form. 3x + 4y = 12 → 4y = -3x + 12 → y = -(3/4)x + 3, so the slope = -3/4. We know that the second line will also have a slope of -3/4, and we are given the point (1,2).
We can set up an equation in slope-intercept form and use these values to solve for the y-intercept. y = mx + b 2 = 3/4(1) + b 2 = 3/4 + b b = 2 + 3/4 = 2.75 Plug the y-intercept back into the equation to get our final answer. y = (3/4)x + 2.75 ### Example Question #1 : Parallel Lines What is the equation of a line that is parallel to  and passes through ? Explanation: To solve, we will need to find the slope of the line. We know that it is parallel to the line given by the equation, meaning that the two lines will have equal slopes. Find the slope of the given line by converting the equation to slope-intercept form. The slope of the line will be . In slope intercept-form, we know that the line will be . Now we can use the given point to find the y-intercept. The final equation for the line will be . ### Example Question #2 : How To Find The Equation Of A Parallel Line What line is parallel to   and passes through the point ? Explanation: Start by converting the original equation to slop-intercept form. The slope of this line is . A parallel line will have the same slope. Now that we know the slope of our new line, we can use slope-intercept form and the given point to solve for the y-intercept. Plug the y-intercept into the slope-intercept equation to get the final answer. ### Example Question #21 : Parallel Lines What is the equation of a line that is parallel to the line and includes the point ? Explanation: The line parallel to must have a slope of , giving us the equation . To solve for b, we can substitute the values for y and x. Therefore, the equation of the line is . ### Example Question #1 : Coordinate Geometry What line is parallel to , and passes through the point ?
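Every explanation above follows the same two-step recipe; written out once in general form (our LaTeX summary, with generic slope $m$, point $(x_0, y_0)$, and intercept $b$ as placeholder names):

$$m = \frac{y_1 - y_2}{x_1 - x_2}, \qquad y - y_0 = m(x - x_0) \;\Longrightarrow\; y = mx + \underbrace{(y_0 - mx_0)}_{b}.$$

For Example Question #6 above, $m = -\tfrac{3}{4}$ and $(x_0, y_0) = (1, 2)$ give $b = 2 + \tfrac{3}{4} = \tfrac{11}{4}$, matching $y = -\tfrac{3}{4}x + 2.75$.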
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8375054597854614, "perplexity": 520.0296328640633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250628549.43/warc/CC-MAIN-20200125011232-20200125040232-00172.warc.gz"}
http://cerco.cs.unibo.it/changeset/3350
# Changeset 3350 Ignore: Timestamp: Jun 14, 2013, 11:46:10 AM (5 years ago) Message: ... File: 1 edited ### Legend: Unmodified r3349 associated to $L_1$ is the number of cycles required to execute the block $I_3$ and \verb+COND l_2+, while the cost $k_2$ associated to $L_2$ counts the cycles required by the block $I_4$ and \verb+GOTO l_1+. The compiler also guarantees cycles required by the block $I_4$, \verb+GOTO l_1+ and \verb+COND l_2+. The compiler also guarantees that every executed instruction is in the scope of some code emission label, that each scope does not contain loops (to associate a finite cost), and that runs are weakly similar to the source code runs. The notion of weak bisimulation for structured traces is a global property The notion of weak simulation for structured traces is a global property which is hard to prove formally and much more demanding than the simple forward simulation required for proofs of preservation of functional properties. Let's consider a generic unstructured language already equipped with a small step structured operational semantics (SOS). We introduce a deterministic labelled transition system~\cite{LTS} $(S,s_{\mathrm{init}},\Lambda,\to)$ deterministic labelled transition system~\cite{LTS} $(S,\Lambda,\to)$ that refines the SOS by observing function calls and the beginning of basic blocks. $S$ is the set of states of the program, $s_\mathrm{init}$ the initial state and $S$ is the set of states of the program and $\Lambda = \{ \tau, RET \} \cup \Labels \cup \Functions$ where $\Functions$ is the set of names of functions that can occur in the denotes the image of this function. The transition function is defined as $s_1 \to[o] s_2$ if $s_1$ moves to $s_2$ according to the SOS; moreover $o = f \in \Functions$ if $s_1$ moves to $s_2$ according to the SOS and $o = f \in \Functions$ if the function $f$ is called, $o = RET$ if a \verb+RETURN+ is executed, $o = L \in \Labels$ if an \verb+EMIT $L$+ is executed to signal the Because we assume the language to be deterministic, the label emitted can actually be computed simply observing $s_1$. Finally, $S$ is also endowed with a relation $s\ar s'$ ($s'$ \emph{follows} $s$) when the instruction to be executed in $s'$ is just after the one in $s$. a relation $s\ar s'$ ($s'$ \emph{follows} $s$) that holds when the instruction to be executed in $s'$ follows syntactically the one in $s$ in the source program. In the rest of the paper we write $s_0 \to^{*} s_n$ for the finite execution fragment $T = s_0 \to[o_0] s_1 \to[o_1] \ldots \to[o_{n-1}] s_n$ and, we call \emph{weak trace} of $T$ (denoted as $|T|$) the subsequence $o_{i_0} \ldots o_{i_m}$ of $o_0 \ldots o_{n-1}$ obtained dropping every internal action $\tau$. %Let $k$ be a cost model for observables actions that maps elements of %$\Lambda \setminus \{\tau\}$ to any commutative cost monoid %(e.g. natural numbers). We extend the domain of $k$ to executable fragments %by posing $k(T) = \Sigma_{o \in |T|} k(o)$. \paragraph{Structured execution fragments} Among all possible finite execution fragments we want to identify the ones that satisfy the requirements we sketched in the previous section. We say that an execution fragment $s_0 \to[o_0] s_1 \to[o_1] \ldots \to[o_n] s_n$ is \emph{structured} (marking it as $s_0 \To s_n$) iff the following conditions $s_0 \to^{*} s_n$ is \emph{structured} (and we denote it as $s_0 \To s_n$) iff the following conditions are met. \begin{enumerate} $s_i \ar s_{k+1}$. 
In other words, $s_{i+1}$ must start execution with \verb+EMIT $\ell(f)$+ --- so that no instruction falls outside the scope of every label --- and then continue with a structured fragment returning control to the instruction immediately following the call. This captures the requirements that the body of function calls always start with a label emission statement, and every function call must converge yielding back control just after it. \item For every $i$ and $f$, if $s_{i+1}\to[\ell(f)]s_{i+2}$ then $s_i\to[f]s_{i+1}$. This is a technical condition needed to ensure that labels associated with functions always follow a call. The condition also enforces convergence of every function call, which is necessary to bound the cost of the fragment. Note that non convergent programs may still have structured execution fragments that are worth measuring. For example, we can measure the reaction time of a server implemented as an unbounded loop whose body waits for an input, processes it and performs an output before looping: the processing steps form a structured execution fragment. \item The number of $RET$'s in the fragment is equal to the number of calls, i.e.\ the number of observables in $\Functions$. This, together with the above condition, captures the well-bracketing of the fragment with respect to function calls. calls performed. In combination with the previous condition, this ensures well-bracketing of function calls. \item \label{req3} For every $i$ and $f$, if $s_{i+1}\to[\ell(f)]s_{i+2}$ then $s_i\to[f]s_{i+1}$. This is a technical condition needed to ensure that a label associated with a function is only used at the beginning of its body. Its use will become clear in~\autoref{simulation}. \item For every $i$, if the instruction to be executed in $s_i$ is a conditional branch, then there is an $L$ such that $s_{i+1} \to[L] s_{i+2}$ or, equivalently, that $s_{i+1}$ must start execution with an \verb+EMIT $L$+. This captures the requirement that every branch which is live code must start with a label emission. live code must start with a label emission. Otherwise, it would be possible to have conditional instructions whose branches are assigned different costs, making it impossible to assign a single cost to the label whose scope contains the jump. \end{enumerate} One might wonder why $f$ and $\ell(f)$, that always appear in this order, are not collapsed into a single observable. This would indeed simplify some aspects of the formalisation, but has the problem of misassigning the cost of calls, which would fall under the associated label. As different call instructions with different costs are possible, this is not acceptable. Let $T = s_0 \to[o_0] s_1 \ldots \to[o_n] s_{n+1}$ be an execution fragment. The \emph{weak trace} $|T|$ associated to $T$ is the subsequence $o_{i_0} \ldots o_{i_m}$ of $o_0 \ldots o_n$ obtained dropping every internal action $\tau$. Let $k$ be a cost model that maps observable actions to any commutative cost monoid (e.g. natural numbers). We extend the domain of $k$ to fragments by posing $k(T) = \Sigma_{o \in |T|} k(o)$. The labelling approach is based on the idea that the execution cost of collapsed into a single observable. This would simplify some aspects of the formalisation at the price of others. For example, we should add special cases when the fragment starts at the beginning of a function body (e.g. the one of \texttt{main}) because in that case nobody would have emitted the observable $\ell(f)$.
\paragraph{Measurable execution fragments and their cost prediction.} The first main theorem of CerCo deals with programs written in object code. It states that the execution cost of certain execution fragments, that we call \emph{measurable fragments}, can be computed from their weak trace by choosing a $k$ that assigns to any label the cost of the instructions in its scope. A structured fragment $T = s_0 \To s_n$ is measurable if it does not start or end in the middle of a basic block. Ending in the middle of a block would mean having pre-paid more instructions than the ones executed, and starting in the middle would mean not paying any instruction up to the first label emission. Formally we require $o_0 \in \Labels$ (or equivalently computed from their weak trace by choosing the cost model $k$ that assigns to any label the cost (in clock cycles) of the instructions in its scope, and $0$ to function calls and $RET$ observables. \begin{theorem} \label{static} for all measurable fragment $T = s_0 \to^{*} s_n$,\\ $$\Delta_t := \verb+clock+_{s_n} - \verb+clock+_{s_0} = \Sigma_{o \in |T|} k(o)$$ \end{theorem} An execution fragment $s_0 \to^{*} s_n$ is measurable if it is structured (up to a possible final \texttt{RETURN}) and if it does not start or end in the middle of a basic block. Ending in the middle of a block would mean that the last label encountered would have pre-paid more instructions than the ones executed; starting in the middle would mean not paying any instruction up to the first label emission. Formally, $s_0 \to^{*} s_n$ is measurable iff $o_0 \in \Labels$ (or equivalently in $s_0$ the program must emit a label) and either $s_{n-1}\to[RET]s_n$ or $s_n$ must be a label emission statement (i.e.\ $s_n \to[L] s_{n+1}$). $s_0 \To s_{n-1}$ and $s_{n-1}\to[RET]s_n$ or $s_0 \To s_n$ and $s_n$ must be a label emission statement. \textbf{CSC: PROVA----------------------} % The theorem is proved by structural induction over the structured % trace, and is based on the invariant that % iff the function that computes the cost model has analysed the instruction % to be executed at $s_2$ after the one to be executed at $s_1$, and if % the structured trace starts with $s_1$, then eventually it will contain also % $s_2$. When $s_1$ is not a function call, the result holds trivially because % of the $s_1\exec s_2$ condition obtained by inversion on % the trace. The only non % trivial case is the one of function calls: the cost model computation function % does recursion on the first instruction that follows that function call; the % \verb+as_after_return+ condition of the \verb+tal_base_call+ and % \verb+tal_step_call+ grants exactly that the execution will eventually reach % this state. \paragraph{Weak similarity and cost invariance.} Given two deterministic unstructured programming languages with their own operational semantics, we say that a state $s_2$ of the second language (weakly) simulates the state $s_1$ of the first iff the two unique weak traces that originate from them are equal. If $s_1$ also (weakly) simulates $s_2$, then the two states are weakly trace equivalent. or, equivalently because of determinism, that they are weakly bisimilar. operational semantics, we say that two execution fragments are \emph{weakly trace equivalent} if their weak traces are equal. 
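As a concrete illustration (an example of ours, not one from the formal development): if a measurable fragment has weak trace $|T| = L_1\, f\, \ell(f)\, L_2\, RET\, L_1$, then, because $k$ assigns $0$ to calls and returns,
$$k(T) = k(L_1) + k(\ell(f)) + k(L_2) + k(L_1),$$
and \autoref{static} states that this sum equals the clock difference $\verb+clock+_{s_n} - \verb+clock+_{s_0}$ measured on the object code.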
A compiler (pass) that preserves the program semantics also preserves weak traces and propagates measurability iff for every measurable fragment $T_1 = s_1 \to^{*} s_1'$ of the source code, the corresponding execution fragment $T_2 = s_2 \to^{*} s_2'$ of the object code is measurable and $T_1$ and $T_2$ are weakly trace equivalent. The very intuitive notion of corresponding fragment'' is made clear in the forward simulation proof of preservation of the semantics of the program by saying that $s_2$ and $s_1$ are in a certain relation. Clearly the property holds for a compiler if it holds for each compiler pass. Having proved in~\autoref{static} that the statically computed cost model is accurate for the object code, we get as a corollary that it is also accurate for the source code if the compiler preserves weak traces and propagates measurability. Thus it becomes possible to compute cost models on the object code, transfer it to the source code and then reason comfortably on the source code only. \begin{theorem}\label{preservation} Given a compiler that preserves weak traces and propagates measurability, for all measurable execution fragment $T_1 = s_1 \to^{*} s_1'$ of the source code such that $T_2 = s_2 \to^{*} s_2'$ is the corresponding fragment of the object code, $$\Delta_t := \verb+clock+_{s_2'} - \verb+clock+_{s_2} = \Sigma_{o \in |T_2|} k(o) = \Sigma_{o \in |T_1|} k(o)$$ \end{theorem} \section{Forward simulation} \label{simulation} Because of \autoref{preservation}, to certify a compiler for the labelling approach we need to both prove that it respects the functional semantics of the program, and that it preserves weak traces and propagates measurability. The first property is standard and can be proved by means of a forward simulation argument (see for example~\cite{compcert}) that runs like this. First a relation between the corresponding source and target states is established. Then a lemma establishes a local simulation condition: given two states in relation, if the source one performs one step then the target one performs zero or more steps and the two resulting states are synchronized again according to the relation. Finally, the lemma is iterated over the execution trace to establish the final result. In principle, preservation of weak traces could be easily shown with the same argument (and also at the same time). Surprisingly, propagation of measurability cannot. What makes the standard forward simulation proof work is the fact that usually a compiler pass performs some kind of local or global analysis of the code followed by a compositional, order preserving translation of every instruction. In order to produce structured traces, however, code emission cannot be fully compositional any longer. For example, consider~requirement \ref{req3} that asks every function body to start with a label emission statement. Some compiler passes must add preambles to functions, for example to take care of the parameter passing convention. In order to not violate the requirement, the preamble must be inserted after the label emission. In the forward simulation proof, however, function call steps in the source language are simulated by the new function call followed by the execution of the preamble, and only at the end of the preamble the reached states are again in the expected relation. In the meantime, however, the object code has already performed the label emission statement, that still needs to be executed in the source code, breaking forward simulation. 
Another reason why the standard argument breaks is due to the requirement that function calls should yield back control after the calling point. This must be enforced just after \textbf{XXXXXXXXXXXXXXXXXX} A compiler preserves the program semantics by suppressing or introducing $\tau$ actions. Intuitively, it is because To understand why, consider the case of a function call and the pass that fixes the parameter passing conventions. A function call in the source code takes in input an arbitrary number of pseudo-registers (the actual parameters to pass) and returns an arbitrary number of pseudo-registers (where the result is stored). A function call in the target language has no input nor output parameters. The pass must add explicit code before and after the function call to move the pseudo-registers content from/to the hardware registers or the stack in order to implement the parameter passing strategy. Similarly, each function body must be augmented with a preamble and a postamble to complete/initiate the parameter passing strategy for the call/return phase. Therefore what used to be a call followed by the next instruction to execute after the function return, now becomes a sequence of instructions, followed by a call, followed by another sequence. The two states at the beginning of the first sequence and at the end of the second sequence are in relation with the status before/after the call in the source code, like in an usual forward simulation. How can we prove however the additional condition for function calls that asks that when the function returns the instruction immediately after the function call is called? To grant this invariant, there must be another relation between the address of the function call in the source and in the target code. This additional relation is to be used in particular to relate the two stacks. % @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ % % %   trace_label_return $$s_1$$ $$s_2$$ $$\to$$ list (as_cost_label S) % % \end{alltt} % % \paragraph{Cost prediction on structured traces.} % % The first main theorem of CerCo about traces % (theorem \verb+compute_max_trace_label_return_cost_ok_with_trace+) % holds for the % instantiation % of the structured traces to the concrete status of object code programs. % Simplifying a bit, it states that % \label{th1} % \begin{array}{l}\forall s_1,s_2. \forall \tau: \verb+TLR+~s_1~s_2.~ %   \verb+clock+~s_2 = \verb+clock+~s_1 + %   \Sigma_{\alpha \in |\tau|}\;k(\alpha) % \end{array} % % where the cost model $k$ is statically computed from the object code % by associating to each label $\alpha$ the sum of the cost of the instructions % in the basic block that starts at $\alpha$ and ends before the next labelled % instruction. The theorem is proved by structural induction over the structured % trace, and is based on the invariant that % iff the function that computes the cost model has analysed the instruction % to be executed at $s_2$ after the one to be executed at $s_1$, and if % the structured trace starts with $s_1$, then eventually it will contain also % $s_2$. When $s_1$ is not a function call, the result holds trivially because % of the $s_1\exec s_2$ condition obtained by inversion on % the trace. 
The only non % trivial case is the one of function calls: the cost model computation function % does recursion on the first instruction that follows that function call; the % \verb+as_after_return+ condition of the \verb+tal_base_call+ and % \verb+tal_step_call+ grants exactly that the execution will eventually reach % this state. % % \paragraph{Structured traces similarity and cost prediction invariance.} % As should be expected, even though the rules are asymmetric $\approx$ is in fact % an equivalence relation. \section{Forward simulation} \label{simulation} We summarise here the results of the previous sections. Each intermediate
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9029163718223572, "perplexity": 648.1798116440426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887660.30/warc/CC-MAIN-20180118230513-20180119010513-00720.warc.gz"}
http://mathoverflow.net/questions/36967/elliptic-regularity-on-bounded-domains/37001
# Elliptic regularity on bounded domains

I'm concerned with a generic uniformly elliptic operator $L$ on $\mathbb{R}^n$. If $L$ is uniformly elliptic and I am studying the equation $Lu=f$ then the way I can deduce regularity on $\mathbb{R}^n$ is via the Fourier transform: $\hat{Lu} = \hat{f}$ which leads to $P(\xi)\hat{u} = \hat{f}$. From this finally I use the assumption that $P(\xi) \geq c |\xi|^2$ to deduce along with Parseval that $\|u\|_{H^2} \lesssim \|f\|_{L^2} + \|u\|_{L^2}$. My question is, why does this become so complicated on bounded domains?

Question: Why can't we simply write $u = \sum_k \phi_k \hat{u}(k)$ as a Fourier series and deduce from the equation that $\|k\|^2|\hat{u}(k)|^2 = |\hat{f}(k)|^2$ (the Fourier coefficients) and use this to deduce that $$\sum_k (1+|k|^2)^2 |\hat{u}(k)|^2 \lesssim \|u\|_{L^2} + \|f\|_{L^2}$$ where again I've used Parseval's identity. In other words, doesn't everything from the $\mathbb{R}^n$ case just get converted into statements about the Fourier series (as opposed to the transform)? Hope this is clear! Thanks!

- Ok but I don't want to think about this in terms of difference quotients and integration by parts (as is the approach done by Evans). I feel that one should be able to take advantage of the fact that the operators are constant coefficient to describe what's going on in frequency space. What do I mean by Fourier series? I mean expressing $u$ in an orthonormal basis of, say, $-\Delta$ with $0$ boundary conditions. – Dorian Aug 28 '10 at 17:31
- The fact that operators are constant coefficients means that they are translation invariant, and the Fourier transform works well in $\mathbb{R}^n$ because it, too, is translation invariant. It's not very useful on a general domain, especially if the boundary is complicated. – Victor Protsak Aug 28 '10 at 17:43
- Sure but Fourier series makes sense on any nice bounded domain (nice enough so that you can use as your eigenbasis the eigenfunctions of the Laplacian operator with Dirichlet boundary conditions for instance). – Dorian Aug 28 '10 at 18:05
- The Fourier series makes sense, but the Fourier basis functions are no longer eigenfunctions for your operator, because they don't satisfy the boundary conditions. – Nate Eldredge Aug 28 '10 at 19:11
- I'm presuming 0 Dirichlet boundary conditions so boundary conditions are no problem... – Dorian Aug 28 '10 at 19:28
As you suspect, it is also possible to do frequency space analysis much in the same way as on the whole space, but I would not call this easy. There is a beautiful set of notes "Lectures on semiclassical analysis" available on the web, by Evans and Zworsky, see Theorem 3.17 there (they prove interior Schauder estimates using Paley-Littlewood, apparently following a suggestion of H.Smith). I repeat: this is interior regularity, the behaviour at the boundary is substantially more difficult. - That's a wonderful set of notes. Thank you for pointing it out. –  Willie Wong Aug 28 '10 at 23:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9245727062225342, "perplexity": 160.79269489445392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990112.50/warc/CC-MAIN-20150728002310-00084-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/harmonics-of-a-closed-closed-tube.356038/
# Harmonics of a closed-closed tube

1. Nov 18, 2009 ### abbeygeib
I don't understand how to get n... if that doesn't make sense i can explain more... I have the length and velocity... from there i just don't understand what n even is or means...

2. Nov 18, 2009 ### FredGarvin
n is the mode or the multiple of the fundamental frequency. If you want the third harmonic, n=3.

3. Nov 19, 2009 ### sophiecentaur
It should really be referred to as the second overtone for a physical resonator, because the frequencies of overtones may not be exactly harmonically related. Look at the spec of quartz crystals for use in oscillators and you'll see what I mean; it's all to do with 'end effect' and the effective length of the oscillating object, in wavelengths. Having said this, for a closed-closed tube, the end effect will be very small. The fundamental frequency will be the frequency at which there is a half wavelength between the two ends - allowing a node at each end*. The first overtone will be when there is a node in the centre (i.e. at near twice the frequency) and the second will be when there are two nodes, and so on.

* fundamental f = c/2x, where c is the speed of sound in the tube and x is the effective length
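To put numbers on sophiecentaur's footnote formula, here is a small Python sketch (ours, not from the thread; the speed of sound and tube length below are assumed values):

```python
# Resonances of a closed-closed tube (node at both ends): f_n = n*c/(2*x).
c = 343.0    # speed of sound in air, m/s (assumed)
x = 0.5      # effective tube length, m (assumed)
for n in range(1, 5):
    f = n*c/(2*x)    # n = 1: fundamental; n = 2: first overtone; n = 3: second overtone
    print(f"n = {n}: f = {f:.0f} Hz")
```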
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8139171600341797, "perplexity": 1082.176439892374}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824471.6/warc/CC-MAIN-20171020230225-20171021010225-00335.warc.gz"}
http://www.jiskha.com/display.cgi?id=1361576240
# Homework Help: Chem

Posted by Mira on Friday, February 22, 2013 at 6:37pm.

Calculate the mass percent composition of C, H, and O in aspirin. Express your answers using two significant figures. Enter your answers numerically separated by commas.

• Chem - DrBob222, Friday, February 22, 2013 at 9:19pm
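The reply text is empty above, so here is a short Python sketch of the requested calculation (ours, assuming aspirin's formula C9H8O4 and standard atomic masses):

```python
# Mass percent composition of aspirin, C9H8O4.
masses  = {'C': 12.011, 'H': 1.008, 'O': 15.999}     # atomic masses, g/mol
formula = {'C': 9, 'H': 8, 'O': 4}                   # atoms per molecule
M = sum(masses[el]*n for el, n in formula.items())   # molar mass, about 180.16 g/mol
for el, n in formula.items():
    print(f"{el}: {100*masses[el]*n/M:.1f} %")       # roughly 60, 4.5, 36 to two sig figs
```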
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.840627133846283, "perplexity": 3264.7960592983277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775394.157/warc/CC-MAIN-20141217075255-00143-ip-10-231-17-201.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/41654-taylor-expansion-d-d-print.html
# Taylor Expansion on d/d(f(x))[sqrt(x+f(x))]

• June 15th 2008, 08:23 PM
was1984
I need to find the first 2 orders of the taylor series on the expression below. sqrt(Psi_s0 + Phi_t*exp((Psi_s0-2*Phi_F - V_SB)/Phi_t)) where Phi_t*exp((Psi_s0-2*Phi_F - V_SB)/Phi_t) is defined as Xi, and the taylor series is around Xi = 0.

• June 15th 2008, 08:56 PM
Mathstud28
Quote: Originally Posted by was1984 "I need to find the first 2 orders of the taylor series on the expression below. sqrt(Psi_s0 + Phi_t*exp((Psi_s0-2*Phi_F - V_SB)/Phi_t)) where Phi_t*exp((Psi_s0-2*Phi_F - V_SB)/Phi_t) is defined as Xi, and the taylor series is around Xi = 0."
$\sqrt{\psi_{s0}+\phi_{t}e^{\frac{\psi_{s0}-2\cdot\phi_f-V_{SB}}{\phi_t}}}$??

• June 15th 2008, 08:57 PM
was1984
Yes, thank you, that is correct.

• June 15th 2008, 09:00 PM
Mathstud28
What is the variable we are differentiating with respect to? I will assume it is $\psi$. Then let $f(\psi)$ be equal to the above expression; the second order polynomial would be $f(0)+f'(0)x+\frac{f''(0)x^2}{2}$

• June 15th 2008, 09:03 PM
was1984
The variable we are differentiating with respect to is Xi, which is the second term under the square root.

• June 15th 2008, 09:11 PM
was1984
I'll try to elaborate. We are setting $\xi = \phi_{t}e^{\frac{\psi_{s0}-2\cdot\phi_f-V_{SB}}{\phi_t}}$ Then we are solving the series around $\xi = 0$ So I actually want to expand an equation of the form $\sqrt{\psi_{s0}+\xi(\psi_{s0})}$ for $\xi(\psi_{s0})$, and I only need the first two terms, fortunately. :)
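For the record, the two-term result is $\sqrt{\psi_{s0}} + \xi/(2\sqrt{\psi_{s0}})$; a short SymPy sketch (ours, not part of the thread) confirms it:

```python
from sympy import symbols, sqrt, series

psi_s0, xi = symbols('psi_s0 xi', positive=True)
f = sqrt(psi_s0 + xi)          # treat xi as the expansion variable
print(series(f, xi, 0, 2))     # sqrt(psi_s0) + xi/(2*sqrt(psi_s0)) + O(xi**2)
```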
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9660141468048096, "perplexity": 1212.3695537526885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645220976.55/warc/CC-MAIN-20150827031340-00261-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-14-partial-derivatives-review-exercises-page-1024/47
## Calculus 8th Edition

$\dfrac{\sqrt{145}}{2},\lt 4,\dfrac{9}{2} \gt$

Our aim is to determine the maximum rate of change of $f(x,y)$. In order to find this, we use the fact that the maximum of the directional derivative is $D_uf=|\nabla f(x,y)|$.

Given: $f(x,y)=x^2y+\sqrt y$

$\nabla f(x,y)=\lt 2xy, x^2+\dfrac{1}{2\sqrt y} \gt$

From the given data, we evaluate the gradient at the point $(2,1)$:

$\nabla f(2,1)=\lt (2)(2)(1),2^2+\dfrac{1}{2\sqrt 1} \gt=\lt 4,\dfrac{9}{2} \gt$

$|\nabla f(2,1)|=\sqrt{4^2+(\dfrac{9}{2})^2}=\dfrac{\sqrt{145}}{2}$

Therefore, the maximum rate of change of $f(x,y)$ and the direction is: $\dfrac{\sqrt{145}}{2},\lt 4,\dfrac{9}{2} \gt$
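The same arithmetic can be verified with a short SymPy sketch (ours, not part of the textbook solution):

```python
from sympy import symbols, sqrt, Matrix

x, y = symbols('x y', positive=True)
f = x**2*y + sqrt(y)
grad = Matrix([f.diff(x), f.diff(y)])   # <2xy, x^2 + 1/(2*sqrt(y))>
g = grad.subs({x: 2, y: 1})             # <4, 9/2>
print(g.T, g.norm())                    # norm = sqrt(145)/2, the maximum rate of change
```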
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8473421931266785, "perplexity": 96.76909689401937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482284.9/warc/CC-MAIN-20191205213531-20191206001531-00545.warc.gz"}
http://math.stackexchange.com/questions/166612/proving-fracabc1-fracbca1-fraccab1-ge1
# Proving : $\frac{a}{b+c+1}+\frac{b}{c+a+1}+\frac{c}{a+b+1}\ge1$ Let $a,b,c > 0$ be a real numbers ,such that : $abc=1$,how to prove that: $$\frac{a}{b+c+1}+\frac{b}{c+a+1}+\frac{c}{a+b+1}\ge1$$ - should we create a symmetric-inequalities tag? – leonbloy Jul 4 '12 at 17:03 @leonbloy I am no in favor of the idea – Belgi Jul 4 '12 at 18:02 Since the inequality is symmetric in $a,b,c$ without loss of generality we can assume that $a \geq b \geq c$. Then $$b+c+1 \leq a+c+1 \leq a+b+1$$ and $$\frac{a}{b+c+1} \geq \frac{b}{c+a+1} \geq \frac{c}{a+b+1}$$ Now, by Chebyshev's sum inequality you have $$\frac{1}{3} \left[ \frac{a}{b+c+1} \cdot (b+c+1)+\frac{b}{c+a+1}\cdot (a+c+1)+\frac{c}{a+b+1}\cdot (a+b+1) \right]$$ $$\leq \frac{1}{9}\left[ \frac{a}{b+c+1} +\frac{b}{c+a+1}+\frac{c}{a+b+1}\right]\left[ (b+c+1)+ (a+c+1)+ (a+b+1) \right]$$ or equivalently $$3(a+b+c) \leq \left[ \frac{a}{b+c+1} +\frac{b}{c+a+1}+\frac{c}{a+b+1}\right]\left[ 2a+2b+2c+3 \right] \,. \tag{*}$$ Now, by AM-GM you have $$1 \leq \sqrt[3]{abc} \leq \frac{a+b+c}{3}$$ hence $$2a+2b+2c+3 \leq 3(a+b+c) \tag{*\!*}$$ Combining $(*)$ with $(**)$ you get your desired inequality. P.S. I don't know why, probably experience with these, but when I saw it the inequality screamed Chebyshev to me... - Just a note: you can use \tag{something} to name your equations. It places them at the far alligned right, so it doesn't get mixed up as part of the actual equation. Other then that, nice post! – Joe Jul 4 '12 at 16:08 @Frank: A bit late to comment, but the solution isn't quite right. If you have $a \ge k$, it doesn't imply $\dfrac{1}{a} \ge \dfrac{1}{k}$. – Inceptio Jun 20 '13 at 13:48 As A.M.≥G.M. for positive real numbers, $a+b+c ≥ 3(abc)^{1/3}$ = 3 L.H.S. = $\sum\frac{a}{b+c+1}$ = -3+$\sum(\frac{a}{b+c+1}$+1) = -3 + (a+b+c+1)$\sum\frac{1}{b+c+1}$ As A.M.≥H.M. for positive real numbers, clearly $\frac{1}{b+c+1}$>0. So, (1/3)$\sum\frac{1}{b+c+1} ≥ 3/\sum(b+c+1)$ taking A.M. and H.M. of $\frac{1}{b+c+1}$ etc., Or, $\sum(b+c+1) \sum\frac{1}{b+c+1}$ ≥ 9, Or, (2(a+b+c)+3) $\sum\frac{1}{b+c+1}$ ≥ 9, Let a+b+c=3d where d≥1 => 2(a+b+c)+3=6d+3 => $\sum\frac{1}{b+c+1}≥\frac{9}{6d+3}=\frac{3}{2d+1}$ L.H.S. = -3 + (3d+1) $\frac{3}{2d+1}=\frac{9d+3}{2d+1} - 3 =\frac{3d}{2d+1}$ Now, $\frac{3d}{2d+1}$ will be ≥1 if 3d≥2d+1 or if d≥1 which is true. - You have some severe formatting problems, e.g. non-TeX math, unmatched parentheses and undefined symbol Σ (is this a sum or a sigma? this is why I didn't edit the post myself). For math please use $\TeX$, for example $\frac{a}{b+c+1}$ can be typeset as $\frac{a}{b+c+1}$, sums $\sum_{a}^{b}c$ $\sum_{a}^{b}c$, sigma-s $\Sigma$, $\Sigma$, implications $\Rightarrow$ $\Rightarrow$, inequalities $\leq \geq$ $\leq \geq$. – dtldarek Jul 4 '12 at 18:17 @lab bhattacharjee: How does $(2(a+b+c)+3)\sum\frac{1}{b+c+1}\geq 9$ and $2(a+b+c)+3\geq 9$ imply $\sum\frac{1}{b+c+1}\geq 1$ ? You are saying that $xy\geq 9$ and $x\geq 9$ imply $y\geq 1$ which is not true, take $x=18,y=1/2$ – pritam Jul 5 '12 at 14:56 Thanks Pritam for your observation, would you please verify once more? – lab bhattacharjee Jul 5 '12 at 17:47 @labbhattacharjee: Yeah now it looks correct. – pritam Jul 5 '12 at 18:05 As A.M.≥H.M. for positive real numbers, clearly 1b+c+1>0. So, (1/3)∑1b+c+1≥3/∑(b+c+1) taking A.M. and H.M. of 1b+c+1 etc., Or, ∑(b+c+1)∑1b+c+1 ≥ 9, Or, (2(a+b+c)+3) ∑1b+c+1 ≥ 9, Let a+b+c=3d where d≥1 => 2(a+b+c)+3=6d+3 => ∑1b+c+1≥96d+3=32d+1 L.H.S. = -3 + (3d+1) 32d+1=9d+32d+1−3=3d2d+1 Now, 3d2d+1 will be ≥1 if 3d≥2d+1 or if d≥1 which is true.
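Neither answer includes a numerical sanity check, so here is one in Python (ours; random sampling is, of course, evidence rather than proof):

```python
# Spot-check a/(b+c+1) + b/(c+a+1) + c/(a+b+1) >= 1 under the constraint abc = 1.
import random

worst = float('inf')
for _ in range(100_000):
    a = random.uniform(0.01, 100)
    b = random.uniform(0.01, 100)
    c = 1/(a*b)                      # enforce abc = 1
    s = a/(b+c+1) + b/(c+a+1) + c/(a+b+1)
    worst = min(worst, s)
print(worst)                         # stays >= 1; equality is approached at a = b = c = 1
```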
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9304972887039185, "perplexity": 2847.2843901852207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701158481.37/warc/CC-MAIN-20160205193918-00333-ip-10-236-182-209.ec2.internal.warc.gz"}
https://blogs.mathworks.com/loren/2011/10/25/simplifying-symbolic-results/
## Loren on the Art of MATLAB
Turn ideas into MATLAB

Note: Loren on the Art of MATLAB has been retired and will not be updated.

# Simplifying Symbolic Results

I am pleased to introduce guest blogger Kai Gehrs. Kai is a developer for the Symbolic Math Toolbox. His main focus in this post is on special approaches to symbolic simplification and equation solving.

### Have You Noticed Results Not Being Simplified Well?

When using the Symbolic Math Toolbox for symbolic simplification, have you ever been wondering why it does not apply certain classic book rules automatically to return the results you have in mind? Some of these classic book rules are log(a) + log(b) = log(a*b) and log(a^b) = b*log(a). Just using simplify to get the result log(a*b) on the input log(a) + log(b) does not work:

syms a b
simplify(log(a)+log(b))

ans =
log(a) + log(b)

### Using Assumptions on Variables

Of course, we all know that the rule applies only under appropriate mathematical assumptions on a and b. For example, if we assume that a and b are positive, we will get the desired result:

syms a b positive
simplify(log(a)+log(b))

ans =
log(a*b)

To get rid of all previously specified assumptions, use

syms a b clear

### When Things Get More Complicated

Does it mean that setting the right assumptions is a universal solution here? Well, not always! Assume a and b appear as intermediate results in some really huge symbolic computations somewhere in, let's say, line 454 of your MATLAB script. From the context of your application, you know that a and b are positive. Now you want MATLAB to automatically compute a simplified form of log(a) + log(b) in line 455. How will you manage to set the right assumptions? As an example, think of a and b as being something like

syms x y
a = -(x + 1)^(1/2)/((exp(x - y) - sin(x + y)) * ...
    (log((x^2 + 1)/(y^2 + 1))/exp(y) + (x - y)^y + ...
    1/(x - y)^x));
b = (cos(x)*sin(y))/((x - y)^x*(x + 1)^(1/2)) - ...
    (exp(x)*(x - y)^y)/(exp(y)*(x + 1)^(1/2)) + ...
    (cos(y)*sin(x))/((x - y)^x*(x + 1)^(1/2)) - ...
    (exp(x)*log((x^2 + 1)/(y^2 + 1)))/ ...
    (exp(2*y)*(x + 1)^(1/2)) - exp(x)/ ...
    (exp(y)*(x - y)^x*(x + 1)^(1/2)) + ...
    (cos(x)*sin(y)*(x - y)^y)/(x + 1)^(1/2) + ...
    (cos(y)*sin(x)*(x - y)^y)/(x + 1)^(1/2) + ...
    (log((x^2 + 1)/(y^2 + 1))*cos(x)*sin(y))/ ...
    (exp(y)*(x + 1)^(1/2)) + (log((x^2 + 1)/ ...
    (y^2 + 1))*cos(y)*sin(x))/(exp(y)*(x + 1)^(1/2));

Now executing the simplify command does not seem to be helpful:

S = simplify(log(a)+log(b));
pretty(S)

(The pretty-printed output is a lengthy two-dimensional display of logarithms, exponentials and powers of (x - y) and (x + 1), abbreviated with the placeholders #1 = log((x^2 + 1)/(y^2 + 1)) and #2 = exp(y) (x + 1)^(1/2); it is visibly no simpler than the input.)

Assuming a and b to be positive does not significantly improve the result either. The reason is that we need to set such assumptions on x and y that would make the expressions a and b positive. We can try to find the right assumptions for this example, but in general it seems like one has to be a genius to guess what's appropriate.

### Using Option IgnoreAnalyticConstraints for Simplification

A possible solution to the problem is to ignore certain analytic constraints, that is, to use the IgnoreAnalyticConstraints option for simplify. With this option the simplifier internally applies the following rules:

• log(a) + log(b) = log(a*b) for all values of a and b.
In particular (a*b)^c = a^c*b^c for all values of a, b and c.

• log(a^b) = b*log(a) for all values of a and b. In particular (a^b)^c = a^(b*c) for all values of a, b and c.

• If f and g are standard math functions and f(g(x)) = x holds for all small positive numbers, then f(g(x)) = x is assumed to be valid for all x (for example as in case of log(exp(x)) = x).

So how does this work in our example?

simplify(log(a)+log(b),'IgnoreAnalyticConstraints',true)

ans =
0

The result is 0, because b is just the expanded form of 1/a. Hence, under the above assumptions, we get log(a) + log(b) = log(a*b) = log(1) = 0. Of course, it is important to keep in mind that the rules applied by IgnoreAnalyticConstraints are not correct in a strict mathematical sense. Nevertheless, in practice these rules are often very helpful to get simpler results. Another nice side effect is that ignoring some analytic constraints often helps you speed up your computations. This documentation describes more details on IgnoreAnalyticConstraints.

### Using IgnoreAnalyticConstraints for Equation Solving

Not surprisingly the concept of ignoring analytic constraints also makes sense for equation solving. Imagine that you want to solve the equation log(x^n) = 0 for x. Ignoring analytic constraints would certainly mean to write this as n*log(x) = 0. Assuming n to be nonzero we get log(x) = 0 and, finally, x = 1. Without using any restrictions, Symbolic Math Toolbox returns the result:

syms x n
solve(log(x^n),x)

Warning: The solutions are parametrized by the symbols: k = Z_ intersect Dom::Interval([-1/(2*Re(1/n))], 1/(2*Re(1/n)))

ans =
1/exp((pi*k*2*i)/n)

So you get a parameterized solution which strongly depends on the values of n. This is reasonable, because, for example, for n = 4 you get the four solutions

solve(log(x^4),x)

ans =
1
-1
i
-i

whereas for n = 1/2 there is only one solution:

solve(log(x^(1/2)),x)

ans =
1

Applying IgnoreAnalyticConstraints we get

solve(log(x^n),x,'IgnoreAnalyticConstraints',true)

ans =
1

Also for equations involving roots where no additional symbolic parameters are present, it may be useful to apply IgnoreAnalyticConstraints to get simpler results:

solve(x^(5/2) - 8^(sym(10/3)), 'IgnoreAnalyticConstraints', true)

ans =
16

Here the solver ignores branch cuts during internal simplifications and, hence, returns only one solution. See the MATLAB doc page on solve for further details. The IgnoreAnalyticConstraints option can also be used for other Symbolic Math Toolbox functions like the function int for doing symbolic integration. The option is also available for the related MuPAD Notebook Interface functions.

### Have You Tried IgnoreAnalyticConstraints?

Have you tried the IgnoreAnalyticConstraints option to get simpler, shorter, and easier to handle results? Let me know here.

Published with MATLAB® 7.13
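For readers following along in Python: SymPy has a comparable escape hatch. Its logcombine function accepts force=True, which applies the logarithm rules above without checking assumptions, much in the spirit of IgnoreAnalyticConstraints (an analogy of ours, not the same engine):

```python
from sympy import symbols, log, logcombine

a, b = symbols('a b')                 # deliberately no positivity assumptions
expr = log(a) + log(b)
print(logcombine(expr))               # unchanged: log(a) + log(b)
print(logcombine(expr, force=True))   # log(a*b); the analytic constraints are ignored
```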
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8959476947784424, "perplexity": 2160.6946940745256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103360935.27/warc/CC-MAIN-20220628081102-20220628111102-00153.warc.gz"}
https://applying-maths-book.com/chapter-5/chapter-5-answers-45-52.html
# Solutions Q45 - 52

# import all python add-ons etc that will be needed later on
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
init_printing()                           # allows printing of SymPy results in typeset maths format
plt.rcParams.update({'font.size': 14})    # set font size for plots

The potential has the value $$V = 0.25x^2$$. Using equation (41) the energy change to first order of the $$n^\text{th}$$ level is given by

$\displaystyle E_n^{(1)}=\frac{1}{2L}\int\limits_0^L \sin\left(n\pi \frac{x}{L}\right)x^2\sin\left(n\pi \frac{x}{L}\right)dx$

The perturbation 'particle in a box' code used in the text can be simply modified to do the calculation. The energies in cm$$^{-1}$$ are $$E_1^{(1)} = 410.86,\, E_2^{(1)} = 888.65, \, E_1^{(2)} = -9.53, \, E_2^{(2)} = -8.98$$.

The orthogonality of the wavefunctions means that $$\int \psi_n\psi_m dx = \delta_{nm}$$, therefore only terms with $$n = m$$ are non-zero, and this condition produces the normal, unperturbed, energies. The first-order corrections to the energies are given by the equation

$\displaystyle E_n^{(1)} =\int\limits_{-\infty}^\infty \psi(n,x)\,x\,\psi(n,x)dx$

In this case, as the $$\psi$$ are real, the complex conjugate makes no difference. Multiplying by the 'operator' $$x$$ will not change the wavefunction but will change the integral. By symmetry, $$\psi^2$$ has to be symmetric about zero; multiplying by $$x$$ will make the product

$\displaystyle \psi(n,x)\,x\,\psi(n,x)$

an odd function (i.e. one with $$f(-x)= -f(x)$$), and so the integral forming $$E_n^{(1)}$$ should be zero as the integration limits are symmetrical about zero. This means that all the first order corrections will be zero no matter what the quantum number. Looking at the wavefunctions (see fig 15 below) they have an alternate 'even/odd' character which means that

$\displaystyle \int\limits_{-\infty}^\infty \psi(m,x)\,x\,\psi(n,x)dx \ne 0\quad\text{ if }\quad n = m\, \pm 1, \, \pm 3, \cdots$

However, in evaluating these integrals it is found that only the $$n=m \pm 1$$ case is not zero. The others ($$\pm$$ 3, 5 $$\cdots$$) are accidentally zero due to the shape of the wavefunctions. The calculation of $$\int H(n,x)\,x\,H(m,x)dx$$ using SymPy is shown next for a range of $$n$$ and $$m$$. In calculating the second order correction (eqn. 44) a series is summed over index $$k$$, and there are only two values that are not zero, so the sum has terms only where $$k = n \pm 1$$. The calculation below checks values and adds terms.
from sympy.functions.special.polynomials import hermite
n, m, x, alpha, hbar, omega = symbols('n, m, x, alpha, hbar, omega', positive=True)   # using sympy

E = lambda n: hbar*omega*(n + 1/2)
#-------------------------------------
def psi(n):             # wavefunction; factorial is an inbuilt SymPy function
    return 1/sqrt(2**n*factorial(n))*sqrt(sqrt(alpha/pi))*\
           hermite(n,x*sqrt(alpha))*exp(-(alpha*x**2)/2)
#-------------------------------------
print('{:s}'.format('non-zero values; 2nd order correction'))
print('{:s}'.format('n, sum(int(psi(n) * psi(m))**2)/(E(n)-E(m))'))

alist = []              # list to hold results for printing
for n in range(5):
    s = 0
    for m in range(0,8,1):
        if n != m:
            f01 = psi(n)*x*psi(m)
            #f01 = psi(n)*x**3*psi(m)           # use this for Q47
            ans = integrate(f01,(x,-oo,oo),conds='none')   # integrate algebraically
            #print(n,m,ans/(E(n)-E(m)))
            if ans != 0:                        # check if integral is zero
                s = s + ans**2/( E(n) - E(m) )  # add each 2nd order term
    alist.append(simplify(s))                   # store the result for this n
for i in range(5):
    print(i,alist[i])

non-zero values; 2nd order correction
n, sum(int(psi(n) * psi(m))**2)/(E(n)-E(m))
0 -0.5/(alpha*hbar*omega)
1 -0.5/(alpha*hbar*omega)
2 -0.5/(alpha*hbar*omega)
3 -0.5/(alpha*hbar*omega)
4 -0.5/(alpha*hbar*omega)

These results are the sums $\displaystyle\sum_m \left | \int\psi(n)\,x\,\psi(m)dx\right|^2/(E_n-E_m)$ of eqn. 44, without the factor $a^2$ from the perturbation $ax$. The result can now be found because the denominator is always the difference in energy between the two levels, which is $\pm \hbar\omega$. Thus for $n = 0$ the correction is $-a^2/(2\alpha\hbar\omega)$ and substituting for $\displaystyle \alpha =\sqrt{k\mu}/\hbar=\mu\omega/\hbar$ gives

$\displaystyle E_0^{(2)}= -\frac{a^2}{2\mu\omega^2}$

The terms from $n = 1$ go to $n = 0, 2$ and these have values $\displaystyle E_1^{(2)}= +\frac{a^2}{2\mu\omega^2}-\frac{a^2}{\mu\omega^2} =-\frac{a^2}{2\mu\omega^2}$. Similar calculations for other levels produce the same result. The energy is

$\displaystyle E_n= \hbar\omega(n+1/2) -\frac{a^2}{2\mu\omega^2}$

which shows that the potential is lowered by a constant amount, independent of the quantum number. Coincidentally, this perturbation result is the same as an exact calculation. The electric field simply lowers the potential energy. It does not change the spectrum because the shape of the potential, or equivalently the force constant, is unaffected and therefore the quantum number does not enter into the correction term. In SI units the constant $a$ is $Ee/4\pi \epsilon_0$, where $E$ is the electric field intensity, $e$ the electronic charge and $\epsilon_0$ the permittivity of free space.

The second-order correction to the energy is calculated in a similar way to the previous calculation but using $bx^3$ instead of $ax$. The corrections now involve levels $n\pm 1$ and $n\pm 3$, and the corrections are

$\displaystyle E_0^{(2)} = -\frac{11}{8}\frac{b^2}{\alpha^3\hbar\omega}\quad\text{ and }\quad \displaystyle E_1^{(2)} = -\frac{71}{8}\frac{b^2}{\alpha^3\hbar\omega}$

The difference in energy levels becomes

$\displaystyle \Delta E_{01}= \hbar \omega -\frac{60}{8}\frac{b^2}{\alpha^3\hbar\omega},\quad\text{and}\quad\displaystyle \Delta E_{1,2} = \hbar\omega -\frac{120}{8}\frac{b^2}{\alpha^3\hbar \omega }$

so that the energy gaps become smaller as $n$ increases, as is expected for an anharmonic potential. Thus the spectrum is a series of lines approaching a limit, provided that the temperature is high enough to populate several vibrational levels.

Exercise: Repeat the calculation with a quartic potential term, $bx^4$, or both cubic and quartic.
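As a starting point for that exercise, here is a sketch of ours for the first-order quartic correction; it reuses psi, x and the SymPy imports from the cell above, so run that cell first:

```python
# First-order correction <n|x^4|n> for a quartic perturbation b*x^4 (the
# coefficient b is left out, just as a was left out of the linear case above).
for n in range(3):
    f04 = psi(n)*x**4*psi(n)
    print(n, simplify(integrate(f04, (x, -oo, oo))))   # expect 3*(2*n**2 + 2*n + 1)/(4*alpha**2)
```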
(a) Solving the equation by differentiating the wavefunction produces the energy of level $n$ as $\displaystyle E_n=n^2\frac{\hbar^2}{2\mu}$.

(b) Because $n = 0$ and the potential is non-zero only between $-a\pi$ and $+a\pi$, the first-order perturbation is the integral (equation (41)),

$\displaystyle E^{(1)}=\frac{V}{2\pi}\int\limits_{-a\pi}^{a\pi}d\phi =aV$

The second-order correction contains the integral $\langle\psi_k^0 | V |\psi_n^0 \rangle$ and, in the bra, the left-hand wavefunction must be made into its complex conjugate. The integral is

\begin{split}\displaystyle \begin {align} \langle \psi_k^0|V| \psi_n^0 \rangle &= \frac{V}{2\pi} \int\limits_{-a\pi}^{a\pi} e^{-ik\phi} e^{in\phi}\,d\phi = \frac{V}{2i\pi}\left. \frac{e^{i(n-k)\phi}}{n-k} \right|_{-a\pi}^{a\pi} \\&=\frac{V}{\pi(n-k)}\frac{e^{i\pi(n-k)a} -e^{-i\pi(n-k)a} }{2i}\\&=\frac{V}{\pi(n-k)}\sin\left((n-k)a\pi\right) \end{align}\end{split}

Because we are dealing only with the lowest level (otherwise we would have to use degenerate perturbation theory) $n = 0$ and the integral becomes $\displaystyle \langle \psi_k^0|V| \psi_n^0 \rangle =\frac{V}{k\pi}\sin(ak\pi)$ and because the lowest unperturbed energy is zero the energy (equation (44)) is the summation,

$\displaystyle E_n^{(2)}=-\left( \frac{V}{\pi}\right)^2 \sum_{k=\pm 1,\pm 2 \cdots} \frac{|\sin(ak\pi)|^2}{k^2E_k^0}$

which is two summations; one with $k = 1, 2, 3$, because $k = n = 0$ is excluded, and the other with $k = -1, -2, -3$. Since $\sin(-x) = -\sin(x)$, squaring terms makes them both positive. These two series are the same and the summation is therefore doubled to give

$\displaystyle E_n^{(2)}=-2\mu\left( \frac{V}{\pi\hbar}\right)^2 \sum_{k=1} \frac{|\sin(ak\pi)|^2}{k^4}$

The total energy of the lowest level corrected to second order is $E=aV+E_0^{(2)}$. In the case that $a = 1/6$ the summation rapidly converges to $\approx 0.313$, because of the influence of the $k^{-4}$ term. As the potential is $V = 0.1E_1 = \hbar^2/20\mu$ the energy is $E = 0.01651\hbar^2/\mu$, which is very small compared to the $E_1-E_0$ energy gap of $0.5\hbar^2/\mu$.

(a) The frequency is calculated with $\Delta E =h\nu$ and $\omega = 2\pi\nu$, or $\omega=(E_4 - E_3)/\hbar$ radian/sec, which is $\displaystyle \omega = 7\frac{2\pi h}{8mL^2} = 4\cdot 10^{15}\,\mathrm{ rad\, s^{-1}}$, and the period $2\pi/\omega = 1.57 \cdot 10^{-15}$ s.

(b) The superposition is $\Psi=N(\psi_3/2+2\psi_4/3)$ where $N=(1/4+4/9)^{-1/2}=6/5$ is the normalisation. Substituting into equation (54) produces,

$\displaystyle P(x,t)=\left[ \frac{3}{5}\psi_3^0(x) \right]^2+\left[ \frac{4}{5}\psi_4^0(x) \right]^2+\frac{24}{25}\psi_3^0\psi_4^0\cos(\omega t)$

Python is used to do the calculation and then plot the function. The coefficients $c_1,\, c_2$ (below) represent the amounts of $\psi_3$ and $\psi_4$. The wavefunction is made into a function of $n,\, x$ and $t$. The complex unit $i$ is represented in Python as 1j. Equation (53) is used as this is the most general form.
# wavepacket of particle in a box wavefunctions
fig1 = plt.figure(figsize=(13,6))
plt.rcParams.update({'font.size': 14})                  # set font size for plots
ax = [plt.subplot(2,5,i) for i in range(1,11,1)]        # ax[0] to ax[9]
m = 9.109e-31        # mass of electron, kg
h = 6.626e-34        # Planck constant, J s
nm= 1e-9
L = 1*nm             # box length, m
c1= 1/2.0
c2= 2/3.0            # coefficients
N = 1.0/np.sqrt(c1**2 + c2**2)                          # normalisation
E = lambda n: (h*n)**2/(8*m*L**2)                       # energy, J
omega = (E(4)-E(3))*2*np.pi/h                           # frequency, s^(-1)
period = 2*np.pi/omega                                  # s
print('{:s} {:6.2f}\n{:s} {:10.4g} {:10.4g}\n{:s} {:10.4g}\n{:s} {:10.4g}\n'.\
      format('normalisation', N, 'energy n= 3,4 ',E(3),E(4),'frequency 1/sec', omega,'period s', period))

psi = lambda x,n,t: np.sqrt(2/L)*np.sin(n*np.pi*x/L)*np.exp(-1j*2*np.pi*E(n)*t/h)   # wavefunction
prob= lambda x,n,m,t: 1e-9*N**2*((c1*psi(x,n,t) + c2*psi(x,m,t))*
                                 np.conjugate(c1*psi(x,n,t) + c2*psi(x,m,t))).real

x = np.linspace(0,L,200)
cols=['red','blue','green','red','blue','green','red','blue','green','grey','black']
tmes = [i for i in range(10)]
for j, i in enumerate(tmes):
    tme = i*period/10
    ax[j].plot(x/nm, prob(x,3,4,tme),color=cols[j])
    #ax[j].set_xlabel('x /nm')
    ax[j].annotate('% period = '+str(i*10),xy=(0.25,4),fontsize=12 )
    ax[j].set_ylim([0,5])
ax[0].set_ylabel('wavepacket amplitude')
plt.tight_layout()
plt.show()

normalisation   1.20
energy n= 3,4    5.422e-19   9.64e-19
frequency 1/sec  3.999e+15
period s  1.571e-15

Figure 34. Superposition of time-dependent wavefunctions (probability) changing with time, pictured in units of the oscillation period, for the $$3^{rd}$$ and $$4^{th}$$ wavefunctions of a particle in a box. (The probability is arbitrarily divided by $$10^9$$ to give a sensible scale, and each x-axis is in nm).

(a) Substituting for the energy produces

$\displaystyle \Psi(x,t)=\sum\limits_n a_n\psi_n(x) e^{-iE_nt/\hbar} = \sum\limits_n a_n\psi_n(x)e^{-i\omega(n+1/2)t}$

and at times $$t = mT = 2\pi m/\omega$$, which are $$m$$ multiples of the period $$T$$, this gives

$\displaystyle \Psi(x,mT)=\sum\limits_n a_n\psi_n(x)e^{-2i\pi nm}e^{-i\pi m}$

In the summation $$n$$ is a positive integer, as is $$m$$, which is the number of periods. The first exponential term $$\displaystyle e^{-2i\pi nm}=1$$ is the same for all positive integer $$m$$ values because $$2nm$$ is an even number and $$\displaystyle e^{-2i\pi}=e^{-4i\pi}=\cdots = 1$$. The identities $$e^{-i\pi} = -1,\, e^{-2i\pi} = 1$$ can now be used to find higher powers; for example, $$e^{-3i\pi} = -1$$, and so forth. Therefore, the second exponential has the form $$e^{-mi\pi} = (-1)^m$$. Substituting into the last equation gives $$\displaystyle \Psi(x,mT) = \sum\limits_n a_n\psi_n(x)(-1)^m$$ and therefore

$\displaystyle \Psi(x,mT)=(-1)^m\Psi(x,0)$

This means that the wavepacket periodically reforms itself, apart from a change in sign. The probability density is the square of the wavepacket amplitude, which reforms itself exactly after each successive period, and appears as the mirror image of itself after exactly $$1/2$$ a period.

(b) In calculating the wavefunction, the Hermite polynomials are needed. These were used in question 46, but numerical rather than algebraic values are needed now, so NumPy is used instead of SymPy.

# wavepacket calculation
# This code follows some of the calculation described in the text.
#--------------
def Hermite(n,x):        # use the recursion formula; x is real, n is the order
    if n==0:
        return 1
    elif n==1:
        return 2*x
    else:
        return 2*x*Hermite(n-1,x) - 2*(n-1)*Hermite(n-2,x)
#--------------
def fact(n):             # factorial, accurate for n < 100 only
    if n==0 or n==1:
        return 1
    else:
        return n*fact(n-1)
#--------------
fig1= plt.figure(figsize=(10, 10))         # use figure to define plot size
plt.rcParams.update({'font.size': 16})     # set font size for plots
pm = 1e-12           # picometres
ps = 1e-12           # picoseconds
amu= 1.6604e-27      # kg
c  = 2.9979e10       # cm/s
h  = 6.6256e-34      # J s
mu = 1*35/(1 + 35)*amu                     # reduced mass of HCl, kg
nu = 2989.7                                # HCl frequency in cm^(-1)
k  = (2*np.pi*nu*c)**2*mu                  # force constant, N/m
alpha  = 2*np.pi*np.sqrt(mu*k)/h           # alpha = sqrt(mu*k)/hbar
aperiod= 2.0*np.pi/(nu*c)                  # seconds; the 2pi here compensates for the phase in wp, which omits 2pi
ni = 0               # initial and final quantum numbers
nf = 5
cn = [0.1, 0.2, 0.25, 0.2, 0.1]            # wavepacket coefficients
V  = lambda x: 0.5*k*x**2/(h*c)            # potential energy in cm^(-1)
Enrg = lambda n, nu: nu*(n+1/2)            # energy, cm^(-1)
avE  = sum([Enrg(n,nu)*h*c for n in range(ni,nf,1)])/len(cn)    # average energy, J
psi  = lambda x, n, alpha: np.sqrt( 1/(2**n*fact(n)) * np.sqrt(alpha/np.pi) )*\
       np.exp(-alpha*x**2/2.0)*Hermite(n,x*np.sqrt(alpha))      # harmonic oscillator wavefunction
wp   = lambda x,tm: sum([ cn[n-ni]*psi(x,n,alpha)*np.exp(-1j*Enrg(n,nu)*c*tm) for n in range(ni,nf)] )

numx = 200           # spatial points
numt = 100           # time points
x = np.linspace(-60*pm,60*pm,numx)
y = np.linspace(0, 3*aperiod,numt)
xvals,tvals = np.meshgrid(x,y)             # set up grid of points to plot contour
prob = 1e-10*(wp(xvals,tvals) * np.conjugate(wp(xvals,tvals))).real   # scale by 10^-10
levs = [i for i in np.linspace(0,2.0,25)]  # 25 levels between 0 and 2
plt.contour(xvals/pm,tvals/aperiod,prob, cmap = plt.cm.brg, levels=levs)
plt.axvline( np.sqrt(2*avE/k)/pm,linestyle='dashed',color='grey')
plt.axvline(-np.sqrt(2*avE/k)/pm,linestyle='dashed',color='grey')
plt.ylabel('period')
plt.xlabel('displacement /pm')
plt.title('HCl wavepacket motion, n= '+str(ni)+' to '+str(nf-1)+'\ndashed line is potential at average energy')
plt.show()

Figure 35. Probabilities of a wavepacket made from five harmonic oscillator wavefunctions vs time $$t$$ (in units of the period) and bond displacement $$x$$. The wavepacket consists of the $$n = 0 \to 4$$ vibrational levels of HCl plotted over three vibrational periods; one period is $$\approx 11$$ fs.

The wavepacket is $$\displaystyle \Psi(r,t)=\sum\limits_n a_n\psi_n(r)e^{-iRt/(n^2\hbar)}$$ and the radial probability distribution is

$\displaystyle P(r,t)= r^2\left | \sum\limits_n a_n\psi_n(r)e^{-iRt/(n^2\hbar)} \right|^2$

Here $$\psi_n(r)$$ is the atomic radial wavefunction and $$r$$ is the distance from the nucleus. Consulting a textbook, the quantum numbers are $$n$$, the principal quantum number, and $$L \lt n$$, the angular momentum quantum number (in the code $$L$$ is labelled $$el$$ for clarity). The Bohr radius is $$a_0$$ and $$R$$ is the Rydberg constant. The units are: time in picoseconds, distances and $$a_0$$ in nm; Planck's constant is in cm$$^{-1}$$ ps, and therefore $$h = 33.35\,\mathrm{ cm^{-1}\, ps}$$. The complete normalised wavefunction for hydrogenic atoms, using the generalised (associated) Laguerre polynomials ($$Lg$$), is

$\displaystyle \Psi_{n,L,M}(r,\theta,\phi)=\left [ \alpha^3 \frac{(n-L-1)!}{2n(n+L)!} \right]^{1/2} \cdot (\alpha r)^Le^{-\alpha r/2} Lg_{n-L-1}^{2L+1}(\alpha r) Y_L^M(\theta, \phi)$

but the angular part is not needed and so can be ignored, including its normalisation $$\sqrt{(2L+1)/(4\pi)}$$, since $$L$$ is fixed over the range of $$n$$ used.
For hydrogen $$Z = 1,\, Lg$$ is the generalised Laguerre polynomial and $$\alpha =2Z/(na_0)$$. The calculation is quite slow, $$\approx 10$$ secs.

from scipy.special import eval_genlaguerre as GL

fig1= plt.figure(figsize=(15, 9))          # use figure to define plot size and subplots
plt.rcParams.update({'font.size': 16})     # set font size for plots
ax0 = plt.subplot(1,2,1)
ax1 = plt.subplot(3,2,2)
ax2 = plt.subplot(3,2,4)
ax3 = plt.subplot(3,2,6)
#-------------
def fact(n):         # factorial
    if n == 0 or n == 1:
        return 1
    else:
        return n*fact(n - 1)
#--------------
Radl = lambda n, el, r: np.sqrt( ( fact(n-el-1) * (2/(n*a0))**3)/(2*n*fact(n+el)) )\
       * ( (2*r)/(n*a0))**el *np.exp(-r/(n*a0) )* GL( n-el-1,2*el+1,(2.0*r)/(n*a0) )

pm = 1e-12
ps = 1e-12
R  = 109737.0        # Rydberg constant in cm^(-1)
h  = 5.308*2*np.pi   # cm^(-1) ps
a0 = 0.052918        # Bohr radius, nm
r1 = 100             # r1, r2: two separations from the nucleus (nm) at which the wavepacket profile vs time is plotted
r2 = 19
cn = [ 0.1, 0.2, 0.3, 0.5, 0.8, 1.0, 0.8, 0.5, 0.3, 0.2, 0.1 ]   # amplitudes a_n
el = 1               # p orbital, L = 1
n0 = 25              # lowest quantum number
qn = [ n0+i for i in range( 0,len(cn)) ]   # range of quantum numbers, same length as cn
print('{:s}{:s}'.format('quantum numbers ',str(qn)))

psi = lambda r, n, el, t: r*Radl( n, el, r)*np.exp( 2*np.pi*1j*R*t/(h*n**2) )   # time-dependent wavefunction (times r)
wp  = lambda r,el,t: sum( [cn[n-n0]*psi(r,n,el,t) for n in qn ])                # wavepacket, sum over n

numr = 512           # spatial points
numt = 512           # time points
tstep = 1
xr = np.linspace(0, 3000*a0, numr)         # radial range, nm
yt = np.linspace(0,tstep*numt,numt)        # time range, ps
print('{:s} {:f}'.format('time / point ps', yt[1]-yt[0]))
print('{:s} {:f}'.format('distance / point nm', xr[1]-xr[0]))

rvals,tvals = np.meshgrid(xr,yt)           # set up grid of points to plot contour
aprob = (wp(rvals,el,tvals) * np.conjugate(wp(rvals,el,tvals))).real   # Psi^* Psi
Lvls  = [i for i in np.linspace(0,0.1,15)] # adjust linspace to change range & number of contours
ax0.contour(rvals,tvals, aprob, cmap = plt.cm.brg, levels=Lvls)
ax0.xaxis.grid(True, zorder=0)
ax0.yaxis.grid(True, zorder=0)
ax0.set_title('H atom wave packet n= '+str(qn[0])+ ' to '+str(qn[10]))
ax0.set_xlabel(' distance /nm',fontsize=14)
ax0.set_ylabel('time /ps',fontsize=14)
#ax0.set_ylim([0,100])                     # adjust limits to zoom in on a region

# calculate values at fixed r for all times; equivalent to probing at certain wavelengths
func = lambda rval :(wp(rval,el,yt) * np.conjugate(wp(rval,el,yt))).real
ax1.plot(xr, aprob[0],color='red')
ax1.plot(xr, aprob[5],color='blue')
ax1.set_title('wavepacket vs. position at t=0 (red) & 5 ps')
ax1.set_xlabel('separation /nm')
ax2.plot(yt, func(r1),color='blue')
ax2.plot(yt, func(r2)+0.25,color='red')    # red curve offset by 0.25 for clarity
ax2.set_title('wavepacket vs. time at '+str(r1)+ ' (blue) & '+str(r2)+ ' nm')
ax2.set_xlabel('time /ps')

isfft = np.fft.rfft(func(r1))
freq = np.linspace(0,1.0/(2*tstep),numt//2)
fmax = max(freq)
ymax = max(np.abs(isfft[2:]))
ax3.plot(freq,np.abs(isfft[:-1]))
ax3.set_xlim([0.0,fmax])
ax3.set_ylim([-0.02,ymax*1.2])
minor_ticks=np.linspace(0,fmax,51)
ax3.set_xticks(minor_ticks, minor=True)
ax3.set_title('abs(fft) of plot at '+str(r1)+' nm')
ax3.set_xlabel('frequency /'+ r'$ps^{-1}$')
fig1.tight_layout()
plt.show()

quantum numbers [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]
time / point ps 1.001957
distance / point nm 0.310673

Figure 36. Left. Contour map of the motion of a H atom Rydberg wavepacket excited to $$n = 25 \to 35$$ by a short laser pulse. Green has the greatest amplitude, blue the least.
Right, top: profile of the wavepacket vs distance from the nucleus; lower: time profile of the wavepacket motion at the separations shown. Bottom right: Fourier transform of the wavepacket vs time at $$100$$ nm, showing that many different frequencies are contained in the wavepacket.

After about $$6$$ ps the wavefunctions have constructively interfered to form a more compact wavepacket at the outer turning point, $$\approx 100$$ nm; see the top right figure. The wavepacket recurs with periods of about $$4$$ ps, plus longer periods of about $$40$$ ps. If the atom could be measured vs time at $$100$$ nm, the signal would look like that shown at the bottom right. The top right figure shows the spatial shape of the wavepacket at two different times.

(b) The size of the Rydberg atom with $$n = 30$$ is vast, $$\approx 2a_0n^2$$ or $$\approx 95$$ nm, compared to the H atom in its ground state, with its classical radius $$a_0 = 0.0529$$ nm. The oscillation frequency is

$\displaystyle \omega = \frac{1}{\hbar}\bigg|\frac{dE_n}{dn}\bigg|=\frac{2R}{\hbar n^3}$

which is a frequency of $$\nu=\omega/2\pi$$ per second, or a period of $$4.1$$ ps.

(a) Using the information in the question, the fluorescence signal is the square of the sum of the amplitudes of the two paths, $$1\to 3$$ and $$2\to 3$$, and using equation (56),

$\displaystyle f(t)=A\left| \int \varphi_3\,\mu\, [a_1\varphi_1e^{-iE_1t/\hbar -kt/2}+a_2\varphi_2e^{-iE_2t/\hbar -kt/2}]\,dq \right |^2$

Expanding inside the absolute value brackets and substituting for the integrals, which are over the spatial coordinates $$q$$ and not time, gives

$\displaystyle f(t)=A\left|a_1B_{31}e^{-iE_1t/\hbar-kt/2}+a_2B_{32}e^{-iE_2t/\hbar-kt/2}\right|^2$

The rule for calculating the absolute value of any complex number $$z$$ is $$| z |^2 = z^*z$$, where $$z^*$$ is the complex conjugate of $$z$$, obtained by replacing $$i$$ with $$-i$$. Doing this, the fluorescence intensity at time $$t$$ is

$\displaystyle f(t)= A\left[(a_1B_{31})^2 +(a_2B_{32})^2 +a_1a_2B_{31}B_{32}(e^{i(E_1-E_2)t/\hbar} + e^{-i(E_1-E_2)t/\hbar}) \right]e^{-kt}$

Making the energy gap $$E_2 - E_1 = \Delta E$$ and using the definition of a cosine, $$2\cos(x) = e^{ix} + e^{-ix}$$, gives

$\displaystyle f(t)= A\left[(a_1B_{31})^2 +(a_2B_{32})^2 +2a_1a_2B_{31}B_{32}\cos(\Delta E \cdot t/\hbar)\right]e^{-kt}$

This expression shows that the fluorescence signal decays overall with a rate constant $$k$$, because the whole expression is multiplied by $$e^{-kt}$$, and that the decay is modulated by the cosine term.
# Quantum Beat calculation
fig1= plt.figure(figsize=(10, 5.0))        # use figure to define plot size and subplots
ax0 = plt.subplot(1,2,1)
ax1 = plt.subplot(1,2,2)
deltaE = 1.0         # cm^(-1)
a1 = 0.3
a2 = 0.7
B31 = 2.0
B32 = 2.0
ps = 1e-12
k  = 1e9*ps          # rate constant of 10^9 /sec = 10^(-3) /ps
hbar = 1.054e-34*5.034e22/ps               # converted to cm^(-1) ps
print('{:s} {:g} {:s}'.format('hbar = ', hbar, 'cm^(-1) ps'))

f = lambda t: ((a1*B31)**2+ (a2*B32)**2 + 2*a1*a2*B31*B32*np.cos(deltaE*t/hbar))*np.exp(-k*t)

numt = 500
T = 2.0              # T is the gap between time points, ps
t = np.linspace(0, numt*T, numt)           # t in ps
ax0.plot(t,f(t)/f(0),color='blue')
ax0.set_ylim([0,1])
ax0.set_xlim([0,1000])
ax0.set_xlabel('time /ps')
ax0.set_title('Quantum beats')

invt = np.linspace(0,1.0/(2.0*T), numt//2) # frequency axis for the fft
isfft = np.fft.rfft(f(t))
freq = max(invt)
ax1.plot(invt,np.abs(isfft[:-1]),color='red')
ax1.set_xlim([0,freq])
minor_ticks=np.linspace(0,0.1,11)
ax1.set_xticks(minor_ticks, minor=True)
ax1.set_title('abs fourier transform of signal')
ax1.set_xlabel('frequency /'+r'$ps^{-1}$')
plt.show()

hbar =  5.30584 cm^(-1) ps

Figure 37. Quantum beats with an energy gap $$\Delta E = 1\,\mathrm{ cm^{-1}}$$, with $$a_1 = 0.3, \,a_2 = 0.7, \,B_{31} = B_{32} = 2$$ and $$k=10^9\,\mathrm{ s^{-1}}$$. The beat frequency should be $$c\Delta E = 3 \cdot 10^{10}\,\mathrm{ s^{-1}}$$. Right: the Fourier transform of the signal; the main feature is the oscillation frequency at $$3\cdot 10^{10}\,\mathrm{ s^{-1}}$$.

If $$\Delta E$$ were zero, the excited levels would have the same energy and no beats would be observed. Similarly, when either integral $$B_{31}$$ or $$B_{32}$$ is zero, for example if only one level is observed in emission, or when only one level is initially excited so that either $$a_1$$ or $$a_2$$ is zero, then again no beats are observed. The beating or oscillatory signal is due to an interference of the two pathways from levels $$1$$ and $$2$$ to level $$3$$, and is present in the equation as the cross-term in the multiplication. Such terms are also called off-diagonal terms, a reference to the matrix formulation of quantum problems; the off-diagonal terms always lead to interactions and time dependence.

(b) The normalization is obtained when $$t = 0$$ and is $$(a_1B_{31} + a_2B_{32})^2$$.

Exercise: repeat the calculation with four levels with energies above the lowest level of $$1, 2$$, and $$3\,\mathrm{ cm^{-1}}$$, excited with amounts $$0.3, 0.7, 0.3$$, and $$0.7$$. Assume that the other parameters are the same as in the question. Plot the graph of the decaying signal. Fourier transform it to find the frequencies present.
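A minimal sketch of the four-level exercise, generalising f(t) above to a double sum over levels. It reuses hbar, k and np from the code above; placing the lowest level at $$0\,\mathrm{cm^{-1}}$$ and the helper name fbeats are assumptions made for illustration.

# four levels: the lowest at 0 and the others 1, 2 and 3 cm^(-1) above it
En = np.array([0.0, 1.0, 2.0, 3.0])        # energies, cm^(-1)
an = np.array([0.3, 0.7, 0.3, 0.7])        # excitation amplitudes
B  = np.array([2.0, 2.0, 2.0, 2.0])        # equal transition integrals, as in the question
fbeats = lambda t: sum(an[i]*B[i]*an[j]*B[j]*np.cos((En[i]-En[j])*t/hbar)
                       for i in range(4) for j in range(4))*np.exp(-k*t)   # |sum of amplitudes|^2
t = np.linspace(0, 1000, 2000)             # ps
plt.plot(t, fbeats(t)/fbeats(0), color='blue')
plt.xlabel('time /ps')
plt.show()

Fourier transforming fbeats as above should show beat frequencies at $$c\Delta E$$ for $$\Delta E = 1, 2$$ and $$3\,\mathrm{ cm^{-1}}$$.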
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9265440106391907, "perplexity": 1886.2632229450383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00450.warc.gz"}
https://projecteuclid.org/ebooks/books-by-independent-authors/Advanced-Real-Analysis/Chapter/Chapter-III-Topics-in-Euclidean-Fourier-Analysis/10.3792/euclid/9781429799911-3
# Chapter III. Topics in Euclidean Fourier Analysis (2017)

## Abstract

This chapter takes up several independent topics in Euclidean Fourier analysis, all having some bearing on the subject of partial differential equations. Section 1 elaborates on the relationship between the Fourier transform and the Schwartz space, the subspace of $L^{1}(\mathbb{R}^{N})$ consisting of smooth functions with the property that the product of any iterated partial derivative of the function with any polynomial is bounded. It is possible to make the Schwartz space into a metric space, and then one can consider the space of continuous linear functionals; these continuous linear functionals are called “tempered distributions.” The Fourier transform carries the space of tempered distributions in one-one fashion onto itself. Section 2 concerns weak derivatives, and the main result is Sobolev’s Theorem, which tells how to recover information about ordinary derivatives from information about weak derivatives. Weak derivatives are easy to manipulate, and Sobolev’s Theorem is therefore a helpful tool for handling derivatives without continually having to check the validity of interchanges of limits. Sections 3–4 concern harmonic functions, those functions on open sets in Euclidean space that are annihilated by the Laplacian. The main results of Section 3 are a characterization of harmonic functions in terms of a mean-value property, a reflection principle that allows the extension to all of Euclidean space of any harmonic function in a half space that vanishes at the boundary, and a result of Liouville that the only bounded harmonic functions in all of Euclidean space are the constants. The main result of Section 4 is a converse to properties of Poisson integrals for half spaces, showing that harmonic functions in a half space are given as Poisson integrals of functions or of finite complex measures if their $L^{p}$ norms over translates of the bounding Euclidean space are bounded. Sections 5–6 concern the Calderón–Zygmund Theorem, a far-reaching generalization of the theorem concerning the boundedness of the Hilbert transform. Section 5 gives the statement and proof, and two applications are the subject of Section 6. One of the applications is to Riesz transforms, and the other is to the Beltrami equation, whose solutions are “quasiconformal mappings.” Sections 7–8 concern multiple Fourier series for smooth periodic functions. The theory is established in Section 7, and an application to traces of integral operators is given in Section 8.

## Information

Published: 1 January 2017
First available in Project Euclid: 21 May 2018
Digital Object Identifier: 10.3792/euclid/9781429799911-3
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.909235417842865, "perplexity": 318.80951589194393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00646.warc.gz"}
https://www.physicsforums.com/threads/metals-plasmon-frequency.360164/
# Metal's plasmon frequency

1. Dec 3, 2009

### ahatef

Hi guys;
I have a question about the plasmon frequencies in metals. I know that it can be calculated as:

ωp² = Ne²/(ε₀ m_eff)

I was just wondering if anyone knows if it is possible to change the plasmon frequency of a metal, let's say silver, experimentally or not. I am working on metallic photonic crystals and I want to find the effect of the plasmon frequency on the photonic band gap.

Thanks

2. Dec 3, 2009

### nbo10

Change either the electron density or the electrons' effective mass.
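A quick order-of-magnitude check with free-electron values for silver (the density N ≈ 5.86×10²⁸ m⁻³ and m_eff ≈ m_e are standard textbook values assumed here, not taken from the thread):

import numpy as np
e, m_e, eps0, hbar = 1.602e-19, 9.109e-31, 8.854e-12, 1.055e-34
N  = 5.86e28                           # silver free-electron density, m^-3 (assumed)
wp = np.sqrt(N*e**2/(eps0*m_e))        # plasma frequency, rad/s
print(wp, hbar*wp/e)                   # ~1.4e16 rad/s, i.e. ~9 eV

So changing the plasmon frequency experimentally means changing N (alloying, doping, intercalation) or m_eff (band-structure engineering), as nbo10 says.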
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8424180150032043, "perplexity": 1374.8219815835746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647519.62/warc/CC-MAIN-20180320170119-20180320190119-00795.warc.gz"}
https://undergroundmathematics.org/geometry-of-equations/r6234
Review question

# Can we show that this point moves on a circle?

Ref: R6234

## Question

1. The point $A$ has coordinates $(5,16)$ and the point $B$ has coordinates $(-4,4)$. The variable point $P$ has coordinates $(x,y)$ and moves on a path such that $AP = 2BP$. Show that the Cartesian equation of the path of $P$ is $(x + 7)^2 + y^2 = 100.$

2. The point $C$ has coordinates $(a,0)$ and the point $D$ has coordinates $(b,0)$. The variable point $Q$ moves on a path such that $QC = k \times QD,$ where $k > 1$. Given that the path of $Q$ is the same as the path of $P$, show that $\frac{a + 7}{b + 7} = \frac{a^2 + 51}{b^2 + 51}.$ Show further that $(a + 7)(b + 7) = 100$, in the case $a \neq b$.
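A quick computer-algebra check of part 1 (a sketch only; the variable names are arbitrary):

from sympy import symbols, expand, factor
x, y = symbols('x y')
AP2 = (x - 5)**2 + (y - 16)**2      # squared distance from A(5, 16)
BP2 = (x + 4)**2 + (y - 4)**2       # squared distance from B(-4, 4)
print(factor(expand(AP2 - 4*BP2)))  # AP = 2BP  <=>  AP**2 - 4*BP**2 = 0
# output: -3*(x**2 + 14*x + y**2 - 51), i.e. (x + 7)**2 + y**2 = 100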
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8606519103050232, "perplexity": 115.45978114160205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647545.84/warc/CC-MAIN-20180320224824-20180321004824-00410.warc.gz"}
https://www.physicsforums.com/threads/why-only-closed-forms-matter-in-derham-cohomology.771265/
# Why only Closed Forms Matter in DeRham Cohomology?

1. Sep 16, 2014

### WWGD

Hi All,
One gets homological/topological information (de Rham cohomology) from a manifold by forming the algebraic quotients

H^n_dR := (closed n-forms)/(exact n-forms)

Why do we care only about closed forms? I imagine we can use de Rham's theorem, which gives us a specific isomorphism with singular homology, to see why, but I cannot see an answer off-hand.

2. Sep 16, 2014

### lavinia

We do not care only about closed forms. But closed forms modulo exact forms give you the cohomology with real coefficients. The key is Stokes' Theorem.

3. Sep 16, 2014

### WWGD

Well yes, but that seems to beg the question (and I think I am using that expression correctly here). Why is it that the quotient gives you information; why is it that non-closed forms give you no information?

4. Sep 17, 2014

### homeomorphic

To give an example, if you have a divergence-free vector field, then to find the line integral around some loop, you can choose any surface you want and integrate the curl vector field over that. Gives you independence of the surface. So, it's a higher-dimensional version of that. As to why it gives topological information, that's the universal coefficient theorem, which basically says homology and cohomology are two sides of the same coin. The example I gave is illustrating the fact that the Kronecker pairing is well defined on homology (a cocycle evaluates to the same thing in a well-defined way on homology, not just on the chain level), and that's what's involved in the universal coefficient theorem. So, when you mod out by exact forms, you are making the Kronecker pairing well-defined, so that you get something that's sort of dual to homology, which is the more geometrically meaningful thing, for which we understand why it should be a topological invariant. You could say the same for singular cohomology, as well as de Rham. At least, that's the way I see it.

5. Sep 17, 2014

### lavinia

I guess I don't understand your question. De Rham's theorem proves that de Rham cohomology is isomorphic to singular cohomology with real coefficients. Are you asking how the proof works?

A non-closed form will not, in general, take on the same value on homologous smooth cycles. Therefore such a form is not in the dual space to the singular homology. A closed form is. Two closed forms that differ by an exterior derivative will take on the same value on homologous cycles. Taking the quotient identifies them, which is right because their values are the same on smooth cycles. But I feel that I am still begging the question.

A good example of a form that is not closed but is information-packed is a connection one-form on a circle bundle. Such a form is closed only if the connection is flat.

Last edited: Sep 17, 2014

6. Sep 17, 2014

### WWGD

Thanks, Lavinia, Homeo; I guess it is up to me now to read the de Rham proof and your answers more carefully before asking a new question.

7. Sep 17, 2014

### jergens

It might help to revisit where exactly the definition for de Rham cohomology comes from. Basically the idea is that, given a smooth manifold M, we can look at the space Kq of differential q-forms and notice that the exterior derivative gives a map d:Kq→Kq+1 that satisfies d2 = 0. This means we have a cochain complex associated to each manifold, and so with a little work it makes sense to call the homology of this cochain complex the (de Rham) cohomology of M; that is, the de Rham cohomology is closed forms modulo exact forms.
My hunch is that this is all pretty unsatisfying, but on a formal level at least it explains why closed forms come in. Now to understand the topological information this provides it is probably best to pay attention to integration and the de Rham isomorphism, as others in the thread have suggested. Try working some examples, especially in the small dimensional cases where we can actually draw the manifold, to get some intuition on exactly what topological features de Rham cohomology captures. If you want a textbook that covers this kind of stuff in detail, then Bott and Tu is a great choice.

Lastly, since you seem to be asking why pay attention to only closed q-forms (modulo exact forms) instead of all q-forms, the answer partially comes down to computability. The (co)chain complexes arising in our definitions for the (co)homology of a space often contain lots of topological information. Problem is these things are enormous and pretty much impossible to compute with. Passing to (co)homology provides a major advantage since things get smaller and more manageable and we also pick up some formal rules for computing (like excision, long-exact sequence of a pair, etc) which are tremendously helpful.

Last edited: Sep 17, 2014

8. Sep 17, 2014

### lavinia

I strongly suggest the beautiful proof in Singer and Thorpe.
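To make the formal point d² = 0 concrete, here is a tiny check on 0-forms in the plane, which is nothing more than the equality of mixed partial derivatives (a sketch in SymPy):

from sympy import symbols, Function, diff, simplify
x, y = symbols('x y')
f = Function('f')(x, y)
# df = f_x dx + f_y dy, so d(df) = (d/dx f_y - d/dy f_x) dx^dy
print(simplify(diff(f, y, x) - diff(f, x, y)))   # -> 0: every exact 1-form is closed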
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8913179636001587, "perplexity": 485.79438488559686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825227.80/warc/CC-MAIN-20171022113105-20171022133105-00518.warc.gz"}
https://www.physicsforums.com/threads/friction-problem.46436/
# Friction Problem

1. Oct 6, 2004

### AtlBraves

I am having trouble with this problem. I found the answer to (a) to be 14 N, but the online quiz says it is wrong. Fx − f = 0. Fcos(theta) − f = 0. f = 18cos40 = 14 N. What am I doing wrong?

A 3.5 kg block is pushed along a horizontal floor by a force F of magnitude 18 N at an angle = 40° with the horizontal (Figure 6-20). The coefficient of kinetic friction between the block and floor is 0.25. (a) Calculate the magnitude of the frictional force on the block from the floor. (b) Calculate the magnitude of the block's acceleration.

Last edited: Oct 6, 2004

2. Oct 6, 2004

### arildno

Welcome to PF!
In order to delete in the other forum, press the "Edit" button. On top of that, there's a "Delete" option.

3. Oct 6, 2004

### Pyrrhus

$$F_{x} - F_{f}$$ is not 0; the block is accelerating along the x-axis. Remember $$F_{f} = \mu N$$

4. Oct 6, 2004

### arildno

Note that the NORMAL force acting on the block must be GREATER than the weight, due to the vertical component of F. Hence, the frictional force is also greater..

5. Oct 6, 2004

### AtlBraves

So if $$F_{f} = \mu N,$$ then $$F_{f} = \mu * (mg+18sin40) = .25 * ((3.5*9.8)+18sin40) = 12 N?$$

6. Oct 6, 2004

### arildno

That looks correct, yes. (Do you understand, to your own satisfaction, why you need that addition to the weight?) I haven't checked your numbers, though..

7. Oct 6, 2004

### AtlBraves

Oops. I mistyped. The correct answer is 11 N I'm hoping. Yes, I do now see why the normal force is greater than the weight. Thanks for all of the help.

8. Oct 6, 2004

### arildno

Perhaps you shouldn't round down to an integer answer.
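For anyone checking the numbers, a short script reproduces both parts (values from the problem statement; g = 9.8 m/s² assumed):

import numpy as np
m, F, theta, mu, g = 3.5, 18.0, np.radians(40), 0.25, 9.8
Nrm = m*g + F*np.sin(theta)        # pushing down at 40 deg increases the normal force
f   = mu*Nrm                       # kinetic friction, ~11.5 N
a   = (F*np.cos(theta) - f)/m      # ~0.66 m/s^2
print(f, a)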
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8997809886932373, "perplexity": 1745.6957254339618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717959.91/warc/CC-MAIN-20161020183837-00167-ip-10-142-188-19.ec2.internal.warc.gz"}
http://mathhelpforum.com/discrete-math/155258-question-about-validating-argument.html
# Math Help - question about validating an argument

1. ## question about validating an argument

I am trying to determine whether this argument is valid or not:

p → q
∼p
———
∴∼q

I did not see this as modus ponens or modus tollens. But I described it as:

p: I walk my dog
q: he is happy
p → q: if I walk my dog he is happy
~p: I did not walk my dog
therefore ~q: he is not happy

This is not valid according to the table; there could be two different outcomes. Is this correct?

p | q | p → q | ~p | ~q
T | T |   T   | F  | F
T | F |   F   | F  | T
F | T |   T   | T  | F
F | F |   T   | T  | T

Is this correct? If not, please explain how to prove it (valid) or (invalid).

2. Let $P:x>3$ and $Q:x-1>0$. If $x=2$ you have $\neg P$; do you have $\neg Q?$

3. yes

4. Originally Posted by robasc
yes
Yes what? You have $\neg(2-1>0)?$

5. yes it is false

6. Well then the argument is not valid.

7. this is known as a fallacy, denying the antecedent, correct?

8. Yes. What is the point of all this?

9. I am double-checking myself to make sure I am understanding the material correctly.
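A brute-force truth-table check of the same argument (a sketch; it prints the row that breaks validity):

from itertools import product
for p, q in product([True, False], repeat=2):
    premises = ((not p) or q) and (not p)      # p -> q, together with ~p
    conclusion = not q                         # ~q
    if premises and not conclusion:
        print('counterexample: p =', p, ', q =', q)   # p = False, q = True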
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.851774275302887, "perplexity": 3644.320615947134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657131145.0/warc/CC-MAIN-20140914011211-00244-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://www.computer.org/csdl/trans/tk/2007/04/k0468-abs.html
Issue No.04 - April (2007 vol.19)
pp: 468-484

ABSTRACT

We explore in this paper a novel sampling algorithm, referred to as algorithm PAS (standing for Proportion Approximation Sampling), to generate a high-quality online sample with the desired sample rate. The sampling quality refers to the consistency between the population proportion and the sample proportion of each categorical value in the database. Note that the state-of-the-art sampling algorithm to preserve the sampling quality has to examine the population proportion of each categorical value in a pilot sample a priori and is thus not applicable to incremental mining applications. To remedy this, algorithm PAS adaptively determines the inclusion probability of each incoming tuple in such a way that the sampling quality can be sequentially preserved while also guaranteeing the sample rate close to the user specified one. Importantly, PAS not only guarantees the proportion consistency of each categorical value but also excellently preserves the proportion consistency of multivariate statistics, which will be significantly beneficial to various data mining applications. For better execution efficiency, we further devise an algorithm, called algorithm EQAS (standing for Efficient Quality-Aware Sampling), which integrates PAS and random sampling to provide the flexibility of striking a compromise between the sampling quality and the sampling efficiency. As validated in experimental results on real and synthetic data, algorithm PAS can stably provide high-quality samples with corresponding computational overhead, whereas algorithm EQAS can flexibly generate samples with the desired balance between sampling quality and sampling efficiency. In addition, while applying the sample generated by algorithms PAS and EQAS to incremental mining applications, a significant efficiency improvement can be obtained without compromising the resulting precision, showing the prominent advantage of both proposed algorithms to be the quality-aware sampling means for incremental mining applications.

INDEX TERMS
Sequential sampling, incremental data mining.

CITATION
Kun-Ta Chuang, Keng-Pei Lin, Ming-Syan Chen, "Quality-Aware Sampling and Its Applications in Incremental Data Mining", IEEE Transactions on Knowledge & Data Engineering, vol.19, no. 4, pp. 468-484, April 2007, doi:10.1109/TKDE.2007.1005
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.965343177318573, "perplexity": 2138.4946457295478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095671.53/warc/CC-MAIN-20150627031815-00026-ip-10-179-60-89.ec2.internal.warc.gz"}
https://reference.wolfram.com/language/ref/ReactionPDETerm.html
# ReactionPDETerm

ReactionPDETerm[vars,a]
represents a reaction term with reaction coefficient a and with model variables vars.

ReactionPDETerm[{u,{x1,…,xn}},a,pars]
uses model parameters pars.

# Details

• Reaction terms are used to model absorption or emission in a number of domains, such as biology, chemistry and physics.
• Reaction with a reaction coefficient a is the process of absorbing the dependent variable u.
• ReactionPDETerm returns a differential-operator term to be used as part of partial differential equations.
• ReactionPDETerm can be used to model reaction equations with dependent variable u, independent variables {x1,…,xn} and time variable t.
• Stationary model variables vars are vars={u[x1,…,xn],{x1,…,xn}}.
• Time-dependent model variables vars are vars={u[t,x1,…,xn],{x1,…,xn}} or vars={u[t,x1,…,xn],t,{x1,…,xn}}.
• The reaction coefficient a can be a scalar.
• For a system of PDEs with dependent variables {u1,…,um}, the reaction coefficient is a tensor of rank 2 in which each submatrix is a scalar that can be specified in the same way as for a single dependent variable.
• The reaction coefficient can depend on time, space, parameters and the dependent variables.
• The coefficient a does not affect the meaning of NeumannValue.
• All quantities that do not explicitly depend on the independent variables given are taken to have zero partial derivative.

# Examples

## Basic Examples (4)

Define a stationary reaction term. Define a stationary reaction term with a parameter. Solve a reaction diffusion equation built with basic terms and visualize the result. Solve for the eigenvalues of a reaction diffusion equation.

## Scope (6)

Define a time-dependent reaction term. Define a symbolic reaction term. Define a 2D stationary reaction term. Define a reaction term with multiple dependent variables. Define a Helmholtz model: solve for the eigenvalues of the Helmholtz equation, then solve the Helmholtz equation with a source term and visualize the solution. Solve a nonlinear reaction diffusion equation built with basic terms and visualize the result.

Wolfram Research (2020), ReactionPDETerm, Wolfram Language function, https://reference.wolfram.com/language/ref/ReactionPDETerm.html.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9216246008872986, "perplexity": 4716.489387832488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703500028.5/warc/CC-MAIN-20210116044418-20210116074418-00213.warc.gz"}
https://www.physicsforums.com/threads/rocket-f-dp-dt.14269/
# Rocket F=dp/dt

1. Feb 10, 2004

### RedX

I was looking through this forum, and I noticed one of those problems where a rocket ejects mass to get into orbit.

F = dp/dt

Now the arguments I saw went something like this:

p = mv
F = d(mv)/dt = v(dm/dt) + m(dv/dt)

But that equation is for a point particle I believe. All you can say for a rocket is F = m*a, where a is the acceleration of the center of mass. For one thing, in the equation F = d(mv)/dt = v(dm/dt) + m(dv/dt), what is v? It's the relative velocity between the ejected mass and the rocket, right? How does F = dp/dt "know" information about how the mass is going to be ejected? You could chuck the fuel with a small velocity or high velocity. I think when you write F = d(mv)/dt you do that because relativity says m is a function of v (which is a function of t). But we are still assuming a point particle, and I don't think this expression can honestly be used for a rocket problem. However, I see everyone doing that, so I'm not too sure of myself on this one.

2. Feb 10, 2004

### PrudensOptimus

Re: F=dp/dt

Well, if you know a little calculus you would know that ΣF = ma = dp/dt = d(mv)/dt = m(dv/dt) = ma ==> if mass is constant.

3. Feb 11, 2004

### krab

Re: F=dp/dt

No. v is the velocity with respect to an inertial frame.

4. Feb 11, 2004

### RedX

Re: Re: F=dp/dt

Now I thought that the laws of physics are the same for different inertial frames. If you have F = v(dm/dt) + ma then the force depends on v; it depends on your frame of reference. If you put the relative velocity (between the ejected fuel and the rocket) in the calculations, it works for rocket problems. However, I don't think it's legitimate physics to say that it works because of this equation, F = v(dm/dt) + ma, and that Newton meant for v to be the relative velocity. You can show that the equation works when v is the relative velocity just by considering a mass dm changing its momentum from the rocket velocity to the velocity of the ejected mass, and dividing this by dt to get the relative velocity times the mass flow rate.

5. Feb 11, 2004

### ZapperZ

Staff Emeritus

Re: F=dp/dt

When a force acts on an object in SIMPLE, BASIC, classical mechanics, the force is applied onto the object's center of mass. When you draw a free-body diagram of the system, you don't draw the object, you represent the object simply as a point, because unless we are considering rotational motion here, the "size" of the object is irrelevant at this level.

In the above comment, you seem to be mixing the object in question. The system that is under consideration is the rocket. It has a mass m(t). Its mass is changing over time. Its velocity isn't, and this is measured based on some inertial reference frame. The stuff being spewed out of the rocket, as soon as it leaves the rocket, is no longer part of the "system" under consideration. The force F is the force that acts only on the rocket system, and not on the spewed gases, etc. The system here is just the rocket itself, and only the rocket.

You also can't say that "F=m*a where a is the acceleration of the center of mass," because you need to specify the center of mass of what? The rocket? The rocket + ejected mass? Since the mass of the rocket is changing and being redistributed, is the location of the center of mass of the rocket also changing with time? Does this add an additional "a" to the overall acceleration in the CM frame?

But what I don't quite understand is your statement: "How does F=dp/dt "know" information about how the mass is going to be ejected?
You could chuck the fuel with a small velocity or high velocity."

This is the issue of the chicken or the egg. When you push an object that is in contact with the floor, and you apply a force just so that it moves with constant velocity, did you know a priori just how much you need to push to get it to move this way? The question has been set up so that the conditions are: (i) it has no net acceleration and (ii) it is also losing mass. One can imagine throttling the engine just enough to achieve this, and so getting just the right rate of mass loss given the size of the mass "chunks" that the engine is spewing.

BTW, questions like this aren't unique. Another popular scenario that is often encountered in intro physics is a conveyor belt moving with constant velocity, while a long length of chain with a constant mass per unit length is dropped onto it at a certain rate. Again, no net acceleration, but the conveyor belt (which is now THE system under consideration) is gaining mass. The treatment of this problem is identical to the rocket.

Zz.

6. Feb 11, 2004

### krab

Re: Re: Re: F=dp/dt

Right. But you weren't using any inertial frame.

7. Feb 15, 2004

### RedX

Okay I understand now. Thanks everyone.
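As a concrete footnote to the thread: integrating the force-free variable-mass equation m(dv/dt) = v_ex|dm/dt| numerically reproduces the Tsiolkovsky result Δv = v_ex ln(m0/m1), which is one way to see that the velocity appearing in the thrust term is the exhaust velocity relative to the rocket. The numbers below are illustrative values, not from the thread:

import numpy as np
v_ex, m0, mdot, dt = 3000.0, 1000.0, 5.0, 1e-3   # assumed values, SI units
m, v = m0, 0.0
for _ in range(int(60/dt)):            # burn for 60 s, so m1 = 700 kg
    v += v_ex*(mdot/m)*dt              # dv = v_ex * |dm|/m
    m -= mdot*dt
print(v, v_ex*np.log(m0/700.0))        # both ~ 1070 m/s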
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.946183979511261, "perplexity": 455.8239091073261}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743351.61/warc/CC-MAIN-20181117082141-20181117103516-00010.warc.gz"}
https://www.nature.com/articles/srep26656?error=cookies_not_supported&code=547c4664-544a-4ff4-9a92-4e56c6f64a84
# Large area molybdenum disulphide-epitaxial graphene vertical Van der Waals heterostructures

## Abstract

Two-dimensional layered transition metal dichalcogenides (TMDCs) show great potential for optoelectronic devices due to their electronic and optical properties. A metal-semiconductor interface, such as epitaxial graphene-molybdenum disulfide (MoS2), is of great interest from the standpoint of fundamental science, as it constitutes an outstanding platform to investigate the interlayer interaction in van der Waals heterostructures. Here, we study large area MoS2-graphene heterostructures formed by direct transfer of a chemical-vapor-deposited MoS2 layer onto epitaxial graphene/SiC. We show that via a direct transfer, which minimizes interface contamination, we can obtain high quality and homogeneous van der Waals heterostructures. Angle-resolved photoemission spectroscopy (ARPES) measurements combined with Density Functional Theory (DFT) calculations show that the transition from indirect to direct bandgap in monolayer MoS2 is maintained in these heterostructures due to the weak van der Waals interaction with epitaxial graphene. A downshift of the Raman 2D band of the graphene, an upshift of the A1g peak of MoS2 and a significant photoluminescence quenching are observed for both monolayer and bilayer MoS2 as a result of charge transfer from MoS2 to epitaxial graphene under illumination. Our work provides a possible route to modify the thin-film TMDC photoluminescence properties via substrate engineering for future device design.

## Introduction

The study of graphene, a two dimensional (2D) atomic crystal formed of carbon atoms arranged in a honeycomb structure, is one of the hottest topics in material science due to its unique capabilities1,2. The importance of graphene not only lies in its properties but also in the fact that it opened the way and promoted the synthesis of many other 2D materials3. The last 10 years of research on graphene have led to many methods for synthesizing, transferring, manipulating and characterizing the properties of this 2D material, which can be applied to all layered van der Waals (vdW) materials. As one has full control of the 2D crystals, one can also create stacks of these crystals in completely new heterostructures. Since the portfolio of the original 2D crystals is very rich4,5,6, a new world of materials is accessible. Combining different 2D layers with complementary characteristics can lead to new vdW heterostructures with tunable properties leading to an outstanding range of possible applications7,8. Among these systems, the combination of a transition metal dichalcogenide (TMDC) such as MoS2 with graphene, forming a heterostructure, is very interesting, since it combines the excellent optical properties of MoS2 and the high mobility and transparency of graphene9,10,11,12. One well-established method to produce high quality wafer-scale monolayer graphene is the epitaxial approach based on the graphitization of the Si face of SiC13,14.
A considerable advantage of this technique lies in the fact that the wide band gap semiconductor SiC wafers can be employed as a substrate, so that no additional transfer step is required to conduct electrical or optical measurements. To our knowledge, until now there have been few works discussing the possibility of growing TMDC materials (MoS2 or WS2) on epitaxial graphene/SiC15,16,17,18. In particular, a direct growth approach by Chemical Vapor Deposition (CVD)15,16,18,19,20,21 or metal-organic chemical vapour deposition (MOCVD)17 is generally used. Even though the authors obtained high quality interfaces between the TMDC film and graphene, the direct growth method suffers from the fact that the obtained TMDC grain size is small (ranging from hundreds of nm to a few microns). However, the possibility to obtain large area TMDC/graphene heterostructures is important for a large variety of in situ characterization techniques, and is also a basic requirement for realistic applications. Recently Han et al. developed a novel seeded CVD patterned growth method to obtain highly crystalline MoS2 flakes on an oxidized silicon substrate22. MoS2 grown by this approach has high crystallinity, with large flakes (between 20–100 μm) and electrical and optical properties comparable to exfoliated material. These CVD-grown flakes are suitable for transfer onto an epitaxial graphene/SiC substrate in order to obtain large areas of MoS2/graphene heterostructures. Using a transfer process (see results and discussion and Fig. 1(a)) we have obtained monolayer and bilayer MoS2/graphene heterostructures, allowing the study of the interlayer interaction between TMDC materials and graphene on a large scale, using several complementary techniques. Angle resolved photoemission spectroscopy (ARPES) measurements were used to study the electronic structure of the MoS2/epitaxial graphene heterostructure. Due to a weak interlayer coupling, the electronic structures of graphene and MoS2 are well retained in their respective layers. However, the band alignment in the MoS2/graphene heterostructure allows, under illumination, a charge transfer process from MoS2 to graphene. This is revealed by a downshift of the graphene 2D Raman bands, an upshift of the A1g Raman band of MoS2, and a strong quenching of the photoluminescence (PL) of the MoS2/graphene heterostructure. As a complement, we performed photocurrent measurements to elucidate how the presence of the semiconductor affects the photoconductive properties of graphene. In that respect, this work may open a new way in graphene optoelectronics by modulating the graphene photoelectric response through 2D materials interfacing.

## Results and Discussions

TMDC/graphene heterostructures were made from MoS2 flakes grown by chemical vapor deposition on oxidized silicon substrates that were then transferred onto epitaxial graphene grown on SiC(0001) (Fig. 1(a)). The graphene underlayer used in this study was obtained by annealing 4H-SiC(0001) (see methods) (Fig. 1(b)i–iii). The CVD growth procedure of MoS2 on SiO2 results in characteristic single-crystal domains shaped as well-defined equilateral triangles22 (Figure S1). The single-crystal flakes with mono, bi and multilayer thicknesses were identified by their optical contrast and characteristic triangular shape, and further confirmed by micro-Raman and micro-photoluminescence (micro-PL) measurements. For the transfer step, we spin-coated PMMA onto the MoS2 flakes and peeled them off from the SiO2 substrate by wet etching in KOH solution (Fig. 1(a)ii,iii).
Afterward, we transferred the PMMA/MoS2 layer onto the graphene/SiC substrate (Fig. 1(c)). We finally removed the PMMA using acetone. Due to the high density of MoS2 flakes on the Si/SiO2 substrate (50% of the total area of the sample), we were able to obtain several flakes with various stacking orders and orientations in a single transfer step (Fig. 2(a)). The MoS2 domains transferred onto the graphene retain their triangular shapes with lateral sizes of ~20 to ~100 μm. To further clean the surface and interface of the MoS2/graphene heterostructure, we annealed the samples at T = 300 °C for 30 min in UHV (base pressure below P ~ 10−10 mbar). For the following experiments, the monolayer, bilayer and multilayer coverage was estimated from optical analysis to be around 83%, 15% and 2% respectively. In order to better understand the electronic properties of the MoS2/graphene/SiC heterostructure, we measured its band structure by angle-resolved photoemission spectroscopy (ARPES) at the Cassiopée beamline of Synchrotron Soleil. The small x-ray spot size (50 × 50 μm2) allowed the measurement of the band structure of a single flake (monolayer or bilayer) forming the MoS2/graphene heterostructure. The photoelectron intensity is presented in Fig. 2(b) as a function of energy and k-momentum, along the K′–Γ–K direction of the first graphene Brillouin zone. The second-derivative spectrum in Fig. 2(c) is provided to enhance the visibility of the band structure. The zero of the binding energy (i.e., the Fermi level) was determined by fitting the leading edge of the graphene layer at the same photon energies and under the same experimental conditions. Besides the typical linear dispersion of the π bands of graphene, a new set of bands is visible at the Γ point of the Brillouin zone, independently of the orientation angle between the flake and the graphene underlayer, which is the signature of the MoS2 valence band. A close inspection of the K point of the graphene Brillouin zone is shown in Fig. 2(d). This spectrum is obtained by orienting the sample along the Γ-K direction of the graphene Brillouin zone. In this case the mismatch angle between the MoS2 flake and the graphene underlayer is critical. In order to obtain a perfect alignment of the Brillouin zones of the two materials we need a mismatch angle of zero degrees. As we can see in Fig. 2(d), along the Γ-K direction we only see the graphene signature. The two spin-split bands expected at the K(K′) point23,24 of MoS2 are not visible, indicating a non-zero mismatch angle for this flake. However, we clearly see the graphene band structure; in particular, the π bands of graphene preserve their linearity, the signature of massless Dirac fermions, indicating a high structural quality of the MoS2/graphene heterostructure. Moreover, similar to pristine monolayer graphene, the Dirac point (ED) is located at 0.3 eV below the Fermi level (FL). From a linear fit, using the relation E = ħvFk, we obtain the value of the Fermi velocity vF ~ 1.1 × 106 m/s, which matches the expected value for monolayer graphene on SiC. As the linear dispersion and Fermi velocity of the pristine graphene are preserved in the MoS2/graphene heterostructure, we can infer that the MoS2 transfer does not affect the electronic structure of monolayer graphene in the heterostructure formation.
In addition, this feature is expected theoretically, since in van der Waals heterostructures the superposition of the electronic structures of the individual layers constitutes a good approximation of the electronic structure of the multilayer. In contrast to the previous work of Diaz et al.25, no signature of interlayer hybridization is present on the π-band of graphene. This is probably due to the mismatch angle between the MoS2 flake and the graphene underlayer. However, identifying the dependence of this effect on the mismatch angle would clearly require further work and goes much beyond the main objective of this paper. Figure 3(a,b) shows a direct comparison of the calculated band structures (see Methods) and the corresponding ARPES spectra of the monolayer and bilayer MoS2 on epitaxial graphene, along the K–Γ–K direction in the hexagonal Brillouin zone; the respective second derivatives are shown in Fig. 3(c,d). The measured monolayer- and bilayer-dependent band structure evolution shows excellent agreement with our Density Functional Theory (DFT) calculations. Monolayer MoS2 presents only one band at the Γ point (maximum at binding energy (BE) ~ −1.68 eV ± 0.05 eV), and this structure evolves into two branches in the case of bilayer MoS2 (maximum at BE ~ −1.25 eV ± 0.05 eV). This evolution is representative of the splitting of the bands due to the weak van der Waals interaction between the two MoS2 layers. The top of the valence band at the Γ point in the bilayer film is closer to the Fermi level than the one obtained from the monolayer (Figure S2). This indicates that MoS2 undergoes a crossover from an indirect to a direct band gap in the monolayer9,23,26, as predicted theoretically (Figure S3). Indeed, we can observe from Figure S3(a) to (c) the evolution of the band structure of MoS2 from monolayer to bi- and trilayer, calculated by DFT. Even though trilayer MoS2 has not been considered in detail here experimentally, we show the corresponding DFT result to exhibit the evolution of the band structure when considering multilayer MoS2. In the bi- and trilayer systems, the top of the valence band is located at the Γ point, yielding an indirect band gap with the bottom of the conduction band between the K (K′) and Γ points. However, in the monolayer band structure, the top of the valence band becomes very flat near the Γ point, leading to a direct gap at the K (K′) point. The evolution of the valence band at the Γ point provides a straightforward method to identify the thickness of ultrathin MoS2 films, and also proves the high quality of the MoS2 transferred on epitaxial graphene. The epitaxial graphene underlayer does not affect the MoS2 band structure, as expected for a van der Waals heterostructure. However, we do not exclude the presence of the universal buckled form of 2D crystals in our MoS2 layer3. To further investigate the electronic properties of the mono- and bilayer MoS2/graphene heterostructures, micro-photoluminescence (micro-PL) and micro-Raman measurements were performed at room temperature (see Methods). The PL spectra present the two characteristic excitonic peaks A and B27,28,29,30,31 originating from transitions at the K point of the Brillouin zone (Fig. 4(a)). The spectra also reflect the band structure change from the indirect band gap of 2 ML MoS2 to the direct gap of monolayer MoS2: the PL signal is strongly enhanced when a direct band gap is present.
The same behavior was also observed for the as-grown MoS2 on SiO2 (Figure S4). From the PL spectrum we can extract the optical band gap value, which corresponds to 1.83 eV, in agreement with our DFT calculations and previous experimental works32,33,34,35,36. In the case of vertical vdW heterostructures, PL intensity variations could also arise from interference effects due to the different optical constants and thicknesses of the layers forming the heterostructure37. Buscema et al. defined37 a substrate-dependent enhancement factor Γ⁻¹ for ML MoS2 on different substrates, which allows a normalization of the spectra that takes into account the effect of optical interference. Following their results, Γ⁻¹ ~ 1 in the case of a SiO2 substrate. In the case of very thick FLG (15 nm), Γ⁻¹ is between 8–14, and it decreases strongly as the thickness of the graphene layer is reduced (~2 in the case of 5 nm FLG). In our case we have only one graphene layer, meaning that, as for SiO2, we can assume Γ⁻¹ ~ 1. Therefore, in both cases we can neglect the effect of optical interference and directly compare the raw PL data of the MoS2/SiO2 and MoS2/graphene heterostructures, as shown in Fig. 4(b,c). In the MoS2/SiO2 system, strain can build up during the growth of the MoS2 flakes because of the different thermal expansion coefficients of MoS2 and the SiO2 substrate38. Owing to the weak van der Waals forces at the interface, the transfer onto the graphene substrate releases this lattice strain39. This effect is reflected in the PL spectrum as a blue-shift of the A peak in the MoS2/graphene heterostructure38,39,40. Moreover, the PL signal in the case of the graphene underlayer is strongly quenched (by about 60–70% with respect to the SiO2 substrate). This phenomenon was also explored as a function of the laser power. In Figure S5 the integrated PL intensity is shown as a function of excitation power. As expected in this range of powers (between 0.5 mW and 25 mW), the MoS2 PL intensity evolves linearly with increasing laser excitation41,42 for both the SiO2 and graphene substrates, but in the case of the graphene underlayer the PL signal is quenched at every laser power. This phenomenon is the signature of electron transfer from MoS2 to the graphene, which hinders the recombination of electron−hole pairs created by the photoexcitation32,43. This electron transfer is not attributed to a strong coupling between MoS2 and graphene (since we consider a weak van der Waals interaction for this structure), but rather to a standard hopping of an electron from the MoS2 conduction band to an unoccupied state at the same energy in graphene43,44. Figure 4(d) shows typical Raman spectra of the MoS2/graphene heterostructure and of the pristine graphene layer45 in the wavenumber range of 300–2800 cm⁻¹. Besides the typical second-order Raman bands that originate from the SiC substrate, the three main structures typical of graphene are present in the pristine graphene and MoS2/graphene spectra: i) the D band (defect-induced mode), ii) the G band (in-plane vibration mode) and iii) the 2D band (two-phonon mode)46. In the case of the MoS2/graphene heterostructure, two new peaks are present within the wavenumber range 350–450 cm⁻¹. These two characteristic features correspond to the in-plane (E¹2g) and out-of-plane (A1g) vibrations of the Mo and S atoms in the MoS2 film34,47. The intensity maps and the Raman shift difference (Δ, the A1g − E¹2g frequency separation) are shown in Fig. 5(a). Both the intensity and Δ increase with the number of MoS2 layers.
The average Raman spectra obtained from each layer are shown in Fig. 5(b). The obtained values of Δ are ~19 cm⁻¹, ~21 cm⁻¹ and ~24 cm⁻¹, corresponding to monolayer, bilayer and multilayer (~three layers) MoS2, respectively34,47. To further examine the role of the interaction with the substrate, we compare MoS2 on graphene and on SiO2 substrates (Fig. 5(c)). After the transfer, we observed that the A1g and E¹2g modes of MoS2 on graphene upshifted by about 4 and 3 cm⁻¹ for the 1 ML and by about 3 and 2 cm⁻¹ for the 2 ML, respectively, while the line widths of these peaks, given by the full widths at half maximum (FWHM) of Lorentzian fits, are narrower than those on SiO2 (~3 cm⁻¹ smaller for the 1 ML and ~2 cm⁻¹ for the 2 ML, for both peaks). As in the case of the PL spectra, the upshift of the in-plane Raman mode (E¹2g) is the result of tensile strain release after the MoS2 transfer onto the graphene underlayer38,39,40,48. The A1g mode shows a weaker strain dependence than the E¹2g mode. Consequently, its large upshift upon transfer can be explained as the result of additional effects: i) the establishment of a van der Waals interaction between MoS2 and graphene47, and ii) a decrease in the electron concentration of the MoS2 film49. The latter, as shown by the micro-PL measurements, is due to a charge transfer under illumination from MoS2 to graphene. As illustrated by Zhou et al., the important upshift of the A1g mode is also a signature of the high quality of the interface between MoS2 and graphene48. Moreover, when MoS2 is transferred from SiO2 to graphene, the reduced substrate surface roughness and impurity density, as well as the similar lattice structure, are responsible for the narrowing of the MoS2 Raman features50. A detailed analysis of the Raman spectra of the monolayer and bilayer MoS2/graphene heterostructures and of the pristine graphene layer in the wavenumber range of 1300–2800 cm⁻¹ is shown in Fig. 5(d). As explained before, the presence of graphene is indicated by three main structures: i) the D band, ii) the G band and iii) the 2D band. The D peak is small (~1% of the G peak intensity), indicating the high quality of the pristine graphene. The intensity of this peak did not increase after transfer, suggesting that the MoS2 transfer process did not induce defects in the graphene substrate. The Raman spectra of the graphene below the MoS2 domains showed clear differences from those of the free-graphene areas. First, we observe a broad background which increases toward higher wavenumbers. This background comes from the PL of MoS2, which confirms the presence of both graphene and MoS2 in the measured area. Second, the intensity of the 2D band was reduced under the MoS2. Third, both the G and the 2D bands are shifted. In pristine graphene the 2D band is located at 2722 cm⁻¹, and it is shifted to 2710 and 2708 cm⁻¹ under monolayer and bilayer MoS2, respectively. There are two factors that can influence the Raman 2D band position: charge transfer45,51,52 and strain53,54,55. In our Raman measurements, the spectra were taken at room temperature and the laser power was kept low (~5 mW) to avoid the influence of laser heating. Thus, the observed 2D band downshift does not originate from temperature differences, which could induce different strains in graphene and MoS2. It is known that, depending on the introduced carriers, the 2D band position shifts differently56, with up- and downshifts corresponding to hole and electron doping, respectively51.
Then, in the MoS2/graphene heterostructure, this downshift indicates an increase in the electron concentration in graphene under illumination (n-type doping)53. This phenomenon is in agreement with the upshift of the Raman features of MoS2, and confirms that the photoelectrons generated by the Raman laser are transferred from the MoS2 to the graphene. At the same time, the G peak presents an upshift. If we focus our attention on the monolayer MoS2/graphene, we have an upshift of ~3 ± 1 cm⁻¹. From this shift we can obtain a quantitative estimate of the level of electron doping under illumination53,57,58,59. In fact, the G peak frequency blue-shifts linearly with the Fermi level position, as ΔωG ≈ 42 cm⁻¹/eV × |ΔEF|53. From this expression we estimate the Fermi level position with respect to the Dirac point for pristine graphene and for 1 ML MoS2/graphene as ~0.34 eV and ~0.41 eV, respectively: the upshift of the G mode of ~3 cm⁻¹ implies a change in the graphene Fermi level of ~70 meV. From these values we can calculate the variation in the n-doping of graphene using the relation |N| = (EF/ħvF)²/π and the value of the Fermi velocity obtained above from the linear fit of the Dirac cone in Fig. 2(d) (vF = 1.1 × 10⁶ m/s): we obtain an electron density for pristine graphene of |N| ~ 7 × 10¹² cm⁻², which increases to |N| ~ 10¹³ cm⁻² for 1 ML MoS2/graphene under illumination. The phototransport properties of the sample were investigated in a planar geometry (i.e., the electrodes are connected to the MoS2-decorated graphene) at room temperature, while illuminating the sample with light of energy above the MoS2 band gap (λ = 405 nm, hν ≈ 3 eV) (Fig. 6(a)). We prepared a lateral device by standard optical lithography, using dry etching to define a graphene mesa, with titanium/gold contacts (20/200 nm). Compared to pristine graphene, the photoresponse of the vdW heterostructure is significantly enhanced and a clear modulation of the current is observed under illumination, see Fig. 6(b). The generated photocurrent presents almost no dependence on the applied bias (Figure S6). Similar behavior has already been observed and attributed to a thermoelectric effect60, which here results from the inhomogeneous distribution of the MoS2 flakes at the scale of the light-source spot. On the other hand, if the size of the device is reduced down to a single MoS2 flake, the phototransport is very different, since we observe a more usual photoconductive behavior, where the slope of the I–V curve changes under illumination (Figure S7). Since the spot area of the laser is ~1 mm² and the MoS2 flakes cover about 30% of the sample, we can estimate the responsivity to be around 6 μA·W⁻¹. This limited value is the result of the limited absorption of the semiconductor (MoS2), because of its thickness and limited coverage. From an applied point of view, photocurrent generation in TMDC heterostructures generally suffers from two main limitations: a slow response time35 and a strong dependence of the photoresponse on the light intensity61,62. In the following we investigate these two properties. We measured the light-intensity dependence of the response and found an almost linear dependence of the current on the photon flux (Φ), see Fig. 6(c). More precisely, the power dependence of the current (I) can be fitted using a power law44, I ∝ Φ^0.8. Such a dependence is weaker than for large-gain systems like the graphene–PbS quantum dot hybrid system63, which allows using the MoS2-decorated graphene system over a larger range of light flux.
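As a numeric cross-check of the doping estimate discussed above, the short script below recomputes the Fermi levels and the expected G-peak shift from the quoted carrier densities. The relation EF = ħvF√(πN) and the ~42 cm⁻¹/eV G-peak slope are the standard literature values assumed here, not outputs of this paper.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
EV = 1.602176634e-19     # J
VF = 1.1e6               # m/s, Fermi velocity from the ARPES fit

def fermi_level_eV(n_cm2: float) -> float:
    """E_F above the Dirac point for a carrier density n (cm^-2),
    using E_F = hbar * v_F * sqrt(pi * n)."""
    n_m2 = n_cm2 * 1e4
    return HBAR * VF * np.sqrt(np.pi * n_m2) / EV

ef_pristine = fermi_level_eV(7e12)   # ~0.34 eV
ef_hetero = fermi_level_eV(1e13)     # ~0.41 eV
shift = ef_hetero - ef_pristine
print(f"E_F shift: {1e3 * shift:.0f} meV")             # ~66 meV, i.e. ~70 meV
print(f"expected G-peak upshift: {42 * shift:.1f} cm^-1")  # ~2.8 cm^-1, i.e. ~3 cm^-1
```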
By modulating the incident laser intensity with a signal generator, we can extract the frequency response of the graphene–MoS2 system64. The 3 dB cutoff frequency is measured to be 2 kHz, see Fig. 6(d).

In summary, we have studied the electronic properties of large-area MoS2 wet-transferred onto an epitaxial graphene layer. From the PL, ARPES and micro-Raman data presented above, it is clear that the MoS2/graphene heterostructure presents good long-range order at large scale. Our ARPES measurements on the heterostructure showed that graphene and MoS2 largely retained their original electronic structures, indicating weak van der Waals interactions between the two crystals. The PL quenching in the MoS2/graphene heterostructure, the upshift of the A1g Raman mode of MoS2 and the downshift of the 2D Raman mode of graphene confirm the charge transfer between the MoS2 and graphene layers. Our work suggests that the optical properties of MoS2 are strongly affected by the graphene underlayer. The fact that an interaction is visible between MoS2 and graphene is a clear signature of the quality of the wet transfer process and of the absence of interfacial contamination. Moreover, this charge transfer can be influenced by the doping of the graphene underlayer, which varies the band alignment in the heterostructure and should be considered in device design and fabrication. Furthermore, an efficient photoresponse was observed in large 2D MoS2/epitaxial graphene devices, opening a new way in graphene optoelectronics.

## Methods

### Growth of graphene/SiC(0001)

The monolayer graphene studied in this paper was produced via a two-step process beginning with a starting substrate of 4H-SiC(0001)65. Prior to graphitization, the substrate is hydrogen etched (100% H2) at 1550 °C to produce well-ordered atomic terraces of SiC. Subsequently, the SiC sample is heated to 1000 °C at a pressure of about 10⁻⁵ mbar and then further heated to 1550 °C in an Ar atmosphere. This graphitization process results in the growth of an electrically active graphene layer on top of the buffer layer, which is covalently bound to the substrate66. The sample was cooled down to room temperature and transferred ex situ to perform the different measurements.

### Growth of MoS2/SiO2/Si(001)

MoS2/SiO2 samples were grown via CVD in a 1″ quartz tube furnace. Microliter droplets of a saturated ammonium heptamolybdate solution were dried onto the corners of a Si/SiO2 growth substrate that had previously been coated with a layer of sodium cholate (1% solution spin-coated at 4000 rpm for 60 s). Sodium cholate is a known growth promoter, acting to increase diffusion of the molybdenum source by increasing the surface adhesive energy relative to the adatom cohesive energy22. The growth substrate was placed in the center of the furnace and heated to 800 °C. A 25 mg sulfur pellet was placed on a piece of silicon and positioned upstream in the furnace such that its temperature was approximately 150 °C. Carrier gas (500 sccm N2) was used to bring sulfur vapor into the furnace for a 30 min growth period. The sample was then rapidly cooled by cracking open the furnace and sliding it downstream with respect to the quartz tube.

### Characterization of the MoS2/graphene heterostructure

The PL measurements were carried out using a commercial confocal Renishaw micro-Raman microscope with a 100× objective and a Si detector (detection range up to ~2.2 eV).
The Raman measurements were performed on the same microscope using a 532 nm laser in an ambient environment at room temperature. To ensure the reproducibility of the data, we followed a careful alignment and optimization protocol. The excitation laser was focused onto the samples with a spot diameter of ~1 μm and an incident power of ~5 mW. The integration time was optimized to obtain a satisfactory signal-to-noise ratio. We obtained Raman spatial maps by raster scanning with a 0.3 μm step size using a precision 2D mapping stage. The ARPES measurements were conducted at the CASSIOPEE beamline of Synchrotron SOLEIL (Saint-Aubin, France). We used linearly polarized photons of 50 eV and 90 eV and a hemispherical electron analyzer with vertical slits to allow band mapping. The total angle and energy resolutions were 0.25° and 16 meV, respectively. The mean diameter of the incident photon beam was smaller than 50 μm. All ARPES experiments were done at room temperature. For the electrical measurements, the samples were characterized in air at room temperature. The temporal and power dependences of the current were obtained while biasing the sample and measuring the current with a Keithley 2634B source meter. Illumination was ensured by a 405 nm blue laser diode with tunable light intensity. The frequency dependence of the photocurrent was measured while the sample was biased using the Keithley 2634B; the output signal was amplified in a Keithley 427 current amplifier and acquired on an HP oscilloscope.

### DFT calculations

First-principles calculations were performed using a very efficient DFT localized-orbital molecular dynamics technique (FIREBALL)67,68,69,70. Basis sets of sp³d⁵ for S and Mo were used, with cutoff radii (in atomic units) s = 4.3, p = 4.7, d = 5.5 (S) and s = 5.0, p = 5.6, d = 4.8 (Mo). In this study we have considered standard unit cells of 3, 6 and 9 atoms to describe a mono-, bi- and trilayer of MoS2, respectively. Each configuration was relaxed using a sample of 32 k-points in the Brillouin zone. In the case of multilayer MoS2, we considered the most stable AB stacking, and the equilibrium distance was determined using the LCAO-S2 + vdW formalism71,72. Finally, a set of 300 special k-points along the K′–Γ–K path was used for the band structure calculations. The corresponding band structures as well as extended atomic representations of the multilayer MoS2 are provided in Fig. S3(a–f) in the Supplementary Information.

How to cite this article: Pierucci, D. et al. Large area molybdenum disulphide - epitaxial graphene vertical Van der Waals heterostructures. Sci. Rep. 6, 26656; doi: 10.1038/srep26656 (2016).

## References

1. Geim, A. K. Graphene: status and prospects. Science 324, 1530–4 (2009).
2. Geim, A. K. & Novoselov, K. S. The rise of graphene. Nat. Mater. 6, 183–91 (2007).
3. O'Hare, A., Kusmartsev, F. V. & Kugel, K. I. A Stable "Flat" Form of Two-Dimensional Crystals: Could Graphene, Silicene, Germanene Be Minigap Semiconductors? Nano Lett. 12, 1045–1052 (2012).
4. Geim, A. K. & Grigorieva, I. V. Van der Waals heterostructures. Nature 499, 419–25 (2013).
5. Butler, S. Z. et al. Progress, Challenges, and Opportunities in Two-Dimensional Materials Beyond Graphene. ACS Nano 7, 2898–2926 (2013).
6. Novoselov, K. S. Nobel Lecture: Graphene: Materials in the Flatland. Rev. Mod. Phys. 83, 837–849 (2011).
7. Novoselov, K. S. et al. A roadmap for graphene. Nature 490, 192–200 (2012).
8. Novoselov, K. S. & Castro Neto, A. H.
Two-dimensional crystals-based heterostructures: materials with tailored properties. Phys. Scr. T146, 014006 (2012).
9. Wang, Q. H., Kalantar-Zadeh, K., Kis, A., Coleman, J. N. & Strano, M. S. Electronics and optoelectronics of two-dimensional transition metal dichalcogenides. Nat. Nanotechnol. 7, 699–712 (2012).
10. Britnell, L. et al. Strong Light-Matter Interactions in Heterostructures of Atomically Thin Films. Science 340, 1311–1314 (2013).
11. Roy, K. et al. Graphene–MoS2 hybrid structures for multifunctional photoresponsive memory devices. Nat. Nanotechnol. 8, 826–830 (2013).
12. Zhang, W. et al. Ultrahigh-Gain Photodetectors Based on Atomically Thin Graphene-MoS2 Heterostructures. Sci. Rep. 4, 1–8 (2014).
13. Pallecchi, E. et al. High Electron Mobility in Epitaxial Graphene on 4H-SiC(0001) via post-growth annealing under hydrogen. Sci. Rep. 4, 4558 (2014).
14. Pierucci, D. et al. Self-organized metal-semiconductor epitaxial graphene layer on off-axis 4H-SiC(0001). Nano Res. 8, 1026–1037 (2015).
15. Lin, Y.-C. et al. Direct Synthesis of van der Waals Solids. ACS Nano 8, 3715–3723 (2014).
16. Lin, Y. et al. Atomically Thin Heterostructures Based on Single-Layer Tungsten Diselenide and Graphene. 1–6 (2014).
17. Eichfeld, S. M. et al. Highly Scalable, Atomically Thin WSe2 Grown via Metal-Organic Chemical Vapor Deposition. ACS Nano 9, 2080–2087 (2015).
18. Miwa, J. A. et al. Van der Waals Epitaxy of Two-Dimensional MoS2-Graphene Heterostructures in Ultrahigh Vacuum. ACS Nano 9, 6502–6510 (2015).
19. Liu, X. et al. Rotationally Commensurate Growth of MoS2 on Epitaxial Graphene. ACS Nano 10, 1067–1075 (2016).
20. Shi, Y. et al. Van der Waals Epitaxy of MoS2 Layers Using Graphene as Growth Templates. Nano Lett. 12, 2784–2791 (2012).
21. Ago, H. et al. Controlled van der Waals Epitaxy of Monolayer MoS2 Triangular Domains on Graphene. ACS Appl. Mater. Interfaces 7, 5265–5273 (2015).
22. Han, G. H. et al. Seeded growth of highly crystalline molybdenum disulphide monolayers at controlled locations. Nat. Commun. 6, 6128 (2015).
23. Jin, W. et al. Direct Measurement of the Thickness-Dependent Electronic Band Structure of MoS2 Using Angle-Resolved Photoemission Spectroscopy. Phys. Rev. Lett. 111, 106801 (2013).
24. Brumme, T., Calandra, M. & Mauri, F. First-principles theory of field-effect doping in transition-metal dichalcogenides: Structural properties, electronic structure, Hall coefficient, and electrical conductivity. Phys. Rev. B 91, 155436 (2015).
25. Diaz, H. C. et al. Direct observation of interlayer hybridization and Dirac relativistic carriers in graphene/MoS2 van der Waals heterostructures. Nano Lett. 15, 1135–40 (2015).
26. Kuc, A., Zibouche, N. & Heine, T. Influence of quantum confinement on the electronic structure of the transition metal sulfide TS2. Phys. Rev. B 83, 245213 (2011).
27. Splendiani, A. et al. Emerging photoluminescence in monolayer MoS2. Nano Lett. 10, 1271–5 (2010).
28. Eda, G. et al. Photoluminescence from Chemically Exfoliated MoS2. Nano Lett. 11, 5111–5116 (2011).
29. Mak, K. F., Lee, C., Hone, J., Shan, J. & Heinz, T. F. Atomically Thin MoS2: A New Direct-Gap Semiconductor. Phys. Rev. Lett. 105, 136805 (2010).
30. Fang, H. et al. Strong interlayer coupling in van der Waals heterostructures built from single-layer chalcogenides. Proc. Natl. Acad. Sci. USA 111, 6198–202 (2014).
31. Zhang, X. et al. Vertical heterostructures of layered metal chalcogenides by van der Waals epitaxy. Nano Lett. 14, 3047–54 (2014).
32. Bhanu, U., Islam, M. R., Tetard, L.
& Khondaker, S. I. Photoluminescence quenching in gold-MoS2 hybrid nanoflakes. Sci. Rep. 4, 5575 (2014).
33. Lagarde, D. et al. Carrier and Polarization Dynamics in Monolayer MoS2. Phys. Rev. Lett. 112, 047401 (2014).
34. Li, H. et al. From Bulk to Monolayer MoS2: Evolution of Raman Scattering. Adv. Funct. Mater. 22, 1385–1390 (2012).
35. Yin, Z. et al. Single-Layer MoS2 Phototransistors. ACS Nano 6, 74–80 (2012).
36. Deng, Y. et al. Black Phosphorus-Monolayer MoS2 van der Waals Heterojunction p-n Diode. ACS Nano 8, 8292–8299 (2014).
37. Buscema, M., Steele, G. A., van der Zant, H. S. J. & Castellanos-Gomez, A. The effect of the substrate on the Raman and photoluminescence emission of single-layer MoS2. Nano Res. 7, 561–571 (2015).
38. Wang, S., Wang, X. & Warner, J. H. All Chemical Vapor Deposition Growth of MoS2:h-BN Vertical van der Waals Heterostructures. ACS Nano 9, 5246 (2015).
39. Liu, K. et al. Elastic Properties of Chemical-Vapor-Deposited Monolayer MoS2, WS2, and Their Bilayer Heterostructures. Nano Lett. 14, 5097–5103 (2014).
40. Conley, H. J. et al. Bandgap Engineering of Strained Monolayer and Bilayer MoS2. Nano Lett. 13, 3626–3630 (2013).
41. Ko, P. J. et al. Laser Power Dependent Optical Properties of Mono- and Few-Layer MoS2. J. Nanosci. Nanotechnol. 15, 6843–6846 (2015).
42. Korn, T., Heydrich, S., Hirmer, M., Schmutzler, J. & Schüller, C. Low-temperature photocarrier dynamics in monolayer MoS2. Appl. Phys. Lett. 99, 102109 (2011).
43. Mose, L. S. Large-Area Single-Layer MoSe2 and Its van der Waals Heterostructures. ACS Nano 8, 6655–6662 (2014).
44. Chen, Z., Biscaras, J. & Shukla, A. A high performance graphene/few-layer InSe photo-detector. Nanoscale 7, 5981–6 (2015).
45. Trabelsi, A. B. G. et al. Charged nano-domes and bubbles in epitaxial graphene. Nanotechnology 25, 165704 (2014).
46. Ni, Z. et al. Raman spectroscopy of epitaxial graphene on a SiC substrate. Phys. Rev. B 77, 115416 (2008).
47. Lee, C. et al. Anomalous Lattice Vibrations of Single- and Few-Layer MoS2. ACS Nano 4, 2695–2700 (2010).
48. Zhou, K. et al. Raman Modes of MoS2 Used as Fingerprint of van der Waals Interactions. ACS Nano 8, 9914–9924 (2014).
49. Chakraborty, B. et al. Symmetry-dependent phonon renormalization in monolayer MoS2 transistor. Phys. Rev. B 85, 161403 (2012).
50. Li, L. et al. Raman shift and electrical properties of MoS2 bilayer on boron nitride substrate. Nanotechnology 26, 295702 (2015).
51. Bkakri, R. et al. Effects of the graphene content on the conversion efficiency of P3HT:Graphene based organic solar cells. J. Phys. Chem. Solids 85, 206–211 (2015).
52. Bkakri, R., Kusmartseva, O. E., Kusmartsev, F. V., Song, M. & Bouazizi, A. Degree of phase separation effects on the charge transfer properties of P3HT:Graphene nanocomposites. J. Lumin. 161, 264–270 (2015).
53. Das, A. et al. Monitoring dopants by Raman scattering in an electrochemically top-gated graphene transistor. Nat. Nanotechnol. 3, 210–5 (2008).
54. Ferrari, A. C. & Basko, D. M. Raman spectroscopy as a versatile tool for studying the properties of graphene. Nat. Nanotechnol. 8, 235–46 (2013).
55. Federspiel, F. et al. Distance dependence of the energy transfer rate from a single semiconductor nanostructure to graphene. Nano Lett. 15, 1252–8 (2015).
56. Froehlicher, G. & Berciaud, S. Raman spectroscopy of electrochemically gated graphene transistors: Geometrical capacitance, electron-phonon, electron-electron, and electron-defect scattering. Phys. Rev. B 91, 205413 (2015).
57. Yan, J., Zhang, Y., Kim, P.
& Pinczuk, A. Electric Field Effect Tuning of Electron-Phonon Coupling in Graphene. Phys. Rev. Lett. 98, 166802 (2007).
58. Chen, C.-F. et al. Controlling inelastic light scattering quantum pathways in graphene. Nature 471, 617–20 (2011).
59. Jnawali, G. et al. Observation of Ground- and Excited-State Charge Transfer at the C60/Graphene Interface. ACS Nano 9, 7175–7185 (2015).
60. Buscema, M. et al. Photocurrent generation with two-dimensional van der Waals semiconductors. Chem. Soc. Rev. 44, 3691–718 (2015).
61. Lopez-Sanchez, O., Lembke, D., Kayci, M., Radenovic, A. & Kis, A. Ultrasensitive photodetectors based on monolayer MoS2. Nat. Nanotechnol. 8, 497–501 (2013).
62. Boscher, N. D., Carmalt, C. J., Palgrave, R. G., Gil-Tomas, J. J. & Parkin, I. P. Atmospheric Pressure CVD of Molybdenum Diselenide Films on Glass. Chem. Vap. Depos. 12, 692–698 (2006).
63. Sun, Z. et al. Infrared photodetectors based on CVD-grown graphene and PbS quantum dots with ultrahigh responsivity. Adv. Mater. 24, 5878–83 (2012).
64. Yung, K. C., Wu, W. M., Pierpoint, M. P. & Kusmartsev, F. V. Introduction to graphene electronics – a new era of digital transistors and devices. Contemp. Phys. 54, 233–251 (2013).
65. Pallecchi, E. et al. Observation of the quantum Hall effect in epitaxial graphene on SiC(0001) with oxygen adsorption. Appl. Phys. Lett. 100, 253109 (2012).
66. Lalmi, B. et al. Flower-shaped domains and wrinkles in trilayer epitaxial graphene on silicon carbide. Sci. Rep. 4, 4066 (2014).
67. Lewis, J. et al. Further developments in the local-orbital density-functional-theory tight-binding method. Phys. Rev. B 64, 195103 (2001).
68. Lewis, J. P. et al. Advances and applications in the FIREBALL ab initio tight-binding molecular-dynamics formalism. Phys. Status Solidi B 248, 1989–2007 (2011).
69. Jelínek, P., Wang, H., Lewis, J., Sankey, O. & Ortega, J. Multicenter approach to the exchange-correlation interactions in ab initio tight-binding methods. Phys. Rev. B 71, 235101 (2005).
70. Sankey, O. F. & Niklewski, D. J. Ab initio multicenter tight-binding model for molecular-dynamics simulations and other applications in covalent systems. Phys. Rev. B 40, 3979–3995 (1989).
71. Dappe, Y., Ortega, J. & Flores, F. Intermolecular interaction in density functional theory: Application to carbon nanotubes and fullerenes. Phys. Rev. B 79, 165409 (2009).
72. Švec, M. et al. van der Waals interactions mediating the cohesion of fullerenes on graphene. Phys. Rev. B 86, 121407 (2012).

## Acknowledgements

This work was supported by the ANR H2DH grant and by a public grant overseen by the French National Research Agency (ANR) as part of the "Investissements d'Avenir" program (reference: ANR-10-LABX-0035, Labex NanoSaclay). C.H.N. and A.T.C.J. acknowledge support from the National Science Foundation EFRI-2DARE program, grant number ENG-1542879.

## Author information

### Contributions

D.P., H.H. and H.S. grew the graphene sample and carried out the Raman spectroscopy; C.H.N., A.B. and A.T.C.J. grew the MoS2 on SiO2; J.R., F.B., P.F. and A.O. conducted the ARPES measurements; E.L. and H.H. conducted the phototransport measurements; Y.J.D. carried out the DFT calculations. All the authors participated in analyzing the data and writing the paper.

### Corresponding author

Correspondence to Abdelkarim Ouerghi.

## Ethics declarations

### Competing interests

The authors declare no competing financial interests.

## Rights and permissions

Pierucci, D., Henck, H., Naylor, C. et al.
Large area molybdenum disulphide - epitaxial graphene vertical Van der Waals heterostructures. Sci. Rep. 6, 26656 (2016). https://doi.org/10.1038/srep26656
https://www.cuemath.com/jee/examples-on-domains-and-ranges-of-set-7-functions/
# Examples on Domains and Ranges of Functions Set 7

Example 23

Find the range for

(a) $$f(x) = \frac{1}{{2 + \sin 3x + \cos 3x}}$$

(b) $$f(x) = [{x^2}] - {[x]^2}$$

(c) $$f(x) = \sqrt {a - x} \,\,\, + \,\,\,\sqrt {x - b}, \quad a > b > 0$$

(d) $$f(x) = {x^3} + 3{x^2} + 4x + 5$$

Solution: In the case of linear, quadratic and other simple functions, we can express x in terms of f(x) and find the values of f(x) for which x is defined (these values form the range). We did so in the last two questions. However, this is not always easily possible, so we have to find other ways that could yield the answer more easily:

(a) \begin{align} f(x) = \frac{1}{{2 + \sin 3x + \cos 3x}} = \frac{1}{{2 + \sqrt 2 \sin (3x + \pi /4)}}\end{align}

The denominator can vary from $$2 - \sqrt 2$$ to $$2 + \sqrt 2$$ because $$-1 \le \sin \theta \le 1$$. (The denominator is never 0 and hence D = $$\mathbb{R}$$.)

Therefore, $\frac{1}{{2 + \sqrt 2 }} \le f(x) \le \frac{1}{{2 - \sqrt 2 }}$

(b) Let x be expressed as I + f, where I is the integral part and f the fractional part of x. Then

f(x) = $$[{(I + f)^2}] - {I^2} = [{I^2} + {f^2} + 2If] - {I^2} = [{f^2} + 2If]$$

A little thinking will show that the right side can take on any integral value, whether positive, zero or negative. (Try out some examples: for x = 0.5, f(x) = 0; for x = –0.5, f(x) = –1; for x = 100.9, f(x) = 180; for x = –100.9, f(x) = –21. Also assume any value for f(x) and see whether you can find a corresponding value for x.) {You could also note that the difference between $${x^2}$$ and $${\left[ x \right]^2}$$ can be increased arbitrarily in magnitude due to the term 2If (which contains I). Try visualizing this in the form of a graph.}

Hence, f(x) can take on any integer value, i.e. R = $$\mathbb{Z}$$ (the set of integers).

(c) For the domain, we require $$a - x \ge 0$$ and $$x - b \ge 0$$ $$\Rightarrow D = \left[ {b,\,\,a} \right]$$

For the range, let y = f(x) = $$\sqrt {a - x} + \sqrt {x - b}$$. We see that y > 0. Observe the expression for f(x) carefully: it is symmetric w.r.t. a and b. This gives us a hint that f(x) should attain an extremum at $$x = \frac{{a + b}}{2}$$. (We can, of course, prove this.)

At $$x = a$$: f(x) = $$\sqrt {a - b}$$

At $$x = b$$: f(x) = $$\sqrt {a - b}$$

At x = \begin{align}\frac{{a + b}}{2}\end{align}: f(x) = $$\sqrt {2(a - b)}$$

Hence, the observation of some symmetry in the expression directly allows us to write the range as R = $$\left[ {\sqrt {a - b} ,\sqrt {2(a - b)} } \right]$$

Still not convinced about the symmetry part? Consider $g(x) = {(x - {a_1})^2} + {(x - {a_2})^2} + \ldots + {(x - {a_n})^2}$

Obviously, the maximum for $$g\left( x \right)$$ is unbounded. What is the minimum, and for what value of x is it attained? Symmetry in the expression hints that {x_{\min}} = \begin{align}\frac{{{a_1} + {a_2} + \ldots + {a_n}}}{n}\end{align}. Let's verify this.

$$g(x) = n{x^2} - 2({a_1} + {a_2} + \ldots + {a_n})x + a_1^2 + a_2^2 + \ldots + a_n^2 = p{x^2} + qx + r$$, where $$p = n$$, $$q = -2({a_1} + {a_2} + \ldots + {a_n})$$ and $$r = a_1^2 + a_2^2 + \ldots + a_n^2$$.

From quadratic expressions, we know that this expression is minimum for x = $$\frac{{ - q}}{{2p}}$$ (verify this), which for this case becomes $$\frac{{{a_1} + {a_2} + \ldots + {a_n}}}{n}$$! Symmetry directly tells us the answer.
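Before moving to the analytic treatment of part (c), here is a quick numerical sanity check of the ranges found in (a) and (c). The snippet is purely illustrative; in particular, the values of a and b for part (c) are arbitrary sample choices.

```python
import numpy as np

# Part (a): sample f(x) = 1/(2 + sin 3x + cos 3x) densely over one period
# and compare against the claimed range [1/(2+sqrt(2)), 1/(2-sqrt(2))].
x = np.linspace(0, 2 * np.pi, 200001)
fa = 1.0 / (2 + np.sin(3 * x) + np.cos(3 * x))
print(fa.min(), 1 / (2 + np.sqrt(2)))   # both ~0.2929
print(fa.max(), 1 / (2 - np.sqrt(2)))   # both ~1.7071

# Part (c): f(x) = sqrt(a-x) + sqrt(x-b) on [b, a], for sample a, b.
a, b = 5.0, 2.0
t = np.linspace(b, a, 200001)
fc = np.sqrt(a - t) + np.sqrt(t - b)
print(fc.min(), np.sqrt(a - b))         # both ~1.732
print(fc.max(), np.sqrt(2 * (a - b)))   # both ~2.449
```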
In our original question, suppose you want to solve it analytically: y = $$\sqrt {a - x} + \sqrt {x - b}$$

Squaring and rearranging gives $${y^2}$$ = $$(a - b) + \,\,\underbrace {2\sqrt {(a - x)(x - b)} }_{{\rm{always\ non\text{-}negative}}}$$ $$\ge a - b$$

$$\Rightarrow$$ y $$\ge \sqrt {a - b}$$ ... (i)

Now $${y^2} - (a - b)$$ = $$2\sqrt {(a - x)(x - b)}$$

Squaring and arranging in the form of a quadratic in x gives $$4{x^2} - 4(a + b)x + {y^4} - 2(a - b){y^2} + {(a + b)^2}$$ = 0

Since x is real, the discriminant of this equation should satisfy D $$\ge$$ 0 (this gives a constraint on y, i.e. on the range, as in the earlier cases)

$$\Rightarrow$$ $${(a + b)^2}$$ $$\ge$$ $${y^4} - 2(a - b){y^2} + {(a + b)^2}$$

$$\Rightarrow {y^2} \le 2(a - b)$$ $$\Rightarrow$$ y $$\le \sqrt {2(a - b)}$$ .....(ii)

Combining (i) and (ii) gives R = $$\left[ {\sqrt {a - b} ,\sqrt {2(a - b)} } \right]$$

(d) Although the expression for f(x) looks a bit complicated, we can at once determine the range as follows. As x increases (or as x $$\to \infty$$), f(x) will keep on increasing in an unbounded fashion $$\left( {f\left( x \right) \to \infty } \right)$$. Similarly, as x decreases (or as x $$\to -\infty$$), f(x) will keep on decreasing in an unbounded fashion $$\left( {f\left( x \right) \to - \infty } \right)$$. Also, since f(x) is a polynomial function, it is continuous (and hence will vary continuously). Hence, f(x) will vary between –$$\infty$$ and +$$\infty$$:

R = $$\mathbb{R}$$

Example 24

Find the range for

(i) f\left( x \right) =\begin{align} \frac{1}{{\sin x + 2\cos x + 3}}\end{align} (ii) f\left( x \right) =\begin{align} \frac{1}{{{x^4} + 2{x^2} + 2}}\end{align} (iii) $$f\left( x \right) = \sqrt {3{x^2} - 4x + 5}$$ (iv) f\left( x \right) = \begin{align}\frac{1}{{1 + 3{{\left\{ x \right\}}^2}}}\end{align}

Solution: (i) f\left( x \right) = \begin{align} \frac{1}{{\sin x + 2\cos x + 3}}\end{align}

To evaluate the range, our approach should be to somehow determine the range of the variable term {sin x + 2 cos x} in the denominator; this can be done by reducing this term to a simpler form. We describe the general approach to reduce A sin x + B cos x:

$A\sin x + B\cos x = \sqrt {{A^2} + {B^2}} \left( {\frac{A}{{\sqrt {{A^2} + {B^2}} }}\sin x + \frac{B}{{\sqrt {{A^2} + {B^2}} }}\cos x} \right)$

Put $$\frac{A}{{\sqrt {{A^2} + {B^2}} }} = \cos \phi \,{\rm{and }}\frac{B}{{\sqrt {{A^2} + {B^2}} }} = \sin \phi \,\,{\rm{where }}\tan \phi = \frac{B}{A}$$ (verify that this substitution is valid)

Therefore, $$A\sin x + B\cos x = \sqrt {{A^2} + {B^2}} \sin \left( {x + \phi } \right)$$

For this particular question, $$\sin x + 2\cos x = \sqrt 5 \sin \left( {x + \phi } \right)\,\,\,\,\,\,\,\,\,{\rm{where }}\tan \phi = 2$$

\begin{align}&\Rightarrow - \sqrt 5 \le \sqrt 5 \sin \left( {x + \phi } \right) \le \sqrt 5 \\&\Rightarrow - \sqrt 5 \le \sin x + 2\cos x \le \sqrt 5 \Rightarrow 3 - \sqrt 5 \le \sin x + 2\cos x + 3 \le 3 + \sqrt 5 \\&\Rightarrow \frac{1}{{3 + \sqrt 5 }} \le \frac{1}{{\sin x + 2\cos x + 3}} \le \frac{1}{{3 - \sqrt 5 }}\,\,\,\,\,\,\,\,\,\,\, \Rightarrow R = \left[ {\frac{1}{{3 + \sqrt 5 }},\frac{1}{{3 - \sqrt 5 }}} \right]\end{align}

(ii) f\left( x \right) = \begin{align} \frac{1}{{{x^4} + 2{x^2} + 2}} = \frac{1}{{{{\left( {{x^2} + 1} \right)}^2} + 1}}\end{align}

Now, $${x^2} + 1 \ge 1$$

\begin{align} \Rightarrow {\left( {{x^2} + 1} \right)^2} + 1 \ge 2\,\,\, \Rightarrow 0 < \frac{1}{{{{\left( {{x^2} + 1} \right)}^2} + 1}} \le \frac{1}{2}\,\,\,\,\,\, \Rightarrow R = \left( {0,\frac{1}{2}} \right]\end{align}
(iii) $$f\left( x \right) = \sqrt {3{x^2} - 4x + 5}$$

The expression inside the square root function is $$3{x^2} - 4x + 5 = 3\left( {{x^2} - \frac{4}{3}x + \frac{5}{3}} \right) = 3{\left( {x - \frac{2}{3}} \right)^2} + \frac{{11}}{3} \ge \frac{{11}}{3}$$

Therefore, $$\sqrt {3{x^2} - 4x + 5} \ge \sqrt {\frac{{11}}{3}} \Rightarrow R = \left[ {\sqrt {\frac{{11}}{3}} ,\infty } \right)$$

(iv) f\left( x \right) = \begin{align}\frac{1}{{1 + 3{{\left\{ x \right\}}^2}}}\end{align}

\begin{align}&0 \le \left\{ x \right\} < {\kern 1pt} \,1\,\,\,\, \Rightarrow 0 \le 3{\left\{ x \right\}^2} < 3\,\,\,\,\,\, \Rightarrow 1 \le 1 + 3{\left\{ x \right\}^2} < 4\\& \Rightarrow \frac{1}{4} < \frac{1}{{1 + 3{{\left\{ x \right\}}^2}}} \le 1\qquad\qquad \Rightarrow R = \left( {\frac{1}{4},1} \right]\end{align}

## TRY YOURSELF - II

Q. 1 Find the domains of the following functions

(a) $$f\left( x \right) = \sqrt {2x + 1}$$ (b) f\left( x \right) = \begin{align}\frac{1}{{\sqrt {\sin x - 1} }}\end{align} (c) f\left( x \right) =\begin{align} \frac{1}{{\sqrt {1 - \sin x} }}\end{align} (d) f\left( x \right) =\begin{align} \frac{1}{{1 - {{\cos }^2}x}}\end{align} (e) f\left( x \right) =\begin{align} \frac{1}{{\sqrt {\left\{ x \right\}} }}\end{align} (f) $$f\left( x \right) = \sqrt {{x^2} - 3x + 2}$$ (g) f\left( x \right) =\begin{align} \frac{1}{{{x^2} + 2x + 4}}\end{align} (h) f\left( x \right) =\begin{align} \frac{1}{{2{x^2} + 5x + 2}}\end{align} (i) f\left( x \right) =\begin{align} \frac{1}{{{x^2} + 7x + 1}}\end{align} (j) f\left( x \right) = \begin{align}\frac{1}{{1 + 2\left[ x \right] + {{\left[ x \right]}^2}}}\end{align} (k) f\left( x \right) =\begin{align} \frac{1}{{1 + {x^2}}} + \sqrt {\left[ x \right]} \end{align} (l) f\left( x \right) =\begin{align} \frac{1}{{\left[ {{x^2}} \right]}}\end{align} (m) f\left( x \right) = \begin{align}\frac{1}{{\left[ {\left| x \right|} \right]}}\end{align} (n) f\left( x \right) =\begin{align} \frac{1}{{\left| {\left[ x \right]} \right|}}\end{align} (o) f\left( x \right) =\begin{align} \frac{1}{{\sqrt {\left[ x \right]} }}\end{align}

Q. 2 Find the range of the following functions

(a) $$f\left( x \right) = \left[ x \right] + 1$$ (b) $$f\left( x \right) = 1 + \sin x$$ (c) $$f\left( x \right) = \left| {\left[ x \right]} \right|$$ (d) f\left( x \right) = \begin{align}\frac{1}{{{x^2}}}\end{align} (e) $$f\left( x \right) = \left| {\tan x} \right|$$ (f) $$f\left( x \right) = \left| {{x^2} - 3x + 2} \right|$$ (g) $$f\left( x \right) = \left[ {{x^2}} \right]$$ (h) $$f\left( x \right) = {\left[ x \right]^2}$$ (i) f\left( x \right) =\begin{align} 1 - x - {x^2}\end{align} (j) $$f\left( x \right) = 4{x^2} + 3x + 2$$ (k) $$f\left( x \right) = - 1 - 2x - 3{x^2}$$ (l) f\left( x \right) = \begin{align}\frac{1}{{\left| x \right|}}\end{align} (m) $$f\left( x \right) = 3\sin x + 4\cos x\,$$ (n) $$f\left( x \right) = \left[ {{x^2} - 2} \right]$$ (o) $$f\left( x \right) = \left[ {\sin \left| x \right|} \right]$$
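For parts (i) and (iv) of Example 24, a numerical spot-check of the derived ranges is straightforward; the snippet below is only a sanity aid (the sampling interval is an arbitrary choice wide enough to cover full periods):

```python
import numpy as np

x = np.linspace(-10, 10, 2000001)

# (i): 1/(sin x + 2 cos x + 3), claimed range [1/(3+sqrt(5)), 1/(3-sqrt(5))]
f1 = 1.0 / (np.sin(x) + 2 * np.cos(x) + 3)
print(f1.min(), 1 / (3 + np.sqrt(5)))   # both ~0.1910
print(f1.max(), 1 / (3 - np.sqrt(5)))   # both ~1.3090

# (iv): 1/(1 + 3{x}^2), claimed range (1/4, 1]
frac = x - np.floor(x)                  # fractional part {x}
f4 = 1.0 / (1 + 3 * frac**2)
print(f4.min(), f4.max())               # min just above 0.25, max = 1.0
```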
https://arxiv.org/abs/0905.0275
# Title: The embedding conjecture for quasi-ordinary hypersurfaces

Abstract: This paper has two objectives: we first generalize the theory of Abhyankar-Moh to quasi-ordinary polynomials; then we use the notion of approximate roots and that of generalized Newton polygons in order to prove the embedding conjecture for this class of polynomials. This conjecture, made by S. S. Abhyankar and A. Sathaye, says that if a hypersurface of the affine space is isomorphic to a coordinate, then it is equivalent to it.

Subjects: Algebraic Geometry (math.AG); Commutative Algebra (math.AC)
MSC classes: 32S25, 32S70
Cite as: arXiv:0905.0275 [math.AG] (or arXiv:0905.0275v1 [math.AG] for this version)

## Submission history

From: Abdallah Assi
[v1] Sun, 3 May 2009 17:26:30 UTC (12 KB)
http://mathonline.wikidot.com/integration-with-partial-fractions
# Integration with Partial Fractions

Before you read this section on Integration by Partial Fractions, please consult the page on Long Division of Improper Rational Functions.

Suppose that we have a function in the form of $f(x) = \frac{P(x)}{Q(x)}$ where $P$ and $Q$ are both polynomials. Hence, $f$ is said to be a rational function. Functions in this form can be integrated with a technique known as integration by partial fractions. We will now demonstrate this.

Suppose that we have the following function, $f(x) = \frac{x}{x+2} + \frac{1 + x}{x + 3}$. If we were to simplify this function and write it all under a common denominator, then we would obtain:

(1)
\begin{align} f(x) = \frac{x(x+3) + (1 + x)(x+2)}{(x+2)(x+3)} = \frac{x^2 + 3x + x + 2 + x^2 + 2x}{x^2 + 3x + 2x + 6} = \frac{2x^2 + 6x + 2}{x^2 + 5x + 6} \end{align}

Now suppose that instead, we were given that $f(x) = \frac{2x^2 + 6x + 2}{x^2 + 5x + 6}$ without knowing its partial fraction decomposition, and suppose we wanted to find $\int \frac{2x^2 + 6x + 2}{x^2 + 5x + 6} \: dx$. It turns out that knowing the partial fraction decomposition and integrating that instead is generally much easier, as:

(2)
\begin{align} \int \frac{x}{x+2} + \frac{1 + x}{x + 3} \: dx = \int \frac{2x^2 + 6x + 2}{x^2 + 5x + 6} \: dx \end{align}

# Finding the Partial Fractions of a Rational Function

Recall the page on Long Division of Improper Rational Functions. We will be able to apply the technique of integration with partial fractions only when the rational function is proper. If the rational function is improper, then we must first use long division. Let's first look at an example.

## Example 1

For $f(x) = \frac{x^3 + x^2 + x + 1}{x - 1}$, determine $\int \frac{x^3 + x^2 + x + 1}{x - 1} \: dx$.

We can use integration by partial fractions for this example. First let's note that the degree of the numerator is greater than the degree of the denominator, so we can use long division of polynomials to rewrite this rational function. When we divide the numerator by the denominator, we get the quotient $S(x) = x^2 + 2x + 3$ and the remainder $R(x) = 4$. Hence our decomposition is:

(3)
\begin{align} f(x) = \frac{x^3 + x^2 + x + 1}{x - 1} = x^2 + 2x + 3 + \frac{4}{x - 1} \end{align}

Integrating $f$ is much easier now:

(4)
\begin{align} \int \frac{x^3 + x^2 + x + 1}{x - 1} \: dx = \int x^2 + 2x + 3 \: dx + \int \frac{4}{x - 1} \: dx = \frac{x^3}{3} + x^2 + 3x + 4 \ln \mid x - 1\mid + C \end{align}
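For readers who want to verify Example 1 symbolically, sympy's `apart` and `integrate` reproduce steps (3) and (4); this assumes sympy is available, and note that sympy returns log(x − 1) without the absolute value used above.

```python
import sympy as sp

x = sp.symbols('x')
f = (x**3 + x**2 + x + 1) / (x - 1)

# Partial fraction decomposition (does the polynomial long division for us)
print(sp.apart(f, x))        # x**2 + 2*x + 3 + 4/(x - 1)

# Antiderivative, matching (4) up to the constant of integration
print(sp.integrate(f, x))    # x**3/3 + x**2 + 3*x + 4*log(x - 1)
```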
# Partial Fraction Decomposition of Proper Rational Functions

If we get $\frac{R(x)}{Q(x)}$ into proper rational function form, then we can decompose $\frac{R(x)}{Q(x)}$. In such a case, the polynomial $Q$ can be factored as a product of linear factors $ax + b$ and/or irreducible quadratic factors $ax^2 + bx + c$, which we precisely define as follows:

Definition: A factor $m$ of a function $f$ is a Linear Factor if $m = ax + b$. Furthermore, $m$ is an Irreducible Quadratic Factor if $m = ax^2 + bx + c$ cannot be reduced further into linear factors, that is, $b^2 - 4ac < 0$ (the discriminant of $m$ is negative).

We will now look at all of the possible cases in factoring $Q(x)$ and subsequently take a look at some examples of integration by partial fractions.

## Case 1: Q(x) is a product of distinct linear factors.

Suppose that $Q(x)$ has a product of $n$ distinct linear factors, that is, $Q(x) = (a_1x + b_1)(a_2x + b_2)...(a_nx + b_n)...$, where none of these factors are repeated and none of these factors is a constant multiple of another, that is, for a constant C, $(a_ix + b_i) ≠ C(a_jx + b_j)$ for all pairs of factors. Hence, the partial fraction decomposition of $\frac{R(x)}{Q(x)}$ will be:

(5)
\begin{align} \quad \frac{R(x)}{Q(x)} = \frac{A_1}{(a_1x + b_1)} + \frac{A_2}{(a_2x + b_2)} + ... + \frac{A_n}{(a_nx + b_n)} + ... \end{align}

### Example 1

Integrate $f(x) = \frac{2x}{x^2 - x - 2}$.

By factoring the denominator of $f$, we get two distinct linear factors, namely $(x - 2)$ and $(x + 1)$. We hence know that for some $A$ and $B$:

(6)
\begin{align} \frac{2x}{x^2 - x - 2} = \frac{A}{x - 2} + \frac{B}{x + 1} = \frac{A(x + 1) + B(x - 2)}{(x - 2)(x + 1)} \end{align}

Hence it follows that $2x = A(x + 1) + B(x - 2)$ for all $x$, so we can choose convenient values of $x$ and solve for $A$ and $B$. When $x = 2$, we get that $4 = 3A$, or more appropriately $A = \frac{4}{3}$. When $x = -1$, we get that $-2 = -3B$, or rather $B = \frac{2}{3}$. Hence it follows that our partial fraction decomposition is:

(7)
\begin{align} \frac{2x}{x^2 - x - 2} = \frac{4/3}{x - 2} + \frac{2/3}{x + 1} \end{align}

Now we can integrate this function:

(8)
\begin{align} \int \frac{4/3}{x - 2} + \frac{2/3}{x + 1} \: dx = \frac{4}{3} \ln \mid x - 2 \mid + \frac{2}{3} \ln \mid x + 1 \mid + C \end{align}

## Case 2: Q(x) is a product of distinct linear factors, some of which are repeated.

Suppose that $Q(x)$ has a linear factor $(a_0x + b_0)$ that is repeated $r$ times. Then the partial fraction decomposition of $\frac{R(x)}{Q(x)}$ is:

(9)
\begin{align} \quad \frac{R(x)}{Q(x)} = \frac{A_1}{(a_0x + b_0)} + \frac{A_2}{(a_0x + b_0)^2} + ... + \frac{A_r}{(a_0x + b_0)^r} + ... \end{align}

## Case 3: Q(x) contains an irreducible quadratic factor that isn't repeated.

Suppose that $Q(x)$ has an irreducible quadratic factor $ax^2 + bx + c$ that is not repeated. Then the partial fraction decomposition of $\frac{R(x)}{Q(x)}$ is:

(10)
\begin{align} \frac{R(x)}{Q(x)} = \frac{Ax + B}{(ax^2 + bx + c)} + ... \end{align}

## Case 4: Q(x) contains an irreducible quadratic factor that is repeated.

Suppose that $Q(x)$ has an irreducible quadratic factor $a_0x^2 + b_0x + c_0$ that is repeated $r$ times. Then the partial fraction decomposition of $\frac{R(x)}{Q(x)}$ is:

(11)
\begin{align} \quad \frac{R(x)}{Q(x)} = \frac{A_1x + B_1}{(a_0x^2 + b_0x + c_0)} + \frac{A_2x + B_2}{(a_0x^2 + b_0x + c_0)^2} + ... + \frac{A_rx + B_r}{(a_0x^2 + b_0x + c_0)^r} + ... \end{align}

We will look at more examples of integration by partial fractions on the Integration with Partial Fractions Examples 1 and Integration with Partial Fractions Examples 2 pages.
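The same symbolic check works for the Case 1 example above; again this is just a verification aid, assuming sympy is available, and sympy may print the terms in a different order.

```python
import sympy as sp

x = sp.symbols('x')
f = 2*x / (x**2 - x - 2)

print(sp.apart(f, x))       # 4/(3*(x - 2)) + 2/(3*(x + 1)), up to term order
print(sp.integrate(f, x))   # 4*log(x - 2)/3 + 2*log(x + 1)/3
```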
http://www.aimsciences.org/search/author?author=Klara%20Janglajew
# American Institute of Mathematical Sciences

## Journals

PROC: The reduction principle is generalized to the case of nonautonomous difference equations in a Banach space whose right-hand side is allowed to be noninvertible and whose linear part satisfies a weaker condition than exponential dichotomy.

DCDS-B: The paper is devoted to the investigation of a linear differential equation with advanced argument $\dot y(t)=c(t)y(t+\tau),$ where $\tau>0$, and the function $c\colon [t_0,\infty)\to (0,\infty)$, $t_0\in \mathbb{R}$, is bounded and locally Lipschitz continuous. A new explicit coefficient criterion for the existence of a positive solution in terms of $c$ and $\tau$ is derived.
https://mapletacommunity.com/topic/154/using-mla-repositories-in-mathapps
Using *.mla repositories in MathApps

• Hi, I am curious about how to apply commands from an *.mla repository file in the startup code for a MathApp question. I have put a repository file called GEO3.mla into our server folder [https://. . . ./Public_Html/] (which contains all the other material, figures, etc., for our class), so that the full path to the repository file is [https://. . . ./Public_Html/GEO3.mla]. How do I then make sure that a given startup code will be able to find and run the commands in GEO3.mla? I have tried to insert various libname extensions into the head of the startup code, e.g. as follows:

libname := libname, "https://. . . /Public_Html/";
with(GEO3);

but it does not seem to activate the commands in the repository file when calling them via the MathApp question in MapleTA. Does it require a special type of libname extension or a special location of the *.mla file? Thank you very much in advance!

• @Steen I think this is because the MathApp is looking for the mla file on the local file system. I would suggest you ask [email protected]. @Anatoly is looking at this for you; we'll post it here or DM you if we work anything out.

• Does loading the mla file from an https/http server work in desktop Maple?

• Usage of libraries in MathApps has been determined to be a security concern and is no longer allowed on the hosted instances. If you're working on self-hosted (university-hosted) instances, then this can be overridden as follows:

1. Stop Tomcat.
2. Open <Tomcat>/webapps/maplenet/WEB-INF/classes/maplenetserver.properties
3. Find the definition of kernel.localhost.program_args
4. Add the following to the end of the definition (w/o quotes): "--secure-read=<MapleTA>/maple/records/…" where <MapleTA> is the full path to your MapleTA installation folder.
5. Start Tomcat.

I hope this helps!

• Thanks for looking into this! Unfortunately we do not (yet) have a local server for MapleTA, but this security issue could be an argument for becoming self-hosted. One way around the concrete problem (without self-hosting, I think) is to embed the needed (if not all) *.mla procedures directly via manual copy-paste into the startup code from the worksheet that defines the *.mla file. But this, of course, is much more cumbersome than the wished-for simple one-line reference to the *.mla file itself. Your answer, however, also raises a similar question concerning the use of repository files inside MapleTA itself, as thoroughly explained in: [https://mapletacommunity.com/topic/64/how-to-create-and-use-a-maple-repository-in-maple-ta]. Admittedly, I have not checked this out yet, but the question is whether this functionality has also been deprecated or blocked in the meantime? Thank you very much in advance!
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8048295378684998, "perplexity": 3938.6002353103295}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823214.37/warc/CC-MAIN-20171019012514-20171019032514-00293.warc.gz"}
http://scipy.github.io/devdocs/reference/generated/scipy.optimize.brentq.html
# scipy.optimize.brentq

scipy.optimize.brentq(f, a, b, args=(), xtol=2e-12, rtol=8.881784197001252e-16, maxiter=100, full_output=False, disp=True)

Find a root of a function in a bracketing interval using Brent's method.

Uses the classic Brent's method to find a zero of the function f on the sign changing interval [a, b]. Generally considered the best of the rootfinding routines here. It is a safe version of the secant method that uses inverse quadratic extrapolation. Brent's method combines root bracketing, interval bisection, and inverse quadratic interpolation. It is sometimes known as the van Wijngaarden-Dekker-Brent method. Brent (1973) claims convergence is guaranteed for functions computable within [a, b].

[Brent1973] provides the classic description of the algorithm. Another description can be found in a recent edition of Numerical Recipes, including [PressEtal1992]. A third description is at http://mathworld.wolfram.com/BrentsMethod.html. It should be easy to understand the algorithm just by reading our code. Our code diverges a bit from standard presentations: we choose a different formula for the extrapolation step.

Parameters

- f (function): Python function returning a number. The function $f$ must be continuous, and $f(a)$ and $f(b)$ must have opposite signs.
- a (scalar): One end of the bracketing interval $[a, b]$.
- b (scalar): The other end of the bracketing interval $[a, b]$.
- xtol (number, optional): The computed root x0 will satisfy np.allclose(x, x0, atol=xtol, rtol=rtol), where x is the exact root. The parameter must be nonnegative. For nice functions, Brent's method will often satisfy the above condition with xtol/2 and rtol/2. [Brent1973]
- rtol (number, optional): The computed root x0 will satisfy np.allclose(x, x0, atol=xtol, rtol=rtol), where x is the exact root. The parameter cannot be smaller than its default value of 4*np.finfo(float).eps. For nice functions, Brent's method will often satisfy the above condition with xtol/2 and rtol/2. [Brent1973]
- maxiter (int, optional): If convergence is not achieved in maxiter iterations, an error is raised. Must be >= 0.
- args (tuple, optional): Containing extra arguments for the function f. f is called by apply(f, (x)+args).
- full_output (bool, optional): If full_output is False, the root is returned. If full_output is True, the return value is (x, r), where x is the root, and r is a RootResults object.
- disp (bool, optional): If True, raise RuntimeError if the algorithm didn't converge. Otherwise, the convergence status is recorded in any RootResults return object.

Returns

- x0 (float): Zero of f between a and b.
- r (RootResults, present if full_output = True): Object containing information about the convergence. In particular, r.converged is True if the routine converged.

Notes

f must be continuous. f(a) and f(b) must have opposite signs.

Related functions fall into several classes:

- multivariate local optimizers
- nonlinear least squares minimizer (leastsq)
- constrained multivariate optimizers
- global optimizers
- local scalar minimizers
- N-D root-finding (fsolve)
- 1-D root-finding
- scalar fixed-point finder (fixed_point)

References

Brent1973: Brent, R. P., Algorithms for Minimization Without Derivatives. Englewood Cliffs, NJ: Prentice-Hall, 1973. Ch. 3-4.

PressEtal1992: Press, W. H.; Flannery, B. P.; Teukolsky, S. A.; and Vetterling, W. T. Numerical Recipes in FORTRAN: The Art of Scientific Computing, 2nd ed. Cambridge, England: Cambridge University Press, pp. 352-355, 1992. Section 9.3: "Van Wijngaarden-Dekker-Brent Method."

Examples

>>> def f(x):
...     return (x**2 - 1)

>>> from scipy import optimize

>>> root = optimize.brentq(f, -2, 0)
>>> root
-1.0

>>> root = optimize.brentq(f, 0, 2)
>>> root
1.0
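A slightly fuller usage sketch of the args and full_output machinery described above (the function g and its parameter c are made up for illustration; brentq and the RootResults attributes are the documented interface):

>>> import numpy as np
>>> from scipy import optimize
>>> def g(x, c):
...     return np.cos(x) - c
>>> root, r = optimize.brentq(g, 0, np.pi, args=(0.5,), full_output=True)
>>> np.isclose(root, np.pi / 3)
True
>>> r.converged
True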
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8049778938293457, "perplexity": 3987.040289978142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302706.62/warc/CC-MAIN-20220120220649-20220121010649-00060.warc.gz"}
http://www.physicsforums.com/showthread.php?t=1025
speed of gravity = c

by wolram
Tags: gravity, speed

wolram: It has been reported that the speed of gravity = c. Does this mean that an object traveling at c would not feel the effects of gravity?

Mentor: Well... since an object can't travel at c, it's kind of a pointless question. LIGHT, however, travels at c and is affected by gravity (or rather the curvature of space that gravity creates).

Sci Advisor: Right, the path of a photon is affected when it travels through an area of space-time that has already been curved by gravity. However, if a photon were to pass through a relatively "flat" area of space-time and, after it had passed, a massive object (such as a planet) suddenly materialized in that space, the gravity from that planet would (theoretically) never affect that photon. The sudden appearance of the planet would send a huge gravity wave out in all directions; this wave would propagate at lightspeed, and never "catch up" to anything traveling at lightspeed that had already passed.

Reply: Greetings! I believe the current proven possible range was reported to be something like 0.8 - 1.05 c. (This could be outdated or slightly inaccurate info.) More accurate tests are still required to make sure that reality "follows" the laws of theory. As for the question - since no particle with rest mass can reach c, it is somewhat pointless to ask.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8203476667404175, "perplexity": 873.0496940867861}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163051984/warc/CC-MAIN-20131204131731-00022-ip-10-33-133-15.ec2.internal.warc.gz"}
https://yutsumura.com/compute-determinant-of-a-matrix-using-linearly-independent-vectors/
# Compute Determinant of a Matrix Using Linearly Independent Vectors

## Problem 193

Let $A$ be a $3 \times 3$ matrix. Let $\mathbf{x}, \mathbf{y}, \mathbf{z}$ be linearly independent $3$-dimensional vectors. Suppose that we have
\[A\mathbf{x}=\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad A\mathbf{y}=\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad A\mathbf{z}=\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.\]
Then find the value of the determinant of the matrix $A$.

We give two solutions.

## Solution 1.

Let $B$ be the $3\times 3$ matrix whose columns are the vectors $\mathbf{x},\mathbf{y}, \mathbf{z}$, that is,
\[B=[\mathbf{x} \; \mathbf{y} \; \mathbf{z}].\]
Then we have
\[AB=\begin{bmatrix} 1 & 0 & 1 \\ 0 &1 &1 \\ 1 & 0 & 1 \end{bmatrix}.\]
Then we have
\[\det(A)\det(B)=\det(AB)=\begin{vmatrix} 1 & 0 & 1 \\ 0 &1 &1 \\ 1 & 0 & 1 \end{vmatrix}=0.\]
(If two rows are equal, then the determinant is zero. Or you may compute the determinant by the second column cofactor expansion.)

Note that the column vectors of $B$ are linearly independent, and hence $B$ is a nonsingular matrix. Thus $\det(B)\neq 0$. Therefore the determinant of $A$ must be zero.

## Solution 2.

Since
\[\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}+\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}=\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix},\]
we have
\[A\mathbf{x}+A\mathbf{y}=A\mathbf{z}.\]
It follows that we have
\[A(\mathbf{x}+\mathbf{y}-\mathbf{z})=\mathbf{0}.\]
Since the vectors $\mathbf{x}, \mathbf{y}, \mathbf{z}$ are linearly independent, the linear combination $\mathbf{x}+\mathbf{y}-\mathbf{z} \neq \mathbf{0}$. Hence the matrix $A$ is singular, and the determinant of $A$ is zero.

(Recall that a matrix $A$ is singular if and only if there exists a nonzero vector $\mathbf{v}$ such that $A\mathbf{v}=\mathbf{0}$.)
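As a quick numerical sanity check of Solution 1 (an illustration only, not part of the original problem), one can pick a concrete linearly independent $\mathbf{x}, \mathbf{y}, \mathbf{z}$, recover $A$ from $AB = M$, and confirm that the determinant vanishes:

import numpy as np

# columns are a concrete choice of linearly independent x, y, z
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
assert abs(np.linalg.det(B)) > 1e-9   # x, y, z are linearly independent

# columns are the prescribed images Ax, Ay, Az
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

A = M @ np.linalg.inv(B)              # AB = M  =>  A = M B^(-1)
print(np.linalg.det(A))               # ~0, as both solutions predict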
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9853018522262573, "perplexity": 105.04164401309197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811243.29/warc/CC-MAIN-20180218003946-20180218023946-00305.warc.gz"}
http://openmx-square.org/openmx_man3.7/node37.html
## General

Five charge mixing schemes in OpenMX Ver. 3.7 are available by the keyword 'scf.Mixing.Type':

• Simple mixing (Simple)
Relevant keywords: scf.Init.Mixing.Weight, scf.Min.Mixing.Weight, scf.Max.Mixing.Weight

• Residual minimization method in the direct inversion iterative subspace (RMM-DIIS) [40]
Relevant keywords: scf.Init.Mixing.Weight, scf.Min.Mixing.Weight, scf.Max.Mixing.Weight, scf.Mixing.History, scf.Mixing.StartPulay

• Guaranteed reduction Pulay method (GR-Pulay) [39]
Relevant keywords: scf.Init.Mixing.Weight, scf.Min.Mixing.Weight, scf.Max.Mixing.Weight, scf.Mixing.History, scf.Mixing.StartPulay

• Kerker mixing (Kerker) [41]
Relevant keywords: scf.Init.Mixing.Weight, scf.Min.Mixing.Weight, scf.Max.Mixing.Weight, scf.Kerker.factor

• RMM-DIIS with Kerker metric (RMM-DIISK) [40]
Relevant keywords: scf.Init.Mixing.Weight, scf.Min.Mixing.Weight, scf.Max.Mixing.Weight, scf.Mixing.History, scf.Mixing.StartPulay, scf.Mixing.EveryPulay, scf.Kerker.factor

In the first three schemes density matrices, which are regarded as a quantity in real space, are mixed to generate the input density matrix, which can be easily converted into (spin) charge density. On the other hand, the charge mixing is made in Fourier space in the last two schemes. Generally, it is easier to achieve SCF convergence in large gap systems using any mixing scheme. However, it can be difficult to achieve a sufficient SCF convergence in smaller gap and metallic systems, since a charge sloshing problem in the SCF calculations often becomes serious. To handle such difficult systems, two mixing schemes are currently available: the Kerker and RMM-DIISK methods. The two mixing schemes could be an effective way of achieving the SCF convergence of metallic systems. When 'Kerker' or 'RMM-DIISK' is used, the following prescriptions are helpful to obtain the convergence of SCF calculations:

• Increase 'scf.Mixing.History'. A relatively large value, 30-50, may lead to the convergence. In addition, 'scf.Mixing.EveryPulay' should be set to 1.

• Use a rather large value for 'scf.Mixing.StartPulay'. Before starting the Pulay-type mixing, achieve a convergence at some level. An appropriate value may be 10 to 30 for 'scf.Mixing.StartPulay'.

• Use a rather large value for 'scf.ElectronicTemperature' in case of metallic systems. When 'scf.ElectronicTemperature' is small, numerical instabilities often appear.

In addition, the charge sloshing, which comes from charge components with long wave length, can be significantly suppressed by tuning Kerker's factor $\alpha$ with the keyword 'scf.Kerker.factor', where Kerker's metric is defined by

\[ \langle A | B \rangle = \sum_{\mathbf{q}} \frac{|\mathbf{q}|^2 + q_0^2}{|\mathbf{q}|^2}\, A^*(\mathbf{q})\, B(\mathbf{q}), \qquad q_0 = \alpha\, |\mathbf{q}_{\rm min}| , \]

where $\mathbf{q}_{\rm min}$ is the $\mathbf{q}$ vector with the minimum magnitude except the 0-vector. A larger $\alpha$ significantly suppresses the charge sloshing, but leads to slower convergence. Since the optimum value depends on the system, you may tune an appropriate value for your system. Furthermore, the behavior of 'RMM-DIISK' can be controlled by the following keyword:

scf.Mixing.EveryPulay 5 # default = 1

The residual vectors in the Pulay-type mixing schemes tend to become linearly dependent on each other as the mixing steps accumulate, and the linear dependence among the residual vectors makes the convergence difficult. A way of avoiding the linear dependence is to do the Pulay-type mixing only occasionally during the Kerker mixing. With this prescription, you can specify the frequency using the keyword 'scf.Mixing.EveryPulay'.
For example, in case of 'scf.Mixing.EveryPulay=5', the Pulay mixing is made at every fifth SCF iteration, while the Kerker-type mixing is used at the other steps. 'scf.Mixing.EveryPulay=1' corresponds to the conventional Pulay-type mixing. It is noted that the keyword 'scf.Mixing.EveryPulay' is supported only for 'RMM-DIISK', and the default value is '1'. The above prescription works in some cases. But the most recommended prescription to accelerate the convergence is the following:

• Increase 'scf.Mixing.History'. A relatively large value, 30-50, may lead to the convergence. In addition, 'scf.Mixing.EveryPulay' should be set to 1.

Since the Pulay-type mixing such as RMM-DIIS and RMM-DIISK is based on a quasi-Newton method, the convergence speed is governed by how good an approximate Hessian matrix can be found. As 'scf.Mixing.History' increases, the calculated Hessian may become more accurate. In Fig. 6 a comparison of the five mixing schemes is shown for the SCF convergence of (a) a sialic acid molecule, (b) a Pt13 cluster, and (c) a Pt63 cluster, where the norm of the residual density matrix or charge density can be found as NormRD in the file '*.out', and the input files are 'SialicAcid.dat', 'Pt13.dat', and 'Pt63.dat' in the directory 'work'. We see that 'RMM-DIISK' works with robustness for all the systems shown in Fig. 6. In most cases, 'RMM-DIISK' will be the best choice, while the use of 'Kerker' is required, with a large 'scf.Kerker.factor' and a small 'scf.Max.Mixing.Weight', for quite difficult cases in which the convergence is hardly obtained.
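For illustration, a minimal input fragment along the lines of the prescriptions above (the numerical values are starting points suggested in this section, not universal defaults; tune them for your system):

scf.Mixing.Type          RMM-DIISK    # Simple | RMM-DIIS | GR-Pulay | Kerker | RMM-DIISK
scf.Init.Mixing.Weight   0.010
scf.Min.Mixing.Weight    0.001
scf.Max.Mixing.Weight    0.200
scf.Mixing.History       40           # relatively large value, 30-50
scf.Mixing.StartPulay    20           # converge at some level before Pulay mixing starts
scf.Mixing.EveryPulay    1
scf.Kerker.factor        5.0          # larger suppresses sloshing but slows convergence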
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.949893057346344, "perplexity": 3663.521991111172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746112.65/warc/CC-MAIN-20181119212731-20181119234731-00283.warc.gz"}
http://cie.co.at/eilvterm/17-25-096
# 17-25-096

forty-five degree annular geometry (45°a geometry)

irradiation of reflecting materials at 45° to the normal, from all azimuthal directions, simultaneously

Note 1 to entry: In measuring the colours of reflecting samples by irradiating with the forty-five degree annular geometry, the effects of texture and directionality are minimized. The forty-five degree annular geometry can be achieved by the use of a small source and an elliptic ring reflector or other aspheric optics.

Note 2 to entry: Forty-five degree annular geometry is sometimes approximated by the use of a number of sources in a ring or a number of fibre bundles illuminated by a single source and terminated in a ring. Such an approximation to annular geometry is called "circumferential geometry", with notation "45°c".

Note 3 to entry: This entry was numbered 17-469 in CIE S 017:2011.

Publication date: 2020-12
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9226518869400024, "perplexity": 2939.344403615029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038469494.59/warc/CC-MAIN-20210418073623-20210418103623-00431.warc.gz"}
https://www.arxiv-vanity.com/papers/hep-ph/9702274/
## 1 Introduction

The smallness of the collision systems studied here requires appropriate theoretical tools: in order to properly compare theoretically predicted multiplicities to experimental ones, the use of statistical mechanics in its canonical form is mandatory, which means that exact quantum number conservation is required, unlike in the grand-canonical formalism [5]. It will be shown indeed that average particle multiplicities in small systems are heavily affected by conservation laws, well beyond what the use of chemical potentials predicts (this was previously observed in a similar canonical thermodynamic analysis of $\bar{\rm p}$p annihilation at rest [6]). However, in the high multiplicity (or large volume) limit the grand-canonical formalism recovers its validity. This paper generalizes the thermodynamical model introduced in ref. [7] for e$^+$e$^-$ collisions by releasing some assumptions which were made there; calculations are performed with a larger symmetry group (actually by also taking into account the conservation of the electric charge). Moreover, formulae for global correlations between different particle species are provided, and a comparison with data is made in this regard as well.

## 2 The model

In refs. [7, 8] a thermodynamical model of hadron production in e$^+$e$^-$ collisions was developed on the basis of the following assumption: the hadronic jets observed in the final state of an e$^+$e$^-$ event must be identified with hadron gas phases having a collective motion. This identification is valid at the decoupling time, when hadrons stop interacting after their formation and (possibly) a short expansion (freeze-out). Throughout this paper we will refer to such hadron gas phases with a collective motion as fireballs, following refs. [1, 2]. Since most events in an e$^+$e$^-$ reaction are two-jet events, it was assumed that two fireballs are formed and that their internal properties, namely quantum numbers, are related to those of the corresponding primary quarks. In the so-called correlated jet scheme, correlations between the quantum numbers of the two fireballs were allowed beyond the simple correspondence between the fireball and the parent quark quantum numbers. This scheme turned out to be in better agreement with the data than a correlation-free scheme [7].

The more complicated structure of a hadronic collision does not allow a straightforward extension of this model. If the assumption of hadron gas fireballs is maintained, the possibility of an arbitrary number of fireballs with an arbitrary configuration of quantum numbers should be taken into account [9]. To be specific, let us define a vector $\mathbf{Q} = (Q, N, S, C, B)$ with integer components equal to the electric charge, baryon number, strangeness, charm and beauty respectively. We assume that the final state of a pp or a $\bar{\rm p}$p interaction consists of a set of $N$ fireballs, each with its own four-vector $\beta_i^\mu = u_i^\mu/T_i$, where $T_i$ is the temperature and $u_i^\mu$ is the four-velocity [10], quantum numbers $\mathbf{Q}_i$ and volume $V_i$ in the rest frame. The quantum vectors $\mathbf{Q}_i$ must fulfill the overall conservation constraint

\[ \sum_i \mathbf{Q}_i = \mathbf{Q}^0 , \]

where $\mathbf{Q}^0$ is the vector of the initial quantum numbers, that is $(2, 2, 0, 0, 0)$ in a pp collision and $(0, 0, 0, 0, 0)$ in a $\bar{\rm p}$p collision. The invariant partition function of a single fireball is, by definition:

\[ Z_i(\mathbf{Q}_i^0) = \sum_{\rm states} e^{-\beta_i \cdot P_i}\, \delta_{\mathbf{Q}_i, \mathbf{Q}_i^0} , \qquad (1) \]

where $P_i$ is its total four-momentum.
The factor $\delta_{\mathbf{Q}_i, \mathbf{Q}_i^0}$ is the usual Kronecker tensor, which forces the sum to be performed only over the fireball states whose quantum numbers are equal to the particular set $\mathbf{Q}_i^0$. It is worth emphasizing that this partition function corresponds to the canonical ensemble of statistical mechanics, since only the states fulfilling a fixed chemical requirement, as expressed by the factor $\delta_{\mathbf{Q}_i, \mathbf{Q}_i^0}$, are involved in the sum (1). By using the integral representation of $\delta_{\mathbf{Q}_i, \mathbf{Q}_i^0}$:

\[ \delta_{\mathbf{Q}_i, \mathbf{Q}_i^0} = \frac{1}{(2\pi)^5} \int_0^{2\pi} \cdots \int_0^{2\pi} {\rm d}^5\phi\; e^{i(\mathbf{Q}_i^0 - \mathbf{Q}_i) \cdot \boldsymbol{\phi}} , \qquad (2) \]

Eq. (1) becomes:

\[ Z_i(\mathbf{Q}_i^0) = \sum_{\rm states} \frac{1}{(2\pi)^5} \int_0^{2\pi} \cdots \int_0^{2\pi} {\rm d}^5\phi\; e^{-\beta_i \cdot P_i}\, e^{i(\mathbf{Q}_i^0 - \mathbf{Q}_i) \cdot \boldsymbol{\phi}} . \qquad (3) \]

This equation could also have been derived from the general expression of the partition function of systems with internal symmetry [11, 12] by requiring a U(1)$^5$ symmetry group, each U(1) factor corresponding to a conserved quantum number; that was the procedure taken in ref. [7].

The sum over states in Eq. (3) can be worked out quite straightforwardly for a hadron gas of $N_B$ boson species and $N_F$ fermion species. A state is specified by a set of occupation numbers $\{n_{j,k}\}$ for each phase space cell $k$ and for each particle species $j$. Since $P_i = \sum_{j,k} n_{j,k}\, p_k$ and $\mathbf{Q}_i = \sum_{j,k} n_{j,k}\, \mathbf{q}_j$, where $\mathbf{q}_j$ is the quantum number vector associated to the $j$-th particle species, the partition function (3) reads, after summing over states:

\[ Z_i(\mathbf{Q}_i^0) = \frac{1}{(2\pi)^5} \int {\rm d}^5\phi\; e^{i\mathbf{Q}_i^0 \cdot \boldsymbol{\phi}} \exp\Big[ \sum_{j=1}^{N_B} \sum_k \log\big(1 - e^{-\beta_i \cdot p_k - i\mathbf{q}_j \cdot \boldsymbol{\phi}}\big)^{-1} + \sum_{j=1}^{N_F} \sum_k \log\big(1 + e^{-\beta_i \cdot p_k - i\mathbf{q}_j \cdot \boldsymbol{\phi}}\big) \Big] . \qquad (4) \]

The last expression of the partition function is manifestly Lorentz-invariant, because the sum over phase space is a Lorentz-invariant operation which can be performed in any frame. The most suitable one is the fireball rest frame, where the four-vector $\beta_i$ reduces to:

\[ \beta_i = \Big( \frac{1}{T_i}, 0, 0, 0 \Big) , \qquad (5) \]

$T_i$ being the temperature of the fireball. Moreover, the sum over phase space cells in Eq. (4) can be turned into an integration over momentum space going to the continuum limit:

\[ \sum_k \longrightarrow \frac{(2J_j+1)\, V}{(2\pi)^3} \int {\rm d}^3 p , \qquad (6) \]

where $V$ is the fireball volume and $J_j$ the spin of the $j$-th hadron. As in previous studies on e$^+$e$^-$ collisions [7] and heavy ion collisions [13], we supplement the ordinary statistical mechanics formalism with a strangeness suppression factor $\gamma_s$ accounting for a partial strangeness phase space saturation (possible charm and beauty suppression parameters $\gamma_c$ and $\gamma_b$ are unobservable, see also Appendix C); actually, the Boltzmann factor of any hadron species containing $s_j$ strange valence quarks or anti-quarks is multiplied by $\gamma_s^{s_j}$. With the transformation (6), and choosing the fireball rest frame to perform the integration, the sum over phase space in Eq. (4) becomes:

\[ \sum_k \log\big(1 \pm \gamma_s^{s_j} e^{-\beta_i \cdot p_k - i\mathbf{q}_j \cdot \boldsymbol{\phi}}\big)^{\pm 1} \longrightarrow \frac{(2J_j+1)\, V_i}{(2\pi)^3} \int {\rm d}^3 p\, \log\big(1 \pm \gamma_s^{s_j}\, e^{-\sqrt{p^2+m_j^2}/T_i - i\mathbf{q}_j \cdot \boldsymbol{\phi}}\big)^{\pm 1} \equiv V_i\, F_j(T_i, \gamma_s, \boldsymbol{\phi}) , \qquad (7) \]

where the upper sign is for fermions, the lower for bosons, and $V_i$ is the fireball volume in its rest frame; the function $F_j$ is a shorthand notation for the momentum integral in Eq. (7). Hence, the partition function (4) can be written:

\[ Z_i(\mathbf{Q}_i^0) = \frac{1}{(2\pi)^5} \int {\rm d}^5\phi\; e^{i\mathbf{Q}_i^0 \cdot \boldsymbol{\phi}} \exp\Big[ V_i \sum_j F_j(T_i, \gamma_s, \boldsymbol{\phi}) \Big] . \qquad (8) \]

The mean number of the $j$-th particle species in the fireball can be derived from $Z_i$ by multiplying the Boltzmann factor in the function $F_j$ in Eq. (8) by a fictitious fugacity $\lambda_j$ and taking the derivative of $\log Z_i$ with respect to $\lambda_j$ at $\lambda_j = 1$:

\[ \langle n_j \rangle_i = \frac{\partial}{\partial \lambda_j} \log Z_i(\mathbf{Q}_i^0, \lambda_j) \Big|_{\lambda_j = 1} . \qquad (9) \]

The partition function supplemented with the factor $\lambda_j$ is still a Lorentz-invariant quantity, and so is the mean number $\langle n_j \rangle_i$. From a more physical point of view, this means that the average multiplicity of any hadron does not depend on the fireball collective motion, unlike its mean number in a particular momentum state.
The overall average multiplicity of the $j$-th hadron, for a set of fireballs in a certain quantum configuration $(\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0)$, is the sum of all mean numbers of that hadron in each fireball:

\[ \langle n_j \rangle = \sum_{i=1}^{N} \frac{\partial}{\partial \lambda_j} \log Z_i(\mathbf{Q}_i^0, \lambda_j) \Big|_{\lambda_j=1} = \frac{\partial}{\partial \lambda_j} \log \prod_{i=1}^{N} Z_i(\mathbf{Q}_i^0, \lambda_j) \Big|_{\lambda_j=1} . \qquad (10) \]

In general, as the quantum number configurations may fluctuate, hadron production should be further averaged over all possible fireball configurations fulfilling the constraint $\sum_i \mathbf{Q}_i^0 = \mathbf{Q}^0$. To this end, suitable weights $w(\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0)$, representing the probability of the configuration $(\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0)$ to occur for a set of $N$ fireballs, must be introduced. Basic features of those weights are:

\[ w(\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0) = 0 \quad {\rm if} \quad \sum_{i=1}^{N} \mathbf{Q}_i^0 \neq \mathbf{Q}^0 , \qquad \sum_{\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0} w(\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0) = 1 . \qquad (11) \]

For the overall average multiplicity of the $j$-th hadron we get:

\[ \langle\langle n_j \rangle\rangle = \sum_{\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0} w(\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0)\, \frac{\partial}{\partial \lambda_j} \log \prod_{i=1}^{N} Z_i(\mathbf{Q}_i^0, \lambda_j) \Big|_{\lambda_j=1} . \qquad (12) \]

There are infinitely many possible choices of the weights $w$, all of them equally legitimate. However, one of them is the most pertinent from the statistical mechanics point of view, namely:

\[ w(\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0) = \frac{\delta_{\Sigma_i \mathbf{Q}_i^0, \mathbf{Q}^0} \prod_{i=1}^{N} Z_i(\mathbf{Q}_i^0)}{\sum_{\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0} \delta_{\Sigma_i \mathbf{Q}_i^0, \mathbf{Q}^0} \prod_{i=1}^{N} Z_i(\mathbf{Q}_i^0)} . \qquad (13) \]

It can be shown indeed that this choice corresponds to the minimal deviation from statistical equilibrium of the system as a whole. In fact, putting the weights (13) in Eq. (12), one obtains:

\[ \langle\langle n_j \rangle\rangle = \frac{\partial}{\partial \lambda_j} \log \sum_{\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0} \delta_{\Sigma_i \mathbf{Q}_i^0, \mathbf{Q}^0} \prod_{i=1}^{N} Z_i(\mathbf{Q}_i^0, \lambda_j) \Big|_{\lambda_j=1} . \qquad (14) \]

This means that the average multiplicity of any hadron can be derived from the following function of $\mathbf{Q}^0$:

\[ Z(\mathbf{Q}^0) = \sum_{\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0} \delta_{\Sigma_i \mathbf{Q}_i^0, \mathbf{Q}^0} \prod_{i=1}^{N} Z_i(\mathbf{Q}_i^0) , \qquad (15) \]

with the same recipe given for a single fireball in Eq. (9). By using expression (1) for the partition functions $Z_i$, Eq. (15) becomes:

\[ Z(\mathbf{Q}^0) = \sum_{\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0} \delta_{\Sigma_i \mathbf{Q}_i^0, \mathbf{Q}^0} \prod_{i=1}^{N} \sum_{{\rm states}_i} e^{-\beta_i \cdot P_i}\, \delta_{\mathbf{Q}_i^0, \mathbf{Q}_i} . \qquad (16) \]

Since

\[ \sum_{\mathbf{Q}_1^0, \ldots, \mathbf{Q}_N^0} \delta_{\Sigma_i \mathbf{Q}_i^0, \mathbf{Q}^0} \prod_i \delta_{\mathbf{Q}_i^0, \mathbf{Q}_i} = \delta_{\Sigma_i \mathbf{Q}_i, \mathbf{Q}^0} , \qquad (17) \]

the function (16) can be written as

\[ Z(\mathbf{Q}^0) = \sum_{{\rm states}_1} \cdots \sum_{{\rm states}_N} e^{-\beta_1 \cdot P_1} \cdots e^{-\beta_N \cdot P_N}\, \delta_{\Sigma_i \mathbf{Q}_i, \mathbf{Q}^0} . \qquad (18) \]

This expression demonstrates that $Z(\mathbf{Q}^0)$ may be properly called the global partition function of a system split into $N$ subsystems which are in mutual chemical equilibrium but not in mutual thermal and mechanical equilibrium. Indeed it is a Lorentz-invariant quantity and, in case of complete equilibrium, i.e. $\beta_1 = \beta_2 = \ldots = \beta_N \equiv \beta$, it would reduce to:

\[ Z(\mathbf{Q}^0) = \sum_{{\rm states}_1} \cdots \sum_{{\rm states}_N} e^{-\beta \cdot (P_1 + \ldots + P_N)}\, \delta_{\Sigma_i \mathbf{Q}_i, \mathbf{Q}^0} = \sum_{\rm states} e^{-\beta \cdot P}\, \delta_{\mathbf{Q}, \mathbf{Q}^0} , \qquad (19) \]

which is the basic definition of the partition function. To summarize, the choice of the weights (13) allows the construction of a system which is out of equilibrium only by virtue of its subdivision into several parts having different temperatures and velocities. Another very important consequence of that choice is the following: if we assume that the freeze-out temperature of the various fireballs is constant, that is $T_i = T$ for all $i$, and that the strangeness suppression factor $\gamma_s$ is constant too, then the global partition function (18) has the following expression:

\[ Z(\mathbf{Q}^0) = \frac{1}{(2\pi)^5} \int {\rm d}^5\phi\; e^{i\mathbf{Q}^0 \cdot \boldsymbol{\phi}} \exp\Big[ \Big( \sum_i V_i \Big) \sum_j F_j(T, \gamma_s, \boldsymbol{\phi}) \Big] . \qquad (20) \]

Here the $V_i$'s are the fireball volumes in their own rest frames; a proof of (20) is given in Appendix A [8]. Eq. (20) demonstrates that the global partition function has the same functional form (3), (4), (8) as the partition function of a single fireball, once the volume is replaced by the global volume $V \equiv \sum_i V_i$. Note that the global volume absorbs any dependence of the global partition function (20) on the number of fireballs $N$. Thus, possible variations of the number and the size of the fireballs on an event by event basis can be turned into fluctuations of the global volume. In the remainder of this Section and in Sects. 3, 4 we will ignore these fluctuations; in Sect. 5 it will be shown that they do not affect any of the following results on the average hadron multiplicities.
The average multiplicity of the $j$-th hadron can be determined with the formulae (14)-(15), by using expression (20) for the function $Z$:

\[ \langle\langle n_j \rangle\rangle = \frac{1}{Z(\mathbf{Q}^0)}\, \frac{1}{(2\pi)^5} \int {\rm d}^5\phi\; e^{i\mathbf{Q}^0 \cdot \boldsymbol{\phi}} \exp\Big[ V \sum_j F_j(T, \gamma_s, \boldsymbol{\phi}) \Big] \times \frac{(2J_j+1)\, V}{(2\pi)^3} \int \frac{{\rm d}^3 p}{\gamma_s^{-s_j} \exp\big( \sqrt{p^2+m_j^2}/T + i\mathbf{q}_j \cdot \boldsymbol{\phi} \big) \pm 1} , \qquad (21) \]

where the upper sign is for fermions and the lower for bosons. This formula can be written in a more compact form as a series:

\[ \langle\langle n_j \rangle\rangle = \sum_{n=1}^{\infty} (\mp 1)^{n+1}\, \gamma_s^{n s_j}\, z_j(n)\, \frac{Z(\mathbf{Q}^0 - n\mathbf{q}_j)}{Z(\mathbf{Q}^0)} , \qquad (22) \]

where the functions $z_j(n)$ are defined as:

\[ z_j(n) \equiv \frac{(2J_j+1)\, V}{(2\pi)^3} \int {\rm d}^3 p\; e^{-n\sqrt{p^2+m_j^2}/T} = \frac{(2J_j+1)\, V T}{2\pi^2\, n}\, m_j^2\, K_2\Big( \frac{n m_j}{T} \Big) . \qquad (23) \]

$K_2$ is the McDonald function of order 2. Eq. (22) is the final expression for the average multiplicity of hadrons at freeze-out. Accordingly, the production rate of a hadron species depends only on its spin, mass, quantum numbers and strange quark content. The chemical factors $Z(\mathbf{Q}^0 - n\mathbf{q}_j)/Z(\mathbf{Q}^0)$ in Eq. (22) are a typical feature of the canonical approach, due to the requirement of exact conservation of the initial set of quantum numbers. These factors suppress or enhance the production of particles according to the vicinity of their quantum numbers to the initial vector $\mathbf{Q}^0$. The behaviour of the chemical factor as a function of electric charge, baryon number and strangeness for suitable $T$, $V$ and $\gamma_s$ values is shown in Fig. 1; for instance, it is evident that the baryon chemical factors connected with an initially neutral system play a major role in determining the baryon multiplicities. The ultimate physical reason for the suppression of "charged" particles (those with non-zero $\mathbf{q}_j$) with respect to "neutral" ones, in a completely neutral system ($\mathbf{Q}^0 = 0$), is the necessity, once a "charged" particle is created, of a simultaneous creation of an anti-charged particle in order to fulfill the conservation laws. In a finite system this pair creation mechanism is the more unlikely the more massive is the lightest particle needed to compensate the first particle's quantum numbers. For instance, once a baryon is created, at least one anti-nucleon must be generated, which is rather unlikely since its mass is much greater than the temperature and the total energy is finite. On the other hand, if a non-strange charged meson is generated, just a pion is needed to balance the total electric charge; its creation is clearly a less unlikely event with respect to the creation of a baryon, as the energy to be spent is lower. This argument illustrates why the dependence of the chemical factors on the electric charge is much milder than that on baryon number and strangeness (see Fig. 1). In view of that, the dependence on electric charge was neglected in the previous study on hadron production in e$^+$e$^-$ collisions [7]. These chemical suppression effects are not accountable in a grand-canonical framework; in fact, in a completely neutral system, all chemical potentials should be set to zero and consequently "charged" particles do not undergo any suppression with respect to "neutral" ones. A compact analytic expression for the function $Z(\mathbf{Q}^0)$ does not exist. However, an approximation of $Z$ valid for large global volumes (see Appendix B) exists, in which the chemical factors reduce to a product of a chemical-potential-like factor and an additional multivariate gaussian factor having no correspondence in the grand-canonical framework. The gaussian factor tends to 1 in the large volume limit, proving the equivalence between canonical and grand-canonical approaches for large systems.
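As a numerical illustration of Eq. (23) (a sketch only: the temperature, mass, spin and volume below are made-up, kaon-like values, and the volume cancels in the comparison), the momentum integral and its closed form in terms of the McDonald function agree:

import numpy as np
from scipy.integrate import quad
from scipy.special import kn

T, m, J, V, n = 0.170, 0.494, 0, 1.0, 1    # GeV units; illustrative values only

# z_j(n) from the integral: (2J+1) V/(2 pi^2) Int_0^inf p^2 exp(-n sqrt(p^2+m^2)/T) dp
integral, _ = quad(lambda p: p**2 * np.exp(-n * np.sqrt(p**2 + m**2) / T), 0.0, 60 * T)
z_integral = (2 * J + 1) * V / (2 * np.pi**2) * integral

# z_j(n) from the closed form: (2J+1) V T/(2 pi^2 n) m^2 K_2(n m / T)
z_closed = (2 * J + 1) * V * T / (2 * np.pi**2 * n) * m**2 * kn(2, n * m / T)

print(z_integral, z_closed)                # the two values agree to quadrature accuracy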
The global partition function (18) has to be further modified in $\bar{\rm p}$p collisions owing to a major effect in such reactions, the leading baryon effect [14]. Indeed, the sum (18) includes states with vanishing net absolute value of baryon number, whereas in $\bar{\rm p}$p collisions at least one baryon-antibaryon pair is always observed. Hence, the simplest way to account for the leading baryon effect is to exclude those states from the sum. Thus, if $|N|$ denotes the absolute value of the baryon number of the system, the global partition function (18) should be turned into:

\[ Z = \sum_{{\rm states}_1} \cdots \sum_{{\rm states}_N} e^{-\beta_1 \cdot P_1} \cdots e^{-\beta_N \cdot P_N}\, \delta_{\Sigma_i \mathbf{Q}_i, \mathbf{Q}^0} - \sum_{{\rm states}_1} \cdots \sum_{{\rm states}_N} e^{-\beta_1 \cdot P_1} \cdots e^{-\beta_N \cdot P_N}\, \delta_{\Sigma_i \mathbf{Q}_i, \mathbf{Q}^0}\, \delta_{|N|, 0} . \qquad (24) \]

The first term, which we define as $Z_1(\mathbf{Q}^0)$, is equal to the function $Z$ in Eqs. (18), (20), while the second term is the sum over all states having vanishing net absolute value of baryon number. The absolute value of baryon number can be treated as a new independent quantum number, so that the processing of the partition function described in Eqs. (1)-(3) can be repeated for the second term in Eq. (24) with an additional U(1) symmetry group. Accordingly, this term can be naturally denoted by $Z_2(\mathbf{Q}^0, 0)$, so that Eq. (24) reads:

\[ Z = Z_1(\mathbf{Q}^0) - Z_2(\mathbf{Q}^0, 0) . \qquad (25) \]

By using the integral representation of $\delta_{|N|,0}$,

\[ \delta_{|N|,0} = \frac{1}{2\pi} \int_0^{2\pi} {\rm d}\psi\; e^{i|N|\psi} , \qquad (26) \]

in the second term of Eq. (24), one gets:

\[ Z_2(\mathbf{Q}^0, 0) = \frac{1}{(2\pi)^6} \int {\rm d}^5\phi\; e^{i\mathbf{Q}^0 \cdot \boldsymbol{\phi}} \exp\Big[ V \sum_{j \in {\rm mesons}} F_j(T, \gamma_s, \boldsymbol{\phi}) \Big] \int {\rm d}\psi\, \exp\Big[ \sum_{j \in {\rm baryons}} \frac{(2J_j+1)\, V}{(2\pi)^3} \int {\rm d}^3 p\, \log\big( 1 + \gamma_s^{s_j}\, e^{-\sqrt{p^2+m_j^2}/T - i\mathbf{q}_j \cdot \boldsymbol{\phi} - i\psi} \big) \Big] , \qquad (27) \]

where the first sum runs over all mesons and the second over all baryons. The average multiplicity of any hadron species can be derived from the global partition function (25) with the usual prescription:

\[ \langle\langle n_j \rangle\rangle = \frac{\partial}{\partial \lambda_j} \log Z(\lambda_j) \Big|_{\lambda_j = 1} . \qquad (28) \]

## 3 Fit procedure and data set

The model described so far has three free parameters: the temperature $T$, the global volume $V$ and the strangeness suppression parameter $\gamma_s$. They will be determined by a fit to the available data on hadron inclusive production at each centre of mass energy. Eq. (22) yields the mean number of hadrons emerging directly from the thermal source at freeze-out, the so-called primary hadrons [7, 15], as a function of the three free parameters. After freeze-out, primary hadrons trigger a decay chain process which must be properly taken into account in a comparison between model predictions and experimental data, as the latter generally embody both primary hadrons and hadrons generated by heavier particle decays. Therefore, in order to calculate overall average multiplicities to be compared with experimental data, the primary yield of each hadron species, determined according to Eq. (22) (or (28) for $\bar{\rm p}$p collisions), is added to the contribution stemming from the decay of heavier hadrons, which is calculated by using experimentally known decay modes and branching ratios [16, 17]. The calculation of the average multiplicity of primaries according to Eq. (22) involves several rather complicated five-dimensional integrals, which have been calculated numerically after some useful approximations, described in the following. Since the temperature is expected to be below 200 MeV, the primary production rate of all hadrons, except pions, is very well approximated by the first term of the series (22):

\[ \langle\langle n_j \rangle\rangle \simeq \gamma_s^{s_j}\, z_j\, \frac{Z(\mathbf{Q}^0 - \mathbf{q}_j)}{Z(\mathbf{Q}^0)} , \qquad (29) \]

where we have put $z_j \equiv z_j(1)$. This approximation corresponds to the Boltzmann limit of Fermi and Bose statistics. Actually, for a temperature of 170 MeV, the primary production rate of the K meson, the lightest hadron after pions, differs at most (i.e. without the strangeness suppression parameter and the chemical factors, which further reduce the contribution of the neglected terms) by 1.5% from that calculated with Eq. (29), well within usual experimental uncertainties.
Corresponding Boltzmannian approximations can be made in the function $F_j$, namely

\[ \log\big( 1 \pm e^{-\sqrt{p^2+m_j^2}/T - i\mathbf{q}_j \cdot \boldsymbol{\phi}} \big)^{\pm 1} \simeq e^{-\sqrt{p^2+m_j^2}/T - i\mathbf{q}_j \cdot \boldsymbol{\phi}} , \qquad (30) \]

which turns Eq. (20) into:

\[ Z(\mathbf{Q}) \simeq \frac{1}{(2\pi)^5} \int {\rm d}^5\phi\; e^{i\mathbf{Q} \cdot \boldsymbol{\phi}} \exp\Big[ \sum_j z_j \gamma_s^{s_j}\, e^{-i\mathbf{q}_j \cdot \boldsymbol{\phi}} + \sum_{j=1}^{3} \frac{V}{(2\pi)^3} \int {\rm d}^3 p\, \log\big( 1 - e^{-\sqrt{p^2+m_j^2}/T - i\mathbf{q}_j \cdot \boldsymbol{\phi}} \big)^{-1} \Big] , \qquad (31) \]

where the first sum runs over all hadrons except pions and the second over the pions.

As a further consequence of the expected temperature value, the functions $z_j$ of all charmed and bottomed hadrons are very small: with $T = 170$ MeV and a primary production rate of K mesons of the order of one, as the data states, the function $z_j$ of the lightest charmed hadron, the D meson, turns out to be very small, and the chemical factors produce a further strong suppression. Therefore, thermal production of heavy flavoured hadrons can be neglected, as well as their functions $F_j$ in the exponentiated sum in Eq. (31), so that the integration over the variables $\phi_C$ and $\phi_B$ can be performed:

\[ Z(\mathbf{Q}, C, B) \simeq \frac{1}{(2\pi)^3} \int {\rm d}^3\phi\; e^{i\mathbf{Q} \cdot \boldsymbol{\phi}} \exp\Big[ \sum_j z_j \gamma_s^{s_j}\, e^{-i\mathbf{q}_j \cdot \boldsymbol{\phi}} - \sum_{j=1}^{3} \frac{V}{(2\pi)^3} \int {\rm d}^3 p\, \log\big( 1 - e^{-\sqrt{p^2+m_j^2}/T - i\mathbf{q}_j \cdot \boldsymbol{\phi}} \big) \Big]\, \delta_{C,0}\, \delta_{B,0} \equiv \zeta(\mathbf{Q})\, \delta_{C,0}\, \delta_{B,0} . \qquad (32) \]

$\mathbf{Q}$ and the $\mathbf{q}_j$ are now three-dimensional vectors consisting of electric charge, baryon number, and strangeness; the five-dimensional integrals have been reduced to three-dimensional ones.

Apart from the hadronization contribution, which is expected to be negligible in this model, the production of heavy flavoured hadrons in hadronic collisions mainly proceeds from hard perturbative QCD processes of c$\bar{\rm c}$ and b$\bar{\rm b}$ pair creation. The fact that promptly generated heavy quarks do not reannihilate into light quarks indicates a strong deviation from statistical equilibrium of charm and beauty, much stronger than the strangeness suppression linked with $\gamma_s$. Nevertheless, it has been found in e$^+$e$^-$ collisions [7] that the relative abundances of charmed and bottomed hadrons are in agreement with those predicted by the statistical equilibrium assumption, confirming its full validity for light quarks and the quantum numbers associated to them. The additional source of heavy flavoured hadrons arising from perturbative processes can be accounted for by modifying the partition function (31). In particular, the presence of one heavy flavoured hadron and one anti-flavoured hadron should be demanded in a fraction $\sigma_{c\bar{c}}/\sigma$ (or $\sigma_{b\bar{b}}/\sigma$) of the events, where $\sigma$ is meant to be the total inelastic or non-single-diffractive cross section. Accordingly, the partition function to be used in events with a perturbative c$\bar{\rm c}$ pair is, by analogy with Eqs. (24)-(25) for the leading baryon effect:

\[ Z = \sum_{{\rm states}_1} \cdots \sum_{{\rm states}_N} e^{-\beta_1 \cdot P_1} \cdots e^{-\beta_N \cdot P_N}\, \delta_{\Sigma_i \mathbf{Q}_i, \mathbf{Q}^0} - \sum_{{\rm states}_1} \cdots \sum_{{\rm states}_N} e^{-\beta_1 \cdot P_1} \cdots e^{-\beta_N \cdot P_N}\, \delta_{\Sigma_i \mathbf{Q}_i, \mathbf{Q}^0}\, \delta_{|C|, 0} \equiv Z_1(\mathbf{Q}^0) - Z_2(\mathbf{Q}^0, 0) , \qquad (33) \]

where $|C|$ is the absolute value of charm. The primary yield of charmed hadrons, calculated according to Eq. (28) and the partition function (33), is derived in Appendix C. A significant production rate of heavy flavoured hadrons might affect light hadron abundances through decay feed-down, so it is important to know how large the fraction $\sigma_{c\bar{c}}/\sigma$ is. Available data on charm cross-sections [18] indicate a small fraction at the centre of mass energies considered here and, consequently, much lower values for bottom quark production. Therefore, the perturbative production of heavy quarks can be neglected as long as one deals with light flavoured hadron production at these energies. We assume that it may be neglected at any centre of mass energy; this point will be discussed in more detail in the next section. All light flavoured hadrons and resonances with a mass below the cut-off of 1.7 GeV have been included among the primary generated hadron species; the effect of this cut-off on the obtained results will be discussed in the next section.
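To make the structure of Eq. (32) concrete, here is a toy numerical evaluation of $\zeta(\mathbf{Q})$ and of a chemical factor on a grid over the three angles (the species list and the $z_j$ values are invented for illustration; in a real calculation they come from Eq. (23) with the fitted $T$, $V$, $\gamma_s$, the $\gamma_s^{s_j}$ factors being folded into the $z_j$):

import numpy as np

# toy species: (z_j, (charge, baryon number, strangeness)); values are illustrative
species = [
    (1.5,  ( 1, 0, 0)), (1.5, (-1,  0,  0)), (1.5, (0, 0, 0)),   # pion-like
    (0.4,  ( 1, 0, 1)), (0.4, (-1,  0, -1)),                     # kaon-like
    (0.05, ( 1, 1, 0)), (0.05, (-1, -1, 0)),                     # nucleon-like
]

phi = np.linspace(0.0, 2 * np.pi, 48, endpoint=False)
P = np.meshgrid(phi, phi, phi, indexing="ij")

def zeta(Q):
    # zeta(Q) = (2 pi)^-3 Int d^3 phi exp(i Q.phi) exp[ sum_j z_j exp(-i q_j.phi) ]
    s = np.zeros_like(P[0], dtype=complex)
    for z, q in species:
        s = s + z * np.exp(-1j * (q[0] * P[0] + q[1] * P[1] + q[2] * P[2]))
    integrand = np.exp(1j * (Q[0] * P[0] + Q[1] * P[1] + Q[2] * P[2])) * np.exp(s)
    return integrand.mean().real           # grid mean approximates the phi average

Q0 = (0, 0, 0)                             # pbar-p-like initial quantum numbers
qp = (1, 1, 0)                             # proton-like quantum numbers
print(zeta((Q0[0] - qp[0], Q0[1] - qp[1], Q0[2] - qp[2])) / zeta(Q0))   # < 1: canonical suppression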
The masses of resonances with a non-negligible width have been distributed according to a relativistic Breit-Wigner function around the central value. The strangeness suppression factor $\gamma_s$ has also been applied to neutral mesons such as the $\eta$ and the $\omega$, according to their strange valence quark content; the mixing angles quoted in ref. [16] have been used. Once the average multiplicities of the primary hadrons have been calculated as a function of the three parameters $T$, $V$ and $\gamma_s$, the decay chain is performed until particles which are stable, or long-lived enough to be counted as stable by the experiments (pions, kaons and the weakly decaying baryons among them), are reached, in order to match the average multiplicity definition in pp and $\bar{\rm p}$p collision experiments. It is worth mentioning that, unlike pp and $\bar{\rm p}$p experiments, all e$^+$e$^-$ collider experiments also include the decay products of K$^0_{\rm S}$ and of the weakly decaying hyperons in their multiplicity definition. Finally, the overall yield is compared with experimental measurements, and the $\chi^2$,

\[ \chi^2 = \sum_i ({\rm theo}_i - {\rm expe}_i)^2 / {\rm error}_i^2 , \qquad (34) \]

is minimized.

As far as the data set is concerned, we used all available measurements of hadron multiplicities in non-single-diffractive $\bar{\rm p}$p and inelastic pp collisions down to a centre of mass energy of about 19 GeV (see Tables 2 and 3), fulfilling the following quality requirements:

1. the data is the result of an actual experimental measurement and not a derivation based on isospin symmetry arguments; indeed, this model predicts slight violations of isospin symmetry due to mass differences;

2. the multiplicity definition is unambiguous, which means it is clear which decay products are included in the quoted numbers; actually, all referenced papers take the multiplicity definition previously mentioned;

3. the data is the result of an extrapolation of a spectrum measured over a large kinematical region.

Some referenced papers about pp collisions quote cross sections instead of average multiplicities. In some cases (e.g. ref. [19]) both of them are quoted for some particles, which makes it possible to obtain the average multiplicity of particles for which only the cross section is given. Otherwise, total inelastic pp cross sections have been extracted from other papers. Whenever several measurements at the same centre of mass energy have been available, averages have been calculated according to a weighting procedure described in ref. [20], prescribing a rescaling of errors to take into account a posteriori correlations and disagreements of experimental results.

Since the decay chain is an essential step of the fitting procedure, the calculated theoretical multiplicities are affected by the experimental uncertainties on the masses, widths and branching ratios of all involved hadron species. In order to estimate the effect of these uncertainties on the results of the fit, a two-step procedure for the fit itself has been adopted: firstly, the fit has been performed with a $\chi^2$ including only experimental errors, and a set of parameters $T$, $V$, $\gamma_s$ has been obtained. Then, the various masses, widths and branching ratios have been varied in turn by their errors, as quoted in ref. [16], and new theoretical multiplicities calculated, keeping the parameters $T$, $V$, $\gamma_s$ fixed. The differences between old and new theoretical multiplicity values have been considered as additional systematic errors to be added in quadrature to the experimental errors. Finally, the fit has been repeated with a $\chi^2$ including the overall errors so as to obtain final values for the model parameters and for the theoretical multiplicities. Among the mass, width and branching ratio uncertainties, only those producing significant variations of the final hadron yields (actually more than 130) have been considered.
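Schematically, the minimization of Eq. (34) has the following structure (a toy sketch only: theo() stands in for the full primary-yield-plus-decay-chain calculation, and every number below is invented):

import numpy as np
from scipy.optimize import minimize

expe  = np.array([3.0, 0.30, 0.10])        # toy "measured" multiplicities
error = np.array([0.2, 0.03, 0.02])        # toy errors

def theo(params):
    # stand-in for the model yields (primary production, Eq. (22), plus decays)
    T, VT3, gs = max(params[0], 1e-3), params[1], params[2]
    return VT3 * np.array([1.0, 0.10 * gs, 0.03 * gs]) * np.exp(-np.array([0.14, 0.49, 1.12]) / T)

def chi2(params):                          # Eq. (34)
    return np.sum((theo(params) - expe) ** 2 / error ** 2)

res = minimize(chi2, x0=[0.17, 10.0, 0.5], method="Nelder-Mead")
print(res.x, res.fun)                      # fitted (T, V*T^3, gamma_s) and chi^2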
## 4 Results and checks

The fitted values of the parameters $T$, $V$ and $\gamma_s$ at the various centre of mass energy points are quoted in Table 1, while the fitted values of the average multiplicities are quoted in Tables 2, 3 along with the measured average multiplicities and the estimated primary fraction. The fit quality is very good at almost all centre of mass energies, as demonstrated by the low values of the $\chi^2$'s and by Figs. 2, 3, 4, 5, 6. Owing to the relatively large value of the $\chi^2$ at one of the pp energy points, variations of the fitted parameters larger than the fit errors must be expected when repeating the fit excluding the data points with the largest deviations from the theoretical values. Therefore, the fit at that energy has been repeated excluding in turn two groups of particles (the second consisting of K mesons and pions) from the data set; the maximum difference between the new and old fit parameters has been considered as an additional systematic error and is quoted in Table 1 within brackets.

The fitted temperatures are compatible with a constant value at freeze-out, independently of collision energy and kind of reaction (see Fig. 7). On the other hand, $\gamma_s$ exhibits a very slow rise from 20 to 900 GeV (see Fig. 8); its value, well below 1 over the whole explored centre of mass energy range, proves that complete strangeness equilibrium is not attained. Moreover, the temperature values of about 170-190 MeV are in good agreement with those found in e$^+$e$^-$ collisions [7, 33] and in heavy ion collisions [34]. On the other hand, the global volume does increase as a function of centre of mass energy, as it is proportional, for nearly constant $T$ and $\gamma_s$, to the overall multiplicity, which indeed increases with energy. Its values range from 6.4 fm$^3$ in pp collisions at the lowest centre of mass energy, at a temperature of 191 MeV, up to 67 fm$^3$ in $\bar{\rm p}$p collisions at the highest centre of mass energy, at a temperature of 170 MeV. However, since the volume values are strongly correlated with those of the temperature in the fit, the errors turn out to be quite large and the fit convergence is slowed down; that is the reason why we actually fitted the product $VT^3$ instead of $V$ alone.

Once $T$, $V$ and $\gamma_s$ are determined by fitting the average multiplicities of some hadron species, their values can be used to predict the average multiplicities of any other species at a given centre of mass energy. Since the dependence of the chemical factors on the global volume is quite mild in the region of interest (see Fig. 2), the hadron density mainly depends on the temperature and $\gamma_s$ (cf. Eqs. (22), (29)). Therefore, constant values of temperature and $\gamma_s$ imply a nearly constant hadron density at freeze-out, as shown in Fig. 10, corresponding to a nearly constant mean distance between hadrons. Unfortunately, due to its dramatic dependence on the temperature, almost all density values are affected by large errors, and thus a definite claim of a constant freeze-out density cannot be made. The same statement is true for the pressure, also shown in Fig. 10, whose definition is given in Appendix D.

The physical significance of the results found so far depends on their stability as a function of the various approximations and assumptions which have been introduced. First, the temperature and $\gamma_s$ values are low enough to justify the use of the Boltzmann limits (29), (30) for all hadrons except pions, as explained in Sect. 3. As far as the effect of a cut-off in the hadronic mass spectrum goes, the most relevant test proving that our results so far do not depend on it is the stability of the number of primary hadrons against changes of the cut-off mass.
The fit procedure intrinsically attempts to reproduce fixed experimental multiplicities; if the number of primary hadrons does not change significantly when repeating the fit with a slightly lower cut-off, the production of the heavier hadrons excluded by the cut-off must be negligible, in particular with regard to its decay contributions to the light hadron yields. In this spirit, all fits have been repeated moving the mass cut-off value from 1.7 down to 1.3 GeV in steps of 0.1 GeV, checking the stability of the amount of primary hadrons as well as of the fit parameters. It is worth remarking that the number of hadronic states with a mass between 1.6 and 1.7 GeV is 238 out of 535 overall, so that their exclusion is really a severe test of the reliability of the final results. Figure 11 shows the model parameters and the primary hadrons in $\bar{\rm p}$p collisions at one of the centre of mass energy points; above a cut-off of 1.5 GeV the number of primary hadrons settles at an asymptotically stable value, whilst the fitted values for $T$, $V$, $\gamma_s$ do not show any particular dependence on the cut-off. Therefore, we conclude that the chosen value of 1.7 GeV ensures that the obtained results are meaningful.

As mentioned in Sect. 3, the perturbative production of heavy quarks has been neglected. This is legitimate in low energy pp collisions, where it has actually been measured [18], but not necessarily in high energy $\bar{\rm p}$p collisions, where no measurement exists and one has to rely on theoretical estimates. In general, the latter predict very low b quark cross sections, but a possibly non-negligible c quark production. We used the calculations of ref. [35], according to which the fraction of non-single-diffractive events in which c$\bar{\rm c}$ pairs are produced (see Sect. 3) rises as a function of centre of mass energy. We repeated the fit for $\bar{\rm p}$p collisions at the highest centre of mass energy, where this fraction is expected to be the largest, by using the upper estimate of the c$\bar{\rm c}$ cross section, in order to maximize the effect of charm production. The partition function to be used in such events is that in Eq. (33), with a further modification according to Eq. (24) to take into account the leading baryon effect. The model parameters fitted with this additional charm component are quoted in Table 4; their variation with respect to the default fit is within the fit errors, implying that extra charm production does not affect them significantly.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9345092177391052, "perplexity": 696.8588041731048}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141181179.12/warc/CC-MAIN-20201125041943-20201125071943-00710.warc.gz"}
https://math.uzh.ch/?id=ve_mfs_sem_vor0&key1=0&key2=1430&key3=4824
# Talk

Take a polynomial map $\mathbb{F}_q\to\mathbb{F}_q$ over a (large) finite field and compute the fraction of elements in its image. Most likely, you got $\approx 0.632$. Once we explain why that is, we arrive at a group theory question: Suppose a subgroup $G$ of the symmetric group $S_n$ has the same fraction of fixed-point-free elements as $S_n$ itself. Does it follow that $G=S_n$? The talk will be nontechnical. We will invoke a property of $e=2.71...$.
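A quick experiment of the kind described above (an illustrative script; the prime and the degree are arbitrary choices):

import random

p = 10007                                   # a largish prime, so F_p is a field
d = 6                                       # degree of the random polynomial
coeffs = [random.randrange(p) for _ in range(d + 1)]

def f(x):
    y = 0
    for c in coeffs:                        # Horner evaluation mod p
        y = (y * x + c) % p
    return y

image = {f(x) for x in range(p)}
print(len(image) / p)                       # typically close to 1 - 1/e = 0.6321...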
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8850416541099548, "perplexity": 181.96156375498984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00340.warc.gz"}
https://maths.anu.edu.au/study/student-projects/arithmetic-algebraic-geometry
Arithmetic algebraic geometry You will investigate a chosen topic in number theory or algebraic geometry. Possible topics include algebraic number theory, elliptic curves, and modular forms, but many other topics are possible.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8558865785598755, "perplexity": 650.2175218848001}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541317967.94/warc/CC-MAIN-20191216041840-20191216065840-00029.warc.gz"}
http://www.ch.imperial.ac.uk/rzepa/blog/?tag=mobius
## Posts Tagged ‘Möbius’ ### A pericyclic dichotomy. Friday, November 30th, 2012 A dichotomy is a division into two mutually exclusive, opposed, or contradictory groups. Consider the reaction below. The bicyclic pentadiene on the left could in principle open on heating to give the monocyclic [12]-annulene (blue or red) via what is called an electrocyclic reaction as either a six (red) or eight (blue) electron process. These two possibilities represent our dichotomy; according to the Woodward-Hoffmann (WH) pericyclic selection rules, they represent contradictory groups. Depending on the (relative) stereochemistry at the ring junctions, if one reaction is allowed by the WH rules, the other must be forbidden, and of course vice-versa. It is a nice challenge to ask students to see if the dichotomy can be reconciled. ### The stereochemistry of [8+2] pericyclic cycloadditions. Sunday, July 10th, 2011 Steve Bachrach has blogged on the reaction shown below. If it were a pericyclic cycloaddition, both new bonds would form simultaneously, as shown with the indicated arrow pushing. Ten electrons would be involved, and in theory, the transition state would have 4n+2 aromaticity. In fact Fernandez, Sierra and Torres have reported that they can trap an intermediate zwitterion 2, and in this sense therefore, the reaction is not pericyclic but nucleophilic addition from the imine lone pair to the carbonyl of the ketene (it finds the half way stage convivial). But this got me thinking. Does this reaction have any pericyclic character at all? And if so, could it be enhanced by design? ### Valentine chemistry Sunday, February 13th, 2011 The Möbius band is an experimental delight. In its original forms, it came flat-packed as below. The one shown on the left is related to the international symbol for recycling (if we denote the number of half twists imparted as m, this one has m=3). The middle one (m=4) shows a 4-twisted variant, and the one on the right has a 5-twist (m=5). These all come from Möbius’ original sketches, found amongst his belongings when he died. In this post they will form the basis for some experiments in molecular chirality.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8444134593009949, "perplexity": 3085.5340020509443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201455.20/warc/CC-MAIN-20190318152343-20190318174343-00300.warc.gz"}
http://www.tcs.tifr.res.in/events/closure-small-circuits-under-taking-factors
# Closure of Small Circuits Under Taking Factors

## Time: Friday, 5 April 2019, 17:15 to 18:15

## Venue: A-201 (STCS Seminar Room)

Abstract: In the 1980s, Kaltofen proved one of the most remarkable results in algebraic complexity theory. He showed that if a polynomial can be computed by a small circuit, then each of its factors can also be computed by small circuits. In fact, given a circuit for the original polynomial, he also gave an efficient algorithm for computing circuits for the factors. This result has many applications, one of which is the algebraic analogue of the hardness vs. randomness question. In most applications, however, it is only required to show the existence of small circuits for the factors (as opposed to actually computing them). Very recently, Mrinal Kumar, Chi-Ning Chou and Noam Solomon gave a short, simple and almost completely self-contained proof of this fact, and in this talk we will discuss their proof. Formally, we will prove the following statement: if an n-variate, degree-d polynomial f can be computed by an algebraic circuit of size s, then each of its factors can be computed by an algebraic circuit of size at most poly(s, n, d).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9660131335258484, "perplexity": 449.8009866243006}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573284.48/warc/CC-MAIN-20190918110932-20190918132932-00230.warc.gz"}
http://forums.pctex.com/viewtopic.php?t=54
PCTeX Talk: Discussions on TeX, LaTeX, fonts, and typesetting

stubner Posted: Wed Apr 19, 2006 2:19 pm Post subject: absolute values

Hi everybody, it seems I haven't used absolute values much lately, since only yesterday I found that things like $|x|$ or $|o|$ look off-balance to me. The space to the right of the latter looks larger than the space to the left of the letter. In order to test this systematically, I have taken some code from testfont.tex (without fully understanding it ;-):

\documentclass{article}
\renewcommand{\rmdefault}{ptm}
\usepackage[slantedGreek]{mtpro2}
\def\math{\def\ii{i} \def\jj{j}
 \def\\##1{|##1|+}\mathtrial
 \def\\##1{##1_2+}\mathtrial
 \def\\##1{##1^2+}\mathtrial
 \def\\##1{##1/2+}\mathtrial
 \def\\##1{2/##1+}\mathtrial
 \def\\##1{##1,{}+}\mathtrial
 \def\\##1{d##1+}\mathtrial
 \let\ii=\imath \let\jj=\jmath
 \def\\##1{\hat##1+}\mathtrial}
\newcount\skewtrial \skewtrial='177
\def\mathtrial{$\\A \\B \\C \\D \\E \\F \\G \\H \\I \\J \\K \\L \\M \\N \\O \\P \\Q \\R \\S \\T \\U \\V \\W \\X \\Y \\Z \\a \\b \\c \\d \\e \\f \\g \\h \\\ii \\\jj \\k \\l \\m \\n \\o \\p \\q \\r \\s \\t \\u \\v \\w \\x \\y \\z \\\alpha \\\beta \\\gamma \\\delta \\\epsilon \\\zeta \\\eta \\\theta \\\iota \\\kappa \\\lambda \\\mu \\\nu \\\xi \\\pi \\\rho \\\sigma \\\tau \\\upsilon \\\phi \\\chi \\\psi \\\omega \\\vartheta \\\varpi \\\varphi \\\Gamma \\\Delta \\\Theta \\\Lambda \\\Xi \\\Pi \\\Sigma \\\Upsilon \\\Phi \\\Psi \\\Omega \\\partial \\\ell \\\wp$\par}
\def\mathsy{\begingroup\skewtrial='060 % for math symbol font tests
 \def\mathtrial{$\\A \\B \\C \\D \\E \\F \\G \\H \\I \\J \\K \\L \\M \\N \\O \\P \\Q \\R \\S \\T \\U \\V \\W \\X \\Y \\Z$\par}
 \math\endgroup}
\begin{document}
\math
\end{document}

IMO most lowercase letters look off-center to me, with too much space to the right of the letter (f, v, w, e are exceptions). The Greek letters are fine, while the uppercase letters are mixed (R has too much space to the left, U and M on the right). The other tests (besides absolute values) look fine. Other opinions?

cheerio, ralf

jautschbach Posted: Wed Apr 19, 2006 3:52 pm Post subject: Re: absolute values

stubner wrote:
> Hi everybody, it seems I haven't used absolute values much lately, since only yesterday I found that things like $|x|$ or $|o|$ look off-balance to me. The space to the right of the latter looks larger than the space to the left of the letter. ralf

|i| and |\pi|, for example, seem to have too much space on the right. |\eta| looks like there is not enough space on the right.

Jochen

Michael Spivak Posted: Thu Apr 20, 2006 4:04 pm

Basically, I want to reiterate the remark I made in the last post to the "firstimpressions" posting by stubner. If you start looking carefully at any mathematical typesetting (as opposed to just reading it) you will find thousands of non-optimal things. Some of these are actually due to the design of TeX (see some remarks of mine in the "spacing" posting by zeller), and some to the varying circumstances of individual characters. All sorts of things that one would never even notice while reading a mathematics paper can stand out when one looks at things a character at a time, and sometimes one becomes overly concerned.
(The link http://support.pctex.com/files/JWPXMWRZTYLV/abs.pdf shows Computer Modern and MTPro2 characters inside absolute values and parentheses, and I think that you will find cases where CM is spaced better than MTPro2, but also cases where the opposite is true.) For example, although I agree that |M| and |U| have too much space to the right of the letters, I wouldn't agree that |R| has too much space to the left of the R; at most there is just a tiny extra bit of space. By contrast, in Computer Modern, the |R| definitely has this problem to a much greater degree. Notice, moreover, that in MTPro2, (M) and (U) and (R) look nicely balanced. Of course, that's partly because of the character of the right parenthesis---it has a top piece that extends backwards, unlike almost all characters! In Computer Modern this doesn't pose as great a problem, mainly because the ) is much thinner and unshaped. The case of |i|, where there is certainly more space on the right, is also instructive. Notice that the dot on the Times-Italic i is very close to being the rightmost part of the character, while in CM it is nowhere near the right, because of the curlicue at the bottom. For this reason, I had to make the italic correction of the i rather big; otherwise, superscripts would be very close to the dot, making reading very unpleasant. Since the italic correction is always added to the i, this gives the extra space before the | or the ). Naturally, I had to compensate for this by adding more negative kerning between the i and all other characters, but you can't kern with the ), as I've mentioned in one of the two postings above. Similarly, if you compare x^i in CM and MTPro2, you'll see that the superscript i in CM has a curlicue to the left, which keeps it separated from the x, while in MTPro2, I needed to make a greater italic correction to the x in order to get superscripts adequately far away. TeX has \scriptspace to determine extra space after a subscript or superscript; alas, it does not also have a \prescriptspace, to determine some extra space _before_ superscripts! (And similarly, as noted in one of the previously mentioned postings, the spacing in scriptstyle and scriptscriptstyle should be more flexible.) At any rate, for now, I'll leave things as they are. Possibly in a future release I'll try to address some of these questions, though it simply isn't possible to optimize all spacing.
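For what it's worth, a minimal workaround sketch (an editor's illustration, not an official MTPro2 fix): wrap absolute values in a macro so the inner spacing can be tuned per document. The -2mu kern is an arbitrary illustrative amount.

```latex
\documentclass{article}
\usepackage{amsmath}
% Hypothetical per-document tweak: pull the closing bar slightly
% closer to the letter. Adjust the \mkern amount to taste.
\newcommand{\abs}[1]{\lvert #1\mkern-2mu \rvert}
\begin{document}
Compare $|i|$, $|M|$, $|U|$ with $\abs{i}$, $\abs{M}$, $\abs{U}$.
\end{document}
```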
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9292898178100586, "perplexity": 2079.2490907032175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314959.58/warc/CC-MAIN-20190819201207-20190819223207-00262.warc.gz"}
http://cmaclaurin.com/2021/09/
## Coordinates adapted to observer 4-velocity field

Suppose you have a 4-velocity field $u$, which might be interpreted physically as observers or a fluid. It may be useful to derive a time coordinate which both coincides with proper time for the observers, and synchronises them in the usual way. Here we consider only the geodesic and vorticity-free case. Define a new time coordinate $T$ by: $$\mathrm{d}T = -u^\flat.$$ The “flat” symbol is just a fancy way to denote lowering the index, so the RHS is just $-u_\mu\,\mathrm{d}x^\mu$. On the LHS, $\mathrm{d}T$ is the gradient of a scalar, which may be expressed using the familiar chain rule: $$\mathrm{d}T = \frac{\partial T}{\partial x^\mu}\,\mathrm{d}x^\mu,$$ where $\{\partial_\mu\}$ is a coordinate basis. Technically $\mathrm{d}T$ is a covector, with components $\partial T/\partial x^\mu$ in the cobasis $\{\mathrm{d}x^\mu\}$. Similarly $u^\flat = u_\mu\,\mathrm{d}x^\mu$, so we must match the components: $\partial T/\partial x^\mu = -u_\mu$. For our purposes we do not need to integrate explicitly; it is sufficient to know the original equation is well-defined. (No such time coordinate exists if there is acceleration or vorticity, which is a corollary of the Frobenius theorem, see Ellis+ 2012 §4.6.2.) The new coordinate is timelike, since $\langle \mathrm{d}T,\mathrm{d}T\rangle = \langle u,u\rangle = -1$. One can show its change with proper time is $\mathrm{d}T/\mathrm{d}\tau = \mathrm{d}T(u) = 1$. Further, the hypersurfaces $T = \mathrm{const}$ are orthogonal to $u$, since the normal vector is parallel to $\mathrm{d}T$. This orthogonality means that at each point, the hypersurface agrees with the usual simultaneity defined locally by the observer at that point. (Orthogonality corresponds to the Poincaré-Einstein convention, so named by H. Brown 2005 §4.6.) We want to replace the $t$-coordinate by $T$, and keep the others. What are the resulting metric components for this new coordinate? (Of course it’s the same metric, just a different expression of this tensor.) Notice the components of the inverse metric satisfy $g^{\mu\nu} = \langle \mathrm{d}x^\mu, \mathrm{d}x^\nu\rangle$. Hence one new component is $g'^{TT} = \langle \mathrm{d}T, \mathrm{d}T\rangle = -1$. Also $g'^{Ti} = \langle \mathrm{d}T, \mathrm{d}x^i\rangle = -u^i$, where $u^i = g^{i\nu}u_\nu$. The $g'^{iT}$ are the same by symmetry, and the remaining components are unchanged. Hence the new components in terms of original components are: $$g'^{TT} = -1, \qquad g'^{Ti} = g'^{iT} = -u^i, \qquad g'^{ij} = g^{ij}.$$ The matrix inverse gives the new metric components $g'_{\mu\nu}$. The 4-velocity components are: $u'^T = \mathrm{d}T(u) = 1$ by the original equation. Also $u'_\mu = -(\mathrm{d}T)_\mu = (-1,0,0,0)$, and the spatial $u^i$ are unchanged. Hence $u = \partial_T + u^i\,\partial_i$. Anecdote: I used to write out the defining relation $\mathrm{d}T = -u_\mu\,\mathrm{d}x^\mu$, rearrange for $\mathrm{d}t$, and substitute it into the original line element. This works but is clunky. My original inspiration was Taylor & Wheeler 2000 §B4, and I was thrilled to discover their derivation of Gullstrand-Painlevé coordinates from Schwarzschild coordinates plus certain radial velocities. (I give more references in MacLaurin 2019  §3.) I imagine that if a textbook presented the material above — given limited space and more formality — it may seem as if the more elegant approach were obvious. However I only (re?)-discovered it today by accident, using a specific 4-velocity from the previous post, and noticing the inverse metric components looked simple and familiar…

## Total angular momentum in Schwarzschild spacetime

In relativity, distances and times are relative to an observer’s velocity. Hence one should be careful when defining an angular momentum. Speaking generally, a natural parametrisation of 4-velocities uses Killing vector fields, if the spacetime has any. In Schwarzschild spacetime, Hartle (2003 §9.3) defines the Killing energy per mass and Killing angular momentum per mass as: $$e := -\langle \partial_t, u\rangle, \qquad \ell_z := \langle \partial_\phi, u\rangle.$$ The angle brackets are the metric scalar product, $\phi$ has range $[0,2\pi)$, and we will take $u$ to be a 4-velocity. I have relabeled Hartle's $\ell$ as $\ell_z$. While $\partial_t$ and $\partial_\phi$ are just coordinate basis vectors for Schwarzschild coordinates, as Killing vector fields (KVFs) they have geometric significance beyond this convenient description. [$\partial_t$ is the unique KVF which, as $r \to \infty$ in "our universe" (region I), is future-pointing with squared-norm $\to -1$.
On the other hand $\partial_\phi$ has squared-norm $r^2\sin^2\theta$, so it is partly determined by having maximum squared-norm amongst points at any given $r$, which implies it is orthogonal to the rotation axis, although the specific orientation is not otherwise determined geometrically.] In fact $\ell_z$ is the portion of angular momentum (per mass) about the $z$-axis. In Cartesian-type coordinates $(t,x,y,z)$, the KVF $\partial_\phi$ has components $(0,-y,x,0)$. Similarly, we can define angular momentum about the $x$-axis using the KVF $\xi_x$, which in spherical coordinates is $-\sin\phi\,\partial_\theta - \cot\theta\cos\phi\,\partial_\phi$. For the $y$-axis we use $\xi_y$, which is $\cos\phi\,\partial_\theta - \cot\theta\sin\phi\,\partial_\phi$ in the original coordinates. Then: $$\ell_x := \langle \xi_x, u\rangle, \qquad \ell_y := \langle \xi_y, u\rangle.$$ Hence we can define the total angular momentum by the Pythagorean relation $\ell^2 := \ell_x^2 + \ell_y^2 + \ell_z^2$, that is: $$\ell^2 = u_\theta^{\,2} + \frac{u_\phi^{\,2}}{\sin^2\theta}.$$ This is a natural quantity determined from the geometry alone, unlike the individual $\ell_x$ etc., which rely on an arbitrary choice of axes. It is non-negative. I came up with this independently, but do not claim originality, and the general idea could be centuries old. Similarly quantum mechanics uses $L^2$ and $L_z$, which I first encountered in a 3rd year course, although these are operators on flat space. One 4-velocity field which conveniently implements the total angular momentum is: [equation lost] In this case the axial momenta are such that the total Killing angular momentum is $\ell$, as claimed. There are restrictions on the parameters; in particular the "$\pm$" must be a minus in the black hole interior. Incidentally this field is geodesic, $\nabla_u u = 0$. It also has zero vorticity (I wrote a technical post on the kinematic decomposition previously), so we might say it has macroscopic rotation but no microscopic rotation. Another possibility is in terms of $e$ and $\ell$: [equation lost] where the first two components are the same as the previous vector. The expressions are simpler with a lowered index $u^\flat$.
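As a concrete check of the first post's construction, here is a sympy sketch (an editor's illustration; it assumes a (-,+,+,+) signature and uses the standard Gullstrand-Painlevé radial-infall field as the 4-velocity, since the specific field from the blog is not reproduced above):

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
f = 1 - 2*M/r
# Schwarzschild metric, signature (-,+,+,+)
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

# Radial free-fall from rest at infinity (Gullstrand-Painleve observers):
# u^mu = (1/f, -sqrt(2M/r), 0, 0); geodesic and vorticity-free.
u_up = sp.Matrix([1/f, -sp.sqrt(2*M/r), 0, 0])
u_dn = (g * u_up).applyfunc(sp.simplify)   # lowered index u_mu
print(u_dn.T)                              # expect (-1, -sqrt(2M/r)/f, 0, 0)

# dT = -u^flat, so dT/dx^mu = -u_mu; the new T-T inverse-metric
# component is u_mu u_nu g^{mu nu} = <u,u>:
gTT = sp.simplify((u_dn.T * ginv * u_dn)[0, 0])
print(gTT)                                 # -1, as derived above
```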
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9618650674819946, "perplexity": 670.7274324191976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588398.42/warc/CC-MAIN-20211028162638-20211028192638-00652.warc.gz"}
https://puzzling.stackexchange.com/questions/72582/1984-take-the-digits-1-9-8-and-4-and-make-246/72586
# 1984 - take the digits 1, 9, 8 and 4 and make 246

Next year is the 70th anniversary of the publication of the book 1984 by George Orwell. Here is a puzzle to start the anniversary celebrations off a bit early ...

Warm up

Can you assemble a formula using the numbers $1$, $9$, $8$, and $4$ in any order so that the result equals $246$? You may use the operations $x + y$, $x - y$, $x \times y$, $x \div y$, $x!$, $\sqrt{x}$, $\sqrt[\leftroot{-2}\uproot{2}x]{y}$ and $x^y$, as long as all operands are either $1$, $9$, $8$, or $4$. Operands may of course also be derived from calculations, e.g. $19*8*(\sqrt{4})$. You may also use brackets to clarify order of operations, and you may concatenate two or more of the four digits you start with (such as $8$ and $4$ to make the number $84$) if you wish. You may only use each of the starting digits once and you must use all four of them. I'm afraid that concatenation of numbers arising from calculations is not permitted, but answers with such concatenations will get plus one from me.

Main Event

If you used concatenation above, then make a formula using the numbers $1$, $9$, $8$, and $4$ in any order so that the result equals $246$ without using any concatenation; so, for example, you cannot put $8$ and $4$ together to make the number $84$. The rest of the rules above apply, but concatenation is not allowed. If you didn't use any concatenation above, then you have solved the puzzle, but you could try to solve it with concatenation, that is, concatenation of the initial numbers only.

Note (and perhaps hint): For this second part any finite number of functions can be used, though ingenious solutions with infinite numbers of functions will get plus one from me.

Note that in all the puzzles above, double, triple, etc. factorials (n-druple-factorials), such as $4!! = 4 \times 2$, are not allowed, but factorials of factorials are fine, such as $(4!)! = 24!$. I will upvote answers with double, triple and n-druple-factorials which get the required answers, but will not mark them as correct, particularly because a general method was developed by @Carl Schildkraut to solve these puzzles.

Many thanks to the authors of the similar questions that inspired this one.

$\left(\sqrt{4}\right)^8-(9+1)$
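Out of curiosity, a rough brute-force sketch in Python (an editor's illustration, not from the thread; it searches the four binary operations, a guarded power, and at most one square root per intermediate value, with no factorials and no concatenation, so it under-counts, but it confirms the posted answer):

```python
import itertools, math

def binop(a, b):
    # Values reachable from an ordered pair via + - * / and a guarded power
    out = [a + b, a - b, a * b]
    if b != 0:
        out.append(a / b)
    if 0 < a < 50 and abs(b) < 10:
        out.append(a ** b)
    return out

def unary(v):
    # The value itself, plus a square root when defined
    out = {v}
    if v >= 0:
        out.add(math.sqrt(v))
    return out

def reach(nums):
    if len(nums) == 1:
        return unary(nums[0])
    vals = set()
    for i in range(1, len(nums)):
        for a in reach(nums[:i]):
            for b in reach(nums[i:]):
                for r in binop(a, b):
                    vals |= unary(r)
    return vals

for perm in itertools.permutations([1, 9, 8, 4]):
    if any(abs(v - 246) < 1e-9 for v in reach(list(perm))):
        print(perm, "can make 246")
```

Among the orderings it reports is (4, 8, 9, 1), used by $\left(\sqrt{4}\right)^8-(9+1)=256-10=246$.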
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.894048810005188, "perplexity": 320.02874131024936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704821381.83/warc/CC-MAIN-20210127090152-20210127120152-00037.warc.gz"}
https://brilliant.org/discussions/thread/problem-prove-that-31005-equiv-1-mod-2011/
# Problem: Prove that $3^{1005} \equiv -1 \pmod{2011}$

Hello everybody, I'm having difficulty proving that $3^{1005} + 1$ is divisible by 2011, which is one step in solving this problem: Prove that there exist $x, y \in \{1,2,\dots,1005\}$ such that $x^2+3y^2-3$ is divisible by 2011. Thanks a lot.

Note by Anh Huy Nguyen, 4 years, 4 months ago

[As Zi Song pointed out, your discussion should say that $3^{1005}+1$ is divisible by 2011. Please update that.] I'm assuming that you are familiar with concepts like Euler's Theorem and (Gauss's law of) Quadratic Reciprocity. You should be able to fill in the details. This approach is pretty standard. [As Sambit suggested] Since 2011 is prime, we have $\phi (2011) = 2010$. By Euler's theorem, we know that $3^{2010} \equiv 1 \pmod{2011}$. Let $3^{1005} = x$; then $0 \equiv x^2 - 1 \equiv (x-1)(x+1) \pmod{2011}$, so $x \equiv \pm 1 \pmod{2011}$ (since 2011 is prime). If $x \equiv 1 \pmod{2011}$, then $[ 3^{503} ] ^2 \equiv 3^{1006} \equiv 3 \pmod{ 2011}$, so 3 is a quadratic residue. However, since $2011 \equiv 7 \pmod{12}$, this contradicts the fact that the Legendre symbol $\left( \frac {3}{2011} \right) = -1$. Pop quiz: Does this show that 3 is a primitive root modulo 2011? Why, or why not? Staff · 4 years, 4 months ago

You have my gratitude, Calvin. Legendre is the last thing I would have thought of, but I guess it's inevitable in some cases. A good lesson for me. · 4 years, 4 months ago

$\phi(2011)=2010=2 \cdot 1005$. Maybe you can use this. · 4 years, 4 months ago

$2011 \mid 3^{1005} + 1$ or $2011 \mid 3^{1005} - 1$ · 4 years, 4 months ago

@Anh Huy N. Do you want to prove the latter or the former? Your title contradicts what you said. · 4 years, 4 months ago

By Wolfram Alpha, you want to prove the former. :) · 4 years, 4 months ago

@Anh Huy N., can you explain your approach to the problem? · 4 years, 4 months ago

My deepest apologies, the title is what I want to prove. Many thanks to Zi Song Y.

@Bhargav D: This is my approach to the original problem. Let $A=\{1,2,\dots,1005\}$. Consider two sets $B=\{ x^2 - 3 \mid x \in A\}$ and $C=\{ -3y^2 \mid y \in A\}$. It's easy to see that:

• No two elements of B are congruent $\pmod{2011}$. The same for C.
• No element of C is divisible by 2011. The same for B (1).

(1) is not obvious and needs to be proven. Still, the only way I can think of is this: assume there is a number $x \in A$ satisfying $2011 \mid x^2 - 3$. Then $x^2 \equiv 3$, so $x^{2010} \equiv 3^{1005} \equiv -1$, which contradicts Fermat's little theorem, $x^{2010} \equiv 1 \pmod{2011}$. So to prove (1) we need to prove $3^{1005} \equiv -1$, which is true but .... Continuing with the original problem, we consider 2 cases:

Case 1: There are two numbers $a \in B, b \in C$ satisfying $a \equiv b \Rightarrow x^2 - 3 \equiv -3y^2$ (QED).

Case 2: For all $a \in B, b \in C$, $a$ and $b$ are not congruent $\pmod{2011}$: Since $|B|=|C|=1005$, $B \cup C$ is a set of 2010 elements, and can be expressed as $T=\{a_1,a_2,\dots,a_{2010} \}$ where $a_i \equiv i \pmod{2011}$. Therefore $\sum_{x \in T} x \equiv \sum_{i=1}^{2010}i \equiv 0 \pmod{2011}.$ However, by calculation, $\sum_{x \in T} x$ is not divisible by 2011. (Its exact value is $-3015 - 2\sum_{i=1}^{1005} i^2$.) · 4 years, 4 months ago

This problem depends on how you think about it, and what you know. For example, I simply set $x =2, y\equiv \pm 3^{502} \pmod{2011}$ (where I use $\pm$ to ensure that it lies in the domain).
This gives $x^2 + 3y^2 - 3 \equiv 4 + 3 \times 3^{1004} - 3 \equiv 3^{1005} + 1\equiv 0 \pmod{2011}$ Staff · 4 years, 4 months ago
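Both claims are easy to spot-check numerically (a quick sketch; Python's built-in `pow` does modular exponentiation):

```python
p = 2011
print(pow(3, 1005, p))           # 2010, i.e. 3^1005 = -1 (mod 2011)

# Calvin's witness: x = 2 and y = +/- 3^502, folded into {1, ..., 1005}
y = pow(3, 502, p)
y = min(y, p - y)
x = 2
print((x**2 + 3*y**2 - 3) % p)   # 0
```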
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9929549098014832, "perplexity": 617.2947407837814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549436316.91/warc/CC-MAIN-20170728002503-20170728022503-00533.warc.gz"}
https://preprint.impa.br/visualizar?id=1308
Preprint D2/2005

A Rigidity Theorem for Complete CMC Hypersurfaces in Lorentz Manifolds

Antonio Caminha

Keywords: Differential Geometry | Lorentz-Minkowski Space

In this paper we use the standard formula for the Laplacian of the squared norm of the second fundamental form and the asymptotic maximum principle of H. Omori and S. T. Yau to classify complete CMC spacelike hypersurfaces of a Lorentz ambient space of nonnegative constant sectional curvature, under appropriate bounds on the scalar curvature.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9634953141212463, "perplexity": 413.69093849868943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621273.31/warc/CC-MAIN-20210615114909-20210615144909-00425.warc.gz"}
https://www.physicsforums.com/threads/difference-between-temporal-and-spatial-coherence.722048/
# Difference between temporal and spatial coherence

1. ppy: Hi, I am confused about the difference between temporal and spatial coherence. I know coherence is when the waves have the same wavelength. An explanation in simple terms would be great, thanks :)

4. Claude Bile: Coherence describes the degree of correlation between two phases. Perfect coherence = perfectly correlated: knowing one phase allows you to deduce the other with infinite precision. Perfect incoherence = perfectly uncorrelated: knowing one phase gives no information whatsoever about the other phase (i.e. it is statistically random). Spatial coherence is correlated phase between two points in space. Temporal coherence is correlated phase between two points in time.
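A small numerical sketch of that definition (an editor's illustration, not from the thread): temporal coherence as the phase correlation between a field and a delayed copy of itself, for a clean wave versus one whose phase random-walks.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 100_000, 250                    # samples, and a fixed time offset

t = np.arange(n)
coherent = np.exp(1j * 0.01 * t)                   # fixed phase evolution
steps = rng.normal(0, 0.2, n).cumsum()
incoherent = np.exp(1j * (0.01 * t + steps))       # phase random-walks

def temporal_coherence(field, tau):
    # |<E(t) conj(E(t+tau))>| : 1 = perfectly correlated phases
    prod = field[:-tau] * np.conj(field[tau:])
    return abs(prod.mean())

print(temporal_coherence(coherent, tau))     # ~ 1.0
print(temporal_coherence(incoherent, tau))   # << 1
```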
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.875245213508606, "perplexity": 3013.528248835474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736679756.40/warc/CC-MAIN-20151001215759-00003-ip-10-137-6-227.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/150200-correct.html
# Math Help - Is this correct?

1. ## Is this correct?

Find $l$. $$l=\displaystyle\lim_{n\to\infty}\int^{2}_{0}\frac{|x-n|}{x+n}\,dx$$ Here's how I did it:

2. Yes, it is correct. Good work!
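The worked solution was attached as an image (now lost), so here is a quick sympy check of the value (assuming $n \ge 2$, so that $|x-n| = n-x$ on $[0,2]$):

```python
import sympy as sp

x, n = sp.symbols('x n', positive=True)
val = sp.integrate((n - x)/(x + n), (x, 0, 2))  # valid for n >= 2
print(sp.simplify(val))             # -2 + 2*n*(log(n + 2) - log(n))
print(sp.limit(val, n, sp.oo))      # 2, so l = 2
```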
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9055629968643188, "perplexity": 1906.253320550027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500825567.38/warc/CC-MAIN-20140820021345-00428-ip-10-180-136-8.ec2.internal.warc.gz"}
http://bookini.ru/interdisciplinary-applied-mathematics/418/
# Interdisciplinary Applied Mathematics

$$\frac{H^2\big(H + 2b(1+k)\big)}{H + b(2+k)}$$

This velocity is constant, and thus we can solve the above differential equation in terms of pressure by integrating twice and assuming that $p = 0$ as $r \to \infty$, and $dp/dr = 0$ at $r = 0$ due to symmetry. The equation for the pressure (10.9) then consists of two factors, namely the Reynolds part and the correction factor $p^*$ given by

$$p^* = \frac{2AH}{BC} + \frac{2H^2}{C-B}\left(\frac{B-A}{B^2}\,\ln\!\left(1 + \frac{B}{H}\right) - \frac{C-A}{C^2}\,\ln\!\left(1 + \frac{C}{H}\right)\right). \qquad (10.10)$$

The constants $A$, $B$, $C$ in this expression characterize the two surfaces; they are given by

$$A = b(2 + k), \qquad B = 2b\left(2 + k + \sqrt{1 + k + k^2}\right), \qquad C = 2b\left(2 + k - \sqrt{1 + k + k^2}\right).$$

The resistance forces acting on the spheres are equal in magnitude and are primarily due to the pressure, so the force can be computed exactly from $F_z = \int_0^\infty p\, 2\pi r \, dr$, to obtain

$$F_z = \frac{6\pi\mu R_e^2 v}{h}\, f^*, \qquad (10.11)$$

consisting also of two factors, namely the Reynolds part and the correction factor $f^*$, given in equation (10.12) by a similar combination of $A$, $B$, $C$ and logarithmic terms of the form $\ln(1 + C/h)$.

FIGURE 10.26. Correction factor $f^*$ as a function of the gap to slip length ratio for the three asymptotic cases discussed in the text.

For the aforementioned three limiting cases, the above expression reduces to
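Two identities of the surface constants follow directly from the definitions above and make useful sanity checks; a sympy sketch (an editor's addition, not from the book):

```python
import sympy as sp

b, k = sp.symbols('b k', positive=True)
A = b*(2 + k)
B = 2*b*(2 + k + sp.sqrt(1 + k + k**2))
C = 2*b*(2 + k - sp.sqrt(1 + k + k**2))
print(sp.simplify(B + C - 4*A))             # 0, so B + C = 4A
print(sp.simplify(B*C - 12*b**2*(1 + k)))   # 0, so BC = 12 b^2 (1 + k)
```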
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9614534974098206, "perplexity": 2500.367079876159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00031-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/4th-order-polynomials.451775/
# 4th order polynomials

1. Nov 28, 2010

### rlspin

Hey everyone, I'm doing control engineering and was wondering what methods I could use to find the roots of a 4th-degree polynomial? For example: (x^4) + (8x^3) + (7x^2) + 6x = 5. Could I separate that into two brackets of quadratics, or will I need to use a really long-winded method? Thanks in advance for any help.

2. Nov 28, 2010

### Outlined

'Order'? You mean 'degree'! Well, yes, there is a solution for it, but it is indeed long-winded. You can try to find easy solutions by trial. Else use Maple or Matlab.

3. Nov 28, 2010

### Gib Z

A formula exists for 4th-degree polynomials, analogous to the quadratic formula, but it is very long and complicated, and coding it into a program would take too much time. Just numerically approximate all the roots.

4. Nov 28, 2010

### rlspin

Sorry, I do mean degree! Slipped up because I'm working with a 4th-order system. I was worried I'd have to do it the long way. I did use Matlab but wanted to see if I could work out the answer by hand. Anyway, thanks for the help guys. I really appreciate it!

5. Jan 30, 2012

### abypatel

also check this website http://xrjunque.nom.es/precis/rootfinder.aspx
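For reference, the same numerical approach works outside MATLAB; a sketch using numpy's `roots` (which builds the companion matrix), after moving the 5 to the left-hand side:

```python
import numpy as np

# x^4 + 8x^3 + 7x^2 + 6x - 5 = 0
coeffs = [1, 8, 7, 6, -5]
r = np.roots(coeffs)
print(r)                        # the four roots (some may be complex)
print(np.polyval(coeffs, r))    # residuals, all ~ 0
```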
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9068028330802917, "perplexity": 1234.9765007702642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661775.4/warc/CC-MAIN-20160924173741-00159-ip-10-143-35-109.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/165816/computing-determinant-of-a-specific-matrix?answertab=active
# Computing determinant of a specific matrix

How to calculate the determinant of $$A=(a_{i,j})_{n \times n}=\left( \begin{array}{ccccc} a&b&b& \cdots & b\\ b& a& b& \cdots& b\\ \vdots& \vdots& \vdots& \ddots&\cdots\\ b&b&b & \cdots&a \end{array} \right)?$$

- Maybe you should accept some answers.... – Tapu Jul 2 '12 at 19:10

We note that the sum of the elements of each column is $a+(n-1)b$; adding every row to the first row and factoring out this common value gives $$\det A=(a+(n-1)b)\det\pmatrix{1&1&\dots& 1&1\\ b&a&\dots&b&b\\ \vdots&\ddots&\ddots&\vdots&\vdots\\ b&b&\dots&b&a}.$$ To each row of index $\geq 2$, we apply $R_j\leftarrow R_j - bR_1$ to get $$\det A=(a+(n-1)b)\det\pmatrix{1&1&\dots& 1&1\\ 0&a-b&\dots&0&0\\ \vdots&\ddots&\ddots&\vdots&\vdots\\ 0&0&\dots&0&a-b}.$$ Finally, we obtain $$\det A=(a+(n-1)b)(a-b)^{n-1}.$$

- HINT 1: Your matrix $A$ is $$(a-b)I + b e e^T$$ Can you now compute the determinant?

HINT 2: Make use of the fact that $\det(\lambda A) = \lambda^n \det(A)$.

HINT 3: $\det(I + \alpha e e^T) = 1+n \alpha$, where $e$ is a column vector of ones.

$$\det ((a-b)I + b e e^T) = (a-b)^n \det \left( I + \dfrac{b}{a-b} e e^T\right)$$ Hence, all we need is to find the determinant of $I + \alpha ee^T$, where $\alpha = \dfrac{b}{a-b}$ in our case. Note that $ee^T$ is a rank-one matrix and its eigenvalues are $e^Te = n$ and $n-1$ zeros. If $\lambda$ is an eigenvalue of $I + \alpha ee^T$, then $$\det (I + \alpha ee^T - \lambda I) = \alpha^n \det \left(ee^T + \dfrac{(1-\lambda)}{\alpha}I \right) = 0$$ This means that $-\dfrac{(1-\lambda)}{\alpha}$ are the eigenvalues of $ee^T$. Hence, we get that $$-\dfrac{(1-\lambda)}{\alpha} = n \text{ or }0 \text{ (n-1 times)}.$$ Hence, we get that $$\lambda = 1 + n \alpha, 1 \text{ (n-1 times)}$$ Hence, $$\det ((a-b)I + b e e^T) = (a-b)^n \times \left( 1 + n \dfrac{b}{a-b} \right) = (a-b)^{n-1} (a+(n-1)b)$$
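The closed form from both answers is easy to sanity-check numerically (a sketch with arbitrary values of a, b, n):

```python
import numpy as np

n, a, b = 5, 3.0, 0.5
A = b*np.ones((n, n)) + (a - b)*np.eye(n)    # a on the diagonal, b elsewhere
print(np.linalg.det(A))                      # 195.3125
print((a + (n - 1)*b) * (a - b)**(n - 1))    # same value
```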
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.956405520439148, "perplexity": 90.03455497501284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768957.83/warc/CC-MAIN-20141217075248-00102-ip-10-231-17-201.ec2.internal.warc.gz"}
http://safelearning.ai/2017/01/14/Ghosts/
Update: There’s an exciting new paper on differential ghosts from André Platzer and Yong Kiam Tan.

Warning: The interface to differential ghosts provided by KeYmaera X has changed since I wrote this post. I maintain working examples from this blog post at the KeYmaera X projects repository but haven’t had time to update this blog post. All of the conceptual explanations remain relevant, but refer to the projects repository for up-to-date tactic scripts.

The purpose of this post is to explain how KeYmaera X automatically proves reachability properties about differential equations of the form `x' = f(x)`; and, in particular, equations whose solutions are exponential functions. My intended audience is people who are already familiar with the first 5 or 6 lectures of Foundations of Cyber-Physical Systems. This is roughly equivalent to the half-day KeYmaera X tutorials.¹ Ultimately, the goal of this note is to explain how to find a useful differential ghost in order to prove a property about an exponential system such as x’=x.

## Background and Motivation: Continuous Reachability Problems in KeYmaera X

Many control problems boil down to ensuring that a continuous system can only reach a safe subset of the overall state space. For example, consider a model of an adaptive cruise control protocol. The engineering problem is to design a control algorithm that actuates a follower car’s acceleration such that the distance between the lead car and the follower car, denoted rx, is always strictly positive. In this example, the safe states are all states where rx > 0.

More generally, continuous reachability problems are expressed in Differential Dynamic Logic (dL) as formulas of the form:

```
P -> [{x' = f & H}]Q
```

where P, Q, and H are first-order formulas of real arithmetic, and `P -> [{x' = f}]Q` is true iff every flow of the system x’ = f restricted to the set `H` and starting in the set P stays within the set Q.

Returning to the adaptive cruise control example, we can phrase our reachability problem as:

```
P -> [{rx' = rv, rv' = ra & rv > 0}] rx > 0
```

where `rx` is relative position and `& rv > 0` is our syntax for restricting the flow of the system to sets in which rv > 0. There are many valid choices for `P`; for example:

```
rx > 0 & rv = 12 & ra = 0
```

In dL, we prove properties like this one by reducing the question to one that is expressible in a decidable fragment of first-order real arithmetic. In this case, we can integrate the system and arrive at a formula that looks something like:

```
P -> \forall t (t>=0->\forall s (0<=s&s<=t->ra*s+rv>0)->ra/2*t^2+rv*t+rx>0)
```

The truth of this formula is then established using a decision procedure for real arithmetic.

### So why not just solve x’ = x? Because model theory.

Unfortunately, not all systems have explicit closed form solutions expressible in terms of decidable fragments of real arithmetic. In particular, consider the equation `x' = x`, which has the solution `e^t`. Therefore, the arithmetic question corresponding to the reachability question `P -> [{x' = x}] x > 0` looks something like:

```
P -> \forall t. e^t > 0
```

In general, we can’t simply appeal to an arithmetic decision procedure to answer questions of this form – the decidability of real arithmetic with exponentials remains an open problem. In this post, I’ll explain how you can prove reachability properties about equations of the form `x' = f(x)` in KeYmaera X while staying within fragments of arithmetic that are known to be decidable.
## Differential Ghosts

I assume as background familiarity with André Platzer’s lecture notes on differential ghosts, or a bit of experience with dL (e.g., having attended one of our KeYmaera X tutorials). The following review of ghosts is non-comprehensive, but might be a nice refresher.

dL’s differential ghost axiom augments a continuous system with a new equation that isn’t mentioned in the rest of the system:

```
[{ode & H}]P <-> ∃y.[{ode, y'=a*y+b & H}]P
```

The terms `a` and `b` may not mention `y`. Additionally, `y` must not occur anywhere in `[{ode & H}]P`. I.e., `y' = a*y+b` must be a linear system and `y` must be a fresh (new) variable.

Differential auxiliaries are a related concept that allow the post-condition to be re-stated in terms of the system’s new variable:

```
P <-> ∃y.G        G |- [{ode, y' = a*y+b & H}]G
----------------------------------------------- diffAux(y, a, b, G)
                P |- [{ode & H}]P
```

This proof rule appears a bit confusing at first. Seeing why this rule should be sound is not too subtle: `P <-> ∃y.G` establishes that `G` implies `P` for some choice of `y`, and `G |- [{ode, y' = a*y+b & H}]G` establishes that `G` remains invariant for any choice of `y`. Seeing why this rule is helpful is a bit more subtle. I could waste some ink, but instead, I’ll just jump into examples!

## Ghosts for Open Sets

In this section, we’ll consider various systems of the form `x > 0 -> [{x' = f(x)}] x > 0`. Notice that we can’t prove anything about these systems using differential variants, and we can’t solve the system without turning one undecidable problem into another (for now, at least).

Example 1: `x > 0 -> [{x' = -x}] x > 0`

Before reading further, see if you can come up with a first-order formula G that makes the premise of diffAux true; i.e., find a predicate `G(x,y)` that makes `x>0 <-> ∃y.G` true.

```
                        ------------------------------------- R
                         |- -xy^2 + 2xy(y/2) = 0
----------------- R     ------------------------------------- diffInd
x>0 <-> ∃y.xy^2=1        xy^2=1 |- [{x' = -x, y' = y/2}]xy^2=1
----------------------------------------------------------- diffAux(y, 1/2, 0, xy^2 = 1)
              x > 0 |- [{x' = -x}] x > 0
```

Hopefully you arrived at `xy^2 = 1` by yourself. Notice that getting to a choice of `y' = ay + b` after fixing `G` is pretty mechanical; we just compute Lie derivatives of our choice of G:

```
(xy^2=1)' <-> (xy^2)' = 0          (def'n Lie operator =)
          <-> x'y^2 + x2yy' = 0    (def'n Lie operator *)
          <-> -xy^2 + x2yy' = 0    (because x' = -x)
          <-> y'(2xy) = xy^2
          <-> y' = xy^2 / 2xy
          <-> y' = y/2
          <-> y' = (1/2)y + 0      (just re-stating so it's clear a=1/2 and b=0)
```

The same proof generalizes nicely to monomials.

Example 2: `x > 0 -> [{x' = -x^2}] x > 0`

The choice of `y` changes slightly:

```
(xy^2=1)' <-> ...
          <-> -x^2y^2 + x2yy' = 0
          <-> y' = x^2y^2 / 2xy
          <-> y' = (x/2)y + 0
```

Instead of giving the sequent calculus proof, here’s the KeYmaera X tactic:

```
implyR(1) ; DA4({`x*y^2=1`}, {`y`}, {`x/2`}, {`0`}, 1) ; <(
  QE,
  implyR(1) ; diffInd(1)
)
```

At this point, you’ve already noticed that the choice of `G` and `y' = ay+b` are closely coupled. In fact, you only have to be creative once – there’s a systematic way of deriving `a` and `b` from a fixed `G`. This is true in the other direction as well. I typically start with G, but as we’ll see with equilibrium points, it’s sometimes helpful to move back and forth.

## Ghosts for Equilibrium Points

In some sense, you would expect the proof of `x=0 -> [{x' = -x}]x=0` to be trivial.
But the proof of this property in dL (without extra proof rules like Khalil Ghorbal’s DRI) is a tough exercise. Before continuing, see if you can find some candidates for `G` and/or `y'`.

Here’s a choice for G: `y>0 & x*y=0`. Obviously, `x=0 <-> ∃y.(y>0 & x*y=0)`. However, a simple differential induction argument doesn’t suffice to establish the remaining subgoal:

```
y>0 & x*y=0 -> [{x' = -x}](y>0 & x*y=0)
```

We’ll need another ghost. Notice that we already know how to prove

```
y>0 -> [{x' = -x}](y>0)
```

via a differential ghost argument. So let’s split this subgoal into two cases, which is a proof we can exploit using the `boxAnd` (`[]^`) axiom:

```
 (use open set approach)                          ...
--------------------------------------    -------------------------------
y>0 & x*y=0 |- [{x' = -x, y'=???}]y>0      y>0 & x*y=0 |- [{x' = -x}]x*y=0
--------------------------------------------------------------------------
           y>0 & x*y=0 |- [{x' = -x, y'=???}](y>0 & x*y=0)
```

Because we already know how to prove the first case, it makes sense to choose a value for `y' = ay + b` that makes the `x*y=0` case prove by a differential invariance argument:

```
(x*y=0)' <-> (x*y)'=0
         <-> x'y + y'x = 0
         <-> -xy + y'x = 0
         <-> y' = xy/x = y
```

The proof for the case with the `y>0` post-condition is the same as the proof for the open set examples.

## Ghosts for Closed/Clopen Sets

The easiest way of dealing with properties of the form `x>=0` is to think of them as a combination of an open set and an equilibrium point:

```
x >= 0 <-> x > 0 | x = 0
```

The key to this technique is to cut `x > 0 | x = 0`, case distinguish, and then use the following proof rule to rewrite the post-condition:

```
 |- Q -> P      G |- [a]Q
------------------------- G[]
        G |- [a]P
```

E.g. by taking `P as x >= 0` and `Q as x > 0`. Here’s how that proof goes (starting after the cut):

```
                  * cont'd                             * cont'd
--------------   -------------------   --------------  -------------------
|- x>0 -> x>=0   x>0 |- [{ode}]x>0      |- x=0 -> x>=0  x=0 |- [{ode}] x=0
------------------------------------ G[] ----------------------------------- G[]
        x>0 |- [{ode}] x>=0                      x=0 |- [{ode}] x>=0
--------------------------------------------------------------------------- orL
                  x > 0 | x = 0 |- [{ode}] x>=0
```

For reference, here’s the tactic that does that (the open goals are `skip`‘d over):

```
implyR(1) ; cut({`x>0 | x=0`}) ; <(
  hideL(-1) ; orL(-1) ; <(
    generalizeb({`x>0`}, 1) ; <(skip, QE),
    generalizeb({`x=0`}, 1) ; <(skip, QE)
  ),
  hideR(1) ; QE
)
```

## Conclusion

Proving reachability properties about exponential functions is one of the more difficult tasks for new KeYmaera X users. This may seem surprising because properties such as `x=0 -> [{x'=x}]x=0` are so intuitively simple; however, given what we know about real arithmetic, perhaps this shouldn’t be so surprising after all.

## Appendices

### Appendix B: Axiomatizing Differential Dynamics; Lie Derivatives

Differential invariants are the key piece of technology that allow us to obtain correct-by-construction solutions without resorting to a verified implementation of a fixed-point procedure. KeYmaera X axiomatizes continuous dynamical systems in terms of differential invariants and Lie derivatives.

A differential invariant is just a formula that remains true throughout the entire flow of an ODE. For example, suppose `x=1` initially.
Then:

• `x=1` is an invariant of `{x'=0}`,
• `x>0` is an invariant of `{x'=1}`, but
• `x<1` is not an invariant of the equation `{x'=-1}` (because `x<1` is not true in the first instant of the flow).

In KeYmaera X we can ask questions about invariants of differential equations using a formula of the form `[ODE]inv` where `ODE` is a (system of) differential equation(s) and `inv` is a formula describing the invariant. Notice that `[ODE]inv` is a formula, so it can be used with other logical connectives. For example, we can express the English prose “if `x=1` initially then `x>0` is an invariant of `{x'=1}`” using the formula `x=1 -> [{x'=1}]x>0`. Notice that `x=0 -> [{x'=-1}]x<0` is also well-formed, but is false.

Evolution domain constraints allow us to assume invariants of a differential equation by restricting the flow of a differential equation to a particular domain. For example, if we define our dynamical system as the flow of `x'=-1` restricted to the set `x>0`, then `x>0` is, by force of definition, an invariant of the system. The KeYmaera X syntax for expressing that `x>0` is an invariant of the system `x'=-1` constrained to `x>0` is `x=1 -> [{x'=-1 & x>0}]x>0`.

Differential cuts are a way of embedding an invariant into the definition of a continuous dynamical system. Think of cuts like lemmas – we first establish that `I` is an invariant of our system, and then get to assume throughout the rest of our proof that `I` is a component of the evolution domain constraint:

```
Γ |- [{ode & C}]I     Γ |- [{ode & C & I}]P
------------------------------------------ diffCut
            Γ |- [{ode & C}]P
```

Differential induction can be used to prove that a formula is an invariant of a differential equation:

```
[{x'=t}]P <-> [x':=t;]P'     (diffInd)
```

where P' is the Lie derivative of `P`. The definition of `P'` is straight-forward for terms:

```
(s+t)' = s' + t'
(s*t)' = s'*t + t'*s
```

and so on. The definition for unquantified formulas of real arithmetic is subtle (there are no typos on the following lines):

```
(f=g)'  <-> f' = g'
(f!=g)' <-> f' = g'
(f>g)'  <-> f' >= g'
(f<g)'  <-> f' <= g'
...
```

Explaining differential induction is beyond the scope of this blog post, but there is an excellent video with accompanying lecture notes for interested readers.

Change Log:

• 2/26/2017: Fixed some typos and updated Bellerophon code samples to new `;` syntax.
• July 2017: Added warning about tactics not working anymore. I’ll update the note soon.

1. In particular, I assume the reader is familiar with Differential Dynamic Logic and differential invariants/ghosts, but I’ll quickly review the basics. There are also a couple of appendices with incomplete explanations of various concepts.
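As an editorial aside to Examples 1 and 2 above: the "mechanical" Lie-derivative computation that reads off `y' = ay + b` from a fixed `G` is easy to script. A sympy sketch (my own, independent of KeYmaera X):

```python
import sympy as sp

x, y, yp = sp.symbols("x y yp")   # yp stands for y'
G = x*y**2 - 1                    # candidate invariant: x*y^2 = 1
f = -x                            # original dynamics: x' = -x

# Lie derivative of G along (x' = f, y' = yp), required to vanish:
lie = sp.diff(G, x)*f + sp.diff(G, y)*yp
print(sp.solve(sp.Eq(lie, 0), yp))   # [y/2]  ->  y' = (1/2)*y + 0
```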
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9152311682701111, "perplexity": 1752.7931111971636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607407.48/warc/CC-MAIN-20200122191620-20200122220620-00391.warc.gz"}
http://physics.stackexchange.com/questions/30472/imaginary-pertubation-to-a-hamiltonian-how-is-it-the-same-as-rotation-to-imagin/30488
# Imaginary perturbation to a Hamiltonian: how is it the same as rotation to imaginary time?

I am struggling with the following statement found in Ryder's QFT book, page 177:

instead of rotating the time axis as we have done, the ground state contribution may be isolated by adding a small negative imaginary part to the Hamiltonian

The author refers to an effort to isolate the vacuum state in a sum over energy eigenstates: $$\langle Q|e^{-i (T-t) H}|q\rangle = \sum \phi_n(q) \phi^{*}_n(Q)\; e^{-i (T-t) E_n }$$ One option is to make time imaginary: $T \rightarrow \infty e^{- i \epsilon}$. Another, says the author, is to change the Hamiltonian by adding $-\frac{1}{2} i \epsilon q^2$. We would then have $H^{\epsilon} = H -\frac{1}{2} i \epsilon q^2$, and: $$\langle Q|e^{-i (T-t) H^{\epsilon}}|q\rangle = \sum \phi_n(q) \phi^{*}_n(Q)\; e^{-i (T-t) E^{\epsilon}_n }$$ I'm guessing that you could treat this as a time-independent perturbation, so that the first correction to the energy is (let's call the new eigenvalues $E_n^{\epsilon}$): $$E_n^{\epsilon} = E_n -\frac{1}{2} i \epsilon \langle E_n|q^2|E_n\rangle + ...$$ That makes the new eigenvalues complex, but that's not enough. What we need, to have the sum dominated by the ground state, is for $\mathrm{Im}[E_n^{\epsilon}]$ to be proportional to $E_n$, so that we have an $E_n$ factor in the non-oscillatory part of the exponential. That means $\langle E_n|q^2|E_n\rangle$ should be proportional to $E_n$. It is true for a harmonic oscillator, but can we say that in general?

- There's nothing wrong with linking to Google Books (or any particular site) here, so don't worry about that. Also, a tip: use \langle and \rangle for matrix elements etc. (If you're doing it in real LaTeX you might want to check out the braket package, but that's not available on this site.) – David Z Jun 21 '12 at 0:34

This is not a good substitute for rotating to imaginary time. The right way to do it is to add $i\epsilon H$ to H, not $i\epsilon q^2$. To see this, normalize things so that the vacuum is not decaying: $$H - i(\epsilon q^2 - A)$$ where A is the vacuum expectation value of $q^2$. Now consider a potential where some excited state has a lower value of $q^2$ than the ground state. To find such a potential, you can use the semiclassical method of computing averages described in this answer: Do stationary states with higher energy necessarily have higher position-momentum uncertainty? The upshot of the semiclassical method is that the average of $q^2$ is the time-average of $q^2$ in the classical orbit, to leading semiclassical order. To make a potential which has a small average $q^2$, you can consider a potential with deep minima at q=-A and q=A, and a shallow minimum at q=0. In the ground state, the particle is a superposition of q=A and q=-A, while in one of the excited states, the shallow minimum has a big peak (since the particle is moving slowly there) and the q=-A and q=A minima have a small oscillatory wavefunction from tunneling. With this addition to H, the excited state will not be suppressed but growing. The asymptotic value at large imaginary times will be dominated by this excited state rather than by the ground state.

- Yes, I noticed it would be far simpler to add $i \epsilon H$ but, when the author goes on to quantizing a scalar field (page 182) – Forever_a_Newcomer Jun 21 '12 at 15:25
Of course, the potential for a free scalar field is proportional to $q^2$, but wouldn't adding the whole $i\epsilon H$ mess up the momentum integral you need to get from the hamiltonian to the lagrangean? The bothering point here is that this $i \epsilon$ factor will be the one (on page 185) dictating the path of integration to get the propagator. – Forever_a_Newcomer Jun 21 '12 at 15:41 @Forever_a_Newcomer: For a scalar field in perturbation theory, this is equivalent to rotating to imaginary time, because adding a little bit of imaginary mass to a Harmonic oscillator does work. He is trying to justify the $i\epsilon$ prescription, but the justification is all wrong--- the right way is to continue the particle-path path-integral, the Schwinger representation, where the mass is the action per unit proper time, and making it slightly imaginary cuts off long proper times. You should take the $i\epsilon$ prescription as the definition of which propagator you use in perturbations – Ron Maimon Jun 21 '12 at 16:53 The full continuation of adding $i\epsilon H$ doesn't mess up the momentum integral, it makes it convergent! The most extreme case is to make time evolution be $e^{-tH}$ instead of $e^{-itH}$, and this is the full imaginary time formulation which is mathematically much more well defined because the Gaussians are decaying instead of oscillating. You should work this out in QM first, then once you understand it, the QFT will be simple. – Ron Maimon Jun 21 '12 at 16:55
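To see the mechanism concretely, here is a minimal numerical sketch (Python with NumPy assumed; the truncated harmonic-oscillator spectrum $E_n = n + 1/2$ is an illustrative choice, not from the book). Replacing $H$ by $(1 - i\epsilon)H$ gives each term of the eigenstate sum a damping factor $e^{-\epsilon T E_n}$, so the imaginary part of every eigenvalue is automatically proportional to $E_n$ and the ground state dominates at large $T$:

```python
import numpy as np

# Truncated oscillator spectrum E_n = n + 1/2 (illustrative choice).
E = np.arange(10) + 0.5

eps, T = 0.1, 50.0  # small imaginary part of H, large evolution time

# |exp(-i*T*(1 - i*eps)*E_n)| = exp(-eps*T*E_n): damping grows with E_n.
weights = np.abs(np.exp(-1j * T * (1 - 1j * eps) * E))

# Suppression of each level relative to the ground state.
print(weights / weights[0])
```

Running the same check with an imaginary part proportional to $\langle E_n|q^2|E_n\rangle$ instead of $E_n$ reproduces the failure mode described in the answer whenever some excited state has a smaller average of $q^2$ than the ground state.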
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8977513313293457, "perplexity": 359.91741986121383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860116929.30/warc/CC-MAIN-20160428161516-00195-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/about-the-linear-dependence-of-linear-operators.321002/
# About the linear dependence of linear operators

1. Jun 20, 2009

### sanctifier

Notations:

F denotes a field
V denotes a vector space over F
L(V) denotes the vector space over F whose members are the linear operators from V to V; L(V) is an algebra in which multiplication is composition of functions.
τ denotes a linear operator contained in L(V)
ι denotes the multiplicative identity of L(V)

Question: Why are the n² + 1 vectors ι, τ, τ², ..., τ^(n²) linearly dependent in L(V)?

If these vectors are linearly dependent, then one of them can be expressed as a linear combination of the others, but how?

Thanks for any help!

2. Jun 27, 2009

### morphism

I presume V is n-dimensional? If this is the case, then your list of vectors is linearly dependent because dim L(V) = ____ (fill in the blank).

3. Jun 28, 2009

### sanctifier

Yes, V is n-dimensional, and dim L(V) = n².

Now I understand: if the n² + 1 vectors ι, τ, τ², ..., τ^(n²) were linearly independent, then dim L(V) would be at least n² + 1 > n² = dim L(V), a contradiction. So these vectors must be linearly dependent.

morphism, thanks a lot!
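A quick numerical illustration of the dimension argument (Python with NumPy assumed; n = 3 is an arbitrary choice): flattening each power of a random operator T into a vector of length n² and stacking the n² + 1 vectors gives a matrix whose rank can never exceed n², confirming the linear dependence.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
T = rng.standard_normal((n, n))  # a random linear operator on R^n

# Flatten I, T, T^2, ..., T^(n^2) into vectors of length n^2 = 9.
powers = [np.linalg.matrix_power(T, k).ravel() for k in range(n**2 + 1)]
M = np.stack(powers)  # 10 vectors living in a 9-dimensional space

print(np.linalg.matrix_rank(M))  # at most 9, so the 10 vectors are dependent
```

In fact the Cayley-Hamilton theorem gives a much stronger statement: ι, τ, ..., τⁿ are already linearly dependent, since τ satisfies its own characteristic polynomial of degree n.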
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9883679747581482, "perplexity": 2402.4773237579375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794869732.36/warc/CC-MAIN-20180527170428-20180527190428-00132.warc.gz"}
http://www.reference.com/browse/Star+designations
# Star

A star is a massive, luminous ball of plasma. The nearest star to Earth is the Sun, which is the source of most of the energy on Earth. Other stars are visible in the night sky, when they are not outshone by the Sun. For most of its life, a star shines due to thermonuclear fusion in its core, releasing energy that traverses the star's interior and then radiates into outer space. Almost all elements heavier than hydrogen and helium were created by fusion processes in stars.

Astronomers can determine the mass, age, chemical composition and many other properties of a star by observing its spectrum, luminosity and motion through space. The total mass of a star is the principal determinant in its evolution and eventual fate. Other characteristics of a star are determined by its evolutionary history, including the diameter, rotation, movement and temperature. A plot of the temperature of many stars against their luminosities, known as a Hertzsprung-Russell diagram (H–R diagram), allows the age and evolutionary state of a star to be determined.

A star begins as a collapsing cloud of material composed primarily of hydrogen, along with helium and trace amounts of heavier elements. Once the stellar core is sufficiently dense, some of the hydrogen is steadily converted into helium through the process of nuclear fusion. The remainder of the star's interior carries energy away from the core through a combination of radiative and convective processes. The star's internal pressure prevents it from collapsing further under its own gravity. Once the hydrogen fuel at the core is exhausted, those stars having at least 0.4 times the mass of the Sun expand to become a red giant, in some cases fusing heavier elements at the core or in shells around the core. The star then evolves into a degenerate form, recycling a portion of the matter into the interstellar environment, where it will form a new generation of stars with a higher proportion of heavy elements. Binary and multi-star systems consist of two or more stars that are gravitationally bound, and generally move around each other in stable orbits. When two such stars have a relatively close orbit, their gravitational interaction can have a significant impact on their evolution.

## Observation history

Historically, stars have been important to civilizations throughout the world. They have been used in religious practices and for celestial navigation and orientation. Many ancient astronomers believed that stars were permanently affixed to a heavenly sphere, and that they were immutable. By convention, astronomers grouped stars into constellations and used them to track the motions of the planets and the inferred position of the Sun. The motion of the Sun against the background stars (and the horizon) was used to create calendars, which could be used to regulate agricultural practices. The Gregorian calendar, currently used nearly everywhere in the world, is a solar calendar based on the angle of the Earth's rotational axis relative to the nearest star, the Sun.

The oldest accurately dated star chart appeared in Ancient Egypt in 1534 BCE. Islamic astronomers gave many stars Arabic names that are still used today, and they invented numerous astronomical instruments that could compute the positions of the stars. In the 11th century, Abū Rayhān al-Bīrūnī described the Milky Way galaxy as a multitude of fragments having the properties of nebulous stars, and also gave the latitudes of various stars during a lunar eclipse in 1019.
In spite of the apparent immutability of the heavens, Chinese astronomers were aware that new stars could appear. Early European astronomers such as Tycho Brahe identified new stars in the night sky (later termed novae), suggesting that the heavens were not immutable. In 1584 Giordano Bruno suggested that the stars were actually other suns, and may have other planets, possibly even Earth-like, in orbit around them, an idea that had been suggested earlier by such ancient Greek philosophers as Democritus and Epicurus. By the following century the idea of the stars as distant suns was reaching a consensus among astronomers. To explain why these stars exerted no net gravitational pull on the solar system, Isaac Newton suggested that the stars were equally distributed in every direction, an idea prompted by the theologian Richard Bentley.

The Italian astronomer Geminiano Montanari recorded observing variations in the luminosity of the star Algol in 1667. Edmond Halley published the first measurements of the proper motion of a pair of nearby "fixed" stars, demonstrating that they had changed positions since the time of the ancient Greek astronomers Ptolemy and Hipparchus. The first direct measurement of the distance to a star (61 Cygni at 11.4 light-years) was made in 1838 by Friedrich Bessel using the parallax technique. Parallax measurements demonstrated the vast separation of the stars in the heavens.

William Herschel was the first astronomer to attempt to determine the distribution of stars in the sky. During the 1780s, he performed a series of gauges in 600 directions, and counted the stars observed along each line of sight. From this he deduced that the number of stars steadily increased toward one side of the sky, in the direction of the Milky Way core. His son John Herschel repeated this study in the southern hemisphere and found a corresponding increase in the same direction. In addition to his other accomplishments, William Herschel is also noted for his discovery that some stars do not merely lie along the same line of sight, but are also physical companions that form binary star systems.

The science of stellar spectroscopy was pioneered by Joseph von Fraunhofer and Angelo Secchi. By comparing the spectra of stars such as Sirius to the Sun, they found differences in the strength and number of their absorption lines—the dark lines in a stellar spectrum due to the absorption of specific frequencies by the atmosphere. In 1865 Secchi began classifying stars into spectral types. However, the modern version of the stellar classification scheme was developed by Annie J. Cannon during the 1900s.

Observation of double stars gained increasing importance during the 19th century. In 1834, Friedrich Bessel observed changes in the proper motion of the star Sirius, and inferred a hidden companion. Edward Pickering discovered the first spectroscopic binary in 1899 when he observed the periodic splitting of the spectral lines of the star Mizar with a 104-day period. Detailed observations of many binary star systems were collected by astronomers such as Wilhelm Struve and S. W. Burnham, allowing the masses of stars to be determined from computation of the orbital elements. The first solution to the problem of deriving an orbit of binary stars from telescope observations was made by Felix Savary in 1827.

The twentieth century saw increasingly rapid advances in the scientific study of stars. The photograph became a valuable astronomical tool.
Karl Schwarzschild discovered that the color of a star, and hence its temperature, could be determined by comparing the visual magnitude against the photographic magnitude. The development of the photoelectric photometer allowed very precise measurements of magnitude at multiple wavelength intervals. In 1921 Albert A. Michelson made the first measurements of a stellar diameter using an interferometer on the Hooker telescope.

Important conceptual work on the physical basis of stars occurred during the first decades of the twentieth century. In 1913, the Hertzsprung-Russell diagram was developed, propelling the astrophysical study of stars. Successful models were developed to explain the interiors of stars and stellar evolution. The spectra of stars were also successfully explained through advances in quantum physics. This allowed the chemical composition of the stellar atmosphere to be determined.

With the exception of supernovae, individual stars have primarily been observed in our Local Group of galaxies, and especially in the visible part of the Milky Way (as demonstrated by the detailed star catalogues available for our galaxy). But some stars have been observed in the M100 galaxy of the Virgo Cluster, about 100 million light-years from the Earth. In the Local Supercluster it is possible to see star clusters, and current telescopes could in principle observe faint individual stars in the Local Cluster—the most distant individual stars resolved are up to a hundred million light-years away (see Cepheids). However, outside the Local Supercluster of galaxies, neither individual stars nor clusters of stars have been observed. The only exception is a faint image of a large star cluster containing hundreds of thousands of stars located one billion light-years away—ten times the distance of the most distant star cluster previously observed.

## Star designations

The concept of the constellation was known to exist during the Babylonian period. Ancient sky watchers imagined that prominent arrangements of stars formed patterns, and they associated these with particular aspects of nature or their myths. Twelve of these formations lay along the band of the ecliptic, and these became the basis of astrology. Many of the more prominent individual stars were also given names, particularly with Arabic or Latin designations.

As well as certain constellations and the Sun itself, stars as a whole have their own myths. They were thought to be the souls of the dead or gods. An example is the star Algol, which was thought to represent the eye of the Gorgon Medusa. To the Ancient Greeks, some "stars," known as planets (Greek πλανήτης (planētēs), meaning "wanderer"), represented various important deities, from which the names of the planets Mercury, Venus, Mars, Jupiter and Saturn were taken. (Uranus and Neptune were also Greek and Roman gods, but neither planet was known in Antiquity because of their low brightness. Their names were assigned by later astronomers.)

Circa 1600, the names of the constellations were used to name the stars in the corresponding regions of the sky. The German astronomer Johann Bayer created a series of star maps and applied Greek letters as designations to the stars in each constellation. Later the English astronomer John Flamsteed came up with a system using numbers, which would later be known as the Flamsteed designation. Numerous additional systems have since been created as star catalogues have appeared.
The only body which has been recognized by the scientific community as having the authority to name stars or other celestial bodies is the International Astronomical Union (IAU). A number of private companies (for instance, the "International Star Registry") purport to sell names to stars; however, these names are neither recognized nor used by the scientific community, and many in the astronomy community view these organizations as frauds preying on people ignorant of star-naming procedure.

## Units of measurement

Most stellar parameters are expressed in SI units by convention, but CGS units are also used (e.g., expressing luminosity in ergs per second). Mass, luminosity, and radii are usually given in solar units, based on the characteristics of the Sun:

solar mass: $M_\odot = 1.9891 \times 10^{30}$ kg
solar luminosity: $L_\odot = 3.827 \times 10^{26}$ watts
solar radius: $R_\odot = 6.960 \times 10^{8}$ m

Large lengths, such as the radius of a giant star or the semi-major axis of a binary star system, are often expressed in terms of the astronomical unit (AU)—approximately the mean distance between the Earth and the Sun (150 million km or 93 million miles).

## Formation and evolution

Stars are formed within extended regions of higher density in the interstellar medium, although the density is still lower than that inside a vacuum chamber on Earth. These regions are called molecular clouds and consist mostly of hydrogen, with about 23–28% helium and a few percent heavier elements. One example of such a star-forming region is the Orion Nebula. As massive stars are formed from molecular clouds, they powerfully illuminate those clouds. They also ionize the hydrogen, creating an H II region.

### Protostar formation

The formation of a star begins with a gravitational instability inside a molecular cloud, often triggered by shockwaves from supernovae (massive stellar explosions) or the collision of two galaxies (as in a starburst galaxy). Once a region reaches a sufficient density of matter to satisfy the criteria for Jeans instability, it begins to collapse under its own gravitational force.

As the cloud collapses, individual conglomerations of dense dust and gas form what are known as Bok globules. These can contain up to 50 solar masses of material. As a globule collapses and the density increases, the gravitational energy is converted into heat and the temperature rises. When the protostellar cloud has approximately reached the stable condition of hydrostatic equilibrium, a protostar forms at the core. These pre-main-sequence stars are often surrounded by a protoplanetary disk. The period of gravitational contraction lasts for about 10–15 million years.

Early stars of less than 2 solar masses are called T Tauri stars, while those with greater mass are Herbig Ae/Be stars. These newly born stars emit jets of gas along their axis of rotation, producing small patches of nebulosity known as Herbig-Haro objects.

### Main sequence

Stars spend about 90% of their lifetime fusing hydrogen to produce helium in high-temperature and high-pressure reactions near the core. Such stars are said to be on the main sequence and are called dwarf stars. Starting at zero-age main sequence, the proportion of helium in a star's core will steadily increase.
As a consequence, in order to maintain the required rate of nuclear fusion at the core, the star will slowly increase in temperature and luminosity. The Sun, for example, is estimated to have increased in luminosity by about 40% since it reached the main sequence 4.6 billion years ago.

Every star generates a stellar wind of particles that causes a continual outflow of gas into space. For most stars, the amount of mass lost is negligible. The Sun loses 10⁻¹⁴ solar masses every year, or about 0.01% of its total mass over its entire lifespan. However, very massive stars can lose 10⁻⁷ to 10⁻⁵ solar masses each year, significantly affecting their evolution. Stars that begin with more than 50 solar masses can lose over half their total mass while they remain on the main sequence.

The duration that a star spends on the main sequence depends primarily on the amount of fuel it has to burn and the rate at which it burns that fuel: in other words, on its initial mass and its luminosity. For the Sun, this is estimated to be about 10¹⁰ years. Large stars burn their fuel very rapidly and are short-lived. Small stars (called red dwarfs) burn their fuel very slowly and last tens to hundreds of billions of years; at the end of their lives, they simply become dimmer and dimmer. However, since the lifespan of such stars is greater than the current age of the universe (13.7 billion years), no red dwarf is expected to have reached the end of its life yet.

Besides mass, the proportion of elements heavier than helium can play a significant role in the evolution of stars. In astronomy all elements heavier than helium are considered "metals", and the chemical concentration of these elements is called the metallicity. The metallicity can influence the duration that a star will burn its fuel, control the formation of magnetic fields and modify the strength of the stellar wind. Older, population II stars have substantially less metallicity than the younger, population I stars due to the composition of the molecular clouds from which they formed. (Over time these clouds become increasingly enriched in heavier elements as older stars die and shed portions of their atmospheres.)

### Post-main sequence

As stars of at least 0.4 solar masses exhaust the supply of hydrogen at their core, their outer layers expand greatly and cool to form a red giant. For example, in about 5 billion years, when the Sun is a red giant, it will expand out to a maximum radius roughly 250 times its present size. As a giant, the Sun will lose roughly 30% of its current mass.

In a red giant of up to 2.25 solar masses, hydrogen fusion proceeds in a shell layer surrounding the core. Eventually the core is compressed enough to start helium fusion, and the star now gradually shrinks in radius and increases its surface temperature. For larger stars, the core region transitions directly from fusing hydrogen to fusing helium. After the star has consumed the helium at the core, fusion continues in a shell around a hot core of carbon and oxygen. The star then follows an evolutionary path that parallels the original red giant phase, but at a higher surface temperature.

#### Massive stars

During their helium-burning phase, very high mass stars with more than nine solar masses expand to form red supergiants. Once this fuel is exhausted at the core, they can continue to fuse elements heavier than helium. The core contracts until the temperature and pressure are sufficient to fuse carbon (see carbon burning process).
This process continues, with the successive stages being fueled by neon (see neon burning process), oxygen (see oxygen burning process), and silicon (see silicon burning process). Near the end of the star's life, fusion can occur along a series of onion-layer shells within the star. Each shell fuses a different element, with the outermost shell fusing hydrogen, the next shell fusing helium, and so forth.

The final stage is reached when the star begins producing iron. Since iron nuclei are more tightly bound than any heavier nuclei, if they are fused they do not release energy—the process would, on the contrary, consume energy. Likewise, since they are more tightly bound than all lighter nuclei, energy cannot be released by fission. In relatively old, very massive stars, a large core of inert iron will accumulate in the center of the star. The heavier elements in these stars can work their way up to the surface, forming evolved objects known as Wolf-Rayet stars that have a dense stellar wind which sheds the outer atmosphere.

#### Collapse

An evolved, average-size star will now shed its outer layers as a planetary nebula. If what remains after the outer atmosphere has been shed is less than 1.4 solar masses, it shrinks to a relatively tiny object (about the size of Earth) that is not massive enough for further compression to take place, known as a white dwarf. The electron-degenerate matter inside a white dwarf is no longer a plasma, even though stars are generally referred to as being spheres of plasma. White dwarfs will eventually fade into black dwarfs over a very long stretch of time.

In larger stars, fusion continues until the iron core has grown so large (more than 1.4 solar masses) that it can no longer support its own mass. This core will suddenly collapse as its electrons are driven into its protons, forming neutrons and neutrinos in a burst of inverse beta decay, or electron capture. The shockwave formed by this sudden collapse causes the rest of the star to explode in a supernova. Supernovae are so bright that they may briefly outshine the star's entire home galaxy. When they occur within the Milky Way, supernovae have historically been observed by naked-eye observers as "new stars" where none existed before.

Most of the matter in the star is blown away by the supernova explosion (forming nebulae such as the Crab Nebula) and what remains will be a neutron star (which sometimes manifests itself as a pulsar or X-ray burster) or, in the case of the largest stars (large enough to leave a stellar remnant greater than roughly 4 solar masses), a black hole. In a neutron star the matter is in a state known as neutron-degenerate matter, with a more exotic form of degenerate matter, QCD matter, possibly present in the core. Within a black hole the matter is in a state that is not currently understood.

The blown-off outer layers of dying stars include heavy elements which may be recycled during new star formation. These heavy elements allow the formation of rocky planets. The outflow from supernovae and the stellar wind of large stars play an important part in shaping the interstellar medium.

## Distribution

In addition to isolated stars, a multi-star system can consist of two or more gravitationally bound stars that orbit around each other. The most common multi-star system is a binary star, but systems of three or more stars are also found. For reasons of orbital stability, such multi-star systems are often organized into hierarchical sets of co-orbiting binary stars.
Larger groups called star clusters also exist. These range from loose stellar associations with only a few stars, up to enormous globular clusters with hundreds of thousands of stars.

It has been a long-held assumption that the majority of stars occur in gravitationally bound, multiple-star systems. This is particularly true for very massive O and B class stars, where 80% of the systems are believed to be multiple. However, the proportion of single-star systems increases for smaller stars, so that only 25% of red dwarfs are known to have stellar companions. As 85% of all stars are red dwarfs, most stars in the Milky Way are likely single from birth.

Stars are not spread uniformly across the universe, but are normally grouped into galaxies along with interstellar gas and dust. A typical galaxy contains hundreds of billions of stars, and there are more than 100 billion (10¹¹) galaxies in the observable universe. While it is often believed that stars only exist within galaxies, intergalactic stars have been discovered. Astronomers estimate that there are at least 70 sextillion (7×10²²) stars in the observable universe.

The nearest star to the Earth, apart from the Sun, is Proxima Centauri, which is 39.9 trillion (10¹²) kilometres, or 4.2 light-years away. Light from Proxima Centauri takes 4.2 years to reach Earth. Travelling at the orbital speed of the Space Shuttle (5 miles per second—almost 30,000 kilometres per hour), it would take about 150,000 years to get there. Distances like this are typical inside galactic discs, including in the vicinity of the solar system. Stars can be much closer to each other in the centres of galaxies and in globular clusters, or much farther apart in galactic halos.

Due to the relatively vast distances between stars outside the galactic nucleus, collisions between stars are thought to be rare. In denser regions such as the core of globular clusters or the galactic center, collisions can be more common. Such collisions can produce what are known as blue stragglers. These abnormal stars have a higher surface temperature than the other main sequence stars with the same luminosity in the cluster.

## Characteristics

Almost everything about a star is determined by its initial mass, including essential characteristics such as luminosity and size, as well as the star's evolution, lifespan, and eventual fate.

### Age

Most stars are between 1 billion and 10 billion years old. Some stars may even be close to 13.7 billion years old—the observed age of the universe. The oldest star yet discovered, HE 1523-0901, is an estimated 13.2 billion years old.

The more massive the star, the shorter its lifespan, primarily because massive stars have greater pressure on their cores, causing them to burn hydrogen more rapidly. The most massive stars last an average of about one million years, while stars of minimum mass (red dwarfs) burn their fuel very slowly and last tens to hundreds of billions of years.

### Chemical composition

When stars form they are composed of about 70% hydrogen and 28% helium, as measured by mass, with a small fraction of heavier elements. Typically the proportion of heavy elements is measured in terms of the iron content of the stellar atmosphere, as iron is a common element and its absorption lines are relatively easy to measure. Because the molecular clouds where stars form are steadily enriched by heavier elements from supernova explosions, a measurement of the chemical composition of a star can be used to infer its age.
The proportion of heavier elements may also be an indicator of the likelihood that the star has a planetary system.

The star with the lowest iron content ever measured is the dwarf HE1327-2326, with only 1/200,000th the iron content of the Sun. By contrast, the super-metal-rich star μ Leonis has nearly double the iron abundance of the Sun, while the planet-bearing star 14 Herculis has nearly triple the iron. There also exist chemically peculiar stars that show unusual abundances of certain elements in their spectrum, especially chromium and rare earth elements.

### Diameter

Due to their great distance from the Earth, all stars except the Sun appear to the human eye as shining points in the night sky that twinkle because of the effect of the Earth's atmosphere. The Sun is also a star, but it is close enough to the Earth to appear as a disk instead, and to provide daylight. Other than the Sun, the star with the largest apparent size is R Doradus, with an angular diameter of only 0.057 arcseconds.

The disks of most stars are much too small in angular size to be observed with current ground-based optical telescopes, and so interferometer telescopes are required in order to produce images of these objects. Another technique for measuring the angular size of stars is through occultation. By precisely measuring the drop in brightness of a star as it is occulted by the Moon (or the rise in brightness when it reappears), the star's angular diameter can be computed.

Stars range in size from neutron stars, which vary anywhere from 20 to 40 km in diameter, to supergiants like Betelgeuse in the Orion constellation, which has a diameter approximately 650 times larger than the Sun—about 0.9 billion kilometres. However, Betelgeuse has a much lower density than the Sun.

### Kinematics

The motion of a star relative to the Sun can provide useful information about the origin and age of a star, as well as the structure and evolution of the surrounding galaxy. The components of motion of a star consist of the radial velocity toward or away from the Sun, and the transverse angular movement, which is called its proper motion.

Radial velocity is measured by the Doppler shift of the star's spectral lines, and is given in units of km/s. The proper motion of a star is determined by precise astrometric measurements in units of milliarcseconds (mas) per year. Once the parallax of a star has been determined, the proper motion can be converted into units of velocity. Stars with high rates of proper motion are likely to be relatively close to the Sun, making them good candidates for parallax measurements.

Once both rates of movement are known, the space velocity of the star relative to the Sun or the galaxy can be computed. Among nearby stars, it has been found that population I stars have generally lower velocities than older, population II stars. The latter have elliptical orbits that are inclined to the plane of the galaxy. Comparison of the kinematics of nearby stars has also led to the identification of stellar associations. These are most likely groups of stars that share a common point of origin in giant molecular clouds.

### Magnetic field

The magnetic field of a star is generated within regions of the interior where convective circulation occurs. This movement of conductive plasma functions like a dynamo, generating magnetic fields that extend throughout the star.
The strength of the magnetic field varies with the mass and composition of the star, and the amount of magnetic surface activity depends upon the star's rate of rotation. This surface activity produces starspots, which are regions of strong magnetic fields and lower than normal surface temperatures. Coronal loops are arching magnetic fields that reach out into the corona from active regions. Stellar flares are bursts of high-energy particles that are emitted due to the same magnetic activity.

Young, rapidly rotating stars tend to have high levels of surface activity because of their magnetic field. The magnetic field can act upon a star's stellar wind, however, functioning as a brake to gradually slow the rate of rotation as the star grows older. Thus, older stars such as the Sun have a much slower rate of rotation and a lower level of surface activity. The activity levels of slowly rotating stars tend to vary in a cyclical manner and can shut down altogether for periods. During the Maunder minimum, for example, the Sun underwent a 70-year period with almost no sunspot activity.

### Mass

One of the most massive stars known is Eta Carinae, with 100–150 times as much mass as the Sun; its lifespan is very short—only several million years at most. A recent study of the Arches cluster suggests that 150 solar masses is the upper limit for stars in the current era of the universe. The reason for this limit is not precisely known, but it is partially due to the Eddington luminosity, which defines the maximum amount of luminosity that can pass through the atmosphere of a star without ejecting the gases into space. The first stars to form after the Big Bang may have been larger, up to 300 solar masses or more, due to the complete absence of elements heavier than lithium in their composition. This generation of supermassive, population III stars is long extinct, however, and currently only theoretical.

With a mass only 93 times that of Jupiter, AB Doradus C, a companion to AB Doradus A, is the smallest known star undergoing nuclear fusion in its core. For stars with similar metallicity to the Sun, the theoretical minimum mass the star can have, and still undergo fusion at the core, is estimated to be about 75 times the mass of Jupiter. When the metallicity is very low, however, a recent study of the faintest stars found that the minimum star size seems to be about 8.3% of the solar mass, or about 87 times the mass of Jupiter. Smaller bodies are called brown dwarfs, which occupy a poorly defined grey area between stars and gas giants.

The combination of the radius and the mass of a star determines the surface gravity. Giant stars have a much lower surface gravity than main sequence stars, while the opposite is the case for degenerate, compact stars such as white dwarfs. The surface gravity can influence the appearance of a star's spectrum, with higher gravity causing a broadening of the absorption lines.

### Rotation

The rotation rate of stars can be approximated through spectroscopic measurement, or more exactly determined by tracking the rotation rate of starspots. Young stars can have a rapid rate of rotation, greater than 100 km/s at the equator. The B-class star Achernar, for example, has an equatorial rotation velocity of about 225 km/s or greater, giving it an equatorial diameter that is more than 50% larger than the distance between the poles. This rate of rotation is just below the critical velocity of 300 km/s, at which the star would break apart.
By contrast, the Sun rotates only once every 25–35 days, with an equatorial velocity of 1.994 km/s. The star's magnetic field and the stellar wind serve to slow a main sequence star's rate of rotation by a significant amount as it evolves on the main sequence.

Degenerate stars have contracted into a compact mass, resulting in a rapid rate of rotation. However, they have relatively low rates of rotation compared to what would be expected by conservation of angular momentum—the tendency of a rotating body to compensate for a contraction in size by increasing its rate of spin. A large portion of the star's angular momentum is dissipated as a result of mass loss through the stellar wind. In spite of this, the rate of rotation for a pulsar can be very rapid. The pulsar at the heart of the Crab nebula, for example, rotates 30 times per second. The rotation rate of the pulsar will gradually slow due to the emission of radiation.

### Temperature

The surface temperature of a main sequence star is determined by the rate of energy production at the core and the radius of the star, and is often estimated from the star's color index. It is normally given as the effective temperature, which is the temperature of an idealized black body that radiates its energy at the same luminosity per surface area as the star. Note that the effective temperature is only a representative value, however, as stars actually have a temperature gradient that decreases with increasing distance from the core. The temperature in the core region of a star is several million kelvins.

The stellar temperature will determine the rate of energization or ionization of different elements, resulting in characteristic absorption lines in the spectrum. The surface temperature of a star, along with its visual absolute magnitude and absorption features, is used to classify a star (see classification below). Massive main sequence stars can have surface temperatures of 50,000 K. Smaller stars such as the Sun have surface temperatures of a few thousand kelvins. Red giants have relatively low surface temperatures of about 3,600 K, but they also have a high luminosity due to their large exterior surface area.

## Radiation

The energy produced by stars, as a by-product of nuclear fusion, radiates into space as both electromagnetic radiation and particle radiation. The particle radiation emitted by a star is manifested as the stellar wind (which exists as a steady stream of electrically charged particles, such as free protons, alpha particles, and beta particles, emanating from the star's outer layers) and as a steady stream of neutrinos emanating from the star's core.

The production of energy at the core is the reason why stars shine so brightly: every time two or more atomic nuclei of one element fuse together to form an atomic nucleus of a new, heavier element, gamma ray photons are released from the nuclear fusion reaction. This energy is converted to other forms of electromagnetic energy, including visible light, by the time it reaches the star's outer layers.

The color of a star, as determined by the peak frequency of the visible light, depends on the temperature of the star's outer layers, including its photosphere. Besides visible light, stars also emit forms of electromagnetic radiation that are invisible to the human eye. In fact, stellar electromagnetic radiation spans the entire electromagnetic spectrum, from the longest wavelengths of radio waves and infrared to the shortest wavelengths of ultraviolet, X-rays, and gamma rays.
All components of stellar electromagnetic radiation, both visible and invisible, are typically significant. Using the stellar spectrum, astronomers can also determine the surface temperature, surface gravity, metallicity and rotational velocity of a star. If the distance of the star is known, such as by measuring the parallax, then the luminosity of the star can be derived. The mass, radius, surface gravity, and rotation period can then be estimated based on stellar models. (Mass can be measured directly for stars in binary systems. The technique of gravitational microlensing will also yield the mass of a star.) With these parameters, astronomers can also estimate the age of the star.

### Luminosity

In astronomy, luminosity is the amount of light, and other forms of radiant energy, a star radiates per unit of time. The luminosity of a star is determined by its radius and surface temperature. However, many stars do not radiate a uniform flux—the amount of energy radiated per unit area—across their entire surface. The rapidly rotating star Vega, for example, has a higher energy flux at its poles than along its equator.

Surface patches with a lower temperature and luminosity than average are known as starspots. Small, dwarf stars such as the Sun generally have essentially featureless disks with only small starspots. Larger, giant stars have much bigger, much more obvious starspots, and they also exhibit strong stellar limb darkening. That is, the brightness decreases towards the edge of the stellar disk. Red dwarf flare stars such as UV Ceti may also possess prominent starspot features.

### Magnitude

The apparent brightness of a star is measured by its apparent magnitude, which depends on the star's luminosity, its distance from Earth, and the altering of the star's light as it passes through Earth's atmosphere. Intrinsic or absolute magnitude is the apparent magnitude a star would have if the distance between the Earth and the star were 10 parsecs (32.6 light-years), and it is directly related to a star's luminosity.

Number of stars brighter than a given apparent magnitude:

| Apparent magnitude | Number of stars |
|---|---|
| 0 | 4 |
| 1 | 15 |
| 2 | 48 |
| 3 | 171 |
| 4 | 513 |
| 5 | 1,602 |
| 6 | 4,800 |
| 7 | 14,000 |

Both the apparent and absolute magnitude scales are logarithmic units: one whole number difference in magnitude is equal to a brightness variation of about 2.5 times (the 5th root of 100, or approximately 2.512). This means that a first magnitude (+1.00) star is about 2.5 times brighter than a second magnitude (+2.00) star, and approximately 100 times brighter than a sixth magnitude (+6.00) star. The faintest stars visible to the naked eye under good seeing conditions are about magnitude +6.

On both apparent and absolute magnitude scales, the smaller the magnitude number, the brighter the star; the larger the magnitude number, the fainter. The brightest stars, on either scale, have negative magnitude numbers.
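Because the scale is logarithmic, converting a magnitude difference into a brightness ratio is a one-line computation, formalized just below. Here is an illustrative sketch (Python assumed; the sample magnitudes in the comments are the apparent magnitudes quoted in this section):

```python
def brightness_ratio(m_faint, m_bright):
    """Apparent-brightness ratio implied by a magnitude difference.

    One magnitude corresponds to a factor of 100**(1/5) ~ 2.512.
    """
    return 100 ** ((m_faint - m_bright) / 5)

print(brightness_ratio(2.00, 1.00))    # ~2.512: one magnitude apart
print(brightness_ratio(6.00, 1.00))    # ~100: five magnitudes apart
print(brightness_ratio(-1.44, -26.7))  # ~1.3e10: Sirius vs the Sun, as seen from Earth
```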
The variation in brightness between two stars is calculated by subtracting the magnitude number of the brighter star ($m_b$) from the magnitude number of the fainter star ($m_f$), then using the difference as an exponent for the base number 2.512; that is to say:

$$\Delta m = m_f - m_b$$
$$2.512^{\Delta m} = \text{variation in brightness}$$

Relative to both luminosity and distance from Earth, absolute magnitude (M) and apparent magnitude (m) are not equivalent for an individual star; for example, the bright star Sirius has an apparent magnitude of −1.44, but it has an absolute magnitude of +1.41. The Sun has an apparent magnitude of −26.7, but its absolute magnitude is only +4.83. Sirius, the brightest star in the night sky as seen from Earth, is approximately 23 times more luminous than the Sun, while Canopus, the second brightest star in the night sky with an absolute magnitude of −5.53, is approximately 14,000 times more luminous than the Sun. Despite Canopus being vastly more luminous than Sirius, however, Sirius appears brighter than Canopus. This is because Sirius is merely 8.6 light-years from the Earth, while Canopus is much farther away, at a distance of 310 light-years.

As of 2006, the star with the highest known absolute magnitude is LBV 1806-20, with a magnitude of −14.2. This star is at least 5,000,000 times more luminous than the Sun. The least luminous stars that are currently known are located in the NGC 6397 cluster. The faintest red dwarfs in the cluster were magnitude 26, while a 28th magnitude white dwarf was also discovered. These faint stars are so dim that their light is as bright as a birthday candle on the Moon when viewed from the Earth.

## Classification

Surface temperature ranges for different stellar classes:

| Class | Temperature | Sample star |
|---|---|---|
| O | 33,000 K or more | Zeta Ophiuchi |
| B | 10,500–30,000 K | Rigel |
| A | 7,500–10,000 K | Altair |
| F | 6,000–7,200 K | Procyon A |
| G | 5,500–6,000 K | Sun |
| K | 4,000–5,250 K | Epsilon Indi |
| M | 2,600–3,850 K | Proxima Centauri |

The current stellar classification system originated in the early 20th century, when stars were classified from A to Q based on the strength of the hydrogen line. It was not known at the time that the major influence on the line strength was temperature; the hydrogen line strength reaches a peak at around 9000 K, and is weaker at both hotter and cooler temperatures. When the classifications were reordered by temperature, the scheme more closely resembled the modern one.

Stars are given a single-letter classification according to their spectra, ranging from type O, which are very hot, to M, which are so cool that molecules may form in their atmospheres. The main classifications in order of decreasing surface temperature are: O, B, A, F, G, K, and M. A variety of rare spectral types have special classifications. The most common of these are types L and T, which classify the coldest low-mass stars and brown dwarfs. Each letter has 10 sub-divisions, numbered from 0 to 9, in order of decreasing temperature. However, this system breaks down at extreme high temperatures: class O0 and O1 stars may not exist.

In addition, stars may be classified by the luminosity effects found in their spectral lines, which correspond to their spatial size and are determined by the surface gravity. These range from 0 (hypergiants) through III (giants) to V (main sequence dwarfs) and VII (white dwarfs). Most stars belong to the main sequence, which consists of ordinary hydrogen-burning stars.
These fall along a narrow, diagonal band when graphed according to their absolute magnitude and spectral type. Our Sun is a main sequence G2V yellow dwarf, of intermediate temperature and ordinary size.

Additional nomenclature, in the form of lower-case letters, can follow the spectral type to indicate peculiar features of the spectrum. For example, an "e" can indicate the presence of emission lines; "m" represents unusually strong levels of metals, and "var" can mean variations in the spectral type.

White dwarf stars have their own class that begins with the letter D. This is further sub-divided into the classes DA, DB, DC, DO, DZ, and DQ, depending on the types of prominent lines found in the spectrum. This is followed by a numerical value that indicates the temperature index.

## Variable stars

Variable stars have periodic or random changes in luminosity because of intrinsic or extrinsic properties. Of the intrinsically variable stars, the primary types can be subdivided into three principal groups.

During their stellar evolution, some stars pass through phases where they can become pulsating variables. Pulsating variable stars vary in radius and luminosity over time, expanding and contracting with periods ranging from minutes to years, depending on the size of the star. This category includes Cepheid and Cepheid-like stars, and long-period variables such as Mira.

Eruptive variables are stars that experience sudden increases in luminosity because of flares or mass-ejection events. This group includes protostars, Wolf-Rayet stars, and flare stars, as well as giant and supergiant stars.

Cataclysmic or explosive variables undergo a dramatic change in their properties. This group includes novae and supernovae. A binary star system that includes a nearby white dwarf can produce certain types of these spectacular stellar explosions, including the nova and the Type Ia supernova. The explosion is created when the white dwarf accretes hydrogen from the companion star, building up mass until the hydrogen undergoes fusion. Some novae are also recurrent, having periodic outbursts of moderate amplitude.

Stars can also vary in luminosity because of extrinsic factors, such as eclipsing binaries, as well as rotating stars that produce extreme starspots. A notable example of an eclipsing binary is Algol, which regularly varies in magnitude from 2.3 to 3.5 over a period of 2.87 days.

## Structure

The interior of a stable star is in a state of hydrostatic equilibrium: the forces on any small volume almost exactly counterbalance each other. The balanced forces are the inward gravitational force and an outward force due to the pressure gradient within the star. The pressure gradient is established by the temperature gradient of the plasma; the outer part of the star is cooler than the core. The temperature at the core of a main sequence or giant star is at least on the order of 10⁷ K. The resulting temperature and pressure at the hydrogen-burning core of a main sequence star are sufficient for nuclear fusion to occur and for sufficient energy to be produced to prevent further collapse of the star.

As atomic nuclei are fused in the core, they emit energy in the form of gamma rays. These photons interact with the surrounding plasma, adding to the thermal energy at the core. Stars on the main sequence convert hydrogen into helium, creating a slowly but steadily increasing proportion of helium in the core. Eventually the helium content becomes predominant and energy production ceases at the core.
Instead, for stars of more than 0.4 solar masses, fusion occurs in a slowly expanding shell around the degenerate helium core.

In addition to hydrostatic equilibrium, the interior of a stable star will also maintain an energy balance of thermal equilibrium. There is a radial temperature gradient throughout the interior that results in a flux of energy flowing toward the exterior. The outgoing flux of energy leaving any layer within the star will exactly match the incoming flux from below.

The radiation zone is the region within the stellar interior where radiative transfer is sufficiently efficient to maintain the flux of energy. In this region the plasma will not be perturbed and any mass motions will die out. If this is not the case, however, then the plasma becomes unstable and convection will occur, forming a convection zone. This can happen, for example, in regions with very high energy fluxes, such as near the core, or in areas with high opacity, as in the outer envelope.

The occurrence of convection in the outer envelope of a main sequence star depends on the mass. Stars with several times the mass of the Sun have a convection zone deep within the interior and a radiative zone in the outer layers. Smaller stars such as the Sun are just the opposite, with the convective zone located in the outer layers. Red dwarf stars with less than 0.4 solar masses are convective throughout, which prevents the accumulation of a helium core. For most stars the convective zones will also vary over time as the star ages and the constitution of the interior is modified.

The portion of a star that is visible to an observer is called the photosphere. This is the layer at which the plasma of the star becomes transparent to photons of light. From here, the energy generated at the core becomes free to propagate out into space. It is within the photosphere that sunspots, or regions of lower than average temperature, appear.

Above the level of the photosphere is the stellar atmosphere. In a main sequence star such as the Sun, the lowest level of the atmosphere is the thin chromosphere region, where spicules appear and stellar flares begin. This is surrounded by a transition region, where the temperature rapidly increases within a distance of only 100 km. Beyond this is the corona, a volume of super-heated plasma that can extend outward to several million kilometres. The existence of a corona appears to be dependent on a convective zone in the outer layers of the star. Despite its high temperature, the corona emits very little light. The corona region of the Sun is normally only visible during a solar eclipse.

From the corona, a stellar wind of plasma particles expands outward from the star, propagating until it interacts with the interstellar medium. For the Sun, the influence of its solar wind extends throughout the bubble-shaped region of the heliosphere.

## Nuclear fusion reaction pathways

A variety of different nuclear fusion reactions take place inside the cores of stars, depending upon their mass and composition, as part of stellar nucleosynthesis. The net mass of the fused atomic nuclei is smaller than the sum of the constituents. This lost mass is converted into energy, according to the mass-energy equivalence relationship E = mc². The hydrogen fusion process is temperature-sensitive, so a moderate increase in the core temperature will result in a significant increase in the fusion rate.
As a result, the core temperature of main sequence stars only varies from 4 million K for a small M-class star to 40 million K for a massive O-class star. In the Sun, with a 10 million K core, hydrogen fuses to form helium in the proton-proton chain reaction:

4 ¹H → 2 ²H + 2e⁺ + 2ν_e (4.0 MeV + 1.0 MeV)
2 ¹H + 2 ²H → 2 ³He + 2γ (5.5 MeV)
2 ³He → ⁴He + 2 ¹H (12.9 MeV)

These reactions result in the overall reaction:

4 ¹H → ⁴He + 2e⁺ + 2γ + 2ν_e (26.7 MeV)

where e⁺ is a positron, γ is a gamma ray photon, ν_e is a neutrino, and H and He are isotopes of hydrogen and helium, respectively. The energy released by each reaction is a few million electron volts, which is actually only a tiny amount of energy. However, enormous numbers of these reactions occur constantly, producing all the energy necessary to sustain the star's radiation output.

Minimum stellar mass required for fusion:

| Element | Solar masses |
|---|---|
| Hydrogen | 0.01 |
| Helium | 0.4 |
| Carbon | 4 |
| Neon | 8 |

In more massive stars, helium is produced in a cycle of reactions catalyzed by carbon—the carbon-nitrogen-oxygen cycle. In evolved stars with cores at 100 million K and masses between 0.5 and 10 solar masses, helium can be transformed into carbon in the triple-alpha process that uses the intermediate element beryllium:

⁴He + ⁴He + 92 keV → ⁸Be*
⁴He + ⁸Be* + 67 keV → ¹²C*
¹²C* → ¹²C + γ + 7.4 MeV

For an overall reaction of:

3 ⁴He → ¹²C + γ + 7.2 MeV

In massive stars, heavier elements can also be burned in a contracting core through the neon burning process and the oxygen burning process. The final stage in the stellar nucleosynthesis process is the silicon burning process, which results in the production of the stable isotope iron-56. Fusion cannot proceed any further except through an endothermic process, and so further energy can only be produced through gravitational collapse.

The example below shows the amount of time required for a star of 20 solar masses to consume all of its nuclear fuel. As an O-class main sequence star, it would be 8 times the solar radius and 62,000 times the Sun's luminosity.

| Fuel material | Temperature (million kelvins) | Density (kg/cm³) | Burn duration (τ in years) |
|---|---|---|---|
| H | 37 | 0.0045 | 8.1 million |
| He | 188 | 0.97 | 1.2 million |
| C | 870 | 170 | 976 |
| Ne | 1,570 | 3,100 | 0.6 |
| O | 1,980 | 5,550 | 1.25 |
| S/Si | 3,340 | 33,400 | 0.0315 |
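As a plausibility check on the 26.7 MeV quoted for the overall proton-proton reaction, the figure can be recovered from the mass defect plus positron annihilation. The sketch below assumes Python and standard particle masses (proton 1.007276 u, ⁴He nucleus 4.001506 u, electron 0.000549 u, 1 u = 931.494 MeV/c²), none of which come from the article itself:

```python
# Masses in atomic mass units (u); standard values, assumed here.
m_p, m_alpha, m_e = 1.007276, 4.001506, 0.000549
U_TO_MEV = 931.494  # 1 u in MeV/c^2

dm = 4 * m_p - m_alpha - 2 * m_e   # mass defect of 4 1H -> 4He + 2e+
q = dm * U_TO_MEV                  # ~24.7 MeV from the mass defect alone
q += 2 * (2 * m_e * U_TO_MEV)      # ~2.0 MeV more when the two positrons annihilate
print(round(q, 1))                 # ~26.7 MeV, matching the figure above
```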
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8700035214424133, "perplexity": 875.924542270801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/university-calculus-early-transcendentals-3rd-edition/chapter-5-section-5-6-definite-integral-substitutions-and-the-area-between-curves-exercises-page-339/74
## University Calculus: Early Transcendentals (3rd Edition)

The area of the region is $9/2$. $x=y^2$ and $x=y+2$ 1) Sketch the graph: (the original solution includes a sketch of the region, which is not reproduced here). We use integration with respect to $y$ here. 2) Find the limits of integration: To find the limits of integration, we find the intersection points with respect to $y$. $$y^2=y+2$$ $$y^2-y-2=0$$ $$(y+1)(y-2)=0$$ $$y=-1\hspace{1cm}\text{or}\hspace{1cm}y=2$$ So the upper limit is $2$ and the lower limit is $-1$. 3) Find the area: The region from $y=-1$ to $y=2$ is bounded on the right by $x=y+2$ and on the left by $x=y^2$. Therefore, the area of the region is $$A=\int^2_{-1}(y+2-y^2)dy$$ $$A=\frac{y^2}{2}+2y-\frac{y^3}{3}\Big]^2_{-1}$$ $$A=\Big(2+4-\frac{8}{3}\Big)-\Big(\frac{1}{2}-2+\frac{1}{3}\Big)$$ $$A=\frac{10}{3}-(-\frac{7}{6})=\frac{9}{2}$$
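A quick symbolic verification of this area (my own check, not part of the textbook solution), using SymPy:

```python
import sympy as sp

y = sp.symbols('y')
# Right boundary minus left boundary, integrated over y in [-1, 2]
area = sp.integrate((y + 2) - y**2, (y, -1, 2))
print(area)  # 9/2
```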
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9559751152992249, "perplexity": 92.12425455758073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540569146.17/warc/CC-MAIN-20191213202639-20191213230639-00481.warc.gz"}
http://www.physicsforums.com/showpost.php?p=2923986&postcount=3
Nevermind, I figured out how to solve the problem using my old differential equations textbook. In case someone is curious, here's what I did: I rearranged the terms so that like terms were on the same side: $$\frac{dz_{1}}{dt}=-z_{1}^{2}$$ $$\frac{dz_{1}}{-z_{1}^{2}}=dt$$ I then integrated each side: $$\int -\frac{1}{z_{1}^{2}}dz=\int dt$$ $$\frac{1}{z_{1}}+C_{1}=t+C_{2}$$ Since both sides had constants, I dropped one, and I then used the initial condition of $$z_{1}(t=0)=x_{1}$$ to solve for C $$\frac{1}{z_{1}}+C=t$$ $$\frac{1}{z_{1}}=t-C$$ $$z_{1}=\frac{1}{t-C}$$ $$z_{1}(0)=x_{1}=\frac{1}{0-C}$$ $$C=-\frac{1}{x_{1}}$$ I plugged in C above and got this equation for Lagrangian position: $$z_{1}=\frac{1}{t+\frac{1}{x_{1}}}=\frac{x_{1}}{1+tx_{1}}$$
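As a sanity check (my addition, not from the thread), SymPy's ODE solver reproduces the same closed form:

```python
import sympy as sp

t = sp.symbols('t')
x1 = sp.symbols('x1', positive=True)
z = sp.Function('z')

# Solve dz/dt = -z^2 with the initial condition z(0) = x1
sol = sp.dsolve(sp.Eq(z(t).diff(t), -z(t)**2), z(t), ics={z(0): x1})
print(sp.simplify(sol.rhs))  # x1/(t*x1 + 1), matching the thread's result
```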
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9547309875488281, "perplexity": 235.09127126604758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510257966.18/warc/CC-MAIN-20140728011737-00042-ip-10-146-231-18.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/421764/decidability-of-a-polynomial-exponential-equation-in-two-variables?noredirect=1
# Decidability of a polynomial-exponential equation in two variables

My question is with regard to the following (algorithmic) problem:

Problem. Given $$f\in \mathbb{Z}[x,y], a,b\in \mathbb{Q}, r\in \mathbb{Z}$$, do there exist positive integers $$m,n$$ such that $$f(m,n) = r a^m b^n$$? Is this problem decidable? Is it decidable in any special case (e.g. taking a specific (non-trivial) $$f$$, taking $$a,b\in \mathbb{Z}$$, or choosing specific $$a,b,r$$)? Thank you!

• Apparently, if both $|a|,|b|>1$ or both $|a|,|b|<1$, then there are only finitely many candidate $m,n$. So the interesting case is when, say, $|a|< 1 < |b|$. May 4 at 18:53
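Whatever the status of the general decision problem, small instances can at least be searched exhaustively. A minimal sketch of such a search (my own illustration; the polynomial and the values of a, b, r below are arbitrary placeholders, not from the question):

```python
from fractions import Fraction

def search(f, a, b, r, max_mn=200):
    """Naive search for positive integers m, n with f(m, n) == r * a**m * b**n."""
    for m in range(1, max_mn + 1):
        for n in range(1, max_mn + 1):
            if Fraction(f(m, n)) == r * a**m * b**n:
                yield (m, n)

# Placeholder instance: f(x, y) = x**2 + y, a = 1/2, b = 3, r = 1
hits = list(search(lambda x, y: x**2 + y,
                   Fraction(1, 2), Fraction(3), Fraction(1), max_mn=50))
print(hits)  # prints any solutions found in the search window (here: none)
```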
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9794502854347229, "perplexity": 434.9536635212535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103922377.50/warc/CC-MAIN-20220701064920-20220701094920-00795.warc.gz"}
https://arxiv.org/abs/math/0101065
# Bessel Integrals and Fundamental Solutions for a Generalized Tricomi Operator

Abstract: Partial Fourier transforms are used to find explicit formulas for two remarkable fundamental solutions for a generalized Tricomi operator. These fundamental solutions clearly reflect the mixed type of the operator. In order to prove these results, we establish explicit formulas for Fourier transforms of some types of Bessel functions.

Subjects: Classical Analysis and ODEs (math.CA); Analysis of PDEs (math.AP)
MSC classes: 35M10 (primary) 46F10, 42B10 (secondary)
Cite as: arXiv:math/0101065 [math.CA] (or arXiv:math/0101065v1 [math.CA] for this version)

## Submission history

From: J. Barros-Neto
[v1] Mon, 8 Jan 2001 20:43:52 UTC (14 KB)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9519138932228088, "perplexity": 4187.821156108811}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027322170.99/warc/CC-MAIN-20190825021120-20190825043120-00184.warc.gz"}
http://cpr-condmat-disnn.blogspot.com/2013/01/13015769-satoshi-takabe-et-al.html
## Minimum vertex cover problems on random hypergraphs: replica symmetric solution and a leaf removal algorithm

Satoshi Takabe, Koji Hukushima

We study minimum vertex cover problems on random α-uniform hypergraphs using two different approaches: a replica method from the statistical mechanics of random systems, and a leaf removal algorithm. It is found that there exists a phase transition at the critical average degree e/(α-1). Below the critical degree, a replica symmetric ansatz in the statistical-mechanical method holds, and the algorithm estimates a solution of the problem which coincides with that given by the replica method. In contrast, above the critical degree, the replica symmetric solution becomes unstable and these methods fail to estimate the exact solution. These results strongly suggest a close relation between replica symmetry and the performance of the approximation algorithm. View original: http://arxiv.org/abs/1301.5769
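For intuition about what a leaf removal algorithm does, here is a minimal sketch for ordinary graphs (the α = 2 case); the paper's algorithm works on α-uniform hypergraphs and differs in detail, so this is an illustration of the idea, not the authors' procedure. Repeatedly take a leaf (a degree-1 vertex), put its unique neighbour into the cover, and delete both:

```python
import networkx as nx  # assumed available; any adjacency structure would do

def leaf_removal_cover(G):
    """Greedy leaf removal: cover the neighbour of each leaf. Returns a partial
    vertex cover and the remaining leafless 'core' (which needs other methods)."""
    G = G.copy()
    cover = set()
    leaves = [v for v in G if G.degree(v) == 1]
    while leaves:
        v = leaves.pop()
        if v not in G or G.degree(v) != 1:
            continue                      # stale entry: v was removed or changed
        (u,) = G.neighbors(v)             # the unique neighbour goes into the cover
        cover.add(u)
        neighbours_of_u = list(G.neighbors(u))
        G.remove_node(u)
        G.remove_node(v)
        leaves.extend(w for w in neighbours_of_u
                      if w in G and G.degree(w) == 1)
    return cover, G

G = nx.path_graph(5)                      # 0-1-2-3-4
print(sorted(leaf_removal_cover(G)[0]))   # [1, 3]: a minimum cover of the path
```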
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8583505749702454, "perplexity": 923.6798613939726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948579564.61/warc/CC-MAIN-20171215192327-20171215214327-00223.warc.gz"}
http://mca.nowgray.com/2017/03/solved-sorting-when-there-are-only-olog.html
# [Solved]: Sorting when there are only O(log n) many different numbers

Problem Detail: We have $n$ integers with lots of repeated numbers. In this list, the number of distinct elements is $O(\log n)$. What's the best asymptotic number of comparisons for sorting this list? Any idea, hint, or pseudocode? In fact I want to learn the pseudocode.

#### Answered By : Chao Xu

Because you asked for the minimum number of comparisons, I assume the algorithm can only compare the numbers. The idea is to extend the sorting lower bound argument. Assume you want to sort $n$ elements knowing there exist at most $k$ distinct values. There are $n!$ ways to permute the elements, but many of them are equivalent. If there are $n_i$ elements of the $i$th value, each permutation is equivalent to $\prod_{i=1}^k n_i!$ permutations (including itself). So the total number of distinct permutations is $$\frac{n!}{\prod_{i=1}^k n_i!}$$ The number of required comparisons is bounded below by $$\log_2 \left( n!/\min \{ \prod_{i=1}^k n_i! \big| \sum_{i=1}^k n_i = n, n_i\geq 0\text{ for all } i\} \right)$$ Happily, the minimization can be handled by extending the factorial to the continuous domain: $\min \{ \prod_{i=1}^k n_i! \big| \sum_{i=1}^k n_i = n, n_i\geq 0\text{ for all } i\}$ is attained when $n_i=n/k$. (Note the $\log$ in the next computation is base $e$ for convenience.) $$\log \left( n!/{(n/k)!^k} \right) = \log (n!) - k \log ((n/k)!) = n\log(n) - n\log(n/k) + O(\log n) = n\log(k) + O(\log n) = \Omega(n\log k)$$ $\log(n!) = n\log n - n + O(\log n)$ is Stirling's approximation. To get an upper bound, just consider storing the unique values in a binary search tree; on each insert we either increase the occurrence count of an element already in the BST, or insert a new element into the BST. Finally, print the output from the BST. This takes $O(n\log k)$ time. Since both the lower bound and the upper bound hold for all $k$, and here $k = O(\log n)$, the algorithm takes $O(n\log \log n)$ time for your problem. I just figured out from @Pseudonym's comment that this proof also shows that we need at least $nH$ comparisons, where $H$ is the entropy of the alphabet, so I might as well add this to the answer. Let $c = 1/\log 2$ and $p_i = n_i/n$. The entropy of the alphabet where the $i$th letter appears $n_i$ times is $H=-\sum p_i \log_2 p_i$. $nH = -\sum n_i (\log_2(n_i)-\log_2(n)) = \sum n_i (\log_2(n) - \log_2(n_i)) = c \sum n_i (\log(n) - \log(n_i))$. \begin{align*} \log_2 \left( n!/\prod_{i=1}^k n_i! \right) &= \log_2(n!)-\sum_{i=1}^k \log_2(n_i!) \\ &= c \left(\log(n!)-\sum_{i=1}^k \log(n_i!)\right) \\ &= c \left(n \log n-n + O(\log n) - \sum_{i=1}^k \left(n_i \log(n_i)-n_i+O(\log n_i)\right)\right) \\ &\geq c \left(n \log n-n - \sum_{i=1}^k \left(n_i \log(n_i)-n_i\right)\right) \\ &= c \left(-n + \sum_{i=1}^k \left(n_i(\log(n) - \log(n_i))+n_i\right)\right) \\ &= c \sum_{i=1}^k n_i(\log(n) - \log(n_i)) \\ &= nH \end{align*}

Question Source : http://cs.stackexchange.com/questions/29195
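A direct rendering of the upper-bound idea (my sketch; the answer only describes it in prose). Python's dict plays the role of the value-to-count map here; to honour the comparison-only $O(n\log k)$ bound you would use a balanced BST instead, but the structure of the algorithm is the same:

```python
def sort_few_distinct(xs):
    """Sort a list with k distinct values in O(n log k) comparisons
    (a dict is used here for brevity; a balanced BST gives the stated bound)."""
    counts = {}
    for x in xs:                  # n inserts/updates, O(log k) each with a BST
        counts[x] = counts.get(x, 0) + 1
    out = []
    for v in sorted(counts):      # k values, O(k log k) = O(n log k)
        out.extend([v] * counts[v])
    return out

print(sort_few_distinct([3, 1, 3, 2, 1, 1, 2]))  # [1, 1, 1, 2, 2, 3, 3]
```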
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999051094055176, "perplexity": 651.6366972735891}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106367.1/warc/CC-MAIN-20170820092918-20170820112918-00399.warc.gz"}
http://tex.stackexchange.com/questions/88949/curved-waved-lines-with-tikz
# Curved waved lines with TikZ

I'm trying to write code for Feynman diagrams with TikZ (I know it is a common subject on the forum). I found very useful posts. However, I would like to draw a curved wavy line: \documentclass{beamer} \usepackage{tikz} \usetikzlibrary{decorations.pathreplacing,decorations.markings,snakes} \begin{document} \begin{tikzpicture}[thick] \path [draw=blue] (-4,0) -- (-2,0) -- (2,0) -- (4,0); \draw[draw=blue,snake=coil, segment aspect=0] (2,0) arc (0:180:2cm); \end{tikzpicture} \end{document} The above code just draws a solid line, not a wavy one. Can anyone help me solve this weird behaviour? - You aren't invoking the decorations. A decoration needs to be defined (usually via the decoration key) and invoked via the decorate key. – Loop Space Jan 2 '13 at 14:56 Actually the snakes library has been superseded by decorations, but for your needs I think you are just looking for decorations.pathmorphing. Thus, your code, revised, looks like: \documentclass{beamer} \usepackage{lmodern} \usepackage{tikz} \usetikzlibrary{decorations.pathmorphing} \tikzset{snake it/.style={decorate, decoration=snake}} \begin{document} \vspace*{2cm} \begin{center} \begin{tikzpicture}[thick] \path [draw=blue,snake it] (-4,0) -- (-2,0) -- (2,0) -- (4,0); \draw[draw=blue, snake it] (2,0) arc (0:180:2cm); \end{tikzpicture} \end{center} \end{document} Notice that I grouped into a single style, snake it, the key that really sets up the decoration (decorate) and the one that sets the type of decoration (decoration=snake). For further details, you can have a look at section 72 Decorations and more specifically at section 30.2 Path Morphing Decorations in the pgfmanual. (Section 48.2 in pgfmanual 3.0.) - Thank you @claudio, your answer was very helpful!! Cheers – Dox Jan 2 '13 at 15:26 The new TikZ-Feynman package (see also the project page) makes it easy to create Feynman diagrams. The following is just an example to show its capabilities. The relevant key to produce the half circle is half left. You have to compile with lualatex in order to exploit the automatic positioning of vertices. \documentclass[tikz]{standalone} \usepackage{tikz-feynman} \begin{document} \feynmandiagram [layered layout, horizontal=a to d] { a -- [scalar,red] b -- [fermion,blue] c -- [gluon,orange] d, b -- [photon, half left] c, % this produces the curved photon line }; \end{document} -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9077262878417969, "perplexity": 3621.5514345888428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464050919950.49/warc/CC-MAIN-20160524004839-00058-ip-10-185-217-139.ec2.internal.warc.gz"}
http://legisquebec.gouv.qc.ca/en/showversion/cs/I-8.1?code=se:12&pointInTime=20210906
### I-8.1 - Act respecting offences relating to alcoholic beverages

12. (Repealed). 1971, c. 19, s. 10; 1974, c. 14, s. 6; 1979, c. 71, s. 120. 12. In the exercise of its powers the Commission must comply with the regulations made for that purpose by the Gouvernement and with the regulations made by the Commission and approved by the Gouvernement. The regulations may, in particular, deal with: (a)  the tenor of applications for permits and the documents which must accompany them, where such is the case; (b)  the duties payable to the Commission to obtain copies of the objections made against an application for a permit and copies of the documents supporting them; (c)  the procedure to be followed by the Commission to certify the renewal of a permit; (d)  the standards that the Commission must observe, where necessary, to establish the maximum number of patrons who may be admitted at one time to a room where a permit is in use; (e)  the conditions governing the use of permits elsewhere than in the rooms of an establishment, in particular, at a swimming-pool or on a terrace situated in the proximity of the establishment and if necessary, in such case, the provisions of this act which do not apply to the use of such permit; (f)  what constitutes the presentation of music or spectacles with regard to the classes of permits authorizing their presentation; (g)  the standards that the Commission must observe to determine the minimum surface area required to allow dancing or, as the case may be, the presentation of spectacles in an establishment where a permit authorizing it is in use; (h)  the standards that the Commission must observe to determine whether an establishment is a grocery within the meaning of section 20; (i)  the conditions respecting the issue and use of permits and, in particular, of club permits, hunting and fishing lodge permits, trading post permits, reception permits and reunion permits; (j)  what constitutes a trading post for the purposes of section 27 of this act; (k)  the posting-up of permits and, in the case of reception permits, the posting-up of the contract of rent of the hall used for the reception; (l)  the standards governing the arrangement, lighting and furnishing of establishments and rooms for which an applicant applies for a permit or where a permit is in use; (m)  the form and content of the returns the Commission may require from a permit-holder under section 75 and the times when such returns must be filed; (n)  where such is the case, the date of renewal of permits; (o)  any other measure necessary for the application of this act; (p)  the rules respecting the internal management of the Commission and the carrying on of its affairs. Every regulation made under this section must be published in the Gazette officielle du Québec and shall come into force from such publication or on any later date indicated therein. 1971, c. 19, s. 10; 1974, c. 14, s. 6.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8343610763549805, "perplexity": 3764.2790429110537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00341.warc.gz"}
https://www.kofastudy.com/courses/ss1-economics-2nd-term/lessons/population-distribution-week-6/topic/population-structure-distribution/
# Population Structure / Distribution

Population distribution refers to the ways the population of a given country is distributed into certain categories such as age, sex, occupation, and geographical distribution.

### Age Distribution:

Age distribution is defined as the breakdown of the population of a country into age groups. Age distribution is very important in economics because it indicates the productive capacity of the population and the supply of labour available to the different sectors of the economy. The population of a country can be divided into the following age brackets:

i. 0 – 17 years.
ii. 18 – 60 years.
iii. Above 60 years.

From the above classification, the population within the age bracket 0 – 17 years includes infants, children, and pupils in nursery, primary, secondary and tertiary institutions. This age group is called the dependent population because its members are not economically productive and cannot be employed in the labour market; they need to depend on the other groups for their needs.

The age group 18 – 60 years is popularly referred to as the active population, working population, workforce or labour force. This is the economic age bracket that is involved in productive activities or employment. Because they are the working population and depend on themselves for subsistence, they are collectively called the independent population. If the number of people in this group is high, there will be a higher supply of labour and a higher standard of living.

The age group above 60 years is old age. Just like the children (0 – 17 years), they do not involve themselves in productive activities, hence they are also classified as dependent population.

Dependent population is defined as that part of the population that does not work and relies on others for the goods and services they consume. … In general, those categorized as dependents include the children and the elderly.

In summary, the age distribution of any given population can be grouped as follows:

i. 0 – 17 years – children (dependent population)
ii. 18 – 60 years – adults (working population or labour force)
iii. Above 60 years – old age (dependent population)

### Importance of Age Distribution of Population:

1. Knowledge of Dependent Population: The number of dependents can be determined easily through the age distribution of a population.
2. Size of Labour Force: With a good age distribution, the number of people working can easily be determined.
3. Determination of Government Budget: The age structure of the population assists the government in drawing up its budget.
4. It Determines the Birth and Death Rates: If there is a higher percentage of old people, the death rate will be higher, and vice versa.
5. It Determines the Standard of Living: The age structure of a given population reveals the income per capita and standard of living; while a large dependent population reduces income per capita and the standard of living, a high working population or labour force increases them.
6. It Determines the Nature of the Market: For example, if there is a high working population, the demand for transportation services, including sales of cars, will increase.
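Going back to the age brackets above, here is a small numeric illustration (made-up figures, not from the lesson) of how the dependent and working populations compare:

```python
# Hypothetical population counts, split by the lesson's age brackets
children = 40_000   # 0 - 17 years (dependent)
working  = 50_000   # 18 - 60 years (labour force)
elderly  = 10_000   # above 60 years (dependent)

dependents = children + elderly
dependency_ratio = 100 * dependents / working
print(f"Dependency ratio: {dependency_ratio:.0f} dependents per 100 workers")  # 100
```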
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8029457330703735, "perplexity": 1659.6212986838052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710953.78/warc/CC-MAIN-20221204004054-20221204034054-00179.warc.gz"}
https://possiblyphilosophy.wordpress.com/tag/unmeasurable-sets/
## Help! My credences are unmeasurable!

September 29, 2008

This is a brief follow up to the puzzle I posted a few days ago, and Kenny's very insightful post and the comments to his post, where he answers a lot of the pressing questions to do with the probability and measurability of various events. What I want to do here is just note a few probabilistic principles that get violated when you have unmeasurable credences (mostly a summary of what Kenny showed in the comments), and then say a few words about the use of the axiom of choice.

Reflection. Bas van Fraassen's reflection principle states, informally, that if you are certain that your future credence in $p$ will be $x$, then your current credence in $p$ should be $x$ (ignoring situations where you're certain you'll have a cognitive mishap, and the problems to do with self-locating propositions.) If $p_n$ says "I will guess the $n$'th coin toss from the end correctly", then Kenny shows, assuming translation invariance (that $Cr(p)=Cr(q)$ if $p$ can be gotten from $q$ by uniformly flipping the values of tosses indexed by a fixed set of naturals for each sequence in $q$), that once we have chosen a strategy, but before the coins are flipped, there will be an $n$ such that $Cr(p_n)$ is unmeasurable (so fix $n$ to be as such from now on.) However, given reasonable assumptions, no matter how the coins land before $n$, once you have learned that the coins have landed in such and such a way, $Cr(p_n)=1/2$. Thus you may be certain that you will have credence 1/2 in $p_n$ even though your credence in $p_n$ is currently unmeasurable.

Conglomerability. This says that if you have some propositions, $S$, which are pairwise incompatible but jointly exhaust the space, then if your credence in $p$ conditional on each element of $S$ is in an interval $[a, b]$, your unconditional credence in $p$ should be in that interval. Kenny points out that conglomerability, as stated, is violated here too. The unconditional probability of $p_n$ is unmeasurable, but the conditional probability of $p_n$ on each possible outcome of the sequence up to $n$ is 1/2. (In this case, it is perhaps best to think of the conditional credence as what your credence would be after you have learned the outcome of the sequence up to $n$.) You can generate similar puzzles in more familiar settings. For example, what should your credence be that a dart thrown at the real line will hit the Vitali set? Presumably it should be unmeasurable. However, conditional on each of the propositions $\mathbb{Q}+\alpha, \alpha \in \mathbb{R}$, which partition the reals, the probability should be zero – the probability of hitting exactly one point from countably many.

The Principal Principle. States, informally, that if you're certain that the objective chance of $p$ is $x$, then you should set your credence to $x$ (provided you don't have any 'inadmissible' evidence concerning $p$.) Intuitively, chances of simple physical scenarios like $p_n$ shouldn't be unmeasurable. This turns out to be not so obvious. It is first worth noting that the argument that your credence in $p_n$ is unmeasurable doesn't apply to the chance of $p_n$, because there are physically possible worlds that are doxastically impossible for you (i.e. worlds where you don't follow the chosen strategy at guess $n$.) Secondly, the chance of a proposition can change over time, so it could technically be unmeasurable before any coin tosses but 1/2 just before the nth coin toss; still, the way that chances evolve is governed by the physics of the situation – the Schrodinger equation, or what have you.
In the example we described we said nothing about the physics, but even so, it does seem like we can consistently stipulate that the chance of $p_n$ remains constant at 1/2. In such a scenario we would have a violation of the principal principle – before the tosses you can be certain that the chance of $p_n$ is 1/2, but your credence in $p_n$ is unmeasurable. (Of course, one could just take this to mean you can't really be certain you're going to follow a given strategy in a chancy universe – some things are beyond your control.) Anyway, after telling some people this puzzle, and the related hats puzzle, a lot of people seemed to think that it was the axiom of choice that's at fault. To evaluate that claim requires a lot of care, I think. Usually, to say the Axiom of Choice is false is to say that there are sets which cannot be well ordered, or something equivalent. And presumably this depends on which structure accurately fits the extension of sethood and membership, the extension of which is partially determined by the linguistic practices of set theorists (much like 'arthritis' and 'beech', the extension of 'membership' cannot be primarily determined by usage of the ordinary man on the street.) After all, there are many structures that satisfy even the relatively sophisticated axioms of first order ZF, only some of which satisfy the axiom of choice. If it is this question that is being asked, then the answer is almost certainly: yes, the axiom of choice is true. The structure with which set theorists, and more generally mathematicians, are concerned is one in which choice is true. (It'd be interesting to do a survey, but I think it is common practice in mathematics not to even mention that you've used choice in a proof. Note, it is a different question whether mathematicians think the axiom of choice is true – I've often found, especially when they realise they're talking to a "philosophy" student, that they'll suddenly become formalists.) But I find it very hard to see how this answer has *any* bearing on the puzzle here. What structure best fits mathematical practice seems to have no implications whatsoever for whether it is possible for an idealised agent to adopt a certain strategy. This has rather to do with the nature of possibility, not sets. What possible scenarios are concretely realisable? For example, can there be a concretely realised agent whose mental state encodes the choice function on the relevant partition of sequences? (Where a choice function here needn't be a set, but rather, quite literally, a physical arrangement of concrete objects.) Or another example: imagine a world with some number of epochs. In each epoch there is some number of people, all of them wearing green shirts. Is it possible that exactly one person in each epoch wears a red shirt instead? Surely the answer is yes; whether any person wears a red shirt or not is logically independent of whether the other people in the epoch wear a red shirt. A similar possibility can be guaranteed by Lewis's principle of recombination – it is possible to arbitrarily delete bits of worlds. If so, it should be possible that exactly one of these people exists in each epoch. Or, suppose you have two collections of objects, A and B. Is it possible to physically arrange these objects into pairs such that either every A-thing is in one of the pairs, or every B-thing is in one of the pairs? Provided that there are possible worlds large enough to contain big sets, it seems the answer again is yes.
However, all of these modal claims correspond to some kind of choice principle. Perhaps you'll disagree about whether all of these scenarios are metaphysically possible. For example, can there be spacetimes large enough to contain all these objects? I think there is a natural class of spacetimes that can contain arbitrarily many objects – those constructed from 'long lines' (if $\alpha$ is an ordinal, a long line is $\alpha \times [0, 1)$ under the lexicographic ordering, which behaves much like the positive reals, and can be used to construct large equivalents of $\mathbb{R}^4$.) Another route of justification might be the principle that if a proposition is mathematically consistent, in that it is true in some mathematical structure, then that structure should have a metaphysically possible isomorph. Since Choice is certainly regarded as mathematically consistent, if not true, one might have thought that the modal principles needed to get the puzzle off the ground should hold.
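As an aside (my addition, not part of the original post), the standard argument for why the Vitali set mentioned above admits no well-defined probability runs as follows:

```latex
% Why the Vitali set V is unmeasurable (sketch).
% V \subseteq [0,1) contains exactly one point from each coset \mathbb{Q}+\alpha.
% Its rational translates are pairwise disjoint and satisfy
[0,1) \;\subseteq\; \bigcup_{q \,\in\, \mathbb{Q}\cap[-1,1]} (V+q) \;\subseteq\; [-1,2).
% For a countably additive, translation-invariant probability P:
% if P(V) = 0, the middle set has probability 0, contradicting P([0,1)) > 0;
% if P(V) > 0, it has infinite probability, contradicting P([-1,2)) < \infty.
% Hence P(V) cannot be assigned: the dart-throwing credence is unmeasurable.
```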
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8669981360435486, "perplexity": 505.0265485534041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998879.63/warc/CC-MAIN-20190619003600-20190619025600-00251.warc.gz"}
https://slideplayer.com/slide/4287252/
# Boundary Layer Flow

Describes the transport phenomena near the surface for the case of fluid flowing past a solid object.

Drag force. The surrounding fluid exerts pressure forces and viscous forces on an object. The components of the resultant force acting on the object immersed in the fluid are the drag force and the lift force. (Figure not reproduced: flow past an object with free-stream velocity U, regions of negative and positive pressure p, and wall shear stress τw.) The drag force acts in the direction of the motion of the fluid relative to the object. The lift force acts normal to the flow direction. Both are influenced by the size and shape of the object and the Reynolds number of the flow.

Drag prediction. The drag force is due to the pressure and shear forces acting on the surface of the object. The tangential shear stresses acting on the object produce friction drag (or viscous drag). Friction drag is dominant in flow past a flat plate and is given by the surface shear stress times the area. Pressure or form drag results from variations in the normal pressure around the object. In order to predict the drag on an object correctly, we need to correctly predict the pressure field and the surface shear stress. This, in turn, requires correct treatment and prediction of boundary layers and flow separation.

Viscous boundary layer. An originally laminar flow is affected by the presence of the walls. Flow over a flat plate is visualized by introducing bubbles that follow the local fluid velocity. Most of the flow is unaffected by the presence of the plate. However, in the region closest to the wall, the velocity decreases to zero. The flow away from the walls can be treated as inviscid, and can sometimes be approximated as potential flow. The region near the wall where the viscous forces are of the same order as the inertial forces is termed the boundary layer. The distance over which the viscous forces have an effect is termed the boundary layer thickness. The thickness is a function of the ratio between the inertial forces and the viscous forces, i.e., the Reynolds number. As NRe increases, the thickness decreases.

Effect of viscosity. The layers closer to the wall start moving right away due to the no-slip boundary condition. The layers farther away from the wall start moving later. The distance from the wall that is affected by the motion is also called the viscous diffusion length. This distance increases as time goes on. Consider the following experiment where a viscous liquid is placed with an immiscible liquid in a container subjected to slow angular/rotational motion. The system shown on the left is performed with a higher viscosity fluid (100 mPa·s); on the right, a lower viscosity fluid (10 mPa·s) is shown. Notice the parabolic profile in the more viscous liquid as compared to the almost flat (uniform) profile in the less viscous liquid.

Moving plate boundary layer. Consider an impulsively started plate in a stagnant fluid. When the wall in contact with the still fluid suddenly starts to move, the layers of fluid close to the wall are dragged along while the layers farther away from the wall move with a lower velocity.
The viscous layer develops as a result of the no-slip boundary condition at the wall.

Flow separation. Flow separation occurs when the velocity at the wall is zero or negative, an inflection point exists in the velocity profile, and a positive (adverse) pressure gradient occurs in the direction of flow.

Boundary layer theory. Consider flow over a semi-infinite flat plate (the same applies to a finite flat plate) under steady-state conditions. Away from the plate, the inviscid flow assumption is valid; near the plate, viscosity effects are significant. (Several slides here consist of sketches, not reproduced: the solid boundary with the free-stream velocity far away, the no-slip condition giving zero velocity at the wall where friction cannot be neglected, and the boundary layer thickness δ defined where the velocity reaches 99% of the free-stream value.) Far from the surface, viscous forces are unimportant and inertial forces dominate; near the surface, viscous forces are comparable to inertial forces. In the very beginning, only a small region of fluid is affected by the presence of the plate. Let us say that when the velocity of the fluid is 99% of the free-stream (bulk) velocity, we assume that the effect of the plate is practically zero (i.e., the velocity from this distance from the plate surface up to y = infinity is that of the bulk velocity). The boundary layer thickness, denoted by δ, is then determined by this condition. When you go further down the plate (to the right), the effect of the plate is felt in the fluid over a larger δ, so δ increases as a function of x. Does it increase linearly? No:

Laminar boundary layer. The dashed line shows the progression of the increase of δ; the layer or zone between the plate and the dashed line constitutes the boundary layer. When the flow is laminar, δ increases with the square root of x (the distance downstream from the leading edge of the plate). The boundary layer Reynolds number is NRe,x = ρvx/μ, and the Blasius approximation gives δ ≈ 5x/√NRe,x (reconstructed; the slide's formula images were not preserved). The flow is laminar for NRe,x < 2 × 10⁵ and transitional for 2 × 10⁵ < NRe,x < 3 × 10⁶.

Turbulent boundary layer. At high enough fluid velocity, inertial forces dominate; viscous forces cannot prevent a wayward particle from motion, and chaotic flow ensues. For wall-bounded flows, turbulence initiates near the wall. In turbulent flow, the velocity component normal to the surface is much smaller than the velocity parallel to the surface, and the gradients of the flow across the layer are much greater than the gradients in the flow direction.

Eddies and vorticity. An eddy is a parcel of vorticity that typically forms within regions of velocity gradient. An eddy begins as a disturbance near the wall, followed by the formation of a vortex filament that later stretches into a horseshoe or hairpin vortex. Turbulence is comprised of irregular, chaotic, three-dimensional fluid motion, but contains coherent structures. Turbulence occurs at high Reynolds numbers, where instabilities give way to chaotic motion. Turbulence is comprised of many scales of eddies, which dissipate energy and momentum through a series of scale ranges. The largest eddies contain the bulk of the kinetic energy and break up by inertial forces.
The smallest eddies contain the bulk of the vorticity and dissipate by viscosity into heat. Turbulent flows are not only dissipative, but also dispersive through the advection mechanism.

Dimensional Analysis: the Buckingham Pi theorem. The theorem tells how many dimensionless groups (Π) may define a system. Theorem: if n variables are involved in a problem and these are expressed using k base dimensions, then (n − k) dimensionless groups are required to characterize the system/problem.

Example: in describing the motion of a pendulum, the variables are time [T], length [L], gravity [L/T²], and mass [M]. Therefore n = 4 and k = 3, so only one (4 − 3 = 1) dimensionless group is required to describe the system. But how do we derive it? How to find the dimensionless group: let a, b, c and d be the exponents of t, L, g and m in the group, respectively, so Π = tᵃ Lᵇ gᶜ mᵈ. In terms of dimensions (reconstructed; the slide's equation images were not preserved): [Π] = Tᵃ Lᵇ (L T⁻²)ᶜ Mᵈ = Mᵈ L^(b+c) T^(a−2c). Since the group is dimensionless: d = 0, b + c = 0, a − 2c = 0. Arbitrarily choosing a = 1 gives c = 1/2 and b = −1/2; therefore Π = t √(g/L).

Example: drag on a sphere. Drag depends on FOUR parameters: sphere size (D), fluid speed (v), fluid density (ρ), fluid viscosity (μ). It is difficult to know how to set up experiments to determine dependencies and how to present results (four graphs?).

Step 1: List all the parameters involved; let n be the number of parameters. For drag on a sphere: F, v, D, ρ, and μ (n = 5). Step 2: Select a set of primary dimensions; let k be the number of primary dimensions. For this example: M (kg), L (m), t (sec); thus k = 3. Step 3: Determine the number of dimensionless groups required to define the system: n − k = 2. Step 4: Select a set of k dimensional parameters that includes all the primary dimensions, for example ρ, v, D. Adding F, and noting that the exponent of the drag force F must be 1 since it must always appear in this group, gives (reconstructed) Π₁ = F / (ρ v² D²). For the next group, select ρ, v, D and μ; setting the exponent of μ to 1 gives Π₂ = μ / (ρ v D). With these two groups there is only one dependent and one independent variable, so it is easy to set up experiments to determine the dependency, and easy to present the results (one graph).
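A small numerical illustration of the laminar-slide scaling (my sketch; the δ ≈ 5x/√Re_x coefficient is the standard Blasius flat-plate result, stated here because the slide's formula image was lost):

```python
def blasius_delta(x, v, nu):
    """Laminar flat-plate boundary layer thickness: delta = 5 x / sqrt(Re_x)."""
    re_x = v * x / nu          # local Reynolds number
    return 5.0 * x / re_x**0.5

# Air-like example values (assumed): v = 10 m/s, nu = 1.5e-5 m^2/s
for x in (0.01, 0.05, 0.1):
    d = blasius_delta(x, 10, 1.5e-5)
    print(f"x = {x:5.2f} m  Re_x = {10*x/1.5e-5:9.0f}  delta = {d*1000:.2f} mm")
# delta grows like sqrt(x), as the slide states; Re_x stays below 2e5 (laminar)
```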
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8688332438468933, "perplexity": 748.3119262831807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371813538.73/warc/CC-MAIN-20200408104113-20200408134613-00085.warc.gz"}
https://digitalscholarship.unlv.edu/physastr_fac_articles/148/
## Physics & Astronomy Faculty Publications #### Title GRB 130427A Afterglow: A Test for GRB Models #### Document Type Conference Proceeding #### Abstract Gamma-ray Burst 130427A had the largest fluence for almost 30 years. With an isotropic energy output of 8.5×1053 erg and redshift of 0.34, it combined a very high energy release with a relative proximity to Earth in an unprecedented fashion. Sensitive X-ray facilities such as {\it XMM-Newton} and {\it Chandra} detected the afterglow of this event for a record-breaking baseline of 90 Ms. We show the X-ray light curve of GRB 130427A of this event over such an interval. The light curve shows an unbroken power law decay with a slope of α=1.31 over more than three decades in time. In this presentation, we investigate the consequences of this result for the scenarios proposed to interpret GRB 130427A and the implications in the context of the forward shock model (jet opening angle, energetics, surrounding medium). We also remark the chance of extending GRB afterglow observations for several hundreds of Ms with {\it Athena}. #### Keywords Cosmology; Fighter aircraft; X-rays #### Disciplines Astrophysics and Astronomy application/pdf 681 Kb
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9649448394775391, "perplexity": 4190.419795593765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864848.47/warc/CC-MAIN-20180623000334-20180623020334-00392.warc.gz"}
http://math.stackexchange.com/questions/420018/proving-lim-limits-n-to-infty-left1-fracxn-rightn-textex
# Proving $\lim \limits_{n\to +\infty } \left(1+\frac{x}{n}\right)^n=\text{e}^x$.

I know that $e^x=\lim \limits_{n\to+\infty }{\left(1+\frac{x}{n}\right)^n}$, but I've never seen its proof. So I tried to prove it using $\exp(\ln x)=\ln(\exp(x))=x$. Here is what I've tried so far: $$\left(1+\frac{x}{n}\right) ^n=e^{n\ln(1+\frac{x}{n})}$$ $$\text{I'll now study just } {n\ln\left(1+\frac{x}{n}\right)}.$$$$\text{If this function has the line }y=x \text{ as oblique asymptote, then the equality is proven.}$$ $$n\ln\left(1+\frac{x}{n}\right) = n\ln\left(\frac{n+x}{n}\right)$$ $$=n[\ln(n+x)-\ln(n)]$$ $$=n\left[\int_1^{n}\frac{dt}{t}+\int_{n}^{x}\frac{dt}{t}-\int_1^{n}\frac{dt}{t}\right]$$ $$=n[\ln(x)-\ln(n)]$$ But I just don't know how to show that this expression has the oblique asymptote $y=x$. I've thought that if there is an oblique asymptote as $n$ goes to infinity, then for a huge $n$ we have: $$\ln\left(1+\frac{x}{n}\right)\approx \frac{x}{n}\approx0$$ which looks correct, but we could have any other function $f(x)$ with $\ln\left(1+\frac{x}{n}\right)\approx\frac{f(x)}{n}\approx 0$, which doesn't prove the oblique asymptote because $x$ is constant. So how can I prove $e^x=\lim \limits_{n\to +\infty } \left(1+\frac{x}{n}\right)^n$? And where did I go wrong? - What is your definition of $e^x$? –  Ink Jun 14 '13 at 6:39 What is your definition of $\exp (x)$? –  Git Gud Jun 14 '13 at 6:40 @moray95: Your integral should be $\int_1^{nx}\frac{dt}{t}+ \int_{nx}^{nx+1} \frac{dt}{t}- \int_1^n \frac{dt}{t}$. –  Seirios Jun 14 '13 at 6:41 I'm using the definition $\exp(\ln(x))=\ln(\exp(x))=x$ –  moray95 Jun 14 '13 at 6:50 Are you defining $\exp$ as the inverse of $\ln$, is that it? –  Git Gud Jun 14 '13 at 6:53 If you're allowed to use Taylor (power) expansions this is pretty simple: $$n\log\left(1+\frac xn\right)=n\sum_{k=1}^\infty (-1)^{k+1}\frac{x^k}{k\,n^k}=n\left(\frac xn+\mathcal O\left(\frac1{n^2}\right)\right)=$$ $$=x+\mathcal O\left(\frac1n\right)\xrightarrow[n\to\infty]{}x$$ - Great response but I'd like to prove it without the power series... –  moray95 Jun 14 '13 at 15:51 Ok...did you read my comment under your question? Because you haven't yet corrected your work there... –  DonAntonio Jun 14 '13 at 15:53 Edited my question with your remark but still, I still have the same thing at the end... –  moray95 Jun 14 '13 at 17:07 I don't know if it helps you, it is just a suggestion: if you know the fundamental limit $$\lim_{n\to \infty}(1+\frac{1}{n})^n=e$$ then for $$\lim_{n\to \infty}(1+\frac{x}{n})^n,$$ substituting $k=\frac{n}{x}$ we get $$\lim_{n\to \infty}(1+\frac{1}{k})^{kx}= (\lim_{k\to \infty}(1+\frac{1}{k})^{k})^x =e^x$$ - I've thought about something like that but then I'd first need to prove $\lim_{n\to \infty}(1+\frac{1}{n})^n=e$. –  moray95 Jun 15 '13 at 10:23 @moray95 This is often taken as the definition of $e$. The only other definition I know of is the power series, which you can show is equal to this expression using binomial expansion. –  User-33433 Jan 7 at 20:44 Let's start with the "where did I go wrong" part of the question. Where you wrote $$=n\left[\int_1^{n}\frac{dt}{t}+\int_{n}^{x}\frac{dt}{t}-\int_1^{n}\frac{dt}{t}\right]$$ you should have written $$=n\left[\int_1^{n}\frac{dt}{t}+\int_{n}^{n+x}\frac{dt}{t}-\int_1^{n}\frac{dt}{t}\right]$$ Note, the correct upper limit in the middle integration is $n+x$, not just $x$. Otherwise you were on the right track.
The corrected integral leaves us with $$n\ln\left(1+\frac{x}{n}\right)=n\int_n^{n+x}{dt\over t}$$ Now as soon as $n\gt |x|$, we have $${1\over n+|x|}\le{1\over t}\le{1\over n-|x|}$$ on the interval $t\in[n-|x|,n+|x|]$, which certainly includes the interval between $n$ and $n+x$. If you are careful with the minus signs, you can conclude that $${nx\over n+x}\le n\int_n^{n+x}{dt\over t}\le{nx\over n-x}$$ and it now follows easily that $$\lim_{n\to\infty}n\ln\left(1+\frac{x}{n}\right)=x$$ - Here's a way to do it with the integral: \begin{align*} n\log{\left(1+{x\over n}\right)} & = n\int_1^{1+{x/ n}}{dt\over t} \end{align*} and by the simplest conceivable estimate \begin{align*} {x\over 1+x/n} = n{x/n\over1+{x/ n}}\leq n\int_1^{1+{x/ n}}{dt\over t} \leq n{x\over n} = x. \end{align*} (The inequalities hold when $x$ is negative too, provided the expressions are defined.) Now make $n\to\infty$ and apply the squeeze theorem. - $f(x) = e^x$ is the only solution to the differential equation $\dfrac{dy}{dx} = y$ with $f(0)=1$. To approximate $f(a)$, we can use Euler's method on the interval $[0,a]$ with $n$ subintervals. $f(0) = 1, f'(0)=1 \implies f(\frac{a}{n}) \approx 1+\frac{a}{n}$ $f(\frac{a}{n}) \approx 1+\frac{a}{n}, f'(\frac{a}{n}) \approx 1+\frac{a}{n} \implies f(\frac{2a}{n}) \approx 1+\frac{a}{n} + \frac{a}{n}(1+\frac{a}{n}) = (1+\frac{a}{n})^2$ . . . $f(a) \approx (1+\frac{a}{n})^n$ Since Euler's method actually converges in the limit, we have $$e^a = \lim_{n \to \infty} (1+\frac{a}{n})^n$$ - p.s. If you think long and hard about this same proof, only applied to the line segment from $0$ to $i\pi$ in the complex plane, you will understand why $e^{i\pi}=-1$ –  Steven Gubkin Jan 7 at 19:54 Start with the functions $$f_n(x) = \left(1 + \frac{x}{n}\right)^n.$$ Then $$f'_n(x) = \left(1 + \frac{x}{n}\right)^{n-1} = \left(1 + \frac{x}{n}\right)^{-1}f_n(x)$$ If we take the limit and call $f(x) = \lim_{n\rightarrow\infty}f_n(x)$, then $$f'(x) = f(x)$$ and $f(0) = 1$. This first-order ODE has the unique solution $f(x) = e^x$. - If you already know the exponential function, then you probably also know the inequality $e^x\ge 1+x$. Then also $$e^x=\frac1{e^{-x}}\le\frac1{1-x}=(1+x)\frac1{1-x^2}$$ Now $e^x=(e^{\frac xn})^n$, so for $n>|x|$ $$\left(1+\tfrac xn\right)^n \le e^x \le \left(1+\tfrac{x}{n}\right)^n\frac1{\left(1-\tfrac{x^2}{n^2}\right)^n}\le\left(1+\tfrac{x}{n}\right)^n\frac1{1-\tfrac{x^2}{n}}$$
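A quick numerical sanity check of the limit (my addition, not from the thread):

```python
import math

x = 2.0
for n in (10, 1_000, 100_000, 10_000_000):
    print(n, (1 + x / n) ** n)
print("e^x =", math.exp(x))  # the powers approach e^2 ≈ 7.389 as n grows
```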
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9797372817993164, "perplexity": 366.46836379525024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678692841/warc/CC-MAIN-20140313024452-00049-ip-10-183-142-35.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/387261/solution-to-geodesics-on-a-2-sphere
# Solution to Geodesics on a 2-sphere

I have been tasked with finding particle trajectories for a point mass travelling along the surface of the 2-sphere: $t=t(\tau)$, $\theta=\theta(\tau)$ and $\phi=\phi(\tau)$. My supervisor gave me the spacetime metric $$ds^2 = -dt^2 +R^2(d\theta^2 +\sin^2\theta\, d\phi^2).$$ I am finding timelike geodesics, $1 = -g_{\mu\nu}\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau}$. Here is what I have so far: $t = E\tau$, $\dot{t} = E$, and $\sin^2\theta\,\frac{d\phi}{d\tau} = \frac{k}{R}$, where $\frac{k}{R}$ is some dimensionless constant. I substituted $\dot{\phi}$ and $\dot{t}$ back into the proper time gauge to get $$\dot{\theta} = \pm\frac{1}{R\sin\theta}\sqrt{E^2\sin^2\theta-k^2-\sin^2\theta}.$$ I attempted the substitution $u=\cos\theta$ to eliminate the $\sin\theta$ and hopefully get some expression that I could integrate to obtain an inverse trigonometric function. I know that the geodesics should describe great circles along the surface of the sphere, but I can't solve this final equation.

• May I suggest you look at the $k=0$ solutions first? This should result in great circles through the poles... – mmeent Feb 19 '18 at 9:04
• Are you sure your differential equations are correct? What happens if you set $E=1$? – octonion May 11 '18 at 18:46

Your manifold is the product $(\Bbb R, -{\rm d}t^2)\times \Bbb S^2(R)$. By Proposition $38$ on page $208$ (with $f=1$) of O'Neill's Semi-Riemannian Geometry With Applications to Relativity, a curve $\gamma = (\alpha, \beta)$ there is a geodesic if and only if $\alpha$ and $\beta$ are geodesics in $\Bbb R$ and $\Bbb S^2(R)$, respectively. It is then clear that $\alpha$ must be $\alpha(t) = \pm t+a$ for some $a \in \Bbb R$, and $\beta$ must parametrize a great circle.

As for checking that geodesics of the sphere are great circles, there are better ways to do it instead of solving those differential equations. For instance, one can argue that given a tangent vector $v$ at a point $p$, there is a unique maximal geodesic starting at $p$ with velocity $v$, compute directly that great circles are geodesics, and finally note that, given $v$ and $p$, there is a great circle passing through $p$ with direction $v$.
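To complement the answer, here is a small numerical sketch of my own (not part of the question or answer): it integrates the spatial geodesic equations that follow from the Lagrangian $\dot\theta^2+\sin^2\theta\,\dot\phi^2$, namely $\ddot\theta=\sin\theta\cos\theta\,\dot\phi^2$ and $\ddot\phi=-2\cot\theta\,\dot\theta\dot\phi$, and then verifies that the resulting curve lies on a great circle. The initial conditions are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geodesic equations on the unit 2-sphere (Euler-Lagrange equations of
# L = theta'^2 + sin(theta)^2 phi'^2).
def geodesic(tau, y):
    th, phi, dth, dphi = y
    return [dth, dphi,
            np.sin(th) * np.cos(th) * dphi**2,
            -2.0 * dth * dphi / np.tan(th)]

# Start on the equator with a tilted velocity; this orbit stays off the poles.
y0 = [np.pi / 2, 0.0, 0.3, 1.0]
sol = solve_ivp(geodesic, (0.0, 10.0), y0, max_step=0.01)
th, phi = sol.y[0], sol.y[1]

# A great circle is the intersection of the sphere with a plane through the
# origin; that plane is spanned by the initial position r0 and velocity v0.
r = np.array([np.sin(th) * np.cos(phi), np.sin(th) * np.sin(phi), np.cos(th)])
r0 = np.array([1.0, 0.0, 0.0])        # r at theta = pi/2, phi = 0
v0 = np.array([0.0, 1.0, -0.3])       # dr/dtau at tau = 0 for the chosen y0
n = np.cross(r0, v0)
print("max |n . r(tau)| =", np.max(np.abs(n @ r)))  # ~0 up to solver error
```

The normal-plane residual stays at the solver's error level, which is the numerical counterpart of the great-circle statement in the answer.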
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8722535967826843, "perplexity": 80.00031293773401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257601.8/warc/CC-MAIN-20190524084432-20190524110432-00293.warc.gz"}
http://cmcampos.xyz/publications/cft-t3
# Classical field theories of first order and Lagrangian submanifolds of premultisymplectic manifolds

A description of classical field theories of first order in terms of Lagrangian submanifolds of premultisymplectic manifolds is presented. For this purpose, a Tulczyjew triple associated with a fibration is discussed. The triple is adapted to the extended Hamiltonian formalism. Using this triple, we prove that the Euler-Lagrange and Hamilton-De Donder-Weyl equations are the local equations defining Lagrangian submanifolds of a premultisymplectic manifold.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9487369060516357, "perplexity": 754.1635465931422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178377821.94/warc/CC-MAIN-20210307135518-20210307165518-00383.warc.gz"}
http://www.physicsforums.com/showpost.php?p=2646592&postcount=4
Thread: Christoffel symbols

The ordinary derivative of a tensor is NOT a tensor. In order to make it one, the "covariant derivative", you have to subtract off the Christoffel symbols; or, to put it another way, the Christoffel symbols are the covariant derivative minus the ordinary derivative.
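A concrete way to see those correction terms is to compute them from a metric. Below is a small sympy sketch of my own (not from the thread), using the standard formula Γ^l_mn = (1/2) g^{ls} (∂_m g_{sn} + ∂_n g_{sm} - ∂_s g_{mn}), here for the round 2-sphere:

```python
import sympy as sp

# Coordinates and the round 2-sphere metric g = diag(R^2, R^2 sin^2(theta)).
theta, phi, R = sp.symbols('theta phi R', positive=True)
x = [theta, phi]
g = sp.Matrix([[R**2, 0], [0, R**2 * sp.sin(theta)**2]])
ginv = g.inv()

def christoffel(l, m, n):
    """Christoffel symbols of the second kind, computed from the metric."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[l, s]
        * (sp.diff(g[s, n], x[m]) + sp.diff(g[s, m], x[n]) - sp.diff(g[m, n], x[s]))
        for s in range(2)))

for l in range(2):
    for m in range(2):
        for n in range(2):
            G = christoffel(l, m, n)
            if G != 0:
                print(f"Gamma^{x[l]}_({x[m]},{x[n]}) =", G)
```

This prints the familiar nonzero symbols, Gamma^theta_(phi,phi) = -sin(theta)cos(theta) and Gamma^phi_(theta,phi) = Gamma^phi_(phi,theta) = cos(theta)/sin(theta); these are exactly the terms that turn the ordinary derivative into the covariant one.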
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132629632949829, "perplexity": 981.3325466610044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.aanda.org/articles/aa/full_html/2009/23/aa11351-08/aa11351-08.html
A&A, Volume 500, Number 2, June III 2009, 763-768. Extragalactic astronomy. https://doi.org/10.1051/0004-6361/200811351. 29 April 2009.

## Determination of the cosmic far-infrared background level with the ISOPHOT instrument (Research Note)

M. Juvela1 - K. Mattila1 - D. Lemke2 - U. Klaas2 - C. Leinert2 - Cs. Kiss3

1 - Observatory, University of Helsinki, PO Box 14, 00014 Helsinki, Finland
2 - Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany
3 - Konkoly Observatory of the Hungarian Academy of Sciences, PO Box 67, 1525 Budapest, Hungary

Received 15 November 2008 / Accepted 8 April 2009

Abstract

Context. The cosmic infrared background (CIRB) consists mainly of the integrated light of distant galaxies. In the far-infrared the current estimates of its surface brightness are based on the measurements of the COBE satellite. Independent confirmation of these results is still needed from other instruments.

Aims. In this paper we derive estimates of the far-infrared CIRB using measurements made with the ISOPHOT instrument aboard the ISO satellite. The results are used to seek further confirmation of the CIRB levels that have been derived by various groups using the COBE data.

Methods. We study three regions of very low cirrus emission. The surface brightness observed with the ISOPHOT instrument at 90, 150, and 180 µm is correlated with hydrogen 21 cm line data from the Effelsberg radio telescope. Extrapolation to zero hydrogen column density gives an estimate for the sum of the extragalactic signal plus zodiacal light. The zodiacal light is subtracted using ISOPHOT data at shorter wavelengths. Thus, the resulting estimate of the far-infrared CIRB is based on ISO measurements alone.

Results. In the range 150 to 180 µm, we obtain a CIRB value of 1.08 ± 0.32 ± 0.30 MJy sr-1, quoting statistical and systematic errors separately. In the 90 µm band, we obtain a 2σ upper limit of 2.3 MJy sr-1.

Conclusions. The estimates derived from ISOPHOT far-infrared maps are consistent with the earlier COBE results.

Key words: galaxies: evolution - cosmology: observations - infrared: galaxies

## 1 Introduction

The extragalactic background light (EBL) consists of the integrated light of all galaxies along the line of sight with possible additional contributions from intergalactic gas and dust and hypothetical decaying relic particles. It plays an important role in cosmological studies because most of the gravitational and fusion energy released in the universe since the recombination epoch is expected to reside in the EBL. Measurements of the cosmic infrared background, CIRB, help to address some central, but still largely open astrophysical problems, including the early evolution of galaxies, and the entire star formation history of the universe. An important issue is the balance between the UV-optical-NIR and the far-infrared backgrounds; the fraction of optical radiation lost by dust obscuration re-appears as dust emission at longer wavelengths. The absolute level of the CIRB, the fluctuations in the CIRB surface brightness, and the resolved bright end of the distribution of galaxies contributing to the CIRB all provide strong constraints on the models of galaxy evolution through different epochs. For reviews, see Hauser & Dwek (2001) and Lagache et al. (2005). The full analysis of the data from the DIRBE (Hauser et al. 1998; Schlegel et al. 1998) and FIRAS (Fixsen et al. 1997)
experiments indicated a CIRB at a surprisingly high level of 1 MJy sr-1 between 140 and 240 µm. Preliminary results had been obtained by Puget et al. (1996). Lagache et al. (1999) claimed the detection of a component of Galactic dust emission associated with the warm ionised medium. The removal of this component led to a CIRB level of 0.7 MJy sr-1 at 140 µm. Because the FIR CIRB is important for cosmology these results need to be confirmed by independent measurements. The ISOPHOT instrument (Lemke et al. 1996), flown on the cryogenic, actively cooled ISO satellite, provided the capabilities for this. The ISOPHOT observation technique was different from COBE: (1) with its relatively small f.o.v. ISOPHOT was capable of looking into the darkest spots between the cirrus clouds; (2) ISOPHOT had high sensitivity in the important FIR window at 120-200 µm; (3) with its good spatial and multi-wavelength FIR spectral sampling ISOPHOT gave an improved possibility of separating and eliminating the emission of Galactic cirrus.

The primary goal of the ISOPHOT EBL project is the determination of the absolute level of the FIR CIRB. The other goals are the measurement of the spatial CIRB fluctuations and the detection of the bright end of the FIR point source distribution. The bright end of the galaxy population contributing to the FIR CIRB signal was analysed by Juvela et al. (2000).

## 2 The method

We examine three regions of low cirrus emission that were mapped with ISOPHOT at 90, 150, and 180 µm. Because of the high sensitivity of the ISOPHOT FIR detectors, we can directly correlate HI with ISOPHOT measurements for each FIR band separately. In the case of DIRBE, the original analysis performed by the DIRBE team used 100 µm as an ISM template and, therefore, the accuracy of the CIRB detections at 140 µm and 240 µm also depended on the systematic uncertainties of the 100 µm data (Hauser et al. 1998; Arendt et al. 1998). The HI lines are optically thin and their intensity traces the amount of neutral hydrogen along the line of sight. The level of FIR emission associated with the ionised medium is still uncertain and we will consider the possible effects later in the analysis.

As a first step, a relation between the HI line area and the FIR surface brightness is obtained. The relation depends on the gas-to-dust ratio, grain properties, and the radiation field illuminating the interstellar medium (ISM) along the line of sight. No significant variations have been observed in the gas-to-dust ratio apart from those associated with large-scale metallicity variations. Similarly, because of the diffuse nature of the HI clouds, no small-scale changes in the intrinsic dust properties or dust temperature are expected. Under these conditions the FIR signal should have a linear dependence on the HI column density. Because each field is considered individually, possible differences in the HI-FIR relation towards different regions can and will be taken into account.

For each field, an extrapolation to zero HI intensity eliminates emission associated with the neutral ISM (for details, see Sect. 4.1). The remaining signal is equal to the sum of the zodiacal light (ZL) and the CIRB. These components are not removed because they are uncorrelated with the HI emission. Furthermore, the ZL has a smooth distribution and remains practically constant within each of the areas covered by individual ISOPHOT maps (see Ábrahám et al. 1997). If the ZL level is known, the absolute value of the CIRB can be obtained.
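The extrapolation step lends itself to a compact sketch. The following is purely illustrative, with synthetic numbers rather than the paper's data or pipeline; it uses scipy's orthogonal distance regression which, like the fits described in Sect. 4.1 below, allows for uncertainties in both coordinates:

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(1)

# Synthetic stand-in data: FIR = a * W(HI) + b, where b = ZL + CIRB.
w_true = rng.uniform(50.0, 250.0, 80)              # HI line areas [K km/s]
sx = np.full(80, 2.0)                              # HI uncertainties
sy = np.full(80, 0.2)                              # FIR uncertainties [MJy/sr]
w = w_true + rng.normal(0.0, sx)
fir = 0.029 * w_true + 3.0 + rng.normal(0.0, sy)   # invented slope and offset

linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
fit = odr.ODR(odr.RealData(w, fir, sx=sx, sy=sy), linear, beta0=[0.03, 2.0]).run()
print(f"slope  = {fit.beta[0]:.4f} +- {fit.sd_beta[0]:.4f}")
print(f"offset = {fit.beta[1]:.3f} +- {fit.sd_beta[1]:.3f}  (ZL + CIRB)")
```

Subtracting an independent ZL estimate from the fitted offset then yields the CIRB.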
The ZL estimation is described in detail in Sect. 4.2.

## 3 Observations

We study three low surface brightness fields that are labelled NGP, EBL22, and EBL26. The field NGP is located at the North Galactic Pole, the field EBL22 is similarly at a high ecliptic latitude, while the third one, EBL26, lies close to the ecliptic plane (see Table B.3). EBL26 was selected as a field with a high ZL level with the purpose of estimating the ZL contribution at the different wavelengths observed in this project.

The observations of the hydrogen 21 cm line were made with the Effelsberg radio telescope in May 2002. The telescope beam has a FWHM of 9 arcmin. The areas mapped with the ISOPHOT instrument were covered with pointings at steps of FWHM/2. The stray radiation was removed with a program developed by Kalberla (see Kalberla 1982; Hartmann et al. 1996; Kalberla et al. 2005). For details of the observations of the EBL fields and the associated data reduction, see Appendix B. The principles of ISOPHOT data reduction and calibration of surface brightness measurements are explained in Appendix A.

## 4 Analysis and results

### 4.1 Subtraction of Galactic cirrus emission using HI data

The FIR surface brightness was correlated at each observed wavelength with the integrated line area of the HI spectra. At each observed HI position the average FIR signal was calculated using spatial weighting with a Gaussian with FWHM equal to 9 arcmin. Only those pointings are used where the centre of the Effelsberg beam falls inside the FIR map. In addition to the observational uncertainties, each data point was weighted in direct proportion to the fraction of the HI FWHM beam that was covered by FIR observations. Therefore, the data close to FIR map boundaries get lower weight in the following analysis.

The obtained correlations are shown in Fig. 1. For FIR observations the plotted error bars are based on the statistical uncertainties reported by the PIA. The figures include linear fits that take into account the estimated uncertainties in both FIR and HI data. The slopes and zero points of the fits are given in Table 1. In field EBL26 there is a clear break in the relation above W(HI) = 200 K km s-1 that may indicate the presence of molecular gas. There is also one fairly bright galaxy that is located in the region of higher cirrus emission and may have affected the correlation. Therefore, in the field EBL26 the linear fitting was carried out using only data below 200 K km s-1. In the other fields the hydrogen column densities are in general smaller than this, so that the fraction of molecular gas can be expected to be insignificant. The offsets thus obtained correspond to an extrapolation to zero HI column density. To the extent to which the remaining contributions of ionised and molecular gas can be ignored (see below), the values correspond to the sum of the CIRB and the zodiacal light.

Figure 1: FIR surface brightness as a function of HI line area W(HI) in the three EBL fields, EBL22 (left), EBL26 (middle), and NGP (right). Each point corresponds to one pointing of the HI observations. The uncertainties in the HI line area are estimated based on the noise in velocity channels outside detected HI emission. For each HI spectrum the corresponding average FIR signal has been calculated using a Gaussian with FWHM = 9 arcmin for weighting. The corresponding error bars are based on error estimates reported by PIA, from which the formal uncertainties of the weighted mean are calculated.
The long dashed line shows the result of a linear fit that takes into account the uncertainties in both variables. The dotted lines indicate 67% confidence intervals that are obtained with the bootstrap method.

### 4.2 Subtraction of the zodiacal light

The zodiacal light (ZL) emission is assumed to have a pure black body spectrum. The colour temperature of the spectrum depends on the ecliptic coordinates of the source and the solar elongation at the time of the observations. Leinert et al. (2002) have studied the variations of mid-infrared ZL spectra over the sky using a set of observations made with the ISOPHOT spectrometer. We use their results to fix the colour temperature of the ZL spectra.

The absolute intensity of the ZL emission in the FIR is estimated with the help of shorter wavelength ISOPHOT observations made using the ISOPHOT P detector in the absolute photometry observing mode PHT-05 (Laureijs et al. 2003). Because the observations were made in regions of low cirrus emission, the mid-infrared signal is completely dominated by the ZL. The measurements were carried out close to the larger raster maps, in terms of both time and position. Therefore, they give a good estimate for the zodiacal light emission present in the raster maps. FIR absolute photometry measurements were made at the same time and at the same positions. These are used to make a correction for the contribution that the interstellar dust has, conversely, on the measured mid-infrared values. The complete list of observations is given in Table B.2.

Table 1: Parameters of linear fits of FIR surface brightness versus the HI line area.

The derived ZL values obtained from the fits (ZL+cirrus) are listed in Table 2. The values are given at the nominal wavelengths assuming a spectrum νIν = const. The uncertainties were estimated based on the quality of the fits (see Appendix D). In the fields EBL26 and NGP, because the error estimate of each of the two measurements is itself uncertain, we conservatively take the average of the two error estimates as the uncertainty of the mean.

Table 2: The estimated zodiacal light emission.

### 4.3 Estimated CIRB levels and their uncertainties

Table 3 lists the CIRB levels that are estimated based on the linear fits between FIR and HI data (Table 1) and the zodiacal light values of Table 2. The uncertainties are obtained by adding in quadrature the estimated errors of the offsets from Table 1, the errors of the zodiacal light values from Table 2, and the error resulting from the dark current subtraction (see Appendix C):

σ²(CIRB) = σ²(offset) + σ²(ZL) + σ²(dark).    (1)

The uncertainty due to the dark current is estimated to be 0.25-0.30 MJy sr-1 and it is likely to be the main factor affecting the uncertainty of the zero point of the FIR observations (see Appendix C).

The results obtained for the three individual fields can be combined, deriving our final estimates for the CIRB and its uncertainty. In the case of the field EBL26 the values are relatively imprecise because of the high ZL level. This uncertainty is reflected in the error estimates. Combining the results we get the average values -0.54 ± 0.65 MJy sr-1, 0.83 ± 0.41 MJy sr-1, and 1.26 ± 0.37 MJy sr-1 at 90 µm, 150 µm, and 180 µm, respectively, as given in the last line of Table 3. The 90 µm values are very low, because in both the EBL22 and EBL26 fields negative values are obtained. In the case of EBL26 the negative value is not surprising, because the expected CIRB level is only a small fraction of the zodiacal light, which itself has a considerable statistical uncertainty.
Therefore, the result is sensitive also to any systematic errors of the ZL estimates. Apart from the results at 90 µm, the variation between fields is only slightly larger than expected on the basis of the quoted error estimates. At 150 µm a negative value is obtained for EBL26 which, nevertheless, is less than 2σ below the highest values. At 90 µm we can derive only an upper limit for the EBL. The 150 µm and 180 µm bands are close to each other and the CIRB values should be very similar. Therefore, based on the three fields and the two frequency bands, we can calculate, as a weighted average, an estimate for the CIRB in the range 150-180 µm. The result is 1.08 ± 0.32 MJy sr-1. The result would not change significantly (less than 1σ) even if either EBL22 or EBL26 were omitted from the analysis.

Table 3: Estimated level of the CIRB for the individual fields.

### 4.4 The reliability of the CIRB values

In addition to the statistical uncertainties, the results are affected by systematic errors. The CIRB estimates are not affected by the HI antenna temperature scale. However, the presence of unsubtracted stray radiation could affect the zero point of the HI data and, thus, lower the CIRB values. We cannot directly estimate the presence of residual stray radiation in the HI data. However, in Sect. B.3 we compare some of our HI spectra with data from the Leiden/Dwingeloo survey (Hartmann & Burton 1997; Kalberla et al. 2005) and we find that the residual stray radiation is likely to be less than 4 K km s-1 which, assuming a slope of 29 × 10^-3 MJy sr-1 (K km s-1)^-1 (see Table 1), corresponds to 0.12 MJy sr-1. In this case HI stray radiation would not be a major source of error.

The relative calibration of the ISOPHOT FIR cameras and the P-detectors directly affects the estimated FIR ZL levels and is probably the most important source of systematic errors. According to the ISOPHOT Handbook (Laureijs et al. 2003) the absolute accuracy of the C100 and C200 cameras and the P-detectors is typically of the order of 20%. If there were a difference of 10% in the relative calibration of the MIR and FIR bands, this would cause a similar percentage error in the ZL estimates. The fact that we obtained negative CIRB values at 90 µm, especially when the absolute level of the ZL is high, suggests that the FIR ZL levels may have been overestimated. The effect of an error of 10% would range from 2 MJy sr-1 at 90 µm in the field EBL26 to 0.2 MJy sr-1 at 150-180 µm in the field NGP. Taking into account the relative weighting of the three fields, a 10% error in the ZL corresponds to an uncertainty of 0.3 MJy sr-1 in the 150-180 µm CIRB estimate. Assuming a systematic uncertainty of this magnitude, the CIRB estimate can be written as 1.08 ± 0.32 ± 0.30 MJy sr-1, where the first error estimate refers to statistical and the second to systematic uncertainties.

At 90 µm the negative value obtained for EBL26 carries very little weight, and an additional 10% systematic uncertainty in the ZL would correspond to an additional uncertainty of 2 MJy sr-1. In EBL22 the CIRB value was -2.16 ± 1.04 MJy sr-1, and a 10% systematic error in the ZL values would correspond to 0.77 MJy sr-1. The CIRB value is 2σ below zero and suggests that the ZL values may contain a systematic error of 10-20% that has reduced the obtained CIRB values. In the field NGP the 90 µm CIRB estimate was 0.35 ± 0.74 MJy sr-1.
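The error arithmetic behind these numbers is simple enough to sketch; the values below are invented for illustration and are not those of Table 3:

```python
import numpy as np

def quad_sum(*sigmas):
    """Combine independent error terms in quadrature, as in Eq. (1)."""
    return float(np.sqrt(np.sum(np.square(sigmas))))

def weighted_mean(x, sigma):
    """Inverse-variance weighted mean and its formal 1-sigma uncertainty."""
    w = 1.0 / np.asarray(sigma) ** 2
    return float(np.sum(w * np.asarray(x)) / np.sum(w)), float(np.sum(w) ** -0.5)

# Invented per-field CIRB estimates at one wavelength:
cirb = [1.2, -0.4, 1.3]                                # MJy/sr
stat = [quad_sum(0.30, 0.30, 0.27),                    # offset, ZL, dark terms
        quad_sum(0.40, 1.10, 0.27),
        quad_sum(0.30, 0.20, 0.27)]
mean, err = weighted_mean(cirb, stat)
print(f"combined CIRB = {mean:.2f} +- {err:.2f} MJy/sr")
```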
Assuming a 10% systematic uncertainty in the ZL and adding the error estimates in quadrature, the CIRB estimate becomes 0.35 ± 0.95 MJy sr-1 and we obtain a 2σ upper limit of 2.3 MJy sr-1.

## 5 Discussion

### 5.1 Dust emission associated with ionised gas

Our analysis was based on the correlation of HI emission and FIR intensity. So far we have omitted the possible effect that dust mixed with ionised gas might have. The ionised component can affect the results only in so far as it is uncorrelated with the HI emission. Lagache et al. (2000) decomposed the DIRBE FIR intensity into components correlated with the neutral and the ionised medium. The column density of ionised hydrogen, N(H+), was estimated based on the Hα line. They found that the infrared emissivity of dust associated with the ionised medium would be very similar to the emissivity of dust within the neutral medium. However, Odegard et al. (2007) recently re-examined this issue, obtaining significantly lower emissivity values for the ionised medium. The derived 2σ upper limit for the 100 µm emissivity per hydrogen ion was typically only 40% of the emissivity in the neutral atomic medium.

We use the all-sky Hα map produced by Finkbeiner (2003) to examine the possible contribution of the ionised medium to the FIR emission. The resolution of the Hα data is 6 arcmin for the fields EBL22 and EBL26 and one degree at the location of the field NGP. The average Hα emission in the EBL22, EBL26, and NGP fields is 0.7 R, 0.5 R, and 0.6 R (Rayleigh), respectively. The Hα background contains small-scale structure that may be caused by faint point sources, mainly stars. Therefore, the quoted Hα levels are not caused by the diffuse ISM only. For example, in NGP the Hα image is dominated by an unresolved (one degree) emission peak at the centre of the field, the nature of which remains unknown. Apart from this, the Hα background does not show any significant gradients or correlation with the FIR emission. Therefore, we consider only the effect on the average FIR signal. Using the Lagache et al. (2000) conversion factors, an Hα signal of 0.6 R would correspond to 0.5 MJy sr-1 in the FIR. Therefore, the CIRB values could be overestimated by a similar amount. However, adopting the Odegard et al. (2007) 1σ upper limits, the contribution from the ionised medium should remain below 0.1 MJy sr-1. Furthermore, in our analysis we have correlated the FIR emission only with HI, while in the quoted studies the FIR signal was correlated simultaneously with both HI and H+. Therefore, since HI and H+ are themselves correlated, the correction to be applied to our results should be correspondingly smaller. We believe, therefore, that the possible effects due to the presence of an ionised medium are small compared with the other uncertainties given above.

### 5.2 Dust emission associated with molecular gas

If molecular gas is present, the HI lines will underestimate the total column density of gas and, because the fraction of molecular gas increases with column density, the relation between FIR emission and the HI intensity becomes steeper. Our fields have low column density and, therefore, the fraction of molecular gas should be low. Hydrogen molecules cannot survive in clouds with visual extinction below a threshold value and, consequently, no molecular gas should exist in clouds with column densities below ~10^20 cm-2. Arendt et al. (1998) detected a steepening in the FIR vs. HI relation which, however, in different regions took place at different column densities.
The effect could start already around 10^20 cm-2, which corresponds to an HI line area of  K km s-1. Kiss et al. (2003) observed a change in the spatial power spectra of FIR surface brightness around  cm-2. This was similarly interpreted as a sign of the transition between the atomic and molecular phases. In the EBL fields, molecular emission could be significant only in the EBL26 region, where the slope between HI and FIR data appears to change at 200 K km s-1 (see Fig. 1). Below this limit there is a good, linear correlation between the FIR surface brightness and the HI line area, which also shows that toward those positions the fraction of molecular gas is low. In the EBL estimation only data below 200 K km s-1 were used.

### 5.3 Comparison with earlier results

The earlier CIRB results in the FIR range are all based on measurements of the COBE satellite. We have derived our CIRB estimates using the ISOPHOT measurements, without relying on the COBE data even in the determination of the ZL levels. Therefore, our result is the first completely independent CIRB estimate after the COBE detections. Table E.1 in Appendix E lists the existing CIRB estimates in the FIR wavelength range.

In the range 150-180 µm our value is consistent with the DIRBE results at 140 µm. According to Kiss et al. (2006) the COBE/DIRBE and ISOPHOT FIR surface brightness values agree to within 15% and, therefore, the differences in the surface brightness scales are likely to be smaller than our statistical uncertainty. In the ZL subtraction, the relative calibration of the ISOPHOT-P detector and the FIR cameras could introduce a systematic error that has a magnitude comparable to that of the statistical uncertainty. The low, even negative, CIRB estimates obtained at 90 µm suggest that this systematic error causes the ZL estimates to be 10% too large. Taking into account our statistical and systematic uncertainties at 150-180 µm, we cannot exclude even the highest DIRBE estimates, close to 1.5 MJy sr-1. At 90 µm our 2σ upper limit of 2.3 MJy sr-1 is consistent with the existing DIRBE results.

Based on the above values, the galaxies resolved with ISO FIR observations account for some 5% of the total CIRB (e.g., Juvela et al. 2000; Héraudeau et al. 2004; Lagache & Dole 2001; Kawara et al. 2004). A stacking analysis of Spitzer measurements (Dole et al. 2006) showed that galaxies detected at 24 µm contribute some 0.7 MJy sr-1 to the 160 µm sky surface brightness. Therefore, the results from galaxy counts and measurements of the absolute level of the CIRB are converging, and probably more than half of the sources responsible for the CIRB have already been identified.

## 6 Conclusions

For the ISOPHOT EBL project far-infrared raster maps were obtained in selected low-cirrus regions. We have analysed these observations and, by correlating the FIR surface brightness with HI line areas measured with the Effelsberg radio telescope, we derive estimates for the cosmic infrared background in the wavelength range 90-180 µm. We determined the level of ZL emission using shorter wavelength ISOPHOT observations, without relying on a model of the spatial distribution of the ZL emission on the sky. Therefore, our results are independent of the existing COBE results. Based on this study we conclude the following:

• At 90 µm we derived a 2σ upper limit of 2.3 MJy sr-1 for the CIRB.
• In the range 150-180 µm we obtained a CIRB value of 1.08 ± 0.32 ± 0.30 MJy sr-1, where we quote separately the estimated statistical and systematic uncertainties.
• The accuracy of the results was determined mostly by the accuracy of the zodiacal light estimates and the dark signal subtraction.
• Assuming the latest emissivity values of dust associated with the ionised medium, the uncertainty related to the presence of the ionised medium was small compared with the other sources of uncertainty.

Acknowledgements. We thank the anonymous referee for valuable comments. This work was supported by the Academy of Finland grants No. 115056, 107701, 124620, and 119641. ISOPHOT and the Data Centre at MPIA, Heidelberg, were funded by the DLR and the Max-Planck-Gesellschaft. We thank P. Kalberla for his help in the planning of the HI measurements and for performing the stray radiation correction of these observations.

## Appendix A: The principles of surface brightness observations with ISOPHOT: data reduction and calibration

The most detailed description of the ISOPHOT instrument, its observing modes (so-called Astronomical Observation Templates, AOTs) and the corresponding data analysis and calibration steps is given in the ISOPHOT Handbook (Laureijs et al. 2003). In the following we describe recent calibration techniques which are beyond the scope of the Handbook and which are essential for the determination of the EBL surface brightness.

### A.1 Absolute surface brightness calibration of ISOPHOT observations

ISOPHOT was absolutely calibrated against a flux grid of celestial point source standards consisting of stars, asteroids and planets, thus covering a fair fraction of the entire dynamic flux range from 100 mJy up to about 1000 Jy. Each detector aperture/pixel was individually calibrated against these standards. Therefore, the basic ISOPHOT calibration is in Jy pixel-1. In order to derive proper surface brightness values in MJy sr-1, the solid angles of each detector aperture/pixel must be accurately known:

B = f(0, 0) · F / Ω_eff    (A.1)

with B being the surface brightness, Ω_eff the effective solid angle of the pixel/aperture, F the total flux of a celestial standard and f the fraction of the Point Spread Function contained in the pixel/aperture (i.e. the convolution of the PSF with the aperture response) when being centred at position (0, 0). Hence, f·F is the flux per pixel.

ISOPHOT's effective solid angles have been determined by 2D-scanning of a point source over the pixel/aperture in fine steps dx' and dy' and measuring the resulting intensity at each measurement point (x', y'), the footprint f(x', y'), taking into account a non-flat aperture/pixel response:

Ω_eff = (1 / f(0, 0)) ∬ f(x', y') dx' dy'.    (A.2)

If the peak of the point source was located outside the aperture by 1/2 of the aperture size, the S/N of the resulting intensity dropped so much that at this border the summation was complemented by a model of the broad band telescope PSF, adding up the corresponding PSF fractions out to 10 arcmin assuming a flat response, but taking into account a cut by ISO's pyramidal central mirror feeding the 4 instrument beams. An example of such a synthetic footprint is shown in Fig. A.1.

Figure A.1: Synthetic (outer part, i.e. green and blue coloured areas, modelled) footprints (convolution of the ISO telescope PSF with the pixel aperture response) of the 3 × 3 pixels of ISOPHOT's C100 array for the 60 µm broad band filter. The solid angles of each pixel are obtained by integration over the footprint area.

The values of the solid angles used in PIA V11.3 are listed in Tables A.1 and A.2.

Table A.1: Effective solid angles for the 3 × 3 pixels of ISOPHOT's C100 array for the 6 filters with central wavelengths λc.
Table A.2:   Effective solid angles for the 2  2 pixels of ISOPHOT's C200 array for the 5 filters with central wavelengths  . It should be noted that an absolute surface brightness calibration is more accurate than an absolute calibration of a compact source of similar brightness, since no background subtraction has to be performed, which introduces an additional uncertainty. The accuracies quoted in the ISOPHOT Handbook (Laureijs et al. 2003), Table 9.1 for extended emission take COBE/DIRBE photometry as the reference. By not referring to COBE/DIRBE photometry, the absolute surface brightness calibration for ISOPHOT's C100 and C200 array is as good as that for bright compact sources, i.e. better than 15%. ### A.2 New calibration products and strategies for PIA V11.3 For the very sensitive analysis needed for the EBL determination and, in particular, an absolute surface brightness calibration that is as accurate as possible, a number of calibration upgrades and new calibration features have been developed and implemented in PIA V11.3. For the ones which are not described in the ISOPHOT Handbook (Laureijs et al. 2003), we provide a description and examples for the C100 detector in the following. An overview of the individual calibration steps associated with different instrument components is shown in Fig. A.2. By application of all these steps, instrumental artefacts are minimized, the resulting detector signals are homogenized and a high calibration reproducibility and accuracy is achieved. Figure A.2: Scheme of the ISOPHOT calibration steps associated with the different instrument components. The meaning of the abbreviations is the following: BSL = Bypassing Sky Light correction, DS = detector Dark Signal, RL = Ramp Linearisation, TC = signal Transient Correction, and RIC = Reset Interval Correction. Open with DEXTER #### A.2.1 Detector responsivity calibration The absolute photometric calibration of an individual measurement is performed via a transfer calibration using the internal calibration sources. This measures the actual responsivity of the detector and provides the absolute signal-to-flux conversion. It is a separate measurement of each observation mode by deflecting the chopper mirror to the field of view of the internal calibrator (Fine Calibration Source, FCS). The illumination level of the internal calibrator was not fixed but adjusted as much as possible to the expected brightness level of the sky. This was achieved by selecting an appropriate heating power for the internal source. There exists a calibration relation between this heating power and the optical power received by each detector pixel which is established from measurements on celestial standards. Figure A.3: Steps in the generation of a homogeneous and most complete calibration of ISOPHOT's long wavelength internal calibration sources (FCS). This is illustrated for the central pixel (#5) of the C100 array camera. Upper left: measured relation between optical power received on the detector and the heating power applied to the internal source. Dots indicate the discrete measurements, the solid line is a fit. Upper right: display of the input curves for all C100 filters within the reliable heating power range. Middle left: for a selected heating power (here: 1.0 mW) monochromatic and colour corrected fluxes of all filters are fitted by a modified BB curve. 
Middle right: by repeating the fits with the same modified BB type for the whole heating power range covered the relation between heating power and temperature of the internal source is established. Lower centre: by applying the FCS model the relation between optical power and heating power is homogenized and extended to the maximum heating power range covered by at least one measurement in any of the C100 or C200 filters. Open with DEXTER Therefore, for reliable and accurate transfer calibrations, the following requirements are put on the FCS: 1) High reproducibility. This was better than 1%, since the monitoring of the flux of faint standards was reproducible within a few percent, and this uncertainty was dominated by the signal noise (Klaas et al. 2001). 2) A very detailed characterization of the illuminated power depending on the heating power applied to the source. This is illustrated in Fig. A.3. It involves the following steps: a) For each C100 and C200 array filter all measurements of celestial standards done in raster map mode were evaluated such that for each pixel the background signal was properly subtracted and the resulting source signal was associated with the celestial standard flux. The ratio of the source signal and the simultaneously obtained FCS signal gave the illumination power by the FCS for the selected heating power. The discrete results were fitted and the reliable lower and upper heating power limits covered by measurements were identified (Fig. A.3 upper left). The heating power ranges were not identical or equally large for each filter (Fig. A.3 upper right). In general they were shifted to smaller heating power values for longer wavelengths. b) For fine discrete steps in heating power the inband powers were read from the relations and were converted to monochromatic surface brightnesses by applying the bandpass conversions derived from the relative system response profiles, see ISOPHOT Handbook (Laureijs et al. 2003), Sect. A.2, and the solid angles of Tables A.1 and A.2. These fluxes were fitted with a modified BB curve after appropriate colour correction (Fig. A.3 middle left). If for a certain filter the selected heating power was outside the reliable limits, the value of this filter was excluded from the fit. The fit gave the temperature of the FCS for the selected heating power. An additional constraint was that the temperature had to be the same for the fit curves of all pixels. C100 and C200 filter values had to be fitted independently because of the different detector areas and hence illumination factors, however, the fits were checked for consistent temperatures, because the illuminating FCS was the same for both detectors. c) This was achieved for the heating power range from 0.07 up to 6.5 mW adopting an emissivity of the source yielding the temperature vs. heating power relation as shown in Fig. A.3 middle right. d) By applying this FCS temperature model and the established illumination factors for each pixel it was possible to establish homogeneous calibration curves of the internal reference source, thus polishing out measurement outliers affecting the initial empirical curves. The multi-filter approach connecting all curves and not treating them individually enabled a large extension and a common range for all filters: compare Fig. A.3 lower centre with Fig. A.3 upper right. 
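Step b) above, the temperature fit, can be illustrated with a short script; everything below (filter set, noise level, emissivity law, temperature) is an assumption of mine for illustration, not the actual FCS calibration data:

```python
import numpy as np
from scipy.optimize import curve_fit

h, kB, c = 6.62607e-34, 1.38065e-23, 2.99792e8   # SI constants

def mod_bb(lam_um, T, scale):
    """Modified blackbody: scale * lambda^-1 * B_lambda(T), arbitrary units."""
    lam = lam_um * 1e-6
    B = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))
    return scale * B / lam

lam = np.array([60.0, 90.0, 135.0, 180.0])       # assumed filter wavelengths [um]
rng = np.random.default_rng(0)
flux = mod_bb(lam, 45.0, 1.0) * (1 + 0.03 * rng.normal(size=lam.size))
(T_fit, _), _ = curve_fit(mod_bb, lam, flux, p0=[40.0, 1.0])
print(f"fitted source temperature: {T_fit:.1f} K")
```

Repeating such a fit for each heating power setting traces out the temperature vs. heating power relation described above.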
#### A.2.2 Bypassing sky light correction of FCS signal As a safety design against single point failures, ISOPHOT was not equipped with any cold shutter to suppress straylight when performing internal calibration measurements. Therefore, when deflecting the chopper onto the illuminated internal calibration (FCS) sources, some fraction of the power received on the detector did not come from the FCS but from sky light bypassing along non nominal light paths. Since this depends on the sky brightness it is subtracted in the transfer calibration measurements on celestial standards and hence has to be subtracted for any FCS measurement in order to get a reproducible zero point. This was achieved by performing a number of measurements on the switched-off, i.e. cold FCS, so that only the bypassing sky light contribution was measured. The result for one C100 array pixel is shown in Fig. A.4 which demonstrates a linear dependence of the bypassing sky light contribution to the FCS signal on the sky background. This correction was established for all C100 and C200 array pixels. The bypassing sky light contribution contains the detector dark signal contribution, cf. Sect. A.2.5. Figure A.4: Bypassing sky light contribution to the FCS signal depending on the sky background. Open with DEXTER #### A.2.3 Effective pixel/aperture solid angles These are described in the previous Sect. A.1 and their values are compiled in Tables A.1 and A.2. Figure A.5: Orbit dependent dark signal determination for the central pixel 5 of ISOPHOT's C100 array. Dots represent individual measurements obtained during the entire ISO mission, filled and open signals identify a different reset interval in the integration of the dark signal. The solid line is the fit to the measurements providing the so-called default dark level. The dotted line is the default dark level of an older calibration version used before 2001. Open with DEXTER #### A.2.4 Filter profiles The bandpass system responses and the conversion factors from inband power to a monochromatic flux, as well as colour correction factors are described in the ISOPHOT Handbook (Laureijs et al. 2003). #### A.2.5 Detector dark signal The detector dark signals were re-analyzed as described in del Burgo (2002). In this analysis special care was given to exclude those dark measurements suffering from memory effects by preceding bright illuminations, thus not representing the true dark level. An example of the new results is shown in Fig. A.5 for the central pixel 5 of the C100 array. A slight orbital dependence is visible with an increase of the dark signal towards the beginning and the end of the observational window. It can also be noticed that there is a scatter of the dark signals at the same orbit position and there are occasional large outliers. These are not due to signal determination uncertainties, but are real variations due to space weather effects on different revolutions over the ISO mission. #### A.2.6 Ramp linearisation This was performed as described in the ISOPHOT Handbook (Laureijs et al. 2003). For ISOPHOT's far-infrared detectors two types of effects cause non-linearities of the integration ramps: 1) De-biasing effects of the photoconductors operated with low bias caused by feed-back from the integration capacitor. 2) Non-linearities generated in the cold read-out electronics. 
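For readers unfamiliar with integrating detectors: the signal is essentially the slope of a voltage ramp, and linearisation straightens the ramp before the slope is fitted. A toy sketch with an invented quadratic non-linearity and an assumed-known correction curve:

```python
import numpy as np

rng = np.random.default_rng(2)

t = np.arange(32) / 32.0                  # 32 readouts within a 1 s reset interval
true_signal = 4.0                         # V/s
ramp = true_signal * t + 0.02 * t**2      # mild non-linearity (invented)
ramp += rng.normal(0.0, 0.005, t.size)    # readout noise

ramp_lin = ramp - 0.02 * t**2             # apply the known correction curve
slope = np.polyfit(t, ramp_lin, 1)[0]     # detector signal = ramp slope
print(f"signal = {slope:.3f} V/s")
```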
#### A.2.7 Signal dependence on reset interval correction Despite the ramp linearisation step, signals obtained under constant illumination, but with different reset intervals show a systematic difference, see Fig. A.6 upper panel. In order to have a consistent signal handling of measurements with different reset interval settings applied - to optimize the dynamic range of the signal - all signals were converted as if they were taken with a 1/4 s reset interval. The correction relations were established from special calibration measurements applying the full suite of reset intervals under constant illumination and this for different illumination levels. In this way signal corrections were established for all reset intervals in the range 1/32 s to 8 s (Fig. A.6 middle and lower panel). While previously, as still described in the ISOPHOT Handbook (Laureijs et al. 2003) a linear correlation with offset was used, a re-analysis (del Burgo et al. 2002) yielded non-linear relations as shown in Fig. A.6. This latter analysis also found a bi-modal behaviour for C100 array pixels, such that the pixels on the main diagonal, #1, 5 and 9, behaved differently from the rest of the pixels. For the C200 array all pixels behaved in the same way. Figure A.6: Correction of the signal dependence on the selected reset interval. Upper panel: demonstration of the effect, showing the resulting signal versus the selected reset interval over the range from 1/32 s up to 8 s (reset intervals were commanded in powers of 2) under constant illumination. Middle panel: solid line: correction relation for a reset interval of 8 s w.r.t. the reference reset interval of 1/4 s for all C100 array pixels, except the ones on the main diagonal. Dotted line: old linear correlation used before the re-analysis. Lower panel: solid line: Correction relation for a reset interval of 8 s w.r.t. the reference reset interval of 1/4 s for all C100 array pixels on the main diagonal (pixels #1, 5, and 9). Dotted line: old linear correlation used before the re-analysis (same as for middle panel). Open with DEXTER #### A.2.8 Signal transient correction The ISOPHOT detectors were photoconductors operated under low background conditions provided by a cryogenically cooled spacecraft. Under these conditions they showed the behaviour that the output signal was not instantaneously adjusted to a flux change but rather, following an initial jump by a certain fraction of the flux step, the signal adjusted with some time constant to the final level, see e.g. Acosta et al. (2000). In particular the ISOPHOT C100 detector showed significant transient behaviour. This time constant depended on the detector material (doping of the semi-conductor and its contacts), the flux step, the direction of the flux step (dark to bright versus bright to dark) and the illumination history. Attempts had been made to model this behaviour (Acosta et al. 2000), but no unique description could be found for the FIR detectors. To overcome this effect at least partly the method of transient recognition was implemented in the ISOPHOT analysis software as described in the ISOPHOT Handbook (Laureijs et al. 2003) using the most stable part of the measurement for signal determination. Finally, another approach was to use a data base of long measurements with signals stabilising and to determine the deviation from the end level for shorter intermediate times (del Burgo et al. 2002), see Fig. A.7 for an illustration. 
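A toy version of such a stabilisation correction, assuming a single-exponential relaxation towards the final level (my simplification; as noted above, no unique model was found for the real FIR detectors):

```python
import numpy as np
from scipy.optimize import curve_fit

def transient(t, s_inf, jump, tau):
    """Initial jump to a fraction of the final level, then exponential settling."""
    return s_inf * (jump + (1.0 - jump) * (1.0 - np.exp(-t / tau)))

t = np.linspace(0.0, 128.0, 256)                       # seconds
rng = np.random.default_rng(3)
sig = transient(t, 10.0, 0.6, 25.0) + rng.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(transient, t, sig, p0=[8.0, 0.5, 20.0])
print(f"stabilised level: {popt[0]:.2f}")
print(f"level reached after 32 s: {transient(32.0, *popt):.2f}")
```

The difference between the two printed numbers is the kind of signal loss that the empirical correction tabulates for short measurement times.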
A measurement time of 128 s was used as reference, because 1) Most calibration measurements in staring mode were performed with this basic measurement time. 2) In most cases the signals stabilised within this time. For the C200 array pixels the signal transient effect is considerably smaller and faster and therefore it is sufficient to apply the transient recognition as described in the ISOPHOT Handbook (Laureijs et al. 2003). Figure A.7: Empirical signal transient correction for ISOPHOT's C100 array. The left column shows the signal loss for integration times of 4, 8, 16, 32, and 64 s (commendable integration times of ISOPHOT detectors) with regard to the reference time of 128 s. The red line is a fit through the measured points over the covered signal range and is used as the correction relation. The right column shows the residuals after applying this correction. Open with DEXTER ## Appendix B: Observations and data reduction for the EBL fields ### B.1 ISOPHOT observations The following tables give details of the ISOPHOT observations used in the paper. Table B.1 lists the raster maps and absolute photometry measurements that were made at 90, 150, and 180 m. Correspondingly, Table B.2 lists observations used for the determination of the zodiacal light levels. These include both mid-infrared measurements carried out with the ISOPHOT-P detector and longer wavelength absolute photometry measurements carried out with the C100 and C200 cameras. Table B.1:   List of ISOPHOT observations of EBL fields carried out in the PHT-22 and PHT-25 observation modes. Table B.2:   Observations used for the determination of the zodiacal light emission. The columns are: (1) name of the field; (2), (3) position; (4) wavelength; (5) the ISO identifier number (TDT) of the observation; and (6) time difference between the listed observation and the observation of the EBL raster maps of Table B.1. A time difference is quoted only when the observations were not performed within the same day. Observations at wavelengths below 60 m are made with the ISOPHOT-P detector. Each field was mapped in the PHT22 staring raster map mode (ISOPHOT Handbook, Laureijs et al. 2003) using filters C_90, C_135, and C_180. The corresponding reference wavelengths of the filters are 90 m, 150 m, and 180 m. The 90 m observations were made with the C100 detector consisting of 3  3 pixels, with 43.5   43.5 each. The longer wavelength observations were made with the C200 detector which has 2  2 detector pixels, with 89.4   89.4 each. The same raster maps were used in Juvela et al. (2000). Table B.3 lists the coordinates and the sizes of the maps. Additionally, we make use of PHT25 absolute photometry measurements (see ISOPHOT Handbook, Laureijs et al. 2003) made at the same three wavelengths. Two positions in NGP, two positions in EBL26, and one position in EBL22 were observed in this mode. Table B.3:   The positions and sizes of the observed fields. Columns are: (1) name of the field; (2), (3) equatorial coordinates of the centre of the field; (4), (5) galactic coordinates; (6), (7) ecliptic coordinates; (8) number of raster points; (9) area in square degrees; and (10) additional remarks. All areas were observed at 90, 150, and 180 m. In NGP(N) an additional square map was observed at 180 m only. Details of the individual measurements are listed in Appendix, in Table B.1. Figure B.1: The ISOPHOT EBL fields. The three frames show the 180 m and 90 m ISOPHOT maps. The coordinates correspond to the 180 m maps. 
The 90 µm maps cover the same area but, in the figure, the 90 µm maps have been plotted south of the 180 µm maps. The small yellow circles indicate the positions observed with the ISOPHOT P-detector for the determination of the zodiacal light levels. To indicate the locations of the fields with respect to the galactic and ecliptic planes, the positions are shown on an all-sky map that is combined from DIRBE observations between the wavelengths of 12 µm and 240 µm.

### B.2 Reduction of EBL field observations

The ISOPHOT data were processed with the PIA (PHT Interactive Analysis) program, version 11.3. For details of the analysis steps, see the ISOPHOT Handbook (Laureijs et al. 2003) and Appendix A. For C100 a method of signal transient correction was introduced in PIA 11.3. This procedure was used for all C100 measurements. Nevertheless, some of the internal calibrator (FCS) measurements show residual drifts. In those cases we applied transient recognition, which removes the initial, unstabilised part of the measurements. The flux density calibration was made using the internal calibrator measurements (FCS1) performed immediately before and after each map for the assessment of the actual detector response. The calibration was applied using the average response of the two FCS measurements.

The reduced data contained a few artifacts. These include short time scale detector drifts at the beginning of some C100 observations, temporary signal variations caused by cosmic ray glitches, and occasional drifting of some detector pixels that may also be connected with cosmic ray hits. The time ordered data were examined by eye. For rasters and detector pixels affected by clear anomalies (glitches or drifting) the corresponding PIA error estimates were scaled upwards, typically by a factor of a few. For each detector pixel the signal values were scaled so that their average value over a map became equal to the overall average over all detector pixels. The scaling takes into account the already manually adjusted error estimates. The flat fielding would actually not be necessary, because FIR fluxes are compared only with observed HI 21 cm lines and are, therefore, averaged over areas that are large compared with the size of the ISOPHOT rasters.

Long term detector response drifts are not taken out by a simple averaging of the FCS measurements, nor is an initial non-linear drift corrected for by linear interpolation between the two FCS measurements. Both could introduce an artificial gradient in the time ordered data and, because of the systematic scan pattern, also in the maps. The maps were compared with IRAS data in order to see if there were any gradients uncorrelated with the IRAS 100 µm signal. The only significant difference was found in the C200 observations of the southern NGP field. The gradient was removed while keeping the average surface brightness unchanged. The correction has little effect on the subsequent analysis. Apart from the EBL22 field, all maps contain four detector scans that run alternately in opposite directions along the longer map dimension. When data are correlated with the lower resolution HI observations, the subsequent scan legs tend to cancel out any long term drifts.

The raster map observations themselves do not contain any direct measurement of the dark current. In such cases one usually relies on the orbit-dependent "default" dark current estimates included in the PIA.
However, absolute photometry PHT-25 measurements were carried out within a couple of hours before or after each raster map. The data reduction was carried out also using the dark current and cold FCS values obtained from those measurements. In the subsequent analysis, we use maps that are averages of those obtained using default dark current values and those obtained using PHT-25 dark current measurements. When absolute photometry points were inside the mapped area they were compared with the surface brightness of the raster maps. The maps were re-scaled so that the final surface brightness corresponds to the average of the original FCS calibrated maps and the values given by the absolute photometry measurements. This causes systematic lowering of the surface brightness values of the original maps. For EBL26, NGP(N), and NGP(S) the change is typically 4%, for both C100 and C200 observations. In the case of EBL22 the correction is larger, some 20%, for the C200 detector. In the region NGP there are separate northern and southern fields that overlap by a few arcminutes. The maps, each containing 32  4 raster points, were fitted together using the overlapping area, where the final map is at a level equal to the average of the northern and the southern maps. The resulting change in the surface brightness levels of individual maps was 5% or less. In the north there is yet another 15  15 raster map that was observed only at 180 m. Because that measurement includes only very short FCS measurements, it was scaled to fit the already combined long 180 m map. This required scaling of the surface brightness values by a factor of 1.05. Figure B.2: The figures shows as black rectangles the areas mapped with ISOPHOT (90, 150, and 180 m) and as circles the pointings used in the Effelsberg HI observations. The diameter of the circles, 9 , is equal to the FWHM of the Effelsberg beam. The frames  a)- c) correspond to regions EBL22, EBL26, and NGP. In the case of NGP, the dashed red line indicates the area that was mapped at 180 m only. Open with DEXTER The main maps of the field EBL22 cover an area of low cirrus emission. There are additional one-directional scans that extend to a region of higher surface brightness in the west. In the absence of scans in the opposite direction, it is not possible to directly determine the presence of detector response drifts. However, these observations were reduced using the average of the responsivities given by the two FCS measurements and the error bars reflect also the difference in the responsivity before and after the measurement. Using the overlapping area, the 32  1 raster strips were scaled to the same level with the 32  3 raster maps. The scalings applied were 0.97, 1.02, and 0.84 at 90 m, 150 m, and 180 m, respectively. Figure B.3: Comparison of HI spectra from the Leiden/Dwingeloo survey (Kalberla et al. 2005; dashed lines) and our Effelsberg data convolved with a beam of 36 (solid lines). The spectra correspond to positions at the southern and northern end of the NGP map (13420 +40 30 0 and 13520 +38 40 0 ) and one position in the field EBL26 (1170 +2 20 0 ). The EBL26 spectra have been scaled by a factor 0.5. Open with DEXTER The final FIR errorbars show the uncertainty for the weighted means over the Effelsberg beam. The noise of each HI spectrum was estimated separately using the velocity channels outside the line. The uncertainty of the line area was calculated assuming the same, uncorrelated noise for the integrated velocity interval. 
This might underestimate the total uncertainty, because it ignores the uncertainties in the stray radiation subtraction that do not affect the signal in the line wings. However, for a small field the stray radiation causes a constant systematic error rather than a statistical uncertainty and does not affect the weighting of the observations when the linear fit is made. For selected positions there exist mid-infrared observations made with the ISOPHOT P-detectors as well as further absolute photometry measurements with the C100 and C200 cameras (see Appendix, Table B.2). These observations were performed for the purpose of estimating the zodiacal light. The data reduction of the P-detector data is similar to that of the C100 and C200 cameras, except that signal linearisation is also included.

### B.3 HI measurements

The observations of the hydrogen 21 cm line were made with the Effelsberg radio telescope in May 2002. The observed positions, 580 in number, are indicated in Fig. B.2. The integration times were 30 s in EBL22 and 62 s in EBL26. In the field NGP the observations were done with 62 s integrations, except for the northern part where the integration time was 94 s. The average noise estimated from the velocity intervals outside the HI line is 0.15 K per channel of 1 km s^-1. This corresponds to a typical uncertainty of 1.7 K km s^-1 in the integrated line area. For calibration purposes and for a precise subtraction of the stray radiation, regular observations of the standard region S7 were made. The stray radiation subtraction is crucial because it affects the zero point of the estimated HI column densities. The observed fields, NGP in particular, have some of the lowest line-of-sight column densities over the whole sky. Under these conditions the stray radiation received by the telescope side lobes becomes a significant fraction of the total signal. The stray radiation was removed with a program developed by Kalberla (Kalberla et al. 2005; see Sect. 3). In Fig. B.3 we compare our data with spectra from the Leiden/Dwingeloo survey (Hartmann & Burton 1997; Kalberla et al. 2005). For this comparison, in order to match the resolution of the Leiden/Dwingeloo survey, the Effelsberg data were convolved with a Gaussian with FWHM equal to 36′. The HI profiles agree very well. Part of the differences may be caused by the fact that our HI maps do not cover the whole area of the 36′ beams. Nevertheless, the figure shows that the observations and the stray radiation subtraction (see Sect. 3) are consistent with the Kalberla et al. (2005) results.

## Appendix C: Calibration accuracy

The error estimates listed in Table 1 are based on the statistical uncertainties in the fits between the FIR and HI data. The scatter of the data points around the fitted lines is usually larger than their estimated uncertainty. This could be a sign of underestimated measurement uncertainties but is more likely caused by true scatter in the relation. If the formal uncertainties of the line parameters were estimated based on the error estimates of the individual points, the uncertainties could be severely underestimated. Therefore, instead of relying only on the measurement uncertainties, the uncertainty of the fit parameters was estimated separately with the bootstrap method so that they reflect the true scatter of the observed points. The error estimates corresponding to a 67% confidence interval are given in Table 1.
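For illustration, the following is a minimal Python sketch of such a bootstrap error estimate for a straight-line fit. The data, the use of numpy.polyfit, and the simple y-on-x regression are illustrative assumptions only; the actual fits in this work also take the uncertainties of both variables into account.

    import numpy as np

    def bootstrap_line_fit(x, y, n_boot=10000, seed=0):
        """Fit y = a*x + b and estimate the slope uncertainty by
        resampling the (x, y) pairs with replacement."""
        rng = np.random.default_rng(seed)
        n = len(x)
        slopes = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, n)           # resample indices
            a, b = np.polyfit(x[idx], y[idx], 1)  # straight-line fit
            slopes[i] = a
        # half-width of the central 67% interval, as quoted in Table 1
        lo, hi = np.percentile(slopes, [16.5, 83.5])
        return np.polyfit(x, y, 1), (hi - lo) / 2

    # toy usage: FIR surface brightness versus HI line area W(HI)
    x = np.linspace(50, 300, 40)                  # W(HI) [K km/s], placeholder
    y = 0.01 * x + 0.8 + np.random.normal(0, 0.2, x.size)
    params, slope_err = bootstrap_line_fit(x, y)
    print(params, slope_err)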
These uncertainties do not include estimates for the systematic errors introduced by the independent calibration of each map or by the absolute accuracy of the overall ISOPHOT calibration. There are both multiplicative and additive sources of uncertainty. The former include, for example, uncertainties in the internal calibration source (FCS) measurements (e.g., detector drifts) that alter the estimated detector response. The uncertainties that affect the zero point of the intensity scale are more critical, because the CIRB is small compared with the observed signal and can be recovered only as the residual after the subtraction of the ZL. Table C.1 lists an assessment of uncertainties that, using the data in Table 1, have been converted into an uncertainty of the FIR flux at zero HI column density. The quoted values are half of the difference between two values obtained in two independent ways. Thereby the quoted values are also 1-σ estimates for the uncertainty of the average of the two values. Column 4 of Table C.1 has been obtained by comparing the fine calibration source measurements performed before and after each map. The numbers indicate the statistical uncertainty of the detector response measurements. The FCS measurements are generally very consistent, particularly in the case of the C200 detector. On the other hand, the effect of the drift affecting the first FCS measurement of the one-dimensional strip map of EBL22 is clearly visible at 90 µm. The dark signal subtraction is the most important correction affecting the zero point of the FIR intensity. Close to each of the raster map observations, we have one or two absolute photometry observations which include dark signal measurements of their own. In PIA, the default dark current calibration is based on a larger set (70) of dark current measurements for which the orbit trend has been determined. Therefore, the PIA default dark current calibration is less affected by the noise of individual measurements, but it may not take into account short time scale variations in the detector dark current on a specific orbit. The maps were reduced using both the default dark current values and the actually measured dark current values. Column 5 of Table C.1 shows the associated uncertainty in the FIR signal at zero HI column density. The observed uncertainty in the dark current values is comparable with the variation observed in the systematic analysis of a large sample of ISOPHOT observations (del Burgo et al. 2002; see also Fig. A.5). When absolute photometry measurements existed within the mapped areas, those were used to re-scale the surface brightness values of the maps (see Sect. B.2). The difference between the absolute photometry and mapping measurements is used to derive the values in Col. 6 of Table C.1. The final column reflects the difference in the surface brightness in areas where two independently calibrated maps overlap. The numbers in Cols. 6 and 7 include, of course, the dark current and FCS uncertainties as one of their components. For the C100 observations at 90 µm the uncertainty is close to 1 MJy sr^-1, i.e., comparable with the expected EBL signal. On the other hand, for the C200 detector the uncertainty of an individual map is 0.3 MJy sr^-1. Most of this is caused by the uncertainty in the dark current values.

Table C.1: Assessment of the calibration uncertainty for the ISOPHOT maps.
Columns are: (1) name of the field; (2) wavelength; (3) average surface brightness of the map; (4) difference between the calibration measurements performed before and after each map; (5) difference between the actual dark current measurements and the default dark current values; (6) difference between the independently calibrated absolute photometry measurements and the raster maps; and (7) difference between partially overlapping maps. These uncertainties have been converted to correspond to the uncertainties at zero hydrogen column density using the fit parameters listed in Table 1.

Figure C.1: Fits used to estimate the ZL levels in the three fields EBL22, EBL26, and NGP (frames a)-c), respectively). The red circles are ISOPHOT observations. The lower lines are the cirrus (blue solid line) and the ZL (red solid line) templates; the uppermost solid green line is their sum. The figures also show DIRBE values for the closest DIRBE pixel, read from the DIRBE weekly maps. The solid squares correspond to observations with the same solar elongation as in the case of the ISOPHOT observations, the open squares to the other measurement with an identical absolute value of the solar aspect angle but opposite solar elongation. For clarity, the latter have been shifted slightly in wavelength. The dashed line shows the predictions of the Kelsall et al. (1998) ZL model.

Straylight may be another instrumental artefact affecting the zero level of the FIR surface brightness. By design and operation, ISO's viewing direction stayed several tens of degrees away from the brightest FIR emitters in the sky, the Sun, the Earth, and the Moon (Kessler et al. 2003). A dedicated straylight program was executed, verifying by deep "differential" integrations that the uniform straylight level due to these sources was below ISOPHOT's detection limit, even under the most unfavourable pointing conditions close to the visibility constraints (Lemke et al. 2001). Specular straylight from the second brightest class of objects, the giant planets Jupiter and Saturn, was observed when pointing to within 15′ to 1° of the planet, expressing itself as finger-like stripes or faint ghost rings (Kessler et al. 2003; Lemke et al. 2001). The NGP and EBL22 fields are far away from the ecliptic and can thus not suffer from this type of straylight. For EBL26 we checked the positions of the planets Mars, Jupiter, Saturn, Uranus, and Neptune at the times of the observations, 1997-06-26 and 1997-07-11, respectively. Mars, Jupiter, Uranus, and Neptune were all far off. Saturn was at a distance of 3.25 degrees, which is still more than a factor of 3 beyond any known straylight-critical distance.

## Appendix D: Determination of the ZL levels

The ZL level was estimated by fitting ZL and cirrus templates to the ISOPHOT observations in the wavelength range from 7.3 µm to 200 µm (see Sect. 4.2). Figure C.1 shows the results of these fits. In the field EBL22 we had observations of one position, and in the fields EBL26 and NGP of two positions (see Table B.2). For the latter two fields, the figures show the fit to data combined from the two positions. Table B.2 lists the time difference between the listed observations and the observations of the raster maps. In the case of NGP these are relative to the 150 µm observations. The 90 µm maps were observed four days before and the 180 µm map one day after the 150 µm maps. According to the Kelsall et al. (1998) ZL model, the four day difference causes only a 1.5% change in the expected ZL. The combined NGP map is almost 1.5 degrees long.
In the Kelsall model, the difference in the centre positions of the southern and northern parts corresponds to about a 1% difference in the ZL. Therefore, we use only one zodiacal light estimate for both NGP(N) and NGP(S) and for all observations made during the five day interval. In the fields EBL26 and NGP, MIR observations exist for two separate positions (see Fig. B.1). In both fields, the measurements at these two positions are close to each other, both in time and in position. Therefore, their ZL values should be identical and the cirrus levels should also be very similar. A comparison of the fits performed using these independent sets of measurements gives a first indication of the statistical uncertainty of the ZL values. In both fields, the ZL values obtained for the two positions agree within 10%. The observations are fitted as a sum of ZL and cirrus components. The ZL template is a black body curve at the temperature obtained from Leinert et al. (2002). The cirrus template is based on the model by Li & Draine (2001). Using the ISOPHOT filter profiles, we calculate for both radiation components, ZL and cirrus, and for each filter the in-band power values that can be directly compared with the observed values. In the fit we have only two free parameters, the intensity of the ZL component and the intensity of the cirrus component. The ZL estimates should be based mainly on the data between 10 µm and 60 µm, where the ZL is clearly the dominant component. Therefore, in the fit, the weight of the data points in this wavelength range is increased by a factor of two. The level of the cirrus component is determined mostly by the longer wavelength data. In reality, this component corresponds to the sum of the cirrus and CIRB signals. As long as the component is small in the MIR, the ZL estimates are almost independent of the exact shape of this template. We confirmed this by replacing the Li & Draine (2001) cirrus template with a pure CIRB template, using the model curve from Dole et al. (2006; Fig. 13). The resulting change in the ZL estimates was less than one per cent. The actual statistical errors of the ZL values are estimated using the standard deviation of the relative errors when the observations are compared with the fitted ZL curve. The last column of Table 2 lists the corresponding error of the mean, calculated using the data points between 7.3 µm and 90 µm. In the case of the fields NGP and EBL26, the error estimates are calculated from the fits where we have combined the data from the two measured positions within each field. In all three fields, the obtained relative uncertainties are 10%. In the fields EBL26 and NGP the uncertainties are also consistent with the difference of the ZL values obtained for the two individual positions. The ZL fits are shown in Fig. C.1. In this paper we have used the original ISOPHOT observations without applying colour corrections. Therefore, in the fitting procedure the ZL and cirrus templates were also converted to corresponding values using the ISOPHOT filter profiles. However, for Fig. C.1 we have performed colour corrections. The templates are plotted by connecting the values at the nominal wavelengths with straight lines. The template spectra used in the ZL fitting are colour corrected using their respective spectral shapes. In the figure, the colour correction of the observed surface brightness values is done assuming the blackbody ZL spectrum below 90 µm, and a modified blackbody cirrus spectrum at 90 µm and longer wavelengths.
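To make the two-parameter fit concrete, the following minimal Python sketch fits the observed in-band powers as a weighted linear combination of a ZL template and a cirrus template, with the weights of the 10-60 µm points doubled as described above. All wavelength, data, and template values are placeholders, not the actual templates or measurements used here.

    import numpy as np

    # placeholder wavelengths [µm] and observed in-band powers (arbitrary units)
    wl    = np.array([7.3, 11.5, 25.0, 60.0, 90.0, 150.0, 180.0, 200.0])
    obs   = np.array([3.1, 9.0, 22.0, 7.5, 4.0, 2.5, 2.2, 2.0])
    zl_t  = np.array([3.0, 9.2, 22.5, 7.0, 2.5, 0.6, 0.4, 0.3])   # ZL template
    cir_t = np.array([0.05, 0.1, 0.3, 0.8, 1.5, 1.9, 1.8, 1.7])   # cirrus template

    # weights: double the weight of the points in the 10-60 µm range
    w = np.where((wl >= 10) & (wl <= 60), 2.0, 1.0)

    # solve min || sqrt(w) * (A @ [a_zl, a_cir] - obs) ||^2
    A = np.column_stack([zl_t, cir_t])
    Aw, yw = A * np.sqrt(w)[:, None], obs * np.sqrt(w)
    (a_zl, a_cir), *_ = np.linalg.lstsq(Aw, yw, rcond=None)
    print(f"ZL scale = {a_zl:.3f}, cirrus scale = {a_cir:.3f}")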
The plots include DIRBE values from the DIRBE weekly maps. These correspond to the DIRBE pixel closest to the centre of the corresponding ISOPHOT map. A linear interpolation was performed between the weeks in order to accurately match the solar elongation of the ISOPHOT observations. In addition to the DIRBE value that corresponds directly to the ISOPHOT observations (solid squares), we plot the DIRBE value for the same solar aspect angle and the opposite sign of the solar elongation. Assuming that the zodiacal dust cloud is symmetric along the ecliptic, the two values should be identical. The predictions of the ZL model of Kelsall et al. (1998) are also plotted. The DIRBE values are colour corrected. As in the case of the ISOPHOT data, the colour correction of the observations assumes a blackbody ZL spectrum at and below 60 µm, and a modified blackbody cirrus spectrum at the longer wavelengths, 100, 140, and 240 µm. There is a clear difference between the ISOPHOT and DIRBE surface brightness scales. The DIRBE values are consistently lower, by some 20-30% in the MIR range. In the FIR bands the extended cirrus structures, combined with the much larger pixel size and the noise in the DIRBE pixels, preclude a direct comparison. The determination of the CIRB values is not directly affected by a possible calibration difference between DIRBE and ISOPHOT because, in this paper, we use exclusively ISOPHOT measurements. Systematic uncertainties affecting all ISOPHOT bands have only little impact on the derived CIRB values. The relative calibration accuracy between the FIR cameras and the ISOPHOT-P photometer is more important, because the zodiacal light estimates are based on the latter. When the absolute level of the zodiacal light was estimated, we calculated the scatter between the SED model and the observations at different wavelengths (see Table 2). The scatter was typically 10-20%. The importance of this error source depends, of course, on the absolute level of the ZL emission. The field EBL26 is located near the ecliptic plane, and at 90 µm the observed signal and the ZL are both of the order of 20 MJy sr^-1. Therefore, a relative uncertainty of 10% would already correspond to about twice the expected level of the CIRB. For EBL22 and especially for NGP the zodiacal light level is much lower, so that more meaningful limits can be derived for the CIRB also at 90 µm. The quoted ZL error estimates reflect the uncertainty in the determined ZL level in the mid-infrared. If there were a systematic difference in the calibration of the mid- and far-infrared bands, the ZL estimates could be wrong by the corresponding amount. Generally, the relative calibration accuracy is considered to be within 15%. This uncertainty would not necessarily be reflected in the quality of the ZL spectrum fits, because a systematic calibration error could have been partly compensated by a change in the intensity of the cirrus component. The ZL spectrum was assumed to be a pure black body with the temperature given by Leinert et al. (2002). As far as the mid-infrared points are concerned, a wrong temperature would, at some level, be reflected also in our error estimate. However, if the ZL spectrum deviated from the assumed shape only in the FIR, this could again be masked by a change in the fitted cirrus component without a corresponding increase in the rms value. Therefore, we must explicitly assume that the same ZL temperature is applicable both at mid-infrared and far-infrared wavelengths.
However, because a 5 K change in the ZL temperature corresponds to only a 2% relative change in the ratio of the 150 µm and 25 µm intensities, this source of uncertainty is unimportant compared with the uncertainty in the relative calibrations of the different detectors.

## Appendix E: Comparison with DIRBE EBL estimates

The present study represents the first determination of the absolute level of the FIR EBL that is independent of measurements with the COBE DIRBE instrument. In Table E.1 we list the FIR EBL estimates given in seven publications based on the DIRBE measurements. Also included are our 2-σ upper limit at 90 µm and the EBL estimate for the range 150-180 µm.

Table E.1: Comparison of existing CIRB estimates in the FIR range. The error estimates quoted by the authors are shown in parentheses. In our case, we include only the statistical uncertainty.

## Footnotes

(1) Based on observations with the Infrared Space Observatory ISO. ISO is an ESA project with instruments funded by ESA member states (especially the PI countries France, Germany, The Netherlands, and the UK) and with the participation of ISAS and NASA.
(2) Appendices are only available in electronic form at http://www.aanda.org

## All Tables

Table 1: Parameters of linear fits of the FIR surface brightness versus the HI line area.

Table 2: The estimated zodiacal light emission.

Table 3: Estimated level of the CIRB for the individual fields.

Table A.1: Effective solid angles for the 3 × 3 pixels of ISOPHOT's C100 array for the 6 filters, with their central wavelengths.

Table A.2: Effective solid angles for the 2 × 2 pixels of ISOPHOT's C200 array for the 5 filters, with their central wavelengths.

Table B.1: List of ISOPHOT observations of the EBL fields carried out in the PHT-22 and PHT-25 observation modes.

Table B.2: Observations used for the determination of the zodiacal light emission. The columns are: (1) name of the field; (2), (3) position; (4) wavelength; (5) the ISO identifier number (TDT) of the observation; and (6) time difference between the listed observation and the observation of the EBL raster maps of Table B.1. A time difference is quoted only when the observations were not performed within the same day. Observations at wavelengths below 60 µm were made with the ISOPHOT-P detector.

Table B.3: The positions and sizes of the observed fields. Columns are: (1) name of the field; (2), (3) equatorial coordinates of the centre of the field; (4), (5) galactic coordinates; (6), (7) ecliptic coordinates; (8) number of raster points; (9) area in square degrees; and (10) additional remarks. All areas were observed at 90, 150, and 180 µm. In NGP(N) an additional square map was observed at 180 µm only. Details of the individual measurements are listed in the Appendix, in Table B.1.

Table C.1: Assessment of the calibration uncertainty for the ISOPHOT maps. Columns are: (1) name of the field; (2) wavelength; (3) average surface brightness of the map; (4) difference between the calibration measurements performed before and after each map; (5) difference between the actual dark current measurements and the default dark current values; (6) difference between the independently calibrated absolute photometry measurements and the raster maps; and (7) difference between partially overlapping maps. These uncertainties have been converted to correspond to the uncertainties at zero hydrogen column density using the fit parameters listed in Table 1.

Table E.1: Comparison of existing CIRB estimates in the FIR range.
The error estimates quoted by the authors are shown in parentheses. In our case, we include only the statistical uncertainty.

## All Figures

Figure 1: FIR surface brightness as a function of the HI line area W(HI) in the three EBL fields, EBL22 (left), EBL26 (middle), and NGP (right). Each point corresponds to one pointing of the HI observations. The uncertainties of the HI line area are estimated based on the noise in the velocity channels outside the detected HI emission. For each HI spectrum the corresponding average FIR signal has been calculated using for weighting a Gaussian with FWHM = 9′. The corresponding error bars are based on the error estimates reported by PIA, from which the formal uncertainties of the weighted mean are calculated. The long dashed line shows the result of a linear fit that takes into account the uncertainties in both variables. The dotted lines indicate the 67% confidence intervals obtained with the bootstrap method.

Figure A.1: Synthetic (outer part, i.e. green and blue coloured areas, modelled) footprints (convolution of the ISO telescope PSF with the pixel aperture response) of the 3 × 3 pixels of ISOPHOT's C100 array for the 60 µm broad band filter. The solid angles of each pixel are obtained by integration over the footprint area.

Figure A.2: Scheme of the ISOPHOT calibration steps associated with the different instrument components. The meaning of the abbreviations is the following: BSL = Bypassing Sky Light correction, DS = detector Dark Signal, RL = Ramp Linearisation, TC = signal Transient Correction, and RIC = Reset Interval Correction.

Figure A.3: Steps in the generation of a homogeneous and most complete calibration of ISOPHOT's long wavelength internal calibration sources (FCS), illustrated for the central pixel (#5) of the C100 array camera. Upper left: measured relation between the optical power received on the detector and the heating power applied to the internal source. Dots indicate the discrete measurements; the solid line is a fit. Upper right: display of the input curves for all C100 filters within the reliable heating power range. Middle left: for a selected heating power (here: 1.0 mW) the monochromatic and colour corrected fluxes of all filters are fitted by a modified BB curve. Middle right: by repeating the fits with the same modified BB type over the whole heating power range covered, the relation between the heating power and the temperature of the internal source is established. Lower centre: by applying the FCS model, the relation between optical power and heating power is homogenized and extended to the maximum heating power range covered by at least one measurement in any of the C100 or C200 filters.

Figure A.4: Bypassing sky light contribution to the FCS signal depending on the sky background.

Figure A.5: Orbit dependent dark signal determination for the central pixel 5 of ISOPHOT's C100 array. Dots represent individual measurements obtained during the entire ISO mission; filled and open symbols identify a different reset interval in the integration of the dark signal. The solid line is the fit to the measurements providing the so-called default dark level. The dotted line is the default dark level of an older calibration version used before 2001.

Figure A.6: Correction of the signal dependence on the selected reset interval.
Upper panel: demonstration of the effect, showing the resulting signal versus the selected reset interval over the range from 1/32 s up to 8 s (reset intervals were commanded in powers of 2) under constant illumination. Middle panel: solid line: correction relation for a reset interval of 8 s w.r.t. the reference reset interval of 1/4 s for all C100 array pixels, except the ones on the main diagonal. Dotted line: old linear correlation used before the re-analysis. Lower panel: solid line: correction relation for a reset interval of 8 s w.r.t. the reference reset interval of 1/4 s for all C100 array pixels on the main diagonal (pixels #1, 5, and 9). Dotted line: old linear correlation used before the re-analysis (same as for the middle panel).

Figure A.7: Empirical signal transient correction for ISOPHOT's C100 array. The left column shows the signal loss for integration times of 4, 8, 16, 32, and 64 s (commandable integration times of ISOPHOT detectors) with regard to the reference time of 128 s. The red line is a fit through the measured points over the covered signal range and is used as the correction relation. The right column shows the residuals after applying this correction.

Figure B.1: The ISOPHOT EBL fields. The three frames show the 180 µm and 90 µm ISOPHOT maps. The coordinates correspond to the 180 µm maps. The 90 µm maps cover the same area but, in the figure, the 90 µm maps have been plotted south of the 180 µm maps. The small yellow circles indicate the positions observed with the ISOPHOT P-detector for the determination of the zodiacal light levels. To indicate the locations of the fields with respect to the galactic and ecliptic planes, the positions are shown on an all-sky map that is combined from DIRBE observations between the wavelengths of 12 µm and 240 µm.

Figure B.2: The figure shows as black rectangles the areas mapped with ISOPHOT (90, 150, and 180 µm) and as circles the pointings used in the Effelsberg HI observations. The diameter of the circles, 9′, is equal to the FWHM of the Effelsberg beam. The frames a)-c) correspond to the regions EBL22, EBL26, and NGP. In the case of NGP, the dashed red line indicates the area that was mapped at 180 µm only.

Figure B.3: Comparison of HI spectra from the Leiden/Dwingeloo survey (Kalberla et al. 2005; dashed lines) and our Effelsberg data convolved with a beam of 36′ (solid lines). The spectra correspond to positions at the southern and northern end of the NGP map (13h42m +40°30′ and 13h52m +38°40′) and one position in the field EBL26 (1h17m +2°20′). The EBL26 spectra have been scaled by a factor of 0.5.

Figure C.1: Fits used to estimate the ZL levels in the three fields EBL22, EBL26, and NGP (frames a)-c), respectively). The red circles are ISOPHOT observations. The lower lines are the cirrus (blue solid line) and the ZL (red solid line) templates; the uppermost solid green line is their sum. The figures also show DIRBE values for the closest DIRBE pixel, read from the DIRBE weekly maps. The solid squares correspond to observations with the same solar elongation as in the case of the ISOPHOT observations, the open squares to the other measurement with an identical absolute value of the solar aspect angle but opposite solar elongation. For clarity, the latter have been shifted slightly in wavelength. The dashed line shows the predictions of the Kelsall et al. (1998) ZL model.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.898891270160675, "perplexity": 1608.4263548436552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986717235.56/warc/CC-MAIN-20191020160500-20191020184000-00134.warc.gz"}
https://kavigupta.org/2016/05/07/Monoids-Bioids-And-Beyond/
# Monoids, Bioids, and Beyond

## Two Views of Monoids

### Monoids

Monoids are defined in Haskell as follows:

    class Monoid a where
        m_id :: a
        (++) :: a -> a -> a

Monoids define some operation with an identity (here called m_id). We can express the required laws, identity and associativity, as follows:

    monoidLaw1, monoidLaw2 :: (Monoid a, Eq a) => a -> Bool
    monoidLaw1 x = x ++ m_id == x
    monoidLaw2 x = m_id ++ x == x

    monoidLaw3 :: (Monoid a, Eq a) => a -> a -> a -> Bool
    monoidLaw3 x y z = (x ++ y) ++ z == x ++ (y ++ z)

### Morphisms

Now, I'll introduce something else:

    class Morphism f where
        id  :: f x x
        (.) :: f b c -> f a b -> f a c

Morphisms also have laws:

    morphismLaw1, morphismLaw2 :: (Morphism f, Eq (f a b)) => f a b -> Bool
    morphismLaw1 f = f . id == f
    morphismLaw2 f = id . f == f

    morphismLaw3 :: (Morphism f, Eq (f a d)) => f c d -> f b c -> f a b -> Bool
    morphismLaw3 f g h = (f . g) . h == f . (g . h)

This defines a morphism, a generalization of a function, in the category of Haskell types. We can make things instances of Morphism as such:

    instance Morphism (->) where
        id x = x
        (f . g) x = f (g x)

### Morphisms and Monoids

Note that while a Monoid is a concrete type, a Morphism is a higher-order type that takes two types as inputs. Apart from this, however, we have a fairly similar set of given functions with a similar set of laws. To make the comparison explicit, we can look at endomorphisms, that is, morphisms from a set to itself.

    data Endo morph set = Endo (morph set set)

    instance (Morphism morph) => Monoid (Endo morph set) where
        m_id = Endo id
        Endo f ++ Endo g = Endo (f . g)

So we can see that monoids can be viewed as morphisms from a set to itself. Here is a diagram of that: the elements of the monoid are the arrows. In any case, we now have a new way of expressing a monoid: as a set of morphisms from a set to itself.

## Bioids

We can define a bioid to be the arrows between two objects. These fall into four different sets: those from $$A \to A$$, those from $$A \to B$$, those from $$B \to A$$, and those from $$B \to B$$. This can be represented diagrammatically as follows:

If we want to enforce the laws of the category, we need to enforce a monoid structure on both $$A$$ and $$B$$. We also know that we must have four additional composition operators. In Haskell syntax, we have:

    class (Monoid aa, Monoid bb) => Bioid aa bb ab ba
            | aa -> ab, aa -> ba, bb -> ab, bb -> ba,
              ab -> aa, ab -> bb, ba -> aa, ba -> bb,
              aa -> bb, bb -> aa, ab -> ba, ba -> ab where
        id_a :: aa
        id_b :: bb
        (%%) :: ab -> ba -> aa
        (^^) :: ba -> ab -> bb
        (#>) :: ab -> aa -> ab
        (<#) :: aa -> ba -> ba
        ($>) :: bb -> ab -> ab
        (<$) :: ba -> bb -> ba

OK, I don't want to write down the laws for that mess. I now understand why mathematicians stopped at monoids and did not go on to bioids. To be completely honest, I was hoping to get semirings out of this mess, but I don't think it's actually possible.

## And Beyond

In any case, we can still notice something interesting about a bioid. It has a total of eight composition rules, if you include the two monoid rules. How can we calculate this number in general? First, let's define an n-oid as the morphisms on a category with $$n$$ objects. We know that there are $$n^2$$ types of morphisms ($$n$$ sources and $$n$$ destinations). We also know that each composition rule involves three types: $$\circ :: m\ b\ c \to m\ a\ b \to m\ a\ c$$. So there are therefore $$n^3$$ (++)-equivalents on an n-oid; a quick enumeration confirming this count is sketched below. Yeah, in general that's going to get messy very fast.
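Since the count is just arithmetic, here is a small Python sketch (illustrative only; the hom(a,b) labels are my own naming) that enumerates the typed composition signatures for a category with n objects:

    from itertools import product

    def composition_signatures(n):
        """Enumerate all typed composition rules for a category with n objects.
        A rule composes hom(b, c) with hom(a, b) to give hom(a, c), so the
        rules are indexed by ordered triples (a, b, c)."""
        objs = range(n)
        return [(f"hom({b},{c})", f"hom({a},{b})", f"hom({a},{c})")
                for a, b, c in product(objs, repeat=3)]

    for n in (1, 2, 3):
        print(n, len(composition_signatures(n)))  # prints n**3: 1, 8, 27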
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9206773638725281, "perplexity": 2269.8881184596116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522284.20/warc/CC-MAIN-20220518151003-20220518181003-00012.warc.gz"}
https://acs.figshare.com/articles/Thermal_Transport_in_Silicon_Nanowires_at_High_Temperature_up_to_700_K/3407575/1
## Thermal Transport in Silicon Nanowires at High Temperature up to 700 K

2016-05-31T17:20:49Z (GMT)

Thermal transport in silicon nanowires has captured the attention of scientists for understanding phonon transport at the nanoscale, and the thermoelectric figure-of-merit (ZT) reported in rough nanowires has inspired engineers to develop cost-effective waste heat recovery systems. Thermoelectric generators composed of silicon target high-temperature applications due to improved efficiency beyond 550 K. However, there have been no studies of thermal transport in silicon nanowires beyond room temperature. High-temperature measurements also enable studies of unanswered questions regarding the impact of surface boundaries and varying mode contributions as the highest vibrational modes are activated (the Debye temperature of silicon is 645 K). Here, we develop a technique to investigate thermal transport in nanowires up to 700 K. Our thermal conductivity measurements on smooth silicon nanowires show the classical diameter dependence from 40 to 120 nm. In conjunction with the Boltzmann transport equation, we also probe an increasing contribution of high-frequency phonons (optical phonons) in smooth silicon nanowires as the diameter decreases and the temperature increases. Thermal conductivity of rough silicon nanowires is significantly reduced throughout the temperature range, demonstrating a potential for efficient thermoelectric generation (e.g., ZT = 1 at 700 K).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9216180443763733, "perplexity": 3269.641068891913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347428990.62/warc/CC-MAIN-20200603015534-20200603045534-00010.warc.gz"}
https://math.stackexchange.com/questions/1123591/combinatorial-proof-of-identity-b-n
# Combinatorial Proof of Identity b_n

Prove that: $$b_n = 1 + \sum\limits_{k=1}^{\infty} \binom{n-1}{k}b_k.$$

Workings: The first thing I noticed is that the above equation looks very similar to the Bell number recurrence $b_{n+1}=\sum\limits_{k=0}^n\binom{n}{k}b_k$, which makes me think there is some sort of relation between the two, though this may not be true, since the identity I need to prove has an infinite upper limit. Because of this I'm not too sure what to do. Any help will be appreciated.

• What are the $b_n$? – Umberto P. Jan 28 '15 at 16:17
• @UmbertoP. The Bell numbers, according to my prof. – TillermansTea Jan 28 '15 at 16:46
• The upper limit of $\infty$ doesn't matter: $\binom{n-1}k$ is non-zero only for $0\le k\le n-1$ anyway, so in effect the summation is from $1$ through $n-1$. – Brian M. Scott Jan 28 '15 at 18:19

If $k > n-1$ then $\binom{n-1}{k} = 0$. Thus the original equality states $$b_n = 1 + \sum_{k=1}^{n-1} \binom{n-1}{k} b_k.$$ If you have that $b_0 = 1$, then $1 = \binom{n-1}{0} b_0$, so in fact $$b_n = \sum_{k=0}^{n-1} \binom{n-1}{k} b_k.$$

• So for a proof I could: suppose we are partitioning the set {1,2,…,n}. Focus first on the block containing the element 1. Let k denote the number of elements other than 1 that belong to this block. We can choose these elements in $\binom{n-1}{k}$ ways. Having formed this block, we partition the remaining $n-1-k$ elements in $b_{n-1-k}$ ways. Summing over $k$ gives $\sum_{k = 0}^{n-1} \binom{n-1}{k} b_{n-1-k}$ ways, which, since $\binom{n-1}{k} = \binom{n-1}{n-1-k}$, is equivalent to $\sum_{k = 0}^{n-1} \binom{n-1}{k} b_{k}$. – TillermansTea Jan 28 '15 at 16:15
• The answer shows that the stated recurrence, along with $b_0 = 1$, is the same as the Bell recurrence. – Umberto P. Jan 28 '15 at 16:50
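For a quick numerical sanity check of the identity (an illustration, not part of the original thread; math.comb requires Python 3.8+):

    from math import comb
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def bell(n):
        """Bell numbers via b_n = sum_{k=0}^{n-1} C(n-1, k) * b_k, with b_0 = 1."""
        if n == 0:
            return 1
        return sum(comb(n - 1, k) * bell(k) for k in range(n))

    # b_n should be 1, 1, 2, 5, 15, 52, 203, ...
    print([bell(n) for n in range(7)])

    # the identity with the leading 1 (the k = 0 term) split off also holds:
    assert all(bell(n) == 1 + sum(comb(n - 1, k) * bell(k) for k in range(1, n))
               for n in range(1, 12))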
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.945648193359375, "perplexity": 199.91334821814456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991269.57/warc/CC-MAIN-20210516105746-20210516135746-00128.warc.gz"}
http://ecrypt-eu.blogspot.com/2016/11/verifiable-random-functions.html
## Monday, November 28, 2016

### Verifiable Random Functions

Pseudorandom functions (PRFs) are a central concept in modern cryptography. A PRF is a deterministic keyed primitive guaranteeing that a computationally bounded adversary with access to the PRF's outputs at chosen points cannot distinguish between the PRF and a truly random function mapping between the same domain and range as the PRF. The pseudorandomness property of the well-known candidates follows from various computational hardness assumptions. The first number-theoretical pseudorandom function (PRF) was proposed in the seminal work of Goldreich, Goldwasser and Micali [1]. Since then, PRFs have found applications in the construction of both symmetric and public-key primitives. Following the beginning of their investigation, various number-theoretical constructions targeted efficiency or enhanced security guarantees. Recent developments of PRFs include works on key-homomorphic PRFs and on functional PRFs and their variants.

A related, and more powerful, concept is the notion of verifiable random functions (VRFs). They were proposed in 1999 by Micali, Rabin and Vadhan [2]. VRFs are in some sense comparable to their simpler counterparts (PRFs), but in addition to the output values, a VRF also produces a publicly verifiable proof $\pi$ (therefore, there is also need for a public verification key). The purpose of the proofs $\pi$ is to efficiently validate the correctness of the computed outputs. The pseudorandomness property must hold exactly as in the case of a PRF, with the noticeable difference that no proof is released for the challenge input during the security experiment.

Since the introduction of VRFs, constructions achieving adaptive security, exponentially large input spaces, or security under standard assumptions have been introduced. However, the construction of VRFs meeting all the aforementioned constraints at the same time had proven a challenging academic exercise. Finally, progress in this direction has been made due to the work of Hofheinz and Jager [3], who solved the open problem via a construction meeting all the requirements. A major obstacle to achieving adaptive security under a static assumption was the lack of techniques for removing the "q-type assumptions" (assumptions whose "size" is parameterized by "q" rather than being fixed) from the security proofs of the previous constructions.

### An adaptively secure VRF from standard assumptions

The scheme by Hofheinz and Jager has its roots in the VRF proposed by Lysyanskaya [4]. In Lysyanskaya's construction, for an input point $x$ in the domain of the VRF, represented in binary as $x = (x_1,\dots, x_n)$, the corresponding output is set to the following encoding: $y = g^{\prod_{i=1}^{n} a_{i, x_i}}$, which for brevity we will denote $[\prod_{i=1}^{n} a_{i, x_i}]$. The pseudorandomness proof requires a q-type assumption. To remove it, the technique proposed in the Hofheinz and Jager paper replaces the set of scalar exponents $\{a_{1,0}, \dots, a_{n,1}\}$ with corresponding matrix exponents. A pairing is also needed for verifiability. Therefore, a point $x = (x_1,\dots, x_n)$ in the domain of the VRF will be mapped to a vector of points. Informally, the construction samples $\vec{u} \leftarrow \mathbb{Z}_p^{k}$ ($p$ prime) and a set of $2n$ square matrices over $\mathbb{Z}_p^{k \times k}$:
$$\left\{ \begin{array}{cccc} M_{1,0} & M_{2,0} & \dots & M_{n,0}\\ M_{1,1} & M_{2,1} & \dots & M_{n,1} \end{array} \right\}.$$
The secret key is set to the plain values $\{ \vec{u}, M_{1,0}, \dots, M_{n,1} \}$, while the verification key consists of the (element-wise) encodings of the entries forming the secret key. To evaluate at a point $x$, one computes: $VRF(sk, x = (x_1,\dots, x_n)) = \Big[ \vec{u}^t \cdot \big(\prod_{i=1}^{n}M_{i,x_i} \big) \Big]$. The complete construction requires an extra step that post-processes the output generated via the chained matrix multiplications with a randomness extractor. We omit this detail. A vital observation is that the multi-dimensional form of the secret key allows one to discard the q-type assumption and replace it with a static one.

### Proof intuition

The intuition for the proof can be summarized as follows:

• During the adaptive pseudorandomness game, a property called "well-distributed outputs" ensures that the evaluation queries output encoded vectors $[\vec{v} = \vec{u}^t \cdot (\prod_{i=1}^{n}M_{i,x_i})]$ such that every vector except the one corresponding to the challenge belongs to a special designated rowspace. This is depicted in the figure, where the right side presents the evaluation of the challenge input $x^*$, while the left side presents the evaluation at $x \ne x^*$.
• To enforce well-distributed outputs, the matrices $M_{i,x_i}$ must have special forms; for simplicity, consider $x^* = (0, 1, \dots, 0)$ of Hamming weight 1 and the corresponding secret key:
$$\vec{u}^t, \left\{ \begin{array}{cccc} U_{1,0} & L_{2,0} & \dots & U_{n,0} \\ L_{1,1} & U_{2,1} & \dots & L_{n,1} \end{array} \right\}$$
where $L_i$ stands for a rank-$(n{-}1)$ (lower rank) matrix, while $U_i$ denotes a full rank matrix that maps between RowSpace($L_{i-1}$) and RowSpace($L_{i}$). RowSpace($L_0$) is a randomly chosen subspace of dimension $n-1$, and $\vec{u} \not\in$ RowSpace($L_0$) with overwhelming probability. Also, notice that the full rank matrices occur in the positions corresponding to $x^*$, in order to ensure well-distributed outputs.
• Finally, and maybe most importantly, one must take into account that the distribution of matrices used to ensure well-distributed outputs must be indistinguishable from the distribution of uniformly sampled square matrices. A hybrid argument is required for this proof, with the transitions between the games being based on the $n$-Rank assumption (from the Matrix-DDH family of assumptions).

### References

1. Goldreich, O., Goldwasser, S., & Micali, S. (1986). How to construct random functions. Journal of the ACM (JACM), 33(4), 792-807.
2. Micali, S., Rabin, M., & Vadhan, S. (1999). Verifiable random functions. In Foundations of Computer Science, 1999. 40th Annual Symposium on (pp. 120-130). IEEE.
3. Hofheinz, D., & Jager, T. (2016, January). Verifiable random functions from standard assumptions. In Theory of Cryptography Conference (pp. 336-362). Springer Berlin Heidelberg.
4. Lysyanskaya, A. (2002, August). Unique signatures and verifiable random functions from the DH-DDH separation. In Annual International Cryptology Conference (pp. 597-612). Springer Berlin Heidelberg.
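To make the chained-exponent structure of the Lysyanskaya-style evaluation concrete, here is a toy Python sketch over a small prime-order group. All parameters and names are illustrative toy values; this omits the pairing, the proofs, and everything that makes the actual construction secure.

    import random

    # toy parameters: a subgroup of prime order q generated by g modulo p
    p, q, g = 1019, 509, 4   # 1019 = 2*509 + 1; g = 4 generates the order-509 subgroup
    n = 4                    # input length in bits

    # secret key: 2n random exponents a[i][b] in Z_q
    random.seed(1)
    a = [[random.randrange(1, q) for _ in range(2)] for _ in range(n)]

    def evaluate(x_bits):
        """y = g ** (prod_i a[i][x_i]) mod p -- the chained-exponent encoding."""
        e = 1
        for i, b in enumerate(x_bits):
            e = (e * a[i][b]) % q    # multiply the exponents mod the group order
        return pow(g, e, p)

    print(evaluate([0, 1, 1, 0]))    # deterministic: same input -> same output
    print(evaluate([0, 1, 1, 0]))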
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9556251168251038, "perplexity": 1318.5829747279593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323895.99/warc/CC-MAIN-20170629084615-20170629104615-00222.warc.gz"}
https://www.ias.ac.in/listing/bibliography/pram/J._Datta
• J Datta

Articles written in Pramana – Journal of Physics

• Linear delta expansion technique for the solution of anharmonic oscillations

The linear delta expansion technique has been developed for solving the differential equation of motion for symmetric and asymmetric anharmonic oscillators. We have also demonstrated the sophistication and simplicity of this new perturbation technique.

• Generalization of quasi-exactly solvable and isospectral potentials

A unified approach in the light of supersymmetric quantum mechanics (SSQM) has been suggested for generating multidimensional quasi-exactly solvable (QES) potentials. This method provides a convenient means to construct isospectral potentials of derived potentials.

• Iterative approach for the eigenvalue problems

An approximation method based on the iterative technique is developed within the framework of the linear delta expansion (LDE) technique for the eigenvalues and eigenfunctions of one-dimensional and three-dimensional realistic physical problems. This technique allows us to obtain the coefficients in the perturbation series for the eigenfunctions and the eigenvalues directly, by knowing the eigenfunctions and the eigenvalues of the unperturbed problems in quantum mechanics. Examples are presented to support this. Hence, the LDE technique can be used for non-perturbative as well as perturbative systems to find approximate solutions of eigenvalue problems.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8867663145065308, "perplexity": 883.0768519400962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358189.36/warc/CC-MAIN-20211127133237-20211127163237-00608.warc.gz"}
http://www.ams.org/joursearch/servlet/PubSearch?f1=msc&onejrnl=proc&pubname=one&v1=20.40&startRec=1
# AMS eContent Search Results

Matches for: msc=(20.40) AND publication=(proc). Sort order: Date. Results 1 to 30 of 43.

[1] Guy T. Hogan. Elements of maximal order in finite $p$-groups. Proc. Amer. Math. Soc. 32 (1972) 37-41. MR 0289645.
[2] D. K. Friesen. Products of normal supersolvable subgroups. Proc. Amer. Math. Soc. 30 (1971) 46-48. MR 0280590.
[3] Fletcher Gross. $p$-solvable groups with few automorphism classes of subgroups of order $p$. Proc. Amer. Math. Soc. 30 (1971) 437-444. MR 0286887.
[4] Richard M. Davitt and Albert D. Otto. On the automorphism group of a finite $p$-group with the central quotient metacyclic. Proc. Amer. Math. Soc. 30 (1971) 467-472. MR 0281797.
[5] Graham A. Chambers. On the conjugacy of injectors. Proc. Amer. Math. Soc. 28 (1971) 358-360. MR 0277612.
[6] Joseph E. Kuczkowski. On roots and subsemigroups of nilpotent groups. Proc. Amer. Math. Soc. 28 (1971) 50-52. MR 0274585.
[7] R. Faudree. Groups in which each element commutes with its endomorphic images. Proc. Amer. Math. Soc. 27 (1971) 236-240. MR 0269737.
[8] I. D. Macdonald. Solution of the Hughes problem for finite $p$-groups of class $2p-2$. Proc. Amer. Math. Soc. 27 (1971) 39-42. MR 0271230.
[9] Edward Formanek. A short proof of a theorem of Jennings. Proc. Amer. Math. Soc. 26 (1970) 405-407. MR 0272895.
[10] Ernest L. Stitzinger. On elementary groups. Proc. Amer. Math. Soc. 26 (1970) 236-238. MR 0265467.
[11] Nobuo Inagaki. On $\mathfrak{F}$-normalizers and $\mathfrak{F}$-hypercenter. Proc. Amer. Math. Soc. 26 (1970) 21-22. MR 0263921.
[12] Ti Yen. On $\mathfrak{F}$-normalizers. Proc. Amer. Math. Soc. 26 (1970) 49-56. MR 0262366.
[13] H. J. Schmidt. On normal complements of $\mathfrak{F}$-covering subgroups. Proc. Amer. Math. Soc. 25 (1970) 457-459. MR 0258960.
[14] Forrest Richen. Decomposition numbers of $p$-solvable groups. Proc. Amer. Math. Soc. 25 (1970) 100-104. MR 0254146.
[15] D. B. Coleman and D. S. Passman. Units in modular group rings. Proc. Amer. Math. Soc. 25 (1970) 510-512. MR 0262360.
[16] Larry Dornhoff. Jordan's theorem for solvable groups. Proc. Amer. Math. Soc. 24 (1970) 533-537. MR 0255680.
[17] N. D. Gupta. The free metabelian group of exponent $p^{2}$. Proc. Amer. Math. Soc. 22 (1969) 375-376. MR 0245678.
[18] Avino'am Mann. On subgroups of finite solvable groups. Proc. Amer. Math. Soc. 22 (1969) 214-216. MR 0241539.
[19] S. Bauman. A note on cover and avoidance properties in solvable groups. Proc. Amer. Math. Soc. 21 (1969) 173-174. MR 0238950.
[20] Eugene Schenkman. Some criteria for nilpotency in groups and Lie algebras. Proc. Amer. Math. Soc. 21 (1969) 714-718. MR 0241540.
[21] H. Lausch. Formations of groups and $\pi$-decomposability. Proc. Amer. Math. Soc. 20 (1969) 203-206. MR 0233891.
[22] Guy T. Hogan and Wolfgang P. Kappe. On the $H_{p}$-problem for finite $p$-groups. Proc. Amer. Math. Soc. 20 (1969) 450-454. MR 0238952.
[23] Chong-yun Chao. A theorem of nilpotent groups. Proc. Amer. Math. Soc. 19 (1968) 959-960. MR 0229721.
[24] Ralph Faudree. A note on the automorphism group of a $p$-group. Proc. Amer. Math. Soc. 19 (1968) 1379-1382. MR 0248224.
[25] Fletcher Gross. A note on fixed-point-free solvable operator groups. Proc. Amer. Math. Soc. 19 (1968) 1363-1365. MR 0231909.
[26] Avino'am Mann. On $\mathfrak{F}$-normalizers and $\mathfrak{F}$-covering subgroups. Proc. Amer. Math. Soc. 19 (1968) 1159-1160. MR 0231911.
[27] D. L. Winter. Finite $p$-solvable linear groups with a cyclic Sylow $p$-subgroup. Proc. Amer. Math. Soc. 18 (1967) 341-343. MR 0207845.
[28] Paul M. Weichsel. On $p$-abelian groups. Proc. Amer. Math. Soc. 18 (1967) 736-737. MR 0213443.
[29] R. Faudree. Embedding theorems for ascending nilpotent groups. Proc. Amer. Math. Soc. 18 (1967) 148-154. MR 0206105.
[30] Richard G. Swan. Representations of polycyclic groups. Proc. Amer. Math. Soc. 18 (1967) 573-574. MR 0213442.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9272565245628357, "perplexity": 1464.219624410354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189589.37/warc/CC-MAIN-20170322212949-00148-ip-10-233-31-227.ec2.internal.warc.gz"}
http://www.physicsmynd.com/?p=1103
# IIT JEE Main / Advanced Physics 2018 Tips & Trends – Young’s Double Slit Experiment [YDSE]

The experiment performed by the English scientist Thomas Young in 1801 demonstrated the wave nature of light through interference; its quantum versions later became the standard illustration of wave-particle duality. Moreover, Young was able to determine the wavelength of light by this experiment. The importance of this experiment is such that Richard Feynman, one of the greatest physicists, opined that all of quantum mechanics could be gleaned from carefully thinking through the implications of this single experiment. In this post, we will look at the key theoretical aspects of YDSE and at problems related to it.

Young’s double slit experiment

Points to note

• The light used for this experiment is usually (for best results) collimated (the rays are distinct, parallel to each other and do not mix), monochromatic (a single wavelength, to keep the analysis valid) and coherent (a single phase).
• The single slit ensures that light from only a single direction falls on the double slits.
• The light passing through S1 and S2 is coherent because it comes from the same source.
• A bright fringe is created directly opposite the mid-point between the two slits. Here the distances between the fringe and the two slits (l1 and l2) are equal, so the two paths contain the same number of wavelengths, resulting in constructive interference. In addition, constructive interference produces more bright fringes on both sides of the middle, wherever the difference between l1 and l2 is an integer number of wavelengths: λ, 2λ, 3λ, etc.
• For dark fringes, l2 is larger than l1 by exactly half a wavelength. Additional dark fringes are created at points where the difference between l1 and l2 equals an odd number of half-wavelengths: λ/2, 3λ/2, etc. Dark fringes indicate destructive interference.
• The angle θ for constructive interference (maxima) is given by sin θ = mλ/d, where m = 0, 1, 2, 3, … and d is the distance between S1 and S2.
• The angle θ for destructive interference (minima) is given by sin θ = (m + ½)λ/d, where m = 0, 1, 2, 3, …

Conditions for Constructive & Destructive Interference

Consider a point on a distant screen such that the distance D from the slits satisfies D >> d. The small arc of the circle through the chosen point is then almost a straight line, and the path difference is Δx = d sin θ.

• The condition for constructive interference is Δx = ±nλ (n = 0, 1, 2, 3, …), so for maxima d sin θ = ±nλ.
• The condition for destructive interference is Δx = ±(n − ½)λ (n = 1, 2, 3, …), so for minima d sin θ = ±(n − ½)λ.
• When D >> d, sin θ ≈ tan θ ≈ θ = y/D.
• Position of the nth bright fringe: yn = nλD/d.
• Position of the nth dark fringe: yn = (n − ½)λD/d.

Fringe width

The fringe width β is defined as the distance between two successive maxima or minima. It is given by β = λD/d.

• β is independent of n (the fringe order) as long as d and θ are small, i.e. the fringes are evenly spaced.
• For red light β is larger, since β is proportional to λ.
• If the experiment is done in a medium other than air, say water with refractive index μ, then β′ = β/μ.

Maximum order of interference fringes

• When n << d/λ, y/D = nλ/d.
• When n ≈ d/λ, the small-angle approximation no longer holds and n = d sin θ/λ.
• Since sin θ ≤ 1, the highest order is n_max = [d/λ] for maxima and n_max = [d/λ + ½] for minima, where [·] denotes the greatest-integer function.

Variations

1. When the rays are not parallel to the principal axis:

• Δx = d sin θ − yd/D

2. When the source is placed beyond the central line:

• Maxima: Δx = nλ
• Minima: Δx = (2n − 1)λ/2

3. When a transparent glass slab of thickness t and refractive index μ is placed in one of the incoming wave paths, the optical path increases by (μ − 1)t and the fringe pattern undergoes a shift s given by:

• s = (D/d)(μ − 1)t

Now on to some problems…

1. In a Young’s experiment, the upper slit is covered by a thin glass plate of refractive index 1.4, while the lower slit is covered by another glass plate having the same thickness as the first one but refractive index 1.7. The interference pattern is observed using light of wavelength 5400 Å. It is found that the point P on the screen, where the central maximum (n = 0) fell before the glass plates were inserted, now has ¾ of the original intensity. It is further observed that what used to be the fifth maximum lies below the point P while the sixth minimum lies above P. Calculate the thickness of the glass plates. (Absorption of light by the glass plates may be neglected.) (IIT JEE 1997)

Solution – From the given data, μ1 = 1.4 and μ2 = 1.7. Let t be the thickness of each glass plate. The path difference at P due to the insertion of the plates is

Δx = (μ2 − μ1) t = (1.7 − 1.4) t = 0.3 t …(1)

The key deduction: since the fifth maximum and the sixth minimum lie on either side of P, the path difference must lie between 5λ and 5λ + λ/2. Hence Δx can be written as

Δx = 5λ + Δ, with Δ < λ/2 …(2)

The phase difference at P is then

Φ = (2π/λ) Δx = 10π + (2π/λ) Δ …(3)

We know that I(Φ) = Imax cos²(Φ/2), and the intensity at P is given as ¾ Imax. Therefore

¾ = cos²(Φ/2) …(4)

Substituting (3) into (4) and solving gives Δ = λ/6. Hence Δx = 5λ + λ/6 = 31λ/6. From (1), 0.3 t = 31λ/6, so

t = 31λ/(6 × 0.3) = 9.3 × 10⁻⁶ m = 9.3 μm.

2. A coherent parallel beam of microwaves of wavelength λ = 0.5 mm falls on a Young’s double slit apparatus. The separation between the slits is 1.0 mm. The intensity of the microwaves is measured on a screen placed parallel to the plane of the slits at a distance of 1.0 m from it, as shown in the figure.

a) If the incident beam falls normally on the double slit apparatus, find the y-coordinates of all the interference minima on the screen.

b) If the incident beam makes an angle of 30° with the x-axis (the dotted arrow in the figure), find the y-coordinates of the first minima on either side of the central maximum. (IIT JEE 1998)

Solution –

a) Refer to Variation 2 above. For minimum intensity, d sin θ = (2n − 1)λ/2, so sin θ = (2n − 1)λ/2d. Substituting the given values, sin θ = (2n − 1)/4. The point to note: since sin θ ≤ 1, we need (2n − 1)/4 ≤ 1, which means n ≤ 2.5. Hence n can only be 1 or 2. If n = 1, sin θ1 = 1/4 and tan θ1 = 1/√15. If n = 2, sin θ2 = 3/4 and tan θ2 = 3/√7. With D = 1 m, y = D tan θ. Hence y1 = 1/√15 m ≈ 0.26 m and y2 = 3/√7 m ≈ 1.13 m. By symmetry there are minima on either side of O, so there are four minima in all, at y = ±0.26 m and ±1.13 m.

b) If α = 30°, the extra path difference introduced by the tilt is Δx1 = d sin 30° = 0.5 mm, which equals the given λ. At the central maximum the net path difference is zero, so d sin θ = d sin α, giving θ = α = 30° and tan θ = 1/√3 = y0/D. Therefore y0 ≈ 0.58 m. Let X1 and X2 be the minima flanking the central maximum. Considering X2: Δx2 − Δx1 = λ/2, therefore Δx2 = 3λ/2, since Δx1 = λ. Thus d sin θ2 = 3λ/2, i.e. sin θ2 = 3/4, and tan θ2 = y2/D = 3/√7, hence y2 ≈ 1.13 m. Working in the same manner for X1, tan θ1 = y1/D = 1/√15, hence y1 ≈ 0.26 m.

3. In a Young’s double slit experiment, two wavelengths, 500 nm and 700 nm, were used. What is the minimum distance from the central maximum where their maxima coincide again? Take D/d = 10³. Symbols have their usual meanings. (IIT JEE 2004)

Solution – Let n1 be the order of the bright fringe of wavelength λ1 = 500 nm and n2 that of the bright fringe of λ2 = 700 nm. For the maxima to coincide,

n1 λ1 D/d = n2 λ2 D/d, so n1/n2 = λ2/λ1 = 7/5.

The inference here is that the 7th maximum of λ1 coincides with the 5th maximum of λ2. Hence the minimum distance is

n1 λ1 D/d = 7 × 5 × 10⁻⁷ m × 10³ = 3.5 × 10⁻³ m = 3.5 mm.
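These answers are easy to verify numerically. The short Python sketch below is not part of the original post (the variable names are mine); it is a quick check of the results of problem 2(a) and problem 3 under the stated data:

```python
import numpy as np

# Problem 2(a): minima for lam = 0.5 mm, d = 1.0 mm, D = 1.0 m
lam, d, D = 0.5e-3, 1.0e-3, 1.0
for n in (1, 2):
    s = (2 * n - 1) * lam / (2 * d)   # sin(theta) for the nth minimum
    y = D * s / np.sqrt(1 - s**2)     # y = D tan(theta)
    print(f"n={n}: y = ±{y:.2f} m")   # prints ±0.26 m and ±1.13 m

# Problem 3: first coincidence of maxima for 500 nm and 700 nm, D/d = 1e3
lam1, D_over_d = 500e-9, 1e3
n1 = 7                                # 7 * 500 nm = 5 * 700 nm
print(f"y = {n1 * lam1 * D_over_d * 1e3:.1f} mm")  # prints 3.5 mm
```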
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9788767695426941, "perplexity": 3159.5641482410883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864343.37/warc/CC-MAIN-20180622030142-20180622050142-00052.warc.gz"}
https://www.h-its.org/research/tos/
# Theory and Observations of Stars (TOS)

Stars are an important source of electromagnetic radiation in the universe, allowing for studies of many phenomena, from distant galaxies to the interstellar medium and extra-solar planets. However, due to their opacity it was once said that "at first sight it would seem that the deep interior of the sun and stars is less accessible to scientific investigation than any other region of the universe" (Sir Arthur Eddington, 1926). Now, through modern mathematical techniques and high-quality data, it has become possible to probe and study the internal stellar structure directly through global stellar oscillations: a method known as asteroseismology.

Asteroseismology uses techniques similar to the helioseismology carried out on our closest star, the Sun, to study the structure of other stars. The properties of waves are used to trace the internal conditions. Oscillations that affect the whole star reveal information that is hidden by the opaque surface. This asteroseismic information from the CoRoT, Kepler, K2, TESS, SONG and Plato observatories, combined with astrometric observations from Gaia, spectroscopic data from the SDSS-V APOGEE, interferometry or photometry, and state-of-the-art stellar models such as MESA, provides insight into stellar structure and the physical processes that take place in stars.

Understanding these physical processes and how they change as a function of stellar evolution is the ultimate goal of the Theory and Observations of Stars (TOS) group at HITS, which was established in 2020. We focus on, but do not limit ourselves to, low-mass main-sequence stars, subgiants, and red giants. These stars are interesting as they go through a series of internal structure changes. Furthermore, they are potential hosts of planets and standard candles for galactic studies (core-helium-burning red-giant stars), and hence exoplanet studies as well as Galactic archaeology will also benefit from an increased understanding of these stars.

The TOS group is an international node of the Stellar Astrophysics Centre (SAC), Aarhus, Denmark.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9030964970588684, "perplexity": 1539.405876250301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626465.55/warc/CC-MAIN-20210617011001-20210617041001-00590.warc.gz"}
http://atomosyd.net/spip.php/local/cache-vignettes/l51xh120/skelato/plugins/dw2/overlib/)http:/www.davidsauzay.com/www.davidsauzay.com/spip.php?article70
Other articles on: Lorenz-like systems

# 1992 A thermal convection loop

Christophe LETELLIER 09/05/2009

Yuzhou Wang, Jonathan Singer & Haim Bau

When the Rayleigh-Bénard experiments were carried out, the behaviors observed as the temperature difference was increased did not correspond at all to those computed with Lorenz's highly simplified model. The basic reason is that the approximations made to obtain the Lorenz system are not applicable unless the fluid is at rest (no convection in the fluid). In the midst of the 1970s, as the Lorenz system began to be adopted as a description for some types of irregular behavior, an experiment [1] was conceived that corresponded to the dynamics described by the Lorenz equations: it was the inverse of the usual procedure. Usually the model is modified to describe the experiment; this time the experiment was designed to suit the model.

Fig. 1: Schematic description of the convection experiment.

The experiment consisted of an annular ring, with internal (tube) diameter d = 0.03 m, in a vertical orientation (Fig. 1). The ring has diameter D = 0.76 m. A fluid is allowed to circulate within the tube. The liquid is heated by a heating ribbon in the lower half of the tube and cooled by a water cooling jacket in the upper half of the tube. The fluid temperature was measured at two diametrically opposite points in the ring, in the cooled part of the ring.

The system

Following the procedure used by Lorenz, Haim Bau's group reduced the Navier-Stokes equations to three ordinary differential equations [2]; the equations themselves appear only as an image on the original page and are not reproduced here. The two control parameters are the Rayleigh number R and the Prandtl number. These equations are not exactly the same as those obtained by Lorenz, but the departure does not imply notable changes in the nature of the solutions. For instance, with R = 18.5 (the Prandtl number used is likewise given only in the image), this system produces a chaotic attractor (Fig. 2) topologically equivalent to the Lorenz attractor. This attractor is governed by a tearing mechanism that takes place in the neighborhood of the saddle fixed point located at the origin of the phase space.

Fig. 2: Chaotic "Lorenz" attractor solution to the Wang-Singer-Bau system.

For other parameter values (R = 128), the chaotic attractor is topologically equivalent to a "Burke and Shaw" attractor (Fig. 3).

Fig. 3: Chaotic "Burke and Shaw" attractor solution to the Wang-Singer-Bau system.

[1] H. F. Creveling, J. F. de Paz, J. Y. Baladi & R. H. Schoenhals, Stability characteristics of a single thermal convection loop, Journal of Fluid Mechanics, 67, 65-84, 1975.

[2] Y.-Z. Wang, J. Singer & H. H. Bau, Controlling chaos in a thermal convection loop, Journal of Fluid Mechanics, 237, 479-498, 1992.
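Since the loop equations are given only as images on the original page, they cannot be reproduced here. As a substitute, the sketch below integrates the classical Lorenz system, to which the text says the R = 18.5 attractor is topologically equivalent. This is an illustration only: the parameters are the standard Lorenz values, not those of the Wang-Singer-Bau loop model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Classical Lorenz system (not the Wang-Singer-Bau loop equations)
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], max_step=0.01)
# Plotting sol.y[0] against sol.y[2] traces the familiar two-lobed attractor,
# organized around the saddle fixed point at the origin, as described above.
print(sol.y.shape)
```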
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9173256158828735, "perplexity": 1272.8902302961526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509690.35/warc/CC-MAIN-20181015184452-20181015205952-00236.warc.gz"}
https://home.cern/tags/speed-light
# speed of light

The speed of light in vacuum, commonly denoted c, is a universal physical constant important in many areas of physics. Its value is exactly 299,792,458 metres per second, as the length of the metre is defined from this constant and the international standard for time.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987068772315979, "perplexity": 437.98312960661445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304810.95/warc/CC-MAIN-20220125100035-20220125130035-00652.warc.gz"}
https://www.physicsforums.com/threads/solving-forth-order-nonlinear-ode.244326/
# Solving fourth order nonlinear ODE

1. Jul 10, 2008

### hamidD

Hello, I want to find the exact solution of a nonlinear ODE with its boundary conditions. The equation and its BCs are written below:

a y'''' + y''' y - y'' y' = 0
y(h/2) = V1, y(-h/2) = V2, y'(h/2) = 0, y'(-h/2) = 0

where V1, V2, a and h are constants. Although integrating the above equation once reduces the order of the ODE to 3, the problem is still unsolvable in closed form. After integrating, we have:

a y''' + y'' y - (y')^2 = C

where C is a constant.

2. Jul 10, 2008

### arildno

In general, you won't be able to find exact solutions to nonlinear diff. eqs.; you'll need to solve it numerically.
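Following arildno's advice, here is a minimal numerical sketch (not from the thread) that solves the boundary-value problem with SciPy's solve_bvp, after rewriting the fourth-order equation as a first-order system. The values of a, h, V1 and V2 are arbitrary placeholders chosen only for illustration:

```python
import numpy as np
from scipy.integrate import solve_bvp

a, h, V1, V2 = 1.0, 1.0, 1.0, -1.0   # placeholder constants

def rhs(x, u):
    # u = [y, y', y'', y''']; from a*y'''' = y''*y' - y'''*y
    return np.vstack([u[1], u[2], u[3], (u[2] * u[1] - u[3] * u[0]) / a])

def bc(ua, ub):
    # y(-h/2) = V2, y(h/2) = V1, y'(-h/2) = 0, y'(h/2) = 0
    return np.array([ua[0] - V2, ub[0] - V1, ua[1], ub[1]])

x = np.linspace(-h / 2, h / 2, 51)
u0 = np.zeros((4, x.size))
u0[0] = V2 + (V1 - V2) * (x / h + 0.5)   # linear initial guess for y
sol = solve_bvp(rhs, bc, x, u0)
print(sol.status, sol.message)
```

The integrated third-order form a y''' + y'' y - (y')^2 = C could be used instead, but it introduces the unknown constant C as an extra parameter, so working directly with the fourth-order system is simpler here.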
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9783710241317749, "perplexity": 4477.243572622723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189244.95/warc/CC-MAIN-20170322212949-00225-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.calculatored.com/math/probability/variance-tutorial
## How to use the variance formula and calculate variance

Step 1: Square all the values

This calculator computes the variance from a set of values. The first step is to take the square of every value in the population:

X    X²
25   625
35   1225
45   2025
55   3025

Step 2: Compute the sums

$$\sum x\;=\;160$$

Take the square of this sum and divide it by the size of the population:

$$\frac{(\sum x)^2}{N}\;=\;\frac{160^2}{4}\;=\;\frac{25600}{4}\;=\;6400$$

Then calculate the sum of all the squared values:

$$\sum x^2\;=\;6900$$

Subtract:

$$\sum x^2\;-\;\frac{(\sum x)^2}{N}\;=\;6900\;-\;6400\;=\;500$$

Step 3: Calculate the variance

For the variance, divide the result by the size of the population:

$$\sigma^2\;=\;\frac{\sum x^2\;-\;\frac{(\sum x)^2}{N}}{N}\;=\;\frac{500}{4}\;=\;125$$

So the variance is 125.
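As a quick check (not part of the original tutorial), a few lines of Python confirm the result:

```python
# Population variance of the example data above
xs = [25, 35, 45, 55]
n = len(xs)
mean = sum(xs) / n                          # 40.0
var = sum((x - mean) ** 2 for x in xs) / n  # divide by N for the population
print(mean, var)                            # prints: 40.0 125.0
```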
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9687831997871399, "perplexity": 2150.1137551053334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703644033.96/warc/CC-MAIN-20210125185643-20210125215643-00781.warc.gz"}
http://tex.stackexchange.com/questions/56602/how-to-insert-custom-text-in-place-of-figure
# How to insert custom text in place of figure

I would like to replace some figures in a document with a black frame of the same size as the figure, such as the one you get when using \includegraphics in draft mode. However, draft mode places the name of the figure file inside the framed box. Is there a way to replace this text with centered text stating "Figure removed due to copyright restrictions"?

A suggestion is the combination of collectbox, adjustbox and tikz. I tried to avoid TikZ, but I wasn't able to create the red rectangle with adjustbox alone. The code provides two things:

1. A switch to print either the image or the copyright notice. You choose with \copyrightimagefalse or \copyrightimagetrue.
2. Instead of using \includegraphics you must use \copyrightimage, which works with the previous switch.

The Code

\documentclass{article}
\usepackage{mwe}% or load 'graphicx' and 'blindtext' manually
\usepackage{xcolor}
\usepackage{collectbox}
\usepackage{adjustbox}
\usepackage{tikz}

\newif\ifcopyrightimage
\newcommand{\copyrightimage}[2][]{%
  \collectbox{%
    \ifcopyrightimage
      \tikz[outer sep=0pt]\node[fill=red!20,minimum height=\totalheight,minimum width=\width]{%
        \smash{\parbox{\width}{\centering\large\bfseries Figure removed due to copyright restrictions}}%
      };%
    \else
      \BOXCONTENT
    \fi
  }{\adjincludegraphics[#1]{#2}}%
}

\begin{document}
\begin{figure}[!ht]
  \centering
  \copyrightimagetrue
  \copyrightimage[width=0.48\linewidth]{example-image}
\end{figure}
\blindtext
\begin{figure}[!ht]
  \centering
  \copyrightimagefalse
  \copyrightimage[width=0.48\linewidth]{example-image}
\end{figure}
\end{document}

Comments:

Thanks for your suggestion. However, I cannot seem to get this code working, even if I simply copy-and-paste it and change only the image file. pdfLaTeX keeps complaining that \copyrightimage is an undefined control sequence:

Undefined control sequence.
\copyrightimage ...ENT \fi }{\adjincludegraphics [#1]{#2}}
l.24 \copyrightimage[width=0.48\linewidth]{dice}
?

Could you please confirm that the code works as is? Thanks again... – Miguel May 20 '12 at 17:11

Thanks - the code now runs! But the box is too wide and shifted to the right (it lies outside the text margins when the image has width = \textwidth). Compare:

\begin{figure}[!ht]
\centering
\includegraphics[draft, width=\textwidth]{some-image}
\end{figure}
\blindtext

versus

\begin{figure}[!ht]
\centering\copyrightimagetrue
\copyrightimage[width=\textwidth]{some-image}
\end{figure}
\blindtext – Miguel May 21 '12 at 7:13

@Miguel: You use \includegraphics instead of \copyrightimage. As I wrote, I defined a new command instead of redefining \includegraphics. – Marco Daniel May 21 '12 at 18:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8270887732505798, "perplexity": 3519.235984484038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678666156/warc/CC-MAIN-20140313024426-00074-ip-10-183-142-35.ec2.internal.warc.gz"}