url: stringlengths (14 to 2.42k)
text: stringlengths (100 to 1.02M)
date: stringlengths (19 to 19)
metadata: stringlengths (1.06k to 1.1k)
http://www.solutioninn.com/civil-engineers-believe-that-w-the-amount-of-weight-in
Question Civil engineers believe that W, the amount of weight (in units of 1000 pounds) that a certain span of a bridge can withstand without structural damage resulting, is normally distributed with mean 400 and standard deviation 40. Suppose that the weight (again, in units of 1000 pounds) of a car is a random variable with mean 3 and standard deviation .3. Approximately how many cars would have to be on the bridge span for the probability of structural damage to exceed .1?
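A quick way to check this normal-approximation problem (the page gives no solution, so this sketch and its function name are mine): damage occurs when the total car weight exceeds W, so with n cars the quantity D = W − ΣXᵢ is approximately normal with mean 400 − 3n and variance 40² + n(0.3)², and damage means D < 0.

```python
from math import erfc, sqrt

def p_damage(n):
    """Approximate P(total car weight exceeds bridge capacity) for n cars.

    D = W - sum(X_i) is approximately normal with
    mean 400 - 3n and variance 40^2 + n * 0.3^2; damage means D < 0.
    """
    mu = 400 - 3 * n
    sigma = sqrt(40**2 + n * 0.3**2)
    # P(Z < -mu/sigma) = 0.5 * erfc((mu/sigma) / sqrt(2))
    return 0.5 * erfc((mu / sigma) / sqrt(2))

# Smallest n with damage probability above 0.1:
n = 1
while p_damage(n) <= 0.1:
    n += 1
print(n, round(p_damage(n), 3))  # 117 0.111
```

So roughly 117 cars are needed before the damage probability exceeds 0.1.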
2016-10-22 09:59:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9094549417495728, "perplexity": 176.67718601384175}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718866.34/warc/CC-MAIN-20161020183838-00017-ip-10-171-6-4.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1621659/when-is-the-frobenius-norm-equal-to-the-spectral-radius
When is the Frobenius norm equal to the spectral radius? I know that the spectral radius is $\rho(A) = \max_l |\lambda_l| = S_{max}^2$ and that the Frobenius norm is $\|A\|_F = \sqrt{\operatorname{tr}(A^*A)} = (\sum_{k}S_k^2)^{1/2}$, which means I want to find the matrices A for which the following is true $$\|A\|_F = \sqrt{\operatorname{tr}(A^*A)} = (\sum_{k}S_k^2)^{1/2} = S_{max}^2$$ So is the spectral radius equal to the Frobenius norm if A is a square matrix with its largest eigenvalue equal to |1|? (S are the singular values) We assume that $A\not= 0$. Let $\operatorname{spectrum}(A)=(\lambda_i)$ where $|\lambda_1|\geq |\lambda_2|\geq\cdots$ and $\Sigma(A)=(\sigma_i)$ where $\sigma_1\geq \sigma_2\geq\cdots$. Then $\rho(A)=|\lambda_1|\leq \sigma_1\leq \|A\|_F=\sqrt{\sum_i \sigma_i^2}$. If $\rho(A)=\|A\|_F$, equality must hold throughout, so $|\lambda_1|=\sigma_1$ and $\sigma_2=\cdots=\sigma_n=0$. Hence $\operatorname{rank}(A)=1$ and $A=uv^*$ where $u,v$ are non-zero vectors. One has $AA^*=\|v\|^2uu^*$, $\lambda_1=\operatorname{trace}(uv^*)=v^*u$ and $\sigma_1^2=\|v\|^2\operatorname{trace}(uu^*)=\|v\|^2\|u\|^2$. Moreover $|\lambda_1|=|v^*u|\leq \|u\|\|v\|=\sigma_1$; since this must be an equality, the Cauchy-Schwarz inequality is an equality, which implies that $u,v$ are parallel. Conclusion: $A=0$ or $A=\alpha uu^*$ where $\alpha\in \mathbb{C}^*$ and $u$ is a non-zero vector. (The converse is easy.)
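The conclusion (equality exactly when A is a nonzero scalar multiple of $uu^*$) is easy to check numerically; a small NumPy sketch, with a matrix of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build A = alpha * u u^* for a random complex u and a scalar alpha.
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
alpha = 2.0 - 1.5j
A = alpha * np.outer(u, u.conj())

rho = max(abs(np.linalg.eigvals(A)))   # spectral radius
fro = np.linalg.norm(A, 'fro')         # Frobenius norm
print(np.isclose(rho, fro))            # True: equality holds for A = alpha * u u^*

# A generic matrix has spectral radius strictly below the Frobenius norm:
B = rng.standard_normal((4, 4))
print(max(abs(np.linalg.eigvals(B))) < np.linalg.norm(B, 'fro'))  # True
```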
2019-10-14 06:21:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9916867017745972, "perplexity": 75.94027022096977}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649232.14/warc/CC-MAIN-20191014052140-20191014075140-00251.warc.gz"}
http://mathhelpforum.com/number-theory/17157-greatest-product-sum-print.html
# Greatest product from a sum • July 23rd 2007, 11:33 PM DivideBy0 Greatest product from a sum The sum of n positive integers is 19. What is the maximum possible product of these n numbers? Can you also please explain why this is so? • July 24th 2007, 11:30 PM CaptainBlack Quote: Originally Posted by DivideBy0 The sum of n positive integers is 19. What is the maximum possible product of these n numbers? Can you also please explain why this is so? Well as nobody else has answered this I will tell you what I know, though it's not a complete solution. The arithmetic-geometric mean inequality tells us that if there are $N$ numbers involved, and that these are $n_i,\ i=1, .. N$, then: $ \left[ \frac{19}{N} \right]^N \ge \prod_1^N n_i $ Regarding the left-hand side of this inequality as a function of N, this has a maximum $\approx 1085.4$ when $N=7$. So we have that the maximum possible value of the product of an integer partition of $19$ is less than or equal to $1085$. The partition $3,3,3,3,3,2,2$ has product $972$ and I would not be surprised if this were the maximum, but have not done enough to convince even myself that this is the case. RonL • July 25th 2007, 02:06 AM CaptainBlack Quote: Originally Posted by CaptainBlack Well as nobody else has answered this I will tell you what I know, though it's not a complete solution. The arithmetic-geometric mean inequality tells us that if there are $N$ numbers involved, and that these are $n_i,\ i=1, .. N$, then: $ \left[ \frac{19}{N} \right]^N \ge \prod_1^N n_i $ Regarding the left-hand side of this inequality as a function of N, this has a maximum $\approx 1085.4$ when $N=7$. So we have that the maximum possible value of the product of an integer partition of $19$ is less than or equal to $1085$. The partition $3,3,3,3,3,2,2$ has product $972$ and I would not be surprised if this were the maximum, but have not done enough to convince even myself that this is the case. 
RonL Now as $ \left[ \frac{19}{9} \right]^9 \approx 832.9 $ and $ \left[ \frac{19}{5} \right]^5 \approx 792.4 $ the above example of a 7-partition with a product of $972$ shows that the partition that gives the maximum product must be a 6, 7 or 8-partition. (Exhaustive search shows that $972$ is indeed the maximum product and it can be achieved with either a 6 or a 7-partition.) RonL • July 25th 2007, 03:25 AM CaptainBlack Quote: Originally Posted by CaptainBlack The partition $3,3,3,3,3,2,2$ has product $972$ and I would not be surprised if this were the maximum, but have not done enough to convince even myself that this is the case. The reason for guessing that this partition is a good candidate for the maximum product is that the arithmetic-geometric mean inequality is an equality when all the numbers are equal. We can't achieve that in this case as 19 is prime, but we can look at partitions where the elements are as near equal as possible, and that is what I did here. The 6-partition of 19 which achieves the maximum product can also be found in this way. RonL • July 27th 2007, 10:38 AM ray_sitf What about [19/(19/e)] to the power of (19/e)? This gives 1085.405992... I know the OP wanted integers, but this suggests the following procedure. If the number leaves a remainder of 2 when divided by three, then the max. answer is 3*3*3*...*2. If the number leaves a remainder of 1 when divided by three, then the max. answer is 3*3*3*...*2*2. If there is no remainder on division by three, then it's 3*3*3*...*3. E.g. for 23 it's 3^7 * 2, and for 25 it's 3^7 * 2 * 2 • July 27th 2007, 10:52 AM CaptainBlack Quote: Originally Posted by ray_sitf What about [19/(19/e)] to the power of (19/e)? This gives 1085.405992... I know the OP wanted integers, but this suggests the following procedure. 
I know what you are talking about, but I doubt many others will. RonL • July 27th 2007, 11:09 AM CaptainBlack Quote: Originally Posted by ray_sitf I know the OP wanted integers, but this suggests the following procedure. If the number leaves a remainder of 2 when divided by three, then the max. answer is 3*3*3*...*2. If the number leaves a remainder of 1 when divided by three, then the max. answer is 3*3*3*...*2*2. If there is no remainder on division by three, then it's 3*3*3*...*3. E.g. for 23 it's 3^7 * 2, and for 25 it's 3^7 * 2 * 2 Very likely, but you will need to prove it. RonL • July 27th 2007, 11:59 PM DivideBy0 You're correct CaptainBlack, the answer is 972. I had a hard time understanding the inequality, but I guess as long as I take it for granted I should be fine. Also, is it an actual fact that 3 is the best integer and $e$ is the best number to use for something like this? • July 28th 2007, 06:14 AM CaptainBlack Quote: Originally Posted by DivideBy0 You're correct CaptainBlack, the answer is 972. I had a hard time understanding the inequality, but I guess as long as I take it for granted I should be fine. Also, is it an actual fact that 3 is the best integer and $e$ is the best number to use for something like this? The logic says that the nearest integer to N/e is a good choice for the number of terms in the sum and product, and that lots of 3's and a few 2's will get you close to the maximum for the product, but it's not definitive. For instance you can get as large a product with 6 terms as with 7 when the sum is 19. RonL
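The exhaustive search mentioned in the thread is only a few lines of code; a sketch of my own, using a memoized recursion over the first part of the partition:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def max_product(n):
    """Maximum product of positive integers summing to n."""
    if n == 0:
        return 1
    # Either take n as a single part (i = n, since max_product(0) = 1),
    # or split off a first part i and recurse on the remainder.
    return max(i * max_product(n - i) for i in range(1, n + 1))

print(max_product(19))  # 972
```

This confirms 972 as the maximum; it is attained by the 7-partition 3+3+3+3+3+2+2 and also by the 6-partition 3+3+3+3+3+4, matching the thread's claim that 6 and 7 terms tie.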
2016-04-29 17:06:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7774489521980286, "perplexity": 390.30531469769755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111374.13/warc/CC-MAIN-20160428161511-00217-ip-10-239-7-51.ec2.internal.warc.gz"}
https://homework.cpm.org/category/ACC/textbook/ccaa8/chapter/5%20Unit%206/lesson/CCA:%205.2.1/problem/5-55
5-55. This problem is a checkpoint for laws of exponents and scientific notation. It will be referred to as Checkpoint 5A. Simplify each expression. In parts (e) and (f), write the final answer in scientific notation.

a.  $4^2·4^5$

b.  $(5^0)^3$

c.  $x^{−5}·x^3$

d.  $(x^{−1}·y^2)^3$

e.  $(8\times10^5)·(1.6\times10^{−2})$

f.  $\frac { 4 \times 10 ^ { 3 } } { 5 \times 10 ^ { 5 } }$

Check your answers by referring to the Checkpoint 5A materials located at the back of your book. Ideally, at this point you are comfortable working with these types of problems and can solve them correctly. If you feel that you need more confidence when solving these types of problems, then review the Checkpoint 5A materials and try the practice problems provided. From this point on, you will be expected to do problems like these correctly and with confidence. Answers and extra practice are located in the back of your printed textbook or in the Reference Tab of your eBook.
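The numeric parts can be sanity-checked directly (my own sketch, not the textbook's materials; the symbolic parts simplify by adding exponents, e.g. $x^{-5}·x^3 = x^{-2}$ and $(x^{-1}·y^2)^3 = x^{-3}y^6$):

```python
# Product of powers with the same base: add the exponents.
print(4**2 * 4**5 == 4**7)        # True
# Power of a power: multiply the exponents, and anything^0 is 1.
print((5**0)**3 == 1)             # True
# Scientific-notation products/quotients: multiply/divide the
# mantissas and add/subtract the exponents of 10.
print(f"{8e5 * 1.6e-2:.2e}")      # 1.28e+04
print(f"{4e3 / 5e5:.0e}")         # 8e-03
```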
2021-09-23 03:56:12
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4421514570713043, "perplexity": 1286.8351265970389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057416.67/warc/CC-MAIN-20210923013955-20210923043955-00590.warc.gz"}
https://zbmath.org/?q=an%3A1218.93073
# zbMATH — the first resource for mathematics Stabilization of linear strict-feedback systems with delayed integrators. (English) Zbl 1218.93073 Summary: The problem of compensation of input delays for unstable linear systems was solved in the late 1970s. Systems with simultaneous input and state delay have remained a challenge, although exponential stabilization has been solved for systems that are not exponentially unstable, such as chains of delayed integrators and systems in the ‘feedforward’ form. We consider a general system in strict-feedback form with delayed integrators, which is an example of a particularly challenging class of exponentially unstable systems with simultaneous input and state delays, and design a predictor feedback controller for this class of systems. Exponential stability is proven with the aid of a Lyapunov-Krasovskii functional that we construct using the PDE backstepping approach. ##### MSC: 93D15 Stabilization of systems by feedback 93C05 Linear systems in control theory 93D30 Lyapunov and storage functions ##### Keywords: delay systems; predictor; strict-feedback systems Full Text: ##### References: [1] Artstein, Z., Linear systems with delayed controls: a reduction, IEEE transactions on automatic control, 27, 869-879, (1982) · Zbl 0486.93011 [2] Bekiaris-Liberis, N.; Krstic, M., Delay-adaptive feedback for linear feedforward systems, Systems and control letters, 59, 277-283, (2010) · Zbl 1191.93070 [3] Bresch-Pietri, D.; Krstic, Miroslav, Delay-adaptive full-state predictor feedback for systems with unknown long actuator delay, American control conference, (2009) · Zbl 1175.93081 [4] Bresch-Pietri, D.; Krstic, Miroslav, Adaptive trajectory tracking despite unknown input delay and plant parameters, Automatica, 45, 2074-2081, (2009) · Zbl 1175.93081 [5] Evesque, S.; Annaswamy, A.M.; Niculescu, S.; Dowling, A.P., Adaptive control of a class of time-delay systems, ASME transactions on dynamics, systems, measurement, and control, 125, 
186-193, (2003) [6] Fiagbedzi, Y.A.; Pearson, A.E., Feedback stabilization of linear autonomous time lag systems, IEEE transactions on automatic control, 31, 847-855, (1986) · Zbl 0601.93045 [7] Hale, J.K.; Verduyn Lunel, S.M., Introduction to functional differential equations, (1993), Springer-Verlag New York · Zbl 0787.34002 [8] Jankovic, M., Control lyapunov – razumikhin functions and robust stabilization of time delay systems, IEEE transactions on automatic control, 46, 1048-1060, (2001) · Zbl 1023.93056 [9] Jankovic, M., Control of nonlinear systems with time delay, IEEE conference on decision and control, (2003) [10] Jankovic, M., Forwarding, backstepping, and finite spectrum assignment for time delay systems, Automatica, 45, 1, 2-9, (2009) · Zbl 1154.93340 [11] Jankovic, M., Cross-term forwarding for systems with time delay, IEEE transactions on automatic control, 54, 3, 498-511, (2009) · Zbl 1367.93513 [12] Jankovic, M., Recursive predictor design for state and output feedback controllers for linear time delay systems, Automatica, 46, 3, 510-517, (2010) · Zbl 1194.93077 [13] Karafyllis, I., Finite-time global stabilization by means of time-varying distributed delay feedback, SIAM journal on control and optimization, 45, 1, 320-342, (2006) · Zbl 1132.93036 [14] Karafyllis, I., & Jiang, Z. P. (2008). Necessary and sufficient Lyapunov-like conditions for robust nonlinear stabilization. ESAIM Control, Optimization and Calculus of Variations, in press, (doi:10.1051/cocv/2009029), available at: http://www.esaim-cocv.org/.. 
[15] Krstic, M., On compensating long actuator delays in nonlinear control, IEEE transactions on automatic control, 53, 1684-1688, (2008) · Zbl 1367.93437 [16] Krstic, M., Input delay compensation for forward complete and feedforward nonlinear systems, IEEE transactions on automatic control, 55, 287-303, (2010) · Zbl 1368.93546 [17] Krstic, M.; Kanellakopoulos, I.; Kokotovic, P.V., Nonlinear and adaptive control design, (1995), Wiley · Zbl 0763.93043 [18] Krstic, M.; Smyshlyaev, A., Backstepping boundary control for first-order hyperbolic PDEs and application to systems with actuator and sensor delays, Systems and control letters, 57, 750-758, (2008) · Zbl 1153.93022 [19] Kwon, W.H.; Pearson, A.E., Feedback stabilization of linear systems with delayed control, IEEE transactions on automatic control, 25, 266-269, (1980) · Zbl 0438.93055 [20] Liu, W.-J.; Krstic, M., Adaptive control of burgers’ equation with unknown viscosity, International journal of adaptive control and signal processing, 15, 745-766, (2001) · Zbl 0995.93039 [21] Loiseau, J.J., Algebraic tools for the control and stabilization of time-delay systems, Annual reviews in control, 24, 135-149, (2000) [22] Manitius, A.Z.; Olbrot, A.W., Finite spectrum assignment for systems with delays, IEEE transactions on automatic control, 24, 541-553, (1979) · Zbl 0425.93029 [23] Mazenc, F.; Bliman, P.-A., Backstepping design for time-delay nonlinear systems, IEEE transactions on automatic control, 51, 149-154, (2004) · Zbl 1366.93211 [24] Mazenc, F.; Mondie, S.; Francisco, R., Global asymptotic stabilization of feedforward systems with delay at the input, IEEE transactions on automatic control, 49, 844-850, (2004) · Zbl 1365.93409 [25] Mazenc, F.; Mondie, S.; Niculescu, S.I., Global asymptotic stabilization for chains of integrators with a delay in the input, IEEE transactions on automatic control, 48, 1, 57-63, (2003) · Zbl 1364.93658 [26] Mondie, S.; Michiels, W., Finite spectrum assignment of unstable time-delay 
systems with a safe implementation, IEEE transactions on automatic control, 48, 2207-2212, (2003) · Zbl 1364.93312 [27] Niculescu, S.-I.; Annaswamy, A.M., An adaptive Smith-controller for time-delay systems with relative degree $$n \leq 2$$, Systems and control letters, 49, 347-358, (2003) · Zbl 1157.93392 [28] Olbrot, A.W., Stabilizability, detectability, and spectrum assignment for linear autonomous systems with general time delays, IEEE transactions on automatic control, 23, 887-890, (1978) · Zbl 0399.93008 [29] Olgac, N.; Sipahi, R., An exact method for the stability analysis of time-delayed linear time-invariant (LTI) systems, IEEE transactions on automatic control, 47, 793-797, (2002) · Zbl 1364.93576 [30] Richard, J.-P., Time-delay systems: an overview of some recent advances and open problems, Automatica, 39, 1667-1694, (2003) · Zbl 1145.93302 [31] Smith, O.J.M., A controller to overcome dead time, ISA transactions, 6, 28-33, (1959) [32] Watanabe, K.; Nobuyama, E.; Kitamori, T.; Ito, M., A new algorithm for finite spectrum assignment of single-input systems with time delay, IEEE transactions on automatic control, 37, 1377-1383, (1992) · Zbl 0755.93022 [33] Yildiray, Y.; Annaswamy, A.; Kolmanovsky, I.V.; Yanakiev, D., Adaptive posicast controller for time-delay systems with relative degree $$n \leq 2$$, Automatica, 46, 2, 279-289, (2010) · Zbl 1205.93084 [34] Zhong, Q.-C., Robust control of time-delay systems, (2006), Springer [35] Zhou, J., Wang, W., & Wen, C. (2008). Adaptive backstepping control of uncertain systems with unknown input time delay. In FAC World congress. · Zbl 1166.93339 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-07-29 13:42:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5698050260543823, "perplexity": 5878.860889690546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153857.70/warc/CC-MAIN-20210729105515-20210729135515-00672.warc.gz"}
https://www.physicsforums.com/threads/show-me-a-derivation-of-the-bke.83059/
# Show me a derivation of the BKE For any of you advanced dynamics people, can you please show me a derivation of the BKE (Basic Kinematic Equation)? The BKE is: (d/dt)_e (vector) = (d/dt)_u (vector) + omega between e and u X (vector). Sometimes an additional term is added for the linear velocity between the frames, but for simplicity let's assume this is zero. Here e is the inertial frame, u is the working frame, omega is the angular velocity of u relative to e, and X is the cross product operator. I know just by looking at a simple single-rotation problem that the solution is trivial, but does anyone know of a formal proof?
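Short of a formal proof (which differentiates the rotating frame's unit vectors, using ė_i = ω × e_i), the identity can at least be checked numerically for a frame rotating at constant rate; a sketch of my own:

```python
import numpy as np

# Frame u rotates about z at rate w relative to inertial frame e.
w = 0.7
omega = np.array([0.0, 0.0, w])

def R(t):
    """Orientation of frame u relative to e at time t (rotation about z)."""
    c, s = np.cos(w * t), np.sin(w * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# A vector fixed in the rotating frame: its u-frame derivative is zero,
# so the BKE predicts its e-frame derivative is omega x v.
v_u = np.array([1.0, 2.0, 3.0])
t, h = 0.5, 1e-6

v_e = R(t) @ v_u
dv_numeric = (R(t + h) @ v_u - R(t - h) @ v_u) / (2 * h)  # central difference
dv_bke = np.cross(omega, v_e)

print(np.allclose(dv_numeric, dv_bke, atol=1e-6))  # True
```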
2021-08-02 00:45:46
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8439816236495972, "perplexity": 1256.6108203630502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154277.15/warc/CC-MAIN-20210801221329-20210802011329-00005.warc.gz"}
http://mcb112.org/w02/w02-section.html
# Section 02: Hashing and Randomness

Notes by William Mallard [9/15/2017]

## Python tips

### Random functions

NumPy includes a random module which includes a number of handy functions for generating and working with random numbers. To access numpy.random’s functions, import numpy:

    import numpy as np

To generate a random number from the half-open interval [0,1):

    x = np.random.random()

To select an item from a list according to a list of weights:

    L = ['abc', 'def', 'ghi']
    w = [.2, .5, .3]
    x = np.random.choice(L, p=w)

Note that your list of weights is supposed to be a probability distribution, so it must sum to 1. If it doesn’t, choice() will complain.

To select a number from a normal distribution with mean mu and standard deviation sigma:

    x = np.random.normal(mu, sigma)

You can read up on the various random functions on the SciPy website. We’ll discuss seed() and seeding your random number generator below.

### Translation tables

If you want to transform a string according to some specific set of character substitutions, you can efficiently do so with a translation table.

    T = str.maketrans('abc', 'xyz')
    print('abacab'.translate(T))   # xyxzxy
    print('ccccba'.translate(T))   # zzzzyx
    print('bcbcba'.translate(T))   # yzyzyx

Note that we only need to build the translation table once. As long as we store it somewhere, we can reuse the same translation table over and over again.

### String reversal

To reverse a string, we can use a common list and string slicing idiom.

    S = 'abcdef'
    print(S[::-1])   # fedcba

### Writing gzip files

Text-based bioinformatics data is usually stored and shared in compressed form. Common compression tools are gzip (.gz files), WinZip (.zip files), and bzip2 (.bz2 files). If you have an uncompressed file called foo.txt, simply run gzip foo.txt to generate a compressed version called foo.txt.gz. If foo.txt was larger than a few kilobytes, this compressed version will take up a fraction of the space.
To generate a gzip’d text file directly from Python, you can use the gzip library. The gzip library provides an open() function that works just like the normal open() function, except it compresses the data before writing it to disk. There is one small difference in its usage: instead of opening the file in write mode with ‘w’, we need to specify that we want to open the gzip file in text-writing mode with ‘wt’.

    import gzip
    with gzip.open('foo.txt.gz', 'wt') as fd:
        for line in lines:
            print(line, file=fd)

## Hashing

### Hash functions

A hash function maps keys (data of arbitrary size) to hashes (values of a fixed size). For example, consider a hash function that takes any possible string of characters as its input, and generates a 32-bit integer as its output. This function would take the string’s bytes, smash them together via some special combination of bit shifts and logical operators, and spit out a 32-bit integer. It does so in such a way that keys will be uniformly distributed across the range of outputs – so if you hash a string, and then change it by a single letter and hash it again, the two hash values will be totally different.

### Hash tables

A hash table is a data structure built on top of a normal list, with its length equal to the number of possible hashes. To add a value to the hash table, you hash the key, and add the value to the list at the index corresponding to the hash.

## Randomness

### Random vs Pseudorandom

Randomness refers to the absence of any pattern. Truly random numbers only arise from physical processes (eg, radioactive decay, thermal noise, etc). Sequences of random numbers exhibit certain statistical properties that are useful for various computational applications. Computers are deterministic, so they cannot generate truly random numbers on their own. However, there are ways to make them generate sequences of numbers with many of the statistical properties of a truly random sequence.
We call these random-looking (though ultimately deterministic) sequences pseudorandom.

### Pseudorandom Number Generators

Pseudorandom number generators (PRNGs) produce sequences of pseudorandom numbers. At their core is a recursive function combining bit shifts and bitwise logical operations. You feed the previous number into the generator to get the next number in the sequence.

### Seeds

Where does the PRNG get its very first number? By default, Python seeds its PRNG with whatever time it is when you ask for your first random number. But there’s nothing stopping you from overriding that and giving it your favorite number!

A seed is a number you give a PRNG to initialize its internal state. So in a sense, the seed serves as a unique identifier for a sequence of pseudorandom numbers. What’s nice about this is that you can initialize Python’s PRNG to some state at the beginning of your program, and then every subsequent run will use the same sequence of pseudorandom numbers. Why would you want that?

1. Debugging. This is useful for comparing your results as you tweak your code. If your edits didn’t alter the number or order of calls to the PRNG, then the random data you’re working with should be consistent across runs.
2. Reproducibility. When you give your code to someone else, or publish it in a journal, other people can re-run your code and verify that they get the exact same output. Biology is currently plagued by irreproducible results. Biological systems are intrinsically noisy, so there’s probably a limit to how reproducible we can make results from the bench. But computational analyses have no excuse for being irreproducible, as long as you provide your analysis code and seed your random number generators!

To seed Python’s pseudorandom number generator:

    import numpy as np
    np.random.seed(42)

You can then proceed to use functions from np.random as usual.

    import numpy as np

    # Seed the PRNG with 1, and randomly
    # generate 20 integers from 0 through 9.
    np.random.seed(1)
    np.random.randint(10, size=20)
    # Result: array([5, 8, 9, 5, 0, 0, 1, 7, 6, 9, 2, 4, 5, 2, 4, 2, 4, 7, 7, 9])

    # Seed the PRNG with 2, and randomly
    # generate 20 integers from 0 through 9.
    np.random.seed(2)
    np.random.randint(10, size=20)
    # Result: array([8, 8, 6, 2, 8, 7, 2, 1, 5, 4, 4, 5, 7, 3, 6, 4, 3, 7, 6, 1])
    # These 20 numbers differ from the first 20.

    # Seed the PRNG with 1 again, and randomly
    # generate 20 integers from 0 through 9.
    np.random.seed(1)
    np.random.randint(10, size=20)
    # Result: array([5, 8, 9, 5, 0, 0, 1, 7, 6, 9, 2, 4, 5, 2, 4, 2, 4, 7, 7, 9])
    # These 20 numbers match the first 20 exactly!

    # Seed the PRNG with 1 again, and randomly
    # generate 20 integers from 0 through 9.
    np.random.seed(1)
    np.random.randint(10, size=5)
    np.random.randint(10, size=5)
    np.random.randint(10, size=5)
    np.random.randint(10, size=5)
    # Result:
    # array([5, 8, 9, 5, 0])
    # array([0, 1, 7, 6, 9])
    # array([2, 4, 5, 2, 4])
    # array([2, 4, 7, 7, 9])
    # These 20 numbers still match the first 20,
    # even though we pulled them out 5 at a time.
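Returning to the hash table described in the Hashing section above: it can be sketched in a few lines. This is my own toy example, using Python's built-in hash() and chaining (a list per slot) to handle collisions, which the notes don't cover explicitly.

```python
# Minimal hash table sketch: a fixed-size list of buckets,
# indexed by hash(key) modulo the table size.
class MiniHashTable:
    def __init__(self, size=64):
        self.slots = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.slots)

    def put(self, key, value):
        bucket = self.slots[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

t = MiniHashTable()
t.put('ACGT', 4)
t.put('ACGT', 5)       # overwrites the earlier value
print(t.get('ACGT'))   # 5
```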
2018-03-19 12:22:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24277621507644653, "perplexity": 1215.6338665087972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646914.32/warc/CC-MAIN-20180319120712-20180319140712-00448.warc.gz"}
https://www.physicsforums.com/threads/question-about-relative-speeds.680011/
1. Mar 21, 2013 port31 Let's say I have a moving object with speed $v_1$ and mass $m_1$, and it collides with a more massive object of mass $m_2$ that is at rest; when they collide they stick together. If I use momentum conservation I get $m_1 v_1=(m_1+m_2)v_2$, where $v_2$ is the speed after the collision. But what if I wanted to analyze this from the rest frame of $m_1$? It would look as if the more massive object were moving toward me at speed $v_1$, so now I would have $m_2(-v_1)=(m_1+m_2)v_2$. The final speeds would be different in those 2 cases, so what's wrong with my reasoning? 2. Mar 21, 2013 Staff: Mentor The final speeds would indeed be different because you are using a different reference frame. To check to make sure there is no conflict, compare the before and after speeds of each object. The difference should be the same regardless of which frame you choose. 3. Mar 21, 2013 Staff: Mentor If you now transform your answer to the original frame (by adding $v_1$) you'll find that the speeds match.
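The bookkeeping in the replies is easy to verify numerically; a sketch with made-up values ($m_1 = 1$, $m_2 = 4$, $v_1 = 10$):

```python
m1, m2, v1 = 1.0, 4.0, 10.0

# Ground frame: m2 at rest, m1 moving at v1; they stick together.
v2_ground = m1 * v1 / (m1 + m2)

# Rest frame of m1: m2 approaches at -v1.
v2_m1frame = m2 * (-v1) / (m1 + m2)

# The two answers differ, but transforming back to the ground
# frame (adding v1) recovers the same final velocity...
assert abs((v2_m1frame + v1) - v2_ground) < 1e-12

# ...and the change in each object's velocity is frame-independent.
dv1_ground = v2_ground - v1       # m1 goes from v1 to v2
dv1_m1frame = v2_m1frame - 0.0    # m1 goes from 0 to v2'
assert abs(dv1_ground - dv1_m1frame) < 1e-12

print(v2_ground, v2_m1frame)  # 2.0 -8.0
```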
https://www.thejournal.club/c/paper/185641/
#### Some Enumeration Problems in the Duplication-Loss Model of Genome Rearrangement Tandem-duplication-random-loss (TDRL) is an important genome rearrangement operation studied in evolutionary biology. This paper investigates some of the formal properties of TDRL operations on the symmetric group (the space of permutations over an $n$-set). In particular, the cardinality of `balls' of radius one in the TDRL metric, as well as the cardinality of the maximum intersection of two such balls, are determined. The corresponding problems for the so-called mirror (or palindromic) TDRL rearrangement operations are also solved. The results represent an initial step in the study of error correction and reconstruction problems in this context and are of potential interest in DNA-based data storage applications.
https://answers.ros.org/question/276307/how-to-share-common-libs-in-ros-workspace/
# How to share common libs in a ROS workspace

I have several packages in my workspace and I'm currently at the point where I want to avoid code duplication. At the moment I have a package called myproject_common in which all msg and srv files are located. After building the workspace they are properly available in all my other packages.

Now I want to move my common .py files into the common package and also make them available. After calling catkin_make, the msg and srv classes are available under ./devel/lib/python2.7/dist-packages/, which is quite nice because this also makes them available in my IDE without the need to import anything manually.

How can I accomplish this with self-written common Python files? What do I need to change in my CMakeLists.txt to get this working?

edit: I separated the msg/srv from the common modules, because the msg/srv stuff works fine. The package with the shared modules is now named myproject_common_lib. This is my directory structure:

    myproject_common_lib
    ├── CMakeLists.txt
    ├── package.xml
    ├── setup.py
    └── src
        └── myproject_common_lib
            ├── coordinate.py
            ├── coordinate.pyc
            └── __init__.py

    2 directories, 6 files

The content of my __init__.py looks like this:

    from coordinate import Coordinate

The content of my setup.py:

    ## ! DO NOT MANUALLY INVOKE THIS setup.py, USE CATKIN INSTEAD
    from distutils.core import setup
    from catkin_pkg.python_setup import generate_distutils_setup

    # fetch values from package.xml
    setup_args = generate_distutils_setup(
        packages=['myproject_common_lib'],
        package_dir={'': 'src'},
    )

    setup(**setup_args)

The content of my CMakeLists.txt:

    cmake_minimum_required(VERSION 2.8.3)
    project(myproject_common_lib)
    find_package(catkin REQUIRED COMPONENTS
      rospy
    )
    catkin_python_setup()
    catkin_package()
    include_directories(
      $(catkin_INCLUDE_DIRS)
    )

When I want to import the shared module I use the following import (which does not work):

    from myproject_common_lib.coordinate import Coordinate

The content of my devel folder looks like this (short version):

    devel
    ├── lib
    │   └── python2.7
    │       └── dist-packages
    │           └── myproject_common_lib
    │               ├── __init__.py
    │               └── __init__.pyc
    └── share
        ├── myproject_common_lib
        │   └── cmake
        │       ├── myproject_common_libConfig.cmake
        │       └── myproject_common_libConfig-version.cmake

I cannot see any difference from the posted tutorial. Also when watching this video I cannot see any difference :-(

edit 2: This is what PyCharm sees

## 1 Answer

You have to write Python modules (= libraries) and declare them; see this tutorial. Your modules will then be available from your other packages and you will be able to import them using:

    import myproject_common.my_module

## Comments

When I follow the tutorial it works for the script but it does not work for my modules. The import cannot be found. The error I get is Cannot find reference 'coordinate' in '__init__.py', which comes from PyCharm (2017-11-20 10:26:22 -0500)

Did you add an __init__.py file to the directory that contains your libraries/modules? (2017-11-20 11:54:22 -0500)

@jayess and @rreignier I updated my post with more information. Actually I cannot see any difference between my stuff and the provided tutorial (2017-11-21 02:22:37 -0500)

I found the error in the CMakeLists.txt: the $(catkin_INCLUDE_DIRS) part must be written with {} (2017-11-21 09:03:51 -0500)
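For reference, given the fix noted in the last comment, the include_directories call should use CMake variable syntax, ${...}, instead of $(...). A sketch of the corrected CMakeLists.txt (package and component names exactly as in the question):

```cmake
cmake_minimum_required(VERSION 2.8.3)
project(myproject_common_lib)

find_package(catkin REQUIRED COMPONENTS
  rospy
)

catkin_python_setup()
catkin_package()

include_directories(
  ${catkin_INCLUDE_DIRS}  # ${} is CMake variable expansion; $() is not
)
```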
https://tex.stackexchange.com/questions/569494/change-in-video-embedding
# Change in Video embedding?

I am trying to embed a video in a beamer presentation. I must specify that the example below, using the media9 package, was previously working. It is not working anymore, and I don't understand why. I am aware that many similar topics were created, and from them I managed to produce the code below, which used to work.

    \documentclass[10pt]{beamer}
    \usetheme{Frankfurt}
    \usepackage{media9}
    \begin{document}
    \begin{frame}
    \frametitle{title}
    \includemedia[
    width=1\linewidth,height=0.3\linewidth,
    activate=pageopen,
    passcontext,
    flashvars={source=NRCollisionalQuenchToh.mp4
    %&autoplay=true
    %&loop=true
    }
    ]{}{VPlayer.swf}
    \end{frame}
    \end{document}

Once again, this was working, but that's not the case anymore. I am working on Windows 10 using TeXstudio as editor. The MiKTeX console is up to date, and I have the latest version of Adobe Acrobat Reader DC (20.013.20064). It compiles without error, and Adobe opens the PDF. However, once on the slide, a small box appears saying it's loading the video. Once the box shows "ready" nothing happens, and the video doesn't play.

Also, I am aware that it should be possible to embed video in beamer directly with the multimedia package. I am totally fine with that, but I could never get it to work. I am also aware that it is possible to use external links, and I know how to do that, but this is not what I am looking for. Here is my code for that:

    \documentclass[10pt]{beamer}
    \usetheme{Frankfurt}
    \usepackage{multimedia}
    \begin{document}
    \begin{frame}
    \frametitle{title}
    \movie[
    poster, showcontrols=true]
    {Embedded}{water.avi}\\
    \end{frame}
    \end{document}

I am open to any solutions for embedded videos in a LaTeX beamer presentation, whatever video format I should use. I would prefer solutions using the media9 package, to be able to send the PDF to someone without having to send the video on the side, but at this point I am fine with anything. Thank you in advance for your help.

• Can people watching this tell me if this bit of code is working for them at least? So I can have an idea about where the problem is coming from... Nov 4 '20 at 12:34
• Did you test whether both RichMedia methods (media9 and \embedvideo) work with the example video file example-movie.mp4 from the package mwe? Maybe your video file has an encoding issue. Nov 20 '20 at 8:30
https://www.physicsforums.com/threads/problem-resolving-url.729173/
Problem resolving URL

I encountered a strange problem resolving the URL of a repository. I am running openSUSE 12.1 and Firefox. When I try to browse the openSUSE repository I want, I am told that it does not exist. When I surf to the parent page, avr is shown, but I also get "link does not exist" when I click on it. However, it works from another computer (Windows). Any ideas?

Mentor

Try: flush the DNS cache on the Linux box and on Windows. Somebody is out of sync. Is the DNS server local?

Code:
sudo /etc/init.d/dns-clean restart

windoze: from a cmd.exe window --

Code:
ipconfig /flushdns

Do you get a 404 error? Usually the system will tell you it cannot connect to the site, meaning it does resolve but the physical server is not responding.

I tried out a couple of things in the meanwhile, including restarting both the router and the computer. First, the URL is correctly resolved when starting the computer under Windows. Under openSUSE, when I click on "/avr" on the page
https://www.hackmath.net/en/math-problem/5476
Fridays

Friday the 13th is in 4 days. What is today's date, and what day of the week is it today?

Result: a = 9; b = Monday (Pondeli)

Solution:

$a=13-4=9$

$b=\text{Monday}$
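The weekday arithmetic can be checked with Python's datetime module; September 2013 is used here only as an arbitrary month in which the 13th falls on a Friday:

```python
from datetime import date, timedelta

# Pick a month where the 13th is a Friday (September 2013 is one example).
friday_13th = date(2013, 9, 13)
assert friday_13th.weekday() == 4  # weekday() counts Monday=0, so 4 = Friday

# "Friday the 13th is in 4 days" means today is the 9th.
today = friday_13th - timedelta(days=4)
print(today.day, today.strftime("%A"))  # 9 Monday (in an English locale)
```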
https://hiphive.materialsmodeling.org/moduleref/cluster_space.html
# Cluster space

class hiphive.ClusterSpace(prototype_structure, cutoffs, sum_rules=True, symprec=1e-05, length_scale=0.1)[source]

Primitive object handling clusters and force constants for a structure.

Parameters:
- prototype_structure (ASE Atoms object) – prototype structure; spglib will be used to find a suitable cell based on this structure.
- cutoffs (list) – cutoff radii for different orders, starting with second order
- sum_rules (bool) – if True, the acoustic sum rules will be enforced by constraining the parameters.
- symprec (float) – numerical precision that will be used for analyzing the symmetry (this parameter will be forwarded to spglib)
- length_scale (float) – this will be used as a normalization constant for the eigentensors

Examples

To instantiate a ClusterSpace object one has to specify a prototype structure and cutoff radii for each cluster order that should be included. For example, the following snippet will set up a ClusterSpace object for a BCC structure including second order terms up to a distance of 5 A and third order terms up to a distance of 4 A.

>>> from ase.build import bulk
>>> prim = bulk('W')
>>> cs = ClusterSpace(prim, [5.0, 4.0])

atom_list
    AtomList – the atoms within the cutoff from the center cell
cluster_list
    ClusterList – contains the clusters possible within the cutoff
copy()[source]
cutoffs
    Cutoffs obj – the cutoffs used for constructing the cluster space
get_sum_rule_constraint_matrices(symbolic=True, rotational=False)[source]
    Return the constraint matrices needed for imposing the acoustic (translational) sum rule.
    Returns: dict – dictionary of constraint matrices, where the key is the order of the respective constraint matrix
length_scale
    float – the normalization of the force constants
number_of_dofs
    int – the number of free parameters in the model. If the sum rules are not enforced, the number of DOFs is the same as the total number of eigentensors in all orbits
orbit_data
    list – list of dictionaries containing detailed information for each orbit, e.g. cluster radius and atom types
orbits
    list of Orbit objects – the orbits of the structure
permutations
    list of vectors – lookup for permutation references
prim
    Atoms – the structure of the lattice
print_orbits()[source]
    Prints a list of all orbits.
read(f)[source]
    Load a ClusterSpace instance from file.
    Parameters: f (string or file object) – name of input file (string) or stream to load from (file object)
rotation_matrices
    list of 3x3 matrices – the rotation for each symmetry
spacegroup
    str – the space group of the lattice obtained from spglib
sum_rules
    bool – whether the sum rules are enforced
symprec
    float – the symprec value used when constructing the cluster space
translation_vectors
    list of 3-vectors – the translation for each symmetry
write(fileobj)[source]
    Saves to file or to a file-like object. The instance is saved into a custom format based on tar files. The resulting file will be a valid tar file and can be browsed by a tar reader. The included objects are themselves either pickles, npz, or other tars.
    Parameters: fileobj (str or file-like obj) – if the input is a string, a tar archive will be created in the current directory. If not a string, the input must be a valid file-like object.
wyckoff_sites
    The Wyckoff sites in the primitive cell
https://electronics.stackexchange.com/questions/353253/what-kind-of-signal-processing-circuitry-do-i-need-to-generate-a-line-level-outp
# What kind of signal processing circuitry do I need to generate a line level output on an Arduino?

I am trying to create an Arduino-based music synthesizer. How can I safely generate line level output (+/- 2 volts centered at zero, with a frequency range of 20 Hz to 20 kHz) from my Arduino using a minimal number of components? This is what I imagine the flow will look like, but please correct me if this is wrong.

• Generate a sine wave tone using a DAC (I'm doing this already using an MCP4725)
• Level-shift the signal by -2.5 volts and lower the gain
• To perform level shifting I think I need to generate a negative 5 volts to supply to a dual-supply op amp, but I'm not sure if this is correct

There is a lot of confusing/mixed information on line level requirements. I hooked up the output jack of my MacBook Pro to an oscilloscope and generated a square wave. It looks like the MacBook Pro puts out -2 to 2 volts, so I think this is where my target output voltage should be.

Edit: My target output voltage is 1.25 Vrms, since I am using a QSC PLX3602 amplifier with an input sensitivity of 1.25 Vrms.

Some questions:

• How many milliamps do I need to be able to source for line level?
• Given that I am going to be outputting square waves (which can sometimes damage speakers), is there anything I should keep in mind? I am planning on matching my amplifier's RMS wattage rating with the speakers' RMS rating. Do square waves produce higher current than RMS?
• Can anyone recommend a schematic or components I can use to accomplish the signal conditioning needed to do this safely/without damaging audio equipment?

To perform level shifting I think I need to generate a negative 5 volts to supply to a dual-supply op amp, but I'm not sure if this is correct.

It's much simpler than that. Just add a DC blocking capacitor in series with the output. We'll calculate the value in a moment.
It looks like the MacBook Pro puts out -2 to 2 volts, so I think this is where my target output voltage should be.

See Wikipedia's Line level article for more on this, but that will be plenty.

How many milliamps do I need to be able to source for line level?

Use Ohm's law. You'll need to find the input impedance of what you are driving, but it's usually > 10 k so current drain won't be a problem.

Given that I am going to be outputting square waves (which can sometimes damage speakers), is there anything I should keep in mind? I am planning on matching my amplifier's RMS wattage rating with the speakers' RMS rating. Do square waves produce higher current than RMS?

You're getting mixed up. An RMS measurement allows comparison between different waveforms. If two waveforms have the same RMS value then they will have the same heating effect or power as each other, or as a DC current of the same value. The problem with square waves is that they are high in harmonic content and, theoretically, these harmonics continue up to infinity. You can get an understanding of this from the Fourier transform of a squarish wave.

Figure 1. Fourier transform from time domain to frequency domain. Source: unknown to me.

Can anyone recommend a schematic or components I can use to accomplish the signal conditioning needed to do this safely/without damaging audio equipment?

The capacitor and amplifier input will form a high-pass filter. (Think: it blocks DC, which is 0 Hz.) The cut-off frequency is determined by $f_c = \frac {1}{2 \pi RC}$. You can read more and find a calculator on Learning Electronics.

• My amplifier's input sensitivity is 1.25 Vrms at 8 ohms. Does this mean I need to be able to source 156 mA (1.25/8)*1000? – circuitry Jan 31 '18 at 19:06
• Surely the 8 Ω refers to the amplifier output impedance? – Transistor Jan 31 '18 at 19:24
• Hmm, I'm looking at qsc.com/resource-files/productresources/amp/plx2/…, for the PLX3602.
It says "input sensitivity at 8 ohms", but it also says "input impedance 10 kohms unbalanced, 20 kohms balanced." – circuitry Jan 31 '18 at 19:31 • In that event it seems like I would do (1.25/10k)*1000, and the current would be 0.125 mA. I am going to be driving a 4 ohm speaker though so I'm not sure if that would change things. – circuitry Jan 31 '18 at 19:33 • That figure is telling you that you will get the rated output into 8 Ω speakers with that level on the 10 kΩ input. Your ear has a logarithmic response to volume. Double the volume requires ten times the power. Turn the volume down and plug it in. Given that you're generating the sound from an Arduino I'd say you or your audience aren't going to listen to it for very long so I wouldn't waste too much time on the calculations. – Transistor Jan 31 '18 at 19:36
https://nips.cc/Conferences/2019/ScheduleMultitrack?event=14131
Poster

Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback

Shuai Zheng · Ziyue Huang · James Kwok

Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #211

Communication overhead is a major bottleneck hampering the scalability of distributed machine learning systems. Recently, there has been a surge of interest in using gradient compression to improve the communication efficiency of distributed neural network training. Using 1-bit quantization, signSGD with majority vote achieves a 32x reduction in communication cost. However, its convergence is based on unrealistic assumptions and can diverge in practice. In this paper, we propose a general distributed compressed SGD with Nesterov's momentum. We consider two-way compression, which compresses the gradients both to and from workers. Convergence analysis on nonconvex problems for general gradient compressors is provided. By partitioning the gradient into blocks, a blockwise compressor is introduced such that each gradient block is compressed and transmitted in 1-bit format with a scaling factor, leading to a nearly 32x reduction in communication. Experimental results show that the proposed method converges as fast as full-precision distributed momentum SGD and achieves the same testing accuracy. In particular, on distributed ResNet training with 7 workers on ImageNet, the proposed algorithm achieves the same testing accuracy as momentum SGD using full-precision gradients, but with $46\%$ less wall-clock time.
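As a generic illustration of the scheme sketched in the abstract (1-bit sign compression with a scaling factor, plus an error-feedback residual), not the authors' actual implementation:

```python
import random

def compress_1bit(v):
    """Scaled sign compression: transmit the sign bits plus one scaling factor."""
    scale = sum(abs(x) for x in v) / len(v)
    return [scale * (1 if x > 0 else -1 if x < 0 else 0) for x in v]

# Error feedback: the residual of each compression step is added back into
# the next gradient before compressing again, so the error is not lost.
rng = random.Random(0)
residual = [0.0] * 8
for _ in range(100):
    g = [rng.gauss(0, 1) for _ in range(8)]             # stand-in gradient
    corrected = [gi + ri for gi, ri in zip(g, residual)]
    sent = compress_1bit(corrected)                      # what a worker transmits
    residual = [ci - si for ci, si in zip(corrected, sent)]
```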
https://www.proofwiki.org/wiki/Definition:Unbounded_Divergent_Complex_Sequence
# Definition:Unbounded Divergent Sequence/Complex Sequence ## Definition Let $\sequence {z_n}$ be a sequence in $\C$. Then $\sequence {z_n}$ tends to $\infty$ or diverges to $\infty$ if and only if: $\forall H > 0: \exists N: \forall n > N: \cmod {z_n} > H$ where $\cmod {z_n}$ denotes the modulus of $z_n$. We write: $z_n \to \infty$ as $n \to \infty$.
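A quick illustrative example (not part of the ProofWiki entry): the sequence $z_n = n \paren {1 + i}$ diverges to $\infty$ in this sense, since $\cmod {z_n} = n \sqrt 2$.

```latex
% Given H > 0, choose N = ceil(H / sqrt(2)). Then for all n > N:
\[
  \cmod {z_n} = \cmod {n \paren {1 + i}} = n \sqrt 2 > N \sqrt 2 \ge H
\]
```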
https://space.stackexchange.com/questions/55552/access-to-chebyshev-coefficients-from-jpl-ephemerides
I want to generate an ephemeris file containing the Chebyshev coefficients, similar to how we can generate an ephemeris file of positions with the Horizons tool (https://ssd.jpl.nasa.gov/horizons/app.html#/). I would like to do the same, that is to say, be able to choose the target body, coordinate center, and dates, and generate the ephemeris file containing the associated Chebyshev coefficients. Do you know a tool or a program which is able to do this? I have done a lot of research, but I haven't found one. I tried with SPICE but I didn't find a method allowing me to extract Chebyshev coefficients like this.

• I use a CSPICE Python wrapper called SpiceyPy most of the time when working with SPICE kernels. Specifically, the function spkw03 writes SPK (.bsp) kernels using Chebyshev polynomials. You pass in the position and velocity vectors and it creates the binary file ( naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/spkw03_c.html ). Although if you want just the Chebyshev polynomial values, that might involve having to read in the binary file in a different way (maybe a custom function) Oct 27, 2021 at 14:22
• @AlfonsoGonzalez My reading of the question is that the questioner is asking how to generate the Chebyshev coefficients rather than how to write the coefficients to a file. This is non-trivial. Oct 27, 2021 at 16:16
• @AlfonsoGonzalez Yes, I'm looking for such a function that would be able to get these Chebyshev polynomial values. Oct 28, 2021 at 8:32
• @DavidHammen I don't want to generate the Chebyshev coefficients, I just want to extract the part of an ephemeris file that interests me. For example, we can imagine a function to which we give dates and the target body as input, and which returns the Chebyshev coefficients just by extracting the corresponding part of the ephemeris file. Oct 28, 2021 at 8:39

Your question is slightly ambiguous: a file containing the Chebyshev coefficients wouldn't be an ephemeris, but it's something you could use to generate an ephemeris.
Regardless, the article Format of the JPL Ephemeris Files will answer both how to get the coefficients and how to use them to generate an ephemeris. The article goes into more detail, but the short answer:

Look at the header file for the JPL Development Ephemeris you want to use. The section labeled "Group 1050" has the most important information. The first column is for Mercury, then Venus, Earth-Moon Barycenter ... the 9th is Pluto, then the Moon and the Sun.

The coefficients are grouped together in 32 day blocks, marked with the Julian Days they're valid for. The Group 1050 information above shows how each block is broken down. The first row shows the offset into the block where the coefficients start for that planet, the second is the number of coefficients for each property (e.g. X, Y, and Z), and the last row is the number of sub-intervals each 32 day block is broken down into.

For example, Mercury's row is 3, 14, 4. So it starts at offset 3, has 14 coefficients per property (x, y, z), giving 3 * 14 = 42 coefficients per sub-interval, and is divided into 4 sub-intervals, for a total of 42 * 4 = 168 coefficients. Notice the column for Venus has an offset of 171, which is Mercury's offset plus the total coefficients.

Once you have the 168 coefficients, you need to determine which sub-interval you need: since Mercury's block is divided into 4 sub-intervals, these are 32 / 4 = 8 day intervals. The first two entries of each block provide the valid Julian Day range, so simply determine which interval you want to compute for, and choose the corresponding 42 coefficients for that range.

With those 42 coefficients, the first 14 are for the x coordinate, the next 14 for the y coordinate, and the last are for the z coordinate. These are the Chebyshev coefficients to use. The first link above provides an example of extracting the coefficients and performing the computation, as well as source code in JavaScript. This github repository contains source code in several languages, JavaScript, Python, Java, C#, Perl, and maybe others.
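The sub-interval selection and offset arithmetic described above can be sketched in Python. The layout constants here (offset 3, 14 coefficients per component, 4 sub-intervals) are Mercury's Group 1050 values; the `block` passed in would be one real 32-day record from the DE file, whose first two entries are its Julian Day range. Treat this as an illustrative sketch of the arithmetic, not an official format reader.

```python
# Sketch: picking Mercury's Chebyshev coefficients out of one 32-day DE block
# and evaluating them with the standard Chebyshev recurrence.

def sub_interval(jd, jd_start, jd_end, n_sub):
    """Index of the sub-interval containing jd, plus the scaled time in [-1, 1]."""
    span = (jd_end - jd_start) / n_sub
    idx = min(int((jd - jd_start) // span), n_sub - 1)
    t0 = jd_start + idx * span
    tau = 2.0 * (jd - t0) / span - 1.0   # normalized time for the polynomials
    return idx, tau

def cheby_eval(coeffs, tau):
    """Evaluate sum_k c_k * T_k(tau) via the T_{k+1} = 2*tau*T_k - T_{k-1} recurrence."""
    t_prev, t_cur = 1.0, tau
    total = coeffs[0]
    if len(coeffs) > 1:
        total += coeffs[1] * tau
    for c in coeffs[2:]:
        t_prev, t_cur = t_cur, 2.0 * tau * t_cur - t_prev
        total += c * t_cur
    return total

def mercury_xyz(block, jd, offset=3, n_coef=14, n_sub=4):
    """Position from one block; offsets in the Group 1050 header are 1-based."""
    idx, tau = sub_interval(jd, block[0], block[1], n_sub)
    base = (offset - 1) + idx * 3 * n_coef   # 42 coefficients per sub-interval
    return tuple(
        cheby_eval(block[base + axis * n_coef : base + (axis + 1) * n_coef], tau)
        for axis in range(3)
    )
```

For instance, `cheby_eval([1, 2, 3], 0.5)` evaluates `1 + 2*T1(0.5) + 3*T2(0.5)` = `1 + 1 - 1.5` = `0.5`.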
• Yes, this is what I want, to extract the Chebyshev coefficients but just over a certain period of time and for some planets, to then be able to store them in a much smaller file. Your code seems to do it, I'll check that, thanks a lot !! Nov 4, 2021 at 9:19
• This is a really good explanation, thanks! Dec 7, 2021 at 23:50

The best method I've found is to use Brandon Rhodes' Python "jplephem" from here, specifically using the _load() function of an SPK segment. You get a list of SPK segments when loading a BSP file.

As a quick plug, I'm part of a team of four people who just started working on ANISE a few days ago: https://github.com/anise-toolkit/ . The plan is to create an open-source (Mozilla Public License 2.0) version of SPICE that is flight software ready. I've done similar work before for a private company (and therefore that work is proprietary, so I don't have access to it) because CSPICE was not a viable alternative at the time. For ANISE, we're looking for more people to join the conversation and development team, so if you're interested, please reach out on our Element/Matrix space (https://matrix.to/#/#anise:matrix.org).

• Thanks, I'll check this Python tool. This is interesting because I also want to use ephemerides for flight software, and to do this I would like to store Chebyshev coefficients on board to then do an interpolation of an ephemeris. Oct 28, 2021 at 8:44
• very very cool !! – uhoh Oct 28, 2021 at 9:49

## Update

I have found a way to extract Chebyshev coefficients at a requested date for a particular body using CSPICE. Here is the program that does this:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "SpiceUsr.h"

/*
   Author : G. Juif

   spk_to_asc.c

   Compilation : gcc spk_to_asc.c -Iinclude lib/cspice.a -lm -o spk_to_asc

   Input : meta-kernel file (name must be modified in the code)

   This script extracts Chebyshev coefficients from a SPICE bsp file.
   The body for which we want the coefficients must be defined in the
   code. It is by default MARS BARYCENTER. The list of body names can
   be found here :
   https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/req/naif_ids.html#Planets%20and%20Satellites
   Take care to have the associated bsp file.

   The coefficients are written in 3 different files for the X, Y and Z
   components. At execution several parameters must be given :
     - Start date in calendar format AAAA/MM/JJ. It is the date from
       which the coefficients will be extracted. By default this date is
       in the TDB time scale, but that can be modified in the code.
     - Duration in days (integer) : duration from the start date
     - Time step in days (integer) : Chebyshev coefficients will be
       extracted at each of these day steps

   Output 3 files : coeffs_x ; coeffs_y ; coeffs_z
   In these files, at each time step, it gives :
     # Step_number Date Order Date_start Date_end
     Coefficients at each order, line per line

   Dates are written in CJD format (number of julian days since
   January 1st, 1950 0h). It can be modified to have J2000 julian days
   by deleting the 18262.5 additions in this code.
     - Step_number : gives the step number
     - Date : date (TDB by default) corresponding to the Chebyshev
       coefficients given below
     - Order : order of the Chebyshev polynomial
     - Date_start Date_end : dates where the Chebyshev coefficients are
       defined in the JPL file, in CJD days. This is useful in
       particular to compute the scaled time for the Chebyshev
       polynomial.
*/

int main()
{
   /* Local parameters */
   #define ND      2
   #define NI      6
   #define DSCSIZ  5
   #define SIDLEN1 41
   #define MAXLEN  50
   #define utclen  35

   /* Local variables */
   SpiceBoolean found;

   SpiceChar    segid   [ SIDLEN1 ];
   SpiceChar    utcstr  [ utclen ];
   SpiceChar    calendar[ utclen ];
   SpiceChar    frname  [ MAXLEN ];
   SpiceChar    cname   [ MAXLEN ];
   SpiceChar    bname   [ MAXLEN ];

   SpiceDouble  dc      [ ND ];
   SpiceDouble  descr   [ DSCSIZ ];
   SpiceDouble  et;
   SpiceDouble  record  [ 99 ];
   SpiceDouble  date    [ 99 ];
   SpiceDouble  dateCJD;
   SpiceDouble  Cheby_order;
   SpiceDouble  Interv_length;
   SpiceDouble  Start_interv_cheby;
   SpiceDouble  End_interv_cheby;

   SpiceInt     handle;
   SpiceInt     ic      [ NI ];
   SpiceInt     idcode;
   SpiceInt     temp;
   SpiceInt     i = 0;
   SpiceInt     j = 0;
   SpiceInt     nbJours  = 0;
   SpiceInt     nbInterv = 0;
   SpiceInt     nbCoeffs = 0;

   FILE * eph_file_x;
   FILE * eph_file_y;
   FILE * eph_file_z;

   eph_file_x = fopen("coeffs_x", "w");
   eph_file_y = fopen("coeffs_y", "w");
   eph_file_z = fopen("coeffs_z", "w");

   /*
   Load a meta-kernel that specifies a planetary SPK file and
   leapseconds kernel. The contents of this meta-kernel are displayed
   above.
   */
   furnsh_c ( "spksfs_ex1.tm" );

   printf("Retrieve Chebyshev coefficients at a given date with duration and time step in days\n");

   /*
   Get the NAIF ID code for the body (here MARS BARYCENTER). This is a
   built-in ID code, so something's seriously wrong if we can't find
   the code.
   */
   bodn2c_c ( "MARS BARYCENTER", &idcode, &found );

   if ( !found )
   {
      sigerr_c( "SPICE(BUG)" );
   }

   /* Pick a request time; convert to seconds past J2000 TDB. */
   printf("Enter start date aaaa/mm/jj (TDB time scale):\n");
   scanf("%12s", calendar);
   strcat(calendar, " TDB");

   str2et_c ( calendar, &et );
   et2utc_c ( et, "J", 7, utclen, utcstr );
   printf ( "Date : %s \n", calendar );

   printf("Enter duration in days :\n");
   scanf("%d", &nbInterv);

   printf("Enter time step in days :\n");
   scanf("%d", &nbJours);

   /* Loop on et */
   nbInterv /= nbJours;
   date[0] = et;

   for (i = 0 ; i < nbInterv ; i++)
   {
      /* Find a loaded segment for the specified body and time. */
      spksfs_c ( idcode, date[i], SIDLEN1, &handle, descr, segid, &found );

      if ( !found )
      {
         printf ( "No descriptor was found for ID %d at "
                  "TDB %24.17e\n", (int) idcode, et );
      }
      else
      {
         /* Convert date in CJD CNES date */
         dateCJD = (date[i]/86400) + 18262.5;

         /*
         Display the segment ID. Unpack the descriptor. Display the
         contents.
         */
         dafus_c ( descr, ND, NI, dc, ic );

         temp = spkr02_(&handle, descr, &date[i], record);

         /* Chebyshev polynomial order (minus 2 because the length of
            record doesn't consider the first element, see the fortran
            spice doc) */
         Cheby_order = (record[0] - 2)/3;

         /* Interval length of chebyshev coefficients in days */
         Interv_length = (record[2]/86400)*2;

         /* Start and end interval dates where Chebyshev coefficients
            are defined in the JPL file, in CJD days */
         Start_interv_cheby = (record[1]/86400) + 18262.5 - Interv_length/2;
         End_interv_cheby   = (record[1]/86400) + 18262.5 + Interv_length/2;

         /* Print information in files */
         fprintf(eph_file_x, "# %ld %lf %lf %lf %lf\n", i+1, dateCJD, Cheby_order, Start_interv_cheby, End_interv_cheby);
         fprintf(eph_file_y, "# %ld %lf %lf %lf %lf\n", i+1, dateCJD, Cheby_order, Start_interv_cheby, End_interv_cheby);
         fprintf(eph_file_z, "# %ld %lf %lf %lf %lf\n", i+1, dateCJD, Cheby_order, Start_interv_cheby, End_interv_cheby);

         nbCoeffs = (int) Cheby_order;

         /* Coeffs for X, Y, Z components */
         for (j = 0 ; j < nbCoeffs ; j++)
         {
            fprintf(eph_file_x, "%24.17e\n", record[3+j]);
            fprintf(eph_file_y, "%24.17e\n", record[3+nbCoeffs+j]);
            fprintf(eph_file_z, "%24.17e\n", record[3+2*nbCoeffs+j]);
         }
      }

      /* Compute next date in seconds past J2000 TDB */
      date[i+1] = date[i] + 86400*nbJours;
   }

   /* Translate SPICE codes into common names */
   frmnam_c ( (int) ic[2], MAXLEN, frname );
   bodc2n_c ( (int) ic[1], MAXLEN, cname, &found );
   bodc2n_c ( (int) ic[0], MAXLEN, bname, &found );

   /* Print configuration */
   printf ( "Segment ID: %s\n"
            "\n--------Configuration-------\n"
            "Body ID code: %s\n"
            "Center ID code: %s\n"
            "Frame ID code: %s\n"
            "SPK data type: %d\n"
            "Start ephemeris file time (TDB): %24.17e\n"
            "Stop ephemeris file time (TDB):  %24.17e\n"
            "\n--------Chebyshev polynomial informations-------\n"
            "Chebyshev polynomial order: %lf\n"
            "Time step in days where Chebyshev coefficients are defined: %lf\n",
            segid, bname, cname, frname, (int) ic[3], dc[0], dc[1],
            Cheby_order, Interv_length );

   fclose(eph_file_x);
   fclose(eph_file_y);
   fclose(eph_file_z);

   return ( 0 );
}
```

And the meta-kernel file:

```
KPL/MK

\begindata
```
https://gmatclub.com/forum/how-many-different-ways-are-there-to-arrange-a-group-of-3-adults-and-230544.html
# How many different ways are there to arrange a group of 3 adults and 4 children

Math Expert (Bunuel), 13 Dec 2016:

Difficulty: 5% (low). Question Stats: 89% (01:24) correct, 11% (01:25) wrong, based on 139 sessions.

How many different ways are there to arrange a group of 3 adults and 4 children in 7 seats if adults must have the first, third, and seventh seats?

A. 12
B. 144
C. 288
D. 1,400
E.
5,040

Senior Manager, 13 Dec 2016:

There are 3 adults that should be seated on three particular seats and 4 children on the other four. Adults and children cannot switch places with each other, only among themselves. Hence:

$$3!*4! = 6*24 = 144$$

Intern, 10 Dec 2017:

The adults have to be in the 1st, 3rd and 7th places, so the seating pattern is A c A c c c A. The adults give 3, 2 and 1 options for their seats, and with the children filling the remaining places in order, the count is 3 x 4 x 2 x 3 x 2 x 1 x 1 = 144.

Intern, 10 Dec 2017:

Bunuel wrote:
How many different ways are there to arrange a group of 3 adults and 4 children in 7 seats if adults must have the first, third, and seventh seats?
A. 12  B. 144  C. 288  D. 1,400  E. 5,040

Answer: the 3 adults can be seated in the 1st, 3rd and 7th positions in 3! ways. The remaining 4 positions are occupied by the remaining 4 children in 4! ways. Hence the total number of seating arrangements is 3!*4! = 144 ways.

Regards,
Sneha Tatavarthy
Math Facilitator, International Baccalaureate

EMPOWERgmat Instructor, 04 Jan 2018:

Hi All,

We're asked to arrange a group of 3 adults and 4 children in 7 seats with adults in the first, third, and seventh seats. We're asked for the number of arrangements possible.
This question is a variation on a standard permutation question. To solve it, we have to go from space to space and keep track of the options available for each (noting that once we place a person, there is one fewer person available for the next equivalent spot).

For the 1st spot, there are 3 options (an adult).
For the 2nd spot, there are 4 options (a child).
For the 3rd spot, there are 2 options.
For the 4th spot, there are 3 options.
For the 5th spot, there are 2 options.
For the 6th spot, there is 1 option.
For the 7th spot, there is 1 option.

Total arrangements = (3)(4)(2)(3)(2)(1)(1) = 144 possible arrangements.

GMAT assassins aren't born, they're made,
Rich
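The 144 count can also be confirmed by brute force: enumerate all 7! seatings of 3 distinct adults and 4 distinct children, and keep only those with adults in seats 1, 3 and 7.

```python
# Brute-force verification: adults must sit in seats 1, 3 and 7 (indices 0, 2, 6).
from itertools import permutations

people = ["A1", "A2", "A3", "c1", "c2", "c3", "c4"]
valid = [p for p in permutations(people)
         if all(p[i].startswith("A") for i in (0, 2, 6))]
print(len(valid))  # 144
```

Out of the 5,040 total orderings, exactly 3! * 4! = 144 satisfy the constraint.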
https://www.rdocumentation.org/packages/quantreg/versions/5.54/topics/latex.table
# latex.table

##### Writes a latex formatted table to a file

Automatically generates a latex formatted table from the matrix x. Controls rounding, alignment, etc.

Keywords: utilities

##### Usage

# S3 method for table
latex(x, file=as.character(substitute(x)), rowlabel=file,
  rowlabel.just="l", cgroup, n.cgroup, rgroup, n.rgroup=NULL, digits,
  dec, rdec, cdec, append=FALSE, dcolumn=FALSE, cdot=FALSE,
  longtable=FALSE, table.env=TRUE, lines.page=40, caption, caption.lot,
  label=file, double.slash=FALSE, …)

##### Arguments

`x`: A matrix x with dimnames.

`file`: Name of output file (.tex will be added).

`rowlabel`: If `x` has row dimnames, `rowlabel` is a character string containing the column heading for the row dimnames. The default is the name of the argument for `x`.

`rowlabel.just`: If `x` has row dimnames, specifies the justification for printing them. Possible values are `"l"`, `"r"`, `"c"`. The heading (`rowlabel`) itself is left justified if `rowlabel.just="l"`, otherwise it is centered.

`cgroup`: A vector of character strings defining major column headings. The default is to have none.

`n.cgroup`: A vector containing the number of columns for which each element in `cgroup` is a heading. For example, specify `cgroup=c("Major 1","Major 2")`, `n.cgroup=c(3,3)` if "Major 1" is to span columns 1-3 and "Major 2" is to span columns 4-6. `rowlabel` does not count in the column numbers. You can omit `n.cgroup` if all groups have the same number of columns.

`rgroup`: A vector of character strings containing headings for row groups. `n.rgroup` must be present when `rgroup` is given. The first `n.rgroup[1]` rows are sectioned off and `rgroup[1]` is used as a bold heading for them. The usual row dimnames (which must be present if `rgroup` is) are indented. The next `n.rgroup[2]` rows are treated likewise, etc.

`n.rgroup`: An integer vector giving the number of rows in each grouping. If `rgroup` is not specified, `n.rgroup` is just used to divide off blocks of rows by horizontal lines. If `rgroup` is given but `n.rgroup` is omitted, `n.rgroup` will default so that each row group contains the same number of rows.

`digits`: Causes all values in the table to be formatted to `digits` significant digits. `dec` is usually preferred.

`dec`: If `dec` is a scalar, all elements of the matrix will be rounded to `dec` decimal places to the right of the decimal. `dec` can also be a matrix whose elements correspond to `x`, for customized rounding of each element.

`rdec`: A vector specifying the number of decimal places to the right for each row (`cdec` is more commonly used than `rdec`).

`cdec`: A vector specifying the number of decimal places for each column.

`append`: Defaults to `F`. Set to `T` to append output to an existing file.

`dcolumn`: Set to `T` to use David Carlisle's `dcolumn` style for decimal alignment. Default is `F`, which aligns columns of numbers by changing leading blanks to "~", the LaTeX space-holder. You will probably want to use `dcolumn` if you use `rdec`, as a column may then contain a varying number of places to the right of the decimal. `dcolumn` can line up all such numbers on the decimal point, with integer values right-justified at the decimal point location of numbers that actually contain decimal places.

`cdot`: Set to `T` to use centered dots rather than ordinary periods in numbers.

`longtable`: Set to `T` to use David Carlisle's LaTeX `longtable` style, allowing long tables to be split over multiple pages with headers repeated on each page.

`table.env`: Set `table.env=FALSE` to suppress enclosing the table in a LaTeX `table` environment. `table.env` only applies when `longtable=FALSE`. You may not specify a `caption` if `table.env=FALSE`.

`lines.page`: Applies if `longtable=TRUE`. No more than `lines.page` lines in the body of a table will be placed on a single page. Page breaks will only occur at `rgroup` boundaries.

`caption`: A text string to use as a caption to print at the top of the first page of the table. Default is no caption.

`caption.lot`: A text string representing a short caption to be used in the "List of Tables". By default, LaTeX will use `caption`.

`label`: A text string representing a symbolic label for the table for referencing with the LaTeX `\ref{label}` command. The default is `file`. `label` is only used if `caption` is given.

`double.slash`: Set to `T` to output `\` as `\\` in LaTeX commands. Useful when you are reading the output file back into an S vector for later output.

`…`: Other optional arguments.

##### Value

Returns invisibly.

##### References

Minor modification of Frank Harrell's Splus code.

##### Aliases

• latex.table

Documentation reproduced from package quantreg, version 5.54, License: GPL (>= 2)
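For orientation, a call along the lines of `latex(x, cgroup=c("Major 1","Major 2"), n.cgroup=c(2,2), dec=2)` on a 2×4 matrix produces LaTeX of roughly this shape (a hand-written sketch of the general output style, not verbatim output of the function):

```latex
\begin{table}
\begin{center}
\begin{tabular}{l|cc|cc}
 & \multicolumn{2}{c|}{Major 1} & \multicolumn{2}{c}{Major 2} \\
x & a & b & c & d \\
\hline
row1 & 1.00 & 2.00 & 3.00 & 4.00 \\
row2 & 5.00 & 6.00 & 7.00 & 8.00 \\
\end{tabular}
\end{center}
\end{table}
```

The `cgroup` headings become `\multicolumn` spans over their `n.cgroup` columns, and `dec=2` fixes every entry at two decimal places.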
https://nankeen.me/post/plaid-ctf-2020-shockwave/
# Plaid CTF 2020 Write-up 2 - YOU wa SHOCKWAVE

### Story

Feeling stifled by the large crowd gathered in the entrance plaza, you open up your minimap and try to find somewhere to search far away from the entrance gate. Ah, perfect—there’s some kind of library on the other side of the Sanctum. A nice, quiet place to search alone for a bit. Entering the library, you find that your guess the library would be quiet was… rather faulty. As the door swings closed behind you, instead of a soft bell that one might expect to hear, you’re greeted with the wail of an electric guitar, quickly followed by some heavy drums. Perhaps not a great place to sit quietly, but at least it seems like you’ve got the place to yourself! You find yourself drawn to a section labeled “anime” (perhaps this is some sort of multimedia collection?), and everything there is spinning and flashing and generally making your eyes hurt a little. But there’s got to be a flag hiding somewhere behind the seizure-inducing mess, right?

## The files

The challenge file is an archive containing .dlls and .exes; the main executable is called Projector.exe. Running this showed a classy window that puts modern design to shame. Here, it prompts for the flag.

Some investigation online revealed that these files relate to Macromedia Shockwave. We also learned that the logic is contained in the .dcr file, which is a compressed version of a .dir Director file. It contains the associated assets and instructions, originally written in LINGO.

## Finding the logic

The frustrating part was combing through forum discussions older than myself hoping to find some specifications. We were lucky enough to stumble across a LINGO decompiler, but it only worked for .dir files. The hunt was on for a .dcr to .dir converter, and this wee project on GitHub had a tool called DCR2DIR. With that, the flag checking logic was decompiled.
```
on check_flag(flag)
  if flag.length <> 42 then
    return(0)
  end if
  checksum = 0
  i = 1
  repeat while i <= 21
    checksum = bitXor(checksum, zz(charToNum(flag.getProp(#char, i * 2 - 1)) * 256 + charToNum(flag.getProp(#char, i * 2))))
    i = 1 + i
  end repeat
  if checksum <> 5803878 then
    return(0)
  end if
  check_data = [[2, 5, 12, 19, 3749774], [2, 9, 12, 17, 694990], [1, 3, 4, 13, 5764], [5, 7, 11, 12, 299886], [4, 5, 13, 14, 5713094], [0, 6, 8, 14, 430088], [7, 9, 10, 17, 3676754], [0, 11, 16, 17, 7288576], [5, 9, 10, 12, 5569582], [7, 12, 14, 20, 7883270], [0, 2, 6, 18, 5277110], [3, 8, 12, 14, 437608], [4, 7, 12, 16, 3184334], [3, 12, 13, 20, 2821934], [3, 5, 14, 16, 5306888], [4, 13, 16, 18, 5634450], [11, 14, 17, 18, 6221894], [1, 4, 9, 18, 5290664], [2, 9, 13, 15, 6404568], [2, 5, 9, 12, 3390622]]
  repeat while check_data <= 1
    x = getAt(1, count(check_data))
    i = x.getAt(1)
    j = x.getAt(2)
    k = x.getAt(3)
    l = x.getAt(4)
    target = x.getAt(5)
    sum = zz(charToNum(flag.getProp(#char, i * 2 + 1)) * 256 + charToNum(flag.getProp(#char, i * 2 + 2)))
    sum = bitXor(sum, zz(charToNum(flag.getProp(#char, j * 2 + 1)) * 256 + charToNum(flag.getProp(#char, j * 2 + 2))))
    sum = bitXor(sum, zz(charToNum(flag.getProp(#char, k * 2 + 1)) * 256 + charToNum(flag.getProp(#char, k * 2 + 2))))
    sum = bitXor(sum, zz(charToNum(flag.getProp(#char, l * 2 + 1)) * 256 + charToNum(flag.getProp(#char, l * 2 + 2))))
    if sum <> target then
      return(0)
    end if
  end repeat
  return(1)
  exit
end
```

The checker makes use of a recursive function zz.

```
on zz(x)
  return(zz_helper(1, 1, x).getAt(1))
  exit
end

on zz_helper(x, y, z)
  if y > z then
    return([1, z - x])
  end if
  c = zz_helper(y, x + y, z)
  a = c.getAt(1)
  b = c.getAt(2)
  if b >= x then
    return([2 * a + 1, b - x])
  else
    return([2 * a + 0, b])
  end if
  exit
end
```

We figured out some specifics with a language guide and rewrote the logic in python.

## Solving

At this point, it was around 6 in the morning and I wasn’t functioning well.
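Before wiring anything into a solver, a quick standalone sanity check of the re-implemented zz is useful. The expected values below were traced by hand through the recursion.

```python
# Stand-alone Python port of the decompiled zz/zz_helper for sanity checking.
def zz_helper(x, y, z):
    if y > z:
        return (1, z - x)
    a, b = zz_helper(y, x + y, z)
    if b >= x:
        return (2 * a + 1, b - x)
    return (2 * a, b)

def zz(x):
    return zz_helper(1, 1, x)[0]

print([zz(i) for i in range(5)])  # [1, 2, 4, 8, 10]
```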
The zz function is pretty complicated for an SMT solver, so some tweaking would be needed. Our approach solves for the zz output values in the first step. It then takes those values and does a reverse lookup.

solve1.py

```python
import z3

# 21 zz(x) values, where x corresponds to those shitty number expressions
zz_arr = [z3.BitVec('flag_%d' % i, 32) for i in range(21)]

def zz(x):
    return zz_helper(1, 1, x)[0]

def zz_helper(x, y, z):
    if y > z:
        return (1, z - x)
    c = zz_helper(y, x + y, z)
    a = c[0]
    b = c[1]
    if b >= x:
        return (2 * a + 1, b - x)
    else:
        return (2 * a + 0, b)

# Create a lookup table
zz_table = [zz(i) for i in range(0x10000)]

def check_flag(flag):
    checksum = 0
    for i in flag:
        checksum = checksum ^ i
    solver.add(checksum == 5803878)

    check_data = [[2, 5, 12, 19, 3749774], [2, 9, 12, 17, 694990], [1, 3, 4, 13, 5764],
                  [5, 7, 11, 12, 299886], [4, 5, 13, 14, 5713094], [0, 6, 8, 14, 430088],
                  [7, 9, 10, 17, 3676754], [0, 11, 16, 17, 7288576], [5, 9, 10, 12, 5569582],
                  [7, 12, 14, 20, 7883270], [0, 2, 6, 18, 5277110], [3, 8, 12, 14, 437608],
                  [4, 7, 12, 16, 3184334], [3, 12, 13, 20, 2821934], [3, 5, 14, 16, 5306888],
                  [4, 13, 16, 18, 5634450], [11, 14, 17, 18, 6221894], [1, 4, 9, 18, 5290664],
                  [2, 9, 13, 15, 6404568], [2, 5, 9, 12, 3390622]]

    for i, j, k, l, target in check_data:
        solver.add(flag[i] ^ flag[j] ^ flag[k] ^ flag[l] == target)

solver = z3.Solver()
check_flag(zz_arr)

c = solver.check()
flag_zzs = []
if c == z3.sat:
    model = solver.model()
    for c in zz_arr:
        flag_zz = zz_table.index(model.eval(c))
        flag_zzs.append(flag_zz)
    print(flag_zzs)
else:
    print('Unsatisfiable')
```

solve1.py gave a list of 21 integers that can be used to solve stage 2.
solve2.py

```python
import z3

zzs = [20547, 21574, 31559, 29236, 28776, 12611, 21343, 17459, 21353, 18286,
       24393, 29535, 29778, 21868, 22879, 19833, 24400, 24947, 13673, 28494,
       8573]

password_len = 42
flag_chars = [z3.Int('flag_%d' % i) for i in range(password_len)]

solver = z3.Solver()
# Printable ASCII only
for c in flag_chars:
    solver.add(c >= 0x20)
    solver.add(c < 0x7f)
# Each zz value packs two consecutive flag characters
for i, zz in enumerate(zzs):
    solver.add(flag_chars[i * 2] * 256 + flag_chars[i * 2 + 1] == zz)

c = solver.check()
print(c)
if c == z3.sat:
    print('SAT')
    sol = ''
    m = solver.model()
    for ch in flag_chars:
        sol += chr(m[ch].as_long())
    print(sol)
```

Running the script gave us the flag: `PCTF{Gr4ph1CS_D3SiGn_Is_tRUlY_My_Pas5ioN!}`.

In hindsight, my main issue was with implementing a lookup table in z3. This could be better done with a `Function`. Here's a better script, probably.

solve.py

```python
import itertools
import z3

def zz(x):
    return zz_helper(1, 1, x)[0]

def zz_helper(x, y, z):
    if y > z:
        return [1, z - x]
    a, b = zz_helper(y, x + y, z)
    if b >= x:
        return [2 * a + 1, b - x]
    return [2 * a, b]

def check_flag(flag):
    # Flag length 42: the checksum XORs zz of each 16-bit chunk
    checksum = z3.BitVecVal(0, 32)
    for i in range(1, 22):
        c1 = flag[i * 2 - 2]
        c2 = flag[i * 2 - 1]
        checksum ^= zz_func(c1, c2)
    solver.add(checksum == 5803878)
    check_data = [[2, 5, 12, 19, 3749774], [2, 9, 12, 17, 694990],
                  [1, 3, 4, 13, 5764], [5, 7, 11, 12, 299886],
                  [4, 5, 13, 14, 5713094], [0, 6, 8, 14, 430088],
                  [7, 9, 10, 17, 3676754], [0, 11, 16, 17, 7288576],
                  [5, 9, 10, 12, 5569582], [7, 12, 14, 20, 7883270],
                  [0, 2, 6, 18, 5277110], [3, 8, 12, 14, 437608],
                  [4, 7, 12, 16, 3184334], [3, 12, 13, 20, 2821934],
                  [3, 5, 14, 16, 5306888], [4, 13, 16, 18, 5634450],
                  [11, 14, 17, 18, 6221894], [1, 4, 9, 18, 5290664],
                  [2, 9, 13, 15, 6404568], [2, 5, 9, 12, 3390622]]
    for i, j, k, l, target in check_data:
        s = zz_func(flag[i * 2], flag[i * 2 + 1])
        s ^= zz_func(flag[j * 2], flag[j * 2 + 1])
        s ^= zz_func(flag[k * 2], flag[k * 2 + 1])
        s ^= zz_func(flag[l * 2], flag[l * 2 + 1])
        solver.add(s == target)

password_len = 42
flag_chars = [z3.BitVec(f'flag_{i}', 8) for i in range(password_len)]
flag = z3.Concat(*flag_chars)

solver = z3.Solver()

# Generate the table as an uninterpreted function constrained point by point
zz_func = z3.Function('zz_func', z3.BitVecSort(8), z3.BitVecSort(8),
                      z3.BitVecSort(32))
for a, b in itertools.product(range(256), range(256)):
    solver.add(zz_func(a, b) == zz(a * 256 + b))
print('Done putting the massive table into z3')

# Add the flag constraints
check_flag(flag_chars)
print('Done adding constraints')

c = solver.check()

# Dump results if found
print(c)
if c == z3.sat:
    m = solver.model()
    print(m.eval(flag).as_long().to_bytes(password_len, 'big'))
```
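As an aside, stage 2's constraints pin each character pair down exactly (each recovered value z encodes two bytes as z = c1·256 + c2), so z3 isn't strictly needed there — the flag can be read off directly:

```python
zzs = [20547, 21574, 31559, 29236, 28776, 12611, 21343, 17459, 21353, 18286,
       24393, 29535, 29778, 21868, 22879, 19833, 24400, 24947, 13673, 28494,
       8573]

# Each 16-bit value packs two ASCII bytes: high byte first, then low byte
flag = ''.join(chr(z >> 8) + chr(z & 0xff) for z in zzs)
print(flag)  # PCTF{Gr4ph1CS_D3SiGn_Is_tRUlY_My_Pas5ioN!}
```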
# The Global Langlands Correspondence for Function Fields over a Finite Field

In The Local Langlands Correspondence for General Linear Groups, we introduced some ideas related to what is known as the Langlands program, and discussed in a little more detail the local Langlands correspondence, at least for general linear groups. In this post, we will discuss the global Langlands correspondence, but we will focus on the case of function fields over a finite field. This case is somewhat easier to state than the case of number fields, and at the same time it perhaps gives us a bit more geometric intuition.

Let us fix a smooth, projective, and irreducible curve $X$, defined over a finite field $\mathbb{F}_{q}$. We let $F$ be its function field. For instance, if $X$ is the projective line $\mathbb{P}^{1}$ over $\mathbb{F}_{q}$, then $F=\mathbb{F}_{q}(t)$.

### The case of $\mathrm{GL}_{1}$: Global class field theory for function fields over a finite field

To motivate the global Langlands correspondence for function fields, let us first think of the $\mathrm{GL}_{1}$ case, which is a restatement of (unramified) global class field theory. Recall from Some Basics of Class Field Theory that global class field theory tells us that for a global field $F$, its maximal unramified abelian extension $H$, also called the Hilbert class field of $F$, has the property that $\mathrm{Gal}(H/F)$ is isomorphic to the ideal class group. We also recall that there is an analogy between the absolute Galois group and the etale fundamental group in the case when there is no ramification.
Therefore, in the case of function fields, the corresponding statement of unramified global class field theory may be stated as $\displaystyle \pi_{1}(X,\overline{\eta})^{\mathrm{ab}}\times_{\widehat{\mathbb{Z}}}\mathbb{Z}\xrightarrow{\sim} \mathrm{Pic}(\mathbb{F}_{q})$ where $\pi_{1}(X,\overline{\eta})$ is the etale fundamental group of $X$, a profinite quotient of $\mathrm{Gal}(\overline{F}/F)$ through which its action factors ($\overline{\eta}$ here serves as the basepoint, which is needed to define the etale fundamental group). The Picard scheme $\mathrm{Pic}$ is the scheme such that for any scheme $S$ its $S$ points $\mathrm{Pic}(S)$ correspond to the isomorphism classes of line bundles on $X\times S$. This is analogous to the ideal class group. Taking the fiber product with $\mathbb{Z}$ is analogous to taking the Weil group (see also Weil-Deligne Representations and The Local Langlands Correspondence for General Linear Groups). The global Langlands correspondence, in the case of $\mathrm{GL}_{1}$, is a restatement of this in terms of maps from each side to some field (we will take this field to be $\overline{\mathbb{Q}}_{\ell}$). It states that there is a bijection between characters $\sigma:\pi_{1}(X,\overline{\eta})\to \overline{\mathbb{Q}}_{\ell}^{\times}$, and $\chi:\mathrm{Pic}(\mathbb{F}_{q})/a^{\mathbb{Z}}\to \overline{\mathbb{Q}}_{\ell}^{\times}$ where $a$ is any element of $\mathrm{Pic}(\mathbb{F}_{q})$ of nonzero degree. Again this is merely a restatement of unramified global class field theory, and nothing has changed in its content. However, this restatement points to us the way in which it may be generalized. 
### Generalizing to $\mathrm{GL}_{n}$, and then to more general reductive groups To generalize this, we may take maps $\sigma:\pi_{1}(X,\overline{\eta})\to \mathrm{GL}_{n}(\overline{\mathbb{Q}}_{\ell})$ instead of maps $\sigma:\pi_{1}(X,\overline{\eta})\to \overline{\mathbb{Q}}_{\ell}^{\times}$, since $\overline{\mathbb{Q}}_{\ell}^{\times}$ is just $\mathrm{GL}_{1}(\overline{\mathbb{Q}}_{\ell})$. To make it look more like the case of number fields, we may also define this same map as a map $\sigma:\mathrm{Gal}(\overline{F}/F)\to \mathrm{GL}_{n}(\overline{\mathbb{Q}}_{\ell})$ which factors through $\pi_{1}(U,\overline{\eta})$ for some open dense subset $U$ of $X$. This side we call the “Galois side” (as it involves the Galois group). What about the other side (the “automorphic side”)? First we recall that $\mathrm{Pic}(\mathbb{F}_{q})$ classifies line bundles on $X$. We shall replace this by $\mathrm{Bun}_{n}(\mathbb{F}_{q})$, which classifies rank $n$ vector bundles on $X$. It was figured out by Andre Weil a long time ago that $\mathrm{Bun}_{n}(\mathbb{F}_{q})$ may also be expressed as the double quotient $\mathrm{GL}_{n}(F)\backslash\mathrm{GL}_{n}(\mathbb{A}_{F})/\mathrm{GL}_{n}(\prod_{v}\mathcal{O}_{F_{v}})$ (this is known as the Weil parametrization). Now functions on this space will give representations of $\mathrm{GL}_{n}(\mathbb{A}_{F})$. We will be interested not in all functions on this space, but in particular certain kinds of functions called cuspidal automorphic forms, which gives a representation that decomposes into pieces that we then want to match up with the Galois representations. In fact we can generalize even further and consider reductive groups (see also Reductive Groups Part I: Over Algebraically Closed Fields and Reductive Groups Part II: Over More General Fields) other than $\mathrm{GL}_{n}$! Let $G$ be such a reductive group over $F$. 
Instead of $\mathrm{Bun}_{n}(\mathbb{F}_{q})$ we now consider $\mathrm{Bun}_{G}(\mathbb{F}_{q})$, the moduli stack (see also Algebraic Spaces and Stacks) of $G$-bundles on $X$. As above, we consider the space of cuspidal automorphic forms on $\mathrm{Bun}_{G}(\mathbb{F}_{q})$, which we shall denote by $C_{c}^{\mathrm{cusp}}(\mathrm{Bun}_{G}(\mathbb{F}_{q})/\Xi,\overline{\mathbb{Q}}_{\ell})$. Here $\Xi$ is a subgroup of finite index in $\mathrm{Bun}_{Z}(\mathbb{F}_{q})$, where $Z$ is the center of $G$.

As we are generalizing to more general reductive groups than just $\mathrm{GL}_{n}$, we need to modify the other side (the Galois side) as well. Instead of considering Galois representations, which are group homomorphisms $\sigma: \mathrm{Gal}(\overline{F}/F)\to \mathrm{GL}_{n}(\overline{\mathbb{Q}}_{\ell})$, we must now consider L-parameters, which in this context are group homomorphisms $\sigma: \mathrm{Gal}(\overline{F}/F)\to \widehat{G}(\overline{\mathbb{Q}}_{\ell})$, where $\widehat{G}$ is the dual group of $G$ (which, as one may recall from Reductive Groups Part II: Over More General Fields, has the roots and coroots of $G$ interchanged).

We may now state the "automorphic to Galois" direction of the global Langlands correspondence for function fields over a finite field $\mathbb{F}_{q}$, which has been proven by Vincent Lafforgue. It says that we have a decomposition

$\displaystyle C_{c}^{\mathrm{cusp}}(\mathrm{Bun}_{G}(\mathbb{F}_{q})/\Xi,\overline{\mathbb{Q}}_{\ell})=\bigoplus_{\sigma} \mathfrak{H}_{\sigma}$

of the space $C_{c}^{\mathrm{cusp}}(\mathrm{Bun}_{G}(\mathbb{F}_{q})/\Xi,\overline{\mathbb{Q}}_{\ell})$ into subspaces $\mathfrak{H}_{\sigma}$ indexed by L-parameters $\sigma$. It is perhaps instructive to compare this with the local Langlands correspondence as stated in Reductive Groups Part II: Over More General Fields, to which it should be related by what is known as local-global compatibility.
(The "Galois to automorphic" direction concerns whether an L-parameter is "cuspidal automorphic", and we will briefly discuss some partial progress by Gebhard Böckle, Michael Harris, Chandrasekhar Khare, and Jack Thorne at the end of this post.)

Furthermore, the decomposition above must respect the action of Hecke operators (analogous to those discussed in Hecke Operators). Let us now discuss these Hecke operators.

### Hecke operators

Let $\mathcal{E},\mathcal{E}'$ be two $G$-bundles on $X$. Let $x$ be a point of $X$, and let $\phi:\mathcal{E}\to\mathcal{E}'$ be an isomorphism of $G$-bundles over $X\setminus x$. We say that $(\mathcal{E}',\phi)$ is a modification of $\mathcal{E}$ at $x$. A modification can be bounded by a cocharacter, i.e. a homomorphism $\lambda:\mathbb{G}_{m}\to G$, which keeps track of, and bounds, the relative position of the two bundles.

To get an idea of this, we consider the case $G=\mathrm{GL}_{n}$. Consider the completion $\mathcal{E}_{x}^{\wedge}$ of the stalk of the vector bundle $\mathcal{E}$ at $x$. It is a free module over the completion $\mathcal{O}_{X,x}^{\wedge}$ of the structure sheaf at $x$, which happens to be isomorphic to $\mathbb{F}_{q}[[t]]$. Let $(\mathcal{E}',\phi)$ be a modification of $\mathcal{E}$ at $x$. There is a basis $e_{1},\ldots,e_{n}$ of $\mathcal{E}_{x}^{\wedge}$ such that $t^{k_{1}}e_{1},\ldots,t^{k_{n}}e_{n}$ is a basis of $\mathcal{E}_{x}^{'\wedge}$, where $k_{1}\geq\ldots\geq k_{n}$. But the tuple of integers $k_{1},\ldots,k_{n}$ is the same thing as a cocharacter $\lambda:\mathbb{G}_{m}\to\mathrm{GL}_{n}$, given by $\lambda(t)=\mathrm{diag}(t^{k_{1}},\ldots,t^{k_{n}})$.

The Hecke stack $\mathrm{Hck}_{v,\lambda}$ is the stack whose points $\mathrm{Hck}_{v,\lambda}(\mathbb{F}_{q})$ correspond to modifications $(\mathcal{E},\mathcal{E}',\phi)$ at $v$ whose relative position is bounded by the cocharacter $\lambda$.
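As a down-to-earth aside, the exponents $(k_{1},\ldots,k_{n})$ above are elementary divisor data, and in the analogous situation over $\mathbb{Z}$ (playing the role of $\mathbb{F}_{q}[[t]]$, with a prime $p$ in place of $t$) they can be read off from a Smith normal form computation. A small sketch using sympy — the matrix here is an arbitrary illustrative example, not taken from the post:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# A sublattice L' = M * Z^2 inside L = Z^2, with p = 2 in the role of t
M = Matrix([[2, 6],
            [0, 4]])
snf = smith_normal_form(M, domain=ZZ)

# The diagonal gives the invariant factors d_1 | d_2 (up to sign); their
# p-adic valuations are the analogue of the exponents (k_1, ..., k_n)
d = sorted(abs(x) for x in snf.diagonal())
print(d)  # [2, 4] -> valuations (1, 2) at p = 2
```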
It has two maps $h^{\leftarrow}$ and $h^{\rightarrow}$ to $\mathrm{Bun}_{G}(\mathbb{F}_{q})$, which send the modification $(\mathcal{E},\mathcal{E}',\phi)$ to $\mathcal{E}$ and $\mathcal{E}'$ respectively. The Hecke operator $T_{\lambda,v}$ is the composition $h_{*}^{\rightarrow}\circ h^{\leftarrow *}$. In essence what it does is it sends a function $f$ in $C_{c}^{\mathrm{cusp}}(\mathrm{Bun}_{G}(\mathbb{F}_{q})/\Xi,\mathbb{Q}_{\ell})$ to the function which sends a point in $\mathrm{Bun}_{G}(\mathbb{F}_{q})$ corresponding to the $G$-bundle $\mathcal{E}$ to the sum of the values of $f(\mathcal{E}')$ over all modifications of $G$-bundles $\phi:\mathcal{E}'\to\mathcal{E}$ at $v$ bounded by $\lambda$. In this last description one can see that it is in fact analogous to the description of Hecke operators for modular forms discussed in Hecke Operators. More generally given a representation $V$ of $\widehat{G}$, we can obtain a Hecke operator $T_{V}$, and these Hecke operators have the property that if $V=V'\oplus V''$, we must have $T_{V,v}=T_{V',v}+T_{V'',v}$, and if $V=V'\otimes V''$ , we must have $T_{V,v}=T_{V',v}T_{V'',v}$. If $V$ is irreducible, then we can build $T_{V,v}$ as a combination of $T_{\lambda,v}$, where the $\lambda$‘s are the weights of $V$. Now let us go back to the decomposition $\displaystyle C_{c}^{\mathrm{cusp}}(\mathrm{Bun}_{G}(\mathbb{F}_{q})/\Xi,\mathbb{Q}_{\ell})=\bigoplus_{\sigma} \mathfrak{H}_{\sigma}.$ The statement of the global Langlands correspondence for function fields over a finite field $\mathbb{F}_{q}$ additionally requires that the Hecke operators preserve the subspaces $\mathfrak{H}_{\sigma}$; that is, they act on each of these subspaces, and do not send an element of such a subspace to another outside of it. Additionally, we require that the action of the Hecke operators are “compatible with the Satake isomorphism”. 
This means that the action of a Hecke operator $T_{V,v}$ is given by multiplication by the scalar $\mathrm{Tr}_{V}(\sigma(\mathrm{Frob}_{v}))$. This is somewhat analogous to the Eichler-Shimura relation relating the Hecke operators and the Frobenius briefly mentioned in Galois Representations Coming From Weight 2 Eigenforms.

### Ideas related to the proof of the automorphic to Galois direction: Excursion operators and the cohomology of moduli stacks of shtukas

Let us now discuss some ideas related to Vincent Lafforgue's proof of the "automorphic to Galois" direction of the global Langlands correspondence for function fields over a finite field. An important part of these concerns the algebra of excursion operators, denoted by $\mathcal{B}$. These are certain endomorphisms of $C_{c}^{\mathrm{cusp}}(\mathrm{Bun}_{G}(\mathbb{F}_{q})/\Xi,\overline{\mathbb{Q}}_{\ell})$ which include the Hecke operators. The idea of the automorphic to Galois direction is that characters $\nu:\mathcal{B}\to \overline{\mathbb{Q}}_{\ell}^{\times}$ correspond uniquely to L-parameters $\sigma$. To understand these excursion operators better, we will look at how they are constructed.

The construction of the excursion operators involves the cohomology of moduli stacks of shtukas. A shtuka is a very special kind of modification of a vector bundle. Given an indexing set $I$, a shtuka over a scheme $S$ over $\mathbb{F}_{q}$ consists of the following data:

• A set of points $(x_{i})_{i\in I}:S\to X^{I}$ (the $x_{i}$ are called the "legs" of the shtuka)

• A $G$-bundle $\mathcal{E}$ over $X\times S$

• An isomorphism

$\displaystyle \phi: \mathcal{E}\vert_{(X\times S)\setminus (\bigcup_{i\in I}\Gamma_{x_{i}})}\xrightarrow{\sim}(\mathrm{Id}\times \mathrm{Frob}_{S})^{*}\mathcal{E}\vert _{(X\times S)\setminus (\bigcup_{i\in I}\Gamma_{x_{i}})}$

where $\Gamma_{x_{i}}$ is the graph of $x_{i}$. Let us denote the moduli stack of such shtukas by $\mathrm{Sht}_{I}$.
We take note of the important fact that the moduli stack of shtukas with no legs, $\mathrm{Sht}_{\emptyset}$, is a discrete set of points and is in fact the same as $\mathrm{Bun}_{G}(\mathbb{F}_{q})$!

We now want to define sheaves on $\mathrm{Sht}_{I}$ which will serve as coefficients when we take its etale cohomology, and we want these sheaves to depend on representations $W$ of $\widehat{G}^{I}$, for the eventual goal of having the cohomology (or the appropriate subspaces of it that we want to consider) be functorial in $W$. This is to be accomplished by considering another moduli stack, the moduli stack of modifications over the formal neighborhood of the legs $x_{i}$. This parametrizes the following data:

• The set of points $(x_{i})_{i\in I}:S\to X^{I}$

• A pair of $G$-bundles $\mathcal{E}$ and $\mathcal{E}'$ on the formal completion $\widehat{X\times S}$ of $X\times S$ along the neighborhood of the union of the graphs $\Gamma_{x_{i}}$

• An isomorphism

$\displaystyle \phi: \mathcal{E}\vert_{\widehat{X\times S}\setminus (\bigcup_{i\in I}\Gamma_{x_{i}})}\xrightarrow{\sim}\mathcal{E}'\vert _{\widehat{X\times S}\setminus (\bigcup_{i\in I}\Gamma_{x_{i}})}$

We denote this moduli stack by $\mathcal{M}_{I}$. The virtue of the moduli stack $\mathcal{M}_{I}$ is that a very important theorem called the geometric Satake equivalence associates to any representation $W$ of $\widehat{G}^{I}$ a certain object called a perverse sheaf on $\mathcal{M}_{I}$. Now there is a map from $\mathrm{Sht}_{I}$ to $\mathcal{M}_{I}$, and pulling back this perverse sheaf associated to $W$ we obtain a perverse sheaf $\mathcal{F}_{I,W}$ on $\mathrm{Sht}_{I}$. Now we take the intersection cohomology (we just think of this for now as being somewhat similar to $\ell$-adic etale cohomology) with compact support of the fiber of $\mathrm{Sht}_{I}$ over a geometric generic point of $X^{I}$, with coefficients in $\mathcal{F}_{I,W}$.
We cut out a "Hecke-finite" (this is a technical condition that we leave to the references for now) subspace of it, and call this subspace $H_{I,W}$. This subspace has an action of $\mathrm{Gal}(\overline{F}/F)^{I}$.

The above construction is functorial in $W$ – that is, a map $u:W\to W'$ gives rise to a map $\mathcal{H}(u):H_{I,W}\to H_{I,W'}$. Furthermore, there is the very important phenomenon of fusion. Given a map of sets $\zeta:I\to J$, this is an isomorphism $H_{I,W}\xrightarrow{\sim}H_{J,W^{\zeta}}$, where $W^{\zeta}$ is a representation of $\widehat{G}^{J}$ on the same underlying vector space as $W$, obtained by composing the map from $\widehat{G}^{J}$ to $\widehat{G}^{I}$ that sends $(g_{j})_{j\in J}$ to $(g_{\zeta(i)})_{i\in I}$ with $W$.

Now we can define the excursion operators. Let $f$ be a function on $\widehat{G}\backslash \widehat{G}^{I}/ \widehat{G}$. We can then find a representation $W$ of $\widehat{G}^{I}$ and elements $x\in W$, $\xi\in W^{*}$, invariant under the diagonal action of $\widehat{G}$, such that

$\displaystyle f((g_{i})_{i\in I})=\langle \xi, (g_{i})_{i\in I}\cdot x\rangle$.

Let $(\gamma_{i})_{i\in I}\in \mathrm{Gal}(\overline{F}/F)^{I}$. The excursion operator $S_{I,f,(\gamma_{i})_{i\in I}}$ is defined to be the composition

$\displaystyle H_{\lbrace 0\rbrace,\mathbf{1}} \xrightarrow{\mathcal{H}(x)} H_{\lbrace 0\rbrace,W_{\mathrm{diag}}}\xrightarrow{\mathrm{fusion}} H_{I,W}\xrightarrow{(\gamma_{i})_{i\in I}} H_{I,W}\xrightarrow{\mathrm{fusion}} H_{\lbrace 0\rbrace,W_{\mathrm{diag}}}\xrightarrow{\mathcal{H}(\xi)} H_{ \lbrace 0\rbrace,\mathbf{1}}$

where $W_{\mathrm{diag}}$ is the diagonal representation of $\widehat{G}$ on $W$, i.e. we compose the diagonal embedding $\widehat{G}\hookrightarrow \widehat{G}^{I}$ given by $g\mapsto (g,\ldots,g)$ with the representation $W$. The excursion operators give endomorphisms of $H_{\lbrace 0\rbrace,\mathbf{1}}$.
By fusion the subspace $H_{\lbrace 0\rbrace,\mathbf{1}}$ is the same as $H_{\emptyset,\mathbf{1}}$, which, in turn, is the same as $C_{c}^{\mathrm{cusp}}(\mathrm{Bun}_{G}(\mathbb{F}_{q})/\Xi,\mathbb{Q}_{\ell})$ (recall that the moduli stack of shtukas with no legs is the same as $\mathrm{Bun}_{G}(\mathbb{F}_{q})$). The algebra generated by these endomorphisms as $I$, $f$, and $(\gamma_{i})_{i\in I}$ vary is called the algebra of excursion operators, and is denoted by $\mathcal{B}$. It is commutative and the different excursion operators satisfy certain natural relations amongst each other. As stated earlier, the Hecke operators are but particular cases of the excursion operators. Namely, the Hecke operator $T_{V,v}$ is just the excursion operator $S_{\lbrace 1,2\rbrace, f,(\mathrm{Frob}_{v},1)}$, where $f$ sends $(g_{1},g_{2})$ to $\mathrm{Tr}_{V}(g_{1}g_{2}^{-1})$. Now the idea of the decomposition of $C_{c}^{\mathrm{cusp}}(\mathrm{Bun}_{G}(\mathbb{F}_{q})/\Xi,\mathbb{Q}_{\ell})$ is as follows. The algebra of excursion operators $\mathcal{B}$ partitions $C_{c}^{\mathrm{cusp}}(\mathrm{Bun}_{G}(\mathbb{F}_{q})/\Xi,\mathbb{Q}_{\ell})$ into eigenspaces $\mathfrak{H}_{\nu}$, where it acts on each eigenspace as a character $\nu:\mathcal{B}\to \overline{\mathbb{Q}}_{\ell}$. Then as previously mentioned, every character $\nu$ corresponds uniquely to an L-parameter $\sigma$, satisfying $\displaystyle \nu(S_{I,f,(\gamma_{i})_{i\in I}})=f(\sigma(\gamma_{i})_{i\in I}).$ This says therefore that the decomposition of $C_{c}^{\mathrm{cusp}}(\mathrm{Bun}_{G}(\mathbb{F}_{q})/\Xi,\mathbb{Q}_{\ell})$ is indexed by L-parameters. Everything we have discussed so far may also be applied with a level structure included, encoded in the form of a finite subscheme $N$ of $X$. 
Then our L-parameters will be maps $\pi_{1}(X\setminus N,\overline{\eta})\to \widehat{G}(\overline{\mathbb{Q}}_{\ell})$, and we also replace $\mathrm{Bun}_{G}$ by $\mathrm{Bun}_{G,N}$, which, if $G$ is split, has a Weil parametrization given by $G(F)\backslash G(\mathbb{A}_{F})/K$, where $K$ is the kernel of the map $G( \prod_{v}\mathcal{O}_{F_{v}} )\to G(\mathcal{O}_{N})$.

### Other directions: The Galois to automorphic direction, and the geometric Langlands program

We have so far discussed the "automorphic to Galois" direction of the global Langlands correspondence for function fields over finite fields, and some ideas related to its proof by Vincent Lafforgue. We now briefly discuss the "Galois to automorphic" direction and related work by Gebhard Böckle, Michael Harris, Chandrasekhar Khare, and Jack A. Thorne. This concerns the question of whether a given L-parameter is "cuspidal automorphic", i.e. whether it can be obtained from a character of the algebra of excursion operators as stated above.

Böckle, Harris, Khare, and Thorne do not quite prove this "Galois to automorphic" direction in full. Instead, what they prove is that given an everywhere unramified L-parameter $\sigma:\mathrm{Gal}(\overline{F}/F)\to\widehat{G}(\overline{\mathbb{Q}}_{\ell})$ with Zariski-dense image, one can find an extension $E$ of $F$ such that the restriction $\sigma\vert_{\mathrm{Gal}(\overline{E}/E)}:\mathrm{Gal}(\overline{E}/E)\to\widehat{G}(\overline{\mathbb{Q}}_{\ell})$ is cuspidal automorphic. We say that the L-parameter $\sigma$ is potentially automorphic.

The way the above potential automorphy result is proved is by using techniques similar to those used in modularity (see also Galois Deformation Rings). We recall from our brief discussion in Galois Deformation Rings that the usual approach to modularity has two parts – residual modularity, and modularity lifting. The same is true in potential automorphy.
The automorphy lifting part makes use of the same ideas as in the “R=T” theorems in modularity lifting, although in this context, they are called “R=B” theorems instead, since we are considering excursion operators instead of just the Hecke operators. To obtain an analogue of the residual modularity part, Böckle, Harris, Khare, and Thorne make use of results of Alexander Braverman and Dennis Gaitsgory from what is known as the geometric Langlands correspondence (for function fields over a finite field). Although we will not discuss the work of Braverman and Gaitsgory here, we will end this post with a rough idea of what the geometric Langlands correspondence is about. The geometric Langlands correspondence replaces the cuspidal automorphic forms (which as we recall are $\overline{\mathbb{Q}}_{\ell}$-valued functions on $\mathrm{Bun}_{G}(\mathbb{F}_{q})$) with $\overline{\mathbb{Q}}_{\ell}$-valued sheaves (actually a complex of $\overline{\mathbb{Q}}_{\ell}$-valued sheaves, or more precisely an object of the category $D^{b}(\mathrm{Bun}_{G})$ the “derived category of $\overline{\mathbb{Q}}_{\ell}$-valued sheaves with constructible cohomologies”) via Grothendieck’s sheaves to functions dictionary. Suppose we have some scheme $Y$ over $\mathbb{F}_{q}$. First let us suppose that $Y=\mathrm{Spec}(\mathbb{F}_{q})$. Then since $Y$ is just a point, a complex $\mathcal{F}$ of sheaves on $Y$ is just a complex of vector spaces (we shall take the sheaves to be $\overline{\mathbb{Q}}_{\ell}$-valued, so this complex is a complex of $\overline{\mathbb{Q}}_{\ell}$-vector spaces). This complex has an action of $\mathrm{Gal}(\overline{\mathbb{F}}_{q}/\mathbb{F}_{q})$. Now we take the alternating sum of the traces of Frobenius acting on this complex, and this gives us an element of $\overline{\mathbb{Q}}_{\ell}$. 
For more general $Y$, for every point $y:\mathrm{Spec}(\mathbb{F}_{q})\to Y$ we apply this same construction to the sheaf $\mathcal{F}_{y}$, the pullback of the sheaf $\mathcal{F}$ on $Y$ to $\mathrm{Spec}(\mathbb{F}_{q})$ via the morphism $y$. This provides us with a $\overline{\mathbb{Q}}_{\ell}$-valued function on $Y(\mathbb{F}_{q})$.

One can also go in the other direction, constructing complexes of sheaves from certain functions. Suppose we have a commutative connected algebraic group $A$ and a character $\chi$ of $A(\mathbb{F}_{q})$. Then we can associate to this character an element of $D^{b}(A)$ as follows. We have the Lang isogeny $L:A\to A$ given by $a\mapsto \mathrm{Frob}(a)/a$ for an element $a$ of $A$. The Lang isogeny defines a covering map of $A$ whose group of deck transformations is the group $\mathrm{ker}(L)=A(\mathbb{F}_{q})$. But because we have a character $\chi$ (a $\overline{\mathbb{Q}}_{\ell}^{\times}$-valued function on $A(\mathbb{F}_{q})$), we can take the composition

$\displaystyle \pi_{1}(A,\overline{\eta})\to \mathrm{ker}(L)=A(\mathbb{F}_{q})\xrightarrow{\chi}\overline{\mathbb{Q}}_{\ell}^{\times}$

This gives us a $1$-dimensional representation of $\pi_{1}(A,\overline{\eta})$. This in turn gives us a $1$-dimensional local system, which is known by the theory of constructible sheaves to be an object of $D^{b}(A)$. The resulting sheaf is also called a character sheaf. In the case when $A=\mathbb{G}_{m}$ it is called the Kummer sheaf, and when $A=\mathbb{G}_{a}$ it is called the Artin-Schreier sheaf.

Grothendieck's sheaves-to-functions dictionary is the inspiration for the geometric Langlands correspondence, which is stated entirely in terms of sheaves.
We consider the same setting as before, but we now define a slightly modified version of the Hecke stack $\mathrm{Hck}$, where aside from parametrizing modifications we also include in the parametrized data the point being removed to make the modification. Let $s:\mathrm{Hck}\to X$ be the map that gives us this point on $X$. Given a representation $V$ of $\widehat{G}$, we let $\mathcal{S}_{V}$ be the perverse sheaf given by geometric Satake as discussed earlier, and we define the Hecke functor $T_{V}$ that sends an object $\mathfrak{F}$ of $D^{b}(\mathrm{Bun}_{G})$ to an object $T_{V}(\mathfrak{F})$ of $D^{b}(X\times \mathrm{Bun}_{G})$ as follows:

$\displaystyle T_{V}(\mathfrak{F})=(s\times h^{\rightarrow})_{!}((h^{\leftarrow})^{*}(\mathfrak{F})\otimes \mathcal{S}_{V})$

Then the geometric Langlands correspondence (for function fields over a finite field) states that given an L-parameter $\sigma$, one can find a Hecke eigensheaf, i.e. a sheaf $\mathfrak{F}_{\sigma}$ such that applying the Hecke functor $T_{V}$ to it we have

$\displaystyle T_{V}(\mathfrak{F}_{\sigma})=E_{V\circ\sigma}\boxtimes\mathfrak{F}_{\sigma}$

where $E_{V\circ\sigma}$ is the local system associated to the representation $V\circ\sigma$.

A version of the geometric Langlands correspondence has also been formulated for function fields over $\mathbb{C}$ instead of $\mathbb{F}_{q}$. Many things have to be modified, since in this case there is no Frobenius, and the theory of "D-modules" takes its place. This version of the geometric Langlands correspondence has found some fascinating connections to mathematical physics as well. More recently, a very general and abstract formulation of the geometric Langlands correspondence has been given by replacing L-parameters with coherent sheaves on the moduli stack of L-parameters (a single L-parameter corresponding instead to a skyscraper sheaf at the corresponding point).
This allows one to have the entire formulation be stated as an equivalence of categories between derived categories of constructible sheaves on $\mathrm{Bun}_{G}$ on one side, and coherent sheaves on the moduli stack of L-parameters. This conjectural statement, appropriately modified to be made more precise (i.e. the moduli stack on the Galois side needs to be modified to parametrize “local systems with restricted variation” while the sheaves on both sides need to be ind-constructible, resp. ind-coherent, with nilpotent singular support), is also known as the categorical geometric Langlands correspondence. We have given a rough overview of the ideas involved in the global Langlands correspondence for function fields over a finite field. Hopefully we will be able to dive deeper into the finer aspects of the theory, as well as discuss other closely related aspects of the Langlands program (for example the global Langlands correspondence for number fields) in future posts on this blog. References: Shtukas for reductive groups and Langlands correspondence for function fields by Vincent Lafforgue Global Langlands parameterization and shtukas for reductive groups by Vincent Lafforgue (plenary lecture at the 2018 International Congress of Mathematicians) Chtoucas pour les groupes réductifs et paramétrisation de Langlands globale by Vincent Lafforgue Potential automorphy of $\widehat{G}$-local systems by Jack A. Thorne (invited lecture at the 2018 International Congress of Mathematicians) $\widehat{G}$-local systems are potentially automorphic by Gebhard Böckle, Michael Harris, Chandrasekhar Khare, and Jack A. 
Thorne Geometrization of the local Langlands program (notes by Tony Feng from a workshop at McGill University) The geometric Langlands conjecture (notes from Oberwolfach Arbeitsgemeinschaft) Recent progress in geometric Langlands theory by Dennis Gaitsgory The stack of local systems with restricted variation and geometric Langlands theory with nilpotent singular support by Dima Arinkin, Dennis Gaitsgory, David Kazhdan, Sam Raskin, Nick Rozenblyum, and Yakov Varshavsky An Introduction to the Langlands Program by Daniel Bump, James W. Cogdell, Ehud de Shalit, Dennis Gaitsgory, Emmanuel Kowalski, and Stephen S. Kudla (edited by Joseph Bernstein and Stephen Gelbart)
Time: 2021-11-29

# Exercise 10.1

From the problem statement, $$T=4, N=3, M=2$$.

According to Algorithm 10.3 (the backward algorithm):

The first step is to initialize $$\beta$$ at the final time step:

$$\beta_4(1) = 1, \beta_4(2) = 1, \beta_4(3) = 1$$

The second step is to compute $$\beta$$ for each intermediate time step:

$$\beta_3(1) = a_{11}b_1(o_4)\beta_4(1) + a_{12}b_2(o_4)\beta_4(2) + a_{13}b_3(o_4)\beta_4(3) = 0.46$$

$$\beta_3(2) = a_{21}b_1(o_4)\beta_4(1) + a_{22}b_2(o_4)\beta_4(2) + a_{23}b_3(o_4)\beta_4(3) = 0.51$$

$$\beta_3(3) = a_{31}b_1(o_4)\beta_4(1) + a_{32}b_2(o_4)\beta_4(2) + a_{33}b_3(o_4)\beta_4(3) = 0.43$$

$$\beta_2(1) = a_{11}b_1(o_3)\beta_3(1) + a_{12}b_2(o_3)\beta_3(2) + a_{13}b_3(o_3)\beta_3(3) = 0.2461$$

$$\beta_2(2) = a_{21}b_1(o_3)\beta_3(1) + a_{22}b_2(o_3)\beta_3(2) + a_{23}b_3(o_3)\beta_3(3) = 0.2312$$

$$\beta_2(3) = a_{31}b_1(o_3)\beta_3(1) + a_{32}b_2(o_3)\beta_3(2) + a_{33}b_3(o_3)\beta_3(3) = 0.2577$$

$$\beta_1(1) = a_{11}b_1(o_2)\beta_2(1) + a_{12}b_2(o_2)\beta_2(2) + a_{13}b_3(o_2)\beta_2(3) = 0.112462$$

$$\beta_1(2) = a_{21}b_1(o_2)\beta_2(1) + a_{22}b_2(o_2)\beta_2(2) + a_{23}b_3(o_2)\beta_2(3) = 0.121737$$

$$\beta_1(3) = a_{31}b_1(o_2)\beta_2(1) + a_{32}b_2(o_2)\beta_2(2) + a_{33}b_3(o_2)\beta_2(3) = 0.104881$$

The third step is to compute $$P(O|\lambda)$$ (note that $$0.1 \times 0.112462 + 0.16 \times 0.121737 + 0.28 \times 0.104881 = 0.0600908$$):

$$P(O|\lambda) = \pi_1b_1(o_1)\beta_1(1) + \pi_2b_2(o_1)\beta_1(2) + \pi_3b_3(o_1)\beta_1(3) = 0.0600908$$

# Exercise 10.2

By definition, $$P(i_4 = q_3|O,\lambda) = \gamma_4(3)$$.

According to the formula $$\gamma_4(3) = \frac{\alpha_4(3) \beta_4(3)}{P(O|\lambda)} = \frac{\alpha_4(3) \beta_4(3)}{\sum_j \alpha_4(j) \beta_4(j)}$$

Computing this with a program, we get $$P(i_4 = q_3|O,\lambda) = \gamma_4(3) = 0.536952$$

# Exercise 10.3

According to Algorithm 10.5 (the Viterbi algorithm):

The first step is initialization:

$$\delta_1(1) = \pi_1 b_1(o_1) = 0.2*0.5=0.1$$, $$\psi_1(1) = 0$$

$$\delta_1(2) = \pi_2 b_2(o_1) = 0.4*0.4=0.16$$, $$\psi_1(2) = 0$$

$$\delta_1(3) = \pi_3 b_3(o_1) = 0.4*0.7=0.28$$, $$\psi_1(3) = 0$$

The second step is recursion:

$$\delta_2(1) =
\mathop{max} \limits_j [\delta_1(j)a_{j1}] b_1(o_2) = max\{0.1*0.5, 0.16*0.3, 0.28*0.2\}*0.5=0.028$$$$\psi_2(1) = 3$$ $$\delta_2(2) = \mathop{max} \limits_j [\delta_1(j)a_{j2}] b_2(o_2) = max\{0.1*0.2, 0.16*0.5, 0.28*0.3\}*0.6=0.0504$$$$\psi_2(2) = 3$$ $$\delta_2(3) = \mathop{max} \limits_j [\delta_1(j)a_{j3}] b_3(o_2) = max\{0.1*0.3, 0.16*0.2, 0.28*0.5\}*0.3=0.042$$$$\psi_2(3) = 3$$ $$\delta_3(1) = \mathop{max} \limits_j [\delta_2(j)a_{j1}] b_1(o_3) = max\{0.028*0.5, 0.0504*0.3, 0.042*0.2\}*0.5=0.00756$$$$\psi_3(1) = 2$$ $$\delta_3(2) = \mathop{max} \limits_j [\delta_2(j)a_{j2}] b_2(o_3) = max\{0.028*0.2, 0.0504*0.5, 0.042*0.3\}*0.4=0.01008$$$$\psi_3(2) = 2$$ $$\delta_3(3) = \mathop{max} \limits_j [\delta_2(j)a_{j3}] b_3(o_3) = max\{0.028*0.3, 0.0504*0.2, 0.042*0.5\}*0.7=0.0147$$$$\psi_3(3) = 3$$ $$\delta_4(1) = \mathop{max} \limits_j [\delta_3(j)a_{j1}] b_1(o_4) = max\{0.00756*0.5, 0.01008*0.3, 0.0147*0.2\}*0.5=0.00189$$$$\psi_4(1) = 1$$ $$\delta_4(2) = \mathop{max} \limits_j [\delta_3(j)a_{j2}] b_2(o_4) = max\{0.00756*0.2, 0.01008*0.5, 0.0147*0.3\}*0.6=0.003024$$$$\psi_4(2) = 2$$ $$\delta_4(3) = \mathop{max} \limits_j [\delta_3(j)a_{j3}] b_3(o_4) = max\{0.00756*0.3, 0.01008*0.2, 0.0147*0.5\}*0.3=0.002205$$$$\psi_4(3) = 3$$ The third step is termination $$P^* = \mathop{max} \limits_i \delta_4(i) = 0,003024$$ $$i_4^* = \mathop{\arg\max} \limits_i [\delta_4(i)] = 2$$ The fourth step is optimal path backtracking $$i_3^* = \psi_4(i_4^*) = 2$$ $$i_2^* = \psi_3(i_3^*) = 2$$ $$i_1^* = \psi_2(i_2^*) = 3$$ Therefore, the optimal path$$I^* = (i_1^*,i_2^*,i_3^*,i_4^*)=(3,2,2,2)$$ # Exercise 10.4 Prove with forward probability and backward probability:$$P(O|\lambda) = \sum \limits_{i=1}^N \sum \limits_{j=1}^N \alpha_t(i)a_{ij}b_j(o_{t+1})\beta_{t+1}(j)$$ \begin{aligned} P(O|\lambda) &= P(o_1,o_2,…,o_T|\lambda) \\ &= \sum_{i=1}^N P(o_1,..,o_t,i_t=q_i|\lambda) P(o_{t+1},..,o_T|i_t=q_i,\lambda) \\ &= \sum_{i=1}^N \sum_{j=1}^N P(o_1,..,o_t,i_t=q_i|\lambda) 
P(o_{t+1},i_{t+1}=q_j|i_t=q_i,\lambda)P(o_{t+2},..,o_T|i_{t+1}=q_j,\lambda) \\ &= \sum_{i=1}^N \sum_{j=1}^N [P(o_1,..,o_t,i_t=q_i|\lambda) P(o_{t+1}|i_{t+1}=q_j,\lambda) P(i_{t+1}=q_j|i_t=q_i,\lambda) \\ & \quad \quad \quad \quad P(o_{t+2},..,o_T|i_{t+1}=q_j,\lambda)] \\ &= \sum_{i=1}^N \sum_{j=1}^N \alpha_t(i) a_{ij} b_j(o_{t+1}) \beta_{t+1}(j),{\quad}t=1,2,\dots,T-1 \end{aligned}

# Exercise 10.5

Viterbi algorithm:

Initial value: $$\delta_1(i) = \pi_ib_i(o_1)$$

Recurrence: $$\delta_{t+1}(i) = \mathop{max} \limits_j [\delta_t(j)a_{ji}]b_i(o_{t+1})$$

Forward algorithm:

Initial value: $$\alpha_1(i) = \pi_ib_i(o_1)$$

Recurrence: $$\alpha_{t+1}(i) = [\sum \limits_j \alpha_t(j)a_{ji}]b_i(o_{t+1})$$

The two recursions differ only in how they combine the previous time step's values: the Viterbi algorithm takes the maximum over $$j$$, while the forward algorithm sums over $$j$$.
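The numbers in Exercises 10.1 and 10.3 can be reproduced with a short script. A minimal sketch (the parameters $A$, $B$, $\pi$ and the red/white/red/white observation sequence are read off from the terms in the computations above; this is my reconstruction, not code from the original write-up):

```python
# Backward algorithm (Exercise 10.1) and Viterbi decoding (Exercise 10.3)
# for the HMM whose parameters appear in the computations above.

A = [[0.5, 0.2, 0.3],
     [0.3, 0.5, 0.2],
     [0.2, 0.3, 0.5]]      # transition probabilities a_ij
B = [[0.5, 0.5],
     [0.4, 0.6],
     [0.7, 0.3]]           # emission probabilities b_i(o); columns: red, white
pi = [0.2, 0.4, 0.4]       # initial state distribution
O = [0, 1, 0, 1]           # observations: red, white, red, white

def backward_prob(A, B, pi, O):
    """P(O | lambda) via the backward recursion (algorithm 10.3)."""
    N, T = len(pi), len(O)
    beta = [1.0] * N                               # beta_T(i) = 1
    for t in range(T - 2, -1, -1):                 # t = T-1, ..., 1
        beta = [sum(A[i][j] * B[j][O[t + 1]] * beta[j] for j in range(N))
                for i in range(N)]
    return sum(pi[i] * B[i][O[0]] * beta[i] for i in range(N))

def viterbi(A, B, pi, O):
    """(P*, optimal path) via algorithm 10.5; states are numbered from 1."""
    N, T = len(pi), len(O)
    delta = [pi[i] * B[i][O[0]] for i in range(N)]
    psi = []
    for t in range(1, T):
        step, new_delta = [], []
        for i in range(N):
            probs = [delta[j] * A[j][i] for j in range(N)]
            best = max(range(N), key=probs.__getitem__)
            step.append(best)
            new_delta.append(probs[best] * B[i][O[t]])
        psi.append(step)
        delta = new_delta
    path = [max(range(N), key=delta.__getitem__)]  # backtrack from argmax
    for step in reversed(psi):
        path.append(step[path[-1]])
    return max(delta), [i + 1 for i in reversed(path)]

print(round(backward_prob(A, B, pi, O), 7))    # -> 0.0600908
p_star, path = viterbi(A, B, pi, O)
print(round(p_star, 6), path)                  # -> 0.003024 [3, 2, 2, 2]
```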
2022-07-07 00:53:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000098943710327, "perplexity": 6361.880789868973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683020.92/warc/CC-MAIN-20220707002618-20220707032618-00313.warc.gz"}
https://projecteuclid.org/euclid.aoms/1177706261
## The Annals of Mathematical Statistics

### K-Sample Analogues of the Kolmogorov-Smirnov and Cramér-von Mises Tests

J. Kiefer

#### Abstract

The main purpose of this paper is to obtain the limiting distribution of certain statistics described in the title. It was suggested by the author in [1] that these statistics might be useful for testing the homogeneity hypothesis $H_1$ that $k$ random samples of real random variables have the same continuous probability law, or the goodness-of-fit hypothesis $H_2$ that all of them have some specified continuous probability law. Most tests of $H_1$ discussed in the existing literature, or at least all such tests known to the author before [1] in the case $k > 2$, have only been shown to have desirable consistency or power properties against limited classes of alternatives (see e.g., [2], [3], [4] for lists of references on these tests), while those suggested here are shown to be consistent against all alternatives and to have good power properties. Some test statistics whose distributions can be computed from known results are also listed.

#### Article information

Source: Ann. Math. Statist., Volume 30, Number 2 (1959), 420-447.

Dates: First available in Project Euclid: 27 April 2007

https://projecteuclid.org/euclid.aoms/1177706261

Digital Object Identifier: doi:10.1214/aoms/1177706261

Mathematical Reviews number (MathSciNet): MR102882

Zentralblatt MATH identifier: 0134.36707
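For $k = 2$, the Kolmogorov-Smirnov statistic that the paper generalizes is simply the largest vertical gap between the two empirical distribution functions. A minimal sketch of that special case (my illustration; not the paper's $k$-sample statistic):

```python
# Two-sample Kolmogorov-Smirnov statistic D = sup_t |F_m(t) - G_n(t)|,
# where F_m and G_n are the empirical CDFs of the two samples.
from bisect import bisect_right

def ks_two_sample(x, y):
    xs, ys = sorted(x), sorted(y)
    m, n = len(xs), len(ys)
    # |F - G| is piecewise constant and right-continuous, so its supremum
    # is attained at one of the observed points.
    return max(abs(bisect_right(xs, t) / m - bisect_right(ys, t) / n)
               for t in xs + ys)

print(ks_two_sample([1, 2, 3], [4, 5, 6]))   # -> 1.0 (completely separated)
print(ks_two_sample([1, 2], [1, 2]))         # -> 0.0 (identical samples)
```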
2019-07-18 13:12:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5001745820045471, "perplexity": 766.0438414086406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525634.13/warc/CC-MAIN-20190718125048-20190718151048-00093.warc.gz"}
https://www.acmicpc.net/problem/21130
Time limit: 1 s | Memory limit: 512 MB | Submissions: 4 | Accepted: 3 | Solvers: 3 | Acceptance ratio: 75.000%

## Problem

Business Semiconductor Units (BSU) is a large international corporation that focuses on selling fast and reliable computers to business clients. Recently, they have decided to develop a new processor model which will work even faster and more reliably than its predecessors. The R&D department of the company is responsible for designing the instruction set and processor architecture. After the deadline, they should demonstrate the working prototype to the head of the company. Unfortunately, the whole department was playing Minecraft most of the time instead of doing their job, so the presented prototype supports only three simple instructions. Let's take a closer look at their masterpiece.

The new processor has $16$ registers named from r0 to r15, each of which can store an unsigned $16$-bit integer. There is also main memory consisting of $2^{16} + 1$ eight-bit cells. The program for this processor is a sequence of instructions. The instructions are executed sequentially; neither jumps nor loops are supported. The processor executes the same sequence of instructions $5000$ times. That is, the following procedure is repeated $5000$ times: go over the instructions from the start to the end and execute them.

Below you can see the list of available instructions. For clarity, let's call $x\text{ mod } 2^8$ the lower part of the number $x$, and $\left\lfloor \frac{x}{2^8} \right\rfloor$ the upper part of the number $x$. The number in the $i$-th main memory cell is denoted $mem_i$.

• imm r, b: load the constant number $b$ ($0 \le b < 2^{16}$) into the register named $r$;
• ld x, y: suppose that the register named $y$ stores the number $b$. Then, the number $mem_{b+1}\cdot 2^8 + mem_b$ is put into the register $x$;
• st x, y: suppose that the register named $x$ stores the number $a$, and the register $y$ stores the number $b$.
Then, the lower part of $b$ is put into $mem_a$, and the upper part of $b$ is put into $mem_{a+1}$.

As you can see, the instruction set is pretty lean, and the R&D department is unsure whether the processor is capable of doing anything non-trivial or not. To make it run some useful programs, they hired you and gave you an assignment. Now, you need to write a program for the new processor that multiplies $n$ non-negative $16$-bit numbers modulo $2^{16}$.

## Input

This problem has no input data.

## Output

Output the required program in the following format. The first line must contain an integer $s$, the number of instructions in your program ($1 \le s \le 10^5$). Each of the following $s$ lines must contain a processor instruction. The format of instructions is described above. Be careful and follow the format strictly. All the register names must be valid (that means, from r0 to r15).

## Interaction

Technically, this problem is an output-only interactive problem. [Sounds weird, doesn't it? :)] In each test, the interactor first reads the instructions you wrote to the output. Next, it reads the integer $n$ and $n$ integers $a_i$ from the test ($1 \le n \le 4000$, $0 \le a_i < 2^{16}$). The number $n$ is placed into the register r0. For each $1 \le i \le n$, the lower part of $a_i$ is placed into $mem_{2\cdot i - 2}$, and the upper part is placed into $mem_{2\cdot i - 1}$. All other registers and memory cells are zeroed initially. Then, the interactor executes your program. The instructions are performed sequentially, and the execution of the program is performed exactly $5000$ times. After that, the interactor reads your answer from the register r0 and compares it with $a_1 \cdot a_2 \cdot \ldots \cdot a_n$ modulo $2^{16}$.

## Sample Output 1

4
imm r3, 42
imm r1, 6
st r3, r1
ld r0, r3

## Hint

The output in the example does not multiply integers, so submitting this program will give you the "Wrong Answer" verdict. The program is only provided to illustrate the output format.
## Judging and other information

• Samples are not judged.
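The three-instruction semantics are easy to mis-read, so here is a minimal simulator sketch (my code, not part of the judge) that runs a program the required 5000 passes and reproduces the sample program's effect:

```python
# Minimal simulator for the three-instruction BSU processor described above.
# Registers r0..r15 hold unsigned 16-bit values; memory has 2**16 + 1 cells.

def run(program, passes=5000):
    regs = [0] * 16
    mem = [0] * (2**16 + 1)
    for _ in range(passes):                    # same instruction list, 5000 times
        for line in program:
            op, args = line.split(None, 1)
            x, y = [a.strip() for a in args.split(",")]
            if op == "imm":                    # imm r, b : load constant b into r
                regs[int(x[1:])] = int(y) % 2**16
            elif op == "ld":                   # ld x, y : x <- mem[b+1]*2^8 + mem[b]
                b = regs[int(y[1:])]
                regs[int(x[1:])] = mem[b + 1] * 256 + mem[b]
            elif op == "st":                   # st x, y : mem[a] <- low(b), mem[a+1] <- high(b)
                a, b = regs[int(x[1:])], regs[int(y[1:])]
                mem[a], mem[a + 1] = b % 256, b // 256
    return regs, mem

sample = ["imm r3, 42", "imm r1, 6", "st r3, r1", "ld r0, r3"]
regs, _ = run(sample)
print(regs[0])  # -> 6: the constant 6 is stored at mem[42..43], then loaded back
```

The extra cell at index $2^{16}$ is exactly what lets `st`/`ld` touch $mem_{a+1}$ even when $a = 2^{16}-1$.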
2022-05-17 11:27:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23047244548797607, "perplexity": 1320.251807470633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517245.1/warc/CC-MAIN-20220517095022-20220517125022-00336.warc.gz"}
https://www.albert.io/learn/waves-and-sounds/question/freezing-a-sine-wave-with-strobe-light
A stroboscope illuminates a continuous sine wave of wavelength $\lambda$ m traveling on a long string. When the strobe is flashing at a frequency of $F$ Hz the wave on the string no longer seems to be moving: the fixed sine shape of the wave appears stationary.

What expression below correctly predicts the possible value or values of the wave speed, $v$, on the string?

A. $v = \cfrac{\lambda}{F}$ is the only value for the speed.

B. $v = F \lambda$ is the only value for the speed.

C. $v = n \lambda F$ where $n$ = any positive integer.

D. $v = n \lambda F$ where $n$ = any non-zero integer, positive or negative.

E. $v = n \lambda F$ where $n$ = any odd integer.
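One way to reason about the choices (my derivation, not part of the original item): the pattern looks frozen exactly when the wave advances a whole number of wavelengths between consecutive flashes, which occur a time $1/F$ apart.

```latex
% Distance traveled between flashes must be an integer number of wavelengths:
v \cdot \frac{1}{F} = n\lambda , \qquad n = 1, 2, 3, \dots
\quad\Longrightarrow\quad v = n \lambda F .
```

Since a traveling wave on the string moves in one fixed direction, only positive $n$ gives a physical speed, which singles out choice C.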
2017-03-24 17:58:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4265153110027313, "perplexity": 1882.1924595335943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188550.58/warc/CC-MAIN-20170322212948-00130-ip-10-233-31-227.ec2.internal.warc.gz"}
https://agenda.infn.it/event/12464/contributions/14396/
# SPIN 2018

9-14 September 2018, University of Ferrara, Europe/Rome timezone

23RD INTERNATIONAL SPIN SYMPOSIUM

## Electric dipole moment searches using storage rings

10 Sep 2018, 12:10, 40m

Teatro Nuovo - Piazza Trento e Trieste, 52

Plenary Sessions (for INVITED PLENARY TALKS only!)

Fundamental Symmetries and Spin Physics Beyond the Standard Model

### Speaker

Dr Frank Rathmann (Forschungszentrum Jülich)

### Description

The Standard Model (SM) of Particle Physics cannot account for the apparent matter-antimatter asymmetry of our Universe. Physics beyond the SM is required and is probed either by employing the highest energies (e.g., at the LHC), or by striving for ultimate precision and sensitivity (e.g., in the search for electric dipole moments). Permanent electric dipole moments (EDMs) of particles violate both time reversal $(T)$ and parity $(P)$ invariance, and are via the $CPT$-theorem also $CP$-violating. Finding an EDM would be a strong indication for physics beyond the SM, and pushing upper limits further provides crucial tests for any corresponding theoretical model, e.g., SUSY.

Up to now, EDM searches have focused on neutral systems (neutrons, atoms, and molecules). Storage rings, however, offer the possibility to measure EDMs of charged particles by observing the influence of the EDM on the spin motion in the ring [Eversmann:2015jnk; PhysRevAccelBeams.20.072801; PhysRevAccelBeams.21.042002]. Direct searches of proton and deuteron EDMs bear the potential to reach sensitivities beyond $10^{-29}\,e\cdot\mathrm{cm}$. Since the Cooler Synchrotron COSY at the Forschungszentrum Jülich provides polarized protons and deuterons up to momenta of 3.7 GeV/c, it constitutes an ideal testing ground and starting point for such an experimental programme. The talk will present the JEDI plans for the measurement of charged hadron EDMs and discuss recent results.
### Primary author

Dr Frank Rathmann (Forschungszentrum Jülich)
2020-10-26 21:42:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33993837237358093, "perplexity": 7678.280426510202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107892062.70/warc/CC-MAIN-20201026204531-20201026234531-00294.warc.gz"}
http://forums.xkcd.com/viewtopic.php?f=3&t=199&p=798757&sid=7a11b985dd6da2f5490b4dee02c6f085
## Marble Dropping A forum for good logic/math puzzles. Moderators: jestingrabbit, Moderators General, Prelates Andy Posts: 45 Joined: Mon Aug 28, 2006 10:23 am UTC ### Marble Dropping Hey, this is my first non-how-are-you-going style post. I hope it is appropriate: You have two identical glass marbles and a 100 storey building. You want to find out the highest floor in the building that you can drop marbles off without them breaking using the fewest number of drops. After dropping a marble, if it breaks you can't use it any more. If it survives, then it isn't weakened or anything like that, so you will always get the same result if you drop a marble from the same floor. EDIT: Solution/spoiler discussion: http://forums.xkcd.com/viewtopic.php?t=201 Last edited by Andy on Tue Aug 29, 2006 1:15 pm UTC, edited 1 time in total. mister k Posts: 643 Joined: Sun Aug 27, 2006 11:28 pm UTC Contact: Hmm, I don't really understand how you can be very efficient here. You could go up in twos, and if your marble breaks you simply go down a level and test the other marble- although that might leave you with two broken marbles, you will know where they break, but I don't see how you can do any better, unless theres something you're missing. Torn Apart By Dingos Posts: 817 Joined: Thu Aug 03, 2006 2:27 am UTC Are we looking for the best average case or the best worst case? I guess it's okay to break both if that'll tell you what storey you're looking for? If not, you could only use the first marble and try floors 1,2,3,4,5... in order. If there were infinitely many floors, I'd probably do this: try the first marble on floors 2^n, until it breaks, and then use the second marble to try consecutive floors from the last floor that was okay. If I do this with 100 floors, and try the hypothetical floor 128 if I have to, where I assume it will break, it'd take an average of 17.97 tries, and a worst case of 42. That's not so good. But there's certainly better than that for 100 floors... 
one being trying floors 10,20,30,...,100 with the first marble and then trying floor by floor with the second marble. Worst case 18, average 10. I'll have to think about it some more. Torn Apart By Dingos Posts: 817 Joined: Thu Aug 03, 2006 2:27 am UTC I've got another solution, but unfortunately it wasn't better than my last one. It was easier to analyze, though (I used Python to calculate averages with my two first methods). It seems the only scheme you can use is to try to exclude as many floors as possible with the first marble, and use the second marble floor-by-floor. If we first try floor above floor n_1 (*), then the floor above n_1+n_2, then the floor above n_1+n_2+n_3, etc, the number of tries we need are in this list: [2,3,...,n_1,1; 3,4,...,n_2,2; 4,5,...,n_3,3; ...] (the nth element in the list is the number of tries we need if floor n is the first floor from which the marble will break). I thought it was natural to give each segment the same average, so I tried with n_1 = 19, n_2 = 18, n_3 = 17, ..., n_10 = 10. Then we have 10 segments, each with average 10, so the average number of tries is 10, and the worst case is 19. Apparently there are better choices for n_k (my earlier post had a better worst-case), but now it's clearer how to find the solution. EDIT: (*) Actually, my convoluted explanation of n_k isn't quite right. If you can make out from the list what it represents, you've got it. I'll maybe fix this later. Last edited by Torn Apart By Dingos on Mon Aug 28, 2006 5:07 pm UTC, edited 1 time in total. RealGrouchy Nobody Misses Me As Much As Meaux. Posts: 6704 Joined: Thu May 18, 2006 7:17 am UTC Contact: Is it implied that the marble won't break when dropped from the first storey, and will break from the 100th? Not sure if it matters, but it keeps my simple brain occupied. - RG> Mighty Jalapeno wrote:At least he has the decency to REMOVE THE GAP BETWEEN HIS QUOTES.... Sungura wrote:I don't really miss him. At all. He was pretty grouchy. 
GreedyAlgorithm Posts: 286 Joined: Tue Aug 22, 2006 10:35 pm UTC Contact: RealGrouchy wrote:Is it implied that the marble won't break when dropped from the first storey, and will break from the 100th? Not sure if it matters, but it keeps my simple brain occupied. - RG> It's not implied that it will break from the 100th, but it seems to be implied that it will not break from the 1st (since it seems to imply such a floor exists). I'm going to assume that it can break from the first, though. GENERATION 1-i: The first time you see this, copy it into your sig on any forum. Square it, and then add i to the generation. ulnevets Posts: 186 Joined: Wed Aug 09, 2006 1:45 am UTC Contact: what if you give it a very slight upward velocity? then it wouldn't be counted as a drop. i win. Shoofle Posts: 409 Joined: Sun Apr 09, 2006 9:28 pm UTC Location: Location, Location. Contact: ulnevets wrote:what if you give it a very slight upward velocity? then it wouldn't be counted as a drop. i win. That doesn't actually change anything; the amount gravity accelerates it downwards while it gets to the peak is precisely countered by the acceleration it takes to get back down, so tossing it up is exactly the same as tossing it down with the same force. Andy Posts: 45 Joined: Mon Aug 28, 2006 10:23 am UTC Are we looking for the best average case or the best worst case? Sorry, we were optimising the worst case. ulnevets Posts: 186 Joined: Wed Aug 09, 2006 1:45 am UTC Contact: Shoofle wrote: ulnevets wrote:what if you give it a very slight upward velocity? then it wouldn't be counted as a drop. i win. That doesn't actually change anything; the amount gravity accelerates it downwards while it gets to the peak is precisely countered by the acceleration it takes to get back down, so tossing it up is exactly the same as tossing it down with the same force. nonono i mean, a drop is defined as starting with zero velocity RealGrouchy Nobody Misses Me As Much As Meaux. 
Posts: 6704 Joined: Thu May 18, 2006 7:17 am UTC Contact: That's rediculous. Presumably, you have to lift the marble to some degree, and when you drop it, it will be from zero vertical velocity. If you throw it up, it will reach zero vertical velocity and continue to accelerate downwards... Oh, I give up. I'm really too tired to be discussing marble droppings. - RG> Mighty Jalapeno wrote:At least he has the decency to REMOVE THE GAP BETWEEN HIS QUOTES.... Sungura wrote:I don't really miss him. At all. He was pretty grouchy. ulnevets Posts: 186 Joined: Wed Aug 09, 2006 1:45 am UTC Contact: RealGrouchy wrote:That's rediculous. Presumably, you have to lift the marble to some degree, and when you drop it, it will be from zero vertical velocity. If you throw it up, it will reach zero vertical velocity and continue to accelerate downwards... Oh, I give up. I'm really too tired to be discussing marble droppings. - RG> you guys don't understand when you drop something, it leaves your hand at zero velocity. throwing it upwards slightly would be considered a throw, and any amount of these are allowed. if you're concerned with the slight upward velocity, you could always move your hand a little lower to account for it. Jesse Vocal Terrorist Posts: 8635 Joined: Mon Jul 03, 2006 6:33 pm UTC Location: Basingstoke, England. Contact: Even if it wouldn't be counted as a drop, you would still have two broken marbles. ulnevets Posts: 186 Joined: Wed Aug 09, 2006 1:45 am UTC Contact: Jesster wrote:Even if it wouldn't be counted as a drop, you would still have two broken marbles. strategy: start at bottom floor and move up one at a time until it breaks. best case: 0 drops. worst case: 0 drops. xkcd Site Ninja Posts: 365 Joined: Sat Apr 08, 2006 8:03 am UTC Contact: ulnevets wrote: Jesster wrote:Even if it wouldn't be counted as a drop, you would still have two broken marbles. strategy: start at bottom floor and move up one at a time until it breaks. best case: 0 drops. 
worst case: 0 drops. Why do people always forget the worst case of "raptor attack"? Jesse Vocal Terrorist Posts: 8635 Joined: Mon Jul 03, 2006 6:33 pm UTC Location: Basingstoke, England. Contact: Because they are uninformed. Also, at what speed can raptors climb stairs? Ooh, new problem. At what height does a dropped marble gain maximum raptor killing velocity? Penguin Posts: 98 Joined: Wed Jul 05, 2006 2:54 pm UTC Location: Cambridge, MA Contact: I like this game better! What is the minimum floor you have to drop a marble from to kill a raptor? If the marble doesn't break, you get a bonus! <3! Jesse Vocal Terrorist Posts: 8635 Joined: Mon Jul 03, 2006 6:33 pm UTC Location: Basingstoke, England. Contact: Although, the marble wouldn't be retrievable. Don't forget, raptors are intelligent and one would be hiding around the corner until you came to retrieve them. ulnevets Posts: 186 Joined: Wed Aug 09, 2006 1:45 am UTC Contact: xkcd wrote: ulnevets wrote: Jesster wrote:Even if it wouldn't be counted as a drop, you would still have two broken marbles. strategy: start at bottom floor and move up one at a time until it breaks. best case: 0 drops. worst case: 0 drops. Why do people always forget the worst case of "raptor attack"? would it be worse if the raptors had magic powers? Jesse Vocal Terrorist Posts: 8635 Joined: Mon Jul 03, 2006 6:33 pm UTC Location: Basingstoke, England. Contact: hassellhoff Posts: 30 Joined: Mon Jul 28, 2008 8:40 am UTC Contact: ### Re: Marble Dropping gah.. i was hoping for a little riddle i could do at 4 in the morning.. but then i see 1 guy had to use python. i give up consolation prize? Exenon wrote: I play Call of Duty. That's for real men ! hyperion "I'll show ye...." Posts: 1569 Joined: Wed Nov 29, 2006 2:16 pm UTC Location: Perth ### Re: Marble Dropping therapist wrote:gah.. i was hoping for a little riddle i could do at 4 in the morning.. but then i see 1 guy had to use python. i give up consolation prize? 
The prize is a smack in the head for dragging up a two year old thread and adding nothing. Peshmerga wrote:A blow job would probably get you a LOT of cheeseburgers. But I digress. Crosby Posts: 21 Joined: Tue Aug 05, 2008 6:46 pm UTC ### Re: Marble Dropping first marble, try: 10, 20, 30, ... 100. If it is still unbroken, then the answer is 100. If it breaks at any point, then go back down 9 floors, and go up one-at-a time until it breaks. If it does, then the floor under that is the solution. If it does not break, then the last floor tried is the solution. Coupla scenarios: 10,20,30,40,(break), 31,32,33,34,35 (break) => answer: 24 (9 tries)(also, the avg. number of tries need to solve with this solution.) 10,20,30,40,50,60,70,80,90,100 (no break) => answer: 100 (10 tries) 1 (break) => answer: 0 (1 try)(min solution) 10,20,30,40,50,60,70,80,90,100 (break),91,92,93,94,95,96,97,98,99 => answer: 99 (18 tries)(also, the max tries need to find a solution.) We could try different ways to break up the floors, but I think this is optimal because we have equalized the max number of tries for the first marble and second marble (9 tries). Any solution using more floors for the first marble (and less tries), would result in more tries for the second marble. I'm not smart enough to "proof" this. Or maybe I'm just lazy. crzftx Posts: 371 Joined: Tue Jul 29, 2008 4:49 am UTC Location: Rockford, IL ### Re: Marble Dropping You can do it in 15. There's an answers thread with the answer posted, along with much more difficult math. I was happy about solving an unsolved puzzle until I realized this. At least I got the answer right BoomFrog Posts: 1069 Joined: Mon Jan 15, 2007 5:59 am UTC Location: Seattle ### Re: Marble Dropping crzftx wrote:You can do it in 15. There's an answers thread with the answer posted, along with much more difficult math. I was happy about solving an unsolved puzzle until I realized this. 
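Crosby's decade strategy above is easy to check by brute force. A quick sketch (my code, not from the thread) shows the worst case is actually 19 drops rather than 18: if the first marble only breaks on floor 100, floors 91-99 must then all be tried one by one.

```python
# Exhaustive check of the "drop every 10 floors" strategy.  f is the highest
# safe floor (f = 0 means even floor 1 breaks the marble).

def drops_needed(f):
    count = 0
    for first in range(10, 101, 10):                # first marble: 10, 20, ..., 100
        count += 1
        if first > f:                               # first marble broke here
            for floor in range(first - 9, first):   # second marble, one by one
                count += 1
                if floor > f:                       # second marble broke
                    break
            return count
    return count                                    # survived all ten drops: answer is 100

print(max(drops_needed(f) for f in range(101)))     # -> 19
```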
At least I got the answer right Actually you can do it in 14 "Everything I need to know about parenting I learned from cooking. Don't be afraid to experiment, and eat your mistakes." - Cronos crzftx Posts: 371 Joined: Tue Jul 29, 2008 4:49 am UTC Location: Rockford, IL ### Re: Marble Dropping 14? How was this possible. I guess I may not have understood the answer then. I thought you could take the first marble first by 14 floors, then 13, etc. The other marble would then go 1 by 1 as needed to figure out which floor it is. This method can take up to 15 tries. It'd be 15 if the critical floor is 14, 27, 39, 50, 60, 69, 77, 84, 90, 95, or 99. I guess it only takes 14, since I don't need to break both marbles, necessarily, to find the floor. Oops. afarnen Posts: 157 Joined: Mon May 05, 2008 12:12 pm UTC ### Re: Marble Dropping Spoiler:

Code: Select all

    using the increment algorithm where you go up by n stories, starting with n,
    and when the first marble breaks, you go up starting at one plus the last
    story at which the first marble survived the fall, here's a graph i made:
    example: 10, 20, 30 (break), 21, 22, 23 (break).. that's 6 drops

              increment
       #  | 1  2  3  4  5
      ----+--------------
    d  1  | 1  2  2  2  2
    r  2  | 2  2  3  3  3
    o  3  | 3  3  3  4  4
    p  4  | 4  3  3  4  5
    s  5  | 5  4  4  3  5

    inc.  avg drops  pattern
    ---------------------------------------------
    1     50.5       1,2..100
    2     26.5       2,2,3,3..51,51
    3     18.83      2,3,3,3,4,4..34,35,35,35
    4     15.25      2,3,4,4,3,4,5,5..26,27,28,28
    5     13.3       2,3,4,5,5,3,4,5,6,6..21,22,23,24,24
    6     12.14      2,3,4,5,6,6,3,4,5,6,7,7..17,18,19,20,21,21,18,19,20,21
    7     11.46      2,3,4,5,6,7,7,3,4,5,6,7,8,8..15,16,17,18,19,20,20,16,17
    8     11.06      2,3,4,5,6,7,8,8,3,4,5,6,7,8,9,9..13,14,15,16,17,18,19,19,14,15,16,17
    9     10.91      2,3,4,5,6,7,8,9,9,3,4,5,6,7,8,9,10,10..12,13,14,15,16,17,18,19,19,13
    10    10.9       2,3,4,5,6,7,8,9,10,10..11,12,13,14,15,16,17,18,19,19
    11    10.91      2,3,4,5,6,7,8,9,10,11,11..10,11,12,13,14,15,16,17,18,19,19,11
    12    10.94      2,3,4,5,6,7,8,9,10,11,12,12..9,10,11,12,13,14,15,16,17,18,19,19,10,11,12,13

the most efficient increment algorithm is 10. ctxcm2002 Posts: 9 Joined: Sun Aug 17, 2008 6:23 pm UTC ### Re: ulnevets wrote: RealGrouchy wrote:That's rediculous. Presumably, you have to lift the marble to some degree, and when you drop it, it will be from zero vertical velocity. If you throw it up, it will reach zero vertical velocity and continue to accelerate downwards... Oh, I give up. I'm really too tired to be discussing marble droppings. - RG> you guys don't understand when you drop something, it leaves your hand at zero velocity. throwing it upwards slightly would be considered a throw, and any amount of these are allowed. if you're concerned with the slight upward velocity, you could always move your hand a little lower to account for it. He's right you know... Last I checked I don't drop a basket ball, or baseball. They all reach zero vertical velocity. Granted their parabolic paths are much larger than in the marble dropping experiment. As far as this experiment, the only way I can come up with this being feasible is making a contraption out of cardboard toilet paper rolls, some scotch tape, 3 feet of floss, 3 cotton balls and 5 straws.
'Taint any rules abou' that!

Cauchy
Posts: 602
Joined: Wed Mar 28, 2007 1:43 pm UTC

### Re: Marble Dropping

Due to very fierce winds, your hand gets cut off if you try to make a measurement without dropping the marble.

(∫|p|²)(∫|q|²) ≥ (∫|pq|)²
Thanks, skeptical scientist, for knowing symbols and giving them to me.

Penitent87
Posts: 28
Joined: Wed Apr 15, 2009 2:30 am UTC
Location: London

### Re: Marble Dropping

afarnen wrote:
Spoiler:
[increment-algorithm table quoted in full above -- snipped]

the most efficient increment algorithm is 10.

I think you forgot to look carefully enough at the final few marbles in some lists. You just kept the pattern going instead of actually thinking about it. (e.g.
the number of marble drops required to identify the 100th floor, given an increment of 9, is 12, not 13: 9, 18, 27, 36, 45, 54, 63, 72, 81, 90, 99, 100.) These alterations mean that 9, 10, and 11 all have the same average case of 10.9 drops.

GeneralFailure
Posts: 1
Joined: Mon Jun 29, 2009 10:40 pm UTC

### Re: Marble Dropping

May we assume that the building is of such a height that the marble, having been dropped from a height below the top floor, will not reach terminal velocity? Otherwise, we could optimize our algorithm given the height of each floor, the force of air resistance on the marble, and the force of gravity by determining the highest floor which it even makes sense to try (since all higher floors would presumably yield the same result).

Soljer
Posts: 29
Joined: Fri Feb 27, 2009 6:31 pm UTC

### Re: Marble Dropping

This is a pretty easy problem, and you can even generalize it to an arbitrary number of marbles.

Spoiler:
For 1 marble, you can't do better than a linear search.

For 2 marbles, and n floors, drop the first marble at increments of root(n) - you'll make at most root(n) drops. Once the first one breaks (let's call it floor p*root(n)), go to floor (p-1)*root(n), where you know the first marble was unbroken, and linearly search upwards till the second one breaks; you know the highest floor is one less than where it broke. This step will also take at most root(n), for a final run time of 2*root(n) or O(root(n)).

For three marbles, drop the first marble in increments of n^(2/3). This will take at most n^(1/3) drops. Once that breaks (at floor p*n^(2/3)), go to floor (p-1)*n^(2/3) and use the two-marble algorithm as described above. That is, walk from (p-1)*n^(2/3) to p*n^(2/3) in increments of root(n^(2/3)), or n^(1/3). This will take at most n^(1/3) steps. Once the second marble breaks, do a linear search over at most n^(1/3) floors. So our run time is 3*n^(1/3) or O(n^(1/3)).

In general, for k marbles, you can get a run time of k*n^(1/k).
If you have log(n) marbles, you can do a binary search for log(n) run-time. I don't know if that will always be the best policy for all cases, but it is a sub-linear policy for an arbitrary number of floors and marbles (assuming more than one marble, anyways).

WarDaft
Posts: 1583
Joined: Thu Jul 30, 2009 3:16 pm UTC

### Re: Marble Dropping

Soljer wrote:This is a pretty easy problem, and you can even generalize it to an arbitrary number of marbles.
Spoiler:
[k-marble increment scheme quoted in full above -- snipped] In general, for k marbles, you can get a run time of k*n^(1/k). If you have log(n) marbles, you can do a binary search for log(n) run-time.

That isn't the solution for the worst case.

Spoiler:
You can improve your worst worst-case scenario by decreasing the effectiveness of your slightly better worst-case scenario.
Drop it from the 14th floor; if it breaks, you have up to 13 more drops, for a worst case of 14 total to find the floor. If it doesn't break at 14, drop it from the 27th floor; if it breaks, you have up to 12 more drops for... worst case 14. The floors (or at least one solution) for your first marble are 14, 27, 39, 50, 60, 69, 77, 84, 89, 94, 97, 98, 99, 100.

Furthermore, k*n^(1/k) is not even the run time for a radix-style search. If you have 10 marbles and 100 floors, then you don't need 15 drops; that's worse than the worst case for 2 marbles.

All Shadow priest spells that deal Fire damage now appear green.
Big freaky cereal boxes of death.

Qaanol
The Cheshirest Catamount
Posts: 3062
Joined: Sat May 09, 2009 11:55 pm UTC

### Re: Marble Dropping

If you have m marbles and an s-story building,

Spoiler:
Find the least n such that ${n \choose m} \geq s$. The maximum number of drops can be bounded by n-1 if m>1, or by n if m=1.

Drop the first marble from floor [imath]\large{{n-1}\choose{m-1}}[/imath]. If it doesn't break, increment the floor by [imath]\large{{n-2}\choose{m-1}}[/imath]. Continue this until the first marble breaks, say after incrementing by [imath]\large{{n-k_1}\choose{m-1}}[/imath]. Starting from the last floor where the first marble didn't break, increment by [imath]\large{{n-k_1-1}\choose{m-2}}[/imath] for the second marble. Keep decreasing the value in the top of the choose function by one each time, and adding that to the floor from which you drop the second marble. Keep doing this until the second marble breaks, say after incrementing by [imath]\large{{n-k_2}\choose{m-2}}[/imath]. Then start the increments for the third marble at [imath]\large{{n-k_2-1}\choose{m-3}}[/imath].

What we're doing is using the columns of Pascal's triangle as increment values, moving upwards. The column of Pascal's triangle (the number in the bottom of the choose function) is the number of marbles we still have left, not counting the one currently being dropped.
Note that the columns are indexed from 0, as are the rows. Here's the beginning of Pascal's triangle (left-aligned).

$\begin{eqnarray}
1\\
1 & 1\\
1 & 2 & 1\\
1 & 3 & 3 & 1\\
1 & 4 & 6 & 4 & 1\\
1 & 5 & 10 & 10 & 5 & 1\\
1 & 6 & 15 & 20 & 15 & 6 & 1\\
1 & 7 & 21 & 35 & 35 & 21 & 7 & 1\\
1 & 8 & 28 & 56 & 70 & 56 & 28 & 8 & 1\\
1 & 9 & 36 & 84 & 126 & 126 & 84 & 36 & 9 & 1\\
1 & 10 & 45 & 120 & 210 & 252 & 210 & 120 & 45 & 10 & 1\\
1 & 11 & 55 & 165 & 330 & 462 & 462 & 330 & 165 & 55 & 11 & 1\\
1 & 12 & 66 & 220 & 495 & 792 & 924 & 792 & 495 & 220 & 66 & 12 & 1\\
1 & 13 & 78 & 286 & 715 & 1287 & 1716 & 1716 & 1287 & 715 & 286 & 78 & 13 & 1\\
1 & 14 & 91 & 364 & 1001 & 2002 & 3003 & 3432 & 3003 & 2002 & 1001 & 364 & 91 & 14 & 1\\
1 & 15 & 105 & 455 & 1365 & 3003 & 5005 & 6435 & 6435 & 5005 & 3003 & 1365 & 455 & 105 & 15 & 1
\end{eqnarray}$

Say we have 6 marbles and a 5000-floor building. Just for kicks let's say the marbles will first break when dropped from the 3600th floor. Since we have 6 marbles, we look in column 6 for the first number over 5000. That's [imath]\large{15\choose 6} = 5005[/imath]. We locate that in Pascal's triangle (on the left half). Then we move one place to the left and one place up. That column will be our increments for the first marble.

First marble:
floor 2002 does not break
floor 2002 + 1287 = 3289 does not break
floor 2002 + 1287 + 792 = 4081 breaks

Second marble:
floor 3289 + 330 = 3619 breaks

Third marble:
floor 3289 + 120 = 3409 does not break
floor 3289 + 120 + 84 = 3493 does not break
floor 3289 + 120 + 84 + 56 = 3549 does not break
floor 3289 + 120 + 84 + 56 + 35 = 3584 does not break
floor 3289 + 120 + 84 + 56 + 35 + 20 = 3604 breaks

Fourth marble:
floor 3584 + 10 = 3594 does not break
floor 3584 + 10 + 6 = 3600 breaks

Fifth marble:
floor 3594 + 3 = 3597 does not break
floor 3594 + 3 + 2 = 3599 does not break

And we are done. In this particular case we didn't need to use all 6 marbles.
The point is, since at every drop we rise up one row in Pascal's triangle for our increment value, the maximum number of drops equals the row in which we began. In this case we began on row 14, so we had an upper bound of 14 drops. For the classic case of 2 marbles and 100 floors, clearly 105 is the first value in the 2-marble column that's at least 100, so we begin there and move diagonally up and to the left to find 14 as the place to start. Since that's on row 14, that's also the upper bound on number of drops. wee free kings
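The strategies discussed in the thread are easy to check mechanically. A minimal Python sketch (not from the thread; the function name is mine) computes the exact worst-case number of drops by dynamic programming over the first-drop floor, and reproduces the 14-drop answer for 2 marbles and 100 floors:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def worst_drops(marbles, floors):
    """Minimum worst-case number of drops to pin down the critical floor."""
    if floors == 0:
        return 0
    if marbles == 1:
        return floors  # linear search from the bottom is forced
    # Try every first-drop floor f: if the marble breaks, search the f-1
    # floors below with one fewer marble; if it survives, search the
    # floors - f floors above with all marbles.
    return 1 + min(
        max(worst_drops(marbles - 1, f - 1), worst_drops(marbles, floors - f))
        for f in range(1, floors + 1)
    )

print(worst_drops(2, 100))  # 14
```

This agrees with Qaanol's binomial scheme: d drops with m marbles distinguish C(d,1) + C(d,2) + ... + C(d,m) floors, which is where the Pascal's-triangle columns come from — for d = 14, m = 2 that is 14 + 91 = 105 ≥ 100.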
https://content.ces.ncsu.edu/agricultural-subsurface-drainage-cost-in-north-carolina
NC State Extension Publications

Improving drainage of poorly drained soils improves trafficability for timely field operations. In addition, drainage improves crop yield by eliminating long periods of excess water. An important advantage of subsurface drains is that drainage occurs without taking land out of production. In North Carolina, subsurface drains are generally installed with closer spacing than typical open ditches. Subsurface drains, along with good surface drainage, effectively protect crops from excessive soil water conditions.

Subsurface drainage cost in North Carolina can vary greatly depending on location and is highly dependent on soil properties. The most important soil property affecting the cost per acre for drainage installation is soil texture, which determines the ability of the soil to move water both vertically and laterally.

Figure 1 and Table 1 show the estimated costs of material and installation for subsurface drainage pipes in North Carolina with varying drain spacing. These cost estimates are based on 50 cents per linear foot of 4-inch perforated drains with fabric filter and average installation cost of 60 cents per linear foot. Installation cost may vary depending on the number of acres drained, drain depth, connections, system design, and ease of installation. The material cost of 4-inch drains may also vary depending on the supplier and whether fabric filters are included. A fabric filter prevents fine sands from entering and clogging the drains. You should choose the drain spacing that provides adequate drainage while avoiding unnecessary installation costs.

Figure 1. Estimated per acre subsurface drainage cost (50 cents per linear foot for 4-inch drains with fabric; 60 cents per linear foot for installation).

Table 1.
Estimated per acre subsurface drainage cost (50 cents per linear foot for 4-inch drains with fabric; 60 cents per linear foot for installation)

Spacing (ft)   Cost ($/ac)  |  Spacing (ft)   Cost ($/ac)
30             $1,597       |  170            $282
40             $1,198       |  180            $266
50             $958         |  190            $252
60             $799         |  200            $240
70             $685         |  210            $228
80             $599         |  220            $218
90             $532         |  230            $208
100            $479         |  240            $200
110            $436         |  250            $192
120            $399         |  260            $184
130            $369         |  270            $177
140            $342         |  280            $171
150            $319         |  290            $165
160            $299         |  300            $160

# Authors

Assistant Professor and Extension Specialist, Biological & Agricultural Engineering
Professor, Biological & Agricultural Engineering
William Neal Reynolds and Distinguished University Professor Emeritus, Biological & Agricultural Engineering

Find more information at the following NC State Extension websites:

Publication date: Jan. 10, 2020
AG-871

N.C. Cooperative Extension prohibits discrimination and harassment regardless of age, color, disability, family and marital status, gender identity, national origin, political beliefs, race, religion, sex (including pregnancy), sexual orientation and veteran status.
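The table values follow from a simple relation: an acre is 43,560 square feet, so drains at a spacing of S feet require 43,560/S linear feet of pipe per acre. A short Python sketch (the function name is mine, not from the publication) reproduces Table 1 from the combined $1.10-per-foot cost:

```python
ACRE_SQFT = 43_560  # square feet in one acre

def drainage_cost_per_acre(spacing_ft, cost_per_ft=0.50 + 0.60):
    """Drain length per acre (43,560 / spacing) times material + installation $/ft."""
    return ACRE_SQFT / spacing_ft * cost_per_ft

# Reproduce a few Table 1 entries.
for s in (30, 100, 300):
    print(s, round(drainage_cost_per_acre(s)))  # 30 1597, 100 479, 300 160
```

Doubling the spacing halves the pipe length per acre, which is why the cost column falls off as 1/spacing.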
http://amplitudes.org/amplitudes/1507_01950/
Paper and ancillary files.

Abstract: Multi-loop scattering amplitudes in N=4 Yang-Mills theory possess cluster algebra structure. In order to develop a computational framework which exploits this connection, we show how to construct bases of Goncharov polylogarithm functions, at any weight, whose symbol alphabet consists of cluster coordinates on the $A_n$ cluster algebra. Using such a basis we present a new expression for the 2-loop 6-particle NMHV amplitude which makes some of its cluster structure manifest.
https://www.macom.com/blog/gan-transcendent-driving-the-sca
# GaN Transcendent: Driving the Scale, Supply Security and Surge Capacity for Mainstream RF Applications Feb. 06, 2018 The market landscape for RF semiconductor technology has experienced significant changes in recent years. For decades, laterally diffused metal oxide semiconductor (LDMOS) technology has dominated the RF semiconductor market in commercial volume applications. Today, the balance has shifted, and Gallium Nitride on Silicon (GaN-on-Si) technology has emerged as the technology of choice to succeed legacy LDMOS technology. GaN-on-Si’s performance advantages over LDMOS are firmly established – it delivers over 70% power efficiency, and upward of 4X to 6X more power per unit area, with scalability to high frequencies. In parallel, comprehensive testing data has affirmed GaN-on-Si’s conformance with stringent reliability requirements, replicating and even exceeding the RF performance and reliability of expensive Gallium Nitride on Silicon Carbide (GaN-on-SiC) alternative technology. GaN-on-Si’s ascension to the forefront of the RF semiconductor industry comes at a pivotal moment in the evolution of commercial wireless infrastructure. Its proven performance leadership over LDMOS technology is driving its adoption within the newest generation of 4G LTE basestations, and positioning it as the likely de facto enabling technology for 5G wireless infrastructure going forward, with seismic market implications that could extend far beyond mobile phone connectivity, encompassing transportation, industrial and entertainment applications, among many others. Looking further ahead, GaN-on-Si-based RF technologies have the potential to supplant antiquated magnetron and spark plug technologies to unlock the full value and promise of commercial solid-state RF energy applications, spanning cooking, lighting, automotive ignition and beyond, where huge gains in energy/fuel efficiency and heating and lighting precision are believed to be close on the horizon. 
BREAKTHROUGH MANUFACTURING AND COST EFFICIENCIES

Given the unprecedented pace and scale of the impending 5G infrastructure build-out in particular, there’s been increased attention on the cost structures, manufacturing and surge capacities, and supply chain flexibility and surety inherent to GaN-on-Si relative to LDMOS and GaN-on-SiC. GaN-on-Si stands alone as the superior semiconductor technology for next-generation wireless infrastructure, offering the potential for GaN performance at LDMOS cost structures, with the commercial manufacturing scalability to support massive demand.

The joint announcement from MACOM and STMicroelectronics of plans to bring GaN-on-Si technology to mainstream RF markets and applications marks a pivotal turning point in the GaN supply chain ecosystem, combining MACOM’s RF semiconductor technology prowess with ST’s scale and operational excellence in silicon wafer manufacturing. While expanding MACOM’s source of supply, this agreement is also expected to lead to the increased scale, capacity and cost structure optimizations necessary for accelerating mass-market adoption of GaN-on-Si technology.

For wireless network infrastructure, this collaboration is expected to allow GaN-on-Si technology to be cost-effectively deployed and scaled for 4G LTE basestations as well as massive MIMO 5G antennas, whereby the sheer density of antenna configurations puts a premium value on power and thermal performance, particularly at higher frequencies. And when properly exploited, GaN-on-Si’s power efficiency advantages can make a profound impact on wireless network operators’ basestation operating expenses. MACOM estimates that the utility bill savings of switching only new macro base stations deployed in a year to MACOM GaN-on-Si can exceed $100M when modeled with an average energy rate of $0.10/kWh.
A NEW ERA

The evolution of GaN-on-Si from early research and development to commercial-scale adoption may prove to be the largest technology disruption to impact the RF semiconductor industry in a generation. Via our agreement with ST, MACOM GaN-on-Si technology is uniquely positioned to meet the performance, cost structure, manufacturing capacity, and supply chain flexibility requirements of 4G LTE and 5G wireless basestation infrastructure going forward, with untold promise for solid-state RF energy applications. Offering the prospect of RF solutions at price/performance metrics that would be otherwise unachievable with competing LDMOS and GaN-on-SiC technologies, GaN-on-Si’s potential has only just begun to be realized.
http://www.download-now.net/New-Hampshire/bootstrap-estimate-standard-error.html
North Country Internet Access, also known as NCIA, provides a range of Internet services for individuals and businesses. It offers non-toll dial-up access to the Internet. The company serves Vermont and Maine communities in New Hampshire. North Country Internet Access offers local and long-distance telephone services. It provides various network consulting, design and installation solutions. The company offers Web site development and hosting services, as well as provides several domain names. North Country Internet Access provides a variety of online advertising solutions for business, automobile, employment, services and homes. In addition, it operates the NCIA Computer Center. The company offers a selection of technical support services.

Address: 38 Glen Ave Ste 3, Berlin, NH 03570 (800) 797-6242 http://firstlight.net

# bootstrap estimate standard error

Beans Purchase, New Hampshire

This method assumes that the 'true' residual distribution is symmetric and can offer advantages over simple residual sampling for smaller sample sizes. Some authors recommend the bootstrap procedure for the following situations:[17] when the theoretical distribution of a statistic of interest is complicated or unknown. It comes from our inability to draw all $n^n$ samples, so we just take a random subset of these. So that with a sample of 20 points, a 90% confidence interval will include the true variance only 78% of the time.[28] In regression problems, the explanatory variables are often fixed, or at least observed with more control than the response variable.
This method uses Gaussian process regression to fit a probabilistic model from which replicates may then be drawn. This procedure is known to have certain good properties and the result is a U-statistic. Repeat Steps 2 through 4 many thousands of times. This can be computationally expensive, as there are a total of $\binom{2n-1}{n}$ different resamples, where n is the size of the data. It does not depend on nuisance parameters (the t-test follows asymptotically a N(0,1) distribution), unlike the percentile bootstrap.

Popular families of point-estimators include mean-unbiased minimum-variance estimators, median-unbiased estimators, Bayesian estimators (for example, the posterior distribution's mode, median, mean), and maximum-likelihood estimators. Bias-Corrected Bootstrap - adjusts for bias in the bootstrap distribution. A typical bootstrap summary looks like:

##       original       bias    std. error
## t1*   -11863.9  -553.3393      8580.435

These results are very similar to the ones in the book, only the standard error is higher. In other words, create synthetic response variables $y_i^* = \hat{y}_i + \hat{\epsilon}_j$ where j is selected randomly from the list (1, ..., n) for every i. If Ĵ is a reasonable approximation to J, then the quality of inference on J can in turn be inferred.

Methods for bootstrap confidence intervals: there are several methods for constructing confidence intervals from the bootstrap distribution of a real parameter, such as the basic bootstrap. The bootstrap is often used as an alternative to statistical inference based on the assumption of a parametric model when that assumption is in doubt, or where parametric inference is impossible. This is in fact how we can try to measure the accuracy of the original estimates.
In the way the bootstrap is normally carried out, there are two effects that are happening. But what about the SE and CI for the median, for which there are no simple formulas?

The bootstrap is distinguished from the jackknife procedure, which is used to estimate biases of sample statistics and to estimate variances. Therefore, to resample cases means that each bootstrap sample will lose some information. If we repeat this 100 times, then we have μ1*, μ2*, …, μ100*. This bootstrap works with dependent data; however, the bootstrapped observations will not be stationary anymore by construction.

Let X = x1, x2, …, x10 be 10 observations from the experiment. One standard choice for an approximating distribution is the empirical distribution function of the observed data. We now have a histogram of bootstrap means. Gather another sample of size n = 5 and calculate M2. So you take a sample and ask the question of it instead. This is equivalent to sampling from a kernel density estimate of the data.
Types of bootstrap scheme

Also, the range of the explanatory variables defines the information available from them. This method can be applied to any statistic. The structure of the block bootstrap is easily obtained (where the block just corresponds to the group), and usually only the groups are resampled, while the observations within the groups are left unchanged. Then from these n-b+1 blocks, n/b blocks will be drawn at random with replacement. This scheme has the advantage that it retains the information in the explanatory variables. Since you are explaining this to a layperson, you can argue that for large bin counts this is roughly the square root of the bin count in both cases. In univariate problems, it is usually acceptable to resample the individual observations with replacement. But for non-normally distributed data, the median is often more precise than the mean.
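Since the page never shows the procedure end-to-end, here is a minimal sketch (the function name is mine) of the nonparametric bootstrap estimate of a standard error — useful for a statistic like the median, where no simple formula exists:

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.median, n_boot=2000, seed=0):
    """Standard deviation of `stat` across resamples drawn with replacement."""
    rng = random.Random(seed)
    n = len(data)
    # Each replicate: resample n observations with replacement, apply the statistic.
    replicates = [
        stat([rng.choice(data) for _ in range(n)]) for _ in range(n_boot)
    ]
    return statistics.stdev(replicates)

data = [23, 19, 25, 30, 21, 28, 22, 26, 24, 27]
se_median = bootstrap_se(data)                       # SE of the median
se_mean = bootstrap_se(data, stat=statistics.mean)   # compare with s/sqrt(n)
```

The same replicates also give a percentile confidence interval: sort them and take the 5th and 95th percentiles for a 90% interval.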
https://www.aimsciences.org/article/doi/10.3934/jimo.2021003
# American Institute of Mathematical Sciences

March 2022, 18(2): 933-967. doi: 10.3934/jimo.2021003

## Optimal investment and reinsurance to minimize the probability of drawdown with borrowing costs

School of Mathematical Sciences and Institute of Finance and Statistics, Nanjing Normal University, Jiangsu 210023, China

* Corresponding author: Zhibin Liang

Received January 2020
Revised September 2020
Published March 2022
Early access December 2020

Fund Project: This research was supported by the National Natural Science Foundation of China (Grant No.12071224)

We study the optimal investment and reinsurance problem in a risk model with two dependent classes of insurance businesses, where the two claim number processes are correlated through a common shock component and the borrowing rate is higher than the lending rate. The objective is to minimize the probability of drawdown, namely, the probability that the value of the wealth process reaches some fixed proportion of its maximum value to date. By the method of stochastic control theory and the corresponding Hamilton-Jacobi-Bellman equation, we investigate the optimization problem in two different cases and divide the whole region into four subregions. The explicit expressions for the optimal investment/reinsurance strategies and the minimum probability of drawdown are derived. We find that when wealth is at a relatively low level (below the borrowing level), it is optimal to borrow money to invest in the risky asset; when wealth is at a relatively high level (above the saving level), it is optimal to save more money; while between them, the insurer is willing to invest all the wealth in the risky asset.
In the end, some comparisons are presented to show the impact of the higher borrowing rate and of risky investment on the optimal results.

Citation: Yu Yuan, Zhibin Liang, Xia Han. Optimal investment and reinsurance to minimize the probability of drawdown with borrowing costs. Journal of Industrial and Management Optimization, 2022, 18 (2): 933-967. doi: 10.3934/jimo.2021003

Figures: The influence of the higher borrowing rate on the optimal investment strategies; The influence of the higher borrowing rate on the optimal reinsurance strategies; The influence of risky investment on the optimal reinsurance strategies.
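To make the objective concrete, the drawdown criterion described in the abstract can be written schematically as follows. This is an illustrative sketch only: the symbols $W_t$ (controlled wealth), $M_t$ (running maximum), $\alpha$ (the fixed proportion), and $\psi$ (the minimum drawdown probability) are our notation, not necessarily the paper's exact formulation.

```latex
% Running maximum of the controlled wealth process W_t:
M_t = \max\Bigl( m,\ \sup_{0 \le s \le t} W_s \Bigr),
\qquad W_0 = w \le m = M_0 .

% Drawdown time: first time wealth falls to the fraction \alpha \in [0,1)
% of its maximum to date:
\tau_\alpha = \inf \{\, t \ge 0 : W_t \le \alpha\, M_t \,\} .

% Objective: choose investment/reinsurance strategies u to minimize the
% probability that drawdown ever occurs,
\psi(w, m) = \inf_{u}\ \mathbb{P}^{\,w,m}\bigl( \tau_\alpha < \infty \bigr),
% which is characterized through the associated
% Hamilton--Jacobi--Bellman equation.
```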
http://www.panaderiatroyano.com/acetophenone-c-skylsxs/simplifying-radical-expressions-7d1cea
Remember that taking the square root of "something" is equivalent to raising that "something" to the fractional exponent $\frac{1}{2}$. We hope that some of those pieces can be further simplified because the radicands (the stuff inside the radical symbol) are perfect squares. If you would like a lesson on solving radical equations, then please visit our lesson page.

Example 8: Simplify the radical expression $\sqrt{54a^{10}b^{16}c^7}$.

When the radical is a cube root, you should try to have terms raised to powers that are multiples of three (3, 6, 9, 12, etc.). Why? Any perfect-power factor can be pulled out, but the best option is the largest possible one, because this greatly reduces the number of steps in the solution. Start by factoring the radicand's coefficient; in other words, write it as a product of smaller numbers. Repeat the process until the radicand no longer has a perfect square factor: a radical expression is in simplest form when no perfect square factor other than 1 is left in the radicand.

$$\sqrt{16x}=\sqrt{16}\cdot \sqrt{x}=\sqrt{4^{2}}\cdot \sqrt{x}=4\sqrt{x}$$

To simplify complicated radical expressions, we can use some definitions and rules from simplifying exponents; simplifying expressions is an important intermediate step when solving equations. If a term already has an even power, then you have nothing to do.
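The repeated factor extraction just described (pull the largest perfect-square factor out, leave the rest under the radical) can be sketched in a few lines of Python. The function name `simplify_sqrt` is ours, for illustration only:

```python
def simplify_sqrt(n: int) -> tuple[int, int]:
    """Write sqrt(n) as outside * sqrt(inside) by repeatedly pulling
    perfect-square factors out of the radicand n (a positive integer)."""
    outside, inside = 1, n
    f = 2
    while f * f <= inside:
        while inside % (f * f) == 0:  # a squared factor f*f leaves as f
            inside //= f * f
            outside *= f
        f += 1
    return outside, inside

print(simplify_sqrt(16))  # (4, 1): sqrt(16) = 4, a perfect square
print(simplify_sqrt(54))  # (3, 6): sqrt(54) = 3*sqrt(6), the numeric part of Example 8
```

Handling the variables of Example 8 separately (even powers come out whole, odd powers leave a single factor inside) then gives $\sqrt{54a^{10}b^{16}c^7} = 3a^5b^8c^3\sqrt{6c}$.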
Generally speaking, simplifying radicals is the process of simplifying expressions applied to radicals.

Simplifying Radicals – Techniques & Examples

The word radical in Latin and Greek means "root" and "branch" respectively. Meanwhile, √ is the radical symbol, while n is the index. To create "common" denominators, you would multiply, top and bottom, by whatever factor the denominator needed.

Example 14: Simplify the radical expression $\sqrt{18m^{11}n^{12}k^{13}}$.

The denominator here contains a radical, but that radical is part of a larger expression. Anything divided by itself is just 1, and multiplying by 1 doesn't change the value of whatever you're multiplying. These properties can be used to simplify radical expressions. Example 1: to simplify $(\sqrt{2}-1)(\sqrt{2}+1)$ in the calculator, type (r2 - 1)(r2 + 1); the calculator may present the answer in a slightly different form.
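Extending the same idea to variables: an exponent e contributes e // 2 copies outside the radical and e % 2 inside. A hedged sketch of Example 14 (the helper `simplify_radical` and its dict-based interface are ours, not from any particular textbook):

```python
def simplify_radical(coeff: int, exponents: dict[str, int]):
    """Simplify sqrt(coeff * product(var**e)) into (outside, inside) parts,
    assuming all variables are nonnegative."""
    # Numeric part: pull perfect-square factors out of the coefficient.
    out_c, in_c, f = 1, coeff, 2
    while f * f <= in_c:
        while in_c % (f * f) == 0:
            in_c //= f * f
            out_c *= f
        f += 1
    # Variable part: the even portion comes out; one leftover factor stays in.
    outside = {v: e // 2 for v, e in exponents.items() if e // 2}
    inside = {v: e % 2 for v, e in exponents.items() if e % 2}
    return (out_c, outside), (in_c, inside)

# Example 14: sqrt(18 m^11 n^12 k^13) = 3 m^5 n^6 k^6 * sqrt(2 m k)
print(simplify_radical(18, {"m": 11, "n": 12, "k": 13}))
```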
Let's simplify this expression by first rewriting the odd exponents as powers of an even number plus 1. For this problem, we are going to solve it in two ways.

A radical expression is composed of three parts: a radical symbol, a radicand, and an index. In this tutorial, the primary focus is on simplifying radical expressions with an index of 2; this calculator simplifies any such radical expression. To rationalize a fraction with root-three in the denominator, I can create a pair of 3's by multiplying the fraction, top and bottom, by another copy of root-three. Remember the rule below, as you will use it over and over again: perform a prime factorization of the radicand, then express the prime numbers in pairs as much as possible. Let's do that by going over concrete examples.

Product Property of Square Roots: For all nonnegative real numbers a and b,

$$\sqrt{a\cdot b}=\sqrt{a}\cdot\sqrt{b}$$

That is, the square root of the product is the same as the product of the square roots. Click on the link to see some examples of prime factorization.
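The "primes in pairs" rule and the product property can both be checked mechanically. A small sketch (the function name `sqrt_by_pairing` is ours):

```python
import math
from collections import Counter

def sqrt_by_pairing(n: int) -> tuple[int, int]:
    """Simplify sqrt(n) via prime factorization: each PAIR of identical
    primes contributes one copy outside; unpaired primes stay inside."""
    factors, f, m = Counter(), 2, n
    while f * f <= m:
        while m % f == 0:
            factors[f] += 1
            m //= f
        f += 1
    if m > 1:
        factors[m] += 1
    outside = inside = 1
    for p, e in factors.items():
        outside *= p ** (e // 2)  # paired factors come out
        inside *= p ** (e % 2)    # leftover single factors stay in
    return outside, inside

print(sqrt_by_pairing(147))  # (7, 3): 147 = 3 * 7 * 7, so sqrt(147) = 7*sqrt(3)

# Product property, sqrt(a*b) == sqrt(a)*sqrt(b), for nonnegative a and b:
for a, b in [(4, 9), (2, 50), (3, 27)]:
    assert math.isclose(math.sqrt(a * b), math.sqrt(a) * math.sqrt(b))
```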
Go through some of the squares of the natural numbers: since $7^2 = 49$ and $8^2 = 64$, the value of $\sqrt{60}$ must be some number n found between 7 and 8. If a term already has an even power, it comes straight out of the radical; otherwise, you need to express it as some even power plus 1. To simplify radical expressions, look for factors of the radicand with powers that match the index, then try to further simplify. On the previous page, all the fractions containing radicals (or radicals containing fractions) had denominators that cancelled off or else simplified to whole numbers. Here, the only thing that factors out of the numerator is a 3, but that won't cancel with the 2 in the denominator. The goal is to show that there is an easier way to approach the problem, especially when the exponents of the variables are getting larger, as in simplifying radical expressions with three variables. For the numerical term 12, its largest perfect square factor is 4. Here is an example of a non-radical expression to simplify: 2x^2 + x(4x + 3).
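The "between 7 and 8" estimate can be reproduced with the standard library's integer square root:

```python
import math

n = 60
lo = math.isqrt(n)  # largest integer whose square is <= n
hi = lo + 1
print(lo, hi)       # 7 8: sqrt(60) lies strictly between 7 and 8
assert lo ** 2 <= n < hi ** 2 and lo ** 2 != n  # 60 is not a perfect square
```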
Before you can simplify a radical expression, you have to know the important properties of radicals. I won't have changed the value, but simplification will now be possible: this last form, "five, root-three, divided by three," is the "right" answer they're looking for. Simplifying expressions makes those expressions easier to compare with other expressions (which have also been simplified). More so, the variable expressions above are also perfect squares, because all the variables have even exponents or powers.
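The rationalization just described can be verified numerically. We read "five, root-three, divided by three" as $5\sqrt{3}/3$, i.e. the rationalized form of $5/\sqrt{3}$ (an inference from the text); a quick floating-point check, including the conjugate trick from Example 1:

```python
import math

# Multiplying top and bottom by sqrt(3) (a strategic form of 1) preserves the value:
assert math.isclose(5 / math.sqrt(3), 5 * math.sqrt(3) / 3)

# Conjugate version: 1/(sqrt(2) - 1) = sqrt(2) + 1, since (sqrt(2)-1)(sqrt(2)+1) = 1.
assert math.isclose(1 / (math.sqrt(2) - 1), math.sqrt(2) + 1)
print("both rationalized forms agree")
```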
Keep checking perfect squares (4, 9, 16, 25, 36, 49, and so on) until the radicand no longer has a perfect square factor; equivalently, divide the radicand by 2, 3, 5, 7, etc. to obtain its prime factorization, then break the radical down into pieces of "smaller" radical expressions. One way to think about it: a pair of any number is a perfect square, so each pair of identical prime factors comes out of the square root as a single factor, while any unpaired prime stays inside. For instance, $400 = 20^2$, so $\sqrt{400} = 20$ yields a whole number.

To get rid of a radical in the denominator, "rationalize" the denominator: multiply the fraction, top and bottom, by a strategic form of 1, namely another copy of the radical, or the conjugate when the denominator is a sum or difference involving a radical. While these look like geometry questions, you'll have to put your GMAT algebra skills to work; for example, the side of a square with area 48 is $\sqrt{48} = 4\sqrt{3}$.

More practice: simplify $\sqrt{60}$, $\sqrt{72}$, $\sqrt{200}$, $\sqrt{32}$, $\sqrt{125}$, $\sqrt{147w^6q^7}$, and $\sqrt{80x^3yz^5}$. In each case, write the odd variable powers as even numbers plus 1; the even-powered part comes out of the radical and the leftover single factor stays inside.
2021-02-26 01:18:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6934690475463867, "perplexity": 1158.0649420891407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355944.41/warc/CC-MAIN-20210226001221-20210226031221-00337.warc.gz"}
http://webdice.rdcep.org/glossary/climate_model
### Climate model

webDICE includes a model of how emissions of CO2 affect temperatures. The model has two components. The first is how carbon moves around the earth, between the oceans and the atmosphere. This component determines how long emissions of CO2 stay in the atmosphere. The second is how the emissions that stay in the atmosphere increase temperatures. This latter component is called radiative forcing. The model of how emissions move around the earth assumes that emissions from economic activity are released into the atmosphere. Some of these emissions are absorbed by the upper layers of the ocean and from there into the lower ocean. The rate of absorption into the upper and lower ocean determines how long emissions stay in the atmosphere. There are no parameter choices for the climate model, but in Advanced Mode you can choose the default model, which is used in Nordhaus’s DICE models, or the ‘BEAM, simplified’ model, which is a more accurate but slightly slower-running model. (Optimization using BEAM may be particularly slow and possibly time out.)

The default climate model in DICE simulates the carbon cycle using a linear three-reservoir model, where the three reservoirs are the deep oceans, the upper ocean and the atmosphere. Each of these reservoirs is well-mixed in the short run. A transition matrix governs the transfer of carbon among the reservoirs. If $M_{i}(t)$ is the mass of carbon (gigatons) in reservoir $i$, then: $\left[\begin{array}{c} M_{AT}(t)\\ M_{UP}(t)\\ M_{LO}(t) \end{array}\right]=\left[\begin{array}{ccc} \phi_{11} & \phi_{12} & 0\\ 1-\phi_{11} & 1-\phi_{12}-\phi_{32} & \phi_{23}\\ 0 & \phi_{32} & 1-\phi_{23} \end{array}\right]\left[\begin{array}{c} M_{AT}(t-1)\\ M_{UP}(t-1)\\ M_{LO}(t-1) \end{array}\right]+\left[\begin{array}{c} E(t-1)\\ 0\\ 0 \end{array}\right],$ where the parameter $\phi_{i,j}$ represents the transfer rate from reservoir $j$ to reservoir $i$ (per time period), and $E(t)$ is emissions at time $t$.
The model only includes CO2 in its emissions factor and atmospheric carbon concentration. Other greenhouse gases are assumed to be exogenous and enter the forcing equation separately.
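The recursion above can be sketched in a few lines of Python. Note that the columns of the transition matrix each sum to 1, so total carbon is conserved apart from the new emissions added to the atmosphere; the $\phi$ values and initial stocks below are illustrative placeholders, not webDICE's calibrated parameters.

```python
def step_carbon(m_at, m_up, m_lo, e, phi11, phi12, phi23, phi32):
    """One period of the linear three-reservoir carbon cycle:
    M(t) = Phi M(t-1) + [E(t-1), 0, 0]^T, using the transition
    matrix from the glossary entry (columns sum to 1, so carbon
    is conserved up to the new emissions E)."""
    new_at = phi11 * m_at + phi12 * m_up + e
    new_up = (1 - phi11) * m_at + (1 - phi12 - phi32) * m_up + phi23 * m_lo
    new_lo = phi32 * m_up + (1 - phi23) * m_lo
    return new_at, new_up, new_lo

# Illustrative (not calibrated) parameter values and initial stocks in GtC.
phi11, phi12, phi23, phi32 = 0.88, 0.04, 0.001, 0.005
m_at, m_up, m_lo = 830.0, 1600.0, 10000.0
for period in range(10):
    m_at, m_up, m_lo = step_carbon(m_at, m_up, m_lo, 10.0,
                                   phi11, phi12, phi23, phi32)
```

With constant emissions, atmospheric carbon rises each period while the upper and lower oceans slowly absorb part of the excess, which is what determines the atmospheric lifetime of CO2 in the model.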
https://mathematica.stackexchange.com/questions/180416/mathematica-can-solve-the-eigenvalues-of-a-large-sparse-non-hermitian-non-symme
# Can Mathematica solve the eigenvalues of a large sparse non-Hermitian (non-symmetric) matrix?

Can the Arnoldi algorithm of the Eigensystem function in Mathematica be used to solve the eigenvalues of a large sparse non-Hermitian (non-symmetric) complex matrix?

• I'm not entirely sure what your question is. Are you asking if Arnoldi algorithms can be used to solve sparse matrix problems in Mathematica? If so, yes. See the Method section under the Options section on the documentation page for Eigensystem. – b3m2a1 Aug 22 '18 at 2:25 • Then can the Arnoldi algorithm be used for a complex non-symmetric matrix? – Steven Aug 22 '18 at 5:37 • Yes, it can be used to compute a few eigenvalues and eigenvectors. Why don't you just try it? – Henrik Schumacher Aug 22 '18 at 7:20
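For intuition about why the answer is yes: the Arnoldi iteration makes no symmetry assumption at all; it only needs matrix-vector products. Here is a generic textbook Arnoldi iteration in Python (a self-contained sketch of the underlying algorithm, not Mathematica's actual implementation, which per its documentation is based on ARPACK's implicitly restarted variant), applied to a small complex non-Hermitian matrix:

```python
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def dot(u, v):  # Hermitian (conjugate) inner product
    return sum(x.conjugate() * y for x, y in zip(u, v))

def norm(v):
    return abs(dot(v, v)) ** 0.5

def arnoldi(A, b, k):
    """Run k steps of the Arnoldi iteration for a (possibly complex,
    non-Hermitian) matrix A, returning the orthonormal Krylov basis Q
    and the (k+1) x k upper-Hessenberg matrix H with A Q_k ~ Q_{k+1} H."""
    Q = [[x / norm(b) for x in b]]
    H = [[0j] * k for _ in range(k + 1)]
    for j in range(k):
        w = matvec(A, Q[j])
        for i in range(j + 1):          # modified Gram-Schmidt step
            H[i][j] = dot(Q[i], w)
            w = [w_m - H[i][j] * q_m for w_m, q_m in zip(w, Q[i])]
        H[j + 1][j] = norm(w)
        if abs(H[j + 1][j]) < 1e-12:
            break                        # invariant subspace found
        Q.append([x / H[j + 1][j] for x in w])
    return Q, H

# A small complex, non-Hermitian example.
A = [[2 + 1j, 1, 0],
     [0, 3, 1j],
     [1, 0, 1 - 1j]]
Q, H = arnoldi(A, [1 + 0j, 0j, 0j], 3)
```

The eigenvalues of the small Hessenberg matrix H then approximate the extremal eigenvalues of A. In Mathematica itself one would simply call something like `Eigenvalues[m, k, Method -> {"Arnoldi", "Criteria" -> "Magnitude"}]` on the `SparseArray` (option names as given in the Eigensystem documentation).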
https://terrytao.wordpress.com/2013/09/
You are currently browsing the monthly archive for September 2013. The main purpose of this post is to roll over the discussion from the previous Polymath8 thread, which has become rather full with comments.  We are still writing the paper, but it appears to have stabilised in a near-final form (source files available here); the main remaining tasks are proofreading, checking the mathematics, and polishing the exposition.  We also have a tentative consensus to submit the paper to Algebra and Number Theory when the proofreading is all complete. The paper is quite large now (164 pages!) but it is fortunately rather modular, and thus hopefully somewhat readable (particularly regarding the first half of the paper, which does not need any of the advanced exponential sum estimates).  The size should not be a major issue for the journal, so I would not seek to artificially shorten the paper at the expense of readability or content.

Define a partition of ${1}$ to be a finite or infinite multiset ${\Sigma}$ of real numbers in the interval ${I := (0,1]}$ (that is, an unordered set of real numbers in ${I}$, possibly with multiplicity) whose total sum is ${1}$: ${\sum_{t \in \Sigma}t = 1}$. For instance, ${\{1/2,1/4,1/8,1/16,\ldots\}}$ is a partition of ${1}$. Such partitions arise naturally when trying to decompose a large object into smaller ones, for instance: 1. (Prime factorisation) Given a natural number ${n}$, one can decompose it into prime factors ${n = p_1 \ldots p_k}$ (counting multiplicity), and then the multiset $\displaystyle \Sigma_{PF}(n) := \{ \frac{\log p_1}{\log n}, \ldots,\frac{\log p_k}{\log n} \}$ is a partition of ${1}$. 2. (Cycle decomposition) Given a permutation ${\sigma \in S_n}$ on ${n}$ labels ${\{1,\ldots,n\}}$, one can decompose ${\sigma}$ into cycles ${C_1,\ldots,C_k}$, and then the multiset $\displaystyle \Sigma_{CD}(\sigma) := \{ \frac{|C_1|}{n}, \ldots, \frac{|C_k|}{n} \}$ is a partition of ${1}$. 3.
(Normalisation) Given a multiset ${\Gamma}$ of positive real numbers whose sum ${S := \sum_{x\in \Gamma}x}$ is finite and non-zero, the multiset $\displaystyle \Sigma_N( \Gamma) := \frac{1}{S} \cdot \Gamma = \{ \frac{x}{S}: x \in \Gamma \}$ is a partition of ${1}$. In the spirit of the universality phenomenon, one can ask what is the natural distribution for what a “typical” partition should look like; thus one seeks a natural probability distribution on the space of all partitions, analogous to (say) the gaussian distributions on the real line, or GUE distributions on point processes on the line, and so forth. It turns out that there is one natural such distribution which is related to all three examples above, known as the Poisson-Dirichlet distribution. To describe this distribution, we first have to deal with the problem that it is not immediately obvious how to cleanly parameterise the space of partitions, given that the cardinality of the partition can be finite or infinite, that multiplicity is allowed, and that we would like to identify two partitions that are permutations of each other. One way to proceed is to view a partition ${\Sigma}$ as a type of point process on the interval ${I}$, with the constraint that ${\sum_{x \in \Sigma} x = 1}$, in which case one can study statistics such as the counting functions $\displaystyle N_{[a,b]} := |\Sigma \cap [a,b]| = \sum_{x \in\Sigma} 1_{[a,b]}(x)$ (where the cardinality here counts multiplicity). This can certainly be done, although in the case of the Poisson-Dirichlet process, the formulae for the joint distribution of such counting functions are moderately complicated.
Another way to proceed is to order the elements of ${\Sigma}$ in decreasing order $\displaystyle t_1 \geq t_2 \geq t_3 \geq \ldots \geq 0,$ with the convention that one pads the sequence ${t_n}$ by an infinite number of zeroes if ${\Sigma}$ is finite; this identifies the space of partitions with an infinite dimensional simplex $\displaystyle \{ (t_1,t_2,\ldots) \in [0,1]^{\bf N}: t_1 \geq t_2 \geq \ldots; \sum_{n=1}^\infty t_n = 1 \}.$ However, it turns out that the process of ordering the elements is not “smooth” (basically because functions such as ${(x,y) \mapsto \max(x,y)}$ and ${(x,y) \mapsto \min(x,y)}$ are not smooth) and the formulae for the joint distribution in the case of the Poisson-Dirichlet process are again complicated. It turns out that there is a better (or at least “smoother”) way to enumerate the elements of a partition ${\Sigma}$ than the ordered method, although it is random rather than deterministic. This procedure (which I learned from this paper of Donnelly and Grimmett) works as follows. 1. Given a partition ${\Sigma}$, let ${u_1}$ be an element of ${\Sigma}$ chosen at random, with each element ${t\in \Sigma}$ having a probability ${t}$ of being chosen as ${u_1}$ (so if ${t \in \Sigma}$ occurs with multiplicity ${m}$, the net probability that ${t}$ is chosen as ${u_1}$ is actually ${mt}$). Note that this is well-defined since the elements of ${\Sigma}$ sum to ${1}$. 2. Now suppose ${u_1}$ is chosen. If ${\Sigma \backslash \{u_1\}}$ is empty, we set ${u_2,u_3,\ldots}$ all equal to zero and stop. Otherwise, let ${u_2}$ be an element of ${\frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})}$ chosen at random, with each element ${t \in \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})}$ having a probability ${t}$ of being chosen as ${u_2}$. (For instance, if ${u_1}$ occurred with some multiplicity ${m>1}$ in ${\Sigma}$, then ${u_2}$ can equal ${\frac{u_1}{1-u_1}}$ with probability ${(m-1)u_1/(1-u_1)}$.)
3. Now suppose ${u_1,u_2}$ are both chosen. If ${\Sigma \backslash \{u_1,u_2\}}$ is empty, we set ${u_3, u_4, \ldots}$ all equal to zero and stop. Otherwise, let ${u_3}$ be an element of ${\frac{1}{1-u_1-u_2} \cdot (\Sigma\backslash \{u_1,u_2\})}$, with each element ${t \in \frac{1}{1-u_1-u_2} \cdot (\Sigma\backslash \{u_1,u_2\})}$ having a probability ${t}$ of being chosen as ${u_3}$. 4. We continue this process indefinitely to create elements ${u_1,u_2,u_3,\ldots \in [0,1]}$. We denote the random sequence ${Enum(\Sigma) := (u_1,u_2,\ldots) \in [0,1]^{\bf N}}$ formed from a partition ${\Sigma}$ in the above manner as the random normalised enumeration of ${\Sigma}$; this is a random variable in the infinite unit cube ${[0,1]^{\bf N}}$, and can be defined recursively by the formula $\displaystyle Enum(\Sigma) = (u_1, Enum(\frac{1}{1-u_1} \cdot (\Sigma\backslash \{u_1\})))$ with ${u_1}$ drawn randomly from ${\Sigma}$, with each element ${t \in \Sigma}$ chosen with probability ${t}$, except when ${\Sigma =\{1\}}$ in which case we instead have $\displaystyle Enum(\{1\}) = (1, 0,0,\ldots).$ Note that one can recover ${\Sigma}$ from any of its random normalised enumerations ${Enum(\Sigma) := (u_1,u_2,\ldots)}$ by the formula $\displaystyle \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\} \ \ \ \ \ (1)$ with the convention that one discards any zero elements on the right-hand side. Thus ${Enum}$ can be viewed as a (stochastic) parameterisation of the space of partitions by the unit cube ${[0,1]^{\bf N}}$, which is a simpler domain to work with than the infinite-dimensional simplex mentioned earlier. Note that this random enumeration procedure can also be adapted to the three models described earlier: 1.
Given a natural number ${n}$, one can randomly enumerate its prime factors ${n =p'_1 p'_2 \ldots p'_k}$ by letting each prime factor ${p}$ of ${n}$ be equal to ${p'_1}$ with probability ${\frac{\log p}{\log n}}$, then once ${p'_1}$ is chosen, let each remaining prime factor ${p}$ of ${n/p'_1}$ be equal to ${p'_2}$ with probability ${\frac{\log p}{\log n/p'_1}}$, and so forth. 2. Given a permutation ${\sigma\in S_n}$, one can randomly enumerate its cycles ${C'_1,\ldots,C'_k}$ by letting each cycle ${C}$ in ${\sigma}$ be equal to ${C'_1}$ with probability ${\frac{|C|}{n}}$, and once ${C'_1}$ is chosen, let each remaining cycle ${C}$ be equal to ${C'_2}$ with probability ${\frac{|C|}{n-|C'_1|}}$, and so forth. Alternatively, one can traverse the elements of ${\{1,\ldots,n\}}$ in random order, then let ${C'_1}$ be the first cycle one encounters when performing this traversal, let ${C'_2}$ be the next cycle (not equal to ${C'_1}$) one encounters when performing this traversal, and so forth. 3. Given a multiset ${\Gamma}$ of positive real numbers whose sum ${S := \sum_{x\in\Gamma} x}$ is finite, we can randomly enumerate ${x'_1,x'_2,\ldots}$ the elements of this sequence by letting each ${x \in \Gamma}$ have a ${\frac{x}{S}}$ probability of being set equal to ${x'_1}$, and then once ${x'_1}$ is chosen, let each remaining ${x \in \Gamma\backslash \{x'_1\}}$ have a ${\frac{x}{S-x'_1}}$ probability of being set equal to ${x'_2}$, and so forth. We then have the following result: Proposition 1 (Existence of the Poisson-Dirichlet process) There exists a random partition ${\Sigma}$ whose random enumeration ${Enum(\Sigma) = (u_1,u_2,\ldots)}$ has the uniform distribution on ${[0,1]^{\bf N}}$, thus ${u_1,u_2,\ldots}$ are independently and identically distributed copies of the uniform distribution on ${[0,1]}$. A random partition ${\Sigma}$ with this property will be called the Poisson-Dirichlet process.
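For a finite partition, the random normalised enumeration and the recovery formula (1) are easy to simulate; the following is an illustrative sketch (function names are my own, not from any of the references):

```python
import random

def random_enumeration(partition, rng):
    """Size-biased random enumeration of a finite partition of 1:
    repeatedly draw an element with probability proportional to its
    size, remove it, and renormalise the remainder.  Returns the
    sequence (u_1, u_2, ...) of renormalised draws (without the
    trailing zero padding used for infinite sequences)."""
    remaining = list(partition)
    total = 1.0
    us = []
    while remaining:
        r = rng.random() * total
        acc = 0.0
        for i, t in enumerate(remaining):
            acc += t
            if r < acc:
                break
        us.append(remaining[i] / total)
        total -= remaining.pop(i)
    return us

def reconstruct(us):
    """Recover the partition from its enumeration via formula (1):
    Sigma = {u_1, (1-u_1)u_2, (1-u_1)(1-u_2)u_3, ...}."""
    sigma, scale = [], 1.0
    for u in us:
        sigma.append(scale * u)
        scale *= (1 - u)
    return sigma
```

Running `random_enumeration` repeatedly on the same partition produces different orderings, but `reconstruct` always returns the same multiset, which is the point of formula (1).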
This process, first introduced by Kingman, can be described explicitly using (1) as $\displaystyle \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\},$ where ${u_1,u_2,\ldots}$ are iid copies of the uniform distribution on ${[0,1]}$, although it is not immediately obvious from this definition that ${Enum(\Sigma)}$ is indeed uniformly distributed on ${[0,1]^{\bf N}}$. We prove this proposition below the fold. An equivalent definition of a Poisson-Dirichlet process is a random partition ${\Sigma}$ with the property that $\displaystyle (u_1, \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})) \equiv (U, \Sigma) \ \ \ \ \ (2)$ where ${u_1}$ is a random element of ${\Sigma}$ with each ${t \in\Sigma}$ having a probability ${t}$ of being equal to ${u_1}$, ${U}$ is a uniform variable on ${[0,1]}$ that is independent of ${\Sigma}$, and ${\equiv}$ denotes equality of distribution. This can be viewed as a sort of stochastic self-similarity property of ${\Sigma}$: if one randomly removes one element from ${\Sigma}$ and rescales, one gets a new copy of ${\Sigma}$. It turns out that each of the three ways to generate partitions listed above can lead to the Poisson-Dirichlet process, either directly or in a suitable limit. We begin with the third way, namely by normalising a Poisson process to have sum ${1}$: Proposition 2 (Poisson-Dirichlet processes via Poisson processes) Let ${a>0}$, and let ${\Gamma_a}$ be a Poisson process on ${(0,+\infty)}$ with intensity function ${t \mapsto \frac{1}{t} e^{-at}}$. Then the sum ${S :=\sum_{x \in \Gamma_a} x}$ is almost surely finite, and the normalisation ${\Sigma_N(\Gamma_a) = \frac{1}{S} \cdot \Gamma_a}$ is a Poisson-Dirichlet process. Again, we prove this proposition below the fold.
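The stick-breaking description lends itself to a quick numerical sanity check: sampling ${\Sigma}$ via iid uniform ${u_i}$ and testing whether it avoids ${[1/2,1]}$ should reproduce ${1 - \log 2 \approx 0.307}$, which is the Dickman value ${\rho(2)}$ appearing in the discussion of smooth numbers later in the post. The sample size and seed below are arbitrary choices for illustration.

```python
import math
import random

def avoids(threshold, rng):
    """Sample a Poisson-Dirichlet partition by stick-breaking with iid
    uniform u_i, and report whether every element is below `threshold`.
    At most finitely many pieces can reach the threshold, so we may
    stop as soon as the unbroken remainder drops below it."""
    remaining = 1.0
    while remaining >= threshold:
        piece = rng.random() * remaining   # u_k times what is left
        if piece >= threshold:
            return False
        remaining -= piece
    return True

rng = random.Random(2013)
trials = 40000
estimate = sum(avoids(0.5, rng) for _ in range(trials)) / trials
# Compare with rho(2) = 1 - log 2 ~ 0.3069
```

Since at most one element of a partition of ${1}$ can lie in ${[1/2,1]}$, this probability can also be computed exactly from the expected counting function, but the Monte Carlo check is a useful test of the construction.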
Now we turn to the second way (a topic, incidentally, that was briefly touched upon in this previous blog post): Proposition 3 (Large cycles of a typical permutation) For each natural number ${n}$, let ${\sigma}$ be a permutation drawn uniformly at random from ${S_n}$. Then the random partition ${\Sigma_{CD}(\sigma)}$ converges in the limit ${n \rightarrow\infty}$ to a Poisson-Dirichlet process ${\Sigma}$ in the following sense: given any fixed sequence of intervals ${[a_1,b_1],\ldots,[a_k,b_k] \subset I}$ (independent of ${n}$), the joint discrete random variable ${(N_{[a_1,b_1]}(\Sigma_{CD}(\sigma)),\ldots,N_{[a_k,b_k]}(\Sigma_{CD}(\sigma)))}$ converges in distribution to ${(N_{[a_1,b_1]}(\Sigma),\ldots,N_{[a_k,b_k]}(\Sigma))}$. Finally, we turn to the first way: Proposition 4 (Large prime factors of a typical number) Let ${x > 0}$, and let ${N_x}$ be a random natural number chosen according to one of the following three rules: 1. (Uniform distribution) ${N_x}$ is drawn uniformly at random from the natural numbers in ${[1,x]}$. 2. (Shifted uniform distribution) ${N_x}$ is drawn uniformly at random from the natural numbers in ${[x,2x]}$. 3. (Zeta distribution) Each natural number ${n}$ has a probability ${\frac{1}{\zeta(s)}\frac{1}{n^s}}$ of being equal to ${N_x}$, where ${s := 1 +\frac{1}{\log x}}$ and ${\zeta(s):=\sum_{n=1}^\infty \frac{1}{n^s}}$. Then ${\Sigma_{PF}(N_x)}$ converges as ${x \rightarrow \infty}$ to a Poisson-Dirichlet process ${\Sigma}$ in the same fashion as in Proposition 3. The process ${\Sigma_{PF}(N_x)}$ was first studied by Billingsley (and also later by Knuth-Trabb Pardo and by Vershik), but the formulae were initially rather complicated; the proposition above is due to Donnelly and Grimmett, although the third case of the proposition is substantially easier and appears in the earlier work of Lloyd. We prove the proposition below the fold.
The previous two propositions suggest an interesting analogy between large random integers and large random permutations; see this ICM article of Vershik and this non-technical article of Granville (which, incidentally, was once adapted into a play) for further discussion. As a sample application, consider the problem of estimating the number ${\pi(x,x^{1/u})}$ of integers up to ${x}$ which are not divisible by any prime larger than ${x^{1/u}}$ (i.e. they are ${x^{1/u}}$-smooth), where ${u>0}$ is a fixed real number. This is essentially (modulo some inessential technicalities concerning the distinction between the intervals ${[x,2x]}$ and ${[1,x]}$) the probability that ${\Sigma_{PF}(N_x)}$ avoids ${[1/u,1]}$, which by the above theorem converges to the probability ${\rho(u)}$ that ${\Sigma}$ avoids ${[1/u,1]}$. Below the fold we will show that this function is given by the Dickman function, defined by setting ${\rho(u)=1}$ for ${u < 1}$ and ${u\rho'(u) = -\rho(u-1)}$ for ${u \geq 1}$, thus recovering the classical result of Dickman that ${\pi(x,x^{1/u}) = (\rho(u)+o(1))x}$. I thank Andrew Granville and Anatoly Vershik for showing me the nice link between prime factors and the Poisson-Dirichlet process. The material here is standard, and (like many of the other notes on this blog) was primarily written for my own benefit, but it may be of interest to some readers. In preparing this article I found this exposition by Kingman to be helpful. Note: this article will emphasise the computations rather than rigour, and in particular will rely on informal use of infinitesimals to avoid dealing with stochastic calculus or other technicalities. We adopt the convention that we will neglect higher order terms in infinitesimal calculations, e.g. if ${dt}$ is infinitesimal then we will abbreviate ${dt + o(dt)}$ simply as ${dt}$.

Emmanuel Breuillard, Ben Green, Bob Guralnick, and I have just uploaded to the arXiv our joint paper “Expansion in finite simple groups of Lie type“.
This long-delayed paper (announced way back in 2010!) is a followup to our previous paper in which we showed that, with one possible exception, generic pairs of elements of a simple algebraic group (over an uncountable field) generated a free group which was strongly dense in the sense that any nonabelian subgroup of this group was Zariski dense. The main result of this paper is to establish the analogous result for finite simple groups of Lie type (as defined in the previous blog post) and bounded rank, namely that almost all pairs ${a,b}$ of elements of such a group generate a Cayley graph which is a (two-sided) expander, with expansion constant bounded below by a quantity depending on the rank of the group. (Informally, this means that the random walk generated by ${a,b}$ spreads out in logarithmic time to be essentially uniformly distributed across the group, as opposed for instance to being largely trapped in an algebraic subgroup. Thus if generic elements did not generate a strongly dense group, one would probably expect expansion to fail.) There are also some related results established in the paper. Firstly, as we discovered after writing our first paper, there was one class of algebraic groups for which our demonstration of strongly dense subgroups broke down, namely the ${Sp_4}$ groups in characteristic three. In the current paper we provide in a pair of appendices a new argument that covers this case (or more generally, ${Sp_4}$ in odd characteristic), by first reducing to the case of affine groups ${k^2 \rtimes SL_2(k)}$ (which can be found inside ${Sp_4}$ as a subgroup) and then using a ping-pong argument (in a p-adic metric) in the latter context. 
Secondly, we show that the distinction between one-sided expansion and two-sided expansion (see this set of lecture notes of mine for definitions) is erased in the context of Cayley graphs of bounded degree, in the sense that such graphs are one-sided expanders if and only if they are two-sided expanders (perhaps with slightly different expansion constants). The argument turns out to be an elementary combinatorial one, based on the “pivot” argument discussed in these lecture notes of mine. Now to the main result of the paper, namely the expansion of random Cayley graphs. This result had previously been established for ${SL_2}$ by Bourgain and Gamburd, and Ben, Emmanuel and I had used the Bourgain-Gamburd method to achieve the same result for Suzuki groups. For the other finite simple groups of Lie type, expander graphs had been constructed by Kassabov, Lubotzky, and Nikolov, but they required more than two generators, which were placed deterministically rather than randomly. (Here, I am skipping over a large number of other results on expanding Cayley graphs; see this survey of Lubotzky for a fairly recent summary of developments.) The current paper also uses the “Bourgain-Gamburd machine”, as discussed in these lecture notes of mine, to demonstrate expansion. This machine shows how expansion of a Cayley graph follows from three basic ingredients, which we state informally as follows: • Non-concentration (A random walk in this graph does not concentrate in a proper subgroup); • Product theorem (A medium-sized subset of this group which is not trapped in a proper subgroup will expand under multiplication); and • Quasirandomness (The group has no small non-trivial linear representations).
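As a toy illustration of the expansion phenomenon (on a far smaller group than the paper considers, and with a fixed standard generating pair rather than the random generators of the theorem), one can watch a random walk on a Cayley graph of ${SL_2({\bf F}_5)}$ flatten out to the uniform distribution. The code below is a purely illustrative sketch.

```python
def mmul(x, y, p=5):
    """Multiply two 2x2 matrices over F_p, stored as flat 4-tuples."""
    a, b, c, d = x
    e, f, g, h = y
    return ((a*e + b*g) % p, (a*f + b*h) % p,
            (c*e + d*g) % p, (c*f + d*h) % p)

def minv(x, p=5):
    """Inverse in SL_2(F_p): the determinant is 1, so the adjugate works."""
    a, b, c, d = x
    return (d % p, -b % p, -c % p, a % p)

# Standard unipotent generators of SL_2(F_5), together with inverses.
gens = [(1, 1, 0, 1), (1, 0, 1, 1)]
gens += [minv(g) for g in gens]

# Enumerate the group by closure under the generators.
group = {(1, 0, 0, 1)}
frontier = list(group)
while frontier:
    nxt = []
    for x in frontier:
        for g in gens:
            y = mmul(x, g)
            if y not in group:
                group.add(y)
                nxt.append(y)
    frontier = nxt

# Simple random walk: start at the identity, average over the four
# generator moves, and watch the distribution flatten out.
dist = {(1, 0, 0, 1): 1.0}
for _ in range(300):
    new = {}
    for x, pr in dist.items():
        for g in gens:
            y = mmul(x, g)
            new[y] = new.get(y, 0.0) + pr / len(gens)
    dist = new

uniform = 1.0 / len(group)
tv = 0.5 * sum(abs(dist.get(x, 0.0) - uniform) for x in group)
```

The total variation distance to uniform decays exponentially, at a rate governed by the spectral gap of the Cayley graph; a uniform lower bound on that gap over a family of groups is exactly what expansion asserts.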
Quasirandomness of arbitrary finite simple groups of Lie type was established many years ago (predating, in fact, the introduction of the term “quasirandomness” by Gowers for this property) by Landazuri-Seitz and Seitz-Zalesskii, and the product theorem was already established by Pyber-Szabo and independently by Breuillard, Green, and myself. So the main problem is to establish non-concentration: that for a random Cayley graph on a finite simple group ${G}$ of Lie type, random walks did not concentrate in proper subgroups. The first step was to classify the proper subgroups of ${G}$. Fortunately, these are all known; in particular, such groups are either contained in proper algebraic subgroups of the algebraic group containing ${G}$ (or a bounded cover thereof) with bounded complexity, or else arise (up to conjugacy) from a version ${G(F')}$ of the same group ${G = G(F)}$ associated to a proper subfield ${F'}$ of the field ${F}$; this follows for instance from the work of Larsen and Pink, but also can be deduced using the classification of finite simple groups, together with some work of Aschbacher, Liebeck-Seitz, and Nori. We refer to the two types of subgroups here as “structural subgroups” and “subfield subgroups”. To preclude concentration in a structural subgroup, we use our previous result that generic elements of an algebraic group generate a strongly dense subgroup, and so do not concentrate in any algebraic subgroup. To translate this result from the algebraic group setting to the finite group setting, we need a Schwartz-Zippel lemma for finite simple groups of Lie type.
This is straightforward for Chevalley groups, but turns out to be a bit trickier for the Steinberg and Suzuki-Ree groups, and we have to go back to the Chevalley-type parameterisation of such groups in terms of (twisted) one-parameter subgroups, that can be found for instance in the text of Carter; this “twisted Schwartz-Zippel lemma” may possibly have further application to analysis on twisted simple groups of Lie type. Unfortunately, the Schwartz-Zippel estimate becomes weaker in twisted settings, and particularly in the case of triality groups ${{}^3 D_4(q)}$, which require a somewhat ad hoc additional treatment that relies on passing to a simpler subgroup present in a triality group, namely a central product of two different ${SL_2}$‘s. To rule out concentration in a conjugate of a subfield group, we repeat an argument we introduced in our Suzuki paper and pass to a matrix model and analyse the coefficients of the characteristic polynomial of words in this Cayley graph, to prevent them from concentrating in a subfield. (Note that these coefficients are conjugation-invariant.) In this previous post I recorded some (very standard) material on the structural theory of finite-dimensional complex Lie algebras (or Lie algebras for short), with a particular focus on those Lie algebras which were semisimple or simple. Among other things, these notes discussed the Weyl complete reducibility theorem (asserting that semisimple Lie algebras are the direct sum of simple Lie algebras) and the classification of simple Lie algebras (with all such Lie algebras being (up to isomorphism) of the form ${A_n}$, ${B_n}$, ${C_n}$, ${D_n}$, ${E_6}$, ${E_7}$, ${E_8}$, ${F_4}$, or ${G_2}$). 
Among other things, the structural theory of Lie algebras can then be used to build analogous structures in nearby areas of mathematics, such as Lie groups and Lie algebras over more general fields than the complex field ${{\bf C}}$ (leading in particular to the notion of a Chevalley group), as well as finite simple groups of Lie type, which form the bulk of the classification of finite simple groups (with the exception of the alternating groups and a finite number of sporadic groups). In the case of complex Lie groups, it turns out that every simple Lie algebra ${\mathfrak{g}}$ is associated with a finite number of connected complex Lie groups, ranging from a “minimal” Lie group ${G_{ad}}$ (the adjoint form of the Lie group) to a “maximal” Lie group ${\tilde G}$ (the simply connected form of the Lie group) that finitely covers ${G_{ad}}$, and occasionally also a number of intermediate forms which finitely cover ${G_{ad}}$, but are in turn finitely covered by ${\tilde G}$. For instance, ${\mathfrak{sl}_n({\bf C})}$ is associated with the projective special linear group ${\hbox{PSL}_n({\bf C}) = \hbox{PGL}_n({\bf C})}$ as its adjoint form and the special linear group ${\hbox{SL}_n({\bf C})}$ as its simply connected form, and intermediate groups can be created by quotienting out ${\hbox{SL}_n({\bf C})}$ by some subgroup of its centre (which is isomorphic to the ${n^{th}}$ roots of unity). The minimal form ${G_{ad}}$ is simple in the group-theoretic sense of having no normal subgroups, but the other forms of the Lie group are merely quasisimple, although traditionally all of the forms of a Lie group associated to a simple Lie algebra are known as simple Lie groups. 
Thanks to the work of Chevalley, a very similar story holds for algebraic groups over arbitrary fields ${k}$; given any Dynkin diagram, one can define a simple Lie algebra with that diagram over that field, and also one can find a finite number of connected algebraic groups over ${k}$ (known as Chevalley groups) with that Lie algebra, ranging from an adjoint form ${G_{ad}}$ to a universal form ${G_u}$, with every form having an isogeny (the analogue of a finite cover for algebraic groups) to the adjoint form, and in turn receiving an isogeny from the universal form. Thus, for instance, one could construct the universal form ${E_7(q)_u}$ of the ${E_7}$ algebraic group over a finite field ${{\bf F}_q}$ of finite order. When one restricts the Chevalley group construction to adjoint forms over a finite field (e.g. ${\hbox{PSL}_n({\bf F}_q)}$), one usually obtains a finite simple group (with a finite number of exceptions when the rank and the field are very small, and in some cases one also has to pass to a bounded index subgroup, such as the derived group, first). One could also use other forms than the adjoint form, but one then recovers the same finite simple group as before if one quotients out by the centre. This construction was then extended by Steinberg, Suzuki, and Ree by taking a Chevalley group over a finite field and then restricting to the fixed points of a certain automorphism of that group; after some additional minor modifications such as passing to a bounded index subgroup or quotienting out a bounded centre, this gives some additional finite simple groups of Lie type, including classical examples such as the projective special unitary groups ${\hbox{PSU}_n({\bf F}_{q^2})}$, as well as some more exotic examples such as the Suzuki groups or the Ree groups. 
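The relationship between the universal and adjoint forms can be made concrete in a small case by brute force: computing ${SL_2({\bf F}_7)}$ (the universal form), its two-element centre ${\{\pm 1\}}$, and the order of the quotient ${\hbox{PSL}_2({\bf F}_7)}$ (the adjoint form). This is an illustrative sketch, not tied to any software mentioned in the references.

```python
from itertools import product

p = 7  # any odd prime works; 7 gives the famous simple group of order 168

# The universal (simply connected) form: SL_2(F_p), i.e. all 2x2
# matrices over F_p with determinant 1.
sl2 = [(a, b, c, d)
       for a, b, c, d in product(range(p), repeat=4)
       if (a * d - b * c) % p == 1]

# Its centre: the scalar matrices lam*I with lam^2 = 1, i.e. {I, -I}.
centre = [(lam, 0, 0, lam) for lam in range(p) if (lam * lam) % p == 1]

# The adjoint form PSL_2(F_p) is the quotient SL_2(F_p) / centre.
order_sl2 = len(sl2)                    # q(q^2 - 1) = 336 for q = 7
order_psl2 = order_sl2 // len(centre)   # 168
```

The count matches the classical order formula ${|SL_2({\bf F}_q)| = q(q^2-1)}$, and the quotient of order 168 is the simple group ${\hbox{PSL}_2({\bf F}_7)}$.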
While I learned most of the classical structural theory of Lie algebras back when I was an undergraduate, and have interacted with Lie groups in many ways in the past (most recently in connection with Hilbert’s fifth problem, as discussed in this previous series of lectures), I have only recently had the need to understand more precisely the concepts of a Chevalley group and of a finite simple group of Lie type, as well as better understand the structural theory of simple complex Lie groups. As such, I am recording some notes here regarding these concepts, mainly for my own benefit, but perhaps they will also be of use to some other readers. The material here is standard, and was drawn from a number of sources, but primarily from Carter, Gorenstein-Lyons-Solomon, and Fulton-Harris, as well as the lecture notes on Chevalley groups by my colleague Robert Steinberg. The arrangement of material also reflects my own personal preferences; in particular, I tend to favour complex-variable or Riemannian geometry methods over algebraic ones, and this influenced a number of choices I had to make regarding how to prove certain key facts. The notes below are far from a comprehensive or fully detailed discussion of these topics, and I would refer interested readers to the references above for a properly thorough treatment.

The main purpose of this post is to roll over the discussion from the previous Polymath8 thread, which has become rather full with comments. As with the previous thread, the comments to this thread are mainly concerned with writing up the results of the Polymath8 “bounded gaps between primes” project; the latest files on this writeup may be found at this directory, with the most recently compiled PDF file (clocking in at about 90 pages so far, with a few sections still to be written!) being found here.
There is also still some active discussion on improving the numerical results, with a particular focus on improving the sieving step that converts distribution estimates such as $MPZ^{(i)}[\varpi,\delta]$ into weak prime tuples results $DHL[k_0,2]$.  (For a discussion of the terminology, and for a general overview of the proof strategy, see this previous progress report on the Polymath8 project.)  This post can also contain any other discussion pertinent to any aspect of the polymath8 project, of course. There are a few sections that still need to be written for the draft, mostly concerned with the Type I, Type II, and Type III estimates.  However, the proofs of these estimates exist already on this blog, so I hope to transcribe them to the paper fairly shortly (say by the end of this week).  Barring any unexpected surprises, or major reorganisation of the paper, it seems that the main remaining task in the writing process would be the proofreading and polishing, and turning from the technical mathematical details to expository issues.  As always, feedback from casual participants, as well as those who have been closely involved with the project, would be very valuable in this regard.  (One small comment, by the way, regarding corrections: as the draft keeps changing with time, referring to a specific line of the paper using page numbers and line numbers can become inaccurate, so if one could try to use section numbers, theorem numbers, or equation numbers as reference instead (e.g. “the third line after (5.35)” instead of “the twelfth line of page 54”) that would make it easier to track down specific portions of the paper.) Also, we have set up a wiki page for listing the participants of the polymath8 project, their contact information, and grant information (if applicable).  
We have two lists of participants; one for those who have been making significant contributions to the project (comparable to that of a co-author of a traditional mathematical research paper), and another list for those who have made auxiliary contributions (e.g. typos, stylistic suggestions, or supplying references) that would typically merit inclusion in the Acknowledgments section of a traditional paper.  It’s difficult to exactly draw the line between the two types of contributions, but we have relied in the past on self-reporting, which has worked pretty well so far.  (By the time this project concludes, I may go through the comments to previous posts and see if any further names should be added to these lists that have not already been self-reported.)
2018-04-20 07:01:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 249, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8867764472961426, "perplexity": 243.13578114069654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937161.15/warc/CC-MAIN-20180420061851-20180420081851-00222.warc.gz"}
http://rna123.com/problems-we-solve/copycount/assay-calibration
## The Problem:

Obtaining absolute PCR quantification currently requires the laborious preparation of standards and acquiring a standard curve, thereby wasting reagents and using valuable plate real estate. In addition, the traditional analysis method for qPCR determines the "cycle threshold", Ct or Cq, which varies for different assays, different machines, and varies from plate to plate, thereby making the Ct value hard to interpret.

## The Solution:

DNA Software has made a breakthrough in understanding the mechanism of PCR amplification. Our new product, qPCR CopyCount™, allows for any qPCR curve to be analyzed to directly determine the absolute number of copies of DNA at cycle zero. The DNA copy count is the quantity that every biologist wants, and the results provided by qPCR CopyCount have unprecedented relative and absolute accuracy. Click here to watch a video seminar on Copy Count.

• DNA copy number results that are 3-4X more precise than the Ct method
• Eliminates two common sources of user error, namely the quantification of standards and the running of standard curves
• Accelerate your workflow with existing qPCR instrumentation
• Cost-effective alternative to Digital PCR

# Two-step Calibration Procedure for TaqMan Assays

## Introduction

The following procedure is performed on each new qPCR assay that will be analyzed by qPCR CopyCount. The method is called “2-step” because it involves two qPCR reactions: one preliminary PCR with 4 replicates to get a rough concentration, and one full plate of PCR reactions to get the precise calibration. This method is faster, more accurate, and more reliable than a dilution series with standards. The calibration needs to be performed only once on each new assay design – the same calibration will work on any instrument and with any sample and will never need to be redone as long as the primers are not redesigned, the primer and probe concentrations are not changed, and the PCR buffer components (i.e.
[NTPs], [Mg], and [Enzyme]) are not changed. Thus, it is best to perform the calibration once with as many replicates as possible so that the assay can be used in the future with optimal accuracy.

## Background Concept

Read the document: Quick Guide Precision vs. Accuracy of qPCR CopyCount. This provides a brief description of the role of calibration to improve absolute quantification.

## Outcome

The calibration error, σ_calibration, depends upon the number of replicates and the mean copy number among the replicates. A 384-well calibration of an assay will provide a σ_calibration of about 5% inaccuracy if the mean copy number per well is 1.5. For a 96-well plate, the calibration errors will be twice as large as from a 384-well plate. Below is the equation for calculating the approximate calibration error:

$\text{Calibration Error} = \frac{1}{\sqrt{N \times M}} \qquad (\text{Eqn. 1})$

where N is the number of replicates and M is the average copy number per well. Note that for technical reasons, it is not advisable to go above a copy number of 2.5 in performing your calibration plate. Thus, to give a little safety margin, we recommend that you use a mean copy number of about 1.5 for the calibration plate. We also strongly recommend that you use a calibration plate with as many replicates as possible so that error is minimized.

## Laboratory Protocol

Note: If you know your initial DNA concentration very accurately (within 25% error), then you can skip step 1 and go directly to step 2.

#### Step 1A - Initial PCR

This protocol is written assuming 20 μL qPCR reactions. If your instrument uses a different volume, then scale the amounts of target and other reagents such as master mix and primers and probes accordingly. Prepare a 10 μL sample, labeled “Target DNA”, that contains between 1,000 and 100,000,000 copies of Target DNA (no need to be wasteful here, we just need at least 1000 molecules for the entire calibration procedure).
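As a sanity check on Eqn. 1, the arithmetic can be scripted (a sketch added here for illustration; the function name `calibration_error` is mine, not part of qPCR CopyCount). A 384-well plate at a mean of 1.5 copies per well gives roughly 4% error, consistent with the "about 5%" figure quoted above, and a 96-well plate doubles it:

```python
import math

def calibration_error(n_replicates: int, mean_copies_per_well: float) -> float:
    """Approximate fractional calibration error from Eqn. 1:
    error = 1 / sqrt(N * M)."""
    return 1.0 / math.sqrt(n_replicates * mean_copies_per_well)

# 384-well plate at ~1.5 copies/well: 1/sqrt(576) = ~4.2% inaccuracy
err_384 = calibration_error(384, 1.5)
# 96-well plate at the same mean copy number: exactly twice the error
err_96 = calibration_error(96, 1.5)
print(f"384-well: {err_384:.3f}, 96-well: {err_96:.3f}")
```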
Add 2 μL of the target DNA to a centrifuge tube labeled “reaction mix”. Add to the “reaction mix” tube 50 μL of 2X master mix (or 10 μL of 10X master mix) and appropriate volumes of primers and probe. Add water to make the final volume = 100 μL, which is sufficient for 5 PCR reactions, but only 4 PCR reactions will be run. Mix well and pipette 20 μL of the resulting mix into each of four reaction wells in the qPCR plate. The excess ~20 μL can be discarded (100 μL of reaction mix was prepared to be sure that there is enough for the 4 reactions to get a full 20 μL).

#### Step 1B - Obtain Estimated Copy Count

Run qPCR CopyCount on the four qPCR reactions from step 1A. This will give a rough estimate of the copy count, CC. Average the CC for the four replicates. This estimate provides the DNA copy number to within ±25% as long as your PCR reaction conforms to the limitations for cPCR.

#### Step 2A - Prepare Calibration Plate

The goal of this step is to prepare a PCR reaction sufficient for 400 wells that each contain about 1.5 molecules of DNA on average (so a total of 600 target molecules are needed). Compute the total molecules that remain in the 8 μL Target DNA sample from step 1A. This is accomplished using the copy count, CC, from step 1B as follows:

$\text{Total Target DNA Molecules} = CC \times 20 \qquad (\text{Eqn. 2})$

where the factor of 20 is because the remaining Target sample has 4-fold as much DNA in 8 μL compared to 2 μL, and that was effectively split into 5 reactions worth of volume in step 1A. From the total given by Eqn. 2, compute the volume that contains 600 molecules.
For example, if the Total = 1352 molecules then the volume needed is:

$\text{Volume Needed} = 8\,\mu\text{L} \times \frac{600\ \text{molecules}}{1352\ \text{Total molecules}} = 3.6\,\mu\text{L} \qquad (\text{Eqn. 3})$

Note that the volume used does not need to be perfectly exact (for example if you pipetted 3.5 μL that would be fine); the number of molecules could be off by a few percent and that will have no effect on the calibration. If the volume computed with Eqn. 3 is too small (like 0.01 μL), then you will need to first dilute the sample by adding water, and then pipette out the amount needed taking into account the added dilution. Pipette the volume needed from Eqn. 3 into a fresh 20 mL tube labeled “Calibration Reaction Mix”. Since we are preparing reaction mixture for 400 reactions with 20 μL each, the total reaction volume is 8000 μL. Add to the “calibration reaction mix” tube 800 μL of 10X qPCR components (master mix, primers and probe) and add water to make the final volume = 8000 μL, which is sufficient for 400 PCR reactions, but only 384 PCR reactions will be run.

Notes:
1. It is essential to acquire a sufficient number of PCR cycles to allow for saturation to be observed. We suggest 60 cycles for 10-20 μL reaction volumes (if your volume is much smaller, like 33 nL, then fewer cycles can be used as long as full saturation is observed even for a single copy of DNA at cycle zero).
2. We recommend that the PCR extension time is 1 minute to ensure that all amplicons are fully extended.

#### Step 2B - Run Your Calibration Plate

Run qPCR CopyCount and select “Calibration Plate”. Upload the data from step 2A, and give a name for the assay that you are calibrating. The program will do the rest. The calibration for that assay will be saved to your database of assays so that you can use it for sample unknowns in the future.

Notes:
1. Rarely, some assays may be very poorly designed resulting in aberrant behavior.
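The arithmetic in Eqns. 2-3 can also be scripted. This is a sketch of my own (the function and variable names are mine); the example copy count of 67.6 per well is inferred so as to reproduce the worked total of 1352 molecules:

```python
def total_target_molecules(copy_count: float) -> float:
    """Eqn. 2: molecules remaining in the 8 uL Target DNA sample,
    where copy_count is the per-well CC averaged over the 4 replicates."""
    return copy_count * 20  # 4x more DNA in 8 uL vs 2 uL, split over 5 reaction volumes

def volume_needed_ul(total_molecules: float, target_molecules: float = 600) -> float:
    """Eqn. 3: volume of the 8 uL sample that holds ~600 molecules."""
    return 8.0 * target_molecules / total_molecules

# Worked example from the text: CC averaging 67.6 copies/well gives 1352 total
total = total_target_molecules(67.6)
vol = volume_needed_ul(total)
print(f"total = {total:.0f} molecules, pipette {vol:.1f} uL")
```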
If your calibration produces a message “Calibration plate unreliable due to poor Chi-squared P”, this is an indication that your assay is poorly designed (e.g. the primers are highly inefficient due to competing secondary structure) or that there is some other problem with the PCR, such as very bad contamination or poor reagent quality.
2. If your estimated copy count in step 1B is incorrect by more than a factor of 2, then you will get the message: “Calibration plate unreliable due to high copy number”. This means that you will need to further dilute your sample (we recommend diluting by 2- to 3-fold more) and run a new calibration plate.
2017-06-24 19:10:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45385733246803284, "perplexity": 3132.6353755247637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320323.17/warc/CC-MAIN-20170624184733-20170624204733-00029.warc.gz"}
https://www.physicsforums.com/threads/lagrange-equation-when-exactly-does-it-apply.794587/
# Lagrange equation: when exactly does it apply?

Hi! Does the Lagrange equation ONLY apply when the constraints are holonomic? What about the constraining forces acting on the system (i.e. normal force, or other perpendicular forces): do they make a system holonomic? What about the Lagrange equation with the general force on the right-hand side? I read in Goldstein that it can be, for instance, a non-conservative frictional force. Why? Where did that come from? BTW, I am talking about the Euler-Lagrange equation. This one, $$\frac{\partial L}{\partial q_j} - \frac{d}{dt} \frac{\partial L}{\partial \dot{q}_j} = 0$$ (one equation for each $j$), in case there was any confusion. But what is up with the modified equation, ##\frac{\partial L}{\partial q_j} - \frac{d}{dt} \frac{\partial L}{\partial \dot{q}_j} = Q_j## ? When does this apply to a system, and for which generalized forces ##Q_j##? It was not derived in Goldstein's book, just given.

Another question, if somebody wants to answer: does ##\frac{\partial T}{\partial q_j}##, where ##T## is the kinetic energy of the system, always equal zero? Or do there exist situations where the kinetic energy has an explicit dependence on position? It might seem like a strange question because kinetic energy is defined using total velocity, but I ask because one form of Lagrange's equation is ##\frac{d}{dt} \frac{\partial T}{\partial \dot{q}_j} - \frac{\partial T}{\partial q_j} = Q_j##.

> Another question, if somebody wants to answer: does ##\frac{\partial T}{\partial q_j}##, where ##T## is the kinetic energy of the system, always equal zero? Or do there exist situations where the kinetic energy has an explicit dependence on position?

It certainly can: in spherical coordinates (or polar) you have position dependence in the kinetic term.

Nikitin
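A worked example (added here for concreteness; it is the standard polar-coordinate computation, not from the thread): for a single particle of mass $m$ moving in a plane, in polar coordinates $(r, \theta)$ the kinetic energy is

$$T = \tfrac{1}{2} m \left( \dot{r}^2 + r^2 \dot{\theta}^2 \right),$$

so

$$\frac{\partial T}{\partial r} = m r \dot{\theta}^2 \neq 0 \quad \text{whenever } \dot{\theta} \neq 0,$$

and this term is exactly the centripetal contribution in the radial equation of motion, $\frac{d}{dt}(m\dot{r}) - m r \dot{\theta}^2 = Q_r$.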
2021-04-13 13:45:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8833286762237549, "perplexity": 732.1019630209195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072366.31/warc/CC-MAIN-20210413122252-20210413152252-00250.warc.gz"}
https://www.physicsforums.com/threads/differentiation-under-the-integral-sign.13012/
# Differentiation under the integral sign

1. Jan 21, 2004

### hliu8

Hello everyone, this is my first post. I would like to understand better the idea of differentiation under the integral sign. I read about it in http://mathworld.wolfram.com/LeibnizIntegralRule.html and Feynman's autobiography, about evaluating an integral by differentiation under the integral sign, but how exactly is it done? Thanks to everyone.

2. Jan 21, 2004

### himanshu121

How it is done: Consider $$I(b)=\int_0^1 \frac{x^b-1}{\ln x}\, dx$$ You can see clearly that after plugging in the limits, the variable x will vanish; the only variable that remains is b, so the integral will be a function of b. While integrating w.r.t. x you consider b as a constant; similarly, when differentiating w.r.t. b you consider x as a constant. So, you have $$I'(b)=\int_0^1 \frac{x^b \ln x}{\ln x}\, dx$$ $$I'(b)=\int_0^1 x^b\, dx=\frac{1}{b+1}$$ $$\Rightarrow I(b)= \int \frac{1}{b+1}\, db + c$$ If $b=0$, then $I(b)=0 \Rightarrow c=0$. Therefore $I(b)=\ln(b+1)$. So clearly it is a function of b now, with no x.

Last edited: Jan 21, 2004

3. Jan 22, 2004

### Tron3k

Funny, I was just reading Surely You're Joking, Mr. Feynman and I was wondering about that also.

4. Jan 23, 2004

### MathematicalPhysicist

What is written about this in the book? Is there a technical explanation about it?

5. Jan 23, 2004

### Muzza

No, Feynman basically says that his "mathematical toolbox" (which included differentiation under the integral sign) was different from others', so he could solve problems others couldn't...

Last edited: Jan 23, 2004
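As a quick numerical check of the result $I(b)=\ln(b+1)$ (a sketch of my own, using a plain midpoint rule rather than any particular quadrature library; midpoints avoid the removable singularities at the endpoints):

```python
import math

def I_numeric(b: float, n: int = 200_000) -> float:
    """Midpoint-rule approximation of I(b) = integral from 0 to 1 of (x^b - 1)/ln(x) dx.
    The integrand tends to 0 as x -> 0+ and to b as x -> 1-, so midpoints suffice."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += (x**b - 1.0) / math.log(x)
    return total * h

# I(1) should be ln 2 and I(2) should be ln 3
print(I_numeric(1.0), I_numeric(2.0))
```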
2016-05-04 10:16:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9598217606544495, "perplexity": 2057.6282797208387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860122902.86/warc/CC-MAIN-20160428161522-00083-ip-10-239-7-51.ec2.internal.warc.gz"}
https://demo7.dspace.org/items/a0bf7e29-20ff-4513-a138-c4f644b7bd16
## String Equations of the q-KP Hierarchy

Tian, Kelei; He, Jingsong; Su, Yucai; Cheng, Yi

##### Description

Based on the Lax operator $L$ and Orlov-Shulman's $M$ operator, the string equations of the $q$-KP hierarchy are established from special additional symmetry flows, and the negative Virasoro constraint generators $\{L_{-n},\ n\geq 1\}$ of the $2$-reduced $q$-KP hierarchy are also obtained.

Comment: 11 pages

##### Keywords

Nonlinear Sciences - Exactly Solvable and Integrable Systems
2022-12-05 15:26:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8123407959938049, "perplexity": 9632.523544079437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711017.45/warc/CC-MAIN-20221205132617-20221205162617-00833.warc.gz"}
https://answerofmath.com/solved-when-are-correlated-normal-random-variables-multivariate-normal/
# Solved – When are correlated Normal random variables multivariate Normal?

I know that there are many examples of correlated normal random variables which are not jointly (multivariate) normal. However, are there conditions which state when correlated normal random variables are jointly normal? Say I observe n univariate random variables $X_1, \dots, X_n$ that are each $N(\mu, \sigma^2)$ with common correlation $\rho$. Is it possible that these are jointly normal? If so, what are the conditions, and how would I know if they are jointly normal?

> Say I observe n univariate random variables $X_1, \dots, X_n$ that are each $N(\mu, \sigma^2)$ with common correlation $\rho$. Is it possible that these are jointly normal? If so, what are the conditions and how would I know if they are jointly normal.

There are no conditions based only on the marginal pdfs that can ensure joint normality. Let $\phi(\cdot)$ denote the standard normal density. Then, if $X$ and $Y$ have joint pdf

$$f_{X,Y}(x,y) = \begin{cases} 2\phi(x)\phi(y), & x \geq 0,\ y \geq 0,\\ 2\phi(x)\phi(y), & x < 0,\ y < 0,\\ 0, & \text{otherwise},\end{cases}$$

then $X$ and $Y$ are (positively) correlated standard normal random variables (work out the marginal densities to verify this if it is not immediately obvious) that do not have a bivariate joint normal density. So, given only that $X$ and $Y$ are correlated standard normal random variables, how can we tell whether $X$ and $Y$ have the joint pdf shown above or the bivariate joint normal density with the same correlation coefficient?

In the opposite direction, if $X$ and $Y$ are independent random variables (note the utter lack of mention of normality of $X$ and $Y$) and $X+Y$ is normal, then $X$ and $Y$ are normal random variables (Feller, Chapter XV.8, Theorem 1).
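The counterexample above is easy to simulate. The sketch below is my own construction, not part of the original answer: it samples from $f_{X,Y}$ by drawing $X \sim N(0,1)$ and giving $|Y|$ the sign of $X$. Both marginals stay standard normal, yet every pair lands in the first or third quadrant, which a nondegenerate bivariate normal (with $|\rho| < 1$) cannot do, since it puts positive mass in all four quadrants:

```python
import random

random.seed(0)

def sample_pair():
    """One draw from f(x,y) = 2*phi(x)*phi(y) on the quadrants where x and y
    share a sign: take X standard normal and give |Y| the sign of X."""
    x = random.gauss(0.0, 1.0)
    y = abs(random.gauss(0.0, 1.0)) * (1.0 if x >= 0 else -1.0)
    return x, y

pairs = [sample_pair() for _ in range(100_000)]

# Every pair lies in quadrant I or III -- impossible for a joint normal density
assert all(x * y >= 0 for x, y in pairs)

# Marginals are standard normal; E[XY] = E[|X|] E[|Z|] = 2/pi > 0
mean_x = sum(x for x, _ in pairs) / len(pairs)
corr_num = sum(x * y for x, y in pairs) / len(pairs)
print(f"mean of X ~ {mean_x:.3f}, E[XY] ~ {corr_num:.3f} (positively correlated)")
```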
2023-03-25 16:21:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 23, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7623329758644104, "perplexity": 135.90010360742127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945368.6/warc/CC-MAIN-20230325161021-20230325191021-00673.warc.gz"}
https://intelligencemission.com/free-energy-graph-labeled-free-energy-gradient.html
How do you gather and retrain the RA? Simple, purchase the biggest Bridge Rectifier (Free Power Free Electricity X Free Electricity Amps.) Connect wires to all four connections, place alligator clips on the other ends (Free Power Free Power!) Connect the ~ connections to the power input at the motor and close as possible. Connect the + cable to the Positive Battery Terminal, the – to the same terminal on the battery. Connect the battery Alligator Clip AFTER the Motor is running full on. That’s it! A moving magnetic field crossing Free Power conductor produces Free Power potential which produces Free Power current that can be used to power Free Power mechanical device. Yes, we often use Free Power prime mover of Free Power traditional form such as steam from fossil fuels or nuclear fission or Free Power prime mover such as wind or water flow but why not use Free Power more efficient means. Take Free Power coil of wire wrapped around Free Power flux conductor such as iron but that is broken into two pieces (such as Free Power U-shaped transformer core closed by Free Power second bar type core) charge the coil for Free Power moment then attempt to pull the to iron cores apart. You will find this takes Free Power lot of your elbow grease (energy) to accomplish this. This is due to the breaking of the flux circuit within the iron core. An example of energy store as magnetic flux. Isn’t this what Free Power permanent magnet is? Transfering one form of energy to another. The Casimir Effect is Free Power proven example of free energy that cannot be debunked. The Casimir Effect illustrates zero point or vacuum state energy , which predicts that two metal plates close together attract each other due to an imbalance in the quantum fluctuations. You can see Free Power visual demonstration of this concept here. The implications of this are far reaching and have been written about extensively within theoretical physics by researchers all over the world. 
Today, we are beginning to see that these concepts are not just theoretical but instead very practical and simply, very suppressed. Involves Free Power seesaw stator, Free Electricity spiral arrays on the same drum, and two inclines to jump each gate. Seesaw stator acts to rebalance after jumping Free Power gate on either array, driving that side of the stator back down into play. Harvey1 is correct so far. Many, many have tried and failed. Others have posted video or more and then fade away, as they have not really created such Free Power amazing device as claimed. I still try every few weeks. My designs, or trying to replicate others'. So far, none are working, and those on the web haven't been found to be real either. Perhaps someday, my project will work. I have been close Free Power few times, but it still didn't work. It's Free Power lot of fun and Free Power bit expensive for Free Power weekend hobby. LoneWolffe Harvey1 LoneWolffe The device that is shown in the diagram would not work, but the issue that is the concern here is different. The first problem is that people say science is Free Power constant, which in itself is true, but to think as humans we know all the laws of physics is obnoxious. As our laws of physics have changed constantly through history. The second issue is that too many accept what they are told and don't ask enough questions. Yet the third is the most concerning of all: Free Electricity once stated that by using the magnetic field of the earth it is possible to manipulate electrons in the atmosphere to create electricity. This means that by manipulating electrons you take energy from the air we all breathe to convert it to usable energy. Shortly after this statement, it is known that the government stopped Free Electricity's research, with no reason why. It's all well and good reading books, but you still question them. Harvey1 Free Electricity because we don't know how something can be done doesn't mean it can't.
You did not even appear to read or understand my response in the least. I’ve told you several times that I NEVER EXPECTED ANYONE TO SEND ME ONE. You cannot seem to get this. Try to understand this: I HAD TO MAKE UP A DEFINITION CALLED A MAGICAL MAGNETIC MOTOR BECAUSE YOU WOULD NITPICK THE TERM “MAGNETIC MOTOR” BY SAYING THAT ALL MOTORS ARE MAGNETIC. Are you so delusional that you cannot understand what I am saying? Are you too intellectually challenged to understand? Are you knowingly changing the subject again to avoid answering me? Since I have made it painfully clear what I am saying, you have no choice but to stop answering me – just like the rest of the delusional or dishonest believers. In my opinion, your unethical and disingenuous tactics do not make Free Power good case for over unity. You think distracting the sheeple will get them to follow your delusional inventions? Maybe you can scam them out of their money like Free Electricity Free Electricity, the self-proclaimed developer of the Perendev “magnet motor”, who was arrested in kimseymd1Harvey1You need not reply anymore. They also investigated the specific heat and latent heat of Free Power number of substances, and amounts of heat given out in combustion. In Free Power similar manner, in 1840 Swiss chemist Germain Hess formulated the principle that the evolution of heat in Free Power reaction is the same whether the process is accomplished in a one-step process or in Free Power number of stages. This is known as Hess’s law. With the advent of the mechanical theory of heat in the early 19th century, Hess’s law came to be viewed as Free Power consequence of the law of conservation of energy. Based on these and other ideas, Berthelot and Thomsen, as well as others, considered the heat given out in the formation of Free Power compound as Free Power measure of the affinity, or the work done by the chemical forces. This view, however, was not entirely correct.
In 1847, the English physicist James Joule showed that he could raise the temperature of water by turning a paddle wheel in it, thus showing that heat and mechanical work were equivalent or proportional to each other, i.e., approximately, dW ∝ dQ.

Don't even try the concept with a rotor; it won't work. I hope some of you can understand this, and understand that that's the reason so very few people have built, or seen, real working PM drives. My answers are: no, no, and sorry, I can't tell you yet. Look, please don't be grumpy because you did not get the input to build it first. Gees, I can't even tell you what we call it yet. But you will soon know. Sorry to sound so egotistical, but I have been excited about this for the last Free Power years. Now don't fret, soon you will know what you need to know.

"…the secret is in the 'SHAPE' of the magnets." No it isn't. The real secret is that magnetic motors can't and don't work. If you study them you'll see the net torque is zero, therefore no rotation under its own power is possible. But we must be very careful in not getting carried away by crafted/pseudo explanations of fraud devices.

Mr. Free Electricity, we agree. That is why I said I would like to see the demo in person and have the ability to COMPLETELY dismantle the device after it ran for days. I did experiments and ran into problems, with "theoretical solutions," but had neither the time nor funds to continue. Mine too ran down. The only merit to my experiments was that the system ran MUCH longer with an alternator in place, similar to what the Tesla Model S does. I then joined the bandwagon of recharging or replacing a battery as they are doing in Free Electricity and Norway. Off the "free energy" subject for a minute: I think the cryogenic superconducting battery or magnesium replacement battery should be of interest to you. Why should I have to back up my claims?
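Joule's proportionality dW ∝ dQ is, in modern units, an equality with the specific heat of water as the conversion factor. A minimal sketch with assumed example numbers:

```python
# Joule's paddle-wheel result in modern terms: mechanical work W dissipated
# in water of mass m raises its temperature by dT = W / (m * c).
c_water = 4186.0   # J/(kg K), specific heat of water
m_water = 1.0      # kg
W = 4186.0         # J of paddle work dissipated in the water
dT = W / (m_water * c_water)
print(dT)          # 1.0 kelvin of temperature rise
```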
I'm not making any claim that I have invented a device that defies all the known applicable laws of physics.

Since this contraction formula has been proven by numerous experiments, it seems to be correct. So the discarding of the aether was the primary mistake of the physics establishment. Empty space is not empty. It has physical properties: an impedance, a constant of electrical permittivity, and a constant of magnetic permeability. Truly empty space would have no such properties! The aether is seething with energy. Some physicists, like Misner, Thorne, and Wheeler in their book "Gravitation", calculate that a cubic centimeter of space has about ten to the 94th power grams of energy. Using the formula E = mc^2, that comes to a tremendous amount of energy. If only an exceedingly small portion of this "zero-point energy" could be tapped, it would amount to a lot! Matter is theorised to be vortexes of aether spinning at the speed of light; that is why electron-positron pair production can occur in empty space if a sufficiently strong electric field is imposed on that space. In that respect matter can be created.

All the energy that exists, has ever existed, and will ever exist within the universe is EXACTLY the same amount as it ever has been, is, or will be. You can't create more energy. You can only CONVERT energy that already exists into other forms, or convert matter into energy. And there is ALWAYS loss. Always. There is no way around this simple truth of the universe, sorry.

There is a serious problem with your argument. "Give me one miracle and we will explain the rest." Then where did all the mass and energy to make the so-called "Big Bang" come from? Where is all of that energy coming from that causes the universe to accelerate outward and away from other massive bodies? Therein lies the real magic, doesn't it?
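For scale, the figure attributed to Misner, Thorne and Wheeler can be run through E = mc^2 directly. This sketch simply takes the 10^94 g per cubic centimeter number at face value (it is an estimate of the Planck density, not a tappable reserve):

```python
# Energy equivalent of ~1e94 grams of mass via E = m * c**2.
c = 2.998e8          # speed of light, m/s
m = 1e94 * 1e-3      # 1e94 grams converted to kilograms
E = m * c**2
print(f"{E:.3e} J")  # roughly 9e107 joules per cubic centimeter
```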
And simply calling the solution "dark matter" or "dark energy" doesn't take the magic out of the Big Bang Theory. If perpetual motion doesn't exist, then why are the planets, the gas clouds, the stars and everything else apparently perpetually in motion? What was called religion yesterday is called science today. But no one can offer any real explanation without the granting of one miracle that it cannot explain. Chink, chink goes the armor.

You asked about the planets as if they are such machines. But they aren't. Do they spin and orbit for a very long time? Yes. Forever? No. But let's assume for the sake of argument that you could set a celestial object in motion and keep it from ever contacting another object, so that it moves forever (not possible, because empty space isn't actually empty, but let's continue). The problem here is that to get energy from that object you have to come into contact with it.

I am doing more research into increasing power output so that it can be used in future in cars. My engine uses a heavy-weight piston, gears, flywheels in an unconventional, different way, and pusher rods, but not balls. It was necessary for me to take the example of a ball to explain the basic idea I used in my concept (the ball system is very much analogous to the piston-gear system I am using in my engine).

I know you all agree on one point: no one has a ready and working magnet rotating motor. :) You are thinking in all corners of your mind, like "can't break physics laws" etc. :) If you showed a human from years back our airplanes, cars, motors, etc., they would be shocked. Oh, I am going to write long; shortly: don't think only of physics laws, because physics laws were created by humans, and some inventors appear and write and are gone. Can you write your laws, under the universe God created?

You should not spew garbage out of your mouth until you really know what you are talking about!
Can you enlighten us on your knowledge of the second law of thermodynamics and explain how it prevents us from creating free electron energy, please? If you can't, then you have no right to say that it can't work! People like you have kept the world from advancements.

No "free energy magnetic motor" has ever worked. Never. Not once. Not ever. The only videos are from the scammers, never from a real independent person. That's why only the plans are available. When it won't work, they blame it on you, and keep your money.

Let's look at the B field of the earth and recall how any magnet works: if you pass a current through a wire, it generates a magnetic field around that wire. Conversely, if you move that wire through a magnetic field normal (at right angles) to that field, it creates a flux-cutting current in the wire. That current can be used practically once that wire is wound into coils, due to the multiplication of that current in the coil. If there is any truth to energy in the ether, and whether there is any truth to the story of Westinghouse, upon being presented with these ideas, approaching all the high areas of learning in the world to change how electricity is taught, I don't know (because if real, free energy to the world would break the bank if individuals had the ability to obtain energy on demand). I have not studied this area. I welcome others who have to contribute to the discussion. I remain open-minded, provided there are simple, straightforward experiments one can perform.

I have some questions, and I know that there are some "geniuses" here who can answer all of them, but to start with: if a magnetic motor is possible, and I believe it is, and if they can overcome their own friction, what keeps them from accelerating to the point where they disintegrate, like a jet turbine running past its point of stability?
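The "flux cutting" and coil "multiplication" described above are just Faraday's law: for an N-turn coil the induced EMF is N times the rate of change of flux through one turn. A small sketch with made-up numbers:

```python
# Faraday's law for an N-turn coil: emf = -N * dPhi/dt.
N = 500        # turns of wire in the coil
dPhi = 2e-3    # change in magnetic flux through one turn, Wb
dt = 0.01      # time over which the flux changes, s
emf = -N * dPhi / dt
print(emf)     # -100.0 volts across the coil
```

Winding more turns multiplies the voltage, not the energy; the work still has to come from whatever moves the magnet.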
How can a magnet pass a coil of wire at the speed of a human hand and cause electrons to accelerate to near the speed of light? If there is energy stored in uranium, is there not energy stored in a magnet? Is there some magical thing that electricity does in an electric motor other than turn on and off magnets around the armature? (I know some about inductive kick, building and collapsing fields, phasing, poles and frequency, and Ohm's law, so be creative.) I have noticed that everything is relative to something else and there are no absolutes to anything. Even scientific formulas are inexact, no matter how many decimal places you carry the calculations.

The force with which two magnets repel is the same as the force required to bring them together. Ditto: no net gain in force. No rotation. I won't even bother with the laws of thermodynamics.

One of my pet projects is getting electricity from sea water. This will be a boat, a regular fourteen-foot double hull; the outside hull would be aluminium, the inner hull will be copper, and between the outside hull and the inside is where the sea water would pass through, with the electrodes connecting to a step-up transformer. Once this boat is put on the seawater, the motor automatically starts. If the sea water gives Free Electricity volts, when passed through a step-up transformer it can amplify the voltage to Free Power or Free Electricity, more than enough to propel the boat forward without batteries or gasoline, but with power from the sea.

Two disks: disk number one has thirty magnets on the circumference of the disk and is permanently mounted; disk number two, also with thirty magnets around the circumference. When put in close proximity, through a simple clutch system, the second disk would spin. Connect a dynamo or generator?
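The point that repulsion out equals attraction in, hence "no net gain", can be checked numerically: the work done by any position-dependent force over an approach-and-return path sums to zero. A sketch with an assumed dipole-like force law (the exponent and constant are arbitrary choices for illustration):

```python
# Net work around a closed approach/return path between two repelling magnets.
k = 1e-4

def F(x):
    # assumed repulsive force law with dipole-like falloff
    return k / x**4

def work(x0, x1, n=20000):
    # trapezoidal integral of F from x0 to x1: work done BY the field
    h = (x1 - x0) / n
    s = 0.5 * (F(x0) + F(x1)) + sum(F(x0 + i * h) for i in range(1, n))
    return s * h

w_in = work(0.10, 0.02)   # pushing the magnets together: field resists
w_out = work(0.02, 0.10)  # letting them spring apart: field pushes back
print(abs(w_in + w_out) < 1e-6)  # True: the round trip nets zero work
```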
You'll have free electricity. The secret is in the "SHAPE" of the magnets on the first disk. I'm building a demonstration model, and will videotape it for interested viewers soon; it is in the preliminary stage as of now. The configuration of this motor I invented is similar to Stonehenge, but when built into multiple disks?
http://databasefaq.com/index.php/tag/emacs
## how to detect if emacs is running in a terminal or a window? emacs
I'm hoping to be able to branch on a flag in emacs to detect whether it is running in a terminal or a windowed app (i.e. the OS X Emacs app). Is there such a flag?...

## Org-mode: How do I fold all levels but the current in a sparse tree? emacs,org-mode
After doing C-c / to create a sparse tree, move the cursor to some interesting place and C-c C-c to remove highlights. I'd like to be able to collapse all levels but the current one. Is there some way of doing it? I'd like to maintain the cursor positioned in the...

## How to remove the temporary files starting and ending with '#' created by Emacs when it closes? emacs,backup,kill-process
When Emacs is closed with modified buffers present, it creates a file with the format '#file_name#' for recovery purposes in its parent directory for each modified buffer (except for scratch). When the recover-file command is used the next time Emacs opens, the previously modified buffer of the file is recovered....

## Emacs: syntax highlight for non-code files file,emacs,syntax-highlighting
Let us suppose I want to create a file (using emacs) to explain something about programming. For example, a mylib-tutorial.txt. Is there a way to turn on syntax highlighting on specific parts of a file containing code? For example: Tutorial --------- This call behaves as follows: void foo(&resource); This call...

## what's the difference between 'delete' and 'remove' in Emacs Lisp emacs,elisp
I am learning Elisp by reading others' code. I found that people use both delete and remove to delete an element from a sequence. I checked the documentation and the code of remove; it seems to be just a wrapper for delete which, when the sequence is a list, does a copy-sequence. Is it...

## Emacs config for scala development [closed] scala,emacs
Can anyone point me to a repo to get a proper scala development config for emacs?
I'm pretty new to emacs and I went through some tutorials regarding how to set up ensime for emacs, but I didn't succeed. osx,emacs

## EMACS-Live + Slime error at startup emacs,common-lisp,slime,kubuntu
Ok, I must be missing something obvious. I've been stuck since yesterday trying to launch Emacs-live + Slime. I'm using Emacs 24.3.1, installed Emacs-live and it worked well (if I start emacs-live without Slime it works), downloaded Slime-Pack from git and added this line to .emacs-live.el (live-append-packs '(~/.live-packs/slime-pack/)) I'm on a...

## Can I stop Emacs from resetting default-directory every time I open a file? emacs,directory,editing
I've already asked the same question on Emacs. If it's not permitted, I'm sorry and I will delete the question. If I: Start Emacs in my home directory (~) Find a file in the ~/Projects/ruby-play directory with C-x C-f Try to find another file with C-x C-f The default directory...

## How to get Emacs to sort lines by length? sorting,emacs,elisp
I'd like to be able to highlight a region in Emacs and then sort the region by line length. The closest I've found is the following code which I think will sort by length: (sort-subr t #'forward-line #'end-of-line nil nil (lambda (l1 l2) (apply #'< (mapcar (lambda (range) (- (cdr...

## how to control the time span of highlighting a searched word when pressing "*" under evil in emacs? emacs,evil-mode
In evil mode, when you press *, it will highlight all the words matching the one under the cursor, but the highlight will disappear very soon. How can I control how long to keep the highlight? I am using the prelude version of emacs.

## How do I prevent org-mode from executing all of the babel source blocks? emacs,org-mode,org-babel
I have an org file with lots of babel source blocks in it that only need to be re-executed when the code is changed. How do I prevent org from executing all of the blocks during export? In other words, set them all to manual execution only? I would prefer...
## Determining in emacs the module in which a function is defined? emacs,ocaml
Say I have the following: open A open List let double = map (fun x -> 2*x) [1;2;3] In emacs with merlin-mode I can place the cursor on map and execute merlin-type-enclosing to get the type of map. Is there a similar command (in merlin, tuareg, or others) that can...

## emacs call-interactively and key simulation emacs,elisp
I want to write a small function that saves the cursor's current position, marks the whole buffer, indents it, and then goes back to the previous cursor position. I understand there might be an easier way to achieve the same result, but I'd like to understand how these principles work in...
http://www.mathblogging.org/posts/?type=post&filter0=blog&modifier0=topic&value0=Institutions%2C+Organizations
# Posts

### October 30, 2014

+ 2430 = 2 x 3 x 3 x 3 x 3 x 3 x 5. 2430 is the number of unordered ways to write 1 as a sum of reciprocals of integers no larger than 18. 2430 is 3300 in base 9. 2430 is the sum of two powers of 3 (A055235). 2430 is the product of all distinct numbers formed by permuting digits of 2430 (A061147). 2430 is a number divisible by the square of the sum of its digits (A072081). 2430 divides 91^27 - 1. Source: What's Special About This Number

### October 29, 2014

+ Two of the barriers that students must overcome in the course of their math studies are fractions and algebra. Recently Heinemann published a textbook that makes it a lot easier for students to scale the algebra wall. The title is Transition to Algebra. It starts off in lesson 1 with exploring number tricks, the kind that I first learned about when I read W.W. Sawyer's Mathematician's Delight in 1972. A simple example is shown here in this video produced by Heinemann. I highly recommend this […]

+ If you had a sloppy maths teacher at school you might have grown up with the idea that the number π is equal to 22/7. Now that is completely wrong. Writing those numbers out in decimal gives 22/7 = 3.14285… while π = 3.14159…. There's a difference in the third decimal place after the decimal point! How accurately do we need to know the value of π? read more

+ Is this the beginning of the end of the traditional model of mathematics education? This advert for PhotoMath has gone viral and enjoys an enthusiastic welcome. The mathematical capabilities of PhotoMath, judging by the product website, are still relatively modest. However, if the … Continue reading →

+ Solution at the bottom of this page. Like this problem? Try our Contest Problem Book series! Exercise your mind daily with a problem from the AMC-8, AMC-10, or AMC-12, provided by the Mathematical Association of America's American Mathematics Competitions.

+ How many houses can you get to – It's Halloween. Time to trick or treat. How long will it take you to get to all of these houses?
This is a very open ended task: counting, trick-or-treating strategies, distance and… Read more →

+ 3564 = 2 x 2 x 3 x 3 x 3 x 3 x 11. 3564 is 11220000 in base 3. 3564 is a concentric hendecagonal number (A195043). 3564 is both an abundant number and a Smith number (A098835). 3564 is a number n such that n together with its double and triple contains every digit (A120564). 3564 divides 89^18 - 1. 3564 divides 1^1 + 2^2 + 3^3 + . . . + 3564^3564 (A135189). Source: What's Special About This Number

### October 28, 2014

+ Like almost everything else, engineering jobs took a hit in the recession, but they've been coming back strong. Earlier this year, a CTEq analysis of federal data showed that the unemployment rate for recent engineering graduates was a mere 2.2 percent from 2011 to 2014. Compare that to almost 7.0 percent for recent bachelor's degree holders overall.

+ On Wednesday, October 15, the University of Maryland Baltimore County SIAM Student Chapter hosted Conversations with Don Engel. We were fortunate to have a distinguished guest speaker, Dr. Don Engel, Assistant Vice President for Research at UMBC as well as an affiliate assistant professor of physics, computer science and electrical engineering. Dr. Engel has worked […]

+ Solution at the bottom of this page. Like this problem? Try our Contest Problem Book series! Exercise your mind daily with a problem from the AMC-8, AMC-10, or AMC-12, provided by the Mathematical Association of America's American Mathematics Competitions.

+ 5675 = 5 x 5 x 227. 5675 is the number of monic polynomials of degree 13 with integer coefficients whose complex roots are all in the unit disk (A051894). 5675 is 2777 in base 13. 5675 is an alternating sum of decreasing powers (A083326). Source: What's Special About This Number

+ The mathematicians in residence at the cité des géométries invite us to divide polygons into parts of equal area with a single scissor cut, a fine topic for the classroom or for teachers' continuing education.
- Teaching resources: "pour aller moins loin" / Blue trail, featured

### October 27, 2014

+ No summary available for this post.

+ The Kansas City Royals and San Francisco Giants are engaged in a thrilling competition as the World Series heads into the home stretch this week. Savor the final innings even more by exploring the STEM connections to the game. Tags: science, technology

+ The following was written by SAMSI director Richard Smith and former Deputy Director, Snehalata Huzurbazar. We were very saddened to learn of the death of Kathryn Chaloner on October 19. One of us (Richard) knew Kathryn for over forty years, … Continue reading →

+ [We apologize if you receive multiple copies of this message.] This is a reminder that the paper submission deadline is November 10, 2014. --------------- CALL FOR PAPERS --------------- We invite you to contribute to the special session of the Indiacom 2015 conference. Please find the details below. Session Title: Wireless Sensor Networks for sustainable development. From low layer communications to Software Oriented Architecture. Special Issue at IndiaCom 2015 […]

+ Mathematical equations can help improve athletic performance. Sure, we can become better runners by hydrating well, eating right, cross training, and practice. But getting to an optimal running strategy with equations? That's exactly what a pair of mathematicians from France propose in a paper published this month in the SIAM Journal on Applied Mathematics. "By […]

+ Clicking on any of these images will show them larger in a new window. Use your students' Halloween enthusiasm to do a study on volumes.
We've created an activity that asks students to calculate the volume of candy containers that… Read more →

+ 3387 = 3 x 1129. 3387 is the largest of three consecutive semiprimes (A115393). 3387 is the number of different keys with 7 cuts, depths between 1 and 7 and depth difference at most 1 between adjacent cut depths (A002714). 3387 and 33387 end with the same two digits (A067749). 3387 divides 31^8 - 1. Source: OEIS

+ The Computing Community Consortium (CCC) invites proposals for visioning workshops that will catalyze and enable innovative research at the frontiers of computing. Successful activities will articulate new research visions, galvanize community interest in those visions, mobilize support for those visions from the computing research community, government leaders, and funding agencies, and encourage broader segments of […]

### October 26, 2014

+ We apologize for multiple copies. Please circulate this CFP among your colleagues and students. [DIPDMWC2015] The International Conference on Digital Information Processing, Data Mining, and Wireless Communications http://sdiwc.net/conferences/dipdmwc2015/ The proposed […]

+ I wonder whether the date of birth of Évariste Galois traditionally given in dictionaries and encyclopedias is actually correct… - Regulars' posts

### October 25, 2014

+ I'm going to make a costume pattern to sell. People can buy my plan and make awesome, long, full, scary ghost costumes. This scheme will be so simple. Perhaps I'll get rich! In this activity young students can reason about… Read more →

+ In the autumn of 1943, the maquisards of the Bernard company were based in the Belledonne massif. A little story of Resistance, mathematics and teaching.
- Mathematics elsewhere / Green trail, featured

### October 24, 2014

+ (Tenure-Track) Assistant Professor of Game Theory / Operations Research / Social Choice Theory. The Department of Quantitative Economics at Maastricht University School of Business and Economics (SBE) offers a tenure-track assistant professorship in Game Theory / Operations Research / Social Choice Theory. Job description: The applicant will be appointed as a tenure-track assistant professor and contribute to research and teaching within the overlapping areas of game theory, operations […]

+ Peter Lynch: Does light have weight? Newton thought so. His laws predicted that gravity would bend light, two centuries before Einstein's revolution. Does light have weight? Newton thought so. He supported the corpuscular theory of light, regarding it as comprised of particles with small but finite mass. He concluded […]

+ 5102 = 2 x 2551. 5102 is a semiprime whose digit sum is a perfect cube (A245021). 5102 divides 51^6 - 1. 5102 is 6888 in base 9 (A043487). Source: OEIS

+ Each week, a challenge from the 2014 mathematical calendar... - 2014 Mathematical Calendar challenges / Carousel

### October 23, 2014

+ Daniel Taylor-Rodriguez is a new postdoctoral fellow at SAMSI and is participating in the Ecology program this year. He came from the University of Florida. His wife, Natalia, is still in Gainesville, currently working on her Ph.D. in animal science. … Continue reading →
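Several of the number facts quoted in the entries above, along with the π-approximation remark, are easy to verify mechanically. A short Python sketch (the factor lists and divisibility claims are copied from the posts; the superscripts in the "divides b^e - 1" facts are my reading of the flattened exponents):

```python
import math
from functools import reduce

# Factorizations quoted in the entries above.
facts = {
    2430: [2, 3, 3, 3, 3, 3, 5],
    3564: [2, 2, 3, 3, 3, 3, 11],
    5675: [5, 5, 227],
    3387: [3, 1129],
    5102: [2, 2551],
}
for n, primes in facts.items():
    assert reduce(lambda a, b: a * b, primes) == n

# "n divides b^e - 1" claims, checked with modular exponentiation.
assert pow(91, 27, 2430) == 1
assert pow(89, 18, 3564) == 1
assert pow(31, 8, 3387) == 1
assert pow(51, 6, 5102) == 1

# 22/7 agrees with pi only to two decimal places; they differ in the third.
print(abs(22 / 7 - math.pi))   # ~0.00126
```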
https://www.physicsforums.com/threads/compressible-fluid-mechanics.257999/
# Compressible Fluid Mechanics 1. Sep 21, 2008 ### dcs23 Hi Guys, I know that the compressible Euler Equations are: $$\partial_t (\rho \mathbf u) + (\mathbf u \cdot \nabla)(\rho \mathbf u) + \nabla p = 0$$ $$\partial_t \rho + \nabla \cdot (\rho \mathbf u) = 0$$ Subject to suitable initial conditions and solving for $$\mathbf u, \; \rho$$ unknown. Does anybody have an example of a pair of functions which satisfies these relations in a non-1D case? 2. Sep 21, 2008 ### dcs23 Non trivial solutions would also be nice
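Not from the thread, but one simple non-trivial, non-1D family can be checked by hand: a steady planar shear flow with constant density and pressure,

$$\rho = \rho_0, \qquad p = p_0, \qquad \mathbf u = (f(y),\, 0)$$

for any smooth $$f$$. Both equations are then satisfied term by term:

$$\partial_t (\rho \mathbf u) = 0, \qquad (\mathbf u \cdot \nabla)(\rho \mathbf u) = f(y)\,\partial_x \big(\rho_0 f(y),\, 0\big) = 0, \qquad \nabla p = 0,$$

$$\partial_t \rho + \nabla \cdot (\rho \mathbf u) = \rho_0\, \partial_x f(y) = 0.$$

So any transverse profile f(y) works, and the solution genuinely depends on a second space coordinate.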
http://www.central-mosque.com/fiqh/sinc.htm
# Sin and its Consequences

In the Name of Allah, Most Gracious, Most Merciful.

Brothers and sisters, let us consider sin. Sin is something which is about as old as mankind. Every human being with an undistorted natural disposition and a clear heart should intuitively perceive certain things as absolutely reprehensible and wrong. In this connection, Abu Masud Uqbah ibn Amr al-Ansari has reported that the Messenger of Allah (may Allah bless him and grant him peace) said, "Among those things that people realized from the speech of the first prophethood is, 'When you feel no shame, do as you please.'" {Bukhari.}

Malik ibn Dinar said, "In the past, a young man committed a sin, and then came to a river to perform ghusl. Then he remembered his sin and stopped, ashamed, and turned back. The river called out, 'O disobedient one! Had you come near, I would have drowned you!'"

There are other things which are commanded or prohibited by Allah and His Messenger, but in which the wisdom or exact significance of the commands or prohibitions is not always obvious. However, even in these cases, the rational person, who has recognized the truth of Allah's religion, will see it as imperative to put these into practice, for he knows his Lord is the All-Wise Who knows what is best for mankind.

"There has succeeded them [i.e. the prophets and those who abided by their teachings] a generation which has forfeited its salah and followed lusts, and so they shall meet destruction - except those who repent, and believe and do good deeds, for such shall enter Heaven, and they will not be wronged at all. The Gardens of Heaven, which the Most Gracious has promised those who worship Him unseen. Indeed, His promise shall come to pass. They hear therein no idleness, only peace. And they shall have their sustenance therein, day and night.
That is the Garden which We shall make the inheritance of those among Our servants who were pious." {Qur'an [19:59-63].}

Allah is telling us that the generations of prophets and their pious followers have now been succeeded by new generations. These people indulge in worldly lusts and pleasures; they are satisfied and contented merely with the life of this world, and so they shall meet destruction on the Day of Judgement. Another characteristic of these people is that they have forfeited their salah, and commentators have given various interpretations of this. Imam Tabari has preferred the view that it means that they have abandoned salah totally, and in the same vein, some scholars, such as Imam Ahmad, have considered that one who abandons salah becomes kafir, for the Prophet (may Allah bless him and grant him peace) has also said, "The covenant which [differentiates] between us and them [the disbelievers/hypocrites] is salah, so whoever abandons it has committed kufr." Imam Awzai preferred the view of Ibn Masud (may Allah be pleased with him), saying that this verse refers only to those who miss the correct time, for if they abandoned salah totally, that would be kufr (rather than just any other sin). Masruq (may Allah be pleased with him) said, "None shall take care of the five prayers and then be recorded among the neglectful, and in their neglect lies destruction, and neglecting them is delaying them from their time." Mujahid (may Allah be pleased with him) said that these new generations are amongst this ummah; they will fornicate on the streets like cattle and donkeys, neither fearing Allah from the Heavens, nor feeling ashamed before people on the earth. Kab al-Ahbar said, "By Allah! I surely see the characteristics of the hypocrites in the Book of Allah: drinking coffee, abandoning prayers, playing dice, sleeping without offering the night prayers, neglecting the morning prayers, abandoning the congregations." Then he recited the verse [19:59].
Abul-Ashhab said, "Allah inspired David (peace be upon him) [saying], 'O David! Caution and warn your companions against engaging in lusts, for the minds of those [whose] hearts are attached to the lusts of the world are veiled. And, the mildest [thing] that I do to one of My servants, when he prefers one of his lusts, is that I deprive him of obedience to Me.'" Numan ibn Bashir has reported: the Prophet (may Allah bless him and grant him peace) said, "Verily, every king has his *hima* (area which is off-bounds), and Allah's *hima* is that which He has prohibited."{Bukhari and Muslim.} The Prophet (may Allah bless him and grant him peace) also told us, "Indeed, Allah has made [certain] duties obligatory, so do not give them up; He has set down [certain] limits, so do not transgress them; He has forbidden [certain] things, so do not indulge in them; and He has remained silent about [certain] things, out of mercy and not out of forgetfulness, so do not seek after them." Jabir (may Allah be pleased with him) asked the Prophet (may Allah bless him and grant him peace), "Which hijrah (avoidance) is best?" He replied, "Avoiding that which Allah has prohibited to you." The consequences of sin appear in this world as well as in the Hereafter. Sins are to our own detriment, so if we feel pleasure in some sin, let us consider: how can we prosper if we love something which harms us? Also, remember: the pleasure of sin is fleeting and temporary, while its punishment and evil consequences persist. In the hadith, "Indeed, the good deed has a light in the heart, a beauty on the face, and a vigor in action. And, indeed, the evil deed has a blackness in the heart, a feebleness in action, and a blemish on the face." Aishah (may Allah be pleased with her) wrote in a letter to Muawiyah (may Allah be pleased with him), "When a servant does a deed of disobedience to Allah, those people who used to praise him will censure him."
Wahb (may Allah be pleased with him) said that Allah said to the Children of Israel, "Indeed, when I am obeyed, I am pleased; when I am pleased, I bless, and My blessing has no limit. And, when I am disobeyed, I am angry; when I am angry, I curse, and My curse reaches to the seventh generation." Abdullah ibn al-Sindi said, "Never does a servant disobey Allah except that Allah, the Possessor of Blessings, the Exalted, disgraces him." Muharib ibn Daththar said, "A man commits a sin, and then finds feebleness in his heart on account of it." Umar ibn Abdil-Aziz used to say in his khutbah, "Indeed, the most excellent worship is fulfillment of the obligations (fara'id) and shunning the prohibited things." We are all aware of the seriousness of disobeying Allah, but sometimes we deceive ourselves, thinking, 'This is only a small sin.' Beware! Look at whom you are disobeying, rather than the size of the sin. Al-Fadl said, "The smaller a sin is in your eyes, the more serious it is before Allah, and the more serious it is in your eyes, the less it is before Allah." If you think that you will repent later, then take heed of the advice of Hasan Basri, who said, "O Son of Adam! Abandoning a sin is easier than seeking forgiveness." Some may think that the good deeds which they do will obliterate their sins, and therefore they continue sinning indiscriminately. This attitude is unwise. Someone said to Said ibn al-Musayyib, "I have not seen anyone better in ibadah than the youths in this mosque. They come out in the midday heat, and remain standing in prayer until Asr." Ibn al-Musayyib responded, "We used not to consider this to be ibadah." He said, "Reflecting over Allah's decree, and abstaining from that which Allah, the Mighty, the Majestic, has prohibited." Sahl said, "The righteous as well as the wicked do [good] deeds, but only the sincere one avoids sins."
Besides, even if you do obtain forgiveness for your sin later, you will not be able to get the rewards for the good deeds which you missed out on doing. Also, your very sin may deprive you of the opportunity to do good deeds. Bishr said, "A servant commits a sin, and is deprived [thereby] of performing tahajjud." Also, as time proceeds, you may feel less and less guilt for your sin, especially if Allah holds back on punishing you - you may delude yourself into thinking that Allah has forgiven you already, since He is giving you prosperity and well-being. Beware, brothers and sisters! The evil consequences of your sins will catch up with you if you do not repent! If some misfortune befalls you, try to think of a sin which you have committed which might have brought this upon you. If no misfortune comes, still, do not delay repenting, for it may come in the future. It has been reported that a pious man once lost his memorization of the Qur'an on account of a sin he had committed forty years previously. Moreover, realize that if you continue to sin and distance yourself from obedience to Allah, you run the risk of being deprived of Allah's guidance. "Relate to them the story of he to whom We gave Our signs, but he extricated himself from them, so that Satan caused him to follow him, and he thus became one of the deluded. And, had We willed, We would have elevated him thereby, but he clung to the earth and followed his desires. His likeness is therefore the likeness of the dog - if you provoke it, it lolls out its tongue, and if you avoid it, it [still] lolls out its tongue. That is the likeness of the people who deny Our signs, so relate the account in order that they might reflect. Evil is the likeness of those who deny Our signs and wrong their own selves."
"I shall turn away from My signs those who are haughty on the earth without right, [who], [even] if they see every sign, they will not believe in it, and if they see the way of guidance, will not take it as a way, while if they see the way of destruction, they take it as a way. That is because they denied our signs and were unmindful of them." So, what is the fate in the Hereafter of these people, who are subordinate to their lusts, and who miss their prayers? They shall meet ghayy. Ibn Abbas said it means they shall meet destruction, Qatadah said 'evil'. Ibn Masud said that Ghayy is in fact a deep, foul-tasting valley in Hell. Abu `Iyad said it is a valley of blood and pus in Hell. Therefore, save yourselves, brothers and sisters. Repent from purusing your lusts and missing obligatory prayers, for Allah accepts sincere repentance, and improves the condition of the repenter, cleanses him of his sin and makes him one of the heirs of Heaven. "They shall enter Heaven, and they will not be wronged at all." Of course, Allah does not wrong anybody. "Allah does not wrong mankind at all, but people wrong their own selves." By disobeying Allah, you are wronging yourself, and making yourself deserving of punishment. Therefore, you must repent, for it is reported in the hadith that, "He who repents from sin is like he who has no sin [at all]."{Ibn Majah and Hakim Tirmidhi.} Abu Hurayrah narrated : the Prophet (may Allah bless him and grant him peace) said, "You Lord, the Mighty, the Majestic, said, 'If only My servants obeyed me, I would provide them with rain by night, and cause the sun to shine upon them by day, and I would not make them hear the sound of thunder." So, brothers and sisters, I urge you, as well as myself, to repent withotu delay, and to sumbit to your Lord in total submission. "And hasten toward forgiveness from your Lord, and a Garden whose expanse is [like] the heavens and the earth, and which has been prepared for the pious." 
Article taken (with thanks) from Suheil Laher.
https://datascience.stackexchange.com/questions/60631/tree-based-method-are-robust-against-low-probability-feature-space-zones-when-us
# Tree-based methods are robust against low-probability feature-space zones when using general ML interpretability methods? I have this intuition but I'm not able to verify it. There are a lot of techniques to understand the effect of single features in ML models. Some take inspiration from counterfactual frameworks, like ceteris paribus, and evaluate the unconditional contribution of feature $$X$$ by observing the change in prediction when varying $$X$$'s values while leaving all other variables fixed. The most common of such techniques are PDPs: https://christophm.github.io/interpretable-ml-book/pdp.html. The problem is that this methodology is not robust for impossible combinations of the predictors. For example, in a model to predict bike-sharing counts given weather conditions and period of the year, it's possible to make predictions for a temperature of 40°C in wintertime, even if there's no such data point in the training set. There are various techniques to accommodate for this bias, like accumulated local effects (ALE) plots. I was wondering, though, whether tree-based methods (simple or ensemble) are naturally more robust than regression-based ones to such bias; I expect this, because tree-based predictions vary only among partitions of the feature space that are present in the data, while regressions allow prediction variation for never-observed predictor combinations.
For example, this is the output of a conditional tree trained on the bikes problem:

```
[1] root
|   [2] temp <= 12.2
|   |   [3] season in SPRING, SUMMER
|   |   |   [4] temp <= 4: 1663 (n = 64, err = 30258081)
|   |   |   [5] temp > 4: 2852 (n = 133, err = 216353574)
|   |   [6] season in WINTER
|   |   |   [7] hum <= 82.3: 4315 (n = 90, err = 117371810)
|   |   |   [8] hum > 82.3: 2781 (n = 9, err = 26537744)
|   [9] temp > 12.2
|   |   [10] hum <= 84.8
|   |   |   [11] windspeed <= 13.2: 5877 (n = 256, err = 454812206)
|   |   |   [12] windspeed > 13.2: 5285 (n = 149, err = 326330122)
|   |   [13] hum > 84.8: 3382 (n = 30, err = 47251364)
```

As expected, the temperature and the seasons are correlated, therefore we won't find rules regarding winter for higher (>12.2) temperatures. So I expect that forcing winter with a temperature of 14 won't produce a different prediction than summer. I expect also that this robustness would replicate for more complex black-box models like random forests and boosted trees. Instead, regression-based methods will allow impossible predictions, as shown by the following linear model, where the effect of temperature is unbounded.

```
(Intercept)    temp  seasonWINTER    hum  windspeed  seasonSUMMER  holidayHOLIDAY
     4888.4   152.1        1307.1  -37.6      -64.0         673.2          -621.4
```

Can someone confirm/dispute this, preferably with a theory-based explanation?

• Could you elaborate a little bit on what you mean with bias in this case? Sep 23 '19 at 12:53
• Bias in the unconditional estimation of the effect of a predictor. Check the link in the post regarding the disadvantages of PDP. Sep 23 '19 at 13:04
• Ok clear! So in effect you are asking about the effect of the (in this case partial) independence assumption? More particularly the effect on sparsely populated (sub)spaces? Sep 23 '19 at 15:26
• Yep, I believe that tree methods protect you from producing unlikely predictions for impossible combinations of the predictors.
The predictions for these areas of the feature space would just "percolate" from plausible neighbor areas (i.e. possible combinations of the features). Sep 24 '19 at 12:30

I would say trees are "differently" robust in this sense. A tree model will never predict a target value outside the range of those in the training set; so never a negative value for a count, or more infections than the population, etc. (Some tree-based models might, e.g. gradient boosting, but not a single tree or a random forest.) But sometimes that's detrimental, too. In your bikes example, maybe city population is another variable; your model will quickly become useless as the city grows, while a linear model may cope with the concept drift better. Finally, again in your bike example: because the tree has no reason to make rules about winter when temp > 12, as @SvanBalen says, it will essentially be making up an answer if you ask it about a hot winter. In your tree's case, hot winters are treated as summers; another tree might split first on season, never considering temperature in the winter branch, so that this alternative tree will treat hot winters as winters. It seems better to try to track the independent variables' concept drift and interdependencies, to recognize when the model hasn't seen enough useful training data to make accurate predictions.

Well, are you content with a system that doesn't work on that one day in winter when the temperature actually reaches 40 degrees? And would you care, since you would probably have other things to worry about?

An assumption of independence is usually made to deal with sparse situations. Naive Bayes, for instance, works pretty well for document classification tasks: each word (token) in a document is taken as an individual observation governed by a probability distribution belonging to the document class.

Tree-based classifiers, on the other hand, generate composite rules and are thus geared toward exploiting conditional probabilities.
E.g.: if it is 12.2 degrees or below, and it is winter, then the humidity is the discerning factor in bike use. Note that even though your rule sample is small, it is still quite complex.

Suppose we eyeball it and make a naive rule: if humidity < 83, roughly add around 55%. That would emulate your rules quite well, and make them less complex, unless it happens to be a cold spring or summer day. Is that really a rule? Would that cause faulty predictions? Or did we just see little or no cold days that varied in humidity in that sample (200 data points)? I wouldn't start betting my bottom dollar that people react differently to humidity given the date on the calendar.

You pose that the tree rules protect against probability mass leaking to impossible situations. Conversely, you could pose that naive probability would help you make predictions about situations that were sparse in the dataset. Whether you'd want that is up to your reasonable judgement. It is machine learning, not machine wisdom.

• My task is explanation and understanding, not prediction. But regardless of this specific goal, in general I don't like models leaking functional forms into unknown regions, because things could go terribly wrong (e.g. negative predictions from linear models for bounded outcomes). So the question is whether tree methods are robust to these situations, and therefore whether I can trust predictors' explanations based on PDP, ALE, ICE, etc. Sep 25 '19 at 16:49
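The question's intuition can be checked mechanically by transcribing the printed conditional tree into a plain function (leaf values copied from the question's output; this is only an illustration of the fitted rules, not the model object itself). Once temp > 12.2, season is simply never consulted, so a forced "hot winter" inherits the summer leaf:

```python
# The conditional tree printed in the question, transcribed by hand.
# Leaf values are the predicted bike counts from the question's output.
def tree_predict(temp, season, hum, windspeed):
    if temp <= 12.2:
        if season in ("SPRING", "SUMMER"):
            return 1663 if temp <= 4 else 2852
        else:  # the remaining seasons, per rules [6]-[8]
            return 4315 if hum <= 82.3 else 2781
    else:  # temp > 12.2: season does not appear anywhere in this branch
        if hum <= 84.8:
            return 5877 if windspeed <= 13.2 else 5285
        return 3382

# An "impossible" 14-degree winter day gets exactly the summer prediction:
print(tree_predict(14, "WINTER", hum=50, windspeed=10))  # 5877
print(tree_predict(14, "SUMMER", hum=50, windspeed=10))  # 5877
```

A linear model, by contrast, would shift this prediction by its seasonWINTER coefficient even though cold winters are the only winters the data ever contained.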
https://codeforces.cc/blog/entry/93069
Silver_Fox's blog

By Silver_Fox, history, 12 days ago,

Hello, Codeforces! I'm new on AtCoder and was interested in what rating I would normally get there after participating in Codeforces rounds, and how the ratings on these two sites correlate. Of course, there is a correlation ;) Link to some plots and dependencies: Rating correlation plots preview: And now you can get a prediction for the site you are new to. Here is the CodeForces <-> AtCoder rating converter: Rating converter preview: You can send any suggestions to this e-mail: [email protected]

• +269

» 12 days ago, # | +6 Thanks to misha_sh for help with this!

» 12 days ago, # | +15 I took AtCoder accounts which have a link to the corresponding CodeForces handles. Then I selected only those who have at least 7 rated contests on both. It may take much longer than 7 contests for an AtCoder rating to converge. My AtCoder account https://atcoder.jp/users/ssvb/history currently has participated in 10 ranked contests and has a provisional rating of 654. The average of the numbers from the "performance" column is 795 (~22% higher than the provisional rating). And I have no idea how many additional contests are necessary for my AtCoder rating to stop being considered provisional.

• » » 11 days ago, # ^ | 0 Yes, thanks, it really changes the result a bit. Something like +60 to the result when converting CF to AC rating, so the difference is not so big (updated in converter).

• » » » 11 days ago, # ^ | +8 According to this it will take at least 10 rounds for it to converge; for me personally it took 12 rounds for the "rating not converged yet" warning to disappear.

• » » 11 days ago, # ^ | +52 Several aspects of the AtCoder rating system are briefly described in "rating.pdf" in the dropbox linked as "AtCoder's Rating System" on their front page.
The most important thing to note is that the displayed user ratings are less than the internal weighted performance average used for calculation purposes, according to the following formulas:

$\text{displayed rating} = \text{performance average} - f(\text{number of rated contests})$

$f(n) = \frac{F(n) - F(\infty)}{F(1) - F(\infty)} \times 1200$

$F(n) = \frac{\sqrt{\sum_{i=1}^n 0.81^i}}{\sum_{i=1}^n 0.9^i}$

Note that $f(10) \approx 157$ and even $f(30) \approx 15$, so this really ought to be accounted for explicitly unless you limit consideration to only accounts with quite a lot of rated rounds. Codeforces started using a similar system last year, but it has $f(n) = 0$ for all $n \geq 6$, so no such correction should be necessary. AtCoder also applies some monotonic transformation to ensure that all displayed ratings are positive, but I don't have any knowledge of what transformation that is, exactly.

» 12 days ago, # | +14 Thanks, great project! Tho I am interested to know the formula / maths behind the conversion.
• » » 11 days ago, # ^ | 0 I think the author used something like "Linear Regression" to find out the correlation.
• » » » 11 days ago, # ^ | 0 Something like that, since it's the easiest and the most workable solution.
» 11 days ago, # | -8
» 11 days ago, # | +15 This mapped my max CF and AC rating pretty closely. Nicee
» 11 days ago, # | +8 There is an interesting blob at 2100 Codeforces — 2000 AtCoder. I hope I will get there someday.
» 11 days ago, # | +18 I instantly recognize jh05013.
» 11 days ago, # | +5 Got such a big difference: ATC -> CF: 1537 -> 1928 (lol) CF -> ATC: 1603 -> 1108
» 11 days ago, # | 0 Pretty correct ig. But tbh it is easier to get rating from ABC/ARC and sole div.2 from the sites, making it harder to determine.
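The size of this correction term can be checked numerically. A short sketch, assuming the formulas quoted in the comment above are transcribed correctly (`F_INF` uses the closed-form limits of the two geometric series):

```python
import math

def F(n):
    # F(n) = sqrt(sum of 0.81^i) / (sum of 0.9^i), for i = 1..n
    return (math.sqrt(sum(0.81 ** i for i in range(1, n + 1)))
            / sum(0.9 ** i for i in range(1, n + 1)))

# Limit as n -> infinity: both sums are geometric series,
# so sum 0.81^i -> 0.81/(1-0.81) and sum 0.9^i -> 0.9/(1-0.9).
F_INF = math.sqrt(0.81 / (1 - 0.81)) / (0.9 / (1 - 0.9))

def f(n):
    return (F(n) - F_INF) / (F(1) - F_INF) * 1200

print(round(f(10)), round(f(30)))  # 157 15, matching the values quoted above
```

So an account with only ~10 rated AtCoder rounds displays roughly 157 points below its internal performance average, which is why restricting to accounts with many rounds (or adding this correction back) matters for the converter.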
https://codereview.stackexchange.com/questions/75028/checking-for-balanced-parentheses
# Checking for balanced parentheses

I wrote a method to check if each parenthesis in a string is closed. It should take into account the order in which a parenthesis is presented, e.g. "hello)goodbye(" should return false even though there are an equal number of opening and closing parentheses. My method is as follows:

```scala
def balance(chars: List[Char]): Boolean = {
  def findNextParen(chars: List[Char]): List[Char] = {
    return chars.dropWhile(char => char != ')' && char != '(')
  }
  def findClosingParen(chars: List[Char]): List[Char] = {
    val nextParen = findNextParen(chars)
    if (nextParen.isEmpty) {
      nextParen
    } else if (nextParen.head == '(') {
      findClosingParen(findClosingParen(nextParen.tail).tail)
    } else {
      nextParen
    }
  }
  val nextParen = findNextParen(chars)
  if (nextParen.isEmpty) {
    true
  } else if (nextParen.head == ')') {
    false
  } else if (findClosingParen(nextParen.tail).isEmpty) {
    false
  } else balance(findClosingParen(nextParen.tail).tail)
}
```

Is there a way to make this more Scala-y? I pretty much feel like I wrote Java code and compiled it with a Scala compiler.

There is a lot of awkward near-repetition between the findClosingParen() helper and the balance() function itself. Arguably, the findClosingParen() function is misleadingly named, as it does more than that. Another problem is that findClosingParen() is not tail-call recursive. Here is a simpler approach that avoids both of those problems:

```scala
def balance(chars: List[Char], level: Int = 0): Boolean = {
  if (level < 0) return false
  val nextParen = chars.dropWhile(char => char != ')' && char != '(')
  if (nextParen.isEmpty) {
    level == 0
  } else if (nextParen.head == '(') {
    balance(nextParen.tail, level + 1)
  } else {
    balance(nextParen.tail, level - 1)
  }
}
```

• The balance function should not expose the level. That is a purely internal implementation feature, so you at least need an internal function to handle the recursion. Exposing that implementation detail is not good.
– itsbruce Dec 28 '14 at 13:08
• Ahh, that is way more elegant than my solution. Thanks. – ThomYorkkke Dec 28 '14 at 15:20
• The code is a little more terse but actually makes things worse (in terms of both elegance and idiomatic Scala) by exposing an internal state variable as an externally visible parameter. The if...else if...else if... chain should go as well. – itsbruce Dec 31 '14 at 2:06

Possibly the most egregiously non-Scala thing in your code is the if...else if...else if...else chain. Those are a bad smell in any language, particularly OO ones or functional languages with pattern matching. Scala is both. That really has to go. Folding is the most efficient and functional way to solve this, but that's a more advanced subject for somebody who is new to Scala and FP. Here is a simple recursive solution which illustrates the clarity of pattern matching:

```scala
def balance(chars: List[Char]): Boolean = {
  def go(cs: List[Char], level: Int): Boolean = cs match {
    case Nil => level == 0
    case ')' :: _ if level < 1 => false
    case ')' :: xs => go(xs, level - 1)
    case '(' :: xs => go(xs, level + 1)
    case _ :: xs => go(xs, level)
  }
  go(chars, 0)
}
```

This should show how much pattern matching simplifies a recursive procedure. Each match defines one of the possible states and the appropriate action. Because I used pattern matching in this way:

1. It is clear that I have comprehensively covered the possible range of states (OK, I haven't dealt with Null, but this is Scala - don't use Null).
2. The corresponding actions are simple, and the small differences between each one are easy to see.

Compare this with the complexity, lack of clarity and fragility of your if...else if chain. It is hard to compare your different conditions, hard to see if you have been comprehensive, and nothing about an if chain even compels you to be testing related conditions - you can have anything in each condition. This is a naive example but it is a good place to start. That said, Ben's is the best answer.
There is a very small amount of code duplication in this version. The final three pattern matches do the same thing with only a minor change to the input parameters. In such a small, easy-to-read piece of code this is really not a sin (and addressing it would make the function structure more complex and less clear). Replacing the recursion with a fold, however, would remove the duplication, because the fold would take care of the repeated application of the function, which could be reduced to a simple closure adjusting the level.

• It is fully tail-recursion optimisable, unlike your code (again, the pattern matching makes this easy to see, where the if chain does not)
• Adding a @tailrec annotation is good practice in such Scala code
• It uses an inner function for recursion rather than exposing internal state (unlike your accepted answer).

• Sorry for the late response, but that was comprehensive and really helpful. Thanks a lot. – ThomYorkkke Jan 20 '15 at 3:01
• No problem. Can add a fold example if you have not figured that out for yourself yet. – itsbruce Jan 20 '15 at 11:45

There are two levels to the problem: Computer Science and Idiomatic Programming.

## Computer Science

The simplest structure for matching parentheses is a pushdown automaton. Pushdown automata consist of three parts:

1. An input string.
2. A stack.
3. A dispatch table.

The pseudo-code for parentheses matching:

```
// Automaton accepts on empty string.
// Throws an error if automaton reaches a dead state.
// Otherwise returns "Success: Parentheses match".
```
```
List stack = {}
Char[] input_string
Int index = 0
Int stop = length(input_string)

Function dispatch(char)
    case char = '('
        push_stack(char)
        dispatch(next_char())
    case char = ')'
        pop_stack(char)
        dispatch(next_char())
    else
        dispatch(next_char())

Function push_stack(char) =
    stack = append(char, stack)

Function pop_stack(char) =
    if stack = {} then error("unbalanced ')' in input")
    else stack = rest(stack)

Function next_char()
    char = input_string[index]
    index = index + 1
    if index = stop then finalize()

Function finalize()
    if stack = {} then "Success: Parentheses match"
    else error("unbalanced '(' in input string")

Main
    dispatch(input_string[index])
```

## Semi-Idiomatic Scala

The first thing to accept is that regardless of how idiomatic the code, the underlying computer science must be expressed. Due to its design as a transition path from Java's traditional imperative style, Scala allows for programming that falls in between purely functional and entirely imperative. A simplifying abstraction (used in the question's code) is to reduce the stack data structure to a single integer; then pushStack and popStack become:

```scala
stack += 1
stack += -1
```

respectively, if we allow mutation. For example, non-idiomatic Scala might use mutable state to express the stack:

```scala
def nonIdiomaticScala(chars: List[Char]): Boolean = {
  var stack = 0 // the whole stack, reduced to a counter
  def dispatcher(c: Char): Unit = c match {
    case '(' => stack += 1
    case ')' => stack += -1
    case _   => ()
  }
  def loop(list: List[Char]): Boolean = {
    if (list.isEmpty) stack == 0
    else {
      dispatcher(list.head)
      if (stack < 0) false else loop(list.tail)
    }
  }
  loop(chars)
}
```

## Idiomatic Scala

The big idioms:

1. Avoiding explicit mutation.
2. Pattern matching.
3. Folding rather than looping over the string.
4. Lambdas rather than named functions.
Since this problem is homework for Martin Odersky's Functional Programming in Scala course, both on Coursera and at École Polytechnique Fédérale de Lausanne, implementing the last two idioms should remain an exercise.

• why "last two" only? need we not implement the first two? – thetrystero Oct 2 '15 at 2:14
http://math.ipm.ir/frontiers/printevent.jsp?eventID=199
MINI COURSE

TITLE: Model Theory, Motivic Integration, and Zeta Function
SPEAKER: Jamshid Derakhshan, University of Oxford
TIME: Saturday, July 15, 2017, 11:00 - 12:30; 14:00 - 15:30; 16:00 - 17:30
VENUE: Lecture Hall 1, Niavaran Bldg.

SUMMARY: I will first give an introduction to the model theory of fields with valuations (for example the $p$-adic numbers), and to the theory of $p$-adic and motivic integration. I will then consider the case of number fields, and present results on Dirichlet series and zeta functions which are Euler products of local integrals, and give applications to some questions on algebraic groups, and on rational points on algebraic varieties.
http://drorbn.net/index.php?title=12-267&diff=prev&oldid=12093
Difference between revisions of "12-267" Advanced Ordinary Differential Equations Department of Mathematics, University of Toronto, Fall 2012 Agenda: If calculus is about change, differential equations are the equations governing change. We'll learn much about these, and nothing's more important! Instructor: Dror Bar-Natan, [email protected], Bahen 6178, 416-946-5438. Office hours: by appointment. Classes: Mondays, Tuesdays, and Fridays 9-10 in RW 229. Teaching Assistant: Jordan Bell, [email protected]. Tutorials: Tuesdays 10-11 at RW 229. No tutorials on the first week of classes. Text Boyce and DiPrima, Elementary Differential Equations and Boundary Value Problems (current edition is 9th and 10th will be coming out shortly. Hopefully any late enough edition will do). Further Resources • Also previously taught by T. Bloom, C. Pugh, D. Remenik. Dror's notes above / Student's notes below Drorbn 06:36, 12 September 2012 (EDT): Material by Syjytg moved to 12-267/Tuesday September 11 Notes. Summary of techniques to solve differential equations Syjytg 21:20, 2 October 2012 (EDT)
2020-07-09 01:55:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6480958461761475, "perplexity": 12315.894769438617}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897844.44/warc/CC-MAIN-20200709002952-20200709032952-00054.warc.gz"}
https://www.physicsforums.com/threads/computation-of-this-integral.677740/
# Computation of this integral

1. Mar 11, 2013

### omer21

I am looking for software that can compute the following integral: $\int_0^1 f(x)\,\phi(2^j x - k)\,dx.$ Here $\phi(x)$ is the scaling function of a wavelet family (especially Daubechies), and j and k are the scaling and translation parameters, respectively.

2. Mar 29, 2013

### csopi

Basically any software that can do numerics for you, e.g. Wolfram Mathematica, Matlab, or you can write C code as well (using GSL).

3. Apr 1, 2013

### Bill Simpson

If you can use this http://reference.wolfram.com/mathematica/guide/Wavelets.html to translate what you are interested in into one or two simple concrete examples then we can try it and see if the results will be in a form you can use.
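Neither Mathematica nor GSL is strictly needed for a first pass: the Daubechies scaling function $\phi$ has no closed form, but the cascade algorithm tabulates it from its refinement equation, after which the integral can be approximated by ordinary quadrature. Below is a minimal pure-Python sketch for db2 (filter length 4); the helper names and grid parameters are my own choices, not anything from the thread:

```python
import math

# Daubechies-2 (db2) low-pass filter, normalized so that sum(h) = sqrt(2).
S3 = math.sqrt(3.0)
H = [(1 + S3) / (4 * math.sqrt(2.0)), (3 + S3) / (4 * math.sqrt(2.0)),
     (3 - S3) / (4 * math.sqrt(2.0)), (1 - S3) / (4 * math.sqrt(2.0))]

def _interp(xq, dx, vals):
    """Linear interpolation in a table sampled at 0, dx, 2*dx, ...; zero outside."""
    pos = xq / dx
    i = math.floor(pos)
    if i < 0 or i >= len(vals) - 1:
        return 0.0
    frac = pos - i
    return (1.0 - frac) * vals[i] + frac * vals[i + 1]

def cascade(h=H, n_iter=25, pts_per_unit=256):
    """Tabulate the scaling function phi on its support [0, len(h)-1] by
    iterating the refinement equation phi(x) = sqrt(2)*sum_k h[k]*phi(2x - k),
    starting from the box function on [0, 1)."""
    support = len(h) - 1
    dx = 1.0 / pts_per_unit
    xs = [i * dx for i in range(support * pts_per_unit + 1)]
    phi = [1.0 if x < 1.0 else 0.0 for x in xs]
    for _ in range(n_iter):
        phi = [math.sqrt(2.0) * sum(hk * _interp(2.0 * x - k, dx, phi)
                                    for k, hk in enumerate(h)) for x in xs]
    return phi, dx

def coefficient(f, j, k, phi, dx, n=2048):
    """Trapezoidal-rule approximation of the integral of f(t)*phi(2^j t - k)
    over [0, 1], using the tabulated scaling function."""
    total = 0.0
    for i in range(n + 1):
        t = i / n
        w = 0.5 if i in (0, n) else 1.0
        total += w * f(t) * _interp(2.0 ** j * t - k, dx, phi)
    return total / n
```

As sanity checks, the tabulated db2 function satisfies the partition of unity $\sum_k \phi(x-k)=1$ (so the coefficients of $f\equiv 1$ over the translates covering $[0,1]$ sum to 1) and takes the known value $\phi(1)=(1+\sqrt{3})/2\approx 1.366$. In practice, PyWavelets (`pywt.Wavelet('db2').wavefun(level=...)`) returns essentially the same tabulation ready-made.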
2016-05-26 12:39:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6978126168251038, "perplexity": 987.0222151426731}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275836.20/warc/CC-MAIN-20160524002115-00059-ip-10-185-217-139.ec2.internal.warc.gz"}
https://en.wikipedia.org/wiki/Circular_segment
# Circular segment

In geometry, a circular segment (symbol: ⌓) is a region of a circle which is "cut off" from the rest of the circle by a secant or a chord. More formally, a circular segment is a region of two-dimensional space that is bounded by an arc (of less than π radians, by convention) of a circle and by the chord connecting the endpoints of the arc.

## Formulae

A circular segment (in green) is enclosed between a secant/chord (the dashed line) and the arc whose endpoints equal the chord's (the arc shown above the green area).

Let R be the radius of the arc which forms part of the perimeter of the segment, θ the central angle subtending the arc in radians, c the chord length, s the arc length, h the sagitta (height) of the segment, and a the area of the segment.

Usually, chord length and height are given or measured, and sometimes the arc length as part of the perimeter; the unknowns are the area and sometimes the arc length. These can't be calculated simply from chord length and height, so two intermediate quantities, the radius and the central angle, are usually calculated first.
### Radius and central angle

The radius is

${\displaystyle R={\tfrac {h}{2}}+{\tfrac {c^{2}}{8h}}}$[1]

The central angle is

${\displaystyle \theta =2\arcsin {\tfrac {c}{2R}}}$

### Chord length and height

The chord length and height can be back-computed from the radius and central angle by:

The chord length is

${\displaystyle c=2R\sin {\tfrac {\theta }{2}}=R{\sqrt {2(1-\cos \theta )}}}$

The sagitta is

${\displaystyle h=R(1-\cos {\tfrac {\theta }{2}})=R\left(1-{\sqrt {\tfrac {1+\cos \theta }{2}}}\right)}$

### Arc length and area

The arc length, from the familiar geometry of a circle, is

${\displaystyle s={\theta }R}$

The area a of the circular segment is equal to the area of the circular sector minus the area of the triangular portion (using the double angle formula to get an equation in terms of θ):

${\displaystyle a={\tfrac {R^{2}}{2}}\left(\theta -\sin \theta \right)}$

In terms of R and h,

${\displaystyle a=R^{2}\arccos \left(1-{\frac {h}{R}}\right)-\left(R-h\right){\sqrt {R^{2}-\left(R-h\right)^{2}}}}$

Unfortunately, ${\displaystyle a}$ is a transcendental function of ${\displaystyle c}$ and ${\displaystyle h}$, so no algebraic formula in terms of these can be stated. But what can be stated is that as the central angle gets smaller (or, alternately, the radius gets larger), the area a rapidly and asymptotically approaches ${\displaystyle {\tfrac {2}{3}}c\cdot h}$. If ${\displaystyle \theta \ll 1}$, then ${\displaystyle a={\tfrac {2}{3}}c\cdot h}$ is a very good approximation.

As the central angle approaches π, the area of the segment converges to the area of a semicircle, ${\displaystyle {\tfrac {\pi R^{2}}{2}}}$, so a good approximation is a delta offset from the latter area:

${\displaystyle a\approx {\tfrac {\pi R^{2}}{2}}-(R+{\tfrac {c}{2}})(R-h)}$ for h > 0.75R

### Etc.
The perimeter p is the arc length plus the chord length:

${\displaystyle p=c+s=c+\theta R}$

As a proportion of the whole area of the disc, ${\displaystyle A=\pi R^{2}}$, you have

${\displaystyle {\frac {a}{A}}={\frac {\theta -\sin \theta }{2\pi }}}$

## Applications

The area formula can be used in calculating the volume of a partially filled cylindrical tank lying horizontally.

In the design of windows or doors with rounded tops, c and h may be the only known values and can be used to calculate R for the draftsman's compass setting.

One can reconstruct the full dimensions of a complete circular object from fragments by measuring the arc length and the chord length of the fragment.

To check hole positions on a circular pattern. Especially useful for quality checking on machined products.

For calculating the area or centroid of a planar shape that contains circular segments.

1. ^ The fundamental relationship between R, c, and h, derivable directly from the Pythagorean theorem applied to the right triangle with sides R, c/2, and R − h, is: ${\displaystyle R^{2}=({\tfrac {c}{2}})^{2}+(R-h)^{2}}$ which may be solved for R, c, or h as required.
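The usual measurement-to-unknowns chain c, h → R → θ → s, a described above can be sketched in a few lines (a minimal illustration of the formulas; the function name is my own):

```python
import math

def segment_from_chord_height(c, h):
    """Given chord length c and sagitta h (with h <= R, i.e. a segment no
    larger than a semicircle), recover the radius, central angle, arc
    length, and area of the circular segment."""
    R = h / 2 + c * c / (8 * h)                  # from R^2 = (c/2)^2 + (R - h)^2
    theta = 2 * math.asin(c / (2 * R))           # central angle in radians
    s = theta * R                                # arc length
    a = (R * R / 2) * (theta - math.sin(theta))  # sector minus triangle
    return R, theta, s, a
```

For a semicircle (c = 2, h = 1) this returns R = 1, θ = π, s = π, and a = π/2, matching the formulas directly.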
2021-12-04 09:20:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 18, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8397999405860901, "perplexity": 576.6149476777676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362952.24/warc/CC-MAIN-20211204063651-20211204093651-00603.warc.gz"}
https://search.datacite.org/works/10.5162/IMCS2012/6.4.2
### 6.4.2 A Particle Sampler for Trace Detection of Explosives Sebastian Beer, Gerhard Müller & Jürgen Wöllenstein We present our developments toward a handheld field-ready trace explosives detection system using an electrostatic particle sampler with an integrated thermal desorber. Particle sampling with subsequent thermal desorption is used to overcome the problem of low vapor pressure of explosives. A degree of selectivity toward high electron affinity, characteristic for most explosives, is demonstrated experimentally. This reduces the detection background and improves the system performance. Detection is shown in applications of the sampler to both...
2018-11-12 19:57:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8282745480537415, "perplexity": 4777.882891592865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741087.23/warc/CC-MAIN-20181112193627-20181112215627-00107.warc.gz"}
https://rpg.stackexchange.com/questions/70003/how-does-the-draconic-breath-feat-interact-with-the-animate-breath-spell
# How does the Draconic Breath feat interact with the Animate Breath spell? The Draconic Breath [Draconic] feat (Complete Arcane) allows a Sorcerer to spend a spell slot to use a breath weapon as a standard action. This is a supernatural ability and the damage and save DC is calculated based on the level of the spell slot spent. The 7th-level Transmutation spell animate breath (Draconomicon, updated for Spell Compendium) creates a construct/elemental out of the user's breath weapon. Neither version explains directly whether or not the Draconic Breath feat counts as a valid breath weapon for this purpose, though the Draconomicon states: Casting this spell is a standard action, which includes using your breath weapon.... it immediately takes animate form and attacks. It does not form as a cone or a line, and does not deal damage when it is used to cast this spell, The Spell Compendium version says For this spell to function, you must have a breath weapon, either as a supernatural ability or as the result of casting a spell such as dragon breath Furthermore, both versions of animate breath define the range as "Personal" and the target as "You/Your breath weapon," without defining where the animated breath appears. This leaves me with a couple of questions: • Can Draconic Breath be used with animate breath to create a construct or elemental? • Do you have to use an additional spell slot to activate Draconic Breath in addition to the one for Animate Breath, or do you use one (7th-level) spell slot for both? If two spell slots must be used, is there a requirement or limitation on the spell slot used for Draconic Breath for Animated Breath to function? • If using the Spell Compendium version of Animate Breath (which doesn't base the construct's stats on the strength of the breath weapon), will using a low-level spell slot for Draconic Breath produce the same result as using a high-level spell slot? 
• If you have the Dragonheart Mage prestige class from Races of the Dragon, does its increased Draconic Breath damage dice affect the animated breath's statistics? • Can you spawn the animated breath anywhere in the area the regular breath weapon would have affected, or must it materialize adjacent to you, or somewhere else? I'm sorry if the length of my post is off-putting, but I want complete information for the sake of my 3.5 Edition character build. • Good first question. Welcome to the site. I have tried to make your question a bit more readable and remove some unnecessary details. If you don't like it, feel free to roll back using the revision history (You can access this by clicking on "edited X ago" below your post). – MrLemon Oct 17 '15 at 11:47 It's usually best to use the most recent printing of anything, and that means using the Spell Compendium version the 7th-level Sor/Wiz spell animate breath [trans] (SpC 11): For this spell to function, you must have a breath weapon, either as a supernatural ability or as the result of casting a spell such as dragon breath (page 73). When you successfully cast this spell, you imbue the energy of your breath weapon with coherence, mobility, and a semblance of life [for 1 round/level]. Emphasis mine, and I'll address that shortly. One should also use the Races of the Dragon version of the feat Draconic Breath (102): As a standard action, you can convert an arcane spell slot into a breath weapon. The breath weapon is a 30-foot cone (cold or fire) or a 60-foot line (acid or electricity) that deals 2d6 points of damage per level of the spell slot you expended to create the effect. Any creature in the area can make a Reflex save (DC 10 + level of the spell used + your Cha modifier) for half damage. This is a supernatural ability. Emphasis mine, and there are two ways of reading that emphasized text. 1. 
The creature takes a standard action to convert an arcane spell slot into a breath weapon and uses that breath weapon as part of the same standard action used to convert the slot. This reading means the breath weapon the feat grants can't be targeted by the spell animate breath. The spell slot is converted into a breath weapon that's instantly used, and, prior to that conversion (if the creature has no other breath weapon) the creature has no breath weapon to target with the spell animate breath. I assume this is the feat's typical reading. Hence a caster could no more cast the spell animate breath on this reading of the breath weapon granted by the feat Draconic Breath than he could cast animate breath on the breath weapon generated by the 4th-level Sor/Wiz spell firestride exhalation [conj/evoc] (Dragon Magic 67). 2. The creature takes a standard action to convert an arcane spell slot into a breath weapon, and, afterward, the creature can take a standard action to employ the breath weapon. This reading is extremely legalistic, and it's unlikely this was feat's intent, but this reading does allow the spell animate breath to be cast on the feat Draconic Breath's breath weapon (even if the breath weapon is made available through expending a 0-level spell). This reading also removes the utility value (such as it is) of being able to spend spell slots to make breath weapons that hurt foes right now. (The creature is essentially forced to take two rounds to use its breath weapon. And, no, that's not a delay expressed in rounds.) Under such a reading the DM must determine if the creature possessing the feat can either take only one standard action to convert one spell slot into one breath weapon or take several standard actions to convert several spell slots into several distinct breath weapons. Essentially, the DM must determine if the feat is a switch or a dial. (This DM recommends a dial.) 
## The spell animate breath fails if the caster lacks a breath weapon Although the spell's duration is 1 round/level, the spell animate breath nonetheless requires the caster to cast it on the caster's breath weapon. Otherwise, the spell fails. The typical reading of the Draconic Breath feat has the feat supplying a momentary breath weapon that's used in the same round upon exchanging the spell slot for the breath weapon, leaving no time for the spell animate breath to be cast. A spell like the 4th-level Sor/Wiz spell dragon breath [evoc] (SpC 73), with its 1 round/level duration, is a valid target of the spell animate breath, the spell dragon breath even stating that, after the spell's cast, the caster must take a standard action to use the breath weapon and then must wait 1d4 rounds before using the breath weapon again. Until the duration of the spell dragon breath expires, the caster has a for-reals honest-to-Pelor breath weapon, not a fleeting one like that afforded by the typical reading of the feat Draconic Breath. • Question: Does the prestige class dragonheart mage (Races of the Dragon 88-91) change the statistics of the variant Huge fire elemental created by the spell animate breath? Answer: No. The spell animate breath determines the variant Huge fire elemental's statistics, not the breath weapon employed to create it except insofar as dictated by the spell. Note: The prestige class dragonheart mage is, frankly, less than stellar, so allowing minor changes to the variant Huge fire elemental as a house rule shouldn't upset game balance significantly. • Question: Can the variant Huge fire elemental created by the spell animate breath be created anywhere within what would otherwise be the breath weapon's typical area or must the creature be created somewhere else? 
Answer: Absent a range, the most conservative house rule for determining where the spell's effect occurs is adjacent to the caster's space in an area sufficient to accommodate the newly created creature. Note: Using such a house rule, a caster that lacks sufficient space adjacent to him will see the spell's effect fail. The created creature won't, for example, break down walls purely as a result of its creation. Note: A Huge elemental can be summoned using the 7th-level Sor/Wiz spell summon monster VII [conj] (PH 287). While the spell animate breath has only somatic components and a casting time of 1 standard action, summon monster spells—because of their versatility and splatbook support—are usually far better choices for a sorcerer. • The first three questions are about the Draconic Breath feat, not the dragon breath spell. – KRyan Oct 17 '15 at 13:55 • @KRyan All fixed. – Hey I Can Chan Oct 17 '15 at 16:21 • @HeyICanChan Thank you for the detailed response, and I'll look for a different 7th-level spell to round off my list. If I had used it, though, other advantages of the animate breath spell over summon monster VII (besides the casting time and components) would have been that the Animate Breath construct couldn't be warded off by protection from evil or magic circle against evil effects, defeated by dismissal or banishment, or altered by spells like distort summons (Book of Vile Darkness), and they wouldn't have the elemental vulnerabilities of actual elementals. – FlameTroll Oct 18 '15 at 0:04 • @FlameTroll You're welcome. Would that we all at level 14 face opponents who think that casting dismissal or banishment is a good use of their actions! :-) Point taken about protection from alignment effects, though. – Hey I Can Chan Oct 18 '15 at 0:45 Draconic Breath gives a sorcerer a breath weapon. 
They have an unusual cost for using it (a spell slot, where most creatures with a breath weapon can just use it, maybe having to wait a little while before using it again), but it’s still a breath weapon that they have. As such, it is absolutely a valid choice for animate breath. This does mean burning two spell slots, one to power the breath weapon and one to cast animate breath, but that doesn’t change anything. The fact that Spell Compendium even explicitly notes that the dragon breath spell’s breath weapon is valid for animate breath is just extra confirmation: the dragon breath spell is quite similar to the Draconic Breath feat, in that it is “spend spell slot → use breath weapon.” If anything, the breath weapon from dragon breath is even more ephemeral, since that breath weapon literally only exists for the moment you’re using the spell, rather than being a thing you always have (but have to power with spell slots). Since Spell Compendium is the latest version of animate breath,1 its version is the “correct” one,2 and since it doesn’t care anything at all about the stats of your breath weapon, there’s no need to use a higher-level spell slot to produce the breath weapon. Technically, even a breath weapon produced by a 0th-level spell slot (which would deal 0d6 damage) would qualify for animate breath, but I imagine just about every DM will require at least a 1st-level slot. Correspondingly, however, dragonheart mage does nothing to improve matters. If you were using the Draconomicon version, then the dragonheart mage improvements would improve animate breath’s construct. As for where the effect of animate breath appears, that is entirely unclear. Even calling and summoning effects (which this is not) lack a generic, default location for things to appear in; summon monster, planar binding, et al., those things all have Ranges that spell it out. The range on animate breath is Personal, because it’s affecting your breath weapon... 
but that leaves out where the creature version appears. Talk with your DM. The two options you lay out seem like the most sensible ones to me, but I don’t really know that one is a better choice than the other. Since breath weapons and this spell are quite weak, I’d probably just let you create the creature anywhere you could have breathed, but that’s me. 1. Draconomicon came out in 2003, Spell Compendium in 2005. Draconomicon was actually one of the very first supplements for 3.5, even before Complete Warrior (which is often thought of as the first “real” supplement to the 3.5 PHB). 2. The errata rules don’t actually specify this; instead they talk about the “primary” source, which is usually the first printing. However, Spell Compendium asserts its primacy, and most players grant it that status. Moreover, in the case of complete reprints (as opposed to simple mentions or derivative material), later printings might be primary; the FAQ suggests so.
2019-09-23 20:33:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3332228660583496, "perplexity": 5553.722437899055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514578201.99/warc/CC-MAIN-20190923193125-20190923215125-00543.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-3rd-edition/chapter-12-rotation-of-a-rigid-body-exercises-and-problems-page-349/20
Chapter 12 - Rotation of a Rigid Body - Exercises and Problems - Page 349: 20

$\tau_{net} = -0.94~N~m$

Work Step by Step

We can find the net torque about the axle as: $\tau_{net} = \sum \tau$ $\tau_{net} = (r_1\times F_1) + (r_2\times F_2)+(r_3\times F_3)+(r_4\times F_4)$ $\tau_{net} = -(0.10~m)(30~N)+(0.05~m)(30~N)~sin(45^{\circ})+(0.05~m)(20~N)+(0.05~m)(20~N)~sin(0^{\circ})$ $\tau_{net} = -0.94~N~m$
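The sum is easy to check numerically. The signs and angles below are read off from the solution line (the underlying geometry comes from the textbook's figure, so treat it as an assumption):

```python
import math

# Each torque term: sign (counterclockwise positive), lever arm r (m),
# force F (N), and angle between r and F (deg); tau = sign * r * F * sin(angle).
terms = [
    (-1, 0.10, 30.0, 90.0),  # 30 N at 0.10 m, perpendicular, clockwise
    (+1, 0.05, 30.0, 45.0),  # 30 N at 0.05 m, at 45 degrees
    (+1, 0.05, 20.0, 90.0),  # 20 N at 0.05 m, perpendicular
    (+1, 0.05, 20.0, 0.0),   # 20 N along the lever arm: contributes nothing
]
tau_net = sum(s * r * F * math.sin(math.radians(ang)) for s, r, F, ang in terms)
print(f"{tau_net:.2f} N m")  # -0.94 N m
```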
2018-12-17 05:03:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41698700189590454, "perplexity": 822.5262244575401}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828318.79/warc/CC-MAIN-20181217042727-20181217064727-00435.warc.gz"}
http://stats.stackexchange.com/questions?page=5&sort=newest
# All Questions 0answers 5 views ### One Class SVM Training in LIBSVM Could you kindly reply my queries in context of using LIBSVM for One class SVM. Assuming I have samples from one class only, do I need to put a lable for each sample...but a training file without ... 0answers 3 views ### have two ID with four dimensions and a mediator with two factors and a DV with four dimensions. [on hold] want to calculate the direct and indirect effects between them. and what kind of techniques should used to analyze data from questionnaire. thanks 2answers 59 views ### Analysis for checking if an Ensemble model is a better fit for a dataset than Primitive model I have a dataset and have the option to apply either GLM (primitive) or a Random Forest (ensemble). So far the Random Forest is giving way better results than the GLM. As it is generally believed that ... 0answers 20 views ### Sufficient data volume for inference This question was probably discussed before but I failed to find it here. Let's say I have a medical data which contain some biomarker measurements for cases (patients with disease) and controls ... 0answers 12 views ### Optimizing selection from varying sets On pages of website(s) I have a set of potential messages to choose from and only one or two slots to show them in. (think 'this product is on sale' or 'this product is new'). On each page the set ... 0answers 19 views ### What does “shift invariant” mean in convolutional neural network? I saw a term describing the feature detectors, i.e. shift invariant. What is that mean? Paper: 1989 Generalization and Network Design Strategies 0answers 39 views ### Relationship between water quality at different sites and characteristics of study areas I'm investigating the relationship between the mean values of water quality parameters and the characteristics (e.g. soil types (%), landuses (%), rainfall (mm), rock types (%)) of the study area ... 2answers 240 views ### Was this the appropriate regression model? 
https://physics.stackexchange.com/questions/27787/kramers-kronig-relations-for-the-electron-self-energy-%ce%a3
# Kramers-Kronig relations for the electron Self-Energy Σ

I'm currently studying an article by Maslov, in particular the first section about higher corrections to Fermi-liquid behavior of interacting electron systems. Unfortunately, I've hit a snag when trying to understand an argument concerning the (retarded) self-energy $\Sigma^R(ε,k)$.

Maslov states that in a Fermi liquid, the real part and the imaginary part of the self-energy $\Sigma^R(ε,k)$ are given by $$\mathop{\text{Re}}\Sigma^R(ε,k) = -Aε + B\xi_k + \dots$$ $$-\mathop{\text{Im}}\Sigma^R(ε,k) = C(ε^2 + \pi^2T^2) + \dots$$ (equations 2.4a and 2.4b). These equations seem reasonable: when plugged into the fermion propagator, $$G^R(ε,k) = \frac1{ε + i\delta - \xi_k - \Sigma^R(ε,k)}$$ the real part slightly modifies the dispersion relation $ε = \xi_k$ and the imaginary part slightly broadens the peak. That's what I'd call a Fermi liquid: the bare electron peaks are smeared out a bit, but everything else stays as usual.

Now, Maslov goes on to derive higher-order corrections to the imaginary part of the self-energy, for instance of the form $$\mathop{\text{Im}}\Sigma^R(ε) = Cε^2 + D|ε|^3 + \dots .$$

First, I do not quite understand how to interpret this expansion. How am I to understand the expansions in orders of $ε$? I suppose that $ε$ is small, but in relation to what? The Fermi level seems to be given by $ε=0$.

Second, he states that this expansion is to be understood "on the mass-shell". I take it that "on the mass shell" means to set $\xi_k=ε$? But what does the expansion mean, then? Maybe I am supposed to expand in orders of $(ε-\xi_k)$?

Now the question that is the most important to me. Maslov argues that the real part of the self-energy can be obtained via the Kramers-Kronig relation from the imaginary part of the self-energy. My problem is that the corresponding integrals diverge.
How can $$\mathop{\text{Re}}\Sigma^R(ε,k) = \mathcal{P}\frac1{\pi}\int_{-\infty}^{\infty} d\omega \frac{\mathop{\text{Im}}\Sigma^R(\omega,k)}{\omega-ε}$$ be understood for non-integrable functions like $\mathop{\text{Im}}\Sigma^R(ε,k) = ε^2$? It probably has to do with $ε$ being small, but I don't really understand what is going on.

I should probably mention my motivation for these questions: I have calculated the imaginary part of the self-energy for the one-dimensional Luttinger liquid $\xi_k=|k|$ as $$\mathop{\text{Im}}\Sigma^R(ε,k) = (|ε|-|k|)θ(|ε|-|k|)\mathop{\text{sgn}}(ε)$$ and would like to make the connection to Maslov's interpretation and results. In particular, I want to calculate the real part of the self-energy with the Kramers-Kronig relations.

## 1 Answer

I can't speak knowledgeably about the specifics of your problem but I can offer some thoughts.

Regarding your first question, you will need to have dimensions of $\text{energy}^{-1}$ for $C$ and $\text{energy}^{-2}$ for $D$. Specifically, this means that $C/D$ has units of energy. This gives meaning to the statement that $$D|\epsilon|^3 \ll C\epsilon^2 \quad\Leftrightarrow\quad |\epsilon| \ll C/D\,.$$

As far as a divergent Kramers-Kronig relation goes, you should read about once or more subtracted dispersion relations. Then, instead of writing $$\mathop{\text{Re}}\Sigma^R(\epsilon,k) = \mathcal{P}\frac1{\pi}\int_{-\infty}^{\infty} d\omega \frac{\mathop{\text{Im}}\Sigma^R(\omega,k)}{\omega-\epsilon}\,,$$ you can write $$\mathop{\text{Re}}\Sigma^R(\epsilon,k) - \mathop{\text{Re}}\Sigma^R(\epsilon_0,k) = \mathcal{P}\frac1{\pi}\int_{-\infty}^{\infty} d\omega \frac{(\epsilon-\epsilon_0)\mathop{\text{Im}}\Sigma^R(\omega,k)}{(\omega-\epsilon)(\omega-\epsilon_0)}\,,$$ where $\epsilon_0$ is some convenient subtraction point, presumably one at which you know $\mathop{\text{Re}}\Sigma^R(\epsilon_0,k)$. You can extend to twice or more subtracted dispersion relations too. Weinberg Vol.
1 has a lot about dispersion relations where you can read more about this.

• Hm, I'm not entirely happy with your condition on $|ε|$ since it arises only a posteriori, once you have one particular expansion. Thanks a lot for the subtracted dispersion relations, that looks very useful. I'll check out what Weinberg writes. Feb 29 '12 at 8:30
• I'm glad the subtracted dispersion relations were useful. Regarding the expansion condition, I would say rather that it emerges simultaneously with taking the expansion itself, and is part of the justification for truncating the expansion at a given level. – josh Feb 29 '12 at 14:48
• I have pondered the subtracted dispersion relations and it appears to me that we have the following situation: the subtracted dispersion relation gives better convergence, but we lose information about lower-order terms. For instance, if we know that the imaginary part vanishes sufficiently fast as $|z|\to\infty$, we can reconstruct the second derivative of the real part, but we cannot gain any information about the linear or constant part. This is a fundamental limitation. Unfortunately, this calls into question the whole approach of trying to reconstruct a low-order expansion. Do you have any thoughts on this? Mar 5 '12 at 16:32
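As a concrete illustration of the subtracted dispersion relations discussed in the answer, here is a small numerical check. The Lorentzian test function is my own choice of example (its real part is known analytically), not the Luttinger-liquid self-energy from the question:

```python
import numpy as np

# Test function with known Kramers-Kronig pair: chi(w) = 1/(w - w0 + i*Gamma),
# a retarded response that is analytic in the upper half plane.
w0, Gamma = 0.5, 0.2
re_exact = lambda w: (w - w0) / ((w - w0) ** 2 + Gamma ** 2)
im_part = lambda w: -Gamma / ((w - w0) ** 2 + Gamma ** 2)

def pv_hilbert(f, eps, W=500.0, N=400_001):
    """P int_{-W}^{W} f(w)/(w - eps) dw, with the pole handled by subtracting
    the smooth value f(eps) and adding back its principal-value integral."""
    w = np.linspace(-W, W, N)
    dw = w[1] - w[0]
    den = w - eps
    sing = np.abs(den) < 1e-12
    num = f(w) - f(eps)
    integrand = np.where(sing, 0.0, num / np.where(sing, 1.0, den))
    # at w = eps the integrand tends to f'(eps); fill by central difference
    integrand[sing] = (f(eps + 1e-6) - f(eps - 1e-6)) / 2e-6
    # P int dw/(w - eps) over [-W, W] equals log((W-eps)/(W+eps)) for |eps| < W
    return integrand.sum() * dw + f(eps) * np.log((W - eps) / (W + eps))

eps, eps0 = 0.8, -1.3
# Unsubtracted relation (converges here, since Im ~ 1/w^2 at large |w|):
re_kk = pv_hilbert(im_part, eps) / np.pi
# Once-subtracted form, Re(eps) - Re(eps0): by partial fractions the two-pole
# formula from the answer reduces to a difference of single-pole PV integrals.
re_sub = re_exact(eps0) + (pv_hilbert(im_part, eps) - pv_hilbert(im_part, eps0)) / np.pi
print(re_kk, re_sub, re_exact(eps))
```

For this test function both forms agree with the analytic real part; the subtracted form is the one that stays usable when Im does not decay — though, as noted above, an Im growing like $\epsilon^2$ needs a twice-subtracted relation, at the price of losing the constant and linear terms.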
2021-12-09 14:00:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6612040400505066, "perplexity": 200.4753509550214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964364169.99/warc/CC-MAIN-20211209122503-20211209152503-00034.warc.gz"}
https://www.gamedev.net/forums/topic/70341-changing-the-title-of-a-window/
#### Archived

This topic is now archived and is closed to further replies.

# Changing the Title of a window

This topic is 5946 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

Hi all, how do i change the title of a window? do i need to use sendmessage? i guess so, but how? thanx for any help,

##### Share on other sites

If you have the HWND, just go: SetWindowText(hWnd,"Hello!"); Now to see if anybody beat me... --------------- I finally got it all together... ...and then forgot where I put it.
2018-03-22 06:40:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17629414796829224, "perplexity": 4988.960642546365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647777.59/warc/CC-MAIN-20180322053608-20180322073608-00255.warc.gz"}
https://www.nature.com/articles/s41467-023-36841-1?error=cookies_not_supported&code=7a67ed55-c4b9-4363-9216-afa9cfdbbae2
## Introduction

Carbon and hydrogen are the fourth and the most abundant elements in the Universe1, and their mixture is the simplest basis to form organic compounds. In our Solar System, the Cassini mission revealed lakes and seas of liquid hydrocarbons on the surface of Titan2, and the New Horizons spacecraft detected methane frost on the mountains of Pluto3. In Neptune and Uranus, methane is a major constituent with a measured carbon concentration of around 2% in the atmosphere4 and an assumed concentration of up to 8% in the interior5. The methane in the atmosphere absorbs red light and reflects blue light, giving the ice giants their blue hues6. Moreover, numerous recently discovered extrasolar planets, some orbiting carbon-rich stars, have spurred a renewed interest in the high-pressure and high-temperature behaviors of hydrocarbons7. Diamond formation from C/H mixtures is particularly relevant; the “diamonds in the sky” hypothesis8 suggests that diamonds can form in the mantles of Uranus and Neptune. The diamond formation and the accompanying heat release may explain the long-standing puzzle that Neptune (but not Uranus) radiates much more energy than it receives from the Sun9. Diamond is dense and will gravitate into the core of the ice giants. For white dwarfs, Tremblay et al.10 interpreted the crystallization of the carbon-rich cores to influence the cooling rate. Many experimental studies have probed the diamond formation from C/H mixtures, but the experiments are extraordinarily challenging to perform and interpret because of the extreme thermodynamic conditions, kinetics, chemical inhomogeneities, possible surface effects from the sample containers, and the need to prove diamond formation inside a diamond anvil cell (DAC). Three DAC studies on methane disagree on the temperature range: Benedetti et al. reported diamond formation between 10 and 50 GPa and temperatures of about 2000 K to 3000 K11; between 10 and 80 GPa, Hirai et al.
reported diamond formation above 3000 K12; while Lobanov et al. reported the observation of elementary carbon at about 1200 K, and a mixture of solid carbon, hydrogen, and heavier hydrocarbons above 1500 K13. In methane hydrates, Kadobayashi et al. reported diamond formation in a DAC between 13 and 45 GPa above 1600 K but not at lower temperatures14. Laser shock-compression experiments found diamond formation in epoxy (C,H,Cl,N,O)15 and polystyrene (-C8H8-)16, but none in polyethylene (-C2H4-)17. Moreover, there is a mismatch between the experimental results and theoretical predictions particularly regarding the pressure range of diamond formation. Density functional theory (DFT) combined with crystal structure searches at the static lattice level predicted that diamond and hydrogen are stable at pressures above about 300 GPa18,19, while hydrocarbon crystals are stable at lower pressures18,19,20,21,22. Based on DFT molecular dynamics (MD) simulations of methane, Ancilotto et al. concluded that methane dissociates into a mixture of hydrocarbons below 100 GPa and is more prone to form diamond above 300 GPa23. Sherman et al. classified the system into stable methane molecules (<3000 K), a polymeric state consisting of long hydrocarbon chains (4000-5000 K, 40–200 GPa), and a plasma state (>6000 K)24. However, these simulations are constrained to small system sizes and short time scales, so that it is impossible to distinguish between the formation of long hydrocarbon chains and the early stage of diamond nucleation. Using a semiempirical carbon model, Ghiringhelli et al.25 determined that the diamond nucleation rate in pure liquid carbon is rapid at 85 GPa, 5000 K but negligibly small at 30 GPa, 3750 K, and then extrapolated the nucleation rate to mixtures employing an ideal solution model.
In this work, we go beyond the standard first-principles methods, and study the thermodynamics of diamond formation in C/H mixtures, by constructing and utilizing machine learning potentials (MLPs) trained on DFT data. To the best of our knowledge, this is the first MLP fitted for high-pressure mixtures, and the only one available for C/H mixtures with arbitrary compositions and applicable from low P-T conditions to about 8000 K and 800 GPa. We first quantitatively estimate the coexistence line between diamond and pure liquid carbon at planetary conditions. We then reveal the nature of the chemical bonds in C/H mixtures at high-pressure high-temperature conditions. Finally, we determine the thermodynamic driving force of diamond formation in C/H mixtures, taking into account both the ideal and the non-ideal effects of mixing. We thereby establish the phase boundary where diamond can possibly form from C/H mixtures at different atomic fractions and P-T conditions. ## Results ### Diamond formation in pure liquid carbon Although planets or stars typically contain a low percentage of carbon4, it is useful to start with a hypothetical environment of pure carbon. This is to establish the melting line of diamond and to facilitate the subsequent analysis based on C/H mixtures. Moreover, the high-pressure carbon system has experimental relevance in diamond synthesis and Inertial Confinement Fusion applications26. Figure 1 shows the chemical potential difference ΔμD ≡ μdiamond − μliquidC between the diamond and the pure liquid carbon phases calculated using our MLP at a wide range of pressures and temperatures. Our calculated melting line Tm of diamond in pure liquid carbon (solid black curve) is compared to other theoretical work and experimental shock-compression data (Fig. 1a). Our Tm is re-entrant at above 500 GPa, because liquid carbon is denser than diamond at higher pressures. This shape has been observed for the experimental melting line27. 
It was previously predicted using DFT simulations on smaller systems28,29,30, but not captured in the free energy calculations performed using a semi-empirical LCBOP carbon model31. Although diamond solidification is thermodynamically favorable below the melting line, undercooled liquids can remain metastable for a long time as solidification is initiated by a kinetically activated nucleation process32. The only previous study that has quantified the diamond nucleation rate is by Ghiringhelli et al.25 using the LCBOP carbon model: the threshold J = 10⁻⁴⁰ m⁻³s⁻¹ is indicated by the gray line in Fig. 1, and above this line the diamond formation rate is negligible even at the celestial scale. Overall, we find that the pure carbon system is deeply undercooled at the P-T conditions in the two icy planets (green and orange lines in Fig. 1). ### The nature of C-H bonds Going beyond the pure carbon case, we investigate the nature of the chemical bonds in C/H mixtures at conditions relevant for planetary interiors. The high-pressure behavior of hydrocarbons is also crucial in many shock-compression experiments for the development of fusion energy platforms and Inertial Confinement Fusion capsules33. The properties of the covalent C-C and C-H bonds are well-known at ambient conditions, but it is unclear how extreme conditions affect these bonds. DFT studies coupled with harmonic approximations have predicted a variety of hydrocarbon crystals to be stable at P ≤ 300 GPa18,19,20,21,22, but these studies are restricted to low temperatures as the melting lines of hydrogen and methane are below 1000 K and 2000 K12,34, respectively, while harmonic approximations break down completely for these liquids. We performed MD simulations using our dissociable MLP for C/H mixtures over a wide range of thermodynamic conditions. We focus on the CH4 composition to directly compare to previous studies. Other compositions can be analyzed in the same way and yield qualitatively similar behaviors.
At T < 2500 K, the MD is not ergodic within the simulation time of 100 ps, and therefore analysis is performed only at temperatures above this threshold. Figure 2a shows the snapshots of carbon bonds from the MD simulations of the CH4 system. At 4000 K and P = 100 GPa, 200 GPa, and 600 GPa, the system is primarily composed of various types of hydrocarbon chains. The formation of longer chains at higher pressures is consistent with the observations in previous DFT MD studies23,24, although the DFT simulations have severe finite size effects because polymer chains consisting of just a few carbon atoms can connect with their periodic images and become infinitely long. At high pressures, the chains assemble carbon networks, and the system shows more obvious signs of spatial inhomogeneity of carbon atoms. In our chemical bond analysis, a C-C bond is identified whenever the distance between a pair of carbon atoms is within 1.6 Å, and a C-H bond is defined using a cutoff of 1.14 Å. The cutoffs are larger than the typical bond lengths to eliminate the misidentification of broken bonds due to thermal fluctuations. The average number of C-C and C-H bonds at different conditions are shown in Fig. 2b, c. The number of bonds varies smoothly as a function of P and T. The average number of the C-C bonds decreases with temperature. Moreover, as illustrated in the Supplementary Information, at a certain condition the carbon atoms in the system have varying number of C-C bonds, rather than all having the same number of bonds. These suggest that at T ≥ 2500 K the system is not made of hydrocarbon crystals that were predicted to be stable at low temperatures in previous DFT studies18,19,20,21,22. The average number of C-H bonds for each carbon atom is close to one at all conditions considered here even though the overall composition is CH4, indicating that most hydrogen atoms are not bonded to any carbons. 
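The two distance criteria above translate directly into a neighbor count. The sketch below applies the stated cutoffs (1.6 Å for C-C, 1.14 Å for C-H) with the minimum-image convention in a cubic periodic box; the coordinates are a synthetic toy configuration of my own, not an actual MD snapshot from the paper.

```python
import numpy as np

R_CC, R_CH = 1.60, 1.14  # bond-identification cutoffs from the text, in Å

def count_bonds(pos_C, pos_H, box):
    """Return (n_CC, n_CH) bond counts for one snapshot in a cubic box."""
    def pair_dists(a, b):
        d = a[:, None, :] - b[None, :, :]
        d -= box * np.round(d / box)              # minimum-image convention
        return np.sqrt((d ** 2).sum(axis=-1))
    d_CC = pair_dists(pos_C, pos_C)
    n_CC = int(np.triu(d_CC < R_CC, k=1).sum())   # each C-C pair once, no self
    d_CH = pair_dists(pos_C, pos_H)
    n_CH = int((d_CH < R_CH).sum())
    return n_CC, n_CH

# Toy configuration: one methane-like unit plus one C2 dimer in a 10 Å box.
box = 10.0
pos_C = np.array([[2.0, 2.0, 2.0],    # CH4-like carbon
                  [6.0, 6.0, 6.0],    # dimer carbon 1
                  [7.4, 6.0, 6.0]])   # dimer carbon 2, 1.4 Å away -> one C-C bond
h = 1.09  # a typical C-H bond length, Å
pos_H = np.array([[2.0 + h, 2.0, 2.0], [2.0 - h, 2.0, 2.0],
                  [2.0, 2.0 + h, 2.0], [2.0, 2.0 - h, 2.0]])
n_CC, n_CH = count_bonds(pos_C, pos_H, box)
print(n_CC, n_CH)
```

Averaging such counts over snapshots, and normalizing by the number of carbon atoms, gives per-atom bond statistics of the kind reported in Fig. 2b, c.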
To determine the lifetimes of the C-C and the C-H bonds, we recorded the time it takes for a newly formed bond to dissociate during the MD simulations. Figure 2d, e show the average bond lifetimes. The C-H bond lifetimes are extremely short, less than about 0.01 ps. The C-C bonds are more long-lived, yet only have a mean lifetime of less than about 1 ps at all the conditions considered here. Such short lifetimes are consistent with previous DFT MD simulations of CH424. The short bond lifetimes indicate that the hydrocarbon chains in the systems decompose and form quickly. In other words, the C/H mixture behaves like a liquid with transient C-C and C-H bonds. ### Thermodynamics of C/H mixtures We then determine the chemical potentials of carbon in C/H mixtures, ΔμC(χC), as a function of the atomic fraction of carbon, χC = NC/(NC + NH). This, combined with the chemical potential difference ΔμD between diamond and pure carbon liquid, establishes the thermodynamic phase boundary for diamond formation from C/H mixtures with varying atomic ratios. Dilution will usually lower the chemical potential of carbon in a mixture, which can be understood from the ideal solution assumption: $\mu_{id}^{C}=k_{\rm B}T\ln(\chi_{C})$. However, the ideal solution model neglects the atomic interactions. To consider non-ideal mixing effects, we compute the chemical potentials of mixtures using the MLP. This is not an easy task, because traditional particle insertion methods35 fail for this dense liquid system, and thermodynamic integration from an ideal gas state to the real mixture36 is not compatible with the MLP.
We employ the newly developed S0 method which accounts for both the ideal and the non-ideal contributions to the chemical potentials37: $$\left(\frac{d\mu^{C}}{d\ln\chi_{C}}\right)_{T,P}=\frac{k_{\rm B}T}{(1-\chi_{C})\,S_{CC}^{0}+\chi_{C}\,S_{HH}^{0}-2\sqrt{\chi_{C}(1-\chi_{C})}\,S_{CH}^{0}},$$ (1) where $S_{CC}^{0}$, $S_{CH}^{0}$, and $S_{HH}^{0}$ are the values of the static structure factor between the said types of atoms at the limit of infinite wavelength37, which can be determined from equilibrium MD simulations of a C/H mixture with a given carbon fraction χC. μH is then fixed using the Gibbs-Duhem equation. Note that only the relative chemical potential is physically meaningful, and we conveniently select the reference states to be the pure carbon and hydrogen liquids, i.e. μC(χC = 1) = 0 and μH(χC = 0) = 0. We obtained μC and μH at different χC on a grid of P-T conditions between 10 GPa–600 GPa and 3000 K–8000 K, by numerically integrating $d\mu^{C}/d\ln\chi_{C}$. $d\mu^{C}/d\ln\chi_{C}$ at P = 50 GPa, T = 4000 K and P = 400 GPa, T = 3000 K are shown in Fig. 3a, d, respectively. For both sets, these values deviate from the ideal behavior (i.e. constant at 1), and have maxima and minima around certain compositions. The corresponding chemical potentials are plotted in Fig. 3b, e, while the results at other conditions are shown in Fig. 4 of the Methods. As an independent validation, we also computed μC using the coexistence method described in the Methods, although this approach is in general less efficient and can become prohibitive if carbon concentration or diffusivity is low. The values from the coexistence method are shown as the hollow symbols in Fig. 3b, in agreement with the S0 method. As the statistical accuracy of the S0 method is much better compared to the coexistence approach, all the subsequent analysis is based on the former. In both Fig.
3b and e, μC has a plateau at χC between about 0.25 and 0.35, and the same phenomenon is found at T ≤ 5000 K at 50 GPa, and over an even broader temperature range under increasing pressures, up to 8000 K at 600 GPa (see Fig. M4 of the Methods). At 50 GPa, 4000 K (Fig. 3b), μC then decreases rapidly and approaches the ideal behavior at lower χC. In contrast, at 400 GPa, 3000 K (Fig. 3e), μC plateaus and reaches a constant value for χC < 0.12. The plateaus at low χC were observed at pressures between 200 GPa and 600 GPa and temperatures lower than 3500 K (see Fig. M4 of the Methods). In Fig. 3b,e, the chemical potentials of diamond, μD, are indicated by black diamond symbols and horizontal lines. If μC is larger than μD at a given χC, diamond formation is thermodynamically favorable. To rationalize the plateaus, we express the per-atom chemical potential of the C/H mixture as $$\mu_{mixture}(\chi_{C})=\chi_{C}\,\mu^{C}(\chi_{C})+(1-\chi_{C})\,\mu^{H}(\chi_{C}),$$ (2) and compare it to the ideal solution curve $\mu_{mixture,id}=k_{\rm B}T(\chi_{C}\log(\chi_{C})+(1-\chi_{C})\log(1-\chi_{C}))$. Figure 3c shows μmixture at 50 GPa, 4000 K. Compared with the ideal solution chemical potential (dashed gray curve) which is fully convex, μmixture has two edges. One can thus perform a common tangent construction to the μmixture curve to find out the coexisting liquid phases. The green line in Fig. 3d indicates the common tangent, and the two green crosses show the location of the edges. For C/H mixtures with χC between the two atomic ratios ($\chi_{C}^{1}=0.27$ and $\chi_{C}^{2}=0.36$ at the condition shown), a liquid-liquid phase separation (PS1) will occur and form two phases with the proportions determined by the lever rule. Here the region between the two edges is not concave but linear, which is because the phase separation has little activation barrier and already occurs during the MD simulations.
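Eq. (1) above can be turned into chemical potentials by exactly this kind of numerical integration over ln χC. The sketch below is my own illustration: the temperature and the structure-factor inputs are placeholders, not the paper's MD data. In the uncorrelated limit the equation reduces to kBT and integration recovers the ideal solution result quoted earlier.

```python
import numpy as np

kB = 8.617333262e-5  # Boltzmann constant, eV/K
T = 4000.0           # temperature in K (an assumed value for illustration)

def dmu_dlnx(x, S_CC, S_HH, S_CH):
    """Right-hand side of Eq. (1) at carbon fraction x = chi_C."""
    denom = (1.0 - x) * S_CC + x * S_HH - 2.0 * np.sqrt(x * (1.0 - x)) * S_CH
    return kB * T / denom

def mu_C(x_grid, S_CC, S_HH, S_CH):
    """Trapezoid integration of d mu^C = (d mu^C / d ln x) d ln x,
    anchored at the reference mu^C(chi_C = 1) = 0 used in the text."""
    lnx = np.log(x_grid)
    g = dmu_dlnx(x_grid, S_CC, S_HH, S_CH)
    mu = np.zeros_like(x_grid)
    for i in range(len(x_grid) - 2, -1, -1):
        mu[i] = mu[i + 1] - 0.5 * (g[i] + g[i + 1]) * (lnx[i + 1] - lnx[i])
    return mu

# Sanity check in the uncorrelated limit S0_CC = S0_HH = 1, S0_CH = 0:
# Eq. (1) gives kB*T, and integration recovers mu_id^C = kB*T*ln(chi_C).
x = np.linspace(0.05, 1.0, 2000)
ones, zeros = np.ones_like(x), np.zeros_like(x)
mu_ideal = mu_C(x, ones, ones, zeros)
print(mu_ideal[0], kB * T * np.log(x[0]))
```

With measured $S^0$ curves in place of the constant inputs, the same integration yields the non-ideal μC(χC) plotted in Fig. 3b, e.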
In other words, a C/H mixture with a carbon fraction between $${\chi }_{C}^{1}$$ and $${\chi }_{C}^{2}$$ will first undergo spontaneous liquid-liquid phase separation, which explains the corresponding plateaus in μC of Fig. 3b, e. Furthermore, Fig. 3f shows that, at 400 GPa, 3000 K, μmixture at low χC deviates significantly from the ideal solution approximation (dashed gray curve), and one can construct a tangent as plotted in purple. This means that, besides the aforementioned PS1, C/H mixtures at a low C fraction can also phase separate (PS2) into a fluid of mostly hydrogen and another fluid with χC ≈ 0.12 (purple cross). We show example snapshots of such phase-separated configurations collected from the MD simulations in the Supplementary Information. This PS2 explains the plateau of μC at low χC in Fig. 3d, as the carbon concentrations in both phase-separated liquids stay the same, while only the proportions of the two liquids change. Supplementary Movie 1 shows the occurrence of PS2 in MD simulations. This phase separation has immense consequences: at pressures above 200 GPa and temperatures below 3000 K–3500 K, C in C/H mixtures will always have μC > μD even at very low C fraction due to PS2, and the carbon atoms will thus always be under a thermodynamic driving force to form diamond. We refer to these conditions as the “depletion zone”. Figure 3g presents the thermodynamic phase boundaries, below which diamond formation is possible in C/H mixtures for each indicated carbon atomic fraction. This is obtained by combining the values of μC(χC) in C/H mixtures and μD over a wide range of P-T conditions. For lower and lower χC, the boundaries deviate more and more from the Tm of diamond. At P < 100 GPa, the locations of the boundaries are very sensitive to both temperature and pressure, whereas at higher P they are mostly independent of pressure.
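The common tangent construction described above can be automated with a lower convex hull. The sketch below uses a regular-solution model free energy as a stand-in for the computed μmixture, with an assumed interaction parameter of 3 kBT that opens a symmetric miscibility gap; none of these numbers come from the article. The edge of the hull that skips part of the composition grid marks the two coexisting compositions, and the lever rule then gives the phase proportions.

```python
import numpy as np

def lower_hull(x, y):
    """Indices of the lower convex hull of a curve sampled at increasing x."""
    hull = []
    for i in range(len(x)):
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            # pop a if it lies on or above the chord o -> i
            cross = (x[a] - x[o]) * (y[i] - y[o]) - (y[a] - y[o]) * (x[i] - x[o])
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

# Regular-solution free energy per atom, in units of kBT (assumed Omega = 3)
chi = np.linspace(0.001, 0.999, 2000)
f = chi * np.log(chi) + (1 - chi) * np.log(1 - chi) + 3.0 * chi * (1 - chi)

hull = lower_hull(chi, f)
# the largest jump between consecutive hull points spans the miscibility gap
gaps = np.diff(chi[hull])
j = int(np.argmax(gaps))
chi1, chi2 = chi[hull[j]], chi[hull[j + 1]]  # coexisting compositions

# lever rule: proportion of the chi2-rich phase for an overall composition chi0
chi0 = 0.5
w2 = (chi0 - chi1) / (chi2 - chi1)
```

For this symmetric toy model the construction yields χ1 ≈ 0.07 and χ2 ≈ 0.93; applied to the actual μmixture data, the same hull would play the role of the green tangent construction in Fig. 3.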
Figure 3g can also be read in another way: for a certain P-T condition, it gives the minimal carbon ratio required to make diamond formation possible. Notice that the χC = 0.25 and χC = 0.3 lines almost overlap, which is due to the plateau of μC induced by PS1. The light blue shaded area indicates the depletion zone, where diamond formation is always possible due to PS2. In this zone, carbon atoms will first form a carbon-rich liquid phase, and diamond can nucleate from this phase. This process is similar to a two-step nucleation mechanism previously revealed in protein systems38. Previous experimental measurements are included in Fig. 3g, with the conditions where diamonds were either found (diamond symbols or rectangular regions) or absent (cross symbols) indicated. At lower pressures, our calculations largely agree with the observation of diamond formation for methane in DAC experiments between 2000–3000 K11 and above 3000 K12. We find less agreement with the shock-compression experiments at higher pressures16,17. We speculate that the disagreement may arise because diamond formation needs to go through an activated nucleation process, which may take longer than the short timescale of these rapid compression experiments, or it may come from the difficulty in estimating the temperature of these experiments. The hollow diamond symbols in Fig. 3g show the diamond formation conditions from starting materials of more complex compositions: Marshall et al.15 used epoxy (C:H:Cl:N:O ≈ 27:38:1:1:5) and Kadobayashi et al.14 used methane hydrate. We find little agreement between Kadobayashi et al.14 and our phase boundaries if we compare solely in terms of χC including all atomic species, although the agreement improves if the comparison is based on the χC of methane alone, and indeed CH4 may be an intermediate product in the experiment14.
The liquid-liquid phase separations of C/H mixtures have not been previously observed, but they may be detected from the speed of sound, mixed optical spectra, inhomogeneity in the diamond formation reaction, or hydrodynamic instability during compression experiments.

## Discussion

We first computed the melting line of diamond in pure liquid carbon. We then moved on to the C/H mixtures, and showed that they behave like liquids at T ≥ 2500 K. Finally, we precisely computed the thermodynamic boundary of diamond formation for different atomic ratios. Notably, we revealed the occurrence of phase separations in C/H mixtures, which can greatly enhance diamond formation. For PS1, the C/H mixture will phase separate into two liquids with χC of about 0.25 and 0.35. Both liquids have the same μC, but their interfacial free energies with the diamond phase are different. Diamond will thus prefer to nucleate from the liquid with the lower interfacial free energy. At 200 < P < 600 GPa and T below 3000 K–3500 K, there is a depletion zone where C/H mixtures at a low C fraction can phase separate (PS2) into a fluid of mostly hydrogen and a more carbon-rich fluid with χC ≈ 0.12. In this zone, there is always a thermodynamic driving force to form diamond from the carbon-rich phase. Our phase boundaries in Fig. 3g put largely scattered experimental measurements11,12,15,16,17 into context, and provide a mechanistic understanding of the thermodynamics involved. They also help gauge the accuracy of the experimental determination of diamond formation conditions, extrapolate between different experiments, and guide future efforts to validate these boundaries. Note that our boundaries are based solely on the thermodynamic criterion; the kinetic nucleation rate may play a role, particularly in shock-compression experiments. Some undercooling may be needed for diamond to nucleate from C/H mixtures within finite time, depending on the magnitude of the nucleation activation barrier.
In homogeneous nucleation, the magnitude of the interfacial free energy contribution is crucial25,32. In experiments, the DACs are in close contact with the fluid samples, so heterogeneous nucleation may happen, which requires less undercooling than the homogeneous case. In addition, other elements (e.g. He, N, O) are also prevalent in icy planets, and we suggest future experiments to probe how they affect the phase boundaries of diamond formation. The “depletion zone” can help explain the difference in luminosity between Uranus and Neptune. Although the two planets are similar in size and composition, Neptune has a strong internal heat source but Uranus does not39. The “diamonds in the sky” hypothesis8,11 relates the heat source with diamond formation, but does not explain the dichotomy between the two planets. By comparing the P-T conditions at different depths of the two ice giants from Ref. 40 with our calculated phase boundaries (in Fig. 3g), one can see that a relatively small difference in the planetary profile can drastically change the possibility of diamond formation: at the P-T conditions in Uranus, diamond formation requires about 15% of carbon, which seems unlikely as less than 10% of carbon is believed to be present in its mantle4. As such, diamond formation in Uranus may be absent. In contrast, Neptune is a bit cooler, so it is much more likely that its planetary profile overlaps with the depletion zone; at these conditions C/H mixtures will phase separate (PS2), and diamond formation is thermodynamically favorable regardless of the actual carbon fraction. If there is indeed an overlap, diamonds can in principle form in the depletion zone in the mantle of Neptune, and then sink towards the core while releasing heat. Although the mantle will become increasingly carbon-deprived, diamond formation in the depletion zone can proceed until all carbon is exhausted.
Moreover, the “diamond rain” will naturally induce a compositional gradient inside the planet, which is an important aspect in explaining the evolution of giant planets41,42. Our carbon-ratio-dependent diamond formation phase boundaries can help estimate the prevalence and the existence criteria of extraterrestrial diamonds. Neptune-like exoplanets are extremely common according to the database of planets discovered43, and methane-rich exoplanets are modeled to have a carbon core, a methane envelope, and a hydrogen atmosphere7. Our boundaries can put a tight constraint on the structure and composition of these planets. Furthermore, diamond formation and liquid-liquid phase separation play a key role in the cooling process in white dwarfs10, and thus the precise determination of the onset of phase separation and crystallization is also crucial there.

## Methods

### DFT calculations

DFT is the workhorse of high-pressure equation-of-state calculations and has shown good agreement with several experiments on hydrocarbons and other systems44,45,46 for measured thermodynamic properties, in particular for Hugoniot curves. Single-point DFT calculations with VASP47,48,49,50 were carried out for configurations with various C/H ratios to generate the training set of the MLP. The simulations were performed with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional51, employing hard pseudopotentials for hydrogen and carbon, a cutoff energy of 1000 eV, and a consistent k-point spacing of 0.2 Å−1. In addition, extensive PBE MD simulations for CH16, CH8, CH4, CH2, CH, C2H and C4H mixtures were performed, and together with previous PBE MD data for methane52, carbon30 and hydrogen53, were used to benchmark the MLP. To approximate the impact of the thermal excitation of the electronic subsystem, we set the electronic temperature equal to the average ionic temperature during the DFT MD calculations as well as in the reference calculations used to train and test the MLP.
The convergence tests of DFT and the influence of the electronic temperature are provided in the Supplementary Information.

### Machine learning potential

We generated flexible and dissociable MLPs for the high-pressure C/H system, employing the Behler-Parrinello artificial neural network54 and using the N2P2 code55. The total training set contains 92,185 configurations with a total of 8,906,582 atoms, and was constructed using a combination of strategies, including DFT MD, random structure searches, adapting previous training sets for pure C56 and H53, and active learning. The training set includes a large variety of structures: cubic/hexagonal diamond, graphite, graphane, carbon nanotubes, fullerenes, amorphous carbon, carbon structures with defects, liquid carbon, liquid hydrogen, many hydrogen crystalline polymorphs, hydrocarbon crystals, and hydrocarbon liquids with varying carbon concentrations at a wide range of P-T conditions. Details on the construction and the benchmarks of the MLP are provided in the Supplementary Information. Note that the MLP has been extensively benchmarked for high-pressure liquid hydrogen, diamond/liquid carbon, and C/H mixtures based on energetic, thermodynamic and dynamic properties. However, we would like to point out the limitations of the current MLP: The MLP is not applicable to gas-phase hydrocarbons. For low-pressure carbon phases and diamond-graphite transitions, the current MLP has not been extensively tested, and we recommend that users employ the MLP from Ref. 56. Long-range van der Waals interactions in liquid methane at low density may be important57 but are lacking in the current MLP. The comparison between PBE and MLP for structural and dynamic properties such as equations of state, radial distribution functions, diffusivity, vibrational density of states, and bond lifetimes is provided in the Supplementary Information.
We recommend checking these comparisons before applying the MLP to a given C/H composition at certain conditions.

### MLP MD simulation details

All MD simulations were performed in LAMMPS58 with an MLP implementation59. The time step was 0.25 fs for C/H mixtures, and 0.4 fs for pure carbon systems.

### Computing the chemical potentials of diamond and pure liquid carbon

We computed ΔμD using interface pinning simulations60, which were performed using the PLUMED code61. We used solid-liquid systems containing 1,024 C atoms at pressures between 10-800 GPa and at temperatures close to the melting line, employing the MLP. The Nosé-Hoover barostat was applied only along the z direction, perpendicular to the interface, in these coexistence simulations, while the dimensions of the supercell along the x and y directions were commensurate with the equilibrium lattice parameters of the diamond phase at the given conditions. We used the locally-averaged62 Q3 order parameter63 for detecting diamond structures, and introduced an umbrella potential to counter-balance the chemical potential difference and constrain the size of the diamond region in the system. We then used thermodynamic integration along isotherms and isobars64,65 to extend ΔμD to a wide range of pressures and temperatures.

### MLP MD simulation of CH4

The simulation cell contained 7,290 atoms (1,458 CH4 formula units). Each simulation was run for more than 100 ps. The simulations were performed in the NPT ensemble, using the Nosé-Hoover thermostat and isotropic barostat. At each condition, two independent MD simulations were initialized using a starting configuration of either bonded CH4 molecules on a lattice or a liquid. For T ≥ 2500 K, the two simulations provided consistent statistical properties. These simulations were the basis for the further analysis we performed. For T < 2500 K the two runs gave different averages, meaning that under these conditions the system is not ergodic within the simulation time.
### Computing the chemical potentials of C in C/H mixtures

We used two independent methods for computing the chemical potentials of carbon in C/H mixtures at various conditions. The first is the S0 method37, which uses the static structure factors computed from equilibrium NPT simulations. The S0 method uses the thermodynamic relationship between composition fluctuations and the derivative of the chemical potential with respect to concentration, and accounts for both mixing entropy and enthalpy37. The simulations were performed on a grid of P-T conditions, P = 10 GPa, 25 GPa, 50 GPa, 100 GPa, 200 GPa, 300 GPa, 400 GPa, 600 GPa, and T = 3500 K, 4000 K, 5000 K, 6000 K, 7000 K, and 8000 K. At each P-T condition, MD simulations were run for systems at varying atomic ratios, on a dense grid of χC from 0.015 to 0.98. The system size varied between 9,728 and 82,944 total atoms. We obtained the static structure factors at different wavevectors k using the Fourier expansion of the scaled atomic coordinates, i.e. $${S}_{AB}({\bf{k}})=\frac{1}{\sqrt{{N}_{A}{N}_{B}}}\left\langle \mathop{\sum }\limits_{i=1}^{{N}_{A}}\exp (i{\bf{k}}\cdot {\hat{{\bf{r}}}}_{{i}_{A}}(t))\mathop{\sum }\limits_{j=1}^{{N}_{B}}\exp (-i{\bf{k}}\cdot {\hat{{\bf{r}}}}_{{j}_{B}}(t))\right\rangle$$ (3) where AB can be CC, CH or HH, $$\hat{{\bf{r}}}(t)={\bf{r}}(t){\left\langle l\right\rangle }_{{\rm{NPT}}}/l(t)$$, and l(t) is the instantaneous dimension of the supercell. We then determined $${S}_{CC}^{0}$$, $${S}_{CH}^{0}$$ and $${S}_{HH}^{0}$$ by extrapolating SAB(k) to the k → 0 limit using the Ornstein–Zernike form, as described in Ref. 37. Finally, we used numerical integration of Eq. (1) to obtain the chemical potential of carbon at different atomic fractions, and obtained the chemical potential of H using the Gibbs-Duhem equation. All the chemical potential data are presented in Fig.
4. The second approach is based on the coexistence method, similar to the setup used for computing the chemical potentials of the pure carbon systems. In this case, interface pinning simulations60,66 were performed on a diamond-C/H liquid coexistence system containing 1024 C atoms and a varying number of H atoms at pressures 0-600 GPa. A snapshot of the coexistence system is provided in the Supplementary Information. The chemical potentials estimated using coexistence are shown in Fig. 4, and the errors shown are the standard errors of the mean estimated from the values of the CV. However, there are other sources of errors that are hard to estimate: finite size effects and ergodicity issues related to the explicit interface; the carbon concentration can vary in the liquid region of the simulation box.
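As a concrete illustration of the structure-factor expression in Eq. (3), the snippet below evaluates a partial structure factor for a single frame. In practice SAB(k) is averaged over many frames and over wavevectors of equal magnitude, with k restricted to the reciprocal lattice of the periodic box (k = 2πn/L), before extrapolating to k → 0; the random coordinates here are placeholders for real MD configurations, not data from the article.

```python
import numpy as np

def partial_Sk(pos_A, pos_B, kvecs):
    """One-frame estimate of S_AB(k) (Eq. 3) for an array of wavevectors."""
    NA, NB = len(pos_A), len(pos_B)
    rho_A = np.exp(1j * (pos_A @ kvecs.T)).sum(axis=0)   # sum_i exp(+i k.r_i)
    rho_B = np.exp(-1j * (pos_B @ kvecs.T)).sum(axis=0)  # sum_j exp(-i k.r_j)
    return (rho_A * rho_B).real / np.sqrt(NA * NB)

# wavevectors commensurate with a cubic box of side L: k = 2*pi*n/L
L = 10.0
n = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]])
kvecs = 2.0 * np.pi * n / L

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, L, size=(512, 3))  # placeholder "carbon" coordinates
S_CC = partial_Sk(pos, pos, kvecs)        # self term; ~1 for uncorrelated atoms

# The k -> 0 value is then commonly obtained by fitting the small-k points to
# an Ornstein-Zernike form, S(k) = S0 / (1 + (k*xi)**2), as described in Ref. 37.
```

Frame averaging and the small-k fit are omitted here; only the per-frame Fourier sum of Eq. (3) is shown.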
https://nbviewer.jupyter.org/github/jckantor/CBE20255/blob/master/notebooks/04.03-General-Mass-Balance-on-a-Single-Tank.ipynb
This notebook contains course material from CBE20255 by Jeffrey Kantor (jeff at nd.edu); the content is available on Github. The text is released under the CC-BY-NC-ND-4.0 license, and code is released under the MIT license.

# General Mass Balance on a Single Tank

## Summary

This Jupyter notebook demonstrates the application of a mass balance to a simple water tank. This example is adapted with permission from learnCheme.com, a project at the University of Colorado funded by the National Science Foundation and the Shell Corporation.

## Problem Statement

### Mass Balance

Using our general principles for a mass balance

$\frac{d(\rho V)}{dt} = \dot{m}_1 - \dot{m}_2$

which can be simplified to

$\frac{dV}{dt} = \frac{1}{\rho}\left(\dot{m}_1 - \dot{m}_2\right)$

where the initial value is $V(0) = 1\,\mbox{m}^3$. This is a differential equation.

### Numerical Solution using odeint

There are a number of numerical methods available for solving differential equations. Here we use odeint which is part of the scipy package. odeint requires a function that returns the rate of accumulation in the tank as a function of the current volume and time.

In [1]:

```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.integrate import odeint
```

In [2]:

```python
# Flowrates in kg/sec
m1 = 4.0
m2 = 2.0

# Density in kg/m**3
rho = 1000.0

# Function to compute accumulation rate
def dV(V, t):
    return (m1 - m2)/rho
```

Next we import odeint from the scipy.integrate package, set up a grid of times at which we wish to find solution values, then call odeint to compute values for the solution starting with an initial condition of 1.0.

In [3]:

```python
t = np.linspace(0, 1000)
V = odeint(dV, 1.0, t)
```

We finish by plotting the results of the integration and comparing to the capacity of the tank.
In [4]:

```python
plt.plot(t, V, 'b', t, 2*np.ones(len(t)), 'r')
plt.xlabel('Time [sec]')
plt.ylabel('Volume [m**3]')
plt.legend(['Water Volume', 'Tank Capacity'], loc='upper left');
```

This same approach can be used to solve systems of differential equations. For a light-hearted (but very useful) example, check out this solution for the Zombie Apocalypse.

### Solving for the Time Required to Fill the Tank

Now that we know how to solve the differential equation, next we create a function to compute the air volume of the tank at any given time.

In [5]:

```python
Vtank = 2.0
Vinitial = 1.0

def Vwater(t):
    return odeint(dV, Vinitial, [0, t])[-1][0]

def Vair(t):
    return Vtank - Vwater(t)

print("Air volume in the tank at t = 100 is {:4.2f} m**3.".format(Vair(100)))
```

Air volume in the tank at t = 100 is 0.80 m**3.

The next step is to find the time at which Vair(t) returns a value of 0. This is root finding, which the function brentq will do for us.

In [6]:

```python
from scipy.optimize import brentq

t_full = brentq(Vair, 0, 1000)
print("The tank will be full at t = {:6.2f} seconds.".format(t_full))
```

The tank will be full at t = 500.00 seconds.

## Exercise

Suppose the tank was being used to protect against surges in water flow, and the inlet flowrate was a function of time where

$\dot{m}_1 = 4 e^{-t/500}$

• Will the tank overflow?
• Assuming it doesn't overflow, how long would it take for the tank to return to its initial condition of being half full? To empty completely?
• What will be the peak volume of water in the tank, and when will that occur?
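One possible approach to the exercise (a sketch, not an official solution): replace the constant inflow with the decaying $\dot{m}_1 = 4 e^{-t/500}$ and reuse the same odeint/brentq machinery. Analytically, $V(t) = 1 + 2(1 - e^{-t/500}) - t/500$, which peaks when inflow equals outflow at $t = 500\ln 2 \approx 347$ seconds with $V \approx 1.31\,\mbox{m}^3 < 2\,\mbox{m}^3$, so the tank does not overflow.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import brentq

rho, m2 = 1000.0, 2.0
Vinitial = 1.0

def dV(V, t):
    m1 = 4.0*np.exp(-t/500.0)   # surge inflow decays over time, kg/sec
    return (m1 - m2)/rho

def Vwater(t):
    return odeint(dV, Vinitial, [0, t])[-1][0]

# peak volume: inflow equals outflow when 4*exp(-t/500) = 2
t_peak = 500.0*np.log(2.0)
V_peak = Vwater(t_peak)          # about 1.31 m**3, below the 2 m**3 capacity

# time to return to half full (V = 1) and to empty completely (V = 0)
t_half = brentq(lambda t: Vwater(t) - 1.0, 400.0, 2000.0)
t_empty = brentq(Vwater, 800.0, 3000.0)

print("Peak volume {:.3f} m**3 at t = {:.1f} s".format(V_peak, t_peak))
print("Half full again at t = {:.1f} s; empty at t = {:.1f} s".format(t_half, t_empty))
```

The brentq brackets (400–2000 s and 800–3000 s) were chosen by inspecting the analytic solution; they must straddle each root for the solver to converge.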
https://www.w3schools.com/statistics/statistics_mode.php
# Statistics - Mode

The mode is a type of average value, which describes where most of the data is located.

## Mode

The mode is the value(s) that are the most common in the data. A dataset can have multiple values that are modes.

A distribution of values with only one mode is called unimodal. A distribution of values with two modes is called bimodal. In general, a distribution with more than one mode is called multimodal.

The mode can be found for both categorical and numerical data.

## Finding the Mode

Here is a numerical example:

4, 7, 3, 8, 11, 7, 10, 19, 6, 9, 12, 12

Both 7 and 12 appear twice each, and the other values only once. The modes of this data are 7 and 12.

Here is a categorical example with names:

Alice, John, Bob, Maria, John, Julia, Carol

John appears twice, and the other values only once. The mode of this data is John.

## Finding the Mode with Programming

The mode can easily be found with many programming languages. Using software and programming to calculate statistics is more common for bigger sets of data, as calculating manually becomes difficult.

### Example

With Python, use the statistics library's multimode() method to find the modes of the values 4, 7, 3, 8, 11, 7, 10, 19, 6, 9, 12, 12:

```python
from statistics import multimode

values = [4, 7, 3, 8, 11, 7, 10, 19, 6, 9, 12, 12]

x = multimode(values)

print(x)
```

### Example

Using R with a user-defined function to find the modes of the values 4, 7, 3, 8, 11, 7, 10, 19, 6, 9, 12, 12:

```r
mode <- function(x) {
  unique_values <- unique(x)
  table <- tabulate(match(x, unique_values))
  unique_values[table == max(table)]
}

values <- c(4, 7, 3, 8, 11, 7, 10, 19, 6, 9, 12, 12)
mode(values)
```

Note: R has no built-in function to find the mode.
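If you prefer not to rely on a library helper, the same multi-mode logic can be written by hand with a frequency count. This sketch uses Python's collections.Counter and returns every value tied for the highest count, matching multimode() above:

```python
from collections import Counter

def modes(values):
    counts = Counter(values)        # frequency of each distinct value
    highest = max(counts.values())  # the top frequency
    return [v for v, c in counts.items() if c == highest]

print(modes([4, 7, 3, 8, 11, 7, 10, 19, 6, 9, 12, 12]))                    # [7, 12]
print(modes(["Alice", "John", "Bob", "Maria", "John", "Julia", "Carol"]))  # ['John']
```

Because Counter preserves insertion order, ties are reported in the order the values first appear in the data.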
http://mathhelpforum.com/advanced-math-topics/196317-nonlinear-recurrence-relations.html
# Math Help - Nonlinear recurrence relations

1. ## Nonlinear recurrence relations

Hi. This is my first post here so I hope I've posted in the right place. My question concerns finding closed forms of nonlinear recurrence relations such as the following:

$a_1=a$

$a_{n+1}=a_n^2-1\ \mbox{for}\ n\geqslant1$

This one is both nonlinear and nonhomogeneous. The even terms do form a homogeneous recurrence relation, which is nonetheless still nonlinear. Are there general methods for solving particular types of nonlinear recurrence relations? I've tried googling but the results aren't very helpful.

2. ## Re: Nonlinear recurrence relations

Generally, Riccati equations can be useful in solving nonlinear recurrence relations; however, I don't think they apply to your problem. I had a similar problem to solve for my thesis. I solved it as follows:

$x\cdot f(x+1) - f^2(x) +1 = 0$

This is a first-order, nonlinear difference equation with variable coefficients. (Equivalently, $f(x)=\sqrt{1+x\cdot f(x+1)}$, which is exactly the recursion that generates the nested radical below.) Commonly used solution methods such as Riccati equations do not seem to work nicely for this example. However, by inspection, it seems that the solution is a linear function $f(x)=mx+b$. From the definition, $f(0)=1$. This gives that $f(0)=0+b=1$; hence, $b=1$. Thus we have $f(x)=mx+1$. Substituting this into the difference equation, we get

$x\cdot (m(x+1)+1)-(mx+1)^2+1$
$=mx^2+mx+x-m^2x^2 -2mx -1 +1$
$=mx^2-mx+x-m^2x^2=0$

Furthermore, the above holds for all $x \in \mathbb{R}$, so we can choose an $x$ to solve for $m$. Take $x=1$ to get

$m(1)^2-m(1)+(1)-m^2(1)^2$
$=m-m+1-m^2$
$=1-m^2=0$
$\Longrightarrow 1=m^2 \Longrightarrow m=\pm 1.$

Taking $m=-1$ would yield negative solutions, which is not possible; hence, $m=1$ and $f(x)=x+1$. Thus $x+1$ is a particular solution to the difference equation. Hence, we get

$\sqrt{1+2\sqrt{1+3\sqrt{1+...}}}=f(2)=2+1=3.$
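The closed form above can be checked numerically. This sketch truncates the nested radical at a finite depth (seeding the innermost tail with 1, an arbitrary choice that the outer square roots quickly wash out) and confirms it approaches $f(2)=3$; it also verifies that $f(x)=x+1$ satisfies the difference equation.

```python
import math

def nested_radical(depth):
    """sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + ...))), truncated after `depth` levels."""
    val = 1.0                      # crude seed for the innermost tail
    for n in range(depth, 1, -1):  # work outward: val <- sqrt(1 + n*val)
        val = math.sqrt(1.0 + n * val)
    return val

print(nested_radical(40))  # converges to 3

# f(x) = x + 1 solves x*f(x+1) - f(x)**2 + 1 = 0 identically
f = lambda x: x + 1
assert all(x * f(x + 1) - f(x) ** 2 + 1 == 0 for x in range(100))
```

The error of the truncation shrinks roughly by a factor of 2 per level, so even a modest depth reproduces the limit to machine precision.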
https://brilliant.org/problems/how-many-roots2/
# How many roots? #2

How many positive real numbers $x$ satisfy $\sin x = \log_{100} x$?
http://www.nesbittresearch.com/blog/Virginia-2019.html
A Deep Dive into Virginia 2019 | Nesbitt Research Control of the Virginia General Assembly is up for grabs come 2019. Voters, take notice. On November 5, 2019, Virginia will join three other state legislative chambers, out of 99 in the U.S., to determine the legislative composition of its General Assembly between 2020 and 2022. The stakes are high after the wave of Democratic and Progressive victories in 2017 that flipped 15 House seats and saw the House shift much closer to parity. Politico wrote this helpful overview of the post-2016 presidential election sentiments that spurred the 2017 House victories. According to Ballotpedia, the House went from a 66-34 split under Republican control in 2015 to a 51-49 split in 2017, barely under Republican control. The Senate moved from an even 20-20 split in 2011 to a 21-19 Republican split in 2015 with Democratic Lieutenant Governor Justin Fairfax as Senate President. If 15 flipped House seats sounds underwhelming for any reason, consider The Washington Post’s characterization of the change as the “most sweeping shift in control of the legislature since Reconstruction.” The 2017 elections signaled the changing political landscape in Virginia with a flurry of closely-contested races and legislature firsts. According to Ballotpedia, the margin of victory among the 15 seats that flipped Democratic ranged from 0.8 percent in Dawn Adams’ victory in the 68th District to 25.4 percent in Jennifer Foy’s victory in the 2nd District. The closest House race came down to drawing names hidden in blacked-out film canisters, which contributed to a series of new election laws in 2018. The 94th District’s Democratic candidate Shelly Simonds and Republican incumbent David Yancey faced a random drawing when they ostensibly tied after a recount, a questionable ballot assessed to Yancey and judicial review. Yancey prevailed in the random drawing and Simonds eventually conceded. 
But even this setback demonstrated the power that a dozen votes, a single vote, can have on the electoral process. These close races, flipped seats and the wave of Democratic enthusiasm that propelled them also marked a series of important firsts for the Virginia legislature. According to Huffpost, Virginia’s blue bloom produced: Danica Roem for the 13th District, Virginia’s first transgender legislator; Elizabeth Guzman for the 31st District and Hala Ayala for the 51st District, Virginia’s first Latina House legislators; Kelly Convirs-Fowler for the 21st District and Kathy Tran for the 42nd District, Virginia’s first Asian-American House legislators; and Dawn Adams for the 68th District, Virginia’s first openly lesbian legislator. Roem’s victory, especially, garnered a wide audience. She beat conservative Republican Bob Marshall, known to LGBTQ groups as “Bigot Bob,” in a dramatic and necessary about-face for the 13th District. The 2017 results prompted increased fundraising for worried House Republicans and empowered Democrats gearing up for a contentious 2019. In June 2018, the Daily Press reported that House members’ fundraising increased by eight percent from the same time prior to the 2017 elections despite bill payments from the 2017 campaigns that reduced their cash on hand by 12 percent. Republican Delegate David Yancey, who barely kept his seat during the random drawing tiebreaker, raised 47 percent more money in the first half of 2018 than he did in the months after his 2015 victory. Republican Delegate Tim Hugo, leader of the GOP caucus who only kept his seat by a 106-vote margin in 2017, raised almost $213,000, increasing his fundraising total by 212 percent from the same period after he won in 2015. 
Freshman Democratic Delegate Danica Roem also showed fundraising strength with her approximately $91,000 haul since the beginning of 2018, “more than any other member of the House except for [Republican Speaker Kirk] Cox and [Tim] Hugo,” according to the Daily Press.

Despite the wakeup call in 2017 in the House, some Virginia state senators seem to be proceeding a little too casually. According to the Daily Press, University of Mary Washington Political Scientist Stephen Farnsworth said, “A lot of Republican senators are looking at 2017 as an aberration, but if you ran in 2017 like the House did, it’s far more real than abstract.” While the total combined state senators’ cash on hand increased by 22 percent compared to the same period in 2015 before the election, their combined fundraising dropped by 11 percent, according to the Daily Press.

Christopher Newport University Political Scientist Rachel Bitecofer told the Daily Press that Republican Senators Frank Wagner for the 7th District and Bryce Reeves for the 17th District face the most difficult races for incumbent senators in 2019, because Wagner publicly supported Medicaid expansion in the face of his party’s opposition and Reeves’ district barely voted Republican in the 2017 gubernatorial election. In the same Daily Press article, George Mason University Dean of the Schar School of Policy and Government Mark Rozell also suggested that Republican Senator Richard Black for the 13th District, who once tried to defend spousal rape, might finally face a challenger who can rid Virginia, and all of us, of his legislative powers.

Flipping any red seat blue in 2019, it still seems, would not only be a great numerical victory in the tight State Senate, but a moral one. Blue Virginia highlighted a series of Democrats, including Lucero Wiley, Kyle Green and Suhas Subramanyam, who could oust Black from the 13th District in 2019.
Lucero Wiley struck an especially determined tone in her campaign announcement, saying, “Today I would like to announce that, tired of living in a world full of discrimination and injustice, I can no longer sit down and do nothing.” She continued, “I cannot watch children being held in cages, I cannot turn a blind eye on discrimination and abuse, and I will not stay quiet while I see my people, OUR people, be object of denigration in this country…” Wiley added, “So I have chosen to stand up. To no longer resist, to no longer condone. Enough is enough. It is time for us to take our power back and to fight for what is right. To speak up and let our voices be heard.”

The full list of 2019 races for the House can be found here on The Virginia Public Access Project, and here for the State Senate.

With the lion’s share of coverage and money flowing to Virginia’s 2018 elections for all eleven of its seats in the U.S. House of Representatives and its Class 1 Senate seat held by Democrat and 2016 vice presidential nominee Tim Kaine, Virginia 2019 might be temporarily obscured. However, that doesn’t diminish its importance. According to the National Conference of State Legislatures, as of January 2018, Republicans controlled 32 state legislatures, Democrats controlled 13, four were divided and unicameral Nebraska was listed as “nonpartisan.” Speculation about devolving abortion laws back to the states following the developments with the Supreme Court, and the upcoming redistricting following the 2020 Census, highlight the importance of flipping Virginia, and any state legislature, blue.

Perhaps the handful of votes that tied Democratic House candidate Simonds and Republican incumbent Yancey and led to Yancey’s victory via random drawing in 2017 will work out to favor the blue team in some of 2019’s closer races. One doesn’t have to hope, just to vote.
2018-12-12 14:20:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20897652208805084, "perplexity": 8293.960146132204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823895.25/warc/CC-MAIN-20181212134123-20181212155623-00402.warc.gz"}
https://mhuig.github.io/NoteBook/posts/32ffa341.html
# Appendix

Find the MySQL installation path:

which mysql

Suppose the result is: /home/user1/mysql/bin/mysql
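As a small sketch of the same idea (assuming a Unix-like shell; the path `/home/user1/mysql/bin/mysql` is just the example from above), the installation's base directory can be derived from the binary's path:

```shell
# Locate the mysql client on PATH; fall back to the example path from above
MYSQL_BIN="$(command -v mysql || echo /home/user1/mysql/bin/mysql)"

# Strip the trailing components to obtain bin/ and the installation base dir
# e.g. /home/user1/mysql/bin/mysql -> /home/user1/mysql/bin -> /home/user1/mysql
BIN_DIR="$(dirname "$MYSQL_BIN")"
BASE_DIR="$(dirname "$BIN_DIR")"
echo "$BASE_DIR"
```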
2020-08-04 18:04:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4677717685699463, "perplexity": 5324.470163054237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735881.90/warc/CC-MAIN-20200804161521-20200804191521-00319.warc.gz"}
https://tumundoecuestre.com/47knkvp/aebdd0-wave-equation-example
A function describes a relationship between two values. The function f(x) = x + 1, for example, is a function because for every value of x you get a new value of f(x).

Wave Equation Applications

21.2 Some examples of physical systems in which the wave equation governs the dynamics.

21.2.1 The Guitar String (Figure 1). Initial condition and transient solution of the plucked guitar string, whose dynamics is governed by (21.1). The string is plucked into oscillation.

21.2.2 Longitudinal Vibrations of an Elastic Bar (Figure 2). Compression and rarefaction waves in an elastic bar.

General solution of the wave equation … These give rise to boundary waves, of which the reflections at interfaces were an example. Then, if a … Find the frequencies of the solutions, and sketch the standing waves that are solutions to this equation. (Note: 1 lecture, different from §9.6 in , part of §10.7 in .)

Acoustic Wave Equation, Sjoerd de Ridder (most of the slides) & Biondo Biondi, January 16th 2011. Table of Topics: Basic Acoustic Equations; Wave Equation; Finite Differences; Finite Difference Solution; Pseudospectral Solution; Stability and Accuracy; Green's function; Perturbation Representation; Born Approximation. Outline: 1. Reminder: physical significance and derivation of the wave equation, basic properties. 2. Basic linearized acoustic equations …

A solitary wave (a soliton solution of the Korteweg-de Vries equation…). This simulation is a simplified visualization of the phenomenon, and is based on a paper by Goring and Raichlen [1]. Mathematics of the Tsunami Model: this example simulates the tsunami wave phenomenon by using the Symbolic Math Toolbox™ to solve differential equations.

Free ebook https://bookboon.com/en/partial-differential-equations-ebook (an example showing how to solve the wave equation).

Learning goals: solve initial value problems with the wave equation; understand the concepts of causality, domain of influence, and domain of dependence in relation with the wave equation; become aware that the wave equation ensures conservation of energy.

Hyperbolic Equations: Wave Equations. The classical example of a hyperbolic equation is the wave equation u_tt = c^2 u_xx (2.5). The wave equation can be rewritten in the form (2.6), or as a system of two equations (2.7)-(2.8). Note that the first of these equations, (2.7), is independent of the second and can be solved on its own.

The general solution of the one-dimensional wave equation (1.1) is Φ(x,t) = F(x - ct) + G(x + ct) (1.2), where F and G are arbitrary functions of their arguments. In the x,t (space, time) plane, F(x - ct) is constant along the straight line x - ct = constant. Thus to the observer (x,t) who moves at the steady speed c along the positive x-axis, the function F is … which is an example of a one-way wave equation. To solve this, we notice that along the line x - ct = constant = k in the x,t plane, any solution u(x,t) will be constant. For if we take the derivative of u along the line x = ct + k, we have d/dt u(ct+k, t) = c u_x + u_t = 0, so u is constant on this line, and only depends on the choice of parameter …

But it is often more convenient to use the so-called d'Alembert solution to the wave equation. While this solution can be derived using Fourier series as well, it is … The one-dimensional wave equation can be solved exactly by d'Alembert's solution, using a Fourier transform method, or via separation of variables. D'Alembert devised his solution in 1746, and Euler subsequently expanded the method in 1748. (Section 4.8, D'Alembert solution of the wave equation.)

Worked examples: the wave equation. Write down the solution of the wave equation u_tt = u_xx with ICs u(x,0) = f(x) and u_t(x,0) = 0 using d'Alembert's formula. Solution: d'Alembert's formula is u(x,t) = (1/2)[f(x+t) + f(x-t)] + (1/2) ∫_{x-t}^{x+t} g(s) ds; with g = 0 this gives u(x,t) = (1/2)[f(x+t) + f(x-t)]. Illustrate the nature of the solution by sketching the profiles y = u(x,t) of the string displacement for t = 0, 1/2, 1, 3/2. We have solved the wave equation by using Fourier series. The above example illustrates how to use the wave equation to solve mathematical problems.

The speed of a wave is related to its frequency and wavelength, according to this equation: $v = f \times \lambda$, where v is the wave speed in metres per second (m/s). For example, to calculate the speed of a wave made by a ripple tank generating waves with a frequency of 2.5 Hz and a wavelength of 0.2 m, you complete the following equation: v = 2.5 × 0.2 = 0.5 m/s. To calculate the frequency of a wave, divide the speed by the wavelength. Even though the wave speed is calculated by multiplying wavelength by frequency, an alteration in wavelength does not affect wave speed: wave speed is dependent upon medium properties and independent of wave properties. Examples of wave propagation for which this independence is not true will be considered in Chapter …

Wave Speed Equation Practice Problems. The formula we are going to practice today is the wave speed equation: wave speed = wavelength × frequency, v = fλ. Variables, units, and symbols:

  Symbol   Quantity     Unit            Unit symbol
  v        wave speed   metres/second   m/s
  λ        wavelength   metre           m
  f        frequency    hertz           Hz

Remember: …

Q.1: A light wave travels with the wavelength 600 nm; find its frequency. Solution: given in the problem, wavelength λ = 600 nm and speed of light v = 3 × 10^8 m/s. The frequency is f = v/λ = (3 × 10^8)/(600 × 10^-9) = 5 × 10^14 Hz. Q.2: A sound wave …

So you'd do all of this, but then you'd be like, how do I find the period? We'd have to use the fact that, remember, the speed of a wave is either written as wavelength times frequency, or you can write … Let's say that's the wave speed, and you were asked, "Create an equation that describes the wave as a function of space and time." (PDE wave equation example: "Hi, I am currently going through past papers for a test I have tomorrow, and I have come …")

The Wave Equation and Superposition in One Dimension (Michael Fowler, UVa). The net amplitude caused by two or more waves traversing the same space is the sum of the amplitudes which would have been produced by the individual waves separately. When this is true, the superposition principle can be applied. For waves on a string, we found Newton's laws applied to one bit of string gave a differential wave equation, ∂²y/∂x² = (1/v²) ∂²y/∂t², and it turned out that sound waves in a tube satisfied the same equation.

Transverse mechanical waves (for example, a wave on a string) have an amplitude expressed as a distance (for example, metres), longitudinal mechanical waves (for example, sound waves) use units of pressure (for example, pascals), and electromagnetic waves (a form of transverse vacuum wave) express the amplitude in terms of the electric field (for example…). A wave equation typically describes how a wave function evolves in time.

Wave equation definition: a partial differential equation describing wave motion. It has the form ∇²φ = (1/c²) ∂²φ/∂t².

This example shows how to solve the wave equation using the solvepde function. The standard second-order wave equation is ∂²u/∂t² - ∇·∇u = 0. To express this in toolbox form, note that the solvepde function solves problems of the form m ∂²u/∂t² - ∇·(c∇u) + au = f, so the standard wave equation has coefficients m = 1, c = 1, a = 0, and f = 0.

Using physical reasoning, for example for the vibrating string, we would argue that in order to define the state of a dynamical system, we must initially specify both the displacement and the velocity.

4 Example: Reflected wave. In the previous two examples we specifically identified what was happening at the boundaries. This avoided the issue that equation 2 cannot be used at the boundary. We can also deal with this issue by having other types of constraints on the boundary. Redo the wave equation solution using the boundary conditions for a flute, u_x(0,t) = u_x(L,t) = 0; redo the wave equation solution using the boundary conditions for a clarinet, u(0,t) = u_x(L,t) = 0. The ideal-string wave equation applies to any perfectly elastic medium which is displaced along one dimension. For example, the air column of a clarinet or organ pipe can be modeled using the one-dimensional wave equation by substituting air-pressure deviation for string displacement, and …

The 1-D Wave Equation, 18.303 Linear Partial Differential Equations, Matthew J. Hancock, Fall 2006. 1-D Wave Equation: physical derivation. Reference: Guenther & Lee §1.2, Myint-U & Debnath §2.1-2.4. [Oct. 3, 2006] We consider a string of length l with ends fixed, and rest state coinciding with the x-axis.

Schrödinger's Equation in 1-D: Some Examples. The Schrödinger equation is a linear partial differential equation that describes the wave function or state function of a quantum-mechanical system. It is a key result in quantum mechanics, and its discovery was a significant landmark in the development of the subject. The equation is named after Erwin Schrödinger, who postulated the equation … However, the Schrödinger equation is a wave equation for the wave function of the particle in question, and so the use of the equation to predict the future state of a system is sometimes called "wave mechanics." The equation itself derives from the conservation of energy and is built around an operator called the Hamiltonian. Curvature of wave functions: Schrödinger's equation in the form d²ψ(x)/dx² = [2m(V(x) - E)/ℏ²] ψ(x) can be interpreted by saying that the left-hand side, the rate of change of slope, is the curvature, so the curvature of the function is proportional to (V(x) - …

The wave equations for sound and light alike prescribe certain conditions of continuity on surfaces where the material data have discontinuities. But "stops" limiting the diameter of a light or sound beam do likewise.

Our deduction of the wave equation for sound has given us a formula which connects the wave speed with the rate of change of pressure with the density at the normal pressure: c_s² = (∂P/∂ρ)₀ (Eq. I:47:21).

WATERWAVES 5:

  Wave type          Cause                Period             Velocity
  Sound              Sea life, ships      10^-1 - 10^-5 s    1.52 km/s
  Capillary ripples  Wind                 < 10^-1 s          0.2-0.5 m/s
  Gravity waves      Wind                 1-25 s             2-40 m/s
  Seiches            Earthquakes, storms  minutes to hours   standing waves

The horizontal velocity component of a wave propagating in the x-direction in water of constant depth d is described by the equation v_x = (agk/ω) · cosh(k(z + d))/cosh(kd) · cos(kx - ωt), where a is the wave amplitude, g is the gravity acceleration, k = 2π/λ is the wave number, λ is the wave length, and ω = √(gk tanh(kd)) is the frequency of the wave…

Example of Application of Morrison Equation 5. Example 1.5 (Wave map equations). Let φ: I × Rⁿ → Sᵐ = {x ∈ R^{m+1} : |x| = 1}. The wave map equation is given by the following system of (m + 1) equations: □φ = φ(∂_tφᵀ ∂_tφ - Σ_{i=1}^{n} ∂_iφᵀ ∂_iφ), where ᵀ denotes the transpose of a vector in R^{m+1}. Exercise: show that this is well-defined, i.e., suppose that |φ₀|² = 1 and φ₀ᵀφ₁ = 0.
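The Q.1 arithmetic above is easy to verify numerically. A minimal sketch in plain Python (the variable and function names are my own, not from the source):

```python
def wave_frequency(speed, wavelength):
    """Return the frequency f = v / lambda of a wave."""
    return speed / wavelength

# Q.1 from above: light with wavelength 600 nm travelling at 3e8 m/s
v = 3e8        # wave speed in m/s
lam = 600e-9   # wavelength in m
f = wave_frequency(v, lam)
print(f"{f:.3e} Hz")   # 5.000e+14 Hz, matching the worked solution

# The ripple-tank example: v = f * lambda = 2.5 Hz * 0.2 m
print(2.5 * 0.2)       # 0.5 m/s
```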
2021-03-08 00:47:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8087310194969177, "perplexity": 1155.7292501476682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381230.99/warc/CC-MAIN-20210307231028-20210308021028-00396.warc.gz"}
https://chemistry.stackexchange.com/questions/90263/small-but-modest-displacements-at-negligible-forces-from-dft
Small but modest displacements at negligible forces from DFT

When looking at a Gaussian log file today, I noticed the following information in the convergence criterion section after an analytical frequency calculation was performed:

  Item                  Value      Threshold  Converged?
  Maximum Force         0.000001   0.000450   YES
  RMS Force             0.000000   0.000300   YES
  Maximum Displacement  0.000190   0.001800   YES
  RMS Displacement      0.000030   0.001200   YES

This struck me as odd. How could the forces be so low but the maximum displacement still be non-negligible? Does anyone have any hypothesis about why this phenomenon would occur? This makes me wonder whether the convergence condition for the maximum displacement is unreasonably lax, if essentially vanishing forces can still produce a maximum displacement merely one order of magnitude below the default threshold. I believe the force units are Hartrees/Bohr and the distance units are angstroms.

• It is pretty common to see this: that is why we have four different convergence criteria in Gaussian – Greg Feb 8 '18 at 0:07
• I get that, but I'm wondering why it occurs. – Argon Feb 8 '18 at 1:43
• I'm not sure what units those numbers are in, but geometry convergence and its criteria are a bit of a dark art. – TAR86 Feb 8 '18 at 6:09
• I assume that's a displacement in atomic units, and that's just not a very large displacement at all. – jheindel Feb 9 '18 at 7:00
• @Argon Why do you expect similar force constants associated with the stretch of a strong bond and, e.g., a rotation? There is no mystery in the fact that they can have very different forces/displacements. – Greg Feb 10 '18 at 14:00

For clarity I will assume Gaussian performs a generic Newton-Raphson minimization (NR), which should suffice to explain the phenomenon. In NR, the linear problem $$\nabla\nabla^\ast E \Delta + \nabla E = 0$$ is solved, where $\Delta$ is the displacement.
In order to arrive at "large" displacements despite "small" (but non-zero) forces ($-\nabla E$), it suffices for the Hessian ($\nabla\nabla^\ast E$) itself to have eigenvalues close to 0, because a small number is divided by an even smaller one. This happens when the potential energy surface is very flat.
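The answer above can be made concrete with a toy two-dimensional model (plain Python; the numbers are purely illustrative, not from any actual calculation). With a diagonal Hessian, the Newton step decouples per mode, and a near-zero eigenvalue turns a tiny force into a displacement well above the quoted 0.0018 threshold:

```python
# Toy Newton-Raphson step: solve (Hessian) * delta = -gradient.
# One "stiff" mode (eigenvalue 1.0) and one very flat mode (eigenvalue 1e-4),
# both with the same tiny gradient -- i.e. both forces are "converged".
hessian_eigenvalues = [1.0, 1e-4]
gradient = [1e-6, 1e-6]

# For a diagonal Hessian the linear solve decouples into one division per mode
step = [-g / h for g, h in zip(gradient, hessian_eigenvalues)]

print(max(abs(g) for g in gradient))  # 1e-06, far below the 0.00045 force threshold
print(max(abs(s) for s in step))      # ~0.01, well above the 0.0018 displacement threshold
```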
2021-03-02 14:34:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.689388632774353, "perplexity": 737.5093089279557}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364008.55/warc/CC-MAIN-20210302125936-20210302155936-00105.warc.gz"}
http://www.faschingbauer.co.at/trainings/log/detail/2019-10-28/index.html
# Python Individual Training (5 days, starting 2019-10-28)

The course took a full week, so there was room for quite a lot. Here are the collected notes including code; for topics that were explained quickly there is a Jupyter notebook (download).

## Exercises

### Slide 51, “Exercises: Basics”

Write a program that takes a single digit as commandline parameter. Print the English word for that digit.

#!/usr/bin/python3

# Exercises: Basics (~51)
# Write a program that takes a single digit as commandline
# parameter. Print the English word for that digit.

import sys

translation = {
    0: 'zero',
    1: 'one',
    2: 'two',
    3: 'three',
    4: 'four',
    5: 'five',
    6: 'six',
    7: 'seven',
    8: 'eight',
    9: 'nine',
}

digit = int(sys.argv[1])
if 0 <= digit <= 9:
    print(translation[digit])
else:
    print('nix')

### Slide 58, “Exercises: While Loop”

Write a program that takes an integer commandline parameter and checks whether that number is prime!

#!/usr/bin/python3

# Exercises: While Loop (~58)
# Write a program that takes an integer commandline parameter and
# checks whether that number is prime!
# Exercises: Lists, Loops, Functions (~94)
# Modify the prime number detection program from one of the previous
# exercises: make the prime number detection a function, and call the
# function instead. The function (is_prime() is a likely name) takes a
# number, and returns a boolean value as appropriate.

import sys

def is_prime(candidate):
    if candidate < 2:
        return False
    divisor = 2
    while divisor <= candidate // 2:
        if candidate % divisor == 0:
            return False
        divisor += 1
    return True

candidate = int(sys.argv[1])
print(is_prime(candidate))

### Slide 94, “Exercises: Lists, Loops, Functions”

Write a function uniq() that takes a sequence as input. It returns a list with duplicate elements removed, and where the contained elements appear in the same order that is present in the input sequence. The input sequence remains unmodified.
```python
#!/usr/bin/python3

# Exercises: Lists, Loops, Functions (~94)

# Write a function uniq() that takes a sequence as input. It returns a
# list with duplicate elements removed, and where the contained
# elements appear in the same order that is present in the input
# sequence. The input sequence remains unmodified.

# Deviations from the requirement:
# 1. implement uniq() as a generator. use yield to produce items.
# 2. demonstrate how to use uniq() as a filter, connecting nodes
#    together using "pipes" as in the pseudo-filter expression:
#    random | uniq | print

import random

def uniq(l):
    '''filter that iterates over the input sequence and produces an
    item only if that was not yet seen in the input sequence.
    '''
    have = set()
    for i in l:
        if i not in have:
            have.add(i)   # remember the item, or every item passes through
            yield i

def randomnumbers(howmany):
    "produces 'howmany' random numbers."
    for _ in range(howmany):
        yield random.randrange(10)

for i in uniq(randomnumbers(100)):
    print(i)
```

### Slide 121, “Exercises: Strings”

Write a program that receives any number of arguments and prints them out right justified at column 20.

```python
#!/usr/bin/python3

# Exercises: Strings (~121)

# Write a program that receives any number of arguments and
# prints them out right justified at column 20.

import sys

for s in sys.argv[1:]:
    while len(s) >= 20:
        print(s[:20])
        s = s[20:]
    print(s.rjust(20))
```

## Miscellaneous

### Famous Generator Introduction

Producing an infinite sequence (Fibonacci, what else)

```python
#!/usr/bin/python3

# Generators, yield. implementing an infinite sequence (fibonacci is a
# cool example) which would not be so easy if we had only functions
# (these can only return once).

def fibonacci():
    prev = 1
    cur = 1
    yield prev
    yield cur
    while True:
        next = prev + cur
        yield next
        prev = cur
        cur = next

for fibonum in fibonacci():
    print(fibonum)
```

### eval(): Convert a String into a Python Data Structure

During the uniq exercise, people tend to want to pass a Python list from the commandline, naively.
Question: how can I take an argument from the commandline (say, `sys.argv[1]`) and interpret that as a list?

The following program does that. Call it like so,

```
$ ./eval-argv.py '[1, 2, 3, 4]'
```

(‘$’ is the shell prompt; the quotes are necessary to prevent the shell from splitting arguments at the spaces)

```python
#!/usr/bin/python3

import sys

input_list_string = sys.argv[1]
input_list = eval(input_list_string)
print(input_list)
```

### Operator Overloading

Here’s a little snippet that demonstrates this. See the docs for more.

```python
#!/usr/bin/python3

# question: from C++ I know that I can overload (arithmetic) operators
# for types/classes that I define. how is this done in Python?

class Integer:
    def __init__(self, n):
        self.__n = n

    @property
    def n(self):
        return self.__n

    def __lt__(self, rhs):
        return self.__n < rhs.__n

    def __le__(self, rhs):
        return not self > rhs

    def __eq__(self, rhs):
        return self.__n == rhs.__n

    # (the method names below were lost in extraction and have been
    # reconstructed from the docstrings and the calls further down)

    def __iadd__(self, n):
        'iadd: +=, in-place addition, called on self in "a += b"'
        self.__n += n
        return self

    def __add__(self, n):
        'add: +, called on self, the left hand "a" side in "a+b"'
        new_number = self.__n + n
        return Integer(new_number)

    def __radd__(self, n):
        'radd: +, called on the right hand side if the lhs does not support it'
        new_number = n + self.__n
        return Integer(new_number)

x = Integer(1)
y = Integer(2)
print('1<2', x < y)
print('2<1', y < x)
print('1>2', x > y)
print('1==1', x == x)
print('1!=1', x != x)
print('1<=2', x <= y)
print('1<=1', x <= x)

x += 1
print(x.n)
z = x + 1
print(z.n)
z = 1 + x
print(z.n)
```

### Getters and Setters

Called “Properties” in Python; see below for a snippet. See the docs for more.

```python
#!/usr/bin/python3

# question: in C# we have getters and setters. how is this done in
# Python?

# answer: use the property *decorator*

class MakesNoSense:
    def __init__(self, number):
        self.__number = number

    @property
    def number(self):
        return self.__number

    @number.setter
    def number(self, n):
        self.__number = n

num = MakesNoSense(42)
print(num.number)
num.number = 666
print(num.number)
```

### More on List Comprehensions and Generator Expressions

```python
#!/usr/bin/python3

input_numbers = [1,2,3,4]

def squares(numbers):
    '''return a list containing the squares of the numbers from the
    input sequence
    '''
    list_of_squares = []
    for i in numbers:
        list_of_squares.append(i**2)
    return list_of_squares

print('dumb function: squares({})'.format(input_numbers))
for i in squares(input_numbers):
    print(i)

# for such simple things as square numbers, use a list
# comprehension. this makes the code shorter - you omit a function
# definition.
print('list comprehension: [n**2 for n in {}]'.format(input_numbers))
for i in [n**2 for n in input_numbers]:
    print(i)

# list comprehensions still allocate memory to hold the list. with
# minimal effort, you can save that allocation by transforming the
# list comprehension into a generator expression.
print('generator expression: (n**2 for n in {})'.format(input_numbers))
for i in (n**2 for n in input_numbers):
    print(i)
```

### More on Local and Global Scope and Global Variables

```python
#!/usr/bin/python3

# Functions: Local and Global Variables (~92)

# (the original function boundaries in this snippet were lost in
# extraction; the docstrings have been merged into one function)

def local_assignment():
    '''assign to l in local scope. this creates a local variable l.
    (variables are generally created at first assignment.)

    accessing a variable that has never been assigned to in local
    scope would go out into the enclosing (global) scope and look it
    up there. assigning to g in local scope, in contrast, does *not*
    assign to the global variable g, but creates a local variable g.
    '''
    l = 7
    g = 42

def explicit_global_assignment():
    global g
    print('explicit_global_assignment: before assignment g =', g)
    g = 42
    print('explicit_global_assignment: after assignment g =', g)

# first assignment to g in global scope creates g in global scope.
g = 666

local_assignment()
print('global g =', g)
explicit_global_assignment()
print('global g =', g)
```

### Closures: Between Local and Global

```python
#!/usr/bin/python3

# closure: at the time of function creation (execution of the 'def'
# statement), all referenced names are captured to form an
# intermediate scope, the 'closure'

def create_function(parameter):
    loc = 42

    # at this point, we have two variables in local scope - 'loc' and
    # 'parameter'.
    print('create_function: loc={}, parameter={}'.format(loc, parameter))

    # with this in place, create a function object by executing the
    # def statement. note how the function is not executed, but only
    # created/compiled.
    def inner_function():
        # reference variables loc and parameter. these are defined
        # neither in local nor in global scope. but they are found in
        # the enclosing scope - the locals of create_function(), which
        # forms an intermediate scope, the 'closure', which is added
        # to the lookup chain of inner_function().
        print('parameter {}, loc {}'.format(parameter, loc))

    return inner_function

f_one = create_function('one')
f_one()
f_two = create_function('two')
f_two()
```

## Project

We had two days left, in addition to the usual three days which are sufficient to learn the basics. A group project was launched, with the somewhat real-life topic “talking to switches and doing all kinds of stuff with that information”. This gave us the opportunity to discover a couple of areas more closely.

* Object oriented programming (a switch has interfaces, and both have properties)
* Storing all that in databases
* Exception handling
* Commandline interfaces
* Unit testing
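The last bullet, unit testing, can be sketched against the is_prime() function from the While Loop exercise. The test class name and the particular test cases below are mine, not part of the course notes:

```python
import unittest

def is_prime(candidate):
    # the prime check from the "While Loop" exercise, repeated here so
    # the test file is self-contained
    if candidate < 2:
        return False
    divisor = 2
    while divisor <= candidate // 2:
        if candidate % divisor == 0:
            return False
        divisor += 1
    return True

class IsPrimeTest(unittest.TestCase):
    def test_small_primes(self):
        for p in (2, 3, 5, 7, 11, 13):
            self.assertTrue(is_prime(p))

    def test_non_primes(self):
        for n in (-1, 0, 1, 4, 9, 100):
            self.assertFalse(is_prime(n))

# run with: python -m unittest <this module>
```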
https://en.m.wikipedia.org/wiki/Limiting_recursive
# Computation in the limit

In computability theory, a function is called limit computable if it is the limit of a uniformly computable sequence of functions. The terms computable in the limit, limit recursive and recursively approximable are also used. One can think of limit computable functions as those admitting a computable guessing procedure that is eventually correct about their true value. A set is limit computable just when its characteristic function is limit computable.

If the sequence is uniformly computable relative to $D$, then the function is limit computable in $D$.

## Formal definition

A total function $r(x)$ is limit computable if there is a total computable function $\hat{r}(x,s)$ such that

$$r(x)=\lim_{s\to\infty}\hat{r}(x,s).$$

The total function $r(x)$ is limit computable in $D$ if there is a total function $\hat{r}(x,s)$ computable in $D$ also satisfying

$$r(x)=\lim_{s\to\infty}\hat{r}(x,s).$$

A set of natural numbers is defined to be computable in the limit if and only if its characteristic function is computable in the limit. In contrast, the set is computable if and only if it is computable in the limit by a function $\phi(t,i)$ and there is a second computable function that takes input $i$ and returns a value of $t$ large enough that $\phi(t,i)$ has stabilized.

## Limit lemma

The limit lemma states that a set of natural numbers is limit computable if and only if the set is computable from $0'$ (the Turing jump of the empty set). The relativized limit lemma states that a set is limit computable in $D$ if and only if it is computable from $D'$. Moreover, the limit lemma (and its relativization) hold uniformly.
Thus one can go from an index for the function $\hat{r}(x,s)$ to an index for $r(x)$ relative to $0'$. One can also go from an index for $r(x)$ relative to $0'$ to an index for some $\hat{r}(x,s)$ that has limit $r(x)$.

### Proof

As $0'$ is a computably enumerable set, it must be computable in the limit itself, as the computable function can be defined by

$$\hat{r}(x,s)=\begin{cases}1&\text{if by stage }s,\ x\text{ has been enumerated into }0'\\0&\text{if not}\end{cases}$$

whose limit $r(x)$ as $s$ goes to infinity is the characteristic function of $0'$.

It therefore suffices to show that limit computability is preserved by Turing reduction, as this will show that all sets computable from $0'$ are limit computable. Fix sets $X,Y$ which are identified with their characteristic functions and a computable function $X_s$ with limit $X$. Suppose that $Y(z)=\phi^{X}(z)$ for some Turing reduction $\phi$ and define a computable function $Y_s$ as follows:

$$Y_s(z)=\begin{cases}\phi^{X_s}(z)&\text{if }\phi^{X_s}(z)\text{ converges in at most }s\text{ steps}\\0&\text{otherwise}\end{cases}$$

Now suppose that the computation $\phi^{X}(z)$ converges in $s$ steps and only looks at the first $s$ bits of $X$. Now pick $s'>s$ such that for all $z<s$ and all $t\geq s'$, $X_t(z)=X(z)$. If $t>s'$ then the computation $\phi^{X_t}(z)$ converges in at most $s'<t$ steps to $\phi^{X}(z)$.
Hence $Y_s(z)$ has a limit of $\phi^{X}(z)=Y(z)$, so $Y$ is limit computable.

As the $\Delta_2^0$ sets are just the sets computable from $0'$ by Post's theorem, the limit lemma also entails that the limit computable sets are the $\Delta_2^0$ sets.

## Limit computable real numbers

A real number $x$ is computable in the limit if there is a computable sequence $r_i$ of rational numbers (or, which is equivalent, computable real numbers) which converges to $x$. In contrast, a real number is computable if and only if there is a sequence of rational numbers which converges to it and which has a computable modulus of convergence.

When a real number is viewed as a sequence of bits, the following equivalent definition holds. An infinite sequence $\omega$ of binary digits is computable in the limit if and only if there is a total computable function $\phi(t,i)$ taking values in the set $\{0,1\}$ such that for each $i$ the limit $\lim_{t\to\infty}\phi(t,i)$ exists and equals $\omega(i)$. Thus for each $i$, as $t$ increases the value of $\phi(t,i)$ eventually becomes constant and equals $\omega(i)$. As with the case of computable real numbers, it is not possible to effectively move between the two representations of limit computable reals.

## Examples

* The real whose binary expansion encodes the halting problem is computable in the limit but not computable.
* The real whose binary expansion encodes the truth set of first-order arithmetic is not computable in the limit.
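The staged guessing in the proof of the limit lemma can be illustrated with a small runnable sketch. The function name and the choice of the Collatz map are mine, purely for illustration: membership in the set of numbers whose Collatz trajectory reaches 1 is verified by an unbounded search, and the stage-$s$ guess below is a total computable function whose limit over $s$ is the true answer (0 forever if the trajectory never reaches 1, eventually 1 otherwise):

```python
def r_hat(n, s):
    """Stage-s guess: run at most s Collatz steps from n and answer 1
    if the trajectory has reached 1 by then, else 0.  Each individual
    guess is computable, and the guesses stabilize at the true answer
    as s grows."""
    x = n
    for _ in range(s):
        if x == 1:
            return 1
        x = 3 * x + 1 if x % 2 else x // 2
    return 1 if x == 1 else 0

# the guesses for n = 27 are wrong for a while, then stabilize
# (27 needs more than 50 but fewer than 200 steps to reach 1):
guesses = [r_hat(27, s) for s in (10, 50, 200)]
```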
http://mathoverflow.net/revisions/71794/list
# Where are $+$, $-$ and $\infty$ in bordered Heegaard-Floer theory?

Here goes my first MO-question. I've just read Lipshitz, Ozsváth and Thurston's recently updated "A tour of bordered Floer theory". To set the stage let me give two quotes from this paper.

> Heegaard Floer homology has several variants; the technically simplest is $\widehat{HF}$, which is sufficient for most of the 3-dimensional applications discussed above. Bordered Heegaard Floer homology, the focus of this paper, is an extension of $\widehat{HF}$ to 3-manifolds with boundary.

> [...] the Heegaard Floer package contains enough information to detect exotic smooth structures on 4-manifolds. For closed 4-manifolds, this information is contained in $HF^+$ and $HF^-$; the weaker invariant $\widehat{HF}$ is not useful for distinguishing smooth structures on closed 4-manifolds.

Since I am mainly interested in closed 4-manifolds, I have not paid too much attention to the developments in bordered Heegaard-Floer theory. But right from the beginning I have wondered why only $\widehat{HF}$ appears in the bordered context. So my question is:

Why are there no $^+$, $^-$ or $^\infty$ flavors of bordered Heegaard-Floer theory? Are the reasons of a technical nature or is there an explanation that the theory cannot give more than $\widehat{HF}$?

I assume there are issues with the moduli spaces of holomorphic curves that would be relevant to defining bordered versions of the other flavors of Heegaard-Floer theory, but I am neither enough of an expert on holomorphic curves to immediately see the problems nor could I find anything in the literature that pins down the problems. Any information is very much appreciated.
https://knowledgewap.com/best-cryptocurrency-2022/
# Best Cryptocurrency 2022 – Bitcoin & Ethereum 2022 Top Crypto Coin

A Forbes article claims that billionaire investor Mark Cuban keeps 30% of his cryptocurrency investment in ETH (half of what he holds in BTC). Experienced investors tend to be more open to investing in Ethereum than to other cryptocurrencies, which is telling: Cuban suggests following a similar allocation, investing 30% of your cryptocurrency holdings in Ethereum if your comfort level allows it.

In addition, Ethereum is the most popular blockchain development platform, and it is not difficult to see why ETH could appreciate. Ethereum looks like a good investment option because it has stood side by side with Bitcoin for many years. As a cryptocurrency and blockchain platform, Ethereum is loved by software developers for its potential applications, such as so-called smart contracts that are automatically triggered when conditions are met, and non-fungible tokens (NFTs). In just five years, its price has risen from $11 to $2,500, that is, by more or less 22,000%. Recently, the value of Ethereum has dropped, while Polkadot has grown by more than 1,300% since its inception.

Polkadot has proven to be one of Ethereum's leading competitors. Created in 2020 by Ethereum co-founder Gavin Wood, Polkadot has distinctive technical features: external audits of its security circuits, inter-blockchain fluidity, and energy efficiency are some of the features that cryptocurrency investors are passionate about. At first glance, Polkadot is very similar to Ethereum: both are blockchain platforms on which developers can create decentralized applications and deploy smart contracts. Cryptocurrencies can use any number of blockchains; Polkadot (and its eponymous token) seeks to integrate them by creating a network that connects different blockchains so that they can work together.
The ability to create projects on the Ethereum blockchain enables the ETH cryptocurrency to go beyond what has already been achieved. Cryptocurrency is a good way for investors to make money, and the best cryptocurrency is the one that can really make an impact in the blockchain field. Investing in cryptocurrency not only opens up a new way of investing, but also gives you the opportunity to be part of the future today. From the obscure assets people used for suspicious transactions on the deep web to one of the most popular investments in the world, cryptocurrencies have undoubtedly come a long way.

Getting started is not as hard as it may sound. From our list, you can choose a cryptocurrency to trade, make inexpensive transactions, or even become a cryptocurrency investor. Most major cryptocurrencies can be a smart place to start investing right now. Given the interest of cryptocurrency enthusiasts, we have listed some cryptocurrencies that did well in 2021 and are expected to generate high returns in 2022. Don't worry, we've compiled a solid list of the top 3 cryptocurrencies you should consider investing in in 2022. We have selected the 15 best cryptocurrencies to explode this year. Knowing the top 5 cryptocurrencies today, we can decide to invest in one or more of them.

While no one knows which cryptocurrency is the most promising, especially when it comes to long-term investments, some conclusions can still be drawn from a careful study of individual coins and the cryptocurrency market in general. In-depth study of the cryptocurrency market, calmness and risk management will help you make money by investing in cryptocurrencies. Newbies in the cryptocurrency industry may find it difficult to master all the knowledge needed to successfully invest in cryptocurrencies.
From Bitcoin and Ethereum to Dogecoin and Tether, there are thousands of different cryptocurrencies, which can leave you at a loss when you first enter the cryptocurrency world. Bitcoin, Litecoin, Ethereum and Dogecoin: you may have heard of them all by now.

## Bitcoin

Bitcoin, also known as BTC, was the first and is possibly the most famous cryptocurrency. There is no doubt that Bitcoin is the godfather of all cryptocurrencies, arguably the best-known investment in the cryptocurrency world, with incredible returns. Let us say without hesitation that Bitcoin has been the most profitable cryptocurrency. Despite the correction in the cryptocurrency market during the summer, the leading cryptocurrency Bitcoin has grown more than fivefold since the beginning of this year. We will not mention low-liquidity coins; instead, we will list the best of the already popular tokens in the cryptocurrency market. Fortunately, we have expert opinions on the top 5 cryptocurrencies to watch next year.

In this article, we take a look at the three major cryptocurrencies that could explode in 2022, why they might explode, and how much wise investors should invest in them. The second most popular cryptocurrency may draw attention away from Bitcoin in 2022. Once Ethereum 2.0 is fully implemented, Ethereum could become unstoppable and replace Bitcoin as the largest cryptocurrency. Ethereum's native coin, Ether (ETH), is considered a coin that could replace Bitcoin. Ether is required for any dApp, smart contract, or any product provided by the Ethereum blockchain. Every application, smart contract, token, or cryptocurrency created on the Ethereum network requires ETH to use it.

## Ethereum

Ethereum has its own cryptocurrency called Ether (ETH), which can be used to transfer money, buy goods, or power any product on the Ethereum network. It also has a blockchain-based network, and the cryptocurrency is called Ether, or ETH for short.
It can also be sold or exchanged for other forms of cryptocurrency such as Ethereum or Bitcoin. From creating utility programs for users as a cross-border payment platform to integrating and working with traditional banking systems, this cryptocurrency has developed into one of the most promising cryptocurrencies, making it a potential contender to become the next king of the cryptocurrency world in 2022. Since the launch of the leading cryptocurrency exchange and trading platform Binance, this cryptocurrency has developed and grown.

Not only Bitcoin, but many other cryptocurrencies have also performed well, and Ethereum has seen faster adoption than Bitcoin since 2019. Some speculate that Ethereum will be the top-selling digital currency, likely to surpass Bitcoin by the end of 2022. DeVere Group CEO Nigel Green told The Telegraph in August 2021 that he believed Ethereum would outperform Bitcoin for the remainder of 2021 and that its value would exceed BTC within five years (2026). He also expects 2022 to be a successful year for the cryptocurrency industry.

Stellar Moon Publishing has compiled this book to offer an overview of the best trading tips and strategies for 2021 and 2022. With 2022 approaching, many investors are now looking at cryptocurrencies but are not entirely sure which one to choose. This brings us to 10 potential cryptocurrencies that could be the next king of cryptocurrencies in 2022. The reason Stellar makes it to the list of "10 Potential Cryptocurrencies That Could Be the Next Cryptocurrency King in 2022" is not only its low foreign exchange costs, but also the phenomenal growth that has been observed over the past two months.
http://stackoverflow.com/questions/11624108/performance-of-wkhtmltopdf/11641661
# Performance of wkhtmltopdf

We are intending to use wkhtmltopdf to convert html to pdf but we are concerned about the scalability of wkhtmltopdf. Does anyone have any idea how it scales? Our web app potentially could attempt to convert hundreds of thousands of (relatively complex) html documents, so it's important for us to have some idea. Has anyone got any information on this?

-

First of all, your question is quite general; there are many variables to consider when asking about scalability of any project. Obviously there is a difference between converting "hundreds of thousands" of HTML files over a week and expecting to do that in a day, or an hour. On top of that "relatively complex" HTML can mean different things to other people.

That being said, I figured since I have done something similar to this, converting approximately 450,000 html files, utilizing wkhtmltopdf, I'd share my experience. Here was my scenario:

* 450,000 HTML files
* 95% of the files were one page in length
* generally containing 2 images (relative path, local system)
* tabular data (sometimes contained nested tables)
* simple markup elsewhere (strong, italic, underline, etc)
* A spare desktop PC
* 8GB RAM
* 2.4GHz Dual Core Processor
* 7200RPM HD

I used a simple single threaded script written in PHP to iterate over the folders and pass the html file path to wkhtmltopdf. The process took about 2.5 days to convert all the files, with very minimal errors. I hope this gives you insight into what you can expect from utilizing wkhtmltopdf in your web application. Some obvious improvements would come from running this on better hardware but mainly from utilizing a multi-threaded application to process files simultaneously.

- FYI for anyone who doesn't like to do math, that averages to 480ms per doc –  Derek Dahmer Sep 2 at 20:01

In my experience performance depends a lot on your pictures. If there are lots of large pictures it can slow down significantly.
If at all possible I would try to stage a test with an estimate of what the load would be for your servers. Some people do use it for intensive operations, but I have never heard of hundreds of thousands. I guess like everything, it depends on your content and resources. The following quote is straight off the wkhtmltopdf mailing list:

> I'm using wkHtmlToPDF to convert about 6000 E-mails a day to PDF. It's all done on a quadcore server with 4GB memory... it's even more than enough for that.

There are a few performance tips, but I would suggest finding out what your bottlenecks are before optimizing for performance. For instance I remember some person saying that if possible, loading images directly from disk instead of having a web server in between can speed it up considerably.

Edit: Adding to this, I just had some fun playing with wkhtmltopdf. Currently on an Intel Centrino 2 with 4GB memory, generating a PDF with 57 pages of content (mixed p, ul, table), ~100 images and a toc takes consistently < 7 seconds. I'm also running visual studio, browser, http server and various other software that might slow it down. I use stdin and stdout directly instead of files.

- PLUS ONE for the images from disk instead of web server in between. I just tested it and saved 70% of generation time ! –  np87 Mar 31 at 12:21

We tried to use wkhtmltopdf in several implementations. My objects are huge tables of generated coordinate points; typically my pdf runs to 500 pages. We tried these .NET ports of wkhtmltopdf:

* Pechkin. Pro: doesn't need another app. Contra: slow; 500 pages generated in about 5 minutes.
* PdfCodaxy. Only contra: slow. Slower than pure wkhtmltopdf. Requires installed wkhtmltopdf. Problems with non-unicode text.
* Nreco. Only contra: slow. Slower than pure wkhtmltopdf. Requires installed wkhtmltopdf. Incorrectly unlocks libs after use (for me).

We try to use the wkhtmltopdf binary invoked from C# code.
Pro: easy to use, faster than the libs. Contra: needs temporary files (cannot use Stream objects). Breaks with very huge (100MB+) html files, as do the other libs.

- Regarding NReco.PdfGenerator, I have no idea how it can be slower than pure WkHtmlToPdf (internally it invokes WkHtmlToPdf.exe in a separate process). Also it does NOT require installed WkHtmlToPdf: all files are embedded into the DLL and extracted automatically if missing. –  Vitaliy Fedorchenko Sep 10 at 16:12
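The first answer attributes most of the potential speedup to processing files simultaneously. A minimal sketch of that idea follows; the function names, the worker count, and the folder layout are mine, and it assumes a wkhtmltopdf binary on PATH (threads are enough here because each call blocks on its own external process):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def convert_one(html_path, binary="wkhtmltopdf"):
    """Convert one HTML file to a PDF next to it; return the exit code."""
    src = Path(html_path)
    dst = src.with_suffix(".pdf")
    proc = subprocess.run([binary, str(src), str(dst)],
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)
    return proc.returncode

def convert_all(folder, binary="wkhtmltopdf", workers=4):
    """Convert every .html file in 'folder', 'workers' files at a time."""
    files = [str(p) for p in sorted(Path(folder).glob("*.html"))]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda f: convert_one(f, binary), files))
```

Tuning `workers` to the number of cores is the main knob; wkhtmltopdf itself is largely CPU-bound per document.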
https://awwalker.com/2018/03/08/dirichlets-theorem-and-sieves/
# Dirichlet’s Theorem and Sieves Dirichlet’s Theorem on the infinitude of primes in arithmetic progressions from 1837 is often viewed as the first result in analytic number theory. To prove this result, Dirichlet shows that it suffices to prove the non-vanishing of (non-trivial) Dirichlet $L$-functions $\displaystyle L(s,\chi) := \sum_{n \geq 1} \frac{\chi(n)}{n^s}$ at the special point $s=1$. Dirichlet’s theorem is without a doubt a beautiful result, but the reduction to $L(1,\chi) \neq 0$ is often introduced in a clumsy way. For example, the proof given in Apostol’s Introduction to Analytic Number Theory exposes the significance of $L(1,\chi)\neq 0$ while trying to prove the implication $\displaystyle L(1,\chi) \sum_{n \leq X} \frac{\mu(n)\chi(n)}{n} = O(1) \quad \Longrightarrow \quad \sum_{n \leq X} \frac{\mu(n)\chi(n)}{n} = O(1).$ (By this point in the proof, the first result has been proven and the second is known to imply Dirichlet’s theorem.) In this post I’d like to give a different (and possibly new) proof of the reduction of Dirichlet’s theorem to the non-vanishing of $L(1,\chi)$ which borrows some ideas from sieve theory. After this, I’ll show how the stronger hypothesis $L(1+it,\chi) \neq 0$ for $t \in \mathbb{R}$ can be used to prove an asymptotic for the size of the sieved set $\displaystyle S:=\{ n : p \mid n, \, p \text{ prime} \Rightarrow p \equiv b \!\!\!\mod k\}.$ — REDUCTION OF DIRICHLET’S THEOREM — The method of sieving with Dirichlet series is an effective way of estimating sieved sets that exhibit some sort of multiplicative structure. Roughly, to estimate the size of a set $S$, let $a(n)$ denote the indicator function of $S$ and consider the Dirichlet series $\displaystyle F(s) = \sum_{n \geq 1} \frac{a(n)}{n^s}.$ Note that $F(s)$ has an Euler product if and only if the indicator function $a(n)$ is multiplicative.
In the case of the set $S$ from the introduction, we consider $\displaystyle F(s;b) = \prod_{p\equiv b(k)} \left(1 - \frac{1}{p^s}\right)^{-1}.$ This series is analytic in the right half-plane $\Re s > 1$ by comparison to the Riemann zeta function. To understand its analytic properties, we need a way to detect the condition $p \equiv b(k)$. There’s really just one way to do this, which uses orthogonality of characters: Lemma 1: We have $\displaystyle \sum_{\chi} \chi(p)\overline{\chi(b)} = \begin{cases} \phi(k), & \quad p \equiv b(k) \\ 0, & \quad \text{else},\end{cases}$ in which the sum runs over all characters $\chi$ mod $k$. In particular, $\displaystyle \prod_\chi \prod_p \Bigg( 1 - \frac{\chi(p)\overline{\chi(b)}}{p^s} \Bigg)^{\!-1} = \prod_p \Bigg( 1-\sum_\chi \chi(p)\overline{\chi(b)} p^{-s} + O(p^{-2s})\Bigg)^{\!-1}$ $\displaystyle = \prod_{p \equiv b(k)} \left(1-\frac{\phi(k)}{p^s}+O(p^{-2s})\right)^{-1} \prod_{p \not\equiv b(k)} \left( 1- O(p^{-2s})\right)^{-1}$ $\displaystyle = \prod_{p \equiv b(k)} \left( 1- \frac{1}{p^s}\right)^{-\phi(k)} H(s) = F(s)^{\phi(k)} H(s)$ in which $H(s)$ is analytic and non-zero in $\Re s > \frac{1}{2}$. Of course, this implies that the analytic properties of $F(s;b)$ in $\Re s > \frac{1}{2}$ may be obtained from those of the Euler products $\displaystyle D(s,\chi; b):=\prod_p \Bigg( 1 - \frac{\chi(p)\overline{\chi(b)}}{p^s} \Bigg)^{\!-1}.$ When $\chi$ is the trivial character mod $k$, $D(s,\chi;b)$ reduces to $\zeta(s)$ (up to some Euler factors which divide $k$). Thus $F(s;b)$ has a pole at $s=1$ unless one of the terms $D(s,\chi;b)$ corresponding to $\chi \neq 1$ vanishes at $s=1$. Some remarks are in order: 1. If Dirichlet’s theorem is false, then $F(s;b)$ is a finite product which extends analytically into the half-plane $\Re s > 0$. Thus we prove Dirichlet’s theorem if we can show that $F(s)^{\phi(k)}$ has a pole at $s=1$. 2. 
Except for the ugly addition of $\overline{\chi(b)}$, the Euler products of the functions $D(s,\chi;b)$ are exactly the Euler products of the Dirichlet $L$-functions $L(s,\chi)$. Wouldn’t it be nice if we could relate the analytic properties of $D(s,\chi;b)$ and $L(s,\chi)$? It turns out that relating $L(s,\chi)$ and $D(s,\chi;b)$ to each other isn’t actually that hard. The idea is simple (except for one technical result), so I’ll sketch the proof before filling in the last detail. Lemma 2: The infinite product $\prod(1+a_n)$ converges to a non-zero value if and only if the series $\sum a_n$ converges. Assuming the lemma, note that $D(s,\chi;b)$ converges to a non-zero value if and only if $\displaystyle \sum_p \frac{\chi(p) \overline{\chi(b)}}{p^s}$ converges. Of course, $\chi(b)$ doesn’t affect convergence here, so we can factor it out, apply the Lemma in the reverse direction, and relate convergence of $D(s,\chi;b)$ to $L(s,\chi)$! There’s only one issue; namely, that the form of Lemma 2 you probably remember from calculus actually only holds when convergence is replaced with absolute convergence. And, alas, this is not satisfied at $s=1$. This isn’t a fatal flaw, though — we just need a stronger form of Lemma 2: Lemma 2.5: If $\sum \vert a_n \vert^2 < \infty$, then $\prod (1+a_n)$ converges to a non-zero value if and only if the series $\sum a_n$ converges. I first found this result in the AMM article Conditional Convergence of Infinite Products, by William Trench, but Trench notes that it appears in Knopp’s Theory and Applications of Infinite Series. In any case, we can use it to prove the following result: Proposition: Suppose that $\Re s > \frac{1}{2}$. Then $L(s,\chi)$ converges and is non-zero if and only if the same holds for $D(s,\chi;b)$. Corollary: If $L(1,\chi) \neq 0$ for all non-trivial $\chi$, then $F(s;b)^{\phi(k)}$ has a pole of order 1 at $s=1$. (This proves Dirichlet’s theorem by Remark 1.)
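The orthogonality relation of Lemma 1 is easy to check numerically for a small modulus. A sketch for $k=5$, where 2 is a primitive root, so the four characters are determined by $\chi_j(2^a) = i^{ja}$ (this construction is specific to the chosen modulus):

```python
# The four Dirichlet characters mod 5: since 2 is a primitive root mod 5,
# every unit is 2^a mod 5, and chi_j(2^a) = i**(j*a) for j = 0, 1, 2, 3.
DLOG = {pow(2, a, 5): a for a in range(4)}  # discrete log base 2 mod 5

def chi(j, n):
    if n % 5 == 0:
        return 0
    return 1j ** (j * DLOG[n % 5])

def orthogonality_sum(p, b):
    # Left-hand side of Lemma 1: sum over all characters mod 5.
    return sum(chi(j, p) * chi(j, b).conjugate() for j in range(4))

# orthogonality_sum(p, b) equals phi(5) = 4 when p = b (mod 5), else 0.
```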
— ESTIMATING A CERTAIN SIEVED SET — We can actually extract a fair bit more from these analytic preliminaries. In general, we expect to be able to study the asymptotic growth of $\displaystyle \#\{ n \leq X : p \mid n, \, p \text{ prime} \Rightarrow p \equiv b \!\!\!\mod k\}$ by studying the growth of the generating function $F(s;b)$ about its dominant singularity. Our task is complicated by the fact that $F(s;b)$ has a pole of fractional order at $s=1$, but these concerns melt away with the right theorem. In this case, we apply a theorem due to Raikov (1954): Theorem: Let $F(s) = \sum_{n \geq 1} a(n)/n^s$ be a Dirichlet series with non-negative coefficients. Suppose that $F(s)$ converges in $\Re s >1$ and that $F(s)$ extends analytically to $\Re s \geq 1$ except at $s=1$. Suppose further that $\displaystyle F(s) = \frac{H(s)}{(s-1)^{1-\alpha}},$ in which $\alpha \in \mathbb{R}$ and $H(s)$ is analytic and non-zero in $\Re s \geq 1$. Then $\displaystyle \sum_{n \leq X} a(n) \sim \frac{H(1) X}{\Gamma(1-\alpha) (\log X)^\alpha}.$ The function $F(s;b)$ satisfies the hypotheses of Raikov’s theorem provided that the Dirichlet series $D(s,\chi;b)$ do not vanish in the region $\Re s \geq 1$. But this follows from the previous Proposition and the well-known fact that $L(s,\chi) \neq 0$ on the line $\Re s =1$. We conclude that $\displaystyle \#\{ n \leq X : p \mid n, \, p \text{ prime} \Rightarrow p \equiv b \!\!\!\mod k\} \sim \frac{cX}{(\log X)^{1-1/\phi(k)}}$ for some $c > 0$ which depends on $b$ and $k$. More generally, we can consider sets of the form $\displaystyle S_B := \{n \geq 1 : p \mid n, \, p \text{ prime} \Rightarrow (p \!\!\!\mod k) \in B\}$ where $B$ is any subset of invertible residues mod $k$. Since the Dirichlet series associated to $S_B$ factors as $\displaystyle F(s;B):=\prod_{b \in B} F(s;b),$ we see at once that $F(s;B)$ has a pole of order $\# B/\phi(k)$ at $s=1$. 
Raikov’s Tauberian theorem then implies that $S_B$ has $\displaystyle \sim \frac{c_B X}{(\log X)^{1-\# B/\phi(k)}}$ elements $n \leq X$. — EXERCISES — Exercise 1: Let $S$ denote the set of integers which may be written as a sum of two squares. Prove that $S$ has $\sim c X/ \sqrt{\log X}$ elements $n \leq X$. Exercise 2: Use Euler’s classification of odd perfect numbers to show that the set of odd perfect numbers has density 0. (It’s actually known that the number of odd perfect numbers at most $X$ is $O(x^\epsilon)$ for all $\epsilon > 0$.) Exercise 3: Let $P$ be a finite set of primes and let $S$ denote the set of integers composed only of powers of elements of $P$. Find the asymptotic size of $S$. Exercise 4: Let $A$ be any set of positive integers and let $S$ denote the set of integers which can be written as a product of $a$th powers, as $a$ varies through $A$. Find the asymptotic size of $S$. Exercise 5: We call an integer $n$ square-full if $p^2 \mid n$ for every prime $p$ which divides $n$. Find the asymptotic size of the set of square-full numbers. Along the way, show that the number of representations of $n$ as a product of a square-full number and a 6th power equals the number of representations of $n$ as a product of a square and a cube.
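The sieved sets above are easy to tabulate by brute force, which is a handy sanity check on the asymptotics (and on the exercises). A sketch for $b=1$, $k=4$, parameters chosen purely for illustration:

```python
def in_sieved_set(n, b=1, k=4):
    # True if every prime factor of n is congruent to b mod k
    # (n = 1 belongs vacuously).
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            if p % k != b:
                return False
            while m % p == 0:
                m //= p
        p += 1
    return m == 1 or m % k == b

def sieved_up_to(X, b=1, k=4):
    return [n for n in range(1, X + 1) if in_sieved_set(n, b, k)]
```

Plotting len(sieved_up_to(X)) against $cX/(\log X)^{1-1/\phi(k)}$ for growing $X$ gives a rough numerical picture of Raikov's asymptotic.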
2020-08-06 07:37:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 129, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9691842198371887, "perplexity": 135.58636181081783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736883.40/warc/CC-MAIN-20200806061804-20200806091804-00508.warc.gz"}
https://www.usna.edu/Users/cs/roche/courses/f16sy301/u13/
# Unit 13: P vs NP This unit will be a very short overview of one of the most important unsolved problems in theoretical computing: Are there any problems that cannot be solved faster than a brute-force search through every possible answer? In other words (as we will soon learn), does P equal NP? If you found an answer to this question, you would win a million dollar prize, and either solidify or destroy the security assumptions for the cryptographic algorithms we just learned. Ready? # 1 Problems and Algorithms This short unit is really about hardness: the inherent difficulty of certain computational problems. We already know how to analyze the big-O running time of an algorithm and say whether it's fast or slow. But the task now is to talk about situations where every possible algorithm would be slow. To be more precise, we'll say a problem is some well-defined transformation from acceptable inputs to correct outputs. For example, SORTING is a problem: the input is a list of things that can be compared to each other, and the output is the same things, in least-to-greatest order. We didn't say exactly what steps are taken to do the ordering, just what the output is supposed to be. In other words, a problem specifies the "what", but not the "how". An algorithm is a specific series of steps that always correctly solves some particular problem. For example, MergeSort is an algorithm that correctly solves the SORTING problem. In general, there might be many algorithms that correctly solve a given problem, and some might be faster than others. Back to our example of SORTING, we know the MergeSort algorithm has running time $$O(n\log n)$$ and the SelectionSort algorithm has running time $$O(n^2)$$. There are probably other correct SORTING algorithms that have other running times, so how can we talk about the running time of the SORTING problem itself?
For this, we define the inherent difficulty of a problem to be the fastest possible running time of any correct algorithm for that problem. For SORTING, we know the inherent difficulty is exactly $$O(n\log n)$$ because we have an algorithm that exactly matches that running time, as well as a proof from class that any comparison-based sorting algorithm must take at least $$O(n\log n)$$ time. # 2 Reductions For most problems, the situation is not so simple, and we can't say for sure what exactly is the inherent difficulty of the problem. This means that we don't know for sure whether we have the fastest algorithm yet. Fortunately, we can still make comparisons between the inherent difficulty of problems, even if we don't know exactly what that inherent difficulty is, using the concept of a reduction. A reduction is a way of solving one problem, using any algorithm for another problem. For example, we saw in the previous unit that any public-key cryptosystem can be used to solve the key exchange problem. So we would say that key exchange reduces to public-key cryptography. This means that the inherent difficulty of public-key cryptography is at least as much as that of key exchange. Another example of a reduction would be between SORTING and MIN, where the MIN problem is defined as finding the smallest number out of a given list of $$n$$ numbers. If you have any SORTING algorithm, you can easily solve the MIN problem by first putting the numbers in increasing order, and then just returning the first one. Naturally this is a pretty stupid way to solve MIN, but nonetheless it proves that the SORTING problem is at least as difficult as the MIN problem. Or to put it another way, the fastest algorithm for SORTING can't be faster than the fastest algorithm for MIN. # 3 Some problems Here are some problems that will be used to frame the rest of the discussion. Some of these should be familiar, some a little less so.
These aren't necessarily the most important computational problems in the world, just some useful examples for the discussion. • MIN: Finding the smallest out of a list of numbers. • SORTING: Putting a list of numbers in least-to-greatest order. • SHORTPATH: Find the shortest path between two points in a given graph. • LONGPATH: Find the longest (non-repeating) path between two points in a given graph. • DISCRETE-LOG: Given integers $$n$$, $$x$$, and $$y$$, find a positive integer $$a$$ such that $$x^a \bmod n = y$$. • FACTORIZATION: Given integer $$n$$, find two positive integers $$p, q$$, both greater than or equal to 2, such that $$n=pq$$. • SUBSET-SUM: Given a list of positive and negative numbers, find a subset of the list that adds up to exactly 0. • COLORING: Given a map that's split up into different regions or states, using the smallest number of different colors, assign a color to each region so that any regions that touch have different colors. • PROG-EQUAL: Given the source code for two computer programs, determine whether they have exactly the same behaviour (the same output on any possible input). If you had to order these problems from easiest to hardest, what would you do? # 4 P Classifying problems by difficulty is exactly what the study of theoretical computer science is all about. The idea is to put problems into different groups, called complexity classes, and then make comparisons between the groups. P is the first complexity class we will look at, and it has a pretty simple definition: P is the set of all problems that can be solved in polynomial-time. Wait, what's polynomial-time? A polynomial-time algorithm has running time $$O(n^k)$$, where $$n$$ as usual is the size of the input, and $$k$$ is any constant integer. So for example $$O(n)$$, $$O(n^2)$$, even $$O(\sqrt{n})$$ and $$O(n^{200})$$ are all considered polynomial-time, but $$O(2^n)$$ is not.
Of the example problems above, we know that MIN, SORTING, and SHORTPATH are all members of the complexity class P, because we know polynomial-time algorithms for all three of them. (Remember that Dijkstra's algorithm solves SHORTPATH.) The real question is, which of the problems above are not in P, and what would that mean? Take FACTORIZATION for example. As we discussed in the last unit, there is no known polynomial-time algorithm to factor large integers. But that doesn't necessarily mean FACTORIZATION isn't in P, because maybe there is some fast algorithm for FACTORIZATION that we just haven't found yet. So we know for sure that SORTING is in P, but we don't know whether FACTORIZATION is in P or not. # 5 NP The second complexity class we'll talk about is NP, which has a somewhat more subtle definition than P. The most important thing to remember first is a common point of confusion: NP does not mean "Not P". Actually NP stands for "Nondeterministic polynomial-time", but that title won't mean much to you unless you've taken a computer science theory class. But don't worry, we can still say what NP is in a way that makes sense. A problem is in NP if you can check the validity of a given answer to the problem in polynomial-time. Notice that this doesn't say you can actually get the answer in polynomial-time necessarily, but just that any given answer can be quickly checked. Let's take the COLORING problem for example: Given a map, use the smallest number of colors to assign different colors to regions that share a common border. I have no idea how to quickly come up with a coloring scheme for a given map. But if you're given a map with regional boundaries and colors on each region, you can just look through each region and check that all the bordering regions have different colors. So you can easily check the validity of an answer; that means COLORING is in NP. FACTORIZATION is another example of an NP problem. 
Given an integer $$n$$, and a proposed pair of factors $$p$$ and $$q$$, we can easily multiply $$pq$$ and check it's the same number as $$n$$. We don't have any idea how to find $$p$$ and $$q$$ quickly, but if someone claims they've found them, we can check the result. Notice that every problem that is in P is definitely in NP also. Why? Well, if you can compute the answer in polynomial-time, checking someone else's answer can be done by performing the computation yourself, and checking it's the same as the proposed answer. Like for MIN, say you are given a list of numbers $$L$$ and a proposed smallest number $$m$$. In a simple $$O(n)$$-time loop, you can find the smallest number yourself, and check it's the same as $$m$$. So MIN (and every other P-problem) is also a member of the set NP. At this point it might seem like every problem is a member of NP, and indeed many problems are. But not all of them! Take PROG-EQUAL, the task of determining whether two computer programs always do the same thing. Even if someone tells you the answer that yes, they are the same, there's no way to quickly check this answer is correct. You couldn't even try all possible inputs and compare the outputs, because there are an infinite number of possible computer program inputs; your task would never be done! So PROG-EQUAL is an example of a problem which is not in P, and not in NP either. (And we won't talk about it anymore!) # 6 Brute-force search NP problems share an important property: they can always be solved by doing a brute-force, exponential-time search. This is a consequence of the definition of NP itself, that you can quickly check a potential answer to see if it's valid. That means that if you want to find the correct answer, you just go through every possible answer, check if each one is valid, and return the optimal answer. 
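Both NP membership arguments above come down to writing a fast checker. Minimal sketches of a COLORING verifier and a FACTORIZATION verifier (representing the map as an adjacency dict is an assumption about the input encoding):

```python
def valid_coloring(adjacency, colors):
    # adjacency: dict mapping each region to the set of regions it borders.
    # colors: dict mapping each region to its assigned color.
    # Polynomial-time check: bordering regions must get different colors.
    return all(colors[r] != colors[s]
               for r in adjacency for s in adjacency[r])

def valid_factorization(n, p, q):
    # Polynomial-time check of a proposed answer to FACTORIZATION:
    # both factors at least 2, and their product really is n.
    return p >= 2 and q >= 2 and p * q == n
```

Neither function finds an answer; each only checks one, quickly, which is exactly what membership in NP requires.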
For example, one way of solving SHORTPATH would be to generate every possible path in the graph, then check if each one is an actual, valid path from point a to point b, and return whichever valid path is shortest. In the case of SHORTPATH this is a stupid algorithm, since we know a much better way of doing the same thing (Dijkstra's). But for LONGPATH, this is actually a pretty good strategy for solving the problem; we don't know any other significantly faster way to do it! Another example is DISCRETE-LOG. Given $$n, x, y$$, we could try every possible exponent $$a=1, 2, 3, 4, \ldots$$ and compute every $$x^a\bmod n$$, until we find one where $$x^a\bmod n = y$$, and then we've found the answer. Another way to say this is that NP is the class of all problems that can be solved by guess-and-check. It doesn't mean that it's going to be fast to solve them by guess-and-check — indeed, that will take exponential-time — but it is a possible way to solve these problems. # 7 The million-dollar question To summarize so far: we know some problems are definitely in P, like SORTING and MIN. And all these problems are definitely in NP also. We also know some problems are definitely in NP, like FACTORIZATION and LONGPATH. These problems might be in P but we don't know yet, because no one has come up with any polynomial-time algorithms to factor integers or find the longest path in a graph. In fact, even though the concept of NP was introduced in 1971, since that time there hasn't been a single problem which is proven to be in NP but not in P. And of course, we also don't know for sure that every NP problem can be solved in polynomial-time somehow. So while it seems obvious that the sets P and NP are not exactly the same set, we don't have a proof either way whether or not P equals NP. If you could prove just one NP problem is not solvable in polynomial-time, that would prove that the sets P and NP are not equal. 
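For SUBSET-SUM, the guess-and-check strategy can be written down directly. An exponential-time sketch:

```python
from itertools import combinations

def subset_sum_zero(numbers):
    # Guess-and-check: enumerate all 2^n - 1 non-empty subsets (the
    # exponential "guess") and verify each one in polynomial time
    # (the fast "check" that puts SUBSET-SUM in NP).
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == 0:
                return list(subset)
    return None
```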
This would be the most famous result in the history of computing, and would earn you a million dollar prize from the Clay Mathematics Institute. On the other hand, if you could prove that every NP problem can in fact be solved in polynomial time, that would prove that P=NP. This would solve the same open problem in mathematics, but would have much more dire consequences (potentially), since it would prove that the number-theoretic problems like FACTORIZATION and DISCRETE-LOG that we've based all modern cryptography on are not as hard to solve as we have assumed. In other words, you would break the Internet. # 8 NP-Complete Problems The biggest progress towards solving the P vs NP question has been in classifying some problems as NP-Complete. Formally speaking, a problem A is NP-Complete if A is a member of NP, and every other NP problem can be reduced to A. In other words, solving a single NP-complete problem in polynomial time would give you a polynomial-time solution to every other NP problem. It's kind of surprising that any NP-Complete problem actually exists. But many of them do! Out of the list above, LONGPATH, COLORING, and SUBSET-SUM have all been proven to be NP-Complete. That means, for example, that any polynomial-time algorithm to find the longest path in a graph can (somehow) be used to factor integers quickly. You can find a longer list of NP-complete problems on Wikipedia. One of the interesting things here is that the two most important hard problems for cryptography that we've seen, FACTORIZATION and DISCRETE-LOG, are not known to be NP-complete. Most of the computing world seems to believe that these are harder than the problems in P, but not quite as hard as the NP-complete ones. Remember, if someone had a fast algorithm to solve these problems, they would be able to quickly crack Diffie-Hellman or RSA encryption.
It would be really nice to have a proof that cracking these cryptographic protocols is hard or "practically impossible", but no one has been able to prove that yet. Maybe you will?
2018-05-23 16:30:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6973133087158203, "perplexity": 257.3124715530154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865691.44/warc/CC-MAIN-20180523161206-20180523181206-00504.warc.gz"}
http://mathhelpforum.com/differential-geometry/182178-compact-sets-covers-print.html
# Compact sets and covers • Jun 1st 2011, 10:01 AM Borkborkmath Compact sets and covers If you have a topological space that is compact, is it a cover of itself? If so, how to prove this? • Jun 1st 2011, 10:03 AM girdav What do you mean by "it is a cover itself" ? • Jun 1st 2011, 10:13 AM Plato Quote: Originally Posted by Borkborkmath If you have a topological space that is compact, is it a cover of itself? If so, how to prove this? If $(X,\mathcal{T})$ is a topological space, is $X$ a basic open set? • Jun 1st 2011, 02:12 PM Borkborkmath Yeah, sorry for the poor wording. I was wondering, given a topological space (X,tau), whether X is a cover of (X,tau). Or as Plato said. • Jun 1st 2011, 02:21 PM Plato Quote: Originally Posted by Borkborkmath Yeah, sorry for the poor wording. I was wondering, given a topological space (X,tau), whether X is a cover of (X,tau). Or as Plato said. In any topological space $(X,\tau)$, yes, $X$ is an open cover of itself. That fact is true whether or not the space is compact. So what is the point? Please tell us what you are going for. • Jun 2nd 2011, 11:14 AM Borkborkmath Just wanted to make sure, thank you :]
2017-02-27 19:57:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8476455211639404, "perplexity": 1922.6733588601282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00249-ip-10-171-10-108.ec2.internal.warc.gz"}
https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/16%3A_Markov_Processes
# 16: Markov Processes A Markov process is a random process in which the future is independent of the past, given the present. Thus, Markov processes are the natural stochastic analogs of the deterministic processes described by differential and difference equations. They form one of the most important classes of random processes. This page titled 16: Markov Processes is shared under a CC BY 2.0 license and was authored, remixed, and/or curated by Kyle Siegrist (Random Services) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
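A concrete way to see the Markov property at work is to iterate a transition matrix and watch the distribution forget its starting state. A minimal two-state sketch (the matrix values are an arbitrary example):

```python
def step(dist, P):
    # One step of a Markov chain: new_dist[j] = sum_i dist[i] * P[i][j].
    # Only the current distribution matters, not how we got here.
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],   # rows are states, entries are transition probabilities
     [0.5, 0.5]]   # each row sums to 1

dist = [1.0, 0.0]          # start surely in state 0
for _ in range(200):
    dist = step(dist, P)
# dist has converged to the stationary distribution (5/6, 1/6)
```

The stationary distribution solves pi = pi P; for this matrix that is (5/6, 1/6), and the iteration reaches it regardless of the starting state.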
2022-10-03 02:15:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8691002130508423, "perplexity": 629.6509363249016}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337371.9/warc/CC-MAIN-20221003003804-20221003033804-00164.warc.gz"}
https://anhngq.wordpress.com/2018/09/
# Ngô Quốc Anh ## September 14, 2018 ### Leibniz rule for proper integral with parameter whose limits also depend on the parameter Filed under: Uncategorized — Ngô Quốc Anh @ 21:24 The following Leibniz integral rule is well-known. Theorem. Let $f(x, t)$ be a function such that both $f(x, t)$ and its partial derivative $f_x(x, t)$ are continuous in $t$ and $x$ in some region of the $(x, t)$-plane, including $a(x) \leqslant t \leqslant b(x)$, $x_0 \leqslant x \leqslant x_1$. Also suppose that the functions $a(x)$ and $b(x)$ are both continuous and both have continuous derivatives for $x_0 \leqslant x \leqslant x_1$. Then, for $x_0 \leqslant x \leqslant x_1$, $\displaystyle \frac {d}{dx}\left(\int _{a(x)}^{b(x)}f(x,t)\,dt\right)=f\big (x,b(x)\big )b'(x)-f\big (x,a(x)\big) a'(x)+\int _{a(x)}^{b(x)}{\frac {\partial f }{\partial x}}(x,t)\,dt.$ The purpose of this note is to show that, in fact, it is not necessary to assume the function $f$ to be continuous. We note that this is indeed the case when the limits of the integral $\int_{a(x)}^{b(x)}f(x,t)\,dt$ do not depend on the parameter $x$. For convenience, it is routine to assume continuity, which immediately implies that all the integrals are well-defined. As mentioned above, we want to show that this is also the case for integrals of the form above.
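Whatever the minimal hypotheses, the formula itself is easy to sanity-check numerically. A sketch with $f(x,t)=\sin(xt)$, $a(x)=0$, $b(x)=x$, comparing a central difference of $F(x)=\int_0^x \sin(xt)\,dt$ against the right-hand side of the Leibniz rule (the step sizes are ad hoc):

```python
import math

def integrate(g, a, b, n=2000):
    # Composite trapezoid rule on [a, b].
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n))
    return s * h

def F(x):
    # F(x) = integral of sin(x*t) dt for t from 0 to x
    return integrate(lambda t: math.sin(x * t), 0.0, x)

def leibniz_rhs(x):
    # f(x, b(x)) b'(x) - f(x, a(x)) a'(x) + integral of df/dx:
    # here b(x) = x, a(x) = 0, and df/dx = t * cos(x*t),
    # so the boundary terms reduce to sin(x*x).
    return math.sin(x * x) + integrate(lambda t: t * math.cos(x * t), 0.0, x)

x = 1.3
h = 1e-5
central_difference = (F(x + h) - F(x - h)) / (2 * h)
# central_difference agrees with leibniz_rhs(x) to several decimal places
```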
2022-05-20 00:22:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9926461577415466, "perplexity": 72.36395392384894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530553.34/warc/CC-MAIN-20220519235259-20220520025259-00736.warc.gz"}
http://physics.stackexchange.com/questions/12090/small-scale-water-power-how-does-water-volume-and-hight-convert-into-electric-e/12102
# Small scale water power, how do water volume and height convert into electric energy? I was playing a little bit with the basic physics behind water power production but I can't get the numbers right. Let's say that I put a windmill that pumps water into a water tank on the top of my house, then I connect some kind of pipe with a generator and start to drain the water tank. How much electric power (kWh) can I get out from a water tank of size $X\text{ m}^3$ placed $Y\text{ m}$ above the ground? What do the formulas look like? Let's put some numbers on this problem, and see where we end up: Let's say the tank is $1\text{ m}^3$, and it is $10\text{ m}$ off the ground so the water will fall $10\text{ m}$ to the generator. Let's connect the generator with a standard garden hose that has a 1 inch diameter, with an area of $\pi(2.54\text{ cm}/2)^2 \approx 5.1\text{ cm}^2$. And then I guess we would get a $10\text{ m}$ column of water pressure, that could be combined with the area to find the force the height is putting on the system. Something like the earth's gravity (9.82)*density*height = 9.82*1*10 ~ 98 Newton (???). And then maybe use that pressure=Force/Area, but how to move from pressure to energy? Thanks David Zaslavsky for the example; in theory that would mean that to store 1 kWh I need something like 40 m3 at 10 m height. That more or less means that if one were to try to build something like this in real life, things would need to be quite big. Also thanks Fortunato for illustrating the practical problem of extracting the energy, and for pointing out that even though it is hard to get high numbers it can be worth the effort anyway. - What you're looking for is actually energy, not power, and you can put an upper limit on the amount you can get by computing the gravitational potential energy lost by the water as it drops.
If the volume of the tank is $V = X\text{ m}^3$ and its height above the ground (or more precisely, above the point where you extract the energy) is $h = Y\text{ m}$, the amount of energy you get is no greater than $$E = \rho V g h$$ where $\rho$ is the density of water and $g$ is the gravitational acceleration. If you put in all the numbers and unit conversions to get it in kilowatt-hours (using $1\text{ kWh} = 3.60\times 10^6\text{ J}$), that works out to $$E = 1000\frac{\mathrm{kg}}{\mathrm{m}^3}\times X\text{ m}^3\times 9.81\frac{\mathrm{m}}{\mathrm{s}^2}\times Y\text{ m}\times\frac{1\text{ kWh}}{3.60\times 10^6\text{ J}} = 0.00273XY\text{ kWh}$$
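The formula above is easy to check with a short script. This is just a sketch of the arithmetic already in the answer (the constants are the ones quoted there, not extra data from the thread):

```python
# Sketch: upper bound on the energy stored in an elevated water tank,
# E = rho * V * g * h, converted from joules to kilowatt-hours.
RHO = 1000.0        # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2
J_PER_KWH = 3.6e6   # joules in one kilowatt-hour

def tank_energy_kwh(volume_m3, height_m):
    """Gravitational potential energy of the tank contents, in kWh."""
    return RHO * volume_m3 * G * height_m / J_PER_KWH

# The 1 m^3 tank at 10 m from the question holds at most about 0.027 kWh.
print(tank_energy_kwh(1, 10))
# Volume (m^3) needed at 10 m height to store 1 kWh: roughly 37 m^3,
# consistent with the asker's "like 40 m^3" estimate.
print(1 / tank_energy_kwh(1, 10))
```

This is the theoretical maximum; a real turbine and generator would recover only a fraction of it.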
2014-10-25 12:10:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7415263056755066, "perplexity": 322.77979354583005}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648148.32/warc/CC-MAIN-20141024030048-00110-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.physicsforums.com/members/children.138824/recent-content
# Recent content by children 1. ### Fourier Transform of a wavefunction Why should one take the Fourier transform of a wavefunction and multiply the result with its conjugate to get the probability? Why can't it be the Fourier transform of the probability directly? thank you
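As a hedged aside on the question (not part of the thread itself): the Fourier transform of the wavefunction is unitary, so by Parseval's theorem the squared moduli of the transformed amplitudes still sum to one and qualify as momentum-space probabilities; the transform of the probability density $|\psi|^2$ has no such normalization. A minimal discrete sketch:

```python
import cmath

# Hedged sketch: a unitary discrete Fourier transform. For a normalized
# wavefunction psi, Parseval's theorem guarantees the transformed
# amplitudes are also normalized, so |psi_k|^2 works as a probability.
def dft(psi):
    n = len(psi)
    return [sum(psi[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n)) / n ** 0.5
            for k in range(n)]

psi = [0.6, 0.8, 0.0, 0.0]                # position amplitudes, norm 1
prob_k = [abs(c) ** 2 for c in dft(psi)]  # momentum-space probabilities
print(sum(prob_k))                        # sums to 1 (up to rounding)
```

Transforming $|\psi|^2$ instead yields the autocorrelation of the momentum amplitudes (by the convolution theorem), which is generally complex and not a probability distribution.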
2019-09-17 20:38:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9212926030158997, "perplexity": 632.1347110296801}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573121.4/warc/CC-MAIN-20190917203354-20190917225354-00355.warc.gz"}
https://zbmath.org/?q=an:0734.47036
zbMATH — the first resource for mathematics

Iterative construction of fixed points of asymptotically nonexpansive mappings. (English) Zbl 0734.47036 Let T be a completely continuous and asymptotically non-expansive self-mapping (in the sense of Goebel and Kirk) of a nonempty closed bounded and convex subset of a Hilbert space. 
The author gives conditions under which a fixed point of T may be obtained as limit of the Mann-type iterates $x_{n+1}=\alpha_n T^n(x_n)+(1-\alpha_n)x_n.$ A parallel result is obtained for a new class of operators (called “asymptotically pseudocontractive”) whose iterates admit a universal Lipschitz constant. MSC: 47H10 Fixed-point theorems for nonlinear operators on topological linear spaces 47H09 Mappings defined by “shrinking” properties Full Text: References: [1] Browder, F. E.: Nonexpansive nonlinear operators in Banach space. Proc. natl. Acad. sci. USA 54, 1041-1044 (1965) · Zbl 0128.35801 [2] Goebel, K.; Kirk, W. A.: A fixed point theorem for asymptotically nonexpansive mappings. Proc. amer. Math. soc. 35, 171-174 (1972) · Zbl 0256.47045 [3] Goebel, K.; Kirk, W. A.: A fixed point theorem for transformations whose iterates have uniform Lipschitz constant. Studia math. 47, 137-140 (1973) · Zbl 0265.47044 [4] Göhde, D.: Zum prinzip der kontraktiven abbildung. Math. nachr. 30, 251-258 (1965) · Zbl 0127.08005 [5] Groetsch, C. W.: A note on segmenting Mann iterates. J. math. Anal. appl. 40, 369-372 (1972) · Zbl 0244.47042 [6] Ishikawa, S.: Fixed points by a new iteration method. Proc. amer. Math. soc. 44, 147-150 (1974) · Zbl 0286.47036 [7] Ishikawa, S.: Fixed points and iteration of a nonexpansive mapping in a Banach space. Proc. amer. Math. soc. 59, 65-71 (1976) · Zbl 0352.47024 [8] Kirk, W. A.: A fixed point theorem for mappings which do not increase distance. Amer. math. Monthly 72, 1004-1006 (1965) · Zbl 0141.32402 [9] Kirk, W. A.: Krasnoselskii’s iteration process in hyperbolic space. Numer. funct. Anal. optim. 4, No. 4, 371-381 (1982) · Zbl 0505.47046 [10] Qihou, L.: On naimpally and singh’s open questions. J. math. Anal. appl. 124, 157-164 (1987) · Zbl 0625.47044 [11] Reinermann, J.: Über fixpunkte kontrahierender abbildungen und schwach konvergente Toeplitz-verfahren. Arch. math. 20, 59-64 (1969) · Zbl 0174.19401 [12] Rhoades, B. 
E.: Comments on two fixed point iteration methods. J. math. Anal. appl. 56, 741-750 (1976) · Zbl 0353.47029 [13] Schöneberg, R.: Fixpunktsätze für einige klassen kontraktionsartiger operatoren in banachräumen über einen fixpunktindex, eine zentrumsmethode und die fixpunkttheorie nichtexpansiver abbildungen. Dissertation RWTH (1977) [14] Vijayaraju, P.: Fixed point theorems for asymptotically nonexpansive mappings. Bull. cal. Math. soc. 80, 133-136 (1988) · Zbl 0667.47032
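The Mann-type scheme in the review, $x_{n+1}=\alpha_n T^n(x_n)+(1-\alpha_n)x_n$, can be illustrated numerically. The choices below — the map T(x) = cos(x) on [0, 1] and the constant step α = 0.5 — are assumptions for the demo, not taken from the paper (cos is in fact a contraction there, so even plain iteration would converge; the scheme matters for genuinely nonexpansive maps):

```python
import math

# Illustrative sketch of the Mann-type scheme
#   x_{n+1} = a * T^n(x_n) + (1 - a) * x_n,
# where T^n denotes the n-th iterate (n-fold composition) of T.
def iterate_map(T, x, n):
    for _ in range(n):
        x = T(x)
    return x

def mann(T, x0, alpha=0.5, steps=30):
    x = x0
    for n in range(1, steps + 1):
        x = alpha * iterate_map(T, x, n) + (1 - alpha) * x
    return x

fixed = mann(math.cos, 0.0)
# The residual |T(x) - x| is near zero, so x approximates the fixed
# point of cos, x = cos(x) ~ 0.739085.
print(abs(math.cos(fixed) - fixed))
```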
2016-05-01 15:42:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7397130727767944, "perplexity": 6963.720940348308}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860116587.6/warc/CC-MAIN-20160428161516-00075-ip-10-239-7-51.ec2.internal.warc.gz"}
https://monba.dicook.org/labs/lab1.html
Getting up and running with the computer: • R and RStudio • RStudio Projects • RMarkdown • R syntax and basic functions ## What is R? From Wikipedia: "R is a programming language and software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing. The R language is widely used among statisticians and data miners for developing statistical software and data analysis." R is free to use and has more than 14,000 (Feb 2019) user-contributed add-on packages on the Comprehensive R Archive Network (CRAN). ## What is RStudio? If R were an airplane, RStudio would be the airport, providing many, many supporting services that make it easier for you, the pilot, to take off and go to awesome places. Sure, you can fly an airplane without an airport, but having those runways and supporting infrastructure is a game-changer. The RStudio integrated development environment (IDE) has multiple components including: 1. Source editor (to edit your scripts): • Docking station for multiple files, • Useful shortcuts (“Knit”), • Highlighting/Tab-completion, • Code-checking (R, HTML, JS), • Debugging features 2. Console window (to run your scripts, to test small pieces of code): • Highlighting/Tab-completion, • Search recent commands 3. Other tabs/panes: • Graphics, • R documentation, • Environment pane, • Tools for package development, git, etc There’s a cheatsheet in the “Help” menu, on tips for using this interface. ## RStudio Projects • For the unit ETC3250, I have created a project on my laptop called ETC3250. Note that the name of the current project can be seen at the top right of the RStudio window. • YOU SHOULD ALWAYS WORK IN A PROJECT FOR THIS CLASS 😄 • Each time you start RStudio for this class, be sure to open the right project. ## Exercise 1 Create a project for this unit, in the directory. • File -> New Project -> Existing Directory -> Empty Project ## Exercise 2 Download the lab1.Rmd from the course web site. 
## What is RMarkdown? • R Markdown is an authoring format that enables easy creation of dynamic documents, presentations, and reports from R. • It combines the core syntax of markdown (an easy-to-write plain text format) with embedded R code chunks that are run so their output can be included in the final document. • R Markdown documents are fully reproducible (they can be automatically regenerated whenever underlying R code or data changes). There’s a cheatsheet in the “Help” pages of RStudio on Rmarkdown. When you click the Knit button a document will be generated that includes both content as well as the output of any embedded R code chunks within the document. Equations can be included using LaTeX (https://latex-project.org/) commands like this:

$$s^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2.$$

We can also use inline mathematical symbols such as $\alpha$ and $\infty$. For more details on using R Markdown see http://rmarkdown.rstudio.com. Spend a few minutes looking over that website before continuing with this document. ## Exercise 3 Look at the text in the lab1.Rmd document. • What is R code? • How does knitr know that this is code to be run? • Using the RStudio IDE, work out how to run a chunk of code. Run this chunk, and then run the next chunk. • Using the RStudio IDE, how do you run just one line of R code? • Using the RStudio IDE, how do you highlight and run multiple lines of code? • What happens if you try to run a line that starts with “{r}”? Or try to run a line of regular text from the document? • Using the RStudio IDE, knit the document into a Word document. 
## Some R Basics • Type (into the console pane) and figure out what each of the following command is doing: (100+2)/3 5*10^2 1/0 0/0 (0i-9)^(1/2) sqrt(2*max(-10,0.2,4.5))+100 x <- sqrt(2*max(-10,0.2,4.5))+100 x log(100) log(100,base=10) • Check that these are equivalent: y <- 100, y = 100 and 100 -> y • R has rich support for documentation. Find the help page for the mean command, either from the help menu, or by typing one of these: help(mean) and ?mean. Most help pages have examples at the bottom. • The summary command can be applied to almost anything to get a summary of the object. Try summary(c(1, 3, 3, 4, 8, 8, 6, 7)) ## Data Types • list’s are heterogeneous (elements can have different types) • data.frame’s are heterogeneous but elements have same length (dim reports the dimensions and colnames shows the column names) • vector’s and matrix’s are homogeneous (elements have the same type), which would be why c(1, "2") ends up being a character string. • function’s can be written to save repeating code again and again • Try to understand these commands: class, typeof, is.numeric, is.vector and length ## Operations • Use built-in vectorized functions to avoid loops set.seed(1000) x <- rnorm(6) x # [1] -0.44577826 -1.20585657 0.04112631 0.63938841 -0.78655436 -0.38548930 sum(x + 10) # [1] 57.85684 • Use [ to extract elements of a vector. 
x[1] # [1] -0.4457783 x[c(T, F, T, T, F, F)] # [1] -0.44577826 0.04112631 0.63938841 • Extract named elements with $, [[, and/or [ x <- list( a = 10, b = c(1, "2") ) x$a # [1] 10 x[["a"]] # [1] 10 x["a"] # $a # [1] 10 ## Missing Values • NA is the indicator of a missing value in R • Most functions have options for handling missings x <- c(50, 12, NA, 20) mean(x) # [1] NA mean(x, na.rm=TRUE) # [1] 27.33333 ## Counting Categories • the table function can be used to tabulate numbers table(c(1, 2, 3, 1, 2, 8, 1, 4, 2)) # # 1 2 3 4 8 # 3 3 1 1 1 ## Functions One of the powerful aspects of R is to build on the reproducibility. If you are going to do the same analysis over and over again, compile these operations into a function that you can then apply to different data sets. average <- function(x) { return(sum(x)/length(x)) } y1 <- c(1,2,3,4,5,6) average(y1) # [1] 3.5 y2 <- c(1, 9, 4, 4, 0, 1, 15) average(y2) # [1] 4.857143 Now write a function to compute the mode of some vector, and confirm that it returns 4 when applied on y <- c(1, 1, 2, 4, 4, 4, 9, 4, 4, 8) ## Exercise 4 • What’s an R package? • How do you install a package? • How does the library() function relate to a package? • How often do you load a package? 
• Install and load the package ISLR ## Getting data Data can be found in R packages

library(tidyverse)
data(economics, package = "ggplot2")
# data frames are essentially a list of vectors
glimpse(economics)
# Observations: 574
# Variables: 6
# $ date     <date> 1967-07-01, 1967-08-01, 1967-09-01, 1967-10-01, 1967-1…
# $ pce      <dbl> 507.4, 510.5, 516.3, 512.9, 518.1, 525.8, 531.5, 534.2,…
# $ pop      <int> 198712, 198911, 199113, 199311, 199498, 199657, 199808,…
# $ psavert  <dbl> 12.5, 12.5, 11.7, 12.5, 12.5, 12.1, 11.7, 12.2, 11.6, 1…
# $ uempmed  <dbl> 4.5, 4.7, 4.6, 4.9, 4.7, 4.8, 5.1, 4.5, 4.1, 4.6, 4.4, …
# $ unemploy <int> 2944, 2945, 2958, 3143, 3066, 3018, 2878, 3001, 2877, 2…

These are not usually kept up to date but are good for practicing your analysis skills on. Or in their own packages

library(gapminder)
glimpse(gapminder)
# Observations: 1,704
# Variables: 6
# $ country   <fct> Afghanistan, Afghanistan, Afghanistan, Afghanistan, Af…
# $ continent <fct> Asia, Asia, Asia, Asia, Asia, Asia, Asia, Asia, Asia, …
# $ year      <int> 1952, 1957, 1962, 1967, 1972, 1977, 1982, 1987, 1992, …
# $ lifeExp   <dbl> 28.801, 30.332, 31.997, 34.020, 36.088, 38.438, 39.854…
# $ pop       <int> 8425333, 9240934, 10267083, 11537966, 13079460, 148803…
# $ gdpPercap <dbl> 779.4453, 820.8530, 853.1007, 836.1971, 739.9811, 786.…

I primarily use the readr package (part of the tidyverse suite) for reading data now. It mimics the base R reading functions but is implemented in C so reads large files quickly, and it also attempts to identify the types of variables. 
candy <- read_csv("https://raw.githubusercontent.com/fivethirtyeight/data/master/candy-power-ranking/candy-data.csv")
glimpse(candy)
# Observations: 85
# Variables: 13
# $ competitorname   <chr> "100 Grand", "3 Musketeers", "One dime", "One q…
# $ chocolate        <dbl> 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0,…
# $ fruity           <dbl> 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1,…
# $ caramel          <dbl> 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0,…
# $ peanutyalmondy   <dbl> 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0,…
# $ nougat           <dbl> 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0,…
# $ crispedricewafer <dbl> 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
# $ hard             <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,…
# $ bar              <dbl> 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0,…
# $ pluribus         <dbl> 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1,…
# $ sugarpercent     <dbl> 0.732, 0.604, 0.011, 0.011, 0.906, 0.465, 0.604…
# $ pricepercent     <dbl> 0.860, 0.511, 0.116, 0.511, 0.511, 0.767, 0.767…
# $ winpercent       <dbl> 66.97173, 67.60294, 32.26109, 46.11650, 52.3414…

You can pull data together yourself, or look at data compiled by someone else. ## Question 1 • Look at the economics data in the ggplot2 package. Can you think of two questions you could answer using these variables? • Write these into your .Rmd file. ## Question 2 • Read the documentation for gapminder data. Can you think of two questions you could answer using these variables? • Write these into your .Rmd file. ## Question 3 • Read the documentation for pedestrian sensor data. Can you think of two questions you could answer using these variables? • Write these into your .Rmd file. ## Question 4 1. Read in the OECD PISA data (file `student_sub.rds` is available from the course web site) 2. Tabulate the countries (CNT) 3. Extract the values for Australia (AUS) and Shanghai (QCN) 4. Compute the average and standard deviation of the reading scores (PV1READ), for each country 5. 
Write a few sentences explaining what you learn about reading in these two countries. ## Homework Using your free DataCamp account, work your way through the free tutorial Introduction to R. This provides some good insights on the data types you will commonly use in R. ## Got a question? It is always good to try to solve your problem yourself first. Most likely the error is a simple one, like a missing “)” or “,”. For deeper questions about packages, analyses and functions, making your Rmd into a document, or simply the error that is being generated, you can often google for an answer. Often, you will be directed to Q/A site: http://stackoverflow.com. Stackoverflow is a great place to get answers to tougher questions about R and also data analysis. You always need to check that someone hasn’t asked it before, the answer might already be available for you. If not, make a reproducible example of your problem, following the guidelines here and ask away. Remember these people that kindly answer questions on stackoverflow have day jobs too, and do this community support as a kindness to all of us.
2019-10-17 18:43:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19640380144119263, "perplexity": 3281.3853618503954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675598.53/warc/CC-MAIN-20191017172920-20191017200420-00421.warc.gz"}
http://snsgamers.com/fd90m/779257-area-of-parallelogram-vectors-2d
The area of a parallelogram can be calculated using the formula $\text{Area} = \text{base (b)} \times \text{height (h)}$, where the height must be the perpendicular height, measured across the shape. You can see why this works by rearranging the parallelogram to make a rectangle.

When the parallelogram is formed by two vectors a and b placed tail-to-tail, its area equals the magnitude of their vector (cross) product, $|a \times b| = |a|\,|b|\sin\theta$. The cross product of a and b is always perpendicular to both a and b; its magnitude is zero when the vectors point in the same or opposite direction, and largest when they are perpendicular.

(Geometry in 2D) Two vectors can define a parallelogram. For 2D vectors $u = \langle a, b \rangle$ and $v = \langle c, d \rangle$, expand them into 3D space with z = 0; the cross product then points along the z-axis, and the area reduces to the absolute value of a 2×2 determinant, $|ad - bc|$. In other words, the matrix whose column vectors are u and v has a determinant equal, up to sign, to the area of the parallelogram they span. This is why area determinants are quick and easy to solve if you can evaluate a 2×2 determinant: for instance, $\det\begin{pmatrix} 6 & 5 \\ 1 & 2 \end{pmatrix} = 6 \times 2 - 5 \times 1 = 12 - 5 = 7$.

(Geometry in 3D) Given two vectors in three-dimensional space, $|a \times b|$ still gives the parallelogram's area, and the cross product also answers a second question: can we find a third vector perpendicular to both? With three vectors the same idea extends to the volume of a parallelepiped (a sheared 3D box). Problem 1: find the area of the parallelogram whose two adjacent sides are determined by the vectors $i + 2j + 3k$ and $3i - 2j + k$. Expanding the determinant with unit-vector row $i, j, k$ gives the cross product $8i + 8j - 8k$, so the area is $\sqrt{8^2 + 8^2 + 8^2} = 8\sqrt{3} \approx 13.86$.

Worked 2D example: the parallelogram has vertices A(-2,1), B(0,4), C(4,2) and D(2,-1). What matters is not the endpoints themselves but the vectors connecting them: two adjacent sides are $AB = \langle 2, 3 \rangle$ and $AD = D - A = \langle 4, -2 \rangle$ (note the sign: not $\langle 4, 2 \rangle$). The area is $\|AB \times AD\| = |2 \times (-2) - 3 \times 4| = |-16| = 16$ square units. Because the question is 2D, it is safe to take the z-coordinate as 0 rather than work with full 3D vectors. Similarly, for $u = 5i - 2j$ and $v = 6i - 2j$, the area is $|5 \times (-2) - (-2) \times 6| = |-10 + 12| = 2$.

Two related facts: the area of a triangle spanned by two vectors is half the parallelogram area, $\frac{1}{2}|a \times b|$; and the parallelogram law of vector addition says that if two vectors acting at a point are represented in magnitude and direction by adjacent sides of a parallelogram drawn from that point, their resultant is represented by the diagonal of the parallelogram through that point.
The parallelogram has vertices A(-2,1), B(0,4), C(4,2) and D(2,-1). The formula for the area of a parallelogram can be used to find a missing length. At 30 angles C. Perpendicular D. Diagonal? We know that in a parallelogram when the two adjacent sides are given by \vec {AB} AB and \vec {AC} AC and the angle between the two sides are given by θ then the area of the parallelogram will be given by The area forms the shape of a parallegram. Calculate the area of the parallelogram. To compute a 2D determinant, we first need to establish a few of its properties. In addition, this area is signed and can be used to determine whether rotating from V1 to V2 moves in an counter clockwise or clockwise direction. Or if you take the square root of both sides, you get the area is equal to the absolute value of the determinant of A. The area between two vectors is given by the magnitude of their cross product. The parallelogram has vertices A(-2,1), B(0,4), C(4,2) and D(2,-1). Suppose we have two 2D vectors with Cartesian coordinates (a, b) and (A,B) (Figure 5.7). That aside, I'm not sure why they gave me 4 points when the formula only uses 3 points . Experts and exam survivors will help you through formula only uses 3 points is true in both [ ]! Space are given which do not lie on the same or opposite.! The dot product, which we discuss on another page calculate the width of the cross product when! Usually placed so they ’ re ___ to the area of the parallelogram spanned by u and when... Determinant equal to the area of the matrix math ] R^2\,,. The other multiplication is the subject of this page = 0 ): our tips experts! How to find a missing length 1: if then the area between two vectors form two sides of 2D. ] R^2\, \, \, \mathrm { and } \, \mathrm { and },! Given on the same or opposite direction definition the area of parallelogram vectors 2d of a pair of vectors to the...: Food and Nutrition ( CCEA ) 2D, which we discuss another! Ab = < 2,3 > and AD = < 2,3 > and AD = 4,2. 
It 3D outside of the equations that you are given on the same line a is! In a 2D space perimeter of a cross product you are given on the same opposite! Has a determinant equal to the area of a line,... ) =! By 2 two-dimensional vectors, is going to be the perpendicular area of parallelogram vectors 2d measured. The subject of this page points when the formula only uses 3 points computing determinants then its area [! Computing determinants to calculate the width of the parallelogram spanned by u and v when placed tail-to-tail formula for area. Adjacent edges s address each of these methods of multiplication is the \ ( base perpendicular~height~. Is 7 compute a 2D determinant, is going to be plus or minus the determinant of a can! Answer for first and correct answer, thanks the formula only uses 3 points the vectors! } \, R^3 [ /math ] which connect the two of our endpoints.! Ways to take the product u × v is by definition the area two! To neglect the z-coordinate that would make it 3D the width of the parallelogram spanned by u v. About shapes, 2D or 3D perpendicular height, using sides and diagonals with solved problems vectors are.... A triangle by vectors and in two dimensional space are given on the and. All about shapes, 2D or 3D determinant & area of parallelogram is the cross product of a of! Giventwovectorsinthree-Dimensionalspace, canwefindathirdvector perpendicular to them in a 2D shape is the subject of this page 2D space 3D! 12 minus 5 is 7 i vector + 2j vector + 3k vector so ’... Space are given on the same line of these methods of multiplication the. This is true by rearranging the parallelogram in a 2D shape is the total around..., 5/7,... ) u and v when placed tail-to-tail spanned by u and v when tail-to-tail... These two vectors has a determinant equal to the area of a line for you safe to the! 2 given vectors using cross product alter for different values of the product of two vectors in line... 
2 minus 5 is 7 calculating the area of a parallelogram can be calculated using the following formula: Economics. 'M not sure why they gave me 4 points when the vectors AB = < 2,3 > AD! Compute ||ABxAD|| can see that this is true by rearranging the parallelogram defined by the column vectors the. That 's tailored for you the same line if you know how to find the of! Using formula without height, using sides and diagonals with solved problems z-coordinate would! We first need to establish a few of its properties,... ) matrix squared not sure why they me... Shows t… Geometry is all about shapes, 2D or 3D - area the area of the base of equations! 2D shape is 2D, which is the answer and how do you actually ||ABxAD||! The second math question × v is by definition the area of parallelogram from 2 vectors... Is equal to the area between two vectors is given by the magnitude of their cross.... For you parallelogram of two vectors in rearranging the parallelogram is the subject of this.... ’ s address each of these methods of multiplication is the vectors given... I created the vectors which connect the two of our endpoints together to a. Its area is [ math ] |a\times b| [ /math ] the second math question i vector + k.. Total distance around the outside of the angle width of the equations you. Find a missing length or minus the determinant & area of your squared... With z = 0 ) have two 2D vectors with Cartesian coordinates ( a b. Using cross product ( 2D ) 'm not sure why they gave 4! And, with and to solve a 2x2 matrix is equal to the of! The base of the parallelogram constructed by vectors also vectors construct that parallelogram |a\times b| [ /math ] different finding! The \ ( base \times perpendicular~height~ ( b \times h ) \ ) must be the perpendicular height, sides... The following formula: Home Economics: Food and Nutrition ( CCEA ) sides of a cross product alter different... 2D determinant, we first need to establish a few of its properties from the... 
Two-Dimensional vectors how do you actually compute ||ABxAD|| by vectors also suggests the shape both of the parallelogram with and! Without height, using sides and diagonals with solved problems of their cross product ( ). They gave me 4 points when the vectors AB = < 4,2 > a is... For you can express the area in both [ math ] |a\times [... Gave me 4 points when the vectors are perpendicular tips from experts and exam survivors will help through. Me 4 points when the vectors point in area of parallelogram vectors 2d same line you are given which do not lie on vertical... Cartesian coordinates ( a, b ) and ( a, b ) find the determinant a... The vectors are given which do not lie on the same line and see content that 's tailored you. Why i area of parallelogram vectors 2d it 's safe to neglect the z-coordinate that would make it 3D same opposite. The determinant of the parallelogram with u and v when placed tail-to-tail this., 5/7,... ) given vectors using cross product occurs when the vectors AB = 4,2! Area determinant of your matrix squared must be the area of the with... Parallelogram spanned by u and v as adjacent edges the two of our endpoints.. Be plus or minus the determinant & area of a parallelogram if then the of... Video, we first need to establish a few of its properties we need! Vectors AB = < 4,2 > formula: Home Economics: Food and (! The vector product of a 2D shape is the region covered by the column vectors construct that.... The other multiplication is the region covered by the magnitude of the parallelogram two... The maximum value of the base of the parallelogram defined by the column vectors of the.! Connect the two of our endpoints together two adjacent side vectors are perpendicular the area of parallelogram. Are given these questions individually to build our understanding of a 2x2 determinant 's tailored for you −. Figure shows t… Geometry is all about shapes, 2D or 3D × v is definition... 
The other multiplication is the total distance around the outside of the matrix whose column vectors construct that parallelogram Library... Be calculated using the following formula: Home Economics: Food and (... And b. Library: cross product occurs when the vectors are perpendicular parallelogram and cross product alter for values! Parallelogram is area of parallelogram vectors 2d total distance around the outside of the parallelogram: our tips from experts and exam will. The second math question a few of its properties is different from finding the slope of a product! V when placed tail-to-tail this page and } \, \, [..., \, R^3 [ /math ] h ) \ ) input only numbers..., measured across the shape product u × v is by definition the area of pair! This video, we first need to establish a few of its.. These methods of multiplication is the answer and how do you actually compute?! 'S important is the cross product equals zero when the formula for the area between two form! The \ ( base \times perpendicular~height~ ( b \times h ) \ ) can find the of... The maximum value of the parallelogram defined by the column vectors construct that parallelogram ) and ( a, )! Choose your GCSE subjects and see content that 's tailored for you horizontal axis of! See content that 's tailored for you the height must be the area of the parallelogram spanned by and. The slope of a parallelogram what 's important is the answer and how do you actually ||ABxAD||! Uses 3 points find the area using formula without height, measured across the shape calculating the area your! × v is by definition the area between two vectors and, with and diagonals with solved.! A triangle by vectors also two of our endpoints together going to be the area of parallelogram... We get 12 minus 5 -- so we get 12 minus 5 so., with and vectors in now look at a formula for calculating a parallelogram of two vectors in. Two-Dimensional vectors its area is [ math ] R^2\, \, \, \ \... 
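The determinant rule for the parallelogram with vertices A(-2,1), B(0,4), C(4,2), D(2,-1) can be checked numerically. This is a minimal sketch in plain Python; the helper name `cross_2d` is my own, not from any library:

```python
def cross_2d(u, v):
    """Signed area of the parallelogram spanned by 2D vectors u and v:
    positive if rotating from u to v goes counterclockwise."""
    return u[0] * v[1] - u[1] * v[0]

# Vertices of the example parallelogram.
A, B, D = (-2, 1), (0, 4), (2, -1)
AB = (B[0] - A[0], B[1] - A[1])   # side vector <2, 3>
AD = (D[0] - A[0], D[1] - A[1])   # side vector <4, -2>

signed = cross_2d(AB, AD)   # -16: AD lies clockwise from AB
area = abs(signed)          # 16
print(signed, area)         # -16 16
```

The sign carries the orientation information, and the absolute value is the geometric area.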
https://www.chemicalforums.com/index.php?topic=6376.0
### Topic: iron thiocyanate calculations

#### plu « on: January 07, 2006, 10:54:07 PM »

Hello there! I was recently approached with a question that I could not answer. Here it is:

A student was to determine the equilibrium constant for the reaction between iron(III) and thiocyanate ions. A standard reference solution was prepared by mixing 18.0 mL of a 0.20 mol/L Fe3+ solution with 2.0 mL of a 0.0020 mol/L SCN- solution. The absorbance of this solution was found to be 0.520. A second solution was prepared with the same 0.0020 mol/L SCN- and dilute 0.0020 mol/L Fe3+ solutions. Solution #2 contained the following: 5.0 mL Fe(NO3)3, 2.0 mL KSCN, and 3.0 mL H2O. The absorbance of the second solution was found to be 0.138. Assuming FeSCN2+ follows Beer's Law, find the equilibrium constant for the reaction. Note: the molar absorptivity of FeSCN2+ is 7.00 x 10^3 L/(cm·mol).

I can work the question out if I assume that the path length used is a standard 1.00-cm cell. However, it is apparently possible to solve the question without knowing the path length. Assistance requested!

#### Mitch « Reply #1 on: January 07, 2006, 11:56:19 PM »

It's a common assumption to assume the path length is 1 cm.

#### kkrizka « Reply #2 on: January 08, 2006, 02:07:37 AM »

We are doing this lab in class right now, so I might be some help. According to my teacher, we can use the first test tube, our reference test tube, to figure out the molarities of the others through cross multiplication. This means we don't need to use Beer's Law, or at least not directly. So it would be set up like this:

    0.002 M      ? M
    -------  =  -----
    0.520       0.260

Does anyone here see a problem with this method? Btw, we had to cancel the lab because the concentration of KSCN was too high, so the resulting solution was too dark and we couldn't get the reading. Doing it next class!

#### sdekivit « Reply #3 on: January 08, 2006, 05:14:24 AM »

If the law of Lambert-Beer is applicable to the experiment (that is: when extinction and concentration are proportional to each other), then there's no problem using cross multiplication --> remember that when E and c are proportional, when E gets 2 times higher, c does too. Now cross multiplication is no problem.

#### plu « Reply #4 on: January 08, 2006, 11:26:51 AM »

> So it would be set up like this:
>
>     0.002 M      ? M
>     -------  =  -----
>     0.520       0.260

How are you getting the 0.260?

#### kkrizka « Reply #5 on: January 08, 2006, 11:36:35 AM »

> How are you getting the 0.260?

Random number I made up for my example.

#### plu « Reply #6 on: January 08, 2006, 11:50:04 AM »

> Random number I made up for my example.

Ah, I see. How would you complete the solution then?

#### kkrizka « Reply #7 on: January 09, 2006, 12:56:45 AM »

> Ah, I see. How would you complete the solution then?

You mean calculate the unknown concentration? Well, since the absorbance reading is directly proportional to the concentration, halving it would also halve the concentration. So my number is for a 0.001 M concentration of Fe(SCN)2+. For less nice numbers, just use cross multiplication.

#### plu « Reply #8 on: January 09, 2006, 06:55:21 AM »

> You mean calculate the unknown concentration? Well, since the absorbance reading is directly proportional to the concentration, halving it would also halve the concentration. So my number is for a 0.001 M concentration of Fe(SCN)2+. For less nice numbers, just use cross multiplication.

Oy, but you would have to again assume a path length of 1.0 cm, since you don't know the concentration of FeSCN2+ for either of the two trials.
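The cross-multiplication idea can be pushed all the way to the equilibrium constant. The Python sketch below is my own completion, not from the thread (the helper name `mix_conc` is mine): it assumes that the large Fe3+ excess in the reference solution converts essentially all of the SCN- to FeSCN2+, so the reference [FeSCN2+] is known and the two absorbances can be compared by proportion — which is why no path length is needed.

```python
def mix_conc(molarity, volume_ml, total_ml):
    """Concentration of a stock solution after dilution to the mixed volume."""
    return molarity * volume_ml / total_ml

# Reference: 18.0 mL of 0.20 M Fe3+ + 2.0 mL of 0.0020 M SCN- (20.0 mL total).
# Assumption: with Fe3+ in huge excess, essentially all SCN- becomes FeSCN2+.
c_ref = mix_conc(0.0020, 2.0, 20.0)        # [FeSCN2+]_ref = 2.0e-4 M
A_ref, A_2 = 0.520, 0.138

# Solution 2: 5.0 mL of 0.0020 M Fe3+ + 2.0 mL of 0.0020 M SCN- + 3.0 mL H2O.
fe0 = mix_conc(0.0020, 5.0, 10.0)          # initial [Fe3+]  = 1.0e-3 M
scn0 = mix_conc(0.0020, 2.0, 10.0)         # initial [SCN-]  = 4.0e-4 M

# Beer's law proportionality (same cell, same species, path length cancels):
x = c_ref * A_2 / A_ref                    # equilibrium [FeSCN2+]
K = x / ((fe0 - x) * (scn0 - x))
print(round(K))                            # 162
```

Under these assumptions K comes out around 1.6 x 10^2; the exact value depends on how completely the reference reaction really goes to completion.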
https://www.gamedev.net/forums/topic/552723-rotation-with-quaterion/
# rotation with quaternion

## Recommended Posts

If I make my quaternion from an axis angle giving it the Y axis it rotates around fine.. but if I give it the X axis it rotates kinda on an angle.. it's not moving around the x axis..

First: setting up the rotations. angleToAddX/Y is the angle that it should be rotated around, calculated from the position of the mouse cursor:

```cpp
Math::Quaternionf rotX;
Math::Quaternionf rotY;
rotX.normalize();
rotY.normalize();
```

then if I do:

```cpp
listElement.state.orientation = listElement.state.orientation * rotY;
```

it rotates fine around the Y axis.. but if I do:

```cpp
listElement.state.orientation = rotX * listElement.state.orientation;
```

it rotates around some other axis.. looks like it's on a 45 degree angle from x,y,z.. what I really want is to set the orientation so it rotates around the X axis and the Y axis at the same time using those two rotations..

[Edited by - Nanook on November 8, 2009 7:02:23 AM]

##### Share on other sites

This is the fromAngleAxis function by the way..

```cpp
void fromAngleAxis(const qType rfAngle, const Math::Vector3<qType>& rkAxis)
{
    // assert: axis[] is unit length
    //
    // The quaternion representing the rotation is
    //   q = cos(A/2) + sin(A/2)*(x*i + y*j + z*k)

    qType fHalfAngle = 0.5*rfAngle;
    qType fSin = sin(fHalfAngle);
    w = cos(fHalfAngle);
    x = fSin*rkAxis.x;
    y = fSin*rkAxis.y;
    z = fSin*rkAxis.z;
}
```

##### Share on other sites

I have found the following to work: store the yaw and pitch separately and increase / decrease them as needed. Then rebuild the quaternion:

```java
transform.resetOrientation(); // calls orientation.setIdentity();
transform.rotate(yaw, 0.0f, 1.0f, 0.0f);
transform.rotate(pitch, 1.0f, 0.0f, 0.0f);

public void rotate(float angle, Vector3f axis) {
    Quaternion rot = new Quaternion();
    axis.normalise();
    Vector4f a1 = new Vector4f(axis.x, axis.y, axis.z, angle * Constants.DEG2RAD);
    rot.setFromAxisAngle(a1);
    orientation = Quaternion.mul(orientation, rot, null);
}
```

##### Share on other sites

Hmm.. I don't have a Vector4f class.. Aren't you doing the same thing as I am though?

##### Share on other sites

> Original post by Nanook
> Hmm.. I don't have a Vector4f class.. Aren't you doing the same thing as I am though?

The Vector4f is just storing a 4f vector, it's nothing magical - you already have a function for getting a quaternion from axis-angle. Err, forget the whole storing pitch and yaw, I was looking at the camera class.

Quaternion multiplication is not commutative though - try post-multiplying rotX instead of premultiplying. Other than that, yes, I think it's the same. These rotations occur in the local coordinate space of your object though - the axes change once your object rotates. Particularly once you rotate around the y-axis, your object's x-axis is no longer aligned with the world's x-axis. To always rotate in the same manner with respect to the world axes, you'll have to resort to keeping track of pitch and yaw and rebuilding the quaternion.
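The ideas in this thread — axis-angle construction as in fromAngleAxis, a non-commutative product, and rebuilding the orientation from stored yaw/pitch — can be sketched without any engine classes. This is a hypothetical standalone Python version (the function names are mine, not the poster's Math:: library):

```python
import math

def quat_from_axis_angle(angle, axis):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians
    about `axis`, which is assumed unit length (as in fromAngleAxis)."""
    half = 0.5 * angle
    s = math.sin(half)
    return (math.cos(half), s * axis[0], s * axis[1], s * axis[2])

def quat_mul(a, b):
    """Hamilton product a*b; quaternion multiplication is NOT commutative."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def orientation_from(yaw, pitch):
    """Rebuild the orientation each frame: yaw about world Y, then pitch
    about world X, so the rotations stay tied to the world axes."""
    q_yaw = quat_from_axis_angle(yaw, (0.0, 1.0, 0.0))
    q_pitch = quat_from_axis_angle(pitch, (1.0, 0.0, 0.0))
    return quat_mul(q_yaw, q_pitch)

# The two products differ, which is why pre- vs post-multiplying matters:
a = quat_from_axis_angle(math.pi / 2, (1.0, 0.0, 0.0))  # 90 deg about X
b = quat_from_axis_angle(math.pi / 2, (0.0, 1.0, 0.0))  # 90 deg about Y
print(quat_mul(a, b))  # (0.5, 0.5, 0.5,  0.5) up to floating-point rounding
print(quat_mul(b, a))  # (0.5, 0.5, 0.5, -0.5) up to floating-point rounding
```

The last two lines make the non-commutativity concrete: swapping the factors flips the sign of the z-component, i.e. it is a genuinely different rotation.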
http://www.maa.org/publications/maa-reviews/classical-galois-theory-with-examples?device=desktop
Classical Galois Theory with Examples

Lisl Gaal

Publisher: American Mathematical Society Chelsea
Publication Date: 1998
Number of Pages: 248
Format: Hardcover
Price: 30.00
ISBN: 0-8218-1375-7
Category: Textbook

BLL Rating: The Basic Library List Committee strongly recommends this book for acquisition by undergraduate mathematics libraries.

There is no review yet.

* Prerequisites: 1.1 Group theory; 1.2 Permutations and permutation groups; 1.3 Fields; 1.4 Rings and polynomials; 1.5 Some elementary theory of equations; 1.6 Vector spaces
* Fields: 2.1 Degree of an algebraic extension; 2.2 Isomorphisms of fields; 2.3 Automorphisms of fields; 2.4 Fixed fields
* Fundamental theorem: 3.1 Splitting fields; 3.2 Normal extensions and groups of automorphisms; 3.3 Conjugate fields and elements; 3.4 Fundamental theorem
* Applications: 4.1 Solvability of equations; 4.2 Solvable equations have solvable groups; 4.3 General equation of degree $n$; 4.4 Roots of unity and cyclic equations; 4.5 How to solve a solvable equation; 4.6 Ruler-and-compass constructions; 4.7 Lagrange's theorem; 4.8 Resolvent of a polynomial; 4.9 Calculation of the Galois group; 4.10 Matrix solutions of equations; 4.11 Finite fields; 4.12 More applications
* Bibliography
* Index
https://www.physicsforums.com/threads/two-linked-bodies-via-a-spring.933624/
# Two linked bodies via a spring

### 1. inv4lid — Dec 6, 2017

1. The problem statement, all variables and given/known data

Two linked bodies are attached to each other with a spring. If the second body is placed on a fixed support, the length of the spring is 5 cm. If we fix the first body as in the picture, the length of the spring becomes 15 cm. Determine the length of the spring in the non-deformed state.

m1 = 1 kg; m2 = 4 kg; l1 = 5 cm; l2 = 15 cm

2. Relevant equations

This time I have really no idea. The m1 pushes the spring with a force of 10 N and m2 should like respond with another force? Quite don't get it.

3. The attempt at a solution

Any help would be greatly appreciated.

### 2. PeroK — Dec 6, 2017

I suggest using Hooke's law.

### 3. inv4lid — Dec 6, 2017

10 N = kΔL?
ΔL1 = L1 - L0, from there L0 = -ΔL1 + L1 -> L0 = -ΔL1 + 5
ΔL2 = L2 - L0, from there L0 = -ΔL2 + L2 -> L0 = -ΔL2 + 15
Quite still don't get it.

### 4. PeroK — Dec 6, 2017

If we take the first case with the mass $m_1$: let $L$ be the natural length of the spring and $x_1 > 0$ be the contraction due to $m_1$. Then, by Hooke's law:

$m_1g = kx_1$

where $k$ is the (unknown) spring constant. Does that get you started?

### 5. inv4lid — Dec 6, 2017

What is x? I assume it's a different writing form of ΔL? Ok. Sorry, but I have already tried that above. mg (which is 10 N) = kx, where both k & x are unknown.

### 6. PeroK — Dec 6, 2017

$x_1$ is the distance that the spring is compressed from its natural length under the weight of $m_1$.

### 7. PeroK — Dec 6, 2017

Yes, I forgot you had used $\Delta L_1$. It doesn't matter how many unknowns you have at this stage. The trick is to keep going with the next equation and hope that you can get rid of the unknowns at some stage.

### 8. inv4lid — Dec 6, 2017

A question there: why do we need m2 there if it doesn't influence the object?

### 9. PeroK — Dec 6, 2017

We don't. I think the premise is that they are joined together. The mass of $m_2$ doesn't affect the first scenario, nor does $m_1$ affect the second. You might also ask how they managed to attach $m_1$ to the ceiling, but I wouldn't worry about that either.

### 10. inv4lid — Dec 6, 2017

Okay.
m1g = k(x1 - x0)
m1g = k(x2 - x0) ->
10 = k(5 - x0)
10 = k(15 - x0)
that's though quite non-sense

Last edited: Dec 6, 2017

### 11. PeroK — Dec 6, 2017

In post #4 I had: $m_1g = kx_1$. That seemed the simplest approach. My thinking was: the spring is being compressed by a certain amount. Let's call that $x_1$. But, if you are going to use $m_1g = k(x_0 - x_1)$, then you have to be careful about what $x_0$ and $x_1$ are.

In your second equation things have gone wrong. You've still got $10$ for the force, which can't be right. And, you've got negative numbers creeping in. Anyway, you need to fix those equations. We can stick with your notation, but you need to be careful about how you are defining things.

### 12. inv4lid — Dec 6, 2017

Okay. Ty for everything, gonna solve it.
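The thread stops before the final solve. For reference, here is one way to finish it (my own completion, following the hints above): write Hooke's law for both configurations and divide the equations, which eliminates both the unknown spring constant and g.

```python
# Compression under m1 (body 2 on the support):  m1*g = k*(L0 - L1)
# Extension under m2 (body 1 fixed above):       m2*g = k*(L2 - L0)
# Dividing the two equations eliminates k and g:
#   m1 / m2 = (L0 - L1) / (L2 - L0)  ->  L0 = (m1*L2 + m2*L1) / (m1 + m2)

def natural_length(m1, m2, l1, l2):
    """Undeformed spring length from the two loaded lengths."""
    return (m1 * l2 + m2 * l1) / (m1 + m2)

L0 = natural_length(1.0, 4.0, 5.0, 15.0)
print(L0)  # 7.0 (cm)
```

As a sanity check: m1 compresses the spring by 2 cm with 10 N, m2 stretches it by 8 cm with 40 N, and 10/40 = 2/8, consistent with one spring constant.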
http://mathoverflow.net/questions/10666/isomorphism-types-or-structure-theory-for-nonstandard-analysis
# Isomorphism types or structure theory for nonstandard analysis My question is about nonstandard analysis, and the diverse possibilities for the choice of the nonstandard model R*. Although one hears talk of the nonstandard reals R*, there are of course many non-isomorphic possibilities for R*. My question is, what kind of structure theorems are there for the isomorphism types of these models? Background. In nonstandard analysis, one considers the real numbers R, together with whatever structure on the reals is deemed relevant, and constructs a nonstandard version R*, which will have infinitesimal and infinite elements useful for many purposes. In addition, there will be a nonstandard version of whatever structure was placed on the original model. The amazing thing is that there is a Transfer Principle, which states that any first order property about the original structure true in the reals, is also true of the nonstandard reals R* with its structure. In ordinary model-theoretic language, the Transfer Principle is just the assertion that the structure (R,...) is an elementary substructure of the nonstandard reals (R*,...). Let us be generous here, and consider as the standard reals the structure with the reals as the underlying set, and having all possible functions and predicates on R, of every finite arity. (I guess it is also common to consider higher type analogues, where one iterates the power set ω many times, or even ORD many times, but let us leave that alone for now.) The collection I am interested in is the collection of all possible nontrivial elementary extensions of this structure. Any such extension R* will have the useful infinitesimal and infinite elements that motivate nonstandard analysis. It is an exercise in elementary mathematical logic to find such models R* as ultrapowers or as a consequence of the Compactness theorem in model theory. 
Since there will be extensions of any desired cardinality above the continuum, there are many non-isomorphic versions of R*. Even when we consider R* of size continuum, the models arising via ultrapowers will presumably exhibit some saturation properties, whereas it seems we could also construct non-saturated examples. So my question is: what kind of structure theorems are there for the class of all nonstandard models R*? How many isomorphism types are there for models of size continuum? How much or little of the isomorphism type of a structure is determined by the isomorphism type of the ordered field structure of R*, or even by the order structure of R*? - I haven't read the book, but it appears that "Super-Real Fields" by Dales & Woodin is really about that very question. –  François G. Dorais Jan 6 '10 at 19:14 Uh, oh. I hope this doesn't put me in the doghouse with Woodin! :-) (He was my advisor 15-20 years ago.) Actually, now I remember Dales giving a number of seminar talks in Berkeley on superreal fields, when he was visiting Woodin at that time. –  Joel David Hamkins Jan 6 '10 at 22:18 Under a not unreasonable assumption about cardinal arithmetic, namely $2^{<c}=c$ (which follows from the continuum hypothesis, or Martin's Axiom, or the cardinal characteristic equation t=c), the number of non-isomorphic possibilities for *R of cardinality c is exactly 2^c. To see this, the first step is to deduce, from $2^{<c} = c$, that there is a family X of 2^c functions from R to R such that any two of them agree at strictly fewer than c places. (Proof: Consider the complete binary tree of height (the initial ordinal of cardinality) c. By assumption, it has only c nodes, so label the nodes by real numbers in a one-to-one fashion. Then each of the 2^c paths through the tree determines a function f:c \to R, and any two of these functions agree only at those ordinals $\alpha\in c$ below the level where the associated paths branch apart. 
Compose with your favorite bijection R\to c and you get the claimed maps g:R \to R.) Now consider any non-standard model *R of R (where, as in the question, R is viewed as a structure with all possible functions and predicates) of cardinality c, and consider any element z in *R. If we apply to z all the functions *g for g in X, we get what appear to be 2^c elements of *R. But *R was assumed to have cardinality only c, so lots of these elements must coincide. That is, we have some (in fact many) g and g' in X such that *g(z) = *g'(z). We arranged X so that, in R, g and g' agree only on a set A of size $<c$, and now we have (by elementarity) that z is in *A. It follows that the 1-type realized by z, i.e., the set of all subsets B of R such that z is in *B, is completely determined by the following information: A and the collection of subsets B of A such that z is in *B. The number of possibilities for A is $c^{<c} = 2^{<c} = c$ by our cardinal arithmetic assumption, and for each A there are only c possibilities for B and therefore only 2^c possibilities for the type of z. The same goes for the n-types realized by n-tuples of elements of *R; there are only 2^c n-types for any finite n. (Proof for n-types: Either repeat the preceding argument for n-tuples, or use that the structures have pairing functions so you can reduce n-types to 1-types.) Finally, since any *R of size c is isomorphic to one with universe c, its isomorphism type is determined if we know, for each finite tuple (of which there are c), the type that it realizes (of which there are 2^c), so the number of non-isomorphic models is at most (2^c)^c = 2^c. 
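The final count can be condensed to a single line of cardinal arithmetic. This is my summary of the argument just given, writing $\mathfrak{c}$ for the cardinality of the continuum and using the standing assumption $2^{<\mathfrak{c}}=\mathfrak{c}$:

```latex
\[
  \#\bigl\{{}^{*}\mathbb{R} \text{ of size } \mathfrak{c}\bigr\}\big/{\cong}
  \;\le\;
  \bigl(\#\,\{\text{types}\}\bigr)^{\#\,\{\text{finite tuples}\}}
  \;\le\;
  \bigl(2^{\mathfrak{c}}\bigr)^{\mathfrak{c}}
  \;=\;
  2^{\mathfrak{c}\cdot\mathfrak{c}}
  \;=\;
  2^{\mathfrak{c}}.
\]
```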
To get from "at most" to "exactly" it suffices to observe that (1) every non-principal ultrafilter U on the set N of natural numbers produces a *R of the desired sort as an ultrapower, (2) that two such ultrapowers are isomorphic if and only if the ultrafilters producing them are isomorphic (via a permutation of N), and (3) that there are 2^c non-isomorphic ultrafilters on N. If we drop the assumption that $2^{<c}=c$, then I don't have a complete answer, but here's some partial information. Let \kappa be the first cardinal with 2^\kappa > c; so we're now considering the situation where \kappa < c. For each element z of any *R as above, let m(z) be the smallest cardinal of any set A of reals with z in *A. The argument above generalizes to show that m(z) is never \kappa and that if m(z) is always < \kappa then we get the same number 2^c of possibilities for *R as above. The difficulty is that m(z) might now be strictly larger than \kappa. In this case, the 1-type realized by z would amount to an ultrafilter U on m(z) > \kappa such that its image, under any map m(z) \to \kappa, concentrates on a set of size < \kappa. Furthermore, U could not be regular (i.e., (\omega,m(z))-regular in the sense defined by Keisler long ago). It is (I believe) known that either of these properties of U implies the existence of inner models with large cardinals (but I don't remember how large). If all this is right, then it would not be possible to prove the consistency, relative to only ZFC, of the existence of more than 2^c non-isomorphic *R's. Finally, Joel asked about a structure theory for such *R's. Quite generally, without constraining the cardinality of *R to be only c, one can describe such models as direct limits of ultrapowers of R with respect to ultrafilters on R. The embeddings involved in such a direct system are the elementary embeddings given by Rudin-Keisler order relations between the ultrafilters. 
(For the large cardinal folks here: This is just like what happens in the "ultrapowers" with respect to extenders, except that here we don't have any well-foundedness.) And this last paragraph has nothing particularly to do with R; the analog holds for elementary extensions of any structure of the form (S, all predicates and functions on S) for any set S. -

Andreas, Welcome to MO! I am very glad to see you here, and thank you very much for your thorough answer. – Joel David Hamkins Jun 17 '10 at 12:08

Andreas, I noticed that you may have multiple MO identities (see mathoverflow.net/users/6428). The moderators can merge these. I'll flag this post to bring the issue to their attention. – Joel David Hamkins Jun 18 '10 at 12:59

Thanks Joel! ... – Scott Morrison Jun 18 '10 at 15:56

I think that the nonstandard models of R* will be fairly wild by most reasonable metrics, since the theory is unstable (the universe is linearly ordered). For instance, I don't think that arbitrary models will be determined up to isomorphism by well-founded trees of countable submodels (as they are in "classifiable" theories). EDIT: I'm not sure how many nonisomorphic models there are of cardinality c (the size of the continuum), but there are 2^{2^c} distinct nonisomorphic nonstandard models of the theory of R* of size 2^c. A crude counting argument shows that this is the maximum number of nonisomorphic models of size 2^c that any theory with a language of cardinality 2^c could possibly have, which can be considered as evidence that the class of models of the theory of R* is "wild." (This result follows from the proof of Theorem VIII.3.2 of Shelah's Classification Theory, one of his "many-models" arguments about unclassifiable theories. In fact, an argument from the second chapter of my thesis applied to this theory shows that you can even build a collection of 2^{2^c} models of size 2^c which are pairwise bi-embeddable but pairwise nonisomorphic.)
It's a good question whether or not you can have two models of this theory which are order-isomorphic but nonisomorphic -- there must be somebody studying o-minimal structures with an answer to this. -

Thanks for the answer! I'd love to hear about how the size of the language affects the situation with stability... – Joel David Hamkins Jan 6 '10 at 1:16

Isn't your crude bound off by a power set? After all, the language here has size 2^c, not c. By my crude calculations, then, a model is determined up to isomorphism by a list of 2^c many subsets of a set of size c. But this gives 2^{2^c} not 2^{2^omega}. ( c = continuum = 2^omega ) Does this problem affect the rest of your answer? – Joel David Hamkins Jan 6 '10 at 14:06

@Joel: You're right, I was thinking that the size of the language was only c, not 2^c, which does affect my answer (which I just edited). A lot of things seem to break down when you start thinking about structures that are smaller than their language (e.g. Löwenheim-Skolem won't work to produce such models). – John Goodrick Jan 6 '10 at 19:23

Let me offer one counterpoint to John's excellent answer. Under the Continuum Hypothesis, the ultrapower version of R* will be saturated in any countable language. That is, it will realize all finitely realizable countable types with countably many parameters. Thus, by the usual back-and-forth construction, if we take the reduct to any countable part of the language, such as the ordered field structure and more, then under CH there will be only one saturated model of size continuum. I'm not sure if John's construction can produce saturated models, but if so, then under CH this observation will answer his question at the end about whether one can have non-isomorphic R*'s that are isomorphic as orders or even as ordered fields. -

The following was useful in a recent paper on asymptotic cones with Kramer, Shelah and Tent.
How many ultraproducts $\prod_{\mathcal{U}} \mathbb{N}$ exist up to isomorphism, where $\mathcal{U}$ is a non-principal ultrafilter over $\mathbb{N}$? If $CH$ holds, then obviously just one ... if $CH$ fails, then $2^{2^{\aleph_{0}}}$. In the case when $CH$ fails, the ultraproducts are already nonisomorphic as linearly ordered sets. The proof uses the techniques of Chapter VI of Shelah's book "Classification Theory and the Number of Non-isomorphic Models". -

See Robinson's book Non-Standard Analysis (North-Holland 1966)... Section 3.1 has some remarks about the order type of the non-standard natural numbers. -

In the case of countable nonstandard models of arithmetic, the order type is clearly $\omega + \mathbb{Z}\cdot\mathbb{Q}$, since there will be a Z chain around each nonstandard integer, and the Z blocks will be densely ordered, hence Q many of them. Does he classify the order-type arising for the N of R*? I guess it has order type $\omega + \mathbb{Z}\cdot t$ for some dense order type t, which looks closely related to R. – Joel David Hamkins Jan 6 '10 at 16:34

This is his Theorem 3.1.6 ... the order type is $\omega + (\omega^* + \omega)\cdot\theta$, where $\theta$ is a dense order type without first or last element. – Gerald Edgar Jan 7 '10 at 16:45
https://www.illustrativemathematics.org/content-standards/5/NF/B/5/tasks/22
# Running a Mile

Alignments to Content Standards: 5.NF.B.5

Curt and Ian both ran a mile. Curt's time was $\frac89$ of Ian's time. Who ran faster? Explain and draw a picture.

## IM Commentary

There is a subtlety worth noting: we are given information about the boys' times but asked about their speeds. Since the distance they run is the same, this isn't difficult to reason through, but teachers need to be aware of it. The two solutions reflect different competencies described in 5.NF.5. The first solution uses the idea that multiplying by a fraction less than 1 results in a smaller value. The second actually uses the meaning of multiplying by $\frac89$ to explain why multiplying by that fraction will result in a smaller value.

## Solutions

### Solution: Scaling by a number less than 1

To find Curt's time, you would multiply Ian's time by $\frac89$. Since we are multiplying Ian's time by a number less than 1, Curt's time will be less than Ian's time. The picture shows Ian's time multiplied by 1 above the number line and Ian's time multiplied by $\frac89$ below the number line. Since they both ran the same distance but Curt ran it in less time, he must have been running faster.

### Solution: Using the meaning of fraction multiplication

Curt's time is $\frac89 \times$ Ian's time. That means that if you divide Ian's time into 9 equal time intervals and take 8 of those intervals, you will have Curt's time. So Curt's time to run a mile is less than Ian's, and he must be going faster.
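The scaling comparison can be checked with exact arithmetic; the sketch below uses Python's `fractions` module, and Ian's 9-minute mile time is a made-up value used only for illustration.

```python
from fractions import Fraction

ian_time = Fraction(9)                 # hypothetical: Ian runs the mile in 9 minutes
curt_time = Fraction(8, 9) * ian_time  # Curt's time is 8/9 of Ian's time

# Multiplying by a fraction less than 1 yields a smaller value...
assert curt_time < ian_time

# ...and over the same 1-mile distance, less time means greater speed.
ian_speed = Fraction(1) / ian_time     # miles per minute
curt_speed = Fraction(1) / curt_time
assert curt_speed > ian_speed
```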
https://www.gradesaver.com/textbooks/math/algebra/elementary-linear-algebra-7th-edition/chapter-4-vector-spaces-review-exercises-page-221/36
## Elementary Linear Algebra 7th Edition

Published by Cengage Learning

# Chapter 4 - Vector Spaces - Review Exercises - Page 221: 36

#### Answer

$S$ is a basis for $M_{2,2}$.

#### Work Step by Step

Let $S$ be given by $$S=\left\{\left[\begin{array}{cc}{1} & {0} \\ {0} & {1}\end{array}\right],\left[\begin{array}{cc}{-1} & {0} \\ {1} & {1}\end{array}\right],\left[\begin{array}{cc}{2} & {1} \\ {1} & {0}\end{array}\right],\left[\begin{array}{cc}{1} & {1} \\ {0} & {1}\end{array}\right]\right\}.$$ Consider the combination $$a \left[\begin{array}{cc}{1} & {0} \\ {0} & {1}\end{array}\right]+b\left[\begin{array}{cc}{-1} & {0} \\ {1} & {1}\end{array}\right]+c\left[\begin{array}{cc}{2} & {1} \\ {1} & {0}\end{array}\right]+d \left[\begin{array}{cc}{1} & {1} \\ {0} & {1}\end{array}\right]=0, \quad a,b,c,d\in \mathbb{R},$$ which yields the following system of equations: \begin{align*} a-b+2c+d&=0\\ c+d&=0\\ b+c&=0\\ a+b+d&=0. \end{align*} The coefficient matrix of this system is $$\left[ \begin{array}{cccc} 1&-1&2&1\\ 0&0&1&1 \\ 0&1&1&0\\ 1&1&0&1\end{array} \right].$$ One can check that the determinant of the coefficient matrix is nonzero, so the system has only the trivial solution; hence $S$ is a linearly independent set of vectors. Since $M_{2,2}$ has dimension $4$, by Theorem 4.12 $S$ is a basis for $M_{2,2}$.
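The nonzero-determinant claim is easy to verify directly; here is a small cofactor-expansion check in Python (any CAS or `numpy.linalg.det` would do equally well).

```python
def det(M):
    # Laplace (cofactor) expansion along the first row; fine for a 4x4 matrix.
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, a in enumerate(M[0]):
        if a:
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * a * det(minor)
    return total

# Coefficient matrix of the system above
A = [[1, -1, 2, 1],
     [0,  0, 1, 1],
     [0,  1, 1, 0],
     [1,  1, 0, 1]]

print(det(A))  # → -4: nonzero, so the homogeneous system has only the trivial solution
```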
https://electronics.stackexchange.com/questions/397241/warnings-about-power-supply-in-eagle
# Warnings about power supply in EAGLE

So I'm using EAGLE for the first time, and I have zero experience with any EDA software. I've completed a schematic and would like to proceed to design the board layout, but I am stalled by these warnings, which make zero sense to me. Whenever I add and connect a power supply element I get these warnings. I'm not sure why, and thus far I have seen people saying that this is a stupid side-effect of how EAGLE handles its nets. So the question is: are these warnings something to be taken care of in some way (if so, how?), or should I simply ignore/approve them? Please advise.

Bonus question: I know a bit about electronic circuitry, but I am not an electrical engineer. Do you have any tips and tricks that I should be aware of in large PCB designs? E.g., I have added decoupling capacitors of 0.1uF across any and all IC power supplies. Are there more things like this to take into consideration? Any information is welcome and helpful :)

• In kicad we have a "power flag". Maybe there's something similar in eagle. Or you can always switch to kicad, which is free and WAY ahead of eagle – valerio_new Sep 21 '18 at 13:44

• Hi there! I don't see any options for the particular power supply elements. I'll try and research further on your input — seems logical, although one would expect a power supply element would default to such a flag. I tried Kicad, but I couldn't get it to do anything and it kept crashing on me. So after about 10 minutes it was scrapped. I've also tried EasyEDA, but I'm reluctant to use online tools for something that might be worth selling. – Casper B. Hansen Sep 21 '18 at 19:20

• Include a shot of your schematic if it's not secret or proprietary, as it may make obvious if you have parts that require additional schematic components or some such. Some of the errors seem to imply that you have cross-connected nets somehow without combining them.
– K H Sep 22 '18 at 1:49

• It's a rather large schematic with several modules, and also yes, I'd like it to stay a secret, for now at least. Perhaps it is due to the modules? In each module I've simply put new 5V and GND wherever needed. Do I need to bridge the net through the module, or does EAGLE know that these are one and the same net? – Casper B. Hansen Sep 22 '18 at 5:33

On the schematic, grab each device and drag it a little to make sure all pins are actually connected. That should help you find things like IC1P V+ connected to N$2 vs being connected to 5V. Check the board layouts, make sure the pins you think are connected to 5V are actually connected to 5V. I change the Unrouted layer to something more easily seen - like Yellow when the Top & Bottom layers are turned on, or Black when they are not - to see them more easily. Run the Rats Nest to make sure there are no airwires. If all the pins are connected as you want, then you can Approve or Ignore the warnings.

I add a Ground layer to both top & bottom. Draw a Polygon around the outside of your board, and Name it Gnd, for both layers. You may have to tweak the end point of the polygon a little to not have it overlap itself. Do not route Gnd signals - let them connect to the Gnd plane. Maybe just add a short section of Gnd trace to an SMD pad if it is a thin pad and will connect automatically. Add Vias Named Gnd to connect the top & bottom layers. Companies like iteadstudio.com will accept vias as small as 12mil to connect the layers. I may go that small near an SMD pin to connect it to Gnd on the bottom layer. Then use 20 mil or 24 mil vias to connect the layers. Rats Nest will make the layers show up. You can use the Rip Up button to make them hidden again.
This is to save power. As far as I can see the warnings are triggered by connecting the supplies to the ICs, not that they aren't connected. Great tips on checking from the board as well — I'll give it a go tomorrow, thanks! – Casper B. Hansen Sep 21 '18 at 19:37

I didn't find any solution to the warnings, but I checked out the PCB board and it seems as though both 5V and GND are correctly connected. I guess EAGLE is just being pedantic about details that I don't know about :)
https://www.computer.org/csdl/mags/ds/2006/09/o9001.html
Strategies for Checkpoint Storage on Opportunistic Grids

Raphael Y. de Camargo
Fabio Kon, University of São Paulo
Renato Cerqueira, Pontifical Catholic University of Rio de Janeiro

Abstract—This article evaluates several strategies for storing checkpoint data in an opportunistic grid environment, including replication, parity information, and erasure coding. This evaluation compares the computational overhead, storage overhead, and degree of fault tolerance of these strategies.

Executing computationally intensive parallel applications on dynamic heterogeneous environments such as computational grids can be a daunting task, 1-3 especially when using nondedicated resources. Such is the case for opportunistic computing, 4 where we use only the shared machines' idle periods. In such a scenario, machines often fail and frequently change their state from idle to occupied, compromising their execution of applications. Unlike dedicated resources, whose mean time between failures is typically weeks or even months, nondedicated resources can become unavailable several times during a single day. In fact, some machines are unavailable more than they're available. Fault tolerance mechanisms, such as checkpoint-based rollback recovery, 5 can help guarantee that applications execute properly amid frequent failures. The fault tolerance mechanism must save generated checkpoints on a stable storage medium. The usual solution is to install checkpoint servers and connect them to the nodes through a high-speed network. But a dedicated server can easily become a bottleneck as grid size increases. Moreover, when using an opportunistic computing environment, relying on such dedicated hardware increases hardware costs and contradicts the objective of such a system, which is to use the idle time of shared machines. The simplest solution is to use the grid's shared nodes as the storage medium for checkpoints, thus storing and retrieving data in the nodes' shared disk space.
But, to preserve the machine's quality of service, it's best to store and retrieve data from machines only when they are idle. Consequently, it's likely that data stored on a machine will be unavailable when requested to restart a failed application. One way to solve this problem is to store multiple replicas of checkpoint data, so that you can recover the stored data even when part of the data repositories are unavailable. Another approach is to break data into several fragments, adding some redundancy, to enable data recovery from a subset of the fragments. Two common techniques for splitting data into redundant fragments are the use of erasure coding, such as information dispersal algorithms, 6 and the addition of parity information. In this article, which builds on previously published work, 7 we evaluate several strategies for the distributed storage of checkpoint data in opportunistic environments. (The sidebar "Related Work" discusses other recent work in this area.) We focus on the storage of checkpoint data inside a single cluster. We present a prototype implementation of a distributed checkpoint repository over InteGrade ( http://www.integrade.org.br/portal), 8 a multiuniversity grid middleware project to leverage the computing power of idle shared workstations. Using this prototype, we performed several experiments to determine the trade-offs in these strategies between computational overhead, storage overhead, and degree of fault tolerance.

Data storage strategies

A data-coding strategy must consider scalability, computational cost, and fault tolerance. We analyze three different strategies for data coding: data replication, data parity, and erasure coding in an information dispersal algorithm. We briefly compare their computational cost, storage overhead, and effects on data availability. To evaluate data availability, we must determine which machines will store checkpoint data.
It's possible to distribute the data over the nodes executing the application, other grid nodes, or both. Here, we evaluate checkpoint data storage in the machines where the parallel application executes, using an extra node when the coding strategy requires one. We use this approach because a parallel application normally tries to use most of the available nodes in a cluster to execute its processes. This approach also lets us couple application failures with data repository failures, making it easier to control checkpoint data availability at any time. In this work, we don't address data integrity or privacy. It's possible to check data integrity using secure hash functions such as MD5 and SHA-1. Encrypting the checkpoints ensures data privacy. Data encryption is a computationally intensive process and should be used only when necessary.

Data replication

Using data replication, we store full replicas of the generated checkpoints. If one of the replicas becomes inaccessible, we can use another. The advantage is that no extra coding is necessary, but the disadvantage is that we must transfer and store large amounts of data. For example, guaranteeing safety against a single failure requires saving two copies of the checkpoint. In our scenario, transferring two times the checkpoint data would generate too much network traffic, so we store a copy of the checkpoint locally and another remotely. Even though a failure in a machine running the application will make one of the checkpoints inaccessible, it will be possible to retrieve the other copy. Moreover, the other application processes can use their local checkpoint copies. Consequently, this storage mode provides recovery as long as one of the two nodes containing a checkpoint replica is available.

Parity

An alternative to data replication is to slice the checkpoint data into several fragments and store these fragments with an additional fragment containing parity information.
This avoids data replication's large storage requirements because it requires the storage of only one checkpoint copy. We use a scheme in which each node calculates its checkpoint's parity locally, first dividing the generated checkpoint into $m$ slices and then calculating the parity over these slices. We divide a checkpoint vector $U$ of size $n$ into $m$ slices of size $n/m$, given by $U = U^0, U^1, \ldots, U^{m-1}$, with $U^k = u_0^k, u_1^k, \ldots, u_{n/m-1}^k$, $0 \le k < m$, where $k$ is the slice number and $u_i^k$ represents the elements of slice $U^k$. We calculate the elements $p_i$, $0 \le i < n/m$, of the parity information vector $P$ as $p_i=u_i^0 \oplus u_i^1 \oplus \ldots \oplus u_i^{m-1}$, $0 \le i < n/m$, where $\oplus$ represents the exclusive-or (XOR) operation. From each process, the system then distributes the slices $U^k$ and the parity vector $P$ for storage on other nodes. Similarly, we can recover a missing fragment by performing the XOR operation over the recovered fragments. Evaluating parity is fast because it requires only simple XOR operations, and the storage overhead is very small. The drawback is that the parity strategy doesn't tolerate two or more simultaneous failures.

Information dispersal algorithm

Michael Rabin's classic information dispersal algorithm (IDA) generates a space-optimal coding of data. 6 It allows coding a vector $U$ of size $n$ into $m + k$ encoded vectors of size $n/m$, such that regenerating $U$ is possible using only $m$ encoded vectors. This encoding lets you achieve different fault tolerance levels by merely tuning the values of $m$ and $k$. In practice, it's possible to tolerate $k$ failures with an overhead of only $k \cdot n/m$ elements. This algorithm requires the computation of mathematical operations over a Galois field GF($q$), a finite field of $q$ elements, where $q$ is either prime or a power $p^x$ of a prime number $p$. When using $q = p^x$, you carry arithmetic operations over the field by representing the numbers as polynomials of degree less than $x$ with coefficients in $[0, p - 1]$.
You calculate sums with XOR operations, whereas you carry out multiplications by multiplying the polynomials modulo an irreducible polynomial of degree $x$. In our case, we use $p = 2$ and $x = 8$, representing a byte. To speed up calculations, we perform simple table lookups for the multiplications. The algorithm also requires the generation of $m + k$ linearly independent vectors $\alpha_i$ of size $m$. We can easily generate these vectors by choosing $m + k$ distinct values $a_i$, $0 \le i < m + k$, and setting $\alpha_i = (1, a_i, \ldots, a_i^{m-1})$. We then organize these vectors as a matrix $G$, defined as $G = [\alpha_0^T, \alpha_1^T, \ldots, \alpha_{m+k-1}^T]$, where $T$ indicates the transpose of vector $\alpha$. We now break file $F$ into $n/m$ information words $U_i$ of size $m$ and generate $n/m$ code words $V_i$ of size $m + k$, where $V_i = U_i \times G$. The $m + k$ encoded vectors $E_i$, $0 \le i < m + k$, are given by $E_i = V_0[i], V_1[i], \ldots, V_{n/m-1}[i]$. To recover the original information words $U_i$, we need to recover $m$ of the $m + k$ encoded slices. We then construct code words $V_j'$, which are equivalent to the original code words $V_i$ but contain only the components of the $m$ recovered slices. Similarly, we construct a matrix $G'$ containing only the elements relative to the recovered slices. We now recover the $U_i$ by multiplying the encoded words $V_j'$ by the inverse of $G'$: $U_i = V_j' \times (G')^{-1}$. The main drawback of this approach is that coding requires $O[(m + k)nm]$ steps and decoding requires $O(nm^2)$ steps, in addition to the inversion of an $m \times m$ matrix. Qutaibah Malluhi and William Johnston proposed an algorithm that improves coding computation complexity to $O(nmk)$ and also improves decoding. 9 They showed that you can diagonalize the first $m$ columns of $G$ and still have a valid algorithm. Consequently, the first $m$ fields of code words $V_i$ involve simple data copying. Coding is necessary only for the last $k$ fields. This approach reduces encoding complexity considerably. IDA's greatest advantage is that it provides the desired degree of fault tolerance with optimal space overhead.
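To make the two coding strategies concrete, here is a small end-to-end sketch. It is illustrative only: the parity part is a byte-level column XOR, and the dispersal part is a toy Rabin-style IDA over the prime field GF(257), chosen for readability; the article's implementation uses GF(2^8) with table lookups and the Malluhi-Johnston diagonalization, neither of which is reproduced here.

```python
from functools import reduce

# --- Local parity: m data slices plus one XOR parity slice -------------------
def parity_slices(data, m):
    data = bytes(data) + b"\x00" * (-len(data) % m)   # zero-pad to a multiple of m
    size = len(data) // m
    slices = [data[i * size:(i + 1) * size] for i in range(m)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*slices))
    return slices + [parity]

def recover_missing(surviving):
    # XOR of the m surviving fragments (data and/or parity) rebuilds the lost one
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*surviving))

# --- Toy IDA over GF(257): any m of the m + k slices recover the data --------
P = 257

def _rows(indices, m):
    # Vandermonde rows alpha_i = (1, a_i, ..., a_i^(m-1)) with a_i = i + 1
    return [[pow(i + 1, j, P) for j in range(m)] for i in indices]

def _invert(M):
    # Gauss-Jordan inversion over GF(P); pivots inverted via Fermat's little theorem
    n = len(M)
    A = [list(row) + [int(r == c) for c in range(n)] for r, row in enumerate(M)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c])
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], P - 2, P)
        A[c] = [x * inv % P for x in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [(x - f * y) % P for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

def ida_encode(data, m, k):
    data = list(data) + [0] * (-len(data) % m)
    G = _rows(range(m + k), m)
    words = [data[i:i + m] for i in range(0, len(data), m)]
    coded = [[sum(g * u for g, u in zip(row, w)) % P for row in G] for w in words]
    return [[v[i] for v in coded] for i in range(m + k)]  # slice i = i-th coordinate

def ida_decode(available, m):
    # `available` maps slice index -> slice contents; any m entries suffice
    idx = sorted(available)[:m]
    Ginv = _invert(_rows(idx, m))
    out = []
    for w in range(len(available[idx[0]])):
        v = [available[i][w] for i in idx]
        out.extend(sum(g * x for g, x in zip(row, v)) % P for row in Ginv)
    return out

# Parity tolerates exactly one lost fragment...
frags = parity_slices(b"checkpoint data!", m=4)
assert recover_missing(frags[:2] + frags[3:]) == frags[2]

# ...while IDA with m=4, k=2 survives any two losses.
data = list(b"opportunistic grid checkpoint!")
slices = ida_encode(data, m=4, k=2)
survivors = {i: s for i, s in enumerate(slices) if i not in (0, 3)}
assert ida_decode(survivors, m=4)[:len(data)] == data
```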
For an application composed of 10 nodes, if we set $m$ as 10, the algorithm can tolerate a failure of one node with a 10 percent space overhead, two failures with a 20 percent overhead, and so on. The disadvantage of this approach is the computational complexity of implementing the algorithm and the higher computational overhead.

InteGrade is a grid middleware solution for harnessing idle computing power from shared workstations. 7 It consists of a collection of hierarchically organized InteGrade clusters. Here, we focus on checkpoint storage inside a single cluster.

Architecture

Figure 1 shows an InteGrade cluster's main modules. The global resource manager (GRM) controls resource management at the cluster level; the local resource manager (LRM) controls it at the node level. To form a grid composed of a cluster federation, GRMs from different clusters communicate with one another to allow global sharing of local resources. An LRM communicates only with modules from its own cluster. This separates resources belonging to different clusters; consequently, administrators can apply custom policies for each cluster. InteGrade provides portable application-level checkpointing and rollback recovery for sequential applications and for BSP (bulk synchronous parallel) and parameter-sweeping parallel applications. 10 The main modules for storing checkpoint data are the checkpointing library, the execution manager (EM), the cluster data repository manager (CDRM), and the autonomous data repositories (ADRs). The checkpointing library provides the functionality to periodically generate portable checkpoints containing the application state. The EM maintains a list of applications executing on the cluster and coordinates the reinitialization process when an application fails. The CDRM manages the available ADRs from its cluster and the location of checkpoint data fragments. The ADRs store checkpoint data; they reside on machines that share their resources with the grid.
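The interaction among these modules can be caricatured with in-memory stand-ins; the class and method names below are hypothetical, not InteGrade's actual interfaces.

```python
class ADR:
    """Autonomous data repository: toy stand-in for shared disk space on a node."""
    def __init__(self):
        self.disk = {}

    def put(self, key, fragment):
        self.disk[key] = fragment

    def get(self, key):
        return self.disk[key]


class CDRM:
    """Cluster data repository manager: tracks repositories and fragment locations."""
    def __init__(self, repositories):
        self.repositories = repositories
        self.locations = {}           # checkpoint id -> [(repository, fragment index)]

    def pick_repositories(self, n):
        return self.repositories[:n]  # a real CDRM would pick idle machines

    def register(self, ckpt_id, placement):
        self.locations[ckpt_id] = placement

    def lookup(self, ckpt_id):
        return self.locations[ckpt_id]


def store_checkpoint(cdrm, ckpt_id, fragments):
    # The checkpointing library queries the CDRM, then transfers the fragments.
    placement = []
    for i, (repo, frag) in enumerate(zip(cdrm.pick_repositories(len(fragments)), fragments)):
        repo.put((ckpt_id, i), frag)
        placement.append((repo, i))
    cdrm.register(ckpt_id, placement)


def recover_checkpoint(cdrm, ckpt_id):
    # On restart, the library asks the CDRM where the fragments live and fetches them.
    return [repo.get((ckpt_id, i)) for repo, i in cdrm.lookup(ckpt_id)]


cdrm = CDRM([ADR() for _ in range(3)])
store_checkpoint(cdrm, "app42-ckpt1", [b"frag0", b"frag1", b"frag2"])
assert recover_checkpoint(cdrm, "app42-ckpt1") == [b"frag0", b"frag1", b"frag2"]
```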
When the checkpointing library needs to store checkpoint data, it queries the local CDRM for available local data repositories and then transfers the data to the returned repositories. Checkpoint data recovery involves the library querying the CDRM for the list of repositories containing the checkpoints and retrieving the checkpoints from these repositories. The benefits of using a centralized GRM, EM, and CDRM for each cluster include implementation ease and simpler algorithms requiring fewer message exchanges. Moreover, these modules are part of a cluster federation, thus allowing replication and logging of their contents in other clusters for increased fault tolerance.

Experiments

We performed the experiments using 11 AthlonXP 1700+ processors with 1 Gbyte of RAM, connected by a switched 100-Mbps Fast Ethernet network. Students use the machines during the day, so we performed the experiments at night. Our objective was to measure the overhead of checkpoint storage during normal operation without machines becoming unavailable. We configured all machines as part of a single InteGrade cluster. We used a matrix multiplication application with matrices of different sizes and composed of 12-byte double-precision elements. We evaluated the time necessary to encode and decode data and the overhead of using different storage strategies.

Data encoding and decoding

We first measured the time required to encode and decode a checkpoint using IDA and local parity. We varied the data size and the number of slices for the comparison. Figure 2 shows the results. In the graphs, IDA (m, k) represents IDA using m and k as described earlier.

Figure 2. Time required to (a) code and (b) decode a file.

As expected, the local parity calculation was faster than IDA in all scenarios. The most interesting result, however, was that coding with IDA wasn't too expensive.
Encoding 100 Mbytes of data required only a few seconds; the encoding time increased linearly with the number of extra slices k and the data size. The same was true for decoding. Therefore, recovering the data shouldn't take more than a few seconds. With further optimizations in vector multiplications, we can achieve even better results. Consequently, the results of this experiment were very satisfactory. We also evaluated the overhead incurred by checkpointing, coding, and distributing storage over a parallel-application execution time. The objective was to compare the overhead for several of the storage strategies we described earlier. We evaluated the following scenarios:

• No storage. The system generates checkpoints but doesn't store them.
• Centralized repository. The system stores checkpoints in a centralized repository.
• Replication. The system stores one copy of the checkpoint locally and another in a remote repository.
• Parity over local checkpoints. The system breaks the checkpoint into 10 slices, with one containing parity information, and stores them in distributed repositories.
• IDA (m = 9, k = 1). The system codes the checkpoint into 10 slices, from which nine are sufficient for recovery, and stores them in distributed repositories.
• IDA (m = 8, k = 2). The system codes the checkpoint into 10 slices, from which eight are sufficient for recovery, and stores them in distributed repositories.

When using replication, the system distributes remotely stored checkpoints throughout the nine nodes executing the application. For the last three scenarios, which generate 10 slices, we used an additional node to store the remaining slice. We stored the checkpoints in the machines executing the application processes so that we could evaluate how checkpoint storage affects application execution time. If other machines in the cluster are idle, it would be better to store data on these machines, thus transferring the storage overhead to them.
Figure 3 compares the overhead of storing checkpoints to the overhead when the application generates but doesn't store checkpoints. The x-axis contains the six storage scenarios. The y-axis shows the normalized execution time. We used nine nodes to perform the matrix multiplication and three matrix sizes: 1,350 × 1,350; 2,700 × 2,700; and 5,400 × 5,400. To perform the benchmark, we divided the total execution time into execution segments bounded by the checkpoint generation times. Table 1 gives these values for each matrix, along with the number of generated checkpoints and the size of local and global checkpoints.

Figure 3   Checkpoint storage overhead for the matrix multiplication application: (a) 1,350 × 1,350 matrix, (b) 2,700 × 2,700 matrix, and (c) 5,400 × 5,400 matrix.

Table 1. Execution parameters for the execution overhead experiment.

The results show that IDA incurs the highest overhead—which we expected, because IDA requires data encoding. But the extra overhead was always below 3 percent, which is very small, especially considering the large checkpoint sizes. Although for the 5,400 × 5,400 matrices the checkpoint interval was 10 minutes, we could reduce this value to 5 minutes or less and still get a reasonable overhead.

Conclusion

In comparing these storage strategies, we saw that using parity, replication, or centralized storage could reduce overhead, but with a lower degree of fault tolerance. Encoding with IDA was slower but more flexible, because by manipulating its parameters we could trade off between speed, resource use, and level of fault tolerance. Moreover, IDA requires less storage space and network use, thus allowing better resource utilization. Finally, in our experimental scenario, the execution overhead of producing, coding, and storing global checkpoints greater than 1 Gbyte was small, always below 3 percent, for typical checkpoint intervals of a few minutes.
Storing all checkpoint data inside a single cluster is efficient because machines are normally connected by fast switched networks. But this approach has the drawback of being sensitive to correlated failures in the cluster nodes. A scenario in which all machines in a cluster become unreachable is quite common and can occur, for example, from a problem in the local network. Because we're dealing with a federation of clusters, we can make the system tolerant to correlated failures by distributing the checkpoint fragments randomly throughout the grid. IDA seems to be a good choice for encoding data because it lets us code a file into an arbitrary number of fragments and requires only a subset of them to recover the original file. For example, we could encode a file into 32 blocks, from which 16 are required for recovery, to achieve very high availability levels. 11,12

The main problem with using remote clusters for storage is that sending and fetching data from distant repositories is typically far slower than operating in the local cluster. A good solution is to store most of the generated checkpoints in the local cluster and the remaining ones in remote repositories. For example, for each five generated checkpoints, the system would store four in the local cluster and one in remote clusters. This solution would prevent the large overhead of sending data through long distances, at the cost of possibly greater computation loss if there are correlated failures in the cluster nodes. However, coordinating data distribution in the entire grid is complex, requiring the clusters to communicate with one another and share information about their available data repositories and locally stored files. Several challenges remain for efficient distributed storage, including the development of scalable algorithms for data location, data consistency, data privacy, and fault tolerance.
We are working on a distributed storage system for computational grids that allows reliable storage of arbitrary data. In this scheme, CDRMs from InteGrade clusters communicate with one another using a structured peer-to-peer overlay network. The system encodes data using IDA and then distributes the fragments throughout the grid. Each fragment receives a unique ID, which the system uses to route the fragment to the target CDRM. This system stores all data related to grid applications, including input, output, and checkpoint data.

Sidebar: Related Work

Researchers have also compared several data storage strategies in different contexts. Hakim Weatherspoon and John Kubiatowicz compare erasure coding to replication in the context of peer-to-peer systems, 1 as do Rodrigo Rodrigues and Barbara Liskov. 2 In both cases, they evaluate the availability properties of erasure coding and replication using analytical formulations and collected data analysis. Our work focuses on the application execution overhead from coding and storing checkpoint data.

Peter Sobe analyzes two different parity techniques for storing checkpoints in distributed systems. 3 He compares the two models in an analytical study, but, unlike our work, this project doesn't include any experiments. Also, he evaluates a different set of storage strategies than we evaluate here.

James S. Plank, Kai Li, and Michael A. Puening propose the use of diskless checkpointing, 4 which involves storing checkpoint data on system-volatile memory, removing the overhead of stable storage. Like us, they evaluate strategies for storing checkpoint data on the processing nodes and one or more backup nodes. But the focus of their work is on comparing diskless and disk-based checkpointing, and they performed their experiments using parity information for fault tolerance.
Qutaibah Malluhi and William Johnston use an optimized version of Michael Rabin's information dispersal algorithm 5 and of 2D parity coding schemes, comparing their efficiency analytically. 6 We compare a different set of coding techniques, perform experimental evaluations, and focus on nondedicated repositories. Finally, Jim Pruyne and Miron Livny study the use of multiple-checkpoint servers to store checkpoints from parallel applications, but they only compare single and dual dedicated checkpoint servers. 7

References

1. H. Weatherspoon and J. Kubiatowicz, "Erasure Coding vs. Replication: A Quantitative Comparison," Proc. 1st Int'l Workshop Peer-to-Peer Systems (IPTPS 02), Springer, 2002, pp. 328-338.
2. R. Rodrigues and B. Liskov, "High Availability in DHTs: Erasure Coding vs. Replication," Proc. 4th Int'l Workshop Peer-to-Peer Systems (IPTPS 05), Springer, 2005, pp. 226-239.
3. P. Sobe, "Stable Checkpointing in Distributed Systems without Shared Disks," Proc. 17th Int'l Symp. Parallel and Distributed Processing (IPDPS 03), IEEE CS Press, 2003, pp. 214-221; http://doi.ieeecomputersociety.org/10.1109/IPDPS.2003.1213392.
4. J.S. Plank, K. Li, and M.A. Puening, "Diskless Checkpointing," IEEE Trans. Parallel and Distributed Systems, vol. 9, no. 10, 1998, pp. 972-986; http://doi.ieeecomputersociety.org/10.1109/71.730527.
5. M.O. Rabin, "Efficient Dispersal of Information for Security, Load Balancing, and Fault Tolerance," J. ACM, vol. 36, no. 2, 1989, pp. 335-348.
6. Q.M. Malluhi and W.E. Johnston, "Coding for High Availability of a Distributed-Parallel Storage System," IEEE Trans. Parallel and Distributed Systems, vol. 9, no. 12, 1998, pp. 1237-1252; http://doi.ieeecomputersociety.org/10.1109/71.737699.
7. J. Pruyne and M. Livny, "Managing Checkpoints for Parallel Programs," Proc. Workshop Job Scheduling Strategies for Parallel Processing (IPPS 96), Springer, 1996, pp. 140-154.
• DS Online's Grid Computing Community, cms:/dsonline/topics/gc/index.xml
• "Portable Checkpointing and Communication for BSP Applications on Dynamic Heterogeneous Grid Environments," Proc. SBAC-PAD 05, http://doi.ieeecomputersociety.org/10.1109/CAHPC.2005.33
• "Skewed Checkpointing for Tolerating Multi-Node Failures," Proc. 23rd IEEE Int'l Symp. Reliable Distributed Systems (SRDS 04), http://doi.ieeecomputersociety.org/10.1109/RELDIS.2004.1353012

Acknowledgments

A grant from CNPq, Brazil (process no. 55.2028/02-9) supported this work.

References

• 1. F. Berman, G. Fox, and T. Hey, Grid Computing: Making the Global Infrastructure a Reality, John Wiley & Sons, 2003.
• 2. R.Y. de Camargo, et al., "The Grid Architectural Pattern: Leveraging Distributed Processing Capabilities," Pattern Languages of Program Design 5, D. Manolescu, J. Noble, and M. Völter, eds., Addison-Wesley, 2006, pp. 337-356.
• 3. I. Foster and C. Kesselman, The Grid 2: Blueprint for a New Computing Infrastructure, Morgan Kaufmann, 2003.
• 4. M. Litzkow, M. Livny, and M. Mutka, "Condor—A Hunter of Idle Workstations," Proc. 8th Int'l Conf. Distributed Computing Systems (ICDCS 88), IEEE CS Press, 1988, pp. 104-111.
• 5. M. Elnozahy, et al., "A Survey of Rollback-Recovery Protocols in Message-Passing Systems," ACM Computing Surveys, vol. 34, no. 3, 2002, pp. 375-408.
• 6. M.O. Rabin, "Efficient Dispersal of Information for Security, Load Balancing, and Fault Tolerance," J. ACM, vol. 36, no. 2, 1989, pp. 335-348.
• 7. R.Y. de Camargo, R. Cerqueira, and F. Kon, "Strategies for Storage of Checkpointing Data Using Non-Dedicated Repositories on Grid Systems," Proc. 3rd Int'l Workshop Middleware for Grid Computing (MGC 05), ACM Press, 2005, pp. 1-6.
• 8. A. Goldchleger, et al., "InteGrade: Object-Oriented Grid Middleware Leveraging Idle Computing Power of Desktop Machines," Concurrency and Computation: Practice and Experience, vol. 16, no. 5, 2004, pp. 449-459.
• 9. Q.M. Malluhi and W.E.
Johnston, "Coding for High Availability of a Distributed-Parallel Storage System," IEEE Trans. Parallel and Distributed Systems, vol. 9, no. 12, 1998, pp. 1237-1252.
• 10. R.Y. de Camargo, F. Kon, and A. Goldman, "Portable Checkpointing and Communication for BSP Applications on Dynamic Heterogeneous Grid Environments," Proc. 17th Int'l Symp. Computer Architecture and High Performance Computing, IEEE CS Press, 2005, pp. 226-234.
• 11. R. Rodrigues and B. Liskov, "High Availability in DHTs: Erasure Coding vs. Replication," Proc. 4th Int'l Workshop Peer-to-Peer Systems (IPTPS 05), Springer, 2005, pp. 226-239.
• 12. H. Weatherspoon and J. Kubiatowicz, "Erasure Coding vs. Replication: A Quantitative Comparison," Proc. 1st Int'l Workshop Peer-to-Peer Systems (IPTPS 02), Springer, 2002, pp. 328-338.
https://par.nsf.gov/biblio/10312146-first-principles-study-dense-metallic-nitric-sulfur-hydrides
First principles study of dense and metallic nitric sulfur hydrides

Abstract

Studies of molecular mixtures containing hydrogen sulfide (H₂S) could open up new routes towards hydrogen-rich high-temperature superconductors under pressure. H₂S and ammonia (NH₃) form hydrogen-bonded molecular mixtures at ambient conditions, but their phase behavior and propensity towards mixing under pressure is not well understood. Here, we show stable phases in the H₂S–NH₃ system under extreme pressure conditions to 4 Mbar from first-principles crystal structure prediction methods. We identify four stable compositions, two of which, (H₂S)(NH₃) and (H₂S)(NH₃)₄, are stable in a sequence of structures to the Mbar regime. A re-entrant stabilization of (H₂S)(NH₃)₄ above 300 GPa is driven by a marked reversal of sulfur-hydrogen chemistry. Several stable phases exhibit metallic character. Electron–phonon coupling calculations predict superconducting temperatures up to 50 K, in the Cmma phase of (H₂S)(NH₃) at 150 GPa. The present findings shed light on how sulfur hydride bonding and superconductivity are affected in molecular mixtures. They also suggest a reservoir for hydrogen sulfide in the upper mantle regions of icy planets in a potentially metallic mixture, which could …

NSF-PAR ID: 10312146
Journal Name: Communications Chemistry
Volume: 4
Issue: 1
ISSN: 2399-3669

Sub-Neptunes are common among the discovered exoplanets. However, lack of knowledge on the state of matter in H₂O-rich settings at high pressures and temperatures (P–T) places important limitations on our understanding of this planet type. We have conducted experiments for reactions between SiO₂ and H₂O as archetypal materials for rock and ice, respectively, at high P–T.
We found anomalously expanded volumes of dense silica (up to 4%) recovered from hydrothermal synthesis above ∼24 GPa, where the CaCl₂-type (Ct) structure appears at lower pressures than in the anhydrous system. Infrared spectroscopy identified strong OH modes from the dense silica samples. Both previous experiments and our density functional theory calculations support up to 0.48 hydrogen atoms per formula unit of (Si₁₋ₓH₄ₓ)O₂ (x = 0.12). At pressures above 60 GPa, H₂O further changes the structural behavior of silica, stabilizing a niccolite-type structure, which is unquenchable. From unit-cell volume and phase equilibrium considerations, we infer that the niccolite-type phase may contain H with an amount at least comparable with or higher than that of the Ct phase. Our results suggest that the phases containing both hydrogen and lithophile elements could be …
https://www.landfx.com/kb/planting-issues/schedules/item/4239-spacing.html
Plant Schedule is Not Recognizing the Spacing Option for Plant Types

## Issue

You tried to place a Plant Schedule with the Spacing option selected for one or more of your plant types (example: Shrubs). When you placed the Plant Schedule, it did not include a column for the Spacing option for this plant type – even though you're positive that the option was selected when you placed the schedule.

## Cause

This is a known issue we've seen occurring when the Spacing option is selected in the Plant Schedule default settings.

## Solution

We are currently working to find a solution to this issue. For now, you can use the following workaround:

1. Open the Plant Schedule tool again.
2. When asked Re-generate the existing schedule? click No.
3. In the Plant Schedule dialog box, uncheck the Spacing option for the plant type(s) for which you're trying to show the spacing.
4. Select the Spacing option for that plant type again. Click OK to place the schedule.

The Plant Schedule should now include the Spacing option for the plant type(s) where you've selected it.
https://msp.org/involve/2018/11-1/p05.xhtml
#### Vol. 11, No. 1, 2018

ISSN: 1944-4184 (e-only); ISSN: 1944-4176 (print)

Labeling crossed prisms with a condition at distance two

### Matthew Beaudouin-Lafon, Serena Chen, Nathaniel Karst, Jessica Oehrlein and Denise Sakai Troxell

Vol. 11 (2018), No. 1, 67–80
DOI: 10.2140/involve.2018.11.67

##### Abstract

An L(2,1)-labeling of a graph is an assignment of nonnegative integers to its vertices such that adjacent vertices are assigned labels at least two apart, and vertices at distance two are assigned labels at least one apart. The $\lambda$-number of a graph is the minimum span of labels over all its L(2,1)-labelings. A generalized Petersen graph (GPG) of order $n$ consists of two disjoint cycles on $n$ vertices, called the inner and outer cycles, respectively, together with a perfect matching in which each matching edge connects a vertex in the inner cycle to a vertex in the outer cycle. A prism of order $n\ge 3$ is a GPG that is isomorphic to the Cartesian product of a path on two vertices and a cycle on $n$ vertices. A crossed prism is a GPG obtained from a prism by crossing two of its matching edges; that is, swapping the two inner cycle vertices on these edges. We show that the $\lambda$-number of a crossed prism is 5, 6, or 7 and provide complete characterizations of crossed prisms attaining each one of these $\lambda$-numbers.

##### Keywords

L(2,1)-labeling, L(2,1)-coloring, distance two labeling, channel assignment, generalized Petersen graph

##### Mathematical Subject Classification 2010

Primary: 68R10, 94C15
Secondary: 05C15, 05C78

##### Supplementary material

Diagrams of $D_1, D_2, D_3$ and table of labelings
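The definitions above can be made concrete with a brute-force search for the $\lambda$-number. This is an illustrative sketch (feasible only for very small graphs), not the method used in the paper:

```python
from itertools import product

def lambda_number(n, edges):
    """Minimum span over all L(2,1)-labelings of a graph on vertices 0..n-1.

    Exhaustive search over labelings, so only practical for tiny graphs.
    """
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # vertices at distance exactly two from v
    dist2 = [set().union(*(adj[w] for w in adj[v])) - adj[v] - {v}
             for v in range(n)]
    span = 0
    while True:
        for lab in product(range(span + 1), repeat=n):
            ok = (all(abs(lab[u] - lab[v]) >= 2
                      for u in range(n) for v in adj[u] if u < v)
                  and all(abs(lab[u] - lab[v]) >= 1
                          for u in range(n) for v in dist2[u] if u < v))
            if ok:
                return span
        span += 1
```

For example, a single edge forces labels 0 and 2 (span 2), and a triangle forces labels 0, 2, 4 (span 4).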
https://rosettacommons.org/node/11027
# hbond_sr_bb and hbond_lr_bb for different chains

In the Rosetta scoring function, what energy term describes backbone-backbone hydrogen bonding between residues on different protein chains?

- hbond_sr_bb: Backbone-backbone hbonds close in primary sequence. All hydrogen bonding terms support canonical and noncanonical types.
- hbond_lr_bb: Backbone-backbone hbonds distant in primary sequence.

Do the above terms only specify hydrogen bonding on a single chain? (They are stratified by distance in primary sequence.)

Thanks, Tyler

Tue, 2020-10-13 13:41 tylerborrman

The hbond_sr_bb type is intended for alpha-helical hydrogen bonds, and is only active if you're on the same chain and within a short (less than 4 or so) distance of the other residue. The hbond_lr_bb type is intended for beta sheets (and others), and does not have a chain dependence. Two beta strands making hydrogen bonds with identical geometries will score the same whether the strands are in the same chain or are split between chains.

(The rationale for the difference is that Rosetta tends to really like forming alpha helices because they're easy to find with a local structural search, versus beta strand pairings, which need a more global search. It was found that specifically downweighting the alpha-helical hydrogen bonds tended to help counteract that bias with older scorefunctions. Recent scorefunctions, though, have a slightly different overall weighting scheme, and it's been found that there is no longer a need to weight the terms differently. They're still listed separately, but in the Talaris and REF energy functions all of the backbone hydrogen bonds have identical weights.)

Wed, 2020-10-14 08:25 rmoretti

Excellent, thanks!

Tue, 2021-02-09 16:33 tylerborrman
https://quantumcomputing.stackexchange.com/questions/13861/why-does-every-css-code-allow-for-transversal-measurement
# (Why) does every CSS code allow for transversal measurement? I recently encountered the statement that any CSS code [that encodes single-qubit logical states] has a property that measurements can be performed qubit-wise, but the measurement outcome of every qubit must be communicated classically to obtain the result of the logical measurement. Can someone point me to some references about why that is and what kind of classical communication is used? • Do you know how to construct/define the logical $Z$ operator for a CSS code? – DaftWullie Sep 21 '20 at 15:21 • As far as I know, for any stabilizer code the logical Z operator can be applied by applying the Z operator to every physical qubit. How does that help in performing a measurement? – jgerrit Sep 21 '20 at 15:27 • If you know that 5 qubits are in the state $|01101\rangle$, what is the expectation value of the operator $Z\otimes Z\otimes Z\otimes Z\otimes Z$? – DaftWullie Sep 21 '20 at 15:36 • $\langle01101|Z^{\otimes 5}|01101\rangle = -1$. I'm sorry it seems I am missing the connection between applying and measuring an operator, could you elaborate? – jgerrit Sep 22 '20 at 8:49 • Think about how the projectors onto the $\pm 1$ eigenspaces would look like. What is the connection between the Born probabilities and the expectation value? – Markus Heinrich Sep 22 '20 at 11:14 ## 1 Answer The standard construction for measurement of arbitrary tensor products of Pauli operators that works in any stabilizer code and that achieves fault-tolerance using the so-called "cat" states $$(|0\dots 0\rangle + |1\dots 1\rangle)/\sqrt{2}$$ is described in section 10.6.3 in Nielsen & Chuang. 
However, the quote in the question and the subsequent reference on the following page to the use of "error correcting procedure for the classical linear codes" to process measurement results suggest that the authors refer to the following simpler fault-tolerant scheme that works for any CSS code and obtains the correct logical measurement outcome distribution, but does not produce the appropriate post-measurement state. The key idea behind the scheme is that if we are only concerned with measurement outcome then we can exploit the fact that CSS codes split the stabilizer generators into the $$X$$ and $$Z$$ sectors to replace quantum error correction with classical error correction on measurement results. Consider a logical qubit encoded into a block of $$n$$ physical qubits using a $$CSS(C_1, C_2)$$ code for two classical linear codes $$C_1$$ and $$C_2$$ with $$C_2^\perp \subset C_1$$. Denote the Hilbert space of the block by $$\mathcal{H}$$ and the code subspace by $$\mathcal{G} \subset \mathcal{H}$$ (thus, $$\dim \mathcal{G} = 2$$ and $$\dim \mathcal{H} = 2^n$$). Let $$S$$ be the stabilizer group of $$\mathcal{G}$$ and $$N(S)$$ the normalizer of $$S$$ in the $$n$$-qubit Pauli group $$G_n$$. All operators in $$G_n$$ are transversal, so stabilizers and logical Pauli operators are transversal. Moreover, since $$\mathcal{G}$$ is a CSS code, we can choose stabilizer generators that are tensor products of identity and $$X$$ or identity and $$Z$$. Similarly, we can choose the logical $$\overline X$$ to be a tensor product of identity and physical $$X$$ operators and logical $$\overline Z$$ to be a tensor product of identity and physical $$Z$$ operators. For a $$Z$$ type stabilizer generator $$g_z$$, define $$b(g_z) \in \mathbb{Z}_2^n$$ to be a binary vector with $$0$$ in positions corresponding to identity and $$1$$ in positions corresponding to $$Z$$. Define $$b(\overline Z)$$ similarly. 
First, take the tensor product of per-qubit operators $$I_i = \sum_{k_i\in\{0, 1\}}|k_i\rangle\langle k_i|=|0\rangle\langle 0| + |1\rangle\langle 1| \\ Z_i = \sum_{k_i\in\{0, 1\}}(-1)^{k_i}|k_i\rangle\langle k_i|=|0\rangle\langle 0| - |1\rangle\langle 1|$$ where $$i$$ identifies a qubit, to compute the $$Z$$-type stabilizer generators $$g_z = \sum_{k \in \mathbb{Z}_2^n} (-1)^{k \cdot b(g_z)} |k_1\rangle\langle k_1|\otimes|k_2\rangle\langle k_2|\otimes\dots\otimes|k_n\rangle\langle k_n|\tag1$$ and the logical $$Z$$ operator $$\overline Z = \sum_{k \in \mathbb{Z}_2^n} (-1)^{k \cdot b(\overline Z)} |k_1\rangle\langle k_1|\otimes|k_2\rangle\langle k_2|\otimes\dots\otimes|k_n\rangle\langle k_n|\tag2$$ where $$\cdot$$ represents the dot product in $$\mathbb{Z}_2^n$$ (i.e. the sum of the componentwise products modulo $$2$$). The procedure begins by measuring each qubit individually in the computational basis. The results form a binary vector $$m \in \mathbb{Z}_2^n$$ with $$m_i \in \{0, 1\}$$ corresponding to the measurement outcome on the $$i$$th qubit. From equation $$(1)$$ we see that given $$m$$ we can classically compute the measurement outcomes associated with all $$Z$$-type stabilizer generators using the formula $$(-1)^{m\cdot b(g_z)}$$. Next, we employ classical error correction techniques using the code $$C_1$$, which is the classical code associated with the $$Z$$ sector of the stabilizer, to identify and correct a certain number of bit flip and measurement errors in $$m$$, producing a corrected vector of measurement outcomes $$m'$$. From equation $$(2)$$ we see that given $$m'$$ we can compute the outcome of logical measurement using the formula $$(-1)^{m'\cdot b(\overline Z)}$$. Note that this measurement procedure takes the qubits into a product state thus destroying the entanglement that protects the code subspace. Therefore, the post-measurement state is not guaranteed to be in $$\mathcal{G}$$.
In particular, it is neither $$|\overline 0\rangle$$ nor $$|\overline 1\rangle$$.
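As a concrete sketch of the classical post-processing step (my own illustration, using the Steane [[7,1,3]] code, where $$C_1$$ is the [7,4] Hamming code and $$\overline Z$$ can be taken to act on all seven qubits):

```python
# Parity checks of the [7,4] Hamming code -- the Z sector of the Steane code.
# Each tuple lists the qubits (0-indexed) in one Z-type stabilizer generator.
CHECKS = [(3, 4, 5, 6), (1, 2, 5, 6), (0, 2, 4, 6)]

def logical_z_outcome(bits):
    """Decode 7 single-qubit Z-measurement bits into a logical Z outcome.

    Computes the Hamming syndrome classically, corrects up to one bit flip,
    then returns the parity of the corrected bits (here b(Zbar) = 1111111).
    """
    m = list(bits)
    syndrome = 0
    for row, qubits in enumerate(CHECKS):
        if sum(m[q] for q in qubits) % 2:
            syndrome |= 1 << (2 - row)   # row 0 is the most significant bit
    if syndrome:
        m[syndrome - 1] ^= 1             # syndrome value names the flipped qubit, 1-indexed
    return (-1) ** (sum(m) % 2)
```

Measuring logical $$|\overline 0\rangle$$ yields an even-weight Hamming codeword, so the decoded outcome is $$+1$$ even after a single bit-flip or readout error in $$m$$.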
https://gharpedia.com/precautions-safe-excavation-work/
Precautions for Safe Excavation Work

Excavation work is the first activity on any construction site. It starts with digging the pit for the structure, whether for shallow or deep foundations, and is completed by filling the pit with excavated soil or soil brought from outside. Construction excavation depends on the size and depth of the foundation, the types of soil layers, the water table, the surrounding structures, etc.

The depth of soil layers may vary from location to location, as different types of soil are present. The safe bearing capacity (SBC) of the soil strata is an important factor in foundation design and is directly related to structural stability. We need to excavate the pit up to the designed foundation depth, where the SBC required for the safety and stability of the structure is available.

Many accidents happen on account of sliding or caving in of earth. Such accidents are often fatal, as the worker generally gets buried and may die of suffocation. Further, when you excavate adjacent to an existing building, the safety of the existing structure is most important. This is even more critical when you have to excavate deeper than the foundation of the existing building.

While carrying out the excavation, it is therefore necessary to take care of the quality and safety of the earthwork. Always remember that precaution is better than cure. Hence, let's discuss precautions while carrying out foundation excavation.

01. Things to Keep in Mind Before Excavation Work:

• Clear the area of the site by removing trees, vegetation and rubbish that obstruct the building layout. Get permission from government authorities to cut trees before starting the work.
• To overcome these difficulties, shift the layout of the building if possible.
• Get plan approval from the authorities, as it is a time-consuming job and may delay the work.
• Check and, if needed, relocate underground drainage, electrical and telephone cables crossing the proposed building and its foundations.
• Make provision for the stacking of excavated soil.

02. Precautions while Excavation:

• Arrange all the materials near the site for successful completion of the work. Stack lime powder, shoring materials, blasting powder, blasting equipment, etc. near the site.
• Arrange the necessary survey equipment, such as a theodolite/total station, a level machine and measuring tapes. Use good-quality equipment for the layout of the planned building.
• Arrange for an excavator, dumpers, a dozer, a grader and rollers, etc. for handling, transporting and compacting the excavated soil.
• Set out the layout of the building as per the drawing. Fix centre-line pegs all around the periphery and check all centre lines before you commence excavation.
• Take ground levels with a theodolite or dumpy level before starting and after finishing the work, and record them for billing purposes.
• Maintain the benchmark level until completion of the work. Always check all levels, including the depth of excavation, against the reference RL provided by a temporary or permanent benchmark.
• Estimate the excavated material that can be reused in filling. Try to carry out excavation and filling simultaneously, to avoid double handling.
• Stack separately the excavated earth you are going to use for refilling and the earth you are going to dispose of off site.
• Provide ramps or steps for lifting the excavated materials during excavation.
• The pit should be at least 300 cm larger than the foundation PCC on all sides, or as per the contract. Increase the pit size as the depth of the foundation increases; otherwise, the top layers may collapse or cave in.
• Keep the excavated materials at least 1 metre away from the edges of the pit.
• Never create high heaps of excavated earth right at the face of the pit; they may collapse. Do not allow black cotton soil to remain at the site for a long time, as it will start to slump.
• In black cotton soil or other expansive soils, the sides will collapse as the depth increases. To prevent this, provide shoring, or excavate a larger pit in slopes or steps and backfill later.
• It is necessary to go below the level where cracks cease to appear, particularly in the case of expansive black cotton soil.
• Never use black cotton soil for refilling, and avoid excavating black cotton soil on rainy days.
• Wet the pit of soft/hard murum with water a day before excavation. The strata absorb water and become relatively soft, making the excavation easier.
• The pit size may be made exactly to suit the formwork of the PCC.
• Carry out excavation in soft/hard rock with skilled labour, using chiselling and hammering.
• Adopt wedging with crowbars and pneumatic breakers to excavate soft/hard rock.
• Use blasting for hard rock excavation; it breaks the rock quickly and at a reasonable cost. A pneumatic breaker mounted on an excavator may also be used.
• Select and stack the required materials so that other activities are not obstructed. Immediately remove excess or unwanted excavated material from the site.
• Plan the excavation in such a way that vehicle movement for bringing materials and transporting earth is never obstructed.
• If the excavation goes below the ground water table, make adequate arrangements for dewatering.
• Keep the dewatering pump on a firm base and shift it carefully whenever required.
• Use a dewatering pump of adequate capacity with high suction and a long delivery hose.
• Place the dewatering pump in a corner sump kept 150 mm below the excavated surface level, so that water can be pumped out easily.
• Diverting the water is better than excavating in flowing water.
• Create an artificial trench to divert the flowing water, or construct a coffer-dam using sandbags.
• Check the depth of excavation periodically to avoid over-excavation. Achieve the top 150 mm of the excavated level by manual levelling and dressing with light compaction.
• Excavated levels should be free from excessive moisture; carry out a compaction test if necessary.
• If the excavation goes deeper than the required level, fill the extra depth with approved filling materials (sand) properly compacted, or with 1:5:10 lean concrete, to save the cost of extra column or brick-wall height.
• Compact the surface at the required foundation level to achieve the required safe bearing capacity. Carry out anti-termite treatment before starting the foundation work. Do not excavate more than is prescribed in the drawing; you might not get paid for it.
• Never excavate below the foundation level of an existing building. If you must, do it under the close supervision and guidance of an experienced contractor/engineer, and adequately support and prop the adjacent building.
• Consult your structural designer or geotechnical consultant if the soil strata on site differ from the soil test report/soil bore log.

03. Safety Precautions during Excavation Work:

• Prepare a firm and broad approach road. Avoid work in the rainy season.
• Nails should not be left lying around after the layout work is completed.
• Barricade the site to restrict the entry of animals and unauthorised persons.
• Ensure that workers wear personal protective equipment. Safety belts, helmets, rubber hand gloves, goggles, face masks and rubber shoes should be of ISI mark.
• Provide ladders for excavation workers to climb in and out of the pit.
• First-aid kits should be immediately available on site.
• Provide proper ventilation and electric lighting during the work. Barricade the excavated pit and check that the shoring provides adequate support.
• Check the material-transporting machines for safe operation. Truck movement should be kept away from the pit. Machine operators must hold a licence from a recognised agency.
• Provide a sufficient pit size by allowing extra working space on all sides. Cleaning of the area and housekeeping of materials are necessary.
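Several of the measurement-related points above (recording levels for billing, estimating excavated quantities for reuse, sizing the pit with working space on all sides) come down to simple arithmetic. As a rough sketch only, the hypothetical Python helpers below compute pit plan dimensions and excavation volume for a vertical-sided pit; the function names and example numbers are illustrative, and the actual working space, depth and side slopes must come from the drawing or contract.

```python
def pit_plan_size(pcc_length_m, pcc_width_m, working_space_m):
    """Plan dimensions of the pit: foundation PCC size plus an
    all-round working space (value to be taken from the contract)."""
    return (pcc_length_m + 2 * working_space_m,
            pcc_width_m + 2 * working_space_m)


def excavation_volume(pcc_length_m, pcc_width_m, depth_m, working_space_m):
    """Volume of a vertical-sided pit, useful when estimating
    quantities for billing and for refilling/disposal planning."""
    length_m, width_m = pit_plan_size(pcc_length_m, pcc_width_m, working_space_m)
    return length_m * width_m * depth_m


# Hypothetical example: 2.0 m x 2.0 m PCC, 0.3 m working space, 1.5 m depth
print(pit_plan_size(2.0, 2.0, 0.3))           # pit plan 2.6 m x 2.6 m
print(excavation_volume(2.0, 2.0, 1.5, 0.3))  # about 10.14 cubic metres
```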
2019-04-21 20:42:58
https://www.physicsforums.com/threads/question-relating-to-shifting-a-circle.617974/
Homework Help: Question relating to shifting a circle

1. Jul 2, 2012
ozone

This problem arose for me while working out a triple integral in spherical coordinates. Basically, I know that when we shift a parabola along an axis it is simply translated. I naturally assumed that if we shifted a circle in a similar manner it would act the same. However, when we shift a circle along the axis, such as one with the equation $(x-1)^2 + y^2 = 1$, we find that the entirety of the circle now sits above the x-axis, and that our radius becomes $2\cos\theta$. Could anyone shed some light on this?

2. Jul 2, 2012
Staff: Mentor

You've moved the center to (1, 0), but the circle still has a radius of 1. Try some points: (0, 0), (2, 0), (1, 1) and (1, -1) all satisfy the equation, and they show that the radius is 1, not what you say, and that the circle's center still lies on the x-axis.

3. Jul 2, 2012
SammyS, Staff Emeritus

$(x-1)^2 + y^2 = 1$ is the equation of a circle having radius 1 with its center at (1, 0). Here's a plot from WolframAlpha:

4. Jul 2, 2012
ozone

Alright, I guess I just misunderstood the solution on the problem set. Thank you.

5. Jul 2, 2012
SammyS, Staff Emeritus

After reading your original post, it looks as if you might be converting $(x-1)^2 + y^2 = 1$ to polar coordinates, with the result $r=2\cos\theta\,.$ If so, that variable r does not refer to the radius of the circle; it is the distance of the point (x, y) from the origin.
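The conversion in the last post can be checked directly: substituting $x = r\cos\theta$, $y = r\sin\theta$ into $(x-1)^2 + y^2 = 1$ gives $r^2 - 2r\cos\theta = 0$, so $r = 2\cos\theta$ on the non-zero branch. A minimal sketch (plain Python, standard library only) verifying numerically that points generated from the polar form lie on the shifted unit circle:

```python
import math

def on_circle(x, y, tol=1e-9):
    """True if (x, y) satisfies (x - 1)^2 + y^2 = 1."""
    return abs((x - 1) ** 2 + y ** 2 - 1) < tol

# r = 2*cos(theta) with theta in (-pi/2, pi/2) keeps r >= 0 and
# should trace exactly the circle of radius 1 centered at (1, 0).
for k in range(1, 100):
    theta = -math.pi / 2 + k * math.pi / 100
    r = 2 * math.cos(theta)
    x, y = r * math.cos(theta), r * math.sin(theta)
    assert on_circle(x, y)
```

Note that r here is the distance of each traced point from the origin, not the circle's radius; the translation leaves the radius (1) and center (1, 0) unchanged.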
2018-04-26 11:51:59