url: string, 14 to 2.42k characters
text: string, 100 to 1.02M characters
date: string, 19 characters
metadata: string, 1.06k to 1.1k characters
http://www.singer22.com/blog/style/inside-look-at-david-lerner/
# Inside Look at David Lerner This entry was posted in Style. Bookmark the permalink.
2014-04-16 05:06:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9227837920188904, "perplexity": 7646.073591144058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/how-do-i-integrate-sin-120pi-t-cos-120pi-n-t-easily.677979/
# How do I integrate sin(120pi*t)cos(120pi*n*t) easily? 1. Mar 12, 2013 ### luckyduck 1. The problem statement, all variables and given/known data How do I integrate this easily? $\frac{2}{T}\int^{T/2}_{0}\sin(120\pi t)\cos(120\pi n t)\,dt$ 2. Relevant equations 3. The attempt at a solution I used Wolfram Alpha to integrate this, but are there ways to use substitution or another trick instead? 2. Mar 12, 2013 ### micromass Staff Emeritus Product to sum formulas. 3. Mar 12, 2013 ### Simon Bridge Sure there is! You start out by understanding the shape of the curve and what it means to integrate it. Remember that you are finding the area between the curve and the t-axis. Also, what is the significance of that T/2: does the capital T have a special meaning in the context of the integrand? What difference does the 120pi in the trig function make to the shape of the function? What difference does the n make in the cosine? Do you know how trig functions combine? [edit: i.e. the product-to-sum formulas micromass mentions - when you see combinations of trig functions, it is often useful to arm yourself with a table of identities.] When you understand what you are doing - things come more easily. However - just looking at it in terms of a brute force approach: have you tried integration by parts?
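For reference, the product-to-sum identity suggested above (a standard identity, not spelled out in the thread) reduces the integrand to two simple sines: $$\sin A\cos B=\tfrac{1}{2}\left[\sin(A+B)+\sin(A-B)\right]$$ so that $$\sin(120\pi t)\cos(120\pi n t)=\tfrac{1}{2}\left[\sin\bigl(120\pi(1+n)t\bigr)+\sin\bigl(120\pi(1-n)t\bigr)\right],$$ and each term integrates directly to a cosine (for $n=1$ the second term is $\sin 0 = 0$ and drops out).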
2017-12-14 23:20:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7801520228385925, "perplexity": 1094.912588662027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948551162.54/warc/CC-MAIN-20171214222204-20171215002204-00166.warc.gz"}
http://tutorial.math.lamar.edu/Problems/CalcI/IntegralsIntro.aspx
Paul's Online Notes Home / Calculus I / Integrals ## Chapter 5 : Integrals Here is a set of practice problems for the Integrals chapter of the Calculus I notes. 1. If you'd like a pdf document containing the solutions, the download tab above contains links to pdf's containing the solutions for the full book, chapter and section. At this time, I do not offer pdf's for solutions to individual problems. 2. If you'd like to view the solutions on the web, go to the problem set web page, click the solution link for any problem and it will take you to the solution to that problem. Note that some sections will have more problems than others and some will have more or less of a variety of problems. Most sections should have a range of difficulty levels in the problems although this will vary from section to section. Here is a list of all the sections for which practice problems have been written as well as a brief description of the material covered in the notes for that particular section. Indefinite Integrals – In this section we will start off the chapter with the definition and properties of indefinite integrals. We will not be computing many indefinite integrals in this section. This section is devoted to simply defining what an indefinite integral is and to give many of the properties of the indefinite integral. Actually computing indefinite integrals will start in the next section. Computing Indefinite Integrals – In this section we will compute some indefinite integrals. The integrals in this section will tend to be those that do not require a lot of manipulation of the function we are integrating in order to actually compute the integral. As we will see starting in the next section many integrals do require some manipulation of the function before we can actually do the integral. We will also take a quick look at an application of indefinite integrals. Substitution Rule for Indefinite Integrals – In this section we will start using one of the more common and useful integration techniques – The Substitution Rule. With the substitution rule we will be able to integrate a wider variety of functions. The integrals in this section will all require some manipulation of the function prior to integrating unlike most of the integrals from the previous section where all we really needed were the basic integration formulas. More Substitution Rule – In this section we will continue to look at the substitution rule. The problems in this section will tend to be a little more involved than those in the previous section. Area Problem – In this section we start off with the motivation for definite integrals and give one of the interpretations of definite integrals. We will be approximating the amount of area that lies between a function and the $$x$$-axis. As we will see in the next section this problem will lead us to the definition of the definite integral and will be one of the main interpretations of the definite integral that we'll be looking at in this material.
Definition of the Definite Integral – In this section we will formally define the definite integral, give many of its properties and discuss a couple of interpretations of the definite integral. We will also look at the first part of the Fundamental Theorem of Calculus which shows the very close relationship between derivatives and integrals. Computing Definite Integrals – In this section we will take a look at the second part of the Fundamental Theorem of Calculus. This will show us how we compute definite integrals without using (the often very unpleasant) definition. The examples in this section can all be done with a basic knowledge of indefinite integrals and will not require the use of the substitution rule. Included in the examples in this section are computing definite integrals of piecewise and absolute value functions. Substitution Rule for Definite Integrals – In this section we will revisit the substitution rule as it applies to definite integrals. The only real requirements to being able to do the examples in this section are being able to do the substitution rule for indefinite integrals and understanding how to compute definite integrals in general.
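A one-line illustration of the substitution rule described above (a generic textbook example, not taken from these notes): with $u = x^{2}$ and $du = 2x\,dx$, $$\int 2x\cos\left(x^{2}\right)\,dx=\int \cos u\,du=\sin u+c=\sin\left(x^{2}\right)+c.$$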
2019-01-17 11:17:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8685722351074219, "perplexity": 172.46106268085322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658928.22/warc/CC-MAIN-20190117102635-20190117124635-00626.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/circles-examples-solutions-in-given-figure-pq-chord-length-8cm-circle-radius-5cm-tangents-p-q-intersect-point-t-find-length-tp_5784
# In the given figure, PQ is a chord of length 8 cm of a circle of radius 5 cm. The tangents at P and Q intersect at a point T. Find the length TP - CBSE Class 10 - Mathematics Concept: Circles Examples and Solutions #### Question In the given figure, PQ is a chord of length 8 cm of a circle of radius 5 cm. The tangents at P and Q intersect at a point T. Find the length TP. #### Solution Join OP and OT. Let OT intersect PQ at a point R. Then, TP = TQ and ∠PTR = ∠QTR. ∴ TR ⊥ PQ and TR bisects PQ. ∴ PR = RQ = 4 cm. \text{Also, } OR=\sqrt{OP^{2}-PR^{2}}=\sqrt{5^{2}-4^{2}}\text{ cm}=\sqrt{25-16}=\sqrt{9}=3\text{ cm} Let TP = x cm and TR = y cm. From right ∆TRP, we get TP^2 = TR^2 + PR^2 ⇒ x^2 = y^2 + 16 ⇒ x^2 – y^2 = 16 …. (i) From right ∆OPT, we get TP^2 + OP^2 = OT^2 ⇒ x^2 + 5^2 = (y + 3)^2 [∵ OT^2 = (OR + RT)^2 ] ⇒ x^2 – y^2 = 6y – 16 ….(ii) From (i) and (ii), we get 6y – 16 = 16 ⇒ 6y = 32 ⇒ y = 16/3. Putting y = 16/3 in (i), we get x^{2}=16+\left(\frac{16}{3}\right)^{2}=\frac{256}{9}+16=\frac{400}{9} \Rightarrow x=\sqrt{\frac{400}{9}}=\frac{20}{3} Hence, length TP = x cm = 20/3 cm ≈ 6.67 cm.
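A quick cross-check (not part of the original solution): the right triangles ORP and OPT share the angle at O, so they are similar, and $$\frac{TP}{OP}=\frac{PR}{OR}\;\Rightarrow\; TP=\frac{OP\times PR}{OR}=\frac{5\times 4}{3}=\frac{20}{3}\text{ cm},$$ which agrees with the value found above.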
2019-06-17 21:37:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6349953413009644, "perplexity": 2733.3713895172114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998580.10/warc/CC-MAIN-20190617203228-20190617225228-00342.warc.gz"}
https://www.statistics-lab.com/%E6%95%B0%E5%AD%A6%E4%BB%A3%E5%86%99%E8%AE%A1%E7%AE%97%E5%A4%8D%E6%9D%82%E5%BA%A6%E7%90%86%E8%AE%BA%E4%BB%A3%E5%86%99computational-complexity-theory%E4%BB%A3%E8%80%83-complex-networks-a-very-short/
### Computational complexity theory | Complex Networks: A Very Short Overview

## Complex Networks: A Very Short Overview

Nowadays, Complex Networks represent a vibrant and independent research field that has attracted the attention of scientists coming from different areas. The underlying reason is that many natural and man-made complex systems, such as biological neural networks, social networks, and infrastructural networks, have a nontrivial topology that strongly influences the dynamics among the related agents (i.e., users of social networks, neurons of neural networks, and so on). An increasing amount of investigations is demonstrating the relevance of the interaction structure in a wide range of systems and, even in the case of EGT, complex networks allow us to obtain very interesting results. For instance, as recalled in Chap. 1, Santos and Pacheco showed the role of heterogeneity in the emergence of cooperation, modeling their system with scale-free networks. The latter, as well as other famous models, is often used as a toy model both in EGT and in many other contexts such as social dynamics, ecological networks, etc. Thus, in this section, we provide a very short overview of the main network properties, and of three different models that can be used for generating a complex network with a known topology. Readers interested in this topic are warmly encouraged to read the wide literature on complex networks. So, first of all, modern network theory has its basis in the classical theory of graphs. In particular, a preliminary definition of a complex network can be “a graph with a nontrivial topology.” In general, a graph is a mathematical object that allows us to represent relations among a collection of items, named nodes. More formally, a graph $G$ is defined as $G=(N, E)$, with $N$ the set of nodes/vertices and $E$ the set of edges/links (or bonds). Nodes can be described by a label and represent the elements of a system, e.g., users of a social network, websites of the Web, and so on. In turn, the edges represent the connections among nodes, and map relations such as friendship, physical links, etc. A graph can be “directed” or “undirected,” i.e., the relation can be symmetrical (e.g., friendship) or not (e.g., a one-way road), and can be “weighted” or “unweighted.” The former allows us to introduce some coarseness in the relations, e.g., in a transportation network the weights might refer to the actual geographical distance between two locations. The information related to the connections in a network is saved in an $N \times N$ matrix, with $N$ the number of nodes, called the “adjacency matrix.” Numerical analysis of the adjacency matrix allows us to investigate the properties of a network. For instance, the adjacency matrix $A$ of an unweighted graph can have the following form: $$a_{i j}= \begin{cases}1 & \text{ if } e_{i j} \text{ is defined } \\ 0 & \text{ if } e_{i j} \text{ is not defined }\end{cases}$$ On the other hand, in the case of weighted networks, the inner values of the adjacency matrix are real.
Among the properties of a complex network, the degree distribution is one of the most relevant. Notably, this “centrality measure” constitutes a kind of signature for classifying the nature of a network (e.g., scale-free), where the term “degree” means the amount of connections (i.e., edges) of a node. So, indicating with $k$ the degree of nodes, the distribution $P(k)$ of a network represents the probability of randomly selecting a node with a degree equal to $k$, i.e., a node with $k$ connections. A second network property is called the clustering coefficient, and it allows us to know if nodes of a network tend to cluster together. Actually, this phenomenon is common in many real networks such as social networks, where it is possible to identify circles of friends, or acquaintances in which every person knows all the others. For the sake of clarity, considering a social network, if the user $a$ is connected to the user $b$, and the latter is connected to the user $c$, there is a high probability that $a$ be connected to $c$. The clustering coefficient can be computed as $$C=\frac{3 \times T n}{T p}$$ with $Tn$ the number of triangles in a network, and $Tp$ the number of connected triples of nodes. A connected triple is a single node with links running to an unordered pair of others. This coefficient has a range that spans the interval $0 \leq C \leq 1$. A further mathematical definition of the clustering coefficient reads $$C_{i}=\frac{T n_{i}}{T p_{i}}$$ with $Tn_{i}$ the number of triangles connected to node $i$, and $Tp_{i}$ the number of triples centered on node $i$. The main difference between the two definitions is that the second one is local, so that to obtain a global value one has to compute the following parameter $$C=\frac{1}{N} \sum_{i} C_{i}$$

## Classical Random Networks

One of the early works on random networks was developed by Paul Erdös and Alfred Renyi. Their model, usually called the E-R model/graph, considers a graph with $N$ nodes and a probability $p$ to generate each edge. Accordingly, an E-R graph contains about $p \cdot \frac{N(N-1)}{2}$ edges, and it has a binomial degree distribution $$P(k)=\binom{N-1}{k} p^{k}(1-p)^{N-1-k}$$ For $N \rightarrow \infty$ and $Np =$ const, the degree distribution converges to a Poissonian distribution $$P(k) \sim e^{-p N} \cdot \frac{(p N)^{k}}{k !}$$ To generate this kind of network, one can implement the following simple algorithm: 1. Define the number $N$ of nodes and the probability $p$ for each edge 2. Draw each potential link with probability $p$ (a short code sketch of this model and of the scale-free model below follows at the end of this overview). Figure $2.4$ illustrates the $P(k)$ for an E-R graph with $N=25,000$ and $p=4 \cdot 10^{-4}$.

## Scale-Free Networks

Scale-free networks are characterized by the presence of a few nodes (called hubs) that have many connections (i.e., a high degree), while the majority of nodes has a low degree. Therefore, these networks constitute a classical example of heterogeneous networks. The related degree distribution follows a power-law function $$P(k) \sim c \cdot k^{-\gamma}$$ with $c$ a normalizing constant and $\gamma$ the scaling parameter of the distribution. A famous model for generating scale-free networks is the Barabasi-Albert model (BA model hereinafter) that considers two parameters: $N$ nodes and $m$ the minimum number of edges drawn for each node. The BA model can be summarized as follows: 1. Define $N$ the number of nodes and $m$ the minimum number of edges drawn for each node 2.
Add a new node and link it with $m$ other pre-existing nodes. Pre-existing nodes are selected according to the following equation: $$\Pi\left(k_{i}\right)=\frac{k_{i}}{\sum_{j} k_{j}}$$ with $\Pi\left(k_{i}\right)$ the probability that the new node generates a link with the $i$-th node (having degree $k_{i}$). Figure $2.5$ illustrates the $P(k)$ for a scale-free network with $N=25,000$ and $m=5$.
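A minimal sketch of the two generative models just described, using the networkx library (an assumption; the text does not name any software) and smaller parameters than those quoted for Figs. 2.4 and 2.5:

```python
import networkx as nx
from collections import Counter

N = 2500   # number of nodes (smaller than the N = 25,000 used in the figures)
p = 4e-3   # edge probability for the E-R model
m = 5      # minimum edges per new node for the BA model

# Erdos-Renyi: every potential link is drawn independently with probability p
er = nx.erdos_renyi_graph(N, p, seed=1)

# Barabasi-Albert: nodes arrive one at a time and attach to m existing nodes,
# preferring high-degree nodes (preferential attachment)
ba = nx.barabasi_albert_graph(N, m, seed=1)

for name, g in [("E-R", er), ("BA", ba)]:
    degrees = [d for _, d in g.degree()]
    pk = Counter(degrees)  # empirical degree distribution P(k)
    print(name,
          "mean degree:", sum(degrees) / N,
          "most common degrees:", pk.most_common(3),
          "average clustering C:", nx.average_clustering(g))
```

Here `nx.average_clustering` corresponds to the local definition averaged over nodes, i.e. $C=\frac{1}{N}\sum_i C_i$.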
2023-02-05 13:27:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7940047979354858, "perplexity": 771.0358213818918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500255.78/warc/CC-MAIN-20230205130241-20230205160241-00589.warc.gz"}
https://ask.libreoffice.org/en/answers/159734/revisions/
# Revision history [back] @Avvy65, Your comment was attached to the deleted duplicate question: Sorry, I am unclear as to what you mean. When you say 'Please do not post as wiki as it helps no one.' I didn't know I did post as wiki. Do you mean wikipedia here? If I read you correctly, you say I don't need new tables for each month, mmm, I thought I did. Ok so if I use 'append data' to the main table which the form uses, then that should do it. As for wiki this is a check box when asking your question: You can also see above in your question, in upper right, This post is a wiki. instead of your user information. Now, so as to be carefully clear, it was NOT stated you didn't need new tables. It WAS stated that it was not understood why you created new tables. There is a difference. Also stated: Possibly clarify what is being attempted. So, no, you did not read correctly. You have not explained what is being done clearly enough.
2019-09-20 06:10:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6557601094245911, "perplexity": 832.3048172113707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573832.23/warc/CC-MAIN-20190920050858-20190920072858-00157.warc.gz"}
https://tex.stackexchange.com/questions/377122/typesetting-for-a-verilog-lstinput
# Typesetting for a Verilog LstInput Similar to this question, I want to be able to colour my Verilog HDL code to match the Intel Altera Quartus GUI software's typesetting. I believe there are no packages like matlab-prettifier to automatically render the Verilog HDL. I have added an image below to show how Quartus renders Verilog. I've started off some listing styling but am not really sure how to work with the [A:B] number formatting that is an orange colour. \definecolor{vgreen}{RGB}{104,180,104} \definecolor{vblue}{RGB}{49,49,255} \definecolor{vorange}{RGB}{255,143,102} \lstdefinestyle{verilog-style} { language=Verilog, basicstyle=\small, keywordstyle=\color{vblue}, identifierstyle=\color{black}, numbers=left, numberstyle={\tiny \color{black}}, numbersep=10pt, tabsize=8 } • Does tex.stackexchange.com/a/42895/647 solve your issue? – TH. Jun 28, 2017 at 6:03 • @TH. Not quite, I think this is a closer bet. – Mr G Jun 28, 2017 at 7:35 • I've found a matching (ie, [is]{\fn}{[}{:}}) that works based on the above example, but when there's more than one space before the matching the function will also be called for that set of spaces not including the first space... – Mr G Jun 28, 2017 at 13:55 • Another issue is that the indexing styles can appear as [A:B] or just as [A]. This is complicated to match to because of the same starting delimiter. – Mr G Jun 28, 2017 at 14:08 • It's been a very long time since I wrote any verilog. Are A and B in [A:B] and [A] always literal numbers? – TH. Jun 28, 2017 at 15:19 # Hack to make this work. I initially wrote the stuff below about listings' really bizarre behavior and then immediately discovered a hack that makes this work. The key is to use moredelim=*[s][\colorIndex]{[}{]} where \colorIndex is a new macro that examines the listings-internal token register \lst@token to decide what to typeset. It also makes : display in a literate style which combines with the * option to moredelim to make this work. Contrary to my assertion at the very bottom of this answer, \lst@token does contain the material to be typeset, but not when ** is given to moredelim which was how I tested. \documentclass{article} \usepackage{xcolor} \usepackage{listings} \definecolor{vgreen}{RGB}{104,180,104} \definecolor{vblue}{RGB}{49,49,255} \definecolor{vorange}{RGB}{255,143,102} \lstdefinestyle{verilog-style} { language=Verilog, basicstyle=\small\ttfamily, keywordstyle=\color{vblue}, identifierstyle=\color{black}, numbers=left, numberstyle=\tiny\color{black}, numbersep=10pt, tabsize=8, moredelim=*[s][\colorIndex]{[}{]}, literate=*{:}{:}1 } \makeatletter \newcommand*\@lbracket{[} \newcommand*\@rbracket{]} \newcommand*\@colon{:} \newcommand*\colorIndex{% \edef\@temp{\the\lst@token}% \ifx\@temp\@lbracket \color{black}% \else\ifx\@temp\@rbracket \color{black}% \else\ifx\@temp\@colon \color{black}% \else \color{vorange}% \fi\fi\fi } \makeatother \usepackage{trace} \begin{document} \begin{lstlisting}[style={verilog-style}] module Mixing { inout AUD_BCLK, output AUD_DACDAT, inout AUD_DACLRCK, output AUD_XCK, ///////// clocks ///////// input clock2_50, input clock3_50, input clock4_50, input clock_50, ///////// HEX ///////// output [6:0] HEX0, output [6:0] HEX1, output [6:0] HEX2, output [6:0] HEX3, output [6:0] HEX4, output [6:0] HEX5, ///////// FOO ///////// output [2] FOO, } \end{lstlisting} \end{document} # Bizarre listings behavior. This was initially a nonanswer that was too complicated for a comment. Almost immediately after posting it, I found the workaround above.
The two "obvious" ideas I have for this are the following. 1. Make : be typeset as literate in black. Combining this with the * or ** option in moredelim typesets the colon in black. 2. Use the answer you pointed out in the comments to create a new macro \colorIndex that takes one argument and typesets it in orange with brackets surrounding it. This is in conjunction with the is delimiter style. Unfortunately, this doesn't solve the problem. The argument that get passed to \colorIndex is fairly complicated, it gets used multiple times to style various parts, including the space before the [6:0]! Here's an example that demonstrates this bizarre behavior. \documentclass{article} \usepackage{xcolor} \usepackage{listings} \definecolor{vgreen}{RGB}{104,180,104} \definecolor{vblue}{RGB}{49,49,255} \definecolor{vorange}{RGB}{255,143,102} \lstdefinestyle{verilog-style} { language=Verilog, basicstyle=\small\ttfamily, keywordstyle=\color{vblue}, identifierstyle=\color{black}, numbers=left, numberstyle=\tiny\color{black}, numbersep=10pt, tabsize=8, literate=*{:}{{\textcolor{black}{:}}}1 } \newcommand\colorIndex[1]{[\textcolor{vorange}{#1}]} \begin{document} No delimiters. \begin{lstlisting}[style={verilog-style}] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \verb!moredelim=[s][\color{vorange}]{[}{]}! \begin{lstlisting}[ style={verilog-style}, moredelim={[s][\color{vorange}]{[}{]}} ] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \verb!moredelim=*[s][\color{vorange}]{[}{]}! \begin{lstlisting}[ style={verilog-style}, moredelim={*[s][\color{vorange}]{[}{]}} ] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \verb!moredelim=**[s][\color{vorange}]{[}{]}! \begin{lstlisting}[ style={verilog-style}, moredelim={**[s][\color{vorange}]{[}{]}} ] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \verb!moredelim=[s][\colorIndex]{[}{]}! \begin{lstlisting}[ style={verilog-style}, moredelim={[s][\colorIndex]{[}{]}} ] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \verb!moredelim=*[s][\colorIndex]{[}{]}! \begin{lstlisting}[ style={verilog-style}, moredelim={*[s][\colorIndex]{[}{]}} ] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \verb!moredelim=**[s][\colorIndex]{[}{]}! \begin{lstlisting}[ style={verilog-style}, moredelim={**[s][\colorIndex]{[}{]}} ] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \newpage \verb!moredelim=[is][\color{vorange}]{[}{]}! \begin{lstlisting}[ style={verilog-style}, moredelim={[is][\color{vorange}]{[}{]}} ] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \verb!moredelim=*[is][\color{vorange}]{[}{]}! \begin{lstlisting}[ style={verilog-style}, moredelim={*[is][\color{vorange}]{[}{]}} ] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \verb!moredelim=**[is][\color{vorange}]{[}{]}! \begin{lstlisting}[ style={verilog-style}, moredelim={**[is][\color{vorange}]{[}{]}} ] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \verb!moredelim=[is][\colorIndex]{[}{]}! \begin{lstlisting}[ style={verilog-style}, moredelim={[is][\colorIndex]{[}{]}} ] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \verb!moredelim=*[is][\colorIndex]{[}{]}! \begin{lstlisting}[ style={verilog-style}, moredelim={*[is][\colorIndex]{[}{]}} ] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \verb!moredelim=**[is][\colorIndex]{[}{]}! \begin{lstlisting}[ style={verilog-style}, moredelim={**[is][\colorIndex]{[}{]}} ] output [6:0] HEX0, output [2] FOO, \end{lstlisting} \end{document} You can see how the various settings for moredelim change the output. 
First, using the [s] style of delimiters. And now using [is]. I really can't explain this behavior. I was hoping to use moredelim=*[s][\colorIndex]{[}{]} and have \colorIndex examine its arguments to decide how to style the various pieces. I tried poking around at listings internals to see if \colorIndex could determine what it was about to typeset in order to set the appropriate color, but I didn't see anything useful. (There's a \lst@token which is a token register which looks like it's used to fill in the various parts of the line, but it was always empty when \colorIndex was called.) I don't have time to investigate this further right now, but hopefully someone else will have an idea. • I used moredelim too, but didn't have sufficient knowledge of the * option to let another rule inside. Also experienced same spacing errors. Bizarre! This is also the first time I've seen conditionals used in TeX, that's very useful. Perhaps some comments on the core functional pieces of the code will help improve its readability for people who haven't seen such TeX command use before. I didn't realize you actually need to comment the end of lines to avoid whitespace interfering with your commands! I consider the spaces behavior error as unresolved, but the hack solves the main problem. – Mr G Jun 29, 2017 at 8:40 • It's just testing if \@temp is defined to a single bracket or colon token and if so, setting the color to black (that might actually be unnecessary) otherwise it sets it to orange. TeX conditionals are sufficiently complex that without a specific question about them, a full explanation would likely be long. But briefly, \ifx checks if the following two tokens have the same definition. It doesn't expand the two tokens. Check TeX by Topic for the details. – TH. Jun 29, 2017 at 13:48 I realise this is quite an old question, but I've had the same issues with needing to put Verilog code in a LaTeX document. In my case I wanted it to follow the style of Notepad++, but it's easy enough to change the colouring to match your preferences. The accepted answer doesn't work for my needs as it doesn't handle verilog constants well, nor parameter names inside square brackets. After not finding what I needed, I was able to shamelessly adapt this answer for SuperCollider to work instead with Verilog. I've attached an example document below with the listing style declarations, which essentially extend the existing Verilog language definition to include highlighting for constants, operators, preprocessor directives, and system commands. The only one that's a bit iffy is the / operator, which will only highlight if it has a space on either side of it - otherwise the literate that finds it was capturing the comments as well and messing those up.
Minimal example used to produce the above output: % Packages \documentclass{article} % Code Handling \usepackage{listings} \usepackage{xcolor} \usepackage[lighttt]{lmodern} \begin{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Verilog Code Style %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \definecolor{verilogcommentcolor}{RGB}{104,180,104} \definecolor{verilogkeywordcolor}{RGB}{49,49,255} \definecolor{verilogsystemcolor}{RGB}{128,0,255} \definecolor{verilognumbercolor}{RGB}{255,143,102} \definecolor{verilogstringcolor}{RGB}{160,160,160} \definecolor{verilogdefinecolor}{RGB}{128,64,0} \definecolor{verilogoperatorcolor}{RGB}{0,0,128} % Verilog style \lstdefinestyle{prettyverilog}{ language = Verilog, alsoletter = \$'0123456789\, literate = *{+}{{\verilogColorOperator{+}}}{1}% {-}{{\verilogColorOperator{-}}}{1}% {@}{{\verilogColorOperator{@}}}{1}% {;}{{\verilogColorOperator{;}}}{1}% {*}{{\verilogColorOperator{*}}}{1}% {?}{{\verilogColorOperator{?}}}{1}% {:}{{\verilogColorOperator{:}}}{1}% {<}{{\verilogColorOperator{<}}}{1}% {>}{{\verilogColorOperator{>}}}{1}% {=}{{\verilogColorOperator{=}}}{1}% {!}{{\verilogColorOperator{!}}}{1}% {^}{{\verilogColorOperator{$\land$}}}{1}% {|}{{\verilogColorOperator{|}}}{1}% {=}{{\verilogColorOperator{=}}}{1}% {[}{{\verilogColorOperator{[}}}{1}% {]}{{\verilogColorOperator{]}}}{1}% {(}{{\verilogColorOperator{(}}}{1}% {)}{{\verilogColorOperator{)}}}{1}% {,}{{\verilogColorOperator{,}}}{1}% {.}{{\verilogColorOperator{.}}}{1}% {~}{{\verilogColorOperator{$\sim$}}}{1}% {\%}{{\verilogColorOperator{\%}}}{1}% {\&}{{\verilogColorOperator{\&}}}{1}% {\#}{{\verilogColorOperator{\#}}}{1}% {\ /\ }{{\verilogColorOperator{\ /\ }}}{3}% {\ _}{\ \_}{2}% , morestring = [s][\color{verilogstringcolor}]{"}{"},% identifierstyle = \color{black}, vlogdefinestyle = \color{verilogdefinecolor}, vlogconstantstyle = \color{verilognumbercolor}, vlogsystemstyle = \color{verilogsystemcolor}, basicstyle = \scriptsize\fontencoding{T1}\ttfamily, keywordstyle = \bfseries\color{verilogkeywordcolor}, numbers = left, numbersep = 10pt, tabsize = 4, escapeinside = {/*!}{!*/}, upquote = true, sensitive = true, showstringspaces = false, %without this there will be a symbol in the places where there is a space frame = single } % This is shamelessly stolen and modified from: % https://github.com/jubobs/sclang-prettifier/blob/master/sclang-prettifier.dtx \makeatletter % Language name \newcommand\language@verilog{Verilog} \expandafter\lst@NormedDef\expandafter\languageNormedDefd@verilog% \expandafter{\language@verilog} % save definition of single quote for testing \lst@SaveOutputDef{'}\quotesngl@verilog \lst@SaveOutputDef{}\backtick@verilog \lst@SaveOutputDef{\$}\dollar@verilog % Extract first character token in sequence and store in macro % firstchar@verilog, per http://tex.stackexchange.com/a/159267/21891 \newcommand\getfirstchar@verilog{} \newcommand\getfirstchar@@verilog{} \newcommand\firstchar@verilog{} \def\getfirstchar@verilog#1{\getfirstchar@@verilog#1\relax} \def\getfirstchar@@verilog#1#2\relax{\def\firstchar@verilog{#1}} % Initially empty hook for lst % The style used for constants as set in lstdefinestyle \newcommand\constantstyle@verilog{} \lst@Key{vlogconstantstyle}\relax% {\def\constantstyle@verilog{#1}} % The style used for defines as set in lstdefinestyle \newcommand\definestyle@verilog{} \lst@Key{vlogdefinestyle}\relax% 
{\def\definestyle@verilog{#1}} % The style used for defines as set in lstdefinestyle \newcommand\systemstyle@verilog{} \lst@Key{vlogsystemstyle}\relax% {\def\systemstyle@verilog{#1}} % Counter used to check current character is a digit \newcount\currentchar@verilog % Processing macro \newcommand\@ddedToOutput@verilog {% % If we're in \lstpkg{}' processing mode... \ifnum\lst@mode=\lst@Pmode% % Save the first token in the current identifier to \@getfirstchar \expandafter\getfirstchar@verilog\expandafter{\the\lst@token}% % Check if the token is a backtick \expandafter\ifx\firstchar@verilog\backtick@verilog % If so, then this starts a define \let\lst@thestyle\definestyle@verilog% \else % Check if the token is a dollar \expandafter\ifx\firstchar@verilog\dollar@verilog % If so, then this starts a system command \let\lst@thestyle\systemstyle@verilog% \else % Check if the token starts with a single quote \expandafter\ifx\firstchar@verilog\quotesngl@verilog % If so, then this starts a constant without length \let\lst@thestyle\constantstyle@verilog% \else \currentchar@verilog=48 \loop \expandafter\ifnum% \expandafter\firstchar@verilog=\currentchar@verilog% \let\lst@thestyle\constantstyle@verilog% \let\iterate\relax% \fi \unless\ifnum\currentchar@verilog>57% \repeat% \fi \fi \fi % ...but override by keyword style if a keyword is detected! %\lsthk@DetectKeywords% \fi } % Add processing macro only if verilog \ifx\lst@language\languageNormedDefd@verilog% \fi } % Colour operators in literate \newcommand{\verilogColorOperator}[1] {% \ifnum\lst@mode=\lst@Pmode\relax% {\bfseries\textcolor{verilogoperatorcolor}{#1}}% \else #1% \fi } \makeatother %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % End Verilog Code Style %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{lstlisting}[style={prettyverilog}] /* * A Testbench Example */ timescale 1 ns/100 ps module SynchronousTestBench_tb ( input [4:0] notReally ); // Parameter Declarations localparam NUM_CYCLES = 50; //Simulate this many clock cycles. localparam CLOCK_FREQ = 50000000; //Clock frequency (in Hz) localparam RST_CYCLES = 2; //Number of cycles of reset at beginning. // Test Bench Generated Signals reg clock; reg reset; // Device Under Test // A counter which starts at 8'hAF. wire [7:0] count; counter dut ( .clock(clock ), .reset(reset ), .start(8'hAF ), .count(count[7:0]) ); // Reset Logic initial begin reset = 1'b1; //Start in reset. repeat(RST_CYCLES) @(posedge clock); //Wait for a couple of clocks reset = 1'b0; //Then clear the reset signal. end //Clock generator + simulation time limit. initial begin clock = 1'b0; //Initialise the clock to zero. end //Next we convert our clock period to nanoseconds and half it //to work out how long we must delay for each half clock cycle //Note how we convert the integer CLOCK_FREQ parameter it a real real HALF_CLOCK_PERIOD = (1000000000.0 / $itor(CLOCK_FREQ)) / 2.0; //Now generate the clock integer half_cycles = 0; always begin //Generate the next half cycle of clock #(HALF_CLOCK_PERIOD); //Delay for half a clock period. 
clock = ~clock; //Toggle the clock half_cycles = half_cycles + 1; //Increment the counter //Check if we have simulated enough half clock cycles if (half_cycles == (2*NUM_CYCLES)) begin //Once the number of cycles has been reached half_cycles = 0; $stop; //Note: We can continue the simulation after this breakpoint using //"run -continue" or "run ### ns" in modelsim. end end endmodule \end{lstlisting} \end{document}
2022-05-17 02:16:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6433888673782349, "perplexity": 6980.367377022829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515466.5/warc/CC-MAIN-20220516235937-20220517025937-00186.warc.gz"}
http://openstudy.com/updates/557614dfe4b03f31cab2ef70
## anonymous one year ago quick question.... On an xy-coordinate how do i know what to plug in the y2-y1/x2-x1 formula 1. idku u should know the 2 points through which your line is going or one x or y coordinate can be not given, but then you have to know the slope to solve for this coordinate 2. anonymous if the y-intercept is 12 and the x-intercept is 5 ... how do i know which one to plug in where? 3. Nnesha y = mx + b where m is slope and b is y-intercept x-intercept is when y =0 so you can write it in parentheses (x,y)----->>(5,0) 4. anonymous no but in the y2-y1/x2-x1 5. Nnesha ohh so if y-intercept is 12 remember y-intercept when line cross y-axis when x=0 so (0 ,12) and x-intercept like i say x when line cross x-axis when y = 0 so (5,0) 6. anonymous yea so how do i know which goes first in the y2-y1/x2-x1 7. Nnesha and it doesn't matter just remember y values should be at the top 8. anonymous so it doesn't matter if the 12 is y2 or y1 in the equation? and the same for the 5 ? 9. Nnesha yes right :-) 10. anonymous oh.... hmmmmm well lemme try my question because i got something different... 11. anonymous wait no it's not it.... 12. anonymous i'm supposed to get a -12/5 but if i plug it in anywhere i can also get a positive 12/5 13. Nnesha or you can see which number is coming first for example (5,0)(0,12) (0,12) is first so y_1 is 12 that's how i do it 14. anonymous so left to right whatever is first? so it would be 12-0/ 0-5? 15. Nnesha it doesn't matter you will get the same answer :-) 16. anonymous ohhh okay so always right to left? 17. Nnesha yep that works too 18. Nnesha $$\color{blue}{\text{Originally Posted by}}$$ @yomamabf i'm supposed to get a -12/5 but if i plug it in anywhere i can also get a positive 12/5 $$\color{blue}{\text{End of Quote}}$$ (5,0)(0,12) $\huge\rm \frac{ 12-0 }{ 0-5 }=-\frac{ 12 }{ 5 }$ now other way (0,12)(5,0) $\frac{ 0-12 }{ 5-0} = \frac{ -12 }{ 5 }$ same ? 19. anonymous oh okay i got it 20. Nnesha u will get the same answer :-) 21. anonymous also it can be this too right? y=m(5)+12 =-12/5 22. Nnesha nope x_2 - x_1 = 5 in this case bec x = 0 so that will work but NO! 23. Nnesha Let x_2 = 3 and x_1 = 1, then x_2 - x_1 = 3 - 1 = 2, so you CAN'T substitute just one x value for x 24. anonymous hmmm okay 25. Nnesha yeah :-) 26. anonymous thank you <333 27. Nnesha but you can pick any x value from the two ordered pairs (5,0)(0,12) to plug in this equation y = mx+b u will get the same answer 28. Nnesha my pleasure :-) gO_Od luck!
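The order-independence the answer keeps stressing can be seen in one line: swapping the two points negates both numerator and denominator, so the slope is unchanged: $$\frac{y_2-y_1}{x_2-x_1}=\frac{-(y_1-y_2)}{-(x_1-x_2)}=\frac{y_1-y_2}{x_1-x_2}.$$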
2016-10-27 12:54:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6909397840499878, "perplexity": 2299.335503195912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721278.88/warc/CC-MAIN-20161020183841-00217-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.acmicpc.net/problem/10292
Time limit: 3 seconds | Memory limit: 256 MB | Submissions: 32 | Accepted: 16 | Solvers: 16 | Acceptance ratio: 50.000% ## Problem Nowadays, social networks are one of the most active research topics. A social network represents friendships between people. We say that two people are direct friends if they accept each other as friends. But friendship is an amazing thing. It is possible that a person's information shared with her/his direct friends gets shared to her/his friends of friends, and friends of friends of friends, and so on. We say that two people can reach each other if they are either direct friends, friends of friends, friends of friends of friends, and so on. Given a social network for which every pair of people can reach each other, we define Man in the Middle as a person who, if leaving the social network, breaks the condition that every pair of people can reach each other. It's possible to have more than one Man in the Middle in a social network. Help us find if the given social network has any Man in the Middle! ## Input On the first line there is a single integer T <= 15, the number of test cases. Then T test cases follow. • Each test starts, on the first line, with two integers N <= 30,000 and M <= 300,000, the number of people in a social network and the number of friendships. • For the next M lines, each line contains 2 integers A and B, such that 1<=A<=N, 1<=B<=N, and A and B are distinct; this line shows that A and B are direct friends. ## Output The output should be T lines, each line representing one test case. For each line, output "YES" if the given social network has a Man in the Middle, and "NO" otherwise. ## Sample Input 1
2
3 2
1 2
1 3
4 4
1 2
1 3
2 4
3 4
## Sample Output 1
YES
NO
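A Man in the Middle is exactly an articulation point (cut vertex) of the friendship graph. Below is a minimal sketch of the standard DFS low-link test (my own illustration, not a reference solution; for the stated limit of N up to 30,000 an iterative DFS may be needed to avoid deep recursion):

```python
import sys
from collections import defaultdict

def has_articulation_point(n, edges):
    """Return True if removing some single person disconnects the network."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
        graph[b].append(a)

    disc = [0] * (n + 1)   # discovery time, 0 = not visited yet
    low = [0] * (n + 1)    # lowest discovery time reachable from the subtree
    timer = 1
    found = False
    sys.setrecursionlimit(1 << 20)

    def dfs(u, parent):
        nonlocal timer, found
        disc[u] = low[u] = timer
        timer += 1
        children = 0
        for v in graph[u]:
            if disc[v] == 0:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # a non-root u is a cut vertex if some child subtree
                # cannot reach above u except through u itself
                if parent != 0 and low[v] >= disc[u]:
                    found = True
            elif v != parent:
                low[u] = min(low[u], disc[v])
        # the DFS root is a cut vertex iff it has two or more DFS children
        if parent == 0 and children >= 2:
            found = True

    dfs(1, 0)  # the statement guarantees the graph is connected
    return found

# The two sample cases from the problem statement
print("YES" if has_articulation_point(3, [(1, 2), (1, 3)]) else "NO")                  # YES
print("YES" if has_articulation_point(4, [(1, 2), (1, 3), (2, 4), (3, 4)]) else "NO")  # NO
```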
2022-05-16 07:39:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2795239984989166, "perplexity": 893.0887743725478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510097.3/warc/CC-MAIN-20220516073101-20220516103101-00173.warc.gz"}
https://www.physicsforums.com/threads/pv-diagram.522071/
# PV Diagram Punkyc7 Can someone check my work? I'm not sure if I am understanding how internal energy, heat, and work are related. Work is defined to be positive if the system does work on the environment. $Q=\Delta U + W$ I am showing whether each process is positive, negative or 0. Also, this is supposed to be an ideal gas. #### Attachments • CyclicProcces.jpg 35.2 KB · Views: 403 Last edited: Homework Helper Can someone check my work? I'm not sure if I am understanding how internal energy, heat, and work are related. Work is defined to be positive if the system does work on the environment. $Q=\Delta U + W$ I am showing whether each process is positive, negative or 0. Also, this is supposed to be an ideal gas.
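For reference, with the stated sign convention (W positive when the gas does work on the environment), the signs on each leg of a PV diagram follow directly from the first law; a generic summary, not tied to the attached figure: $$Q=\Delta U+W,\qquad W=\int P\,dV,\qquad \Delta U = nC_V\,\Delta T \ \text{(ideal gas)}.$$ So an expansion gives $W>0$, a compression gives $W<0$, and a constant-volume step gives $W=0$; $\Delta U$ takes the sign of the temperature change (for an ideal gas $T \propto PV$ on the diagram); and $Q$ then follows from $Q=\Delta U+W$. Over a complete cycle $\Delta U=0$, so $Q_{\text{cycle}}=W_{\text{cycle}}$.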
2023-03-31 11:49:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5706540942192078, "perplexity": 616.4319605197616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949642.35/warc/CC-MAIN-20230331113819-20230331143819-00232.warc.gz"}
https://gateoverflow.in/792/gate2006-1-isro2009-57
Consider the polynomial $p(x) = a_0 + a_1x + a_2x^2 + a_3x^3$, where $a_i \neq 0$, $\forall i$. The minimum number of multiplications needed to evaluate $p$ on an input $x$ is: 1. 3 2. 4 3. 6 4. 9 By Horner's Rule. Is it in the syllabus? we can factorize the polynomial as (x+r1)(x+r2)(x+r3), where r1, r2 and r3 are the roots of the equation, so 3 multiplications Hello reena Doesn't (x+r1)(x+r2)(x+r3) need 2 multiplications? Why are you sure that the coefficient of $x^{3}$ in our general given equation would be 1? We can use just Horner's method, according to which we can write p(x) as: $$p(x) = a_0 + x(a_1 + x(a_2 + a_3x))$$ As we can see, here we need only three multiplications, so option (A) is correct. answered by Veteran (14.6k points) selected a_0+x(a_1+x(a_2+a_3x)) so 3 multiplications required answered by Veteran (14.1k points)
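A small sketch of Horner's rule and its multiplication count (my own illustration with made-up coefficients):

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n using Horner's rule.

    coeffs = [a_0, a_1, ..., a_n]; exactly n multiplications are needed.
    """
    result = 0
    muls = 0
    for a in reversed(coeffs):
        result = result * x + a
        muls += 1
    # the very first pass multiplies the initial 0 by x, which a hand
    # evaluation would skip, so the effective count is muls - 1 = n
    return result, muls - 1

# p(x) = 2 + 3x + 5x^2 + 7x^3 evaluated at x = 2: value 84, 3 multiplications
print(horner([2, 3, 5, 7], 2))  # (84, 3)
```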
2018-01-23 09:57:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6675677299499512, "perplexity": 3337.6351645146774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891886.70/warc/CC-MAIN-20180123091931-20180123111931-00447.warc.gz"}
https://gis.stackexchange.com/questions/152853/image-segmentation-of-rgb-image-by-k-means-clustering-in-python/152924
# image segmentation of RGB image by K means clustering in python I want to segment RGB images for land cover using k means clustering in such a fashion that the different regions of the image are marked by different colors and if possible boundaries are created separating different regions. I want something like: from this: Is it possible to achieve this by K-means clustering? I have been searching all over the internet and many tutorials do it by k means clustering but only after converting the image to grey scale. I want to do it with an RGB image only. Is there any source that could help me begin with it? Please suggest something. • Hi, try this link. I tried it some time ago, but only had limited success. Maybe you can get it to work a bit better. Good luck. opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/… – Jcstay Jul 1 '15 at 7:18 • Hi, thank you for your suggestion @Jcstay but i have already tried the link and it did not help. Thank You though. – rach Jul 1 '15 at 7:29 • I would point out that the K-means algorithm, like all other clustering methods, needs an optimal fit of k. Since everything in the reference data will get assigned a class, if k is not optimized, the results can be erroneous with no support for a resulting class. In these cases, a given class can represent nothing other than noise or marginal effect in the data. Commonly, silhouette values are used to select an optimal k. – Jeffrey Evans Jul 1 '15 at 15:25 I hacked together a solution for this and wrote a blog article a while back on a very similar topic, which I will summarize here. The script is intended to extract a river from a 4-band NAIP image using an image segmentation and classification approach. 1. Convert image to a numpy array 2. Perform a quick shift segmentation (Image 2) 3. Convert segments to raster format 4. Calculate NDVI 5. Perform mean zonal statistics using segments and NDVI to transfer NDVI values to segments (Image 3) 6. Classify segments based on NDVI values 7. Evaluate results (Image 4) This example segments an image using quickshift clustering in color (x,y) space with 4 bands (red, green, blue, NIR) rather than using K-means clustering. The image segmentation was performed using the scikit-image package. More details on a variety of image segmentation algorithms in scikit-image here. For convenience's sake, I used arcpy to do much of the GIS work, although this should be pretty easy to port over to GDAL.
from __future__ import print_function
import arcpy
arcpy.CheckOutExtension("Spatial")
import matplotlib.pyplot as plt
import numpy as np
from skimage import io
from skimage.segmentation import quickshift

# The input 4-band NAIP image
river = r'C:\path\to\naip_image.tif'

# Convert image to numpy array
img = io.imread(river)

# Run the quick shift segmentation
segments = quickshift(img, kernel_size=3, convert2lab=False, max_dist=6, ratio=0.5)
print("Quickshift number of segments: %d" % len(np.unique(segments)))

# View the segments via Python
plt.imshow(segments)

# Get raster metrics for coordinate info
myRaster = arcpy.sa.Raster(river)

# Lower left coordinate of block (in map units)
mx = myRaster.extent.XMin
my = myRaster.extent.YMin
sr = myRaster.spatialReference

# Note the use of arcpy to convert numpy array to raster
seg = arcpy.NumPyArrayToRaster(segments, arcpy.Point(mx, my), myRaster.meanCellWidth, myRaster.meanCellHeight)
outRaster = r'C:\path\to\segments.tif'
seg_temp = seg.save(outRaster)
arcpy.DefineProjection_management(outRaster, sr)

# Calculate NDVI from bands 4 and 3
b4 = arcpy.sa.Raster(r'C:\path\to\naip_image.tif\Band_4')
b3 = arcpy.sa.Raster(r'C:\path\to\naip_image.tif\Band_3')
ndvi = arcpy.sa.Float(b4-b3) / arcpy.sa.Float(b4+b3)

# Extract NDVI values based on image object boundaries
zones = arcpy.sa.ZonalStatistics(outRaster, "VALUE", ndvi, "MEAN")
zones.save(r'C:\path\to\zones.tif')

# Classify the segments based on NDVI values
binary = arcpy.sa.Con(zones < 20, 1, 0)
binary.save(r'C:\path\to\classified_image_objects.tif')

• This is a fantastic solution and sidesteps some of the issues with k-means and finding an optimal k. – Jeffrey Evans Jul 1 '15 at 15:26 • This is very nice, great work!! – Jcstay Jul 2 '15 at 8:41 You could look at clustering in scikit-learn. You will need to read the data into numpy arrays (I'd suggest rasterio) and from there you can manipulate the data so that each band is a variable for classification. For example, assuming you have the three bands read into python as red, green, and blue numpy arrays:

import numpy as np
import sklearn.cluster

original_shape = red.shape  # so we can reshape the labels later
samples = np.column_stack([red.flatten(), green.flatten(), blue.flatten()])

clf = sklearn.cluster.KMeans(n_clusters=5)
labels = clf.fit_predict(samples).reshape(original_shape)

import matplotlib.pyplot as plt
plt.imshow(labels)
plt.show()

Note that the KMeans clustering doesn't take into account connectivity within the dataset. • +1 Great answer. It would be especially nice to show an example of converting color images to numpy arrays using rasterio;) – Aaron Jul 1 '15 at 14:57 • @Aaron Thanks! I've posted a slightly longer example including reading data using rasterio. – om_henners Jul 2 '15 at 2:17 • @om_henners your solution is wonderful but I have a question. The segmented image returned by your program using k means clustering is 2D. Now I need to calculate dice similarity coefficient between the original image (3D image before splitting into R,G,B bands) and the segmented image but that needs the two to have same dimensions. How do I solve this problem? – rach Jul 4 '15 at 10:31
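Following up on the rasterio suggestion in the comments, a minimal sketch of reading all bands of a raster and clustering the pixels (the file name is a placeholder; the longer rasterio example referred to above is not reproduced here):

```python
import numpy as np
import rasterio
import sklearn.cluster

# Read every band of the raster into a (bands, rows, cols) array
with rasterio.open('naip_image.tif') as src:
    img = src.read()

bands, rows, cols = img.shape
samples = img.reshape(bands, rows * cols).T  # one row per pixel, one column per band

clf = sklearn.cluster.KMeans(n_clusters=5)
labels = clf.fit_predict(samples).reshape(rows, cols)
```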
2021-02-27 07:34:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32417288422584534, "perplexity": 2584.388557750705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358203.43/warc/CC-MAIN-20210227054852-20210227084852-00156.warc.gz"}
https://qudt.org/vocab/unit/RAD
Type Description

The radian is the standard unit of angular measure, used in many areas of mathematics. It describes the plane angle subtended by a circular arc as the length of the arc divided by the radius of the arc. In the absence of any symbol radians are assumed, and when degrees are meant the symbol $$^{\circ}$$ is used.
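As a quick illustration of the definition above (plane angle = arc length divided by radius), here is a small Python snippet; the numeric values are arbitrary examples and are not part of the QUDT entry.

```python
import math

# Plane angle in radians = arc length / radius (example values, in metres)
arc_length = 3.0
radius = 2.0
angle_rad = arc_length / radius          # 1.5 rad

# Converting to degrees and back
angle_deg = math.degrees(angle_rad)      # about 85.94 degrees
back_to_rad = math.radians(angle_deg)    # 1.5 rad again
print(angle_rad, angle_deg, back_to_rad)
```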
2022-10-02 07:32:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9842307567596436, "perplexity": 138.1534770255415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00227.warc.gz"}
https://ergodicity.net/2012/07/26/spcom-2012-the-talks/
SPCOM 2012 : the talks

Well, SPCOM 2012 is over now — it was a lot of fun and a really nice-sized conference. I missed the first day of tutorials, which I heard were fantastic. Qing Zhao couldn't make it due to visa issues but gave her tutorial over Skype. Hooray for technology! The conference had one track of invited talks and two tracks of submitted papers. I am embarrassed to say that I mostly hung out in the invited track, largely because those talks were of the most interest to me. The conference is split between signal processing and communication, rather broadly construed. So the topics ran the gamut of things you might see at ICC, ICASSP, and ISIT. I'll just touch on a few of the talks here (I missed a few due to meetings and coffee), but the full proceedings will be available on IEEExplore eventually.

I attended all but one plenary — Rob Calderbank talked about compressed sensing ideas in the random access MAC, trying to get non-asymptotic results with realistic codes and asynchronous communication. Ingrid Daubechies talked about her work on forgery detection in art, specifically the Van Gogh forgery detection problem and analyzing underdrawings from the workshop of Goossen van der Weyden to detect whether van der Weyden had actually worked on those paintings. Prakash Narayan talked about generating secret keys (there was overlap from his ISIT plenary early in the month) and secure computing. The plenaries couldn't be more different in topic from each other, and so I think the students in the audience must have gotten quite a wide perspective on the current breadth of work in signal processing and communication.

The invited talks also ran the gamut of topics. Li-Chun Wang and TJ Lim talked about energy-saving networks; one spoke from a standards perspective (5G cellular) and the other about energy harvesting devices. Robert Heath talked about some work in modeling obstruction (e.g. buildings) for cellular networks using a random point process to place random shapes. He showed results for modeling buildings as lines and analyzed the impact on things like connectivity. Rahul Vaze discussed the problem of localization in sensor networks and made a connection to a percolation model called bootstrap percolation, in which nodes have one of two colors (red or blue) and a node becomes red (= localized) if a sufficient number of its neighbors become red. If the node placements are random, the question is what fraction of initial nodes need to be red in order for all nodes to eventually become red.

On a more information theoretic front, Ravi Adve discussed a model of communication which might apply when the transmitter and receiver transmit via chemical signals. This could happen if they lie in two positions in a pipe (blood vessel, oil pipe). In essence they get a timing channel, but by looking at the flow PDEs, they get a model in which the noise has an inverse Gaussian distribution. It's a preliminary setup but an interesting model (even though it involves icky icky PDEs). Prasad Santhanam talked about his work with Venkat Anantharam on insurance. The problem of someone providing insurance is similar to that of someone trying to do compression — both involve being able to predict something about future behavior of the process (insurance claims or the data signal) based on finite observations.
In a related talk, Wenyi Zhang discussed an information theoretic model in which a source provides a resource (say energy) to an encoder and then the encoder can only transmit when it has enough energy — how do we code in such a scenario? Vinod Prabhakaran talked about how multiuser information theory proofs that use indirect decoding (e.g. auxiliary random variables which are not actually decoded) can be transformed into those using direct decoding without loss in rate. So in a sense, the indirect decoding is not providing the extra boost in the rate region. There were a few talks on networks as well — Pramod Vishwanath talked about packet erasure networks and designing broadcast protocols for them, and Sid Jaggi talked about polynomial time codes for Gaussian relay networks and then switched to discussing the SHO-FA protocol, which is actually efficient in a real sense (and not in a polytime sense).

On the networked inference front, José Moura talked about models for consensus with continuous observations using a mix of filtering and consensus operations (and stochastic approximation). The formulas were a bit hairy, but that seemed hard to get around. Olgica Milenkovic talked about consensus protocols for rankings, where you want a group of agents to learn a consensus ranking defined by, say, the ranking which minimizes some average distance to all of the initial values. The choice of metric is important here, and she talked about weighted distances between permutations where the weights correspond to ranking (e.g. it's more important for the top guys to be equal).

There were also talks on learning and inference — Preeti Rao talked about extracting metadata from music, and in particular Hindustani classical music. R. Aravind talked about an empirical study of trying to determine if stripe patterns in tigers exhibit bilateral symmetry. This is important for things like tracking tiger populations via triggered cameras in the forest. Rowr. Aarti Singh discussed matrix completion for matrices which are highly structured but not low-rank — these are ultrametric matrices which have strong block structure and decay as you move off-diagonal. Prakash Ishwar talked about large $p$, small $n$ settings for statistical inference in the setting where there is a nonzero Bayes error. In this setting what can we say about different inference procedures?

Finally, I talked about communication with interference generated by an eavesdropper. In the middle of my talk the laptop decided to install Windows updates and rebooted. Apparently it was eavesdropping on my talk and decided to jam it. I now know who the adversary is — it's Microsoft.
2018-12-19 08:00:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5176302194595337, "perplexity": 1569.1121990623878}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831715.98/warc/CC-MAIN-20181219065932-20181219091932-00385.warc.gz"}
https://www.tutorialspoint.com/convex-hull-example-in-data-structures
# Convex Hull Example in Data Structures

Here we will see one example on convex hull. Suppose we have a set of points. We have to make a polygon using as few of the given points as possible such that it covers all given points. In this section we will see the Jarvis March algorithm to get the convex hull.

The Jarvis March algorithm is used to detect the corner points of a convex hull from a given set of data points. Starting from the leftmost point of the data set, we keep the points in the convex hull by anti-clockwise rotation. From a current point, we can choose the next point by checking the orientations of the other points from the current point. When the angle is largest, the point is chosen. After completing all points, when the next point is the start point, we stop the algorithm.

Input − Set of points: {(-7,8), (-4,6), (2,6), (6,4), (8,6), (7,-2), (4,-6), (8,-7), (0,0), (3,-2), (6,-10), (0,-6), (-9,-5), (-8,-2), (-8,0), (-10,3), (-2,2), (-10,4)}

Output − Boundary points of convex hull are − (-9, -5) (6, -10) (8, -7) (8, 6) (-7, 8) (-10, 4) (-10, 3)

## Algorithm

findConvexHull(points, n)
Input: The points, number of points.
Output: Corner points of convex hull.

Begin
   start := points[0]
   for each point i, do
      if points[i].x < start.x, then      // get the left most point
         start := points[i]
   done
   current := start
   add start point to the result set
   define colPts set to store collinear points
   while true, do                         // start an infinite loop
      next := points[0]
      for all points i except the 0th point, do
         if points[i] = current, then
            skip the rest, go for the next iteration
         val := cross product of current, next, points[i]
         if val > 0, then
            next := points[i]
            clear the colPts array
         else if val = 0, then
            if next is closer to current than points[i], then
               add next into colPts
               next := points[i]
            else
               add points[i] into colPts
      done
      add all items in colPts into the result
      if next = start, then
         break the loop
      insert next into the result
      current := next
   done
   return result
End

## Example

```cpp
#include<iostream>
#include<set>
#include<vector>
using namespace std;

struct point{                  //define points for 2d plane
   int x, y;
   bool operator==(point p2){
      if(x == p2.x && y == p2.y)
         return 1;
      return 0;
   }
   bool operator<(const point &p2) const{   //dummy compare function used to sort in set
      return true;
   }
};

int crossProduct(point a, point b, point c){   //finds the place of c from ab vector
   int y1 = a.y - b.y;
   int y2 = a.y - c.y;
   int x1 = a.x - b.x;
   int x2 = a.x - c.x;
   return y2*x1 - y1*x2;   //if result < 0, c is on the left; > 0, c is on the right; = 0, a,b,c are collinear
}

int distance(point a, point b, point c){
   int y1 = a.y - b.y;
   int y2 = a.y - c.y;
   int x1 = a.x - b.x;
   int x2 = a.x - c.x;
   int item1 = (y1*y1 + x1*x1);
   int item2 = (y2*y2 + x2*x2);
   if(item1 == item2)
      return 0;          //when b and c are at the same distance from a
   else if(item1 < item2)
      return -1;         //when b is closer to a
   return 1;             //when c is closer to a
}

set<point> findConvexHull(point points[], int n){
   point start = points[0];
   for(int i = 1; i<n; i++){          //find the left most point for starting
      if(points[i].x < start.x)
         start = points[i];
   }
   point current = start;
   set<point> result;                 //set is used to avoid entry of duplicate points
   result.insert(start);
   vector<point> *collinearPoints = new vector<point>;
   while(true){
      point nextTarget = points[0];
      for(int i = 1; i<n; i++){
         if(points[i] == current)     //when selected point is the current point, ignore the rest
            continue;
         int val = crossProduct(current, nextTarget, points[i]);
         if(val > 0){                 //when ith point is on the left side
            nextTarget = points[i];
            collinearPoints = new vector<point>;   //reset collinear points
         }else if(val == 0){          //if three points are collinear
            if(distance(current, nextTarget, points[i]) < 0){   //add closer one to collinear list
               collinearPoints->push_back(nextTarget);
               nextTarget = points[i];
            }else{
               collinearPoints->push_back(points[i]);   //when ith point is closer or same as nextTarget
            }
         }
      }
      vector<point>::iterator it;
      for(it = collinearPoints->begin(); it != collinearPoints->end(); it++){
         result.insert(*it);          //add all points in collinear points to result set
      }
      if(nextTarget == start)         //when next point is start, the area is covered
         break;
      result.insert(nextTarget);
      current = nextTarget;
   }
   return result;
}

int main(){
   point points[] = { {-7,8},{-4,6},{2,6},{6,4},{8,6},{7,-2},{4,-6},{8,-7},{0,0},
      {3,-2},{6,-10},{0,-6},{-9,-5},{-8,-2},{-8,0},{-10,3},{-2,2},{-10,4}};
   int n = 18;
   set<point> result;
   result = findConvexHull(points, n);
   cout << "Boundary points of convex hull are: "<<endl;
   set<point>::iterator it;
   for(it = result.begin(); it!=result.end(); it++)
      cout << "(" << it->x << ", " <<it->y <<") ";
}
```

## Output

Boundary points of convex hull are:
(-9, -5) (6, -10) (8, -7) (8, 6) (-7, 8) (-10, 4) (-10, 3)
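If you only need the hull corners and not the Jarvis March implementation itself, a quick cross-check with SciPy (not part of the tutorial code above) should report the same seven boundary points, possibly starting from a different vertex:

```python
# Independent check of the boundary points using SciPy's ConvexHull (Qhull).
import numpy as np
from scipy.spatial import ConvexHull

points = np.array([(-7,8),(-4,6),(2,6),(6,4),(8,6),(7,-2),(4,-6),(8,-7),(0,0),
                   (3,-2),(6,-10),(0,-6),(-9,-5),(-8,-2),(-8,0),(-10,3),(-2,2),(-10,4)])

hull = ConvexHull(points)
# hull.vertices holds the indices of the hull corners in counter-clockwise order
for i in hull.vertices:
    print(tuple(points[i]))
```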
2021-10-16 19:14:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3185328245162964, "perplexity": 11474.114966132052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584913.24/warc/CC-MAIN-20211016170013-20211016200013-00642.warc.gz"}
https://www.math-only-math.com/word-problems-on-pythagorean-theorem.html
# Word problems on Pythagorean Theorem

Learn how to solve different types of word problems on the Pythagorean Theorem. The Pythagorean Theorem can be used to solve a problem step-by-step when we know the lengths of two sides of a right angled triangle and we need to get the length of the third side.

Three cases of word problems on the Pythagorean Theorem:

Case 1: To find the hypotenuse where perpendicular and base are given.
Case 2: To find the base where perpendicular and hypotenuse are given.
Case 3: To find the perpendicular where base and hypotenuse are given.

Word problems using the Pythagorean Theorem:

1. A person has to walk 100 m to go from position X in a direction north of east to position Y and then towards the west to finally reach position Z. The position Z is situated to the north of X and at a distance of 60 m from X. Find the distance between X and Y.

Solution:

Let XY = x m. Therefore, YZ = (100 – x) m.

In ∆ XYZ, ∠Z = 90°. Therefore, by the Pythagorean theorem:

XY² = YZ² + XZ²
⇒ x² = (100 – x)² + 60²
⇒ x² = 10000 – 200x + x² + 3600
⇒ 200x = 10000 + 3600
⇒ 200x = 13600
⇒ x = 13600/200
⇒ x = 68

Therefore, the distance between X and Y = 68 meters.

2. If the square of the hypotenuse of an isosceles right triangle is 128 cm², find the length of each side.

Solution:

Let each of the two equal sides of the right angled isosceles triangle, right angled at Q, be k cm.

Given: h² = 128. So we get

PR² = PQ² + QR²
h² = k² + k²
⇒ 128 = 2k²
⇒ 128/2 = k²
⇒ 64 = k²
⇒ √64 = k
⇒ 8 = k

Therefore, the length of each side is 8 cm.

Using the formula, solve more word problems on the Pythagorean Theorem.

3. Find the perimeter of a rectangle whose length is 150 m and whose diagonal is 170 m.

Solution:

In a rectangle, each angle measures 90°. Therefore PSR is right angled at S.

Using the Pythagorean theorem, we get
⇒ PS² + SR² = PR²
⇒ PS² + 150² = 170²
⇒ PS² = 170² – 150²
⇒ PS² = (170 + 150)(170 – 150), [using the formula a² - b² = (a + b)(a - b)]
⇒ PS² = 320 × 20
⇒ PS² = 6400
⇒ PS = √6400
⇒ PS = 80

Therefore the perimeter of the rectangle PQRS = 2 (length + width) = 2 (150 + 80) m = 2 (230) m = 460 m.

4. A ladder 13 m long is placed on the ground in such a way that it touches the top of a vertical wall 12 m high. Find the distance of the foot of the ladder from the bottom of the wall.

Solution:

Let the required distance be x meters. Here, the ladder, the wall and the ground form a right-angled triangle. The ladder is the hypotenuse of that triangle.

According to the Pythagorean Theorem,
x² + 12² = 13²
⇒ x² = 13² – 12²
⇒ x² = (13 + 12)(13 – 12)
⇒ x² = (25)(1)
⇒ x² = 25
⇒ x = √25
⇒ x = 5

Therefore, the distance of the foot of the ladder from the bottom of the wall = 5 meters.

5. The heights of two buildings are 34 m and 29 m respectively. If the distance between the two buildings is 12 m, find the distance between their tops.

Solution:

The vertical buildings AB and CD are 34 m and 29 m high respectively. Draw DE ⊥ AB.

Then AE = AB – EB, and EB equals the height of the shorter building, so AE = 34 m - 29 m = 5 m.

Now, AED is a right angled triangle, right angled at E. Therefore,
⇒ AD² = 5² + 12²
⇒ AD² = 25 + 144
⇒ AD² = 169
⇒ AD = 13

Therefore the distance between their tops = 13 m.

The examples will help us to solve various types of word problems on the Pythagorean Theorem.
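The three cases above reduce to two one-line formulas; here is a short Python sketch (the function names are my own) that reproduces the numerical answers of problems 3, 4 and 5.

```python
import math

def hypotenuse(a, b):
    """Length of the hypotenuse given the two legs a and b."""
    return math.hypot(a, b)

def leg(c, a):
    """Length of the remaining leg given the hypotenuse c and one leg a."""
    return math.sqrt(c**2 - a**2)

print(leg(170, 150))       # problem 3: width of the rectangle  -> 80.0
print(leg(13, 12))         # problem 4: foot of the ladder      -> 5.0
print(hypotenuse(5, 12))   # problem 5: distance between tops   -> 13.0
```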
2018-12-17 12:34:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48289597034454346, "perplexity": 1320.1353695146026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00453.warc.gz"}
https://www.hackmath.net/en/math-problem/15053?tag_id=154
# Angled cyclist turn

The cyclist passes through a curve with a radius of 20 m at 25 km/h. How much must he lean from the vertical inward to the turn?

Correct result: A = 13.557°

#### Solution:

$$r = 20\ \text{m}, \qquad v = 25\ \text{km/h} = \frac{25}{3.6}\ \text{m/s} \approx 6.944\ \text{m/s}, \qquad g = 10\ \text{m/s}^2$$

The force on the cyclist is the vector sum of the centrifugal force $$F_o$$ and the weight $$F_g$$, where

$$F_g = m g, \qquad F_o = \frac{m v^2}{r}$$

$$\tan A = \frac{F_o}{F_g} = \frac{m v^2 / r}{m g} = \frac{v^2}{r g}$$

$$A = \arctan\left(\frac{v^2}{r g}\right) = \arctan\left(\frac{6.944^2}{20 \cdot 10}\right) \approx 0.2366\ \text{rad} \approx 13.557^\circ = 13^\circ 33' 24''$$

Showing 1 comment:

Matematik: A cyclist has to bend slightly towards the center of the circular track in order to make a safe turn without slipping. Let m be the mass of the cyclist along with the bicycle and v the velocity. When the cyclist negotiates the curve, he bends inwards from the vertical by an angle θ. Let R be the reaction of the ground on the cyclist. The reaction R may be resolved into two components: (i) the component R sin θ, acting towards the center of the curve and providing the necessary centripetal force for circular motion, and (ii) the component R cos θ, balancing the weight of the cyclist along with the bicycle. Thus for less bending of the cyclist (i.e. for θ to be small), the velocity v should be smaller and the radius r should be larger. Let h be the elevation of the outer edge of the road above the inner edge and l be the width of the road.
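A quick numerical check of the worked solution above in Python, using the same rounded value g = 10 m/s²:

```python
import math

r = 20.0        # radius of the curve in metres
v = 25 / 3.6    # 25 km/h converted to m/s
g = 10.0        # gravitational acceleration used in the solution

A = math.atan(v**2 / (r * g))   # lean angle from the vertical, in radians
print(math.degrees(A))          # about 13.56 degrees
```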
2020-07-03 10:07:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5625880360603333, "perplexity": 779.6547325844432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655881984.34/warc/CC-MAIN-20200703091148-20200703121148-00335.warc.gz"}
http://theanalysisofdata.com/probability/3_7.html
## Probability

### The Analysis of Data, volume 1

Important Random Variables: The Uniform Distribution

## 3.7. The Uniform Distribution

The uniform RV, $X\sim U(a,b)$, where $a < b$, is the classical model on the interval $[a,b]$ (see Section 1.4). By definition, the pdf of a $U(a,b)$ RV is constant for $x\in[a,b]$ and 0 for $x\not\in[a,b]$. The only such constant function that integrates to one is $f(x)=1/(b-a)$ for $x\in [a,b]$ and 0 otherwise, implying that
$f_X(x)=\begin{cases}1/(b-a) & x\in[a,b]\\ 0 & \text{otherwise}\end{cases}.$
As computed in Example 2.3.2,
\begin{align*} \E(X)&=(a+b)/2\\ \Var(X)&=(b-a)^2/12 \end{align*}
implying that the expectation is the mid-point of the interval and the variance increases with the square of the interval width.

The R code below graphs the pdf and the cdf of $U(a,b)$ for two different parameter values.

```r
x = seq(-1, 2, length = 100)
y1 = dunif(x, 0, 1/2)
y2 = dunif(x, 0, 1)
y3 = punif(x, 0, 1/2)
y4 = punif(x, 0, 1)
D = data.frame(probability = c(y1, y2, y3, y4))
D$parameter[1:100] = "$U(0,1/2)$"
D$parameter[101:200] = "$U(0,1)$"
D$parameter[201:300] = "$U(0,1/2)$"
D$parameter[301:400] = "$U(0,1)$"
D$type[1:200] = "$f_X(x)$"
D$type[201:400] = "$F_X(x)$"
D$x = x
qplot(x, probability, data = D, geom = "area",
      facets = parameter ~ type, xlab = "$x$", ylab = "",
      main = "Uniform pdf and cdf functions")
```
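As a complement to the R plot above, here is a short Python check (not from the book) that sampled draws from U(0, 1/2) match the formulas E(X) = (a+b)/2 and Var(X) = (b-a)²/12:

```python
import numpy as np

a, b = 0.0, 0.5                    # the U(0, 1/2) case plotted above
rng = np.random.default_rng(0)
samples = rng.uniform(a, b, size=1_000_000)

print(samples.mean(), (a + b) / 2)        # ~0.25   vs exact (a+b)/2
print(samples.var(), (b - a)**2 / 12)     # ~0.0208 vs exact (b-a)^2/12
```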
2019-04-18 16:18:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8974946737289429, "perplexity": 1223.2699373108262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517745.15/warc/CC-MAIN-20190418161426-20190418183426-00263.warc.gz"}
https://math.stackexchange.com/questions/800392/hypersphere-packings-from-hypercubic-graphs
# Hypersphere packings from hypercubic graphs? Consider a $D$-dimensional hypercubic lattice, i.e. a graph $H$ embedded in ${\mathbb R}^D$ where the vertices have integer coordinates $(x_1,...,x_D) \in {\mathbb Z}^D$ and edges are between all pairs $\{ (x_1,...,x_i,...,x_D), (x_1,...,x_i+1,...,x_D)\}$. From this graph $H$, build a second graph $S$ the following way. • consider the family of vertices of $H$, embedded in the $D-1$-dimensional hyperplanes $S_t$ defined by $\sum_{i=1}^D x_i = t$ with $t \in {\mathbb Z}$ • consider that there is an edge between two vertices in $S_t$ if, in $H$, those two vertices have a common neighbour in $S_{t+1}$ or/and $S_{t-1}$. Obviously $S = \cup_t S_t$ is a disconnected graph with each $S_t$ being a $D-1$ dimensional layer. I remarked that for $D = 2, 3$, and apparently also for $D=4$, the $S_t$ are actually $D-1$-hypersphere packing contact graphs. My question is, for what values of $D$ does this hold? • In your definition of adjacency, do you mean $x_i+1$? May 20, 2014 at 8:23 • Right - I've just done the editing. May 20, 2014 at 17:51 • If I'm mentally translating right, these entities are usually known as the $A_n$ lattices and you might be able to find useful information on them under that name; in particular, en.wikipedia.org/wiki/Root_system#An is a decent starting point. Your intuition is good; they're related to sphere packing, although for $n\gt 3$ they're not the most efficient packings. May 20, 2014 at 18:20 • Thanks for this answer. I'll have to dig into root systems then. Yet I would have expected this to hold for a bit more dimensions since up to $D=8$, the compact hypersphere packs are laminated. Or maybe I have to go into the details to see how my $D$ and your $n$ map. May 21, 2014 at 4:20 • @Mathias I may be slightly off my mark; this isn't my area of expertise, but I believe what you've described is the $A_n$ lattice as opposed to $D_n$ (my $n$ is your $D-1$); if I'm right, then through up to three dimensions of packing - or 4 dimensions of hypercube - you're correct in your assumption, but then I believe it will break down one dimension higher and no longer be the densest packing. If you want more reading on this, I highly, highly recommend Conway and Sloane's Sphere Packings, Lattices and Groups, which is absolutely the canonical reference. May 21, 2014 at 15:08
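To experiment with the construction in the question, here is a small self-contained Python sketch (my own, restricted to a finite box of lattice points, so vertices near the box boundary miss some neighbours); it builds one layer graph $S_t$ as described and makes no claim about which $D$ yield packing contact graphs.

```python
# Build the layer graph S_t of the D-dimensional hypercubic lattice, restricted
# to a finite box; two vertices of S_t are joined if they share a common
# hypercubic-lattice neighbour in layer t+1 or t-1.
from itertools import product, combinations

D, t, radius = 3, 0, 2   # dimension, layer index, half-width of the box (arbitrary choices)

def layer(s):
    """Lattice points in the box whose coordinates sum to s."""
    return [p for p in product(range(-radius, radius + 1), repeat=D) if sum(p) == s]

def neighbours(p):
    """Hypercubic-lattice neighbours of p (one coordinate changed by +/- 1)."""
    out = set()
    for i in range(D):
        for d in (-1, 1):
            q = list(p)
            q[i] += d
            out.add(tuple(q))
    return out

S_t = layer(t)
adjacent_layers = set(layer(t + 1)) | set(layer(t - 1))
edges = [(p, q) for p, q in combinations(S_t, 2)
         if neighbours(p) & neighbours(q) & adjacent_layers]
print(len(S_t), "vertices and", len(edges), "edges in S_t")
```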
2022-07-05 14:59:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7648181915283203, "perplexity": 338.91387386656714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104585887.84/warc/CC-MAIN-20220705144321-20220705174321-00782.warc.gz"}
https://www.physicsforums.com/threads/derivative-method-for-error-in-kinetic-energy-formula.741541/
Derivative Method for Error in Kinetic Energy formula

1. Mar 4, 2014 Shiz

1. The problem statement, all variables and given/known data

Finding the error in kinetic energy.

2. Relevant equations

K = $\frac{1}{2} m v^2$

3. The attempt at a solution

Measured mass and velocities have errors in them. So we have to use the derivative method to calculate the uncertainty in KE, which is to find the square root of the derivatives of K with respect to m and v and multiply by the errors of the variable you took the derivative of.

[1] derivative of K with respect to m = 1/2 v² times the error in the mass
[2] derivative of K with respect to v = m v times the error in the velocity

To calculate the error in K we have to take the square root of the sum of the square of [1] and the square of [2]. What I don't understand is why the units don't match with equation [2]. Units should be kg²m²/s².

Last edited: Mar 4, 2014

2. Mar 4, 2014 vela, Staff Emeritus

[1] should be $$\frac{\partial K}{\partial m} = \frac{1}{2}v^2$$ and $$\Delta K = \frac{\partial K}{\partial m} \Delta m.$$ Perhaps that's what you meant, but what you wrote is $$\frac{\partial K}{\partial m} = \frac{1}{2}v^2 \Delta m.$$ In any case, why do you think the units aren't working out?
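The quadrature rule described in the attempt can be written as a tiny Python function; the measurement values below are made-up examples, not from the problem.

```python
import math

def kinetic_energy_uncertainty(m, v, dm, dv):
    """Propagate errors dm and dv through K = (1/2) m v^2 in quadrature."""
    dK_dm = 0.5 * v**2   # partial derivative of K with respect to m
    dK_dv = m * v        # partial derivative of K with respect to v
    return math.sqrt((dK_dm * dm)**2 + (dK_dv * dv)**2)

# Hypothetical measurements: m = 2.0 +/- 0.1 kg, v = 3.0 +/- 0.2 m/s
print(kinetic_energy_uncertainty(2.0, 3.0, 0.1, 0.2))   # uncertainty in joules
```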
2017-08-16 22:37:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7996552586555481, "perplexity": 569.833200541031}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102663.36/warc/CC-MAIN-20170816212248-20170816232248-00529.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/ipi.2016.10.87
# American Institute of Mathematical Sciences

February 2016, 10(1): 87-102. doi: 10.3934/ipi.2016.10.87

## Common midpoint versus common offset acquisition geometry in seismic imaging

1 School of Mathematical Sciences, Rochester Institute of Technology, Rochester, NY 14623, United States
2 TIFR Centre for Applicable Mathematics, Post Bag No. 6503, GKVK Post Office, Sharada Nagar, Chikkabommasandra, Bangalore, Karnataka 560065, India
3 Department of Mathematics and Statistics, University of Limerick, Castletroy, Co. Limerick, Ireland
4 Department of Mathematics, Tufts University, Medford, MA 02155, United States

Received December 2014. Revised June 2015. Published February 2016.

We compare and contrast the qualitative nature of backprojected images obtained in seismic imaging when common offset data are used versus when common midpoint data are used. Our results show that the image obtained using common midpoint data contains artifacts which are not present with common offset data. Although there are situations where one would still want to use common midpoint data, this result points out a shortcoming that should be kept in mind when interpreting the images.

Citation: Raluca Felea, Venkateswaran P. Krishnan, Clifford J. Nolan, Eric Todd Quinto. Common midpoint versus common offset acquisition geometry in seismic imaging. Inverse Problems and Imaging, 2016, 10 (1) : 87-102. doi: 10.3934/ipi.2016.10.87
2022-06-27 18:45:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6534275412559509, "perplexity": 4196.503126250384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103337962.22/warc/CC-MAIN-20220627164834-20220627194834-00403.warc.gz"}
http://mathhelpforum.com/calculus/104717-solved-can-anyone-give-me-hints-how-do-these-problems-print.html
# [SOLVED] Can anyone give me hints on how to do these problems? • Sep 27th 2009, 07:24 PM Forum_User [SOLVED] Can anyone give me hints on how to do these problems? Can anyone give me a hint on how to do these? Just tell me which methods to use, like can problem "a" be factored or can it be found finding the common denominator? Just hints on which methods which can be used to solve the problems. http://img21.imageshack.us/img21/825...roblems123.jpg I have been having a hard time just figuring out how to solve the problems so if anyone can help me, it will be appreciated it, thanks. • Sep 27th 2009, 07:33 PM HallsofIvy Generally speaking, to find $\lim_{x\to a} \frac{f(x)}{g(x)}$, where f and g are polynomials, first try setting x= a. If g(a) is not 0, then the limit is just f(a)/g(a). If g(a)= 0 and f(a) is not 0, then the limit does not exist. If both f(a)= 0 and g(a)= 0, then, yes, x- a must be a factor of both. Just divide the polynomials by x-a to find the other factor and cancel the x-a in numerator and denominator. For (c), "rationalize the denominator" by multiplying both numerator and denominator by $3+\sqrt{x^2+ 5}$ For (d), use the fact that $\frac{sin(a)}{a}$ has limit 1 as a goes to 0. (Of course, "a", here, is "5x".) • Sep 27th 2009, 07:33 PM VonNemo19 Quote: Originally Posted by Forum_User Can anyone give me a hint on how to do these? Just tell me which methods to use, like can problem "a" be factored or can it be found finding the common denominator? Just hints on which methods which can be used to solve the problems. http://img21.imageshack.us/img21/825...roblems123.jpg I have been having a hard time just figuring out how to solve the problems so if anyone can help me, it will be appreciated it, thanks. In a abd b: Factoring will remove the problem. In c: rationalize the denominator and simplify. You will see what to do after that. In d: Devude every term by $w^3$ and note which ones will go to zero as w goes to infinity. In e: If the numerator is always between -1 and 1, and the denominator just keeps getting bigger, what do you thinks gonna happen. Graph this and you will see. In f: multiply numerator and denominator by five thirds. See anything familiar? • Sep 27th 2009, 09:02 PM Forum_User Thanks for the help everyone. I did problem "a" successfully by factoring then plugging it in, it came out as 32/9 and I checked the answer sheet to find out that was the correct answer. Thanks again guys, For "b" I think I am stuck. $\lim_{x\to 3+} \frac{x^2+4x+3}{x^2-7x+12}$ I factored it out to $\lim_{x\to 3+} \frac{(x+3)(x+1)}{(x-4)(x-3)}$ If I plugged in 3, it would become 24/0. Is there anything else I can do? • Sep 27th 2009, 10:41 PM Forum_User I tried working out problem c, I managed to do it mostly right but I took a few peeks at the answer sheet. There is one part that I don't understand: http://img3.imageshack.us/img3/3092/12345qe.jpg So inside the circled part, it seems they canceled the $\frac{(x-2)}{}$ on the top with the $\frac{}{(2-x)}$ on the bottom. I don't understand exactly how they went from the (inside the blue circle) part on the left to the part on the right, can anyone explain? Also I don't understand why there it was $\frac{}{(2+x)}$ on the left side, then suddenly a negative sign in front of it $\frac{}{-(2+x)}$ after they canceled the $\frac{(x-2)}{}$ and $\frac{}{(2-x)}$. • Sep 28th 2009, 12:17 AM mr fantastic Quote: Originally Posted by Forum_User I tried working out problem c, I managed to do it mostly right but I took a few peeks at the answer sheet. 
There is one part that I don't understand: http://img3.imageshack.us/img3/3092/12345qe.jpg So inside the circled part, it seems they canceled the $\frac{(x-2)}{}$ on the top with the $\frac{}{(2-x)}$ on the bottom. I don't understand exactly how they went from the (inside the blue circle) part on the left to the part on the right, can anyone explain? Also I don't understand why it was $\frac{}{(2+x)}$ on the left side, then suddenly a negative sign in front of it $\frac{}{-(2+x)}$ after they canceled the $\frac{(x-2)}{}$ and $\frac{}{(2-x)}$.
In the denominator, (2 - x) can be written as -(x - 2). Then the common factor of (x - 2) is cancelled, leaving a negative sign on the denominator; that is why (2 + x) becomes -(2 + x).
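For anyone wanting to check these mechanically, a short SymPy session confirms the two limits visible in the thread. Problem (b) is quoted above; problem (c) is assumed here to be $\lim_{x\to 2}\frac{x-2}{3-\sqrt{x^2+5}}$, which is only a guess inferred from the $(2-x)$ and $(2+x)$ factors discussed in the answer sheet.

```python
from sympy import symbols, limit, sqrt

x = symbols('x')

# Problem (b) as quoted: a one-sided limit from the right.
print(limit((x**2 + 4*x + 3) / (x**2 - 7*x + 12), x, 3, '+'))
# -oo: the numerator tends to 24 while the denominator tends to 0 through negative values

# Guessed problem (c): rationalising gives -(3 + sqrt(x**2 + 5)) / (2 + x), hence -3/2.
print(limit((x - 2) / (3 - sqrt(x**2 + 5)), x, 2))
```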
2016-12-11 03:06:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.800318717956543, "perplexity": 449.0391870639047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698543782.28/warc/CC-MAIN-20161202170903-00403-ip-10-31-129-80.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/71159/algorithm-to-split-n-distinct-items-into-k-nonempty-unlabelled-subsets
# Algorithm to split $n$ distinct items into $k$ nonempty unlabelled subsets The number of ways to split $n$ items into $k$ nonempty unlabelled subsets ($k<n$) is a Stirling number of the second kind.(https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind) Is there an algorithm to generate all the possible combinations? • You can derive such an algorithm directly from the recurrence for these numbers. – D.W. Mar 5 '17 at 13:58 Concretely, let us count the number of ways to partition $[n]$ into $k$ non-empty ordered subsets, which are themselves ordered by increasing order of the first element. For example, a valid partition of 10 into 5 sets is $$\{1,4,5\},\{2,3,6\},\{7\},\{8,10\},\{9\}.$$ Note that $1<2<7<8<9$. The base cases $k=0,k=1,k=n$ are easy, so suppose $2 \leq k < n$. A partition of $[n]$ into $k$ ordered subsets can be obtained in two ways: 1. Taking a partition of $[n-1]$ into $k-1$ ordered subsets, and adding a new subset containing only $n$. 2. Taking a partition of $[n-1]$ into $k$ ordered subsets, and adding $n$ to the 1st subset, the 2nd subset, ..., the $k$th subset. By recursively generating all partitions of $[n-1]$ into $k-1$ or $k$ ordered subsets, you can thus generate all partitions of $[n]$ into $k$ ordered subsets.
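The recurrence described above translates directly into a generator. A minimal Python sketch (element $n-1$ either starts a new block of its own or joins one of the existing blocks, which keeps the blocks in the canonical order used in the answer):

```python
def partitions_into_k(n, k):
    """Yield all partitions of {0, ..., n-1} into k non-empty blocks.

    Follows the Stirling-number recurrence: the last element either forms
    a new block on its own, or is appended to one of the k blocks of a
    partition of the remaining n-1 elements.
    """
    if k < 1 or k > n:
        return
    if n == k:
        yield [[i] for i in range(n)]
        return
    if k == 1:
        yield [list(range(n))]
        return
    for smaller in partitions_into_k(n - 1, k - 1):
        # n-1 goes into a block of its own (appended last: its first element is largest)
        yield smaller + [[n - 1]]
    for smaller in partitions_into_k(n - 1, k):
        # n-1 joins each existing block in turn
        for i in range(k):
            yield [blk + [n - 1] if i == j else blk for j, blk in enumerate(smaller)]

# Example: a 4-element set splits into 2 blocks in S(4, 2) = 7 ways.
print(sum(1 for _ in partitions_into_k(4, 2)))  # 7
```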
2022-01-26 05:31:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7764383554458618, "perplexity": 163.58051530411362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304915.53/warc/CC-MAIN-20220126041016-20220126071016-00287.warc.gz"}
https://codecrucks.com/activity-selection-problem/
Activity Selection Problem – Scheduling Optimal Number of Activities

Activity Selection Problem: "Schedule the maximum number of compatible activities that need exclusive access to a resource such as a processor, a classroom, an event venue, etc."

• The span of an activity is defined by its start time and finishing time. Suppose we have n such activities.
• The aim of the algorithm is to find an optimal schedule with the maximum number of activities to be carried out with limited resources. Suppose S = {a1, a2, a3, .. an} is the set of activities that we want to schedule.
• Scheduled activities must be compatible with each other. If the start time of activity i is si and its finishing time is fi, then activities i and j are called compatible if and only if fi ≤ sj or fj ≤ si. In other words, two activities are compatible if their time durations do not overlap.
• Consider the below time line. Activities {A1, A3} and {A2, A3} are compatible sets of activities.
• For given n activities, there may exist multiple such schedules. The aim of the activity selection algorithm is to find the longest schedule without overlap.
• The greedy approach sorts activities by their finishing time in increasing order, so that f1 ≤ f2 ≤ f3 ≤ . . . ≤ fn. It always schedules the first activity in the sorted list. A subsequent activity is scheduled whenever its start time is no earlier than the finish time of the previously scheduled activity. Run through all activities in this order and apply the same test.
• Consider the activity set A = {A1, A2, A3, A4, A5, A6, A7, A8, A9}, their start times S = {1, 4, 5, 2, 6, 3, 10, 12, 11} and finish times F = {4, 5, 7, 9, 10, 14, 15, 16, 17}. If we first select activity A4 for scheduling, we will end up with only two activities, {A4, A7}.
• If we instead start with A1 and then pick A5, only A7 can still follow, so we end up with just {A1, A5, A7}.
• Whereas the greedy algorithm schedules {A1, A2, A3, A7}, which is the largest possible set.

Algorithm for Activity Selection Problem

Algorithm ACTIVITY_SELECTION(A, S)
// A is the set of n activities, sorted by finishing time
// S = { A[1] }, the solution set, initially containing the first activity
i ← 1
j ← 2
while j ≤ n do
    if f[i] ≤ s[j] then
        S ← S ∪ { A[j] }
        i ← j
    end
    j ← j + 1
end

Complexity Analysis

• The brute force approach needs up to (n – 1) comparisons per activity to find the next compatible activity, so it runs in O(n2) time.
• Sorting the activities by their finishing time takes O(n.log2n) time. After sorting, one scan is required to select activities, which takes O(n) time. So the total time required for greedy activity selection is O(n + nlog2n) = O(nlog2n).

Example

Example: Given the following data, determine the optimal schedule using the greedy approach.

A = <A1, A2, A3, A4, A5, A6>, S = <1, 2, 3, 4, 5, 6>, F = <3, 6, 4, 5, 7, 9>

Solution: First of all, sort all activities by their finishing time. The following chart shows the time line of all activities.

Let us now check the feasible set of activities.

A1 is already selected, so S = <A1>

f1 > s2, so A1 and A2 are not compatible. Check the next activity.

f1 ≤ s3, so A1 and A3 are compatible. Schedule A3, S = <A1, A3>

f3 ≤ s4, so A3 and A4 are compatible. Schedule A4, S = <A1, A3, A4>

f4 ≤ s5, so A4 and A5 are compatible. Schedule A5, S = <A1, A3, A4, A5>

f5 > s6, so A5 and A6 are not compatible. And there is no more activity left to check. Hence the final schedule is S = <A1, A3, A4, A5>

Example: Given the following data, determine the optimal schedule for activity selection using the greedy algorithm.
A = <A1, A2, A3, A4, A5, A6, A7, A8>, S = <1, 2, 3, 4, 5, 6, 7, 8>, F = <4, 3, 7, 5, 6, 8, 10, 9>

Solution: First of all, sort all activities by their finishing time.

A2 is already selected, so S = <A2>

f2 > s1, so A1 and A2 are not compatible. Check the next activity.

f2 ≤ s4, so A2 and A4 are compatible. Schedule A4, S = <A2, A4>

f4 ≤ s5, so A4 and A5 are compatible. Schedule A5, S = <A2, A4, A5>

f5 > s3, so A3 and A5 are not compatible. Check the next activity.

f5 ≤ s6, so A5 and A6 are compatible. Schedule A6, S = <A2, A4, A5, A6>

f6 ≤ s8, so A6 and A8 are compatible. Schedule A8, S = <A2, A4, A5, A6, A8>

f8 > s7, so A8 and A7 are not compatible. And there is no more activity left to check. So the final schedule is S = <A2, A4, A5, A6, A8>

Greedy algorithms are used to find an optimal or near-optimal solution to many real-life problems. A few of them are listed below:
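The greedy procedure described above fits in a few lines of code. A minimal Python sketch, using the first worked example as a test:

```python
def select_activities(start, finish):
    """Greedy activity selection: pick a maximum set of mutually compatible activities."""
    if not start:
        return []
    # Sort activity indices by finishing time (the O(n log n) step).
    order = sorted(range(len(start)), key=lambda i: finish[i])
    chosen = [order[0]]
    last_finish = finish[order[0]]
    for i in order[1:]:
        if start[i] >= last_finish:  # compatible with the last scheduled activity
            chosen.append(i)
            last_finish = finish[i]
    return chosen

# First worked example above (0-based indices, labels A1..A6):
S = [1, 2, 3, 4, 5, 6]
F = [3, 6, 4, 5, 7, 9]
print(["A%d" % (i + 1) for i in select_activities(S, F)])  # ['A1', 'A3', 'A4', 'A5']
```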
2022-11-30 21:17:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2395235002040863, "perplexity": 2200.939132628122}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00277.warc.gz"}
https://web2.0calc.com/questions/help-with-finding-variables
+0 # Help with finding variables 0 37 1 Hello, I'm confused about how to do this: Find A, B and C if $$\frac{A}{x-1}+\frac{B}{x-2}+\frac{C}{x-3}=\frac{2{x}^{2}-6x+6}{(x-1)(x-2)(x-3)}$$ Thanks for the help! Sep 8, 2020
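One way to check an answer here, rather than a full worked solution: let a CAS do the partial-fraction decomposition. SymPy's `apart` gives A = 1, B = -2, C = 3 for this right-hand side, which is easy to confirm by recombining the fractions.

```python
from sympy import symbols, apart, together, simplify

x = symbols('x')
rhs = (2*x**2 - 6*x + 6) / ((x - 1)*(x - 2)*(x - 3))

# Partial-fraction decomposition: 1/(x - 1) - 2/(x - 2) + 3/(x - 3)
print(apart(rhs))

# Recombining A=1, B=-2, C=3 gives back the original fraction (difference simplifies to 0).
lhs = 1/(x - 1) - 2/(x - 2) + 3/(x - 3)
print(simplify(together(lhs) - rhs))
```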
2020-09-29 01:50:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9878745675086975, "perplexity": 1896.454779926381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401617641.86/warc/CC-MAIN-20200928234043-20200929024043-00447.warc.gz"}
http://openstudy.com/updates/4dcde795dfd28b0b23340bfd
## anonymous 5 years ago How do I find the equation of a line having the given slope of m=5/6 and the given point (8,-8). I am completely confused and any help would be greatly appreciated.

1. anonymous Think about what the slope means. It means that the equation has to start with something like y = mx + some number. Now you have the given point, (8,-8); that means that at the point x=8, y=-8. Put these values into the equation and work out that "some number". Yell if something is not clear.

2. anonymous That is one clear way. Also don't forget the almighty 'point-slope' formula, which says that a line through $(x_1,y_1)$ with slope $m$ has the equation $y-y_1=m(x-x_1)$ where of course $x_1, y_1$ are numbers and x and y are the variables in your answer. So you can write $y-(-8)=\frac{5}{6}(x-8)$ and solve for y.
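Carrying the point-slope form through to the end, $y-(-8)=\frac{5}{6}(x-8)$ solves to $y=\frac{5}{6}x-\frac{44}{3}$; a short SymPy check:

```python
from sympy import symbols, Rational, Eq, solve

x, y = symbols('x y')
m = Rational(5, 6)

# Point-slope form through (8, -8) with slope 5/6, solved for y.
line = solve(Eq(y - (-8), m*(x - 8)), y)[0]
print(line)             # 5*x/6 - 44/3
print(line.subs(x, 8))  # -8, so the line really passes through (8, -8)
```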
2016-10-24 10:49:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6824299693107605, "perplexity": 344.25259558676987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719564.4/warc/CC-MAIN-20161020183839-00532-ip-10-171-6-4.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1127976/olympic-number-theory-problem-is-this-solution-fine-and-sufficiently-well-writt
# Olympic number theory problem: is this solution fine and sufficiently well written? Determine all positive integers $m$ such that the ratios $$\frac{2(5^m+5)}{3^m+1}\quad\text{and}\quad \frac{9^m+1}{5^m+5}$$ are both integers. Attempt at a solution: If the ratios are both integers, then their product is also an integer; this means that $$\frac{2(5^m+5)}{3^m+1}\cdot \frac{9^m+1}{5^m+5}=\frac{2(3^{2m}+1)}{3^m+1}$$ is also an integer. Let $x=3^m$. We find the values of $x$ such that $$\frac{2(x^2+1)}{x+1}$$ is an integer. Using euclidean division, we find that $$2(x^2+1)=(2x-2)(x+1)+4.$$ Dividing both sides by $x+1$, we find that $$\frac{2(x^2+1)}{x+1}=2x-2+\frac{4}{x+1}.$$ For the LHS to be an integer, the RHS must be an integer as well. Obviously $2x-2$ is an integer, and therefore $4/(x+1)$ must be an integer as well; that is, $x+1$ divides $4$. The only possible values of $x+1$ are therefore $4, 2, 1, -1, -2$ and $-4$. But the only acceptable value of $x$ is $3$, since it can't be negative or zero and must be divisible by three, being a positive power of three. In conclusion, $3^m=3 \implies m=1$. We verify that $m=1$ is indeed a solution, hence the only one. I'm quite positive about the correctness of the solution, but I'd like to hear (constructive!) criticism about the way I wrote it. When I write a solution I always feel like I'm not properly justifying all the steps. • My stylistic recommendation is to skip the part about "euclidean division." Simple polynomial long division will take you directly to $$\frac{2(x^2+1)}{x+1} = 2x-2 + \frac{4}{x+1}$$ and this step does not need further explanation. – heropup Jan 31 '15 at 18:09 • Another stylistic comment: Once you conclude that $x+1 \in \{\pm 1, \pm 2, \pm 4\}$, then you can write $$3^m \in \{-5, -3, -2, 0, 1, 3\},$$ hence if $m$ is to be a positive integer, the only admissible solution is $m = 1$. – heropup Jan 31 '15 at 18:12 • Alternatively to the above, you can also first establish that if $m \in \mathbb Z^+$, it follows that the minimum value of $x = 3^m$ is $x \ge 3$. – heropup Jan 31 '15 at 18:13 • A typo: you have "we verify that $m=3$ is indeed a solution". – Joffan Feb 1 '15 at 11:33 • As far as other stylistic comments go, your writing is clear and concise, and this is what is important. I wouldn't overly concern myself with trading one set of symbols for another set that someone views as slightly better. – RghtHndSd Feb 1 '15 at 15:28
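A quick brute-force check (not a proof, just a sanity check of the conclusion) confirms that $m=1$ is the only exponent in a reasonable range for which both ratios are integers:

```python
def both_integers(m):
    # The two ratios from the problem statement, tested for exact divisibility.
    a_num, a_den = 2 * (5**m + 5), 3**m + 1
    b_num, b_den = 9**m + 1, 5**m + 5
    return a_num % a_den == 0 and b_num % b_den == 0

print([m for m in range(1, 201) if both_integers(m)])  # [1]
```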
2020-10-30 08:08:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7789020538330078, "perplexity": 166.1337963194281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107909746.93/warc/CC-MAIN-20201030063319-20201030093319-00421.warc.gz"}
https://math.stackexchange.com/questions/4225652/difference-of-f-mathbbr2-rightarrow-mathbbr2-and-as-f-mathbbc-r
# Difference between $f:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ and $f: \mathbb{C} \rightarrow \mathbb{C}$. I want to know the difference between differentiation of $$f: \mathbb{R}^2 \rightarrow \mathbb{R}^2$$ and of $$f: \mathbb{C} \rightarrow \mathbb{C}$$. What is the difference between differentiating $$f$$ as a function of two real variables and differentiating $$f$$ as a complex function? This question arose when I took the YouTube lectures by "Richard E. Borcherds" on complex analysis.

First, the treatment in real analysis: in multivariable calculus, when we set $$f(x,y)$$, its total derivative is written as \begin{align} df =f_x dx + f_y dy \end{align} where $$f_x, f_y$$ are the partial derivatives with respect to $$x,y$$. Formally, we say that a function $$f: \mathbb{R}^2 \rightarrow \mathbb{R}^2$$ is differentiable at $$a \in \mathbb{R}^2$$ if there exists a continuous linear map $$\nabla f(a) : \mathbb{R}^2 \rightarrow \mathbb{R}^2$$ such that \begin{align} \lim_{h \rightarrow 0} \frac{f(a+h) - f(a) - \nabla f(a) \cdot h}{\|h\|} =0 \end{align} So in multivariable calculus we have to check whether the function has partial derivatives (or directional derivatives) and then check that the above limit holds. [In calculus we learn that a function can have partial derivatives yet fail to be differentiable, e.g. $$f(x,y) = \frac{xy}{\sqrt{x^2+y^2}}$$ for $$(x,y) \neq (0,0)$$ and $$0$$ at $$(x,y)=(0,0)$$.]

In complex analysis, we treat $$f: \mathbb{R}^2 \rightarrow \mathbb{R}^2$$ or $$f: \mathbb{C} \rightarrow \mathbb{C}$$, define complex derivatives analogously to real derivatives, and obtain the Cauchy-Riemann equations. For example, with $$w=u+iv$$, \begin{align} \begin{pmatrix} u(x,y) \\ v(x,y) \end{pmatrix} = \begin{pmatrix} u(x_0, y_0) \\ v(x_0, y_0) \end{pmatrix} + \begin{pmatrix} \frac{\partial u}{\partial x} & \frac{\partial u}{\partial y} \\ \frac{\partial v}{\partial x} & \frac{\partial v}{\partial y} \end{pmatrix} \begin{pmatrix} x-x_0 \\ y-y_0 \end{pmatrix} + \epsilon \end{align} and writing $$w$$ as \begin{align} w=w_0 + A (z-z_0) + \epsilon, \quad A \in \mathbb{C} \end{align} [This is Borcherds' treatment of differentiation as a linear approximation. As in the real case, he treats $$w$$ as an element of $$\mathbb{C}$$ and does the linear approximation on $$\mathbb{C}$$.] Then, identifying the components of $$A$$, he obtains the Cauchy-Riemann equations. In the complex case I feel Borcherds treats differentiation with respect to $$x,y$$ and with respect to $$z$$ on an equal footing, but in the general case those two approaches are different, am I right? For example, in complex analysis, differentiability on some open region (analyticity) implies $$C^{\infty}$$, but I know that in multivariable calculus this need not happen.

What is the difference between differentiating $$f$$ as a function of two real variables and differentiating $$f$$ as a complex function? $$\mathbb{C}$$ is literally $$\mathbb{R}^2$$ with an additional vector multiplication. The complex $$i$$ is simply $$(0,1)$$. For $$a,b\in\mathbb{R}$$ we can then easily check that $$a+bi$$ is the same as $$(a,b)\in\mathbb{C}$$. And so a function $$f:\mathbb{C}\to\mathbb{C}$$ is literally the same as a function $$f:\mathbb{R}^2\to\mathbb{R}^2$$. But complex and real differentiation are (somewhat) different. For starters, their respective definitions are obviously different. But every complex differentiable function $$f:\mathbb{C}\to\mathbb{C}$$ is real differentiable.
Moreover, if $$f(a+bi)=u(a+bi)+iv(a+bi)$$, where $$u,v:\mathbb{C}\to\mathbb{R}$$ are real valued functions and $$f$$ is complex differentiable, then $$\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}$$ $$\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$$ which are known as the Cauchy-Riemann equations. It turns out that this is also a sufficient condition for $$f$$ to be complex differentiable, given that both $$u,v$$ are continuously real differentiable. In such a situation both derivatives agree in the following sense: since $$\nabla f(a):\mathbb{R}^2\to\mathbb{R}^2$$ is a linear map, it corresponds to a real $$2\times 2$$ matrix, which then corresponds to a complex number, since in this situation our matrix has the specific form $$\left[\begin{matrix}\alpha & -\beta \\ \beta & \alpha\end{matrix}\right]$$. The complex number $$\alpha+\beta i$$ is our complex derivative at $$a$$, and vice versa. And so you can think of complex differentiation as a very special case of real differentiation. In fact those two simple equations make complex analysis much more restrictive than its real counterpart. For example, as you said: for complex differentiation $$C^1$$ already implies $$C^\infty$$ (smooth) and even $$C^\omega$$ (analytic). Another difference is that every bounded complex differentiable function must be constant (Liouville's theorem). Even more: a non-constant complex differentiable function takes every possible complex value except at most one (little Picard theorem), and so on.

The derivative of a function $$\mathbb R^2 \to \mathbb R^2$$ is a $$2\times2$$ matrix. The complex derivative of a function $$\mathbb C \to \mathbb C$$ is a complex number. Applying the $$2\times2$$ matrix derivative is analogous to multiplying by the complex derivative. If $$a + bi$$ is a complex number, then multiplying the complex number $$x + yi$$ by $$a + bi$$ sends it to $$(ax - by) + (ay + bx)i$$. That means that the complex number can itself be thought of as the following $$2\times2$$ matrix. $$\begin{pmatrix} a & -b \\ b & a \end{pmatrix}$$ This leads us to the following definition of complex differentiability. A function $$f \colon \mathbb C \to \mathbb C$$ is complex differentiable at a point $$z$$ if it is differentiable at $$z$$ when considered as a function $$\mathbb R^2 \to \mathbb R^2$$ and if its derivative at $$z$$ is a matrix taking the form $$\begin{pmatrix} a & -b \\ b & a \end{pmatrix}\,.$$ Therefore: any complex differentiable function is differentiable in the real sense, but not every function that is differentiable in the real sense is complex differentiable, since not every $$2\times 2$$ matrix takes the form given above. Now, since the coefficients of the derivative of a function $$f = \langle u,v\rangle$$ are precisely the partial derivatives $$\begin{pmatrix} \frac{\partial u}{\partial x} & \frac{\partial u}{\partial y} \\ \frac{\partial v}{\partial x} & \frac{\partial v}{\partial y} \end{pmatrix}$$ this statement is precisely the Cauchy-Riemann equations.

If $$f\colon\Bbb C\longrightarrow\Bbb C$$ is differentiable, then it is also differentiable as a map from $$\Bbb R^2$$ into $$\Bbb R^2$$.
Besides, if $$f'(z_0)=c$$, then, if you see $$f$$ as a map from $$\Bbb R^2$$ into $$\Bbb R^2$$, if $$z_0=x_0+y_0i$$, and if $$c=a+bi$$ (with $$x_0,y_0,a,b\in\Bbb R$$), then $$f'(x_0,y_0)$$ is the linear map whose matrix with respect to the standard basis is$$\begin{bmatrix}a&-b\\b&a\end{bmatrix}.\tag1$$That's why, in general, if $$f\colon\Bbb R^2\longrightarrow\Bbb R^2$$ is differentiable, then it is not differentiable as a map from $$\Bbb C$$ into $$\Bbb C$$; in general, the Jacobian of $$f$$ at a point $$(x_0,y_0)$$ is not of the form $$(1)$$. But if it is (and if $$f$$ is a $$C^1$$ function), then $$f$$ will actually be differentiable at $$x_0+y_0i$$, and $$f'(x_0+y_0i)=a+bi$$.
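A small numerical experiment makes the matrix picture concrete: approximate the Jacobian of $f$ viewed as a map $\mathbb{R}^2\to\mathbb{R}^2$ by finite differences and test whether it has the $\begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ shape, i.e. whether the Cauchy-Riemann equations hold. Here $f(z)=z^2$ passes and $f(z)=\bar z$ fails, as expected:

```python
import numpy as np

def jacobian(f, z, h=1e-6):
    """Finite-difference Jacobian of f seen as a map R^2 -> R^2, at z = x + iy."""
    x, y = z.real, z.imag
    fx = (f(complex(x + h, y)) - f(complex(x - h, y))) / (2 * h)  # partial d/dx
    fy = (f(complex(x, y + h)) - f(complex(x, y - h))) / (2 * h)  # partial d/dy
    return np.array([[fx.real, fy.real],
                     [fx.imag, fy.imag]])

def satisfies_cauchy_riemann(J, tol=1e-4):
    # J = [[u_x, u_y], [v_x, v_y]]; CR: u_x = v_y and u_y = -v_x
    return abs(J[0, 0] - J[1, 1]) < tol and abs(J[0, 1] + J[1, 0]) < tol

z0 = complex(1.3, -0.7)
print(satisfies_cauchy_riemann(jacobian(lambda z: z**2, z0)))           # True
print(satisfies_cauchy_riemann(jacobian(lambda z: z.conjugate(), z0)))  # False
```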
2022-01-17 17:39:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 98, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9961749315261841, "perplexity": 182.20811771724053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300574.19/warc/CC-MAIN-20220117151834-20220117181834-00091.warc.gz"}
https://www.ademcetinkaya.com/2023/02/ufi-unifi-inc-new-common-stock.html
Outlook: Unifi Inc. New Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Time series to forecast n: 03 Feb 2023 for (n+3 month) Methodology : Modular Neural Network (Financial Sentiment Analysis) ## Abstract Unifi Inc. New Common Stock prediction model is evaluated with Modular Neural Network (Financial Sentiment Analysis) and Beta1,2,3,4 and it is concluded that the UFI stock is predictable in the short/long term. According to price forecasts for (n+3 month) period, the dominant strategy among neural network is: Buy ## Key Points 2. What are the most successful trading algorithms? 3. Trust metric by Neural Network ## UFI Target Price Prediction Modeling Methodology We consider Unifi Inc. New Common Stock Decision Process with Modular Neural Network (Financial Sentiment Analysis) where A is the set of discrete actions of UFI stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Beta)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Modular Neural Network (Financial Sentiment Analysis)) X S(n):→ (n+3 month) $R=\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right)$ n:Time series to forecast p:Price signals of UFI stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## UFI Stock Forecast (Buy or Sell) for (n+3 month) Sample Set: Neural Network Stock/Index: UFI Unifi Inc. New Common Stock Time series to forecast n: 03 Feb 2023 for (n+3 month) According to price forecasts for (n+3 month) period, the dominant strategy among neural network is: Buy X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for Unifi Inc. New Common Stock 1. If a put option written by an entity prevents a transferred asset from being derecognised and the entity measures the transferred asset at fair value, the associated liability is measured at the option exercise price plus the time value of the option. The measurement of the asset at fair value is limited to the lower of the fair value and the option exercise price because the entity has no right to increases in the fair value of the transferred asset above the exercise price of the option. This ensures that the net carrying amount of the asset and the associated liability is the fair value of the put option obligation. For example, if the fair value of the underlying asset is CU120, the option exercise price is CU100 and the time value of the option is CU5, the carrying amount of the associated liability is CU105 (CU100 + CU5) and the carrying amount of the asset is CU100 (in this case the option exercise price). 2. Adjusting the hedge ratio by decreasing the volume of the hedging instrument does not affect how the changes in the value of the hedged item are measured. 
The measurement of the changes in the fair value of the hedging instrument related to the volume that continues to be designated also remains unaffected. However, from the date of rebalancing, the volume by which the hedging instrument was decreased is no longer part of the hedging relationship. For example, if an entity originally hedged the price risk of a commodity using a derivative volume of 100 tonnes as the hedging instrument and reduces that volume by 10 tonnes on rebalancing, a nominal amount of 90 tonnes of the hedging instrument volume would remain (see paragraph B6.5.16 for the consequences for the derivative volume (ie the 10 tonnes) that is no longer a part of the hedging relationship). 3. An entity's estimate of expected credit losses on loan commitments shall be consistent with its expectations of drawdowns on that loan commitment, ie it shall consider the expected portion of the loan commitment that will be drawn down within 12 months of the reporting date when estimating 12-month expected credit losses, and the expected portion of the loan commitment that will be drawn down over the expected life of the loan commitment when estimating lifetime expected credit losses. 4. Because the hedge accounting model is based on a general notion of offset between gains and losses on the hedging instrument and the hedged item, hedge effectiveness is determined not only by the economic relationship between those items (ie the changes in their underlyings) but also by the effect of credit risk on the value of both the hedging instrument and the hedged item. The effect of credit risk means that even if there is an economic relationship between the hedging instrument and the hedged item, the level of offset might become erratic. This can result from a change in the credit risk of either the hedging instrument or the hedged item that is of such a magnitude that the credit risk dominates the value changes that result from the economic relationship (ie the effect of the changes in the underlyings). A level of magnitude that gives rise to dominance is one that would result in the loss (or gain) from credit risk frustrating the effect of changes in the underlyings on the value of the hedging instrument or the hedged item, even if those changes were significant. *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions Unifi Inc. New Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Unifi Inc. New Common Stock prediction model is evaluated with Modular Neural Network (Financial Sentiment Analysis) and Beta1,2,3,4 and it is concluded that the UFI stock is predictable in the short/long term. According to price forecasts for (n+3 month) period, the dominant strategy among neural network is: Buy ### UFI Unifi Inc. New Common Stock Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementCaa2Ba3 Balance SheetB1Baa2 Leverage RatiosBaa2C Cash FlowCBaa2 Rates of Return and ProfitabilityBaa2Baa2 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. 
It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 81 out of 100 with 551 signals. ## References 1. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., MO Stock Price Prediction. AC Investment Research Journal, 101(3). 2. V. Borkar. A sensitivity formula for the risk-sensitive cost and the actor-critic algorithm. Systems & Control Letters, 44:339–346, 2001 3. Wu X, Kumar V, Quinlan JR, Ghosh J, Yang Q, et al. 2008. Top 10 algorithms in data mining. Knowl. Inform. Syst. 14:1–37 4. L. Busoniu, R. Babuska, and B. D. Schutter. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions of Systems, Man, and Cybernetics Part C: Applications and Reviews, 38(2), 2008. 5. Vapnik V. 2013. The Nature of Statistical Learning Theory. Berlin: Springer 6. Hoerl AE, Kennard RW. 1970. Ridge regression: biased estimation for nonorthogonal problems. Technometrics 12:55–67 7. T. Morimura, M. Sugiyama, M. Kashima, H. Hachiya, and T. Tanaka. Nonparametric return distribution ap- proximation for reinforcement learning. In Proceedings of the 27th International Conference on Machine Learning, pages 799–806, 2010 Frequently Asked QuestionsQ: What is the prediction methodology for UFI stock? A: UFI stock prediction methodology: We evaluate the prediction models Modular Neural Network (Financial Sentiment Analysis) and Beta Q: Is UFI stock a buy or sell? A: The dominant strategy among neural network is to Buy UFI Stock. Q: Is Unifi Inc. New Common Stock stock a good investment? A: The consensus rating for Unifi Inc. New Common Stock is Buy and is assigned short-term Ba1 & long-term Ba1 estimated rating. Q: What is the consensus rating of UFI stock? A: The consensus rating for UFI is Buy. Q: What is the prediction period for UFI stock? A: The prediction period for UFI is (n+3 month)
2023-04-02 01:59:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5998270511627197, "perplexity": 4621.768965236114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00516.warc.gz"}
https://www.biostars.org/p/461396/
Reference sequence for miRNA-Seq differential expression analysis 0 0 Entering edit mode 13 months ago fawazfebin ▴ 100 Hi I am doing a differential expression analysis on miRSeq data. Since the reference miRNA sequence was not available in miRBase, I need to download them from a different database, PmiREN. I am confused on which miRNA reference sequences are to be downloaded to perform mapping using mirDeep2. Here is the list of reference sequences available: Mature_miRNA_expression Mature_miRNA_seqequence miRNA_stem-loop_expression miRNA_stem-loop_secondary_structure miRNA_stem-loop_sequence miRNA_stem-loop_with_20bp_flanking_secondary_sequence miRNA_stem-loop_with_20bp_flanking_secondary_structure Star_miRNA_expression Star_miRNA_sequence Syntenic_block_info miRNA-Seq differential expression miRDeep2 • 263 views
2021-10-22 01:03:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37842482328414917, "perplexity": 6300.990068404146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00598.warc.gz"}
https://blog.csdn.net/weixin_34306446/article/details/91507328
It's a shame! Rsync is an old, stable, field-tested, nice, quick and versatile tool for backups. And if you had bad luck, like I did, and just cut and pasted the first example found on the Internet, and that example didn't behave the way I thought it was supposed to, you are lost, because all the solutions I found on the first page of Google sound more like voodoo than a real and reasonable explanation. And because I write this blog in order to never run into the same issues again, I decided to write down this little solution for my rsync problem, so I can easily look it up one fine day when I need it again. And I really hope this little article will be rated higher at Google, at least higher than the other solutions. This would definitely save people's time. Here we go. There are at least two known reasons for the "cannot delete non-empty directory" problem:

### The Backup Problem

You've got two directories. One is called SOURCE and the other is called BACKUP. Now you want to sync these two directories, e.g. via a cron-job. This cron-job is the bad one, which I used the first time:

# m h dom mon dow command
0 2 * * * /usr/bin/rsync -avb --delete /SOURCE/ /BACKUP >/dev/null 2>&1

The first day this cron-job was running, all was fine, and I didn't see anything suspicious. The next day I deleted a few files in SOURCE, but to my surprise, the files hadn't been deleted in BACKUP. Instead the deleted files had simply been renamed by adding a suffix to the filename. This suffix was a tilde ("~"). So since these directories weren't empty, they couldn't be deleted, no matter what switch I tried with rsync. Using the "--dry-run" switch, I could see which directories couldn't be deleted:

/usr/bin/rsync -avb --delete /SOURCE/ /BACKUP --dry-run

I got a bunch of cannot delete non-empty directory messages.

The problem is the "-b" or "--backup" switch. If you specify the backup option, you tell rsync not to delete the files, but to back them up. This is done with the previously mentioned suffix, the tilde. And this whole matter happens in place, in the very directory where the files would otherwise have been deleted. There are two solutions actually. The first one is not to use the backup switch at all. You can even specify the "--prune-empty-dirs", or short "-m", switch to ensure empty folders are deleted as well.

/usr/bin/rsync -avm --delete /SOURCE/ /BACKUP --dry-run

Or you really intend to use the --backup switch, but then please also specify a backup directory for it, into which all these deleted files are moved. This switch is called "--backup-dir=". If you don't specify it, don't expect rsync to delete the affected directories: the "~" backups keep them non-empty, and rsync cannot delete non-empty directories. It really doesn't matter whether you additionally specify "--force" or not; in combination with "--delete" it is doubly pointless.

### The Exclude Problem

The second problem wasn't figured out by me, but by this blog article, referring to this article here. It describes a deadlock between the switches listed right below, even without using the notorious backup switch (-b) at all:

• --delete
• --exclude <exclude pattern>
• --ignore-existing

Here, too, the --force switch has no effect. The writer points out that an additional

• --delete-excluded
2020-09-26 11:10:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.587170422077179, "perplexity": 1270.2531893007367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400241093.64/warc/CC-MAIN-20200926102645-20200926132645-00121.warc.gz"}
https://en.wikipedia.org/wiki/Multigraph
# Multigraph

"Pseudograph" redirects here. It is not to be confused with Pseudepigraph. A multigraph with multiple edges (red) and several loops (blue). Not all authors allow multigraphs to have loops.

In mathematics, and more specifically in graph theory, a multigraph is a graph which is permitted to have multiple edges (also called parallel edges[1]), that is, edges that have the same end nodes. Thus two vertices may be connected by more than one edge. There are two distinct notions of multiple edges:

• Edges without own identity: The identity of an edge is defined solely by the two nodes it connects. In this case, the term "multiple edges" means that the same edge can occur several times between these two nodes.
• Edges with own identity: Edges are primitive entities just like nodes. When multiple edges connect two nodes, these are different edges.

A multigraph is different from a hypergraph, which is a graph in which an edge can connect any number of nodes, not just two. For some authors, the terms pseudograph and multigraph are synonymous. For others, a pseudograph is a multigraph with loops.

## Undirected multigraph (edges without own identity)

A multigraph G is an ordered pair G:=(V, E) with

• V a set of vertices or nodes,
• E a multiset of unordered pairs of vertices, called edges or lines.

## Undirected multigraph (edges with own identity)

A multigraph G is an ordered triple G:=(V, E, r) with

• V a set of vertices or nodes,
• E a set of edges or lines,
• r : E → {{x,y} : x, y ∈ V}, assigning to each edge an unordered pair of endpoint nodes.

Some authors allow multigraphs to have loops, that is, an edge that connects a vertex to itself,[2] while others call these pseudographs, reserving the term multigraph for the case with no loops.[3]

## Directed multigraph (edges without own identity)

A multidigraph is a directed graph which is permitted to have multiple arcs, i.e., arcs with the same source and target nodes. A multidigraph G is an ordered pair G:=(V,A) with

• V a set of vertices or nodes,
• A a multiset of ordered pairs of vertices called directed edges, arcs or arrows.

A mixed multigraph G:=(V,E, A) may be defined in the same way as a mixed graph.

## Directed multigraph (edges with own identity)

A multidigraph or quiver G is an ordered 4-tuple G:=(V, A, s, t) with

• V a set of vertices or nodes,
• A a set of edges or lines,
• ${\displaystyle s:A\rightarrow V}$, assigning to each edge its source node,
• ${\displaystyle t:A\rightarrow V}$, assigning to each edge its target node.

This notion might be used to model the possible flight connections offered by an airline. In this case the multigraph would be a directed graph with pairs of directed parallel edges connecting cities to show that it is possible to fly both to and from these locations. In category theory a small category can be defined as a multidigraph (with edges having their own identity) equipped with an associative composition law and a distinguished self-loop at each vertex serving as the left and right identity for composition. For this reason, in category theory the term graph is standardly taken to mean "multidigraph", and the underlying multidigraph of a category is called its underlying digraph.

## Labeling

Multigraphs and multidigraphs also support the notion of graph labeling, in a similar way. However there is no unity in terminology in this case. The definitions of labeled multigraphs and labeled multidigraphs are similar, and we define only the latter ones here.
Definition 1: A labeled multidigraph is a labeled graph with labeled arcs. Formally: A labeled multidigraph G is a multigraph with labeled vertices and arcs. Formally it is an 8-tuple ${\displaystyle G=(\Sigma _{V},\Sigma _{A},V,A,s,t,\ell _{V},\ell _{A})}$ where • V is a set of vertices and A is a set of arcs. • ${\displaystyle \Sigma _{V}}$ and ${\displaystyle \Sigma _{A}}$ are finite alphabets of the available vertex and arc labels, • ${\displaystyle s\colon A\rightarrow \ V}$ and ${\displaystyle t\colon A\rightarrow \ V}$ are two maps indicating the source and target vertex of an arc, • ${\displaystyle \ell _{V}\colon V\rightarrow \Sigma _{V}}$ and ${\displaystyle \ell _{A}\colon A\rightarrow \Sigma _{A}}$ are two maps describing the labeling of the vertices and arcs. Definition 2: A labeled multidigraph is a labeled graph with multiple labeled arcs, i.e. arcs with the same end vertices and the same arc label (note that this notion of a labeled graph is different from the notion given by the article graph labeling).
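The "edges with own identity" point of view maps naturally onto a data structure in which each arc is keyed by its own identifier. A minimal Python sketch, using the airline example mentioned above (the class and field names are illustrative only):

```python
from dataclasses import dataclass, field

@dataclass
class LabeledMultidigraph:
    """Arcs have their own identity, so parallel arcs and loops are allowed."""
    vertex_labels: dict = field(default_factory=dict)  # vertex -> label
    arcs: dict = field(default_factory=dict)           # arc id -> (source, target, label)

    def add_vertex(self, v, label=None):
        self.vertex_labels[v] = label

    def add_arc(self, arc_id, source, target, label=None):
        self.arcs[arc_id] = (source, target, label)

# Two parallel flights between the same pair of cities, plus a loop.
g = LabeledMultidigraph()
for city in ("BER", "OSL"):
    g.add_vertex(city)
g.add_arc("LH100", "BER", "OSL", label="morning")
g.add_arc("LH102", "BER", "OSL", label="evening")  # parallel arc: same endpoints, own identity
g.add_arc("SCENIC1", "BER", "BER", label="loop")
print(len(g.arcs))  # 3 distinct arcs even though only two endpoint pairs occur
```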
2016-10-01 05:10:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.572147786617279, "perplexity": 816.5019510752902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662527.91/warc/CC-MAIN-20160924173742-00189-ip-10-143-35-109.ec2.internal.warc.gz"}
https://computergraphics.stackexchange.com/questions/10177/how-to-calculate-position-from-depth-pass-of-the-zed-camera
How to calculate position from depth pass of the ZED camera? I'm using the ZED 2 camera, and although the API provides a means of getting a point cloud position from a specific pixel, for my project I need to be able to perform this calculation myself from the depth pass alone. According to the documentation: "Depth maps captured by the ZED store a distance value (Z) for each pixel (X, Y) in the image. The distance is expressed in metric units (meters for example) and calculated from the back of the left eye of the camera to the scene object." So my question is, given a pixel index (therefore UV value) and a depth value, how can I work out the world location of the pixel/point? I suspect this has something to do with inverting a matrix (camera? projection?), and I have the following calibration values for the left sensor of the camera: fx: 1057.1 fy: 1056.71 cx: 979.01 cy: 531.934 k1: -0.0412 k2: 0.0095 k3: -0.0047 p1: -0.0005 p2: -0.0002 But matrices are far from my strong suit so I'm struggling to understand how to turn the above information into a matrix which I could use to extrapolate the position. Can anyone help me?
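For the record, the usual pinhole-camera back-projection (which is what such depth maps are normally combined with, though the sketch below is not the official ZED formula and it ignores the distortion coefficients k1, k2, k3, p1, p2) computes X = (u - cx)·Z/fx and Y = (v - cy)·Z/fy in the camera frame; getting world coordinates additionally needs the camera pose. A Python sketch using the calibration values quoted above:

```python
import numpy as np

# Left-sensor intrinsics from the question (distortion ignored for simplicity).
fx, fy = 1057.1, 1056.71
cx, cy = 979.01, 531.934

def unproject(u, v, depth):
    """Pixel (u, v) plus metric depth Z -> 3D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def camera_to_world(point_cam, R, t):
    """Apply the camera pose (rotation R, translation t) to get world coordinates."""
    return R @ point_cam + t

# Example: the principal point at 2 m depth lies on the optical axis.
print(unproject(979.01, 531.934, 2.0))  # [0. 0. 2.]
```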
2020-10-25 08:57:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3394676446914673, "perplexity": 865.201873866369}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107888402.81/warc/CC-MAIN-20201025070924-20201025100924-00364.warc.gz"}
https://publications.mfo.de/handle/mfo/1326?show=full
dc.contributor.author Detinko, Alla dc.contributor.author Flannery, Dane dc.contributor.author Hulpke, Alexander dc.date.accessioned 2017-11-27T11:28:09Z dc.date.available 2017-11-27T11:28:09Z dc.date.issued 2017-10-28 dc.identifier.uri http://publications.mfo.de/handle/mfo/1326 dc.description Research in Pairs 2017 en_US dc.description.abstract We give a method to describe all congruence images of a finitely generated Zariski dense group $H\leq SL(n, \mathbb{R})$. The method is applied to obtain efficient algorithms for solving this problem in odd prime degree $n$; if $n=2$ then we compute all congruence images only modulo primes. We propose a separate method that works for all $n$ as long as $H$ contains a known transvection. The algorithms have been implemented in ${\sf GAP}$, enabling computer experiments with important classes of linear groups that have recently emerged. en_US dc.language.iso en_US en_US dc.publisher Mathematisches Forschungsinstitut Oberwolfach en_US dc.relation.ispartofseries Oberwolfach Preprints;2017,31 dc.rights Attribution-ShareAlike 4.0 International * dc.rights.uri http://creativecommons.org/licenses/by-sa/4.0/ * dc.title Experimenting with Zariski Dense Subgroups en_US dc.type Preprint en_US dc.identifier.doi 10.14760/OWP-2017-31 local.scientificprogram Research in Pairs 2017 en_US local.series.id OWP-2017-31 local.subject.msc 20 local.subject.msc 68  ### This item appears in the following Collection(s) Except where otherwise noted, this item's license is described as Attribution-ShareAlike 4.0 International
2018-09-21 11:24:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38891562819480896, "perplexity": 9021.881812529664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157070.28/warc/CC-MAIN-20180921112207-20180921132607-00119.warc.gz"}
https://crypto.stackexchange.com/questions/87680/different-inputs-but-able-to-generate-consistent-outputs-across-different-sha-en
# Different inputs but able to generate consistent outputs across different SHA engines

Say I'm feeding a few thousand bits of data (INPUT AAAA) into both a SHA-256 and a SHA3-256 engine at the same time. (The two engines use different hashing architectures.) Each will therefore generate its own 256-bit output; let's say SHA-256 generates ABCD while SHA3-256 generates EFGH. I'm curious whether we can later find an alternative input (INPUT BBBB) that generates the same hash outputs as above (SHA-256 generates ABCD & SHA3-256 generates EFGH).

• Sounds to me like you're after "collision resistance". Additionally, you seem to use SHA2 and SHA3 in the same program to achieve domain separation, so I would like to note that it's perfectly possible to achieve domain separation while using the same hash function! E.g. SHA3(0 || INPUT AAAA) will be different from SHA3(1 || INPUT AAAA) – Ruben De Smet Jan 21 at 8:54
• @RubenDeSmet, if I interpret you correctly, you mean SHA2 can be out of the picture since the single SHA3 engine can achieve the domain separation by applying some kind of different seed numbers? – Pi-Turn Jan 21 at 9:04
• That's pretty much what I am saying indeed. FWIW, SHA2 does not necessarily need to be out of the picture (it's usually a bit faster than SHA3 in software), but there's no good reason to use both of them. – Ruben De Smet Jan 21 at 9:56
• Domain separation, and it can be achieved with a fixed string at the beginning, like SHA-3 does with its suffix – kelalaka Jan 21 at 10:37

In short, this will not be possible, even if we only use one secure hash function rather than two. You seem to be describing a sort of dual second-preimage attack where we need to find a second input that clashes with the first over two separate hash functions at once. A secure hash function will be resistant to such attacks. As such, for either SHA2 or SHA3, it will not be possible to find another input that makes a desired output. I believe that even SHA1 is only weak in terms of collision resistance. See here for further details.

• It's only a second pre-image attack if the input 1 "AAAA" is fixed (which is not entirely clear from the question). However, collision resistance still applies (for SHA-2 and -3 at least). – SEJPM Jan 21 at 9:14
• @SEJPM Thank you. I see what you mean, the wording of finding an alternative input later suggested to me that the initial input was fixed. – Modal Nest Jan 21 at 9:21
• @ModalNest, "A secure hash function will be resistant to such attacks. As such for either SHA2 & SHA3, it will not be possible to find another input that makes a desired output.", I still don't get why this is not possible, because the input size is always much larger than the output size. If your input is 100,000 bits long, the number of possible inputs is easily larger than the 2^256 possible outputs. So with simple thinking there should be overlap... – Pi-Turn Jan 21 at 11:22
• @Pi-Turn The size of the input doesn't really matter. It's the mindboggling size of 256 bits (in the case of preimage). Using some quick JS (so maybe wrong), a 4GHz processor left running would take a number of years 54 digits long, assuming it ran one million hashes per clock cycle. Or 4000000000000000 hashes per second. – Modal Nest Jan 21 at 11:51
• @Pi-Turn There is an overlap I suppose in the sense that if you stored all 10k bit permutations, there would have to be $n$ collisions. However it's not feasible to do that.
Even storing a single bit to represent every permutation of 256 bits would require more storage space than we have on earth. – Modal Nest Jan 21 at 14:58
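The domain-separation idea from the comments is easy to see with Python's hashlib: one hash function, two fixed prefixes, two unrelated-looking outputs.

```python
import hashlib

msg = b"INPUT AAAA"

# Same hash function, two "domains": prepend a distinct, fixed prefix byte.
d0 = hashlib.sha3_256(b"\x00" + msg).hexdigest()
d1 = hashlib.sha3_256(b"\x01" + msg).hexdigest()
print(d0 == d1)  # False: one function, two independent-looking outputs

# The question's setup, for comparison: two different functions on the same input.
print(hashlib.sha256(msg).hexdigest()[:16])
print(hashlib.sha3_256(msg).hexdigest()[:16])
```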
2021-04-10 18:59:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44656574726104736, "perplexity": 1399.8750162007138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057476.6/warc/CC-MAIN-20210410181215-20210410211215-00067.warc.gz"}
http://mathhelpforum.com/advanced-applied-math/112745-numerical-integration.html
1. ## Numerical integration Hi, I need to compute the following integral: integrate the function 1-normcdf(t,1,3) over the interval [0,0.4]. I tried a command ending in 't',0,0.4), but it does not work. Does anyone know how to compute it? 2. Originally Posted by guvenc Hi, I need to compute the following integral: integrate the function 1-normcdf(t,1,3) over the interval [0,0.4].
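Since no working command survives in the thread, here is a hedged sketch of the computation in Python with SciPy; it assumes the MATLAB-style reading of normcdf(t,1,3) as a normal CDF with mean 1 and standard deviation 3.

```python
# Sketch: numerically integrate 1 - normcdf(t, 1, 3) over [0, 0.4].
from scipy.integrate import quad
from scipy.stats import norm

def integrand(t):
    # 1 minus the CDF of a normal distribution with mean 1, std. dev. 3
    return 1.0 - norm.cdf(t, loc=1.0, scale=3.0)

value, abs_error = quad(integrand, 0.0, 0.4)
print(value, abs_error)  # value is around 0.24 for these parameters
```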
2016-09-27 16:41:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.949526846408844, "perplexity": 3757.647535447983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661123.53/warc/CC-MAIN-20160924173741-00024-ip-10-143-35-109.ec2.internal.warc.gz"}
http://en.citizendium.org/wiki?title=User:Richard_Pinch/Articles&diff=100429915&oldid=prev
Citizendium - a community developing a quality comprehensive compendium of knowledge, online and free. Click here to join and contribute—free CZ thanks AUGUST 2014 donors; special to Darren Duncan. SEPTEMBER 2014 donations open; need minimum total \$100. Let's exceed that. Donate here. Treasurer's Financial Report -- Thanks to August content contributors. -- # User:Richard Pinch/Articles (Difference between revisions) Revision as of 12:17, 1 January 2009 (view source) (→Articles I have started: added Order (ring theory))← Older edit Revision as of 18:36, 1 January 2009 (view source) (→Articles I have started: added Integral closure)Newer edit → Line 100: Line 100: {{rpl|Incentre}} {{rpl|Incentre}} {{rpl|Injective function}} {{rpl|Injective function}} + {{rpl|Integral closure}} {{rpl|Interior (topology)}} {{rpl|Interior (topology)}} {{rpl|Intersection}} {{rpl|Intersection}} ## Articles I have started •  Absorbing element: An element whose behaviour with respect to an algebraic binary operation is like that of zero with respect to multiplication. [e] •  Albert algebra: An exceptional Jordan algebra, consisting of 3×3 self-adjoint matrices over the octonions. [e] •  Algebra over a field: A ring containing an isomorphic copy of a given field in its centre. [e] •  Algebraic independence: The property of elements of an extension field which satisfy only the trivial polynomial relation. [e] •  Algebraic number field: A field extension of the rational numbers of finite degree; a principal object of study in algebraic number theory. [e] •  Alternant code: A class of parameterised error-correcting codes which generalise the BCH codes. [e] •  Altitude (geometry): In a triangle, a line from a vertex perpendicular to the opposite side. [e] •  Arithmetic function: A function defined on the set of positive integers, usually with integer, real or complex values, studied in number theory. [e] •  Artin-Schreier polynomial‎: A type of polynomial whose roots generate extensions of degree p in characteristic p. [e] •  Associativity: A property of an algebraic operation such as multiplication: a(bc) = (ab)c. [e] •  Automorphism: An isomorphism of an algebraic structure with itself: a permutation of the underlying set which respects all algebraic operations. [e] •  Average order of an arithmetic function‎: A simple or well-known function, usually continuous and montonic, which on average takes the same or closely approximate values as a given arithmetic function. [e] •  Baer-Specker group‎: An example of an infinite Abelian group which is a building block in the structure theory of such groups. [e] •  Barycentre: The centre of mass of a body or system of particles, a weighted average where certain forces may be taken to act. [e] •  Barycentric coordinates‎: The weights that would have to be assigned to a system of reference points to yield a given position as barycentre are used as coordinates. [e] •  Binary operation: A function of two elements within a set, which assigns another value from among the elements of the set. [e] •  Brun-Titchmarsh theorem: An upper bound on the distribution on primes in an arithmetic progression. [e] •  Cameron-Erdos conjecture‎: Add brief definition or description •  Cartesian product: The set of ordered pairs whose elements come from two given sets. [e] •  Centraliser: The set of all group elements which commute with every element of a given subset. [e] •  Centre of a group: The subgroup of a group consisting of all elements which commute with every element of the group. 
[e] •  Centre of a ring: The subring of a ring consisting of all elements which commute with every element of the ring. [e] •  Cevian line: A line from the vertex of a triangle to some point on the opposite edge. [e] •  Chain rule: A rule in calculus for differentiating a function of a function. [e] •  Characteristic function: A function on a set which takes the value 1 on a given subset and 0 on its complement. [e] •  Characteristic polynomial: The polynomial attached to a square matrix or endomorphism det(A-XI)=0. [e] • Circumcentre: The centre of the circle that goes through the vertices of a triangle or a cyclic polygon. [e] •  Closure operator: An idempotent unary operator on subsets of a given set, mapping a set to a larger set with a particular property. [e] •  Cocountable topology: The topology on a space in which the open sets are those with countable complements, or the empty set. [e] •  Cofactor (mathematics): A component of a matrix computation of the determinant; a signed determinant of a matrix minor. [e] •  Cofinite topology: The topology on a space in which the open sets are those with finite complement, or the empty set. [e] •  Commutativity: A property of a binary operation (such as addition or multiplication), that the two operands may be interchanged without affecting the result. [e] •  Commutator: A measure of how close two elements of a group are to commuting. [e] •  Compactification: A compact space in which a given topological space can be embedded as a dense subset. [e] •  Compactness axioms: Properties of a toplogical space related to compactness. [e] •  Complement (linear algebra)‎: A pair of subspaces which form an (internal) direct sum. [e] •  Complement (set theory)‎: The set containing those elements of a set (or "universal" set) which are not contained in a given set. [e] •  Complex conjugation: The operation on complex numbers which changes the sign of the imaginary part, x+iyx-iy [e] •  Conductor of an abelian variety: A measure of the nature of the bad reduction at some prime. [e] •  Congruent triangles: In Euclidean geometry, triangles which can be superposed by a rigid motion. [e] •  Conjugation (group theory)‎: The elements of any group that may be partitioned into conjugacy classes. [e] •  Connected space: A topological space in which there is no non-trivial subset which is both open and closed. [e] •  Content (algebra): The highest common factor of the coefficients of a polynomial. [e] •  Continuant (mathematics): An algebraic expression which has applications in generalized continued fractions and as the determinant of a tridiagonal matrix. [e] • Convolution: Add brief definition or description •  Coprime: Integers, or more generally elements of a ring, which have no non-trivial common factor. [e] •  Countability axioms in topology: Properties that a topological space may satisfy which refer to the countability of certain structures within the space. [e] •  Cubic reciprocity: Various results connecting the solvability of two related cubic equations in modular arithmetic, generalising the concept of quadratic reciprocity. [e] •  Cyclic group: A group consisting of the powers of a single element. [e] •  Cyclic polygon: A polygon whose vertices lie on a single circle. [e] •  Cyclotomic field: An algebraic number field generated over the rational numbers by roots of unity. [e] •  Cyclotomic polynomial: A polynomial whose roots are primitive roots of unity. 
[e] •  Delta form: A modular form arising from the discriminant of an elliptic curve: a cusp form of weight 12 and level 1 for the full modular group and a Hecke eigenform. [e] •  Derivation (mathematics): A map defined on a ring which behaves formally like differentiation: D(x.y)=D(x).y+x.D(y). [e] •  Diagonal matrix: A square matrix which has zero entries off the main diagonal. [e] •  Different ideal: An invariant attached to an extension of algebraic number fields which encodes ramification data. [e] •  Differential ring: A ring with added structure which generalises the concept of derivative. [e] •  Dirichlet series: An infinite series whose terms involve successive positive integers raised to powers of a variable, typically with integer, real or complex coefficients. [e] •  Discrete metric: The metric on a space which assigns distance one to any distinct points, inducing the discrete topology. [e] •  Discrete space: A topological space with the discrete topology, in which every subset is open (and also closed). [e] •  Discriminant of a polynomial: An invariant of a polynomial which vanishes if it has a repeated root: the product of the differences between the roots. [e] •  Discriminant of an algebraic number field: An invariant attached to an extension of algebraic number fields which describes the geometric structure of the ring of integers and encodes ramification data. [e] •  Disjoint union: A set containing a copy of each of a family of two or more sets, so that the copies do not overlap. [e] •  Distributivity: A relation between two binary operations on a set generalising that of multiplication to addition: a(b+c)=ab+ac. [e] •  Division (arithmetic): The process of determing how many copies of one quantity are required to make up another; repeated subtraction; the inverse operation to multiplication. [e] •  Division ring: (or skew field), In algebra it is a ring in which every non-zero element is invertible. [e] •  Divisor (algebraic geometry): A formal sum of subvarieties of an algebraic variety. [e] •  Door space: A topological space in which each subset is open or closed. [e] •  Dowker space: A topological space that is T4 but not countably paracompact. [e] • Empty set: In set theory, this is a set without elements, usually denoted $\{~\}$ or $\empty$. The empty set is a subset of any set. [e] •  End (topology): For a topological space this generalises the notion of "point at infinity" of the real line or plane. [e] •  Equivalence relation: A reflexive symmetric transitive binary relation on a set. [e] •  Erdos-Fuchs theorem: A statement about the number of ways that numbers can be represented as a sum of two elements of a given set. [e] •  Error function: A function associated with the cumulative distribution function of the normal distribution. [e] •  Essential subgroup: A subgroup of a group which has non-trivial intersection with every other non-trivial subgroup. [e] •  Exact sequence: A sequence of algebraic objects and morphisms which is used to describe or analyse algebraic structure. [e] •  Factorial: The number of ways of arranging n labeled objects in order; the product of the first n integers. [e] •  Field automorphism: An invertible function from a field onto itself which respects the field operations of addition and multiplication. [e] •  Filter (mathematics)‎: A family of subsets of a given set which has properties generalising the notion of "almost all natural numbers". [e] •  Frattini subgroup: The intersection of all maximal subgroups of a group. 
[e] •  Free group: A group in which there is a generating set such that every element of the group can be written uniquely as the product of generators. [e] •  Frobenius map: The p-th power map considered as acting on commutative algebras or fields of prime characteristic p. [e] •  Function composition: The successive application of two functions. [e] •  Functional equation: A relation between the values of a function at different points, such as periodicity or symmetry. [e] •  Generic point: A point of a topological space which is not contained in any proper closed subset; a point satisfying no special properties. [e] •  Genus field: The maximal absolutely abelian unramified extension of a number field. [e] •  Group action: A way of describing symmetries of objects using groups. [e] •  Group homomorphism: A map between group which preserves the group structure. [e] •  Group isomorphism problem: The decision problem of determining whether two group presentations present isomorphic groups. [e] •  Hall polynomial: The structure constants of Hall algebra. [e] •  Hall-Littlewood polynomial‎: Symmetric functions depending on a parameter t and a partition λ. [e] •  Heine–Borel theorem: In Euclidean space of finite dimension with the usual topology, a subset is compact if and only if it is closed and bounded. [e] •  Hutchinson operator: A collection of functions on an underlying space. [e] •  Idempotence: The property of an operation that repeated application has no effect. [e] •  Idempotent element: An element or operator for which repeated application has no further effect. [e] •  Identity element: An element whose behaviour with respect to a binary operation generalises that of zero for addition or one for multiplication. [e] •  Identity function: The function from a set to itself which maps each element to itself. [e] •  Identity matrix: A square matrix with ones on the main diagonal and zeroes elsewhere: the identity element for matrix multiplication. [e] •  Incentre: The centre of the incircle, a circle which is within a triangle and tangent to its three sides. [e] •  Injective function: A function which has different output values on different input values. [e] •  Integral closure: The ring of elements of an extension of a ring which satisfy a monic polynomial over the base ring. [e] •  Interior (topology): The union of all open sets contained within a given subset of a topological space. [e] •  Intersection: The set of elements that are contained in all of a given family of two or more sets. [e] •  Isolated singularity: A point at which function of a complex variable is not holomorphic, but which has a neighbourhood on which the function is holomorphic. [e] •  Jordan's totient function: A generalisation of Euler's totient function. [e] •  Justesen code: A class of error-correcting codes which are derived from Reed-Solomon codes and have good error-control properties. [e] •  KANT: A computer algebra system for mathematicians interested in algebraic number theory. [e] •  Kernel of a function: The equivalence relation on the domain of a function defined by elements having the same function value: the partition of the domain into fibres of a function. [e] •  Kronecker delta: A quantity depending on two subscripts which is equal to one when they are equal and zero when they are unequal. [e] •  Krull dimension: In a ring, one less than the length of a maximal ascending chain of prime ideals. [e] •  Lambda function: The exponent of the multiplicative group modulo an integer. 
[e] •  Lattice (geometry): A discrete subgroup of a real vector space. [e] •  Limit point: A point which cannot be separated from a given subset of a topological space; all neighbourhoods of the points intersect the set. [e] •  Littlewood polynomial: A polynomial all of whose coefficients are plus or minus 1. [e] •  Manin obstruction: A measure of the failure of the Hasse principle for geometric objects. [e] •  Median algebra: A set with a ternary operation satisfying a set of axioms which generalise the notion of median or majority function, as a Boolean function. [e] •  Minimal polynomial: The monic polynomial of least degree which a square matrix or endomorphism satisfies. [e] •  Möbius function‎: Arithmetic function which takes the values -1, 0 or +1 depending on the prime factorisation of its input n. [e] •  Modulus (algebraic number theory)‎: A formal product of places of an algebraic number field, used to encode ramification data for abelian extensions of a number field. [e] •  Monogenic field: An algebraic number field for which the ring of integers is a polynomial ring. [e] •  Monoid: An algebraic structure with an associative binary operation and an identity element. [e] •  Monotonic function: A function on an ordered set which preserves the order. [e] •  Moore determinant: A determinant defined over a finite field which has successive powers of the Frobenius automorphism applied to the first column. [e] •  Morita conjectures: Three conjectures in topology relating to normal spaces, now proved. [e] •  Neighbourhood: Add brief definition or description • Nine-point centre: Add brief definition or description •  Noetherian module: Module in which every ascending sequence of submodules has only a finite number of distinct members. [e] •  Normal extension: A field extension which contains all the roots of an irreducible polynomial if it contains one such root. [e] •  Normal number: A real number whose digits in some particular base occur equally often in the long run. [e] •  Normal order of an arithmetic function‎: A simple or well-known function, usually continuous and montonic, which "usually" takes the same or closely approximate values as a given arithmetic function. [e] •  Normaliser: The elements of a group which map a given subgroup to itself by conjugation. [e] •  Nowhere dense set: A set in a topological space whose closure has empty interior. [e] • Null set: Add brief definition or description •  Number of divisors function: The number of positive integer divisors of a given number. [e] •  Number Theory Foundation: A non-profit organisation based in the United States which supports research and conferences in the field of number theory. [e] •  Order (group theory): For a group, its cardinality; for an element of a group, the least positive integer (if one exists) such that raising the element to that power gives the identity. [e] •  Order (relation): An irreflexive antisymmetric transitive binary relation on a set. [e] •  Order (ring theory): A ring which is finitely generated as a Z-module. [e] •  Ordered field: A field with a total order which is compatible with the algebraic operations. [e] •  Ordered pair: Two objects in which order is important. [e] •  p-adic metric: A metric on the rationals in which numbers are close to zero if they are divisible by a large power of a given prime p. 
[e] •  Partition (mathematics): Concepts in mathematics which refer either to a partition of a set or an ordered partition of a set, or a partition of an integer, or a partition of an interval. [e] •  Partition function (number theory): The number of additive partitions of a positive integer. [e] •  Pedal triangle: Triangle whose vertices are located at the feet of the perpendiculars from some given point to the sides of a specified triangle. [e] •  Pointwise operation: Method of extending an operation defined on an algebraic struture to a set of functions taking values in that structure. [e] •  Pole (complex analysis): A type of singularity of a function of a complex variable where it behaves like a negative power. [e] •  Power set: The set of all subsets of a given set. [e] •  Preparata code: A class of non-linear double-error-correcting codes. [e] •  Primitive root: A generator of the multiplicative group in modular arithmetic when that group is cyclic. [e] •  Product topology: Topology on a product of topological spaces whose open sets are constructed from cartesian products of open sets from the individual spaces. [e] •  Quadratic field: A field which is an extension of its prime field of degree two. [e] •  Quadratic residue: A number which is the residue of a square integer with respect to a given modulus. [e] •  Quotient topology: The finest topology on the image set that makes a surjective map from a topological space continuous. [e] •  Relation (mathematics): A property which holds between certain elements of some set or sets. [e] •  Relation composition: Formation of a new relation S o R from two given relations R and S, having as its most well-known special case the composition of functions. [e] •  Removable singularity: A singularity of a complex function which can be removed by redefining the function value at that point. [e] •  Residual property (mathematics): A concept in group theory on recovered element properties. [e] •  Resolution (algebra): An exact sequence which is used to describe the structure of a module. [e] •  Resultant (algebra): An invariant which determines whether or not two polynomials have a factor in common. [e] •  Resultant (statics): A single force having the same effect as a system of forces acting at different points. [e] •  Rigid motion: A transformation which preserves the geometrical properties of the Euclidean spacea distance-preserving mapping or isometry. [e] •  Ring homomorphism: Function between two rings which respects the operations of addition and multiplication. [e] •  Root of unity: An algebraic quantity some power of which is equal to one. [e] •  S-unit: An element of an algebraic number field which has a denominator confined to primes in some fixed set. [e] •  Selberg sieve: A technique for estimating the size of "sifted sets" of positive integers which satisfy a set of conditions which are expressed by congruences. [e] •  Semigroup: An algebraic structure with an associative binary operation. [e] •  Separation axioms: Axioms for a topological space which specify how well separated points and closed sets are by open sets. [e] •  Series (group theory): A chain of subgroups of a group linearly ordered by subset inclusion. [e] •  Singleton set: A set with exactly one element. [e] •  Sober space: A topological space in which every irreducible closed set has a unique generic point. [e] •  Srivastava code: A class of parameterised error-correcting codes which are a special case of alternant codes. 
[e] •  Stably free module: A module which is close to being free: the direct sum with some free module is free. [e] •  Stirling number: Coefficients which occur in the Stirling interpolation formula for a difference operator. [e] •  Subgroup: A subset of a group which is itself a group with respect to the group operations. [e] •  Subspace topology: An assignment of open sets to a subset of a topological space. [e] •  Sum-of-divisors function: The function whose value is the sum of all positive divisors of a given positive integer. [e] •  Surjective function: A function for which every possible output value occurs for one or more input values: the image is the whole of the codomain. [e] •  Sylow subgroup: A subgroup of a finite group whose order is the largest possible power of one of the primes factors of the group order. [e] •  Symmetric difference: The set of elements that lie in exactly one of two sets. [e] •  Szpiro's conjecture: A relationship between the conductor and the discriminant of an elliptic curve. [e] •  Tau function: An arithmetic function studied by Ramanjuan, the coefficients of the q-series expansion of the modular form Delta. [e] •  Theta function: An analytic function which is a modular form of weight one-half; more generally, the generating function for a quadratic form. [e] •  Totient function: The number of integers less than or equal to and coprime to a given integer. [e] •  Transitive relation: A relation with the property that if x→y and y→z then x→z. [e] •  Turan sieve: A technique for estimating the size of "sifted sets" of positive integers which satisfy a set of conditions which are expressed by congruences. [e] •  Tutte matrix: A matrix used to determine the existence of a perfect matching in a graph: that is, a set of edges which is incident with each vertex exactly once. [e] •  Weierstrass preparation theorem: A description of a canonical form for formal power series over a complete local ring. [e] •  Zero matrix: A matrix consisting entirely of zero entries. [e] •  Zipf distribution: Observation that states that, in a population consisting of many different types, the proportion belonging to the nth most common type is approximately proportional to 1/n. [e]
2014-09-18 03:51:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7958385944366455, "perplexity": 933.8389009672618}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657125488.38/warc/CC-MAIN-20140914011205-00145-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://2021.help.altair.com/2021.1/winprop/topics/winprop/user_guide/wallman_tuman/wallman_introduction/coordinate_systems_winprop.htm
# Coordinate Systems WallMan requires all databases to be in a metric coordinate system (for example, UTM). This is important for various computations, such as the free-space loss. If a topographical database is to be used together with the building database, both databases must be in UTM format.
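As an illustration only (WallMan itself is configured through its own tools, and this snippet is not part of the product documentation), geographic coordinates can be converted to metric UTM values before a database is built, for example with the pyproj package; the UTM zone EPSG:32632 and the sample point are arbitrary choices.

```python
# Convert WGS84 latitude/longitude to metric UTM coordinates (zone 32N here).
from pyproj import Transformer

to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32632", always_xy=True)

lon, lat = 9.1829, 48.7758  # example point (Stuttgart area), WGS84
easting, northing = to_utm.transform(lon, lat)
print(f"UTM 32N: E={easting:.1f} m, N={northing:.1f} m")
```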
2022-09-30 16:05:02
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8879048228263855, "perplexity": 2141.8389445728726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00341.warc.gz"}
https://tex.stackexchange.com/questions/192476/manual-citation-of-specific-references-with-biblatex-chicago
Manual citation of specific references with biblatex-chicago How can I tell Biber that I have just manually cited a specific citation? This question is not about \mancite. In biblatex-chicago there is a command called \mancite which allows you to signal to the ibid-tracker that something has been cited without a \cite{...} (or similar) command, so that \cite[30]{smith:ref}. And see \emph{Reference X}. does not produce John Smith, Reference (Place: Publisher, Year), 30. And see Reference X. See also ibid., 31. but rather John Smith, Reference (Place: Publisher, Year), 30. And see Reference X. See also Smith, Reference 1, 31. What I have not been able to figure out is how to signal that a specific reference is being cited (for example, for the first time) but that I am manually formatting this particular citation of it. I would like to do this so that 1. subsequent citations of the same reference are in the short format; and 2. the reference appears in the bibliography even if I do not cite it again. In other words, suppose I define a reference in my .bib file which is: @book{CMAG:alchemy, Editor = {Bidez, Joseph}, Shorthand = {\emph{CMAG}}, Title = {Catalogue des manuscrits alchimiques grecs}, Volumes = {8}, Year = {1924--1932}} What I would like is to be able to write something like: This is my own way of referring to the CMAG for the first time, p. 10. \MANUALCITATION{CMAG:alchemy} [...] See \cite[11]{CMAG:alchemy}. and have it result in: This is my own way of referring to the CMAG for the first time. […] See CMAG, vol. 2, p. 11. This is my own way of referring to the CMAG for the first time. … See Joseph Bidez, ed., Catalogue des manuscrits alchimiques grecs, 8 vols. (Brussels, 1924–1932), vol. 2, p. 11 (hereafter cited as CMAG). You can make up a "fake" citation command that does nothing (hardly anything) but will still make biblatex think it cited the key. Just define this nice command \blindcite \DeclareCiteCommand{\blindcite}{\unspace}{}{}{} I had to add the \unspace macro to get rid of an unwanted space. MWE \documentclass{article} \usepackage{filecontents} \usepackage[notes]{biblatex-chicago} \begin{filecontents*}{\jobname.bib} @book{CMAG:alchemy, Editor = {Bidez, Joseph}, Shorthand = {\emph{CMAG}}, Title = {Catalogue des manuscrits alchimiques grecs}, Volumes = {8}, Year = {1924--1932}} \end{filecontents*} • Thank you! This is great. For the purposes of ibid tracking (when there is no Shorthand), it seemed preferable to include \mancite as well, so that I don't end up with a confusing "ibid." coming next. I get the desired result by writing \blindcite{...}\mancite, but it doesn't work when I try to fold it into the command declaration, as in \DeclareCiteCommand{\blindcite}{\unspace\mancite}{}{}{}. Is there a simple way to do this? – Alex Roberts Jul 22 '14 at 8:33 • @AlexRoberts Try \DeclareCiteCommand{\blindcite}{\unspace}{}{}{\mancite} – moewe Jul 22 '14 at 8:44
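The MWE above is cut off after the filecontents block. Purely as a hedged completion (my reconstruction, not the original answer), a compilable document that combines the accepted \blindcite definition with the \mancite placement suggested in the comments might look like this; \addbibresource and the body text are my additions.

```latex
\documentclass{article}
\usepackage{filecontents}
\usepackage[notes]{biblatex-chicago}

\begin{filecontents*}{\jobname.bib}
@book{CMAG:alchemy,
  Editor    = {Bidez, Joseph},
  Shorthand = {\emph{CMAG}},
  Title     = {Catalogue des manuscrits alchimiques grecs},
  Volumes   = {8},
  Year      = {1924--1932}}
\end{filecontents*}

% Marks the entry as cited (and informs the ibid tracker) without printing anything.
\DeclareCiteCommand{\blindcite}{\unspace}{}{}{\mancite}
\addbibresource{\jobname.bib}

\begin{document}
This is my own way of referring to the \emph{CMAG} for the first
time, p.~10.\blindcite{CMAG:alchemy}

Later on: see \cite[11]{CMAG:alchemy}.% should now come out in short form

\printbibliography
\end{document}
```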
2019-07-24 06:58:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8308594226837158, "perplexity": 2772.1842358655767}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195531106.93/warc/CC-MAIN-20190724061728-20190724083728-00150.warc.gz"}
https://www.general-relativity.net/2021/08/mtw-ex-85-sheet-of-paper-in-polar.html
## Monday, 30 August 2021

### Question

The two-dimensional metric for a flat sheet of paper in polar coordinates $\left(r,\theta\right)$ is$${ds}^2={dr}^2+r^2{d\phi}^2$$or in modern notation$$\mathbf{g}=\mathbf{d}r\otimes\mathbf{d}r+r^2\mathbf{d}\phi\otimes\mathbf{d}\phi$$Presumably the coordinates are $\left(r,\phi\right)$ not $\left(r,\theta\right)$.

(a) Calculate the connection coefficients using 8.24. (b) Write down the geodesic equation in $\left(r,\phi\right)$ coordinates. (c) Solve these equations for $r\left(\lambda\right)$ and $\phi\left(\lambda\right)$ and show that the solution is a uniformly parameterized straight line. ($x\equiv r\cos{\phi}=a\lambda+b$ for some $a$ and $b$, $y\equiv r\sin{\phi}=j\lambda+k$ for some $j$ and $k$). (d) Verify that the noncoordinate basis $\mathbf{e}_{\hat{r}}\equiv\mathbf{e}_r=\frac{\partial\mathcal{P}}{\partial r},\ \mathbf{e}_{\hat{\phi}}\equiv r^{-1}\mathbf{e}_\phi=r^{-1}\frac{\partial\mathcal{P}}{\partial\phi},\ \ \mathbf{\omega}^r=\mathbf{d}r,\ \mathbf{\omega}^{\hat{\phi}}=r\mathbf{d}\phi$ is orthonormal, and that $\left<\mathbf{\omega}^\alpha,\mathbf{e}_{\hat{\beta}}\right>=\delta_{\ \ \hat{\beta}}^{\hat{\alpha}}$. Then calculate the connection coefficients of this basis from a knowledge [part (a)] of the connection of the coordinate basis.

I think 1) there are hats missing from the omega indices and 2) 'modern notation' might not be very modern. I find it surprising that such an old, respected book has so many misprints.

Parts (a), (b) and (c) were straightforward. Part (d) contained the surprises. The $\left(\hat{r},\hat{\phi}\right)$ system (as we might call it) was orthonormal. The $\left(r,\phi\right)$ system was not; it was only orthogonal. The connection coefficients of the $\left(\hat{r},\hat{\phi}\right)$ system are not all symmetric in the lower two indices: $\Gamma_{\hat{\phi}\hat{r}}^{\hat{\phi}}=0\neq\Gamma_{\hat{r}\hat{\phi}}^{\hat{\phi}}=\frac{1}{r}$, which we prove. The method of calculating the coefficients is a great exercise in the piercing counter $\left<,\right>$.

${\hat{e}}_r,{\hat{e}}_\phi$ form a noncoordinate basis because, if we use them, the same point can have different coordinates. We show that this is true. $e_r,e_\phi$ do not suffer from this problem. Additionally we calculate the commutators (or Lie derivatives) $\left[e_r,e_\phi\right]=\left[\partial_r,\partial_\phi\right]$ and $\left[{\hat{e}}_r,{\hat{e}}_\phi\right]$. The first vanishes; the second does not. This is a proof that the first is a coordinate (holonomic) basis and the second a noncoordinate (anholonomic) basis, where you can't use coordinates. So you can't say $\left[{\hat{e}}_r,{\hat{e}}_\phi\right]=\left[\partial_{\hat{r}},\partial_{\hat{\phi}}\right]$, even though you can use the indices as in $\Gamma_{\hat{\phi}\hat{r}}^{\hat{\phi}}=0$.

Tiptoe through noncoordinate minefield at 8.5 Exercise Plane polar coordinates.pdf (14 pages)
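As a quick cross-check on part (a) (my own addition, not taken from the linked PDF), the coordinate-basis Christoffel symbols of this metric can be computed symbolically from the standard formula $\Gamma_{bc}^a=\frac{1}{2}g^{ad}\left(\partial_bg_{dc}+\partial_cg_{db}-\partial_dg_{bc}\right)$:

```python
# Connection coefficients of ds^2 = dr^2 + r^2 dphi^2 in the coordinate basis.
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
coords = [r, phi]
g = sp.Matrix([[1, 0], [0, r**2]])   # metric components g_{ij}
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[a, d] * (
            sp.diff(g[d, c], coords[b]) +
            sp.diff(g[d, b], coords[c]) -
            sp.diff(g[b, c], coords[d]))
        for d in range(2)))

print(christoffel(1, 0, 1))  # Gamma^phi_{r phi} = 1/r
print(christoffel(0, 1, 1))  # Gamma^r_{phi phi} = -r
```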
2023-02-08 03:29:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8022420406341553, "perplexity": 665.7972930097034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00657.warc.gz"}
https://zenodo.org/record/3373629/export/schemaorg_jsonld
Conference paper Open Access # Spectral Efficiency Maximization for Spatial Modulation Aided Layered Division Multiplexing: An Injection Level Optimization Perspective Yue Sun; Jintao Wang; Changyong Pan; Longzhuang He ### JSON-LD (schema.org) Export { "description": "<p>The layered division multiplexing (LDM) is recently combined with spatial modulation (SM) systems to provide a more efficiency way for broadcasting transmission. In SM aided LDM (SM-LDM) systems, the service of each layer, which is allocated with different power, is transmitted via SM scheme. In this paper, a gradient descent based iterative method is proposed to optimize the injection level, which can enhance the spectral efficiency (SE) in the two-layer SM-LDM systems with maximum ratio combining (MRC). In addition, the concavity analysis of this optimization problem is also conducted. Monte Carlo simulations are also provided to verify the effectiveness of our proposed injection level optimization method.</p>", "creator": [ { "affiliation": "Tsinghua University", "@type": "Person", "name": "Yue Sun" }, { "affiliation": "Tsinghua University", "@type": "Person", "name": "Jintao Wang" }, { "affiliation": "Tsinghua University", "@type": "Person", "name": "Changyong Pan" }, { "affiliation": "Tsinghua University", "@type": "Person", "name": "Longzhuang He" } ], "headline": "Spectral Efficiency Maximization for Spatial Modulation Aided Layered Division Multiplexing: An Injection Level Optimization Perspective", "citation": [ { "@id": "https://ieeexplore.ieee.org/document/8450494", "@type": "CreativeWork" } ], "datePublished": "2018-08-30", "url": "https://zenodo.org/record/3373629", "@context": "https://schema.org/", "identifier": "https://doi.org/10.1109/IWCMC.2018.8450494", "@id": "https://doi.org/10.1109/IWCMC.2018.8450494", "@type": "ScholarlyArticle", "name": "Spectral Efficiency Maximization for Spatial Modulation Aided Layered Division Multiplexing: An Injection Level Optimization Perspective" } 33 43 views
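If this export is consumed programmatically, the schema.org fields can be read with Python's standard json module; the sketch below is illustrative only, and the file name record.jsonld is an assumption.

```python
# Pull a few schema.org fields out of a saved copy of the JSON-LD record above.
import json

with open("record.jsonld", "r", encoding="utf-8") as fh:
    record = json.load(fh)

print(record["name"])                                   # title
print(record["identifier"])                             # DOI URL
print(", ".join(p["name"] for p in record["creator"]))  # authors
print(record["datePublished"])
```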
2021-05-10 23:19:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19383171200752258, "perplexity": 13641.35927707804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989749.3/warc/CC-MAIN-20210510204511-20210510234511-00061.warc.gz"}
https://brilliant.org/discussions/thread/june-newsletter/
Greetings from the moderation team! Here is the June Edition of the much awaited newsletter. As usual, it encapsulates the best of Brilliant over the past month. From the best posts to the most active members, it has it all. ## New Features to be Excited About: • Your performance at Brilliant can now get you a job in a leading company. Doesn’t sound credible? Then read this. • Did you ever feel that your solution isn’t the best way out? Or maybe wished that some mentor reviewed it? Brilliant has heard it all. You can submit a request to have your solution reviewed by Challenge Masters! • Guess what? Notifications just got better. In case of a new,unread notification you would now have a star on the tab on which Brilliant is open. So now, you need not switch between tabs just to check for notifications. ## Challenging Problems Over the past month the feed was flooded with a large number of problems across all topics and levels. This is not something new to the Brilliant community, but the strength of the flux is inspiring and motivating as always. Here are a few handpicked problems just in case you found the flow to be strong. These are some out there in the brain gym which would keep you working for quite some time. The Mystic Triangle , Seven Eight Nine , Don't try $$x = y = z$$ and Magnet falling in a copper tube. After all that workout here some easier ones. Warning: Neither would they leave you without having them a second thought. Aren't they the same thing?, Isn't it equal to 1?, Can you boil this water?, Triangle-ception and Do you know your basics - VIII. ## New and Active Members The ever-expanding community yet again witnessed incorporation of some new individuals who actively put in efforts and enhanced the experience of the community with their problems, solutions, wikis and comments having actively contributed them over the past four weeks or so. They deserve a word of recognition and the moderation team sincerely appreciates the efforts of Abyoso Hapsoro, Vijay Simha, Ankit Nigam, Dhaman Trivedi, Colin Carmody and Arian Tashakkor and hopes that they continue this active interaction with the community. ## Who To Follow Interestingly in this edition, each of the members who has made it into the WhoToFollow List is really great at one of the major topics out here. So no matter where your interests lie, you would find your idol in this list. PS: It seems we haven’t found our master logician yet and apparently the programmers had some time off recently. Do hit the follow button on their profiles. ## Popular Posts: A collection of discussions which are worth a dekko. • Ever wanted to know how one of those popular members are like other than being a math/physics whiz? Brilliant has begun featuring some of its popular members, one each month, from this May. Catch up from the first note in the series which features Jake Lai. • It has been 10 odd days since the JEE-2015 and this discussion encapsulates the reactions and performances of our fellow members who took the important examination this time. • Ever tried doing simple things the difficult way? This is how you try estimating $$\pi ^2$$ using Riemann Zeta Functions. • Thinking of taking JEE and don’t know from where to begin? Start here • Nihar just mixed up Vieta’s, polynomials and progressions. This is how the mixture tastes. This concludes the June edition of the Newsletter. Hope you all liked it ! 
Cheers, #moderation Note by Sudeep Salgia 2 years, 10 months ago Sort by: How the hell are these 14 year olds so good? I used to waste my time watching pokemon and beyblade when i was young.. - 2 years, 10 months ago Same here bro! But I wouldn't call it wasting my time . I actually enjoyed them ^_^ - 2 years, 10 months ago Yeah , I am enjoying them too :P - 2 years, 10 months ago Well, here I would love to add that you seem to be 17 years young as per your profile. You've got the infinite potential and you can create yourself so wonderful in the coming years that someone will start up commenting "how the hell is this 19 year young so Genius" . Don't mind. It's just for the sake of Hope. :P - 2 years, 10 months ago This applies to @Azhaghu Roopesh M too! :P - 2 years, 10 months ago I do both brilliant and watching random stuff on YouTube when I neglect my homework - 2 years, 10 months ago I do enjoy Pokemon and Beyblade, Chakar de Takkar! - 2 years, 10 months ago LOL! - 2 years, 10 months ago Exactly, I want to be 14 again. - 2 years, 10 months ago Same here, good fret-free old days... - 2 years, 10 months ago I think in New and Active Members , name of @Ikkyu San must also be included , since he is posting really very nice problems. - 2 years, 10 months ago Moderation cares for you. Don't miss the chance to go through it. - 2 years, 10 months ago Keep up the great work mods ! I just wanted to mention a name that got missed out : I think he's a genuine candidate for the post of Master Tactician . It's really really strange that his name hasn't been featured as of now , in any of the Who to follow lists . Just wanted to say , he is definitely worth a mention . - 2 years, 10 months ago Many thanks for mentioning me @Azhaghu Roopesh M . You are truly an awesome friend. But honestly, I don't mind it. I think, if I've got something to share with you all, then I'll do it for sure, no matter if I get mentioned for it somewhere or not. I apologize if I have heard anyone's sentiments. - 2 years, 10 months ago Thanks for including me in the list! $$\ddot \smile$$ - 2 years, 10 months ago Is 'Brilliantian' an official demonym? Because I really think an official demonym is needed. - 2 years, 10 months ago @Calvin Lin sir - 2 years, 10 months ago Thank you so much for including my name in the Who To Follow list, I am honoured!!! - 2 years, 10 months ago I followed you! - 2 years, 10 months ago The "have a challenge master read" feature does seem to have a small problem. I think the only time you see it is when just before you post an answer for the first time.
But very often it takes a lot of fiddling around before the posted answer is complete, and by that time, that option is gone. - 2 years, 10 months ago Go ahead and request for it, and then you can add a note at the top saying "Hey, I'm not done with this yet. Don't review this as yet". As a related example, sometimes I add "This is not a complete solution" at the top of my hint-solutions. Staff - 2 years, 10 months ago Thanks for mentioning my work on polynomials too! - 2 years, 10 months ago Comment deleted Jun 04, 2015 Seems like you have mesmerized the mods into mentioning your name :P - 2 years, 10 months ago Hahahahahahahah xD. - 2 years, 10 months ago Lol!!! :P - 2 years, 10 months ago Please try to improve the interface of the app also. - 2 years, 10 months ago
2018-04-26 21:03:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7701279520988464, "perplexity": 3691.5331543963803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948549.21/warc/CC-MAIN-20180426203132-20180426223132-00122.warc.gz"}
http://gyanraj.blogspot.com/
## Wednesday, March 11, 2009 ### Find the volume of the given rectangular glass plate using vernier calipers and screw gauge. Formulae: i) Least count of vernier calipers (L.C) = $\frac{S}{N}$ mm, S = value of 1 Main scale division , N = Number of vernier divisions. ii ) Total reading = Main scale reading (a) mm + ( n*L.C ) mm iii) Pitch of the screw = $\frac{Distance moved by sloped edge over the pitch scale}{Number of rotations of the screw}$ iv) Least count of Screw gauge (L.C) = $\frac{Pitch of the screw}{Number of divisions on Head scale}$ v) Total Reading = P.S.R +$n\times L.C$ , vi) Volume of the glass plate V = $l\times{b}\times{h}$ $mm^3$ l = length of the glass plate, b = breadth of glass plate , h = Thickness of glass plate. Procedure :First we have to determine the least count count of the given vernier calipers. From the given vernier calipers S= Length of Main scale division = 1 mm = 0.1 cm, N = Number of vernier scale divisions = 10 , Substitute these values in the formula of Least count L.C = $\frac{S}{N}$ = $\frac{0.1}{10}$ =0.01 cm. Draw neat diagram of Vernier calipers Part I : To determine the length ( l )and breadth (b) of the given glass plate with vernier calipers :The given glass plate is held between two jaws of vernier calipers, first to measure its length.Note down the values of the Main scale reading (M.S.R ) and vernier coincidence (VC) in Table-I, take 3set of readings by placing the glass plate in 3 different positions.Each time calculate the total reading by substituting the values of M.S.R and VC in the formula Total reading = M.S.R + ($VC\times L.C$. Find the average of 3readings and calculate Average Length ( l )of the given glass plate. Now hold the glass plate between jaws of vernier calipers breadth wise ,repeat the experiment as above , note down the 3 set of readings of M.S.R and VC in Table-II.Calculate average breadth (b) of the glass plate Part II: To determine thickness(h) of glass plate using Screw gauge:First we have to determine the least count of the given Screw gauge. Number of complete rotations of the screw = 5 Distance moved by sloped edge over the pitch scale = 5mm Pitch of the screw = $\frac{Distance moved by sloped edge over the pitch scale}{Number of rotations of the screw}$ = $\frac{5mm}{5}$ =1mm. Number of divisions on the head scale = 100 Least count (L.C) = $\frac{Pitch of the screw}{Number of divisions on Head scale}$ = $\frac{1mm}{100}$ =0.01mm Draw neat diagram of Screw Gauge Zero Error :Now check whether the given screw gauge has any ZERO ERROR or not. To determine the ZERO ERROR, the head H is rotated until the flat end of the screw $S_2$ touches the plane surface of the stud $S_1$ (do not apply excess pressure) i.e we have to rotate the head only by means of safety device ‘D’ only. When $S_1$ and $S_2$ are in contact,the zero of the head scale perfectly coincides with the index line as in Fig-(a). In such case there will be no ZERO ERROR and no correction is required. When $S_1$ and $S_2$ are in contact,the zero of the head scale is below the index line as in Fig(b), such ZERO ERROR is called positive ZERO ERROR, and the correction is negative. When $S_1$ and $S_2$ are in contact,the zero of the head scale is above the index line as in Fig(c) , such ZERO ERROR is called negative ZERO ERROR, and the correction is positive. 
When $S_1$ and $S_2$ are in contact, the 98th division of the head scale coincides with the index line, i.e. the zero of the head scale is 2 divisions below the index line as in Fig(b); such a ZERO ERROR is called a positive ZERO ERROR, and the correction is negative. The zero correction for the given screw gauge = -2.

The given glass plate is held between the two parallel surfaces of the fixed stud $S_1$ and the screw tip $S_2$. Note the completed number of divisions on the pitch scale, which is called the PITCH SCALE READING (P.S.R). The number of the head scale division coinciding with the index line is noted, which is called the OBSERVED HEAD SCALE READING n'. If the given screw gauge has a ZERO ERROR (x), the correction is made by adding or subtracting the ZERO ERROR (x) from the OBSERVED HEAD SCALE READING n'. The corrected value (n'-x) or (n'+x) is called the HEAD SCALE READING (H.S.R) n. To calculate the fraction, the H.S.R (n) is multiplied by the least count (L.C).

Thickness of the glass plate = Total reading = P.S.R + $n\times L.C$ - - - - - - (1)

Changing the position of the glass plate, 3 readings should be taken and recorded in Table-III. Each time calculate the total thickness (h) of the glass plate using equation (1). Calculate the average of the 3 readings, which is the average thickness (h) of the glass plate.

Table-I: Length (l) of the glass plate:

| S.No | M.S.R a (cm) | Vernier Coincidence (n) | Fraction b = n*L.C (cm) | Total Reading (a+b) (cm) |
|------|--------------|-------------------------|-------------------------|--------------------------|
| 1 | 2.5 | 8 | 0.01*8 = 0.08 | 2.58 |
| 2 | 2.5 | 9 | 0.01*9 = 0.09 | 2.59 |
| 3 | 2.5 | 7 | 0.01*7 = 0.07 | 2.57 |

Average length of glass plate (l) = $\frac{2.58+2.59+2.57}{3}$ = $\frac{7.74}{3}$ = 2.58 cm

Average length of glass plate (l) = 2.58 cm or 25.8 mm

Table-II: Breadth (b) of the glass plate:

| S.No | M.S.R a (cm) | Vernier Coincidence (n) | Fraction b = n*L.C (cm) | Total Reading (a+b) (cm) |
|------|--------------|-------------------------|-------------------------|--------------------------|
| 1 | 1.1 | 4 | 0.01*4 = 0.04 | 1.14 |
| 2 | 1.1 | 5 | 0.01*5 = 0.05 | 1.15 |
| 3 | 1.1 | 5 | 0.01*5 = 0.05 | 1.15 |

Average breadth of glass plate (b) = $\frac{1.14+1.15+1.15}{3}$ = $\frac{3.44}{3}$ = 1.15 cm

Average breadth of glass plate (b) = 1.15 cm or 11.5 mm

Table-III: Thickness (h) of the glass plate:

| S.No | Pitch Scale Reading (P.S.R) a (mm) | Observed H.S.R (n') | Correction (x) | Corrected H.S.R n = n'(+/-)x | Fraction b = n*L.C (mm) | Total Reading (a+b) (mm) |
|------|------------------------------------|---------------------|----------------|------------------------------|-------------------------|--------------------------|
| 1 | 2 | 75 | 2 | 75-2 = 73 | 73*0.01 = 0.73 | 2.73 |
| 2 | 2 | 74 | 2 | 74-2 = 72 | 72*0.01 = 0.72 | 2.72 |
| 3 | 2 | 76 | 2 | 76-2 = 74 | 74*0.01 = 0.74 | 2.74 |

Average thickness of glass plate (h) = $\frac{2.73+2.72+2.74}{3}$ = $\frac{8.19}{3}$ = 2.73 mm

Average thickness of glass plate (h) = 2.73 mm

Observations:
i) Average length of glass plate (l) = 2.58 cm or 25.8 mm
ii) Average breadth of glass plate (b) = 1.15 cm or 11.5 mm
iii) Average thickness of glass plate (h) = 2.73 mm

Calculations:
Volume of the given glass plate V = $l\times{b}\times{h}$ $mm^3$
Volume of the given glass plate V = $25.8\times11.5\times2.73$ $mm^3$ = 809.99 $mm^3$

Precautions:
1) Take the M.S.R and the vernier coincidence every time without parallax error.
2) Record all the readings in the same system, preferably the C.G.S system.
3) Do not apply excess pressure on the body held between the jaws.
4) Check for the ZERO error. When the two jaws of the vernier are in contact, if the zero division of the main scale coincides with the zero of the vernier scale there is no ZERO error. If not, a ZERO error is present; apply the correction.
5) The pitch scale reading (P.S.R) should be taken carefully without parallax error.
6) The head scale reading (H.S.R) should be taken carefully without parallax error.
7) The screw must be rotated by holding the safety device 'D' only.
8) Do not apply excess pressure on the object held between the surfaces $S_1$ and $S_2$.
Result : Volume of the given glass plate is V= 809.99 $mm^3$
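The tabulated arithmetic can be cross-checked with a few lines of code (an illustrative addition, not part of the original lab record); the readings and the intermediate rounding follow Tables I-III above.

```python
# Cross-check of the tabulated readings and the reported volume.
LC_VERNIER = 0.01      # cm, least count of the vernier calipers
LC_SCREW = 0.01        # mm, least count of the screw gauge
ZERO_CORRECTION = -2   # head-scale divisions (positive zero error of 2)

def vernier_total(msr_cm, coincidence):
    # Total reading = M.S.R + (vernier coincidence x least count)
    return msr_cm + coincidence * LC_VERNIER

def screw_total(psr_mm, observed_hsr):
    # Apply the zero correction to the observed head scale reading first.
    return psr_mm + (observed_hsr + ZERO_CORRECTION) * LC_SCREW

lengths = [vernier_total(2.5, n) for n in (8, 9, 7)]     # cm, Table-I
breadths = [vernier_total(1.1, n) for n in (4, 5, 5)]    # cm, Table-II
thicknesses = [screw_total(2, n) for n in (75, 74, 76)]  # mm, Table-III

l = round(10 * sum(lengths) / 3, 1)   # 25.8 mm (rounded as in the record)
b = round(10 * sum(breadths) / 3, 1)  # 11.5 mm
h = round(sum(thicknesses) / 3, 2)    # 2.73 mm

print(l, b, h, round(l * b * h, 2))   # 25.8 11.5 2.73 809.99
```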
2016-08-28 20:34:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 25, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4036978781223297, "perplexity": 3384.151022926978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982947845.70/warc/CC-MAIN-20160823200907-00059-ip-10-153-172-175.ec2.internal.warc.gz"}
https://scholarworks.iu.edu/dspace/browse?order=ASC&rpp=20&sort_by=1&etal=-1&offset=1844&type=title
# Browsing by Title Sort by: Order: Results: • (2015-08-17) • (Indiana Institute on Disability and Community, 2007) • ([Bloomington, Ind.] : Indiana University, 2015-11) For profit, non-profit, and government organizations that have an interest in improving performance, intervention set selection is a key component. As a result, consultants seek guidance on how to select intervention sets ... • ([Bloomington, Ind.] : Indiana University, 2011-10-19) A number of atmospheric-pressure ionization sources for mass spectrometry has recently appeared in the literature to yield a field that is collectively referred to as Ambient Desorption/Ionization-Mass Spectrometry (ADI-MS). ... • (2013-12-19) The following study and resulting curriculum are the product of this author’s desire to contribute useful foundation-building tools for both teachers and students of wind instruments, specifically the flute, oboe, clarinet, ... • (2004) • ([Bloomington, Ind.] : Indiana University, 2010-06-01) Purpose: The purpose of this study was twofold. First, a new method, the Max Power Model, for assessing resistive (Fres) and propulsive (Fprop) forces using tethered swimming was developed. The Max Power Model (MPM) is ... • (Indiana University Digital Collections Services, 2016-02-10) “The Vietnam War/American War: Stories from All Sides” is a transmedia project that includes a PBS style documentary and participatory website. Initially funded through an IU New Frontiers Grant, currently more than 135 ... • (Indiana University Cyclotron Facility, 1978) • (Indiana University Cyclotron Facility, 1991) • ([Bloomington, Ind.] : Indiana University, 2010-06-16) Concerning the contention of Pine and Gilmore (1999), experiences are directly related to a business's ability to generate revenue, providing tourist experiences that are more memorable and easier to retrieve would lead ... • (Indiana University Cyclotron Facility, 1980) • (Indiana University South Bend, 2011-10-07) The goal of this study was to explore the use of machine learning techniques in the development of a web-based application that transcribes between multiple orthographies of the same language. To this end, source text files ... • (6/5/2006) • (Indiana University Cyclotron Facility, 1989) • (Berkeley Linguistics Society, 1992) • (Indiana University Cyclotron Facility, 1977) • (Indiana University Cyclotron Facility, 1976) • ([Bloomington, Ind.] : Indiana University, 2014-12) Various studies have been done on exotic spin-dependent short-range forces in the mm to &mu;m range. We are using an ensemble of optically polarized $^3$He gas and an unpolarized test mass to search for such forces. The ... • (International Society for the Scholarship of Teaching and Learning, 2009-10) Bloom’s Taxonomy of cognitive domains is a well-defined and broadly accepted tool for categorizing types of thinking into six different levels: knowledge, comprehension, application, analysis, synthesis and evaluation. ...
2017-08-21 16:55:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3108879327774048, "perplexity": 11561.335648123146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109157.57/warc/CC-MAIN-20170821152953-20170821172953-00667.warc.gz"}
https://quant.stackexchange.com/tags/tactical-asset-allocation/hot
# Tag Info 9 If $Q$ is your covariance matrix, and $r$ is a vector of your expected returns, then the maximum Sharpe ratio is given by the following math program. $${\rm maximize} \frac{r^t x}{\sqrt{0.5 x^t Q x}}$$ subject to $$1^t x = m$$ $$x \in \{0,1\}^n$$ Where $x$ is a vector of indicators of which of the $n$ assets are part of the $m$ selected assets. While the ... 6 In recent years there has been much attention given to defining indexes other than market-cap based indices. While market-cap based indices approximate the theoretical Market Portfolio enshrined in textbooks, some people believe we could do better than that. One popular idea is that "market indexes overweight the most overvalued stocks", though this is ... 3 For such a problem ("selecting n out of m") you can use optimisation heuristics. These algorithms work well even for large n and m, and they are flexible: you may as well select a portfolio that minimises some other function, for instance, the portfolio's drawdown. The downside is that you may have to do some programming yourself. An example very similar ... 2 I would say it could be short for annual turnover (percent/portfolio). Higher portfolio turnover often means higher transaction costs. The definition is usually the lesser of all buys and sells in a year divided by the average monthly NAV of the strategy. (Morningstar) Be aware that turnover numbers come in all colors and flavors and can in- or exclude ... 2 There are a number of issues here. First, there are a number of methodologies called "performance attribution", each providing answers to different questions. So I am not sure what type of question you wish to address. I will here assume that you wish to evaluate the effects of investment decisions as opposed to the effects of market factors. I will also ... 2 As @piRSquared has pointed out, smart beta can be an ambiguous term which is fairly loosely defined. Cliff Asness wrote a paper defining smart beta as To be considered Smart Beta, we believe that these factors must also be simple and transparent. However, they don't have to be the same for all managers or products. One can, and many do, argue that ... 2 The term "smart beta" is loaded and ambiguous. It means different things to different people. Some people manage products that they would argue are not smart beta while the rest of the industry vehemently disagrees. I've gone the route of defining what it means to me. Smart Beta Short Version Commoditized Factor Investing Longer Version Investment ... 1 An approach which satisfies the requirements I listed above is the one laid out in Tracking Error and the Setting of Tactical Ranges, David E. Kuenzi, The Journal of Investing Spring 2004, 13 (1) 35-44. 1 Here's a thought. In 2-dim your score $(x, y) \in [-4,4]^2$ is best characterised as the minimal distance from the line $y=x$, along which your portfolio is balanced. I.e. wherever $y=x$, either at $(0,0)$, $(-1,-1)$ or $(4,4)$, the weight is 50-50, since there is no minimal displacement vector. As a further example, $(0,2)$ has minimal distance from the point $(1,1)$ ... 1 This is a Quadratically Constrained Quadratic Program (QCQP) (try searching for that), albeit the usual inequality constraint has been replaced by your equality constraint: maximise over $x_i$ $$x_i'S_i - 0.01^2\lambda_i$$ s.t. $$x_i'Qx_i=0.01^2$$ You may have some success if you investigate techniques for solving the constraint in the first place and then ...
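For readers who want to experiment with the cardinality-constrained Sharpe-type program in the top answer, a brute-force sketch follows. It is not from any of the answers above; it simply enumerates every size-$m$ subset and evaluates the answer's objective $r^t x / \sqrt{0.5\, x^t Q x}$, so it is only practical for small $n$, and the toy returns and covariance are made-up inputs (the names `r`, `Q`, `n`, `m` just mirror the answer's notation).

```python
# Brute-force check of the program: maximize r'x / sqrt(0.5 x'Qx)
# subject to 1'x = m and x in {0,1}^n. Enumeration only works for small n.
from itertools import combinations
import numpy as np

def best_subset(r, Q, m):
    n = len(r)
    best_val, best_x = -np.inf, None
    for idx in combinations(range(n), m):
        x = np.zeros(n)
        x[list(idx)] = 1.0
        val = r @ x / np.sqrt(0.5 * x @ Q @ x)   # the answer's objective
        if val > best_val:
            best_val, best_x = val, x
    return best_val, best_x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 8, 3
    r = rng.normal(0.05, 0.02, n)        # toy expected returns
    A = rng.normal(size=(n, n))
    Q = A @ A.T / n                      # toy positive-definite covariance
    val, x = best_subset(r, Q, m)
    print("selected assets:", np.flatnonzero(x), "objective:", round(val, 4))
```

For realistic numbers of assets one would instead hand the same formulation to a mixed-integer solver or use the optimisation heuristics mentioned in the third answer.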
2020-10-23 03:11:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.694006085395813, "perplexity": 850.0735193809994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880519.12/warc/CC-MAIN-20201023014545-20201023044545-00347.warc.gz"}
https://www.nature.com/articles/s41467-019-13946-0?error=cookies_not_supported&code=713c9c22-d9fa-47e9-88c0-888b9e4a50d2
## Introduction Topological superconductors (TSCs) are a peculiar class of superconductors where the nontrivial topology of bulk leads to the emergence of Majorana bound states (MBSs) within the bulk superconducting gap1,2,3,4,5. Since MBSs are potentially applicable to the fault-tolerant quantum computation, searching for a new type of TSCs is one of the central challenges in quantum science. A straightforward way to realize TSCs would be to synthesize an odd-parity p-wave superconductor; however, intrinsic p-wave pairing is rare in nature, as highlighted by a limited number of p-wave superconductor candidates hitherto reported (e.g., refs. 6,7). A different approach to realize TSCs is to utilize the superconducting proximity effect (SPE) in a heterostructure consisting of a conventional superconductor and a spin-orbit coupled material such as a topological insulator (TI), as initiated by the theoretical prediction of effectively p-wave superconductivity induced in helical Dirac fermions and Rashba states8,9. This approach has been widely applied to various superconducting hybrids10,11,12,13,14,15,16,17, whereas the existence of MBSs is still under intensive debates. A part of the difficulty in establishing the SPE-derived topological superconductivity may lie in the SPE process itself, since the searched MBSs are expected to be localized in the vortex core at the interface within the heterostructure, and hence are hard to be accessed by surface-sensitive spectroscopies such as scanning tunneling microscopy (STM). Therefore, it would be desirable to invent an alternative way to realize TSCs without using bulk p-wave superconductor or the SPE. In this work, we present the possibility to realize TSCs by using the topological proximity effect (TPE)18; such a novel approach was discovered through our angle-resolved photoemission (ARPES) study of a heterostructure consisting of an epitaxial Pb thin film grown on a three-dimensional (3D) TI, TlBiSe2. ## Results ### Fabrication and characterization of Pb film on TlBiSe2 The studies of SPE for generating TSCs have often employed a heterostructure consisting of a TI thin film as a top layer and a Bardeen-Cooper-Schrieffer (BCS) superconductor as a substrate14,15,16,17. On the other hand, in our TPE approach, the stacking sequence is reversed, and a superconducting Pb thin film was grown on TlBiSe2 (Fig. 1a). We have deliberately chosen this combination, because (i) Pb films are known to maintain the superconductivity down to a few monolayers (MLs)19, and (ii) TlBiSe2 serves as a good substrate for epitaxial films18. Using the low-energy-electron-diffraction (LEED) (inset to Fig. 1d, f, h) and the ARPES results, we have estimated the in-plane lattice-constant a to be 3.5 and 4.2 Å for Pb (~20 ML) and TlBiSe2, respectively. While the a value of Pb film is close to that of bulk20, there is a sizable lattice mismatch of 19.5% between the Pb film and TlBiSe2. First, we discuss the overall electronic structure. As shown in Fig. 1c, d, the electronic band structure of pristine TlBiSe2 is characterized by a Dirac-cone surface state (SS) around the $$\overline{\Gamma}$$ point that traverses the bulk valence and conduction bands21,22,23, forming a small Fermi surface (FS) centered at $$\overline{\Gamma}$$. Upon evaporation of Pb on TlBiSe2, the electronic structure drastically changes as seen in Fig. 1e, f; the holelike valence band of TlBiSe2 disappears, while several M-shaped bands emerge. 
The outermost holelike band crosses the Fermi level (EF) and forms a large triangular FS (Fig. 1e). The M-shaped bands are ascribed to the quantum well states (QWSs) due to the quantum confinement of electrons in the Pb thin film. This is supported by the experimental fact that similar M-shaped bands are also observed in a Pb(111) thin film grown on Si(111) (Fig. 1h). The QWSs in Pb thin films with various thickness on Si(111) have been well studied by spectroscopies and calculations19,24,25,26,27,28. Since the in-plane lattice constant of Pb/TlBiSe2 is close to that of Pb/Si(111), we expect a similar electronic structure between the two. By referring to the previous studies and our band-structure calculations, we estimated the film thickness to be 17 ML for the case in Fig. 1e, f; see Supplementary Fig. 1 and Supplementary Note 1. We observed no obvious admixture from other MLs (e.g., 16 and 18 MLs) that would create additional QWSs24,26, suggesting an atomically flat nature of our Pb film. The LEED pattern of 17ML-Pb/TlBiSe2 as sharp as that of pristine TlBiSe2 (inset to Fig. 1f, d, respectively) also suggests the high crystallinity of Pb film. A careful look at Fig. 1f reveals an additional intensity spot near EF above the topmost M-shaped band. This band is not attributed to the QWSs, and is responsible for our important finding, as described below. ### Topological proximity effect Next we clarify how the band structure of TlBiSe2 is influenced by interfacing with a Pb film. One may expect that there is no chance to observe the band structure associated with TlBiSe2 because the Pb film (17 ML ~ 5 nm) is much thicker than the photoelectron escape depth (~0.5–1 nm). Figure 2a shows the ARPES-derived band structure near EF obtained with a higher resolution for 17ML-Pb/TlBiSe2, where we clearly resolve an X-shaped band above the topmost QWSs. This band resembles the Dirac-cone SS in pristine TlBiSe2 (Fig. 2c), and is totally absent in 17ML-Pb/Si(111) (Fig. 2b), thereby ruling out the possibility of its Pb origin. The appearance of a Dirac-cone-like band in 17ML-Pb/TlBiSe2 is surprising, because the Pb film thickness is about ten times larger than the photoelectron escape depth. This in return definitely rules out the possibility that the observed Dirac-cone-like band is the Dirac-cone state embedded at the Pb/TlBiSe2 interface. Furthermore, this band is not likely to originate from the accidentally exposed SS of TlBiSe2 through holes in Pb, since the observed bands do not involve a replica of pristine TlBiSe2 bands and no trace of the Tl core-level peaks was found in Pb/TlBiSe2; see Supplementary Fig. 2 and Supplementary Note 2. In fact, the bulk valence band lying below 0.4 eV observed in pristine TlBiSe2 (Fig. 2c) totally disappears in Pb/TlBiSe2, and in addition, the Dirac point of Pb/TlBiSe2 is shifted upward with respect to that of pristine TlBiSe2, as clearly seen in Fig. 2d–f. These observations led us to conclude that the Dirac-cone band has migrated from TlBiSe2 to the surface of Pb film via the TPE when interfacing Pb with TlBiSe218. Such a migration can be intuitively understood in terms of the adiabatic bulk-band-gap reversal29,30 in the real space where the band gap (inverted gap) in TlBiSe2 closes throughout the gapless metallic overlayer and starts to open again at the Pb-vacuum interface. It is noted that the upper branch of the Dirac-cone-like band would be connected to the quantized conduction band of the Pb film above EF because it only crosses EF once between Γ and M. 
The band picture based on the TPE well explains the observed spectral feature in Pb/TlBiSe2. As shown in Fig. 2g–i, the spin-degenerate topmost QWS of Pb (Fig. 2g) and the spin-polarized Dirac-cone SS of TlBiSe2 (Fig. 2h) start to interact each other when interfacing Pb and TlBiSe2. Due to the spin-selective band hybridization18, the Dirac-cone band is pushed upward, while the QWS is pulled down (Fig. 2i). This is exactly what we see in Fig. 2a. Our systematic thickness-dependent ARPES measurements revealed a detailed hybridization behavior between the Dirac-cone band and the QWSs, supporting this scenario; see Supplementary Fig. 3 and Supplementary Note 3. Noticeably, the migration of Dirac-cone state is observed at least up to 22 ML thick (~6.5 nm thick) Pb film. Such a long travel of the Dirac cone in the real space is unexpected, and hard to be reproduced by the band calculations due to large incommensurate lattice mismatch between Pb and TlBiSe2. In fact, we have tried to calculate the band dispersion of Pb/TlBiSe2 slab by expanding the in-plane lattice constant of Pb film to hypothetically form a commensurate system, but it caused a sizable change in the whole band structure of Pb film, resulting in the band structure totally different from the experiment. Alternatively, a calculation that uses a larger in-plane unit cell might be useful to achieve an approximate lattice match between Pb and TlBiSe2. It is noted here that the coherency of electronic states may play an important role for the observation of a coupling with the substrate (i.e., the TPE in this study) as in the case of other quantum composite systems involving metallic overlayer31. We estimate the electronic coherence length in Pb film to be larger than 22 ML (~6.5 nm) because the topological SS is observed even in the 22 ML film; see Supplementary Fig. 3 and Supplementary Note 3. ### Superconducting gap The next important issue is whether the Pb/TlBiSe2 heterostructure hosts superconductivity. To elucidate it, we first fabricated a thicker (22 ML) Pb film on TlBiSe2 and carried out ultrahigh-resolution ARPES measurements at low temperatures. Figure 3b shows the energy distribution curve (EDC) at the kF point of the Pb-derived triangular FS (point A in Fig. 3a) measured at T = 4 and 10 K across the superconducting transition temperature Tc of bulk Pb (7.2 K). At T = 4 K, one can clearly recognize a leading-edge shift toward higher EB together with a pile up in the spectral weight, a typical signature of the superconducting-gap opening. This coherence peak vanishes at T = 10 K due to the gap closure, as better visualized in the symmetrized EDC (Fig. 3c). We have estimated the superconducting-gap size at T = 4 K to be 1.3 meV from the numerical fittings. This value is close to that of bulk Pb (~1.2 meV)32, suggesting that the Tc is comparable to that of bulk Pb. Since the superconductivity shows up on the Pb film, we now address an essential question whether the migrating Dirac-cone band hosts superconductivity. We show in Fig. 3d–i the EDCs and corresponding symmetrized EDCs at T = 4 and 10 K for the 17 ML sample measured at three representative kF points (points A–C in Fig. 3a). At point A on the Pb-derived FS, we observe the superconducting-gap opening (Fig. 3d), similarly to the 22 ML film. At point B (C), where the migrating Dirac-cone band crosses EF along the $$\overline{\Gamma {\rm{K}}}$$ ($$\overline{\Gamma {\rm{M}}}$$) line, we still observe a gap as seen in Fig. 3f (Fig. 3h). 
This indicates that an isotropic superconducting gap opens on the migrating Dirac-cone FS. We observed that this gap persists at least down to 12 ML, confirming that the superconducting gap is not an artifact that accidentally appears at some specific film thickness; see Supplementary Fig. 4 and Supplementary Note 4. We have also confirmed that the gap opening is not an inherent nature of the original topological SS in pristine TlBiSe2, by observing no leading-edge shift or spectral-weight suppression at EF at 4 K in pristine TlBiSe2 (Fig. 3j, k). ## Discussion The present results show that the superconducting gap opens on the entire FS originating from both the Pb-derived QWSs and the migrating Dirac-cone band (Fig. 3l). The emergence of an isotropic superconducting gap on the Dirac-cone FS suggests that the 2D topological superconductivity is likely to be realized, since this heterostructure satisfies the theoretically proposed condition for the effectively p-wave superconducting helical-fermion state8. In this regard, one may think that such realization is a natural consequence of making heterojunction between superconductor and TI. However, the present study proposes an essentially new strategy to realize the 2D topological superconductivity. In the ordinary approach based on the SPE (Fig. 3m), the topological Dirac-cone state hosts the effective p-wave pairing at the interface due to the penetration of Cooper pairs from the superconductor to the TI. On the other hand, the present approach does not rely on this phenomenon at all, because the topological Dirac-cone state appears on the top surface of a superconductor (Fig. 3n) via the TPE. One can view this effect as a conversion of a conventional superconductor (Pb film without topological SS) to a TSC (Pb film with topological SS) by interfacing. The present approach to realize 2D TSCs has an advantage in the sense that the pairing in the helical-fermion state (and the MBS as well) is directly accessed by surface spectroscopies such as STM and ARPES. The superconducting helical fermions would be otherwise embedded deep at the interface and are hard to be accessed if the TPE does not occur. Moreover, the observed gap magnitude on the topological SS is comparable to that of the original Pb, unlike the SPE-induced gap that is usually smaller. This result tells us that the so-far overlooked TPE had better be seriously taken into account in many superconductor-TI hybrids. Also, the present study points to the possibility of realizing even wider varieties of 2D TSCs by using the TPE. It is noted that the topological states in Pb/TlBiSe2 are electrically shorted out by the metallic QWSs, unlike the case of some TI films on top of superconductors. This needs to be considered in the application because single conducting channel from the Dirac-cone states would be more preferable. In this regard, the present approach utilizing the TPE and the existing approach using the SPE would be complementary to each other. ## Methods ### Sample preparation High-quality single crystals of TlBiSe2 were grown by a modified Bridgman method21. To prepare a Pb film, we first cleaved a TlBiSe2 crystal under ultrahigh vacuum with scotch tape to obtain a shiny mirror-like surface, and then deposited Pb atoms (purity; 5 N) on the TlBiSe2 substrate using the molecular-beam epitaxy technique while keeping the substrate temperature at T = 85 K. A Pb(111) film on Si(111), used as a reference, was fabricated by keeping the same substrate temperature. 
The film thickness was controlled by the deposition time at a constant deposition rate. The actual thickness was estimated by a comparison of ARPES-derived band dispersions with the band-structure calculations for free-standing multilayer Pb. ### ARPES measurements ARPES measurements were performed with the MBS-A1 electron analyzer equipped with a high-intensity He discharge lamp. After the growth of Pb thin film by evaporation, it was immediately transferred to the sample cryostat kept at T = 30 K in the ARPES chamber, to avoid the clusterization of Pb that is accelerated at room temperature (note that such clusterization hinders the detailed investigation of the surface morphology by atomic-force microscopy). We used the He-Iα resonance line (hν = 21.218 eV) to excite photoelectrons. The energy resolution of ARPES measurements was set to be 2–40 meV. The sample temperature was kept at T = 30 K during the ARPES-intensity-mapping measurements, while T = 4 and 10 K for the superconducting-gap measurements. The Fermi level (EF) of the samples was referenced to that of a gold film evaporated onto the sample holder. ### Band calculations First-principles band-structure calculations were carried out by a projector augmented wave method implemented in Vienna Ab initio Simulation Package code33 with generalized gradient approximation potential34. After the crystal structure was fully optimized, the spin-orbit coupling was included self-consistently.
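A rough cross-check of the gap magnitude quoted above, not part of the paper's analysis: the weak-coupling BCS relation $\Delta(0) \approx 1.764\,k_B T_c$ gives roughly 1.1 meV for $T_c = 7.2$ K. Pb is a strong-coupling superconductor, so a measured gap somewhat above this estimate, such as the 1.3 meV reported at 4 K, is consistent with the bulk-like values cited in the article.

```python
# Back-of-envelope BCS estimate (assumption: weak-coupling formula; Pb is in fact
# strong-coupling, so the measured gap is expected to exceed this estimate).
k_B = 8.617e-5                        # Boltzmann constant in eV/K
T_c = 7.2                             # bulk Pb transition temperature in K
delta_bcs = 1.764 * k_B * T_c * 1e3   # gap in meV
print(f"BCS gap estimate: {delta_bcs:.2f} meV")   # ~1.09 meV vs. 1.3 meV measured at 4 K
```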
2022-12-01 08:19:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6493234634399414, "perplexity": 2361.206415098098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710801.42/warc/CC-MAIN-20221201053355-20221201083355-00581.warc.gz"}
http://dharmath.blogspot.com/2009/08/tuymaada-yakut-olympiad-day-2-problem-4.html
## Friday, August 14, 2009 The sum of several non-negative numbers is not greater than 200, while the sum of their squares is not less than 2500. Prove that among them there are four numbers whose sum is not less than 50. #### Solution The problem does not change if we delete all the zeroes, so we can assume that all numbers are positive. Let $a_1 \geq a_2 \geq \cdots \geq a_n$ be those numbers, and suppose all sums of four are less than 50. So now we have three conditions: 1. $a_1 + a_2 + a_3 + a_4 < 50$. 2. $a_1 + \cdots + a_n \leq 200$. 3. $a_1^2 + \cdots + a_n^2 \geq 2500$. Extend the sequence by adding numbers $a_{n+1}, \cdots, a_m$ where $a_{n+1} = a_{n+2} = \cdots = a_{m-1} \geq a_m$ such that $a_1 + \cdots + a_m = 200$. This addition will not disrupt conditions one and three, so we now have: 1. $a_1 + a_2 + a_3 + a_4 < 50$. 2. $a_1 + \cdots + a_m = 200$. 3. $a_1^2 + \cdots + a_m^2 \geq 2500$. Let us now define a transfer operation as follows. Given $a,b$ with $a \geq b$, we replace them with $a+\epsilon, b-\epsilon$ with $\epsilon > 0$. One can easily verify that a transfer on two numbers will not change their linear sum, but will only increase their sum of squares, because $(a+\epsilon)^2 + (b-\epsilon)^2 > a^2+b^2 \iff \epsilon(a-b)+\epsilon^2 > 0$ So now we can apply a series of transfers, beginning with $(a_1,a_m)$, followed by $(a_1, a_{m-1}), (a_1, a_{m-2}), ...$ and so on. In each step, we apply as much transfer as possible, possibly reducing the smaller number to zero in the process. We stop when $a_1 + a_2 + a_3 + a_4$ reaches $50$, at which point our three conditions become: 1. $a_1 + a_2 + a_3 + a_4 = 50$. 2. $a_1 + \cdots + a_m = 200$. 3. $a_1^2 + \cdots + a_m^2 > 2500$. (Note that the sign for condition 3 changed because equality is no longer possible. In order to reach condition 1, we must do at least one non-trivial transfer, and that transfer strictly increases the sum of squares.) Now we apply another series of transfers similar to above. We start with the last numbers and apply the transfer, this time not to $a_1$ but to $a_5$. Our goal is to reach $a_4 = a_5 = \cdots = a_{k-1} \geq a_k > a_{k+1} = \cdots = a_m = 0$ for some $k$. One can achieve this configuration by transferring from the smallest number each time, and raising the largest number that's less than $a_4$ to be up to par with $a_4$. After we are done, our sequence looks like this: $a, b, c, d, d, \cdots, d, e$. Lastly, we transfer $(c-d)$ from $c$ to $a$, and $(b-d)$ from $b$ to $a$ to arrive at the configuration $50-3d,d,d,d,\cdots,d,e$ where there are $N$ $d$'s. Our conditions can be rewritten as: 1. $(50-3d)+d+d+d = 50$. This condition is automatically satisfied and will be removed from our consideration. 2. $(50-3d) + Nd + e = 200$. 3. $(50-3d)^2 + Nd^2 + e^2 > 2500$ But because $50-3d \geq d$, then $d \leq 12.5$, and thus $12d \leq 150$. Also $e \leq d$ by construction. Then $(50-3d)^2 + Nd^2 + e^2 = (50-3d)^2 + e^2 + d(200-e-(50-3d))$ $= 2500 - 150d + 12d^2 + e^2 - ed$ $= 2500 + d(12d-150) + e(e-d) \leq 2500$ since both $d(12d-150)$ and $e(e-d)$ are non-positive. This contradicts condition 3, so the assumption was false: among the original numbers there must be four whose sum is not less than $50$.
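The argument above can be complemented by a quick randomized test of the statement itself. The sketch below is not part of the original post; it only checks the four largest numbers, which suffices because if any four numbers sum to at least 50 then so do the four largest.

```python
# Randomized sanity check of the claim (illustration only, not a proof):
# for non-negative numbers with sum <= 200 and sum of squares >= 2500,
# the four largest always sum to at least 50.
import random

def random_instance(max_len=60):
    k = random.randint(4, max_len)
    xs = [random.random() ** random.uniform(0.3, 3) for _ in range(k)]  # varied shapes
    scale = random.uniform(0.1, 1.0) * 200 / sum(xs)                    # keep total <= 200
    return [x * scale for x in xs]

checked = 0
for _ in range(100_000):
    xs = random_instance()
    if sum(x * x for x in xs) < 2500:
        continue                    # instance does not satisfy the hypothesis; skip it
    checked += 1
    assert sum(sorted(xs, reverse=True)[:4]) >= 50 - 1e-9
print(f"claim held on {checked} random instances satisfying the hypothesis")
```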
2018-03-21 05:01:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.937686562538147, "perplexity": 216.90156855427674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647576.75/warc/CC-MAIN-20180321043531-20180321063531-00624.warc.gz"}
https://socratic.org/questions/is-a-triangle-with-sides-of-3-4-6-a-right-triangle
# Is a triangle with sides of 3,4,6 a right triangle? Apr 19, 2018 It is not a right triangle. #### Explanation: To check whether the sides form a right triangle, check if the sum of the squares of the two smaller sides equals the square of the longest side. In other words, check if it works with the Pythagorean theorem: Does ${3}^{2} + {4}^{2}$ equal ${6}^{2}$? $3^2+4^2 \stackrel{?}{=} 6^2$ $9+16 \stackrel{?}{=} 36$ $25 \ne 36$ Since $25$ isn't $36$, the triangle is not a right triangle. Hope this helped! Apr 19, 2018 A triangle with sides of $3$, $4$ and $6$ is NOT a right triangle. #### Explanation: We are given three sides of a triangle: $3$, $4$ and $6$. The Pythagoras Theorem states that in a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides. To determine whether the three given sides form a right triangle, we use the Pythagoras Theorem to verify. Draw a triangle, say $ABC$, with the given magnitudes. Note that the longest side (BC) has a magnitude of $6$ units. Hence, this must be the hypotenuse if triangle ABC is a right triangle. Does the angle $\angle CAB$ make a right angle of ${90}^{\circ}$? Verify that using the relationship between the hypotenuse and the other two legs of the triangle. If ${\left(AB\right)}^{2} + {\left(AC\right)}^{2} = {\left(BC\right)}^{2}$, then we know that $BC$ is the hypotenuse and the triangle $ABC$ is a right triangle. $\overline{AB} = 3$; $\overline{AC} = 4$; and $\overline{BC} = 6$ ${\left(AB\right)}^{2} = 9$ ${\left(AC\right)}^{2} = 16$ ${\left(BC\right)}^{2} = 36$ ${\left(AB\right)}^{2} + {\left(AC\right)}^{2} = 9 + 16 = 25$ Hence, ${\left(AB\right)}^{2} + {\left(AC\right)}^{2} \ne {\left(BC\right)}^{2}$ and the triangle is not a right triangle. Hope it helps.
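The same check is easy to script; this small helper is not from either answer above and works for any side lengths:

```python
# Pythagorean check: is a triangle with sides a, b, c right-angled?
def is_right_triangle(a, b, c):
    a, b, c = sorted((a, b, c))              # c is the candidate hypotenuse
    return abs(a**2 + b**2 - c**2) < 1e-9    # tolerance for float inputs

print(is_right_triangle(3, 4, 6))   # False: 9 + 16 = 25 != 36
print(is_right_triangle(3, 4, 5))   # True:  9 + 16 = 25 == 25
```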
2021-09-21 13:16:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 26, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7558181285858154, "perplexity": 359.1305558967856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.38/warc/CC-MAIN-20210921131252-20210921161252-00342.warc.gz"}
https://meta.mathoverflow.net/questions/linked/223?sort=unanswered
# Linked Questions 1answer 347 views ### Reopening questions after clarifications I noticed that closing unclear questions often happens in next to no time while reopening them after the OP honestly follows the request to explain what he meant and does a reasonably decent job on ... 1answer 474 views ### Should there be a “statute of limitations” for closable questions? Ever more frequently I find in the "close votes" review queue questions which are a few years old. Many of them already have accepted answers, and one can see that they were well received at the time ... 1answer 832 views ### Why was “Is it meaningful to work on convergencies, integration, etc. on the Zariski topology?” closed? This post is about the question Is it meaningful to work on convergencies, integration, etc. on the Zariski topology? It is an interesting, relevant question which attracted two excellent answers. The ... 1answer 413 views ### I fixed my question. Can it please be taken off hold now? Yesterday I asked a question on parameterizations of knotted surfaces in $\mathbb R^4$. After I stated in the comments that I wanted the question to be kept to the case of a general surface, the ... 1answer 310 views ### How could I have better clarified my question? I seem to keep getting my posts put on hold in various stack exchanges because it's "unclear what I am asking." How can I do better to prevent this? Specifically right now I'm asking about this ... 1answer 365 views ### Legitimate questions put on hold with no comments, no notification, and no chance to improve My question was put on hold as unclear. I see a number of issues in this: No intent was made to clarify the question via comments: it was just shut up. I consider it rude and unhelpful. No reason was ... 1answer 326 views ### How the moderators/admin decide the significance/research orientation of any question before closing it? is it intuitive to them? I am not sure whether there exists some kind of objectivity in taking a decision in closing a question. For example my question https://mathoverflow.net/questions/226226/a-symmetric-function was ... 0answers 673 views ### Should “The probability for a streak when tossing a coin” be reopened? The question "The probability for a streak when tossing a coin" is on hold. [Edit: It has been reopened.] I disagree with closing the question and voted to reopen. This problem may sound like it is ... 0answers 337 views ### OpenScience Q&A just went live Perhaps you are aware of the fact that there was an Open Science private beta at StackExchange that did not quite manage to develop enough traction so it was closed down recently. Some people want to ... 0answers 534 views ### Probability question migrated to stats.stackexchange What has happened to the following question?: https://mathoverflow.net/posts/202242/revisions Firstly, it looks like a reasonable question to me (although the notation could be improved). Why was it ... 0answers 689 views ### Why this is off-topic? I posted a question on MSE for two months and received no helpful answer. So I tried the same question on MO. I got 4 off-topic votes and so the question got migrated back to MSE. I'd like to mention ... 0answers 288 views ### Nominate bountied question for closure Bountied questions cannot be closed: this prompted this meta-MO question. In the answer to that question, the reasons for this feature are explained (closure would shorten the bounty period, possibly ... 0answers 335 views ### Should this question remain closed? 
The OP of How to project a vector onto a very large, non-orthogonal subspace has written to complain that the question was closed unfairly. The question is too far from my area of expertise to be ... 0answers 66 views ### Tracing and Seeing deleted questions Is there a way to search through deleted questions? Can a user reach his or her own old deleted questions? In the early days of MO I asked about similar sites in related disciplines and there were a ... 0answers 215 views ### Thread for asking about suitability of math.SE question on MO? Often there are questions of the type: "I have posted this question on math.SE. No satisfactory answer so far. Would this question be suitable for MathOverflow?" People ask this in various places: ... 15 30 50 per page
2020-08-04 05:31:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7744222283363342, "perplexity": 1516.468102389885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735860.28/warc/CC-MAIN-20200804043709-20200804073709-00338.warc.gz"}
http://math.stackexchange.com/questions/148862/packing-density-of-tetrahedra-explicit-calculations
# Packing Density of Tetrahedra - Explicit Calculations I am researching problems relating to finding the optimal packing density of tetrahedra and I am driving myself crazy with the following very elementary calculations which do not seem to make sense. I have a container in the shape of a rectangular prism with volume $4690$ml (I measured it with water and also computed it theoretically by measuring the length, width, and height) and I am attempting to pack tetrahedra with edge length $6.7$cm. According to Wikipedia and other websites the volume of a regular tetrahedron with edge length $a$ is given by, $$\text{Vol}(\text{Tet})=\frac{\sqrt{2}}{12}a^3=\frac{\sqrt{2}}{12}(0.067\text{m})^3=3.545 \times 10^{-5}\text{m}^3$$ I then can convert my volume of the container in terms of ml's to m$^3$ as follows: $$\frac{x}{4.69\text{L}} = \frac{1 \text{m}^3}{1000\text{L}}$$ So, I have my container has volume $x = 0.00469 \text{m}^3$. For my packing density, I then have, $$\Delta = n\frac{\text{Vol(Tet)}}{\text{Vol(Box)}}=n\left(\frac{3.545 \times 10^{-5}\text{m}^3}{0.00469 \text{m}^3}\right)=0.00756n$$ where $n$ is the number of tetrahedra I can fit in the packing. I now attempted to fill the box, and following a fairly dense packing of around $0.78$, I only was able to fit $47$ tetrahedra in the container. That gives $\Delta = 0.00756(47) = 0.36$, which is almost as bad as the Bravais lattice packing! Therefore, my calculations must be off somewhere because it looks to me like I have packed around $\Delta=0.7$, but I am coming up with a calculation of $\Delta=0.36$. Any ideas? - Just a note: tetrahedron packing is very hard. Ex: How many tetrahedron of edge length $1$ can you fit in the unit sphere, with one vertex each at the center? That problem is open. –  Alex Becker May 23 '12 at 17:21 @AlexBecker: I know Tetrahedron packing is hard, I have read tons of papers on it and am familiar with the recent work done to improve the lower bound on $\Delta$; I am actually researching improving the upper bound published by Elser in 2010 and am presenting a demonstration to a general audience where I need to have a correlated packing density to the number of tetrahedra they can fit in the container. Unfortunately, my calculations make no sense to me visually and so this is why I asked my question here. –  Samuel Reid May 23 '12 at 21:33 Your calculation is probably right. The decreased density is due to the effect of the boundary. Since your box is only about 3-4 edge lengths across, the packing densities achievable for packing all of space are pretty much meaningless for the problem of packing in your box. For example, note that the densest packing of ten circles in a square is $0.69$, whereas the densest packing of the whole plane is $0.91$. I would not at all be surprised if the boundary effects are even bigger in three dimensions and for tricky shapes like tetrahedra.
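The arithmetic in the question is straightforward to reproduce; the sketch below is not from the original thread and simply re-derives the quoted numbers (tetrahedron volume, container volume, and the density implied by 47 packed tetrahedra).

```python
# Reproduce the packing-density arithmetic from the question.
import math

edge = 0.067                              # tetrahedron edge length in metres (6.7 cm)
v_tet = math.sqrt(2) / 12 * edge**3       # volume of one regular tetrahedron, m^3
v_box = 4.69e-3                           # container volume: 4690 ml = 0.00469 m^3
n = 47                                    # tetrahedra that actually fit

density = n * v_tet / v_box
print(f"one tetrahedron: {v_tet:.3e} m^3")   # ~3.545e-05 m^3
print(f"packing density: {density:.3f}")     # ~0.355, matching the 0.36 in the question
```

The result of about 0.355 matches the question's 0.36, supporting the answer's point that the formula is correct and the shortfall comes from boundary effects in a box only a few edge lengths across.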
2015-08-31 22:43:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8303982615470886, "perplexity": 353.8397600129611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644068098.37/warc/CC-MAIN-20150827025428-00195-ip-10-171-96-226.ec2.internal.warc.gz"}
https://newproxylists.com/finite-automata-using-pumping-lemma-to-show-that-there-is-always-a-smaller-word-for-a-regular-language/
# finite automata – Using pumping lemma to show that there is always a smaller word for a regular language I'm having trouble putting together a mathematical proof that uses the Pumping Lemma to show that $\exists n \geqslant 1$ such that for all strings $w \in L$ with $|w| \geqslant n$, there is another string $z \in L$ such that $|z| < n$. L is a regular language in this case. I understand it's saying that, for any sufficiently long string in a regular language, there is a shorter string that is also in the language. But I'm not sure how to begin using the pumping lemma to show this. I have used the pumping lemma to show something is not regular, but I haven't done a proof like this before…
2020-10-31 13:11:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8484088778495789, "perplexity": 146.88143780344055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107918164.98/warc/CC-MAIN-20201031121940-20201031151940-00479.warc.gz"}
https://online.2iim.com/IIFT-question-paper/IIFT-2023-Question-Paper-VARC/
# IIFT 2023 Question Paper | IIFT VARC ###### IIFT Verbal Ability and Reading Comprehension | IIFT 2023 Question Paper The best way to boost your IIFT prep is to practice the actual IIFT Question Papers. 2IIM offers you exactly that, in a student friendly format to take value from this. In the 2023 IIFT, VARC were a mixed bag of questions of varying difficulty, with some routine questions and the others were very demanding. Some beautiful questions that laid emphasis on Learning ideas from basics and being able to comprehend more than remembering gazillion formulae and shortcuts. You absolutely must develop feelings of self-esteem and confidence to become empowered. No amount of willpower can surmount the feeling of defeatism. Any negative thoughts will filter into your subconscious mind, which does not question or analyse the data it receives. If you have experienced repeated failure in past attempts to change a behaviour pattern, your total self-image becomes established and fixed as one of failure. You become so convinced that you are incapable of reversing this trend that you eventually stop picturing a desirable goal for yourself. You resign yourself to accepting the current situation as being permanent and helpless. A positive self-image must be fed into your subconscious mind without being evaluated by the critical factor of your conscious mind proper (defence mechanisms). The most efficient and effective method of accomplishing this goal is by practicing self-hypnosis. Although many obstacles may arise during your consciousness raising program, the proper use of self-programming will transform these former roadblocks into stepping-stones of success. Once you envision succeeding in your goals, former difficulties disappear, and the subconscious becomes your chief ally in strengthening your ability to meet challenges. The subconscious mind contains all memories. It is a natural computer and is continually being programmed with data originating from the conscious mind proper. The subconscious cannot alter this data; however, it does direct the conscious mind to act in a specific way. The conscious mind is always resistant to change, any change, even if it is for the better. The conscious mind likes business as usual. Consciousness raising and behavioural changes are not business as usual; therefore, the conscious mind is your only enemy. By seeing yourself as you desire to be, you are reprogramming your subconscious computer. This does not require a critical acceptance, because your subconscious is incapable of analytical thought. Accompanying this visualization will be a feeling that you have already attained this goal. This as-if approach is remarkably successful. Once you achieve a particular goal using the subconscious mind, the maintenance of this goal will be effortless. When something attempts to interfere with the proper functioning of the reprogrammed subconscious, your internal computer will recognize the error immediately, and it will be corrected by this feedback mechanism. Your initial efforts in reprogramming the subconscious require a certain amount of mental training, which encompasses all new goals and aspirations. Daily practice of the exercises self-hypnosis, yoga, hetero-hypnosis, and trance results in a permanent reprogramming of the subconscious computer and a spontaneous incorporation of this goal. Willpower is neither necessary nor desirable for this paradigm. This is one example of raising consciousness. 
Your imagination can create a new mental image of yourself. If you have properly implanted the subconscious with positive images and suggestions, you automatically alter your behaviour to act in accordance with this new programming. A new sense of well-being and accomplishment accompany this pattern of behaviour. You will be able to feel this sense of confidence and empowerment for prolonged periods following additional practice sessions. Willpower alone cannot result in permanent changes in behaviour. The problem with the willpower approach is that you are consciously placing too much emphasis on past failures. As a result, your mental mind-set is not conducive to improvement, and subsequent efforts prove only more frustrating. Success in applying consciousness-raising techniques depends on the subconscious mind's uncritical acceptance of constructive suggestions. Thus, the most effective method of achieving this is through the use of self-hypnosis. 1. #### IIFT 2023 Question Paper VARC To change one's mind from 'negativity' to 'positivity' or to change one's behaviour permanently, which of the following is the most effective way? 1. Unanalytical acceptance of inefficacious suggestions. 2. Analytical acceptance of productive suggestions. 3. Critical acceptance of uncritical suggestions. 4. Uncritical acceptance of positive images and suggestions. 2. #### IIFT 2023 Question Paper VARC What is the most effective way to permanently reprogram one's subconscious mind? 1. Daily practice of yoga, self-hypnosis with targeted goal 2. Strong willpower 3. Feeding positive self-image in conscious mind 4. Constant efforts 3. #### IIFT 2023 Question Paper VARC Which of the following about 'using willpower to bring permanent changes in behaviour' is not correct according to passage? 1. Willpower purposely places excessive importance on past failures. 2. Willpower established on past failures makes one's mental mind-set non conducive to improvements. 3. Willpower leads to establishing one's total self-image as a failure. 4. Willpower is non-essential for raising consciousness. 4. #### IIFT 2023 Question Paper VARC According to passage, the subconscious mind....: 1. questions or analyzes the data it receives and eventually stops one from picturing a desirable goal for oneself. 2. captures negative thoughts instantly leading one to accept his/her current situation as being permanent or helpless. 3. is programmed by data from the conscious mind proper which is the only enemy that is always resistant to change. 4. can be reprogrammed by feeding positive self-image into one's subconscious mind through critical acceptance of the evaluations of the conscious mind proper. Widespread currency manipulation, mainly in developing and newly industrialized economies, is the most important development of the past decade in international financial markets. In an attempt to hold down the values of their currencies, governments are distorting capital flows by around $1.5 trillion per year. The result is a net drain on aggregate demand in the United States and the Euro area by an amount roughly equal to the large output gaps in the United States and the Euro area. In other words, millions more Americans and Europeans would be employed if other countries did not manipulate their currencies and instead achieved sustainable growth through higher domestic demand. The United States has lost 1 million to 5 million jobs due to this foreign currency manipulation.
More than 20 countries have increased their aggregate foreign exchange reserves and other official foreign assets by an annual average of nearly $1.5 trillion in recent years. This build-up of official assets, mainly through intervention in the foreign exchange markets, keeps the currencies of the interveners substantially undervalued, thus boosting their international competitiveness and trade surpluses. The corresponding trade deficits are spread around the world, but the largest share of the loss centres on the United States, whose trade deficit has increased by $200 billion to $500 billion per year as a result. The United States must tighten fiscal policy over the coming decade to bring its national debt under control. Monetary policy has already exhausted most of its expansionary potential. Hence the United States must eliminate or at least sharply reduce its large trade deficit to accelerate growth and restore full employment. The way to do so, at no cost to the US budget, is to insist that other countries stop manipulating their currencies and permit the dollar to regain a competitive level. This can be done through steps fully consistent with the international obligations of the United States that are indeed based on existing International Monetary Fund (IMF) guidelines. Such a strategy should in fact attract considerable support from other countries that are adversely affected by the manipulation, including Australia, Canada, the euro area, Brazil, India, Mexico, and a number of other developing economies. The strategy would aim to fill a major gap in the existing international financial architecture: its inability to engage surplus countries, even when they blatantly violate the legal strictures against competitive currency undervaluation, in an equitable sharing of global rebalancing requirements. The United States and its allies should first seek voluntary agreement from the manipulators to sharply reduce or eliminate their intervention. The United States should inform the manipulators that if they do not do so, the United States will adopt four new policy measures against their currency activities. First, it will undertake countervailing currency intervention (CCI) against countries with convertible currencies by buying amounts of their currencies equal to the amounts of dollars they are buying themselves, to neutralize the impact on exchange rates. Second, it will tax the earnings on, or restrict further purchases of, dollar assets acquired by intervening countries with inconvertible currencies (where CCI could therefore not be fully effective) to penalize them for building up these positions. Third, it will hereafter treat manipulated exchange rates as export subsidies for purposes of levying countervailing import duties. Fourth, hopefully with a number of other adversely affected countries, it will bring a case against the manipulators in the World Trade Organization (WTO) that would authorize more wide-ranging trade retaliation. 6. #### IIFT 2023 Question Paper VARC The term "currency manipulation" by the developing and newly industrialized economies as mentioned in the passage can be explained as 1. Buying and selling the currencies of friendly countries to hold down the value of domestic currency. 2. Keeping the relative value of developing and newly industrialized economies' currency depreciated via various kinds of financial instruments. 3. Keeping the relative value of developing and newly industrialized economies' currency pegged to the market forces, i.e.
demand and supplies of currency in the foreign exchange markets. 4. Keeping the relative value of developed countries' currency always appreciated via various kinds of financial instruments. 7. #### IIFT 2023 Question Paper VARC What do you comprehend from the sentence "the result is a net drain on aggregate demand in the United States and the Euro area"? 1. Refers to inflationary pressures thus reducing the purchasing power of the customers of the United States and the Euro area which results in reduced aggregate demand. 2. Refers to fiscal deficit coupled with trade deficit thus causing "twin-deficit" which weaken customer's confidence thus leading to reduction in aggregate demand. 3. Refers to export competitiveness of developing and newly industrialized countries in the markets of the United States and the euro area. 4. Refers to loss of economic, commercial, financial and business opportunities in the United States and the Euro area. 8. #### IIFT 2023 Question Paper VARC What kind of retaliatory action is most likely to be taken by the United States against the manipulator countries which have convertible currency? 1. Imposing higher rate of import duties and possibly import restrictions also. 2. Treating the currency manipulation as the export subsidy. 3. Undertaking the countervailing currency intervention. 4. Reporting the case of currency manipulator(s) to the World Trade Organization to get authorization for plethora of retaliatory trade measures. 9. #### IIFT 2023 Question Paper VARC Based on the learnings from the passage, which of the following statement is not false? 1. The United States of America (USA) and the Euro area may not be able to significantly enhance the employment opportunities in their country/ region provided the other countries do not, intentionally and artificially, manipulate their currencies to their advantage. 2. As the US has exhausted the monetary policy tools, it can leverage extra ordinary banking policy instruments to reduce its current account deficit and can create millions of domestic job opportunities. 3. Imposition of countervailing import duties against countries which manipulate its exchange rates. 4. As a result of currency manipulation, there is trade deficit witnessed by countries across the world but the United States of America and the European Union are notable exception. The international economy almost certainly will continue to be characterized by various regional and national economies moving at significantly different speeds, a pattern reinforced by the 2008 global financial crisis. The contrasting speed across different regional economies are exacerbating global imbalances and straining governments and the international system. The key question is whether the divergences and increased volatility will result in a global breakdown and collapse or whether the development of multiple growth centres will lead to resiliency. The absence of a clear hegemonic economic power could add to the volatility. Some experts have compared the relative decline in the economic weight of the US to the late 19th century when economic dominance by one player, Britain; receded into multi-polarity. During the next 15-20 years, as power becomes even more diffuse than today, a growing number of diverse state and non-state actors, as well as subnational actors, such as cities, will play important governance roles. The increasing number of players needed to solve major transnational challenges, and their discordant values, will complicate decision-making. 
The lack of consensus between and among established and emerging powers suggests that multilateral governance to 2030 will be limited at best. The chronic deficit probably will reinforce the trend toward fragmentation. However, various developments, positive or negative, could push the world in different directions. Advances cannot be ruled out despite growing multi-polarity, increased regionalism, and possible economic slowdowns. Prospects for achieving progress on global issues will vary across issues. The governance gap will continue to be most pronounced at the domestic level and driven by rapid political and social changes. The advances during the past couple decades in health, education, and income, which we expect to continue, if not accelerate in some cases, will drive new governance structures. Transitions to democracy are much more stable and long-lasting when youth bulges begin to decline and incomes are higher. Currently about 50 countries are in the awkward stage between autocracy and democracy, with the greatest number concentrated in Sub-Saharan Africa, Southeast and Central Asia, and the Middle East and North Africa. Both social science theory and recent history, the Color Revolutions and the Arab Spring, support the idea that with maturing age structures and rising incomes, political liberalization and democracy will advance. However, many countries will still be zig-zagging their way through the complicated democratization process during the next 15-20 years. Countries moving from autocracy to democracy have a proven track record of instability. Other countries will continue to suffer from a democratic deficit: in these cases a country's developmental level is more advanced than its level of governance. Gulf countries and China account for a large number in this category. China, for example, is slated to pass the threshold of US $15,000 per capita purchasing power parity (PPP) in the next five years, which is often a trigger for democratization. Chinese democratization could constitute an immense "wave," increasing pressure for change on other authoritarian states. The widespread use of new communications technologies will become a double-edged sword for governance. On the one hand, social networking will enable citizens to coalesce and challenge governments, as we have already seen in the Middle East. On the other hand, such technologies will provide governments, both authoritarian and democratic, an unprecedented ability to monitor their citizens. It is unclear how the balance will be struck between greater IT-enabled individuals and networks and traditional political structures. In our interactions, technologists and political scientists have offered divergent views. Both sides agree, however, that the characteristics of IT use (multiple and simultaneous action, near instantaneous responses, mass organization across geographic boundaries, and technological dependence) increase the potential for more frequent discontinuous change in the international system. 11. #### IIFT 2023 Question Paper VARC According to the passage, which of the following is not a notable cause of multi-polarity? 1. Enhanced volatility due to absence of hegemonic economic power. 2. Uneven economic growth in the world of national and regional economies. 3. Ever-burgeoning global imbalances caused by diverging speed of economic growth nationally and regionally. 4. Wavering, atypical and conflicting global economic growth acting as a catalyst of global economic break-down and collapse. 12.
#### IIFT 2023 Question Paper VARC According to passage, which of the following will cause chronic deficit in multilateral governance? 1. Growing multi-polarity as the nations will have different political and ideological orientations. 2. The decentralized decision structures of diverse states and non-state actors, internationally, nationally and sub-nationally, thus emanating a discordant value in decision making. 3. Increased regionalism which is result of ever-proliferating number of Free Trade Agreement(s)/ Preferential Trade Agreement(s). 4. Possible economic slowdown which is an outcome of economic sanctions, high energy prices and supply-chain disruptions. 13. #### IIFT 2023 Question Paper VARC According to passage, which of the following is/are not a trigger(s) for democratization? I. Maturing age structure II. Rising income III. Rising Human Development Index IV. Religious beliefs 1. Only I & III 2. Only III & IV 3. Only I & IV 4. Only II & III 14. #### IIFT 2023 Question Paper VARC According to passage, the widespread use of communication technologies will lead to ....? 1. Recurrent yet non-continuous changes in the international system. 2. Maturing of process of democratization in higher income countries. 3. Strengthening the political and social governance thus offering the basic social services at doorsteps of the citizens. 4. Enhanced disharmony and socio-economic movements including civil disobedience. Nine years ago, when Japan was beating America's brains out in the auto industry, I wrote a column about playing the computer geography game Where in the World Is Carmen Sandiego? with my then nine-year-old daughter, Orly. I was trying to help her by giving her a clue suggesting that Carmen had gone to Detroit, so I asked her, "Where are cars made?" And without missing a beat she answered, "Japan." Ouch! Well, I was reminded of that story while visiting Global Edge, an Indian software design firm in Bangalore. The company's marketing manager, Rajesh Rao, told me that he had just made a cold call to the VP for engineering of a U.S. company, trying to drum up business. As soon as Mr. Rao introduced himself as calling from an Indian software firm, the U.S. executive said to him, "Namaste," a common Hindi greeting. Said Mr. Rao, "A few years ago nobody in America wanted to talk to us. Now they are eager." And a few even know how to say hello in proper Hindu fashion. So now I wonder: If I have a granddaughter one day, and I tell her I'm going to India, will she say, "Grandpa, is that where software comes from?" No, not yet, honey. Every new product-from software to widgets - goes through a cycle that begins with basic research, then applied research, then incubation, then development, then testing, then manufacturing, then deployment, then support, then continuation engineering in order to add improvements. Each of these phases is specialized and unique, and neither India nor China nor Russia has a critical mass of talent that can handle the whole product cycle for a big American multinational. But these countries are steadily developing their research and development capabilities to handle more and more of these phases. As that continues, we really will see the beginning of what Satyam Cherukuri, of Sarnoff, an American research and development firm, has called "the globalization of innovation" and an end to the old model of a single American or European multinational handling all the elements of the product development cycle from its own resources. 
More and more American and European companies are outsourcing significant research and development tasks to India, Russia, and China. According to the information technology office of the state government in Karnataka, where Bangalore is located, Indian units of Cisco Systems, Intel, IBM, Texas Instruments, and GE have already filed a thousand patent applications with the U.S. Patent Office. Texas Instruments alone has had 225 U.S. patents awarded to its Indian operation. "The Intel team in Bangalore is developing microprocessor chips for high-speed broadband wireless technology, to be launched in 2006," the Karnataka IT office said, in a statement issued at the end of 2004, and "at GE's John F. Welch Technology Centre in Bangalore, engineers are developing new ideas for aircraft engines, transport systems and plastics." Indeed, GE over the years has frequently transferred Indian engineers who worked for it in the United States back to India to integrate its whole global research effort. GE now even sends non-Indians to Bangalore. Vivek Paul is the president of Wipro Technologies, another of the elite Indian technology companies, but he is based in Silicon Valley to be close to Wipro's American customers. Before coming to Wipro, Paul managed GE's CAT scanner business out of Milwaukee. At the time he had a French colleague who managed GE's power generator business for the scanners out of France. "I ran into him on an airplane recently," said Paul, "and he told me he had moved to India to head up GE's high-energy research there." I told Vivek that I love hearing an Indian who used to head up GE's CT business in Milwaukee but now runs Wipro's consulting business in Silicon Valley tell me about his former French colleague who has moved to Bangalore to work for GE. That is a flat world. 16. #### IIFT 2023 Question Paper VARC According to the passage, which of the following is correct: 1. American and European countries are outsourcing significant research and development tasks to India, China and Russia because the latter are not capable of handling the other aspects of a product cycle for a big American multinational. 2. As the countries like India, China and Russia would handle more and more of product development cycle phases through developed research and development capabilities, we will see the beginning of 'the globalization of innovation'. 3. American or European multinationals outsource significant research and development tasks to India, Russia, and China as the former are unable to handle all the elements of product development cycle from their own resources. 4. As the countries like India, China and Russia are steadily developing their research and development capabilities to handle more and more of new product development phases, this would deter American or European multinational from handling all the elements of the product development cycle from its own resources. 17. #### IIFT 2023 Question Paper VARC According to the passage, which of the following is correct: 1. Mr. Rao's unsolicited phone call to the VP for engineering of a U.S. company, in an effort to bring about some business, was ignored. 2. Americans earlier did not know how to say hello in proper Hindu fashion. 4. The author wishes to be able to tell her granddaughter someday that all the software comes from India. 18. #### IIFT 2023 Question Paper VARC With reference to passage, 'That is a flat world' can be best described to mean: 1. The world is literally flat 2. 
A metaphor for viewing the world as a level playing field in terms of business.
3. The next phase of globalization
4. Measuring businesses purely by the amount of innovations they make

19. #### IIFT 2023 Question Paper VARC
The central idea of the passage is:
1. Flying allows people from diverse business backgrounds to meet and interact with each other without regard to geography or distance.
2. Convergence of technological and other forces allows businesses to connect and collaborate with each other irrespective of geography or distance, empowering more and more companies to reach farther, faster, and deeper than ever before.
3. India, Russia and China are on the verge of becoming global powers.
4. A large number of Indian operations have started filing patent applications with the U.S. Patent Office indicating the progress of India and other such countries (like Russia or China) in doing business.

20. Match the word with its correct meaning:

| S.No. | Word | S.No. | Meaning |
|---|---|---|---|
| i. | Flotsam | a. | people with common interests who do things together in a small group and do not like to include others |
| ii. | Coterie | b. | insolent or impertinent behaviour |
| iii. | Insouciant | c. | not easily made angry or upset |
| iv. | Effrontery | d. | showing a casual lack of concern |
| v. | Phlegmatic | e. | people or things that have been rejected or discarded as worthless |

1. i - e; ii - b; iii - c; iv - a; v - d
2. i - c; ii - a; iii - b; iv - e; v - d
3. i - e; ii - a; iii - d; iv - b; v - c
4. i - d; ii - c; iii - e; iv - b; v - a

21. For the given idiom, identify the correct meaning:

22. #### IIFT 2023 Question Paper VARC
Bring someone to book
1. punish someone or make somebody to account for something he/she has done wrong.
2. to do something in strict accordance with rules or regulations.
3. something that one doesn't understand or know anything about.
4. punish someone for keeping the book in a bad condition, with torn pages, etc.

23. #### IIFT 2023 Question Paper VARC
Irons in the fire
1. being attacked and criticized heavily.
2. the trouble will break-out.
3. work or function at a peak level of performance.
4. to have several different activities or projects in progress at the same time.

24. #### IIFT 2023 Question Paper VARC
Fish out of water
1. something or someone that doesn't really fit into any one group.
2. getting uncomfortable because a person is in an unusual or unfamiliar situation.
3. a difficult problem or situation.
4. a person who seems unfriendly and does not share his/her feelings.

25. #### IIFT 2023 Question Paper VARC
Fill in the blanks with appropriate words to form a meaningful paragraph:
Conflict is a great clarifier; in a conflict, the opposing ____(i)____ not only come to a better understanding of each other's arguments but are forced to reflect on the ____(ii)____ and clarity of their own beliefs. Conflict prevents one from becoming ____(iii)____ into thinking that there is only one truth.
It also serves as a powerful antidote against intellectual sterility and decline, since it encourages the ____(iv)____ adjustment and refinement of competing positions.
1. (i) Antagonists, (ii) cogency, (iii) bewitched, (iv) dialectical
2. (i) Contacts, (ii) weakness, (iii) hexed, (iv) formal
3. (i) Ally, (ii) vagueness, (iii) enchanted, (iv) learned
4. (i) Protagonists, (ii) illogicality, (iii) enamoured, (iv) teleological

26. #### IIFT 2023 Question Paper VARC
Certain foreign words are frequently used in the English language. Identify the option with the correct origin of the given words:
I. Hoi polloi II. Vox populi III. A cappella IV. Prima donna V. Noblesse oblige
1. Only II and IV are Latin
2. Only III and V are French
3. Only I is not Greek
4. Only III and IV are Italian

27. For the meaning given in the question, choose the most appropriate and expressive adjective from the options:

28. #### IIFT 2023 Question Paper VARC
Meaning: form of long-standing habit; long-accustomed, deeply habituated
1. inveterate 2. notorious 3. congenital 4. glib

29. #### IIFT 2023 Question Paper VARC
1. salubrious 2. chronic 3. egregious 4. opprobrious

30. For the given root/suffix, identify its meaning:

31. #### IIFT 2023 Question Paper VARC
Suffix: '-escent' as used in the context of the word 'senescent'
1. To write 3. Beauty 4. Growing, becoming

32. #### IIFT 2023 Question Paper VARC
Root: 'agōgos' as used in the context of the word 'demagogue'
1. Science, study 3. Mind, soul, spirit 4. Marriage

33. Identify the Antonym for the given word:

34. #### IIFT 2023 Question Paper VARC
VENAL
1. Avaricious 2. Mercenary 3. Untrustworthy 4. Incorruptible

35. #### IIFT 2023 Question Paper VARC
LACONIC
1. Compendious 2. Aphoristic 3. Pleonastic 4. Apothegmatic

36. #### IIFT 2023 Question Paper VARC
Identify the misspelled word:
1. INEQUITOUS 2. RETICENCE 3. TACITURNITY 4. MARTINET

37. Identify the option to which the collective noun given in the question does not apply:

38. #### IIFT 2023 Question Paper VARC
Collective Noun: shoal
1. Bass 2. Herrings 3. Pilchards 4. Gnats

39. #### IIFT 2023 Question Paper VARC
Collective Noun: herd
1. Chamois 2. Gulls 3. Walruses 4. Wrens

40. Identify one word for the description given in the question:

41. #### IIFT 2023 Question Paper VARC
A state whose power derives from its naval or commercial supremacy on the seas.
1. Neocracy 2. Kakistocracy 3. Plutocracy 4. Thalassocracy

42. #### IIFT 2023 Question Paper VARC
One who possesses outstanding technical ability in a particular art or field.
1. Virtuoso 2. Uxorious 3. Termagant 4. Indefatigable

43. #### IIFT 2023 Question Paper VARC
Use the words in the table below to solve the questions:

i) Zwieback  ii) Ligneous  iii) Antiphon  iv) Decrepit
v) Ypsiloid  vi) Filibuster  vii) Incendiary  viii) Inveigle
ix) Whodunits  x) Abasedly  xi) Yack away  xii) Gossamer
xiii) Abaction  xiv) Cognovit  xv) Imbroglio  xvi) Volacious
xvii) Abearing  xviii) Zugzwanged  xix) Shemozzles

Complete the crossword using the words from the above table. There are more words in the table than required.
Down: 3. A situation that is complicated, confusing or embarrassing, especially a political or public one; 8. (used about a thing or person) old and in very bad condition or poor health
Across: 2. That causes a fire; 7. Persuade (someone) to do something by means of deception or flattery;
1. 3 Down - ix); 8 Down - i); 2 Across - vii); 7 Across - ii)
2. 3 Down - xi); 8 Down - iv); 2 Across - vi); 7 Across - xii)
3. 3 Down - xvi); 8 Down - iii); 2 Across - xix); 7 Across - ii)
4. 3 Down - xv); 8 Down - iv); 2 Across - vii); 7 Across - viii)

44. #### IIFT 2023 Question Paper VARC
The question has explained the meaning of two words from the given table. Identify the correct matching words from the table.

i) Zwieback  ii) Ligneous  iii) Antiphon  iv) Decrepit
v) Ypsiloid  vi) Filibuster  vii) Incendiary  viii) Inveigle
ix) Whodunits  x) Abasedly  xi) Yack away  xii) Gossamer
xiii) Abaction  xiv) Cognovit  xv) Imbroglio  xvi) Volacious
xvii) Abearing  xviii) Zugzwanged  xix) Shemozzles

a) A story or play about a murderer in which the identity of the murderer is not revealed until the end
b) able or fit to fly
1. a) - ix); b) - xvi)
2. a) - xi); b) - ix)
3. a) - xv); b) - xvi)
4. a) - xv); b) - xi)
2023-02-09 06:15:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24148212373256683, "perplexity": 6485.543927925808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00555.warc.gz"}
https://stats.stackexchange.com/questions/388968/how-to-do-psm-with-panel-data-using-panelmatch/389165
# How to do PSM with panel data using PanelMatch? I would greatly appreciate if you could let me know how to use PanelMatch for my dataset. Unfortunately, I couldn't find it's manual so I don't know how to find which firms are matched, how to extract the coefficients of the estimated models, how to report bias before and after matching, and etc.. 1. First, I need to do PSM using these variables: switch =big4+ lnasset+ leverage+ loss 1. Then, I should do diff in diff on the matched sample: decost= switch+ post_switch +switch*post_switch+ lnaudten +big4 +altmanz +lnasset +lnage +markettobook+ leverage +profit+ tangible+ cashvol I also read this document in Stata. However, in my dataset, the treatment dates are different for each firm. Besides, the treatment could occur more than once for each firm. Therefore, I don’t know how to define "post_switch". id date lnaudten big4 altmanz lnasset lnage mtob lev prof tang cavol switch decost los 1 86 .693147 0 18.4373 12.4689 2.48491 3.69137 .051575 .44427 .999581 .195047 0 .205964 0 1 87 1.09861 0 12.5244 12.7628 2.56495 2.69891 .043572 .559291 .999688 .128583 0 .107817 0 1 88 1.38629 0 14.7922 13.3187 2.63906 3.55144 .037377 .901665 .99897 .045367 0 .085176 0 1 89 1.60944 0 21.6806 13.5282 2.70805 4.4521 .090386 1.00277 .998904 .034365 0 .059932 0 1 90 1.79176 0 16.6034 13.7204 2.77259 3.16585 .077934 1.21371 .999292 .032229 0 .064589 0 1 91 0 0 9.32285 14.0652 2.83321 1.87682 .038984 1.61792 .999376 .019715 1 .086323 0 1 92 .693147 0 29.1306 14.3805 2.89037 3.83173 .030874 3.42558 .999687 .117503 0 .148985 0 1 93 1.09861 0 23.7929 14.5855 2.94444 3.08877 .01225 4.19413 .999862 .171374 0 .181363 0 2 86 1.94591 1 2.67142 13.5351 1.60944 .90438 .031392 .284566 .997711 .172729 0 .116186 0 2 87 2.07944 1 1.85554 13.6068 1.79176 .783169 .037099 .28575 .997862 .055812 0 .137087 0 2 88 2.19723 1 3.25227 13.6162 1.94591 .857463 .046493 .264266 .99788 .052991 0 .174771 0 2 89 2.30258 1 2.46358 13.8247 2.07944 1.00449 .045589 .246997 .998208 .064097 0 .168786 0 2 90 2.3979 1 1.43551 13.8304 2.19723 .791431 .060575 .171494 .998218 .062911 0 .240464 0 2 91 0 0 1.10687 13.7423 2.30258 .532189 .071249 .164944 .998054 .093181 1 .351773 0 2 92 .693147 0 3.39252 13.8668 2.3979 1.80869 .121138 .177533 .998281 .090341 0 .282046 0 2 93 1.09861 0 3.95825 14.0244 2.48491 1.41083 .094626 .162305 .99847 .134091 0 .188627 0 3 86 .693147 0 5.01935 13.0392 3.49651 1.08849 .008833 .275658 .995814 .165765 0 .12684 0 3 87 1.09861 0 8.51978 13.0429 3.52636 .794968 .010574 .349996 .995351 .276396 0 2.49701 0 3 88 1.38629 0 13.1943 13.2777 3.55535 1.36713 .043884 .409195 .996392 .079824 0 .033575 0 3 89 1.60944 0 18.7427 13.4562 3.58352 1.89782 .010373 .42366 .997045 .049833 0 .057621 0 3 90 1.79176 0 20.2185 13.4667 3.61092 1.69264 .016154 .339384 .997148 .133837 0 .133177 0 3 91 0 0 11.1153 13.9098 3.63759 1.50931 .010464 .935899 .998216 .12095 1 .089572 0 3 92 .693147 0 25.7134 14.1341 3.66356 2.41058 .004609 1.06214 .99856 .13175 0 .171943 0 3 93 1.09861 0 29.8983 14.162 3.68888 2.29729 .003891 .902802 .997648 .146949 0 .823985 0 • did you try the ?PanelMatch command in R? – StatsStudent Jan 24 '19 at 17:51 • @StatsStudent Thanks. I tried the example codes provided here: github.com/insongkim/PanelMatch/tree/master/R. However, as you could see these commands don't report coefficients of the predictors, the reduced bias after matching. I mean some tables like the ones which are illustrated here: edge.edx.org/assets/courseware/v1/… That's why I am confused. 
– ebrahimi Jan 24 '19 at 18:50 • you invited me to answer this question, but I don't know very much about panel models/econometric approaches, so it would take a lot of effort for me ... – Ben Bolker Jan 25 '19 at 14:50 • I think I know what you wanted to do, but I'm slightly uncertain. Here is why. You said you want to do PSM with the first equation you show, but PSM is 2 steps with the PS being the regression then the M coming 2nd. I guess your 1st equation was your PS regression, right? Then by "diff in diff" did you mean matching to estimate something like ATT or ATE, which is how PSM normally works, or did you mean a diff-in-diff model? – Hack-R Jan 25 '19 at 17:53 • Another thing - while I think this package is interesting, it looks like it's focused on time-series/panel versions of the PSM analysis - is this what you're going for? If so, do you know what lags, etc, you wanted? If not, I suggest to use Matching or FastMatch, the traditional PSM packages that are not focused on time-series (I have some tutorials online and could show you how). – Hack-R Jan 25 '19 at 17:57 This is how I would do it. Please see the questions and comment I left above. Based on the question it seemed like the choice of the newer non-CRAN panel matching library PanelMatch, while interesting, seemed to require information/data not in your question for time-series specific use cases of PSM. It sounded like you're in the more general case, wherein you'd want a plain PSM/matching package like Matching or FastMatch, though if this assumption is incorrect please let me know and provide more info on your needs. Ok so first, load the libraries and data: #devtools::install_github("insongkim/PanelMatch", dependencies=TRUE) if ( !require(pacman) ) install.packages("pacman");require(pacman) data <- read.table(text="id date lnaudten big4 altmanz lnasset lnage mtob lev prof tang cavol switch decost los 1 86 .693147 0 18.4373 12.4689 2.48491 3.69137 .051575 .44427 .999581 .195047 0 .205964 0 1 87 1.09861 0 12.5244 12.7628 2.56495 2.69891 .043572 .559291 .999688 .128583 0 .107817 0 1 88 1.38629 0 14.7922 13.3187 2.63906 3.55144 .037377 .901665 .99897 .045367 0 .085176 0 1 89 1.60944 0 21.6806 13.5282 2.70805 4.4521 .090386 1.00277 .998904 .034365 0 .059932 0 1 90 1.79176 0 16.6034 13.7204 2.77259 3.16585 .077934 1.21371 .999292 .032229 0 .064589 0 1 91 0 0 9.32285 14.0652 2.83321 1.87682 .038984 1.61792 .999376 .019715 1 .086323 0 1 92 .693147 0 29.1306 14.3805 2.89037 3.83173 .030874 3.42558 .999687 .117503 0 .148985 0 1 93 1.09861 0 23.7929 14.5855 2.94444 3.08877 .01225 4.19413 .999862 .171374 0 .181363 0 2 86 1.94591 1 2.67142 13.5351 1.60944 .90438 .031392 .284566 .997711 .172729 0 .116186 0 2 87 2.07944 1 1.85554 13.6068 1.79176 .783169 .037099 .28575 .997862 .055812 0 .137087 0 2 88 2.19723 1 3.25227 13.6162 1.94591 .857463 .046493 .264266 .99788 .052991 0 .174771 0 2 89 2.30258 1 2.46358 13.8247 2.07944 1.00449 .045589 .246997 .998208 .064097 0 .168786 0 2 90 2.3979 1 1.43551 13.8304 2.19723 .791431 .060575 .171494 .998218 .062911 0 .240464 0 2 91 0 0 1.10687 13.7423 2.30258 .532189 .071249 .164944 .998054 .093181 1 .351773 0 2 92 .693147 0 3.39252 13.8668 2.3979 1.80869 .121138 .177533 .998281 .090341 0 .282046 0 2 93 1.09861 0 3.95825 14.0244 2.48491 1.41083 .094626 .162305 .99847 .134091 0 .188627 0 3 86 .693147 0 5.01935 13.0392 3.49651 1.08849 .008833 .275658 .995814 .165765 0 .12684 0 3 87 1.09861 0 8.51978 13.0429 3.52636 .794968 .010574 .349996 .995351 .276396 0 2.49701 0 3 88 1.38629 0 13.1943 13.2777 
3.55535 1.36713 .043884 .409195 .996392 .079824 0 .033575 0 3 89 1.60944 0 18.7427 13.4562 3.58352 1.89782 .010373 .42366 .997045 .049833 0 .057621 0 3 90 1.79176 0 20.2185 13.4667 3.61092 1.69264 .016154 .339384 .997148 .133837 0 .133177 0 3 91 0 0 11.1153 13.9098 3.63759 1.50931 .010464 .935899 .998216 .12095 1 .089572 0 3 92 .693147 0 25.7134 14.1341 3.66356 2.41058 .004609 1.06214 .99856 .13175 0 .171943 0 3 93 1.09861 0 29.8983 14.162 3.68888 2.29729 .003891 .902802 .997648 .146949 0 .823985 0", header = TRUE)

I am taking the PS equation from your question, but normally I use the MatchBalance() function and its statistical tests to define the PS model specification. Your equation mentioned leverage and loss, but they're missing from the data, so I will exclude them below. Here's the propensity score (PS) model:

    # load the packages used below (speedglm for the PS model, Matching for Match)
    pacman::p_load(speedglm, Matching)

    form <- as.formula("switch ~ big4 + lnasset")
    mod  <- speedglm::speedglm(form, family = binomial(), fitted = T, data = data)
    summary(mod)   # note poor fit, but I will ignore this for the example

OK, now extract the propensity scores:

    data$fitted.values <- predict(mod)

Now do matching, and calculate quasi-experimental statistics, like Average effect of Treatment on the Treated (ATT) or the ATE:

    set.seed(1)                   # set a random seed
    atta <- Match(
      Y  = data$decost,           # I assume this is the outcome
      Tr = data$switch,           # Treatment/Control indicator
      X  = data$fitted.values,    # PS's
      estimand = "ATT",           # Outcome metric
      M  = 1,                     # 1-to-1 or 1-to-many matching
      ties = F,                   # T = VERY SLOW but higher quality
      replace = TRUE,
      exact = T,
      version = "fast"
    )
    summary(atta)                 # That gives you your result.

You should also do post hoc testing to make sure that treatment and control are NOT significantly different on any control variables, e.g. with MatchBalance() from the Matching package.
2021-07-26 04:06:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6626120209693909, "perplexity": 6928.267333263968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152000.25/warc/CC-MAIN-20210726031942-20210726061942-00217.warc.gz"}
https://www.physicsforums.com/threads/diff-eq-should-be-easy-but-my-answer-is-upside-down.349377/
# Diff Eq - Should be easy, but my answer is upside down!

1. Oct 27, 2009

### Wildcat04

1. The problem statement, all variables and given/known data

Find vz(r)
Boundary Conditions:
1. vz(Ro) = 0
2. vz(Ri) = W

2. Relevant equations

(1/r) d/dr [r dvz/dr] = 0 (from previous problem)

Let v = dvz / dr
d/dr [r v] = 0
r v = c1
v = c1 / r
vz = c1 ln r + c2

BC 1: 0 = c1 ln Ro + c2
c2 = - c1 ln Ro

BC 2: W = c1 ln Ri - c1 ln Ro
W = c1 (ln Ri - ln Ro)
W = c1 ln(Ri/Ro)
c1 = W / ln(Ri/Ro)

My Solution:
vz = (W ln r)/ln(Ri/Ro) - (W ln Ro)/ln(Ri/Ro)
vz = W[ln (r / Ro) / ln(Ri/Ro)]

Unfortunately, the answer key says it should be:
vz = W [ln (Ro / r) / ln(Ro/Ri)]

So I am close but no cigar. I have recompleted this problem several times and keep arriving at the same solution. Can anyone point out my mistake?

2. Oct 27, 2009

### foxjwill

Actually, the two answers are equivalent! Recall that
$$\frac{\ln\left(\frac{1}{a}\right)}{\ln\left(\frac{1}{b}\right)} = \frac{\ln\left(a^{-1}\right)}{\ln\left(b^{-1}\right)} = \frac{-\ln{a}}{-\ln{b}} = \frac{\ln{a}}{\ln{b}}$$

3. Oct 27, 2009

### Wildcat04

Hrmmm...I always forget about identities, whether it be sin / cos or in this case natural logs.

Thank you foxjwill, I thought this was an easy one but I couldn't figure out how to get the correct answer, when, all along, I had it!
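For anyone who wants to double-check the equivalence foxjwill points out, here is a short, illustrative SymPy sketch. The symbol names W, Ri, Ro and r are taken from the thread; it is only a verification aid, not part of the derivation itself.

```python
import sympy as sp

r, Ri, Ro, W = sp.symbols('r R_i R_o W', positive=True)

# The two forms of the solution discussed in the thread
vz_post = W * sp.log(r / Ro) / sp.log(Ri / Ro)   # original poster's answer
vz_key  = W * sp.log(Ro / r) / sp.log(Ro / Ri)   # answer key's answer

# Their difference simplifies to zero, so the two forms are identical
print(sp.simplify(sp.expand_log(vz_post - vz_key)))            # -> 0

# The solution satisfies the ODE (1/r) d/dr [ r dvz/dr ] = 0 ...
print(sp.simplify(sp.diff(r * sp.diff(vz_post, r), r) / r))    # -> 0

# ... and the boundary conditions vz(Ro) = 0 and vz(Ri) = W
print(sp.simplify(vz_post.subs(r, Ro)))                        # -> 0
print(sp.simplify(vz_post.subs(r, Ri)))                        # -> W
```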
2017-08-19 17:23:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6021728515625, "perplexity": 5217.583563278864}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105700.94/warc/CC-MAIN-20170819162833-20170819182833-00684.warc.gz"}
https://www.physicsforums.com/threads/ballistic-space-probes.718745/
# Ballistic space probes? 1. Oct 25, 2013 ### fulltime Hi. I am writing a scifi story that involves some (i hope) realistic science. The conceit is to show, in the near future, some use of cheap "ballistic" probes of the inner system and eventually TNS and heliopause. They would be small objects launched in large numbers, designed to gather very specific data and perhaps to work together somehow. I am thinking they would be launched directly from the surface, perhaps via a magnetic chute. Am i on the right track? Is something like this plausible? What would be the physical properties of the probes? What kind of construction would a launcher have? Last edited by a moderator: Oct 25, 2013 2. Oct 25, 2013 ### Bobbywhy fulltime, I imagine when you wrote "The conceit is to show," you meant "The concept is to show," What's the TNS? This Forum deals with real science. Fictional stories and scenarios, I'm pretty sure, are not dealt with here. Have you read the Forum Rules? Maybe a Mentor will assist us here. 3. Oct 25, 2013 ### Staff: Mentor We have a sci-fi subforum, including a writing sub-subforum. Hmm. This is beginning to sound like inception... 4. Oct 25, 2013 ### Staff: Mentor The main thing is that your probes would need to reach escape velocity within the time frame of the launch. Escape velocity for Earth's gravity is approximately 11 km/s, while for the Sun, seeing that we are already 1 AU out, is about 42 km/s. Launching from Mercury would require 67 km/s to escape the Sun's gravity. So, either your probes would need to be extremely durable and able to withstand very high G-forces, or your launch mechanism would need to accelerate them over a long period of time. Currently, we do the latter. We use rockets which give a low acceleration over a long period of time. 5. Oct 25, 2013 ### voko As long as your probe acquires the escape velocity, it is not impossible. The problem is in the method. The currently used method uses a two or three-stage rocket that puts a spaceship into a low Earth orbit. The spaceship, which may also include expendable stages, then accelerates and acquires the escape velocity. The critical part of this method is that the densest part of the atmosphere is traversed at a relatively low speed, so drag is manageable. Which is not very likely in your method. 6. Oct 25, 2013 ### Staff: Mentor This thread has now been moved to the Science Fiction Writing subforum. 7. Oct 25, 2013 ### Decimator Launching ballistically off Earth isn't really feasible for drag and compression heating reasons, but perhaps you could launch them off Luna with a large mass driver? 8. Oct 25, 2013 ### fulltime I came here looking for good advice from knowledgeable of people. First thing i get is someone going for the jugular with an ignorant jab at my writing style. Not cool. Trans neptunian space, an area extending from 30 to 600+ au, containing pluto and many large bodies (sedna, eris, makemake, etc), the keiper belt and scatter disk, and possibly the beginnings of the oort cloud. Of course science fiction stories can have real science in them... Lets say these are small - 1 cubic meter volume - bullet or sphere shaped payloads with no boosters of any kind. I assume that to be commercially viable the launchers would be electrically powered magnetic rails or some similar system which would use low friction in combination with a long ramp, perhaps down the side of a mountain and then up into a valley. 
The mechanics of that would be interesting to research too, but would a structure like that produce higher velocity launches, at a higher rate than launching rockets? Regarding managing drag - does that mean that fast movers will be very uncontrollable? Regarding durability. Are there extant materials and sensors which would suffice for these velocities? I imagine there are... 9. Oct 25, 2013 ### Enigman That was NOT a jab. Just a bit of well-meant advice about the forum you were posting in. You cannot use gravitational potential energy to overcome gravitational potential energy. So the mountains and valleys won't matter except to elongate the path. You will have to have a source of energy. As long as you can use the rails to power it up somehow it should be fine. Then comes the trajectory- it would have to be carefully calculated so as to avoid gravitational effect of planets and avoid all the debris in between as there's no steering mechanism. Also about physical property the projectile will have to face a lot of air friction and will get heated up- so it better be durable (I don't know enough physics to say what material). Last edited: Oct 25, 2013 10. Oct 25, 2013 ### voko "Managing" meant it could be overcome. In this case, you want an escape velocity close to the sea level. The force will be huge (meaning huge stress on the probe), and so will be the heating. Plus you will need to pump lots of energy into this over some very short time. Seems very unreasonable as a concept. I am not qualified to say whether this is impossible, but that seems so to me. 11. Oct 25, 2013 ### fulltime I read what you said, then used the forum tool to fix it on my end. Regarding using the slopes though, i wonder whether coriolis effect and gravity together provide more energy than just gravity... might just have to go and ask a stupid question in some wrong forum ;) I too am no expert but it seems reasonable enough to me, if there was a power station near by. Does anyone have any other opinions? 12. Oct 25, 2013 ### D H Staff Emeritus A power station is not going to help "manage drag". The solution to "managing drag" at low altitudes is exactly the same as the solution to "Doctor! It hurts when I do this:" «bonk!». 13. Oct 25, 2013 ### Hornbein This is a good idea, probes are moving in this direction, especially probes for magnetic fields. EM fields in vacuum are not remotely detectable, you have to have a probe there. The best way to get detail would be large numbers of small probes not too far from one another. The atmosphere is a huge problem, so I don't think anyone would ever do it this way. You want to launch from outside the atmosphere. So you could send a rail gun or something up on a rocket or up a tether and launch a large number of ballistic probes from up there. I'd do it via a large number of cheap solid state probes about the size of a fist that have no or almost no propulsion. No engine, no fuel. Each would be sort of like a rock, so no problems with acceleration. I think (maybe) they could network in some way to simulate a large antenna to transmit the data. Last edited: Oct 25, 2013 14. Oct 28, 2013 ### Ryan_m_b Staff Emeritus Reaching escape velocity at sea level means that the object would be travelling at Mach 32. I doubt there are many materials that could survive the heat from the dense lower atmosphere that would arise from travelling at that speed. 
Shooting things into space with a cannon has experimented with but there are a range of issues: http://en.wikipedia.org/wiki/Space_gun Your best bet for an SF novel is to propose something like a launch loop, space elevator or even some form of beam launch. http://en.wikipedia.org/wiki/Launch_loop http://en.wikipedia.org/wiki/Space_elevator http://en.wikipedia.org/wiki/Beam-powered_propulsion 15. Oct 28, 2013 ### voko Heat per se might be manageable via ablative cooling. What seems more of a problem here is the stress in the material of the probe produced by the drag. And the stress in the material when it gets accelerated to those speeds over some relatively short distance. More significantly, we need M32 just outside the atmosphere; which means we need a much greater speed at its bottom, which makes things much nastier. 16. Oct 28, 2013 ### Ryan_m_b Staff Emeritus Totally forgot to mention that, the escape velocity figure doesn't take into account how the atmosphere will slow you down. 17. Oct 28, 2013 ### voko Some back of envelope calculations. Drag is given by $c \rho(h) v^2$, where $c$ is some constant (I know it is not really constant). I will neglect gravity, too, because it will be much weaker than the drag at the speeds we are considering. $$m\dot v = - c \rho(h) v^2$$ Dividing both sides by $m \dot h = mv$ yields $$\frac {\dot v} {v} = - \frac c m \rho(h) \dot h$$ giving $$\ln \frac {v_f} {v_i} = - \frac c m \int\limits_0^h \rho(x) dx$$ Now, using the US Standard Atmosphere data published here: http://en.wikipedia.org/wiki/U.S._Standard_Atmosphere and Wolfram Alpha via http://www.wolframalpha.com/input/?...,+868},+{47,+111},+{51,+61},+{71,+4}},+x])+dx (somebody please check - much appreciated), for h = 100 km I obtained $$\ln \frac {v_f} {v_i} = -2 \cdot 10^9 \frac c m$$ At high Reynolds numbers, for a sphere, $c$ is approximately one quarter of the area of its cross-section. Assuming the probe to be spherical, with m = 100 kg and radius 0.3, $\frac c m \approx 0.001$, giving $$\ln \frac {v_f} {v_i} = -2 \cdot 10^6$$ which basically means whatever initial velocity we might have at the sea level, it is not possible to reach the LEO altitude with anything even remotely resembling the escape velocity. 18. Oct 28, 2013 ### fulltime I could do that thing and move my chute to the moon. But to do this i need some simple explanation of why sending people to the moon is cheap. Near future solutions arent cheap or likely. So - united alliance permanent moon mission? Pan oceanic factories? Asianese biodomes? As you can see, i like the earth launcher idea! A, it seems to comply with forum rules 100% and b it doesnt require any serious flights of fancy. If possible i want to use something commercial, something that could be built right now if only there was a will. Thank you for that, i im certain those space gun chels have calculated everything! This may be what im looking for. Its not much of a looker but its a proven concept, nearly perfect. It just needs to be sexified. In answer to ryan as well, this is interesting! I did some basic math using the american shuttle as a model (and discovered the laughable way the boosters are recovered from the ocean, at a huge cost in fuel for helicopters and craft), which seem to confirm that shooting things straight up is actually a good idea? It reduces the need for ablative armor as well i assume. Not great at maths though me... 
But the most intriguing thing about this is that there are sensors available right now that will easily withstand these g forces! There are even sensors that can assemble themselves after launch I believe, as well as purely mechanical sensors like chemical films for example, that can be imaged and transmitted. So what if these were small projectiles, like the shells fired from 100 mm guns mentioned in the space gun article? So as above I assume we need to make it smaller?

19. Oct 29, 2013 ### voko

The volume of a sphere is $\frac 4 3 \pi r^3$; thus its mass is $m = \frac 4 3 \rho_m \pi r^3$; its cross-section is $\pi r^2$, thus $c \approx \frac 1 4 \pi r^2$, so $\frac c m \approx \frac {3} {16 \rho_m r}$. So it is exactly the opposite of your intuition: the smaller the sphere is, the stronger the deceleration. The bigger the sphere, the lesser the deceleration. You need $\frac c m \approx 10^{-9} \frac {\text{m}^2} {\text{kg}}$ so that the initial and final velocities be in the same ballpark. That means $\rho_m r \approx 10^8 \frac {\text{kg}} {\text{m}^2}$. Taking $\rho_m \approx 2500 \frac {\text{kg}} {\text{m}^3}$ (a bit less dense than aluminum), $r \approx 40 \ \text{km}$, which is clearly impossible. The bottom line is that a purely ballistic launch from the Earth of an interplanetary probe is impossible.

20. Oct 29, 2013 ### Staff: Mentor

Plus, when you are shooting through the atmosphere (even assuming you shoot from high enough to overcome the drag problems voko explained) you have only a very rough control over the final trajectory.

21. Oct 29, 2013 ### D H Staff Emeritus

That interpolating polynomial yields nonsense above 55 km or so. It has pressure negative between ~56 km and ~70 km, and above 71 km pressure rises as a seventh order polynomial. As a general rule, you should never use an interpolating polynomial to extrapolate. That region from 71 km to 100 km is doing exactly that, and at 100 km, the pressure is greater than 1 atmosphere per the interpolating polynomial. It's better to do the integration from 0 to 55 km and use this as a lower bound. Your factor of 2×10^9 becomes 7×10^8. That is still a very bad result. You can't launch ballistically from the ground. Making it smaller makes the problem that much worse! Acceleration due to drag is proportional to cross section area and inversely proportional to mass. Cross section area is proportional to length squared while mass is proportional to length cubed. This is one of those places where the cube-square law says the bigger the better (so long as average density remains roughly constant). A boulder falls to the ground at roughly 9.81 m/s^2. Shave a tiny grain of dust off that rock and the grain of dust will remain suspended for hours, or even longer. I'll repeat what I said above: You can't launch ballistically from the ground.

22. Oct 29, 2013 ### fulltime

OK guys, thank you. Would a launch from the top of Kilimanjaro or Elbrus help? And if not, what if I did put it on the moon? I assume having these launchers in orbit would cost a lot, with fuel needed for the shooting to be straight.

23. Oct 29, 2013 ### voko

I made a silly mistake in #17. Instead of taking the values of density of the air when interpolating the air density function, I took the values of pressure. Plus there is the bad extrapolation as D H pointed out. Very embarrassing.
The correct density values taken from the original publication are:

| Altitude (km) | Density (kg/m^3) |
|---|---|
| 0 | 1.2250 |
| 1 | 1.1117 |
| 3 | 9.0925 × 10^-1 |
| 5 | 7.3643 × 10^-1 |
| 9 | 4.6706 × 10^-1 |
| 15 | 1.9476 × 10^-1 |
| 25 | 4.0084 × 10^-2 |
| 40 | 3.9957 × 10^-3 |
| 50 | 1.0269 × 10^-3 |
| 75 | 3.9921 × 10^-5 |
| 100 | 5.604 × 10^-7 |

This I simply integrate using the trapezoidal rule, which gives 10771 $\frac {\text{kg}} {\text{m}^2}$. So the result for a 0.3 m 100 kg sphere is $$\ln \frac {v_f} {v_i} = -20$$ which is still "impossible". However, this modifies the argument in #19 somewhat. It should read: You need $\frac c m \approx 10^{-4} \frac {\text{m}^2} {\text{kg}}$ so that the initial and final velocities be in the same ballpark. That means $\rho_m r \approx 2000 \frac {\text{kg}} {\text{m}^2}$. Taking $\rho_m \approx 2500 \frac {\text{kg}} {\text{m}^3}$ (a bit less dense than aluminum), $r \approx 0.8 \ \text{m}$, and the mass of 5.5 metric tons, which seems possible, but is actually still impossible due to the heat and stress.

24. Oct 29, 2013 ### Staff: Mentor

Moon would be my choice. And it was already mentioned much earlier in the thread.

25. Oct 29, 2013 ### voko

This seems much more likely, but. You would still have to shoot at a velocity way higher than 11.2 km/s. The Chelyabinsk meteor we all saw past winter disintegrated at 22 km altitude, at 15 km/s, which are milder conditions than your setup. Regardless, you still have the problem of how the probe is accelerated and how it withstands the acceleration.
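For readers who want to reproduce the corrected numbers in #23, the sketch below (illustrative Python, using the density table above and the 0.3 m / 100 kg sphere assumed in #17) recovers the 10771 kg/m^2 column density. The final value of ln(v_f/v_i) depends on what one takes for the drag constant c (the thread quotes -20, while the geometric estimate c ≈ (1/4)πr² gives roughly -8), but every reasonable choice leaves the probe with a vanishingly small fraction of its launch speed.

```python
import numpy as np

# Altitude (km) and air density (kg/m^3) samples quoted in post #23
alt_km = np.array([0, 1, 3, 5, 9, 15, 25, 40, 50, 75, 100])
rho    = np.array([1.2250, 1.1117, 9.0925e-1, 7.3643e-1, 4.6706e-1,
                   1.9476e-1, 4.0084e-2, 3.9957e-3, 1.0269e-3,
                   3.9921e-5, 5.604e-7])

# Column density: trapezoidal rule for the integral of rho dh, altitude in metres
h_m = alt_km * 1.0e3
column = float(np.sum(0.5 * (rho[1:] + rho[:-1]) * np.diff(h_m)))
print(round(column))            # 10771 kg/m^2, the value quoted in #23

# Drag relation from #17: ln(v_f / v_i) = -(c/m) * column
radius, mass = 0.3, 100.0       # sphere radius (m) and mass (kg) from the thread
c = 0.25 * np.pi * radius**2    # roughly a quarter of the cross-sectional area
print(-(c / mass) * column)     # about -7.6; the rounder c/m = 0.001 gives about -10.8
```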
2018-07-20 02:04:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5357809066772461, "perplexity": 1155.234816796202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591455.76/warc/CC-MAIN-20180720002543-20180720022543-00361.warc.gz"}
https://en.m.wikipedia.org/wiki/Binary_heap
# Binary heap

A binary heap is a heap data structure that takes the form of a binary tree. Binary heaps are a common way of implementing priority queues.[1]: 162–163  The binary heap was introduced by J. W. J. Williams in 1964, as a data structure for heapsort.[2]

Binary (min) heap
Type: binary tree/heap
Invented: 1964
Invented by: J. W. J. Williams

Time complexity in big O notation:

| Algorithm | Average | Worst case |
|---|---|---|
| Space | O(n) | O(n) |
| Search | O(n) | O(n) |
| Insert | O(1) | O(log n) |
| Find-min | O(1) | O(1) |
| Delete-min | O(log n) | O(log n) |

Example of a complete binary max-heap
Example of a complete binary min heap

A binary heap is defined as a binary tree with two additional constraints:[3]

• Shape property: a binary heap is a complete binary tree; that is, all levels of the tree, except possibly the last one (deepest) are fully filled, and, if the last level of the tree is not complete, the nodes of that level are filled from left to right.
• Heap property: the key stored in each node is either greater than or equal to (≥) or less than or equal to (≤) the keys in the node's children, according to some total order.

Heaps where the parent key is greater than or equal to (≥) the child keys are called max-heaps; those where it is less than or equal to (≤) are called min-heaps. Efficient (logarithmic time) algorithms are known for the two operations needed to implement a priority queue on a binary heap: inserting an element, and removing the smallest or largest element from a min-heap or max-heap, respectively. Binary heaps are also commonly employed in the heapsort sorting algorithm, which is an in-place algorithm because binary heaps can be implemented as an implicit data structure, storing keys in an array and using their relative positions within that array to represent child–parent relationships.

## Heap operations

Both the insert and remove operations modify the heap to conform to the shape property first, by adding or removing from the end of the heap. Then the heap property is restored by traversing up or down the heap. Both operations take O(log n) time.

### Insert

To add an element to a heap, we can perform this algorithm:

1. Add the element to the bottom level of the heap at the leftmost open space.
2. Compare the added element with its parent; if they are in the correct order, stop.
3. If not, swap the element with its parent and return to the previous step.

Steps 2 and 3, which restore the heap property by comparing and possibly swapping a node with its parent, are called the up-heap operation (also known as bubble-up, percolate-up, sift-up, trickle-up, swim-up, heapify-up, or cascade-up). The number of operations required depends only on the number of levels the new element must rise to satisfy the heap property. Thus, the insertion operation has a worst-case time complexity of O(log n). For a random heap, and for repeated insertions, the insertion operation has an average-case complexity of O(1).[4][5]

As an example of binary heap insertion, say we have a max-heap and we want to add the number 15 to the heap. We first place the 15 in the position marked by the X. However, the heap property is violated since 15 > 8, so we need to swap the 15 and the 8. So, we have the heap looking as follows after the first swap: However the heap property is still violated since 15 > 11, so we need to swap again: which is a valid max-heap.
There is no need to check the left child after this final step: at the start, the max-heap was valid, meaning the root was already greater than its left child, so replacing the root with an even greater value will maintain the property that each node is greater than its children (11 > 5; if 15 > 11, and 11 > 5, then 15 > 5, because of the transitive relation). ### Extract The procedure for deleting the root from the heap (effectively extracting the maximum element in a max-heap or the minimum element in a min-heap) while retaining the heap property is as follows: 1. Replace the root of the heap with the last element on the last level. 2. Compare the new root with its children; if they are in the correct order, stop. 3. If not, swap the element with one of its children and return to the previous step. (Swap with its smaller child in a min-heap and its larger child in a max-heap.) Steps 2 and 3, which restore the heap property by comparing and possibly swapping a node with one of its children, are called the down-heap (also known as bubble-down, percolate-down, sift-down, sink-down, trickle down, heapify-down, cascade-down, extract-min or extract-max, or simply heapify) operation. So, if we have the same max-heap as before We remove the 11 and replace it with the 4. Now the heap property is violated since 8 is greater than 4. In this case, swapping the two elements, 4 and 8, is enough to restore the heap property and we need not swap elements further: The downward-moving node is swapped with the larger of its children in a max-heap (in a min-heap it would be swapped with its smaller child), until it satisfies the heap property in its new position. This functionality is achieved by the Max-Heapify function as defined below in pseudocode for an array-backed heap A of length length(A). Note that A is indexed starting at 1. // Perform a down-heap or heapify-down operation for a max-heap // A: an array representing the heap, indexed starting at 1 // i: the index to start at when heapifying down Max-Heapify(A, i): left ← 2×i right ← 2×i + 1 largest ← i if left ≤ length(A) and A[left] > A[largest] then: largest ← left if right ≤ length(A) and A[right] > A[largest] then: largest ← right if largest ≠ i then: swap A[i] and A[largest] Max-Heapify(A, largest) For the above algorithm to correctly re-heapify the array, no nodes besides the node at index i and its two direct children can violate the heap property. The down-heap operation (without the preceding swap) can also be used to modify the value of the root, even when an element is not being deleted. In the worst case, the new root has to be swapped with its child on each level until it reaches the bottom level of the heap, meaning that the delete operation has a time complexity relative to the height of the tree, or O(log n). ### Insert then extract Inserting an element then extracting from the heap can be done more efficiently than simply calling the insert and extract functions defined above, which would involve both an upheap and downheap operation. Instead, we can do just a downheap operation, as follows: 1. Compare whether the item we're pushing or the peeked top of the heap is greater (assuming a max heap) 2. If the root of the heap is greater: 1. Replace the root with the new item 2. Down-heapify starting from the root 3. Else, return the item we're pushing Python provides such a function for insertion then extraction called "heappushpop", which is paraphrased below.[6][7] The heap array is assumed to have its first element at index 1. 
// Push a new item to a (max) heap and then extract the root of the resulting heap. // heap: an array representing the heap, indexed at 1 // item: an element to insert // Returns the greater of the two between item and the root of heap. Push-Pop(heap: List<T>, item: T) -> T: if heap is not empty and heap[1] > item then: // < if min heap swap heap[1] and item _downheap(heap starting from index 1) return item A similar function can be defined for popping and then inserting, which in Python is called "heapreplace": // Extract the root of the heap, and push a new item // heap: an array representing the heap, indexed at 1 // item: an element to insert // Returns the current root of heap Replace(heap: List<T>, item: T) -> T: swap heap[1] and item _downheap(heap starting from index 1) return item ### Search Finding an arbitrary element takes O(n) time. ### Delete Deleting an arbitrary element can be done as follows: 1. Find the index ${\displaystyle i}$  of the element we want to delete 2. Swap this element with the last element 3. Down-heapify or up-heapify to restore the heap property. In a max-heap (min-heap), up-heapify is only required when the new key of element ${\displaystyle i}$  is greater (smaller) than the previous one because only the heap-property of the parent element might be violated. Assuming that the heap-property was valid between element ${\displaystyle i}$  and its children before the element swap, it can't be violated by a now larger (smaller) key value. When the new key is less (greater) than the previous one then only a down-heapify is required because the heap-property might only be violated in the child elements. ### Decrease or increase key The decrease key operation replaces the value of a node with a given value with a lower value, and the increase key operation does the same but with a higher value. This involves finding the node with the given value, changing the value, and then down-heapifying or up-heapifying to restore the heap property. Decrease key can be done as follows: 1. Find the index of the element we want to modify 2. Decrease the value of the node 3. Down-heapify (assuming a max heap) to restore the heap property Increase key can be done as follows: 1. Find the index of the element we want to modify 2. Increase the value of the node 3. Up-heapify (assuming a max heap) to restore the heap property ## Building a heap Building a heap from an array of n input elements can be done by starting with an empty heap, then successively inserting each element. This approach, called Williams’ method after the inventor of binary heaps, is easily seen to run in O(n log n) time: it performs n insertions at O(log n) cost each.[a] However, Williams’ method is suboptimal. A faster method (due to Floyd[8]) starts by arbitrarily putting the elements on a binary tree, respecting the shape property (the tree could be represented by an array, see below). Then starting from the lowest level and moving upwards, sift the root of each subtree downward as in the deletion algorithm until the heap property is restored. More specifically if all the subtrees starting at some height ${\displaystyle h}$  have already been “heapified” (the bottommost level corresponding to ${\displaystyle h=0}$ ), the trees at height ${\displaystyle h+1}$  can be heapified by sending their root down along the path of maximum valued children when building a max-heap, or minimum valued children when building a min-heap. This process takes ${\displaystyle O(h)}$  operations (swaps) per node. 
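To make the bottom-up construction concrete before the cost analysis that follows, here is a minimal Python sketch (0-indexed array, max-heap; the function names are illustrative, and note that the pseudocode elsewhere in this article uses 1-based indices instead):

```python
def sift_down(a, i, n):
    """Restore the max-heap property for the subtree rooted at index i of the
    0-indexed array a[0:n], assuming both child subtrees are already heaps."""
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]   # swap and continue one level down
        i = largest


def build_max_heap(a):
    """Floyd's bottom-up construction: sift down every internal node, starting
    from the last one and working back to the root. Total cost is O(n)."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):       # indices n//2 .. n-1 are leaves
        sift_down(a, i, n)


data = [3, 9, 2, 1, 4, 5]
build_max_heap(data)
print(data)   # [9, 4, 5, 1, 3, 2]: every parent is >= its children
```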
In this method most of the heapification takes place in the lower levels. Since the height of the heap is ${\displaystyle \lfloor \log n\rfloor }$ , the number of nodes at height ${\displaystyle h}$  is ${\displaystyle \leq {\frac {2^{\lfloor \log n\rfloor }}{2^{h}}}\leq {\frac {n}{2^{h}}}}$ . Therefore, the cost of heapifying all subtrees is: {\displaystyle {\begin{aligned}\sum _{h=0}^{\lfloor \log n\rfloor }{\frac {n}{2^{h}}}O(h)&=O\left(n\sum _{h=0}^{\lfloor \log n\rfloor }{\frac {h}{2^{h}}}\right)\\&=O\left(n\sum _{h=0}^{\infty }{\frac {h}{2^{h}}}\right)\\&=O(n)\end{aligned}}} This uses the fact that the given infinite series ${\textstyle \sum _{i=0}^{\infty }i/2^{i}}$  converges. The exact value of the above (the worst-case number of comparisons during the heap construction) is known to be equal to: ${\displaystyle 2n-2s_{2}(n)-e_{2}(n)}$ ,[9][b] where s2(n) is the sum of all digits of the binary representation of n and e2(n) is the exponent of 2 in the prime factorization of n. The average case is more complex to analyze, but it can be shown to asymptotically approach 1.8814 n − 2 log2n + O(1) comparisons.[10][11] The Build-Max-Heap function that follows, converts an array A which stores a complete binary tree with n nodes to a max-heap by repeatedly using Max-Heapify (down-heapify for a max-heap) in a bottom-up manner. The array elements indexed by floor(n/2) + 1, floor(n/2) + 2, ..., n are all leaves for the tree (assuming that indices start at 1)—thus each is a one-element heap, and does not need to be down-heapified. Build-Max-Heap runs Max-Heapify on each of the remaining tree nodes. Build-Max-Heap (A): for each index i from floor(length(A)/2) downto 1 do: Max-Heapify(A, i) ## Heap implementation A small complete binary tree stored in an array Comparison between a binary heap and an array implementation. Heaps are commonly implemented with an array. Any binary tree can be stored in an array, but because a binary heap is always a complete binary tree, it can be stored compactly. No space is required for pointers; instead, the parent and children of each node can be found by arithmetic on array indices. These properties make this heap implementation a simple example of an implicit data structure or Ahnentafel list. Details depend on the root position, which in turn may depend on constraints of a programming language used for implementation, or programmer preference. Specifically, sometimes the root is placed at index 1, in order to simplify arithmetic. Let n be the number of elements in the heap and i be an arbitrary valid index of the array storing the heap. If the tree root is at index 0, with valid indices 0 through n − 1, then each element a at index i has • children at indices 2i + 1 and 2i + 2 • its parent at index floor((i − 1) ∕ 2). Alternatively, if the tree root is at index 1, with valid indices 1 through n, then each element a at index i has • children at indices 2i and 2i +1 • its parent at index floor(i ∕ 2). This implementation is used in the heapsort algorithm, where it allows the space in the input array to be reused to store the heap (i.e. the algorithm is done in-place). The implementation is also useful for use as a Priority queue where use of a dynamic array allows insertion of an unbounded number of items. The upheap/downheap operations can then be stated in terms of an array as follows: suppose that the heap property holds for the indices b, b+1, ..., e. The sift-down function extends the heap property to b−1, b, b+1, ..., e. 
The upheap/downheap operations can then be stated in terms of an array as follows: suppose that the heap property holds for the indices b, b+1, ..., e. The sift-down function extends the heap property to b−1, b, b+1, ..., e. Only index i = b−1 can violate the heap property. Let j be the index of the largest child of a[i] (for a max-heap, or the smallest child for a min-heap) within the range b, ..., e. (If no such index exists because 2i > e then the heap property holds for the newly extended range and nothing needs to be done.) By swapping the values a[i] and a[j] the heap property for position i is established. At this point, the only problem is that the heap property might not hold for index j. The sift-down function is applied tail-recursively to index j until the heap property is established for all elements.

The sift-down function is fast. In each step it only needs two comparisons and one swap. The index value where it is working doubles in each iteration, so that at most $\log_{2} e$ steps are required.

For big heaps and using virtual memory, storing elements in an array according to the above scheme is inefficient: (almost) every level is in a different page. B-heaps are binary heaps that keep subtrees in a single page, reducing the number of pages accessed by up to a factor of ten.[12]

The operation of merging two binary heaps takes Θ(n) for equal-sized heaps. The best you can do (in case of an array implementation) is simply concatenating the two heap arrays and building a heap of the result.[13] A heap on n elements can be merged with a heap on k elements using O(log n log k) key comparisons, or, in case of a pointer-based implementation, in O(log n log k) time.[14] An algorithm for splitting a heap on n elements into two heaps on k and n−k elements, respectively, based on a new view of heaps as ordered collections of subheaps, was presented in [15]. The algorithm requires O(log n · log n) comparisons. The view also presents a new and conceptually simple algorithm for merging heaps. When merging is a common task, a different heap implementation is recommended, such as binomial heaps, which can be merged in O(log n).

Additionally, a binary heap can be implemented with a traditional binary tree data structure, but there is an issue with finding the adjacent element on the last level of the binary heap when adding an element. This element can be determined algorithmically or by adding extra data to the nodes, called "threading" the tree: instead of merely storing references to the children, we store the inorder successor of the node as well.

It is possible to modify the heap structure to allow extraction of both the smallest and largest element in $O(\log n)$ time.[16] To do this, the rows alternate between min-heap and max-heap. The algorithms are roughly the same, but, in each step, one must consider the alternating rows with alternating comparisons. The performance is roughly the same as a normal single direction heap. This idea can be generalized to a min-max-median heap.

## Derivation of index equations

In an array-based heap, the children and parent of a node can be located via simple arithmetic on the node's index. This section derives the relevant equations for heaps with their root at index 0, with additional notes on heaps with their root at index 1. To avoid confusion, we'll define the level of a node as its distance from the root, such that the root itself occupies level 0.

### Child nodes

For a general node located at index i (beginning from 0), we will first derive the index of its right child, $\text{right} = 2i + 2$.

Let node i be located in level L, and note that any level l contains exactly $2^{l}$ nodes.
Furthermore, there are exactly $2^{l+1}-1$ nodes contained in the layers up to and including layer l (think of binary arithmetic; 0111...111 = 1000...000 − 1). Because the root is stored at 0, the kth node will be stored at index $(k-1)$. Putting these observations together yields the following expression for the index of the last node in layer l:

$$\text{last}(l) = (2^{l+1}-1)-1 = 2^{l+1}-2$$

Let there be j nodes after node i in layer L, such that

$$\begin{aligned} i &= \text{last}(L) - j\\ &= (2^{L+1}-2) - j \end{aligned}$$

Each of these j nodes must have exactly 2 children, so there must be $2j$ nodes separating i's right child from the end of its layer ($L+1$):

$$\begin{aligned} \text{right} &= \text{last}(L+1) - 2j\\ &= (2^{L+2}-2) - 2j\\ &= 2(2^{L+1}-2-j) + 2\\ &= 2i + 2 \end{aligned}$$

As required. Noting that the left child of any node is always 1 place before its right child, we get $\text{left} = 2i + 1$.

If the root is located at index 1 instead of 0, the last node in each level is instead at index $2^{l+1}-1$. Using this throughout yields $\text{left} = 2i$ and $\text{right} = 2i + 1$ for heaps with their root at 1.

### Parent node

Every node is either the left or right child of its parent, so we know that one of the following is true:

1. $i = 2 \times (\text{parent}) + 1$
2. $i = 2 \times (\text{parent}) + 2$

Hence, $\text{parent} = \frac{i-1}{2} \;\text{or}\; \frac{i-2}{2}$.

Now consider the expression $\left\lfloor \frac{i-1}{2}\right\rfloor$. If node $i$ is a left child, this gives the result immediately; however, it also gives the correct result if node $i$ is a right child. In this case, $(i-2)$ must be even, and hence $(i-1)$ must be odd.

$$\begin{aligned} \left\lfloor \frac{i-1}{2}\right\rfloor &= \left\lfloor \frac{i-2}{2} + \frac{1}{2}\right\rfloor\\ &= \frac{i-2}{2}\\ &= \text{parent} \end{aligned}$$

Therefore, irrespective of whether a node is a left or right child, its parent can be found by the expression:

$$\text{parent} = \left\lfloor \frac{i-1}{2}\right\rfloor$$
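These formulas are easy to sanity-check in code. A small Python sketch (root at index 0; the helper names are not from the article):

```python
def parent(i):
    return (i - 1) // 2

def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2

# Every node i > 0 is either the left or the right child of its parent,
# exactly as derived above.
for i in range(1, 1000):
    assert i in (left(parent(i)), right(parent(i)))
```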
## Related structures

Since the ordering of siblings in a heap is not specified by the heap property, a single node's two children can be freely interchanged unless doing so violates the shape property (compare with treap). Note, however, that in the common array-based heap, simply swapping the children might also necessitate moving the children's sub-tree nodes to retain the heap property.

The binary heap is a special case of the d-ary heap in which d = 2.

## Summary of running times

Here are time complexities[17] of various heap data structures. Function names assume a min-heap. For the meaning of "O(f)" and "Θ(f)" see Big O notation.

| Operation | find-min | delete-min | insert | decrease-key | meld |
|---|---|---|---|---|---|
| Binary[17] | Θ(1) | Θ(log n) | O(log n) | O(log n) | Θ(n) |
| Leftist | Θ(1) | Θ(log n) | Θ(log n) | O(log n) | Θ(log n) |
| Binomial[17][18] | Θ(1) | Θ(log n) | Θ(1)[c] | Θ(log n) | O(log n)[d] |
| Fibonacci[17][19] | Θ(1) | O(log n)[c] | Θ(1) | Θ(1)[c] | Θ(1) |
| Pairing[20] | Θ(1) | O(log n)[c] | Θ(1) | o(log n)[c][e] | Θ(1) |
| Brodal[23][f] | Θ(1) | O(log n) | Θ(1) | Θ(1) | Θ(1) |
| Rank-pairing[25] | Θ(1) | O(log n)[c] | Θ(1) | Θ(1)[c] | Θ(1) |
| Strict Fibonacci[26] | Θ(1) | O(log n) | Θ(1) | Θ(1) | Θ(1) |
| 2–3 heap[27] | O(log n) | O(log n)[c] | O(log n)[c] | Θ(1) | ? |

1. ^ In fact, this procedure can be shown to take Θ(n log n) time in the worst case, meaning that n log n is also an asymptotic lower bound on the complexity.[1]: 167  In the average case (averaging over all permutations of n inputs), though, the method takes linear time.[8]
2. ^ This does not mean that sorting can be done in linear time since building a heap is only the first step of the heapsort algorithm.
3. ^ Amortized time.
4. ^ n is the size of the larger heap.
5. ^ Lower bound of $\Omega(\log \log n)$,[21] upper bound of $O(2^{2\sqrt{\log \log n}})$.[22]
6. ^ Brodal and Okasaki later describe a persistent variant with the same bounds except for decrease-key, which is not supported. Heaps with n elements can be constructed bottom-up in O(n).[24]

## References

1. ^ a b Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. ISBN 0-262-03384-4.
2. ^ Williams, J. W. J. (1964), "Algorithm 232 - Heapsort", Communications of the ACM, 7 (6): 347–348, doi:10.1145/512274.512284
3. ^ eEL, CSA_Dept, IISc, Bangalore, "Binary Heaps", Data Structures and Algorithms.
4. ^ Porter, Thomas; Simon, Istvan (Sep 1975). "Random insertion into a priority queue structure". IEEE Transactions on Software Engineering. SE-1 (3): 292–298. doi:10.1109/TSE.1975.6312854. ISSN 1939-3520. S2CID 18907513.
5. ^ Mehlhorn, Kurt; Tsakalidis, A. (Feb 1989). "Data structures": 27. "Porter and Simon [171] analyzed the average cost of inserting a random element into a random heap in terms of exchanges. They proved that this average is bounded by the constant 1.61. Their proof does not generalize to sequences of insertions since random insertions into random heaps do not create random heaps. The repeated insertion problem was solved by Bollobas and Simon [27]; they show that the expected number of exchanges is bounded by 1.7645. The worst-case cost of inserts and deletemins was studied by Gonnet and Munro [84]; they give log log n + O(1) and log n + log n* + O(1) bounds for the number of comparisons respectively."
6. ^ "python/cpython/heapq.py". GitHub. Retrieved 2020-08-07.
7. ^ "heapq — Heap queue algorithm — Python 3.8.5 documentation". docs.python.org. Retrieved 2020-08-07. "heapq.heappushpop(heap, item): Push item on the heap, then pop and return the smallest item from the heap. The combined action runs more efficiently than heappush() followed by a separate call to heappop()."
8. ^ a b Hayward, Ryan; McDiarmid, Colin (1991). "Average Case Analysis of Heap Building by Repeated Insertion" (PDF). J. Algorithms. 12: 126–153. CiteSeerX 10.1.1.353.7888. doi:10.1016/0196-6774(91)90027-v. Archived from the original (PDF) on 2016-02-05. Retrieved 2016-01-28.
9. ^ Suchenek, Marek A. (2012), "Elementary Yet Precise Worst-Case Analysis of Floyd's Heap-Construction Program", Fundamenta Informaticae, 120 (1): 75–92, doi:10.3233/FI-2012-751.
(2012), "Elementary Yet Precise Worst-Case Analysis of Floyd's Heap-Construction Program", Fundamenta Informaticae, 120 (1): 75–92, doi:10.3233/FI-2012-751. 10. ^ Doberkat, Ernst E. (May 1984). "An Average Case Analysis of Floyd's Algorithm to Construct Heaps" (PDF). Information and Control. 6 (2): 114–131. doi:10.1016/S0019-9958(84)80053-4. 11. ^ Pasanen, Tomi (November 1996). Elementary Average Case Analysis of Floyd's Algorithm to Construct Heaps (Technical report). Turku Centre for Computer Science. CiteSeerX 10.1.1.15.9526. ISBN 951-650-888-X. TUCS Technical Report No. 64. Note that this paper uses Floyd's original terminology "siftup" for what is now called sifting down. 12. ^ Kamp, Poul-Henning (June 11, 2010). "You're Doing It Wrong". ACM Queue. Vol. 8 no. 6. 13. ^ Chris L. Kuszmaul. "binary heap" Archived 2008-08-08 at the Wayback Machine. Dictionary of Algorithms and Data Structures, Paul E. Black, ed., U.S. National Institute of Standards and Technology. 16 November 2009. 14. ^ J.-R. Sack and T. Strothotte "An Algorithm for Merging Heaps", Acta Informatica 22, 171-186 (1985). 15. ^ Sack, Jörg-Rüdiger; Strothotte, Thomas (1990). "A characterization of heaps and its applications". Information and Computation. 86: 69–86. doi:10.1016/0890-5401(90)90026-E. 16. ^ Atkinson, M.D.; J.-R. Sack; N. Santoro & T. Strothotte (1 October 1986). "Min-max heaps and generalized priority queues" (PDF). Programming techniques and Data structures. Comm. ACM, 29(10): 996–1000. 17. ^ a b c d Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. (1990). Introduction to Algorithms (1st ed.). MIT Press and McGraw-Hill. ISBN 0-262-03141-8. 18. ^ "Binomial Heap | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2019-09-30. 19. ^ 20. ^ Iacono, John (2000), "Improved upper bounds for pairing heaps", Proc. 7th Scandinavian Workshop on Algorithm Theory (PDF), Lecture Notes in Computer Science, 1851, Springer-Verlag, pp. 63–77, arXiv:1110.4428, CiteSeerX 10.1.1.748.7812, doi:10.1007/3-540-44985-X_5, ISBN 3-540-67690-2 21. ^ Fredman, Michael Lawrence (July 1999). "On the Efficiency of Pairing Heaps and Related Data Structures" (PDF). Journal of the Association for Computing Machinery. 46 (4): 473–501. doi:10.1145/320211.320214. 22. ^ Pettie, Seth (2005). Towards a Final Analysis of Pairing Heaps (PDF). FOCS '05 Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science. pp. 174–183. CiteSeerX 10.1.1.549.471. doi:10.1109/SFCS.2005.75. ISBN 0-7695-2468-0. 23. ^ Brodal, Gerth S. (1996), "Worst-Case Efficient Priority Queues" (PDF), Proc. 7th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 52–58 24. ^ Goodrich, Michael T.; Tamassia, Roberto (2004). "7.3.6. Bottom-Up Heap Construction". Data Structures and Algorithms in Java (3rd ed.). pp. 338–341. ISBN 0-471-46983-1. 25. ^ Haeupler, Bernhard; Sen, Siddhartha; Tarjan, Robert E. (November 2011). "Rank-pairing heaps" (PDF). SIAM J. Computing. 40 (6): 1463–1485. doi:10.1137/100785351. 26. ^ Brodal, Gerth Stølting; Lagogiannis, George; Tarjan, Robert E. (2012). Strict Fibonacci heaps (PDF). Proceedings of the 44th symposium on Theory of Computing - STOC '12. pp. 1177–1184. CiteSeerX 10.1.1.233.1740. doi:10.1145/2213977.2214082. ISBN 978-1-4503-1245-5. 27. ^ Takaoka, Tadao (1999), Theory of 2–3 Heaps (PDF), p. 12
2022-01-22 03:41:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 40, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6250526309013367, "perplexity": 2150.722612336687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303729.69/warc/CC-MAIN-20220122012907-20220122042907-00003.warc.gz"}
https://stat.ethz.ch/pipermail/r-help/2009-May/390319.html
# [R] FOURIER INTEGRALS IN R

andrew andrewjohnroyal at gmail.com
Wed May 6 00:53:28 CEST 2009

Isn't it possible to write this as the real part of a Fourier transform?
You could then use the fft to calculate it for all the values of x, all at once.
The Madan and Carr paper offers some pointers on how to do this:

http://www.imub.ub.es/events/sssf/vgfrier7.pdf

> What are the limits of integration: 0 to 1 or 0 to infinity?  Main challenge
> is that it is increasingly oscillatory as x and/or t increase.  You can find
> the zeros of the cosine function and add up the integrals between successive
> zeros.  In what context does this integral arise?  It must have been
> studied well using asymptotic approximation and such.
>
> Ravi.
>
> Assistant Professor, The Center on Aging and Health
> Division of Geriatric Medicine and Gerontology
> Johns Hopkins University
> Ph: (410) 502-2619
> Fax: (410) 614-9625
>
> -----Original Message-----
> From: r-help-boun... at r-project.org [mailto:r-help-boun... at r-project.org] On Behalf Of A Achilleos
> Sent: Tuesday, May 05, 2009 10:18 AM
> To: r-h... at r-project.org
> Subject: Re: [R] FOURIER INTEGRALS IN R
>
> Ok thanks..
>
> No, my function is not smooth.
>
> Actually, I am dealing with an integral having the following form, for
> example:
>
> \int cos(tx) (1-t^2)^3 \exp(0.5*t^2) dt
>
> I want to estimate this Fourier Cosine integral for a given value of x.
>
> Thanks for the help.
>
> AA
>
> On Tue, May 5, 2009 2:34 am, andrew wrote:
> > integrate offers some one-dimensional algorithms, but you need to
> > start with a smooth function to get it to converge properly.  With a
> > cosine integral, there may be certain routines that offer better value
> > for money: the Clenshaw-Curtis integration, or perhaps the FFT.  You
> > would have to recast your problem by doing some sort of substitution.
> >
> > Perhaps post some latex code to show the exact type of integral you
> > are wanting to calculate.
> >
> > Regards,
> >
> > On May 5, 6:32 am, "Achilleas Achilleos" <ma... at bristol.ac.uk> wrote:
> >> Hi,
> >>
> >> I am wondering whether there exist any function in R (any package)
> >> that calculates Fourier Integrals.
> >>
> >> Particularly, I am interested for estimation of a Cosine Fourier
> >> integral...
> >>
> >> I would be much obliged if you could help me on this..
> >>
> >> Thanks.
> >>
> >> Andreas
>
> ----------------------
> A Achilleos
> ma... at bristol.ac.uk
2020-01-25 07:47:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8406504392623901, "perplexity": 14129.217024349802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251671078.88/warc/CC-MAIN-20200125071430-20200125100430-00112.warc.gz"}
http://fpish.net/blog/CKoenig/id/2125/http~3a~2f~2fgettingsharper.de~2f~3fp~3d270
0 comments on 12/5/2011 4:00 PM

Finally we will produce some output. And after the work we did so far it will be rather easy. The last "hard" part will be the shading:

First (very) simple shading: the idea is very simple. An object will reflect … Read more →
2017-04-27 11:10:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8421751260757446, "perplexity": 1851.818248503284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122159.33/warc/CC-MAIN-20170423031202-00042-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.cuemath.com/ncert-solutions/q-2-exercise-15-2-probability-class-10-maths/
# Ex.15.2 Q2 Probability Solution - NCERT Maths Class 10

## Question

A die is numbered in such a way that its faces show the numbers $$1, 2, 2, 3, 3, 6$$. It is thrown two times and the total score in two throws is noted. Complete the following table, which gives a few values of the total score on the two throws:

What is the probability that the total score is (i) even? (ii) $$6$$? (iii) at least $$6$$?

## Text Solution

What is known?

A die is numbered in such a way that its faces show the numbers $$1, 2, 2, 3, 3, 6$$. It is thrown two times and the total score in two throws is noted.

What is unknown?

What is the probability that the total score is (i) even? (ii) $$6$$? (iii) at least $$6$$?

Reasoning:

To solve this question, first find out the total number of outcomes and all the possible outcomes. Now, to find the probability use the formula given below:

\begin{align}\text{Probability}=\frac{\text{No of possible outcomes}}{\text{Total no of outcomes}}\end{align}

Step:

| + | 1 | 2 | 2 | 3 | 3 | 6 |
|---|---|---|---|---|---|---|
| 1 | 2 | 3 | 3 | 4 | 4 | 7 |
| 2 | 3 | 4 | 4 | 5 | 5 | 8 |
| 2 | 3 | 4 | 4 | 5 | 5 | 8 |
| 3 | 4 | 5 | 5 | 6 | 6 | 9 |
| 3 | 4 | 5 | 5 | 6 | 6 | 9 |
| 6 | 7 | 8 | 8 | 9 | 9 | 12 |

Total number of possible outcomes $$= 6 \times 6 = 36$$

(i) No of possible outcomes when the sum is even $$= 18$$

Probability that the total score is even

\begin{align} & =\frac{\text{No of possible outcomes}}{\text{Total no of outcomes}} \\& =\frac{18}{36}\\&=\frac{1}{2} \\\end{align}

(ii) No of possible outcomes when the sum is $$6 = 4$$

Probability of getting the sum $$6$$

\begin{align} & =\frac{\text{No of possible outcomes}}{\text{Total no of outcomes}} \\& =\frac{4}{36}\\&=\frac{1}{9}\end{align}

(iii) No of possible outcomes when the sum is at least $$6$$ (greater than $$5$$) $$= 15$$

Probability of getting the sum at least $$6$$

\begin{align} & =\frac{\text{No of possible outcomes}}{\text{Total no of outcomes}} \\& =\frac{15}{36}\\&=\frac{5}{12}\end{align}
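As a quick cross-check of these counts (not part of the original solution), all 36 equally likely ordered outcomes can be enumerated, for example in Python:

```python
from fractions import Fraction
from itertools import product

faces = [1, 2, 2, 3, 3, 6]
totals = [a + b for a, b in product(faces, repeat=2)]   # all 36 ordered outcomes

n = len(totals)                                          # 36
print(Fraction(sum(t % 2 == 0 for t in totals), n))      # 1/2  -> P(total is even)
print(Fraction(sum(t == 6 for t in totals), n))          # 1/9  -> P(total = 6)
print(Fraction(sum(t >= 6 for t in totals), n))          # 5/12 -> P(total >= 6)
```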
2019-10-21 23:35:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 8, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9422585368156433, "perplexity": 633.1548678990636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795253.70/warc/CC-MAIN-20191021221245-20191022004745-00215.warc.gz"}
https://www.anl.gov/article/katie-martin-keeps-the-advanced-photon-source-upgrade-project-on-track
# Argonne National Laboratory Article | Argonne National Laboratory # Katie Martin keeps the Advanced Photon Source Upgrade project on track As Project Controls manager, Katie Martin’s job is to account for the APS Upgrade’s schedule and its $815 million budget, down to the penny. Katie Martin knows the value of a long-term investment. As a Project Controls manager for the U.S. Department of Energy’s (DOE) Argonne National Laboratory, Martin’s job is to manage the cost and schedule for long-term construction projects, working with the scientists and engineers to keep things on time and within budget. Some of these projects can span years and cost hundreds of millions of dollars, and Martin keeps track of every day and every dime. She’s also living proof that long-term investments pay off. Martin began working as an administrative assistant at Argonne in 2007, as part of a co-op program in high school. She stayed with the laboratory and worked full time while attending DeVry University part time, and Argonne provided financial assistance for her college education. Getting to work with these scientists and engineers, some of the best in the world, is a privilege.” — Katie Martin, Project Management Office, Argonne National Laboratory. The lab invested in me,” she said. That is really comforting, and has kept me going. I was taking 20 credit hours a semester, juggling classes and full-time work. They saw something in me, and helped me to go to school so they could keep me on.” For most of that time Martin has been working in the Project Management Office, which serves as a conduit between project teams and the DOE. As part of this team, Martin has worked on several construction projects at the laboratory, including the Energy Sciences Building and the Advanced Protein Characterization Facility, helping the project teams deliver on timing and costs. After working on smaller efforts at the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne, Martin now leads a team that manages the cost and schedule for the massive ongoing upgrade to the facility. The APS Upgrade will see the current particle accelerator at the heart of the facility replaced with a state-of-the-art model, one that will increase the brightness of the X-ray beams by up to 500 times. New research stations will be built and existing stations modified or enhanced to make use of the new high-brightness light source. With a projected cost of$815 million and a year-long installation period required for the new accelerator (during which the X-ray beams will be shut down), the APS Upgrade has a lot of moving parts, and Martin’s job is to keep her eye on each one of them. We work with all of the technical teams, the scientists and engineers designing and building each component,” Martin said. We ask them to explain components to us, so we can break it down into a manageable process — where they order it from, the cost, when they get it — and we report monthly to the DOE, so they can see we’re performing to their standards.” Those reports from Martin’s team are vital to maintaining close coordination with DOE’s Office of Basic Energy Sciences, which funds the project. Her team also keeps the master schedule for the project, tracking the timeliness of hundreds of vendors delivering important, complicated parts to order. Every change to the cost or schedule, no matter how small, has to be accounted for, with the proper forms filled out and regulations followed. 
All of this careful management means that Martin was among the first to realize the impact the COVID-19 pandemic would have on the APS Upgrade project. She tracked delays from the project’s vendors — 20,000 different activities, she said, linked to other supply lines around the world — and used that information to build a case to the DOE for a change in the upgrade’s schedule. In May 2021, the project announced a new date for the start of the installation period: April 2023, a change of 10 months from the original schedule. People were expecting it and glad to have a final decision,” Martin said. There were so many delays, this was our only choice. I’m relieved that the new schedule is better for the project as a whole.” The APS Upgrade is the largest and most complicated project Martin has been a part of — I have a lot of balls in the air at all times,” she said — and she is grateful for the mentors who invested in her, sharing their knowledge of the job. I could not have gotten this far without a lot of people teaching me a lot of things,” she said. There’s a lot I am now able to pass on to my team.” And she has nothing but positive words to say about working with the APS Upgrade team, and playing a part in a project that she knows will lead to positive changes in the world, from new energy storage devices to new treatments and vaccines for diseases. This has been one of the best projects I have worked on,” she said. Everyone is so passionate about the work they’re doing. The APS Upgrade will directly impact the future of science in our country and around the world, and everyone realizes that. Getting to work with these scientists and engineers, some of the best in the world, is a privilege.”
2022-01-27 05:50:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26060205698013306, "perplexity": 1581.7398076193967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305141.20/warc/CC-MAIN-20220127042833-20220127072833-00244.warc.gz"}
https://venuthatikonda.github.io/blog/stupidity/
# Code, Pizza and my Stupidity ### Morning 09:00 Saturday morning. Quite a peaceful day. I am so excited to write the code for a web application that I have been working on. Little did I know that I am going to spend the entire day debugging an error that is hiding in my code. I had a strong espresso before sitting in front of my laptop and divided the task into three components. • Take input data files from user • Perform calculations on received data • Show the results to user I started writing code for the first part. At each stage I was testing how the application is going to look in the browser. I was careful enough not to miss any semicolons (;). Because, I remember ### Afternoon 13:00 By this time I successfully wrote the code to take-in the user provided data into server to perform some calculations. Me: I feel Good. Brain: Enough, Go grab a coffee. I had one more espresso, this time Indian Sundara (Highly recommended). While having coffee, I mentally drew a framework how I am going to write the code to perform calculations. Task started again. Sublime page is being filled with the code to perform calculations. This time it took three hours to complete the code including the code for final task, “showing results to user”. It’s time for testing. • Input some test data • Start calculations PROGRAM RUNNING . . . It showed something. It was not the results but an error. Nooo. I don’t want this to be happening. I closed the application and tested it again. Of course the error didn’t correct by itself. I carefully gone through the entire code scanning for any obvious bugs. I didn’t find any. I tried to print exactly the input data as result on the screen without performing any calculations. It showed perfectly. I concluded that the error is in the code to perform calculations. I started probing the internet for this kind of error. I found some similar ones but not the exact one. I tried different debugging methods, some of which I already knew and some are completely new. I got to know different debugging methods which I didn’t know before. Different methods work for different bugs. But none worked for mine. It seems the error is not the usual one, if it is, it should’ve been caught by one of these debugging methods. ### Evening, 18:00 Me: Heck, this err.. Brain: Stop complaining, go get a coffee and write the entire code again. Me: Okay, as you say! I started writing entire code again. This time with utmost care, hoping to avoid the error. After 90-100 minutes of coding, I tested the application again. ### Error Oh God! Do you even know that I exist in this world. I know you are busy but at least help me once in a decade What is wrong with this code, I yelled. Again, I started probing the internet. Every time, I found the same solutions which didn’t work. I checked the code again. Tested the application with different test data. Printed again the input without calculations. Came to the same conclusion that the error is in code to do calculations. Brain: I don’t think you are a good fit for this job. Quit and look for a simple and easy job. Me: ### Night: 23:00 Aargh! I am feeling hungry. Let’s work on this tomorrow, I thought. I went to buy pizza, yes I love pizza at midnight, at a shop across the street. I was walking down the empty streets filled with darkness and dim lights, with my pizza. All of a sudden something striked in my mind. 
In simplest explanation, what I was writing in the code is the following “II x 2” and expecting it to give 4 Obviously, program is unable to understand two different formats (roman number and numeric), that was what causing the error. I have to convert one format type to the other in order to get the results. That is “2 x 2” which should give 4 Yes, that was the error. I rushed back home, modified the code and tested the application. Guess what ? Yes, my code is working perfect. Aww! I am such a stupid. This is a simple bug. I spent almost 6-7 hours to find this. #### Did I simply waste 6-7 hours ? No, all those hours were not waste. In the process • I got more command on the programming language • I learnt more debugging methods which are definitely useful in the long run and most importantly, • I learnt patience being a millennial • I learnt to overcome self-doubt inner voice • I learnt that when things don’t work, I should leave everything and go out for a walk (and buy pizza) • I learnt that I’m a STUPID
2017-12-16 22:11:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32024073600769043, "perplexity": 1422.4293066902912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948589512.63/warc/CC-MAIN-20171216220904-20171217002904-00420.warc.gz"}
https://chemistry.stackexchange.com/questions/123013/optical-activity-of-cis-coordination-compounds
# Optical activity of cis coordination compounds I came across a statement saying, "Consider a Octahedral coordination compound of type [MA2X2]. Then, if the compound is in a cis form, it shows optical activity as there is no plane of symmetry and hence is non-superimposable on its mirror image. On the other hand, a trans form is symmetric and superimposable on its mirror image and so is optically inactive." I do not see why the cis form is not superimposable on its mirror image. If all the ligands lie on the same plane, then isn't there a line of symmetry passing diagonally? Does that not make it superimposable on its mirror image? • $\ce{[CoCl2(en)2]+}$ isn't square planar. Please add a reference for the quoted part. – andselisk Oct 28 '19 at 5:17 • Point 1. The compound is not planar(it is octahedral). Point 2. Line/Axis of symmetry doesn't affect chirality. Only POS,COS,alternating Axis of symmetry does. But for any planar molecule, molecular plane is itself a plane of symmetry so planar compounds are always achiral – user600016 Oct 28 '19 at 5:17 • ah duly noted. Turns out I got the structures messed up. I will edit my question for future viewers. – The Jade Reaper Oct 29 '19 at 1:40 If we do that with $$\ce{[Co(en)2Cl2]}$$, we get the following two enantiomers which you cannot convert into each other: The complex is chiral because the $$\ce{en}$$ ligands necessarily cross the paper plane and can do so either in a right-turn screw motion or in a left-turn screw motion.
2021-08-01 18:31:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5556369423866272, "perplexity": 1558.2527273112953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154214.63/warc/CC-MAIN-20210801154943-20210801184943-00325.warc.gz"}
https://www.baryonbib.org/bib/f78b29ca-d6c7-474a-b63b-f7c5700a8826
PREPRINT # Shape asymmetries and lopsidedness-radial-alignment in simulated galaxies Jinzhi Shen, Xufen Wu, Yirui Zheng, Beibei Guo Submitted on 7 November 2022 ## Abstract Galaxies are observed to be lopsided, meaning that they are more massive and more extended along one direction than the opposite. However, the galaxies generated in cosmological simulations are much less lopsided, inconsistent with observations. In this work, we provide a statistical analysis of the lopsided morphology of 2148 simulated isolated satellite galaxies generated by TNG50-1 simulation, incorporating the effect of tidal fields from halo centres. We study the radial alignment (RA) between the major axes of satellites and the radial direction of their halo centres within truncation radii of $3{R}_{h}$, $5{R}_{h}$ and $10{R}_{h}$. According to our results, RA is absent for all these truncations. We also calculate the far-to-near-side semi-axial ratios of the major axes, denoted by ${a}_{-}/{a}_{+}$, which measures the semi-axial ratios of the major axes in the hemispheres between backwards (far-side) and facing (near-side) the halo centres. If the satellites are truncated within radii of $3{R}_{h}$ and $5{R}_{h}$ with ${R}_{h}$ being the stellar half mass radius, the numbers of satellites with longer semi-axes on the far-side are found to be almost equal to those with longer semi-axes on the near-side. Within a larger truncated radius of $10{R}_{h}$, the number of satellites with axial ratios ${a}_{-}/{a}_{+}<1.0$ is about $10\mathrm{%}$ more than that with ${a}_{-}/{a}_{+}>1.0$. Therefore, the tidal fields from halo centres play a minor role in the generation of lopsided satellites. The lopsidedness radial alignment (LRA), i.e., an alignment of long semi-major-axes along the radial direction of halo centres, is further studied. No clear evidence of LRA is found in our sample within the framework of $\mathrm{\Lambda }$CDM Newtonian dynamics. In comparison, the LRA can be naturally induced by the external fields from the central host galaxy in Milgromian dynamics. (See paper for full abstract) ## Preprint Comment: 16 pages, 12 figures, 3 tables, submitted to MNRAS Subject: Astrophysics - Astrophysics of Galaxies
2022-12-09 13:23:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 12, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5589531064033508, "perplexity": 2098.19761855332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711396.19/warc/CC-MAIN-20221209112528-20221209142528-00150.warc.gz"}
https://planetmath.org/ConjugateFields
# conjugate fields

If $\vartheta_{1},\,\vartheta_{2},\,\ldots,\,\vartheta_{n}$ are the algebraic conjugates of the algebraic number $\vartheta_{1}$, then the algebraic number fields $\mathbb{Q}(\vartheta_{1}),\,\mathbb{Q}(\vartheta_{2}),\,\ldots,\,\mathbb{Q}(\vartheta_{n})$ are the conjugate fields of $\mathbb{Q}(\vartheta_{1})$.

Notice that the conjugate fields of $\mathbb{Q}(\vartheta_{1})$ are always isomorphic but not necessarily distinct.

All conjugate fields are equal, i.e. $\mathbb{Q}(\vartheta_{1})=\mathbb{Q}(\vartheta_{2})=\ldots=\mathbb{Q}(\vartheta_{n})$, or equivalently $\vartheta_{1},\ldots,\vartheta_{n}$ belong to $\mathbb{Q}(\vartheta_{1})$, if and only if the extension $\mathbb{Q}(\vartheta_{1})/\mathbb{Q}$ is a Galois extension of fields. The reason for this is that if $\vartheta_{1}$ is an algebraic number and $m(x)$ is the minimal polynomial of $\vartheta_{1}$, then the roots of $m(x)$ are precisely the algebraic conjugates of $\vartheta_{1}$.

For example, let $\vartheta_{1}=\sqrt{2}$. Then its only conjugate is $\vartheta_{2}=-\sqrt{2}$, and $\mathbb{Q}(\sqrt{2})$ is Galois and contains both $\vartheta_{1}$ and $\vartheta_{2}$. Similarly, let $p$ be a prime and let $\vartheta_{1}=\zeta$ be a primitive $p$th root of unity (http://planetmath.org/PrimitiveRootOfUnity). Then the algebraic conjugates of $\zeta$ are $\zeta^{2},\ldots,\zeta^{p-1}$, and so all conjugate fields are equal to $\mathbb{Q}(\zeta)$ and the extension $\mathbb{Q}(\zeta)/\mathbb{Q}$ is Galois. It is a cyclotomic extension of $\mathbb{Q}$.

Now let $\vartheta_{1}=\sqrt[3]{2}$ and let $\zeta$ be a primitive $3$rd root of unity (i.e. $\zeta$ is a root of $x^{2}+x+1$, so we can pick $\zeta=\frac{-1+\sqrt{-3}}{2}$). Then the conjugates of $\vartheta_{1}$ are $\vartheta_{1}$, $\vartheta_{2}=\zeta\sqrt[3]{2}$, and $\vartheta_{3}=\zeta^{2}\sqrt[3]{2}$. The three conjugate fields $\mathbb{Q}(\vartheta_{1})$, $\mathbb{Q}(\vartheta_{2})$, and $\mathbb{Q}(\vartheta_{3})$ are distinct in this case. The Galois closure of each of these fields is $\mathbb{Q}(\zeta,\sqrt[3]{2})$.

Title: conjugate fields
Canonical name: ConjugateFields
Date of creation: 2013-03-22 17:10:28
Last modified on: 2013-03-22 17:10:28
Owner: pahio (2872)
Last modified by: pahio (2872)
Numerical id: 10
Author: pahio (2872)
Entry type: Definition
Classification: msc 12F05, msc 11R04
Related topic: PropertiesOfMathbbQvarthetaConjugates
2018-11-17 14:57:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 41, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9919666051864624, "perplexity": 117.94213727339391}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743714.57/warc/CC-MAIN-20181117144031-20181117170031-00248.warc.gz"}
https://wwww.gams.com/latest/docs/S_MINOS.html
MINOS and QUADMINOS

Bruce A. Murtagh; Graduate School of Management, Macquarie University, Sydney, Australia
Michael A. Saunders, Walter Murray; Department of EESOR, Stanford University, CA
Philip E. Gill; Department of Mathematics, University of California, San Diego, La Jolla, CA

# Introduction

This document describes the GAMS interface to MINOS, which is a general-purpose nonlinear programming solver. For a quad-precision MINOS, see quadMINOS.

GAMS/MINOS is a specially adapted version of the solver that is used for solving linear and nonlinear programming problems in a GAMS environment.

GAMS/MINOS is designed to find solutions that are locally optimal. The nonlinear functions in a problem must be smooth (i.e., their first derivatives must exist). The functions need not be separable. Integer restrictions cannot be imposed directly.

A certain region is defined by the linear constraints in a problem and by the bounds on the variables. If the nonlinear objective and constraint functions are convex within this region, any optimal solution obtained will be a global optimum. Otherwise there may be several local optima, and some of these may not be global. In such cases the chances of finding a global optimum are usually increased by choosing a starting point that is "sufficiently close", but there is no general procedure for determining what "close" means, or for verifying that a given local optimum is indeed global.

Linearly constrained models are solved with a very efficient and reliable reduced-gradient technique that takes advantage of the sparsity of the model. Models with nonlinear constraints are solved with a method that iteratively solves subproblems with linearized constraints and an augmented Lagrangian objective function. This iterative scheme implies that only the final, optimal solution is sure to be feasible w.r.t. the nonlinear constraints. This is in contrast to the feasible path method used by some other NLP solvers, e.g., CONOPT. MINOS and CONOPT are very complementary to each other as they employ very different algorithms. See MINOS vs CONOPT for a comparison of the two solvers.

GAMS allows you to specify values for many parameters that control GAMS/MINOS, and with careful experimentation you may be able to influence the solution process in a helpful way. All MINOS options available through GAMS/MINOS are summarized at the end of this document.

# How to Run a Model with GAMS/MINOS

MINOS is capable of solving many types of models, including LP, NLP, DNLP and QCP. If MINOS is not specified as the default solver for the desired model type (e.g. NLP), then the following statement can be used in your GAMS model to select MINOS:

    option nlp=minos;

The option statement should appear before the solve statement. To be complete, we mention that the solver can also be specified on the command line, as in:

    > gams camcge nlp=minos

This will override the global default, but if an algorithm option has been specified inside the model, then that specification takes precedence.
# Overview of GAMS/MINOS

GAMS/MINOS is a system designed to solve large-scale optimization problems expressed in the following form:

$\begin{array}{lcllr} \textrm{NLP}: & ~~~~~~~~~~~~~ & \textrm{minimize} & F(x) +c^Tx + d^Ty & ~~~~~~~~~~~~~~~~(1) \\[5pt] & ~~~~~~~~~~~~~ & \textrm{subject to} & f(x) + A_1 y \sim b_1 & ~~~~~~~~~~~~~~~~(2) \\ & ~~~~~~~~~~~~~ & & A_2 x + A_3 y \sim b_2 & ~~~~~~~~~~~~~~~~(3) \\ & ~~~~~~~~~~~~~ & & \displaystyle \ell \le \begin{pmatrix}x \\ y\end{pmatrix} \le u & ~~~~~~~~~~~~~~~~(4) \\ \end{array}$

where the vectors $$c$$, $$d$$, $$b_1$$, $$b_2$$, $$\ell$$, $$u$$ and the matrices $$A_1$$, $$A_2$$, $$A_3$$ are constant, $$F(x)$$ is a smooth scalar function, and $$f(x)$$ is a vector of smooth functions. The $$\sim$$ signs mean that individual constraints may be defined using $$\le$$, $$=$$ or $$\ge$$, corresponding to the GAMS constructs =L=, =E= and =G=.

The components of $$x$$ are called the nonlinear variables, and the components of $$y$$ are the linear variables. Similarly, the equations in $$(2)$$ are called the nonlinear constraints, and the equations in $$(3)$$ are the linear constraints. Equations $$(2)$$ and $$(3)$$ together are called the general constraints. Let $$m_1$$ and $$n_1$$ denote the number of nonlinear constraints and variables, and let $$m$$ and $$n$$ denote the total number of (general) constraints and variables. Thus, $$A_3$$ has $$m-m_1$$ rows and $$n-n_1$$ columns.

The constraints $$(4)$$ specify upper and lower bounds on all variables. These are fundamental to many problem formulations and are treated specially by the solution algorithms in GAMS/MINOS. Some of the components of $$\ell$$ and $$u$$ may be $$-\infty$$ or $$+\infty$$ respectively, in accordance with the GAMS use of -INF and +INF.

The vectors $$b_1$$ and $$b_2$$ are called the right-hand side, and together are denoted by $$b$$.

## Linear Programming

If the functions $$F(x)$$ and $$f(x)$$ are absent, the problem becomes a linear program. Since there is no need to distinguish between linear and nonlinear variables, we use $$x$$ rather than $$y$$. GAMS/MINOS converts all general constraints into equalities, and the only remaining inequalities are simple bounds on the variables. Thus, we write linear programs in the form

$\begin{array}{lcll} \textrm{LP}: & ~~~~~~~~~~~~~ & \textrm{minimize} & c^Tx \\[5pt] & ~~~~~~~~~~~~~ & \textrm{subject to} & Ax + Is =0 \\ & ~~~~~~~~~~~~~ & & \displaystyle \ell \le \begin{pmatrix}x \\ s\end{pmatrix} \le u \\ \end{array}$

where the elements of $$x$$ are your own GAMS variables, and $$s$$ is a set of slack variables: one for each general constraint. For computational reasons, the right-hand side $$b$$ is incorporated into the bounds on $$s$$.

In the expression $$Ax + Is = 0$$ we write the identity matrix explicitly if we are concerned with columns of the associated matrix $$\begin{pmatrix} A & I\end{pmatrix}$$. Otherwise we will use the equivalent notation $$Ax + s = 0$$.

GAMS/MINOS solves linear programs using a reliable implementation of the primal simplex method [41], in which the constraints $$Ax + Is = 0$$ are partitioned into the form

\begin{equation*} B x_B + N x_N = 0, \end{equation*}

where the basis matrix $$B$$ is square and nonsingular. The elements of $$x_B$$ and $$x_N$$ are called the basic or nonbasic variables respectively. Together they are a permutation of the vector

\begin{equation*} \begin{pmatrix} x\\ s\end{pmatrix}. \end{equation*}
Normally, each nonbasic variable is equal to one of its bounds, and the basic variables take on whatever values are needed to satisfy the general constraints. (The basic variables may be computed by solving the linear equations $$B x_B = -N x_N$$.) It can be shown that if an optimal solution to a linear program exists, then it has this form. The simplex method reaches such a solution by performing a sequence of iterations, in which one column of $$B$$ is replaced by one column of $$N$$ (and vice versa), until no such interchange can be found that will reduce the value of $$c^Tx$$.

As indicated, nonbasic variables usually satisfy their upper and lower bounds. If any components of $$x_B$$ lie significantly outside their bounds, we say that the current point is infeasible. In this case, the simplex method uses a Phase 1 procedure to reduce the sum of infeasibilities to zero. This is similar to the subsequent Phase 2 procedure that optimizes the true objective function $$c^Tx$$.

If the solution procedures are interrupted, some of the nonbasic variables may lie strictly between their bounds $$\ell_j < x_j < u_j$$. In addition, at a "feasible" or "optimal" solution, some of the basic variables may lie slightly outside their bounds: $$\ell_j - \delta < x_j < \ell_j$$ or $$u_j < x_j < u_j + \delta$$, where $$\delta$$ is a feasibility tolerance (typically $$10^{-6}$$). In rare cases, even nonbasic variables might lie outside their bounds by as much as $$\delta$$.

GAMS/MINOS maintains a sparse $$LU$$ factorization of the basis matrix $$B$$, using a Markowitz ordering scheme and Bartels-Golub updates, as implemented in the Fortran package LUSOL [82] (see [17, 16, 148, 149]). The basis factorization is central to the efficient handling of sparse linear and nonlinear constraints.

## Problems with a Nonlinear Objective

When nonlinearities are confined to the term $$F(x)$$ in the objective function, the problem is a linearly constrained nonlinear program. GAMS/MINOS solves such problems using a reduced-gradient algorithm [204] combined with a quasi-Newton algorithm that is described in [140].

In the reduced-gradient method, the constraints $$Ax + Is = 0$$ are partitioned into the form

\begin{equation*} Bx_B + Sx_S + Nx_N = 0 \end{equation*}

where $$x_S$$ is a set of superbasic variables. At a solution, the basic and superbasic variables will lie somewhere between their bounds (to within the feasibility tolerance $$\delta$$), while nonbasic variables will normally be equal to one of their bounds, as before. Let the number of superbasic variables be $$s$$, the number of columns in $$S$$. (The context will always distinguish $$s$$ from the vector of slack variables.) At a solution, $$s$$ will be no more than $$n_1$$, the number of nonlinear variables. In many practical cases we have found that $$s$$ remains reasonably small, say 200 or less, even if $$n_1$$ is large.

In the reduced-gradient algorithm, $$x_S$$ is regarded as a set of "independent variables" or "free variables" that are allowed to move in any desirable direction, namely one that will improve the value of the objective function (or reduce the sum of infeasibilities). The basic variables can then be adjusted in order to continue satisfying the linear constraints. If it appears that no improvement can be made with the current definition of $$B$$, $$S$$ and $$N$$, some of the nonbasic variables are selected to be added to $$S$$, and the process is repeated with an increased value of $$s$$.
At all stages, if a basic or superbasic variable encounters one of its bounds, the variable is made nonbasic and the value of $$s$$ is reduced by one.

A step of the reduced-gradient method is called a minor iteration. For linear problems, we may interpret the simplex method as being the same as the reduced-gradient method, with the number of superbasic variables oscillating between 0 and 1.

A certain matrix $$Z$$ is needed now for descriptive purposes. It takes the form

\begin{equation*} \begin{pmatrix} -B^{-1}S \\ I \\ 0 \end{pmatrix} \end{equation*}

though it is never computed explicitly. Given an $$LU$$ factorization of the basis matrix $$B$$, it is possible to compute products of the form $$Zq$$ and $$Z^Tg$$ by solving linear equations involving $$B$$ or $$B^T$$. This in turn allows optimization to be performed on the superbasic variables, while the basic variables are adjusted to satisfy the general linear constraints.

An important feature of GAMS/MINOS is a stable implementation of a quasi-Newton algorithm for optimizing the superbasic variables. This can achieve superlinear convergence during any sequence of iterations for which the $$B$$, $$S$$, $$N$$ partition remains constant. A search direction $$q$$ for the superbasic variables is obtained by solving a system of the form

\begin{equation*} R^TRq = -Z^Tg \end{equation*}

where $$g$$ is the gradient of $$F(x)$$, $$Z^Tg$$ is the reduced gradient, and $$R$$ is a dense upper triangular matrix. GAMS computes the gradient vector $$g$$ analytically, using automatic differentiation. The matrix $$R$$ is updated in various ways in order to approximate the reduced Hessian according to $$R^TR \approx Z^THZ$$, where $$H$$ is the matrix of second derivatives of $$F(x)$$ (the Hessian).

Once $$q$$ is available, the search direction for all variables is defined by $$p = Zq$$. A line search is then performed to find an approximate solution to the one-dimensional (w.r.t. α) problem

\begin{equation*} \begin{split} \textrm{minimize} \>\> & F(x+\alpha p) \\ \textrm{subject to} \>\> & 0 < \alpha < \beta \end{split} \end{equation*}

where $$\beta$$ is determined by the bounds on the variables. Another important piece in GAMS/MINOS is a step-length procedure used in the linesearch to determine the step-length $$\alpha$$ (see [80]). The number of nonlinear function evaluations required may be influenced by setting the Linesearch tolerance, as discussed in Section Detailed Description of MINOS Options.

As in a linear programming solver, an equation $$B^T\pi = g_B$$ is solved to obtain the dual variables or shadow prices $$\pi$$, where $$g_B$$ is the gradient of the objective function associated with basic variables. It follows that $$g_B - B^T \pi = 0$$. The analogous quantity for superbasic variables is the reduced-gradient vector $$Z^Tg = g_S - S^T \pi$$; this should also be zero at an optimal solution. (In practice its components will be of order $$r ||\pi||$$ where $$r$$ is the optimality tolerance, typically $$10^{-6}$$, and $$||\pi||$$ is a measure of the size of the elements of $$\pi$$.)

## Problems with Nonlinear Constraints

If any of the constraints are nonlinear, GAMS/MINOS employs a projected Lagrangian algorithm, based on a method due to [151]; see [141]. This involves a sequence of major iterations, each of which requires the solution of a linearly constrained subproblem. Each subproblem contains linearized versions of the nonlinear constraints, as well as the original linear constraints and bounds.
At the start of the $$k^{\hbox{th}}$$ major iteration, let $$x_k$$ be an estimate of the nonlinear variables, and let $$\lambda_k$$ be an estimate of the Lagrange multipliers (or dual variables) associated with the nonlinear constraints. The constraints are linearized by changing $$f(x)$$ in equation (2) to its linear approximation: \begin{equation*} f'(x, x_k) = f(x_k) + J(x_k) (x - x_k) \end{equation*} or more briefly \begin{equation*} f' = f_k+ J_k (x - x_k) \end{equation*} where $$J(x_k)$$ is the Jacobian matrix evaluated at $$x_k$$. (The $$i$$-th row of the Jacobian is the gradient vector of the $$i$$-th nonlinear constraint function. As with the objective gradient, GAMS calculates the Jacobian using automatic differentiation). The subproblem to be solved during the $$k$$-th major iteration is then $\begin{array}{lllr} & \textrm{minimize} & F(x) +c^Tx + d^Ty - \lambda_k^T(f-f') + 0.5\rho (f-f')^T(f-f') & ~~~~~~~~~~~~~~~~(5) \\[5pt] & \textrm{subject to} & f' + A_1 y \sim b_1 & ~~~~~~~~~~~~~~~~(6) \\ & & A_2 x + A_3 y \sim b_2 & ~~~~~~~~~~~~~~~~(7) \\ & & \displaystyle \ell \le \begin{pmatrix}x \\ y\end{pmatrix} \le u & ~~~~~~~~~~~~~~~~(8) \\ \end{array}$ The objective function $$(5)$$ is called an augmented Lagrangian. The scalar $$\rho$$ is a penalty parameter, and the term involving $$\rho$$ is a modified quadratic penalty function. GAMS/MINOS uses the reduced-gradient algorithm to minimize $$(5)$$ subject to $$(6)$$ – $$(8)$$. As before, slack variables are introduced and $$b_1$$ and $$b_2$$ are incorporated into the bounds on the slacks. The linearized constraints take the form \begin{equation*} \begin{pmatrix}J_k & A_1 \\ A_2 & A_3\end{pmatrix} \begin{pmatrix}x \\ y \end{pmatrix} + \begin{pmatrix}I & 0 \\ 0 & I\end{pmatrix} \begin{pmatrix}s_1\\ s_2 \end{pmatrix}= \begin{pmatrix}J_k x_k - f_k\\ 0 \end{pmatrix} \end{equation*} This system will be referred to as $$Ax + Is = 0$$ as in the linear case. The Jacobian $$J_k$$ is treated as a sparse matrix, the same as the matrices $$A_1$$, $$A_2$$, and $$A_3$$. In the output from GAMS/MINOS, the term Feasible subproblem indicates that the linearized constraints have been satisfied. In general, the nonlinear constraints are satisfied only in the limit, so that feasibility and optimality occur at essentially the same time. The nonlinear constraint violation is printed every major iteration. Even if it is zero early on (say at the initial point), it may increase and perhaps fluctuate before tending to zero. On "well behaved problems", the constraint violation will decrease quadratically (i.e., very quickly) during the final few major iterations. # Modeling Issues Formulating nonlinear models requires that the modeler pays attention to some details that play no role when dealing with linear models. ## Starting Points The first issue is specifying a starting point. It is advised to specify a good starting point for as many nonlinear variables as possible. The GAMS default of zero is often a very poor choice, making this even more important. As an (artificial) example consider the problem where we want to find the smallest circle that contains a number of points $$(x_i,y_i)$$: $\begin{array}{lcllr} \textrm{Example}: & ~~~~~~~~~~~~~ & \textrm{minimize} & r \\[5pt] & ~~~~~~~~~~~~~ & \textrm{subject to} & (x_i-a)^2 + (y_i-b)^2 \le r^2, \> \> r \ge 0. \\ \end{array}$ This problem can be modeled in GAMS as follows. 
set i 'points' /p1*p10/;

parameters x(i) 'x coordinates',
           y(i) 'y coordinates';

* fill with random data
x(i) = uniform(1,10);
y(i) = uniform(1,10);

variables a 'x coordinate of center of circle'
          b 'y coordinate of center of circle'
          r 'radius';

equations e(i) 'points must be inside circle';

e(i).. sqr(x(i)-a) + sqr(y(i)-b) =l= sqr(r);

r.lo = 0;

model m /all/;
option nlp=minos;
solve m using nlp minimizing r;

Without help, MINOS will not be able to find an optimal solution. The problem will be declared infeasible. In this case, providing a good starting point is very easy. If we define \begin{eqnarray*} x_{\min} &=& \min_i x_i,\\ y_{\min} &=& \min_i y_i,\\ x_{\max} &=& \max_i x_i,\\ y_{\max} &=& \max_i y_i, \end{eqnarray*} then good estimates are \begin{eqnarray*} a &=& (x_{\min}+x_{\max})/2, \\ b &=& (y_{\min}+y_{\max})/2, \\ r &=& \sqrt{ (a-x_{\min})^2 + (b-y_{\min})^2}. \end{eqnarray*} Thus we include in our model:

parameters xmin,ymin,xmax,ymax;
xmin = smin(i, x(i));
ymin = smin(i, y(i));
xmax = smax(i, x(i));
ymax = smax(i, y(i));

* set starting point
a.l = (xmin+xmax)/2;
b.l = (ymin+ymax)/2;
r.l = sqrt( sqr(a.l-xmin) + sqr(b.l-ymin) );

and now the model solves very easily. Level values can also be set away from zero implicitly as a result of assigning bounds. When a variable is bounded away from zero, for instance by the statement Y.LO = 1;, the implicit projection of variable levels onto their bounds that occurs when a model is solved will initialize Y away from zero.

## Bounds

Setting appropriate bounds can be very important to steer the algorithm away from uninteresting areas, and to prevent function evaluation errors from happening. If your model contains a real power of the form x**y, it is important to add a bound $$x > 0.001$$, as real exponentiation is evaluated in GAMS as $$\exp(y \log(x))$$. In some cases one cannot write a bound directly, e.g. if the equation is $$z = x^{f(y)}$$. In that case it is advised to introduce an extra variable and equation: \begin{equation*} \begin{split} z &= x^{\vartheta } \\ \vartheta &= f(y) \\ \vartheta &\ge \varepsilon \end{split} \end{equation*} (Note that the functions SQR(x) and POWER(x,k) are integer powers and do not require $$x$$ to be positive.) If the model produces function evaluation errors, adding bounds is preferred to raising the DOMLIM limit. Bounds in GAMS are specified using X.LO(i) = 0.001 and X.UP(i) = 1000.

## Scaling

Although MINOS has some facilities to scale the problem before starting to optimize it, it remains an important task for the modeler to provide a well-scaled model. This is especially the case for nonlinear models. GAMS has special syntax features to specify row and column scales that allow the modeler to keep the equations in a most natural form. For more information consult the GAMS User's Guide.

## The Objective Function

The first step GAMS/MINOS performs is to try to reconstruct the objective function. In GAMS, optimization models minimize or maximize an objective variable. MINOS however works with an objective function. One way of dealing with this is to add a dummy linear function with just the objective variable. Consider the following GAMS fragment:

obj.. z =e= sum(i, sqr(resid(i)));
model m /all/;
solve m using nlp minimizing z;

This can be cast in form NLP (equations $$(1)-(4)$$) by saying minimize $$z$$ subject to $$z = \sum_i resid^2_i$$ and the other constraints in the model. Although simple, this approach is not always preferable.
Especially when all constraints are linear, it is important to minimize $$\sum_i resid^2_i$$ directly. This can be achieved by a simple reformulation: $$z$$ can be substituted out. The substitution mechanism carries out this reformulation if all of the following conditions hold:

• the objective variable $$z$$ is a free continuous variable (no bounds are defined on $$z$$),
• $$z$$ appears linearly in the objective function,
• the objective function is formulated as an equality constraint,
• $$z$$ is only present in the objective function and not in other constraints.

For many models it is very important that the nonlinear objective function be used by MINOS. For instance the model [CHEM] from the model library solves in 21 iterations. When we add the bound energy.lo = 0; to the objective variable energy and thus prevent it from being substituted out, MINOS will not be able to find a feasible point for the given starting point. This reformulation mechanism has been extended for substitutions along the diagonal. For example, the GAMS model

Variables x,y,z;
Equations e1,e2;
e1..z =e= y;
e2..y =e= sqr(1+x);
model m /all/;
option nlp=minos;
solve m using nlp minimizing z;

will be reformulated as an unconstrained optimization problem $\begin{array}{ll} \textrm{minimize} & f(x) = (1+x)^2. \\ \end{array}$ These additional reformulations can be turned off by using the statement option reform = 0; (see Section GAMS Options).

# GAMS Options

The standard GAMS options (e.g. iterlim, domlim) can be used to control GAMS/MINOS. For more details, see section Controlling a Solver via GAMS Options. We highlight some of the details of this usage below for cases of special interest.

iterlim Sets the minor iteration limit. MINOS will stop as soon as the number of minor iterations exceeds the iteration limit and report the current solution.

domlim Sets the domain error limit. Domain errors are evaluation errors in the nonlinear functions. An example of a domain error is trying to evaluate $$\sqrt{x}$$ for $$x<0$$. Other examples include taking logs of negative numbers, and evaluating the real power $$x^y$$ for $$x < \varepsilon$$ ($$x^y$$ is evaluated as $$\exp(y \log x)$$). When such an error occurs, the count of domain errors is incremented: MINOS will stop if this count exceeds the limit. If the limit has not been reached, reasonable estimates for the function (and derivatives, if requested) are returned and MINOS continues. For example, in the case of $$\sqrt{x}, x<0$$, a zero is passed back for the function value and a large value for the derivative. In many cases MINOS will be able to recover from these domain errors, especially when they happen at some intermediate point. Nevertheless it is best to add appropriate bounds or linear constraints to ensure that these domain errors don't occur. For example, when an expression $$\log(x)$$ is present in the model, add a statement like x.lo = 0.001;.

bratio Ratio used in basis acceptance test. When a previous solution or solution estimate exists, GAMS automatically passes this solution to MINOS so that it can reconstruct an advanced basis. When too many new (i.e. uninitialized with level and/or marginal values) variables or constraints enter the model, it may be better not to use existing basis information, but to instead crash a new basis. The bratio determines how quickly an existing basis is discarded. A value of 1.0 will discard any basis, while a value of 0.0 will retain any basis.
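As a sketch of how the GAMS options described above are typically set (the model name m, the objective variable z, and the specific values are placeholders for illustration, not recommendations), one might write:

option nlp = minos;
option iterlim = 50000, domlim = 100, bratio = 0.25;
solve m using nlp minimizing z;

With these hypothetical values, the relatively low bratio makes it easy for an existing advanced basis to be accepted, while domlim allows a moderate number of function evaluation errors before MINOS is stopped.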
workfactor By default, GAMS/MINOS computes an estimate of the amount of workspace needed by MINOS, and passes this workspace on to MINOS for use in solving the model. This estimate is based on the model statistics: number of (nonlinear) equations, number of (nonlinear) variables, number of (nonlinear) nonzeroes, etc. In most cases this is sufficient to solve the model. In some rare cases MINOS may need more memory, and the user can provide this by specifying a value of workfactor greater than 1. The computed memory estimate is multiplied by the workfactor to determine the amount of workspace made available to MINOS for the solve. workspace The workspace option is deprecated: use the workfactor option instead. The workspace option specifies the amount of memory, in MB, that MINOS will use. reform This option controls the objective reformulation mechanism described in Section The Objective Function The default value of 100 will cause GAMS/MINOS to try further substitutions along the diagonal after the objective variable has been removed. Any other value will disable this diagonal procedure. # Summary of MINOS Options The performance of GAMS/MINOS is controlled by a number of parameters or "options." Each option has a default value that should be appropriate for most problems. For special situations it is possible to specify non-standard values for some or all of the options through the MINOS option file. While the content of an option file is solver-specific, the details of how to create an option file and instruct the solver to use it are not. This topic is covered in section The Solver Options File. Note that the option file is not case sensitive. Examples for using the option file can be found at the end of this section. The tables below contain summary information about the MINOS options, default values, and links to more detailed explanations. 
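Before turning to the summary tables, here is a minimal sketch of the mechanics (the file contents, the model name m, and the objective variable z are placeholders; the option keywords are taken from the tables below). A MINOS option file might contain:

* minos.opt -- illustrative values only
major iterations       200
feasibility tolerance  1.0e-7
scale option           2

and the GAMS model would activate it with:

option nlp = minos;
m.optfile = 1;
solve m using nlp minimizing z;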
## Output related options

| Option | Description | Default |
| --- | --- | --- |
| debug level | Controls amount of debug information written | 0 |
| log frequency | Controls iteration logging to listing file | 100 |
| print level | Controls amount of information printed during optimization | 0 |
| scale print | Print scaling factors | |
| solution | Prints MINOS solution | NO |
| summary frequency | Controls iteration logging to summary (log file) | 100 |

## Tolerances

| Option | Description | Default |
| --- | --- | --- |
| crash tolerance | Allow crash procedure to ignore small elements in eligible columns | 0.1 |
| feasibility tolerance | Feasibility tolerance for linear constraints | 1.0e-6 |
| linesearch tolerance | Controls accuracy of steplength selected | 0.1 |
| LU density tolerance | When to use dense factorization | 0.5 |
| LU factor tolerance | Trade-off between stability and sparsity in basis factorization | 100.0 |
| LU singularity tolerance | Protection against ill-conditioned basis matrices | 1.0e-11 |
| LU update tolerance | Trade-off between stability and sparsity in basis updating | 10.0 |
| optimality tolerance | Reduced gradient optimality check | 1.0e-6 |
| row tolerance | Accuracy requirement for nonlinear rows | 1.0e-6 |
| scale print tolerance | Scale print flag and set tolerance | 0.9 |
| scale tolerance | Scale tolerance | 0.9 |
| subspace tolerance | Determines when nonbasics become superbasic | 0.5 |

## Limits

| Option | Description | Default |
| --- | --- | --- |
| hessian dimension | Size of Hessian matrix | 1 |
| iterations limit | Minor iteration limit | GAMS iterlim |
| major iterations | Max number of major iterations | 50 |
| minor iterations | Max number of minor iterations between linearizations of nonlinear constraints | 40 |
| superbasics limit | Maximum number of superbasics | 1 |
| unbounded objective value | Determines when a problem is called unbounded | 1.0e20 |
| unbounded step size | Determines when a problem is called unbounded | 1.0e10 |

## Other algorithmic options

| Option | Description | Default |
| --- | --- | --- |
| check frequency | Controls frequency of linear constraint satisfaction test | 60 |
| completion | Completion level for subproblems (full/partial) | FULL |
| crash option | Controls the basis crash algorithm | 3 |
| expand frequency | Setting for anti-cycling mechanism | 10000 |
| factorization frequency | Number of iterations between basis factorizations | 100 |
| lagrangian | Determines form of objective function in the linearized subproblems | YES |
| LU complete pivoting | LUSOL pivoting strategy | |
| LU partial pivoting | LUSOL pivoting strategy | |
| LU rook pivoting | LUSOL pivoting strategy | |
| major damping parameter | Prevents large relative changes between subproblem solutions | 2.0 |
| minor damping parameter | Limit change in x during linesearch | 2.0 |
| multiple price | Multiple pricing | 1 |
| partial price | Number of segments in partial pricing strategy | 10 |
| penalty parameter | Used in modified augmented Lagrangian | automatic |
| radius of convergence | Controls final reduction of penalty parameter | 0.01 |
| scale all variables | Synonym to scale option 2 | |
| scale linear variables | Synonym to scale option 1 | |
| scale no | Synonym to scale option 0 | |
| scale nonlinear variables | Synonym to scale option 2 | |
| scale option | Scaling | 1 |
| scale yes | Synonym to scale option 1 | |
| start assigned nonlinears | Starting strategy when there is no basis | SUPERBASIC |
| verify constraint gradients | Synonym to verify level 2 | |
| verify gradients | Synonym to verify level 3 | |
| verify level | Controls verification of gradients | 0 |
| verify no | Synonym to verify level 0 | |
| verify objective gradients | Synonym to verify level 1 | |
| verify yes | Synonym to verify level 3 | |
| weight on linear objective | Composite objective weight | 0.0 |

## Examples of GAMS/MINOS Option File

The following example illustrates the use of certain options that might be helpful for "difficult" models involving nonlinear constraints.
Experimentation may be necessary with the values specified, particularly if the sequence of major iterations does not converge using default values.

* These options might be relevant for very nonlinear models.
Major damping parameter  0.2    * may prevent divergence.
Minor damping parameter  0.2    * if there are singularities
                                * in the nonlinear functions.
Penalty parameter       10.0    * or 100.0 perhaps, a value
                                * higher than the default.
Scale linear variables          * (This is the default.)

Conversely, nonlinearly constrained models that are very nearly linear may optimize more efficiently if some of the cautious defaults are relaxed:

* Suggestions for models with MILDLY nonlinear constraints
Completion          Full
Penalty parameter    0.0        * or 0.1 perhaps, a value
                                * smaller than the default.
* Scale one of the following
Scale all variables             * if starting point is VERY GOOD.
Scale linear variables          * if they need it.
Scale No                        * otherwise.

Most of the options should be left at their default values for any given model. If experimentation is necessary, we recommend changing just one option at a time.

# Special Notes

## Modeling Hints

Unfortunately, there is no guarantee that the algorithm just described will converge from an arbitrary starting point. The concerned modeler can influence the likelihood of convergence as follows:

• Specify initial activity levels for the nonlinear variables as carefully as possible (using the GAMS suffix .L).
• Include sensible upper and lower bounds on all variables.
• Specify a Major damping parameter that is lower than the default value, if the problem is suspected of being highly nonlinear.
• Specify a Penalty parameter $$\rho$$ that is higher than the default value, again if the problem is highly nonlinear.

In rare cases it may be safe to request the values $$\lambda_k = 0$$ and $$\rho = 0$$ for all subproblems, by specifying Lagrangian=No. However, convergence is much more likely with the default setting, Lagrangian=Yes. The initial estimate of the Lagrange multipliers is then $$\lambda_0 = 0$$, but for later subproblems $$\lambda_k$$ is taken to be the Lagrange multipliers associated with the (linearized) nonlinear constraints at the end of the previous major iteration. For the first subproblem, the default value for the penalty parameter is $$\rho = 100.0/m_1$$ where $$m_1$$ is the number of nonlinear constraints. For later subproblems, $$\rho$$ is reduced in stages when it appears that the sequence $$\{x_k, \lambda_k\}$$ is converging. In many cases it is safe to specify $$\lambda = 0$$, particularly if the problem is only mildly nonlinear. This may improve the overall efficiency.

## Storage

GAMS/MINOS uses one large array of memory for most of its workspace. The implementation places no fixed limit on the size of a problem or on its shape (many constraints and relatively few variables, or vice versa). In general, the limiting factor will be the amount of physical memory available on a particular machine, and the amount of computation time one is willing to spend. Some detailed knowledge of a particular model will usually indicate whether the solution procedure is likely to be efficient. An important quantity is $$m$$, the total number of general constraints in $$(2)$$ and $$(3)$$. The amount of workspace required by GAMS/MINOS is roughly $$100m$$ doubles, or $$800m$$ bytes for workspace. A further 300K bytes, approximately, are needed for the program itself, along with buffer space for several files.
Very roughly, then, a model with $$m$$ general constraints requires about $$(m+300)$$ K bytes of memory. Another important quantity is $$n$$, the total number of variables in $$x$$ and $$y$$. The above comments assume that $$n$$ is not much larger than $$m$$, the number of constraints. A typical ratio for $$n/m$$ is 2 or 3. If there are many nonlinear variables (i.e., if $$n_1$$ is large), much depends on whether the objective function or the constraints are highly nonlinear or not. The degree of nonlinearity affects $$s$$, the number of superbasic variables. Recall that $$s$$ is zero for purely linear problems. We know that $$s$$ need never be larger than $$n_1+1$$. In practice, $$s$$ is often very much less than this upper limit. In the quasi-Newton algorithm, the dense triangular matrix $$R$$ has dimension $$s$$ and requires about $$s^2/2$$ words of storage. If it seems likely that $$s$$ will be very large, some aggregation or reformulation of the problem should be considered. # The GAMS/MINOS Log File MINOS writes different logs for LPs, NLPs with linear constraints, and NLPs with non-linear constraints. In this section, a sample log file is shown for each case, and the messages that appear are explained. ## Linear Programs MINOS uses a standard two-phase simplex method for LPs. In the first phase, the sum of the infeasibilities at each iteration is minimized. Once feasibility is attained, MINOS switches to phase 2 where it minimizes (or maximizes) the original objective function. The different objective functions are called the phase 1 and phase 2 objectives. Notice that the marginals in phase 1 are with respect to the phase 1 objective. This means that if MINOS interrupts in phase 1, the marginals are "wrong" in the sense that they do not reflect the original objective. The log for the problem TURKPOW is as follows: GAMS Rev 235 Copyright (C) 1987-2010 GAMS Development. All rights reserved --- Starting compilation --- turkpow.gms(230) 3 Mb --- Starting execution: elapsed 0:00:00.009 --- turkpow.gms(202) 4 Mb --- Generating LP model turkey --- turkpow.gms(205) 4 Mb --- 350 rows 949 columns 5,872 non-zeroes --- Executing MINOS: elapsed 0:00:00.025 GAMS/MINOS Aug 18, 2010 23.5.2 WIN 19143.19383 VS8 x86/MS Windows M I N O S 5.51 (Jun 2004) GAMS/MINOS 5.51, Large Scale Nonlinear Solver B. A. Murtagh, University of New South Wales P. E. Gill, University of California at San Diego, W. Murray, M. A. Saunders, and M. H. Wright, Systems Optimization Laboratory, Stanford University Work space allocated -- 1.60 Mb Reading Rows... Reading Columns... Itn ninf sinf objective 100 3 2.283E-01 -2.51821463E+04 200 0 0.000E+00 2.02819284E+04 300 0 0.000E+00 1.54107277E+04 400 0 0.000E+00 1.40211808E+04 500 0 0.000E+00 1.33804183E+04 600 0 0.000E+00 1.27082709E+04 EXIT - Optimal Solution found, objective: 12657.77 --- Restarting execution --- turkpow.gms(205) 0 Mb --- Reading solution for model turkey --- turkpow.gms(230) 3 Mb *** Status: Normal completion The first line that is written by MINOS is the version string: GAMS/MINOS Aug 18, 2010 23.5.2 WIN 19143.19383 VS8 x86/MS Windows This line identifies which version of the MINOS libraries and links you are using, and is only to be deciphered by GAMS support personnel. After some advertisement text we see the amount of work space (i.e. memory) that is allocated: 1.60 Mb. When MINOS is loaded, the amount of memory needed is first estimated. This estimate is based on statistics like the number of rows, columns and non-zeros. 
This amount of memory is then allocated and the problem loaded into MINOS. The columns have the following meaning: Itn Iteration number. ninf Number of infeasibilities. If nonzero the current iterate is still infeasible. sinf The sum of the infeasibilities. This number is minimized during Phase I. Once the model is feasible this number is zero. objective The value of the objective function: $$z = \sum c_i x_i$$. In phase II this number is maximized or minimized. In phase I it may move in the wrong direction. The final line indicates the exit status of MINOS. ## Linearly Constrained NLP's The log is basically the same as for linear models. The only difference is that not only matrix rows and columns need to be loaded, but also instructions for evaluating functions and gradients. The log for the problem WEAPONS is as follows: GAMS Rev 235 Copyright (C) 1987-2010 GAMS Development. All rights reserved --- Starting compilation --- weapons.gms(77) 3 Mb --- Starting execution: elapsed 0:00:00.005 --- weapons.gms(66) 4 Mb --- Generating NLP model war --- weapons.gms(68) 6 Mb --- 13 rows 66 columns 156 non-zeroes --- 706 nl-code 65 nl-non-zeroes --- weapons.gms(68) 4 Mb --- Executing MINOS: elapsed 0:00:00.013 GAMS/MINOS Aug 18, 2010 23.5.2 WIN 19143.19383 VS8 x86/MS Windows M I N O S 5.51 (Jun 2004) GAMS/MINOS 5.51, Large Scale Nonlinear Solver B. A. Murtagh, University of New South Wales P. E. Gill, University of California at San Diego, W. Murray, M. A. Saunders, and M. H. Wright, Systems Optimization Laboratory, Stanford University Work space allocated -- 0.82 Mb Reading Rows... Reading Columns... Reading Instructions... Itn ninf sinf objective 100 0 0.000E+00 1.71416714E+03 200 0 0.000E+00 1.73483184E+03 EXIT - Optimal Solution found, objective: 1735.570 --- Restarting execution --- weapons.gms(68) 0 Mb --- Reading solution for model war --- weapons.gms(77) 3 Mb *** Status: Normal completion ## NLP's with Nonlinear Constraints For models with nonlinear constraints the log is more complicated. The library model [CAMCGE] from the model library is such an example: the log output resulting from running it is shown below. GAMS Rev 235 Copyright (C) 1987-2010 GAMS Development. All rights reserved --- Starting compilation --- camcge.gms(450) 3 Mb --- Starting execution: elapsed 0:00:00.010 --- camcge.gms(441) 4 Mb --- Generating NLP model camcge --- camcge.gms(450) 6 Mb --- 243 rows 280 columns 1,356 non-zeroes --- 5,524 nl-code 850 nl-non-zeroes --- camcge.gms(450) 4 Mb --- Executing MINOS: elapsed 0:00:00.023 GAMS/MINOS Aug 18, 2010 23.5.2 WIN 19143.19383 VS8 x86/MS Windows M I N O S 5.51 (Jun 2004) GAMS/MINOS 5.51, Large Scale Nonlinear Solver B. A. Murtagh, University of New South Wales P. E. Gill, University of California at San Diego, W. Murray, M. A. Saunders, and M. H. Wright, Systems Optimization Laboratory, Stanford University Work space allocated -- 1.48 Mb Reading Rows... Reading Columns... Reading Instructions... Major minor step objective Feasible Optimal nsb ncon penalty BSswp 1 2T 0.0E+00 1.91724E+02 1.8E+02 2.0E-01 0 1 1.0E+00 0 2 90 1.0E+00 1.91735E+02 1.5E-03 7.6E+00 0 3 1.0E+00 0 3 0 1.0E+00 1.91735E+02 1.3E-09 5.5E-06 0 4 1.0E+00 0 4 0 1.0E+00 1.91735E+02 1.1E-12 2.8E-13 0 5 1.0E-01 0 EXIT - Optimal Solution found, objective: 191.7346 --- Restarting execution --- camcge.gms(450) 0 Mb --- Reading solution for model camcge *** Status: Normal completion Two sets of iterations, major and minor, are now reported. 
A description of the various columns present in this log file follows: Major A major iteration involves linearizing the nonlinear constraints and performing a number of minor iterations on the resulting subproblem. The objective for the subproblem is an augmented Lagrangian, not the true objective function. minor The number of minor iterations performed on the linearized subproblem. If it is a simple number like 90, then the subproblem was solved to optimality. Here, $$2T$$ means that the subproblem was terminated. In general the $$T$$ is not something to worry about. Other possible flags are $$I$$ and $$U$$, which mean that the subproblem was infeasible or unbounded. MINOS may have difficulty if these keep occurring. step The step size taken towards the solution suggested by the last major iteration. Ideally this should be 1.0, especially near an optimum. If the subproblem solutions are widely different, MINOS may reduce the step size under control of the Major Damping parameter. objective The objective function for the original nonlinear program. Feasible Primal infeasibility, indicating the maximum non-linear constraint violation. Optimal The maximum dual infeasibility, measured as the maximum departure from complementarity. If we call $$d_j$$ the reduced cost of variable $$x_j$$, then the dual infeasibility of $$x_j$$ is $$d_j \times \min\{x_j - \ell_j, 1\}$$ or $$-d_j \times \min\{u_j - x_j, 1\}$$ depending on the sign of $$d_j$$. nsb Number of superbasics. If the model is feasible this number cannot exceed the superbasic limit, which may need to be reset to a larger number if the numbers in this column become larger. ncon The number of times MINOS has evaluated the nonlinear constraints and their derivatives. penalty The current value of the penalty parameter in the augmented Lagrangian (the objective for the subproblems). If the major iterations appear to be converging, MINOS will decrease the penalty parameter. If there appears to be difficulty, such as unbounded subproblems, the penalty parameter will be increased. BSswp Number of basis swaps: the number of $$\begin{pmatrix}B & S\end{pmatrix}$$ (i.e. basic vs. superbasic) changes. Note: The CAMCGE model (like many CGE models or other almost square systems) can better be solved with the MINOS option Start Assigned Nonlinears Basic. # Detailed Description of MINOS Options The following is an alphabetical list of the keywords that may appear in the GAMS/MINOS options file, and a description of their effect. Options not specified will take the default values shown. check frequency (integer): Controls frequency of linear constraint satisfaction test Every ith iteration after the most recent basis factorization, a numerical test is made to see if the current solution x satisfies the general linear constraints (including linearized nonlinear constraints, if any). The constraints are of the form Ax+s = 0 where s is the set of slack variables. To perform the numerical test, the residual vector r = Ax + s is computed. If the largest component of r is judged to be too large, the current basis is refactorized and the basic variables are recomputed to satisfy the general constraints more accurately. Range: {1, ..., ∞} Default: 60 completion (string): Completion level for subproblems (full/partial) When there are nonlinear constraints, this determines whether subproblems should be solved to moderate accuracy (partial completion) or to full accuracy (full completion). 
GAMS/MINOS implements the option by using two sets of convergence tolerances for the subproblems. Use of partial completion may reduce the work during early major iterations, unless the Minor iterations limit is active. The optimal set of basic and superbasic variables will probably be determined for any given subproblem, but the reduced gradient may be larger than it would have been with full completion. An automatic switch to full completion occurs when it appears that the sequence of major iterations is converging. The switch is made when the nonlinear constraint error is reduced below 100 * (Row tolerance), the relative change in Lambdak is 0.1 or less, and the previous subproblem was solved to optimality. Full completion tends to give better Langrange-multiplier estimates. It may lead to fewer major iterations, but may result in more minor iterations. Default: FULL value meaning FULL Solve subproblems to full accuracy PARTIAL Solve subproblems to moderate accuracy crash option (integer): Controls the basis crash algorithm If a restart is not being performed, an initial basis will be selected from certain columns of the constraint matrix (A I). The value of the parameter i determines which columns of A are eligible. Columns of I are used to fill gaps where necessary. If i > 0, three passes are made through the relevant columns of A, searching for a basis matrix that is essentially triangular. A column is assigned to pivot on a particular row if the column contains a suitably large element in a row that has not yet been assigned. (The pivot elements ultimately form the diagonals of the triangular basis). Pass 1 selects pivots from free columns (corresponding to variables with no upper and lower bounds). Pass 2 requires pivots to be in rows associated with equality (=E=) constraints. Pass 3 allows the pivots to be in inequality rows. For remaining (unassigned) rows, the associated slack variables are inserted to complete the basis. Default: 3 value meaning 0 Initial basis will be a slack basis 1 All columns are eligible 2 Only linear columns are eligible 3 Columns appearing nonlinearly in the objective are not eligible 4 Columns appearing nonlinearly in the constraints are not eligible crash tolerance (real): Allow crash procedure to ignore small elements in eligible columns The Crash tolerance r allows the starting procedure CRASH to ignore certain small nonzeros in each column of A. If amax is the largest element in column j, other nonzeros aij in the column are ignored if |aij| < amax * r. To be meaningful, the parameter r should be in the range 0 <= r < 1. When r > 0.0 the basis obtained by CRASH may not be strictly triangular, but it is likely to be nonsingular and almost triangular. The intention is to obtain a starting basis containing more columns of A and fewer (arbitrary) slacks. A feasible solution may be reached sooner on some problems. For example, suppose the first m columns of A are the matrix shown under LU factor tolerance; i.e., a tridiagonal matrix with entries -1, 4, -1. To help CRASH choose all m columns for the initial basis, we could specify Crash tolerance r for some value of r > 0.25. Range: [0, 1.0] Default: 0.1 debug level (integer): Controls amount of debug information written This causes various amounts of information to be output. Most debug levels will not be helpful to GAMS users, but they are listed here for completeness. Note that you will need to use the GAMS statement OPTION SYSOUT=on; to echo the MINOS listing to the GAMS listing file. 
• debug level 0 No debug output. • debug level 2(or more) Output from M5SETX showing the maximum residual after a row check. • debug level 40 Output from LU8RPC (which updates the LU factors of the basis matrix), showing the position of the last nonzero in the transformed incoming column. • debug level 50 Output from LU1MAR (which updates the LU factors each refactorization), showing each pivot row and column and the dimensions of the dense matrix involved in the associated elimination. • debug level 100 Output from M2BFAC and M5LOG listing the basic and superbasic variables and their values at every iteration. Default: 0 expand frequency (integer): Setting for anti-cycling mechanism This option is part of an anti-cycling procedure designed to guarantee progress even on highly degenerate problems. For linear models, the strategy is to force a positive step at every iteration, at the expense of violating the bounds on the variables by a small amount. Suppose the specified feasibility tolerance is delta and the expand frequency is k. Over a period of k iterations, the tolerance actually used by GAMS/MINOS increases from 0.5*delta to delta (in steps 0.5*delta/k). For nonlinear models, the same procedure is used for iterations in which there is only one superbasic variable. (Cycling can occur only when the current solution is at a vertex of the feasible region.) Thus, zero steps are allowed if there is more than one superbasic variable, but otherwise positive steps are enforced. At least every k iterations, a resetting procedure eliminates any infeasible nonbasic variables. Increasing k helps to reduce the number of these slightly infeasible nonbasic variables. However, it also diminishes the freedom to choose a large pivot element (see Pivot tolerance). Range: {1, ..., ∞} Default: 10000 factorization frequency (integer): Number of iterations between basis factorizations At most i basis updates will occur between factorizations of the basis matrix. With linear programs, basis updates usually occur at every iteration. The default i is reasonable for typical problems. Higher values up to i = 200 (say) may be more efficient on problems that are extremely sparse and well scaled. When the objective function is nonlinear, fewer basis updates will occur as an optimum is approached. The number of iterations between basis factorizations will therefore increase. During these iterations a test is made regularly (according to the Check frequency) to ensure that the general constraints are satisfied. If necessary the basis will be re-factorized before the limit of i updates is reached. When the constraints are nonlinear, the Minor iterations limit will probably preempt i. Range: {1, ..., ∞} Default: 100 feasibility tolerance (real): Feasibility tolerance for linear constraints When the constraints are linear, a feasible solution is one in which all variables, including slacks, satisfy their upper and lower bounds to within the absolute tolerance r. (Since slacks are included, this means that the general linear constraints are also satisfied within r.) GAMS/MINOS attempts to find a feasible solution before optimizing the objective function. If the sum of infeasibilities cannot be reduced to zero, the problem is declared infeasible. Let SINF be the corresponding sum of infeasibilities. If SINF is quite small, it may be appropriate to raise r by a factor of 10 or 100. Otherwise, some error in the data should be suspected. 
If SINF is not small, there may be other points that have a significantly smaller sum of infeasibilities. GAMS/MINOS does not attempt to find a solution that minimizes the sum. If Scale option = 1 or 2, feasibility is defined in terms of the scaled problem (since it is then more likely to be meaningful). A nonlinear objective function F(x) will be evaluated only at feasible points. If there are regions where F(x) is undefined, every attempt should be made to eliminate these regions from the problem. For example, for a function F(x) = sqrt(x1)+log(x2), it should be essential to place lower bounds on both variables. If Feasibility tolerance = 10-6, the bounds x1 > 10-5 and x2 > 10-4 might be appropriate. (The log singularity is more serious; in general, keep variables as far away from singularities as possible.) If the constraints are nonlinear, the above comments apply to each major iteration. A feasible solution satisfies the current linearization of the constraints to within the tolerance r. The associated subproblem is said to be feasible. As for the objective function, bounds should be used to keep x more than r away from singularities in the constraint functions f(x). At the start of major iteration k, the constraint functions f(xk) are evaluated at a certain point xk. This point always satisfies the relevant bounds (l < xk < u), but may not satisfy the general linear constraints. During the associated minor iterations, F(x) and f(x) will be evaluated only at points x that satisfy the bound and the general linear constraints (as well as the linearized nonlinear constraints). If a subproblem is infeasible, the bounds on the linearized constraints are relaxed temporarily, in several stages. Feasibility with respect to the nonlinear constraints themselves is measured against the Row tolerance (not against r). The relevant test is made at the start of a major iteration. Default: 1.0e-6 hessian dimension (integer): Size of Hessian matrix This specifies that an r*r triangular matrix R is to be available for use by the quasi-Newton algorithm. The matrix R approximates the reduced Hessian in that RTR approximates ZTHZ. Suppose there are s superbasic variables at a particular iteration. Whenever possible, r should be greater than s. If r > s, the first s columns of R will be used to approximate the reduced Hessian in the normal manner. If there are no further changes to the set of superbasic variables, the rate of convergence will ultimately be superlinear. If r < s, a matrix of the form R = diag(Rr, D) will be used to approximate the reduced Hessian, where Rr is an r * r upper triangular matrix and D is a diagonal matrix of order s - r. The rate of convergence will no longer be superlinear (and may be arbitrarily slow). The storage required is of the order sqr(r)/2, i.e. quadratic in r. In general, r should be a slight over-estimate of the final number of superbasic variables, whenever storage permits. It need never be larger than n1 + 1, where n1 is the number of nonlinear variables. For many problems it can be much smaller than n1. If Superbasics limit s is specified, the default value of r is the same number, s (and conversely). This is a safeguard to ensure super-linear convergence wherever possible. If neither r nor s is specified, GAMS chooses values for both, using certain characteristics of the problem. 
Range: {1, ..., ∞} Default: 1 iterations limit (integer): Minor iteration limit The maximum number of minor iterations allowed (i.e., iterations of the simplex method or the reduced-gradient method). This option, if set, overrides the GAMS ITERLIM specification. If i = 0, no minor iterations are performed, but the starting point is tested for both feasibility and optimality. Default: GAMS iterlim lagrangian (string): Determines form of objection function in the linearized subproblems This determines the form of the objective function used for the linearized subproblems. The default value yes is highly recommended. The Penalty parameter value is then also relevant. If No is specified, the nonlinear constraint functions will be evaluated only twice per major iteration. Hence this option may be useful if the nonlinear constraints are very expensive to evaluate. However, in general there is a great risk that convergence may not occur. Default: YES value meaning NO Nondefault value (not recommended) YES Default value (recommended) linesearch tolerance (real): Controls accuracy of steplength selected For nonlinear problems, this controls the accuracy with which a steplength alpha is located in the one-dimensional problem minimize F(x+alpha*p) subject to 0 < alpha <= beta A linesearch occurs on most minor iterations for which x is feasible. (If the constraints are nonlinear, the function being minimized is the augmented Lagrangian.) r must be a real value in the range 0.0 < r < 1.0. The default value r = 0.1 requests a moderately accurate search. It should be satisfactory in most cases. If the nonlinear functions are cheap to evaluate, a more accurate search may be appropriate: try r = 0.01 or r = 0.001. The number of iterations should decrease, and this will reduce total run time if there are many linear or nonlinear constraints. If the nonlinear function are expensive to evaluate, a less accurate search may be appropriate; try r = 0.5 or perhaps r = 0.9. (The number of iterations will probably increase but the total number of function evaluations may decrease enough to compensate.) Range: [0, 1.0] Default: 0.1 log frequency (integer): Controls iteration logging to listing file In general, one line of the iteration log is printed every ith minor iteration. A heading labels the printed items, which include the current iteration number, the number and sum of feasibilities (if any), the subproblem objective value (if feasible), and the number of evaluations of the nonlinear functions. A value such as i = 10, 100 or larger is suggested for those interested only in the final solution. Log frequency 0 may be used as shorthand for Log frequency 99999. If Print level > 0, the default value of i is 1. If Print level = 0, the default value of i is 100. If Print level = 0 and the constraints are nonlinear, the minor iteration log is not printed (and the Log frequency is ignored). Instead, one line is printed at the beginning of each major iteration. Range: {1, ..., ∞} Default: 100 LU complete pivoting (no value): LUSOL pivoting strategy The LUSOL factorization implements a Markowitz-style search for pivots that locally minimize fill-in subject to a threshold pivoting stability criterion. The rook and complete pivoting options are more expensive than partial pivoting but are more stable and better at revealing rank, as long as the LU factor tolerance is not too large (say < 2.0). 
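For illustration only (the values are arbitrary and not recommendations), an option file that selects the more conservative rook pivoting and tightens the factorization tolerances could contain:

* illustrative LUSOL settings in minos.opt
LU rook pivoting
LU factor tolerance   5.0
LU update tolerance   5.0

As noted above, the rook and complete strategies cost more per factorization than partial pivoting, so they are mainly worth trying when the basis is suspected of being ill-conditioned or nearly rank-deficient.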
LU density tolerance (real): When to use dense factorization The density tolerance is used during LUSOL's basis factorization B=LU. Columns of L and rows of U are formed one at a time, and the remaining rows and columns of the basis are altered appropriately. At any stage, if the density of the remaining matrix exceeds this tolerance, the Markowitz strategy for choosing pivots is terminated and the remaining matrix is factored by a dense LU procedure. Raising the tolerance towards 1.0 may give slightly sparser factors, with a slight increase in factorization time. Range: [0, 1.0] Default: 0.5 LU factor tolerance (real): Trade-off between stability and sparsity in basis factorization This tolerance affects the stability and sparsity of the basis factorization B = LU during factorization. The value r specified must satisfy r >= 1.0. • The default value r = 100.0 usually strikes a good compromise between stability and sparsity. • For large and relatively dense problems, a larger value may give a useful improvement in sparsity without impairing stability to a serious degree. • For certain very regular structures (e.g., band matrices) it may be necessary to set r to a value smaller than the default in order to achieve stability. Range: [1.0, ∞] Default: 100.0 LU partial pivoting (no value): LUSOL pivoting strategy The LUSOL factorization implements a Markowitz-style search for pivots that locally minimize fill-in subject to a threshold pivoting stability criterion. The rook and complete pivoting options are more expensive than partial pivoting but are more stable and better at revealing rank, as long as the LU factor tolerance is not too large (say < 2.0). LU rook pivoting (no value): LUSOL pivoting strategy The LUSOL factorization implements a Markowitz-style search for pivots that locally minimize fill-in subject to a threshold pivoting stability criterion. The rook and complete pivoting options are more expensive than partial pivoting but are more stable and better at revealing rank, as long as the LU factor tolerance is not too large (say < 2.0). LU singularity tolerance (real): Protection against ill-conditioned basis matrices When the basis is refactorized, the diagonal elements of U are tested as follows: if |Ujj| <= r or |Ujj| < r * maxi |Uii|, the jth column of the basis is replaced by the corresponding slack variable. (This is most likely to occur after a restart, or at the start of a major iteration.) In some cases, the Jacobian matrix may converge to values that make the basis very ill-conditioned, causing the optimization to progress very slowly (if at all). Setting r = 1.0-5, say, may help cause a judicious change of basis. Default: 1.0e-11 LU update tolerance (real): Trade-off between stability and sparsity in basis updating This tolerance affects the stability and sparsity of the basis factorization B = LU during updates. The value r specified must satisfy r >= 1.0. • The default value r = 10.0 usually strikes a good compromise between stability and sparsity. • For large and relatively dense problems, r = 25.0 (say) may give a useful improvement in sparsity without impairing stability to a serious degree. • For certain very regular structures (e.g., band matrices) it may be necessary to set r to a value smaller than the default in order to achieve stability. Range: [1.0, ∞] Default: 10.0 major damping parameter (real): Prevents large relative changes between subproblem solutions The parameter may assist convergence on problems that have highly nonlinear constraints. 
It is intended to prevent large relative changes between subproblem solutions (xk, lambdak) and (xk+1, lambdak+1). For example, the default value 2.0 prevents the relative change in either xk or lambdak from exceeding 200 percent. It will not be active on well behaved problems. The parameter is used to interpolate between the solutions at the beginning and end of each major iteration. Thus xk+1 and lambdak+1 are changed to xk + sigma*(xk+1 - xk) and lambdak + sigma*(lambdak+1 - lambdak) for some step-length sigma < 1. In the case of nonlinear equations (where the number of constraints is the same as the number of variables) this gives a damped Newton method. This is a very crude control. If the sequence of major iterations does not appear to be converging, one should first re-run the problem with a higher Penalty parameter (say 10 or 100 times the default rho). (Skip this re-run in the case of nonlinear equations: there are no degrees of freedom and the value of rho is irrelevant.) If the subproblem solutions continue to change violently, try reducing r to 0.2 or 0.1 (say). For implementation reasons, the shortened step to sigma applies to the nonlinear variables x, but not to the linear variables y or the slack variables s. This may reduce the efficiency of the control. Default: 2.0 major iterations (integer): Max number of major iterations The maximum number of major iterations allowed. It is intended to guard against an excessive number of linearizations of the nonlinear constraints, since in some cases the sequence of major iterations may not converge. The progress of the major iterations can be best monitored using Print level 0 (the default). Default: 50 minor damping parameter (real): Limit change in x during linesearch This parameter limits the change in x during a linesearch. It applies to all nonlinear problems, once a feasible solution or feasible subproblem has been found. A linesearch of the form minimize F(x + alpha*p) subject to 0 < alpha <= beta is performed, where beta is the step to the nearest upper or lower bound on x. Normally, the first step length tried is alpha1 = min(1, beta). In some cases, such as F(x) = a*exp(b*x) or F(x) = a*x^b, even a moderate change in the components of x can lead to floating-point overflow. The parameter r is therefore used to define a limit beta2 = r(1 + ||x||)/||p|| and the first evaluation of F(x) is at the potentially smaller steplength alpha1 = min(1, beta, beta2). Wherever possible, upper and lower bounds on x should be used to prevent evaluation of nonlinear functions at meaningless points. The Minor damping parameter provides an additional safeguard. The default value r = 2.0 should not affect progress on well behaved problems, but setting r = 0.1 or 0.01 may be helpful when rapidly varying functions are present. A good starting point may be required. An important application is to the class of nonlinear least squares problems. In cases where several local optima exist, specifying a small value for r may help locate an optimum near the starting point. Default: 2.0 minor iterations (integer): Max number of minor iterations between linearizations of nonlinear constraints The maximum number of feasible minor iterations allowed between successive linearizations of the nonlinear constraints. A moderate value (e.g., 20 <= i <= 50) prevents excessive efforts being expended on early major iterations, but allows later subproblems to be solved to completion. The limit applies to both infeasible and feasible iterations.
In some cases, a large number of iterations (say K) might be required to obtain a feasible subproblem. If good starting values are supplied for variables appearing nonlinearly in the constraints, it may be sensible to specify a limit > K, to allow the first major iteration to terminate at a feasible (and perhaps optimal) subproblem solution. If a good initial subproblem is arbitrarily interrupted by a small limit, the subsequent linearization may be less favorable than the first. In general it is unsafe to specify a value as small as i = 1 or 2. Even when an optimal solution has been reached, a few minor iterations may be needed for the corresponding subproblem to be recognized as optimal. The Iteration limit provides an independent limit on the total minor iterations (across all subproblems). If the constraints are linear, only the Iteration limit applies: the minor iterations value is ignored. Default: 40 multiple price (integer): Multiple pricing Pricing refers to a scan of the current non-basic variables to determine which, if any, are eligible to become (super)basic. The multiple pricing parameter k controls the number of entering variables to choose: the k best non-basic variables are selected for admission to the set of (super)basic variables. The default k = 1 is best for linear programs, since an optimal solution will have zero superbasic variables. Warning : If k > 1, GAMS/MINOS will use the reduced-gradient method rather than the simplex method, even on purely linear problems. The subsequent iterations do not correspond to the efficient minor iterations carried out by commercial linear programming systems using multiple pricing. (In the latter systems, the classical simplex method is applied to a tableau involving k dense columns of dimension m, and k is therefore limited for storage reasons typically to the range 2 <= k <= 7.) GAMS/MINOS varies all superbasic variables simultaneously. For linear problems its storage requirements are essentially independent of k. Larger values of k are therefore practical, but in general the iterations and time required when k > 1 are greater than when the simplex method is used (k = 1). On large nonlinear problems it may be important to set k > 1 if the starting point does not contain many superbasic variables. For example, if a problem has 3000 variables and 500 of them are nonlinear, the optimal solution may well have 200 variables superbasic. If the problem is solved in several runs, it may be beneficial to use k = 10 (say) for early runs, until it seems that the number of superbasics has leveled off. If Multiple price k is specified, it is also necessary to specify Superbasic limit s for some s > k. Range: {1, ..., ∞} Default: 1 optimality tolerance (real): Reduced gradient optimality check This is used to judge the size of the reduced gradients dj = gj - piT aj, where gj is the gradient of the objective function corresponding to the jth variable, aj is the associated column of the constraint matrix (or Jacobian), and pi is the set of dual variables. By construction, the reduced gradients for basic variables are always zero. Optimality will be declared if the reduced gradients for nonbasic variables at their lower or upper bounds satisfy dj/||pi|| >= -r or dj/||pi|| <= r respectively, and if dj/||pi|| <= r for superbasic variables. The ||pi|| is a measure of the size of the dual variables. It is included to make the tests independent of a scale factor on the objective function. 
The quantity actually used is defined by sigma = sum(i, abs(pi(i))), ||pi|| = max{sigma/sqrt(m),1} so that only large scale factors are compensated for. As the objective scale decreases, the optimality test tends to become an absolute (instead of a relative) test. Default: 1.0e-6 partial price (integer): Number of segments in partial pricing strategy This parameter is recommended for large problems that have significantly more variables than constraints. It reduces the work required for each pricing operation (when a nonbasic variable is selected to become basic or superbasic). When i = 1, all columns of the constraints matrix (A I) are searched. Otherwise, Aj and I are partitioned to give i roughly equal segments Aj, Ij (j = 1 to i). If the previous search was successful on Aj-1, Ij-1, the next search begins on the segments Aj, Ij. (All subscripts here are modulo i.) If a reduced gradient is found that is larger than some dynamic tolerance, the variable with the largest such reduced gradient (of appropriate sign) is selected to become superbasic. (Several may be selected if multiple pricing has been specified.) If nothing is found, the search continues on the next segments Aj+1, Ij+1 and so on. Partial price t (or t/2 or t/3) may be appropriate for time-stage models having t time periods Range: {1, ..., ∞} Default: 10 penalty parameter (real): Used in modified augmented Lagrangian This specifies the value of rho in the modified augmented Lagrangian. It is used only when Lagrangian = yes (the default setting). For early runs on a problem with unknown characteristics, the default value should be acceptable. If the problem is known to be highly nonlinear, specify a large value, such as 10 times the default. In general, a positive value of rho may be necessary to ensure convergence, even for convex programs. On the other hand, if rho is too large, the rate of convergence may be unnecessarily slow. If the functions are not highly nonlinear or a good starting point is known, it will often be safe to specify penalty parameter 0.0. When solving a sequence of related problems, initially, use a moderate value for rho (such as the default) and a reasonably low Iterations and/or major iterations limit. If successive major iterations appear to be terminating with radically different solutions, the penalty parameter should be increased. (See also the Major damping parameter.) If there appears to be little progress between major iterations, it may help to reduce the penalty parameter. Default: automatic print level (integer): Controls amount of information printed during optimization This varies the amount of information that will be output during optimization. Print level 0 sets the default log and summary frequencies to 100. It is then easy to monitor the progress of the run. Print level 1 (or more) sets the default log and summary frequencies to 1, giving a line of output for every minor iteration. Print level 1 also produces basis statistics, i.e., information relating to LU factors of the basis matrix whenever the basis is re-factorized. For problems with nonlinear constraints, certain quantities are printed at the start of each major iteration. The option value is best thought of as a binary number of the form Print level JFLXB where each letter stands for a digit that is either 0 or 1. The quantities referred to are: • B Basis statistics, as mentioned above • X xk, the nonlinear variables involved in the objective function or the constraints. 
• L lambdak, the Lagrange-multiplier estimates for the nonlinear constraints. (Suppressed if Lagrangian=No, since then lambdak = 0.) • F f(xk), the values of the nonlinear constraint functions. • J J(xk), the Jacobian matrix. To obtain output of any item, set the corresponding digit to 1, otherwise to 0. For example, Print level 10 sets X = 1 and the other digits equal to zero; the nonlinear variables will be printed each major iteration. If J = 1, the Jacobian matrix will be output column-wise at the start of each major iteration. Column j will be preceded by the value of the corresponding variable xj and a key to indicate whether the variable is basic, superbasic or nonbasic. (Hence if J = 1, there is no reason to specify X = 1 unless the objective contains more nonlinear variables than the Jacobian.) A typical line of output is 3 1.250000D+01 BS 1 1.00000D+00 4 2.00000D+00 which would mean that x3 is basic at value 12.5, and the third column of the Jacobian has elements of 1.0 and 2.0 in rows 1 and 4. (Note: the GAMS/MINOS row numbers are usually different from the GAMS row numbers; see the Solution option.) Default: 0 radius of convergence (real): controls final reduction of penalty parameter This determines when the penalty parameter rho will be reduced, assuming rho was initially positive. Both the nonlinear constraint violation (see ROWERR below) and the relative change in consecutive Lagrange multiplier estimates must be less than r at the start of a major iteration before rho is reduced or set to zero. A few major iterations later, full completion will be requested if not already set, and the remaining sequence of major iterations should converge quadratically to an optimum. Default: 0.01 row tolerance (real): Accuracy requirement for nonlinear rows This specifies how accurately the nonlinear constraints should be satisfied at a solution. The default value is usually small enough, since model data is often specified to about this accuracy. Let ROWERR be the maximum component of the residual vector f(x) + A1y - b1, normalized by the size of the solution. Thus ROWERR = ||f(x) + A1y - b1||inf/(1 + XNORM) where XNORM is a measure of the size of the current solution (x, y). The solution is considered to be feasible if ROWERR <= r. If the problem functions involve data that is known to be of low accuracy, a larger Row tolerance may be appropriate. Default: 1.0e-6 scale all variables (no value): Synonym to scale option 2 scale linear variables (no value): Synonym to scale option 1 scale no (no value): Synonym to scale option 0 scale nonlinear variables (no value): Synonym to scale option 2 scale option (integer): Scaling Scale Yes sets the default. (Caution: If all variables are nonlinear, Scale Yes unexpectedly does nothing, because there are no linear variables to scale). Scale No suppresses scaling (equivalent to Scale Option 0). If nonlinear constraints are present, Scale option 1 or 0 should generally be tried at first. Scale option 2 gives scales that depend on the initial Jacobian, and should therefore be used only if (a) a good starting point is provided, and (b) the problem is not highly nonlinear. Default: 1 value meaning 0 No scaling If storage is at a premium, this option should be used. 1 Scale linear variables Linear constraints and variables are scaled by an iterative procedure that attempts to make the matrix coefficients as close as possible to 1.0 (see [5]). This will sometimes improve the performance of the solution procedures. 
Scale linear variables is an equivalent option. 2 Scale linear + nonlinear variables All constraints and variables are scaled by the iterative procedure. Also, a certain additional scaling is performed that may be helpful if the right-hand side b or the solution x is large. This takes into account columns of (A I) that are fixed or have positive lower bounds or negative upper bounds. Scale nonlinear variables or Scale all variables are equivalent options.
scale print (no value): Print scaling factors This causes the row-scales r(i) and column-scales c(j) to be printed. The scaled matrix coefficients are âij = aij c(j)/r(i). The scaled bounds on the variables and slacks are l̂j = lj/c(j) and ûj = uj/c(j), where c(j) = r(j - n) if j > n. If a Scale option has not already been specified, Scale print sets the default scaling.
scale print tolerance (real): Scale print flag and set tolerance See Scale Tolerance. This option also turns on printing of the scale factors. Range: [0, 1.0] Default: 0.9
scale tolerance (real): Scale tolerance All forms except Scale option may specify a tolerance r where 0 < r < 1 (for example: Scale Print Tolerance = 0.99). This affects how many passes might be needed through the constraint matrix. On each pass, the scaling procedure computes the ratio of the largest and smallest nonzero coefficients in each column: rhoj = maxi |aij| / mini |aij| (aij ≠ 0). If maxj rhoj is less than r times its previous value, another scaling pass is performed to adjust the row and column scales. Raising r from 0.9 to 0.99 (say) usually increases the number of scaling passes through A. At most 10 passes are made. If a Scale option has not already been specified, Scale tolerance sets the default scaling. Range: [0, 1.0] Default: 0.9
scale yes (no value): Synonym to scale option 1
solution (string): Prints MINOS solution This controls whether or not GAMS/MINOS prints the final solution obtained. There is one line of output for each constraint and variable. The lines are in the same order as in the GAMS solution, but the constraints and variables are labeled with internal GAMS/MINOS numbers rather than GAMS names. (The numbers at the left of each line are GAMS/MINOS column numbers, and those at the right of each line in the rows section are GAMS/MINOS slacks.) The GAMS/MINOS solution may be useful occasionally to interpret certain messages that occur during the optimization, and to determine the final status of certain variables (basic, superbasic or nonbasic). Default: NO value meaning NO Turn off printing of solution YES Turn on printing of solution
start assigned nonlinears (string): Starting strategy when there is no basis This option affects the starting strategy when there is no basis (i.e., for the first solve or when the GAMS statement option bratio = 1 is used to reject an existing basis.) This option applies to all nonlinear variables that have been assigned nondefault initial values and are strictly between their bounds. Free variables at their default value of zero are excluded. Let K denote the number of such assigned nonlinear variables. Default: SUPERBASIC value meaning SUPERBASIC Default Specify superbasic for highly nonlinear models, as long as K is not too large (say K < 100) and the initial values are good. BASIC Good for square systems Specify basic for models that are essentially square (i.e., if there are about as many general constraints as variables). NONBASIC Specify nonbasic if K is large. ELIGIBLE FOR CRASH Specify Eligible for Crash for linear or nearly linear models.
The nonlinear variables will be treated in the manner described under Crash option.
subspace tolerance (real): Determines when nonbasics become superbasic This controls the extent to which optimization is confined to the current set of basic and superbasic variables (Phase 4 iterations), before one or more nonbasic variables are added to the superbasic set (Phase 3). The parameter r must be a real number in the range 0 < r <= 1. When a nonbasic variable xj is made superbasic, the resulting norm of the reduced-gradient vector (for all superbasics) is recorded. Let this be ||ZT g0||. (In fact, the norm will be |dj|, the size of the reduced gradient for the new superbasic variable xj.) Subsequent Phase 4 iterations will continue at least until the norm of the reduced-gradient vector satisfies ||ZT g|| <= r ||ZT g0||. (||ZT g|| is the size of the largest reduced-gradient component among the superbasic variables.) A smaller value of r is likely to increase the total number of iterations, but may reduce the number of basis changes. A larger value such as r = 0.9 may sometimes lead to improved overall efficiency, if the number of superbasic variables has to increase substantially between the starting point and an optimal solution. Other convergence tests on the change in the function being minimized and the change in the variables may prolong Phase 4 iterations. This helps to make the overall performance insensitive to larger values of r. Range: [0, 1.0] Default: 0.5
summary frequency (integer): Controls iteration logging to summary (log file) A brief form of the iteration log is output to the MINOS summary file (i.e. the GAMS log file). In general, one line is output every ith minor iteration. In an interactive environment, the output normally appears at the terminal and allows a run to be monitored. If something looks wrong, the run can be manually terminated. The summary frequency controls summary output in the same way as the log frequency controls output to the print file. A value such as Summary Frequency = 10 or 100 is often adequate to determine if the solve is making progress. If Print level = 0, the default value of Summary Frequency is 100. If Print level > 0, the default value of Summary Frequency is 1. If Print level = 0 and the constraints are nonlinear, the Summary Frequency is ignored. Instead, one line is printed at the beginning of each major iteration. Range: {1, ..., ∞} Default: 100
superbasics limit (integer): Maximum number of superbasics This places a limit on the storage allocated for superbasic variables. Ideally, the parameter i should be set slightly larger than the number of degrees of freedom expected at an optimal solution. For linear problems, an optimum is normally a basic solution with no degrees of freedom. (The number of variables lying strictly between their bounds is not more than m, the number of general constraints.) The default value of i is therefore 1. For nonlinear problems, the number of degrees of freedom is often called the number of independent variables. Normally, i need not be greater than n1 + 1, where n1 is the number of nonlinear variables. For many problems, i may be considerably smaller than n1. This will save storage if n1 is very large. This parameter also sets the Hessian dimension, unless the latter is specified explicitly (and conversely). If neither parameter is specified, GAMS chooses values for both, using certain characteristics of the problem.
Range: {1, ..., ∞} Default: 1
unbounded objective value (real): Determines when a problem is called unbounded This parameter is intended to detect unboundedness in nonlinear problems. During a line search of the form minimize (over alpha) F(x + alpha*p), if |F| exceeds the parameter r or if alpha exceeds the unbounded step size, iterations are terminated with the exit message PROBLEM IS UNBOUNDED (OR BADLY SCALED). If singularities are present, unboundedness in F(x) may be manifested by a floating-point overflow (during the evaluation of F(x + alpha*p)) before the test against r can be made. Unboundedness is best avoided by placing finite upper and lower bounds on the variables. See also the Minor damping parameter. Default: 1.0e20
unbounded step size (real): Determines when a problem is called unbounded This parameter is intended to detect unboundedness in nonlinear problems. During a line search of the form minimize (over alpha) F(x + alpha*p), if alpha exceeds the parameter r or if |F| exceeds the unbounded objective value, iterations are terminated with the exit message PROBLEM IS UNBOUNDED (OR BADLY SCALED). If singularities are present, unboundedness in F(x) may be manifested by a floating-point overflow (during the evaluation of F(x + alpha*p)) before the test against r can be made. Unboundedness is best avoided by placing finite upper and lower bounds on the variables. See also the Minor damping parameter. Default: 1.0e10
verify constraint gradients (no value): Synonym to verify level 2
verify gradients (no value): Synonym to verify level 3
verify level (integer): Controls verification of gradients This option controls the finite-difference check performed by MINOS on the gradients (first derivatives) computed by GAMS for each nonlinear function. GAMS computes gradients analytically, and the values obtained should normally be taken as correct. Default: 0 value meaning 0 Cheap test Only a cheap test is performed, requiring three evaluations of the nonlinear objective and two evaluations of the nonlinear constraints. Verify No is an equivalent option. 1 Check objective A more reliable check is made on each component of the objective gradient. Verify objective gradients is an equivalent option. 2 Check Jacobian A check is made on each column of the Jacobian matrix associated with the nonlinear constraints. Verify constraint gradients is an equivalent option. 3 Check objective and Jacobian A detailed check is made on both the objective and the Jacobian. Verify, Verify gradients, and Verify Yes are equivalent options. -1 No check
verify no (no value): Synonym to verify level 0
verify objective gradients (no value): Synonym to verify level 1
verify yes (no value): Synonym to verify level 3
weight on linear objective (real): Composite objective weight This option controls the so-called composite objective technique. If the first solution obtained is infeasible, and if the objective function contains linear terms, and the objective weight w is positive, this technique is used. While trying to reduce the sum of infeasibilities, the method also attempts to optimize the linear portion of the objective. At each infeasible iteration, the objective function is defined to be minimize (over x) sigma*w*(cT x) + (sum of infeasibilities), where sigma = 1 for minimization, sigma = -1 for maximization, and c is the linear portion of the objective. If an optimal solution is reached while still infeasible, w is reduced by a factor of 10. This helps to allow for the possibility that the initial w is too large.
It also provides dynamic allowance for the fact that the sum of infeasibilities is tending towards zero. The effect of w is disabled after five such reductions, or if a feasible solution is obtained. This option is intended mainly for linear programs. It is unlikely to be helpful if the objective function is nonlinear. Default: 0.0

# Exit Conditions

This section discusses the exit codes printed by MINOS at the end of the optimization run.

EXIT – Optimal solution found
This is the message we all hope to see! It is certainly preferable to every other message. Of course it is quite possible that there are model formulation errors, which will (hopefully) lead to unexpected objective values and solutions. The reported optimum may be a local one, and other, much better optima may exist.

EXIT – The problem is infeasible
When the constraints are linear, this message can probably be trusted. Feasibility is measured with respect to the upper and lower bounds on the variables (the bounds on the slack variables correspond to the GAMS constraints). The message tells us that among all the points satisfying the general constraints $$Ax+s=0$$, there is apparently no point that satisfies the bounds on $$x$$ and $$s$$. Violations as small as the FEASIBILITY TOLERANCE are ignored, but at least one component of $$x$$ or $$s$$ violates a bound by more than the tolerance. Note: Although the objective function is the sum of the infeasibilities, this sum will usually not have been minimized when MINOS recognizes the situation and exits. There may exist other points that have a significantly lower sum of infeasibilities. When nonlinear constraints are present, infeasibility is much harder to recognize correctly. Even if a feasible solution exists, the current linearization of the constraints may not contain a feasible point. In an attempt to deal with this situation MINOS may relax the bounds on the slacks associated with nonlinear rows. This perturbation is allowed a fixed number of times. Normally a feasible point will be obtained relative to the perturbed constraints, and optimization can continue on the subproblem. However, if several consecutive subproblems require such perturbation, the problem is terminated and declared INFEASIBLE. Clearly this is an ad-hoc procedure. Wherever possible, nonlinear constraints should be defined in such a way that feasible points are known to exist when the constraints are linearized.

EXIT – The problem is unbounded (or badly scaled)
For linear problems, unboundedness is detected by the simplex method when a nonbasic variable can apparently be increased by an arbitrary amount without causing a basic variable to violate a bound. A simple way to diagnose such a model is to add an appropriate bound on the objective variable. Very rarely, the scaling of the problem could be so poor that numerical error will give an erroneous indication of unboundedness. Consider using the SCALE option. For nonlinear problems, MINOS monitors both the size of the current objective function and the size of the change in the variables at each step. If either of these is very large (as judged by the UNBOUNDED parameter), the problem is terminated and declared UNBOUNDED. To avoid large function values, it may be necessary to impose bounds on some of the variables in order to keep them away from singularities in the nonlinear functions.

EXIT – User Interrupt
This exit code is a result of interrupting the optimization process by hitting Ctrl-C. Inside the IDE this is accomplished by hitting the Interrupt button.
The solver will finish its current iteration and return the current solution. This solution may still be intermediate infeasible or intermediate non-optimal.

EXIT – Too many iterations
The iteration limit was hit. Either the ITERLIM, or in some cases the ITERATIONS LIMIT or MAJOR ITERATION LIMIT, was too small to solve the problem. In most cases increasing the GAMS ITERLIM option will resolve the problem. In other cases you will need to create a MINOS option file and set a MAJOR ITERATION LIMIT. The listing file will give more information regarding what limit was hit. The GAMS iteration limit is displayed in the listing file under the section SOLVE SUMMARY. If the ITERLIM was hit, the message will look like:
ITERATION COUNT, LIMIT 10001 10000

EXIT – Resource Interrupt
The solver hit the RESLIM resource limit, which is a time limit. It returned the solution at that time, which may still be intermediate infeasible or intermediate non-optimal. The GAMS resource limit is displayed in the listing file under the section SOLVE SUMMARY. If the GAMS RESLIM was hit, the message will look like:
RESOURCE USAGE, LIMIT 1001.570 1000.000

EXIT – The objective has not changed for many iterations
This is an emergency measure for the rare occasions when the solution procedure appears to be cycling. Suppose that a zero step is taken for several consecutive iterations, with a basis change occurring each time. It is theoretically possible for the set of basic variables to become the same as they were one or more iterations earlier. The same sequence of iterations would then occur ad infinitum.

EXIT – The Superbasics Limit is too small
The problem appears to be more non-linear than anticipated. The current set of basic and superbasic variables has been optimized as much as possible and an increase in the number of superbasics is needed. You can use the option SUPERBASICS LIMIT to increase the limit. See also option HESSIAN DIMENSION.

EXIT – Constraint and objective function could not be calculated
The function or gradient could not be evaluated. For example, this can occur when MINOS attempts to take a log or a square root of a negative number, when evaluating the expression $$x^y$$ with $$x\le 0$$, or when evaluating exp(x) for large x and the result is too large to store. The listing file will contain details about where and why evaluation errors occur. To fix this problem, add bounds so that all functions can be properly evaluated. E.g. if you have an expression $$x^y$$, add a lower bound X.LO=0.001 to your model. In many cases the algorithm can recover from function evaluation errors, for instance if they happen in the line search while evaluating trial points. The message above appears in cases where the algorithm cannot recover, and requires a reliable function or gradient evaluation.

EXIT – Function evaluation error limit
The limit of allowed function evaluation errors DOMLIM has been exceeded. Function evaluation errors occur when MINOS attempts to evaluate the objective and/or constraints at points where these functions or their derivatives are not defined or where overflows occur. Some examples are given above. The listing file contains details about these errors. The quick and dirty way to solve this is to increase the GAMS DOMLIM setting, but in general it is better to add bounds. E.g. if you have an expression $$x^y$$, then add a bound X.LO=0.001 to your model.

EXIT – The current point cannot be improved
The line search failed.
This can happen if the model is very nonlinear or if the functions are nonsmooth (using a DNLP model type). If the model is non-smooth, consider a smooth approximation. It may be useful to check the scaling of the model and think more carefully about choosing a good starting point. Sometimes it can help to restart the model with full scaling turned on:

* this one gives "current point cannot be improved"
option nlp=minos;
solve m minimizing z using nlp;
* write option file
file fopt /minos.opt/;
putclose fopt "scale all variables"/;
* solve with "scale all variables"
m.optfile=1;
solve m minimizing z using nlp;

EXIT – Numerical error in trying to satisfy the linear constraints (or the linearized constraints)
The basis is very ill-conditioned. This is often a scaling problem. Try the full scaling option scale all variables or, better yet, rescale the model in GAMS via the .scale suffix or by choosing more appropriate units for variables and RHS values.

EXIT – Not enough storage to solve the model
The amount of workspace allocated for MINOS to solve the model is insufficient. Consider increasing the GAMS option workfactor to increase the workspace allocated for MINOS to use. The listing file and log file (screen) will contain information about the current workspace allocation. Increasing the workfactor by 50% is a reasonable strategy.

EXIT – Systems error
This is a catch-all return for other serious problems. Check the listing file for more messages. If needed, rerun the model with OPTION SYSOUT=ON;.
2022-08-08 09:50:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7527958750724792, "perplexity": 1297.573018887567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00153.warc.gz"}
https://www.wyzant.com/resources/answers/topics/ratio?page=6
284 Answered Questions for the topic Ratio Ratio 07/15/18 #### If there are 89.96 pesos in 2 dollars. How much is 12 dollars in pesos? if there are  89.96 pesos in 2 dollars. how much is 12 dollars in peso? #### Why the answer is 1:500 but not 1:250000 The actual area of a playground is 900m2.If the area of the playground on a map is 36 cm2,then the scale of the map is? i thought the answer is 36:900*100*100=1:250000 but is wrong Ratio 06/16/18 #### Find the total number of working days To complete a job alone,A needs 12 days,B needs 18 days and C needs 24 days.Now,the job is carried out by A first for several days,and then by B for several days,and finally completed by C.The... more Ratio 06/12/18 #### express the ratio 15:5 in the form n:1 Give answer in the simplest form Ratio 05/22/18 #### A student finished 8 of her homework problems in class. If the ratio of problems she finished to problems she still had left was 4:1. How many Homework problems A student finished eight of her homework problems in class is the ratio of problems she finished to problems she still had was 4:1 how many homework problems do you have in total Ratio 05/06/18 #### How much potassium does a sample have The number the ratio of nitrogen to potassium in a sample of soil is 12 to 9 the sample has 36 units of nitrogen Ratio 05/05/18 #### Splitting the bill. There are three people moving into a house that they are financing for $1500 a month. All three people want to split the bill and all pay a portion of it. Let's say person 1 makes$40,000 a year,... more Ratio 04/27/18 #### What if their are 3 girls and 7 boys if their are 50 students how many boys are their? This is for my homework Ratio 04/10/18 a sum of money is divided between peter,sam and kenneth in the ratio 5:2:7.   (a) if peter receives $24 more than sam, calculate how much kenneth receives 04/08/18 #### My understanding of fraction is that we are catering to part of a whole. Is that technically correct? What i mean to ask is if a/b is the fraction a the numerator can take any value positive, negative or 0 whereas the denominator represented by b over here should always be positive as whole should... more Ratio 04/02/18 #### Ratio question To make a ham sandwich I need I bread roll and two slices of ham. Pack of 20 bread rolls are £2.87 Pack of 30 ham slices are £6.32 I am going to have to buy enough packs to have exactly twice as... more Ratio 03/30/18 #### I’m stuck on a ratio question a b c are all positive intergers a:b= 5:6 b:c=8:9 work our the smallest value of a+b+c Ratio Math 03/29/18 #### Help with Ratio Problem Please I saw this question on my formative task and want to know how to solve it. in a group of people, their average age is 23. The men's average age is 25 and the women's average age is 22. Find the... more Ratio Ratios 03/29/18 #### Ratio question The scale on a map is 1:25000. How many kilometres on the ground is represented by 7cm on the map Ratio 03/10/18 #### The capacities of tanks A, B and C are 2, 3 and 6 litres, respectively...... The capacities of tanks A, B and C are 2, 3 and 6 litres, respectively. Tank A is half full of whiskey, tank B is 2/3rd empty, whereas tank C is empty. If the whiskey in the tanks is divided... more Ratio 03/09/18 #### The cost of tickets for a party of 16 adults and 24 children is$88. If a childs ticket cost two-thirds of that of an adult ticket, find the cost of an adult Its to do with ratio and finding the cost of things. 
This question was given to us by my teacher and I can't seem to solve it yet. Ratio 02/13/18 #### 3 quarters to 6 dollars reduced to lowest terms I'm just confused how to figure this question out
2021-06-14 21:36:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4734123945236206, "perplexity": 1845.2049742625577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487613453.9/warc/CC-MAIN-20210614201339-20210614231339-00413.warc.gz"}
https://a.tellusjournals.se/articles/10.16993/tellusa.47/
# A New Standardized Type Drought Indicators Based Hybrid Procedure for Strengthening Drought Monitoring System

## Abstract

Drought occurs recurrently in various climatic zones around the world. Therefore, accurate and continuous drought monitoring is essential for reliable drought mitigation policies. In past research, several drought monitoring indicators have been developed. Regardless of their scopes and applicabilities, every indicator carries a certain amount of error in the accurate determination of drought classes. In addition, climate change and the complex features of meteorological variables also reduce the performance of each indicator. Consequently, accurate drought monitoring is a challenging task in hydrology and water management research. The objective of this research is to enhance the accuracy of drought characterization by employing multiple drought indicators simultaneously. This article proposes a new aggregative index – the Seasonal Mixture Standardized Drought Index (SMSDI). The procedure of SMSDI is mainly based on the integration of Principal Component Analysis (PCA) and the K-Component Gaussian Mixture Distribution (K-CGMD). In a preliminary analysis, the aggregation of three multi-scalar Standardized Drought Indices (SDIs) is carried out for three meteorological gauge stations of Pakistan. For comparative assessment, each individual SDI is used to investigate its association and consistency with SMSDI. The outcomes of this research show that SMSDI has a significant correlation with the individual SDIs. We conclude that, instead of using an individual indicator, the proposed aggregative approach enhances the scope and capacity of drought indicators for extracting reliable information related to future drought.

How to Cite: Khan, M.A., Zhang, X., Ali, Z., Jiang, H., Ismail, M. and Qamar, S., 2022. A New Standardized Type Drought Indicators Based Hybrid Procedure for Strengthening Drought Monitoring System. Tellus A: Dynamic Meteorology and Oceanography, 74(1), pp.119–140. DOI: http://doi.org/10.16993/tellusa.47

Published on 28 Mar 2022. Accepted on 10 Mar 2022. Submitted on 10 Mar 2022.

## 1. Introduction

Due to climate change and increasing temperature, there is a continuous trend of recurrent drought events in several parts of the world. Compared to other hazards, the effects of drought are more disastrous and longer lasting on humans, agriculture, livestock, and industries (Vásquez-León et al., 2003). Drought can be defined as “a certain period of time (usually lasting several months or longer than usual) during which a particular region receives comparatively less precipitation (in terms of rain or snowfall)” (Van Loon et al., 2016). According to its characteristics, drought has been divided into four major types. Details on each type of drought can be found in Sun et al. (2019). Every year, around 55 million people are affected by drought, directly or indirectly, all over the world (WHO, 2020). In addition, the continuous increase in temperature and global warming threatens the soil fertility of agricultural land (Thadshayini et al., 2020). Further, the catastrophic consequences of drought include the depletion of accessible drinking-water and groundwater resources, death of inhabitants and livestock, deterioration of food quality, serious diseases, desertification, economic inflation, social disruption, soil erosion, depletion of freshwater resources, and a weakened economy, etc.
(Garcia et al., 2013; Khan et al., 2021). The major challenges for hydrologists and environmentalists are water security, water management, and the development of future climate policies. In addition, much research and debate is ongoing about the impact of atmospheric circulation (Magan et al., 2010), global warming, and the procedures of climate variability indices (Gu et al., 2019).

In previous developments, several tools and methods for continuous drought monitoring and drought assessment have been suggested. In all of these tools and methods, time series data of temperature and precipitation play key roles. For example, the Palmer Drought Severity Index (PDSI) was proposed for assessing dry and wet events (Palmer, 1965). In recent research, some authors have revamped the method of PDSI by accounting for new climatic parameters (Yu et al., 2019). Shen et al. (2019) have proposed the Integrated Drought Condition Index (IDCI) by combining rainfall, temperature, evaporation, vegetation condition, soil moisture, and potential evaporation. Following PDSI, a standardized and probabilistic procedure based on time series data of rainfall – the Standardized Precipitation Index (SPI) – was proposed by McKee et al. (1993). Numerous applications of SPI for drought monitoring and drought assessment are available in the literature (Wu et al., 2007). In more advanced research, some authors have suggested various drought indices that include additional meteorological variables under the same standardized procedure. For instance, the Standardized Precipitation Evapotranspiration Index (SPEI) accounts for evaporation before standardization (Vicente-Serrano et al., 2010). Ali et al. (2017) have suggested the Standardized Precipitation Temperature Index (SPTI). In SPTI, a time series vector of average temperature is used with precipitation data before the standardization phase. SPI, SPEI and SPTI have a homogeneous computational procedure. Therefore, we call these indices a set of Standardized Drought Indices (SDIs). Some more details on SDIs are available in Erhardt and Czado (2015).

In recent research, many authors have estimated drought indices using various probability distribution functions. They include Zhang et al. (2020), Ali et al. (2019d), Stagge et al. (2015), Angelidis et al. (2012), etc. However, among the available choices, the probability function selected as optimal for the whole set of time series data may contain a significant amount of error due to poor fitting. For example, the Q-Q plot shown in Ali et al. (2019c) clearly shows the inappropriateness of the trapezoidal distribution for the estimation of the SPEI index. To resolve these issues, Ali et al. (2020) have introduced a generalized non-parametric standardization approach for the estimation of various standardized drought indices. Alternatively, many researchers in various fields frequently use the K-CGMD (McLachlan, 2004) for multi-distributed data. Furthermore, K-CGMD models have many theoretical and computational benefits over unimodal distributions. However, being probabilistic in nature, each SDI indicator contains a certain amount of error. Therefore, the accurate determination of drought classes is the major problem in SDIs. These inaccuracies arise due to heterogeneity, inconsistencies, and seasonal patterns in temporal data sets. In past research, many authors have addressed seasonality in the procedure of a single drought index.
For example, Carrão et al. (2018) employed SPI for seasonal drought forecasting in Latin America. Moghimi et al. (2020) have forecasted seasonal drought under various time series models using the Reconnaissance Drought Index (RDI). Qaiser et al. (2021) have established a Composite Drought Index (CDI) for seasonal drought characterization. Hao et al. (2018) have provided a detailed review of, and future challenges related to, seasonal drought prediction. In this setting, some of our recent developments include the Seasonally Combinative Regional Drought Indicator (SCRDI) (Ali et al., 2020c), the Regionally Improved Weighted Standardized Drought Index (RIWSDI) (Jiang et al., 2020), the Multi-Scalar Aggregative Standardized Precipitation Temperature Index (MASPTI) (Ali et al., 2020b), the Probabilistic Weighted Joint Aggregative Index (PWJADI) (Ali et al., 2019a) and the Long Averaged Weighted Joint Aggregative Criterion (LAWJAC) (Ali et al., 2020a).

The aim of this paper is to propose a comprehensive framework which investigates various features of drought by aggregating the temporal and seasonal characteristics of multiple drought indices. Hence, the objective of this research is to provide a new hybrid drought indicator that increases the accuracy of, and accounts for seasonality in, drought characterization by amalgamating multiple SDIs.

## 2. Material and methods

### 2.1 Study area and data description

Pakistan is geographically located between the Middle East and Central Asia (Ahmed et al., 2018). It generally has four seasons: cold (November-February); pre-monsoon/hot (March-mid of June); monsoon (mid of June-mid of September); post-monsoon (mid of September-October). Summers are extremely hot, with relative humidity of 25% to 50%. Most areas of the country are arid to semiarid because of the spatial variability in temperatures. However, some parts are very wet, e.g., the southern slopes of the Himalayas region as well as the sub-mountainous region, because of the high amount of annual rainfall (760 mm to 2000 mm). Based on the Census 2016, the total population of Pakistan is approximately 207,774,520 (Pakistan Census Reports, 2017). The income source of the majority of the people belongs to agriculture or related sectors (Rehman et al., 2015). Over the past few decades, recurrent occurrences of drought have had a severe impact on people's lives, livestock and agricultural yields. Consequently, the economy of the country has been severely disturbed in recent years due to extreme drought hazards. Therefore, the country needs wide-ranging drought mitigation policies for future drought.

This research is based on three meteorological stations: Badin (BD) (24.6459° N, 68.8467° E), Khanpur (KP) (28.6332° N, 70.6574° E), and Nawabshah (NS) (26.2447° N, 68.3935° E). These stations are located in various climatological regions of Pakistan. The locations of the chosen meteorological stations are shown in Figure 1. For computations, long time series data (from January 1971 to December 2016) of rainfall and temperature are obtained from the Karachi Data Processing Center (KDPC) via the Pakistan Meteorological Department (PMD). The maximum average monthly precipitation for Badin, Khanpur and Nawabshah is 459.00 mm (August 1979), 307.50 mm (August 2015) and 353.20 mm (September 2011), respectively.
Similarly, maximum and minimum average monthly temperature for Badin, Khanpur and Nawabshah are 34°C (May–2010) and 15.45°C (January-1975), 36.3 C° (June–2002) and 11.15 C° (January–1984), and 37.2 C° (May–2002) and 12.45 C° (January–2008) respectively. Throughout various seasons the regions have substantially high variation in rainfall and temperature. The summary statistics of the data have been shown in Table 1. Figure 1 Geographical locations of the study area. Table 1 Month-wise summary statistics rainfall and temperature (min, max) in selected stations. MONTHS STATISTICS BADIN (BD) NAWABSHAH (NS) KHANPUR (KP) PRE MAX T MIN T PRE MAX T MIN T PRE MAX T MIN T Jan Avg. 1.4 25.2 9.7 2.1 24.3 6.3 4.1 21.4 4.8 Std. 3.6 0.9 1.5 5.2 0.9 1.5 8.9 1.0 1.6 Kurtosis 10.7 –0.3 0.6 14.4 0.1 –0.2 16.2 0.1 0.5 Feb Avg. 4.8 28.3 12.4 3.5 27.6 8.8 8.2 24.3 7.8 Std. 11.3 1.5 1.9 8.3 1.7 1.9 11.8 1.7 2.0 Kurtosis 15.0 –0.2 0.4 10.7 –0.1 0.8 4.9 –0.1 1.2 Mar Avg. 0.9 33.7 17.6 2.9 33.7 14.3 6.2 30.0 13.3 Std. 2.4 1.7 1.5 6.1 2.1 1.0 9.8 2.0 1.6 Kurtosis 11.0 –0.3 2.0 21.4 –0.1 0.5 4.6 –0.1 0.0 Apr Avg. 1.7 38.0 22.3 3.1 40.1 19.9 6.2 37.3 19.1 Std. 5.2 1.4 1.0 7.9 1.8 1.3 10.6 1.8 2.0 Kurtosis 14.7 0.6 –0.3 11.3 0.0 –0.5 10.9 –0.3 –0.7 May Avg. 4.0 39.5 25.8 2.0 44.1 24.8 4.8 41.9 24.6 Std. 24.0 1.0 0.8 5.7 1.5 1.0 8.1 1.6 2.1 Kurtosis 46.0 0.0 0.6 17.3 0.9 0.0 5.1 –0.2 –0.7 Jun Avg. 11.4 38.1 27.7 6.8 43.8 27.5 5.5 42.3 27.5 Std. 22.4 0.9 0.4 14.1 1.1 0.7 9.9 1.0 1.4 Kurtosis 12.4 1.7 2.5 6.2 –0.8 0.5 10.2 –0.4 0.6 Jul Avg. 62.6 35.0 27.1 49.5 40.7 27.4 27.0 39.6 27.6 Std. 73.4 1.0 0.4 64.2 1.4 0.7 35.5 1.1 1.3 Kurtosis 1.5 –0.2 0.0 5.3 0.6 0.3 4.8 1.5 2.7 Aug Avg. 94.3 33.4 26.1 49.7 38.8 26.1 40.9 38.0 26.4 Std. 102.9 1.0 0.5 63.0 1.4 0.8 68.2 1.0 1.5 Kurtosis 2.7 –0.3 –0.2 3.4 0.8 2.9 6.3 0.0 2.7 Sep Avg. 38.2 34.2 25.0 25.1 38.7 24.0 20.9 36.7 23.5 Std. 84.3 1.1 0.7 65.1 1.6 1.1 51.7 1.0 1.8 Kurtosis 8.1 0.2 –0.4 15.4 3.6 1.8 16.9 –0.1 2.7 Oct Avg. 6.4 35.3 22.1 3.5 37.5 18.6 1.8 34.8 17.2 Std. 21.5 1.0 1.2 11.6 1.3 1.7 7.1 0.8 2.1 Kurtosis 18.2 0.4 –0.3 15.9 1.0 0.0 21.4 –0.1 1.9 Nov Avg. 2.2 31.5 16.5 0.9 31.9 12.6 0.4 29.7 10.9 Std. 6.9 1.0 1.5 3.7 1.9 1.7 1.2 1.0 1.8 Kurtosis 15.6 0.3 1.8 37.1 18.8 2.5 14.5 1.2 0.4 Dec Avg. 1.0 26.7 11.4 3.0 26.1 8.0 3.9 23.8 6.2 Std. 2.7 1.1 1.6 9.0 1.1 1.4 12.4 1.2 1.7 Kurtosis 14.9 0.2 0.1 15.4 –0.3 0.8 29.9 0.0 0.7 ### 2.2. Standardized Drought Indices (SDIs) In past research, several authors have developed numerous methods and procedures of drought monitoring for various climatological region. However, the procedure of Standardized Drought Indices (SDIs) (Erhardt and Czado 2015) is one of the most commonly used and accepted methods around the world. Among others, the three most important SDIs named as SPI (McKee et al., 1993), SPEI (Vicente-Serrano et al., 2010), SPTI (Ali et al., 2017) have same mathematical process and classification. However, the errors and uncertainty in accurate determination of class are always present in each one. In this paper, the proposal of this research is based on the aggregation of SPI, SPEI and SPTI drought indices. However, the approach is not limited and hence can be applied to other drought indices. Some brief descriptions on SPI, SPEI and SPTI are as follow: SPI is one of the oldest and the most commonly used drought indicator. Estimation of SPI is based on long term data of rainfall at particular station. 
In the SPI procedure, the Cumulative Distribution Function (CDF) of an appropriate probability function fitted to the rainfall data is standardized. The standardized time series data are called the values of SPI. SPI is a standardized, powerful, easy-to-use measure for defining and comparing drought characteristics of various climatological regions. The main flexibility of SPI is its capacity to define various types of drought (i.e., meteorological, agricultural and hydrological). Recent applications of SPI include Kalisa et al. (2020), Achour et al. (2020), Bong and Richard (2020), Yaseen et al. (2021), and Qaisrani et al. (2021), etc.

After SPI, Vicente-Serrano et al. (2010) proposed a new climatic drought index, the Standardized Precipitation Evapotranspiration Index (SPEI), which considers the effect of temperature in defining drought characteristics. SPEI is an upgraded version of SPI. The SPEI is based on both precipitation and temperature data and has the advantage of integrating a multi-scalar character with the capacity to incorporate impacts of temperature variability on drought assessment. Mathematically, the procedure of SPEI is the same as that of SPI. However, SPEI is more reliable than SPI under climate change and global warming scenarios. One major drawback of SPEI is its sensitive method of estimation. That is, for arid and semi-arid regions, SPEI has a high rate of estimation errors in the determination of drought classes. To address the estimation issues in SPEI, Ali et al. (2017) developed the Standardized Precipitation Temperature Index (SPTI). One can use SPTI in place of SPEI to reduce estimation errors.

## 3. Outlines of the proposed Index

The following five points summarize the outlines of the proposed index.

1. Estimation of various SDIs under an identical estimation procedure.
2. Segregation of the time series data of SDIs by an appropriate seasonality indicator.
3. Integration of PCA on each segregated data set of SDIs.
4. Standardization of PC1 for each segregated set.
5. Aggregation of the segregated standardized series to obtain the new index.

However, to increase the flexibility of our proposal and its associated results, we suggest the following three lemmas.

Lemma 1. The Choice of the stations In many hydrological and meteorological studies, the core constraining influences on model performance and drought assessment are the short temporal length and low quality of meteorological data. However, many frameworks rely on long, precise and reliable time series data of meteorological variables. For instance, to enhance reservoir management methods and to predict drought characteristics, long time series data on meteorological and environmental objects are required for effective hydrological modelling (Arsenault and Brissette, 2014; Jiang et al., 2020). Further, the probabilistic estimation of SDIs and the integration of PCA on seasonally segregated data require long time series data. Therefore, we suggest including those meteorological stations that have long time series data of meteorological variables.

Lemma 2. Selection of SDI type indicator There are several Standardized Drought Indices (SDIs) indicators, each one developed to meet a particular requirement (Yihdego et al., 2019). SDI based drought characterization involves the standardization of time series data of different variables, or a collection of variables. Since all the indicators used in the SDIs procedure are based on a subjective choice of meteorological variables, the scope of individual SDIs is therefore limited to the study area and research question.
Therefore, the main concentration in the development of SDIs is global challenges (i.e., global warming and climate change). WMO and GWP (2016) reviewed and used the most popular drought indices as well as drought monitoring tools. SDIs are thus the most widely employed method for monitoring drought. However, the use of individual or multiple SDIs, without taking precautions related to the selection of meteorological variables, the estimation procedure, and the compatibility of the region, limits the scope of results related to drought characterization. Therefore, the choice of SDIs can contribute significantly to reliable and accurate drought assessment.

Lemma 3. The Choice of Season Spatio-temporal fluctuation of meteorological variables across different gauge stations within a particular region is a prominent topic (Ma et al., 2018; Asfaw et al., 2018; Ongoma and Chen, 2017; Ali et al., 2020c). From past research, we have learned that some regions have a long cold season (Yang et al., 2013). On the other hand, there are some regions that have a hot climate throughout the whole calendar year (Uvo et al., 1998). In this situation, defining a generalized index of seasonality is a difficult task. However, numerous environmental and meteorological assessments are based on month-wise seasonal indices (Ayugi et al., 2016; Yang et al., 2013). In keeping with this argument, the proposal of this research treats each individual month as a season.

### 3.1. The proposed indicator – The Seasonal Mixture Standardized Drought Index (SMSDI)

Following the above three lemmas, this research is based on the aggregation of SPI, SPEI and SPTI for reducing errors in the determination of drought classes. The selection of SPI, SPEI and SPTI is based on the standardized classification of drought characteristics. The following subsections explain the steps involved in the development of SMSDI.

#### 3.1.1. Estimation of drought indices such as SPI, SPEI and SPTI under the K-Component Gaussian Mixture Distribution (K-CGMD)

Compared to a single probability function or the varying-distribution concept, this subsection assesses and describes the appropriateness of K-CGMD for the estimation of drought indices under a probabilistic framework. Therefore, to enhance the accuracy in the estimation of drought indices, this study introduces K-CGMD based standardization. The formulation of K-CGMD consists of two types of parameters: the weights of the mixture components, and the means and variances of each component. Mathematically, the K-CGMD model is presented as

(1) $p(x\mid \mu_k,\Sigma_k)=\frac{1}{\sqrt{(2\pi)^d \det(\Sigma_k)}}\exp\left(-\frac{1}{2}(x-\mu_k)^T \Sigma_k^{-1}(x-\mu_k)\right)$

(2) $P(x)=\sum_{i=1}^{K}\alpha_i N(x\mid \mu_i,\sigma_i)$

(3) $N(x\mid \mu_i,\sigma_i)=\frac{1}{\sigma_i\sqrt{2\pi}}\exp\left(\frac{-(x-\mu_i)^2}{2\sigma_i^2}\right)$

(4) $\sum_{i=1}^{K}\alpha_i=1$

where K is the number of components, αi is the mixture weight of the ith component with the constraint $\sum_{i=1}^{K}\alpha_i=1$, so that the total probability distribution normalizes to 1, and μi, σi are the mean and standard deviation of the ith component. In the experimental study, we suggest choosing the 12-CGMD model. The selection of the number of components depends on the nature of the data.
We assume that in each month the distribution of the data follows a normal probability function. However, the choice of the number of components is a very important subject of computational theory. For the estimation of SPI, SPEI and SPTI, we suggest standardizing the CDF of the following 12-component mixture of Gaussian functions.

(5) $H(x)=\sum_{i=1}^{12} a_i F(x_i)$

After the estimation of the CDF of the mixture model H(x), the temporal vector of H(x) is then standardized under the standardization approximation procedure adopted in Ali et al. (2017).

#### 3.1.2. Seasonal Segregation

This step is motivated by the correlation structure in seasonal multivariate time series. Thus, to increase the accuracy of PCA, data segregation based on a seasonal index plays an important role in the accurate determination of drought classes. Therefore, this step suggests the segregation of the time series data of each index based on the seasonal indication.

#### 3.1.3. Application of PCA on each segregated data

Principal Component Analysis (PCA) is a useful technique that has widespread applications in numerous computational research areas. PCA is a statistical technique that transforms the original variables of the data into new axes or principal components (PCs), so that the results expressed on those axes are not associated with each other. The main goal of principal component analysis is to extract the important information from the data and to represent it as a set of new orthogonal variables called principal components (PCs). Each PC is a linear combination of the original responses (which retain some correlation among them), and the PCs are orthogonal to each other. Thus, the first PC is the mathematical combination of measurements that accounts for the largest amount of variability in the data. In other words, the PCs iteratively express as much as possible of the total variation in the data, in such a way that PC1 explains more than PC2, PC2 explains more of the data variation than PC3, and so on. That is why a few PCs describe the variation of a large number of original responses. Particularly in hydrology, many authors have used PCA for inference on hydrological processes. In drought modeling, Bazrafshan et al. (2014) have used the PCA technique to resolve the multi-scaling challenges in SDIs. Along the same lines as Bazrafshan et al. (2014), the current study recommends the use of PCA to resolve the multiplicity issue of drought indices.

#### 3.1.4. Standardization

In our case study, we calculate SMSDI for each set of SPI, SPEI and SPTI time scales, i.e. (1, 3, 6, 9, 12 and 24 months), for each station. SMSDI is based on the first principal component PC1. The component PC1 is a linear combination of the original variables and explains most of the variability existing in the original variables. Due to its algebraic characteristics, its value cannot be compared among different months or places. SPI, SPEI and SPTI have zero mean and unit variance. For each segregated data set, we suggest standardizing the resultant first PC. Hence, we suggest the following equation for standardization (see Bazrafshan et al., 2014).

(6) $\text{SMSDI}_{ym}=\frac{PC_{1ym}-\overline{PC_{1m}}}{SD_{1m}}$

where PC1ym is the first principal component of the yth year and mth month, $\overline{PC_{1m}}$ is the mean of PC1 in the mth month and SD1m is the standard deviation of PC1 in the mth month.
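To make the computational flow of Sections 3.1.1-3.1.4 more concrete, the following Python sketch shows one possible way to implement the mixture-CDF standardization (Eqs. 1-5) and the month-wise PCA standardization (Eq. 6). It is a minimal illustration under our own assumptions (scikit-learn's GaussianMixture and PCA, synthetic toy data, and function names such as mixture_sdi and smsdi chosen here only for exposition); it is not the authors' code. In the study, the same standardization step would be applied to the inputs of SPI, SPEI and SPTI at each time scale before the month-wise PCA.

```python
# A minimal sketch (not the authors' code) of the SMSDI pipeline described in
# Sections 3.1.1-3.1.4.  Function and variable names are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA

def mixture_sdi(series, n_components=12):
    """Step 3.1.1: standardize a time series through the CDF of a
    K-component Gaussian mixture (Eqs. 1-5)."""
    x = np.asarray(series, dtype=float).reshape(-1, 1)
    gm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
    w = gm.weights_
    mu = gm.means_.ravel()
    sd = np.sqrt(gm.covariances_.ravel())
    # Mixture CDF H(x) = sum_i a_i * F_i(x), then the inverse normal transform.
    H = np.sum(w * norm.cdf(x, loc=mu, scale=sd), axis=1)
    H = np.clip(H, 1e-6, 1 - 1e-6)               # keep the quantile finite
    return norm.ppf(H)

def smsdi(spi, spei, spti, month_labels):
    """Steps 3.1.2-3.1.5: month-wise segregation, PCA, standardization of PC1
    (Eq. 6) and re-aggregation into a single SMSDI series."""
    spi, spei, spti = map(np.asarray, (spi, spei, spti))
    month_labels = np.asarray(month_labels)
    out = np.empty_like(spi, dtype=float)
    for m in np.unique(month_labels):
        idx = month_labels == m                   # seasonal segregation
        block = np.column_stack([spi[idx], spei[idx], spti[idx]])
        pc1 = PCA(n_components=1).fit_transform(block).ravel()
        out[idx] = (pc1 - pc1.mean()) / pc1.std(ddof=1)   # Eq. (6)
    return out

# Toy usage with synthetic data (the real study uses 1971-2016 station records).
rng = np.random.default_rng(1)
n = 46 * 12
months = np.tile(np.arange(1, 13), 46)
rain = rng.gamma(2.0, 20.0, size=n) + 30 * np.isin(months, [7, 8, 9])
spi_like = mixture_sdi(rain)
smsdi_series = smsdi(spi_like,
                     spi_like * 0.9 + rng.normal(0, 0.3, n),
                     spi_like * 0.8 + rng.normal(0, 0.4, n),
                     months)
```

One practical note on this kind of implementation: the sign of PC1 is arbitrary in PCA, so in practice one would orient it so that it correlates positively with the input indices before interpreting wet and dry classes.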
Following the applications of PCA, the chosen PCs are then standardized and the upcoming section will give us aggregates seasonally segregated time series data of drought indices. We call the resultant time series data as SMSDI. #### 3.1.5. Aggregation The aggregation may express itself in a process such that the techniques used in the current study can handle the proposed index in a precise manner and gives accurate drought assessment results on regional basis. It is worth noting that aggregation and segregation are two faces of the same coin, which emphasis to give a well-mannered required information. Following a step by step procedure indicated in the flow chart (see Figure 2), the last phase of our proposed index i.e. SMSDI is the aggregation of the segregated standardized data. Figure 2 Flowchart of the proposed framework. ## 4. Results and Discussion ### 4.1. Estimation of SPI, SPEI and SPTI under K-CGMD settings This section presents the results associated with K-CGMD based standardization of drought indicators. Here, we applied 12-CGMD for modeling the time series data of all SDI. Table 2 shows the BIC values of the 12 component gaussian model in all the selected stations under study with different time-scales. For instance, –5505.72, –4329.87 and –623.88 are the values of BIC for Badin, Nawabshah and Khanpur stations respectively, highlighting time-scale 1. Figures 3, 4, 5 provide the graphical demonstration of the application of CGMD. As we can observe clearly from these figures which provide evidence that in each data, the K-CGMD models are more appropriate instead of applying a single distribution. Some more results are archived in author’s gallery. Table 2 BIC Values for 12 CGMD for SPI, SPEI and SPTI. SPI 1 –5505.72 –4329.87 –623.88 3 –3271.21 –3330.55 –3947.69 6 –5612.63 –5355.59 –5540.46 9 –6560.43 –5908.35 –5932.45 12 –6685.60 –6151.64 –6129.32 24 –6893.84 –6579.39 –6499.06 SPEI 1 –5818.03 –5873.33 –5831.03 3 –6762.69 –6879.15 –6830.68 6 –7135.43 –7316.96 –7226.22 9 –7195.26 –7255.91 –7126.11 12 –7084.73 –6868.48 –6652.28 24 –7324.87 –7206.16 –7023.16 SPTI 1 –2957.43 –630.93 –1405.21 3 –1480.09 –1006.22 –1439.65 6 –1525.45 –1302.26 –1544.35 9 –2262.77 –1939.43 –1905.43 12 –2595.84 –2212.02 –2098.22 24 –2930.93 –2660.30 –2548.93 Figure 3 Density plots of 12-CGMD for Badin station. Figure 4 Density plots of 12-CGMD for Khanpur station. Figure 5 Density plots of 12-CGMD for Nawabshah station. ### 4.2. Principal Components Analysis In further part of research, we intend to use SDI time series seasonal data for PCA. The resulting data will be reduced to one-dimension data. In section 2.2, we overview the SDI techniques and a detailed explanation of the proposed index i.e. Seasonal Mixture Standardized Drought Index (SMSDI) is given in section 4. We calculate SDI for 1 to 24-month time-scales. The calculation procedure and methodology has been explained in the said section. The SMSDI is based on 3*12*6*3 sets of the chosen indices (for different months and taking different time-scales for the chosen stations) using principal component analysis technique. For the stations under study eigen-values as well as Eigen-vectors were calculated. Such as, Figures 6 and 7, for the considered stations showing scree-plots for different sets. The contribution of each component is represented in values (out of 3) at y-axis, these figures show the importance of PCs. Figures 6 and 7 suggests that PC1 of all the sets explain more than 75% variation of the total variation. 
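As a small illustration of where the eigenvalue contributions "out of 3" reported in the scree plots and in Tables 3 and 4 come from, the sketch below (an illustrative assumption about tooling, not the authors' code) computes the eigenvalues of the 3 x 3 correlation matrix of SPI, SPEI and SPTI for a single month; because the three indices are standardized, the eigenvalues always sum to 3.

```python
# Hypothetical sketch: eigenvalues of the correlation matrix of SPI, SPEI and
# SPTI for one month; they sum to 3, matching the "out of 3" scale of the
# scree plots and Tables 3-4.
import numpy as np

def monthly_eigenvalues(spi, spei, spti):
    block = np.column_stack([spi, spei, spti])
    corr = np.corrcoef(block, rowvar=False)       # 3 x 3 correlation matrix
    return np.linalg.eigvalsh(corr)[::-1]         # sorted, largest first

rng = np.random.default_rng(2)
base = rng.normal(size=46)                        # e.g. 46 January values
vals = monthly_eigenvalues(base + rng.normal(0, 0.3, 46),
                           base + rng.normal(0, 0.3, 46),
                           base + rng.normal(0, 0.3, 46))
print(vals, vals.sum())                           # the sum equals 3 (up to rounding)
```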
Apart from the Scree plot, the bar chart (illustrating the role of each original variable in PCs) can also be useful in evaluating the nature of the indices. Figure 6 Scree plots (1). Figure 7 Scree plots (2). The bar plot of the three PCs (PC1, PC2 and PC3) for all the months (seasons), timescales and stations under study has been manifested in Figure 8. This figure reveals that SPI, SPEI and SPTI have a noticeable highly significant contribution to the first component (i.e., PC1) for all the seasons (months) as well as for all the stations. Figure 8 has comparative plots, which represent the variation explained by PC1. All the months of the meteorological stations are mentioned at the x-axis where, the contribution of the PCs are represented by the color bars. We conclude that all the plots have same behavior. However, for Khanpur station the contribution of PC1 is not that much higher compared to other stations. For instance, PC1 of Badin for the months i.e., January, February, March, …, December have 88%, 89%, 75%, …, 89% contributions respectively, using timescale-1. Similar results for other stations can be seen by observing the said figure. Figure 8 Percentage of variations in PCs in different months at different time-scales. Results reveal that the eigenvalues for the first PCs are large, and for subsequent PCs small, see Tables 3 and 4. Such that, the first PCs in the data set correspond to the directions with the greatest amount of variations. The sum of all the eigenvalues give a total of 3 for each month with timescales – (1,3,6,9,12 and 24). Similar results for all the stations along with the months and timescales can be seen by observing Tables 3 and 4. Table 3 Eigen values for timescales (1,3,6) for different components of PCA. TS STNS. 
CS JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC 1 BD C1 2.64 2.66 2.24 2.59 2.77 2.73 2.81 2.77 2.75 2.80 2.73 2.68 C2 0.36 0.34 0.76 0.41 0.23 0.27 0.19 0.23 0.25 0.20 0.27 0.32 C3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 KP C1 2.70 2.81 2.63 2.75 2.63 2.63 2.71 2.73 2.68 2.77 2.57 2.74 C2 0.30 0.18 0.37 0.25 0.37 0.37 0.29 0.27 0.32 0.23 0.43 0.26 C3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 NS C1 2.74 2.74 2.54 2.45 2.34 2.68 2.66 2.75 2.71 2.85 2.23 2.79 C2 0.26 0.26 0.46 0.55 0.66 0.32 0.34 0.25 0.29 0.15 0.77 0.21 C3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3 BD C1 2.60 2.58 2.70 2.74 2.88 2.88 2.87 2.85 2.86 2.75 2.58 2.68 C2 0.40 0.42 0.30 0.26 0.12 0.12 0.13 0.15 0.14 0.25 0.42 0.32 C3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 KP C1 2.80 2.75 2.69 2.66 2.70 2.77 2.75 2.80 2.77 2.79 2.72 2.81 C2 0.19 0.24 0.31 0.34 0.30 0.23 0.25 0.20 0.23 0.21 0.28 0.19 C3 0.00 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 NS C1 2.65 2.52 2.45 2.54 2.68 2.80 2.79 2.74 2.77 2.71 2.61 2.79 C2 0.35 0.48 0.54 0.46 0.32 0.20 0.21 0.26 0.23 0.29 0.39 0.21 C3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 6 BD C1 2.66 2.89 2.93 2.93 2.93 2.94 2.95 2.95 2.92 2.66 2.61 2.66 C2 0.33 0.11 0.07 0.07 0.07 0.06 0.05 0.05 0.08 0.33 0.39 0.33 C3 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 0.00 0.01 KP C1 2.73 2.76 2.80 2.79 2.78 2.74 2.69 2.76 2.71 2.58 2.71 2.80 C2 0.25 0.23 0.20 0.21 0.21 0.20 0.22 0.19 0.23 0.34 0.22 0.18 C3 0.02 0.01 0.00 0.00 0.01 0.06 0.10 0.05 0.06 0.08 0.07 0.02 NS C1 2.49 2.74 2.84 2.84 2.84 2.85 2.87 2.88 2.80 2.67 2.60 2.55 C2 0.50 0.26 0.16 0.16 0.16 0.15 0.13 0.11 0.20 0.33 0.39 0.44 C3 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 Table 4 Eigen values for timescales (9,12,24) for different components of PCA. TS STNS. 
CS JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC 9 BD C1 2.93 2.93 2.93 2.94 2.95 2.96 2.96 2.96 2.88 2.65 2.86 2.93 C2 0.07 0.07 0.07 0.06 0.05 0.04 0.04 0.04 0.12 0.34 0.14 0.07 C3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 0.00 0.00 KP C1 1.71 1.98 2.03 1.83 1.44 1.69 1.78 1.87 1.76 2.03 1.81 1.54 C2 0.99 0.95 0.92 0.96 0.94 0.91 0.94 0.85 0.93 0.73 0.96 1.07 C3 0.30 0.07 0.04 0.21 0.63 0.40 0.28 0.28 0.31 0.24 0.23 0.39 NS C1 2.88 2.82 2.81 2.85 2.92 2.92 2.92 2.89 2.83 2.65 2.81 2.90 C2 0.12 0.18 0.19 0.15 0.08 0.07 0.07 0.11 0.17 0.34 0.19 0.10 C3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 0.00 0.00 BD C1 2.93 2.93 2.93 2.94 2.95 2.96 2.96 2.96 2.88 2.65 2.86 2.93 C2 0.07 0.07 0.07 0.06 0.05 0.04 0.04 0.04 0.12 0.34 0.14 0.07 C3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 0.00 0.00 KP C1 1.71 1.98 2.03 1.83 1.44 1.69 1.78 1.87 1.76 2.03 1.81 1.54 C2 0.99 0.95 0.92 0.96 0.94 0.91 0.94 0.85 0.93 0.73 0.96 1.07 C3 0.30 0.07 0.04 0.21 0.63 0.40 0.28 0.28 0.31 0.24 0.23 0.39 NS C1 2.88 2.82 2.81 2.85 2.92 2.92 2.92 2.89 2.83 2.65 2.81 2.90 C2 0.12 0.18 0.19 0.15 0.08 0.07 0.07 0.11 0.17 0.34 0.19 0.10 C3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 0.00 0.00 12 BD C1 2.95 2.96 2.96 2.96 2.96 2.96 2.96 2.96 2.94 2.95 2.95 2.95 C2 0.04 0.04 0.04 0.04 0.04 0.04 0.04 0.04 0.06 0.05 0.05 0.05 C3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 KP C1 2.83 2.82 2.81 2.80 2.80 2.81 2.80 2.81 2.80 2.84 2.83 2.83 C2 0.16 0.17 0.18 0.19 0.19 0.18 0.19 0.18 0.19 0.15 0.16 0.16 C3 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 NS C1 2.91 2.91 2.91 2.91 2.89 2.89 2.90 2.89 2.92 2.93 2.92 2.92 C2 0.08 0.08 0.09 0.09 0.10 0.11 0.09 0.11 0.07 0.07 0.07 0.07 C3 0.00 0.01 0.00 0.01 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 24 BD C1 2.93 2.93 2.93 2.93 2.94 2.94 2.94 2.93 2.91 2.92 2.93 2.93 C2 0.07 0.06 0.06 0.06 0.06 0.06 0.06 0.07 0.08 0.07 0.07 0.07 C3 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 KP C1 2.64 2.61 2.61 2.63 2.63 2.72 2.75 2.74 2.76 2.67 2.64 2.62 C2 0.35 0.38 0.38 0.36 0.36 0.27 0.24 0.25 0.24 0.33 0.35 0.37 C3 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 NS C1 2.88 2.88 2.88 2.88 2.87 2.88 2.88 2.87 2.87 2.87 2.87 2.87 C2 0.12 0.12 0.12 0.12 0.12 0.12 0.12 0.13 0.13 0.13 0.12 0.12 C3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ### 4.3. Comparison of SMSDI with SPI, SPEI and SPTI In comparative assessment, the proposed SMSDI is compared with SDI used in this study. It has been observed that SMSDI is strongly correlated with SPI, SPEI and SPTI in all stations for various time scales (see Figures 9 and 10). Figure 9 Correlations of SMSDI with SPI, SPEI and SPTI at Badin and Khanpur. Figure 10 Correlations of SMSDI with SPI, SPEI and SPTI at Nawabshah station. Figures 11 and 12 show the temporal behavior of SMSDI, SPI, SPEI and SPTI at scale 2 and 12 in Badin, Khanpur and Nawabshah. It is perceived by observing these figures that the behavior of the series of SPI, SPEI and SPTI have homogenous pattern, but we can see the fluctuation in SMSDI. This fluctuation is the seasonal variation. In accordance with the figures, the SMSDIs are seen to follow appropriately the fluctuations of SPI, SPEI and SPTI, particularly during extended wet and dry periods. Likewise, SMSDI avoids dramatic volatility within actual SPI, SPEI and SPTI time-series (especially for the series of timescales of less than 9 months). 
This signifies that the SMSDI can smooth out the minor wet and dry spells that occur within extreme and prolonged wet or dry periods. Further analysis was conducted to examine, for the stations of interest, the agreement between the SMSDI and each chosen SPI, SPEI and SPTI time-scale series across the various months of the year.

Figure 11 Temporal plots of SMSDI, SPI, SPEI and SPTI (month wise). Figure 12 Temporal plots of SMSDI, SPI, SPEI and SPTI (year wise).

Figure 13 shows the counts of the drought categories for the SDIs and the SMSDI for all the stations at time-scales 1–24. There are in total seven categories into which the values of the SMSDI fall, as shown in the graphs. These categories were defined by McKee et al. (1993); their ranges can be seen in Table 5. Figure 13(a) demonstrates the drought characterization for the Badin station under various time-scales. The near-normal category for Badin occurred 469, 353, 469 and 458 times under SPI, SPEI, SPTI and SMSDI respectively at time-scale 1. Similar interpretations can be made for the ED, SD, MD, MW, SW and EW categories for Badin from Figure 13(a) and Table 6. It can be noted from this figure and table that SMSDI is a suitable index for highlighting droughts, as the other indices show zero counts for several drought categories at time-scales 1, 3 and 6. Seasonality is taken into consideration in the proposed index, which is why SMSDI is able to detect drought even at the shorter time-scales. Similar results can be obtained for the Khanpur and Nawabshah stations by analyzing Figures 13(b) and 13(c) respectively.

Table 5 Drought classifications.

| Sr. No. | Range of SDIs and SMSDI | Category |
|---|---|---|
| 1 | 2.00 and above | Extremely Wet |
| 2 | 1.50 to 1.99 | Very Wet |
| 3 | 1.00 to 1.49 | Moderate Wet |
| 4 | –0.99 to 0.99 | Near Normal |
| 5 | –1.00 to –1.49 | Moderate Drought |
| 6 | –1.50 to –1.99 | Severe Drought |
| 7 | –2.00 and less | Extreme Drought |

Table 6 Counts of various drought categories at Badin station.
TIMESCALES CATEGORY SPI SPEI SPTI SMSDI COUNT %AGE COUNT %AGE COUNT %AGE COUNT %AGE 1 ND 469 83.16 353 62.59 469 83.16 458 81.21 ED 0 0.00 57 10.11 0 0.00 0 0.00 SD 0 0.00 67 11.88 0 0.00 12 2.13 MD 0 0.00 53 9.40 0 0.00 12 2.13 MW 44 7.80 12 2.13 45 7.98 33 5.85 SW 43 7.62 11 1.95 43 7.62 15 2.66 EW 7 1.24 11 1.95 7 1.24 34 6.03 3 ND 447 79.54 406 72.24 449 79.61 422 74.82 ED 0 0.00 0 0.00 0 0.00 3 0.53 SD 0 0.00 36 6.41 0 0.00 14 2.48 MD 0 0.00 67 11.92 0 0.00 30 5.32 MW 71 12.63 20 3.56 71 12.59 50 8.87 SW 40 7.12 19 3.38 37 6.56 21 3.72 EW 4 0.71 14 2.49 5 0.89 22 3.90 6 ND 371 66.37 410 73.35 361 64.58 394 70.48 ED 0 0.00 2 0.36 0 0.00 6 1.07 SD 0 0.00 19 3.40 0 0.00 19 3.40 MD 90 16.10 51 9.12 103 18.43 50 8.94 MW 68 12.16 47 8.41 69 12.34 48 8.59 SW 25 4.47 18 3.22 21 3.76 23 4.11 EW 5 0.89 12 2.15 5 0.89 19 3.40 9 ND 329 59.17 413 74.28 335 60.25 394 70.86 ED 69 12.41 6 1.08 68 12.23 8 1.44 SD 64 11.51 18 3.24 64 11.51 29 5.22 MD 60 10.79 29 5.22 55 9.89 53 9.53 MW 18 3.24 54 9.71 18 3.24 35 6.29 SW 9 1.62 24 4.32 9 1.62 23 4.14 EW 7 1.26 12 2.16 7 1.26 14 2.52 12 ND 404 73.06 407 73.60 403 72.88 375 67.81 ED 0 0.00 0 0.00 0 0.00 0 0.00 SD 20 3.62 3 0.54 20 3.62 21 3.80 MD 71 12.84 56 10.13 71 12.84 77 13.92 MW 26 4.70 50 9.04 27 4.88 42 7.59 SW 21 3.80 14 2.53 21 3.80 15 2.71 EW 11 1.99 23 4.16 11 1.99 23 4.16 24 ND 350 64.70 402 72.69 347 62.75 355 64.20 ED 0 0.00 0 0.00 0 0.00 0 0.00 SD 2 0.37 32 5.79 21 3.80 32 5.79 MD 115 21.26 22 3.98 98 17.72 76 13.74 MW 34 6.28 39 7.05 40 7.23 31 5.61 SW 7 1.29 24 4.34 12 2.17 23 4.16 EW 32 5.91 22 3.98 22 3.98 24 4.34 Figure 13 Histograms of counts of various drought classes under SMSDI, SPI, SPEI and SPTI. ## 5. Conclusion In this paper, a new hybrid drought index has been proposed. The procedure of the new drought index is based on the integration of PCA and K-CGMD. We called the new index asSMSDI. SMSDI have ability to accounts the features multiple existing SDIs. To assess the performance of SMSDI, numerical application has consisted on based three gauge stations of Pakistan. Outcomes associated with the applications show that all of the data in the selected stations contain K underlying classes, each defined by different parameters. So, instead of using one probability function, use of K – CGMD guarantees the accurate determination of drought classes. In this setting, the procedure of SMSDI is free from the problem of fitting inappropriate probability distribution functions. Consequently, we have provided a new and more solid fitting methods for the estimation of SDIs, 2) the seasonality affect have also considered in the proposal of SMSDI, 3) the problem of the existence of multiple drought indices has been resolved in SMSDI procedure, 4) drought categories defined by SMSDI are greatly accorded with those defined by SPI, SPEI and SPTI time series. Hence, to avoid the hardness of computational work, and confusion in the interpretation of SPI, SPEI and SPTI, our proposal provides best solution to date. ## Data Accessibility Statements The data that support the findings of this study are available from the corresponding author upon reasonable request. ## Funding Information This work was supported grants by the National Natural Science Foundation of China program (41801339), Natural Science Foundation of Hubei Province, China (2020CFB615), and Open Fund of State Key Laboratory of Remote Sensing Science (Grant No. OFSLRSS202114). The authors are also thankful to the Jiangxi Outstanding Youth Funding (20121ACB211003). 
## Competing Interests The authors have no competing interests to declare. ## Authors Contributions All authors (Muhammad Asif Khan, Xiang Zhang, Zulfiqar Ali, He Jiang, Muhammad Ismail, and Sadia Qamar) has equal contribution. ## References 1. Achour, K, Meddi, M, Zeroual, A, Bouabdelli, S, Maccioni, P, and Moramarco, T. 2020. Spatio-temporal analysis and forecasting of drought in the plains of northwestern Algeria using the standardized precipitation index. Journal of Earth System Science, 129(1): 1–22. DOI: https://doi.org/10.1007/s12040-019-1306-3 2. Ahmed, K, Shahid, S and Nawaz, N. 2018. Impacts of climate variability and change on seasonal drought characteristics of Pakistan. Atmospheric research, 214: 364–374. DOI: https://doi.org/10.1016/j.atmosres.2018.08.020 3. Ali, Z, Almanjahie, IM, Hussain, I, Ismail, M and Faisal, M. 2020a. A novel generalized combinative procedure for Multi-Scalar standardized drought Indices-The long average weighted joint aggregative criterion. Tellus A: Dynamic Meteorology and Oceanography, 72(1): 1–23. DOI: https://doi.org/10.1080/16000870.2020.1736248 4. Ali, Z, Hussain, I, Faisal, M, Almanjahie, IM, Ahmad, I, Khan, DM, …, and Qamar, S. 2019a. A probabilistic weighted joint aggregative drought index (PWJADI) criterion for drought monitoring systems. Tellus A: Dynamic Meteorology and Oceanography, 71(1): 1588584. DOI: https://doi.org/10.1080/16000870.2019.1588584 5. Ali, Z, Hussain, I, Faisal, M, Almanjahie, IM, Ahmad, I, Khan, DM, …, and Qamar, S. 2019c. A probabilistic weighted joint aggregative drought index (PWJADI) criterion for drought monitoring systems. Tellus A: Dynamic Meteorology and Oceanography, 71(1): 1588584. DOI: https://doi.org/10.1080/16000870.2019.1588584 6. Ali, Z, Hussain, I, Faisal, M, Khan, DM, Niaz, R, Elashkar, EE and Shoukry, AM. 2020b. Propagation of the Multi-Scalar Aggregative Standardized Precipitation Temperature Index and its Application. Water Resources Management, 34(2): 699–714. DOI: https://doi.org/10.1007/s11269-019-02469-4 7. Ali, Z, Hussain, I, Faisal, M, Nazir, HM, Abd-el Moemen, M, Hussain, T and Shamsuddin, S. 2017. A novel multi-scalar drought index for monitoring drought: the standardized precipitation temperature index. Water resources management, 31(15): 4957–4969. DOI: https://doi.org/10.1007/s11269-017-1788-1 8. Ali, Z, Hussain, I, Grzegorczyk, MA, Ni, G, Faisal, M, Qamar, S, …, and Al-Deek, FF. 2020c. Bayesian network based procedure for regional drought monitoring: The Seasonally Combinative Regional Drought Indicator. Journal of Environmental Management, 276: 111296. DOI: https://doi.org/10.1016/j.jenvman.2020.111296 9. Angelidis, P, Maris, F, Kotsovinos, N and Hrissanthou, V. 2012. Computation of drought index SPI with alternative distribution functions. Water resources management, 6(9): 2453–2473. DOI: https://doi.org/10.1007/s11269-012-0026-0 10. Arsenault, R and Brissette, F. 2014. Determining the optimal spatial distribution of weather station networks for hydrological modeling purposes using RCM datasets: An experimental approach. Journal of Hydrometeorology, 15(1): 517–526. DOI: https://doi.org/10.1175/JHM-D-13-088.1 11. Asfaw, A, Simane, B, Hassen, A and Bantider, A. 2018. Variability and time series trend analysis of rainfall and temperature in northcentral Ethiopia: A case study in Woleka sub-basin. Weather and climate extremes, 19: 29–41. DOI: https://doi.org/10.1016/j.wace.2017.12.002 12. Ayugi, BO, Wen, W and Chepkemoi, D. 2016. 
Analysis of spatial and temporal patterns of rainfall variations over Kenya. Studies, 6(11). 13. Bazrafshan, J, Hejabi, S and Rahimi, J. 2014. Drought monitoring using the multivariate standardized precipitation index (MSPI). Water resources management, 28(4): 1045–1060. DOI: https://doi.org/10.1007/s11269-014-0533-2 14. Bong, CHJ and Richard, J. 2020. Drought and climate change assessment using standardized precipitation index (SPI) for Sarawak River Basin. Journal of Water and Climate Change, 11(4): 956–965. DOI: https://doi.org/10.2166/wcc.2019.036 15. Erhardt, TM and Czado, C. 2015. Standardized drought indices: A novel uni-and multivariate approach. arXiv preprint arXiv:1508.06476. 16. Garcia, RV and Escudero, JC. 2013. The Constant Catastrophe: Malnutrition, Famines and Drought (Vol. 2). Elsevier. 17. Carrão, H, Naumann, G, Dutra, E, Lavaysse, C and Barbosa, P. 2018. Seasonal drought forecasting for Latin America using the ECMWF S4 forecast system. Climate, 6(2): 48. 18. Gu, L, Chen, J, Xu, CY, Kim, JS, Chen, H, Xia, J and Zhang, L. 2019. The contribution of internal climate variability to climate change impacts on droughts. Science of The Total Environment, 684: 229–246. DOI: https://doi.org/10.3390/cli6020048 19. Hao, Z, Singh, VP and Xia, Y. 2018. Seasonal drought prediction: advances, challenges, and future prospects. Reviews of Geophysics, 56(1): 108–141. 20. Jiang, H, Khan, MA, Li, Z, Ali, Z, Ali, F and Gul, S. 2020. Regional drought assessment using improved precipitation records under auxiliary information. Tellus A: Dynamic Meteorology and Oceanography, 72(1): 1–26. DOI: https://doi.org/10.1002/2016RG000549 21. Kalisa, W, Zhang, J, Igbawua, T, Ujoh, F, Ebohon, OJ, Namugize, JN and Yao, F. 2020. Spatio-temporal analysis of drought and return periods over the East African region using Standardized Precipitation Index from 1920 to 2016. Agricultural Water Management, 237: 106195. DOI: https://doi.org/10.1016/j.agwat.2020.106195 22. Khan, MA, Faisal, M, Hashmi, MZ, Nazeer, A, Ali, Z and Hussain, I. 2021. Modeling drought duration and severity using two-dimensional copula. Journal of Atmospheric and Solar-Terrestrial Physics, 105530. DOI: https://doi.org/10.1016/j.jastp.2020.105530 23. Ma, L, Xia, H, Sun, J, Wang, H, Feng, G and Qin, F. 2018. Spatial–Temporal Variability of Hydrothermal Climate Conditions in the Yellow River Basin from 1957 to 2015. Atmosphere, 9(11): 433. DOI: https://doi.org/10.3390/atmos9110433 24. McKee, TB, Doesken, NJ and Kleist, J. 1993, January. The relationship of drought frequency and duration to time scales. In Proceedings of the 8th Conference on Applied Climatology, 17(22), 179–183. 25. McLachlan, GJ and Peel, D. 2004. Finite mixture models. John Wiley & Sons. 26. Moghimi, MM, Zarei, AR and Mahmoudi, MR. 2020. Seasonal drought forecasting in arid regions, using different time series models and RDI index. Journal of Water and Climate Change, 11(3): 633–654. 27. Ongoma, V and Chen, H. 2017. Temporal and spatial variability of temperature and precipitation over East Africa from 1951 to 2010. Meteorology and Atmospheric Physics, 129(2): 131–144. DOI: https://doi.org/10.2166/wcc.2019.009 28. Pakistan Cenus Reports. 2017. Statistical Division, Government of Pakistan http://www.pbscensus.gov.pk/. 29. Palmer, WC. 1965. Meteorological drought (Vol. 30). US Department of Commerce, Weather Bureau. Research Paper No. 45, 58. 30. Qaisrani, ZN, Nuthammachot, N and Techato, K. 2021. 
Drought monitoring based on Standardized Precipitation Index and Standardized Precipitation Evapotranspiration Index in the arid zone of Balochistan province, Pakistan. Arabian Journal of Geosciences, 14(1): 1–13. DOI: https://doi.org/10.1007/s12517-020-06302-w 31. Rehman, A, Jingdong, L, Shahzad, B, Chandio, AA, Hussain, I, Nabi, G and Iqbal, MS. 2015. Economic perspectives of major field crops of Pakistan: An empirical study. Pacific Science Review B: Humanities and Social Sciences, 1(3): 145–158. DOI: https://doi.org/10.1016/j.psrb.2016.09.002 32. Shen, Z, Zhang, Q, Singh, VP, Sun, P, Song, C and Yu, H. 2019. Agricultural drought monitoring across Inner Mongolia, China: Model development, spatiotemporal patterns and impacts. Journal of Hydrology, 571: 793–804. DOI: https://doi.org/10.1016/j.jhydrol.2019.02.028 33. Stagge, JH, Tallaksen, LM, Gudmundsson, L, Van Loon, AF and Stahl, K. 2015. Candidate distributions for climatological drought indices (SPI and SPEI). International Journal of Climatology, 35(13): 4027–4040. DOI: https://doi.org/10.1002/joc.4267 34. Sun, F, Mejia, A, Zeng, P and Che, Y. 2019. Projecting meteorological, hydrological and agricultural droughts for the Yangtze River basin. Science of the Total Environment, 696: 134076. DOI: https://doi.org/10.1016/j.scitotenv.2019.134076 35. Thadshayini, V, Nianthi, KR and Ginigaddara, GAS. 2020. Climate-Smart and-Resilient Agricultural Practices in Eastern Dry Zone of Sri Lanka. In Global Climate Change: Resilient and Smart Agriculture (pp. 33–68). Singapore: Springer. DOI: https://doi.org/10.1007/978-981-32-9856-9_3 36. Uvo, CB, Repelli, CA, Zebiak, SE and Kushnir, Y. 1998. The relationships between tropical Pacific and Atlantic SST and northeast Brazil monthly precipitation. Journal of Climate, 11(4): 551–562. DOI: https://doi.org/10.1175/1520-0442(1998)011<0551:TRBTPA>2.0.CO;2 37. Van Loon, AF, Gleeson, T, Clark, J, Van Dijk, AI, Stahl, K, Hannaford, J, …, and Hannah, DM. 2016. Drought in the Anthropocene. Nature Geoscience, 9(2): 89. DOI: https://doi.org/10.1038/ngeo2646 38. Vásquez-León, M, West, CT and Finan, TJ. 2003. A comparative assessment of climate vulnerability: agriculture and ranching on both sides of the US–Mexico border. Global Environmental Change, 13(3): 159–173. DOI: https://doi.org/10.1016/S0959-3780(03)00034-7 39. Vicente-Serrano, SM, Beguería, S and López-Moreno, JI. 2010. A multiscalar drought index sensitive to global warming: the standardized precipitation evapotranspiration index. Journal of climate, 23(7): 1696–1718. DOI: https://doi.org/10.1175/2009JCLI2909.1 40. WHO. 2020. https://www.who.int/health-topics/drought#tab=tab_1. Accessed on September 10, 2020. 41. Wmo, G and Gwp, G. 2016. Handbook of Drought Indicators and Indices. Geneva: World Meteorological Organization (WMO) and Global Water Partnership (GWP). 42. Wu, H, Svoboda, MD, Hayes, MJ, Wilhite, DA and Wen, F. 2007. Appropriate application of the standardized precipitation index in arid locations and dry seasons. International Journal of Climatology: A Journal of the Royal Meteorological Society, 27(1): 65–79. DOI: https://doi.org/10.1002/joc.1371 43. Yang, YG, Hu, JF, Xiao, HL, Zou, SB and Yin, ZL. 2013. Spatial and temporal variations of hydrological characteristic on the landscape zone scale in alpine cold region. Huan jing ke xue= Huanjing kexue, 34(10): 3797–3803. 44. Yaseen, ZM, Ali, M, Sharafati, A, Al-Ansari, N and Shahid, S. 2021. 
Forecasting standardized precipitation index using data intelligence models: regional investigation of Bangladesh. Scientific reports, 11(1): 1–25. DOI: https://doi.org/10.1038/s41598-021-82977-9 45. Yihdego, Y, Vaheddoost, B and Al-Weshah, RA. 2019. Drought indices and indicators revisited. Arabian Journal of Geosciences, 12(3): 69. DOI: https://doi.org/10.1007/s12517-019-4237-z 46. Yu, H, Zhang, Q, Xu, CY, Du, J, Sun, P and Hu, P. 2019. Modified palmer drought severity index: model improvement and application. Environment international, 130: 10495. DOI: https://doi.org/10.1016/j.envint.2019.104951 47. Zhang, Y and Li, Z. 2020. Uncertainty Analysis of Standardized Precipitation Index Due to the Effects of Probability Distributions and Parameter Errors. Frontiers in Earth Science, 8. DOI: https://doi.org/10.3389/feart.2020.00076
2022-08-08 21:45:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.634990394115448, "perplexity": 3733.587497222276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00443.warc.gz"}
http://www.ge.imati.cnr.it/index.php/news-3/532-applied-mathematics-saminar44
Tuesday, 14 March 2017, 3 p.m. (sharp), Prof. Matteo Bonforte (UAM, Madrid) will give a lecture at the conference room of IMATI-CNR in Pavia titled: NONLINEAR AND NONLOCAL DEGENERATE DIFFUSIONS ON BOUNDED DOMAINS, as part of the Applied Mathematics Seminar (IMATI-CNR and Dipartimento di Matematica, Pavia). At the end a refreshment will be organized. ______________________ Abstract. We investigate quantitative properties of nonnegative solutions $u(t,x)\ge 0$ to the nonlinear fractional diffusion equation, $\partial_t u + \mathcal{L} F(u)=0$, posed in a bounded domain, $x\in\Omega\subset \mathbb{R}^N$, with appropriate homogeneous Dirichlet boundary conditions. As $\mathcal{L}$ we can use a quite general class of linear operators that includes the three most common versions of the fractional Laplacian $(-\Delta)^s$, $0<s<1$, in a bounded domain with zero Dirichlet boundary conditions; many other examples are included. The nonlinearity $F$ is assumed to be increasing and is allowed to be degenerate, the prototype being $F(u)=|u|^{m-1}u$, $m>1$. We will present some recent results about existence, uniqueness and a priori estimates for a quite large class of very weak solutions, that we call weak dual solutions. We will devote special attention to the regularity theory: decay and positivity, boundary behavior, Harnack inequalities, interior and boundary regularity, and asymptotic behavior. All this is done in a quantitative way, based on sharp a priori estimates. Although our focus is on the fractional models, our techniques cover also the local case s = 1 and provide new results even in this setting. A surprising instance of this problem is the possible presence of nonmatching powers for the boundary behavior: for instance, when $\mathcal{L}=(-\Delta)^s$ is a spectral power of the Dirichlet Laplacian inside a smooth domain, we can prove that, whenever $2s \ge 1 - 1/m$, solutions behave as $dist^{1/m}$ near the boundary; on the other hand, when $2s < 1 - 1/m$, different solutions may exhibit different boundary behaviors even for large times. This unexpected phenomenon is a completely new feature of the nonlocal nonlinear structure of this model, and it is not present in the elliptic case. The above results are contained in a series of recent papers in collaboration with A. Figalli, Y. Sire, X. Ros-Oton and J. L. Vazquez.
2018-03-17 12:47:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7874204516410828, "perplexity": 1437.3355328135838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645069.15/warc/CC-MAIN-20180317120247-20180317140247-00459.warc.gz"}
https://cstheory.stackexchange.com/questions/47739/three-clique-sums-of-bounded-treewidth-and-bounded-genus-graphs
# Three Clique Sums of Bounded Treewidth and Bounded Genus Graphs

This question asks about the forbidden minors of the class of graphs that can be formed by taking three-clique-sums of planar graphs and bounded-treewidth graphs (the class is defined for some constant $$t$$), and whether there is a set of forbidden minors for this class in which every graph has crossing number at most one.

A converse statement is known from Robertson and Seymour's graph minor theory: for any graph $$H$$ with crossing number at most one, there exists a constant, say $$t$$, such that any $$H$$-minor-free graph can be written as a three-clique-sum of planar graphs and graphs with treewidth $$\leq t$$.

I want to ask a similar question for bounded-genus graphs: if $$C$$ is the class of graphs that can be expressed as three-clique-sums of bounded-genus and bounded-treewidth graphs, what can we say about its forbidden minors? (It is a finite set since $$C$$ is minor-closed, right?)
2021-03-04 16:32:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7623841166496277, "perplexity": 168.2820283184217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369420.71/warc/CC-MAIN-20210304143817-20210304173817-00444.warc.gz"}
https://msp.org/pjm/2020/308-2/p07.xhtml
Vol. 308, No. 2, 2020

A remark on a trace Paley–Wiener theorem

Goran Muić

Vol. 308 (2020), No. 2, 407–418

Abstract: We prove a version of a trace Paley–Wiener theorem for tempered representations of a reductive $p$-adic group. This is applied to complete certain investigations of Shahidi on the proof that a Plancherel measure is an invariant of an $L$-packet of discrete series.

Keywords: Paley–Wiener theorem, admissible representations, reductive $p$-adic groups

Primary: 22E50
Secondary: 11E70

Milestones
Received: 5 January 2019
Revised: 10 January 2020
Accepted: 23 May 2020
Published: 9 December 2020

Authors
Goran Muić
Department of Mathematics
Faculty of Sciences
University of Zagreb
Zagreb
Croatia
2021-10-27 22:33:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6593380570411682, "perplexity": 11078.421417229145}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588244.55/warc/CC-MAIN-20211027212831-20211028002831-00165.warc.gz"}
https://stats.stackexchange.com/questions/76587/pandas-statsmodels-time-series-seasonal-forecasting
# Pandas Statsmodels Time series seasonal forecasting

Using statsmodels and pandas (and requests for the data) I'm working on a forecast model. My first step is just getting the ARMA function working and understood. My data is publicly available and is highly seasonal residential real estate unit sales data. I'm planning to see how a quarterly survey that we do can help with the forecast as a later step; hence I am changing the frequency to quarterly, with dates that match the dates I have on the quarterly survey. So the code looks like:

    import requests
    import pandas as pd
    import matplotlib.pyplot as plt
    import statsmodels.api as sm
    from StringIO import StringIO  # Python 2; on Python 3 use io.StringIO and act.text instead of act.content

    # get the statewide actual data from Sheet 1, parse dates and select just unit sales
    act = requests.get('https://docs.google.com/spreadsheet/ccc?key=0Ak_wF7ZGeMmHdFZtQjI1a1hhUWR2UExCa2E4MFhiWWc&output=csv&gid=1')
    dataact = act.content
    actdf = pd.read_csv(StringIO(dataact), index_col=0, parse_dates=['date'], thousands=',')  # converts to numbers
    actdf.rename(columns={'Unit Sales': 'Units'}, inplace=True)
    actdf = actdf[['Units']]
    actdfq = actdf.resample('Q', sum)
    actdfq.index = actdfq.index + pd.DateOffset(days=15)  # align the actual data dates to the survey dates, e.g. the 15th of the quarter
    actdfq = actdfq['2009':]  # selects the time periods for which we have surveys (actual results here); the survey would be shifted back by one
    actdfqchg = actdfq['Units']
    fig = plt.figure(figsize=(12, 8))
    ax1 = fig.add_subplot(211)
    fig = sm.graphics.tsa.plot_acf(actdfqchg.values.squeeze(), lags=4, ax=ax1)
    ax2 = fig.add_subplot(212)
    fig = sm.graphics.tsa.plot_pacf(actdfqchg, lags=4, ax=ax2)

the data looks like:

    2009-01-15     7867
    2009-04-15     7483
    2009-07-15    10109
    2009-10-15    10648
    2010-01-15     9678

The ACF graphs look like:

So I don't really know what the ACF is telling me. The autocorrelation plot would tend to tell me that the 4-quarter correlation is the strongest, but still only 0.4? (correct?) And the 2-quarter score of -0.4 would indicate that the summer-to-winter correlation is the weakest (which makes sense). How do I proceed with 1) a projection based on this actual data just using straight ARMA statsmodels capabilities, and 2) how do I best incorporate survey data that attempts to predict the following quarter? The survey asks for predictions for the following quarter on a 5-point scale with 400+ participants; the straight correlation is not super strong, but I think somehow I should be able to use the quarterly correlation to help inform the projection for the subsequent quarter.

• FWIW I did also redo this with the monthly data and statsmodels and made some more headway, but the main question still stands. It seems to me that the positive correlation indicated at 4 and the negative at 2, although not above the significance shaded area, are still "good enough". I am still having trouble with the statsmodels parameters to get a prediction, and have no idea how to incorporate survey data (probably with a different method??) – dartdog Nov 14 '13 at 17:52
• I have expanded the notebook and posted it here with monthly data nbviewer.ipython.org/7473989 – dartdog Nov 14 '13 at 20:44
• I have expanded the question and posted a revised IPython notebook (with access to data), see the Google Groups entry here groups.google.com/d/msg/pystatsmodels/HmQldkxK344/uWH3Dbh0O9QJ – dartdog Nov 18 '13 at 18:27

## 1 Answer

I agree with Simeon here that this is more a stats question. First of all, the 2q score of -0.4 indicates that there is a negative correlation between summer and winter (i.e. the higher the summer values, the lower the winter values).
This allows for more predictive power, so I wouldn't call it the weakest. 1) These ACF and PACF indicate autocorrelations which are not significant (the significance interval is in blue). Maybe the aggregation into quarters caused some loss of information here. 2) There is a lot of info on how to use ACF and PACF graphs in order to build ARMA models. These slides are a good overview of the associated Box-Jenkins methodology and are a quick read: http://www.colorado.edu/geography/class_homepages/geog_4023_s11/Lecture16_TS3.pdf

EDIT 1: Before going into the ARMA model, you should first difference your series in order to make it stationary, i.e. with constant mean. Then you should be able to safely detect the periodicities of the time series and try to build a model. If differencing once is not enough to remove the trend, do it again. You should really have a look at the Box-Jenkins methodology, which is really a standard for this kind of problem.

A small point: the tests you are doing after building the model are done to check whether you have a normally distributed error (white noise) in your model. If so, this means that you have extracted most of the information contained in your signal.
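Picking up the Box-Jenkins suggestion above, here is a minimal sketch of the difference-then-model workflow using statsmodels' SARIMAX class (a seasonal ARIMA implementation that post-dates this thread, so it is not what either poster used). The series is synthetic and only stands in for the quarterly unit-sales data from the question, and the model orders are placeholders to be tuned.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# synthetic stand-in for the quarterly unit-sales series built in the question
rng = np.random.default_rng(1)
idx = pd.date_range("2000-03-31", periods=60, freq="Q")
units = pd.Series(9000 + 1500 * np.sin(2 * np.pi * idx.quarter / 4.0)
                  + rng.normal(0, 300, len(idx)), index=idx)

# 1) difference (and seasonally difference) until the series looks stationary
d1 = units.diff().dropna()     # removes a trend in the mean
d4 = units.diff(4).dropna()    # removes the 4-quarter seasonal pattern

# 2) read candidate (p, q) and seasonal (P, Q) orders off the ACF/PACF of the differenced series
sm.graphics.tsa.plot_acf(d4, lags=12)
sm.graphics.tsa.plot_pacf(d4, lags=12)

# 3) fit a seasonal ARIMA with the chosen orders
model = sm.tsa.statespace.SARIMAX(units, order=(1, 0, 0), seasonal_order=(0, 1, 1, 4))
res = model.fit(disp=False)
print(res.summary())

# 4) forecast the next four quarters
print(res.get_forecast(steps=4).predicted_mean)

# 5) the quarterly survey score, shifted so each reading lines up with the quarter it
#    predicts, could enter the same model through SARIMAX's exog= argument
```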
2020-01-29 04:01:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48548993468284607, "perplexity": 1756.087334921478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783621.89/warc/CC-MAIN-20200129010251-20200129040251-00474.warc.gz"}
http://www.ncatlab.org/nlab/show/proper+model+category
# Contents

## Idea

In a model category fibrations enjoy pullback stability and cofibrations are stable under pushout, but weak equivalences need not have either property. In a proper model category weak equivalences are also preserved under certain pullbacks and/or certain pushouts.

## Definition

###### Definition

A model category is called

• right proper if weak equivalences are preserved by pullback along fibrations

• left proper if weak equivalences are preserved by pushout along cofibrations

• proper if it is both left and right proper.

###### Remark

More in detail this means the following. A model category is right proper if for every weak equivalence $f \colon A \to B$ in $W \subset \mathrm{Mor}(C)$ and every fibration $h \colon C \to B$ the pullback $h^* f \colon A \times_B C \to C$ in

$\begin{array}{ccc} A \times_B C & \to & A \\ {}^{\Rightarrow h^* f \in W}\downarrow & & \downarrow^{f \in W} \\ C & \underset{h \in F}{\to} & B \end{array}$

is a weak equivalence.

## Properties

The following says that left/right properness holds locally in every model category, namely between cofibrant/fibrant objects.

###### Proposition

Given a model category,

1. every pushout of a weak equivalence between cofibrant objects along a cofibration is again a weak equivalence;

2. every pullback of a weak equivalence between fibrant objects along a fibration is again a weak equivalence.

A proof is spelled out in (Hirschhorn, prop. 13.1.2), there attributed to (Reedy).

This gives a large class of examples of left/right proper model categories:

###### Corollary

A model category in which all objects are cofibrant is left proper. A model category in which all objects are fibrant is right proper.

See in the list of Examples below for concrete examples.

## Examples

### Left proper model categories

• by cor. 1, every model category in which all objects are cofibrant is left proper; this includes notably and many model structures derived from these, such as

• the left Bousfield localization of every left proper combinatorial model category at a set of morphisms is again left proper. So in particular also the local injective model structures on simplicial presheaves over a site are left proper.

### Non-left proper model categories

A class of model structures which tends to be not left proper are model structures on categories of not-necessarily commutative algebras. For instance

But it is Quillen equivalent to a model structure that is left proper. This is discussed below.

### Proper model categories

Model categories which are both left and right proper include

### Proper Quillen equivalent model structures

While some model categories fail to be proper, often there is a Quillen equivalent one that does enjoy properness.

###### Theorem

Every model category whose acyclic cofibrations are monomorphisms is Quillen equivalent to its model structure on algebraic fibrant objects. In this all objects are fibrant, so that it is right proper.

###### Proof

This is theorem 2.18 in

###### Theorem

Let $T$ be a simplicial (possibly multi-colored) theory, and let $T\mathrm{Alg}$ be the corresponding category of simplicial $T$-algebras. This carries a model category structure where the fibrations and weak equivalences are those of the underlying simplicial sets in the standard model structure on simplicial sets.
Then there exists a morphism of simplicial theories $T \to S$ such that

1. the induced adjunction $S\mathrm{Alg} \rightleftarrows T\mathrm{Alg}$ is a Quillen equivalence;

2. $S\mathrm{Alg}$ is a proper simplicial model category.

###### Proof

This is the content of

## Properties

### Homotopy (co)limits in proper model categories

###### Lemma

In a left proper model category, ordinary pushouts along cofibrations are homotopy pushouts. Dually, in a right proper model category, ordinary pullbacks along fibrations are homotopy pullbacks.

###### Proof

This is stated for instance in HTT, prop A.2.4.4 or in prop. 1.19 in Bar. We follow the proof given in this latter reference.

We demonstrate the first statement, the second is its direct formal dual. So consider a pushout diagram

$\begin{array}{ccc} K & \to & Y \\ {}^{\in cof}\downarrow & & \downarrow \\ L & \to & X \end{array}$

in a left proper model category, where the morphism $K \to L$ is a cofibration, as indicated. We need to exhibit a weak equivalence $X' \to X$ from an object $X'$ that is manifestly a homotopy pushout of $L \leftarrow K \to Y$.

The standard procedure to produce this $X'$ is to pass to a weakly equivalent diagram with the property that all objects are cofibrant and one of the morphisms is a cofibration. The ordinary pushout of that diagram is well known to be the homotopy pushout, as described there.

So pick a cofibrant replacement $\emptyset \hookrightarrow K' \stackrel{\simeq}{\to} K$ of $K$ and factor $K' \to K \to Y$ as a cofibration followed by a weak equivalence $K' \hookrightarrow Y' \stackrel{\simeq}{\to} Y$, and similarly factor $K' \to K \to L$ as $K' \hookrightarrow L' \stackrel{\simeq}{\to} L$.

This yields a weak equivalence of diagrams

$\begin{array}{ccc} Y & \stackrel{\simeq}{\leftarrow} & Y' \\ \uparrow & & \uparrow^{\in cof} \\ K & \stackrel{\simeq}{\leftarrow} & K' \\ {}^{\in cof}\downarrow & & \downarrow^{\in cof} \\ L & \stackrel{\simeq}{\leftarrow} & L' \end{array} \,,$

where now the diagram on the right is cofibrant as a diagram, so that its ordinary pushout

$X' := L' \coprod_{K'} Y'$

is a homotopy colimit of the original diagram.

To obtain the weak equivalence from there to $X$, first form the further pushouts

$\begin{array}{ccccccc} K & & & \to & & & Y \\ & \nwarrow^{\in W} & & & & \nearrow_{\simeq} & \\ & & K' & \to & Y' & & \\ {}^{\in cof}\downarrow & & {}^{\in cof}\downarrow & & {}^{\in cof}\downarrow & & \downarrow \\ & & L' & \to & X' & & \\ & {}^{\in W}\swarrow & & & & \searrow^{\simeq} & \\ L'' := K \coprod_{K'} L & & & \to & & & L'' \coprod_{K} Y \\ {}^{\in W}\downarrow & & & & & & \downarrow \\ L & & & \to & & & X \end{array} \,,$

where the total outer diagram is the original pushout diagram.
Here the cofibrations are as indicated by the above factorization and by their stability under pushouts, and the weak equivalences are as indicated by the above factorization and by the left properness of the model category. The weak equivalence $L'' \stackrel{\simeq}{\to} L$ is by the 2-out-of-3 property.

This establishes in particular a weak equivalence

$X' \stackrel{\simeq}{\to} L'' \coprod_K Y \,.$

It remains to get a weak equivalence further to $X$. For that, take the two outer squares from the above

$\begin{array}{ccc} K & \to & Y \\ {}^{\in cof}\downarrow & & \downarrow \\ L'' & \to & L'' \coprod_{K'} Y \\ {}^{\in W}\downarrow & & \downarrow \\ L & \to & X \end{array} \,.$

Notice that the top square is a pushout by construction, and the total one by assumption. Therefore by the general theorem about pastings of pushouts, also the lower square is a pushout.

Then factor $K \to Y$ as a cofibration followed by a weak equivalence $K \hookrightarrow Z \stackrel{\simeq}{\to} Y$ and push that factorization through the double diagram, to obtain

$\begin{array}{ccccc} K & \stackrel{\in cof}{\to} & Z & \stackrel{\in W}{\to} & Y \\ {}^{\in cof}\downarrow & & {}^{\in cof}\downarrow & & \downarrow \\ L'' & \stackrel{\in cof}{\to} & L'' \coprod_{K} Z & \stackrel{\in W}{\to} & L'' \coprod_{K'} Y \\ {}^{\in W}\downarrow & & {}^{\in W}\downarrow & & \downarrow \\ L & \to & L \coprod_K Z & \stackrel{\in W}{\to} & X \end{array} \,.$

Again by the behaviour of pushouts under pasting, every single square and composite rectangle in this diagram is a pushout. Using this, the cofibration and weak equivalence properties from before push through the diagram as indicated. This finally yields the desired weak equivalence

$L'' \coprod_{K'} Y \stackrel{\simeq}{\to} X$

by 2-out-of-3.

If we had allowed ourselves to assume in addition that $K$ itself is already cofibrant, then the above statement has a much simpler proof, which we list just for fun, too.

###### Proof of the above assuming that the domain of the cofibration is cofibrant

Let $A \hookrightarrow B$ be a cofibration with $A$ cofibrant and let $A \to C$ be any other morphism. Factor this morphism as $A \hookrightarrow C' \stackrel{\simeq}{\to} C$ by a cofibration followed by an acyclic fibration. This gives a weak equivalence of pushout diagrams

$\begin{array}{ccc} C' & \stackrel{\simeq}{\to} & C \\ \uparrow & & \uparrow \\ A & \stackrel{=}{\to} & A \\ \downarrow & & \downarrow \\ B & \stackrel{=}{\to} & B \end{array} \,.$

In the diagram on the left all objects are cofibrant and one morphism is a cofibration, hence this is a cofibrant diagram and its ordinary colimit is the homotopy colimit.
Using that pushout diagrams compose to pushout diagrams, that cofibrations are preserved under pushout and that in a left proper model category weak equivalences are preserved under pushout along cofibrations, we find a weak equivalence $\mathrm{hocolim} \stackrel{\simeq}{\to} B \coprod_A C$

$\begin{array}{ccccc} A & \stackrel{\in cof}{\to} & C' & \stackrel{\in W \cap fib}{\to} & C \\ {}^{\in cof}\downarrow & & {}^{\in cof}\downarrow & & {}^{\in cof}\downarrow \\ B & \to & \mathrm{hocolim} & \stackrel{\in W}{\to} & B \coprod_A C \end{array} \,.$

The proof for the second statement is the precise formal dual.

### Slice categories

For any model category $M$, and any morphism $f \colon A \to B$, the adjunction

$\Sigma_f \colon M/A \rightleftarrows M/B \colon f^*$

is a Quillen adjunction. If this adjunction is a Quillen equivalence, then $f$ must be a weak equivalence. In general, the converse can be proven only if $A$ and $B$ are fibrant.

###### Theorem

The following are equivalent:

1. $M$ is right proper.

2. If $f$ is any weak equivalence in $M$, then $\Sigma_f \dashv f^*$ is a Quillen equivalence.

In other words, $M$ is right proper iff all slice categories have the “correct” Quillen equivalence type. Since whether or not a Quillen adjunction is a Quillen equivalence depends only on the classes of weak equivalences, not the fibrations and cofibrations, it follows that being right proper is really a property of a homotopical category. In particular, if one model structure is right proper, then so is any other model structure on the same category with the same weak equivalences.

### Local cartesian closure

Since most well-behaved model categories are equivalent to a model category in which all objects are fibrant — namely, the model category of algebraically fibrant objects — they are in particular equivalent to one which is right proper. Thus, right properness by itself is not a property of an $(\infty,1)$-category, only of a particular presentation of it via a model category. However, if a Cisinski model category is right proper, then the $(\infty,1)$-category which it presents must be locally cartesian closed. Conversely, any locally cartesian closed (∞,1)-category has a presentation by a right proper Cisinski model category; see locally cartesian closed (∞,1)-category for the proof.

## References

The usefulness of right properness for constructions of homotopy categories is discussed in

• J. Jardine, Cocycle categories (pdf)

The general theory can be found in Chapter 13 of

• Philip S. Hirschhorn, Model Categories and Their Localizations (AMS, pdf toc, pdf)
2013-12-05 09:24:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 66, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9991613626480103, "perplexity": 1257.9045023375736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163043081/warc/CC-MAIN-20131204131723-00001-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.jiskha.com/questions/764632/i-was-asked-to-state-as-a-product-of-speed-and-unit-vector-a-mass-traveling-at-1000ft-sec
# Multivariable Calculus

I was asked to state, as a product of speed and a unit vector, the velocity of a mass traveling at 1000 ft/sec along the vector <1, sqrt(3)>. I know that v/|v| gives me the unit vector, which is (1/2 i, sqrt(3)/2 j). Do I keep the speed as given?

1. "product"? What does that mean in vectors? I suspect the problem means NOT the product, but to indicate the velocity as a vector. If so, then the speed along the direction of the vector is 1000, but looking at the triangle (1, sqrt3, 2), velocity = 1000*cosTheta*i + 1000*sinTheta*j = 500 i + 866 j (bobpursley)

2. Oops, velocity = 500 i + .573 j (bobpursley)

3. Most probably I misunderstood the question and quite frankly I thought this might be the answer, but in the answer box it did ask for the unit vector and the speed, which confused me. (This is a problem that I have already answered, rightly or wrongly.)
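For what it's worth, a quick numerical check of the speed-times-unit-vector decomposition discussed in this thread (an illustrative sketch, not part of the original exchange):

```python
import numpy as np

direction = np.array([1.0, np.sqrt(3.0)])     # the given direction vector <1, sqrt(3)>
unit = direction / np.linalg.norm(direction)   # unit vector (1/2, sqrt(3)/2)
speed = 1000.0                                 # ft/sec
velocity = speed * unit                        # speed times unit vector

print(unit)      # approximately [0.5, 0.866]
print(velocity)  # approximately [500.0, 866.0] ft/sec
```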
2021-06-13 20:55:51
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8081250190734863, "perplexity": 1081.2288445620627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487610841.7/warc/CC-MAIN-20210613192529-20210613222529-00112.warc.gz"}
http://mathhelpforum.com/advanced-math-topics/10926-lambertw-print.html
# LambertW

• January 31st 2007, 04:21 AM galactus

LambertW

Hello all:

I saw a problem on another site where someone wanted to know the inverse of $y=4+x+5e^{x-3}$.

As Pka said, the derivative is always positive, so it has an inverse, but it can not be found by elementary means. I ran it through Maple and it kicked back: $x=y-4-LambertW(\frac{5e^{y}}{e^{7}})$

What is LambertW? I am unfamiliar with it. Does anyone know how this would be done without technology? I thought it would be something interesting to look into. I did find an interesting page on Wiki.

• January 31st 2007, 04:34 AM CaptainBlack

Quote: Originally Posted by galactus: Hello all: I saw a problem on another site where someone wanted to know the inverse. $y=4+x+5e^{x-3}$ As Plato said, the derivative is always positive, so it has an inverse, but it can not be found by elementary means. I ran it through Maple and it kicked back: $x=y-4-LambertW(\frac{5e^{y}}{e^{7}})$ What is LambertW? I am unfamiliar with it. Does anyone know how this would be done without technology? I thought it would be something interesting to look into. I did find an interesting page on Wiki.

The wikipedia page is a good overview, and gives computational schemes. For real arguments > -1/e, Newton-Raphson is quite a good method of evaluating it.

RonL

• January 31st 2007, 06:53 AM ThePerfectHacker

In my engineering class we were studying a hanging cable. The problem is that the equations that describe it are hyperbolic functions and hence cannot be solved "normally". However, I have been able to reduce the problem to solving $ax+b=e^x$ for $a\not = 0$. In that case the linear-exponential equations can always be solved (I imagine). The full solution is here. http://www.mathhelpforum.com/math-he...e-problem.html
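Following up on the Newton-Raphson remark above, here is a small numerical sketch that evaluates the principal branch of the Lambert W function and checks the closed form Maple returned for this thread's example. It is illustrative code written for this page, not something posted in the thread; the starting guess, tolerance and iteration cap are arbitrary choices, and convergence slows near the branch point $x=-1/e$.

```python
import math

def lambert_w(x, tol=1e-12, max_iter=100):
    """Principal branch W0(x) for real x >= -1/e, via Newton's method on
    f(w) = w*exp(w) - x."""
    if x < -1.0 / math.e:
        raise ValueError("real W0(x) requires x >= -1/e")
    w = math.log1p(x)                            # rough starting guess
    for _ in range(max_iter):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))   # Newton step f / f'
        w -= step
        if abs(step) < tol * (1.0 + abs(w)):
            break
    return w

# check the closed form from the thread: x = y - 4 - W(5*e^(y-7))
y = 10.0
x = y - 4.0 - lambert_w(5.0 * math.exp(y - 7.0))
print(x, 4.0 + x + 5.0 * math.exp(x - 3.0))      # the second number should reproduce y
```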
2014-03-15 17:23:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8044571280479431, "perplexity": 1021.312029500778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678698575/warc/CC-MAIN-20140313024458-00092-ip-10-183-142-35.ec2.internal.warc.gz"}
https://mersenneforum.org/showthread.php?s=af6c87236ddc0d00ea1388e5b68ce120&p=590548
mersenneforum.org > Data Let's finish primality verification through Mp#49*, M(74 207 281) Register FAQ Search Today's Posts Mark Forums Read 2021-10-14, 15:09   #34 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 2×3×13×83 Posts Quote: Originally Posted by Zhangrc It's time to add another term for http://oeis.org/A000043! Not yet updated. How is that done? Last fiddled with by kriesel on 2021-10-14 at 15:13 2021-10-14, 15:33   #35 chris2be8 Sep 2009 13×179 Posts Quote: Possibly because 74209207 is composite: Code: \$ factor 74209207 74209207: 331 224197 2021-10-15, 17:22   #36 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 647410 Posts Estimates of primality test verification completion 1. Participants may post one estimate of verification completion date per known Mp#i*. 2. Entries are open now for Mp#49 - Mp #51*. Entries for Mp #52* are not open, until after its exponent is known, the primality is redundantly confirmed, and an official Mersenne Research Inc. press release is issued announcing the discovery. 3. Participants are encouraged to give ISO-9001 dates, or ISO 9001 date-time strings if especially ambitious. 4. First estimate per (forum user ID, Mp #i*) combination is the only one that counts. No revisions will be accepted. 5. Estimates posted after a date that is 31 days before the actual date of verification completion of all Mersenne numbers up to the relevant Mp#i*s are invalid. 6. Interpretation of estimate dates and times posted will be as follows: 1. Dates posted will be interpreted as Gregorian calendar 2. If an estimate is given imprecisely such as first half / quarter / whatever of a year or other time period: 1. the midpoint of the specified period is considered to be the estimate date value. 2. In case of a fraction, as may occur if the period contains an odd number of days, date value shall be rounded upward. For example, first half of 2024 would be evaluated as if the estimator had stated roundup (2024-01-01 + 2024-06-30) / 2 = 2024-01-01 + roundup(181/2) = 2024-01-01 + 91 =2024-04-01. (2024 is a leap year.) 3. Date evaluations will be performed with https://www.timeanddate.com/date/duration.html while that URL remains functional. If it ceases to function for an extended period, another functional URL will be sought and posted. 4. If time of day is also included in the estimate, time zone will be presumed to be UTC unless otherwise stated. 5. If time of day is not included in the estimate, time zone will be presumed to be US Central unless otherwise stated. 6. Estimate dates posted without attached times will be treated as yyyy-dd-mm 0:0:0.0 for purposes of computing time differences and other values. 7. Actual dates of completion of verification as posted at https://www.mersenne.org/report_milestones/ shall be definitive. (No room for debate on actual date values or TZ effects.) 8. Verification of the test status of an exponent is by whatever tests are accepted by mersenne.org at the time; matching res64 values for LL or PRP of the same type; PRP with Cert; found factors; future tests if any. (No room for debate on what constitutes verification or does not.) 9. Highest quality estimate is that with maximum value for a given Mp#i* of the function abs ((actual date – post date) / (actual date – estimate date)) and for which post date < (actual date - 30.99999 days ). 10. Closest estimate is that with minimum value for a given Mp#i* of the function abs (actual date – estimate date) and for which post date < (actual date - 30.99999 days). 11. 
In the unlikely case of a tie on evaluation values, the earliest posted estimate wins. In the case of a tie between estimates posted the same day, earlier post time and order is the first tiebreaker. Second tiebreaker is earlier estimate value. Third tiebreaker is later forum join date. Attached Files Mpnn verification forecasts.pdf (20.0 KB, 64 views) 2021-10-16, 21:11   #37 greenskull Xebeche Apr 2019 🌺🙏🌺 1BD16 Posts Quote: Originally Posted by kriesel Stake your claims.. My prediction for Mp#49*, M(74 207 281) was March 08, 2025. You marked my prediction in your PDF file as 2028-03-08. It is incorrect. It should be 2025-03-08. 2021-10-16, 21:35   #38 greenskull Xebeche Apr 2019 🌺🙏🌺 1BD16 Posts Quote: Originally Posted by kriesel Stake your claims.. Also, you have specified your forecast for Mp#49*, M(74 207 281) with the date 2021-08-26 and link: https://www.mersenneforum.org/showpo...5&postcount=11 But I did not find such a forecast there. I have seen the forecast 2025-01-18 in attached PDF there but it dated by 10/13/2021 and not by 2021-08-26. It seems to me better to be more careful with numbers. Last fiddled with by greenskull on 2021-10-16 at 21:42 2021-10-17, 01:16   #39 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 145128 Posts Quote: Originally Posted by greenskull My prediction for Mp#49*, M(74 207 281) was March 08, 2025. https://mersenneforum.org/showthread...619#post589619 That typo will be corrected at next posting of the pdf. My spreadsheet is corrected now. I plan to update the pdf in the thread only ~monthly, or less frequently if there's no estimation activity and no milestone reached. An astute reader of the rules I drafted may have already noted that they are constructed to limit the effort required of me going forward. Quote: Originally Posted by greenskull Also, you have specified your forecast for Mp#49*, M(74 207 281) with the date 2021-08-26 and link: https://www.mersenneforum.org/showpo...5&postcount=11 But I did not find such a forecast there. I have seen the forecast 2025-01-18 in attached PDF there but it dated by 10/13/2021 and not by 2021-08-26. It seems to me better to be more careful with numbers. On that one, you just don't know what you're talking about. 1) https://www.mersenneforum.org/showpo...5&postcount=11 is a reference post originated 2018-12-16, 12:03, that has been regularly updated numerous times over the years following, and (relative to now) was last updated 2021-10-13 at 12:45 (my time zone). The page 5 was added to one of the attachments updated, at a previous update, so page 5 got first posted there 2021-08-26. 2) The post date of an estimate does not change. It is an estimate's first appearance that counts, not any other time it gets reposted or mentioned, or remains in an updated post. (Read the rules, again.) 3) The update that added page 5 and multiple estimate computations was demonstrably after 2021-08-16, at which point only Mp#48* occupied page 5: see the attachment of https://mersenneforum.org/showpost.p...postcount=3436. 4) The date I saw and took note of on the previous version of the pdf on my server before generating the 2021-10-13 update pdf for updating was 2021-08-26. 5) Also https://mersenneforum.org/showpost.p...&postcount=128 posted 2021-10-06 makes explicit reference to the 2021-08-26 update with a link. "Projections for Mp#49*-51* were previously posted 2021-08-26 and you were notified of that." 6) Also, I update https://www.mersenneforum.org/showpo...5&postcount=11 when a milestone is reached. 
A previous milestone was reaching 57M DC completion, which https://www.mersenne.org/report_milestones/ shows as "2021-08-26 All tests below 57 million verified." 7) Another update would have been triggered by https://www.mersenne.org/report_milestones/ "2021-10-07 All exponents below 105 million tested at least once." 8) The next update would have been triggered by https://www.mersenne.org/report_milestones/ "2021-10-13 All tests below 58 million verified." 9) Whichever minor milestone is reached next, for which I then post an update of https://www.mersenneforum.org/showpo...5&postcount=11, will also change its modified-date. Which will also have nothing to do with when the Mp#49*-Mp#51* estimates were first posted. 10) Unfortunately we can not further confirm that 2021-08-26 date with the wayback machine, because archive.org does not have anything for that URL. It only does top level of mersenneforum.org, not individual posts. Even the data subforum comes up empty: http://web.archive.org/web/202108261...splay.php?f=21 11) Page 5 describes the fit on which the Mp#49*-Mp#51* estimates were computed. That fit is based on data for milestones occurring 2010-2020. Updating milestones data progress of 2021 or later has no effect on that. 12) There is no connection between the spreadsheet cells concerning 2021 milestones reached, and the estimate computations for Mp#49*-Mp#51*. The estimates computed on page 5 of the attachment are not and will not be changed by future updates of the earlier pages. Please stop posting false claims and faulty reasoning. The 2021-08-26 date corresponds to reality. Your false claim 2021-10-13 does not. Last fiddled with by kriesel on 2021-10-17 at 01:26 2021-10-17, 08:51   #40 greenskull Xebeche Apr 2019 🌺🙏🌺 5×89 Posts Quote: Originally Posted by kriesel On that one, you just don't know what you're talking about. I know very well what I am talking about :) Your formula for estimating the quality of a participant's forecast uses the date that the forecast was made right? And the rating of participant's forecast depends on it. The earlier forecast date gives an advantage. Quote: Originally Posted by kriesel 9. Highest quality estimate is that with maximum value for a given Mp#i* of the function abs ((actual date – post date) / (actual date – estimate date)) and for which post date < (actual date - 30.99999 days ). 10. Closest estimate is that with minimum value for a given Mp#i* of the function abs (actual date – estimate date) and for which post date < (actual date - 30.99999 days). You have written that your forecast for 2025-01-18 was made on 2021-08-26. But this does not follow from anywhere. It can't be clearly verified and confirmed. 1) Ok. 2) Ok. 3) There is no forecast for Mp#49*, there is an approximate estimate of the year. 4) Do I just need to trust you, or can I see it somehow? 5) This post links to another post that has your prediction dated 10/13/2021. 6) Duplicates point 5) 7) Not directly related to the question. 8) Not directly related to the question. 9) Any bet must be documented. You cannot appeal to something that cannot be verified or confirmed. 10) ¯\_(ツ)_/¯, no link - no bet. 11) Not directly related to the question. 12) Not directly related to the question. Quote: Originally Posted by kriesel Please stop posting false claims and faulty reasoning. The 2021-08-26 date corresponds to reality. Your false claim 2021-10-13 does not. What is the fallacy of this claim? 
If you cannot clearly confirm this date 2021-08-26, then you have to accept the date of your attached file (10/13/2021) where this your forecast is mentioned and anyone can see it. Then there will be more clarity. I thank you for correcting my forecast soon. But taking into account the fact that you are easily confusing (changing?) numbers as in my forecast, I cannot agree with the date 2021-08-26 of your forecast, which you cannot confirm in any way. Sorry :) Last fiddled with by greenskull on 2021-10-17 at 09:37 2021-10-17, 09:54   #41 greenskull Xebeche Apr 2019 🌺🙏🌺 5·89 Posts And further. In your rules for evaluating predictions, we cannot adjust our earlier predictions and make new bets. Quote: Originally Posted by kriesel 4. First estimate per (forum user ID, Mp #i*) combination is the only one that counts. No revisions will be accepted. Taking into account the fact that the deadlines for their execution are measured in years, this is too rigid a limitation. It may happen that within six months the fact deviates from the predicted trajectory and it becomes obvious that the forecast of one or another participant will not work even close. Such a participant will have no choice but to yearn for the rest of the time - for several years, without having the opportunity to cheerfully continue to participate in the game. Earlier, I proposed a slightly different system that allows as many predictions from the participants as they want. And this system also objectively assesses their talent for forecasting. And this system allows the participants to correct their forecasts on the way to Moment of Truth and make new bets. And the result (Diviner Talent Rate) will take into account all the attempts. https://www.mersenneforum.org/showth...848#post585848 You once said that it is all for fun. Quote: Originally Posted by kriesel It's all just for fun. So let's change the system to the one I suggested so that it is really fun for everyone right down to the Moment of Truth. The 30.99999 days amendment can also be taken into account. Last fiddled with by greenskull on 2021-10-17 at 10:13 2021-10-17, 21:26   #42 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 2·3·13·83 Posts Quote: Originally Posted by kriesel 4) The date I saw and took note of on the previous version of the pdf on my server before generating the 2021-10-13 update pdf for updating was 2021-08-26. ... 7) Another update would have been triggered by https://www.mersenne.org/report_milestones/ "2021-10-07 All exponents below 105 million tested at least once." #4 should have read 2021-10-07, not 2021-10-13. I did not find a cached copy of GIMPS minor milestone progress versus year https://www.mersenneforum.org/showpo...5&postcount=11 in my web browser cache, downloads folder, on my server drive, etc. But I did find a copy of "gimps progress and rate.pdf" from the 2021-08-26 update of that page. It has been copied and renamed to preserve it on my server drive. The milestone progress post will be updated shortly. The actual posting date is shown by multiple methods to have been 2021-08-26. Greenskull's false claim of 2021-10-13 is refuted. 
Attached Thumbnails Attached Files gimps progress and rate-2021-08-26.pdf (58.9 KB, 47 views) Last fiddled with by kriesel on 2021-10-17 at 21:45 Reason: remove firefox's extra line feeds, correct second attachment, removed and readded 3rd & 4th to preserve order 2021-10-17, 21:28   #43 kriesel "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 194A16 Posts rules addition; updated verification forecasts pdf Effective as of the time and date of this post, these rules are appended to those posted at https://mersenneforum.org/showpost.p...&postcount=36: 12) Any of the following shall constitute grounds for disqualification of a participant: 1. Making false claims about matters relating to the estimations and failing to promptly correct them after becoming aware of the inaccuracy. 2. Engaging in excessively argumentative, specious, or illogical posting or too-frequent posting regarding estimates. 3. Repeated violation of the stated rules. 4. Dishonesty. 5. Trolling. (See https://en.wikipedia.org/wiki/Internet_troll and https://mersenneforum.org/showpost.p...&postcount=314) 6. Other sociopathic behavior. 13) Disqualification and grounds for it shall be determined in the sole judgment of kriesel. (See the Predict M52 game thread https://mersenneforum.org/showthread.php?t=23892 for precedent that the creator of the game is the final arbiter of rules interpretation and enforcement.) 14) There is no mechanism for appeal of disqualification. Disqualification is final. Disqualification is permanent. 15) Estimates made, without specification for each estimate, of all 3 of the following elements, 1. which Mersenne prime, eg Mp#49*/Mp#50*/Mp#51* (or eventually Mp#52*), 2. estimated date of verification completion, and 3. corresponding stated credible basis, are invalid and shall be ignored. Estimates made before the time and date of this post are grandfathered in, and may be supplemented by stating the basis for them, within 30 days of original posting of estimate elements. Stating a clear reproducible basis or algorithm is encouraged. (Examples of credible basis are fits to past milestones, projections based on number of verification completions per day, projections based upon total GHzD required and primenet statistics, other fully documented math based approaches; examples of not credible basis are crystal balls, ouija boards, magic 8balls, dreams, premonitions, psychics) 16) Purpose is to identify a variety of alternate estimation methods, and demonstrate by test over extended periods, which are more effective or less effective. 17) Participation after the time and date of this post constitutes consent to the rules. (Anyone may of course start their own game, contest, whatever, spending the effort to: develop and post rules; tabulate estimates; upon completion of a verification, compute standings; etc. A different forum thread for such an undertaking is highly recommended.) Quote: Originally Posted by Uncwilly There is not so much fun there. Amen to that. Attached Files Mpnn verification forecasts.pdf (20.1 KB, 52 views) 2021-10-17, 21:51 #44 greenskull Xebeche     Apr 2019 🌺🙏🌺 5×89 Posts I don't want to participate in Kriesel's game. Due to boring uninteresting rules and sloppy data handling. But I will stay here and will, from time to time, refine my predictions. If necessary, of course. Also, if someone wants to clarify theirs, then I will take this into account in my quality assessment system. 
I will apply the assessment I presented earlier here: https://www.mersenneforum.org/showth...848#post585848 A little later, I will summarize all the forecasts here in a table and publish. Last fiddled with by greenskull on 2021-10-17 at 21:55
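For readers following along, the "highest quality" and "closest" scores defined in kriesel's rules 9 and 10 earlier in the thread (post #36) reduce to simple date arithmetic. A minimal sketch, with entirely hypothetical dates and helper names of my own:

```python
from datetime import date

def eligible(actual, post):
    """Rules 9/10: only estimates posted more than ~31 days before the actual date count."""
    return (actual - post).days > 30.99999

def closest_score(actual, estimate):
    """Rule 10: smaller |actual date - estimate date| is better."""
    return abs((actual - estimate).days)

def quality_score(actual, post, estimate):
    """Rule 9: larger |actual - post| / |actual - estimate| is better."""
    return abs((actual - post).days) / abs((actual - estimate).days)

# Hypothetical example: verification completes 2025-06-01.
actual = date(2025, 6, 1)
post, estimate = date(2021, 10, 1), date(2025, 3, 1)
if eligible(actual, post):
    print(closest_score(actual, estimate))                  # 92 days off
    print(round(quality_score(actual, post, estimate), 2))  # ~14.55
```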
2022-05-21 07:03:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.437486469745636, "perplexity": 4895.225856531087}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662538646.33/warc/CC-MAIN-20220521045616-20220521075616-00607.warc.gz"}
https://www.semanticscholar.org/paper/PAC-Bayesian-Inequalities-for-Martingales-Seldin-Laviolette/801baf5aa5980bf73a8665e56541997f573fab3a
# PAC-Bayesian Inequalities for Martingales @article{Seldin2012PACBayesianIF, title={PAC-Bayesian Inequalities for Martingales}, author={Yevgeny Seldin and François Laviolette and Nicol{\o} Cesa-Bianchi and John Shawe-Taylor and Peter Auer}, journal={IEEE Transactions on Information Theory}, year={2012}, volume={58}, pages={7086-7093} }` • Published 31 October 2011 • Mathematics • IEEE Transactions on Information Theory We present a set of high-probability inequalities that control the concentration of weighted averages of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. Our results extend the PAC-Bayesian (probably approximately correct) analysis in learning theory from the i.i.d. setting to martingales opening the way for its application to importance weighted sampling, reinforcement learning, and other interactive learning domains, as well as many other domains in… ## Figures from this paper ### PAC-Bayes-Bernstein Inequality for Martingales and its Application to Multiarmed Bandits • Computer Science ICML On-line Trading of Exploration and Exploitation • 2012 A new tool for data-dependent analysis of the exploration-exploitation trade-off in learning under limited feedback based on a new concentration inequality that makes it possible to control the concentration of weighted averages of multiple simultaneously evolving and interdependent martingales. ### PAC-Bayes Analysis Beyond the Usual Bounds • Computer Science NeurIPS • 2020 A basic PAC-Bayes inequality for stochastic kernels is presented, from which one may derive extensions of various known PAC- Bayes bounds as well as novel bounds, and a simple bound for a loss function with unbounded range is presented. ### Novel Change of Measure Inequalities and PAC-Bayesian Bounds • Computer Science ArXiv • 2020 This work proposes a multiplicative change of measure inequality for $\alpha$-divergences, which leads to tighter bounds under some technical conditions and presents several PAC-Bayesian bounds for various classes of random variables, by using the novel change ofMeasure inequalities. ### PAC-Bayesian Transportation Bound A new generalization error bound is developed, the PAC-Bayesian transportation bound, which is the first PAC- Bayesian bound that relates the risks of any two predictors according to their distance, and capable of evaluating the cost of de-randomization of stochastic predictors faced with continuous loss functions. ### Novel Change of Measure Inequalities with Applications to PAC-Bayesian Bounds and Monte Carlo Estimation • Computer Science, Mathematics AISTATS • 2021 Several applications are presented, including PAC-Bayesian bounds for various classes of losses and non-asymptotic intervals for Monte Carlo estimates and a generalized version of Hammersley-Chapman-Robbins inequality. ### Simpler PAC-Bayesian bounds for hostile data • Computer Science Machine Learning • 2017 This paper provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as hostile data) and proves a general PAC- Bayesian bound, and shows how to use it in various hostile settings. ### A Strongly Quasiconvex PAC-Bayesian Bound • Computer Science ALT • 2017 It is shown that the PAC-Bayesian bound can be rewritten as a one-dimensional function of the trade-off parameter and provide sufficient conditions under which the function has a single global minimum. 
### A New Family of Generalization Bounds Using Samplewise Evaluated CMI • Computer Science ArXiv • 2022 A new family of information-theoretic generalization bounds is presented, in which the training loss and the population loss are compared through a jointly convex function, and a samplewise, average version of Seeger’s PAC-Bayesian bound is derived. ### Tighter PAC-Bayes Generalisation Bounds by Leveraging Example Difficulty • Computer Science ArXiv • 2022 A modified version of the excess risk is introduced, which can be used to obtain tighter, fast-rate PAC-Bayesian generalisation bounds and a new bound for [ − 1 , 1]-valued signed losses, which is more favourable when they empirically have low variance around 0.05. ## References SHOWING 1-10 OF 28 REFERENCES ### A PAC analysis of a Bayesian estimator • Computer Science COLT '97 • 1997 The paper uses the techniques to give the first PAC style analysis of a Bayesian inspired estimator of generalisation, the size of a ball which can be placed in the consistent region of parameter space, and the resulting bounds are independent of the complexity of the function class though they depend linearly on the dimensionality of the parameter space. ### Bayesian Gaussian process models : PAC-Bayesian generalisation error bounds and sparse approximations The tractability and usefulness of simple greedy forward selection with information-theoretic criteria previously used in active learning is demonstrated and generic schemes for automatic model selection with many (hyper)parameters are developed. ### PAC-Bayesian Analysis of Contextual Bandits • Computer Science NIPS • 2011 The analysis allows to provide the algorithm large amount of side information, let the algorithm to decide which side information is relevant for the task, and penalize the algorithm only for the side information that it is using de facto. ### Empirical Bernstein Bounds and Sample-Variance Penalization • Mathematics, Computer Science COLT • 2009 Improved constants for data dependent and variance sensitive confidence bounds are given, called empirical Bernstein bounds, and extended to hold uniformly over classes of functions whose growth function is polynomial in the sample size n, and sample variance penalization is considered. ### PAC-Bayesian Generalisation Error Bounds for Gaussian Process Classification • M. Seeger • Computer Science J. Mach. Learn. Res. • 2002 By applying the PAC-Bayesian theorem of McAllester (1999a), this paper proves distribution-free generalisation error bounds for a wide range of approximate Bayesian GP classification techniques, giving a strong learning-theoretical justification for the use of these techniques. ### PAC-Bayesian Stochastic Model Selection A PAC-Bayesian performance guarantee for stochastic model selection that is superior to analogous guarantees for deterministic model selection and shown that the posterior optimizing the performance guarantee is a Gibbs distribution. ### Distribution-Dependent PAC-Bayes Priors • Mathematics, Computer Science ALT • 2010 The idea that the PAC-Bayes prior can be informed by the data-generating distribution is developed, sharp bounds for an existing framework are proved, and insights into function class complexity are developed in this model and means of controlling it with new algorithms are suggested. ### WEIGHTED SUMS OF CERTAIN DEPENDENT RANDOM VARIABLES 1. 
Let (Ω, F, P) be a probability space and {F_n} be an increasing family of sub-σ-fields of F. Let (x_n), n = 1, 2, ..., be a sequence of bounded martingale differences on (Ω, F, P), that is, x_n(ω) is bounded almost surely.

### Some PAC-Bayesian Theorems The PAC-Bayesian theorems given here apply to an arbitrary prior measure on an arbitrary concept space and provide an alternative to the use of VC dimension in proving PAC bounds for parameterized concepts.
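The reference above titled "Weighted sums of certain dependent random variables" appears to be Azuma's classical paper; its central concentration bound (commonly called the Azuma-Hoeffding inequality) for a martingale difference sequence (x_n) with |x_n| ≤ c_n almost surely is usually stated as:

```latex
\Pr\!\left( \Bigl| \sum_{n=1}^{N} x_n \Bigr| \ge t \right)
  \;\le\; 2 \exp\!\left( - \frac{t^{2}}{2 \sum_{n=1}^{N} c_n^{2}} \right), \qquad t > 0.
```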
2022-12-04 01:12:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7523017525672913, "perplexity": 2255.1982715204076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710953.78/warc/CC-MAIN-20221204004054-20221204034054-00049.warc.gz"}
https://isabelle.in.tum.de/repos/isabelle/file/c039b8ede204/doc-src/IsarRef/pure.tex
doc-src/IsarRef/pure.tex author wenzelm Mon Mar 04 22:31:21 2002 +0100 (2002-03-04) changeset 13016 c039b8ede204 parent 12976 5cfe2941a5db child 13024 0461b281c2b5 permissions -rw-r--r-- tuned; 1 2 \chapter{Basic Language Elements}\label{ch:pure-syntax} 3 4 Subsequently, we introduce the main part of Pure Isar theory and proof 5 commands, together with fundamental proof methods and attributes. 6 Chapter~\ref{ch:gen-tools} describes further Isar elements provided by generic 7 tools and packages (such as the Simplifier) that are either part of Pure 8 Isabelle or pre-installed in most object logics. Chapter~\ref{ch:logics} 9 refers to object-logic specific elements (mainly for HOL and ZF). 10 11 \medskip 12 13 Isar commands may be either \emph{proper} document constructors, or 14 \emph{improper commands}. Some proof methods and attributes introduced later 15 are classified as improper as well. Improper Isar language elements, which 16 are subsequently marked by $^*$'', are often helpful when developing proof 17 documents, while their use is discouraged for the final outcome. Typical 18 examples are diagnostic commands that print terms or theorems according to the 19 current context; other commands emulate old-style tactical theorem proving. 20 21 22 \section{Theory commands} 23 24 \subsection{Defining theories}\label{sec:begin-thy} 25 26 \indexisarcmd{header}\indexisarcmd{theory}\indexisarcmd{context}\indexisarcmd{end} 27 \begin{matharray}{rcl} 28 \isarcmd{header} & : & \isarkeep{toplevel} \\ 29 \isarcmd{theory} & : & \isartrans{toplevel}{theory} \\ 30 \isarcmd{context}^* & : & \isartrans{toplevel}{theory} \\ 31 \isarcmd{end} & : & \isartrans{theory}{toplevel} \\ 32 \end{matharray} 33 34 Isabelle/Isar new-style'' theories are either defined via theory files or 35 interactively. Both theory-level specifications and proofs are handled 36 uniformly --- occasionally definitional mechanisms even require some explicit 37 proof as well. In contrast, old-style'' Isabelle theories support batch 38 processing only, with the proof scripts collected in separate ML files. 39 40 The first real'' command of any theory has to be $\THEORY$, which starts a 41 new theory based on the merge of existing ones. Just preceding $\THEORY$, 42 there may be an optional $\isarkeyword{header}$ declaration, which is relevant 43 to document preparation only; it acts very much like a special pre-theory 44 markup command (cf.\ \S\ref{sec:markup-thy} and \S\ref{sec:markup-thy}). The 45 $\END$ commands concludes a theory development; it has to be the very last 46 command of any theory file to loaded in batch-mode. The theory context may be 47 also changed interactively by $\CONTEXT$ without creating a new theory. 48 49 \begin{rail} 50 'header' text 51 ; 52 'theory' name '=' (name + '+') filespecs? ':' 53 ; 54 'context' name 55 ; 56 57 filespecs: 'files' ((name | parname) +); 58 \end{rail} 59 60 \begin{descr} 61 \item [$\isarkeyword{header}~text$] provides plain text markup just preceding 62 the formal beginning of a theory. In actual document preparation the 63 corresponding {\LaTeX} macro \verb,\isamarkupheader, may be redefined to 64 produce chapter or section headings. See also \S\ref{sec:markup-thy} and 65 \S\ref{sec:markup-prf} for further markup commands. 66 67 \item [$\THEORY~A = B@1 + \cdots + B@n\colon$] starts a new theory $A$ based 68 on the merge of existing theories $B@1, \dots, B@n$. 
69 70 Due to inclusion of several ancestors, the overall theory structure emerging 71 in an Isabelle session forms a directed acyclic graph (DAG). Isabelle's 72 theory loader ensures that the sources contributing to the development graph 73 are always up-to-date. Changed files are automatically reloaded when 74 processing theory headers interactively; batch-mode explicitly distinguishes 75 \verb,update_thy, from \verb,use_thy,, see also \cite{isabelle-ref}. 76 77 The optional $\isarkeyword{files}$ specification declares additional 78 dependencies on ML files. Files will be loaded immediately, unless the name 79 is put in parentheses, which merely documents the dependency to be resolved 80 later in the text (typically via explicit $\isarcmd{use}$ in the body text, 81 see \S\ref{sec:ML}). In reminiscence of the old-style theory system of 82 Isabelle, \texttt{$A$.thy} may be also accompanied by an additional file 83 \texttt{$A$.ML} consisting of ML code that is executed in the context of the 84 \emph{finished} theory $A$. That file should not be included in the 85 $\isarkeyword{files}$ dependency declaration, though. 86 87 \item [$\CONTEXT~B$] enters an existing theory context, basically in read-only 88 mode, so only a limited set of commands may be performed without destroying 89 the theory. Just as for $\THEORY$, the theory loader ensures that $B$ is 90 loaded and up-to-date. 91 92 This command is occasionally useful for quick interactive experiments; 93 normally one should always commence a new context via $\THEORY$. 94 95 \item [$\END$] concludes the current theory definition or context switch. 96 Note that this command cannot be undone, but the whole theory definition has 97 to be retracted. 98 99 \end{descr} 100 101 102 \subsection{Markup commands}\label{sec:markup-thy} 103 104 \indexisarcmd{chapter}\indexisarcmd{section}\indexisarcmd{subsection} 105 \indexisarcmd{subsubsection}\indexisarcmd{text}\indexisarcmd{text-raw} 106 \begin{matharray}{rcl} 107 \isarcmd{chapter} & : & \isartrans{theory}{theory} \\ 108 \isarcmd{section} & : & \isartrans{theory}{theory} \\ 109 \isarcmd{subsection} & : & \isartrans{theory}{theory} \\ 110 \isarcmd{subsubsection} & : & \isartrans{theory}{theory} \\ 111 \isarcmd{text} & : & \isartrans{theory}{theory} \\ 112 \isarcmd{text_raw} & : & \isartrans{theory}{theory} \\ 113 \end{matharray} 114 115 Apart from formal comments (see \S\ref{sec:comments}), markup commands provide 116 a structured way to insert text into the document generated from a theory (see 117 \cite{isabelle-sys} for more information on Isabelle's document preparation 118 tools). 119 120 \railalias{textraw}{text\_raw} 121 \railterm{textraw} 122 123 \begin{rail} 124 ('chapter' | 'section' | 'subsection' | 'subsubsection' | 'text' | textraw) text 125 ; 126 \end{rail} 127 128 \begin{descr} 129 \item [$\isarkeyword{chapter}$, $\isarkeyword{section}$, 130 $\isarkeyword{subsection}$, and $\isarkeyword{subsubsection}$] mark chapter 131 and section headings. 132 \item [$\TEXT$] specifies paragraphs of plain text, including references to 133 formal entities (see also \S\ref{sec:antiq} on antiquotations''). 134 \item [$\isarkeyword{text_raw}$] inserts {\LaTeX} source into the output, 135 without additional markup. Thus the full range of document manipulations 136 becomes available. 137 \end{descr} 138 139 Any of these markup elements corresponds to a {\LaTeX} command with the name 140 prefixed by \verb,\isamarkup,. 
For the sectioning commands this is a plain 141 macro with a single argument, e.g.\ \verb,\isamarkupchapter{,\dots\verb,}, for 142 $\isarkeyword{chapter}$. The $\isarkeyword{text}$ markup results in a 143 {\LaTeX} environment \verb,\begin{isamarkuptext}, {\dots} 144 \verb,\end{isamarkuptext},, while $\isarkeyword{text_raw}$ causes the text 145 to be inserted directly into the {\LaTeX} source. 146 147 \medskip 148 149 Additional markup commands are available for proofs (see 150 \S\ref{sec:markup-prf}). Also note that the $\isarkeyword{header}$ 151 declaration (see \S\ref{sec:begin-thy}) admits to insert section markup just 152 preceding the actual theory definition. 153 154 155 \subsection{Type classes and sorts}\label{sec:classes} 156 157 \indexisarcmd{classes}\indexisarcmd{classrel}\indexisarcmd{defaultsort} 158 \begin{matharray}{rcll} 159 \isarcmd{classes} & : & \isartrans{theory}{theory} \\ 160 \isarcmd{classrel} & : & \isartrans{theory}{theory} & (axiomatic!) \\ 161 \isarcmd{defaultsort} & : & \isartrans{theory}{theory} \\ 162 \end{matharray} 163 164 \begin{rail} 165 'classes' (classdecl +) 166 ; 167 'classrel' nameref ('<' | subseteq) nameref 168 ; 169 'defaultsort' sort 170 ; 171 \end{rail} 172 173 \begin{descr} 174 \item [$\isarkeyword{classes}~c \subseteq \vec c$] declares class $c$ to be a 175 subclass of existing classes $\vec c$. Cyclic class structures are ruled 176 out. 177 \item [$\isarkeyword{classrel}~c@1 \subseteq c@2$] states a subclass relation 178 between existing classes $c@1$ and $c@2$. This is done axiomatically! The 179 $\INSTANCE$ command (see \S\ref{sec:axclass}) provides a way to introduce 180 proven class relations. 181 \item [$\isarkeyword{defaultsort}~s$] makes sort $s$ the new default sort for 182 any type variables given without sort constraints. Usually, the default 183 sort would be only changed when defining a new object-logic. 184 \end{descr} 185 186 187 \subsection{Primitive types and type abbreviations}\label{sec:types-pure} 188 189 \indexisarcmd{typedecl}\indexisarcmd{types}\indexisarcmd{nonterminals}\indexisarcmd{arities} 190 \begin{matharray}{rcll} 191 \isarcmd{types} & : & \isartrans{theory}{theory} \\ 192 \isarcmd{typedecl} & : & \isartrans{theory}{theory} \\ 193 \isarcmd{nonterminals} & : & \isartrans{theory}{theory} \\ 194 \isarcmd{arities} & : & \isartrans{theory}{theory} & (axiomatic!) \\ 195 \end{matharray} 196 197 \begin{rail} 198 'types' (typespec '=' type infix? +) 199 ; 200 'typedecl' typespec infix? 201 ; 202 'nonterminals' (name +) 203 ; 204 'arities' (nameref '::' arity +) 205 ; 206 \end{rail} 207 208 \begin{descr} 209 \item [$\TYPES~(\vec\alpha)t = \tau$] introduces \emph{type synonym} 210 $(\vec\alpha)t$ for existing type $\tau$. Unlike actual type definitions, 211 as are available in Isabelle/HOL for example, type synonyms are just purely 212 syntactic abbreviations without any logical significance. Internally, type 213 synonyms are fully expanded. 214 \item [$\isarkeyword{typedecl}~(\vec\alpha)t$] declares a new type constructor 215 $t$, intended as an actual logical type. Note that object-logics such as 216 Isabelle/HOL override $\isarkeyword{typedecl}$ by their own version. 217 \item [$\isarkeyword{nonterminals}~\vec c$] declares $0$-ary type constructors 218 $\vec c$ to act as purely syntactic types, i.e.\ nonterminal symbols of 219 Isabelle's inner syntax of terms or types. 220 \item [$\isarkeyword{arities}~t::(\vec s)s$] augments Isabelle's order-sorted 221 signature of types by new type constructor arities. 
This is done 222 axiomatically! The $\INSTANCE$ command (see \S\ref{sec:axclass}) provides a 223 way to introduce proven type arities. 224 \end{descr} 225 226 227 \subsection{Constants and simple definitions}\label{sec:consts} 228 229 \indexisarcmd{consts}\indexisarcmd{defs}\indexisarcmd{constdefs}\indexoutertoken{constdecl} 230 \begin{matharray}{rcl} 231 \isarcmd{consts} & : & \isartrans{theory}{theory} \\ 232 \isarcmd{defs} & : & \isartrans{theory}{theory} \\ 233 \isarcmd{constdefs} & : & \isartrans{theory}{theory} \\ 234 \end{matharray} 235 236 \begin{rail} 237 'consts' (constdecl +) 238 ; 239 'defs' ('(overloaded)')? (axmdecl prop +) 240 ; 241 'constdefs' (constdecl prop +) 242 ; 243 244 constdecl: name '::' type mixfix? 245 ; 246 \end{rail} 247 248 \begin{descr} 249 \item [$\CONSTS~c::\sigma$] declares constant $c$ to have any instance of type 250 scheme $\sigma$. The optional mixfix annotations may attach concrete syntax 251 to the constants declared. 252 253 \item [$\DEFS~name: eqn$] introduces $eqn$ as a definitional axiom for some 254 existing constant. See \cite[\S6]{isabelle-ref} for more details on the 255 form of equations admitted as constant definitions. 256 257 The $overloaded$ option declares definitions to be potentially overloaded. 258 Unless this option is given, a warning message would be issued for any 259 definitional equation with a more special type than that of the 260 corresponding constant declaration. 261 262 \item [$\CONSTDEFS~c::\sigma~eqn$] combines declarations and definitions of 263 constants, using the canonical name $c_def$ for the definitional axiom. 264 \end{descr} 265 266 267 \subsection{Syntax and translations}\label{sec:syn-trans} 268 269 \indexisarcmd{syntax}\indexisarcmd{translations} 270 \begin{matharray}{rcl} 271 \isarcmd{syntax} & : & \isartrans{theory}{theory} \\ 272 \isarcmd{translations} & : & \isartrans{theory}{theory} \\ 273 \end{matharray} 274 275 \railalias{rightleftharpoons}{\isasymrightleftharpoons} 276 \railterm{rightleftharpoons} 277 278 \railalias{rightharpoonup}{\isasymrightharpoonup} 279 \railterm{rightharpoonup} 280 281 \railalias{leftharpoondown}{\isasymleftharpoondown} 282 \railterm{leftharpoondown} 283 284 \begin{rail} 285 'syntax' ('(' ( name | 'output' | name 'output' ) ')')? (constdecl +) 286 ; 287 'translations' (transpat ('==' | '=>' | '<=' | rightleftharpoons | rightharpoonup | leftharpoondown) transpat +) 288 ; 289 transpat: ('(' nameref ')')? string 290 ; 291 \end{rail} 292 293 \begin{descr} 294 \item [$\isarkeyword{syntax}~(mode)~decls$] is similar to $\CONSTS~decls$, 295 except that the actual logical signature extension is omitted. Thus the 296 context free grammar of Isabelle's inner syntax may be augmented in 297 arbitrary ways, independently of the logic. The $mode$ argument refers to 298 the print mode that the grammar rules belong; unless the \texttt{output} 299 flag is given, all productions are added both to the input and output 300 grammar. 301 \item [$\isarkeyword{translations}~rules$] specifies syntactic translation 302 rules (i.e.\ \emph{macros}): parse~/ print rules (\texttt{==} or 303 \isasymrightleftharpoons), parse rules (\texttt{=>} or 304 \isasymrightharpoonup), or print rules (\texttt{<=} or 305 \isasymleftharpoondown). Translation patterns may be prefixed by the 306 syntactic category to be used for parsing; the default is \texttt{logic}. 
307 \end{descr} 308 309 310 \subsection{Axioms and theorems}\label{sec:axms-thms} 311 312 \indexisarcmd{axioms}\indexisarcmd{lemmas}\indexisarcmd{theorems} 313 \begin{matharray}{rcll} 314 \isarcmd{axioms} & : & \isartrans{theory}{theory} & (axiomatic!) \\ 315 \isarcmd{lemmas} & : & \isartrans{theory}{theory} \\ 316 \isarcmd{theorems} & : & \isartrans{theory}{theory} \\ 317 \end{matharray} 318 319 \begin{rail} 320 'axioms' (axmdecl prop +) 321 ; 322 ('lemmas' | 'theorems') locale? (thmdef? thmrefs + 'and') 323 ; 324 \end{rail} 325 326 \begin{descr} 327 328 \item [$\isarkeyword{axioms}~a: \phi$] introduces arbitrary statements as 329 axioms of the meta-logic. In fact, axioms are axiomatic theorems'', and 330 may be referred later just as any other theorem. 331 332 Axioms are usually only introduced when declaring new logical systems. 333 Everyday work is typically done the hard way, with proper definitions and 334 actual proven theorems. 335 336 \item [$\isarkeyword{lemmas}~a = \vec b$] restrieves and stores existing facts 337 in the theory context, or the specified locale (see also 338 \S\ref{sec:locale}). Typical applications would also involve attributes, to 339 declare Simplifier rules, for example. 340 341 \item [$\isarkeyword{theorems}$] is essentially the same as 342 $\isarkeyword{lemmas}$, but marks the result as a different kind of facts. 343 344 \end{descr} 345 346 347 \subsection{Name spaces} 348 349 \indexisarcmd{global}\indexisarcmd{local}\indexisarcmd{hide} 350 \begin{matharray}{rcl} 351 \isarcmd{global} & : & \isartrans{theory}{theory} \\ 352 \isarcmd{local} & : & \isartrans{theory}{theory} \\ 353 \isarcmd{hide} & : & \isartrans{theory}{theory} \\ 354 \end{matharray} 355 356 \begin{rail} 357 'hide' name (nameref + ) 358 ; 359 \end{rail} 360 361 Isabelle organizes any kind of name declarations (of types, constants, 362 theorems etc.) by separate hierarchically structured name spaces. Normally 363 the user does not have to control the behavior of name spaces by hand, yet the 364 following commands provide some way to do so. 365 366 \begin{descr} 367 \item [$\isarkeyword{global}$ and $\isarkeyword{local}$] change the current 368 name declaration mode. Initially, theories start in $\isarkeyword{local}$ 369 mode, causing all names to be automatically qualified by the theory name. 370 Changing this to $\isarkeyword{global}$ causes all names to be declared 371 without the theory prefix, until $\isarkeyword{local}$ is declared again. 372 373 Note that global names are prone to get hidden accidently later, when 374 qualified names of the same base name are introduced. 375 376 \item [$\isarkeyword{hide}~space~names$] removes declarations from a given 377 name space (which may be $class$, $type$, or $const$). Hidden objects 378 remain valid within the logic, but are inaccessible from user input. In 379 output, the special qualifier $\mathord?\mathord?$'' is prefixed to the 380 full internal name. Unqualified (global) names may not be hidden. 
381 \end{descr} 382 383 384 \subsection{Incorporating ML code}\label{sec:ML} 385 386 \indexisarcmd{use}\indexisarcmd{ML}\indexisarcmd{ML-command} 387 \indexisarcmd{ML-setup}\indexisarcmd{setup} 388 \indexisarcmd{method-setup} 389 \begin{matharray}{rcl} 390 \isarcmd{use} & : & \isartrans{\cdot}{\cdot} \\ 391 \isarcmd{ML} & : & \isartrans{\cdot}{\cdot} \\ 392 \isarcmd{ML_command} & : & \isartrans{\cdot}{\cdot} \\ 393 \isarcmd{ML_setup} & : & \isartrans{theory}{theory} \\ 394 \isarcmd{setup} & : & \isartrans{theory}{theory} \\ 395 \isarcmd{method_setup} & : & \isartrans{theory}{theory} \\ 396 \end{matharray} 397 398 \railalias{MLsetup}{ML\_setup} 399 \railterm{MLsetup} 400 401 \railalias{methodsetup}{method\_setup} 402 \railterm{methodsetup} 403 404 \railalias{MLcommand}{ML\_command} 405 \railterm{MLcommand} 406 407 \begin{rail} 408 'use' name 409 ; 410 ('ML' | MLcommand | MLsetup | 'setup') text 411 ; 412 methodsetup name '=' text text 413 ; 414 \end{rail} 415 416 \begin{descr} 417 \item [$\isarkeyword{use}~file$] reads and executes ML commands from $file$. 418 The current theory context (if present) is passed down to the ML session, 419 but may not be modified. Furthermore, the file name is checked with the 420 $\isarkeyword{files}$ dependency declaration given in the theory header (see 421 also \S\ref{sec:begin-thy}). 422 423 \item [$\isarkeyword{ML}~text$ and $\isarkeyword{ML_command}~text$] execute ML 424 commands from $text$. The theory context is passed in the same way as for 425 $\isarkeyword{use}$, but may not be changed. Note that the output of 426 $\isarkeyword{ML_command}$ is less verbose than plain $\isarkeyword{ML}$. 427 428 \item [$\isarkeyword{ML_setup}~text$] executes ML commands from $text$. The 429 theory context is passed down to the ML session, and fetched back 430 afterwards. Thus $text$ may actually change the theory as a side effect. 431 432 \item [$\isarkeyword{setup}~text$] changes the current theory context by 433 applying $text$, which refers to an ML expression of type 434 \texttt{(theory~->~theory)~list}. The $\isarkeyword{setup}$ command is the 435 canonical way to initialize any object-logic specific tools and packages 436 written in ML. 437 438 \item [$\isarkeyword{method_setup}~name = text~description$] defines a proof 439 method in the current theory. The given $text$ has to be an ML expression 440 of type \texttt{Args.src -> Proof.context -> Proof.method}. Parsing 441 concrete method syntax from \texttt{Args.src} input can be quite tedious in 442 general. The following simple examples are for methods without any explicit 443 arguments, or a list of theorems, respectively. 444 445 {\footnotesize 446 \begin{verbatim} 447 Method.no_args (Method.METHOD (fn facts => foobar_tac)) 448 Method.thms_args (fn thms => Method.METHOD (fn facts => foobar_tac)) 449 Method.ctxt_args (fn ctxt => Method.METHOD (fn facts => foobar_tac)) 450 Method.thms_ctxt_args (fn thms => fn ctxt => 451 Method.METHOD (fn facts => foobar_tac)) 452 \end{verbatim} 453 } 454 455 Note that mere tactic emulations may ignore the \texttt{facts} parameter 456 above. Proper proof methods would do something appropriate'' with the list 457 of current facts, though. Single-rule methods usually do strict 458 forward-chaining (e.g.\ by using \texttt{Method.multi_resolves}), while 459 automatic ones just insert the facts using \texttt{Method.insert_tac} before 460 applying the main tactic. 
461 \end{descr} 462 463 464 \subsection{Syntax translation functions} 465 466 \indexisarcmd{parse-ast-translation}\indexisarcmd{parse-translation} 467 \indexisarcmd{print-translation}\indexisarcmd{typed-print-translation} 468 \indexisarcmd{print-ast-translation}\indexisarcmd{token-translation} 469 \begin{matharray}{rcl} 470 \isarcmd{parse_ast_translation} & : & \isartrans{theory}{theory} \\ 471 \isarcmd{parse_translation} & : & \isartrans{theory}{theory} \\ 472 \isarcmd{print_translation} & : & \isartrans{theory}{theory} \\ 473 \isarcmd{typed_print_translation} & : & \isartrans{theory}{theory} \\ 474 \isarcmd{print_ast_translation} & : & \isartrans{theory}{theory} \\ 475 \isarcmd{token_translation} & : & \isartrans{theory}{theory} \\ 476 \end{matharray} 477 478 \railalias{parseasttranslation}{parse\_ast\_translation} 479 \railterm{parseasttranslation} 480 481 \railalias{parsetranslation}{parse\_translation} 482 \railterm{parsetranslation} 483 484 \railalias{printtranslation}{print\_translation} 485 \railterm{printtranslation} 486 487 \railalias{typedprinttranslation}{typed\_print\_translation} 488 \railterm{typedprinttranslation} 489 490 \railalias{printasttranslation}{print\_ast\_translation} 491 \railterm{printasttranslation} 492 493 \railalias{tokentranslation}{token\_translation} 494 \railterm{tokentranslation} 495 496 \begin{rail} 497 ( parseasttranslation | parsetranslation | printtranslation | typedprinttranslation | 498 printasttranslation | tokentranslation ) text 499 \end{rail} 500 501 Syntax translation functions written in ML admit almost arbitrary 502 manipulations of Isabelle's inner syntax. Any of the above commands have a 503 single \railqtoken{text} argument that refers to an ML expression of 504 appropriate type. 505 506 \begin{ttbox} 507 val parse_ast_translation : (string * (ast list -> ast)) list 508 val parse_translation : (string * (term list -> term)) list 509 val print_translation : (string * (term list -> term)) list 510 val typed_print_translation : 511 (string * (bool -> typ -> term list -> term)) list 512 val print_ast_translation : (string * (ast list -> ast)) list 513 val token_translation : 514 (string * string * (string -> string * real)) list 515 \end{ttbox} 516 See \cite[\S8]{isabelle-ref} for more information on syntax transformations. 517 518 519 \subsection{Oracles} 520 521 \indexisarcmd{oracle} 522 \begin{matharray}{rcl} 523 \isarcmd{oracle} & : & \isartrans{theory}{theory} \\ 524 \end{matharray} 525 526 Oracles provide an interface to external reasoning systems, without giving up 527 control completely --- each theorem carries a derivation object recording any 528 oracle invocation. See \cite[\S6]{isabelle-ref} for more information. 529 530 \begin{rail} 531 'oracle' name '=' text 532 ; 533 \end{rail} 534 535 \begin{descr} 536 \item [$\isarkeyword{oracle}~name=text$] declares oracle $name$ to be ML 537 function $text$, which has to be of type 538 \texttt{Sign.sg~*~Object.T~->~term}. 539 \end{descr} 540 541 542 \section{Proof commands} 543 544 Proof commands perform transitions of Isar/VM machine configurations, which 545 are block-structured, consisting of a stack of nodes with three main 546 components: logical proof context, current facts, and open goals. 
Isar/VM 547 transitions are \emph{typed} according to the following three different modes 548 of operation: 549 \begin{descr} 550 \item [$proof(prove)$] means that a new goal has just been stated that is now 551 to be \emph{proven}; the next command may refine it by some proof method, 552 and enter a sub-proof to establish the actual result. 553 \item [$proof(state)$] is like a nested theory mode: the context may be 554 augmented by \emph{stating} additional assumptions, intermediate results 555 etc. 556 \item [$proof(chain)$] is intermediate between $proof(state)$ and 557 $proof(prove)$: existing facts (i.e.\ the contents of the special $this$'' 558 register) have been just picked up in order to be used when refining the 559 goal claimed next. 560 \end{descr} 561 562 The proof mode indicator may be read as a verb telling the writer what kind of 563 operation may be performed next. The corresponding typings of proof commands 564 restricts the shape of well-formed proof texts to particular command 565 sequences. So dynamic arrangements of commands eventually turn out as static 566 texts. Appendix~\ref{ap:refcard} gives a simplified grammar of the overall 567 (extensible) language emerging that way. 568 569 570 \subsection{Markup commands}\label{sec:markup-prf} 571 572 \indexisarcmd{sect}\indexisarcmd{subsect}\indexisarcmd{subsubsect} 573 \indexisarcmd{txt}\indexisarcmd{txt-raw} 574 \begin{matharray}{rcl} 575 \isarcmd{sect} & : & \isartrans{proof}{proof} \\ 576 \isarcmd{subsect} & : & \isartrans{proof}{proof} \\ 577 \isarcmd{subsubsect} & : & \isartrans{proof}{proof} \\ 578 \isarcmd{txt} & : & \isartrans{proof}{proof} \\ 579 \isarcmd{txt_raw} & : & \isartrans{proof}{proof} \\ 580 \end{matharray} 581 582 These markup commands for proof mode closely correspond to the ones of theory 583 mode (see \S\ref{sec:markup-thy}). 584 585 \railalias{txtraw}{txt\_raw} 586 \railterm{txtraw} 587 588 \begin{rail} 589 ('sect' | 'subsect' | 'subsubsect' | 'txt' | txtraw) text 590 ; 591 \end{rail} 592 593 594 \subsection{Context elements}\label{sec:proof-context} 595 596 \indexisarcmd{fix}\indexisarcmd{assume}\indexisarcmd{presume}\indexisarcmd{def} 597 \begin{matharray}{rcl} 598 \isarcmd{fix} & : & \isartrans{proof(state)}{proof(state)} \\ 599 \isarcmd{assume} & : & \isartrans{proof(state)}{proof(state)} \\ 600 \isarcmd{presume} & : & \isartrans{proof(state)}{proof(state)} \\ 601 \isarcmd{def} & : & \isartrans{proof(state)}{proof(state)} \\ 602 \end{matharray} 603 604 The logical proof context consists of fixed variables and assumptions. The 605 former closely correspond to Skolem constants, or meta-level universal 606 quantification as provided by the Isabelle/Pure logical framework. 607 Introducing some \emph{arbitrary, but fixed} variable via $\FIX x$ results in 608 a local value that may be used in the subsequent proof as any other variable 609 or constant. Furthermore, any result $\edrv \phi[x]$ exported from the 610 context will be universally closed wrt.\ $x$ at the outermost level: $\edrv 611 \All x \phi$ (this is expressed using Isabelle's meta-variables). 612 613 Similarly, introducing some assumption $\chi$ has two effects. On the one 614 hand, a local theorem is created that may be used as a fact in subsequent 615 proof steps. On the other hand, any result $\chi \drv \phi$ exported from the 616 context becomes conditional wrt.\ the assumption: $\edrv \chi \Imp \phi$. 617 Thus, solving an enclosing goal using such a result would basically introduce 618 a new subgoal stemming from the assumption. 
How this situation is handled 619 depends on the actual version of assumption command used: while $\ASSUMENAME$ 620 insists on solving the subgoal by unification with some premise of the goal, 621 $\PRESUMENAME$ leaves the subgoal unchanged in order to be proved later by the 622 user. 623 624 Local definitions, introduced by $\DEF{}{x \equiv t}$, are achieved by 625 combining $\FIX x$ with another version of assumption that causes any 626 hypothetical equation $x \equiv t$ to be eliminated by the reflexivity rule. 627 Thus, exporting some result $x \equiv t \drv \phi[x]$ yields $\edrv \phi[t]$. 628 629 \railalias{equiv}{\isasymequiv} 630 \railterm{equiv} 631 632 \begin{rail} 633 'fix' (vars + 'and') 634 ; 635 ('assume' | 'presume') (props + 'and') 636 ; 637 'def' thmdecl? \\ name ('==' | equiv) term termpat? 638 ; 639 \end{rail} 640 641 \begin{descr} 642 \item [$\FIX{\vec x}$] introduces local \emph{arbitrary, but fixed} variables 643 $\vec x$. 644 \item [$\ASSUME{a}{\vec\phi}$ and $\PRESUME{a}{\vec\phi}$] introduce local 645 theorems $\vec\phi$ by assumption. Subsequent results applied to an 646 enclosing goal (e.g.\ by $\SHOWNAME$) are handled as follows: $\ASSUMENAME$ 647 expects to be able to unify with existing premises in the goal, while 648 $\PRESUMENAME$ leaves $\vec\phi$ as new subgoals. 649 650 Several lists of assumptions may be given (separated by 651 $\isarkeyword{and}$); the resulting list of current facts consists of all of 652 these concatenated. 653 \item [$\DEF{a}{x \equiv t}$] introduces a local (non-polymorphic) definition. 654 In results exported from the context, $x$ is replaced by $t$. Basically, 655 $\DEF{}{x \equiv t}$ abbreviates $\FIX{x}~\ASSUME{}{x \equiv t}$, with the 656 resulting hypothetical equation solved by reflexivity. 657 658 The default name for the definitional equation is $x_def$. 659 \end{descr} 660 661 The special name $prems$\indexisarthm{prems} refers to all assumptions of the 662 current context as a list of theorems. 663 664 665 \subsection{Facts and forward chaining} 666 667 \indexisarcmd{note}\indexisarcmd{then}\indexisarcmd{from}\indexisarcmd{with} 668 \indexisarcmd{using} 669 \begin{matharray}{rcl} 670 \isarcmd{note} & : & \isartrans{proof(state)}{proof(state)} \\ 671 \isarcmd{then} & : & \isartrans{proof(state)}{proof(chain)} \\ 672 \isarcmd{from} & : & \isartrans{proof(state)}{proof(chain)} \\ 673 \isarcmd{with} & : & \isartrans{proof(state)}{proof(chain)} \\ 674 \isarcmd{using} & : & \isartrans{proof(prove)}{proof(prove)} \\ 675 \end{matharray} 676 677 New facts are established either by assumption or proof of local statements. 678 Any fact will usually be involved in further proofs, either as explicit 679 arguments of proof methods, or when forward chaining towards the next goal via 680 $\THEN$ (and variants); $\FROMNAME$ and $\WITHNAME$ are composite forms 681 involving $\NOTE$. The $\USINGNAME$ elements allows to augment the collection 682 of used facts \emph{after} a goal has been stated. Note that the special 683 theorem name $this$\indexisarthm{this} refers to the most recently established 684 facts, but only \emph{before} issuing a follow-up claim. 685 686 \begin{rail} 687 'note' (thmdef? thmrefs + 'and') 688 ; 689 ('from' | 'with' | 'using') (thmrefs + 'and') 690 ; 691 \end{rail} 692 693 \begin{descr} 694 \item [$\NOTE{a}{\vec b}$] recalls existing facts $\vec b$, binding the result 695 as $a$. Note that attributes may be involved as well, both on the left and 696 right hand sides. 
697 \item [$\THEN$] indicates forward chaining by the current facts in order to 698 establish the goal to be claimed next. The initial proof method invoked to 699 refine that will be offered the facts to do anything appropriate'' (cf.\ 700 also \S\ref{sec:proof-steps}). For example, method $rule$ (see 701 \S\ref{sec:pure-meth-att}) would typically do an elimination rather than an 702 introduction. Automatic methods usually insert the facts into the goal 703 state before operation. This provides a simple scheme to control relevance 704 of facts in automated proof search. 705 \item [$\FROM{\vec b}$] abbreviates $\NOTE{}{\vec b}~\THEN$; thus $\THEN$ is 706 equivalent to $\FROM{this}$. 707 \item [$\WITH{\vec b}$] abbreviates $\FROM{\vec b~this}$; thus the forward 708 chaining is from earlier facts together with the current ones. 709 \item [$\USING{\vec b}$] augments the facts being currently indicated for use 710 in a subsequent refinement step (such as $\APPLYNAME$ or $\PROOFNAME$). 711 \end{descr} 712 713 Forward chaining with an empty list of theorems is the same as not chaining. 714 Thus $\FROM{nothing}$ has no effect apart from entering $prove(chain)$ mode, 715 since $nothing$\indexisarthm{nothing} is bound to the empty list of theorems. 716 717 Basic proof methods (such as $rule$) expect multiple facts to be given in 718 their proper order, corresponding to a prefix of the premises of the rule 719 involved. Note that positions may be easily skipped using something like 720 $\FROM{\Text{\texttt{_}}~a~b}$, for example. This involves the trivial rule 721 $\PROP\psi \Imp \PROP\psi$, which happens to be bound in Isabelle/Pure as 722 \texttt{_}'' (underscore).\indexisarthm{_@\texttt{_}} 723 724 Automated methods (such as $simp$ or $auto$) just insert any given facts 725 before their usual operation. Depending on the kind of procedure involved, 726 the order of facts is less significant here. 727 728 729 \subsection{Goal statements}\label{sec:goals} 730 731 \indexisarcmd{lemma}\indexisarcmd{theorem}\indexisarcmd{corollary} 732 \indexisarcmd{have}\indexisarcmd{show}\indexisarcmd{hence}\indexisarcmd{thus} 733 \begin{matharray}{rcl} 734 \isarcmd{lemma} & : & \isartrans{theory}{proof(prove)} \\ 735 \isarcmd{theorem} & : & \isartrans{theory}{proof(prove)} \\ 736 \isarcmd{corollary} & : & \isartrans{theory}{proof(prove)} \\ 737 \isarcmd{have} & : & \isartrans{proof(state) ~|~ proof(chain)}{proof(prove)} \\ 738 \isarcmd{show} & : & \isartrans{proof(state) ~|~ proof(chain)}{proof(prove)} \\ 739 \isarcmd{hence} & : & \isartrans{proof(state)}{proof(prove)} \\ 740 \isarcmd{thus} & : & \isartrans{proof(state)}{proof(prove)} \\ 741 \end{matharray} 742 743 From a theory context, proof mode is entered by an initial goal command such 744 as $\LEMMANAME$, $\THEOREMNAME$, $\COROLLARYNAME$. Within a proof, new claims 745 may be introduced locally as well; four variants are available here to 746 indicate whether forward chaining of facts should be performed initially (via 747 $\THEN$), and whether the emerging result is meant to solve some pending goal. 748 749 Goals may consist of multiple statements, resulting in a list of facts 750 eventually. 
A pending multi-goal is internally represented as a meta-level 751 conjunction (printed as \verb,&&,), which is automatically split into the 752 corresponding number of sub-goals prior to any initial method application, via 753 $\PROOFNAME$ (\S\ref{sec:proof-steps}) or $\APPLYNAME$ 754 (\S\ref{sec:tactic-commands}).\footnote{The $induct$ method covered in 755 \S\ref{sec:cases-induct-meth} acts on multiple claims simultaneously.} 756 757 Claims at the theory level may be either in short or long form. A short goal 758 merely consists of several simultaneous propositions (often just one). A long 759 goal includes an explicit context specification for the subsequent 760 conclusions, involving local parameters; here the role of each part of the 761 statement is explicitly marked by separate keywords (see also 762 \S\ref{sec:locale}). 763 764 \begin{rail} 765 ('lemma' | 'theorem' | 'corollary') locale? (goal | longgoal) 766 ; 767 ('have' | 'show' | 'hence' | 'thus') goal 768 ; 769 770 goal: (props + 'and') 771 ; 772 longgoal: thmdecl? (contextelem *) 'shows' goal 773 ; 774 \end{rail} 775 776 \begin{descr} 777 \item [$\LEMMA{a}{\vec\phi}$] enters proof mode with $\vec\phi$ as main goal, 778 eventually resulting in some fact $\turn \vec\phi$ to be put back into the 779 theory context, and optionally into the specified locale, cf.\ 780 \S\ref{sec:locale}. An additional \railnonterm{context} specification may 781 build an initial proof context for the subsequent claim; this may include 782 local definitions and syntax as well, see the definition of $contextelem$ in 783 \S\ref{sec:locale}. 784 785 \item [$\THEOREM{a}{\vec\phi}$ and $\COROLLARY{a}{\vec\phi}$] are essentially 786 the same as $\LEMMA{a}{\vec\phi}$, but the facts are internally marked as 787 being of a different kind. This discrimination acts like a formal comment. 788 789 \item [$\HAVE{a}{\vec\phi}$] claims a local goal, eventually resulting in a 790 fact within the current logical context. This operation is completely 791 independent of any pending sub-goals of an enclosing goal statements, so 792 $\HAVENAME$ may be freely used for experimental exploration of potential 793 results within a proof body. 794 795 \item [$\SHOW{a}{\vec\phi}$] is like $\HAVE{a}{\vec\phi}$ plus a second stage 796 to refine some pending sub-goal for each one of the finished result, after 797 having been exported into the corresponding context (at the head of the 798 sub-proof that the $\SHOWNAME$ command belongs to). 799 800 To accommodate interactive debugging, resulting rules are printed before 801 being applied internally. Even more, interactive execution of $\SHOWNAME$ 802 predicts potential failure after finishing its proof, and displays the 803 resulting error message as a warning beforehand, adding this header: 804 805 \begin{ttbox} 806 Problem! Local statement will fail to solve any pending goal 807 \end{ttbox} 808 809 \item [$\HENCENAME$] abbreviates $\THEN~\HAVENAME$, i.e.\ claims a local goal 810 to be proven by forward chaining the current facts. Note that $\HENCENAME$ 811 is also equivalent to $\FROM{this}~\HAVENAME$. 812 \item [$\THUSNAME$] abbreviates $\THEN~\SHOWNAME$. Note that $\THUSNAME$ is 813 also equivalent to $\FROM{this}~\SHOWNAME$. 814 \end{descr} 815 816 Any goal statement causes some term abbreviations (such as $\Var{thesis}$, 817 $\dots$) to be bound automatically, see also \S\ref{sec:term-abbrev}. 
818 Furthermore, the local context of a (non-atomic) goal is provided via the 819 $rule_context$\indexisarcase{rule-context} case, see also 820 \S\ref{sec:rule-cases}. 821 822 \medskip 823 824 \begin{warn} 825 Isabelle/Isar suffers theory-level goal statements to contain \emph{unbound 826 schematic variables}, although this does not conform to the aim of 827 human-readable proof documents! The main problem with schematic goals is 828 that the actual outcome is usually hard to predict, depending on the 829 behavior of the actual proof methods applied during the reasoning. Note 830 that most semi-automated methods heavily depend on several kinds of implicit 831 rule declarations within the current theory context. As this would also 832 result in non-compositional checking of sub-proofs, \emph{local goals} are 833 not allowed to be schematic at all. Nevertheless, schematic goals do have 834 their use in Prolog-style interactive synthesis of proven results, usually 835 by stepwise refinement via emulation of traditional Isabelle tactic scripts 836 (see also \S\ref{sec:tactic-commands}). In any case, users should know what 837 they are doing. 838 \end{warn} 839 840 841 \subsection{Initial and terminal proof steps}\label{sec:proof-steps} 842 843 \indexisarcmd{proof}\indexisarcmd{qed}\indexisarcmd{by} 844 \indexisarcmd{.}\indexisarcmd{..}\indexisarcmd{sorry} 845 \begin{matharray}{rcl} 846 \isarcmd{proof} & : & \isartrans{proof(prove)}{proof(state)} \\ 847 \isarcmd{qed} & : & \isartrans{proof(state)}{proof(state) ~|~ theory} \\ 848 \isarcmd{by} & : & \isartrans{proof(prove)}{proof(state) ~|~ theory} \\ 849 \isarcmd{.\,.} & : & \isartrans{proof(prove)}{proof(state) ~|~ theory} \\ 850 \isarcmd{.} & : & \isartrans{proof(prove)}{proof(state) ~|~ theory} \\ 851 \isarcmd{sorry} & : & \isartrans{proof(prove)}{proof(state) ~|~ theory} \\ 852 \end{matharray} 853 854 Arbitrary goal refinement via tactics is considered harmful. Properly, the 855 Isar framework admits proof methods to be invoked in two places only. 856 \begin{enumerate} 857 \item An \emph{initial} refinement step $\PROOF{m@1}$ reduces a newly stated 858 goal to a number of sub-goals that are to be solved later. Facts are passed 859 to $m@1$ for forward chaining, if so indicated by $proof(chain)$ mode. 860 861 \item A \emph{terminal} conclusion step $\QED{m@2}$ is intended to solve 862 remaining goals. No facts are passed to $m@2$. 863 \end{enumerate} 864 865 The only other proper way to affect pending goals in a proof body is by 866 $\SHOWNAME$, which involves an explicit statement of what is to be solved 867 eventually. Thus we avoid the fundamental problem of unstructured tactic 868 scripts that consist of numerous consecutive goal transformations, with 869 invisible effects. 870 871 \medskip 872 873 As a general rule of thumb for good proof style, initial proof methods should 874 either solve the goal completely, or constitute some well-understood reduction 875 to new sub-goals. Arbitrary automatic proof tools that are prone leave a 876 large number of badly structured sub-goals are no help in continuing the proof 877 document in any intelligible way. 878 879 Unless given explicitly by the user, the default initial method is $rule$'', 880 which applies a single standard elimination or introduction rule according to 881 the topmost symbol involved. There is no separate default terminal method. 882 Any remaining goals are always solved by assumption in the very last step. 883 884 \begin{rail} 885 'proof' method? 886 ; 887 'qed' method? 
888 ; 889 'by' method method? 890 ; 891 ('.' | '..' | 'sorry') 892 ; 893 \end{rail} 894 895 \begin{descr} 896 \item [$\PROOF{m@1}$] refines the goal by proof method $m@1$; facts for 897 forward chaining are passed if so indicated by $proof(chain)$ mode. 898 \item [$\QED{m@2}$] refines any remaining goals by proof method $m@2$ and 899 concludes the sub-proof by assumption. If the goal had been $\SHOWNAME$ (or 900 $\THUSNAME$), some pending sub-goal is solved as well by the rule resulting 901 from the result \emph{exported} into the enclosing goal context. Thus 902 $\QEDNAME$ may fail for two reasons: either $m@2$ fails, or the resulting 903 rule does not fit to any pending goal\footnote{This includes any additional 904 strong'' assumptions as introduced by $\ASSUMENAME$.} of the enclosing 905 context. Debugging such a situation might involve temporarily changing 906 $\SHOWNAME$ into $\HAVENAME$, or weakening the local context by replacing 907 some occurrences of $\ASSUMENAME$ by $\PRESUMENAME$. 908 \item [$\BYY{m@1}{m@2}$] is a \emph{terminal proof}\index{proof!terminal}; it 909 abbreviates $\PROOF{m@1}~\QED{m@2}$, with backtracking across both methods, 910 though. Debugging an unsuccessful $\BYY{m@1}{m@2}$ commands might be done 911 by expanding its definition; in many cases $\PROOF{m@1}$ is already 912 sufficient to see what is going wrong. 913 \item [$\DDOT$''] is a \emph{default proof}\index{proof!default}; it 914 abbreviates $\BY{rule}$. 915 \item [$\DOT$''] is a \emph{trivial proof}\index{proof!trivial}; it 916 abbreviates $\BY{this}$. 917 \item [$\SORRY$] is a \emph{fake proof}\index{proof!fake} pretending to solve 918 the pending claim without further ado. This only works in interactive 919 development, or if the \texttt{quick_and_dirty} flag is enabled. Certainly, 920 any facts emerging from fake proofs are not the real thing. Internally, 921 each theorem container is tainted by an oracle invocation, which is 922 indicated as $[!]$'' in the printed result. 923 924 The most important application of $\SORRY$ is to support experimentation and 925 top-down proof development in a simple manner. 926 \end{descr} 927 928 929 \subsection{Fundamental methods and attributes}\label{sec:pure-meth-att} 930 931 The following proof methods and attributes refer to basic logical operations 932 of Isar. Further methods and attributes are provided by several generic and 933 object-logic specific tools and packages (see chapters \ref{ch:gen-tools} and 934 \ref{ch:logics}). 935 936 \indexisarmeth{assumption}\indexisarmeth{this}\indexisarmeth{rule}\indexisarmeth{$-$} 937 \indexisaratt{OF}\indexisaratt{of} 938 \indexisarattof{Pure}{intro}\indexisarattof{Pure}{elim} 939 \indexisarattof{Pure}{dest}\indexisarattof{Pure}{rule} 940 \begin{matharray}{rcl} 941 assumption & : & \isarmeth \\ 942 this & : & \isarmeth \\ 943 rule & : & \isarmeth \\ 944 - & : & \isarmeth \\ 945 OF & : & \isaratt \\ 946 of & : & \isaratt \\ 947 intro & : & \isaratt \\ 948 elim & : & \isaratt \\ 949 dest & : & \isaratt \\ 950 rule & : & \isaratt \\ 951 \end{matharray} 952 953 %FIXME intro!, intro, intro? 954 955 \begin{rail} 956 'rule' thmrefs? 957 ; 958 'OF' thmrefs 959 ; 960 'of' insts ('concl' ':' insts)? 961 ; 962 'rule' 'del' 963 ; 964 \end{rail} 965 966 \begin{descr} 967 \item [$assumption$] solves some goal by a single assumption step. Any facts 968 given (${} \le 1$) are guaranteed to participate in the refinement. 
Recall 969 that $\QEDNAME$ (see \S\ref{sec:proof-steps}) already concludes any 970 remaining sub-goals by assumption. 971 \item [$this$] applies all of the current facts directly as rules. Recall 972 that $\DOT$'' (dot) abbreviates $\BY{this}$. 973 \item [$rule~\vec a$] applies some rule given as argument in backward manner; 974 facts are used to reduce the rule before applying it to the goal. Thus 975 $rule$ without facts is plain \emph{introduction}, while with facts it 976 becomes \emph{elimination}. 977 978 When no arguments are given, the $rule$ method tries to pick appropriate 979 rules automatically, as declared in the current context using the $intro$, 980 $elim$, $dest$ attributes (see below). This is the default behavior of 981 $\PROOFNAME$ and $\DDOT$'' (double-dot) steps (see 982 \S\ref{sec:proof-steps}). 983 \item [$-$''] does nothing but insert the forward chaining facts as premises 984 into the goal. Note that command $\PROOFNAME$ without any method actually 985 performs a single reduction step using the $rule$ method; thus a plain 986 \emph{do-nothing} proof step would be $\PROOF{-}$ rather than $\PROOFNAME$ 987 alone. 988 \item [$OF~\vec a$] applies some theorem to given rules $\vec a$ (in 989 parallel). This corresponds to the \texttt{MRS} operator in ML 990 \cite[\S5]{isabelle-ref}, but note the reversed order. Positions may be 991 skipped by including $\_$'' (underscore) as argument. 992 \item [$of~\vec t$] performs positional instantiation. The terms $\vec t$ are 993 substituted for any schematic variables occurring in a theorem from left to 994 right; \texttt{_}'' (underscore) indicates to skip a position. Arguments 995 following a $concl\colon$'' specification refer to positions of the 996 conclusion of a rule. 997 \item [$intro$, $elim$, and $dest$] declare introduction, elimination, and 998 destruct rules, respectively. Note that the classical reasoner (see 999 \S\ref{sec:classical-basic}) introduces different versions of these 1000 attributes, and the $rule$ method, too. In object-logics with classical 1001 reasoning enabled, the latter version should be used all the time to avoid 1002 confusion! 1003 \item [$rule~del$] undeclares introduction, elimination, or destruct rules. 1004 \end{descr} 1005 1006 1007 \subsection{Term abbreviations}\label{sec:term-abbrev} 1008 1009 \indexisarcmd{let} 1010 \begin{matharray}{rcl} 1011 \isarcmd{let} & : & \isartrans{proof(state)}{proof(state)} \\ 1012 \isarkeyword{is} & : & syntax \\ 1013 \end{matharray} 1014 1015 Abbreviations may be either bound by explicit $\LET{p \equiv t}$ statements, 1016 or by annotating assumptions or goal statements with a list of patterns 1017 $\ISS{p@1\;\dots}{p@n}$. In both cases, higher-order matching is invoked to 1018 bind extra-logical term variables, which may be either named schematic 1019 variables of the form $\Var{x}$, or nameless dummies \texttt{_}'' 1020 (underscore).\indexisarvar{_@\texttt{_}} Note that in the $\LETNAME$ form the 1021 patterns occur on the left-hand side, while the $\ISNAME$ patterns are in 1022 postfix position. 1023 1024 Polymorphism of term bindings is handled in Hindley-Milner style, similar to 1025 ML. Type variables referring to local assumptions or open goal statements are 1026 \emph{fixed}, while those of finished results or bound by $\LETNAME$ may occur 1027 in \emph{arbitrary} instances later. 
Even though actual polymorphism should 1028 be rarely used in practice, this mechanism is essential to achieve proper 1029 incremental type-inference, as the user proceeds to build up the Isar proof 1030 text. 1031 1032 \medskip 1033 1034 Term abbreviations are quite different from actual local definitions as 1035 introduced via $\DEFNAME$ (see \S\ref{sec:proof-context}). The latter are 1036 visible within the logic as actual equations, while abbreviations disappear 1037 during the input process just after type checking. Also note that $\DEFNAME$ 1038 does not support polymorphism. 1039 1040 \begin{rail} 1041 'let' ((term + 'and') '=' term + 'and') 1042 ; 1043 \end{rail} 1044 1045 The syntax of $\ISNAME$ patterns follows \railnonterm{termpat} or 1046 \railnonterm{proppat} (see \S\ref{sec:term-decls}). 1047 1048 \begin{descr} 1049 \item [$\LET{\vec p = \vec t}$] binds any text variables in patters $\vec p$ 1050 by simultaneous higher-order matching against terms $\vec t$. 1051 \item [$\IS{\vec p}$] resembles $\LETNAME$, but matches $\vec p$ against the 1052 preceding statement. Also note that $\ISNAME$ is not a separate command, 1053 but part of others (such as $\ASSUMENAME$, $\HAVENAME$ etc.). 1054 \end{descr} 1055 1056 Some \emph{automatic} term abbreviations\index{term abbreviations} for goals 1057 and facts are available as well. For any open goal, 1058 $\Var{thesis}$\indexisarvar{thesis} refers to its object-level statement, 1059 abstracted over any meta-level parameters (if present). Likewise, 1060 $\Var{this}$\indexisarvar{this} is bound for fact statements resulting from 1061 assumptions or finished goals. In case $\Var{this}$ refers to an object-logic 1062 statement that is an application $f(t)$, then $t$ is bound to the special text 1063 variable $\dots$''\indexisarvar{\dots} (three dots). The canonical 1064 application of the latter are calculational proofs (see 1065 \S\ref{sec:calculation}). 1066 1067 1068 \subsection{Block structure} 1069 1070 \indexisarcmd{next}\indexisarcmd{\{}\indexisarcmd{\}} 1071 \begin{matharray}{rcl} 1072 \NEXT & : & \isartrans{proof(state)}{proof(state)} \\ 1073 \BG & : & \isartrans{proof(state)}{proof(state)} \\ 1074 \EN & : & \isartrans{proof(state)}{proof(state)} \\ 1075 \end{matharray} 1076 1077 While Isar is inherently block-structured, opening and closing blocks is 1078 mostly handled rather casually, with little explicit user-intervention. Any 1079 local goal statement automatically opens \emph{two} blocks, which are closed 1080 again when concluding the sub-proof (by $\QEDNAME$ etc.). Sections of 1081 different context within a sub-proof may be switched via $\NEXT$, which is 1082 just a single block-close followed by block-open again. Thus the effect of 1083 $\NEXT$ to reset the local proof context. There is no goal focus involved 1084 here! 1085 1086 For slightly more advanced applications, there are explicit block parentheses 1087 as well. These typically achieve a stronger forward style of reasoning. 1088 1089 \begin{descr} 1090 \item [$\NEXT$] switches to a fresh block within a sub-proof, resetting the 1091 local context to the initial one. 1092 \item [$\BG$ and $\EN$] explicitly open and close blocks. Any current facts 1093 pass through $\BG$'' unchanged, while $\EN$'' causes any result to be 1094 \emph{exported} into the enclosing context. Thus fixed variables are 1095 generalized, assumptions discharged, and local definitions unfolded (cf.\ 1096 \S\ref{sec:proof-context}). 
There is no difference of $\ASSUMENAME$ and 1097 $\PRESUMENAME$ in this mode of forward reasoning --- in contrast to plain 1098 backward reasoning with the result exported at $\SHOWNAME$ time. 1099 \end{descr} 1100 1101 1102 \subsection{Emulating tactic scripts}\label{sec:tactic-commands} 1103 1104 The Isar provides separate commands to accommodate tactic-style proof scripts 1105 within the same system. While being outside the orthodox Isar proof language, 1106 these might come in handy for interactive exploration and debugging, or even 1107 actual tactical proof within new-style theories (to benefit from document 1108 preparation, for example). See also \S\ref{sec:tactics} for actual tactics, 1109 that have been encapsulated as proof methods. Proper proof methods may be 1110 used in scripts, too. 1111 1112 \indexisarcmd{apply}\indexisarcmd{apply-end}\indexisarcmd{done} 1113 \indexisarcmd{defer}\indexisarcmd{prefer}\indexisarcmd{back} 1114 \indexisarcmd{declare} 1115 \begin{matharray}{rcl} 1116 \isarcmd{apply}^* & : & \isartrans{proof(prove)}{proof(prove)} \\ 1117 \isarcmd{apply_end}^* & : & \isartrans{proof(state)}{proof(state)} \\ 1118 \isarcmd{done}^* & : & \isartrans{proof(prove)}{proof(state)} \\ 1119 \isarcmd{defer}^* & : & \isartrans{proof}{proof} \\ 1120 \isarcmd{prefer}^* & : & \isartrans{proof}{proof} \\ 1121 \isarcmd{back}^* & : & \isartrans{proof}{proof} \\ 1122 \isarcmd{declare}^* & : & \isartrans{theory}{theory} \\ 1123 \end{matharray} 1124 1125 \railalias{applyend}{apply\_end} 1126 \railterm{applyend} 1127 1128 \begin{rail} 1129 ( 'apply' | applyend ) method 1130 ; 1131 'defer' nat? 1132 ; 1133 'prefer' nat 1134 ; 1135 'declare' locale? (thmrefs + 'and') 1136 ; 1137 \end{rail} 1138 1139 \begin{descr} 1140 \item [$\APPLY{m}$] applies proof method $m$ in initial position, but unlike 1141 $\PROOFNAME$ it retains $proof(prove)$'' mode. Thus consecutive method 1142 applications may be given just as in tactic scripts. 1143 1144 Facts are passed to $m$ as indicated by the goal's forward-chain mode, and 1145 are \emph{consumed} afterwards. Thus any further $\APPLYNAME$ command would 1146 always work in a purely backward manner. 1147 1148 \item [$\isarkeyword{apply_end}~(m)$] applies proof method $m$ as if in 1149 terminal position. Basically, this simulates a multi-step tactic script for 1150 $\QEDNAME$, but may be given anywhere within the proof body. 1151 1152 No facts are passed to $m$. Furthermore, the static context is that of the 1153 enclosing goal (as for actual $\QEDNAME$). Thus the proof method may not 1154 refer to any assumptions introduced in the current body, for example. 1155 1156 \item [$\isarkeyword{done}$] completes a proof script, provided that the 1157 current goal state is already solved completely. Note that actual 1158 structured proof commands (e.g.\ $\DOT$'' or $\SORRY$) may be used to 1159 conclude proof scripts as well. 1160 1161 \item [$\isarkeyword{defer}~n$ and $\isarkeyword{prefer}~n$] shuffle the list 1162 of pending goals: $defer$ puts off goal $n$ to the end of the list ($n = 1$ 1163 by default), while $prefer$ brings goal $n$ to the top. 1164 1165 \item [$\isarkeyword{back}$] does back-tracking over the result sequence of 1166 the latest proof command.\footnote{Unlike the ML function \texttt{back} 1167 \cite{isabelle-ref}, the Isar command does not search upwards for further 1168 branch points.} Basically, any proof command may return multiple results. 
1169 1170 \item [$\isarkeyword{declare}~thms$] declares theorems to the current theory 1171 context (or the specified locale, see also \S\ref{sec:locale}). No theorem 1172 binding is involved here, unlike $\isarkeyword{theorems}$ or 1173 $\isarkeyword{lemmas}$ (cf.\ \S\ref{sec:axms-thms}), so 1174 $\isarkeyword{declare}$ only has the effect of applying attributes as 1175 included in the theorem specification. 1176 \end{descr} 1177 1178 Any proper Isar proof method may be used with tactic script commands such as 1179 $\APPLYNAME$. A few additional emulations of actual tactics are provided as 1180 well; these would be never used in actual structured proofs, of course. 1181 1182 1183 \subsection{Meta-linguistic features} 1184 1185 \indexisarcmd{oops} 1186 \begin{matharray}{rcl} 1187 \isarcmd{oops} & : & \isartrans{proof}{theory} \\ 1188 \end{matharray} 1189 1190 The $\OOPS$ command discontinues the current proof attempt, while considering 1191 the partial proof text as properly processed. This is conceptually quite 1192 different from faking'' actual proofs via $\SORRY$ (see 1193 \S\ref{sec:proof-steps}): $\OOPS$ does not observe the proof structure at all, 1194 but goes back right to the theory level. Furthermore, $\OOPS$ does not 1195 produce any result theorem --- there is no claim to be able to complete the 1196 proof anyhow. 1197 1198 A typical application of $\OOPS$ is to explain Isar proofs \emph{within} the 1199 system itself, in conjunction with the document preparation tools of Isabelle 1200 described in \cite{isabelle-sys}. Thus partial or even wrong proof attempts 1201 can be discussed in a logically sound manner. Note that the Isabelle {\LaTeX} 1202 macros can be easily adapted to print something like $\dots$'' instead of an 1203 $\OOPS$'' keyword. 1204 1205 \medskip The $\OOPS$ command is undo-able, unlike $\isarkeyword{kill}$ (see 1206 \S\ref{sec:history}). The effect is to get back to the theory \emph{before} 1207 the opening of the proof. 1208 1209 1210 \section{Other commands} 1211 1212 \subsection{Diagnostics} 1213 1214 \indexisarcmd{pr}\indexisarcmd{thm}\indexisarcmd{term} 1215 \indexisarcmd{prop}\indexisarcmd{typ} 1216 \begin{matharray}{rcl} 1217 \isarcmd{pr}^* & : & \isarkeep{\cdot} \\ 1218 \isarcmd{thm}^* & : & \isarkeep{theory~|~proof} \\ 1219 \isarcmd{term}^* & : & \isarkeep{theory~|~proof} \\ 1220 \isarcmd{prop}^* & : & \isarkeep{theory~|~proof} \\ 1221 \isarcmd{typ}^* & : & \isarkeep{theory~|~proof} \\ 1222 \end{matharray} 1223 1224 These diagnostic commands assist interactive development. Note that $undo$ 1225 does not apply here, the theory or proof configuration is not changed. 1226 1227 \begin{rail} 1228 'pr' modes? nat? (',' nat)? 1229 ; 1230 'thm' modes? thmrefs 1231 ; 1232 'term' modes? term 1233 ; 1234 'prop' modes? prop 1235 ; 1236 'typ' modes? type 1237 ; 1238 1239 modes: '(' (name + ) ')' 1240 ; 1241 \end{rail} 1242 1243 \begin{descr} 1244 \item [$\isarkeyword{pr}~goals, prems$] prints the current proof state (if 1245 present), including the proof context, current facts and goals. The 1246 optional limit arguments affect the number of goals and premises to be 1247 displayed, which is initially 10 for both. Omitting limit values leaves the 1248 current setting unchanged. 1249 \item [$\isarkeyword{thm}~\vec a$] retrieves theorems from the current theory 1250 or proof context. 
Note that any attributes included in the theorem 1251 specifications are applied to a temporary context derived from the current 1252 theory or proof; the result is discarded, i.e.\ attributes involved in $\vec 1253 a$ do not have any permanent effect. 1254 \item [$\isarkeyword{term}~t$ and $\isarkeyword{prop}~\phi$] read, type-check 1255 and print terms or propositions according to the current theory or proof 1256 context; the inferred type of $t$ is output as well. Note that these 1257 commands are also useful in inspecting the current environment of term 1258 abbreviations. 1259 \item [$\isarkeyword{typ}~\tau$] reads and prints types of the meta-logic 1260 according to the current theory or proof context. 1261 \end{descr} 1262 1263 All of the diagnostic commands above admit a list of $modes$ to be specified, 1264 which is appended to the current print mode (see also \cite{isabelle-ref}). 1265 Thus the output behavior may be modified according particular print mode 1266 features. For example, $\isarkeyword{pr}~(latex~xsymbols~symbols)$ would 1267 print the current proof state with mathematical symbols and special characters 1268 represented in {\LaTeX} source, according to the Isabelle style 1269 \cite{isabelle-sys}. 1270 1271 Note that antiquotations (cf.\ \S\ref{sec:antiq}) provide a more systematic 1272 way to include formal items into the printed text document. 1273 1274 1275 \subsection{Inspecting the context} 1276 1277 \indexisarcmd{print-facts}\indexisarcmd{print-binds} 1278 \indexisarcmd{print-commands}\indexisarcmd{print-syntax} 1279 \indexisarcmd{print-methods}\indexisarcmd{print-attributes} 1280 \indexisarcmd{thms-containing}\indexisarcmd{thm-deps} 1281 \indexisarcmd{print-theorems} 1282 \begin{matharray}{rcl} 1283 \isarcmd{print_commands}^* & : & \isarkeep{\cdot} \\ 1284 \isarcmd{print_syntax}^* & : & \isarkeep{theory~|~proof} \\ 1285 \isarcmd{print_methods}^* & : & \isarkeep{theory~|~proof} \\ 1286 \isarcmd{print_attributes}^* & : & \isarkeep{theory~|~proof} \\ 1287 \isarcmd{print_theorems}^* & : & \isarkeep{theory~|~proof} \\ 1288 \isarcmd{thms_containing}^* & : & \isarkeep{theory~|~proof} \\ 1289 \isarcmd{thms_deps}^* & : & \isarkeep{theory~|~proof} \\ 1290 \isarcmd{print_facts}^* & : & \isarkeep{proof} \\ 1291 \isarcmd{print_binds}^* & : & \isarkeep{proof} \\ 1292 \end{matharray} 1293 1294 \railalias{thmscontaining}{thms\_containing} 1295 \railterm{thmscontaining} 1296 1297 \railalias{thmdeps}{thm\_deps} 1298 \railterm{thmdeps} 1299 1300 \begin{rail} 1301 thmscontaining (term * ) 1302 ; 1303 thmdeps thmrefs 1304 ; 1305 \end{rail} 1306 1307 These commands print certain parts of the theory and proof context. Note that 1308 there are some further ones available, such as for the set of rules declared 1309 for simplifications. 1310 1311 \begin{descr} 1312 \item [$\isarkeyword{print_commands}$] prints Isabelle's outer theory syntax, 1313 including keywords and command. 1314 \item [$\isarkeyword{print_syntax}$] prints the inner syntax of types and 1315 terms, depending on the current context. The output can be very verbose, 1316 including grammar tables and syntax translation rules. See \cite[\S7, 1317 \S8]{isabelle-ref} for further information on Isabelle's inner syntax. 1318 \item [$\isarkeyword{print_methods}$] prints all proof methods available in 1319 the current theory context. 1320 \item [$\isarkeyword{print_attributes}$] prints all attributes available in 1321 the current theory context. 
1322 \item [$\isarkeyword{print_theorems}$] prints theorems available in the 1323 current theory context. In interactive mode this actually refers to the 1324 theorems left by the last transaction; this allows to inspect the result of 1325 advanced definitional packages, such as $\isarkeyword{datatype}$. 1326 \item [$\isarkeyword{thms_containing}~\vec t$] retrieves theorems from the 1327 theory context containing all of the constants occurring in the terms $\vec 1328 t$. Note that giving the empty list yields \emph{all} theorems of the 1329 current theory. 1330 \item [$\isarkeyword{thm_deps}~\vec a$] visualizes dependencies of facts, 1331 using Isabelle's graph browser tool (see also \cite{isabelle-sys}). 1332 \item [$\isarkeyword{print_facts}$] prints any named facts of the current 1333 context, including assumptions and local results. 1334 \item [$\isarkeyword{print_binds}$] prints all term abbreviations present in 1335 the context. 1336 \end{descr} 1337 1338 1339 \subsection{History commands}\label{sec:history} 1340 1341 \indexisarcmd{undo}\indexisarcmd{redo}\indexisarcmd{kill} 1342 \begin{matharray}{rcl} 1343 \isarcmd{undo}^{{*}{*}} & : & \isarkeep{\cdot} \\ 1344 \isarcmd{redo}^{{*}{*}} & : & \isarkeep{\cdot} \\ 1345 \isarcmd{kill}^{{*}{*}} & : & \isarkeep{\cdot} \\ 1346 \end{matharray} 1347 1348 The Isabelle/Isar top-level maintains a two-stage history, for theory and 1349 proof state transformation. Basically, any command can be undone using 1350 $\isarkeyword{undo}$, excluding mere diagnostic elements. Its effect may be 1351 revoked via $\isarkeyword{redo}$, unless the corresponding 1352 $\isarkeyword{undo}$ step has crossed the beginning of a proof or theory. The 1353 $\isarkeyword{kill}$ command aborts the current history node altogether, 1354 discontinuing a proof or even the whole theory. This operation is \emph{not} 1355 undo-able. 1356 1357 \begin{warn} 1358 History commands should never be used with user interfaces such as 1359 Proof~General \cite{proofgeneral,Aspinall:TACAS:2000}, which takes care of 1360 stepping forth and back itself. Interfering by manual $\isarkeyword{undo}$, 1361 $\isarkeyword{redo}$, or even $\isarkeyword{kill}$ commands would quickly 1362 result in utter confusion. 1363 \end{warn} 1364 1365 1366 \subsection{System operations} 1367 1368 \indexisarcmd{cd}\indexisarcmd{pwd}\indexisarcmd{use-thy}\indexisarcmd{use-thy-only} 1369 \indexisarcmd{update-thy}\indexisarcmd{update-thy-only} 1370 \begin{matharray}{rcl} 1371 \isarcmd{cd}^* & : & \isarkeep{\cdot} \\ 1372 \isarcmd{pwd}^* & : & \isarkeep{\cdot} \\ 1373 \isarcmd{use_thy}^* & : & \isarkeep{\cdot} \\ 1374 \isarcmd{use_thy_only}^* & : & \isarkeep{\cdot} \\ 1375 \isarcmd{update_thy}^* & : & \isarkeep{\cdot} \\ 1376 \isarcmd{update_thy_only}^* & : & \isarkeep{\cdot} \\ 1377 \end{matharray} 1378 1379 \begin{descr} 1380 \item [$\isarkeyword{cd}~name$] changes the current directory of the Isabelle 1381 process. 1382 \item [$\isarkeyword{pwd}~$] prints the current working directory. 1383 \item [$\isarkeyword{use_thy}$, $\isarkeyword{use_thy_only}$, 1384 $\isarkeyword{update_thy}$, $\isarkeyword{update_thy_only}$] load some 1385 theory given as $name$ argument. These commands are basically the same as 1386 the corresponding ML functions\footnote{The ML versions also change the 1387 implicit theory context to that of the theory loaded.} (see also 1388 \cite[\S1,\S6]{isabelle-ref}). Note that both the ML and Isar versions may 1389 load new- and old-style theories alike. 
1390 \end{descr} 1391 1392 These system commands are scarcely used when working with the Proof~General 1393 interface, since loading of theories is done fully transparently. 1394 1395 1396 %%% Local Variables: 1397 %%% mode: latex 1398 %%% TeX-master: "isar-ref" 1399 %%% End:
2020-07-06 03:54:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9956334233283997, "perplexity": 4828.651453968692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890092.28/warc/CC-MAIN-20200706011013-20200706041013-00355.warc.gz"}
https://stats.stackexchange.com/questions/186033/how-do-you-see-a-markov-chain-is-irreducible/186034
# How do you see a Markov chain is irreducible? I have some trouble understanding the Markov chain property irreducible. Irreducible is said to mean that the stochastic process can "go from any state to any state". But what defines whether it can go from state $i$ to state $j$, or cannot go? State $j$ is accessible (written $i\rightarrow j$) from state $i$, if exists integer $n_{ij}>0$ s.t. $$P(X_{n_{ij}}=j\space |\space X_0=i)=p_{ij}^{(n_{ij})} >0$$ then communicating is if $i\rightarrow j$ and $j \rightarrow i$. From these irreducibility follows somehow. • What's the intuition about "accessibility"? I don't understand why having a conditional probability makes something "accessible"? – mavavilj Dec 10 '15 at 8:20 • You may look from the inaccessibility point. The state $j$ is said to be inaccessible from $i$ if there is no chance to get there from $i$, that is for any number of steps $n$ the probability of this event remains $0$. To make definition of accessibility one should switch the quantors, i.e. $\forall$ to $\exists$ and $=0$ to $\neq 0$ (which is the same as $>0$, since probability is positive). – nmerci Dec 10 '15 at 8:28 Here are three examples for transition matrices, the first two for the reducible case, the last for the irreducible one. \begin{eqnarray*} P_1 &=& \left( \begin{array}{cccc} 0.5 & 0.5 & 0 & 0 \\ 0.9 & 0.1 & 0 & 0 \\ 0 & 0 & 0.2 & 0.8 \\ 0 & 0 & 0.7 & 0.3 \end{array} \right) \\\\ P_2 &=& \left( \begin{array}{cccc} 0.1 & 0.1 & 0.4 & 0.4 \\ 0.5 & 0.1 & 0.1 & 0.3 \\ 0.2 & 0.4 & 0.2 & 0.2 \\ 0 & 0 & 0 & 1% \end{array} \right) \end{eqnarray*} For $P_1$, when you are in state 3 or 4, you will stay there, and the same for states 1 and 2. There is no way to get from state 1 to state 3 or 4, for example. For $P_2$, you can get to any state from states 1 to 3, but once you are in state 4, you will stay there. $$P_3=\left( \begin{array}{cccccc} 0.5 & 0.5 & 0 & 0 & 0 & 0 \\ 0.9 & 0 & 0 & 0 & 0 & 0.1 \\ 0 & 0 & 0 & 0.8 & 0 & 0.2 \\ 0.7 & 0 & 0.1 & 0 & 0.2 & 0 \\ 0 & 0 & 0 & 0.1 & 0.9 & 0 \\ 0.9 & 0 & 0 & 0 & 0.1 & 0% \end{array} \right)$$ For this example, you may start in any state and can still reach any other state, although not necessarily in one step. The state $j$ is said to be accessible from a state $i$ (usually denoted by $i \to j$) if there exists some $n\geq 0$ such that: $$p^n_{ij}=\mathbb P(X_n=j\mid X_0=i) > 0$$ That is, one can get from the state $i$ to the state $j$ in $n$ steps with probability $p^n_{ij}$. If both $i\to j$ and $j\to i$ hold true then the states $i$ and $j$ communicate (usually denoted by $i\leftrightarrow j$). Therefore, the Markov chain is irreducible if each two states communicate. • Is the $n$ in $p_{ij}^n$ a power or an index? – mavavilj Dec 10 '15 at 10:29 • It's an index. However, it has an interpretation: if $\mathbf P=(p_{ij})$ be a transition probability matrix, then $p_{ij}^n$ is the $ij$-th element of $\mathbf P^n$ (here $n$ is a power). – nmerci Dec 10 '15 at 12:32 Let $i$ and $j$ be two distinct states of a Markov Chain. If there is some positive probability for the process to go from state $i$ to state $j$, whatever be the number of steps(say 1, 2, 3$\cdots$), then we say that state $j$ is accessible from state $i$. Notationally, we express this as $i\rightarrow j$. In terms of probability, it is expressed as follows: a state $j$ is accessible from state $i$, if there exists an integer $m>0$ such that $p_{ij}^{(m)}>0$. Similarly, we say that, $j\rightarrow i$, if there exists an integer $n>0$ such that $p_{ji}^{(n)}>0$. 
Now, if both $i\rightarrow j$ and $j\rightarrow i$ are true, then we say that the states $i$ and $j$ communicate with each other, and is notationally expressed as $i \leftrightarrow j$. In terms of probability, this means that, there exists two integers $m>0,\;\; n>0$ such that $p_{ij}^{(m)}>0$ and $p_{ji}^{(n)}>0$. If all the states in the Markov Chain belong to one closed communicating class, then the chain is called an irreducible Markov chain. Irreducibility is a property of the chain. In an irreducible Markov Chain, the process can go from any state to any state, whatever be the number of steps it requires. Some of the existing answers seem to be incorrect to me. As cited in Stochastic Processes by J. Medhi (page 79, edition 4), a Markov chain is irreducible if it does not contain any proper 'closed' subset other than the state space. So if in your transition probability matrix, there is a subset of states such that you cannot 'reach' (or access) any other states apart from those states, then the Markov chain is reducible. Otherwise the Markov chain is irreducible. First a word of warning : never look at a matrix unless you have a serious reason to do so : the only one I can think of is checking for mistakenly typed digits, or reading in a textbook. If $P$ is your transition matrix, compute $\exp(P)$. If all entries are nonzero, then the matrix is irreducible. Otherwise, it's reducible. If $P$ is too large, compute $P^n$ with $n$ as large as you can. Same test, slightly less accurate. Irreducibility means : you can go from any state to any other state in a finite number of steps. In Christoph Hanck's example $P_3$, you can't go directly from state 1 to state 6, but you can go 1 -> 2 -> 6 • How do you define "can go from state $i$ to state $j$"? – mavavilj Dec 10 '15 at 10:26 • You really need to ask your teacher. He's not going to eat you, you know. – titus Dec 14 '15 at 10:17 • when you use exp(P) you are referring ot the matrix exponential? or $e^{P_{ij}}$, where i, j is the ij term of the matrix P? – makansij Nov 8 '18 at 7:35 • I am referring to the matrix exponential – titus Mar 12 '19 at 21:02
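For completeness, the reachability criterion described in the answers above is easy to check numerically. A minimal sketch in Python/NumPy (the helper name is my own; $P_1$ and $P_3$ are copied from the first answer): state $j$ is accessible from state $i$ iff some power of the transition matrix has a positive $(i,j)$ entry, so an $n$-state chain is irreducible exactly when all entries of $(I+A)^{n-1}$ are positive, where $A$ is the 0/1 pattern of $P$.

```python
import numpy as np

def is_irreducible(P):
    """Irreducibility test for a finite chain: the (i, j) entry of
    (I + A)^(n-1) is positive iff state j can be reached from state i
    in at most n-1 steps, where A is the 0/1 pattern of P."""
    n = P.shape[0]
    A = (P > 0).astype(float)
    reach = np.linalg.matrix_power(np.eye(n) + A, n - 1)
    return bool(np.all(reach > 0))

# Matrices from the first answer: P1 is reducible, P3 is irreducible.
P1 = np.array([[0.5, 0.5, 0.0, 0.0],
               [0.9, 0.1, 0.0, 0.0],
               [0.0, 0.0, 0.2, 0.8],
               [0.0, 0.0, 0.7, 0.3]])

P3 = np.array([[0.5, 0.5, 0.0, 0.0, 0.0, 0.0],
               [0.9, 0.0, 0.0, 0.0, 0.0, 0.1],
               [0.0, 0.0, 0.0, 0.8, 0.0, 0.2],
               [0.7, 0.0, 0.1, 0.0, 0.2, 0.0],
               [0.0, 0.0, 0.0, 0.1, 0.9, 0.0],
               [0.9, 0.0, 0.0, 0.0, 0.1, 0.0]])

print(is_irreducible(P1))  # False: {1,2} and {3,4} never communicate
print(is_irreducible(P3))  # True: every state reaches every other state
```

This is the same idea as the $\exp(P)$ test, but it avoids choosing how large a power to take: $n-1$ steps always suffice in an $n$-state chain.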
2020-08-12 12:54:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9792478680610657, "perplexity": 145.05568401781775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738892.21/warc/CC-MAIN-20200812112531-20200812142531-00452.warc.gz"}
http://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-3-section-3-3-derivatives-of-trigonometric-functions-3-3-exercises-page-197/53
## Calculus: Early Transcendentals 8th Edition $$A=-\frac{3}{10}$$ and $$B=-\frac{1}{10}$$ $$y=A\sin x+B\cos x$$ 1) Find $y'$ and $y''$ $$y'= A\cos x-B\sin x$$ and $$y''=-A\sin x-B\cos x$$ 2) Now consider the equation $$y''+y'-2y=\sin x$$ $$-A\sin x-B\cos x+A\cos x-B\sin x-2(A\sin x+B\cos x)=\sin x$$ $$-A\sin x-B\cos x+A\cos x-B\sin x-2A\sin x-2B\cos x=\sin x$$ $$(-A-B-2A)\sin x+(A-B-2B)\cos x=\sin x$$ $$(-3A-B)\sin x+(A-3B)\cos x=1\sin x+0\cos x$$ Comparing coefficients on both sides of the equation, we can see that $$-3A-B=1$$ and $$A-3B=0$$ From $A-3B=0$ we get $A=3B$. Substituting $A=3B$ into the first equation, we have $$-3\times3B-B=1$$ $$-10B=1$$ $$B=-\frac{1}{10}$$ and therefore $$A=3B=3\times\left(-\frac{1}{10}\right)=-\frac{3}{10}$$
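As a quick check, substituting $A=-\frac{3}{10}$ and $B=-\frac{1}{10}$ back into the differential equation: $$y=-\frac{3}{10}\sin x-\frac{1}{10}\cos x \qquad y'=-\frac{3}{10}\cos x+\frac{1}{10}\sin x \qquad y''=\frac{3}{10}\sin x+\frac{1}{10}\cos x$$ $$y''+y'-2y=\left(\frac{3}{10}+\frac{1}{10}+\frac{6}{10}\right)\sin x+\left(\frac{1}{10}-\frac{3}{10}+\frac{2}{10}\right)\cos x=\sin x$$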
2018-04-22 20:26:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9901280999183655, "perplexity": 69.32415701060022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945648.77/warc/CC-MAIN-20180422193501-20180422213501-00197.warc.gz"}
https://kb.osu.edu/dspace/handle/1811/19513
# CALCULATED OH STRETCHING VIBRATIONAL BAND INTENSITIES OF SMALL WATER CLUSTERS Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/19513 Files Size Format View 1999-TI-06.jpg 131.2Kb JPEG image Title: CALCULATED OH STRETCHING VIBRATIONAL BAND INTENSITIES OF SMALL WATER CLUSTERS Creators: Kjaergaard, Henrik G.; Low, Geoffrey R. Issue Date: 1999 Publisher: Ohio State University Abstract: We have calculated fundamental and overtone OH-stretching vibrational band intensities of the small water clusters. The intensities were determined with a simple harmonically coupled anharmonic oscillator (HCAO) local mode model and ab initio dipole moment functions. The dipole moment functions were calculated at the self-consistent-field Hartree-Fock and quadratic configuration interaction including single and double excitations levels of theory with the 6-31G(d), 6-311+G(d,p), and 6-311++G(2d,2p) basis sets. The overtone spectra of the dimer and trimer have not been observed and a method of obtaining local mode parameters from scaled ab initio calculations has been suggested. Our calculations show that the total overtone intensity of the dimer and trimer, although distributed differently, is close to two and three times the total intensity of the monomer for a given region. One significant difference between the monomer and the dimer and trimer is the appearance of the red shifted hydrogen bonded OH-stretching band in the dimer and trimer spectra. We suggest that these red shifted bands are ideal for attempts to observe the water dimer in the atmosphere. The method presented can provide an accurate estimate of the OH-stretching intensities for molecules for which vibrational spectra have not been observed. Such calculations are of importance in atmospheric solar energy absorption models. Description: Author Institution: Department of Chemistry, University of Otago URI: http://hdl.handle.net/1811/19513 Other Identifiers: 1999-TI-06
2016-12-06 01:00:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5804215669631958, "perplexity": 3141.0606843106775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541864.44/warc/CC-MAIN-20161202170901-00222-ip-10-31-129-80.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1996195/find-the-marginal-probability-function-of-y
# Find the marginal probability function of Y I get stuck on the first part of this problem and consequently am unable to solve the second part. Problem The number of defects per yard in a certain fabric, $Y$ , was known to have a Poisson distribution with parameter $λ$. The parameter $λ$ was assumed to be a random variable with a density function given by: $f(λ)=e^{−λ}$ , for $λ≥0$ (a) Find the marginal probability function for $Y$. (b) You choose a yard of fabric. What is the probability that $Y≤3$? My Attempt (a) $P_y(y)=P(Y=y)= \int_0^\infty f_λ(h)P(Y=y | λ=h)dh$ $=\int_0^\infty \frac{e^{-2h}h^y}{y!}dh$ $=\frac{2^{-(y+1)} Γ(y+1)}{y!}$ for $Re(y)>-1$ I have a strong suspicion that I'm doing something incorrectly since the problem requires integration by parts and results in an incomplete gamma function. Could this answer actually be the marginal probability function for $Y$? (b) $P(Y≤3)$ would just have me plugging in values for $y$ to get $\sum_{y=0}^3P(Y=y)$ so I'd add up the four terms using the function found in part (a). In doing so, I believe I would represent the term $Γ(y+1)$ as $y!$ because $Γ(y)=\int_0^\infty x^{y-1}e^{-x}dx=(y-1)!$ Our restriction $Re(y) > -1$ is satisfied, of course, since we cannot have negative yardage of the fabric, which results in a lower bound of 0 yards. You're doing it right. In (a), since $y$ is a nonnegative integer, the gamma function $\Gamma(y+1)$ nicely cancels with the $y!$ in the denominator, so the marginal distribution of $Y$ is geometric.
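One way to finish, following the answer: since $Γ(y+1)=y!$, the marginal in part (a) simplifies to $P(Y=y)=\frac{2^{-(y+1)}\,Γ(y+1)}{y!}=\left(\frac{1}{2}\right)^{y+1}$ for $y=0,1,2,\dots$, a geometric distribution with success probability $\frac{1}{2}$. Then for part (b), $P(Y≤3)=\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}=\frac{15}{16}$.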
2019-08-22 13:29:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9486964344978333, "perplexity": 124.90595406395028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317130.77/warc/CC-MAIN-20190822130553-20190822152553-00476.warc.gz"}
https://proofwiki.org/wiki/Excluded_Point_Topology_is_T4/Proof_3
# Excluded Point Topology is T4/Proof 3 ## Theorem Let $T = \left({S, \tau_{\bar p}}\right)$ be an excluded point space. Then $T$ is a $T_4$ space. ## Proof We have: Excluded Point Topology is $T_5$ $T_5$ Space is $T_4$ $\blacksquare$
2020-01-17 15:34:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8814924955368042, "perplexity": 3622.4414751050867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250589861.0/warc/CC-MAIN-20200117152059-20200117180059-00488.warc.gz"}
https://math.stackexchange.com/questions/484286/prove-by-the-definition-that-ln-varepsilon-1-to-mathbbr-is-cauchy-integ
# Prove by the definition that $\ln:[\varepsilon, 1]\to\mathbb{R}$ is Cauchy-integrable and compute its integral I'll first define the Cauchy integral (which is merely a relaxed version of the Riemann integral). (Partitions and their norms are defined as usual, just like in the definition of the Riemann integral.) Definition. Given a function $f:[a, b]\to\mathbb{R}$ ($a < b$) and a partition $P = \{a = x_0 < x_1 < \dots < x_N = b\}$ of $[a, b]$, we define the Cauchy sum for $f$ corresponding to $P$ to be $C(f, P) = \sum_{n = 1}^Nf(x_{n-1})(x_n - x_{n-1})$. Definition. A function $f:[a, b]\to\mathbb{R}$ ($a < b$) is Cauchy-integrable iff there exists $l\in\mathbb{R}$ such that for every $\varepsilon > 0$ there exists a $\delta > 0$ such that if $P$ is a partition of $[a, b]$ with $|P| < \delta$, then $|C(f, P) - l| < \varepsilon$. We call $l$ the Cauchy integral of $f$. (One can prove $l$ is unique, if it exists.) For more on the Cauchy integral, see http://emp.byui.edu/BrownD/Mathematics/Calculus-R-R/Calc-R-R-Advanced/Cauchy-integral-intro.pdf Now here's the exercise I'm having trouble with: Exercise. Suppose $0 < \varepsilon < 1$ and consider the function $\ln:[\varepsilon, 1]\to\mathbb{R}$. Prove by the definition that such function is Cauchy-integrable and that its Cauchy integral is $-1 + \varepsilon - \varepsilon \ln(\varepsilon)$. (This exercise comes from a course in Measure and Integration I'm taking right now. Our professor has defined the Cauchy integral and others for historical and pedagogical reasons.) Of course, I could argue thus: since $\ln:[\varepsilon, 1]\to\mathbb{R}$ is continuous, it must be Riemann-integrable (and, therefore, Cauchy-integrable) and its integral can be computed using the Fundamental Theorem of Calculus. However, the exercise requires me to do it with only the definition of the Cauchy integral. It's not clear to me how to proceed here. Thoughts? • Possible duplicate: see the question. – Tony Piccolo Sep 4 '13 at 22:20 • I don't think this is a duplicate. In the question you mention it is taken for granted that the function $\log$ is Riemann-integrable and it is suggested to use one specific Riemann sum. – Etienne Sep 4 '13 at 22:31 I would do it as follows (writing "$\log$" instead of "$\rm ln$"). First, note that for any function $f:[a,b]\to\mathbb R$ and any partition $P=(x_0,\dots ,x_N)$, we may write \begin{eqnarray}C(f,P)&=&\sum_{n=1}^N f(x_{n-1})(x_n-x_{n-1})\\ &=&\sum_{n=1}^{N-1} x_n(f(x_{n-1})-f(x_n))+x_Nf(x_{N-1})-x_0f(x_0)\\ &=&\sum_{n=1}^{N-1} x_n(f(x_{n-1})-f(x_n))+bf(x_{N-1})-af(a)\, . \end{eqnarray} When specialized to $f={\log}$ on $[a,b]=[\varepsilon,1]$, this becomes $$C({\log},P)=\sum_{n=1}^{N-1} x_n(\log(x_{n-1})-\log(x_n))+\log(x_{N-1})-\varepsilon\log(\varepsilon)\,.$$ By the mean value theorem, for each $n\in\{ 1,\dots ,N-1\}$ one can find $c_n\in (x_{n-1},x_n)$ such that $\log(x_{n-1})-\log(x_n)=\frac{1}{c_n} (x_{n-1}-x_n)$. This gives $$C(\log, P)=\sum_{n=1}^{N-1} \frac{x_n}{c_n} (x_{n-1}-x_n)+\log(x_{N-1})-\varepsilon\log(\varepsilon)\,.$$ Now, the idea is that if $\vert P\vert$ is small, then all $x_n/c_n$ are close to $1$ (because $c_n\in (x_{n-1},x_n)$) and $x_{N-1}$ is close to $x_N=1$ (so that $\log(x_{N-1})$ is close to $0$). Hence, we get \begin{eqnarray}C(\log, P)&\sim& \sum_{n=1}^{N-1} (x_{n-1}-x_n)-\varepsilon\log(\varepsilon)\\ &=&x_0-x_{N-1}-\varepsilon\log(\varepsilon)\\ &\sim&\varepsilon-1-\varepsilon\log(\varepsilon)\, . 
\end{eqnarray} • Showing me how to rewrite $C(f, P)$ in terms of differences $f(x_{n-1}) - f(x_n)$ and then how to apply the Mean Value Theorem to get a more workable expression was nice. This technique will probably help me solve similar problems in the future. Thank you very much for taking the time to help me, Etienne! (As a side note, I must say this is not the first time I'm unable to solve an exercise and later I find out that a solution involves the MVT. I definitely have to increase MVT's priority in my "techniques-to-try" queue.) – Detached Laconian Sep 6 '13 at 14:02
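Not a substitute for the $\varepsilon$-$\delta$ argument, but a quick numerical sanity check of the limit derived above: a minimal sketch in Python/NumPy, with the arbitrary choices $\varepsilon=0.25$ and uniform partitions. The left-endpoint Cauchy sums $C(\log,P)$ should approach $-1+\varepsilon-\varepsilon\ln(\varepsilon)$ as $|P|\to 0$.

```python
import numpy as np

eps = 0.25
exact = -1 + eps - eps * np.log(eps)   # the claimed value of the Cauchy integral

for N in (10, 100, 1000, 10000):
    x = np.linspace(eps, 1.0, N + 1)                  # uniform partition of [eps, 1]
    cauchy_sum = np.sum(np.log(x[:-1]) * np.diff(x))  # C(log, P): left endpoints
    print(N, cauchy_sum, cauchy_sum - exact)          # the error shrinks with the mesh
```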
2020-07-08 02:31:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9701126217842102, "perplexity": 191.91800330480166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896169.35/warc/CC-MAIN-20200708000016-20200708030016-00353.warc.gz"}
http://talkrational.org/showthread.php?t=36727&page=13
Frenemies of TalkRational: TalkRational Explosion rocks nuclear plant Physical Sciences Dangerous meddling in things man was not meant to know. Physics, Astronomy, Chemistry, etc. 03-18-2011, 04:48 AM #1345467  /  #301 F X mostly harmless     Join Date: May 2009 Location: in a house Posts: 7,292 Forgive me for getting back on the sarcasm horse one more time. Radiation coming? Not to worry? Sure. But here's a fucking brilliant idea for you. Yes you, US government, that runs on the taxes of millions of Americans. Why don't you take a few of them really expensive jets of yours, and fly them out to where the "hardly noticeable" cloud of radioactive dust is, and measure it? You know, fly around and collect air samples and see what the barely noticable amount is? Rather than sitting around with your finger up your collective ass waiting to see what it is. When it finally arrives. How's that for a suggestion? Maybe do a little defensive action, rather than waiting to see. Or does the technology for that not exist yet? __________________ . ..... ........ ..... . 03-18-2011, 05:22 AM #1345498  /  #302 Steviepinhead Senior Member     Join Date: Mar 2008 Location: Seattle, WA Posts: 31,183 Yeah, and this give us the bearing and a time when everyone on the west coast can defensively exhale in the direction of the plume to shunt it aside to Canada. Or Alaska. Or some other non-American place... 03-18-2011, 05:35 AM #1345508  /  #303 F X mostly harmless     Join Date: May 2009 Location: in a house Posts: 7,292 Just watched a bit of the radioactive cloud pass over Denver. No big deal. __________________ . ..... ........ ..... . 03-18-2011, 12:47 PM   #1345635  /  #304 RAFH Robot Architect From Hell Join Date: Mar 2008 Location: Lori's Place. Posts: 24,065 Quote: Originally Posted by Dean W Quote: Originally Posted by F X http://www.nytimes.com/interactive/2...ml?ref=science lets you see the plume of radioactive particles getting ready to hit California Friday. Yeah, but consider the dissipation gradient depicted. Researchers have calculated that a mother holding her baby transmits 12 million times the radioactivity than that plume will by the time it gets to CA. Yes, but what about the radiation the baby is giving off. Ever wonder about that pink glow? And that stuff that comes out of their butts is definitely both toxic and radioactive. __________________ Invent the Future 03-18-2011, 01:13 PM #1345641  /  #305 F X mostly harmless     Join Date: May 2009 Location: in a house Posts: 7,292 The exodus of people from Japan may actually help the real estate markets elsewhere. __________________ . ..... ........ ..... . 03-18-2011, 01:34 PM   #1345651  /  #306 F X mostly harmless Join Date: May 2009 Location: in a house Posts: 7,292 Quote: http://belfercenter.ksg.harvard.edu/...terrorism.html __________________ . ..... ........ ..... . 03-18-2011, 02:21 PM   #1345683  /  #307 cakemaker Senior Member Join Date: Sep 2010 Posts: 1,156 Quote: Originally Posted by F X Quote: http://belfercenter.ksg.harvard.edu/...terrorism.html You gotta hope for a win at Bingo, FX, so you can afford those reading for comprehension lessons. 03-18-2011, 02:25 PM   #1345687  /  #308 F X mostly harmless Join Date: May 2009 Location: in a house Posts: 7,292 Quote: There is also some radionuclide data coming out of station RN38, in Takasaki/Gunma. 
The station is operated by the Comprehensive Test Ban Treaty Organization (CTBTO) (see the long article I published yesterday on CTBTO data for more details on its data) and recorded many species of radionuclides on 15 March, including iodine-131 and Barium-140, with a preliminary estimate of the concentration of iodine-131 at 15 becquerels per cubic meter. That's a fairly low dose. Radioactive iodine-131 was also detected at the Petropavlovsk station in Russia, but at fourfold-lower levels. That should pose no threat to human health, though its very presence is troubling. http://blogs.nature.com/news/thegrea...st_maps_o.html __________________ . ..... ........ ..... . 03-18-2011, 09:42 PM #1346174  /  #309 F X mostly harmless     Join Date: May 2009 Location: in a house Posts: 7,292 Watching the CNN (turns head and spits) reporting, they keep repeating that there is no way to tell what is going on. Then they show the video from the drone/helicopter yesterday, and try to analyze shaking low res video. Trying to determine what is there. Anyone with a brain is screaming at the TV, "Hey! Ever heard of an infrared camera? How about ultraviolet camera? I bet somebody even invented one that can detect x-rays, gamma rays and other radiation signatures. Why not fly one of them over the site?" I mean, we can spot and kill people from 30 miles away, at night, based on infrared imaging. But you can't get a fucking drone to fly overhead with a thermal imaging camera? Or a fucking zoom lens? Nobody on the whole planet has a device that measure gamma rays? Talk about being prepared for a nuclear disaster. "Hey Bob, we might have a leak somewhere" "Gee Joe, too bad nobody ever invented a device that can take pictures of anything other than visible light, then we could scan for it" "Yeah Bob, I knows. Lets get out the Geiger counters and walk around the whole plant a few times." "Someday we won't have to do this no more" "Yep, someday" __________________ . ..... ........ ..... . 03-18-2011, 09:50 PM #1346184  /  #310 akuaku Nocturnal Member Lord High Coder     Join Date: Feb 2010 Posts: 3,047 Yeah, amazing, nobody has come up with a device that would detect radiation, except our village idiot. __________________ $\LARGE z_i=z_{i-1}^2+z_0 \hspace{20} \vspace{67}$ 03-18-2011, 09:53 PM   #1346189  /  #311 MattShizzle Zombie for Satan Join Date: Dec 2009 Location: Bernville, PA Posts: 12,743 Quote: Originally Posted by F X Watching the CNN (turns head and spits) reporting, they keep repeating that there is no way to tell what is going on. Then they show the video from the drone/helicopter yesterday, and try to analyze shaking low res video. Trying to determine what is there. Anyone with a brain is screaming at the TV, "Hey! Ever heard of an infrared camera? How about ultraviolet camera? I bet somebody even invented one that can detect x-rays, gamma rays and other radiation signatures. Why not fly one of them over the site?" I mean, we can spot and kill people from 30 miles away, at night, based on infrared imaging. But you can't get a fucking drone to fly overhead with a thermal imaging camera? Or a fucking zoom lens? Nobody on the whole planet has a device that measure gamma rays? Talk about being prepared for a nuclear disaster. "Hey Bob, we might have a leak somewhere" "Gee Joe, too bad nobody ever invented a device that can take pictures of anything other than visible light, then we could scan for it" "Yeah Bob, I knows. Lets get out the Geiger counters and walk around the whole plant a few times." 
"Someday we won't have to do this no more" "Yep, someday" 03-18-2011, 10:07 PM   #1346198  /  #312 SomecallmeTim That's no ordinary rabbit! Join Date: Mar 2008 Location: Mt. View, CA Posts: 5,266 Quote: Originally Posted by F X Watching the CNN (turns head and spits) reporting, they keep repeating that there is no way to tell what is going on. Then they show the video from the drone/helicopter yesterday, and try to analyze shaking low res video. Trying to determine what is there. Anyone with a brain is screaming at the TV, "Hey! Ever heard of an infrared camera? How about ultraviolet camera? I bet somebody even invented one that can detect x-rays, gamma rays and other radiation signatures. Why not fly one of them over the site?" I mean, we can spot and kill people from 30 miles away, at night, based on infrared imaging. But you can't get a fucking drone to fly overhead with a thermal imaging camera? Or a fucking zoom lens? Nobody on the whole planet has a device that measure gamma rays? Talk about being prepared for a nuclear disaster. "Hey Bob, we might have a leak somewhere" "Gee Joe, too bad nobody ever invented a device that can take pictures of anything other than visible light, then we could scan for it" "Yeah Bob, I knows. Lets get out the Geiger counters and walk around the whole plant a few times." "Someday we won't have to do this no more" "Yep, someday" Quote: Northrop Drone Flies Over Japan Reactor to Record Data Mar 17, 2011 1:32 PM PT A Northrop Grumman Corp. (NOC) Global Hawk drone flew over Japan’s crippled Fukushima Dai-Ichi nuclear plant today to collect data and imagery for the Japanese government, said U.S. Air Force Chief of Staff General Norton Schwartz. link Theoretically it may be possible for F X to be a bigger dickhead, but practically it's difficult to imagine. __________________ "We measure heat in watts, dumbshit." - Schneibster "You can think of Sanford as your mentally retarded cousin" - Dave Hawkins 03-18-2011, 10:07 PM #1346199  /  #313 F X mostly harmless     Join Date: May 2009 Location: in a house Posts: 7,292 The real worry right now is coal. It's far more dangerous than the 6 reactors and 300 tons of fuel rods at risk. Or is it 500 tons? Nobody knows it seems. They also can't find the blueprints of the plant, so we have to guess where the fuel rods might be. And I checked, and it does appear that nobody in the entire world has any kind of remote controlled flying craft that can sense gamma radiation. Or x-rays, or ultraviolet light. But thanks to the wars, we do have the ability to view infrared, and they are diverting drones from the war effort to take a look at the plants in infrared. So the engineers and other trying to save the plant can figure out where the fuel rods are. It's so fucking reassuring to find out how prepared everybody is for a nuclear plant problem. __________________ . ..... ........ ..... . 03-18-2011, 10:37 PM   #1346217  /  #314 SomecallmeTim That's no ordinary rabbit! Join Date: Mar 2008 Location: Mt. View, CA Posts: 5,266 Quote: Originally Posted by F X And I checked, and it does appear that nobody in the entire world has any kind of remote controlled flying craft that can sense gamma radiation. Or x-rays, or ultraviolet light. But thanks to the wars, we do have the ability to view infrared, and they are diverting drones from the war effort to take a look at the plants in infrared. 
Maybe if you looked somewhere besides the inside of your own rectum: Quote: Unmanned Air Vehicles (UAVs) Unlike drones, which are autonomous vehicles not requiring human intervention, UAVs are aircraft that do not carry human operators but still rely on humans to operate. They include fixed and rotary wing configurations and can be remotely operated or flown with varying degrees of autonomy. UAVs carry a wide variety of sensor payloads: Electro-Optics (EO); Infra-Red (IR); Synthetic Aperture Radar (SAR); Signal and Communications Intelligence (SIGINT and COMINT); Chemical, Biological and Radiation (CBR) detection systems; and radio relay equipment. Big list of UAV sensor payloadss People on the ground over there are doing everything humanly possible to bring a serious problem under control. Many are going to get radiation illnesses, some will probably die. But because they don't send you a personal email update every 15 minutes that means they're all sitting around with their thumbs up their asses doing nothing. You really are a dickhead F X. Major league. __________________ "We measure heat in watts, dumbshit." - Schneibster "You can think of Sanford as your mentally retarded cousin" - Dave Hawkins 03-19-2011, 01:19 AM   #1346336  /  #315 Dean W What difference does it make? Join Date: Jun 2009 Posts: 6,202 Quote: Originally Posted by F X Quote: The Consequence of Cesium-137 Release A spent fuel pool would contain tens of million curies of Cs-137. Cs-137 has a 30 year half-life; it is relatively volatile and a potent land contaminant. http://belfercenter.ksg.harvard.edu/...terrorism.html That's actually kind of reassuring. Let's say there were 100,000,000 curies of Cs-137. With a half life of 30 years, there'd be less than 1 curie left after 800 years, hardly a tick of the second hand on the geological timescale. __________________ First understand, then criticize; not the other way round! - Per Ahlberg 03-19-2011, 05:20 AM #1346519  /  #317 rmacfarl DAG and proud of it     Join Date: Feb 2010 Location: World's most liveable city - still! Posts: 10,021 tl;dr 03-19-2011, 05:29 AM #1346524  /  #318 F X mostly harmless     Join Date: May 2009 Location: in a house Posts: 7,292 Good plan. __________________ . ..... ........ ..... . 03-19-2011, 06:08 AM #1346561  /  #319 recursive prophet Recursions Analyst     Join Date: Apr 2009 Location: Palomar Mountain hills near Escondido, Ca enjoying retirement and laughing at angry fools online. Posts: 4,306 I thought your post made some excellent points FX. Not at all surprising of course, that the storage can is kicked down the road. That's what we do. Look at the economy, and predictions for 5+% growth soon to arrive and save the day. The next generation will figure out what to do with those spent rods, right? And if it means a bit of risk so that we can keep air-conditioning 5k square foot homes in the desert, so be it. I mean, just how much sacrifice can we expect the public to endure? The bad news? Green tech ain't gonna get us even close in this decade. Takes a lot of oil to make it happen. Ever read Asimov's The Last Question? Short of a singularity it's pretty hard to escape entropy, even in fiction. 03-19-2011, 06:14 AM #1346565  /  #320 F X mostly harmless     Join Date: May 2009 Location: in a house Posts: 7,292 It's simply amazing that just a little over a hundred years ago, nobody had power plants at all. Or cars. And yet, people lived. 
Oh sure not like the Gods we resemble, but then, they also didn't have to deal with Wotan actually rising up for real every now and then. Or knowing about every fucking disaster in the world as it happened. __________________ . ..... ........ ..... . 03-19-2011, 10:08 AM   #1346640  /  #321 cold one Upright Member Join Date: May 2009 Location: Location Location Posts: 3,437 Quote: Originally Posted by F X This horrific accident, followed by a clusterfuck of unimaginable size, which now has the situation about as bad as it can get before there is nothing but the running and the screaming ... Quote: I no longer support nuclear reactors for power plants. Sat. March 6th, obscure discussion site, breaking news: Local tard no longer supports nuclear reactors for power plants. CNN is on it. Quote: And I am adamant that we need to replace every last one of them with Quote: If we put our minds to it, we can probably figure out how to create safe power, and do it cheaper. Yes, if only someone would have thought about that. __________________ tldr 03-19-2011, 02:58 PM   #1346798  /  #322 Autonemesis Flipper Offer Join Date: Jun 2009 Location: Location: Location: Posts: 16,671 Quote: Originally Posted by F X This horrific accident, followed by a clusterfuck of unimaginable size, which now has the situation about as bad as it can get before there is nothing but the running and the screaming ... Quote: "We sincerely apologize ... for causing such a great concern and nuisance," said a statement from Masataka Shimizu, the president of Tokyo Electric. HTH __________________ As if. 03-19-2011, 03:00 PM   #1346800  /  #323 Autonemesis Flipper Offer Join Date: Jun 2009 Location: Location: Location: Posts: 16,671 Quote: Originally Posted by F X This horrific accident... It was a natural disaster, not an accident, that caused the present situation at Fukushima. __________________ As if. 03-19-2011, 03:03 PM   #1346802  /  #324 VoxRat Senior Member Join Date: Mar 2008 Posts: 40,799 Quote: Originally Posted by recursive prophet I thought your post made some excellent points FX. Not at all surprising of course, that the storage can is kicked down the road. That's what we do. Look at the economy, and predictions for 5+% growth soon to arrive and save the day. The next generation will figure out what to do with those spent rods, right? And if it means a bit of risk so that we can keep air-conditioning 5k square foot homes in the desert, so be it. I mean, just how much sacrifice can we expect the public to endure? The bad news? Green tech ain't gonna get us even close in this decade. Takes a lot of oil to make it happen. Ever read Asimov's The Last Question? Short of a singularity it's pretty hard to escape entropy, even in fiction. exactly. Quote: Originally Posted by F_X It's simply amazing that just a little over a hundred years ago, nobody had power plants at all. Or cars. And yet, people lived. "Boy, the way Glenn Miller played! A little over a hundred years ago there were fewer than 20% as many people living as now. Plans to "return to simpler times" involve the pesky detail of getting rid of 5 or 6 billion excess people. Quote: Oh sure not like the Gods we resemble... I suspect that the lives of most of the 7 billion on the planet now are somewhat short of "divine", and not all that different from what they were a little over a hundred years ago. 
03-19-2011, 04:40 PM #1346860  /  #325 F X mostly harmless     Join Date: May 2009 Location: in a house Posts: 7,292 You say that using a worldwide instant communication network, typed out on a super computer of some kind, (it could be a mobile computer as well), likely sitting in a comfortable safe building with controlled temperature, free of fear from war or famine. I think we all want such a lifestyle to continue. __________________ . ..... ........ ..... .
2014-10-26 01:03:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3507333993911743, "perplexity": 5621.857876240004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119653628.45/warc/CC-MAIN-20141024030053-00041-ip-10-16-133-185.ec2.internal.warc.gz"}
https://proxieslive.com/tag/kernel/
2D kernel density estimation (SmoothKernelDistribution) with bin width estimation: what are the bin values that Mathematica chooses? Mathematica has builtin bin estimation including the rules Scott, SheatherJones and Silverman (the default one); they work in both 1D and multiple dimensions. Most of the statistical documentation that I could find of these bin-width rules are for 1D data. Their implementation for 2D or higher dimensions seems not, as far as I know, so robust. I could not find a Mathematica documentation on how exactly these rules are implemented in any dimensions. For the Silverman case, there is a nice question about it that raises very important subtleties: About Silverman's bandwidth selection in SmoothKernelDistribution . For 2D data, my first guess was that Mathematica uses the same 1D algorithm, but for each of the axis, thus yielding a diagonal bin matrix. Hence, I extended the code provided in the previous link to 2D as follows: Clear[data, silvermanBandwidth]; silvermanBandwidth[data_] := silvermanBandwidth[data] = Block[ {m, n}, m = MapThread[Min @ {#1, #2} &, { StandardDeviation @ data, InterquartileRange[data, {{0, 0}, {1, 0}}]/1.349 } ]; n = Length @ data; 0.9 m/n^(1/5) ]; (In the statistical literature I could find different conventions for rounding the real numbers that appear in the above code, I do not know precisely which version Mathematica picks; anyway the problem below is larger than these small rounding changes). The approach above (and a few variations I tried) is quite close to what Mathematica does in 2D, but it is not identical. Here is an example: data = RandomReal[1, {100, 2}]; silvermanWMDist = SmoothKernelDistribution @ data; silvermanMyDist = SmoothKernelDistribution[data, silvermanBandwidth @ data, "Gaussian"]; ContourPlot[PDF[silvermanWMDist, {x, y}], {x, -0.1, 1.1}, {y, -0.1, 1.1} ] ContourPlot[PDF[silvermanMyDist, {x, y}], {x, -0.1, 1.1}, {y, -0.1, 1.1} ] My questions are: how Silverman’s rule is implemented in Mathematica for 2D data? Is there a way to print out Mathematica’s derived bin matrix, either for Silverman or any other rule? Is Exit (no square brackets) equivalent to Quit[] for refreshing the Kernel from within an Evaluation Notebook? I prefer to use Exit as it conveniently requires fewer key presses over Quit[]. But before I use it regularly I need to know if there any subtle differences between Quit[] and Exit. The Wolfram documentation pages for Quit and Exit appear to be very similar and even call these two functions synonymous but I just need to be sure. Thanks. What does a kernel of size n,n^2 ,… mean? So according to Wikipedia, In the Notation of [Flum and Grohe (2006)], a ”parameterized problem” consists of a decision problem $$L\subseteq\Sigma^*$$ and a function $$\kappa:\Sigma^*\to N$$, the parameterization. The ”parameter” of an instance $$x$$ is the number $$\kappa(x)$$. A ”’kernelization”’ for a parameterized problem $$L$$ is an algorithm that takes an instance $$x$$ with parameter $$k$$ and maps it in polynomial time to an instance $$y$$ such that • $$x$$ is in $$L$$ if and only if $$y$$ is in $$L$$ and • the size of $$y$$ is bounded by a computable function $$f$$ in $$k$$. Note that in this notation, the bound on the size of $$y$$ implies that the parameter of $$y$$ is also bounded by a function in $$k$$. The function $$f$$ is often referred to as the size of the kernel. If $$f=k^{O(1)}$$, it is said that $$L$$ admits a polynomial kernel. Similarly, for $$f={O(k)}$$, the problem admits linear kernel. 
Stupid question, but since the parameter can be anything, can't you just define the parameter to be really large and then you always have a linear kernel? Is this Ubuntu kernel version vulnerable to dirty cow? [closed] I am attempting to escalate privileges on a CTF Ubuntu box but I am afraid to run dirty cow due to a possible crash. Is this kernel version vulnerable to the exploit: Linux ip-10.0.0.1 3.13.0-162-generic #212-Ubuntu SMP Mon Oct 29 12:08:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux? The Ubuntu version is Ubuntu 14.04. Dirty cow documentation shows Ubuntu 14 versions < 3.13.0-100.147 are vulnerable, although I am confused as to whether this version is vulnerable and want to be somewhat positive before running it on the CTF / CapturetheFlag machine. As far as I read in an OS textbook (Operating Systems by Silberschatz), kernel mode is for privileged tasks, so is it true to claim that "User Level Threads can read/write Kernel threads"? Generally speaking, is there any kind of protection between user and kernel level threads? Security of NGFW OS and kernel I know there are lots of different providers, but let us focus on the bigger ones and the ones running some kind of Linux. In the end they are all some kind of huge packet parsing engine and I guess many options will be enabled in the kernel. But I'm not sure about that, nor can you find much info on how they do networking under the hood. Are they doing something specifically different than a normal Linux system in terms of kernel/program security and networking? Or are they more or less the average Linux router with iptables + a nice GUI and analytics? When I look through some patches/changelogs I regularly see CVEs with high risk, so I am wondering if they can make the network security actually worse. How to understand mapping function of kernel? For a kernel function, we have two conditions: one is that it should be symmetric, which is easy to understand intuitively because dot products are symmetric as well and our kernel should also follow this. The other condition is given below: There exists a map $$φ:R^d→H$$ called the kernel feature map into some high dimensional feature space H such that $$∀x,x′$$ in $$R^d: k(x,x′)=\langle φ(x),φ(x′)\rangle$$ I understand that this means that there should exist a feature map that will project the data from a low dimension to some high dimension D and the kernel function will take the dot product in that space. For example, the squared Euclidean distance is given as $$d(x,y)=\sum_i(x_i-y_i)^2=\langle x,x\rangle+\langle y,y\rangle-2\langle x,y\rangle$$ If I look at this in terms of the second condition, how do we know that there doesn't exist any feature map for the Euclidean distance? What exactly are we looking for in feature maps, mathematically? Any exploit details regarding CVE-2019-3846 : Linux Kernel ‘marvell/mwifiex/scan.c’ Heap Buffer Overflow Vulnerability How to get this exploit working or any method for this. It is seen that various Linux versions < 8 are vulnerable to this issue. Linux Kernel ‘marvell/mwifiex/scan.c’ Heap Buffer Overflow Vulnerability Issue Description: A flaw that allowed an attacker to corrupt memory and possibly escalate privileges was found in the mwifiex kernel module while connecting to a malicious wireless network. Can you share exploit details regarding this? https://vulners.com/cve/CVE-2019-3846 https://www.securityfocus.com/bid/69867/exploit : NO exploit there Any tips on how to exploit this. How to build Linux Volatility Profiles With the Compiled Kernel I'm familiar with creating Linux memory profiles as stated here.
However, this is assuming that I have access to the live system, which oftentimes is not the case. I heard there is a way to build the profile with the compiled Linux kernel but I cannot find any documentation on how to do that through googling. Is anyone familiar with building volatility profiles from the compiled kernel and if so willing to provide instructions on how to do so? Thanks! How can a classifier using a Laplacian kernel achieve no error on the input samples? If we have a sample dataset $$S = \{(x_1, y_1),\dots,(x_n,y_n)\}$$ where $$y_i \in \{0,1\}$$, how can we tune $$\sigma$$ such that there is no error on $$S$$ from a classifier using the Laplacian kernel? The Laplacian kernel is $$K(x,x') = \exp\left(-\dfrac{\| x - x'\|}{\sigma}\right)$$ If such a $$\sigma$$ exists, does it mean that if we run hard-SVM with the Laplacian kernel and $$\sigma$$ from the above on $$S$$, we can also find a classifier that separates with no error?
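Not a full answer to the hard-SVM question, but a small R illustration of the mechanism behind the first part: once σ is far below the smallest pairwise distance, the Laplacian kernel matrix of the training points is essentially the identity, so a kernel expansion can reproduce any labeling of S with zero training error. The data, the labels, and the factor 0.01 are invented for the sketch.

```r
set.seed(1)
n <- 30
X <- matrix(rnorm(n * 2), ncol = 2)      # toy 2-D inputs standing in for S
y <- sample(0:1, n, replace = TRUE)      # an arbitrary 0/1 labeling
D <- as.matrix(dist(X))                  # Euclidean distances ||x_i - x_j||
sigma <- 0.01 * min(D[D > 0])            # far smaller than the closest pair of points
K <- exp(-D / sigma)                     # Laplacian kernel matrix: off-diagonals ~ 0
alpha <- solve(K, 2 * y - 1)             # fit f(x) = sum_i alpha_i K(x, x_i) to +/-1 targets
pred <- as.integer(K %*% alpha > 0)
mean(pred == y)                          # 1: no error on the input samples
```

Since this works for every labeling of distinct points, it matches the intuition that a small enough σ drives the training error to zero; whether the resulting hard-SVM solution generalizes well is a separate question.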
2021-10-23 01:44:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 33, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33254024386405945, "perplexity": 1053.4920969269197}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00377.warc.gz"}
https://stats.stackexchange.com/questions/318087/very-low-loadings-in-principal-components
I have a financial dataset with 9500 observations of 4000 variables. Around 2500 of the variables are highly correlated (correlations higher than 0.95). Without removing any variable, I have applied PCA. According to my results, the first 100 components explain 78.8% of the variance. When I check the PCs in detail, I observed that the highest loadings range between 0.01 and 0.05. On the other hand, the original variables having the highest loadings make sense. That is, the same group of variables appears at the top (largest positive loadings) or at the bottom (largest negative loadings) of the PCs. Besides the approach above, I have removed one variable from each pair with a correlation higher than 0.95. This reduced my variable set to 1200. Then I applied PCA again. However, there was no significant change in the loadings. In short, how should I pick the most important variables for each PC when such very low loadings are present? • Is it possible you have some co-linear variables? Perhaps checking for this among your variables that are highly correlated would be a good sanity check. – guy Dec 11 '17 at 13:48 • Yes @guy. I have many variables even having a correlation of 1. I have removed one variable from those pairs having very high correlations and applied PCA again. However, I did not observe a significant increase in coefficients. Dec 11 '17 at 19:29 • Are you speaking of loadings or of eigenvectors? stats.stackexchange.com/q/143905/3277. Also, did you do the analysis based on correlations or on covariances? If the latter, how big are variances in your matrix? Dec 12 '17 at 1:50 RotatedCoefficients = rotatefactors(PCACoefficients(:,5));
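A scaled-down simulation (in R; the sizes and the correlated block are invented, not the actual dataset) shows why loadings of 0.01–0.05 are expected with thousands of variables, and how variables can still be ranked within a PC:

```r
set.seed(42)
n <- 500; p <- 2000                                    # stand-in for 9500 observations x 4000 variables
common <- rnorm(n)
X <- cbind(sapply(1:400, function(i) common + 0.05 * rnorm(n)),  # a block of highly correlated columns
           matrix(rnorm(n * (p - 400)), n, p - 400))   # the rest is noise
pc <- prcomp(X, scale. = TRUE)
v1 <- pc$rotation[, 1]                # PC1 loadings (entries of a unit-norm eigenvector)
sum(v1^2)                             # 1 by construction, so individual entries must be small
max(abs(v1))                          # about 1/sqrt(400) = 0.05 for the correlated block
head(order(abs(v1), decreasing = TRUE), 10)   # rank variables within PC1 by |loading|
cor_style <- v1 * pc$sdev[1]          # rescaled loadings = variable-PC1 correlations for scaled data
range(abs(cor_style[1:400]))          # close to 1 for the correlated block
```

So the absolute size of the raw loadings mostly reflects how many variables share a component, because the squared entries of each eigenvector must sum to one; ranking by |loading| within each PC, or rescaling by the component standard deviation to get correlation-style loadings, is what identifies the important variables.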
2022-01-23 19:36:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4738784730434418, "perplexity": 1152.4873655636818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.5/warc/CC-MAIN-20220123172206-20220123202206-00415.warc.gz"}
https://www.ademcetinkaya.com/2023/01/afwda-applyflow-limited_30.html
Outlook: APPLYFLOW LIMITED is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy : Sell Time series to forecast n: 30 Jan 2023 for (n+16 weeks) Methodology : Statistical Inference (ML) Abstract APPLYFLOW LIMITED prediction model is evaluated with Statistical Inference (ML) and Linear Regression1,2,3,4 and it is concluded that the AFWDA stock is predictable in the short/long term. According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Sell Key Points 1. Which neural network is best for prediction? 2. Should I buy stocks now or wait amid such uncertainty? 3. Operational Risk AFWDA Target Price Prediction Modeling Methodology We consider APPLYFLOW LIMITED Decision Process with Statistical Inference (ML) where A is the set of discrete actions of AFWDA stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Linear Regression)5,6,7 = $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{an}\\ & \vdots \\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & \vdots \\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & \vdots \\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Statistical Inference (ML)) X S(n): → (n+16 weeks) $\sum_{i=1}^{n} a_i$ n:Time series to forecast p:Price signals of AFWDA stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information on how our model works we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? AFWDA Stock Forecast (Buy or Sell) for (n+16 weeks) Sample Set: Neural Network Stock/Index: AFWDA APPLYFLOW LIMITED Time series to forecast n: 30 Jan 2023 for (n+16 weeks) According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Sell X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% IFRS Reconciliation Adjustments for APPLYFLOW LIMITED 1. Sales that occur for other reasons, such as sales made to manage credit concentration risk (without an increase in the assets' credit risk), may also be consistent with a business model whose objective is to hold financial assets in order to collect contractual cash flows. In particular, such sales may be consistent with a business model whose objective is to hold financial assets in order to collect contractual cash flows if those sales are infrequent (even if significant in value) or insignificant in value both individually and in aggregate (even if frequent). If more than an infrequent number of such sales are made out of a portfolio and those sales are more than insignificant in value (either individually or in aggregate), the entity needs to assess whether and how such sales are consistent with an objective of collecting contractual cash flows. Whether a third party imposes the requirement to sell the financial assets, or that activity is at the entity's discretion, is not relevant to this assessment.
An increase in the frequency or value of sales in a particular period is not necessarily inconsistent with an objective to hold financial assets in order to collect contractual cash flows, if an entity can explain the reasons for those sales and demonstrate why those sales do not reflect a change in the entity's business model. In addition, sales may be consistent with the objective of holding financial assets in order to collect contractual cash flows if the sales are made close to the maturity of the financial assets and the proceeds from the sales approximate the collection of the remaining contractual cash flows. 2. To the extent that a transfer of a financial asset does not qualify for derecognition, the transferee does not recognise the transferred asset as its asset. The transferee derecognises the cash or other consideration paid and recognises a receivable from the transferor. If the transferor has both a right and an obligation to reacquire control of the entire transferred asset for a fixed amount (such as under a repurchase agreement), the transferee may measure its receivable at amortised cost if it meets the criteria in paragraph 4.1.2. 3. If a variable-rate financial liability bears interest of (for example) three-month LIBOR minus 20 basis points (with a floor at zero basis points), an entity can designate as the hedged item the change in the cash flows of that entire liability (ie three-month LIBOR minus 20 basis points—including the floor) that is attributable to changes in LIBOR. Hence, as long as the three-month LIBOR forward curve for the remaining life of that liability does not fall below 20 basis points, the hedged item has the same cash flow variability as a liability that bears interest at three-month LIBOR with a zero or positive spread. However, if the three-month LIBOR forward curve for the remaining life of that liability (or a part of it) falls below 20 basis points, the hedged item has a lower cash flow variability than a liability that bears interest at threemonth LIBOR with a zero or positive spread. 4. For the purposes of the transition provisions in paragraphs 7.2.1, 7.2.3–7.2.28 and 7.3.2, the date of initial application is the date when an entity first applies those requirements of this Standard and must be the beginning of a reporting period after the issue of this Standard. Depending on the entity's chosen approach to applying IFRS 9, the transition can involve one or more than one date of initial application for different requirements. *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. Conclusions APPLYFLOW LIMITED is assigned short-term Ba1 & long-term Ba1 estimated rating. APPLYFLOW LIMITED prediction model is evaluated with Statistical Inference (ML) and Linear Regression1,2,3,4 and it is concluded that the AFWDA stock is predictable in the short/long term. 
According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Sell AFWDA APPLYFLOW LIMITED Financial Analysis* Rating (Short-Term / Long-Term Senior): Outlook* Ba1 / Ba1; Income Statement Caa2 / B3; Balance Sheet Ba3 / B3; Leverage Ratios B3 / C; Cash Flow Baa2 / Baa2; Rates of Return and Profitability B1 / B2 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? Prediction Confidence Score Trust metric by Neural Network: 90 out of 100 with 515 signals. References 1. Tibshirani R. 1996. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B 58:267–88 2. N. Bäuerle and A. Mundt. Dynamic mean-risk optimization in a binomial model. Mathematical Methods of Operations Research, 70(2):219–239, 2009. 3. L. Kuyer, S. Whiteson, B. Bakker, and N. A. Vlassis. Multiagent reinforcement learning for urban traffic control using coordination graphs. In Machine Learning and Knowledge Discovery in Databases, European Conference, ECML/PKDD 2008, Antwerp, Belgium, September 15-19, 2008, Proceedings, Part I, pages 656–671, 2008. 4. Bewley, R. M. Yang (1998), "On the size and power of system tests for cointegration," Review of Economics and Statistics, 80, 675–679. 5. A. Shapiro, W. Tekaya, J. da Costa, and M. Soares. Risk neutral and risk averse stochastic dual dynamic programming method. European Journal of Operational Research, 224(2):375–391, 2013. 6. Candès E, Tao T. 2007. The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. 35:2313–51 7. B. Derfer, N. Goodyear, K. Hung, C. Matthews, G. Paoni, K. Rollins, R. Rose, M. Seaman, and J. Wiles. Online marketing platform, August 17 2007. US Patent App. 11/893,765 Frequently Asked Questions Q: What is the prediction methodology for AFWDA stock? A: AFWDA stock prediction methodology: We evaluate the prediction models Statistical Inference (ML) and Linear Regression Q: Is AFWDA stock a buy or sell? A: The dominant strategy among neural network is to Sell AFWDA Stock. Q: Is APPLYFLOW LIMITED stock a good investment? A: The consensus rating for APPLYFLOW LIMITED is Sell and is assigned short-term Ba1 & long-term Ba1 estimated rating. Q: What is the consensus rating of AFWDA stock? A: The consensus rating for AFWDA is Sell. Q: What is the prediction period for AFWDA stock? A: The prediction period for AFWDA is (n+16 weeks)
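The page does not disclose its actual model, so purely as an illustrative sketch of the generic "linear regression on a price series, then a buy/sell decision" shape it describes, here is a minimal R version on simulated data; the simulated price path, the two lags, and the sign rule are all assumptions, not the site's method:

```r
set.seed(1)
price <- 100 + cumsum(rnorm(300))                    # simulated weekly prices, not AFWDA data
ret   <- diff(log(price))                            # log returns
L     <- length(ret)
d     <- data.frame(y = ret[3:L], lag1 = ret[2:(L - 1)], lag2 = ret[1:(L - 2)])
fit   <- lm(y ~ lag1 + lag2, data = d)               # the "Linear Regression" stage
outlook <- sum(predict(fit, newdata = tail(d, 16)))  # crude stand-in for an (n+16 weeks) view
if (outlook < 0) "Sell" else "Buy"                   # decision label from the sign of the forecast
```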
2023-04-01 07:59:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38639843463897705, "perplexity": 5015.923154732995}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.56/warc/CC-MAIN-20230401063607-20230401093607-00721.warc.gz"}
http://www.biodiversity-science.net/CN/10.3724/SP.J.1003.2009.09077
• Paper • ### Comparison of leaf construction costs between three invasive plants and native plants in South China 1. 1 (College of Life Science, South China Normal University, Key Laboratory of Ecology and Environmental Science in Guangdong Higher Education, Guangzhou 510631) 2 (State Key Laboratory of Biocontrol, Sun Yat-Sen University, Guangzhou 510275) • Publication date: 2009-07-20 ### Comparison of leaf construction costs between three invasive species and three native species in South China Liying Song1, 2, Changlian Peng1, Shaolin Peng2* 1. 1 Key Laboratory of Ecology and Environmental Science in Guangdong Higher Education, College of Life Science, South China Normal University, Guangzhou 510631 2 State Key Laboratory of Biocontrol, Sun Yat-Sen University, Guangzhou 510275 • Online: 2009-07-20 Construction cost is a quantifiable measure of energy demand for biomass production, and reflects specific growth strategies. Low construction cost is hypothesized to give plant invaders a growth advantage through efficient energy utilization. In this study, three invasive alien species (Mikania micrantha, Wedelia trilobata and Ipomoea cairica) and their co-occurring or phylogenetically related native species (Paederia scandens, Wedelia chinensis and Ipomoea pes-caprae) in South China were used as materials for comparing leaf construction costs. These three invasive species exhibited lower mass- (CCmass) and area- (CCarea) based leaf construction costs than the corresponding native species had. Taking the three invasive species as a group, the mean leaf CCmass and CCarea of invasive species were 1.17 g glucose/g and 22.34 g glucose/m2, respectively, which were significantly lower than those for the native species (CCmass = 1.32 g glucose/g and CCarea = 36.93 g glucose/m2). The results confirmed the lower construction costs for invasive species compared with native ones, which might be a potential mechanism for the successful invasion of plants. Further, statistical analysis revealed significant correlations between leaf construction cost and leaf carbon content, nitrogen content and ash content (Ash) in invasive species. This suggested that the lower leaf construction costs of invasive species were partly due to their lower carbon and nitrogen content, and higher Ash relative to their corresponding native species.
2020-08-08 14:25:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20188678801059723, "perplexity": 14268.279625573188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737883.59/warc/CC-MAIN-20200808135620-20200808165620-00013.warc.gz"}
https://datascience.stackexchange.com/questions/6645/errortype-of-predictors-in-new-data-do-not-match-that-of-the-training-data
# Error::Type of predictors in new data do not match that of the training data I am building a classification model using randomForest. When trying to predict I get the below error: Type of predictors in new data do not match that of the training data I made sure that the testing and training data have the same levels. I also included levels(test_var) <- levels(train_var) to make sure that the levels are matching. But still I end up getting this error; is there anything else that I should look for? **EDITED ON 3rd Aug 2015** Here is the structure of the training dataset Structure of the test dataset Sapply training data Sapply testing data In order to make sure that the levels are matching between the training and test datasets, I wrote this loop to see if any differences exist between the datasets for(i in 1:28) { if(is.factor(testing[,i])) { print(names((testing[i]))) difference_in_test = setdiff(levels(testing[,i]), levels(training_data[,i])) print(difference_in_test) } } Results of the above for loop, which show that no differences exist between the levels. [1] "VAR1" character(0) [1] "VAR2" character(0) [1] "VAR3" character(0) [1] "VAR4" character(0) [1] "VAR5" character(0) [1] "VAR7"
2021-12-09 04:42:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3144311010837555, "perplexity": 902.0320403921219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363659.21/warc/CC-MAIN-20211209030858-20211209060858-00253.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/college-physics-4th-edition/chapter-14-problems-page-534/14
College Physics (4th Edition) $502,320~J$ of heat must flow into the water. We can find the mass of the water: $m = (1000~kg/m^3)(2.0\times 10^{-3}~m^3) = 2.0~kg$ We can find the required heat: $Q = m~c~\Delta T$ $Q = (2.0~kg)(4186~J/kg~C^{\circ})(60.0~C^{\circ})$ $Q = 502,320~J$ $502,320~J$ of heat must flow into the water.
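A one-line arithmetic check of the worked answer (R, using the same density, volume, specific heat, and 60.0 C° temperature rise stated above):

```r
m <- 1000 * 2.0e-3        # kg: density (kg/m^3) * volume (m^3) = 2.0 kg
Q <- m * 4186 * 60.0      # J:  Q = m * c * dT
Q                         # 502320, matching the 502,320 J above
```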
2020-02-17 09:33:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7216721773147583, "perplexity": 369.8934692182959}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141806.26/warc/CC-MAIN-20200217085334-20200217115334-00493.warc.gz"}
http://cheaptalk.org/tag/statistics/
You are currently browsing the tag archive for the ‘statistics’ tag. Some people were asked to name their favorite number, others were asked to give a random number: More here.  Via Justin Wolfers. Matthew Rabin was here last week presenting his work with Erik Eyster about social learning. The most memorable theme of their their papers is what they call “anti-imitation.” It’s the subtle incentive to do the opposite of someone in your social network even if you have the same preferences and there are no direct strategic effects. You are probably familiar with the usual herding logic. People in your social network have private information about the relative payoff of various actions. You see their actions but not their information. If their action reveals they have strong information in favor of it you should copy them even if you have private information that suggests doing the opposite. Most people who know this logic probably equate social learning with imitation and eventual herding. But Eyster and Rabin show that the same social learning logic very often prescribes doing the opposite of people in your social network. Here is a simple intuition. Start with a different, but simpler problem.  Suppose that your friend makes an investment and his level of investment reveals how optimistic he is. His level of optimism is determined by two things, his prior belief and any private information he received. You don’t care about his prior, it doesn’t convey any information that’s useful to you but you do want to know what information he got. The problem is the prior and the information are entangled together and just by observing his investment you can’t tease out whether he is optimistic because he was optimistic a priori or because he got some bullish information. Notice that if somebody comes and tells you that his prior was very bullish this will lead you to downgrade your own level of optimism. Because holding his final beliefs fixed, the more optimistic was his prior the less optimistic must have been his new information and its that new information that matters for your beliefs. You want to do the opposite of his prior. This is the basic force behind anti-imitation. (By the way I found it interesting that the English language doesn’t seem to have a handy non-prefixed word that means “doing the opposite of.”) Suppose now your friend got his prior beliefs from observing his friend. And now you see not only your friend’s investment level but his friend’s too. You have an incentive to do the opposite of his friend for exactly the same reason as above. This assumes his friend’s action conveys no information of direct relevance for your own decision. And that leads to the prelim question. Consider a standard herding model where agents move in sequence first observing a private signal and then acting.  But add the following twist. Each agent’s signal is relevant only for his action and the action of the very next agent in line.  Agent 3 is like you in the example above.  He wants to anti-imitate agent 1. But what about agents 4,5,6, etc? You are walking back to your office in the rain and your path is lined by a row of trees. You could walk under the trees or you could walk in the open. Which will keep you drier? If it just started raining you can stay dry by walking under the trees. On the other hand, when the rain stops you will be drier walking in the open. Because water will be falling off the leaves of the tree even though it has stopped raining. 
Indeed when the rain is tapering off you are better off out in the open. And when the rain is increasing you are better off under the tree. What about in steady state? Suppose it has been raining steadily for some time, neither increasing nor tapering off. The rain that falls onto the top of the tree gets trapped by leaves. But the leaves can hold only so much water. When they reach capacity water begins to fall off the leaves onto you below. In equilibrium the rate at which water falls onto the top of the tree, which is the same rate it would fall on you if you were out in the open, equals the rate at which water falls off the leaves onto you. Still you are not indifferent: you will stay drier out in the open. Under the tree the water that falls onto you, while constituting an equal total volume as the water that would hit you out in the open, is concentrated in larger drops. (The water pools as it sits on the leaves waiting to be pushed off onto you.) Your clothes will be dotted with fewer but larger water deposits and an equal volume of water spread over a smaller surface area will dry slower. It is important in all this that you are walking along a line of trees and not just standing in one place. Because although the rain lands uniformly across the top of the tree, it is probably channeled outward away from the trunk as it falls from leaf to leaf and eventually below. (I have heard that this is true of Louisiana Oaks.) So the rainfall is uniform out in the open but not uniform under the tree. This means that no matter where you stand out in the open you will be equally wet, but there will be spots under the tree in which the rainfall will be greater than and less than that average. You can stand at the local minimum and be drier than you would out in the open. Why are conditional probabilities so rarely used in court, and sometimes even prohibited?  Here’s one more good reason:  prosecution bias. Suppose that a piece of evidence X is correlated with guilt.  The prosecutor might say, “Conditional on evidence X, the likelihood ratio for guilt versus innoncence is Y, update your priors accordingly.”  Even if the prosecutor is correct in his statistics his claim is dubious. Because the prosecutor sees the evidence for all suspects before deciding which ones to bring to trial.  And the jurors know this.  So the fact that evidence like X exists against this defendant is already partially reflected in the fact that it was this guy they brought charges against and not someone else. If jurors were truly Bayesian (a necessary presumption if we are to consider using probabiilties in court at all) then they would already have accounted for this and updated their priors accordingly before even learning that evidence X exists.  When they are actually told it would necessarily move their priors less than what the statistics imply, perhaps hardly at all, maybe even in the opposite direction. Why does it seem like the other queue is more often moving faster than yours?  Here’s MindHacks: So here we have a mechanism which might explain my queuing woes. The other lanes or queues moving faster is one salient event, and my intuition wrongly associates it with the most salient thing in my environment – me. What, after all, is more important to my world than me. Which brings me back to the universe-victim theory. When my lane is moving along I’m focusing on where I’m going, ignoring the traffic I’m overtaking. When my lane is stuck I’m thinking about me and my hard luck, looking at the other lane. 
No wonder the association between me and being overtaken sticks in memory more. Which is one theory.  But how about this theory:  because it is in fact more often moving faster than yours.  It’s true by definition because out of the total time in your life you spent in queues, the time spent in the slow queues is necessarily longer than the time spent in the fast queues. Dear Northwestern Economics community. I was among the first to submit my bracket and I have already chosen all 16 teams seeded #1 through #4 to be eliminated in the first round of the NCAA tournament. In case you don’t believe me: Now that i got that out of the way, consider the following complete information strategic-form game. Someone will throw a biased coin which comes up heads with probability 5/8. Two people simultaneously make guesses. A pot of money will be divided equally among those who correctly guessed how the coin would land. (Somebody else gets the money if both guess incorrectly.) In a symmetric equilibrium of this game the two players will randomize their guesses in such a way that each earns the same expected payoff. But now suppose that player 1 can publicly announce his guess before player 2 moves. Player 1 will choose heads and player 2’s best reply is to choose tails. By making this announcement, player 1 has increased his payoff to a 5/8 chance of winning the pot of money. This principle applies to just about any variety of bracket-picking game, hence my announcement. In fact in the psychotic version we play in our department, the twisted-brain child of Scott Ogawa, each matchup in the bracket is worth 1000 points to be divided among all who correctly guess the winner, and the overall winner is the one with the most points. Now that all of my colleagues know that the upsets enumerated above have already been taken by me their best responses are to pick the favorites and sure they will be correct with high probability on each, but they will split the 1000 points with everyone else and I will get the full 1000 on the inevitable one or two upsets that will come from that group. The Magic Kingdom of Data: The Walt Disney Co. recently announced its intention to “evolve” the experience of its theme park guests with the ultimate goal of giving everyone an RFID-enabled bracelet to transform their every move through the company’s parks and hotels into a steady stream of information for the company’s databases. …Tracking the flow through the parks will come next. Right now, the park prints out pieces of paper called “FastPasses” to let people get reservations to ride. The wristbands and golden orbs will replace these slips of paper and most of everything else. Every reservation, every purchase, every ride on Dumbo, and maybe every step is waiting to be noticed, recorded, and stored away in a vast database. If you add up the movements and actions, it’s easy to imagine leaving a trail of hundreds of thousands of bytes of data after just one day in the park. That’s a rack of terabyte drives just to record this. Theory question:  Suppose Disney develops a more efficient rationing system than the current one with queues and then adjusts the price to enter the park optimally.  In the end will your waiting time go up or down? Eartip:  Drew Conway Comes from being able to infer that since by now you have not found any clear reason to favor one choice over the other it means that you are close to indifferent and you should pick now, even randomly. It was the way he treated last-second, buzzer-beating three-pointers. 
Not close shots at the end of a game or shot clock, but half-courters at the end of each of the first three quarters. He seemed to be purposely letting the ball go just a half-second after the buzzer went off, presumably in order to shield his shooting percentage from the one-in-100 shot he was attempting. If the shot missed, no harm all around. If it went in? Then the crowd would go nuts and he’d get a few slaps on the back, even if he wouldn’t earn three points for the scoreboard. In Baseball, a sacrifice is not scored as an at-bat and this alleviates somewhat the player/team conflict of interest.  The coaches should lobby for a separate shooting category “buzzer-beater prayers.” As an aside, check out Kevin Durant’s analysis: “It depends on what I’m shooting from the field. First quarter if I’m 4-for-4, I let it go. Third quarter if I’m like 10-for-16, or 10-for-17, I might let it go. But if I’m like 8-for-19, I’m going to go ahead and dribble one more second and let that buzzer go off and then throw it up there. So it depends on how the game’s going.” This seems backward.  100% (4-4) is much bigger than 80% (4/5) whereas the difference between 8 for 19 and 8 for 20 is just 2 percentage points. One reason people over-react to information is that they fail to recognize that the new information is redundant.  If two friends tell you they’ve heard great things about a new restaurant in town it matters whether those are two independent sources of information or really just one. It may be that they both heard it from the same source, a recent restaurant review in the newspaper. When you neglect to account for redundancies in your information you become more confident in your beliefs than is justified. This kind of problem gets worse and worse when the social network becomes more connected because its ever more likely that your two friends have mutual friends. And it can explain an anomaly of psychology:  polarization.  Sandeep in his paper with Peter Klibanoff and Eran Hanany give a good example of polarization. A number of voters are in a television studio before a U.S. Presidential debate. They are asked the likelihood that the Democratic candidate will cut the budget deficit, as he claims. Some think it is likely and others unlikely. The voters are asked the same question again after the debate. They become even more convinced that their initial inclination is correct. It’s inconsistent with Bayesian information processing for groups who observe the same information to systematically move their beliefs in opposite directions.  But polarization is not just the observation that the beliefs move in opposite directions.  It’s that the information accentuates the original disagreement rather than reducing it.  The  groups move in the same opposite directions that caused their disagreement originally. Here’s a simple explanation for it that as far as I know is a new one: the voters fail to recognize that the debate is not generating any new information relative to what they already knew. Prior to the debate the voters had seen the candidate speaking and heard his view on the issue.  Even if these voters had no bias ex ante, their differential reaction to this pre-debate information separates the voters into two groups according to whether they believe the candidate will cut the deficit or not. Now when they see the debate they are seeing the same redundant information again.  If they recognized that the information was redundant they would not move at all.  
But if don’t then they are all going to react to the debate in the same way they reacted to the original pre-debate information. Each will become more confident in his beliefs.  As a result they will polarize even further. Note that an implication of this theory is that whenever a common piece of information causes two people to revise their beliefs in opposite directions it must be to increase polarization, not reduce it. I read this interesting post which talks about spectator sports and the gap between the excitement of watching in person versus on TV. The author ranks hockey as the sport with the largest gap: seeing hockey in person is way more fun than watching on TV. I think I agree with that and generally with the ranking given. (I would add one thing about American Football. With the advent of widescreen TVs the experience has improved a lot. But its still very dumb how they frame the shot to put the line of scrimmage down the center of the screen. The quarterback should be near the left edge of the screen at all times so that we can see who he is looking at downfield.) But there was one off-hand comment that I think the author got completely wrong. I think NBA basketball players might be the best at what they do in all of sports. The thought experiment is to compare players across sports. I.e., are basketball players better at basketball than, say, snooker players are at playing snooker? Unless you count being tall as one of the things NBA basketball players “do” I would say on the contrary that NBA basketball players must be among the worst at what they do in all of professional sports. The reason is simple: because height is so important in basketball, the NBA is drawing the top talent among a highly selected sub-population: those that are exceptionally tall. The skill distribution of the overall population, focusing on those skills that make a great basketball player like coordination, quickness, agility, accuracy; certainly dominate the distribution of the subpopulation from which the NBA draws its players. Imagine that the basket was lowered by 1 foot and a height cap enforced so that in order to be eligible to play you must be 1 foot shorter than the current tallest NBA player (or you could scale proportionally if you prefer.) The best players in that league would be better at what they do than current NBA players. (Of course you need to allow equilibrium to be reached where young players currently too short to be NBA stars now make/receive the investments and training that the current elite do.) Now you might ask why we should discard height as one of the bundle of attributes that we should say a player is “best” at. Aren’t speed, accuracy, etc. all talents that some people are born with and others are not, just like height? Definitely so, but ask yourself this question. If a guy stops playing basketball for a few years and then takes it up again, which of these attributes is he going to fall the farthest behind the cohort who continued to train uninterrupted? He’ll probably be a step slower and have lost a few points in shooting percentage. He won’t be any shorter than he would have been. When you look at a competition where one of the inputs of the production function is an exogenously distributed characteristic, players with a high endowment on that dimension have a head start. This has two effects on the distribution of the (partially) acquired characteristics that enter the production function. First, there is the pure statistical effect I alluded to above. 
If success requires some minimum height then the pool of competitors excludes a large component of the population. There is a second effect on endogenous acquisition of skills. Competition is less intense and they have less incentive to acquire skills in order to be competitive. So even current NBA players are less talented than they would be if competition was less exclusive. So what are the sports whose athletes are the best at what they do? My ranking 1. Table Tennis 2. Soccer 3. Tennis 4. Golf 5. Chess Suppose that what makes a person happy is when their fortunes exceed expectations by a discrete amount (and that falling short of expectations is what makes you unhappy.)  Then simply because of convergence of expectations: 1. People will have few really happy phases in their lives. 2. Indeed even if you lived forever you would have only finitely many spells of happiness. 3. Most of the happy moments will come when you are young. 4. Happiness will be short-lived. 5. The biggest cross-sectional variance in happiness will be among the young. 6. When expectations adjust to the rate at which your fortunes improve, chasing further happiness requires improving your fortunes at an accelerating rate. 7. If life expectancy is increasing and we simply extrapolate expectations into later stages of life we are likely to be increasingly depressed when we are old. 8. There could easily be an inverse relationship between intelligence and happiness. The average voter’s prior belief is that the incumbent is better than the challenger. Because without knowing anything more about either candidate, you know that the incumbent defeated a previous opponent. To the extent that the previous electoral outcome was based on the voters’ information about the candidates this is good news about the current incumbent. No such inference can be made about the challenger. Headline events that occurred during the current incumbent’s term were likely to generate additional information about the incumbent’s fitness for office. The bigger the headline the more correlated that information is going to be among the voters. For example, a significant natural disaster such as Hurricane Katrina or Hurricane Sandy is likely to have a large common effect on how voters’ evaluate the incumbent’s ability to manage a crisis. For exactly this reason, an event like that is bad for the incumbent on average. Because the incumbent begins with the advantage of the prior.  The upside benefit of a good signal is therefore much smaller than the downside risk of a bad signal. As I understand it, this is the theory developed in a paper by Ethan Bueno de Mesquita and Scott Ashworth, who use it to explain how events outside of the control of political leaders (like natural disasters) seem, empirically, to be blamed on incumbents. This pattern emerges in their model not because voters are confused about political accountability, but instead through the informational channel outlined above. It occurs to me that such a model also explains the benefit of saturation advertising. The incumbent unleashes a barrage of ads to drive voters away from their televisions thus cutting them off from information and blunting the associated risks. Note that after the first Obama-Romney debate, Obama’s national poll numbers went south but they held steady in most of the battleground states where voters had already been subjected to weeks of wall-to-wall advertising. 
Economists Andrew Healy, Neil Malhotra, and Cecilia Mo make this argument in afascinating article in the Proceedings of the National Academy of Science. They examined whether the outcomes of college football games on the eve of elections for presidents, senators, and governors affected the choices voters made. They found that a win by the local team, in the week before an election, raises the vote going to the incumbent by around 1.5 percentage points. When it comes to the 20 highest attendance teams—big athletic programs like the University of Michigan, Oklahoma, and Southern Cal—a victory on the eve of an election pushes the vote for the incumbent up by 3 percentage points. That’s a lot of votes, certainly more than the margin of victory in a tight race. And these results aren’t based on just a handful of games or political seasons; the data were taken from 62 big-time college teams from 1964 to 2008. And Andrew Gelman signs off on it. I took a look at the study (I felt obliged to, as it combined two of my interests) and it seemed reasonable to me. There certainly could be some big selection bias going on that the authors (and I) didn’t think of, but I saw no obvious problems. So for now I’ll take their result at face value and will assume a 2 percentage-point effect. I’ll assume that this would be +1% for the incumbent party and -1% for the other party, I assume. Let’s try this: 1. Incumbents have an advantage on average. 2. Higher overall turnout therefore implies a bigger margin for the incumbent, again on average. 3. In sports, the home team has an advantage on average. 4. Conditions that increase overall scoring amplify the advantage of the home team. 5. Good weather increases overall turnout in an election and overall scoring in a football game. So what looks like football causes elections could really be just good weather causes both.  Note well, I have not actually read the paper but I did search for the word weather and it appears nowhere. From Nature news. Calcagno, in contrast, found that 3–6 years after publication, papers published on their second try are more highly cited on average than first-time papers in the same journal — regardless of whether the resubmissions moved to journals with higher or lower impact. Calcagno and colleagues think that this reflects the influence of peer review: the input from referees and editors makes papers better, even if they get rejected at first. Based on my experience with economics journals as an editor and author I highly doubt that.  Authors pay very close attention to referees’ demands when they are asked to resubmit to the same journal because of course those same referees are going to decide on the next round.  On the other hand authors pretty much ignore the advice of referees who have proven their incompetence by rejecting their paper. Instead my hypothesis is that authors with good papers start at the top journals and expect a rejection or two (on average) before the paper finally lands somewhere reasonably good.  Authors of bad papers submit them to bad journals and have them accepted right away.  Drew Fudenberg suggested something similar. Its the same reason the lane going in the opposite direction is always flowing faster. This is a lovely article that works through the logic of conditional proportions. I really admire this kind of lucid writing about subtle ideas. (link fixed now, sorry.) This phenomenon has been called the friendship paradox. 
Its explanation hinges on a numerical pattern — a particular kind of “weighted average” — that comes up in many other situations. Understanding that pattern will help you feel better about some of life’s little annoyances. For example, imagine going to the gym. When you look around, does it seem that just about everybody there is in better shape than you are? Well, you’re probably right. But that’s inevitable and nothing to feel ashamed of. If you’re an average gym member, that’s exactly what you should expect to see, because the people sweating and grunting around you are not average. They’re the types who spend time at the gym, which is why you’re seeing them there in the first place. The couch potatoes are snoozing at home where you can’t count them. In other words, your sample of the gym’s membership is not representative. It’s biased toward gym rats. Nate Silver’s 538 Election Forecast has consistently given Obama a higher re-election probability than InTrade does. The 538 forecast is based on estimating vote probabilities from state polls and simulating the Electoral College. InTrade is just a betting market where Obama’s re-election probability is equated with the market price of a security that pays off $1 in the event that Obama wins. How can we decide which is the more accurate forecast? When you log on in the morning and see that InTrade has Obama at 70% and Nate Silver has him at 80%, on what basis can we say that one of them is right and the other is wrong? At a philosophical level we can say they are both wrong. Either Obama is going to win or Romney is going to win, so the only correct forecast would give one of them a 100% chance of winning. Slightly less philosophically, is there any interpretation of the concept of “probability” relative to which we can judge these two forecasting methods? One way is to define probability simply as the odds at which you would be indifferent between betting one way or the other. InTrade is meant to be the ideal forecast according to this interpretation because of course you can actually go and bet there. If you are not there betting right now then we can infer you agree with the odds. One reason among many to be unsatisfied with this conclusion is that there are many other betting sites where the odds are dramatically different. Then there’s the Frequentist interpretation. Based on all the information we have (especially polls), if this situation were repeated in a series of similar elections, what fraction of those elections would eventually come out in Obama’s favor? Nate Silver is trying to do something like this. But there is never going to be anything close to enough data to be able to test whether his model is getting the right frequency. Nevertheless, there is a way to assess any forecasting method that doesn’t require you to buy into any particular interpretation of probability. Because however you interpret it, mathematically a probability estimate has to satisfy some basic laws. For a process like an election where information arrives over time about an event to be resolved later, one of these laws is called the Martingale property. The Martingale property says this. Suppose you checked the forecast in the morning and it said Obama 70%. And then you sit down to check the updated forecast in the evening. Before you check you don’t know exactly how it’s going to be revised. Sometimes it gets revised upward, sometimes downward. Sometimes by a lot, sometimes just a little. But if the forecast is truly a probability then on average it doesn’t change at all. Statistically we should see that the average forecast in the evening equals the actual forecast in the morning. We can be pretty confident that Nate Silver’s 538 forecast would fail this test. That’s because of how it works. It looks at polls and estimates vote shares based on that information. It is an entirely backward-looking model. If there are any trends in the polls that are discernible from data, these trends will systematically reflect themselves in the daily forecast and that would violate the Martingale property. (There is some trendline adjustment but this is used to adjust older polls to estimate current standing. And there is some forward-looking adjustment but this focuses on undecided voters and is based on general trends. The full methodology is described here.) In order to avoid this problem, Nate Silver would have to do the following. Each day prior to the election his model should forecast what the model is going to say tomorrow, based on all of the available information today (think about that for a moment.) He is surely not doing that. So 70% is not a probability no matter how you prefer to interpret that word. What does it mean then? Mechanically speaking it’s the number that comes out of a formula that combines a large body of recent polling data in complicated ways. It is probably monotonic in the sense that when the average poll is more favorable for Obama then a higher number comes out. That makes it a useful summary statistic. It means that if today his number is 70% and yesterday it was 69% you can logically conclude that his polls have gotten better in some aggregate sense. But to really make the point about the difference between a simple barometer like that and a true probability, imagine taking Nate Silver’s forecast, writing it as a decimal (70% = 0.7) and then squaring it. You still get a “percentage,” but it’s a completely different number. Still it’s a perfectly valid barometer: it’s monotonic. By contrast, for a probability the actual number has meaning beyond the fact that it goes up or down. What about InTrade? Well, if the market is efficient then it must be a Martingale. If not, then it would be possible to predict the day-to-day drift in the share price and earn arbitrage profits. On the other hand the market is clearly not efficient because the profits from arbitraging the different prices at BetFair and InTrade have been sitting there on the table for weeks. In a meeting a guy’s phone goes off because he just received a text and he forgot to silence it. What kind of guy is he? 1. He’s the type who is a slave to his smartphone, constantly texting and receiving texts. Statistically this must be true because conditional on someone receiving a text it is most likely the guy whose arrival rate of texts is the highest. 2. He’s the type who rarely uses his phone for texting and this is the first text he has received in weeks. Statistically this must be true because conditional on someone forgetting to silence his phone it is most likely the guy whose arrival rate of texts is the lowest. My 9 year-old daughter’s soccer games are often high-scoring affairs. Double-digit goal totals are not uncommon. So when her team went ahead 2-0 on Saturday someone on the sideline remarked that 2-0 is not the comfortable lead that you usually think it is in soccer. But that got me thinking. It’s more subtle than that.
Suppose that the game is 2 minutes old and the score is 2-0. If these were professional teams you would say that 2-0 is a good lead but there are still 88 minutes to play and there is a decent chance that a 2-0 lead can be overcome. But if these are 9 year old girls and you know only that the score is 2-0 after 2 minutes, your most compelling inference is that there must be a huge difference in the quality of these two teams and the team that is leading 2-0 is very likely to be ahead 20-0 by the time the game is over. The point is that competition at higher levels is different in two ways. First there is less scoring overall, which tends to make a 2-0 lead more secure. But second there is also lower variance in team quality. So a 2-0 lead tells you less about the matchup than it does at lower levels. OK, so a 2-0 lead is a more secure lead for 9 year olds when 95% of the game remains to be played (they play for 40 minutes). But when 5% of the game remains to be played, a 2-0 lead is almost insurmountable at the professional level but can easily be upset in a game among 10 year olds. So where is the flipping point? How much of the game must elapse so that a 2-0 lead leads to exactly the same conditional probability that the 9 year olds hold on to the lead and win as the professionals? Next question. Let F be the fraction of the game remaining where the 2-0 lead flipping point occurs. Now suppose we have a 3-0 lead with F remaining. Who has the advantage now? And of course we want to define F(k) to be the flipping point of a k-nil lead and we want to take the infinity-nil limit to find the flipping point F(infinity). Does it converge to zero or one, or does it stay in the interior? Act as if you have log utility and with probability 1 your wealth will converge to infinity. Sergiu Hart presented this paper at Northwestern last week. Suppose you are going to be presented with an infinite sequence of gambles. Each has positive expected return but also a positive probability of a loss. You have to decide which gambles to accept and which gambles to reject. You can also purchase fractions of gambles: exposing yourself to some share $\alpha$ of its returns. Your wealth accumulates (or depreciates) along the way as you accept gambles and absorb their realized returns. Here is a simple investment strategy that guarantees infinite wealth. First, for every gamble $g$ that appears you calculate the wealth level such that an investor with that as his current wealth and who has logarithmic utility for final wealth would be just indifferent between accepting and rejecting the gamble. Let’s call that critical wealth level $R(g)$. In particular, such an investor strictly prefers to accept $g$ if his wealth is higher than $R(g)$ and strictly prefers to reject it if his wealth is below that level. Next, when your wealth level is actually $W$ and you are presented gamble $g$, you find the maximum share of the gamble that an investor with logarithmic utility would be willing to take. In particular, you determine the share of $g$ such that the critical wealth level $R(\alpha g)$ of the resulting gamble $\alpha g$ is exactly $W$. Now the sure-thing strategy for your hedge fund is the following: purchase the share $\alpha$ of the gamble $g$, realize its returns, wait for the next gamble, repeat. If you follow this rule then no matter what sequence of gambles appears you will never go bankrupt and your wealth will converge to infinity.
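To make the rule concrete, here is a small numerical sketch in R. The binary gamble and the numbers are made up for illustration; this is not Foster and Hart's own code, just the indifference condition for a log-utility investor solved numerically.

```r
# Critical wealth R(g): the wealth at which a log-utility investor is exactly
# indifferent about a gamble that pays +a with probability p and -b otherwise
critical_wealth <- function(a, b, p) {
  f <- function(R) p * log(R + a) + (1 - p) * log(R - b) - log(R)
  uniroot(f, lower = b * (1 + 1e-9), upper = 1e7)$root
}

# Example: win 120 or lose 100 with equal probability  =>  R(g) = 600
Rg <- critical_wealth(a = 120, b = 100, p = 0.5)

# Because the critical wealth scales linearly with the gamble,
# the share alpha with R(alpha * g) equal to current wealth W is W / R(g)
W <- 150
alpha <- W / Rg   # accept this fraction of the gamble, and no more
```

Here the critical wealth works out to 600, so at wealth 150 the rule takes a one-quarter share; at any wealth above 600 the whole gamble is acceptable.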
What’s more, this is in some sense the most aggressive investment strategy you can take without running the risk of going bankrupt. Foster and Hart show that for any investor who is willing to accept some gamble $g$ at a wealth level $W$ below the critical wealth level $R(g)$, there is a sequence of gambles that will drive that investor to bankruptcy. (This last result assumes that the investor is using a “scale free” investment strategy, one whose acceptance decisions scale proportionally with wealth. That’s an unappealing assumption but there is a convincing version of the result without this assumption.) In basketball the team benches are near the baskets on opposite sides of the half court line. The coaches roam their respective halves of the court shouting directions to their team. As in other sports the teams switch sides at halftime but the benches stay where they were. That means that for half of the game the coaches are directing their defenses and for the other half they are directing their offenses. If coaching helps then we should see more scoring in the half where the offenses are receiving direction. This could easily be tested. Here is an excellent rundown of some soul searching in the neuroscience community regarding statistical significance. The standard method of analyzing brain scan data apparently involves something akin to data mining but the significance tests use standard single-hypothesis p-values. One historical fudge was to keep to uncorrected thresholds, but instead of a threshold of p=0.05 (or 1 in 20) for each voxel, you use p=0.001 (or 1 in a 1000). This is still in relatively common use today, but it has been shown, many times, to be an invalid attempt at solving the problem of just how many tests are run on each brain-scan. Poldrack himself recently highlighted this issue by showing a beautiful relationship between a brain region and some variable using this threshold, even though the variable was entirely made up. In a hilarious earlier version of the same point, Craig Bennett and colleagues fMRI scanned a dead salmon, with a task involving the detection of the emotional state of a series of photos of people. Using the same standard uncorrected threshold, they found two clusters of activation in the deceased fish’s nervous system, though, like the Poldrack simulation, proper corrected thresholds showed no such activations. Biretta blast: Marginal Revolution. So there was this famous experiment and just recently a new team of researchers tried to replicate it and they could not. Quoting Alex Tabarrok: You will probably not be surprised to learn that the new paper fails to replicate the priming effect. As we know from Why Most Published Research Findings are False (also here), failure to replicate is common, especially when sample sizes are small. There’s a lot more at the MR link; you should check it out. But here’s the thing. If most published research findings are false then which one is the false one, the original or the failed replication? Have you noticed that whenever a failed replication is reported, it is reported with all of the faith and fanfare that the original, now apparently disproven study was afforded? All we know is that one of them is wrong; can we really be sure which? If I have to decide which to believe in, my money’s on the original. Think publication bias and ask yourself which is likely to be larger: the number of unpublished experiments that confirmed the original result or the number of unpublished results that didn’t.
Here’s a model. Experimenters are conducting a hidden search for results and they publish as soon as they have a good one. For the original experimenter a good result means a positive result. They try experiment A and it fails so they conclude that A is a dead end, shelve it and turn to something new, experiment B. They continue until they hit on a positive result, experiment X and publish it. Given the infinity of possible original experiments they could try, it is very likely that when they come to experiment X they were the first team to ever try it. By contrast, Team-Non-Replicate searches among experiments that have already been published, especially the most famous ones.  And for them a good result is a failure to replicate. That’s what’s going to get headlines. Since X is a famous experiment it’s not going to take long before they try that. They will do a pilot experiment and see if they can fail to replicate it. If they fail to fail to replicate it, they are going to shelve it and go on to the next famous experiment. But then some other Team-Non-Replicate, who has no way of knowing this is a dead-end, is going to try experiment X, etc. This is going to continue until someone succeeds in failing to replicate. When that’s all over let’s count the number of times X failed:  1.  The number of times X was confirmed equals 1 plus the number of non-non-replications before the final successful failure. Email is the superior form of communication as I have argued a few times before, but it can sure aggravate your self-control problems. I am here to help you with that. As you sit in your office working, reading, etc., the random email arrival process is ticking along inside your computer. As time passes it becomes more and more likely that there is email waiting for you and if you can’t resist the temptation you are going to waste a lot of time checking to see what’s in your inbox.  And it’s not just the time spent checking because once you set down your book and start checking you won’t be able to stop yourself from browsing the web a little, checking twitter, auto-googling, maybe even sending out an email which will eventually be replied to thereby sealing your fate for the next round of checking. One thing you can do is activate your audible email notification so that whenever an email arrives you will be immediately alerted. Now I hear you saying “the problem is my constantly checking email, how in the world am i going to solve that by setting up a system that tells me when email arrives? Without the notification system at least I have some chance of resisting the temptation because I never know for sure that an email is waiting.” Yes, but it cuts two ways.  When the notification system is activated you are immediately informed when an email arrives and you are correct that such information is going to overwhelm your resistance and you will wind up checking. But, what you get in return is knowing for certain when there is no email waiting for you. Ok, now that you’ve got your answer let’s figure out whether you should use your mailbeep or not.  The first thing to note is that the mail arrival process is a Poisson process:  the probability that an email arrives in a given time interval is a function only of the length of time, and it is determined by the arrival rate parameter r.  If you receive a lot of email you have a large r, if the average time spent between arrivals is longer you have a small r.  
In a Poisson process, the elapsed time before the next email arrives is a random variable and it is governed by the exponential distribution. Let’s think about what will happen if you turn on your mail notifier.  Then whenever there is silence you know for sure there is no email, p=0 and you can comfortably go on working temptation free. This state of affairs is going to continue until the first beep at which point you know for sure you have mail (p=1) and you will check it.  This is a random amount of time, but one way to measure how much time you waste with the notifier on is to ask how much time on average will you be able to remain working before the next time you check.  And the answer to that is the expected duration of the exponential waiting time of the Poisson process.  It has a simple expression: Expected time between checks with notifier on = $\frac{1}{r}$ Now let’s analyze your behavior when the notifier is turned off.  Things are very different now.  You are never going to know for sure whether you have mail but as more and more time passes you are going to become increasingly confident that some mail is waiting, and therefore increasingly tempted to check. So, instead of p lingering at 0 for a spell before jumping up to 1 now it’s going to begin at 0 starting from the very last moment you previously checked but then steadily and continuously rise over time converging to, but never actually equaling 1.  The exponential distribution gives the following formula for the probability at time T that a new email has arrived. Probability that email arrives at or before a given time T = $1 - e^{-rT}$ Now I asked you what is the p* above which you cannot resist the temptation to check email.  When you have your notifier turned off and you are sitting there reading, p will be gradually rising up to the point where it exceeds p* and right at that instant you will check.  Unlike with the notification system this is a deterministic length of time, and we can use the above formula to solve for the deterministic time at which you succumb to temptation.  It’s given by Time between checks when the notifier is off = $\frac{- log (1 - p^*)}{r}$ And when we compare the two waiting times we see that, perhaps surprisingly, the comparison does not depend on your arrival rate r (it appears in the numerator of both expressions so it will cancel out when we compare them.) That’s why I didn’t ask you that, it won’t affect my prescription (although if you receive as much email as I do, you have to factor in that the mail beep turns into a Geiger counter and that may or may not be desirable for other reasons.)  All that matters is your p* and by equating the two waiting times we can solve for the crucial cutoff value that determines whether you should use the beeper or not. The beep increases your productivity iff your p* is smaller than $\frac{e-1}{e}$ This is about .63 so if your p* is less than .63 meaning that your temptation is so strong that you cannot resist checking any time you think that there is at least a 63% chance there is new mail waiting for you then you should turn on your new mail alert.  If you are less prone to temptation then yes you should silence it. This is life-changing advice and you are welcome. Now, for the vapor mill and feeling free to profit, we do not content ourselves with these two extreme mechanisms.  We can theorize what the optimal notification system would be.  
It’s very counterintuitive to think that you could somehow “trick” yourself into waiting longer for email but in fact even though you are the perfectly-rational-despite-being-highly-prone-to-temptation person that you are, you can.  I give one simple mechanism, and some open questions below the fold. It’s the canonical example of reference-dependent happiness. Someone from the Midwest imagines how much happier he would be in California but when he finally has the chance to move there he finds that he is just as miserable as he was before. But can it be explained by a simple selection effect? Suppose that everyone who lives in the Midwest gets a noisy but unbiased signal of how happy they would be in California. Some overestimate how happy they would be and some underestimate it. Then they get random opportunities to move. Who is going to take that opportunity? Those who overestimate how happy they will be.  And so when they arrive they are disappointed. It also explains why people who are forced to leave California, say for job-related reasons, are pleasantly surprised at how happy they can be in the Midwest. Since they hadn’t moved voluntarily already, its likely that they underestimated how happy they would be. These must be special cases of this paper by Eric van den Steen, and its similar to the logic behind Lazear’s theory behind the Peter Principle.  (For the latter link I thank Adriana Lleras-Muney.) In many situations, such reinforcement learning is an essential strategy, allowing people to optimize behavior to fit a constantly changing situation. However, the Israeli scientists discovered that it was a terrible approach in basketball, as learning and performance are “anticorrelated.” In other words, players who have just made a three-point shot are much more likely to take another one, but much less likely to make it: What is the effect of the change in behaviour on players’ performance? Intuitively, increasing the frequency of attempting a 3pt after made 3pts and decreasing it after missed 3pts makes sense if a made/missed 3pts predicted a higher/lower 3pt percentage on the next 3pt attempt. Surprizingly [sic], our data show that the opposite is true. The 3pt percentage immediately after a made 3pt was 6% lower than after a missed 3pt. Moreover, the difference between 3pt percentages following a streak of made 3pts and a streak of missed 3pts increased with the length of the streak. These results indicate that the outcomes of consecutive 3pts are anticorrelated. This anticorrelation works in both directions. as players who missed a previous three-pointer were more likely to score on their next attempt. A brick was a blessing in disguise. The underlying study, showing a “failure of reinforcement learning” is here. Suppose you just hit a 3-pointer and now you are holding the ball on the next possession. You are an experienced player (they used NBA data), so you know if you are truly on a hot streak or if that last make was just a fluke. The defense doesn’t. What the defense does know is that you just made that last 3-pointer and therefore you are more likely to be on a hot streak and hence more likely than average to make the next 3-pointer if you take it. Likewise, if you had just missed the last one, you are less likely to be on a hot streak, but again only you would know for sure. Even when you are feeling it you might still miss a few. That means that the defense guards against the three-pointer more when you just made one than when you didn’t. Now, back to you. 
You are only going to shoot the three pointer again if you are really feeling it. That’s correlated with the success of your last shot, but not perfectly. Thus, the data will show the autocorrelation in your 3-point shooting. Furthermore, when the defense is defending the three-pointer you are less likely to make it, other things equal. Since the defense is correlated with your last shot, your likelihood of making the 3-pointer is also correlated with your last shot. But inversely this time:  if you made the last shot the defense is more aggressive so conditional on truly being on a hot streak and therefore taking the next shot, you are less likely to make it. (Let me make the comparison perfectly clear:  you take the next shot if you know you are hot, but the defense defends it only if you made the last shot.  So conditional on taking the next shot you are more likely to make it when the defense is not guarding against it, i.e. when you missed the last one.) You shoot more often and miss more often conditional on a previous make. Your private information about your make probability coupled with the strategic behavior of the defense removes the paradox. It’s not possible to “arbitrage” away this wedge because whether or not you are “feeling it” is exogenous. I write all the time about strategic behavior in athletic competitions.  A racer who is behind can be expected to ease off and conserve on effort since effort is less likely to pay off at the margin.  Hence so will the racer who is ahead, etc.  There is evidence that professional golfers exhibit such strategic behavior, this is the Tiger Woods effect. We may wonder whether other animals are as strategically sophisticated as we are.  There have been experiments in which monkeys play simple games of strategy against one another, but since we are not even sure humans can figure those out, that doesn’t seem to be the best place to start looking. I would like to compare how humans and other animals behave in a pure physical contest like a race.  Suppose the animals are conditioned to believe that they will get a reward if and only if they win a race.  Will they run at maximum speed throughout regardless of their position along the way?  Of course “maximum speed” is hard to define, but a simple test is whether the animal’s speed at a given point in the race is independent of whether they are ahead or behind and by how much. And if the animals learn that one of them is especially fast, do they ease off when racing against her?  Do the animals exhibit a tiger Woods effect? There are of course horse-racing data.  That’s not ideal because the jockey is human.  Still there’s something we can learn from horse racing.  The jockey does not internalize 100% of the cost of the horse’s effort.  Thus there should be less strategic behavior in horse racing than in races between humans or between jockey-less animals.  Dog racing?  Does that actually exist? And what if a dog races against a human, what happens then? In the past few weeks Romney has dropped from 70% to under 50% and Gingrich has rocketed to 40% on the prediction markets.  And in this time Obama for President has barely budged from its 50% perch.  As someone pointed out on Twitter (I forget who, sorry) this is hard to understand. 
For example if you think that in this time there has been no change in the conditional probabilities that either Gingrich or Romney beats Obama in the general election, then these numbers imply that the market thinks that those conditional probabilities are the same.  Conversely, If you think that Gingrich has risen because his perceived odds of beating Obama have risen over the same period, then it must be that Romney’s have dropped in precisely the proportion to keep the total probability of a GOP president constant. It’s hard to think of any public information that could have these perfectly offsetting effects.  Here’s the only theory I could come up with that is consistent with the data.  No matter who the Republican candidate is, he has a 50% chance of beating Obama.  This is just a Downsian prediction.  The GOP machine will move whoever it is to a median point in the policy space.  But, and here’s the model, this doesn’t imply that the GOP is indifferent between Gingrich and Romney. While any candidate, no matter what his baggage, can be repositioned to the Downsian sweet spot, the cost of that repositioning depends on the candidate, the opposition, and the political climate.  The swing from Romney to Gingrich reflects new information about these that alters the relative cost of marketing the two candidates.  Gingrich has for some reason gotten relatively cheaper. I didn’t say it was a good theory. Update:  Rajiv Sethi reminded me that the tweet was from Richard Thaler. (And see Rajiv’s comment below.) Stefan Lauermann points me to a new paper, this is from the abstract: Our analysis shows that both stake size and communication have a significant impact on the player’s likelihood to cooperate. In particular, we observe a negative correlation between stake size and cooperation. Also certain gestures, as handshakes, decrease the likelihood to cooperate. But, if players mutually promise each other to cooperate and in addition shake hands on it, the cooperation rate increases. Measuring social influence is notoriously difficult in observational data.  If I like Tin Hat Trio and so do my friends is it because I influenced them or we just have similar tastes, as friends often do.  A controlled experiment is called for.  It’s hard to figure out how to do that.  How can an experimenter cause a subject to like something new and then study the effect on his friends? Online social networks open up new possibilities.  And here is the first experiment I came across that uses Facebook to study social influence, by Johan Egebark and Mathias Ekstrom.  If one of your friends “likes” an item on Facebook, will it make you like it too? Making use of five Swedish users’ actual accounts, we create 44 updates in total during a seven month period.1 For every new update, we randomly assign our user’s friends into either a treatment or a control group; hence, while both groups are exposed to identical status updates, treated individuals see the update after someone (controlled by us) has Liked it whereas individuals in the control group see it without anyone doing so. We separate between three different treatment conditions: (i) one unknown user Likes the update, (ii) three unknown users Like the update and (iii) one peer Likes the update. 
Our motivation for altering treatments is that it enables us to study whether the number of previous opinions as well as social proximity matters. The result from this exercise is striking: whereas the first treatment condition left subjects unaffected, both the second and the third more than doubled the probability of Liking an update, and these effects are statistically significant.
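For a rough sense of how one would test an effect like this, here is a sketch in R with invented counts (the study's actual data, treatment sizes and tests may well differ): a two-sample proportion test comparing control updates with updates that already carried one peer's Like.

```r
# Hypothetical counts of friends who Liked an update, by condition
likes   <- c(control = 9, peer_like = 21)
exposed <- c(control = 250, peer_like = 240)

# Test whether the Like-probability differs between the two conditions
prop.test(likes, exposed)
```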
http://www.strengejacke.de/sjPlot/sjp.int/
# sjp.int {sjPlot}

This document shows examples for using the sjp.int function of the sjPlot package.

Resources:

• Developer snapshot at GitHub
• Submission of bug reports and issues at GitHub

## Data initialization

```r
library(sjPlot)
library(sjmisc)
data(efc)

# set basic theme options
sjp.setTheme(theme = "539",
             axis.title.size = .85,
             axis.textsize = .85,
             legend.size = .8,
             geom.label.size = 3.5)
```

## Plotting interactions of regression models

The sjp.int function plots regression lines (predicted values) or probability lines (predicted probabilities) of significant interaction terms of fitted models. This helps to better understand the effects of moderation in regression models. The function accepts the following fitted model classes:

• linear models (lm)
• generalized linear models (glm)
• linear mixed effects models (lme4::lmer)
• generalized linear mixed effects models (lme4::glmer)
• linear mixed effects models (nlme::lme, but only for type = "eff")
• generalized least squares models (nlme::gls, but only for type = "eff")
• panel data estimators (plm::plm)

Note that beside the interaction terms, the single predictors of each interaction (main effects) must also be included in the fitted model. Thus, lm(dep ~ pred1 * pred2) will work, but lm(dep ~ pred1:pred2) won't!

## Types of effect displays

The sjp.int function has three different types of interaction (or moderation) effects that can be displayed. Use the type argument to select the effect type.

### type = "cond"

Plots the effective change or impact (conditional effect) on a dependent variable of a moderation effect, as described in Grace-Martin K: Clarifications on Interpreting Interactions in Regression, i.e. the difference of the moderation effect on the dependent variable in presence and absence of the moderating effect (simple slope plot or conditional effect, see Hayes 2012). All remaining predictors are set to zero (i.e. ignored and not adjusted for). Hence, this plot type may be used especially for - but is of course not restricted to - binary or dummy coded moderator values. This type does not show the overall effect of the interaction on the outcome Y. Use type = "eff" for effect displays similar to the effect function from the effects package.

### type = "eff"

Plots the overall effects (marginal effects) of the interaction, with all remaining covariates set to the mean. Effects are calculated using the effect function from the effects package.

### type = "emm"

Plots the estimated marginal means of interactions with categorical variables (this was the former sjp.emm.int function, which is now deprecated). This plot type plots estimated marginal means (also called least-squares means) of (significant) interaction terms, e.g. in two-way repeated-measures ANOVA or ANCOVA. It may be used, for example, to plot differences in interventions between control and treatment groups over multiple time points, as described here.
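To make the distinction between the first two types concrete, here is a rough base-R sketch of the idea (not the package's internal code; the simulated data and the names x, m, z, y are hypothetical):

```r
# Simulated data: predictor x, moderator m, extra covariate z
set.seed(1)
d <- data.frame(x = runif(200, 0, 10),
                m = runif(200, 0, 5),
                z = rnorm(200))
d$y <- 1 + 0.5 * d$x + 0.8 * d$m + 0.4 * d$x * d$m + 0.3 * d$z + rnorm(200)

fit_sketch <- lm(y ~ x * m + z, data = d)

# x across its range, moderator m at its lower and upper bound
grid <- expand.grid(x = seq(min(d$x), max(d$x), length.out = 50),
                    m = range(d$m))

# type = "cond" idea: remaining predictor z fixed at zero
grid$pred_cond <- predict(fit_sketch, newdata = transform(grid, z = 0))
# type = "eff" idea: remaining covariates held at their mean
grid$pred_eff  <- predict(fit_sketch, newdata = transform(grid, z = mean(d$z)))
```

The lines sjp.int draws correspond to predictions of this kind, with the moderator fixed at the values chosen via mdrt.values (see below).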
## Example for type = "cond"

### Fitting a linear model

```r
# Note that the data sets used in the following examples may
# not be perfectly suitable for fitting linear models.
# fit "dummy" model.
fit <- lm(weight ~ Diet * Time, data = ChickWeight)
```

Let's take a look at the model summary to see the estimates of the interactions:

```r
# show summary to see significant interactions
summary(fit)
```

```
## 
## Call:
## lm(formula = weight ~ Diet * Time, data = ChickWeight)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -135.425  -13.757   -1.311   11.069  130.391 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  30.9310     4.2468   7.283 1.09e-12 ***
## Diet2        -2.2974     7.2672  -0.316  0.75202    
## Diet3       -12.6807     7.2672  -1.745  0.08154 .  
## Diet4        -0.1389     7.2865  -0.019  0.98480    
## Time          6.8418     0.3408  20.076  < 2e-16 ***
## Diet2:Time    1.7673     0.5717   3.092  0.00209 ** 
## Diet3:Time    4.5811     0.5717   8.014 6.33e-15 ***
## Diet4:Time    2.8726     0.5781   4.969 8.92e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 34.07 on 570 degrees of freedom
## Multiple R-squared:  0.773,  Adjusted R-squared:  0.7702 
## F-statistic: 277.3 on 7 and 570 DF,  p-value: < 2.2e-16
```

### Plot conditional effects of interactions in linear regressions

The first example is quite simple. It produces three plots, because Diet, as a factor, has three levels in addition to the reference level; thus we get the interaction of Time with each of these three levels of Diet.

```r
# plot regression line of interaction terms
sjp.int(fit, type = "cond")
```

### Explaining the output

By default, the function examines both interaction terms and checks which term has the larger range. The predictor with the larger range is plotted along the x-axis. In this example, Time ranges from 0 to 21, while Diet is dichotomous (since it is split into its factor levels). The predictor with the lower range is used as grouping variable, indicating the different lines. By default, the lowest value (labelled lower bound in the plot) of this predictor is used to compute the effect (or change, or impact) of the interaction, indicating the absence of an interaction (no moderation effect from predictor 2 on predictor 1). This is the red line. Second, the highest value of this predictor is used to calculate the effect (or change, or impact) of the interaction, indicating the presence of an interaction (or the highest moderation effect from predictor 2 on predictor 1). This is the blue line. Hence, this plot type may be used especially for binary or dummy coded moderator values. To better understand the formula behind this, please refer to these two blog posts from Karen Grace-Martin: Interpreting Interactions in Regression and Clarifications on Interpreting Interactions in Regression.

## Example for type = "eff"

### Effect plot of fitted model

Using type = "eff" computes the interaction effects based on the effect function from the effects package. With this approach, all covariates are set to the mean, and both main effects of the interaction term are used to calculate the overall mean of the dependent variable.

```r
# plot regression line of interaction terms
sjp.int(fit, type = "eff")
```

### Explaining the output

The eff-type produces one plot, where all factor levels of Diet (i.e. all interaction effects) are included (note that the effect function uses the first interaction term, in this case Diet, as moderator variable; if you want to swap the moderator with the predictor on the x-axis, i.e. the second interaction term, use the argument swap.pred). Each line in the plot represents one factor level of the moderator variable (i.e. each line stands for one interaction effect). To better understand the formula behind this, please refer to this paper: Fox J (2003) Effect displays in R for generalised linear models. Journal of Statistical Software 8:15, 1–27, http://www.jstatsoft.org/v08/i15/. In short, you see the unadjusted relation between response and interaction term, in presence and absence of the moderating effect.
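Since type = "eff" builds on the effects package, you can cross-check the underlying numbers directly. A short sketch (the default plot method comes from lattice, so the styling differs from sjPlot):

```r
library(effects)

# marginal effects for the Diet x Time interaction of the model above
eff <- effect("Diet:Time", fit)

summary(eff)        # predicted weights with confidence intervals
as.data.frame(eff)  # the underlying grid, e.g. for a custom ggplot
plot(eff)           # default effect display
```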
### Difference between type = “cond” and type = “eff” Comparing the overall interaction effect on the dependent variable (type = "eff") and the mere impact of the moderation effect (type = "cond") show the same tendencies. It’s a simple variation in the regression slopes: type = "cond" shows the (mere) impact of the moderation effect. The differences between the slopes (indicated by the shaded areas) are related to the different slopes in the overall effect. ### One plot for each interaction term Use facet.grid = TRUE to plot each interaction term in a new plot. # plot regression line of interaction terms sjp.int(fit, type = "eff", facet.grid = TRUE) ## Showing value labels in the plot With the show.values argument, you can also show value labels of the predicted values. sjp.int(fit, type = "cond", show.values = TRUE) ## Adding confidence regions to the plot With the show.ci argument, you can add confidence intervals regions to the plots. However, this argument does not work for type = "cond". sjp.int(fit, type = "eff", show.values = TRUE, show.ci = TRUE) ## Choose the values of continuous moderators intentionally By default (see above), the lower and upper bound (lowest and highest value) of the moderator are used to plot interactions. If the moderator is a continuous variable, you may also use other values instead of lowest/highest. One suggestion is to use the mean as well as one standard deviation above and below the mean value. You can do this with the mdrt.values paramter, with mdrt.values = "meansd". First, we fit another dummy model. mydf <- data.frame(usage = efc$tot_sc_e, sex = efc$c161sex, education = efc$c172code, burden = efc$neg_c_7, barthel = efc$barthtot) # convert gender predictor to factor mydf$sex <- relevel(factor(mydf$sex), ref = "2") # fit "dummy" model fit <- lm(usage ~ .*., data = mydf) # show model summary summary(fit) ## ## Call: ## lm(formula = usage ~ . * ., data = mydf) ## ## Residuals: ## Min 1Q Median 3Q Max ## -2.2364 -0.8478 -0.2685 0.3086 8.1836 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) -0.1118091 0.7567683 -0.148 0.8826 ## sex1 0.5583579 0.5725374 0.975 0.3297 ## education 0.2242707 0.3434615 0.653 0.5140 ## burden 0.0432757 0.0437340 0.990 0.3227 ## barthel 0.0097000 0.0072795 1.333 0.1831 ## sex1:education -0.0127309 0.1560235 -0.082 0.9350 ## sex1:burden -0.0236557 0.0290406 -0.815 0.4156 ## sex1:barthel -0.0035729 0.0038240 -0.934 0.3504 ## education:burden 0.0150701 0.0185970 0.810 0.4180 ## education:barthel -0.0026358 0.0026749 -0.985 0.3247 ## burden:barthel -0.0007119 0.0003969 -1.794 0.0732 . ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 1.232 on 804 degrees of freedom ## (93 observations deleted due to missingness) ## Multiple R-squared: 0.04751, Adjusted R-squared: 0.03567 ## F-statistic: 4.011 on 10 and 804 DF, p-value: 2.22e-05 As we can see, the interaction terms are not significant, the one closest to significance has a p-value of 0.0732. By default, only interaction terms with a p-value lower than or equal to 0.1 are plotted. However, we can change this by adjusting the p-level sensivity with the plevel argument. 
# show mean and standard deviation values for moderator effect # and show all interactions with p-values up to 0.1 sjp.int(fit, type = "cond", mdrt.values = "meansd") ## Following non-significant interaction terms were omitted from the output: ## sex1:education ## sex1:burden ## sex1:barthel ## education:burden ## education:barthel ## ## Use plevel to show more interaction terms. In the above figure we can see the moderation effect (interaction) of burden of care on Barthel index (functional dependency scale of people in need of care) toward usage of supporting services. In general, with increasing Barthel index (i.e. people in need of care are less dependent) and absence of moderating effect (i.e. lower sd of burden of care), service usage increases. If the moderation effect increases, i.e. we have higher burden of care, service usage decreases. In short, a) decreasing dependency (higher Barthel index), moderated by higher burden, has a stronger impact on decreasing service usage, while b) decreasing dependency (higher Barthel index), moderated by lower burden, has a weaker impact on decreasing service usage (or a stronger impact on increasing service usage). Looking at the overall interaction effect on the dependent variable (type = "eff", see next example) shows the same tendencies. It’s a simple variation in the regression slopes: type = "cond" shows the (isolated) impact of the moderation effect. The differences between the slopes (indicated by the shaded areas) are related to the different slopes in the overall effect (shown in the next example). ### Different moderator values for effect display plot type The argument options for the mdrt.values also apply to type = "eff". While the default effect-function from the effects-package automatically selects a pretty range for continuous variables, the sjp.int function sticks to the mdrt.values-options, i.e. using min/max values of the moderator, zero/max or mean/+-sd. The p-level sensivity (plevel) is not needed for type = "eff", as this option always plots all interactions found. By default, this behaviour would result in six plots. To select a specific plot only, use the int.plot.index argument and specify the plot number. # show mean and standard deviation values for moderator effect sjp.int(fit, type = "eff", mdrt.values = "meansd", int.plot.index = 6) ## Interaction terms in generalized linear models Interpreting interaction terms in generalized linear models is a bit tricky. Instead of working with, e.g. odds ratios, the sjp.int function transforms estimates into probabilities or incidents rates and plots the predicted values of the interaction effect. First create some sample data and fit a binary logistic regression model: # load library for sample data # and getting value labels library(sjmisc) # load sample data data(efc) # create binary response care.burden <- dicho(efc$neg_c_7) # create data frame for fitted model mydf <- data.frame(care.burden = care.burden, sex = to_factor(efc$c161sex), barthel = efc$barthtot) # fit model fit <- glm(care.burden ~ sex * barthel, data = mydf, family = binomial(link = "logit")) # plot interaction, increase p-level sensivity sjp.int(fit, type = "cond", legend.labels = get_labels(efc$c161sex), plevel = 1) What we see in the above figure are the predicted probabilities of the outcome (care burden) by Barthel index (predictor) with no moderating effect of sex (red line). 
And we can see the predicted probabilities of the outcome considering the interaction effect (moderation) of sex on Barthel index (blue line). In general, care burden decreases with increasing functional status (independence of the cared-for person); however, male care givers tend to perceive a higher care burden than women. Another way to analyse the moderator effect of sex on functional status and care burden is to use box plots. The following figure “validates” the results we got from the above figure. sjp.grpfrq(mydf$barthel, mydf$care.burden, intr.var = mydf$sex, legend.labels = c("low burden", "high burden"), type = "box") To investigate the overall effect on burden, use the type = "eff" argument again. # plot overall effect on burden sjp.int(fit, type = "eff") ## Examples for type = “emm” - plotting estimated marginal means With the type = "emm" argument, you can plot estimated marginal means of the dependent variable, distinguished by groups and group levels. For instance, you can use this function to visualize a pre-post comparison (first predictor, independent variable) of an intervention (dependent variable) between a treatment and control group (second predictor, independent variable). The estimated marginal means are also called “adjusted means”. The sjp.int function extracts all significant interactions and calculates least-squares means, which are plotted. ### Fitting a linear model First, we need to create a data frame and fit a linear model. # load sample data set data(efc) # create data frame with variables that should be # included in the model mydf <- data.frame(burden = efc$neg_c_7, sex = to_factor(efc$c161sex), education = to_factor(efc$c172code)) # set variable label set_label(mydf$burden) <- "care burden" # fit model, including interaction fit <- lm(burden ~ .*., data = mydf) ### Plotting estimated marginal means This first example is taken from the function’s online-help. It uses the plevel argument because all interactions’ p-values are above 0.05. sjp.int(fit, type = "emm", plevel = 1) ### Another example # create data frame. we want to see whether the relationship between # cared-for person's dependency and negative impact of care is moderated # by the carer's employment status (interaction between dependency and # employment). mydf <- data.frame(negimp = efc$neg_c_7, dependency = to_factor(efc$e42dep), employment = to_factor(efc$c175empl), hours = efc$c12hour, sex = to_factor(efc$c161sex), age = efc$e17age) # set variable label set_label(mydf$negimp) <- "negative impact of care" # fit model fit <- lm(negimp ~ dependency + employment + dependency:employment + hours + sex + age, data = mydf) # bad dataset for demonstration, again no significant interaction summary(fit) ## ## Call: ## lm(formula = negimp ~ dependency + employment + dependency:employment + ## hours + sex + age, data = mydf) ## ## Residuals: ## Min 1Q Median 3Q Max ## -7.064 -2.425 -0.737 1.736 17.034 ## ## Coefficients: ## Estimate Std.
Error t value Pr(>|t|) ## (Intercept) 9.760486 1.322190 7.382 3.65e-13 *** ## dependency2 0.976546 0.755183 1.293 0.19631 ## dependency3 2.342064 0.724655 3.232 0.00128 ** ## dependency4 4.386289 0.741475 5.916 4.75e-09 *** ## employment1 0.085163 0.888775 0.096 0.92368 ## hours 0.006895 0.002826 2.440 0.01490 * ## sex2 0.461571 0.284276 1.624 0.10481 ## age -0.015132 0.015222 -0.994 0.32047 ## dependency2:employment1 0.595547 1.009922 0.590 0.55555 ## dependency3:employment1 0.659697 0.980491 0.673 0.50124 ## dependency4:employment1 -0.834350 1.002996 -0.832 0.40572 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 3.555 on 868 degrees of freedom ## (29 observations deleted due to missingness) ## Multiple R-squared: 0.1591, Adjusted R-squared: 0.1494 ## F-statistic: 16.43 on 10 and 868 DF, p-value: < 2.2e-16 Since there’s no significant interaction, we again adjust the plevel-argument to allow also non-significant interactions to be plotted. sjp.int(fit, type = "emm", plevel = 1) The above figure shows the “pre-post” comparison (non-employed/employed) of an “intervention” (negative impact of care) in different “treatment” and “control” groups (dependency levels). If necessary, you can swap the variables for the x and y axis with the swap.pred argument. sjp.int(fit, type = "emm", plevel = 1, swap.pred = TRUE)
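The adjusted means behind type = "emm" can likewise be approximated by hand: predict the outcome for every dependency × employment combination while holding the remaining covariates fixed. The reference grid used below (numeric covariates at their means, sex at its first level) is an assumption for illustration only; sjp.int / least-squares means may use a different grid.

```r
# Hand-rolled approximation of adjusted means for the negimp model above.
# Assumption: hours and age held at their means, sex at its first level.
grid <- expand.grid(
  dependency = levels(mydf$dependency),
  employment = levels(mydf$employment)
)
grid$hours <- mean(mydf$hours, na.rm = TRUE)
grid$sex   <- factor(levels(mydf$sex)[1], levels = levels(mydf$sex))
grid$age   <- mean(mydf$age, na.rm = TRUE)

grid$adj.mean <- predict(fit, newdata = grid)
grid
```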
2016-10-28 00:28:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5802910923957825, "perplexity": 5954.400743662018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721415.7/warc/CC-MAIN-20161020183841-00096-ip-10-171-6-4.ec2.internal.warc.gz"}
https://scicomp.stackexchange.com/questions/29785/fft-of-implicitly-uniform-data/29786
# FFT of “implicitly” uniform data I am trying to take a Fourier transform of a density field estimated from mock galaxy survey catalogs. Basically, you start with a list of galaxy positions, then you bin these positions over some grid spacing, getting an array with a weight for each grid position in the relevant volume of space. Because I am trying to preserve the information about small spatial scales, I would like to avoid using large bins. Unfortunately, this makes a full array representation of the density field quite memory expensive, as there are huge numbers of empty bins which are for most purposes irrelevant. I have thus far been binning without the array representation, instead keeping the data in a list of coordinates with their respective weights. However, my understanding is that this is not sufficient to perform a standard FFT routine. I need evenly spaced data, which I do not have in this representation. A natural option could then be something like pynufft (since I am using Python primarily) or nfft. However, my situation seems somewhat different from the standard non-uniform FFT one; I am not dealing with "nonuniform sampling" per se, rather, my sampling is perfectly uniform but just too large to store with explicit zeroes. Is there any implementation of FFT which would allow me to not create an explicit array of zeros, or at least leverage the fact that I implicitly know the "missing" bins in the sparse representation of my data are zero-valued? • Would this really help you? The output of the FFT will still be a full uniform array of Fourier coefficients, so you'll need all that memory in the end anyway. – user3883 Jul 1 '18 at 5:45 • You make a good point. I hadn't yet thought of this. – Davis Jul 2 '18 at 18:49 Since your histogram is very sparse, your density field is approximately a sum of weighted delta functions: $$\rho(\vec r) = \sum_{j=1}^{N_{\mathrm{bins}}} w_j\, \delta(\vec r - \vec r_j)$$ where $r_j$ is the center of each bin, and $w_j$ is the weight. Its Fourier transform is then simply \begin{align} \hat\rho(\vec k) &= \int \exp(-i\vec k\cdot\vec r) \rho(\vec r)\,d^3r \\ &= \sum_{j=1}^{N_{\mathrm{bins}}} w_j \exp(-i\vec k\cdot\vec r_j) \end{align} Alternatively, you could skip the binning process, treating each galaxy as being in its own "bin" with weight equal to one. In principle, this gives you access to arbitrarily high-$k$ information (rather than being limited by the physical width of the bin), but it will be noisier than the binned density.
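A small numerical sketch of that last formula: with the occupied bins stored as a coordinate/weight list, $\hat\rho(k)$ is just a sum of complex exponentials, so it can be evaluated directly without ever materialising the dense grid. The example below is 1D with made-up numbers and is written in R (the same few lines translate directly to NumPy); for many $k$ values an NFFT/NUFFT library will be much faster than this $O(N_{\mathrm{bins}} \cdot N_k)$ loop.

```r
# rho_hat(k) = sum_j w_j * exp(-i * k * r_j), evaluated from the sparse list.
r <- c(0.5, 3.25, 7.0, 12.75)        # occupied bin centres (made-up values)
w <- c(2, 1, 4, 1)                   # weights (e.g. galaxy counts per bin)

k <- 2 * pi * seq(0, 2, by = 0.05)   # wavenumbers of interest

rho_hat <- vapply(k, function(kk) sum(w * exp(-1i * kk * r)), complex(1))
power   <- Mod(rho_hat)^2            # |rho_hat(k)|^2, a raw power estimate

head(cbind(k, power))
```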
2020-11-24 23:55:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9056797027587891, "perplexity": 825.3851405174867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141177607.13/warc/CC-MAIN-20201124224124-20201125014124-00155.warc.gz"}
https://openstax.org/books/statistics/pages/7-3-using-the-central-limit-theorem
Statistics # 7.3Using the Central Limit Theorem Statistics7.3 Using the Central Limit Theorem It is important for you to understand when to use the central limit theorem. If you are being asked to find the probability of the mean, use the clt for the means. If you are being asked to find the probability of a sum or total, use the clt for sums. This also applies to percentiles for means and sums. ### NOTE If you are being asked to find the probability of an individual value, do not use the clt. Use the distribution of its random variable. ### Examples of the Central Limit Theorem #### Law of Large Numbers The law of large numbers says that if you take samples of larger and larger sizes from any population, then the mean $x ¯ x ¯$ of the samples tends to get closer and closer to μ. From the central limit theorem, we know that as n gets larger and larger, the sample means follow a normal distribution. The larger n gets, the smaller the standard deviation gets. (Remember that the standard deviation for $X ¯ X ¯$ is $σ n σ n$.) This means that the sample mean $x ¯ x ¯$ must be close to the population mean μ. We can say that μ is the value that the sample means approach as n gets larger. The central limit theorem illustrates the law of large numbers. ### Example 7.8 A study involving stress is conducted among the students on a college campus. The stress scores follow a uniform distribution with the lowest stress score equal to one and the highest equal to five. Using a sample of 75 students, find: 1. the probability that the mean stress score for the 75 students is less than 2 2. the 90th percentile for the mean stress score for the 75 students 3. the probability that the total of the 75 stress scores is less than 200 4. the 90th percentile for the total stress score for the 75 students Let X = one stress score. Problems (a) and (b) ask you to find a probability or a percentile for a mean. Problems (c) and (d) ask you to find a probability or a percentile for a total or sum. The sample size, n, is equal to 75. Because the individual stress scores follow a uniform distribution, X ~ U(1, 5) where a = 1 and b = 5 (see Continuous Random Variables for an explanation of a uniform distribution), In the formula above, the denominator is understood to be 12, regardless of the endpoints of the uniform distribution. For problems (a) and (b), let $X ¯ X ¯$ = the mean stress score for the 75 students. Then, a. Find P($x ¯ x ¯$ < 2). Draw the graph. b. Find the 90th percentile for the mean of 75 stress scores. Draw a graph. For problems (c) and (d), let ΣX = the sum of the 75 stress scores. Then, ΣX ~ N[(75)(3),$( 75 ) ( 75 )$(1.15)]. c. Find P(Σx < 200). Draw the graph. d. Find the 90th percentile for the total of 75 stress scores. Draw a graph. ### Try It 7.8 Use the information in Example 7.8, but use a sample size of 55 to answer the following questions. 1. Find P($x ¯ x ¯$ < 7). 2. Find P(Σx > 170). 3. Find the 80th percentile for the mean of 55 scores. 4. Find the 85th percentile for the sum of 55 scores. ### Example 7.9 Suppose that a market research analyst for a cell phone company conducts a study of their customers who exceed the time allowance included on their basic cell phone contract. The analyst finds that for those people who exceed the time included in their basic contract, the excess time used follows an exponential distribution with a mean of 22 minutes. Consider a random sample of 80 customers who exceed the time allowance included in their basic cell phone contract. 
Let X = the excess time used by one INDIVIDUAL cell phone customer who exceeds his contracted time allowance. XExp$( 1 22 ) ( 1 22 )$. From previous chapters, we know that μ = 22 and σ = 22. Let $X ¯ X ¯$ = the mean excess time used by a sample of n = 80 customers who exceed their contracted time allowance. $X ¯ X ¯$ ~ N by the central limit theorem for sample means. Using the clt to find probability 1. Find the probability that the mean excess time used by the 80 customers in the sample is longer than 20 minutes. This is asking us to find P($x ¯ x ¯$ > 20). Draw the graph. 2. Suppose that one customer who exceeds the time limit for his cell phone contract is randomly selected. Find the probability that this individual customer's excess time is longer than 20 minutes. This is asking us to find P(x > 20). 3. Explain why the probabilities in parts (a) and (b) are different. Using the clt to find percentiles Find the 95th percentile for the sample mean excess time for a sample of 80 customers who exceed their basic contract time allowances. Draw a graph. ### Try It 7.9 Use the information in Example 7.9, but change the sample size to 144. 1. Find P(20 < $x ¯ x ¯$ < 30). 2. Find P(Σx is at least 3000). 3. Find the 75th percentile for the sample mean excess time of 144 customers. 4. Find the 85th percentile for the sum of 144 excess times used by customers. ### Example 7.10 U.S. scientists studying a certain medical condition discovered that a new person is diagnosed every two minutes, on average. Suppose the standard deviation is 0.5 minutes and the sample size is 100. 1. Find the median, the first quartile, and the third quartile for the sample mean time of diagnosis in the United States. 2. Find the median, the first quartile, and the third quartile for the sum of sample times of diagnosis in the United States. 3. Find the probability that a diagnosis occurs on average between 1.75 and 1.85 minutes. 4. Find the value that is two standard deviations above the sample mean. 5. Find the IQR for the sum of the sample times. ### Try It 7.10 Based on data from the National Health Survey, women between the ages of 18 and 24 have an average systolic blood pressures (in mm Hg) of 114.8 with a standard deviation of 13.1. Systolic blood pressure for women between the ages of 18 to 24 follows a normal distribution. 1. If one woman from this population is randomly selected, find the probability that her systolic blood pressure is greater than 120. 2. If 40 women from this population are randomly selected, find the probability that their mean systolic blood pressure is greater than 120. 3. If the sample was four women between the ages of 18–24 and we did not know the original distribution, could the central limit theorem be used? ### Example 7.11 A study was done about a medical condition that affects a certain group of people. The age range of the people was 14–61. The mean age was 30.9 years with a standard deviation of nine years. 1. In a sample of 25 people, what is the probability that the mean age of the people is less than 35? 2. Is it likely that the mean age of the sample group could be more than 50 years? Interpret the results. 3. In a sample of 49 people, what is the probability that the sum of the ages is no less than 1,600? 4. Is it likely that the sum of the ages of the 49 people are at most 1,595? Interpret the results. 5. Find the 95th percentile for the sample mean age of 65 people. Interpret the results. 6. Find the 90th percentile for the sum of the ages of 65 people. 
Interpret the results. ### Try It 7.11 According to data from an aerospace company, the 757 airliner carries 200 passengers and has doors with a mean height of 72 inches. Assume for a certain population of men we have a mean of 69 inches inches and a standard deviation of 2.8 inches. 1. What mean doorway height would allow 95 percent of men to enter the aircraft without bending? 2. Assume that half of the 200 passengers are men. What mean doorway height satisfies the condition that there is a 0.95 probability that this height is greater than the mean height of 100 men? 3. For engineers designing the 757, which result is more relevant: the height from part (a) or part (b)? Why? ### HISTORICAL NOTE Normal Approximation to the Binomial Historically, being able to compute binomial probabilities was one of the most important applications of the central limit theorem. Binomial probabilities with a small value for n (say, 20) were displayed in a table in a book. To calculate the probabilities with large values of n, you had to use the binomial formula, which could be very complicated. Using the normal approximation to the binomial distribution simplified the process. To compute the normal approximation to the binomial distribution, take a simple random sample from a population. You must meet the following conditions for a binomial distribution: • There are a certain number, n, of independent trials. • The outcomes of any trial are success or failure. • Each trial has the same probability of a success, p. Recall that if X is the binomial random variable, then X ~ B(n, p). The shape of the binomial distribution needs to be similar to the shape of the normal distribution. To ensure this, the quantities np and nq must both be greater than five (np > 5 and nq > 5; the approximation is better if they are both greater than or equal to 10. The product >5 is more or less accepted as the norm here.). This is another accepted rule. So, for whatever value of x we are looking at (the number of successes). We add 0.5 if we are looking for the probability that is less than or equal to that number. We subtract 0.5 if we are looking for the probability that is greater than or equal to that number. Then the binomial can be approximated by the normal distribution with mean μ = np and standard deviation σ = $npq npq$. Remember that q = 1 – p. In order to get the best approximation, add 0.5 to x or subtract 0.5 from x (use x + 0.5 or x – 0.5). This is another accepted rule. So, for whatever value of x we are looking at (the number of successes). We add 0.5 if we are looking for the probability that is less than or equal to that number. We subtract 0.5 if we are looking for the probability that is greater than or equal to that number. The number 0.5 is called the continuity correction factor and is used in the following example. ### Example 7.12 Suppose in a local kindergarten through 12th grade (K–12) school district, 53 percent of the population favor a charter school for grades K through 5. A simple random sample of 300 is surveyed. 1. Find the probability that at least 150 favor a charter school. 2. Find the probability that at most 160 favor a charter school. 3. Find the probability that more than 155 favor a charter school. 4. Find the probability that fewer than 147 favor a charter school. 5. Find the probability that exactly 175 favor a charter school. Let X = the number that favor a charter school for grades K through 5. X ~ B(n, p) where n = 300 and p = 0.53. 
Because np > 5 and nq > 5, use the normal approximation to the binomial. The formulas for the mean and standard deviation are μ = np and σ = $npq npq$. The mean is 159, and the standard deviation is 8.6447. The random variable for the normal distribution is Y. Y ~ N(159, 8.6447). See The Normal Distribution for help with calculator instructions. For Part (a), you include 150 so P(X ≥ 150) has a normal approximation P(Y ≥ 149.5) = 0.8641. normalcdf(149.5,10^99,159,8.6447) = 0.8641. For Part (b), you include 160 so P(X ≤ 160) has a normal approximation P(Y ≤ 160.5) = 0.5689. normalcdf(0,160.5,159,8.6447) = 0.5689 For Part (c), you exclude 155 so P(X > 155) has normal approximation P(y > 155.5) = 0.6572. normalcdf(155.5,10^99,159,8.6447) = 0.6572. For Part (d), you exclude 147 so P(X < 147) has normal approximation P(Y < 146.5) = 0.0741. normalcdf(0,146.5,159,8.6447) = 0.0741 For Part (e), P(X = 175) has normal approximation P(174.5 < Y < 175.5) = 0.0083. normalcdf(174.5,175.5,159,8.6447) = 0.0083 Because of calculators and computer software that let you calculate binomial probabilities for large values of n easily, it is not necessary to use the the normal approximation to the binomial distribution, provided that you have access to these technology tools. Most school labs have computer software that calculates binomial probabilities. Many students have access to calculators that calculate probabilities for binomial distribution. If you type in binomial probability distribution calculation in an internet browser, you can find at least one online calculator for the binomial. For Example 7.10, the probabilities are calculated using the following binomial distribution: (n = 300 and p = 0.53). Compare the binomial and normal distribution answers. See Discrete Random Variables for help with calculator instructions for the binomial. P(X ≥ 150) :1 - binomialcdf(300,0.53,149) = 0.8641 P(X ≤ 160) :binomialcdf(300,0.53,160) = 0.5684 P(X > 155) :1 - binomialcdf(300,0.53,155) = 0.6576 P(X < 147) :binomialcdf(300,0.53,146) = 0.0742 P(X = 175) :(You use the binomial pdf.)binomialpdf(300,0.53,175) = 0.0083 Try It 7.12 In a city, 46 percent of the population favors the incumbent, Dawn Morgan, for mayor. A simple random sample of 500 is taken. Using the continuity correction factor, find the probability that at least 250 favor Dawn Morgan for mayor.
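As a cross-check on Example 7.12, the continuity-correction recipe maps directly onto standard distribution functions (pnorm plays the role of normalcdf, pbinom the role of binomialcdf). A minimal sketch for part (a):

```r
# Normal approximation to the binomial with continuity correction, Example 7.12(a)
n <- 300; p <- 0.53
mu    <- n * p                  # 159
sigma <- sqrt(n * p * (1 - p))  # about 8.6447

1 - pnorm(149.5, mean = mu, sd = sigma)  # P(X >= 150), approx.: about 0.8641
1 - pbinom(149, size = n, prob = p)      # exact binomial: about 0.8641
```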
2020-05-25 15:18:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 35, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7603963613510132, "perplexity": 459.72346439495226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388758.12/warc/CC-MAIN-20200525130036-20200525160036-00498.warc.gz"}
https://imogeometry.blogspot.com/p/imac-arhimede-romania.html
### IMAC Arhimede 2007-14 (Romania) 15p geometry problems from IMAC International Mathematical Arhimede Contest (Romanian) 2007 - 2014 [it lasted only those years] 2007 IMAC Arhimede P2 Let $ABCD$ be a parallelogram that is not a rhombus. We draw the symmetrical half-line of $(DC$ with respect to line $BD$. Similarly we draw the symmetrical half-line of $(AB$ with respect to $AC$. These half-lines intersect each other in $P$. If $\frac{AP}{DP}= q$, find the value of $\frac{AC}{BD}$ as a function of $q$. 2007 IMAC Arhimede P6 Let $A_1A_2...A_n$ be a polygon. Prove that there is a convex polygon $B_1B_2...B_n$ such that $B_iB_{i + 1} = A_iA_{i + 1}$ for $i \in \{1, 2,...,n-1\}$ and $B_nB_1 = A_nA_1$ (some of the successive vertices of the polygon $B_1B_2...B_n$ can be collinear). 2008 IMAC Arhimede P4 Consider an arbitrary tetrahedron $ABCD$. Points $E$ and $F$ are the midpoints of the edges $AB$ and $CD$, respectively. If $\alpha$ is the angle between the edges $AD$ and $BC$, calculate $\cos \alpha$ in terms of the lengths of the segments $[EF], [AD]$ and $[BC]$. (Romania) 2008 IMAC Arhimede P5 The diagonals of the cyclic quadrilateral $ABCD$ intersect at the point $E$. $K$ and $M$ are the midpoints of $AB$ and $CD$, respectively. Let $L$ be a point on $BC$ and $N$ a point on $AD$ such that $EL\perp BC$ and $EN\perp AD$. Prove that $KM\perp LN$. (Moldova) 2009 IMAC Arhimede P2 In the triangle $ABC$, the circle with center at the point $O$ touches the sides $AB, BC$ and $CA$ at the points $C_1, A_1$ and $B_1$, respectively. Lines $AO, BO$ and $CO$ cut the inscribed circle at points $A_2, B_2$ and $C_2,$ respectively. Prove that the area of the triangle $A_2B_2C_2$ is twice the area of the hexagon $B_1A_2C_1B_2A_1C_2$. (Spain) 2009 IMAC Arhimede P3 In the interior of the convex polygon $A_1A_2...A_{2n}$ there is a point $M$. Prove that at least one side of the polygon has no intersection points with the lines $MA_i$, $1\le i\le 2n$. (Moldova) 2010 IMAC Arhimede P3 Let $ABC$ be a triangle and let $D\in (BC)$ be the foot of the $A$-altitude. The circle $w$ with diameter $[AD]$ meets the lines $AB$, $AC$ again at the points $K\in (AB)$, $L\in (AC)$, respectively. Let $M$ denote the intersection point of the tangents to the circle $w$ at the points $K$, $L$. Prove that the ray $[AM$ is the $A$-median in $\triangle ABC$. (Serbia) 2010 IMAC Arhimede P4 Let $M$ and $N$ be two points on different sides of the square $ABCD$. Suppose that segment $MN$ divides the square into two tangential polygons. If $R$ and $r$ are the radii of the circles inscribed in these polygons ($R> r$), calculate the length of the segment $MN$ in terms of $R$ and $r$. (Moldova) 2011 IMAC Arhimede P2 Let $ABCD$ be a cyclic quadrilateral inscribed in a circle $k$. Let $M$ and $N$ be the midpoints of the arcs $AB$ and $CD$ which do not contain $C$ and $A$ respectively. If $MN$ meets side $AB$ at $P$, then show that $$\frac{AP}{BP}=\frac{AC+AD}{BC+BD}$$ 2011 IMAC Arhimede P4 The inscribed circle of triangle $ABC$ touches the sides $BC$, $CA$ and $AB$ at the points $X$, $Y$ and $Z$, respectively. Let $AA_{1}$, $BB_{1}$ and $CC_{1}$ be the altitudes of the triangle $ABC$ and $M$, $N$ and $P$ be the incenters of triangles $AB_{1}C_{1}$, $BC_{1}A_{1}$ and $CA_{1}B_{1}$, respectively. a) Prove that $M$, $N$ and $P$ are the orthocentres of triangles $AYZ$, $BZX$ and $CXY$, respectively. b) Prove that the common external tangents of these incircles, different from the triangle sides, are concurrent at the orthocentre of triangle $XYZ$.
2012 IMAC Arhimede P2 Circles $k_1,k_2$ intersect at $B,C$ such that $BC$ is diameter of $k_1$.Tangent of $k_1$ at $C$ touches $k_2$ for the second time at $A$.Line $AB$ intersects $k_1$ at $E$ different from $B$, and line $CE$ intersects $k_2$ at F different from $C$. An arbitrary line through $E$ intersects segment $AF$ at $H$ and $k_1$ for the second time at $G$.If $BG$ and $AC$ intersect at $D$, prove $CH//DF$ . 2013 IMAC Arhimede P3 Let $ABC$ be a triangle with $\angle ABC=120^o$ and triangle bisectors $(AA_1),(BB_1),(CC_1)$, respectively. $B_1F \perp A_1C_1$, where $F\in (A_1C_1)$. Let $R,I$ and $S$ be the centers of the circles which are inscribed in triangles $C_1B_1F,C_1B_1A_1, A_1B_1F$, and $B_1S\cap A_1C_1=\{Q\}$. Show that $R,I,S,Q$ are on the same circle. 2013 IMAC Arhimede P5 Let $\Gamma$ be the circumcircle of a triangle $ABC$ and let $E$ and $F$ be the intersections of the bisectors of $\angle ABC$ and $\angle ACB$ with $\Gamma$. If $EF$ is tangent to the incircle $\gamma$ of $\triangle ABC$, then find the value of $\angle BAC$. 2014 IMAC Arhimede P2 A convex quadrilateral $ABCD$ is inscribed into a circle $\omega$ . Suppose that there is a point $X$ on the segment $AC$ such that the $XB$ and $XD$ tangents to the circle $\omega$ . Tangent of  $\omega$  at $C$, intersect $XD$ at $Q$. Let $E$ ($E\ne A$) be the intersection of the line $AQ$ with $\omega$ . Prove that $AD, BE$, and $CQ$ are concurrent. source: imomath.com
2019-06-26 23:14:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8090119957923889, "perplexity": 180.19507892381708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000575.75/warc/CC-MAIN-20190626214837-20190627000837-00373.warc.gz"}
https://proofwiki.org/wiki/Fourth_Powers_which_are_Sum_of_4_Fourth_Powers/Examples/651
# Fourth Powers which are Sum of 4 Fourth Powers/Examples/651 ## Examples of Fourth Powers which are Sum of 4 Fourth Powers $651^4 = 240^4 + 340^4 + 430^4 + 599^4$ ## Proof $\begin{aligned} 240^4 + 340^4 + 430^4 + 599^4 &= 3\,317\,760\,000 + 13\,363\,360\,000 + 34\,188\,010\,000 + 128\,738\,157\,601 \\ &= 179\,607\,287\,601 \\ &= 651^4 \end{aligned}$ $\blacksquare$
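The arithmetic is easy to verify by machine; all values stay well below $2^{53}$, so ordinary double-precision arithmetic is exact here:

```r
sum(c(240, 340, 430, 599)^4)           # 179607287601
sum(c(240, 340, 430, 599)^4) == 651^4  # TRUE
```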
2019-11-20 00:23:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9706789255142212, "perplexity": 70.71353746224516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670268.0/warc/CC-MAIN-20191119222644-20191120010644-00516.warc.gz"}
http://math.stackexchange.com/questions/256820/convergence-of-series-1-c2-ln-x
# Convergence of Series $1/[c^2(\ln x)]$ Does there exist a $c$ such that the series $1/[c^2(\ln x)]$ converges? I know it's a simple question but I am working on a probability proof and if I can find such a $c$ I would be able to apply the Borel Cantelli lemmas and complete the proof. Any help would be appreciated. - Constants won't affect convergence? –  Alex Youcis Dec 12 '12 at 3:52 Your original post asked about 1/{c^2(ln x)} which lost the braces in the edit. However, this would be read as $\frac 1{c^2 \ln x}$ while you probably meant $\frac 1{c^{2 \ln x}}$. Can you confirm? –  Ross Millikan Dec 12 '12 at 4:11 If the question is about $\dfrac{1}{(c^2)(\ln x)}$ then of course not, since $\sum_{x=2}^\infty \dfrac{1}{\ln x}$ diverges. So the question must be about $\dfrac{1}{c^{2\ln x}}$. Then sure, $c=e$ will do, indeed any $c\gt e^{1/2}$. For then we get a series $\sum \dfrac{1}{x^p}$ with $p\gt 1$, and these are known to converge.
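Spelling out the substitution used in the answer: $c^{2\ln x} = e^{2\ln c \cdot \ln x} = x^{2\ln c}$, so for $c > 0$ $$\sum_{x=2}^\infty \frac{1}{c^{2\ln x}} = \sum_{x=2}^\infty \frac{1}{x^{2\ln c}},$$ which is a $p$-series with $p = 2\ln c$ and therefore converges exactly when $2\ln c > 1$, i.e. when $c > e^{1/2}$.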
2014-09-16 12:07:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8909140229225159, "perplexity": 156.44007567953454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657114926.36/warc/CC-MAIN-20140914011154-00166-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://tutorme.com/tutors/100063/interview/
Junghune N. M.S. in Material Engineering for Math, Physics, and Chemistry Tutoring Pre-Calculus TutorMe Question: You are investing for your child's college education by depositing $25,000 into an account that compounds interest monthly at 3% interest. How long do you have to wait for the balance to reach $45,000? Junghune N. The compound interest formula is the following: $$B = P\bigg(1 + \frac{r}{n} \bigg)^{nt}$$ where $$B$$ is the balance, $$P$$ is the principal or the starting balance, $$n$$ is the number of times interest is compounded in a year, $$r$$ is the percent interest in decimal form, $$t$$ is the number of years elapsed. We plug in 45,000 for $$B$$, 25,000 for $$P$$, 12 for $$n$$, and .03 for $$r$$ to get: $$45000 = 25000\bigg(1 + \frac{.03}{12} \bigg)^{12t}$$ $$\frac{45000}{25000} = \bigg(1 + \frac{.03}{12} \bigg)^{12t}$$ $$1.8 = (1.0025)^{12t}$$ To deal with the fact that t is in the exponent, let's take the $$\log$$ of both sides. $$\log(1.8) = \log((1.0025)^{12t})$$ There is a property of logarithms such that: $$\log(x^y) = y \thinspace \log(x)$$ We can use this property to move $$12t$$ down from the exponent as shown below: $$\log(1.8) = 12t \thinspace \log(1.0025)$$ Rearrange the terms to directly solve for t: $$\frac{1}{12}\frac{\log(1.8)}{\log(1.0025)} = t$$ $$t = 19.62 \thinspace years$$ Trigonometry TutorMe Question: You designed a mischievous robot to pelt passersby with a dodgeball at an initial speed of 40 ft/second. Due to your excitement, you rush to do a test throw at an unknown angle, $$\theta$$, and measured that the dodgeball landed 50 feet away. The engineer that helped you design the robot tells you that the distance traveled by the dodgeball is given by the following equation: $$\frac{1}{12} v_{o}^2 \sin(2\theta)= r$$ Determine the throwing angle of the dodgeball. Junghune N. Since we are solving for the launch angle, $$\theta$$, let's rearrange the given equation: $$\frac{1}{12} v_{o}^2 \sin(2\theta)= r$$ $$\sin(2\theta)= 12 \frac{r}{v_{o}^2}$$ We plug in 40 for $$v_{o}$$ and 50 for $$r$$ to get: $$\sin(2\theta) = \bigg(12\cdot \frac{50}{40^2}\bigg)$$ $$\sin(2\theta) = \frac{60}{160} = \frac{3}{8}$$ We can use the following trigonometric identity: $$\sin(2\theta) = 2\sin(\theta)\cos(\theta)$$ to rewrite the expression for $$\theta$$ as: $$2\sin(\theta)\cos(\theta)= \frac{3}{8}$$ $$2\sin(\theta)\cos(\theta) - \frac{3}{8} = 0$$ We can directly graph the above expression to find the zeroes. The zeroes are located at: $$\theta = 11.01^{\circ}$$ and $$78.99^{\circ}$$. Thus, the throwing angles of the dodgeball are: $$\theta = 11.01^{\circ}$$ and $$78.99^{\circ}$$. Algebra TutorMe Question: You make picture frames for a side hustle. Earlier in the day, a client requested you to make a frame for a portrait that was brought back from their travels abroad. After preparing all the materials, you drove to your workshop to build the frame, but forgot to bring the exact specifications with you. Because you're really lazy and don't feel like driving back to your home, you figure that you can just do the math to figure out the dimensions. You recall that you need 15 feet of wood to build the client's frame and that its width was 3 feet longer than its height. What are the dimensions of the frame that you need to build? Junghune N. We have 2 important pieces of information here: 1.) You need 15 feet of wood to build the frame. 2.) The frame's width is 3 feet longer than its height.
Information piece #1 tells you that the perimeter of the frame is 15 feet. Thus, if we take $$h$$ as the frame's height and $$w$$ as its width, then: $$2w + 2h = 15$$ Information piece #2 tells you that the frame's width is 3 feet longer than its height. Thus: $$w = 3 + h$$ We now have a system of equations: $$2w + 2h = 15$$ $$w = 3 + h$$ We can plug in $$3 + h$$ for $$w$$ in the first equation to solve for the frame's height: $$2w + 2h = 15$$ $$2(3 + h) + 2h = 15$$ $$6 + 2h + 2h = 15$$ $$4h = 9$$ $$h = \frac{9}{4} \text{ feet} = 2.25 \text{ feet}$$ Now that we know that the frame's height is 2.25 feet, we can plug that back into the 2nd equation of the system: $$w = 3 + h$$ $$w = 3 + 2.25$$ $$w = 5.25 \text{ feet}$$ Thus, the dimensions of your picture frame are 2.25 feet tall and 5.25 feet wide. Let's check our work: $$5.25 = 3 + 2.25$$ Check. $$2\cdot 5.25 + 2\cdot 2.25 = 10.5 + 4.5 = 15$$ Check.
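A quick numeric check of the three answers above (sketched in R; any calculator gives the same numbers):

```r
# Pre-Calculus: 45000 = 25000 * (1 + 0.03/12)^(12*t)
log(45000 / 25000) / (12 * log(1 + 0.03 / 12))  # ~19.62 years

# Trigonometry: sin(2*theta) = 12 * 50 / 40^2 = 3/8
asin(3 / 8) / 2 * 180 / pi                      # ~11.01 degrees
90 - asin(3 / 8) / 2 * 180 / pi                 # ~78.99 degrees

# Algebra: 2w + 2h = 15 with w = h + 3
h <- 9 / 4; w <- h + 3
c(height = h, width = w, perimeter = 2 * w + 2 * h)
```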
2019-08-25 08:52:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6627521514892578, "perplexity": 610.1298889720451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323246.35/warc/CC-MAIN-20190825084751-20190825110751-00317.warc.gz"}
https://topospaces.subwiki.org/wiki/Monotonically_normal_space
# Monotonically normal space ## Definition ### Definition with symbols A topological space $X$ is termed monotonically normal if it is a T1 space (i.e., all points are closed) and there exists an operator $G$ from ordered pairs of disjoint closed sets to open sets, such that: 1. For any disjoint closed subsets $A$ and $B$, $G(A,B)$ contains $A$ and its closure is disjoint from $B$. 2. If $A \subseteq A'$ and $B \supseteq B'$, with all four sets being closed, $A$ disjoint from $B$, and $A'$ disjoint from $B'$, we have $G(A,B) \subseteq G(A',B')$. This is the monotonicity condition. Such an operator is termed a monotone normality operator. This article defines a property of topological spaces: a property that can be evaluated to true/false for any topological space. This is a variation of normality.

## Relation with other properties

### Stronger properties

| Property | Meaning | Proof of implication | Proof of strictness (reverse implication failure) | Intermediate notions |
| --- | --- | --- | --- | --- |
| metrizable space | underlying topology of a metric space | metrizable implies monotonically normal | monotonically normal not implies metrizable | Elastic space, Protometrizable space |
| ordered field-metrizable space | underlying topology of a space with a metric taking values in an ordered field | ordered field-metrizable implies monotonically normal | monotonically normal not implies ordered field-metrizable | |
| linearly orderable space | order topology from a linear ordering on a set | linearly orderable implies monotonically normal | monotonically normal not implies linearly orderable | |
| elastic space | | elastic implies monotonically normal | monotonically normal not implies elastic | |
| closed sub-Euclidean space | | (via metrizable) | (via metrizable) | Elastic space, Metrizable space, Protometrizable space |
| manifold | | (via metrizable) | (via metrizable) | Elastic space, Metrizable space, Protometrizable space |

### Weaker properties

| Property | Meaning | Proof of implication | Proof of strictness (reverse implication failure) | Intermediate notions |
| --- | --- | --- | --- | --- |
| normal space | any two disjoint closed subsets are separated by disjoint open subsets | monotonically normal implies normal | normal not implies monotonically normal | Collectionwise normal space, Hereditarily collectionwise normal space, Hereditarily normal space |
| hereditarily normal space | every subspace is a normal space | monotonically normal implies hereditarily normal | hereditarily normal not implies monotonically normal | Hereditarily collectionwise normal space |
| collectionwise normal space | every discrete collection of closed subsets can be separated by disjoint open subsets | monotonically normal implies collectionwise normal | collectionwise normal not implies monotonically normal | Hereditarily collectionwise normal space |
| hereditarily collectionwise normal space | every subspace is collectionwise normal | monotonically normal implies hereditarily collectionwise normal | hereditarily collectionwise normal not implies monotonically normal | |
| completely regular space | | (via normal) | (via normal) | Normal Hausdorff space |
| regular space | | (via normal) | (via normal) | Normal Hausdorff space |
| Hausdorff space | | (via normal) | (via normal) | Normal Hausdorff space |
| Urysohn space | | (via normal) | (via normal) | |
| collectionwise Hausdorff space | | (via collectionwise normal) | (via collectionwise normal) | Collectionwise normal space, Hereditarily collectionwise normal space |

## Metaproperties ### Hereditariness
This property of topological spaces is hereditary, or subspace-closed. In other words, any subspace (subset with the subspace topology) of a topological space with this property also has this property. Any subspace of a monotonically normal space is monotonically normal. For a full proof, refer to: Monotone normality is hereditary
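For metrizable spaces, a monotone normality operator can be written down explicitly; this is a standard construction, not spelled out on this page, behind the metrizable-implies-monotonically-normal row above. For a metric $d$ and disjoint closed sets $A, B$, set $$G(A,B) = \{x \in X : d(x,A) < d(x,B)\}.$$ Then $A \subseteq G(A,B)$, the closure of $G(A,B)$ is disjoint from $B$, and enlarging $A$ while shrinking $B$ only enlarges $G(A,B)$, which is exactly the monotonicity condition.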
2017-03-27 22:17:13
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.872860312461853, "perplexity": 4019.3978393073917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189534.68/warc/CC-MAIN-20170322212949-00570-ip-10-233-31-227.ec2.internal.warc.gz"}
http://physics.stackexchange.com/tags/estimation/new
# Tag Info 1 Start digging at the equator and move all the dirt to the polar regions. This will decrease the moment of inertia of the planet about its spinning axis. Due to the conservation of angular momentum this will result in an increase in angular velocity, akin to a figure skater who retracts her arms while spinning. 3 Cover it in mirrors that are highly reflective on one side and painted black on the other. Position the mirrors so that the "faces" are perpendicular to the surface. A sketch is below (I have only shown three mirrors, the idea is that you would cover the planet with them, but they will be most effectively placed close to the equator). The plan is that each ... 0 As a first step you can use a simple method, which is for every small time step $\Delta t$ to approximate the acceleration as constant and use $\Delta v=a\Delta t$ for each direction, and then $\Delta x = v\Delta t$. These equations apply separately for each dimension, so calculate the x and y velocities first and then the resulting changes in position. 1 You asked a similar question on worldbuilding for a story. Hit youtube for Operation Crossroads, Baker test. The US Navy detonated a 21kt device 90 feet underwater. Produced a nice fountain but no overly-destructive wave action after a few kilometers. A few years later, Castle Bravo (15Mt) also failed to produce any significant damage* outside the ... 0 Based on the answers here is my summary: Kinetic energy of one plane: $$E_{kin} = 2 * 10^9 J = 0.5 \text{ tons of TNT}$$ Chemical energy of the plane fuel (very close to the amazing estimation of Floris): $$E_{chem} = 38000 L * 35 * 10^6 J/L = 1.33*10^{12} J = 330 \text{ tons of TNT}$$ Potential energy of one collapsing tower: $$E_{pot} = ... 4 Assuming kerosene is C8H18, has 25 chemical bonds, each of which releases 1eV when burned, gives an energy in fuel of 20 MJ/kg. Shortly after take-off, a significant fraction of a plane's weight is fuel. If I'd guessed I would have said 20 tonnes per plane; (Floris' comment suggests more like 30). This gives 400 GJ per plane, equivalent to 100 tons of TNT per ... 4 Estimating is always a fun aspect of physics - so let's do some, without looking up any values. What is the kinetic energy of a plane? We need to know the mass of a plane and its speed. I am going to use seriously rounded numbers - let's see how close we get. We "know" a full size car is about 1000 kg, and can carry 5 passengers of 100 kg. That means a car ... -1 Aircraft fuel is kerosene. Sometimes it is difficult to directly measure the amount of heat something produces. We can make the process easier by burning an amount of the fuel to heat water. The energy lost by the fuel can then be calculated by finding the heat gained by the water as measured by the change in temperature of the water. this reference ... 2 One favorite calculation method for approximating diffusion (heat, mass, whatever) is to use the following pretty simple relation: $$L^2 = C D t_c$$ in which you've got length (radius in this case), a constant dependent on geometry (6 for a sphere), diffusivity, and characteristic time. Here you know all but $t_c$, for which you can easily solve (~21s). ... 1 Since the object is small and spherical, and the temperature difference is small, I am going to assume there is no convection - meaning that you basically are asking about a conduction problem.
Thermal conductivity of air is 0.0257 W/m/K at 20 °C, and heat capacity is 1.005 kJ/kg/K Details of the calculation (including an animation) are nicely shown in ... 0 I've read a little bit on the science of a space elevator and it's a surprisingly difficult problem. To have a working space elevator, it would need to be at least to the Geosynchronous orbit, 22,000 miles up, probobly a bit beyond that for buoyancy. The highest balloon is some 25 miles - so that's less than 1/10th of 1% of the distance. The strongest ... 3 If the cable of the "elevator" is not connected to a point on earth, then the satellite must be in a geostationary orbit (or it will float away); this implies that if you now attach something to the platform (increasing the pull on the cable) you will pull the satellite down to earth. And as @lionelbrits pointed out, the pulling part of a space elevator ... Top 50 recent answers are included
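A one-line conversion makes the TNT comparisons quoted above concrete (sketch in R, using 1 ton of TNT = 4.184 × 10^9 J):

```r
ton_TNT <- 4.184e9     # joules per ton of TNT

2e9 / ton_TNT          # kinetic energy of one plane  -> ~0.5 tons of TNT
1.33e12 / ton_TNT      # chemical energy of the fuel  -> ~320 tons of TNT
20e3 * 20e6 / ton_TNT  # 20 t of fuel at 20 MJ/kg     -> ~100 tons of TNT
```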
2015-03-30 10:50:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6505944728851318, "perplexity": 577.697532294511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299261.59/warc/CC-MAIN-20150323172139-00013-ip-10-168-14-71.ec2.internal.warc.gz"}