# Determining whether a sequence of partial sums is convergent or divergent

1. ## Determining whether a sequence of partial sums is convergent or divergent

Given the following sequence of partial sums:

$S_N=\frac{1}{N}\cos(N\pi)$

My attempt: if I can find the series for this sequence of partial sums, then I can test the series to see whether it's convergent or divergent.

$a_N=S_N-S_{N-1}=\sum^N_{n=1}a_n-\sum^{N-1}_{n=1}a_n$

Thus:

$a_N=\frac{1}{N}\cos(N\pi)-\frac{1}{N-1}\cos((N-1)\pi)$

Since $\cos(N\pi)=(-1)^N$:

$a_N=\frac{1}{N}(-1)^N-\frac{1}{N-1}(-1)^{N-1}$

But this is as far as I got. I would like to express this with a factor $(-1)^N$ so that I can use the alternating series test, but I'm not sure how exactly.

$a_N=(-1)^N\cdot \left ( \frac{1}{N}-\frac{1}{N-1}\right )$

Would this be correct?

2. ## Re: Determining whether a sequence of partial sums is convergent or divergent

Originally Posted by MathIsOhSoHard:
> Given the following sequence of partial sums: $S_N=\frac{1}{N}\cos(N\pi)$

If the real question is contained in the title of this thread, then you have wasted a lot of effort, because the partial sums are given as $S_N=\frac{1}{N}\cos(N\pi)$. You know $(S_N)\to 0$, so the series converges to zero.

3. ## Re: Determining whether a sequence of partial sums is convergent or divergent

This is the question in its entirety:

For a series $\sum^{\infty}_{n=1}a_n$ the sequence of partial sums is $S_N=\frac{1}{N}\cos(N\pi)$. Determine whether the series is absolutely convergent, conditionally convergent, or divergent. Hint: find the series' $n$th term $a_n$ and express it as an alternating series.

4. ## Re: Determining whether a sequence of partial sums is convergent or divergent

Originally Posted by MathIsOhSoHard:
> For a series $\sum^{\infty}_{n=1}a_n$ the sequence of partial sums is $S_N=\frac{1}{N}\cos(N\pi)$. Determine whether the series is absolutely convergent, conditionally convergent, or divergent.

Well, the only difficult part is "Determine whether the series is absolutely convergent." For that reason you may want to find $a_n$:

$a_N=S_{N}-S_{N-1}=(-1)^N\left[\frac{2N-1}{N(N-1)}\right]$

5. ## Re: Determining whether a sequence of partial sums is convergent or divergent

Are you sure $a_n$ can't be written as

$a_N=(-1)^N\cdot \left ( \frac{1}{N}-\frac{1}{N-1}\right )$?

6. ## Re: Determining whether a sequence of partial sums is convergent or divergent

Originally Posted by MathIsOhSoHard:
> Are you sure $a_n$ can't be written as $a_N=(-1)^N\cdot \left ( \frac{1}{N}-\frac{1}{N-1}\right )$?

YES. It is

$a_N=(-1)^N\cdot \left ( \frac{1}{N}+\frac{1}{N-1}\right )$

7. ## Re: Determining whether a sequence of partial sums is convergent or divergent

Originally Posted by Plato:
> YES. It is $a_N=(-1)^N\cdot \left ( \frac{1}{N}+\frac{1}{N-1}\right )$

How do you rewrite $(-1)^{N-1}$ into $(-1)^N$?
8. ## Re: Determining whether a sequence of partial sums is convergent or divergent

Originally Posted by MathIsOhSoHard:
> How do you rewrite $(-1)^{N-1}$ into $(-1)^N$?

$(-1)^{N-1}=(-1)^N(-1)^{-1}=-(-1)^N$

You see, $(-1)^{-1}=\frac{1}{-1}=-1$.

9. ## Re: Determining whether a sequence of partial sums is convergent or divergent

Great! Now I get it. So by using the alternating series test, it can be shown that the series is convergent.

And to test for absolute convergence, using the comparison test $\left | a_n \right | \le \left | b_n \right |$ (note that the terms $a_n$ are only defined for $n \ge 2$):

$\sum^\infty_{n=2}\frac{1}{n}\le \sum^\infty_{n=2}\left | (-1)^n \left ( \frac{1}{n}+\frac{1}{n-1} \right ) \right |$

Since $\sum^\infty_{n=2}\frac{1}{n}$ is divergent, the series of absolute values is divergent too; thus the series is not absolutely convergent. So the conclusion would be that it's conditionally convergent?

Would the absolute value of the fraction be $\frac{1}{n-1}$ or $\frac{1}{n+1}$? I wasn't quite sure about that.

10. ## Re: Determining whether a sequence of partial sums is convergent or divergent

Originally Posted by MathIsOhSoHard:
> So the conclusion would be that it's conditionally convergent?

Very good.
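As a quick sanity check of the algebra above, the terms $a_N = (-1)^N\left(\frac{1}{N}+\frac{1}{N-1}\right)$ must telescope back to the given partial sums $S_N = \frac{\cos(N\pi)}{N}$. The short script below (illustrative only, not from the thread) verifies this for small $N$:

```python
import math

def S(N):
    # the partial sums given in the problem
    return math.cos(N * math.pi) / N

def a(N):
    # Plato's closed form for the N-th term (only defined for N >= 2)
    return (-1) ** N * (1.0 / N + 1.0 / (N - 1))

# S(1) plus the terms a(2), ..., a(N) should reproduce S(N).
for N in range(2, 10):
    partial = S(1) + sum(a(n) for n in range(2, N + 1))
    assert math.isclose(partial, S(N)), (N, partial, S(N))
print("telescoping checks out")
```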
April 24, 2020

Decimal numbers are actually special fractions. Remember, look at the fraction bar as a 'divided by' bar: for instance, 1/2 means the same as 1 divided by 2, which equals 0.5, and 3/5 is 3 divided by 5, which equals 0.6. That is all you need to know to convert a fraction to a decimal: divide the numerator by the denominator. (Think of how you say decimals in words: 2.48 is "two and forty-eight hundredths"; ask students to say 0.275, 0.6, and 1.364 in words.)

Converting a decimal into a fraction is a fairly straightforward process. It involves looking at the number of decimal places the decimal has and putting the decimal digits over the matching power of ten:

1. The numerator is the decimal number without the decimal point: for 0.37 the numerator is 37; for 0.416 it is 416; for 0.9 it is 9.
2. The denominator is a '1' followed by one zero per decimal place: 10 for 0.9, 100 for 0.37, and 1000 for 0.416. This converts the decimal into a decimal fraction (a fraction where the denominator is a power of 10); for example, 0.95 = 95/100 and 0.2 = 2/10.
3. Simplify the resulting fraction where possible.

This gives us

$0.9 = \frac{9}{10}, \qquad 0.37 = \frac{37}{100}, \qquad 0.416 = \frac{416}{1000}.$

Worked examples:

1. Change 0.791 to a fraction. Notice that 0.791 = .791; the zero in front of the decimal point is not needed. So 0.791 = 791/1000.
2. Change 2.30 to a fraction. Notice that 2.30 is the same as 2.3; in fact, 2.30 = 2.300 = 2.3000, etc. So 2.30 = 2 + 30/100 = 2 3/10.
3. Write 1.325 as a mixed number with a fraction in simplest form: 1.325 = 1 325/1000 = 1 13/40.
4. Convert 5/8 to a decimal: 5/8 = 625/1000 = 0.625. For a mixed number such as 5 7/8, take the fraction part and check whether the denominator can be converted into 10, 100, or 1000 using multiplication: 7/8 = 875/1000, so 5 7/8 = 5.875.

Converting fractions to decimals is a common concept that is often taught in the fifth and sixth grades in most educational jurisdictions, and the same ideas extend to percents: for simple fractions with denominators that are easily multiplied to reach 100, finding an equivalent fraction is an easy path from fraction to percentage (e.g., 37% = 37/100 and 25% = 25/100 = 1/4).

The printable worksheets collected on this page cover:

- Converting decimals to fractions, with and without simplifying (grades 5 and 6); the harder sheets involve both converting the decimals and then simplifying the fractions, and separate sheets cover mixed decimals with a value greater than 1.
- Converting fractions to decimals, including a grade 7 worksheet PDF.
- Converting terminating and repeating (recurring) decimals to fractions; Lesson 3-D, for example, has students label decimals such as 5.45 and 23.769 as terminating or repeating and match repeating decimals with their equivalent fractions. Secure learners convert recurring decimals less than one to fractions; excelling learners solve unfamiliar problems involving recurring decimals.
- Converting between fractions, decimals, and percents; the percent worksheet produces 30 or 36 problems per page depending on your selection, with three different types of numbers to convert.

All of the worksheets come with an answer key on the second page of the file. The more students exercise in a subject they learn, the more permanent it will be; in addition, they might try to figure out the answers mentally, without having to write down intermediary steps. There are also riddle sheets, in which working through the clues (all involving conversions between fractions, decimals, and percentages) solves a riddle, and color worksheets, in which each problem has a unique solution that corresponds to a coloring pattern forming a symmetrical image.
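The decimal-to-fraction rule above is easy to check with Python's standard-library fractions module; a minimal sketch (not part of the worksheets):

```python
from fractions import Fraction

# Fraction() puts the decimal digits over a power of ten and
# simplifies, exactly as in the steps above.
print(Fraction('0.37'))   # 37/100
print(Fraction('0.416'))  # 52/125  (416/1000 in simplest form)
print(Fraction('2.30'))   # 23/10   (i.e., the mixed number 2 3/10)
print(Fraction('1.325'))  # 53/40   (i.e., the mixed number 1 13/40)
```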
# Linear eddy viscosity models

These are turbulence models in which the Reynolds stresses, as obtained from a Reynolds averaging of the Navier-Stokes equations, are modelled by a *linear constitutive relationship* with the *mean* flow straining field, as:

$- \rho \left\langle u_{i} u_{j} \right\rangle = 2 \mu_{t} S_{ij} - \frac{2}{3} \rho k \delta_{ij}$

where

- $\mu_{t}$ is the coefficient termed turbulence "viscosity" (also called the eddy viscosity),
- $k = \frac{1}{2} \left( \left\langle u_{1} u_{1} \right\rangle + \left\langle u_{2} u_{2} \right\rangle + \left\langle u_{3} u_{3} \right\rangle \right)$ is the mean turbulent kinetic energy, and
- $S_{ij}= \frac{1}{2} \left[ \frac{\partial U_{i}}{\partial x_{j}} + \frac{\partial U_{j}}{\partial x_{i}} \right] - \frac{1}{3} \frac{\partial U_{k}}{\partial x_{k}} \delta_{ij}$ is the *mean* strain rate.

Note that the inclusion of $\frac{2}{3} \rho k \delta_{ij}$ in the linear constitutive relation is required for tensorial consistency when solving two-equation turbulence models (or any other turbulence model that solves a transport equation for $k$): since the strain rate $S_{ij}$ above is trace-free, taking the trace of both sides correctly recovers $-\rho \left\langle u_{i} u_{i} \right\rangle = -2 \rho k$.

This linear relationship is also known as *the Boussinesq hypothesis*. For a deep discussion on this linear constitutive relationship, check the section Introduction to turbulence/Reynolds averaged equations.

There are several subcategories of linear eddy-viscosity models, depending on the number of (transport) equations solved to compute the eddy viscosity coefficient:

1. Algebraic models
2. One equation models
3. Two equation models
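To make the constitutive relation concrete, here is a small illustrative script (not part of the wiki article) that evaluates the modelled Reynolds-stress tensor from a given mean velocity-gradient tensor; the values chosen for `mu_t`, `rho`, and `k` are arbitrary placeholders:

```python
import numpy as np

def modelled_reynolds_stress(grad_U, mu_t, rho, k):
    """Return rho*<u_i u_j> from the linear (Boussinesq) relation.

    grad_U[i, j] = dU_i/dx_j is the mean velocity-gradient tensor.
    """
    # trace-free mean strain rate S_ij
    S = 0.5 * (grad_U + grad_U.T) - (np.trace(grad_U) / 3.0) * np.eye(3)
    # -rho <u_i u_j> = 2 mu_t S_ij - (2/3) rho k delta_ij
    return -(2.0 * mu_t * S - (2.0 / 3.0) * rho * k * np.eye(3))

# Simple shear, dU_1/dx_2 = 1: the shear stress comes from the strain
# term, the diagonal stresses from the turbulent-kinetic-energy term.
grad_U = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
tau = modelled_reynolds_stress(grad_U, mu_t=1e-3, rho=1.0, k=0.1)
print(tau)  # diagonal entries (2/3)*rho*k, off-diagonal -mu_t*dU_1/dx_2
```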
# Graph Algorithms (Draft)

## 1. Introduction

A graph $G = (V,E)$ consists of a set $V$ of vertices (also known as nodes) and a set $E$ of edges between vertices, represented as sets $\{u,v\}$ of vertices $u$ and $v$. Graphs are of paramount importance in the study of algorithms, as many computational problems can be interpreted as problems on graphs.

Often, it is useful to consider a directed variant. A directed graph $G = (V,E)$ consists of a set $V$ of vertices and a set $E$ of directed edges between vertices, represented as ordered pairs $(u,v)$ of vertices $u$ and $v$. Here $u$ is called the start-vertex of $(u,v)$, and $v$ is called the end-vertex of $(u,v)$. In light of directed graphs, we sometimes call graphs undirected graphs.

If we are to allow multiple edges with the same endpoints, having a set of edges is inadequate, as sets do not allow duplicate elements. The multiset formalism is often used as a remedy. The details of the formalism are of no importance for the discussion at hand; it suffices to think of a multiset as a set that keeps track of the multiplicity of each element.

A natural way to represent a graph or a directed graph is via an adjacency matrix, which is constructed as follows:

1. Enumerate all vertices: $V = \{v_0,v_1,\ldots,v_{n-1}\}$.
2. Entry $a_{ij}$ is the number of edges from $v_i$ to $v_j$.

In the case of an undirected graph, $a_{ij} = a_{ji}$ for all choices of $i$ and $j$. The adjacency matrix of a graph with $n$ vertices has a space complexity of $O(n^2)$. This is often too much, as many graphs that arise in real-life problems are sparse, i.e., $\vert E \vert \ll \vert V \vert^2$.

A memory-efficient alternative is the incidence lists representation of a graph, which is constructed as follows:

1. For each vertex $v$, construct its incidence list, a list of edges with $v$ as an endpoint. In the case of a directed graph, we only include edges with $v$ as their start-vertex.
2. Construct a list of pairs $(v,p)$ of vertices $v$ and pointers $p$ pointing to the incidence list of $v$.

The above construction is similar in form to hash tables with chaining. Indeed, the incidence lists representation admits a simple hash-table implementation, where the keys are vertices and the corresponding values are their incidence lists. For example, $G = (V,E)$ with $V = \{1, 2, 3\}$ and $E = \{\{1,2\}, \{1, 3\}, \{2,3\}\}$ can be implemented in Python as follows, with each edge recorded once, under one of its endpoints:

```python
G = {
    1: [2, 3],
    2: [3],
}
```

Albeit simple, the hash-tables-with-lists representation of graphs is versatile:

> These functions are about as simple as they get. Yet, they are nearly optimal (for code written in Python).

(Guido van Rossum, the creator of the Python programming language)

In fact, implementations of graphs in oft-used graph theory libraries are often simple variations of the hash-tables-with-lists model:

> The Graph class uses a dict-of-dict-of-dict data structure. The outer dict (node_dict) holds adjacency information keyed by node. The next dict (adjlist_dict) represents the adjacency information and holds edge data keyed by neighbor. The inner dict (edge_attr_dict) represents the edge data and holds edge attribute values keyed by attribute names.

(networkx, a popular graph theory package for the Python programming language)
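Concretely, a hypothetical two-node graph with a single weighted edge would be stored in this dict-of-dict-of-dict scheme roughly as follows (a sketch of the idea, not actual networkx internals):

```python
# Adjacency information for a graph with nodes 1 and 2 and
# one edge {1, 2} carrying the attribute weight=5.
adjacency = {
    1: {2: {'weight': 5}},  # neighbors of node 1, with edge-attribute dicts
    2: {1: {'weight': 5}},  # the same edge, seen from node 2
}

# An edge lookup is then just two hash-table lookups:
assert adjacency[1][2]['weight'] == 5
```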
In what follows, we shall make use of the following minimal implementation of graphs, inspired by the networkx Python package.

```python
class Graph():
    def __init__(self):
        self.node = {}  # empty hash table
        self.edge = {}  # empty hash table

    def add_node(self, node, **kwargs):
        # **kwargs means "keyword arguments", which are then converted
        # to a hash table of key-argument pairs.
        # The node hash table takes node_index as key and hash tables
        # of node attributes obtained from **kwargs as values.
        self.node[node] = kwargs
        self._initialize_edge_container(node)

    def _initialize_edge_container(self, node_index):
        # add node_index as a key of the edge hash table,
        # so attributes can be stored.
        if node_index not in self.edge:
            self.edge[node_index] = {}

    def add_nodes_from(self, node_list):
        # allow adding multiple nodes with syntax
        # G.add_nodes_from([node1, node2, ...])
        for node in node_list:
            self.add_node(node)

    def add_edge(self, start_node, end_node, **kwargs):
        # add a weighted edge from start_node to end_node.
        # by default, weight is set to be 1.
        # Graph objects are undirected graphs, so edges in both
        # directions must be added.
        if 'weight' not in kwargs:
            kwargs['weight'] = 1
        self.edge[start_node][end_node] = kwargs
        self.edge[end_node][start_node] = kwargs

    def add_edges_from(self, edge_list, weight_list=None):
        # allow adding multiple weighted edges with syntax
        # G.add_edges_from([(u1, v1), (u2, v2), ...], [weight1, weight2, ...]).
        # If no weight_list is given, all edges are assumed to be of weight 1.
        n = len(edge_list)
        if weight_list is None:
            weight_list = [1 for _ in range(n)]
        for index in range(n):
            start_node, end_node = edge_list[index]
            weight = weight_list[index]
            self.add_edge(start_node, end_node, weight=weight)

    def nodes(self):
        # Return the list of all node indices.
        return list(self.node)

    def edges(self, node=None):
        # Return the list of all edges, represented as ordered pairs.
        # If the node argument is specified, return all edges starting
        # at the specified node.
        if node is None:
            nodes = self.nodes()
        else:
            nodes = [node]
        edge_list = []
        for start_node in nodes:
            edge_list += [(start_node, end_node)
                          for end_node in self.edge[start_node]]
        return edge_list

    def __iter__(self):
        # Allow iterating over all nodes with syntax
        # "for node in G:"
        for node in self.node:
            yield node


class DirectedGraph(Graph):
    # directed graphs are exactly the same as graphs, except that
    # add_edge does not add the reverse edge from end_node back
    # to start_node.
    def add_edge(self, start_node, end_node, **kwargs):
        if 'weight' not in kwargs:
            kwargs['weight'] = 1
        self.edge[start_node][end_node] = kwargs
```
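Not part of the draft: a quick interactive check of the implementation above (the output shown assumes Python 3.7+, where dicts preserve insertion order):

```python
>>> G = Graph()
>>> G.add_nodes_from([1, 2, 3])
>>> G.add_edge(1, 2, weight=5)
>>> G.add_edge(2, 3)             # weight defaults to 1
>>> G.nodes()
[1, 2, 3]
>>> G.edges()                    # each undirected edge appears twice
[(1, 2), (2, 1), (2, 3), (3, 2)]
>>> G.edge[1][2]['weight']
5
```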
## 2. Shortest Paths

A path between two nodes $u,v \in V$ in a graph $G = (V,E)$ is a sequence of edges in $G$ that connects $u$ and $v$. The length of the path is the total number $p$ of edges in the sequence $e_0,\ldots,e_{p-1}$. The distance between two nodes $u$ and $v$ is defined to be the minimum of the lengths of all paths between $u$ and $v$; if no path exists, then the distance is defined to be $\infty$.

The shortest-path problem in graph theory asks for an algorithm that computes the distance between two arbitrary nodes in a graph, and, if the distance is finite, a path between the two nodes with minimal length. It is not difficult to imagine the utility of such an algorithm. Modeling an office space as a discrete two-dimensional grid, we can represent each cell in the grid as a node and each walkable path between two cells as an edge to determine optimal office arrangements. For example, given a finite number of desks in an office, we could determine the optimal location of, say, a water dispenser or a coffee machine (closest, on average, to all desks) by computing the distance between all pairs of nodes in the office. Modeling cities as nodes and roads as edges, we can compute the most efficient travel route from one city to another by computing the shortest paths between the cities.

Note, however, that roads between cities can be of different lengths. In this case, it is sensible to consider a generalization of a graph in which the lengths of edges may differ. Here, we define a weighted graph to be an ordered triple $(V,E,\mathfrak{w})$ of a set $V$ of vertices, a set $E$ of edges, and a weight function $\mathfrak{w}:E \to \mathbb{R}$ that assigns a weight to each edge. A weighted directed graph is defined analogously.

The shortest-path problem admits a straightforward generalization to the weighted case, so long as the weights are nonnegative. If, however, we allow negative weights, then the distance between two nodes may be $-\infty$, rendering the shortest-path problem nonsensical. To see this, we first define a circuit in a graph $G$ to be a path $v_0 \to v_1 \to \cdots \to v_n$ (listing the nodes visited) such that $v_0 = v_n$. Such a path with $n = 1$ is called a cycle at $v_0$. If a path contains a node that is part of a circuit of negative total weight, then the total weight of the path can be decreased without bound by traversing the circuit as many times as needed. It follows that a shortest-path algorithm for weighted graphs must either be specialized for graphs without negative circuits or be able to detect negative circuits in a graph.

We generally consider a shortest-path problem on a connected graph, viz., a graph on which every pair of nodes is a finite distance apart from each other. In other words, given a pair of nodes, there is always a path from one to the other. If the graph in question fails to be connected, then a shortest-path algorithm works only within a connected component of a node $v$: the collection of all nodes of $G$ that are a finite distance away from $v$, along with the edges among them.

[TODO: introductory blurb]

### 2.1. Unweighted Graphs: Breadth-First Search

The simplest solution to the shortest-path problem is the breadth-first search (BFS) algorithm (Moore, 1959), which works for unweighted graphs, graphs with equal, positive weights, and their directed counterparts.

2.1.1. The Algorithm. Given a fixed source node $s$ in a connected graph $G$, the BFS algorithm computes the distance from $s$ to each node in $G$, as well as a path whose length is the distance. At each iteration, the algorithm picks out a node from the search queue and examines its adjacent nodes. If an adjacent node has not been visited, then the node is assigned a distance measure and is put into the search queue. The procedure continues until the search queue is empty; tracking the visited nodes ensures that no node is visited more than once.

```python
from collections import deque

def breadth_first_search(G, source):
    """Return a pair of dicts, one for distances and another for paths.

    G is a Graph object or a DirectedGraph object.
    source is a node in G.
    """
    distance, path = {source: 0}, {source: [source]}
    search_queue = deque([source])  # deque, because a queue is needed
    searched = {source}  # set, because only element-testing is needed
    while search_queue:
        node = search_queue.popleft()
        for edge in G.edges(node):
            _, endpoint = edge
            if endpoint not in searched:
                distance[endpoint] = distance[node] + 1
                path[endpoint] = path[node] + [endpoint]
                searched.add(endpoint)
                search_queue.append(endpoint)
    return (distance, path)
```

```python
>>> G = Graph()
>>> G.add_nodes_from([1, 2, 3, 4, 5])
>>> G.add_edges_from([
...     (1, 2), (1, 3), (2, 4), (3, 5), (4, 5)])
>>> breadth_first_search(G, 1)[0]  # distances from 1
{1: 0, 2: 1, 3: 1, 4: 2, 5: 2}
>>> breadth_first_search(G, 1)[1]  # paths from 1
{1: [1], 2: [1, 2], 3: [1, 3], 4: [1, 2, 4], 5: [1, 3, 5]}

>>> G = DirectedGraph()
>>> G.add_nodes_from([1, 2, 3, 4, 5])
>>> G.add_edges_from([
...     (1, 2), (1, 3), (4, 2), (3, 5), (5, 4)])
>>> breadth_first_search(G, 1)[0]  # distances from 1
{1: 0, 2: 1, 3: 1, 4: 3, 5: 2}
>>> breadth_first_search(G, 1)[1]  # paths from 1
{1: [1], 2: [1, 2], 3: [1, 3], 4: [1, 3, 5, 4], 5: [1, 3, 5]}
```

2.1.2. A Counterexample.
BFS admits a straightforward generalization to weighted graphs with equal positive weights. Note, however, that BFS fails on weighted graphs with unequal weights. Indeed, suppose that the edges in the undirected example above are weighted in such a way that the shortest path from 1 to 5 is $1 \to 2 \to 4 \to 5$, not $1 \to 3 \to 5$ (say, by assigning the edge $\{3, 5\}$ a very large weight). BFS ignores the weights, and so it would nevertheless return the latter: a contradiction. Introducing the necessary modifications leads us to Dijkstra’s algorithm, presented in the next section.

2.1.3. Complexity Analysis. Running BFS on a graph $G = (V,E)$ requires $O(\vert V \vert + \vert E \vert)$ time and $O(\vert V \vert^2)$ space. Indeed, we observe that BFS goes through each node precisely once. This, then, implies that each edge is examined precisely once as well. Therefore, the scanning operations contribute $O(\vert V \vert + \vert E \vert)$ time to the overall complexity. Reads and writes on the distance hash table take at most $O( \vert V \vert)$ time, which is dominated by $O(\vert V \vert + \vert E \vert)$. The element-testing operation on a set takes constant time, and so do enqueue and dequeue operations on a deque. It follows that the time complexity of BFS is $O(\vert V \vert + \vert E \vert)$.

To compute the space complexity, we note that BFS keeps track of distances, paths, elements to be searched, and elements already searched. Distances, elements to be searched, and elements already searched all take up $O(\vert V \vert)$ space. As for the paths, the worst-case scenario is $n$ nodes connected in a linear fashion: in this case, the length of the unique path from node $1$ to node $k$ is $k-1$, and so BFS requires $\sum_{k=1}^{n} (k-1) = O(n^2)$ space to store all the paths. We remark that BFS requires only $O(\vert V \vert)$ space if we choose not to keep track of the paths themselves.

2.1.4. Correctness. Given a graph $G = (V, E)$, we fix a source node $s \in V$. For each $v \in V$, we denote by $d(v)$ and $b(v)$ the distance to $v$ and the output of BFS for $v$, respectively. Note that BFS assigns the $b$-values in waves, from smaller values to larger values. We establish correctness by mathematical induction on $d$.

If $d(v) = 0$, then $v = s$, and BFS is trivially correct. We now fix $n \in \mathbb{N}$ and assume that BFS is correct for all $v \in V$ such that $d(v) < n$. Suppose that $w$ is a node of $G$ with $b(w) = n$, so that $d(w) \leq n$. We suppose for a contradiction that $d(w) < n$. This means that $d(x) < n - 1$ for at least one node $x$ adjacent to $w$, namely the predecessor of $w$ on a path of length $d(w)$ from $s$ to $w$. By the inductive hypothesis, $b(x) = d(x) < n - 1$. Since BFS assigns the $b$-values in waves, we see that

$$b(w) = 1 + \min_{x} b(x),$$

where the minimum runs over all adjacent nodes of $w$. This, in particular, implies that

$$b(w) \leq b(x) + 1 < n,$$

which contradicts the assumption that $b(w) = n$. We conclude that $d(w) = n$, and the proof is now complete.

2.1.5. Application: Diameter of a Tree. Recall that a circuit on a graph is a path such that $v_{0} = v_{n}$. An acyclic graph is a graph without circuits. A connected, acyclic graph is commonly referred to as a tree. We remark that a graph is a tree if and only if there is precisely one path between every pair of nodes.

The diameter of a finite tree $T = (V, E)$ is the maximal distance between nodes, viz., the quantity

$$\operatorname{diam}(T) = \max_{u, v \in V} d(u, v),$$

where $d(u, v)$ denotes the minimal length of all paths between $u$ and $v$. Since there is precisely one path between any pair of nodes, the diameter of a finite tree is merely the maximal length of paths between nodes. Given $v,w \in V$, we shall write $d(v,w)$ to denote the length of the unique path between $v$ and $w$.

We can compute the diameter of $T$ using BFS.
To see this, let us fix a node $r \in V$ and compute, via BFS, the distance $d(r, v)$ for all $v \in V$. There exists a node $s \in V$ such that

$$d(r, s) = \max_{v \in V} d(r, v).$$

Another application of BFS yields the distance $d(s, v)$ for all $v \in V$. Let us find $t \in V$ such that

$$d(s, t) = \max_{v \in V} d(s, v).$$

We claim that $d(s, t)$ is the diameter of $T$. To see this, we pick arbitrary nodes $v,w \in V \setminus \{s, t\}$. Since there exists a path between each pair of nodes, there is a path from $s$ to $t$ that goes through $v$ and $w$:

$$s \to \cdots \to v \to \cdots \to w \to \cdots \to t.$$

There is only one path from $s$ to $t$, and so the length of the above path must be $d(s,t)$. It follows that

$$d(v, w) \leq d(s, t).$$

The above inequality holds for any choice of $v,w \in V$, whence we conclude that $d(s,t)$ is the diameter of $T$.

2.1.6. Application: Flood Fill. Many graphics editors are equipped with the flood fill feature, which fills the interior of a simple closed curve with a predetermined color. To make the problem precise, we consider a two-dimensional grid, with each cell representing a pixel. The goal is to start at one pixel, search for all neighboring pixels with the same color, and replace those pixels with a different color.

We model the flood fill problem with a graph by considering each pixel as a node and connecting pixels of the same color with edges. With this model, the problem of implementing a flood-fill algorithm reduces to computing the connected component of a pixel and replacing all pixels in the connected component with a different color. This is easily achieved via BFS, which examines every node in the connected component and terminates afterwards. In lieu of assigning a distance metric to each node, we can modify the BFS algorithm to replace the color information at each node. See, for example, the implementation of the “bucket fill” tool in GNOME’s GNU Image Manipulation Program.
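The draft gives no code for this, so here is a minimal sketch of the modified BFS (an illustration, not the GIMP implementation); the grid is taken to be a list of lists of color values, and start is a (row, column) pair:

```python
from collections import deque

def flood_fill(grid, start, new_color):
    """Recolor every pixel in the connected component of start,
    where pixels of the same color are 4-directionally adjacent."""
    rows, cols = len(grid), len(grid[0])
    row, col = start
    old_color = grid[row][col]
    if old_color == new_color:
        return grid
    search_queue = deque([start])
    grid[row][col] = new_color  # recoloring doubles as the "searched" marker
    while search_queue:
        r, c = search_queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == old_color:
                grid[nr][nc] = new_color
                search_queue.append((nr, nc))
    return grid
```

For instance, `flood_fill([[0, 0, 1], [0, 1, 1], [1, 1, 0]], (0, 0), 2)` recolors the component of the top-left pixel and returns `[[2, 2, 1], [2, 1, 1], [1, 1, 0]]`.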
### 2.2. Weighted Graphs with Nonnegative Weights: Dijkstra’s Algorithm

Breadth-first search (BFS, Section 2.1) is a simple, effective algorithm for finding a shortest path between two nodes in a connected graph, but it fails on weighted graphs. The failure stems from the incremental nature of BFS, which assigns larger distances to nodes examined in later “waves”. Indeed, on a weighted graph, it is possible for a path containing a larger number of edges to be shorter. Dijkstra’s algorithm (1959) circumvents this problem by keeping a running record of a minimum path to a given node up to each stage, replacing it with a newly-discovered shorter path if necessary. Although Dijkstra’s algorithm is best understood as a generalization of BFS, Dijkstra discovered the algorithm independently of Moore’s research on shortest-path algorithms; see Further Results.

2.2.1. The Algorithm. Similarly to BFS, Dijkstra’s algorithm maintains a queue of nodes to be examined. At each step, the algorithm picks out the node in the queue closest to the source node and scans its adjacent nodes. In case a path to an adjacent node has already been found, the algorithm chooses the shorter of the two. In this manner, Dijkstra’s algorithm is a greedy algorithm, in the sense that the algorithm always declares the shortest path it has discovered up to each step to be the solution to the shortest-path problem. Because Dijkstra’s algorithm extracts the minimum from the queue, it is advantageous to use a priority queue to speed up the process. Below, we

## Additional Remarks and Further Results

1. An interview with Dijkstra in Communications of the ACM, Vol. 53, No. 8 (2010) does not make any mention of Moore or breadth-first search:

> In 1956 I did two important things, I got my degree and we had the festive opening of the ARMAC. We had to have a demonstration. Now the ARRA, a few years earlier, had been so unreliable that the only safe demonstration we dared to give was the generation of random numbers, but for the more reliable ARMAC I could try something more ambitious. For a demonstration for noncomputing people you have to have a problem statement that non-mathematicians can understand; they even have to understand the answer. So I designed a program that would find the shortest route between two cities in the Netherlands, using a somewhat reduced road-map of the Netherlands, on which I had selected 64 cities (so that in the coding six bits would suffice to identify a city). What’s the shortest way to travel from Rotterdam to Groningen? It is the algorithm for the shortest path, which I designed in about 20 minutes. One morning I was shopping in Amsterdam with my young fiancée, and tired, we sat down on the café terrace to drink a cup of coffee and I was just thinking about whether I could do this, and I then designed the algorithm for the shortest path. As I said, it was a 20-minute invention. In fact, it was published in 1959, three years later. The publication is still quite nice. One of the reasons that it is so nice was that I designed it without pencil and paper. Without pencil and paper you are almost forced to avoid all avoidable complexities. Eventually that algorithm became, to my great amazement, one of the cornerstones of my fame. I found it in the early 1960s in a German book on management science—”Das Dijkstra’sche Verfahren” [“Dijkstra’s procedure”]. Suddenly, there was a method named after me. And it jumped again recently because it is extensively used in all travel planners. If, these days, you want to go from here to there and you have a car with a GPS and a screen, it can give you the shortest way.

See also Alexander Schrijver, “On the History of the Shortest Path Problem”, Documenta Mathematica, Extra Volume: Optimization Stories (2012).

2. A priority queue is an abstract data type optimized for extracting the minimal element in a collection. It supports fast insertion of elements, as well as fast extraction of the minimal element. The most common implementation of a priority queue makes use of a min-heap (Williams, 1964), a nearly complete binary tree in which the value carried by a parent node is no larger than those carried by the child nodes. This, in particular, implies that the minimum value always resides in the root node, whence we can query for the minimal element in $\Theta(1)$-time. Rebuilding the heap, however, takes $O(\log n)$-time, whence insertion and extraction take $O(\log n)$-time as well. Here is an implementation of the heap rebuilding operation on a heap stored as an array:

```python
def rebuild_heap(H, index):
    """Restore the min-heap property below H[index].

    Given a node H[i], its child nodes are H[2*i+1] and H[2*i+2].
    """
    left_child_index, right_child_index = 2*index + 1, 2*index + 2
    if left_child_index < len(H) and H[left_child_index] < H[index]:
        min_index = left_child_index
    else:
        min_index = index
    if right_child_index < len(H) and H[right_child_index] < H[min_index]:
        min_index = right_child_index
    if min_index != index:
        # swap the parent with its smallest child and recurse downward
        H[index], H[min_index] = H[min_index], H[index]
        rebuild_heap(H, min_index)
```

Another common implementation of a priority queue builds on a self-balancing binary search tree such as a red-black tree, an AVL tree, a splay tree, and so on.
Since a self-balancing binary search tree maintains its height at $O(\log n)$, where $n$ is the number of nodes, it is possible to find the minimal element in $O(\log n)$-time. The efficiency of insertion and extraction varies by implementation: red-black trees, for example, guarantee $O(\log n)$ insertion and extraction. See CLRS, Chapter 13 for details.
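Since Section 2.2.1 breaks off before reaching an implementation, the following is a minimal sketch of Dijkstra's algorithm along the lines described there, with Python's built-in heapq module serving as the min-heap priority queue of Remark 2. This is our illustration, not the original notes' code, and the adjacency-list format (a dict mapping each node to a list of (neighbor, weight) pairs) is an assumption.

import heapq

def dijkstra(graph, source):
    """Shortest distances from source on a graph with nonnegative weights.

    graph: dict mapping each node to a list of (neighbor, weight) pairs.
    """
    dist = {source: 0}
    queue = [(0, source)]  # (distance, node) pairs, ordered by distance
    while queue:
        d, u = heapq.heappop(queue)  # extract the node closest to the source
        if d > dist.get(u, float("inf")):
            continue  # stale entry: a shorter path to u is already recorded
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w  # newly discovered path is shorter; record it
                heapq.heappush(queue, (dist[v], v))
    return dist

print(dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}, "a"))
# {'a': 0, 'b': 2, 'c': 3}

Rather than implementing a decrease-key operation, this sketch pushes a node again whenever a shorter path to it is found and discards stale queue entries on extraction; both operations stay within the $O(\log n)$ bounds discussed in Remark 2.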
2009 Mar 27 11:36 AM MDT | Programming, VB.NET, Windows, Windows-Advanced

There is a bug in the trial version: because of my special encoding (to prevent reverse engineering), you need to click Continue when you start it up. If you click Quit instead, you will need to start the program again. Just click Continue!
John wrote:

>> As a note, Anindya remarks in comment #22 of Lecture 62, there may be issues with that lattice identity I have tried to use...
>
> I don't see any problem with that. Suppose we have a poset \$$\mathcal{V}\$$ with all joins, and a doubly indexed family of elements in \$$\mathcal{V}\$$, say \$$\lbrace v_{a b} \rbrace_{a \in A, b \in B} \$$ where \$$A\$$ and \$$B\$$ are arbitrary sets. Then
>
> $\bigvee_{a \in A} \bigvee_{b \in B} v_{ab} = \bigvee_{a \in A, b \in B} v_{ab} = \bigvee_{b \in B} \bigvee_{a \in A} v_{ab}$
>
> It takes a bit of work to show this, but it seems so obvious I don't have the energy right now.

I think Anindya is wondering how this generalizes to a categorical setting, using an *end* like Bartosz Milewski uses in his treatment of [profunctor optics](https://bartoszmilewski.com/2017/07/07/profunctor-optics-the-categorical-view/) on his blog. I'll try a proof, however.

In any semilattice with arbitrary joins, we have the following chain of equivalences:

\begin{align} \bigvee_{a \in A} \bigvee_{b \in B} \phi(a,b) \leq X & \iff \text{for all } a \text{ in } A \text{: } \bigvee_{b \in B} \phi(a,b) \leq X \\\\ & \iff \text{for all } a \text{ in } A \text{, for all } b \text{ in } B \text{: } \phi(a,b) \leq X \\\\ & \iff \text{for all } b \text{ in } B \text{, for all } a \text{ in } A \text{: } \phi(a,b) \leq X \\\\ & \iff \text{for all } b \text{ in } B \text{: } \bigvee_{a \in A} \phi(a,b) \leq X \\\\ & \iff \bigvee_{b \in B} \bigvee_{a \in A} \phi(a,b) \leq X \\\\ \end{align}

We know that \$$\bigvee_{a \in A} \bigvee_{b \in B} \phi(a,b) \leq \bigvee_{a \in A} \bigvee_{b \in B} \phi(a,b)\$$, hence \$$\bigvee_{b \in B} \bigvee_{a \in A} \phi(a,b) \leq \bigvee_{a \in A} \bigvee_{b \in B} \phi(a,b)\$$. Symmetrically we have \$$\bigvee_{a \in A} \bigvee_{b \in B} \phi(a,b) \leq \bigvee_{b \in B} \bigvee_{a \in A} \phi(a,b)\$$, hence \$$\bigvee_{a \in A} \bigvee_{b \in B} \phi(a,b) = \bigvee_{b \in B} \bigvee_{a \in A} \phi(a,b) \$$.

I know I'm like a dog with a bone with this stuff, but the argument rests on a first-order logic tautology:

$\forall x. \phi(x) \to (\forall y. \psi(y) \to \chi(x,y)) \iff \forall y. \psi(y) \to (\forall x. \phi(x) \to \chi(x,y))$
# Include LaTeX in Doxygen manual

Doxygen allows you to put LaTeX formulas in the output (this works for the HTML and LaTeX output, not for the man page output). To be able to include formulas (as images) in the HTML and RTF documentation, you will also need to have the required tools (a working LaTeX installation and its conversion utilities) installed.

From the manual's "Including formulas" section: there are three ways to include formulas in the documentation. Formulas or other LaTeX elements that are not in a math environment can be specified using \f{environment}, where environment is the name of the LaTeX environment; the corresponding end command is \f}.

Notes from the manual's changelog on adding the examples to the LaTeX/PDF build:

- Add the examples, as shown in the HTML/CHM documentation, also to the LaTeX/PDF documentation: doc.doc gains a latexonly part referencing the example in the appendix; doc/Doxyfile silences the generation of the manual; doc/doxygenmanual.tex adds the examples as appendices to the manual.
- Use \include instead of \verbinclude to make use of the code colouring in the examples (examples.cfg); some examples automatically display the code from the include file (.h), but for these examples it is better to show the comment as well.

Related questions:

- How to include LaTeX snippets directly in Doxygen comments? (Stack Overflow) To do that, I figured I can have LaTeX-only files and include them from Doxygen. I did create Doxygen aliases for \begin and \end to make the syntax compatible.
- How to include class diagrams in Doxygen PDF? (Stack Overflow, closed) The program generates two folders, html and latex. In the html folder, there are some class diagrams (png files) I would like to include in the pdf file that I can generate from the tex files in the latex folder.
- I want to write documentation for my current application in LaTeX (about the technologies I used and so on), and I wonder if I could use the \mainpage section of Doxygen in it as a "how to use" chapter.

From the Special Commands section of the manual: all commands in the documentation start with a backslash (\) or an at-sign (@). Giving an explicit path can be useful if the include name is not located on the default include path. Other commands, such as \image, can be useful for LaTeX or DocBook output (i.e. format=latex or format=docbook) and can be used to include content that is too complex for doxygen (e.g. images, formulas). The dot tool is needed for the include dependency graphs, the graphical inheritance graphs, and the collaboration graphs. The PDF manual doxygenmanual.pdf will be located in the latex directory of the distribution; just view and print it via the Acrobat reader.
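As a concrete illustration (our own minimal sketch, not an example from the manual): Doxygen also parses Python sources, where special comment blocks start with ##, and the formula commands work there as well, assuming formula generation is configured as described above. The function norm below is a hypothetical stand-in.

```python
## \brief Euclidean norm \f$ \|x\|_2 \f$ of a vector.
#
#  The same formula in display mode:
#  \f[
#      \|x\|_2 = \Big( \sum_{i=1}^{n} x_i^2 \Big)^{1/2}
#  \f]
def norm(x):
    return sum(v * v for v in x) ** 0.5
```

Running doxygen over a source tree containing this file should render the inline formula in the brief description and the displayed formula in the detailed description.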
# Angle of nip in roll crusher function

In a roll crusher, rolls of diameter 1 m each are set in such a manner that the minimum clearance between the crushing surfaces is 15 mm. If the angle of nip is 31 degrees, the maximum diameter of the particle (in mm) which can be crushed is _____. (GATE 2016)

#### Angle of nip (definition)

The largest angle that will just grip a lump between the jaws, rolls, or mantle and ring of a crusher. Also known as angle of bite; nip. In a rock-crushing machine, the maximum angle subtended by its approaching jaws or roll surfaces at which a …

#### Determination of the nip angle in roller compactors

Abstract. In roller compaction, the nip angle defines the critical transition interface between the slip and nip regions, which is used to model material densification behavior and the properties of compacted ribbons. Current methods to determine the nip angle require either sophisticated instrumentation on smooth rolls or input parameters that are difficult to obtain experimentally.

#### Roll crusher: an overview (ScienceDirect Topics)

If µ is the coefficient of friction between the rolls and the particle, θ is the angle formed by the tangents to the roll surfaces at their points of contact with the particle (the angle of nip), and C is the compressive force exerted by the rolls acting from the roll centers through the particle center, then for a particle to be just gripped by the rolls, equating vertically, we derive: …

#### Derivation of angle of nip in roll crusher

Nov 06, 2012 · Angle of Nip – the angle formed between the moving surface of a crusher roll or jaw plate and the stationary plate surface, at which point …

#### Trigonometry: trigonometric functions and reference angles

Reference angles, by definition, always have a measure between 0 and 90°. Due to the periodic nature of the trigonometric functions, the value of a trigonometric function at a given angle is always the same as its value at that angle's reference angle, except when there is a variation in sign. Because we know the signs of the functions in …

#### How to calculate roll, pitch and yaw from XYZ coordinates

Aug 09, 2016 · I assume that you are following the Euler angle convention of roll-pitch-yaw in the order X-Y-Z.
You have three coplanar points P1, P2 and P3 on the body, in clockwise order (looking from the top), and the X-axis of the body-fixed frame can be taken along the vector starting from P3 and passing through the midpoint of the segment joining P2 and …

#### How to find the phase angle from a transfer function

\$\begingroup\$ This is in the nature of the inverse tangent being calculated over a fraction. Just as an example: we want the angles of the point (1,1) in the first quadrant (45°) and of (-2,-2) in the third quadrant (225°). \$\phi_1 = \tan^{-1}(\frac{1}{1}) \$ and \$\phi_2 = \tan^{-1}(\frac{-2}{-2}) \$. As you can see, you can simplify both expressions to \$\tan^{-1}(1) = 45° \$, and this is …

#### Phase angle: MATLAB angle (MathWorks)

angle takes a complex number z = x + iy and uses the atan2 function to compute the angle between the positive x-axis and a ray from the origin to the point (x, y) in the xy-plane.

#### Angle of elevation: definition, formula and examples

The angle of elevation is a widely used concept related to height and distance, especially in trigonometry. It is defined as the angle between the horizontal plane and the oblique line from the observer's eye to some object above the eye. This angle is formed above the surface.

#### A review: roller compaction for tablet dosage forms

The angle formed at the boundary of the feeding zone and the compaction zone is called the nip angle. It is the angle between the diameter of the roll and the point where the slip region ends or the nip region starts on the roll surface. For better compaction the nip angle should be sufficiently large. 3. Extrusion region (release zone): in this region the roll gap starts to …

#### Online calculator: involute of an angle

The angle φ equals the arc called the evolvent angle of roll, which consists of the sum of the angle θ (the evolvent angle) and the angle α (the angle of pressure). The arc's length is … Because … is a right triangle, that means … Equating these two arcs to each other, we get …, whence … This function is called the involute, or the evolvent, of the angle.

#### Cutting tool angles: function and effects

ii. This angle is 6° to 10° for steel, 8° for aluminum. iii. It ensures that no part of the tool besides the actual cutting edge can touch the work. Functions of the end cutting edge angle: i. It avoids rubbing between the edge of the tool and the workpiece. ii. It influences the direction of chip flow. Functions of the side cutting edge angle: i. …

#### Determination of the nip angle in roller compactors (continued)

Nip angles were also calculated using the widely used Johanson model. However, wall friction measurement on serrated roll surfaces could be impractical. The Johanson model-derived nip angles could differ by 3°-8° just by altering the roughness of the reference wall, and this compromised their reliability.

#### Diameter of rolls for a roll crusher for the given feed

Angle of nip = 2a. We have µ = 0.29; therefore a = tan⁻¹(0.29) = 16.17°. And we have d = 0.5 cm; R = 1.5 cm. Substituting the known quantities in eqn. (1):

cos(16.17°) = (r + 0.5)/(r + 1.5)
0.9604 = (r + 0.5)/(r + 1.5)
r + 0.5 = 0.9604(r + 1.5)
r − 0.9604r = 1.4406 − 0.5
r = 23.753 cm

Radius of rolls = 23.753 cm. Diameter of rolls = 2 × 23.753 ≈ 47.5 cm.

#### Maximum nip angle on gyratory mining mill in Germany
The angle of nip in gyratory crushers has definite limitations. In the older straight-element crushing chamber, the angle ranges from … to … on large primary crushers; using curved or non-choking concaves, and where gravity is of marked aid to nip with the large pieces at the mouth, the angle …

#### Rudder angle: an overview (ScienceDirect Topics)

Figure 11.16 shows the responses in sideslip velocity v̂, roll rate p̂, yaw rate r̂, yaw angle ψ, and roll angle φ for a step change of aileron angle of 0.04 rad. Figure 11.16(b) shows the initial response most clearly, with the roll rate building up in about a second to a fairly steady final value; this is the response in the roll …

#### XMMatrixRotationRollPitchYaw function

Pitch: angle of rotation around the x-axis, in radians. Yaw: angle of rotation around the y-axis, in radians. Roll: angle of rotation around the z-axis, in radians. Return value: returns the rotation matrix. Remarks: angles are measured clockwise when looking along the rotation axis toward the origin. This is a left-handed coordinate system; to use right-handed coordinates, negate all three angles.

#### Laporan modul 1: kominusi (crushing) (translated from Indonesian)

Apr 19, 2016 · For the secondary crushing, a roll crusher was used; the feed weighed 482.3 grams and yielded 451.7 grams of product. In addition, the low RR80 value was due to the roll crusher machine being in poor condition. While carrying out the experiment, crushing had to be stopped several times because the roll crusher …
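The roll-radius worked example above (under "Diameter of rolls for a roll crusher for the given feed") is easy to check numerically. Here is a short Python sketch, ours rather than the source's, that reproduces the arithmetic; the variable names are ours, and we read d and R from the source's equation as the half-gap and the feed-particle radius.

```python
import math

mu = 0.29                         # coefficient of friction between roll and particle
a = math.degrees(math.atan(mu))   # half the angle of nip: tan a = mu at the gripping limit
d, R = 0.5, 1.5                   # cm, as given in the worked example

# Gripping condition used above: cos(a) = (r + d) / (r + R). Solve for r.
c = math.cos(math.radians(a))
r = (c * R - d) / (1 - c)

print(round(a, 2))  # 16.17 degrees
print(round(r, 3))  # about 23.77 cm; the text, rounding cos(a) to 0.9604, gets 23.753
```

Either way the roll diameter comes out at roughly 47.5 cm, matching the source's conclusion.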
# zbMATH — the first resource for mathematics

Some periodic and non-periodic recursions. (English) Zbl 1036.11002

The authors determine all periodic recursions of the form $x_n= \frac{a_0+ a_1x_{n-1}+\cdots+ a_kx_{n-k}} {x_{n-k-1}},$ where $$a_0,a_1,\dots, a_k$$ are complex numbers, $$a_1,\dots, a_{k-1}$$ are nonzero and $$a_k=1$$. They prove that, apart from the well-known recursions $x_n= \frac{1}{x_{n-1}},\quad x_n= \frac{1+x_{n-1}} {x_{n-2}} \quad\text{and}\quad x_n= \frac{1+x_{n-1}+ x_{n-2}} {x_{n-3}},$ only $x_n= \frac{x_{n-1}}{x_{n-2}} \quad\text{and}\quad x_n= \frac{-1-x_{n-1}+ x_{n-2}} {x_{n-3}}$ lead to periodic sequences (with periods 6 and 8, respectively).

##### MSC:
11B37 Recurrences
26A18 Iteration of real functions in one variable
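The stated periods are easy to confirm numerically. Below is a small Python check of ours that iterates the two recursions singled out by the review from generic rational seeds and verifies periods 6 and 8; exact Fraction arithmetic avoids floating-point noise.

```python
from fractions import Fraction

def orbit(step, seed, n):
    """Iterate a recursion: step sees the whole history and returns the next term."""
    xs = [Fraction(v) for v in seed]
    for _ in range(n):
        xs.append(step(xs))
    return xs

# x_n = x_{n-1} / x_{n-2}: period 6.
xs = orbit(lambda x: x[-1] / x[-2], ("3/2", "7/5"), 20)
assert all(xs[i] == xs[i + 6] for i in range(len(xs) - 6))

# x_n = (-1 - x_{n-1} + x_{n-2}) / x_{n-3}: period 8.
ys = orbit(lambda x: (-1 - x[-1] + x[-2]) / x[-3], ("3/2", "7/5", "2/3"), 30)
assert all(ys[i] == ys[i + 8] for i in range(len(ys) - 8))

print("periods 6 and 8 confirmed for these seeds")
```

This is, of course, only a spot check for particular seeds, not a substitute for the paper's proof that these are the only periodic recursions of the given form.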
# ropensci/stplanr

---
title: "stplanr: A Package for Transport Planning"
author:
- Robin Lovelace, University of Leeds, 34-40 University Road, LS2 9JT, UK ([email protected])
- Richard Ellison, University of Sydney, 378 Abercrombie Street, Darlington, NSW 2008, Australia ([email protected])
abstract: >
  Tools for transport planning should be flexible, robust and scalable. **stplanr** meets each of these criteria by providing functionality commonly needed for transport planning in R, with an emphasis on spatial transport data. This includes tools to import and clean transport datasets; the creation of geographic 'desire lines' from origin-destination data; methods to assign these desire lines to the transport network, e.g. via interfaces to routing services such as CycleStreets.net, Graphhopper and the OpenStreetMap Routing Machine (OSRM); functions to calculate the geographic attributes of such routes, such as their bearing and equidistant surroundings; and 'travel watershed' analysis. With reproducible examples and using real transport datasets, this article demonstrates how R can form the basis of a reproducible and flexible transport planning workflow. We conclude with a brief discussion of desirable directions of future development.
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{stplanr A Package for Transport Planning}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

knitr::opts_chunk$set(fig.width = 7, fig.height = 5)

## Note

This vignette is under peer review and was submitted in pdf form to the R Journal. Please see its source code for comments and suggestions.

# Introduction

Transport planning is a diverse field requiring a wide range of computational tasks \citep{boyce_forecasting_2015}. Software in the transport planner's toolkit should be flexible, able to handle a wide range of data formats; robust, able to generate reproducible results for transparent decision-making; and scalable, able to work at multiple geographic levels from single streets to large cities and regions. R can provide a solid basis for a transport planning workflow that meets each of these criteria. Packages such as \CRANpkg{sp} \citep{pebesma_classes_2005} and \CRANpkg{rgeos} \citep{bivand_rgeos:_2016} greatly extend R's spatial data handling and modelling capabilities \citep{bivand_applied_2013}. Packages building on the sp class system have been developed for specific domains, including \CRANpkg{SpatialEpi} \citep{kim_spatialepi:_2016}, \CRANpkg{diseasemapping} \citep{brown_diseasemapping:_2016} and the adehabitat family of packages \citep{calenge_package_2006}. Inspired by such efforts and driven by our own research needs, our primary aim for stplanr is to provide an R toolbox for transport planning. Although the focus is on spatial transport datasets (and most transport problems contain a spatial component), stplanr also provides functions for handling non-spatial datasets.

## Motivations

There has been little in the way of R development for transport applications. This is surprising given the ubiquity of transport problems,^[Most people can identify interventions that they think would make the transport systems they interact with more sustainable. Think about the paths and roads you travel on, for example: what interventions would you prioritise to improve non-motorised access, for walking, cycling and wheel-chairs?
What quantitative evidence would you need to communicate this to the relevant authorities? ] R's aptitude for handling transport data (including spatial and travel survey data), and the increasing use of R in applied domains. Increasingly, R is the go-to statistical software in many organisations: academic, public sector and privately owned. Such organisations undertake the majority of transport planning research. This paper was therefore motivated by the desire to demonstrate that R provides an excellent framework for transport research. If readers decide not to use the package, perhaps needing bespoke solutions to specific transport problems not covered by stplanr, it is hoped that the ideas, functions and datasets described in this paper inspire parallel developments in the space of 'R for transport applications'. Moreover, by making the package deliberately broad in its scope, we hope that stplanr can help build a nascent community of R-using transport researchers, and we welcome feature requests and feedback at the package's online home.

R is already used in transport applications, as illustrated by recent research that applies packages from other domains to transport problems. For instance, \citeauthor{efthymiou_use_2012} (\citeyear{efthymiou_use_2012}) use R to analyse the data collected from an online survey focused on car-sharing, bicycle-sharing and electric vehicles. \citeauthor{efthymiou_use_2012} (\citeyear{efthymiou_use_2012}) also used R to collect and analyse transport-related data from Twitter using packages including \CRANpkg{XML}, \CRANpkg{twitteR} and \CRANpkg{ggplot2}. These packages were used to download, parse and plot the Twitter data using a method that can be repeated and the results reproduced or updated. More general statistical analyses have also been conducted on transport-related datasets using packages including \CRANpkg{muStat} and \CRANpkg{mgcv} \citep{diana_studying_2012,cerin_walking_2013}. Despite the rising use of R for transport research, there has yet to be a package for transport planning.

The design of the R language, with its emphasis on flexibility, data processing and statistical modelling, suggests it can provide a powerful environment for transport planning research. There are many quantitative methods in transport planning \citep{ortuzar_modelling_2001} and we have attempted to focus on those that are most generalisable and frequently used. stplanr facilitates the following common computational tasks for transport planning:

• Accessing and processing of data on transport infrastructure and behaviour
• Analysis and visualisation of the transport network
• Analysis of origin-destination (OD) data and the visualisation of resulting 'desire lines'
• The allocation of desire lines to roads and other guideways via routing algorithms to show commonly used routes through geographical space
• The aggregation of routes to estimate total levels of flow on segments throughout the transport network
• Development of models to estimate transport behaviour currently and under various scenarios of change
• The calculation of 'catchment areas' affected by transport infrastructure

The automation of such tasks can assist researchers and practitioners to create evidence for decision making. If the data processing and analysis stages are fast and painless, more time can be dedicated to visualisation and decision making.
This should allow researchers to focus on problems, rather than on wrestling with unwieldy datasets, clunky graphical user interfaces (GUIs), and ad hoc scripts that could be generalised. Furthermore, if the process can be made reproducible and accessible (e.g. via online visualisation packages such as \CRANpkg{shiny} and \CRANpkg{leaflet}), this will help transport planning move away from reliance on 'black boxes' and become a more transparent and democratic activity \citep{waddell_urbansim:_2002,hollander_who_2015}.

The technical advantages of using modern, interpreted, and open source languages such as R are manifold: they enable automation and sharing of methods between researchers, for example the application of methods developed for one city to another; they ease the integration with other software systems and the web; and they have very strong user communities. The advantages of using R specifically to develop the functionality described in this paper are that it has unparalleled geo-statistical capabilities \citep{pebesma_software_2015}, visualisation packages (e.g. \CRANpkg{tmap}, \CRANpkg{ggplot2}) and the ability to rapidly read in data stored in many formats (e.g. via the \CRANpkg{haven} and \CRANpkg{rio} packages).

# Package structure and functionality

The package can be installed and loaded in the usual way (see the package's README for dependencies and access to development versions):

install.packages("stplanr")
library(stplanr)

As illustrated by the message emitted when stplanr is loaded, it depends on \CRANpkg{sp}. This means that the spatial data classes commonly used in the package will work with generic R functions such as summary, aggregate and, as illustrated in the figures below, plot \citep{bivand_applied_2013}.

## Core functions and classes

The package's core functions are structured around 3 common types of spatial transport data:

• Origin-destination (OD) data, which report the number of people travelling between origin-destination pairs. This type of data is not explicitly spatial (OD datasets are usually represented as data frames) but represents movement over space between points in geographical space. An example is provided in the flow dataset.
• Line data, one dimensional linear features on the surface of the Earth. These are typically stored as a SpatialLinesDataFrame.
• Route data are special types of lines which have been allocated to the transport network. Routes typically result from the allocation of a straight 'desire line' to the route network with a route_ function. Route networks represent many overlapping routes. All are typically stored as SpatialLinesDataFrame objects.

For ease of use, functions focussed on each data type have been developed with names prefixed with od_, line_ and route_ respectively. A selection of these is presented in Table 1. Additional 'core functions' could be developed, such as those prefixed with rn_ (for working with route network data) and g_ functions for geographic operations such as buffer creation on lat/lon projected data (this function is currently named buff_geo). We plan to elicit feedback on such changes before implementing them.
# stplanr_funs = ls("package:stplanr")
# sel_core = grep(pattern = "od_|^line_|route_", x = stplanr_funs)
# core_funs = stplanr_funs[sel_core]
# args(name = core_funs[1])
fun_table <- readr::read_csv("fun_table.csv")
knitr::kable(fun_table, caption = "Selection of functions for working with or generating OD, line and route data types.")

With a tip of the hat to the concept of type stability (e.g. as implemented in \CRANpkg{dplyr}), we also plan to make the core functions of stplanr more type-stable in future releases. Core functions, which begin with the prefixes listed above, could follow \CRANpkg{dplyr}'s lead and return only objects with the same class as that of the input. However there are limitations to this approach: it will break existing functionality and mean that output objects have a larger size than necessary (line_bearing, for example, does not need to duplicate the spatial data contained in its input). Instead, we plan to continue to name functions around the type of input data they take, but are open-minded about function input-output data class conventions, especially in the context of the new class system implemented in \CRANpkg{sf}. A class system has not been developed for each data type (this option is discussed in the final section). The most common data types used in stplanr are assumed to be data frames and spatial datasets.

Transport datasets are very diverse. There are therefore many other functions which have more ad hoc names. Rather than attempt a systematic description of each of stplanr's functions (which can be gleaned from the online manual), it is more illuminating to see how they work together, as part of a transport planning workflow. As with most workflows, this begins with data access and ends with visualisation.

## Accessing and processing transport data

Gaining access to data is often the first stage in transport research. This is often a long and protracted process, which is becoming easier thanks to the 'open data' movement and packages such as tigris for making data access from within R easier \citep{walker_tigris:_2016}. stplanr provides a variety of different functions that facilitate importing common data formats used for transport analysis into R. Although transport analysis generally requires some transport-specific datasets, it also typically relies heavily on common sources of data including census data. This being the case, stplanr also includes functions that may be useful to those not involved in transport research. This includes the read_table_builder function for importing data from the Australian Bureau of Statistics (ABS) and functions for the UK's Stats19 road traffic casualty dataset.
A brief example of the latter is demonstrated below, which begins with downloading the data (warning: this downloads ~100 MB of data):

dl_stats19() # download and extract stats19 road traffic casualty data
#> [1] "Data saved at: /tmp/RtmpppF3E2/Accidents0514.csv"
#> [2] "Data saved at: /tmp/RtmpppF3E2/Casualties0514.csv"
#> [3] "Data saved at: /tmp/RtmpppF3E2/Vehicles0514.csv"

Once the data has been saved in the default directory, determined by tempdir(), it can be read in and cleaned with the read_stats19_ functions (note these call format_stats19_ functions internally to clean the datasets and add correct labels to the variables):

ac <- read_stats19_ac()
ca <- read_stats19_ca()
ve <- read_stats19_ve()

The resulting datasets (representing accident, casualty and vehicle level data, respectively) can be merged and made geographic, as illustrated below:

library(dplyr)
ca_ac <- inner_join(ca, ac)
ca_cycle <- ca_ac %>%
  filter(Casualty_Severity == "Fatal" & !is.na(Latitude)) %>%
  select(Age = Age_of_Casualty, Mode = Casualty_Type, Longitude, Latitude)
ca_sp <- SpatialPointsDataFrame(coords = ca_cycle[3:4], data = ca_cycle[1:2])

Now that this casualty data has been cleaned, subsetted (to include only fatal crashes) and converted into a spatial class system, we can analyse it using geographical datasets of the type commonly used by stplanr. The following code, for example, geographically subsets the dataset to include only crashes that occurred within the bounding box of a route network dataset provided by stplanr (from version 0.1.7 and beyond) using the function bb2poly, which converts a spatial dataset into a box, represented as a rectangular SpatialPolygonsDataFrame:

data("route_network") # requires stplanr version 0.1.7, e.g. via devtools::install_github("ropensci/stplanr")
proj4string(ca_sp) <- proj4string(route_network)
bb <- bb2poly(route_network)
proj4string(bb) <- proj4string(route_network)
ca_local <- ca_sp[bb,]

The above code chunk shows the importance of understanding geographical data when working with transport data. It is only by converting the casualty data into a spatial data class, and adding a coordinate reference system (CRS), that transport planners and researchers can link this important dataset back to the route network. We can now perform GIS operations on the results. The next code chunk, for example, finds all the fatalities that took place within 100 m of the route network, using the function buff_geo:

bb <- bb2poly(route_network)
load("reqfiles.RData")
rnet_buff_100 <- buff_geo(route_network, width = 100)
ca_buff <- ca_local[rnet_buff_100,]

These can be visualised using base R graphics, extended by \CRANpkg{sp}, as illustrated in Figure \ref{fig:fats}. This provides a good start for analysis, but for publication-quality plots and interactive plots, designed for public engagement, we recommend using dedicated visualisation packages that work with spatial data such as \CRANpkg{tmap}.

plot(bb, lty = 4)
plot(rnet_buff_100, col = "grey", add = TRUE)
points(ca_local, pch = 4)
points(ca_buff, cex = 3)

## Creating geographic desire lines

Perhaps the most common type of aggregate-level transport information is origin-destination ('OD') data. This can be presented either as a matrix or (more commonly) a long table of OD pairs. An example of this type of raw data is provided below (see ?flow to see how this dataset was created).
data("flow", package = "stplanr") head(flow[c(1:3, 12)]) Although the flow data displayed above describes movement over geographical space, it contains no explicitly geographical information. Instead, the coordinates of the origins and destinations are linked to a separate geographical dataset which also must be loaded to analyse the flows. This is a common problem solved by the function od2line. The geographical data is a set of points representing centroids of the origin and destinations, saved as a SpatialPointsDataFrame. Geographical data in R is best represented as such Spatial* objects, which use the S4 object engine. This explains the close integration of stplanr with R's spatial packages, especially sp, which defines the S4 spatial object system. data("cents", package = "stplanr") as.data.frame(cents[1:3, -c(3,4)]) We use od2line to combine flow and cents, to join the former to the latter. We will visualise the l object created below in the next section. l <- od2line(flow = flow, zones = cents) The data is now in a form that is much easier to analyse. We can plot the data with the command plot(l), which was not possible before. Because the SpatialLinesDataFrame object also contains data per line, it also helps with visualisation of the flows, as illustrated in Figure \ref{fig:lines_routes}. ## Allocating flows to the transport network A common problem faced by transport researchers is network allocation: converting the 'as the crow flies' lines illustrated in the figure above into routes. These are the complex, winding paths that people and animals make to avoid obstacles such as buildings and to make the journey faster and more efficient (e.g. by following the route network). This is difficult (and was until recently near impossible using free software) because of the size and complexity of transport networks, the complexity of realistic routing algorithms and need for context-specificity in the routing engine. Inexperienced cyclists, for example, would take a very different route than a heavy goods vehicle. stplanr tackles this issue by using 3rd party APIs to provide route-allocation. Route allocation is undertaken by \code{route_} functions such as \code{route_cyclestreets} and \linebreak \code{route_graphhopper}. These allocate a single OD pair, represented as a text string to be 'geo-coded', a pair of of coordinates, or two SpatialPoints objects, representing origins and destinations. This is illustrated below with route_cyclestreet, which uses the CycleStreets.net API, a routing service "by cyclists for cyclists" that offers a range route strategies (primarily 'fastest', 'quietest' and 'balanced') that are based on a detailed analysis of cyclist wayfinding:^[An API key is needed for this function to work. This can be requested (or purchased for large scale routing) from cyclestreets.net/api/apply. See ?route_cyclestreet for details. Thanks to Martin Lucas-Smith and Simon Nuttall for making this possible.] route_bl <- route_cyclestreet(from = "Bradford", to = "Leeds") route_c1_c2 <- route_cyclestreet(cents[1,], cents[2,]) The raw output from routing APIs is usually provided as a JSON or GeoJSON text string. By default, route_cyclestreet saves a number of key variables (including length, time, hilliness and busyness variables generated by CycleStreets.net) from the attribute data provided by the API. 
If the user wants to save the raw output, the save_raw argument can be used:

route_bl_raw <- route_cyclestreet(from = "Bradford", to = "Leeds", save_raw = TRUE)

Additional arguments taken by the route_ functions depend on the routing function in question. By changing the plan argument of route_cyclestreet to fastest, quietest or balanced, for example, routes favouring speed, quietness or a balance between speed and quietness will be saved, respectively.

To automate the creation of route-allocated lines over many desire lines, the line2route function loops over each line, wrapping any route_ function as an input. The output is a SpatialLinesDataFrame with the same number of features as the input dataset (see the right panel in Figure \ref{fig:lines_routes}).

routes_fast <- line2route(l = l, route_fun = route_cyclestreet)

The result of this 'batch routing' exercise is illustrated in Figure \ref{fig:lines_routes}. The red lines in the left hand panel are very different from the hypothetical straight 'desire lines' often used in transport research, highlighting the importance of this route-allocation functionality.

plot(route_network, lwd = 0)
plot(l, lwd = l$All / 10, add = TRUE)
lines(routes_fast, col = "red")
routes_fast$All <- l$All
rnet <- overline(routes_fast, "All", fun = sum)
rnet$flow <- rnet$All / mean(rnet$All) * 3
plot(rnet, lwd = rnet$flow / mean(rnet$flow))

To estimate the amount of capacity needed at each segment on the transport network, the overline function, demonstrated above, is used to divide line geometries into unique segments and aggregate the overlapping values. The results, illustrated in the right-hand panel of Figure \ref{fig:lines_routes}, can be used to estimate where there is most need to improve the transport network, for example informing the decision of where to build new bicycle paths.

Limitations of the route_cyclestreet routing API include its specificity to one mode (cycling) and to a single region (the UK and part of Europe). To overcome these limitations, additional routing APIs were added with the functions route_graphhopper, route_transportapi_public and viaroute. These interface to Graphhopper, TransportAPI and the Open Source Routing Machine (OSRM) routing services, respectively. The great advantage of OSRM is that it allows you to run your own routing services on a local server, greatly increasing the rate of route generation. A short example of finding the route by car and bike between New York and Oaxaca demonstrates how route_graphhopper can collect geographical and other data on routes by various modes, anywhere in the world. The output, shown in Table \ref{tab:xtnyoa}, shows that the function also saves time, distance and (for bike trips) vertical distance climbed for the trips.
ny2oaxaca1 <- route_graphhopper("New York", "Oaxaca", vehicle = "bike")
ny2oaxaca2 <- route_graphhopper("New York", "Oaxaca", vehicle = "car")
rbind(ny2oaxaca1@data, ny2oaxaca2@data)
nytab = rbind(ny2oaxaca1@data, ny2oaxaca2@data)
nytab = cbind(Mode = c("Cycle", "Car"), nytab)
xtnyoa = xtable(nytab, caption = "Attribute data from the route\\_graphhopper function, from New York to Oaxaca, by cycle and car.", label = "tab:xtnyoa")
print.xtable(xtnyoa, include.rownames = FALSE)
plot(ny2oaxaca1)
plot(ny2oaxaca2, add = TRUE, col = "red")
ny2oaxaca1@data
ny2oaxaca2@data
#>      time    dist change_elev
#> 17522.73 4885663    87388.13
#>  2759.89 4754772          NA

## Modelling travel catchment areas

Accessibility to transport services is a particularly important topic when considering public transport or active travel, because of the frequent steep reduction in use as distances to access services (or infrastructure) increase. As a result, planning for transport services and infrastructure frequently focuses on several measures of accessibility, including distance but also travel times and service frequencies, often weighted by population. The functions in stplanr are intended to provide a method of estimating these accessibility measures as well as calculating the population that can access specific services (i.e., estimating the catchment area). Catchment areas in particular are a widely used measure of accessibility that attempts to both quantify the likely target group for a particular service and visualise the geographic area that is covered by the service. For instance, passengers are often said to be willing to walk up to 400 metres to a bus stop, or 800 metres to a railway station \citep{el-geneidy_new_2014}. Although these distances may appear relatively arbitrary and have been found to underestimate the true catchment area of bus stops and railway stations \citep{el-geneidy_new_2014,daniels_explaining_2013}, they nonetheless represent a good, albeit somewhat conservative, starting point from which catchment areas can be determined.

In many cases, catchment areas are calculated on the basis of straight-line (or "as the crow flies") distances. This is a simplistic but relatively appealing approach, because it requires little additional data and is straightforward to understand. stplanr provides functionality that calculates catchment areas using straight-line distances with the calc_catchment function. This function takes a SpatialPolygonsDataFrame that contains the population (or other) data, typically from a census, and a Spatial* layer that contains the geometry of the transport facility. These two layers are overlaid to calculate statistics for the desired catchments, including proportioning polygons to account for the proportion located within the catchment area.

To illustrate how catchment areas can be calculated, stplanr contains some sample datasets stored in ESRI Shapefile format (a commonly used format for distributing GIS layers) that can together be used to calculate sample catchment areas. One of these datasets (smallsa1) contains population data for Statistical Area 1 (SA1) zones in Sydney, Australia. The second contains hypothetical cycleways aligned to streets in Sydney. The code below unzips the datasets and reads in the shapefiles using the readOGR function of \CRANpkg{rgdal}.
data_dir <- system.file("extdata", package = "stplanr")
unzip(file.path(data_dir, 'smallsa1.zip'))
unzip(file.path(data_dir, 'testcycleway.zip'))
sa1income <- rgdal::readOGR(".", "smallsa1")
testcycleway <- rgdal::readOGR(".", "testcycleway")
# Remove unzipped files
file.remove(list.files(pattern = "^(smallsa1|testcycleway).*"))

Calculating the catchment area is straightforward: in addition to the required datasets, only a vector of column names on which to calculate statistics and a distance are required. Since proportioning the areas assumes projected data, unprojected data are automatically projected to either a common projection (if one layer is already projected) or a specified projection. It should be emphasised that the choice of projection is important and has an effect on the results, meaning that setting a local projection is recommended to achieve the most accurate results.

catch800m <- calc_catchment(
  polygonlayer = sa1income,
  targetlayer = testcycleway,
  calccols = c('Total'),
  distance = 800,
  projection = 'austalbers',
  dissolve = TRUE
)

By looking at the data.frame associated with the SpatialPolygonsDataFrame that is returned from the calc_catchment function, the total population within the catchment area can be seen to be r as.character(round(sum(catch800m@data$Total), 0)) people. The catchment area can also be plotted, as with any other Spatial* object, using the plot function, as in the code below, with the result shown in Figure \ref{fig:catchmentplot}.

plot(sa1income, col = "light grey")
plot(catch800m, col = rgb(1, 0, 0, 0.5), add = TRUE)
plot(testcycleway, col = "green", add = TRUE)

This simplistic catchment area is useful when the straight-line distance is a reasonable approximation of the route taken to walk (or cycle) to a transport facility. However, this is often not the case. The catchment area in Figure \ref{fig:catchmentplot} initially appears reasonable, but the red-shaded catchment area includes an area that requires travelling around a bay to access from the (green-coloured) cycleway. To allow for more realistic catchment areas in most situations, stplanr provides the calc_network_catchment function, which uses the same principle as calc_catchment but also takes into account the transport network.

To use calc_network_catchment, a transport network needs to be prepared that can be used in conjunction with the previous datasets. Preparation of the dataset involves using the SpatialLinesNetwork function to create a network from a SpatialLinesDataFrame. This function combines a SpatialLinesDataFrame with a graph network (using the \CRANpkg{igraph} package) to provide basic routing functionality. The network is used to calculate the shortest actual paths within the specific catchment distance. This process involves the following code:

unzip(file.path(data_dir, 'sydroads.zip'))

The network catchment is then calculated using a similar method as with calc_catchment, but with a few minor changes. Specifically, these are including the SpatialLinesNetwork, and using the maximpedance parameter to define the distance, with distance being the additional distance from the network. In contrast to the distance parameter, which is based on the straight-line distance in both the calc_catchment and calc_network_catchment functions, the maximpedance parameter is the maximum value in the units of the network's weight attribute. In practice this is generally distance in metres, but it can also be travel times, risk or other measures.
netcatch800m <- calc_network_catchment(
  sln = sydnetwork,
  polygonlayer = sa1income,
  targetlayer = testcycleway,
  calccols = c('Total'),
  maximpedance = 800,
  distance = 100,
  projection = 'austalbers'
)

Once calculated, the network catchment area can be used just as the straight-line catchment. This includes extracting the catchment population of r as.character(round(sum(netcatch800m@data$Total), 0)) and plotting the network catchment area together with the original straight-line catchment, with the results shown in Figure \ref{fig:netcatchplot}:

plot(sa1income, col = "light grey")
plot(catch800m, col = rgb(1, 0, 0, 0.5), add = TRUE)
plot(netcatch800m, col = rgb(0, 0, 1, 0.5), add = TRUE)
plot(testcycleway, col = "green", add = TRUE)

# Modelling and visualisation

## Modelling mode choice

Route-allocated lines allow estimation of route distance and circuity (route distance divided by Euclidean distance). These variables can help model the rate of flow between origins and destinations, as illustrated in the left-hand panel of Figure \ref{fig:euclidfastest}. The code below demonstrates how objects generated by stplanr can be used to undertake such analysis, with the line_length function used to find the distance, in metres, of lat/lon data.

l$d_euclidean <- line_length(l)
l$d_rf <- routes_fast@data$length
plot(l$d_euclidean, l$d_rf, xlab = "Euclidean distance", ylab = "Route distance")
abline(a = 0, b = 1)
abline(a = 0, b = 1.2, col = "green")
abline(a = 0, b = 1.5, col = "red")
l$d_euclidean <- line_length(l)
l$d_rf <- routes_fast$length

The left hand panel of Figure \ref{fig:euclidfastest} shows the expected strong correlation between Euclidean ($d_E$) and fastest route ($d_{Rf}$) distance. However, some OD pairs have a proportionally higher route distance than others, as illustrated by distance from the black line in the above plot: this represents \emph{circuity ($Q$)}, the ratio of network distance to Euclidean distance \citep{levinson_minimum_2009}:

$$Q = \frac{d_{Rf}}{d_E}$$

An extension to the concept of circuity is the 'quietness diversion factor' ($QDF$) of a desire line \citep{lovelace_propensity_2016}, the ratio of the route distance of a quiet route option ($d_{Rq}$) to that of the fastest:

$$QDF = \frac{d_{Rq}}{d_{Rf}}$$

Thanks to the 'quietest' route option provided by route_cyclestreet, we can estimate average values for both metrics as follows:

routes_slow <- line2route(l, route_cyclestreet, plan = "quietest")
l$d_rq <- routes_slow$length # quietest route distance
Q <- mean(l$d_rf / l$d_euclidean, na.rm = TRUE)
QDF <- mean(l$d_rq / l$d_rf, na.rm = TRUE)
Q
QDF

The results show that cycle paths are not particularly direct in the study region by international standards \citep{crow_design_2007}. This is hardly surprising given the small size of the sample and the short distances covered: $Q$ tends to decrease at a decaying rate with distance. What is surprising is that $QDF$ is close to unity, which could imply that the quiet routes are constructed along direct, and therefore sensible, routes. We should caution against such assumptions, however: it is a small sample of desire lines and, when time is explored, we find that the 'quietness diversion factor with respect to time' ($QDF_t$) is slightly larger:

(QDFt <- mean(routes_slow$time / routes_fast$time, na.rm = TRUE))

## Models of travel behaviour

There are many ways of estimating flows between origins and destinations, including spatial interaction models, the four-stage transport model and gravity models ('distance decay').
stplanr aims eventually to facilitate creation of many types of flow model. At present there are no functions for modelling distance decay, but this is something we would like to add in future versions of stplanr. Distance decay is an especially important concept for sustainable transport planning due to physical limitations on the ability of people to walk and cycle large distances \citep{iacono_measuring_2010}. We can explore the relationship between distance and the proportion of trips made by walking, using the same object l generated by stplanr.

l$pwalk <- l$On.foot / l$All
plot(l$d_euclidean, l$pwalk, cex = l$All / 50, xlab = "Euclidean distance (m)", ylab = "Proportion of trips by foot")
par(mfrow = c(1, 2))
lgb <- sp::spTransform(l, CRSobj = sp::CRS("+init=epsg:27700"))
l$d_euclidean <- rgeos::gLength(lgb, byid = T)
l$d_rf <- routes_fast@data$length
plot(l$d_euclidean, l$d_rf, xlab = "Euclidean distance", ylab = "Route distance")
abline(a = 0, b = 1)
abline(a = 0, b = 1.2, col = "green")
abline(a = 0, b = 1.5, col = "red")
l$pwalk <- l$On.foot / l$All
plot(l$d_euclidean, l$pwalk, cex = l$All / 50, xlab = "Euclidean distance (m)", ylab = "Proportion of trips by foot")

Based on the right-hand panel in Figure \ref{fig:euclidfastest}, there is a clear negative relationship between the distance of trips and the proportion of those trips made by walking. This is unsurprising: beyond a certain distance (around 1.5 km according to the data presented in the figure above) walking is usually seen as too slow and other modes are considered. According to the academic literature, this 'distance decay' is non-linear, and a number of functions have been proposed to fit distance decay curves \citep{martinez_new_2013}. From the range of options, we test just two forms below, comparing the ability of linear and log-square-root functions to fit the data contained in l for walking.

lm1 <- lm(pwalk ~ d_euclidean, data = l@data, weights = All)
lm2 <- lm(pwalk ~ d_rf, data = l@data, weights = All)
lm3 <- glm(pwalk ~ d_rf + I(d_rf^0.5), data = l@data, weights = All, family = quasipoisson(link = "log"))

The results of these regression models can be seen using summary(). Surprisingly, Euclidean distance was a better predictor of walking than route distance, but no strong conclusions can be drawn from this finding, with such a small sample of desire lines (n = 42). The results are purely illustrative of the kind of possibilities created by using stplanr in conjunction with R's modelling capabilities (see Figure \vref{fig:euclidwalking2}).

summary(lm1)
summary(lm2)
summary(lm3)
plot(l$d_euclidean, l$pwalk, cex = l$All / 50, xlab = "Euclidean distance (m)", ylab = "Proportion of trips by foot")
l2 <- data.frame(d_euclidean = 1:5000, d_rf = 1:5000)
lm1p <- predict(lm1, l2)
lm2p <- predict(lm2, l2)
lm3p <- predict(lm3, l2)
lines(l2$d_euclidean, lm1p)
lines(l2$d_euclidean, exp(lm2p), col = "green")
lines(l2$d_euclidean, exp(lm3p), col = "red")

## Visualisation

Visualisation is an important aspect of any transport study, as it enables researchers to communicate their findings to other researchers, policy-makers and, ultimately, the public. It may therefore come as a surprise that stplanr contains no functions for visualisation. Instead, users are encouraged to make use of existing spatial visualisation tools in R, such as tmap, leaflet and ggmap \citep{cheshire_spatial_2015,kahle_ggmap:_2013}.
Furthermore, with the development of online application frameworks such as shiny, it is now easier than ever to make the results of transport analysis and modelling projects available to the public. An example is the online interface of the Propensity to Cycle Tool (PCT). The results of the project, generated using stplanr, are presented at zone, desire line and route network levels \citep{lovelace_propensity_2016}. There is great potential to expand on the principle of publicly accessible transport planning tools via 'web apps', perhaps through new R packages dedicated to visualising transport data.

# Future directions of travel

This paper has demonstrated the great potential for R to be used for transport planning. R's flexibility, powerful GIS capabilities \citep{bivand_applied_2013} and free accessibility make it well-suited to the needs of transport planners and researchers, especially those wanting to avoid the high costs of market-leading products. Rather than 'reinvent the wheel' (e.g. with a new class system), stplanr builds on existing packages and \CRANpkg{sp} classes to work with common transport data formats. It is useful to see stplanr, and R for transport planning in general, as an additional tool in the transport planner's cabinet. It can be understood as one part of a wider movement that is making transport planning a more open and democratic process. Other developments in this movement include the increasing availability of open data \citep{naumova_building_2016} and the rise of open source products for transport modelling, such as SUMO, MATSim and MITSIMLAB \citep{saidallah_comparative_2016}. stplanr, with its focus on GIS operations rather than microscopic vehicle-level behaviour, can complement such software and help make better use of new open data sources. Because transport planning is an inherently spatial activity, stplanr occupies an important niche in the transport planning software landscape, with its focus on spatial transport data.

There is great potential for development of stplanr in many directions. Desirable developments include the addition of functions for modelling modal split, for example functions to create the distance decay curves commonly found in active travel research \citep{martinez_new_2013}, and improving the computational efficiency of existing functions to make the methods more scalable for large databases. Our priority for stplanr, however, is to keep the focus on geographic functions for transport planning. There are many opportunities in this direction, including:

• Functions to assess the environment surrounding routes, e.g. via integration with the in-development osmdata package.
• Functions to match different GIS routes, perhaps building on the Hausdorff distance algorithm implemented in the \CRANpkg{rgeos} function gDistance.
• Additional functions for route-allocation of travel, e.g. via an interface to the OpenTripPlanner API.
• Functions for aggregating very large GPS trace datasets (e.g. into raster cells) for anonymisation and analysis/visualisation purposes.
• The creation of a class system for spatial transport datasets, such as to represent spatial routes and route networks (perhaps with classes named \code{"sr"} and \code{"srn"}). This is not a short-term priority, and it would be beneficial for such developments to coincide with a migration to \CRANpkg{sf} for spatial classes.

Such spatial data processing capabilities would increase the range of transport planning tasks that stplanr can facilitate.
For all this planned development activity to be useful, it is vital that new functionality is intuitive. R has a famously steep learning curve. Implementing simple concepts such as consistent naming systems \citep{baath_state_2012} and ensuring 'type stability' can greatly improve the usability of the package. For this reason, much future work in stplanr will go into improving documentation and user-friendliness.

Like much open source software, stplanr is an open-ended project, a work-in-progress. We have set out clear motivations for developing transport planning capabilities in R and believe that the current version of stplanr (0.1.6) provides a major step in that direction compared with what was available a couple of years ago. But there is much more to do. We therefore welcome input on where the package's priorities should lie, how it should evolve in the future and how to ensure it is well-developed and sustained.

\bibliography{references}
# In the table above what is the least number of table entries

Attachment: Table.png

In the table above, what is the least number of table entries that are needed to show the mileage between each city and each of the other five cities?

A. 15
B. 21
C. 25
D. 30
E. 36

## Re: In the table above what is the least number of table entries

Total number of entries = 6 × 6 (6 rows × 6 columns) = 36. Six of those entries pair a city with itself, so subtract them: 36 − 6 = 30. Each remaining mileage then appears twice, once in each direction, so the minimum number of entries required is half of that: 30/2 = 15.

## Re: In the table above what is the least number of table entries

(Bunuel, Math Expert:) The least number of table entries will be achieved if we use only one entry for each pair of cities. How many entries would the table then have? In other words, how many different pairs can be selected out of 6 cities? $$C^2_{6}=15$$

Similar question to practice: each-dot-in-the-mileage-table-above-represents-an-entry-95162.html

Hope it helps.

## Re: In the table above what is the least number of table entries

Hi Bunuel! I understand how we have arrived at 15.
Here, we assume that the distance from a city to another city is the same even when the origin and destination are flipped. But there is a possibility of travelling from City A to City B in 5 kilometres and from City B to City A in 10 kilometres (since a route may be one-way). The question merely asks for the least number of table entries, not the least number of entries under an assumption that there are no one-way routes. So shouldn't the answer be 30?

### Re: In the table above what is the least number of table entries (Bunuel, Math Expert)

You are over-thinking. If the distance from A to B is 5 miles, then the distance from B to A is also 5 miles.
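For readers who want to check the pair count mechanically, here is a minimal sketch (Python standard library; not part of the original thread):

```python
# Count the unordered pairs of 6 cities: one table entry per pair.
import math

cities = 6
entries = math.comb(cities, 2)  # C(6, 2) = 6! / (2! * 4!)
print(entries)  # 15
```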
# Tag Info

0 50-450 rep for answers in Scala: I'm giving out bounties for answers in Scala, an amazing language that is unfortunately rarely used here. Any Scala answer gets 50 rep from me, and if the following are met, you can get a bonus: good explanation: +50-100; cleverly done: +50-200; wins challenge when posted: +100. The first two are subjective, but I'll try to be ...

-1 Reverse error quine: Based on this. Write a quine that prints its own source code in reverse, but to STDERR. Penalties: +200 for using other files, e.g. reverse.txt; +100 for internet usage; +20 for reading your own source code. Rules: minimum length of source is 2 bytes; no palindromes; standard loopholes apply. code-golf, shortest code wins.

0 What is the next repdigit? (code-golf, math): A repdigit $r$ is a number containing repeated instances of the same digit $d$. It can be represented as $r = d \cdot \frac{10^i-1}{9}$, $i \ge 0$, $1 \le d \le 9$. Challenge: given two positive integers $n, j \in \mathbb{N}$, where $j \le n$, determine the next repdigit $r = k \cdot n + j$, where \...

1 Create a QR quine (a "qruine" if you will): Design a QR code that legibly spells with its pixels (or the space between them) the text that it scans to. For example, this QR code scans to the letter A, and also spells out A in white. It does not matter what the scanned and displayed text is; it does not have to be coherent or readable. The winner is ...

0 Output function from one to another: I tried to post this twice and I gained negative feedback, so I will put what I have in mind here if anyone is interested. Take a certain output function from one programming language and transfer it to another. Criteria: it should function the same way as the original function; syntax doesn't matter; the function should be ...

3 Compile Commentscript (code-golf, javascript): Commentscript is a variant on Javascript that I made up for the purpose of this question. Only commented-out code is evaluated. Javascript has two types of comments: // is a single-line comment, which starts with // and ends on a newline; /* ... */ is a multiline comment, which starts with /* and ends with */ ...

1 Generalise perfect numbers: Let $\sigma(n)$ represent the divisor sum of $n$. Perfect numbers are numbers whose divisor sum equals their double, or $\sigma(n) = 2n$. For example, $\sigma(6) = 12 = 2\times6$. Superperfect numbers are numbers whose twice-iterated divisor sum equals their double. For example, $\sigma^2(16) = \sigma(\sigma(16)) = \sigma(31)...$ (a quick verification sketch follows this listing).

7 Remove Nth occurrences

0 I ain't no Fortunate Prime: The primorial $p_n\#$ is the product of the first $n$ primes. The sequence begins $2, 6, 30, 210, 2310$. A Fortunate number, $F_n$, is the smallest integer $m > 1$ such that $p_n\# + m$ is prime. For example, $F_7 = 19$, as $p_7\# = 2\times3\times5\times7\times11\times13\times17 = 510510$. Adding each number ...

1 Is it zero- or one-indexed? (code-golf, array-manipulation): Your task is to determine whether some arbitrary programming language has zero-indexed or one-indexed arrays, based on sample inputs and outputs. Inputs: an array of integers with at least 2 elements, a positive integer index, and the value of the array at that index. Output: one of four distinct values ...

0 Filetype colors (code-golf, file-system; sandbox questions: Is this unambiguous? Any other tags?): For anyone who has spent a headache trying to understand dir_colors with GNU ls, this may be the post for you! We're going to ignore parsing LS_COLORS, or matching globs, and instead we'll focus solely on the interesting part: taking the struct stat.st_mode and ...
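As a quick verification of the divisor-sum definitions in the "Generalise perfect numbers" item above (a minimal, unoptimized Python sketch; the function name is mine, not from the post):

```python
# sigma(n): sum of all divisors of n, checked against the examples in the post.
def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

print(sigma(6))          # 12 = 2*6, so 6 is perfect
print(sigma(sigma(16)))  # sigma(31) = 32 = 2*16, so 16 is superperfect
```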
3 I just used my first name when I joined Stack Exchange and never changed it. It's a less common form of Arnaud (without an "L") but is pronounced the same way, i.e. \aʁ.no\ (and not \aʁ.nɔld\ like "Arnold"). I went by the nickname "Axl" many years ago as the sysop of a BBS running on an Atari 520 ST during the pre-...

1 50-200 rep for interesting zsh answers: I will award this if: the answer is interesting; you have not been awarded this bounty before; it will not cause me to go below 2000 reputation. Possible bonuses if: a particularly good explanation is provided; it is the first answer to the question; it out-golfs me.

-1 Print your PPCG avatar: This is a graphical output question. You have to connect to codegolf.stackexchange.com, scrape and download your avatar, and show it in your default image viewer or some other way. Standard loopholes apply; connections are only allowed to codegolf.stackexchange.com. One language can only be used once, but a user can post ...

-2 Enumerate all possible IPv4 addresses: The title might make the challenge sound hard, but it's easy. You have to print all the possible IPv4 addresses from 0.0.0.0 to 255.255.255.255. Standard loopholes apply; no internet usage for this. Tags: code-golf, kolmogorov-complexity, string

-3 Google search: The challenge is very simple: you have to take a string as input, launch the default web browser, and search for that string on Google. The input string will have only characters a-zA-Z. The Google search URL is google.com/search?q=querystring. Standard loopholes apply; an internet connection is allowed, but only to the google.com domain. Tags: code-golf, ...

1 Self-improving program quine (code-golf, restricted-complexity): In this challenge you will write a program or function which, when run, will output a faster solution to this challenge. Formally speaking: you will write a program or function, $T_0$, which takes an integer $x$ as input and outputs a program or function (in the same language), $T_1$, which is ...

1 Bytewise look-and-say sequence (code-golf): The look-and-say sequence is a sequence which begins 1, 11, 21, 1211, 111221, 312211. To get a term of the sequence from the previous one, read the previous term out literally: 312211 => one three, one one, two twos, two ones => 13112221. So the next term is 13112221. Given this JavaScript code (Node.js), which ...

3 My username "sporeball" was essentially generated by brute force. I wanted something simple yet decently interesting, but I'm terrible at coming up with good ideas, so a couple of years ago I wrote a little bit of JavaScript to make up names for me. The code first populates an array with 114 words I picked out by hand (star, rice, cloak, ocean, ...

0 Decode an 8086 MOD R/M (code-golf, bitwise, parsing): I think this might be fun. Or, it might be torture, idk. 😛 If this is popular, I might add a sequel for the significantly more complex 32-bit encoding 😏 Time for a mini objdump. An 8086 MOD R/M field is laid out like so: MOD REG R/M | OPTIONAL DISPLACEMENTS mm rrr rrm | (iiiiiiii) | (iiiiiiii) 76 543 ...

-2 English Grammar Checker (regular-expression, natural-language, grammars): Being tired of checking English grammar, I decided to write an English grammar regular expression. Notation: all capital letters denote an expression; quoted strings (like "a") and lower-case letters denote literals; AB means concatenating expressions/literals A and B together. (...

1 Stack half full or half empty? (worldview of a programming language): Is the glass half full or half empty?
It's a common rhetorical question used to gauge a person's worldview, which can be optimistic or pessimistic based on the answer given. If I were dealing with a machine I would probably ask "Is the stack half full or half empty?" Let's try and see ...

2 Snail word: Very similar to other challenges.

2 What can I say - I'm not creative.

0 Subbasis. Generate. Discrete? Objective: Given finitely many finite sets, interpret them as a subbasis to generate a space, and decide whether the resulting topology is discrete. Introduction to topology: Given a set $X$, a topology $\mathcal{T}$ over $X$ is a subset of the power set $\mathcal{P}(X)$ such that $\emptyset, X \in \mathcal{T}$. For ...

0 50 rep for each first sed answer, if these conditions are met: giving away this bounty does not make me go below 400 rep; the question has at least 3 total upvotes; the answer is the first answer for that question; it's not code-bowling; the answer has at least 1 total upvote; I approve (I probably will); the answer is in POSIX or GNU sed, and I'm less ...

0 Introduction: Book cipher. A book cipher is a very distinctive method of enciphering. Here's how it's done: You have a book, a document or an article (something full of text; the more pages of text the better). You have a message to convey (a secret of some sort). You simply read through the text and the secret message (may come from input, or reading it from 2 ...

1 50-250 rep for first answers in ThumbGolf: In celebration of ThumbGolf leaving pre-alpha, I will give the first ThumbGolf answer per user a bounty. I will be following similar rules to Adám's APL challenge. Specifically: the program must not be a normal Thumb-2 program. It must make use of the ThumbGolf runtime. Specifically, if I remove all ThumbGolf ...

0 Let's trick Bob (code-golf): Bob and I are rivals. We constantly try to "hack" each other, even though neither of us has a clue about hacking or cybersecurity. The other day, when Bob left his laptop for a coffee break, I found the source code for his website. This project also contained a file called usernamesAndPasswords.json! Intrigued, I opened it, ...

4 Oktupol is the German spelling of octupole, a magnetic field created by eight electric charges: four dipoles or two quadrupoles. I started using that name while experimenting with computer networks as a teenage child. I was interested in physics back then; when I needed to come up with host names for some virtual machines, I named them Monopol, Dipol and ...

1 Nothing, just using my nickname.

3 Weasels: My username (Wezl) is Weasel spelled better and more recognizably. Some weasel facts that may have influenced this decision: They're adorable (yes, this is a fact); a simple interweb search is enough proof. They're evil; I'm not necessarily evil, but it helps to have an evil avatar so I don't get held responsible for my actions. They bring bad luck. ...

2 It's just my name + surname with all vowels removed. Boring… The profile pic is some doodle I did many years ago, but I like it so it stuck.

1 Score an approximation (code-challenge, code-golf): Given a list of inputs (strings) and their expected outputs (integers or floats)[1], and a black-box program[2], calculate the score using the following scoring system: let $R_n$ be the expected output of the $n^{th}$ input, let $A_n$ be the actual output given by the black-box program, and let $j$ be ...

-1 Create a Screw: There are a variety of code-CAD languages, as well as other 3D APIs, that allow you to define shapes with code.
Some of these are limited, while others are Turing complete and/or use a popular programming language such as Python or JavaScript. The challenge: output a 3D model of a screw. Acceptable formats include .stl files and .obj files. A ...

7 regexp backwards. Many of my previous online aliases have been obscure programming terms, backwards. This one is not quite as obscure, but it has two extra features that make it suitable: it contains x, which as we all know is the most expensive letter in the alphabet, and it starts and ends with my initials. It's pronounced /pʰə̆ɡsɛˈgə/ or /pʰə̆ɡsɛˈgər/.

0 Print the nth sequence from OEIS: The input is one natural number n (the sequence number). Your task is to append it to the link https://oeis.org/A (example of a link with n = 1: https://oeis.org/A1), then fetch the HTML of that page and extract the sequence from it. Examples: n -> sequence; 1 -> 0, 1, 1, 1, 2, 1, 2, 1, 5, 2, 2, 1, 5, 1, 2, 1, 14, ...

1 Heads I win, tails you booze: coin tossing, pub trivia style (code-golf, probability-theory, game): At my old pub trivia night, a free jug of beer or bottle of wine was awarded to the team that won Heads or Tails. This game requires players to correctly guess the outcome of successive coin tosses. The game proceeds as follows: before each toss the remaining ...

0 Compute the analysis spectrum: This is based on a task I had at work a few years back. A diode-array detector (DAD) is a kind of chromatography detector which fires light into a sample and detects the strength of the light that passes through it at different wavelengths. From this you can determine, among other things, the contents or the purity of the sample. ...

2 Pad a jagged array to be square (code-golf, array-manipulation)

3 My username "Davide" is just my actual name. A few months ago I was learning C (my first language) and signed up on SO to ask something. I don't think I typed any username; I just logged in using my Google account. I take this opportunity to change my username and add an avatar. Apart from golfing in C and learning JavaScript, I only eat, sleep, listen to ...

0 Generate a "Pi-ey" number (math, pi, fastest-code, code-golf): Generate the smallest integer of the form $x^x$ with $10^{n-1}\cdot\pi$ digits, rounded down, given $n$. If there is no such number, take the nearest number that has as many digits of $\pi$ in its digit count. For example, if $n=5$, then the number of digits would be 31415. If there was no such ...

1 (WIP) Settling the Lands of Codegolfia (king-of-the-hill, grid): The challenge controller will randomly generate a 200x200 map representing the terrain of the land. Your task is to write an AI whose goal is to have the largest population after 500 turns. Start: each player begins with 1 cell claimed and a population of 100. Turns: on each turn, you have the ...

0 Challenge: Print all the Tamil characters in any order you like. There should be 12 vowels, 18 consonants and the special symbol ஃ. Your code should not print anything except those letters, but it can print one letter per line. The consonants are: க் ங் ச் ஞ் ட் ண் த் ந் ப் ம் ய் ர் ல் வ் ழ் ள் ற் ன். The vowels are: அ ஆ இ ஈ உ ஊ எ ஏ ஐ ஒ ஓ ஔ. Shortest code ...

4 A couple of years ago, I got into reading a Russian knock-off of Harry Potter which one of my friends recommended to me (the book was actually really well-written, still one of my all-time favourites). During one of the chapters, I noticed a character with the name Varsus, suspiciously similar to that friend's online nickname Varsis.
When I told him about it, ...

7 Wow, y'all with your usernames that actually have meaning. I literally just spent 5 minutes typing random keys until I ended up with something I deemed "cool" and "ironic". Having said that, I did want to make sure the letters L and X were present (for maximum coolness) and that there was a name length of 5 (shorter names are more ...

10 Mine has several meanings: Many of the answers here on CGCC are quite traumatic to read, and everything here is digital. Digital Trauma is the medical term for nose-picking. I think a lot of us are figuratively picking our noses here, i.e. spending time asking/answering challenges for fun, when we should probably be focussing our talents elsewhere, e.g. ...
# Return Loss $(S_{11})$ Equation for an nth-order Butterworth Filter

I need a closed-form equation for the return loss $S_{11}$ of an nth-order Butterworth low-pass prototype filter. I am designing a high-power RF (lumped-element) nth-order Butterworth bandpass filter for the transmitter side, so I will be transforming the return loss equation for the low-pass prototype to a bandpass filter. I have the equations for the transfer function and for the insertion loss, but not the return loss. Also, if someone can provide me with a reference text or handbook for this type of question, I'd much appreciate it. I did check the first 7 suggested posts, but this question has not been answered before. $Z_{source} = Z_{load}$, in this case 50 Ω.

• What is the input impedance of the circuit, and what, ideally, would you want it to be? From those parameters you can calculate return loss. – Andy aka Jul 25 '15 at 11:28

• I thought there might be some closed-form equation that I could use, rather than performing circuit analysis on the circuit. – My Other Head Jul 25 '15 at 11:46

• Well, you're going to have to perform circuit simulation on it to get any sort of accuracy in your filter response, especially if you take into account the component parasitics. Depending on the frequency and powers you're looking at, you may have to take those into account to get an accurate simulation, so you might as well start out with a simulation. – rfdave Jul 25 '15 at 15:25

There is no single answer to this question. There are numerous circuits that can implement any particular filter design, as we discussed in a recent question. The return loss ($S_{11}$) depends on the topology you choose to implement the filter. If you use an active topology, for example, then only the first stage will affect $Z_{in}$, and thus only the first stage will affect $S_{11}$. If you choose a passive topology, it depends on whether you construct the filter from pi sections or T-sections or some other topology. For example, if you use pi sections (with LC parallel elements in the shunt members), then $S_{11}$ will go to -1 in the stop-bands. If you use T sections (with LC series elements in the through members) it will go to +1. If you use microstrip elements, the behavior will likely have some complex periodic behavior in the stop bands. Whichever one you choose, $S_{11}$ should be near zero in the pass band. Whether you achieve -40 or -50 or -60 dB probably depends more on choosing very tight-tolerance parts or trimming the circuit carefully than on the nominal design, although some design choices might be more or less sensitive to component variation. So a closed-form solution for reflections in the nominal design won't help as much as doing a Monte Carlo simulation accounting for likely component variations. If $n$ is more than 2, I'd suggest just simulating the design rather than trying to find a closed-form solution, because the equations will get rather tedious to deal with very quickly.

• I'm a bit puzzled. I stated the filter is for high-power RF (commonly meaning 20+ dBm and below 1 GHz) and lumped-element, so microstrip and active topologies are not applicable, and that I am after the filter prototype equation, which I thought means a Cauer ladder topology by definition. I agree $S_{11}$ should be near zero, so for practical purposes I'm aiming for -30 dB, -40 dB, -60 dB if I can get it. I'm still examining the recent question you mentioned. Thanks for your help so far.
– My Other Head Jul 25 '15 at 17:52

• Even if you restrict it to a Cauer ladder topology, you can still choose whether the initial section is pi or T, and if n > 2 you can still choose different ways to arrange the sections to implement the poles and zeros. Also, if you really want high power handling, you should consider microstrip implementations rather than lumped elements. – The Photon Jul 25 '15 at 18:03

• Also, "high power" does not imply less than 1 GHz. There are lots of high-power applications above 1 GHz. Most of the defined radar bands (L, K, X, ...), for example, are above 1 GHz. "High power" doesn't even really say anything about power. If you want us to know what band you're operating in and how much power you're using, you need to tell us what band and how much power. – The Photon Jul 25 '15 at 18:08

• Apologies. I meant 50+ dBm and below 1 GHz (VHF/UHF). – My Other Head Jul 25 '15 at 18:09

• Just checking a reference that might answer this question and my subsequent one on Chebyshev type 2: Matthaei, Young and Jones, 1963. I didn't realise I've had it all along, but was reluctant to wade through the 1000 pages or so of it. It will be interesting to see what they say about parasitics. Back in a day or two. – My Other Head Jul 25 '15 at 18:34
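For what it's worth, if the ideal doubly-terminated lossless prototype is assumed, a closed form does follow from energy conservation: $|S_{11}(j\omega)|^2 = 1 - |S_{21}(j\omega)|^2$ with the Butterworth response $|S_{21}(j\omega)|^2 = \frac{1}{1+\omega^{2n}}$, giving $|S_{11}| = \frac{\omega^n}{\sqrt{1+\omega^{2n}}}$. A minimal numerical sketch (not from the thread; it ignores parasitics and component tolerances, which is exactly the limitation discussed above):

```python
# Return loss of an ideal, lossless, matched n-th order Butterworth low-pass prototype.
# Assumes |S11|^2 = 1 - |S21|^2 and |S21|^2 = 1/(1 + w^(2n)), w = normalized frequency.
import math

def return_loss_db(w, n):
    s11 = w**n / math.sqrt(1.0 + w**(2 * n))  # |S11|
    return -20.0 * math.log10(s11)            # return loss in dB

for w in (0.1, 0.5, 1.0):
    print(f"w = {w}: RL = {return_loss_db(w, n=5):.2f} dB")
# At the band edge (w = 1), |S11|^2 = 1/2, i.e. about 3.01 dB return loss.
```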
# Bond Angles

##### Stage: 5 Challenge Level:

Below is a diagram of the tetrahedral molecule, along with axes and labelled atoms. As has been stated in the question, Atom 1 has coordinates (0,0,0). Additionally, since atoms 3, 4 and 5 lie on the plane $z = -h$, it can be reasoned that Atom 2 lies on the z axis and so has coordinates (0, 0, 1). To define the coordinates of atoms 3, 4 and 5, the x and y axes must be placed arbitrarily. To make the algebra simpler, the x axis is placed such that Atom 3 lies along it. Thus the coordinates of Atom 3 are (x, 0, -h), where x is a quantity to be determined. Since each bond is of unit length, x can be found as follows:

$\sqrt{x^2 + h^2} = 1$

$\therefore x = \sqrt{1 - h^2}$

Thus Atom 3 has coordinates ($\mathbf{\sqrt{1 - h^2}}$, 0, -h). To find the coordinates of atoms 4 and 5, it is easier to view the tetrahedron by looking down the z-axis. From there, it can be seen that the three bonds are equally spaced and so must be separated by an angle of 120$^\circ$ in the plane. It has already been shown that the projected length of each of these bonds in the plane is $x$, so the components along the x and y axes for Atoms 4 and 5 can be calculated using trigonometry: Atom 5 has $x\cos(30^\circ)$ along the y axis and $-x\sin(30^\circ)$ along the x axis; Atom 4 has $-x\cos(30^\circ)$ along the y axis and $-x\sin(30^\circ)$ along the x axis. As atoms 3, 4 and 5 lie in the plane z = -h, the coordinates for atoms 4 and 5 are as follows:

Atom 4: ($-x\sin(30^\circ)$, $-x\cos(30^\circ)$, -h) = ($\mathbf{-\sqrt{1 - h^2}\sin(30^\circ), -\sqrt{1-h^2}\cos(30^\circ), -h}$)

Atom 5: ($-x\sin(30^\circ)$, $x\cos(30^\circ)$, -h) = ($\mathbf{-\sqrt{1 - h^2}\sin(30^\circ), \sqrt{1-h^2}\cos(30^\circ), -h}$)

The coordinates of atoms 2, 3, 4 and 5 have now been found. Their position vectors from Atom 1 are the same as their coordinates, since Atom 1 is at the origin. Using scalar products between pairs of position vectors, with the knowledge that their angular separation (i.e. bond angle) should be the same, yields values for h and also for the bond angle:

Dotting 2 and 3: $-h = \cos(\theta)$

Dotting 3 and 4: $\frac{3}{2}h^2 - \frac{1}{2} = \cos(\theta)$ (since the dot product is $-\frac{x^2}{2} + h^2$ with $x^2 = 1 - h^2$)

Combining these equations to eliminate $\cos(\theta)$ gives:

$3h^2 + 2h - 1 = 0$

$h = -1\ \text{or}\ h = \frac{1}{3}$

A positive value for h is clearly needed:

$\mathbf{h = \frac{1}{3}}$

$\therefore \cos(\theta) = -\frac{1}{3}$

$\rightarrow \mathbf{Bond\ angle = \arccos(-\frac{1}{3})}$

There are many different ways to deform the perfect tetrahedron, which lead to a variety of different numbers of bond angles being preserved. It is possible to deform the tetrahedron in such a way that 4 bond angles remain at 109.5$^\circ$ while one bond angle is increased and another concomitantly decreased. Look at the tetrahedron illustrated below. Atom 2 can be moved towards Atom 3, which will leave the following angles preserved: Atom 4 to 5, Atom 4 to 3, Atom 3 to 5, and Atom 4 to 2. The last of these angles is only preserved if the bond length of Atom 4 remains fixed, such that Atom 4 moves towards Atom 2 along the arc of a circle.
It can clearly be seen also that the angle between atoms 2 and 3 decreases, and that the angle between 2 and 5 increases!

The second part of this problem involves the trigonal pyramidal ammonia molecule. We have effectively removed an atom from the tetrahedron and squashed the three remaining atoms closer together. It can be clearly visualised that the bond lengths must be increased in order to maintain the same distance between the hydrogens. To calculate by what percentage, we must first calculate the distance between the hydrogens in the tetrahedron. Using simple trigonometry, $x$ can be calculated:

$\sin(\frac{\theta}{2}) = \frac{x}{1}$

$x = \sin(\frac{1}{2}\arccos(-\frac{1}{3}))$

Now consider a triangle for the ammonia molecule. Note that the distance between the hydrogens is $2x$, as in the tetrahedron. Again, using trigonometry:

$\sin(\frac{107.5^\circ}{2}) = \frac{x}{y}$

Substituting for $x$ and rearranging to find $y$ gives:

$y = \frac{\sin(\frac{1}{2}\arccos(-\frac{1}{3}))}{\sin(\frac{107.5^\circ}{2})} = 1.0125$

Therefore, the percentage increase is 1.25%.

If the hydrogen atoms remained fixed and a vertical force was applied to the N atom, the maximum bond angle would be seen when the molecule became planar. In these circumstances, the three hydrogen atoms would be in the same plane as the nitrogen atom, and would be distributed around it at 120$^\circ$ intervals. Thus, the maximum theoretical bond angle for the trigonal pyramidal arrangement is $\mathbf{120^\circ}$. Although the formation of the planar ammonia molecule requires much energy, this is an observed phenomenon. At room temperature, ammonia rapidly inverts (~$10^{12}$ times per second), much like an umbrella turning inside out in the wind. In order to invert, the molecule must pass through a high-energy, unstable planar transition state.
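A quick numerical check of both results above (a minimal Python sketch; not part of the original solution):

```python
# Verify the tetrahedral bond angle and the ammonia bond-length increase.
import math

theta = math.acos(-1.0 / 3.0)                # tetrahedral bond angle
print(math.degrees(theta))                   # ~109.47 degrees

x = math.sin(theta / 2.0)                    # half the H-H separation for unit bonds
y = x / math.sin(math.radians(107.5 / 2.0))  # required bond length at a 107.5 deg angle
print(y)                                     # ~1.0125, i.e. a 1.25% increase
```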
# Magnetometer

A magnetometer is an instrument that measures magnetism—either the magnetization of a magnetic material like a ferromagnet, or the direction, strength, or relative change of a magnetic field at a particular location. A compass is a simple example of a magnetometer, one that measures the direction of an ambient magnetic field. The first magnetometer capable of measuring the absolute magnetic intensity was invented by Carl Friedrich Gauss in 1833, and notable developments in the 19th century included the Hall effect, which is still widely used.

Magnetometers are widely used for measuring the Earth's magnetic field and in geophysical surveys to detect magnetic anomalies of various types. They are also used in the military to detect submarines. Consequently, some countries, such as the United States, Canada and Australia, classify the more sensitive magnetometers as military technology, and control their distribution.

Magnetometers can be used as metal detectors: they can detect only magnetic (ferrous) metals, but can detect such metals at a much larger depth than conventional metal detectors; they are capable of detecting large objects, such as cars, at tens of metres, while a metal detector's range is rarely more than 2 metres. In recent years, magnetometers have been miniaturized to the extent that they can be incorporated in integrated circuits at very low cost and are finding increasing use as compasses in consumer devices such as mobile phones and tablet computers.

## Introduction

### Magnetic fields

Magnetic fields are vector quantities characterized by both strength and direction. The strength of a magnetic field is measured in units of tesla in SI units, and in gauss in the cgs system of units.
10,000 gauss are equal to one tesla.[1] Measurements of the Earth's magnetic field are often quoted in units of nanotesla (nT), also called a gamma.[2] The Earth's magnetic field can vary from 20,000 to 80,000 nT depending on location; fluctuations in the Earth's magnetic field are on the order of 100 nT, and magnetic field variations due to magnetic anomalies can be in the picotesla (pT) range.[3] Gaussmeters and teslameters are magnetometers that measure in units of gauss or tesla, respectively. In some contexts, magnetometer is the term used for an instrument that measures fields of less than 1 millitesla (mT), and gaussmeter is used for those measuring greater than 1 mT.[1]

### Types of magnetometer

There are two basic types of magnetometer measurement. Vector magnetometers measure the vector components of a magnetic field. Total field magnetometers or scalar magnetometers measure the magnitude of the vector magnetic field.[4] Magnetometers used to study the Earth's magnetic field may express the vector components of the field in terms of declination (the angle between the horizontal component of the field vector and magnetic north) and inclination (the angle between the field vector and the horizontal surface).[5]

Absolute magnetometers measure the absolute magnitude or vector magnetic field, using an internal calibration or known physical constants of the magnetic sensor.[6] Relative magnetometers measure the magnitude or vector magnetic field relative to a fixed but uncalibrated baseline. Also called variometers, relative magnetometers are used to measure variations in the magnetic field.

Magnetometers may also be classified by their situation or intended use. Stationary magnetometers are installed in a fixed position and measurements are taken while the magnetometer is stationary.[4] Portable or mobile magnetometers are meant to be used while in motion and may be manually carried or transported in a moving vehicle. Laboratory magnetometers are used to measure the magnetic field of materials placed within them and are typically stationary. Survey magnetometers are used to measure magnetic fields in geomagnetic surveys; they may be fixed base stations, as in the INTERMAGNET network, or mobile magnetometers used to scan a geographic region.

### Performance and capabilities

The performance and capabilities of magnetometers are described through their technical specifications. Major specifications include:[1][3]

• Sample rate is the number of readings given per second. The inverse is the cycle time in seconds per reading. Sample rate is important in mobile magnetometers; the sample rate and the vehicle speed determine the distance between measurements.

• Bandwidth or bandpass characterizes how well a magnetometer tracks rapid changes in magnetic field. For magnetometers with no onboard signal processing, bandwidth is determined by the Nyquist limit set by the sample rate. Modern magnetometers may perform smoothing or averaging over sequential samples, achieving lower noise in exchange for lower bandwidth.

• Resolution is the smallest change in a magnetic field the magnetometer can resolve. A magnetometer should have a resolution a good deal smaller than the smallest change one wishes to observe.

• Quantization error is caused by rounding and truncation in the digital representation of the data.

• Absolute error is the difference between the readings of a magnetometer and the true magnetic field.

• Drift is the change in absolute error over time.
• Thermal stability is the dependence of the measurement on temperature. It is given as a temperature coefficient in units of nT per degree Celsius.

• Noise is the random fluctuation generated by the magnetometer sensor or electronics. Noise is given in units of $\mathrm{nT}/\sqrt{\mathrm{Hz}}$, where the frequency component refers to the bandwidth.

• Sensitivity is the larger of the noise or the resolution.

• Heading error is the change in the measurement due to a change in orientation of the instrument in a constant magnetic field.

• The dead zone is the angular region of magnetometer orientation in which the instrument produces poor or no measurements. All optically pumped, proton free-precession, and Overhauser magnetometers experience some dead zone effects.

• Gradient tolerance is the ability of a magnetometer to obtain a reliable measurement in the presence of a magnetic field gradient. In surveys of unexploded ordnance or landfills, gradients can be large.

### Early magnetometers

The compass, consisting of a magnetized needle whose orientation changes in response to the ambient magnetic field, is a simple type of magnetometer, one that measures the direction of the field. The oscillation frequency of a magnetized needle is proportional to the square root of the strength of the ambient magnetic field; so, for example, the oscillation frequency of the needle of a horizontally situated compass is proportional to the square root of the horizontal intensity of the ambient field.

In 1833, Carl Friedrich Gauss, head of the Geomagnetic Observatory in Göttingen, published a paper on measurement of the Earth's magnetic field.[7] It described a new instrument that consisted of a permanent bar magnet suspended horizontally from a gold fibre. The difference in the oscillations when the bar was magnetised and when it was demagnetised allowed Gauss to calculate an absolute value for the strength of the Earth's magnetic field.[8] The gauss, the CGS unit of magnetic flux density, was named in his honour, defined as one maxwell per square centimetre; it equals 1×10⁻⁴ tesla (the SI unit).[9]

Francis Ronalds and Charles Brooke independently invented magnetographs in 1846 that continuously recorded the magnet's movements using photography, thus easing the load on observers.[10] They were quickly utilised by Edward Sabine and others in a global magnetic survey, and updated machines were in use well into the 20th century.[11][12]

## Laboratory magnetometers

Laboratory magnetometers measure the magnetization, also known as the magnetic moment, of a sample material. Unlike survey magnetometers, laboratory magnetometers require the sample to be placed inside the magnetometer, and often the temperature, magnetic field, and other parameters of the sample can be controlled. A sample's magnetization is primarily dependent on the ordering of unpaired electrons within its atoms, with smaller contributions from nuclear magnetic moments, Larmor diamagnetism, and others. Ordering of magnetic moments is primarily classified as diamagnetic, paramagnetic, ferromagnetic, or antiferromagnetic (although the zoology of magnetic ordering also includes ferrimagnetic, helimagnetic, toroidal, spin glasses, etc.). Measuring the magnetization as a function of temperature and magnetic field can give clues as to the type of magnetic ordering, as well as any phase transitions between different types of magnetic orders that occur at critical temperatures or magnetic fields.
This type of magnetometry measurement is very important for understanding the magnetic properties of materials in physics, chemistry, geophysics and geology, as well as sometimes biology.

### SQUID (superconducting quantum interference device)

SQUIDs are a type of magnetometer used both as survey and as laboratory magnetometers. SQUID magnetometry is an extremely sensitive absolute magnetometry technique. However, SQUIDs are sensitive to noise, making them impractical as laboratory magnetometers in high DC magnetic fields and in pulsed magnets. Commercial SQUID magnetometers are available for temperatures between 300 mK and 400 K, and magnetic fields up to 7 tesla.

### Inductive pickup coils

Inductive pickup coils (also referred to as inductive sensors) measure the magnetization by detecting the current induced in a coil due to the changing magnetic moment of the sample. The sample's magnetization can be changed by applying a small AC magnetic field (or a rapidly changing DC field), as occurs in capacitor-driven pulsed magnets. These measurements require differentiating between the magnetic field produced by the sample and that from the external applied field. Often a special arrangement of cancellation coils is used. For example, half of the pickup coil is wound in one direction, and the other half in the other direction, and the sample is placed in only one half. The external uniform magnetic field is detected by both halves of the coil, and since they are counter-wound, the external magnetic field produces no net signal.

### VSM (vibrating sample magnetometer)

Vibrating sample magnetometers (VSMs) detect the magnetization of a sample by mechanically vibrating the sample inside an inductive pickup coil or inside a SQUID coil. The induced current or changing flux in the coil is measured. The vibration is typically created by a motor or a piezoelectric actuator. Typically the VSM technique is about an order of magnitude less sensitive than SQUID magnetometry. VSMs can be combined with SQUIDs to create a system that is more sensitive than either one alone. Heat due to the sample vibration can limit the base temperature of a VSM, typically to 2 kelvin. VSM is also impractical for measuring a fragile sample that is sensitive to rapid acceleration.

### Pulsed field extraction magnetometry

Pulsed field extraction magnetometry is another method making use of pickup coils to measure magnetization. Unlike VSMs, where the sample is physically vibrated, in pulsed field extraction magnetometry the sample is secured and the external magnetic field is changed rapidly, for example in a capacitor-driven magnet. One of multiple techniques must then be used to cancel out the external field from the field produced by the sample. These include counter-wound coils that cancel the external uniform field, and background measurements with the sample removed from the coil.

### Torque magnetometry

Magnetic torque magnetometry can be even more sensitive than SQUID magnetometry. However, magnetic torque magnetometry doesn't measure magnetism directly, as all the previously mentioned methods do. It instead measures the torque $\tau$ acting on a sample's magnetic moment $\mu$ as a result of a uniform magnetic field $B$: $\tau = \mu \times B$. The torque is thus a measure of the sample's magnetic or shape anisotropy. In some cases the sample's magnetization can be extracted from the measured torque. In other cases, the magnetic torque measurement is used to detect magnetic phase transitions or quantum oscillations.
The most common way to measure magnetic torque is to mount the sample on a cantilever and measure the displacement via a capacitance measurement between the cantilever and a nearby fixed object, or by measuring the piezoelectricity of the cantilever, or by optical interferometry off the surface of the cantilever.

### Optical magnetometry

Optical magnetometry makes use of various optical techniques to measure magnetization. One such technique, Kerr magnetometry, makes use of the magneto-optic Kerr effect (MOKE). In this technique, incident light is directed at the sample's surface. Light interacts with a magnetized surface nonlinearly, so the reflected light has an elliptical polarization, which is then measured by a detector.

Another method of optical magnetometry is Faraday rotation magnetometry, which utilizes nonlinear magneto-optical rotation to measure a sample's magnetization. In this method a Faraday-modulating thin film is applied to the sample to be measured, and a series of images is taken with a camera that senses the polarization of the reflected light. To reduce noise, multiple pictures are then averaged together. One advantage of this method is that it allows mapping of the magnetic characteristics over the surface of a sample. This can be especially useful when studying such things as the Meissner effect in superconductors.

Microfabricated optically pumped magnetometers (µOPMs) can be used to detect the origin of brain seizures more precisely and generate less heat than currently available superconducting quantum interference devices, better known as SQUIDs.[13] The device works by using polarized light to control the spin of rubidium atoms, which can be used to measure and monitor the magnetic field.[14]

## Survey magnetometers

Survey magnetometers can be divided into two basic types:

• Scalar magnetometers measure the total strength of the magnetic field to which they are subjected, but not its direction.

• Vector magnetometers have the capability to measure the component of the magnetic field in a particular direction, relative to the spatial orientation of the device.

A vector is a mathematical entity with both magnitude and direction. The Earth's magnetic field at a given point is a vector. A magnetic compass is designed to give a horizontal bearing direction, whereas a vector magnetometer measures both the magnitude and direction of the total magnetic field. Three orthogonal sensors are required to measure the components of the magnetic field in all three dimensions. Magnetometers are also rated as "absolute" if the strength of the field can be calibrated from their own known internal constants, or "relative" if they need to be calibrated by reference to a known field. A magnetograph is a magnetometer that continuously records data.

Magnetometers can also be classified as "AC" if they measure fields that vary relatively rapidly in time (>100 Hz), and "DC" if they measure fields that vary only slowly (quasi-static) or are static. AC magnetometers find use in electromagnetic systems (such as magnetotellurics), and DC magnetometers are used for detecting mineralisation and corresponding geological structures.

### Scalar magnetometers

#### Proton precession magnetometer

Proton precession magnetometers, also known as proton magnetometers, PPMs or simply mags, measure the resonance frequency of protons (hydrogen nuclei) in the magnetic field to be measured, due to nuclear magnetic resonance (NMR).
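The measured quantity is thus a frequency, and the field follows from the proton gyromagnetic ratio, approximately 42.577 MHz/T. A minimal sketch of the conversion (illustrative values; not part of the original article):

```python
# Convert a measured proton precession frequency to magnetic field strength,
# using gamma_p / (2*pi) ~ 42.577 MHz/T for the proton.
GAMMA_P_HZ_PER_T = 42.577e6

def field_from_frequency(f_hz):
    return f_hz / GAMMA_P_HZ_PER_T  # field in tesla

f_measured = 2130.0  # Hz, a plausible reading in the Earth's field
print(field_from_frequency(f_measured) * 1e9, "nT")  # ~50,000 nT
```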
Because the precession frequency depends only on atomic constants and the strength of the ambient magnetic field, the accuracy of this type of magnetometer can reach 1 ppm.[15]

A direct current flowing in a solenoid creates a strong magnetic field around a hydrogen-rich fluid (kerosene and decane are popular, and even water can be used), causing some of the protons to align themselves with that field. The current is then interrupted, and as the protons realign themselves with the ambient magnetic field, they precess at a frequency that is directly proportional to the magnetic field. This produces a weak rotating magnetic field that is picked up by a (sometimes separate) inductor, amplified electronically, and fed to a digital frequency counter whose output is typically scaled and displayed directly as field strength or output as digital data.

For hand- or backpack-carried units, PPM sample rates are typically limited to less than one sample per second. Measurements are typically taken with the sensor held at fixed locations at approximately 10 metre increments. Portable instruments are also limited by sensor volume (weight) and power consumption. PPMs work in field gradients up to 3,000 nT/m, which is adequate for most mineral exploration work. For higher gradient tolerance, such as mapping banded iron formations and detecting large ferrous objects, Overhauser magnetometers can handle 10,000 nT/m, and caesium magnetometers can handle 30,000 nT/m.

They are relatively inexpensive (< 8,000 USD) and were once widely used in mineral exploration. Three manufacturers dominate the market: GEM Systems, Geometrics and Scintrex. Popular models include the G-856, Smartmag, GSM-18 and GSM-19T. For mineral exploration, they have been superseded by Overhauser, caesium and potassium instruments, all of which are fast-cycling and do not require the operator to pause between readings.

#### Overhauser effect magnetometer

The Overhauser effect magnetometer or Overhauser magnetometer uses the same fundamental effect as the proton precession magnetometer to take measurements. By adding free radicals to the measurement fluid, the nuclear Overhauser effect can be exploited to significantly improve upon the proton precession magnetometer. Rather than aligning the protons using a solenoid, a low-power radio-frequency field is used to align (polarise) the electron spin of the free radicals, which then couples to the protons via the Overhauser effect. This has two main advantages: driving the RF field takes a fraction of the energy (allowing lighter-weight batteries for portable units), and faster sampling, as the electron-proton coupling can happen even as measurements are being taken. An Overhauser magnetometer produces readings with a 0.01 nT to 0.02 nT standard deviation while sampling once per second.

#### Caesium vapour magnetometer

The optically pumped caesium vapour magnetometer is a highly sensitive (300 fT/√Hz) and accurate device used in a wide range of applications. It is one of a number of alkali vapours (including rubidium and potassium) that are used in this way, as well as helium.[16] The device broadly consists of a photon emitter containing a caesium light emitter or lamp, an absorption chamber containing caesium vapour, a "buffer gas" through which the emitted photons pass, and a photon detector, arranged in that order.
The basic principle that allows the device to operate is the fact that a caesium atom can exist in any of nine energy levels, which can be informally thought of as the placement of electron atomic orbitals around the atomic nucleus. When a caesium atom within the chamber encounters a photon from the lamp, it is excited to a higher energy state, emits a photon and falls to an indeterminate lower energy state. The caesium atom is "sensitive" to the photons from the lamp in three of its nine energy states, and therefore, assuming a closed system, all the atoms eventually fall into a state in which all the photons from the lamp pass through unhindered and are measured by the photon detector. At this point, the sample (or population) is said to be polarized and ready for measurement to take place. This process is done continuously during operation.

This theoretically perfect magnetometer is now functional and so can begin to make measurements. In the most common type of caesium magnetometer, a very small AC magnetic field is applied to the cell. Since the difference in the energy levels of the electrons is determined by the external magnetic field, there is a frequency at which this small AC field makes the electrons change states. In this new state, the electrons once again can absorb a photon of light. This causes a signal on a photo detector that measures the light passing through the cell. The associated electronics use this fact to create a signal exactly at the frequency that corresponds to the external field.

Another type of caesium magnetometer modulates the light applied to the cell. This is referred to as a Bell-Bloom magnetometer, after the two scientists who first investigated the effect. If the light is turned on and off at the frequency corresponding to the Earth's field, there is a change in the signal seen at the photo detector. Again, the associated electronics use this to create a signal exactly at the frequency that corresponds to the external field. Both methods lead to high-performance magnetometers.

#### Potassium vapour magnetometer

Potassium is the only optically pumped magnetometer that operates on a single, narrow electron spin resonance (ESR) line, in contrast to other alkali vapour magnetometers, which use irregular, composite and wide spectral lines, and helium, with its inherently wide spectral line.[17]

#### Applications

The caesium and potassium magnetometers are typically used where a higher-performance magnetometer than the proton magnetometer is needed. In archaeology and geophysics, where the sensor sweeps through an area and many accurate magnetic field measurements are often needed, caesium and potassium magnetometers have advantages over the proton magnetometer. The caesium and potassium magnetometers' faster measurement rate allows the sensor to be moved through the area more quickly for a given number of data points. Caesium and potassium magnetometers are insensitive to rotation of the sensor while the measurement is being made. The lower noise of caesium and potassium magnetometers allows those measurements to more accurately show the variations in the field with position.

### Vector magnetometers

Vector magnetometers measure one or more components of the magnetic field electronically. Using three orthogonal magnetometers, both azimuth and dip (inclination) can be measured.
By taking the square root of the sum of the squares of the components, the total magnetic field strength (also called total magnetic intensity, TMI) can be calculated by Pythagoras's theorem (a short numerical sketch follows below).

Vector magnetometers are subject to temperature drift and the dimensional instability of the ferrite cores. They also require levelling to obtain component information, unlike total field (scalar) instruments. For these reasons they are no longer used for mineral exploration.

#### Rotating coil magnetometer

The magnetic field induces a sine wave in a rotating coil. The amplitude of the signal is proportional to the strength of the field, provided it is uniform, and to the sine of the angle between the rotation axis of the coil and the field lines. This type of magnetometer is obsolete.

#### Hall effect magnetometer

The most common magnetic sensing devices are solid-state Hall effect sensors. These sensors produce a voltage proportional to the applied magnetic field and also sense polarity. They are used in applications where the magnetic field strength is relatively large, such as in anti-lock braking systems in cars, which sense wheel rotation speed via slots in the wheel disks.

#### Magnetoresistive devices

These are made of thin strips of permalloy (NiFe magnetic film) whose electrical resistance varies with a change in magnetic field. They have a well-defined axis of sensitivity, can be produced in 3-D versions and can be mass-produced as an integrated circuit. They have a response time of less than 1 microsecond and can be sampled in moving vehicles up to 1,000 times per second. They can be used in compasses that read within 1°, for which the underlying sensor must reliably resolve 0.1°.[18]

#### Fluxgate magnetometer

The fluxgate magnetometer was invented by H. Aschenbrenner and G. Goubau in 1936.[19][20] A team at Gulf Research Laboratories led by Victor Vacquier developed airborne fluxgate magnetometers to detect submarines during World War II, and after the war confirmed the theory of plate tectonics by using them to measure shifts in the magnetic patterns on the sea floor.[21]

A fluxgate magnetometer consists of a small, magnetically susceptible core wrapped by two coils of wire. An alternating electric current is passed through one coil, driving the core through an alternating cycle of magnetic saturation; i.e., magnetised, unmagnetised, inversely magnetised, unmagnetised, magnetised, and so forth. This constantly changing field induces an electric current in the second coil, and this output current is measured by a detector. In a magnetically neutral background, the input and output currents match. However, when the core is exposed to a background field, it is more easily saturated in alignment with that field and less easily saturated in opposition to it. Hence the alternating magnetic field, and the induced output current, are out of step with the input current. The extent to which this is the case depends on the strength of the background magnetic field. Often, the current in the output coil is integrated, yielding an analog output voltage proportional to the magnetic field.

A wide variety of sensors are currently available and used to measure magnetic fields. Fluxgate compasses and gradiometers measure the direction and magnitude of magnetic fields.
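Returning briefly to the total-field calculation mentioned under vector magnetometers above, a minimal sketch (the component values are hypothetical):

```python
# Total magnetic intensity (TMI) from three orthogonal field components.
import math

bx, by, bz = 22000.0, 1500.0, 43000.0   # nT, hypothetical component readings
tmi = math.sqrt(bx**2 + by**2 + bz**2)  # Pythagoras in three dimensions
print(tmi)  # ~48,300 nT, within the typical range of the Earth's field
```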
Fluxgates are affordable, rugged and compact, with miniaturization recently advancing to the point of complete sensor solutions in the form of IC chips, including examples from both academia[22] and industry.[23] This, plus their typically low power consumption, makes them ideal for a variety of sensing applications. Gradiometers are commonly used for archaeological prospecting and unexploded ordnance (UXO) detection, such as the German military's popular Foerster.[24]

The typical fluxgate magnetometer consists of a "sense" (secondary) coil surrounding an inner "drive" (primary) coil that is closely wound around a highly permeable core material, such as mu-metal. An alternating current is applied to the drive winding, which drives the core in a continuous repeating cycle of saturation and unsaturation. To an external field, the core is alternately weakly permeable and highly permeable. The core is often a toroidally-wrapped ring or a pair of linear elements whose drive windings are each wound in opposing directions. Such closed flux paths minimise coupling between the drive and sense windings. In the presence of an external magnetic field, with the core in a highly permeable state, such a field is locally attracted or gated (hence the name fluxgate) through the sense winding. When the core is weakly permeable, the external field is less attracted. This continuous gating of the external field in and out of the sense winding induces a signal in the sense winding, whose principal frequency is twice that of the drive frequency, and whose strength and phase orientation vary directly with the external field magnitude and polarity.

There are additional factors that affect the size of the resultant signal. These include the number of turns in the sense winding, the magnetic permeability of the core, the sensor geometry, and the rate of change of the gated flux with respect to time. Phase synchronous detection is used to extract these harmonic signals from the sense winding and convert them into a DC voltage proportional to the external magnetic field. Active current feedback may also be employed, such that the sense winding is driven to counteract the external field. In such cases, the feedback current varies linearly with the external magnetic field and is used as the basis for measurement. This helps to counter inherent non-linearity between the applied external field strength and the flux gated through the sense winding.

#### SQUID magnetometer

SQUIDs, or superconducting quantum interference devices, measure extremely small changes in magnetic fields. They are very sensitive vector magnetometers, with noise levels as low as 3 fT/√Hz in commercial instruments and 0.4 fT/√Hz in experimental devices. Many liquid-helium-cooled commercial SQUIDs achieve a flat noise spectrum from near DC (less than 1 Hz) to tens of kilohertz, making such devices ideal for time-domain biomagnetic signal measurements. SERF atomic magnetometers demonstrated in laboratories so far reach a competitive noise floor, but only in relatively small frequency ranges.

SQUID magnetometers require cooling with liquid helium (4.2 K) or liquid nitrogen (77 K) to operate, hence the packaging requirements to use them are rather stringent from both a thermal-mechanical and a magnetic standpoint. SQUID magnetometers are most commonly used to measure the magnetic fields produced by laboratory samples, and also for brain or heart activity (magnetoencephalography and magnetocardiography, respectively).
Geophysical surveys use SQUIDs from time to time, but the logistics of cooling the SQUID are much more complicated than for other magnetometers that operate at room temperature.

#### Spin-exchange relaxation-free (SERF) atomic magnetometers

At sufficiently high atomic density, extremely high sensitivity can be achieved. Spin-exchange-relaxation-free (SERF) atomic magnetometers containing potassium, caesium or rubidium vapor operate similarly to the caesium magnetometers described above, yet can reach sensitivities lower than 1 fT Hz−½. SERF magnetometers only operate in small magnetic fields: the Earth's field is about 50 µT, while SERF magnetometers operate in fields less than 0.5 µT. Large-volume detectors have achieved a sensitivity of 200 aT Hz−½.[25] This technology has greater sensitivity per unit volume than SQUID detectors.[26] The technology can also produce very small magnetometers that may in the future replace coils for detecting changing magnetic fields.[citation needed] This technology may produce a magnetic sensor that has all of its input and output signals in the form of light on fiber-optic cables.[27] This lets the magnetic measurement be made near high electrical voltages.

## Calibration of magnetometers

The calibration of magnetometers is usually performed by means of coils supplied with an electrical current to create a magnetic field. This allows the sensitivity of the magnetometer (in terms of V/T) to be characterized. In many applications the homogeneity of the calibration coil is an important feature; for this reason, coils such as Helmholtz coils are commonly used, in either a single-axis or a three-axis configuration. For demanding applications a highly homogeneous magnetic field is mandatory; in such cases the calibration can be performed using a Maxwell coil, cosine coils,[28] or calibration in the highly homogeneous Earth's magnetic field.

## Uses

Magnetometers have a very diverse range of applications, including locating objects such as submarines, sunken ships, hazards for tunnel boring machines, hazards in coal mines, unexploded ordnance and toxic waste drums, as well as a wide range of mineral deposits and geological structures. They also have applications in heartbeat monitors, weapon systems positioning, sensors in anti-lock brakes, weather prediction (via solar cycles), steel pylons, drill guidance systems, archaeology, plate tectonics, radio wave propagation and planetary exploration. Depending on the application, magnetometers can be deployed in spacecraft, aeroplanes (fixed-wing magnetometers), helicopters (stinger and bird), on the ground (backpack), towed at a distance behind quad bikes (sled or trailer), lowered into boreholes (tool, probe or sonde) and towed behind boats (tow fish).

### Archaeology

Magnetometers are also used to detect archaeological sites, shipwrecks and other buried or submerged objects. Fluxgate gradiometers are popular due to their compact configuration and relatively low cost. Gradiometers enhance shallow features and negate the need for a base station. Caesium and Overhauser magnetometers are also very effective when used as gradiometers or as single-sensor systems with base stations. The TV program Time Team popularised 'geophys', including magnetic techniques used in archaeological work to detect fire hearths, walls of baked bricks and magnetic stones such as basalt and granite.
Walking tracks and roadways can sometimes be mapped with differential compaction in magnetic soils or with disturbances in clays, such as on the Great Hungarian Plain. Ploughed fields behave as sources of magnetic noise in such surveys.

### Auroras

Magnetometers can give an indication of auroral activity before the light from the aurora becomes visible. A grid of magnetometers around the world constantly measures the effect of the solar wind on the Earth's magnetic field, which is then published as the K-index.[29]

### Coal exploration

While magnetometers can be used to help map basin shape at a regional scale, they are more commonly used to map hazards to coal mining, such as basaltic intrusions (dykes, sills and volcanic plugs) that destroy resources and are dangerous to longwall mining equipment. Magnetometers can also locate zones ignited by lightning and map siderite (an impurity in coal). The best survey results are achieved on the ground in high-resolution surveys (with approximately 10 m line spacing and 0.5 m station spacing). Bore-hole magnetometers using a Ferret can also assist when coal seams are deep, by using multiple sills or looking beneath surface basalt flows.[citation needed] Modern surveys generally use magnetometers with GPS technology to automatically record the magnetic field and its location. The data set is then corrected with data from a second magnetometer (the base station) that is left stationary and records the change in the Earth's magnetic field during the survey.[30]

### Directional drilling

Magnetometers are used in directional drilling for oil or gas to detect the azimuth of the drilling tools near the drill. They are most often paired with accelerometers in drilling tools so that both the inclination and azimuth of the drill can be found.

### Military

For defensive purposes, navies use arrays of magnetometers laid across sea floors in strategic locations (i.e. around ports) to monitor submarine activity. The Russian 'Goldfish' (titanium submarines) were designed and built at great expense to thwart such systems, as pure titanium is non-magnetic.[31] Military submarines are degaussed by passing through large underwater loops at regular intervals, to help them escape detection by sea-floor monitoring systems, magnetic anomaly detectors, and magnetically-triggered mines. However, submarines are never completely de-magnetised. It is possible to tell the depth at which a submarine has been by measuring its magnetic field: pressure distorts the hull, and hence the field, with depth. Heating can also change the magnetization of steel.[clarification needed]

Submarines tow long sonar arrays to detect ships, and can even recognise different propeller noises. The sonar arrays need to be accurately positioned so they can triangulate direction to targets (e.g. ships). The arrays do not tow in a straight line, so fluxgate magnetometers are used to orient each sonar node in the array.

Fluxgates can also be used in weapons navigation systems, but have been largely superseded by GPS and ring laser gyroscopes.

Magnetometers such as the German Foerster are used to locate ferrous ordnance. Caesium and Overhauser magnetometers are used to locate and help clean up old bombing and test ranges.
UAV payloads also include magnetometers for a range of defensive and offensive tasks.[examples needed]

### Mineral exploration

Magnetometric surveys can be useful in defining magnetic anomalies which represent ore (direct detection), or in some cases gangue minerals associated with ore deposits (indirect or inferential detection). This includes iron ore, magnetite, hematite and often pyrrhotite.

Developed countries such as Australia, Canada and the USA invest heavily in systematic airborne magnetic surveys of their respective continents and surrounding oceans, to assist with mapping geology and the discovery of mineral deposits. Such aeromag surveys are typically undertaken with 400 m line spacing at 100 m elevation, with readings every 10 meters or more. To overcome the asymmetry in the data density, data is interpolated between lines (usually 5 times) and data along the line is then averaged. Such data is gridded to an 80 m × 80 m pixel size and image-processed using a program like ERMapper. At an exploration lease scale, the survey may be followed by a more detailed helimag or crop-duster-style fixed-wing survey at 50 m line spacing and 50 m elevation (terrain permitting). Such an image is gridded on a 10 m × 10 m pixel, offering 64 times the resolution. Where targets are shallow (<200 m), aeromag anomalies may be followed up with ground magnetic surveys on 10 m to 50 m line spacing with 1 m station spacing to provide the best detail (2 to 10 m pixel grid, or 25 times the resolution prior to drilling).

Magnetic fields from magnetic bodies of ore fall off with the inverse distance cubed (dipole target), or at best the inverse distance squared (magnetic monopole target). One analogy for this resolution-with-distance is a car driving at night with its lights on: at a distance of 400 m one sees one glowing haze, but as it approaches, two headlights, and then the left blinker, become visible.

There are many challenges in interpreting magnetic data for mineral exploration. Multiple targets mix together like multiple heat sources and, unlike light, there is no magnetic telescope to focus fields; the combination of multiple sources is measured at the surface. The geometry, depth and magnetisation direction (remanence) of the targets are also generally not known, so multiple models can explain the data. Potent by Geophysical Software Solutions [1] is a leading magnetic (and gravity) interpretation package used extensively in the Australian exploration industry.

Magnetometers assist mineral explorers both directly (i.e., gold mineralisation associated with magnetite, diamonds in kimberlite pipes) and, more commonly, indirectly, such as by mapping geological structures conducive to mineralisation (i.e., shear zones and alteration haloes around granites).

Airborne magnetometers detect the change in the Earth's magnetic field using sensors attached to the aircraft in the form of a "stinger" or by towing a magnetometer on the end of a cable. The magnetometer on a cable is often referred to as a "bomb" because of its shape; others call it a "bird". Because hills and valleys under the aircraft make the magnetic readings rise and fall, a radar altimeter keeps track of the transducer's deviation from the nominal altitude above ground. There may also be a camera that takes photos of the ground. The location of each measurement is determined by also recording GPS data.

### Mobile telephones

Many smartphones contain magnetometers; apps exist that serve as compasses.
The iPhone 3GS has a magnetometer, a magnetoresistive permalloy sensor, the AN-203 produced by Honeywell.[32] In 2009, the price of three-axis magnetometers dipped below US$1 per device and dropped rapidly. The use of a three-axis device means that it is not sensitive to the way it is held in orientation or elevation. Hall effect devices are also popular.[33]

Researchers at Deutsche Telekom have used magnetometers embedded in mobile devices to permit touchless 3D interaction. Their interaction framework, called MagiTact, tracks changes to the magnetic field around a cellphone to identify different gestures made by a hand holding or wearing a magnet.[34]

### Oil exploration

Seismic methods are preferred to magnetometers as the primary survey method for oil exploration, although magnetic methods can give additional information about the underlying geology and, in some environments, evidence of leakage from traps.[35] Magnetometers are also used in oil exploration to show locations of geologic features that make drilling impractical, and other features that give geophysicists a more complete picture of the stratigraphy.

### Spacecraft

A three-axis fluxgate magnetometer was part of the Mariner 2 and Mariner 10 missions.[36] A dual-technique magnetometer is part of the Cassini–Huygens mission to explore Saturn.[37] This system is composed of vector helium and fluxgate magnetometers.[38] Magnetometers were also a component instrument on the MESSENGER mission to Mercury. A magnetometer can also be used by satellites like GOES to measure both the magnitude and direction of the magnetic field of a planet or moon.

### Magnetic surveys

Systematic surveys can be used to search for mineral deposits or to locate lost objects. Such surveys are divided into several categories. Data can be divided into point-located data and image data, the latter of which is in ERMapper format.

#### Magnetovision

From the spatially measured distribution of magnetic field parameters (e.g. amplitude or direction), magnetovision images may be generated. Such presentation of magnetic data is very useful for further analysis and data fusion.

#### Magnetic gradiometers

Magnetic gradiometers are pairs of magnetometers with their sensors separated, usually horizontally, by a fixed distance. The readings are subtracted to measure the difference between the sensed magnetic fields, which gives the field gradients caused by magnetic anomalies. This is one way of compensating both for the variability in time of the Earth's magnetic field and for other sources of electromagnetic interference, thus allowing for more sensitive detection of anomalies. Because nearly equal values are being subtracted, the noise performance requirements for the magnetometers are more extreme.

Gradiometers enhance shallow magnetic anomalies and are thus good for archaeological and site investigation work. They are also good for real-time work such as unexploded ordnance location. It is twice as efficient to run a base station and use two (or more) mobile sensors to read parallel lines simultaneously (assuming data is stored and post-processed). In this manner, both along-line and cross-line gradients can be calculated.

#### Position control of magnetic surveys

In traditional mineral exploration and archaeological work, grid pegs placed by theodolite and tape measure were used to define the survey area. Some UXO surveys used ropes to define the lanes. Airborne surveys used radio triangulation beacons, such as Siledus. Non-magnetic electronic hipchain triggers were developed to trigger magnetometers.
They used rotary shaft encoders to measure distance along disposable cotton reels. Modern explorers use a range of low-magnetic-signature GPS units, including Real-Time Kinematic GPS.

#### Heading errors in magnetic surveys

Magnetic surveys can suffer from noise coming from a range of sources, and different magnetometer technologies suffer different kinds of noise problems. Heading errors are one group of noise. They can come from three sources:

• Sensor
• Console
• Operator

Some total field sensors give different readings depending on their orientation. Magnetic materials in the sensor itself are the primary cause of this error. In some magnetometers, such as the vapor magnetometers (caesium, potassium, etc.), there are sources of heading error in the physics that contribute small amounts to the total heading error.

Console noise comes from magnetic components on or within the console. These include ferrite cores in inductors and transformers, steel frames around LCDs, legs on IC chips and steel cases in disposable batteries. Some popular MIL-spec connectors also have steel springs.

Operators must take care to be magnetically clean and should check the 'magnetic hygiene' of all apparel and items carried during a survey. Akubra hats are very popular in Australia, but their steel rims must be removed before use on magnetic surveys. Steel rings on notepads, steel-capped boots and steel springs in overall eyelets can all cause unnecessary noise in surveys. Pens, mobile phones and stainless steel implants can also be problematic. The magnetic response (noise) from ferrous objects on the operator and console can change with heading direction because of induction and remanence.

Aeromagnetic survey aircraft and quad bike systems can use special compensators to correct for heading error noise. Heading errors look like herringbone patterns in survey images; alternate lines can also be corrugated.

#### Image processing of magnetic data

Recording data and image processing is superior to real-time work, because subtle anomalies often missed by the operator (especially in magnetically noisy areas) can be correlated between lines, and shapes and clusters can be better defined. A range of sophisticated enhancement techniques can also be used. A hard copy can also be produced, and systematic coverage is required.
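To make the gradiometer subtraction and the base-station diurnal correction described in the sections above concrete, here is a minimal Python sketch. All readings, timestamps and the sensor separation are hypothetical illustration values:

```python
import numpy as np

# Mobile readings (nT) taken at known times, plus a stationary
# base-station record of the diurnal drift over the same period.
mobile_t = np.array([0.0, 10.0, 20.0, 30.0])               # seconds
mobile_b = np.array([48010.0, 48035.0, 47990.0, 48020.0])  # nT

base_t = np.array([0.0, 15.0, 30.0])                       # seconds
base_b = np.array([48000.0, 48004.0, 48008.0])             # nT

# Diurnal correction: subtract the base-station field (interpolated
# to the mobile timestamps) so only spatial anomalies remain.
corrected = mobile_b - np.interp(mobile_t, base_t, base_b)

# Gradiometer: two sensors separated by a fixed distance; the
# difference divided by the separation approximates the gradient.
top, bottom, separation_m = 48021.5, 48018.3, 0.5
gradient = (top - bottom) / separation_m                   # nT per metre
print(corrected, gradient)
```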
# Edge Coloring

Input Description: A graph $$G=(V,E)$$.

Problem: What is the smallest set of colors needed to color the edges of $$E$$ such that no two edges with the same color share a vertex in common?

Excerpt from The Algorithm Design Manual: The edge coloring of graphs arises in a variety of scheduling applications, typically associated with minimizing the number of noninterfering rounds needed to complete a given set of tasks. For example, consider a situation where we need to schedule a given set of two-person interviews, where each interview takes one hour. All meetings could be scheduled to occur at distinct times to avoid conflicts, but it is less wasteful to schedule nonconflicting events simultaneously. We can construct a graph whose vertices are the people and whose edges represent the pairs of people who want to meet. An edge coloring of this graph defines the schedule. The color classes represent the different time periods in the schedule, with all meetings of the same color happening simultaneously.

The National Football League solves such an edge coloring problem each season to make up its schedule. Each team's opponents are determined by the records of the previous season. Assigning the opponents to weeks of the season is the edge-coloring problem, presumably complicated by the constraints of spacing out rematches and making sure that there is a good game every Monday night.

The minimum number of colors needed to edge color a graph is called by some its edge-chromatic number and by others its chromatic index. To gain insight into edge coloring, note that a graph consisting of an even-length cycle can be edge-colored with 2 colors, while odd-length cycles have an edge-chromatic number of 3.

### Related Problems

Job Scheduling
Vertex Coloring
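One easy way to experiment with edge coloring is to observe that an edge coloring of G is exactly a vertex coloring of its line graph. The sketch below uses the networkx library and a greedy heuristic, so it may use more colors than the chromatic index (which, by Vizing's theorem, is always either the maximum degree Δ or Δ+1 for a simple graph); it is an illustration, not an exact solver:

```python
import networkx as nx

def greedy_edge_coloring(G):
    """Edge-color G by greedily vertex-coloring its line graph.
    Returns {edge: color}; may exceed the true chromatic index."""
    return nx.coloring.greedy_color(nx.line_graph(G), strategy="largest_first")

# Even cycle: 2 colors suffice; an odd cycle needs 3 (as noted above).
print(greedy_edge_coloring(nx.cycle_graph(6)))
print(greedy_edge_coloring(nx.cycle_graph(5)))
```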
# std::ifstream on 64-bit OS

## Recommended Posts

I'm having some trouble running an MD2 Model Loader on Vista 64-bit. The loader itself works perfectly on 32-bit XP, so I figured all I would need to do is recompile the project in VS2008; sadly that doesn't seem to have had any effect. Here's the relevant section of code (the read call and its failure check were lost in the original post's formatting; they are reconstructed below from the description that follows):

```cpp
S_MD2Header header;

std::ifstream fFile;
fFile.open(pFilename, std::ios::in | std::ios::binary);
if(!fFile.is_open())
{
    return false;
}

// Read the MD2 header from the start of the file.
fFile.read(reinterpret_cast<char*>(&header), sizeof(S_MD2Header));
if(!fFile)
{
    fFile.close();
    return false;
}
```

The file is opened correctly, but when I call std::ifstream::read() all that is placed inside the header is zeros. What am I missing?

My first guess would be that the read call is failing in some way; you aren't checking for that at all. Process Monitor might be useful in tracking down the cause of any file reading problems. It's also possible that sizeof(S_MD2Header) changes between the two versions, which would obviously mess things up. Another thing to try would be to use fopen() and fread() instead of ifstream; they are sometimes easier to debug. A minimal compilable example which shows the problem would be helpful, if those guesses aren't right.

Quote (original post by FunLogic): What am I missing?

One thing you're missing is the fundamental fact that reading and writing a binary blob like that is inherently non-portable (across architectures, and even across different build settings from the same compiler on the same architecture). Your expectation of success in doing so is misplaced. I suggest you google for information on object serialization. There are some widely available libraries that could help. --smw

You could have problems with the S_MD2Header structure; odds are that type sizes and/or alignments have changed. Having said that, that wouldn't cause the contents to be all 0's; you'd get garbage data. Try calling gcount() to see how many bytes were successfully read in the last operation.
# What is the Reynolds Number?

The dimensionless Reynolds number plays a prominent role in foreseeing the patterns in a fluid's behavior. The Reynolds number, referred to as Re, is used to determine whether the fluid flow is laminar or turbulent. It is one of the main controlling parameters in all viscous flows, and a numerical model is typically selected according to a pre-calculated Reynolds number. Although the Reynolds number comprises both static and kinematic properties of fluids, it is specified as a flow property, since dynamic conditions are investigated. Technically speaking, the Reynolds number is the ratio of the inertial forces to the viscous forces. In practice, the Reynolds number is used to predict if the flow will be laminar or turbulent. If the inertial forces, which resist a change in velocity of an object and are the cause of the fluid movement, are dominant, the flow is turbulent. Otherwise, if the viscous forces, defined as the resistance to flow, are dominant, the flow is laminar. The Reynolds number can be specified as below:

$Re=\frac{inertial~force}{viscous~force}=\frac{fluid~and~flow~properties}{fluid~properties} \tag{1}$

For instance, a glass of water standing on a static surface, subject to no forces apart from gravity, is at rest, and flow properties are ignored. Thus, the numerator in equation (1) is "0", and a fluid at rest has no meaningful Reynolds number. On the other hand, when water is spilled by tilting a water-filled glass, the flow obeys physical laws and a Reynolds number can be estimated to predict the fluid flow, as illustrated in Figure 1.

Figure 1: A glass of water which a) is at rest; b) flows

## History

The theory of a dimensionless number which predicts fluid flow was initially introduced by Sir George Stokes (1819-1903), who had attempted to figure out the drag force on a sphere while neglecting the inertial term. Stokes also carried the studies of Claude Louis Navier (1785-1836) further, deriving the equation of motion by adding a viscous term in 1851 and thereby revealing the Navier-Stokes equation.[1] Stokes flow, named after Stokes' approach to viscous fluid flow, is the mathematical model in which the Reynolds number is so low that it is presumed to be zero. After Stokes, various scientists conducted studies to examine the properties of fluid movement. Even though the Navier-Stokes equations describe fluid flow thoroughly, it was quite hard to apply them to arbitrary flows, whereas the Reynolds number could easily predict fluid movement.

In 1883, the Irish scientist Osborne Reynolds discovered the dimensionless number that predicts fluid flow based on static and dynamic properties such as velocity, density, dynamic viscosity and the characteristics of the fluid.[2] He conducted experimental studies to examine the relationship between the velocity and the behavior of fluid flow. For this purpose, an experimental setup (Figure 2.a) was built by Reynolds, in which dyed water was released in the middle of the cross-sectional area into the main clear water, to visualize the movement of fluid flow through the glass tube (Figure 2.b).
Figure 2: a) Experimental setup established by Osborne Reynolds; b) Experimental visualization of laminar and turbulent flow

The study of Osborne Reynolds titled 'An experimental investigation of the circumstances which determine whether the motion of water in parallel channels shall be direct or sinuous', regarding the dimensionless number, was issued in "Philosophical Transactions of the Royal Society". According to the article, the dimensionless number discovered by Reynolds was suitable to foresee fluid flow over a broad range, from water flow in a pipe to airflow over an airfoil.[2]

Figure 3: Osborne Reynolds (1842-1912)

The dimensionless number was referred to as the parameter $$R$$ until the presentation of the German physicist Arnold Sommerfeld (1868-1951) at the 4th International Congress of Mathematicians in Rome (1908), where he referred to the $$R$$ number as the 'Reynolds number'. The term used by Sommerfeld has been used worldwide ever since.[3]

## Derivation

The dimensionless Reynolds number predicts whether the fluid flow would be laminar or turbulent, referring to several properties such as velocity, length, viscosity, and also the type of flow. It is expressed as the ratio of inertial forces to viscous forces, in terms of parameters and units respectively, as below:

$Re=\frac{ρVL}{μ}=\frac{VL}{v} \tag{2}$

$Re=\frac{F_{inertia}}{F_{viscous}}=\frac{\frac{kg}{m^3}\times{\frac{m}{s}}\times{m}}{Pa\times{s}}=\frac{F}{F} \tag{3}$

where $$ρ~(\frac{kg}{m^3})$$ is the density of the fluid, $$V~(\frac{m}{s})$$ is the characteristic velocity of the flow, and $$L~(m)$$ is the characteristic length scale of the flow. Equation (3) shows, in terms of units, that the Reynolds number is dimensionless. Variations of the Reynolds number are shown in equation (2), where $$μ~(Pa\times{s})$$ is the dynamic viscosity of the fluid and $$v~(\frac{m^2}{s})$$ is the kinematic viscosity. The transition between dynamic and kinematic viscosity is as follows:

$v=\frac{μ}{ρ} \tag{4}$

## Fluid, Flow and Reynolds Number

The applicability of the Reynolds number differs depending on the specifications of the fluid flow, such as the variation of density (compressibility), the variation of viscosity (non-Newtonian behavior), and whether the flow is internal or external. The critical Reynolds number is the value at which the transition between regimes occurs, and it varies with the type of flow and the geometry. While the critical Reynolds number for turbulent flow in a pipe is about 2000, the critical Reynolds number for turbulent flow over a flat plate, when the flow velocity is the free-stream velocity, is in a range from $$10^5$$ to $$10^6$$.$$^4$$

The Reynolds number also predicts the viscous behavior of the flow, provided the fluid is Newtonian. Therefore, it is highly important to understand the physical case in order to avoid inaccurate predictions. Transition regimes, and internal as well as external flows with either low or high Reynolds number, are the basic fields in which the Reynolds number is investigated. Newtonian fluids are fluids that have a constant viscosity: if the temperature stays the same, it does not matter how much stress is applied to a Newtonian fluid, it will always have the same viscosity. Examples include water, alcohol and mineral oil.

## Laminar to turbulent transition

Fluid flow can be specified under two different regimes, laminar and turbulent. The transition between the regimes is an important issue that is driven by both fluid and flow properties.
As mentioned before, the critical Reynolds number changes in accordance with the physical case and can be classified as internal or external, with slight changes in its value. While the Reynolds number for the laminar-turbulent transition can be defined reasonably for internal flow, it is hard to specify a definition for external flow.

## Internal flow

The fluid flow in a pipe, as an internal flow, was illustrated by Reynolds as in Figure 2.b. The critical Reynolds numbers for internal flow are: [4]

| Flow type | Reynolds number range |
|---|---|
| Laminar regime | up to Re = 2300 |
| Transition regime | 2300 < Re < 4000 |
| Turbulent regime | Re > 4000 |

Table 1: Reynolds number ranges for different types of internal flow

Open-channel flow, fluid flow in an object and flow with pipe friction are internal flows in which the Reynolds number is predicted based on the hydraulic diameter $$D_H$$ instead of the characteristic length $$L$$. If the pipe is cylindrical, the hydraulic diameter is taken as the actual diameter of the cylinder, and the Reynolds number is as follows:

$Re=\frac{F_{inertia}}{F_{viscous}}=\frac{ρVD_H}{μ} \tag{5}$

The shape of a pipe or duct can vary (e.g. square, rectangular, etc.). In those cases, the hydraulic diameter is determined as below:

$D_H=\frac{4A}{P} \tag{6}$

where A is the cross-sectional area and P is the wetted perimeter.

The friction on the pipe surface due to roughness is an effective parameter to consider, because it causes the laminar-to-turbulent transition and energy losses. The 'Moody Chart' (Figure 4) was generated by Lewis Ferry Moody (1944) to predict fluid flow in pipes where roughness is effective. It is a practical method to determine energy losses, in terms of a friction factor, due to roughness throughout the inner surface of a pipe. The critical Reynolds number for a pipe with surface roughness abides by the regimes above.[2] In the chart below you can see a logarithmic scale at the bottom, with a scale for the friction factor at the left and the relative roughness of the pipe at the right.

Figure 4: The Moody chart for pipe friction with smooth and rough walls

## External flow

External flow, in which the mainstream has no distinct boundaries, is similar to internal flow in that it also has a transition regime. Flows over bodies such as a flat plate, cylinder and sphere are the standard cases used to investigate the effect of velocity throughout the stream. In 1914, the German scientist Ludwig Prandtl discovered the boundary layer, which is partially a function of the Reynolds number, covering the surface through the laminar, turbulent and transition regimes.[5] The flow over a flat surface is shown in Figure 5 with its regimes, where $$x_c$$ is the critical length for transition, $$L$$ is the total length of the plate and $$u$$ is the velocity of the free-stream flow.

Figure 5: Transition of boundaries through the flow over a flat plate surface as an example of external flow

In general, the boundary layer thickens with movement in the $$x$$ direction (at any point on the plate throughout $$L$$), which eventually results in unstable conditions as the Reynolds number increases. The critical Reynolds number for flow over a flat plate surface is:

$Re_{critical}=\frac{ρVx}{μ}≥3\times{10^5}~to~3\times{10^6} \tag{7}$

which depends on the uniformity of the flow over the surface.
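The formulas above are easy to turn into a small calculator. The sketch below, with made-up duct dimensions and water-like properties, computes the hydraulic diameter of a rectangular duct from equation (6), the Reynolds number from equation (5), and classifies the regime using the thresholds in Table 1:

```python
def hydraulic_diameter(area, wetted_perimeter):
    """D_H = 4A / P, equation (6)."""
    return 4.0 * area / wetted_perimeter

def reynolds(rho, velocity, d_h, mu):
    """Re = rho * V * D_H / mu, equation (5)."""
    return rho * velocity * d_h / mu

def regime(re):
    """Classify internal flow using the Table 1 thresholds."""
    if re <= 2300:
        return "laminar"
    if re < 4000:
        return "transition"
    return "turbulent"

# Hypothetical example: water (rho ≈ 998 kg/m^3, mu ≈ 0.001 Pa·s)
# in a 0.04 m × 0.02 m rectangular duct at 0.3 m/s.
a, p = 0.04 * 0.02, 2 * (0.04 + 0.02)
re = reynolds(998.0, 0.3, hydraulic_diameter(a, p), 0.001)
print(re, regime(re))   # ≈ 7984, turbulent
```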
Yet while the critical Reynolds numbers for the regimes are virtually specified for internal flow, they are hard to detect for external flow, where the critical Reynolds number varies with geometry. Furthermore, unlike in internal flow, boundary layer separation is an anomalous issue for external flow, where several ambiguities are encountered in generating a reliable numerical model of the physical domain.[6]

## Low and high Reynolds number

The Reynolds number, the ratio of inertial and viscous effects, also enters the Navier-Stokes equations, where it is used to truncate mathematical models. As $$Re→∞$$, the viscous effects are presumed negligible and the viscous terms in the Navier-Stokes equations are dropped. The simplified form of the Navier-Stokes equations, called the Euler equations, can be specified as follows:

$\frac{Dρ}{Dt}=-ρ∇\cdot{u} \tag{8}$

$\frac{Du}{Dt}=-\frac{∇p}{ρ}+g \tag{9}$

$\frac{De}{Dt}=-\frac{p}{ρ}∇\cdot{u} \tag{10}$

where $$ρ$$ is density, $$u$$ is velocity, $$p$$ is pressure, $$g$$ is gravitational acceleration, and $$e$$ is the specific internal energy.[6] Though viscous effects are relatively important for fluids, the inviscid flow model nevertheless provides a reliable mathematical model for predicting real processes in some specific cases. For instance, high-speed external flow over bodies is a broadly used approximation where the inviscid approach fits reasonably well.

While $$Re≪1$$, the inertial effects are presumed negligible and the related terms in the Navier-Stokes equations can be dropped. The simplified form of the Navier-Stokes equations is then called either creeping or Stokes flow:

$μ∇^2u-∇p+f=0 \tag{11}$

$∇\cdot{u}=0 \tag{12}$

where $$u$$ is the velocity of the fluid, $$∇p$$ is the pressure gradient, $$μ$$ is the dynamic viscosity and $$f$$ is the applied body force.[6] Having tangible viscous effects, creeping flow is a suitable approach for investigating, for example, the flow of lava, the swimming of microorganisms, the flow of polymers, and lubrication.

## Application of the Reynolds number

The numerical solution of fluid flow relies on mathematical models which have been generated by both experimental studies and related physical laws. One of the significant steps in a numerical examination is to determine an appropriate mathematical model that simulates the physical domain. To obtain a reasonably good prediction of the behavior of fluids under various circumstances, the Reynolds number has been accepted as a substantial prerequisite for fluid flow analysis. For instance, the movement of glycerin in a circular duct can be predicted by the Reynolds number as follows: [7]

| Matter | Glycerin |
|---|---|
| Density at $$23^\circ C$$ $$(kg/m^3)$$ | 1259 |
| Dynamic viscosity $$(Pa\cdot s)$$ | 0.950 |
| Diameter of duct $$(m)$$ | 0.05 |
| Velocity of glycerin flow at inlet $$(m/s)$$ | 0.5 |

$Re_{Glycerin}=\frac{ρVD_H}{μ} = \frac{1259\times{0.5}\times{0.05}}{0.950} ≈ 33.1 \tag{13}$

so the glycerin flow is laminar, in accordance with the critical Reynolds number for internal flow.

## Reynolds number & SimScale

The Reynolds number is never really visible in SimScale's simulation projects, but it influences many of them. Here are some interesting blog posts to read about the Reynolds number:

What Everybody Ought to Know About CFD
How Dimples on a Golf Ball Affect Its Flight and Aerodynamics
10 Piping Design Simulations: Fluid Flow and Stress Analyses

## Resources

$$^1$$: Stokes, George. "On the Effect of the Internal Friction of Fluids on the Motion of Pendulums". Transactions of the Cambridge Philosophical Society.
9, 1851, P. 8–106.

$$^2$$: Reynolds, Osborne. "An experimental investigation of the circumstances which determine whether the motion of water shall be direct or sinuous, and of the law of resistance in parallel channels". Philosophical Transactions of the Royal Society. 174 (0), 1883, P. 935–982.

$$^3$$: Sommerfeld, Arnold. "Ein Beitrag zur hydrodynamischen Erklärung der turbulenten Flüssigkeitsbewegungen (A Contribution to the Hydrodynamic Explanation of Turbulent Fluid Motions)". International Congress of Mathematicians, 1908, P. 116–124.

$$^4$$: White, Frank. Fluid Mechanics. 4th edition. McGraw-Hill Higher Education, 2002, ISBN: 0-07-228192-8.

$$^5$$: https://en.wikipedia.org/wiki/Ludwig_Prandtl, opened April 2017

$$^6$$: Bird, R.B., Stewart, W.E. and Lightfoot, E.N. "Transport Phenomena". 2nd edition. John Wiley & Sons, 2001, ISBN 0-471-41077-2.

$$^7$$: http://www.engineeringtoolbox.com/liquids-densities-d_743.html, opened April 2017

### Figure resources

Figure 2: Reynolds, Osborne. "An experimental investigation of the circumstances which determine whether the motion of water shall be direct or sinuous, and of the law of resistance in parallel channels". Philosophical Transactions of the Royal Society. 174 (0), 1883, P. 935–982.

Figure 3: http://www.mace.manchester.ac.uk/about-us/hall-of-fame/mechanical-engineering/osborne-reynolds/, opened April 2017

Figure 4: http://www.printablediagram.com/moody-diagram-in-high-quality/moody-diagram-flow/, opened April 2017

Figure 5: http://www2.latech.edu/~hhegab/pages/me354/Lab7/Lab6_flatplate_99.html, opened April 2017
# Chemical Kinetics – Notes

Chemical kinetics is the branch of chemistry which deals with the study of the rates of chemical reactions, the factors affecting them, and the mechanisms by which the reactions proceed. Put another way, it is the branch of chemistry which addresses the question "how fast do reactions go?"; chemistry can be thought of, at the simplest level, as the science that concerns itself with making new substances from other substances. Thermodynamics is time's arrow, while chemical kinetics is time's clock. Kinetics has a profound impact on our daily lives, and the range of courses requiring a good basic understanding of it is extensive, from chemical engineering and pharmacy to biochemistry. These notes touch on reaction rates and rate laws, complex reactions and mechanisms, steady-state and equilibrium approximations, chain reactions, temperature dependence and catalysis (the Arrhenius equation), enzyme catalysis, and autocatalysis.

A study into the kinetics of a chemical reaction is usually carried out with one or both of two main goals in mind:

1. Determination of the absolute rate of the reaction and/or its individual elementary steps.
2. Analysis of the sequence of elementary steps giving rise to the overall reaction.

The kinetic behaviour of an ordinary chemical reaction is conventionally studied in the first instance by determining how the reaction rate is influenced by certain external factors, such as the concentrations of the reacting substances, the temperature, and sometimes the pressure. Chemical kinetics further helps to gather and analyze information about the mechanism of a reaction and to define the characteristics of a chemical reaction: it predicts at what rate the reaction will approach equilibrium, which helps us to use the chemical change in a better way.

## Rate of reaction

The rate of reaction is the change in concentration of reactants or products per unit time; equivalently, the rate of change of the extent of reaction is the rate of reaction. It represents the speed at which the reactants are converted into products. Let's imagine a simple reaction taking place in the gas phase in a 1 L container, in which molecules of A (red) spontaneously convert to molecules of B (blue). We define the reaction rate in terms of a ratio of the change in concentration of a reactant or product (Δ[A] or Δ[B]) over the time interval Δt, so all rates are written as concentration/time. For the reaction aA → bB,

Rate = (1/b)(Δ[B]/Δt) = −(1/a)(Δ[A]/Δt)

The rate goes on decreasing as the reaction progresses, due to the decrease in the concentration(s) of the reactant(s). The rate of the reaction at a particular instant during the reaction is called the instantaneous rate; the shorter the time period we choose, the closer we approach the instantaneous rate. Fast or instantaneous reactions are chemical reactions which complete in less than 1 ps (10⁻¹² s).

The rate of a reaction is affected by several factors, including the concentration of the reactants, the temperature of the reaction, and the presence of a catalyst. Generally, the rate of a reaction increases with increasing temperature, though the increase in rate is different for different reactions; for many reactions, the rate roughly doubles when the temperature is increased by 10 °C.

## Rate laws

The rate law or rate equation for a chemical reaction is an equation that links the initial or forward reaction rate with the concentrations or pressures of the reactants and constant parameters (normally rate coefficients and partial reaction orders). For the reaction aA + bB → cC + dD it has the general form Rate = k[A]^m [B]^n, where the orders m and n are determined experimentally. The rate of a chemical reaction is, perhaps, its most important property, because it dictates whether a reaction can occur during a lifetime.

## Collisions, mechanisms, and equilibrium

A chemical reaction takes place as a result of collisions between the reacting molecules. The number of collisions taking place per second per unit volume of the reaction mixture is known as the collision frequency (Z). Chemical mechanisms propose a series of steps that make up the overall reaction; the molecularity of a reaction (or of an elementary step) is the total number of reactant species taking part in it. Equilibrium is the condition of a system in which competing influences are balanced; chemical equilibrium is the state in which the concentrations of the reactants and products have no net change over time.

## First-order reactions and half-life

A first-order process A → products can be recognised whenever the concentration of the reactant falls by a constant fraction in equal intervals of time. The half-life of a reaction is defined as the time required for the concentration of a reactant to fall to half its initial value; for a first-order reaction, the half-life is a constant, i.e., it does not depend on the initial concentration. Worked example: for a first-order reaction A → B with rate constant x min⁻¹ and an initial concentration of 0.01 M, the concentration of A after one hour (60 min) is [A] = 0.01 e^(−60x) M, so of the multiple-choice options offered, "0.01 e^(−x)" is incorrect and "none of these" is the right answer.

## Summary of key points

• Rates of reactions are affected by four factors, among them the surface area of solid or liquid reactants and/or catalysts.
• Brackets around a substance indicate its concentration.
• The instantaneous rate is obtained from the straight-line tangent that touches the concentration curve at a specific point; the instantaneous rate is also referred to simply as "the rate".
• For the irreversible reaction aA + bB → cC + dD, the rate equation is used only if C and D are the only substances formed.
• Reaction orders do not have to correspond with the coefficients in the balanced equation; their values are determined experimentally, and they can be fractional or negative.
• The units of the rate constant depend on the overall reaction order of the rate law: units of rate = (units of rate constant)(units of concentration). The rate constant does not depend on concentration.
• Rate laws can be converted into integrated equations that give the concentration of reactant remaining at any time, the time required for a given fraction of a sample to react, or the time required for a reactant concentration to reach a certain level; for a first-order reaction this corresponds to a straight line with y = mx + b.
• The t₁/₂ of a first-order reaction is independent of the initial concentration, and is the same at any given time of the reaction: the concentration of the reactant decreases by ½ in each of a series of regularly spaced time intervals.
• For a second-order reaction, the rate depends on a reactant concentration raised to the second power, or on the concentrations of two different reactants each raised to the first power; the half-life is dependent on the initial concentration of the reactant.
• The rate constant increases with increasing temperature, thus increasing the rate of reaction. The greater the frequency of collisions, the greater the reaction rate, although for most reactions only a small fraction of collisions leads to a reaction.
• Molecules must have a minimum amount of energy to react, and that energy comes from the kinetic energy of collisions; the activated complex or transition state is the arrangement of atoms at the top of the energy barrier. Reactions occur when collisions between molecules occur with enough energy and the proper orientation.
• In the Arrhenius equation, k = rate constant, Ea = activation energy, R = gas constant (8.314 J/(mol·K)), T = absolute temperature, and A = frequency factor, which relates to the frequency of collisions with favorable orientations. The ln k vs. 1/T graph (also known as an Arrhenius plot) has slope −Ea/R and y-intercept ln A, and can be used to calculate the rate constant k₁ at a temperature T₁.
• Elementary steps in a multi-step mechanism must always add to give the chemical equation of the overall process. If a reaction is known to be an elementary step, then its rate law is known: the rate of a unimolecular step is first order (Rate = k[A]), and the rate of a bimolecular step is second order (Rate = k[A][B]); if [A] is doubled, the number of collisions of A and B will double.
• Intermediates are usually unstable, present in low concentration, and difficult to isolate. When a fast step precedes a slow one, solve for the concentration of the intermediate by assuming that equilibrium is established in the fast step.
• A catalyst provides a different mechanism for the reaction. The initial step in heterogeneous catalysis is adsorption, which occurs because ions/atoms at the surface of a solid are extremely reactive.
• Enzymes are large protein molecules with molecular weights of 10,000 to 1 million amu. Binding between enzyme and substrate involves intermolecular forces (dipole-dipole, hydrogen bonding, and London dispersion forces); the product of the reaction leaves the enzyme, allowing another substrate to enter. Large turnover numbers correspond to low activation energies.

## Data note

Summary tables of recommended data are available for the reactions listed in the ACP volumes; where published data has been superseded by new evaluations on the website, this is noted. Despite much study, there is no consensus on rate constants for formation of the formyl ion isomers in this reaction.

Chemical kinetics relates to many aspects of cosmology, geology, and even, in some cases, psychology.
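The integrated first-order law [A] = [A]₀ e^(−kt) and the constant half-life t₁/₂ = ln 2 / k from the notes above are easy to check numerically. A minimal Python sketch; the rate constant value is arbitrary, chosen only for illustration:

```python
import math

def first_order_conc(a0, k_per_min, t_min):
    """Integrated first-order rate law: [A] = [A]0 * exp(-k t)."""
    return a0 * math.exp(-k_per_min * t_min)

def first_order_half_life(k_per_min):
    """t_1/2 = ln 2 / k, independent of initial concentration."""
    return math.log(2) / k_per_min

x = 0.05                                # hypothetical rate constant, min^-1
print(first_order_conc(0.01, x, 60))    # 0.01 * e^(-60x) ≈ 0.000498 M
print(first_order_half_life(x))         # ≈ 13.86 min
```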
# Ed25519 choice of private key implementation

This answer to another question describes the different values chosen as private key and public key by various Ed25519 implementations. What are the advantages / disadvantages of each?

(Table cropped from diagram in the linked answer, originally taken from "How do Ed25519 keys work?" by Brian Warner.)

- I don't really get your question. Which different methods are you talking about? If you don't have any special requirements you should simply follow the spec, which derives the private scalar and a second key by hashing a 32 byte seed with SHA-512. – CodesInChaos Jul 24 '13 at 15:47
- I think you are confused. The table at the bottom of the picture describes various APIs, but they implement the same cryptographic scheme. – orlp Jul 24 '13 at 16:30
- I disagree; the table does show differences in what makes up 'a private key'. Now, the differences are fairly minor; however, asking for someone to explain those differences would appear to be relevant. – poncho Jul 24 '13 at 18:24
- @poncho Those differences are only memory/CPU time tradeoffs and all functionally equal. – orlp Jul 24 '13 at 20:34
- @poncho. You are correct, I'm asking about the pros/cons of using a/RH vs k/A as the private key. – Charles Nobbert Sep 7 '13 at 18:34

To perform an Ed25519 signature operation, you need to know three values, denoted by $\sf RH$, $a$ and $A$ in the diagram. Now, as it happens, these values are not independent:

• $A$ can be derived from $a$, and
• both $\sf RH$ and $a$ can be derived from the seed $k$.

Thus, all you really need to store is the seed $k$; everything else can be derived from it. Alternatively, it's possible to store $\sf RH$ and $a$ like NaCl does, which saves you the (minor) effort of one SHA-512 hash computation whenever you need to sign something.

There's no particular need to store the public key $A$ as part of the private key, since it can be derived from $a$, and you need to know $a$ anyway to be able to sign. However, deriving $A$ from $a$ requires an elliptic curve multiplication, which is a reasonably expensive operation compared to the other key generation steps. Thus, also storing $A$ as part of the private key provides a modest performance gain compared to storing just $k$ or just $\sf RH$ and $a$.
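For concreteness, here is a sketch of the derivation step the answer describes: expanding the 32-byte seed k into the scalar a and the prefix RH with SHA-512, including the standard Ed25519 bit-clamping from RFC 8032. Deriving the public key A = aB additionally needs a curve point multiplication, which is omitted here; a real implementation would use a vetted library rather than this sketch.

```python
import hashlib, os

def expand_seed(seed: bytes):
    """Split SHA-512(seed) into the secret scalar a and prefix RH."""
    assert len(seed) == 32
    h = hashlib.sha512(seed).digest()
    scalar, prefix = bytearray(h[:32]), h[32:]   # a-material and RH
    # Ed25519 clamping (RFC 8032): clear the 3 low bits, clear the
    # top bit, set the second-highest bit.
    scalar[0] &= 248
    scalar[31] &= 127
    scalar[31] |= 64
    a = int.from_bytes(scalar, "little")
    return a, prefix

a, rh = expand_seed(os.urandom(32))
# The public key would be A = a * B for the base point B (not shown).
```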
# International Workshop on Partial Wave Analyses and Advanced Tools for Hadron Spectroscopy

Mar 13 – 17, 2017
Europe/Zurich timezone

## Decay angular distributions of K* and D* vector mesons in pion-nucleon interaction

Mar 13, 2017, 6:30 PM, 30m

### Seminar Room

Topic 3: Theoretical Constraints on Amplitude Analyses

### Speaker

Dr Sangho Kim (Asia Pacific Center for Theoretical Physics)

### Description

We discuss the production of vector $K^∗$ and $D^∗$ mesons in the $\pi^− p$ interaction, with their subsequent decay into pseudoscalar $K + \pi$ and $D + \pi$ mesons, respectively. Our consideration is based on a modified quark-gluon string model which includes spin variables and allows the decay distributions of the outgoing pseudoscalars to be determined. These distributions are sensitive to the vector meson production mechanism and can be used for its determination. We find the relative importance of vector-meson Reggeon exchange, compared to other exchanges, for the cross sections. The results of the present study may be used in projects for future experimental programs with pion beams (for instance, at the J-PARC facility).

### Primary author

Dr Sangho Kim (Asia Pacific Center for Theoretical Physics)

### Co-authors

Prof. Yongseok Oh (Department of Physics, Kyungpook National University)
Prof. Alexander Titov (Bogoliubov Laboratory of Theoretical Physics, JINR)
Every so often, my muggle side and mathematical side conflict, and this clip from @marksettle shows one of them.

My toddler’s train track is freaking me out right now. What is going on here?! pic.twitter.com/9o8bVWF5KO — marc blank-settle (@MarcSettle) April 6, 2016

My muggle side says “wait, what, how can that be?” My mathematician says “aha! neat! Arc lengths!”

The two curved sides of the track are - presumably - arcs of concentric circles, the inner one of radius $r$. The shorter arc has length $r\theta$, and the longer one $(r+w)\theta$, where $w$ is the width of the track and $\theta$ the common angle at the centre. The overlap is the difference between them, $w\theta$.

We can estimate $\theta$, fairly roughly: I can imagine two of the pieces of track making a quarter-turn, or possibly three; that puts the angle somewhere between $\frac{\pi}{6}$ and $\frac{\pi}{4}$. The overlap, then, is somewhere between half and three-quarters (roughly) of the width. Looking at (rather than measuring) the picture, that looks like it may be a slight underestimate: this could be because the two arcs aren’t flush against each other - the inner one is a tighter circle. Finding the difference that makes… well, that’s a problem for another day.

The surprising overlap is known as the Jastrow illusion.
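Putting numbers on the estimate: a minimal sketch evaluating the overlap $w\theta$ at the two candidate angles (the width $w$ is taken as 1, so the result is a fraction of the width):

```python
import math

w = 1.0                       # track width, in units of "one width"
for pieces_per_quarter_turn in (2, 3):
    theta = (math.pi / 2) / pieces_per_quarter_turn
    print(pieces_per_quarter_turn, w * theta)
# 2 pieces -> theta = pi/4, overlap ≈ 0.785 w
# 3 pieces -> theta = pi/6, overlap ≈ 0.524 w
```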
MathSciNet bibliographic data

MR2279267 46L05 (37B99 46L55)

Katsura, Takeshi. A class of $C^*$-algebras generalizing both graph algebras and homeomorphism $C^*$-algebras. III. Ideal structures. Ergodic Theory Dynam. Systems 26 (2006), no. 6, 1805–1854.
### How much fission energy?

Assume a technology like the integral fast reactor, that reprocesses fuel and sends the radioactive fission products back through the reactor for neutron bombardment and de-activation. Assuming 200 MeV per fission, 50 MeV lost to deactivation processes, and 33% thermal efficiency, we can expect 5e7 eV of plant output per fissioned atom, or 4.8e12 J/mole. I don't have numbers for thorium in the mantle, but assume it follows the same 3.33 to 1 ratio to uranium as the crust.

| Source | Mass | Molar mass | Moles |
|---|---|---|---|
| Crust uranium | 2.57e5 kg | 0.238 kg/mole | 1.1e6 moles |
| Crust thorium | 8.58e5 kg | 0.232 kg/mole | 3.7e6 moles |
| Mantle uranium | 4.44e5 kg | 0.238 kg/mole | 1.9e6 moles |
| Mantle thorium | 1.41e6 kg | 0.232 kg/mole | 3.3e7 moles |
| Total (WAG) | | | 4.0e7 moles |

4e7 x 4.8e12 = 1.9e20 J, or 5.3e13 kWh. Half the energy we need for the lift. Of course, if we lift only the crust and mantle, the energy needed will be smaller. OTOH, burying the waste (and everyone else's) at the earth's core might be worth the cost of exposing it. At 4 cents per kilowatt hour, that is a mere 2 trillion dollars worth of energy.

### Conclusion

Overall, the two processes are within an order of magnitude of each other, both delivering gravitationally sorted but otherwise non-beneficiated rock of approximately equal (low) value to the earth's surface. The "core the earth" approach is obviously silly: besides access to a nickel-rich core, it has many disadvantages compared to a mine 20 km deep and 67 acres in area, or a mine 200 meters deep and 6700 acres in area, which would require far less energy to excavate. The product is the same: uninteresting rock, unless this was done around a concentrated ore body.

Asteroid mining to provide raw materials to Earth is ridiculous. Asteroid mining to feed raw materials to simple manufacturing processes to produce objects used in the asteroid belt may be less ridiculous, except that there is no factory infrastructure there. That infrastructure may grow from nothing to full local capability over hundreds or thousands of years, but please keep in mind that this growth will require new kinds of processes to manufacture new kinds of objects, and that will require a vast accumulation of new knowledge, and a vast infusion of capital to speed it up appreciably (in order to pay for all the mistakes and rapid obsolescence incurred during rapid development). It took 8 trillion dollars to develop the rocket fleet we have; use that to estimate the cost of vastly more ambitious projects. Terrestrial industrial civilization took thousands of years to develop, with the whole human race participating. Please do not underestimate the effort required to recapitulate the process in a far more challenging extraterrestrial environment. It will never happen without understanding the realities of terrestrial extraction and production; the only advantage we have over our ancestors is accumulated knowledge, if we do not ignore it.
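The arithmetic in the energy section chains several unit conversions, so here it is spelled out. A minimal sketch reproducing the J/mole figure, the total energy, and the dollar value (constants: 1 eV = 1.602e-19 J, Avogadro's number 6.022e23/mol, 1 kWh = 3.6e6 J):

```python
EV_J  = 1.602e-19      # joules per electron-volt
N_A   = 6.022e23       # Avogadro's number, 1/mol
KWH_J = 3.6e6          # joules per kilowatt-hour

# 200 MeV per fission, minus 50 MeV lost to deactivation,
# times 33% thermal efficiency -> ~5e7 eV output per fissioned atom.
ev_per_atom = (200e6 - 50e6) * 0.33           # ≈ 4.95e7 eV
j_per_mole  = ev_per_atom * EV_J * N_A        # ≈ 4.8e12 J/mole

moles   = 4.0e7                               # U + Th total from the table
total_j = moles * j_per_mole                  # ≈ 1.9e20 J
kwh     = total_j / KWH_J                     # ≈ 5.3e13 kWh
print(j_per_mole, total_j, kwh, kwh * 0.04)   # ≈ $2.1e12 at 4 c/kWh
```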
ColorTools - Maple Programming Help

ColorTools[Gradient] - generate a selection of intermediate colors

Calling Sequence
Gradient(color1..color2)

Parameters
color1, color2 - colors in formats recognized by ColorTools

Options
• number=nonnegint : the number of intermediate colors to compute (default is 10)
• space=string : the color space in which to compute the intermediate colors; by default this is inferred from the color spaces of the input
• best : if this keyword is given, a heuristic is used to get a well-spaced set of colors
• displayable : if this keyword is given, the output colors will all be displayable

Description
• The Gradient command computes a number of intermediate colors that transition between the two colors in the input range.

Examples
> with(ColorTools):
> G := Gradient("Red".."Blue");
G := [RGB: 1 0 0, RGB: 0.909 0 0.0909, RGB: 0.818 0 0.182, RGB: 0.727 0 0.273, RGB: 0.636 0 0.364, RGB: 0.545 0 0.455, RGB: 0.455 0 0.545, RGB: 0.364 0 0.636, RGB: 0.273 0 0.727, RGB: 0.182 0 0.818, RGB: 0.0909 0 0.909, RGB: 0 0 1]   (1)
> H := Gradient("Red".."Blue", best);
H := [RGB: 1 0 0, RGB: 0.967 0 0.176, RGB: 0.933 0 0.27, RGB: 0.896 0 0.351, RGB: 0.857 0 0.429, RGB: 0.813 0 0.505, RGB: 0.762 0 0.583, RGB: 0.703 0 0.661, RGB: 0.63 0 0.742, RGB: 0.536 0 0.825, RGB: 0.397 0 0.911, RGB: 0 0 1]   (2)
> Swatches([G[], H[]], rows=2);

Compatibility
• The ColorTools[Gradient] command was introduced in Maple 16.
• For more information on Maple 16 changes, see Updates in Maple 16.
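For readers without Maple, here is a rough Python analogue of what the plain (non-best) call appears to do in example (1): naive componentwise linear interpolation in RGB. This is only an illustration of the idea, not Maple's actual implementation.

```python
def gradient(c1, c2, number=10):
    """Return c1, `number` intermediate RGB colors, and c2, linearly interpolated."""
    steps = number + 1
    return [tuple(a + (b - a) * i / steps for a, b in zip(c1, c2))
            for i in range(steps + 1)]

# Red to blue, matching the first Maple example above.
for rgb in gradient((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)):
    print(tuple(round(x, 3) for x in rgb))
# The second color comes out as (0.909, 0, 0.091), as in output (1).
```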
Science topic: Mathematics - Pure and Applied Math. Questions related to Mathematics
Question Members of the mathematics group: I read that there are 147 members in the mathematics group here on ResearchGate. I therefore want to bring up once again the issue I initiated some time ago: establishing a mathematical science journal here. I will appreciate it if we participate in the discussion and put in our efforts to bring the journal into existence. We will all appreciate it later, when the journal becomes an international hub for mathematicians who work in pure and/or applied mathematics alike, who can post their results with a minimum of hassle, have their works read, and see their contribution to mathematics and thereby to society. I will once again approximately quote Lobachevsky: "There is no part of mathematics, however abstract it may look or be, that will not be used to the good of mankind." It is this spirit that makes almost all mathematicians want to participate (not only as readers) in contributing works and results, however small they may be, which will later produce larger results in their fields. To give credit to the scientific network that we are using, ResearchGate, we can have the journal copyrighted to it. As you may recall, the title of the journal was "Electronic Journal of Cross Sections in Mathematics", acronymed EJCSM. Sincerely, Dear Dr. Dejenie A. Lakew, Your idea is excellent. I may support the idea by contributing mathematics in industrial applications.
Question (a+b)(a+b) = a^2 + 2ab + b^2; this is how you calculate (a+b) squared.
Question I have prepared a paper on graph theory but I don't know how or where to publish it. Whoever it may concern: I am N V Nagendram, working as Assistant Professor in Mathematics, and I have published 14 papers in the academic year 2010-2011 after continuous and constant work on near-rings in algebra. Please visit lbrce.ac.in, CSS Dept., Faculty / publications, where you can find my profile. For your question of where an article is to be sent after writing it: please go through the international journals on graph theory. You can find many journal titles; go through each and every journal until you find a publication matching your specialization of topic, and then go to its online submission page. It will ask you for the title, author, co-authors if any, abstract, keywords, and subject classification code. Upload your paper in MS Word / LaTeX or whatever form the publisher requires. You cannot ask every time where your paper or article should be sent; what you need to do is make a regular habit of reading journals on your selected topic - it is a good habit for a researcher on any topic. Now I am going to give you one journal name where you may send this article: "International Journal of Mathematics Archive (IJMA)" or "International Journal of Mathematics and Computer Applications Research (IJMCAR), ISSN (Print): 2249-6955, ISSN (Online): 2249-8060". Like this, you have to be thorough with the web sites on your selected topic. Bye, yours, N V Nagendram
Question I know here f'(1) = 0. I found in some texts that if f'(x) = 0 for some x then we can't apply the Newton-Raphson method. Is there any other technique to find the first approximation for x^3 - 3x + 1 = 0 taking the initial approximation as 1? Which is correct? a) 1 b) 0.5 c) 1.5 d) 0 I don't think there is any other way; in the definition itself they say it works only if f'(x_0) is not equal to zero.
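A small sketch of the breakdown and the usual workaround (my own toy implementation; as an alternative, bisection on [0, 1] would also work, since f(0) = 1 > 0 and f(1) = -1 < 0 brackets a root):

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration; raises if the derivative vanishes."""
    x = x0
    for _ in range(max_iter):
        fp = fprime(x)
        if fp == 0:                      # the iteration x - f(x)/f'(x) is undefined here
            raise ZeroDivisionError(f"f'({x}) = 0: Newton-Raphson breaks down")
        x_new = x - f(x) / fp
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: x**3 - 3*x + 1
fp = lambda x: 3*x**2 - 3

# x0 = 1 fails because f'(1) = 0 ...
try:
    newton(f, fp, 1.0)
except ZeroDivisionError as e:
    print(e)

# ... but a slightly perturbed start works, e.g. x0 = 0.5:
print(newton(f, fp, 0.5))   # ~0.34730, a root of x^3 - 3x + 1
```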
Question I know parallel lines can't touch, but what if they share all of the same points? Just wondering if there are any exceptions. @Deepak Anand: The problem is the use of the word "Euclidean". This implies many things, including Euclid's 5th Postulate, wherein parallel lines do not intersect. If you want, you can investigate other geometries (there are many) in which there are no parallel lines, or in which there are many parallel lines all passing through the same point. These are important and just as valid as Euclidean plane geometry; they just have different assumptions and contexts. You might want to do a Google search on Vanishing Point or on Projective Geometry, which are probably relevant to the point you would like to make.
Question One is Euler's method. Broadly, there are three classes of methods for solving (systems of ordinary) differential equations: special methods, numerical and graphical methods, and qualitative methods. 1. Special methods. They are rare, are only available for standard problems, and often require an initial transformation to a standard form before applying a solution method. However, it can be difficult to recognize the kind of differential equation you have so you can choose the right transformation. Possibly the best guide to these methods is "Ordinary Differential Equations" by Tenenbaum and Pollard, which is available in a low-cost (US$12-25) Dover edition and can often be found in college libraries. 2. Numerical and graphical methods. This class includes Euler's method (see the sketch below). The methods are broadly applicable, and there are excellent numerical solvers available for computers that are based upon the same idea as Euler's method, but are faster and more accurate. They are used to compute particular solutions or solution curves, and usually include methods for creating graphs when they can be useful. However, this approach may not give you insight into your problem. I recommend that you use such methods once you understand what differential equations are and what it means to find their solutions. 3. Qualitative methods. They are broadly applicable and are intended to give you insight into the kinds of solutions that exist for your differential equation and the nature of the long-run behavior of the solutions (e.g. stabilize, grow or decrease without limit, oscillate, etc.). However, they may not give you numbers. They are based on patching together solution curves of (linearised versions of) the differential equation near coordinates where the kinds of solutions change. A modern first course in ordinary differential equations will usually introduce you to all three classes of methods, while a traditional course will emphasize special methods. As I said above, Tenenbaum and Pollard offer a textbook for a traditional first course. There are many texts available for a modern first course, such as the one by Blanchard, Devaney, and Hall, but they are usually expensive (about US$125). If you are eventually headed towards graduate studies in mathematics, the above approach will be OK for a first course, but there are much better texts available. Authors of excellent introductory texts for mathematics majors include V. I. Arnold or Birkhoff and Rota.
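Since Euler's method is named above as the entry point to the numerical class, here is a minimal sketch (the test equation and step counts are my own choices): it integrates y' = -2y, y(0) = 1, whose exact solution is e^(-2t), so the error is easy to inspect.

```python
import math

def euler(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n forward-Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: -2.0 * y
for n in (10, 100, 1000):
    approx = euler(f, 1.0, 0.0, 1.0, n)
    print(n, approx, abs(approx - math.exp(-2.0)))
```

The error shrinks roughly tenfold for each tenfold increase in the number of steps, which is the first-order behavior the fancier solvers improve on.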
Question Moment descriptors are invariant under RST (Rotation, Scaling, Translation) in PR. Is the moment descriptor almost invariant under optimization? Is the moment descriptor almost invariant under Gaussian noise? Can the reduction of pixels used to draw an optimal shape of a planar curve with a mouse on a computer affect the recognition rate? If yes, then to what extent? I'm not sure about your question, but an important factor, if you are considering using moment descriptors, is to make sure that moments exist in your domain. (For example, Lorentzian curves, which are common in my work, have no defined moments.)
Question Notice that if f: K --> M is an injective map which can be defined by a finite statement, then for every y in img(f) there is an x in K satisfying the relation y = f(x), which can be regarded as a definition for y. Thus, either both x and y can be defined by finite statements, or neither of them can be defined finitely. S. K.'s definition is for a bijection, not an injection. An injection is one-one into, not necessarily onto. Cardinality arguments show that there are many undefinable functions. If by definable one means definitions by an expression in some language where the expression contains a finite number of symbols drawn from a finite alphabet, then there are a countable number of definable functions. But the number of injective functions is uncountable.
Question Hi, I send you one problem. If for (4) you don't need that it is 'strictly' increasing, then you can just use Q(t) = t/3.
Question May I know a book which gives basic results or information about matrix theory? - Matrix Theory, Joel N. Franklin - Elementary Matrix Theory, Howard Whitley Eves - Introduction to Matrix Analysis, Richard Ernest Bellman - Matrix Theory, James M. Ortega - Matrix Theory, David W. Lewis
How do I improve my English and my mathematical representation? Question First of all, your English for a German is better than my German for an American. I mostly understand what you are saying, which is the basis for communication. If you want to personally improve your English, I recommend that you practice, practice, practice. If you want to publish in English, I recommend finding a person who is very fluent in English and German, and writing in German. Second, just because English is the language of science does not mean that you cannot write to German journals and have a friend write the English translation for you. In fact, some people would like the opportunity to do translation work for their professional resume, and if you get published, that would help them out. Finally, you can try to get a quick translation by using technology like translate.google.com, which is by no means a perfect piece of software, but it does well enough. Better yet, that technology (by both Google and others) is getting better. For example (written in German via such a translator): "Good day Mr. Guettinger. I also recommend English learning software from a company called Berlitz. I am using their software to learn a little Spanish. It is affordable, unlike Rosetta Stone. Kind regards, Jimmy" So as you can see, you have a lot of options.
Question As I say, Vogel's method is usually better but not always; have you seen a counterexample? Thanks a lot. Golabi is the best method for it.
Question What are the various fields? Hi John, I admire your bravery in challenging the "ridiculous" theories, and I have browsed your website and shared notes. But I think you go to extremes to some extent. 1. You said you had corrected some mistakes of Newton, Leibniz & Cauchy, and gave your own definition of the derivative.
However, even though your New Calculus is well defined, as you said, it may be classified as just another type of calculus. Only when the "old" calculus' foundations have been PROVED wrong could I admit that it is a fake. Infinity does not exist in reality, but that doesn't mean we couldn't use this definition to help us cope with some practical problems. 2. Could you tell me some other advantages of studying your New Calculus? I mean, other than the ease of understanding (for the epsilon-delta definition is quite difficult for students, as you said), are there any possible applications? Maybe we could test your derivative and compare its speed and accuracy with the "old" one; that may make sense in computer applications. Frankly, in China undergraduates like me have little notion of challenging the theories in our textbooks. So it's a great honor for me to have approached your ideas, regardless of right or wrong. Maybe I have misunderstood your ideas, and I hope we can all have a fine discussion on ResearchGate. Sincerely
Question My question is: can we use the Adomian decomposition method for solving nonlinear equations in complex space? Thanks a lot. Dear Hamed, I think that the Adomian decomposition method has been used for solving nonlinear equations in complex space.
Question I would like to send my publications to this publisher. I would like to ask if someone has experience with this publisher. Thank you! You are posing a problem with two unknowns. First, I don't know what kind of works you are going to publish. Second, this publishing company is not well known, but it looks similar to Lambert Academic Publishing. It seems to publish only monographs. I have experience with LAP.
Are there mathematical objects or situations for which one needs an infinite number of definitions, statements or descriptions? Question Mathematical objects or situations are special numbers, for example, quantities in general, functions... My God, first of all thank you for the great number of contributions, since I've stirred up a hornet's nest. I did not receive notification of your contributions. Who knows what to do? Can someone help me? Now back to my question. The intuition for my question arose from the post "Is there a finite definition for every real number?", which I probably still have not understood. Now something very important to my question: I think of the primes, I think of the twin primes. I mean, this is still an infinite process. Are there any other mathematical things? Yes, there is only the Fields Medal. See you soon
Question For instance, pi = 3.141592... can be defined as the ratio between the circumference and the diameter of a circle. Any integer can be defined by a finite sequence of figures, and so on. Dear Steven, Every mathematical paper always contains some definitions and notations introduced by the corresponding author. In general, these definitions must only be considered in the paper's context. Unfortunately, there is a limited symbol set to be used, and this fact obliges us to denote different objects by the same symbols. This inconvenience does not matter whenever the author takes care of defining them. The great French mathematician Henri Poincaré said: "Mathematics is the art of denoting different things by the same name". Of course, he was thinking of equivalence classes and analogies. Analogies are also particular cases of equivalences.
The father of normed spaces, Banach, wrote the following: "A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs; and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies." (Stefan Banach, 1892-1945) Best regards, Juan Esteban. P.S. I have sent you my paper about Cantor's theorem via e-mail.
Question The homotopy continuation method provides a useful approach for finding zeros of nonlinear equation systems in a globally convergent way. Homotopy methods transform a hard problem into a simpler one and solve it, then gradually deform this simpler problem into the original one. I usually solve equations from nonlinear circuits: diode, bipolar, MOS. Now I want to solve other kinds of equations with applications, especially if the equation is multivalued. Does somebody want to collaborate with me? Normal linear systems involve derivatives on the input and output signals. The transfer function is the quotient of the Laplace transforms of the output and input. Transfer functions are rational functions of a complex variable, normally denoted by "s". The poles of the transfer function are very important in describing the properties of the system. If we introduce delays at any of the involved derivatives, terms with exponentials will appear. In this situation the discovery of the poles is a difficult task; frequently, the Lambert function is used. The situation becomes more involved if the system is fractional. In this case, fractional powers of s will appear.
Question If f(x) is an increasing real-valued function, what about its inverse? As many have pointed out, f(x) has an inverse if f(x) is strictly increasing. In that case, its inverse is also strictly increasing. Let's see. Let's call the inverse of f(x) g(x). f(x) is strictly increasing; that means that the following sentence is true: "x1 < x2 if and only if f(x1) < f(x2)" for any values x1 and x2 in the domain of f(x). g(x) is also strictly increasing only if it satisfies that same sentence, so we need to prove that the following sentence is true: "y1 < y2 if and only if g(y1) < g(y2)". But proving it is easy if one takes into account the following: for some values x1 and x2 it's true that y1 = f(x1) and y2 = f(x2) (we used the fact that f and g are inverses), and replacing those values y1 and y2 in g(y1) and g(y2) we obtain g(y1) = g(f(x1)) = x1 and g(y2) = g(f(x2)) = x2 (we used the fact that f and g are inverses). Now, that means that y1 < y2 is the same as f(x1) < f(x2), and g(y1) < g(y2) is the same as x1 < x2, and thus we can say that the pair of sentences "x1 < x2 if and only if f(x1) < f(x2)" and "y1 < y2 if and only if g(y1) < g(y2)" are the same sentence! And finally, since the former is true, so is the latter.
Question Maybe my English is poor, so someone misunderstands the question. Let me rewrite it in other words. The problem I am interested in is as follows: Given g(x), we want to find out: whether there exists a function f(x) such that f(f(x)) = g(x); whether there exists a continuous function f(x) such that f(f(x)) = g(x); and so on... What's more, if we have a pair of such functions f1(x) and f2(x), each satisfying f(f(x)) = g(x), under what condition can we prove f1(x) = f2(x) for all x? For example, if g(x) = x, there exists a continuous function f1(x) = x with f1(f1(x)) = x. But we also have f2(x) = -x with f2(f2(x)) = x. Here f1 does not always equal f2.
However, if we add the condition that f must be monotone increasing, then we abandon f2; we can prove that only f1 suits our demands. What is the general situation? =============================================================== The previous edition: Where g(x) is already known, for example g(x) = exp(x) + x^2, what can we say about the existence and uniqueness of f(x)? What if we require that f is continuous or analytic? I'm sorry... I couldn't catch your point. Maybe my English is poor, so you didn't understand the question clearly. Let me say it again. I mean that g(x) is already given. We need to determine if there exists an f(x) with f(f(x)) = g(x). What's more, if we also have h(h(x)) = g(x), can we be sure that f(x) = h(x) for all x?
Question Commuting vector fields means that their Lie bracket equals zero. Use the well-known formula for commuting vector fields $X, Y$: $\exp(t(X+Y))=\exp(tX)\exp(tY)$.
Question Dear Sirs, First I would like to excuse my English, as it is not my native language. I am a software developer with an interest in automata theory. I have a few related projects that you can see here: http://fsvieira.com/, but I lack the math knowledge. So I made my own definition of a non-deterministic recursive automaton, which you can find here: http://fsvieira.com/nar.pdf, and now I want to prove that my definition can accept the same languages that a non-deterministic pushdown automaton can. But I don't have the skills to do it, so I am just asking for some hints: what exercises should I do, what books can I read... Thanks. @Filipe. I've read your notes. The idea is suggestive, but I can't find out how to accept some context-free languages such as {(a^n)(b^m)(a^p)(b^q) : n+m = p+q}. On the other hand, in the definition of the transition function \delta in your example 1, you set \delta(q_1, M) = {q_2}, but M is not in the alphabet... and you cannot collect all possible M's in the alphabet, because the M's form an infinite set whereas alphabets must be finite. Perhaps I've misunderstood some point.
Question What are the isometries of the Hilbert Cube I^\infty = the set of all sequences (x_i) such that x_i \in [0, 1], endowed with the metric d((x_i), (y_i)) = \sum_{i=0}^\infty 2^{-i} |x_i - y_i|? My conjecture is that the only possible isometries of the Hilbert Cube (in this metric) are f((x_i)) = (y_i) such that y_i = x_i or y_i = 1 - x_i, where (x_i) and (y_i) denote sequences in I^\infty and x_i, y_i denote their i-th terms, respectively.
Question Write what you want. Dear Hanspeter, It is the metamorphosis of another that degenerated.
Question Here y can be assumed to be a function of any independent variable! I want to differentiate its present value f(x(n)) with respect to its past value f(x(n-1)). What you see on top is 11.1 "Partielle Ableitungen", in English: differentials in several variables. That is partial differentiation. Greetings
Question A few examples are FFT, wavelets, etc. There are many examples, to mention some: Laplace transform, Fourier transform, wavelets, z-transform, Hankel transform, Hilbert transform, and the list goes on. A reference that can be of some help is Transform Methods in Applied Mathematics by Peter Lancaster and Kęstutis Šalkauskas, 1996, John Wiley & Sons, Inc.
Question y = 1/(x²+c) is a one-parameter family of solutions of the first-order differential equation y′ + 2xy² = 0. Find a solution of the first-order initial value problem (IVP) consisting of this differential equation and the given initial condition, y(2) = 1/3. Give the largest interval I over which the solution is defined. Good work
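For the record: substituting the initial condition into the family gives 1/(4 + c) = 1/3, so c = -1 and y = 1/(x² - 1); the largest interval containing x = 2 on which this is defined is (1, ∞), since y blows up at x = ±1. A quick SymPy check (the exact output shape may vary between versions):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x) + 2*x*y(x)**2, 0)
sol = sp.dsolve(ode, y(x), ics={y(2): sp.Rational(1, 3)})
print(sol)   # Eq(y(x), 1/(x**2 - 1))
```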
Question We are working on coupled Fibonacci sequences of higher orders. Applications? For what? Math doesn't need applications; applications need math.
Question I need this software to be interactive, allowing one to enter a signal in real time during the simulation. Maybe Simulink can do this, I could not say. I would appreciate all the information you can provide. Why not use LabVIEW from NI? But Matlab also supports this (Data Acquisition Toolbox, Real-Time Workshop, ...), possibly together with a dSPACE solution. You have to check what is the most economical solution for your problem.
Question Can anybody suggest to me problems where research could be done in the field of linear integral equations? There are many institutes working on this area of mathematical research. The following are a few: Chennai Mathematical Institute; Institute of Mathematics and Applications, Bhubaneswar; Institute of Mathematics, Gurgaon; The Institute of Mathematical Sciences, Chennai; C.R. Rao Advanced Institute of Mathematics, Statistics and Computer Science. I suggest you do research from an application point of view rather than core mathematics.
Bessel Functions Question Dear all, I've been searching for a "beginner's introduction" to Bessel functions in their general form (and in electromagnetics in their specific form) beyond what is found on Wikipedia and YouTube. The sources I found were unclear and misleading. I would much appreciate your feedback. Check this out, Ara: F. Bowman, Introduction to Bessel Functions, Dover Publications Inc., 1958, N.Y.
Question All types, especially scalene. See ACM Algorithm 736 (1994).
Question I have a very interesting problem. I find that solving in closed form the Rogers-Ramanujan continued fraction R = R(q), q = e^(-pi sqrt(r)), r a positive rational, is equivalent to solving the equation aX^2 + bX + b^2/(20a) = CX^(5/3) (1), with X = [R(q^2)^(-5) - 11 - R(q^2)^5] b/(250a) (the numbers C, a, b are related to the j-invariant by the relation jr = 250C^3/(a^2 b), in order to generate the Rogers-Ramanujan continued fraction). Cubing both sides of (1) to clear the 5/3 power yields a sixth-degree polynomial equation of quite simple form. The program Mathematica (version 6), for some values of a, b, and r, say a = 4, b = 125 and r = 1/5, 2/5, 3/5, 4/5, 6/5, 9/5, 12/5, 14/5, 17/5, gives the exact solution (solves the equation for these values). The problem is that I cannot solve the polynomial equation, and I'm not aware of Galois theory. How does Mathematica solve this equation? Can anyone solve this polynomial equation? See also the article on arXiv: "On a General Sextic Equation Solved by Rogers Ramanujan Continued Fraction" (by Nikos Bagis). Note that I want to evaluate the RRCF without using the fifth-degree modulus k_{25r}. Ask Heng Huat Chan in Singapore... he is an expert...
Question Anyone care to spare a thought on the idea that gravity has a magnetic-like component that is its inverse (cosmological constant)? I think the number zero is a purely human creation, and should be regarded as our latest step in evolution into the realm of intelligent life. Your question is a bit provoking, I guess... From a physics point of view, I think you mean an absolute zero property. No zero temperature is possible (as absolute temperature in Kelvin), no zero energy is possible (as absolute energy, since the vibration in the lattice still has some energy involved), maybe no zero mass (the debate on the mass of the photon is still open)... But I do not think of zero as a problem. Further, zero is extremely useful to compare equalities: A = B means A - B = 0.
Question How does the number of generators relate to the order of a cyclic group? An infinite cyclic group has only two generators, and a finite cyclic group of order n has phi(n) generators. For example, Z_12 has phi(12) = 4 generators: 1, 5, 7 and 11.
Can anybody suggest some open problems in mathematics involving matrix theory? Question I basically work with matrices for analyzing quantum systems. I would like to know if there are any open questions regarding matrix formulation. Please suggest some links to good papers on this topic. Hello Sir, I advise you to refer to X. Zhan's paper "Open Problems in Matrix Theory."
Question I feel weird without good-looking math expressions. For me personally, LaTeX is fine, but as far as I can see it is next to unknown to many researchers. But there is another possibility: use pictures (jpg, png) attached to your posts. I'm sure you know how to create them.
Question The following list includes free math software and tools together with the corresponding descriptions.
Operating systems:
- Scientific Linux: A Linux distribution put together by Fermilab and CERN.
- Ubuntu Linux: A Linux distribution, easy to install and freely available.
- Debian: Perhaps the best Linux distribution.
- DesktopBSD: A FreeBSD distribution, easy to use, which can be tested through a live DVD.
- BSD: Several Unix distributions.
Applications for symbolic calculus:
- wxMaxima: Calculus with a graphic interface.
- Axiom: Similar to the preceding one.
- Euler: Likewise.
- Scilab: Likewise.
- Octave: Likewise.
- GAP: Computational discrete algebra.
- R: Statistics.
- PSPP: Statistics.
- Haskell: Pure and lazy functional programming language with an interpreter.
Astronomy:
- Stellarium: Free astronomy application.
- Star charts: Free star charts as PDF files.
Math graphics:
- Gnuplot: To build any graphic in 2D or 3D.
- DISLIN: A graphical library, easy to use.
Word processors:
- TeXmacs: WYSIWYG editor with a graphical interface, by means of which one can type scientific texts and export them in PDF, PS, HTML, or LaTeX formats.
- LyX: Similar to the preceding one.
- MiKTeX: A complete LaTeX distribution for Windows.
- TeXMaker: A LaTeX editor.
- TeXnicCenter: Another powerful LaTeX editor for the Windows OS.
- Kile: Another LaTeX editor.
- TeXShop: A LaTeX distribution and editor for Mac OS X.
- TeX Live: A LaTeX distribution for Linux and Unix OSes.
- OpenOffice: A package similar to Microsoft Office.
Question I have the following 3 nonlinear equations: sum_{i=1}^{N} p(i)*k1(i) <= Ith; sum_{i=1}^{N} p(i)*k2(i) <= Ith; sum_{i=1}^{N} p(i) <= pT, where p(i) = delf/(lambda1*k1(i) + lambda2*k2(i) + lambda3) - (2*sigma^2)/h^2. Here N, Ith, pT, sigma, h, delf are constants, and all values of k(i) from 1 to N are known. The values of lambda1, lambda2, lambda3 need to be found. How can I solve these equations using fsolve in MATLAB? I assume you have already used the "fsolve" built-in solver in Matlab. If the roots (lambda 1, 2 and 3) are not plausible or are difficult to optimize, you can use a contour plot technique to visualize the real roots of the coupled simultaneous nonlinear equations. Please visit MathWorks and Matlab Central for examples. Hope this helps.
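The same approach works in Python/SciPy; here is a hedged sketch (all constants are invented for illustration, and the right-hand sides are manufactured from a known lambda so the system is guaranteed to have a root; the inequalities are treated as equalities, which is how a root finder sees them):

```python
import numpy as np
from scipy.optimize import fsolve

N = 4
k1 = np.array([1.0, 2.0, 3.0, 4.0])
k2 = np.array([1.0, 4.0, 9.0, 16.0])
delf, noise = 1.0, 0.01            # stand-ins for delf and 2*sigma^2/h^2

def p(lam):
    lam1, lam2, lam3 = lam
    return delf / (lam1 * k1 + lam2 * k2 + lam3) - noise

# Manufacture consistent right-hand sides from a known solution.
lam_true = np.array([1.0, 1.0, 1.0])
Ith1 = (p(lam_true) * k1).sum()
Ith2 = (p(lam_true) * k2).sum()
pT = p(lam_true).sum()

def residuals(lam):
    pi = p(lam)
    return [(pi * k1).sum() - Ith1, (pi * k2).sum() - Ith2, pi.sum() - pT]

lam = fsolve(residuals, x0=[0.5, 0.5, 0.5])
print(lam)   # should recover ~[1, 1, 1] (or another root of the system)
```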
Question I am a student of Computer Science and I'm just loving linear algebra. I wonder if there are any good book recommendations on this subject? Linear Algebra by Gilbert Strang; you can also read his MIT lecture notes online.
Question I need a definition for a search. For more details, it is better to refer to:
Question I have these equations with parameters. I have to plot graphs of the numerical solutions. Thanks. I have the trial version. I am working on models, so I need the graphs to see the stability directly, with given parameter values, in the system of 3 equations stated earlier.
Question I have three data sets, A, B and C. A is dependent on B and C and therefore has C*B data points. For quantitative data: Well, you want to find some relationship between B, C and A. This means you can use a model of this type: A = f(B,C) = a_0 + a_1 B + a_2 C + a_3 B*C + a_4 B^2 + a_5 C^2 + a_6 B*C^2 + a_7 C*B^2 + ..., and with MRA (multiple regression analysis) you can make statistical tests of which terms are relevant and determine the coefficients a_i for those terms.
Question Please mention the book names. Thank you.
Question C = inv(Trans(A) * inv(B) * A), where A is a rectangular matrix and B is a large square matrix. You can indeed avoid the calculation of inv(B) if some orthonormality conditions hold among the columns of A. See "Moore-Penrose pseudoinverse".
Question Assume the integral of z*h(z,p,q) over all values of the scalar z is equal to that of z*h(z,p), where both of the scalar-valued h(.) functions respectively integrate to 1 over all values of z. So then, is this true iff h(z,p,q) = h(z,p) for all z, p, and q? (Bonus points if you can also let me know the same for discrete z.) Thanks in advance! Before giving bonus points, you first have to decide whether h is a function of 2 or 3 variables.
Question Number theory and computer science question. Here is my interpretation of your question: "Continued fractions are very useful and as a novice I'm very impressed playing around with them. Hey, we can find patterns even in pi, Wikipedia says! And the periodic expansion of sqrt(2) is a miracle! Why does nobody share my enthusiasm?" Correct?
Does a loop have only one vertex, or more than one vertex? Question Graph theory. Hello, this is not my area. Perhaps these two websites are helpful: http://en.wikipedia.org/wiki/Graph_theory http://en.wikipedia.org/wiki/Glossary_of_graph_theory#Walks Regards
Question When I read about the Fourier transform, I find several definitions of it, because there are several conventions. Which definition should I refer to? It is a little bit confusing. Thank you, Henk Smid.
Question Can anyone help me find solutions to problems like P*x + Q*y + ... >= some constant? Capital letters (P, Q) are constants while lower-case letters (x, y) are variables. The problem states that we have to find a solution that keeps the objective minimized while remaining greater than a given constant. Anybody got a solution?
Question My interest is to solve nonlinear problems using HAM (homotopy analysis method) and perfect it in theory. And what is this "homotopy analysis", please?
Question Dear friends, can somebody help me to learn how I can simplify the expression (x-y)^0.5 as an approximation in terms of x and y? Nothing else.
Question Inverse matrix on the PPU and on the SPU using SIMD instructions. This article will talk about how to convert some scalar code to SIMD code for the PPU and SPU, using the inverse matrix as an example. Most of the time in video games, programmers are not computing a standard inverse matrix; it is too expensive. Instead, to invert a matrix, they treat it as orthonormal and just do a 3x3 transpose of the rotation part, with a dot product for the translation. Sometimes the full inverse algorithm is necessary.
The main goal is to be able to do it as fast as possible; this is why the code should use SIMD instructions as much as possible. A vector is an instruction operand containing a set of data elements packed into a one-dimensional array. The elements can be fixed-point or floating-point values. Most Vector/SIMD Multimedia Extension and SPU instructions operate on vector operands. Vectors are also called Single-Instruction, Multiple-Data (SIMD) operands, or packed operands. SIMD processing exploits data-level parallelism: the operations required to transform a set of vector elements can be performed on all elements of the vector at the same time, that is, a single instruction can be applied to multiple data elements in parallel. No comment.
Question I am at present in need of help with the bifurcation package XPPAUT. Actually, I have 3 differential equations, and when I apply XPP I get results, some of which I cannot interpret. If anyone is interested I can give the equations etc. Give me.
Question I am interested in the solution of nonlinear hyperbolic partial differential equations with various techniques, such as the Lie group theoretic method, the Van Dyke and Gutmaan techniques, etc. I'm interested in oscillation of functional differential equations (dynamic equations, neutral, neutral delay, delay differential equations, inequalities). I'm ready to cooperate in this area and related topics.
Question Yang-Fourier transforms and Yang-Laplace transforms are new tools to deal with fractal differential equations and dynamical systems in fractal space. I want to share and present new ideas to you to extend this transformation. You can contact me through e-mail.
Question Does anyone know how to solve a system of equations where the number of equations is more than the number of unknowns? Any help or references are highly appreciated. Consider the case of a number of nonlinear algebraic equations in several unknowns: F_j(x_1,...,x_U) = 0 for j = 1,...,E, where U < E (more equations than unknowns). If there is a solution (X_1,...,X_U) for the unknowns, then it suffices perhaps to solve only U of those equations. Here I write "perhaps" for the following reason: for given x_2,...,x_U there may be several solutions of F_j(x_1,...,x_U) = 0 for x_1, due to the nonlinear nature of the equations. For instance, there are normally two solutions for x_1 if F_j is quadratic in x_1. But it could be that only one of the values for x_1 also satisfies the rest of the equations F_k = 0, for k different from j, for the given x_2,...,x_U. Also, there may be the problem of which of the E equations to choose for obtaining as many equations as there are unknowns. Furthermore, there may be several solutions, whence uniqueness of the solution is not guaranteed. Additionally, it may be that the system does not possess a unique solution, or any solution at all. One possible approach is to replace the system of equations by a minimization problem in the least-squares sense: L_E(x_1,...,x_U) = sum_{j=1,..,E} F_j(x_1,...,x_U)^2 = min. Then any of the local minima of the sum is nonnegative, because the sum contains only nonnegative terms. The global minimum is the minimum of all the local minima; it is zero in case the original system has a solution, and positive otherwise. As in the linear case, one may minimize other norms of the vector function F, i.e. ||F|| = min. Of course, the minimization problem may be quite involved numerically.
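A minimal SciPy sketch of the least-squares reformulation just described (the toy system of three equations in two unknowns is my own example): scipy.optimize.least_squares minimizes the sum of squared residuals L_E directly.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy overdetermined system: E = 3 equations, U = 2 unknowns.
def F(x):
    x1, x2 = x
    return [x1**2 + x2**2 - 4.0,   # circle of radius 2
            x1 - x2,               # diagonal line
            x1 * x2 - 2.0]         # hyperbola
# The first two equations alone give x1 = x2 = sqrt(2), and that point
# happens to satisfy the third as well, so the global minimum of L_E is 0.

res = least_squares(F, x0=[1.0, 1.0])
print(res.x, res.cost)   # ~[1.4142, 1.4142], cost ~0
```

If the three equations had been inconsistent, the same call would return the least-squares compromise with a strictly positive cost, exactly as described above.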
An interesting question is whether it is preferable to consider an alternative minimization problem instead: choose U of the F_j, renumbered in such a way that they correspond to F_1,...,F_U, and minimize L_U(x_1,...,x_U) = sum_{j=1,..,U} F_j(x_1,...,x_U)^2 = min under the E-U constraints F_j = 0 for j = U+1,...,E. The constraints may be added via Lagrange parameters lambda_j, and then one has to minimize M(x_1,...,x_U, lambda_{U+1},...,lambda_E) = L_U(x_1,...,x_U) + sum_{j=U+1,...,E} lambda_j F_j(x_1,...,x_U) = min. Making M stationary with respect to all its E arguments then leads to E equations for the U unknowns x_k and the E-U unknowns lambda_j. There is still a further possibility that may be worth investigating, which I would like to introduce via an example. Consider simultaneously three quadratic equations in a single variable: F_1 = a x^2 + b x + c = 0; F_2 = d x^2 + e x + f = 0; F_3 = g x^2 + h x + i = 0. This nonlinear system may be converted to a linear one, a y + b x + c = 0; d y + e x + f = 0; g y + h x + i = 0, by introducing the new variable y = x^2. This linear system of 3 equations for 2 unknowns x, y may be solved by any of the standard methods. The original system only has a solution if a solution (x, y) of the linear system satisfies y = x^2. Again, one may tackle the linear system by minimizing L_3(x, y), but now with the constraint y = x^2, which may also be added via a Lagrange multiplier mu, say. Thus, one may try to linearize the original system by introducing further variables for the nonlinear terms and adding additional constraints for the defining equations of the new variables.
Question Hallo, there is a new proof of the 3n+1 problem! The paper is available. Perhaps there is a flaw in the proof. The Collatz conjecture (the famous 3n+1 problem): we construct a sequence of integers starting with the integer n = a_0. If a_j is even, the next number is a_(j+1) = a_j/2. If a_j is odd, the next number is a_(j+1) = 3*a_j + 1. Example n = 6: 6, 3, 10, 5, 16, 8, 4, 2, 1. The Collatz conjecture: for every positive starting integer, the sequence always ends in 4, 2, 1. No.
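The rule is two lines of code, which makes the conjecture easy to play with (a small sketch of my own; it assumes the trajectory reaches 1, which is exactly what the conjecture asserts):

```python
def collatz(n):
    """Return the Collatz trajectory of n down to 1 (assuming it gets there)."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz(6))          # [6, 3, 10, 5, 16, 8, 4, 2, 1], as in the example above
print(max(collatz(27)))    # trajectories can climb surprisingly high: 9232
```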
Question We usually express numbers in the decimal base and define the irrationals as the numbers with a non-periodic representation in the decimal base. But if a number has a non-periodic representation in the decimal base, does it then have a non-periodic representation in every other base? I am trying to solve this conjecture (maybe it is easy); if someone has an idea, I would be glad to receive it! At this moment I do not have the complete proof. This week I finish the semester, and in the vacation I will think about solving this problem and another one about sequences. When I have finished, I will upload the solution as a PDF. Thanks for the interest.
Question Is there any skew-symmetric matrix of odd order which is non-singular over a finite field? Try to find it. (Note: over a field of characteristic 2 such as GF(2), -1 = 1, so the usual argument det(A) = det(-A^T) = (-1)^n det(A) no longer forces an odd-order skew-symmetric matrix to be singular; if skew-symmetric is taken to mean just A^T = -A, the 1x1 matrix [1] over GF(2) already qualifies.)
Question Is there a definition of the point? Certainly a well-defined one? A point is a ball in n dimensions with center (0,0,...,0) (n times) and radius 0.
Question Hi, I am an M.Sc. graduate in mathematics and computer science. Suggest to me some good universities in Asia to do a Ph.D. It is good to hear that from you. Of course, I don't know any university in India with a good reputation, but as I am also in need of that, I will inform you when I come to realize there is any. Hope you do the same for me. Stay cool.
Question If we have z = f(x,y) and z = f(t), could you please answer my questions below: 1) Can I say x = f(t) and y = f(t)? 2) How can I analyze dz/dt? Best regards, Gholamreza Soleiman It is difficult to help, since the letter "f" seems to have two different meanings in your question. Is "f" a function of two variables (x, y) or a function of one variable t? Maybe your question is the following: let f be a real function of two real variables (x, y), and let g be another real function of one real variable t. Now consider the set S of real numbers (x, y, t) such that g(t) = f(x,y). Is it possible to find two real functions X and Y of t such that (x, y, t) belongs to S exactly when x = X(t) and y = Y(t)? To this question, the answer is most of the time no. Take for instance g(t) = t and f(x, y) = x + y. If the answer to the question were yes, then for each t you would have exactly one couple (x, y), namely (X(t), Y(t)), such that x + y = t. But this is not true, for instance for t = 1, since (x, y) = (1, 0) and (x, y) = (0, 1) are two different couples such that x + y = 1. By the way, "the chain rule" means something else in mathematics.
graceful labeling Question I am doing research in graph theory, on graceful labeling. Will you help me with this? Please refer to "A Dynamic Survey of Graph Labeling" by J. A. Gallian, Electronic Journal of Combinatorics.
Question There are five red balls and two green balls in a closed box. Two players in turn put a hand into the box and select a ball (without replacement). The player who first selects a green ball is the winner. Find the probability that the winner is the player who started the game. Sheba: The answer supplied earlier is not correct! It should be (2/7) + (5/7)·(4/6)·(2/5) + (5/7)·(4/6)·(3/5)·(2/4)·(2/3), which evaluates to 2/7 + 4/21 + 2/21 = 4/7. In the earlier answer, instead of (3/7), the multiplier should be (5/7) in the second as well as the third term of the expression. It was perhaps a typing error.
Question Lyapunov type of difference equation. What is a difference equation? Please throw some light on it. Dr. Balkishan Sharma
Question Hi, I am starting to explore better ways to use ResearchGate, not just for my benefit but to do something of value for others. I will start posting some links to various papers, publications, presentations, and briefs, as well as joining in on different discussions. I am looking for networking, collaboration, and work (jobs). Certainly I am open to sharing ideas, critiques, comments, views, and helping others. To me, everything requires an attitude of synergy and symbiosis in order for us to succeed, as scientists, as people. FYI, if anyone is interested, I just wrote up this summary: these are URLs about me and some of what I am doing, including past work, and also including things that are "orthogonal" and obviously more directed at surviving in a "non-friendly ecosystem" as far as science, and especially exploratory and non-mainstream ("non-major-institutional/corporate") R&D, are concerned. I can do some things pro bono and voluntarily, as part of a team, etc., to help advance the general interests and causes of good research, solid science, improved education, and better public understanding. However, I also seek (need) work: part-time, temporary, or full-time of course, in the US and/or anywhere in the world.
Best regards, Martin D, +1-757-847-5511, +1-202-415-7295 (cell). A very useful and sincere initiative indeed; thank you.
Question I am doing an M.Tech in computer science and want to do a PhD in maths. Is there any possibility of this? If you can take the GRE subject test in math and get acceptable marks, there are plenty of universities that will really consider your application for a Ph.D. (Also, know your research interests before jumping into this journey.)
Question Calculus (Latin, calculus, a small stone used for counting) is a branch of mathematics focused on limits, functions, derivatives, integrals, and infinite series. This subject constitutes a major part of modern mathematics education. It has two major branches, differential calculus and integral calculus, which are related by the fundamental theorem of calculus. Calculus is the study of change, in the same way that geometry is the study of shape and algebra is the study of operations and their application to solving equations. A course in calculus is a gateway to other, more advanced courses in mathematics devoted to the study of functions and limits, broadly called mathematical analysis. Calculus has widespread applications in science, economics, and engineering, and can solve many problems for which algebra alone is insufficient. Historically, calculus was called "the calculus of infinitesimals", or "infinitesimal calculus". More generally, calculus (plural calculi) may refer to any method or system of calculation guided by the symbolic manipulation of expressions. Some examples of other well-known calculi are propositional calculus, variational calculus, lambda calculus, pi calculus, and join calculus. Mathematics is the branch of science that deals with logic, decision-making, assumptions, deductions, clarity of thought, and the ability to solve problems in a calculative manner. Calculus is the branch of mathematics that deals with the finding and properties of derivatives and anti-derivatives of functions by methods originally based on the summation of infinitesimal differences. The two main types are differential calculus and integral calculus.
Question Prove that the equation of a straight line is y = mx + c: the gradient multiplied by x, added to the y-intercept, must be equal to y.
Question Colleagues, I am planning to change this thread to a category. However, at the moment, I will post my second communication of the thread CMT. Let us define another differential operator of infinite terms as
e^{-D} := ∑_{j=0}^{∞} (-1)^j D^j / j!
where for j = 0 we have the identity operator, and D := d/dx. Then, as in my first communication post, we can ask the following: for all ψ ∈ C^∞(I, ℝ) and all x ∈ I, what will be
e^{-D}(ψ(x)) = ∑_{j=0}^{∞} (-1)^j D^j ψ(x) / j! ?
Consider the following example. Example 1: Take ψ(x) = e^x, the usual natural exponential function. Claim: e^{-D}(ψ(x)) = ψ(x-1). Indeed,
e^{-D}(ψ(x)) = ∑_{j=0}^{∞} (-1)^j D^j (e^x) / j! = ∑_{j=0}^{∞} (-1)^j e^x / j! = e^x ∑_{j=0}^{∞} (-1)^j / j! = e^{x-1} = ψ(x-1).
∴ e^{-D}ψ(x) = ψ(x-1), which is a right translation of ψ by one unit. One can extend this result further and write a corollary. Corollary: ∀k ∈ ℕ: (e^{-D})^k ψ(x) = ψ(x-k), the right translate of ψ by k units.
Example 2: Let φ(x) = x³ + x² + x + 1. Then
e^{-D}φ(x) = ∑_{j=0}^{∞} (-1)^j D^j (x³ + x² + x + 1) / j! = x³ - 2x² + 2x.
But the expression we have at the end is precisely φ(x-1).
That is, once again we have a similar result: e^{-D}φ(x) = φ(x-1). Corollary: for every polynomial p(x), e^{-D}p(x) = p(x-1). Conjecture: ∀ψ ∈ C^∞(I, ℝ): e^{-D}ψ(x) = ψ(x-1). Corollary to the conjecture: ∀k ∈ ℕ, ∀ψ ∈ C^∞(I, ℝ): e^{-kD}ψ(x) = ψ(x-k). Further communications will be posted on operators defined from combinations of both. Dear Dr. Dejenie A. Lakew, Communication in mathematics teaching has great roles while teaching courses where mathematics communicates the practical engineering applications, for example: Laplace transform, state space, graph theory, Boolean difference, mod theory, and many others, which leads to realizing the real concepts of the systems.
Question Hi, I am interested in fixed point theory in different spaces. Random fixed point theory is also a subject of interest of mine. If you are interested, then we can start a discussion. Hello, we might have a common interest; please find out.
Question Colleagues, recently we had a new group, "International Professors", added to our group. It is therefore possible to create a new forum in order to share insights, new methods, interesting class encounters and new concepts introduced when teaching mathematics. This will create a platform to share and learn how curriculums differ or agree across global settings, and it might give a hint to education policy makers as to what they have to expect from mathematics curriculums in order to be at par with international standards. I will therefore present my first communication. It is on enlarging the usual differential operator D := d/dx in the variable x to something else. We know that the usual differentiation makes functions lose their smoothness, or regularity as we say, by a degree (if they are not infinitely many times continuously differentiable). The types of questions I pose can therefore be given as extra exercises or new insights to students who take calculus courses on sequences, series and convergence, to engage them to think not only about single calculus operations but about combinations of them, and thereby do algebraic computations at the same time. Let us define a new differential operator of infinite terms as
e^{D} := ∑_{j=0}^{∞} D^j / j!
where for j = 0 we have the identity operator. Then for a real-valued C^∞ function defined on some non-degenerate open interval I (or ℝ, for that matter) we can ask the following: what will be the action of e^{D} on such functions? That is, if ψ ∈ C^∞(I, ℝ), what will be ∑_{j=0}^{∞} D^j ψ(x) / j! ? The very immediate question will be the summability of the indicated series, but we consider cases in which that condition holds.
Example 1: Take ψ(x) = e^x, the usual natural exponential function. We see that e^{D}(e^x) converges to the sum e·ψ(x) = ψ(x+1). Indeed,
∑_{j=0}^{∞} D^j ψ(x) / j! = ∑_{j=0}^{∞} D^j (e^x) / j! = ∑_{j=0}^{∞} e^x / j! = e^x ∑_{j=0}^{∞} 1/j! = e^{x+1} = ψ(x+1).
∴ e^{D}ψ(x) = ψ(x+1), which is a left translation of ψ by one unit. One can extend this result further and write a corollary. Corollary: (e^{D})^k ψ(x) = ψ(x+k), the left translate of ψ by k units.
Example 2: Let φ(x) = x³ + x² + x + 1. Then
e^{D}φ(x) = ∑_{j=0}^{∞} D^j (x³ + x² + x + 1) / j! = x³ + 4x² + 6x + 4.
But the expression we have at the end is φ(x+1). Therefore once again we have: e^{D}φ(x) = φ(x+1). Claim: for a polynomial function p(x), e^{D}p(x) = p(x+1). Conjecture: ∀ψ ∈ C^∞(I, ℝ): e^{D}ψ(x) = ψ(x+1). We can also define a similar operator that results in right translations of C^∞ functions by counts of units: e^{-D} := ∑_{j=0}^{∞} (-1)^j D^j / j!. Further communications will be posted on the last operator and combinations of both.
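Both claims are the Taylor shift in disguise: for polynomials the operator series terminates, so they are easy to verify symbolically. A small SymPy check of e^{D}φ(x) = φ(x+1) for the cubic above (my own verification, not part of the original posts):

```python
import sympy as sp

x = sp.symbols('x')
phi = x**3 + x**2 + x + 1

# For a degree-3 polynomial the operator series terminates at j = 3.
shifted = sum(sp.diff(phi, x, j) / sp.factorial(j) for j in range(4))
print(sp.expand(shifted))             # x**3 + 4*x**2 + 6*x + 4
print(sp.expand(phi.subs(x, x + 1)))  # the same expression, i.e. phi(x+1)
```

Replacing the summand with (-1)**j * sp.diff(phi, x, j) / sp.factorial(j) checks the e^{-D} version against phi(x-1) in the same way.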
analytic real-valued functions Question Hey everybody. I have a big question (at least for me!): what do we mean by analytic real-valued functions on a closed interval or half-closed interval? If anyone can help me, I really need this. Full details are here: http://en.wikipedia.org/wiki/Analytic_function I have verified the content of this source; it is OK.
Question y = -x/(a^2-x^2); what is dy/dx, where a is a constant? Given how unlikely it is to get a useful explanation here, I suggest just using www.wolframalpha.com for cheating instead; you would have gotten Pathak's solution with, for instance, "dy/dx of y = -x/(a^2-x^2)" or "derivative of y = -x/(a^2-x^2)". An added plus is that it won't judge you for being too lazy to execute the straightforward derivation algorithm yourself, and also it is very likely to be correct, so you don't have to rely on majority voting. (For the record, the quotient rule gives dy/dx = -(a^2+x^2)/(a^2-x^2)^2.)
Question Like the hyperbolic and circular trigonometric functions, can we generalize trigonometric ratios with respect to a general curve? I think this is a problem of differential geometry. By setting new coordinates (a new curvilinear system, not necessarily orthogonal like the plane orthogonal coordinates) on a curve, under conditions, you have the same curve from another point of view. Some properties of the curve remain the same, such as its length or its curvature; some others do not (I think). I like to think as follows: consider y = e^x; then this curve is the same as log(x), because these two are catoptric (mirror images) of each other with respect to the line y = x. Hence we have the same curve; the only thing we did was to transfer the whole curve, say e^x, in the plane to its mirror image. The curve x^2 - y^2 = 1 is the same curve as y = 1/x; the reason is that we take x^2 - y^2 = 1 and rotate it by an angle a = pi/4, etc. However, one can see this in a better way with surfaces. A change of the parameters leaves some values unchanged and others not. (Some of those which remain the same are the Gauss curvature K and the mean curvature H.) So I think your answer will be given by a differential geometry book that covers plane curves, space curves, surfaces and their generalizations, which are Riemann spaces.
Question Hi, for the M/M/1 queue the limiting distribution is obtained by recursive arguments. For a complete solution of the difference-differential equations, refer to Gross and Harris (1998), Fundamentals of Queueing Theory, 3rd ed., Wiley, New York.
Question Solve this indices problem, friends... The answer to your question is infinity if you are just summing up 1/6 to infinity. Perhaps you meant 6+1/(6+1/(6+1/(6+...)))? Well, for this the n-th iteration is defined by the recursion a(1) = 6; a(n+1) = 6 + 1/a(n).
Since this sequence is bounded and the map t ↦ 6 + 1/t is a contraction on [6, 7], it converges to a (positive) limit x; replacing a(n) and a(n+1) (for large enough n) by x, you get x = 6 + 1/x, which is equivalent to x^2 - 6x - 1 = 0. Solving this you get x = 3 - sqrt(10) or x = 3 + sqrt(10), and since x must be positive, the solution is x = 3 + sqrt(10), which is about 6.16228.
Question Solve this homogeneous problem. I tried a lot but still can't get it; please tell me the answer.
Question Is Zeta[2+n^2] - 1 a Normal[mu, sigma]? (Zeta is the Riemann zeta function.)
Question The following link introduces the Vieta jumping method and some of its applications.
Question Need to know: D^3 + D + 1 = 0; solve it.
Question There are various implementations and variations of the LLL algorithm, depending on the specific scope. Different "editions" have different input variables and so on. Does anyone have experience with any of these implementations? I shall have a look at these in detail!
Question Given three vectors x, y, z, how do I plot the magnitude sqrt(x^2+y^2+z^2) and show it in 3D using Matlab or Mathematica? If you have any other math package I can use, and know how, that would be great too. @Samuel: Paraview seems cool, but I have never used it before and it looks complex. Any suggestions on how to go about it?
Question Can we relate Gröbner bases for ideals to computational mathematics or applied mathematics? Thanks. Gröbner bases can also be used to prove some geometric theorems by mathematical mechanization.
Question I need a neat but detailed explanation of the introduction of a scavenger into a predator-prey Lotka-Volterra model. I ask that the assumptions made be clearly outlined as this explanation is given. Thank you, fellow mathematicians. Predator-prey Lotka-Volterra model: x' = ax - bxy; y' = -cy + dxy, where a stands for the reproductive power of the prey; -bxy and dxy model the encounters of prey and predators (negative for the prey and positive for the predators); and -cy models the decline of the predators, which is more powerful than their reproductive capacity. If one tries to introduce a new actor, a new equation is needed. Let z denote the scavengers. What are the relations of the scavengers to the remaining actors? If you assume that there is no influence, then the first two equations remain the same. If you assume that scavengers compete among themselves (-ez) and are fed by dead bodies (+fx + gy), then the third equation is: z' = -ez + fx + gy. Other assumptions about the relations of the species lead to other equations.
Question What are the main differences between Finsler spaces and Riemann spaces? In Riemannian geometry the metric tensor depends only on the point x of the manifold M (g = g(x)), whereas in Finsler geometry the metric tensor depends on both a point x of M and a tangent vector y to M at x (g = g(x,y)).
Question Hi, looking for a way around the liar and logic contradictions, I have introduced a new logical dimension: statements are not absolutely true or false anymore, but true or false relative to a viewing angle, a kind of logical layer or meta-level. With this new dimension, problems become solvable that are unsolvable with classical logic, as the truth values belong to different layers. The good news (in my theory): the liar's paradox, Cantor's diagonal argument, Russell's set and Goedel's incompleteness theorem are no longer valid. The bad news: there is no more absolute truth, and we have to get used to a new mathematics where numbers might have multiple prime factorisations.
Over all, infinity and paradoxes will be much easier to handle in layer theory, finite sets and natural numbers more complicated, but possible (though it will be a new kind of natural numbers...). The theory was in the beginning just a 'Gedankenexperiment', and my formal description and axioms may still be incorrect and incomplete. Perhaps someone will help me? Here are my axioms of layer logic: Axiom 0: There is an inductive set T of layers: t=0,1,2,3,... (We can think of the classical natural numbers, but we need no multiplication.) Axiom 1: Statements A are entities independent of layers, but get a truth value only in connection with a layer t, referred to as W(A,t). Axiom 2: All statements are undefined (=u) in layer 0. ∀A: W(A,0)=u. (We need u to have a symmetric start.) Axiom 3: All statements in positive layers have either the truth value 'w' (true) or '-w' (false). ∀t>0: ∀A: W(A,t) = either w or -w. (We could have u in all layers, but things would be more complicated.) Axiom 4: Two statements A and B are equal in layer logic if they have the same truth values in all layers t=0,1,2,3,...: ∀A: ∀B: ( A=B := ∀t: W(A,t)=W(B,t) ). Axiom 5: (Meta-)statements M about a layer t are constant, = w or = -w, for all layers d >= 1. For example M := 'W(-w,3) = -w'; then w = W(M,1) = W(M,2) = W(M,3) = ... (Meta-statements are similar to classic statements.) Axiom 6: (Meta-)statements about 'W(A,t)=...' are constant, = w or = -w, for all layers d >= 1. Axiom 7: A statement A can be defined by defining a truth value for every layer t. This may also be done recursively, defining W(A,t+1) from W(A,t). It is also possible to use already defined values W(B,d) and values of meta-statements (if t>=1). For example: W(H,t+1) := W( W(H,t)=-w v W(H,t)=w, 1 ). A0-A7 are meta-statements, i.e. W(An,1)=w. Although inspired by Russell's theory of types, layer theory is different. For example, there are more valid statements (and sets) than in classical logic and set theory (or ZFC), not fewer. And (as we will see in layer set theory) we will have the set of all sets as a valid set. Last but not least, a look at the liar in layer theory: Classic: LC := "This statement LC is not true" (LC is paradoxical). Layer logic: we look at 'The truth value of statement L in layer t is not true' and define L by (1): ∀t: W(L,t+1) := W( W(L,t) -= w , 1 ). Axiom 2 gives us: W(L,0)=u. (1) with t=0 gives us: W(L,1) = W( u-=w , 1 ) = -w. (2) with t=1: W(L,2) = W( -w-=w , 1 ) = w. (3) with t=2: W(L,3) = W( w-=w , 1 ) = -w. L is a statement with different truth values in different layers. Set theory is very nice in layer theory, but more on that another time. What do you think about it: is it worth further investigation, or too far-fetched? Yours, Trestone. Hello, here are some more details about the new set theory that can be defined using layer logic. This "layer set theory" differs from ZFC in many points: it has only one kind of infinity, and the set of all sets is an ordinary set. The central idea is to treat "x is an element of set M" (x e M) as a layer statement: it is true in layer t+1 that set x is an element of the set M if the statement A(x) is true in layer t. (There may still be some gaps in my formalization of layer logic and set theory, but I hope that this is owing to my limited capabilities and not to gaps in layer logic: help welcome!) Equality of layer sets: W(M1=M2, d+1) = W( For all t: W(x e M1,t) = W(x e M2,t), 1 ). Especially: W(M=M, d+1)=w for d>=0. The empty set 0: W(x e 0, t+1) := W( W( x e 0, t ) = w , 1 ) = -w for t>=0.
The full set All: W(x e All, t+1) := W( W( x e All, t ) = w v W( x e All, t ) = u v W( x e All, t ) = -w , 1 ) = w for t>0 and = u for t=0. So, other than in most set theories, in layer theory the full set is a normal set. Axiom M1 (assignment of statements to sets): W(x e M, t+1) := W( W( A(x), t ) = w1 v W( A(x), t ) = w2 v W( A(x), t ) = w3 , 1 ) with w1,w2,w3 = w,u,-w. For every layer set M there exists a layer logic statement A(x) which fulfils, for all t=0,1,2,...: W(x e M, t+1) = W( W( A(x), t ) = w v W( A(x), t ) = -w , 1 ). W(x e M, 0+1) = W( W( A(x), 0 ) = w v W( A(x), 0 ) = -w , 1 ) = W( u=w v u=-w, 1 ) = -w. Axiom M2 (sets defined by statements): For every layer logic statement A(x) about a layer set x there exists a layer set M so that for all t=0,1,2,3,... it holds that: W(x e M, t+1) := W( A(x), t ) (or the expressions of axiom M1). Definition M3 (definition of meta sets): If F is a logical function (like identity, negation, or e.g. F∘W(x e M1,t) = "W(x e M1,t)=w"), then the following equation defines a meta set M (M1=M is allowed): W(x e M, t+1) := W( F∘W(x e M1, t), 1 ). Consequences of the axioms and definitions: In layer 0 all sets are u: W( x e M, 0 ) = u (as are all statements in layer 0). In layers > 0: W(x e M, t+1) := w if W( A(x), t ) = w, else W(x e M, t+1) := -w. For all x and (normal layer) sets M it holds that W(x e M, 1) = u (as W(A(x),0)=u). For all x and meta sets M it holds that W(x e M, 1) = w or -w. Last but not least, let's look at the Russell set. Classic definition: RC is the set of all sets that do not have themselves as elements: RC := set of all sets x with x -e x. In layer theory: W(x e R, t+1) := W( W( x e x, t ) = -w v W( x e x, t ) = u , 1 ). W(x e R, 0+1) = W( W( x e x, 0 ) = -w v W( x e x, 0 ) = u , 1 ) = W( u=-w v u=u , 1 ) = w. Therefore W(R e R,1) = w. W(R e R,2) = W( W( R e R, 1 ) = -w v W( R e R, 1 ) = u , 1 ) = W( w=-w v w=u , 1 ) = -w. And so W(R e R,3) = w, W(R e R,4) = -w, ... R is a set with different elements in different layers, but that is no problem in layer set theory. As All, the set of all sets, is a set in layer theory, it is no surprise that Cantor's diagonalisation is a problem no more (I just give the main idea): Let M be a set, P(M) its power set, and F: M -> P(M) a bijection between them (in layer d). Then define the set A by: W(x e A, t+1) = w := if ( W(x e M,t)=w and W(x e F(x),t)=-w ). A is a subset of M and therefore in P(M). So there exists x0 e M with A=F(x0). First case: W(x0 e F(x0),t)=w; then W(x0 e A=F(x0), t+1) = -w (no contradiction, as in another layer). Second case: W(x0 e F(x0),t)=-w; then W(x0 e A=F(x0), t+1) = w (no contradiction, as in another layer). If we take All as M and the identity as bijection F, we get for the set A: W(x e A, t+1) = w := if ( W(x e All,t)=w and W(x e x,t)=-w ) = if ( W(x e x,t)=-w ). This is the layer Russell set R (I omitted the 'u'-value for simplification) - and no problem. So in layer theory we have just one kind of infinity - and no more Cantor's paradise... Yours, Trestone Question Integration no Question For example: there are 231 non-isomorphic groups of order 96... and only one group of order 97 up to isomorphism. In general, no formula is known for the number of groups of order n. The number is sometimes incredibly huge. For example, there exist about 50 billion groups of order 1024. If |G|=pq where p, q are prime numbers and p < q, then there exist one or two groups of order pq, according to whether or not p divides q-1.
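A quick numeric illustration of the last claim (my own Python sketch, using sympy's primerange; not part of the original answer). It simply applies the stated criterion: two groups of order pq when p divides q-1, one (the cyclic group) otherwise.

from sympy import primerange

# For each pair of primes p < q, print the predicted number of groups of order pq.
for p in primerange(2, 12):
    for q in primerange(p + 1, 20):
        n_groups = 2 if (q - 1) % p == 0 else 1
        print(f"|G| = {p*q} = {p}*{q}: {n_groups} group(s) up to isomorphism")

For example, |G| = 6 = 2*3 gives two groups (the cyclic group and S3, since 2 divides 2), while |G| = 15 = 3*5 gives only the cyclic group, since 3 does not divide 4.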
Question Once you understand what the P vs. NP problem is actually all about, you might as well try and solve it. In loose terms, the P vs. NP problem actually seeks an answer to this simply stated question: "Is finding a solution to a math problem equally hard in comparison to verifying that it IS a solution?" Math guys usually "search" for a solution to their problem (e.g. solving some equation), but this can apply to "searching" any data set. Imagine a program that searches for a solution to some equation. That program will most certainly consist of two major parts: a searching part (the solver) and a verifying part (the verifier). The solver tries to construct a solution by some rules and the verifier checks that it actually *is* a solution. This solution-constructing part is like when you do all sorts of manipulations (factoring, cancelling common terms, ...) to solve an equation, and the verifying part is more like when you plug in some values for your solution back into the original equation to check whether both sides turn out equal. The first part will usually take up much time, as finding a solution to some equations is sometimes hard, but once the right solution is constructed, the verifier will take only a fraction of that time to check that it actually IS a solution. P vs. NP asks whether those two parts are actually the same thing, because it would be nice, of course, if solving an equation were as easy as checking the result. Another way to look at it: it's basically a question about searching through (potentially large) sets of data. In that context P vs. NP asks this: "Is there a systematic way of searching through a large data set?" (A large data set means, for example, a data set not completely searchable in the course of one person's lifetime, for example the whole Internet.) Of course, people have been trying to answer this for decades, ever since the computer era started, but with no luck, in my opinion because of the way the final solution needs to be presented. It is widely believed that P is not equal to NP, because otherwise it would have baffling implications for, say, cryptography and code breaking. As there is a huge number of potential passwords that one can make up, a positive answer to P vs. NP means that a brute-force search is not necessary when trying to guess someone's password, and there is also a systematic way to obtain it. On the other hand, if P is not equal to NP, then it means that there is no such thing. Also, in this digital age, when almost everything is stored on a computer (music, pictures, texts, ...), if P = NP is true then we could generate any piece of music, any picture, anything... by means of a computer program that would solve P vs. NP; we just "search" for it, provided we have a computer program that recognizes that something is "a piece of music". Finally, P vs. NP can be restated in terms of creativity as: "Can creativity be effectively automated?" The hardest thing about solving the problem is actually proving that either case is true. There are of course, up till now, many false starts and dead ends, and the people still trying today are trying to prove that in fact P does not equal NP. Richard Karp, one of the most renowned computer scientists, once said that this problem will someday be solved (either way) by someone under thirty using a completely new method. So, until then, you might try and solve it for yourself. What if it's only solvable as long as the solution is known to exist? Can't run a successful search for music that doesn't exist.
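To make the solver/verifier split above concrete, here is a toy Python sketch (my own illustration, not from the original text) using subset sum, a classic NP problem: the brute-force solver may inspect up to 2^n subsets, while verifying a proposed certificate takes a single linear pass.

from itertools import combinations

def solve(numbers, target):
    # Solver: exhaustive search over all 2^n subsets.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

def verify(subset, numbers, target):
    # Verifier: a single linear pass (membership checks omitted for brevity).
    return subset is not None and sum(subset) == target

nums = [3, 34, 4, 12, 5, 2]
certificate = solve(nums, 9)
print(certificate, verify(certificate, nums, 9))  # (4, 5) True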
I think the only factual solution would be (P=NP) as long as (P-NP>0). Question Definition and application Sobolev spaces are both distribution spaces and Banach spaces. You can solve PDEs by: 1. Distribution calculus (convolutions, Fourier transform...) to find weak solutions. 2. Sobolev imbeddings of Sobolev spaces in $C^k$ spaces of smooth functions. Some nice books, in addition to Sudev's list: 1. F. G. Friedlander, M. Joshi, Introduction to the Theory of Distributions. 2. Haim Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equations. Question I need help to understand the computer science applications of algebra (rings, fields, groups, etc.) Group theory, fields and rings are very useful in cryptanalysis. For many cryptosystems and their cryptanalysis we use groups and fields (i.e., for encrypting a message and decrypting a message). The main aim is to determine how efficient an algorithm is in terms of complexity. Question The symbols we use in mathematics to form equations are just an aid in clearly forming an argument and communicating it to others. We are clearly restricted when we use this formal language. If we could only cast out any mention of this language and symbols when doing mathematics, then we would be on the right track in truly understanding reality's ways. The notions of quantity, form, change, space, shape, order, etc. are all independent of their symbolic representation. The language can easily change through time, but these notions will not. Computation as we know it is merely a formal manipulation or transformation of symbols. It can be done by hand or by a computer. Either way, there is always a notion of a conceiver and an executor present when talking about computation. These two are usually one and the same, but I like to think about them as separate entities. The executor follows a fixed set of rules to transform a given string of symbols that a conceiver has conceived having some end goal in mind. The executor blindly follows these rules and eventually (if he's in luck and didn't get stuck somewhere blindly following the rules) he will get a transformed string of symbols representing the final result. And the conceiver is the one that anticipates this result, again as a string of symbols. So, when doing computation, the main assumption is that when we manipulate symbols, we manipulate the notions that they represent. Just like in primitive times, when people practiced magic, they believed that the symbols they used in their spells represented objects from the real world. They believed that drawing these symbols in some special sequence would result in a spell being cast, and as a result something in the real world would change according to the spell's intention. So, in an amusing way, doing mathematics can be regarded as "doing magic", not in the real world, but in the world of ideas. Computers process strings of symbols by following a fixed set of rules that we call a program. The conceiver is the programmer, and the executor is of course the computer. The processing by a computer is usually done in a one-by-one fashion, but is much faster than doing it by hand. Computers can be seen as manipulators of symbols, or executors of programs, but the actual thing we are after is the "manipulated" idea after the computer has done millions and millions of manipulations on it (that would be too tedious to do by hand).
So "ideas" are the ones that we are after when doing computation, because we hope that this mechanical grinding away of symbols will tell us something new and interesting about reality and nature, although this point of view was refuted a hundred years ago by Godel's famous incompleteness theorems. These theorems show that there is definately something more to mathematics and computation than just "symbol grinding". Remarkably, Godel showed this using only using some basic facts from NUMBER THEORY, nothing fancy. And what about nature and reality ? What are nature's rules, and what "language" is used to set these rules ? Nature is the executor, but who is the conciever ? And what is the final result ? Is it LIFE maybe ? The answers to these questions are certainly beyond human comprehension, but there is, as always a lot if speculation about it! But, when we finally find this out, only then we can make a significant progress in truly understanding this "manipulation of ideas" notion and and "reality's ways" in general that mathematicians are still desperately and vaguely trying to capture by the notion of "computation". Majority of people working in the area can do nothing but manipulating symbols. They are always formally right because they never break the formal grammar. So many people are doing this that if you wish to do something in this manner, you can be sure beforehand, that this is already done. However, they never dare to try to do something non-trivial. All non-trivial ideas as well as the reality lie beyond this grammar. Those able to see it, are not confined in formal grammar. Symbols an all their combinations constitute a discrete set which has dimension zero, whereas the world is continuous and has greater dimension (I believe, 4), therefore it cannot be embedded into 0-dimensional grammar or symbolic logic. Let computer do what can be done in 0 dimensions and do what it cannot. Human brain is presently the only instrument to work in higher (than 0) dimensions. Therefore it does not obey formal rules blindly. I believe that the time of pure formalism is over. Further progress depends on those who can go beyond blind formalities. Question Abstract algebra Total no. Of homomorphisms will be gcd(m,n)... I wl find the no. Of onto homomorphisms... Question The butterfly effect What about the brownian mouvement ? Question Hi, We are working on the theory of GCR-Lightlike Submanifolds of indefinite Kaehler manifolds and Sasakian Manifolds. Till now we have studied Totally Umbilical, Totally geodesic, Mixed geodesic GCR-lightlike submanifolds, GCR-lightlike Product, sectional curvature and Holomorphic sectional curvatures of GCR-Lightlike submanifolds and found expressions for Ricci tensor also. Now I am looking for new topic for GCR-Lightlike submanifolds. So please suggest some topics on which we can continue our research. Thanks. Dear Cenap Thank you very much for your kind suggestion. We will definitely try to work on this space. Regards Set theory Question What is set theory, and where is it applicable?
# The Windy Tales favorites

Just another day at the office at Windy Tales Inc. With some delay the twins had mixed their favorites. Unfortunately, the mixes got mixed up.

HE LEFT MY PAPER NET AND SAT RIVALLED ON THE ICE CUE THAT DREAM ESKIMO WINS A CRASH CINEMA CLUE DIAL TWO MOO POACH SQUID

Can you figure out the names of their favorites?

P.S.: Lady Twines pointed out that alphabetically A(2), C(7), C(5), M(2), M(6), M(5), P(2), P(3), Q(8), S(2) was actually her favorite.

Hint: A good place to start here is to think about (based on the tags to this puzzle) what these "mixes" might be a reference to and how that could change the name of the company.

• On account of the tag lateral-thinking: could the answer be 'Yes' or 'No'? – mbjb Dec 1 '16 at 4:26
• Yes, it could be, but no, it isn't :P – Levieux Dec 2 '16 at 9:49
• @Levieux - "how that could change the name of the company" I didn't get it. And Lady Twines or Lady Twins? :-/ – Techidiot Dec 2 '16 at 10:34
• @Techidiot: Lady Twines. She is probably one of the Delay twins. – M Oehm Dec 2 '16 at 10:47
• Hmm.. Yes. I thought it was a typo. – Techidiot Dec 2 '16 at 10:52

The common theme behind the mixes is ... Walt Disney, which is an anagram of Windy Tales, Lady Twines or Twins (with) delay. I guess that the favourites are ... protagonists of Walt Disney films, namely:

THE ICE CRASH – Cheshire Cat
CLUE RIVALLED – Cruella de Vil
CINEMA LEFT – Maleficent
MY CUE ESKIMO – Mickey Mouse
PAPER NET – Peter Pan
POACH ON SAT – Pocahontas
SQUID A MOO – Quasimodo
WINS TWO HE – Snow White

Lady Twines' favorite is made up of the i-th letter of each of the solutions starting with the corresponding letters. This is also a hint that there was one solution starting with A, two starting with C, and so on. The list is sorted alphabetically by character name:

C(7) → R (Cheshire Cat)
C(5) → L (Cruella de Vil)
M(6) → I (Maleficent)
M(5) → E (Mickey Mouse)
P(2) → E (Peter Pan)
P(3) → C (Pocahontas)
Q(8) → D (Quasimodo)
S(2) → N (Snow White)

Unscrambling the letters LRLAIEECDN leads to Cinderella.

• +1. I haven't watched half of them. :) Also, not sure how it was supposed to be solved. :D Too tough. Well done – Techidiot Dec 2 '16 at 12:40
• @Techidiot: The tough part was to find out about Walt Disney. Most anagram tools fail here, but Chambers Word Wizard, which has other drawbacks, knows about Walt Disney. I recognised only Snow White and Pocahontas before resorting to the Interwebs. – M Oehm Dec 2 '16 at 12:47
• Oh I see. So there are anagram solvers for finding proper nouns as well. Great. Bookmarked! :) – Techidiot Dec 2 '16 at 12:49
• @MOehm: well done! Only Lady Twines' favorite is not quite correct yet, as you identified yourself. Keep in mind that her list was ordered alphabetically ;) – Levieux Dec 2 '16 at 12:54
• @Levieux: Okay, got it. (It was actually one of the characters I knew and expected to find in the list. By the way, my list is sorted, too, which helped.) – M Oehm Dec 2 '16 at 13:03
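For the record, the anagram mechanic can be checked mechanically. A tiny Python sketch (my addition; the mixes and names are taken from the answer above):

def is_anagram(a, b):
    # Normalize by dropping spaces and case, then compare sorted letters.
    normalize = lambda s: sorted(s.replace(" ", "").lower())
    return normalize(a) == normalize(b)

print(is_anagram("Windy Tales", "Walt Disney"))    # True
print(is_anagram("THE ICE CRASH", "Cheshire Cat")) # True
print(is_anagram("MY CUE ESKIMO", "Mickey Mouse")) # True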
# Hydrogen peroxide from water

Is it possible that $\ce{2 OH-}$ from water could react to form $\ce{H2O2}$? I mean, most autoionization forms $\ce{H3O+}$ and $\ce{OH-}$. However, I think it is possible that $\ce{H-}$ and $\ce{OH+}$ form, though the amount of these two formed is small. This comes from water acting as an acid and H+ acting as a base.

The only "hydroxyl"-type species whose recombination has been observed to yield hydrogen peroxide are hydroxyl radicals. $$\ce{HO^. + ^.OH -> H2O2}$$ In order to generate these hydroxyl radicals from water, a lot of energy has to be provided, typically by radiolysis (short-wave uv, $\gamma$, electron beam). The conceivable steps yielding $\ce{HO^.}$ are either $$\ce{H2O -> H^. + ^.OH}$$ or $$\ce{H2O -> H2O^{+.} + e-}$$ $$\ce{H2O^{+.} -> H+ + ^.OH}$$

Is it possible that $\ce{2 OH-}$ from water could react to form $\ce{H2O2}$? There would be two electrons left unaccounted for. There needs to be a balanced chemical equation; then we could consider the equilibrium constant for the reaction. We could write $\ce{2H2O -> H2 + H2O2}$, but it is extremely unfavorable, like water decomposing to hydrogen and oxygen gas. You could calculate the Gibbs free energy of the reaction and an equilibrium constant, but the equilibrium constant will be extremely small.

• This is true but to make it a better answer I'd do the actual calculation and show the real reason why this doesn't happen. – matt_black Dec 24 '14 at 14:07
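To put a number on "extremely small", here is a rough Python estimate (my own sketch, not from the thread). The standard Gibbs energies of formation are approximate literature values: about -237.1 kJ/mol for H2O(l), about -120.4 kJ/mol for H2O2(l), and 0 for H2(g).

import math

# 2 H2O(l) -> H2(g) + H2O2(l)
dG = (-120.4 + 0.0) - 2 * (-237.1)   # kJ/mol, ~ +353.8
R, T = 8.314, 298.15                 # J/(mol*K), K
K = math.exp(-dG * 1000 / (R * T))
print(f"dG = {dG:.1f} kJ/mol, K = {K:.1e}")  # K on the order of 1e-62

An equilibrium constant of roughly 1e-62 makes the point quantitatively: the reaction effectively does not proceed.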
# Is there a way to remove white margins when importing a pdf file?

A workaround is to use pdfcrop separately (in the terminal) to crop the PDF file we want to import. However - is there a way to remove white margins when importing a PDF file from within the tex file? The two common packages to import PDF files are pdfpages or graphicx. Can I 'preprocess' a file for them with pdfcrop within the tex file?

• What do you mean by importing? Do you mean embedding a page or series of pages from a PDF in a TeX document? – Charles Stewart Dec 29 '13 at 14:05
• I have a lot of plots saved separately as PDF files and now I would like to embed/import (I'm not sure what's the right phrase) them one by one into a TeX document. – tales Dec 29 '13 at 14:10
• With shell-escape enabled you might be able to call pdfcrop from your TeX document. I don't know whether the cropping would be synchronous (good) or asynchronous (bad). – Ethan Bolker Dec 29 '13 at 14:54
• Can't you just run pdfcrop on all the files first, e.g. with a for loop? Which operating system do you use? – Torbjørn T. Dec 29 '13 at 15:35
• I use Ubuntu. Well, in principle I could run a loop first but I wanted to be able to crop the files on-the-go. Following Ethan's advice and enabling shell escape I can now use: \immediate\write18{pdfcrop charge_distribution.pdf tmp.pdf} \includegraphics[width=\textwidth]{tmp.pdf} Now I would like to try to create some kind of macro (or some similar structure, I am not that familiar with LaTeX yet) that would enable me to do that in a quicker way – tales Dec 29 '13 at 15:44

A new command that works like \includegraphics, but crops the pdf image:

\newcommand{\includeCroppedPdf}[2][]{%
  \immediate\write18{pdfcrop #2}%
  \includegraphics[#1]{#2-crop}}

Remember: \write18 needs to be enabled. For most TeX distros set the --shell-escape flag when running latex/pdflatex etc.

# Example

\documentclass{article}
\usepackage{graphicx}
\newcommand{\includeCroppedPdf}[2][]{%
  \immediate\write18{pdfcrop #2}%
  \includegraphics[#1]{#2-crop}}
\begin{document}
\includeCroppedPdf[width=\textwidth]{test}
\end{document}

# Avoid cropping on every compile

To avoid cropping on every document compilation, you could check if the cropped file already exists. (Some checksum would be better.)

\documentclass{article}
\usepackage{graphicx}
\newcommand{\includeCroppedPdf}[2][]{%
  \IfFileExists{./#2-crop.pdf}{}{%
    \immediate\write18{pdfcrop #2 #2-crop.pdf}}%
  \includegraphics[#1]{#2-crop.pdf}}
\begin{document}
\includeCroppedPdf[width=\textwidth]{test}
\end{document}

# MD5 Checksum Example

The idea is to save the MD5 of the image and compare it on the next run. This requires the \pdf@filemdfivesum macro (only works with PDFLaTeX or LuaLaTeX). For XeLaTeX you could use \write18 with the md5sum utility or do a file diff.

\documentclass{article}
\usepackage{graphicx}
\usepackage{etoolbox}
\makeatletter
\newcommand{\includeCroppedPdf}[2][]{\begingroup%
  \edef\temp@mdfivesum{\pdf@filemdfivesum{#2.pdf}}%
  \ifcsstrequal{#2mdfivesum}{temp@mdfivesum}{}{%
    % file changed
    \immediate\write18{pdfcrop #2 #2-crop.pdf}}%
  \immediate\write\@auxout{\string\expandafter\string\gdef\string\csname\space #2mdfivesum\string\endcsname{\temp@mdfivesum}}%
  \includegraphics[#1]{#2-crop.pdf}\endgroup}
\makeatother
\begin{document}
\includeCroppedPdf[width=\textwidth]{abc}
\end{document}

• Perfect, just what I've been working on, thank you – tales Dec 30 '13 at 1:46
• @tales You're welcome! Just added a MD5 example that also might be useful.
– someonr Dec 30 '13 at 2:13 • Doesn't the pdfcrop package already do something like this? Actually, that just seems to give you the command which you use here, and the rpdfcrop command, I guess. – cfr Dec 30 '13 at 2:31
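Following the for-loop suggestion in the comments, a minimal Python sketch (my own, assuming pdfcrop is on the PATH) that pre-crops every PDF in the working directory before compilation, as an alternative to calling pdfcrop from within TeX:

import glob, subprocess

for pdf in glob.glob("*.pdf"):
    if pdf.endswith("-crop.pdf"):
        continue  # skip files that are already cropped output
    out = pdf[:-4] + "-crop.pdf"
    subprocess.run(["pdfcrop", pdf, out], check=True)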
# Non-linear dynamics problem: A mechanical analog of dx/dt=sinx [closed]

I have been stuck on this particular problem for a while. It is a problem from Nonlinear Dynamics and Chaos by Strogatz. The thing is, I am having a hard time finding a mechanical system that follows dx/dt = sin x, even approximately. No, simple harmonic motion doesn't work. Then the problem asks to intuitively explain why x=0 and x=pi are unstable and stable fixed points, respectively. So it seems like the system should be 'familiar'. But I am not finding any 'familiar' system with this equation of motion.

It looks like a typo to me: the equation is $\ddot{x}=\sin x$ and this is just a pendulum with a slightly odd convention for the angle. Posit $x = \pi + \theta$ and you get the usual $\ddot{\theta}+\sin\theta = 0$.
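A quick numerical sanity check of the stability claim for the first-order system dx/dt = sin x (my own sketch; forward Euler is crude but sufficient here): trajectories starting near 0 run away from it, while anything in (0, 2*pi) settles at pi.

import math

def integrate(x0, dt=1e-3, steps=20000):
    x = x0
    for _ in range(steps):
        x += math.sin(x) * dt  # forward Euler step for dx/dt = sin(x)
    return x

print(integrate(0.1))           # ~3.1416: x = 0 repels
print(integrate(math.pi - 0.1)) # ~3.1416: x = pi attracts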
# Yashiro (in Oiso) to Bernard Berenson

Yashiro's Study of the Annunciation in Art (1952), mentioned in this letter, was sent to Berenson with a handwritten dedication by the author. A sketch of Yashiro by Dario Neri was found in the book. The sketch was presumably made when Yashiro visited I Tatti in 1956; see below.
## Bibliography entry GibKM

author: Gibney, Angela and Keel, Sean and Morrison, Ian
title: Towards the ample cone of moduli spaces of stable curves
year: 2002
journal: J. Amer. Math. Soc.
volume: 15
pages: 273–294
arXiv: math/0006208

@ARTICLE{GibKM,
AUTHOR = "Gibney, Angela and Keel, Sean and Morrison, Ian",
TITLE = "Towards the ample cone of moduli spaces of stable curves",
JOURNAL = "J. Amer. Math. Soc.",
VOLUME = "15",
PAGES = "273--294",
YEAR = "2002",
EPRINT = "math/0006208"
}

This item is never cited.
A simple proof of the logarithmic Sobolev inequality on the circle Séminaire de probabilités de Strasbourg, Tome 21 (1987), pp. 173-175. @article{SPS_1987__21__173_0, author = {\'Emery, Michel and Yukich, Joseph E.}, title = {A simple proof of the logarithmic {Sobolev} inequality on the circle}, journal = {S\'eminaire de probabilit\'es de Strasbourg}, pages = {173--175}, publisher = {Springer - Lecture Notes in Mathematics}, volume = {21}, year = {1987}, zbl = {0616.46023}, mrnumber = {941981}, language = {fr}, url = {http://www.numdam.org/item/SPS_1987__21__173_0/} } TY - JOUR AU - Émery, Michel AU - Yukich, Joseph E. TI - A simple proof of the logarithmic Sobolev inequality on the circle JO - Séminaire de probabilités de Strasbourg PY - 1987 DA - 1987/// SP - 173 EP - 175 VL - 21 PB - Springer - Lecture Notes in Mathematics UR - http://www.numdam.org/item/SPS_1987__21__173_0/ UR - https://zbmath.org/?q=an%3A0616.46023 UR - https://www.ams.org/mathscinet-getitem?mr=941981 LA - fr ID - SPS_1987__21__173_0 ER - Émery, Michel; Yukich, Joseph E. A simple proof of the logarithmic Sobolev inequality on the circle. Séminaire de probabilités de Strasbourg, Tome 21 (1987), pp. 173-175. http://www.numdam.org/item/SPS_1987__21__173_0/ 1. Bakry, D. and M. Emery (1985) Diffusions hypercontractives, Lecture Notes in Mathematics, no. 1123, pp. 177-206. | Numdam | MR 889476 | Zbl 0561.60080 2. Gross, L.. (1975) Logarithmic Sobolev inequalities, Amer. J. Math., 97, pp. 1061-1083. | MR 420249 | Zbl 0318.46049 3. Rothaus, O.S. (1980) Logarithmic Sobolev inequalities and the spectrum of Sturm-Liouville operators, J. Functional Analysis, 39, pp. 42-56. | MR 593787 | Zbl 0472.47024 4. Rothaus, O.S.. (1981) Diffusion on compact Riemannian manifolds and logarithmic Sobolev inequalities, J. Functional Analysis, 42, pp. 102-109. | MR 620581 | Zbl 0471.58027 5. Weissler, F.B.. (1980) Logarithmic Sobolev inequalities and hypercontractive estimates on the circle, J. Functional Analysis, 37, pp. 218-234. | MR 578933 | Zbl 0463.46024
# Electric Field inside an ideal conductor

I have some doubts about the electric field inside an ideal conductor (let's call it E). Precisely, I have read two different descriptions:

1) In physics books I read that the electric field inside a conductor in electrostatic equilibrium is equal to 0. The physical reason for this is the fact that all charges, at equilibrium, are distributed on the external surface of the conductor, since only this distribution can reduce their repulsion forces. A mathematical view of this is given by the equation J = sigma * E. In fact, since sigma = infinite and J = 0 (since equilibrium means that charges do not move), necessarily we must have E = 0. According to this explanation, E = 0 only in electrostatic equilibrium.

2) In electromagnetic fields books I read an explanation which is similar, but not identical. I read that, since J must have a finite value, and J = sigma * E and sigma = infinite, we get E = 0. According to this explanation, there is no mention of electrostatic equilibrium. It seems that inside a conductor we have E = 0 in any condition, even if there is a voltage source applied to it or something similar.

Now I have two questions:

• Which is the correct description?
• It is known that a metal is able to reflect EM waves. Is this due to the fact that E = 0 at its inner points?

• Kinka-Byo, the title of your question includes the term "ideal conductor" but your first question includes a statement about conductors in general and not just ideal conductors. Are you clear on the distinction between "conductor" and "ideal conductor"? – Alfred Centauri Sep 20 at 2:57
• I think that a conductor has a finite (but not 0) conductivity, while an ideal conductor has an infinite conductivity; is that correct? So in practice is a conductor simply any material that is not a total dielectric? – Kinka-Byo Sep 20 at 5:27
• Kinka-Byo, it's important to keep in mind that an ideal conductor is un-physical (thus the adjective ideal) but is useful for simplifying calculations to get valid results (in the region of operation where the ideal conductor approximation is valid). For example, in a physical wire conductor, the mobile charge carriers are electrons, while in an ideal conductor we think only of mobile charge - the properties of the charge carrier have been abstracted away. There have been several recent questions here where the OP arrived at a contradiction by pushing the ideal conductor approximation too far. – Alfred Centauri Sep 20 at 10:34
• Also, don't forget that there are materials that are neither good conductors nor good insulators, e.g., semiconductors. – Alfred Centauri Sep 20 at 10:36

The reason the first works for all conductors is that in electrostatics we get to say 'if *any* charge moves, our assumption of electrostatics is violated', so an electric field in even an imperfect conductor would violate the assumption and so is not allowed. On the other hand, if we allow a fully dynamical system, we can have moving charge in a conductor *if* it is not perfect. This should be obvious, since we have charge moving through conductors all the time in the real world. So in this case the argument only holds because the conductivity, $\sigma$, is infinite, which is only true for perfect conductors. As for the second, the best answer I know to give is that we use boundary conditions to understand how the fields behave near surfaces, and when you carry out the calculations, you get reflection.
If I recall correctly, $\mathbf{E}=0$ is what is used to get the idealized perfect reflection in the idealized perfect conductor problem.
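One way to see quantitatively why fields are excluded from an ideal conductor (my own sketch, not from the answers above): the skin depth delta = sqrt(2/(mu0*sigma*omega)) shrinks to zero as the conductivity sigma grows, so an incident wave penetrates less and less and is reflected instead.

import math

mu0 = 4e-7 * math.pi        # vacuum permeability, H/m
omega = 2 * math.pi * 1e9   # a 1 GHz wave

# copper (~5.8e7 S/m) and two hypothetical ever-better conductors
for sigma in (5.8e7, 5.8e9, 5.8e11):
    delta = math.sqrt(2 / (mu0 * sigma * omega))
    print(f"sigma = {sigma:.1e} S/m -> skin depth = {delta:.2e} m")

For copper at 1 GHz this gives roughly 2 micrometers; in the sigma -> infinity limit the penetration depth vanishes and E = 0 inside.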
Zbl 0737.35135
Byszewski, Ludwik
Strong maximum principles for parabolic nonlinear problems with nonlocal inequalities together with arbitrary functionals. (English)
[J] J. Math. Anal. Appl. 156, No. 2, 457-470 (1991). ISSN 0022-247X

The paper looks for a new object for which an analogue of the maximum principle for solutions of parabolic equations is valid. The case of noncylindrical domains and nonlocal parabolic inequalities of the type $$u^i_t(x,t) \le f^i(x,t,u(x,t),u^i_x(x,t),u^i_{xx}(x,t);[u]) \quad \text{for a.e. } (x,t),$$ $i=1,\ldots,m$, $u=(u^1,\ldots,u^m)$, with some additional nonlocal assumptions is discussed. Here the $f^i(\cdots;[u])$ are functionals with respect to $u$. [U. Raitums (Riga)]

MSC 2000:
*35R10 Difference-partial differential equations
35B50 Maximum principles (PDE)

Keywords: maximum principle; nonlocal parabolic inequalities
Cited in: Zbl 0774.35038
rtlsdr_nbfm.lua. Hence, the spectrum of narrow band FM consists of the carrier and upper sideband and a lower sideband . For small values of m f , the values of the j coefficients are as under :. Hi all, Here is a snapshot of a GNU RADIO project I built in last Oct. Narrow Band FM Transmitter. FATCA. Advertising. Printer friendly. Menu Search "AcronymAttic.com. Joined Dec 19, 2005 Messages 466 Helped 11 Reputation 22 Reaction score 7 … The meaning of NBFM is NarrowBand Frequency Modulation . Raspberry PI as RF Signal Generator. Narrow Band Frequency Modulation acronym NarrowBand Frequency Modulation … United dictionary of abbreviations and acronyms. And that is a fact. 0000001436 00000 n NBFM is defined as Narrow Band Frequency Modulation somewhat frequently. Suggest new … Book Your Shopping Experience Now. There is also a menu of "tail" times -- additional time the squelch is held open before being closed. x�bf�be}� �� l@���q�/s���VO��yNZ��y�yH̑���/�>Ǐ�A����.�9������$H|i�Sq�+1�0�ʢ��[lY\=M�BS�'N����U�H�N�m�R�ũ��������A���r�N��,���(u,�b��H]����K���. NBFM is defined as Narrow Band Frequency Modulated very rarely. What is the abbreviation for Nippon Building Fund Management? Tweet. Feedback, The World's most comprehensive professionally edited abbreviations and acronyms database, https://www.acronymfinder.com/Narrow-Band-Frequency-Modulation-(NBFM).html, National Bovine Functional Genomics Consortium, National Bureau of Fish Genetic Resources (India), National Business Finance, Inc. (Denver, CO), Non-Banking Financial Institutions Regulator (Ukraine), National Basic Forest Inventory System (India), National Bank Financial Ltd. (Toronto, Ontario, Canada), Nippon Building Fund Management Ltd. (Tokyo, Japan), New Brunswick Federation of Naturalists (Fredericton, New Brunswick, Canada), National Biosafety Framework Project (Philippines), New Brunswick Forest Products Association, New Bedford Free Public Library (Massachusetts), North Brunswick Free Public Library (North Brunswick, NJ). NBFM. Acronyms meaning . Nippon Building Fund Management Ltd. (Tokyo, Japan) Copyright 1988-2018 AcronymFinder.com, All rights reserved. Dar Band Frekans Modülasyonu . 0000047457 00000 n 0000033303 00000 n 0000004595 00000 n 0000047114 00000 n a. I assume that your quantity beta is the maximum momentary frequency offset per the maximum baseband signal frequency fm. So this is clearly a narrowband FM (NBFM) case. A narrow band FM is the FM wave with a small bandwidth . So when the input S/N is less than about 10dB, SSB offers better performance than NBFM. abbr. 0000000896 00000 n Definition and meaning of NBFM . NBFM abbreviation stands for Nippon Building Fund Management. NASA, NBFM is defined as Narrow Band Frequency Modulated very rarely. nbfm J 0 (m f) = 1,. For NBFM the audio or data bandwidth is small, but this is acceptable for this type of communication. Formation of side bands Our experts help clients with their capital raising, risk management and advisory needs. The technology is used in telecommunications, radio broadcasting, signal processing, and computing.. The World's most comprehensive professionally edited abbreviations and acronyms database All trademarks/service marks referenced on this site are properties of their respective owners. For communications purposes less bandwidth is used. Examples: NFL, NBFM- Nerve Blocks For the Masses : At TAS, we understand that not everyone has access to ultrasound. Website IP is 23.34.204.222 Definition of NBFM . 
What is the abbreviation for Nippon Building Fund Management? The meaning of NBFM is NarrowBand Frequency Modulation NBFM for transforming a Raspberry PI into a ham radio FM transmitter. Jump to: navigation, search. Practic… At NBFM the modulation is "weak". First such workshop was held in … nbfm by IK1PLD is a small program useful to transmit audio information via a frequency modulated RF carrier. 4. cos( sin( 2f mt )) 1 sin( sin( 2f mt )) sin( 2f mt ) Thus for NBFM, the expression for FM signal will be simplified to 0000037074 00000 n You will see that there is an improvement when using NBFM at high signal levels of about 5.2dB. NBFM — NarrowBand Frequency Modulation … Acronyms von A bis Z. NBFM — abbr. 0000033730 00000 n If NBFM wave whose modulation index$\beta$is less than 1 is applied as the input of frequency multiplier, then the frequency multiplier produces an output signal, whose modulation index is ‘n’ times$\beta\$ and the frequency also ‘n’ times the frequency of WBFM wave. ∠ can represent either the vector (⁡, ⁡) or the complex number ⁡ + ⁡ =, with = −, both of which have magnitudes of 1. Source : http://www.textfiles.com/fun/glossary.txt. startxref Performance comparison of NBFM and WBFM2. There is also a menu of "tail" times -- additional time the squelch is held open before being closed. 0000012170 00000 n The modulation index mf of narrow band FM is small as compared to one radian . That beta is called "modulation index" and it exists in the approximated formula of the needed transmission bandwidth which is the … Thus, if your rig is capable of holding the deviation to +/- 2.5 KHz then you will be "perfectly legal" running FM below 29 MHz. 0000004467 00000 n All of the above information as well as any information from gathered from a budget counselor or a Benevolent committee will remain confidential except for those in the decision making process. © 1988-2021, This video will help you to understand the general expression for NBFM, power of NBFM Communication is a process of transmission of information from source to destination or from transmitter to receiver. Hi all, Here is a snapshot of a GNU RADIO project I built in last Oct. I assume that your quantity beta is the maximum momentary frequency offset per the maximum baseband signal frequency fm. Find. Click to rate […] link: http://www.textfiles.com/fun/ Author: not indicated on the source document of the above text . The following two methods demodulate FM wave. This definition appears somewhat frequently NBFM abbreviation stands for Nippon Building Fund Management. Phasor notation (also known as angle notation) is a mathematical notation used in electronics engineering and electrical engineering. NBFM . Since it is NBFM we can use the equation that the bandwidth is approximately twice the modulating time frequency f m. Problem 3 Bandwidth of a FM Signal (10 points) A 100 MHz carrier signal is frequency modulated by a sinusoidal signal of 75 kHz, such that the frequenc7 deviation is f = 50 kHz. abbr. J n (m f) = 0 for n > 1. Enjoy a free personalized session with one of our expert sales professionals. NBFM for transforming a Raspberry PI into a ham radio FM transmitter. Abbreviation to define. ,random Show : (a) Modulation frequency versus time. Top NBFM abbreviation meanings updated July 2020 A vector whose polar coordinates are magnitude and angle is written ∠. Narrow Band Frequency Modulation … You can prove it mathematically. Examples: NFL, NASA, PSP, HIPAA. 
The Raspberry PI can be turned into a performant RF signal generator without modifications, just using the proper software. NBFM: sigla dell'inglese Narrow Band Frequency Modulation, sistema di modulazione di frequenza a banda stretta impiegato per le trasmissioni di ... Questo sito contribuisce alla audience di … For the non-NBFM squelch the slider represents dB above the median of a number of past RSSI (S-meter) values. In analog frequency modulation, such as radio broadcasting, of an audio signal representing voice or music, the instantaneous frequency deviation, i.e. The frequency modulation index is the equivalent of the modulation index for AM , but obviously related to FM. So, TAS has designed it signature workshop “Nerve Blocks For The Masses”[NBFM], a live workshop on LOR and PNS guided blocks. NBFM. Actually, the "standard" for NBFM has been +/- 3 KHz deviation since the 1940s (if not before). We thank the authors of the texts that give us the opportunity to share their knowledge . Takes a single float input stream of audio samples in the range [-1,+1] and produces a single FM modulated complex baseband output. Word(s) in meaning: chat  NBFM - kısa versiyonu . Definition. Acronym Finder, All Rights Reserved. NBFM stands for Narrow Band Frequency Modulated. NBFM. Narrow Band Frequency Modulation engin. A full-service Canadian financial institution, we help our clients with their capital raising and risk management and advisory needs. Printer friendly. Q.2 Frequency deviation in FM is. Frequency modulation (FM) is the encoding of information in a carrier wave by varying the instantaneous frequency of the wave. 0000025983 00000 n What does NBFM stand for? This is known as wide-band FM (WBFM). NBFM stands for Narrow Band Frequency Modulated. s(t ) Ac cos(2f ct ) Ac sin(2f ct )sin(2f mt ) NBFM Generation Block Diagram %%EOF %PDF-1.3 %���� 0000012332 00000 n What is the di⁄erence between NBFM and wideband FM refer to the Spectral component of the two signals. Usually 200 kHz is allowed for each wide-band FM transmission. It is for demonstrate the difference between NBFM, AM, USB and LSB signal in FFT and waterfall to my students in the Amateur Radio course. Jan 09, 2021 - Generation of WBFM and NBFM signal - Communication System, GATE Electrical Engineering (EE) Notes | EduRev is made by best teachers of Electrical Engineering (EE). NBFM Sat 8am-2pm 421 South Front St. New Bern, NC 252-633-0043 Share your market photos #newbernfarmersmarket #nbfm newbernfarmersmarket.org 3. 0000025476 00000 n 1 because modulation index 5 so NBFM g What is the bandwidth of the radiated from ECET 310 at DeVry University, Addison xref 0000060034 00000 n trailer These signals are capable of supporting high quality transmissions, but occupy a large amount of bandwidth. It uses the RTL-SDR as an SDR source, plays audio with PulseAudio, and shows two real-time plots: the RF spectrum and the demodulated audio spectrum. Hence, a narrow band FM wave can be expressed mathematically as under, The (-) sign associated with the LSB represents a phase shift of 180 o.. Definition of NBFM . Examples: NFL, NASA, PSP, HIPAA. 0000000016 00000 n May 11, 2006 #9 L. leoren_tm Advanced Member level 1. What does NBFM stand for? nbfm.ca is hosted in Cambridge, Massachusetts, United States and is owned by Banque Nationale Du Canada. Analog Communication - FM Demodulators - In this chapter, let us discuss about the demodulators which demodulate the FM wave. 181 0 obj <> endobj 0000033470 00000 n NBFM. 
0000001830 00000 n Analog NBFM systems may be either conventional or trunked (eg: MPT-1327). NBFM — NarrowBand Frequency Modulation … Acronyms von A bis Z. nbfm — com. In March 2010, The U.S. Treasury Department signed into law the “Foreign Account Tax Compliance Act” (known as FATCA), to enable U.S. tax authorities to counter tax evasion by U.S. citizens (U.S. 0000004210 00000 n But note that there is a threshold value of signal input where the NBFM system deteriorates rapidly, whilst the SSB S/N drops as a linear function of the input. Abbreviation to define. Raspberry PI as RF Signal Generator. For small values of mf , the values of the j coefficients are as under : J0(mf) = 1, J1(mf) = mf/2 Jn(mf) = 0 for n > 1 Hence, a narrow band FM wave can be expressed mathematically as under, The (-) sign associated with the LSB represents a phase shift of 180o. 0000025644 00000 n In practice, since comms-quality NBFM audio is given a 6dB/octave boost (pre-emphasis) in the top half of the audio band, the NBFM that we use on VHF/UHF is effectively a mixture of the two - FM for the lower frequencies and PM for the higher frequencies. 0000036645 00000 n For NBFM, is small compared to one radian For WBFM, is large compared to one radian Narrow-Band Frequency Modulation For small values of , cos( sin( 2f mt )) 1 sin( sin( 2f mt )) sin( 2f mt ) Thus for NBFM, the expression for FM signal will be simplified to. The flow graph as well. frequency of the radio carrier is changed in line with the amplitude of the incoming audio signal Persons) who hold foreign accounts. NBFM: Narrow Band Frequency Modulation: NBFM: Nippon Building Fund Management Ltd. (Tokyo, Japan) NBFM — NarrowBand Frequency Modulation … Acronyms. The flow graph as well. nbfm.ca was created on 2012-10-12. It consists of transmitters that have deviation set to 2.5KHz maximum, instead of the 5KHz max that all … Top NBFM abbreviation meanings updated July 2020 Definition of an FM Signal • For a baseband signal, x(t): – k f is the frequency deviation constant in Where a low signal level and/or fading is expected, ssb will prove more effective. DAQ ; With analog systems, received signal strength (RSSI) is a reasonable indication of audio quality. 0000060199 00000 n Acronyms . This document is highly rated by Electrical Engineering (EE) students and has been viewed 131 times. List of 7 NBFM definitions. The speaker icon now has a third color, white, indicating the squelch is closed (hence no audio). This example is a Narrowband FM radio receiver. Change in carrier frequency to the frequency above and below the centre frequency b. That beta is called "modulation index" and it exists in the approximated formula of the needed transmission bandwidth which is the … Calculate β for these values Is the NBFM assumption valid in this case P3 Find from SYSC 3501 at Carleton University Acronym. Since this band was new, the FCC decided that radios operating there would be what you call NBFM (but reality is that all of the FM used by amateurs is NBFM). 0 Narrowband FM is widely used for two way radio communications. It is for demonstrate the difference between NBFM, AM, USB and LSB signal in FFT and waterfall to my students in the Amateur Radio course. In this video, i have explained Performance comparison of NBFM and WBFM by following outlines:1. Narrow Band Frequency Modulation acronym NarrowBand Frequency Modulation … United dictionary of abbreviations and acronyms. From GNU Radio. Definizione del termine acronimo NBFM impiegati nella fabbricazione. 
NBFM — NarrowBand Frequency Modulation … Acronyms. <<06CDC7E05C1A6D4B9BEFAECCFE89157D>]>> Practically, the narrow band FM systems have m f less than 1 . nbfm Measurement method. NBFM. 0000004339 00000 n The following text is used only for educational use and informative purpose following the fair use principles. Website IP is 23.34.204.222 0000004726 00000 n And in this case the spectrums of FM and AM are similar. Narrow Band Frequency Modulation acronym NarrowBand Frequency Modulation … "global warming" 0000036807 00000 n 181 30 What does NBFM stand for? 0000012647 00000 n 0000046948 00000 n Tweet. NBFM stands for Narrow Band Frequency Modulated. NBFM — NarrowBand Frequency Modulation … Acronyms von A bis Z. NBFM — abbr. Acronyms meaning general topics . Our experts help clients with their capital raising, risk management and advisory needs. (b) FM signal. 1) Non-Broadcast FM , 2) National Broadcast FM , 3) Narrowband FM , 4) Near Band FM J 1 (m f) = m f/2. For NBFM, the FM modulation index must be less than 0.5, although a figure of 0.2 is often used. NBFM stands for Narrow Band Frequency Modulation. For the non-NBFM squelch the slider represents dB above the median of a number of past RSSI (S-meter) values. Narrow band FM (NBFM) often uses deviation figures of around ±3 kHz. Main page The Raspberry PI can be turned into a performant RF signal generator without modifications, just using the proper software. 210 0 obj <>stream What does NBFM stand for? List of 7 NBFM definitions. Use Matlab to draw an FM signal: = 15Hz, carrier amplitude A = 2:5V, A m = 1V p, modulation frequency f m = 1Hz, modulator constant K f = 7:5Hz=Volt, t = 0 to 4 seconds. 0000001574 00000 n NBFM stands for Narrow Band Frequency Modulated. nbfm.ca was created on 2012-10-12. When value of modulation index is less than equal to 1, then FM band is called as Narrowband FM (NBFM). In conclusion, NBFM is best used on tropospheric paths where signals are quite strong, and on bands where adequate bandwidth is available to avoid interference. Menu Search "AcronymAttic.com. 0000001708 00000 n Acronyms meaning general topics index . Find. But, on the other hand, not many people do run NBFM these days on HF! However, this is not always true for digital systems, so a new measure of audio quality was developed by the TIA to measure and compare audio, whether Transmitter: The sub-system that takes the information signal and processes it prior to transmission. 0000004856 00000 n NBFM Transmit. nbfm by IK1PLD is a small program useful to transmit audio information via a frequency modulated RF carrier. It can be used to listen to NOAA weather radio in the US, amateur radio operators, analog police and emergency services, and more, on the VHF and UHF bands. Postal codes: USA: 81657, Canada: T5A 0A7. 0000004083 00000 n An NBFM is any FM system that has a small bandwidth with a modulation index that is less than or equal to 1.Frequency modulation that is identified as narrow band if the changes in the frequency of the carrier is similar to the signal frequency. PSP, HIPAA NBFM — NarrowBand Frequency Modulation … Acronyms von A bis Z. NBFM — abbr. Calculate β for these values Is the NBFM assumption valid in this case P3 Find from SYSC 3501 at Carleton University Narrow Band Frequency Modulation. nbfm.ca is hosted in Cambridge, Massachusetts, United States and is owned by Banque Nationale Du Canada. 
# NBFM (NarrowBand Frequency Modulation)

NBFM most often stands for NarrowBand Frequency Modulation; the Acronym Finder also lists Non-Broadcast FM, National Broadcast FM, and Near Band FM. In other contexts NBFM abbreviates Nippon Building Fund Management Ltd. (Tokyo, Japan); National Bank Financial Markets, a full-service Canadian financial institution that helps clients with their capital raising, risk management and advisory needs (nbfm.ca is owned by Banque Nationale du Canada); and Nerve Blocks for the Masses, a teaching series that understands that not everyone has access to ultrasound.

Frequency modulation embeds information in a carrier wave by varying the instantaneous frequency of the wave. The FM modulation index $m_f$ is the maximum frequency offset per the maximum baseband signal frequency; narrow-band FM systems have $m_f$ less than 1, that is, a peak phase deviation small compared to one radian, while a much wider channel is allowed for each wide-band FM (WBFM) transmission. NBFM often uses deviation figures of around ±3 kHz. For small values of $m_f$, the spectrum of narrow-band FM consists of the carrier plus a single upper sideband and lower sideband, one modulation frequency above and below the centre frequency: $J_1(m_f) \approx m_f/2$ and $J_n(m_f) \approx 0$ for $n \ge 2$, so in the NBFM case the spectra of FM and AM are similar. (A first-order derivation follows below.)

[Figure: (a) modulation frequency versus time; (b) instantaneous frequency of the two signals.]

Because the audio or data bandwidth is small, NBFM is not capable of supporting high-quality transmissions, but the quality is acceptable for its main use, two-way radio communication. Where a low signal level and/or fading is expected (an input S/N below roughly 10 dB), SSB will prove more effective than NBFM; at high signal levels, NBFM gives an improvement of about 5.2 dB. On the other hand, not many people run NBFM these days on HF.

Receiver notes: the speaker icon now has a third color, white, indicating the squelch is closed (hence no audio). Squelch "tail" times are the additional time the squelch is held open before being closed. For the non-NBFM squelch, the slider represents dB above the median of a snapshot of a number of past RSSI (S-meter) values; with analog systems, received signal strength (RSSI) is a reasonable indication of audio quality.

Finally, the Raspberry Pi can be turned into a performant RF signal generator, in effect a ham radio FM transmitter, without modifications, just using the proper software. NBFM by IK1PLD is a small program useful to transmit audio information via a frequency-modulated RF carrier, a GNU Radio project built last October.
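The "similar to AM" claim can be made precise with a first-order expansion (a sketch; $\omega_c$ is the carrier frequency, $\omega_m$ the modulation frequency, and $m_f \ll 1$):

$$\cos\left(\omega_c t + m_f\sin\omega_m t\right) \approx \cos\omega_c t - m_f \sin(\omega_m t)\sin(\omega_c t) = \cos\omega_c t + \frac{m_f}{2}\left[\cos(\omega_c+\omega_m)t - \cos(\omega_c-\omega_m)t\right],$$

a carrier plus one pair of sidebands of amplitude $m_f/2$, consistent with $J_1(m_f) \approx m_f/2$.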
# Find the value of $\sqrt{25/16}$.

(a) 5  (b) 5/4  (c) 3/4  (d) 2/3
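For the record, the radical splits over the quotient: $\sqrt{25/16}=\sqrt{25}/\sqrt{16}=5/4$, so (b) is correct.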
## 4.37 Presheaves of groupoids In this section we compare the notion of categories fibred in groupoids with the closely related notion of a “presheaf of groupoids”. The basic construction is explained in the following example. Example 4.37.1. This example is the analogue of Example 4.36.1, for “presheaves of groupoids” instead of “presheaves of categories”. The output will be a category fibred in groupoids instead of a fibred category. Suppose that $F : \mathcal{C}^{opp} \to \textit{Groupoids}$ is a functor to the category of groupoids, see Definition 4.29.5. For $f : V \to U$ in $\mathcal{C}$ we will suggestively write $F(f) = f^\ast$ for the functor from $F(U)$ to $F(V)$. We construct a category $\mathcal{S}_ F$ fibred in groupoids over $\mathcal{C}$ as follows. Define $\mathop{\mathrm{Ob}}\nolimits (\mathcal{S}_ F) = \{ (U, x) \mid U\in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C}), x\in \mathop{\mathrm{Ob}}\nolimits (F(U))\} .$ For $(U, x), (V, y) \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{S}_ F)$ we define \begin{align*} \mathop{\mathrm{Mor}}\nolimits _{\mathcal{S}_ F}((V, y), (U, x)) & = \{ (f, \phi ) \mid f \in \mathop{\mathrm{Mor}}\nolimits _\mathcal {C}(V, U), \phi \in \mathop{\mathrm{Mor}}\nolimits _{F(V)}(y, f^\ast x)\} \\ & = \coprod \nolimits _{f \in \mathop{\mathrm{Mor}}\nolimits _\mathcal {C}(V, U)} \mathop{\mathrm{Mor}}\nolimits _{F(V)}(y, f^\ast x) \end{align*} In order to define composition we use that $g^\ast \circ f^\ast = (f \circ g)^\ast$ for a pair of composable morphisms of $\mathcal{C}$ (by definition of a functor into a $2$-category). Namely, we define the composition of $\psi : z \to g^\ast y$ and $\phi : y \to f^\ast x$ to be $g^\ast (\phi ) \circ \psi$. The functor $p_ F : \mathcal{S}_ F \to \mathcal{C}$ is given by the rule $(U, x) \mapsto U$. The condition that $F(U)$ is a groupoid for every $U$ guarantees that $\mathcal{S}_ F$ is fibred in groupoids over $\mathcal{C}$, as we have already seen in Example 4.36.1 that $\mathcal{S}_ F$ is a fibred category, see Lemma 4.35.2. But we can also prove conditions (1), (2) of Definition 4.35.1 directly as follows: (1) Lifts of morphisms exist since given $f: V \to U$ in $\mathcal{C}$ and $(U, x)$ an object of $\mathcal{S}_ F$ over $U$, then $(f, \text{id}_{f^\ast x}): (V, {f^\ast x}) \to (U, x)$ is a lift of $f$. (2) Suppose given solid diagrams as follows $\xymatrix{ V \ar[r]^ f & U & (V, y) \ar[r]^{(f, \phi )} & (U, x) \\ W \ar@{-->}[u]^ h \ar[ru]_ g & & (W, z) \ar@{-->}[u]^{(h, \nu )} \ar[ru]_{(g, \psi )} & \\ }$ Then for the dotted arrows we have $\nu = (h^\ast \phi )^{-1} \circ \psi$ so given $h$ there exists a $\nu$ which is unique by uniqueness of inverses. Definition 4.37.2. Let $\mathcal{C}$ be a category. Suppose that $F : \mathcal{C}^{opp} \to \textit{Groupoids}$ is a functor to the $2$-category of groupoids. We will write $p_ F : \mathcal{S}_ F \to \mathcal{C}$ for the category fibred in groupoids constructed in Example 4.37.1. A split category fibred in groupoids is a category fibred in groupoids isomorphic (!) over $\mathcal{C}$ to one of these categories $\mathcal{S}_ F$. Lemma 4.37.3. Let $p : \mathcal{S} \to \mathcal{C}$ be a category fibred in groupoids. There exists a contravariant functor $F : \mathcal{C} \to \textit{Groupoids}$ such that $\mathcal{S}$ is equivalent to $\mathcal{S}_ F$ over $\mathcal{C}$. In other words, every category fibred in groupoids is equivalent to a split one. Proof. Make a choice of pullbacks (see Definition 4.33.6). 
By Lemmas 4.33.7 and 4.35.2 we get pullback functors $f^*$ for every morphism $f$ of $\mathcal{C}$. We construct a new category $\mathcal{S}'$ as follows. The objects of $\mathcal{S}'$ are pairs $(x, f)$ consisting of a morphism $f : V \to U$ of $\mathcal{C}$ and an object $x$ of $\mathcal{S}$ over $U$, i.e., $x\in \mathop{\mathrm{Ob}}\nolimits (\mathcal{S}_ U)$. The functor $p' : \mathcal{S}' \to \mathcal{C}$ will map the pair $(x, f)$ to the source of the morphism $f$, in other words $p'(x, f : V\to U) = V$. A morphism $\varphi : (x_1, f_1: V_1 \to U_1) \to (x_2, f_2 : V_2 \to U_2)$ is given by a pair $(\varphi , g)$ consisting of a morphism $g : V_1 \to V_2$ and a morphism $\varphi : f_1^\ast x_1 \to f_2^\ast x_2$ with $p(\varphi ) = g$. It is no problem to define the composition law: $(\varphi , g) \circ (\psi , h) = (\varphi \circ \psi , g\circ h)$ for any pair of composable morphisms. There is a natural functor $\mathcal{S} \to \mathcal{S}'$ which simply maps $x$ over $U$ to the pair $(x, \text{id}_ U)$. At this point we need to check that $p'$ makes $\mathcal{S}'$ into a category fibred in groupoids over $\mathcal{C}$, and we need to check that $\mathcal{S} \to \mathcal{S}'$ is an equivalence of categories over $\mathcal{C}$. We omit the verifications. Finally, we can define pullback functors on $\mathcal{S}'$ by setting $g^\ast (x, f) = (x, f \circ g)$ on objects if $g : V' \to V$ and $f : V \to U$. On morphisms $(\varphi , \text{id}_ V) : (x_1, f_1) \to (x_2, f_2)$ between morphisms in $\mathcal{S}'_ V$ we set $g^\ast (\varphi , \text{id}_ V) = (g^\ast \varphi , \text{id}_{V'})$ where we use the unique identifications $g^\ast f_ i^\ast x_ i = (f_ i \circ g)^\ast x_ i$ from Lemma 4.35.2 to think of $g^\ast \varphi$ as a morphism from $(f_1 \circ g)^\ast x_1$ to $(f_2 \circ g)^\ast x_2$. Clearly, these pullback functors $g^\ast$ have the property that $g_1^\ast \circ g_2^\ast = (g_2\circ g_1)^\ast$, in other words $\mathcal{S}'$ is split as desired. $\square$ We will see an alternative proof of this lemma in Section 4.42.
# You are considering an investment in a clothes distributor. The company needs $105,000 today and expects to repay you $120,000 in a year from now. What is the IRR of this investment opportunity? Given the riskiness of the investment opportunity, your cost of capital is 17%. What does the IRR rule say about whether you should invest?

The IRR of this investment opportunity is ____%.
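Since there is a single cash flow one period out, the IRR solves $105{,}000 = 120{,}000/(1+\mathrm{IRR})$:

$$\mathrm{IRR}=\frac{120{,}000}{105{,}000}-1\approx 0.1429 = 14.29\%.$$

Because the 14.29% IRR is below the 17% cost of capital, the IRR rule says you should not invest.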
# Rational

A rectangle has an area of $\frac{x-3}{x+2}$ square meters and a length of $(9x - 3x^2)$ meters. Algebraically determine the width of the rectangle. Thank you.

Jun 9, 2021

#1 If the area is $\frac{x-3}{x+2}$ and the length is $9x-3x^2$, then the width is the area divided by the length:

$$\frac{\frac{x-3}{x+2}}{9x-3x^2}=\frac{x-3}{\left(x+2\right)\left(9x-3x^2\right)}=\frac{x-3}{\left(x+2\right)\cdot \:3x\left(3-x\right)}=-\frac{1}{3x\left(x+2\right)}$$

JP Jun 9, 2021
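As a check, width times length recovers the area: $-\frac{1}{3x\left(x+2\right)}\cdot 3x\left(3-x\right)=\frac{x-3}{x+2}$.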
# Fourier series of $\sin{\sum_n a_n \sin{n\theta}}$

What's the Fourier series for $\sin({\sum_n a_n \sin{n\theta}})$?

There is no particular formula for the Fourier series of this kind of function, other than the definition of Fourier series. While Fourier coefficients are nicely transformed under linear maps, the relation between the Fourier series of $f$ and $\sin f$ is completely opaque. For example, take $\sin \sin \theta$. Its Fourier series begins with $$2J_1(1)\sin\theta + (14J_1(1)-8J_0(1))\sin 3\theta +(626 J_1(1)-360 J_0(1))\sin 5\theta +(73534 J_1(1)-42288J_0(1)) \sin7\theta +\dots$$ a bunch of very nice integers of course, but not something you could get directly from $a_1=1$. To say nothing of the Bessel functions appearing in the coefficients above.
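Each coefficient above is just $2J_n(1)$ rewritten via the Bessel recurrence $J_{n+1}(1)=2nJ_n(1)-J_{n-1}(1)$; the Jacobi-Anger expansion gives $\sin(\sin\theta)=2\sum_{n\text{ odd}}J_n(1)\sin n\theta$. A quick numerical sanity check, assuming SciPy (the function name is just illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def fourier_sine_coeff(n):
    # b_n = (1/pi) * integral over [-pi, pi] of sin(sin(t)) * sin(n*t) dt
    val, _ = quad(lambda t: np.sin(np.sin(t)) * np.sin(n * t), -np.pi, np.pi)
    return val / np.pi

for n in (1, 3, 5, 7):
    # both columns agree: the n-th sine coefficient equals 2*J_n(1)
    print(n, fourier_sine_coeff(n), 2 * jv(n, 1.0))
```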
# Interchanging limit measure and integral

I am reading a book and the author skips a lot of steps in a proof. I don't see whether the next result holds or not, and how to prove or disprove it. Let $$\{\mu_\epsilon\}_{\epsilon > 0}$$ be a family of positive finite measures on $$\mathbb{R}$$ and $$\mu$$ a positive finite measure on $$\mathbb{R}$$ such that for every Borel set $A$ of $$\mathbb{R}$$, $$\lim\limits_{\epsilon \to 0} \mu_\epsilon(A) =\mu(A)$$. Let $$f$$ be a bounded measurable function of $$L^\infty(\mathbb{R})$$. The author seems to use that $$\int_{\mathbb{R}} f(x) \mu(dx) = \lim\limits_{\epsilon \to 0} \int_{\mathbb{R}} f(x) \mu_{\epsilon}(dx)$$. I tried to find results like that, but I didn't find any. Thank you for your help.

• are you sure that $\{\mu_\epsilon\}$ is a 'sequence'? Because $\epsilon$ seems real-valued, and so the map $\epsilon\mapsto\mu_\epsilon$ seems a real-valued function from $(0,\infty)$ to the space of Borel measures – Masacroso Jul 4 at 22:39
• You are right. I hadn't seen things from this point of view. But how can it help us? – jvc Jul 4 at 23:16

If $$\mu$$ is an infinite measure then $$\int f d\mu$$ may not make sense for bounded measurable $$f$$. So it is necessary to assume that $$\mu$$ is a finite measure. Any bounded measurable function is a uniform limit of simple functions. [This is easy to see from the usual expression for simple functions that approximate the given function.] From the hypothesis we have $$\int f d\mu_{\epsilon} \to \int fd\mu$$ for simple functions $$f$$. So the result follows from the triangle inequality if you use the fact that $$\mu (\mathbb R) <\infty$$, which implies $$\mu_{\epsilon} (\mathbb R)$$ remains bounded as $$\epsilon \to 0$$. [Note that it is enough to prove the result for each sequence $$(\epsilon_n)$$ tending to $$0$$.]

• Thank you very much! – jvc Jul 4 at 23:37
• @Mars Plastic Thanks for correcting a silly mistake. – Kavi Rama Murthy Jul 4 at 23:39
• How do we know $\mu(\Bbb R)<\infty$? – David C. Ullrich Jul 4 at 23:57
• @DavidC.Ullrich Thanks for the comment. I read the question wrongly. However the claim is false if $\mu$ is not a finite measure. I have edited my answer. – Kavi Rama Murthy Jul 5 at 0:01
• But shouldn't the statement remain true for $\mu(\Bbb R)=\infty$ if we restrict it to only those $f\in L^\infty(\Bbb R)$ for which $\int f d\mu$ is well-defined, i.e. $f \in L^1(\mu)$ or $f\ge0$? – Mars Plastic Jul 5 at 0:11
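Spelling out the triangle-inequality step in the first answer: pick a simple $s$ with $\|f-s\|_\infty<\delta$; then

$$\left|\int f\,d\mu_\epsilon-\int f\,d\mu\right| \le \delta\,\mu_\epsilon(\mathbb R)+\left|\int s\,d\mu_\epsilon-\int s\,d\mu\right|+\delta\,\mu(\mathbb R).$$

The middle term tends to $0$ since $s$ is simple, and $\mu_\epsilon(\mathbb{R})\to\mu(\mathbb{R})$ stays bounded, so the limsup of the left side is at most a constant times $\delta$, for every $\delta>0$.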
Because the $$s_n^+$$ are non-negative $$\nu$$-simple functions we have that $$\int_{\Bbb R}s^+_n(x)\nu(dx)=\sum_{k=1}^{m_n}c_{k,n}\nu(A_{k,n})\tag3$$ for some measurable sets $$A_{k,n}$$, some $$m_n\in\Bbb N$$ and some $$c_{k,n}\ge 0$$, and so \begin{align}\int_{\Bbb R}s^+_n(x)\mu(dx)&=\sum_{k=1}^{m_n}c_{k,n}\mu(A_{k,n})\\ &=\sum_{k=1}^{m_n}c_{k,n}\lim_{\epsilon\to 0^+}\mu_\epsilon(A_{k,n})\\ &=\lim_{\epsilon\to 0^+}\sum_{k=1}^{m_n}c_{k,n}\mu_\epsilon(A_{k,n})\\ &=\lim_{\epsilon\to 0^+}\int_{\Bbb R}s^+_n(x)\mu_\epsilon(dx)\end{align}\tag4 Thus we want to show that $$\int_{\Bbb R} f^+(x)\mu(dx)=\lim_{n\to\infty}\int_{\Bbb R} s^+_n(x)\mu(dx)=\lim_{n\to\infty}\lim_{\epsilon\to 0^+}\int_{\Bbb R}s^+_n(x)\mu_\epsilon(dx)\\=\lim_{\epsilon\to 0^+}\lim_{n\to\infty}\int_{\Bbb R}s^+_n(x)\mu_\epsilon(dx)= \lim_{\epsilon\to 0^+}\int_{\Bbb R}f^+(x)\mu_\epsilon(dx)\tag5$$ That is, we want to show that we can exchange the order of the limits in $$\rm (5)$$. Now set $$I_{n,m}:=\int_{\Bbb R}s^+_n(x)\mu_{\epsilon_m}(dx)$$ for some arbitrary sequence $$\{\epsilon_m\}\downarrow 0$$. Then we have that $$\lim_{m\to\infty}\sum_{k=0}^\infty\Delta_k I_{k,m}:=\lim_{m\to\infty}\lim_{n\to\infty}\sum_{k=0}^n(I_{k+1,m}-I_{k,m})\\=\lim_{m\to\infty}\lim_{n\to\infty}I_{n,m}\le \sup_m\mu_{\epsilon_m}(\Bbb R)\|f\|_\infty<\infty\tag6$$ Now note that the double sequence $$\{I_{n,m}\}$$ is non-negative and bounded, and so it is also $$\{\Delta_k I_{k,m}\}$$ because $$I_{n,m}$$ is increasing respect to $$n$$, so applying the dominated convergence theorem on $$\rm (6)$$ we find that $$\lim_{m\to\infty}\lim_{n\to\infty} I_{n,m}=\lim_{m\to\infty}\sum_{k=0}^\infty\Delta_k I_{k,m}=\sum_{k=0}^\infty\lim_{m\to\infty}\Delta_k I_{k,m}=\lim_{n\to\infty}\lim_{m\to\infty} I_{n,m}$$ Thus it holds that $$\int_{\Bbb R} f^+(x)\mu(dx)=\lim_{\epsilon\to 0^+}\int_{\Bbb R}f^+(x)\mu_\epsilon(dx)<\infty$$ Because a similar statement holds for $$f^-$$ we are done.
# Section 8.8 HW Problems (Math Lab may help!)

1. For what values of $K$ is the following integral improper? $\int_0^K \frac{4x}{x^2-19x+90}\,dx$

2. Determine whether the improper integral diverges or converges. If it converges, find its value: $\int \frac{dy}{y^2+2y-3}$

3. **Comparison Test for Improper Integrals.** In some cases it is impossible to find the exact value of an improper integral, but it is important to determine whether the integral converges or diverges. Suppose the functions $f$ and $g$ are continuous and $0 \le g(x) \le f(x)$ on the interval $[a,\infty)$.

- If $\int_a^\infty f(x)\,dx$ converges, then $\int_a^\infty g(x)\,dx$ also converges.
- If $\int_a^\infty g(x)\,dx$ diverges, then $\int_a^\infty f(x)\,dx$ also diverges.

This is known as the Comparison Test for improper integrals.

a) Use the Comparison Test to determine if $\int_1^\infty e^{-x^2}\,dx$ converges or diverges. (Hint: use the fact that $e^{-x^2} \le e^{-x}$ for $x \ge 1$.)

b) Use the Comparison Test to determine if $\int_1^\infty \frac{1}{x^5+1}\,dx$ converges or diverges. (Hint: use the fact that $\frac{1}{x^5+1} \le \frac{1}{x^5}$ for $x \ge 1$.)

c) Use the Comparison Test to determine if $\int_\pi^\infty \frac{\sin^2 x}{x}\,dx$ converges or diverges.

d) Use the Comparison Test to determine if $\int_\pi^\infty \frac{1+\sin^2 x}{x}\,dx$ converges or diverges.

(For more practice problems, see the textbook.)
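For instance, part (a) goes through directly: $0 \le e^{-x^2} \le e^{-x}$ for $x \ge 1$ and $\int_1^\infty e^{-x}\,dx = e^{-1}$ converges, so by the Comparison Test $\int_1^\infty e^{-x^2}\,dx$ converges as well.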
# Cross Validation for Ridge Regression

I'm using ridge regression for calculating optimal weights of a set of scores. These scores are correlated, so ridge regression is used to penalize large values of the weights. The purpose of ridge regression is thus to find the beta that minimizes the following:

$$\sum_i{(y_i - x^T_i\beta)^2} + \lambda \sum_j{\beta^2_j}$$

My question is: How do I choose an optimal value for lambda, in the sense of cross validation? I'm having trouble grasping this conceptually. In classification, cross validation is straightforward: split the data into k folds, train on k-1 folds, predict on the last fold and average the prediction error over all folds. How does this work for regression? I can measure the sum of squared distances over each fold, but this is prone to noisy outliers. The reason for using ridge regression instead of standard regression in the first place was not to minimize this. I looked into the following article but I still don't understand the general approach of using cross validation for choosing an optimal ridge regression model.
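Conceptually the recipe is the same as in classification: hold out a fold, fit the ridge estimator for a candidate $\lambda$ on the rest, score the held-out predictions, and average over folds. A minimal NumPy sketch (all names and the toy data are illustrative, not from the article):

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge estimate: (X'X + lam*I)^(-1) X'y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def cv_mse(X, y, lam, k=5, seed=0):
    # k-fold cross-validated mean squared prediction error for one lambda
    idx = np.random.default_rng(seed).permutation(len(y))
    errors = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta = ridge_fit(X[train], y[train], lam)
        errors.append(np.mean((y[fold] - X[fold] @ beta) ** 2))
    return np.mean(errors)

# Toy data; in practice X and y are your scores and target
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.5, 0.0, -0.5, 2.0]) + rng.normal(size=100)

# Pick the lambda with the smallest cross-validated error
lambdas = np.logspace(-3, 3, 25)
best = min(lambdas, key=lambda lam: cv_mse(X, y, lam))
print(best)
```

Note that the penalty only changes how the coefficients are fit; the held-out score is still plain prediction error, which is what you ultimately care about. If outliers are a worry, the same loop works with a robust score such as the median absolute error in place of the mean. (scikit-learn packages this whole procedure as RidgeCV.)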
$\dfrac{d^2x}{dt^2}+\beta\dfrac{dx}{dt} + \dfrac{k}{m}x = 0$ The constant $$\beta$$ determines the contribution of the acceleration due to the drag force on the object. It is beyond the scope of this work to discuss how such differential equations are solved, but the solution will be given, and the reader is encouraged to plug the solution back into the differential equation to confirm that it works (actually, guessing-and-confirming is pretty much how such differential equations are solved!): $x\left(t\right) = Ae^{-\frac{1}{2}\beta t}\sin\left(\omega t + \phi\right),\;\;\;\;\;\; where:\;\; \omega \equiv \sqrt{\dfrac{k}{m} - \frac{1}{4}\beta^2}$
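Since the text invites the reader to plug the solution back in, here is a quick symbolic confirmation (a SymPy sketch):

```python
import sympy as sp

t = sp.symbols('t')
A, beta, k, m, phi = sp.symbols('A beta k m phi', positive=True)

omega = sp.sqrt(k/m - sp.Rational(1, 4) * beta**2)
x = A * sp.exp(-beta * t / 2) * sp.sin(omega * t + phi)

# Substitute x(t) into x'' + beta*x' + (k/m)*x; it should vanish identically
residual = sp.diff(x, t, 2) + beta * sp.diff(x, t) + (k / m) * x
print(sp.simplify(residual))  # 0
```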
# How do you factor a perfect square trinomial 4a^2 − 10a − 25?

Jun 7, 2015

The trinomial $4a^2-10a-25$ is not a perfect square trinomial. You will have to factor it with the quadratic formula. Perfect square trinomials are the result of squaring binomials:

${\left(a + b\right)}^{2} = {a}^{2} + 2 a b + {b}^{2}$

${\left(a - b\right)}^{2} = {a}^{2} - 2 a b + {b}^{2}$

The last number in a perfect square trinomial cannot be negative because it is a squared number. For example:

${\left(3 x - 5\right)}^{2} = \left(3 x - 5\right) \left(3 x - 5\right)$

FOIL $\left(3 x - 5\right) \left(3 x - 5\right)$:

$9 {x}^{2} - 15 x - 15 x + 25$ = $9 {x}^{2} - 30 x + 25$
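To finish the thought, applying the quadratic formula to $4a^2-10a-25=0$:

$$a = \frac{10 \pm \sqrt{(-10)^2-4\cdot 4\cdot(-25)}}{2\cdot 4} = \frac{10 \pm \sqrt{500}}{8} = \frac{5 \pm 5\sqrt{5}}{4},$$

so $4a^2-10a-25 = 4\left(a-\frac{5+5\sqrt{5}}{4}\right)\left(a-\frac{5-5\sqrt{5}}{4}\right)$.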
# OpenGL State Changes Optimization?

## Recommended Posts

Greetings everyone, I'm into optimizing state changes, but first… how OpenGL handles state changes remains ambiguous. Regarding this thread: http://www.gamedev.net/community/forums/topic.asp?topic_id=416620 EDIT: cant make it work :)

Operations like glBind* seem to have the highest CPU/GPU cost. However, let's say we issue the exact same state change twice in a row:

glEnable*
glEnable*

Will the second glEnable* cost less? Taking these 4 examples into account (STATE stands for whichever capability is being toggled; `enabled` is a cached last-known value):

void set1(bool needed) {
    if (needed) glEnable(STATE);
    else glDisable(STATE);
    Draw();
}

void set2(bool needed) {
    if (needed) glEnable(STATE);
    Draw();
    if (needed) glDisable(STATE); // undo only if we changed it
}

void set3(bool needed) {
    if (needed && !enabled) { glEnable(STATE); enabled = true; }
    Draw();
    if (needed && enabled) { glDisable(STATE); enabled = false; }
}

void set4(bool needed) {
    if (needed) {
        if (!enabled) { glEnable(STATE); enabled = true; }
    } else if (enabled) {
        glDisable(STATE);
        enabled = false;
    }
    Draw();
}

Which one would be the fastest?

[Edited by - golgoth13 on August 28, 2010 12:18:45 PM]

#### Share this post

I'm also very interested in whether it's better to store each state in your program to check if a state change is needed, or just try to minimize state changes by batching and such.

#### Share this post

I'm also interested in this. I think set 4 is the winner. Looking at the sets, this is what I see:

set1: Every call will either enable or disable; so that's a cost of '1'.

set2: If a state isn't needed, nothing is called. If a state is needed, it's enabled, used and then disabled. So it's either a cost of '0' or a cost of '2'. (Assuming enable and disable both take the same amount of time. I'm not sure about this, but I would guess Enable does more work than disabling.) If half your draw calls need the state, the average cost is still '1'.

set3: This one is probably the most confusing. You turn it on if you need it (and it's not already on). Then if you turned it on, you turn it back off. So, again the cost is either 0 or 2.

set4: If you need it enabled, and it's not already on, you turn it on. If you don't need it enabled, and it's not already off, you turn it off. So, if you call set4 and it's already in the state you need, the cost is 0. If it's not in the state you need, the cost is 1. If your draw calls are evenly distributed, then your average cost will be .5.

This, also, assumes that the cost of calling a glEnable function is much higher than the if statement you are using to do the check. I believe this is true since the if statement is just a memory access and a compare, whereas the function call involves the stack, plus you have no idea what is happening once you're inside GL.

#### Share this post

It's quite possible that the GL driver will also do these "not already Enabled" tests internally -- if so, then your tests aren't going to be much of an optimization (you'd have to test this though, and it could change from driver to driver... :() i.e.
this could be happening:

void set3() {
    if (state needs enabling)
        if (not already enabled)
            glEnable(state);
}

void glEnable(state) {
    // hypothetical driver internals
    if (state not already enabled)
        internal_addStateEnableToCommandBuffer(state);
}

You can implement these 4 options, call them 100,000 times, and surround each test with some high-precision timing code to see which one takes the most CPU time. You can also use a tool like 'gDEBugger' to do more in-depth analysis (but it's expensive - I don't have a copy :()

But... without having any profiling data on hand, set4 looks the best ;)

Quote: Operations like glBind* seem to have the highest CPU/GPU cost. However, let's say we issue the same state change twice in a row ... will the second glEnable* cost less?

The driver seems to collect all the state changes at the beginning of each draw call, and submit them to the GPU all at once. So, if you write "glEnable(x);glDisable(x);glEnable(x);", it should have a similar (GPU-side) cost to just "glEnable(x);" - obviously a little bit more CPU time is going to be wasted with the first though.

#### Share this post

I agree with set4 also! Unless:

Quote: It's quite possible that the GL driver will also do these "not already Enabled" tests internally

Then set1 should win fair and square… and make our life much easier.

I chose the glEnable/glDisable example for simplicity's sake. Furthermore, since only one program can be in use at a time, perhaps the glUseProgram state could be optimized with an ID check, like so (in the case it's not tested internally, of course):

void bind1(bool use) {
    if (use) {
        if (id != currentId) {
            glUseProgram(id);
            currentId = id;
        }
    } else if (currentId != 0) {
        glUseProgram(0);
        currentId = 0;
    }
    Draw();
}

I'm also guessing:

    if (id != currentId)

will be faster than:

    glGet(GL_CURRENT_PROGRAM) // I'm not even sure if this makes sense in this case though.

Hopefully, an OpenGL Jedi master could clear this up.

Quote: You can implement these 4 options, call them 100,000 times, and surround each test with some high-precision timing code to see which one takes the most CPU time.

It would be interesting, as mentioned in the previous thread, to have a table with approximate costs for each glFunction. In fact, I'm hoping this is old news already; I'll be really surprised if it is not yet available. By the way, are there other GL profilers you guys recommend?

#### Share this post

Quote: Original post by golgoth13 *** Source Snippet Removed *** i m also guessing: *** Source Snippet Removed ***

Yeah, I'd prefer to use my own 'last known value' rather than retrieve a value from the driver.

Quote: Would be interesting ... to have a table with approximate costs for each glFunction.

These numbers would change from driver to driver, card to card though. So any numbers published this year will be useless next year. Here's an interesting quote from Tom F:

Quote: 1. Typically, a graphics-card driver will try to take the entire state of the rendering pipeline and optimise it like crazy in a sort of "compilation" step. In the same way that changing a single line of C can produce radically different code, you might think you're "just" changing the AlphaTestEnable flag, but actually that changes a huge chunk of the pipeline. Oh but sir, it is only a wafer-thin renderstate...
In practice, it's extremely hard to predict anything about the relative costs of various changes beyond extremely broad generalities - and even those change fairly substantially from generation to generation. 2. Because of this, the number of state changes you make between rendering calls is not all that relevant any more. This used to be true in the DX7 and DX8 eras, but it's far less so in these days of DX9, and it will be basically irrelevant on DX10. The card treats each unique set of states as an indivisible unit, and will often upload the entire pipeline state. There are very few incremental state changes any more - the main exceptions are rendertargets and some odd non-obvious ones like Z-compare modes.

Quote: By the way, are there other GL profilers you guys recommend?

Check out gDEBugger and GPU PerfStudio.
### Subject, Predicate, Predicator

Subject
- tells us who performs the action denoted by the verb
- tells us who or what the sentence is about

Predicate
- specifies what the subject is engaged in doing
- stative / dynamic
- nonreferential *it* / existential *there*: meaningless (dummy) subjects

Subject (distributional tests) - identified by referring to syntactic positions and environments in sentences:
1. predominantly consists of a group of words whose most important element denotes a person, an animal, a group of people, an institution, or a thing (an NP)
2. the first NP
3. obligatory
4. subject-verb agreement
5. yes/no question: position changes
6. tag question: referred back to

Sentence: Subject + Predicate, where the Predicate is V + NP + ...

Predicator
- specifies the bare-bone content of the sentence in which it occurs (the main action or process denoted by the verb)
## anonymous, 3 years ago: The base of a cone has a radius of 6 cm. The slant height of the cone is 15 cm. What is the height of the cone?

1. anonymous: @jim_thompson5910
2. GoldPhenoix: You have to use the pythagorean theorem.
3. anonymous: 225 = 36 + $x^2$, so $x^2$ = 189. 13.75?
4. anonymous: [drawing of the cone's right triangle omitted]
5. GoldPhenoix: Or you can use the formula that jishan gave you.
6. anonymous: is my answer right
7. anonymous: yeah bro
8. anonymous: h = height of cone, r = radius & l = slant height
9. anonymous: thank you
10. anonymous: always welcome bros.
11. jim_thompson5910: Let r = radius, s = slant height, h = height. $\large r^2 + h^2 = s^2$ $\large 6^2 + h^2 = 15^2$ $\large 36 + h^2 = 225$ $\large h^2 = 225-36$ $\large h^2 = 189$ $\large h = \sqrt{189}$ $\large h = \sqrt{9*21}$ $\large h = \sqrt{9}*\sqrt{21}$ $\large h = 3\sqrt{21}$ $\large h \approx 13.747727$ $\large h \approx 13.75$
13. jim_thompson5910: you're very helpful and mature...
# An algebra problem by Wildan Bagus Wicaksono

Algebra Level 2

If $\sqrt { 2017{ x }^{ 2 }+2018x+56 } +\sqrt { 2017{ x }^{ 2 }+2018x-56 } =112$, determine the value of $\sqrt { 2017{ x }^{ 2 }+2018x+56 } -\sqrt { 2017{ x }^{ 2 }+2018x-56 }$
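One way to see the answer without solving for $x$: write $A$ and $B$ for the two radicals. Then $A^2-B^2=(2017x^2+2018x+56)-(2017x^2+2018x-56)=112$, and since $A+B=112$,

$$A-B=\frac{A^2-B^2}{A+B}=\frac{112}{112}=1.$$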
# Review of "The 7 Deadly Sins of Psychology" by Chris Chambers

## June 22, 2017

The "Seven Sins" is concerned with the validity of psychological research. Can we at all, or to what degree, be certain about the conclusions reached in psychological research? More recently, replication efforts have cast doubt on our confidence in psychological research (1). In a similar vein, a recent paper states that in many research areas, researchers mostly report "successes", in the sense that they report that their studies confirm their hypotheses, with Psychology leading in the proportion of supported hypotheses (2). Too good to be true? In the light of all this unbehagen, Chambers' book addresses some of the (possible) roots of the problem of (un)reliability of psychological science. Precisely, Chambers mentions seven "sins" that the psychological research community appears to be guilty of: confirmation bias, data tuning ("hidden flexibility"), disregard of direct replications (and related problems), failure to share data ("data hoarding"), fraud, lack of open access publishing, and fixation on impact factors.

Chambers is not alone in speaking out about some dirty little (or not so little) secrets or tricks of the trade. The discomfort with the status quo is gaining momentum (3, 4, 5, 6); see also the work of psychologists such as J. Wicherts, F. Schönbrodt, D. Bishop, J. Simmons, S. Schwarzkopf, R. Morey, or B. Nosek, to name just a few. For example, recently, the German psychological association (DGPs) opened up (more) towards open data (7). However, a substantial number of prominent psychologists oppose the more open approach towards higher validity and legitimateness (8). Thus, Chambers' book hits a nerve with many psychologists. True, a lot is at stake (9, 10, 11), and a train wreck may have appeared.

Chambers' book knits together the most important aspects of the replicability (or reproducibility) issue; the first "umbrella book" on that topic, as far as I know. Personally, I feel that one point only would merit some more scrutiny: the unchallenged assumption that psychological constructs are metric (12, 13, 14). Measurement builds the very rock of any empirical science. Without precise measurement, it appears unlikely that any theory will advance. Still, psychologists turn a deaf ear to this issue, sadly. Just assuming that my sum score does possess metric level is not enough (15).

The book is well written, pleasurable to read, suitable for a number of couch evenings (as in my case). Although methodologically sound, as far as I can say, no special statistical knowledge is needed to follow and benefit from the whole exposition.

The last chapter is devoted to solutions ("remedies"); arguably, this is the most important chapter in the book. Again, Chambers manages to pull together the most important trends, concrete ideas and more general, far-reaching avenues. The most important measures are to him a) preregistration of studies, b) judging journals by their replication quota and strengthening the whole replication effort as such, c) open science in general (see Openness Initiative, and TOP guidelines) and d) novel ways of conceiving the job of journals. Well, maybe he is not so much focusing on the last part, but I find that last point quite sensible. One could argue that publishers such as Elsevier managed to suck way too much money out of the system, money that ultimately is paid by the tax payers, and by the research community.
Basically, scientific journals do two things: hosting manuscripts and steering peer review. Remember that journals do not do the peer review; it is provided for free by researchers. As hosting is very cheap nowadays, and peer review comes about without much input from the publishers, why not come up with new, more cost-efficient, and more reliable ways of publishing? One may think that money is not of primary concern for science; truth is. However, science, like most societal endeavors, is based entirely on the trust and confidence of the wider public. Wasting that trust destroys the funding base. Hence, science cannot afford to waste money, not at all. Among the ideas for updating publishing and journal infrastructure is to use open archives such as arXiv or osf.io as repositories for manuscripts. Peer review can be conducted on these non-paywalled manuscripts (some type of post-publication peer review), for instance organized by universities (5). "Overlay journals" may pick and choose papers from these repositories, organize peer review, and make sure the peer review and the resulting paper are properly indexed (Google Scholar etc.).

To sum up, the book taps into what is perhaps the most pressing concern in psychological research right now. It succeeds in pulling together the wires that together provide the fabric of the unbehagen in the zeitgeist of contemporary academic psychology. I feel that a lot is at stake. If we as a community fail in securing the legitimateness of academic psychology, the discipline may end up in a way similar to phrenology: once hyped, but then seen by some as pseudoscience, a view that gained popularity and is now commonplace. Let's work together for a reliable science. Chambers' book helps to contribute in that regard.

1 Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. http://doi.org/10.1126/science.aac4716

2 Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics, 90(3), 891–904. http://doi.org/10.1007/s11192-011-0494-7

3 Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE. http://doi.org/10.1371/journal.pone.0005738

4 Nuzzo, R. (2015). How scientists fool themselves – and how they can stop. Nature, 526(7572), 182–185. http://doi.org/10.1038/526182a

5 Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: unintended consequences of journal rank. Frontiers in Human Neuroscience, 7. http://doi.org/10.3389/fnhum.2013.00291

6 Morey, R. D., Chambers, C. D., Etchells, P. J., Harris, C. R., Hoekstra, R., Lakens, D., … Zwaan, R. A. (2016). The Peer Reviewers' Openness Initiative: incentivizing open research practices through peer review. Royal Society Open Science, 3(1), 150547. http://doi.org/10.1098/rsos.150547

7 Schönbrodt, F., Gollwitzer, M., & Abele-Brehm, A. (2017). Der Umgang mit Forschungsdaten im Fach Psychologie: Konkretisierung der DFG-Leitlinien. Psychologische Rundschau, 68(1), 20–25. http://doi.org/10.1026/0033-3042/a000341

8 Longo, D. L., & Drazen, J. M. (2016). Data Sharing. New England Journal of Medicine, 374(3), 276–277. http://doi.org/10.1056/NEJMe1516564

9 LeBel, E. P. (2017). Even With Nuance, Social Psychology Faces its Most Major Crisis in History. Retrieved from https://proveyourselfwrong.wordpress.com/2017/05/26/even-with-nuance-social-psychology-faces-its-most-major-crisis-in-history/.
10 Motyl, M., Demos, A. P., Carsel, T. S., Hanson, B. E., Melton, Z. J., Mueller, A. B., … Wong, K. M. (2017). The state of social and personality science: Rotten to the core, not so bad, getting better, or getting worse? Journal of Personality and Social Psychology, 113(1), 34.

11 Ledgerwood, A. (n.d.). Everything is F*cking Nuanced: The Syllabus (Blog Post). Retrieved from http://incurablynuanced.blogspot.de/2017/04/everything-is-fcking-nuanced-syllabus.html

12 Michell, J. (2005). The logic of measurement: A realist overview. Measurement, 38(4), 285–294. http://doi.org/10.1016/j.measurement.2005.09.004

13 Michell, J. (1997). Quantitative science and the definition of measurement in psychology. British Journal of Psychology, 88(3), 355–383.

14 Heene, M. (2013). Additive conjoint measurement and the resistance toward falsifiability in psychology. Frontiers in Psychology, 4.

15 Sauer, S. (2016). Why metric scale level cannot be taken for granted (Blog Post). http://doi.org/10.5281/zenodo.571356
# Fundamental Theorem of Calculus

http://img527.imageshack.us/img527/8089/fr2rl4.gif [Broken]

I know part (a) is the fundamental theorem of calculus, but I am not quite sure how to manipulate the integral to find part (i) or part (ii). Part (b) is again the fundamental theorem of calculus, but I am having a hard time solving for the antiderivative.

matt grime, Homework Helper: What is the statement of the fundamental theorem?

Fundamental Theorem of Calculus: Let $f$ be a function that is continuous on $[a,b]$.

Part 1: Let $F$ be an indefinite integral or antiderivative of $f$. Then $\int_a^b f(x)\,dx = F(b) - F(a)$.

Part 2: $A(x) = \int_a^x f(t)\,dt$ is an indefinite integral or antiderivative of $f$, or $A'(x) = f(x)$.
# The serve command

The serve command is useful when you want to preview your book. It also does hot reloading of the webpage whenever a file changes. It achieves this by serving the book's content over localhost:3000 (unless otherwise configured, see below) and running a websocket server on localhost:3001 which triggers the reloads. This is preferred by many for writing books with mdbook because it allows you to see the result of your work instantly after every file change.

#### Specify a directory

Like watch, serve can take a directory as argument to use instead of the current working directory.

mdbook serve path/to/book

#### Server options

serve has four options: the http port, the websocket port, the interface to serve on, and the public address of the server so that the browser may reach the websocket server.

For example: suppose you had an nginx server for SSL termination which has a public address of 192.168.1.100 on port 80 and proxied that to 127.0.0.1 on port 8000. To run behind the nginx proxy, do:

mdbook serve path/to/book -p 8000 -i 127.0.0.1 -a 192.168.1.100

If you want live reloading for this, you would need to proxy the websocket calls through nginx as well, from 192.168.1.100:<WS_PORT> to 127.0.0.1:<WS_PORT>. The -w flag allows the websocket port to be configured.

#### --open

When you use the --open (-o) option, mdbook will open the book in your default web browser after starting the server.

#### --dest-dir

The --dest-dir (-d) option allows you to change the output directory for your book.

note: the serve command has not gotten a lot of testing yet; there could be some rough edges. If you discover a problem, please report it on Github
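Pulling those options together, a proxied invocation with live reload on a custom websocket port might look like this (the websocket port value is illustrative):

mdbook serve path/to/book -p 8000 -i 127.0.0.1 -a 192.168.1.100 -w 8001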
# How to think of matrices as observables?

I'm reading Nielsen and Chuang. In one of the early chapters, they introduce some matrices such as $$X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.$$ They interpret this as a gate that sort of flips states, so that $a|0 \rangle + b|1 \rangle$ gets sent to $b|0 \rangle + a|1 \rangle$. In a later chapter, the Heisenberg Uncertainty principle is proved, and as an illustration of it, they consider observables $X$ and $Y$ when measured for the quantum state $|0 \rangle$... the uncertainty principle tells us that $\Delta(X) \Delta(Y) \ge 1$. I'm confused about a few things here:

1) What does it mean to consider $X$ and $Y$ as observables? Are they not operations that change the current state to a new one?

2) Why does applying $X$ to $|0 \rangle$ result in a non-zero standard deviation if $X|0\rangle = |1\rangle$? How is there any variation here?

• I am a bit lost here, have you never written an operator in basis vector matrix form. quantummechanics.ucsd.edu/ph130a/130_notes/node246.html – user108787 Oct 28 '16 at 1:15
• This is a very common confusion. There's a distinction between operators as actual operators that act on states, or as physical quantities (observables) that are measured. In particular, measuring $X$ on $|\psi \rangle$ has absolutely nothing to do with the state $X |\psi \rangle$. – knzhou Oct 28 '16 at 1:15
• You should go back to where Nielsen and Chuang introduce the postulates of QM and read it very carefully! That'll tell you what measurement and observables are. We get a variant of this question about once a day, though. – knzhou Oct 28 '16 at 1:17
• @CountTo10: also good: eng.fsu.edu/~dommelen/quantum/style_a/contents.html – Gert Oct 28 '16 at 1:22
• @theQman Be careful not to mix up measurements defined by a single Hermitian observable with measurements defined by a POVM. They are ultimately describing the same thing, but in two different languages. – Craig Gidney Oct 28 '16 at 22:20

I also had the experience of "Why would you define measurements that way?" when learning about Hermitian observables. At first, I just avoided them. I'd translate observables into a unitary operation followed by a measurement in the computational basis, and think about it that way. For example, for me the Z observable was "just measure" while the X observable was "apply Hadamard, then measure". And the $X \otimes X$ observable was "hit both involved qubits with a Hadamard, CNOT them onto some third qubit, measure that qubit, then undo the Hadamards".

Eventually it started to bother me that my re-description of the measurements as a circuit was often longer. I mean, just look at how many words it took me to describe what I did for $X \otimes X$! And also I started needing the observable's matrix to answer questions like "if I measure A, will it mess up measuring B?". Then I started noticing how useful they were as a thinking tool, and idioms like "Z-value" and "X-parity" started sneaking into my writing... the observables got to me.

1) What does it mean to consider X and Y as observables? Are they not operations that change the current state to a new one?

Consider this: if you reverse the order of a controlled-Z, you still have the same operation. But if you swap the control and the gate in a CNOT, you don't get the same operation: So there is a sense in which the Z gate is "the same" as an ON-control, and the X gate doesn't share this property.
And it comes down to the fact that, when you break down what Z does, it does nothing to OFF states but multiplies the amplitude of ON states by -1. You can define an alternative control that is "the same" as the X gate. In which case you'll find that you care about the distinction between $|+\rangle = |0\rangle+|1\rangle$ and $|-\rangle = |0\rangle-|1\rangle$, instead of the distinction between ON and OFF. And it just so happens that if you break down how the X gate works into its eigenvalues and eigenvectors, it leaves $|+\rangle$ alone but multiplies the amplitude of $|-\rangle$ by -1. (You can play with X-axis and Y-axis controls in Quirk.)

When you generalize this association between "what you leave alone" and "what you affect" to apply to any operation, you end up talking about the eigenvalues and eigenspaces of those operations. And this leads pretty quickly into caring about which eigenspace of an operation a state lies in, and to measuring that information, and then to just thinking of the operation as a specification for the measurement of its eigenspaces.

Physicists happen to care about the logarithm of a unitary operation more than the operation itself, because you can plug it into differential equations. And the logarithm form has other nice properties. So we tend to talk about observables in terms of the logarithm of a unitary matrix, i.e. a Hermitian matrix, instead of directly in terms of the unitary operation.

2) Why does applying X to |0⟩ result in a non-zero standard deviation if X|0⟩=|1⟩? How is there any variation here?

Because you're mixing up the operation X with the observable X. The operation X toggles between ON and OFF. If you take its eigendecomposition, you find it leaves $|+\rangle$ alone while negating $|-\rangle$.

The observable X is a description of a measurement that distinguishes between the eigenspaces of the operation X. That is to say, it measures whether the system is in the $|+\rangle$ state or in the $|-\rangle$ state. $|0\rangle$ is neither $|+\rangle$ nor $|-\rangle$, it's a superposition of both, so when you measure its X-value you get variance. States with no X-value variance don't get toggled by X, they get phased.
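To make the answer to question 2 quantitative: the variance comes from the measurement statistics, not from applying the gate. For the state $|0\rangle = \frac{1}{\sqrt 2}(|+\rangle + |-\rangle)$, measuring the observable $X$ yields $+1$ or $-1$ with probability $\tfrac12$ each, so

$$\langle X\rangle = \langle 0|X|0\rangle = 0, \qquad \langle X^2\rangle = \langle 0|I|0\rangle = 1, \qquad \Delta X = \sqrt{\langle X^2\rangle-\langle X\rangle^2} = 1,$$

which has nothing to do with the state $X|0\rangle = |1\rangle$.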
# How to make a TextBox's text remain the same after respawn?

Asked by 11 months ago

Every time a player respawns, the TextBox's text that shows how much money they have returns to 0 (the default text I set in the TextBox properties) until it updates. Is there any way to make it stay the same? So far I've tried:

player.CharacterAdded:Connect(function(character)
    character:WaitForChild("Humanoid").Died:Connect(function()
        -- make sure the TextBox exists, then ask the server for a refresh
        script.Parent.MainGui.CashBox:WaitForChild("Cash")
        upreq:FireServer(player)
    end)
end)

-- Editor's note (an assumption about the setup, worth checking): if MainGui is
-- a ScreenGui, its ResetOnSpawn property defaults to true, which recreates the
-- GUI (and its default text) on every respawn; setting it to false is another
-- avenue worth trying.

Cash is the TextBox, and upreq is an event tied to a script inside ServerScriptService:

local function UpdateCash(player, amount)
    UpdEve:FireClient(player, amount)
end

local function UpdateRqst(player)
    UpdateCash(player, PlayerSesStats[player.UserId].Money)
end

updrq.OnServerEvent:Connect(UpdateRqst)
# Integral of (14x+10+5x^2)/((x+2)(x+1)^2)

## $\int\frac{5x^2+14x+10}{\left(x+2\right)\left(x+1\right)^2}dx$

$-\frac{1}{1+x}+2\ln\left|2+x\right|+3\ln\left|1+x\right|+C_0$

## Step by step solution

Problem

$\int\frac{5x^2+14x+10}{\left(x+2\right)\left(x+1\right)^2}dx$

1 Using partial fraction decomposition, the fraction $\frac{5x^2+14x+10}{\left(x+2\right)\left(x+1\right)^2}$ can be rewritten as

$\frac{5x^2+14x+10}{\left(x+2\right)\left(x+1\right)^2}=\frac{A}{\left(1+x\right)^2}+\frac{B}{2+x}+\frac{C}{1+x}$

2 Now we need to find the values of the unknown coefficients. Multiply both sides of the equation by $\left(x+2\right)\left(x+1\right)^2$ and simplify:

$5x^2+14x+10=A\left(2+x\right)+B\left(1+x\right)^2+C\left(1+x\right)\left(2+x\right)$

3 Setting $x=-1$ gives $5-14+10=1=A$, so $A=1$

4 Setting $x=-2$ gives $20-28+10=2=B$, so $B=2$

5 Comparing the coefficients of $x^2$ on both sides gives $5=B+C$, so $C=3$

6 The decomposed integral equivalent is

$\int\left(\frac{1}{\left(1+x\right)^2}+\frac{2}{2+x}+\frac{3}{1+x}\right)dx$

7 The integral of a sum of two or more functions is equal to the sum of their integrals

$\int\frac{1}{\left(1+x\right)^2}dx+\int\frac{2}{2+x}dx+\int\frac{3}{1+x}dx$

8 Apply the formulas $\int\frac{n}{b+x}dx=n\ln\left|b+x\right|$ and $\int\frac{n}{\left(a+x\right)^2}dx=\frac{-n}{a+x}$:

$\frac{-1}{1+x}+2\ln\left|2+x\right|+3\ln\left|1+x\right|$

9 Add the constant of integration

$-\frac{1}{1+x}+2\ln\left|2+x\right|+3\ln\left|1+x\right|+C_0$
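The result can be cross-checked symbolically; a quick SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = (5*x**2 + 14*x + 10) / ((x + 2) * (x + 1)**2)

print(sp.apart(f))         # 2/(x + 2) + 3/(x + 1) + (x + 1)**(-2), up to term order
print(sp.integrate(f, x))  # -1/(x + 1) + 2*log(x + 2) + 3*log(x + 1)
```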
## Real Analysis Exchange

### Hausdorff measures of different dimensions are isomorphic under the continuum hypothesis.

Márton Elekes

#### Abstract

We show that the Continuum Hypothesis implies that for every $0 < d_1 \leq d_2 < n$ the measure spaces $(\mathbb{R}^n,\mathcal{M}_{\mathcal{H}^{d_1}},\mathcal {H}^{d_1})$ and $(\mathbb{R}^n,\mathcal{M}_{\mathcal{H}^{d_2}},\mathcal{H}^{d_2})$ are isomorphic, where $\mathcal{H}^d$ is $d$-dimensional Hausdorff measure and $\mathcal{M}_{\mathcal{H}^{d}}$ is the $\sigma$-algebra of measurable sets with respect to $\mathcal{H}^d$. This is motivated by the well-known question (circulated by D. Preiss) whether such an isomorphism exists if we replace measurable sets by Borel sets. We also investigate the related question whether every continuous function (or the typical continuous function) is Hölder continuous (or is of bounded variation) on a set of positive Hausdorff dimension.

#### Article information

Source
Real Anal. Exchange, Volume 30, Number 2 (2004), 605-616.

Dates
First available in Project Euclid: 15 October 2005

https://projecteuclid.org/euclid.rae/1129416465

Mathematical Reviews number (MathSciNet)
MR2177422

Zentralblatt MATH identifier
1106.28002

#### Citation

Elekes, Márton. Hausdorff measures of different dimensions are isomorphic under the continuum hypothesis. Real Anal. Exchange 30 (2004), no. 2, 605-616. https://projecteuclid.org/euclid.rae/1129416465

#### References

• A. Bruckner and J. Haussermann, Strong porosity features of typical continuous functions, Acta Math. Hungar., 45, no. 1-2 (1985), 7–13.
• T. Bartoszyński and H. Judah, Set Theory: On the Structure of the Real Line, A. K. Peters, Wellesley, Massachusetts, 1995.
• A. M. Bruckner, Differentiation of Real Functions, Lecture Notes in Mathematics No. 659, Springer-Verlag, 1978. Second edition: CRM Monograph Series No. 5, American Math. Soc., Providence, RI, 1994.
• M. Csörnyei, Open Problems, www.homepages.ucl.ac.uk/~ucahmcs/
• K. J. Falconer, The geometry of fractal sets. Cambridge Tracts in Mathematics No. 85, Cambridge University Press, 1986.
• H. Federer, Geometric Measure Theory, Classics in Mathematics, Springer-Verlag, 1996.
• P. Humke and M. Laczkovich, Typical continuous functions are virtually nonmonotone, Proc. Amer. Math. Soc., 94, no. 2 (1985), 244–248.
• A. S. Kechris, Classical Descriptive Set Theory. Graduate Texts in Mathematics, No. 156, Springer-Verlag, 1995.
• P. Mattila, Geometry of Sets and Measures in Euclidean Spaces. Cambridge Studies in Advanced Mathematics, No. 44, Cambridge University Press, 1995.
• J. C. Oxtoby, Measure and Category. A survey of the analogies between topological and measure spaces. Second edition. Graduate Texts in Mathematics No. 2, Springer-Verlag, 1980.
• Problem circulated by D. Preiss, see e.g. http://www.homepages.ucl.ac.uk/~ucahmcs/probl.
• S. Shelah and J. Steprāns, Uniformity invariants of Hausdorff and Lebesgue measures, preprint.
Nucleus - Maple Help Magma Nucleus compute the nucleus of a magma Calling Sequence Nucleus( m ) Parameters m - Array representing the Cayley table of a finite magma Description • The nucleus of a magma is the set of its members that associate with every member of the magma. That is, an element x is in the nucleus if all of the equations (xy)z = x(yz), (yx)z = y(xz) and (yz)x = y(zx) are satisfied, for all y and z from the magma. • The Nucleus command returns the nucleus of the magma m as a set. Examples > $\mathrm{with}\left(\mathrm{Magma}\right):$ > $m≔⟨⟨⟨1|2|3⟩,⟨2|3|1⟩,⟨3|1|2⟩⟩⟩$ ${m}{≔}\left[\begin{array}{ccc}{1}& {2}& {3}\\ {2}& {3}& {1}\\ {3}& {1}& {2}\end{array}\right]$ (1) > $\mathrm{Nucleus}\left(m\right)$ $\left\{{1}{,}{2}{,}{3}\right\}$ (2) > $m≔⟨⟨⟨1|1|1|1|1⟩,⟨1|1|1|1|1⟩,⟨1|1|1|1|1⟩,⟨1|1|1|2|4⟩,⟨1|2|2|4|5⟩⟩⟩$ ${m}{≔}\left[\begin{array}{ccccc}{1}& {1}& {1}& {1}& {1}\\ {1}& {1}& {1}& {1}& {1}\\ {1}& {1}& {1}& {1}& {1}\\ {1}& {1}& {1}& {2}& {4}\\ {1}& {2}& {2}& {4}& {5}\end{array}\right]$ (3) > $\mathrm{Nucleus}\left(m\right)$ $\left\{{1}{,}{2}{,}{3}\right\}$ (4) > $m≔⟨⟨⟨1|1|1|1|1⟩,⟨1|1|1|1|1⟩,⟨1|1|1|1|1⟩,⟨1|1|1|2|4⟩,⟨1|2|1|4|5⟩⟩⟩$ ${m}{≔}\left[\begin{array}{ccccc}{1}& {1}& {1}& {1}& {1}\\ {1}& {1}& {1}& {1}& {1}\\ {1}& {1}& {1}& {1}& {1}\\ {1}& {1}& {1}& {2}& {4}\\ {1}& {2}& {1}& {4}& {5}\end{array}\right]$ (5) > $\mathrm{Nucleus}\left(m\right)$ $\left\{{1}{,}{2}{,}{3}\right\}$ (6) Compatibility • The Magma[Nucleus] command was introduced in Maple 15. • For more information on Maple 15 changes, see Updates in Maple 15.
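Not part of the Maple help page, but the defining conditions translate directly into a brute-force check. A small Python sketch (the Cayley table is 0-indexed internally; results are reported 1-indexed to match Maple's output):

```python
def nucleus(m):
    """Nucleus of a finite magma given by a Cayley table m (0-indexed)."""
    n = len(m)
    elems = range(n)
    return {
        x + 1  # report 1-indexed, as in the Maple examples
        for x in elems
        if all(
            m[m[x][y]][z] == m[x][m[y][z]]      # (xy)z = x(yz)
            and m[m[y][x]][z] == m[y][m[x][z]]  # (yx)z = y(xz)
            and m[m[y][z]][x] == m[y][m[z][x]]  # (yz)x = y(zx)
            for y in elems
            for z in elems
        )
    }

# The first example above is the cyclic group of order 3, which is
# associative, so every element lies in the nucleus:
z3 = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
print(nucleus(z3))  # {1, 2, 3}
```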
# Dataset Card for "math_qa"

How to load this dataset directly with the 🤗/datasets library: see the snippet at the end of this card.

## Dataset Description

### Dataset Summary

Our dataset is gathered by using a new representation language to annotate over the AQuA-RAT dataset. AQuA-RAT has provided the questions, options, rationale, and the correct options.

## Dataset Structure

We show detailed information for up to 5 configurations of the dataset.

### Data Instances

#### default

• Size of the generated dataset: 21.90 MB
• Total amount of disk used: 28.87 MB

An example of 'train' looks as follows.

{
  "Problem": "a multiple choice test consists of 4 questions , and each question has 5 answer choices . in how many r ways can the test be completed if every question is unanswered ?",
  "Rationale": "\"5 choices for each of the 4 questions , thus total r of 5 * 5 * 5 * 5 = 5 ^ 4 = 625 ways to answer all of them . answer : c .\"",
  "annotated_formula": "power(5, 4)",
  "category": "general",
  "correct": "c",
  "linear_formula": "power(n1,n0)|",
  "options": "a ) 24 , b ) 120 , c ) 625 , d ) 720 , e ) 1024"
}

### Data Fields

The data fields are the same among all splits.

#### default

• Problem: a string feature.
• Rationale: a string feature.
• options: a string feature.
• correct: a string feature.
• annotated_formula: a string feature.
• linear_formula: a string feature.
• category: a string feature.

### Data Splits

| name    | train | validation | test |
|---------|-------|------------|------|
| default | 29837 | 4475       | 2985 |
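The "how to load" snippet near the top of the original card was lost in extraction; the standard 🤗 Datasets entry point for this dataset is:

```python
from datasets import load_dataset

dataset = load_dataset("math_qa")
print(dataset["train"][0]["Problem"])
```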
# Suppressing tables

#### noetsi ##### No cake for spunky

When running the following code I generate a huge number of tables I do not want. All I want is the graphs and regression output. NOPRINT suppresses everything, so that won't work.

ods graphics on; PROC REG DATA=SASUSER.STAT143 PLOTS(maxpoints = 30000) noprint ; Linear_Regression_Model: MODEL C14 = FEMALE WEEKLYEARNINGS_ACC SEDUM PUBDUM Race_W RACE_B Ethnicity_H Age SEV2d SEV1D EDUC Private DEVD LEARND Mental ORTHO SEN SUBS / SELECTION=NONE TOL SPEC DW Partial ; RUN; ods graphics off; QUIT;

#### hlsmith ##### Less is more. Stay pure. Stay poor.

Well, it all comes down to using the ODS output, I believe. Have you ever used the trace option? I think it is just: ods trace on; code . . . ods trace off; It will output in the log all of the components of the output. Then you select the pieces you want and put them in the ODS OUTPUT statement. If I get a chance I will look for an example, but the above should give you enough for a Google search.

#### Stu ##### New Member

When you say suppressing tables, are you talking about suppressing specific graphical output? If so, you can specify the plots that you'd like in the PLOTS option. For example: Code: ods graphics on; proc reg data=sashelp.cars plots(maxpoints=300000 only) = (fitplot diagnostics); model MPG_Highway = Weight Cylinders Horsepower EngineSize / TOL SPEC DW Partial; run;

#### noetsi ##### No cake for spunky

When I use the PARTIAL option in the MODEL statement in PROC REG to generate partial regression plots, it generates one temporary table for each graph. It's these temporary tables, not the graphs themselves, that I want to suppress.
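Building on the trace suggestion above, a pattern that typically works is to let ODS TRACE report the names of the unwanted tables and then exclude just those. A sketch (the table name in the exclude statement is a made-up placeholder; substitute the names your own log reports):

Code:
ods trace on;   /* logs the name of every output object the step creates */
proc reg data=sasuser.stat143;
   model C14 = FEMALE EDUC / partial;
run;
quit;
ods trace off;

/* Re-run, excluding only the unwanted tables by name.          */
/* "PartialPlotData" is a placeholder, not a documented name.   */
ods exclude PartialPlotData;
proc reg data=sasuser.stat143 plots(maxpoints=30000);
   model C14 = FEMALE EDUC / tol spec dw partial;
run;
quit;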
# International Conference of Young Astrophysicists and Astronomers 2018

Europe/Zurich

room Rosino (Vicolo dell'Osservatorio 3, Padua, Italy)

University of Padua, Dipartimento di Fisica e Astronomia, Vicolo dell'Osservatorio 3, 35122 Padua, Italy

Description

### This one-day conference brings together students of astronomy and physics

Brilliant PhD students present their research activity in highlight talks by invitation. Students at all university levels can participate. Some of them have the opportunity to present their thesis in a talk by submitting an abstract (the thesis should be ready or in an advanced stage by the date of the conference). Everyone, including attendees not presenting, is encouraged to register. There is a maximum number of participants.

Participants

• Andrea Reguitti • Andrew Cameron • Antonio Sbaffoni • Bohdan Bidenko • Chiara Fiorin • Chiara Tettamanti • Elena Redaelli • Elias Kammoun • Federica Guidi • FILIPPO SANTOLIQUIDO • Francesca Lucertini • Francesco Sinigaglia • Gaia Lacedelli • Giacomo Cordoni • Giorgio Orlando • Justine Devin • Laura Olivera Nieto • Lorenzo Piga • Luca Costantin • Marco Dall'Amico • Martina Baratella • Nicola Gaspari • Pasquale Tiziano Ursino • Riccardo Da Re • Sara Leardini • Silvia Celli • Stefano Giarratana • Valerio Ganci • VIMAL VIJAYAN • Virginia Cuomo • Vito Squicciarini • Vo Hong Minh Phan

• 09:30 10:05 Registration 35m (room Rosino)
• 10:05 10:15 Opening and announcements 10m
• 10:15 11:15 Highlight talks
• 10:15 Innovative pulsar searching techniques, and the discovery of a relativistic binary pulsar 30m

Pulsars, rapidly-rotating and highly magnetised neutron stars, can be utilised as tools in the study of many fundamental physical questions, most notably in the application of binary pulsars to the study of gravitational theories such as General Relativity. The discovery of ever-more relativistic binary systems than those presently known will allow such tests to probe even deeper into the nature of gravity. Here, I will present results from the processing of 44% of the HTRU-South Low Latitude pulsar survey, the most sensitive blind survey of the southern Galactic plane taken to date. This includes the discovery and long-term timing of 40 new radio pulsars identified through the continued application of a novel "partially-coherent segmented acceleration search" technique, which was specifically designed to discover highly-relativistic binary systems. These pulsars display a range of scientifically-interesting behaviours including glitching, pulse-nulling and binary motion, and along with other discoveries from the HTRU-S Low Latitude survey appear to comprise a population of older, lower-luminosity pulsars as compared to the previously-known population. In addition, I will also present an in-depth study of PSR J1757-1854, the only relativistic binary pulsar to have been discovered in the HTRU-S Low Latitude survey. This extreme binary system promises to provide new insights into gravitational theories within the coming years.
Speaker: Andrew Cameron (CSIRO Astronomy and Space Science & Max Planck Institute for Radio Astronomy)

• 10:45 THE SEARCH FOR PEVATRONS IN VHE GAMMA RAYS AND NEUTRINOS 30m

Since its discovery more than one hundred years ago, the origin of the cosmic-ray flux measured on Earth has remained unknown: to explain the region below the knee, supernova remnants (SNRs) are usually invoked as PeV cosmic accelerators. In particular, young SNRs are potential candidates, since they might act as PeVatrons at least during some initial stage of their evolution: among these, the brightest TeV SNR is RX J1713.7-3946. However, no clear indication of PeV energies has been observed so far in this kind of source. Recently, the Galactic Center region has been detected as a multi-TeV gamma-ray emitter. Two emission regions have been resolved by H.E.S.S.: a point source, spatially associated with the known radio source SgrA*, and a diffuse flux, characterised by a simple power-law gamma-ray spectrum with no visible cut-off up to gamma-ray energies of about 50 TeV. Such a detection triggers the search for a PeVatron at the center of our Galaxy. Clear evidence of the hadronic nature of the emission would be the detection of a neutrino counterpart. I will review the potential of the under-construction KM3NeT detector for the detection of these and other Galactic sources, in view of the discovery power of the next-generation ground-based instrument CTA.

Speaker: Silvia Celli (INFN-Roma and Gran Sasso Science Institute)

• 11:30 12:30 Highlight talks
• 11:30 Supernova remnants and pulsar wind nebulae at high and very-high energies 30m

Supernova remnants (SNRs) and pulsar wind nebulae (PWNe) have long been considered potential sources of Galactic cosmic rays. Radiating from the radio band to gamma rays, these objects are ideal for studying the acceleration of cosmic rays. In particular, understanding the nature of the gamma-ray emission allows probing the population of high-energy particles (leptons or hadrons) and inferring the highest energy limits achieved via their acceleration process. At TeV energies, the H.E.S.S. Galactic Plane Survey (HGPS) has recently revealed several unidentified sources, often dark in other wavelengths, challenging our understanding of the origin of the emission. I will highlight our current knowledge of SNRs and PWNe and in particular stress what we may learn about them from an observational point of view. I will also present a method to constrain the nature of the unidentified TeV HGPS sources using a multi-wavelength approach, aiming to apply it to the next-generation gamma-ray observatory (CTA, the Cherenkov Telescope Array), which is expected to reveal several hundreds of TeV sources along the Galactic plane.

Speaker: Justine Devin

• 12:00 Measuring the polarization of the CMB with the QUIJOTE experiment 30m

In the last years we have obtained a very detailed picture of the early Universe, measuring the intensity and polarization of the Cosmic Microwave Background (CMB), the relic radiation from the Big Bang. The last two space missions, WMAP and Planck, and also previous ground-based and balloon experiments, allowed us to consolidate the precision-Cosmology era. Nowadays, ground-based experiments are measuring the sky looking for the detection of CMB B-modes at large angular scales, the tiny polarization signal relic from Inflation. This is one of the most challenging objectives of modern Cosmology, since many contaminants are strongly hiding it.
For this purpose, we must achieve a very precise characterization of the Galactic emissions, and experiments such as QUIJOTE have a very important role in this context. In this talk we will briefly introduce the origin of the CMB radiation and the physics of the foreground emissions. Then we will describe the QUIJOTE experiment with its present scientific results and future plans. In particular, we will discuss the map-making process, which has been the main topic of my PhD so far.

Speaker: Federica Guidi (IAC)

• 12:30 13:30 Lunch Break. Lunch will be provided. 1h
• 13:30 15:00 Highlight talks
• 13:30 On the accuracy of reflection-based supermassive black hole spin measurements in AGN 30m

It is generally accepted that active galactic nuclei (AGN) are powered by accretion onto supermassive black holes (SMBHs) of masses $M \sim 10^{6-9}\, \rm M_\odot$. The matter is thought to accrete in a disc that is geometrically thin and optically thick, emitting the bulk of its light in the optical/ultraviolet range. Moreover, AGN are strong X-ray emitters. These X-rays are thought to be produced by Compton up-scattering of the disc photons off hot ($kT \sim 10^9$ K) trans-relativistic electrons, usually referred to as the X-ray corona. Several lines of observational evidence suggest that the corona is located in the close vicinity of the SMBH (below ~10 gravitational radii). Hence, X-rays from AGN can be used to probe these regions, which can be considered unique laboratories to directly test the effects of general relativity. In particular, the detection of a strong relativistic "reflection component" in X-ray spectra is potentially the most powerful method to measure the spin, one of the fundamental observable properties of BHs. The spin measurement, particularly in AGN, is of great interest for understanding the physical processes on scales ranging from the circumnuclear region out to the host galaxy. It is therefore timely to test how reliable the currently achievable reflection-based BH spin measurements are. I will present in my talk an attempt to answer this question through blind-fitting a set of simulated high-quality XMM-Newton and NuSTAR spectra, considering the most generic configuration of AGN. Each member of our group (composed of three persons) simulated ten spectra with multiple components that are typically seen in AGN. The resulting spectra were blindly analysed by the other two members. Our main results show that, at the high signal-to-noise ratio assumed in our simulations, neither the complexity of the spectra nor the input value of the spin is the major driver of our results. The height of the X-ray source instead plays a crucial role in recovering the spin. In particular, a high success rate in recovering the spin values is found among the accurate fits for a dimensionless spin parameter larger than 0.8 and a lamp-post height lower than five gravitational radii. I will then discuss the implications of our results and how some of the limitations faced in spin determination can be overcome.

Speaker: Elias Kammoun (SISSA - Trieste)

• 14:00 Cosmic-ray ionization in diffuse clouds 30m

Cosmic rays are believed to play an essential role in determining the chemistry and the evolution of molecular clouds. This is because they are usually considered to be the main ionization agent of these star-forming regions. In this talk, we will examine this hypothesis from a theoretical point of view for the case of diffuse clouds.
This will be achieved by studying the cosmic-ray spectra in the cloud's interior using the one-dimensional cosmic-ray transport equation. Interestingly, it is found that energy losses effectively reduce the cosmic-ray flux in the cloud interior for low-energy cosmic rays, in such a way that the predicted ionization rate is more than 10 times smaller than the one inferred from the observational data. A brief discussion of the implications of this finding, in terms of spatial fluctuations of the Galactic cosmic-ray spectra and possible additional sources of low-energy cosmic rays, will be given at the end.

Speaker: Mr Vo Hong Minh Phan (APC, University Paris Diderot-France)

• 14:30 Molecules in Space 30m

In the last 50 years, almost 200 molecules have been detected in space. Some of them, such as carbon monoxide (CO), water (H$_2$O) and ammonia (NH$_3$), are very simple, but others are formed by more than 10 atoms. The astrochemistry field in astrophysics aims to investigate the chemistry of space by means of observations, laboratory experiments as well as theoretical studies. In particular, molecules represent a promising way to follow the cycle of the interstellar medium (ISM) from the diffuse gas to the dense and cold clouds to the protostellar phases, with the ambitious goal of understanding how and where molecules of biological interest are formed. In my talk, I will try to show the diagnostic power of molecular emissions at centimeter and millimeter wavelengths and to illustrate how to derive important information about the dynamical state of the ISM. My recent work is in particular focused on the analysis of the kinematics of a protostellar clump, Barnard 59, located in the Pipe nebula.

Speaker: Mrs Elena Redaelli (Max Planck Institute for Extraterrestrial Physics)

• 15:15 16:15 Contributed talks
• 15:15 Multiple stellar populations in Magellanic Cloud clusters: disentangling between age spread and rotation 20m

The discovery of multiple stellar populations in young and intermediate-age clusters has been one of the major findings in the field of stellar populations of the last decade. Their origin is one of the most intriguing open issues of stellar astrophysics and provides new constraints on the assembly of galaxies and on star formation and evolution. I will present new results for a large dataset of young clusters (GO-14710, PI Milone) observed with the Hubble Space Telescope. Our results allow us to understand the physical mechanism that is responsible for the multiple populations in young clusters and to disentangle the effects of age variation and rotation. The study of these young objects opens new perspectives in the understanding of multiple stellar populations that formed at high redshift, a few hundred million years after the Big Bang.

Speaker: Giacomo Cordoni (University of Padua)

• 15:35 Signatures of an eruptive phase before the explosion of SN 2013gc 20m

SN 2013gc is the first case of a type IId supernova with detections of outburst episodes before the explosion. During these outbursts the progenitor star expelled a circumstellar shell, which later interacted with the SN ejecta. The spectra show multiple components in emission, with a narrow P Cygni profile. The spectra and the light curve were compared with those of similar objects. The talk will be a description of the study done on this interesting object.
Speaker: Reguitti Andrea (Università di Padova)

• 15:55 Cosmological Perturbation Theory beyond shell-crossing: Schrödinger equation approach 20m

In this dissertation, in order to study the growth of the density fluctuations of the Cold Dark Matter, the standard perturbation techniques, such as Eulerian perturbation theory and the Zel'dovich approximation, are reviewed. In the second part of this work, we introduce a novel approach to the study of large-scale structure formation in which the Cold Dark Matter is modelled by a complex scalar field whose dynamics are ruled by coupled Schrödinger and Poisson equations. In the last part, we show that the lowest order cumulants of Eulerian perturbation theory for the Cold Dark Matter are perfectly recovered.

Speaker: Pasquale Tiziano Ursino

• 16:30 17:10 Specola 40m
• 17:10 18:00 Aperitif, Award, and Concluding Remarks 50m
# How do you find the oxidation number of one element?

Aug 26, 2016

The oxidation number of an element is generally $0$.

#### Explanation:

Oxidation state is formally the charge on an atom when it donates or accepts electrons. While this is a formal exercise, it does have utility in the balancing of redox equations. Because elements have demonstrably not transferred electrons, they are assigned a $0$ oxidation state; they are $\text{zerovalent}$. The burning of coal and fossil fuels is certainly an example of a redox reaction: $\text{Coal is oxidized from 0 to +IV}$ $C \rightarrow {C}^{+ I V} + 4 {e}^{-}$ $\text{Oxygen is reduced from 0 to -II}$ ${O}_{2} + 4 {e}^{-} \rightarrow 2 {O}^{2 -}$ For both these redox equations, mass and charge are balanced, as they must be. (Are they?) Add them together to eliminate the electrons: $C \left(s\right) + {O}_{2} \left(g\right) \rightarrow C {O}_{2} \left(g\right)$ Both elemental reactants are zerovalent BEFORE electron transfer occurs.
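To answer the parenthetical question explicitly: in the oxidation half-equation, carbon is conserved ($1 = 1$) and the charge is $0$ on the left and $\left(+ 4\right) + 4 \times \left(- 1\right) = 0$ on the right; in the reduction half-equation, oxygen is conserved ($2 = 2$) and the charge is $- 4$ on the left (from the four electrons) and $2 \times \left(- 2\right) = - 4$ on the right. Both half-equations are therefore balanced in mass and charge, and the $4 {e}^{-}$ cancel upon addition.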
# DAVIDSTUTZ

## Matrix Decompositions Demonstrated in PHP

This article presents an application demonstrating commonly used matrix decompositions and their applications, implemented in PHP. Matrix decompositions are used in numerical analysis to solve a wide range of problems. Throughout a course on numerical analysis at university I found myself implementing some of the corresponding algorithms in PHP, which is a very unusual programming language for numerical purposes. I decided to put them together to form a small application demonstrating some common matrix decompositions and their usage. The project can be found on GitHub. The following table gives an overview of the decompositions covered:

| Decomposition | Factorization | Applicable for | Runtime |
| --- | --- | --- | --- |
| LU | $A = LU$ | $A \in \mathbb{R}^{n \times n}$, $A$ regular | $\mathcal{O}(\frac{1}{3}n^3)$ |
| Cholesky | $A = LDL^T$ | $A \in \mathbb{R}^{n \times n}$, $A$ symmetric and positive definite | $\mathcal{O}(\frac{1}{6}n^3)$ |
| QR: Givens Rotations | $A = QR$ | $A \in \mathbb{R}^{m \times n}$ | $\mathcal{O}(\frac{4}{3}n^3)$ |
| QR: Householder Transformations | $A = QR$ | $A \in \mathbb{R}^{m \times n}$ | $\mathcal{O}(\frac{2}{3}n^3)$ |

What is your opinion on this article? Did you find it interesting or useful? Let me know your thoughts in the comments below or get in touch with me:

• Vicky Budhiraja: I have this matrix:

1 -0.7 0.5 -0.4
-0.7 1 -0.5 0.5
0.5 -0.5 1 -0.6
-0.4 0.5 -0.6 1

I compare the Cholesky() results with MATLAB and this lib, and the results seem different. Or am I missing something? Any suggestions? – Vicky

• davidstutz: You have to distinguish between the classical Cholesky decomposition and the LDL (or, for real matrices, the LDL^T) variant of the Cholesky decomposition. As described in "Numerik für Ingenieure und Naturwissenschaftler", W. Dahmen, A. Reusken, Springer (sorry that I have to refer to German literature), the algorithm implemented in my demonstration application is the LDL variant of the Cholesky decomposition. In MATLAB you get the classical Cholesky decomposition by using chol(A) and the LDL variant by using ldl(A), where A is your input matrix. Given your example matrix, I get the same results both in MATLAB and my demonstration application. Here are the links to the MATLAB documentation of both versions: http://www.mathworks.de/de/help/matlab/ref/chol.html http://www.mathworks.de/de/help/matlab/ref/ldl.html

• Fabio: Hi. Just to inform you that both links are broken. Very nice work. Your organization with your projects and on this website is really impressive. Thanks for sharing.
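For readers who want to see what the LDL^T variant discussed in the comments looks like in PHP, here is a minimal self-contained sketch (my own illustration, not the code or API of the GitHub project):

```php
<?php
// LDL^T variant of the Cholesky decomposition: A = L * D * L^T, with
// unit lower triangular L and diagonal D. $A must be a symmetric
// positive definite matrix given as an array of rows.
function ldlt(array $A): array {
    $n = count($A);
    $L = array_fill(0, $n, array_fill(0, $n, 0.0));
    $D = array_fill(0, $n, 0.0);
    for ($j = 0; $j < $n; $j++) {
        $L[$j][$j] = 1.0;
        // D[j] = A[j][j] - sum_{k<j} L[j][k]^2 * D[k]
        $sum = $A[$j][$j];
        for ($k = 0; $k < $j; $k++) {
            $sum -= $L[$j][$k] * $L[$j][$k] * $D[$k];
        }
        $D[$j] = $sum;
        // L[i][j] = (A[i][j] - sum_{k<j} L[i][k] * L[j][k] * D[k]) / D[j]
        for ($i = $j + 1; $i < $n; $i++) {
            $sum = $A[$i][$j];
            for ($k = 0; $k < $j; $k++) {
                $sum -= $L[$i][$k] * $L[$j][$k] * $D[$k];
            }
            $L[$i][$j] = $sum / $D[$j];
        }
    }
    return [$L, $D];
}

// Example: the correlation matrix from the first comment.
list($L, $D) = ldlt([
    [ 1.0, -0.7,  0.5, -0.4],
    [-0.7,  1.0, -0.5,  0.5],
    [ 0.5, -0.5,  1.0, -0.6],
    [-0.4,  0.5, -0.6,  1.0],
]);
print_r($D);
```

Comparing the printed diagonal against MATLAB's ldl(A) (not chol(A)) should reproduce the agreement described in the reply above.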
# Pseudo vertex-transitive graphs I'm investigating finite, simple graphs with the following property: For each degree $d$ of $G$, the subgraph induced on all vertices of degree $d$ is vertex transitive. In particular, I'm interested in graphs that are not vertex transitive to begin with. For example, consider the following graph. Since it has vertices of degree 4 and 5, it is not vertex transitive. But the subgraph induced on the four vertices of degree 4 is $C_4$, which is vertex transitive. Likewise, the subgraph induced on the four vertices of degree 5 is $K_4$, which is vertex transitive. The house graph is a nonexample, as the subgraph induced on the three vertices of degree 2 gives $K_1 \cup K_2$, which is not vertex transitive. My initial questions: 1. Is there a name for this property? 2. Is much known about graphs with this property? I'm thinking this property might be similar to saying that for each degree $d$, all vertices of degree $d$ are in the same orbit under the automorphism group of $G$, but I haven't fleshed that out fully yet. Thank you. • "I'm thinking this property might be similar to saying that for each degree $d$, all vertices of degree $d$ are in the same orbit under the automorphism group of $G$, but I haven't fleshed that out fully yet." -- they are, at least, not equivalent, as can be seen by starting with a triangle, adding one pendant edge to each vertex of the triangle, then subdividing one of the pendant edges. – Gregory J. Puleo Jan 27 '16 at 19:18 • That's a useful example, thanks. – jamisans Jan 27 '16 at 19:40 • You can also have graphs where the vertices of each valency induce a coclique, e.g., bipartite graphs. – Chris Godsil Jan 27 '16 at 20:33
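For small examples the property can be tested by brute force; here is a quick sketch assuming networkx (the helper names are mine, and enumerating automorphisms is exponential in general, so this is only practical for small graphs):

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def is_vertex_transitive(G):
    """G is vertex transitive iff the orbit of one vertex under Aut(G)
    is all of V(G). Automorphisms are the self-isomorphisms of G."""
    if G.number_of_nodes() == 0:
        return True
    nodes = list(G.nodes())
    orbit = {phi[nodes[0]] for phi in GraphMatcher(G, G).isomorphisms_iter()}
    return orbit == set(nodes)

def has_degree_induced_transitivity(G):
    """The property in question: for each degree d, the subgraph induced
    on all vertices of degree d is vertex transitive."""
    for d in set(dict(G.degree()).values()):
        Vd = [v for v, deg in G.degree() if deg == d]
        if not is_vertex_transitive(G.subgraph(Vd)):
            return False
    return True

# The house graph is a nonexample: its three degree-2 vertices induce
# K1 union K2, which is not vertex transitive.
print(has_degree_induced_transitivity(nx.house_graph()))  # False
```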
Using an underscore with the SetDirectory command works fine. SetDirectory["D:\\at_work\\mathematica\\programming_maeder"] Now I can load a package using Get["ComplexMap`"] or Needs["ComplexMap`"]. But if I set the directory to the directory above, SetDirectory["D:\\at_work\\mathematica"] the Get or Needs commands fail. Get["programming_maeder`ComplexMap`"] Since it is within a string, I would have thought the above statement would work. Is there a way to use the underscore here? A workaround is to rename the directory programmingMaeder but I would prefer to use the underscore. • You can either rename the directory and use the backtick notation or keep the directory name and use the normal path syntax of your operating system, i.e. Get["D:\\at_work\\mathematica\\programming_maeder\\ComplexMap.m"]. The idea behind the backtick notation is that users will be able to simply write the package context after << to load the package. This requires the context name to agree with the directory and file names. Context names simply cannot contain underscores. – Szabolcs Feb 11 '15 at 15:19 • have you tried the Get[name, Path->{}] form? – george2079 Feb 11 '15 at 17:56 • The Get command with the complete path suggested by Szabolcs works fine. Also the form suggested by george2079 works. Thank you both – Jack LaVigne Feb 11 '15 at 23:58 The comment given by Szabolcs works as an answer. He suggested using: Get["D:\\at_work\\mathematica\\programming_maeder\\ComplexMap.m"] He also made it clear that one is not allowed to use an underscore in a context name. The comment given by george2079 also works as an answer. He suggested using: Get["ComplexMap", Path -> {"D:\\at_work\\mathematica\\programming_maeder"}] or Get["ComplexMap.m", Path -> {"D:\\at_work\\mathematica\\programming_maeder"}] I hope I am doing the correct thing by taking their comments and posting them as an answer. • that's appropriate. Go ahead and accept it after a while. – george2079 Feb 13 '15 at 20:41
# Exact solution of the Schrödinger equation with a Lennard-Jones potential

J. Sesma, Departamento de Física Teórica, 50009 Zaragoza, Spain. e-mail: [email protected]

###### Abstract

The Schrödinger equation with a Lennard-Jones potential is solved by using a procedure that treats in a rigorous way the irregular singularities at the origin and at infinity. Global solutions are obtained thanks to the computation of the connection factors between Floquet and Thomé solutions. The energies of the bound states result as zeros of a function defined by a convergent series whose successive terms are calculated by means of recurrence relations. The procedure also gives the wave functions, expressed either as a linear combination of two Laurent expansions, at moderate distances, or as an asymptotic expansion, near the singular points. A table of the critical intensities of the potential, for which a new bound state (of zero energy) appears, is also given.

## 1 Introduction

The interaction between two atoms is frequently represented by means of a Lennard-Jones potential,

$$V(r)=\frac{\hbar^{2}}{2mr_{e}^{2}}\,\lambda\left[\left(\frac{r_{e}}{r}\right)^{12}-2\left(\frac{r_{e}}{r}\right)^{6}\right],\qquad(1)$$

alone or with the addition of some corrections. In this expression $m$ is the reduced mass of the system of two atoms, $r_{e}$ is the equilibrium distance (minimum of $V$) and $\lambda$ is a dimensionless parameter accounting for the intensity of the interaction. Both $r_{e}$ and $\lambda$ are empirically adjusted for each particular kind of interacting atoms. Other classical interatomic potentials, like the Morse, Rydberg or Buckingham ones, can be simulated, as shown by Lim [1], by one of the Lennard-Jones type.

Given a diatomic system, and assuming a certain potential to represent the interaction, one is interested, from a theoretical point of view, mainly in the determination of its spectrum of energies, to be compared with the experimentally observed bound states. Nevertheless, in many cases one needs to know also the corresponding wave functions in order to compute the expected values of quantities that may be obtained in the experiment. A large variety of algebraic methods are discussed in the monographs by Fernández and Castro [2] and by Fernández [3]. References to later developments can be found in recently published papers [4, 5, 6, 7]. Numerical methods have been developed, among others, by Simos and collaborators [8, 9, 10, 11]. An extensive bibliography concerning those methods can be found in Section 2 of a recent paper [12]. Except for a few familiar potentials, for which the differential equation can be solved exactly [13], those methods provide only approximate values of the energies and wave functions. This may be sufficient in most cases. However, due to the strong singularity at the origin of the Schrödinger equation with a Lennard-Jones potential, those approximate methods cannot represent faithfully the behaviour of the wave function in the neighbourhood of the origin. This fact, besides being unsatisfactory from a mathematical point of view, may constitute a serious inconvenience for the computation of the expected values of certain operators. The purpose of this paper is to call the attention of users of the Lennard-Jones potential towards a method of solution of the Schrödinger equation that is able to give the correct behaviour of the wave function in the neighbourhood of the origin and of infinity, the two singular points of the differential equation.
The method is exact, free of approximations, although errors due to the computational procedure are unavoidable. But these errors can be reduced by increasing the number of digits carried along the calculations. We present, in the next Section, fundamental sets of solutions of the Schrödinger equation that serve as a basis to express the physical solution. The requirement of a regular behaviour of this solution at the singular points establishes a condition, in terms of the connection factors, to be fulfilled by the energies of the bound states. The procedure to determine the connection factors is explained in Section 3. The energies of the bound states in a potential of intensity $\lambda$ are shown in Figure 1. Expressions of the corresponding wave functions are given in Section 4. As $\lambda$ increases, new bound states appear. We denote as critical those values of $\lambda$ for which a state of zero energy exists. In Section 5, a method is suggested to find those critical intensities, which are reported in Table 5. Section 6 contains some pertinent comments. Finally, we recall, in an Appendix, a procedure to solve the nontrivial problem of finding the Floquet solutions.

## 2 Solutions of the Schrödinger equation

For a given energy $E$ and angular momentum $l$, the Schrödinger equation for the reduced radial wave function, $R(r)$, of a particle of mass $m$ in the potential $V(r)$, given in Eq. (1), reads

$$-\frac{\hbar^{2}}{2m}\left(\frac{d^{2}R(r)}{dr^{2}}-\frac{l(l+1)}{r^{2}}R(r)\right)+V(r)R(r)=ER(r).\qquad(2)$$

As usual, we will express the solutions of this differential equation in terms of a dimensionless radial variable, $z$, and energy parameter, $\varepsilon$, defined by

$$z\equiv\frac{r}{r_{e}},\qquad\varepsilon\equiv\frac{2mr_{e}^{2}}{\hbar^{2}}E.\qquad(3)$$

For the radial wave function in terms of the new variable we will use

$$w(z)\equiv R(r).\qquad(4)$$

Then, the Schrödinger equation becomes

$$-z^{2}\frac{d^{2}w(z)}{dz^{2}}+\left(\lambda z^{-10}-2\lambda z^{-4}+l(l+1)-\varepsilon z^{2}\right)w(z)=0.\qquad(5)$$

This differential equation presents two irregular singular points: one of rank 5 at the origin, and another of rank 1 at infinity. The physical solution must be regular at both singular points. To express this solution, we find it convenient to consider three different fundamental systems of solutions.

### 2.1 Floquet solutions

Except for certain particular values of the parameters $\lambda$ and $\varepsilon$, that we exclude from this discussion, there are two independent Floquet or multiplicative solutions expressed as Laurent power series of the form

$$w_{i}=z^{\nu_{i}}\sum_{n=-\infty}^{\infty}c_{n,i}z^{n},\quad\text{with}\quad\sum_{n=-\infty}^{\infty}|c_{n,i}|^{2}<\infty,\qquad i=1,2.\qquad(6)$$

The indices $\nu_{i}$ are not uniquely defined. They admit addition of any integer (with an adequate relabeling of the coefficients). In the general case, the indices and the coefficients may be complex. The requirement that $w_{i}$ be a solution of (5) gives the recurrence relation

$$\varepsilon c_{n-2,i}+\left[(n+\nu_{i})(n-1+\nu_{i})-l(l+1)\right]c_{n,i}+2\lambda c_{n+4,i}-\lambda c_{n+10,i}=0.\qquad(7)$$

The solution of this difference equation is not trivial. It can be treated as a nonlinear eigenvalue problem. In the Appendix we show an implementation of the Newton method to determine the indices $\nu_{i}$ and the coefficients $c_{n,i}$.

### 2.2 Thomé solutions for large values of z

There are two other independent solutions characterized by their behaviour for $z\to\infty$, namely

$$w_{j}(z)\sim\exp(\alpha_{j}z)\sum_{m=0}^{\infty}a_{m,j}z^{-m},\quad a_{0,j}\neq0,\qquad j=3,4.\qquad(8)$$

It can be easily checked, by taking

$$\alpha_{j}=\sqrt{-\varepsilon}\qquad(9)$$

and coefficients given by (omitting the second subindex, $j$)

$$a_{0}=1,\qquad 2\alpha m\,a_{m}=\left[m(m-1)-l(l+1)\right]a_{m-1}+2\lambda a_{m-5}-\lambda a_{m-11},\qquad(10)$$

that the right hand side of Eq. (8) is a solution of the differential equation (5). In fact, it is a formal solution, as the series is an asymptotic one that does not converge in general. The two values of the subindex $j$ in Eq. (8) correspond to the two possible values of the right hand side of Eq. (9).
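The recurrence (10) can be marched directly; as an illustration (this is not the author's Fortran code, and the parameter values are purely hypothetical), a minimal Python sketch generating the coefficients $a_m$:

```python
import numpy as np

def thome_coefficients(eps, lam, l, mmax):
    """Coefficients a_m of Eq. (10); a_m = 0 for m < 0 by convention."""
    alpha = np.sqrt(complex(-eps))   # one branch of alpha_j = +/- sqrt(-eps)
    a = np.zeros(mmax + 1, dtype=complex)
    a[0] = 1.0
    for m in range(1, mmax + 1):
        rhs = (m * (m - 1) - l * (l + 1)) * a[m - 1]
        if m >= 5:
            rhs += 2.0 * lam * a[m - 5]
        if m >= 11:
            rhs -= lam * a[m - 11]
        a[m] = rhs / (2.0 * alpha * m)
    return a

# hypothetical parameters, for illustration only
a = thome_coefficients(eps=-1.0, lam=100.0, l=0, mmax=20)
print(np.abs(a[:8]))
```

Since the series (8) is only asymptotic, the $|a_m|$ eventually grow, and in practice one truncates the sum near its smallest term.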
In the case of negative energies, we adopt the convention

$$\alpha_{3}=-\sqrt{-\varepsilon},\qquad\alpha_{4}=+\sqrt{-\varepsilon}.\qquad(11)$$

Accordingly, $w_{3}$ is physically acceptable, as it vanishes at infinity, whereas $w_{4}$ diverges and, therefore, should be eliminated from the physical solution. In the case (not to be considered in this paper) of positive energies, both $w_{3}$ and $w_{4}$ are oscillating solutions and correspond to incoming and outgoing waves.

### 2.3 Thomé solutions near the origin

In the neighbourhood of the origin, the role analogous to that of $w_{3}$ and $w_{4}$ at infinity is played by two other solutions, $w_{5}$ and $w_{6}$, such that, for $z\to0$,

$$w_{k}(z)\sim\exp\left(\beta_{k}z^{-5}/5\right)z^{\rho_{k}}\sum_{m=0}^{\infty}b_{m,k}z^{m},\quad b_{0,k}\neq0,\qquad k=5,6.\qquad(12)$$

Substitution of these expressions in Eq. (5) gives for the coefficients in the exponents

$$\beta_{k}=\sqrt{\lambda},\qquad\rho_{k}=3,\qquad(13)$$

and for the coefficients in the series (omitting the second subindex, $k$)

$$2\beta m\,b_{m}=2\lambda b_{m-1}+\left[(m-3)(m-2)-l(l+1)\right]b_{m-5}+\varepsilon b_{m-7},\qquad(14)$$

a recurrence relation that allows one to obtain the $b_{m}$ by starting with

$$b_{0,k}=1.\qquad(15)$$

The two solutions correspond to the two possible values of the right hand side of the first of Eqs. (13). By convention we take

$$\beta_{5}=-\sqrt{\lambda},\qquad\beta_{6}=+\sqrt{\lambda}.\qquad(16)$$

Then, $w_{5}$ is acceptable, from the physical point of view, whereas $w_{6}$ should be discarded.

### 2.4 The physical solution

As the solutions $w_{1}$ and $w_{2}$ of the differential equation constitute a fundamental system, any solution can be written as a linear combination of them. In particular, the physical solution would be

$$w_{\mathrm{phys}}(z)=A_{1}w_{1}(z)+A_{2}w_{2}(z),\qquad(17)$$

with constants $A_{1}$ and $A_{2}$, to be determined, such that $w_{\mathrm{phys}}$ becomes regular at the origin and at infinity. To impose this condition we need to know the behaviour of $w_{1}$ and $w_{2}$ at the singular points. In other words, we need to calculate the connection factors $T_{i,j}$ defined by

$$w_{i}(z)\sim T_{i,3}w_{3}(z)+T_{i,4}w_{4}(z),\quad\text{for }z\to\infty,\qquad i=1,2,\qquad(18)$$

$$w_{i}(z)\sim T_{i,5}w_{5}(z)+T_{i,6}w_{6}(z),\quad\text{for }z\to0,\qquad i=1,2.\qquad(19)$$

In terms of them, the behaviour of the physical solution in the neighbourhood of the singular points would be

$$w_{\mathrm{phys}}(z)\sim(A_{1}T_{1,3}+A_{2}T_{2,3})\,w_{3}(z)+(A_{1}T_{1,4}+A_{2}T_{2,4})\,w_{4}(z),\quad\text{for }z\to\infty,$$

$$w_{\mathrm{phys}}(z)\sim(A_{1}T_{1,5}+A_{2}T_{2,5})\,w_{5}(z)+(A_{1}T_{1,6}+A_{2}T_{2,6})\,w_{6}(z),\quad\text{for }z\to0.$$

The regularity of the physical solution at the singular points is guaranteed if $A_{1}$ and $A_{2}$ are chosen in such a way that

$$A_{1}T_{1,4}+A_{2}T_{2,4}=0\quad\text{and}\quad A_{1}T_{1,6}+A_{2}T_{2,6}=0,\qquad(20)$$

which is possible if and only if

$$T_{1,4}T_{2,6}-T_{2,4}T_{1,6}=0.\qquad(21)$$

For given values of the parameters of the potential, the left hand side of this equation is a function of $\varepsilon$ whose zeros correspond to the values of the energies of the bound states. Equation (21) is, therefore, the quantization condition. Solving it requires one to know the connection factors. We present in the next Section our procedure to determine them.

## 3 The connection factors

Let us denote by $W[f,g]$ the Wronskian of two functions $f$ and $g$,

$$W[f,g](z)=f(z)\frac{dg(z)}{dz}-\frac{df(z)}{dz}g(z).\qquad(22)$$

Then, from Eqs. (18) and (19), one obtains immediately

$$T_{i,3}=\frac{W[w_{i},w_{4}]}{W[w_{3},w_{4}]},\qquad T_{i,4}=\frac{W[w_{i},w_{3}]}{W[w_{4},w_{3}]},\qquad i=1,2,\qquad(23)$$

$$T_{i,5}=\frac{W[w_{i},w_{6}]}{W[w_{5},w_{6}]},\qquad T_{i,6}=\frac{W[w_{i},w_{5}]}{W[w_{6},w_{5}]},\qquad i=1,2.\qquad(24)$$

All Wronskians in these equations are independent of $z$. Those in the denominators can be calculated directly to obtain

$$W[w_{3},w_{4}]=-W[w_{4},w_{3}]=2\alpha_{4}a_{0,3}a_{0,4}=2\sqrt{-\varepsilon},\qquad(25)$$

$$W[w_{5},w_{6}]=-W[w_{6},w_{5}]=-2\beta_{6}b_{0,5}b_{0,6}=-2\sqrt{\lambda}.\qquad(26)$$

The calculation of the numerators is not so easy. In a former paper [14] we suggested a procedure that has been used to find the bound states in a spiked harmonic oscillator [15]. For convenience of the reader, we recall here the procedure, adapted to the present problem. We consider firstly the Wronskians $W[w_{i},w_{j}]$ of each one of the Floquet solutions with the two Thomé solutions at infinity ($j=3,4$). Let us introduce the auxiliary functions

$$u_{i,j}=\exp(-\alpha_{j}z/2)\,w_{i},\qquad u_{j}=\exp(-\alpha_{j}z/2)\,w_{j}.\qquad(27)$$
Obviously,

$$W[u_{i,j},u_{j}]=\exp(-\alpha_{j}z)\,W[w_{i},w_{j}].\qquad(28)$$

Both sides of this equation obey the first order differential equation

$$y'=-\alpha_{j}y.\qquad(29)$$

A direct computation of the left hand side of Eq. (28), by using the definitions (27) and the expansions (6) and (8), gives the doubly infinite series

$$W[u_{i,j},u_{j}]\sim\sum_{n=-\infty}^{\infty}\gamma_{n}^{(i,j)}z^{n+\nu_{i}},\qquad(30)$$

whose coefficients

$$\gamma_{n}^{(i,j)}=\sum_{m=0}^{\infty}a_{m,j}\left(\alpha_{j}c_{n+m,i}-(n+2m+1+\nu_{i})\,c_{n+m+1,i}\right)\qquad(31)$$

are solution of the first order difference equation

$$(n+1+\nu_{i})\,\gamma_{n+1}^{(i,j)}+\alpha_{j}\gamma_{n}^{(i,j)}=0.\qquad(32)$$

An expansion of the right hand side of Eq. (28), analogous to that in (30), can be obtained by making use of the so-called Heaviside's exponential series [16]

$$\exp(t)\sim\sum_{n=-\infty}^{\infty}\frac{t^{n+\delta}}{\Gamma(n+1+\delta)},\qquad|\arg(t)|<\pi,\quad\delta\ \text{arbitrary}.\qquad(33)$$

By taking $t=-\alpha_{j}z$ and choosing $\delta=\nu_{i}$, one gets an expansion,

$$\exp(-\alpha_{j}z)\sim\sum_{n=-\infty}^{\infty}\frac{(-\alpha_{j})^{n+\nu_{i}}}{\Gamma(n+1+\nu_{i})}\,z^{n+\nu_{i}},\qquad(34)$$

in series of the same powers of $z$ as in (30), with coefficients obeying the same first order difference equation,

$$(n+1+\nu_{i})\,\frac{(-\alpha_{j})^{n+1+\nu_{i}}}{\Gamma(n+2+\nu_{i})}+\alpha_{j}\,\frac{(-\alpha_{j})^{n+\nu_{i}}}{\Gamma(n+1+\nu_{i})}=0.\qquad(35)$$

Both solutions of the difference equation must be related by a multiplicative constant that, in view of Eq. (28), should be $W[w_{i},w_{j}]$. Therefore,

$$W[w_{i},w_{j}]=\frac{\Gamma(n+1+\nu_{i})}{(-\alpha_{j})^{n+\nu_{i}}}\,\gamma_{n}^{(i,j)},\qquad(36)$$

an expression that, together with Eq. (25), would allow one to calculate the connection factors given by Eq. (23). Nevertheless, the validity of Eq. (36) is subordinate to the fulfilment of the condition $|\arg(-\alpha_{j}z)|<\pi$, necessary for the validity of Eq. (34). Such condition is satisfied in the case $j=3$, as, for negative energies, $-\alpha_{3}z>0$ on the positive real semiaxis. There is no difficulty in computing $T_{i,4}$ by substituting, in the second of Eqs. (23),

$$W[w_{i},w_{3}]=\frac{\Gamma(n+1+\nu_{i})}{(-\alpha_{3})^{n+\nu_{i}}}\,\gamma_{n}^{(i,3)}.\qquad(37)$$

In the case $j=4$, instead, the above mentioned condition is not satisfied and Eq. (36) is not valid. In fact, the positive real semiaxis is a Stokes ray, and $T_{i,3}$ should be taken as the average

$$T_{i,3}=\tfrac{1}{2}\left(T_{i,3}^{+}+T_{i,3}^{-}\right)\qquad(38)$$

of its values in the sectors separated by the ray. Equivalently, one may define

$$W[w_{i},w_{4}]=\tfrac{1}{2}\left(W[w_{i},w_{4}]^{+}+W[w_{i},w_{4}]^{-}\right),\qquad(39)$$

an average of the Wronskians computed slightly above and below the positive real semiaxis. The result is

$$W[w_{i},w_{4}]=(-1)^{n}\cos(\nu_{i}\pi)\,\frac{\Gamma(n+1+\nu_{i})}{(\alpha_{4})^{n+\nu_{i}}}\,\gamma_{n}^{(i,4)},\qquad i=1,2.\qquad(40)$$

This equation provides the needed value of the numerator in the first of Eqs. (23). The procedure to calculate the Wronskians $W[w_{i},w_{k}]$ ($k=5,6$) of each one of the Floquet solutions with the two Thomé solutions at the origin is analogous to that just described, with the unavoidable differences due to the fact that the singularity at the origin is of rank five, whereas it was of rank one at infinity. The auxiliary functions are now

$$v_{i,k}=\exp(-\beta_{k}z^{-5}/10)\,w_{i},\qquad v_{k}=\exp(-\beta_{k}z^{-5}/10)\,w_{k}.\qquad(41)$$

Then,

$$W[v_{i,k},v_{k}]=\exp(-\beta_{k}z^{-5}/5)\,W[w_{i},w_{k}].\qquad(42)$$

For the left hand side we have the doubly infinite series

$$W[v_{i,k},v_{k}]\sim\sum_{n=-\infty}^{\infty}\gamma_{n}^{(i,k)}z^{n+\nu_{i}+\rho_{k}},\qquad(43)$$

with coefficients

$$\gamma_{n}^{(i,k)}=\sum_{m=0}^{\infty}b_{m,k}\left(-\beta_{k}c_{n-m+6,i}+(-n+2m-1-\nu_{i}+\rho_{k})\,c_{n-m+1,i}\right),\qquad(44)$$

which obey the fifth order difference equation

$$(n-5+\nu_{i}+\rho_{k})\,\gamma_{n-5}^{(i,k)}-\beta_{k}\gamma_{n}^{(i,k)}=0.\qquad(45)$$

Five independent solutions of this difference equation are constituted by the coefficients of the five Heaviside's exponential series

$$\exp(-\beta_{k}z^{-5}/5)\sim\sum_{n=-\infty}^{\infty}\frac{(-\beta_{k}z^{-5}/5)^{n+\delta_{L}^{(i,k)}}}{\Gamma(n+1+\delta_{L}^{(i,k)})},\qquad L=0,1,\ldots,4,\qquad(46)$$

with

$$\delta_{L}^{(i,k)}=(-\nu_{i}-\rho_{k}+L)/5.\qquad(47)$$

Then, analogously to Eqs. (37) and (40), one has

$$W[w_{i},w_{5}]=\sum_{L=0}^{4}\frac{\Gamma(n+1+\delta_{L}^{(i,5)})}{(-\beta_{5}/5)^{n+\delta_{L}^{(i,5)}}}\,\gamma_{-5n-L}^{(i,5)},\qquad(48)$$

$$W[w_{i},w_{6}]=(-1)^{n}\sum_{L=0}^{4}\cos(\delta_{L}^{(i,6)}\pi)\,\frac{\Gamma(n+1+\delta_{L}^{(i,6)})}{(\beta_{6}/5)^{n+\delta_{L}^{(i,6)}}}\,\gamma_{-5n-L}^{(i,6)}.\qquad(49)$$

Now it is immediate to calculate the connection factors $T_{i,5}$ and $T_{i,6}$ by means of Eq. (24).

## 4 Bound states

By using the above described procedure, we have determined the values of $\varepsilon$ which are solutions of Eq.
(21) for different intensities of the potential and for five values of the angular momentum. The results are shown graphically in Figure 1. Besides the energies of the bound states, our procedure gives also their wave functions. For the values of $\varepsilon$ satisfying Eq. (21), $A_{1}$ and $A_{2}$ can be determined, save for a common arbitrary multiplicative constant, by using any one of Eqs. (20). To fix the arbitrary constant, we may impose, for instance, that

$$A_{1}T_{1,3}+A_{2}T_{2,3}=1.\qquad(50)$$

Then

$$A_{1}=\frac{T_{2,4}}{T_{1,3}T_{2,4}-T_{2,3}T_{1,4}},\qquad A_{2}=\frac{-T_{1,4}}{T_{1,3}T_{2,4}-T_{2,3}T_{1,4}},\qquad(51)$$

and, in view of Eqs. (17) and (6), the wave function of the bound state becomes

$$w_{\mathrm{phys}}(z)=N\left(A_{1}z^{\nu_{1}}\sum_{n=-\infty}^{\infty}c_{n,1}z^{n}+A_{2}z^{\nu_{2}}\sum_{n=-\infty}^{\infty}c_{n,2}z^{n}\right),\qquad(52)$$

$N$ being a normalization constant such that

$$\int_{0}^{\infty}dz\,|w_{\mathrm{phys}}(z)|^{2}=r_{e}^{-1}.\qquad(53)$$

For large values of $z$, the series in Eq. (52) converge slowly and are not convenient for the computation of $w_{\mathrm{phys}}$. In this case, it is preferable to use the asymptotic expansion

$$w_{\mathrm{phys}}(z)\sim N\exp(\alpha_{3}z)\sum_{m=0}^{\infty}a_{m,3}z^{-m},\qquad z\to\infty,\qquad(54)$$

stemming from

$$w_{\mathrm{phys}}(z)\sim N\left((A_{1}T_{1,3}+A_{2}T_{2,3})\,w_{3}(z)+(A_{1}T_{1,4}+A_{2}T_{2,4})\,w_{4}(z)\right),\qquad(55)$$

bearing in mind Eqs. (20) and (50) and the expansion in Eq. (8). For the same reason, one should use the asymptotic expansion

$$w_{\mathrm{phys}}(z)\sim N\,(A_{1}T_{1,5}+A_{2}T_{2,5})\exp(\beta_{5}z^{-5}/5)\sum_{m=0}^{\infty}b_{m,5}z^{m},\qquad z\to0,\qquad(56)$$

in the neighbourhood of the origin.

We have obtained, by way of illustration, the parameters of the four existing bound states in a potential of a given intensity. Tables 1 to 4 show the values of the energy, the indices of the Floquet solutions, the connection factors, and the coefficients to be substituted in Eq. (52), for each one of those bound states. For the determination of the indices and the coefficients of the Floquet solutions, we used the Newton iteration method, recalled in the Appendix. We benefited from the subroutines bandec and banbks [17, pp. 45–46] to obtain the initial values, and from ludcmp and lubksb [17, pp. 38–39] in the iteration process. Double precision Fortran was used in the computation. The iteration was stopped when the correction to the absolute value of $\nu$ became less than a preassigned tolerance. Usually, two or three iterations were enough. Simultaneously, the coefficients $c_{n,i}$ were obtained. (Due to the fact that Eq. (7) relates coefficients with subindexes of the same parity, the ambiguity in the definition of $\nu_{i}$, mentioned in Subsection 2.1, allows one to cancel all coefficients with odd $n$.) According to the condition (63), to be justified in the Appendix, the indices of the Floquet solutions either are real or, being complex, have opposite imaginary parts. In this case, thanks to the ambiguity in the definition of the $\nu_{i}$, one may choose them to be complex conjugate to each other. Then, $\nu_{2}$, $c_{n,2}$, and $A_{2}$ are the complex conjugates of, respectively, $\nu_{1}$, $c_{n,1}$, and $A_{1}$. Consequently, $w_{\mathrm{phys}}$ becomes real.

A word of caution about the computation of the wave function is in order. Our double precision calculations have revealed that Eq. (52), with the series suitably truncated, allows one to obtain values of $w_{\mathrm{phys}}$ with eight correct significant digits only at moderate distances, whereas the asymptotic expansions in Eqs. (54) and (56) become useful, respectively, at large distances and near the origin. Therefore, double precision is not sufficient for a computation of the values of $w_{\mathrm{phys}}$ in the whole interval $0<z<\infty$. Quadruple precision calculations, instead, provide satisfactory results.

## 5 Critical values of the intensity

It may be interesting to know the values of $\lambda$ for which a new bound state (of zero energy) appears. Our method of solution of the Schrödinger equation is also applicable in this case, but in a much simpler form.
For zero energy, the singular point at infinity is a regular one and the basic Floquet solutions of the general case are replaced by Frobenius solutions whose coefficients can be obtained trivially. The procedure in this case is the same used to obtain the scattering length [18]. In fact, as is well known, the presence of a new bound state of zero energy is revealed by a pole in the scattering length. We report, in Table 5, some critical values of the intensity for different values of the angular momentum $l$.

## 6 Comments

We have shown the applicability of our method for obtaining global solutions of the Schrödinger equation in the case of bound states in a (12,6) Lennard-Jones potential. The method can be similarly applied to any other Lennard-Jones-type potential, whatever the exponents in the attractive and repulsive terms. The physical solution results as a determined linear combination of the two Floquet solutions, and its asymptotic expansion at the singular points is proportional to the respective regular Thomé solutions. Given a value of the intensity of the potential, a study of the indices of the Floquet solutions reveals that they are real for small energy. They may be taken in the interval $(0,1)$, with $\nu_{1}+\nu_{2}=1$. As the energy increases, one of them increases and the other decreases, both approaching the value 1/2 for a certain energy. At that energy only one multiplicative solution exists: any other independent solution of the Schrödinger equation contains logarithmic terms. Increasing the energy further makes both $\nu_{1}$ and $\nu_{2}$ become complex, with fixed common real part equal to 1/2 and opposite imaginary parts increasing with the energy. The physical wave function, however, may be taken real by adjusting the arbitrary global phase. The critical values of the intensity discussed in Section 5 deserve special mention. Our Table 5 allows one to know immediately the number of states, of each angular momentum, bound by a potential of given intensity.

## Appendix

We have mentioned in Subsection 2.1 that the computation of the indices and coefficients of the Floquet solutions can be treated as a nonlinear eigenvalue problem, whose solution we are going to consider in this Appendix. Along it we will omit, for brevity, the subindex $i$ in $\nu_{i}$ and $c_{n,i}$. The condition in Eq. (6) implies that

$$\lim_{n\to\pm\infty}|c_{n}|=0,\qquad(57)$$

which allows one to truncate the infinite set of equations (7) and to restrict the label $n$ to the interval $-M\leq n\leq N$, both $M$ and $N$ being positive integers large enough to guarantee that the solution of the truncated problem does not deviate significantly from that of the original infinite one. Algorithms to solve finite-order problems have been discussed by Ruhe [19]. Here we recall the Newton iteration method suggested by Naundorf [20]. The procedure consists in moving from an approximate solution, $\{\nu^{(i)},c_{n}^{(i)}\}$, to another one, $\{\nu^{(i+1)},c_{n}^{(i+1)}\}$, by solving the system of equations

$$\varepsilon c_{n-2}^{(i+1)}+\left[(n+\nu^{(i)})(n-1+\nu^{(i)})-l(l+1)\right]c_{n}^{(i+1)}+2\lambda c_{n+4}^{(i+1)}-\lambda c_{n+10}^{(i+1)}+(2n-1+2\nu^{(i)})\,c_{n}^{(i)}\left(\nu^{(i+1)}-\nu^{(i)}\right)=0,\qquad n=-M,\ldots,-1,0,1,\ldots,N,\qquad(58)$$

$$\sum_{n=-M}^{N}c_{n}^{(i)*}c_{n}^{(i+1)}=1,\qquad(59)$$

that results, by linearization [20], from (7) and from the truncated normalization condition

$$\sum_{n=-M}^{N}|c_{n}|^{2}=1.$$

Obviously, the values of $c_{n}^{(i+1)}$ with $n<-M$ or $n>N$ entering in some of Eqs. (58) should be taken equal to zero, in accordance with the truncation done. The iteration process is stopped when the difference between consecutive solutions, $\{\nu^{(i)},c_{n}^{(i)}\}$ and $\{\nu^{(i+1)},c_{n}^{(i+1)}\}$, is satisfactorily small. The resulting values of $\nu$ and $c_{n}$ may serve as initial values for a new iteration process, with larger values of $M$ and $N$, to check the stability of the solution.
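One Newton step of Eqs. (58)-(59) amounts to solving a single linear system in the unknowns $c_{n}^{(i+1)}$ and $\nu^{(i+1)}-\nu^{(i)}$. A compact numpy sketch of such a step (illustrative only; variable names are mine, and no claim is made that this matches the author's banded Fortran implementation):

```python
import numpy as np

def newton_step(nu0, c0, eps, lam, l, M, N):
    """One linearized iteration of Eqs. (58)-(59).

    nu0 : current index estimate; c0 : complex array of length M+N+1,
    holding c_n for n = -M..N. Returns (nu1, c1).
    """
    K = M + N + 1                         # number of retained coefficients
    A = np.zeros((K + 1, K + 1), dtype=complex)
    rhs = np.zeros(K + 1, dtype=complex)
    for row, n in enumerate(range(-M, N + 1)):
        A[row, row] = (n + nu0) * (n - 1 + nu0) - l * (l + 1)
        if row - 2 >= 0:
            A[row, row - 2] = eps         # eps * c_{n-2}
        if row + 4 < K:
            A[row, row + 4] = 2.0 * lam   # 2*lam * c_{n+4}
        if row + 10 < K:
            A[row, row + 10] = -lam       # -lam * c_{n+10}
        A[row, K] = (2 * n - 1 + 2 * nu0) * c0[row]   # coefficient of dnu
    A[K, :K] = np.conj(c0)                # normalization row, Eq. (59)
    rhs[K] = 1.0
    sol = np.linalg.solve(A, rhs)
    return nu0 + sol[K], sol[:K]
```

Iterating newton_step until the change in $\nu$ falls below a tolerance reproduces the scheme described above; coefficients outside the truncation window are implicitly zero, as required.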
Of course, the Newton method just described needs initial values not far from the true solution. The two different values of $\nu$ can be obtained from the two eigenvalues

$$\exp(2i\pi\nu_{i}),\qquad i=1,2,\qquad(60)$$

of the circuit matrix [21] for the singular point at the origin. The entries of that matrix can be computed by numerically integrating Eq. (5) on the unit circle, from $z=e^{0}$ to $z=e^{2i\pi}$, for two independent sets of initial values. If we consider two solutions, $w_{a}$ and $w_{b}$, obeying, for instance, the conditions

$$w_{a}(e^{0})=1,\quad w'_{a}(e^{0})=0,\qquad w_{b}(e^{0})=0,\quad w'_{b}(e^{0})=1,$$

then

$$C_{11}=w_{a}(e^{2i\pi}),\quad C_{12}=w_{b}(e^{2i\pi}),\qquad C_{21}=w'_{a}(e^{2i\pi}),\quad C_{22}=w'_{b}(e^{2i\pi}),$$

and

$$\nu=\frac{1}{2i\pi}\ln\left[\frac{1}{2}\left(C_{11}+C_{22}\pm\sqrt{(C_{11}-C_{22})^{2}+4C_{12}C_{21}}\right)\right].\qquad(61)$$

The two signs in front of the square root produce two different values for $\nu$, unless the parameters $\lambda$ and $\varepsilon$ in Eq. (5) be such that the square root vanishes, in which case only one multiplicative solution appears, any other independent solution containing logarithmic terms. The ambiguity in the real part of $\nu$ due to the multivaluedness of the logarithm in the right hand side of (61) reflects the fact, already mentioned, that the indices are not uniquely defined. Notice that

$$\exp(2i\pi\nu_{1})\exp(2i\pi\nu_{2})=\det C=W[w_{a},w_{b}]=1\qquad(62)$$

and, therefore,

$$\nu_{1}+\nu_{2}=0\pmod 1.\qquad(63)$$

This may serve as a test for the integration of Eq. (5) on the unit circle. Although Eq. (61) is exact, the $C_{ij}$ are obtained by numerical integration of a differential equation and are not sufficiently precise. The resulting values of $\nu$ may only be considered as starting values, $\nu^{(0)}$, for the Newton iteration process. As starting coefficients one may use the solutions of the homogeneous system

$$\varepsilon c_{n-2}^{(0)}+\left[(n+\nu^{(0)})(n-1+\nu^{(0)})-l(l+1)\right]c_{n}^{(0)}+2\lambda c_{n+4}^{(0)}-\lambda c_{n+10}^{(0)}=0,\qquad n=-M,\ldots,-1,0,1,\ldots,N,\qquad(64)$$

with the already mentioned truncated normalization condition

$$\sum_{n=-M}^{N}|c_{n}^{(0)}|^{2}=1.\qquad(65)$$

## Acknowledgments

Financial support from Departamento de Ciencia, Tecnología y Universidad del Gobierno de Aragón (Project E24/1) and Ministerio de Ciencia e Innovación (Project MTM2009-11154) is gratefully acknowledged.

## References

• [1] T.C. Lim, Connection among classical interatomic potential functions. J. Math. Chem. 36, 261–269 (2004).
• [2] F.M. Fernández, E.A. Castro, Algebraic Methods in Quantum Chemistry and Physics (CRC Press, Boca Ratón, 1996).
• [3] F.M. Fernández, Introduction to Perturbation Theory in Quantum Mechanics (CRC Press, Boca Ratón, 2001).
• [4] K.J. Oyewumi, K.D. Sen, Exact solutions of the Schrödinger equation for the pseudoharmonic potential: an application to some diatomic molecules. J. Math. Chem. 50, 1039–1050 (2012).
• [5] H. Akcay, R. Sever, Analytical solutions of Schrödinger equation for the diatomic molecular potentials with any angular momentum. J. Math. Chem. 50, 1973–1987 (2012).
• [6] M. Hamzavi, S.M. Ikhdair, K.-E. Thylwe, Equivalence of the empirical shifted Deng-Fan oscillator potential for diatomic molecules. J. Math. Chem. 51, 227–238 (2013).
• [7] K.J. Oyewumi, O.J. Oluwadare, K.D. Sen, O.A. Babalola, Bound state solutions of the Deng-Fan molecular potential with the Pekeris type approximation using the Nikiforov–Uvarov (N–U) method. J. Math. Chem. 51, 976–991 (2013).
• [8] T.E. Simos, J. Vigo-Aguiar, A symmetric high order method with minimal phase-lag for the numerical solution of the Schrödinger equation. Int. J. Modern Phys. C 12, 1035–1042 (2001).
• [9] T.E. Simos, J. Vigo-Aguiar, An exponentially-fitted high order method for long-term integration of periodic initial-value problems. Comput. Phys. Commun. 140, 358–365 (2001).
• [10] T.E. Simos, J.
Vigo-Aguiar, A dissipative exponentially-fitted method for the numerical solution of the Schrödinger equation and related problems. Comput. Phys. Commun. 152, 274–294 (2003).
• [11] J. Vigo-Aguiar, H. Ramos, Variable stepsize implementation of multistep methods for $y''=f(x,y,y')$. J. Comput. Appl. Math. 192, 114–131 (2006).
• [12] T.E. Simos, New high order multiderivative explicit four-step methods with vanished phase-lag and its derivatives for the approximate solution of the Schrödinger equation. Part I: Construction and theoretical analysis. J. Math. Chem. 51, 194–226 (2013).
• [13] S. Flügge, Practical Quantum Mechanics (Springer, New York, 1974).
• [14] F.J. Gómez, J. Sesma, Connection factors in the Schrödinger equation with a polynomial potential. J. Comput. Appl. Math. 207, 291–300 (2007).
• [15] F.J. Gómez, J. Sesma, Spiked oscillators: exact solution. J. Phys. A: Math. Theor. 43, 385302 (2010).
• [16] G.H. Hardy, Divergent Series (Clarendon Press, Oxford, 1949).
• [17] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in Fortran 77 (Cambridge University Press, Cambridge, 1992).
• [18] F.J. Gómez, J. Sesma, Scattering length for Lennard-Jones potentials. Eur. Phys. J. D 66, 6 (2012).
• [19] A. Ruhe, Algorithms for the nonlinear eigenvalue problem. SIAM J. Numer. Anal. 10, 674–689 (1973).
• [20] F. Naundorf, Ein Verfahren zur Berechnung der charakteristischen Exponenten von linearen Differentialgleichungen zweiter Ordnung mit zwei singulären Stellen. ZAMM 57, 47–49 (1977).
• [21] W. Wasow, Asymptotic Expansions for Ordinary Differential Equations (Dover, Mineola, N.Y., 2002).
# Section 6.4 Discontinuous Forcing Functions: Problem 5

###### Question:

Consider the following initial value problem:

y(0) = 0, y′(0) = 0.

Using Y for the Laplace transform of y(t), i.e., Y = L{y(t)}, find the equation you get by taking the Laplace transform of the differential equation, and solve for Y(s).

#### Similar Solved Questions

##### Part A: MgF2, magnesium fluoride. Express your answers using two decimal places separated by a comma: mass % Mg, mass % F = %. Part B: Ca(OH)2, calcium hydroxide. Express your answers using two decimal places separated by commas. ...

##### Super Security Co. offers a range of security services for athletes and entertainers. Each type of service is considered within a separate department. Marc Pincus, the overall manager, is compensated partly on the basis of departmental performance by staying within the quarterly cost budget. He ofte...

##### A rollercoaster car shown in the figure above is sent up a track by a very strong spring, compressed by 1.00 m from its equilibrium position. The mass of the car (with passengers) is 1200 kg. At the highest point of its trajectory the radial component of the acceleration of the car is 12.0 m/s². The radius of the loop is R = 10.0 m. The magnitude of the work done by non-conservative forces (air resistance, friction, etc.) on the cart while it moves from point A to point B is 200 kJ. In m/s, ...

##### Use the appropriate formula to solve this problem: 84% of surveyed adults with a college degree said they will take the COVID vaccine once it becomes available to them.
What is the probability that out of a sample of 44 adults with a college degree, exactly 41 will take the COVID vaccine once it becomes available to them? Fill in the following list: n = , p = , q = . Then work the problem (round the answer to four decimal places).

##### End of Chapter Quiz. Answer True (T) or False (F): 1. Competition is protected by law in the United States. 2. In perfect competition, government regulates business activities. 3. In a monopoly, one producer or seller has total control of the supply and price of a certain product. 4. A patent protects ...

##### Consider the following misperceptions model of the economy. AD: Y = 600 + 10(M/P). SRAS: Y = Ȳ + P − Pᵉ. Okun's Law: (Y − Ȳ)/Ȳ = −2(u − ū). Let Ȳ = 750, ū = 0.05, M = 600, and Pᵉ = 40. a. What is the price level? (2%) b. Suppose there is an unanticipated increase in the nominal money sup...

##### 1. Assuming there are 365 days in a year, how many days are required for the Earth to undergo an angular displacement of 2.65 rad as it revolves around the Sun? (days) 2. A wagon wheel consists of a 2.77-kg hoop of radius 0.625 meters and 12 spokes of 0.207 kg each. (a) What is the moment of inertia of the ...

##### Discharge Summary: This 28-year-old female presented to the hospital on 4/14/XX in active labor and was admitted to the labor and delivery area. Pregnancy History: This is the second pregnancy for this woman. She previously delivered a male infant 2 years ago. This pregnancy was normal with no compl...

##### Identify the amplitude, period, and shifts. Sketch the function ONLY for this interval, otherwise points will be deducted: [0, π]. Clearly label ticks on the horizontal and vertical axes, marking maxima and minima. Do not use a graphing calculator; the x-axis must be in π notation. y = 3 sin(2x − π)
Do not use a graphing calculator; the x-axis must be in π notation. y = 3 sin(2x − π)

##### 1. The surface of a lake is represented by a region R in the xy-plane such that the depth (in meters) under the point (x, y) is h(x, y) = 300 − 2x² − 3y². (a) In what direction should a boat at P(4, 9) sail in order for the depth of the water to decrease most rapidly? (b) In what direction does the depth remain the same, i.e., the rate of change is zero? (50 marks)

##### Expand the logarithmic expression $\log\sqrt[3]{\dfrac{x^7}{10y^3}}$.

##### 6. Let (X, T) be a topological space such that every subset of X is closed. Then: a. (X, T) is an indiscrete space; b. (X, T) is a discrete space; c. T is the finite closed topology on X; d. none of the above. Let X be an infinite set equipped with the finite closed topology; a finite subset of X is: closed / open / clopen / neither open nor closed. We define the real-valued function f(x) = x². The inverse image of {−9, −1, 0, 4} is: a. {0, 1, 2, 3}; b. {0, 2}; c. {−2, 0, 2}; d. none of the above. Let f : ℝ → ℝ be defined by f(x) = 5 − 2x; then f⁻¹(]1, 3[) ...

##### (8 pts) For the function f(x) = ( … )/(4 − 3x): find the vertical asymptote, the horizontal asymptote, and the x-intercept.

##### 6. You made a solution of sodium benzoate by dissolving 1.9 g of sodium benzoate in 20 ml of distilled water. Calculate its molarity (MW of sodium benzoate = 144 g/mol).

##### Assume that Flint Corp. earned net income of $3,605,000 during 2021. In addition, it had 104,000 shares of 9%, $100 par nonconvertible, noncumulative preferred stock outstanding for the entire year. Because of liquidity considerations, however, the company did not declare and pay a preferred dividen...

##### (5 pts) Let A and B be the given matrices, where X = (x₁, x₂, x₃). a) Find A⁻¹, then solve AX = b. b) Find B⁻¹, then solve BX = b.

##### [3] The slope of the line of the graph of ln k versus 1/T is −3.61 × 10³ K. Determine the factor by which the rate constant changes when the temperature increases from 0 °C to 25 °C. Answer: 3.0
##### Combustion of hydrocarbons such as methane (CH4) produces carbon dioxide, a greenhouse gas. Greenhouse gases in the Earth's atmosphere can trap the Sun's heat, raising the average temperature of the Earth. For this reason there has been a great deal of international discussion about whether to regulate the production of carbon dioxide. Write a balanced chemical equation, including physical state symbols, for the combustion of gaseous methane into gaseous carbon dioxide and gaseous water.
# Prediction of the Fatigue Life of $SiC_w$/Al Composites by Monte-Carlo Simulation

• Published : 1996.05.01

#### Abstract

It requires much time and cost to obtain the fatigue crack growth life and fatigue crack growth path morphology from fatigue crack growth tests. In this study, a Monte-Carlo simulation program was developed to predict the fatigue crack growth life and fatigue crack growth path morphology of metal matrix composites. The fatigue crack growth lives of 5%, 10%, 15%, 20%, 25% and 30% $SiC_w$/Al composites were predicted by using the Monte-Carlo simulation. The fatigue crack growth lives of the 25% $SiC_w$/Al composite and the Al matrix obtained from the Monte-Carlo simulation were then compared with fatigue lives from experiments in order to verify the accuracy of the Monte-Carlo simulation program.

#### Keywords

Metal Matrix Composites; Monte-Carlo Simulation; Fatigue Crack Growth Life Prediction; Probability Distribution; Random Number
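The paper's own simulation is not reproduced here, but the general approach can be illustrated with a minimal sketch: sample a material scatter parameter from an assumed probability distribution, integrate a crack-growth law for each trial, and collect the resulting life distribution. All numerical values below (Paris-law constants, stress range, crack lengths, lognormal scatter) are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Paris law da/dN = C (dK)^m with dK = ds * sqrt(pi * a)  (illustrative values)
m, ds = 3.0, 120.0            # exponent, stress range [MPa]
a0, ac = 0.5e-3, 10e-3        # initial / critical crack length [m]
n_trials = 10_000

# Scatter in material response modelled by a lognormal C (assumed distribution).
C = rng.lognormal(mean=np.log(4e-12), sigma=0.3, size=n_trials)

# Closed-form integration of the Paris law for m != 2:
# N = (ac^(1-m/2) - a0^(1-m/2)) / (C (ds sqrt(pi))^m (1 - m/2))
k = 1.0 - m/2.0
lives = (ac**k - a0**k) / (C * (ds*np.sqrt(np.pi))**m * k)

print(f"median life: {np.median(lives):.3e} cycles")
print(f"10th-90th percentile: {np.percentile(lives, 10):.3e} - "
      f"{np.percentile(lives, 90):.3e}")
```

Comparing the simulated life distribution against experimental lives, as the abstract describes for the 25% composite and the Al matrix, is then a matter of overlaying the empirical and simulated percentiles.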
# IbexSolve¶

This documentation is for using IbexSolve with the command prompt. To use IbexSolve in C++ or program your own solver with Ibex, see the programmer guide.

## Getting started¶

### The very basic idea¶

IbexSolve solves systems of equations in a complete and validated way. If you have an equation, say $x^2=1$, a classical numerical solver will return a single approximate root, say $x=0.999...$, or maybe something close to the other root $-1$. But you will not get both roots, and you will not know exactly how far the returned value is from the actual root. IbexSolve will give you the following answer:

$x \in [-1.001,-0.999] \quad \mbox{or} \quad x\in[0.999,1.001].$

First, all solutions are returned: this is what we mean by completeness. Second, each actual solution is rigorously enclosed in an interval: this is what validation means.

### First example (well-constrained)¶

Open a terminal (move to the bin subfolder if necessary) and run IbexSolve with, for example, the problem named Kin1.bch located at the specified path:

ibexsolve [ibex-lib-path]/benchs/solver/non-polynom/Kin1.bch

After a short delay, the following result should be displayed:

solving successful!
number of solution boxes: 16
number of boundary boxes: --
number of unknown boxes: --
number of pending boxes: --
cpu time used: 0.122523s
number of cells: 47

You see that IbexSolve has found 16 solutions. To obtain the solutions, just run the same command with the option -s. Each solution is displayed as a list of thin intervals enclosing the components of the true solution:

solution n°1 = ([0.3999964622870867, 0.3999964622870879] ; [0.819005889921108, 0.8190058899211153] ; ...)

Also reported here are the CPU time (around a tenth of a second in this case) and the number of "cells" required. This number basically corresponds to the total number of hypotheses (bisections) that were required to solve the problem. It gives an indication of its complexity. The file Kin1.bch is a plain text file; you can open it with any editor. You will see that it is a problem with 6 variables and 6 non-linear constraints, with sine and cosine operators. The file is written in the Minibex syntax.

### Second example (under-constrained)¶

One important originality of IbexSolve (compared to the other interval tools) is that it is not limited to square (well-constrained) systems as in the previous example. Open your editor and type the following text in a circle.mbx file:

variables
x,y;
constraints
x^2+y^2=1;
end

The solution set in this case is a full curve in the plane, the unit circle. Then run IbexSolve to solve it:

ibexsolve circle.mbx

You may expect to obtain a bunch of boxes in return enclosing the curve, like in the picture below (we have superimposed the circle (in red) for clarity). IbexSolve can calculate that. But this means that you expect a fine description of the curve and that you accept to pay the inevitable price of a voluminous output (especially in higher dimension). A different and opposite strategy would be to expect in return a single box enclosing the curve, that is, the square [-1,1]x[-1,1]. You would then have a minimal output but a very coarse description of the curve. The default behavior of IbexSolve is somehow a best compromise between these two extreme strategies. It tries to return a minimal number of boxes while capturing the "topology" of the solution set. In the circle example, IbexSolve will just produce 11 solutions. They are depicted below. As you can see, the paving with the boxes is a rough description of the circle. Still, we see that the overall shape is captured.
This paving clearly looks different than if we had solved, say, a linear equation. In more precise terms, each solution box has the property of being crossed by the curve in a regular way. Look for instance at the gray box. The curve makes no loop or u-turn whatsoever inside the box and crosses it from side to side along the y-axis. Formally, it is proven for this box that for all values y in [y] there exists x in [x], and a single one, such that (x,y) is a point of the curve. More exactly, we have:

$\forall y\in\mathring{[y]} \quad \exists ! \ x\in\mathring{[x]} \quad x^2+y^2=1$

where $$\mathring{\cdot}$$ stands for the interior. All the 11 boxes have this property, except that the roles of x and y can be switched, depending on whether the box is more horizontal or vertical. This information is given in the output data. Of course, all this generalizes to any dimension. If you need to refine the paving, that is, to have boxes of smaller size, use the eps-max parameter. For instance, if we run IbexSolve using -E 0.5 (or equivalently, --eps-max=0.5), we obtain the following paving:

### Third example (inequalities)¶

Let us now turn to a single inequality. Just change the "=" sign of the previous example to "<=":

variables
x,y;
constraints
x^2+y^2<=1;
end

Now the result is:

number of solution boxes: 11629
number of boundary boxes: --
number of unknown boxes: 8941
number of pending boxes: --
cpu time used: 2.38774s
number of cells: 41139

Below is the plot of all solution boxes (on the left) and the plot of all unknown boxes (on the right), together with a zoom on a fraction of the unknown boxes. This time, the solution boxes are all entirely inside the disk and the so-called unknown boxes enclose the boundary. The choice of this terminology, and its consistency with the previous example, is justified further below. Just notice that the full disk is covered by the union of solution and unknown boxes. It is possible to set the thickness of the boundary using the eps-min parameter. For instance, if you run IbexSolve using -e 0.1 (or equivalently, --eps-min=0.1), you obtain:

### Scope and limits¶

As illustrated by our previous examples, IbexSolve can solve any system of nonlinear equations and inequalities in a complete and validated way, including underconstrained systems. All usual operators are allowed, including trigonometric functions but also sign, absolute value and min/max operators. Furthermore, IbexSolve is an end-user program cooked by the Ibex team that resorts to a unique black-box strategy (whatever the input problem is), with a very limited number of parameters. Needless to say, this strategy is a kind of compromise and not the best one for a specific problem. For programmers, the core library actually offers a generic solver, a C++ class that allows to easily build your own solver.

The main shortcoming of IbexSolve is that time is not bounded. This solver is not appropriate for online computations. You may typically expect some seconds or minutes of computing for small-scale problems (less than 10 variables). But, beyond, it can take hours or more.

## The output of IbexSolve¶

Let us first formally define what a system is. We call a system the given of

1- m equations

$\forall i\in\{1,\ldots,m\}, \quad f_i(x)=0$

or, in short, f(x)=0, with $$f:\mathbb{R}^n\to\mathbb{R}^m$$. If m=0 then $$\{1,\ldots,m\}=\emptyset$$, so the relation f(x)=0 becomes a tautology and can be omitted.
2- p inequalities

$\forall i\in\{1,\ldots,p\}, \quad g_i(x)\leq0$

or, in short, g(x)<=0, with $$g:\mathbb{R}^n\to\mathbb{R}^p$$. If p=0 then g(x)<=0 is a tautology and can be omitted.

In the sequel:

• n will denote the number of variables
• m the number of equations (can be zero)
• p the number of inequalities (can be zero).

We call manifold the set M of solution points of a given system. IbexSolve produces 4 different types of boxes:

• the set S of solution boxes
• the set B of boundary boxes
• the set U of unknown boxes
• the set P of pending boxes

The first important property is that the manifold is covered by these sets:

$M \subseteq S \cup B \cup U \cup P.$

The properties of each type of box are detailed right below, and the solver strategy further down.

## Solution boxes¶

In the case of a square system of equations, a solution box corresponds to the usual meaning, i.e., a box that is proved to contain a solution. We shall give here a more general definition that also embraces the case of underconstrained systems. In the general case, and as illustrated in the circle example, the idea behind IbexSolve is to compute boxes that capture the local "topology" of the manifold. More precisely, we consider a box as a solution when there exists a homeomorphism between the part of the manifold enclosed by the box and the unit open ball

$B:=\{x \in\mathbb{R}^{n-m}, \|x\|<1\}.$

So, [x] is a solution box only if:

$\begin{split}\left\{\begin{array}{l} \forall x\in[x], \quad g(x)\leq 0\quad\mbox{and}\\ \mathring{[x]}\cap M \quad \mbox{is homeomorphic to} \ B \end{array}\right.\end{split}$

where $$\mathring{[x]}$$ denotes the interior of [x]. Note that this definition imposes [x] to have a non-null radius on each of its components.

### Parametrization¶

When IbexSolve finds a solution, it does not only supply the box but also gives information on how the homeomorphism can be built. This is also illustrated with the gray box of our circle example where, roughly speaking, one of the variables is identified as the leading direction of the curve. More generally, IbexSolve will give you a partition of the vector x into two subsets of variables, u (called parameters) and v. The size of u is n-m and the size of v is m. For simplicity, we assume that f(u,v) stands for f(x).

Now, this partition must be interpreted as follows. First, because [x] (hence [u]) has a non-empty interior, there is a homeomorphism $$\phi_1:B\to\mathring{[u]}$$. Second, for every point u* in $$\mathring{[u]}$$, a (classical) Newton iteration applied to f(u*,.)=0, starting from some value v in [v], will converge to some v* such that (u*,v*) is a point of M inside [x]. This gives another homeomorphism $$\phi_2:\mathring{[u]}\to \mathring{[x]}\cap M$$. The sought homeomorphism is $$\phi_2 \circ \phi_1$$.

This homeomorphism corresponds to the usual concept of chart, and our partition indeed gives a local parametrization of the manifold. However, the parametrization involves a numerical algorithm (the Newton iteration), so it is only an implicit definition. But this makes sense from a practical standpoint. For instance, if one wants to plot the manifold, he/she knows that this can easily be done by sampling values of the parameter vector and computing the corresponding points using a Newton iteration. In a sense, we can say that in a solution box the manifold is processable.

### Case of n=m¶

In the case of a well-constrained system (n=m), v=x and our definition of solution box boils down to
$\exists ! \ x\in\mathring{[x]}, \quad f(x)=0,$

so that our definition of solution box in this case exactly matches the usual meaning of "solution box" in the interval community.

### Case of m=0¶

In the case of a system without equations (m=0), u=x and our definition of solution box boils down to

$\forall x\in[x], \quad g(x)\leq 0,$

so that our definition of solution box in this case exactly matches the usual meaning of "inner box" in the interval community. This explains why the solution boxes in our introduction example are inside the disk.

## Boundary boxes¶

A boundary box intuitively corresponds to a box which intersects an inequality boundary. This should not be confused with the boundary of the manifold. For instance, in the circle example in the introduction, there is no inequality, hence no boundary box. We may require additional properties of such boxes; for instance, that the inequality surface is not tangential to the boundary-free manifold f(x)=0. But checking such properties has a computational price. Sometimes, like in our disk example, we have a large number of boundary boxes and we prefer a weaker but cheaper boundary test. For this reason, we have introduced in IbexSolve different boundary policies. The policy is set thanks to the --boundary option. So far, the following policies exist:

• true: any box is considered as a boundary box (the test always succeeds). This policy is set by default for under-constrained systems, see the solver strategy.
• false: no box is considered as a boundary box (the test never succeeds). This policy is set by default for inequalities, see the solver strategy.
• full-rank: some inequalities are potentially active and the gradients of all constraints (equations and potentially activated inequalities) are linearly independent. This situation typically corresponds to constraint qualification in the realm of optimization. However, in the current state of development the inequality activation is not proved (we don't certify that f=0 and $$g_i=0$$ are simultaneously satisfied inside the box). This is still in development.
• half-ball: this option is not available yet. This policy means that the manifold inside the box is homeomorphic to the half unit ball:

$B^+:=\{x \in\mathbb{R}^{n-m}, \|x\|<1, \ x_1\geq 0\}.$

This is still in development.

## Unknown and pending boxes¶

The goal of IbexSolve is to describe a manifold with solution and boundary boxes. To this end, solution and boundary tests are used. These tests may however not apply to large boxes. Of course, one reason is that a large box may simply be neither a solution nor a boundary box. Another reason is that the tests are just sufficient conditions. So the program splits recursively the initial box until one test succeeds. This leads to a classical binary search tree. However, it is frequent that no test will ever succeed in the vicinity of some points, whatever the size of the box is. This typically happens when f is singular. For this reason, the user has to fix a parameter value $$\varepsilon_{min}$$ to stop bisection. This parameter prevents a box from being bisected again, even though no test has succeeded yet. A box [x] is not split anymore if all the components of [x] have a radius smaller than $$\varepsilon_{min}$$. But it is also possible to control the search by fixing some time limit T. As said before, when the process terminates successfully, only solution and boundary boxes are issued.
Otherwise, so-called unknown or pending boxes can appear, depending on the failure reason:

• If both the solution and boundary tests do not apply to a box which cannot be split anymore (the precision $$\varepsilon_{min}$$ being reached), the latter is classified as an unknown box.
• If a box has not been processed because of the timeout, it is classified as a pending box.

The two types of boxes are distinguished because their semantics are quite different: a pending box can potentially be successfully processed given a longer time limit, or even within the same time limit using a different exploration strategy. On the contrary, an unknown box cannot be processed successfully whatever the time limit is. The only way is by decreasing $$\varepsilon_{min}$$.

## Solving strategy¶

The solving strategy depends on the type of system:

• Inequalities only (m=0). For this type of system, IbexSolve will try to cover the manifold with either solution or boundary boxes, indifferently. We don't try to prioritize one type because they are not comparable. Indeed, solution boxes fully satisfy the inequalities g(x)<0 while boundary boxes cross g(x)=0. We may be more interested in either one. However, the default boundary policy is false, which means that, by default, we prioritize solution boxes. In fact, we even refuse to consider a box as a boundary box with this policy, which means that the paving will finely cover the manifold boundary with unknown boxes, as shown in our disk example. Note that for this type of problem, the eps-max parameter applies to both kinds of boxes.

• Under-constrained systems (0<m<n). Note that this type of system includes at least one equality. IbexSolve will try to cover the manifold with as many solution boxes as possible. So it will bisect boxes until either a solution is found or the minimal precision eps-min is reached. It is only at this point that the solver will try to eventually enforce a boundary test. And if it fails, the box is marked as unknown. So, for this type of problem, the eps-max parameter does not apply to boundary boxes. The default boundary policy is true (no time is wasted checking the boundary property).

• Well-constrained systems (m=n). For this type of system, we don't expect boundaries. If a solution of f(x)=0 also matches g(x)=0, the resulting box will be marked as unknown. In this case, the default boundary policy is false (no time is wasted checking the boundary property). The reason why it is not true as in the previous case is precisely because a boundary box is now unexpected.

## Options¶

### The eps-min parameter¶

This parameter basically allows to control the solution accuracy. It imposes the minimal width of validated boxes (boundary or solution) or, said differently, the maximal width of non-validated boxes. So this is a criterion to stop bisection: a non-validated box will not be larger than $$\varepsilon_{min}$$. Default value is 1e-3.

### The eps-max parameter¶

This parameter is the maximal width of validated boxes (boundary or solution). So this is a criterion to force bisection: a validated box will not be larger than $$\varepsilon_{max}$$ (unless there is no equality and it is fully inside the inequalities). Default value is +oo (none).

The effect of the eps-max parameter is best illustrated in the case of pure inequalities, where solution and boundary boxes have equivalent roles (cf. the solver strategy). So let us get back again to our disk example. If we use the --boundary=true option, the program immediately stops with one boundary box.
This is OK because the first box handled by the solver satisfies one of the criteria (either solution or boundary) and, since $$\varepsilon_{max}$$ is set by default to +oo, the size of this box is less than $$\varepsilon_{max}$$, so the search is over. Now, if we set $$\varepsilon_{max}$$ to 1, we obtain graphically the following result:

Note that by setting the boundary policy to true, no property at all is checked. This explains why the boundary boxes are bigger here than in the picture of the circle example. Indeed, no bisections are required to enforce the boundary property. This is just governed by the $$\varepsilon_{max}$$ parameter. If we run IbexSolve using -E 0.5, we obtain:

and using -E 0.1:

You can control this way the accuracy of the description. Of course, as before, the more accurate, the more boxes you have and the longer it takes. In the case of a system with both equations and inequalities, the parameter $$\varepsilon_{max}$$ applies to both inner and boundary boxes. If we consider now the following Minibex code:

variables
x,y;
constraints
x^2+y^2=1;
y+x>=0;
end

we obtain the following figures by using decreasing values of $$\varepsilon_{max}$$ (namely 1, 0.5 and 0.1):

### Overview¶

-e, --eps-min= : Minimal width of output boxes. This is a criterion to stop bisection: a non-validated box will not be larger than 'eps-min'. Default value is 1e-3.

-E, --eps-max= : Maximal width of output boxes. This is a criterion to force bisection: a validated box will not be larger than 'eps-max' (unless there is no equality and it is fully inside inequalities). Default value is +oo (none).

-t, --timeout= : Timeout (time in seconds). Default value is +oo (none).

--simpl= : Expression simplification level. Possible values are: 0: no simplification at all (fast). 1: basic simplifications (fairly fast), e.g. x+1+1 --> x+2. 2: more advanced simplifications without developing (can be slow), e.g. x*x + x^2 --> 2x^2; note that the DAG structure can be lost. 3: simplifications with full polynomial developing (can blow up!), e.g. x*(x-1) + x --> x^2; note that the DAG structure can be lost. Default value is 1.

-i, --input= : COV input file. The file contains an (intermediate) description of the manifold with boxes in the COV (binary) format.

-o, --output= : COV output file. The file will contain the description of the manifold with boxes in the COV (binary) format.

--format : Give a description of the COV format used by IbexSolve.

--bfs : Perform breadth-first search (instead of depth-first search, by default).

--trace : Activate trace. "Solutions" (output boxes) are displayed as and when they are found.

--stop-at-first : Stop at first solution/boundary/unknown box found.

--boundary= : Boundary policy. Possible values are: true (always satisfied; set by default for under-constrained problems, 0<m<n), false (never satisfied; set by default for inequalities, m=0), full-rank and half-ball; see the boundary boxes section.

--random-seed= : Random seed (useful for reproducibility). Default value is 1.

-q, --quiet : Print no report on the standard output.

--forced-params= : Force some variables to be parameters in the parametric proofs, separated by '+'. Example: --forced-params=x+y
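To make the Parametrization section above more concrete, here is a minimal sketch, outside of Ibex itself (plain Python/NumPy, with a hypothetical solution box for the circle example; the box bounds and starting value are assumptions), of how the (u,v) partition can be exploited: sample the parameter u = y and recover v = x by a Newton iteration on f(., y) = 0.

```python
import numpy as np

# Circle constraint f(x, y) = x^2 + y^2 - 1 = 0.
# Suppose IbexSolve returned the (hypothetical) solution box
# [x] = [0.7, 1.0], [y] = [-0.4, 0.4] with parameter u = y and v = x.
f  = lambda x, y: x**2 + y**2 - 1.0
df = lambda x, y: 2.0*x              # partial derivative w.r.t. the non-parameter x

def solve_v(y, x0=0.85, tol=1e-12):
    """Newton iteration on f(., y) = 0, starting inside [x]."""
    x = x0
    for _ in range(50):
        step = f(x, y) / df(x, y)
        x -= step
        if abs(step) < tol:
            break
    return x

# Sample the parameter interval and recover the corresponding x values.
for y in np.linspace(-0.4, 0.4, 5):
    x = solve_v(y)
    print(f"y = {y:+.2f}  ->  x = {x:.12f}  (residual {f(x, y):.1e})")
```

Each sampled parameter value yields exactly one manifold point inside the box, which is precisely the chart property the solver certifies.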
# Short Fibre Reinforced Cementitious Composites and Ceramics

Categories: Chemistry\Materials. Year: 2019. Language: English. Pages: 139. ISBN 10: 3030008673. ISBN 13: 978-3030008673. Series: Advanced Structured Materials.

© Springer Nature Switzerland AG 2019. Heiko Herrmann and Jürgen Schnell (eds.), Short Fibre Reinforced Cementitious Composites and Ceramics, Advanced Structured Materials 95. https://doi.org/10.1007/978-3-030-00868-0_4

Non-destructive Evaluation of the Contribution of Polymer-Fibre Orientation and Distribution Characteristics to Concrete Performance during Fire

Tyler Oesch, Ludwig Stelzner and Frank Weise
Bundesanstalt für Materialforschung und -prüfung, Federal Institute for Materials Research and Testing, 12205 Berlin, Germany
Tyler Oesch (corresponding author), Email: [email protected]; Ludwig Stelzner, Email: [email protected]; Frank Weise, Email: [email protected]

Abstract

Although concrete itself is not a combustible material, concrete mixtures with high density, such as high-performance concretes (HPCs), are susceptible to significant damage during fires due to explosive spalling. Past research has shown that the inclusion of polymer fibres in high-density concrete can significantly mitigate this fire damage. The exact mechanisms causing this increased spalling resistance are not yet fully understood, but it is thought that the fibres facilitate moisture transport during fire exposure, which in turn contributes to relief of internal stresses in the spalling-susceptible region. In this study, X-ray Computed Tomography (CT) was applied to observe the interaction between polymer fibres and cracking during thermal exposure. For this purpose, two concrete samples containing different polymer fibre types were subjected to incremental application of a defined thermal exposure. CT images were acquired before and after each thermal exposure, and powerful image processing tools were used to segment the various material components. This enabled a detailed analysis of crack formation and propagation as well as the visualization and quantification of polymer fibre characteristics within the concrete. The results demonstrated that the orientation of both fibres and cracks in polymer-fibre reinforced concrete tends to be anisotropic. The results also indicated that crack geometry characteristics may be correlated with fibre orientation, with cracks tending to run parallel to fibre beds. Clear quantitative relationships were also observed between heating and increasing cracking levels, expressed in terms of both crack surface area and crack volume.

1 Introduction

1.1 Performance of Concrete During Fire

Although concrete itself is not a combustible material, concrete structural components are susceptible to explosive spalling during fire. The investigation and control of this phenomenon is very important, since the spalling of individual components can have a significant effect on the overall fire resistance of structures.
Structural fire resistance, in turn, has major implications both for the safety of first responders during fires and for the costs associated with structural repair following a fire. It has been demonstrated that this spalling behaviour is at least partially related to the presence of moisture within the concrete material [9, 28]. Spalling behaviour is also increased for high-strength concretes (HSCs), which generally possess a higher overall density and lower permeability [7, 27]. The increase in the popularity and use of these high-strength concretes in building construction heightens the urgency of developing better methods for predicting and mitigating the effects of this spalling behaviour [1].

Fibre-reinforced concretes have been used in building construction since at least the 1970s [30]. Fibres have generally been used in these concrete mixes to improve the ductility and durability of the material [13, 33]. Polymer fibres have also been shown, however, to significantly contribute to spall resistance within concrete components during fire [8, 9]. The exact mechanisms causing this increased spalling resistance are not yet fully understood, but it is thought that the fibres facilitate moisture transport during fire exposure, which in turn contributes to relief of internal stresses in the spalling-susceptible region [9, 25]. The formation of a fibre-induced micro-cracking network has been identified as one important aspect of this process [25]. At the present time, there is also no comprehensive understanding of how fibre properties, such as polymer type, diameter, length, shape, and density, affect the overall spalling performance of concrete members during fire. Optimization of these parameters will need to be completed before polymer-fibre reinforced concretes are adopted into widespread use for fire protection.

1.2 X-Ray Computed Tomography

X-ray computed tomography (CT) has been used in non-destructive concrete research applications for more than 30 years [15, 19]. In this scanning method, a sample is placed on a rotating table between an X-ray source and an X-ray detector [4]. This causes an X-ray attenuation image of the sample to be projected upon the detector. By recording these projected images during the 360° rotation of the sample, mathematically-based reconstruction algorithms can be used to produce a three-dimensional representation of X-ray attenuation within the sample [3]. The X-ray attenuation, which is roughly correlated with density, of individual component materials within the sample can then be identified, and objects made from these component materials can be individually separated and analysed.

1.3 Objectives and Significance

Previous research has shown that small cracks develop in the mortar that surrounds the fibres during curing (Fig. 1). One goal of this research was to investigate to what extent these curing cracks contribute to overall material permeability prior to and during heating.

Fig. 1 Scanning Electron Microscope (SEM) images of polymer fibres before (left) and after (right) heating to 300 °C

Another goal of this research was to identify the presence of correlations between fibre and cracking orientation characteristics and to quantitatively define the strength of those correlations. Previous research has demonstrated that the fibre fields in FRCs tend to be highly anisotropic because of flow during the casting process [5, 6, 17, 18, 20–22, 26].
It is thought that this anisotropy could have a major effect on fire resistance and, if properly controlled, may serve as a means of significantly improving spalling resistance.

The third goal of this research was to quantitatively measure the crack growth during incremental heating through the use of X-ray computed tomography (CT). This CT-based data would be particularly well suited for the calibration and validation of computational models of the spalling phenomenon.

2 Materials and Sample Preparation

The investigated specimens were made of HSC and were reinforced with different amounts and types of polypropylene (PP) fibres (Table 1). The mixture HSC/PPa2 contained 2 kg/m³ of conventional PP-fibres. In contrast, for HSC/PPb1 only 1 kg/m³ of pre-treated PP-fibres was added. The PPb-fibres were pre-treated during the manufacturing process using electron irradiation. This leads to a decreased fibre-melt viscosity [24] and improves the fire performance of the concrete, despite the reduced amount of added PP-fibres.

Table 1 HSC mixture

Initially, cubes (100 × 100 × 100 mm³) were cast for both mixtures. After demoulding on the next day, these cubes were stored under water for six days and subsequently in a climate chamber at 20 °C and 65% relative humidity for a minimum of 83 days. Subsequently, cylindrical specimens with diameters of 12 mm and lengths of 100 mm were extracted from the cubes. A single cylindrical sample from each of the two cubes was selected for heating and CT analysis. These two samples will henceforward be referred to as sample PPb1 and sample PPa2, in reference to their composition. Before the first CT measurement was carried out, these cylinders were fixed firmly within a customized set of mobile clamps, which ensured consistent mounting positions during repetitive CT measurements.

3 Test Methods

After preparing and mounting the concrete cylinders, a CT measurement was completed. Thereby, the initial structure of the concrete was analysed. Afterwards, the clamped concrete cylinders were heated using a special heating regime to reach specific maximum temperatures (see Thermal Loading). Subsequent to each heating/cooling phase, a further CT measurement was carried out to investigate the crack formation in the fibre-reinforced HSC as a result of the thermal exposure.

3.1 Thermal Loading

During the test series, the specimens were heated to certain target temperatures (150, 160, 170, 180, 190, 200, 250, 300 °C) using an electrical furnace (Fig. 2). During thermal testing, the temperature was increased at a rate of 1 K/min until the respective target temperature was reached. The target temperature was then sustained for one hour, followed by cooling at a maximum rate of −0.5 K/min to room temperature prior to CT scanning.

Fig. 2 Electrical furnace containing a typical clamped specimen

The electrical furnace is controlled by a temperature controller. The oven temperature is determined on the basis of the temperature measurement of a thermocouple fixed in the rear wall of the furnace. Figure 3 shows an example temperature curve, measured with an additional thermocouple installed near the specimen, in comparison with the target temperature curve. It can be seen that the temperature in the furnace is controlled very well during the heating phase. During the cooling phase the actual cooling rate is slower than the prescribed one because of the nonlinearity of the natural cooling process.

Fig. 3 Example temperature curve for heating test at 200 °C
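For calibration work it can help to have the prescribed regime in machine-readable form. The following minimal sketch (plain Python/NumPy; the rates, hold time and 200 °C case are taken from the text above, everything else is an assumption) generates the target temperature curve of Fig. 3, a 1 K/min ramp, a one-hour hold and a −0.5 K/min cooling ramp:

```python
import numpy as np

def target_profile(t_max_c, t_room_c=20.0, heat_rate=1.0, cool_rate=0.5,
                   hold_min=60.0):
    """Piecewise-linear target temperature curve [°C] vs. time [min]."""
    t_heat = (t_max_c - t_room_c) / heat_rate      # ramp-up duration
    t_cool = (t_max_c - t_room_c) / cool_rate      # ramp-down duration
    times = np.arange(0.0, t_heat + hold_min + t_cool + 1.0)
    temps = np.piecewise(
        times,
        [times < t_heat,
         (times >= t_heat) & (times < t_heat + hold_min),
         times >= t_heat + hold_min],
        [lambda t: t_room_c + heat_rate * t,       # heating ramp
         t_max_c,                                  # one-hour hold
         lambda t: t_max_c - cool_rate * (t - t_heat - hold_min)])  # cooling
    return times, temps

times, temps = target_profile(200.0)      # the 200 °C test of Fig. 3
print(times[-1], temps[0], temps.max())   # 600.0 min total, 20 °C start, 200 °C peak
```

As the text notes, the measured curve follows this target closely while heating but lags it during natural cooling.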
3.2 CT Scanning

During this research program, an acceleration voltage of 60 kV and a current of 130 µA were used for the X-ray source. The X-ray beam was also filtered using a 1 mm thick aluminium plate immediately upon leaving the source, in order to remove unwanted bandwidths from the X-ray beam and, in turn, make the resulting images clearer. The flat panel detector used for this scanning contained a 2048 × 2048 pixel field. The resolution of the resulting CT images of the samples was 6.18 µm.

4 Image Processing

4.1 Initial Processing Procedures

All image processing was completed using the program MATLAB [16]. The images were first corrected for beam hardening, which is a CT phenomenon causing the outer edges of the sample to appear brighter than its centre. During subsequent data analysis, it was found that the full-sized sample image, which was 2048 × 2048 × 2048 voxels (a voxel is a 3D pixel), was much too large for the available image processing algorithms and computer system. Thus, a cubic sub-volume of 1200 × 1200 × 1200 voxels was digitally extracted from the centre of the original image and used for all subsequent analysis. Through the use of this sub-volume, all resulting computational demands and run times were reduced by a factor of approximately five.

To enable the density and orientation analysis of the fibres and cracks, these materials first needed to be identified and separated within the images. Although individual component materials can generally be separated within CT images using grayscale segmentation methods, that was not sufficient for these sample images. The reason for this complication was that the X-ray attenuation levels of the air pores, the cracks, and the polymer fibres were found to be all very similar and partially overlapping. Thus, more complex methods of fibre and crack detection needed to be developed.

4.2 Fibre Identification

Initially, template matching methods were used in an attempt to identify the polymer fibres, but with only limited success. One reason for the failure of this method may have been that the fibres had such a small size within the images (only approximately 2.5 voxels in diameter) that their shape was not sufficiently well-defined to consistently match the template characteristics. Another problem with this method was that the polymer fibres often exhibited significant bending within the material, which made it even more difficult to define and match a consistent fibre shape. To overcome these limitations, a customized multi-step approach was developed for fibre identification that exhibited considerable versatility and accuracy. The individual steps of this analysis procedure are outlined in the description below.

Step 1. The triangle segmentation method [34, 35] was used to identify a boundary between low-attenuation elements within the sample, including air pores, cracks, and polymer fibres, and high-attenuation elements, including aggregate and mortar, on the voxel intensity histogram (Fig. 4).
Using this attenuation threshold, a sample image containing only air pores, cracks, and fibres could be created and used for further analysis (Fig. 5).

Fig. 4 Definition of the low-high attenuation threshold on the voxel intensity histogram
Fig. 5 Image of low-attenuation materials in sample PPb1 after step 1 (left) and step 2 (right)

Step 2. All objects within the low-attenuation image were individually analysed, and those with a volume less than 16% of the standard individual fibre volume were removed from the image (Fig. 5). This was done in order to eliminate both noise and micro-pores from the image. The reason that some objects with less than 100% of the fibre volume were retained within the image was that, since the analysis was conducted on a cubic sub-volume, many partial fibres existed along the image edge. It was desirable to retain these partial fibre segments for subsequent analysis steps, as they were expected to contribute significantly to overall sample performance. The volume limitation of 16% was found through trial and error to produce a good balance between elimination of unwanted objects and retention of partial fibre segments.

Step 3. The objects within the image were contracted and then subsequently dilated by an amount equivalent to the fibre radius (rounded upward). This resulted in an image containing only objects larger in diameter than fibres. These macro-pores were subsequently removed from the image produced by step 2 in order to reduce the number of non-fibre objects (Fig. 6).

Fig. 6 Image of low-attenuation materials in sample PPb1 after step 2 (left) and after step 3 (right)

Through the removal of all objects with diameter larger than and volume smaller than that of the fibres in steps 2 and 3, it was originally thought that a clear, fibre-only image would result. It was soon found, however, that the samples contained many small micro-cracks of width similar to the fibre diameter, even prior to the application of heat. Closer inspection revealed that these micro-cracks were almost exclusively present within the aggregate, rather than the mortar.

Step 4. Since the fibres, in contrast to the cracks, are only present within the mortar, and never within aggregates, it was possible to develop an algorithm for separating the fibres from the remaining cracks. In this algorithm, each object was dilated by an amount equivalent to the fibre radius (rounded upward) and the attenuation of this dilated region was analysed. If the attenuation of the dilated region around an individual object corresponded to that representative of mortar (Figs. 7 and 8), the object was considered a fibre and retained. All other objects were considered as micro-cracks within the aggregate and eliminated (Fig. 9).

Fig. 7 Attenuation image of the fibre-reinforced concrete material visually demonstrating the attenuation differences between aggregate and mortar
Fig. 8 Definition of attenuation threshold boundaries for mortar material on the voxel intensity histogram
Fig. 9 Image of low-attenuation materials in sample PPb1 after step 3 (left) and after step 4 (right)
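A minimal sketch of the morphological core of steps 1-3 (plain Python with NumPy/SciPy rather than the MATLAB code actually used; the threshold, fibre radius and minimum object volume below are assumptions for illustration) might look as follows:

```python
import numpy as np
from scipy import ndimage

def isolate_fibre_candidates(volume, low_high_threshold, fibre_radius_vox=2,
                             min_volume_vox=50):
    """Steps 1-3: threshold, remove small objects, remove thick objects."""
    # Step 1: keep only low-attenuation voxels (pores, cracks, fibres).
    low = volume < low_high_threshold

    # Step 2: drop connected objects smaller than a fraction of a fibre volume.
    labels, n = ndimage.label(low)
    sizes = ndimage.sum_labels(low, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_volume_vox) + 1)

    # Step 3: erosion followed by dilation leaves only objects thicker than a
    # fibre; subtracting these "macro-pores" from the step-2 image keeps the
    # thin, fibre-like structures.
    ball = ndimage.generate_binary_structure(3, 1)
    eroded = ndimage.binary_erosion(keep, ball, iterations=fibre_radius_vox)
    thick = ndimage.binary_dilation(eroded, ball, iterations=fibre_radius_vox)
    return keep & ~thick

# Example on a random stand-in volume (a real scan would be loaded from disk):
vol = np.random.rand(100, 100, 100)
candidates = isolate_fibre_candidates(vol, low_high_threshold=0.02)
print(candidates.sum(), "candidate voxels")
```

Step 4 would then inspect the dilated neighbourhood of each remaining object and keep it only if the surrounding attenuation falls within the mortar band of Fig. 8.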
The resulting fibre images for samples PPb1 and PPa2 can be compared in Fig. 10. Although these images still contain some non-fibre objects, the overall contribution of these objects to the measured fibre characteristics is assumed to be small. These non-fibre objects are thought to primarily consist of cracks or small voids that intersect with fibres, as well as ring artefacts.

Fig. 10 Fibre images for samples PPb1 (left) and PPa2 (right)

As previously described, sample PPa2 was fabricated to contain a fibre volume twice that of sample PPb1. The fibre volumes in the analysed sub-volumes of the two samples shown in Fig. 10, however, exhibit a much larger difference. The fibre volume percentage within each analysed sub-volume could be calculated by summing the number of voxels of fibre material and dividing that number by the total number of voxels in the image. Using this method, a fibre-volume percentage of 0.202% was measured for the PPa2 sub-volume, which was very close to the 0.22% fibre-volume percentage used in the mix design. For the PPb1 sub-volume, however, a fibre-volume percentage of only 0.044% was measured, as opposed to the 0.11% fibre-volume percentage used in the mix design. There are many possible reasons for such a discrepancy. The most likely causes are inhomogeneity of the fibre distribution within the sample and the presence of large aggregates or air voids within the sub-volume selected for analysis. Both of these sources of error would be significantly reduced through the selection of larger sub-volumes for analysis. There is, thus, an impetus for further development of these image processing algorithms in order to reduce their computational requirements and enable the analysis of larger sub-volumes in future scanning and analysis efforts.

4.3 Fibre Analysis

Once the fibres had been identified and isolated within the images, they were analysed for both density and orientation characteristics. The orientation analysis was completed using the Hessian-based method [6, 14]. In this method, the grayscale images resulting from CT are considered to be three-dimensional functions that are twice differentiable in all directions [10]. By calculating the Hessian matrix at a given voxel within a fibre, partial second derivatives can be computed:

$$H=\left[\begin{array}{ccc} \frac{\partial^2 I}{\partial x^2} & \frac{\partial^2 I}{\partial x\,\partial y} & \frac{\partial^2 I}{\partial x\,\partial z}\\ \frac{\partial^2 I}{\partial y\,\partial x} & \frac{\partial^2 I}{\partial y^2} & \frac{\partial^2 I}{\partial y\,\partial z}\\ \frac{\partial^2 I}{\partial z\,\partial x} & \frac{\partial^2 I}{\partial z\,\partial y} & \frac{\partial^2 I}{\partial z^2} \end{array}\right] \qquad (1)$$

with H = Hessian matrix and I = grayscale sample image matrix. At this point, the second derivative in the direction of the longitudinal axis of the fibre will be much smaller than those in the transverse directions. The orientation of fibres can, thus, be assessed by computing the eigenvalues and eigenvectors of the Hessian matrix at each voxel within a fibre (Fig. 11).
The primary fibre orientation recorded at a single fibre voxel, therefore, is the eigenvector corresponding to the smallest eigenvalue [29].

Fig. 11 Eigenvectors of the Hessian matrix at two points within a fibre

For the fibre density analysis, a small cell size was selected, comprising a cube with 120 voxel (742 µm) long sides. The fibre image was broken up into an array of these cells, and the number of white (i.e., fibre) voxels in each was counted. By dividing the number of fibre voxels in each cell by the total number of cell voxels, estimates of local fibre density within the sample could be obtained. The cell size used for the density analysis was selected such that the length of each cube side represented about a tenth of the overall length of a typical image array side. This meant that a sufficient number of cells (1000) would be available for analysis to enable meaningful statistical evaluation. This is because, for a pseudo-random phenomenon such as fibre density variation, a large number of samples needs to be collected to observe meaningful trends within the statistical data. At the same time, however, there was a desire to avoid cell sizes that were too small, because they might be too highly influenced by the presence of single fibres.

4.4 Cracking Analysis

The lack of a consistent size or shape for the crack geometry also made template matching techniques of limited use for crack identification. These characteristics made the use of a customized isolation approach similar to that used for the fibres unsuitable as well. Past research has demonstrated, however, that many cracking characteristics can be measured through the observation of changes in void properties in images of progressively damaged samples [21, 23]. In this approach, all void characteristics, including volume, surface area, and orientation, within the initial, undamaged scan of a sample are assumed to be related to entrained and entrapped air. Thus, any change in these void characteristics, such as volume and surface area growth, seen in later scans of damaged samples can be assumed to be due to cracking and, thus, represented as crack characteristics.

It is important to note, however, that when analysing the orientation of cracks, a modified version of the approach used for fibre orientation analysis must be applied. Since cracks are planar objects, they cannot be characterized by a single, parallel vector in the way that fibres can. Rather, the cracks must be characterised by a vector which is perpendicular to the plane of the crack. Thus, in the Hessian characterization of crack orientation, the orientation represented by the eigenvector corresponding to the largest eigenvalue of the Hessian matrix must be used.

5 Results and Discussion

5.1 Results of Fibre Analysis

Various methods exist to depict and analyse orientation data in three dimensions. For the present analysis, coordinates have been converted from a Cartesian to a spherical system [32]:

$$r = \sqrt{x^2 + y^2 + z^2}\,, \qquad (2)$$

$$\theta = \tan^{-1}\left(\frac{y}{x}\right)\,, \qquad (3)$$

$$\phi = \cos^{-1}\left(\frac{z}{r}\right)\,. \qquad (4)$$

In this spherical coordinate system, orientations are characterized by the angles $\theta$ and $\phi$ (Fig. 12). The angle $\theta$ represents the azimuthal angle in the x-y plane from the x-axis (in this context the cylindrical axis of the sample is denoted as the z-axis), with $0< \theta < 360^\circ$ (Eq. 3). The angle $\phi$ represents the polar angle from the positive z-axis. Since the fibres are fully symmetric, a symmetry condition is likewise imposed on $\phi$, with $0< \phi < 90^\circ$ (Eq. 4).

Fig. 12 Spherical coordinate system
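Combining the Hessian orientation estimate of Sects. 4.3-4.4 with the spherical conversion of Eqs. (2)-(4), a minimal sketch (plain Python/NumPy in place of the MATLAB implementation actually used; the smoothing scale and the synthetic test volume are assumptions) could read:

```python
import numpy as np
from scipy import ndimage

def voxel_orientation(volume, point, sigma=1.5):
    """Fibre axis at one voxel as spherical angles (theta, phi), Eqs. (2)-(4)."""
    # Second derivatives of the (Gaussian-smoothed) image form the Hessian (Eq. 1).
    pairs = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    d = {}
    for i, j in pairs:
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1
        d[(i, j)] = ndimage.gaussian_filter(volume, sigma, order=order)[point]
    H = np.array([[d[(0, 0)], d[(0, 1)], d[(0, 2)]],
                  [d[(0, 1)], d[(1, 1)], d[(1, 2)]],
                  [d[(0, 2)], d[(1, 2)], d[(2, 2)]]])

    vals, vecs = np.linalg.eigh(H)
    v = vecs[:, np.argmin(vals)]           # fibre axis: smallest eigenvalue,
                                           # since fibres are low-attenuation
    x, y, z = v if v[2] >= 0 else -v       # fold into the phi <= 90 deg half-space
    theta = np.degrees(np.arctan2(y, x)) % 360   # Eq. (3)
    phi = np.degrees(np.arccos(z))               # Eq. (4), r = 1 for a unit vector
    return theta, phi

# Synthetic test: a dark rod along the first axis should give phi ~ 90 deg.
vol = np.ones((40, 40, 40))
vol[:, 20, 20] = 0.0
print(voxel_orientation(vol, (20, 20, 20)))
```

For crack voxels, the same machinery applies with the eigenvector of the largest eigenvalue, yielding the crack-plane normal instead of an axis.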
Figure 13 provides orthogonal projections of the fibre orientation data for samples PPb1 and PPa2. The depiction of three-dimensional data on a two-dimensional plane makes these projections unsuitable for evaluating isotropy along the $\phi$-axis, but the projections can be used very effectively for evaluating isotropy along the $\theta$-axis. For sample PPb1, there appeared to be an anisotropic orientation of fibres along roughly the $\theta = 30^\circ/210^\circ$ axis. A similar, although less clear, anisotropy also appeared in the projection for sample PPa2. The darker colours of the PPa2 projection were caused by the higher fibre-volume content already discussed.

Fig. 13 Orthogonal projections of fibre orientation for samples PPb1 (left) and PPa2 (right). Projection radius: $0< \phi < 90^\circ$; projection circumference: $0< \theta < 360^\circ$

Fibre density results can be displayed in the form of histograms (Figs. 14 and 15). For clarity, the y-axis in these histograms has been depicted in logarithmic scale. This is necessary since the vast majority of cubic sub-volumes contain no fibres or only a few fibre voxels. These histograms confirm the higher fibre-volume content of sample PPa2.

Fig. 14 Histogram of fibre density for sample PPb1
Fig. 15 Histogram of fibre density for sample PPa2

5.2 Results of Cracking Analysis

The void data were also analysed using spherical coordinates, and changes in void orientation characteristics, which correspond to crack orientation characteristics, can be depicted using orthogonal projections (Fig. 16).

Recalling that the crack orientations were measured in terms of the orientation normal to the cracking plane, it can be observed that for both PPb1 and PPa2 there appears to be an anisotropy along roughly the $\theta$ = 130°/310° plane. This indicates that the primary orientation of the cracks is parallel to the primary fibre orientation. This is because the primary normal orientation of the cracks (depicted in Fig. 16) is roughly orthogonal to the primary fibre orientation (depicted in Fig. 13), which is typical for a plane running parallel to a line.

Fig. 16 Orthogonal projections of the normal orientation vectors measured for the cracks in samples PPb1 (left) and PPa2 (right) after 300 °C heating. Projection radius: $0< \phi < 90^\circ$; projection circumference: $0< \theta < 360^\circ$

5.3 Comparison of Fibre and Cracking Characteristics

Comparison of fibre and cracking characteristics based purely on Figs. 13 and 16 is rather qualitative and difficult, however.
In order to provide a more amenable means for direct comparison, histograms of fibre and crack-normal orientation were created and overlaid (Figs. 17 and 18).

Fig. 17 Relationship between fibre (red) and crack-normal (blue) $\theta$-orientation for sample PPb1
Fig. 18 Relationship between fibre (red) and crack-normal (blue) $\theta$-orientation for sample PPa2
Fig. 19 Relationship between fibre (red) and crack-normal (blue) $\phi$-orientation for sample PPb1
Fig. 20 Relationship between fibre (red) and crack-normal (blue) $\phi$-orientation for sample PPa2

The relationship between the $\theta$-orientations of the fibres and the normal vectors to the cracking plane for PPb1 exhibits behaviour typical of cracks running along the primary fibre orientation direction, with very few fibres oriented along the directions normal to the cracking plane. This correlation is less clear in the PPa2 data. The reasons for this are unclear, but may be related to the higher volume of fibres contained within this sample. This increase in fibre volume may lead to more concerted behaviour of fibres through their combination in bundles, as opposed to the less dense fibres in PPb1, which are more likely to interact with the concrete material individually. The presence of crack or ring artefacts within the fibre images could also contribute to this inconsistency.

Fibre-cracking histograms have also been created relative to the $\phi$-orientation (Figs. 19 and 20). Direct evaluation of these histograms is complicated by the fact that fibre density is not uniform along the $\phi$-axis, because the volume of the unit sphere corresponding to a linear increase in the $\phi$-angle grows nonlinearly. Very similar trends in $\phi$-orientation are observable in the fibre and crack-normal data of both samples PPb1 and PPa2. These figures provide confirmation that an offset between the fibre and crack-normal orientation exists. Although not clearly orthogonal to one another, as would be expected for perfectly parallel cracks and fibres, this orientation offset does indicate that the cracks tend to orient themselves more strongly parallel to fibres than perpendicular to them. A detailed analysis of fibre-crack interactions within localized regions of these samples can also be found in [31].

5.4 Correlation between Heating and Cracking Characteristics

Much clearer trends can be observed in the growth of cracking relative to temperature increase. Figures 21, 22, 23 and 24 depict the growth of cracking relative to applied heating for both samples PPb1 and PPa2. In these figures, two different measurements are used to evaluate cracking growth. Figures 21 and 22 are plotted relative to crack volume, while Figs. 23 and 24 are plotted relative to crack surface area.

Fig. 21 Relationship between temperature and crack volume for sample PPb1
Fig. 22 Relationship between temperature and crack volume for sample PPa2
Fig. 23 Relationship between temperature and crack surface area for sample PPb1
Fig. 24 Relationship between temperature and crack surface area for sample PPa2
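A minimal sketch of the void-differencing measurement described in Sect. 4.4 follows (plain Python with NumPy and scikit-image; the marching-cubes surface-area estimate and the random stand-in masks are assumptions, not necessarily the authors' method):

```python
import numpy as np
from skimage import measure

def void_metrics(void_mask, voxel_size_um=6.18):
    """Total void volume [um^3] and surface area [um^2] of a binary void mask."""
    volume = void_mask.sum() * voxel_size_um**3
    verts, faces, _, _ = measure.marching_cubes(void_mask.astype(np.uint8), 0.5)
    area = measure.mesh_surface_area(verts * voxel_size_um, faces)
    return volume, area

# Crack growth = change in void metrics between the initial and a heated scan.
rng = np.random.default_rng(0)
initial = rng.random((60, 60, 60)) < 0.01               # stand-in for the first scan
heated = initial | (rng.random((60, 60, 60)) < 0.01)    # stand-in for a later scan

v0, a0 = void_metrics(initial)
v1, a1 = void_metrics(heated)
print(f"crack volume:       {v1 - v0:.3e} um^3")
print(f"crack surface area: {a1 - a0:.3e} um^2")
```

Repeating this differencing for each target temperature yields exactly the volume and surface-area trend curves plotted in Figs. 21-24.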
In each of these figures, clear trends are visible between cracking and heating characteristics. These quantitative trends offer great promise for the calibration and validation of fire-damage models within finite element analysis codes for concrete. Basic fracture mechanics uses crack surface area as one parameter for calculating fracture energy [2]. This may make crack surface area the optimal damage parameter for use in numerical modelling. Crack volumes have, however, also previously been shown to follow clear trends relative to work-of-load and stiffness reduction [11, 12]. The optimal numerical approach might be to develop a calibration and validation approach that combines the measurements of both of these cracking characteristics.

6 Conclusions

The results of this research effort have demonstrated that the orientation of both fibres and cracks in polymer-fibre reinforced concrete tends to be anisotropic. It is thought that this anisotropy is predominantly influenced by the casting method, but can also be affected by the presence of large aggregates or voids within the sample. The results of the orientation analysis also indicated that crack geometry characteristics may be correlated with fibre orientation, with cracks tending to run parallel to fibre beds. This could have major implications for structural-level performance, since it would indicate that fire resistance may be related to casting method. Clear quantitative relationships were also observed between heating and increasing cracking levels, expressed in terms of both crack surface area and crack volume. These relationships can serve as the basis for calibration and validation of finite element models used for simulating heat-related spalling behaviour.

7 Future Work

Future research is needed both to improve the accuracy and reliability of the analysis procedures used in this research and to develop new analysis procedures for evaluating phenomena of relevance to the spalling behaviour of concrete. One reason that only aggregated cracking and fibre characteristics could be measured in this research was the lack of accurate digital volume correlation (DVC) tools. The development and application of such tools would enable direct comparison, including subtraction, of individual image features among multiple images in a heating or loading series. This would not only make it easier to distinguish crack growth and fibre failure, but would also enable more accurate measurements of crack-fibre property correlation. Up to the present time, it has been very difficult to develop accurate DVC tools for concrete analysis because of the cracking discontinuities typical of concrete failure, which are generally more difficult for DVC methods to accommodate than the simple strains typical of plastically deforming materials.

New methods also need to be developed for assessing the cross-linking of fibre beds during heat-related cracking. Although this phenomenon has been observed using SEM, it has been difficult to develop a method that accurately quantifies the material behaviour. This is further complicated by the fact that many of the connecting cracks observed during SEM scanning appear to be below the resolution of most laboratory CT systems.
It is believed that significant progress could be made in measuring this phenomenon through the skilled application of synchrotron-CT scanning in combination with DVC image processing tools. Finally, further development of the image processing algorithms detailed in this paper is needed. These algorithms should be streamlined to enable orientation analysis of larger regions of interest within samples, which would minimize the magnitude of error introduced by individual material features, such as single stones or voids. Further research is also needed to improve the precision of the fibre-cracking orientation comparison and to statistically quantify the correlation level. Only through the accurate measurement of statistical correlation between fibre and cracking orientation can firm conclusions be drawn about the optimal casting procedures for the construction of fire-resistant building components. References 1. Aitcin, P.C.: High Performance Concrete. CRC Press (1998). https://doi.org/10.1201/9781420022636 2. Bazant, Z.P., Planas, J.: Fracture and Size Effect in Concrete and Other Quasibrittle Materials, vol. 16. CRC Press (1997) 3. Feldkamp, L.A., Davis, L.C., Kress, J.W.: Practical cone-beam algorithm. J. Opt. Soc. Am. A 1(6), 612–619 (1984). https://doi.org/10.1364/JOSAA.1.000612 4. Flannery, B.P., Deckman, H.W., Roberge, W.G., D'Amico, K.L.: Three-dimensional x-ray microtomography. Science 237(4821), 1439–1444 (1987). https://doi.org/10.1126/science.237.4821.1439 5. Herrmann, H., Lees, A.: On the influence of the rheological boundary conditions on the fibre orientations in the production of steel fibre reinforced concrete elements. Proc. Est. Acad. Sci. 65(4), 408–413 (2016). https://doi.org/10.3176/proc.2016.4.08 6. Herrmann, H., Pastorelli, E., Kallonen, A., Suuronen, J.P.: Methods for fibre orientation analysis of x-ray tomography images of steel fibre reinforced concrete (SFRC). J. Mater. Sci. 51(8), 3772–3783 (2016). https://doi.org/10.1007/s10853-015-9695-4 7. Hertz, K.: Explosion of silica-fume concrete. Fire Saf. J. 8(1), 77 (1984). https://doi.org/10.1016/0379-7112(84)90057-2 8. Jansson, R.: Material Properties Related to Fire Spalling of Concrete. Division of Building Materials, Lund Institute of Technology, Lund University (2008) 9. Jansson, R.: Fire spalling of concrete: theoretical and experimental studies. Ph.D. thesis, KTH Royal Institute of Technology (2013) 10. Krause, M., Hausherr, J.M., Burgeth, B., Herrmann, C., Krenkel, W.: Determination of the fibre orientation in composites using the structure tensor and local x-ray transform. J. Mater. Sci. 45(4), 888 (2010). https://doi.org/10.1007/s10853-009-4016-4 11. Landis, E.N.: Toward a physical damage variable for concrete. J. Eng. Mech. 132(7), 771–774 (2006). https://doi.org/10.1061/(ASCE)0733-9399(2006)132:7(771) 12. Landis, E.N., Zhang, T., Nagy, E.N., Nagy, G., Franklin, W.R.: Cracking, damage and fracture in four dimensions. Mater. Struct. 40(4), 357–364 (2007). https://doi.org/10.1617/s11527-006-9145-5 13. Li, V.C., Wang, S.: Microstructure variability and macroscopic composite properties of high performance fiber reinforced cementitious composites. Probab. Eng. Mech. 21(3), 201–206 (2006). https://doi.org/10.1016/j.probengmech.2005.10.008 (Probability and Materials: from Nano- to Macro-Scale)
14. Lorenz, C., Carlsen, I.C., Buzug, T.M., Fassnacht, C., Weese, J.: Multi-scale line segmentation with automatic estimation of width, contrast and tangential direction in 2D and 3D medical images. In: CVRMed-MRCAS'97, pp. 233–242. Springer (1997) 15. Martz, H.E., Scheberk, D.J., Roberson, G.P., Monteiro, P.J.: Computerized tomography analysis of reinforced concrete. Mater. J. 90(3), 259–264 (1993) 16. MathWorks: MATLAB R2014a. Natick, MA, USA (2016) 17. Mishurova, T., Léonard, F., Oesch, T., Meinel, D., Bruno, G., Rachmatulin, N., Fontana, P., Sevostianov, I.: Evaluation of fiber orientation in a composite and its effect on material behavior. In: Proceedings of the 7th Conference on Industrial Computed Tomography (ICT) held February 7–9, 2017, Leuven, Belgium, vol. 22(03). NDT.net (2017). http://www.ndt.net/?id=20818 18. Mishurova, T., Rachmatulin, N., Fontana, P., Oesch, T., Bruno, G., Radi, E., Sevostianov, I.: Evaluation of the probability density of inhomogeneous fiber orientations by computed tomography and its application to the calculation of the effective properties of a fiber-reinforced composite. Int. J. Eng. Sci. 122, 14–29 (2018). https://doi.org/10.1016/j.ijengsci.2017.10.002 19. Morgan, I., Ellinger, H., Klinksiek, R., Thompson, J.N.: Examination of concrete by computerized tomography. J. Proc. 77(1), 23–27 (1980) 20. Oesch, T., Landis, E., Kuchma, D.: A methodology for quantifying the impact of casting procedure on anisotropy in fiber-reinforced concrete using x-ray CT. Mater. Struct. 51(3), Article 73, 1–13 (2018). https://doi.org/10.1617/s11527-018-1198-8 21. Oesch, T.S.: Investigation of fiber and cracking behavior for conventional and ultra-high performance concretes using x-ray computed tomography. University of Illinois at Urbana-Champaign (2015) 22. Oesch, T.S.: In-situ CT investigation of pull-out failure for reinforcing bars embedded in conventional and high-performance concretes. In: Proceedings of 6th Conference on Industrial Computed Tomography (ICT), vol. 21 (2016) 23. Oesch, T.S., Landis, E.N., Kuchma, D.A.: Conventional concrete and UHPC performance–damage relationships identified using computed tomography. J. Eng. Mech. 142(12), 04016101 (2016) 24. Pistol, K.: Wirkungsweise von Polypropylen-Fasern in brandbeanspruchtem Hochleistungsbeton. Doctoral thesis, Bundesanstalt für Materialforschung und -prüfung (BAM) (2016) 25. Pistol, K., Weise, F., Meng, B., Schneider, U.: The mode of action of polypropylene fibres in high performance concrete at high temperatures. In: 2nd International RILEM Workshop on Concrete Spalling due to Fire Exposure, pp. 289–296. RILEM Publications SARL (2011) 26. Pujadas, P., Blanco, A., Cavalaro, S., de la Fuente, A., Aguado, A.: Fibre distribution in macro-plastic fibre reinforced concrete slab-panels. Constr. Build. Mater. 64, 496–503 (2014) 27. Sanjayan, G., Stocks, L.: Spalling of high-strength silica fume concrete in fire. Mater. J. 90(2), 170–173 (1993) 28. Stelzner, L., Powierza, B., Weise, F., Oesch, T.S., Dlugosch, R., Meng, B.: Analysis of moisture transport in unilateral-heated dense high-strength concrete. In: Proceedings from the 5th International Workshop on Concrete Spalling, pp. 227–239 (2017) 29. Trainor, K.: 3-D analysis of energy dissipation mechanisms in steel fiber reinforced reactive powder concrete. Master's thesis, The University of Maine (2011) 30. University of Illinois at Urbana-Champaign: The history of concrete: a timeline. Department of Materials Science and Engineering.
http://matse1.matse.illinois.edu/concrete/hist.html (2015) 31. Weise, F., Stelzner, L., Weinberger, J., Oesch, T.S.: Influence of the pre-treatment of PP-fibres by means of electron irradiation on the spalling behaviour of high strength concrete. In: Proceedings from the 5th International Workshop on Concrete Spalling, pp. 345–358 (2017) 32. Weisstein, E.W.: Spherical Coordinates. From MathWorld–A Wolfram Web Resource (2017). http://mathworld.wolfram.com/SphericalCoordinates.html 33. Williams, E.M., Graham, S.S., Reed, P.A., Rushing, T.S.: Laboratory characterization of Cor-Tuf concrete with and without steel fibers. Tech. rep., Engineer Research and Development Center, Vicksburg, MS, Geotechnical and Structures Lab (2009) 34. Young, I.T., Gerbrands, J.J., Van Vliet, L.J.: Fundamentals of Image Processing. Delft University of Technology, Delft (1998) 35. Zack, G., Rogers, W., Latt, S.: Automatic measurement of sister chromatid exchange frequency. J. Histochem. Cytochem. 25(7), 741–753 (1977) © Springer Nature Switzerland AG 2019. Heiko Herrmann and Jürgen Schnell (eds.), Short Fibre Reinforced Cementitious Composites and Ceramics, Advanced Structured Materials 95. https://doi.org/10.1007/978-3-030-00868-0_2 Experimental Investigation on Bending Creep in Cracked UHPFRC Daniene Casucci1, 2, Catherina Thiele1 and Jürgen Schnell1 (1) Institute of Concrete Structures and Structural Engineering, Technische Universität Kaiserslautern, Kaiserslautern, Germany (2) Hilti Corporation, Feldkircherstrasse 100, 9494 Schaan, Liechtenstein Daniene Casucci (Corresponding author) Email: [email protected] Catherina Thiele Email: [email protected] Jürgen Schnell Email: [email protected] Abstract Investigations on ordinary fibre-reinforced concrete showed that the time-dependent deformations under tensile load in cracked concrete are larger than the deformations in uncracked concrete. The so-called tensile creep in the cracked cross section depends on different factors such as the type of fibres, fibre content, load level, concrete mix, environmental conditions, etc. Given the lack of sufficient data about tensile creep in ultra-high performance fibre-reinforced concrete (UHPFRC), a large experimental program financed by the DFG (Deutsche Forschungsgemeinschaft) was started at the University of Kaiserslautern. 1 Introduction Depending on the fibre content, UHPFRC often has a strain-hardening behaviour. This means that in a tensile or in a bending test, after the formation of the first crack, it is possible to increase the applied load. The concrete's full tensile capacity is reached with multiple fine cracks, and the material is usually designed in cracked condition. The contribution of the fibres to the tensile strength is so high that, in some cases, it seems reasonable to avoid or reduce the amount of conventional reinforcement. Therefore, it is also necessary to investigate the long-term tensile behaviour of cracked UHPFRC. The aim of this research project, which has been started at the University of Kaiserslautern, is to evaluate the long-term durability and reliability of this material, to estimate the deformations under sustained loads and to find out whether a load limitation has to be imposed in future design codes. In this paper, besides the adopted test method, some results of the first year of measurements and observation are reported.
An overall safe behaviour of steel fibres up to relatively high sustained loads (with respect to the tensile strength in static tests) could be observed. While the literature concerning tensile creep in ordinary fibre-reinforced concrete has increased in the past years, such literature concerning UHPFRC is still very rare. A large part of the research concerning time-dependent strain in UHPFRC focuses on creep in uncracked cross sections and on early-age behaviour. This is important for evaluating the internal stresses during the hardening phase under restrained conditions and the associated possibility of cracking [3]. Recent studies concerning the tensile-creep behaviour of high and ultra-high performance concrete may be found in the literature [3, 6–9, 12, 13, 15]. This topic is difficult to investigate since the experiments are complex and the tensile-creep deformations have the same order of magnitude as the shrinkage [7]. The most important parameters of influence, besides the curing condition [6], are the water-cement ratio [7], the content of silica [3, 6, 7, 15], the age of loading [3, 6, 7, 15] and, of course, the magnitude of the load [3, 6, 7, 12, 15]. Concerning the magnitude of the tensile creep, it is very difficult to make a comparison within the literature, since the testing parameters vary a lot and the results vary significantly. While according to Rossi [13] tensile creep is smaller than compressive creep, according to Kordina [8] it is of the same size or slightly larger. In Garas [6] it was found that the tensile creep can be even larger than the compressive one. Non-linear behaviour between tensile stress and creep deformation was found in tension at a higher stress/strength ratio than in compression, i.e. at 60–70% [7, 8, 15]. Moreover, it seems that under tensile load the shrinkage also behaves differently. Reinhardt et al. [12] discovered that an increase of shrinkage for high performance concrete under tension corresponded with increasing compressive strength. Concerning the tensile creep in cracked cross sections, an overview of the present literature can be found in [9] or [14]. The tensile creep may be caused by fibre creep, by time-dependent fibre pull-out and by creep of the cementitious matrix. While polymeric fibres, depending on the material of the polymers, tend to be sensitive to tensile fibre creep, steel fibres show a more stable behaviour since steel has much lower relaxation. Conventional hooked-end macro-steel fibres engage via a mechanical interlock with the concrete matrix, but also show an increase of the tensile deformation over time at cracks. Nieuwoudt [10] connects this increase to the compressive creep of concrete in the areas of high contact pressure at the fibre hook. Even if a failure is unlikely, the deformations and micro-cracking of the concrete enable the fibre hook to "slide", and this results in the macroscopically observed time-dependent deformations. In comparison with typical hooked-end fibres, UHPFRC steel fibres are usually smaller, with diameters between 0.1 and 0.3 mm and lengths between 6 and 20 mm [2], and are straight and smooth [4]. One of the first investigations of uniaxial tensile creep in UHPFRC specimens with smooth micro-steel fibres was made by Garas [6]. Besides the uncracked specimens used for the tensile creep experiments, he also investigated some pre-cracked specimens.
Garas observed a quick stabilization of the tensile creep deformations, and failure only for load levels above 80% of the static resistance, occurring within the first minutes after reloading in the sustained-load test rig. Bărbos [1] found a positive influence of the fibres on the tensile creep of UHPFRC beams with conventional reinforcement. Also in this case, the displacements showed a quick stabilization. He attributed the creep deformation to the formation of new cracks and not to the widening of the existing ones. Nishiwaki [11] investigated a UHPFRC with a cocktail of short and long hooked-end fibres and observed, besides the formation of new cracks, also a slight widening of the existing ones. No failure occurred and, with a creep factor of ca. 0.3 after 28 days, the deformations were very small. 2 Experimental Investigation Uniaxial and bending tensile sustained-load tests with over 60 pre-cracked specimens were started. Some of the specimens were unloaded after a period of about 6 months and tested for the residual strength. Some others are still under load and will remain under observation for the next years. Several parameters have been investigated, such as the influence of different load levels, age of loading, specimen pre-damaging level, fibre volume content and fibre length-to-diameter ratio. Shrinkage and compressive creep were measured in additional tests in order to identify the contribution of the tensile creep to the bending deformations. This paper will focus only on the results of the bending experiments. 2.1 Experimental Procedure A standard for these kinds of tests is unfortunately still missing. However, in most of the literature reported in [9] or [14], the method is similar. A schematic representation of the testing procedure is given in Fig. 1. First the specimens are pre-loaded up to a certain load or deformation in a common testing machine (step 1), then they are unloaded and installed in a creep test rig (step 2), where they remain under sustained load over a certain time. After that, the sustained load is removed and the specimens are tested up to failure for the determination of the residual strength (step 3). The sustained load is defined as a percentage of the load at the final point of the pre-loading phase.[image: ../images/450266_1_En_2_Chapter/450266_1_En_2_Fig1_HTML.png] Fig. 1 Schematization of the sustained-load creep tests 2.2 Specimens and Test Rigs The tests were performed on non-notched beams with a four-point bending setup. The four-point bending setup enables the observation of the average deformation of a larger area of the material compared to notched three-point bending. In the case of a strain-hardening material, it is more advantageous to observe distributed cracking over a large region of the sample than to observe only the small local area around a notch. The beams had a cross-section of 70 [image: $$\times$$] 70 mm, in accordance with the French guideline [4] for the adopted fibre length of 12.5 mm. The span was increased to a length of 940 mm and the central part with constant bending moment was 340 mm long. Besides enabling the observation of a large portion of the material, the larger length was also useful in reducing the required dead load. The vertical displacement was measured with two precision indicators, fixed on both sides at the mid-span of the beams. For the four-point bending tests, eight separate stacks containing three to four specimens each were constructed, as represented in Fig. 2.
[image: ../images/450266_1_En_2_Chapter/450266_1_En_2_Fig2_HTML.png] Fig. 2 Sustained-load bending test rig 2.3 Concrete Mix The adopted concrete mix is indicated in Table 1. The steel fibres are manufactured by the company Krampeharex® and are straight, brass-alloy-coated steel wires. Brass-alloy-coated fibres are the most common and guarantee a good connection with the concrete paste [10]. The water-cement ratio, taking into account the water in the silica suspension, is 0.265, and the overall water-binder ratio is 0.23. A heat treatment for 48 h at 90 [image: $$^{\circ }$$]C was performed two days after casting of the specimens. The concrete had a compressive strength of 150–160 MPa if cured in laboratory conditions and 180–190 MPa after the heat treatment.
Table 1 Mix design of the investigated UHPFRC
Material | Content (kg/m[image: $$^{3}$$])
Cement CEM 52.5 N | 728
Water | 80
Sand DM 0.125/0.5 | 816
Quartz flour | 510
Microsilica suspension | 226
Super plasticizer | 29.7
Steel fibre (2% or 4%) | 164 / 328
2.4 Test Program A "reference" combination of 12 beams was tested extensively with scheduled sustained loads of 40, 60, 80 and 90% of the residual strength in the pre-loading tests (series 1). The bending creep in non-cracked concrete was measured with two beams loaded at 40 and 60% of the residual strength in the pre-loading tests (series 2). Unfortunately, with loads higher than 60% of the residual strength in the pre-loading tests, cracks would appear, making the evaluation of the results difficult. The influence of fibre slenderness was investigated in four beams containing fibres with 0.400 mm diameter. These fibres have the same length of 12.5 mm as the fibres used in the other experiments, but had a length-to-diameter ratio of 31.3 instead of the 71.4 of the 0.175 mm diameter fibres (series 3). For investigating the effect of the fibre content, four tests with 4% fibre volume and sustained load levels of 40 and 80% were performed (series 4). The heat treatment minimizes the effects of creep and shrinkage. However, a test series on specimens without heat treatment was also performed. A total of four specimens were cured in laboratory conditions, with humidity of about 50%, and another three were cured in water and sealed with wet cloths (series 5 and 6). This measure was intended to avoid any kind of drying shrinkage. Additionally, three specimens were tested at an age of 2 days and another three at an age of 13 days. Two of these specimens were pre-loaded and one was not. Also in this case the load was only 50% of the residual strength, so as not to induce cracks in the uncracked specimens (series 7). Six dog-bone-shaped specimens and six beams were tested for each different concrete mix according to the French guideline [2]. With these tests, a deformation level for the pre-loading phase was defined. This deformation was a vertical displacement of 1.5 mm for the beams with 0.40 mm diameter fibres and of 4 mm for the beams with 0.175 mm fibres. These values were chosen so that the UHPFRC would have a well-developed crack pattern during the experiments and so that, at the same time, the residual strength would still be considerable, larger than 70% of the maximum load. Since the specimens were stacked, several specimens received the same external load. Therefore, the actual load in a test differed from the scheduled load indicated in Table 2. The test program for the sustained-load tests on beams is summarized in Table 2. Here the fibre length, diameter, fibre content and the load level are given.
A total of 33 bending creep tests were performed.
Table 2 Sustained-load test program
Series | Investigated parameter | Fibre length; diameter; content | Load level (%) | Number of tests
1. | Reference (load level) | 12.5; 0.175; 2% | 40 / 60 / 80 / 90 | 5 / 3 / 2 / 2
2. | Bending creep in uncracked concrete | 12.5; 0.175; 2% | (ca. 40 / 60) without pre-loading | 1 / 1
3. | Length/diameter ratio | 12.5; 0.40; 2% | 40 / 80 | 2 / 2
4. | Fibre content | 12.5; 0.175; 4% | 60 | 2
5. | Without heat treatment | 12.5; 0.175; 2% | 40 / 80 | 3 / 1
6. | Water curing and sealing | 12.5; 0.175; 2% | - | 2 / 1
7. | Concrete age in cracked and non-cracked concrete | 12.5; 0.175; 2% | - | 4 / 2
Total | | | | 33
2.5 Compressive Creep and Shrinkage To estimate the contribution of the compressive creep to the deformation of the beam specimens, 27 cylinders with a diameter of 104 mm were tested for a period between two weeks and four months. The tests were performed with three load levels of 0.25, 0.45 and 0.65 f[image: $$_{\mathrm {c}}$$] at concrete ages of 2, 13 and 28 days. For each combination, a sealed specimen, an unsealed specimen and a heat-treated specimen were tested. 3 Test Results In this section, some representative results of the creep tests will be shown in order to draw the first conclusions of this experimental test program. Table 3 illustrates the naming adopted in the following paragraphs for the individual specimens.
Table 3 Naming of the specimens
No. | Significance
1. | Specimen number: PK01, PK02, [image: $${\ldots }$$]
2. | Curing: Wb.: heat-treatment curing; W.: water curing; k. B.: curing in laboratory conditions
3. | n. Vb.: without pre-load; 2T: age at load beginning of two days; 13T: age at load beginning of thirteen days; sealed: sealed specimen; 0.4Ø: fibre diameter of 0.4 mm instead of 0.175 mm; 4%: fibre content of 4% instead of 2%
4. | L = XX%: load level
3.1 Bending Tests Figure 3 shows the creep deformations of all the specimens of test series 1. Despite the large scatter, a slight correlation between the load level and the bending creep deformation could be observed. One of the specimens (PK05Wb. in Fig. 3) collapsed after 23 days under a load of 79% of the residual strength at the end of the pre-loading phase. During the pre-loading, this specimen exhibited a dramatic decrease in resistance after having reached the maximum load. An examination of the cross section after the collapse showed particularly unfavourable fibre-orientation characteristics. Fibres were indeed lying almost parallel to the crack plane without offering effective crack bridging (Fig. 4). Other specimens contained within the same stack as specimen PK05Wb (for instance PK07Wb.) were also affected by this collapse. Figure 5 shows a comparison between cracked and non-cracked specimens. In this diagram, only specimens that can be directly compared in terms of load level are shown. The pre-cracked UHPFRC specimens always showed larger creep deflections than the uncracked ones. Figure 6 shows the results of the specimens with ages of 2 and 13 days. These results indicate that larger deformations are present within samples loaded at an earlier age. A clear difference between the deformation levels of pre-cracked and non-cracked samples was again visible within this dataset; however, the difference between non-cracked and pre-cracked concrete remains of the same proportion. This suggests that deformations in cracked concrete are mainly due to the creep of the compression area.[image: ../images/450266_1_En_2_Chapter/450266_1_En_2_Fig3_HTML.png]
Fig. 3 Creep deflection for specimens in test series 1 of Table 2 [image: ../images/450266_1_En_2_Chapter/450266_1_En_2_Fig4_HTML.png] Fig. 4 Cross section of the collapsed specimen PK05Wb [image: ../images/450266_1_En_2_Chapter/450266_1_En_2_Fig5_HTML.png] Fig. 5 Creep deflection for cracked and non-cracked specimens (series 2 of Table 2) [image: ../images/450266_1_En_2_Chapter/450266_1_En_2_Fig6_HTML.png] Fig. 6 Creep deflection for specimens of series 2 of Table 2 (these specimens were cured in laboratory conditions) The sealing of the specimens seems not to have a pronounced influence on the beams, as shown in Fig. 7 for two different load levels of about 40 and 60%. Specimens with 0.4 mm fibre diameter showed slightly smaller creep deflections than those with 0.175 mm (Fig. 8). In general, a stabilization of the displacements occurred within the first 30 days. Some of the specimens were unloaded after 147 days and residual strength tests were performed. Although the largest displacements occurred within the first month, a continuous increase could be stated for the whole observation period.[image: ../images/450266_1_En_2_Chapter/450266_1_En_2_Fig7_HTML.png] Fig. 7 Creep deflection for specimens of series 6 in Table 2 [image: ../images/450266_1_En_2_Chapter/450266_1_En_2_Fig8_HTML.png] Fig. 8 Creep deflection for specimens with 0.4 mm fibre diameter (series 3 in Table 2) 3.2 Residual Strength Tests Fig. 9 Residual strength tests on heat-treated beams (naming of the specimens defined in Figs. 3, 5 and 6) 3.3 Compressive Creep and Shrinkage Figure 10 reports the basic creep coefficient [image: $${\varphi }_{\mathrm {b}}$$](t,t[image: $$_{0}$$]) for some of the sealed specimens. The [image: $${\varphi }_{\mathrm {b}}$$](t,t[image: $$_{0}$$]) is defined according to the following equation:[image: \begin{aligned} \varepsilon _{\mathrm {cc}}(t,t_{0}) = \sigma (t_{0})\,\varphi _{\mathrm {b}}(t,t_{0})/E_{\mathrm {c}} \end{aligned}] (1) whereby [image: $${\varepsilon }_{\mathrm {cc}}$$](t,t[image: $$_{0}$$]) is the strain, [image: $${\sigma }$$](t[image: $$_{0}$$]) is the compressive stress, E[image: $$_{\mathrm {c}}$$] the elasticity modulus of the concrete, and t[image: $$_{0}$$] the age of the concrete at the beginning of the loading. The results were extrapolated with the equation of the experimental method of EN 1992-1-2 [5]. The dashed lines in Fig. 10 indicate this approximation. In Fig. 10 one can observe that the basic creep reduces significantly as the age of the sample at the time of loading increases from 2 to 28 days. The results indicate that the stress-deformation non-linearity limit rises with the age of the concrete for values below 0.65 f[image: $$_{\mathrm {c}}$$]. This can be observed in Fig. 10 from the slight difference between the specimens loaded with 0.25 and 0.45 f[image: $$_{\mathrm {c}}$$] and the larger difference between these lower loads and that of 0.65 f[image: $$_{\mathrm {c}}$$]. Figure 11 shows the test rig for the compressive creep and the basic and drying shrinkage.[image: ../images/450266_1_En_2_Chapter/450266_1_En_2_Fig10_HTML.png] Fig. 10 Compressive creep at the ages of 2, 8 and 28 days. The dashed lines indicate the progression according to the experimental method of EN 1992-1-2 [image: ../images/450266_1_En_2_Chapter/450266_1_En_2_Fig11_HTML.png] Fig. 11 Compressive creep test rig (left and centre) and deformation diagram (right) of the shrinkage in sealed specimens (blue) and non-sealed specimens (yellow). The dashed lines indicate the profile according to EN 1992-1-2
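As a quick numerical illustration of Eq. (1), the sketch below computes the creep strain from an assumed basic creep coefficient; the stress level, elastic modulus and coefficient value are illustrative assumptions, not values reported in this study.

```python
def creep_strain(sigma_t0: float, phi_b: float, e_c: float) -> float:
    """Eq. (1): epsilon_cc(t, t0) = sigma(t0) * phi_b(t, t0) / E_c.
    sigma_t0 and e_c in MPa; the result is a dimensionless strain."""
    return sigma_t0 * phi_b / e_c

# Assumed example: a stress of 0.45*f_c on a 160 MPa concrete, with E_c
# taken as 50 GPa and a basic creep coefficient phi_b(t, t0) of 0.8:
print(creep_strain(0.45 * 160.0, 0.8, 50_000.0))  # ~1.15e-3
```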
4 Conclusions For an investigation of the creep behaviour of concrete, a few months are usually not enough to gain an exhaustive knowledge of the material. However, it seems that UHPFRC exhibits a stabilization of the time-dependent deformation in a relatively short time, even in cracked condition. That means that the working principle of the fibres seems not to be subject to degradation over time. The only collapsed specimen was one with an unfavourable fibre orientation and a high sustained load of 79% of the residual strength after pre-loading. This indicates that the fibre orientation is also relevant for the creep deformation and the ultimate resistance. This parameter, although difficult to investigate, should somehow be taken into account. One possible way is correlating it to the post-cracking behaviour of the specimens. The effect of the load level was found to be low. At this time, no further conclusions about the effects of loading can be drawn, given the scatter within the data of the cracked specimens. The sustained load had almost no impact on the residual strengths. More important seems to be the post-cracking behaviour and how fast the load-displacement curve drops at the unloading point. In the bending tests, no influence of sealing was observed. Tests of samples containing higher fibre content and different fibre diameters did not exhibit any significantly different load performance from the standard reference samples. Acknowledgements This research was possible thanks to the Deutsche Forschungsgemeinschaft (DFG), which financed and implemented a new research-training group (RTG) named "Stochastic Models for Innovations in the Engineering Sciences" at the University of Kaiserslautern. References 1. Bărbos, G.A.: Long-term behavior of ultra-high performance concrete (UHPC) bended beams. Procedia Technol. 22, 203–210 (2016) 2. Bétons fibrés à ultra-hautes performances, Association Française de Génie Civil: AFGC Richtlinie. Documents scientifique et technique (2013) 3. Bissonnette, B., Pigeon, M.: Tensile creep at early ages of ordinary, silica fume and fiber reinforced concretes. Cement Concr. Res. 25(5), 1075–1085 (1995) 4. Deutscher Ausschuss für Stahlbeton: DAfStb Heft 561, Ultrahochfester Beton - Sachstandsbericht (2008) 5. Deutsches Institut für Normung e. V.: DIN EN 1992-1-2, Eurocode 2: Design of concrete structures - Part 1-2: General rules - Structural fire design; German version EN 1992-1-2 (2004) 6. Garas, V.Y.: Multi-scale investigation of the tensile creep of ultra-high performance concrete for bridge applications. Ph.D. thesis, Georgia Institute of Technology (2009) 7. Kamen, A., Denarié, E., Sadouki, H., Brühwiler, E.: UHPFRC tensile creep at early age. Mater. Struct. 42(1), 113–122 (2009) 8. Kordina, K.: Beton unter Langzeit-Zugbeanspruchung. Bautechnik 76(6), 479–488 (1999) 9. Kusterle, W.: Creep of fibre-reinforced concrete - flexural test on beams. In: Proceedings of Fibre Concrete (2015) 10. Nieuwoudt, P.D.: Time-dependent behaviour of cracked steel fibre-reinforced concrete: from single fibre level to macroscopic level. Ph.D. thesis, Stellenbosch University (2016) 11. Nishiwaki, T., Kwon, S., Otaki, H., Igarashi, G., Shaikh, F.U., Fantilli, A.P.: Experimental study on time-dependent behavior of cracked UHP-FRCC under sustained loads. In: Serna, P., Llano-Torre, A. (eds.) Creep Behaviour in Cracked Sections of Fibre-Reinforced Concrete.
Proceedings of the International RILEM Workshop FRC-CREEP (2016) 12. Reinhardt, H.W., Rinder, T.: Tensile creep of high-strength concrete. J. Adv. Concr. Technol. 4(2), 277–283 (2006) 13. Rossi, P., Tailhan, J.L., Le Maou, F., Gaillet, L., Martin, E.: Basic creep behavior of concretes: investigation of the physical mechanisms by using acoustic emission. Cement Concr. Res. 42(1), 61–73 (2012) 14. Serna, P., Llano-Torre, A., Cavalaro, S.H.P. (eds.): Creep Behaviour in Cracked Sections of Fibre-Reinforced Concrete, vol. 14. Proceedings of the International RILEM Workshop FRC-CREEP 2016. Dordrecht (2016) 15. Switek, A., Denarié, E., Brühwiler, E.: Modeling of viscoelastic properties of ultra high performance fiber reinforced concrete (UHPFRC) under low to high tensile stresses. In: ConMod 2010: Symposium on Concrete Modelling (2010) © Springer Nature Switzerland AG 2019. Heiko Herrmann and Jürgen Schnell (eds.), Short Fibre Reinforced Cementitious Composites and Ceramics, Advanced Structured Materials 95. https://doi.org/10.1007/978-3-030-00868-0_1 Study of Crack Patterns of Fiber-Reinforced Concrete (FRC) Specimens Subjected to Static and Fatigue Testings Using CT-Scan Technology Miguel A. Vicente1, Gonzalo Ruiz2, Dorys C. González1, Jesús Mínguez1, Manuel Tarifa2 and Xiaoxing Zhang2 (1) Department of Civil Engineering, University of Burgos, Burgos, Spain (2) Department of Applied Mechanics, University of Castilla – La Mancha, Ciudad Real, Spain Miguel A. Vicente (Corresponding author) Email: [email protected] Gonzalo Ruiz Email: [email protected] Dorys C. González Email: [email protected] Jesús Mínguez Email: [email protected] Manuel Tarifa Email: [email protected] Xiaoxing Zhang Email: [email protected] Abstract This paper demonstrates the widely accepted hypothesis that compressive testing is a particular case of a cyclic test in which failure occurs during the first cycle. To verify this, tests on 32 fiber-reinforced high-performance concrete specimens have been carried out. Sixteen of them were tested under low-cycle fatigue compressive loading up to failure. Eight of them were tested under monotonic compressive loading, also until failure. The last eight specimens remained intact. All of them were scanned using a Computed Tomography (CT) Scan in order to define the pattern of their damage, which includes voids and cracks. The results show that the average damage maps of the monotonic and fatigue series are statistically identical, which confirms the hypothesis described above. In addition, both series differ from the intact series, which means that the damage distribution that occurs when specimens collapse is not random. 1 Introduction In recent years, the progressive increase in the strength of concrete has led to more slender structures, in which cyclic loads have a more relevant influence. Thus, fatigue phenomena in concrete are of increasing interest, and the development of accurate predictive models is of great scientific interest. A significant number of fatigue models for concrete have been developed to date. They establish a relationship between the maximum and the minimum stress ratios of the cyclic load and the number of cycles up to failure (usually called the "fatigue life") [1, 4, 7, 11, 12, 15, 18, 19]. Most fatigue models consider the hypothesis of convergence to the "initial distribution", which means that static testing is a particular case of cyclic testing where the fatigue life is equal to 1.
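As an illustration of the form such models take, a classical example is the relation attributed to Aas-Jakobsen [1], quoted here from the general fatigue literature rather than from this paper:

$$S_{\max} = \frac{\sigma_{\max}}{f_{\mathrm{c}}} = 1 - \beta\,(1 - R)\,\log_{10} N, \qquad R = \frac{\sigma_{\min}}{\sigma_{\max}},$$

where N is the number of cycles to failure and β is an empirical coefficient, typically reported on the order of 0.06–0.07 for plain concrete. Note that N = 1 recovers S_max = 1, i.e. the static test, which is consistent with the "initial distribution" hypothesis.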
This hypothesis is supported by the fact that the crack patterns obtained in both cases are equal. The aim of the research conducted by the authors is to check whether this hypothesis is true or false. To perform this, Computed Tomography (CT) technology is used, in combination with in-house post-processing software, in order to define the crack pattern within the specimens. Computed tomography is a nondestructive technique, based on X-ray absorption, that permits the visualization of the internal microstructure of a material at micro-range resolution. The field of application is very wide. This is a well-known technology in medicine, because of its enormous advantages, but it is also very useful in other fields. For example, this technology is also very common in veterinary medicine or paleontology. In materials engineering, this technology is starting to be widely used to analyse the internal microstructure of a wide variety of materials: metals, rocks, composites, etc. This technology is also very useful for the study of concrete microstructure. Most of the macroscopic responses of concrete elements can be explained through an understanding of the microstructure. For example, the freeze-thaw behavior of concrete strongly depends on pore sizes and pore distribution. Moreover, in the case of fiber-reinforced concrete, the use of the CT-Scan can provide useful information related to the fiber distribution and orientation, which strongly influences its macroscopic response [5, 9, 10, 14, 16]. In recent years, much research has been conducted in order to study the internal microstructure of concrete using this technology [2, 3, 5, 6, 8–10, 13, 14, 17]. The research shown in this paper uses the CT-Scan to detect internal "damage" in concrete, which includes pores and cracks. This technology is able to define the density of each specimen voxel by assigning a shade of grey. Light shades of grey correspond to high density and dark shades of grey correspond to low density. A more detailed explanation of the CT-Scan technology can be found in [17]. 2 Experimental Procedure Next, the experimental procedure and the scanning procedure are described. 2.1 Materials Characterization In this case, fiber-reinforced high-performance concrete cubic specimens were produced. Their shape was cubic with 40 mm edge-length, cut from prisms of 150 [image: $$\times$$] 150 [image: $$\times$$] 700 mm. The concrete compressive strength [image: $$f_{\mathrm {c}}$$] was 101 MPa, with a standard deviation of 3 MPa. A total of three series of concrete cubes were prepared, named "intact", "monotonic" and "cyclic". The intact and the monotonic series were composed of 8 specimens each. The cyclic series was composed of 16 specimens. A total of 32 specimens were produced. Specimens belonging to the monotonic series were subjected to a monotonic compressive load up to failure. Specimens belonging to the cyclic series were subjected to a low-cycle cyclic load up to failure. Finally, specimens belonging to the intact series were not tested. 2.2 Fatigue Tests Specimens belonging to the cyclic series were subjected to low-cycle cyclic load up to failure. The tests were carried out at a loading frequency of 10 Hz under sinusoidal stress cycles, varying between [image: $$0.36\,\cdot$$] f[image: $$_{\mathrm {c}}$$] and [image: $$0.82\,\cdot$$] f[image: $$_{\mathrm {c}}$$]. Table 1 shows the fatigue life of all the specimens.
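A minimal sketch of the loading protocol just described, assuming a pure sine wave between the two stress limits; the sampling settings are illustrative, not from the paper:

```python
import numpy as np

F_C = 101.0                  # mean compressive strength, MPa (from the paper)
FREQ = 10.0                  # loading frequency, Hz
S_MIN, S_MAX = 0.36, 0.82    # stress limits as fractions of f_c

def stress_signal(t: np.ndarray) -> np.ndarray:
    """Sinusoidal stress history oscillating between 0.36*f_c and 0.82*f_c."""
    mean = 0.5 * (S_MAX + S_MIN) * F_C    # ~59.6 MPa
    amp = 0.5 * (S_MAX - S_MIN) * F_C     # ~23.2 MPa
    return mean + amp * np.sin(2.0 * np.pi * FREQ * t)

t = np.linspace(0.0, 0.5, 1000)           # five cycles at 10 Hz
sigma = stress_signal(t)
print(sigma.min(), sigma.max())           # ~36.4 and ~82.8 MPa
```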
The tests were carried out using a hydraulic jack with a load capacity of [image: $${\pm }$$]250 kN.
Table 1 Specimens tested under cyclic load (cyclic series); cycles for [image: $${\sigma }_{\mathrm {max}} = 0.82 \cdot$$] f[image: $$_{\mathrm {c}}$$] and [image: $$\sigma _{\mathrm {min}} = 0.36 \cdot$$] f[image: $$_{\mathrm {c}}$$]
Specimen | Cycles
1f | 1066
2f | 3128
3f | 214
4f | 29189
5f | 1115686 (run-out)
6f | 25
7f | 111
8f | 460
9f | 19186
10f | 90
11f | 144
12f | 3305
13f | 2700
14f | 88
15f | 152076
16f | 1
The results show a wide scatter in the fatigue life of the test specimens. This is very common in concrete and is due to the inherent scatter of the material, which is particularly pronounced in terms of fatigue life.[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig1_HTML.png] Fig. 1 CT-Scan: Y.CT COMPACT with a YXlon tube of 225 kV/30 mA at the University of Burgos (Spain) 2.3 Scanning of the Specimens Once the specimens belonging to the monotonic and the cyclic series had been tested, they were scanned using a CT-Scan. In addition, the specimens belonging to the intact series were also scanned. The CT-Scan used was a Y.CT COMPACT device of the University of Burgos (Spain). It is equipped with a YXlon tube of 225 kV/30 mA (Fig. 1). The CT-Scan has post-processing software which provides 2D slices of 1024 [image: $$\times$$] 1024 pixels. Thus, for a section of 40 [image: $$\times$$] 40 mm[image: $$^{2}$$], the resolution of the scanner is 55 [image: $$\times$$] 55 [image: $${\upmu }$$]m[image: $$^{2}$$]. The vertical distance between slices is 100 [image: $${\upmu }$$]m. The number of slices per specimen is 401. The voxel has a volume of 55 [image: $$\times$$] 55 [image: $$\times$$] 100 [image: $${\upmu }$$]m[image: $$^{3}$$]. Each voxel is identified by its center of gravity (coordinates X, Y and Z) and a grey color belonging to a grey scale, from black to white depending on the voxel density. A total of 256 grey levels are identified in the grey scale. Light grey corresponds to high density and dark grey corresponds to low density. Figure 2 shows a slice of a specimen. The total number of voxels is approximately [image: $$4.2\cdot 10^{8}$$].[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig2_HTML.png] Fig. 2 CT-Scan slice The post-processing methodology is as follows. First, once all the voxels had been identified, the ones belonging to damage were extracted. These "damaged" voxels (or empty voxels) are the ones with a density (i.e. a grey color) below a threshold. The result is a 3D image containing only the empty voxels, i.e. the voxels belonging to pores or cracks (Fig. 3).[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig3_HTML.png] Fig. 3 Example of empty voxel distribution The average number of empty voxels per specimen in every series is shown in Table 2. In the case of the intact series, empty voxels belong only to voids, while in the case of the monotonic and cyclic series, empty voxels belong to voids and cracks.
Table 2 Empty voxels per specimen in every series
Series | Intact | Monotonic | Cyclic
Number of specimens | 8 | 8 | 16
Voxels, mean | 73,254 | 1,472,215 | 1,089,281
Voxels, std. dev. | 26,134 | 238,492 | 438,796
% voxels, mean | 0.02% | 0.35% | 0.26%
% voxels, std. dev. | 0.01% | 0.06% | 0.10%
It should be noted that the number of empty voxels in the intact series is significantly smaller than in the monotonic and cyclic series. That is because empty voxels in the intact series belong to pores only, while empty voxels in the monotonic and cyclic series include pores and cracks.
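A minimal sketch of this segmentation step with NumPy; the array shape and the grey-level threshold are illustrative assumptions, not the values used by the authors:

```python
import numpy as np

# Stand-in for the reconstructed CT volume (grey levels 0-255); a real scan
# here would be roughly 1024 x 1024 x 401 voxels.
volume = np.random.randint(0, 256, size=(64, 64, 40), dtype=np.uint8)

THRESHOLD = 60  # assumed grey-level cut-off separating "empty" voxels

empty_mask = volume < THRESHOLD       # True where density is below threshold
empty_xyz = np.argwhere(empty_mask)   # (N, 3) coordinates of pores/cracks
print(empty_mask.sum(), "empty voxels out of", volume.size)
```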
In order to compare the empty voxel distributions (crack patterns) from the different series, a novel numerical procedure designed by the authors, named the "circumferential test", has been developed. Next, this procedure is explained. 2.4 Circumferential Test The circumferential test is the procedure for analyzing the raw data from the CT-Scan in order to disclose the extent of damage generated in the mechanical tests. The steps for analyzing the data are the following: First, the coordinates x[image: $$_{\mathrm {i}}$$], y[image: $$_{\mathrm {i}}$$] and z[image: $$_{\mathrm {i}}$$] of each individual voxel are normalized according to the following expressions: [image: \begin{aligned} x_{rel,i}= \frac{2\cdot x_{i}}{x_{max}} \end{aligned}] (1a) [image: \begin{aligned} y_{rel,i}=\frac{2\cdot y_{i}}{y_{max}} \end{aligned}] (1b) [image: \begin{aligned} z_{rel,i}=\frac{z_{i}}{z_{max}} \end{aligned}] (1c) where x[image: $$_{\mathrm {max}}$$], y[image: $$_{\mathrm {max}}$$] and z[image: $$_{\mathrm {max}}$$] are the real dimensions of each individual specimen. Next, each voxel is identified by a pair of coordinates: the normalized distance d[image: $$_{\mathrm {i}}$$] of the voxel to the center of gravity of the cross-section where the voxel is placed, and the height h[image: $$_{\mathrm {i}}$$] of the voxel, according to the following expressions (Fig. 4):[image: \begin{aligned} d_{i}= & {} \sqrt{\left( x_{rel,i}-x_{G,rel,i}\right) ^{2}+\left( y_{rel,i}-y_{G,rel,i}\right) ^{2}} \end{aligned}] (2) [image: \begin{aligned} h_{i}= & {} z_{rel,i} \end{aligned}] (3) where x[image: $$_{\mathrm {G,rel,i}}$$] and y[image: $$_{\mathrm {G,rel,i}}$$] are the normalized coordinates of the center of gravity of the considered section. Coordinate d[image: $$_{\mathrm {i}}$$] varies from 0 to [image: $$\sqrt{2}$$] while h[image: $$_{\mathrm {i}}$$] varies from 0 to 1. Coordinates d and h are divided into twenty subdivisions each, so that the voxels are clustered for all the combinations of d and h coordinates. Thus, the whole cube volume is divided into 400 sub-volumes. For each one, the relative frequency of occurrence of empty voxels is calculated, according to the following expression:[image: \begin{aligned} \text {Relative frequency,}\quad i= & {} \frac{N_{i}}{N_{t}} \end{aligned}] (4) where N[image: $$_{\mathrm {i}}$$] is the number of empty voxels belonging to each individual sub-volume and N[image: $$_{\mathrm {t}}$$] is the total number of empty voxels of the specimen.[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig4_HTML.png] Fig. 4 Voxel coordinates In addition, a random damage distribution was simulated inside a theoretical cube of 40 mm edge-length using a Monte Carlo model with 10[image: $$^{6}$$] points. This theoretical distribution was compared with the real distributions in order to decide whether empty voxels in the real specimens are randomly distributed or not. 3 Results and Discussion Next, the results of the CT-Scan are shown. First, the results are shown by means of 3D histograms of each specimen. In addition, the histogram of the random distribution is shown. Second, the results are aggregated and shown by means of 2D histograms, first in the d direction and then in the h direction, in order to be analyzed appropriately.
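A minimal NumPy sketch of how the circumferential test could be implemented, assuming the empty-voxel coordinates have already been extracted (e.g. as in the thresholding sketch above) and taking the centre of gravity of each cross-section at the geometric centre; this is an illustrative reconstruction of Eqs. (1a)-(4), not the authors' code:

```python
import numpy as np

def circumferential_test(empty_xyz, dims, n_bins=20):
    """Cluster empty voxels into (d, h) sub-volumes and return the
    relative frequency of Eq. (4) for each of the n_bins x n_bins cells.

    empty_xyz : (N, 3) array of voxel coordinates (x, y, z)
    dims      : (x_max, y_max, z_max), real specimen dimensions
    """
    x_max, y_max, z_max = dims
    # Eqs. (1a)-(1c): x_rel, y_rel normalized to [0, 2]; h to [0, 1]
    x_rel = 2.0 * empty_xyz[:, 0] / x_max
    y_rel = 2.0 * empty_xyz[:, 1] / y_max
    h = empty_xyz[:, 2] / z_max
    # Eq. (2), with the section's centre of gravity assumed at (1, 1)
    # in normalized coordinates (a simplification of this sketch).
    d = np.sqrt((x_rel - 1.0) ** 2 + (y_rel - 1.0) ** 2)
    # 20 x 20 sub-volumes over d in [0, sqrt(2)] and h in [0, 1]
    counts, d_edges, h_edges = np.histogram2d(
        d, h, bins=n_bins, range=[[0.0, np.sqrt(2.0)], [0.0, 1.0]])
    return counts / len(empty_xyz), d_edges, h_edges  # Eq. (4)
```

Summing the returned frequencies over the h axis then yields the 2D histogram along d discussed below, and summing over the d axis yields the histogram along h.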
3.1 3D Histograms Using all the information explained above, a 3D histogram can be drawn, where the x-axis is the parameter h, the y-axis is the parameter d and the z-axis is the relative frequency. Figures 5, 6, 7 and 8 show all the 3D histograms.[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig5_HTML.png] Fig. 5 Theoretical 3D histogram Note that the random histogram shows a triangular shape along the d parameter, with a maximum at d [image: $$=$$] 1, since this sub-volume corresponds to the longest ring inside the cube.[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig6_HTML.png] Fig. 6 3D histogram of the intact series specimens Histograms belonging to the intact series specimens show high peaks. These represent the presence of large pores with a great number of empty voxels.[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig7_HTML.png] Fig. 7 3D histogram of the monotonic series specimens [image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig8_HTML.png] Fig. 8 3D histogram of the cyclic series specimens In the case of the monotonic and cyclic series specimens, fewer peaks are observed. That is because of the presence of cracks, whose empty voxels are more distributed throughout the specimen. In consequence, the relative weight of the pores is mitigated. 3.2 2D Histograms Along the H Direction In order to analyze the results more easily, the 3D histograms were aggregated along the h direction, obtaining 2D histograms which represent the relative frequency of occurrence of empty voxels along the d coordinate, i.e., transverse to the load direction. Figures 9, 10 and 11 show the 2D histograms of the different series. Each histogram shows the individual data of the specimens (drawn as bar diagrams) and also the average histogram. Additionally, the histogram belonging to the random distribution is shown, in order to be able to compare real and theoretical distributions.[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig9_HTML.png] Fig. 9 2D histogram of the intact series specimens [image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig10_HTML.png] Fig. 10 2D histogram of the monotonic series specimens [image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig11_HTML.png] Fig. 11 2D histogram of the cyclic series specimens Figures 12, 13 and 14 show the comparison between the theoretical distribution and the real distributions of the intact, monotonic and cyclic series. Additionally, the 90% confidence interval is defined by the upper and the lower limits.[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig12_HTML.png] Fig. 12 Comparison between theoretical and intact series [image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig13_HTML.png] Fig. 13 Comparison between theoretical and monotonic series [image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig14_HTML.png] Fig. 14 Comparison between theoretical and cyclic series Figure 12 shows that the theoretical histogram is, in general, inside the confidence interval. This means that, from a statistical point of view, it can be assumed that both distributions are equal, i.e., the intact series follows a random distribution. Empty voxels, i.e., pores, are randomly distributed inside the specimens along the d direction. On the contrary, Figs. 13 and 14 show that the theoretical histogram is partially outside the confidence interval. This means that, from a statistical point of view, it can be assumed that both the monotonic and the cyclic series do not follow a random distribution along the d direction.
In consequence, it can be affirmed that cracks are not randomly distributed inside the specimens. In the case of the monotonic and cyclic series, the behavior is quite similar. Between d [image: $$=$$] 0 and d [image: $$=$$] 0.8 approximately, the real damage is below the theoretical damage. On the contrary, between d [image: $$=$$] 0.8 and d [image: $$=$$] [image: $$\sqrt{2}$$], the real damage is above the theoretical damage. This means that cracks appear mostly near the edges of the specimens. Figure 15 shows the comparison between the real distributions of the monotonic and cyclic series. Additionally, the 90% confidence interval is defined by the upper and the lower limits.[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig15_HTML.png] Fig. 15 Comparison between monotonic and cyclic series Figure 15 shows that the average monotonic histogram is inside the confidence interval of the cyclic series and vice versa. This means that, from a statistical point of view, it can be assumed that both distributions are equal, i.e., the crack patterns of the monotonic and cyclic series are equal along the d direction. 3.3 2D Histograms Along the D Direction Similarly to the previous case, the 3D histograms were aggregated along the d direction, obtaining 2D histograms which represent the relative frequency of occurrence of empty voxels along the h coordinate, i.e., parallel to the load direction. Figures 16, 17 and 18 show the 2D histograms of the different series. As in the previous case, each histogram shows the individual data of the specimens (drawn as bar diagrams) and also the average histogram. Additionally, the histogram belonging to the random distribution is shown, in order to be able to compare real and theoretical distributions.[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig16_HTML.png] Fig. 16 2D histogram of the intact series specimens [image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig17_HTML.png] Fig. 17 2D histogram of the monotonic series specimens [image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig18_HTML.png] Fig. 18 2D histogram of the cyclic series specimens Figures 19, 20 and 21 show the comparison between the theoretical distribution and the real distributions of the intact, monotonic and cyclic series. Additionally, the 90% confidence interval is defined by the upper and the lower limits.[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig19_HTML.png] Fig. 19 Comparison between theoretical and intact series [image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig20_HTML.png] Fig. 20 Comparison between theoretical and monotonic series [image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig21_HTML.png] Fig. 21 Comparison between theoretical and cyclic series Figures 19, 20 and 21 show that the theoretical histogram is, in general, inside the confidence interval for all series. This means that, from a statistical point of view, it can be assumed that all the distributions are equal, i.e., the intact, monotonic and cyclic series follow a random distribution. Empty voxels, i.e., pores, are randomly distributed along the height of the specimens. Figure 22 shows the comparison between the real distributions of the monotonic and cyclic series. Additionally, the 90% confidence interval is defined by the upper and the lower limits.[image: ../images/450266_1_En_1_Chapter/450266_1_En_1_Fig22_HTML.png] Fig. 22 Comparison between monotonic and cyclic series
In concordance with the information shown in Figs. 20 and 21, Fig. 22 shows that the average monotonic histogram is inside the confidence interval of the cyclic series and vice versa. This means that, from a statistical point of view, it can be assumed that both distributions are equal, i.e., the crack patterns of the monotonic and cyclic series are equal along the h direction. Considering the information provided by Figs. 15 and 22, it can be affirmed that the monotonic and cyclic distributions are equal, which demonstrates the hypothesis stated at the beginning of this paper. 4 Summary and Conclusions The CT-Scan combined with specific post-processing software is a useful tool to measure the internal damage of concrete specimens. In the case of intact specimens, damage refers to pores, while in the case of tested specimens, damage refers to pores and cracks. In this paper, the circumferential test is presented. This is a specific protocol, developed by the authors, to measure the damage distribution. Using the data provided by the CT-Scan, it is possible to check whether damage follows a random distribution or not. In addition, a theoretical random distribution was simulated using a Monte Carlo model. This simulation was used as the random distribution reference. Results provided by the CT-Scan and later analyzed with the circumferential test show that damage (i.e. pores) inside the intact series specimens follows a random distribution, while damage (i.e. pores and cracks) inside the monotonic and cyclic series specimens does not follow a random distribution. In addition, the results show that both the monotonic and the cyclic series follow the same damage distribution. This conclusion demonstrates the hypothesis of convergence to the "initial distribution", which means that static testing is a particular case of cyclic testing where the fatigue life is equal to 1. Acknowledgements The authors are grateful for the financial support from the Ministerio de Economía y Competitividad BIA2015-686678-C2-R, Spain, Junta de Comunidades de Castilla – La Mancha, Spain, Fondo Europeo de Desarrollo Regional, grant PEII-2014-016-P, and the INCRECYT Program. References 1. Aas-Jackobsen, K.: Fatigue of concrete beams and columns. Ph.D. thesis, University of Trondheim (1970) 2. Bordelon, A.C., Roesler, J.R.: Spatial distribution of synthetic fibers in concrete with x-ray computed tomography. Cement Concr. Compos. 53, 35–43 (2014). https://doi.org/10.1016/j.cemconcomp.2014.04.007 3. Herrmann, H., Pastorelli, E., Kallonen, A., Suuronen, J.P.: Methods for fibre orientation analysis of x-ray tomography images of steel fibre reinforced concrete (SFRC). J. Mater. Sci. 51(8), 3772–3783 (2016). https://doi.org/10.1007/s10853-015-9695-4 4. Hsu, T.: J. Am. Concr. Inst. 78(4), 192–305 (1981) 5. Oesch, T.S., Landis, E.N., Kuchma, D.A.: Conventional concrete and UHPC performance–damage relationships identified using computed tomography. J. Eng. Mech. 142(12), 04016101 (2016) 6. Pastorelli, E., Herrmann, H.: Virtual reality visualization for short fibre orientation analysis. In: 2014 14th Biennial Baltic Electronic Conference (BEC), pp. 201–204 (2014). https://doi.org/10.1109/BEC.2014.7320591 7. Petkovic, G., Lenschow, R., Stemland, H., Rosseland, S.: Fatigue of high strength concrete. ACI Spec. Publ. 121(25), 505–525 (1990) 8. Pittino, G., Geier, G., Fritz, L., Hadwiger, M., Rosc, J., Pabel, T.: Computertomografische Untersuchung von Stahlfaserspritzbeton mit mehrdimensionalen Transferfunktionen. Beton- und Stahlbetonbau 106(6), 364–370 (2011).
https://doi.org/10.1002/best.201100009 9. Ponikiewski, T., Katzer, J., Bugdol, M., Rudzki, M.: Steel fibre spacing in self-compacting concrete precast walls by x-ray computed tomography. Mater. Struct. 48(12), 3863–3874 (2015a). https://doi.org/10.1617/s11527-014-0444-y 10. Ponikiewski, T., Katzer, J., Bugdol, M., Rudzki, M.: X-ray computed tomography harnessed to determine 3D spacing of steel fibres in self compacting concrete (SCC) slabs. Constr. Build. Mater. 74, 102–108 (2015b). https://doi.org/10.1016/j.conbuildmat.2014.10.024 11. Przybilla, C., Fernández-Cantelli, A., Castillo, E.: Deriving the primary cumulative distributive function of fracture stress for brittle materials from 3- and 4-point bending tests. J. Eur. Ceram. Soc. 31, 451–460 (2011) 12. Saucedo, L., Yu, R., Medeiros, A., Zhang, X., Ruiz, G.: A probabilistic fatigue model based on the initial distribution to consider frequency effect in plain and fiber reinforced concrete. Int. J. Fatigue 48, 308–318 (2013) 13. Schnell, J., Schladitz, K., Schuler, F.: Richtungsanalyse von Fasern in Betonen auf Basis der Computer-Tomographie. Beton- und Stahlbetonbau 105(2), 72–77 (2010). https://doi.org/10.1002/best.200900055 14. Suuronen, J.P., Kallonen, A., Eik, M., Puttonen, J., Serimaa, R., Herrmann, H.: Analysis of short fibres orientation in steel fibre reinforced concrete (SFRC) using x-ray tomography. J. Mater. Sci. 48(3), 1358–1367 (2013). https://doi.org/10.1007/s10853-012-6882-4 15. Tepfers, R., Kutti, T.: Fatigue strength of plain, ordinary and lightweight concrete. J. Am. Concr. Inst. 76(5), 635–652 (1979) 16. Vicente, M., Minguez, J., González, D.: The use of computed tomography to explore the microstructure of materials in civil engineering: from rocks to concrete. In: Halefoglu, D.A.M. (ed.) Computed Tomography-Advanced Applications. InTech (2017). https://doi.org/10.5772/intechopen.69245 17. Vicente, M.A., González, D.C., Mínguez, J.: Determination of dominant fibre orientations in fibre-reinforced high-strength concrete elements based on computed tomography scans. Nondestruct. Test. Eval. 29(2), 164–182 (2014). https://doi.org/10.1080/10589759.2014.914204 18. Zhang, B., Phillips, D., Wu, K.: Effects of loading frequency and stress reversal on fatigue life of plain concrete. Mag. Concr. Res. 48(4), 292–305 (1996) 19. Zhao, D., Chang, Q., Yang, J., Song, Y.: A new model for fatigue life distribution of concrete. Key Eng. Mater. 348–349, 201–204 (2007) © Springer Nature Switzerland AG 2019. Heiko Herrmann and Jürgen Schnell (eds.), Short Fibre Reinforced Cementitious Composites and Ceramics, Advanced Structured Materials 95. https://doi.org/10.1007/978-3-030-00868-0_6 Short Composite Fibres for Concrete Disperse Reinforcement Arturs Lukasenoks1, Andrejs Krasnikovs1, Arturs Macanovskis1, Olga Kononova1 and Videvuds Lapsa2 (1) Concrete Mechanics Laboratory, Institute of Mechanics, Riga Technical University, Riga, Latvia (2) Faculty of Civil Engineering, Institute of Building Production, Riga Technical University, Riga, Latvia Arturs Lukasenoks (Corresponding author) Email: [email protected] Andrejs Krasnikovs Email: [email protected] Arturs Macanovskis Email: [email protected] Olga Kononova Email: [email protected] Videvuds Lapsa Email: [email protected] Abstract Short composite fibres are a relatively new product for concrete disperse reinforcement.
In this experimental research, 14 different composite fibres were developed and their single-fibre pull-out micromechanics was investigated. The three main groups of composite fibres were composite glass fibres (GF), composite carbon fibres (CF) and hybrid fibres (HF). Composite fibre manufacturing consisted of glass, carbon or combined fibre filament preparation, impregnation with epoxy resin, epoxy curing, quality control and cutting into short discrete macro-fibres. All three composite fibre groups were manufactured with straight, uneven and undulated geometries. Fibre surface finishes were either smooth or rough. The uneven fibre geometry was achieved by not aligning all fibre filaments in the fibre tow; the undulated geometry was the result of interlaced fibres. The rough outer surface finish was achieved by adding an extra layer of epoxy resin containing fine quartz grains. All macro-fibres were cut to 50 mm length. Single-fibre pull-out samples with a pre-defined crack between two concrete parts were prepared to investigate fibre pull-out behaviour. Fibre pull-out laws were obtained and analysed. Improving the composite fibre geometry and roughening the outer surface had a major impact on fibre pull-out resistance.

1 Introduction

The development of composite fibres is important because of the many problems related to the use of polymer, glass and carbon fibres as disperse reinforcement in concrete. Large volume fractions (2–2.5%) of polymer fibre are needed to achieve a tangible increase in the load-bearing performance of low-strength concretes [1]. Large amounts of fibres significantly worsen concrete consistency, so a new approach to concrete design with a target consistency is necessary. Fibres with higher strength and elastic modulus are necessary for higher-strength concretes. Because of their small filament diameter, glass, basalt and carbon fibres are difficult to introduce into concrete: when they are added to the mix, fibre rolls and clews form, and problems with fibre homogeneity in the concrete occur. Each fibre roll or clew with air voids is another defect that reduces the strength of the concrete. These difficulties become more pronounced as the fibre volume fraction increases. Such fibres can be introduced in concrete mixes produced in high-speed mixers; unfortunately, the high mixing energy and shear forces break and mill the fibres into shorter pieces [2]. Short fibre fragments can bridge only small crack openings. Glass, basalt and carbon fibres are nevertheless used more and more widely thanks to advanced manufacturing technologies and reduced cost. Several attempts to introduce glass, basalt and carbon fibres into the construction industry can be found—long carbon fibres and the commercial product Minibar [3, 4]. Basalt Minibar fibres are recommended for use in high volume fractions (1.5–3%) to achieve tangible results. Long carbon composite fibres have been used to increase the impact-load resistance of concrete plates [4, 5]. In the present investigation, short composite macro-fibres were manufactured as unidirectional carbon and glass composite rods. The rods were cut into discrete fibres with L/d ratios from 18.1 to 56.1. By varying the number of filaments in a macro-fibre (changing the filament volume fraction $V_f$ in the composite fibre), as well as by using filaments of different materials in one macro-fibre, it is possible to obtain reinforcing fibres with different strengths and elastic properties.
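The chapter itself does not give a formula for this, but for a unidirectional composite rod loaded along its axis the standard rule of mixtures gives a first estimate of how the filament volume fraction $V_f$ controls the macro-fibre stiffness (the subscripts below are generic labels, not the authors' notation):

$$E_{\text{fibre}} \approx V_f\,E_{\text{filament}} + (1 - V_f)\,E_{\text{epoxy}},$$

with an analogous weighted average often used as an upper bound for the axial strength.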
Composite fibres can be developed for very specific uses: fibres with a smooth surface suit high-strength concretes, where they increase ductility, while fibres with a rough surface or uneven geometry give good anchorage in low- and average-strength concrete reinforcement [6]. Higher fibre anchorage in concrete can also be achieved by roughening the fibre's outer surface [7].

2 Materials and Methods

An overview of the experimental programme for the composite fibres and the different concrete matrices is shown in Table 1.

Table 1 Experimental programme (all pull-out tests: 25 mm embedment depth, 0-degree angle to the pull-out force)

    Fibre type               | Denomination | M1 | M2 | M3
    Composite glass fibres   | GF1          | -  | -  | x
                             | GF2          | x  | x  | x
                             | GF3          | -  | -  | x
                             | GF4          | -  | -  | x
                             | GF5          | -  | -  | x
    Composite carbon fibres  | CF1          | x  | x  | x
                             | CF1-A        | x  | x  | x
                             | CF1-B        | x  | x  | x
                             | CF2          | x  | x  | x
                             | CF2-A        | x  | x  | x
                             | CF2-B        | x  | x  | x
                             | CF3          | -  | x  | x
    Hybrid fibres            | GC1          | -  | -  | x
                             | CG1          | -  | -  | x

3 Concrete Materials and Mix Design

Portland cement Aalborg White CEM I 52.5 R was used as the binder in the experimental mixes, with naturally fractioned and washed 0–1 mm quartz sand as the main aggregate. Quartz powder and silica fume were used as micro-fillers, and a polycarboxylate-based high-range water-reducing admixture (HRWR) was used to control mix workability. Three types of concrete mixture were designed (mixture proportions are presented in Table 2). The first is a high-strength concrete (M1) with a cement content of 800 kg/m³, with silica fume and a water-to-cement (w/c) ratio of 0.25. The second is a normal-strength concrete (M2) with a cement content of 550 kg/m³, with silica fume and a w/c ratio of 0.55. The third is a low-strength concrete (M3) with a cement content of 400 kg/m³, without silica fume and with a w/c ratio of 0.75. The amount of micro-filler was adjusted to achieve a paste content of 550 l ± 7% in all cases. All three concrete types were designed to reach the target compressive strength and to have good workability and stability in the fresh state. Workability was specified as a slump flow of 580 mm, measured in accordance with EN 12350-8.

Table 2 Concrete mixture proportions (kg/m³)

    Material                            | Density, t/m³ | M1    | M2    | M3
    Cement CEM I 52.5 R (Aalborg White) | 3.13          | 800   | 550   | 400
    Water                               | 1.00          | 200   | 300   | 300
    Sand 0–1 mm (SaulkalneS)            | 2.65          | 1100  | 1200  | 1400
    Microsilica (Elkem 920D)            | 2.22          | 133.3 | 50    | 0
    Quartz powder 0–120 mk (Anyksciai)  | 2.65          | 66.5  | 250   | 250
    HRWR (Sika D400)                    | 1.07          | 25    | 6.5   | 3
    w/c ratio                           |               | 0.25  | 0.55  | 0.75
    Paste volume, l                     |               | 605.5 | 611.4 | 516

Table 3 Fibre properties

    Denomination | Geometry, surface  | Diameter, mm | L/d ratio | Reinforcement ratio | Specific weight, kg/m³
    GF1          | Uneven, smooth     | 2.18         | 22.9      | 0.534               | 1195
    GF2          | Straight, smooth   | 1.52         | 32.9      | 0.751               | 1746
    GF3          | Straight, rough    | 2.04         | 24.5      | 0.482               | 1507
    GF4          | Undulated, smooth  | 1.88         | 26.6      | 0.657               | 1310
    GF5          | Undulated, smooth  | 1.85         | 27.0      | 0.696               | 1270
    CF1          | Straight, smooth   | 1.77         | 28.3      | 0.578               | 1129
    CF1-A        | Straight, smooth   | 1.30         | 38.6      | 0.635               | 1291
    CF1
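As a quick cross-check of the tabulated values, the w/c ratios in Table 2 and the L/d ratios in Table 3 can be recomputed from the mix masses and the 50 mm cut length given in the abstract; a minimal sketch (all values copied from the tables above):

```python
# Consistency check of Table 2 (w/c ratios) and Table 3 (L/d ratios).
mixes = {"M1": (800, 200), "M2": (550, 300), "M3": (400, 300)}  # cement, water [kg/m3]
for name, (cement, water) in mixes.items():
    print(f"{name}: w/c = {water / cement:.2f}")   # -> 0.25, 0.55, 0.75

length = 50.0  # mm; all macro-fibres were cut to 50 mm
fibres = {"GF1": 2.18, "GF2": 1.52, "GF3": 2.04, "GF4": 1.88, "GF5": 1.85}  # diameter [mm]
for name, d in fibres.items():
    print(f"{name}: L/d = {length / d:.1f}")       # -> 22.9, 32.9, 24.5, 26.6, 27.0
```

Both sets of numbers reproduce the table entries.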
# First steps: a walk-through from DFT to RPA (standalone)

In this tutorial you will learn how to calculate optical spectra using Yambo, starting from a DFT calculation and ending with a look at local field effects in the optical response.

## System characteristics

We will use a 3D system (bulk hBN) and a 2D system (hBN sheet). Hexagonal boron nitride - hBN:

• HCP lattice, ABAB stacking
• Four atoms per cell, B and N (16 electrons)
• Lattice constants: a = 4.716 [a.u.], c/a = 2.582
• Plane-wave cutoff 40 Ry (~1500 RL vectors in wavefunctions)
• SCF run: shifted 6x6x2 grid (12 k-points) with 8 bands
• Non-SCF run: gamma-centred 6x6x2 grid (14 k-points) with 100 bands

### Prerequisites

You will need:

• PWSCF input files and pseudopotentials for hBN bulk
• pw.x executable, version 5.0 or later
• p2y and yambo executables
• gnuplot for plotting spectra

    $ mkdir YAMBO_TUTORIALS
    $ mv hBN.tar.gz YAMBO_TUTORIALS
    $ cd YAMBO_TUTORIALS
    $ tar -xzvf hBN.tar.gz
    $ ls
    hBN

(Advanced users can download and install all tutorial files using git. See the main Tutorial Files page.)

## DFT calculation of bulk hBN and conversion to Yambo

In this module you will learn how to generate the Yambo SAVE folder for bulk hBN starting from a PWscf calculation.

### DFT calculations

    $ cd YAMBO_TUTORIALS/hBN/PWSCF
    $ ls
    Inputs  Pseudos  PostProcessing  References
    hBN_scf.in  hBN_nscf.in  hBN_scf_plot_bands.in  hBN_nscf_plot_bands.in

First run the SCF calculation to generate the ground-state charge density, occupations, Fermi level, and so on:

    $ pw.x < hBN_scf.in > hBN_scf.out

Inspection of the output shows that the valence band maximum lies at 5.06 eV. Next run a non-SCF calculation to generate a set of Kohn-Sham eigenvalues and eigenvectors for both occupied and unoccupied states (100 bands):

    $ pw.x < hBN_nscf.in > hBN_nscf.out                (serial run, ~1 min)
    $ mpirun -np 2 pw.x < hBN_nscf.in > hBN_nscf.out   (parallel run, ~40 s)

Here we use a 6x6x2 grid giving 14 k-points, but denser grids should be used for checking convergence of Yambo runs. Note the presence of the following flags in the input file:

    wf_collect=.true.
    force_symmorphic=.true.
    diago_thr_init=5.0e-6,
    diago_full_acc=.true.

which are needed for generating the Yambo databases accurately. Full explanations of these variables are given on the quantum-ESPRESSO input variables page. After these two runs, you should have a hBN.save directory:

    $ ls hBN.save
    data-file.xml  charge-density.dat  gvectors.dat  B.pz-vbc.UPF  N.pz-vbc.UPF
    K00001  K00002  ....  K00035  K00036

### Conversion to Yambo format

The PWscf hBN.save output is converted to the Yambo format using the p2y executable (pwscf to yambo), found in the yambo bin directory. Enter hBN.save and launch p2y:

    $ cd hBN.save
    $ p2y
    ...
    <---> DBs path set to .
    <---> Index file set to data-file.xml
    <---> Header/K-points/Energies... done
    ...
    <---> == DB1 (Gvecs and more) ...
    <---> ... Database done
    <---> == DB2 (wavefunctions) ... done ==
    <---> == DB3 (PseudoPotential) ... done ==
    <---> == P2Y completed ==

This output repeats some information about the system and generates a SAVE directory:

    $ ls SAVE
    ns.db1  ns.wf  ns.kb_pp_pwscf
    ns.wf_fragments_1_1 ...  ns.kb_pp_pwscf_fragment_1 ...

These files, with an n prefix, indicate that they are in netCDF format, and thus not human readable. However, they are perfectly transferable across different architectures.
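Since the databases are plain netCDF files, you can also inspect them with any netCDF reader without invoking yambo at all; a minimal sketch, assuming the Python netCDF4 package is installed and that you run it from the directory containing SAVE:

```python
# Generic peek inside a Yambo database (no Yambo-specific API involved).
from netCDF4 import Dataset

db = Dataset("SAVE/ns.db1")  # static database written by p2y
print("dimensions:", {name: len(dim) for name, dim in db.dimensions.items()})
print("variables :", list(db.variables))
db.close()
```

This prints only generic metadata (dimension and variable names); for the physical content, use yambo -D as shown next.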
You can check that the databases contain the information you expect by launching Yambo with the -D option:

    $ yambo -D
    [RD./SAVE//ns.db1]------------------------------------------
     Bands                      : 100
     K-points                   : 14
     G-vectors   [RL space]     : 8029
     Components  [wavefunctions]: 1016
    ...
    [RD./SAVE//ns.wf]-------------------------------------------
     Fragmentation              : yes
    ...
    [RD./SAVE//ns.kb_pp_pwscf]----------------------------------
     Fragmentation              : yes
    - S/N 006626 -------------------------- v.04.01.02 r.00000 -

In practice we suggest moving the SAVE folder into a new clean folder. In this tutorial, however, we ask instead that you continue using a SAVE folder that we prepared previously:

    $ cd ../../YAMBO
    $ ls
    SAVE

## Initialization of Yambo databases

Use the SAVE folders that are already provided, rather than any you may have generated previously. Every Yambo run must start with this step. Go to the folder containing the hBN-bulk SAVE directory:

    $ cd YAMBO_TUTORIALS/hBN/YAMBO
    $ ls
    SAVE

and simply launch the code:

    $ yambo

This will run the initialization (setup) runlevel. TIP: do not run yambo from inside the SAVE folder! In fact, if you ever see the message

    yambo: cannot access CORE database (SAVE/*db1 and/or SAVE/*wf)

it usually means you are trying to launch Yambo from the wrong place. Three new elements will appear:

### Run-time output

This is typically written to standard output (on screen) and tracks the progress of the run in real time:

    <---> [01] CPU structure, Files & I/O Directories
    <---> [02] CORE Variables Setup
    <---> [02.01] Unit cells
    <---> [02.02] Symmetries
    <---> [02.03] RL shells
    <---> Shells finder |########################################| [100%] --(E) --(X)
    <---> [02.04] K-grid lattice
    <---> [02.05] Energies [ev] & Occupations
    <---> [03] Transferred momenta grid
    <---> BZ -> IBZ reduction |########################################| [100%] --(E) --(X)
    <---> X indexes |########################################| [100%] --(E) --(X)
    <---> SE indexes |########################################| [100%] --(E) --(X)
    <---> [04] External corrections
    <---> [05] Game Over & Game summary

Specific runlevels are indicated with numeric labels like [02.02]. The hashes (#) indicate the progress of the run in wall-clock time, indicating the elapsed (E) and expected (X) time to complete a runlevel, and the percentage of the task complete. If Yambo is launched using a script, as a background process, or in parallel, this output will appear in a log file prefixed by the letter l, in this case l_setup. If this log file already exists from a previous run, it will not be overwritten: instead, a new file will be created with an incrementing numerical label, e.g. l_setup_01, l_setup_02, etc. This applies to all files created by Yambo. In the case of parallel runs, CPU-dependent log files will appear inside a LOG folder, e.g.

    $ mpirun -np 4 yambo
    $ ls LOG
    l_setup_CPU_1  l_setup_CPU_2  l_setup_CPU_3  l_setup_CPU_4

This behaviour can be controlled at runtime - see the Parallel tutorial for details.

### New core databases

New databases appear in the SAVE folder:

    $ ls SAVE
    ns.db1  ns.wf  ns.kb_pp_pwscf  ndb.gops  ndb.kindx
    ns.wf_fragments_1_1 ...  ns.kb_pp_pwscf_fragment_1 ...

These contain information about the G-vector shells and k/q-point meshes as defined by the DFT calculation. In general, a database called ns.xxx is a static database, generated once by p2y, while databases called ndb.xxx are generated dynamically while you use yambo. TIP: if you launch yambo but it does not seem to do anything, check that these files are present.
### Report file

A report file r_setup is generated in the run directory. This mostly reports information about the ground-state system as defined by the DFT run, but also adds information about the band gaps, occupations, shells of G-vectors, IBZ/BZ grids, the CPU structure (for parallel runs), and so on. Some points of note:

    [02.03] RL shells
    =================
    Shells, format: [S#] G_RL(mHa)
    [S453]:8029(0.7982E+5)  [S452]:8005(0.7982E+5)  [S451]:7981(0.7982E+5)  [S450]:7957(0.7942E+5)
    ...
    [S4]:11( 1183.)  [S3]:5( 532.5123)  [S2]:3( 133.1281)  [S1]:1( 0.000000)

This reports the set of closed reciprocal-lattice (RL) shells defined internally, each containing G-vectors of the same modulus. The highest number of RL vectors we can use is 8029. Yambo will always redefine any input variable given in RL units to the nearest closed shell.

    [02.05] Energies [ev] & Occupations
    ===================================
    Fermi Level        [ev]   : 5.112805
    VBM / CBm          [ev]   : 0.000000  3.876293
    Electronic Temp.   [ev K] : 0.00  0.00
    Bosonic Temp.      [ev K] : 0.00  0.00
    El. density        [cm-3] : 0.460E+24
    States summary            : Full       Metallic  Empty
                                0001-0008            0009-0100
    Indirect Gaps      [ev]   : 3.876293  7.278081
    Direct Gaps        [ev]   : 4.28829   11.35409
    X BZ K-points             : 72

Yambo recalculates the Fermi level (close to the value of 5.06 eV noted in the PWscf SCF calculation). From here on, however, the Fermi level is set to zero and the other eigenvalues are shifted accordingly. The system is insulating (8 filled bands, 92 empty) with an indirect band gap of 3.87 eV. The minimum and maximum direct and indirect gaps are indicated. There are 72 k-points in the full BZ, generated using symmetry from the 14 k-points in our user-defined grid. TIP: you should inspect the report file after every run for errors and warnings.

### 2D hBN

Simply repeat the steps above. Go to the folder containing the hBN-sheet SAVE directory and launch yambo:

    $ cd YAMBO_TUTORIALS/hBN-2D/YAMBO
    $ ls
    SAVE
    $ yambo

Again, inspect the r_setup file and the output logs, and verify that ndb.gops and ndb.kindx have been created inside the SAVE folder. You are now ready to use Yambo!

## Yambo's command line interface

Yambo uses a command line interface to select tasks, generate input files, and control the runtime behaviour; in this module you will learn how to do all three. Command line options are divided into uppercase and lowercase options:

• Lowercase: select tasks, generate input files, and (by default) launch a file editor
• Uppercase: modify Yambo's default settings, at run time and when generating input files

Lowercase and uppercase options can be used together.

### Input file generator

First, move to the appropriate folder and initialize the Yambo databases if you haven't already done so:

    $ cd YAMBO_TUTORIALS/hBN/YAMBO
    $ yambo          (initialize)

Yambo generates its own input files: you just tell the code what you want to calculate by launching Yambo along with one or more lowercase options.
#### Allowed options

To see the list of runlevels and options, run yambo -h or, better,

    $ yambo -H
    This is yambo 4.4.0 rev.148
    A shiny pot of fun and happiness [C.D.Hogan]
     -h          :Short Help
     -H          :Long Help
     -J <opt>    :Job string identifier
     -V <opt>    :Input file verbosity [opt=RL,kpt,sc,qp,io,gen,resp,all,par]
     -F <opt>    :Input file
     -I <opt>    :Core I/O directory
     -O <opt>    :Additional I/O directory
     -C <opt>    :Communications I/O directory
     -D          :DataBases properties
     -W <opt>    :Wall Time limitation (1d2h30m format)
     -Q          :Don't launch the text editor
     -E <opt>    :Environment Parallel Variables file
     -M          :Switch-off MPI support (serial run)
     -N          :Switch-off OpenMP support (single thread run)
     -i          :Initialization
     -o <opt>    :Optics [opt=(c)hi is (G)-space / (b)se is (eh)-space]
     -k <opt>    :Kernel [opt=hartree/alda/lrc/hf/sex] (hf/sex only eh-space; lrc only G-space)
     -y <opt>    :BSE solver [opt=h/d/s/(p/f)i] (h)aydock/(d)iagonalization/(i)nversion
     -r          :Coulomb potential
     -x          :Hartree-Fock Self-energy and local XC
     -d          :Dynamical Inverse Dielectric Matrix
     -b          :Static Inverse Dielectric Matrix
     -p <opt>    :GW approximations [opt=(p)PA/(c)HOSEX]
     -g <opt>    :Dyson Equation solver [opt=(n)ewton/(s)ecant/(g)reen]
     -l          :GoWo Quasiparticle lifetimes
     -a          :ACFDT Total Energy
     -s          :ScaLapacK test

Any time you launch Yambo with a lowercase option, Yambo will generate the appropriate input file (default name: yambo.in) and launch the vi editor. The editor choice can be changed at configure time; alternatively you can use the -Q run-time option to skip the automatic editing (do this if you are not familiar with vi!):

    $ yambo -x -Q
    yambo: input file yambo.in created
    $ emacs yambo.in       (or your favourite editing tool)

#### Combining options

Multiple options can be used together to activate various tasks or runlevels (in some cases this is actually a necessity). For instance, to generate an input file for optical spectra including local field effects (Hartree approximation), do (and then exit):

    $ yambo -o c -k hartree

which switches on:

    optics                    # [R OPT] Optics
    chi                       # [R CHI] Dyson equation for Chi.
    Chimod= "Hartree"         # [X] IP/Hartree/ALDA/LRC/BSfxc

To perform a Hartree-Fock and GW calculation using a plasmon-pole approximation, do (and then exit):

    $ yambo -x -g n -p p

which switches on:

    HF_and_locXC              # [R XX] Hartree-Fock Self-energy and Vxc
    gw0                       # [R GW] GoWo Quasiparticle energy levels
    ppa                       # [R Xp] Plasmon Pole Approximation
    em1d                      # [R Xd] Dynamical Inverse Dielectric Matrix

Each runlevel activates its own list of variables and flags.

### Changing input parameters

Yambo reads various parameters from existing database files and/or input files and uses them to suggest values or ranges. Let's illustrate this by generating the input file for a Hartree-Fock calculation:

    $ yambo -x

Inside the generated input file you should find:

    EXXRLvcs = 3187    RL    # [XX] Exchange RL components
    %QPkrange                # [GW] QP generalized Kpoint/Band indices
      1| 14|  1|100|
    %

The QPkrange variable (follow the link for a detailed explanation of any variable) suggests a range of k-points (1 to 14) and bands (1 to 100) based on what it finds in the core database SAVE/ns.db1, i.e. as defined by the DFT code. Leave that variable alone, and instead modify the previous variable to

    EXXRLvcs = 1000    RL

Save the file, and now generate the input a second time with yambo -x. You will see:

    EXXRLvcs = 1009    RL

This indicates that Yambo has read the new input value (1000 G-vectors), checked the database of G-vector shells (SAVE/ndb.gops), and changed the input value to one that fits a completely closed shell.
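The rounding logic itself is simple to illustrate. The sketch below is not Yambo's code: it just mimics the behaviour using a list of cumulative shell sizes, of which only 1, 3, 5, 11, 1009, 3187 and 8029 are taken from this tutorial's output — the intermediate values are hypothetical fillers:

```python
# Mimic Yambo's rounding of an RL-vector request to a closed G-shell.
import bisect

closed_shells = [1, 3, 5, 11, 27, 51, 113, 259, 531, 1009, 3187, 8029]

def to_closed_shell(requested: int) -> int:
    """Smallest closed-shell size that is >= the requested count."""
    i = bisect.bisect_left(closed_shells, requested)
    return closed_shells[min(i, len(closed_shells) - 1)]

print(to_closed_shell(1000))  # -> 1009, as in the EXXRLvcs example above
```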
Last, note that Yambo variables can be expressed in different units. In this case, RL can be replaced by an energy unit like Ry, eV, Ha, etc. Energy units are generally better as they are independent of the cell size. Technical information is available on the Variables page.

The input file generator of Yambo is thus an intelligent parser, which interacts with the user and the existing databases. For this reason we recommend that you always use Yambo to generate the input files, rather than writing them yourself.

### Uppercase options

Uppercase options modify some of the code's default settings. They can be used when launching the code but also when generating input files.

#### Allowed options

To see the list of options, again do:

    $ yambo -H
    Tool: yambo 4.1.2 rev.14024
    Description: A shiny pot of fun and happiness [C.D.Hogan]
     -J <opt>  :Job string identifier
     -V <opt>  :Input file verbosity [opt=RL,kpt,sc,qp,io,gen,resp,all,par]
     -F <opt>  :Input file
     -I <opt>  :Core I/O directory
     -O <opt>  :Additional I/O directory
     -C <opt>  :Communications I/O directory
     -D        :DataBases properties
     -W <opt>  :Wall Time limitation (1d2h30m format)
     -Q        :Don't launch the text editor
     -M        :Switch-off MPI support (serial run)
     -N        :Switch-off OpenMP support (single thread run)
     [Lower case options]

Command line options are extremely important to master if you want to use yambo productively. Often, the meaning is clear from the help menu:

    $ yambo -F yambo.in_HF -x    Make a Hartree-Fock input file called yambo.in_HF
    $ yambo -D                   Summarize the content of the databases in the SAVE folder
    $ yambo -I ../               Run the code, using a SAVE folder in a directory one level up
    $ yambo -C MyTest            Run the code, putting all report, log, plot files inside a folder MyTest

Other options deserve a closer look.

### Verbosity

Yambo uses many input variables, most of which can be left at their default values. To keep input files short and manageable, only a few variables appear by default in the input file. More advanced variables can be switched on with the -V verbosity option. These are grouped according to the type of variable. For instance, -V RL switches on variables related to G-vector summations, and -V io switches on options related to I/O control. Try:

    $ yambo -o c -V RL

which switches on:

    FFTGvecs = 3951   RL     # [FFT] Plane-waves

and

    $ yambo -o c -V io

which switches on:

    StdoHash = 40            # [IO] Live-timing Hashes
    DBsIOoff = "none"        # [IO] Space-separated list of DB with NO I/O. DB= ...
    DBsFRAGpm = "none"       # [IO] Space-separated list of +DB to be FRAG and ...
    #WFbuffIO                # [IO] Wave-functions buffered I/O

Unfortunately, -V options must be invoked and changed one at a time. When you are more expert, you may go straight to -V all, which turns on all possible variables. However, note that yambo -o c -V all adds an extra 30 variables to the input file, which can be confusing: use it with care.

### Job script label

The best way to keep track of different runs using different parameters is through the -J flag. This inserts a label in all output and report files, and creates a new folder containing any new databases (i.e. they are not written in the core SAVE folder). Try:

    $ yambo -J 1Ry -V RL -x

and modify to

    FFTGvecs = 1 Ry
    EXXRLvcs = 1 Ry

then run the code:

    $ yambo -J 1Ry
    $ ls
    yambo.in  SAVE  o-1Ry.hf  r-1Ry_HF_and_locXC  1Ry
    $ ls 1Ry
    ndb.HF_and_locXC

This is extremely useful when running convergence tests, trying out different parameters, etc.
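Because -F and -J make runs self-contained, parameter scans are easy to script. A minimal sketch (the file name and the EXXRLvcs variable are taken from the examples above; adapt to your own scan):

```python
# Drive a convergence scan over EXXRLvcs using -F/-J labels.
import re
import subprocess

template = open("yambo.in_HF").read()  # generated once with: yambo -F yambo.in_HF -x -Q

for cutoff in (1, 2, 4, 8):  # Ry
    label = f"{cutoff}Ry"
    inp = re.sub(r"EXXRLvcs\s*=\s*\S+\s+\S+", f"EXXRLvcs = {cutoff} Ry", template)
    fname = f"yambo.in_{label}"
    with open(fname, "w") as f:
        f.write(inp)
    subprocess.run(["yambo", "-F", fname, "-J", label], check=True)
```

Each run then leaves its own output, report, and database files under its own label.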
Exercise: use yambo to report the properties of all database files (including ndb.HF_and_locXC).

## Optical absorption in hBN: independent-particle approximation

### Background

The dielectric function in the long-wavelength limit, at the independent-particle level (RPA without local fields), is essentially given by the standard expression (schematically; prefactor and sign conventions vary between references):

$$\epsilon_M(\omega) = 1 - \lim_{\mathbf q \to 0}\frac{8\pi}{|\mathbf q|^{2}V}\sum_{cv\mathbf k}\left|\langle c\mathbf k|e^{i\mathbf q\cdot\mathbf r}|v\mathbf k\rangle\right|^{2}\left[\frac{1}{\omega-(E_{c\mathbf k}-E_{v\mathbf k})+i\eta}-\frac{1}{\omega+(E_{c\mathbf k}-E_{v\mathbf k})+i\eta}\right],$$

with the optical matrix elements evaluated through the momentum operator plus the non-local pseudopotential commutator $[V_{nl},\mathbf r]$ (this commutator is discussed again below). In practice, Yambo does not use this expression directly but solves the Dyson equation for the susceptibility X, which is described in the Local fields module.

### Choosing input parameters

Enter the folder for bulk hBN that contains the SAVE directory, run the initialization and generate the input file. From yambo -H you should understand that the correct option is yambo -o c. Let's add some command line options:

    $ cd YAMBO_TUTORIALS/hBN/YAMBO
    $ yambo                         (initialization)
    $ yambo -F yambo.in_IP -o c

This corresponds to optical properties in G-space at the independent-particle level: in the input file this is indicated by Chimod= "IP".

### Optics runlevel

For optical properties we are interested just in the long-wavelength limit q = 0. This always corresponds to the first q-point in the set of possible q = k - k' points. Change the following variables in the input file to

    % QpntsRXd
      1 | 1 |               # [Xd] Transferred momenta
    %
    ETStpsXd= 1001          # [Xd] Total Energy steps

in order to select just the first q. The last variable ensures we generate a smooth spectrum. Save the input file and launch the code, keeping the command line options as before (i.e., just remove the lowercase options):

    $ yambo -F yambo.in_IP -J Full
    ...
    <--->  [05] Optics
    <--->  [LA] SERIAL linear algebra
    <--->  [x,Vnl] computed using 4 projectors
    <--->  [M 0.017 Gb] Alloc WF ( 0.016)
    <--->  [WF] Performing Wave-Functions I/O from ./SAVE
    <01s>  Dipoles: P and iR (T): |########################################| [100%] 01s(E) 01s(X)
    <01s>  [M 0.001 Gb] Free WF ( 0.016)
    <01s>  [X-CG] R(p) Tot o/o(of R): 5501 52992 100
    <01s>  Xo@q[1] |########################################| [100%] --(E) --(X)
    <01s>  [06] Game Over & Game summary

    $ ls
    Full  SAVE  yambo.in_IP  r_setup  o-Full.eel_q1_ip  o-Full.eps_q1_ip  r-Full_optics_chi

Let's take a moment to understand what Yambo has done inside the Optics runlevel:

1. Compute the [x,Vnl] term
2. Read the wavefunctions from disk [WF]
3. Compute the dipoles, i.e. the matrix elements of p
4. Write the dipoles to disk as Full/ndb.dip* databases, as you can see in the report file:

    $ grep -A20 "WR" r-Full_optics_chi
    [WR./Full//ndb.dip_iR_and_P]
     Brillouin Zone Q/K grids (IBZ/BZ): 14 72 14 72
     RL vectors (WF): 1491
     Electronic Temperature [K]: 0.0000000
     Bosonic Temperature [K]: 0.0000000
     X band range: 1 100
     RL vectors in the sum: 1491
     [r,Vnl] included: yes
    ...

5. Finally, compute the non-interacting susceptibility X0 for this q, and write the dielectric function to the o-Full.eps_q1_ip file for plotting

### Energy cut off

Before plotting the output, let's change a few more variables. The previous calculation used all the G-vectors in expanding the wavefunctions, up to 1491 (~1016 components). This corresponds roughly to the cutoff energy of 40 Ry we used in the DFT calculation. Generally, however, we can use a smaller value.
We use the verbosity to switch on this variable, and a new -J flag to avoid reading the previous database:

    $ yambo -F yambo.in_IP -J 6Ry -V RL -o c

Change the value of FFTGvecs and also its unit from RL (number of G-vectors) to Ry (energy in Rydberg):

    FFTGvecs = 6 Ry      # [FFT] Plane-waves

Save the input file and launch the code again:

    $ yambo -F yambo.in_IP -J 6Ry

and then plot the o-Full.eps_q1_ip and o-6Ry.eps_q1_ip files:

    $ gnuplot
    gnuplot> plot "o-Full.eps_q1_ip" w l, "o-6Ry.eps_q1_ip" w p

Clearly there is very little difference between the two spectra. This highlights an important point in calculating excited-state properties: generally, fewer G-vectors are needed than in the underlying DFT calculation. Regarding the spectrum itself, the first peak occurs at about 4.4 eV. This is consistent with the minimum direct gap reported by Yambo: 4.28 eV. The comparison with experiment (not shown) is very poor, however. If you make some mistake and cannot reproduce this figure, check the value of FFTGvecs in the input file, delete the 6Ry folder, and try again - taking care to plot the right file! (e.g. o-6Ry.eps_q1_ip_01).

### q-direction

Now let's select a different component of the dielectric tensor:

    $ yambo -F yambo.in_IP -J 6Ry -V RL -o c
    ...
    % LongDrXd
     0.000000 | 0.000000 | 1.000000 |     # [Xd] [cc] Electric Field
    %
    ...
    $ yambo -F yambo.in_IP -J 6Ry

This time yambo reads from the 6Ry folder, so it does not need to compute the dipole matrix elements again, and the calculation is fast. Plotting gives:

    $ gnuplot
    gnuplot> plot "o-6Ry.eps_q1_ip" t "q || x-axis" w l, "o-6Ry.eps_q1_ip_01" t "q || c-axis" w l

The absorption is suppressed in the stacking direction. As the interplanar spacing is increased, we would eventually arrive at the absorption of the BN sheet (see the Local fields tutorial).

### Non-local commutator

Last, we show the effect of switching off the non-local commutator term (the $[V_{nl},\mathbf r]$ term in the equation at the top of the page) due to the pseudopotential. As there is no option to do this inside yambo, you need to hide the database file. Change back to the q || (1 0 0) direction, and launch yambo with a different -J option:

    $ mv SAVE/ns.kb_pp_pwscf SAVE/ns.kb_pp_pwscf_OFF
    $ yambo -F yambo.in_IP -J 6Ry_NoVnl -o c      (change to q || 100)
    $ yambo -F yambo.in_IP -J 6Ry_NoVnl

Note the warning in the output:

    <---> [WARNING] Missing non-local pseudopotential contribution

which also appears in the report file, and is noted in the database as [r,Vnl] included: no. The difference in the spectrum is tiny. However, when your system is larger, with more projectors in the pseudopotential or more k-points (see the BSE tutorial), the inclusion of Vnl can make a huge difference in the computational load, so it's always worth checking whether these terms are important in your system.

## Optical absorption in 2D BN: local field effects

### Background

See also: Cheatsheet on LFE. The macroscopic dielectric function is obtained by including the so-called local field effects (LFE) in the calculation of the response function. Within the time-dependent DFT formalism this is achieved by solving the Dyson equation for the susceptibility X.
In reciprocal space this is given by the Dyson equation

$$\chi_{\mathbf G\mathbf G'}(\mathbf q,\omega)=\chi^{0}_{\mathbf G\mathbf G'}(\mathbf q,\omega)+\sum_{\mathbf G_1\mathbf G_2}\chi^{0}_{\mathbf G\mathbf G_1}(\mathbf q,\omega)\Big[v_{\mathbf G_1}(\mathbf q)\,\delta_{\mathbf G_1\mathbf G_2}+f^{xc}_{\mathbf G_1\mathbf G_2}(\mathbf q,\omega)\Big]\chi_{\mathbf G_2\mathbf G'}(\mathbf q,\omega).$$

The microscopic dielectric function is related to X by

$$\epsilon^{-1}_{\mathbf G\mathbf G'}(\mathbf q,\omega)=\delta_{\mathbf G\mathbf G'}+v_{\mathbf G}(\mathbf q)\,\chi_{\mathbf G\mathbf G'}(\mathbf q,\omega),$$

and the macroscopic dielectric function is obtained by taking the (0,0) component of the inverse microscopic one:

$$\epsilon_M(\omega)=\lim_{\mathbf q\to 0}\frac{1}{\epsilon^{-1}_{\mathbf G=0,\mathbf G'=0}(\mathbf q,\omega)}.$$

Experimental observables like the optical absorption and the electron energy loss are then obtained as

$$\mathrm{Abs}(\omega)=\mathrm{Im}\,\epsilon_M(\omega),\qquad \mathrm{EELS}(\omega)=-\,\mathrm{Im}\,\frac{1}{\epsilon_M(\omega)}.$$

In the following we will neglect the f_xc term: we perform the calculation at the RPA level and consider just the Hartree term (from v_G) in the kernel. If we also neglect the Hartree term, we arrive back at the independent-particle approximation, since there is no kernel and X = X0.

### Choosing input parameters

Enter the folder for 2D hBN that contains the SAVE directory, and generate the input file. From yambo -H you should understand that the correct option is yambo -o c -k hartree. Let's start by running the calculation for light polarization q in the plane of the BN sheet:

    $ cd YAMBO_TUTORIALS/hBN-2D/YAMBO
    $ yambo                                                (initialization)
    $ yambo -F yambo.in_RPA -V RL -J q100 -o c -k hartree

We thus use a new input file yambo.in_RPA, switch on the FFTGvecs variable, and label all outputs/databases with a q100 tag. Make sure to set/modify all of the following variables:

    FFTGvecs = 6 Ry           # [FFT] Plane-waves
    Chimod = "Hartree"        # [X] IP/Hartree/ALDA/LRC/BSfxc
    NGsBlkXd = 3 Ry           # [Xd] Response block size
    % QpntsRXd
      1 | 1 |                 # [Xd] Transferred momenta
    %
    % EnRngeXd
      0.00000 | 20.00000 | eV    # [Xd] Energy range
    %
    % DmRngeXd
      0.200000 | 0.200000 | eV   # [Xd] Damping range
    %
    ETStpsXd = 2001           # [Xd] Total Energy steps
    % LongDrXd
     1.000000 | 0.000000 | 0.000000 |    # [Xd] [cc] Electric Field
    %

In this input file, we have:

• A q parallel to the sheet
• A wider energy range than before, and more broadening
• Selected the Hartree kernel, and expanded G-vectors in the screening up to 3 Ry (about 85 G-vectors)

### LFEs in the periodic direction

Now let's run the code with this new input file (CECAM machines, serial: about 2 min; parallel, 4 tasks: 50 s):

    $ yambo -F yambo.in_RPA -J q100

and let's compare the absorption with and without local fields included. By inspecting the o-q100.eps_q1_inv_rpa_dyson file we find that this information is given in the 2nd and 4th columns, respectively:

    $ head -n30 o-q100.eps_q1_inv_rpa_dyson
    # Absorption @ Q(1) [q->0 direction]: 1.0000000  0.0000000  0.0000000
    # E/ev[1]  EPS-Im[2]  EPS-Re[3]  EPSo-Im[4]  EPSo-Re[5]

Plot the result:

    $ gnuplot
    gnuplot> plot "o-q100.eps_q1_inv_rpa_dyson" u 1:2 w l, "o-q100.eps_q1_inv_rpa_dyson" u 1:4 w l

It is clear that there is little influence of local fields in this case. This is generally the case for semiconductors or materials with a smoothly varying electronic density. We have also shown the EELS spectrum (o-q100.eel_q1_inv_rpa_dyson) for comparison.

### LFEs in the non-periodic direction

Now let's switch to q perpendicular to the BN plane:

    $ yambo -F yambo.in_RPA -V RL -o c -k hartree

and set

    % LongDrXd
     0.000000 | 0.000000 | 1.000000 |    # [Xd] [cc] Electric Field
    %

You can try out the default parallel usage now, or run again in serial, i.e.

    $ yambo -F yambo.in_RPA -J q001                    (serial)
    $ mpirun -np 4 yambo -F yambo.in_RPA -J q001 &     (parallel, MPI only, 4 tasks)

As noted previously, in parallel the log files appear in the LOG folder; you can follow the execution with tail -F LOG/l-q001_optics_chi_CPU_1. Plotting the output file:

    $ gnuplot
    gnuplot> plot "o-q001.eps_q1_inv_rpa_dyson" u 1:2 w l, "o-q001.eps_q1_inv_rpa_dyson" u 1:4 w l

In this case, the absorption is strongly blueshifted with respect to the in-plane absorption.
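If you prefer Python to gnuplot, the same comparison can be drawn with matplotlib; the column layout below is the one shown in the file header above (E/ev[1], EPS-Im[2], EPS-Re[3], EPSo-Im[4], EPSo-Re[5]):

```python
# Plot absorption with (column 2) and without (column 4) local fields.
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("o-q001.eps_q1_inv_rpa_dyson")  # '#' header lines are skipped
plt.plot(data[:, 0], data[:, 1], label="with local fields (EPS-Im)")
plt.plot(data[:, 0], data[:, 3], label="without local fields (EPSo-Im)")
plt.xlabel("Energy (eV)")
plt.ylabel(r"Im $\epsilon_M$")
plt.legend()
plt.show()
```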
The influence of local fields here is striking: they quench the spectrum strongly. This is the well-known depolarization effect. Local field effects are much stronger in the perpendicular direction because the charge inhomogeneity is dramatic: many G-vectors are needed to account for the sharp change in the potential across the BN-vacuum interface.

### Absorption versus EELS

In order to understand this further, we plot the electron energy loss spectrum for this component and compare it with the absorption:

    $ gnuplot
    gnuplot> plot "o-q001.eps_q1_inv_rpa_dyson" w l, "o-q001.eel_q1_inv_rpa_dyson" w l

The conclusion is that absorption and EELS coincide for isolated systems. Intuitively, for an isolated system the supercell is mostly vacuum, so $\epsilon_M \to 1$ and $-\mathrm{Im}(1/\epsilon_M) = \mathrm{Im}\,\epsilon_M/|\epsilon_M|^2 \approx \mathrm{Im}\,\epsilon_M$. To understand why this is, you need to consider the role of the macroscopic screening in the response function and the long-range part of the Coulomb potential. See e.g. [1].

1. TDDFT from molecules to solids: The role of long-range interactions, F. Sottile et al., International Journal of Quantum Chemistry 102 (5), 684-701 (2005)
# Almost divisible by all

Number Theory - Level 3

Find the largest integer $n$ such that $n$ is divisible by all positive integers less than $\sqrt[3]{n}$.
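A brute-force search (not part of the original problem page) makes the answer easy to check: an integer $m$ satisfies $m < \sqrt[3]{n}$ exactly when $m^3 < n$, and since $\mathrm{lcm}(1,\dots,k)$ grows far faster than $k^3$, only small $n$ can qualify, so a bounded scan suffices:

```python
# Find the largest n divisible by every positive integer below its cube root.
def qualifies(n: int) -> bool:
    m = 1
    while m ** 3 < n:          # m < n**(1/3)  <=>  m**3 < n (exact, no floats)
        if n % m != 0:
            return False
        m += 1
    return True

print(max(n for n in range(1, 10**6) if qualifies(n)))  # -> 420
```

For n = 420 the cube root is about 7.49, and lcm(1, ..., 7) = 420 divides 420.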
# MALDI/TOF-MS

##### Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry

MALDI/TOF-MS analysis of permethylated glycosphingolipids (GSLs) is performed in the linear or reflector positive-ion mode using α-dihydroxybenzoic acid (DHBA, 20 mg/mL solution in 50% methanol:water) as the matrix. More detailed sample-preparation protocols for the MALDI/TOF-MS analysis are described in Protocol_6 of the sample preparation steps.

### AB Sciex 5800

Mode of operation:
• Reflector (for most GSLs)
• Linear (for large GSLs, if the size of the analyte is outside the detection limit of reflector mode)

Polarity:
• Positive (for all permethylated GSLs)
• Negative (for charged GSLs that carry strong charge(s) and retain the charge after permethylation, e.g. sulfatides)

Laser intensity: ~4000*

*The optimum laser intensity depends on the performance of the laser in each instrument, so it should be determined experimentally.

### Bruker Microflex

Mode of operation:
• Reflector (for most GSLs)
• Linear (for large GSLs, if the size of the analyte is outside the detection limit of reflector mode)

Polarity:
• Positive (for all permethylated GSLs)
• Negative (for charged GSLs that carry a strong charge and retain the charge after permethylation, e.g. sulfatides)

Laser intensity: ~60%*

*The optimum laser intensity depends on the performance of the laser in each instrument, so it should be determined experimentally.
Vis viva (from the Latin for "living force") is a historical term used for the first recorded description of what we now call kinetic energy, in an early formulation of the principle of conservation of energy.

## Overview

Proposed by Gottfried Leibniz over the period 1676–1689, the theory was controversial as it seemed to oppose the theory of conservation of quantity of motion advocated by René Descartes.[1] Quantity of motion is different from momentum. However, Newton defined quantity of motion as the conjunction of the quantity of matter and velocity (see Definition II of the Principia). In Definition III he defines the force that resists a change in motion as the vis inertia of Descartes. His Third Law of Motion is a statement of what became known as the conservation of momentum, as he demonstrates in the related Scholium. Leibniz accepted the principle of conservation of momentum, but rejected the Cartesian version of it.[2] The difference between Newton and Descartes on one side and Leibniz on the other was whether the quantity of motion was simply related to a body's resistance to a change in velocity (vis inertia) or whether a body's amount of force due to its motion (vis viva) was related to the square of its velocity.

The theory was eventually absorbed into the modern theory of energy, though the term still survives[3] in the context of celestial mechanics through the vis viva equation. The term "living force" was also used, for example by George William Hill.

The term is due to Gottfried Wilhelm Leibniz, who during 1676–1689 first attempted a mathematical formulation. Leibniz noticed that in many mechanical systems (of several masses $m_i$, each with velocity $v_i$) the quantity[4]

$$\sum_i m_i v_i^2$$

was conserved. He called this quantity the vis viva or "living force" of the system.[4] The principle, it is now realised, represents an accurate statement of the conservation of kinetic energy in elastic collisions, and is independent of the conservation of momentum. However, many physicists at the time were unaware of this fact and, instead, were influenced by the prestige of Sir Isaac Newton in England and of René Descartes in France, both of whom advanced the conservation of momentum as a guiding principle. Thus the momentum

$$\sum_i m_i \mathbf{v}_i$$

was held by the rival camp to be the conserved vis viva. It was largely engineers such as John Smeaton, Peter Ewart, Karl Holtzmann, Gustave-Adolphe Hirn and Marc Seguin who objected that conservation of momentum alone was not adequate for practical calculation and who made use of Leibniz's principle. The principle was also championed by some chemists such as William Hyde Wollaston. The French mathematician Émilie du Châtelet, who had a sound grasp of Newtonian mechanics, developed Leibniz's concept and, combining it with the observations of Willem 's Gravesande, showed that vis viva was dependent on the square of the velocities.[5]

Members of the academic establishment such as John Playfair were quick to point out that kinetic energy is clearly not conserved. This is obvious to a modern analysis based on the second law of thermodynamics, but in the 18th and 19th centuries the fate of the lost energy was still unknown. Gradually it came to be suspected that the heat inevitably generated by motion was another form of vis viva.
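A quick numerical illustration of the dispute (not from the article itself): in a one-dimensional elastic collision, both the momentum $\sum m_i v_i$ and Leibniz's vis viva $\sum m_i v_i^2$ are conserved:

```python
# Check conservation of momentum and vis viva in a 1D elastic collision.
m1, v1 = 2.0, 3.0   # arbitrary example masses and velocities
m2, v2 = 1.0, -1.0

# standard 1D elastic-collision outcome velocities
u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

print("momentum before/after:", m1 * v1 + m2 * v2, m1 * u1 + m2 * u2)              # 5.0  5.0
print("vis viva before/after:", m1 * v1**2 + m2 * v2**2, m1 * u1**2 + m2 * u2**2)  # 19.0 19.0
```

In inelastic collisions momentum is still conserved while vis viva is not — the heart of the 18th-century confusion.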
In 1783, Antoine Lavoisier and Pierre-Simon Laplace reviewed the two competing theories of vis viva and caloric theory.[1] Count Rumford's 1798 observations of heat generation during the boring of cannons added more weight to the view that mechanical motion could be converted into heat. Vis viva then started to be known as energy, after the term was first used in that sense by Thomas Young in 1807.

(Figure caption: an excerpt from Daniel Bernoulli's article, published in 1741,[6] with the definition of vis viva with a 1/2 multiplier.)

The recalibration of vis viva to include the coefficient of a half, namely

$$E = \frac{1}{2}\sum_i m_i v_i^2,$$

was largely the result of the work of Gaspard-Gustave Coriolis and Jean-Victor Poncelet over the period 1819–1839, although the present-day definition can occasionally be found earlier (e.g., in Daniel Bernoulli's texts). The former called it the quantité de travail (quantity of work) and the latter travail mécanique (mechanical work), and both championed its use in engineering calculation.

## Notes

1. McDonough, Jeffrey K. (2021), "Leibniz's Philosophy of Physics", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Fall 2021 ed.), Metaphysics Research Lab, Stanford University. Retrieved 2021-08-28.
2. McDonough, Jeffrey K. (2021), "Leibniz's Philosophy of Physics", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Fall 2021 ed.), Metaphysics Research Lab, Stanford University. Retrieved 2021-08-28.
3. ^
4. Smith, George E. (October 2006). "The vis viva dispute: A controversy at the dawn of dynamics". Physics Today. 59 (10): 31–36. doi:10.1063/1.2387086. ISSN 0031-9228.
5. Musielak, Dora (2014). "The Marquise du Chatelet: A Controversial Woman of Science". arXiv:1406.7401 [math.HO].
6. Bernoulli, D. (1741). "De legibus quibusdam mechanicis...". Commentarii Academiae Scientiarum Imperialis Petropolitanae. 8: 99–127.
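The factor of one half is exactly what ties vis viva to Coriolis's "quantity of work": for a body of mass $m$ accelerated from rest, the standard work-energy computation (a textbook step, not a quotation from the historical sources) gives

$$W=\int F\,\mathrm{d}s=\int m\,\frac{\mathrm{d}v}{\mathrm{d}t}\,v\,\mathrm{d}t=\int_{0}^{v} m\,v'\,\mathrm{d}v'=\frac{1}{2}\,m v^{2},$$

so measuring the "force of motion" by the work needed to produce it lands on $\tfrac12 mv^2$ rather than $mv^2$.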
## Plants Store Their Sugar In The Form Of

How do animals store their sugars for later? In which form do plants and animals store glucose? The leaves of a plant make sugar during the process of photosynthesis. Plants store glucose in their leaves.

## Which Will Form An Ionic Bond (Apex)

What types of atoms form an ionic bond? Can an element form an ionic bond with itself? The ionic bond is the attraction between positive and negative ions in a crystal, and compounds held together by ionic bonds are called ionic compounds. These forces bring the ions closer together to form an ionic bond.

## 35 As A Fraction In Simplest Form

The simplest form of a fraction is one whose numerator and denominator are relatively prime. What is .35 as a fraction? 35 centimeters is 35/100 of a meter, and 0.35 = 35/100 = 7/20 in simplest form.

## 0.44 As A Fraction In Simplest Form

Steps to convert a decimal into a fraction: multiply both the numerator and denominator by 10 for each digit after the decimal point. 0.44 = 44/100 = 11/25 as a fraction in simplest form.

## Square Root Of 48 In Radical Form

The square root of 48 is 6.9282032302755. To simplify √48, factor out the largest perfect square: √48 = √(16 · 3) = 4√3.

## Square Root Of 20 In Radical Form

4.472135955 is the square root of 20, represented as $\sqrt{20}$ in radical form. 4² is 16, less than 20; 5² is 25, more than 20, so √20 lies between 4 and 5. Pulling terms out from under the radical: √20 = √(4 · 5) = 2√5. (What is the square root of 800 in radical form?)

## Square Root Of 12 In Radical Form

The positive answer to the equation x² = 12 is 3.46410, which is the square root of 12. To simplify √12, find the perfect squares and identify the largest perfect-square factor: √12 = √(4 · 3) = 2√3. The result can be shown in multiple forms.

## Which Of The Following Molecules Can Form Hydrogen Bonds

Which of the following molecules can form hydrogen bonds? Actually, an ammonia molecule can form two hydrogen bonds, not one. Many elements form compounds with hydrogen. Which of the following is true regarding hydrogen bonding? D) All of the above.

## 375 As A Fraction In Simplest Form

375% is 15/4 as an improper fraction in its simplest form. 0.375 equals 375/1000 as a fraction, which simplifies to 3/8. What is 3/8 as a decimal?
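All of the decimal-to-fraction conversions above can be checked with Python's standard fractions module, which reduces to lowest terms automatically:

```python
# Decimal-to-fraction conversions in simplest form.
from fractions import Fraction

for s in ("0.35", "0.44", "0.375"):
    print(s, "=", Fraction(s))       # -> 7/20, 11/25, 3/8
print("375% =", Fraction(375, 100))  # -> 15/4
```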
About 5% of the power of a 100 W light bulb is converted to visible radiation. What is the average intensity of visible radiation at a distance of 1 m from the bulb?
Subtopic: Particle Nature of Light

Monochromatic light of wavelength 632.8 nm is produced by a helium-neon laser. The power emitted is 9.42 mW. The energy of each photon in the light beam is:
Subtopic: Particle Nature of Light

The energy flux of sunlight reaching the surface of the earth is $1.388 \times 10^{3}$ W/m². How many photons (nearly) per square metre are incident on the Earth per second? Assume an average wavelength of 550 nm.
(1) $3.84\times 10^{21}$  (2) $2.97\times 10^{21}$  (3) $4.12\times 10^{21}$  (4) $2.10\times 10^{21}$
Subtopic: Particle Nature of Light

A 100 W sodium lamp radiates energy uniformly in all directions. The lamp is located at the centre of a large sphere that absorbs all the sodium light incident on it. The wavelength of the sodium light is 589 nm. What is the energy per photon associated with the sodium light?
Subtopic: Particle Nature of Light

In an accelerator experiment on high-energy collisions of electrons with positrons, a certain event is interpreted as the annihilation of an electron-positron pair of total energy 10.2 BeV into two γ-rays of equal energy. What is the wavelength associated with each γ-ray? (1 BeV = 10⁹ eV)
Subtopic: Particle Nature of Light
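These items are straightforward to verify numerically; a quick back-of-the-envelope script (my own check, not the course's worked solutions):

```python
# Order-of-magnitude checks for the photon questions above.
import math

h, c, e = 6.626e-34, 3.0e8, 1.602e-19   # SI units; eV conversion via e

# 100 W bulb, 5% visible, at r = 1 m: I = P_vis / (4*pi*r^2)
print(0.05 * 100 / (4 * math.pi * 1.0**2))   # ~0.40 W/m^2

# photon energy at 632.8 nm (He-Ne laser): E = h*c/lambda
E = h * c / 632.8e-9
print(E, "J =", E / e, "eV")                 # ~3.14e-19 J ~ 1.96 eV

# photon flux for 1.388e3 W/m^2 at an average 550 nm: flux = S / (h*c/lambda)
print(1.388e3 / (h * c / 550e-9))            # ~3.8e21 m^-2 s^-1, i.e. option (1)
```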
# 12th Class Mental Ability - Time and Clocks (Notes)

The face or dial of a watch is a circle whose circumference is divided into 60 equal parts, called minute spaces. A clock has two hands: the smaller one is called the hour hand or short hand, while the larger one is called the minute hand or long hand.

• In 60 minutes, the minute hand gains 55 minute spaces on the hour hand.
• In every hour, both hands coincide once.
• The hands are in the same straight line when they are coincident or opposite to each other.
• When the two hands are at right angles, they are 15 minute spaces apart.
• When the hands are in opposite directions, they are 30 minute spaces apart.
• Angle traced by the hour hand in 12 hours = 360°.
• Angle traced by the hour hand in one hour = 30°.
• Angle traced by the hour hand in one minute = 0.5°.
• Angle traced by the minute hand in 60 minutes = 360°.
• Too fast and too slow: if a watch or clock indicates 8:15 when the correct time is 8, it is said to be 15 minutes too fast. On the other hand, if it indicates 7:45 when the correct time is 8, it is said to be 15 minutes too slow.

Examples:

1. A clock shows the time as 3:30 pm. If the minute hand gains 2 minutes every hour, how many minutes will the clock gain by 6 am?
(a) 20 minutes (b) 29 minutes (c) 30 minutes (d) 35 minutes (e) None of these
Ans. (b)
Explanation: The hours between 3:30 pm and 6 am are 14½. So the number of minutes gained is 14½ × 2 = 29 minutes.

2. Two watches, one of which gained at the rate of 1 minute daily and the other lost at the rate of 1 minute daily, were set correctly at noon on 1st January 1978. When did the watches indicate the same time?
(a) Dec 30, 1978 noon (b) Dec 25, 1978 noon (c) Dec 27, 1978 noon (d) Dec 26, 1978 noon (e) None of these
Ans. (c)
Explanation: The first watch gains on the second 1 + 1 = 2 minutes a day. The watches indicate the same time when one has gained 12 hours (720 minutes) on the other. At 2 minutes per day this takes 720/2 = 360 days. Counting 360 days from 1st January 1978 gives Dec 27, 1978.

3. How many times do the hands of a clock coincide between 11 o'clock and 1 o'clock?
(a) 0 (b) 1 (c) 2 (d) 3 (e) None of these
Ans. (b)
Explanation: Between 11 o'clock and 1 o'clock the hands of a clock coincide only once (at 12 o'clock).

4. A clock is set at 5 am. The clock loses 16 minutes in 24 hours. What will be the right time when the clock indicates 10 pm on the 4th day?
(a) 11:15 pm (b) 11 pm (c) 12 pm (d) 12:30 pm (e) None of these
Ans. (b)
Explanation: The time from 5 am on a particular day to 10 pm on the 4th day is 89 hours. The clock loses 16 minutes in 24 hours, so 23 hours 44 minutes (= 356/15 hours) of this clock equal 24 hours of a correct clock. Hence 89 hours of this clock = (24 × 15/356) × 89 = 90 hours of the correct clock. In 89 hours this clock therefore loses 1 hour, and the correct time is 11 pm when the clock shows 10 pm.
5. At what time between 4 and 5 o'clock will the hands of a watch point in opposite directions?
(a) 45 minutes past 4 (b) 40 minutes past 4 (c) 50 4/11 minutes past 4 (d) 54 6/11 minutes past 4
Ans. (d)
Explanation: At 4 o'clock the hands are 20 minute spaces apart; to point in opposite directions they must be 30 minute spaces apart. The minute hand therefore has to gain (20 + 30) = 50 minute spaces. Since 55 minute spaces are gained in 60 minutes, 50 minute spaces are gained in (60/55) × 50 = 54 6/11 minutes. Hence the hands point in opposite directions at 54 6/11 minutes past 4, and (d) is the correct answer.

6. Find the exact time between 7 am and 8 am when the two hands of a watch meet.
(a) 7 hrs 30.18 minutes (b) 7 hrs 35.18 minutes (c) 7 hrs 38.18 minutes (d) 7 hrs 25.18 minutes (e) None of these
Ans. (c)
Explanation: At 7 am the minute hand is 35 minute spaces behind the hour hand. Since 55 minute spaces are gained in 60 minutes, 35 minute spaces are gained in (60/55) × 35 ≈ 38.18 minutes.

Calendar

Days in a week: there are seven days in a week - Sunday, Monday, Tuesday, Wednesday, Thursday, Friday and Saturday. If today is Monday, then:

• Tomorrow (the coming day) will be Tuesday
• The day after tomorrow will be Wednesday
• Yesterday (the previous day) was Sunday
• The day before yesterday was Saturday
• The day 7 days (a week) from now will be Monday
• The day 7 days (a week) ago was Monday
• The day before or after 7 days, or any multiple of 7, is the same as the present day.

Examples:

1. If the day before yesterday was Thursday, when will Sunday be?
(a) Tomorrow (b) Today (c) Day after tomorrow (d) Two days after today (e) None of these
Ans. (a)
Explanation: If the day before yesterday was Thursday, then today is Saturday. Therefore tomorrow will be Sunday.

2. Suganya went to see a movie nine days ago. She goes to the movies only on Thursday. What day of the week is today?
(a) Tuesday (b) Thursday (c) Friday (d) Saturday (e) None of these
Ans. (d)
Explanation: Clearly, nine days ago it was Thursday. Therefore today is Saturday.

3. Today is Wednesday. What will be the day after 98 days?
(a) Sunday (b) Monday (c) Wednesday (d) Friday (e) None of these
Ans. (c)
Explanation: Every day of the week repeats after 7 days. Since 98 = 14 × 7, it will again be Wednesday after 98 days.

4. If 1st October is Sunday, then 1st November will be:
(a) Sunday (b) Monday (c) Wednesday (d) Friday (e) None of these
Ans. (c)
Explanation: Clearly, 1st, 8th, 15th, 22nd and 29th of October are Sundays, so 31st October is Tuesday. Therefore 1st November will be Wednesday.
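All of the clock-hand examples above reduce to one fact: the minute hand gains 55 minute spaces per hour, i.e. 5.5° per minute, over the hour hand. A small script reproduces the worked answers (my own check, consistent with the notes):

```python
# Minutes past `hour` o'clock when the minute hand leads the hour hand by `angle` degrees.
def minutes_past(hour: int, angle: float) -> float:
    start = (hour % 12) * 30.0      # hour-hand lead at the full hour, in degrees
    return (start + angle) / 5.5    # minute hand gains 5.5 degrees per minute

print(minutes_past(4, 180))   # hands opposite after 4:00 -> 54.545... = 54 6/11 min
print(minutes_past(7, 0))     # hands coincide after 7:00 -> 38.18 min, i.e. 7 h 38.18 min
```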
# Super-Earth of 8 M⊕ in a 2.2-day orbit around the K5V star K2-216

    @article{Persson2018SuperEarthO8,
      title={Super-Earth of 8 Mearth in a 2.2-day orbit around the K5V star K2-216},
      author={C. M. Persson and Malcolm Fridlund and Oscar Barragán and Fei Dai and Davide Gandolfi
              and A. P. Hatzes and Teruyuki Hirano and Sascha Grziwa and J. Korth and Jorge Prieto-Arranz
              and Luca Fossati and Vincent van Eylen and Anders B. Justesen and John H. Livingston
              and Daria Kubyshkina and Hans J. Deeg and E. W. Guenther and Grzegorz Nowak and J. Cabrera
              and Ph. Eigmuller and Sz. Csizmadia and A.M.S. Smith and Anders Erikson and S. Albrecht
              and R. Alonso Sobrino and William D. Cochran and Michael Endl and Massimiliano Esposito
              and A. Fukui and Paul Heeren and Diego Hidalgo and Maria Hjorth and Masayuki Kuzuhara
              and Norio Narita and David Nespral and Enric Pallé and Martin Patzold and Heike Rauer
              and Florian Rodler and Joshua N. Winn},
      journal={Astronomy and Astrophysics},
      year={2018},
      volume={618}
    }

Although thousands of exoplanets have been discovered to date, far fewer have been fully characterised, in particular super-Earths. The KESPRINT consortium identified K2-216 as a planetary-candidate host star in the K2 space mission Campaign 8 field, with a transiting super-Earth. The planet has recently been validated as well. Our aim was to confirm the detection and derive the main physical characteristics of K2-216b, including the mass. We performed a series of follow-up observations: high…

#### 15 Citations

The TOI-763 system: sub-Neptunes orbiting a Sun-like star
We report the discovery of a planetary system orbiting TOI-763 (aka CD-39 7945), a $V=10.2$, high-proper-motion G-type dwarf star that was photometrically monitored by the TESS space mission in…

The Multiplanet System TOI-421
We report the discovery of a warm Neptune and a hot sub-Neptune transiting TOI-421 (BD-14 1137, TIC 94986319), a bright (V = 9.9) G9 dwarf star in a visual binary system observed by the Transiting…

Radial velocity confirmation of K2-100b: a young, highly irradiated, and low-density transiting hot Neptune
We present a detailed analysis of HARPS-N radial velocity observations of K2-100, a young and active star in the Praesepe cluster, which hosts a transiting planet with a period of 1.7 d. We model…

Planet Hunters TESS I: TOI 813, a subgiant hosting a transiting Saturn-sized planet on an 84-day orbit
We report on the discovery and validation of TOI 813 b (TIC55525572b), a transiting exoplanet identified by citizen scientists in data from NASA's Transiting Exoplanet Survey Satellite (TESS) and the…

Greening of the brown-dwarf desert
Context. Although more than 2000 brown dwarfs have been detected to date, mainly from direct imaging, their characterisation is difficult due to their faintness and model-dependent results. In the…

K2-280 b – a low density warm sub-Saturn around a mildly evolved star
We present an independent discovery and detailed characterization of K2-280 b, a transiting low-density warm sub-Saturn in a 19.9-d moderately eccentric orbit (e = 0.35(-0.04)(+0.05)) from K2…

TOI-431/HIP 26013: a super-Earth and a sub-Neptune transiting a bright, early K dwarf, with a third RV planet
We present the bright (Vmag = 9.12), multiplanet system TOI-431, characterized with photometry and radial velocities (RVs).
We estimate the stellar rotation period to be 30.5 ± 0.7 d using archival…

TOI-132 b: A short-period planet in the Neptune desert transiting a V = 11.3 G-type star★
• Matías R Díaz, James S Jenkins, +49 authors D. Yahalomi • Physics • 2019
The Neptune desert is a feature seen in the radius-mass-period plane, whereby a notable dearth of short period, Neptune-like planets is found. Here we report the {\it TESS} discovery of a new…

Revisited mass-radius relations for exoplanets below 120 M⊕
• Physics • 2020
The masses and radii of exoplanets are fundamental quantities needed for their characterisation. Studying the different populations of exoplanets is important for understanding the demographics of…

Chemical fingerprints of formation in rocky super-Earths’ data
• Physics • 2020
The composition of rocky exoplanets in the context of stars' composition provides important constraints to formation theories. In this study, we select a sample of exoplanets with mass and radius…

#### References
SHOWING 1-10 OF 89 REFERENCES

K2-111 b − a short period super-Earth transiting a metal poor, evolved old star
Context. From a light curve acquired through the K2 space mission, the star K2-111 (EPIC 210894022) has been identified as possibly orbited by a transiting planet. Aims: Our aim is to confirm the…

The Discovery and Mass Measurement of a New Ultra-short-period Planet: K2-131b
We report the discovery of a new ultra-short-period planet and summarize the properties of all such planets for which the mass and radius have been measured. The new planet, EPIC 228732031b, was…

Exoplanets around Low-mass Stars Unveiled by K2
We present the detection and follow-up observations of planetary candidates around low-mass stars observed by the K2 mission. Based on light-curve analysis, adaptive-optics imaging, and optical…

Kepler-21b: A Rocky Planet Around a V = 8.25 Magnitude Star
HD 179070, aka Kepler-21, is a V = 8.25 F6IV star and the brightest exoplanet host discovered by Kepler. An early detailed analysis by Howell et al. of the first 13 months (Q0–Q5) of Kepler light…

The K2 Mission: Characterization and Early Results
The K2 mission will make use of the Kepler spacecraft and its assets to expand upon Kepler's groundbreaking discoveries in the fields of exoplanets and astrophysics through new and exciting…

K2-98b: A 32 M Neptune-size Planet in a 10 Day Orbit Transiting an F8 Star
We report the discovery of K2-98b (EPIC 211391664b), a transiting Neptune-size planet monitored by the K2 mission during its Campaign 5. We combine the K2 time-series data with ground-based…

K2-141 b: A 5-M⊕ super-Earth transiting a K7 V star every 6.7 h
We report on the discovery of K2-141 b (EPIC 246393474 b), an ultra-short-period super-Earth on a 6.7 h orbit transiting an active K7 V star based on data from K2 campaign 12. We confirmed the…

The K2-ESPRINT Project III: A Close-in Super-Earth around a Metal-rich Mid-M Dwarf
We validate a $R_p=2.32\pm 0.24R_\oplus$ planet on a close-in orbit ($P=2.260455\pm 0.000041$ days) around K2-28 (EPIC 206318379), a metal-rich M4-type dwarf in the Campaign 3 field of the K2…

Harps-N: the new planet hunter at TNG
The Telescopio Nazionale Galileo (TNG) [9] hosts, starting in April 2012, the visible spectrograph HARPS-N.
It is based on the design of its predecessor working at ESO's 3.6m telescope, achieving…

K2-137 b: an Earth-sized planet in a 4.3-h orbit around an M-dwarf
We report the discovery in K2's Campaign 10 of a transiting terrestrial planet in an ultra-short-period orbit around an M3-dwarf. K2-137 b completes an orbit in only 4.3 h, the second shortest…
## Linear parabolic equations with singular potentials. (English) Zbl 1036.35066

The initial boundary value problem for the equation $u_t -\Delta u +a(x,t)u=0$ is considered. The minimal regularity of the 'potential' $$a$$ and of the initial and boundary data that provides well-posedness of the problem is discussed. Existence and uniqueness results for $$L_r(L_q)$$-solutions are established. In contrast to previous works [H. Brezis and Th. Cazenave, J. Anal. Math. 68, 277–304 (1996; Zbl 0868.35058); D. Hirata and M. Tsutsumi, Differ. Integral Equ. 14, 1–18 (2001; Zbl 1161.35418)], which rely on a priori estimates and properties of the heat semigroup, this paper employs maximal regularity techniques. This allows far-reaching generalizations and improvements of the previous results.

### MSC: 35K20 Initial-boundary value problems for second-order parabolic equations

### Keywords: maximal regularity techniques; solvability; well-posedness

### Citations: Zbl 0868.35058; Zbl 1161.35418
Title: Search for same-sign top-quark pair production at $\sqrt{s}$ = 7 TeV and limits on flavour changing neutral currents in the top sector
Author: Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.; Bansal, S.; de Wolf, E.A.; Janssen, X.; Mucibello, L.; Roland, B.; Rougny, R.; Selvaggi, M.; van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; et al.
Faculty/Department: Faculty of Sciences. Physics
Publication type: article
Publication: Bristol, 2011
Subject: Physics
Source (journal): Journal of High Energy Physics. - Bristol
Volume/pages: (2011):8, p. 1-27
ISSN: 1126-6708; 1029-8479
ISI: 000294901400077
Carrier: E-only publication
Target language: English (eng)
Full text: (Publisher's DOI)
Affiliation: University of Antwerp
Abstract: An inclusive search for same-sign top-quark pair production in pp collisions at $\sqrt{s}$ = 7 TeV is performed using a data sample recorded with the CMS detector in 2010, corresponding to an integrated luminosity of 35 pb−1. This analysis is motivated by recent studies of $p\bar{p} \to t\bar{t}$ production reporting mass-dependent forward-backward asymmetries larger than expected from the standard model. These asymmetries could be due to flavour changing neutral currents (FCNC) in the top sector induced by t-channel exchange of a massive neutral vector boson (Z′). Models with such a Z′ also predict an enhancement of same-sign top-pair production in pp or $p\bar{p}$ collisions. Limits are set as a function of the Z′ mass and its couplings to u and t quarks. These limits disfavour the FCNC interpretation of the Tevatron results.
E-info:
http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000294901400077&DestLinkType=RelatedRecords&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848
http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000294901400077&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848
http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000294901400077&DestLinkType=CitingArticles&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848
Full text (open access): https://repository.uantwerpen.be/docman/irua/94eaa9/92933.pdf
# Is it possible to train a neural network to solve polynomial equations?

• I randomly generate millions of triplets $\lbrace x_0, x_1, x_2 \rbrace$ within the range $(0,1)$, then calculate the corresponding coefficients of the polynomial $(x-x_0)(x-x_1)(x-x_2)$, which results in coefficient triplets normalized to the form $\lbrace { {x_0+x_1+x_2 \over 3} , {\sqrt{x_0x_1+x_1x_2+x_0x_2 \over 3}} , {\sqrt[3]{x_0x_1x_2}}} \rbrace$;
• After that, I feed the coefficient triplets into a 5-layer neural network $\lbrace 3,4,5,4,3 \rbrace$, in which all activation functions are set to sigmoid and the learning rate is set to 0.1;
• However, I only get very poor cross-validation accuracy, around 20%. How can I fix this?

Background: My original problem is a dynamic inverse problem. In that problem, I have hundreds of thousands of observations $O$; from these observations, I need to recover several hundred parameters $P$. The simulation from $P$ to $O$ is very easy and cheap to calculate, but the inversion from $O$ to $P$ is highly nonlinear and nearly impossible. My idea is to train a neural network taking $O$ as inputs and $P$ as outputs. To check the feasibility of this idea, I use a third-order polynomial equation for validation.

Update, half a year later: With more nodes per layer, I have successfully trained a neural network. The topology is set to $\lbrace 3, 64, 64, 64 \rbrace$. And the most important trick is sorting the generated triplets $\lbrace x_0, x_1, x_2 \rbrace$, ensuring that $x_0 \le x_1 \le x_2$ always holds.

• I would create a neural network to estimate the number of real roots, then something like Newton's method to estimate them. – Emre Feb 9 '17 at 18:42
• @Emre In the numerical experiment above, in both the training and the validation sets I always have three real roots in the range (0,1), so I do not need to estimate the number of real roots in this case. – Feng Wang Feb 9 '17 at 19:49
• I don't get the role of the polynomial by now. Could you please expand on this? Can't you collect many pairs (P, O) and take a subset of those for validation? – Martin Thoma Feb 9 '17 at 20:58
• @MartinThoma I want to test my idea with a much simpler inversion problem. I mean, the coefficient triplet is the observation $O$, and the root triplet is the parameter $P$ to recover; the process from $\{x_0, x_1, x_2\}$ to polynomial coefficients is straightforward, but the inversion is much more difficult. If I can successfully train such a neural network taking coefficients as inputs and roots as outputs, I would like to migrate this idea to my dynamic inverse problem in the next step. – Feng Wang Feb 9 '17 at 21:11
• @MartinThoma I can always generate as many pairs of $\{P, O\}$ as needed to train and to validate the performance. – Feng Wang Feb 9 '17 at 21:18

In particular, the function you are trying to compute amounts to computing $x_0,x_1,x_2$ from $a,b,c,d$, where \begin{align*} x_k &= - {1 \over 3a} \left(b + \eta^k C + {\Delta_0 \over \eta^k C}\right)\\ \Delta_0 &= b^2 - 3ac\\ C &= \sqrt[3]{\Delta_1 \pm \sqrt{\Delta_1^2 - 4\Delta_0^3} \over 2}\\ \Delta_1 &= 2b^3 - 9abc + 27a^2d\\ \eta &= -{1 \over 2} + {1 \over 2} \sqrt{3} i \end{align*}
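The question does not say which framework the poster used; as an illustration only, here is a minimal scikit-learn sketch of the successful setup from the update. The {3, 64, 64, 64} topology and the root-sorting trick come from the post; the library, ReLU activations, sample size and iteration count are my assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 100_000
# Sorted roots in (0,1): the trick from the update, x0 <= x1 <= x2.
roots = np.sort(rng.uniform(0.0, 1.0, size=(n, 3)), axis=1)
x0, x1, x2 = roots.T

# Normalized elementary symmetric functions of the roots, as in the question.
features = np.column_stack([
    (x0 + x1 + x2) / 3.0,
    np.sqrt((x0 * x1 + x1 * x2 + x0 * x2) / 3.0),
    np.cbrt(x0 * x1 * x2),
])

# Three hidden layers of 64 units: inputs (3) -> 64 -> 64 -> 64 -> outputs (3).
model = MLPRegressor(hidden_layer_sizes=(64, 64, 64), activation="relu",
                     max_iter=100, random_state=0)
model.fit(features[:-1000], roots[:-1000])
pred = model.predict(features[-1000:])
print("mean abs error on held-out roots:", np.abs(pred - roots[-1000:]).mean())
```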
Restoring permutation from differences of adjacent elements

Suppose a permutation $$\pi \in S_n$$ is encoded by a list of integers $$P=(p_1, p_2, \ldots, p_{n-1})$$, where $$p_i = \pi(i+1) - \pi(i)$$; i.e. $$P$$ is the list of differences of adjacent elements. Now, given $$P$$, is it possible to restore the original permutation $$\pi$$ (or to tell that there is no solution)?

Create the list $$(q_1,\ldots,q_{n-1})$$ where $$q_m=\sum_{i=1}^m p_i$$. Then $$q_m=\pi(m+1)-\pi(1)$$. Now $$\pi(1)$$ is equal to the number of $$q_m$$ that are negative, plus $$1$$. We reconstruct the permutation by putting this element first and adding it to all the $$q_m$$. There is a solution if and only if the result is a permutation.
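A direct implementation of this reconstruction, as a short Python sketch (the function and variable names are mine):

```python
from itertools import accumulate

def restore_permutation(p):
    """Given p[i] = pi(i+2) - pi(i+1), return pi as a list, or None."""
    q = list(accumulate(p))                 # q[m-1] = pi(m+1) - pi(1)
    first = 1 + sum(1 for v in q if v < 0)  # pi(1) = 1 + #negative prefix sums
    perm = [first] + [first + v for v in q]
    return perm if sorted(perm) == list(range(1, len(perm) + 1)) else None

print(restore_permutation([2, -1, 2, -4]))  # [2, 4, 3, 5, 1]
print(restore_permutation([1, 1, 1]))       # [1, 2, 3, 4]
print(restore_permutation([2, 2]))          # None (no permutation fits)
```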
B invests his amount in the business of A on the condition that B gets 10% of the profit as interest on his amount, and the remaining profit is divided between A and B in equal parts. At the end of the year A cheats B: he takes 20% of the profit for himself and shows the remaining profit as the actual profit. If the remaining profit is distributed between them according to the condition, then A gets Rs. 24000 more than B. Find the actual profit at the end of the year.
1. 100000
2. 200000
3. 150000
4. 250000
5. None of these
Option 2 : 200000
Detailed Solution
Given:
Cheating made by A = 20% of total profit
Difference between the profit amounts of A and B = 24000
Calculation:
Let the profit at the end of the year = x
A takes 20% of the overall profit, so the remaining (shown) profit = x × (80/100) ⇒ 0.8x
B takes 10% of the remaining profit as the interest on his investment, so the remaining profit = 0.8x × (90/100) ⇒ 0.72x
According to the question the remaining profit is divided between them equally, so
A gets = 0.36x
B gets = 0.36x
B also gets 10% of 0.8x, so
Total amount of profit that B has = 0.8x × (10/100) + 0.36x ⇒ 0.08x + 0.36x ⇒ 0.44x
Total amount of profit that A has = x × (20/100) + 0.36x ⇒ 0.20x + 0.36x ⇒ 0.56x
Difference between the amounts received by A and B = 0.56x - 0.44x = 24000
⇒ 0.12x = 24000
Total profit at the end of the year = x = 200000
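The arithmetic above can be verified in a few lines of Python (a sketch of my own; variable names are mine):

```python
x = 200_000                       # actual profit
shown = 0.8 * x                   # profit shown after A secretly removes 20%
b_interest = 0.1 * shown          # B's 10% interest on the shown profit
share = (shown - b_interest) / 2  # equal split of the remainder
a_total = 0.2 * x + share         # A keeps the hidden 20% plus his share
b_total = b_interest + share
print(a_total - b_total)          # 24000.0, matching the given difference
```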
# Experiments with Latent Dirichlet Allocation

In a couple of my previous posts I talked about clustering colors with k-means and counting clusters with EM. This kind of clustering is fairly straightforward, as you have some notion of distance between points to judge similarity. But what if you wanted to cluster text? How do you judge similarity there? (There are certain measures you could use, like the F-measure, which I'll talk about in a later post.) One way is to use Latent Dirichlet Allocation, which I first heard about while talking to a Statistics 133 GSI, and then later learned about while reading Probabilistic Models of Cognition.

Latent Dirichlet Allocation is a generative model that describes how text documents could be generated probabilistically from a mixture of topics, where each topic has a distribution over words. For each word in a document, a topic is sampled, from which a word is then sampled. This model gives us probabilities of documents, given topic distributions and words. But what's more interesting here is learning about topics given the observed documents.

Here's the plate notation view of LDA, which describes exactly how documents are generated:

The image above was created using TikZ-bayesnet (TikZ is super-fun by the way) for LaTeX, which actually provides an LDA example. I've taken their example here and modified the variable names and layout slightly to match my code. Each box is a "plate", which signifies that the structure should be repeated according to the sequence in the plate's lower right corner. Think of it like a "for-loop" for graphs. Now I'll go over all these variables.

• $\alpha_{topics}$ and $\alpha_{words}$ are hyperparameters that you set by hand. They control how the Dirichlet distributions create distributions over topics and words, respectively. A Dirichlet distribution outputs a vector that sums to 1, which can be used as the probabilities for a multinomial distribution. We'll usually set $\alpha_{topics}$ and $\alpha_{words}$ to be a number greater than 0, but much less than 1. The idea here is that we generate mixtures that are very different from each other. Take a look at the picture below, which represents some samples from a Dirichlet distribution over 3 categories for different values of the $\alpha$ parameter. (Actually $\alpha$ is itself a vector of 3 numbers, so $\alpha$ of 1 really means $\alpha$ is [1, 1, 1].) The leftmost distribution creates highly varied samples. Think of the three points as the proportion of three different words, like "dog", "cat", and "mouse", and suppose we're generating topics. This might create topics like [0.8, 0.1, 0.1] (mostly dog), [0.2, 0.9, 0] (mostly cat), etc. Whereas the rightmost distribution creates topics that are much more in the center, which means they're much closer to each other. Here we might create topics like [0.3, 0.4, 0.3], which means the words "dog", "cat", and "mouse" are almost equally likely to be generated by this topic. Smaller alpha values should give much more distinguishing topics, though I would suspect that setting them too small would give unrealistic topics (e.g. a topic that is only the word "the").
• $worddist_k$ is a vector as long as the number of unique words we have in all the documents. For each topic $k$, it tells us how frequently a word is generated under that topic.
• $topicdist_d$ is a vector as long as the number of topics we're modeling.
A vector of length $k$ (the number of topics) is generated for each document $d$, which describes "how much" of each topic is represented in a document. If you think documents usually only ever have 1 topic, you'd probably set $\alpha_{topics}$ really low. If you think documents contain words from a number of topics, you'd probably set $\alpha_{topics}$ slightly higher.
• For each word in a document, we draw a topic $wordtopic_{d,w}$ from the output of $topicdist_d$. $wordtopic_{d,w}$ is an integer, like "1" for topic 1.
• $word_{d,w}$ is observed. It represents the word we actually saw. $word_{d,w}$ in our model is also an integer, like "37", which represents the 37th unique word in our list of words over all documents.

Put all together, the model looks like this in JAGS:

model {
  for (k in 1 : Ktopics) {
    # each topic is a distribution over the whole vocabulary
    worddist[k,1:Nwords] ~ ddirch(alphaWords)
  }
  for (d in 1 : Ndocs) {
    # each document has its own mixture over topics
    topicdist[d,1:Ktopics] ~ ddirch(alphaTopics)
    for (w in 1 : length[d]) {
      # draw a topic for word position w, then draw the word itself
      wordtopic[d,w] ~ dcat(topicdist[d,1:Ktopics])
      word[d,w] ~ dcat(worddist[wordtopic[d,w],1:Nwords])
    }
  }
}

Of the four variables here (excluding hyperparameters), three are unobserved, which means the model learns them. I think the most interesting one is $worddist_k$, which will show us what each topic looks like. Here's an example of what topics might look like visualized as a word cloud. In this example I took the first paragraphs of the "Dog", "Cat", and "Mouse" Wikipedia articles. You would hope that the three topics could be separated cleanly, showing mainly "cat" in one topic, "dog" in another, and "mouse"/"mice" in the last, but I currently haven't had all that much success with this. This example is also kind of cheating, in that maybe for this case I could just do supervised learning with a naive Bayes classifier, since I know how all the documents should be clustered.

I initially tried using LDA to cluster different lines from my system log. I later moved to using a dendrogram clustering system using the "F-measure" or "Word Error Rate", which worked much better both speed- and accuracy-wise. I may talk about this in a later post.

In the code I've included below I also show my failed attempt to detect the number of clusters by fitting multiple models with different numbers of clusters and measuring deviance (something similar to a penalized version of log likelihood, if I understand correctly) to see which fits best. (I may try this with Stan in the future, which will directly give you the log likelihood.) I also show how you can use a library like snowfall to fit multiple JAGS models in parallel. Sampling is embarrassingly parallel, so there's no reason to leave your other CPU cores idle while one does all the work.

I think LDA is more interesting when you're studying topics, instead of trying to simply cluster documents. Ideally I'd like to do more of an unbounded or hierarchical LDA, where the number of topics could vary (or, in the case of hierarchical LDA, topics have child topics), but I've yet to implement this. What I really liked about the Church programming language was that implementing unbounded models was fairly straightforward. Not so in JAGS. This may be possible to implement in Stan, which would be fun and interesting to do at some point. Anyhoo, here's the code:
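The R/snowfall listing itself is not reproduced in this excerpt. As a stand-in illustration only, here is a minimal NumPy sketch of the generative process the plate diagram and JAGS model describe, including the effect of small Dirichlet alphas producing "peaky", well-separated mixtures (all names, sizes, and the library choice are mine, not the author's):

```python
import numpy as np

rng = np.random.default_rng(0)
Ktopics, Nwords, Ndocs, doclen = 3, 10, 5, 20
alpha_words, alpha_topics = 0.1, 0.1  # small alphas -> highly varied mixtures

# One word distribution per topic; shape (Ktopics, Nwords).
worddist = rng.dirichlet([alpha_words] * Nwords, size=Ktopics)
for d in range(Ndocs):
    # One topic mixture per document.
    topicdist = rng.dirichlet([alpha_topics] * Ktopics)
    # A topic for each word position, then the word itself.
    wordtopic = rng.choice(Ktopics, size=doclen, p=topicdist)
    words = [rng.choice(Nwords, p=worddist[k]) for k in wordtopic]
    print(d, words)
```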
You can set the questions that are used on adjudicator feedback forms. The only field that is permanently there is the score field, which is an overall score assessing the adjudicator. All other questions (including a generic comments section) must be defined if you want them to be on the form.

Currently, there are two methods of setting questions: through the edit database area, or using the importtournament command. Most of what you need to know is explained in help text in the edit database area. (Even if you're using importtournament, you might find the field descriptions in the edit database area helpful.) Some more details are here.

Question types and their relevant options:
• checkbox: (no options)
• yes/no (dropdown): (no options)
• integer (textbox): min_value, max_value
• integer scale: min_value, max_value
• float: min_value, max_value
• text: (no options)
• long text: (no options)
• select one: choices
• select multiple: choices

Options:
• min_value and max_value specify the minimum and maximum allowable values in the field. Mandatory for "integer scale" types and optional for "integer (textbox)" and "float" types.
• choices is used with "select one" and "select multiple" types, and is a //-delimited list of possible answers, e.g. biased//clear//concise//rambly//attentive//inattentive
• required specifies whether users must fill out the field before clicking "submit". This requirement is only enforced on public submission forms. It is not enforced on forms entered by tab room assistants. The exception to this is the "checkbox" type. For checkboxes, "required" means that the user cannot submit the form unless the box is checked. Think of it like an "I agree to the terms" checkbox. This isn't a deliberate design decision; it's just a quirk of how checkboxes work on web forms.

We don't really intend to add any further complexity to the built-in feedback system. If the above answer types don't cover your needs, we suggest using a third-party feedback system. You might be able to adapt SurveyMonkey, Google Forms or Qualtrics to your needs. We may be persuaded to make an exception if the new question type you have in mind is easy to add: that is, if it is straightforward to implement using standard web page elements and fits into the existing questionnaire framework (see Different questionnaires below). If you think there is such a case, please contact us using the contact details in the Authors & Acknowledgements section.

Different questionnaires
Tabbycat allows you to specify two questionnaires: one for feedback submitted by teams, and one for feedback submitted by adjudicators. You must specify in each question whether to include the question in each questionnaire.
• from_team, if checked, includes the question in feedback submitted by teams
• from_adj, if checked, includes the question in feedback submitted by adjudicators

Who gives feedback on whom?
Tabbycat allows for three choices for which adjudicators give feedback on which other adjudicators:
• Chairs give feedback on panellists and trainees
• Chairs give feedback on panellists and trainees, and panellists give feedback on chairs
• All adjudicators give feedback on all other adjudicators on their panel

You can set this in the feedback paths option under Setup > Configuration > Feedback. Your choice affects each of the following:
• The options presented to adjudicators in the online feedback form
• The printable feedback forms
• The submissions expected when calculating feedback progress and highlighting missing feedback

The feedback paths option only affects feedback from adjudicators.
Teams are always assumed to give feedback on the orallist, and they are encouraged to do so through hints on the online and printable feedback forms, but there is nothing technically preventing them from submitting feedback on any adjudicator on their panel.

If you need a different setting, you need to edit the source code. Specifically, you should edit the function expected_feedback_targets in tabbycat/adjfeedback/utils.py. Unless we can be convinced that they are very common, we don't intend to add any further choices to the feedback paths option. If your needs are specific enough that you need to differ from the available settings, they are probably also beyond what is sensible for a built-in feedback system, and we recommend using a third-party feedback system instead.

How is an adjudicator's score determined?
For the purpose of the automated allocation, an adjudicator's overall score is a function of their test score, the current round's feedback weight, and their average feedback score. This number is calculated according to the following formula:

overall score = (1 − w) × (test score) + w × (average feedback score)

where w is the feedback weight for the round. Note that because the feedback score is averaged across all pieces of feedback (rather than on a per-round total), rounds in which a person receives feedback from many sources (say from all teams and all panellists) could impact their average score much more than a round in which they only receive feedback from one or two sources.

Under this formula, each round's feedback weight can be used to determine the relative influence of the test score vs feedback in determining the overall score. As an example, say that an adjudicator received 5.0 as their test score, but their average feedback rating has thus far been 2.0. If the current round's feedback weight is set to 0.75, then their overall score would be 2.75. If the current round's feedback weight is set to 0.5, their score would be 3.5. If the weight was 0, their score will always be their test score; if the weight was 1, it will always be their average feedback value.

Note: To change the weight of a round you will need to go to the Edit Database area, open the round in question, and change its Feedback weight value. It is common to set rounds with a low feedback weight value early on in the tournament (when feedback is scant) and to increase the feedback weight as the tournament progresses.

Note: A participant's test score can, in conjunction with feedback weight, also be used as a manual override for an adjudicator's overall ranking. At several tournaments, adjudication cores have set every round's feedback weight to 0, and manually adjusted an adjudicator's test score in response to feedback they have received and reviewed. In this way complete control over every adjudicator's overall score can be exerted.

Note: If feedback from trainee adjudicators is enabled, any scores that they submit in their feedback are not counted towards that adjudicator's overall score.

Feedback can be marked as discarded in the database view, under the confirmed field. It can also be marked as ignored in the same view. Controls to reverse these designations are also available there. To mark feedback as ignored, an option is available in the administrator's and assistant's feedback adding forms, as well as in the form of a toggle link at the bottom of each card.
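The formula above is a reconstruction; the original rendered equation did not survive extraction, but the two worked examples pin it down. As a quick sketch (names are mine, not Tabbycat's):

```python
# Weighted overall score, as reconstructed from the examples above.
def overall_score(test_score, avg_feedback, weight):
    return (1 - weight) * test_score + weight * avg_feedback

print(overall_score(5.0, 2.0, 0.75))  # 2.75, matching the first example
print(overall_score(5.0, 2.0, 0.5))   # 3.5, matching the second
```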
# Problem with ParametricPlot of a closed curve [duplicate]

When plotting a closed curve with ParametricPlot over many periods, ParametricPlot[{Cos[u], Sin[u]}, {u, 0, 100000}] renders a filled region instead of overwriting the previous curve. Of course, in this example I know it is sufficient for u to run over a 2 Pi interval, but I am working on a case where I do not know whether the curve is closed, so I need to know whether the area is filled because the curve is actually not closed, rather than it being just numerical garbage. I can fix this behavior by increasing the number of plot points, but I'd rather not use too much memory unnecessarily. Does anyone know how to address this problem in some other way?

• FunctionPeriod[{Cos[u], Sin[u]}, u] – Bob Hanlon Jun 23 at 0:53

Instead of showing the lines generated by ParametricPlot, which are indeed prone to artifacts even in your simple case, you could show the points where the function has been evaluated instead. Particularly if you have quite a few of them, they will be so closely spaced that they may well fuse into the semblance of a line:

ParametricPlot[ {Cos[u], Sin[u]}, {u, 0, 100000}, PlotStyle -> None, Mesh -> All, MeshStyle -> Black ]

Alternative: ParametricPlot[{Cos[u], Sin[u]}, {u, 0, 100000}, MaxRecursion -> 15]
# Why are health and nutrition, and music theory not in beta yet? May 24, 2016

Here's why.

#### Explanation:

Health and Nutrition and Music Theory are not in beta yet because going by the number of founders as a way to gauge whether or not a subject can be released is not a good strategy to use. All the subjects that were promoted to beta after at least 8 people signed up as founders eventually graduated from beta because the community stepped up and did what the founders were supposed to do. Simply put, none of the subjects that had at least 8 founders actually benefited from input from all the founders. In fact, a lot of subjects only saw contributions from 2 or 3 founders over the period in which the subject was in beta. A lot of people signed up to be founders without realizing that their contribution would be needed in order for the subject to graduate.

US History is a perfect example of that. This subject was one of the first to be promoted to beta status, I think almost 6 months ago, yet it's as far from graduating as it was a week after it was launched. Notice that $18$ founders have signed up to help grow this subject, yet only $2$ have actually been active. World History, Environmental Science, and Psychology are pretty much in the same position. Astrophysics is not doing that well either.

So, to sum this up, the number of founders will no longer be a criterion for promoting a subject to beta status. The team is working on figuring out other criteria that can be used to gauge whether or not a subject should go into beta. Until that happens, subjects will no longer be promoted to beta.
## Solution of damped oscillation D.E.

$$\frac{d^2x}{dt^2} + 2k\frac{dx}{dt} + \omega_0^2\, x(t) = 0$$

While solving this we assume the solution to be of the form $x = f(t)e^{-kt}$. Why is this exponential taken?

Because the solution to the equation is known to be of this form! You are, if you like, assuming the form of the answer in advance! This is quite a common procedure in 'solving' differential equations. If your physical intuition is good, you could identify the $2k\frac{dx}{dt}$ term as due to a resistive force, and guess that there'd be an exponential decay factor in the solution.

Once you've made the substitution $x = e^{-kt} f(t)$, you have a differential equation for $f(t)$, which is just the ordinary shm equation with angular frequency $\omega$ given by $\omega^2 = \omega_0^2 - k^2$, provided that $\omega > k$. So the solution can be seen immediately, by elementary methods, to be the product of sinusoidally oscillating, and exponential damping, factors. [If $\omega < k$ we have a non-oscillatory fall-off in x with time.]

Making the substitution $x = e^{-kt} f(t)$ involves a bit of drudgery and, imo, one might as well go the whole hog and (if $\omega > k$) make the substitution $x=Ae^{-kt}\sin(\omega t + \epsilon)$ at the outset. You'll find that this expression does fit, provided that $\omega^2 = \omega_0^2 - k^2$.

If you're happy with complex numbers, and the idea of linear combinations, there is a very slick method which simply requires you to substitute $x=Ae^{-\alpha t}$. Alpha turns out to be complex if $\omega > k$, and you need to form a linear combination of two solutions. The mathematics is a bit more advanced than that needed for the substitution $x = e^{-kt} f(t)$ or $x=Ae^{-kt}\sin(\omega t + \epsilon)$.

In the previous post $\omega > k$ should read $\omega_0 > k$, and $\omega < k$ should read $\omega_0 < k$. Sorry.
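The substitution can be checked symbolically. Here is a small SymPy sketch of my own (not from the thread) verifying that $x = e^{-kt}f(t)$ turns the damped equation into the SHM equation $f'' + (\omega_0^2 - k^2)f = 0$:

```python
import sympy as sp

t, k, w0 = sp.symbols("t k omega_0", positive=True)
f = sp.Function("f")
x = sp.exp(-k * t) * f(t)

# Left-hand side of x'' + 2k x' + w0^2 x = 0 after the substitution.
lhs = sp.diff(x, t, 2) + 2 * k * sp.diff(x, t) + w0**2 * x

# Dividing out the exponential should leave f'' + (w0^2 - k^2) f.
print(sp.expand(sp.simplify(lhs * sp.exp(k * t))))
# expected: (omega_0**2 - k**2)*f(t) + Derivative(f(t), (t, 2))
```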
# Math Help - Easier way to factor this?

1. ## Easier way to factor this?

30x^3y-25x^2y^2-30xy^3
I did all the work, but when I got down to 5xy(6x^2-5xy-6y^2) it took me forever to get the answer 5xy(3x+2y)(2x-3y). Is there a faster way to determine which of 2y and 3y gets the negative sign in the answer? It took me an hour just to find the answer to this!

2. ## Re: Easier way to factor this?

$30x^{3}y-25x^{2}y^{2}-30xy^{3}=5xy(6x^{2}-5xy-6y^{2})=5xy(6x^{2}-9xy+4xy-6y^{2})$
How you think... well, you have to find two numbers whose sum is -5 and whose product is -36 $\left ( 6 \cdot (-6) \right )$: $-9+4=-5$ and $-9\cdot 4=-36$. I don't think there's an easier way to determine the sign of that expression (I hope that's what you asked) than the traditional one.

3. ## Re: Easier way to factor this?

Originally Posted by cytotoxictcell
30x^3y-25x^2y^2-30xy^3
I did all the work, but when I got down to 5xy(6x^2-5xy-6y^2) it took me forever to get the answer 5xy(3x+2y)(2x-3y). Is there a faster way to determine which of 2y and 3y gets the negative sign in the answer? It took me an hour just to find the answer to this!

If your question is only "which part" is negative, it shouldn't have taken you an hour to try both: $(3x- 2y)(2x+ 3y)= 6x^2+ 9xy- 4xy- 6y^2= 6x^2+ 5xy- 6y^2$ is wrong because the sign on "5xy" is wrong. $(3x+2y)(2x-3y)$, of course, gives $6x^2+ 4xy- 9xy- 6y^2= 6x^2- 5xy- 6y^2$, the correct product.
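The factorisation can also be confirmed with a CAS; this is my own check, not from the thread:

```python
import sympy as sp

x, y = sp.symbols("x y")
print(sp.factor(30*x**3*y - 25*x**2*y**2 - 30*x*y**3))
# expected: 5*x*y*(2*x - 3*y)*(3*x + 2*y), up to factor ordering
```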
# Envariance and Bohm

1. Oct 2, 2010
### atyy
The Bohmian interpretation says quantum mechanics is deterministic. The environmental darwinism approach tries to derive the Born rule and to make quantum mechanics deterministic. If darwinism succeeds, would it be compatible with Bohm, since both aim to make QM deterministic?

2. Oct 2, 2010
Can you quote a reliable source?

3. Oct 2, 2010
### atyy
http://arxiv.org/abs/quant-ph/0312059 "Bohmian mechanics, on the other hand, upholds a unitary time evolution of the wavefunction, but introduces an additional dynamical law that explicitly governs the always-determinate positions of all particles in the system."

4. Oct 2, 2010
But this statement in the paper: "Thus the particles follow determinate trajectories described by Q(t), with the distribution of Q(t) being given by the quantum equilibrium distribution $$\rho=|\psi|^2$$" is inaccurate. To make it accurate one would have to add: "provided it is given by this distribution at some time instant t0."
Last edited: Oct 2, 2010

5. Oct 2, 2010
Quoting from Goldstein, Struyve, "On the Uniqueness of Quantum Equilibrium in Bohmian Mechanics", Journal of Statistical Physics 128, 1197-1209 (2007):
"Bohmian mechanics (often called the de Broglie-Bohm theory) yields the same predictions as standard quantum theory provided the configuration of a system with wave function ψ is random, with distribution given by $$|\psi|^2$$. This distribution, the quantum equilibrium distribution [1, 2], satisfies the following natural property: If the distribution of the configuration at some time t0 is given by $$|\psi_{t_0}|^2$$, then the distribution of the configuration at any other time t will be given by $$|\psi_t|^2$$ — i.e., with respect to the wave function it will have the same functional form at the other time — provided, of course, that the wave function evolves according to Schrodinger's equation between the two times and the configuration evolves according to the law of motion for Bohmian mechanics."

6. Oct 4, 2010
### Demystifier
Probably not, because the two approaches have different ontologies. Yet, there could be a relation between the two approaches not yet seen explicitly (at least by me).

7. Oct 4, 2010
### atyy
Thanks, Demystifier. I guess we have to wait and see, but I would hope it'd be something like a different foliation of the same spacetime.
mixedmath
Explorations in math and programming
David Lowry-Duda

At a recent colloquium at the University of Warwick, the fact that $$\label{question} \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = 2$$ came up. Although this was mentioned in passing, John Cremona asked: how do you prove that?

It almost fails a heuristic check, as one can quickly check that $$\label{similar} \sum_{n \geq 1} \frac{n}{2^n} = 2,$$ which is surprisingly similar to \eqref{question}. I wish I knew more examples of pairs with a similar flavor.

[Edit: Note that an addendum to this note has been added here. In it, we see that there is a way to shortcut the "hard part" of the long computation.]

The right way

Shortly afterwards, Adam Harper and Samir Siksek pointed out that this can be determined from Lambert series, and in fact that Hardy and Wright include a similar exercise in their book. This proof is delightful and short. The idea is that, by expanding the denominator in a power series, one has $$\sum_{n \geq 1} a(n) \frac{x^n}{1 - x^n} \notag = \sum_{n \geq 1} a(n) \sum_{m \geq 1} x^{mn} = \sum_{n \geq 1} \Big( \sum_{d \mid n} a(d) \Big) x^n,$$ where the inner sum is a sum over the divisors of $n$. This all converges beautifully for $\lvert x \rvert < 1$. Applied to \eqref{question}, we find that $$\sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} \notag = \sum_{n \geq 1} \varphi(n) \frac{2^{-n}}{1 - 2^{-n}} = \sum_{n \geq 1} 2^{-n} \sum_{d \mid n} \varphi(d),$$ and as $$\sum_{d \mid n} \varphi(d) = n, \notag$$ we see that \eqref{question} can be rewritten as \eqref{similar} after all, and thus both evaluate to $2$.

That's a nice derivation using a series that I hadn't come across before. But that's not what this short note is about. This note is about evaluating \eqref{question} in a different way, arguably the wrong way. But it's a wrong way that works out in a nice way that at least one person (and perhaps exactly one person) finds appealing.

The wrong way

We will use Mellin inversion; this is essentially Fourier inversion, but in a change of coordinates. Let $f$ denote the function $$f(x) = \frac{1}{2^x - 1}. \notag$$ Denote by $f^*$ the Mellin transform of $f$, $$f^*(s) := \mathcal{M}[f(x)](s) := \int_0^\infty f(x) x^s \frac{dx}{x} = \frac{1}{(\log 2)^s} \Gamma(s)\zeta(s),\notag$$ where $\Gamma(s)$ and $\zeta(s)$ are the Gamma function and the Riemann zeta function. (These are functions near and dear to my heart, so I feel comfort when I see them. But I recognize that others might think that this is an awfully complicated way to start answering this question. And I must say, those people are probably right.)

For a general nice function $g(x)$, its Mellin transform satisfies $$\mathcal{M}[g(nx)](s) = \int_0^\infty g(nx) x^s \frac{dx}{x} = \frac{1}{n^s} \int_0^\infty g(x) x^s \frac{dx}{x} = \frac{1}{n^s} g^*(s).\notag$$ Further, the Mellin transform is linear. Thus $$\label{mellinbase} \mathcal{M}[\sum_{n \geq 1} \varphi(n) f(nx)](s) = \sum_{n \geq 1} \frac{\varphi(n)}{n^s} f^*(s) = \sum_{n \geq 1} \frac{\varphi(n)}{n^s} \frac{\Gamma(s) \zeta(s)}{(\log 2)^s}.$$

The Euler phi function $\varphi(n)$ is multiplicative and nice, and its Dirichlet series can be rewritten as $$\sum_{n \geq 1} \frac{\varphi(n)}{n^s} \notag = \frac{\zeta(s-1)}{\zeta(s)}.$$ Thus the Mellin transform in \eqref{mellinbase} can be written as $$\frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1).
\notag$$ By the fundamental theorem of Mellin inversion (which is analogous to Fourier inversion, but again in different coordinates), the inverse Mellin transform will return the original function. The inverse Mellin transform of a function $h(s)$ is defined to be $$\mathcal{M}^{-1}[h(s)](x) \notag := \frac{1}{2\pi i} \int_{c - i \infty}^{c + i\infty} x^{-s} h(s) ds,$$ where $c$ is taken so that the integral converges beautifully, and the integral is over the vertical line with real part $c$. I'll write $(c)$ as a shorthand for the limits of integration. Thus $$\label{mellininverse} \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1} = \frac{1}{2\pi i} \int_{(3)} \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1) x^{-s} ds.$$

We can now describe the end goal: evaluate \eqref{mellininverse} at $x=1$, which will recover the value of the original sum in \eqref{question}. How can we hope to do that? The idea is to shift the line of integration arbitrarily far to the left, pick up the infinitely many residues guaranteed by Cauchy's residue theorem, and recognize the infinite sum as a classical series.

The integrand has residues at $s = 2, 0, -2, -4, \ldots$, coming from the zeta function ($s = 2$) and the Gamma function (all the others). Note that there aren't poles at negative odd integers, since the zeta function has zeroes at the negative even integers. Recall that $\zeta(s)$ has residue $1$ at $s = 1$ and $\Gamma(s)$ has residue $(-1)^n/{n!}$ at $s = -n$. Then shifting the line of integration and picking up all the residues reveals that $$\sum_{n \geq 1} \frac{\varphi(n)}{2^{n} - 1} \notag =\frac{1}{\log^2 2} + \zeta(-1) + \frac{\zeta(-3)}{2!} \log^2 2 + \frac{\zeta(-5)}{4!} \log^4 2 + \cdots$$

The zeta function at negative integers has a very well-known relation to the Bernoulli numbers, $$\label{zeta_bern} \zeta(-n) = - \frac{B_{n+1}}{n+1},$$ where the Bernoulli numbers are the coefficients in the expansion $$\label{bern_gen} \frac{t}{1 - e^{-t}} = \sum_{m \geq 0} B_m \frac{t^m}{m!}.$$ Many general proofs of the values of $\zeta(2n)$ use this relation and the functional equation, as well as a computation of the Bernoulli numbers themselves. Another important aspect of the Bernoulli numbers that is apparent through \eqref{zeta_bern} is that $B_{2n+1} = 0$ for $n \geq 1$, lining up with the trivial zeroes of the zeta function.

Translating the zeta values into Bernoulli numbers, we find that \eqref{question} is equal to \begin{align} &\frac{1}{\log^2 2} - \frac{B_2}{2} - \frac{B_4}{2! \cdot 4} \log^2 2 - \frac{B_6}{4! \cdot 6} \log^4 2 - \frac{B_8}{6! \cdot 8} \log^6 2 - \cdots \notag \\ &= -\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!}. \label{recog} \end{align}

This last sum is excellent, and can be recognized. For a general exponential generating series $$F(t) = \sum_{m \geq 0} a(m) \frac{t^m}{m!},\notag$$ we see that $$\frac{d}{dt} \frac{1}{t} F(t) \notag =\sum_{m \geq 0} (m-1) a(m) \frac{t^{m-2}}{m!}.$$ Applying this to the series defining the Bernoulli numbers from \eqref{bern_gen}, we find that $$\frac{d}{dt} \frac{1}{t} \cdot \frac{t}{1 - e^{-t}} \notag =- \frac{e^{-t}}{(1 - e^{-t})^2},$$ and also that $$\frac{d}{dt} \frac{1}{t} \cdot \frac{t}{1 - e^{-t}} \notag =\sum_{m \geq 0} (m-1) B_m \frac{t^{m-2}}{m!}.$$ This is exactly the sum that appears in \eqref{recog}, with $t = \log 2$.
Putting this together, we find that $$-\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!} \notag =\frac{e^{-\log 2}}{(1 - e^{-\log 2})^2} = \frac{1/2}{(1/2)^2} = 2.$$ Thus we find that \eqref{question} really is equal to $2$, as we had sought to show.
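Both sums are easy to sanity-check numerically as well; here is a tiny Python check of my own (not from the post), using SymPy's totient:

```python
from sympy import totient

# Terms decay like n/2^n, so 200 terms is far more than enough.
s1 = sum(float(totient(n)) / (2**n - 1) for n in range(1, 200))
s2 = sum(n / 2**n for n in range(1, 200))
print(s1, s2)  # both print 2.0 to machine precision
```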
# Tag Info

6
The following code shows a comparison between the current code and one produced using booktabs and siunitx: \documentclass[12pt,oneside,a4paper,fleqn]{report} \usepackage{float} \usepackage{booktabs} \usepackage{siunitx} \usepackage{slashbox} \begin{document} \begin{table}[H] \centering \begin{tabular}{|l|lll|} \hline \backslashbox{$f_i$}{$c_j$} ...

2
The \contentsline stuff is wrongly used in the O.P.'s document, i.e. wrongly placed {} delimiters. For example \addtocontents{lof}{\protect\contentsline{part}% {\protect\numberline{\thisparttitle}}{}{3.2em} } tries to write a \numberline with the part title as the number, leaving out the title and setting the page number to be 3.2em ...

2
I don't know whether this is a good idea. Why have two different formats for one and the same object? On the other hand, having the number in italics but the brackets in a normal round font looks unpleasing. In any case, here's how you can do it: simply redefine \@cite (I also used italics for eventual annotations, but if that is not required, replace ...

1
Here are three possibilities, with enumitem's tools: \documentclass{report}%{memoir} \usepackage{enumitem} \usepackage{lipsum} \begin{document} The way to propose a solution that \begin{enumerate}[nosep, wide] \item is able to clearly show my items, \item can be read as if it were only one sentence, and \item is well formatted \end{enumerate} is ...

1
Warning: this is not a general solution. This is a solution to be applied after you've completed most of your document. As already mentioned in the comments, you could get around this issue by removing the use of \limits: \documentclass{article} \usepackage{amsmath,amssymb,amsthm} \newtheorem{defn}{Definition} \begin{document} \begin{defn} An ...

1
This is a hack and not guaranteed. As jon mentioned in comments, this is not really supported. \documentclass[11pt,a4paper]{moderncv} \moderncvstyle{classic} \moderncvcolor{blue} \usepackage[english,ngerman]{babel} \usepackage[utf8]{inputenc} \usepackage[ babel, german=quotes ]{csquotes} \AfterPreamble{\hypersetup{ colorlinks=true, linkcolor=gray, ...
# CERN Accelerating science

The Articles collection aims to cover, as far as possible, the published literature in particle physics and its related technologies. The collection starts from the middle of the 19th century, comprising only the most important documents from the first decades. Full coverage starts from 1980 onwards. CERN publications, though, are covered 100% since the foundation of the organisation in 1954. The CERN Annual Report, vol. 3 - List of CERN Publications, is extracted from this dataset.

# Published Articles

2016-02-08 07:32
Recursive Starlight and Bias Estimation for High-Contrast Imaging with an Extended Kalman Filter / Riggs, A J Eldorado ; Kasdin, N Jeremy ; Groff, Tyler D
For imaging faint exoplanets and disks, a coronagraph-equipped observatory needs focal plane wavefront correction to recover high contrast. The most efficient correction methods iteratively estimate the stellar electric field and suppress it with active optics. [...]
arXiv:1602.02044.- 2016 - 30 p. - Published in : Journal of Astronomical Telescopes, Instruments, and Systems. 2(1), 011017 (Feb 05, 2016)
External link: Preprint

2016-02-08 07:32
The Cool Giant HD 77361 - A Super Li-Rich Star / Lyubimkov, L S ; Kaminsky, B M ; Metlov, V G ; Pavlenko, Ya V ; Poklad, D B ; Rachkovskaya, T M
Super Li-rich stars form a very small and enigmatic group whose existence cannot be explained in terms of the standard stellar evolution theory. The goal of our study is to check the reality of this group of cool giants based on an independent technique. [...]
arXiv:1602.02000.- 2016 - 13 p. - Published in : Astron. Lett. 41 (2015) 809-823
External link: Preprint

2016-02-08 07:32
Modelling the flare activity of Sgr A* / Howard, E M
The latest observational data provide evidence that the emissions from Sgr A* originate from an accretion disc within ten gravitational radii of the dynamical centre of the Milky Way. We investigate the physical processes responsible for the variable observed emissions from the compact radio source Sgr A*. [...]
arXiv:1602.01909.- 2016 - 7 p. - Published in : AIP Conference Proceedings; 10/27/2009, Vol. 1178 Issue 1, pp. 50-56
External link: Preprint

2016-02-08 07:32
Ca II triplet spectroscopy of RGB stars in NGC 6822: kinematics and metallicities / Swan, Jesse ; Cole, Andrew A ; Tolstoy, Eline ; Irwin, Mike J
We present a detailed analysis of the chemistry and kinematics of red giants in the dwarf irregular galaxy NGC 6822. Spectroscopy at 8500 Angstroms was acquired for 72 red giant stars across two fields using FORS2 at the VLT. [...]
arXiv:1602.01897.- 2016 - 15 p. - Published in : Mon. Not. R. Astron. Soc. 456 (2016) 4315
External link: Preprint

2016-02-08 07:32
Large-scale 3D mapping of the intergalactic medium using the Lyman Alpha Forest / Ozbek, Melih ; Croft, Rupert A C ; Khandai, Nishikanta
Maps of the large-scale structure of the Universe at redshifts 2-4 can be made with the Lyman-alpha forest, which are complementary to low redshift galaxy surveys. We apply the Wiener interpolation method of Caucci et al. [...]
arXiv:1602.01862.- 2016 - 14 p. - Published in : MNRAS (March 11, 2016) 456 (4): 3610-3623
External link: Preprint

2016-02-06 07:37
A quantum protective mechanism in photosynthesis / Marais, Adriana ; Sinayskiy, Ilya ; Petruccione, Francesco ; van Grondelle, Rienk
Since the emergence of oxygenic photosynthesis, living systems have developed protective mechanisms against reactive oxygen species.
During charge separation in photosynthetic reaction centres, triplet states can react with molecular oxygen generating destructive singlet oxygen. [...]
arXiv:1602.01689.- 2016 - 25 p. - Published in : Scientific Reports 5, Article number: 8720 (2015)
External link: Preprint

2016-02-06 07:37
Four-wave mixing in long wavelength III-nitride QD-SOAs / Al-Khursan, Amin H ; Jbara, Ahmed S ; Abood, H I
A four-wave mixing analysis is presented for quantum dot semiconductor optical amplifiers (QD SOAs) using the propagation equations (including the nonlinear propagation contribution) coupled with the QD rate equations under the saturation assumption. Long wavelength III-nitride InN and AlInN QD SOAs are simulated. [...]
arXiv:1602.01546.- 2016 - 8 p. - Published in : October 1 (2013) 4
External link: Preprint

2016-02-06 07:36
New Astrophysical Reaction Rate for the $^{12}\textrm{C}(\alpha,\gamma)^{16}\textrm{O}$ Reaction / An, Z D ; Ma, Y G ; Fan, G T ; Li, Y J ; Chen, Z P ; Sun, Y Y
A new astrophysical reaction rate for $^{12}$C($\alpha,\gamma$)$^{16}$O has been evaluated on the basis of a global R-matrix fitting to the available experimental data. The reaction rates of $^{12}$C($\alpha,\gamma$)$^{16}$O for stellar temperatures between 0.04 $\leq$ $T_9$ $\leq$ 10 are provided in tabular form and by an analytical fitting expression. [...]
arXiv:1602.01692.- 2016 - Published in : The Astrophysical Journal Letters 817 (2016) L5
External link: Preprint

2016-02-06 07:32
Detection of binary and multiple systems among rapidly rotating K and M dwarf stars from Kepler data / Oláh, Katalin ; Rappaport, Saul ; Joss, Matthew
From an examination of ~18,000 Kepler light curves of K- and M-stars we find some 500 which exhibit rotational periods of less than 2 days. Among such stars, approximately 50 show two or more incommensurate periodicities. [...]
arXiv:1602.01713.- 2016 - Published in : ASP Conference Series: 4 (2015), pp. 6
External link: Preprint

2016-02-06 07:32
Very High Energy Observations of Shell-Type Supernova Remnants with SHALON Mirror Cherenkov Telescopes / Sinitsyna, Vera G ; Sinitsyna, Vera Y
The investigation of VHE gamma-ray sources by any method, including mirror Cherenkov telescopes, touches on the problem of the cosmic ray origin and, accordingly, the role of the Galaxy in their generation. The SHALON observations have yielded results on Galactic shell-type supernova remnants (SNR) at different evolutionary stages. [...]
arXiv:1602.01694.- 2016 - Published in : Bulletin of the Lebedev Physics Institute, 2015, vol. 42, issue 6, pp. 169-175
External link: Preprint
# MMS Technology

## Lesson Abstract
This lesson provides context for why we are studying technology and engineering at MMS. This lesson is the springboard for the remainder of a quarter-long technology education course for 7th and 8th graders. Classes are inclusive of all learners. One 50-minute class period is dedicated to this lesson. During the quarter, students meet 3-4 times per week for a total of 35 class periods. Coursework is centered around Pitsco learning modules, and is rigid in terms of lesson design. Each student will complete 3 of the various modules during the quarter. The goal of this initial lesson is to provide an overview of the concept of technology, gain a sense of what students understand technology to be, and then to have them build a small structure using specific materials.

Lesson Themes
The goal is to challenge students to question the meaning and importance of technology. Technology is something used to solve problems and make lives easier, and is not relegated exclusively to the world of electronics.

## Essential Questions
• What is technology and why is it important to learn about it?
• How has technology solved problems and made our lives easier over time?
• What are some things that can, or cannot, be considered technology?

Practice/process standards from the NH science standards that address scientific inquiry:
S:SPS3:8:3.1 Design a product or solution to a problem.
S:SPS3:8:3.3 Evaluate student-designed products according to established criteria and recommend improvements or modifications.

Content standards from the NH Technology/Engineering Education Curriculum Guide:
F1: Trace the evolution of technological systems and processes
G1: Evaluate technological systems and their impact on people, the environment, culture, and the economy

Common Core Anchor Standards
Citing textual evidence
• CCSS.ELA-LITERACY.RST.6-8.1: Cite specific textual evidence to support analysis of science and technical texts
• CCSS.ELA-LITERACY.RST.11-12.1: Cite specific textual evidence to support analysis of science and technical texts, attending to important distinctions the author makes and to any gaps or inconsistencies in the account.
Determining central ideas or conclusions
• CCSS.ELA-LITERACY.RST.6-8.2: Determine the central ideas or conclusions of a text; provide an accurate summary of the text distinct from prior knowledge or opinions.
• CCSS.ELA-LITERACY.RST.9-10.2: Determine the central ideas or conclusions of a text; trace the text's explanation or depiction of a complex process, phenomenon, or concept; provide an accurate summary of the text.
• CCSS.ELA-LITERACY.RST.11-12.2: Determine the central ideas or conclusions of a text; summarize complex concepts, processes, or information presented in a text by paraphrasing them in simpler but still accurate terms.

## Learning Objectives
• Students will gain an understanding of how technology solves problems and makes our lives easier through a combination of video and text-based resources.
• Students will be able to design and build a portable tower that supports a ping pong ball at least 12 inches from the surface of their desk using limited resources.

## Text Set
Anchor Text
Title of Anchor Text: What is Technology? by Tony Montez
URL of Anchor Text:
Supporting Texts
Organized Text Set
What is the central idea of the video, "What is technology?"
Questions from the article, "Why is Technology So Important Today?"
• Cite examples that the author has provided to support his belief that the world is smaller and life is faster because of technology.
• According to the author, what role does technology play in the foods that we consume?
• According to the author, why do we "owe our luxurious lives to technology"?
• How has technology changed communication in the world?
• What is the author's view on the role of computers in education?
• How does automation play a role in technology?
• What other factors may influence technology?

Questions from the article, "The Nature of Technology."
• How is technology dependent upon science?
• How does the "human presence" the author refers to impact or interact with technology?
• What other factors may influence technology?

Ping Pong Tower: The purpose of this activity is to simulate an experience of solving a problem (building a tall tower that can hold a ping pong ball) using a design-and-build process. This gives the students an idea of what it is like to create a technological solution to a problem. Using common objects, students are asked to create a tower meeting certain specifications. Limiting the resources for the students is important because it simulates the budget and resource constraints faced by engineers at any high-tech company. Teams of engineers are never given an unlimited budget, and the bottom line is always profit. Students must apply background knowledge they have of the properties possessed by the various objects, and create a new structure based on those properties.
• Meeting building specifications: 12" high; portable; supports a ping pong ball
• Using all materials: index card, 2 bridge sticks, 12" of tape, 2 elastic bands and a ping pong ball
• Ability to identify and demonstrate knowledge of technology
• What is technology (fast-write) before, during and after.
• Text-dependent questions on the article, "Why is Technology So Important Today?"

Ping Pong Tower: Using common, and seemingly unrelated, objects, students are asked to create a tower meeting certain specifications. Students must apply background knowledge they have of the properties possessed by the various objects, and create a new structure based on those properties.
• Meeting building specifications: 12" high; portable; supports a ping pong ball
• Using all materials: index card, 2 bridge sticks, 12" of tape, 2 elastic bands and a ping pong ball

Questions regarding the tower:
• Did you use any form(s) of technology in building your tower? If so, what?
• Are there any components of your tower that could be considered technology? How did they make building the tower easier?
• Are there any real-world parallels you can think of?

Rubric for the tower:
0 - No participation in activity
1 - Tower is a portable structure that does not measure 12 inches or support a ping pong ball.
2 - Tower is portable and either supports a ping pong ball or measures 12 inches tall.
3 - Tower is portable, supports a ping pong ball, and measures 12 inches tall.
4 - Tower is portable, supports a ping pong ball, and measures more than 12 inches tall.

The tower project encourages students to think creatively by using common, yet seemingly unrelated, objects to create an end product: a tower capable of supporting a ping pong ball. It requires students to problem-solve and to identify the qualities in each of the objects that could be used in the construction process, and use them to meet the above specifications. Are there any components of their structure that could be considered a technology?
Did they use any types of technology in the construction process?

## Pre-Requisite Learning
Every 7th and 8th grader in the school takes this class. Academically, socially, and emotionally it is a heterogeneous mix of students. We are not able to require any prerequisite skills. This lesson is a basic overview of technology. Ideally, students will be able to:
• Measure
• Communicate with peers
There is no prerequisite for this class. Classes are grouped heterogeneously and are assigned randomly according to interest within the class. All students in 7th and 8th grade will rotate through for one quarter during the school year. They will meet three to four times weekly, for 50 minutes per class.

Brainstorm ideas on what technology means/is using a 5-minute technology "fast write." This encourages students to quickly jot down anything and everything they can relating to technology in a short span of time. Results are then shared among students and staff to determine the level of background knowledge on the topic. This activity will be repeated both independently and as part of a larger group throughout the quarter. It is important for students to know that "technology" extends beyond computers and electronics and can be something as simple as a lever or tool, provided it in some way solves a problem. The goal of the pre-assessment is to get students thinking about what technology is and how it impacts them directly.

## Organized Instructional Activities
• Brainstorm activity - groups of 3-4 students discuss and answer the question, "What is technology?" They will list their ideas on paper. Each team will present their results to the rest of the class on a large poster. (10 minutes)
• View "What is technology?" video (2 minutes).
• As a class, re-examine posted results from the brainstorm. Compare/contrast. (5 minutes)
• Tower activity - teams of 2 students. The goal for the students is to design and build a portable tower that is at least 12 inches high and holds a ping pong ball. The ball must be able to be removed and replaced readily. Students may use 1 index card, 2 balsa bridge sticks, 12 inches of tape, and 2 elastic bands to construct the tower. Scissors and a measuring tape may be used as tools only. A ping pong ball will be provided to each team. Once they have accomplished this goal, students can continue on and try to build the tallest tower possible that still meets the same requirements. Used-up materials will be replenished/exchanged, but students may not use more than the original list of materials. (40 minutes)

Read the article, "Why is Technology So Important Today?" and answer the questions. This will be an ongoing activity that students will work on when they have completed their daily module work over the following weeks. For those students needing a more challenging read, they may read the selection "The Nature of Technology" and answer those questions.

Modules - students will be assigned to a technology module based on the results of their interest surveys. 2 students will be assigned to each module. Topics include rocketry, flight, biotechnology, CADD, electricity, bridges, engines, forces, plumbing, robotics, computer graphics, design, and alternative energy. Each module contains hands-on activities, information, and assessments on these technologies. Students complete 2-3 rotations during the quarter in technology education.
At the end of each rotation, students will answer the following questions about the technology that they studied:
• What problem does it solve?
• How does it make life easier for people?
Detailed Programme

## Evolutionary Robotics

The EvoROBOT track focuses on evolutionary robotics: the application of evolutionary computation techniques to automatically design the controllers and/or hardware of autonomous robots, real or simulated. This is by nature a multi-faceted field that combines approaches from other fields such as neuro-evolution, evolutionary design, artificial life, and robotics. We seek high-quality contributions dealing with state-of-the-art research in the area of evolutionary robotics. Topics include but are not limited to:

• Evolution of (neural) robot controllers;
• Evolution of modular robot morphology;
• Hardware/morphology and controller co-evolution;
• Open-ended evolution in robotics;
• Robotic evolutionary Artificial Life;
• Evolutionary self-assembly and self-replication;
• Evolution, development and learning;
• Evolutionary and co-evolutionary approaches.

## Publication Details

Accepted papers will appear in the proceedings of EvoStar, published in a volume of the Springer Lecture Notes in Computer Science, which will be available at the conference. The authors of accepted papers will be asked to improve their papers on the basis of the reviewers' comments and to send a camera-ready version of their manuscripts. At least one author of each accepted work must register for the conference, attend it, and present the work.

## Submission Details

Submissions must be original and not published elsewhere. They will be peer reviewed by at least three members of the programme committee. The reviewing process is double-blind, so please omit information about the authors in the submitted paper. Submit your manuscript in Springer LNCS format (page limit: 12 pages) to http://myreview.csregistry.org/evoapps14/.

## Important Dates

Submission deadline: 1 November 2013, extended to 11 November 2013
EvoROBOT: 23-25 April 2014

## Further Information

Further information on the conference and co-located events can be found at http://www.evostar.org

## Programme Committee

• Nicolas Bredeche, Institut des Systèmes Intelligents et de Robotique
• Jeff Clune, University of Wyoming
• Stephane Doncieux, Institut des Systèmes Intelligents et de Robotique
• Marco Dorigo, Université Libre de Bruxelles
• Gusz Eiben, Vrije Universiteit
• Evert Haasdijk, Vrije Universiteit
• Heiko Hamann, University of Paderborn
• Jean-Marc Montanier, Norwegian University of Science and Technology
• Jean-Baptiste Mouret, Institut des Systèmes Intelligents et de Robotique
• Stefano Nolfi, Institute of Cognitive Sciences and Technologies
• Sanem Sariel, Istanbul Teknik Universitesi
• Thomas Schmickl, Karl Franzens University Graz
• Juergen Stradner, Karl Franzens University Graz
• Jon Timmis, University of York
• Andy Tyrrell, University of York
• Berend Weel, Vrije Universiteit
• Alan Winfield, University of the West of England

EvoROBOT Programme

Thursday 0930-1110, EvoROBOT & EvoHOT. Chair: Giovanni Squillero

Speeding up Online Evolution of Robotic Controllers with Macro-neurons
Fernando Silva, Luís Correia, Anders Christensen

In this paper, we introduce a novel approach to the online evolution of robotic controllers.
We propose accelerating and scaling online evolution to more complex tasks by giving the evolutionary process direct access to behavioural building blocks prespecified in the neural architecture as "macro-neurons". During task execution, both the structure and the parameters of the macro-neurons and of the entire neural network are under evolutionary control. We perform a series of simulation-based experiments in which an e-puck-like robot must learn to solve a deceptive and dynamic phototaxis task with three light sources. We show that: (i) evolution is able to progressively "complexify" controllers by using the behavioural building blocks as a substrate, (ii) macro-neurons, either evolved or preprogrammed, enable a significant reduction in the adaptation time and the synthesis of high-performing solutions, and (iii) evolution is able to inhibit the execution of detrimental task-unrelated behaviours and to adapt non-optimised macro-neurons.

HyperNEAT versus RL PoWER for Online Gait Learning in Modular Robots
Massimiliano D'Angelo, Berend Weel, A.E. Eiben

This paper addresses a principal problem of in vivo evolution of modular multi-cellular robots, where robot "babies" can be produced with arbitrary shapes and sizes. In such a system we need a generic learning mechanism that enables newborn morphologies to obtain a suitable gait quickly after "birth". In this study we investigate and compare the reinforcement learning method RL PoWER with HyperNEAT. We conduct simulation experiments using robot morphologies of different size and complexity. The experiments give insights into the differences in solution quality and algorithm efficiency, suggesting that reinforcement learning is the preferred option for this online learning problem.

Diagnostic Test Generation for Statistical Bug Localization using Evolutionary Computation
Marco Gaudesi, Maksim Jenihhin, Jaan Raik, Ernesto Sanchez, Giovanni Squillero, Valentin Tihomirov, Raimund Ubar

Verification is increasingly becoming a bottleneck in the process of designing electronic circuits. While there exists a wide range of verification tools that assist in detecting occurrences of design errors, or bugs, there is a lack of solutions for accurately pinpointing the root causes of these errors. Statistical bug localization has proven to be an approach that scales up to large designs and is widely utilized in debugging both hardware and software. However, the accuracy of statistical localization is highly dependent on the diagnostic quality of the test stimuli. In this paper we formulate diagnostic test set generation as a task for an evolutionary algorithm and propose dedicated fitness functions that closely correlate with the bug localization capabilities of statistical approaches. We perform experiments on the register-transfer-level design of the Plasma microprocessor, implementing µGP (MicroGP) for evolutionary test pattern generation and the zamiaCAD tool's bug localization infrastructure for fitness evaluation. As a result, the diagnostic resolution of the tests is significantly improved.
TL;DR

Well, yeah. Implement RSA. Been there, done that. And Perl is huge.

Anyway, there are a couple of sub-challenges that were not addressed in the post from last year. The first is the implementation of the $e^{-1} = \text{invmod}(e, T)$ function, under the assumption that $e$ and $T$ are coprime, i.e. $\text{GCD}(e, T) = 1$. By Bézout's theorem, there always exist $x$ and $y$ such that:

$x \cdot e + y \cdot T = \text{GCD}(e, T) = 1$

Moving on:

$x \cdot e \equiv 1 - y \cdot T \equiv 1 \pmod T$

That is, $x$ is the inverse of $e$ modulo $T$. How does this help? Well, it's easy to find $x$ using the Extended Euclid's Algorithm. Well, yeah, worth repeating the code here, together with invmod:

sub egcd {    # https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm
   my ($X, $x, $Y, $y, $A, $B, $q) = (1, 0, 0, 1, @_);
   while ($A) {
      ($A, $B, $q) = ($B % $A, $A, int($B / $A));
      ($x, $X, $y, $Y) = ($X, $x - $q * $X, $Y, $y - $q * $Y);
   }
   return ($B, $x, $y);
} ## end sub egcd

sub invmod {
   require Math::BigInt;
   my ($A, $B) = map { Math::BigInt->new($_) } @_;
   my ($gcd, $imod) = egcd($A, $B);
   die "not coprimes!\n" unless $gcd == 1;
   return $imod % $B;
}

The other sub-challenge is about finding very big prime numbers, even using some library. What can be better than Math::Prime::Util?

#!/usr/bin/env perl
use v5.24;
use warnings;
use experimental 'signatures';
no warnings 'experimental::signatures';
use File::Basename 'dirname';
use lib dirname(__FILE__), dirname(__FILE__) . '/local/lib/perl5';
use Math::Prime::Util 'random_maurer_prime';

my $n_bits = shift // 2048;
my $p = random_maurer_prime($n_bits);
my $q = random_maurer_prime($n_bits);
say for $p, $q;

There's one last little challenge that requires $e = 3$, which also means that, on average, one out of two primes given back by random_maurer_prime will have to be discarded (because if a prime is congruent to 1 modulo $e$, then $e$ will divide the totient value $(p - 1)(q - 1)$):

sub prime_2_mod_3 ($n_bits) {
   while ('necessary') {
      my $p = random_maurer_prime($n_bits);
      return $p if 2 == $p % 3;
   }
}

So well, from a performance perspective, this is not the best choice!

Stay safe and secure!
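PS (an addition, not part of the original write-up): gluing the pieces together, a quick round-trip sanity check might look like the sketch below. It assumes egcd(), invmod(), and prime_2_mod_3() from above are in scope (so Math::Prime::Util is already loaded), uses 512-bit primes only to keep it fast, and relies on Math::BigInt's bmodpow() for modular exponentiation.

```perl
#!/usr/bin/env perl
use v5.24;
use warnings;
use Math::BigInt;

# Assumes egcd(), invmod() and prime_2_mod_3() from the post are in scope.
my $e = Math::BigInt->new(3);
my $p = prime_2_mod_3(512);              # 512 bits just for speed here
my $q = prime_2_mod_3(512);
my $n = $p * $q;
my $T = ($p - 1) * ($q - 1);             # totient, coprime to 3 by construction
my $d = invmod($e, $T);                  # private exponent

my $m    = Math::BigInt->new('1122334455667788');  # toy message, m < n
my $c    = $m->copy->bmodpow($e, $n);    # "encrypt": c = m^e mod n
my $back = $c->copy->bmodpow($d, $n);    # "decrypt": m = c^d mod n
say $back == $m ? 'round-trip OK' : 'round-trip FAILED';
```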
# Passing objects atomically across threads without locks or data races for audio synchronization

I am learning about one of the hardest parts of audio development: synchronization between the audio thread and the GUI thread. Per the discussion here, https://forum.juce.com/t/timur-doumler-talks-on-c-audio-sharing-data-across-threads/26311, and here, https://stackoverflow.com/questions/15460829/lock-free-swap-of-two-unique-ptrt, I'm wondering if the following class solves the problem or comes close to solving it.

template<typename T>
struct SmartAtomicPtr
{
    SmartAtomicPtr( T* newT )
    {
        update( newT );
    }

    ~SmartAtomicPtr()
    {
        update( nullptr );
    }

    void update( T* newT, std::memory_order ord = std::memory_order_seq_cst )
    {
        keepAlive.reset( atomicTptr.exchange( newT, ord ) );
    }

    std::shared_ptr<T> getShared( std::memory_order ord = std::memory_order_seq_cst )
    {
        return std::make_shared<T>( atomicTptr.load( ord ) );
    }

    T* getRaw( std::memory_order ord = std::memory_order_seq_cst )
    {
        return atomicTptr.load( ord );
    }

private:
    std::atomic<T*> atomicTptr{ nullptr };
    std::shared_ptr<T> keepAlive;
};

I know that whatever value ends up in the shared_ptr won't be deleted until the SmartAtomicPtr goes out of scope, which is fine. The ultimate goal would be a lock-free, wait-free solution. An example of where this might get used is the following interleaving of the audio and message threads; the goal is to keep the returned object from dangling.

/* AudioProcessor owns a SmartAtomicPtr<T> ptr that the message thread has public access to. */

/* audio thread */
auto* t = ptr.getRaw();

/* message thread */
processor.ptr.update( new T() );

/* audio thread */
t->doSomething(); // t is a dangling pointer now

With getShared(), I believe that t no longer dangles:

/* audio thread */
auto t = ptr.getShared();

/* message thread */
processor.ptr.update( new T() );

/* audio thread */
t->doSomething(); // t is one of 2 shared_ptrs holding the previous atomic value of ptr

I ran into some double-deletes, but I believe I have solved them. I have also prevented the shared_ptr member from being stomped on in the event that you call getShared() and update() at the same time, and kept it leak-free. Any thoughts?

## 1 Answer

keepAlive.reset is not thread safe. So your class as a whole cannot be thread safe.

• Concrete proof that a short answer can be a great answer - good one! – Toby Speight Mar 6 '19 at 15:13
• Thanks @ratchet. I ended up using a couple of FIFOs and a lot of std::move() to ensure construction and destruction happened on the GUI thread, even though usage was happening on the audio thread for my project. – MatkatMusic Mar 17 '19 at 5:54
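For readers wanting a concrete direction, here is a sketch (mine, not from the original thread) of the standard-library alternative the answer points toward: hold the object in a shared_ptr and swap it with the C++11 atomic free functions for shared_ptr, so no separate keepAlive member is needed. In C++20 the same idea is spelled std::atomic<std::shared_ptr<T>>.

```cpp
#include <atomic>
#include <memory>

// Sketch only. std::atomic_load/atomic_store on shared_ptr are race-free,
// but most implementations back them with an internal spinlock, so this is
// usually NOT lock-free; check std::atomic_is_lock_free( &ptr_ ) before
// trusting it on a real-time audio thread.
template<typename T>
class AtomicSharedPtr
{
public:
    void update( std::shared_ptr<T> newT )
    {
        std::atomic_store( &ptr_, std::move( newT ) ); // safe vs. concurrent getShared()
    }

    std::shared_ptr<T> getShared() const
    {
        return std::atomic_load( &ptr_ ); // the returned copy keeps the object alive
    }

private:
    std::shared_ptr<T> ptr_;
};
```

When atomic_is_lock_free reports false (the common case), real-time audio code typically falls back on a lock-free FIFO of pointers with construction and destruction deferred to the message thread, which is essentially the approach the asker describes in the follow-up comment.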
# How can I access a property if I know the path?

Actual question: I know I can get the value of a property in Python with a line of code like this:

shader = bpy.context.space_data.shading.type

and set it with a line like this:

bpy.context.space_data.shading.type = 'WIREFRAME'

I know some modules access that data by getting the path to it as text and using that. So if I have the path as text, like this:

shader_path = "bpy.context.space_data.shading.type"

how do I use that string to access the value and read it or set it?

Background (likely unneeded): I have a script that renders images from a number of different cameras, and the rendering properties (such as resolution, background, and shading type) change from camera to camera. I'd like to be able to just store all the value locations in a list so I can step through it and save the values in a dictionary in Python; then, when I'm done with the rendering, step through the keys of the dictionary, take each value, and restore it to where I got it. (I'm using a dictionary and not a PropertyCollection because this is short-term, doesn't need to be stored long-term, and a dictionary is easier to write.) If there's a better way to do this, I'm open to it. I just don't want to be managing over half a dozen statements to get values and to reset them after the rendering, since I may be adding more property values that will need to be saved as I make changes over time - so I'd rather just put each property location in a list that gets looped through. I think this is also a good idea since it's consistent with how classes are registered and unregistered nowadays.

• You can use eval or exec to run string statements / expressions but there usually is a better way to do it. stackoverflow.com/questions/2220699/… Why do you need to store the left part of the statement in a dictionary? Apr 17 at 12:40
• @Gorgious: I don't need to store it in a dictionary, but I figure that would be the easiest way to step through a list of properties and get the value to replace it when I'm done. I'm looking over the answer you linked, but that seems to work for getting, not for setting. Apr 17 at 18:02
• @Gorgious I've figured out what I can and can't do with this and I'll write it up soon - I need to finish testing a few other parts of what I'm doing first. Apr 18 at 7:24
• @Gorgious I worked out something even better than I expected: first use getattr(), then loop through the path. I wrote it up in an answer. It wouldn't be hard to change the code in the answer to work for setattr() as well. Apr 22 at 2:58

The simple answer is that you can't get the value from a string of the entire path using getattr() alone. You need a path to an object, on which you can use getattr() to get a value or object from that object. But it's simple to write code that bypasses this and uses the entire path. Also, I am probably not using all the correct technical terms, so if someone points out what I need to change, I'll gladly fix it - or they can just edit it.

If you have the path bpy.context.scene.render.image_settings.color_mode and want to get color_mode, you cannot use:

getattr(bpy.context.scene.render.image_settings.color_mode)

But if you have the path to the image_settings object, you can do this:

getattr(bpy.context.scene.render.image_settings, 'color_mode')

So, normally, for getattr() you need the path to, and including, the parent object (in this case image_settings), and then you specify the attribute or setting you need.
But if you have entire path names in strings (like bpy.context.scene.render.image_settings.color_mode), for whatever reason, it's possible to simply step through each level to get to the item you need. Here's how I did it:

import bpy

cm = "bpy.context.scene.render.image_settings.color_mode"
cd = "bpy.context.scene.render.image_settings.color_depth"
path = cd

nodes = path.split('.')

# Loop through all the nodes in order
for node in nodes:
    # If it's the first node, bpy, we have to start differently
    # than with the others - so set the object to bpy
    if node == 'bpy':
        obj = bpy
        continue
    # We have a node, so now get the next one
    obj = getattr(obj, node)

# Print out the final node we've gotten to - note that now the
# object (obj) is not a node, but the value stored in that node
print("Last level: %s, type: %s" % (obj, type(obj)))

I used the paths to two different settings, color_mode and color_depth, as examples, and you can easily change which one you're testing in the 4th line of code (not counting blank lines). Just split the entire path at the periods, so you get a list with each level of the path in it, in order. Then step through each one (note it's a bit different on the first one). At each level, you use the current object to get the object at the next level down. When you reach the end, you have the object (whether it's just an object, or a string, or a setting value) that you need. The output for this is just:

Last level: 8, type: <class 'str'>

Taking this and turning it into something you can call so it returns the object you're looking for is trivial: just pass the entire path to the function and it returns the object. This does not include any error trapping, and it assumes you're starting at bpy, so if you want to be able to start at context (since that's often easily available), you'll need to work out how to handle a string starting with that level.

• You absolutely can. As pointed out by Gorgious in the very first comment, you can use eval("your_attrib_path_as_string"); test using the console: C.object.color[0] returns 1.0 and eval("C.object.color[0]") returns 1.0 as well. You can also use exec to run statements like exec("C.object.color[0]=0.1") - for example, to set the color depth based on a string: exec("C.scene.render.image_settings.color_depth='16'"). However, there should be a serious reason to use it. Apr 22 at 7:03
• @brockmann: Show me a screenshot of this actually working, please. Apr 22 at 7:05
• @brockmann: Well, we have had a problem with you telling me, over and over, "Try this!" and me trying it and saying, "It does not do what I need" because it simply wouldn't give me what I had specified I needed it to do. Not to be a pain, but that was quite frustrating, so, yes, I'd like to see something clear, showing it works. Apr 22 at 7:08
• You can even have a gif: i.stack.imgur.com/o5gSg.gif Please note that I'm spending my spare time to help you out and even creating a gif, which takes 3 minutes, for nothing. Apr 22 at 7:15
• @brockmann I appreciate your help. I honestly do, but I just want to make sure we don't end up in a never-ending loop again of "This will work" / "No, it doesn't, that's not what I want" - that would waste a lot more time for both of us. Apr 22 at 8:33
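A follow-up sketch of my own (not from the thread): the getattr() walk in the answer can be wrapped into a reusable get/set pair, which avoids the eval/exec route discussed in the comments. The caveats are that it handles plain attribute paths only (not indexed ones like C.object.color[0]) and that it must run inside Blender's Python:

```python
import bpy

def get_by_path(path):
    """Walk a full 'bpy....' attribute path and return whatever it ends at."""
    obj = bpy
    for name in path.split('.')[1:]:  # skip the leading 'bpy'
        obj = getattr(obj, name)
    return obj

def set_by_path(path, value):
    """Split off the last attribute name and setattr() on its parent object."""
    parent_path, _, attr = path.rpartition('.')
    setattr(get_by_path(parent_path), attr, value)

# Hypothetical save/restore loop for the render settings in the question:
paths = [
    "bpy.context.scene.render.image_settings.color_mode",
    "bpy.context.scene.render.image_settings.color_depth",
]
saved = {p: get_by_path(p) for p in paths}
# ... change settings and render here ...
for p, v in saved.items():
    set_by_path(p, v)
```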
anonymous one year ago

Does anyone know why, for the function f(x) = 1/(x-1)^4, the limit as x approaches 1 is infinite? I thought that in this case we should use the one-sided limits, because when x = 1 the denominator will be 0.

If you get the same value for both the left- and right-sided limits, then that value is the limit. In this case, approaching from the left side of 1 (below 1):

$\lim_{x \rightarrow 1^-} \frac{ 1 }{ (x-1)^4 }=\frac{ 1 }{ 0^+ }= \infty$

Notice that although 0.999 - 1.000 = -0.001 is negative, after raising it to an even power (4 in this case) we get a positive number that approaches 0. Similarly, the other side also gives

$\lim_{x \rightarrow 1^+} \frac{ 1 }{ (x-1)^4 }=\frac{ 1 }{ 0^+ }= \infty$

Example: 1.001 - 1.000 = 0.001, which, raised to the 4th power, is a small positive number that approaches 0; consequently, the fraction approaches infinity.
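To make the informal "$\frac{1}{0^+} = \infty$" step precise (an addition, not part of the original answer): for any $M > 0$, take $\delta = M^{-1/4}$. Then $0 < \left | x-1 \right | < \delta$ implies $(x-1)^4 < \frac{1}{M}$, which in turn implies $\frac{1}{(x-1)^4} > M$. That is exactly the definition of $\lim_{x \rightarrow 1} \frac{1}{(x-1)^4} = \infty$, and it works from both sides because the even power makes $(x-1)^4$ positive either way.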
## Elementary Algebra

It takes 2$\frac{1}{4}$ seconds to download 1 song, which is equal to $\frac{9}{4}$ seconds. To download 16 songs, we multiply the time needed for 1 song by 16. So,

$\frac{9}{4} \times 16 = \frac{9 \times 16}{4}$

Factor the numerator and the denominator:

$\frac{9 \times 4 \times 4}{4}$

Cancel the common factor in the numerator and the denominator to obtain:

9 $\times$ 4 = 36 seconds
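As a quick cross-check (not in the original solution), distributing instead of factoring gives the same result: $16 \times 2\frac{1}{4} = 16 \times 2 + 16 \times \frac{1}{4} = 32 + 4 = 36$ seconds.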