url: string (14 – 1.76k characters)
text: string (100 – 1.02M characters)
metadata: string (1.06k – 1.1k characters)
https://www.datasciencecentral.com/using-r-and-google-s-keyword-planner-to-evaluate-size-and/
# Using R and Google's keyword planner to evaluate size and competitiveness of international markets

About two months ago we launched a new SaaS product, the Keyword Hero. It's the only solution to "decrypt" the organic keywords in Google Analytics that users searched for in order to get to one's website. We do so by buying lots of data off sources such as plugins and matching the data with our customers' sessions in Google Analytics (side note: the entire algorithm was coded in R before we refactored it in Python to allow scalability and operability with AWS).

## Using Keyword Planner to evaluate markets

In order to assess the global potential of the tool and develop a market entry and distribution strategy, we used Google's Keyword Planner. The tool is widely used to get an initial understanding of market size and customer acquisition cost, as one sees the search volume (a proxy for market size) and the average cost per click (a proxy for customer acquisition cost). [This post is not about developing the right roll-out strategy but about gathering valuable data for it in an efficient manner.]

## Disappointing results for international research

Google offers some insights into keywords across countries: e.g. if you'd like to find out how the search volume for "python" varies among countries, you'll get this: But what you get is a disappointment: almost 60% of the search volume is aggregated as "other countries", you only get a very rough idea about the size, and no idea about the costs per click. Way too little for anyone to work with and a far cry from what you'd get if you looked at the countries individually by using these targeting options:

## Getting the real deal!

Google offers an API to retrieve the data. However, there are two problems with the API:
1. To use it, you need to apply for a Google AdWords developer token.
2. RAdwords doesn't support the API, so you'd have to code the authentication and the queries without using the RAdwords package.
So we opted for the easier solution and decided to crawl the data with RSelenium:

## Crawling Google's Keyword Planner with RSelenium in 8 steps

Google's HTML is well structured, which makes querying fairly easy:

### 1. Start RSelenium

library(RSelenium)
driver <- rsDriver(browser = c("firefox"))
remDr <- driver[["client"]]
remDr$navigate("https://adwords.google.com/ko/KeywordPlanner/Home")

### 2. Sign in to AdWords manually

As the sign-in only needs to be done once, we didn't write any code for this but did it manually.

### 3. Choose a set of keywords

Come up with a set of all keywords that help you assess your market. In some verticals, such as SEO, this is rather simple, as the lingua franca of SEOs and web analysts is English. For other products or services, you might have to use a set of keywords in multiple languages to generate meaningful and comparable data. We enter the keywords we'd like to analyze at "Get search volume data and trends". You can enter up to 3000 keywords (if you plan on using >100 keywords, make RSelenium save the data as .csv first); we used a set of about 30 keywords.

### 4. Import the regions you'd like to query

Before you can start crawling, you need a list of all regions (cities, countries, regions) that you'd like to generate data about. We pulled a list of all countries off Wikipedia and imported it to R:

#Import list of countries
library(readr)
countries <- read_csv("C:/Users/User/Downloads/world-countries.csv")

### 5. Query the first keywords without region manually
To get to the right URL, you need to query AdWords manually once, using your keyword set and without choosing any region at all.

### 6. Start the loop

Now start the loop that will insert the regions, query AdWords and save the results:

#initialise the result lists
searchvolume <- list()
sug_cpc <- list()

for (i in 1:nrow(countries)){
#Navigate to Data
#click on locations
css <- ".spMb-z > div:nth-child(1) > div:nth-child(3) > div:nth-child(2)"
x <- try(remDr$findElement(using = 'css selector', css))
x$clickElement()
#delete current locations
current_loc <- "#gwt-debug-positive-targets-table > table:nth-child(1) > tbody:nth-child(2) > tr:nth-child(1) > td:nth-child(3) > a:nth-child(1)"
x <- try(remDr$findElement(using = 'css selector', current_loc))
x$clickElement()
#click to insert text
css <- "#gwt-debug-geo-search-box"
x <- try(remDr$findElement(using = 'css selector', css))
#insert some stuff to be able to add data
x$sendKeysToElement(list("somestuff"))
x$clearElement()
y <- as.character(countries[i,])
x$sendKeysToElement(list(y))
Sys.sleep(5)
#take the first hit
first_hit <- ".aw-geopickerv2-bin-target-name"
x <- try(remDr$findElement(using = 'css selector', first_hit))
x$clickElement()
#click save
save <- ".sps-m > div:nth-child(2) > div:nth-child(1) > div:nth-child(1) > div:nth-child(2)"
x <- try(remDr$findElement(using = 'css selector', save))
x$clickElement()
#Save the data
Sys.sleep(5)
#get the searchvolume
avgsv <- "#gwt-debug-column-SEARCH_VOLUME_PRIMARY-row-0-0"
x <- try(remDr$findElement(using = 'css selector', avgsv))
searchvolume[[i]] <- x$getElementText()
#get the bids
cpc <- "#gwt-debug-column-SUGGESTED_BID-row-0-1"
x <- try(remDr$findElement(using = 'css selector', cpc))
sug_cpc[[i]] <- x$getElementText()
}

### 7. Clean the data set

#clear the dataset
library(sqldf)
library(stringi)
c <- as.data.frame(sug_cpc)
c <- t(c)
euro <- "\u20AC"
c <- gsub(euro, "", c)
c <- as.data.frame(as.numeric(c))
s <- as.data.frame(searchvolume)
s <- t(s)
s <- gsub(",", "", s)
s <- as.numeric(s)
#bind the data
all_countrys <- cbind(countries, s, c)
#clear the data from small countries and wrong data (some small countries don't really make sense, so we eliminate them)
all_countrys <- sqldf("SELECT * from all_countrys where s < 200000 and c < 30")
#countries as UTF-8
all_countrys$country <- stri_encode(all_countrys$country, "", "UTF-8")

### 8. Plot the data

#Plot with plotly
library(plotly)
a <- list(title = "search volume")
b <- list(title = "costs")
our_plot <- plot_ly(all_countrys, x = ~s, y = ~c, text = ~country) %>% layout(xaxis = a, yaxis = b, showlegend = FALSE)
#show plot and view data
our_plot
View(all_countrys)

## Evaluating the results of our set of SEO keywords

On the X-axis you see the search volume per country, on the Y-axis the costs per click. The US market strongly dominates the volume for SEO-related keywords and is far off on the right. Note that the US is also pretty far up the top, which means high marketing expenses per user. As a start-up, we're looking for a large yet less competitive market for our paid ads (and focus on less costly marketing strategies in the US). So we ignore the US and adjust our plot a bit: We segment the markets into four: "small and cheap", "small and expensive", "big and expensive" and "big and cheap". As we're constrained by limited funds, we focus on the cheap markets (bottom) and ignore the expensive ones (top). As the initial effort to set up a marketing campaign is the same for a "small and cheap" (left) and a "big and cheap" (right) region, we naturally opt for the big ones (bottom right).
According to the data, the cheapest yet sizable markets to penetrate are countries such as Turkey, Brazil, Spain, Italy, France, India, and Poland. There is significant search volume (= interest) in SEO-related topics and low cost per click (= little competition), which – in theory – should make market penetration fairly cheap and easy. Obviously, this is quite a simplistic model but it provides a nice foundation for further thoughts. In a future blog post, we will check whether the correlation between this data and reality (= our efforts) is significant at all.

## Here is the entire R code in one piece again:

#Crawl AdWords Keyword Planner by country
library(RSelenium)
library(readr)
library(stringi)
library(sqldf)
library(plotly)
driver <- rsDriver(browser = c("firefox"))
remDr <- driver[["client"]]
#navigate to AdWords Keyword Planner
remDr$navigate("https://adwords.google.com/ko/KeywordPlanner/Home")
#Import csv with countries
countries <- read_csv("C:/Users/User/Downloads/world-countries.csv")
#initialise the result lists
searchvolume <- list()
sug_cpc <- list()
for (i in 1:nrow(countries)){
#Navigate to Data
#click on locations
css <- ".spMb-z > div:nth-child(1) > div:nth-child(3) > div:nth-child(2)"
x <- try(remDr$findElement(using = 'css selector', css))
x$clickElement()
#delete current locations
current_loc <- "#gwt-debug-positive-targets-table > table:nth-child(1) > tbody:nth-child(2) > tr:nth-child(1) > td:nth-child(3) > a:nth-child(1)"
x <- try(remDr$findElement(using = 'css selector', current_loc))
x$clickElement()
#click to insert text
css <- "#gwt-debug-geo-search-box"
x <- try(remDr$findElement(using = 'css selector', css))
#insert some stuff to be able to add data
x$sendKeysToElement(list("somestuff"))
x$clearElement()
y <- as.character(countries[i,])
x$sendKeysToElement(list(y))
Sys.sleep(5)
#take the first hit
first_hit <- ".aw-geopickerv2-bin-target-name"
x <- try(remDr$findElement(using = 'css selector', first_hit))
x$clickElement()
#click save
save <- ".sps-m > div:nth-child(2) > div:nth-child(1) > div:nth-child(1) > div:nth-child(2)"
x <- try(remDr$findElement(using = 'css selector', save))
x$clickElement()
#Save the data
Sys.sleep(5)
#get the searchvolume
avgsv <- "#gwt-debug-column-SEARCH_VOLUME_PRIMARY-row-0-0"
x <- try(remDr$findElement(using = 'css selector', avgsv))
searchvolume[[i]] <- x$getElementText()
#get the bids
cpc <- "#gwt-debug-column-SUGGESTED_BID-row-0-1"
x <- try(remDr$findElement(using = 'css selector', cpc))
sug_cpc[[i]] <- x$getElementText()
}
#clear the dataset
c <- as.data.frame(sug_cpc)
c <- t(c)
euro <- "\u20AC"
c <- gsub(euro, "", c)
c <- as.data.frame(as.numeric(c))
s <- as.data.frame(searchvolume)
s <- t(s)
s <- gsub(",", "", s)
s <- as.numeric(s)
#bind the data
all_countrys <- cbind(countries, s, c)
#clear the data from small countries and wrong data (some small countries don't really make sense, so we eliminate them)
all_countrys <- sqldf("SELECT * from all_countrys where s < 200000 and c < 30")
#countries as UTF-8
all_countrys$country <- stri_encode(all_countrys$country, "", "UTF-8")
#Plot with plotly
a <- list(title = "search volume")
b <- list(title = "costs")
q <- plot_ly(all_countrys, x = ~s, y = ~c, text = ~country) %>% layout(xaxis = a, yaxis = b, showlegend = FALSE)
#show plot and view data
q
View(all_countrys)
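Step 3 recommends letting RSelenium write intermediate results to .csv when you query more than roughly 100 keywords, but the post never shows how. A minimal sketch of what that could look like at the end of each loop iteration (the data-frame columns and file name below are illustrative, not from the original post):

#inside the for-loop, after sug_cpc[[i]] has been filled
partial <- data.frame(country = countries[[1]][1:i],
                      search_volume = unlist(searchvolume),
                      suggested_cpc = unlist(sug_cpc))
write.csv(partial, file = "keywordplanner_partial.csv", row.names = FALSE)

This way a crash halfway through the country list only costs you the current iteration, not the whole crawl.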
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3999938666820526, "perplexity": 6746.980236470063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104542759.82/warc/CC-MAIN-20220705083545-20220705113545-00009.warc.gz"}
https://www.physicsforums.com/threads/what-istime.5270/
# What is time?

1. Aug 31, 2003 ### benzun_1999 what is time?? my question is simple: what is time, scientifically???

2. Aug 31, 2003 ### kishtik No one knows.

3. Aug 31, 2003 Staff Emeritus In quantum mechanics, a parameter, not an observable. In relativity (both kinds), a dimension. Beyond that, what he said.

4. Aug 31, 2003 ### quartodeciman NIST: time and frequency. To make more of 'time' requires philosophy. SEP: time. SEP: the experience and perception of time. SEP: temporal logic. SEP: being and becoming in modern physics. Questions: Is time essentially fundamental? Does every attempt to derive time lead back to concepts that already presuppose the existence of time/temporal references? Are time qualifiers in language irreducible elements? (example: "What happened before time began?" --> "happen", "***ed" past tense, "before", "begin", "***an" past tense) Last edited: Sep 1, 2003

5. Aug 31, 2003 ### pmb Re: what is time?? Time is basically that which distinguishes different states of the universe. For example: consider an actual but particular arrangement of particles in the universe. Since there is more than one such arrangement, what is different? Furthermore, there is a particular way in which these arrangements are related to each other. This relation is the "order" in the universe, or what is called "entropy." By 'order' I mean the orderliness of things. For example: if your room is quite neat then its order is 'high.' Everything is in 'order.' If your room is a mess then it is of 'low' order, or 'disordered.' So this phenomenon of different arrangements and their relationship to order is called "time." As 'time' increases things change, i.e. the particles in the universe move, and things move toward overall disorder. So label these different arrangements with numbers and call these numbers the time. But note that the number itself is not time - time is that which the number refers to. Pete

6. Sep 3, 2003 ### shankar time is an independent factor, an abstract one. finding a relation between two unknown functions is very difficult and there may not be any. but with a known one it is easy to define a function. i hope that you got my point .. please feel free to correct me if i am wrong...

7. Sep 3, 2003 ### Arc_Central Time is a measurement of motion applied to your measure of motion in yer brain. The relationship is your understanding of it. You prolly autta ask - What is motion.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8488661050796509, "perplexity": 3894.2765729410835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213540.33/warc/CC-MAIN-20180818095337-20180818115337-00491.warc.gz"}
https://slideplayer.com/slide/232763/
# Core 3 Differentiation

Learning Objectives:
Review understanding of differentiation from Core 1 and 2
Understand how to differentiate e^x
Understand how to differentiate ln ax

Differentiation Review
Differentiation means finding the gradient function. The gradient function is used to calculate the gradient of a curve for any given value of x, so at any point. If y = x^n, dy/dx = nx^(n-1).

The Key Bit
The general rule (very important) is: if y = x^n then dy/dx = nx^(n-1).
E.g. if y = x^2, dy/dx = 2x
E.g. if y = x^3, dy/dx = 3x^2
E.g. if y = 5x^4, dy/dx = 5 x 4x^3 = 20x^3

A differentiating problem
The gradient of y = ax^3 + 4x^2 - 12x is 2 when x = 1. What is a?
dy/dx = 3ax^2 + 8x - 12. When x = 1: dy/dx = 3a + 8 - 12 = 2, so 3a - 4 = 2, 3a = 6, a = 2.

Finding Stationary Points
At a maximum: dy/dx = 0, with dy/dx > 0 just before, dy/dx < 0 just after, and d2y/dx2 < 0.
At a minimum: dy/dx = 0, with dy/dx < 0 just before, dy/dx > 0 just after, and d2y/dx2 > 0.

Differentiation of a^x
Compare the graph of y = a^x with the graph of its gradient function. Adjust the values of a until the graphs coincide.
Summary: the curve y = a^x and its gradient function coincide when a = 2.718. The number 2.718... is called e, and is a very important number in calculus. See pages 88 and 89, A1 and A2.

Differentiation of e^x
If f(x) = e^x, f'(x) = e^x. Also, if f(x) = ae^x, f'(x) = ae^x.
The gradient function f'(x) and the original function f(x) are identical, therefore the gradient function of e^x is e^x, i.e. the derivative of e^x is e^x.
Turn to page 90 and work through Exercise A.

Derivative of ln x
ln x is the inverse of e^x. The graph of y = ln x is a reflection of y = e^x in the line y = x. This helps us to differentiate ln x: if y = ln x then x = e^y, so dx/dy = e^y = x, and since dy/dx times dx/dy = 1, dy/dx = 1/x. So the derivative of ln x is 1/x.

Differentiation of ln 3x and ln 17x (interactive pages)

Summary - ln ax (1)
f(x) = ln x: f'(1) = 1 (the gradient at x=1 is 1), f'(4) = 0.25 (the gradient at x=4 is 0.25)
f(x) = ln 3x: f'(1) = 1, f'(4) = 0.25
f(x) = ln 17x: f'(1) = 1, f'(4) = 0.25

Summary - ln ax (2)
For f(x) = ln ax, whatever value a takes, the gradient function is the same: f'(x) = 1/x.
f'(1) = 1, f'(4) = 0.25, f'(100) = 0.01, f'(0.2) = 5. The gradient is always the reciprocal of x.

Examples
If f(x) = ln 7x, f'(x) = 1/x.
If f(x) = ln 11x^3: we don't yet know about ln ax^3 directly, so write f(x) = ln 11 + ln x^3 = ln 11 + 3 ln x. Constants disappear in differentiation, so f'(x) = 3(1/x) = 3/x.

Summary
If y = x^n, dy/dx = nx^(n-1)
If f(x) = ae^x, f'(x) = ae^x
If g(x) = ln ax, g'(x) = 1/x
If h(x) = ln ax^n = ln a + n ln x, h'(x) = n/x

Differentiation of e^x and ln x - Classwork / Homework: Turn to page 92, Exercise B, Q1, 3, 5.
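The "whatever value a takes, the gradient is 1/x" summary follows in one line from the log laws and the rules already stated on these slides; a short derivation, written out in LaTeX form:

\[
f(x) = \ln(ax) = \ln a + \ln x
\quad\Rightarrow\quad
f'(x) = 0 + \frac{1}{x} = \frac{1}{x},
\]

or, equivalently by the chain rule, \(\frac{d}{dx}\ln(ax) = \frac{1}{ax}\cdot a = \frac{1}{x}\). This is why f'(1) = 1 and f'(4) = 0.25 for ln x, ln 3x and ln 17x alike.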
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9863202571868896, "perplexity": 2367.819497689086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514314.87/warc/CC-MAIN-20181021181851-20181021203351-00500.warc.gz"}
https://gateoverflow.in/317569/ullman-toc-edition-3-exercise-9-5-question-3-page-no-418-419
It is undecidable whether the complement of a CFL is also a CFL. Exercise $9.5.2$ can be used to show it is undecidable whether the complement of a CFL is regular, but that is not the same thing. To prove our initial claim, we need to define a different language that represents the nonsolutions to an instance $(A, B)$ of PCP. Let $L_{AB}$ be the set of strings of the form $w\#x\#y\#z$ such that: 1. $w$ and $x$ are strings over the alphabet $\Sigma$ of the PCP instance. 2. $y$ and $z$ are strings over the index alphabet $I$ for this instance. 3. $\#$ is a symbol in neither $\Sigma$ nor $I$. 4. At least one of the following holds: (a) $w \neq x^{R}$; (b) $y \neq z^{R}$; (c) $x^{R}$ is not what the index string $y$ generates according to list $B$; (d) $w$ is not what the index string $z^{R}$ generates according to list $A$. Notice that $L_{AB}$ consists of all strings in $\Sigma^{\ast}\#\Sigma^{\ast}\#I^{\ast}\#I^{\ast}$ unless the instance $(A, B)$ has a solution, but $L_{AB}$ is a CFL regardless. Prove that the complement of $L_{AB}$ is a CFL if and only if there is no solution. Hint: Use the inverse homomorphism trick from Exercise $9.5.2$ and use Ogden's lemma to force equality in the lengths of certain substrings, as in the hint to Exercise $7.2.5(b)$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6676088571548462, "perplexity": 282.3040118926548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250615407.46/warc/CC-MAIN-20200124040939-20200124065939-00349.warc.gz"}
http://tex.stackexchange.com/questions/12483/how-to-center-the-toc
How to center the TOC? I would like to design the table of contents aligned to a vertical line near the center of the page. The chapter and section titles need to be flushright, the numbers flushleft, and between titles and numbers, I'd like to place a separating element. It can be done easily by a tabular:

\begin{tabular}{>{\raggedleft}m{0.4\textwidth}>{\centering}m{0.3cm}>{\raggedright}m{0.4\textwidth}}
First section title & \textbullet & 27 \tabularnewline
\end{tabular}

You can see the result of the code above on this image: But how can I build a tabular with the LaTeX \tableofcontents command? What commands do I need to redefine? After some googling I haven't found any ready solution.
-
For the chapter, insert any symbol you like instead of the \Large\textbullet

\documentclass{book}
\usepackage{xcolor,ragged2e}
\usepackage{array}
\makeatletter
\def\MBox#1#2#3#4{%
\parbox[t]{0.4\linewidth}{\RaggedLeft#1}%
\makebox[0.1\linewidth]{\color{red}#2}%
\makebox[0.4\linewidth][l]{#3}\\[#4]}
\renewcommand*\l@chapter [2]{\par\MBox{\bfseries\Large#1}{\Large\textbullet}{#2}{5pt}}
\renewcommand*\l@section [2]{ \MBox{\bfseries\large#1}{\textbullet}{#2}{2pt}}
\renewcommand*\l@subsection[2]{ \MBox{\bfseries #1}{\textbullet}{#2}{1pt}}
\renewcommand\numberline[1]{}
\makeatother
\begin{document}
\tableofcontents
\chapter{First Chapter}
\section{foo}
\newpage
\section{bar}
\newpage
\subsection{foobar}
foobar
\section{An extraordinary long section title which should have a linebreak} bar
\end{document}

- wonderful, thanks! i have never thought that it's so simple. – deeenes Mar 2 '11 at 22:04

Have a look into the source of your document class, where all mentioned macros are defined. The source is on your local drive; use your search feature or kpsewhich to locate it, such as kpsewhich book.cls at the command prompt. On Linux, I often directly use it like gedit `kpsewhich book.cls`. Actually, I made a shell function for that, to ease the frequent access of source code.

- thank you! maybe i'll try also this way, because of the vertical centering of the numbers in the case of section titles wrapped to more than one line. – deeenes Mar 2 '11 at 22:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9037808775901794, "perplexity": 1800.2767927250532}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398445291.19/warc/CC-MAIN-20151124205405-00159-ip-10-71-132-137.ec2.internal.warc.gz"}
http://www.ecsponline.com/goods.php?id=202312
Advanced Mathematics for Medicine (医药高等数学)
• ISBN: 9787030576378  Author: 马建忠
• Series: Chinese Academy of Sciences Textbook Development Expert Committee planned textbook; bilingual textbook adapted from an English-language medical original
• Binding: paperback  Format: 16 (23k)  Pages: 107  Word count: 265,000  Language: en
• Publisher: Science Press (科学出版社)  Publication date: 2019-12-01
• List price: ¥79.80  Sale price: ¥63.84  Medium: print book

### CONTENTS

Chapter 1 Functions, Limits, and Continuity 1 1.1 Functions 1 1.1.1 Linear and Quadratic Functions 1 1.1.2 Concept of Function 3 1.1.3 Polynomial and Rational Functions 5 1.1.4 Exponential and Logarithmic Functions 6 1.1.5 Trigonometric Functions and Functional Properties 8 1.2 Limits of Function 10 1.2.1 The Concept of Limit 10 1.2.2 Computation of Limits 15 1.3 Continuity of Function 18 1.3.1 The Continuity of Function 18 1.3.2* Continuous Compounding 21 Chapter Summary 22 Review Exercises 23
Chapter 2 Differentiation of One Variable 25 2.1 The Concept of Derivative 25 2.1.1 Instantaneous Velocity and Derivative 25 2.1.2 Slope of Tangent Line on Geometric Interpretation of Derivative 26 2.1.3 Definition of Derivative and Rates of Change 27 2.2 Computations of Derivatives 28 2.2.1 Techniques of the Differentiation 28 2.2.2 Calculation Rules of Derivative 30 2.3 Compound Function and Its Chain Rule 31 2.3.1 Compound Function and Its Chain Rule 31 2.3.2 Implicit Differentiation 33 2.4 Second-Order Derivative and Differential 34 2.4.1 Second-Order Derivative 34 2.4.2 The Concept and Computation of Differential 35 2.5 Application of the Derivative 36 2.5.1 Increasing and Decreasing Functions in the Derivative 37 2.5.2 Concavity and Points of Inflection of Functions 38 2.5.3 Relative Maximum and Relative Minimum of Functions 41 Chapter Summary 44 Review Exercises 45
Chapter 3 Integration of One Variable 46 3.1 Indefinite Integration 46 3.1.1 The Concept of Indefinite Integration 46 3.1.2 The Computing Rules and Formulas of Indefinite Integration 48 3.1.3 Integration by Substitution 50 3.1.4 Integration by Parts 52 3.2 Definite Integration 55 3.2.1 Definite Integral and the Fundamental Theorem of Calculus 55 3.2.2 The Computation of Definite Integral 59 3.2.3 Applications of Integration 62 3.2.4 Improper Integrals 67 Chapter Summary 70 Review Exercises 71
Chapter 4 Calculus of Several Variables 73 4.1 Functions of Several Variables 73 4.1.1 Functions of Two or More Variables 73 4.1.2 Graphs of Functions of Two Variables 74 4.2 Partial Derivatives 78 4.2.1 Compute and Interpret Partial Derivatives 78 4.2.2 Geometric Interpretation of Partial Derivatives 79 4.2.3 Second-order Partial Derivatives 80 4.2.4 The Chain Rule for Partial Derivatives 80 4.3 Optimizing Functions of Two Variables 82 4.3.1 The Extreme Value Property for a Function of Two Variables 82 4.3.2 Apply the Extreme Value Property to the Functions of Two Variables 84 4.3.3* The Method of Least-Squares 86 4.3.4* The Least-Squares Line 88 4.4 Double Integrals 90 4.4.1 The Double Integral over a Rectangular Region 90 4.4.2 Double Integrals over Nonrectangular Regions 91 4.4.3 The Applications of Double Integrals 93 Chapter Summary 97 Review Exercises 98
APPENDIXES 100 APPENDIX A 100 APPENDIX B 100 APPENDIX C English-Chinese Vocabulary 101
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9714599251747131, "perplexity": 6037.00018138713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494349.3/warc/CC-MAIN-20200329140021-20200329170021-00333.warc.gz"}
https://ameriflux.lbl.gov/community/publication/observed-effects-of-horizontal-radiative-surface-temperature-variations-on-the-atmosphere-over-a-midwest-watershed-during-cases-97/
# Observed Effects Of Horizontal Radiative Surface Temperature Variations On The Atmosphere Over A Midwest Watershed During CASES 97 • Sites: US-Wlr • Grossman, R. L., Yates, D., LeMone, M. A., Wesely, M. L., Song, J. (2005/03) Observed Effects Of Horizontal Radiative Surface Temperature Variations On The Atmosphere Over A Midwest Watershed During CASES 97, Journal Of Geophysical Research: Atmospheres, 110(D6), n/a-n/a. https://doi.org/10.1029/2004jd004542 • Funding Agency: — • The association between ∼10-km scale horizontal variation of radiometric surface temperature (Ts) and aircraft-derived fluxes of sensible heat (H) and moisture (LE) is the focus of this work. We use aircraft, surface, and satellite data from a Cooperative Atmospheric-Surface Exchange Studies (CASES) field program, which took place in the southern part of the 60 × 100 km Walnut River (Kansas) watershed from 22 April to 22 May 1997, when winter wheat matured and prairie grass greened up. Aircraft Ts observed along repeated flight tracks above the surface layer showed a persistent pattern: maxima over ridges characterized by shallow soil and rocky outcroppings and minima over riparian zones. H and Ts reached maxima in the same longitude zone on two flight tracks 40 km apart. Satellite Ts data from March to June reveal similar persistent patterns with minima more persistent than maxima. Two mechanisms are suggested to explain the association of H and Ts maxima: (1) for winds between 6 and 8 ms−1, modulation of the surface energy budget by vegetation effects; or (2) for winds equal to or below 4 ms−1, a thermally driven circulation centered on Ts maxima. Both mechanisms were possibly enhanced by increased static instability over the Ts maxima. Owing to the small sample available, these results are suggestive rather than conclusive. Effects of rainfall and vegetation on watershed-scale Ts gradients are also explored.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8188114762306213, "perplexity": 9360.8650614837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034170.1/warc/CC-MAIN-20220625034751-20220625064751-00027.warc.gz"}
https://www.mail-archive.com/search?l=emacs-orgmode%40gnu.org&q=date:20141028&o=newest
### Re: [O] An Org-mode LaTeX class? I think that I don't know if you can suit everybody's need but that is worth a try. For myself, I already wrote a dedicated LaTeX class, because it was too cumbersome to configure org-mode for the different kind of documents I need to produce (not impossible, just too lengthy to duplicate ### Re: [O] org-capture/remember in Emacs 24.4.1? (Marco Wahl) Benjamin Slade [email protected] writes: Your code works for me (Emacs 25 with a current Org from the git repo). Just guessing: Do you have installed a further hook for deleting frames in certain situations which might be the wrongdoer? I don't seem to. I can't find anything else which ### [O] Include results in a table Hello everybody, I have somethink like that #+call: gen(A) #+results: A : 10 #+call: gen(B) #+results: B : 20 Is there a simple mean to aggregate the results in a table, i.e to get | A | 10 | | B | 20 | I think some lisp can do that but as a beginner... but as I want to learn you can suggest ### Re: [O] An Org-mode LaTeX class? Marcin Borkowski [email protected] writes: Imagine someone wrote a dedicated Org-mode LaTeX class, and the LaTeX exporter got an option to export to this class. The class modifies LaTeX so that it supports all Org's elements and objects, and things like tags, timestamps, checkboxes etc. ### Re: [O] Include results in a table abonnements [email protected] writes: Hello, I have somethink like that #+call: gen(A) #+results: A : 10 #+call: gen(B) #+results: B : 20 Is there a simple mean to aggregate the results in a table, i.e to get | A | 10 | | B | 20 | I think some lisp can do that but as a ### Re: [O] An Org-mode LaTeX class? On Tue, Oct 28, 2014 at 4:02 AM, Fabrice Popineau [email protected] wrote: I think that I don't know if you can suit everybody's need but that is worth a try. For myself, I already wrote a dedicated LaTeX class, because it was too cumbersome to configure org-mode for the different ### Re: [O] An Org-mode LaTeX class? Hi Marcin, Thanks for sharing your ideas. Marcin Borkowski [email protected] writes: Imagine someone wrote a dedicated Org-mode LaTeX class, and the LaTeX exporter got an option to export to this class. The class modifies LaTeX so that it supports all Org's elements and objects, and ### Re: [O] Include results in a table Hi, thank you for your answer. Your solution is OK but only for the example I gave (2 or 3 results). In practice I have about 10 results and the number of them may be variable... Furthermore :vars does not work on my version (I must use :var x=A :var y=B)... Ta. Thierry Hello, / I have ### Re: [O] Include results in a table abonnements [email protected] writes: Hi, thank you for your answer. Your solution is OK but only for the example I gave (2 or 3 results). In practice I have about 10 results and the number of them may be variable... Furthermore :vars does not work on my version (I must use :var ### Re: [O] Include results in a table Hi Thierry On Tue, Oct 28, 2014 at 10:02 AM, abonnements [email protected] wrote: #+call: gen(A) #+results: A : 10 #+call: gen(B) #+results: B : 20 Is there a simple mean to aggregate the results in a table, i.e to get | A | 10 | | B | 20 | One solution with TBLFM is: ### [O] R code blocks and 'could not find function .ess.eval' This problem was reported last month and then again earlier this month, for example here: https://lists.gnu.org/archive/html/emacs-orgmode/2014-10/msg00178.html I'm running Emacs 25.0.50.2, Org 8.2.10 and ESS 14.1x. 
I'm getting a lot of Error: could not find function .ess.eval errors ### [O] Problem updating using git I just tried updating my org installation (present version, installed via git, Org-mode version 8.3beta (release_8.3beta-485-gf70439). It gives me following error: passed 524/525 test-org/up-element passed 525/525 test-org/update-radio-target-regexp Ran 525 tests, 524 ### Re: [O] Problem updating using git Doing a fresh install worked fine. Vikas On 28-Oct-2014, at 4:48 pm, Vikas Rawal [email protected] wrote: I just tried updating my org installation (present version, installed via git, Org-mode version 8.3beta (release_8.3beta-485-gf70439). It gives me following error: ### [O] math in parentheses Hi, I encounter a problem using $..$ expressions when they are enclosed in parentheses. This sample --8---cut here---start-8--- #+TITLE: Test Math * Some math This $T$ works, but ($T$) this does not. --8---cut ### Re: [O] [RFC] Change property drawer syntax Nicolas Goaziou [email protected] writes: As discussed previously, I would like to modify property drawers syntax. The change is simple: they must be located right after a headline and its planning line, if any. [...] I pushed a new branch, top-properties in the repository for code ### Re: [O] Org-mode Habit with Varying Description Hello, Eric Abrahamsen [email protected] writes: Right now it looks like the central cond statement in org-add-log-setup' is as close as we've got to a canonical definition of where a heading's log list is to be found. Should I just write my own version of this, or would you be open ### [O] Bug: [8.2.10 (8.2.10-1-g8b63dc-elpa @ /home/boudiccas/.emacs.d/elpa/org-20141027/)] Thinking that the problem was in corruption in my git download, I downloaded a fresh git setup, but the problem still remains. So I've installed org-mode from ELPA, but the problem still remains. I am still unable to send emails through gnus, and the error report is still occurring in many ### Re: [O] Org-mode Habit with Varying Description Nicolas Goaziou [email protected] writes: Hello, Eric Abrahamsen [email protected] writes: Right now it looks like the central cond statement in org-add-log-setup' is as close as we've got to a canonical definition of where a heading's log list is to be found. Should I just ### Re: [O] Org-mode Habit with Varying Description Nicolas Goaziou [email protected] writes: Hello, Eric Abrahamsen [email protected] writes: Right now it looks like the central cond statement in org-add-log-setup' is as close as we've got to a canonical definition of where a heading's log list is to be found. Should I just ### [O] BUG: preview latex with split environment Hi all$$,$$ I encounter a BUG with org-toggle-latex-fragment on an equation that uses amsmath's split environment. (Exporting of the document works fine.) The problem is that the $$,$$ get stripped of the temporary tex file. Here is some test doc --8---cut ### Re: [O] Bug: [8.2.10 (8.2.10-1-g8b63dc-elpa @ /home/boudiccas/.emacs.d/elpa/org-20141027/)] Sharon Kimble [email protected] writes: Thinking that the problem was in corruption in my git download, I downloaded a fresh git setup, but the problem still remains. So I've installed org-mode from ELPA, but the problem still remains. I am still unable to send emails through gnus, ### Re: [O] BUG: preview latex with split environment Andreas Leha [email protected] writes: Hi all$$,$$ I encounter a BUG with org-toggle-latex-fragment on an equation that uses amsmath's split environment. 
The problem is that the $$,$$ get stripped of the temporary tex file. I can't reproduce using ### Re: [O] math in parentheses Hi Andreas, Andreas Leha [email protected] writes: I encounter a problem using $..$ expressions when they are enclosed in parentheses. This $T$ works, but ($T$) this does not. Is that expected behaviour? (Note that $$...$$ expressions work.) Yes, that is the expected ### [O] How to extract TODOs from date-tree Hi, I have my Org files set up as date-trees containing a mix of notes, tasks and projects. I now have a need to generate a list of projects and tasks filed under specific date-tree or in a range of dates. Is it possible to get this listing from the date-trees if the entries themselves don't ### Re: [O] math in parentheses Hi Richard, Richard Lawrence [email protected] writes: Hi Andreas, Andreas Leha [email protected] writes: I encounter a problem using $..$ expressions when they are enclosed in parentheses. This $T$ works, but ($T$) this does not. Is that expected behaviour? ### Re: [O] How to extract TODOs from date-tree Jay Iyer [email protected] writes: Hi, I have my Org files set up as date-trees containing a mix of notes, tasks and projects. I now have a need to generate a list of projects and tasks filed under specific date-tree or in a range of dates. Is it possible to get this listing from the ### Re: [O] R code blocks and 'could not find function .ess.eval' On Tue, 28 Oct 2014, William Denton wrote: This problem was reported last month and then again earlier this month, for example here: https://lists.gnu.org/archive/html/emacs-orgmode/2014-10/msg00178.html I'm running Emacs 25.0.50.2, Org 8.2.10 and ESS 14.1x. I'm getting a lot of ### Re: [O] Org-mode Habit with Varying Description Eric Abrahamsen [email protected] writes: I was just fooling with this a bit, and am noticing some odd (to me) behavior. If I start with emacs -Q, then (goto-char (org-log-beginning)) takes me to the start of a :LOGBOOK: drawer, and (org-element-at-point) returns the drawer. That works ### [O] Several datetrees in one file Hello, I would like to have several datetrees in one org file. I want to have different datetrees under different headlines. The setup I have in mind is an address book with chronologically ordered notes for each person. This is an example entry for one person. Each following person in that file ### [O] make clean-install leaves htmlize.el Hi all, recently make clean-install leaves the files htmlize.el[c] left in my installation directory while I'd expect them to be removed by clean-install. Regards, Andreas ### Re: [O] math in parentheses Andreas Leha [email protected] writes: Hi Richard, Richard Lawrence [email protected] writes: Hi Andreas, Andreas Leha [email protected] writes: I encounter a problem using $..$ expressions when they are enclosed in parentheses. This $T$ ### [O] make test scrambles terminal Hi, make test` recently messes with the encoding of my terminal. Here are the last lines from the output: --8---cut here---start-8--- ⎻▒⎽⎽ed 53▮/531 ├e⎽├↑⎺⎼±/┤⎻↑e┌e└e┼├ ⎻▒⎽⎽ed 531/531 ├e⎽├↑⎺⎼±/┤⎻d▒├e↑⎼▒d☃⎺↑├▒⎼±e├↑⎼e±e│⎻ R▒┼ 531 ├e⎽├⎽← 529 ⎼e⎽┤┌├⎽ ### Re: [O] How to extract TODOs from date-tree Jay Iyer [email protected] writes: Hi Jay, The file entries are as follows and the task/note/project sub-heads generally don't have active/inactive timestamps except when a scheduling/deadline is specified. Thanks. 
** 2014-10 October *** 2014-10-01 Wednesday TODO first task ### Re: [O] How to extract TODOs from date-tree Jay Iyer [email protected] writes: Hi Thorsten, The file entries are as follows and the task/note/project sub-heads generally don't have active/inactive timestamps except when a scheduling/deadline is specified. Thanks. ** 2014-10 October *** 2014-10-01 Wednesday TODO first task ### Re: [O] math in parentheses Hi Nick, Nick Dokos [email protected] writes: Andreas Leha [email protected] writes: Hi Richard, Richard Lawrence [email protected] writes: Hi Andreas, Andreas Leha [email protected] writes: I encounter a problem using $..$ expressions when ### Re: [O] An Org-mode LaTeX class? Aloha Rasmus, Rasmus [email protected] writes: Moreover, the look of these elements is configurable on the LaTeX end, and further by means of Org options. This way, we drop the generic LaTeX thing (which is nice for people sending articles to journals etc. – so my dream should not replace the ### Re: [O] BUG: preview latex with split environment Rasmus [email protected] writes: Andreas Leha [email protected] writes: Hi all$$,$$ I encounter a BUG with org-toggle-latex-fragment on an equation that uses amsmath's split environment. The problem is that the $$,$$ get stripped of the temporary tex ### Re: [O] Bug: org-columns-compile-format [8.2.10 (release_8.2.10-1-g8b63dc @ /home/boudiccas/git/org-mode/lisp/)] Aaron Ecay [email protected] writes: Hi Sharon, 2014ko urriak 27an, Sharon Kimble-ek idatzi zuen: [...] org-mode-hook '(org-drill-add-cloze-fontification org2blog/wp-mode (lambda nil (org-columns 1)) (lambda nil (abbrev-mode 1)) The above (org-columns 1) form is your problem. Have ### Re: [O] Bug: org-columns-compile-format [8.2.10 (release_8.2.10-1-g8b63dc @ /home/boudiccas/git/org-mode/lisp/)] Hi Sharon, 2014ko urriak 27an, Sharon Kimble-ek idatzi zuen: [...] org-mode-hook '(org-drill-add-cloze-fontification org2blog/wp-mode (lambda nil (org-columns 1)) (lambda nil (abbrev-mode 1)) The above (org-columns 1) form is your problem. Have you added this to org-mode-hook your ### Re: [O] Bug: [8.2.10 (8.2.10-1-g8b63dc-elpa @ /home/boudiccas/.emacs.d/elpa/org-20141027/)] Sharon Kimble [email protected] writes: It is not a gnus-problem alone as I also get the error report when opening some org-mode pages. For the last couple of days I have been unable to send any emails through gnus, or any followups to a newsgroup, because the buffer which should ### Re: [O] math in parentheses Andreas Leha [email protected] writes: Hi Rasmus, Rasmus [email protected] writes: Hi, Andreas Leha [email protected] writes: Hi Richard, Just for me to understand: Is the problem the missing whitespace around the $...$ expression? From (info (org) LaTeX ### Re: [O] BUG: preview latex with split environment Hello, Andreas Leha [email protected] writes: I open this file: #+TITLE: Test amsmath's split * Some equation with some split expression, e.g. $$\begin{split} w \cdot x + b = 1, \text{ and}\\ w \cdot x + b = -1 \end{split}$$ ### [O] Support for table notes? Hi list, I'm having trouble with LaTeX export of a table that uses some footnotes. It seems that export by default uses combinations of \footnotemark inside the table and \footnotetext after the table. This gets me in to trouble when the table is pushed to the next page - the footnotes are left ### [O] inline image size Hi there, Does anybody know how to change the size of inline images in org ? 
I would like to do this using something like #+attr :with 200 thanks M ### Re: [O] org-capture/remember in Emacs 24.4.1? (Marco Wahl) Oddly this works on my one computer, but not my other -- doubly odd since they're both running the same distro, the same version of Emacs, and the same .emacs file! Even commenting out that line doesn't change the (wrong) behaviour on my one computer Marco Wahl [email protected] ### Re: [O] org-capture/remember in Emacs 24.4.1? (Marco Wahl) I figured it out - in updating, I had lost the cdlatex package, and org-capture wants this for some reason, that's why it was crashing for me. Marco Wahl [email protected] writes: The following message is a courtesy copy of an article that has been posted to gmane.emacs.orgmode as well. ### Re: [O] How to extract TODOs from date-tree Thorsten Jolitz [email protected] writes: Jay Iyer [email protected] writes: Hi Thorsten, The file entries are as follows and the task/note/project sub-heads generally don't have active/inactive timestamps except when a scheduling/deadline is specified. Thanks. ** 2014-10 October *** ### Re: [O] Several datetrees in one file Alexander Baier [email protected] writes: Hello, I would like to have several datetrees in one org file. I want to have different datetrees under different headlines. The setup I have in mind is an address book with chronologically ordered notes for each person. ... Is this ### Re: [O] inline image size Hi there, I found the solution http://lists.gnu.org/archive/html/emacs-orgmode/2014-10/msg00038.html cheers, M On Oct 28, 2014, at 9:16 PM, Doyley, Marvin M. [email protected] wrote: Hi there, Does anybody know how to change the size of inline images in org ? I would like to do
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 4, "x-ck12": 0, "texerror": 0, "math_score": 0.8532421588897705, "perplexity": 9937.387016097688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585196.73/warc/CC-MAIN-20211018031901-20211018061901-00115.warc.gz"}
http://semantic-portal.net/php-language-reference-references-what-references-do
# What References Do

There are three basic operations performed using references: assigning by reference, passing by reference, and returning by reference. This section will give an introduction to these operations, with links for further reading.

### Assign By Reference

In the first of these, PHP references allow you to make two variables refer to the same content. Meaning, when you do:

<?php
$a =& $b;
?>

it means that $a and $b point to the same content. Note: $a and $b are completely equal here. $a is not pointing to $b or vice versa. $a and $b are pointing to the same place. Note: If you assign, pass, or return an undefined variable by reference, it will get created.

Example #1 Using references with undefined variables

<?php
function foo(&$var) { }

foo($a); // $a is "created" and assigned to null

$b = array();
foo($b['b']);
var_dump(array_key_exists('b', $b)); // bool(true)

$c = new StdClass;
foo($c->d);
var_dump(property_exists($c, 'd')); // bool(true)
?>

The same syntax can be used with functions that return references:

<?php
$foo =& find_var($bar);
?>

Since PHP 5, new returns a reference automatically, so using =& in this context is deprecated and produces an E_DEPRECATED message in PHP 5.3 and later, and an E_STRICT message in earlier versions. As of PHP 7.0 it is syntactically invalid. (Technically, the difference is that, in PHP 5, object variables, much like resources, are a mere pointer to the actual object data, so these object references are not "references" in the same sense used before (aliases). For more information, see Objects and references.)

Warning: If you assign a reference to a variable declared global inside a function, the reference will be visible only inside the function. You can avoid this by using the $GLOBALS array.

Example #2 Referencing global variables inside functions

<?php
$var1 = "Example variable";
$var2 = "";

function global_references($use_globals)
{
    global $var1, $var2;
    if (!$use_globals) {
        $var2 =& $var1; // visible only inside the function
    } else {
        $GLOBALS["var2"] =& $var1; // visible also in global context
    }
}

global_references(false);
echo "var2 is set to '$var2'\n"; // var2 is set to ''
global_references(true);
echo "var2 is set to '$var2'\n"; // var2 is set to 'Example variable'
?>

Think about global $var; as a shortcut to $var =& $GLOBALS['var'];. Thus assigning another reference to $var only changes the local variable's reference.

Note: If you assign a value to a variable with references in a foreach statement, the references are modified too.

Example #3 References and foreach statement

<?php
$ref = 0;
$row =& $ref;
foreach (array(1, 2, 3) as $row) {
    // do something
}
echo $ref; // 3 - last element of the iterated array
?>

While not being strictly an assignment by reference, expressions created with the language construct array() can also behave as such by prefixing & to the array element to add. Example:

<?php
$a = 1;
$b = array(2, 3);
$arr = array(&$a, &$b[0], &$b[1]);
$arr[0]++;
$arr[1]++;
$arr[2]++;
/* $a == 2, $b == array(3, 4); */
?>

Note, however, that references inside arrays are potentially dangerous. Doing a normal (not by reference) assignment with a reference on the right side does not turn the left side into a reference, but references inside arrays are preserved in these normal assignments. This also applies to function calls where the array is passed by value.
Example:

<?php
/* Assignment of scalar variables */
$a = 1;
$b =& $a;
$c = $b;
$c = 7; // $c is not a reference; no change to $a or $b

/* Assignment of array variables */
$arr = array(1);
$a =& $arr[0]; // $a and $arr[0] are in the same reference set
$arr2 = $arr;  // not an assignment-by-reference!
$arr2[0]++;
/* $a == 2, $arr == array(2) */
/* The contents of $arr are changed even though it's not a reference! */
?>

In other words, the reference behavior of arrays is defined on an element-by-element basis; the reference behavior of individual elements is dissociated from the reference status of the array container.

### Pass By Reference

The second thing references do is to pass variables by reference. This is done by making a local variable in a function and a variable in the calling scope reference the same content. Example:

<?php
function foo(&$var)
{
    $var++;
}

$a = 5;
foo($a);
?>

will make $a equal to 6. This happens because in the function foo the variable $var refers to the same content as $a. For more information on this, read the passing by reference section.

### Return By Reference

The third thing references can do is return by reference.
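The page stops before showing the third operation, so here is a minimal sketch of returning by reference; the class name, property, and getValue() helper below are illustrative only, not part of the original page:

<?php
class Container
{
    public $value = 42;

    // The & in the declaration makes getValue() return a reference to $this->value
    public function &getValue()
    {
        return $this->value;
    }
}

$obj = new Container();
$myValue =& $obj->getValue(); // =& at the call site binds $myValue to $obj->value
$myValue = 2;
echo $obj->value;             // prints 2, because $myValue and $obj->value are the same content
?>

Note that the & is needed in both places: in the function declaration, to indicate that a reference rather than a copy is returned, and at the call site, to indicate that reference binding rather than ordinary assignment should be done.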
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25040203332901, "perplexity": 5046.415433300183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300573.3/warc/CC-MAIN-20220129062503-20220129092503-00648.warc.gz"}
https://hal-insu.archives-ouvertes.fr/insu-00346713
# Measuring the heterogeneity of the coseismic stress change following the 1999 Mw7.6 Chi-Chi earthquake

Abstract: Seismicity quiescences are expected to occur in places where the stress has been decreased, in particular following large main shocks. However, such quiescences can be delayed by hours to years and be preceded by an initial phase of earthquake triggering. This can explain previous analyses arguing that seismicity shadows are rarely observed, since they can only be seen after this triggering phase is over. Such is the case of the main rupture zone, which experiences the strongest aftershock activity despite having been coseismically unloaded by up to tens of bars. The 1999 Mw 7.6 Chi-Chi, Taiwan earthquake is characterized by the existence of several such delayed quiescences, especially off the Chelungpu fault on which the earthquake took place. We here investigate whether these delays can be explained by a model of heterogeneous static-stress transfer coupled with a rate-and-state friction law. We model the distribution of coseismic small-scale stress change τ by a Gaussian law with mean τ̄ and standard deviation σ_τ. The latter measures the level of local heterogeneity of the coseismic change in stress. The model is shown to mimic the earthquake time series very well. Robust inversion of the τ̄ and σ_τ parameters can be achieved at various locations, although on-fault seismicity has not been observed for a sufficiently long time to provide more than lower bounds on those estimates for the Chelungpu fault. Several quiescences have delays that can be well explained by local stress heterogeneity, even at relatively large distances from the Chi-Chi earthquake.

Document type: Journal articles https://hal-insu.archives-ouvertes.fr/insu-00346713 Contributor: Pascale Talour Submitted on: Wednesday, March 10, 2021 - 3:29:43 PM Last modification on: Tuesday, July 27, 2021 - 9:36:02 AM Long-term archiving on: Friday, June 11, 2021 - 7:01:37 PM

### File 2006JB004651.pdf Publisher files allowed on an open archive

### Citation

David Marsan, Guillaume Daniel. Measuring the heterogeneity of the coseismic stress change following the 1999 Mw7.6 Chi-Chi earthquake. Journal of Geophysical Research: Solid Earth, American Geophysical Union, 2007, 122 (article n° B07305). ⟨10.1029/2006JB004651⟩. ⟨insu-00346713⟩
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8376882076263428, "perplexity": 4295.403592269362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153223.30/warc/CC-MAIN-20210727072531-20210727102531-00081.warc.gz"}
http://patriot.net/~abdali/urdumac.html
# URDU on the MAC

Mac OS X is capable of editing and word processing in Urdu. In a few simple steps, you can enable your Mac to handle Urdu documents.

Starting with MacOS 10.7 (Lion), the Macintosh supports Urdu natively. If you are satisfied with the built-in Urdu keyboard, then all you need to do is go to the section Activating Urdu Input below, and in Step 4, click on the box named Urdu. But if you aim for perfection, then read on.

This page could as well have been entitled "Urdu and Persian on the Mac", because the information given here can also be used to compose Persian (Farsi) documents. The keyboard, fonts, and explanations below apply equally to Persian, but the explanations are illustrated with Urdu words. The keyboard and fonts also suffice for Punjabi (Shahmukhi or Pakistani style, written in the Arabic script), Arabic, and Ottoman Turkish. But the keyboard does not have all the symbols of the Sindhi and Pushto alphabets.

## Installing the Urdu keyboard layout

[Note that we are talking about an Urdu keyboard layout, not an Urdu keyboard. You will be able to use this layout in order to type Urdu characters using the standard English keyboard that came with your Mac.]

BEFORE INSTALLATION: If the folder /Library/Keyboard Layouts contains files named UrduPhonetic.keylayout and UrduPhonetic.icns, then delete these two files. (These have now become obsolete.)

1. Download the Urdu-QWERTY keyboard layout archive, UrduQWERTY-v4.zip.
2. Double click on this zip archive to extract its files.
3. Two of these files are the keyboard layout file UrduQWERTY.keylayout and the icon file UrduQWERTY.icns. Move both these files to the folder /Library/Keyboard Layouts. If you have already installed a previous version of the Urdu-QWERTY keyboard layout, then confirm that you want the old files overwritten.
4. The other extracted files are readme-mac.txt (with some instructions similar to the ones here) and UrduQWERTYkeyboardMac.pdf (a printable one-page keyboard map for reference).
5. Log out, then log in again. This will let the system install the Urdu keyboard.
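If you prefer the Terminal, the same installation can be carried out with a few commands. This is only a sketch and assumes the archive was saved to ~/Downloads; adjust the paths to match where your files actually are.

    cd ~/Downloads
    unzip UrduQWERTY-v4.zip
    # Copying into /Library/Keyboard Layouts requires administrator rights.
    sudo mv UrduQWERTY.keylayout UrduQWERTY.icns "/Library/Keyboard Layouts/"
    # Log out and back in so the system registers the new layout.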
## Activating Urdu Input

1. Pull down the Apple Menu in the menu bar (the apple icon on the top left of the screen) and select System Preferences.
2. Select Keyboard (for MacOS 10.7 and above), Languages & Text (for MacOS 10.5 and 10.6), or International (for MacOS 10.4).
3. Click on the Input Sources tab (for MacOS 10.5 and above) or Input Menu tab (for MacOS 10.4).
4. For MacOS 10.6 and below, go to the next step. For 10.7 and above, a list of activated input sources (i.e., keyboard layouts) will appear in the left column. If Urdu-QWERTY isn't there, press the + (plus) sign below to see a list of languages. Selecting Urdu will show the Apple-supplied keyboard layout, which is not what you want! So select "Others" from the left list, then "Urdu - QWERTY" from the right, then press the "Add" button. You'll now be back to the previous window, with "Urdu - QWERTY" in the left list. Select it. At the bottom of the window, there is an item Show Input menu in menu bar. Click the box to its left so it shows a check sign in it. The activation is now complete, so skip the remaining steps.
5. For MacOS 10.6 and below, a long list will appear. Scroll the list down to find the item with the name Urdu - QWERTY. Click the box to its left so it shows a check sign in it.
6. At the bottom of the window, there is an item Show Input menu in menu bar. Click the box to its left so it shows a check sign in it.
7. Scroll up all the way. Towards the top of the list, you will see either (for MacOS 10.6 and above) a single item named Keyboard & Character Viewer, or (for MacOS 10.5) two items, Character Palette and Keyboard Viewer. Click the boxes on their left so each shows a check sign.
8. Close Language & Text (for MacOS 10.5 and above) or International (for MacOS 10.4).

The pictures below correspond to MacOS 10.5 and 10.6. What you'll see in other MacOS versions is slightly different, but the changes will be obvious.

If a US flag was not previously visible at the top right of the screen (on the menu bar), it should now be displayed. This is the Input menu (sometimes called the Keyboard menu). If you click on this flag, a menu will appear underneath it with various icons and names representing all the active keyboards. A keyboard named Urdu-QWERTY and an icon somewhat like the flag of Pakistan should now appear. If you click on it, then the keyboard icon on the top right of the screen will turn into the Pakistani flag, and any keys pressed on the Mac keyboard will produce Urdu characters. You can switch between keyboards by clicking on the flags (the keyboard icons).

## Installing Urdu Fonts

On a Mac, it is best to use Naskh fonts (which are typically used in Sindhi, Arabic and Persian publications), not the Nastaleeq fonts (which are used in most Urdu newspapers). Although Nastaleeq fonts are available for the Mac, they don't work as robustly as the Naskh fonts. But if you want to experiment with Nastaleeq fonts anyway, they are discussed in the section Nastaleeq Fonts further below. Knut Vikor's excellent page The Arabic Macintosh has a detailed discussion of various Arabic fonts, with interesting information about them and links to obtain them.

The Mac OS comes with only one Naskh font, Geeza Pro, which can be used for Urdu, but the characters do not look particularly attractive. A free and very attractive set of fonts for Intel Macs is XB Zar, distributed by the Iran Mac Users Group (IRMUG).

1. Visit IRMUG's X Series 2 page.
2. Download the XB Zar font archive from that page.
3. Click on the downloaded file to get a folder named Zar.
4. Move this folder to /Library/Fonts.

You can also install the SIL fonts Scheherazade and Lateef:

1. Visit the Arabic Script Unicode Fonts page on the SIL International Web site.
2. Scroll down to the Freeware License section.
3. Download the fonts Scheherazade Regular (OpenType), Lateef Regular (OpenType), Scheherazade Regular (AAT), and Lateef Regular (AAT). TextEdit uses AAT fonts, but you might need the OT fonts for other applications.
4. The downloaded files have the suffix .zip. Click on each to unzip them, getting new files with the suffix .ttf.
5. Move the ttf files to /Library/Fonts.

## DIGRESSION: Urdu QWERTY for Windows

After having used the Urdu-QWERTY keyboard on your Mac for a while, you might be interested in using the same key settings also on your Windows computer. If so, you can download the Urdu QWERTY Keyboard Layout for Windows. (This file was updated on 2014-08-23.) Unzipping the downloaded file will produce a folder. Open this folder and follow the instructions in the file readme-win.txt. Basically, there are two steps: (1) double-click on the file setup.exe (or just setup) to install the keyboard on your system; (2) then activate the keyboard via the Control Panel.

The file UrduQWERTYkeyboardWin.pdf is a printable one-page map for reference showing the key assignments in the Urdu QWERTY Keyboard Layout. It is also downloadable from here. (This file was updated on 2014-06-04.)
Make sure to also install some of the fonts recommended in the sections Installing Urdu Fonts, More Naskh Fonts, and Nastaleeq Fonts. All these fonts will properly display the symbols that this keyboard lets you type.

## The Urdu QWERTY Unicode Keyboard

The Urdu-QWERTY keyboard layout that you have downloaded has been designed to closely resemble the phonetic keyboard of InPage, a popular commercial desktop publishing application for Urdu that runs under Windows. The advantage of a QWERTY (also called phonetic) keyboard is that keys are assigned to letters based on letter sounds; e.g., the key "b" for the Urdu letter "bay", "p" for "pay", "k" for "kaaf", "g" for "gaaf", and so on. Such an assignment helps you remember most of the keys. As we do not have enough keys on the standard computer keyboards to assign to all Urdu characters, we need to use shifted keys for some (e.g., "shift-k" for "khay", "shift-g" for "ghain", etc.).

In addition to being phonetic, this is also a Unicode keyboard layout. Whatever you type is converted to its Unicode representation, which is the modern universal character encoding used in computers for multi-lingual texts.

The Input menu (Keyboard menu) has an item Show Keyboard Viewer. If you select this item, then the system will display a picture of the keyboard on the screen. By default, the picture is very small. But you can make it larger or smaller like any window by pulling the handle at its right-bottom corner with your mouse. In this picture you can see what character each key corresponds to. The characters will change appropriately if you press the shift key or option key (or another modifier key) or select a different keyboard from the Input menu.

For reference, below are larger pictures of the Urdu-QWERTY Keyboard showing the characters corresponding to the keys in plain, shift, option, and option-shift modes. (The light marks shown on the left corner of keys identify the keys on a Western keyboard.) For a pdf file with printable keyboard pictures, click here. (This file was updated on 2014-06-04.) You can then print the keyboard pictures for reference.

### Urdu-QWERTY Keyboard: Symbols generated with both Option and Shift pressed

With one exception, the above pictures contain all the information that there is to give about the Urdu-QWERTY key assignments. The exception is this: the same digit keys can generate digits in three different shapes: (1) Western shapes of digits when no modifier key (SHIFT, OPTION, CAPS LOCK) is pressed; (2) Urdu shapes of digits when CAPS LOCK is pressed and SHIFT or OPTION are not pressed; and (3) Arabic shapes (suitable for Naskh fonts) when OPTION is pressed. The CAPS LOCK key has no effect on other keys. So if you like your digits to be displayed in their Urdu, and not the Western, shapes, then you can just leave CAPS LOCK depressed, and release it only to type a symbol which requires both SHIFT and OPTION keys to be pressed.

Even though we have tried to make the keyboard layout as phonetic as possible, the mismatch between the Urdu alphabet and the available keys on a Western keyboard has forced us to make some unintuitive mappings between letters and keys. But with a little practice you should be able to type most letters from memory. For quick reference, here are some tables of useful key bindings.

### Some unobvious key bindings

Another useful key is Shift-" (i.e., the double quote key) that generates the dash-like Kasheeda character. This character is mostly used to stretch horizontal components of letters.
(This is described in more detail in the section The Kasheeda Feature.)

### Similar Letters in Urdu, Persian, and Arabic

The following table lists the letters which are similar in Urdu, Persian, and Arabic, but are treated as different Unicode symbols. You should carefully choose the key bindings appropriate to the language of the text being typed.

| Name | Shape | Also called | Languages | Initial | Medial | Final | Isolated | Key |
|------|-------|-------------|-----------|---------|--------|-------|----------|-----|
| Keheh | ک | Kaf | Urdu, Persian | کـ | ـکـ | ـک | ک | k |
| Kaf | ك | | Arabic | كـ | ـكـ | ـك | ك | option k |
| Heh | ه | | Persian, Arabic | هـ | ـهـ | ـه | ه | option o |
| Teh Marbuta | ة | | Persian, Arabic | | | ـة | ة | option shift O |
| Heh Goal | ہ | ChoTi Heh | Urdu | ہـ | ـہـ | ـہ | ه | o |
| Teh Marbuta Goal | ۃ | | Urdu | | | ـۃ | ة | shift O |
| Heh Dochashmee | ھ | | Urdu | | ـھـ | ـھ | | h |
| Farsi Yeh | ی | | Urdu, Persian | یـ | ـیـ | ـی | ی | i |
| Yeh | ي | | Arabic | يـ | ـيـ | ـي | ي | option i |
| Alef Maksura | ى | Yeh KhaRa Zabar | Arabic | | | ـىٰ | | option u |

Note, in particular, that the key "o" is bound to the Urdu letter "Goal Heh" (also called "ChoTi Hay"), and the key "Option-o" is bound to the Arabic/Persian letter "Heh". Both letters have the same shape when standing alone, i.e., in the isolated position. But their shapes in initial, medial, and final positions are different. Worse, in the medial position, they can be confused with the Urdu letter "Dochashmi Hay" (key "h"). You have to make sure to use Goal Hay and Dochashmi Hay with Urdu fonts, and Arabic Hay with Arabic and Persian fonts; otherwise the letter might not be rendered properly.

In Urdu and Persian, the letter "yeh" has two dots underneath in initial and medial positions, but none in final and isolated positions. In Arabic, this letter has those dots in all positions. You should use the key "i" when typing in Urdu or Persian, and "Option-i" when typing in Arabic. The letter "alef maksura" (key Option-u) is always dotless. In Urdu, it appears only in Arabic words, and is treated as the same as the "ChoTi yeh", i.e., "Farsi yeh" (key "i"). It is traditional to decorate this letter with a "khara zabar" (key Shift-I) above it.

## Preparing Urdu Documents

### Email Messages

An email message is usually a very simple document. If you compose your email on a Mac, and your recipient is also going to read your email on a Mac, then you can try writing your messages in Urdu. In fact, the messages are readable even on any non-Mac machine that has been configured with system options for right-to-left languages and on which appropriate fonts have been installed. An occasional character in these messages might be undecipherable, and might be replaced with its unicode icon (or some gibberish, in the worst case).

Both Gmail and YahooMail systems work admirably when the Urdu-QWERTY input is turned on and message composition is in Rich Text format with right-to-left text direction. With Hotmail, mixing right-to-left and left-to-right text in the same line seems problematic, as that seems to interfere with the correct sequencing of words. In each case, the cursor behavior is a bit erratic; the cursor sometimes shows up at the right end of the line instead of being at left next to the last word typed. But you can ignore all that since your text is still set correctly. The cursor behavior will likely be fixed by Apple and email system producers anyway.

It helps to set the Web browser preference for Default Character Encoding to Unicode (UTF-8). If you use the Firefox Web browser, then in its preferences set the Default Font to XB Zar, Size 18, all Fonts for Arabic to XB Zar, Monospace Font Size to 16, and other sizes to 18.
The Safari Web browser does not allow language by language font control, so just set Standard Font to XB Zar 18. If your machine is not an Intel Mac, or for any other reason you cannot use the XB Zar font, then in its place use Scheherazade-AAT, Size 24. Google has developed an input method based on transliteration of text typed using letters of the Latin (i.e., Roman or English) alphabet. This method, called Google Transliteration Input Method Editor (IME), is available for Urdu. It lets you enter Urdu words using Latin characters phonetically. Google Transliteration IME will convert the text, based on its sound, to Urdu characters. The conversion is quite liberal so the correct Urdu word will result from most of its reasonable phonetic Roman spellings. By the same token, the same Roman text can lead to several different Urdu words. In the latter case, you will be able to choose one of the words from a menu. For example, the Roman text "sada" can correspond to the Urdu words سادہ (meaning simple), سدا (always), صدا (sound), and possibly others. So if you type "sada" in Google Transliteration IME for Urdu, the system will display these Urdu words in a menu, from which you can select your intended word. IME saves you the trouble of installing keyboard layouts for the languages you like to type in. But experimenting with IME is likely to convince you that it is more efficient to type Urdu text directly using the Urdu-QWERTY keyboard layout, rather than via IME. The mismatch between Roman and Urdu alphabets is so substantial that there are too many phonetic Roman spellings for the same Urdu word, and too many Urdu words result from the same Roman text. So while using IME, you are likely to be wasting much time trying and selecting alternatives. IME is available as an option in Gmail. It is also available for MacOS as a service so you can use it with applications such as word processors. Gmail also allows specifying Urdu as the application language. In that case all the menus, titles, warnings, etc., will be translated to Urdu. But you do not need this drastic setting in order just to read and write Urdu email messages. While keeping English as the system working language, you can type Urdu text by selecting the Urdu-QWERTY input. Of course, you can also intersperse texts in Urdu and English by simply switching back and forth between Urdu-QWERTY and English keyboards. ### Traditional Documents, Editors for Urdu For the Mac, there are three free office suites LibreOffice, OpenOffice, and NeoOffice, each of which includes a powerful word processor that can be used to edit Urdu documents. OpenOffice is a multi-platform open-source software application with functionality very similar to that of the commercial product Microsoft Office. LibreOffice is another multi-platform application that has been developed starting from the OpenOffice code base. NeoOffice is a Mac-specific implementation of OpenOffice. If you want to install one or more of them, you can find detailed descriptions and installation instructions on their official Web sites LibreOffice, OpenOffice and NeoOffice. We will not describe their use. A highly recommended editor is Bean for which the general information and download instructions can be found on its official Web site. This is a free, easy to use, small, and very efficient word processor. It is quite adequate for simple documents. Beware that at present it lacks some advanced word processing features such as footnotes. 
It is also a bit stubborn in its behavior; for example, the document line size cannot be changed simply by resizing the editor window. The Mac's built-in application TextEdit is good enough for simple documents. TextEdit is considered a text editor rather than a word processor. Yet it can be used for composing documents with multilingual text, embedded graphics, tables, and other advanced features typically found only in large, expensive software applications. Its advantage is that it doesn't need to be installed: It is always there, and is the Mac's default editor for text files. In Plain Text mode, TextEdit allows only a single font and a single paragraph justification style for the entire document. In Rich Text mode, you can mix various font families, font sizes, font styles (e.g., bold, outlined, shadowed), and justifications (e.g., centered text, or text justified at left or right or both sides). You need to use Rich Text since Plain Text does not work well with Urdu. To start a new Urdu document, select the menu item File > New, then Format > Text > Writing Direction > Right to Left. Set the Input menu (Top Right) to Urdu-QWERTY. Choose fonts, font styles, size, colors, etc., as is usual with most word processors. Since the default formatting is Plain Text, switch to the Rich Text Format by doing Format > Make Rich Text. Now you can apply a different justification to each paragraph, and a different formatting style to each selection. It is helpful to also do Format > Font > Show Fonts. This puts on the screen a font palette which is convenient for choosing font family (e.g, XB Zar, Scheherazade-AAT, or Lateef-AAT), size, color, etc. The XB Zar font family also includes typeface variants such as italic, bold, and bold italic. (But see the note in the section More Naskh Fonts about the use of italics.) The system seems to unpredictably switch the font sometimes to Geeza Pro (the "system default font" for the Arabic script). So you need to watch the font palette and, if necessary, change the font back to what you want it to be. You can configure TextEdit to use your favorite font as the default. To do this, start TextEdit, and on the Menu bar click on TextEdit, then on Preferences, and then on the New Document tab. If the Rich Text radio button is not active, click on it. Click on the Change... button next to Rich text font: . The font dialog will open. Now, select, for example, XB Zar in the family column, and 18 in the size column, then close the dialog. Finally, close Preferences, and quit TextEdit. When you restart TextEdit, it will use the Rich Text format and the XB Zar size 18 font as the default for new documents. Here is an image of a portion of the Mac screen during the editing of an Urdu document using TextEdit. ### TextEdit In Action X Series 2 is a set of free, high quality, attractive Naskh fonts that support Urdu and Persian. These come with matched groups of regular, italic, bold, and bold italic characters; some even have outline and shadow variants. The X Series 2 fonts are downloadable from the Iran Mac User Group Wiki site. Particularly nice font families on that site are XB Niloofar, XB Yas, XB Kayhan, and XB Zar for general use, and XB Titre for headings. NOTE: In the regular typeface of Naskh fonts, the "vertical" strokes of Alif, Laam, etc., are actually drawn with a slight tilt to the left. The italic Naskh typefaces of X Series 2 fonts are designed by slanting the same strokes a bit to the right. 
Some font families in this series also have oblique typefaces in which the strokes are slanted even more to the left than in the regular typeface. But since few Urdu letters contain prominent vertical elements, italicized (or oblique) text in Urdu does not stand out well. (This is in contrast to the Latin alphabet, where nearly every letter has vertical strokes.) Boldface text in Urdu is, of course, quite noticeable.

Dozens of free Naskh fonts can be downloaded from the Internet. But you need to experiment with them to pick the ones that are of good quality and work with the whole Urdu alphabet. Some of them have been adapted from Arabic or Persian, without extending them properly for the additional letters of Urdu. You should check, in particular, whether these fonts properly display all the needed forms of the letters "Noon Ghunna" ں, "Goal Hay" (also called "ChoTi Hay") ە, "DoChashmi Hay" ھ, and "Bari Yay" ے .

### Nastaleeq Fonts

In Mac OS 10.5 and above, it is possible, with some care, to use Nastaleeq fonts with TextEdit and Bean. Some freely available Nastaleeq fonts are:

- Jameel Noori Nastaleeq and Jameel Noori Nastaleeq Kasheeda, downloadable from here. Double clicking on the downloaded zip file will extract four files. Install the file "Jameel Noori Nastaleeq.ttf", and discard the other three files.
- Faiz Lahori Nastaleeq
- Pak Nastaleeq
- Fajer Noori Nastalique

A variety of Nastaleeq as well as Naskh fonts are available for download from Urdu Web, Urdu Jahan, and Deedahwar. Please be warned, though, that the InPage company alleges that some freely available Nastaleeq fonts are pirated from their work.

Note: To install a font file, copy it to /Library/Fonts. On Mac OS 10.6, you can also install a font file by double clicking on it, then clicking on the Install Font button in the dialog box presented to you. Any font installed in this way is copied to ~/Library/Fonts, where it is available to the active user but not to other users.

CAUTION: Nastaleeq does not work at all in LibreOffice, OpenOffice, and NeoOffice.

OBSERVATION: Nastaleeq works satisfactorily with the message composer in Google's Gmail. You need to have the rich text style activated and the Right-to-Left text direction turned on. Since the Gmail composer's font menu is fixed and there is no Nastaleeq font in it, how do you make the composer use Nastaleeq? The only way that seems to work is to start a message with some Nastaleeq text copied from another Gmail message. Then while you edit this text, Gmail preserves the current text font! But keep in mind that any text copied from elsewhere, e.g., a TextEdit window, won't be rendered in Nastaleeq, so there is no sense starting a message with such text for the purpose of composing a message in Nastaleeq.

OBSERVATION: Nastaleeq works satisfactorily with TeX, discussed below in the section Typesetting Using TeX, LaTeX, XeTeX. For best results, use Open Type (OT) fonts with TeX.

Here are some conclusions from experiments with the above fonts using TextEdit and Bean under MacOS 10.6:

Summary: Nafees is the only Urdu font that works satisfactorily with editors like TextEdit and Bean. IranNastaliq is undoubtedly the most elegant and stylish, but it cannot handle the letters particular to Urdu. While Jameel and Faiz have very nice quality, they fail to render many typed words, and can be quite annoying at times. Pak is not acceptable at all in its current state because of its letter shapes.
Fajer is too unstable to be used with confidence, which is a pity because its shapes are nice and the failures are limited to just a few ligatures.

Details:

1. Nafees, Iran, and Pak are the only fonts which allowed without fuss the typing of the passage shown in the image below. The other fonts broke down, sometimes crashing TextEdit! Sometimes they caused the display to become garbled when a word was partially typed. When the word was typed anyway, the display became normal again. With Jameel Noori and Faiz Lahori it was possible to finish the text despite those episodes of temporary failure. With Fajer Noori, it wasn't possible to continue beyond the first line. This font failed with "baRi yay" ے following most letters.
2. The only problem with Nafees under TextEdit is that words sometimes get clipped at top or bottom. Perhaps the culprit is poor management of vertical space by TextEdit. The solution is to provide more generous spacing. For this, click on the Spacing pull-down menu on the formatting bar in the TextEdit window, select the Other... menu item, and adjust the Line height multiple and Inter-line spacing values for a satisfactory display.
3. IranNastaliq, intended obviously for Persian, doesn't support the Urdu retroflex letters Tay, Daal, and Ray, and the letter variants Noon Ghunna, Dochashmi Hay, and BaRi Yay. Actually some of these letters do get displayed, but they are not properly connected. For proper rendering of Goal Hay (also called ChoTi Hay), this letter should be typed as Option-o, not o.
4. Copying the passage and trying to change its font worked to varying degrees, except, again, for Fajer Noori.
5. IranNastaliq, representing the traditional Persian Nastaleeq style, is stunningly beautiful. For Urdu, Nafees, Faiz Lahori, and Jameel Noori are comparable in the calligraphic quality of shapes, with the latter two being more pleasing. The latter two look more natural and have the flair of handwritten calligraphy. But this might be a result of kerning variations and word processor functions.
6. The failure of Jameel Noori and Faiz Lahori to render some words seems unique to the Mac, since the same words that fail on the Mac get typeset correctly under the Windows operating system.
7. In Pak, the shapes of kaaf, gaaf, initial meem, medial hamza, and baRi yay are unacceptable. Several other ligatures are also quite poorly done. This font is just not usable in its present state.

By comparison with Nastaleeq fonts, Naskh fonts work much better on the Mac, and editing with them is generally trouble-free.

Here is an image of the TextEdit window during the editing of a document mainly with the Nafees Nastaleeq font (size 48 for the title, size 36 for the author's name, size 22 for the text body).

### Nastaleeq Fonts Used In TextEdit

To contrast the Urdu and Persian Nastaleeq styles, here is the image of a document set in the IranNastaliq font. Notice that compared to the Urdu sample, the strokes in the Persian sample are more consistent and uniform, and the latter sample more closely resembles manually calligraphed old manuscripts. Specially pleasing to the eye are the long slanted strokes (called "markaz") of kaaf and gaaf, and the stretched horizontal parts of letters like bay, tay, kaaf, etc.

NOTE: For the Persian letter Hay (ه), type the key Option-o. The Urdu letter Goal Hay (ہ), typed with the key o, will not get properly connected within a word when a Persian font is being used.

### The Kasheeda Feature

(This section was added on 2012-07-30.)
Worth special mention is the "kasheeda" feature of Iran Nastaliq to extend the horizontal strokes of letters. Essentially, by typing the Kasheeda character (key shift-") one or more times either before or after a letter, that letter can be extended arbitrarily. The judicious application of kasheeda makes the appearance of a document more pleasing by introducing variety in the shape of letters. It is also useful for highlighting titles and headings and for visually balancing the lines in poetry. There are long established calligraphic conventions as to where kasheeda extension is permissible in nastaleeq and where it is not. (See this Persian document.)

The kasheeda feature is not specific to Nastaleeq; it can be applied to Naskh fonts as well. In Naskh fonts, all horizontal strokes and letter joints are positioned at the same level in a line of text. The kasheeda character is a horizontal dash, so it can be added essentially to any letter that has a horizontal component. In Nastaleeq fonts, even the strokes that seem horizontal are not truly horizontal but have subtle slopes and slants. Also, they keep varying in thickness, and have to curve anyway at letter joints. So the kasheeda feature is quite difficult to incorporate in Nastaleeq fonts. Undoubtedly, the handling of kasheeda in Iran Nastaliq is masterly!

The kasheeda approach of Iran Nastaliq is superior to the one taken by the special "Kasheeda fonts" of Urdu (e.g., Jameel Noori Nastaleeq Kasheeda). Iran Nastaliq allows the user to apply kasheeda extension selectively to any chosen occurrences of any chosen letters. By contrast, the Urdu Kasheeda fonts have built-in kasheeda extensions in certain letters and ligatures. These fonts also violate the Kasheeda conventions sometimes. The sample below shows the use of Kasheeda in two different nastaleeq fonts. Unfortunately, the Kasheeda feature of Iran Nastaliq does not yet work fully under MacOS, so the sample has been prepared on a Windows machine.

### InPage Files and Their Conversion to Unicode Text

(This section was completely rewritten on 2012-06-20.)

InPage is a commercial desktop publishing application for Windows. It is widely used by publishing houses for producing Urdu publications because of its rich feature set, multi-lingual and multi-script capabilities, and robustness. Until recently, it was one of very few applications that could produce high-quality Nastaleeq documents. Unfortunately, InPage works only on Windows and, moreover, uses a proprietary document structure and fonts. Naturally, there is much interest in converting InPage files into alternate, more portable versions that could be processed on multiple computing platforms with multiple applications. So several online tools and programs have become available to convert InPage files to Unicode text files. In fact, InPage itself has a Unicode conversion facility of sorts through its copy and paste Edit menu items.

The conversion programs that I tried turned out to have errors or limitations. Some of them require you to have a running InPage to display the file to be converted. This severely reduces their utility, since what you want is a program to simply take an InPage file as input and create an equivalent Unicode text file as output. So I had to write such a program myself. This program is available as a standard application for Windows XP/7 and a command line application for MacOS and Linux. As I do not have access to any documentation of the inner structure of InPage documents, this program is based on guesswork.
Although I have run it successfully on several large documents, I do not guarantee that it will convert all InPage files correctly. Also, to use it, you need some rudimentary skills to deal with commands and with Terminal windows (in MacOS) or Unix commands (in Linux). Please let me know if you encounter any bugs. Here are the instructions for the simplest way to use it:

1. Download the file InpToUniTxt.zip from here. This file was last updated on 2012-07-26.
2. Unzip the downloaded zip file to extract the following command line applications:
   - InpToUni-mac (universal binary for PowerPC and Intel Macs, MacOS 10.4 and above)
   - InpToUni-win.exe (Windows XP/7)
   - InpToUni-lnx32 (32-bit Linux)
   - InpToUni-lnx64 (64-bit Linux)
3. Keep the application that is appropriate for your operating system, and delete the others. (The files are very small, in the 10 to 50 Kbytes range, so you can also just leave them in the folder.)
4. In the case of MacOS or Linux, make the application file executable.
5. Copy the application file to the directory where you keep InPage files.

To convert an existing InPage file with the name, say, story.inp into a new Unicode text file to be created with the name story.txt, do this:

MacOS

1. Open a Terminal window by double clicking on Terminal in the /Utilities directory.
2. Use the "cd" command to get into the directory of your InPage files.
3. Type the command ./InpToUni-mac story.inp story.txt and press Enter.

CAUTION: If your files or the saved application are in different directories, then make sure to use the right path for each file.

Linux (32- or 64-bit)

1. Use the "cd" command to get into the directory of your InPage files.
2. Type the command appropriate to your system and press Enter:
   - ./InpToUni-lnx32 story.inp story.txt
   - ./InpToUni-lnx64 story.inp story.txt

CAUTION: If your files or the saved application are in different directories, then make sure to use the right path for each file.

Windows XP/7

1. Double-click on the InpToUni-win application to launch it.
2. You will be asked to choose the existing InPage file.
3. Go through the folders as usual to locate and select the file (say, story.inp). Then press OK.
4. You'll be asked to choose the new file where the converted Unicode text will be written (say, story.txt). Type its name or select an existing txt file, then press OK.
5. The converted Unicode text file will be created, and the application will quit.

Processing Converted Unicode Text Files

To read/edit a converted file under MacOS:

1. Open it in TextEdit.
2. Switch TextEdit's mode to RichText and the Text Direction to RightToLeft.
3. Do Edit > Select All to select the whole file in the TextEdit window.
4. Change to a font such as XB Zar, Scheherazade, Lateef, or Geeza Pro.

Note: Without the font change, some Urdu characters may not be rendered properly. Follow a similar procedure for other word processors and for Windows and Linux. (Changing the Text Direction to right-to-left and using a proper font is crucial.)

IMPORTANT: Please be aware that the InPage document's formatting properties (justifications, font sizes and styles, colors, etc.) are lost during the conversion. The conversion mainly consists of text extraction.

### Typesetting Using TeX, LaTeX, XeTeX

#### Skip this section if you are not interested in programmatic typesetting.

The term TeX is used here in a generic sense for TeX or any of its derivatives, such as LaTeX, AMSTeX, ArabTeX, XeTeX, ArabXeTeX, etc.
But when the discussion is about a particular derivative, that system is mentioned by name, e.g., XeTeX. When you use a word processing system, you take formatting actions yourself, and the system keeps displaying the document as it changes in response to your actions. When you use TeX, you put the document contents (text, images, etc.) and the formatting instructions together in one or more tex files, and the system processes them to produce the desired document. The tex files you prepare constitute a TeX program. The TeX system executes this program to produce the desired document, typically in the form of a PDF file. TeX has a steep learning curve, but once mastered it allows you to produce very complex, high-quality documents, and provides you very fine control over the look and feel of the document. TeX is widely used for producing scholarly works, and many scientific journals and conferences require that articles be submitted to them in the form of tex files. Various distributions of TeX are available for the Mac. We highly recommend the TeX Live distribution together with the TeXShop graphics environment for using TeX. For this, download and install the full MacTeX package. The download, installation, and usage instructions are on the TeXShop Web site. The MacTeX package includes most of the components needed for processing Urdu documents. In particular, it includes the system called XeTeX which currently offers the best facilities for Urdu. XeTeX has overcome two limitations that were not satisfactorily addressed by the previously existing derivatives of TeX, and that, in particular, greatly hampered the production of Urdu documents with TeX: 1. For its input, TeX originally used only the ASCII character set, so other characters had to be encoded as ASCII character combinations. XeTeX incorporates Unicode, thus giving access to nearly all of the world's written languages. For example, Urdu characters can now be typed directly within tex files. 2. For its output, TeX originally used only a small number of fonts. More fonts could be added but they had to be specified using METAFONT, a companion program that came with TeX. XeTeX allows one to use most of the fonts installed on one's computer. For example, the program ArabTex could previously be used for Urdu, but the output was restricted to a single, Naskh-style font. Now there are dozens of fonts in different styles that can be used in Urdu documents with XeTeX. So for the TeX approach to typesetting Urdu documents, you have to know LaTeX and XeTeX. We will not provide any details about TeX, LaTeX, and XeTeX beyond some cursory information. You have to learn these on your own! It would make sense for you to start reading about XeTeX only after you have become sufficiently familiar with LaTeX to be able to create some documents with it! There are hundreds of books, articles, and tutorials about LaTeX. An excellent tutorial is The Not So Short Introduction to LATEX2ε. A very comprehensive on-line reference is The LaTeX wikibook. TeXShop itself has a number of online books and tutorials under its Help menu. For XeTeX, the essential reference is The XeTeX typesetting system from SIL International where the system was designed. Also a useful reference is the 100+ page online document with much historical background, examples, and practical hints, The XeTeX Companion. TeXShop makes it very easy to edit and execute TeX programs. 
When you launch TeXShop, it brings up a window in which you can edit your TeX program, i.e., your tex source file. But you should first set TeXShop's Preferences. The most important preferences are in the Source and Typesetting tabs. 1. Click on the Source tab to open it. In the Editor block, check the box with Arabic in it. Under Encoding, select the item Unicode (UTF-8). In the Document Font block, select the item XB Zar - 18 (this should make the TeX source more readable). 2. Click on the Typesetting tab to open it. In the Default Command block, check the Command Listed Below radio button, and in the text field below it type "XeLaTeX". (You can change the Typesetting command to LaTeX or other choices on the editing window itself.) In the Default Script block, make sure that the Pdftex radio button is turned on. After the Preferences are taken care of, you are ready to edit your source file. The TeXShop editor is powerful, and yet very simple and intuitive. For typing the Urdu content, you have to use the Urdu-QWERTY keyboard layout, of course. But make sure to switch to the US English keyboard while typing TeX commands and the symbols that go with them, such as \, &, {, }, [, ], !, %, etc. When numbers and delimiter characters are mixed with Urdu alphabetic text, the sequence of symbols sometimes appears wrong. You just have to tolerate such disorder at present. To avoid confusion, it might be helpful to put Urdu and non-Urdu symbols on different lines. Since preparing TeX files for Urdu documents requires frequent switches between the Urdu keyboard (for text) and the English keyboard (for special characters and TeX commands), you might consider setting up Keyboard shortcuts for that purpose. Refer to the section Keyboard Shortcuts for Changing Keyboards further below. Note that TeXShop's default in the source window is to display the TeX commands in blue, comments in red, and other text in black. Also, you will notice that TeXShop uses the first character of each new line of text in the source window to determine whether to start displaying the line from the left or from the right end of the window. If the first character of the line belongs to a left-to-right script (e.g., English), then the line is started at left. But if the first character of the line belongs to a right-to-left script (e.g., Urdu or Persian or Arabic), then the line is started at right. Characters like space and certain punctuation symbols are considered belonging to left-to-right scripts, and cause the line to be started at left. Once the editing of your tex file is complete, you should click on the Typeset button. TeXShop will process your tex file, and will display the resulting PDF file if the program ran successfully. It will also bring up a Console window with progress and error messages. We now give an example of typesetting an Urdu ghazal using XeTeX. The first image below shows the TeX program, poemRA.tex. XeTeX (actually the program xelatex) executes this file to produce the PDF file poemRA.pdf, shown in the next image. The TeX program uses the fontspec package to gain access to the fonts installed on the computer. The program uses Vafa Khalighi's bidi package for the text's bidirectionality (i.e., to handle left-to-right and right-to-left scripts). The bulk of the poetry formatting is done by the bidipoem package The essential TeX instructions for typesetting consist of the first seven lines and the part between \begin{document} and \end{document}. 
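Since the tex source is shown only as an image here, the following is a minimal sketch of what such a program can look like, using the packages named above. The font name and the couplets are placeholders, the exact preamble of poemRA.tex differs, and the precise couplet syntax should be checked against the bidipoem documentation.

    % A rough sketch, not the actual poemRA.tex. Compile with XeLaTeX, and
    % typeset twice so that bidipoem can justify the hemistichs.
    \documentclass{article}
    \usepackage{fontspec}
    \newfontfamily\urdufont[Script=Arabic]{Scheherazade} % placeholder font name
    \usepackage{bidipoem}
    \usepackage{bidi}   % bidi is conventionally loaded last
    \begin{document}
    \setRTL \urdufont
    \begin{traditionalpoem}
    پہلا مصرع & دوسرا مصرع \\
    تیسرا مصرع & چوتھا مصرع
    \end{traditionalpoem}
    \end{document}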
Note how simple the TeX code is in this case; it really amounts to just putting the lines of the poem within the traditionalpoem environment. IMPORTANT: To make sure that bidipoem justifies the lines of the poem correctly, you need to typeset the document twice (that is, press the Typeset button again after running the program successfully once). To illustrate how TeX makes it easy to add extra flair to the output, the TeX program also puts a decorative border along the page margins. This is done by the block of code in the middle section. The work is done mainly by the fancyhdr package. You need to install on your computer the free font WebOMints GD which can be downloaded from the Internet. The symbols in this font can be used for decorating documents in various ways. Here some of its symbols are being used to assemble the border shown on the output PDF file. Note that in the new font family declaration, we give a name (\w) to the WebOMints GD font, and use the Color option so that all symbols of this font will be in the designated color. The 6 hexadecimal digits represent the code for a shade of turquoise. If you try to typeset a poem with longer lines, then you might get each of its couplets displayed on two lines, in a different poem style. Also, you might need to play with the parameters (62,-18) in the line beginning with \begin{picture} to display the border correctly. ### PDF File Produced by XeTeX, as displayed by TeXShop More complex Urdu documents are best produced in TeX by making use of François Charette's package polyglossia. This package is intended to support texts in multiple languages, including Urdu. It runs on top of the XeLaTex derivative of TeX. The example below shows an Urdu document typeset with the aid of polyglossia. It illustrates a number of features typically needed in an article or scholarly paper, such as: typesetting of titles and section headings; formatting of lists and tables; footnotes; and automatic numbering of sections, list items, and tables. The document also shows how to insert English text in an Urdu document. The document style employed for the sample Urdu document is article; this can have sections and references but not such components as tables of contents. For a book length document, you should use the book or memoir document styles. These styles greatly facilitate and automate much of the work needed in the production of: title pages; table of contents; chapters with sections, subsections, subsubsections, etc.; automatic numbering of lists, figures, tables, etc.; bibliographic references; and indices. The next two images below give the beginning and ending parts of the TeX source file to produce the sample Urdu document. The third image below shows the PDF pages of the Urdu document. (These files were updated on 2014-06-05.) ### Urdu Document in PDF, Produced Using Polyglossia Links to obtain the above Urdu document in PDF form as well as the TeX source to generate the document: The PDF file of the Urdu document is here, and. the full TeX source code to produce the Urdu document is here. (It is best to download these than try to view them in the Web browser.) ### Web Pages #### Skip this section if you are not interested in creating Web pages with Urdu content. Modern web browsers are quite good at interpreting and displaying multi-lingual texts from their Unicode character encodings. Of course, the browser needs to be told that it should expect Unicode material in the web document (usually, an html file) that it is being asked to execute. 
The Unicode character encoding for Urdu and Persian letters, along with the letters of many other languages, is called UTF-8. So to display Urdu text, you have to specify in your web document that its character set is given by UTF-8, as explained next. The particular character set that a web document contains is specified by the meta statement. Near the beginning of your html file you will find some code that looks like this: <meta content="text/html; charset=ISO-8859-1" http-equiv="content-type"> (This is just an example. Your character set might have a name different from "ISO-8859-1".) You have to change the character set declaration to "UTF-8", by replacing the above meta statement by: <meta content="text/html; charset=UTF-8" http-equiv="content-type"> Any Unicode inserted after this meta statement will be displayed as the character that the code represents. The Unicode for Urdu and Persian can be found in the Unicode Arabic page. A table which gives the standard Unicode as well as its html representation, called html numeric character reference, is given here. A very useful online tool is UTF Converter that lets you quickly convert a string of one or more characters to Unicode in various formats. UTF Converter's author, Mark Davis, has a Web site Macchiato with several other very useful Unicode-related utilities. From the table on page 2 of Unicode Arabic page, you can check that the hexadecimal Unicode representations of the Urdu letters Alif, Re, Daal, and Vaao are, respectively, 0627, 0631, 062F, and 0648. Now the html syntax for a hexadecimal code HHHH is &#xHHHH; . So suppose in your html document you insert the following: <center> <big><big><big> &#x0627;&#x0631;&#x062F;&#x0648; </big></big></big> </center> The result will be the word "Urdu" (in Urdu) displayed in 3-size larger letters and centered in a line, as follows: اردو Typing numerical codes in this way is clearly impractical except for displaying just a few characters. Fortunately, you don't have to enter character codes manually if you use the Urdu-QWERTY keyboard layout. The characters typed on this keyboard are automatically converted to their Unicode version and placed in the input. All you have to do is to switch to Urdu-QWERTY on the Input menu at the point in your html file where you desire to insert Urdu text. A caveat is in order here. To prepare html files, you are likely to use some special editor different from TextEdit. We have seen that, in RichText mode, TextEdit processes Urdu letters correctly, displaying the right form of the letter and connecting the letters appropritaely. Other editors, specially the so-called programmer's editors often used to prepare html files, may not do all that. For example, your typed Urdu letters might be displayed in their isolated form from left to right in the order of their entry, without being connected together. Or worse, your typed input might appear garbled in even more annoying ways! If you are looking for an excellent, free html editor that handles Unicode and UTF-8 well, and displays Urdu text correctly, try Arachnophilia. Of course, the readers of your Web page will be able to see the Urdu text correctly only if their system has been configured for multi-lingual processing and has the Urdu fonts installed. In addition, it might be necessary for your readers to set the viewing option of their web browser for "Unicode (UTF-8)" character encoding. 
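Putting these pieces together, here is a minimal sketch of an html file with Urdu content typed directly from the Urdu-QWERTY keyboard rather than entered as numeric character references. The title and sample text are placeholders; dir="rtl" is the standard HTML attribute for right-to-left text, and the font-family list simply names fonts discussed above.

    <html>
    <head>
      <meta content="text/html; charset=UTF-8" http-equiv="content-type">
      <title>Urdu sample page</title>
    </head>
    <body>
      <!-- dir="rtl" makes the paragraph run from right to left -->
      <p dir="rtl" style="font-family: 'XB Zar', 'Geeza Pro'; font-size: 18pt">
        اردو
      </p>
    </body>
    </html>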
### Mathematical and Technical Typing Some mathematical symbols are so frequently needed in technical typing that they have become standard in Mac's English keyboards. So the Urdu-QWERTY keyboard also provides several of these symbols, via option and option-shift keys as usual. Note that the symbols for summation, integration, root, etc., change their orientation to match the right-to-left text direction. In Urdu mathematical notation, the dots of the dotted letters are sometimes omitted. So the Urdu-QWERTY keyboard provides the dotless forms ٮ for ب ڡ for ف ٯ for ق Another practice in Urdu mathematical writings is to sometimes use just the stems of letters, not their full form. Such symbols can be easily generated by adding the "kasheeda" character (ـ) to a letter. For example, the symbol خـ , generated by the key sequence shift-K and shift-" , represents the "imaginary" (in Urdu, خیالی) number i. NOTE: The keyboard suffices only for the casual typing of a few mathematical symbols in a general document. To prepare documents with elaborate mathematical content, the ideal approach is to use TeX/LaTeX/XeTeX. See the section Typesetting Using TeX, LaTeX, XeTeX. ## Keyboard Shortcuts for Changing Keyboards NOTE: For "Changing Keyboards", the standard MacOS terminology is "Changing Input Source" or "Changing Input Method". While editing certain documents, you need to change Keyboards quite often. For example, you may be working on a dictionary. Or, you might be preparing TeX files for Urdu documents, and need to continually switch keyboards between Urdu (for text) and English (for special characters and TeX commands). The standard way for changing Keyboards is to select the desired keyboard in the Input menu at the right end of the Apple menu bar at the top. This is cumbersome and annoying when Keyboard changes are very frequent. So you might like to set up a Keyboard Shortcut for it. MacOS has shortcuts programmed for Keyboard change already. These are: Command-space for toggling between keyboards and Command-shift-space for cycling through the active keyboards. But in MacOS versions 10.5 and above these are disabled by default, because exactly the same shortcuts are enabled for Spotlight searches. If you prefer, you can disable the Spotlight shortcuts and enable the Keyboard shortcuts. To do this in MacOS 10.5 and above, go into the Finder, and select Apple > System Preferences > Keyboard. In the window that opens, click on the Keyboard Shortcuts tab. Select Spotlight in the left column, and disable the shortcut items that show up in the right column. Then select Keyboard & Text Input in the left column, and enable the input source-related items that appear in the right column. Of the two Keyboard shortcuts Command-space and Command-shift-space, the former is certainly easier to type. If you have only two keyboards activated (say, English and Urdu), then the two shortcuts are equivalent. (You can quickly see which keyboards are active by clicking on the Input menu. An icon appears underneath it for each active keyboard.) However, if there are more than two active keyboards, then you might like to interchange the shortcuts. You can change a shortcut by double-clicking on it, and typing over it any desired combination of modifier keys (Command, Control, Shift, etc.) and a regular key (space, letter, number, etc.) ## Installation Problems Most of the reported installation difficulties turned out to have a simple reason: during download or extraction, the file extensions got changed. 
Often a .txt extension was appended to one or more file names. So first please make sure that your Mac shows extensions in file names. For this, move into Finder (for example, by clicking in a Finder window, or on the Finder icon in the Dock, or at a point on the screen which is not occupied by an application window). Then on the Menu bar (the one with the Apple icon at the left), click on Finder, then on Preferences, then on the Advanced tab. Now look at the Show all file extensions item. If the check box on its left does not have a check mark, then click on it so that a check mark appears there. Finally, close the Advanced window. Now you can check whether the extensions of the Urdu-QWERTY files are correct. The downloaded file (UrduQWERTY-v4.zip) and the files that your unzipper extracts (UrduQWERTY.keylayout and UrduQWERTY.icns) should have exactly those names. Change their extensions if necessary, ignoring the Finder's complaint that this could render your files dysfunctional. Another problem some people have encountered is that during editing Urdu letters show up isolated rather than connected together in the normal way. This can happen when the editor being used is different from TextEdit or Bean. For example, at present Microsoft Word does not handle the Naskh and Nastaleeq scripts correctly on the Mac. Even in TextEdit, sometimes Urdu letters appear isolated rather than correctly connected. This is usually due to TextEdit being run in the plain text mode rather than the rich text mode which Urdu editing requires. To fix this problem, start TextEdit, and on the Menu bar click on TextEdit, then on Preferences, and then on the New Document tab. If the Rich Text radio button is not active, click on it. Now close Preferences, and quit TextEdit. When you restart TextEdit, it will use Rich Text as the default for new documents. A related problem that has troubled some people is that in their Urdu files some letters don't seem to have correct shapes. For example, the letters "Goal Hay" or "yay" don't connect to the preceding or following letters properly. The culprit in such cases is nearly always the font used. At present only the X Series 2, Scheherazade, Lateef, and Geeza Pro among Naskh fonts, and Nafees and Jameel Noori among Nastaleeq fonts are known to work correctly with the whole Urdu alphabet. Please let me know if you discover (or design) other well-behaved fonts for Urdu. ## Orthographic Hints ### Diacritical Marks In Urdu, short vowels (eraab) are denoted by diacritical marks that are placed above, below, or to the left of the letter involved. Although usually omitted, they are occasionally needed to remove ambiguity or to show the correct pronunciation of a word. In particular, the tashdeed and madd signs and the zer of izaafat combinations are always helpful to the reader of the text. While composing text, you should type such a mark after typing the letter to which it belongs. The most frequently used marks are: zabar (shift->), zer (shift-<), pesh (shift-P), tashdeed (shift-_), and madd (shift-+). Alif with madd can be typed directly as shift-A. The "jazm" mark (shift-Q), which should print like a tiny "daal", doesn't have that shape in existing fonts. The alternative, also unattractive, is the "sukun" mark (/) of Arabic orthography that looks like a little circle. A complete list of diacriticl marks is given earlier with the keyboard images. 
### Yay

The I and Y keys correspond, respectively, to the maaroof and majhool forms of "yay", popularly referred to as "ChoTi yay" ی and "baRi yay" ے . (See the note below about maaroof and majhool sounds.) Thus, "galee" گلی (meaning lane) is to be typed as G, L, I, and "taaray" تارے (meaning stars) is to be typed as T, A, R, Y.

The form entered by Y does not connect to the next letter. So even a majhool "yay" letter that occurs in the middle of a word should be typed as I. For example, "bayTay" بیٹے (meaning sons) has to be typed B, I, shift-T, Y. Even though both "yay" letters occurring in this word are pronounced with the majhool sound, the first one has to be entered as I.

In Arabic, the letter "yay" has two dots underneath. In Urdu, the two dots are shown only if "yay" appears at the beginning or in the middle of a word, but not when it is the final letter of a word or when it stands alone (e.g., in an alphabet table). If needed, the "yay" with two dots ي can be typed as option-i.

### Noon Ghunna

Noon Ghunna, which appears as the letter Noon but without a dot, is entered as shift-N. Thus "maaN" ماں (mother) is typed as M, A, shift-N. Noon Ghunna adds a nasal quality to the sound of the vowel preceding it.

In the freewheeling, inconsistent way of Urdu orthography, Noon Ghunna is used only at the end of a word. In the middle of a word, even where Noon Ghunna would be appropriate, Urdu just uses the ordinary Noon. Examples: "saaNp" سانپ (snake) has to be entered as S, A, N, P; or "pataNg" پتنگ (kite) has to be entered as P, T, N, G. This inconsistency is forced by the circumstance that in the middle of a word, Noon is written as a shosha with a dot above. Without a dot, such a shosha would be visually quite confusing.

In some very old books, especially Urdu instructional primers, Noon Ghunna was indicated by a tiny inverted "v" (circumflex accent) placed above the Noon. This worked both in the middle and at the end of a word. An equivalent sign, ٘ , is still available as option-n although its use in Urdu went out of style decades ago. Note that, by contrast, Hindi takes the rational approach of signifying the nasal modification by always placing a special mark above the affected letter.

### Hamza

The main forms of this letter are 1) independent ء , entered as shift-4, 2) hamza above "alif" أ , entered as the hyphen key (-), 3) hamza in the middle of a word ئ , entered as U, 4) hamza above "vaao" ؤ , entered as shift-W, and 5) hamza above "Goal Hay" ۂ , entered as the equal key (=). The rules relevant to these forms are the following:

1) If hamza is the last letter of a word, use the independent hamza form (shift-4). Examples: "ziaa" ضیاء (meaning light) is entered by typing shift-J, I, A, shift-4; "zakaa" ذكاء (intelligence) is entered by typing shift-Z, K, A, shift-4.

NOTE: The terminal hamza is usually omitted in modern Urdu publications. For example, the above two words are frequently spelled as "ziaa" ضیا and "zakaa" ذكا .

NOTE: Only the words of Arabic origin can have a terminal hamza. It is incorrect to append a hamza to the words derived from other languages. For example, the words "Asia" ایشیا , "Australia" آسٹریلیا , "Angela" اینجلا , or "boo" بو (smell) should not be spelled with a terminal hamza. An exception is made in Urdu when a hamza is needed for the "izafat" combination; e.g., "Asia-e Kuchak" ایشیائے کوچک (Little Asia) or "boo-e gul" بوئے گل (the flower's fragrance).
But the hamza used in such combinations is specific to Urdu spelling; in Persian, the same combinations contain only the yay, not the hamza. So the above Urdu phrases are spelled in Persian as "Asia-e Kuchak" ایشیای کوچک or "boo-e gul" بوی گل . 2) If the letter hamza occurs in the middle of a word, use the key U for it. When typed, it is displayed as a hamza over the letter "yay" ئ. But as soon as the next letter is typed, the yay disappears, and the correct combination of hamza and the next letter is displayed. Examples: "ghaael" گھائل (wounded) entered by typing G, H, A, U, L; "chaae" چائے (tea) entered by typing C, A, U, Y; "na-i" نئی (new) entered by typing N, U, I. 3) However, even in the middle of a word if a hamza precedes a "vaao", and this pair starts an isolated subword, then the two should be typed together as the single "hamza above vaao" key (shift-W). (To start an isolated subword, this pair should come after an alif, vaao, daal, ray, etc.) Example: "gaaoN" گاؤں (village) should be entered by typing G, A, shift-W, shift-N, and not G, A, U, W, shift-N which would result in the wrong shape گائوں ! The isolated subword condition is important. Otherwise just a medial hamza form (key U) is to be used. Example: "ga-u maataa" گئو ماتا (Mother Cow) should be entered by typing G, U, W, space, M, A, T, A; Typing G, shift-W, space, M, A, T, A would result in the wrong shape گؤ ماتا ! 4) The "hamza above Goal Hay" ۂ occurs in "izaafat" combinations derived from Persian, and it is helpful to add a "zer" sign below it. Examples: "sitaara-e shaam" ستارۂِ شام (evening star) should be entered by typing S, T, A, R, =, shift-<, space, X, A, M. Or, "naala-e dil" نالۂِ دِل (heart's cry) should be entered as N, A, L, =, shift-<, space, D, (optionally, shift-<), L. The form of the letter "Goal Hay" with a hamza above can occur only in the terminal and isolated positions of a word, while the form without a hamza can occur in all positions---initial, medial, terminal or isolated. One should be careful in choosing the correct form of "Goal Hay" in "izaafat" combinations. The form without hamza should be used when the "Goal Hay" ending a word is pronounced as H, as in "tah" تہ (layer or bottom). The form with a hamza above should be used when the Goal Hay ending a word is pronounced as A or E, as in "gila" گلہ (complaint). This point is taken up again in the next subsection. ### Hay "BaRi Hay" ح (humorously called "Halvay Vaali Hay") is entered by typing shift-H. Thus "muhabbat" محبّت (love) is entered by typing M, shift-H, B, (optionally shift-_ for tashdeed), T. "Dochashmi Hay" ھ is entered by typing unshifted H. In modern Urdu orthography, this letter is used only in combination with some consonant (which precedes it), and its purpose is to modify that consonant's sound to make it an "aspirated letter". "Goal Hay" ہ , entered by typing the letter O, is pronounced separately by itself rather than being just used to "aspirate" another consonant. For example, the "Hay" sound is pronounced independently in the word "kahaa" كہا (said); so this word is typed with a "Goal Hay", as K, O, A. This is in contrast to the word "khaa" كھا (Eat!) where the "Hay" is used to aspirate the "k" sound; so this word is spelled with a "Dochashmi Hay", as K, H, A. In the word "majhool" مجہول even though "h" follows "j", no aspiration takes place since the two letters belong to different syllables ("maj-hool") and are pronounced independently. 
This word should therefore be typed as M, J, O, W, L, and not M, J, H, W, L which would appear incorrectly as مجھول ! In general, "Dochashmi Hay" should not be used in any Urdu word that is derived from Arabic or Persian, since these languages do not have aspirated letters. Aspirated letters can occur only in the words of Indic origin. There is an exception to the rule that "Goal Hay" must be pronounced with an "h" sound. At the end of a word, "Goal Hay" is pronounced as an A or E, not as H; for example, the word تكیہ typed as T, K, I, O is pronounced as "takya" (pillow). An exception to that exception occurs sometimes, and the terminal "Hay" is actually pronounced as H, not A or E. For example, the word "shah" شہ (meaning check [of chess]) is typed X, O. The word "shaah" شاہ (meaning king), typed as X, A, O, is another example where a terminal "Goal Hay" is pronounced with an "h" sound. However, the oddities of Urdu orthography do not end here. In the words ending in a pronounced Goal Hay which is not isolated but connected to the previous letter, the Hay is often written twice! For example, the word "kah" (meaning say!) is often written as كہہ , entered by K, O, O; or "sah" (meaning bear!, from the verb "sahna") as سہہ , entered by S, O, O; or "faqeeh" (expert of fiq-h [jurisprudence] ) as فقیہہ , entered by F, Q, I, O, O. The purpose of doubling the Goal Hay is ostensibly to avoid its being wrongly pronounced as A or E. For example, without the extra Goal Hay the above words "kah" كہہ and "sah" سہہ could be easily confused with the words "ke" كہ (that) and "se" سہ (Persian three), respectively, in which the terminal Goal Hay is indeed pronounced as E. But such is clearly not the case with "faqeeh" فقیہہ , where the extra Goal Hay actually introduces the hazard of this word being confused with "faqeeha" (female expert of fiq-h). The reason for writing the Goal Hay twice in this word seems to be just the whim of the scribe rather than any logical need. In general, you will find that the spelling variation of doubling the Goal Hay is practiced unpredictably and rather inconsistently! NOTE: When using a Persian or Arabic font, use the Option-o key rather than the plain o key for the letter Goal Hay (ہ); otherwise the letter might not be rendered properly. ### Punctuation The end of an Urdu declarative sentence is marked with a small dash rather than a period. But the period key itself generates the dash in the Urdu-QWERTY keyboard. Other punctuation symbols such as question mark, exclamation, comma, semicolon, parentheses, brackets, braces, double and single quotation marks, etc., are entered with the usual keys. Punctuation symbols are appropriately reversed or inverted to match the right-to-left flow of text. ### Numbers and Dates The same digit keys of the Urdu-QWERTY keyboard can be used to type digits in three different shapes: (1) Western digit characters when no modifier key (SHIFT, OPTION, or CAPS LOCK) is pressed; (2) "Eastern Arabic" digit characters when CAPS LOCK is pressed and SHIFT or OPTION are not pressed; and (3) "Traditional Arabic" digit characters when OPTION is pressed. The Traditional Arabic digit forms are used in Arabic documents. The Eastern Arabic forms are commonly used in Urdu documents with Nastaleeq fonts and in Persian documents with both Naskh and Nastaleeq fonts. An Urdu document produced in a Naskh font looks better when the Traditional rather than Eastern Arabic digits are used. The CAPS LOCK key has no effect on other keys. 
So if you like your digits to be displayed in their Urdu, and not the Western, shapes, then you can just leave CAPS LOCK depressed, and release it only to type a symbol which requires both SHIFT and OPTION keys to be pressed. With Arabic digits, the decimal point sign "٫" and the thousands separator "٬" are, respectively, typed as option-period and option-comma. Examples: The number 3.1416 is typed as option-3, option-period, option-1, option-4, option-1, option-6, and is displayed as ۳٫۱٤۱٦ . One million is typed as option-1, option-comma, option-0, option-0, option-0, option-comma, option-0, option-0, option-0, and is displayed as ۱٬۰۰۰٬۰۰۰ . NOTE: Separating groups of digits by thousand, million, billion. etc., is a relatively recent practice. The more traditional separation is by hazaar, laakh (lac), karoR (crore), arab, kharab, etc., for which the separating commas are placed after 3, 5, 7, 9, 11, ... digits from the right. It is traditional in writing dates to insert a "date separator" or "small slash" symbol ؍ (shift-3) between the day number and month word, or between the numbers designating day, month, and year. The abbreviation equivalent to "A.D." is a Hamza (which looks as the stem of the letter Ain) entered by typing shift-4. Example: August 14, 1947 is typed as option-1, option-4, shift-3, A, G, S, T, space, option-1, option-9, option-4, option-7, shift-4, and is displayed as ١٤؍اگست ۱۹٤۷ء . Alternatively, this date can be typed as option-1, option-4, shift-3, option-8, shift-3, option-1, option-9, option-4, 7, shift-4, and is displayed as ١٤؍۸؍۱۹٤۷ء . NOTE: The Urdu-QWERTY keyboard has made Western digit characters the default action of digit keys because Urdu publications are switching more and more to Western numerals. This move is accompanied by the adoption of British-American style of formatting numbers, that is, using a period for the decimal point and a comma for the thousands separator. The apostrophe (') key on the Urdu-QWERTY keyboard generates the period symbol, and can be typed as the decimal point to go with Western digit characters. For example, to produce 3.1416, just type the key sequence 3,',1,4,1,6. Western numerals with decimals can thus be typed without needing to switch to the English keyboard or to use any modifier keys. ### Inter-word Spaces. (And A Lament!) People accustomed to Nastaleeq publications will discover that the documents composed in Naskh have spaces and other punctuation separating each pair of adjacent words. This is the correct and rational approach to word processing, shared by every non-Nastaleeq word processor in the world. Nastaleeq word processors stand alone in suppressing inter-word spaces. The user, of course, still has to type spaces to signify ends of words, but those spaces are removed and the words follow each other in a continuous stream. Just imagine reading this English page if it did not include any spaces between words. Deciphering such a character stream requires, in essence, that you already know what you are trying to learn! But that's exactly what is expected of you when you are reading a text composed in Nastaleeq. Because some Urdu letters (e.g., alif, daal, re, vao) do not connect to the next letter in a word, Urdu words consist of isolated parts that could themselves be thought of as words. For example, the word درخواست (meaning request or appeal) contains as potential words خواست , است , رخوا , خوا , رخو , درخوا , درخو , در , and many more. 
When inter-word spaces are used, there is no confusion between any unintended "words" and the intended words because the beginning and end of each intended word is clearly delimited. But in the text edited with current Nastaleeq word processors, the only reason you are able to skip over the unintended "words" is that you already know the intended words, not because the text display is of any help!

When computer typesetting of Nastaleeq was first introduced for Urdu in the 1980s, inter-word spaces were actually employed. The practice of suppressing them is more recent. This unwise retrogression, justified in the name of "tradition and esthetics", is an unnecessary obstacle to anyone trying to learn Urdu. The Nastaleeq script already suffers from too many complexities, obscurities, irregularities, and inconsistencies. It makes no sense to invent more barriers to the accessibility of Urdu. The practice simply prolongs the time it takes students to master the language. It is also hindering the development of optical character recognition and other important electronic processing technologies for Urdu.

Exercise for the reader: Find out what ghatrabood is, and enjoy the story.

## Urdu Transliteration of English Words

As more and more words are being imported in Urdu from English, and more and more foreign geographical and personal names are being mentioned in Urdu writings, there is increasing need to systematize the transliteration of English words into Urdu. Not everyone spells the English words in Urdu in the same way, but some common conventions are the following:

All English consonants have reasonable equivalents in the Urdu alphabet. Conventionally, the letters "D" and "T" are rendered as the Urdu letters "ڈ " and "ٹ " , respectively. Both "V" and "W" are rendered as "و " . The digraph "th" is rendered as either "د " or "تھ ". Thus, "the" is transliterated as "دی " and "three" as "تھری ".

If an English word begins with the letter "S" that is immediately followed by one of "c", "k", "m", "n", "p", "q", and "t", then the Urdu transliteration often adds an initial "alif" to the normally expected "seen". Examples of this rule are the transliteration of "school" as "اِسکول ", "sketch" as "اِسکیچ ", "smuggler" as "اِسمگلر ", "Spain" as "اِسپین ", and "Steven" as "اِسٹیوین ". However, this rule (addition of "alif") is not followed uniformly. Also the "alif" is never added when the letter following "S" is "l". Thus "slate" is transliterated as "سلیٹ " and "slice" as "سلائِس ".

The transliteration of some English vowels is not phonetically correct, but the practices are too firmly entrenched to do anything about them. Two widely used conventions are the following:

1. The long "a", the open "o", and the diphthongs like "au" and "aw" are generally rendered as "alif". For example, "Paul" is transliterated as "پال ", even though "پَول " would be more correct phonetically. Other examples are: "Dawn" as "ڈان ", "Law" as "لا ", "lot" as "لاٹ ", "collar" as "کالر ", "hot dog" as "ہاٹ ڈاگ ", and "New York" as "نیو یارک ". The diphthong "oi" sometimes gets the same treatment. Thus, "vice" and "voice" (as in "Voice of America"!) are both transliterated as "وائس ".

2. The vowel "e" is usually rendered as "Yeh" (with a majhool sound). Thus, "Web" is transliterated as "ویب " even though "وِب " would be more correct phonetically. The rationale for using "Yeh" (with a majhool sound) for "e" is, presumably, to reserve the "zer" (–ِ) diacritic for rendering "i".
For example, "bell" is transliterated as "بیل " rather than as "بـِل " so that the latter can be the transliteration of "bill". Other examples are: "set" as "سیـٹ ", "cassette" as "کیسیـٹ ", "pen" as "پین ", and "vowel" as "واویل ". ## Ottoman Turkish The term Ottoman Turkish refers to both the language and the Arabic-based script in common use in Turkey during the Ottoman period. The Ottoman and Modern Turkish languages do not differ much, but their scripts are totally different because Modern Turkish uses a script based on Latin characters. However, since there exists a huge volume of older Turkish publications and manuscripts written in the Ottoman script, this script still remains of interest as an essential scholarly tool and is taught in most departments of Turkish Studies. It is suitable, for example, for preparing the contents of older texts for linguistic analyses. There is exactly one character, "Saghir Nun" ڭ (key Option-g) which is unique to Ottoman Turkish and is not used in Urdu or Persian. This character has now been added to the Urdu-QWERTY keyboard layout. So with this keyboard layout installed, your computer will be ready for Ottoman Turkish texts. To download and install this keyboard, see the earlier section Installing Urdu keyboard layout. To activate the Ottoman Turkish input, follow the steps for Activating Urdu Input. Also, you will need to install some suitable fonts in order to be able to read, edit, and produce documents in Ottoman Turkish. Some beautiful, freely available fonts are recommended in Installing Urdu Fonts. Finally, if you prefer to do the Ottoman Turkish work on a Windows computer, there is an Urdu-QWERTY keyboard layout for Windows also, with key settings identical to the Mac one. The fonts and general instructions given here will work on the Windows machines also. See DIGRESSION: Urdu QWERTY for Windows. The Urdu-QWERTY keyboard provides the various diacritics employed in older Ottoman texts (as well as in Urdu and Persian). The following diacritics are used in a standard way: 1. Fathah –َ– (key Shift->), Kesrah –ِ– (key Shift-<), Dammah –ُ– (key Shift-P), 2. Jazm –ْ– (key /), 3. Fathatain –ً– (key Shift-~), Kesretain –ٍ– (key ), Dammatain –ٌ– (key Shift-8), 5. Teshdid –ّ– (key Shift-_), 6. Hamza above –ٔ– (key Shift-&). Some diacritics that are used in old Ottoman Turkish texts in non-standard ways are the following: 1. "Wasla" is used over "alef" to form "alef-wasla" ٱ (key Shift-M). Alef-Wasla indicates a "silent" alef at the beginning of the second word of a two-word Arabic combination, such as "Daar-a (a)l-ilm" دارٱلعلم (House of Knowledge). 2. The "little v" –ٚ– (key Option-j) and "little inverted v" –ٛ– (key Option-h) are superscripts that are used for various purposes, e.g., for marking the letter "Vao" و (key w) as the v-sounding consonant rather than a vowel; distinguishing "rounded" vowel sounds "ö" and "ü" from "unrounded" vowel sounds "o" and "u" denoted by the same letter "Vao" و (key w); distinguishing vowel sounds "o" from "u" and "i" from "e", denoted, respectively, by the same letters "Vao" و (key w) and "Yeh" ی (key i); etc. The example below shows a sample text typed in Ottoman Turkish (right), together with its equivalent in Modern Turkish (left). Incidentally, this entire two-column table has been prepared by using nothing else but the Mac OS's built-in TextEdit utility. 
The Ottoman text part that you see on the right in the table has been typed using the exquisite Lateef font, which can be downloaded and installed as described in the section Installing Urdu Fonts. The Turkish text is supposedly the translation of a French text, Jean de La Fontaine's fable La Cigale et la Fourmi (The Cicada And The Ant). Both the typed (right) and the transcribed (left) versions shown below have been taken from the Web site of the Department of Turkish Studies at the University of Michigan. There you can also see the image of the original manuscript in the Osmani calligraphic style (referred to as Nastaleeq in Urdu and Persian calligraphy). ## Comparing Persian and Urdu Alphabets The Urdu alphabet contains the following additional symbols that do not exist in Persian: 1. The "retroflex" letters "Tay" ٹ , "Daal" ڈ , and "Ray" ڑ , each of which has a tiny "toey" mark above. These letters are typed with the keys shift-T, shift-D, and shift-R, respectively. 2. "Aspirated letters", formed by combining certain consonants with "Dochashmi Hay", e.g., "bh" بھ , "ph" پھ , "th" تھ , "Th" ٹھ , etc. Note that aspirated letters are really combinations, and do not count as letters in the Urdu alphabet. 3. "Noon Ghunna" ں (key shift-N). 4. In Urdu, it is traditional to count "Goal Hay" ہ (key O) and "Dochashmi Hay" ھ (key H) as two different letters. In Persian, these are not different letters but simply two visually different forms of the same letter. Most commonly, the "Dochashmi Hay" form is used when the letter occurs in the initial or medial position in a word, and the "Goal Hay" form is used when the letter occurs isolated or in the terminal position. 5. In Persian, "Hamza" ء (keys shift-4, -, U, shift-W, and =, depending on the context) is not considered a separate letter but a diacritical mark. Urdu and Persian diverge from Arabic in the treatment of "Hamza". In Arabic, "Alif" is a vowel, and "Hamza" is a consonant that represents the glottal stop. In Urdu and Persian, this consonant is written as an "Alif" when it starts a word and as a "Hamza" when it occurs in the middle or end of a word. Thus "Alif" is both a vowel and a consonant. 6. "BaRi Yay" ے (key Y). In Persian, this is not a different letter, but just a visual variant of "ChoTi Yay" ی . It is used for calligraphic effect in decorative writing, and is sparingly employed in computer-generated text. 7. The combination "Laam Alif" لا used to be listed as a separate letter in old Urdu primers. To continue doing that is anachronistic. That "letter" fulfills no need and serves no purpose. The Urdu-QWERTY keyboard doesn't have any single key assigned to the "Laam Alif" combination. Here is a summary of the alphabets: Arabic has 28 letters. To those, Persian adds "pay" پ , "chay" چ , "zhay" ژ , and "gaaf" گ , and thus has a total of 32 letters. Urdu, somewhat arguably, has 39 letters, counting the following as additional letters: "Tay" ٹ , "Daal" ڈ , "Ray" ڑ , "Noon Ghunna" ں , "Hamza" ء , "Dochashmi Hay" ھ , and "BaRi Yay" ے . "Noon Ghunna" ں , "Hamza" ء , "Dochashmi Hay" ھ , and "BaRi Yay" ے   are letters in a rather weak sense, since no Urdu word can begin with these. (Hence, Urdu dictionaries do not dedicate chapters to these as they do to regular letters.) Urdu and Persian differ markedly in the pronunciation of vowels. 1. In Persian, the vowel "Alif" is generally pronounced like au in the English word maul. In Urdu, the same vowel is pronounced like the vowel "a" in father. 2. 
In Persian, the short vowels "zer" and "pesh" have majhool sounds and the long vowels "Vaao" and "Yay" have maaroof sounds. (See the note below about maaroof and majhool sounds.) In Urdu, the same vowels do double duty to represent both maaroof and majhool sounds. These differences do not affect writing unless special marks are used to distinguish maaroof and majhool sounds.

There are minor variations in the placement of "Hamza" between Urdu and Persian orthographic styles. But the needed forms in all cases are adequately provided by the keyboard and the fonts that we have recommended.

The standard ("educated person's") pronunciation of consonants is generally identical in Urdu and Persian, and often different from Arabic. Some of the similarities and differences are as follows:

1. The consonants contain some groups of separate letters that are homophones (pronounced with the same sound) in Urdu and Persian. These groups of letters and their approximate English pronunciations are:

• [ ا , ع , ء ] pronounced as the glottal stop;
• [ ح , ہ ] pronounced as "H";
• [ ت , ط ] pronounced as "T";
• [ ث , س , ص ] pronounced as "S";
• [ ذ , ز , ض , ظ ] pronounced as "Z".

In Arabic, the letters within each of these groups have distinct pronunciations. (Note: "Alif" is a vowel, not a consonant.) The Arabic pronunciation is sometimes imitated in Persian and Urdu speech by religious clerics. But in the standard Persian and Urdu pronunciation, the letters in each group have identical sounds. The Urdu-QWERTY keys for these consonants are in a table given earlier.

2. When occurring as a consonant, the letter "Vaao" و (key W) is pronounced like "v" in Urdu and Persian, but like "w" in Arabic.

3. The letter "Qaaf" ق (key Q) has the same pronunciation in Urdu and Arabic but a different one in Persian. In Persian, the letters "Qaaf" ق and "Ghain" غ (key Shift-G) are generally pronounced alike, with the same sound as that of "Ghain" غ in Arabic and Urdu.

Since we have called our keyboard phonetic, we wanted to relate the pronunciation of the alphabet letters with the keys being used to enter them. The tedious details given above will perhaps help you in remembering the keys. As you can see, some Persian and Urdu letters are hard to phonetically map to a Latin-based keyboard!

## Note on Maaroof and Majhool Vowels

There is an old classification of certain vowel sounds as maaroof (literally, well-known) or majhool (literally, unknown or unfamiliar). The difference between these can be illustrated with English words as follows:

1. Short vowel mark Zer: pill (maaroof), pell or bell (majhool)
2. Short vowel mark Pesh: pull (maaroof), * (majhool)
3. Long vowel letter Vaao: pool (maaroof), pole (majhool)
4. Long vowel letter Yay: peel (maaroof), pale (majhool).

* The majhool pesh is difficult to illustrate in English because the letter "O", which is closest to that vowel, is pronounced in several different ways. But this is the vowel sound found in the Persian words "gol" گل (meaning flower) and "sokhan" سخن (utterance). An Urdu example is "bahot" بہت (meaning very or a lot). But beware that the Hindi-influenced pronunciation of this word is "bahut" which has a maaroof, not majhool, vowel sound.

In general, note that:

1. The short majhool vowel sounds represented by Zer and Pesh occur in Urdu and Persian but not in Hindi or Arabic.
2. The long majhool vowel sounds represented by Vaao and Yay occur in Urdu and Hindi but not in Persian or Arabic.
## Acknowledgement Urdu-QWERTY was designed with the aid of Ukelele, a keyboard layout editor for MacOS. I thank John Brownie, the author of Ukelele, for developing this melodious software and for making it available under a freeware license. I also wish to thank Amal Ahmed, Aaron Jakes, Muhammad Javed, Shebab Javed, Karan Misra, Knut S. Vikor, and Muhammad Yusaf
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.760003387928009, "perplexity": 5358.317681200276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007056.0/warc/CC-MAIN-20141125155647-00216-ip-10-235-23-156.ec2.internal.warc.gz"}
https://arxiv.org/abs/1703.09614
physics.chem-ph

# Accessing dark states optically through excitation-ferrying states

Abstract: The efficiency of solar energy harvesting systems is largely determined by their ability to transfer excitations from the antenna to the energy trapping center before recombination. Dark state protection, achieved by coherent coupling between subunits in the antenna structure, can significantly reduce radiative recombination and enhance the efficiency of energy trapping. Because the dark states cannot be populated by optical transitions from the ground state, they are usually accessed through phononic relaxation from the bright states. In this study, we explore a novel way of connecting the dark states and the bright states via optical transitions. In a ring-like chromophore system inspired by natural photosynthetic antennae, the single-excitation bright state can be optically connected to the lowest energy single-excitation dark state through certain double-excitation states. We call such double-excitation states the ferry states and show that they are the result of accidental degeneracy between two categories of double-excitation states. We then mathematically prove that the ferry states are only available when N, the number of subunits on the ring, satisfies N = 4l + 2 (l being an integer). Numerical calculations confirm that the ferry states enhance the energy transfer power of our model, showing a significant energy transfer power spike at N = 6 compared with smaller N values, even without phononic relaxation. The proposed mathematical theory for the ferry states is not restricted to this one particular system or numerical model. In fact, it is potentially applicable to any coherent optical system that adopts a ring-shaped chromophore arrangement. Beyond the ideal case, the ferry state mechanism also demonstrates robustness under weak phononic dissipation, weak site energy disorder, and large coupling strength disorder.

Subjects: Chemical Physics (physics.chem-ph); Biological Physics (physics.bio-ph); Quantum Physics (quant-ph)
Cite as: arXiv:1703.09614 [physics.chem-ph] (or arXiv:1703.09614v3 [physics.chem-ph] for this version)

## Submission history

From: Zixuan Hu
[v1] Fri, 24 Mar 2017 20:17:14 UTC (1,084 KB)
[v2] Thu, 30 Mar 2017 18:46:32 UTC (1,085 KB)
[v3] Tue, 5 Sep 2017 14:33:34 UTC (1,283 KB)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8702064156532288, "perplexity": 3056.899938540826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912206016.98/warc/CC-MAIN-20190326200359-20190326222359-00109.warc.gz"}
https://mersenneforum.org/showthread.php?s=109f85e9909e9608ad3bd29313e94ceb&t=26586
mersenneforum.org
Prime95: INI_FILE settings limited to 80 characters

2021-03-10, 18:53  #1
sp00n
Dec 2020
32 Posts

Prime95: INI_FILE settings limited to 80 characters

This is taken from the source file for version 30.3b6. In the common.c file the various INI settings are defined as e.g.

Code:
IniGetString (INI_FILE, "local.ini", LOCALINI_FILE, 80, LOCALINI_FILE);
IniGetString (INI_FILE, "worktodo.ini", WORKTODO_FILE, 80, WORKTODO_FILE);
IniGetString (INI_FILE, "results.txt", RESFILE, 80, RESFILE);

which limits the value to 80 characters. I was trying to write the log file to a different directory than where the binary file and the prime.txt/local.txt are, and if you're using a longer file name (e.g. by adding the date and time) or a longer directory name, these 80 characters can be exceeded, which in turn cuts off the name of the file that is generated in the provided directory.

Example:
Directory of the prime95.exe: D:\Benchmarking\Prime95\30.3b6\binaries\
Directory of results.txt file: D:\Benchmarking\Prime95\30.3b6\logs\
Name of results.txt: Prime95_results_2020-12-31_23_59_59_FFT_248-8192.txt
Expected file name: D:\Benchmarking\Prime95\30.3b6\logs\Prime95_results_2020-12-31_23_59_59_FFT_248-8192.txt
Actual file name (cut off after 80 characters): D:\Benchmarking\Prime95\30.3b6\logs\Prime95_results_2020-12-31_23_59_59_FFT_248

Is there any particular reason why there's a limit to 80 characters and not to ~260 characters like Windows supports?

I seem to have been able to work around this by setting the WorkingDir= directive to one level above the directory where the .exe is located, then setting prime.ini=, local.ini= and results.txt= as a relative path from that directory. It appears to be working, but it seems unnecessarily complicated. These are the resulting entries in the prime.txt; of course, they're only examples. (I also find it a bit weird that you can set the file name for the prime.ini/txt in that exact prime.txt.)

Code:
WorkingDir=D:\Benchmarking\Prime95\30.3b6
prime.ini=binaries\prime.txt
local.ini=binaries\local.txt
results.txt=logs\Prime95_results_2020-12-31_23_59_59_FFT_248-8192.txt

So I'd suggest to change the 80 character limit to something higher in a future release so that these kinds of workarounds aren't required anymore. (Also, for more than just stress testing, you'd probably have to modify the other settings as documented in undoc.txt to the relative path as well.)

Last fiddled with by sp00n on 2021-03-10 at 18:53

2021-03-10, 21:55  #2
ixfd64
Bemusing Prompter
"Danny"
Dec 2002
California
24×5×31 Posts

The 80-character limit is a relic of a bygone era when most computer screens could not display more than 80 columns. I agree it's no longer relevant in this day and age.

Last fiddled with by ixfd64 on 2021-03-11 at 01:50

2021-03-10, 22:58  #3
Prime95
P90 years forever!
Aug 2002
Yeehaw, FL
175418 Posts

Prime95 is a relic. There are 8.3 limits in some of the code. I'll boost the limits you mentioned in 30.5.

2021-03-11, 01:46  #4
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
3·43·53 Posts

Quote:
Originally Posted by Prime95
Prime95 is a relic. There are 8.3 limits in some of the code. I'll boost the limits you mentioned in 30.5.

It would be great if the obfuscated-exponent filename consequences of that residual 8.3-ness went away. Quick, what exponent does file pA852841.residues relate to? pA851779.proof?
2021-03-11, 15:25  #5
LaurV
Romulan Interpreter
"name field"
Jun 2011
Thailand
11×919 Posts

Quote:
Originally Posted by kriesel
Quick, what exponent does file pA852841.residues relate to? pA851779.proof?

Plus one for that! Well spotted! May George be healthy and have free time in the future, so he fix all those relics...

Last fiddled with by LaurV on 2021-03-11 at 15:26

2021-03-11, 16:08  #6
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
11010101101012 Posts

Also p8U82187 and related .bu* (dated today and yesterday, produced by prime95 v30.4b9 LL DC of M55882187). There are p(clear 9 digit exponent) files produced in the same folder during the same time frame, from other work types from other workers on that instance. pA853833.* from PRP-CF of M10853833 (and more examples like the above)

Last fiddled with by kriesel on 2021-03-11 at 16:09

2021-03-11, 16:30  #7
chalsall
If I May
"Chris Halsall"
Sep 2002
246478 Posts

Quote:
Originally Posted by LaurV
Plus one for that! Well spotted! May George be healthy and have free time in the future, so he fix all those relics...

+1! Thanks, George!

2021-03-11, 16:53  #8
sp00n
Dec 2020
32 Posts

Quote:
Originally Posted by Prime95
Prime95 is a relic. There are 8.3 limits in some of the code. I'll boost the limits you mentioned in 30.5.

Great, thanks.

2021-03-12, 04:45  #9
LaurV
Romulan Interpreter
"name field"
Jun 2011
Thailand
100111011111012 Posts

Quote:
Originally Posted by kriesel
Also p8U82187 of M55882187 pA853833.* from PRP-CF of M10853833 (and more examples like the above)

To be fair, we know (and always knew) how to decode those, there is some explanation in those .txt files sent with P95, which no one reads them ever. But it would be indeed wonderful if George fix the checkpoint names and temp file names, to something more clear and palatable, considering the new invention called "long file names". We wouldn't mind zero-filled names to 9 positions, so we can sort them properly in the folder when there are different numbers of digits (edit: like for example M098765432_blah_blah.bkp, to be sorted before M123456789_blah_blah.bkp, and not M98765432_blah_blah.bkp).

Last fiddled with by LaurV on 2021-03-12 at 04:51

2021-03-12, 17:25  #10
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
3·43·53 Posts

The readme.txt file list section seems to indicate they'll have alpha first char, numeric ASCII for exponent. But the 7 N's don't fit even current DC exponents.

Code:
cNNNNNNN,cNNNNNNN.buN  Intermediate files produced during certification runs.
pNNNNNNN,pNNNNNNN.buN  Intermediate files produced by prime95.exe to resume computation where it left off.
pNNNNNNN.residues      Large intermediate file produced during PRP test for constructing a PRP proof.
pNNNNNNN.proof         PRP proof file.
eNNNNNNN,eNNNNNNN.buN  Intermediate files produced during ECM factoring.
fNNNNNNN,fNNNNNNN.buN  Intermediate files produced during trial factoring.
mNNNNNNN,mNNNNNNN.buN  Intermediate files produced during P-1 factoring.

I didn't see an explanation for the leading or embedded alpha characters. Nothing relevant to that found searching readme, whatsnew, or undoc, for "file" or "8.3". Where's it to be found?

Last fiddled with by kriesel on 2021-03-12 at 17:26

2021-03-12, 20:45  #11
Prime95
P90 years forever!
Aug 2002
Yeehaw, FL
29·277 Posts

Quote:
Originally Posted by kriesel
Where's it to be found?

The usual answer: dig into C code.
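For readers who would rather see the failure mode in isolation than dig through the Prime95 sources, the following stand-alone C sketch (an illustration written for this thread summary, not actual Prime95 code) reproduces the truncation: a fixed 80-character destination buffer cuts off the long results path from post #1, while a roughly MAX_PATH-sized buffer keeps it intact.

Code:
#include <stdio.h>
#include <string.h>

/* Illustration only -- not actual Prime95 source. */
static void copy_setting(char *dest, size_t dest_size, const char *value)
{
    /* mimic an IniGetString-style copy into a fixed-size buffer */
    strncpy(dest, value, dest_size - 1);
    dest[dest_size - 1] = '\0';          /* always NUL-terminate */
}

int main(void)
{
    const char *path =
        "D:\\Benchmarking\\Prime95\\30.3b6\\logs\\"
        "Prime95_results_2020-12-31_23_59_59_FFT_248-8192.txt";

    char small[81];   /* the historical 80-character limit */
    char large[261];  /* roughly the Windows MAX_PATH limit */

    copy_setting(small, sizeof small, path);
    copy_setting(large, sizeof large, path);

    printf("80-char buffer : %s\n", small);  /* truncated after 80 characters */
    printf("260-char buffer: %s\n", large);  /* full path preserved */
    return 0;
}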
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44117483496665955, "perplexity": 11222.34375308469}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00248.warc.gz"}
http://sb2cl.ai2.upv.es/content/differential-algebra-control-systems-design-computation-canonical-forms
# Differential Algebra for Control Systems Design. Computation of Canonical Forms.

Title: Differential Algebra for Control Systems Design. Computation of Canonical Forms.
Publication Type: Journal Article
Year of Publication: 2013
Authors: Picó-Marco E
Journal: Control Systems Magazine
Volume: 33
Issue: 2
Start Page: 52
Pagination: 52 - 62

Abstract: Many systems can be represented using polynomial differential equations, particularly in process control, biotechnology, and systems biology [1], [2]. For example, models of chemical and biochemical reaction networks derived using the law of mass action have the form

ẋ = Sv(k, x),   (1)

where x is a vector of concentrations, S is the stoichiometric matrix, and v is a vector of rate expressions formed by multivariate polynomials with real coefficients k. Furthermore, a model containing nonpolynomial nonlinearities can be approximated by such polynomial models as explained in "Model Approximation". The primary aims of differential algebra (DALG) are to study, compute, and structurally describe the solution of a system of polynomial differential equations,

f(x, ẋ, ..., x^(k)) = 0,   (2)

where f is a polynomial [3]-[6]. Although, in many instances, it may be impossible to symbolically compute the solutions, or these solutions may be difficult to handle due to their size, it is still useful to be able to study and structurally describe the solutions. Often, understanding properties of the solution space and consequently of the equations is all that is required for analysis and control design.
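To make the mass-action form in (1) concrete, here is a small illustrative example constructed for this summary rather than taken from the article: a single reaction A + B → C with rate constant k₁.

\[
x = \begin{pmatrix} x_A \\ x_B \\ x_C \end{pmatrix}, \qquad
S = \begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix}, \qquad
v(k, x) = k_1\, x_A x_B, \qquad
\dot{x} = S\, v(k, x) =
\begin{pmatrix} -k_1 x_A x_B \\ -k_1 x_A x_B \\ k_1 x_A x_B \end{pmatrix}.
\]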
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8426129817962646, "perplexity": 767.0032336327015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740838.3/warc/CC-MAIN-20200815094903-20200815124903-00020.warc.gz"}
https://mathstrek.blog/2015/01/29/tensor-product-and-linear-algebra/
## Tensor Product and Linear Algebra Tensor products can be rather intimidating for first-timers, so we’ll start with the simplest case: that of vector spaces over a field K. Suppose V and W are finite-dimensional vector spaces over K, with bases $\{v_1, \ldots, v_n\}$ and $\{w_1, \ldots, w_m\}$ respectively. Then the tensor product $V\otimes_K W$ is the vector space with abstract basis $\{ v_i w_j\}_{1\le i \le n, 1\le j\le m}.$ In particular, it is of dimension mn over K. Now we can “multiply” elements of V and W to obtain an element of this new space, e.g. $(2v_1 + 3v_2)(w_1 - 2w_3) = 2v_1 w_1 + 3 v_2 w_1 - 4 v_1 w_3 - 6v_2 w_3.$ For example, if V is the space of polynomials in x of degree ≤ 2 and W is the space of polynomials in y of degree ≤ 3, then $V\otimes_K W$ is the space of polynomials spanned by $x^i y^j$ where 0≤i≤2, 0≤j≤3. However, defining the tensor product with respect to a chosen basis is rather unwieldy: we’d like a definition which only depends on V and W, and not the bases we picked. Definition. A bilinear map of vector spaces is a map $B:V \times W \to X,$ where V, W, X are vector spaces, such that • when we fix w, B(-, w): V→X is linear; • when we fix v, B(v, -): W→X is linear. The tensor product of V and W, denoted $V\otimes_K W$, is defined to be a vector space together with a bilinear map $\psi : V\times W \to V\otimes_K W$ such that the following universal property holds: • for any bilinear map $B: V\times W \to X$, there is a unique linear map $f:V\otimes_K W \to X$ such that $f\circ \psi = B.$ For v∈V and w∈W, the element $v\otimes w := \psi(v,w)$ is called a pure tensor element. The universal property guarantees that if the tensor product exists, then it is unique up to isomorphism. What remains is the Proof of Existence. Recall that if S is a basis of vector space V, then any linear function VW uniquely corresponds to a function SW. Thus if we let T be the (infinite-dimensional) vector space with basis: $\{e_{v, w} : v \in V, w\in W\}$ then linear maps gTX correspond uniquely to functions BV×W → X. Saying that B is bilinear is precisely the same as g factoring through the subspace U to obtain $\overline g : T/U \to X,$ where U is the subspace generated by elements of the form: \begin{aligned} e_{v+v', w} - e_{v,w} - e_{v', w}, \qquad & e_{cv, w} - c\cdot e_{v,w}\\ e_{v, w+w'} - e_{v,w} - e_{v, w'},\qquad & e_{v,cw} - c\cdot e_{v,w}\end{aligned} for all vv’ ∈ Vww’ ∈ W and constant c ∈ K. Hence T/U is precisely our desired vector space, with $\psi : V\times W \to T/U$ given by $(v, w) \mapsto e_{v,w} \pmod U.$ And vw is the image of $e_{v,w}$ in T/U. ♦ Note From the proof, it is clear that V ⊗ W is spanned by the pure tensors; in general though, not every element of V ⊗ W is a pure tensor. E.g. vw + v’w’ is generally not a pure tensor. However, vw + vw’ + v’w + v’w’ = (v+v’)⊗(w+w’) is a pure tensor since ψ is bilinear. ## Properties of Tensor Product We have: Proposition. The following hold for K-vector spaces: • $K \otimes_K V \cong V$, where $c\otimes v\mapsto cv$; • $V \otimes_K W \cong W \otimes_K V$, where $v\otimes w\mapsto w\otimes v$; • $V \otimes_K (W \otimes_K W') \cong (V\otimes_K W)\otimes_K W'$, where $v\otimes (w\otimes w') \mapsto (v\otimes w)\otimes w'$; • $V \otimes_K (\oplus_i W_i) \cong \oplus_i (V\otimes W_i)$, where $v\otimes (w_i)_i \mapsto (v\otimes w_i)_i$. Proof For the first property, the map K × V → V taking (cv) to cv is bilinear over K, so by the universal property of tensor products, this induces fK ⊗ V → V taking cv to cv. 
On the other hand, let's take the linear map $g : V \to K\otimes V$ mapping v to 1⊗v. It remains to prove gf and fg are identity maps. Indeed: fg takes v → 1⊗v → v and gf takes c⊗v → cv → 1⊗cv = c⊗v, where the equality follows from bilinearity of ⊗.

For the third property, fix v∈V. The map W×W' → (V⊗W)⊗W' taking (w, w') to (v⊗w)⊗w' is bilinear in W and W' so it induces $f_v : W\otimes W' \to (V\otimes W)\otimes W'$ taking $w\otimes w' \mapsto (v\otimes w)\otimes w'.$ Next we check that the map

$V\times (W\otimes W') \to (V\otimes W)\otimes W', \qquad (v, x) \mapsto f_v(x)$

is bilinear so it induces a linear map $f : V\otimes (W\otimes W') \to (V\otimes W)\otimes W'$ taking $v\otimes (w\otimes w') \mapsto (v\otimes w)\otimes w'.$ Similarly one defines a reverse map $g: (V\otimes W)\otimes W' \to V\otimes (W\otimes W')$ taking $(v\otimes w)\otimes w' \mapsto v\otimes (w\otimes w').$ Since the pure tensors generate the whole space, it follows that f and g are mutually inverse. The second and fourth properties are left to the reader. ♦

As a result of the second and fourth properties, we also have:

Corollary. For any collection $\{V_i\}$ and $\{W_j\}$ of vector spaces, we have:

$(\oplus_i V_i)\otimes_K (\oplus_j W_j) \cong \oplus_{i, j} (V_i \otimes_K W_j),$

where the element $(v_i) \otimes (w_j)$ on the LHS maps to $(v_i \otimes w_j)_{i,j}$ on the RHS.

In particular, if $\{v_i\}$ and $\{w_j\}$ are bases of V and W respectively, then

$V = \oplus_i Kv_i, \ W = \oplus_j Kw_j \implies V\otimes W = \oplus_{i, j} K(v_i \otimes w_j)$

so $\{v_i \otimes w_j\}$ forms a basis of V⊗W. This recovers our original intuitive definition of the tensor product!

## Tensor Product and Duals

Recall that the dual of a vector space V is the space V* of all linear maps V→K. It is easy to see that V* ⊕ W* is naturally isomorphic to (V ⊕ W)* and when V is finite-dimensional, V** is naturally isomorphic to V.

[ One way to visualize V** ≅ V is to imagine the bilinear map V* × V → K taking (f, v) to f(v). Fixing f we obtain a linear map V→K as expected, while fixing v we obtain a linear map V*→K, and this corresponds to an element of V**. ]

If V is finite-dimensional, then a basis $\{v_1, \ldots, v_n\}$ of V gives rise to a dual basis $\{f_1, \ldots, f_n\}$ of V* where

$f_i(v_j) = \begin{cases} 1, \quad &\text{if } i = j,\\ 0,\quad &\text{otherwise.}\end{cases}$

or simply $f_i(v_j) = \delta_{ij}$ with the Kronecker delta symbol. The next result we would like to show is:

Proposition. Let V and W be finite-dimensional over K.

• We have $V^*\otimes W^* \cong (V\otimes W)^*$ taking (f, g) to the map $V\otimes W\to K, (v\otimes w) \mapsto f(v)g(w).$
• Also $V^* \otimes W \cong \text{Hom}_K(V, W)$ taking (f, w) to the map $V\to W, v\mapsto f(v)w.$

Proof

For the first case, fix f∈V*, g∈W*. The map $V\times W \to K$ taking $(v,w)\mapsto f(v)g(w)$ is bilinear so it induces a map $h:V\otimes W\to K$ taking $(v\otimes w)\mapsto f(v)g(w).$ But the assignment (f, g) → h gives rise to a map $V^* \times W^* \to (V\otimes W)^*$ which is bilinear so it induces $\varphi:V^* \otimes W^* \to (V\otimes W)^*.$ Note that $f\otimes g$ corresponds to the map $h:V\otimes W\to K$ taking $v\otimes w \mapsto f(v)g(w).$

To show that this is an isomorphism, let $\{v_i\}$ and $\{w_j\}$ be bases of V and W respectively, with dual bases $\{f_i\}$ and $\{g_j\}$ of V* and W*.
The map then takes $f_i \otimes g_j$ to the linear map $V\otimes W\to K$ which takes $v_k \otimes w_l$ to $f_i(v_k) g_j(w_l) = \delta_{ik}\delta_{jl}.$ But this corresponds to the dual basis of $\{v_i \otimes w_j\},$ so we see that the above map φ takes a basis $\{f_i \otimes g_j\}$ to a basis: the dual of $\{v_i\otimes w_j\}.$

The second case is left as an exercise. ♦

Note

Here's one convenient way to visualize the above. Suppose elements of V consist of column vectors. Then V* is the space of row vectors, and evaluating V* × V → K corresponds to multiplying a row vector by a column vector, thus giving a scalar.

So V* ⊕ W* ≅ (V ⊕ W)* follows quite easily: indeed, the LHS concatenates two spaces of row vectors, while the RHS concatenates two spaces of column vectors then turns it into a space of row vectors.

The tensor product is a little trickier: for V and W we take column vectors with entries $\alpha_1, \ldots, \alpha_n$ and $\beta_1, \ldots, \beta_m$ respectively. Then we form the column vector with mn entries $\alpha_i \beta_j.$ This lets us see why V*⊗W* ≅ (V⊗W)*: in both cases we get a row vector with mn entries.

Finally, to obtain V*⊗W we take row vectors $\alpha_1, \ldots, \alpha_n$ for elements of V* and column vectors $\beta_1, \ldots, \beta_m$ for those of W, and these multiply to give us an m × n matrix, which represents linear maps V→W.

Question

Consider the map V* × V → K which takes (f, v) to f(v). This is bilinear so it induces a linear map f: V*⊗V → K. On the other hand, V*⊗V is naturally isomorphic to End(V), the space of K-linear maps V→V. If we represent elements of End(V) as square matrices, what does f correspond to? [ Answer: the trace of the matrix. ]

## Tensor Algebra

Given a vector space V, let us consider n consecutive tensors:

$V^{\otimes n} := \overbrace{V\otimes V\otimes \ldots \otimes V}^{n \text{ copies}}.$

and let T(V) be the direct sum

$\oplus_{n=0}^\infty V^{\otimes n} = K \oplus V \oplus (V\otimes V) \oplus (V\otimes V\otimes V)\ldots.$

This gives an associative algebra over K by extending the bilinear map

$V^{\otimes m} \times V^{\otimes n} \to V^{\otimes (m+n)}, \quad (v_1, v_2) \mapsto v_1 \otimes v_2.$

to the entire space T(V) × T(V) → T(V). Note that it is not commutative in general. For example, suppose V has a basis {x, y, z}. Then

• $V^{\otimes 2}$ has basis $\{x^2, xy, xz, yx, y^2, yz, zx, zy, z^2\}$, where we have shortened the notation $x^2 := x\otimes x,$ $xy := x\otimes y,$ etc.
• $V^{\otimes 3}$ has basis $\{x^3, x^2 y, \ldots\}$, with 27 elements.
• Multiplying $V\times V^{\otimes 2} \to V^{\otimes 3}$ gives $(x+z)(xy + zx) = x^2 y + xzx + zxy + z^2 x.$

The algebra T(V), called the tensor algebra of V, satisfies the following universal property.

Theorem. The natural map ψ : V → T(V) is a linear map such that:

• for any associative K-algebra A, and K-linear map φ: V → A, there is a unique K-algebra homomorphism f: T(V) → A such that φ = fψ.

Thus, $\text{Hom}_{K-\text{lin}}(V, A) \cong \text{Hom}_{K-\text{alg}}(T(V), A).$

However, often we would like multiplication to be commutative (e.g. when dealing with polynomials) and we'll use the symmetric tensor algebra instead. Or we would like multiplication to be anti-commutative, i.e. xy = –yx (e.g. when dealing with differential forms) and we'll use the exterior tensor algebra instead. We will say more about these when the need arises.
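As a quick illustration of the contrast (a sketch using the standard definitions $S(V) = T(V)/(v\otimes w - w\otimes v)$ for the symmetric algebra and $\Lambda(V) = T(V)/(v\otimes v)$ for the exterior algebra, which are not spelled out above), take V with basis {x, y}. In degree 2:

$\dim V^{\otimes 2} = 4 \ (\text{basis } x^2, xy, yx, y^2), \qquad \dim S^2(V) = 3 \ (\text{basis } x^2, xy, y^2), \qquad \dim \Lambda^2(V) = 1 \ (\text{basis } x\wedge y),$

since $xy = yx$ in $S(V)$, while $x\wedge y = -y\wedge x$ and $x\wedge x = 0$ in $\Lambda(V)$.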
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 89, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9941043257713318, "perplexity": 669.9657784888367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400234232.50/warc/CC-MAIN-20200926040104-20200926070104-00208.warc.gz"}
https://bitbucket.org/sk8erchoi/sqlalchemy/src/ef6a64625f67/doc/build/core/engines.rst
Full commit # Engine Configuration The Engine is the starting point for any SQLAlchemy application. It's "home base" for the actual database and its DBAPI, delivered to the SQLAlchemy application through a connection pool and a Dialect, which describes how to talk to a specific kind of database/DBAPI combination. The general structure can be illustrated as follows: Where above, an :class:~sqlalchemy.engine.base.Engine references both a :class:~sqlalchemy.engine.base.Dialect and a :class:~sqlalchemy.pool.Pool, which together interpret the DBAPI's module functions as well as the behavior of the database. Creating an engine is just a matter of issuing a single call, :func:.create_engine(): from sqlalchemy import create_engine engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase') The above engine creates a :class:.Dialect object tailored towards PostgreSQL, as well as a :class:.Pool object which will establish a DBAPI connection at localhost:5432 when a connection request is first received. Note that the :class:.Engine and its underlying :class:.Pool do not establish the first actual DBAPI connection until the :meth:.Engine.connect method is called, or an operation which is dependent on this method such as :meth:.Engine.execute is invoked. In this way, :class:.Engine and :class:.Pool can be said to have a lazy initialization behavior. The :class:.Engine, once created, can either be used directly to interact with the database, or can be passed to a :class:.Session object to work with the ORM. This section covers the details of configuring an :class:.Engine. The next section, :ref:connections_toplevel, will detail the usage API of the :class:.Engine and similar, typically for non-ORM applications. ## Supported Databases SQLAlchemy includes many :class:~sqlalchemy.engine.base.Dialect implementations for various backends; each is described as its own package in the :ref:sqlalchemy.dialects_toplevel package. A SQLAlchemy dialect always requires that an appropriate DBAPI driver is installed. The table below summarizes the state of DBAPI support in SQLAlchemy 0.7. The values translate as: • yes / Python platform - The SQLAlchemy dialect is mostly or fully operational on the target platform. • yes / OS platform - The DBAPI supports that platform. • no / Python platform - The DBAPI does not support that platform, or there is no SQLAlchemy dialect support. • no / OS platform - The DBAPI does not support that platform. • partial - the DBAPI is partially usable on the target platform but has major unresolved issues. • development - a development version of the dialect exists, but is not yet usable. • thirdparty - the dialect itself is maintained by a third party, who should be consulted for information on current support. • * - indicates the given DBAPI is the "default" for SQLAlchemy, i.e. 
| Database | Driver | Connect string | Py2K | Py3K | Jython | Unix | Windows |
|---|---|---|---|---|---|---|---|
| DB2/Informix IDS | ibm-db | thirdparty | thirdparty | thirdparty | thirdparty | thirdparty | thirdparty |
| Drizzle | mysql-python | drizzle+mysqldb* | yes | development | no | yes | yes |
| Firebird / Interbase | kinterbasdb | firebird+kinterbasdb* | yes | development | no | yes | yes |
| Informix | informixdb | informix+informixdb* | yes | development | no | unknown | unknown |
| MaxDB | sapdb | maxdb+sapdb* | development | development | no | yes | unknown |
| Microsoft Access | pyodbc | access+pyodbc* | development | development | no | unknown | yes |
| Microsoft SQL Server | adodbapi | mssql+adodbapi | development | development | no | no | yes |
| Microsoft SQL Server | jTDS JDBC Driver | mssql+zxjdbc | no | no | development | yes | yes |
| Microsoft SQL Server | mxodbc | mssql+mxodbc | yes | development | no | yes with FreeTDS | yes |
| Microsoft SQL Server | pyodbc | mssql+pyodbc* | yes | development | no | yes with FreeTDS | yes |
| Microsoft SQL Server | pymssql | mssql+pymssql | yes | development | no | yes | yes |
| MySQL | MySQL Connector/J | mysql+zxjdbc | no | no | yes | yes | yes |
| MySQL | MySQL Connector/Python | mysql+mysqlconnector | yes | yes | no | yes | yes |
| MySQL | mysql-python | mysql+mysqldb* | yes | development | no | yes | yes |
| MySQL | OurSQL | mysql+oursql | yes | yes | no | yes | yes |
| MySQL | pymysql | mysql+pymysql | yes | development | no | yes | yes |
| Oracle | cx_oracle | oracle+cx_oracle* | yes | development | no | yes | yes |
| Oracle | Oracle JDBC Driver | oracle+zxjdbc | no | no | yes | yes | yes |
| Postgresql | pg8000 | postgresql+pg8000 | yes | yes | no | yes | yes |
| Postgresql | PostgreSQL JDBC Driver | postgresql+zxjdbc | no | no | yes | yes | yes |
| Postgresql | psycopg2 | postgresql+psycopg2* | yes | yes | no | yes | yes |
| Postgresql | pypostgresql | postgresql+pypostgresql | no | yes | no | yes | yes |
| SQLite | pysqlite | sqlite+pysqlite* | yes | yes | no | yes | yes |
| SQLite | sqlite3 | sqlite+pysqlite* | yes | yes | no | yes | yes |
| Sybase ASE | mxodbc | sybase+mxodbc | development | development | no | yes | yes |
| Sybase ASE | pyodbc | sybase+pyodbc* | partial | development | no | unknown | unknown |
| Sybase ASE | python-sybase | sybase+pysybase | yes [1] | development | no | yes | yes |

[1] The Sybase dialect currently lacks the ability to reflect tables.

Further detail on dialects is available in the dialect documentation.

## Engine Creation API

Keyword options can also be specified to create_engine(), following the string URL as follows:

    db = create_engine('postgresql://...', encoding='latin1', echo=True)

## Database Urls

SQLAlchemy indicates the source of an Engine strictly via RFC-1738 style URLs, combined with optional keyword arguments to specify options for the Engine. The form of the URL is:

    dialect+driver://username:password@host:port/database

Dialect names include the identifying name of the SQLAlchemy dialect, which include sqlite, mysql, postgresql, oracle, mssql, and firebird. The drivername is the name of the DBAPI to be used to connect to the database, using all lowercase letters. If not specified, a "default" DBAPI will be imported if available - this default is typically the most widely known driver available for that backend (i.e. cx_oracle, pysqlite/sqlite3, psycopg2, mysqldb). For Jython connections, specify the zxjdbc driver, which is the JDBC-DBAPI bridge included with Jython.

    # postgresql - psycopg2 is the default driver.
    pg_db = create_engine('postgresql://scott:tiger@localhost/mydatabase')
    pg_db = create_engine('postgresql+psycopg2://scott:tiger@localhost/mydatabase')
    pg_db = create_engine('postgresql+pg8000://scott:tiger@localhost/mydatabase')
    pg_db = create_engine('postgresql+pypostgresql://scott:tiger@localhost/mydatabase')

    # postgresql on Jython
    pg_db = create_engine('postgresql+zxjdbc://scott:tiger@localhost/mydatabase')

    # mysql - MySQLdb (mysql-python) is the default driver
    mysql_db = create_engine('mysql://scott:tiger@localhost/foo')
    mysql_db = create_engine('mysql+mysqldb://scott:tiger@localhost/foo')

    # mysql on Jython
    mysql_db = create_engine('mysql+zxjdbc://localhost/foo')

    # mysql with pyodbc (buggy)
    mysql_db = create_engine('mysql+pyodbc://scott:tiger@some_dsn')

    # oracle - cx_oracle is the default driver
    oracle_db = create_engine('oracle://scott:tiger@127.0.0.1:1521/sidname')

    # oracle via TNS name
    oracle_db = create_engine('oracle+cx_oracle://scott:tiger@tnsname')

    # mssql using ODBC datasource names. PyODBC is the default driver.
    mssql_db = create_engine('mssql://mydsn')
    mssql_db = create_engine('mssql+pyodbc://mydsn')
    mssql_db = create_engine('mssql+adodbapi://mydsn')
    mssql_db = create_engine('mssql+pyodbc://username:password@mydsn')

SQLite connects to file-based databases. The same URL format is used, omitting the hostname, and using the "file" portion as the filename of the database. This has the effect of four slashes being present for an absolute file path:

    # sqlite://<nohostname>/<path>
    # where <path> is relative:
    sqlite_db = create_engine('sqlite:///foo.db')

    # or absolute, starting with a slash:
    sqlite_db = create_engine('sqlite:////absolute/path/to/foo.db')

To use a SQLite :memory: database, specify an empty URL:

    sqlite_memory_db = create_engine('sqlite://')

The Engine will ask the connection pool for a connection when the connect() or execute() methods are called. The default connection pool, QueuePool, will open connections to the database on an as-needed basis. As concurrent statements are executed, QueuePool will grow its pool of connections to a default size of five, and will allow a default "overflow" of ten. Since the Engine is essentially "home base" for the connection pool, it follows that you should keep a single Engine per database established within an application, rather than creating a new one for each connection.

Note: QueuePool is not used by default for SQLite engines. See the SQLite dialect documentation for details on SQLite connection pool usage.

## Custom DBAPI connect() arguments

Custom arguments used when issuing the connect() call to the underlying DBAPI may be issued in three distinct ways. String-based arguments can be passed directly from the URL string as query arguments:

    db = create_engine('postgresql://scott:tiger@localhost/test?argument1=foo&argument2=bar')

If SQLAlchemy's database connector is aware of a particular query argument, it may convert its type from string to its proper type. create_engine() also takes an argument connect_args which is an additional dictionary that will be passed to connect().
This can be used when arguments of a type other than string are required, and SQLAlchemy's database connector has no type conversion logic present for that parameter:

    db = create_engine('postgresql://scott:tiger@localhost/test',
                       connect_args={'argument1': 17, 'argument2': 'bar'})

The most customizable connection method of all is to pass a creator argument, which specifies a callable that returns a DBAPI connection:

    def connect():
        return psycopg.connect(user='scott', host='localhost')

    db = create_engine('postgresql://', creator=connect)

## Configuring Logging

Python's standard logging module is used to implement informational and debug log output with SQLAlchemy. This allows SQLAlchemy's logging to integrate in a standard way with other applications and libraries. The echo and echo_pool flags that are present on create_engine(), as well as the echo_uow flag used on Session, all interact with regular loggers. This section assumes familiarity with the logging module.

All logging performed by SQLAlchemy exists underneath the sqlalchemy namespace, as used by logging.getLogger('sqlalchemy'). When logging has been configured (i.e. such as via logging.basicConfig()), the general namespace of SA loggers that can be turned on is as follows:

• sqlalchemy.engine - controls SQL echoing. Set to logging.INFO for SQL query output, logging.DEBUG for query + result set output.
• sqlalchemy.dialects - controls custom logging for SQL dialects. See the documentation of individual dialects for details.
• sqlalchemy.pool - controls connection pool logging. Set to logging.INFO or lower to log connection pool checkouts/checkins.
• sqlalchemy.orm - controls logging of various ORM functions. Set to logging.INFO for information on mapper configurations.

For example, to log SQL queries using Python logging instead of the echo=True flag:

    import logging
    logging.basicConfig()
    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)

By default, the log level is set to logging.WARN within the entire sqlalchemy namespace so that no log operations occur, even within an application that has logging enabled otherwise.

The echo flags present as keyword arguments to create_engine() and others, as well as the echo property on Engine, when set to True, will first attempt to ensure that logging is enabled. Unfortunately, the logging module provides no way of determining if output has already been configured (note we are referring to whether a logging configuration has been set up, not just that the logging level is set). For this reason, any echo=True flags will result in a call to logging.basicConfig() using sys.stdout as the destination. It also sets up a default format using the level name, timestamp, and logger name. Note that this configuration has the effect of being applied in addition to any existing logger configurations. Therefore, when using Python logging, ensure all echo flags are set to False at all times, to avoid getting duplicate log lines.

The logger name of an instance such as an Engine or Pool defaults to using a truncated hex identifier string. To set this to a specific name, use the "logging_name" and "pool_logging_name" keyword arguments with create_engine().
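A minimal sketch of the naming options just described, assuming a throwaway SQLite URL; the two names are arbitrary placeholders, not values from the documentation:

    import logging
    from sqlalchemy import create_engine

    logging.basicConfig()
    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)

    # Stable, human-readable logger names instead of the default
    # truncated hex identifiers, so log lines are easy to grep.
    engine = create_engine(
        'sqlite:///example.db',            # placeholder URL
        logging_name='billing_engine',     # used in the engine's log records
        pool_logging_name='billing_pool',  # used in the pool's log records
    )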
Note: The SQLAlchemy Engine conserves Python function call overhead by only emitting log statements when the current logging level is detected as logging.INFO or logging.DEBUG. It only checks this level when a new connection is procured from the connection pool. Therefore, when changing the logging configuration for an already-running application, any Connection that's currently active, or more commonly a Session object that's active in a transaction, won't log any SQL according to the new configuration until a new Connection is procured (in the case of Session, this is after the current transaction ends and a new one begins).
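A rough illustration of the checkout-time level check described in the note above; the in-memory SQLite URL and variable names are only stand-ins:

    import logging
    from sqlalchemy import create_engine

    logging.basicConfig()
    engine = create_engine('sqlite://')   # in-memory database as a stand-in

    conn = engine.connect()               # level is still WARN, so nothing is echoed
    conn.execute('SELECT 1')

    # Raising the level now does not affect the connection already checked out.
    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
    conn.execute('SELECT 2')              # still silent, per the note above
    conn.close()

    fresh = engine.connect()              # a new checkout re-reads the logging level
    fresh.execute('SELECT 3')             # this statement should now appear in the log
    fresh.close()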
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15096688270568848, "perplexity": 11044.575044724415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701151880.99/warc/CC-MAIN-20160205193911-00229-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.computer.org/csdl/trans/tc/1997/10/t1132-abs.html
ABSTRACT

Abstract—The shear-sort algorithm [19] on an SIMD mesh model requires $4\sqrt{N}+o(\sqrt{N})$ time for sorting N elements arranged on a $\sqrt{N}\times\sqrt{N}$ mesh. In this paper, we present an algorithm for sorting N elements in time $O(N^{1/4})$ on an SIMD Multi-Mesh architecture, thereby significantly improving the order of the time complexity. The Multi-Mesh architecture [23], [24] is built around $n^2$ blocks, where each block is an $n\times n$ mesh with $n=N^{1/4}$, so that each processor will uniformly have four neighbors in the final topology.

INDEX TERMS

2D mesh, Multidimensional mesh, wrap-around connection, SIMD, MIMD, sorting, shear-sort.

CITATION

M. Ghosh, B. P. Sinha, M. De and D. Das, "An Efficient Sorting Algorithm on the Multi-Mesh Network," IEEE Transactions on Computers, vol. 46, pp. 1132-1137, 1997. doi:10.1109/12.628397
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.555287778377533, "perplexity": 15812.725545467412}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647600.49/warc/CC-MAIN-20180321082653-20180321102653-00257.warc.gz"}
https://content.iospress.com/articles/international-journal-of-applied-electromagnetics-and-mechanics/jae2304
# Action of torsion and axial moment in a new nonlinear cantilever-type vibration energy harvester

#### Abstract

The effects of composite motion, involving the action of torsion and axial moment on the vibrating element, on the characteristics of a new cantilever-type nonlinear electromagnetic vibration energy harvester are analyzed. Systems with softening and hardening action of the magnetic force are analyzed. The impact of the phenomenon on the electromagnetic quantities of the system is investigated using 2d analytical and 3d numerical models. The simulated and measured frequency-response characteristics show noticeable differences when the phenomenon is taken into account.

## 1. Introduction

Nonlinear vibration energy harvesters attract attention due to their potential to widen the frequency bandwidth at varying conditions [1, 2, 3, 4]. In electromagnetic harvesters the nonlinearity is brought by the action of magnets on a core [3] or on other magnets [5, 6]. To overcome some limitations of these configurations, a microgenerator with a coreless magnetic circuit, depicted in Fig. 1a, was proposed by the authors [6]. Generally, in this type of harvester it is sufficient to represent the kinematics by one degree of freedom [1, 2, 3, 5]. This is not the case for the system considered here, in which the angle Θ between the normal to the beam mid-surface and the y axis, which varies with the relative displacement ζ (see Fig. 1b and c), affects the electromagnetic quantities of the microgenerator. The complex motion of the "grey magnets", illustrated in Fig. 1d, includes torsion, due to which they are exposed to the action of the axial magnetic moment mz. The beam theory [7] implies that Θ depends on ζ, though the moment mz brings additional nonlinear effects. The considered system is capable of switching the magnetic stiffness between hardening (Fig. 1b) and softening (Fig. 1c) action by reversing the magnetization sense of the stationary magnets. Two separate structures are distinguished in this way, for which, depending on the magnetization sense, the axial magnetic moment attempts to straighten or to buckle the beam. Clearly, if ζ = 0 then mz = 0; however, while the magnetic system in Fig. 1b is locally stable around the origin, that in Fig. 1c is locally unstable. The above suggests that the effects of complex motion on the characteristics of these two systems will be different.

##### Figure 1.

Harvesters considered: a) CAD drawing of the entire configuration, b)–c) illustration of complex motion for the system with hardening and softening action of the magnetic force, respectively; circled markers denote the magnetization vector senses going forth () and back (x) relative to the xy plane, d) photograph of the laboratory system operating at resonance – dashed lines bound the blurred region of the image illustrating the complex motion of the vibrating element.

## 2. Mathematical models and results

### 2.1 Electromagnetic quantities

The harvesters in Fig. 1b and c are considered as two structures, which will be referred to as HS (hardening, stable around ζ = 0) and SU (softening, unstable around ζ = 0), respectively. Starting from basic considerations, if the complex motion is ignored, the electromagnetic quantities can be calculated analytically by solving the Ampère equation

(1) $\operatorname{div}\,\mathbf{grad}\,A = \mu_0 J_\mu$

on the yz plane in Fig. 2, with A being the magnetic vector potential, μ0 the vacuum permeability and Jμ the magnetization current.
The latter was modeled using the current shell $J_\mu=\pm\,0.5\,H_c/\varphi$ prescribed at the edges of the permanent magnets parallel to the z axis, where Hc is the coercivity field and φ a small positive variation of the ordinate.

##### Figure 2.

Model for calculations of magnetic field distribution; arrows indicate magnetisation senses (SU harvester).

The expressions derived from the solution of Eq. (1) that describe the magnetic force and the flux linkage are, respectively,

(2) $f_\zeta(\zeta)=\sum_{n=1}^{\infty}\frac{L_x\,(U_nV_n-X_nY_n)\,\sin(\beta_n\zeta)}{2\mu_0\beta_n}$

(3) $\lambda(\zeta)=\frac{n_t\,\pi}{l\,S_c}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}C_v\,\sin(\gamma_m k)\,\sin(\beta_n\zeta)\,\bigl(\cos(\beta_n l)-\cos(\beta_n g)\bigr)$

where

(4) $\begin{bmatrix}U_n\\ X_n\end{bmatrix}=\sum_{m=1}^{\infty}C_s\begin{bmatrix}\beta_n\cos(\gamma_m x_g)\\ \gamma_m\sin(\gamma_m x_g)\end{bmatrix}$

(5) $\begin{bmatrix}V_n\\ Y_n\end{bmatrix}=\sum_{m=1}^{\infty}C_v\begin{bmatrix}\gamma_m\cos(\gamma_m x_g)\\ \beta_n\sin(\gamma_m x_g)\end{bmatrix}$

(6) $\gamma_m=0.5\,(2m-1)\,\pi\,p^{-1},\qquad \beta_n=0.5\,(2n-1)\,\pi\,q^{-1}$

(7) $C_s=\frac{4\mu_0 H_c\,\sin(\gamma_m b)\,\cos(\beta_n\phi)\,\bigl(\sin(\beta_n a)-\sin(\beta_n h)\bigr)}{\phi\,\gamma_m\,\beta_n}$

(8) $C_v=\frac{4\mu_0 H_c\,\cos(\beta_n\phi)\,\bigl(\sin(\gamma_m c)-\sin(\gamma_m f)\bigr)\,\bigl(\sin(\gamma_m e)-\sin(\gamma_m d)\bigr)}{\phi\,\gamma_m\,\beta_n}$

xg is the abscissa of the center of an air-gap, Lx the length of the system along the x axis, nt the number of turns, and Sc the cross-section area of the coil shown in Fig. 2, respectively. The impact of complex motion on the electromagnetic quantities of the system was taken into account using a 3d finite element model for magnetic field calculations [8]. The specifications of the modeled systems are given in Table 1.

##### Table 1

Specifications of parameters common for HS and SU harvesters used in calculations of electromagnetic quantities

Parameter / Value: 16.5, 3, 8.5, 10.4, 13.4, 12, 0.5, 25.5, 4, 14 mm; 70 mm, 70 mm; 18 mm; -900 kA/m; 1000 turns

##### Figure 3.

Finite element model showing a 3d mesh for one half of the system considering composite motion.

In the computations the motion was approximated assuming that the barycenter of the "grey magnets" moves along a circle with radius equal to the beam length, as they rotate around the barycenter about the angle Θ. The variations presented in Fig. 4 are for the SU system. For the HS system the variations in Fig. 4b and c are multiplied by -1. Figure 5 compares variations of magnetic force and flux linkage with and without complex motion accounted for. As one can observe, the variations of the quantities due to the angle Θ are significant. It is also worth noticing that the analytical formulas provide close predictions, even though the system has a relatively short length along the x axis.

### 2.2 Frequency-response characteristics

Assessment of the impact of complex motion on the frequency-response characteristics was carried out via solution of equations derived from the Timoshenko beam theory [7], considering the electromagnetic coupling

(10) $\rho I\,\frac{d^2\Theta}{dt^2}-EI\,\frac{\partial^2\Theta}{\partial x^2}+\frac{5GA}{6}\left(\frac{\partial\zeta}{\partial x}+\Theta\right)=\left(m_z(\zeta,\Theta)+\frac{\partial\lambda}{\partial\Theta}\,i\right)\delta$

and the electric circuit

(11) $L_c\,\frac{di}{dt}+(R_0+R_c)\,i=-\frac{\partial\lambda}{\partial\zeta}\,\frac{d\zeta}{dt}-\frac{\partial\lambda}{\partial\Theta}\,\frac{d\Theta}{dt}$

where ρ, A, E, G, D, I are the beam density, the area of cross-section, the Young and shear moduli, the damping coefficient and the moment of inertia, fext is the external force, and $-m_a g$ the gravity force acting on the moving magnets of mass ma; δ = 1 at the beam free end, and δ = 0 elsewhere; i, Lc, Rc, R0 are the current, coil inductance, and coil and load resistances, respectively. The parameters used in simulations are given in Table 2.

##### Table 2

Specifications of parameters common for HS and SU harvesters for calculations of frequency characteristics

| Parameter | Value |
|---|---|
| Cantilever beam material | Glass fiber/epoxy composite |
| ρ | 2730 kg/m³ |
| E, G | 7250 GPa, 2971 GPa |
| D | 0.0036 Ns/m |
| Rc, Lc | 23 Ω, 0.0042 H |
| ma | 0.024 kg |

##### Figure 4.

Results of 3d finite element modeling (SU harvester): a) magnetic force fζ, b) axial moment mz, c) magnetic flux linkage.

##### Figure 5.

Comparison of electromagnetic quantities calculated using different models (SU harvester).
The characteristics were determined via time-domain solution of Eqs (9)–(11) using a chirp signal with a constant magnitude and a linear frequency sweep for fext. Using this approach, the beam springs were designed taking strength and global stability into account, but using the quantities described by Eqs (2) and (3), which clearly ignore the complex motion. The stability was assessed by analysis of the spectra of the responses and estimation of the Lyapunov exponents [5] for their crucial parts. As a result, beam springs with dimensions 65 × 1.1 × 16 mm and 60 × 2.2 × 16.5 mm were designed for the HS and SU systems, respectively. Their corresponding natural frequencies are $f^{HS}_{nat}$ = 12.3 Hz and $f^{SU}_{nat}$ = 32.3 Hz.

##### Figure 6.

Diagram of the laboratory test-stand for measurement of frequency characteristics.

##### Figure 7.

Frequency characteristics of rms voltage across the loading resistor for different loading conditions: a) HS harvester for an rms value of force fext equal to 0.12 N, b) SU harvester for an rms value of force fext equal to 0.34 N.

In the next step the characteristics were determined using the variations in Fig. 4 in Eqs (9)–(11). In order to validate the results, the models were put under measurements on the laboratory test-stand (see Fig. 6). The simulated and measured characteristics for various loading conditions are presented in Fig. 7.

## 3. Discussion of results

The most important observations involving the results obtained for the two systems considered are as follows. For the HS harvester in Fig. 7a, neglecting the complex motion causes overestimation of the generated voltage for all loading conditions. With the complex motion accounted for, the results of simulation closely match the measurements, except for the no-load envelope, which exposes more complex behavior in the measurements between 15 Hz and 20 Hz. This, however, can be easily explained by the action of large harmonics in the experimental force waveform due to the impact of the inertia force on the electromagnetic shaker. The results of additional simulations carried out for physically infeasible magnitudes of the external force approaching 0.5 N, which are not illustrated here, show that the system with complex motion operating at no-load falls into chaotic operation around 18 Hz, whilst its simple-motion counterpart remains stable. For the SU harvester in Fig. 7b, neglecting the complex motion also causes overestimation of the simulated voltage. Simulations for larger and even physically infeasible magnitudes of the external force, which are not illustrated here, exposed a jump at a frequency lower by some 2.2 Hz for the system with complex motion operating at no-load, although both were stable.

## 4. Conclusion

The investigation shows the need to account for the complex motion in a more accurate design of the considered systems. The latter regards especially the HS-type system, which exposes restricted stability and higher sensitivity to variations in operating conditions than the SU-type one. Understanding these features is crucial from the viewpoint of the development of a new type of wideband harvester integrating the HS and SU harvesters in a single system, which will be presented in our future work.

## Acknowledgments

The work was carried out under project 2016/23/N/ST7/03808 of The National Science Centre, Poland.

## References

[1] S.P. Beeby, R.N. Torah, M.J. Tudor, P. Glynne-Jones, T. O'Donnell, C.R. Saha and S. Roy, A micro electromagnetic generator for vibration energy harvesting, Journ. of Micromech.
& Microeng., IOP Press, 17(7) (2007), 1257–1265. [2] P. Podder, A. Amann and S. Roy, Combined Effect of Bistability and Mechanical Impact on the Performance of a Nonlinear Electromagnetic Vibration Energy Harvester, IEEE/ASME Trans. Mechatronics 21(2) (2016), 727–739. [3] T. Sato and H. Igarashi, A chaotic vibration energy harvester using magnetic material, Smart Mater. Struct. 24(2) (2015), 25033. [4] S.M. Chen, J.J. Zhou and J.H. Hu, Experimental study and finite element analysis for piezoelectric impact energy harvesting using a bent metal beam, Int, Journ. of Applied Electromagnetics and Mechanics, IOS Press, 46(4) (2014), 895–904. [5] E. Sardini and M. Serpelloni, An efficient electromagnetic power harvesting device for low-frequency applications, Sensors Actuators A: Physical. 172(2) (2011), 475–482. [6] M. Jagiela and M. Kulik, Wideband electromagnetic converter of vibration energy into electric energy, Patent Application, Patent Office of The Republic of Poland, No. P420998, March, 2017. [7] A.J.M. Ferreira, Solid mechanics and its applications Vol. 157: Matlab codes for finite element analysis, Springer, Netherlands, 2009. [8] onelab.info, accessed 01.2018.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9075194597244263, "perplexity": 2674.586680425978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747774.97/warc/CC-MAIN-20201205104937-20201205134937-00275.warc.gz"}
https://arxiv.org/abs/1603.09605
physics.flu-dyn

# Title: Disentangling the origins of torque enhancement through wall roughness in Taylor-Couette turbulence

Abstract: Direct numerical simulations (DNSs) are performed to analyze the global transport properties of turbulent Taylor-Couette flow with inner rough wall up to Taylor number $Ta=10^{10}$. The dimensionless torque $Nu_\omega$ shows an effective scaling of $Nu_\omega \propto Ta^{0.42\pm0.01}$, which is steeper than the ultimate regime effective scaling $Nu_\omega \propto Ta^{0.38}$ seen for smooth inner and outer walls. It is found that at the inner rough wall, the dominant contribution to the torque comes from the pressure forces on the radial faces of the rough elements; while viscous shear stresses on the rough surfaces contribute little to $Nu_\omega$. Thus, the log layer close to the rough wall depends on the roughness length scale, rather than on the viscous length scale. We then separate the torque contributed from the smooth inner wall and the rough outer wall. It is found that the smooth wall torque scaling follows $Nu_s \propto Ta_s^{0.38\pm0.01}$, in excellent agreement with the case where both walls are smooth. In contrast, the rough wall torque scaling follows $Nu_r \propto Ta_r^{0.47\pm0.03}$, very close to the pure ultimate regime scaling $Nu_\omega \propto Ta^{1/2}$. The energy dissipation rate at the wall of inner rough cylinder decreases significantly as a consequence of the wall shear stress reduction caused by the flow separation at the rough elements. On the other hand, the latter shed vortices in the bulk that are transported towards the outer cylinder and dissipated. Compared to the purely smooth case, the inner wall roughness renders the system more bulk dominated and thus increases the effective scaling exponent.

Comments: To be published on JFM
Subjects: Fluid Dynamics (physics.flu-dyn)
DOI: 10.1017/jfm.2016.815
Cite as: arXiv:1603.09605 [physics.flu-dyn] (or arXiv:1603.09605v2 [physics.flu-dyn] for this version)

## Submission history

From: Xiaojue Zhu
[v1] Thu, 31 Mar 2016 14:30:56 GMT (3962kb,D)
[v2] Tue, 13 Dec 2016 14:17:12 GMT (4006kb,D)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49513477087020874, "perplexity": 1618.3230364997933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210304.2/warc/CC-MAIN-20180815200546-20180815220546-00494.warc.gz"}
https://eventos.spc.org.pe/spire2018/wp/programme/
# Invited Speakers

Philip Bille

Philip Bille is associate professor in the Algorithms, Logic, and Graph Theory group at the Technical University of Denmark, Department of Applied Mathematics and Computer Science (DTU Compute). His scientific interest is in algorithms and data structures with focus on pattern matching, data compression, and parallelism in modern computer architectures. He is the deputy head of the Center for Compressed Computing and the head of the Research Academy at DTU Compute.

Nataša Przulj

Nataša Przulj is professor of biomedical data science at the Department of Computer Science at University College London. She is recognized for initiating extraction of biomedical knowledge from the wiring patterns (topology, structure) of "Big Data" real-world molecular (omics) and other networks. That is, she views the wiring patterns of large and complex omics networks, disease ontologies, clinical patient data, drug-drug and drug-target interaction networks etc., as a new source of information that complements the genetic sequence data and needs to be mined and meaningfully integrated to gain deeper biomedical understanding. Her recent work includes designing machine learning methods for integration of heterogeneous biomedical and molecular data, applied to advancing biological and medical knowledge. She also applies her methods to economics.

Rossano Venturini

Rossano Venturini is assistant professor in the Department of Computer Science at the University of Pisa. His research focusses on compressed data structures, for which he won the Best PhD Thesis award from the Italian Chapter of the EATCS in 2012, and information retrieval, for which he won the Best Paper award at SIGIR 2014 and 2015. He was program committee co-chair for SPIRE 2017 in Palermo.

# Programme

Tuesday, October 9th

- 08:45 Registration
- 09:00 Invited (SISAP): Humans, Machines, and Work: The Future is Now. Moshe Vardi (US)
- 10:00 Coffee
- 10:10 SISAP: On the Analysis of Compressed Chemical Fingerprints. Ricardo C. Sperandio (France), Simon Malinowski (France), Laurent Amsaleg (France) and Romain Tavenard (France).
- SISAP: Time Series Retrieval using DTW-Preserving Shapelets. Fabio Grandi (Italy)
- Joint with SISAP: Adaptive Computation of the Discrete Frechet Distance. Jérémy Barbay (Chile)
- Joint with SISAP: Computing Burrows-Wheeler Similarity Distributions for String Collections. Felipe A. Louza (Brazil), Guilherme P.
Telles (Brazil), Simon Gog (US) and Liang Zhao (Brazil)
- 11:50 Coffee
- 12:00 Invited: Mining the Integrated Connectedness of Biomedical Systems. Nataša Pržulj (UK)
- 13:00 SISAP Closing
- 13:10 Lunch
- 14:30 Excursion and Dinner
- 23:00 Finish

Wednesday, October 10th

- 09:00 Registration
- 10:00 Invited: Data Compression: The Whole is Larger than the Sum of its Parts. Rossano Venturini (Italy)
- 11:00 Coffee
- 11:20 Information Retrieval: Fast and Effective Neural Networks for Translating Natural Language into Denotations. Tiago Pimentel (Brazil), Juliano Viana (Brazil), Adriano Veloso (Brazil) and Nivio Ziviani (Brazil)
- 11:40 Information Retrieval: Early Commenting Features for Emotional Reactions Prediction. Anastasia Giachanou (Switzerland), Paolo Rosso (Spain), Ida Mele (Italy) and Fabio Crestani (Switzerland)
- 12:00 Lunch
- 14:00 LCS and LCP: Compressed Communication Complexity of Longest Common Prefixes. Philip Bille (Denmark), Mikko Berggreen Ettienne (Denmark), Roberto Grossi (Italy), Inge Li Gørtz (Denmark) and Eva Rotenberg (Denmark)
- 14:20 LCS and LCP: Better heuristic algorithms for the Repetition Free LCS and other variants. Radu-Stefan Mincu (Romania) and Alexandru Popa (Romania)
- 14:40 LCS and LCP: Longest Property-Preserved Common Factor. Lorraine A.K. Ayad (UK), Giulia Bernardini (Italy), Roberto Grossi (Italy), Costas Iliopoulos (UK), Nadia Pisanti (Italy), Solon Pissis (UK) and Giovanna Rosone (Italy)
- 14:55 LCS and LCP: Indexed Dynamic Programming to boost Edit Distance and LCSS Computation. Jérémy Barbay (Chile) and Andrés Olivares (Chile)
- 15:15 LCS and LCP: Longest Common Prefixes with k-Errors and Applications. Lorraine Ayad (UK), Carl Barton (UK), Panagiotis Charalampopoulos (UK), Costas Iliopoulos (UK) and Solon Pissis (UK)
- 15:35 Coffee
- 15:55 (Re)Construction: Optimal In-Place Suffix Sorting. Zhize Li (China), Jian Li (China) and Hongwei Huo (China)
- 16:15 (Re)Construction: Fast Wavelet Tree Construction in Practice. Yusaku Kaneta (Japan)
- 16:35 (Re)Construction: Recovering, counting and enumerating strings from forward and backward suffix arrays. Yuki Kuhara (Japan), Yuto Nakashima (Japan), Shunsuke Inenaga (Japan), Hideo Bannai (Japan) and Masayuki Takeda (Japan)
- 16:55 (Re)Construction: Linear-Time Online Algorithm Inferring the Shortest Path from a Walk. Shintaro Narisada (Japan), Diptarama Hendrian (Japan), Ryo Yoshinaka (Japan) and Ayumi Shinohara (Japan)
- 17:15 Finish

Thursday, October 11th

- 08:30 Registration
- 09:00 Invited: Techniques for Grammar-Based Compression. Philip Bille (Denmark)
- 10:00 Coffee
- 10:20 Combinatorics on Words: On Extended Special Factors of a Word. Panagiotis Charalampopoulos (UK), Maxime Crochemore (UK) and Solon P.
Pissis (UK)
- 10:35 Combinatorics on Words: Truncated DAWGs and their application to minimal absent word problem. Yuta Fujishige (Japan), Takuya Takagi (Japan) and Diptarama Hendrian (Japan)
- 10:55 Combinatorics on Words: Block Palindromes: A New Generalization of Palindromes. Keisuke Goto (Japan), Tomohiro I (Japan), Hideo Bannai (Japan) and Shunsuke Inenaga (Japan)
- 11:10 Pattern Matching: Faster Recovery of Approximate Periods over Edit Distance. Tomasz Kociumaka (Poland), Jakub Radoszewski (Poland), Wojciech Rytter (Poland), Juliusz Straszyński (Poland), Tomasz Walen (Poland) and Wiktor Zuba (Poland)
- 11:25 Pattern Matching: Searching for a Modified Pattern in a Changing Text. Eitan Kondratovsky (Israel) and Amihood Amir (Israel)
- 11:45 Pattern Matching: Trickier XBWT Tricks. Enno Ohlebusch (Germany), Stefan Stauß (Germany) and Uwe Baier (Germany)
- 12:00 Lunch
- 14:00 Data Structures: New structures to solve aggregated queries for trips over public transportation networks. Nieves R. Brisaboa (Spain), Antonio Fariña (Spain), Daniil Galaktionov (Spain), Tirso V. Rodeiro (Spain) and Andrea Rodriguez (Chile)
- 14:20 Data Structures: Faster and Smaller Two-Level Index for Network-based Trajectories. Rodrigo Rivera (Chile), Andrea Rodríguez (Chile) and Diego Seco (Chile)
- 14:40 Data Structures: Compressed Range Minimum Queries. Seungbum Jo (Israel), Shay Mozes (Israel) and Oren Weimann (Israel)
- 15:00 Data Structures: 3DGraCT: A Grammar based Compressed representation of 3D Trajectories. Nieves R. Brisaboa (Spain), Adrián Gómez-Brandón (Spain), Miguel A. Martínez-Prieto (Spain) and José R. Paramá (Spain)
- 15:20 Data Structures: Towards a compact representation of temporal rasters. Ana Cerdeira-Pena (Spain), Guillermo de Bernardo (Spain), Antonio Fariña (Spain), José Ramón Paramá (Spain) and Fernando Silva-Coira (Spain)
- 15:40 Coffee
- 16:00 Bioinformatics: Maximal Motif Discovery in a Sliding Window. Costas Iliopoulos (UK), Manal Mohamed (UK), Solon Pissis (UK) and Fatima Vayani (UK)
- 16:20 Bioinformatics: Recoloring the Colored de Bruijn Graph. Bahar Alipanahi (US), Alan Kuhnle (US) and Christina Boucher (US)
- 16:35 Bioinformatics: Efficient Computation of Sequence Mappability. Mai Alzamel (UK), Panagiotis Charalampopoulos (UK), Costas Iliopoulos (UK), Tomasz Kociumaka (Poland), Solon Pissis (Poland), Jakub Radoszewski (Poland) and Juliusz Straszynski (Poland)
- 16:55 Bioinformatics: The colored longest common prefix array computed via sequential scans. Fabio Garofalo (Italy), Giovanna Rosone (Italy), Marinella Sciortino (Italy) and Davide Verzotto (Italy)
- 17:15 Farewell Reception
- 19:00 Finish

# Spire 2018 Proceedings

As in past editions, the proceedings of SPIRE 2018 are published by Springer in the Lecture Notes in Computer Science (LNCS 11147) series.

# Accepted Papers

Bahar Alipanahi, Alan Kuhnle and Christina Boucher. Recoloring the Colored de Bruijn Graph

Abstract: The colored de Bruijn graph, an extension of the de Bruijn graph, is routinely applied for variant calling, genotyping, genome assembly, and various other applications [11]. In this data structure, the edges are labelled with one or more colors from a set C, and are stored as a m × |C| matrix, where m is the number of edges. Since both m and |C| can be significantly large, the matrix should be represented in a space-efficient manner. Recently, there has been a significant amount of work in developing compacted representations of this color matrix, but all existing methods have focused on compressing the color matrix.
In this paper, we explore the problem of recoloring in order to reduce the size of the color matrix. We show that finding the minimum number of colors to have a valid recoloring is an NP-hard problem, and thus, motivate our development of a recoloring heuristic that greedily merges the colors in the colored de Bruijn graph. Our results show that this heuristic is able to reduce the number of colors between one and two orders of magnitude, and the size of the matrix by almost half. This work is publicly available at https://github.com/baharpan/cosmo/tree/Recoloring.

Jérémy Barbay and Andrés Olivares. Indexed Dynamic Programming to boost Edit Distance and LCSS Computation

Abstract: There are efficient dynamic programming solutions to the computation of the Edit Distance from $S\in[1..\sigma]^n$ to $T\in[1..\sigma]^m$, for many natural subsets of edit operations, typically in time within $O(nm)$ in the worst-case over strings of respective lengths $n$ and $m$ (which is likely to be optimal), and in time within $O(n{+}m)$ in some special cases (e.g. disjoint alphabets). We describe how indexing the strings (in linear time), and using such an index to refine the recurrence formulas underlying the dynamic programs, yield faster algorithms in a variety of models, on a continuum of classes of instances of intermediate difficulty between the worst and the best case, thus refining the analysis beyond the worst case analysis. As a side result, we describe similar properties for the computation of the Longest Common Sub Sequence LCSS(S, T) between $S$ and $T$, since it is a particular case of Edit Distance, and we discuss the application of similar algorithmic and analysis techniques for other dynamic programming solutions. More formally, we propose a parameterized analysis of the computational complexity of the Edit Distance for various sets of operators and of the Longest Common Sub Sequence in function of the area of the dynamic program matrix relevant to the computation.

Adaptive Computation of the Discrete Frechet Distance

Abstract: The discrete Fréchet distance is a measure of similarity between point sequences which permits to abstract differences of resolution between the two curves, approximating the original Fréchet distance between curves. Such distance between sequences of respective length $n$ and $m$ can be computed in time within $O(nm)$ and space within $O(n+m)$ using classical dynamic programming techniques, a complexity likely to be optimal in the worst case over sequences of similar length unless the Strong Exponential Hypothesis is proved incorrect. We propose a parameterized analysis of the computational complexity of the discrete Fréchet distance in function of the area of the dynamic program matrix relevant to the computation, measured by its certificate width $\omega$. We prove that the discrete Fréchet distance can be computed in time within $O((n+m)\omega)$ and space within $O(n+m+\omega)$.

Seungbum Jo, Shay Mozes and Oren Weimann. Compressed Range Minimum Queries

Abstract: Given a string S of n integers in [0,sigma), a range minimum query RMQ(i,j) asks for the index of the smallest integer in S[i…j]. It is well known that the problem can be solved with a data structure of size O(n) and constant query-time. In this paper we show how to preprocess S into a compressed representation that allows fast range minimum queries. This allows for sublinear size data structures with logarithmic query time.
The most natural approach is to use string compression and construct a data structure for answering range minimum queries directly on the compressed string. We investigate this approach using grammar compression. We then consider an alternative approach. Even if S is not compressible, its cartesian tree necessarily is. Therefore, instead of compressing S using string compression, we compress the cartesian tree of S using tree compression. We show that this approach can be exponentially better than the former, and is never worse by more than an O(sigma) factor (i.e. for constant alphabets it is never asymptotically worse). Costas Iliopoulos, Manal Mohamed, Solon Pissis and Fatima Vayani. Maximal Motif Discovery in a Sliding Window Abstract: With the current explosion of genomic data, the need to analyse it efficiently is growing. As next-generation sequencing technology advances, there is an increase in the production of genomic data that requires de novo assembly and analyses. One such analysis is motif discovery. Motifs are relatively short sub-sequences that are biologically significant, and may contain don’t cares (special symbols that match any symbol in the alphabet). Examples of these are protein-binding sites, such as transcription factor recognition sites. Maximal motifs are motifs that cannot be extended or specialised without losing occurrences. We present an on-line algorithm that finds all maximal motifs, that occur at least k times and contain no more than d don’t cares, in a sliding window on the input string. Tomasz Kociumaka, Jakub Radoszewski, Wojciech Rytter, Juliusz Straszyński, Tomasz Walen and Wiktor Zuba. Faster Recovery of Approximate Periods over Edit Distance (short paper) Abstract: The approximate period recovery problem asks to compute all approximate word-periods of a given word S of length n: all primitive words P (|P|=p) which have a periodic extension at edit distance smaller than τ from S, where τ < ⌊ n / ((3.75+ε) · |P|) ⌋ for some ε>0. Here, the set PerExt(P) of periodic extensions of P consists of all finite prefixes of P^∞. We improve time complexity of the fastest known algorithm for this problem of Amir et al. (TCS 2018) from O(n^{4/3}) to O(n log n). Our tool is a fast algorithm for Approximate Pattern Matching in Periodic Text. We consider only verification for the period recovery problem when the candidate approximate word-period P is explicitly given up to cyclic rotation; the algorithm of Amir et al. reduces the general problem in O(n) time to a logarithmic number of such more specific instances. New structures to solve aggregated queries for trips over public transportation networks Abstract: Representing the trajectories of mobile objects is a hot topic from the widespread use of smartphones and other GPS devices. However, few works have focused on representing trips over public transportation networks (buses, subway, and trains) where user’s trips can be seen as a sequence of stages performed within a vehicle shared with many other users. In this context, representing vehicle journeys reduces the redundancy because all the passengers inside a vehicle share the same arrival time for each stop. In addition, each vehicle journey follows exactly the sequence of stops corresponding to its line, which makes it unnecessary to represent that sequence for each journey. 
To analyze those problems, we designed a conceptual model that gave us a better insight into this data domain and allowed us the definition of relevant terms and the detection of redundancy sources among those data. Then, we designed two compact representations focused in users’ trips (TTCTR) and in vehicle trips (AcumM) respectively. Each approach owns some strengths and is able to answer efficiently some queries. In this paper, we present all that work and experimental results over synthetic trips generated from accurate schedules obtained from a real GTFS network description (from Madrid) to show the space/time trade-off of both approaches. We considered a wide range of different queries about the use of the transportation network such as counting-based/aggregate queries regarding the load of any line of the network at different times. Rodrigo Rivera, Andrea Rodríguez and Diego Seco. Faster and Smaller Two-Level Index for Network-based Trajectories Abstract: Two-level indexes have been widely used to handle trajectories of moving objects that are constrained to a network. The top-level of these indexes handles the spatial dimension, whereas the bottom level handles the temporal dimension. The latter turns out to be an instance of the \textit{interval-intersection} problem, but it has been tackled by non-specialized spatial indexes. In this work, we propose the use of a compact data structure on the bottom level of these indexes. Our experimental evaluation shows that our approach is both faster and smaller than existing solutions. Tiago Pimentel, Juliano Viana, Adriano Veloso and Nivio Ziviani. Fast and Effective Neural Networks for Translating Natural Language into Denotations Abstract: In this paper we study the semantic parsing problem of mapping natural language utterances into machine interpretable meaning representations. We consider a text-to-denotation application scenario in which a user interacts with a non-human assistant by entering a question, which is then translated into a logical structured query and the result of running this query is finally returned as response to the user. We propose encoder-decoder models that are trained end-to-end using the input questions and the corresponding logical structured queries. In order to ensure fast response times, our models do not condition the target string generation on previously generated tokens. We evaluate our models on real data obtained from a conversational banking chat service, and we show that conditionally-independent translation models offer similar accuracy numbers when compared with sophisticate translation models and present one order of magnitude faster response times. Nieves R. Brisaboa, Adrián Gómez-Brandón, Miguel A. Martínez-Prieto and José R. Paramá. 3DGraCT: A Grammar based Compresed representation of 3D Trajectories Abstract: The management of trajectories has attracted much research work, in most cases focused on trajectories on the ground or the sea, and probably, aircraft trajectories have received less attention. However, the need for a more efficient management of the airspace launched several ambitious research projects, where, as in the traditional case, space and query response times are important issues. This work presents a method for representing aircraft trajectories. The new method uses a compact data structure approach, and then, it is possible to directly query the compressed data without a previous decompression. 
In addition, in the same compressed space, the data structure includes several access methods to accelerate the searches. Yusaku Kaneta. Fast Wavelet Tree Construction in Practice Abstract: The wavelet tree and matrix are compact data structures that support a wide range of operations on a sequence of $n$ integers in~$[0,\sigma)$ using $n\lg\sigma + o(n\lg\sigma)$ bits of space. Although Munro et al. (SPIRE 2014 and Theoretical Computer Science) and Babenko et al. (SODA 2015) showed that wavelet trees (and matrices) can be constructed in $O(n\lg\sigma/\sqrt{\lg n})$ time, there has been no empirical study on their construction method, possibly due to its heavy use of precomputed tables, seemingly limiting its practicality. In this paper, we propose practical variants of their fast construction of wavelet trees. Instead of using huge precomputed tables, we introduce new techniques based on broadword programming and special CPU instructions available for modern processors. Our experiments using real-world datasets showed that our proposed methods were up to 2.2 and 4.5 times as fast as a naive one for both wavelet trees and matrices, respectively, and up to 1.9 times as fast as existing ones for wavelet matrices. Philip Bille, Mikko Berggreen Ettienne, Roberto Grossi, Inge Li Gørtz and Eva Rotenberg. Compressed Communication Complexity of Longest Common Prefixes Abstract: We consider the communication complexity of fundamental longest common prefix (LCP) problems. In the simplest version, two parties, Alice and Bob, each hold a string, A and B, and we want to determine the length of their longest common prefix l=LCP(A,B) using as few rounds and bits of communication as possible. We show that if the longest common prefix of A and B is compressible, then we can significantly reduce the number of rounds compared to the optimal uncompressed protocol, while achieving the same (or fewer) bits of communication. Namely, if the longest common prefix has an LZ77 parse of z phrases, only O(lg z) rounds and O(lg l) total communication is necessary. We extend the result to the natural case when Bob holds a set of strings B1, …, Bk, and the goal is to find the length of the maximal longest prefix shared by A and any of B1, …, Bk. Here, we give a protocol with O(log z) rounds and O(lg z lg k + lg l) total communication. We present our result in the public-coin model of computation but by a standard technique our results generalize to the private-coin model. Furthermore, if we view the input strings as integers the problems are the greater-than problem and the predecessor problem. Yuki Kuhara, Yuto Nakashima, Shunsuke Inenaga, Hideo Bannai and Masayuki Takeda. Recovering, counting and enumerating strings from forward and backward suffix arrays Abstract: The suffix array SA_w of a string w of length n is a permutation of [1..n] such that SA_w[i] = j iff w[j..n] is the lexicographically i-th suffix of w. In this paper, we consider variants of the reverse-engineering problem on suffix arrays with two given permutations P and Q of [1..n], such that P refers to the forward suffix array of some string w and Q refers to the backward suffix array of the reversed string w^R. Our results are the following: (1) An algorithm which computes a solution string over an alphabet of the smallest size, in O(n) time. (2) The exact number of solution strings over an alphabet of size \sigma. (3) An efficient algorithm which computes all solution strings in the lexicographical order, in time near optimal up to \log n factor. 
Anastasia Giachanou, Paolo Rosso, Ida Mele and Fabio Crestani. Early Commenting Features for Emotional Reactions Prediction Ana Cerdeira-Pena, Guillermo de Bernardo, Antonio Fariña, José Ramón Paramá and Fernando Silva-Coira. Towards a compact representation of temporal rasters Abstract: Big research efforts have been devoted to efficiently manage spatio-temporal data. However, most works focused on vectorial data, and much less, on raster data. This work presents a new representation for raster data that evolve along time named Temporal k2-raster. It faces the two main issues that arise when dealing with spatio-temporal data: the space consumption and the query response times. It extends a compact data structure for raster data in order to manage time and thus, it is possible to query it directly in compressed form, instead of the classical approach that requires a complete decompression before any manipulation. In addition, in the same compressed space, the new data structure includes two indexes: a spatial index and an index on the values of the cells, thus becoming a self-index for raster data. Keisuke Goto, Tomohiro I, Hideo Bannai and Shunsuke Inenaga. [short paper] Block Palindromes: A New Generalization of Palindromes Abstract: We propose a new generalization of palindromes and gapped palindromes called block palindromes. A block palindrome is a string becomes a palindrome when identical substrings are replaced with a distinct character. We investigate several properties of block palindromes and in particular, study substrings of a string which are block palindromes. In so doing, we introduce the notion of maximal block palindromes, which are a compact representation of all block palindromes that occur in a string. We also propose an algorithm which enumerates all maximal block palindromes that appear in a given string T in O(|T | + ∥MBP (T )∥) time, where ∥MBP(T)∥ is the output size, which is optimal unless all the maximal block palindromes can be represented in a more compact way. Yuta Fujishige, Takuya Takagi and Diptarama Hendrian. Truncated DAWGs and their application to minimal absent word problem Abstract: The \emph{directed acyclic word graph} (\emph{DAWG}) of a string $y$ is the smallest (partial) DFA which recognizes all suffixes of $y$ and has $O(n)$ nodes and edges. Na et al. proposed $k$-truncated suffix tree which is a compressed trie that represents substrings of a string whose length is $k$ and suffixes whose length is less than $k$. In this paper, we present a new data structure called \emph{$k$-truncated DAWGs}, which can be obtained by pruning the DAWGs. We show that the size complexity of the $k$-truncated DAWG of a string $y$ of length $n$ is $O(\min\{n,kz\})$ which is equal to the truncated suffix tree’s one, where $z$ is the size of LZ77 factorization of $y$. We also present an $O(n\log \sigma)$ time and $O(\min\{ n,kz\})$ space algorithm for constructing the $y$-truncated DAWG of $y$, where $\sigma$ is the alphabet size. As an application of the truncated DAWGs, we show that the set $\MAW_k(y)$ of all minimal absent words of $y$ whose size is smaller than or equal to $k$ can be computed by using $k$-truncated DAWG of $y$ in $O(\min\{ n, kz\} + |\MAW_k(y)|)$ time and $O(\min\{ n,kz\})$ working space. On Extended Special Factors of a Word Abstract: An extended special factor of a word x is a factor of x whose longest infix can be extended by at least two distinct letters to the left or to the right and still occur in x. 
It is called extended bispecial if it can be extended in both directions and still occur in x. Let f(n) be the maximum number of extended bispecial factors over all words of length n. Almirantis et al. have shown that 2n – 6 ≤ f(n) ≤ 3n – 4 (WABI 2017). In this article, we show that there is no constant c < 3 such that f(n) ≤ cn. We then exploit the connection between extended special factors and minimal absent words to construct a data structure for computing minimal absent words of a specific length in optimal time for integer alphabets, generalising a result by Fujishige et al. (MFCS 2016). As an application of our data structure, we show how to compare two words over an integer alphabet in optimal time, improving on another result by Crochemore et al. (LATIN 2016).

Eitan Kondratovsky and Amihood Amir. Searching for a Modified Pattern in a Changing Text

Abstract: Much attention has been devoted recently to the dynamic model of pattern matching. In this model the input is updated or changed locally. One is interested in obtaining the appropriate search result in time that is shorter than the time necessary to search without any previous knowledge. In particular, searching for a pattern P in an indexed text is done in optimal O(|P|) time. There has been work done in searching for a fixed pattern in a dynamic text, as well as finding all maximum common factors of two strings in a dynamic setting. There are real-world applications where the text is unchanged and the pattern is slightly modified at every query. However, in the current state-of-the-art, a new search is required if the pattern is modified. In this paper we present an algorithm that reduces this search time to be sublinear, for a variety of types of pattern modification – in addition to the insertion, deletion, and replacement of symbols, we allow copy-paste and delete substring operations. We also make a step toward a fully dynamic pattern matching model by also supporting text changes, albeit much more modest than the pattern changes. We support dynamic pattern matching where symbols may also be either added or deleted at either the beginning or the end of the text. We show that we can support such a model in time O(log n) for every pattern modification or text change. We can then report all occ occurrences of P in the text in O(occ) time.

Zhize Li, Jian Li and Hongwei Huo. Optimal In-Place Suffix Sorting

Abstract: The suffix array is a fundamental data structure for many applications that involve string searching and data compression. Designing time/space-efficient suffix array construction algorithms has attracted significant attention and considerable advances have been made for the past 20 years. We obtain the first in-place linear time suffix array construction algorithms that are optimal both in time and space for (read-only) integer alphabets. Our algorithm settles the open problem posed by Franceschini and Muthukrishnan in ICALP 2007. The open problem asked to design in-place algorithms in $o(n\log n)$ time and ultimately, in $O(n)$ time for (read-only) integer alphabets with $|\Sigma| \leq n$. Our result is in fact slightly stronger since we allow $|\Sigma|=O(n)$. Besides, we provide an optimal in-place $O(n\log n)$ time suffix sorting algorithm for read-only general alphabets (i.e., only comparisons are allowed), recovering the result obtained by Franceschini and Muthukrishnan which was an open problem posed by Manzini and Ferragina in ESA 2002.
Trickier XBWT Tricks
Abstract: A trie is one of the best data structures for implementing and searching a dictionary. However, building the trie structure for larger collections of strings takes up a lot of memory. Since the eXtended Burrows-Wheeler Transform (XBWT) is able to compactly represent a labeled tree, it can naturally be used to succinctly represent a trie. The XBWT also supports navigational operations on the trie, but it does not support failure links. For example, the Aho-Corasick algorithm for simultaneously searching for several patterns in a text achieves its good worst-case time complexity only with the aid of failure links. Manzini showed that a balanced parentheses sequence P can be used to support failure links in constant time with only 2n+o(n) bits of space, where n is the number of internal nodes in the trie. Besides practical algorithms that construct the XBWT, he also provided two different algorithms that construct P. In this paper, we suggest an alternative way of constructing P that outperforms the previous algorithms.

Computing Burrows-Wheeler Similarity Distributions for String Collections
Abstract: In this article we present practical and theoretical improvements to the computation of the Burrows-Wheeler similarity distribution (BWSD) for all pairs of strings in a collection. Our algorithms take advantage of the Burrows-Wheeler transform (BWT) computed for the concatenation of all strings, instead of the pairwise construction of BWTs performed by the straightforward approach, and use compressed data structures that allow reductions of running time while still keeping a small memory footprint, as shown by a set of experiments with real datasets.

Lorraine Ayad, Carl Barton, Panagiotis Charalampopoulos, Costas Iliopoulos and Solon Pissis. Longest Common Prefixes with k-Errors and Applications
Abstract: Although real-world text datasets, such as DNA sequences, are far from being uniformly random, average-case string searching algorithms perform significantly better than worst-case ones in most applications of interest. In this paper, we study the problem of computing the longest prefix of each suffix of a given string of length $n$ that occurs elsewhere in the string with $k$ errors. This problem has already been studied under the Hamming distance model. Our first result is an improvement upon the state-of-the-art average-case time complexity for {\em non-constant $k$} and using only {\em linear space} under the Hamming distance model. Notably, we show that our technique can be extended to the edit distance model with the same time and space complexities. Specifically, our algorithms run in $\mathcal{O}(n\frac{(c\log n)^k}{k!})$ time on average, where $c>1$ is a constant, using $\mathcal{O}(n)$ space. Finally, we show that our technique is applicable to several algorithmic problems found in computational biology and elsewhere. The importance of our technique lies in the fact that it is the first one achieving this bound for non-constant $k$ and using $\mathcal{O}(n)$ space.

Shintaro Narisada, Diptarama Hendrian, Ryo Yoshinaka and Ayumi Shinohara. Linear-Time Online Algorithm Inferring the Shortest Path from a Walk
Abstract: We consider the problem of inferring an edge-labeled graph from the sequence of edge labels seen in a walk of that graph. It has been known that this problem is solvable in $O(n \log n)$ time when the targets are path or cycle graphs.
This paper presents an online algorithm for the problem of this restricted case that runs in $O(n)$ time, based on Manacher’s algorithm for computing all the maximal palindromes in a string. Lorraine A.K. Ayad, Giulia Bernardini, Roberto Grossi, Costas Iliopoulos, Nadia Pisanti, Solon Pissis and Giovanna Rosone. Longest Property-Preserved Common Factor Abstract: In this paper we introduce a new family of string processing problems. We are given two or more strings and we are asked to compute a factor common to all strings that preserves a specific property and has maximal length. Here we consider the fundamental property of periodicity under two different settings. In the first one, we are given a string $x$ and we are asked to construct a data structure over $x$ answering the following type of on-line queries: given string $y$, find a longest periodic factor common to $x$ and $y$. In the second, we are given $k$ strings and an integer $1 < k’\leq k$ and we are asked to find a longest periodic factor common to at least $k’$ strings. We present linear-time solutions for both settings. Mai Alzamel, Panagiotis Charalampopoulos, Costas Iliopoulos, Tomasz Kociumaka, Solon Pissis, Jakub Radoszewski and Juliusz Straszynski. Efficient Computation of Sequence Mappability Abstract: Sequence mappability is an important factor in genome sequencing. In the $(k,m)$-mappability problem, for a given a sequence $T$ of length $n$ our goal is to compute a table $A$ such that $A[i]$ is the number of indices $j$ such that $m$-length substrings of $T$ starting at positions $i$ and $j$ are at Hamming distance at most $k$. Previous work on this problem focused on heuristic approaches that provided a rough approximation of the result or on the special case of $k=1$. We present several efficient algorithms for the general case of the problem. The main result is an algorithm that works in time $O(n \min(\log^{k+1} n, m^k))$ and linear space. It requires a careful adaptation of the technique of Cole et al. (STOC 2004) in order to avoid multiple counting of pairs of substrings. We also show $O(n^2)$-time algorithms that compute all results for a fixed $m$ and all $k=1,\ldots,n$ or a fixed $k$ and all $m=1,\ldots,n$. Finally we show that $(k,m)$-mappability cannot be computed in strongly subquadratic time for $k,m = \Omega(\log n)$ unless SETH fails. Better heuristic algorithms for the Repetition Free LCS and other variants Abstract: In Discrete Applied Mathematics 2010, Adi et al. introduce and study a variant of the well known Longest Common Subsequence problem, named \emph{Repetition Free Longest Common Subsequence (RFLCS)}. In RFLCS the input consists of two strings $A$ and $B$ over an alphabet $\Sigma$ and the goal is to find the longest common subsequence containing only distinct characters from $\Sigma$. Adi et al. prove that the problem is $\mathcal{APX}$-hard and show three approximation algorithms. Castelli et al. (Operations Research Letters 2013) propose a heuristic approximation algorithm and Blum and Blesa introduce an exact algorithm based on ILP (Journal of Heuristics 2017). In this paper we design and test several new approximation algorithms for RFLCS. The first algorithm, which we believe is the most important contribution, uses dynamic programming and on our tests performs noticeably better than the algorithms of Adi et al.. The second algorithm transforms the RFLCS instance into an instance of the Maximum Independent Set (MIS) problem with the same value of the optimum solution. 
Then, we apply known algorithms for the MIS problem. We also augment one of the approximation algorithms of Adi et al. and we prove that we achieve an approximation of factor $2\sqrt{\min\{|A|,|B|\}}$. Finally, we introduce two variants of the LCS problem. For one of the problems, named \emph{Pinned Longest Common Subsequence (PLCS)}, we present an exact polynomial time algorithm based on dynamic programming. The other variant, named \emph{Multiset Restricted Common Subsequence (MRCS)}, is a generalization of RFLCS. We present an exact polynomial time algorithm for MRCS for constant-size alphabets. Also, we show that MRCS admits a $2\sqrt{\min\{|A|,|B|\}}$ approximation.

Fabio Garofalo, Giovanna Rosone, Marinella Sciortino and Davide Verzotto. The colored longest common prefix array computed via sequential scans
Abstract: Due to the increased availability of large datasets of biological sequences, the tools for sequence comparison are now relying on efficient alignment-free approaches to a greater extent. Most of the alignment-free approaches require the computation of statistics of the sequences in the dataset. Such computations become impractical in internal memory when very large collections of long sequences are considered. In this paper, we present a new conceptual data structure, the colored longest common prefix array (cLCP), which allows several problems to be tackled efficiently with an alignment-free approach. In fact, we show that such a data structure can be computed via sequential scans in semi-external memory. By using cLCP, we propose an efficient lightweight strategy to solve the multi-string ACS problem, which consists in the pairwise comparison of a single string against a collection of m strings simultaneously, in order to obtain m ACS induced distances. Experimental results confirm the effectiveness of our approach.

# Excursion to Pachacamac

Pachacamac is an archaeological site located on the right bank of the Lurin River, very close to the Pacific Ocean and facing a group of homonymous islands. It lies in the district of Lurin (about two hours from Lima), in the province of Lima, Peru. It contains the remains of various buildings, dating from the Early Intermediate period (3rd century) to the Late Horizon (15th century), the best preserved being the Inca buildings (1450-1532).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6104333996772766, "perplexity": 1865.388734874182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743011.30/warc/CC-MAIN-20181116111645-20181116133645-00134.warc.gz"}
https://www.semanticscholar.org/paper/Modular-type-transformations-and-integrals-the-Dixit/a2ddcd70351a6287f00ad6223fd7fba9dcf013e8
Corpus ID: 182557443

# Modular-type transformations and integrals involving the Riemann Ξ-function

@inproceedings{Dixit2018ModulartypeTA, title={Modular-type transformations and integrals involving the Riemann Ξ-function}, author={A. Dixit}, year={2018} }

A survey of various developments in the area of modular-type transformations (along with their generalizations of different types) and integrals involving the Riemann Ξ-function associated to them is given. We discuss their applications in Analytic Number Theory, Special Functions and Asymptotic Analysis.

3 Citations

On Hurwitz zeta function and Lommel functions • Mathematics • 2019
We obtain a new proof of Hurwitz's formula for the Hurwitz zeta function $\zeta(s, a)$ beginning with Hermite's formula. The aim is to reveal a nice connection between $\zeta(s, a)$ and a special …

Ramanujan's Beautiful Integrals • Mathematics • 2021
Throughout his entire mathematical life, Ramanujan loved to evaluate definite integrals. One can find them in his problems submitted to the Journal of the Indian Mathematical Society, notebooks, …

Superimposing theta structure on a generalized modular relation • Mathematics • 2020
A generalized modular relation of the form $F(z, w, \alpha)=F(z, iw,\beta)$, where $\alpha\beta=1$ and $i=\sqrt{-1}$, is obtained in the course of evaluating an integral involving the Riemann …

#### References (showing 1-10 of 29)

Transformation formulas associated with integrals involving the Riemann Ξ-function
Using residue calculus and the theory of Mellin transforms, we evaluate integrals of a certain type involving the Riemann Ξ-function, which give transformation formulas of the form F(z, α) = F(z, β), …

Series transformations and integrals involving the Riemann Ξ-function
The transformation formulas of Ramanujan, Hardy, Koshliakov and Ferrar are unified, in the sense that all these formulas come from the same source, namely, a general formula involving an integral of …

Zeros of combinations of the Riemann ξ-function on bounded vertical shifts • Mathematics • 2015
In this paper we consider a series of bounded vertical shifts of the Riemann ξ-function. Interestingly, although such functions have essential singularities, infinitely many of their zeros lie on the …

Self-reciprocal functions, powers of the Riemann zeta function and modular-type transformations • Mathematics, Physics • 2013
Abstract Integrals containing the first power of the Riemann Ξ-function as part of the integrand that lead to modular-type transformations have been previously studied by Ramanujan, Hardy, …

A First Course in Modular Forms • Mathematics • 2008
Modular Forms, Elliptic Curves, and Modular Curves.- Modular Curves as Riemann Surfaces.- Dimension Formulas.- Eisenstein Series.- Hecke Operators.- Jacobians and Abelian Varieties.- Modular Curves …

A transformation formula involving the gamma and riemann zeta functions in Ramanujan's lost notebook • Mathematics • 2010
Two proofs are given for a series transformation formula involving the logarithmic derivative of the Gamma function found in Ramanujan's lost notebook.
The transformation formula is connected with a …

Riesz-type criteria and theta transformation analogues • Mathematics • 2016
Abstract We give character analogues of a generalization of a result due to Ramanujan, Hardy and Littlewood, and provide Riesz-type criteria for Riemann Hypotheses for the Riemann zeta function and …

Koshliakov kernel and identities involving the Riemann zeta function • Mathematics • 2015
Some integral identities involving the Riemann zeta function and functions reciprocal in a kernel involving the Bessel functions $J_{z}(x), Y_{z}(x)$ and $K_{z}(x)$ are studied. Interesting special …

Analogues of the general theta transformation formula • A. Dixit • Mathematics • Proceedings of the Royal Society of Edinburgh: Section A Mathematics • 2013
A new class of integrals involving the confluent hypergeometric function ${}_1F_1(a;c;z)$ and the Riemann Ξ-function is considered. It generalizes a class containing some integrals of Ramanujan, Hardy and …

Analogues of a transformation formula of Ramanujan
We derive two new analogues of a transformation formula of Ramanujan involving the Gamma and Riemann zeta functions present in the Lost Notebook. Both involve infinite series consisting of Hurwitz …
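As a point of orientation (not taken from the survey above), the classical Jacobi theta transformation is the prototype of the modular-type relations $F(\alpha)=F(\beta)$ with $\alpha\beta=1$ that these entries refer to:

$$\theta(x) = \sum_{n=-\infty}^{\infty} e^{-\pi n^{2} x}, \qquad \theta\!\left(\frac{1}{x}\right) = \sqrt{x}\,\theta(x) \quad (x > 0),$$

so that $F(\alpha) = \alpha^{1/4}\,\theta(\alpha)$ satisfies $F(\alpha) = F(\beta)$ whenever $\alpha\beta = 1$ and both are positive.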
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9958456754684448, "perplexity": 1476.826048548968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057733.53/warc/CC-MAIN-20210925172649-20210925202649-00438.warc.gz"}
http://clay6.com/qa/42784/iron-occurs-as-bcc-as-well-as-fcc-unit-cell-if-the-effective-radius-of-an-a
# Iron occurs in both bcc and fcc unit cells. If the effective radius of an iron atom is 124 pm, compute the density of iron in both structures.

BCC $= 7.887\,g/cm^3$, FCC $= 8.59\,g/cm^3$
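The quoted answers follow from a hard-sphere packing argument. Below is a minimal R sketch of that arithmetic (added for illustration; the molar mass of 55.85 g/mol, Avogadro's number, and the lattice relations $a = 4r/\sqrt{3}$ for bcc and $a = 4r/\sqrt{2}$ for fcc are standard assumptions rather than givens of the problem):

```r
r   <- 124e-10   # atomic radius in cm (124 pm)
M   <- 55.85     # molar mass of iron, g/mol
N_A <- 6.022e23  # Avogadro's number

# density = (atoms per cell * mass per atom) / (cell volume)
density <- function(a, atoms_per_cell) atoms_per_cell * M / (N_A * a^3)

a_bcc <- 4 * r / sqrt(3)   # bcc: atoms touch along the body diagonal, 2 atoms per cell
a_fcc <- 4 * r / sqrt(2)   # fcc: atoms touch along the face diagonal, 4 atoms per cell

density(a_bcc, 2)   # ~7.9 g/cm^3
density(a_fcc, 4)   # ~8.6 g/cm^3
```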
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.693026602268219, "perplexity": 2097.661936352514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608416.96/warc/CC-MAIN-20170525195400-20170525215400-00194.warc.gz"}
http://cms.math.ca/cjm/kw/majorant
Search results Search: All articles in the CJM digital archive with keyword majorant Expand all        Collapse all Results 1 - 4 of 4 1. CJM 2009 (vol 61 pp. 503) Baranov, Anton; Woracek, Harald Subspaces of de~Branges Spaces Generated by Majorants For a given de~Branges space $\mc H(E)$ we investigate de~Branges subspaces defined in terms of majorants on the real axis. If $\omega$ is a nonnegative function on $\mathbb R$, we consider the subspace $\mc R_\omega(E)=\clos_{\mc H(E)} \big\{F\in\mc H(E): \text{ there exists } C>0: |E^{-1} F|\leq C\omega \mbox{ on }{\mathbb R}\big\} .$ We show that $\mc R_\omega(E)$ is a de~Branges subspace and describe all subspaces of this form. Moreover, we give a criterion for the existence of positive minimal majorants. Keywords:de~Branges subspace, majorant, Beurling-Malliavin TheoremCategories:46E20, 30D15, 46E22 2. CJM 2003 (vol 55 pp. 1231) Admissible Majorants for Model Subspaces of $H^2$, Part I: Slow Winding of the Generating Inner Function A model subspace $K_\Theta$ of the Hardy space $H^2 = H^2 (\mathbb{C}_+)$ for the upper half plane $\mathbb{C}_+$ is $H^2(\mathbb{C}_+) \ominus \Theta H^2(\mathbb{C}_+)$ where $\Theta$ is an inner function in $\mathbb{C}_+$. A function $\omega \colon \mathbb{R}\mapsto[0,\infty)$ is called {\it an admissible majorant\/} for $K_\Theta$ if there exists an $f \in K_\Theta$, $f \not\equiv 0$, $|f(x)|\leq \omega(x)$ almost everywhere on $\mathbb{R}$. For some (mainly meromorphic) $\Theta$'s some parts of $\Adm\Theta$ (the set of all admissible majorants for $K_\Theta$) are explicitly described. These descriptions depend on the rate of growth of $\arg \Theta$ along $\mathbb{R}$. This paper is about slowly growing arguments (slower than $x$). Our results exhibit the dependence of $\Adm B$ on the geometry of the zeros of the Blaschke product $B$. A complete description of $\Adm B$ is obtained for $B$'s with purely imaginary (vertical'') zeros. We show that in this case a unique minimal admissible majorant exists. Keywords:Hardy space, inner function, shift operator, model, subspace, Hilbert transform, admissible majorantCategories:30D55, 47A15 Admissible Majorants for Model Subspaces of $H^2$, Part II: Fast Winding of the Generating Inner Function This paper is a continuation of \cite{HM02I}. We consider the model subspaces $K_\Theta=H^2\ominus\Theta H^2$ of the Hardy space $H^2$ generated by an inner function $\Theta$ in the upper half plane. Our main object is the class of admissible majorants for $K_\Theta$, denoted by $\Adm \Theta$ and consisting of all functions $\omega$ defined on $\mathbb{R}$ such that there exists an $f \ne 0$, $f \in K_\Theta$ satisfying $|f(x)|\leq\omega(x)$ almost everywhere on $\mathbb{R}$. Firstly, using some simple Hilbert transform techniques, we obtain a general multiplier theorem applicable to any $K_\Theta$ generated by a meromorphic inner function. In contrast with \cite{HM02I}, we consider the generating functions $\Theta$ such that the unit vector $\Theta(x)$ winds up fast as $x$ grows from $-\infty$ to $\infty$. In particular, we consider $\Theta=B$ where $B$ is a Blaschke product with horizontal'' zeros, {\it i.e.}, almost uniformly distributed in a strip parallel to and separated from $\mathbb{R}$. 
It is shown, among other things, that for any such $B$, any even $\omega$ decreasing on $(0,\infty)$ with a finite logarithmic integral is in $\Adm B$ (unlike the vertical'' case treated in \cite{HM02I}), thus generalizing (with a new proof) a classical result related to $\Adm\exp(i\sigma z)$, $\sigma>0$. Some oscillating $\omega$'s in $\Adm B$ are also described. Our theme is related to the Beurling-Malliavin multiplier theorem devoted to $\Adm\exp(i\sigma z)$, $\sigma>0$, and to de~Branges' space $\mathcal{H}(E)$. Keywords:Hardy space, inner function, shift operator, model, subspace, Hilbert transform, admissible majorantCategories:30D55, 47A15 Inequalities for rational functions with prescribed poles This paper considers the rational system ${\cal P}_n (a_1,a_2,\ldots,a_n):= \bigl\{ {P(x) \over \prod_{k=1}^n (x-a_k)}, P\in {\cal P}_n\bigr\}$ with nonreal elements in $\{a_k\}_{k=1}^{n}\subset\Bbb{C}\setminus [-1,1]$ paired by complex conjugation. It gives a sharp (to constant) Markov-type inequality for real rational functions in ${\cal P}_n (a_1,a_2,\ldots,a_n)$. The corresponding Markov-type inequality for high derivatives is established, as well as Nikolskii-type inequalities. Some sharp Markov- and Bernstein-type inequalities with curved majorants for rational functions in ${\cal P}_n(a_1,a_2,\ldots,a_n)$ are obtained, which generalize some results for the classical polynomials. A sharp Schur-type inequality is also proved and plays a key role in the proofs of our main results. Keywords:Markov-type inequality, Bernstein-type inequality, Nikolskii-type inequality, Schur-type inequality, rational functions with prescribed poles, curved majorants, Chebyshev polynomialsCategories:41A17, 26D07, 26C15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9795172810554504, "perplexity": 569.2820408120039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704179963/warc/CC-MAIN-20130516113619-00043-ip-10-60-113-184.ec2.internal.warc.gz"}
https://repo.scoap3.org/search?ln=en&p=Lu%2C+Ran&f=author
SCOAP3 Repository: 3 records found

1. Geometric compatibility of IceCube TeV-PeV neutrino excess and its galactic dark matter origin / Bai, Yang ; Lu, Ran ; Salvado, Jordi
We perform a geometric analysis for the sky map of the IceCube TeV-PeV neutrino excess and test its compatibility with the sky map of decaying dark matter signals in our galaxy. [...]
Published in JHEP 1601 (2016) 161, 10.1007/JHEP01(2016)161, arXiv:1311.5864

2. ${J}_{E_T}$: a global jet finding algorithm / Bai, Yang ; Han, Zhenyu ; Lu, Ran
We introduce a new jet-finding algorithm for a hadron collider based on maximizing a ${J}_{E_T}$ function for all possible combinations of particles in an event. [...]
Published in JHEP 1503 (2015) 102, 10.1007/JHEP03(2015)102, arXiv:1411.3705

3. R-parity conservation from a top down perspective / Acharya, Bobby ; Kane, Gordon ; Kumar, Piyush ; Lu, Ran ; et al
Motivated by results from the LHC and dark matter searches, we study the possibility of phenomenologically viable R-parity violation in SU(5) GUT models from a top-down point of view. [...]
Published in JHEP 1410 (2014) 001, 10.1007/JHEP10(2014)001, arXiv:1403.4948
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8574721813201904, "perplexity": 8097.301363039854}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526818.17/warc/CC-MAIN-20190721020230-20190721042230-00437.warc.gz"}
http://retrievo.pt/advanced_search?link=1&field1=creators&searchTerms1='Ensslin%2C+K.'&constraint1=MATCH_EXACT_PHRASE
# Search results

136 records were found.

## Self-consistent simulation of quantum wires defined by local oxidation of Ga[Al]As heterostructures
Comment: 5 pages, 6 figures; revised figures, clarified text

## Density dependence of microwave induced magneto-resistance oscillations in a two-dimensional electron gas
Comment: 5 pages, 4 figures

## Spin state mixing in InAs double quantum dots
Comment: 5 pages, 4 figures

## Pauli spin-blockade in an InAs nanowire double quantum dot
Comment: EP2DS-17 Proceedings, 3 Pages, 3 Figures

## Raman imaging and electronic properties of graphene
Graphite is a well-studied material with known electronic and optical properties. Graphene, on the other hand, which is just one layer of carbon atoms arranged in a hexagonal lattice, has been studied theoretically for quite some time but has only recently become accessible for experiments. Here we demonstrate how single- and multi-layer graphene can be unambiguously identified using Raman scattering. Furthermore, we use a scanning Raman set-up to image few-layer graphene flakes of various heights. In transport experiments we measure weak localization and conductance fluctuations in a graphene flake of about 7 monolayer thickness. We obtain a phase-coherence length of about 2 $\mu$m at a temperature of 2 K. Furthermore we investigate the conductivity through single-layer graphene flakes and the tuning of electron and hole densities v...

## Measuring current by counting electrons in a nanowire quantum dot
We measure current by counting single electrons tunneling through an InAs nanowire quantum dot. The charge detector is realized by fabricating a quantum point contact in close vicinity to the nanowire. The results based on electron counting compare well to a direct measurement of the quantum dot current, when taking the finite bandwidth of the detector into account. The ability to detect single electrons also opens up possibilities for manipulating and detecting individual spins in nanowire quantum dots.

## Spin-orbit interaction and spin relaxation in a two-dimensional electron gas
Using time-resolved Faraday rotation, the drift-induced spin-orbit field of a two-dimensional electron gas in an InGaAs quantum well is measured. Including measurements of the electron mobility, the Dresselhaus and Rashba coefficients are determined as a function of temperature between 10 and 80 K. By comparing the relative size of these terms with a measured in-plane anisotropy of the spin dephasing rate, the D'yakonov-Perel' contribution to spin dephasing is estimated. The measured dephasing rate is significantly larger than this, which can only partially be explained by an inhomogeneous g-factor.

## Analytic Model for the Energy Spectrum of a Graphene Quantum Dot in a Perpendicular Magnetic Field
Comment: 4 pages, 3 figures

## Graphene quantum dots in perpendicular magnetic fields
Comment: 5 pages, 4 figures, submitted to pss-b

## Spin States in Graphene Quantum Dots
We investigate ground and excited state transport through small (d = 70 nm) graphene quantum dots. The successive spin filling of orbital states is detected by measuring the ground state energy as a function of a magnetic field. For a magnetic field in the plane of the quantum dot the Zeeman splitting of spin states is measured. The results are compatible with a g-factor of 2 and we detect a spin-filling sequence for a series of states which is reasonable given the strength of exchange interaction effects expected for graphene.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8405680656433105, "perplexity": 1731.682554308584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371818008.97/warc/CC-MAIN-20200408135412-20200408165912-00124.warc.gz"}
https://fmph.uniba.sk/detail-novinky/back_to_page/fakulta-matematiky-fyziky-a-informatiky-uk/article/-cbdd62dc5d/
Faculty of Mathematics, Physics and Informatics, Comenius University in Bratislava

# Graph Theory Seminar - Róbert Jajcay (1 October 2020)

## Thursday, 1 October 2020 at 9:50 in room M/213

Posted 29 Sep 2020, 22:35

Speaker: Róbert Jajcay
Title: Extremal edge-girth-regular graphs
Date and place: 1 October 2020, 9:50, room M/213

Abstract: The aim of the talk is to generalize properties satisfied by highly symmetric (vertex-, edge- or arc-transitive) graphs to larger classes of graphs and to look for families of graphs that share some of the properties of symmetric graphs but are not themselves symmetric. One such class is the class of edge-girth-regular $egr(v,k,g,\lambda)$-graphs, which are $k$-regular graphs of order $v$ and girth $g$ in which every edge is contained in $\lambda$ distinct $g$-cycles. Besides the obvious edge-transitive graphs, examples include other important classes of graphs such as Moore graphs, as well as many of the extremal $k$-regular graphs of prescribed girth or diameter. Infinitely many $egr(v,k,g,\lambda)$-graphs are known to exist for sufficiently large parameters $(k,g,\lambda)$, and in line with the well-known Cage Problem we attempt to determine the smallest graphs among all edge-girth-regular graphs for given parameters $(k,g,\lambda)$. We derive lower bounds in terms of the parameters $k,g$ and $\lambda$. We also determine the orders of the smallest $egr(v,k,g,\lambda)$-graphs for some specific parameters $(k,g,\lambda)$, and address the problem of the smallest possible orders of bipartite edge-girth-regular graphs. Joint work with A. Zavrtanik Drglin, S. Filipovski, and T. Raiman.
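To make the $egr(v,k,g,\lambda)$ definition concrete, here is a small brute-force check (not part of the talk; it assumes the igraph package and is only practical for tiny graphs) that the Petersen graph is an $egr(10,3,5,4)$-graph, i.e. that every edge lies in exactly four 5-cycles:

```r
library(igraph)

# Count simple paths with `steps_left` edges from `cur` to `target`;
# the target vertex may only be entered on the final step.
count_paths <- function(g, cur, target, steps_left, visited) {
  if (steps_left == 0) return(as.integer(cur == target))
  total <- 0L
  for (w in as.integer(neighbors(g, cur))) {
    if (w %in% visited) next
    if (w == target && steps_left > 1) next
    total <- total + count_paths(g, w, target, steps_left - 1L, c(visited, w))
  }
  total
}

g  <- make_graph("Petersen")
gi <- girth(g)$girth                 # 5 for the Petersen graph
ed <- ends(g, E(g), names = FALSE)

# A g-cycle through edge {u, v} corresponds to exactly one simple path
# v -> ... -> u with g - 1 edges that avoids v and enters u only at the end.
lambda <- apply(ed, 1, function(e) count_paths(g, e[2], e[1], gi - 1, visited = e[2]))

data.frame(v = vcount(g), k = unique(degree(g)), g = gi, lambda = unique(lambda))
# v = 10, k = 3, g = 5, lambda = 4
```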
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4571256637573242, "perplexity": 1452.667594799127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.60/warc/CC-MAIN-20210730122926-20210730152926-00414.warc.gz"}
https://mersenneforum.org/showthread.php?s=6044850b80198422398d8615ad9f9eaf&p=581317
mersenneforum.org > News
Lucky 13 (M51 related)

2019-01-12, 01:09   #265
GP2
Sep 2003
2585₁₀ Posts

Quote: Originally Posted by Dr Sardonicus
Looking at the current status of the exponent 8191, one other prime factor is known, and the remaining cofactor is a PRP.

Uh... it's not.

2019-01-12, 01:21   #266
philmoore
"Phil"
Sep 2002
Tracktown, U.S.A.
10001011111₂ Posts

I found the larger of the two known prime factors of M8191 in 2003 and did the computations at the time to show that the cofactor was not only composite, but was also not a power of a single prime factor, so we know that the cofactor has at least two distinct prime factors. Currently, the ECM status shows that it probably has no other factors less than around 50 digits.

2019-01-12, 02:43   #267
LaurV
Romulan Interpreter
Jun 2011
Thailand
2⁴×3²×67 Posts

We know that. We watched you at the time, and we also did a lot of work on those DM's in 2012-2014 or so, with the mmff fever, but stopped for a while. That mother is composite. But from the amount of work done on it, no new factor under (about) 45 digits should exist.
Last fiddled with by LaurV on 2019-01-12 at 02:44

2019-01-12, 03:05   #268
GP2
Sep 2003
5031₈ Posts

Quote: Originally Posted by philmoore
I found the larger of the two known prime factors of M8191 in 2003 and did the computations at the time to show that the cofactor was not only composite, but was also not a power of a single prime factor, so we know that the cofactor has at least two distinct prime factors.

If we ever find a non-squarefree Mersenne number (with prime exponent), it would make headlines. The factor in question would be the third known Wieferich prime. I run a script to check for this every few days. Takes a fraction of a second. Very tiny effort, very huge payoff, astronomical odds.

2019-01-12, 13:24   #269
Dr Sardonicus
Feb 2017
Nowhere
2⁴×3×97 Posts

Quote: Originally Posted by GP2
Uh... it's not.

Sorry about that. When I read "PRP Cofactor" in the "Status" column I thought it meant the cofactor was a PRP. Apparently it means "results of PRP test on cofactor" or some such.
Last fiddled with by Dr Sardonicus on 2019-01-12 at 13:28

2019-01-12, 13:53   #270
GP2
Sep 2003
5·11·47 Posts

Quote: Originally Posted by Dr Sardonicus
Sorry about that. When I read "PRP Cofactor" in the "Status" column I thought it meant the cofactor was a PRP. Apparently it means "results of PRP test on cofactor" or some such.

You're not the first one to be confused by that. It really should be changed from "PRP Cofactor" to "Cofactor PRP test". And then for consistency, "LL" to "LL test", "PRP" to "PRP test", "P-1" to "P−1 test".

2019-01-12, 17:43   #271
JeppeSN
"Jeppe"
Jan 2016
Denmark
2³·3·7 Posts

Quote: Originally Posted by GP2
Never mind larger examples, there's no smaller example. The only other p=2^k+k which is a Mersenne prime exponent is k=1, p=3, but then W(k) = 1.

Not sure I know what you mean. With k=1 you are describing the smaller example.
It gives $$2^k+k = 3$$ and $$2^k=2$$ and the prime (seven) is $M(3)=2^3-1=2\cdot 2^2 - 1=W(2)$.

The other example, k=9, written the same way, since $$2^k+k = 521$$ and $$2^k=512$$, is $M(521)=2^{521}-1=512\cdot 2^{512} - 1=W(512)$.

For the fun of it, we can merge the lists of Mersennes and Woodalls like this:
Code: M(2) M(3) = W(2) W(3) M(5) M(7) W(6) M(13) M(17) M(19) M(31) W(30) M(61) W(75) W(81) M(89) M(107) W(115) M(127) W(123) W(249) W(362) W(384) W(462) M(521) = W(512) M(607) W(751) W(822) M(1279) M(2203) M(2281) M(3217) M(4253) M(4423) W(5312) . . .
Last fiddled with by JeppeSN on 2019-01-12 at 18:42 Reason: adding W(512) for comparison

2021-06-18, 07:00   #272
birtwistlecaleb
Jun 2021
41 Posts

Good job!
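Both observations in this exchange are easy to check numerically. The sketch below (added for illustration, base R only) first verifies the identity $M(2^k+k) = W(2^k)$ for small $k$, and then runs the kind of test GP2 describes: a prime factor $q$ of $M_p$ divides it more than once only if $2^p \equiv 1 \pmod{q^2}$, which would make $q$ a Wieferich prime.

```r
# Mersenne number M(p) = 2^p - 1 and Woodall number W(n) = n * 2^n - 1.
# Algebraically M(2^k + k) = W(2^k); check it for small k where doubles are exact.
M <- function(p) 2^p - 1
W <- function(n) n * 2^n - 1
sapply(1:4, function(k) M(2^k + k) == W(2^k))   # TRUE TRUE TRUE TRUE

# Modular exponentiation by repeated squaring (exact while mod^2 < 2^53).
modpow <- function(base, exp, mod) {
  result <- 1; base <- base %% mod
  while (exp > 0) {
    if (exp %% 2 == 1) result <- (result * base) %% mod
    base <- (base * base) %% mod
    exp  <- exp %/% 2
  }
  result
}

# Example: 23 and 89 divide M(11) = 2047, but neither square does,
# because 2^11 is not 1 modulo 23^2 or 89^2.
modpow(2, 11, 23^2)   # 461
modpow(2, 11, 89^2)   # 2048

# The two known Wieferich primes do satisfy 2^(q-1) == 1 (mod q^2):
modpow(2, 1092, 1093^2)   # 1
modpow(2, 3510, 3511^2)   # 1
```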
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43972867727279663, "perplexity": 4567.989054079546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153803.69/warc/CC-MAIN-20210728220634-20210729010634-00685.warc.gz"}
https://arxiv.org/abs/1703.00132
cs.SE

# Title: Revisiting Unsupervised Learning for Defect Prediction

Abstract: Collecting quality data from software projects can be time-consuming and expensive. Hence, some researchers explore "unsupervised" approaches to quality prediction that do not require labelled data. An alternate technique is to use "supervised" approaches that learn models from project data labelled with, say, "defective" or "not-defective". Most researchers use these supervised models since, it is argued, they can exploit more knowledge of the projects. At FSE'16, Yang et al. reported startling results where unsupervised defect predictors outperformed supervised predictors for effort-aware just-in-time defect prediction. If confirmed, these results would lead to a dramatic simplification of a seemingly complex task (data mining) that is widely explored in the SE literature. This paper repeats and refutes those results as follows. (1) There is much variability in the efficacy of the Yang et al. models, so even with their approach, some supervised data is required to prune weaker models. (2) Their findings were grouped across $N$ projects. When we repeat their analysis on a project-by-project basis, supervised predictors are seen to work better. Even though this paper rejects the specific conclusions of Yang et al., we still endorse their general goal. In our experiments, supervised predictors did not perform outstandingly better than unsupervised ones. Hence, there may indeed be some combination of unsupervised learners that achieves performance comparable to supervised ones. We therefore encourage others to work in this promising area.

Comments: 11 pages, 5 figures. Submitted to FSE2017
Subjects: Software Engineering (cs.SE); Learning (cs.LG)
Cite as: arXiv:1703.00132 [cs.SE] (or arXiv:1703.00132v1 [cs.SE] for this version)

## Submission history
From: Wei Fu
[v1] Wed, 1 Mar 2017 04:36:06 GMT (549kb,D)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5696544051170349, "perplexity": 6012.57380301704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607704.68/warc/CC-MAIN-20170523221821-20170524001821-00302.warc.gz"}
https://rpg.stackexchange.com/questions/149406/how-creative-should-the-dm-let-an-artificer-be-in-terms-of-what-they-can-build?noredirect=1
# How creative should the DM let an artificer be in terms of what they can build? Just how creative should the DM let the Artificer be? One of my players is a guy who thinks way too hard on how to solve problems he's not meant to 'Solve' as a player. For example, I have a little set-piece in place for my campaign setting where there are roaming clouds of illusion magic that will appear at random times around the region. These clouds effectively have the magical power of 9th-level illusion magic violently swirling within them, causing all kinds of chaos when they appear. They are meant to be a hazard that the players are meant to avoid, but my Artificer player thinks differently. He surmises that with all his tool proficiencies like Smith's tools, Tinker's tools, Alchemy supplies, etc., that he should be able to build any kind of contraption he wants given time. I tried to explain that doing so would grind the game to a halt, as he would need to study for years in game time to come close to building the 'giant magical vacuum' that can suck up the illusion storms, and he wouldn't even be able to determine whether it would work or not. This is only one of many hypothetical arguements we've had in the past, and I would just like a reference to point to in the future if he ever picks to play Artificer again: Exactly how much of the hypothetical creativity should be allowed to actually happen when a player uses meta knowledge to build machines in a D&D setting? What power do artisan's tools actually have in the hands of an Artificer? • What level are the characters currently? Countering a 9th level spell effect is a bit much to expect from a class perk and minor equipment at level 3, for instance. – Michael Richardson Jun 6 at 15:28 • Have you simply stated "Your character does not have the magical knowledge to do this"? – PJRZ Jun 6 at 15:31 • What level is this character? How far, in levels, do you expect this campaign to go? – KorvinStarmast Jun 6 at 17:10 • Welcome to RPG.SE! Take the tour if you haven't already, and check out the help center for more guidance. Are you using the 2019 UA artificer, an older UA, or some other homebrew artificer? I ask because their class features might define what they can do as an artificer to a certain extent (beyond which it's left up to the DM). Also, what you "should" do is, for the most part, up to you; what's allowed by the class features is a more answerable question. – V2Blast Jun 7 at 3:45 # Artisan Tools function more-or-less the same for Artificers as they do for other classes; with a few perks Any character with proficiency in Artisan's Tools is able to use those tools to craft items, magical or non-magical. The non-magical rules are introduced in the Players's Handbook, expanded upon to include magical items in the Dungeon Master's Guide, and then revised/rebalanced in Xanathar's Guide to Everything. You can craft nonmagical objects, including adventuring equipment and works of art. You must be proficient with tools related to the object you are trying to create (typically artisan's tools). You might also need access to special materials or locations necessary to create it. For example, someone proficient with smith's tools needs a forge in order to craft a sword or suit of armor. For every day of downtime you spend crafting, you can craft one or more items with a total market value not exceeding 5 gp, and you must expend raw materials worth half the total market value. 
If something you want to craft has a market value greater than 5 gp, you make progress every day in 5-gp increments until you reach the market value of the item. For example, a suit of plate armor (market value 1,500 gp) takes 300 days to craft by yourself. Crafting, Player's Handbook, pg. 187 Magic items are the DM's purview, so you decide how they fall into the party's possession. As an option, you can allow player characters to craft magic items. The creation of a magic item is a lengthy, expensive task. To start, a character must have a formula that describes the construction of the item. The character must also be a spellcaster with spell slots and must be able to cast any spells that the item can produce. Moreover, the character must meet a level minimum determined by the item's rarity, as shown in the Crafting Magic Items table. For example, a 3rd-level character could create a wand of magic missiles (an uncommon item), as long as the character has spell slots and can cast magic missile. That same character could make a +1 weapon (another uncommon item), no particular spell required. Crafting a Magic Item, Dungeon Master's Guide, pg. 128 Emphasis mine, the relevancy of which should become quite apparent. Artificers have two particular perks as they relate to these rules: the first is that they have a special feature, the details of which are decided upon by their subclass, that enables more efficient crafting than other characters: Crafting. If you craft a magic item in the [potion/scroll/wand/armor] category, it takes you a quarter of the normal time, and it costs you half as much of the usual gold. Tools of the Trade [Alchemist/Archivist/Artillerist/Battlesmith], Unearthed Arcana: the Artificer Returns, 2019-05-14 The other major perk is their ability to at sunrise "infuse" items so that they behave as though they were magic items, but for our purposes we don't need to think about that perk. The important part is, while their ability to do so is greatly improved over that of other classes, Artificers aren't strictly capable of making better or more powerful magic items than anyone else: they just have an inherent affinity for doing so. # So about their plans to subjugate 9th level Spell Effects... As DM, there's good reasons to at least encourage the Artificer in question to try to tackle this. It might lead the campaign in an interesting direction, or create new ways for you, the DM, to interact with the narrative of your story. But, naturally, there needs to be limitations. And there are some very good hints to help us work out how stringent those limitations might have to be. ### If the clouds are "Level Nine Spells", an object capable of subjugating them probably also needs to produce a "Ninth Level Effect" This seems perfectly reasonable, yes? I'd like to think the player trying to do this will respect this as well. There's also precedent for this: the spell Imprisonment specifically says "A dispel magic spell can end the spell only if it is cast as a 9th-level spell, targeting either the prison or the special component used to create it." (PHB, pg. 252), so it's not unreasonable to rule that other ongoing effects of "Ninth Level Power" might have similar restrictions. So what is required for a player to create an item that can produce a "Ninth Level Effect"? Well, there's two rules we'll want to look at: the restrictions on crafting magic items of various rarities, and the rules for creating whole new magic items to place in a campaign. Power Level. 
If you make an item that lets a character kill whatever he or she hits with it, that item will likely unbalance your game. On the other hand, an item whose benefit rarely comes into play isn't much of a reward and probably not worth doling out as one. Use the Magic Item Power by Rarity table as a guide to help you determine how powerful an item should be, based on its rarity. $$\begin{array}{|l|l|l|} \hline \text{Magic Item Power by Rarity} \\ \hline \text{Rarity} & \text{Max Spell Level} & \text{Max Bonus} \\ \hline \text{Common} & \text{1st} & — \\ \hline \text{Uncommon} & \text{3rd} & +1 \\ \hline \text{Rare} & \text{6th} & +2 \\ \hline \text{Very rare} & \text{8th} & +3 \\ \hline \text{Legendary} & \text{9th} & +4 \\ \hline \end{array}$$ Creating a New Magic Item, Dungeon Master's Guide, pg. 284 Moreover, the character must meet a level minimum determined by the item's rarity, as shown in the Crafting Magic Items table. For example, a 3rd-level character could create a wand of magic missiles (an uncommon item), as long as the character has spell slots and can cast magic missile. That same character could make a +1 weapon (another uncommon item), no particular spell required. [...] $$\begin{array}{|l|l|l|} \hline \text{Crafting Magic Items} \\ \hline \text{Item Rarity} & \text{Creation Cost} & \text{Minimum Level} \\ \hline \text{Common} & \text{100 gp} & \text{3rd} \\ \hline \text{Uncommon} & \text{500 gp} & \text{3rd} \\ \hline \text{Rare} & \text{5,000 gp} & \text{6th} \\ \hline \text{Very rare} & \text{50,000 gp} & \text{11th} \\ \hline \text{Legendary} & \text{500,000 gp} & \text{17th} \\ \hline \end{array}$$ Crafting a Magic Item, Dungeon Master's Guide, pg. 128 So these two tables in conjunction with each other tell us some very important information: • A Magic Item or Device that can produce a 9th level spell (or a "Ninth Level Effect") probably qualifies as a Legendary rarity item. • A character who wishes to create a Legendary rarity item is required to be 17th level, and spend materials equivalent to 500,000gp (Disclaimer: Xanathar's Guide to Everything lowers the gold cost substantially, and as DM, I generally prefer those rules to the DMG rules; Your Milage May Vary) So in total, this Artificer (or any character for that matter) is probably going to be required to be at least level 17 before they can successfully create the kind of "Magic Cloud Vacuum" they intend to create. You might tweak these rules for your own purposes (maybe Artificers get access to higher level item recipes at a lower level? Maybe the clouds are more like sixth or seventh level instead of ninth?) but at least by the standards set by the game itself, it's certainly outside the capabilities of a low level character. Sidebar: per the way the rules are written, if such a character wanted to produce an item that actually produced a Ninth Level spell, not just a "Ninth Level Effect", they'd need to be able to both cast the spell and consume a Ninth Level Spell Slot—which Artificers never get. Personally, I would handwave that for Artificers, since it feels thematically inappropriate for a Wizard to be more capable at producing Magical Items than an Artificer, but in general it is a good rule to follow. # Conclusion Personally speaking, as DM, I prefer to be as permissive as possible when it comes to player decisions, unless it's obvious that they're abusing the rules and in doing so making the game unfun for everyone else. 
It's not obvious that that's what your player is doing—from your description it just sounds like they're really enthusiastic about the possibilities of a character that can create magic items—so I think it's okay to help them reach a point where they might be able to do something like this. But you need to make it clear that that's not going to happen for a new character. Like illustrated above, there's a set of relatively reliable rules that tell us that what they're trying to do is theoretically plausible, but they should be required to at least fulfill the minimum requirements. And if you use these rules as-is, that means they need to reach level 17. So if your campaign runs for long enough that they get to or near level 17, you should go ahead and set them up for their quest to build the magical Cloud Vacuum that will let them do exactly that. Just make sure it's clear to them that that's a long-term goal, not something they should expect to be able to do as a new or even moderately veteran character. • Even if their character is at a high enough level, considering the 500k gp gold value it would take them 68 years to craft the item. For a normal character it would be four times that, so 274 years. – Michael Jun 7 at 8:32 • @Michael This is why XGtE lowered both the cost and time substantially - now it would only cost them a year, with a couple of weeks of vacation time. – nick012000 Jun 7 at 12:58 • I agree with the overall analysis but it could also be argued that you are not really producing a level 9 effect but dispelling one. Then dispel magic is a level 3 spell and casting it 10 times gives you a ~95% chance to dispel a level 9 spell. Of course, the clouds could be multiple spells. – falsedot Jun 7 at 15:40 • @falsedot Which is fair—the DM will have to make a determination of whether they actually expect a ninth level effect to be necessary to interact with the clouds in the way the player wants. – Xirema Jun 7 at 15:51 • @falsedot I will note though that several ninth level spells, like Imprisonment or Prismatic Wall specifically stipulate when and how Dispel Magic can remove them. In the former case, Imprisonment may /only/ be dispelled if Dispel Magic is cast as a ninth level spell, and in the latter case—it appears to be poorly written, but the intent is that Dispel Magic only works on the last layer. – Xirema Jun 7 at 15:51 ## A player is handing you a campaign-spanning objective loaded with adventure seeds This seems like more of an opportunity than a problem. The player needs to: 1. Find out what the clouds actually are 2. Why they are there 3. How they can be stopped 4. Gather the required materials 5. Overcome the people who don’t want home to succeed because reasons 6. Succeed 7. Find out the unintended consequences of succeeding 8. Deal with those There are a good two dozen adventures in that! Of course, this can be a secondary story with clues scattered through the main thrust of your campaign. Or even one or a few side adventures so that everywhere he goes everyone wants to met the guy who beat the clouds. # This is not the character for this game. I'm going to assume that you had a session zero and that everyone is on the same page that they're Adventurers (heroic sound effects). Depending on level, the character may or may not even have the magical ability to do some of the things he wants, but there's a more important issue: Time. The artificer, with enough time, money, and know-how, very likely can MacGyver all sorts of zany solutions, with DM-approval. 
However, the group is playing Dungeons and Dragons (more heroic sound effects), not the My Little Band of Tinkerers minigame. The Artificer in question, I'll call him Arthur, does not have enough of the resource of "playing this game with the rest of us" to spend on creating some massive magical spell vacuum. Your explanation is correct. ### There are a few ways to deal with this You said it: it would take in-game years. The character leaves the party to follow his research and the player of Arthur and roll up a new character to go adventuring with the party. Sit down with player of Arthur and re-explain the session zero goals, and how (unfortunately) this character doesn't jive great with those goals. There's no issue with him spinning up a different character at the same level with equivalent (or the same) gear. Or he can use the same character with a different mindset. I assume, you still want this person to play in the game, so work with them. IF you did not have a session zero, or there are other synergy problems with the game, this is a great time to have one. Allow everyone to make sure their on the same page. What kind of theme are we running? How long should an adventuring day be? Is everyone assumed to be good? etc. I'll start by listing three "facts": 1. A DM's word is final 2. There is nothing wrong with players having goals and showing creativity 3. The game is meant to be fun for all The question is how to successfully combine these three facts. First, you many want to point out that artisan tools, smith's tools and so on are not magical and can do nothing on their own. In the hands of an artificer they function as spell focii, but that still doesn't mean they do anything more than an ordinary set of tools other than enable the artificer's spell-casting (mechanically speaking). Second, if the player insists that he doesn't want his character to do anything other than sit in his house and research and build, and willfully ignores all plot hooks, then there may not be much you can do other than encourage the player to play a different character that is willing to adventure! But as long as there is room for compromise, then there is nothing wrong with stating that the character simply doesn't have the knowledge yet but, assuming he does go an adventure (however reluctantly), planting a few little tidbits of lore here and there permit progress towards his goal without stalling your campaign. For example: the players come across some lore hinting at the origin of these mysterious storms. This leads into adventures where they discover a wizard, or the stories of a wizard, or tracks evidence of a wizard, who previously tried to stop them but disappeared in the Bad Lands. This in turn leads to more adventures where they discover the wizard's research and find out that there may be a way to build a machine to stop the storms, but it requires a very specific ritual and some very specific spell components. This leads on to... more adventure, and ultimately a success or failure of this goal. With a bit of imagination and planning, you could merge these 'side-quests' and the player's desire to build a machine to stop the storms into your campaign. • The edits were for flow and some word-smithing. (Like the answer). I hope you feel that it retained your meaning. – KorvinStarmast Jun 12 at 16:26 Yes, they could, but it would likely be a Legendary magic item, so it probably won't be doable until much higher levels. 
The Wayfarer's Guide to Eberron introduces a new variety of magic item to the game: Eldritch Machines, which are large, stationary devices that are intended to act largely as plot devices and setting fluff to explain where things like Warforged come from. One in particular is the Spell Sink, which creates an Antimagic Field zone in a three-mile radius around the device. I believe that the "giant magical vacuum" that your artificer seeks to create would count as one, since, to quote the WGtE:

Conversely, a mad artificer would create a massive vessel of dragonshards and exotic metals. It might be that the sole purpose of the device is to negate magic, or it could be that it is absorbing all magical energies in the area and storing that power for a cataclysmic effect!

Since it is a Legendary item, and Eldritch Machines are not any of the item varieties that any of the Artificer subclasses get bonuses for crafting, this means that, according to Xanathar's Guide to Everything, it would require the following:

• A magical ingredient obtained by overcoming a CR 19 challenge
• 100,000 gp
• 50 workweeks of downtime

Since a CR 19 monster grants 22,000 XP, that means that, according to the encounter guidelines on p. 82 of the DMG, for a party of 4 PCs, a CR 19 foe will exceed their threshold for a Deadly encounter below level 14, will only become a Hard encounter at level 17, and will only be a Medium encounter at level 20.

# Player desire is a good role-playing opportunity.

tl;dr: As a DM, be upfront about your concerns and how you're going to run the game. Indulge the desires of the players as much as you can, but balance the cost to the other people at the table (including yourself).

## Related anecdote: An inventor handled well.

A player in a Dragonlance setting campaign was playing a gnome tinker. The player, and by extension the character, was dead set on inventing new and helpful weaponry. The player would draw up designs and expected uses. The other players generally grinned and went along as long as it did not consume too much table time. The DM indulged the activity with the explicit warning that inventions would more often than not fail to work, break, or just be downright dangerous. When it came to testing or using the inventions, the DM had little trouble coming up with mechanics on the fly, and then described entertaining, disappointing, and sometimes terrifying outcomes. When the player refined and revised the trinkets, sometimes they got better, and sometimes worse.

• In the end, the trial, and error, and error, and more error were a good source of role play that made the character a source of fun for the people at the table.

## Considerations

• Manage expectations: People are often disappointed when outcomes do not match their expectations. If failure is a high probability, that should be made abundantly clear at the outset.
• Playtime at the table: Table time is frequently difficult to schedule and valuable to all the people at the table. Out-of-game or metagame discussions can be relegated to communications before or after game sessions where all of the players are present.
• Other players: Providing opportunities for other players to be involved or buy into the stories and desires of each other is useful for creating fun and engagement for all. Avoiding or modifying narratives involving only one player should be paramount.

## Role play opportunity

A player trying to make creations that change the world, or mitigate its problems, is invested in the fantasy world. Use this interest to drive narratives.
Providing opportunities that are obviously connected to the interests of the characters can be engaging and exciting for them.

Example: the desire to invent a portable weather shelter, and failing at it, might be described as failing in a specific way, such as, "Your magical trans-divination-enchantment fuse blew." That can lead to a side quest (or part of the main quest) for a wand/amulet/ring macguffin that is mentioned to be useful for enchantment-divination transmutation. When they find the macguffin, it can set up a choice: do they use it for its intrinsic properties, or do they have the artificer use it to try to finish the portable weather shelter?

In this fashion the inventions aren't free. They're engaging, and there's narrative driving the costs involved.

• The 5e artificer has specific guidelines on how their inventions work, and this advice is not congruent with them. – nick012000 Jun 7 at 12:56
• @nick012000 I can understand that perspective, but this answer is consistent with the guidance of the DMG. Specifically, "The D&D rules help you and the other players have a good time, but the rules aren't in charge. You're the DM, and you are in charge of the game." This answer illustrates a method of driving a narrative, and it would be a shame to let the rules ruin a narrative that the people at the table enjoy. – GcL Jun 7 at 13:12
• @nick012000 The artificer is UA, yes? – KorvinStarmast Jun 12 at 16:18
https://www.thedarkdominion.eu/force-powers/
# FORCE POWERS

### Gaining Force Powers:

The Force Sensitive rank power group is one that all force sensitives know and can use, though at this power level their power is more akin to tricks than any trained power. Select/learn force powers for each rank gained, up to the maximum power level of the force user's rank. The number of force power points acquired depends on the rank of the force user; each power costs one force power point. Another way to spend a force power point is to increase the power level of an already known power, e.g. increase a power from acolyte rank to adept rank, or from master to Darth power level.

### Using Force Powers:

Force powers can be used outside of regular combat; their usage and effects will then be roleplayed as the players see fit. When used in combat, all force powers follow a set system of rules. They only deal one HP of damage if used offensively, with very few exceptions. Some powers have added effects; these effects are only applied if the roll is successful.

### Force Affinity:

Most force users show an affinity for some force powers, and the group of powers a force user has affinity in is usually connected somehow. A sith gains one affinity group at acolyte rank, one at specialist rank and one at Darth rank, for a total of three affinity groups. If a force user uses a power they have affinity with, the power gains some kind of unusual benefit described in the specific force power description. Also, a power used above acolyte power level will be usable as often as if it were one power level lower.

The affinity groups are as follows: Dark Healing, Darkness, Drain Force, Force Absorption, Force Lightning, Force Movement, Force Senses, Force Stealth, Force Vocal, Mind Control, Pyrokinesis, Telekinesis Attract, Telekinesis Manipulation, Telekinesis Thrust

Force Sensitive (everyone, no cost): Force Inertia, Force Pull, Force Push, Force Scream, Force Sight, Pyrokinesis

Acolyte: Absorb Energy, Control Pain, Dark Healing, Force Deflect, Force Disarm, Force Fear, Force Persuasion, Force Shock, Force Stealth, Sense Force, Animal Bond, Aura of Uneasiness, Drain Force, False Light Side Aura, Farsight, Force Charge, Force Choke, Force Wound, Force Empathy, Force Illusion/Projection, Force Lift, Force Lightning, Force Repulse, Life Detection, Reflect Energy, Waves of Darkness

Specialist: Battle Meditation, Battle Precognition, Dissipate Energy, Force Blast, Force Bond, Force Cloak, Force Confusion, Force Diminish, Force Grip, Force Horror, Force Speed, Force Tempest, Force Throw, Telepathy, Voice Amplification

Master: Bolt of Hatred, Chain Lightning, Dominate Mind, Drain Life, Force Burst, Force Crush, Force Insanity, Force Net, Force Slow, Levitate, Negate Energy, Postcognition, Shatterpoint

Darth: Dark Side Tendrils, Death Field, Force Destruction, Force Memory Rub, Force Storm, Force Vision, Force Whirlwind
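To make the point-spending rules above a bit more concrete, here is a minimal toy sketch of how they might be modelled in code. It is purely an illustration, not part of the original ruleset: the rank order, the one-point-per-power cost, and the "spend a point to raise a known power one level" rule are taken from the text above, while the class, function, and variable names are my own invention.

```python
# Toy model of the force-power point rules described above (illustrative only).

RANKS = ["Force Sensitive", "Acolyte", "Adept", "Specialist", "Master", "Darth"]

def rank_index(rank):
    return RANKS.index(rank)

class ForceUser:
    def __init__(self, rank, points):
        self.rank = rank
        self.points = points   # force power points available to spend
        self.powers = {}       # power name -> current power level (a rank name)

    def learn(self, power, level):
        """Learn a new power at some level, up to the user's own rank. Costs one point."""
        if self.points < 1:
            raise ValueError("no force power points left")
        if rank_index(level) > rank_index(self.rank):
            raise ValueError("cannot learn a power above your own rank")
        self.points -= 1
        self.powers[power] = level

    def upgrade(self, power):
        """Spend a point to raise an already known power by one power level."""
        if self.points < 1:
            raise ValueError("no force power points left")
        new_level = RANKS[rank_index(self.powers[power]) + 1]
        if rank_index(new_level) > rank_index(self.rank):
            raise ValueError("cannot raise a power above your own rank")
        self.points -= 1
        self.powers[power] = new_level

# Example: an Adept with two points learns Force Choke at acolyte level,
# then spends the second point to raise it to adept level.
user = ForceUser("Adept", points=2)
user.learn("Force Choke", "Acolyte")
user.upgrade("Force Choke")
print(user.powers)   # {'Force Choke': 'Adept'}
```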
https://nonsmooth.gricad-pages.univ-grenoble-alpes.fr/siconos/reference/cpp/kernel/file_NonSmoothEvent_hpp.html
# File kernel/src/simulationTools/NonSmoothEvent.hpp

Go to the source code of this file.

Non-Smooth Events.

class NonSmoothEvent : public Event
#include <NonSmoothEvent.hpp>

Events due to non-smooth behavior (contact occurrence, …). Those events are detected during the Simulation process (integration of the smooth part with a root-finding algorithm) and scheduled into the EventsManager.

Public Functions

NonSmoothEvent(double time, int notUsed)
  Constructor with a time value as a parameter.
  Parameters
  • time: the time of the first event (a double)
  • notUsed: unused parameter (an int)

~NonSmoothEvent()
  Destructor.

void process(Simulation &simulation)
  OSNS solving and IndexSets updating.
  Parameters
  • simulation: the simulation that owns this Event (through the EventsManager)

Private Functions

NonSmoothEvent()
  Default constructor.

ACCEPT_SERIALIZATION(NonSmoothEvent)
  Serialization hooks.
https://www.flyingcoloursmaths.co.uk/ask-uncle-colin-why-is-e-not-1/
# Ask Uncle Colin: Why is $e$ not 1?

Dear Uncle Colin,

If $e = \left( 1+ \frac{1}{n} \right)^n$ when $n = \infty$, how come it isn't 1? Surely $1 + \frac{1}{\infty}$ is just 1?

- I'm Not Finding It Natural, It's Terribly Yucky

Hi, INFINITY, and thanks for your message.

You have fallen into one of maths's classic traps: infinity[1] is not a number - you can't just plug it into equations and expect to get sensible things out. (If infinity were a number, we could flip your argument around and say "$1 + \frac{1}{n}$ is a bit more than 1, and if we multiply infinitely many of those together, it gets bigger each time, so it must go to infinity." It doesn't work that way, either.)

Instead, we need to think about limits: what happens when $n$ gets really big?

### Binomial!

We can expand the expression using the binomial series:

$\left(1 + \frac{1}{n}\right)^n = 1 + n \times \frac{1}{n} + \frac{n(n-1)}{2} \cdot \frac{1}{n^2} + \dots$

That works out to be $1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \dots$, minus a load of terms with $n$s on the bottom. But when $n$ gets big, those all get extremely small, leaving you with $1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \dots = e$.

I hope that helps!

- Uncle Colin

## Colin

Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove.

1. hey! that's your name!
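A quick numerical check of the limit argument makes the point concrete. This snippet is not from the original post; it simply evaluates $(1 + 1/n)^n$ for growing $n$ and shows the values creeping up towards $e \approx 2.71828$ rather than towards 1.

```python
import math

# Evaluate (1 + 1/n)^n for increasingly large n and compare with e.
for n in [1, 10, 100, 1_000, 1_000_000]:
    value = (1 + 1 / n) ** n
    print(f"n = {n:>9}: (1 + 1/n)^n = {value:.6f}")

print(f"e             = {math.e:.6f}")
# n = 1 gives 2.0, n = 1,000,000 gives 2.718280, approaching e = 2.718282.
```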
http://www.vahala.caltech.edu/Research/eOFD
# electro-Optical Frequency Division (eOFD)

Electro-Optical Frequency Division (eOFD) is a new method for implementing Optical Frequency Division (OFD) that has similarities to the way that electrical frequency division is applied to stabilize high-frequency oscillators. However, the method reverses the conventional architecture for stabilization of an electrical VCO by electro-optically linking it to a higher-frequency, optically-derived reference frequency, as opposed to a lower-frequency reference such as quartz. This allows the method to leverage the high stability of optically derived references to stabilize a common electrical VCO.

Figure 1: Diagrams of the eOFD process [2].

As background to this section, it is helpful to read the section describing Optical Frequency Division (OFD). The idea of two-point locking [1] described there has inspired a third way to implement OFD that our group has recently demonstrated [2]. The idea is described in figure 1 and begins with two laser lines having a very stable frequency separation. In conventional two-point locking, these lasers would be used to stabilize two comb teeth in an existing frequency comb. In what we call electro-optical frequency division (eOFD), the frequency comb is generated from these lasers by phase modulation at a frequency determined by a voltage-controlled electrical oscillator (VCO). Upon phase modulation, each laser line generates a set of sidebands with a separation in frequency equal to the VCO frequency. The basic layout is shown in panel A of figure 1, where the dual-frequency optical reference is shown at the top, followed below by the phase modulation (optical divider) and finally, at the bottom, by the electrical VCO.

A spectral representation of the process is given in panel C of figure 1, with the two laser lines at ν1 and ν2. For a large enough number of sidebands there will be two sidebands near the midpoint of the frequency span between the two laser frequencies, having a separation in frequency that can be easily measured using a photodetector. This detected electrical signal carries the phase information of the VCO (multiplied by the number of sidebands, N = N1 + N2, between the two lasers) and can be used to provide feedback control to the VCO. When the feedback loop is closed, the net result is that the VCO acquires the relative frequency stability of the lasers divided by N. Since the relative stability of the two lasers can be very good and N can be large, the VCO stability can be greatly improved.

The actual implementation of this method is interesting in an architectural sense, as it resembles a conventional electrical frequency synthesizer (see the section on Microwave Photonics) except with the locations in frequency space of the VCO and the reference oscillator reversed. The diagram in panel B of figure 1 shows the idea. In a conventional electrical synthesizer, the VCO is divided down in frequency for comparison to a low-frequency quartz oscillator. In eOFD, on the other hand, an all-optical reference (the two lasers) is divided down to stabilize the VCO. The advantage is that, as noted in the section on OFD, optical sources can be orders of magnitude more stable than quartz, so transferring this stability by eOFD in this new architecture has performance advantages over conventional electrical frequency division. Also, in comparison to conventional OFD, eOFD relies on the relative stability of the optical reference as opposed to absolute stability.
For reasons discussed in the section on Microwave Photonics, relative stability is often more robust with respect to environmental disturbances.

The data in figure 2 show how the VCO performance is improved through the eOFD process. The dashed black curve is the phase noise of an Agilent high-performance microwave VCO. The red curve, on the other hand, is the phase noise of the optical reference (i.e., the difference frequency of the two laser sources). These sources are tuned to two different frequency separations and then divided down in frequency to control the VCO. The blue curve shows the case of optical division by 30x from an initial 327 GHz frequency separation, while the green curve shows the VCO performance with division by an even larger factor of 148x from an initial 1.61 THz frequency separation. The improvement is quadratic in the division factor, so there is a very large reduction in the phase noise of the already high-performance VCO. We are currently working on even higher performance implementations of this idea.

Figure 2: Demonstration of eOFD [2].

[1] W. C. Swann, E. Baumann, F. R. Giorgetta, N. R. Newbury, "Microwave generation with low residual phase noise from a femtosecond fiber laser with an intracavity electro-optic modulator," Opt. Express 19, 24387–24395 (2011)

[2] Jiang Li, Xu Yi, Hansuek Lee, Scott Diddams, Kerry Vahala, "Electro-Optical Frequency Division and Stable Microwave Synthesis," Science 345, 309-313 (2014)
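As a rough back-of-the-envelope illustration of why the division factor matters (my own sketch, not taken from the page above): ideal frequency division by a factor N scales phase fluctuations down by N, so phase-noise power drops by N², i.e. by 20·log10(N) dB. The snippet below just evaluates that ideal reduction for the two division factors quoted (30x and 148x); the measured curves in figure 2 will of course differ from this idealization.

```python
import math

# Ideal phase-noise reduction from frequency division by a factor N:
# noise power drops by N^2, which is 20*log10(N) in decibels.
for n in (30, 148):
    reduction_db = 20 * math.log10(n)
    print(f"division by {n:>3}x -> ~{reduction_db:.1f} dB ideal phase-noise reduction")
# division by  30x -> ~29.5 dB, division by 148x -> ~43.4 dB
```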
http://export.arxiv.org/abs/1905.09309
physics.ins-det

# Title: Performance of the large scale HV-CMOS pixel sensor MuPix8

Abstract: The Mu3e experiment is searching for the charged lepton flavour violating decay $\mu^+\rightarrow e^+ e^- e^+$, aiming for an ultimate sensitivity of one in $10^{16}$ decays. In an environment of up to $10^9$ muon decays per second the detector needs to provide precise vertex, time and momentum information to suppress accidental and physics background. The detector consists of cylindrical layers of $50\, \mu\text{m}$ thin High Voltage Monolithic Active Pixel Sensors (HV-MAPS) placed in a $1\,\text{T}$ magnetic field. The measurement of the trajectories of the decay particles allows for a precise vertex and momentum reconstruction. Additional layers of fast scintillating fibre and tile detectors provide sub-nanosecond time resolution. The MuPix8 chip is the first large scale prototype, proving the scalability of the HV-MAPS technology. It is produced in the AMS aH18 $180\, \text{nm}$ HV-CMOS process. It consists of three sub-matrices, each providing an untriggered datastream of more than $10\,\text{MHits}/\text{s}$. The latest results from laboratory and testbeam characterisation are presented, showing an excellent performance with efficiencies $>99.6\,\%$ and a time resolution better than $10\, \text{ns}$ achieved with time walk correction.

Subjects: Instrumentation and Detectors (physics.ins-det)
Cite as: arXiv:1905.09309 [physics.ins-det] (or arXiv:1905.09309v1 [physics.ins-det] for this version)

## Submission history

From: Heiko Augustin
[v1] Wed, 22 May 2019 18:11:26 GMT (1679kb,D)
http://www.aptech.com/questions/pdfexp-cdfexp/
# pdfExp, cdfExp

I'm lost in the use of pdfExp(x,a,m) and cdfExp(x,a,m). Given that the exponential pdf is equal to f(x) = lambda * exp(-lambda*x), what are the location and mean parameters? For cdfExp, the documentation shows an equation [image from the GAUSS 13 documentation], and I don't see any m there. How can I get the same values as Matlab's expcdf and exppdf functions?

Thank you.

The pdfExp and cdfExp functions in GAUSS calculate the two-parameter exponential function. This is documented more clearly and accurately in the latest version of GAUSS. For easy comparison, I will show a few different parameterizations in GAUSS code.

First, the single-parameter exponential function with a rate parameter called lambda. This is the one which you posted above.

val = lambda * exp(-lambda * x);

Second, the single-parameter exponential function with a scale parameter beta. This scale parameter is the reciprocal of the rate parameter.

val = (1/beta) * exp(-x/beta);

Finally, the two-parameter exponential function with a scale parameter beta and a threshold parameter (or location parameter, in a sense) theta. This is the function in GAUSS.

val = (1/beta) * exp(-(x - theta)/beta);

To calculate the single-parameter exponential function in GAUSS, you will always set the second input equal to 0. If you are thinking of the distribution in terms of the rate parameter lambda, then you will pass in the final value as 1/lambda. Here are some examples:

//x = 1.2, lambda = 0.5
val_1 = 0.5 * exp(-1.2*0.5);

//x = 1.2, beta = 2
val_2 = 1/2 * exp(-1.2/2);

//x = 1.2, beta = 2, theta = 0
val_3 = 1/2 * exp(-(1.2 - 0)/2);

//x = 1.2, beta = 2, theta = 0
val_4 = pdfExp(1.2, 0, 2);

For each of these the answer should be: 0.27440582.

aptech
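For the Matlab comparison asked about in the question, here is a small cross-check of my own (not part of the original answer). Matlab's exppdf(x, mu) uses the mean/scale parameter mu, which corresponds to GAUSS's pdfExp(x, 0, beta) with beta = mu. The snippet below verifies the 0.27440582 value with SciPy, whose expon distribution uses the same loc/scale parameterization.

```python
import math
from scipy.stats import expon

x, beta, theta = 1.2, 2.0, 0.0

# Hand-computed two-parameter exponential pdf, matching GAUSS's pdfExp(x, theta, beta).
manual = (1 / beta) * math.exp(-(x - theta) / beta)

# SciPy's expon uses loc (= theta) and scale (= beta), the same convention as
# Matlab's exppdf(x, mu) with mu = beta when the location is 0.
scipy_val = expon(loc=theta, scale=beta).pdf(x)

print(manual, scipy_val)   # both ~0.27440582
```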
https://www.bartleby.com/solution-answer/chapter-5-problem-67e-chemistry-an-atoms-first-approach-2nd-edition/9781305079243/what-number-of-atoms-of-nitrogen-are-present-in-500-g-of-each-of-the-following-a-glycine-c2h5o2n/50ecaf25-a826-11e8-9bb5-0ece094302b6
What number of atoms of nitrogen are present in 5.00 g of each of the following?

a. glycine, C2H5O2N
b. magnesium nitride
c. calcium nitrate
d. dinitrogen tetroxide

Chemistry: An Atoms First Approach, 2nd Edition
Steven S. Zumdahl + 1 other
Publisher: Cengage Learning
ISBN: 9781305079243
Chapter 5, Problem 67E

(a)

Interpretation: The mass of each compound is given. By using the mass, the number of nitrogen (N) atoms is to be calculated.

Concept introduction: The mass number of an atom is the sum of its numbers of protons and neutrons. The molar mass of a substance is the mass in grams of one mole of that substance; the molar mass of a compound can be calculated by adding the atomic masses of the individual atoms present in it. The amount of substance containing as many elementary particles as there are atoms in 12 g of pure carbon-12 is called a mole, and one mole always contains 6.022×10^23 particles. This number is also called Avogadro's number.

To determine: The number of nitrogen (N) atoms in 5.00 g of glycine (C2H5O2N).

Explanation of Solution

Given: the mass of glycine (C2H5O2N) is 5.00 g.

The molar mass of glycine (C2H5O2N) is
(2×12.01 + 5×1.008 + 2×15.999 + 14.0) g/mol = 75.058 g/mol

The number of moles of C2H5O2N is calculated as
Moles of C2H5O2N = (Mass of C2H5O2N) / (Molar mass of C2H5O2N)

Substituting the mass and molar mass of C2H5O2N into the above equation:
Moles of C2H5O2N = 5.00 g / 75.058 g/mol

(b) To determine: The number of nitrogen (N) atoms in 5.00 g of magnesium nitride.

(c) To determine: The number of nitrogen (N) atoms in 5.00 g of calcium nitrate.

(d) To determine: The number of nitrogen (N) atoms in 5.00 g of dinitrogen tetroxide.
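All four parts follow the same mass → moles → formula units → nitrogen atoms chain. As a cross-check, here is my own worked sketch (not part of the textbook solution); it assumes standard atomic masses for the molar masses shown in the comments.

```python
AVOGADRO = 6.022e23

# (compound, molar mass in g/mol, N atoms per formula unit)
compounds = [
    ("glycine C2H5O2N",          75.07, 1),   # 2*12.011 + 5*1.008 + 2*15.999 + 14.007
    ("magnesium nitride Mg3N2", 100.93, 2),   # 3*24.305 + 2*14.007
    ("calcium nitrate Ca(NO3)2", 164.09, 2),  # 40.078 + 2*(14.007 + 3*15.999)
    ("dinitrogen tetroxide N2O4", 92.01, 2),  # 2*14.007 + 4*15.999
]

mass = 5.00  # grams of each compound

for name, molar_mass, n_per_unit in compounds:
    moles = mass / molar_mass
    n_atoms = moles * AVOGADRO * n_per_unit
    print(f"{name:28s}: {n_atoms:.2e} N atoms")
# glycine ~4.01e22, Mg3N2 ~5.97e22, Ca(NO3)2 ~3.67e22, N2O4 ~6.54e22
```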
http://machinethink.net/blog/object-detection-with-yolo/
# Real-time object detection with YOLO

20 May 2017
16 minutes

Object detection is one of the classical problems in computer vision: Recognize what the objects are inside a given image and also where they are in the image. Detection is a more complex problem than classification, which can also recognize objects but doesn't tell you exactly where the object is located in the image — and it won't work for images that contain more than one object.

YOLO is a clever neural network for doing object detection in real-time. In this blog post I'll describe what it took to get the "tiny" version of YOLOv2 running on iOS using Metal Performance Shaders.

Before you continue, make sure to watch the awesome YOLOv2 trailer. 😎

## How YOLO works

You can take a classifier like VGGNet or Inception and turn it into an object detector by sliding a small window across the image. At each step you run the classifier to get a prediction of what sort of object is inside the current window. Using a sliding window gives several hundred or thousand predictions for that image, but you only keep the ones the classifier is the most certain about.

This approach works but it's obviously going to be very slow, since you need to run the classifier many times. A slightly more efficient approach is to first predict which parts of the image contain interesting information — so-called region proposals — and then run the classifier only on these regions. The classifier has to do less work than with the sliding windows but still gets run many times over.

YOLO takes a completely different approach. It's not a traditional classifier that is repurposed to be an object detector. YOLO actually looks at the image just once (hence its name: You Only Look Once) but in a clever way.

YOLO divides up the image into a grid of 13 by 13 cells:

Each of these cells is responsible for predicting 5 bounding boxes. A bounding box describes the rectangle that encloses an object. YOLO also outputs a confidence score that tells us how certain it is that the predicted bounding box actually encloses some object. This score doesn't say anything about what kind of object is in the box, just if the shape of the box is any good.

The predicted bounding boxes may look something like the following (the higher the confidence score, the fatter the box is drawn):

For each bounding box, the cell also predicts a class. This works just like a classifier: it gives a probability distribution over all the possible classes. The version of YOLO we're using is trained on the PASCAL VOC dataset, which can detect 20 different classes such as:

• bicycle
• boat
• car
• cat
• dog
• person
• and so on…

The confidence score for the bounding box and the class prediction are combined into one final score that tells us the probability that this bounding box contains a specific type of object. For example, the big fat yellow box on the left is 85% sure it contains the object "dog":

Since there are 13×13 = 169 grid cells and each cell predicts 5 bounding boxes, we end up with 845 bounding boxes in total. It turns out that most of these boxes will have very low confidence scores, so we only keep the boxes whose final score is 30% or more (you can change this threshold depending on how accurate you want the detector to be).

The final prediction is then:

From the 845 total bounding boxes we only kept these three because they gave the best results. But note that even though there were 845 separate predictions, they were all made at the same time — the neural network just ran once.
And that’s why YOLO is so powerful and fast. (The above pictures are from pjreddie.com.) ## The neural network The architecture of YOLO is simple, it’s just a convolutional neural network: Layer kernel stride output shape --------------------------------------------- Input (416, 416, 3) Convolution 3×3 1 (416, 416, 16) MaxPooling 2×2 2 (208, 208, 16) Convolution 3×3 1 (208, 208, 32) MaxPooling 2×2 2 (104, 104, 32) Convolution 3×3 1 (104, 104, 64) MaxPooling 2×2 2 (52, 52, 64) Convolution 3×3 1 (52, 52, 128) MaxPooling 2×2 2 (26, 26, 128) Convolution 3×3 1 (26, 26, 256) MaxPooling 2×2 2 (13, 13, 256) Convolution 3×3 1 (13, 13, 512) MaxPooling 2×2 1 (13, 13, 512) Convolution 3×3 1 (13, 13, 1024) Convolution 3×3 1 (13, 13, 1024) Convolution 1×1 1 (13, 13, 125) --------------------------------------------- This neural network only uses standard layer types: convolution with a 3×3 kernel and max-pooling with a 2×2 kernel. No fancy stuff. There is no fully-connected layer in YOLOv2. Note: The “tiny” version of YOLO that we’ll be using has only these 9 convolutional layers and 6 pooling layers. The full YOLOv2 model uses three times as many layers and has a slightly more complex shape, but it’s still just a regular convnet. The very last convolutional layer has a 1×1 kernel and exists to reduce the data to the shape 13×13×125. This 13×13 should look familiar: that is the size of the grid that the image gets divided into. So we end up with 125 channels for every grid cell. These 125 numbers contain the data for the bounding boxes and the class predictions. Why 125? Well, each grid cell predicts 5 bounding boxes and a bounding box is described by 25 data elements: • x, y, width, height for the bounding box’s rectangle • the confidence score • the probability distribution over the 20 classes Using YOLO is simple: you give it an input image (resized to 416×416 pixels), it goes through the convolutional network in a single pass, and comes out the other end as a 13×13×125 tensor describing the bounding boxes for the grid cells. All you need to do then is compute the final scores for the bounding boxes and throw away the ones scoring lower than 30%. Tip: To learn more about how YOLO works and how it is trained, check out this excellent talk by one of its inventors. This video actually describes YOLOv1, an older version of the network with a slightly different architecture, but the main ideas are still the same. Worth watching! ## Converting to Metal The architecture I just described is for Tiny YOLO, which is the version we’ll be using in the iOS app. The full YOLOv2 network has three times as many layers and is a bit too big to run fast enough on current iPhones. Since Tiny YOLO uses fewer layers, it is faster than its big brother… but also a little less accurate. YOLO is written in Darknet, a custom deep learning framework from YOLO’s author. The downloadable weights are available only in Darknet format. Even though the source code for Darknet is available, I wasn’t really looking forward to spending a lot of time figuring out how it works. Luckily for me, someone else already put in that effort and converted the Darknet models to Keras, my deep learning tool of choice. So all I had to do was run this “YAD2K” script to convert the Darknet weights to Keras format, and then write my own script to convert the Keras weights to Metal. However, there was a small wrinkle… YOLO uses a regularization technique called batch normalization after its convolutional layers. 
The idea behind "batch norm" is that neural network layers work best when the data is clean. Ideally, the input to a layer has an average value of 0 and not too much variance. This should sound familiar to anyone who's done any machine learning because we often use a technique called "feature scaling" or "whitening" on our input data to achieve this.

Batch normalization does a similar kind of feature scaling for the data in between layers. This technique really helps neural networks perform better because it stops the data from deteriorating as it flows through the network.

To give you some idea of the effect of batch norm, here is a histogram of the output of the first convolution layer without and with batch normalization:

Batch normalization is important when training a deep network, but it turns out we can get rid of it at inference time. Which is a good thing because not having to do the batch norm calculations will make our app faster. And in any case, Metal does not have an MPSCNNBatchNormalization layer.

Batch normalization usually happens after the convolutional layer but before the activation function gets applied (a so-called "leaky" ReLU in the case of YOLO). Since both convolution and batch norm perform a linear transformation of the data, we can combine the batch normalization layer's parameters with the weights for the convolution. This is called "folding" the batch norm layer into the convolution layer.

Long story short, with a bit of math we can get rid of the batch normalization layers but it does mean we have to change the weights of the preceding convolution layer.

A quick recap of what a convolution layer calculates: if x is the pixels in the input image and w is the weights for the layer, then the convolution basically computes the following for each output pixel:

out[j] = x[i]*w[0] + x[i+1]*w[1] + x[i+2]*w[2] + ... + x[i+k]*w[k] + b

This is a dot product of the input pixels with the weights of the convolution kernel, plus a bias value b. And here's the calculation performed by the batch normalization to the output of that convolution:

        gamma * (out[j] - mean)
bn[j] = ----------------------- + beta
             sqrt(variance)

It subtracts the mean from the output pixel, divides by the square root of the variance, multiplies by a scaling factor gamma, and adds the offset beta. These four parameters — mean, variance, gamma, and beta — are what the batch normalization layer learns as the network is trained.

To get rid of the batch normalization, we can shuffle these two equations around a bit to compute new weights and bias terms for the convolution layer:

           gamma * w
w_new = --------------
        sqrt(variance)

        gamma*(b - mean)
b_new = ---------------- + beta
         sqrt(variance)

Performing a convolution with these new weights and bias terms on input x will give the same result as the original convolution plus batch normalization. Now we can remove this batch normalization layer and just use the convolutional layer, but with these adjusted weights and bias terms w_new and b_new. We repeat this procedure for all the convolutional layers in the network.

Note: The convolution layers in YOLO don't actually use bias, so b is zero in the above equation. But note that after folding the batch norm parameters, the convolution layers do get a bias term.

Once we've folded all the batch norm layers into their preceding convolution layers, we can convert the weights to Metal.
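As a sanity check of the folding formulas above, here is a small NumPy sketch of my own (it is not the yolo2metal.py script from the post). It folds batch-norm parameters into a 1D convolution-style dot product and verifies that the folded weights give the same output; a small epsilon is included inside the square root the way most frameworks do, which the simplified formulas above omit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "convolution": a dot product of an input patch with a kernel, plus bias.
x = rng.standard_normal(9)    # one input patch
w = rng.standard_normal(9)    # convolution weights
b = 0.0                       # YOLO's conv layers have no bias before folding

# Batch-norm parameters learned during training.
gamma, beta = 1.7, 0.3
mean, variance, eps = 0.5, 2.0, 1e-5

# Original: convolution followed by batch norm.
out = x @ w + b
bn = gamma * (out - mean) / np.sqrt(variance + eps) + beta

# Folded: adjusted weights and bias, batch norm removed.
w_new = gamma * w / np.sqrt(variance + eps)
b_new = gamma * (b - mean) / np.sqrt(variance + eps) + beta
folded = x @ w_new + b_new

print(np.allclose(bn, folded))   # True: same output without a batch-norm step
```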
This is a simple matter of transposing the arrays (Keras stores them in a different order than Metal) and writing them out to binary files of 32-bit floating point numbers. If you're curious, check out the conversion script yolo2metal.py for more details. To test that the folding works, the script creates a new model without batch norm but with the adjusted weights, and compares it to the predictions of the original model.

## The iOS app

Of course I used Forge to build the iOS app. 😂 You can find the code in the YOLO folder. To try it out: download or clone Forge, open Forge.xcworkspace in Xcode 8.3 or later, and run the YOLO target on an iPhone 6 or up.

The easiest way to test the app is to point your iPhone at some YouTube videos:

The interesting code is in YOLO.swift. First this sets up the convolutional network:

let leaky = MPSCNNNeuronReLU(device: device, a: 0.1)

let input = Input()

let output = input
        --> Resize(width: 416, height: 416)
        --> Convolution(kernel: (3, 3), channels: 16, padding: true, activation: leaky, name: "conv1")
        --> MaxPooling(kernel: (2, 2), stride: (2, 2))
        --> Convolution(kernel: (3, 3), channels: 32, padding: true, activation: leaky, name: "conv2")
        --> MaxPooling(kernel: (2, 2), stride: (2, 2))
        --> ...and so on...

The input from the camera gets rescaled to 416×416 pixels and then goes into the convolutional and max-pooling layers. This is very similar to how any other convnet operates.

The interesting thing is what happens with the output. Recall that the output of the convnet is a 13×13×125 tensor: there are 125 channels of data for each of the cells in the grid that is overlaid on the image. These 125 numbers contain the bounding boxes and class predictions, and we need to sort these out somehow. This happens in the function fetchResult().

Note: The code in fetchResult() runs on the CPU, not the GPU. It was simpler to implement that way. That said, the nested loop might benefit from the parallelism of a GPU. Maybe I'll come back to this in the future and write a GPU version.

Here is how fetchResult() works:

public func fetchResult(inflightIndex: Int) -> NeuralNetworkResult<Prediction> {
  let featuresImage = model.outputImage(inflightIndex: inflightIndex)
  let features = featuresImage.toFloatArray()

The output from the convolutional network is in the form of an MPSImage. We first convert this to an array of Float values called features, to make it a little easier to work with.

The main body of fetchResult() is a huge nested loop. It looks at all of the grid cells and the five predictions for each cell:

for cy in 0..<13 {
  for cx in 0..<13 {
    for b in 0..<5 {
      . . .
    }
  }
}

Inside this loop we compute the bounding box b for grid cell (cy, cx). First we read the x, y, width, and height for the bounding box from the features array, as well as the confidence score:

let channel = b*(numClasses + 5)
let tx = features[offset(channel, cx, cy)]
let ty = features[offset(channel + 1, cx, cy)]
let tw = features[offset(channel + 2, cx, cy)]
let th = features[offset(channel + 3, cx, cy)]
let tc = features[offset(channel + 4, cx, cy)]

The offset() helper function is used to find the proper place in the array to read from. Metal stores its data in texture slices in groups of 4 channels at a time, which means the 125 channels are not stored consecutively but are scattered all over the place. (See the code for an in-depth explanation.)

We still need to do some processing on these five numbers tx, ty, tw, th, tc as they are in a bit of a weird format.
If you're wondering where these formulas come from, they're given in the paper (it's a side effect of how the network was trained).

let x = (Float(cx) + Math.sigmoid(tx)) * 32
let y = (Float(cy) + Math.sigmoid(ty)) * 32
let w = exp(tw) * anchors[2*b    ] * 32
let h = exp(th) * anchors[2*b + 1] * 32
let confidence = Math.sigmoid(tc)

Now x and y represent the center of the bounding box in the 416×416 image that we used as input to the neural network; w and h are the width and height of the box in that same image space. The confidence value for the bounding box is given by tc and we use the logistic sigmoid to turn this into a percentage.

We now have our bounding box and we know how confident YOLO is that this box actually contains an object. Next, let's look at the class predictions to see what kind of object YOLO thinks is inside the box:

var classes = [Float](repeating: 0, count: numClasses)
for c in 0..<numClasses {
  classes[c] = features[offset(channel + 5 + c, cx, cy)]
}
classes = Math.softmax(classes)

let (detectedClass, bestClassScore) = classes.argmax()

Recall that 20 of the channels in the features array contain the class predictions for this bounding box. We read those into a new array, classes. As is usual for classifiers, we take the softmax to turn the array into a probability distribution. And then we pick the class with the largest score as the winner.

Now we can compute the final score for this bounding box — for example, "I'm 85% sure this bounding box contains a dog". As there are 845 bounding boxes in total, we only want to keep the ones whose combined score is over a certain threshold.

let confidenceInClass = bestClassScore * confidence
if confidenceInClass > 0.3 {
  let rect = CGRect(x: CGFloat(x - w/2), y: CGFloat(y - h/2),
                    width: CGFloat(w), height: CGFloat(h))
  let prediction = Prediction(classIndex: detectedClass,
                              score: confidenceInClass,
                              rect: rect)
  predictions.append(prediction)
}

The above code is repeated for all the cells in the grid. When the loop is over, we have a predictions array with typically 10 to 20 predictions in it. We already filtered out any bounding boxes that have very low scores, but there still may be boxes that overlap too much with others. Therefore, the last thing we do in fetchResult() is a technique called non-maximum suppression to prune those duplicate bounding boxes.

var result = NeuralNetworkResult<Prediction>()
result.predictions = nonMaxSuppression(boxes: predictions, limit: 10, threshold: 0.5)
return result
}

The algorithm used by the nonMaxSuppression() function is quite simple:

1. Start with the bounding box that has the highest score.
2. Remove any remaining bounding boxes that overlap it more than the given threshold amount (i.e. more than 50%).
3. Go to step 1 until there are no more bounding boxes left.

This removes any bounding boxes that overlap too much with other boxes that have a higher score. It only keeps the best ones.

And that's pretty much all there is to it: a regular convolutional network and a bit of postprocessing of the results afterwards.

## How well does it work?

The YOLO website claims that Tiny YOLO can do up to 200 frames per second. But of course that is on a fat desktop GPU, not on a mobile device. So how fast does it run on an iPhone?

On my iPhone 6s it takes about 0.15 seconds to process a single image. That is only 6 FPS, barely fast enough to call it realtime. If you point the phone at a car driving by, you can see the bounding box trailing a little behind the car. Still, I'm impressed this technique works at all. 😁
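The greedy procedure described above is easy to express in a few lines. This is my own illustrative sketch in Python, not the Swift implementation from the post; boxes are (x, y, w, h, score) tuples and the overlap test uses standard intersection-over-union.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h, score) boxes; x, y is the top-left corner."""
    ax, ay, aw, ah, _ = a
    bx, by, bw, bh, _ = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, limit=10, threshold=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop boxes overlapping it."""
    remaining = sorted(boxes, key=lambda box: box[4], reverse=True)
    keep = []
    while remaining and len(keep) < limit:
        best = remaining.pop(0)                     # step 1: highest remaining score
        keep.append(best)
        remaining = [box for box in remaining
                     if iou(best, box) <= threshold]  # step 2: drop heavy overlaps
    return keep

# Example: two overlapping "dog" boxes and one separate box; the weaker overlap is removed.
candidates = [(10, 10, 100, 100, 0.85), (20, 15, 100, 100, 0.60), (300, 40, 80, 80, 0.70)]
print(non_max_suppression(candidates))
```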
Note: As I explained above, the processing of the bounding boxes runs on the CPU, not the GPU. Would YOLO run faster if it ran on the GPU entirely? Maybe, but the CPU code takes only about 0.03 seconds, 20% of the running time. It's possible to do at least a portion of this work on the GPU, but I'm not sure it's worth the effort given that the conv layers still eat up 80% of the time.

I think a major slowdown is caused by the convolutional layers that have 512 and 1024 output channels. From my experiments it seems that MPSCNNConvolution has more trouble with small images that have many channels than with large images that have fewer channels.

One thing I'm interested in trying is to take a different network architecture, such as SqueezeNet, and retrain this network to predict the bounding boxes in its last layer. In other words, to take the YOLO ideas and put them on top of a smaller and faster convnet. Will the increase in speed be worth the loss in accuracy?

Note: By the way, the recently released Caffe2 framework also runs on iOS with Metal support. The Caffe2-iOS project comes with a version of Tiny YOLO. It appears to run a little slower than the pure Metal version, at 0.17 seconds per frame.
http://www.dml.cz/handle/10338.dmlcz/119206
# Article

Full entry | PDF (0.2 MB)

Keywords: vector lattice; uniformly complete vector lattice; lattice ordered algebra; almost $f$-algebra; $d$-algebra; $f$-algebra

Summary: Let $A$ be a uniformly complete almost $f$-algebra and a natural number $p\in\{3,4,\dots\}$. Then $\Pi_{p}(A)= \{a_{1}\dots a_{p};\ a_{k}\in A,\ k=1,\dots,p\}$ is a uniformly complete semiprime $f$-algebra under the ordering and multiplication inherited from $A$, with $\Sigma_{p}(A)=\{a^{p};\ 0\leq a\in A\}$ as positive cone.

References:
[1] Basly M., Triki A.: $FF$-algébres Archimédiennes réticulées. University of Tunis, preprint, 1988. MR 0964828
[2] Bernau S.J., Huijsmans C.B.: Almost $f$-algebras and $d$-algebras. Math. Proc. Camb. Phil. Soc. 107 (1990), 287-308. MR 1027782 | Zbl 0707.06009
[3] Beukers F., Huijsmans C.B.: Calculus in $f$-algebras. J. Austral. Math. Soc. (Series A) 37 (1984), 110-116. MR 0742249 | Zbl 0555.06014
[4] Boulabiar K.: A relationship between two almost $f$-algebra products. Algebra Univ., to appear. MR 1785321 | Zbl 1012.06022
[5] Buskes G., van Rooij A.: Almost $f$-algebras: structure and the Dedekind completion. In: Three papers on Riesz spaces and almost $f$-algebras, Technical Report, Catholic University Nijmegen, Report 9526, 1995. Zbl 0967.46008
[6] Huijsmans C.B., de Pagter B.: Averaging operators and positive contractive projections. J. Math. Appl. 113 (1986), 163-184. MR 0826666 | Zbl 0604.47024
[7] Luxembourg W.A.J., Zaanen A.C.: Riesz spaces I. North-Holland, Amsterdam, 1971.
[8] de Pagter B.: $f$-algebras and orthomorphisms. Thesis, Leiden, 1981.
[9] Zaanen A.C.: Riesz spaces II. North-Holland, Amsterdam, 1983. MR 0704021 | Zbl 0519.46001
https://solvedlib.com/n/what-exactly-would-happen-if-introns-were-not-spliced-out,6997373
# What exactly would happen if introns were not spliced out/removed?

###### Question:

What exactly would happen if introns were not spliced out/removed? I know that RNA splicing occurs in molecular biology, and that it is a form of RNA processing in which a newly made precursor messenger RNA transcript is transformed into a mature messenger RNA (mRNA). During splicing, introns are removed and exons are joined together, but what would happen if the introns were not removed?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6988986730575562, "perplexity": 6279.716366692482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00251.warc.gz"}
https://fractalforums.org/fractal-mathematics-and-new-theories/28/tri-furcation-and-more/1247/msg6625
### "Tri"-furcation and more • 13 Replies • 600 Views 0 Members and 1 Guest are viewing this topic. #### Fraktalist • Strange Attractor • Posts: 1143 #### "Tri"-furcation and more « on: April 20, 2018, 11:02:00 PM » Hey! The attached image beautifully visualizes the connection between the Mandelbrot-Set and the Bifurcation diagram. Now I wonder, are there images of trifurcation? Basicallu going to the 3-bulb and using it to draw the same diagram but with 3 branches. I haven't found any online yet. But looking at this image, they must exist.. Maybe there even is some code/software out there to generate the bifurcation tree of basically any coordinate/blulb of the Mset? • 3f • Posts: 1831 #### Re: "Tri"-furcation and more « Reply #1 on: April 20, 2018, 11:22:10 PM » • 3f • Posts: 1483 #### Re: "Tri"-furcation and more « Reply #2 on: April 20, 2018, 11:28:47 PM » It could be done, with a continuous parameter path into a period 3 bulb, then 9, then 27 etc. (to be "proper" it should probably start at 0 or the cusp and pass through the various component centers). This path won't be (even close to) straight, and it won't be unique -- at every step there's a binary choice available. You'd also need to decide how to project down the orbit values to one dimension ... or else make a three dimensional object, with the horizontal plane the orbit in the complex plane and the vertical axis the parameter for the paramater-space path. So, vertical axis s, horizontal planes z orbits of 0 under z2 + c where c = f(s) follows a path as described above as s goes from, say, 0 to 1. • 3f • Posts: 1483 #### Re: "Tri"-furcation and more « Reply #3 on: April 20, 2018, 11:37:51 PM » Oh, and there won't be any equivalent to the chaotic part with the windows. The trifurcating path stays in bulbs to a limit point that belongs to the boundary of M, has irrational external angle, does not have a filament attached at it, and has a dendrite rather than a Siegel disk basin as the corresponding Julia shape. A continuation past that point would have to either double back on itself or else spiral back out through the exterior of M -- no stable finite attractors or chaos in the latter cases, like if the real-axis case jumped directly from the start of the chaotic area to already being past the antenna tip and skipped over everything in between. To get something more resembling the usual diagram you could, say, go into the top bulb and up through bulbs to the Y fork, and then take some path along the bramble to a branch tip Misiurewicz point, but the periods hit will be 1, 3, 6, 12, 24 ... i.e., one trifurcation and then plain-Jane bifurcations. Or ... you could make a path to a tip of one of the largest four bramble-antennas in the z3 Mandelbrot. Same branching choices but only four lead to maximal-size antennas. That follows a non-straight bramble and makes turns between bulbs, but ... it also only doubles the period each step. • 3f • Posts: 1483 #### Re: "Tri"-furcation and more « Reply #4 on: April 20, 2018, 11:46:48 PM » Oh, and one more thing (content warning: calculus): to really be comparable in an everything-to-scale way, the parameter s should relate to the path in c = x + iy space via (ds)2 = (dx)2 + (dy)2, i.e. s is distance traveled in the parameter plane from one end of the path. 
For the path to be smooth it should probably be some kind of Bezier spline starting at 0.25 and passing through 0 and then a succession of component roots and component centers, and avoiding undulating too much (especially not so much as for loops of it to protrude outside of the bulb interiors! It should only touch the M-set boundary at 0.25, period-tripling component roots, and that irrational limit point). The other control points should be algorithmically set so as to minimize the integral of the path's local curvature over the path, I should think.

• 3f • Posts: 1831

#### Re: "Tri"-furcation and more « Reply #5 on: April 21, 2018, 12:08:48 AM »

For power 4 it's easy as in power 2. See below.

• 3f • Posts: 1831

#### Re: "Tri"-furcation and more « Reply #6 on: April 21, 2018, 04:02:19 AM »

This is a fun idea. Below some bifurcation diagrams for the M2 set going along some straight lines. Maybe more fun would be to select some points interactively and then draw a curve through those points and plot the bifurcation diagram. Would be simple in KF, just save your locations and then read them in. Maybe I'll give it a try.

#### Fraktalist • Strange Attractor • Posts: 1143

#### Re: "Tri"-furcation and more « Reply #7 on: April 25, 2018, 11:37:20 AM »

thx guys! gerrit, those images look fascinating. could you do a simple one for the power 3 bulbs as pauldelbrot described? what tool are you using to plot these?

• Fractal Feline • Posts: 190

#### Re: "Tri"-furcation and more « Reply #8 on: April 25, 2018, 06:16:56 PM »

Quote: "Now I wonder, are there images of trifurcation? Basically going to the 3-bulb and using it to draw the same diagram but with 3 branches."

OK, start from c = 0 (center of the period 1 component). One can go along internal ray 1/3 to the root of the period 3 component, then along internal ray 0 to the center of the period 3 component, then ... Note that for bifurcation c is real, so the z values are also real. It means that one can draw the diagram in 2D. For the above route c is not real (it is complex) and the z values are also complex, so one has to use a special technique to draw it. Compare for example the bifurcation diagram in 2D: https://commons.wikimedia.org/wiki/File:Bifurcation1-2.png also description below HTH

• 3f • Posts: 1831

#### Re: "Tri"-furcation and more « Reply #9 on: April 25, 2018, 11:37:37 PM »

Quote: "thx guys! gerrit, those images look fascinating. could you do a simple one for the power 3 bulbs as pauldelbrot described? what tool are you using to plot these?"

What pauldelbrot described is hardly simple. Anyway, here is the analogue of the usual plot for power 3, not very interesting of course. And some matlab/octave code to plot.

Code: [Select]
mpow = 3;
nx = 1000;
nit = 100;
np = nx*nit;
z = zeros(np,1);
x = zeros(np,1);
c1 = -.5;
c2 = 0.5;
xg = linspace(c1, c2, nx);
kk = 1;
for k=1:nx
    xval = xg(k);
    w = 0;
    for j=1:nit
        x(kk) = xval;
        w = w^mpow + xval;
        if(abs(w)>2)
            ww = 2*sign(w);
        else
            ww = w;
        end
        z(kk) = ww;
        kk = kk+1;
    end
end
figure(1)
clf
plot(x,z,'.','markersize',1);
xlabel 'c';
axis tight;

• 3f • Posts: 1259

#### Re: "Tri"-furcation and more « Reply #10 on: April 26, 2018, 01:28:47 AM »

For non-real C you can plot all the limit-cycle Z on one image, chances of overlap are small.  You can colour according to the position along the path.  In the attached I have coloured using hue: red at roots, going through yellow towards the next bond point, in a straight line through the interior coordinate space (the interior coordinate is the derivative of the limit cycle).
I have just plotted points, so there are gaps.  Perhaps it could be improved by drawing line segments between Z values, but I'm not 100% sure if the first Z value found will always correspond to the same logical line, and keeping track of a changing number of "previous Z" values isn't too fun either.

• 3f • Posts: 1259

#### Re: "Tri"-furcation and more « Reply #11 on: April 26, 2018, 01:52:32 AM »

Colouring by hue = iteration number / period shows some interesting structure.

• 3f • Posts: 1831

#### Re: "Tri"-furcation and more « Reply #12 on: April 28, 2018, 05:51:18 AM »

Something simple: the usual "bifurcation" diagram just plots all the (real) limit values of z reached as c runs over the real axis. If you consider complex c the orbit is complex, so it is not obvious how to visualize it. In the diagrams I posted earlier I just plotted |z| for a line of c values. You can do this in 3D too: plot all the points $$|z_n(c)|$$ until escaping (if they escape) on a grid of c values. This is equivalent to plotting all the surfaces $$|z_n(c)|$$. I tried but it's too hard for me to make it look good. Attached on a small 1280x720 grid up to 50 iterations. I'm sure there are better ways to plot this.

#### hgjf2 • Fractal Friar • Posts: 102

#### Re: "Tri"-furcation and more « Reply #13 on: May 06, 2018, 09:05:00 AM »

My model of "tri-furcation" made in C#, and one in 3D stereographic:
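For readers without Matlab/Octave, the snippet in Reply #9 above translates almost line for line into Python. This is a rough transcription (assuming numpy and matplotlib are available; it is not code posted in the thread):

```python
import numpy as np
import matplotlib.pyplot as plt

mpow = 3                       # exponent in z -> z^mpow + c
nx, nit = 1000, 100
c_grid = np.linspace(-0.5, 0.5, nx)

cs, zs = [], []
for c in c_grid:
    w = 0.0
    for _ in range(nit):
        cs.append(c)
        w = w**mpow + c
        # clip escaped orbits to +/-2 so the plot stays bounded, as in the Octave code
        zs.append(2.0 * np.sign(w) if abs(w) > 2 else w)

plt.plot(cs, zs, '.', markersize=1)
plt.xlabel('c')
plt.axis('tight')
plt.show()
```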
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7431357502937317, "perplexity": 5900.959083189905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670268.0/warc/CC-MAIN-20191119222644-20191120010644-00330.warc.gz"}
https://de.maplesoft.com/support/help/Maple/view.aspx?path=QuantumChemistry/CoupledCluster
CoupledCluster - Maple Help

QuantumChemistry

CoupledCluster (Mac OS X and Linux Only)

an electron correlation method based on excitations from the Hartree-Fock wavefunction

Calling Sequence

CoupledCluster(molecule, options)

Parameters

molecule - list of lists; each list has 4 elements, the string of an atom's symbol and the atom's x, y, and z coordinates

options - (optional) equation(s) of the form option = value where option is one of symmetry, unit, max_memory, frozen, max_cycle, conv_tol, conv_tol_normt, diis_space, diis_start_cycle, ccsdt, nuclear_gradient, return_rdm, populations, diis_start_energy_diff, conv_tol_hf, diis_hf, diis_space_hf, diis_start_cycle_hf, direct_scf_hf, direct_scf_tol_hf, level_shift_hf, max_cycle_hf, max_memory_scf_hf, nuclear_gradient_hf, populations_hf

Description

• The coupled cluster method includes electron correlation through a basis of excitations from the Hartree-Fock determinant wavefunction.  Unlike truncated configuration interaction, the coupled cluster method includes the excitations through an exponential ansatz that ensures size extensivity.  A method is size extensive if and only if its energy scales linearly with system size.  Truncation of the excitations generates a hierarchy of coupled cluster methods.  For example, the coupled cluster method with single and double excitations, known as CCSD, includes single and double excitations with higher excitations approximated as products of these lower excitations.  The CoupledCluster command currently implements CCSD as the default.  The energy from coupled cluster with single, double, and perturbative triple excitations [CCSD(T)] can be computed by setting the keyword ccsdt = true.  The CoupledCluster command is only available on the Mac OS X and Linux platforms.  The Parametric2RDM method can also be employed to obtain energies and properties with an accuracy similar to that of CCSD and CCSD(T).
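The "exponential ansatz" mentioned in the Description has a compact standard form. As a reminder (this is generic coupled-cluster notation as used in the references below, not anything specific to the Maple command):

$$ |\Psi_{\mathrm{CC}}\rangle = e^{\hat{T}}\,|\Phi_{\mathrm{HF}}\rangle, \qquad \hat{T} = \hat{T}_1 + \hat{T}_2 \quad \text{(CCSD truncation)}, $$

where $\hat{T}_1$ and $\hat{T}_2$ generate single and double excitations out of the Hartree-Fock determinant $|\Phi_{\mathrm{HF}}\rangle$. Because the truncated $\hat{T}$ still enters through an exponential, $e^{\hat{T}}$ contains products of the lower excitations, which is what keeps the energy size extensive.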
Outputs

The output is a table t with the following contents:

t[e_tot] - float -- total electronic energy of the system
t[e_corr] - float -- the difference between the coupled cluster energy and the Hartree-Fock energy
t[mo_coeff] - Matrix -- coefficients expressing molecular (natural) orbitals (columns) in terms of atomic orbitals (rows)
t[mo_occ] - Vector -- molecular (natural) orbital occupations
t[e_tot_mp2] - float -- total electronic energy of the system calculated by the MP2 method
t[aolabels] - Vector -- string label for each atomic orbital consisting of the atomic symbol and the orbital name
t[converged] - integer -- 1 or 0, indicating whether the calculation is converged or not
t[t1] - Matrix -- coupled clusters' one-electron transition amplitudes
t[t2] - Array -- coupled clusters' two-electron transition amplitudes
t[nuclear_gradient] - Matrix -- the analytical nuclear gradients
t[e_tot_ccsdt] - float -- the coupled cluster energy including single, double, and perturbative triple excitations [CCSD(T)]
t[rdm1] - Matrix -- one-particle reduced density matrix (1-RDM) in the molecular-orbital (MO) representation
t[rdm2] - Array -- two-particle reduced density matrix (2-RDM) in the molecular-orbital (MO) representation
t[populations] - Matrix -- atomic-orbital populations
t[dipole] - Vector -- dipole moment according to its x, y and z components
t[charges] - Vector -- atomic charges from the populations

Options

• basis = string -- name of the basis set.  See Basis for a list of available basis sets.  Default is "sto-3g".
• spin = nonnegint -- twice the total spin S (= 2S). Default is 0.
• charge = nonnegint -- net charge of the molecule. Default is 0.
• symmetry = string/boolean -- the Schoenflies symbol of the abelian point-group symmetry, which can be one of the following: D2h, C2h, C2v, D2, Cs, Ci, C2, C1. true finds the appropriate symmetry while false (default) does not use symmetry.
• unit = string -- "Angstrom" or "Bohr". Default is "Angstrom".
• max_memory = posint -- allowed memory in MB. Default is 4000.
• frozen = set -- set of orbitals to be frozen.
• max_cycle = int -- max number of iterations. Default is 50.
• conv_tol = float -- convergence threshold. Default is 10^(-10).
• conv_tol_normt = float -- convergence threshold for the norm of the coupled cluster transition amplitudes. Default is 10^(-5).
• diis_space = int -- DIIS space size. By default, 8 Fock matrices and error vectors are stored.
• diis_start_cycle = int -- the step to start DIIS. Default is 0.
• diis_start_energy_diff = float -- the energy difference threshold to start DIIS.
• nuclear_gradient = boolean -- option to return the analytical nuclear gradient if available. Default is false.
• return_rdm = string -- options to return the 1-RDM and/or 2-RDM: "none", "rdm1", "rdm1_and_rdm2". Default is "rdm1".
• return_t2t1 = boolean -- option to return the one- and two-electron transition amplitudes.  Default is false.
• populations = string -- atomic-orbital population analysis: "Mulliken" and "Mulliken/meta-Lowdin". Default is "Mulliken".
• ccsdt = boolean -- option to return the energy from CCSD(T).  Default is false.

Attributes for Hartree-Fock:

• conv_tol_hf = float -- convergence threshold. Default is 10^(-10).
• diis_hf = boolean -- whether to employ DIIS. Default is true.
• diis_space_hf = posint -- DIIS space size. By default, 8 Fock matrices and error vectors are stored.
• diis_start_cycle_hf = posint -- the step to start DIIS. Default is 1.
• direct_scf_hf = boolean -- direct SCF, in which integrals are recomputed, is used by default.
• direct_scf_tol_hf = float -- direct SCF cutoff threshold. Default is 10^(-13).
• level_shift_hf = float/int -- level shift (in au) for the virtual space. Default is 0.
• max_cycle_hf = posint -- max number of iterations. Default is 50.
• max_memory_scf_hf = posint -- allowed memory in MB. Default is 4000.
• nuclear_gradient_hf = boolean -- option to return the analytical nuclear gradient. Default is false.
• populations_hf = string -- atomic-orbital population analysis: "Mulliken" and "Mulliken/meta-Lowdin". Default is "Mulliken".

References

1. G. D. Purvis III and R. J. Bartlett, J. Chem. Phys. 76, 1910 (1982). "A full coupled-cluster singles and doubles model: The inclusion of disconnected triples"
2. R. J. Bartlett and M. Musiał, Rev. Mod. Phys. 79, 291 (2007). "Coupled-cluster theory in quantum chemistry"

Examples

> with(QuantumChemistry):

A coupled cluster calculation of the HF molecule

> molecule := [["H", 0, 0, 0], ["F", 0, 0, 0.95000000]]    (1)

>

table(%id = 18446744078426156798)    (2)

>
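For readers who want to reproduce a comparable calculation outside Maple, roughly the same CCSD job on this H-F geometry can be sketched with the open-source PySCF package. This is my own illustration and not part of the QuantumChemistry package (it assumes PySCF is installed; sto-3g mirrors the command's default basis):

```python
from pyscf import gto, scf, cc

# H-F molecule, 0.95 Angstrom bond length, minimal STO-3G basis
mol = gto.M(atom="H 0 0 0; F 0 0 0.95", basis="sto-3g")

mf = scf.RHF(mol).run()      # Hartree-Fock reference determinant
mycc = cc.CCSD(mf).run()     # CCSD on top of the HF reference

print("HF total energy   :", mf.e_tot)
print("CCSD total energy :", mycc.e_tot)    # analogous to t[e_tot]
print("CCSD correlation  :", mycc.e_corr)   # analogous to t[e_corr]
```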
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 26, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6472280621528625, "perplexity": 4975.334873705255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571996.63/warc/CC-MAIN-20220814052950-20220814082950-00430.warc.gz"}
https://www.helsinki.fi/en/researchgroups/unified-database-management-systems-udbms/research/top-k-string-auto-completion-with-synonyms
Top-k string auto-completion with synonyms

Motivation

Keyword searching is a ubiquitous activity performed by millions of users daily. However, cognitively formulating and physically typing search queries is a time-consuming and error-prone process. In response, keyword search engines have widely adopted auto-completion as a means of reducing the effort required to submit a query. As users enter their query into the search box, auto-completion suggests possible queries the user may have in mind.

Challenges

Existing auto-completion solutions provide suggestions based on the beginning of the current input character sequence (i.e. the prefix). Although this approach provides satisfactory auto-completion in many cases, it is far from optimal since it fails to take into account the semantics of the user's input. There are many practical applications where syntactically different strings represent the same real-world object. For example, Bill is a short form of William, and Database Management Systems can be abbreviated as DBMS. This equivalence information suggests semantically similar strings that would be missed by simple prefix-based approaches. In this project, we expose these relations between different strings to support efficient top-k completion queries with synonyms, for different space and time complexity trade-offs.

Further reading

You may find our research paper at https://arxiv.org/abs/1611.03751.

Contributions

TWIN TRIES (TT): Two tries are constructed to represent strings and synonym rules respectively, in order to minimize the space occupancy. Each trie is a compact data structure, where the children of each node are ordered by the highest score among their respective descendants. Applicable synonym rules are indicated by pointers between the two tries. An efficient top-k algorithm is developed to search both tries and find the applicable synonym rules.

EXPANSION TRIE (ET): A fast lookup-optimized solution that integrates the synonym rules with the corresponding strings. Unlike TT, ET uses a single expanded trie to represent both the synonym rules and the strings. Therefore, by efficiently traversing this trie, ET is faster than TT at providing top-k completions. However, ET often has a larger space overhead than TT, because ET needs to expand the strings with their applicable rules.

HYBRID TRIES (HT): An optimized structure that strikes a good balance between the space and time costs of TT and ET. We trade off lookup speed against space cost by judiciously selecting a subset of the synonym rules to expand the strings. We show that, given a predefined space constraint, the optimal selection of synonym rules is NP-hard; it can be reduced to a 0/1 knapsack problem with item interactions. We provide an empirically efficient heuristic algorithm by extending the branch and bound algorithm.

Download

The top-k auto-completion tool is released as Java source code with corresponding binary executable files for convenience. Note that the binary file requires at least JRE 8 to run.

Version 0.1.1, 2016-11-29: Source Code, Binary Releases and Sample Dataset

Usage: topk SEARCH_STRING [K] [OPTION]

Perform top-K auto-completions for SEARCH_STRING using the selected TRIE structure; the results come from DICT_FILE, respecting the synonyms in SYN_FILE.

All parameters:

Parameter                 Default      Comments
SEARCH_STRING             (required)   The search string.
K                         10           The maximum number of results returned.
-d /path/to/dict/file     "dict.txt"   List of dictionary strings with scores.
-s /path/to/synonym/file  "rule.txt"   List of synonym rules.
-t TRIE                   "ET"         The trie structure used in this search. Can be one of TT, ET or HT. If you choose HT, you may want to specify BUDGET.
-b BUDGET                 5000         The additional space budget (in bytes) that may be occupied by HT.

Examples:

// Top-10 lookup for "Intl. Conf"
topk "Intl. Conf"

// Top-5 lookup for "Intl. Conf"
topk "Intl. Conf" 5

// Top-5 lookup for "Intl. Conf" using TT with "dict.txt" and "rule.txt"
topk "Intl. Conf" 5 -d dict.txt -s rule.txt -t TT

// Top-5 lookup for "Intl. Conf" using HT with a budget of 5000
topk "Intl. Conf" 5 -t HT -b 5000
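The released tool is Java, but the core idea behind the expansion trie (index synonym-expanded variants of each dictionary string, then answer prefix queries by score) can be sketched in a few lines. This is a toy illustration only, with made-up dictionary entries and rules, and no tries or compression:

```python
def expand(string, score, rules):
    """Yield the string plus simple one-rule rewrites (a naive stand-in for
    the expansion-trie idea of indexing synonym-expanded variants)."""
    yield string, score
    for lhs, rhs in rules:
        if lhs in string:
            yield string.replace(lhs, rhs), score

def topk(dictionary, rules, prefix, k=10):
    # Expand every entry, keep those matching the prefix, rank by score.
    expanded = [v for s, sc in dictionary for v in expand(s, sc, rules)]
    hits = [(s, sc) for s, sc in expanded if s.startswith(prefix)]
    hits.sort(key=lambda pair: pair[1], reverse=True)
    return [s for s, _ in hits[:k]]

dictionary = [("International Conference on Data Engineering", 90),
              ("Database Management Systems", 75)]
rules = [("International", "Intl."), ("Database Management Systems", "DBMS")]

print(topk(dictionary, rules, "Intl. Conf", k=5))
# ['Intl. Conference on Data Engineering']
```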
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36363136768341064, "perplexity": 3865.1101704020934}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655934052.75/warc/CC-MAIN-20200711161442-20200711191442-00446.warc.gz"}
https://lavelle.chem.ucla.edu/forum/search.php?author_id=6654&sr=posts
## Search found 11 matches Sun Feb 21, 2016 8:58 pm Forum: *Electrophiles Topic: Standard Entropy of Activation Replies: 2 Views: 599 ### Re: Standard Entropy of Activation If you're referring to page 89 in the course reader where it explains our pseudo-equilibrium constant and our the signs for ∆H and ∆S for our ∆G equation than it is referring to the energy going from the products to the peak of the activation energy needed for the process to occur. In other words, i... Sun Feb 14, 2016 1:56 pm Forum: Kinetics vs. Thermodynamics Controlling a Reaction Topic: zero order reaction Replies: 2 Views: 574 ### Re: zero order reaction Normally, with other order reactions, when we graph the change in concentration over time, the graph shows a curved line with a negative slope that shows the concentration first decreasing quickly and then slowing down. However, with zero order reactions, because the rate of the reaction is independ... Sun Feb 14, 2016 1:25 pm Forum: General Rate Laws Topic: half life Replies: 1 Views: 365 ### Re: half life Because we know that 1/64=(1/2)^6, we know that 6 half lives occur to get to 1/64th of the initial concentration. Therefore, we multiply our half-life time by 6, 6*0.43=2.58, to get an answer of 2.58 seconds. Mon Feb 01, 2016 7:49 pm Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams Topic: Chapter 14 IN-TEXT Example 14.4: Self-Test 14.5A Replies: 2 Views: 469 ### Re: Chapter 14 IN-TEXT Example 14.4: Self-Test 14.5A Although it's not technically incorrect to have fractions when balancing an equation, unless it specifically states that you're balancing for a specific number of moles of a molecule, just put integers in your final balanced equations. Fri Jan 29, 2016 7:26 pm Forum: Balancing Redox Reactions Topic: Redox Reactions Replies: 3 Views: 575 ### Redox Reactions Does anybody know of any pneumonic device or anything else to help me remember the cathode's role vs the anode? I always seem to mix them up. Fri Jan 29, 2016 7:24 pm Forum: Balancing Redox Reactions Topic: Oxidation States Replies: 8 Views: 1030 ### Re: Oxidation States He hasn't mentioned any more yet since the first day of electrochemistry in the course reader, so hopefully not. Sun Jan 24, 2016 3:15 pm Forum: Gibbs Free Energy Concepts and Calculations Topic: Quiz 1 Preparation Replies: 1 Views: 349 ### Quiz 1 Preparation Question 9a requires that we calculate the standard reaction entropy for a reaction using standard molar entropies. The Standard molar entropies for O2,CO2 and H2O are given in the front of the packet; however, the standard molar entropy for C6H6 isn't provided. Should it be or are we supposed to ca... Sat Jan 23, 2016 1:49 pm Forum: Gibbs Free Energy Concepts and Calculations Topic: Spontaneous Reactions at certain temperatures Replies: 2 Views: 513 ### Re: Spontaneous Reactions at certain temperatures So for these types of situations, you'll want to look at the equation for delta G, but in a conceptual, not quantitative manner. ∆G=∆H-T∆S First, we know that a system is spontaneous if ∆G is negative, so we now want to look at qualities of enthalpy and entropy that will make that true. If ∆S is pos... 
Thu Jan 14, 2016 8:14 pm Forum: Reaction Enthalpies (e.g., Using Hess’s Law, Bond Enthalpies, Standard Enthalpies of Formation) Topic: Homework Problem 8.73 Replies: 4 Views: 878 ### Re: Homework Problem 8.73 It's extremely helpful to draw the Lewis Structures of these molecules before finding enthalpies because that will illustrate the specific bonds you have to break so that the product may be formed with its bonds. For the reactants, you'll see that the Lewis structure is H-C:::C-H meaning that you ha... Thu Jan 14, 2016 8:01 pm Forum: Thermodynamic Systems (Open, Closed, Isolated) Topic: System and Surroundings Replies: 2 Views: 743 ### Re: System and Surroundings That’s correct. Assuming that we’re dealing with a perfect system guarantees that the heat gained/lost by the system will be equal to the negative heat lost/gained by the surroundings, which Lavelle outlines below that “perfect system” mention by putting heat given off by the reaction equal to heat ... Wed Jan 06, 2016 8:52 pm Forum: Reaction Enthalpies (e.g., Using Hess’s Law, Bond Enthalpies, Standard Enthalpies of Formation) Topic: Understanding Bond Enthalpies Replies: 3 Views: 2141 ### Re: Understanding Bond Enthalpies Hi Leah! This idea seemed a bit weird to me at first as well; however, if we think about the example in the course reader where CH2 combines with HBr to form CH3 and CH2Br, we’re looking specifically at the energy required or released to break or create every bond changed. Each of these energies add... Go to advanced search
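As a quick check of the half-life arithmetic quoted in one of the posts above: reaching 1/64 of the initial concentration takes

$$ \frac{1}{64} = \left(\frac{1}{2}\right)^{6} \;\Rightarrow\; n = 6 \ \text{half-lives}, \qquad t = 6 \times 0.43\ \text{s} = 2.58\ \text{s}. $$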
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8398153185844421, "perplexity": 4247.946201163248}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891203.69/warc/CC-MAIN-20201026090458-20201026120458-00591.warc.gz"}
https://www.pveducation.org/biblio?page=10&s=year&o=asc&f%5Bauthor%5D=393
# Biblio

Export 1 result, filtered by author: Bazmandegan-Shamili, Alireza

2010: Sonochemical synthesis, characterization and thermal and optical analysis of CuO nanoparticles, Physica B: Condensed Matter, vol. 405, no. 15, pp. 3096-3100, 2010.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8150897026062012, "perplexity": 20167.40128777453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150264.90/warc/CC-MAIN-20210724094631-20210724124631-00094.warc.gz"}
https://www.semanticscholar.org/paper/EM-counterparts-of-recoiling-black-holes%3A-general-Zanotti-Rezzolla/5b711386bf1c547359adce5e74c76a832346203d
# EM counterparts of recoiling black holes: general relativistic simulations of non-Keplerian discs @article{Zanotti2010EMCO, title={EM counterparts of recoiling black holes: general relativistic simulations of non-Keplerian discs}, author={Olindo Zanotti and Luciano Rezzolla and Luca Del Zanna and Carlos Palenzuela}, journal={arXiv: High Energy Astrophysical Phenomena}, year={2010} } We investigate the dynamics of a circumbinary disc that responds to the loss of mass and to the recoil velocity of the black hole produced by the merger of a binary system of supermassive black holes. We perform the first two-dimensional general relativistic hydrodynamics simulations of \textit{extended} non-Keplerian discs and employ a new technique to construct a "shock detector", thus determining the precise location of the shocks produced in the accreting disc by the recoiling black hole… Expand #### Figures and Tables from this paper Common-envelope Dynamics of a Stellar-mass Black Hole: General Relativistic Simulations • Physics • 2020 With the goal of providing more accurate and realistic estimates of the secular behavior of the mass accretion and drag rates in the "common-envelope" scenario encountered when a black hole or aExpand Minidisks in Binary Black Hole Accretion • Physics • 2016 Newtonian simulations have demonstrated that accretion onto binary black holes produces accretion disks around each black hole ("minidisks"), fed by gas streams flowing through the circumbinaryExpand Merging black hole binaries in gaseous environments: simulations in general-relativistic magnetohydrodynamics Merging supermassive black hole-black hole (BHBH) binaries produced in galaxy mergers are promising sources of detectable gravitational waves. If such a merger takes place in a gaseous environment,Expand • Physics • 2016 (Abridged) We here continue our effort to model the behaviour of matter when orbiting or accreting onto a generic black hole by developing a new numerical code employing advanced techniques gearedExpand High-energy signatures of binary systems of supermassive black holes • Physics • 2016 Context. Binary systems of supermassive black holes are expected to be strong sources of long gravitational waves prior to merging. These systems are good candidates to be observed with forthcomingExpand Accretion disks around kicked black holes: Post-kick Dynamics • Physics • 2011 Numerical calculations of merging black hole binaries indicate that asymmetric emission of gravitational radiation can kick the merged black hole at up to thousands of km/s, and a number of systemsExpand Recoiling Supermassive Black Holes: a search in the Nearby Universe The coalescence of a binary black hole can be accompanied by a large gravitational recoil due to anisotropic emission of gravitational waves. A recoiling supermassive black hole (SBH) canExpand Electromagnetic Counterparts to Black Hole Mergers During the final moments of a binary black hole (BH) merger, the gravitational wave (GW) luminosity of the system is greater than the combined electromagnetic (EM) output of the entire observableExpand Relativistic simulations of long-lived reverse shocks in stratified ejecta: the origin of flares in GRB afterglows • Physics • 2018 The X-ray light curves of the early afterglow phase from gamma-ray bursts (GRBs) present a puzzling variability, including flares. 
The origin of these flares is still debated, and often associatedExpand Papaloizou-Pringle instability suppression by the magnetorotational instability in relativistic accretion discs • Physics • 2017 Geometrically thick tori with constant specific angular momentum have been widely used in the last decades to construct numerical models of accretion flows onto black holes. Such discs are prone to aExpand #### References SHOWING 1-10 OF 63 REFERENCES Hydrodynamical response of a circumbinary gas disc to black hole recoil and mass loss • Physics • 2009 Finding electromagnetic (EM) counterparts of future gravitational wave (GW) sources would bring rich scientific benefits. A promising possibility, in the case of the coalescence of a supermassiveExpand Black hole mergers: the first light • Physics • 2009 The coalescence of supermassive black hole binaries occurs via the emission of gravitational waves, that can impart a substantial recoil to the merged black hole. We consider the energy dissipationExpand Perturbed disks get shocked. Binary black hole merger effects on accretion disks The merger process of a binary black hole system can have a strong impact on a circumbinary disk. In the present work we study the effect of both central mass reduction (due to the energy lossExpand REACTION OF ACCRETION DISKS TO ABRUPT MASS LOSS DURING BINARY BLACK HOLE MERGER • Physics • 2009 The association of an electromagnetic signal with the merger of a pair of supermassive black holes would have many important implications. For example, it would provide new information about gas andExpand Three-dimensional relativistic simulations of rotating neutron-star collapse to a Kerr black hole We present a new three-dimensional fully general-relativistic hydrodynamics code using high-resolution shock-capturing techniques and a conformal traceless formulation of the Einstein equations.Expand On the stability of thick accretion disks around black holes • Physics • 2002 Discerning the likelihood of the so-called runaway instability of thick accretion disks orbiting black holes is an important issue for most models of cosmic gamma-ray bursts. To this aim weExpand The runaway instability of thick discs around black holes. I. The constant angular momentum case • Physics • 2002 We present results from a numerical study of the runaway instability of thick discs around black holes. This instability is an important issue for most models of cosmic gamma-ray bursts, where theExpand Prompt Shocks in the Gas Disk Around a Recoiling Supermassive Black Hole Binary Supermassive black hole binaries (BHBs) produced in galaxy mergers recoil at the time of their coalescence due to the emission of gravitational waves (GWs). We simulate the response of a thin,Expand Post-merger electromagnetic emissions from disks perturbed by binary black holes • Physics • 2010 We simulate the possible emission from a disk perturbed by a recoiling supermassive black hole. To this end, we study radiation transfer from the system incorporating bremsstrahlung emission from aExpand Gravitational Recoil from Spinning Binary Black Hole Mergers • Physics • 2007 The inspiraling and merger of binary black holes will likely involve black holes with not only unequal masses but also arbitrary spins. The gravitational radiation emitted by these binaries willExpand
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7390826940536499, "perplexity": 2189.4954053633187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363301.3/warc/CC-MAIN-20211206133552-20211206163552-00496.warc.gz"}
http://blog.sigfpe.com/2006_10_01_archive.html?m=0
# A Neighborhood of Infinity ## Saturday, October 21, 2006 Here are some sketches of monads. I've annotated each with its species. I'm not a great artist and I only saw each of them for a fleeting moment so I had to fill in some details from memory. But I think the sketches are good enough to be able to recognise them in the wild. Click on the 'thumbnail' for the full size image. I'm currently trying to stalk some monads from the continuation family. When I succeed I'll add some sketches of those too. ## Tuesday, October 10, 2006 ### Games, Strategies and the Self-Composition of the List Monad. Haskell is so strict about type safety that randomly generated snippets of code that successfully typecheck are likely to do something useful, even if you've no idea what that useful thing is. This post is about what I found when I tried playing with the monad1 ListT [] even though I had no clue what purpose such a monad might serve. I've talked about using monad transformers a couple of times. One time I looked at compositions of various types of side-effect monad, and more recently I looked at compositions of the List monad with side-effects monads. So it was inevitable that I got onto composing the List monad with itself to get ListT []. This is literate Haskell, modulo the unicode characters. > import Data.List> import Control.Monad.Writer> import Control.Monad.List> test1 = do> x <- [1,2,3]> return (2*x)> go1 = test1 can be interpreted as a program that chooses x to be each of the values 1, 2 and 3 in turn, applies f to each one and collects up the result to give [2,4,6]. The List monad will happily make choices within choices. So the expression > test2 = do> x <- [1,2]> y <- [3,4]> return (x,y)> go2 = test2 considers, for each possible choice of x, each possible choice of y. The result is [(1,3),(1,4),(2,3),(2,4)]. More generally, for fixed sets a,b,...c the expression do x <- a y <- b … z <- c return (a,b,…,c) returns the cartesian product a×b×…×c. In the fully general case, a,b,…c could all depend on choices that were made earlierbut right now I'm talking about the case when a,b,…,c are all 'constant'. Here's a slightly different way of looking at this. Suppose a person called Player (P for short) is playing a solitaire game. At the first turn, P chooses a move from the set of possible first moves, then chooses a possible move from the set of second moves and so on. We can see that if we write the code do move1 <- possible_moves_1 move2 <- possible_moves_2 … last_move <- possible_last_moves return (move1,move2,…,last_move) the result is a list, each of whose elements is the sequence of possible plays in the game. If the possible_*; are all 'constant' then its just the cartesian product of the plays at each turn. But it's straightforward to adapt this code to enumerate all possible plays in a more general solitaire game - and that's exactly how the List monad is sometimes used to solve such games. So if List is about making choices, composing List with itself might seem to be about making two types of choice. And the most familiar situation where you need to iterate over two possible types of choices comes when you consider two player games. So I'll tell you a possible answer now: ListT [] is the game analysis monad! Let's look at some examples of code in the ListT [] monad. As I pointed out earlier, we use lift a when we want a to be a list in the inner monad and mlist a when we want a to be a list in the outer monad. So in ListT [] we can expect to use both of these with lists. 
Consider > mlist :: MonadPlus m => [a] -> m a> mlist = msum . map return> test3 = do> a <- lift [1,2]> b <- mlist [3,4]> return (a,b)> go3 = runListT test3 The value is [[(1,3),(1,4)],[(2,3),(2,4)]]. There's a nice interpretation of this. Introduce a new player called Strategist (S for short). Think of lift as meaning a choice for S and mlist as a choice for P. We can read the above as: for each first play a that S could make, go through all the first plays b that P could make. What's different from the solitaire example is how the plays have been grouped in the resulting list. For each decision S could have made, all of P's possible plays have been listed together in a sublist. But now consider this expression: > test4 = do> b <- mlist [3,4]> a <- lift [1,2]> return (b,a)> go4 = runListT test4 We get: [[(3,1),(4,1)],[(3,1),(4,2)],[(3,2),(4,1)],[(3,2),(4,2)]]. Maybe you expect to see the same result as before but reordered. Instead we have a longer list. Our interpretation above no longer seems to hold. But actually, we can salvage it? Go back to thinking about solitaire games. We can read the code for test2 as follows: First choose a value for x, ie. x=1. Now choose a value for y, ie. y=3. Put (1,3) in the list. Now backtrack to the last point of decision and choose again, this time y=4. Put (1,4) in the list. Now backtrack again. There are no more options for y so we backtrack all the way back to a again and set x=1, and now start again with y=3, and so on. Now we can reconsider test4 in a two player game. Think about the choices from the point of view of working through all of P's options while fixing one particular set of choices for S. (It took me a while to get this concept so I'll try to explain this a few different ways.) So P chooses b=3 and then S makes some choice, say a=1. Now we backtrack to P's next option, b=4. But on backtracking to P's last choice we backtracked through S's choice and on running forward again, we must make some choice for S, say a=1 again. There were 2 ways P could have played, so we end up with two distinct sequences of plays: (3,1) and (4,1) after P has worked through all of his options. We have a list with two possibilities: [(3,1),(4,1)]. But S could have played differently. In fact, because we play S's move twice for each sequence of P's moves there are three other ways this sequence could have come out, depending on S's strategy: [(3,1),(4,2)], [(3,2),(4,1)] or [(3,2),(4,2)]. And that is exactly what test4 gave: for each strategy that S could have chosen we get a list of all the ways P could have played against it. We get a list of 4 lists, each of which has two elements. Each of those elements is a sequence of plays. I'll try to make precise what I mean by 'strategy'. By strategy we mean a way of choosing a move as a function of the moves your opponent has made. That's what we mean by a program t play a game: we give it a set of inputs and it then responds with a move. Its move at the nth turn is a function of everything you did for the previous n-1 turns. So consider the example above. After P has chosen from [3,4], S must choose from [1,2]. So S's strategy can de described by a function from [3,4] to [1,2]. There are 4 such functions. Once we've fixed S's strategy, there are 2 possible ways S can play against it. Again, we have 4 strategies, and 2 plays for each strategy. And here's another way of looking at this. We'll go back to simple games where the set of options available at each stage are constant and independent of the history of the game. 
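As a quick mechanical check of that count (four strategies for S, and for each of them the two plays P can make), here is a small brute-force sketch, in Python rather than Haskell, purely to make the grouping in go4 explicit:

```python
from itertools import product

p_moves = [3, 4]   # P picks first, from [3,4]
s_moves = [1, 2]   # S then picks from [1,2]

games = []
# A strategy for S is a function from P's move to S's reply,
# i.e. one chosen reply for each possible move by P.
for replies in product(s_moves, repeat=len(p_moves)):
    strategy = dict(zip(p_moves, replies))
    games.append([(b, strategy[b]) for b in p_moves])

print(games)
# [[(3, 1), (4, 1)], [(3, 1), (4, 2)], [(3, 2), (4, 1)], [(3, 2), (4, 2)]]
```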
Let <A,B> mean a game where A is the set of S's strategies and B is the set of ways P can play against those strategies. (The set B doesn't depend on S's choice from A because we're now limited to 'simple' games.) Now introduce a binary operator * such that <A,B>*<C,D> is the game where the moves start off being those in <A,B>, but when that's finished we move on to <C,D>. In <A,B>*<C,D>, P's options are simply B×D. So we know that in some sense <A,B>*<C,D>=<X,B×D> for some X. Now S's first move is from A, so that bit's easy. S's second move, from C, depends on P's move from B. So S's second move is described by a function from B to C, i.e. an element of C^B. So <A,B>*<C,D>≅<A×C^B,B×D>. (I use ≅ because I mean 'up to isomorphism'. C^B×A also describes S's strategies equally well.)

* is a curious binary operation. It might not look it, but it's associative (up to isomorphism). And it has a curious similarity to the semidirect product of groups.

Anyway, we can write test4 using this notation. It's computing <[1],[3,4]>*<[1,2],[1]> (up to isomorphism). [1] is just a convenient way of saying "no choice" and it gives us a way to allow P to move first by making S's first move a dummy one. Multiplying gives <[1,2]^[3,4],[3,4]>. [1,2]^[3,4] has 4 elements. So again we have 4 strategies with two ways to play against each of those strategies.

It's interesting to look at <A,B>*<C,D>*<E,F>. This is <A×C^B,B×D>*<E,F>, which is <A×C^B×E^(B×D),B×D×F>. This has a straightforward explanation: S gets to choose from A directly, to choose from C depending on how P plays from B, and to choose from E depending on how P plays from both B and D.

And yet another way of looking at things. Suppose S and P are going to play in a chess tournament with S playing first. Unfortunately, before setting out on the journey to the tournament, S realises he's going to be late and will be unable to communicate during the journey. What should S do? A smart move would be to email his first move to the tournament organisers before leaving on the journey. That would buy him time while P thinks of a response. Even better, S could additionally send a list of second moves, one for each move that P might make in his absence. Going further, S could send a list of third moves, one for each combination of first and second moves that P might make in his absence. These emails, containing these lists, are what I'm calling a strategy. When you enumerate a bunch of options in the ListT [] monad you end up enumerating over all strategies for S and all responses for P.

It appears there's a kind of asymmetry. Why are we considering strategies for S, but plays for P? This is partly explained by the above paragraph. In ListT [] there is an implicit ordering where S's moves are considered to come before P's. So if one of S's choices comes after one of P's, it needs to be pulled before P's, and that can only happen if it's replaced with a strategy.

There's also another interesting aspect to this. When considering how to win at a game, you don't need to test your strategy against every possible strategy your opponent might have. You only need to test it against every possible way your opponent could play against it, and they only need to find one play to show your strategy isn't foolproof. So ListT [] describes exactly what you need to figure out a winning strategy for a game.

Anyway, that's enough theory. Here's an application. The following code computes a winning strategy in the game of Nim. (If you don't know Nim, then you're in for a treat if you read up on it here.
The theory is quite beautiful, but I deliberately don't use it here.) It uses brute force to find a winning strategy for S and additionally outputs a proof that this is a winning strategy, by listing all possible ways the game could play out. Note that I'm using a WriterT transformer over ListT [] so I can log the moves made in each game (the log here is a list of game positions). I've also added another game, a variant of Kayles that I call Kayles'. In Kayles' there are n skittles in a row. On your turn you knock over a skittle and whichever of its immediate neighbours are still standing. The winner is the person who knocks over the last skittle. (The code describes the rules better than English text, and you can adapt the code easily to play on any graph.) Evaluate nim or kayles' to get a display of a winning strategy and a proof that it wins, i.e. a list of all possible games against that strategy showing that they all lead to P losing.

> strategies moves start = do
>   a <- lift $ lift $ moves start   -- S picks a move (enumerated at the strategy level)
>   tell $ [a]                       -- log the new position
>   let replies = moves a
>   if replies == [] then return () else do
>     b <- lift $ mlist $ replies    -- P picks a reply (all replies are listed)
>     tell $ [b]
>     strategies moves b

> nim = mapM print $
>       runListT $ execWriterT $ strategies moves start
>   where
>     start = [2,3,5]
>     moves [] = []
>     moves (a:as) = [a':as | a' <- [0..a-1]]
>                    ++ [a:as' | as' <- moves as]

> kayles' = print $ head $
>       runListT $ execWriterT $ strategies (kmoves (nbhd verts)) verts
>   where
>     kmoves nbhd v = [v \\ nbhd i | i <- v]
>     verts = [1..13]
>     nbhd verts i = intersect verts [i-1,i,i+1]

For example, nim starts [[[2,3,1],…]] because a move to piles of size 2, 3 and 1 is a win in Nim. (2⊕3⊕1=0, for those who know the theory.) kayles' starts [[[1,2,3,4,5,9,10,11,12,13],…]], which means a winning move is to knock down skittles 6, 7 and 8 (you may see why symmetry makes this an obvious good opening move).

DISCLAIMER: I'm not claiming this is a good algorithm for solving games, just that it illustrates the meaning of ListT [].

Finally, a question: are all monad transformers a form of semidirect product?

Footnotes:

[1] I haven't worked out the details, but ListT [] probably isn't actually a monad. It's close enough, though. It's probably associative up to ordering of the list, and as I'm using List as a poor man's Set monad [2], I don't care about ordering. An alternative ListT is provided here. I'm not sure that solves the problem.

[2] I'm pretty sure Set can't be implemented in Haskell as a monad. But that's another story.

## Wednesday, October 04, 2006

### Negative Databases

Protecting Data Privacy through Hard-to-Reverse Negative Databases is an entertaining paper. Here's the problem it tries to solve: you want to store personal details, but you don't want people to be able to mine the database. For example, you might want to store brief personal details with a social security number so you can verify that someone is a member of some organisation, but for security reasons you don't want a hacker to be able to get a list of members even if they steal the entire database. That sounds like an impossible set of requirements. But the trick is this: instead of storing the personal details, you store a list of all possible bitstrings representing personal details that members don't have. Insane, right? If the personal details are just 256 bits long, say, then to store details for 100 people, say, the database needs 2^256 − 100 entries. That's vast. But the database of non-members is highly compressible. Let me lift an example directly from the paper. Suppose the personal information is three bits long and we wish to store the entries {000,100,101}.
Then the non-entries are {001,010,011,110,111}. The non-entries can now be compressed using a wildcard scheme {001,*1*}. The *1* represents all of the entries fitting that pattern from 010 to 111. This turns out to be quite a compact representation. It's beginning to look plausible. And here's the neat bit: SAT can be translated into the problem of finding valid database entries from the wildcard representation. So extracting entries is provably NP-complete, despite the fact that we can check entries in a reasonable amount of time (for suitable definitions of 'reasonable'). Anyway, I still have a whole slew of objections, and the actual compression scheme proposed has holes in it that can result in false positives. But what initially sounded implausible is beginning to look workable.
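To make the checking direction concrete, here is a tiny Haskell sketch of the wildcard idea (my own illustration, not code from the paper; the pattern syntax and the names matches, negativeDB and isMember are invented). It stores the negative database {001, *1*} from the example above and declares a record a member exactly when it matches none of the negative patterns.

> -- A pattern is a string over '0', '1' and the wildcard '*'.
> matches :: String -> String -> Bool
> matches p s = length p == length s && and (zipWith ok p s)
>   where ok '*' _ = True
>         ok c   x = c == x

> -- The compressed negative database for the positive set {000,100,101}.
> negativeDB :: [String]
> negativeDB = ["001", "*1*"]

> -- A record is a member iff it matches no entry of the negative database.
> isMember :: String -> Bool
> isMember s = not (any (`matches` s) negativeDB)

> -- map isMember ["000","100","101","001","010","111"]
> --   == [True,True,True,False,False,False]

Checking a record this way stays cheap; the hardness result in the paper is about going the other way, i.e. enumerating strings that match none of the patterns, which is where the reduction from SAT comes in.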
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5406700372695923, "perplexity": 1064.4987570956646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157262.85/warc/CC-MAIN-20160205193917-00065-ip-10-236-182-209.ec2.internal.warc.gz"}
https://brilliant.org/problems/olympiad-for-grade-10/
Algebra Level 4

Let $a,b,c>0$ with $abc=1$, and let

$P= \frac{a^2}{\sqrt{2+2ab}}+\frac{b^2}{\sqrt{2+2bc}}+\frac{c^2}{\sqrt{2+2ca}}.$

Let the minimum value of $P$ be $M$. If $M=\frac{m}{n}$ with $\gcd(m,n)=1$, find the value of $m+n$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9488012790679932, "perplexity": 1019.9743373845392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594954.59/warc/CC-MAIN-20180723051723-20180723071723-00575.warc.gz"}
https://www.experts-exchange.com/questions/21837139/CDO-help.html
Solved # CDO help Posted on 2006-05-03 811 Views I'm using Collaborative Data Objects to send auto-generated emails.  I have used this many times before with no problem. I'm testing this on my development server and recently made some changes by setting up DNS.  When I run the CDO script, I don't get any errors, but the email isn't coming through to me. I don't think it is anything in my code, but I'm not certain.  I've looked in the bad mail folder under INETPUB and don't see any failed messages.  My router is setup to port forward for port 25 SMTP...it's the same configuration I've always had.  So, I can only assume that my recent DNS setup somehow messed up my SMTP mail server.  Again, this is only a testing/development box and I use the SMTP to just test the mail scripts to make sure they are processing. In case there is any error in my code, please see below: mailbody = "<font size=3 face = Verdana>" & "<strong>" & "*** This message has been automatically generated -- DO NOT REPLY. ***" & "</strong>" & "</font>" & vbCRLF & vbCRLF mailbody = mailbody & "<font size=2 face=Verdana>" & "The following technical support request was received via the CompassLearning web site. The inquirer's contact information is provided below." & "</font>" & vbCRLF & vbCRLF mailbody = mailbody & "First Name:  " & fname & vbCRLF & vbCRLF mailbody = mailbody & "Last Name:  " & lname & vbCRLF & vbCRLF mailbody = mailbody & "School Name:  " & schoolname & vbCRLF & vbCRLF mailbody = mailbody & "Address:  " & address & vbCRLF & vbCRLF mailbody = mailbody & "City:  " & city & vbCRLF & vbCRLF mailbody = mailbody & "State:  " & state & vbCRLF & vbCRLF mailbody = mailbody & "Zip:  " & zip & vbCRLF & vbCRLF mailbody = mailbody & "Phone:  " & phone & vbCRLF & vbCRLF mailbody = mailbody & "Email:  " & email & vbCRLF & vbCRLF mailbody = mailbody & "Product Type:  " & producttype & vbCRLF & vbCRLF mailbody = mailbody & "Comment:  " & comment & vbCRLF & vbCRLF Set objCDOSYSMail = Server.CreateObject("CDO.Message") Set objCDOSYSCon = Server.CreateObject ("CDO.Configuration") ' Outgoing SMTP server objCDOSYSCon.Fields("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "192.168.1.20" objCDOSYSCon.Fields("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25 objCDOSYSCon.Fields("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2 objCDOSYSCon.Fields("http://schemas.microsoft.com/cdo/configuration/smtpconnectiontimeout") = 60 objCDOSYSCon.Fields.Update ' Update the CDOSYS Configuration Set objCDOSYSMail.Configuration = objCDOSYSCon objCDOSYSMail.From = "[email protected]" objCDOSYSMail.To = "[email protected]" objCDOSYSMail.Subject = "Technical Support Request" objCDOSYSMail.HTMLBody = mailbody objCDOSYSMail.Send 'Close the server mail object Set objCDOSYSMail = Nothing Set objCDOSYSCon = Nothing Any suggestions on what to check to see why the mail isn't coming through??  I know enough about mail servers to be dangerous and that's about it.  I'm not sure how to troubleshoot this especially since I'm not getting any errors.  The page looks like it is processing fine....just no email. Thanks, -D- 0 Question by:-Dman100- LVL 30 Expert Comment What server os? running Exchange? also as the 1st & 2nd lines of code, make sure you have <% @ LANGUAGE = VBSCRIPT%> <% Option Explicit %> AND In IE , TOOLS-INTERNET OPTIONS - ADVANCED tab, locate and uncheck  SHOW FRIENDLY HTTP ERROR MESSAGES Run the code again.. any errors reported? 
0 LVL 12 Expert Comment Do you have On Error Resume Next somewhere in your code that is preventing you from seeing an error? Could the fact that the to and from address is the same an issue? Does the badmail folder still get populated on the web server when you are using a remote mail server like this?  I know it does when you don't use the CDO.Configuration part of the code, but I have always wondered where it went when you were actually sending the mail through another server. 0 Author Comment Hey Irwin, It's Dwayne...I'm back again :) The server OS is Windows 2000 Server.  I'm not running exchange.  I just have IIS smtp mail setup so I can test my mail scripts. I'm wondering if when you helped me setup DNS, if it effected the SMTP mail.  My first thought was if the server-name changed from: dwayne-server to dwayne-server.com Would that cause the problem for the mail not to be delievered? I'm really in the dark when it comes to smtp and mail server setup?  I really do not know what to look for? -D- 0 LVL 30 Expert Comment Hey Dwayne, When I read the questions I fire away without looking at who wrote it.. but not to say that you're not another face in the crowd.. In fact, you hold the record of all the several thousand questions that I've answered, to have the most lengthiest solution.. being #1 is not too bad huh? Anyway, I'm off to the office for work. post back a shout out so that my office computer will get the link to this question.. thanks, Irwin 0 LVL 22 Expert Comment if you can sign onto your test server console, run through this test to see if your server has communication to your relay.  use the address in your script as the target. http://amset.info/exchange/telnet-test.asp 0 Author Comment Hey Irwin, yeah, working thru the DNS solution was pretty lengthy...I'll take being #1 on that :)  Glad you stuck it out with me. I figure I'm missing something really simple on this one because I've had it working with no problems in the past. 0 LVL 22 Expert Comment maybe you have bigger dns issues than you though. :P try with the www: http://www.amset.info/exchange/telnet-test.asp 0 LVL 30 Expert Comment "objCDOSYSCon.Fields("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "192.168.1.20" user your external WAN IP to test outgoing...  but send only 1 (one) record, not your entire database 0 Author Comment Hi  Irwin, Just got back and read your post.  I am a little confused.  Could you explain what you mean my external WAN IP?  I thought I would use my server IP to relay the email? How do I send 1 record? 0 LVL 30 Expert Comment 192.168.1.20 is your internal server IP.. for testing purposes, what is your EXTERNAL to the Internet IP "How do I send 1 record?" check that, I reviewed your code and you're not using a database 0 LVL 6 Assisted Solution Hi: you can look for a .eml file to see what cause your email don't sent, find out the message that you sent,open it with notepad ,you will see the error message. or when you send the message with asp ,please check the event log ,the event log will tell you what happen with smtp. thanks 0 Author Comment I have a dynamic IP that is generated from my ISP, is that the external IP you need? I checked the event log and it showed several warnings when I had tried to run the cdo script.  This is the error message: Message delivery to the remote domain 'centurytel.net' failed.  The error message is 'An smtp protocol error occurred.  The smtp verb which caused the error is 'MAIL'.  
The response from the remote server is '533 5.3.0 207.119.5.179 rejected. Does that help? 0 LVL 30 Expert Comment "I have a dynamic IP that is generated from my ISP, is that the external IP you need?" yes.. apply that... what happens? 0 Author Comment It doesn't allow me to change the IP... I can either choose "All Unassigned" or use 192.168.1.20 0 LVL 30 Assisted Solution objCDOSYSCon.Fields("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "192.168.1.20" leave the website settings alone.. modify the above line with your dynamic DNS external IP 0 Author Comment Okay... "The transport failed to connect to the server" 0 LVL 12 Assisted Solution Maybe I don't completely understand CDO, but this is my basic understanding.  There are two different ways to send email with CDO. <% Sub SendEmail(strTo, strFrom, strSubject, strMessage) Set iMsg = Server.CreateObject("CDO.Message") Set iConf = Server.CreateObject("CDO.Configuration") Set Flds = iConf.Fields With Flds .Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2 .Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "MAILSERVER NAME OR IP" .Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25 .Item("http://schemas.microsoft.com/cdo/configuration/smtpconnectiontimeout") = 30 .Update End With With iMsg Set .Configuration = iConf .To = strTo .From = strFrom .Subject  = strSubject .HTMLBody = strMessage .Send End With End Sub %> This would be used if you need to send email from a web server that DID NOT have SMTP set up. <% Sub SendMessage(strTo, strFrom, strSubject, strBody) Set oCDO = Server.CreateObject("CDO.Message") With oCDO .To       = strTo .From     = strFrom .Subject  = strSubject .HtmlBody = strBody .Send End With Set oCDO = Nothing End Sub %> This could be used on a web server that does have SMTP set up. If I am understanding these two options correctly, then couldn't you just use the second option based on the fact that you have SMTP setup on that server? 0 LVL 22 Expert Comment >>I checked the event log and it showed several warnings when I had tried to run the cdo script.  This is the error message: Message delivery to the remote domain 'centurytel.net' failed.  The error message is 'An smtp protocol error occurred.  The smtp verb which caused the error is 'MAIL'.  The response from the remote server is '533 5.3.0 207.119.5.179 rejected.<< i just saw this from before.  which server is this error from?  web server or email server? 0 Author Comment The error message was what was showing up in the event log on my development server.  I have a server that I use for development and testing. The warning in the event log shows the source as: SMTPSVC Is this what you are aking? 0 LVL 22 Expert Comment so this is the development web server?  if so, do you have the same log when you use an internal ip address?  the connection is probably refused because of the firewall, but it shouldnt be a problem with the internal ip. were you able to open the page that i gave you above to do the test?  if not, let me know and ill copy/paste it here for you. 0 Author Comment I wasn't able to openthe page that you sent as a URL. I'm not sure what you meant by "do you have the same log when you use an internal ip address"? Could you explain? Thanks for your help.  I appreciate it. -D- 0 LVL 22 Expert Comment as i understand it, you are receiving this error message on your development web server.  
irwinpks had asked you to change the ip address that you script was pointing to, and you posted this error message.  i was wondering if you had received this error message before making this change or only after.  also i was curious if you would change the ip back to the internal address if it would give you the same error. here is a paste from that site, hopefully it makes sense: A Telnet test is used for doing diagnosis of your SMTP server. It can confirm whether the Exchange server is processing email correctly and is really the manual way of entering the commands that SMTP servers do when communicating. It involves establishing a Telnet session from a computer that is not located on the local network to the external (public) IP address of the Exchange server. You need to carry out the test from a machine at home, or from another office. Doing the test from a machine on your own network will produce useless results. Note: This is NOT a test for open relay. For open relay testing, please see our spam cleanup page. Start a command prompt. Either click start, run and type CMD or Choose Command Prompt from Start, Programs, Accessories, Command Prompt Type "telnet" (minus quotes) and press enter. At the Telnet prompt, type set localecho (minus quotes) and press enter. This lets you see what is going on. Still in the telnet prompt, enter the following command and then press enter open external-ip 25 open 111.222.333.444 25 You should get a response back similar to the following: 220 mail.server.domain Microsoft ESMTP MAIL Service, Version: 6.0.2790.0 Ready at Type the following command in to the telnet windows: ehlo testdomain.com and press enter (note "testdomain.com" can be anything that isn't a domain that the Exchange server is responsible for. After pressing OK you should get a response back 250 OK Type the following command in to the telnet window: and press enter (again where address@yourdomain is an email address that is not on the Exchange server. Note the lack of space between from and the first part of the address). After pressing OK you should get a response back: If you get "Access Denied" or another error message at this point then the remote server has an issue with your server connecting to them. Type the following command in to the telnet window: and then press enter (where [email protected] is an address that is on your Exchange server Once again note the lack of space between to and the first part of the e-mail address). If you get accessed denied or another message at this point then the mailbox has a problem - full, non-existent etc. After pressing ok you should get the response back: Now type DATA and press enter. You should get a response back similar to: 354 Send data. End with CRLF.CRLF Now you can type your message. Enter the following in to the Window: Subject: test message Press enter TWICE. Next type in some body message, something like: This is a test message sent via telnet And press enter. Enter a full stop (or period) and press enter. You should get back the response: 250 OK Finally close the session by typing Quit and press enter. You should get which will return the response: 221 closing connection You should now have an email with the subject and the body as entered. 0 Author Comment "i was wondering if you had received this error message before making this change or only after." I received the error message before changing the IP, so I was using the internal IP of my server. 
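For anyone who would rather script the manual telnet walkthrough above than type it, here is a rough sketch of the same conversation in Haskell using the network package. It is an illustration added here, not part of the original thread: the host name and the two addresses are placeholders, and a real test would use your own server and mailbox and would check the reply codes rather than just printing them.

import Control.Monad (unless)
import Network.Socket
import System.IO

-- Walk through the same SMTP conversation as the manual telnet test:
-- banner, EHLO, MAIL FROM, RCPT TO, QUIT. Host and addresses are placeholders.
smtpProbe :: HostName -> IO ()
smtpProbe host = do
  addr:_ <- getAddrInfo (Just defaultHints { addrSocketType = Stream })
                        (Just host) (Just "25")
  sock <- socket (addrFamily addr) (addrSocketType addr) (addrProtocol addr)
  connect sock (addrAddress addr)
  h <- socketToHandle sock ReadWriteMode
  hSetBuffering h LineBuffering
  let say cmd = hPutStr h (cmd ++ "\r\n") >> hFlush h
      -- Replies can span several lines ("250-..."); the final line of a
      -- reply has a space, not a dash, after the three-digit code.
      hear = do
        l <- hGetLine h
        putStrLn (filter (/= '\r') l)
        unless (length l < 4 || l !! 3 == ' ') hear
  hear                                           -- 220 banner
  say "EHLO test.example"              >> hear   -- expect 250
  say "MAIL FROM:<probe@test.example>" >> hear   -- 250, or a rejection such as 553
  say "RCPT TO:<someone@example.net>"  >> hear   -- 250 / 550 etc.
  say "QUIT"                           >> hear   -- 221
  hClose h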
I tried the telnet session per the article, but the following paragraph says the telnet session for my home network is useless:  "It involves establishing a Telnet session from a computer that is not located on the local network to the external (public) IP address of the Exchange server. You need to carry out the test from a machine at home, or from another office. Doing the test from a machine on your own network will produce useless results." I went ahead and did the telnet session as described and got the following message at the very end: 250 2.6.0 <DWAYNE-SERVERTchoNo000000001@dwayne-server>Queued mail for delivery Nothing is in the queued folder? -D- 0 LVL 13 Assisted Solution ------- Dim objCDOConf,objCDOSYS Set objCDOSYS = Server.CreateObject("CDO.Message") Set objCDOConf = Server.CreateObject ("CDO.Configuration") With objCDOConf .Fields("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2 .Fields("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "mail.mydomain.com" .Fields("http://schemas.microsoft.com/cdo/configuration/smtpauthenticate") = 1 .Fields.Update End With Set objCDOSYS.Configuration = objCDOConf With objCDOSYS .From = "[email protected]" .To = "[email protected]" .Subject = "Subject comes here" '.HTMLBody = strMailBody .TextBody = "Test message" .Send End with Set objCDOSYS = Nothing -------- 0 LVL 22 Expert Comment >>I went ahead and did the telnet session as described and got the following message at the very end: 250 2.6.0 <DWAYNE-SERVERTchoNo000000001@dwayne-server>Queued mail for delivery Nothing is in the queued folder?<< great.  im glad you continued anyways after reading that paragraph.  the page that i got this from is a site that is troubleshooting for exchange server.  in this application the test isnt useless.  250 codes mean "good".  this means that your web server is communicating with your exchange server.  the mail should be queued on your exchange if its not getting any further now.  do you see anything in there? 0 Author Comment Okay, I checked the following folders under INETPUB >> MAILROOT BadMail, Drop, Mailbox, Pickup, Queue, Route, SortTemp. The only folder that contained any files was in the BadMail folder.  This folder contained several files that where logs of emails that I had sent using my mail script using CDO and it also included the message I had sent thru telnet. Here is the text from the message sent via telnet: From: postmaster@dwayne-server To: [email protected] Date: Thu, 4 May 2006 19:13:23 -0500 MIME-Version: 1.0 Content-Type: multipart/report; report-type=delivery-status; Message-ID: <8wSA1XvpF00000001@dwayne-server> This is a MIME-formatted message. Portions of this message may be unreadable without a MIME-capable mail program. Content-Type: text/plain; charset=unicode-1-1-utf-7 This is an automatically generated Delivery Status Notification. Delivery to the following recipients failed. 
[email protected] Content-Type: message/delivery-status Reporting-MTA: dns;dwayne-server Arrival-Date: Thu, 4 May 2006 19:12:26 -0500 Final-Recipient: rfc822;[email protected] Action: failed Status: 5.3.0 Diagnostic-Code: smtp;553 5.3.0 207.119.25.33 rejected; see http://www.njabl.org/cgi-bin/lookup.cgi?query=207.119.25.33 Content-Type: message/rfc822 Received: from yahoo.com ([192.168.1.100]) by dwayne-server with Microsoft SMTPSVC(5.0.2195.6713); Thu, 4 May 2006 19:12:26 -0500 subject: test messatgge From: [email protected] Bcc: Return-Path: [email protected] Message-ID: <DWAYNE-SERVERTchoNo00000001@dwayne-server> X-OriginalArrivalTime: 05 May 2006 00:12:57.0033 (UTC) FILETIME=[A9481390:01C66FD8] Date: 4 May 2006 19:12:57 -0500 this is a test message sent via telnet Here is the text from another message that I had sent using my mail script via CDO that failed: From: postmaster@dwayne-server To: [email protected] Date: Thu, 4 May 2006 10:16:06 -0500 MIME-Version: 1.0 Content-Type: multipart/report; report-type=delivery-status; Message-ID: <x4A2DreBB00000005@dwayne-server> This is a MIME-formatted message. Portions of this message may be unreadable without a MIME-capable mail program. Content-Type: text/plain; charset=unicode-1-1-utf-7 This is an automatically generated Delivery Status Notification. Delivery to the following recipients failed. [email protected] Content-Type: message/delivery-status Reporting-MTA: dns;dwayne-server Arrival-Date: Thu, 4 May 2006 10:16:06 -0500 Final-Recipient: rfc822;[email protected] Action: failed Status: 5.3.0 Diagnostic-Code: smtp;553 5.3.0 207.119.5.179 rejected; see http://www.njabl.org/cgi-bin/lookup.cgi?query=207.119.5.179 Content-Type: message/rfc822 Received: from dwayneserver ([192.168.1.20]) by dwayne-server with Microsoft SMTPSVC(5.0.2195.6713); Thu, 4 May 2006 10:16:06 -0500 From: <[email protected]> To: <[email protected]> Subject: CompassLearning Technical Support Request Date: Thu, 4 May 2006 10:16:06 -0500 Message-ID: <002401c66f8d$aa58b550$1401a8c0@dwayneserver> MIME-Version: 1.0 Content-Type: multipart/alternative; X-Mailer: Microsoft CDO for Windows 2000 Content-Class: urn:content-classes:message X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1409 Return-Path: [email protected] X-OriginalArrivalTime: 04 May 2006 15:16:06.0635 (UTC) FILETIME=[AA6363B0:01C66F8D] This is a multi-part message in MIME format. Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit *** This message has been automatically generated -- DO NOT REPLY. *** The following technical support request was received via the CompassLearning web site. The inquirer's contact information is provided below. First Name: Elizabeth Last Name: Pennington School Name: Virginia Beach High School Address: 10098 Virginia Beach Ave. City: Virginia Beach State: 46 Zip: 77079 Phone: 5403692584 Email: [email protected] Product Type: 2 Comment: this is a test Content-Type: text/html; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable <font size=3D3 face =3D Verdana><strong>*** This message has been = automatically generated -- DO NOT REPLY. ***</strong></font> <font size=3D2 face=3DVerdana>The following technical support request = was received via the CompassLearning web site. 
The inquirer's contact = information is provided below.</font> First Name:  Elizabeth Last Name:  Pennington School Name:  Virginia Beach High School City:  Virginia Beach State:  46 Zip:  77079 Phone:  5403692584 Email:  [email protected] Product Type:  2 Comment:  this is a test Is there a configuration problem with my smtp mail server?  Is this why the messages are failing? Regards, -D- 0 LVL 22 Expert Comment try this same telnet test, but do it from your mail server to the centurytel.net mail server (209.142.136.228). 0 Author Comment Just to make sure I understand correctly, when I go thru the telnet session, I should use: mail from:(use my server ip here) and rcpt to:209.142.136.228 Is that correct? 0 LVL 22 Expert Comment 0 Author Comment Okay, I apologize, I'm a little confused.  When you say to do a telnet test and do it from my mail server to the centurytel mailserver (209.142.136.228), could you explain further.  I'm not sure what you mean by that.  My apologies.  I'm smtp impaired :) 0 LVL 22 Expert Comment when you are sending an email to that [email protected], your mail server does the same thing as you are doing here to contact the mail server for centurytel.net.  what you are doing is manually making this connection. 0 Author Comment Okay, I hope I did this right? I opened a telnet session and used: open my-server-ip 25   "Where I placed my actual server IP" then ehlo 209.142.136.228 I completed the rest of the telent adding a subject and body text. I checked the badmail folder and it was in there. 0 LVL 22 Expert Comment ah ok.  actually you want to get onto the console of that server.  open the cmd prompt on that server.  then you want to open the telnet and open to the 209.142.136.228 address.  your server ip shouldnt go in there anywhere. 0 Author Comment Okay, I opened my command console on my server and started a telnet session.  I typed in: open 209.142.136.228 25 mail from:[email protected] I got a rejected error message. Am I correct in assuming the remote server isn't accepting my request?  If so, how do I go about to correct that? 0 LVL 22 Expert Comment you did an EHLO first right?  if so, then yes the remote server is rejecting your email.  what is the error that its giving you?  (you can right click -> mark to highlight text, then hit [enter] to copy to clipboard) 0 Author Comment yes, I after I opened the connection to the remote server, I typed in: ehlo yahoo.com then I typed in: mail from:[email protected] that's when I received the error message: 0 LVL 22 Expert Comment did you try visiting the link?  please post what that page tells you. 0 Author Comment This is the information from the link: please note: I didn't think it was a good idea to post the ip for security since that is the ip I'm using. Here is the result of your query: -------------------------------------------------------------------------------- This IP has not been tested. 0 LVL 22 Expert Comment >>please note: I didn't think it was a good idea to post the ip for security since that is the ip I'm using. not a problem at all.  somehow your ip address has gotten listed on a black list.  try following that link to request that your ip be removed.  it may take a little while though to get it freed up.  i would also check that second link to see that you arent on any other black lists. 0 Author Comment Okay, those ip's are automatically assigned to me from my ISP.  So, anytime I reboot my PC, I'll have a new IP. So, basically I'll constantly be blacklisted. 
What I don't understand is that I have been using the SMTP mail server in IIS to test my CDO mail scripts for several years. Why all of a sudden would a block of IPs from my ISP be blacklisted? I don't understand why it would be working and then stop?

LVL 22 Accepted Solution

The blacklists happen because of spammers. It's very possible that someone who shares your ISP spammed a bunch and got blacklisted. This is one problem with having a dynamic IP address.

Author Comment

Well, since I can't get this to work with my dynamic IP from my ISP, is it possible to do the following? I have a home LAN setup, three PCs running XP, Win 2000 Pro and 2000 Server, all behind my router. I get my dynamic IP from my ISP, but my server is set to a static IP just within my home network. I have IIS and DNS running on my server. Can I set up my SMTP mail server within IIS to work within my home network, to test my mail scripts and relay email messages just within my network? I'm just trying to see if I can set this up internally for testing purposes using the mail scripts I develop, thereby avoiding the need for a remote email server to relay my email. Not being very familiar with SMTP setup, I wasn't sure if this was possible?

LVL 12 Expert Comment

I still feel like this is being over-complicated. I still would bet that if you used the code below, and left off the entire CDO.Configuration, it would work just fine.

<%
Sub SendMessage(strTo, strFrom, strSubject, strBody)
    Set oCDO = Server.CreateObject("CDO.Message")
    With oCDO
        .To       = strTo
        .From     = strFrom
        .Subject  = strSubject
        .HtmlBody = strBody
        .Send
    End With
    Set oCDO = Nothing
End Sub
%>

Author Comment

Hi peterxlane... thanks for replying to my post. I did give your suggestion a try, but the mail still fails and is deposited into the badmail folder. I might be mistaken, but I believe I still have to have an IP for the SMTP server in order to relay the message. Thanks for your help. I appreciate it.

Regards,
-D-

LVL 22 Expert Comment

Try contacting your ISP to see if they have an SMTP mail server that you can relay through. More than likely they will not be on any blacklists.
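As a footnote on the blacklist diagnosis above: DNS-based blacklists such as the one named in the bounce messages work by reversing the octets of an IPv4 address and looking the result up as a host name under the list's zone; if the name resolves, the address is listed. Here is a small sketch of that check in Haskell (again an illustration added here, not part of the thread; the zone name is a placeholder, and the IP is the one quoted in the bounces).

import Control.Exception (SomeException, try)
import Data.List (intercalate)
import Network.Socket (AddrInfo, getAddrInfo)

-- Build the DNSBL query name: reverse the IPv4 octets and append the zone,
-- e.g. 207.119.5.179 checked against "dnsbl.example.org" becomes
-- "179.5.119.207.dnsbl.example.org".
queryName :: (Int, Int, Int, Int) -> String -> String
queryName (a, b, c, d) zone =
  intercalate "." (map show [d, c, b, a]) ++ "." ++ zone

-- Treat the address as listed iff the constructed name resolves at all;
-- getAddrInfo throws an exception for names that don't exist.
isListed :: (Int, Int, Int, Int) -> String -> IO Bool
isListed ip zone = do
  r <- try (getAddrInfo Nothing (Just (queryName ip zone)) Nothing)
         :: IO (Either SomeException [AddrInfo])
  return (either (const False) (not . null) r)

main :: IO ()
main = do
  listed <- isListed (207, 119, 5, 179) "dnsbl.example.org"
  putStrLn (if listed then "listed" else "not listed (or the lookup failed)")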
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35660260915756226, "perplexity": 8852.183372804744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720972.46/warc/CC-MAIN-20161020183840-00208-ip-10-171-6-4.ec2.internal.warc.gz"}
https://cms.math.ca/10.4153/CMB-2008-012-8
# Dynamical Zeta Function for Several Strictly Convex Obstacles

Published: 2008-03-01
Printed: Mar 2008
• Vesselin Petkov

## Abstract

The behavior of the dynamical zeta function $Z_D(s)$ related to several strictly convex disjoint obstacles is similar to that of the inverse $Q(s) = \frac{1}{\zeta(s)}$ of the Riemann zeta function $\zeta(s)$. Let $\Pi(s)$ be the series obtained from $Z_D(s)$ summing only over primitive periodic rays. In this paper we examine the analytic singularities of $Z_D(s)$ and $\Pi(s)$ close to the line $\Re s = s_2$, where $s_2$ is the abscissa of absolute convergence of the series obtained by the second iterations of the primitive periodic rays. We show that at least one of the functions $Z_D(s), \Pi(s)$ has a singularity at $s = s_2$.

Keywords: dynamical zeta function, periodic rays

MSC Classifications:
11M36 - Selberg zeta functions and regularized determinants; applications to spectral theory, Dirichlet series, Eisenstein series, etc. Explicit formulas
58J50 - Spectral problems; spectral geometry; scattering theory [See also 35Pxx]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8546732068061829, "perplexity": 848.7249622923285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.56/warc/CC-MAIN-20160624154955-00061-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.meritnation.com/cbse-class-11-science/math/math-ncert-solutions/linear-inequalities/ncert-solutions/41_1_1342_166_122_5549
NCERT Solutions for Class 11 Science Math Chapter 6 Linear Inequalities are provided here with simple step-by-step explanations. These solutions for Linear Inequalities are extremely popular among Class 11 Science students, as they come in handy for quickly completing homework and preparing for exams. All questions and answers from the NCERT Book of Class 11 Science Math Chapter 6 are provided here for you for free. You will also love the ad-free experience on Meritnation's NCERT Solutions. All NCERT Solutions for Class 11 Science Math are prepared by experts and are 100% accurate.

#### Question 1:

Solve 24x < 100, when (i) x is a natural number (ii) x is an integer

The given inequality is 24x < 100, i.e. x < 100/24 = 25/6.

(i) It is evident that 1, 2, 3, and 4 are the only natural numbers less than 25/6. Thus, when x is a natural number, the solutions of the given inequality are 1, 2, 3, and 4. Hence, in this case, the solution set is {1, 2, 3, 4}.

(ii) The integers less than 25/6 are …, –3, –2, –1, 0, 1, 2, 3, 4. Thus, when x is an integer, the solutions of the given inequality are …, –3, –2, –1, 0, 1, 2, 3, 4. Hence, in this case, the solution set is {…, –3, –2, –1, 0, 1, 2, 3, 4}.

#### Question 2:

Solve –12x > 30, when (i) x is a natural number (ii) x is an integer

The given inequality is –12x > 30, i.e. x < –30/12 = –5/2.

(i) There is no natural number less than –5/2. Thus, when x is a natural number, there is no solution of the given inequality.

(ii) The integers less than –5/2 are …, –5, –4, –3. Thus, when x is an integer, the solutions of the given inequality are …, –5, –4, –3. Hence, in this case, the solution set is {…, –5, –4, –3}.

#### Question 3:

Solve 5x – 3 < 7, when (i) x is an integer (ii) x is a real number

The given inequality is 5x – 3 < 7, i.e. 5x < 10, i.e. x < 2.

(i) The integers less than 2 are …, –4, –3, –2, –1, 0, 1. Thus, when x is an integer, the solutions of the given inequality are …, –4, –3, –2, –1, 0, 1. Hence, in this case, the solution set is {…, –4, –3, –2, –1, 0, 1}.

(ii) When x is a real number, the solutions of the given inequality are given by x < 2, that is, all real numbers x which are less than 2. Thus, the solution set of the given inequality is x ∈ (–∞, 2).

#### Question 4:

Solve 3x + 8 > 2, when (i) x is an integer (ii) x is a real number

The given inequality is 3x + 8 > 2, i.e. 3x > –6, i.e. x > –2.

(i) The integers greater than –2 are –1, 0, 1, 2, … Thus, when x is an integer, the solutions of the given inequality are –1, 0, 1, 2, … Hence, in this case, the solution set is {–1, 0, 1, 2, …}.

(ii) When x is a real number, the solutions of the given inequality are all the real numbers which are greater than –2. Thus, in this case, the solution set is (–2, ∞).

#### Question 5:

Solve the given inequality for real x: 4x + 3 < 5x + 7

4x + 3 < 5x + 7
⇒ 4x + 3 – 7 < 5x + 7 – 7
⇒ 4x – 4 < 5x
⇒ 4x – 4 – 4x < 5x – 4x
⇒ –4 < x

Thus, all real numbers x which are greater than –4 are the solutions of the given inequality. Hence, the solution set of the given inequality is (–4, ∞).

#### Question 6:

Solve the given inequality for real x: 3x – 7 > 5x – 1

3x – 7 > 5x – 1
⇒ 3x – 7 + 7 > 5x – 1 + 7
⇒ 3x > 5x + 6
⇒ 3x – 5x > 5x + 6 – 5x
⇒ –2x > 6
⇒ x < –3

Thus, all real numbers x which are less than –3 are the solutions of the given inequality. Hence, the solution set of the given inequality is (–∞, –3).

#### Question 7:

Solve the given inequality for real x: 3(x – 1) ≤ 2(x – 3)

3(x – 1) ≤ 2(x – 3)
⇒ 3x – 3 ≤ 2x – 6
⇒ 3x – 3 + 3 ≤ 2x – 6 + 3
⇒ 3x ≤ 2x – 3
⇒ 3x – 2x ≤ 2x – 3 – 2x
⇒ x ≤ –3

Thus, all real numbers x which are less than or equal to –3 are the solutions of the given inequality.
Hence, the solution set of the given inequality is (–, –3]. #### Question 8: Solve the given inequality for real x: 3(2 – x) 2(1 – x) 3(2 – x) 2(1 – x) 6 – 3x 2 – 2x 6 – 3x + 2x 2 – 2x + 2x 6 – x 2 6 – x – 6 2 – 6 ⇒ –x –4 x 4 Thus, all real numbers x,which are less than or equal to 4, are the solutions of the given inequality. Hence, the solution set of the given inequality is (–, 4]. #### Question 9: Solve the given inequality for real x: Thus, all real numbers x,which are less than 6, are the solutions of the given inequality. Hence, the solution set of the given inequality is (–, 6). #### Question 10: Solve the given inequality for real x: Thus, all real numbers x,which are less than –6, are the solutions of the given inequality. Hence, the solution set of the given inequality is (–, –6). #### Question 11: Solve the given inequality for real x: Thus, all real numbers x,which are less than or equal to 2, are the solutions of the given inequality. Hence, the solution set of the given inequality is (–, 2]. #### Question 12: Solve the given inequality for real x: Thus, all real numbers x,which are less than or equal to 120, are the solutions of the given inequality. Hence, the solution set of the given inequality is (–∞, 120]. #### Question 13: Solve the given inequality for real x: 2(2x + 3) – 10 < 6 (x – 2) Thus, all real numbers x,which are greater than or equal to 4, are the solutions of the given inequality. Hence, the solution set of the given inequality is (4, ∞). #### Question 14: Solve the given inequality for real x: 37 ­– (3x + 5) 9x – 8(x – 3) Thus, all real numbers x,which are less than or equal to 2, are the solutions of the given inequality. Hence, the solution set of the given inequality is (–, 2]. #### Question 15: Solve the given inequality for real x: Thus, all real numbers x,which are greater than 4, are the solutions of the given inequality. Hence, the solution set of the given inequality is (4, ). #### Question 16: Solve the given inequality for real x: Thus, all real numbers x,which are less than or equal to 2, are the solutions of the given inequality. Hence, the solution set of the given inequality is (–, 2]. #### Question 17: Solve the given inequality and show the graph of the solution on number line: 3x – 2 < 2x +1 3x – 2 < 2x +1 3x – 2x < 1 + 2 x < 3 The graphical representation of the solutions of the given inequality is as follows. #### Question 18: Solve the given inequality and show the graph of the solution on number line: 5x – 3 3x – 5 The graphical representation of the solutions of the given inequality is as follows. #### Question 19: Solve the given inequality and show the graph of the solution on number line: 3(1 – x) < 2 (x + 4) The graphical representation of the solutions of the given inequality is as follows. #### Question 20: Solve the given inequality and show the graph of the solution on number line: The graphical representation of the solutions of the given inequality is as follows. #### Question 21: Ravi obtained 70 and 75 marks in first two unit test. Find the minimum marks he should get in the third test to have an average of at least 60 marks. Let x be the marks obtained by Ravi in the third unit test. Since the student should have an average of at least 60 marks, Thus, the student must obtain a minimum of 35 marks to have an average of at least 60 marks. #### Question 22: To receive Grade ‘A’ in a course, one must obtain an average of 90 marks or more in five examinations (each of 100 marks). 
If Sunita’s marks in first four examinations are 87, 92, 94 and 95, find minimum marks that Sunita must obtain in fifth examination to get grade ‘A’ in the course. Let x be the marks obtained by Sunita in the fifth examination. In order to receive grade ‘A’ in the course, she must obtain an average of 90 marks or more in five examinations. Therefore, Thus, Sunita must obtain greater than or equal to 82 marks in the fifth examination. #### Question 23: Find all pairs of consecutive odd positive integers both of which are smaller than 10 such that their sum is more than 11. Let x be the smaller of the two consecutive odd positive integers. Then, the other integer is x + 2. Since both the integers are smaller than 10, x + 2 < 10 x < 10 – 2 x < 8 … (i) Also, the sum of the two integers is more than 11. x + (x + 2) > 11 2x + 2 > 11 2x > 11 – 2 2x > 9 From (i) and (ii), we obtain $4.5 Since x is an odd number, x can take the values, 5 and 7. Thus, the required possible pairs are (5, 7) and (7, 9). #### Question 24: Find all pairs of consecutive even positive integers, both of which are larger than 5 such that their sum is less than 23. Let x be the smaller of the two consecutive even positive integers. Then, the other integer is x + 2. Since both the integers are larger than 5, x > 5 ... (1) Also, the sum of the two integers is less than 23. x + (x + 2) < 23 2x + 2 < 23 2x < 23 – 2 2x < 21 From (1) and (2), we obtain 5 < x < 10.5. Since x is an even number, x can take the values, 6, 8, and 10. Thus, the required possible pairs are (6, 8), (8, 10), and (10, 12). #### Question 25: The longest side of a triangle is 3 times the shortest side and the third side is 2 cm shorter than the longest side. If the perimeter of the triangle is at least 61 cm, find the minimum length of the shortest side. Let the length of the shortest side of the triangle be x cm. Then, length of the longest side = 3x cm Length of the third side = (3x – 2) cm Since the perimeter of the triangle is at least 61 cm, Thus, the minimum length of the shortest side is 9 cm. #### Question 26: A man wants to cut three lengths from a single piece of board of length 91 cm. The second length is to be 3 cm longer than the shortest and the third length is to be twice as long as the shortest. What are the possible lengths of the shortest board if the third piece is to be at least 5 cm longer than the second? [Hint: If x is the length of the shortest board, then x, (x + 3) and 2x are the lengths of the second and third piece, respectively. Thus, x = (x + 3) + 2x 91 and 2x (x + 3) + 5] Let the length of the shortest piece be x cm. Then, length of the second piece and the third piece are (x + 3) cm and 2x cm respectively. Since the three lengths are to be cut from a single piece of board of length 91 cm, x cm + (x + 3) cm + 2x cm 91 cm 4x + 3 91 4x 91 ­– 3 4x 88 Also, the third piece is at least 5 cm longer than the second piece. 2x (x + 3) + 5 2x x + 8 x 8 … (2) From (1) and (2), we obtain 8 x 22 Thus, the possible length of the shortest board is greater than or equal to 8 cm but less than or equal to 22 cm. #### Question 1: Solve the given inequality graphically in two-dimensional plane: x + y < 5 The graphical representation of x + y = 5 is given as dotted line in the figure below. This line divides the xy-plane in two half planes, I and II. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). 
It is observed that, 0 + 0 < 5 or, 0 < 5, which is true Therefore, half plane II is not the solution region of the given inequality. Also, it is evident that any point on the line does not satisfy the given strict inequality. Thus, the solution region of the given inequality is the shaded half plane I excluding the points on the line. This can be represented as follows. #### Question 2: Solve the given inequality graphically in two-dimensional plane: 2x + y 6 The graphical representation of 2x + y = 6 is given in the figure below. This line divides the xy-plane in two half planes, I and II. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that, 2(0) + 0 ≥ 6 or 0 ≥ 6, which is false Therefore, half plane I is not the solution region of the given inequality. Also, it is evident that any point on the line satisfies the given inequality. Thus, the solution region of the given inequality is the shaded half plane II including the points on the line. This can be represented as follows. #### Question 3: Solve the given inequality graphically in two-dimensional plane: 3x + 4y 12 3x + 4y 12 The graphical representation of 3x + 4y = 12 is given in the figure below. This line divides the xy-plane in two half planes, I and II. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that, 3(0) + 4(0) 12 or 0 12, which is true Therefore, half plane II is not the solution region of the given inequality. Also, it is evident that any point on the line satisfies the given inequality. Thus, the solution region of the given inequality is the shaded half plane I including the points on the line. This can be represented as follows. #### Question 4: Solve the given inequality graphically in two-dimensional plane: y + 8 2x The graphical representation of y + 8 = 2x is given in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that, 0 + 8 2(0) or 8 0, which is true Therefore, lower half plane is not the solution region of the given inequality. Also, it is evident that any point on the line satisfies the given inequality. Thus, the solution region of the given inequality is the half plane containing the point (0, 0) including the line. The solution region is represented by the shaded region as follows. #### Question 5: Solve the given inequality graphically in two-dimensional plane: xy 2 The graphical representation of xy = 2 is given in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that, 0 – 0 2 or 0 2, which is true Therefore, the lower half plane is not the solution region of the given inequality. Also, it is clear that any point on the line satisfies the given inequality. Thus, the solution region of the given inequality is the half plane containing the point (0, 0) including the line. The solution region is represented by the shaded region as follows. 
#### Question 6: Solve the given inequality graphically in two-dimensional plane: 2x – 3y > 6 The graphical representation of 2x – 3y = 6 is given as dotted line in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that, 2(0) – 3(0) > 6 or 0 > 6, which is false Therefore, the upper half plane is not the solution region of the given inequality. Also, it is clear that any point on the line does not satisfy the given inequality. Thus, the solution region of the given inequality is the half plane that does not contain the point (0, 0) excluding the line. The solution region is represented by the shaded region as follows. #### Question 7: Solve the given inequality graphically in two-dimensional plane: –3x + 2y –6 The graphical representation of – 3x + 2y = – 6 is given in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that, 3(0) + 2(0) – 6 or 0 –6, which is true Therefore, the lower half plane is not the solution region of the given inequality. Also, it is evident that any point on the line satisfies the given inequality. Thus, the solution region of the given inequality is the half plane containing the point (0, 0) including the line. The solution region is represented by the shaded region as follows. #### Question 8: Solve the given inequality graphically in two-dimensional plane: 3y – 5x < 30 The graphical representation of 3y – 5x = 30 is given as dotted line in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that, 3(0) – 5(0) < 30 or 0 < 30, which is true Therefore, the upper half plane is not the solution region of the given inequality. Also, it is evident that any point on the line does not satisfy the given inequality. Thus, the solution region of the given inequality is the half plane containing the point (0, 0) excluding the line. The solution region is represented by the shaded region as follows. #### Question 9: Solve the given inequality graphically in two-dimensional plane: y < –2 The graphical representation of y = –2 is given as dotted line in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. We select the point as (0, 0). It is observed that, 0 < –2, which is false Also, it is evident that any point on the line does not satisfy the given inequality. Hence, every point below the line, y = –2 (excluding all the points on the line), determines the solution of the given inequality. The solution region is represented by the shaded region as follows. #### Question 10: Solve the given inequality graphically in two-dimensional plane: x > –3 The graphical representation of x = –3 is given as dotted line in the figure below. This line divides the xy-plane in two half planes. Select a point (not on the line), which lies in one of the half planes, to determine whether the point satisfies the given inequality or not. 
We select the point as (0, 0).

It is observed that 0 > –3, which is true. Also, it is evident that any point on the line does not satisfy the given inequality.

Hence, every point on the right side of the line x = –3 (excluding all the points on the line) determines the solution of the given inequality. The solution region is represented by the shaded region as follows.

#### Question 1:

Solve the following system of inequalities graphically: x ≥ 3, y ≥ 2

x ≥ 3 … (1)
y ≥ 2 … (2)

The graph of the lines, x = 3 and y = 2, are drawn in the figure below.

Inequality (1) represents the region on the right hand side of the line, x = 3 (including the line x = 3), and inequality (2) represents the region above the line, y = 2 (including the line y = 2).

Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines as follows.

#### Question 2:

Solve the following system of inequalities graphically: 3x + 2y ≤ 12, x ≥ 1, y ≥ 2

3x + 2y ≤ 12 … (1)
x ≥ 1 … (2)
y ≥ 2 … (3)

The graphs of the lines, 3x + 2y = 12, x = 1, and y = 2, are drawn in the figure below.

Inequality (1) represents the region below the line, 3x + 2y = 12 (including the line 3x + 2y = 12). Inequality (2) represents the region on the right side of the line, x = 1 (including the line x = 1). Inequality (3) represents the region above the line, y = 2 (including the line y = 2).

Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines as follows.

#### Question 3:

Solve the following system of inequalities graphically: 2x + y ≥ 6, 3x + 4y ≤ 12

2x + y ≥ 6 … (1)
3x + 4y ≤ 12 … (2)

The graph of the lines, 2x + y = 6 and 3x + 4y = 12, are drawn in the figure below.

Inequality (1) represents the region above the line, 2x + y = 6 (including the line 2x + y = 6), and inequality (2) represents the region below the line, 3x + 4y = 12 (including the line 3x + 4y = 12).

Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines as follows.

#### Question 4:

Solve the following system of inequalities graphically: x + y ≥ 4, 2x – y > 0

x + y ≥ 4 … (1)
2x – y > 0 … (2)

The graph of the lines, x + y = 4 and 2x – y = 0, are drawn in the figure below.

Inequality (1) represents the region above the line, x + y = 4 (including the line x + y = 4). It is observed that (1, 0) satisfies the inequality, 2x – y > 0. [2(1) – 0 = 2 > 0] Therefore, inequality (2) represents the half plane corresponding to the line, 2x – y = 0, containing the point (1, 0) [excluding the line 2x – y = 0].

Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on line x + y = 4 and excluding the points on line 2x – y = 0 as follows.

#### Question 5:

Solve the following system of inequalities graphically: 2x – y > 1, x – 2y < –1

2x – y > 1 … (1)
x – 2y < –1 … (2)

The graph of the lines, 2x – y = 1 and x – 2y = –1, are drawn in the figure below.

Inequality (1) represents the region below the line, 2x – y = 1 (excluding the line 2x – y = 1), and inequality (2) represents the region above the line, x – 2y = –1 (excluding the line x – 2y = –1).

Hence, the solution of the given system of linear inequalities is represented by the common shaded region excluding the points on the respective lines as follows.
#### Question 6:

Solve the following system of inequalities graphically: x + y ≤ 6, x + y ≥ 4

x + y ≤ 6 … (1)
x + y ≥ 4 … (2)

The graph of the lines, x + y = 6 and x + y = 4, are drawn in the figure below.

Inequality (1) represents the region below the line, x + y = 6 (including the line x + y = 6), and inequality (2) represents the region above the line, x + y = 4 (including the line x + y = 4).

Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines as follows.

#### Question 7:

Solve the following system of inequalities graphically: 2x + y ≥ 8, x + 2y ≥ 10

2x + y ≥ 8 … (1)
x + 2y ≥ 10 … (2)

The graph of the lines, 2x + y = 8 and x + 2y = 10, are drawn in the figure below.

Inequality (1) represents the region above the line, 2x + y = 8, and inequality (2) represents the region above the line, x + 2y = 10.

Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines as follows.

#### Question 8:

Solve the following system of inequalities graphically: x + y ≤ 9, y > x, x ≥ 0

x + y ≤ 9 … (1)
y > x … (2)
x ≥ 0 … (3)

The graph of the lines, x + y = 9 and y = x, are drawn in the figure below.

Inequality (1) represents the region below the line, x + y = 9 (including the line x + y = 9). It is observed that (0, 1) satisfies the inequality, y > x. [1 > 0] Therefore, inequality (2) represents the half plane corresponding to the line, y = x, containing the point (0, 1) [excluding the line y = x]. Inequality (3) represents the region on the right hand side of the line, x = 0 or y-axis (including y-axis).

Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the lines, x + y = 9 and x = 0, and excluding the points on line y = x as follows.

#### Question 9:

Solve the following system of inequalities graphically: 5x + 4y ≤ 20, x ≥ 1, y ≥ 2

5x + 4y ≤ 20 … (1)
x ≥ 1 … (2)
y ≥ 2 … (3)

The graph of the lines, 5x + 4y = 20, x = 1, and y = 2, are drawn in the figure below.

Inequality (1) represents the region below the line, 5x + 4y = 20 (including the line 5x + 4y = 20). Inequality (2) represents the region on the right hand side of the line, x = 1 (including the line x = 1). Inequality (3) represents the region above the line, y = 2 (including the line y = 2).

Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines as follows.

#### Question 10:

Solve the following system of inequalities graphically: 3x + 4y ≤ 60, x + 3y ≤ 30, x ≥ 0, y ≥ 0

3x + 4y ≤ 60 … (1)
x + 3y ≤ 30 … (2)

The graph of the lines, 3x + 4y = 60 and x + 3y = 30, are drawn in the figure below.

Inequality (1) represents the region below the line, 3x + 4y = 60 (including the line 3x + 4y = 60), and inequality (2) represents the region below the line, x + 3y = 30 (including the line x + 3y = 30).

Since x ≥ 0 and y ≥ 0, every point in the common shaded region in the first quadrant, including the points on the respective lines and the axes, represents the solution of the given system of linear inequalities.

#### Question 11:

Solve the following system of inequalities graphically: 2x + y ≥ 4, x + y ≤ 3, 2x – 3y ≤ 6

2x + y ≥ 4 … (1)
x + y ≤ 3 … (2)
2x – 3y ≤ 6 … (3)

The graph of the lines, 2x + y = 4, x + y = 3, and 2x – 3y = 6, are drawn in the figure below.
Inequality (1) represents the region above the line, 2x + y = 4 (including the line 2x + y = 4). Inequality (2) represents the region below the line, x + y = 3 (including the line x + y = 3). Inequality (3) represents the region above the line, 2x – 3y = 6 (including the line 2x – 3y = 6).

Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines as follows.

#### Question 12:

Solve the following system of inequalities graphically: x – 2y ≤ 3, 3x + 4y ≥ 12, x ≥ 0, y ≥ 1

x – 2y ≤ 3 … (1)
3x + 4y ≥ 12 … (2)
y ≥ 1 … (3)

The graph of the lines, x – 2y = 3, 3x + 4y = 12, and y = 1, are drawn in the figure below.

Inequality (1) represents the region above the line, x – 2y = 3 (including the line x – 2y = 3). Inequality (2) represents the region above the line, 3x + 4y = 12 (including the line 3x + 4y = 12). Inequality (3) represents the region above the line, y = 1 (including the line y = 1). The inequality, x ≥ 0, represents the region on the right hand side of the y-axis (including the y-axis).

Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines and the y-axis as follows.

#### Question 13:

Solve the following system of inequalities graphically: 4x + 3y ≤ 60, y ≥ 2x, x ≥ 3, x, y ≥ 0

4x + 3y ≤ 60 … (1)
y ≥ 2x … (2)
x ≥ 3 … (3)

The graph of the lines, 4x + 3y = 60, y = 2x, and x = 3, are drawn in the figure below.

Inequality (1) represents the region below the line, 4x + 3y = 60 (including the line 4x + 3y = 60). Inequality (2) represents the region above the line, y = 2x (including the line y = 2x). Inequality (3) represents the region on the right hand side of the line, x = 3 (including the line x = 3).

Hence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the respective lines as follows.

#### Question 14:

Solve the following system of inequalities graphically: 3x + 2y ≤ 150, x + 4y ≤ 80, x ≤ 15, y ≥ 0, x ≥ 0

3x + 2y ≤ 150 … (1)
x + 4y ≤ 80 … (2)
x ≤ 15 … (3)

The graph of the lines, 3x + 2y = 150, x + 4y = 80, and x = 15, are drawn in the figure below.

Inequality (1) represents the region below the line, 3x + 2y = 150 (including the line 3x + 2y = 150). Inequality (2) represents the region below the line, x + 4y = 80 (including the line x + 4y = 80). Inequality (3) represents the region on the left hand side of the line, x = 15 (including the line x = 15).

Since x ≥ 0 and y ≥ 0, every point in the common shaded region in the first quadrant, including the points on the respective lines and the axes, represents the solution of the given system of linear inequalities.

#### Question 15:

Solve the following system of inequalities graphically: x + 2y ≤ 10, x + y ≥ 1, x – y ≤ 0, x ≥ 0, y ≥ 0

x + 2y ≤ 10 … (1)
x + y ≥ 1 … (2)
x – y ≤ 0 … (3)

The graph of the lines, x + 2y = 10, x + y = 1, and x – y = 0, are drawn in the figure below.

Inequality (1) represents the region below the line, x + 2y = 10 (including the line x + 2y = 10). Inequality (2) represents the region above the line, x + y = 1 (including the line x + y = 1). Inequality (3) represents the region above the line, x – y = 0 (including the line x – y = 0).

Since x ≥ 0 and y ≥ 0, every point in the common shaded region in the first quadrant, including the points on the respective lines and the axes, represents the solution of the given system of linear inequalities.
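The same test-point checks used throughout this exercise can be automated. As a rough illustration (a side note, not part of the textbook solutions), the C++ sketch below tests whether a few candidate points satisfy every constraint of the system in Question 15:

```cpp
#include <iostream>
#include <vector>
#include <functional>

int main() {
    // Constraints of Question 15: x + 2y <= 10, x + y >= 1, x - y <= 0, x >= 0, y >= 0
    std::vector<std::function<bool(double, double)>> constraints = {
        [](double x, double y) { return x + 2 * y <= 10; },
        [](double x, double y) { return x + y >= 1; },
        [](double x, double y) { return x - y <= 0; },
        [](double x, double y) { return x >= 0; },
        [](double x, double y) { return y >= 0; }
    };

    // Candidate points: some inside, some outside the common shaded region.
    double points[][2] = { {1, 2}, {0, 0}, {2, 4}, {5, 1} };

    for (auto& p : points) {
        bool feasible = true;
        for (auto& c : constraints)
            feasible = feasible && c(p[0], p[1]);
        std::cout << "(" << p[0] << ", " << p[1] << ") "
                  << (feasible ? "lies in" : "lies outside")
                  << " the solution region\n";
    }
}
```

Running it reports that (1, 2) and (2, 4) lie in the solution region while (0, 0) and (5, 1) do not, matching what the shaded graph shows.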
#### Question 1:

Solve the inequality 2 ≤ 3x – 4 ≤ 5

2 ≤ 3x – 4 ≤ 5
⇒ 2 + 4 ≤ 3x – 4 + 4 ≤ 5 + 4
⇒ 6 ≤ 3x ≤ 9
⇒ 2 ≤ x ≤ 3

Thus, all the real numbers x which are greater than or equal to 2 but less than or equal to 3 are the solutions of the given inequality. The solution set for the given inequality is [2, 3].

#### Question 2:

Solve the inequality 6 ≤ –3(2x – 4) < 12

6 ≤ –3(2x – 4) < 12
⇒ 2 ≤ –(2x – 4) < 4
⇒ –2 ≥ 2x – 4 > –4
⇒ 4 – 2 ≥ 2x > 4 – 4
⇒ 2 ≥ 2x > 0
⇒ 1 ≥ x > 0

Thus, the solution set for the given inequality is (0, 1].

#### Question 3:

Solve the inequality

Thus, the solution set for the given inequality is [–4, 2].

#### Question 4:

Solve the inequality

⇒ –75 < 3(x – 2) ≤ 0
⇒ –25 < x – 2 ≤ 0
⇒ –25 + 2 < x ≤ 2
⇒ –23 < x ≤ 2

Thus, the solution set for the given inequality is (–23, 2].

#### Question 5:

Solve the inequality

Thus, the solution set for the given inequality is.

#### Question 6:

Solve the inequality

Thus, the solution set for the given inequality is.

#### Question 7:

Solve the inequalities and represent the solution graphically on number line: 5x + 1 > –24, 5x – 1 < 24

5x + 1 > –24
⇒ 5x > –25
⇒ x > –5 … (1)

5x – 1 < 24
⇒ 5x < 25
⇒ x < 5 … (2)

From (1) and (2), it can be concluded that the solution set for the given system of inequalities is (–5, 5). The solution of the given system of inequalities can be represented on the number line as follows.

#### Question 8:

Solve the inequalities and represent the solution graphically on number line: 2(x – 1) < x + 5, 3(x + 2) > 2 – x

2(x – 1) < x + 5
⇒ 2x – 2 < x + 5
⇒ 2x – x < 5 + 2
⇒ x < 7 … (1)

3(x + 2) > 2 – x
⇒ 3x + 6 > 2 – x
⇒ 3x + x > 2 – 6
⇒ 4x > –4
⇒ x > –1 … (2)

From (1) and (2), it can be concluded that the solution set for the given system of inequalities is (–1, 7). The solution of the given system of inequalities can be represented on the number line as follows.

#### Question 9:

Solve the following inequalities and represent the solution graphically on number line: 3x – 7 > 2(x – 6), 6 – x > 11 – 2x

3x – 7 > 2(x – 6)
⇒ 3x – 7 > 2x – 12
⇒ 3x – 2x > –12 + 7
⇒ x > –5 … (1)

6 – x > 11 – 2x
⇒ –x + 2x > 11 – 6
⇒ x > 5 … (2)

From (1) and (2), it can be concluded that the solution set for the given system of inequalities is (5, ∞). The solution of the given system of inequalities can be represented on the number line as follows.

#### Question 10:

Solve the inequalities and represent the solution graphically on number line: 5(2x – 7) – 3(2x + 3) ≤ 0, 2x + 19 ≤ 6x + 47

5(2x – 7) – 3(2x + 3) ≤ 0
⇒ 10x – 35 – 6x – 9 ≤ 0
⇒ 4x – 44 ≤ 0
⇒ 4x ≤ 44
⇒ x ≤ 11 … (1)

2x + 19 ≤ 6x + 47
⇒ 19 – 47 ≤ 6x – 2x
⇒ –28 ≤ 4x
⇒ –7 ≤ x … (2)

From (1) and (2), it can be concluded that the solution set for the given system of inequalities is [–7, 11]. The solution of the given system of inequalities can be represented on the number line as follows.

#### Question 11:

A solution is to be kept between 68°F and 77°F. What is the range in temperature in degree Celsius (C) if the Celsius/Fahrenheit (F) conversion formula is given by F = (9/5)C + 32?

Since the solution is to be kept between 68°F and 77°F,
68 < F < 77

Putting F = (9/5)C + 32, we obtain
68 < (9/5)C + 32 < 77
⇒ 36 < (9/5)C < 45
⇒ 20 < C < 25

Thus, the required range of temperature in degree Celsius is between 20°C and 25°C.

#### Question 12:

A solution of 8% boric acid is to be diluted by adding a 2% boric acid solution to it. The resulting mixture is to be more than 4% but less than 6% boric acid. If we have 640 litres of the 8% solution, how many litres of the 2% solution will have to be added?

Let x litres of the 2% boric acid solution be added. Then, total mixture = (x + 640) litres.

This resulting mixture is to be more than 4% but less than 6% boric acid.
2% of x + 8% of 640 > 4% of (x + 640)
and 2% of x + 8% of 640 < 6% of (x + 640)

2% of x + 8% of 640 > 4% of (x + 640)
⇒ 2x + 5120 > 4x + 2560
⇒ 5120 – 2560 > 4x – 2x
⇒ 2560 > 2x
⇒ 1280 > x

2% of x + 8% of 640 < 6% of (x + 640)
⇒ 2x + 5120 < 6x + 3840
⇒ 5120 – 3840 < 6x – 2x
⇒ 1280 < 4x
⇒ 320 < x

∴ 320 < x < 1280

Thus, the number of litres of the 2% boric acid solution that is to be added will have to be more than 320 litres but less than 1280 litres.

#### Question 13:

How many litres of water will have to be added to 1125 litres of the 45% solution of acid so that the resulting mixture will contain more than 25% but less than 30% acid content?

Let x litres of water be added. Then, total mixture = (x + 1125) litres.

It is evident that the amount of acid contained in the resulting mixture is 45% of 1125 litres. This resulting mixture will contain more than 25% but less than 30% acid content.

30% of (1125 + x) > 45% of 1125
and 25% of (1125 + x) < 45% of 1125

30% of (1125 + x) > 45% of 1125
⇒ 337.5 + 0.3x > 506.25
⇒ 0.3x > 168.75
⇒ x > 562.5

25% of (1125 + x) < 45% of 1125
⇒ 281.25 + 0.25x < 506.25
⇒ 0.25x < 225
⇒ x < 900

∴ 562.5 < x < 900

Thus, the required number of litres of water that is to be added will have to be more than 562.5 but less than 900.

#### Question 14:

IQ of a person is given by the formula

where MA is mental age and CA is chronological age. If 80 ≤ IQ ≤ 140 for a group of 12-year-old children, find the range of their mental age.
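A worked solution, on the assumption that the intended formula is IQ = (MA/CA) × 100 and taking CA = 12 years:

Given 80 ≤ IQ ≤ 140 and IQ = (MA/12) × 100,
80 ≤ (MA/12) × 100 ≤ 140
⇒ 80 × 12/100 ≤ MA ≤ 140 × 12/100
⇒ 9.6 ≤ MA ≤ 16.8

Thus, the range of mental age of the group of 12-year-old children is 9.6 ≤ MA ≤ 16.8.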
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8628231287002563, "perplexity": 473.9659199784292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00342.warc.gz"}
https://gmatclub.com/forum/if-x-and-y-are-positive-is-the-ratio-of-x-to-y-greater-than-102341.html
# If x and y are positive, is the ratio of x to y greater than

Manager (06 Oct 2010):

If x and y are positive, is the ratio of x to y greater than 3 ?

(1) x is 2 more than 3 times y.
(2) The ratio of 2x to 3y is greater than 2

Math Expert (06 Oct 2010):

If x and y are positive, is the ratio of x to y greater than 3 ?

Is $$\frac{x}{y}>3$$? --> since y is positive, we can multiply both sides by it to get: is $$x>3y$$?

(1) x is 2 more than 3 times y --> $$x=3y+2$$ --> directly tells us that $$x$$ is 2 more than $$3y$$. Sufficient. Or: substitute $$x$$: is $$x>3y$$? --> is $$3y+2>3y$$? --> is $$2>0$$? YES. Sufficient.

(2) The ratio of 2x to 3y is greater than 2 --> $$\frac{2x}{3y}>2$$ --> $$x>3y$$. Sufficient.

Current Student (13 Mar 2016):

Whenever we are given a combination of an equality and an inequality, we must substitute the equality into the inequality to obtain the ranges. Here statement 2 is obviously true; in statement 1, after we substitute x = 3y + 2, we get 2 > 0, which is true.

School Moderator (24 Apr 2018):

Bunuel niks18 amanvermagmat gmatbusters chetan2u

Quote:
If x and y are positive, is the ratio of x to y greater than 3 ?
Is $$\frac{x}{y}>3$$? --> since y is positive, we can multiply both sides by it to get: is $$x>3y$$?
(1) x is 2 more than 3 times y --> $$x=3y+2$$ --> directly tells us that $$x$$ is 2 more than $$3y$$. Sufficient.

Why does this hold true even for decimal values, though we are not told in the question stem that x and y are integers?

Retired Moderator (24 Apr 2018):

Yes, this is valid for any positive number (integers, decimals, fractions, surds etc.). You can see it like this.

Statement 1: x = 3y + 2. Hence x/y = (3y + 2)/y = 3 + 2/y, which is greater than 3. Sufficient.

Statement 2: 2x/3y > 2; multiplying both sides of the inequality by 3/2, we get x/y > 3. Sufficient.
Manager (20 Feb 2019):

What would the answer be if it did not have the restriction that x and y > 0?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7531558275222778, "perplexity": 2127.0145281148716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256184.17/warc/CC-MAIN-20190521002106-20190521024106-00088.warc.gz"}
http://openstudy.com/updates/50ebb90be4b0d4a537cc9f2c
## anonymous (3 years ago)

Can anyone show me an example physics numerical which can't be solved just by memorizing formulas and needs a deep understanding of the concept? Please explain all the steps used. Please explain those deep concepts also.

1. anonymous

You know Kirchhoff's laws? If you do, then you know that the closed loop integral of a circuit is always equal to 0, but that also requires you to take the potential difference of the battery. What if you don't have a battery but instead you have a magnetic field exactly as required for the same amount of current to flow through the circuit? Would Kirchhoff's law fail? If not, why? If yes, then what is the integral equal to?

2. anonymous

To paraphrase Poincaré: Science (physics) is no more a collection of facts (formulas) than a house is a collection of stones. Formulas are derived from the deep understanding of concepts for instances of special interest, to demonstrate the quantitative relationship among the physical quantities involved and for convenience and ease in obtaining desired numerical results for these instances. The deep understanding of concepts is necessary to determine the proper relationship among the physical quantities, from which you can choose appropriate formulas to obtain the desired information. Most real problems involve more complex situations than can be solved by the simple application of a single equation. Even applying a single equation may be problematic: in recognizing that the equation is appropriate, in choosing the correct values for the parameters from the known information, or in knowing how and when to augment the given information with necessary additional information.

4. anonymous

Consider a tank of water of depth d which you wish to empty with a 5/8 inch diameter garden hose siphon, after it passes over an obstruction of height z above the water surface and leads to a level h below the water surface. What is the maximum height of the siphon loop above the water level before the water column in the siphon breaks?

Use Bernoulli's law, because we have a fluid flowing through a tube at different heights. Bernoulli's law states that at two points on the same streamline, for a noncompressible nonviscous fluid in steady flow, the sum of the pressure, the kinetic energy per unit volume and the potential energy per unit volume is the same. I'll call this sum the Bernoulli variable. At the top of the siphon, at height z above the water surface, the Bernoulli variable is $P+\frac{ 1 }{ 2 }\rho v ^{2}+\rho gz$. Now at the level of the water the Bernoulli variable has the value of the atmospheric pressure only, so we can equate these. So the Bernoulli equation for a point in the loop is $P+\frac{ 1 }{ 2 } \rho v ^{2}+\rho gz=P _{ATM}$

We know that a fluid cannot support a tensile force (i.e. water cannot pull anything), so when the absolute pressure in the loop drops to 0 the water column will break. So now we have for the maximum z: $\rho gz _{\max} =P _{ATM}-\frac{ 1 }{ 2 }\rho v ^{2}$

But what is the velocity v? Since this is a noncompressible fluid the velocity is the same at all points in the siphon.
In fact the velocity is determined only by the height of the surface of the water above the outlet of the siphon. This is Torricelli's law (derived from Bernoulli's law), given by the equation $v _{out}= \sqrt{2gh}$

So the maximum z equation becomes $z _{\max} = \frac{ P _{ATM} }{ \rho g }- h$

The diameter of the siphon is irrelevant, except that its relatively large diameter minimizes viscous forces and surface tension, making Bernoulli's law a more valid application to this problem.

Reviewing the approach: you had to determine the law(s) that are appropriate and the condition for the breaking of the water column. You had to deduce the Bernoulli variables in the siphon at the water level and at the top of the siphon. You had to deduce the velocity at the siphon outlet. And lastly you had to remember a consistent set of units for density, gravitational acceleration, pressure, distance and area. In this case density is in slugs per cubic ft, acceleration of gravity in ft/sec^2, h in feet, and atmospheric pressure in lbs per sq ft. (Remember the embarrassment of the company who built a Martian probe when inches and centimeters were interchanged.)

I think this problem is representative of physics problems in general. Some problems are easier and less involved; many are more complex, especially in the application of physical intuition and/or the development of the mathematical relationships. Generally real problems are not plug and play.

5. anonymous

Well, obviously any numerical problem for which there does not yet exist a formula, but for which a formula can be derived from fundamental principles. Such problems can readily be devised at the graduate student level, but they take a lot of work, so I'm sure not going to do one for your pleasure. Send me $500 and I might do it. Also, of course, any research problem falls into this category.

8. AravindG

You can also check out the IIT group closed questions section. There are very nice questions there :)
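As a numerical footnote to the siphon answer above (illustrative figures only, not part of the original post): for water at sea level, with $P_{ATM} \approx 1.01 \times 10^{5}\ \text{Pa}$, $\rho = 1000\ \text{kg/m}^3$ and $g = 9.8\ \text{m/s}^2$,

$z_{\max} = \frac{P_{ATM}}{\rho g} - h \approx \frac{1.01 \times 10^{5}}{1000 \times 9.8} - h \approx 10.3\ \text{m} - h$

so a siphon whose outlet sits $h = 1\ \text{m}$ below the water surface can loop at most about $9.3\ \text{m}$ above it before the column breaks.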
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9777007699012756, "perplexity": 677.4524158576506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00007-ip-10-164-35-72.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1640495/why-is-the-accuracy-of-prhypothesis-in-bayess-theorem-less-important-than
# Why is the accuracy of $\Pr(\text{hypothesis})$ in Bayes's Theorem less important than it appears?

Source: p. 224, Think: A Compelling Introduction to Philosophy (1st ed., 1999) by Simon Blackburn. I capitalised minuscules, which the author uses for variables. I pursue only intuition; please do not answer with formal proofs.

Discussing Bayes's Theorem ($\Pr(H|E)=\frac{\Pr(E|H)\Pr(H)}{\Pr(E)}$), the author abbreviates 'evidence' to E and 'hypothesis' to H.

Of course, very often it is difficult or impossible to quantify the "prior" probabilities [$\Pr(H)$] with any accuracy. It is important to realize that this need not matter as much as it might seem. Two factors alleviate the problem. First, even if we assign a range to each figure, [1.] it may be that all ways of calculating the upshot give a sufficiently similar result. And second, it may be that in the face of enough evidence, [2.] difference of prior opinion gets swamped. Investigators starting with very different antecedent attitudes to $\Pr(H)$ might end up assigning similarly high values to $\Pr(H \mid E)$, when $E$ becomes impressive enough.

My interpretation of the above as overconfident and presumptuous implies my failure to comprehend it; somehow, I am unpersuaded by 1 and 2. Would these please be explained?

2) basically says "If you receive enough evidence, however unconvinced you were at the start, you must eventually become convinced". Since in real life we actually have a lot of data - medical trials, for instance, get run even when we could in principle know the answers already from extant data - we can afford to be quite imprecise with our priors. (Of course, the assumption here is that we have lots of data. If we are trying to extract the truth from very little data, then it becomes very important to have a reasonable prior, because the updating process has so little effect when the data is so short.)

1 seems to be a rephrasing of 2, to me; 2 seems to be the reason why 1 is true.

The second statement is related to the way that Bayesian probability reacts in the light of new evidence, always converging towards the implied truth. No matter whether you are skeptical about the relation between cancer and tobacco, or a believer: after seeing a certain amount of reasonable evidence you should change your opinion to fit the facts better.

It is however true that certain especially toxic initial priors can hamper your ability to reason correctly. Perhaps the simplest and most radical example of this is when you take as prior $P(A)=0$ or $P(A)=1$. The problem of selecting a reasonable initial prior is difficult, but in some special situations it can be as simple as starting from total ignorance ($P(A)=1/2$) and letting the data correct our views.

Regarding 1, I do not fully get what he is getting at. Every way of calculating something should give the same result, since it is uniquely determined from the priors. If we have some small discrepancies between our prior and that of our partner's, which are further reduced by evidence updates, then in many cases we can be content with either result, but we have to be careful.

Edit: You can take A as any statement which is susceptible to being changed by evidence (which is a pretty broad category, though it may exclude logical propositions if we are presupposing logical omniscience). For example, A could be the aforementioned 'Smoking causes cancer'.

• Thanks. Can you please clarify what $A$ means in your 3rd paragraph? – Greek - Area 51 Proposal Feb 7 '16 at 6:29
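A small worked illustration of point 2 (the numbers here are mine, not Blackburn's): write Bayes's theorem in odds form. If each of $n$ conditionally independent pieces of evidence has likelihood ratio $\lambda = \frac{\Pr(E \mid H)}{\Pr(E \mid \neg H)}$, then

$$\frac{\Pr(H \mid E_1,\dots,E_n)}{\Pr(\neg H \mid E_1,\dots,E_n)} = \frac{\Pr(H)}{\Pr(\neg H)} \cdot \lambda^n.$$

Two investigators with prior odds of $1:9$ and $9:1$ disagree by a factor of $81$; but with $\lambda = 2$ and $n = 20$ observations, the shared factor $\lambda^n \approx 10^6$ dwarfs that disagreement, so both end up assigning a value of $\Pr(H \mid E)$ very close to 1. This is the sense in which differences of prior opinion get swamped.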
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8084845542907715, "perplexity": 762.6239559723933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578762045.99/warc/CC-MAIN-20190426073513-20190426095513-00153.warc.gz"}
https://blog.csdn.net/kingdam578/article/details/102690558
# FFmpeg In Android - tutorial-2 - Outputting to the Screen

### SDL and Video

To draw to the screen, we're going to use SDL. SDL stands for Simple Direct Layer, and is an excellent library for multimedia, is cross-platform, and is used in several projects. You can get the library at the official website or you can download the development package for your operating system if there is one. You'll need the libraries to compile the code for this tutorial (and for the rest of them, too).

SDL has many methods for drawing images to the screen, and it has one in particular that is meant for displaying movies on the screen - what it calls a YUV overlay. YUV (technically not YUV but YCbCr) is a way of storing raw image data, like RGB. (A note: there is a great deal of annoyance from some people at the convention of calling YCbCr "YUV". Generally speaking, YUV is an analog format and YCbCr is a digital format; ffmpeg and SDL both refer to YCbCr as YUV in their code and macros.) Roughly speaking, Y is the brightness (or "luma") component, and U and V are the color components. (It's more complicated than RGB because some of the color information is discarded, and you might have only 1 U and V sample for every 2 Y samples.) SDL's YUV overlay takes in a raw array of YUV data and displays it. It accepts 4 different kinds of YUV formats, but YV12 is the fastest. There is another YUV format called YUV420P that is the same as YV12, except the U and V arrays are switched. The 420 means it is subsampled at a ratio of 4:2:0, basically meaning there is 1 color sample for every 4 luma samples, so the color information is quartered. This is a good way of saving bandwidth, as the human eye does not perceive this change. The "P" in the name means that the format is "planar", simply meaning that the Y, U, and V components are in separate arrays. ffmpeg can convert images to YUV420P, with the added bonus that many video streams are in that format already, or are easily converted to that format.

So our current plan is to replace the ppm_save() function from Tutorial 1, and instead output our frame to the screen. But first we have to start by seeing how to use the SDL library. First we have to include the libraries and initialize SDL:

#include <SDL2/SDL.h>

if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
    fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
    exit(1);
}

SDL_Init() essentially tells the library what features we're going to use. SDL_GetError(), of course, is a handy debugging function.

### Creating a Display

Now we need a place on the screen to put stuff.
The basic area for displaying images with SDL is called an SDL_Window:

// SDL 2.0 supports multiple windows
screen = SDL_CreateWindow("Simplest ffmpeg player's Window",
                          SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
                          screen_w, screen_h,
                          SDL_WINDOW_OPENGL);
if(!screen) {
    printf("SDL: could not create window - exiting:%s\n", SDL_GetError());
    return -1;
}

This creates a window with the given title, position (left undefined here, so SDL picks one), width and height. The last argument is a set of window flags; here we ask for an OpenGL-capable window with SDL_WINDOW_OPENGL. Next we create a renderer for the window and a streaming YUV texture matching the video size:

sdlRenderer = SDL_CreateRenderer(screen, -1, SDL_RENDERER_ACCELERATED);
sdlTexture = SDL_CreateTexture(sdlRenderer, SDL_PIXELFORMAT_IYUV,
                               SDL_TEXTUREACCESS_STREAMING,
                               video_dec_ctx->width, video_dec_ctx->height);

### Displaying the Image

Well that was simple enough! Now we just need to display the image. Let's go all the way down to where we had our finished frame. We can get rid of all that stuff we had for the RGB frame, and we're going to replace the ppm_save() with our display code.

sws_scale(sws_ctx, (const uint8_t* const*)frame->data, frame->linesize,
          0, video_dec_ctx->height, video_dst_data, video_dst_linesize);

SDL_UpdateYUVTexture(sdlTexture, &sdlRect,
                     video_dst_data[0], video_dst_linesize[0],
                     video_dst_data[1], video_dst_linesize[1],
                     video_dst_data[2], video_dst_linesize[2]);
SDL_RenderClear(sdlRenderer);
SDL_RenderCopy(sdlRenderer, sdlTexture, NULL, &sdlRect);
SDL_RenderPresent(sdlRenderer);

Now our video is displayed!

Let's take this time to show you another feature of SDL: its event system. SDL is set up so that when you type, or move the mouse in the SDL application, or send it a signal, it generates an event. Your program then checks for these events if it wants to handle user input. Your program can also make up events to send to the SDL event system. This is especially useful when multithread programming with SDL, which we'll see in Tutorial 4. In our program, we're going to poll for events right after we finish processing a packet. For now, we're just going to handle the SDL_QUIT event so we can exit:

SDL_Event event;
av_free_packet(&packet);
SDL_PollEvent(&event);
switch(event.type) {
case SDL_QUIT:
    SDL_Quit();
    exit(0);
    break;
default:
    break;
}

And there we go! Get rid of all the old cruft, and you're ready to compile:

g++ -std=c++14 -o tutorial03 tutorial03.cpp -I/INCLUDE_PATH -L/LIB_PATH -lavutil -lavformat -lavcodec -lswscale -lswresample -lavdevice -lz -lavutil -lm -lpthread -ldl
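For orientation, here is a minimal sketch of how the pieces above fit into a decode-and-display loop. It assumes the demuxing and decoder setup from the previous tutorial (fmt_ctx and video_stream_idx are assumed names from that setup, alongside video_dec_ctx, sws_ctx, frame, video_dst_data/video_dst_linesize and the SDL objects created above), and it uses the current send/receive decoding API (av_packet_unref is the current replacement for av_free_packet). Treat it as an outline rather than drop-in code:

```c
AVPacket packet;
while (av_read_frame(fmt_ctx, &packet) >= 0) {
    if (packet.stream_index == video_stream_idx) {
        /* Feed the packet to the decoder and drain any finished frames. */
        if (avcodec_send_packet(video_dec_ctx, &packet) >= 0) {
            while (avcodec_receive_frame(video_dec_ctx, frame) >= 0) {
                /* Convert the decoded frame to YUV420P for the SDL texture. */
                sws_scale(sws_ctx, (const uint8_t* const*)frame->data, frame->linesize,
                          0, video_dec_ctx->height, video_dst_data, video_dst_linesize);

                SDL_UpdateYUVTexture(sdlTexture, &sdlRect,
                                     video_dst_data[0], video_dst_linesize[0],
                                     video_dst_data[1], video_dst_linesize[1],
                                     video_dst_data[2], video_dst_linesize[2]);
                SDL_RenderClear(sdlRenderer);
                SDL_RenderCopy(sdlRenderer, sdlTexture, NULL, &sdlRect);
                SDL_RenderPresent(sdlRenderer);
            }
        }
    }
    av_packet_unref(&packet);

    /* Keep the window responsive and let the user quit. */
    SDL_Event event;
    while (SDL_PollEvent(&event)) {
        if (event.type == SDL_QUIT) {
            SDL_Quit();
            exit(0);
        }
    }
}
```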
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2237739861011505, "perplexity": 7237.000465145043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301737.47/warc/CC-MAIN-20220120100127-20220120130127-00416.warc.gz"}
https://www.intmath.com/blog/learning/rate-this-exchange-79
# Rate this exchange… Who really solved it?

By Murray Bourne, 23 May 2005

## Me

Posted on 14 May 2005 10:08 PM

Last night I uploaded newer files via FTP for IntMath and they appeared correctly in the FTP program as having been uploaded. But there was no change to the files when requesting them with a browser. I then used File Manager in cPanel with some success - the changes appeared. Same thing this morning - but without success. It seems that they are cached somewhere and the newer files are not appearing. Of course I have cleared the cache in my browsers...

Also, there are some strange things happening with log files and error files in cPanel - I gather they are due to the updates you are making (which are appreciated :-))

## Support

Posted on 14 May 2005 10:33 PM

Hi Murray, we just moved all clients from Hobbes to the new server so you may have lost some data on the move. Can you upload them again?

## Me

Posted on 15 May 2005 12:02 AM

I tried again but still the same deal - shows as correctly uploaded but not showing in the browser.

## Support

Posted on 15 May 2005 12:52 AM

Make sure you are using 72.9.235.194 as the FTP login info.

## Me

Posted on 15 May 2005 04:13 AM

I was using "squarecirclez.com" before and changed it to 72.9.235.194 just now. The files showing on the server were the new ones. I verified this in cPanel's File Manager - which also shows the new ones. Are the uploaded files going to the new server but the browser request is going to the old server??

## Support

Posted on 15 May 2005 06:40 AM

Yes, that could be the problem. Please ping squarecirclez.com and tell us what IP it is resolving to for you. Regards

## Me

Posted on 15 May 2005 08:13 AM

It is resolving 72.9.235.194... I'm wondering what you are seeing at that end. The page www.intmath.com/index.php should have the following at the top of the code:

<style type="text/css" media="screen">
@import url( includes/math.css);
</style>

The old file does not.

## Support

Posted on 15 May 2005 08:32 AM

No, it does not have that at the top. Try using FTP with 72.9.235.194 and then the page should be updated.

## Me

Posted on 15 May 2005 09:06 AM

Pls refer to my 04:13 reply - I DID use 72.9.235.194 then and tried it again a minute ago. I can see the files are uploaded properly... I even tried removing the index file and replacing it again - the correct one is there in the FTP app. Is anyone else having the same problem?

## Support

Posted on 15 May 2005 09:08 AM

Are you sure the code you are uploading has that code at the top? We have a few other customers having this problem due to the DNS changes, but it is because their ISP is not resolving their websites to the new server yet.

## Me

Posted on 15 May 2005 09:17 AM

Yes, I have had plenty of time to quadruple check - all day in fact. The new file is 10793 bytes - correctly showing in the FTP client. I also uploaded the math.css script - browser says it is not there. As for the ISP possibility - doesn't follow if you also cannot see the correct script.

## Support

Posted on 15 May 2005 09:22 AM

I recommend you wait another 24-48 hours for the DNS to fully propagate.

## Me

Posted on 15 May 2005 08:59 PM

I had a thought overnight based on your suggestion to ping squarecirclez.com. I pinged intmath.com (the addon where the new files were going) and it returned 67.18.208.100. I then pointed the FTP app to that IP and found it did not have the new files. I uploaded the new files and they appeared just fine in the browser.
So I'm thinking now the files were going into the new server, and what I have just done is to put the files into the old server. (I just need to get my head around what happened.) Correct? In which case your suggestion to wait seems the most appropriate...

## Support

Posted on 15 May 2005 09:08 PM

Yes, that is correct. Some ISPs will take much longer to make DNS changes (up to 4 days sometimes). Regards

## Me

Posted on 18 May 2005 09:38 AM

Well, here it is 4 days and still the old server is in use. Please expedite - I am having to maintain 2 servers - one that I cannot see - doesn't make sense.

## Support

Posted on 18 May 2005 10:26 AM

Please ping the website now and tell us what IP it resolves to.

## Me

Posted on 18 May 2005 08:03 PM

Same result as all week. The addon account, intmath.com, returns the old server 67.18.208.100.

## Support

Posted on 18 May 2005 08:39 PM

In MS-DOS type ipconfig /flushdns. Let me know if that works.

## Me

Posted on 18 May 2005 08:44 PM

Response was "Successfully flushed the DNS Resolver Cache", however when pinging both of my domains there is NO change.

## Support

Posted on 18 May 2005 09:14 PM

That is strange; anywhere I check (from my PC, from our servers, from dnsreport, from whois servers) all report that domain resolving to the new server. The only thing I can think of is removing tmp files, even offline ones. Are you using a proxy? That may also be the problem.

## Me

Posted on 18 May 2005 10:53 PM

?? A whois.sc check just now gives me:

*******************************************
INTMATH.COM
Website Title: Interactive Mathematics - Learn math while you play with it!
Meta Description: Learn mathematics while playing with it! Uses LiveMath, Flash and Scientific Notebook to enhance mathematics lectures. You can perform mathematical 'what ifs'...
Meta Keywords: mathematics, math, maths, calculus, integration, application, application of integration, LiveMath, interactive, volume, area, centroid, moment of inertia, work, Hooke's Law, spring, charge, HIC, head injury criterion, car accident, auto accident
Response Code: 200
SSL Cert: host1.anonymous-web-server.com expires in 199 days
Website Status: Active
Reverse IP: Web server hosts 407 websites (reverse ip tool requires free login)
Server Type: Apache (Spry.com also uses Apache)
IP Address: 67.18.208.100 (ARIN & RIPE IP search)
IP Location: - Theplanet.com Internet Services Inc
*************************************************

So I cannot see why your side says the IP is the new server. But if I enter "www.intmath.com" I am still getting the old server. This is consistent with the whois report. It would appear to me that thePlanet.com need to change the setting for this domain. I am getting sick of the time wasted with this issue. Please fix.

## Support

Posted on 18 May 2005 11:37 PM

Hi, DNS is recreated; this should fix your issue -- allow a few hrs and everything will be good. Regards
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4106060266494751, "perplexity": 4933.985964757073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889325.32/warc/CC-MAIN-20180120043530-20180120063530-00250.warc.gz"}
https://gamedev.stackexchange.com/questions/110296/matching-collider-to-character-mesh-size-in-different-postures
# Matching collider to character mesh size in different postures

What's the optimal method in Unity to match the size of a Capsule Collider to the actual size of the game object it is part of?

To understand what I'm trying to do, let me give an example: I have a 3D player character with a Capsule Collider that is used to check collision. The character can go into various postures, e.g. crouch, prone, jump. I'm having difficulty matching the height of the collider to the height of the character when crouching or jumping, because depending on the animations used the character's height is smaller by varying amounts when crouched or jumping.

All I could do so far is set a multiplier for a specific posture by which the collider scales, but this is very inefficient, especially when the character tries to jump onto higher ground. There seems to be no way of knowing the exact size of a mesh. I'm not sure if it might help to loop through all child meshes (torso, hair, etc.) in a game object and obtain their bounds through SkinnedMeshRenderer.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45646440982818604, "perplexity": 965.8892179345148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541301014.74/warc/CC-MAIN-20191215015215-20191215043215-00176.warc.gz"}
http://www.physicsforums.com/showpost.php?p=164678&postcount=1
I'm reading the first edition of Mechanics by Landau et al, published in 1960. Just before equation 3.1 on page 5 it says exactly this:

"Since space is isotropic, the Lagrangian must also be independent of the direction of v, and is therefore a function only of its magnitude, i.e. of $\mathbf{v}^2 = v^2$: $L = L(v^2)$ (3.1)"

This seems very cryptic to me, since the magnitude is $\sqrt{\mathbf{v}^2} = v$. Could someone fill in the missing details for me please?

Funky
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9012216925621033, "perplexity": 592.9934436584748}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272256.16/warc/CC-MAIN-20140728011752-00463-ip-10-146-231-18.ec2.internal.warc.gz"}
https://forum.polymake.org/viewtopic.php?f=8&p=1997&sid=9f760824d04ae7d1732f442840a71830
## Kazarnovskii Pseudovolume

gino (26 Apr 2017):

Good afternoon, I would like to know if it is possible to use polymake in order to compute the Kazarnovskii pseudovolume of 4-dimensional polytopes.

If $\Gamma$ is a polytope in $\mathbb C^2$, the Kazarnovskii pseudovolume $P_2(\Gamma)$ is, by definition, the sum $\frac{1}{\pi}\sum_\Delta \rho(\Delta)vol_2(\Delta)\psi(\Delta)$, as $\Delta$ runs in the set of 2-dimensional faces of $\Gamma$, where:

- $\rho(\Delta)=1-\langle v_1,v_2\rangle^2$, with $\{v_1,v_2\}$ an orthonormal basis (with respect to the scalar product $Re\langle\,,\rangle$ given by the real part of the standard hermitian one) of the plane parallel to $\Delta$ and passing through the origin;
- $vol_2(\Delta)$ is the surface area of $\Delta$;
- $\psi(\Delta)$ is the outer angle of $\Gamma$ at $\Delta$.

So $P_2(\Gamma)$ is just a weighted version of the 2nd intrinsic volume of $\Gamma$, taking into account the position of $\Gamma$ with respect to the complex structure of the ambient space.

My question is the following: is polymake able to perform the necessary linear algebra computation on the set of the ridges of $\Gamma$?

### Re: Kazarnovskii Pseudovolume

joswig (Main Author):

This is a special computation which is not supported by polymake right away.

One thing which makes this a bit delicate is that it needs to be implemented with floats. By design polymake is primarily about exact computations. Therefore typical float linear algebra is dramatically underdeveloped in polymake; essentially, the only non-trivial algorithm is singular value decomposition (and thus solving systems of linear equations with reasonable accuracy). It seems doable though, by writing suitable C++ client code.

Just a general warning: starting out with float coordinates for your points or inequalities, polymake (by default) will convert them into exact rational numbers. The output can later be converted to floats; see, e.g., https://forum.polymake.org/viewtopic.php?f=9&t=7&p=24&hilit=float#p24.
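As a rough illustration of the float linear algebra the question asks about (plain standalone C++, not polymake client code; extracting the 2-faces and their spanning edge vectors is assumed to have been done already): two edge vectors of a 2-face of $\Gamma \subset \mathbb{R}^4 \cong \mathbb{C}^2$ can be orthonormalized with Gram-Schmidt before evaluating $\rho(\Delta)$.

```cpp
#include <array>
#include <cmath>
#include <iostream>

using Vec4 = std::array<double, 4>;  // a point of C^2 viewed as (Re z1, Im z1, Re z2, Im z2)

// Real scalar product Re<a, b> on R^4.
double dot(const Vec4& a, const Vec4& b) {
    double s = 0;
    for (int i = 0; i < 4; ++i) s += a[i] * b[i];
    return s;
}

// Gram-Schmidt: turn two spanning edge vectors of a 2-face into an orthonormal basis.
void orthonormalize(Vec4& u, Vec4& v) {
    double nu = std::sqrt(dot(u, u));
    for (double& x : u) x /= nu;
    double p = dot(v, u);
    for (int i = 0; i < 4; ++i) v[i] -= p * u[i];
    double nv = std::sqrt(dot(v, v));
    for (double& x : v) x /= nv;
}

int main() {
    Vec4 u = {1, 0, 1, 0}, v = {1, 2, 0, 1};  // two made-up edge vectors of a 2-face
    orthonormalize(u, v);
    std::cout << "|u| = " << std::sqrt(dot(u, u))
              << ", |v| = " << std::sqrt(dot(v, v))
              << ", <u,v> = " << dot(u, v) << "\n";  // expect 1, 1, ~0
}
```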
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8092167973518372, "perplexity": 758.7432305511115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424586.47/warc/CC-MAIN-20170723182550-20170723202550-00618.warc.gz"}
http://gmatclub.com/forum/for-a-recent-play-performance-the-ticket-prices-were-25-pe-165280.html?sort_by_oldest=true
# For a recent play performance, the ticket prices were $25 pe

Math Expert (31 Dec 2013, 06:21):

The Official Guide For GMAT® Quantitative Review, 2ND Edition

For a recent play performance, the ticket prices were $25 per adult and $15 per child. A total of 500 tickets were sold for the performance. How many of the tickets sold were for adults?

(1) Revenue from ticket sales for this performance totaled $10,500.
(2) The average (arithmetic mean) price per ticket sold was $21.

Data Sufficiency
Question: 14
Category: Algebra; Simultaneous equations
Page: 154
Difficulty: 650

GMAT Club is introducing a new project: The Official Guide For GMAT® Quantitative Review, 2ND Edition - Quantitative Questions Project. Each week we'll be posting several questions from The Official Guide For GMAT® Quantitative Review, 2ND Edition and then after a couple of days we'll provide the Official Answer (OA) to them along with a solution. We'll be glad if you participate in the development of this project:

2. Please vote for the best solutions by pressing the Kudos button;
3. Please vote for the questions themselves by pressing the Kudos button;
4. Please share your views on the difficulty level of the questions, so that we have the most precise evaluation.

Thank you!

Math Expert (31 Dec 2013, 06:22):

SOLUTION

For a recent play performance, the ticket prices were $25 per adult and $15 per child. A total of 500 tickets were sold for the performance. How many of the tickets sold were for adults?

(1) Revenue from ticket sales for this performance totaled $10,500 --> 25a + 15c = 10,500 --> 25a + 15(500 - a) = 10,500. We can solve for a. Sufficient.

(2) The average (arithmetic mean) price per ticket sold was $21 --> (25a + 15c)/500 = 21 --> 25a + 15c = 10,500. The same info as above. Sufficient.

Answer: D.
Reply (Intern, 01 Jan 2014):

After reading the problem we get: x + y = 500 and 25x + 15y = (the total amount spent on tickets), which is what we need in order to answer the question.

(1) is sufficient: it gives the revenue, which is the sum of the prices of all tickets.
(2) is also sufficient: from the average price and the total number of tickets we can find the total revenue.

Reply (Moderator, 01 Jan 2014):

Let A be the number of adult tickets and C the number of child tickets. Given A + C = 500, we need to find A. The adult ticket price is $25 and the child ticket price is $15.

From statement 1: 25A + 15C = 10,500 together with A + C = 500 gives two equations in two unknowns, so we can solve for A. Sufficient, which rules out B, C and E.

From statement 2: the average price of $21 is $6 more than the child price and $4 less than the adult price, and the gap between the two prices is $10. By the weighted-average principle, the number of adult tickets is (6/10) * 500 = 300 = A. Sufficient.

Just for clarity, statement 1 can be finished as well: 5A + 3C = 2100 (Eq. 1) and A + C = 500, i.e. 3A + 3C = 1500 (Eq. 2). Subtracting Eq. 2 from Eq. 1 gives 2A = 600, so A = 300. Answer: D.

Reply (Manager, 02 Jan 2014):

Since the question asks for the number of adult tickets, let A be that number. The total revenue is 25A + 15C; since A + C = 500, revenue = 25A + 15(500 - A) = 10A + 7500, and the average price per ticket is (10A + 7500)/500. So knowing either the revenue or the average price per ticket determines A.

(1) gives the revenue, so the average price is 10,500/500 = 21. Sufficient.
(2) gives the average price as $21 directly. Sufficient. Hence (D).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3369596004486084, "perplexity": 10005.447355481527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119650516.39/warc/CC-MAIN-20141024030050-00227-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.lessonplanet.com/teachers/using-numbers
Using Numbers

In this math and language arts worksheet, students solve 5-part math problems which are written in word form. These simple math problems need to have their answers expressed in word form as well. Example: Ten plus two, add fifteen, add forty-one, subtract five, multiply by three (one hundred eighty-nine).

Resource details: Grades 4th–5th; Subjects: English Language Arts and others; Resource type: Worksheet; Usage permissions: Public Domain.
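For reference (not part of the original description), the worked example evaluates as (10 + 2 + 15 + 41 - 5) × 3 = 63 × 3 = 189, i.e. one hundred eighty-nine.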
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8012592196464539, "perplexity": 18110.854017875954}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813626.7/warc/CC-MAIN-20180221143216-20180221163216-00648.warc.gz"}
http://math.stackexchange.com/users/45/ben-alpert?tab=activity&sort=posts
Ben Alpert (reputation 2,174), recent activity on Mathematics Stack Exchange:

- May 13: answered "Probability of Heads in a coin"
- Jun 23: asked "Use of 'inverse' to mean reciprocal"
- Jun 21: asked "Proving Stewart's theorem without trig"
- Mar 5: answered "Why is the 2nd derivative written as $\frac{\mathrm d^2y}{\mathrm dx^2}$?"
- Feb 13: asked "Finding $\int_0^{\pi/2} \sin x\,dx$"
- Feb 13: asked "Coloring points on an n-gon"
- Jul 30: asked "Which average to use? (RMS vs. AM vs. GM vs. HM)"
- Jul 30: answered "Proof that $n^3+2n$ is divisible by 3"
- Jul 30: answered "Proof that $n^3+2n$ is divisible by 3"
- Jul 28: answered "How do I convert from Cartesian to conical coordinates?"
- Jul 25: asked "Picking cakes if we need at least one of each type"
- Jul 25: answered "Combinations of selecting n objects with k different types"
- Jul 25: asked "Probability that two people see each other at the coffee shop"
- Jul 25: answered "Probability that a stick randomly broken in two places can form a triangle"
- Jul 24: answered "Balance chemical equations without trial and error?"
- Jul 23: asked "Counting how many hands of cards use all four suits"
- Jul 22: answered "Can there be two distinct, continuous functions that are equal at all rationals?"
- Jul 22: answered "Looking for a book similar to 'Think of a Number'"
- Jul 22: answered "What can we conclude from correlation?"
- Jul 21: asked "Tiling a 3 by 2n rectangle with dominoes"
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.702736496925354, "perplexity": 2739.1566098354074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246651873.94/warc/CC-MAIN-20150417045731-00026-ip-10-235-10-82.ec2.internal.warc.gz"}
http://mathematica.stackexchange.com/questions/5660/exporting-plots-with-cyrillic-text-elements-to-pdf?answertab=active
# Exporting plots with Cyrillic text elements to pdf

I'm trying to export a plot with some added notes which happen to be in Russian. Using Mathematica 8.0.4 on Windows XP, I evaluate

    Export["smt.pdf", "Текст на русском", CharacterEncoding -> "WindowsCyrillic"]

which gives nonsense as output. Is there a way to solve this?

Comments:

- Possibly related: Mathematica exports to PDF 1.4, which doesn't deal with certain kinds of OpenType fonts (scroll to the bottom of the linked answer). (Verbeia, May 17 '12)
- Works for me on Win7, Mathematica 8.0.1 (screenshot). (Ajasja, May 17 '12)
- @user829438 That was just a simple way to demonstrate that it works. Export and ExportString produce the same data; Export will write a readable PDF on my machine (WinXP, Mathematica 8.0.4, like yours). Test Export["test.pdf", "Текст на русском"] again (precisely as I wrote it, without CharacterEncoding), and if it still doesn't give you readable output (try opening it with Adobe Reader, that's what I used), then I have one more guess: set the system language to US English, reboot (just in case), and try again. I know it affects some things, e.g. parsing dates. (Szabolcs, May 17 '12)
- So that solves it, thanks. @Szabolcs, would you post it as a solution, or is this problem too local? (iav, May 17 '12)
- @Szabolcs Brilliant! It works, but to be precise, we must change the "Language for applications which do not support Unicode" to US English. I should say that after changing this option my computer now runs only in "Safe mode", and changing the option back to its default value does not help (an ordinary start of Windows produces a blue screen). I use Mathematica 8.0.4 on Windows XP SP3. Now I will reinstall Windows since I cannot load it in the usual way. (Alexey Popkov, Jul 30 '12)

Answer:

It seems that the problem can be solved by setting an explicit value for the CharacterEncoding global front-end option (checked with Mathematica 8.0.4 and 9.0.0):

    SetOptions[$FrontEnd, CharacterEncoding -> "UTF8"];
    Export["test.pdf", "кириллический текст"]

An equivalent way (without changing the global front-end settings):

    Export["test.pdf", Style["кириллический текст", CharacterEncoding -> "UTF8"]]

Instead of "UTF8" one may set "UTF-8" or "ASCII" with the same effect. The drawback of this approach is that all non-English letters are outlined.

- Thanks, @Alexey. (iav, Mar 12 '13)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3827265202999115, "perplexity": 5280.094964211182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929422.8/warc/CC-MAIN-20150521113209-00034-ip-10-180-206-219.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-write-an-equation-of-a-line-through-the-the-point-5-3-and-is-parallel#635765
# How do you write an equation of a line through the point (-5, -3) that is parallel to the line 7x+2y=5?

Answer 1 (Jun 26, 2018): $2y + 7x = -41$

#### Explanation:

Since the lines are parallel, they have the same gradient. Start by finding the gradient of the given line by putting it into the form $y = mx + c$:

$2y = 5 - 7x$, so $y = \frac{5}{2} - \frac{7}{2}x$.

The gradient is the coefficient of $x$, which is $-\frac{7}{2}$, so $m = -\frac{7}{2}$.

The next step is finding the y-intercept $c$, using the gradient and the point provided. Substituting $y = -3$, $x = -5$ and $m = -\frac{7}{2}$ into $y = mx + c$:

$-3 = \left(-\frac{7}{2}\right)\times(-5) + c = \frac{35}{2} + c$, so $c = -\frac{41}{2}$.

Now substitute back: $y = -\frac{7}{2}x - \frac{41}{2}$. Multiplying through by 2 gives $2y = -7x - 41$, i.e. $2y + 7x = -41$.

Answer 2 (Jun 26, 2018): $7x + 2y = -41$

#### Explanation:

Parallel lines have equal slopes. The equation of a line in slope-intercept form is $y = mx + b$, where $m$ is the slope and $b$ the y-intercept. Rearrange $7x + 2y = 5$ into this form: subtract $7x$ from both sides and divide by 2:

$2y = -7x + 5$, so $y = -\frac{7}{2}x + \frac{5}{2}$ (slope-intercept form), with slope $m = -\frac{7}{2}$.

The partial equation is $y = -\frac{7}{2}x + b$. To find $b$, substitute $(-5, -3)$ into the partial equation:

$-3 = \frac{35}{2} + b \Rightarrow b = -\frac{6}{2} - \frac{35}{2} = -\frac{41}{2}$.

So $y = -\frac{7}{2}x - \frac{41}{2}$ in slope-intercept form. Multiplying through by 2: $2y = -7x - 41$, i.e. $7x + 2y = -41$ in standard form.
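As a quick check (not part of the original answers), both conditions hold for the result: the point satisfies $7(-5) + 2(-3) = -35 - 6 = -41$, and $7x + 2y = -41$ has slope $-\frac{7}{2}$, the same as $7x + 2y = 5$.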
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 37, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5846587419509888, "perplexity": 2042.757487805619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00674.warc.gz"}
https://rd.springer.com/article/10.1007%2Fs40819-018-0483-0
# Simulation of Natural Convective Boundary Layer Flow of a Nanofluid Past a Convectively Heated Inclined Plate in the Presence of Magnetic Field

Open Access Original Paper

## Abstract

This paper deals with the numerical simulation of transient magnetohydrodynamic natural convective boundary layer flow of a nanofluid over an inclined plate. In the modeling of nanofluids, dynamic effects including the Brownian motion and thermophoresis are taken into account. Numerical solutions have been computed via the Galerkin finite element method. The effects of angle of inclination, buoyancy-ratio parameter, Brownian motion, thermophoresis and magnetic field are taken into account and controlled by non-dimensional parameters. To compute the rate of convergence and the error of the computed numerical solution, the double mesh principle is used. Similarity solutions are calculated and presented graphically for the non-dimensional velocity, temperature, and local rates of heat and mass transfer with the pertinent parameters. The modified Nusselt number decreases with increasing inclination angle, buoyancy-ratio parameter, Brownian motion and thermophoresis parameter, whereas it increases with increasing Prandtl number. Validation of the results is achieved against earlier results for forced convective flow and non-magnetic studies. Such problems have several applications in engineering and petroleum industries, such as electroplating, chemical processing of heavy metals and solar water heaters. External magnetic fields play an important role in electrical power generation, inclination/acceleration sensors, and fine-tuning of final materials to industrial specification because of their controlling effect on the flow characteristics of nanofluids.

## Keywords

Nanofluid, MHD, Inclined plate, Similarity solution, Convective boundary condition, Finite element method, Double mesh principle

## Roman

- $$B_o$$: Uniform magnetic field strength
- $$D_B$$: Brownian diffusion coefficient
- $$D_T$$: Thermophoresis diffusion coefficient
- $$E_N$$: Error
- $$f$$: Dimensionless stream function
- $$g_e$$: Acceleration due to gravity
- $$h$$: Dimensionless velocity function
- $$h_e$$: Step size
- $$h_f$$: Heat transfer coefficient
- $$k$$: Thermal conductivity
- $$Ln$$: Nanofluid Lewis number
- $$M$$: Dimensionless magnetic parameter
- $$Nb$$: Brownian motion parameter
- $$Nc$$: Convective heating parameter
- $$Nr$$: Buoyancy-ratio parameter
- $$Nt$$: Thermophoresis parameter
- $$Nu_x$$: Local Nusselt number
- $$Nur$$: Reduced Nusselt number
- $$Pr$$: Prandtl number
- $$Ra_x$$: Local Rayleigh number
- $$r^N$$: Rate of convergence
- $$Sh_{x,n}$$: Local nanoparticle Sherwood number
- $$Shrn$$: Reduced nanoparticle Sherwood number
- $$T$$: Fluid temperature
- $$T_f$$: Hot fluid temperature
- $$T_w$$: Fluid temperature at the wall
- $$T_\infty$$: Ambient temperature
- $$u, v$$: Velocity components along the x- and y-directions

## Greek symbols

- $$\alpha _m$$: Thermal diffusivity
- $$\beta$$: Thermal expansion coefficient
- $$\delta$$: Acute angle of the plate to the vertical
- $$\mu$$: Viscosity of the nanofluid
- $$\nu$$: Kinematic viscosity of the fluid
- $$\phi$$: Dimensionless nanoparticle volume fraction
- $$\hat{\phi }$$: Nanoparticle volume fraction
- $$\hat{\phi }_w$$: Nanoparticle volume fraction at the wall
- $$\hat{\phi }_\infty$$: Ambient nanoparticle volume fraction
- $$\psi$$: Stream function
- $$\rho _f$$: Density of the nanofluid
- $$\sigma _{nf}$$: Electrical conductivity of the nanofluid
- $$\tau$$: Ratio between the effective heat capacity of the nanoparticle material and the heat capacity of the fluid, defined by $$(\rho c_p)_p/(\rho c_p)_f$$
- $$\theta$$: Dimensionless temperature

## Subscripts

- $$f$$: Base fluid
- $$nf$$: Nanofluid
- $$p$$: Nanoparticle
- $$w, \infty$$: Condition at the surface and in the free stream, respectively
## Introduction

Nanoparticles provide a connection between molecular structure and bulk materials. When nanoparticles are strategically deployed in a base fluid, the resulting nanofluid has been shown to achieve a remarkable enhancement in thermal conductivity, as introduced by Choi [8]. This has made nanofluids attractive in various areas of recent technology, including heat exchangers [15], aerospace cooling systems [20], and energy systems [17]. The two most common approaches to investigating heat and mass transfer characteristics are the Tiwari and Das model [26] (which requires only momentum and energy equations and incorporates nanoparticle effects via a volume fraction parameter alone) and the Buongiorno non-homogeneous model [6] (which introduces a separate equation for the nanoparticle concentration). Several researchers have worked with these models, including Hatami et al. [14], Goyal and Bhargava [12], and Hamad et al. [13].

Natural convection exerts a significant influence on the heat and mass transfer analysis in nanofluid problems. In most fluid flow processes, transport phenomena occur due to the combined effect of heat and mass transfer, through buoyancy effects arising from density variation, which in turn is due to variation in temperature and/or particle concentration. The classical problem of natural convective flow of a regular fluid over a vertical plate was first investigated theoretically by Pohlhausen [22]. Thereafter, Bejan [5] incorporated the effect of the Prandtl number on boundary layers in natural convective fluid flow problems. An extension of the classical problem [22] to incorporate the effect of heat and mass transfer was investigated by Khair and Bejan [16]. Later, Aziz and Khan [4] numerically investigated the free convective boundary layer flow of a nanofluid over a vertical plate. Their analysis showed that the flow pattern and the heat and mass transfer are strongly influenced by the pertinent parameters.

Lately, problems of free convective fluid flow over a plate at different inclination angles have been frequently encountered in engineering devices such as solar water heaters and inclination/acceleration sensors. Most researchers [1, 3, 7] observed that fluid flow through the medium is favoured in the case of an inclined plate, as inclination to the vertical reduces the drag force. A generalized formulation was given by Ali et al. [2] for the combined effect of chemical reaction and radiation on MHD free convective flow of a viscous fluid over an inclined plate. They found that the flow features depend not only on the magnitude of inclination but also on the distance from the leading edge. Later, Narahari et al. [19] studied free convective flow of a nanofluid over an isothermal inclined plate and observed that the thickness of the momentum boundary layer decreases with an increase in the angle of inclination, whereas the temperature and nanoparticle volume fraction increase with increasing inclination angle.
The study of flow and heat transfer under an applied magnetic field is a significant research topic due to its numerous scientific, industrial and biological applications, such as crystal growth, cooling of metallic plates, production of magnetorheostatic materials known as smart fluids, metal casting, and liquid-metal cooling blankets for fusion reactors. The rate of heat transfer can be controlled by MHD flow in an electrically conducting fluid, and hence a desired cooling effect can be achieved. Different types of thermal boundary conditions were used by Sathiyamoorthy and Chamkha [23] to study steady, laminar, two-dimensional natural convective flow in the presence of an inclined magnetic field in a square enclosure filled with liquid gallium. Recently, Goyal and Bhargava [9] numerically investigated MHD viscoelastic nanofluid flow past a stretching sheet with a heat source/sink and partial slip. It was observed in that study that the modified Nusselt number is directly proportional to the Brownian motion and thermophoretic parameters and inversely related to all other parameters.

The study of a convectively heated inclined plate plays an important role in many processes, such as the manufacturing of tetrapacks, glass fibres, plastic and rubber sheets, and the solidification of castings. Efficient manufacturing of such materials involves various physical phenomena, including magnetohydrodynamic (MHD), thermal and mass diffusion effects at the nanoscale. A mathematical model provides a robust approach to improving the interpretation of the interdisciplinary transport phenomena in such systems. Hence, motivated by this, the present study develops a mathematical model for natural convective boundary layer flow of a nanofluid past a convectively heated inclined plate in the presence of a magnetic field. The Buongiorno nanofluid model [6] is used, which emphasizes the Brownian motion and thermophoresis effects; this approach also introduces a separate equation for nanoparticle species diffusion. Using suitable similarity transformations for velocity, temperature and nanoparticle concentration, the equations governing flow, heat and mass transfer were transformed into a set of ordinary differential equations. The resulting equations, subject to the boundary conditions, were solved numerically using the conventional finite element method (FEM). The numerical investigation is carried out for different thermophysical parameters, namely the magnetic parameter, buoyancy-ratio parameter, convective heating parameter, Prandtl number, nanofluid Lewis number, Brownian motion parameter, and thermophoresis parameter. The obtained results are validated by comparison with the work of other authors reported in the literature. The rates of heat and nano-mass transfer were computed and are shown in both tabular and graphical form.

## Problem Formulation

### Governing Equations and Boundary Conditions

The flow was assumed to be steady, incompressible, two-dimensional and laminar, with constant physical properties. The semi-infinite plate was inclined at an acute angle $$\displaystyle {\delta }$$ to the vertical axis. With the $$\displaystyle {x}$$-axis measured along the plate, a magnetic field of uniform strength $$\displaystyle {B_o}$$ was applied in the $$\displaystyle {y}$$-direction (normal to the flow direction). The gravitational acceleration $$\displaystyle {g_e}$$ was acting downward.
In addition, the buoyancy effects were included in momentum transfer with the usual Boussinesq approximation. It was also assumed that the lower side of the plate was heated by convection through a hot fluid at temperature $$\displaystyle {T_f}$$ and with coefficient of heat transfer $$\displaystyle {h_f}$$. It was assumed that both the nanoparticles and the base fluid are in thermal equilibrium. In the vicinity of the plate, three different types of boundary layers (momentum, thermal and nanoparticle volume fraction) were formed. The physical configuration of the problem is shown in Fig. 1. Upon incorporating the main assumptions into the conservation equations for mass, momentum, thermal energy and nanoparticle species, the dimensional set of governing equations is written as: \begin{aligned} \frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}= & {} 0 \end{aligned} (1) \begin{aligned} \rho _f\bigg (u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}\bigg )= & {} \mu \frac{\partial ^2 u}{\partial y^2}-\sigma _{nf} B_o^2 u + [(1-\hat{\phi }_\infty )\rho _{f_\infty }\beta g_e (T-T_\infty )\nonumber \\&-\, (\rho _p-\rho _{f_\infty } )g_e (\hat{\phi }-\hat{\phi }_\infty ))]\cos \delta \end{aligned} (2) \begin{aligned} u\frac{\partial T}{\partial x}+v\frac{\partial T}{\partial y}= & {} \alpha _m\frac{\partial ^2 T}{\partial y^2}+\tau \bigg [D_B\frac{\partial {\hat{\phi }}}{\partial y}\frac{\partial T}{\partial y}+\frac{D_T}{T_\infty }\bigg (\frac{\partial T}{\partial y}\bigg )^2\bigg ] \end{aligned} (3) \begin{aligned} u\frac{\partial \hat{\phi }}{\partial x}+v\frac{\partial \hat{\phi }}{\partial y}= & {} D_B\frac{\partial ^2 \hat{\phi }}{\partial y^2}+\frac{D_T}{T_\infty }\frac{\partial ^2 T}{\partial y^2} \end{aligned} (4) where u and v are the velocity components parallel and perpendicular to the plate, respectively, $$B_o$$ is uniform magnetic field strength, $$\hat{\phi }$$ is the local solid volume fraction of the nanoparticles, $$\beta$$ is volumetric thermal expansion coefficient of the base fluid, $$D_B$$ is the Brownian diffusion coefficient, $$D_T$$ is the thermophoretic diffusion coefficient, and T is the local temperature. Continuity, momentum, thermal energy, and nanoparticle species equations for nanofluid are represented by Eqs. (1)–(4), respectively. The terms (from left to right) in the right side of Eq. (2) represent the stress component due to viscosity, the convective acceleration and the force due to the magnetic field. The first and second terms in the square bracket in (2) represent the positive (upward) buoyancy term due to the thermal expansion of the base fluid and the negative (downward) buoyancy term due to the variation in densities of the nanoparticles and the base fluid, respectively. The terms in the left hand side of Eq. (3) are the convection terms due to temperature. On the other hand, the terms on right side (left to right) represent the heat enthalpy, diffusion of thermal energy due to Brownian diffusion and thermophoretic effect. A similar interpretation could be given to the terms on the right hand side of Eq. (4). 
The boundary conditions may be written as \begin{aligned} u=0,~v=0,~\hat{\phi }= & {} \hat{\phi }_w,~-k\frac{\partial T}{\partial y}=h_f(T_f-T)~~ at ~~ y=0 \end{aligned} (5) \begin{aligned} u=0,~v=0,~\hat{\phi }= & {} \hat{\phi }_\infty ,\quad T=T_\infty ~~ as ~~ y\rightarrow \infty \end{aligned} (6) ### Similarity Transformations By introducing the stream function $$\displaystyle {\psi }$$, with $$\displaystyle {u=\partial \psi /\partial y}$$ and $$\displaystyle {v=-\partial \psi /\partial x}$$, the system of Eqs. (1)–(4) reduces to \begin{aligned}&0=\mu \frac{\partial ^3 \psi }{\partial y^3}-\rho _f\bigg (\frac{\partial \psi }{\partial y} \frac{\partial ^2 \psi }{\partial x\partial y}-\frac{\partial \psi }{\partial x}\frac{\partial ^2 \psi }{\partial y^2}\bigg )-\sigma _{nf} B_o^2 \frac{\partial \psi }{\partial y} + [(1-\hat{\phi }_\infty )\rho _{f_\infty }\beta g_e\nonumber \\&\quad (T-T_\infty )-(\rho _p-\rho _{f_\infty } g_e (\hat{\phi }-\hat{\phi }_\infty ))]\cos \delta \end{aligned} (7) \begin{aligned}&\frac{\partial \psi }{\partial y}\frac{\partial T}{\partial x}-\frac{\partial \psi }{\partial x}\frac{\partial T}{\partial y}=\alpha _m\frac{\partial ^2 T}{\partial y^2}+\tau \bigg [D_B\frac{\partial \hat{\phi }}{\partial y}\frac{\partial T}{\partial y}+\frac{D_T}{T_\infty }\bigg (\frac{\partial T}{\partial y}\bigg )^2\bigg ] \end{aligned} (8) \begin{aligned}&\frac{\partial \psi }{\partial y}\frac{\partial \hat{\phi }}{\partial x}-\frac{\partial \psi }{\partial x}\frac{\partial \hat{\phi }}{\partial y}=D_B\frac{\partial ^2 \hat{\phi }}{\partial y^2}+\frac{D_T}{T_\infty }\frac{\partial ^2 T}{\partial y^2} \end{aligned} (9) The following similarity transformation are used in order to non-dimensionlize the system of differential Eqs. (1)–(4): \begin{aligned} \eta =\frac{y}{x}Ra_x^{1/4},~ \psi =\alpha _m Ra_x^{1/4}f(\eta ), ~\theta (\eta )=\frac{T-T_\infty }{T_f-T_\infty },~ \phi (\eta )=\frac{\hat{\phi }-\hat{\phi }_\infty }{\hat{\phi }_w-\hat{\phi }_\infty } \end{aligned} (10) with the local Rayleigh number is defined as \begin{aligned} Ra_x=\frac{(1-\hat{\phi }_\infty ) \beta g_e (T_f-T_\infty ) x^3}{\nu \alpha _m} \end{aligned} (11) Now \begin{aligned} \dfrac{\partial \eta }{\partial x}= & {} -\frac{1}{4} \frac{y}{x^2}Ra_x^{1/4},\nonumber \\ \dfrac{\partial \eta }{\partial y}= & {} \frac{1}{x}Ra_x^{1/4}, \end{aligned} (12) \begin{aligned} u= & {} \frac{\partial \psi }{\partial y}=\alpha _mRa_x^{1/4}\frac{\partial f}{\partial \eta }\frac{\partial \eta }{\partial y}=\frac{\alpha _m}{x} Ra_x^{1/2}f'(\eta ),\nonumber \\ v= & {} -\frac{\partial \psi }{\partial x}=-\alpha _m\bigg (Ra_x^{1/4}\frac{\partial f}{\partial \eta }\frac{\partial \eta }{\partial x}+ \frac{1}{4}Ra_x^{-3/4}\frac{3Ra_x}{x} \bigg )\nonumber \\= & {} \frac{y}{4x^2}\alpha _m Ra_x^{1/2}f'(\eta )-\frac{3}{4x}\alpha _m Ra_x^{1/4}f(\eta ),\nonumber \\ \frac{\partial u}{\partial x}= & {} \frac{1}{2}\frac{\alpha _m}{x^2}Ra_x^{1/2}f'(\eta ) -\frac{y}{4x^2}\frac{\alpha _m}{x}Ra_x^{3/4}f''(\eta ),\nonumber \\ \frac{\partial u}{\partial y}= & {} \frac{\alpha _m}{x}Ra_x^{3/4} \frac{\partial f'}{\partial \eta }\frac{\partial \eta }{\partial y} =\frac{\alpha _m}{x^2}Ra_x^{3/4}f''(\eta ),\nonumber \\ \frac{\partial ^2 u}{\partial y^2}= & {} \frac{\alpha _m}{x^2} Ra_x^{3/4}\frac{\partial f''}{\partial \eta }\frac{\partial \eta }{\partial y} =\frac{\alpha _m}{x^3}Ra_x f'''(\eta ). \end{aligned} (13) Substituting (12), (13) in Eq. 
(1), we have \begin{aligned}&\frac{\alpha _m}{x}Ra_x^{1/2}f'\bigg (\frac{\alpha _m}{2x^2} Ra_x^{1/2}f'-\frac{\alpha _m}{4x}\frac{y}{x^2}Ra_x^{3/4}f''\bigg )\nonumber \\&\qquad -\bigg (-\frac{y}{4x^2}\alpha _mRa_x^{1/2}f'+\frac{3}{4} \frac{\alpha _m}{x}Ra_x^{1/4}f\bigg )\frac{\alpha _m}{x^2}Ra_x^{3/4}f''\nonumber \\&\quad =\frac{\mu }{\rho _f}\frac{\alpha _m}{x^3}Ra_xf''' -\frac{\sigma _{nf} B_o^2}{\rho _f}\frac{\alpha _m}{x}Ra_x^{1/2}f'\nonumber \\&\qquad +\,\frac{1}{\rho _f}\bigg ((1-\hat{\phi }_\infty ) \rho _{f_\infty }\beta g_e (T_f-T_\infty )\theta -(\rho _f-\rho _{f_\infty }) g_e (\hat{\phi }_w-\hat{\phi }_{\infty })\phi ))\bigg )\cos \delta \qquad \quad \end{aligned} (14) With simple calculations, the above equation can be written as: \begin{aligned}&\frac{\alpha _m}{2x^3}Ra_xf'^2-\frac{3\alpha _m^2}{x^3}Ra_xff'' =\frac{\mu }{\rho _f}\frac{\alpha _m}{x^3}Ra_xf''' +\frac{(1-\hat{\phi }_\infty )\rho _{f_\infty }\beta g_e (T_f-T_\infty )}{\rho _f}\nonumber \\&\quad \bigg (\theta -\frac{(\rho _f-\rho _{f_\infty })g (\rho _f-\rho _{f_\infty })}{(1-\hat{\phi }_\infty ) \rho _{f_\infty }\beta g_e (T_f-T_\infty )}\bigg ) \cos \delta -\frac{\sigma _{nf} B_o^2}{\rho _f}\frac{\alpha _m}{x}Ra_x^{1/2}f' \nonumber \\&\quad \Rightarrow \frac{\alpha _m^2}{4x^3}\bigg (2f'^2-3ff''\bigg ) =\frac{\mu }{\rho _f}\frac{\alpha _m}{x^3}Ra_xf''' +\frac{(1-\hat{\phi }_\infty )\rho _{f_\infty }\beta g (T_f-T_\infty )}{\rho _f}\nonumber \\&\quad \quad \bigg (\theta -Nr\phi \bigg )\cos \delta -\frac{\sigma B_o^2}{\rho _f}\frac{\alpha _m}{x}Ra_x^{1/2}f', \end{aligned} (15) which implies that \begin{aligned} f'''+\frac{\alpha _m}{4\mu }\rho _f\bigg (3ff''-2f'^2\bigg ) +\bigg (\theta -Nr\phi \bigg )\cos \delta -\frac{\sigma B_o^2x^3}{\mu Ra_x^{1/2}}f'= & {} 0\nonumber \\ \Rightarrow f'''+\frac{1}{4Pr}\bigg (3ff''-2f'^2\bigg ) +\bigg (\theta -Nr\phi \bigg )\cos \delta -Mf'= & {} 0, \end{aligned} (16) where prime denote differentiation with respect to $$\displaystyle {\eta }$$ and the parameters $$\displaystyle {Pr}$$ (Prandtl number), $$\displaystyle {Nr}$$ (Buoyancy-ratio parameter) and $$\displaystyle {M}$$ (magnetic parameter) appearing in Eq. 
(15) are defined as: \begin{aligned} Pr=\frac{\mu }{\alpha _m},~~Nr=\frac{(\rho _p-\rho _f)(\hat{\phi }_w-\hat{\phi }_\infty )}{\rho _f \beta (1-\hat{\phi }_w)(T_f-T_\infty )},~~ M=\frac{\sigma _{nf} B_o^2x^3}{\mu Ra_x^{1/2}} \end{aligned} (17) In order to non-dimensionlize the energy equation (3), the following terms are computed as: \begin{aligned} \frac{\partial T}{\partial y}= & {} (T_f-T_{\infty })\frac{\partial \theta }{\partial \eta }\frac{\partial \eta }{\partial y} =\frac{(T_f-T_{\infty })}{x}Ra_x^{1/4}\theta '(\eta ),\nonumber \\ \frac{\partial ^2 T}{\partial y^2}= & {} \frac{\partial }{\partial y}\bigg (\frac{(T_f-T_{\infty })}{x}Ra_x^{1/4} \theta '(\eta )\bigg )=\frac{(T_f-T_{\infty })}{x}Ra_x^{1/4} \frac{\partial ^2 \theta }{\partial \eta ^2}\frac{\partial \eta }{\partial y}\nonumber \\= & {} \frac{(T_f-T_{\infty })}{x^2}Ra_x^{1/2}\theta ''(\eta ),\nonumber \\ \frac{\partial T}{\partial x}= & {} (T_f-T_{\infty })\frac{\partial \theta }{\partial \eta } \frac{\partial \eta }{\partial x}=-\frac{y}{4x^2} (T_f-T_{\infty })Ra_x^{1/4} \theta '(\eta ), \end{aligned} (18) and \begin{aligned} \frac{\partial \hat{\phi }}{\partial y}= & {} (\hat{\phi }_w-\hat{\phi }_{\infty })\frac{\partial \phi }{\partial \eta }\frac{\partial \eta }{\partial y}=\frac{(\hat{\phi }_w-\hat{\phi }_{\infty })}{x}Ra_x^{1/4}\phi '(\eta ),\nonumber \\ \frac{\partial ^2 \hat{\phi }}{\partial y^2}= & {} \frac{\partial }{\partial y}\bigg (\frac{(\hat{\phi }_w -\hat{\phi }_{\infty })}{x}Ra_x^{1/4}\phi '(\eta )\bigg ) =\frac{(\hat{\phi }_w-\hat{\phi }_{\infty })}{x}Ra_x^{1/4} \frac{\partial ^2 \phi }{\partial \eta ^2}\frac{\partial \eta }{\partial y}\nonumber \\= & {} \frac{(\hat{\phi }_w-\hat{\phi }_{\infty })}{x^2} Ra_x^{1/2}\phi ''(\eta ),\nonumber \\ \frac{\partial \hat{\phi }}{\partial x}= & {} (\hat{\phi }_w -\hat{\phi }_{\infty })\frac{\partial \phi }{\partial \eta } \frac{\partial \eta }{\partial x}=-\frac{y}{4x^2}(\hat{\phi }_w -\hat{\phi }_{\infty })Ra_x^{1/4} \phi '(\eta ). \end{aligned} (19) Now substituting these values from Eqs. 
(12), (13), (18) and (19) in (3), we obtain \begin{aligned}&-\frac{\alpha _m}{x}Ra_x^{1/2}f'(\eta )\frac{y}{4x^2} (T_f-T_{\infty })Ra_x^{1/4}\theta '(\eta )\nonumber \\&\quad -\bigg (-\frac{y}{4x^2}\alpha _m Ra_x^{1/2}f'(\eta )+\frac{3}{4x}\alpha _m f(\eta ) Ra_x^{1/4}\bigg )\nonumber \\&\quad \times \, \frac{(T_f-T_\infty )}{x}Ra_x^{1/4}\theta '(\eta ) =\alpha _m\frac{(T_f-T_\infty )}{x^2}Ra_x^{1/2}\theta ''(\eta )\nonumber \\&\quad +\,\tau \bigg (D_B \frac{(\hat{\phi }_w-\hat{\phi }_\infty )}{x} \frac{(T_f-T_\infty )}{x}Ra_x^{1/2}\theta '\phi '+\frac{D_T}{T_\infty }\frac{(T_f-T_\infty )^2}{x^2}\theta '^2 \bigg ), \end{aligned} (20) \begin{aligned}&\Rightarrow -\frac{\alpha _m}{4}\frac{y}{x^3}(T_f-T_{\infty }) Ra_x^{3/4}f'\theta '+\frac{1}{4}\frac{\alpha _m}{x}\frac{y}{x^2} (T_f-T_{\infty })Ra_x^{3/4}f'\theta '\nonumber \\&\quad -\frac{3}{4x^2}\alpha _m (T_f-T_{\infty })Ra_x^{1/2}f\theta ' =\frac{\alpha _m}{x^2}(T_f-T_{\infty })Ra_x^{1/2}\theta '' \nonumber \\&\quad +\bigg (\frac{D_B}{x^2}(\hat{\phi }_w-\hat{\phi }_\infty )(T_f-T_\infty ) Ra_x^{1/2}\theta '\phi '+\frac{D_T}{T_\infty }\frac{(T_f-T_\infty )^2}{x^2}\theta '^2\bigg ) \end{aligned} (21) \begin{aligned}&-\frac{3}{4}\alpha _mf\theta '=\alpha _m\theta ''+\tau \bigg (D_B(\hat{\phi }_w-\hat{\phi }_\infty )\theta '\phi ' +\frac{D_T}{T_\infty }(T_f-T_\infty )\theta '^2\bigg ), \end{aligned} (22) \begin{aligned}&\Rightarrow \theta ''+\frac{3}{4}f\theta '+\frac{\tau }{\alpha _m}D_B(\hat{\phi }_w-\hat{\phi }_\infty )\theta ' \phi '+\tau \frac{D_T}{T_\infty }\frac{(T_f-T_{\infty })}{\alpha _m} \theta '^2=0, \end{aligned} (23) \begin{aligned}&\Rightarrow \theta ''+\frac{3}{4}f\theta '+ Nb\theta ' \phi ' + Nt \theta '^2=0 \end{aligned} (24) where $$\displaystyle {Nb}$$ (Brownian motion parameter) and $$\displaystyle {Nt}$$ (thermophoresis parameter) appearing in equation (24) are defined as: \begin{aligned} Nb=\frac{\tau D_B (\hat{\phi }_w-\hat{\phi }_\infty )}{\alpha _m},~~~~~Nt=\frac{\tau D_T (T_f-T_{\infty })}{T_\infty \alpha _m} \end{aligned} (25) Now substituting the values from Eqs. (12), (13), (18) and (19) in the nanoparticle concentration equation (3), we get \begin{aligned}&-\frac{\alpha _m}{x}Ra_x^{1/2}f'\frac{y}{4x^2}Ra_x^{1/4}\phi ' -\bigg (-\frac{y}{4x^2}\alpha _mRa_x^{1/2}f'+\frac{3}{4x}\alpha _m Ra_x^{1/4}f\bigg )\nonumber \\&\quad \frac{(\hat{\phi }_w-\hat{\phi }_{\infty })}{x} Ra_x^{1/4}\phi ' =\frac{D_B(\hat{\phi }_w-\hat{\phi }_{\infty })}{x^2}Ra_x^{1/2} \phi ''+\frac{D_T}{T_\infty } \frac{(T_f-T_\infty )}{x^2}Ra_x^{1/2}\theta '' \end{aligned} (26) \begin{aligned}&-\frac{3}{4x^2}\alpha _mRa_x^{1/2}(\hat{\phi }_w -\hat{\phi }_{\infty })f\phi '\nonumber \\&\quad =\frac{D_B}{x^2}(\hat{\phi }_w-\hat{\phi }_{\infty }) Ra_x^{1/2}\phi ''+\frac{D_T}{T_\infty }\frac{(T_f-T_\infty )}{x^2}Ra_x^{1/2}\theta '' \end{aligned} (27) or \begin{aligned}&-\frac{3}{4}\alpha _m f\phi '=D_B\phi ''+\frac{D_T}{T_|infty}\frac{(T_f-T_\infty )}{(\hat{\phi }_w-\hat{\phi }_{\infty })}\theta '', \end{aligned} (28) \begin{aligned}&\Rightarrow \phi ''+\frac{1}{D_B}\frac{D_T}{T_|infty} \frac{(T_f-T_\infty )}{(\hat{\phi }_w-\hat{\phi }_{\infty })}\theta '', \end{aligned} (29) \begin{aligned}&\quad \Rightarrow \phi ''+\frac{Nt}{Nb}\theta ''+\frac{3}{4}Lnf\phi '=0 \end{aligned} (30) where $$\displaystyle {Ln}$$ (nanofluid Lewis number) is defined by $$\displaystyle {Ln=\frac{\alpha _m}{D_B}}$$. 
Finally, the following non-dimensionalized system is obtained:

\begin{aligned}&f'''+\frac{1}{4 Pr}(3f f''-2f'^2)+(\theta -Nr \phi )\cos \delta -M f'=0 \end{aligned} (31)

\begin{aligned}&\theta ''+\frac{3}{4}f\theta '+Nb \theta ' \phi ' + Nt\theta '^2=0 \end{aligned} (32)

\begin{aligned}&\phi ''+\frac{Nt}{Nb}\theta ''+\frac{3}{4}Ln f \phi '=0 \end{aligned} (33)

subject to the following boundary conditions:

\begin{aligned} f(\eta )=0,~f'(\eta )=0,~\theta '(\eta )=-Nc[1-\theta (\eta )],~\phi (\eta )=1~~~~ \text{ at } ~~~~\eta =0 \end{aligned} (34)

\begin{aligned} f'(\eta )=0,~\theta (\eta )=0,~\phi (\eta )=0~~~~ \text{ as } ~~~~\eta \rightarrow \infty \end{aligned} (35)

where $$\displaystyle {Nc}$$ (convective heating parameter), appearing in boundary condition (34), is defined as:

\begin{aligned} Nc=\frac{h_f x^{1/4}}{k}\bigg [\frac{\nu \alpha _m}{(1-\hat{\phi }_\infty )g_e \beta (T_f-T_\infty )}\bigg ]^{1/4} \end{aligned} (36)

### Nusselt and Sherwood Number Evaluation

The understanding of heat and mass transfer at the wall plays an important role in estimating the performance of several microfluidic, nanofluidic and thermal devices. Information on how these quantities vary with the wall properties may lead to changes in design that improve the performance and efficiency of the devices. Thus the heat and mass transfer rates are important characteristics that need to be computed [25]. These quantities, the local Nusselt number $$\displaystyle {Nu_x}$$ and the local nanofluid Sherwood number $$\displaystyle {Sh_{x,n}}$$, can be written as:

\begin{aligned} Nu_x=\frac{x q_w}{k(T_f-T_\infty )},~Sh_{x,n}=\frac{x q_{np}}{D_B(\hat{\phi }_w-\hat{\phi }_\infty )}, \end{aligned} (37)

where $$\displaystyle {q_w}$$ and $$\displaystyle {q_{np}}$$ are the wall heat and nano-mass fluxes, respectively. The modified Nusselt number $$\displaystyle {Nur}$$ and modified nanoparticle Sherwood number $$\displaystyle {Shrn}$$ can then be introduced as:

\begin{aligned} Nur=Ra_x^{-1/4}Nu_x=-\theta '(0),~Shrn=Ra_x^{-1/4}Sh_{x,n}=-\phi '(0), \end{aligned} (38)

## Numerical Solution with the Finite Element Method

In this section, the effects of the important parameters on the flow, heat and mass transfer characteristics are discussed on the basis of the numerical solution of Eqs. (31)–(33). It is very difficult to find an analytical solution of Eqs. (31)–(33); hence the conventional finite element method (FEM), a numerical approach, is used, as it is among the most adaptable and popular methods for solving differential equations. The basic step of FEM is the division of the whole domain into smaller, non-overlapping sub-domains in order to resolve the flow physics within the domain geometry. This results in the generation of a grid of elements overlaying the whole domain geometry. It is an enormously useful method (in terms of both resolving material nonlinearity and handling complex geometries) and has received significant attention in nonlinear problems involving heat transfer [3, 10], nanofluid mechanics [11], membrane structural mechanics [12], biological systems [13], electrical systems [19], and many others. The non-linear coupled differential equations (31)–(33) subject to the boundary conditions (34), (35) have been solved using the finite element method (FEM). By assuming

\begin{aligned} f'=h \end{aligned} (39)

the system of Eqs.
(31)-(33) can be reduced into a pair of lower order equations as follows: \begin{aligned}&h''+\frac{1}{4Pr}(3fh'-2h^2)+(\theta -Nr \phi )\cos \delta -Mh=0 \end{aligned} (40) \begin{aligned}&\theta ''+\frac{3}{4}f\theta '+Nb \phi ' \theta ' +Nt (\theta ')^2=0 \end{aligned} (41) \begin{aligned}&\phi ''+\frac{Nt}{Nb}\theta ''+\frac{3}{4} Ln f \phi '=0 \end{aligned} (42) The corresponding boundary conditions now become; \begin{aligned} f(0)= & {} 0,~h(0)=0,~\phi (0)=1,~\theta '(0)=-Nc[1-\theta (0)]~~~~as~~~~\eta =0 \end{aligned} (43) \begin{aligned} h(\infty )= & {} 0,~\theta (\infty )=0,~\phi (\infty )=0~~~~as~~~~\eta \rightarrow \infty \end{aligned} (44) ### Variational Formulation The variational form associated with Eqs. (40)–(42) over a typical linear element $$\displaystyle {(\eta _e,\eta _{e+1})}$$ is given by \begin{aligned}&\int _{\eta _e}^ {\eta _{e+1}} W_1\{f'-h\}d\eta =0 \end{aligned} (45) \begin{aligned}&\int _{\eta _e}^ {\eta _{e+1}} W_2\bigg \{h''+\frac{1}{4Pr}(3fh'-2h^2)+(\theta -Nr \phi )\cos \delta -Mh\bigg \}d\eta =0 \end{aligned} (46) \begin{aligned}&\int _{\eta _e}^ {\eta _{e+1}} W_3\bigg \{\theta ''+\frac{3}{4}f\theta '+Nb \phi ' \theta ' +Nt (\theta ')^2\bigg \}d\eta =0 \end{aligned} (47) \begin{aligned}&\int _{\eta _e}^ {\eta _{e+1}} W_4\bigg \{\phi ''+\frac{Nt}{Nb}\theta ''+\frac{3}{4} Ln f \phi '\bigg \}d\eta =0 \end{aligned} (48) where $$\displaystyle {W_1,W_2,W_3}$$ and $$\displaystyle {W_4}$$ are arbitrary test function and may be viewed as the variation in $$\displaystyle {f,~h,~\theta }$$ and $$\displaystyle {\phi }$$, respectively. ### Finite Element Formulation Let the domain be divided into linear elements $$\displaystyle {(\Omega _e)}$$. The finite element model can be obtained from Eqs. (4548) by substituting the approximations of the form \begin{aligned} \Theta =\sum _{i=1}^2 \Theta _j \psi _j \end{aligned} where, $$\displaystyle {\Theta }$$ stands for either $$\displaystyle {f,~h,~\theta ,}$$ or $$\displaystyle {\phi }$$. So, \begin{aligned} f=\sum _{j=1}^2 f_j \psi _j,~ h=\sum _{j=1}^2 h_j \psi _j, ~\theta =\sum _{j=1}^2 \theta _j \psi _j, ~\phi =\sum _{j=1}^2 \phi _j \psi _j \end{aligned} (49) with $$\displaystyle {W_1=W_2=W_3=W_4=\psi _j,~(j=1,2)}$$ where $$\displaystyle {\psi _j}$$ are the linear interpolation functions for a linear element $$\displaystyle {\Omega _e}$$. The finite element model of the equations thus formed, is given by: \begin{aligned} \begin{bmatrix} [K^{11}]&[K^{12}]&[K^{13}]&[K^{14}] \\ [K^{21}]&[K^{22}]&[K^{23}]&[K^{24}] \\ [K^{31}]&[K^{32}]&[K^{33}]&[K^{34}] \\ [K^{41}]&[K^{42}]&[K^{43}]&[K^{44}] \\ \end{bmatrix} \begin{bmatrix} \{f\} \\ \{h\} \\ \{\theta \} \\ \{\phi \} \\ \end{bmatrix}= \begin{bmatrix} \{b^1\} \\ \{b^2\} \\ \{b^3\} \\ \{b^4\} \\ \end{bmatrix} \end{aligned} where $$\displaystyle {[K^{mn}]}$$ and $$\displaystyle {[b^{mn}] ~(m,n=1,2,3,4)}$$ are the matrices of order $$2\times 2$$ and $$2\times 1$$ respectively and therefore each element matrix is of order $$8\times 8$$. 
These matrices are defined as follows: \begin{aligned} K_{ij}^{11}= & {} \int _{\eta _e}^{\eta _{e+1}}\psi _i \frac{\partial \psi _j }{\partial \eta } d\eta ,~~ K_{ij}^{12}=-\int _{\eta _e}^{\eta _{e+1}}\psi _i\psi _j d\eta ,~~K_{ij}^{13}=K_{ij}^{14}=0,\nonumber \\ K_{ij}^{22}= & {} -\int _{\eta _e}^{\eta _{e+1}}\frac{\partial \psi _i}{\partial \eta }\frac{\partial \psi _j}{\partial \eta }d\eta -\frac{1}{2Pr}\int _{\eta _e}^{\eta _{e+1}} \psi _i\bar{h}\psi _jd\eta -M\int _{\eta _e}^{\eta _{e+1}}\psi _i\psi _jd\eta ,\nonumber \\ K_{ij}^{21}= & {} \frac{3}{4Pr}\int _{\eta _e}^{\eta _{e+1}} \psi _i\bar{h'}\psi _jd\eta ,~~K_{ij}^{23}=\cos \delta \int _{\eta _e}^{\eta _{e+1}}\psi _i\psi _jd\eta ,\nonumber \\ K_{ij}^{24}= & {} -Nr \cos \delta \int _{\eta _e}^{\eta _{e+1}}\psi _i\psi _jd\eta , K_{ij}^{31}=\frac{3}{4}\int _{\eta _e}^{\eta _{e+1}}\psi _i \bar{\theta '}\psi _jd\eta ,~~K_{ij}^{32}=0,\nonumber \\ K_{ij}^{33}= & {} -\int _{\eta _e}^{\eta _{e+1}}\frac{\partial \psi _i}{\partial \eta }\frac{\partial \psi _j}{\partial \eta }d\eta +Nt \int _{\eta _e}^{\eta _{e+1}}\psi _i\bar{\theta '} \frac{\partial \psi _i}{\partial \eta }d\eta ,\nonumber \\ K_{ij}^{34}= & {} Nb\int _{\eta _e}^{\eta _{e+1}}\psi _i\bar{\theta '} \frac{\partial \psi _i}{\partial \eta }d\eta ,~~K_{ij}^{42}=0, ~~K_{ij}^{41}=\frac{3}{4}Ln\int _{\eta _e}^{\eta _{e+1}}\psi _i \bar{\phi '}\frac{\partial \psi _i}{\partial \eta }d\eta ,\nonumber \\ K_{ij}^{43}= & {} -\frac{Nt}{Nb}\int _{\eta _e}^{\eta _{e+1}} \frac{\partial \psi _i}{\partial \eta }\frac{\partial \psi _j}{\partial \eta }d\eta ,~~K_{ij}^{44}=-\int _{\eta _e}^{\eta _{e+1}} \frac{\partial \psi _i}{\partial \eta }\frac{\partial \psi _j}{\partial \eta }d\eta , \end{aligned} (50) where \begin{aligned} \overline{h}=\sum _{i=1}^2\overline{h_i}\psi _i,~~ \overline{h'}=\sum _{i=1}^2\overline{h_i}\frac{\partial \psi _i}{\partial \eta },~~\overline{\theta '}=\sum _{i=1}^2\overline{\theta _i}\frac{\partial \psi _i}{\partial \eta },~~\overline{\phi '}=\sum _{i=1}^2\overline{\phi _i}\frac{\partial \psi _i}{\partial \eta } \end{aligned} (51) The computational domain is discretized with uniformly distributed 2000 linear elements. The length of the boundary layer region i.e. $$\displaystyle {\eta _\infty }$$ is chosen as 14. Results were obtained even for large values of $$\displaystyle {\eta _\infty }$$, but after $$\displaystyle {\eta _\infty =14}$$, no appreciable effect on results was observed. Therefore, the boundary layer thickness is chosen as 14. At every node four functions $$\displaystyle {f,~f',~\theta }$$ and $$\displaystyle {\phi }$$ are to be calculated; hence after assembly of the element equations, we obtain a system of 8004 non-linear equations. Table 1 Comparison of Nur of regular fluid for various values of Pr with $$Ln=10$$, $$Nb=Nt=Nr=10^{-5}$$, $$M=0$$ Pr Bejan [5] Kuznetsov and Nield [17] Narahari et al. 
[19] Present results 1 0.401 0.401 0.401 0.4014 10 0.465 0.463 0.459 0.4654 100 0.490 0.481 0.473 0.4904 1000 0.499 0.484 0.474 0.4970 Table 2 Comparison of results for Nur and Shrn when $$Nt=0.1$$, $$Nc=10$$, $$Ln=10$$, $$M=0,$$ $$\delta =0$$ Nb Nr $$Pr=1$$ $$Pr=5$$ Nur Shrn Nur Shrn Aziz and Khan [4] Present results Aziz and Khan [4] Present results Aziz and Khan [4] Present results Aziz and Khan [4] Present results 0.1 0.1 0.3396 0.3395 0.9954 0.9955 0.3807 0.3807 1.0608 1.0609 0.2 0.3366 0.3364 0.9828 0.9830 0.3773 0.3770 1.0482 1.0484 0.3 0.3334 0.3331 0.9697 0.9699 0.3739 0.3737 1.0351 1.0352 0.4 0.3301 0.3297 0.9559 0.9563 0.3702 0.3699 1.0214 1.0217 0.5 0.3267 0.3266 0.9414 0.9415 0.3665 0.3663 1.0071 1.0075 0.3 0.1 0.2939 0.2938 1.0435 1.0437 0.3306 0.3301 1.1101 1.1102 0.2 0.2918 0.2917 1.0317 1.0317 0.3282 0.3280 1.0985 1.0988 0.3 0.2896 0.2896 1.0195 1.0199 0.3258 0.3255 1.0866 1.0870 0.4 0.2872 0.2869 1.0067 1.0069 0.3232 0.3231 1.0741 1.0743 0.5 0.2848 0.2844 0.9934 0.9935 0.3206 0.3202 1.0611 1.0612 0.5 0.1 0.2530 0.2525 1.0584 1.0585 0.2855 0.2852 1.1263 1.1266 0.2 0.2513 0.2512 1.0471 1.0471 0.2836 0.2835 1.1152 1.1153 0.3 0.2495 0.2492 1.0353 1.0355 0.2816 0.2818 1.1037 1.1040 0.4 0.2477 0.2475 1.0230 1.0233 0.2796 0.2794 1.0918 1.0919 0.5 0.2458 0.2456 1.0102 1.0105 0.2775 0.2771 1.0794 1.0795 Table 3 Errors and rate of convergence for the finite element method (FEM) $$h_e$$ $$f'$$ $$\theta$$ $$\phi$$ Error Rate Error Rate Error Rate 0.128 2.0186e−2 1.1664 7.0795e−2 1.5609 4.0550e−2 1.4903 0.064 8.9935e−3 1.2898 2.3994e−2 1.6565 1.4433e−2 1.5759 0.032 3.6779e−3 1.4059 7.6112e−3 1.7482 4.8414e−3 1.6549 0.016 1.3880e−3 1.5135 2.2656e−3 1.8108 1.5374e−3 1.7238 0.008 4.8616e−4 1.5849 6.4578e−4 1.9468 4.6547e−4 1.8136 0.004 1.6206e−4 1.6328 1.6751e−4 2.0004 1.3242e−4 1.9091 0.002 5.2259e−5 4.1868e−5 3.5258e−5 Owing to the nonlinearity of the system an iterative scheme has been used to solve it iteratively. The system of equations is linearized by incorporating known functions $$\displaystyle {\bar{f},~\bar{f'},~\bar{\theta },~\bar{\phi }}$$ which are calculated using the approximate values of variables $$\displaystyle {f,~f',~\theta ,~\phi }$$ at node $$\displaystyle {i}$$ on previous iteration, as given in Eq. (50). The whole system is solved by using a Gaussian elimination method and the whole procedure is executed in MATLAB. This gives a new set of values of unknowns $$\displaystyle {f,~f',~\theta ,~\phi }$$ and the process continues until the required accuracy of $$\displaystyle {1\times 10 ^{-5}}$$ is achieved. ## Validation of the Numerical Procedure For validation purpose, results were compared with previously reported results in the literature. The results for the regular fluid at different values of $$\displaystyle {Pr}$$ were compared with those reported by Bejan [5], Kuznetsov and Nield [17] and Narahari et al. [19], has been captured in Table 1. Closer correlation has been achieved as compared to results computed by [5, 17, 19]. On the other hand, Table 2 shows the excellent correlation between the current FEM computation and the earlier results of Aziz and Khan [4] on the modified Nusselt and nanoparticle Sherwood number under the influence of the various parameters such as $$\displaystyle {Nb,~Nr,~Pr}$$. ## Double Mesh Principle To estimate the error and compute the rate of convergence in the computed numerical solution, the double mesh principle was used [18, 21, 24]. 
As the exact solution of the problem was unknown, and therefore to approximate the pointwise errors $$\displaystyle {|(\tilde{\Theta }-\Theta )(\eta _i)|}$$; $$\displaystyle {i=0,1,2,\ldots N}$$, we have used variant of double mesh principle, where $$\displaystyle {\Theta (\eta )}$$ and $$\displaystyle {\tilde{\Theta }(\eta )}$$ denote the numerical solutions of the system of ODE’s (39)–(42) at the two consecutive different mesh. A numerical solution $$\displaystyle {\Theta (\eta )}$$ to $$\displaystyle {\tilde{\Theta }(\eta )}$$ is calculated which was given by FEM on the mesh $$\displaystyle {\{\hat{\eta }_i\}}$$ that contained the mesh points $$\displaystyle {\eta _i}$$ of the original mesh and their midpoints (i.e. $$\displaystyle {\hat{\eta }_{2i}=\eta _i, i=0,1,\ldots , N}, \displaystyle {\hat{\eta }_{2i+1}=\frac{\eta _i+\eta _{i+1}}{2}}, \displaystyle {i=0,1,\ldots , N-1}$$). Then at the mesh points, $$\displaystyle {\eta _i, i=0,1,\ldots ,N}$$, the maximum error is computed as: \begin{aligned} E_N=\max _{0\le i\le N}|\Theta (\eta _i)-\tilde{\Theta }(\eta _i)| \end{aligned} (52) From these estimates of the errors, the corresponding order of convergence has been obtained, which is defined as: \begin{aligned} r^N=\log _2\frac{E_N}{E_{2N}} \end{aligned} (53) From Table 3, it is concluded that at each step, i.e after each refinement, approximate error corresponding to each function is reducing. Hence, the approximated solution of the current problem is approaching the exact solution. Also, an increment in the convergence rate is observed at each step, which shows that the computed numerical solution is rapidly converging on the exact solution. ## Computations and Discussion Numerical computations have been carried out for different values of the parameters involved, namely, $$\displaystyle {M}$$$$\displaystyle {Nr}$$$$\displaystyle {Nc}$$, $$\displaystyle {Pr}$$$$\displaystyle {Ln}$$$$\displaystyle {Nb}$$$$\displaystyle {Nt}$$ that describe the flow characteristics, heat and mass transfer and the results are reported in terms of graphs and tables. In Figs. 23456789101112131415161718 and 19, we generally utilize the following data (unless otherwise stated): $$\displaystyle {Pr=5}$$, $$\displaystyle {Nb=Nr=Nc=0.5}$$$$\displaystyle {Nt=0.3}$$,  $$\displaystyle {Ln=10}$$$$\displaystyle {M=1}$$$$\displaystyle {\delta =\pi /4}$$. Table 4 indicates dependency of the modified Nusselt number $$\displaystyle {Nur}$$ and modified nanoparticle Sherwood number $$\displaystyle {Shrn}$$ over changes in the Prandtl number $$\displaystyle {Pr}$$, the Brownian motion parameter $$\displaystyle {Nb}$$ and the buoyancy parameter $$\displaystyle {Nr}$$ when the rest of the parameters are fixed. The modified Nusselt and nanoparticle Sherwood numbers increase with an increase in the Prandtl number $$\displaystyle {Pr}$$. For a fixed $$\displaystyle {Pr}$$, both $$\displaystyle {Nur}$$ and $$\displaystyle {Shrn}$$ decrease as $$\displaystyle {Nb}$$ and $$\displaystyle {Nr}$$ increase. Table 5 shows the changes in the magnetic field parameter $$\displaystyle {M}$$, the thermophoretic parameter $$\displaystyle {Nt}$$, and the angle of inclination $$\displaystyle {\delta }$$ affect the modified Nusselt number and the modified nanofluid Sherwood number. It is noticed that the performance of heat and nanoparticle mass transfer of the plate decrease as the magnetic field strength ,$$\displaystyle {M}$$, and angle of inclination ,$$\displaystyle {\delta }$$, are gradually enlarged. 
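For readers who want to reproduce numbers of this kind without implementing the Galerkin FEM, the sketch below is a minimal, hypothetical alternative: it integrates the similarity system (31)–(33) with the boundary conditions (34)–(35) using SciPy's collocation boundary-value solver rather than the paper's finite element scheme. The parameter values are the figure defaults quoted above (Pr = 5, Nb = Nr = Nc = 0.5, Nt = 0.3, Ln = 10, M = 1, δ = π/4) with η∞ = 14; to compare against a row of Tables 4–6 the parameters would have to be changed to match that row, and convergence may require a finer mesh or a better initial guess.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Assumed parameter values (the defaults used for the figures in the text).
Pr, Nb, Nr, Nc, Nt, Ln, M, delta = 5.0, 0.5, 0.5, 0.5, 0.3, 10.0, 1.0, np.pi / 4
eta_inf = 14.0  # truncation of the similarity variable, as chosen in the paper

def rhs(eta, y):
    # State vector y = [f, f', f'', theta, theta', phi, phi'], from Eqs. (31)-(33).
    f, fp, fpp, th, thp, ph, php = y
    fppp = -(3.0 * f * fpp - 2.0 * fp**2) / (4.0 * Pr) \
           - (th - Nr * ph) * np.cos(delta) + M * fp
    thpp = -(0.75 * f * thp + Nb * thp * php + Nt * thp**2)
    phpp = -(Nt / Nb) * thpp - 0.75 * Ln * f * php
    return np.vstack([fp, fpp, fppp, thp, thpp, php, phpp])

def bc(ya, yb):
    # Eqs. (34)-(35): f(0)=0, f'(0)=0, theta'(0)=-Nc[1-theta(0)], phi(0)=1,
    # and f'(eta_inf)=0, theta(eta_inf)=0, phi(eta_inf)=0.
    return np.array([ya[0], ya[1], ya[4] + Nc * (1.0 - ya[3]), ya[5] - 1.0,
                     yb[1], yb[3], yb[5]])

eta = np.linspace(0.0, eta_inf, 500)
y_guess = np.zeros((7, eta.size))
y_guess[3] = 0.5 * np.exp(-eta)   # crude decaying guesses for theta and phi
y_guess[5] = np.exp(-eta)

sol = solve_bvp(rhs, bc, eta, y_guess, tol=1e-6, max_nodes=50000)
print("converged:", sol.status == 0)
print("Nur  = -theta'(0) =", -sol.sol(0.0)[4])   # reduced Nusselt number
print("Shrn = -phi'(0)   =", -sol.sol(0.0)[6])   # reduced Sherwood number
```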
In the same table, the corresponding heat and nanoparticle mass transfer rates are also presented for different values of $$Nt$$. The effect of the nanofluid Lewis number on the modified Nusselt number and the modified nanoparticle Sherwood number is shown in Table 6. As the nanofluid Lewis number increases, the modified Nusselt number decreases slightly, but there is a substantial increase in the modified nanoparticle Sherwood number. Tables 4, 5 and 6 provide information about the heat and mass transfer characteristics of the flow in a form convenient for research and engineering calculations.

Figures 2, 3 and 4 elucidate the variations of the functions $$f'(\eta),~\theta(\eta)$$ and $$\phi(\eta)$$ under the influence of the magnetic field parameter $$M$$. It is clearly observed that the velocity of the fluid decreases, whereas the temperature increases, with increasing strength of the magnetic field. The application of a transverse magnetic field produces a resistive/drag force, known as the Lorentz force, which tends to resist the fluid flow; as a result, this force impedes the development of momentum and decelerates the flow. The additional work done in dragging the nanofluid against the action of the magnetic field is expressed as thermal energy, which heats the nanofluid and increases its temperature. Consequently, the presence of a magnetic field attenuates the thickness of the momentum boundary layer and augments the thickness of the thermal boundary layer. Moreover, the warming of the boundary layer also assists nanoparticle diffusion, owing to which a rise in the nanoparticle volume fraction $$\phi(\eta)$$ can be observed, as shown in Fig. 4.

In Figs. 5, 6 and 7, the influence of the plate inclination from the vertical, $$\delta$$, ranging from 0 to $$\pi/4$$, on the velocity $$f'(\eta)$$, temperature $$\theta(\eta)$$ and nanoparticle volume fraction $$\phi(\eta)$$ profiles is depicted, respectively. It is observed from Fig. 5 that, within the hydrodynamic boundary layer, the velocity of the fluid is diminished with an augmentation of the inclination angle. This is due to the plate's alignment entering through the thermal buoyancy term, $$g[-(\rho_p-\rho_{f_\infty})(\hat{\phi}-\hat{\phi}_\infty)+(1-\hat{\phi}_\infty)\rho_{f_\infty}\beta(T-T_\infty)]\cos\delta$$, which arises in the momentum equation (2). As the value of $$\delta$$ increases, the corresponding value of $$\cos\delta$$ decreases. This causes the buoyancy effect to diminish as the plate inclination increases. Consequently, the driving force on the fluid weakens, resulting in a decrease in the velocity of the fluid. A similar trend was found by Alam et al. [1] for the velocity profile. On the other hand, a depletion of the buoyancy effect enhances thermal and species (nanoparticle) diffusion, as shown in Figs. 6 and 7.

Brownian motion is the haphazard motion of nanoparticles inside the base fluid due to the continuous collisions of the nanoparticles with the molecules of the base fluid. This motion of the particles is described by the parameter $$Nb$$, known as the Brownian motion coefficient.
Figures 8, 9 and 10 illustrate the effect of $$Nb$$ on the velocity $$f'(\eta)$$, temperature $$\theta(\eta)$$ and concentration $$\phi(\eta)$$ profiles. With an increase in $$Nb$$, the randomness of the nanoparticles increases and, as a result, the nanoparticles move more chaotically, causing more collisions in the system (and vice versa). This increase in the number of collisions and in particle velocity results in enhanced heat transfer, and thus the temperature increases. Simultaneously, the increase in $$Nb$$ has an adverse effect on the concentration of nanoparticles along the wall: as the random motion of the nanoparticles increases, they start moving away from the boundary into the fluid, which causes a decrease in the nanoparticle concentration along the wall.

The diffusion of particles in the presence of a temperature gradient is known as thermophoresis. The variation of the velocity $$f'(\eta)$$, temperature $$\theta(\eta)$$ and nanoparticle concentration $$\phi(\eta)$$ for various values of $$Nt$$ is depicted in Figs. 11, 12 and 13. An increase in the value of $$Nt$$ strengthens the temperature gradient, which intensifies the thermophoretic force between nanoparticles. This force causes more fluid to be heated and elevates the temperature. The same effect is observed for the nanoparticle concentration as the thermophoresis parameter $$Nt$$ is strengthened, as shown in Fig. 13.

Table 4 Variation of $$Nur$$ and $$Shrn$$ for $$Pr$$, $$Nb$$ and $$Nr$$ for $$\delta=\pi/4$$, $$Nc=10$$, $$Nt=0.5$$, $$M=0.1$$, $$Ln=10$$

| $$Nb$$ | $$Nr$$ | $$Nur$$ ($$Pr=1$$) | $$Shrn$$ ($$Pr=1$$) | $$Nur$$ ($$Pr=5$$) | $$Shrn$$ ($$Pr=5$$) | $$Nur$$ ($$Pr=10$$) | $$Shrn$$ ($$Pr=10$$) |
|---|---|---|---|---|---|---|---|
| 0.1 | 0.1 | 0.2120 | 0.7253 | 0.2183 | 0.7418 | 0.2191 | 0.7441 |
| 0.1 | 0.2 | 0.2076 | 0.7094 | 0.2136 | 0.7252 | 0.2144 | 0.7273 |
| 0.1 | 0.3 | 0.2030 | 0.6926 | 0.2086 | 0.7076 | 0.2094 | 0.7097 |
| 0.1 | 0.4 | 0.1981 | 0.6748 | 0.2034 | 0.6890 | 0.2042 | 0.6909 |
| 0.1 | 0.5 | 0.1929 | 0.6559 | 0.1979 | 0.6692 | 0.1986 | 0.6710 |
| 0.1 | 0.6 | 0.1874 | 0.6355 | 0.1921 | 0.6479 | 0.1927 | 0.6496 |
| 0.3 | 0.1 | 0.1840 | 0.8280 | 0.1897 | 0.8420 | 0.1905 | 0.8439 |
| 0.3 | 0.2 | 0.1816 | 0.8143 | 0.1871 | 0.8280 | 0.1879 | 0.8299 |
| 0.3 | 0.3 | 0.1791 | 0.8000 | 0.1845 | 0.8133 | 0.1852 | 0.8151 |
| 0.3 | 0.4 | 0.1766 | 0.7849 | 0.1817 | 0.7978 | 0.1825 | 0.7995 |
| 0.3 | 0.5 | 0.1739 | 0.7689 | 0.1789 | 0.7814 | 0.1796 | 0.7831 |
| 0.3 | 0.6 | 0.1710 | 0.7519 | 0.1758 | 0.7640 | 0.1765 | 0.7656 |
| 0.5 | 0.1 | 0.1581 | 0.8502 | 0.1632 | 0.8639 | 0.1639 | 0.8658 |
| 0.5 | 0.2 | 0.1564 | 0.8371 | 0.1614 | 0.8506 | 0.1620 | 0.8524 |
| 0.5 | 0.3 | 0.1546 | 0.8235 | 0.1594 | 0.8366 | 0.1601 | 0.8384 |
| 0.5 | 0.4 | 0.1527 | 0.8091 | 0.1574 | 0.8219 | 0.1580 | 0.8236 |
| 0.5 | 0.5 | 0.1507 | 0.7939 | 0.1552 | 0.8064 | 0.1559 | 0.8081 |
| 0.5 | 0.6 | 0.1486 | 0.7778 | 0.1530 | 0.7900 | 0.1536 | 0.7916 |

Table 5 Variation of $$Nur$$ and $$Shrn$$ for $$M$$, $$Nt$$ and $$\delta$$ for $$Nc=10$$, $$Pr=5.0$$, $$Nb=Nr=0.5$$, $$Ln=10$$

| $$Nt$$ | $$\delta$$ | $$Nur$$ ($$M=1.0$$) | $$Shrn$$ ($$M=1.0$$) | $$Nur$$ ($$M=3.0$$) | $$Shrn$$ ($$M=3.0$$) | $$Nur$$ ($$M=5.0$$) | $$Shrn$$ ($$M=5.0$$) |
|---|---|---|---|---|---|---|---|
| 0.1 | 0 | 0.1979 | 0.8541 | 0.1500 | 0.6858 | 0.1266 | 0.5903 |
| 0.1 | $$\pi/12$$ | 0.1956 | 0.8446 | 0.1480 | 0.6768 | 0.1249 | 0.5819 |
| 0.1 | $$\pi/6$$ | 0.1885 | 0.8156 | 0.1418 | 0.6490 | 0.1198 | 0.5561 |
| 0.1 | $$\pi/4$$ | 0.1759 | 0.7636 | 0.1312 | 0.5997 | 0.1111 | 0.5105 |
| 0.3 | 0 | 0.1829 | 0.8692 | 0.1387 | 0.6958 | 0.1169 | 0.5986 |
| 0.3 | $$\pi/12$$ | 0.1808 | 0.8596 | 0.1368 | 0.6867 | 0.1153 | 0.5901 |
| 0.3 | $$\pi/6$$ | 0.1743 | 0.8300 | 0.1310 | 0.6585 | 0.1105 | 0.5642 |
| 0.3 | $$\pi/4$$ | 0.1626 | 0.7769 | 0.1211 | 0.6086 | 0.1024 | 0.5186 |
| 0.5 | 0 | 0.1696 | 0.8886 | 0.1284 | 0.7089 | 0.1081 | 0.6097 |
| 0.5 | $$\pi/12$$ | 0.1676 | 0.8787 | 0.1267 | 0.6996 | 0.1067 | 0.6012 |
| 0.5 | $$\pi/6$$ | 0.1615 | 0.8483 | 0.1213 | 0.6709 | 0.1022 | 0.5751 |
| 0.5 | $$\pi/4$$ | 0.1507 | 0.7939 | 0.1121 | 0.6204 | 0.0946 | 0.5294 |

The buoyancy-ratio parameter $$Nr$$ is defined as the
ratio of the variation of the fluid density (due to the variation of the concentration) to the variation of the density of the nanofluid (due to the variation of temperature). Figures 14, 15 and 16 present the behavior of the buoyancy-ratio parameter $$Nr$$ on the velocity $$f'(\eta)$$, temperature $$\theta(\eta)$$ and nanoparticle volume fraction $$\phi(\eta)$$ profiles. It is observed from these figures that an increase in the buoyancy-ratio parameter increases the magnitude of the dimensionless temperature and nanoparticle concentration while decreasing the magnitude of the dimensionless velocity of the nanofluid. Figures 17 and 18 display the effect of the convective heating parameter $$Nc$$ on the velocity $$f'(\eta)$$ and temperature $$\theta(\eta)$$ profiles. It is noted that the velocity and the temperature of the fluid increase with an increase in $$Nc$$. Figure 19 depicts that the nanofluid Lewis number significantly affects the concentration of nanoparticles $$\phi(\eta)$$. For a base fluid of given thermal diffusivity $$\alpha_m$$, a higher Lewis number implies a lower Brownian diffusion coefficient $$D_B$$ (as $$Ln=\alpha_m/D_B$$), which must result in a shorter penetration depth for the concentration boundary layer. This is exactly what Fig. 19 represents.

Table 6 Variation of $$Nur$$ and $$Shrn$$ for $$Ln$$, $$Nt$$ and $$Nr$$ for $$\delta=\pi/4$$, $$Nc=10$$, $$Nb=0.5$$, $$M=0.1$$, $$Pr=5.0$$

| $$Nt$$ | $$Nr$$ | $$Nur$$ ($$Ln=1$$) | $$Shrn$$ ($$Ln=1$$) | $$Nur$$ ($$Ln=5$$) | $$Shrn$$ ($$Ln=5$$) | $$Nur$$ ($$Ln=10$$) | $$Shrn$$ ($$Ln=10$$) |
|---|---|---|---|---|---|---|---|
| 0.1 | 0.1 | 0.2105 | 0.2710 | 0.1897 | 0.6148 | 0.1839 | 0.8201 |
| 0.1 | 0.2 | 0.2025 | 0.2607 | 0.1868 | 0.6025 | 0.1821 | 0.8071 |
| 0.1 | 0.3 | 0.1938 | 0.2495 | 0.1837 | 0.5895 | 0.1801 | 0.7934 |
| 0.1 | 0.4 | 0.1842 | 0.2374 | 0.1805 | 0.5757 | 0.1781 | 0.7789 |
| 0.1 | 0.5 | 0.1736 | 0.2241 | 0.1770 | 0.5608 | 0.1759 | 0.7636 |
| 0.1 | 0.6 | 0.1617 | 0.2095 | 0.1733 | 0.5447 | 0.1736 | 0.7472 |
| 0.3 | 0.1 | 0.1986 | 0.2437 | 0.1763 | 0.6191 | 0.1704 | 0.8333 |
| 0.3 | 0.2 | 0.1902 | 0.2336 | 0.1734 | 0.6067 | 0.1686 | 0.8203 |
| 0.3 | 0.3 | 0.1810 | 0.2228 | 0.1704 | 0.5937 | 0.1667 | 0.8066 |
| 0.3 | 0.4 | 0.1709 | 0.2116 | 0.1672 | 0.5798 | 0.1647 | 0.7922 |
| 0.3 | 0.5 | 0.1598 | 0.1999 | 0.1638 | 0.5649 | 0.1626 | 0.7769 |
| 0.3 | 0.6 | 0.1477 | 0.1879 | 0.1601 | 0.5488 | 0.1604 | 0.7607 |
| 0.5 | 0.1 | 0.1875 | 0.2235 | 0.1641 | 0.6283 | 0.1581 | 0.8502 |
| 0.5 | 0.2 | 0.1788 | 0.2133 | 0.1613 | 0.6159 | 0.1564 | 0.8371 |
| 0.5 | 0.3 | 0.1693 | 0.2031 | 0.1583 | 0.6027 | 0.1546 | 0.8235 |
| 0.5 | 0.4 | 0.1589 | 0.1930 | 0.1552 | 0.5888 | 0.1527 | 0.8091 |
| 0.5 | 0.5 | 0.1477 | 0.1836 | 0.1519 | 0.5738 | 0.1507 | 0.7939 |
| 0.5 | 0.6 | 0.1358 | 0.1754 | 0.1484 | 0.5577 | 0.1486 | 0.7778 |

## Conclusion

A combined similarity-numerical approach is used to study the natural convective boundary layer flow of a nanofluid past a convectively heated inclined plate in the presence of a magnetic field, using a model in which Brownian motion and thermophoresis are accounted for. By use of an appropriate similarity transformation, the governing partial differential equations with the corresponding boundary conditions are solved numerically using the Galerkin finite element method (FEM). The impact of the pertinent parameters on the flow, temperature, nanoparticle concentration and the modified Nusselt and Sherwood numbers is presented in tabular as well as graphical form. The use of a convective boundary condition instead of a constant temperature or a constant heat flux makes this approach novel. The computational analysis leads to the following conclusions:

1. Amplifying the strength of the magnetic field $$M$$ attenuates the thickness of the momentum boundary layer and expands the thermal and nano-volume-fraction boundary layers.
The application of an external magnetic field produces a Lorentz drag force which retards the fluid motion. By customizing the external magnetic field, the transfer of heat can be controlled; in the field of 'smart' cooling devices, widespread growth is based on this idea.

2. With an augmentation in the magnetic parameter $$M$$, the magnitudes of the heat and nano-mass transfer rates decrease as a consequence of the intensified Lorentz drag force.

3. When the thermophoresis parameter $$Nt$$ and the Brownian motion parameter $$Nb$$ are strengthened, the rates of heat and nano-mass transfer decrease for an increase in the value of the magnetic field parameter $$M$$. The heat and mass transfer rates can be altered by taking different combinations of base fluid and nanoparticles; this idea can be exploited in numerous industrial applications involving inclined/vertical plates (production of glass fibres, plastic products, tetrapaks, etc.) to adjust the heat transfer rates.

4. With increasing values of the angle of inclination $$\delta$$, the width of the momentum boundary layer decays, whereas the reverse effect occurs for the temperature and concentration boundary layers.

5. The use of a convective boundary condition instead of a constant temperature or a constant heat flux makes this approach novel: the convective heating parameter $$Nc$$ enhances the rate of heat transfer at the surface of the plate. This effect finds application in heat exchangers, where the convection in the fluid past the solid surface influences the conduction in the solid surface.

6. The excellent accuracy of the computed FEM results was shown with the help of the double mesh principle.

However, the present study has focused on the steady-state situation; time-dependent flow of the nanofluid over the plate will be addressed in future investigations. The present two-phase model might also be extended to turbulent nanofluid flow problems with the inclusion of other slip mechanisms, viz. diffusiophoresis, inertia and drag force.

## References

1. Alam, M., Rahman, M., Sattar, M.: On the effectiveness of viscous dissipation and joule heating on steady magnetohydrodynamic heat and mass transfer flow over an inclined radiate isothermal permeable surface in the presence of thermophoresis. Commun. Nonlinear Sci. Numer. Simul. 14, 2132–2143 (2009)
2. Ali, F., Khan, I., Samiulhaq, S.S.: Conjugate effects of heat and mass transfer on MHD free convection flow over an inclined plate embedded in a porous medium. PLoS ONE 8(6), e65223 (2013)
3. Anghel, M., Hossain, M.A., Zeb, S., Pop, I.: Combined heat and mass transfer by free convection past an inclined flat plate. Int. J. Appl. Mech. Eng. 2, 473–497 (2001)
4. Aziz, A., Khan, W.: Natural convective boundary layer flow of a nanofluid past a convectively heated vertical plate. Int. J. Therm. Sci. 52, 83–90 (2012)
5. Bejan, A.: Convection Heat Transfer. Wiley, New York (1984)
6. Buongiorno, J.: Convective transport in nanofluids. J. Heat Transf. 128, 240–250 (2006)
7. Chen, C.H.: Heat and mass transfer in MHD flow by natural convection from a permeable, inclined surface with variable wall temperature and concentration. Acta Mech. 172, 219–235 (2004)
8. Choi, S.U.S.: Enhancing thermal conductivity of fluids with nanoparticles in developments and applications of non-Newtonian flows. In: Siginer, D.A., Wang, H.P. (eds.) ASME FED 231/MD, vol. 66, pp. 99–105 (1995)
9. Goyal, M., Bhargava, R.: Numerical solution of MHD viscoelastic nanofluid flow over a stretching sheet with partial slip and heat source/sink. ISRN Nanotechnol. 2013, 1–11 (2013)
10. Goyal, M., Bhargava, R.: Boundary layer flow and heat transfer of viscoelastic nanofluids past a stretching sheet with partial slip conditions. Appl. Nanosci. 4, 761–767 (2014a)
11. Goyal, M., Bhargava, R.: Numerical study of thermodiffusion effects on boundary layer flow of nanofluids over a power law stretching sheet. Microfluid. Nanofluidics 17, 591–604 (2014b)
12. Goyal, M., Bhargava, R.: Thermodiffusion effects on boundary layer flow of viscoelastic nanofluids over a stretching sheet with viscous dissipation and non-uniform heat source using hp-finite element method. Proc. IMechE Part N J. Nanoeng. Nanosyst. 230, 124–140 (2014c)
13. Hamad, M.A.A., Pop, I., Ismail, A.I.M.: Magnetic field effects on free convection flow of a nanofluid past a vertical semi-infinite flat plate. Nonlinear Anal. Real World Appl. 12, 1338–1346 (2011)
14. Hatami, M., Jing, D., Song, D., Sheikholeslami, M., Ganji, D.: Heat transfer and flow analysis of nanofluid flow between parallel plates in presence of variable magnetic field using HPM. J. Magn. Magn. Mater. 396, 275–282 (2015)
15. Huminic, G., Huminic, A.: Application of nanofluids in heat exchangers: a review. Renew. Sust. Energy Rev. 16(8), 5625–5638 (2012)
16. Khair, K.R., Bejan, A.: Mass transfer to natural convection boundary layer flow driven by heat transfer. Int. J. Heat Mass Transf. 30, 369–376 (1985)
17. Kuznetsov, A., Nield, D.: Natural convective boundary-layer flow of a nanofluid past a vertical plate. Int. J. Therm. Sci. 49, 243–247 (2010)
18. Margenov, S., Vulkov, L., Vulkov, L.G., Wasniewski, J.: Numerical Analysis and Its Applications: 4th International Conference, NAA 2008, Lozenetz, Bulgaria, June 2008, Revised Selected Papers. Springer (2008)
19. Narahari, M., Akilu, S., Jaafar, A.: Free convection flow of a nanofluid past an isothermal inclined plate. Appl. Mech. Mater. 390, 129–133 (2013)
20. Narvaez, J.A., Veydt, A.R., Wilkens, R.J.: Evaluation of nanofluids as potential novel coolant for aircraft applications: the case of de-ionized water-based alumina nanofluids. ASME J. Heat Transf. 136(051), 702 (2014)
21. Natesan, S., Jayakumar, J., Vigo-Aguiar, J.: Parameter uniform numerical method for singularly perturbed turning point problems exhibiting boundary layers. J. Comput. Appl. Math. 158, 121–134 (2003)
22. Pohlhausen, E.: Der warmeaustausch zwischen festen korpern und flussigkeiten mit kleiner reibung und kleiner warmeleitung. J. Appl. Math. Mech. 1, 115–121 (1921)
23. Sathiyamoorthy, M., Chamkha, A.: Effect of magnetic field on natural convection flow in a liquid gallium filled square cavity for linearly heated side wall(s). Int. J. Therm. Sci. 49, 1856–1865 (2010)
24. Shanthi, V., Ramanujam, N., Natesan, S.: Fitted mesh method for singularly perturbed reaction–convection–diffusion problems with boundary and interior layers. J. Appl. Math. Comput. 22, 49–65 (2006)
25. Siddiqa, S., Hossain, M., Saha, S.C.: The effect of thermal radiation on the natural convection boundary layer flow over a wavy horizontal surface. Int. J. Therm. Sci. 84, 143–150 (2014)
26. Tiwari, R.K., Das, M.K.: Heat transfer augmentation in a two-sided lid-driven differentially heated square cavity utilizing nanofluids. Int. J. Heat Mass Transf. 50, 2002–2018 (2007)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9881389737129211, "perplexity": 5236.563116808107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947690.43/warc/CC-MAIN-20180425022414-20180425042414-00033.warc.gz"}
https://electronics.stackexchange.com/questions/56469/can-a-hartley-oscillator-be-built-using-fixed-inductors
# Can a Hartley Oscillator be built using Fixed Inductors?

I've been playing around with fundamental circuits (no professional EE background), and became interested in oscillators. I have been trying to build the Hartley oscillator as described here. The document states that "an Hartley Oscillator circuit can be made from [...] a pair of series connected coils [...]". I had a couple of 22mH fixed inductors, which I hooked up on a breadboard with the other needed pieces. When I test the transistor amplifier independently, it seems to be working. However, there is no sign of oscillation in this circuit.

So my ignorant question is: can I use the fixed inductors indicated? I saw mention of the notion of 'mutual inductance', and I'm not sure you can get that with these discrete components.

Update

My friends and I started out by copying Oli's quick circuit, and were gratified to get a crisp sinusoid waveform. Thanks, Oli! However, I clearly still have a long way to go, as when we attempted to change the frequency of the circuit we were mimicking, we got zero oscillation. And the original Hartley circuit remains stubborn. I've bought a couple of used books and will be working through them with an eye to getting the original circuit (among others) working.

• I built a Colpitts oscillator the other day because it doesn't require the coupled inductors. en.m.wikipedia.org/wiki/Colpitts_oscillator – jippie Jan 29 '13 at 20:28
• Yep, that's on my list as well. Bear in mind that I'm doing this for education, so skipping the 'hard' one is counter to the spirit of the endeavor. ;^)~ – Don Wakefield Jan 29 '13 at 20:46

## 2 Answers

Although it's often shown as one inductor with a tap taken off somewhere, you can use two separate inductors for a Hartley oscillator. You might want to consider capacitively coupling the feedback as in the second example; this will probably make it easier for your circuit to start to oscillate:

You don't have to use the RFC choke shown; you can still use a resistor at the collector.

EDIT

Here is my rough circuit with pictures and scope capture:

Picture on breadboard:

Just in case someone suggests coupling is taking place ;-)

Scope Capture from collector (supply around 6V)

• Are you implying L(top half) and L(lower half) do not need magnetic coupling? Just to make sure, I am not referring to L2 at all. – jippie Jan 29 '13 at 21:03
• Yes, they don't need coupling for the circuit to oscillate (but they can be coupled also). The equations are different in each case, see the OP's link. There are all sorts of variations with this type of oscillator. – Oli Glaser Jan 29 '13 at 21:38
• I think you are right, according to the formula there $f_o=\frac{1}{2 \pi \sqrt{(L_{XY}+L_{YZ}+2M)C}}$ varies with M ($M = k \cdot \sqrt{L_{XY} \cdot L_{YZ}}$, with $0<k<1$), but M can be 0. – jippie Jan 29 '13 at 22:00
• Just as a sanity check I built a quick non-coupled one on a breadboard and it oscillates (badly, as I haven't done any calculations, but it does oscillate - will post pics/circuit if anyone wants to see) – Oli Glaser Jan 29 '13 at 22:13
• Right, I added the circuit with a few pictures - as you can see, the output amplitude is quite small, but not too bad a sine wave. Note I capacitively coupled the feedback from collector as mentioned. It's not so easy to simulate (but it does) and the frequency agrees within a few kHz.
– Oli Glaser Jan 29 '13 at 22:47 According to the page you linked, The feedback of the tuned tank circuit is taken from the centre tap of the inductor coil or even two separate coils in series which are in parallel with a variable capacitor, C as shown. In this case "coil" is just a synonym for "inductor". Note, though, that the circuit does depend on magnetic coupling between the coils. You won't want to use "shielded" inductors for this, and you may need to experiment with the physical arrangement of your two separate inductors to get it to work. • Thanks. I'd assume you would want them as close to each other as you can arrange to maximize the chances of coupling. Since they are in series, I've got them sharing one row on the breadboard (one pin each), so they're touching. Would you suggest that they actually be separated further? – Don Wakefield Jan 29 '13 at 18:16 • Probably close and oriented parallel to each other. I'm not sure if end-to-end or side-by-side will work better. Also have a look at the linked questions on the right of the page --- there was a recent post with an example of doing this with air-core coils. – The Photon Jan 29 '13 at 18:19
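As a rough cross-check of the frequencies being discussed, the tank formula quoted in the comments can be evaluated directly. The sketch below is only illustrative: the two 22 mH inductors come from the question, but the 10 nF tank capacitance and the zero mutual inductance (M = 0, i.e. fully uncoupled inductors) are assumed example values, not values taken from the posts.

    import math

    L1 = L2 = 22e-3   # H, the fixed inductors mentioned in the question
    M = 0.0           # assume no magnetic coupling between the separate inductors
    C = 10e-9         # F, an assumed example tank capacitor (not from the post)

    # f0 = 1 / (2*pi*sqrt((L_XY + L_YZ + 2M) * C)), the formula quoted above
    f0 = 1.0 / (2.0 * math.pi * math.sqrt((L1 + L2 + 2.0 * M) * C))
    print(f"estimated tank resonance: {f0:.0f} Hz")  # about 7.6 kHz with these values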
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6014561653137207, "perplexity": 981.7027950127558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144165.4/warc/CC-MAIN-20200219153707-20200219183707-00487.warc.gz"}
http://zbmath.org/?q=an:1119.49025
# zbMATH — the first resource for mathematics

Well posedness in vector optimization problems and vector variational inequalities. (English) Zbl 1119.49025

Summary: We give notions of well posedness for a vector optimization problem and a vector variational inequality of the differential type. First, the basic properties of well-posed vector optimization problems are studied and the case of $C$-quasiconvex problems is explored. Further, we investigate the links between the well posedness of a vector optimization problem and of a vector variational inequality. We show that, under the convexity of the objective function $f$, the two notions coincide. These results extend properties which are well known in scalar optimization.

##### MSC:

49K40 Sensitivity, stability, well-posedness of optimal solutions
90C29 Multi-objective programming; goal programming
47J20 Inequalities involving nonlinear operators
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8907060027122498, "perplexity": 4628.377542442981}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011167968/warc/CC-MAIN-20140305091927-00098-ip-10-183-142-35.ec2.internal.warc.gz"}
http://projects.scbdd.com/pybiomed/reference/PyProteinAAComposition.html
# PyProteinAAComposition module

The module is used for computing the composition of amino acids, dipeptides and 3-mers (tri-peptides) for a given protein sequence. You can get 8420 descriptors for a given protein sequence. You can freely use and distribute it. If you have any problem, please contact us promptly!

References:

[1]: Reczko, M. and Bohr, H. (1994) The DEF data base of sequence based protein fold class predictions. Nucleic Acids Res, 22, 3616-3619.

[2]: Hua, S. and Sun, Z. (2001) Support vector machine approach for protein subcellular localization prediction. Bioinformatics, 17, 721-728.

[3]: Grassmann, J., Reczko, M., Suhai, S. and Edler, L. (1999) Protein fold class prediction: new methods of statistical classification. Proc Int Conf Intell Syst Mol Biol, 106-112.

Authors: Zhijiang Yao and Dongsheng Cao.

Date: 2016.06.04

Email: [email protected]

PyProteinAAComposition.CalculateAAComposition(ProteinSequence)

Calculate the composition of amino acids for a given protein sequence.

Usage: result=CalculateAAComposition(protein)

Input: protein is a pure protein sequence.

Output: result is a dict form containing the composition of the 20 amino acids.

PyProteinAAComposition.CalculateAADipeptideComposition(ProteinSequence)

Calculate the composition of amino acids, dipeptides and 3-mers for a given protein sequence.

Usage: result=CalculateAADipeptideComposition(protein)

Input: protein is a pure protein sequence.

Output: result is a dict form containing all 8420 composition values (amino acids, dipeptides and 3-mers).

PyProteinAAComposition.CalculateDipeptideComposition(ProteinSequence)

Calculate the composition of dipeptides for a given protein sequence.

Usage: result=CalculateDipeptideComposition(protein)

Input: protein is a pure protein sequence.

Output: result is a dict form containing the composition of the 400 dipeptides.

PyProteinAAComposition.GetSpectrumDict(proteinsequence)

Calculate the spectrum descriptors of 3-mers for a given protein.

Usage: result=GetSpectrumDict(protein)

Input: protein is a pure protein sequence.

Output: result is a dict form containing the composition values of the 8000 3-mers.

PyProteinAAComposition.Getkmers()

Get the amino acid list of 3-mers.

Usage: result=Getkmers()

Output: result is a list form containing 8000 tri-peptides.
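A minimal usage sketch of the functions documented above follows. It assumes the module can be imported under the name used on this page (PyProteinAAComposition); depending on how PyBioMed is packaged on your system, the import path may differ, and the protein sequence shown is an arbitrary example.

    import PyProteinAAComposition as aac

    protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # arbitrary example sequence

    aa = aac.CalculateAAComposition(protein)             # 20 amino acid composition values
    dip = aac.CalculateDipeptideComposition(protein)     # 400 dipeptide composition values
    full = aac.CalculateAADipeptideComposition(protein)  # all 8420 descriptors (20 + 400 + 8000)

    print(len(aa), len(dip), len(full))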
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31202232837677, "perplexity": 17066.347428334087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630175.17/warc/CC-MAIN-20210625115905-20210625145905-00592.warc.gz"}
http://mtosmt.org/issues/mto.17.23.3/blattler_examples.php?id=1&nonav=true
Example 2. Comparison of the impact of chord inversion upon an additive chord and upon a triad (a) inversion of the penultimate chord of Example 1 (b) replacement of penultimate chord of Example 1 with a dominant triad (c) inversion of the penultimate chord of Example 2b
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9012324213981628, "perplexity": 3061.4675643502164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746386.1/warc/CC-MAIN-20181120110505-20181120132505-00123.warc.gz"}
https://avidemia.com/pure-mathematics/miscellaneous-examples-on-chapter-ix/
1. Given that $$\log_{10} e = .4343$$ and that $$2^{10}$$ and $$3^{21}$$ are nearly equal to powers of $$10$$, calculate $$\log_{10}2$$ and $$\log_{10}3$$ to four places of decimals. 2. Determine which of $$(\frac{1}{2}e)^{\sqrt{3}}$$ and $$(\sqrt{2})^{\frac{1}{2}\pi}$$ is the greater. [Take logarithms and observe that $$\sqrt{3}/(\sqrt{3} + \frac{1}{4}\pi) < \frac{2}{5} \sqrt{3} < .6929 < \log 2$$.] 3. Show that $$\log_{10}n$$ cannot be a rational number if $$n$$ is any positive integer not a power of $$10$$. [If $$n$$ is not divisible by $$10$$, and $$\log_{10}n = p/q$$, we have $$10^{p} = n^{q}$$, which is impossible, since $$10^{p}$$ ends with $$0$$ and $$n^{q}$$ does not. If $$n = 10^{a}N$$, where $$N$$ is not divisible by $$10$$, then $$\log_{10}N$$ and therefore $\log_{10}n = a + \log_{10}N$ cannot be rational.] 4. For what values of $$x$$ are the functions $$\log x$$, $$\log\log x$$, $$\log\log\log x$$, … (a) equal to $$0$$ (b) equal to $$1$$ (c) not defined? Consider also the same question for the functions $$lx$$, $$llx$$, $$lllx$$, …, where $$lx = \log |x|$$. 5. Show that $\log x – \binom{n}{1} \log(x + 1) + \binom{n}{2} \log(x + 2) – \dots + (-1)^{n} \log(x + n)$ is negative and increases steadily towards $$0$$ as $$x$$ increases from $$0$$ towards $$\infty$$. [The derivative of the function is $\sum_{0}^{n} (-1)^{r} \binom{n}{r} \frac{1}{x + r} = \frac{n!}{x(x + 1) \dots (x + n)},$ as is easily seen by splitting up the right-hand side into partial fractions. This expression is positive, and the function itself tends to zero as $$x \to \infty$$, since $\log(x + r) = \log x + \epsilon_{x},$ where $$\epsilon_{x} \to 0$$, and $$1 – \dbinom{n}{1} + \dbinom{n}{2} – \dots = 0$$.] 6. Prove that $\left(\frac{d}{dx}\right)^{n} \frac{\log x}{x} = \frac{(-1)^{n} n!}{x^{n+1}} \left(\log x – 1 – \frac{1}{2} – \dots – \frac{1}{n}\right).$ 7. If $$x > -1$$ then $$x^{2} > (1 + x) \{\log(1 + x)\}^{2}$$. [Put $$1 + x = e^{\xi}$$, and use the fact that $$\sinh \xi > \xi$$ when $$\xi > 0$$.] 8. Show that $$\{\log(1 + x)\}/x$$ and $$x/\{(1 + x)\log(1 + x)\}$$ both decrease steadily as $$x$$ increases from $$0$$ towards $$\infty$$. 9. Show that, as $$x$$ increases from $$-1$$ towards $$\infty$$, the function $$(1 + x)^{-1/x}$$ assumes once and only once every value between $$0$$ and $$1$$. 10. Show that $$\dfrac{1}{\log(1 + x)} – \dfrac{1}{x} \to \dfrac{1}{2}$$ as $$x \to 0$$. 11. Show that $$\dfrac{1}{\log(1 + x)} – \dfrac{1}{x}$$ decreases steadily from $$1$$ to $$0$$ as $$x$$ increases from $$-1$$ towards $$\infty$$. [The function is undefined when $$x = 0$$, but if we attribute to it the value $$\frac{1}{2}$$ when $$x = 0$$ it becomes continuous for $$x = 0$$. Use Ex. 7 to show that the derivative is negative.] 12. Show that the function $$(\log \xi – \log x)/(\xi – x)$$, where $$\xi$$ is positive, decreases steadily as $$x$$ increases from $$0$$ to $$\xi$$, and find its limit as $$x \to \xi$$. 13. Show that $$e^{x} > Mx^{N}$$, where $$M$$ and $$N$$ are large positive numbers, if $$x$$ is greater than the greater of $$2\log M$$ and $$16N^{2}$$. [It is easy to prove that $$\log x < 2\sqrt{x}$$; and so the inequality given is certainly satisfied if $x > \log M + 2N\sqrt{x},$ and therefore certainly satisfied if $$\frac{1}{2}x > \log M$$, $$\frac{1}{2}x > 2N\sqrt{x}$$.] 14. If $$f(x)$$ and $$\phi(x)$$ tend to infinity as $$x \to \infty$$, and $$f'(x)/\phi'(x) \to \infty$$, then $$f(x)/\phi(x) \to \infty$$. [Use the result of Ch. VI, Misc. Ex. 33.] 
By taking $$f(x) = x^{\alpha}$$, $$\phi(x) = \log x$$, prove that $$(\log x)/x^{\alpha} \to 0$$ for all positive values of $$\alpha$$. 15. If $$p$$ and $$q$$ are positive integers then $\frac{1}{pn + 1} + \frac{1}{pn + 2} + \dots + \frac{1}{qn} \to \log\left(\frac{q}{p}\right)$ as $$n \to \infty$$. [Cf. Ex. LXXVIII. 6.] 16. Prove that if $$x$$ is positive then $$n\log\{\frac{1}{2}(1 + x^{1/n})\} \to -\frac{1}{2}\log x$$ as $$n \to \infty$$. [We have $n\log\{\tfrac{1}{2}(1 + x^{1/n})\} = n\log\{1 – \tfrac{1}{2}(1 – x^{1/n})\} = \tfrac{1}{2}n(1 – x^{1/n}) \frac{\log(1 – u)}{u}$ where $$u = \frac{1}{2}(1 – x^{1/n})$$. Now use § 209 and Ex. LXXXII. 4.] 17. Prove that if $$a$$ and $$b$$ are positive then $\{\tfrac{1}{2}(a^{1/n} + b^{1/n})\}^{n} \to \sqrt{ab}.$ [Take logarithms and use Ex. 16.] 18. Show that $1 + \frac{1}{3} + \frac{1}{5} + \dots + \frac{1}{2n – 1} = \tfrac{1}{2}\log n + \log 2 + \tfrac{1}{2} \gamma + \epsilon_{n},$ where $$\gamma$$ is Euler’s constant (Ex. LXXXIX. 1) and $$\epsilon_{n} \to 0$$ as $$n \to \infty$$. 19. Show that $1 + \tfrac{1}{3} – \tfrac{1}{2} + \tfrac{1}{5} + \tfrac{1}{7} – \tfrac{1}{4} + \tfrac{1}{9} + \dots = \tfrac{3}{2} \log 2,$ the series being formed from the series $$1 – \frac{1}{2} + \frac{1}{3} – \dots$$ by taking alternately two positive terms and then one negative. [The sum of the first $$3n$$ terms is $\begin{gathered} 1 + \frac{1}{3} + \frac{1}{5} + \dots + \frac{1}{4n – 1} – \frac{1}{2} \left(1 + \frac{1}{2} + \dots + \frac{1}{n}\right)\\ = \tfrac{1}{2}\log 2n + \log 2 + \tfrac{1}{2}\gamma + \epsilon_{n} – \tfrac{1}{2}(\log n + \gamma + \epsilon_{n}’), \end{gathered}$ where $$\epsilon_{n}$$ and $$\epsilon’_{n}$$ tend to $$0$$ as $$n \to \infty$$. (Cf. Ex. LXXVIII. 6).] 20. Show that $$1 – \frac{1}{2} – \frac{1}{4} + \frac{1}{3} – \frac{1}{6} – \frac{1}{8} + \frac{1}{5} – \frac{1}{10} – \dots = \frac{1}{2}\log 2$$. 21. Prove that $\sum_{1}^{n} \frac{1}{\nu(36\nu^{2} – 1)} = -3 + 3\Sigma_{3n+1} – \Sigma_{n} – S_{n}$ where $$S_{n} = 1 + \dfrac{1}{2} + \dots + \dfrac{1}{n}$$, $$\Sigma_{n} = 1 + \dfrac{1}{3} + \dots + \dfrac{1}{2n – 1}$$. Hence prove that the sum of the series when continued to infinity is $-3 + \tfrac{3}{2}\log 3 + 2\log 2.$ 22. Show that $\sum_{1}^{\infty} \frac{1}{n(4n^{2} – 1)} = 2\log 2 – 1, \quad \sum_{1}^{\infty} \frac{1}{n(9n^{2} – 1)} = \tfrac{3}{2}(\log 3 – 1).$ 23. Prove that the sums of the four series $\sum_{1}^{\infty} \frac{1}{4n^{2} – 1},\quad \sum_{1}^{\infty} \frac{(-1)^{n-1}}{4n^{2} – 1},\quad \sum_{1}^{\infty} \frac{1}{(2n + 1)^{2} – 1},\quad \sum_{1}^{\infty} \frac{(-1)^{n-1}}{(2n + 1)^{2} – 1}$ are $$\frac{1}{2}$$, $$\frac{1}{4}\pi – \frac{1}{2}$$, $$\frac{1}{4}$$, $$\frac{1}{2}\log 2 – \frac{1}{4}$$ respectively. 24. Prove that $$n!\, (a/n)^{n}$$ tends to $$0$$ or to $$\infty$$ according as $$a < e$$ or $$a > e$$. [If $$u_{n} = n!\, (a/n)^{n}$$ then $$u_{n+1}/u_{n} = a\{1 + (1/n)\}^{-n} \to a/e$$. It can be shown that the function tends to $$\infty$$ when $$a = e$$: for a proof, which is rather beyond the scope of the theorems of this chapter, see Bromwich’s Infinite Series, pp. 461 et seq.] 25. Find the limit as $$x \to \infty$$ of $\left(\frac{a_{0} + a_{1} x + \dots + a_{r} x^{r}} {b_{0} + b_{1} x + \dots + b_{r} x^{r}}\right)^{\lambda_{0}+\lambda_{1}x},$ distinguishing the different cases which may arise. 26. Prove that $\sum \log \left(1 + \frac{x}{n}\right)\quad (x > 0)$ diverges to $$\infty$$. [Compare with $$\sum (x/n)$$.] Deduce that if $$x$$ is positive then $(1 + x)(2 + x) \dots (n + x)/n! 
\to \infty$ as $$n \to \infty$$. [The logarithm of the function is $$\sum\limits_{1}^{n} \log \left(1 + \dfrac{x}{\nu}\right)$$.] 27. Prove that if $$x > -1$$ then $\begin{gathered} \frac{1}{(x + 1)^{2}} = \frac{1}{(x + 1) (x + 2)} + \frac{1!}{(x + 1) (x + 2) (x + 3)}\\ + \frac{2!}{(x + 1) (x + 2) (x + 3) (x + 4)} + \dots.\end{gathered}$ [The difference between $$1/(x + 1)^{2}$$ and the sum of the first $$n$$ terms of the series is $\frac{1}{(x + 1)^{2}}\, \frac{n!}{(x + 2) (x + 3) \dots (x + n + 1)}.]$ 28. No equation of the type $Ae^{\alpha x} + Be^{\beta x} + \dots = 0,$ where $$A$$, $$B$$, … are polynomials and $$\alpha$$, $$\beta$$, … different real numbers, can hold for all values of $$x$$. [If $$\alpha$$ is the algebraically greatest of $$\alpha$$, $$\beta$$, …, then the term $$Ae^{\alpha x}$$ outweighs all the rest as $$x \to \infty$$.] 29. Show that the sequence $a_{1} = e,\quad a_{2} = e^{e^{2}},\quad a_{3} = e^{e^{e^{3}}},\ \dots$ tends to infinity more rapidly than any member of the exponential scale. [Let $$e_{1}(x) = e^{x}$$, $$e_{2}(x) = e^{e_{1}(x)}$$, and so on. Then, if $$e_{k}(x)$$ is any member of the exponential scale, $$a_{n} > e_{k}(n)$$ when $$n > k$$.] 30. Prove that $\frac{d}{dx} \{\phi(x)\}^{\psi(x)} = \frac{d}{dx} \{\phi(x)\}^{\alpha} + \frac{d}{dx} \{\beta^{\psi(x)}\}$ where $$\alpha$$ is to be put equal to $$\psi(x)$$ and $$\beta$$ to $$\phi(x)$$ after differentiation. Establish a similar rule for the differentiation of $$\phi(x)^{[\{\psi(x)\}^{\chi(x)}]}$$. 31. Prove that if $$D_{x}^{n} e^{-x^{2}} = e^{-x^{2}} \phi_{n}(x)$$ then (i) $$\phi_{n}(x)$$ is a polynomial of degree $$n$$, (ii) $$\phi_{n+1} = -2x\phi_{n} + \phi_{n}’$$, and (iii) all the roots of $$\phi_{n} = 0$$ are real and distinct, and separated by those of $$\phi_{n-1} = 0$$. [To prove (iii) assume the truth of the result for $${\kappa} = 1$$, $$2$$, …, $${n}$$, and consider the signs of $${\phi_{n+1}}$$ for the $$n$$ values of $$x$$ for which $${\phi_{n}} = 0$$ and for large (positive or negative) values of $$x$$.] 32. The general solution of $$f(xy) = f(x)f(y)$$, where $$f$$ is a differentiable function, is $$x^{a}$$, where $$a$$ is a constant: and that of $f(x + y) + f(x – y) = 2f(x)f(y)$ is $$\cosh ax$$ or $$\cos ax$$, according as $$f”(0)$$ is positive or negative. [In proving the second result assume that $$f$$ has derivatives of the first three orders. Then $2f(x) + y^{2}\{f”(x) + \epsilon_{y}\} = 2f(x)[f(0) + yf'(0) + \tfrac{1}{2} y^{2}\{f”(0) + \epsilon_{y}’\}],$ where $$\epsilon_{y}$$ and $$\epsilon_{y}’$$ tend to zero with $$y$$. It follows that $$f(0) = 1$$, $$f'(0) = 0$$, $$f”(x) = f”(0)f(x)$$, so that $$a = \sqrt{f”(0)}$$ or $$a = \sqrt{-f”(0)}$$.] 33. How do the functions $$x^{\sin(1/x)}$$, $$x^{\sin^{2}(1/x)}$$, $$x^{\csc(1/x)}$$ behave as $$x \to +0$$? 34. Trace the curves $$y = \tan x e^{\tan x}$$, $$y = \sin x \log \tan \frac{1}{2}x$$. 35. The equation $$e^{x} = ax + b$$ has one real root if $$a < 0$$ or $$a = 0$$, $$b > 0$$. If $$a > 0$$ then it has two real roots or none, according as $$a\log a > b – a$$ or $$a\log a < b – a$$. 36. Show by graphical considerations that the equation $$e^{x} = ax^{2} + 2bx + c$$ has one, two, or three real roots if $$a > 0$$, none, one, or two if $$a < 0$$; and show how to distinguish between the different cases. 37. 
Trace the curve $$y = \dfrac{1}{x} \log\left(\dfrac{e^{x} – 1}{x}\right)$$, showing that the point $$(0, \frac{1}{2})$$ is a centre of symmetry, and that as $$x$$ increases through all real values, $$y$$ steadily increases from $$0$$ to $$1$$. Deduce that the equation $\frac{1}{x} \log\left(\frac{e^{x} – 1}{x}\right) = \alpha$ has no real root unless $$0 < \alpha < 1$$, and then one, whose sign is the same as that of $$\alpha – \frac{1}{2}$$. [In the first place $y – \tfrac{1}{2} = \frac{1}{x} \left\{\log\left(\frac{e^{x} – 1}{x}\right) – \log e^{\frac{1}{2} x}\right\} = \frac{1}{x} \log\left(\frac{\sinh \frac{1}{2}x}{\frac{1}{2}x}\right)$ is clearly an odd function of $$x$$. Also $\frac{dy}{dx} = \frac{1}{x^{2}} \left\{\tfrac{1}{2} x\coth \tfrac{1}{2}x – 1 – \log\left(\frac{\sinh \frac{1}{2}x}{\frac{1}{2}x}\right)\right\}.$ The function inside the large bracket tends to zero as $$x \to 0$$; and its derivative is $\frac{1}{x} \left\{1 – \left(\frac{\frac{1}{2}x}{\sinh \frac{1}{2}x}\right)^2\right\},$ which has the sign of $$x$$. Hence $$dy/dx > 0$$ for all values of $$x$$.] 38. Trace the curve $$y = e^{1/x} \sqrt{x^{2} + 2x}$$, and show that the equation $e^{1/x} \sqrt{x^{2} + 2x} = \alpha$ has no real roots if $$\alpha$$ is negative, one negative root if $0 < \alpha < a = e^{1/\sqrt{2}} \sqrt{2 + 2\sqrt{2}},$ and two positive roots and one negative if $$\alpha > a$$. 39. Show that the equation $$f_{n}(x) = 1 + x + \dfrac{x^{2}}{2!} + \dots + \dfrac{x^{n}}{n!} = 0$$ has one real root if $$n$$ is odd and none if $$n$$ is even. [Assume this proved for $$n = 1$$, $$2$$, … $$2k$$. Then $$f_{2k+1}(x) = 0$$ has at least one real root, since its degree is odd, and it cannot have more since, if it had, $$f’_{2k+1}(x)$$ or $$f_{2k}(x)$$ would have to vanish once at least. Hence $$f_{2k+1}(x) = 0$$ has just one root, and so $$f_{2k+2}(x) = 0$$ cannot have more than two. If it has two, say $$\alpha$$ and $$\beta$$, then $$f’_{2k+2}(x)$$ or $$f_{2k+1}(x)$$ must vanish once at least between $$\alpha$$ and $$\beta$$, say at $$\gamma$$. And $f_{2k+2}(\gamma) = f_{2k+1}(\gamma) + \frac{\gamma^{2k+2}}{(2k + 2)!} > 0.$ But $$f_{2k+2}(x)$$ is also positive when $$x$$ is large (positively or negatively), and a glance at a figure will show that these results are contradictory. Hence $$f_{2k+2}(x) = 0$$ has no real roots.] 40. Prove that if $$a$$ and $$b$$ are positive and nearly equal then $\log \frac{a}{b} = \frac{1}{2}(a – b) \left(\frac{1}{a} + \frac{1}{b}\right),$ approximately, the error being about $$\frac{1}{6}\{(a – b)/a\}^{3}$$. [Use the logarithmic series. This formula is interesting historically as having been employed by Napier for the numerical calculation of logarithms.] 41. Prove by multiplication of series that if $$-1 < x < 1$$ then \begin{aligned} \tfrac{1}{2}\{\log(1 + x)\}^{2} &= \tfrac{1}{2} x^{2} – \tfrac{1}{3}(1 + \tfrac{1}{2})x^{3} + \tfrac{1}{4}(1 + \tfrac{1}{2} + \tfrac{1}{3})x^{4} – \dots,\\ \tfrac{1}{2}(\arctan x)^{2} &= \tfrac{1}{2} x^{2} – \tfrac{1}{4}(1 + \tfrac{1}{3})x^{4} + \tfrac{1}{6}(1 + \tfrac{1}{3} + \tfrac{1}{5})x^{6} – \dots.\end{aligned} 42. Prove that $(1 + \alpha x)^{1/x} = e^{\alpha}\{1 – \tfrac{1}{2} a^{2}x + \tfrac{1}{24}(8 + 3a)a^{3}x^{2}(1 + \epsilon_{x})\},$ where $$\epsilon_{x} \to 0$$ with $$x$$. 43. 
The first $$n + 2$$ terms in the expansion of $$\log\left(1 + x + \dfrac{x^{2}}{2!} + \dots + \dfrac{x^{n}}{n!}\right)$$ in powers of $$x$$ are $x – \frac{x^{n+1}}{n!} \left\{\frac{1}{n + 1} – \frac{x}{1!\, (n + 2)} + \frac{x^{2}}{2!\, (n + 3)} – \dots + (-1)^{n} \frac{x^{n}}{n!\, (2n + 1)} \right\}.$ 44. Show that the expansion of $\exp \left(-x – \frac{x^{2}}{2} – \dots – \frac{x^{n}}{n}\right)$ in powers of $$x$$ begins with the terms $1 – x + \frac{x^{n+1}}{n + 1} – \sum_{s=1}^{n} \frac{x^{n+s+1}}{(n + s)(n + s + 1)}.$ 45. Show that if $$-1 < x < 1$$ then \begin{aligned} \frac{1}{3}x + \frac{1\cdot4}{3\cdot6}2^{2}x^{2} + \frac{1\cdot4\cdot7}{3\cdot6\cdot9}3^{2}x^{3} + \dots &= \frac{x(x + 3)}{9(1 – x)^{7/3}},\\ \frac{1}{3}x + \frac{1\cdot4}{3\cdot6}2^{3}x^{2} + \frac{1\cdot4\cdot7}{3\cdot6\cdot9}3^{3}x^{3} + \dots &= \frac{x(x^{2} + 18x + 9)}{27(1 – x)^{10/3}}.\end{aligned} [Use the method of Ex. XCII. 6. The results are more easily obtained by differentiation; but the problem of the differentiation of an infinite series is beyond our range.] 46. Prove that \begin{aligned} \int_{0}^{\infty} \frac{dx}{(x + a)(x + b)} &= \frac{1}{a – b} \log\left(\frac{a}{b}\right), \\ \int_{0}^{\infty} \frac{dx}{(x + a)(x + b)^{2}} &= \frac{1}{(a – b)^{2}b}\left\{a – b – b\log\left(\frac{a}{b}\right)\right\},\\ \int_{0}^{\infty} \frac{x\, dx}{(x + a)(x + b)^{2}} &= \frac{1}{(a – b)^{2}} \left\{a\log\left(\frac{a}{b}\right) – a + b\right\},\\ \int_{0}^{\infty} \frac{dx}{(x + a)(x^{2} + b^{2})} &= \frac{1}{(a^{2} + b^{2})b} \left\{\tfrac{1}{2}\pi a – b\log\left(\frac{a}{b}\right)\right\},\\ \int_{0}^{\infty} \frac{x\, dx}{(x + a)(x^{2} + b^{2})} &= \frac{1}{a^{2} + b^{2}} \left\{\tfrac{1}{2}\pi b + a\log\left(\frac{a}{b}\right)\right\},\end{aligned} provided that $$a$$ and $$b$$ are positive. Deduce, and verify independently, that each of the functions $a – 1 – \log a,\quad a\log a – a + 1,\quad \tfrac{1}{2}\pi a – \log a,\quad \tfrac{1}{2}\pi + a\log a$ is positive for all positive values of $$a$$. 47. Prove that if $$\alpha$$, $$\beta$$, $$\gamma$$ are all positive, and $$\beta^{2} > \alpha\gamma$$, then $\int_{0}^{\infty} \frac{dx}{\alpha x^{2} + 2\beta x + \gamma} = \frac{1}{\sqrt{\beta^{2} – \alpha\gamma}} \log \left\{\frac{\beta + \sqrt{\beta^{2} – \alpha\gamma}} {\sqrt{\alpha\gamma}} \right\};$ while if $$\alpha$$ is positive and $$\alpha\gamma > \beta^{2}$$ the value of the integral is $\frac{1}{\sqrt{\alpha\gamma – \beta^{2}}} \arctan \left\{\frac{\sqrt{\alpha\gamma – \beta^{2}}}{\beta}\right\},$ that value of the inverse tangent being chosen which lies between $$0$$ and $$\pi$$. Are there any other really different cases in which the integral is convergent? 48. Prove that if $$a > -1$$ then $\int_{1}^{\infty} \frac{dx}{(x + a)\sqrt{x^{2} – 1}} = \int_{0}^{\infty} \frac{dt}{\cosh t + a} = 2\int_{1}^{\infty}\frac{du}{u^{2} + 2au + 1};$ and deduce that the value of the integral is $\frac{2}{\sqrt{1 – a^{2}}} \arctan \sqrt{\frac{1 – a}{1 + a}}$ if $$-1 < a < 1$$, and $\frac{1}{\sqrt{a^{2} – 1}} \log\frac{\sqrt{a + 1} + \sqrt{a – 1}} {\sqrt{a + 1} – \sqrt{a – 1}} = \frac{2}{\sqrt{a^{2} – 1}} \operatorname{arg tanh} \sqrt{\frac{a – 1}{a + 1}}$ if $$a > 1$$. Discuss the case in which $$a = 1$$. 49. 
Transform the integral $$\int_{0}^{\infty} \frac{dx}{(x + a) \sqrt{x^{2} + 1}}$$, where $$a > 0$$, in the same ways, showing that its value is $\frac{1}{\sqrt{a^{2} + 1}} \log\frac{a + 1 + \sqrt{a^{2} + 1}}{a + 1 – \sqrt{a^{2} + 1}} = \frac{2}{\sqrt{a^{2} + 1}} \operatorname{arg tanh} \frac{\sqrt{a^{2} + 1}}{a + 1}.$ 50. Prove that $\int_{0}^{1} \arctan x\, dx = \tfrac{1}{4}\pi – \tfrac{1}{2}\log 2.$ 51. If $$0 < \alpha < 1$$, $$0 < \beta < 1$$, then $\int_{-1}^{1} \frac{dx}{\sqrt{(1 – 2\alpha x + \alpha^{2})(1 – 2\beta x + \beta^{2})}} = \frac{1}{\sqrt{\alpha\beta}} \log \frac{1 + \sqrt{\alpha\beta}}{1 – \sqrt{\alpha\beta}}.$ 52. Prove that if $$a > b > 0$$ then $\int_{-\infty}^{\infty} \frac{d\theta}{a\cosh \theta + b\sinh \theta} = \frac{\pi}{\sqrt{a^{2} – b^{2}}}{.}$ 53. Prove that $\int_{0}^{1} \frac{\log x}{1 + x^{2}}\, dx = -\int_{1}^{\infty} \frac{\log x}{1 + x^{2}}\, dx,\quad \int_{0}^{\infty} \frac{\log x}{1 + x^{2}}\, dx = 0,$ and deduce that if $$a > 0$$ then $\int_{0}^{\infty} \frac{\log x}{a^{2} + x^{2}}\, dx = \frac{\pi}{2a}\log a.$ [Use the substitutions $$x = 1/t$$ and $$x = au$$.] 54. Prove that $\int_{0}^{\infty} \log \left(1 + \frac{a^{2}}{x^{2}}\right) dx = \pi a$ if $$a > 0$$. [Integrate by parts.]
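For instance, the value stated in Ex. 50 above follows from a single integration by parts:
$\int_{0}^{1} \arctan x\, dx = \bigl[x \arctan x\bigr]_{0}^{1} – \int_{0}^{1} \frac{x}{1 + x^{2}}\, dx = \frac{\pi}{4} – \tfrac{1}{2}\bigl[\log(1 + x^{2})\bigr]_{0}^{1} = \frac{\pi}{4} – \tfrac{1}{2}\log 2.$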
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.991974949836731, "perplexity": 202.3665441633915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585911.17/warc/CC-MAIN-20211024050128-20211024080128-00645.warc.gz"}
https://socratic.org/questions/how-do-you-solve-2-ln-x-3-0-and-find-any-extraneous-solutions
Precalculus Topics

# How do you solve 2 ln (x + 3) = 0 and find any extraneous solutions?

Dividing both sides by $2$ gives $\ln(x + 3) = 0$, so $x + 3 = e^{0} = 1$ and therefore $x = -2$. Checking in the original equation: $2\ln(-2 + 3) = 2\ln 1 = 0$, so $x = -2$ is a valid solution. If the equation is manipulated in a way that only forces $|x + 3| = 1$ (for example, by squaring), the extra candidate $x + 3 = -1$, i.e. $x = -4$, also appears; it is extraneous, because $\ln(x + 3)$ is not defined as a real number when $x + 3 = -1 < 0$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9929364323616028, "perplexity": 452.78949317970853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684854.67/warc/CC-MAIN-20191018204336-20191018231836-00548.warc.gz"}
https://codereview.stackexchange.com/questions/209154/a-java-quadrilateral-inheritance-hierarchy-revisited
# A Java Quadrilateral Inheritance Hierarchy - revisited

This is an exercise from Deitel & Deitel's "Java. How to Program (Early Objects)", 10th edition.

9.8 (Quadrilateral Inheritance Hierarchy) Write an inheritance hierarchy for classes Quadrilateral, Trapezoid, Parallelogram, Rectangle and Square. Use Quadrilateral as the superclass of the hierarchy. Create and use a Point class to represent the points in each shape. Make the hierarchy as deep (i.e., as many levels) as possible. Specify the instance variables and methods for each class. The private instance variables of Quadrilateral should be the x-y coordinate pairs for the four endpoints of the Quadrilateral. Write a program that instantiates objects of your classes and outputs each object's area (except Quadrilateral).

I've seen some realizations of this shape hierarchy on the Internet, but they impose additional restrictions on the orientation of quadrilaterals. For example, the bases of trapezoids/parallelograms are parallel to the X or Y axis etc. There are no indications of such restrictions in the task. That's why I tried to implement this hierarchy using a simple custom Vector class. The Vectors are also used to work with points. This is a practice I've read about here.

package geometry2D;

import doubleWrapper.DoubleHandler;

public class Vector {
    private double x;
    private double y;

    public Vector(double x, double y) {
        setX(x);
        setY(y);
    }

    public Vector(Vector vector) {
        setX(vector.getX());
        setY(vector.getY());
    }

    public Vector(Vector v1, Vector v2) {
        setX(v2.getX() - v1.getX());
        setY(v2.getY() - v1.getY());
    }

    public Vector clone() {
        return new Vector(this);
    }

    public boolean equals(Vector vector) {
        return DoubleHandler.compare(this.x, vector.x) == 0
                && DoubleHandler.compare(this.y, vector.y) == 0;
    }

    public double length() {
        return Math.sqrt(Math.pow(x, 2) + Math.pow(y, 2));
    }

    public Vector toUnitVector() {
        double length = this.length();
        if (DoubleHandler.compare(length, 0.0) == 0)
            throw new IllegalArgumentException("ERROR: undefined direction! A zero vector cannot be converted to a unit vector!");
        return this.scale(1 / length);
    }

    public Vector scale(double scale) {
        return new Vector(this.x * scale, this.y * scale);
    }

    public Vector subtract(Vector subtrahend) {
        return new Vector(this.x - subtrahend.x, this.y - subtrahend.y);
    }

    public static double dotProduct(Vector v1, Vector v2) {
        return v1.x * v2.x + v1.y * v2.y;
    }

    public static double spanArea(Vector v1, Vector v2) {
        return v1.x * v2.y - v1.y * v2.x;
    }

    public boolean isCollinear(Vector v1, Vector v2) {
        v1.subtract(this);
        v2.subtract(this);
        return DoubleHandler.compare(spanArea(v1, v2), 0.0) == 0;
    }

    public Vector rotate(double angle) {
        double cos = Math.cos(angle);
        double sin = Math.sin(angle);
        Vector rotateUnitX = new Vector(cos, sin);
        Vector rotateUnitY = new Vector(-sin, cos);
        Vector rotatedX = rotateUnitX.scale(this.x);
        Vector rotatedY = rotateUnitY.scale(this.y);
        return new Vector(rotatedX.getX() + rotatedY.getX(), rotatedX.getY() + rotatedY.getY());
    }

    public void setX(double x) {
        this.x = x;
    }

    public double getX() {
        return x;
    }

    public void setY(double y) {
        this.y = y;
    }

    public double getY() {
        return y;
    }
}

To compare doubles properly, I make use of this method for comparing doubles.

package doubleWrapper;

public class DoubleHandler {
    private static final long BITS = 0xFFFFFFFFFFFFFFF0L;

    public static int compare(double a, double b) {
        long bitsA = Double.doubleToRawLongBits(a) & BITS;
        long bitsB = Double.doubleToRawLongBits(b) & BITS;
        if (bitsA < bitsB) return -1;
        if (bitsA > bitsB) return 1;
        return 0;
    }
}

Quadrilateral class. There are no setters, so the objects of this class are immutable.
All the subclasses are implemented in a similar fashion. Additionally, the immutability solves the "Rectangle-Square" OOP problem (at least as I've understood it).

package geometry2D;

public class Quadrilateral {
    private Vector v0;
    private Vector v1;
    private Vector v2;
    private Vector v3;

    public Quadrilateral(Vector v0, Vector v1, Vector v2, Vector v3) {
        if (v1.equals(v0) || v2.equals(v0) || v2.equals(v1)
                || v3.equals(v0) || v3.equals(v1) || v3.equals(v2))
            throw new IllegalArgumentException("ERROR: two or more points coincide!");
        if (v0.isCollinear(v1, v2) || v0.isCollinear(v1, v3)
                || v0.isCollinear(v2, v3) || v1.isCollinear(v2, v3))
            throw new IllegalArgumentException(
                    "ERROR: at least three of the defined points are collinear!");
        this.v0 = v0;
        this.v1 = v1;
        this.v2 = v2;
        this.v3 = v3;
    }

    public Vector getV0() {
        return v0;
    }

    public Vector getV1() {
        return v1;
    }

    public Vector getV2() {
        return v2;
    }

    public Vector getV3() {
        return v3;
    }

    @Override
    public String toString() {
        return String.format("%s%n(%.3f, %.3f)%n(%.3f, %.3f)%n(%.3f, %.3f)%n(%.3f, %.3f)%n",
                this.getClass().getSimpleName(),
                v0.getX(), v0.getY(), v1.getX(), v1.getY(),
                v2.getX(), v2.getY(), v3.getX(), v3.getY());
    }
}

Trapezoid class. The order of points in the constructor reflects the sequence in which they are connected in the quadrilateral. It is assumed that: v1 - v0 is a base of the trapezoid; v2 - v0 is a lateral side; length is the length of the remaining base. The trapezoid's diagonal divides it into two triangles, and the area of the trapezoid is calculated by summing the areas of the triangles. Using Vectors, it's very easy to find the area of a triangle using determinants (spanArea method).

package geometry2D;

public class Trapezoid extends Quadrilateral {

    public Trapezoid(Vector v0, Vector v1, Vector v2, double length) {
        // the fourth vertex is obtained by shifting v2 back along the direction of the first base
        super(v0, v1, v2, v2.subtract(new Vector(v0, v1).toUnitVector().scale(length)));
        if (length <= 0)
            throw new IllegalArgumentException(
                    "ERROR: a triangle or a self-intersecting trapezoid!");
    }

    public double getArea() {
        Vector base1 = new Vector(getV1(), getV0());
        Vector side2 = new Vector(getV2(), getV1());
        Vector base2 = new Vector(getV3(), getV2());
        Vector side1 = new Vector(getV0(), getV3());
        return 0.5 * (Math.abs(Vector.spanArea(base1, side1))
                + Math.abs(Vector.spanArea(base2, side2)));
    }
}

The Parallelogram, Rectangle and Square classes are derived from these in the same way, one level at a time. I'd like to hear any improvement suggestions on my realization.

1) Your Quadrilateral class and its subclasses aren't immutable right now... Having no setters doesn't make an object immutable. Any fields that are mutable themselves and are not hidden nor cloned when returned make the whole object mutable. An example with your code:

new Square(vector).getV0().setY(17); // I just made the square... not a square anymore

You have to make your Vector class immutable as well by getting rid of the setters. If I were you, I'd also make every field final.

2) Please note that double (and float) are fairly odd fellas. They have a special value named NaN (used to represent some erroneous results such as 0/0) that can (and will) really mess up your calculations. You should check against it in the Vector constructor with this method: https://docs.oracle.com/javase/10/docs/api/java/lang/Double.html#isNaN(double)

3) You should also reimplement the hashCode method: when you override equals, you should override hashCode as well, as can be seen in the Object javadoc: https://docs.oracle.com/javase/10/docs/api/java/lang/Object.html#hashCode()

4) The isCollinear method does not work as intended right now (the results of the subtract calls are discarded); don't forget to unit test your code ;)

5) When you override the clone method, you should also implement the Cloneable interface. However, I'd recommend not implementing it (and thus removing the method) as, firstly, there is usually little point in cloning an immutable object and, secondly, you already have a copy constructor.

6) I don't really get why you need to use this special method for your Double comparison... if you want high precision with real numbers, double is simply not meant for you...

7) You can't use the getArea method on objects of type 'Quadrilateral', which I find odd.

8) Finally, please note that the package name doesn't respect the conventions (it should start with a reversed domain prefix such as 'fr' or 'com').

Overall, the code is fairly clear and easy to read, so that's a good point.

• Thanks for the answer! Actually, in my first version Vectors were mutable, so I failed to make appropriate changes. Dec 8 '18 at 14:33

• A point about that specific method for comparing doubles: as far as I know, long calculations involving floating-point values usually lead to error accumulation. These errors might cause incorrect comparisons, for example where two very close values (the difference is in the last several bits) are considered as not equal. This is not the behavior I'd like to see in my program. Dec 8 '18 at 14:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23375685513019562, "perplexity": 8297.642175201736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057158.19/warc/CC-MAIN-20210921041059-20210921071059-00695.warc.gz"}
http://wiki.planetmath.org/blumnumber
# Blum number

Given a semiprime $n=pq$, if both $p$ and $q$ are Gaussian primes with no imaginary part, then $n$ is called a Blum number. The first few Blum numbers are 21, 33, 57, 69, 77, 93, 129, 133, 141, 161, 177, 201, 209, 213, 217, 237, 249, 253, 301, 309, 321, 329, 341, 381, 393, 413, 417, 437, 453, 469, 473, 489, 497, etc., listed in A016105 of Sloane's OEIS.

A semiprime that is a Blum number is also a semiprime among the Gaussian integers, and its prime factors also have no imaginary parts. The other real semiprimes are not semiprimes among the Gaussian integers. For example, 177 can only be factored as $3\times 59$ whether Gaussian integers are allowed or not. 159, on the other hand, can be factored as either $3\times 53$ or $3(-i)(2+7i)(7+2i)$.

Large Blum numbers had applications in cryptography prior to advances in integer factorization by means of quadratic sieves.

Title: Blum number · Canonical name: BlumNumber · Author: PrimeFan (13766) · Entry type: Definition · MSC: 11A51 · Synonym: Blum integer
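Since a rational prime stays prime in the Gaussian integers exactly when it is congruent to 3 modulo 4, the sequence above can be reproduced with a few lines of code. The helper functions below are my own illustration and are not part of the PlanetMath entry:

def is_prime(n):
    """Trial-division primality test; fine for the small numbers used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_blum(n):
    """True if n = p*q for distinct primes p, q that stay prime in the
    Gaussian integers, i.e. p ≡ q ≡ 3 (mod 4)."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            q = n // p
            return (p != q and is_prime(p) and is_prime(q)
                    and p % 4 == 3 and q % 4 == 3)
    return False

print([n for n in range(2, 150) if is_blum(n)])
# [21, 33, 57, 69, 77, 93, 129, 133, 141]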
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 7, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6608887314796448, "perplexity": 465.1896085818692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867885.75/warc/CC-MAIN-20180625131117-20180625151117-00567.warc.gz"}
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1650/2/c/g/
# Properties Label 1650.2.c.g Level $1650$ Weight $2$ Character orbit 1650.c Analytic conductor $13.175$ Analytic rank $0$ Dimension $2$ CM no Inner twists $2$ # Related objects ## Newspace parameters Level: $$N$$ $$=$$ $$1650 = 2 \cdot 3 \cdot 5^{2} \cdot 11$$ Weight: $$k$$ $$=$$ $$2$$ Character orbit: $$[\chi]$$ $$=$$ 1650.c (of order $$2$$, degree $$1$$, not minimal) ## Newform invariants Self dual: no Analytic conductor: $$13.1753163335$$ Analytic rank: $$0$$ Dimension: $$2$$ Coefficient field: $$\Q(\sqrt{-1})$$ Defining polynomial: $$x^{2} + 1$$ Coefficient ring: $$\Z[a_1, a_2]$$ Coefficient ring index: $$1$$ Twist minimal: no (minimal twist has level 330) Sato-Tate group: $\mathrm{SU}(2)[C_{2}]$ ## $q$-expansion Coefficients of the $$q$$-expansion are expressed in terms of $$i = \sqrt{-1}$$. We also show the integral $$q$$-expansion of the trace form. $$f(q)$$ $$=$$ $$q + i q^{2} + i q^{3} - q^{4} - q^{6} -i q^{8} - q^{9} +O(q^{10})$$ $$q + i q^{2} + i q^{3} - q^{4} - q^{6} -i q^{8} - q^{9} + q^{11} -i q^{12} -6 i q^{13} + q^{16} + 2 i q^{17} -i q^{18} + 4 q^{19} + i q^{22} + q^{24} + 6 q^{26} -i q^{27} + 10 q^{29} + i q^{32} + i q^{33} -2 q^{34} + q^{36} + 6 i q^{37} + 4 i q^{38} + 6 q^{39} + 2 q^{41} -4 i q^{43} - q^{44} -8 i q^{47} + i q^{48} + 7 q^{49} -2 q^{51} + 6 i q^{52} + 10 i q^{53} + q^{54} + 4 i q^{57} + 10 i q^{58} + 4 q^{59} -2 q^{61} - q^{64} - q^{66} -4 i q^{67} -2 i q^{68} -8 q^{71} + i q^{72} -2 i q^{73} -6 q^{74} -4 q^{76} + 6 i q^{78} + 8 q^{79} + q^{81} + 2 i q^{82} + 12 i q^{83} + 4 q^{86} + 10 i q^{87} -i q^{88} + 6 q^{89} + 8 q^{94} - q^{96} + 18 i q^{97} + 7 i q^{98} - q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$2q - 2q^{4} - 2q^{6} - 2q^{9} + O(q^{10})$$ $$2q - 2q^{4} - 2q^{6} - 2q^{9} + 2q^{11} + 2q^{16} + 8q^{19} + 2q^{24} + 12q^{26} + 20q^{29} - 4q^{34} + 2q^{36} + 12q^{39} + 4q^{41} - 2q^{44} + 14q^{49} - 4q^{51} + 2q^{54} + 8q^{59} - 4q^{61} - 2q^{64} - 2q^{66} - 16q^{71} - 12q^{74} - 8q^{76} + 16q^{79} + 2q^{81} + 8q^{86} + 12q^{89} + 16q^{94} - 2q^{96} - 2q^{99} + O(q^{100})$$ ## Character values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/1650\mathbb{Z}\right)^\times$$. $$n$$ $$551$$ $$727$$ $$1201$$ $$\chi(n)$$ $$1$$ $$-1$$ $$1$$ ## Embeddings For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label. Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 199.1 − 1.00000i 1.00000i 1.00000i 1.00000i −1.00000 0 −1.00000 0 1.00000i −1.00000 0 199.2 1.00000i 1.00000i −1.00000 0 −1.00000 0 1.00000i −1.00000 0 $$n$$: e.g. 
2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles ## Inner twists Char Parity Ord Mult Type 1.a even 1 1 trivial 5.b even 2 1 inner ## Twists By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 1650.2.c.g 2 3.b odd 2 1 4950.2.c.j 2 5.b even 2 1 inner 1650.2.c.g 2 5.c odd 4 1 330.2.a.d 1 5.c odd 4 1 1650.2.a.h 1 15.d odd 2 1 4950.2.c.j 2 15.e even 4 1 990.2.a.b 1 15.e even 4 1 4950.2.a.bg 1 20.e even 4 1 2640.2.a.t 1 55.e even 4 1 3630.2.a.f 1 60.l odd 4 1 7920.2.a.m 1 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 330.2.a.d 1 5.c odd 4 1 990.2.a.b 1 15.e even 4 1 1650.2.a.h 1 5.c odd 4 1 1650.2.c.g 2 1.a even 1 1 trivial 1650.2.c.g 2 5.b even 2 1 inner 2640.2.a.t 1 20.e even 4 1 3630.2.a.f 1 55.e even 4 1 4950.2.a.bg 1 15.e even 4 1 4950.2.c.j 2 3.b odd 2 1 4950.2.c.j 2 15.d odd 2 1 7920.2.a.m 1 60.l odd 4 1 ## Hecke kernels This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(1650, [\chi])$$: $$T_{7}$$ $$T_{13}^{2} + 36$$ $$T_{17}^{2} + 4$$ $$T_{19} - 4$$ ## Hecke characteristic polynomials $p$ $F_p(T)$ $2$ $$1 + T^{2}$$ $3$ $$1 + T^{2}$$ $5$ $$T^{2}$$ $7$ $$T^{2}$$ $11$ $$( -1 + T )^{2}$$ $13$ $$36 + T^{2}$$ $17$ $$4 + T^{2}$$ $19$ $$( -4 + T )^{2}$$ $23$ $$T^{2}$$ $29$ $$( -10 + T )^{2}$$ $31$ $$T^{2}$$ $37$ $$36 + T^{2}$$ $41$ $$( -2 + T )^{2}$$ $43$ $$16 + T^{2}$$ $47$ $$64 + T^{2}$$ $53$ $$100 + T^{2}$$ $59$ $$( -4 + T )^{2}$$ $61$ $$( 2 + T )^{2}$$ $67$ $$16 + T^{2}$$ $71$ $$( 8 + T )^{2}$$ $73$ $$4 + T^{2}$$ $79$ $$( -8 + T )^{2}$$ $83$ $$144 + T^{2}$$ $89$ $$( -6 + T )^{2}$$ $97$ $$324 + T^{2}$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9788670539855957, "perplexity": 7489.4424307951385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141213431.41/warc/CC-MAIN-20201130100208-20201130130208-00123.warc.gz"}
https://www.aboehler.at/doku/doku.php/projects:pydund
## pydund - Python dund replacement (PalmOS Bluetooth connectivity)

Years ago, when I was still at school, I bought a Palm Tungsten T3 PDA. A few days ago, I resurrected the device (i.e. replaced the battery and started using it again). The PDA still works fine; however, Bluetooth HotSync doesn't work anymore since the required Bluetooth daemons on the Linux side are deprecated. I looked around for a long time until I found an implementation of rfcomm.py in the bluez-compassion project here. The original script can emulate the necessary rfcomm server, but lacks an implementation of the PPP frontend. Furthermore, it is limited to either stdin/stdout or two separate named pipes - neither of which can be used with pppd. I modified the code to include support for Pseudo-Terminals (pty) and direct launching of pppd. So far, it works fine for me, even though the PPP connection parameters are currently hardcoded in the file. Furthermore, if you want to use it, you need to configure "sudo" so that it allows running pppd without a password for your user. I did this by adding the following line to the sudoers file (allowing members of the group "sudo" to run pppd without a password):

%sudo ALL=(ALL) NOPASSWD:/usr/bin/pppd

### Usage

Just run ./rfcomm.py -D for dund support. On Arch, you need to prefix it with python2, since it seems to only work with Python 2.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31341037154197693, "perplexity": 3093.204567520948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562410.53/warc/CC-MAIN-20220524014636-20220524044636-00093.warc.gz"}
https://www.physicsforums.com/threads/standard-ml-filter-and-mod.532378/
# Standard ML - Filter and mod

1. Sep 21, 2011

### azrarillian

Hi, I'm using SML and I'm trying to make a program/function that finds all the prime numbers in an Int list. What I'm trying to do is make a function that removes any (and all) elements x of the int list where x mod p = 0, where p is the first prime number (2). Then I want to make a recursion so that it does the same for the next element after p, which should be a prime number. The only problem I have is that I don't know how to filter or delete the elements x in the list. I've tried to use the function 'filter', but I can't figure out how to take the modulo of the elements in the tail of the list with the prime number. Also, I know that there are other ways to find prime numbers, and though this is the way I want to use (for now), any and all help otherwise is welcome.

Last edited: Sep 21, 2011
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8318396806716919, "perplexity": 338.95151551299705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805761.46/warc/CC-MAIN-20171119191646-20171119211646-00518.warc.gz"}
https://eprint.iacr.org/2009/151
## Cryptology ePrint Archive: Report 2009/151

Euclid's Algorithm, Gauss' Elimination and Buchberger's Algorithm

Shaohua Zhang

Abstract: It is known that Euclid's algorithm, Gauss' elimination and Buchberger's algorithm play important roles in algorithmic number theory, symbolic computation and cryptography, and even in science and engineering. The aim of this paper is to reveal again the relations of these three algorithms, and to simplify Buchberger's algorithm without using the multivariate division algorithm. We obtain an algorithm for computing the greatest common divisor of several positive integers, which can be regarded as the generalization of Euclid's algorithm. This enables us to re-find Gauss' elimination and further simplify Buchberger's algorithm for computing Gröbner bases of polynomial ideals in modern Computational Algebraic Geometry.

Category / Keywords: Euclid's algorithm, Gauss' elimination, multivariate polynomial, Gröbner bases, Buchberger's algorithm

Date: received 9 Mar 2009, last revised 14 Jan 2010

Contact author: shaohuazhang at mail sdu edu cn

Available format(s): PDF | BibTeX Citation

Note: The paper has been improved.

Short URL: ia.cr/2009/151

[ Cryptology ePrint archive ]
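The "GCD of several positive integers" mentioned in the abstract can be illustrated by folding the two-argument Euclidean algorithm over a list; the snippet below is a plain illustration of that idea, not the construction from the paper:

from functools import reduce

def gcd(a, b):
    """Euclid's algorithm for two positive integers."""
    while b:
        a, b = b, a % b
    return a

def gcd_many(numbers):
    """GCD of several positive integers, obtained by repeated
    application of the two-argument Euclidean algorithm."""
    return reduce(gcd, numbers)

print(gcd_many([84, 126, 210]))  # 42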
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9661580920219421, "perplexity": 3128.8789054734516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540532624.3/warc/CC-MAIN-20191211184309-20191211212309-00209.warc.gz"}
http://www.pdffilestore.com/calculus-early-transcendentals-8th-edition/
The calculus early transcendentals 8th edition is a math course by James Stewart. The book is a global best-seller because of its format, which has clear, concise, and actual relevant real-world examples. The author uses the book to convey the usefulness of calculations to improve technical proficiency and evaluate the inherent beauty of the subject. Related Book ### Calculus Early Transcendentals 8th Edition Solution calculus early transcendentals 8th edition exercise The study materials are integrated with patient examples to help build mathematical confidence and push the learner to obtain success in the course. The book contains the following seventeen chapters; Chapter 1 – Functions And Models Chapter 2 – Limits And Derivatives Chapter 3 – Differentiation Rules Chapter 4 – Applications Of Differentiation Chapter 5 – Integrals Chapter 6 – Applications Of Integration Chapter 7 – Techniques Of Integration Chapter 8 – Further Applications Of Integration Chapter 9 – Differential Equations Chapter 10 – The Parametric Equations And Their Polar Coordinates Chapter 11 – Infinite Sequences And Series Chapter 12 – Vectors And The Geometry Of Space Chapter 13 – Vector Functions Chapter 14 – Partial Derivatives Chapter 15 – Multiple Integrals Chapter 16 – Vector Calculus Chapter 17 – Second-Order Differential Equations Chapter format We will look at the format in the first chapter to fully understand why this book is beloved by many students. Chapter 1 Functions and Models 1.1 The Four Ways to Represent a Function and its exercises on page 19 1.2 The Mathematical Models: A Catalog of Essential Functions and its exercises on page 33 1.3 The New Functions from Old Functions and its exercises on page 42 1.4 The Exponential Functions and its exercises on page 53 1.5 The Inverse Functions and Logarithms and its exercises on page 66 The Review and Concept Check is on page 68 The Review and True-False Quiz is on page 69 The Chapter Review and its exercises on page 69 The Problems Plus is on page 76 As you can clearly see, every chapter has practical exercises at the end of each lesson. Besides, at the end of each chapter, there are also review questions and true-false quizzes and problem plus and chapter review questions and answers. The practical applications will help a learner remember the subject matter much better. The question in the textbook is below. Question: What is a function? Answer: A function is defined as an ordered pair (x, f (x)) so that a defined rule relates x and f (x). The set of all these values is the domain D of the function f, and the set of all values of (x) is called the interval R. ## James Stewart Calculus Solution The Students manual for James Stewart’s calculus 8th edition with solution is a book containing completed solutions to all of the exercises in the text. It gives calculus students a way to look at the solutions to the book’s problems and make sure that they did take the correct steps to come to an answer. This book is full of those practice questions and answers at the end of it. The chapters 1. Functions And Limits 2. Derivatives 3. Application Of Differentiation 4. Integrals 5. Applications Of Integration 6. Inverse Functions: Exponential, Logarithmic, And Inverse Trigonometric Functions 7. Techniques Of Integration 8. Further Applications Of Integration 9. Differential Equations 10. Parametric Equations And Polar Coordinates 11. Infinite Sequences And Series 12. Vectors And The Geometry Of Space 13. Vector Functions 14. Partial Derivatives 15. 
Multiple Integrals 16. Vector Calculus 17. Second-Order Differential Equations

The problem below is found in the first section, Chapter T, Problem 1ADT.

The problem asks you to evaluate each of the following expressions without a calculator:

(a) (-3)^4    (b) -3^4    (c) 3^-4    (d) 5^23 / 5^21

We will examine each answer individually.

a) The answer to the problem, the value of (-3)^4, is 81. For the procedural explanation, look at it this way: because the power is 4, you multiply negative three (-3) by itself four times to calculate the value of (-3)^4.

(-3)^4 = (-3) × (-3) × (-3) × (-3) = 9 × 9 = 81

Thus we deduce that the value of (-3)^4 is 81.

b) In this case, we have to evaluate -3^4, and the answer is -81. For the procedural explanation, consider the given expression -3^4. Here the power applies to 3 only, so you multiply three by itself four times and then negate the result. Mathematically this looks like the equation below.

-3^4 = -((3) × (3) × (3) × (3)) = -(9 × 9) = -81

Now the answer becomes clear: the value of -3^4 is -81.

c) Next we evaluate 3^-4. The answer to the problem for 3^-4 is 1/81. For the procedural explanation, consider the given expression 3^-4. Since the power is -4, we must multiply 1/3 by itself four times to find the value of 3^-4. So the equation looks like this:

3^-4 = (1/3) × (1/3) × (1/3) × (1/3) = 1/9 × 1/9 = 1/81

So the answer becomes clear: the value of 3^-4 is 1/81.

d) Finally, we evaluate 5^23 / 5^21, and the answer is 25. The procedural explanation uses the following rule: for a real number a and natural numbers m and n,

a^m / a^n = a^(m-n)

Applying this formula to the given expression, we can compute its value:

5^23 / 5^21 = 5^(23-21) = 5^2 = 25

So the final answer to the problem, the value of 5^23 / 5^21, is 25.

## Final words

The author, James Stewart, was a renowned mathematician. His books on mathematics are sought after because they offer practical lessons on how to learn the subject matter and include many exercises that solidify those lessons. The book is filled with exercises that are easy to follow, together with their corresponding answers for review. No lesson is easier to learn than one with practical examples included. As you know, James Stewart's calculus books are usually very popular with calculus students, and the reason is that he offers real practical examples and also puts in an extra effort to provide practice problems and their solutions.
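The four answers can be sanity-checked in a few lines of Python (added here as an illustration, not part of the book):

print((-3) ** 4)           # 81
print(-3 ** 4)             # -81: without parentheses the exponent binds first, as in part (b)
print(3 ** -4)             # 0.012345679... which equals 1/81
print(5 ** 23 // 5 ** 21)  # 25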
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8177005648612976, "perplexity": 895.1827861143211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499967.46/warc/CC-MAIN-20230202070522-20230202100522-00734.warc.gz"}
https://www.ias.ac.in/listing/bibliography/pram/Diptimoy_Ghosh
• Diptimoy Ghosh

Articles written in Pramana – Journal of Physics

• 𝐵 Physics: WHEPP-XI working group report

We present the report of the 𝐵 physics working group of the Workshop on High Energy Physics Phenomenology (WHEPP-XI), held at the Physical Research Laboratory, Ahmedabad, in January 2010.

• Physics beyond the Standard Model through $b \rightarrow s\mu^{+} \mu^{-}$ transition

A comprehensive study of the impact of new-physics operators with different Lorentz structures on decays involving the $b \rightarrow s\mu^{+} \mu^{-}$ transition is performed. The effects of new vector–axial vector (VA), scalar–pseudoscalar (SP) and tensor (T) interactions on the differential branching ratios, forward–backward asymmetries ($A_{\text{FB}}$'s), and direct CP asymmetries of $\bar{B}_{s}^{0} \rightarrow \mu^{+} \mu^{-}, \bar{B}_{d}^{0} \rightarrow X_{s} \mu^{+} \mu^{-}, \bar{B}_{s}^{0} \rightarrow \mu^{+} \mu^{-} \gamma, \bar{B}_{d}^{0} \rightarrow \bar{K} \mu^{+} \mu^{-}$, and $\bar{B}_{d}^{0} \rightarrow \bar{K}^{*} \mu^{+} \mu^{-}$ are examined. In $\bar{B}_{d}^{0} \rightarrow \bar{K}^{*} \mu^{+} \mu^{-}$, we also explore the longitudinal polarization fraction $f_{\text{L}}$ and the angular asymmetries $A_{\text{T}}^{2}$ and $A_{\text{LT}}$, the direct CP asymmetries in them, as well as the triple-product CP asymmetries $A_{\text{T}}^{\text{(im)}}$ and $A_{\text{LT}}^{\text{(im)}}$. While the new VA operators can significantly enhance most of the observables beyond the Standard Model predictions, the SP and T operators can do this only for $A_{\text{FB}}$ in $\bar{B}_{d}^{0} \rightarrow \bar{K} \mu^{+} \mu^{-}$.

• $B_{s}$ data at Tevatron and possible new physics

The new physics (NP) is parametrized with four model-independent quantities: the magnitudes and phases of the dispersive part $M_{12}$ and the absorptive part $\Gamma_{12}$ of the NP contribution to the effective Hamiltonian. We constrain these parameters using the four observables $\Delta M_{\text{s}}$, $\Delta \Gamma_{\text{s}}$, the mixing phase $\beta_{\text{s}}^{J/\psi \phi}$ and $A_{\text{sl}}^{b}$. This formalism is extended to include charge-parity-time reversal (CPT) violation, and it is shown that CPT violation by itself, or even in the presence of CPT-conserving NP without an absorptive part, helps only marginally in the simultaneous resolution of these anomalies.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9003090262413025, "perplexity": 2144.209129931562}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402128649.98/warc/CC-MAIN-20200930204041-20200930234041-00402.warc.gz"}
https://arxiv.org/abs/cs/0410047
Title: Simple Distributed Weighted Matchings

Abstract: Wattenhofer [WW04] derive a complicated distributed algorithm to compute a weighted matching of an arbitrary weighted graph, that is at most a factor 5 away from the maximum weighted matching of that graph. We show that a variant of the obvious sequential greedy algorithm [Pre99], that computes a weighted matching at most a factor 2 away from the maximum, is easily distributed. This yields the best known distributed approximation algorithm for this problem so far.

Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Discrete Mathematics (cs.DM)

Cite as: arXiv:cs/0410047 [cs.DC] (or arXiv:cs/0410047v1 [cs.DC] for this version)

Submission history

From: Jaap-Henk Hoepman

[v1] Tue, 19 Oct 2004 09:00:06 GMT (9kb)
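For reference, the "obvious sequential greedy algorithm" the abstract alludes to repeatedly picks the heaviest remaining edge whose endpoints are still free. The compact sketch below is my own illustration of that sequential baseline, not the paper's distributed variant:

def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum weight matching.
    edges: iterable of (weight, u, v) tuples."""
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

# Example: a path a-b-c-d with weights 3, 4, 3. Greedy picks (b, c) and nothing else,
# total weight 4, versus the optimum {(a, b), (c, d)} of weight 6 (within a factor 2).
print(greedy_matching([(3, "a", "b"), (4, "b", "c"), (3, "c", "d")]))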
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9363135099411011, "perplexity": 3357.4230123039974}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105187.53/warc/CC-MAIN-20170818213959-20170818233959-00675.warc.gz"}
https://www.redcrab-software.com/en/Calculator/Electrics/Voltage-Drop
# Voltage drop

Online calculators and formulas for calculating the voltage loss in a wire

## Calculate loss voltage of a wire

This page calculates the voltage that is lost in a wire due to its resistance. To do this, the input voltage, the current, the one-way cable length and the cable cross-section must be specified. A phase shift in the case of an inductive load can be specified as an option; for an ohmic load and for direct current, cos φ is preset to a value of 1. For the conductor material, either the specific resistance or the conductance can be given. The following table contains the most common conductance values.

#### Conductance (σ in m/(Ω·mm²))

Copper 56.0
Silver 62.5
Aluminium 35.0

For a list of other specific resistances and conductance values click here.

The interactive calculator takes the voltage (V), the load current (A), the wire length (one way) (m) *), the cross-sectional area (mm²), cos φ and either the resistivity (Ω) or the conductance (S), and returns the loss voltage, the useful voltage, the voltage drop and the wire resistance.

### Legend

$$\displaystyle A$$ cross-section
$$\displaystyle l$$ length
$$\displaystyle R$$ resistance of the wire
$$\displaystyle ρ$$ specific resistance
$$\displaystyle σ$$ specific conductance
$$\displaystyle Un$$ nominal voltage (input)
$$\displaystyle ΔU$$ loss voltage

*) Double the line length is calculated (outward and return line).

## Formulas for voltage drop calculation

Single wire resistance: $$\displaystyle R=\frac{ρ · l}{A} =\frac{l}{σ · A}$$

Total wire resistance: $$\displaystyle R=2 ·\frac{ρ · l}{A} =2 ·\frac{l}{σ · A}$$

Loss voltage: $$\displaystyle ΔU=2 ·\frac{l}{σ · A}· I · \cos(φ)$$

Voltage drop in %: $$\displaystyle Δu=\frac{ΔU}{Un} ·100 \%$$
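The formulas above translate directly into a few lines of code. The following sketch is my own illustration, mirroring the page's equations and its convention of doubling the one-way length:

def voltage_drop(u_nominal, current, length_one_way, area_mm2,
                 conductance=56.0, cos_phi=1.0):
    """Loss voltage, useful voltage, drop in % and wire resistance.

    conductance is sigma in m/(Ohm*mm^2) (about 56 for copper); the one-way
    length in metres is doubled internally for the outward and return line.
    """
    r_wire = 2 * length_one_way / (conductance * area_mm2)   # total wire resistance (Ohm)
    delta_u = r_wire * current * cos_phi                     # loss voltage (V)
    useful_u = u_nominal - delta_u                           # voltage remaining at the load (V)
    drop_percent = delta_u / u_nominal * 100                 # voltage drop in %
    return delta_u, useful_u, drop_percent, r_wire

# Example: 230 V, 16 A, 25 m one-way run of 2.5 mm^2 copper wire
print(voltage_drop(230, 16, 25, 2.5))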
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8655294179916382, "perplexity": 3339.6280948028298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00192.warc.gz"}
https://www.nature.com/articles/s41467-018-04485-1?error=cookies_not_supported&code=f6ebcebc-6eca-4fa9-85a7-d9e6e9eb4c83
Article | Open | Published: # Signal and noise extraction from analog memory elements for neuromorphic computing ## Abstract Dense crossbar arrays of non-volatile memory (NVM) can potentially enable massively parallel and highly energy-efficient neuromorphic computing systems. The key requirements for the NVM elements are continuous (analog-like) conductance tuning capability and switching symmetry with acceptable noise levels. However, most NVM devices show non-linear and asymmetric switching behaviors. Such non-linear behaviors render separation of signal and noise extremely difficult with conventional characterization techniques. In this study, we establish a practical methodology based on Gaussian process regression to address this issue. The methodology is agnostic to switching mechanisms and applicable to various NVM devices. We show tradeoff between switching symmetry and signal-to-noise ratio for HfO2-based resistive random access memory. Then, we characterize 1000 phase-change memory devices based on Ge2Sb2Te5 and separate total variability into device-to-device variability and inherent randomness from individual devices. These results highlight the usefulness of our methodology to realize ideal NVM devices for neuromorphic computing. ## Introduction Over several decades, the von Neumann architecture has enabled exponential improvements in system performance. However, as device scaling has slowed and demand to handle big data has soared, the time and energy spent transporting data across the physically separated memory and processing units have started to limit the performance and power efficiency. As potential alternatives, neuro-inspired non-von Neumann computing paradigms have become promising candidates to perform real-world tasks1, 2. One avenue of research is referred to as in-memory computing or computational memory, which exploits the physical properties of non-volatile memory (NVM) devices for both storing and processing information3,4,5,6. Recently, a large-scale experimental demonstration of this concept using an array of one million phase-change memory (PCM) devices has been reported7. Another paradigm is hardware acceleration of deep neural network (DNN)8,9,10,11,12 training via the use of dense crossbar arrays of NVM to perform locally analog computation at the location of the data. As shown in Fig. 1, it is possible to use NVM devices with variable conductance states, such as resistive random access memory (ReRAM)13 and PCM14 to represent the synaptic weights and to perform vector-matrix multiplication using the basic electrical principles, i.e., Ohm’s and Kirchhoff’s laws, thus enabling local and parallel computation on a large scale. By making the conductance change of the NVM element bidirectional, backpropagation algorithm can be implemented. Such a crossbar array of NVMs is expected to achieve significant acceleration factors of DNN training and remarkable reduction in power and area15, 16. Another active area of research is spiking neural networks (SNNs) motivated by the need to build more biologically realistic neural network models. Several neuromorphic computing platforms are being developed which are optimized for emulating spike-based computation. These SNNs are typically trained using certain local update rules, such as the spike-timing-dependent plasticity. NVM devices have recently found applications as both synaptic and neuronal elements of such SNNs17,18,19,20. 
The key technical challenge for these applications is to realize ideal NVM elements with continuous (analog-like) conductance tuning capability in response to electrical pulses with acceptable noise levels. For acceleration of DNN training, symmetric conductance change with positive and negative pulse amplitudes is another key requirement15, 16. The device conductance should go up with a voltage pulse of one polarity and should go down by the same magnitude with a voltage pulse of the opposite polarity. In general, NVM elements do not show this symmetric switching behavior. Therefore, a differential approach is often used in which two conductance values are compared in a unit cell14. In this configuration, linearity in switching is required to ensure a symmetric differential signal. In reality, most NVM elements exhibit highly non-linear evolution of conductance as a function of the number of consecutively applied pulses. This results in significant errors in weight updates13. In addition, such non-linear conductance change makes separation of signal and noise extremely difficult. Most NVM elements show stochasticity related to the physical origins of switching. When incremental weight updates are performed for analog NVM devices, the magnitude of conductance change approaches the level of inherent randomness21, manifesting as significant noise components. Therefore, establishing a universally applicable methodology to evaluate signal-to-noise ratio (SNR) of non-linear and analog NVM devices is of paramount importance for neuromorphic computing applications. In this study, we first establish a practical methodology based on a machine learning algorithm to precisely separate signal and noise components from an analog NVM device with non-linear conductance changes. The methodology is agnostic to the device physics, enabling us to apply it to different types of NVM elements. First, the methodology is applied to HfO2-based ReRAM to understand the relationship between switching symmetry and SNR. Next, the methodology is applied to PCM devices based on doped-Ge2Sb2Te5 (GST). We characterize 1000 devices and separate device-to-device variability and inherent randomness from individual devices. ## Results ### Analog switching behaviors of ReRAM and PCM As shown in Fig. 2a, our ReRAM device exhibited analog-like (incremental) change in the device conductance (G) in response to voltage pulses. Consecutive positive voltage (set) pulses (pulse number 1–1000) on the top electrode caused an overall ascending trend of G with some pulse-to-pulse fluctuations. On the other hand, consecutive negative voltage (reset) pulses (pulse number 1001–2000) caused a descending trend of G with similar fluctuations. The change of G in oxide ReRAM device is attributed to change in the configuration of the current conducting filament which consists of oxygen vacancies in a metal oxide film22, 23 as schematically illustrated in Fig. 2b. The movement of the oxygen vacancies in response to electrical signals has a probabilistic nature and it emerges as inherent randomness in weight updates, which are superimposed on the expected signal13. As for PCM, we investigated the device G changes in response to 20 consecutive set pulses. Figure 2c is a plot of G as a function of pulse number, showing incremental changes with a non-linear trace, which is convoluted with pulse-to-pulse fluctuations. The PCM device includes a small part of phase-change material that is sandwiched by top and bottom electrodes. 
Transition from the low conductance state (amorphous phase) to the high conductance state (crystalline phase) is caused by set pulses that create sufficient joule heating for crystallization of the GST material while the temperature is kept below the melting point as schematically illustrated in Fig. 2d. Due to the stochastic nature in crystallization of the phase-change materials2, 20, 21, 24, 25, there is significant randomness associated with the weight updates. On the other hand, reset to the low conductance state requires melting of the GST material and this process is known to be abrupt. For the purpose of characterization of analog switching behaviors, we focused on incremental set operations for PCM in this study. ### Characterization of NVM elements To evaluate the performance of analog NVM elements for neuromorphic computing applications, one has to extract noise-free signals from experimental data. A conventional approach is to assume a parametric model for expected conductance changes, derived from relatively simple assumptions on underlying physics. For ReRAM devices, an exponential formula has been proposed to capture the non-linear trend13. However, the pre-assumed exponential relationship often causes significant errors when fitting weight update as a function of number of applied pulses. In addition, different NVM elements generally need different fitting formulas, making it difficult to compare key performance parameters, such as switching symmetry and SNR, on a common ground. To address this issue, we leverage a machine learning algorithm called Gaussian process regression (GPR)26. GPR is a non-parametric Bayesian regression method, which does not assume any specific functional form such as linear and exponential. The main motivation for implementing GPR in the analysis of analog NVM elements is to let experimental data give predictions of noise-free signals by themselves. The major assumption we used is the smoothness of the curve. For analog NVM devices, we exploit continuous changes in switching media (e.g., filament configuration for ReRAM, volume of crystalline region for PCM) rather than non-continuous phenomena to achieve incremental conductance changes. This makes analog switching data highly compatible with the assumption of smoothness. The key ingredient of GPR is the kernel matrix (Eq. (6) in Methods), which controls the smoothness of the estimated functional curve. We established a practical approach to optimize the kernel matrix by combining the Bayesian marginalized likelihood maximization with the frequentists’ cross-validation approach. This enabled us to precisely separate signal and noise for our large dataset while avoiding numerical instability. The proposed inference procedure also assumes that a prior probability distribution over underlying functions follows a multivariate Gaussian distribution, which consists of a linear combination of finite random variables. This assumption is consistent with the switching mechanism of analog memory devices where the device conductance is governed by parallel configurations of randomly distributed conducting filaments comprising oxygen vacancies or crystalline phase-change materials. The measured device conductance values indeed follow a Gaussian distribution around noise-free signals and this was verified by observing the distribution of noise in our experimental data for ReRAM (Supplementary Note 1). The details of our GPR-based methodology are described in Methods section. 
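For readers who want to experiment with this kind of signal and noise separation, a minimal sketch using an off-the-shelf Gaussian process regressor is shown below. The RBF-plus-white-noise kernel, the toy conductance trace and the hyperparameters are illustrative assumptions only and do not reproduce the exact kernel-optimization procedure described in Methods:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# pulse number (input) and measured conductance in microsiemens; the toy data below
# stands in for one measured G-versus-pulse-number trace
pulse = np.arange(1, 1001).reshape(-1, 1)
g_meas = 20 * (1 - np.exp(-pulse.ravel() / 300)) + np.random.normal(0, 0.5, 1000)

# smoothness of the estimated curve is controlled by the kernel; its hyperparameters
# are fitted here by maximizing the marginal likelihood
kernel = 1.0 * RBF(length_scale=100.0) + WhiteKernel(noise_level=0.25)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(pulse, g_meas)

g_signal = gpr.predict(pulse)          # predicted noise-free conductance trace
residual = np.abs(g_meas - g_signal)   # r: deviation attributed to inherent write noise
delta_g = np.diff(g_signal)            # conductance change per pulse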
We performed cross-validation27 using our ReRAM data and confirmed that the GPR-based methodology extracted the inherent features irrespective of the sampling size (Supplementary Note 2). We confirmed the robustness of our methodology against the variation of duration of input pulses from 5 to 100 ns, covering the range of interest for neuromorphic computing (Supplementary Note 3). We also confirmed the robustness of our methodology against the variation of test temperature (Supplementary Note 4). For the rest of the analysis, we used a pulse duration of 100 ns and tested the devices at room temperature. Next, we extracted key performance metrics using the GPR fitting. We applied the methodology to our ReRAM data with 1000 consecutive set pulses, followed by 1000 consecutive reset pulses, for the purpose of characterizing switching symmetry. As shown in Fig. 3a, the GPR fitting gave predicted noise-free curves (red lines) for both set (black) and reset (blue) pulse sequences. Once the noise-free curves are estimated, the G change per pulse, denoted by ΔG, is easily computed, based on which we define SNR as $${\mathrm{SNR}}\,\underline{\underline {{\mathrm{def}}}} \,\frac{{{\mathrm{\Delta }}G}}{r},$$ (1) where r represents the absolute difference between predicted and observed G values (i.e., residuals). The impact of SNR on the accuracy of neural network was previously discussed21. Since relatively long sequences were used for both ReRAM and PCM devices to minimize fluctuations in read signals, we attribute r to inherent randomness associated with the physical origin of weight update. In artificial neural network implementations, fast reading is particularly preferred to decrease the overall cycle time and consequently accelerate the computational operations. This should increase the contribution of read noise. In this case, we need to optimize the read operation to balance the overall performance and the noise level, which is beyond the scope of this work. The extracted r value is shown as a function of pulse number in Fig. 3b. The absolute ΔG values for set and reset pulses are denoted by ΔG+ and ΔG, respectively. The ΔG+ (black) and ΔG (blue) are plotted as a function of pulse number in Fig. 3c. Figure 3d shows absolute SNR, calculated locally at each pulse from ΔG and r. For characterization of switching symmetry, we introduce symmetry factor (SF), which is defined as $${\mathrm{SF}}\underline{\underline {\,{\mathrm{def}}\,}} \frac{{\Delta G_ + - \Delta G_ - }}{{\Delta G_ + + \Delta G_ - }}.$$ (2) With this definition, the degree of symmetry is quantified as a value between −1 and 1, with 0 corresponding to the perfect symmetry. Asymmetry in both directions (larger ΔG+ versus ΔG) are equally weighted around 0 and can be compared with absolute values. In order to compute SF and SNR at a given G level, we need to express ΔG+, ΔG, and r as functions of G. Therefore, we divided the total G range into 100 sub-ranges and computed a mean value of ΔG and a root mean square value of r within each G sub-range. In this way, one can obtain SF and SNR for each G sub-range. The local extraction (i.e., at a certain pulse number or G level) of SF and SNR is a powerful feature of our methodology. The symmetry requirement for acceleration of DNN training specified in ref. 15 (<5% difference between ΔG+ and ΔG) corresponds to |SF| <0.025. 
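To make Eqs. (1) and (2) concrete, the locally binned extraction of SNR and SF can be sketched as follows, continuing the example above. This is illustrative code with assumed variable names, not the implementation used in this work:

import numpy as np

def local_snr_sf(g_set, dg_set, r_set, g_reset, dg_reset, r_reset, n_bins=100):
    """Bin the set and reset traces by conductance and report SNR and SF per bin.

    g_*  : fitted (noise-free) conductance at each pulse
    dg_* : absolute conductance change per pulse from the fitted trace (same length as g_*)
    r_*  : absolute residuals (inherent randomness) at each pulse
    """
    edges = np.linspace(min(g_set.min(), g_reset.min()),
                        max(g_set.max(), g_reset.max()), n_bins + 1)
    results = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_set = (g_set >= lo) & (g_set < hi)
        in_reset = (g_reset >= lo) & (g_reset < hi)
        if not (in_set.any() and in_reset.any()):
            continue
        dg_plus = dg_set[in_set].mean()                       # mean set update in this bin
        dg_minus = dg_reset[in_reset].mean()                  # mean reset update in this bin
        snr = 0.5 * (dg_plus / np.sqrt(np.mean(r_set[in_set] ** 2))
                     + dg_minus / np.sqrt(np.mean(r_reset[in_reset] ** 2)))
        sf = (dg_plus - dg_minus) / (dg_plus + dg_minus)      # symmetry factor, Eq. (2)
        results.append((0.5 * (lo + hi), snr, sf))
    return results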
### Switching symmetry and SNR of ReRAM devices We applied the GPR-based methodology on our ReRAM devices with different metal oxide thicknesses (device A: 5 nm, device B: 4 nm). The devices were tested under different set and reset voltages and the SNR and SF values were extracted locally at each G level, as shown in Fig. 4a. For SNR, we took mean values for set and reset traces. Representative G versus pulse number traces are shown in insets. Figure 4b shows a cross-sectional two-dimensional plot of SNR versus SF taken at G ~20 μs from Fig. 4a. At this G level, low |SF| values were achieved at relatively low SNR values, and vice versa. Data points are absent in the upper-left corner of Fig. 4b, indicating that there is a fundamental tradeoff between SNR and SF values. In order to investigate the relationship between SNR and SF values for multiple device/pulse conditions spanning different G levels, they were grouped according to SNR values and cumulative distribution function of |SF| were compared, as shown in Fig. 4c. The reproducibility of the trend was confirmed up to 10 different devices of device type B (Supplementary Note 5). One can clearly observe that the device/pulse conditions that lead to higher SNR values tend to result in poor switching symmetry. The tradeoff can be directly observed in the G versus pulse number plots (the insets of Fig. 4a). We speculate that higher switching symmetry is achieved by making the movement of oxygen vacancies more incremental and thereby changing the width of current conducting filament rather than completely rupturing and reforming it. ΔG is smaller for the former case and it should eventually approach the level of inherent randomness, resulting in lower SNR values. Such a tradeoff makes it difficult to improve both switching symmetry and SNR at the same time and it remains as a key challenge for ReRAM devices for neuromorphic computing applications. However, if these key metrics are accurately quantified like we demonstrated with our GPR-based methodology, one can optimize the device and pulse conditions to find the optimum point within the tradeoff. As reviewed in a previous section, switching symmetry is a critical requirement to implement backpropagation algorithm for DNNs. In reality, learning accuracy is compromised due to non-ideal (asymmetric) switching characteristics of synaptic elements. Therefore, we optimized the device condition (device A) and the pulse condition (set: 1.6 V, reset: –1.8 V) using the GPR-based methodology to minimize SF. The beauty of our methodology is the capability to extract SF, agnostic to switching mechanisms and irrespective of data size. This enabled us to compare our ReRAM data with various resistive switching devices in literature28,29,30,31,32,33,34,35. There have been reports on improved switching symmetry using pulses with varying amplitude28, 30, 31. These cases were benchmarked together and marked separately in Fig. 4d. One can see a general trend of improved symmetry using pulses with varying amplitudes. This approach, however, requires sensing of current states of individual devices and adjustment of voltage amplitudes, which is not compatible with local and parallel computation. It should be noted that our optimized ReRAM data showed good switching symmetry compared with all benchmark data with identical voltage pulses. This is a significant step forward to realize online training capability in a parallel manner. 
### Breakdown of variability components in 90-nm PCM devices

A conventional approach to extracting the inherent randomness associated with weight updates is to test multiple devices and obtain statistical distributions21. The variability obtained in this manner, however, includes device-to-device variability in addition to the inherent randomness of individual devices. These variability components need to be quantified separately in order to accurately assess the potential of a given NVM element for neuromorphic computing applications. We tested 1000 PCM devices and extracted signal and noise from individual devices using our GPR-based methodology. This enabled us to break the total variability down into the inherent randomness of individual devices and the device-to-device variability. These two variability components are illustrated in Fig. 5a with two representative PCM devices (devices 1 and 2) that were fabricated with the same process. The GPR fitting was performed to predict the noise-free signals, shown as red and blue solid lines, respectively, in Fig. 5a. The predicted signals for devices 1 and 2 deviate from each other due to device-to-device variability. In addition, the experimental data points (shown as circles) fluctuate around the individual fitted lines, which is attributable to the inherent randomness of weight updates, since read noise was minimized by the test sequence described in the Methods section.

We compared the histograms of ΔG values extracted from the experimental data and from the fitted curves after pulse numbers 2 (Fig. 5b) and 6 (Fig. 5c). The statistical distribution of the fitted curves (red) is the contribution from device-to-device variability, whereas the statistical distribution of the experimental data (blue) includes inherent randomness superimposed on top of that. The latter distribution was much wider, clearly showing the significant contribution of inherent randomness. The peak ΔG value decreased and the device-to-device variability (red) tightened from the second to the sixth pulse. The inherent randomness, on the other hand, remained relatively constant. This resulted in the tail of the total distribution (blue) extending into the negative ΔG regime, which is undesirable (Fig. 5c). The mean and standard deviation of ΔG obtained from the experimental data (black circles and error bars) are compared with the root mean square of the inherent randomness r obtained from the GPR-based methodology (red error bars) as a function of pulse number in Fig. 5d. The total standard deviation became comparable with ΔG for incremental weight updates. Since learning accuracy is known to degrade when the ratio of standard deviation to ΔG becomes >1 (ref. 21), reduction of variability is indispensable. Our analysis revealed that a large portion of the total variability (~67%) is attributable to the inherent randomness of individual devices, even for a mature technology based on the 90 nm CMOS baseline. The median SNR value calculated from the inherent randomness is ~35% for PCM devices, which is comparable to our ReRAM devices switching at a similar G level (cf. Fig. 4b). This indicates that variability due to inherent randomness is a common challenge for ReRAM and PCM in neuromorphic computing applications. Innovations in device and material are needed to suppress this component. Our methodology based on GPR enables precise extraction of the inherent randomness from individual devices and provides useful guidelines for further improvement.
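As a rough illustration of the Fig. 5-style decomposition (my own sketch, assuming per-device GPR fits are available and that device-to-device variability and inherent randomness add approximately in variance), one could compare the spread of ΔG taken from the fitted curves with the spread taken from the raw data:

```python
import numpy as np

def variability_breakdown(g_meas_all, g_fit_all, pulse_idx):
    """Compare device-to-device variability with total variability at one pulse.

    g_meas_all, g_fit_all : arrays of shape (n_devices, n_pulses)
    pulse_idx             : pulse number at which dG is evaluated (e.g. 2 or 6)
    """
    dG_fit  = g_fit_all[:, pulse_idx]  - g_fit_all[:, pulse_idx - 1]   # device-to-device only
    dG_data = g_meas_all[:, pulse_idx] - g_meas_all[:, pulse_idx - 1]  # total variability
    return {
        "device_to_device_std": float(np.std(dG_fit)),
        "total_std": float(np.std(dG_data)),
        # share of the total variance not explained by device-to-device spread
        "inherent_fraction": float(1.0 - np.var(dG_fit) / np.var(dG_data)),
    }
```

Histogramming dG_fit and dG_data side by side would reproduce the red/blue comparison of Fig. 5b, c.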
## Discussion

We established a practical methodology based on GPR to precisely separate signal and noise components from analog NVM elements with non-linear conductance changes. This solves key technical challenges in the characterization of artificial synapses for neuromorphic computing systems, namely the extraction of switching symmetry and SNR. The methodology is agnostic to switching mechanisms and therefore applicable to various types of NVMs. We applied the methodology to HfO2-based ReRAM devices and found a tradeoff between switching symmetry and SNR. Using SF as a guideline, substantial improvement in switching symmetry was achieved compared to ReRAM devices reported in the literature. Through systematic analysis of 1000 GST-based PCM devices, we clearly demonstrated that a large portion of the variability in weight updates is attributable to the inherent randomness of individual devices, and that this is the key component to be suppressed in order to achieve high classification accuracy.

Finally, the proposed methodology helps neuromorphic system engineers in two ways, depending on the phase of technology development. In an exploratory phase, our methodology enables extraction of switching symmetry and SNR from individual devices and expedites the search for ideal materials. The conventional methodology requires fabrication of many devices with tight device-to-device variability for extraction of SNR, which is difficult to attain at the early stage when exotic material options need to be screened. In a relatively mature technology phase, our methodology helps find the optimum input signals (e.g., duration and amplitude of pulses) that provide the best switching symmetry (linearity) and SNR within the tradeoff for the entire neuromorphic system.

## Methods

### PCM device fabrication and test

The PCM devices were integrated into a chip fabricated in 90 nm CMOS technology36. The phase-change material is doped Ge2Sb2Te5. The bottom electrode has a radius of ~20 nm and was defined using a sub-lithographic key-hole transfer process37. The phase-change material is ~100 nm thick and extends to the top electrode. All experiments in this work were done on an array comprising 1 million devices, organized as a matrix of 512 word lines (WLs) and 2048 bit lines (BLs). A single PCM device is selected by serially addressing a WL and a BL, and can be programmed by forcing a current through the BL with a voltage-controlled current source. For reading a PCM cell, the selected BL is biased to a constant voltage of 0.3 V. The resulting read current is integrated by a capacitor, and the resulting voltage is then digitized by an on-chip 8-bit cyclic ADC. The ADCs are calibrated by means of on-chip reference poly-silicon resistors. For characterization of incremental device G changes, each device was first initialized to a state with almost zero conductance. After the initialization, a set pulse of 70 μA was applied, followed by conductance read steps. The read step was repeated 50 times to obtain mean G values, in order to minimize read noise and focus the characterization on write noise. This sequence was repeated 20 times to obtain G values as a function of pulse number.
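Schematically, that test loop might look as follows; apply_set_pulse and read_conductance are hypothetical stand-ins for the on-chip programming and ADC read-out, and the point is only that each reported G value is the mean of 50 reads, so that the residual r can be attributed to write noise.

```python
import numpy as np

N_PULSES, N_READS = 20, 50   # values from the test sequence described above

def characterize_device(apply_set_pulse, read_conductance):
    """Sketch of the incremental-G test sequence (hardware calls are stand-ins)."""
    g_trace = []
    for _ in range(N_PULSES):
        apply_set_pulse(current_uA=70)                         # one 70 uA set pulse
        reads = [read_conductance() for _ in range(N_READS)]   # 50 repeated reads
        g_trace.append(np.mean(reads))                         # averaging suppresses read noise
    return np.array(g_trace)                                   # G as a function of pulse number
```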
### GPR-based methodology

The goal of GPR is to learn a probability distribution of the output signal $y$, conditioned on the input signal $x$, from data $\{(x^{(n)}, y^{(n)}) \mid n = 1, \ldots, N\}$, where $N$ is the number of samples and the superscript $(n)$ denotes the $n$-th sample in the data. The distribution is given by

$$p(y \mid x) = \mathcal{N}\bigl(y \mid m(x),\, s^2(x)\bigr), \qquad (3)$$

$$m(x) = \mathbf{k}^{\mathrm{T}}(\mathbf{K} + \mathbf{I})^{-1}\mathbf{y}_N, \qquad (4)$$

$$s^2(x) = \sigma^2\bigl[\,2 - \mathbf{k}^{\mathrm{T}}(\mathbf{K} + \mathbf{I})^{-1}\mathbf{k}\,\bigr], \qquad (5)$$

where $\mathcal{N}(y \mid m(x), s^2(x))$ denotes the Gaussian distribution of $y$ with mean $m(x)$ and variance $s^2(x)$. Also, $\sigma^2$ denotes the variance corresponding to measurement noise, $\mathbf{I}$ denotes the identity matrix, and $\mathbf{y}_N = (y^{(1)}, \ldots, y^{(N)})^{\mathrm{T}}$, where the superscript T denotes the matrix transpose. The key ingredient of GPR is the kernel matrix $\mathbf{K}$, which controls the smoothness of the estimated functional curve. We use a non-dimensional kernel $\mathbf{K}$ whose $(i,j)$ element is given by

$$K\bigl(x^{(i)}, x^{(j)}\bigr) \overset{\mathrm{def}}{=} \exp\left(-\frac{|x^{(i)} - x^{(j)}|^2}{2\sigma_K^2}\right). \qquad (6)$$

The $n$-th entry of the $N$-dimensional vector $\mathbf{k}(x)$ is likewise given by $K(x, x^{(n)})$. The parameters $\sigma_K^2$ and $\sigma^2$ are learned from the data, as explained below. The idea is to use the predictive mean $m(x)$ at the input value (pulse number) $x$ as a noise-free version of the output signal (G).

### Determining GPR parameters

The parameter $\sigma^2$ is determined by maximizing the log marginalized likelihood26, which in our parameterization is given by

$$E(\sigma) \overset{\mathrm{def}}{=} -\frac{N}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\,\mathbf{y}_N^{\mathrm{T}}(\mathbf{K} + \mathbf{I})^{-1}\mathbf{y}_N - \frac{1}{2}\ln\det(\mathbf{K} + \mathbf{I}) + c, \qquad (7)$$

where $c$ denotes an unimportant constant and det is the matrix determinant. Assuming $\sigma_K$ is given for now and taking the derivative with respect to $\sigma^{-2}$, we have

$$\sigma^2 = \frac{1}{N}\,\mathbf{y}_N^{\mathrm{T}}(\mathbf{K} + \mathbf{I})^{-1}\mathbf{y}_N. \qquad (8)$$

To compute this, we need a value of $\sigma_K$. In theory, we could find it by maximizing $E$ simultaneously with $\sigma$. This approach, however, involves a complex non-linear optimization procedure and often results in numerical instability in our application. Here we propose a practical approach that combines the Bayesian marginalized likelihood maximization with the frequentists' cross-validation approach.
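Before describing how σ_K itself is chosen, here is a minimal NumPy sketch of the predictor of Eqs. (4)–(5) with the kernel of Eq. (6) and the closed-form noise variance of Eq. (8). It is an illustrative rendition of the equations, not the authors' implementation; σ_K is taken as given.

```python
import numpy as np

def gpr_fit(x_train, y_train, sigma_k):
    """Sketch of the GPR of Eqs. (3)-(8): squared-exponential kernel, noise
    variance set in closed form by Eq. (8). x_train = pulse numbers, y_train = measured G."""
    x = np.asarray(x_train, float)
    y = np.asarray(y_train, float)
    N = x.size
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * sigma_k ** 2))   # Eq. (6)
    A_inv_y = np.linalg.solve(K + np.eye(N), y)
    sigma2 = float(y @ A_inv_y) / N                                      # Eq. (8)

    def predict(x_new):
        x_new = np.atleast_1d(np.asarray(x_new, float))
        k = np.exp(-(x_new[:, None] - x[None, :]) ** 2 / (2.0 * sigma_k ** 2))
        mean = k @ A_inv_y                                               # Eq. (4)
        var = sigma2 * (2.0 - np.einsum('ij,ij->i', k,
                        np.linalg.solve(K + np.eye(N), k.T).T))          # Eq. (5)
        return mean, var

    return predict, sigma2

# Example usage on a hypothetical measured trace g_meas:
# predict, sigma2 = gpr_fit(np.arange(len(g_meas)), g_meas, sigma_k=3 * len(g_meas))
# g_fit, _ = predict(np.arange(len(g_meas)))
# r = np.abs(g_meas - g_fit)   # residuals entering Eq. (1)
```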
Specifically, to determine $\sigma_K$, we maximize the predictive leave-one-out (LOO) likelihood, defined by

$$L(\sigma_K) \overset{\mathrm{def}}{=} \sum_{i=1}^{N} \ln \mathcal{N}\bigl(y^{(i)} \mid m_{-i}(x^{(i)}),\, s_{-i}^2(x^{(i)})\bigr), \qquad (9)$$

where $m_{-i}$ and $s_{-i}^2$ are the predictive mean and variance of GPR (Eqs. (4) and (5)) obtained from the dataset excluding the $i$-th sample. To find the maximizer of $L(\sigma_K)$, we can leverage the fact that the observed variance does not depend heavily on the input across the entire domain. By replacing $s_{-i}^2$ with a constant, the LOO likelihood criterion reduces to the task of finding a minimizer of the mean square of the residual (i.e., r), which is easily done independently of $\sigma^2$. In this study, we use the following procedure and criterion to find an appropriate $\sigma_K$ value from the experimental data: we vary $\sigma_K$ over a wide range and identify an optimum range in which changing $\sigma_K$ negligibly affects the extracted r values. This is practically equivalent to maximizing the predictive LOO likelihood. Our criterion is an r change of <1% for a $\sigma_K$ change of 10%, and this is met with a $\sigma_K$ value of around $3 \times N$ for our dataset (Supplementary Note 6).

### Data availability

The data that support the findings of this study are available from the corresponding author upon request.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Merolla, P. A. et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345, 668–673 (2014).
2. Burr, G. W. et al. Neuromorphic computing using non-volatile memory. Adv. Phys. X 2, 89–124 (2016).
3. Gallo, M. L. et al. Mixed-precision in-memory computing. Nat. Electron. 1, 246–253 (2018).
4. Sheridan, P. M. et al. Sparse coding with memristor networks. Nat. Nanotechnol. 12, 784–789 (2017).
5. Wright, C. D., Hosseini, P. & Diosdado, J. A. V. Beyond von-Neumann computing with nanoscale phase-change memory devices. Adv. Funct. Mater. 23, 2248–2254 (2012).
6. Hosseini, P., Sebastian, A., Papandreou, N., Wright, C. D. & Bhaskaran, H. Accumulation-based computing using phase-change memories with FET access devices. IEEE Electron Device Lett. 36, 975–977 (2015).
7. Sebastian, A. et al. Temporal correlation detection using computational phase-change memory. Nat. Commun. 8, 1115 (2017).
8. Lecun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
9. Collobert, R. & Weston, J. A unified architecture for natural language processing. In Proc. 25th International Conference on Machine Learning - ICML 08 (ACM, Helsinki, Finland, 2008).
10. Russakovsky, O. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vision 115, 211–252 (2015).
11. Hinton, G. et al. Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process. Mag. 29, 82–97 (2012).
12. Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 84–90 (2017).
13. Chen, P.-Y. et al. Mitigating effects of non-ideal synaptic device characteristics for on-chip learning.
In 2015 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) https://doi.org/10.1109/iccad.2015.7372570 (IEEE, Austin, USA, 2015).
14. Burr, G. W. et al. Experimental demonstration and tolerancing of a large-scale neural network (165000 synapses) using phase-change memory as the synaptic weight element. IEEE Trans. Electron Devices 62, 3498–3507 (2015).
15. Gokmen, T. & Vlasov, Y. Acceleration of deep neural network training with resistive cross-point devices: design considerations. Front. Neurosci. 10, 333 (2016).
16. Agarwal, S. et al. Resistive memory device requirements for a neural algorithm accelerator. In 2016 International Joint Conference on Neural Networks (IJCNN) https://doi.org/10.1109/ijcnn.2016.7727298 (IEEE, Vancouver, Canada, 2016).
17. Kuzum, D., Jeyasingh, R. G. D., Lee, B. & Wong, H.-S. P. Nanoelectronic programmable synapses based on phase change materials for brain-inspired computing. Nano Lett. 12, 2179–2186 (2011).
18. Kim, S. et al. NVM neuromorphic core with 64k-cell (256-by-256) phase change memory synaptic array with on-chip neuron circuits for continuous in-situ learning. In 2015 IEEE International Electron Devices Meeting (IEDM) https://doi.org/10.1109/iedm.2015.7409716 (IEEE, Washington DC, USA, 2015).
19. Saïghi, S. et al. Plasticity in memristive devices for spiking neural networks. Front. Neurosci. 9, 51 (2015).
20. Tuma, T., Pantazi, A., Gallo, M. L., Sebastian, A. & Eleftheriou, E. Stochastic phase-change neurons. Nat. Nanotechnol. 11, 693–699 (2016).
21. Boybat, I. et al. Stochastic weight updates in phase-change memory-based synapses and their influence on artificial neural networks. In 2017 13th Conference on Ph.D. Research in Microelectronics and Electronics (PRIME) https://doi.org/10.1109/prime.2017.7974095 (IEEE, Giardini Naxos, Italy, 2017).
22. Miranda, E., Jimenez, D. & Sune, J. The quantum point-contact memristor. IEEE Electron Device Lett. 33, 1474–1476 (2012).
23. Ielmini, D. Modeling the universal set/reset characteristics of bipolar RRAM by field- and temperature-driven filament growth. IEEE Trans. Electron Devices 58, 4309–4317 (2011).
24. Wong, H.-S. P. et al. Phase change memory. Proc. IEEE 98, 2201–2227 (2010).
25. Gallo, M. L., Tuma, T., Zipoli, F., Sebastian, A. & Eleftheriou, E. Inherent stochasticity in phase-change memory devices. In 2016 46th European Solid-State Device Research Conference (ESSDERC) https://doi.org/10.1109/essderc.2016.7599664 (IEEE, Lausanne, Switzerland, 2016).
26. Rasmussen, C. E. & Williams, C. K. I. Gaussian Processes for Machine Learning (MIT Press, Cambridge, United States, 2008).
27. James, G., Witten, D., Hastie, T. & Tibshirani, R. An Introduction to Statistical Learning: with Applications in R (Springer, New York, United States, 2017).
28. Jang, J.-W., Park, S., Burr, G. W., Hwang, H. & Jeong, Y.-H. Optimization of conductance change in Pr1–xCaxMnO3-based synaptic devices for neuromorphic systems. IEEE Electron Device Lett. 36, 457–459 (2015).
29. Jo, S. H. et al. Nanoscale memristor device as synapse in neuromorphic systems. Nano Lett. 10, 1297–1301 (2010).
30. Wang, I.-T., Chang, C.-C., Chiu, L.-W., Chou, T. & Hou, T.-H. 3D Ta/TaOx/TiO2/Ti synaptic array and linearity tuning of weight update for hardware neural network applications. Nanotechnology 27, 365204 (2016).
31. Chen, W. et al.
A CMOS-compatible electronic synapse device based on Cu/SiO2/W programmable metallization cells. Nanotechnology 27, 255202 (2016).
32. Yao, P. et al. Face classification using electronic synapses. Nat. Commun. 8, 15199 (2017).
33. Marinella, M. J. et al. Multiscale co-design analysis of energy, latency, area, and accuracy of a ReRAM analog neural training accelerator. Preprint at http://arxiv.org/abs/1707.09952 (2017).
34. Wu, W. et al. Improving analog switching in HfOx-based resistive memory with a thermal enhanced layer. IEEE Electron Device Lett. 38, 1019–1022 (2017).
35. Woo, J. et al. Improved synaptic behavior under identical pulses using AlOx/HfO2 bilayer RRAM array for neuromorphic systems. IEEE Electron Device Lett. 37, 994–997 (2016).
36. Close, G. F. et al. Device, circuit and system-level analysis of noise in multi-bit phase-change memory. In 2010 International Electron Devices Meeting https://doi.org/10.1109/iedm.2010.5703445 (IEEE, San Francisco, USA, 2010).
37. Breitwisch, M. et al. Novel lithography-independent pore phase change memory. In 2007 IEEE Symposium on VLSI Technology https://doi.org/10.1109/vlsit.2007.4339743 (IEEE, Kyoto, Japan, 2007).

## Acknowledgements

We would like to thank Marwan Khater, Hiroyuki Miyazoe, Adam Pyzyna, and the staff of the Microelectronics Research Laboratory at IBM T.J. Watson Research Center for their contributions in device fabrication. We would also like to thank Wilfried Haensch for management support and valuable discussions.

## Author information

### Affiliations

IBM T. J. Watson Research Center, 1101 Kitchawan Road, Yorktown Heights, NY, 10598, USA: N. Gong, T. Idé, S. Kim, V. Narayanan & T. Ando

IBM Research-Zurich, Säumerstrasse 4, 8803, Rüschlikon, Switzerland: I. Boybat & A. Sebastian

### Contributions

T.A. conceived the idea. N.G. and T.A. performed the experiments and analyzed all data. T.I., N.G., and T.A. developed the GPR-based methodology. S.K. performed the experiments on ReRAM and analyzed the data. I.B. and A.S. performed the experiments on PCM and analyzed the data. V.N. provided managerial support and critical comments. N.G. and T.A. wrote the manuscript with input from all the authors.

### Competing interests

The authors declare no competing interests.

### Corresponding author

Correspondence to T. Ando.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6965408325195312, "perplexity": 2635.226706400151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832259.90/warc/CC-MAIN-20181219110427-20181219132427-00058.warc.gz"}
https://physics.stackexchange.com/questions/218305/is-there-a-model-of-the-universe-with-the-transfinite-spacetime
# Is there a model of the universe with transfinite (space)time?

In mathematics there is a concept of ordinal numbers where one can count to infinity and beyond. For example the least number that is greater than all the finite numbers is denoted by $\omega$. Such a number $\omega$ is said to be a limit of the finite numbers or a limit ordinal. If one is counting as with natural numbers, the next numbers after $\omega$ are $\omega+1, \omega+2, \omega+3$, ... The limit of this sequence is a limit ordinal $\omega+\omega=\omega \cdot 2$. Then one could count from $\omega \cdot 2$ and so on. Eventually one would get to the number denoted by $\omega_1$ which represents the cardinality (or size) of the real numbers; note that the cardinality of the natural numbers is $\omega$. But then one could still go beyond.

Then when one needs to prove a statement which is true for all ordinal numbers one can do so by transfinite induction. In the base case one proves the statement for the ordinal $0$. The inductive case has a successor case and a limit case. In the successor case, one assumes that the statement is true for an ordinal $\alpha$ and then proves it for the ordinal $\alpha+1$. In the limit case, if $\delta$ is a limit ordinal, then one assumes that the statement holds for all $\alpha < \delta$ and proves it for the ordinal $\delta$.

I am interested in a model of the universe that allows the possibility that spacetime, and especially the time dimension, is transfinite. The standard model of physics explains with its equations what happens in successor cases: given complete information about a system, one can derive the possible (I know this may be too simplified, but I do not know much about quantum mechanics) state of that system one second later, e.g. that a football would be 5 meters closer to the goalkeeper. However, I am looking for a theory that would have rules specifying what happens at the limit stage. For example, if the theory claimed that the universe continues expanding and getting colder as its time gets closer to the time $\omega$ (infinity), then what would happen to the universe at the time $\omega$ and $\omega+1$?

I remember a talk from 6 years ago by some distinguished physicist (from Oxford I think) who introduced a model of the universe where the universe would expand up to a very distant time in the future and then at some point it would start collapsing to a point from which a Big Bang would reoccur and a new universe would start. I think it would make sense for these crucial events, such as the change from expansion to contraction and from contraction to expansion, to happen at the limit stages of the time. Similarly, he said that some universes could be richer than their predecessors according to certain patterns. But of course, at that time, I understood the talk only at a very intuitive level.

Note that some limit ordinals are stronger than others in the sense of which operations they are closed under. For example if $\alpha$ and $\beta$ are any ordinals less than $\omega_1$, then their addition, multiplication and exponentiation are less than $\omega_1$. On the other hand $\omega+1$ is less than $\omega \cdot 2$, but $(\omega+1)+(\omega+1)=\omega+\omega+1=\omega \cdot 2 +1$, which is greater than $\omega \cdot 2$, so $\omega \cdot 2$ is not even closed under addition. One defines the mathematical universe of all sets as the union of successive classes $L_\alpha$ for an ordinal $\alpha$; see Constructible universe.
It turns out that the richness of a class $L_\alpha$ depends much on how closed $\alpha$ is. Therefore I would expect that the physical universe at a limit ordinal would also have a much richer structure locally (with respect to time), i.e. more laws and phenomena of a general theory could be observed and measured in the universe at that time.

So are there any models of the universe that consider the existence of a transfinite time dimension? I am also happy to be pointed to some references, but in such cases brief explanations included here will be appreciated. My background is mathematics, not physics, so please accept my apologies for an uneducated question.

• I'm not exactly sure what you mean by a transfinite model of spacetime. Are you thinking of using something like the long line? If yes, note that physics crucially relies on the differentiable structure of spacetime, which is highly non-unique for objects like the long line, so you get a host of problems associated with choosing the right one as soon as you allow such objects as spacetimes. – ACuriousMind Nov 13 '15 at 13:14
• @ACuriousMind The long line is a total order on $\omega_1 \times [0,1)$. Yes, I meant to use something similar to $\alpha\times[0,1)$ for an ordinal $\alpha$. But by the theorem of Simon Donaldson $\mathbb{R}^4$ (spacetime) has uncountably many (or $\omega_1$-many) non-diffeomorphic structures. C.f. en.wikipedia.org/wiki/Exotic_R4. So do you not face the same problem in the standard model with spacetime already? – Dávid Natingga Nov 13 '15 at 13:34
• Well, it's "obvious" which one to choose for $\mathbb{R}^4$ - the non-exotic one. It's not obvious to me which one to choose on these weird objects. Also, for larger ordinals $\alpha$, $\alpha\times[0,1)$ is no longer a topological manifold, if I understand correctly, so this doesn't fit at all into usual models of spacetime. – ACuriousMind Nov 13 '15 at 13:36
• @ACuriousMind Yes, it is true that many concepts including the notion of spacetime would need to be generalized for such a model. So it seems that you have not come across such a model. – Dávid Natingga Nov 13 '15 at 13:43
• @ACuriousMind Could you please justify the evidence for the differential structure of spacetime being a standard $\mathbb{R}^4$ and not an exotic one? – krzysiekb Feb 3 '17 at 10:34

I think that with this question you are overstretching the boundaries of applicability of maths to physics. I think yours is ultimately a philosophical question, so an answer will also have to be somewhat philosophical. It has often been stated how remarkable it is that maths is so unreasonably effective at describing physics. Indeed this seems miraculous, but it undoubtedly plays a role that our most basic maths concepts stem quite directly from the world around us (natural numbers from counting, rational numbers from ratios and then lengths, real numbers from limits of lengths, etc). Axiomatizing this already leads to some problems, but in general these are ignored without grave consequences. When you define more and more abstract concepts, you may run into concepts that don't have any obvious link to the world around us anymore. An example is one that you gave yourself: what is the cardinality of $\omega_1$? You said that it is the cardinality of the continuum, but I'm sure you are aware yourself of the fact that the truth of that statement is independent of ZFC, which by many is considered to be the basis of mathematical axiomatization: both its assertion and its negation can be postulated without introducing contradictions.
For physical applications of ordinal numbers, making a decision on this seems to be a minimal requirement, but since it doesn't seem to be possible to base the decision on observation, I don't think such a model could be useful.

It would appear to be important that ordinal numbers derive from and thus must have been preceded by cardinal numbers, and also that ordinal numbers themselves imply/require spacetime, given their special temporal/sequential interrelationship, whereas cardinal numbers do not. To my mind, this implies a model of the universe requiring an a priori "period" preceding spacetime where there was indeed a singularity -- that is, a single dimension mapped only by cardinal numbers, before the second dimension, spacetime, emerged. The mass-energy equation suggests that this first static dimension was one of mass, followed by the introduction of an ordinal-oriented spacetime dimension through the introduction of electromagnetism. Indeed, such a model predicts the existence of dark matter as our view of the primal mass/matter not yet implicated in spacetime/electromagnetism (hence its invisibility to us), which charged/changed those particles into the standard elementary particles. The Big Bang would then be the explosion of mass, countable with natural numbers, into a new transfinite realm structured by spacetime. The very nature of ordinal numbers suggests that this structuring should not have been instantaneous at the Big Bang but should have progressed sequentially -- the expanding universe backs this up, perhaps, as more dark matter is structured and incorporated into spacetime. Again, this is a mathematical model which I offer for further exploration of this concept. My apologies for any blatant misconceptions and for reintroducing the idea of "let there be light" as a founding principle of creation.

• You should make use of existing theories. If you want to go a bit outside the norm, that is fine, but you shouldn't write personal theories. – Yashas Feb 15 '17 at 6:03
• @YashasSamaga I think his post quality is far above the typical own-theory propagators, but unfortunately it is still offtopic. – peterh - Reinstate Monica Feb 15 '17 at 6:55
• @Yashas, peterh: Thanks for the feedback, I appreciate your politeness. I should have limited myself to asking this: if spacetime is transfinite, could this relate to a model including mass as a finite variable due to some implication of the cardinal/ordinal relationship? This relates better to the query. I am neither a mathematician nor a physicist (archivist, actually). I was hoping someone more knowledgeable might explore this inkling I had - I am incompetent to do so myself but couldn't find literature on the concept apart from this query. – steigewaerter Feb 17 '17 at 3:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8513429164886475, "perplexity": 304.51907874660907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890566.2/warc/CC-MAIN-20200706222442-20200707012442-00366.warc.gz"}
https://tex.stackexchange.com/questions/319/latex-xetex-setup-tamil-indic-languages?noredirect=1
# LaTeX/XeTeX setup Tamil/Indic languages

I use TexMaker and LyX in Ubuntu. I'd like to typeset Tamil/Telugu/Hindi text, and so far I've been unsuccessful. Please suggest a working TeX/LaTeX/variants setup for Indic languages, especially Tamil.

edit: XeTeX seems to have good Unicode support, and I read TexMaker has XeTeX support too. I installed all the XeTeX, latex-tamil packages etc., but couldn't make them work yet. The documentation talks about Arabic or Korean text; nothing is mentioned about Tamil/Indic text.

I will give what I learned by trial and error. This pertains to the Windows platform. (I used the material found here at the XeLaTeX wiki.) The trick is to

1. use the fonts available in the system's font directory (Windows 7 provides the Latha font for Tamil), and
2. compile your source file with xelatex, not pdflatex! (For this on the Windows platform, 'TeXworks' can be used as it is a Unicode editor. Check whether your favourite editor can save your file in UTF-8 format.)

In the preamble include the following:

    \usepackage{fontspec}
    \newfontfamily{\lathatam}{Latha}

The declaration in the first set of braces is the command to call the Tamil font in the body of your document, such as:

    {\lathatam அய்யா வணக்கம்.}

Another Tamil font is 'Arial Unicode MS'. To use this, declare \newfontfamily{\anothertam}{Arial Unicode MS} in the preamble, and use it by doing:

    {\anothertam நான் நலம். நீங்கள் நலமா}

When you compile with xelatex, you will see the difference between these fonts.

As of 2019, both babel and polyglossia support Tamil on XeLaTeX. LuaLaTeX does not currently work with Indic scripts, but there is also an experimental project integrating Harfbuzz and LuaLaTeX that should work. The babel package supports more languages and engines, while the polyglossia package has a somewhat simpler user interface. Either of these lets you use any system font that supports Tamil. Any font you can use in your word processor should work. Save as UTF-8.

## With Babel

    \documentclass{article}
    \usepackage[english]{babel}
    \usepackage{fontspec}
    \babelprovide[import]{tamil}
    \defaultfontfeatures{Scale=MatchLowercase}
    \babelfont{rm}[Scale=1.0]{Latin Modern Roman}
    \babelfont[tamil]{rm}{Latha}
    \begin{document}
    \foreignlanguage{tamil}{தமிழ் அரிச்சுவடி-தமிழ் மொழி}
    \end{document}

This set-up is for the common case where you want to use a bit of Tamil in a multilingual document. You can also declare Tamil the main language, or set a section in Tamil with \begin{otherlanguage}{tamil}...\end{otherlanguage}. You can define \babelfont[tamil]{sf}{Some Font} to get a sans-serif font and \babelfont[tamil]{tt}{Some Font} to get monospace. If you get warning messages about the font not supporting the language Tamil for the script Tamil, they're harmless, but you can suppress them by adding the option \babelfont[tamil]{rm}[Language=Default]{Some Font}. What happened is that the selected font doesn't add an OpenType language tag.

## With Polyglossia

    \documentclass{article}
    \usepackage{polyglossia}
    \setdefaultlanguage{english}
    \setotherlanguage{tamil}
    \defaultfontfeatures{Scale=MatchLowercase}
    \newfontfamily\tamilfont{Latha}[Script=Tamil]
    \begin{document}
    \texttamil{தமிழ் அரிச்சுவடி-தமிழ் மொழி}
    \end{document}

You can also use the tamil environment for sections, and define \tamilfontsf and \tamilfonttt similarly to \tamilfont.

## If You Cannot Use XeTeX

It would be nice if the last few holdouts added support for XeTeX (and, in the future, HarfTeX).
Since some publishers still do not allow it, you might still need to fall back on a workaround such as LianTze Lim's solution. If you need only a few short words or phrases in Tamil, another workaround is to compile them with XeLaTeX as tiny standalone PDFs, then insert the PDFs as images.

• Instead of \usepackage[bidi=default]{babel} and \babelprovide[main, import]{english}, just write \usepackage[english]{babel} (Tamil is not an RTL script). – Javier Bezos Jul 7 '19 at 16:41
• @JavierBezos I ended up leaving the \babelprovide[import, main] line, because most requests I've seen for Tamil, Malayalam, etc. are about multilingual documents. I mentioned a few other use cases. I removed the unnecessary package option. Thanks for pointing that out! – Davislor Jul 7 '19 at 18:45
• There is no reason to load english with \babelprovide. The standard method, as a package option, is still preferred. – Javier Bezos Jul 8 '19 at 13:14
• @JavierBezos I suppose my own preference is to load all my languages by the same method, but if you say [english] is preferred, I'll change my MWE. – Davislor Jul 8 '19 at 19:27
• Could you update your answer to include luahbtex + luaotfload 3.11? \babelfont[tamil]{rm}[Script=Tamil,Renderer=Harfbuzz]{Latha} should work fine then. See tex.stackexchange.com/a/493185/2388 for installation instructions. – Ulrike Fischer Nov 11 '19 at 10:04

I was able to typeset Tamil using LaTeX on Ubuntu by installing the itrans and itrans-fonts packages via synaptic (or apt-get). It doesn't let you type in Tamil directly; rather, you have to key in the ASCII transcription, then process it with itrans from the command prompt, then run (pdf)latex on the resultant file. Say I have the following file nandri-pre.tex:

    \documentclass{minimal}
    \usepackage[preprocess]{itrans}
    \newfont{\tmlb}{wntml12}
    \newfont{\tmls}{wntml10}
    \hyphenchar\tmlb=-1
    \hyphenchar\tmls=-1
    #tamilifm=wntml.ifm
    #tamilfont=\tmlb
    \begin{document}
    Hi! {#tamil na^nRi #endtamil}
    \end{document}

Process it with itrans:

    $ itrans -i nandri-pre.tex -o nandri.tex

Then run (pdf)latex on nandri.tex, which is of course the file to edit if you have further text to add.

• Appreciate the shout-out. Unfortunately, a few important sites still require PDFLaTeX, and might need something like that, or a hack like compiling all the Tamil words to PDF and including them. – Davislor Jul 8 '19 at 20:44

To use various Indic languages in LaTeX with TeXMaker, I recommend the following steps.

1. Download the latest version of MiKTeX. Install it in your system. I use the C:/latex/ directory.
2. Download devnag, developed by Velthuis, from CTAN. Install it in your system. I use the c:/latex/velthuis directory.
3. Open My Computer -> All Programmes -> MiKTeX -> MiKTeX Settings. Go to the Roots tab, add the c:/latex/velthuis directory and click OK.
4. Install TeXMaker, go to the "user" tab, open "user command" and then "edit user command". Enter "devnagari" as the menu item and the commands c:/latex/velthuis/bin/devnag.exe %.dn|c:/latex/miktex/bin/latex.exe -interaction=nonstopmode %.tex|"C:/latex/MiKTex/miktex/bin/yap.exe" %.dvi in the command field. Click OK.
5. Now devnagari will appear in the dropdown in the menu bar after the arrow. You can add an extra command using |, with no space before or after |.
6. Now copy the misspal file from the c:/latex/velthuis/doc/generic/ folder into TeXMaker and save it as misspall.dn. Now run the devnagari command and you will see the output in the DVI preview in Devanagari script. If you want to write a document in Tamil, then use itrans instead of devnag.
It is also working with LaTeX and TeXMaker very well.
7. Remember, %.dn means: % denotes the filename without extension and .dn is the extension of the file. Users should read the documentation or manual of devnag or itrans.

• While a link to your homepage may be part of your user profile, it should not be included in answers. – lockstep Apr 10 '11 at 12:56

For Hindi, you can probably use the devanagari package for LaTeX. I've used it for Sanskrit. Just note that the "internal" codes for the script are a bit obtuse, so it is suggested that you follow the documentation and type in a more readable format, and then pass the source file through a preprocessor (included in the distribution). There are also language packages for Telugu and Tamil, but not having used either I cannot say more about them.

• Thanks Willie. I've installed Tamil packages but couldn't get them to work yet. Heard about XeTeX's support for Unicode characters. Any idea how to use it in Ubuntu? (Guess it should go to another question) – ananth.p Jul 27 '10 at 21:11
• Sorry, but I've really limited experience with XeTeX, and had never played with the Tamil package myself. Good luck! – Willie Wong Jul 27 '10 at 21:15
• Will try devanagari first. 'Internal Codes' meaning I should be typing the source with some kind of hex code letter-by-letter? Thanks for the tip. – ananth.p Jul 27 '10 at 21:30
• This page has a code example that works (partially) - tug.org/pipermail/xetex/2009-December/015051.html Letters with a dot above (க், ல்) are rendered wrong. TexMaker didn't help much. It wouldn't let me type in Tamil, but I could copy-paste Unicode text into it. TexMaker, by default, tries to compile with LaTeX, so I had to compile from the command line ($ xelatex <source>) – ananth.p Jul 27 '10 at 21:46

For my Ubuntu system I did as Lian Tze Lim suggested: use the package manager to install the itrans and itrans-fonts packages. No muss, no fuss.

For Windows and MiKTeX 2.9 the set-up process was more involved. Below is the batch file I created to facilitate the copying.

1) Install MiKTeX.
2) Use the MiKTeX package manager to install the indic-type1 package and the devanagari packages.
4) Extract itrans53-win32.zip to some temporary location. I used C:\temp\Tamil\ITRANS53.
5) Open a command window which is running as administrator (many of the copies are into c:\program files\ and that requires the process be run with elevated privilege).
6) CD to the temporary location of ITrans (e.g. C:\temp\Tamil\ITRANS53).
7) Execute the batch file commands below.
8) Close the command window.
9) Right click on Start -> Computer and select Properties.
10) Select Advanced System Settings -> Environment Variables.
11) Add a new system environment variable named ITRANSPATH. See the batch file commands below for the exact value for this variable.
12) Open a command window.
13) The command itrans -I <filename>.itx -o <filename>.tex will now work and (pdf)latex can resolve the packages, fonts, and commands referenced in the output from itrans.exe.

I can now process LaTeX files in both Ubuntu and Windows 7 and have the source files (.Tex, .ITX, etc.)
under revision control. The batch file is:

    echo off
    rem Record where MikTeX is installed
    set MiktexRoot=C:\Program Files\MiKTeX 2.9

    rem copy itrans.exe to a directory already within the path environment variable
    rem namely the path adjustment made by the installer for MiKTeX which puts
    rem all of the MiKTeX installed tools on the PATH variable
    rem
    rem for 32-bit windows systems remove the x64 suffix
    copy ".\bin\*.exe" "%MiktexRoot%\miktex\bin\x64\*.*" /Y /V

    rem Create the directories within the MikTeX structure used or referenced by the itrans package
    mkdir "%MiktexRoot%\doc\itrans"
    mkdir "%MiktexRoot%\doc\itrans\contrib"
    mkdir "%MiktexRoot%\fonts\source\public\itrans"
    mkdir "%MiktexRoot%\fonts\type1\public\itrans"
    mkdir "%MiktexRoot%\fonts\tfm\public\itrans"
    mkdir "%MiktexRoot%\fonts\afm\public\itrans"
    mkdir "%MiktexRoot%\fonts\truetype\public\itrans"
    mkdir "%MiktexRoot%\tex\latex\itrans"
    mkdir "%MiktexRoot%\tex\latex\itrans\fonts"

    rem Copy itrans package files into the MiKTeX structure
    rem used http:\\tex.stackexchange.com\questions\1754\tamil-tex-in-windows
    rem and the installation script for Tamil-Omega as guides for the copy commands
    rem Listed below
    rem
    rem Copy Documentation files
    copy ".\doc\*.*" "%MiktexRoot%\doc\itrans\*.*" /Y /V
    copy ".\contrib\*.*" "%MiktexRoot%\doc\itrans\contrib\*.*" /Y /V

    rem copy Font Files
    copy ".\lib\fonts\*.mf" "%MiktexRoot%\fonts\source\public\itrans\*.*" /Y /V
    copy ".\lib\fonts\*.pfa" "%MiktexRoot%\fonts\type1\public\itrans\*.*" /Y /V
    copy ".\lib\fonts\*.pfb" "%MiktexRoot%\fonts\type1\public\itrans\*.*" /Y /V
    copy ".\lib\fonts\*.pfm" "%MiktexRoot%\fonts\type1\public\itrans\*.*" /Y /V
    copy ".\lib\fonts\*.tfm" "%MiktexRoot%\fonts\tfm\public\itrans\*.*" /Y /V
    copy ".\lib\fonts\*.afm" "%MiktexRoot%\fonts\afm\public\itrans\*.*" /Y /V
    copy ".\lib\fonts\*.ttf" "%MiktexRoot%\fonts\truetype\public\itrans\*.*" /Y /V

    rem copy all of the ITRANS Lib structure into MiKTeX structure.
    copy ".\lib\*.*" "%MiktexRoot%\tex\latex\itrans\*.*" /Y /V
    copy ".\lib\fonts\*.*" "%MiktexRoot%\tex\latex\itrans\fonts\*.*" /Y /V

    rem post installation commands to rebuild the font name database and to process all of the font MAPping files.
    texhash
    updmap

    rem
    rem With the above copies MikTeX can now find the ITRANS fonts and resolve references created by the itrans.exe preprocessor
    rem
    rem But the preprocessor cannot be run from the command line because itrans.exe is expecting to find a
    rem specific ITRANS structure somewhere on the disk via the environment variable: ITRANSPATH
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9207494854927063, "perplexity": 14091.899989652979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739182.35/warc/CC-MAIN-20200814070558-20200814100558-00023.warc.gz"}
https://vismor.com/documents/network_analysis/matrix_algorithms/S3.SS3.php
# 3.3 Solving Overdetermined Systems

An m × n system of linear equations with m > n is overdetermined: there are more equations than there are unknowns. "Solving" such a system is the process of reducing it to an n × n problem and then solving the reduced set of equations. A common technique for constructing a reduced set of equations is known as the least squares solution to the equations. The least squares equations are derived by premultiplying Equation 19 by $\mathbf{A}^{T}$, i.e.

$$\left(\mathbf{A}^{T}\mathbf{A}\right)\mathbf{x} = \mathbf{A}^{T}\mathbf{b} \qquad (26)$$

Often Equation 26 is referred to as the normal equations of the linear least squares problem. The least squares terminology refers to the fact that the solution to Equation 26 minimizes the sum of the squares of the differences between the left and right sides of Equation 19.
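As a quick illustration (not part of the original text), the normal equations of Equation 26 can be formed and solved directly; in practice an orthogonalization-based solver such as numpy.linalg.lstsq is better conditioned. The matrices below are made-up example data.

```python
import numpy as np

# Made-up overdetermined system: 5 equations, 2 unknowns (A is 5x2)
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0],
              [1.0, 5.0]])
b = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

# Equation 26: (A^T A) x = A^T b  -- the normal equations
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Numerically better-conditioned route to the same least-squares solution
x_lstsq, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

print(x_normal, x_lstsq)   # the two solutions agree up to rounding
```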
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9562287926673889, "perplexity": 197.17315883625704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188132.48/warc/CC-MAIN-20170322212948-00002-ip-10-233-31-227.ec2.internal.warc.gz"}
https://eventuallyalmosteverywhere.wordpress.com/category/analysis/variational-principles/
# Large Deviations 6 – Random Graphs

As a final instalment in this sequence of posts on Large Deviations, I'm going to try and explain how one might be able to apply some of the theory to a problem about random graphs. I should explain in advance that much of what follows will be a heuristic argument only. In a way, I'm more interested in explaining what the technical challenges are than trying to solve them, not least because at the moment I don't know exactly how to solve most of them. At the very end I will present a rate function, and reference properly the authors who have proved this. Their methods are related but not identical to what I will present.

**Problem**

Recall the two standard definitions of random graphs. As in many previous posts, we are interested in the sparse case, where the average degree of a vertex is O(1). Anyway, we start with n vertices, and in one description we add an edge between any pair of vertices independently and with fixed probability $\frac{\lambda}{n}$. In the second model, we choose uniformly at random from the set of graphs with n vertices and $\frac{\lambda n}{2}$ edges. Note that if we take the first model and condition on the number of edges, we get the second model, since the probability of a given configuration appearing in G(n,p) is a function only of the number of edges present. Furthermore, the number of edges in G(n,p) is binomial with parameters $\binom{n}{2}$ and p. For all purposes here it will make no difference to approximate the former by $\frac{n^2}{2}$.

Of particular interest in the study of sparse random graphs is the phase transition in the size of the largest component observed as $\lambda$ passes 1. Below 1, the largest component has size on a scale of log n, and with high probability all components are trees. Above 1, there is a unique giant component containing $\alpha_\lambda n$ vertices, and all other components are small. For $\lambda\approx 1$, where I don't want to discuss what 'approximately' means right now, we have a critical window, in which there are infinitely many components with sizes on a scale of $n^{2/3}$. A key observation is that this holds irrespective of which model we are using; in particular, the two models are consistent. By the central limit theorem, we have:

$|E(G(n,\frac{\lambda}{n}))|\sim \text{Bin}\left(\binom{n}{2},\frac{\lambda}{n}\right)\approx \frac{n\lambda}{2}\pm\alpha,$

where $\alpha$ is the error due to CLT-scale fluctuations. In particular, these fluctuations are on a scale smaller than n, so in the limit they have no effect on which value of $\lambda$ in the edge-specified model is appropriate. However, it is still a random model, so we can condition on any event which happens with positive probability, so we might ask: what does a supercritical random graph look like if we condition it to have no giant component? Assume for now that we are considering $G(n,\frac{\lambda}{n}), \lambda>1$.

This deviation from standard behaviour might be achieved in at least two ways. Firstly, we might just have insufficient edges. If we have a large deviation towards too few edges, then this would correspond to a subcritical $G(n,\frac{\mu n}{2})$, so it would have no giant component. However, it is also possible that the lack of a giant component is due to 'clustering'. We might in fact have the correct number of edges, but they might have arranged themselves into a configuration that keeps every component small.
For example, we might have a complete graph on $Kn^{1/2}$ vertices plus a whole load of isolated vertices. This has the correct number of edges, but certainly no giant component (that is, no component of size of order n). We might suspect that having too few edges would be the primary cause of having no giant component, but it would be interesting if clustering played a role.

In a previous post, I talked about more realistic models of complex networks, for which clustering beyond the levels of Erdos-Renyi is one of the properties we seek. There I described a few models which might produce some of these properties. Obviously another model is to take Erdos-Renyi and condition it to have lots of clustering, but that isn't hugely helpful as it is not obvious what the resulting graphs will in general look like. It would certainly be interesting if conditioning on having no giant component were enough to get lots of clustering.

To do this, we need to find a rate function for the size of the giant component in a supercritical random graph. Then we will assume that evaluating this near 0 gives the LD probability of having 'no giant component'. We will then compare this to the straightforward rate function for the number of edges; in particular, evaluated at criticality, so the probability that we have a subcritical number of edges in our supercritical random graph. If they are the same, then this says that the surfeit of edges dominates clustering effects. If the former is smaller, then clustering may play a non-trivial role. If the former is larger, then we will probably have made a mistake, as we expect on a LD scale that having too few edges will almost surely lead to a subcritical component.

**Methods**

The starting point is the exploration process for components of the random graph. Recall we start at some vertex v and explore the component containing v depth-first, tracking the number of vertices which have been seen but not yet explored. We can extend this to all components by defining:

$S(0)=0, \quad S(t)=S(t-1)+(X(t)-1),$

where X(t) is the number of children of the t'th vertex. For a single component, S(t) is precisely the number of seen but unexplored vertices. It is more complicated in general. Note that when we exhaust the first component S(t)=-1, and then when we exhaust the second component S(t)=-2, and so on. So in fact $S_t-\min_{0\leq s\leq t}S_s$ is the number of seen but unexplored vertices, with $\min_{0\leq s\leq t}S_s$ equal to (-1) times the number of components already explored up to time t.

Once we know the structure of the first t vertices, we expect the distribution of X(t) − 1 to be

$\text{Bin}\Big(n-t-[S_t-\min_{0\leq s\leq t}S_s],\tfrac{\lambda}{n}\Big)-1.$

We aren't interested in all the edges of the random graph, only in some tree skeleton of each component. So we don't need to consider the possibility of edges connecting our current location to anywhere we've previously visited (as such an edge would have been considered then – it's a depth-first exploration), hence the $-t$. But we also don't want to consider edges connecting our current location to anywhere we've already seen, since that would be a surplus edge creating a cycle, hence the $-[S_t-\min_{0\leq s\leq t}S_s]$. It is binomial because, by independence, even after all this conditioning the probability that there's an edge from my current location to any other vertex apart from those discounted is equal to $\frac{\lambda}{n}$, independently.
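A small simulation of this exploration process (my own illustration, not from the original post): X(t) is drawn from the conditional binomial above, and the rescaled path S(⌊nt⌋)/n is the object the large-deviation statement would concern.

```python
import numpy as np

def exploration_process(n, lam, rng=None):
    """Simulate S(t) for G(n, lam/n) using the conditional binomial step
    X(t) ~ Bin(n - t - (seen but unexplored), lam/n). Illustrative only."""
    rng = np.random.default_rng(rng)
    S = np.zeros(n + 1)
    running_min = 0.0
    for t in range(1, n + 1):
        unexplored = S[t - 1] - running_min           # seen but not yet explored
        remaining = int(n - t - unexplored)           # vertices still untouched
        X = rng.binomial(max(remaining, 0), lam / n)  # children of the t-th vertex
        S[t] = S[t - 1] + X - 1
        running_min = min(running_min, S[t])
    return S

# Example: for a supercritical graph, the width of the first large excursion of
# S above its running minimum approximates the giant component's fraction of n.
S = exploration_process(n=10_000, lam=1.5, rng=0)
```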
For Mogulskii's theorem in the previous post, we had an LDP for the rescaled paths of a random walk with independent stationary increments. In this situation we have a random walk where the increments do not have this property. They are not stationary because the pre-limit distribution depends on time. They are also not independent, because the distribution depends on the behaviour up to time t, but only through the value of the walk at the present time. Nonetheless, at least by following through the heuristic of having an instantaneous exponential cost for a LD event, then products of sums becoming integrals within the exponent, we would expect a similar result to hold in this case. We can find the rate function $\Lambda_\lambda^*(x)$ of $\text{Po}(\lambda)-1$ and thus get a rate function for paths of the exploration process

$I_\lambda(f)=\int_0^1 \Lambda_{(1-t-\bar{f}(t))\lambda}^*(f')dt,$

where $\bar{f}(t)$ is the height of f above its previous minimum.

**Technicalities and Challenges**

1) First we need to prove that it is actually possible to extend Mogulskii to this more general setting. Even though we are varying the distribution continuously, so we have some sort of 'local almost convexity', the proof is going to be fairly fiddly.

2) Having to consider excursions above the local minima is a massive hassle. We would ideally like to replace $\bar{f}$ with f. This doesn't seem unreasonable. After all, if we pick a giant component within o(n) steps, then everything considered before the giant component won't show up in the O(n) rescaling, so we will have a series of macroscopic excursions above 0 with widths giving the actual sizes of the giant components. The problem is that even though with high probability we will pick a giant component after O(1) components, the probability that we do not do this decays only exponentially fast, so it will show up as a term in the LD analysis. We would hope that this is not important – after all, later we are going to take an infimum, and since the order in which we choose the vertices to explore is random and in particular independent of the actual structure, it ought not to make a huge difference to any result.

3) A key lemma in the proof of Mogulskii in Dembo and Zeitouni was the result that it doesn't matter from an LDP point of view whether we consider the linear (continuous) interpolation or the step-wise interpolation to get a process that actually lives in $L_\infty([0,1])$. In this generalised case, we will also need to check that approximating the binomial distribution by its Poisson limit is valid on an exponential scale. Note that because errors in the approximation for small values of t affect the parameter of the distribution at larger times, this will be more complicated to check than in the IID case.

4) Once we have a rate function, if we actually want to know about the structure of the 'typical' graph displaying some LD property, we will need to find the infimum of the integrated rate function with some constraints. This is likely to be quite nasty unless we can directly use Euler-Lagrange or some other variational tool.

For $\lambda=1+\epsilon$ with $\epsilon$ small, the two quantities we wanted to compare work out (heuristically) as

$I_{(1+\epsilon)}(0)\approx \frac{\epsilon^3}{6},$

$-\lim\tfrac{1}{n}\log\mathbb{P}\Big(\text{Bin}(\tfrac{n^2}{2},\tfrac{1+\epsilon}{n})\leq\tfrac{n}{2}\Big)\approx \frac{\epsilon^2}{4}.$

Since $\frac{\epsilon^3}{6}<\frac{\epsilon^2}{4}$ for small $\epsilon$, the first rate function is the smaller one, which by the earlier discussion suggests that clustering may indeed play a non-trivial role.
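As a quick sanity check of the second expression (my own, using the Poisson approximation to the binomial and the standard Cramér transform of a Poisson variable, not a computation from the original post), one can compare the edge-count rate with the two small-ε expressions numerically:

```python
import numpy as np

def poisson_cramer(x, lam):
    """Cramer rate function of a Poisson(lam) variable evaluated at level x > 0."""
    return x * np.log(x / lam) - x + lam

# P(Bin(n^2/2, (1+eps)/n) <= n/2) ~ P(Po(n(1+eps)/2) <= n/2), whose per-n rate is
# poisson_cramer(1/2, (1+eps)/2); this should behave like eps^2/4 for small eps.
for eps in (0.2, 0.1, 0.05, 0.01):
    edge_rate = poisson_cramer(0.5, (1 + eps) / 2)
    print(f"eps={eps:5.2f}  edge rate={edge_rate:.6f}  "
          f"eps^2/4={eps**2 / 4:.6f}  eps^3/6={eps**3 / 6:.6f}")
```

The printed values show the edge-deficiency rate tracking ε²/4, an order of magnitude above ε³/6 for small ε, which is the gap the discussion above turns on.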
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 26, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8639914989471436, "perplexity": 226.5693826818204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607647.16/warc/CC-MAIN-20170523143045-20170523163045-00140.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/dcdss.2018008
# American Institute of Mathematical Sciences

February 2018, 11(1): 119-141. doi: 10.3934/dcdss.2018008

## Modeling and optimal control of HIV/AIDS prevention through PrEP

Center for Research & Development in Mathematics and Applications (CIDMA), Department of Mathematics, University of Aveiro, 3810–193 Aveiro, Portugal

* Corresponding author: Cristiana J. Silva

Received September 2016. Revised February 2017. Published January 2018.

Pre-exposure prophylaxis (PrEP) consists in the use of an antiretroviral medication to prevent the acquisition of HIV infection by uninfected individuals, and has recently been demonstrated to be highly efficacious for HIV prevention. We propose a new epidemiological model for HIV/AIDS transmission including PrEP. Existence, uniqueness and global stability of the disease-free and endemic equilibria are proved. The model with no PrEP is calibrated with the cumulative cases of infection by HIV and AIDS reported in Cape Verde from 1987 to 2014, showing that it describes the reported data well. An optimal control problem with a mixed state-control constraint is then proposed and analyzed, where the control function represents the PrEP strategy and the mixed constraint models the fact that, due to PrEP costs, epidemic context and program coverage, the number of individuals under PrEP is limited at each instant of time. The objective is to determine the PrEP strategy that satisfies the mixed state-control constraint and minimizes the number of individuals with pre-AIDS HIV-infection as well as the costs associated with PrEP. The optimal control problem is studied analytically. Through numerical simulations, we demonstrate that PrEP reduces HIV transmission significantly.

Citation: Cristiana J. Silva, Delfim F. M. Torres. Modeling and optimal control of HIV/AIDS prevention through PrEP. Discrete & Continuous Dynamical Systems - S, 2018, 11 (1): 119-141. doi: 10.3934/dcdss.2018008

Figure captions:

Model (1) fitting the total population of Cape Verde between 1987 and 2014 [25,42]. The $l_2$ norm of the difference between the real total population of Cape Verde and our prediction gives an error of $1.9\%$ of individuals per year with respect to the total population of Cape Verde in 2014.

Model (1) fitting the data of cumulative cases of HIV and AIDS infection in Cape Verde between 1987 and 2014 [25]. The $l_2$ norm of the difference between the real data and the cumulative cases of infection by HIV/AIDS given by model (1) gives, in both cases, an error of $0.03\%$ of individuals per year with respect to the total population of Cape Verde in 2014.

Top left: cumulative HIV and AIDS cases. Top right: pre-AIDS HIV infected individuals $I$. Bottom left: HIV-infected individuals under ART treatment $C$. Bottom right: HIV-infected individuals with AIDS symptoms $A$. The expression "with PrEP" refers to the case $(\psi, \theta) = (0.1, 0.001)$ and "no PrEP" refers to the case $(\psi, \theta) = (0, 0)$.

Top left: Individuals under PrEP, $E$. Top right: pre-AIDS HIV infected individuals, $I$. Bottom left: HIV-infected individuals under ART treatment, $C$. Bottom right: HIV-infected individuals with AIDS symptoms, $A$. The continuous line is the solution of model (10) for $\psi = 0.1$, the dashed line "$-\, -$" is the solution of the optimal control problem with no mixed state control constraint and "$\cdot \, -$" is the solution of model (10) for $\psi = 0.9$.

Solutions of the optimal control problem with no mixed state control constraint.
(a) Optimal control. (b) Total number of individuals that take PrEP at each instant of time.

Top left: Individuals under PrEP, $E$. Top right: pre-AIDS HIV infected individuals, $I$. Bottom left: HIV-infected individuals under ART treatment, $C$. Bottom right: HIV-infected individuals with AIDS symptoms, $A$. The continuous line is the solution of model (10) for $\psi = 0.1$, the dashed line "$-\, -$" is the solution of the optimal control problem with the mixed state control constraint (21) and "$\cdot \, -$" is the solution of model (10) for $\psi = 0.9$.

(a) Optimal control $\tilde{u}$ considering the mixed state control constraint (21). (b) Total number of individuals under PrEP at each instant of time for $t \in [0, 25]$ associated with the optimal control $\tilde{u}$. (c) Total number of individuals under PrEP at each instant of time for $t \in [0, 25]$ associated with $\psi = 0.61$.

Extremals of the optimal control problem (17)–(20) with $\theta = 0.001$.

Cumulative cases of infection by HIV/AIDS and total population in Cape Verde in the period 1987–2014 [25,42]:

Year       1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014
HIV/AIDS   61 107 160 211 244 303 337 358 395 432 471 560 660 779 913 1064 1233 1493 1716 2015 2334 2610 2929 3340 3739 4090 4537 4946
Population 323972 328861 334473 341256 349326 358473 368423 378763 389156 399508 409805 419884 429576 438737 447357 455396 462675 468985 474224 478265 481278 483824 486673 490379 495159 500870 507258 513906

Parameters of the HIV/AIDS model (1) for Cape Verde:

Symbol      Description                                  Value            Reference
$N(0)$      Initial population                           $323 972$        [38]
$\Lambda$   Recruitment rate                             $13045$          [38]
$\mu$       Natural death rate                           $1/69.54$        [38]
$\beta$     HIV transmission rate                        $0.752$          Estimated
$\eta_C$    Modification parameter                       $0.015$, $0.04$  Assumed
$\eta_A$    Modification parameter                       $1.3$, $1.35$    Assumed
$\phi$      HIV treatment rate for $I$ individuals       $1$              [30]
$\rho$      Default treatment rate for $I$ individuals   $0.1$            [30]
$\alpha$    AIDS treatment rate                          $0.33$           [30]
$\omega$    Default treatment rate for $C$ individuals   $0.09$           [30]
$d$         AIDS induced death rate                      $1$              [39]

[1] Cristiana J. Silva, Delfim F. M. Torres. A TB-HIV/AIDS coinfection model and optimal control treatment. Discrete & Continuous Dynamical Systems - A, 2015, 35 (9): 4639-4663. doi: 10.3934/dcds.2015.35.4639
[2] Filipe Rodrigues, Cristiana J. Silva, Delfim F. M. Torres, Helmut Maurer.
Optimal control of a delayed HIV model. Discrete & Continuous Dynamical Systems - B, 2018, 23 (1) : 443-458. doi: 10.3934/dcdsb.2018030 [3] Yali Yang, Sanyi Tang, Xiaohong Ren, Huiwen Zhao, Chenping Guo. Global stability and optimal control for a tuberculosis model with vaccination and treatment. Discrete & Continuous Dynamical Systems - B, 2016, 21 (3) : 1009-1022. doi: 10.3934/dcdsb.2016.21.1009 [4] Jaouad Danane, Karam Allali. Optimal control of an HIV model with CTL cells and latently infected cells. Numerical Algebra, Control & Optimization, 2019, 0 (0) : 0-0. doi: 10.3934/naco.2019048 [5] Ellina Grigorieva, Evgenii Khailov, Andrei Korobeinikov. An optimal control problem in HIV treatment. Conference Publications, 2013, 2013 (special) : 311-322. doi: 10.3934/proc.2013.2013.311 [6] Gigi Thomas, Edward M. Lungu. A two-sex model for the influence of heavy alcohol consumption on the spread of HIV/AIDS. Mathematical Biosciences & Engineering, 2010, 7 (4) : 871-904. doi: 10.3934/mbe.2010.7.871 [7] Praveen Kumar Gupta, Ajoy Dutta. Numerical solution with analysis of HIV/AIDS dynamics model with effect of fusion and cure rate. Numerical Algebra, Control & Optimization, 2019, 9 (4) : 393-399. doi: 10.3934/naco.2019038 [8] Hee-Dae Kwon, Jeehyun Lee, Myoungho Yoon. An age-structured model with immune response of HIV infection: Modeling and optimal control approach. Discrete & Continuous Dynamical Systems - B, 2014, 19 (1) : 153-172. doi: 10.3934/dcdsb.2014.19.153 [9] Semu Mitiku Kassa. Three-level global resource allocation model for HIV control: A hierarchical decision system approach. Mathematical Biosciences & Engineering, 2018, 15 (1) : 255-273. doi: 10.3934/mbe.2018011 [10] Jinliang Wang, Lijuan Guan. Global stability for a HIV-1 infection model with cell-mediated immune response and intracellular delay. Discrete & Continuous Dynamical Systems - B, 2012, 17 (1) : 297-302. doi: 10.3934/dcdsb.2012.17.297 [11] Yu Ji. Global stability of a multiple delayed viral infection model with general incidence rate and an application to HIV infection. Mathematical Biosciences & Engineering, 2015, 12 (3) : 525-536. doi: 10.3934/mbe.2015.12.525 [12] Shengqiang Liu, Lin Wang. Global stability of an HIV-1 model with distributed intracellular delays and a combination therapy. Mathematical Biosciences & Engineering, 2010, 7 (3) : 675-685. doi: 10.3934/mbe.2010.7.675 [13] Sanjukta Hota, Folashade Agusto, Hem Raj Joshi, Suzanne Lenhart. Optimal control and stability analysis of an epidemic model with education campaign and treatment. Conference Publications, 2015, 2015 (special) : 621-634. doi: 10.3934/proc.2015.0621 [14] B. M. Adams, H. T. Banks, Hee-Dae Kwon, Hien T. Tran. Dynamic Multidrug Therapies for HIV: Optimal and STI Control Approaches. Mathematical Biosciences & Engineering, 2004, 1 (2) : 223-241. doi: 10.3934/mbe.2004.1.223 [15] M'hamed Kesri. Structural stability of optimal control problems. Communications on Pure & Applied Analysis, 2005, 4 (4) : 743-756. doi: 10.3934/cpaa.2005.4.743 [16] Christopher M. Kribs-Zaleta, Melanie Lee, Christine Román, Shari Wiley, Carlos M. Hernández-Suárez. The Effect of the HIV/AIDS Epidemic on Africa's Truck Drivers. Mathematical Biosciences & Engineering, 2005, 2 (4) : 771-788. doi: 10.3934/mbe.2005.2.771 [17] Brandy Rapatski, Petra Klepac, Stephen Dueck, Maoxing Liu, Leda Ivic Weiss. Mathematical epidemiology of HIV/AIDS in cuba during the period 1986-2000. Mathematical Biosciences & Engineering, 2006, 3 (3) : 545-556. 
doi: 10.3934/mbe.2006.3.545 [18] Moatlhodi Kgosimore, Edward M. Lungu. The Effects of Vertical Transmission on the Spread of HIV/AIDS in the Presence of Treatment. Mathematical Biosciences & Engineering, 2006, 3 (2) : 297-312. doi: 10.3934/mbe.2006.3.297 [19] C.Z. Wu, K. L. Teo. Global impulsive optimal control computation. Journal of Industrial & Management Optimization, 2006, 2 (4) : 435-450. doi: 10.3934/jimo.2006.2.435 [20] Hui Miao, Zhidong Teng, Chengjun Kang. Stability and Hopf bifurcation of an HIV infection model with saturation incidence and two delays. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2365-2387. doi: 10.3934/dcdsb.2017121 2018 Impact Factor: 0.545
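For readers who want to play with the Cape Verde figures quoted above, here is a small R sketch (my own, not from the paper): it enters the data from the table, plots the cumulative cases, and shows one plausible reading of the "$l_2$ error relative to the 2014 population" metric mentioned in the captions. The vector `pred` of model predictions is hypothetical, and the paper's exact error formula is not reproduced here.

```r
## Illustrative sketch only (not the authors' code): Cape Verde data from the table above.
years    <- 1987:2014
hiv_aids <- c(61, 107, 160, 211, 244, 303, 337, 358, 395, 432, 471, 560, 660, 779,
              913, 1064, 1233, 1493, 1716, 2015, 2334, 2610, 2929, 3340, 3739, 4090, 4537, 4946)
pop      <- c(323972, 328861, 334473, 341256, 349326, 358473, 368423, 378763, 389156, 399508,
              409805, 419884, 429576, 438737, 447357, 455396, 462675, 468985, 474224, 478265,
              481278, 483824, 486673, 490379, 495159, 500870, 507258, 513906)

plot(years, hiv_aids, type = "b",
     xlab = "Year", ylab = "Cumulative HIV/AIDS cases (Cape Verde)")

## 'pred' would be a vector of model-(1) predictions of cumulative cases (hypothetical here);
## one plausible reading of the caption's error metric, in percent per year of the 2014 population:
rel_l2_error <- function(pred, real = hiv_aids, pop2014 = pop[length(pop)]) {
  100 * sqrt(mean((real - pred)^2)) / pop2014
}
```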
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32941916584968567, "perplexity": 2247.895912760363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670948.64/warc/CC-MAIN-20191121180800-20191121204800-00443.warc.gz"}
https://www.zbmath.org/?q=cc%3A78A+cc%3A45
× ## Found 1,255 Documents (Results 1–100) 100 MathJax Full Text: Full Text: ### BIE model of periodic diffraction problems in optics. (English)Zbl 07478518 MSC:  78A45 45P05 Full Text: Full Text: ### Nonlinear propagation of leaky TE-polarized electromagnetic waves in a metamaterial Goubau line. (English)Zbl 1483.78002 MSC:  78A40 34L30 45G10 Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: ### Application of boundary perturbations on medical monitoring and imaging techniques. (English)Zbl 1482.78007 Rassias, Themistocles M. (ed.), Nonlinear analysis, differential equations, and applications. Cham: Springer. Springer Optim. Appl. 173, 101-130 (2021). Full Text: Full Text: Full Text: ### Method of orthogonal polynomials for an approximate solution of singular integro-differential equations as applied to two-dimensional diffraction problems. (English. Russian original)Zbl 07373283 Differ. Equ. 57, No. 6, 814-823 (2021); translation from Differ. Uravn. 57, No. 6, 830-839 (2021). Full Text: Full Text: Full Text: ### Analytical inversion of the operator matrix for the problem of diffraction by a cylindrical segment in Sobolev spaces. (English. Russian original)Zbl 1467.45018 Comput. Math. Math. Phys. 61, No. 3, 424-430 (2021); translation from Zh. Vychisl. Mat. Mat. Fiz. 61, No. 3, 450-456 (2021). Full Text: ### Uniqueness and existence theorems for the problems of electromagnetic-wave scattering by three-dimensional anisotropic bodies in differential and integral formulations. (English. Russian original)Zbl 1462.35379 Comput. Math. Math. Phys. 61, No. 1, 80-89 (2021); translation from Zh. Vychisl. Mat. Mat. Fiz. 61, No. 1, 85-94 (2021). Full Text: Full Text: Full Text: Full Text: ### Numerical reconstruction of two-dimensional particle size distributions from laser diffraction data. (English)Zbl 1461.65283 MSC:  65R32 78A46 45B05 Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: ### Reconstruction of magnetic susceptibility using full magnetic gradient data. (English. Russian original)Zbl 1451.78027 Comput. Math. Math. Phys. 60, No. 6, 1000-1007 (2020); translation from Zh. Vychisl. Mat. Mat. Fiz. 60, No. 6, 1027-1034 (2020). Full Text: ### Singular modes of the integral scattering operator in anisotropic inhomogeneous media. (English. Russian original)Zbl 1451.78022 Differ. Equ. 56, No. 9, 1212-1218 (2020); translation from Differ. Uravn. 56, No. 9, 1245-1251 (2020). Full Text: ### Study of the kernels of integral equations in problems of wave diffraction in waveguides and by periodic structures. (English. Russian original)Zbl 1451.78031 Differ. Equ. 56, No. 9, 1167-1180 (2020); translation from Differ. Uravn. 56, No. 9, 1201-1213 (2020). Full Text: ### Integral representations of fields in three-dimensional problems of diffraction by penetrable bodies. (English. Russian original)Zbl 1448.78031 Differ. Equ. 56, No. 9, 1148-1152 (2020); translation from Differ. Uravn. 56, No. 9, 1182-1186 (2020). Full Text: Full Text: Full Text: ### Solvability of the integro-differential equation in the problem of wave diffraction on a junction of rectangular waveguides. (English. Russian original)Zbl 1448.35497 Differ. Equ. 56, No. 8, 1041-1049 (2020); translation from Differ. Uravn. 56, No. 8, 1065-1072 (2020). Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: ### Compact equivalent inverse of the electric field integral operator on screens. 
(English)Zbl 1446.45015 MSC:  45P05 31A10 78A25 Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: ### Some old and some new results in inverse obstacle scattering. (English)Zbl 1455.35197 Langer, Ulrich (ed.) et al., Maxwell’s equations. Analysis and numerics. Contributions from the workshop on analysis and numerics of acoustic and electromagnetic problems, RICAM, Linz, Austria, October 17–22, 2016. Berlin: De Gruyter. Radon Ser. Comput. Appl. Math. 24, 259-283 (2019). Full Text: ### Bell polynomials in the Mathematica system and asymptotic solutions of integral equations. (English. Russian original)Zbl 1463.45004 Theor. Math. Phys. 201, No. 3, 1798-1807 (2019); translation from Teor. Mat. Fiz. 201, No. 3, 446-456 (2019). Full Text: Full Text: Full Text: Full Text: ### Numerical methods for a nonstationary 3D singular integral equation of electrodynamics. (English. Russian original)Zbl 1431.78011 Differ. Equ. 55, No. 9, 1250-1257 (2019); translation from Differ. Uravn. 55, No. 9, 1293-1300 (2019). Full Text: Full Text: ### Theory of integral equations for axisymmetric scattering by a disk. (English. Russian original)Zbl 1437.45001 Comput. Math. Math. Phys. 59, No. 8, 1372-1379 (2019); translation from Zh. Vychisl. Mat. Mat. Fiz. 59, No. 8, 1431-1438 (2019). MSC:  45B05 78A46 Full Text: Full Text: Full Text: Full Text: ### Inverse acoustic and electromagnetic scattering theory. 4th expanded edition. (English)Zbl 1425.35001 Applied Mathematical Sciences 93. Cham: Springer (ISBN 978-3-030-30350-1/hbk; 978-3-030-30351-8/ebook). xvii, 518 p. (2019). Full Text: ### Temporally manipulated plasmons on graphene. (English)Zbl 1419.78006 MSC:  78A40 78M25 35Q60 45A05 Full Text: ### Boundary integral methods in bioelectromagnetics and biomedical applications of electromagnetic fields. (English)Zbl 1418.78009 Cheng, Alexander H.-D. (ed.) et al., Boundary elements and other mesh reduction methods XXXXI. Selected papers based on the presentations at the 41st international conference (BEM/MRM), New Forest, UK, September 11–13, 2018. Southampton: WIT Press. WIT Trans. Eng. Sci. 122, 85-94 (2019). Full Text: ### Modelling of grounding system placed into vertically layered soil. (English)Zbl 1418.78007 Cheng, Alexander H.-D. (ed.) et al., Boundary elements and other mesh reduction methods XXXXI. Selected papers based on the presentations at the 41st international conference (BEM/MRM), New Forest, UK, September 11–13, 2018. Southampton: WIT Press. WIT Trans. Eng. Sci. 122, 73-84 (2019). Full Text: ### BEM analysis of plane wave coupling to three-phase power line. (English)Zbl 1418.78008 Cheng, Alexander H.-D. (ed.) et al., Boundary elements and other mesh reduction methods XXXXI. Selected papers based on the presentations at the 41st international conference (BEM/MRM), New Forest, UK, September 11–13, 2018. Southampton: WIT Press. WIT Trans. Eng. Sci. 122, 63-72 (2019). Full Text: Full Text: Full Text: Full Text: Full Text: ### Non-stationary electromagnetics. An integral equations approach. 2nd edition. (English)Zbl 1426.78001 Singapore: Pan Stanford Publishing (ISBN 978-981-4774-95-6/hbk; 978-0-429-65095-6/ebook). xiv, 459 p. (2019). Full Text: Full Text: ### A mathematical and numerical framework for near-field optics. (English)Zbl 1407.78018 MSC:  78A46 45Q05 Full Text: ### Learning light transport the reinforced way. (English)Zbl 1483.68291 Owen, Art B. (ed.) et al., Monte Carlo and quasi-Monte Carlo methods, MCQMC 2016. 
Proceedings of the 12th international conference on ‘Monte Carlo and quasi-Monte Carlo methods in scientific computing’, Stanford, CA, August 14–19, 2016. Cham: Springer. Springer Proc. Math. Stat. 241, 181-195 (2018). Full Text: ### Integral equation methods in inverse obstacle scattering with a generalized impedance boundary condition. (English)Zbl 1407.65271 Dick, Josef (ed.) et al., Contemporary computational mathematics – a celebration of the 80th birthday of Ian Sloan. In 2 volumes. Cham: Springer. 721-740 (2018). Full Text: Full Text: ### Methods of mathematical modeling in waves scattering on local inhomogeneous interfaces between two media. (English)Zbl 1406.78024 MSC:  78M25 78A45 45E99 65R20 35J20 78A25 Full Text: ### Discretization methods for three-dimensional singular integral equations of electromagnetism. (English. Russian original)Zbl 1407.65299 Differ. Equ. 54, No. 9, 1225-1235 (2018); translation from Differ. Uravn. 54, No. 9, 1251-1261 (2018). Full Text: ### Method of integral equations for the three-dimensional problem of wave reflection from an irregular surface. (English. Russian original)Zbl 1406.78026 Differ. Equ. 54, No. 9, 1191-1201 (2018); translation from Differ. Uravn. 54, No. 9, 1218-1227 (2018). MSC:  78M34 78A45 35J05 45B05 45E99 65R20 78M25 Full Text: ### Two-step method for solving inverse problem of diffraction by an inhomogenous body. (English)Zbl 1402.78013 Beilina, L. (ed.) et al., Nonlinear and inverse problems in electromagnetics, PIERS 2017, St. Petersburg, Russia, May 22–25, 2017. Cham: Springer (ISBN 978-3-319-94059-5/hbk; 978-3-319-94060-1/ebook). Springer Proceedings in Mathematics & Statistics 243, 83-92 (2018). MSC:  78A46 78A45 45B05 45Q05 35J05 78M25 65R20 Full Text: ### A nonlinear multiparameter EV problem. (English)Zbl 1402.78015 Beilina, L. (ed.) et al., Nonlinear and inverse problems in electromagnetics, PIERS 2017, St. Petersburg, Russia, May 22–25, 2017. Cham: Springer (ISBN 978-3-319-94059-5/hbk; 978-3-319-94060-1/ebook). Springer Proceedings in Mathematics & Statistics 243, 55-70 (2018). Full Text: ### Two problems about conducting strips. (English)Zbl 1401.45003 MSC:  45B05 78A30 Full Text: ### All-analytical evaluation of the singular integrals involved in the method of moments. (English)Zbl 1395.45005 MSC:  45E05 78A25 94A12 Full Text: Full Text: Full Text: Full Text: ### A magnetostatic energy formula arising from the $$L^2$$-orthogonal decomposition of the stray field. (English)Zbl 1397.78016 MSC:  78A30 45P05 78M25 Full Text: Full Text: Full Text: ### Novel single-source surface integral equation for scattering problems by 3-D dielectric objects. (English)Zbl 1391.45006 MSC:  45E05 78A45 Full Text: Full Text: Full Text: all top 5 all top 5 all top 5 all top 3 all top 3
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24643389880657196, "perplexity": 6028.493459992598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539131.21/warc/CC-MAIN-20220521143241-20220521173241-00639.warc.gz"}
http://philpapers.org/s/%20Johnstone
## Works by Johnstone 196 found Sort by: Disambiguations: Henry W. Johnstone [37] Henry W. Johnstone Jr [26] Albert A. Johnstone [14] Jas Johnstone [11] Johnstone Jr [10] Gerry Johnstone [8] D. J. Johnstone [6] James Johnstone [6] Not all matches are shown. Search with initial or firstname to single out others. Profile: Albert Arnold Johnstone (University of Oregon)Profile: Mark Johnstone (McMaster University)Profile: Dougie Johnstone (University of Leicester)Profile: Justine Johnstone (University of Sussex)Profile: Katie B. JohnstoneProfile: Lyn Johnstone (Royal Holloway University of London)Profile: Marie Johnstone (Dalhousie University) 1. PLoS ONE 3(3): e1897. doi:10.1371/journal.pone. My bibliography Export citation My bibliography Export citation 3. Notices Amer. Math. Sac. 51, 2004). Logically, such a "Grothendieck topos" is something like a universe of continuously variable sets. Before long, however, F.W. Lawvere and M. Tierney provided an elementary axiomatization.. No categories Translate to English My bibliography Export citation My bibliography Export citation 5. William Johnstone (forthcoming). Book Review: First and Second Chronicles. [REVIEW] Interpretation 58 (2):206-206. No categories My bibliography Export citation 6. No categories My bibliography Export citation 7. No categories My bibliography Export citation 8. Robert Clarke & Tom Johnstone (2013). Prefrontal Inhibition of Threat Processing Reduces Working Memory Interference. Frontiers in Human Neuroscience 7. My bibliography Export citation 9. Albert A. Johnstone (2013). Why Emotion? Journal of Consciousness Studies 20 (9-10):15-38. The various roles proposed for emotion, whether psychological such as preparing for action or serving prior concerns, or biological such as protecting and promoting well-being, are easily shown to have an awkward number of exceptions. This paper attempts to explain why. To this end it undertakes a Husserlian phenomenological examination of first-person experience of two types of responses, the various somatic responses elicited by sensations (pain, cold, pleasure, sudden intensity) and the various personal directed emotions (grief, fear, affection, joy). The (...) My bibliography Export citation 10. Brian V. Johnstone (2013). Eschatology and Social Ethics. Bijdragen 37 (1):47-85. No categories My bibliography Export citation 11. Justine Johnstone (2013). Supersizing the Mind. Journal of Critical Realism 12 (3):405-409. No categories My bibliography Export citation 12. Mark A. Johnstone (2013). Aristotle on Sounds. British Journal for the History of Philosophy 21 (5):631-48. In this paper I consider two related issues raised by Aristotle's treatment of hearing and sounds. The first concerns the kinds of changes Aristotle takes to occur, in both perceptual medium and sense organs, when a perceiver hears a sounding object. The second issue concerns Aristotle's views on the nature and location of the proper objects of auditory perception. I argue that Aristotle's views on these topics are not what they have sometimes been taken to be, and that when rightly (...) My bibliography Export citation 13. Mark A. Johnstone (2013). Anarchic Souls: Plato's Depiction of the Democratic Man. Phronesis 58 (2):139-59. In books 8 and 9 of Plato’s Republic, Socrates provides a detailed account of the nature and origins of four main kinds of vice found in political constitutions and in the kinds of people that correspond to them. 
The third of the four corrupt kinds of person he describes is the ‘democratic man’. In this paper, I ask what ‘rules’ in the democratic man’s soul. It is commonly thought that his soul is ruled in some way by its appetitive part, (...) My bibliography Export citation 14. Peter Johnstone (2013). What Do Freyd's Toposes Classify? Logica Universalis 7 (3):335-340. We describe a method for presenting (a topos closely related to) either of Freyd’s topos-theoretic models for the independence of the axiom of choice as the classifying topos for a geometric theory. As an application, we show that no such topos can admit a geometric morphism from a two-valued topos satisfying countable dependent choice. My bibliography Export citation 15. Albert A. Johnstone (2012). The Deep Bodily Roots of Emotion. Husserl Studies 28 (3):179-200. This article explores emotions and their relationship to ‘somatic responses’, i.e., one’s automatic responses to sensations of pain, cold, warmth, sudden intensity. To this end, it undertakes a Husserlian phenomenological analysis of the first-hand experience of eight basic emotions, briefly exploring their essential aspects: their holistic nature, their identifying dynamic transformation of the lived body, their two-layered intentionality, their involuntary initiation and voluntary espousal. The fact that the involuntary tensional shifts initiating emotions are irreplicatable voluntarily, is taken to show that (...) My bibliography Export citation 16. M. -J. Johnstone (2012). Bioethics, Cultural Differences and the Problem of Moral Disagreements in End-Of-Life Care: A Terror Management Theory. Journal of Medicine and Philosophy 37 (2):181-200. Next SectionCultural differences in end-of-life care and the moral disagreements these sometimes give rise to have been well documented. Even so, cultural considerations relevant to end-of-life care remain poorly understood, poorly guided, and poorly resourced in health care domains. Although there has been a strong emphasis in recent years on making policy commitments to patient-centred care and respecting patient choices, persons whose minority cultural worldviews do not fit with the worldviews supported by the conventional principles of western bioethics face a (...) My bibliography Export citation 17. Mark A. Johnstone (2012). Aristotle on Odour and Smell. Oxford Studies in Ancient Philosophy 43:143-83. The sense of smell occupies a peculiar intermediate position within Aristotle's theory of sense perception: odours, like colours and sounds, are perceived at a distance through an external medium of air or water; yet in their nature they are intimately related to flavours, the proper objects of taste, which for Aristotle is a form of touch. In this paper, I examine Aristotle's claims about odour and smell, especially in De Anima II.9 and De Sensu 5, to see what light they (...) My bibliography Export citation 18. Megan-Jane Johnstone (2012). Academic Freedom and the Obligation to Ensure Morally Responsible Scholarship in Nursing. Nursing Inquiry 19 (2):107-115. No categories My bibliography Export citation 19. This article brings together the United Nations’ International Covenant on Economic, Social and Cultural Rights (ICESCR) and John McMurtry’s theory of value. In this perspective, the ICESCR is construed as a prime example of “civil commons,” while McMurtry’s theory of value is proposed as a tool of interpretation of the covenant. 
In particular, McMurtry’s theory of value is a hermeneutical device capable of highlighting: (a) what alternative conception of value systemically operates against the fulfilment of the rights enshrined in the (...) No categories My bibliography Export citation 20. Albert A. Johnstone (2011). The Basic Self and Its Doubles. Journal of Consciousness Studies 18 (7-8):169-195. As Descartes noted, a proper account of the nature of the being one is begins with a basic self present in first-person experience, a self that one cannot cogently doubt being. This paper seeks to uncover such a self, first within consciousness and thinking, then within the lived or first-person felt body. After noting the lack of grounding of Merleau-Ponty’s commonly referenced reflections, it undertakes a phenomenological investigation of the body that finds the basic self to reside in one’s espoused (...) My bibliography Export citation 21. D. J. Johnstone & D. V. Lindley (2011). Elementary Proof That Mean–Variance Implies Quadratic Utility. Theory and Decision 70 (2):149-155. An extensive literature overlapping economics, statistical decision theory and finance, contrasts expected utility [EU] with the more recent framework of mean–variance (MV). A basic proposition is that MV follows from EU under the assumption of quadratic utility. A less recognized proposition, first raised by Markowitz, is that MV is fully justified under EU, if and only if utility is quadratic. The existing proof of this proposition relies on an assumption from EU, described here as “Buridan’s axiom” after the French philosopher’s (...) No categories My bibliography Export citation 22. Mark A. Johnstone (2011). Changing Rulers in the Soul: Psychological Transitions in Republic 8-9. Oxford Studies in Ancient Philosophy 41:139-67. My bibliography Export citation 23. Megan-Jane Johnstone (2011). Nursing and Justice as a Basic Human Need. Nursing Philosophy 12 (1):34-44. My bibliography Export citation 24. Joel David Hamkins & Thomas A. Johnstone (2010). Indestructible Strong Unfoldability. Notre Dame Journal of Formal Logic 51 (3):291-321. Using the lottery preparation, we prove that any strongly unfoldable cardinal $\kappa$ can be made indestructible by all. My bibliography Export citation 25. Christopher Lyle Johnstone (2009). Listening to the Logos: Speech and the Coming of Wisdom in Ancient Greece. University of South Carolina Press. Prologue -- The Greek stones speak : toward an archaeology of consciousness -- Singing the muses' song : myth, wisdom, and speech -- Physis, kosmos, logos : presocratic thought and the emergence of nature-consciousness -- Sophistical wisdom, Socratic wisdom, and the political life -- Civic wisdom, divine wisdom : Socrates, Plato, and two visions for the Athenian citizen -- Speculative wisdom, practical wisdom : Aristotle and the culmination of Hellenic thought -- Epilogue. My bibliography Export citation 26. M. -J. Johnstone (2009). Editorial Comment. Nursing Ethics 16 (5):523-524. My bibliography Export citation 27. Recent research suggests that spiritual experiences are related to increased physiological activity of the frontal and temporal lobes and decreased activity of the right parietal lobe. The current study determined if similar relationships exist between self-reported spirituality and neuropsychological abilities associated with those cerebral structures for persons with traumatic brain injury (TBI). Participants included 26 adults with TBI referred for neuropsychological assessment. 
Measures included the Core Index of Spirituality (INSPIRIT); neuropsychological indices of cerebral structures: temporal lobes (Wechsler Memory Scale-III), right (...) My bibliography Export citation 28. Jennifer Johnstone & Niko Tiliopoulos (2008). Exploring the Relationship Between Schizotypal Personality Traits and Religious Attitude in an International Muslim Sample. Archive for the Psychology of Religion 30 (1):241-253. No categories My bibliography Export citation 29. Thomas A. Johnstone (2008). Strongly Unfoldable Cardinals Made Indestructible. Journal of Symbolic Logic 73 (4):1215-1248. I provide indestructibility results for large cardinals consistent with V = L, such as weakly compact, indescribable and strongly unfoldable cardinals. The Main Theorem shows that any strongly unfoldable cardinal κ can be made indestructible by <κ-closed. κ-proper forcing. This class of posets includes for instance all <κ-closed posets that are either κ -c.c, or ≤κ-strategically closed as well as finite iterations of such posets. Since strongly unfoldable cardinals strengthen both indescribable and weakly compact cardinals, the Main Theorem therefore makes (...) My bibliography Export citation 30. D. J. Johnstone (2007). The Value of a Probability Forecast From Portfolio Theory. Theory and Decision 63 (2):153-203. A probability forecast scored ex post using a probability scoring rule (e.g. Brier) is analogous to a risky financial security. With only superficial adaptation, the same economic logic by which securities are valued ex ante – in particular, portfolio theory and the capital asset pricing model (CAPM) – applies to the valuation of probability forecasts. Each available forecast of a given event is valued relative to each other and to the “market” (all available forecasts). A forecast is seen to be (...) My bibliography Export citation 31. David Johnstone (2007). Economic Darwinism: Who has the Best Probabilities? [REVIEW] Theory and Decision 62 (1):47-96. Simulation evidence obtained within a Bayesian model of price-setting in a betting market, where anonymous gamblers queue to bet against a risk-neutral bookmaker, suggests that a gambler who wants to maximize future profits should trade on the advice of the analyst cum probability forecaster who records the best probability score, rather than the highest trading profits, during the preceding observation period. In general, probability scoring rules, specifically the log score and better known “Brier” (quadratic) score, are found to have higher (...) No categories My bibliography Export citation 32. Gerry Johnstone (2007). Critical Perspectives on Restorative Justice. In Gerry Johnstone & Daniel W. van Ness (eds.), Handbook of Restorative Justice. 598--614. No categories My bibliography Export citation 33. Gerry Johnstone & Daniel Van Ness (2007). The Meaning of Restorative Justice. In Gerry Johnstone & Daniel W. van Ness (eds.), Handbook of Restorative Justice. No categories My bibliography Export citation 34. Gerry Johnstone & DanielW Van Ness (2007). Evaluation and Restorative Justice. In Gerry Johnstone & Daniel W. van Ness (eds.), Handbook of Restorative Justice. No categories My bibliography Export citation 35. Gerry Johnstone & DanielW Van Ness (2007). Restorative Justice in Social Context. In Gerry Johnstone & Daniel W. van Ness (eds.), Handbook of Restorative Justice. No categories My bibliography Export citation 36. Gerry Johnstone & DanielW Van Ness (2007). Roots of Restorative Justice. In Gerry Johnstone & Daniel W. 
van Ness (eds.), Handbook of Restorative Justice. No categories My bibliography Export citation 37. Gerry Johnstone & DanielW Van Ness (2007). The Global Appeal of Restorative Justice. In Gerry Johnstone & Daniel W. van Ness (eds.), Handbook of Restorative Justice. No categories My bibliography Export citation 38. Henry W. Johnstone (2007). The Philosophical Basis of Rhetoric. Philosophy and Rhetoric 40 (1):15-26. No categories My bibliography Export citation 39. Justine Johnstone (2007). Towards a Creativity Research Agenda in Information Ethics. International Review of Information Ethics 7:09. No categories My bibliography Export citation 40. Justine Johnstone (2007). Technology as Empowerment: A Capability Approach to Computer Ethics. [REVIEW] Ethics and Information Technology 9 (1):73-87. Standard agent and action-based approaches in computer ethics tend to have difficulty dealing with complex systems-level issues such as the digital divide and globalisation. This paper argues for a value-based agenda to complement traditional approaches in computer ethics, and that one value-based approach well-suited to technological domains can be found in capability theory. Capability approaches have recently become influential in a number of fields with an ethical or policy dimension, but have not so far been applied in computer ethics. The (...) My bibliography Export citation 41. Christopher Lyle Johnstone (2006). Sophistical Wisdom:. Philosophy and Rhetoric 39 (4):265-289. No categories My bibliography Export citation 42. Mark Johnstone (2006). Better Than Mere Knowledge? The Function of Sensory Awareness. In John Hawthorne & Tamar Gendler (eds.), Perceptual Experience. Oxford University Press. 260--290. No categories My bibliography Export citation 43. Peter T. Johnstone (2006). Complemented Sublocales and Open Maps. Annals of Pure and Applied Logic 137 (1):240-255. My bibliography Export citation 44. A. Johnstone & M. Sheets-Johnstone (2005). Edmund Husserl: A Review of the Lectures on Transcendental Logic. [REVIEW] Journal of Consciousness Studies 12 (2):43-51. The centerpiece of the Analyses is a translation from the German of notes for a series of lectures given by phenomenologist Edmund Husserl in the early twenties, which is to say some eighty years ago. Husserl designated the topic of the lectures 'transcendental logic'. In this context, the term, 'transcendental', is not to be understood in some mystical sense, but rather in a Kantian sense: pertaining to the conditions of possibility of experience. Likewise, the term, 'logic', is not to be (...) My bibliography Export citation 45. G. Johnstone (2005). Research Ethics in Criminology. Research Ethics 1 (2):60-66. No categories My bibliography Export citation My bibliography Export citation My bibliography Export citation 48. Albert Johnstone (2003). Self-Reference and Gödel's Theorem: A Husserlian Analysis. [REVIEW] Husserl Studies 19 (2):131-151.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6281561851501465, "perplexity": 15103.482439517531}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999670924/warc/CC-MAIN-20140305060750-00065-ip-10-183-142-35.ec2.internal.warc.gz"}
https://statgeek.net/2018/03/16/simple-maths-of-a-fairer-uss-deal/
## Simple maths of a fairer USS deal

In yesterday's post I showed a graph, followed by some comments to suggest that future USS proposals with a flatter (or even increasing) "percent lost" curve would be fairer (and, as I argued earlier in my Robin Hood post, more affordable at the same time).

It's now clear to me that my suggestion seemed a bit cryptic to many (maybe most!) who read it yesterday.  So here I will try to show more specifically how to achieve a flat curve.  (This is not because I think flat is optimal.  It's mainly because it's easy to explain.  As already mentioned, it might not be a bad idea if the curve was actually to increase a bit as salary levels increase; that would allow those with higher salaries to feel happy that they are doing their bit towards the sustainable future of USS.)

## Flattening the curve

The graph below is the same as yesterday's but with a flat (blue, dashed) line drawn at the level of 4% lost across all salary levels.

I drew the line at 4% here just as an example, to illustrate the calculation.  The actual level needed — i.e., the "affordable" level for universities — would need to be determined by negotiation; but the maths is essentially the same, whatever the level (within reason).

Let's suppose we want to adjust the USS contribution and benefits parameters to achieve just such a flat "percent lost" curve, at the 4% level.  How is that done?

I will assume here the same adjustable parameters that UUK and UCU appear to have in mind, namely:

• employee contribution rate E (as percentage of salary — currently 8; was 8.7 in the 12 March proposal; was 8 in the January proposal)
• threshold salary T, over which defined benefit (DB) pension entitlement ceases (which is currently £55.55k; was £42k in the 12 March proposal; and was £0 in the January proposal)
• accrual rate A, in the DB pension.  Expressed here in percentage points (currently 100/75; was 100/85 in the 12 March proposal; and not relevant to the January proposal).
• employer contribution rate (%) to the defined contribution (DC) part of USS pension.  Let's allow different rates $C_1$ and $C_2$ for, respectively, salaries between T and £55.55k, and salaries over £55.55k. (Currently $C_1$ is irrelevant, and $C_2$ is 13 (max); these were both set at 12 in the 12 March proposal; and were both 13.25 in the January proposal.)

I will assume also, as all the recent proposals do, that the 1% USS match possibility is lost to all members.

Then, to get to 4% lost across the board, we need simply to solve the following linear equations.  (To see where these came from, please see this earlier post.)

For salary up to T:

$(E - 8) + 19(100/75 - A) + 1 = 4.$

For salary between T and £55.55k:

$-8 + 19(100/75) - C_1 + 1 = 4.$

For salary over £55.55k:

$13 - C_2 = 4.$

Solving those last two equations is simple, and results in

$C_1 = 14.33, \qquad C_2 = 9.$

The first equation above clearly allows more freedom: it's just one equation, with two unknowns, so there are many solutions available.  Three example solutions, still based on the illustrative 4% loss level across all salary levels, are:

$E=8, \qquad A = 1.175 = 100/85.1$

$E = 8.7, \qquad A = 1.21 = 100/82.6$

$E = 11, \qquad A = 100/75.$

At the end here I'll give code in R to do the above calculation quite generally, i.e., for any desired percentage loss level.  First let me just make a few remarks relating to all this.

## Remarks

### Choice of threshold

Note that the value of T does not enter into the above calculation.
Clearly there will be (negotiable) interplay between T and the required percentage loss, though, for a given level of affordability.

### Choice of $C_2$

Much depends on the value of $C_2$. The calculation above gives the value of $C_2$ needed for a flat "percent lost" curve, at any given level for the percent lost (which was 4% in the example above).

To achieve an increasing "percent lost" curve, we could simply reduce the value of $C_2$ further than the answer given by the above calculation.  Alternatively, as suggested in my earlier Robin Hood post, USS could apply a lower value of $C_2$ only for salaries above some higher threshold — i.e., in much the same spirit as progressive taxation of income.

Just as with income tax, it would be important not to set $C_2$ too small, otherwise the highest-paid members would quite likely want to leave USS.  There is clearly a delicate balance to be struck, at the top end of the salary spectrum.

But it is clear that if the higher-paid were to sacrifice at least as much as everyone else, in proportion to their salary, then that would allow the overall level of "percent lost" to be appreciably reduced, which would benefit the vast majority of USS members.

### Determination of the overall "percent lost"

Everything written here constitutes a methodology to help with finding a good solution.  As mentioned at the top here, the actual solution — and in particular, the actual level of USS member pain (if any) deemed to be necessary to keep USS afloat — will be a matter for negotiation.  The maths here can help inform that negotiation, though.

## Code for solving the above equations

## Function to compute the USS parameters needed for a
## flat "percent lost" curve
##
## Function arguments are:
##   loss: in percentage points, the constant loss desired
##   E: employee contribution, in percentage points
##   A: the DB accrual rate
##
## Exactly one of E and A must be specified (ie, not NULL).
##
## Example calls:
##   flatcurve(4.0, A = 100/75)
##   flatcurve(2.0, E = 10.5)
##   flatcurve(1.0, A = 100/75)  # status quo, just 1% "match" lost

flatcurve <- function(loss, E = NULL, A = NULL){
  if (is.null(E) && is.null(A)) { stop("E and A can't both be NULL") }
  if (!is.null(E) && !is.null(A)) { stop("one of {E, A} must be NULL") }
  c1 <- 19 * (100/75) - (7 + loss)
  c2 <- 13 - loss
  if (is.null(E)) { E <- 7 + loss - (19 * (100/75 - A)) }
  if (is.null(A)) { A <- (E - 7 - loss + (19 * 100/75)) / 19 }
  return(list(loss_percent = loss,
              employee_contribution_percent = E,
              accrual_reciprocal = 100/A,
              DC_employer_rate_below_55.55k = c1,
              DC_employer_rate_above_55.55k = c2))
}

The above function will run in base R.
Here are three examples of its use (copied from an interactive session in R):

### Specify 4% loss level,
### still using the current USS DB accrual rate
> flatcurve(4.0, A = 100/75)
$loss_percent
[1] 4

$employee_contribution_percent
[1] 11

$accrual_reciprocal
[1] 75

$DC_employer_rate_below_55.55k
[1] 14.33333

$DC_employer_rate_above_55.55k
[1] 9

#------------------------------------------------------------

### This time for a smaller (2%) loss,
### with specified employee contribution
> flatcurve(2.0, E = 10.5)
$loss_percent
[1] 2

$employee_contribution_percent
[1] 10.5

$accrual_reciprocal
[1] 70.80745

$DC_employer_rate_below_55.55k
[1] 16.33333

$DC_employer_rate_above_55.55k
[1] 11

#------------------------------------------------------------

### Finally, my personal favourite:
### --- status quo with just the "match" lost
> flatcurve(1, A = 100/75)
$loss_percent
[1] 1

$employee_contribution_percent
[1] 8

$accrual_reciprocal
[1] 75

$DC_employer_rate_below_55.55k
[1] 17.33333

$DC_employer_rate_above_55.55k
[1] 12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 17, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7419849634170532, "perplexity": 3213.8175698137593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143635.54/warc/CC-MAIN-20200218055414-20200218085414-00468.warc.gz"}
http://www.maplesoft.com/support/help/Maple/view.aspx?path=Finance/BenchmarkRate
Finance - Maple Help

Finance[BenchmarkRate] - create a benchmark rate

Calling Sequence

BenchmarkRate(forecast, opts)
BenchmarkRate(tenor, units, familyname, forecast)

Parameters

tenor - positive integer; length of the tenor
units - Days, Weeks, Months, or Years; time units
familyname - family name; can be one of the following: AUDLIBOR, CADLIBOR, CDOR, CHFLIBOR, DKKLIBOR, EURIBOR, GBPLIBOR, JIBAR, JPYLIBOR, NZDLIBOR, TIBOR, TRLIBOR, USDLIBOR, ZIBOR
forecast - yield term structure; forecasted rate or term structure

Description

• The BenchmarkRate(forecast, opts) calling sequence creates a new benchmark rate with the specified calendar, day count and business day conventions.

• The BenchmarkRate(tenor, units, familyname, forecast) command creates one of the standard benchmark rates (e.g. one of the LIBOR rates). In this case standard market calendar, day count and business day conventions will be used.

• The parameter tenor is the length of the tenor. The parameter units specifies time units for the tenor. The parameter familyname is the family name for the benchmark index.

• The parameter forecast is the forecasted rate. For dates preceding the global evaluation date, historic data can be used (see the LoadHistory command). Otherwise the forecasted interest rate is used.

Examples

> with(Finance):
> SetEvaluationDate("January 15, 2005"):

Set defaults assumed by the USDLIBOR rate.

> Settings([calendar = NewYork, compounding = Simple, settlementdays = 2, daycounter = Actual360, businessdayconvention = ModifiedFollowing])

    [calendar = Null, compounding = Continuous, settlementdays = 0, daycounter = Historical, businessdayconvention = Unadjusted]   (1)

Create a 6-month USD LIBOR rate and use a flat rate of 7% as the forecast.

> benchmark := BenchmarkRate(6, Months, USDLIBOR, ForwardCurve(0.07, compounding = Continuous))

    benchmark := module() ... end module   (2)

This will throw an error, since no historic data is available for the given benchmark rate. On the other hand, the forecasted interest rate is used only for dates after the global evaluation date.

> benchmark("January 11, 2005")

> LoadHistory(benchmark, "January 08, 2005", [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07]):

Note that historic data is used only for dates preceding the global evaluation date. Otherwise the forecasted interest rate is used.

> benchmark("January 11, 2005")

    0.04000000000   (3)

> benchmark("January 10, 2005")

    0.03000000000   (4)

> benchmark("January 14, 2005")

    0.07000000000   (5)

> benchmark("January 16, 2005")

    0.07124638451   (6)

> rate := benchmark("January 17, 2005")

    rate := 0.07124638451   (7)

> fixingdate := AdvanceDate("Jan-17-2005", 2, Days)

    fixingdate := "Jan-19-2005"   (8)

> maturitydate := AdvanceDate(fixingdate, 6, Months)

    maturitydate := "Jul-19-2005"   (9)

> EquivalentRate(0.07, Continuous, Simple, fixingdate, maturitydate)

    0.07124638451   (10)

Compatibility

• The Finance[BenchmarkRate] command was introduced in Maple 15.
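As a cross-check on the fixing value 0.07124638451 above, here is a short R sketch (my own, not part of the Maple documentation): it converts the continuously compounded 7% forecast into the equivalent simple rate over Jan-19-2005 to Jul-19-2005 with an Actual/360 day count, the day counter set in the Settings call above.

```r
## Convert a continuously compounded rate to the equivalent simple rate
## over [fixing, maturity] with an Actual/360 day count (illustrative check).
fixing   <- as.Date("2005-01-19")
maturity <- as.Date("2005-07-19")

tau    <- as.numeric(maturity - fixing) / 360   # Actual/360 year fraction (181/360)
r_cont <- 0.07                                  # continuously compounded forecast rate

r_simple <- (exp(r_cont * tau) - 1) / tau
r_simple   # approximately 0.0712464, matching the EquivalentRate output above
```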
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 24, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6529073715209961, "perplexity": 5964.117489664858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860125897.19/warc/CC-MAIN-20160428161525-00173-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/168/2/u/a/
# Properties Label 168.2.u.a Level 168 Weight 2 Character orbit 168.u Analytic conductor 1.341 Analytic rank 0 Dimension 16 CM no Inner twists 4 # Related objects ## Newspace parameters Level: $$N$$ $$=$$ $$168 = 2^{3} \cdot 3 \cdot 7$$ Weight: $$k$$ $$=$$ $$2$$ Character orbit: $$[\chi]$$ $$=$$ 168.u (of order $$6$$, degree $$2$$, minimal) ## Newform invariants Self dual: no Analytic conductor: $$1.34148675396$$ Analytic rank: $$0$$ Dimension: $$16$$ Relative dimension: $$8$$ over $$\Q(\zeta_{6})$$ Coefficient field: $$\mathbb{Q}[x]/(x^{16} - \cdots)$$ Defining polynomial: $$x^{16} - 6 x^{15} + 19 x^{14} - 42 x^{13} + 65 x^{12} - 48 x^{11} - 94 x^{10} + 444 x^{9} - 962 x^{8} + 1332 x^{7} - 846 x^{6} - 1296 x^{5} + 5265 x^{4} - 10206 x^{3} + 13851 x^{2} - 13122 x + 6561$$ Coefficient ring: $$\Z[a_1, \ldots, a_{5}]$$ Coefficient ring index: $$2^{8}$$ Twist minimal: yes Sato-Tate group: $\mathrm{SU}(2)[C_{6}]$ ## $q$-expansion Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\ldots,\beta_{15}$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form. $$f(q)$$ $$=$$ $$q + ( \beta_{3} + \beta_{9} ) q^{3} + \beta_{14} q^{5} -\beta_{10} q^{7} + ( 1 + \beta_{1} - \beta_{2} - \beta_{4} + \beta_{6} + \beta_{13} ) q^{9} +O(q^{10})$$ $$q + ( \beta_{3} + \beta_{9} ) q^{3} + \beta_{14} q^{5} -\beta_{10} q^{7} + ( 1 + \beta_{1} - \beta_{2} - \beta_{4} + \beta_{6} + \beta_{13} ) q^{9} + ( \beta_{9} + \beta_{12} + \beta_{14} - \beta_{15} ) q^{11} + ( \beta_{3} + \beta_{6} - \beta_{8} + \beta_{13} ) q^{13} + ( \beta_{1} + \beta_{2} - \beta_{3} - 2 \beta_{5} + \beta_{7} + \beta_{8} - 2 \beta_{9} - \beta_{10} - \beta_{11} - \beta_{12} - 2 \beta_{14} + \beta_{15} ) q^{15} + ( -2 \beta_{1} + \beta_{3} + \beta_{5} - 2 \beta_{7} - \beta_{8} + 2 \beta_{9} + \beta_{11} + \beta_{12} + 2 \beta_{14} - \beta_{15} ) q^{17} + ( -\beta_{2} - \beta_{6} + \beta_{10} ) q^{19} + ( 1 - \beta_{2} - \beta_{5} - \beta_{11} - \beta_{12} - \beta_{14} ) q^{21} + ( -1 - \beta_{1} + \beta_{2} - \beta_{3} + \beta_{4} - \beta_{6} + \beta_{7} - \beta_{9} + \beta_{11} - \beta_{13} ) q^{23} + ( \beta_{2} - \beta_{3} - 2 \beta_{4} + \beta_{5} + \beta_{8} + \beta_{9} + \beta_{10} - \beta_{13} ) q^{25} + ( -1 + \beta_{2} + 2 \beta_{4} - \beta_{6} + 2 \beta_{8} + \beta_{10} + \beta_{11} - \beta_{13} ) q^{27} + ( \beta_{1} - 2 \beta_{3} + 2 \beta_{5} + \beta_{7} - \beta_{8} - 4 \beta_{9} - \beta_{11} - \beta_{12} - 2 \beta_{14} + \beta_{15} ) q^{29} + ( -4 + \beta_{3} + 2 \beta_{4} + \beta_{5} + \beta_{6} - \beta_{8} + \beta_{9} - \beta_{10} - \beta_{13} ) q^{31} + ( \beta_{1} - \beta_{2} + \beta_{5} - \beta_{6} - \beta_{9} + \beta_{10} - 2 \beta_{14} + 2 \beta_{15} ) q^{33} + ( -\beta_{1} + \beta_{3} + \beta_{4} + 2 \beta_{5} - \beta_{6} - \beta_{7} - \beta_{9} + \beta_{10} - \beta_{13} + 2 \beta_{15} ) q^{35} + ( -1 + 2 \beta_{2} - 2 \beta_{3} + \beta_{4} - \beta_{5} - 2 \beta_{6} + 2 \beta_{8} - \beta_{9} + \beta_{10} - \beta_{13} ) q^{37} + ( \beta_{2} - 3 \beta_{4} + \beta_{6} + \beta_{9} + \beta_{11} + \beta_{12} - \beta_{15} ) q^{39} + ( -2 \beta_{3} - 2 \beta_{8} ) q^{41} + ( 2 - 2 \beta_{2} - \beta_{3} - 2 \beta_{5} - \beta_{6} + \beta_{8} - 2 \beta_{9} + 2 \beta_{10} + \beta_{13} ) q^{43} + ( -4 - 2 \beta_{1} + 2 \beta_{3} + 2 \beta_{4} - 2 \beta_{5} + \beta_{6} + 2 \beta_{8} + 2 \beta_{9} - \beta_{10} + \beta_{11} - \beta_{13} + \beta_{14} - \beta_{15} ) q^{45} + ( -1 - \beta_{1} + \beta_{2} - \beta_{3} - \beta_{4} - 2 \beta_{5} + \beta_{6} + \beta_{7} - 
\beta_{9} - 2 \beta_{10} - \beta_{11} - 2 \beta_{12} + \beta_{13} - 2 \beta_{14} - 2 \beta_{15} ) q^{47} + ( -3 + \beta_{3} + 2 \beta_{4} - 2 \beta_{5} - \beta_{8} - 2 \beta_{9} - \beta_{10} ) q^{49} + ( 2 + \beta_{1} - \beta_{2} - \beta_{3} - 2 \beta_{4} + 2 \beta_{5} + \beta_{6} - \beta_{7} - 4 \beta_{8} + \beta_{10} - 3 \beta_{11} + 2 \beta_{13} - \beta_{14} ) q^{51} + ( 2 \beta_{9} + \beta_{11} + 2 \beta_{12} + \beta_{14} - 2 \beta_{15} ) q^{53} + ( 2 - \beta_{2} - 2 \beta_{3} - 4 \beta_{4} + \beta_{6} + 2 \beta_{8} - \beta_{10} + \beta_{13} ) q^{55} + ( -1 - \beta_{3} - \beta_{6} + \beta_{7} - 2 \beta_{9} - \beta_{12} + \beta_{13} ) q^{57} + ( 2 - 2 \beta_{2} + \beta_{3} - \beta_{4} + 3 \beta_{5} + \beta_{6} - 4 \beta_{7} - 3 \beta_{8} + 3 \beta_{9} + \beta_{10} + \beta_{11} + 2 \beta_{12} + \beta_{13} + 3 \beta_{14} ) q^{59} + ( 1 + \beta_{2} + \beta_{4} + \beta_{6} - 2 \beta_{10} + \beta_{13} ) q^{61} + ( -3 + \beta_{4} + \beta_{5} + \beta_{6} - 2 \beta_{8} - \beta_{10} - 2 \beta_{12} + \beta_{13} + \beta_{14} ) q^{63} + ( -1 + \beta_{1} + \beta_{2} + 3 \beta_{3} + \beta_{4} - 3 \beta_{5} - \beta_{6} + 3 \beta_{7} + 6 \beta_{8} - \beta_{11} - \beta_{13} - 2 \beta_{14} ) q^{65} + ( \beta_{2} + 2 \beta_{4} + \beta_{10} - \beta_{13} ) q^{67} + ( 1 - 2 \beta_{4} - 2 \beta_{6} + \beta_{7} - \beta_{8} + \beta_{12} - 2 \beta_{13} ) q^{69} + ( 1 + 3 \beta_{1} - \beta_{2} - 2 \beta_{5} + \beta_{7} + \beta_{8} + \beta_{10} - \beta_{12} + 3 \beta_{15} ) q^{71} + ( 2 + \beta_{2} + \beta_{3} - \beta_{4} + \beta_{5} + \beta_{6} - \beta_{8} + \beta_{9} - 2 \beta_{10} - 2 \beta_{13} ) q^{73} + ( 3 + \beta_{1} + \beta_{3} + 3 \beta_{4} + 2 \beta_{5} - \beta_{7} - \beta_{9} + \beta_{10} + \beta_{11} + 2 \beta_{12} - \beta_{13} + 2 \beta_{14} + 2 \beta_{15} ) q^{75} + ( -\beta_{1} + 3 \beta_{3} + \beta_{4} - \beta_{5} - \beta_{6} - \beta_{7} + 2 \beta_{8} + 2 \beta_{9} + \beta_{10} + 3 \beta_{11} - \beta_{13} + \beta_{14} + 2 \beta_{15} ) q^{77} + ( 4 - \beta_{2} + 2 \beta_{3} - 4 \beta_{4} + \beta_{5} + \beta_{6} - 2 \beta_{8} + \beta_{9} + \beta_{13} ) q^{79} + ( 4 \beta_{4} + 2 \beta_{5} + 2 \beta_{8} + 2 \beta_{9} + 2 \beta_{12} + 2 \beta_{14} - \beta_{15} ) q^{81} + ( 1 + 2 \beta_{1} - \beta_{2} + 2 \beta_{3} - 2 \beta_{4} + 2 \beta_{6} + 2 \beta_{8} - \beta_{10} - 3 \beta_{11} + 2 \beta_{13} - 2 \beta_{15} ) q^{83} + ( 1 - \beta_{2} + 2 \beta_{3} + 4 \beta_{5} - 2 \beta_{6} - 2 \beta_{8} + 4 \beta_{9} + \beta_{10} + 2 \beta_{13} ) q^{85} + ( 4 - 4 \beta_{1} + 3 \beta_{2} + \beta_{3} - 2 \beta_{4} - \beta_{5} - \beta_{6} + \beta_{8} + \beta_{9} - 2 \beta_{10} + 2 \beta_{11} - 2 \beta_{13} + 2 \beta_{14} - 2 \beta_{15} ) q^{87} + ( -1 + \beta_{2} - 2 \beta_{3} - \beta_{4} - 4 \beta_{5} + \beta_{6} + 2 \beta_{7} - 2 \beta_{9} - 2 \beta_{10} - 2 \beta_{11} - 4 \beta_{12} + \beta_{13} - 2 \beta_{14} ) q^{89} + ( 4 - 3 \beta_{2} + 2 \beta_{3} + 2 \beta_{4} + 3 \beta_{5} + \beta_{6} - 2 \beta_{8} + 3 \beta_{9} + \beta_{10} + 2 \beta_{13} ) q^{91} + ( 3 + \beta_{1} - \beta_{2} - 4 \beta_{3} - 3 \beta_{4} + \beta_{5} + \beta_{6} - 2 \beta_{7} - 2 \beta_{8} - \beta_{9} - \beta_{10} + \beta_{14} ) q^{93} + ( -2 \beta_{3} - \beta_{4} - 2 \beta_{5} + \beta_{6} - 2 \beta_{8} + \beta_{9} - \beta_{10} - \beta_{11} - \beta_{12} + \beta_{13} - \beta_{15} ) q^{95} + ( -2 + \beta_{2} + \beta_{3} + 4 \beta_{4} - 3 \beta_{6} - \beta_{8} + \beta_{10} - 3 \beta_{13} ) q^{97} + ( 1 - 3 \beta_{1} - \beta_{2} + \beta_{3} - 2 \beta_{5} - \beta_{6} - \beta_{7} + \beta_{8} + 2 \beta_{9} + \beta_{10} + 2 \beta_{11} + 
\beta_{12} + \beta_{13} + 4 \beta_{14} - 3 \beta_{15} ) q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$16q + 4q^{7} + 2q^{9} + O(q^{10})$$ $$16q + 4q^{7} + 2q^{9} + 8q^{15} - 6q^{19} + 14q^{21} - 18q^{25} - 48q^{31} - 12q^{33} - 2q^{37} - 22q^{39} + 20q^{43} - 42q^{45} - 28q^{49} + 6q^{51} - 8q^{57} + 36q^{61} - 32q^{63} + 14q^{67} + 30q^{73} + 54q^{75} + 28q^{79} + 30q^{81} + 16q^{85} + 78q^{87} + 66q^{91} + 16q^{93} + 20q^{99} + O(q^{100})$$ Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{16} - 6 x^{15} + 19 x^{14} - 42 x^{13} + 65 x^{12} - 48 x^{11} - 94 x^{10} + 444 x^{9} - 962 x^{8} + 1332 x^{7} - 846 x^{6} - 1296 x^{5} + 5265 x^{4} - 10206 x^{3} + 13851 x^{2} - 13122 x + 6561$$: $$\beta_{0}$$ $$=$$ $$1$$ $$\beta_{1}$$ $$=$$ $$\nu^{2}$$ $$\beta_{2}$$ $$=$$ $$($$$$-\nu^{15} - 15 \nu^{14} + 71 \nu^{13} + 48 \nu^{12} - 110 \nu^{11} + 384 \nu^{10} - 266 \nu^{9} - 252 \nu^{8} + 854 \nu^{7} - 1272 \nu^{6} + 378 \nu^{5} + 2160 \nu^{4} - 14985 \nu^{3} + 21627 \nu^{2} + 28431 \nu - 52488$$$$)/34992$$ $$\beta_{3}$$ $$=$$ $$($$$$-11 \nu^{15} - 30 \nu^{14} + 142 \nu^{13} - 363 \nu^{12} + 662 \nu^{11} - 258 \nu^{10} - 1288 \nu^{9} + 3546 \nu^{8} - 8138 \nu^{7} + 7392 \nu^{6} + 846 \nu^{5} - 28890 \nu^{4} + 41067 \nu^{3} - 38880 \nu^{2} + 34992 \nu + 19683$$$$)/69984$$ $$\beta_{4}$$ $$=$$ $$($$$$-22 \nu^{15} + 75 \nu^{14} - 238 \nu^{13} + 489 \nu^{12} - 737 \nu^{11} + 429 \nu^{10} + 1483 \nu^{9} - 5787 \nu^{8} + 11651 \nu^{7} - 14727 \nu^{6} + 5787 \nu^{5} + 23193 \nu^{4} - 70227 \nu^{3} + 126846 \nu^{2} - 147987 \nu + 131220$$$$)/34992$$ $$\beta_{5}$$ $$=$$ $$($$$$-25 \nu^{15} + 111 \nu^{14} - 295 \nu^{13} + 606 \nu^{12} - 770 \nu^{11} - 66 \nu^{10} + 2656 \nu^{9} - 7812 \nu^{8} + 13268 \nu^{7} - 13818 \nu^{6} - 2826 \nu^{5} + 39366 \nu^{4} - 88047 \nu^{3} + 133893 \nu^{2} - 131949 \nu + 69984$$$$)/23328$$ $$\beta_{6}$$ $$=$$ $$($$$$-19 \nu^{15} + 95 \nu^{14} - 274 \nu^{13} + 500 \nu^{12} - 653 \nu^{11} + 145 \nu^{10} + 2131 \nu^{9} - 6443 \nu^{8} + 11435 \nu^{7} - 11899 \nu^{6} - 513 \nu^{5} + 29421 \nu^{4} - 73926 \nu^{3} + 110970 \nu^{2} - 131463 \nu + 85293$$$$)/11664$$ $$\beta_{7}$$ $$=$$ $$($$$$-125 \nu^{15} + 669 \nu^{14} - 2069 \nu^{13} + 4116 \nu^{12} - 5794 \nu^{11} + 1302 \nu^{10} + 16736 \nu^{9} - 53232 \nu^{8} + 92296 \nu^{7} - 97758 \nu^{6} + 3510 \nu^{5} + 239166 \nu^{4} - 587979 \nu^{3} + 857547 \nu^{2} - 888651 \nu + 599238$$$$)/69984$$ $$\beta_{8}$$ $$=$$ $$($$$$19 \nu^{15} - 82 \nu^{14} + 226 \nu^{13} - 433 \nu^{12} + 542 \nu^{11} - 74 \nu^{10} - 1912 \nu^{9} + 5482 \nu^{8} - 9482 \nu^{7} + 10040 \nu^{6} - 42 \nu^{5} - 24354 \nu^{4} + 60237 \nu^{3} - 95580 \nu^{2} + 105948 \nu - 75087$$$$)/7776$$ $$\beta_{9}$$ $$=$$ $$($$$$193 \nu^{15} - 753 \nu^{14} + 1975 \nu^{13} - 3624 \nu^{12} + 4292 \nu^{11} - 84 \nu^{10} - 16018 \nu^{9} + 46218 \nu^{8} - 81374 \nu^{7} + 85788 \nu^{6} + 756 \nu^{5} - 197316 \nu^{4} + 515565 \nu^{3} - 849285 \nu^{2} + 958635 \nu - 708588$$$$)/69984$$ $$\beta_{10}$$ $$=$$ $$($$$$100 \nu^{15} - 429 \nu^{14} + 1207 \nu^{13} - 2301 \nu^{12} + 2648 \nu^{11} + 510 \nu^{10} - 10624 \nu^{9} + 29190 \nu^{8} - 48824 \nu^{7} + 45918 \nu^{6} + 9288 \nu^{5} - 142074 \nu^{4} + 312012 \nu^{3} - 474093 \nu^{2} + 502281 \nu - 347733$$$$)/34992$$ $$\beta_{11}$$ $$=$$ $$($$$$-61 \nu^{15} + 302 \nu^{14} - 784 \nu^{13} + 1463 \nu^{12} - 1772 \nu^{11} + 100 \nu^{10} + 6358 \nu^{9} - 18728 \nu^{8} + 32084 \nu^{7} - 33490 \nu^{6} - 1116 \nu^{5} + 85716 \nu^{4} - 197181 \nu^{3} + 331290 \nu^{2} - 374220 \nu + 255879$$$$)/23328$$ $$\beta_{12}$$ $$=$$ 
$$($$$$-74 \nu^{15} + 351 \nu^{14} - 1037 \nu^{13} + 2043 \nu^{12} - 2470 \nu^{11} + 234 \nu^{10} + 8450 \nu^{9} - 24816 \nu^{8} + 42964 \nu^{7} - 43500 \nu^{6} - 3870 \nu^{5} + 111618 \nu^{4} - 271836 \nu^{3} + 402489 \nu^{2} - 438615 \nu + 321489$$$$)/23328$$ $$\beta_{13}$$ $$=$$ $$($$$$-41 \nu^{15} + 179 \nu^{14} - 482 \nu^{13} + 962 \nu^{12} - 1225 \nu^{11} + 205 \nu^{10} + 3863 \nu^{9} - 11879 \nu^{8} + 20887 \nu^{7} - 22783 \nu^{6} + 1875 \nu^{5} + 53433 \nu^{4} - 134892 \nu^{3} + 214650 \nu^{2} - 231579 \nu + 165483$$$$)/11664$$ $$\beta_{14}$$ $$=$$ $$($$$$-275 \nu^{15} + 1101 \nu^{14} - 2921 \nu^{13} + 5466 \nu^{12} - 7066 \nu^{11} + 6 \nu^{10} + 24788 \nu^{9} - 70872 \nu^{8} + 123088 \nu^{7} - 124578 \nu^{6} - 5346 \nu^{5} + 321678 \nu^{4} - 799713 \nu^{3} + 1259955 \nu^{2} - 1358127 \nu + 883548$$$$)/69984$$ $$\beta_{15}$$ $$=$$ $$($$$$-2 \nu^{15} + 8 \nu^{14} - 21 \nu^{13} + 38 \nu^{12} - 41 \nu^{11} - 17 \nu^{10} + 195 \nu^{9} - 497 \nu^{8} + 779 \nu^{7} - 661 \nu^{6} - 349 \nu^{5} + 2499 \nu^{4} - 5247 \nu^{3} + 7371 \nu^{2} - 7452 \nu + 4617$$$$)/432$$ $$1$$ $$=$$ $$\beta_0$$ $$\nu$$ $$=$$ $$($$$$\beta_{15} - 2 \beta_{14} + \beta_{13} - \beta_{12} - \beta_{11} + \beta_{10} - 4 \beta_{9} + \beta_{8} + 2 \beta_{7} - \beta_{6} - \beta_{5} - \beta_{4} - 3 \beta_{3} + 2 \beta_{1} + 2$$$$)/4$$ $$\nu^{2}$$ $$=$$ $$\beta_{1}$$ $$\nu^{3}$$ $$=$$ $$($$$$\beta_{15} - 6 \beta_{13} + \beta_{12} + 6 \beta_{11} + \beta_{10} - \beta_{8} + \beta_{7} - 6 \beta_{6} + 10 \beta_{4} - 2 \beta_{3} + \beta_{2} - \beta_{1} - 5$$$$)/4$$ $$\nu^{4}$$ $$=$$ $$\beta_{15} - 2 \beta_{14} + \beta_{13} - 2 \beta_{12} - \beta_{10} + \beta_{6} + 3 \beta_{4} - 2 \beta_{3}$$ $$\nu^{5}$$ $$=$$ $$($$$$-2 \beta_{15} - 6 \beta_{14} + 17 \beta_{13} - 14 \beta_{12} - 7 \beta_{11} - 16 \beta_{10} - 6 \beta_{9} + 7 \beta_{7} - \beta_{6} - 25 \beta_{5} - 15 \beta_{4} - 7 \beta_{3} - \beta_{2} - \beta_{1} - 15$$$$)/4$$ $$\nu^{6}$$ $$=$$ $$-2 \beta_{15} + 8 \beta_{14} - 2 \beta_{13} + 2 \beta_{12} + 4 \beta_{11} - 6 \beta_{10} + 8 \beta_{9} + 6 \beta_{8} - 2 \beta_{7} + 2 \beta_{6} - 12 \beta_{5} + 4 \beta_{3} + 6 \beta_{2} - 2 \beta_{1} + 3$$ $$\nu^{7}$$ $$=$$ $$($$$$-13 \beta_{15} + 2 \beta_{14} - 37 \beta_{13} - 11 \beta_{12} + 13 \beta_{11} - 37 \beta_{10} - 92 \beta_{9} + 51 \beta_{8} + 22 \beta_{7} - 11 \beta_{6} - 51 \beta_{5} - 11 \beta_{4} - 81 \beta_{3} + 48 \beta_{2} - 26 \beta_{1} + 22$$$$)/4$$ $$\nu^{8}$$ $$=$$ $$-2 \beta_{14} + 12 \beta_{13} - 18 \beta_{11} + 4 \beta_{10} - 14 \beta_{9} - 4 \beta_{8} - 14 \beta_{7} + 8 \beta_{6} + 2 \beta_{5} - 12 \beta_{4} - 42 \beta_{3} - 8 \beta_{2} + 15 \beta_{1} + 12$$ $$\nu^{9}$$ $$=$$ $$($$$$-37 \beta_{15} - 10 \beta_{13} - 37 \beta_{12} - 94 \beta_{11} - 21 \beta_{10} - 299 \beta_{8} - 37 \beta_{7} - 10 \beta_{6} - 194 \beta_{4} - 102 \beta_{3} - 21 \beta_{2} + 37 \beta_{1} + 97$$$$)/4$$ $$\nu^{10}$$ $$=$$ $$23 \beta_{15} - 28 \beta_{14} - 19 \beta_{13} - 12 \beta_{12} + 16 \beta_{11} + 19 \beta_{10} + 4 \beta_{9} - 44 \beta_{8} + 25 \beta_{6} - 44 \beta_{5} + 51 \beta_{4} - 16 \beta_{3} + 44 \beta_{2}$$ $$\nu^{11}$$ $$=$$ $$($$$$-6 \beta_{15} - 682 \beta_{14} + 179 \beta_{13} - 346 \beta_{12} - 173 \beta_{11} - 432 \beta_{10} - 794 \beta_{9} + 173 \beta_{7} + 253 \beta_{6} - 259 \beta_{5} - 349 \beta_{4} - 173 \beta_{3} + 253 \beta_{2} - 3 \beta_{1} - 349$$$$)/4$$ $$\nu^{12}$$ $$=$$ $$-44 \beta_{15} + 48 \beta_{14} + 64 \beta_{13} + 172 \beta_{12} + 24 \beta_{11} - 32 \beta_{10} + 200 \beta_{9} - 8 \beta_{8} - 172 \beta_{7} - 64 \beta_{6} + 16 \beta_{5} + 100 
\beta_{3} + 32 \beta_{2} - 44 \beta_{1} - 135$$ $$\nu^{13}$$ $$=$$ $$($$$$-855 \beta_{15} + 1182 \beta_{14} - 615 \beta_{13} + 327 \beta_{12} + 855 \beta_{11} - 615 \beta_{10} + 1068 \beta_{9} - 647 \beta_{8} - 654 \beta_{7} - 681 \beta_{6} + 647 \beta_{5} - 1209 \beta_{4} + 741 \beta_{3} + 1296 \beta_{2} - 1710 \beta_{1} + 2418$$$$)/4$$ $$\nu^{14}$$ $$=$$ $$196 \beta_{14} + 324 \beta_{13} + 372 \beta_{11} + 460 \beta_{10} - 224 \beta_{9} + 416 \beta_{8} - 20 \beta_{7} - 136 \beta_{6} - 208 \beta_{5} - 984 \beta_{4} - 468 \beta_{3} + 136 \beta_{2} - 191 \beta_{1} + 984$$ $$\nu^{15}$$ $$=$$ $$($$$$-2919 \beta_{15} + 2858 \beta_{13} + 153 \beta_{12} - 490 \beta_{11} - 1463 \beta_{10} - 9 \beta_{8} + 153 \beta_{7} + 2858 \beta_{6} - 13798 \beta_{4} - 962 \beta_{3} - 1463 \beta_{2} + 2919 \beta_{1} + 6899$$$$)/4$$ ## Character values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/168\mathbb{Z}\right)^\times$$. $$n$$ $$73$$ $$85$$ $$113$$ $$127$$ $$\chi(n)$$ $$1 - \beta_{4}$$ $$1$$ $$-1$$ $$1$$ ## Embeddings For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label. Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 17.1 1.22961 − 1.21986i 1.73018 + 0.0805675i −0.441628 + 1.67480i 0.934861 + 1.45809i −0.601642 − 1.62420i 0.247636 + 1.71426i −1.70742 + 0.291063i 1.60841 − 0.642670i 1.22961 + 1.21986i 1.73018 − 0.0805675i −0.441628 − 1.67480i 0.934861 − 1.45809i −0.601642 + 1.62420i 0.247636 − 1.71426i −1.70742 − 0.291063i 1.60841 + 0.642670i 0 −1.67480 0.441628i 0 1.40397 + 2.43175i 0 −2.08606 + 1.62738i 0 2.60993 + 1.47928i 0 17.2 0 −1.45809 + 0.934861i 0 −1.90017 3.29119i 0 2.23495 1.41598i 0 1.25207 2.72623i 0 17.3 0 −1.21986 1.22961i 0 −1.40397 2.43175i 0 −2.08606 + 1.62738i 0 −0.0238727 + 2.99991i 0 17.4 0 0.0805675 1.73018i 0 1.90017 + 3.29119i 0 2.23495 1.41598i 0 −2.98702 0.278792i 0 17.5 0 0.291063 + 1.70742i 0 −0.0726693 0.125867i 0 1.05451 + 2.42652i 0 −2.83056 + 0.993934i 0 17.6 0 0.642670 + 1.60841i 0 1.28955 + 2.23357i 0 −0.203402 2.63792i 0 −2.17395 + 2.06735i 0 17.7 0 1.62420 0.601642i 0 0.0726693 + 0.125867i 0 1.05451 + 2.42652i 0 2.27605 1.95437i 0 17.8 0 1.71426 0.247636i 0 −1.28955 2.23357i 0 −0.203402 2.63792i 0 2.87735 0.849022i 0 89.1 0 −1.67480 + 0.441628i 0 1.40397 2.43175i 0 −2.08606 1.62738i 0 2.60993 1.47928i 0 89.2 0 −1.45809 0.934861i 0 −1.90017 + 3.29119i 0 2.23495 + 1.41598i 0 1.25207 + 2.72623i 0 89.3 0 −1.21986 + 1.22961i 0 −1.40397 + 2.43175i 0 −2.08606 1.62738i 0 −0.0238727 2.99991i 0 89.4 0 0.0805675 + 1.73018i 0 1.90017 3.29119i 0 2.23495 + 1.41598i 0 −2.98702 + 0.278792i 0 89.5 0 0.291063 1.70742i 0 −0.0726693 + 0.125867i 0 1.05451 2.42652i 0 −2.83056 0.993934i 0 89.6 0 0.642670 1.60841i 0 1.28955 2.23357i 0 −0.203402 + 2.63792i 0 −2.17395 2.06735i 0 89.7 0 1.62420 + 0.601642i 0 0.0726693 0.125867i 0 1.05451 2.42652i 0 2.27605 + 1.95437i 0 89.8 0 1.71426 + 0.247636i 0 −1.28955 + 2.23357i 0 −0.203402 + 2.63792i 0 2.87735 + 0.849022i 0 $$n$$: e.g. 2-40 or 990-1000 Embeddings: e.g. 
1-3 or 89.8 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles ## Inner twists Char Parity Ord Mult Type 1.a even 1 1 trivial 3.b odd 2 1 inner 7.d odd 6 1 inner 21.g even 6 1 inner ## Twists By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 168.2.u.a 16 3.b odd 2 1 inner 168.2.u.a 16 4.b odd 2 1 336.2.bc.f 16 7.b odd 2 1 1176.2.u.b 16 7.c even 3 1 1176.2.k.a 16 7.c even 3 1 1176.2.u.b 16 7.d odd 6 1 inner 168.2.u.a 16 7.d odd 6 1 1176.2.k.a 16 12.b even 2 1 336.2.bc.f 16 21.c even 2 1 1176.2.u.b 16 21.g even 6 1 inner 168.2.u.a 16 21.g even 6 1 1176.2.k.a 16 21.h odd 6 1 1176.2.k.a 16 21.h odd 6 1 1176.2.u.b 16 28.f even 6 1 336.2.bc.f 16 28.f even 6 1 2352.2.k.i 16 28.g odd 6 1 2352.2.k.i 16 84.j odd 6 1 336.2.bc.f 16 84.j odd 6 1 2352.2.k.i 16 84.n even 6 1 2352.2.k.i 16 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 168.2.u.a 16 1.a even 1 1 trivial 168.2.u.a 16 3.b odd 2 1 inner 168.2.u.a 16 7.d odd 6 1 inner 168.2.u.a 16 21.g even 6 1 inner 336.2.bc.f 16 4.b odd 2 1 336.2.bc.f 16 12.b even 2 1 336.2.bc.f 16 28.f even 6 1 336.2.bc.f 16 84.j odd 6 1 1176.2.k.a 16 7.c even 3 1 1176.2.k.a 16 7.d odd 6 1 1176.2.k.a 16 21.g even 6 1 1176.2.k.a 16 21.h odd 6 1 1176.2.u.b 16 7.b odd 2 1 1176.2.u.b 16 7.c even 3 1 1176.2.u.b 16 21.c even 2 1 1176.2.u.b 16 21.h odd 6 1 2352.2.k.i 16 28.f even 6 1 2352.2.k.i 16 28.g odd 6 1 2352.2.k.i 16 84.j odd 6 1 2352.2.k.i 16 84.n even 6 1 ## Hecke kernels This newform subspace is the entire newspace $$S_{2}^{\mathrm{new}}(168, [\chi])$$. ## Hecke characteristic polynomials $p$ $F_p(T)$ $2$ 1 $3$ $$1 - T^{2} - 7 T^{4} - 24 T^{5} + 22 T^{6} + 48 T^{7} + 10 T^{8} + 144 T^{9} + 198 T^{10} - 648 T^{11} - 567 T^{12} - 729 T^{14} + 6561 T^{16}$$ $5$ $$( 1 - 13 T^{2} + 93 T^{4} - 434 T^{6} + 1886 T^{8} - 10850 T^{10} + 58125 T^{12} - 203125 T^{14} + 390625 T^{16} )( 1 + 2 T^{2} - 39 T^{4} - 38 T^{6} + 836 T^{8} - 950 T^{10} - 24375 T^{12} + 31250 T^{14} + 390625 T^{16} )$$ $7$ $$( 1 - 2 T + 9 T^{2} - 10 T^{3} + 44 T^{4} - 70 T^{5} + 441 T^{6} - 686 T^{7} + 2401 T^{8} )^{2}$$ $11$ $$1 + 49 T^{2} + 1300 T^{4} + 22265 T^{6} + 252641 T^{8} + 1395328 T^{10} - 12534274 T^{12} - 440430082 T^{14} - 6222779240 T^{16} - 53292039922 T^{18} - 183514305634 T^{20} + 2471908667008 T^{22} + 54155842054721 T^{24} + 577496758741265 T^{26} + 4079956889737300 T^{28} + 18607741845578809 T^{30} + 45949729863572161 T^{32}$$ $13$ $$( 1 - 49 T^{2} + 1278 T^{4} - 23495 T^{6} + 341186 T^{8} - 3970655 T^{10} + 36500958 T^{12} - 236513641 T^{14} + 815730721 T^{16} )^{2}$$ $17$ $$1 - 42 T^{2} + 1059 T^{4} - 6894 T^{6} - 204407 T^{8} + 7559412 T^{10} - 49293810 T^{12} - 1407253056 T^{14} + 49740922386 T^{16} - 406696133184 T^{18} - 4117068305010 T^{20} + 182465828749428 T^{22} - 1425893651242487 T^{24} - 13898261949695406 T^{26} + 616996949226316899 T^{28} - 7071868715494839018 T^{30} + 48661191875666868481 T^{32}$$ $19$ $$( 1 + 3 T + 62 T^{2} + 177 T^{3} + 2049 T^{4} + 6000 T^{5} + 55390 T^{6} + 152826 T^{7} + 1217108 T^{8} + 2903694 T^{9} + 19995790 T^{10} + 41154000 T^{11} + 267027729 T^{12} + 438269523 T^{13} + 2916844622 T^{14} + 2681615217 T^{15} + 16983563041 T^{16} )^{2}$$ $23$ $$1 + 102 T^{2} + 6323 T^{4} + 248898 T^{6} + 6664873 T^{8} + 89132724 T^{10} - 1162854034 T^{12} - 108042764448 T^{14} - 3248548802990 T^{16} - 57154622392992 T^{18} - 325414235728594 T^{20} + 13194842036331636 T^{22} + 521932771402734313 T^{24} + 10310975788054808802 T^{26} +$$$$13\!\cdots\!83$$$$T^{28} 
+$$$$11\!\cdots\!18$$$$T^{30} +$$$$61\!\cdots\!61$$$$T^{32}$$ $29$ $$( 1 - 103 T^{2} + 6606 T^{4} - 293969 T^{6} + 9810626 T^{8} - 247227929 T^{10} + 4672298286 T^{12} - 61266802063 T^{14} + 500246412961 T^{16} )^{2}$$ $31$ $$( 1 + 24 T + 350 T^{2} + 3792 T^{3} + 33057 T^{4} + 243840 T^{5} + 1597150 T^{6} + 9593544 T^{7} + 54548420 T^{8} + 297399864 T^{9} + 1534861150 T^{10} + 7264237440 T^{11} + 30528833697 T^{12} + 108561740592 T^{13} + 310626288350 T^{14} + 660302738664 T^{15} + 852891037441 T^{16} )^{2}$$ $37$ $$( 1 + T - 60 T^{2} + 689 T^{3} + 2741 T^{4} - 33360 T^{5} + 198298 T^{6} + 1145242 T^{7} - 8840520 T^{8} + 42373954 T^{9} + 271469962 T^{10} - 1689784080 T^{11} + 5137075301 T^{12} + 47777986373 T^{13} - 153943584540 T^{14} + 94931877133 T^{15} + 3512479453921 T^{16} )^{2}$$ $41$ $$( 1 + 240 T^{2} + 27996 T^{4} + 2035728 T^{6} + 100303238 T^{8} + 3422058768 T^{10} + 79110004956 T^{12} + 1140025017840 T^{14} + 7984925229121 T^{16} )^{2}$$ $43$ $$( 1 - 5 T + 76 T^{2} - 341 T^{3} + 2710 T^{4} - 14663 T^{5} + 140524 T^{6} - 397535 T^{7} + 3418801 T^{8} )^{4}$$ $47$ $$1 - 158 T^{2} + 13171 T^{4} - 396034 T^{6} - 16747399 T^{8} + 2338864468 T^{10} - 73391270338 T^{12} - 1850417430616 T^{14} + 239524986298546 T^{16} - 4087572104230744 T^{18} - 358125987434202178 T^{20} + 25211123725919029972 T^{22} -$$$$39\!\cdots\!39$$$$T^{24} -$$$$20\!\cdots\!66$$$$T^{26} +$$$$15\!\cdots\!11$$$$T^{28} -$$$$40\!\cdots\!02$$$$T^{30} +$$$$56\!\cdots\!21$$$$T^{32}$$ $53$ $$1 + 265 T^{2} + 32968 T^{4} + 3168029 T^{6} + 288393293 T^{8} + 22350648928 T^{10} + 1460664537746 T^{12} + 90342579430370 T^{14} + 5160533915433520 T^{16} + 253772305619909330 T^{18} + 11525345782458595826 T^{20} +$$$$49\!\cdots\!12$$$$T^{22} +$$$$17\!\cdots\!73$$$$T^{24} +$$$$55\!\cdots\!21$$$$T^{26} +$$$$16\!\cdots\!88$$$$T^{28} +$$$$36\!\cdots\!85$$$$T^{30} +$$$$38\!\cdots\!21$$$$T^{32}$$ $59$ $$1 - 187 T^{2} + 11904 T^{4} - 267479 T^{6} + 18518765 T^{8} - 3131679552 T^{10} + 215589458578 T^{12} - 8824777678534 T^{14} + 393339242196864 T^{16} - 30719051098976854 T^{18} + 2612375297384172658 T^{20} -$$$$13\!\cdots\!32$$$$T^{22} +$$$$27\!\cdots\!65$$$$T^{24} -$$$$13\!\cdots\!79$$$$T^{26} +$$$$21\!\cdots\!24$$$$T^{28} -$$$$11\!\cdots\!07$$$$T^{30} +$$$$21\!\cdots\!41$$$$T^{32}$$ $61$ $$( 1 - 18 T + 331 T^{2} - 4014 T^{3} + 46777 T^{4} - 446148 T^{5} + 4203790 T^{6} - 34615152 T^{7} + 285617722 T^{8} - 2111524272 T^{9} + 15642302590 T^{10} - 101267119188 T^{11} + 647666904457 T^{12} - 3390209552214 T^{13} + 17053243913491 T^{14} - 56569371048378 T^{15} + 191707312997281 T^{16} )^{2}$$ $67$ $$( 1 - 7 T - 200 T^{2} + 865 T^{3} + 28259 T^{4} - 67468 T^{5} - 2804524 T^{6} + 1482940 T^{7} + 222922288 T^{8} + 99356980 T^{9} - 12589508236 T^{10} - 20291878084 T^{11} + 569450528339 T^{12} + 1167858217555 T^{13} - 18091676433800 T^{14} - 42424981237261 T^{15} + 406067677556641 T^{16} )^{2}$$ $71$ $$( 1 - 224 T^{2} + 21244 T^{4} - 988448 T^{6} + 37865158 T^{8} - 4982766368 T^{10} + 539845751164 T^{12} - 28694463598304 T^{14} + 645753531245761 T^{16} )^{2}$$ $73$ $$( 1 - 15 T + 312 T^{2} - 3555 T^{3} + 48909 T^{4} - 547104 T^{5} + 5639826 T^{6} - 55083990 T^{7} + 457851344 T^{8} - 4021131270 T^{9} + 30054632754 T^{10} - 212832756768 T^{11} + 1388929569069 T^{12} - 7369769513115 T^{13} + 47216278602168 T^{14} - 165710977786455 T^{15} + 806460091894081 T^{16} )^{2}$$ $79$ $$( 1 - 14 T - 152 T^{2} + 1448 T^{3} + 34763 T^{4} - 170492 T^{5} - 3816772 T^{6} + 3401582 T^{7} + 382142944 T^{8} + 268724978 T^{9} - 23820474052 T^{10} - 84059205188 T^{11} + 
1354021665803 T^{12} + 4455577665752 T^{13} - 36949293239192 T^{14} - 268854725806226 T^{15} + 1517108809906561 T^{16} )^{2}$$ $83$ $$( 1 + 141 T^{2} + 22278 T^{4} + 1798779 T^{6} + 189218258 T^{8} + 12391788531 T^{10} + 1057276475238 T^{12} + 46098592645029 T^{14} + 2252292232139041 T^{16} )^{2}$$ $89$ $$1 - 378 T^{2} + 69795 T^{4} - 8481534 T^{6} + 768803689 T^{8} - 53847736620 T^{10} + 2382276614382 T^{12} + 24847322779392 T^{14} - 11761219221673230 T^{16} + 196815643735564032 T^{18} +$$$$14\!\cdots\!62$$$$T^{20} -$$$$26\!\cdots\!20$$$$T^{22} +$$$$30\!\cdots\!09$$$$T^{24} -$$$$26\!\cdots\!34$$$$T^{26} +$$$$17\!\cdots\!95$$$$T^{28} -$$$$73\!\cdots\!98$$$$T^{30} +$$$$15\!\cdots\!61$$$$T^{32}$$ $97$ $$( 1 - 429 T^{2} + 87258 T^{4} - 11450019 T^{6} + 1191212138 T^{8} - 107733228771 T^{10} + 7724888001498 T^{12} - 357344990114541 T^{14} + 7837433594376961 T^{16} )^{2}$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9984939694404602, "perplexity": 11319.130785862515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206763.24/warc/CC-MAIN-20200922192512-20200922222512-00310.warc.gz"}
https://math.stackexchange.com/questions/3182067/surface-area-of-a-solid-of-revolution-request-for-a-source-for-rigourse-proof?noredirect=1
# Surface area of a solid of revolution (request for a source for a rigorous proof)

I know that this question has been asked and answered before, but none of the answers give a rigorous treatment of the problem. All the explanations boil down to some argument using infinitesimals and how we should keep the linear orders and neglect the higher orders in the integral. But why should we neglect the nonlinear terms? Who decides that it is okay to neglect nonlinear terms, but not okay to do the same for the linear terms? What I'm looking for is a rigorous proof of the formula for the surface area of a solid of revolution. Preferably not very advanced; a reference to a book would also work.

## 1 Answer

I think I have a decent outline of a proof. First prove the relationship between area under a curve and definite integration. Then prove rigorously how to convert surface area and volume problems to related definite integral problems.

It might be best to start with the simple area under the curve of a function. The area under a curve can be represented by a Riemann sum. The sum is easily calculated from the usual geometric formulae, and we know it will be less than the actual area under the curve. We can proceed from that in a couple of different ways. The simple geometric figures added up to calculate the area can shrink, requiring more of them. We keep track of the error term and note that it gets arbitrarily small. That should give you a rigorous justification for dropping the square terms of the error.

An alternative approach: one can construct a Riemann sum that deliberately overestimates the area. It can be shown that the overestimate and the underestimate converge using various convergence proofs.

Applying the same principles in three dimensions then gives you the usual formulas. Every definite integral is finding the area under a curve, so whatever 1D integral gives you a volume or a surface area is a problem that has been re-represented as the area under a curve. The function you end up integrating contains the geometric information of the surface of interest. So once you prove that the new function does in fact contain the desired information, the proofs for integrating a single-valued function apply.

A slightly different, more geometric approach:

Given a vector field $$\vec{E}(x,y,z)$$, the volume integral of its divergence is the surface integral of its component normal to the surface. Letting $$\hat{n}$$ be the outward normal to our surface, $$\int\int\int \nabla\cdot\vec{E} \ dV = \int\int \vec{E} \cdot \hat{n} \ dA.$$ So if the divergence of $$\vec{E}$$ is 1, then the integral gives us the entire volume of the region of integration. Here we note that if $$\vec{r}$$ is the position vector from the origin to the point of integration, then $$\nabla \cdot \vec{r}/3=1$$, which means our volume can be expressed as a surface integral $$\int\int \vec{r}\cdot \hat{n}/3 \ dA.$$ Our surface integral is $$\int\int 1 \ dA=\int \int \hat{n} \cdot \hat {n} \ dA$$. Working in the reverse direction from before, we can convert the area integral into a volume integral using the divergence theorem in reverse: Area = $$\int\int \int \nabla \cdot \hat{n} \ dV$$. Note the relationship between the surface-integral expression for the volume and the surface-integral expression for the surface area.
One is proportional to $$\hat{n} \cdot \hat{n}$$, the other is proportional to $$\vec{r} \cdot \hat{n}=r\cos{\theta}$$, where $$\theta$$ is the angle between $$\vec{r}$$ and $$\hat{n}$$. That cosine term represents the transformation needed to change the area integral into a volume integral; it carries information about the shape of the object being integrated. Also keep in mind how triple integrals work in cylindrical coordinates. Combining all this, I think you get a proof without the "ghost of departed quantities".

• "An alternative approach: one can construct a Riemann sum that deliberately overestimates the area. It can be shown that the overestimate and the underestimate converge using various convergence proofs." I think this is what I'm looking for. Can you please direct me to a source that handles this problem, or tell me what kind of textbook treats this subject? – yousef magableh Apr 12 at 5:31
• – TurlocTheRed Apr 12 at 14:08
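For concreteness, here is a minimal sketch (ours, not part of the question or the answer above) of how the frustum-sum argument makes the surface-of-revolution formula rigorous. Assume $$f \ge 0$$ with continuous $$f'$$ on $$[a,b]$$, and take the usual textbook definition of the surface area as the limit of the areas of inscribed frustum approximations. Partition $$[a,b]$$ as $$a = x_0 < x_1 < \dots < x_n = b$$ with mesh $$\delta$$. By the mean value theorem, $$f(x_i)-f(x_{i-1}) = f'(c_i)\,\Delta x_i$$ for some $$c_i \in (x_{i-1},x_i)$$, so the lateral area of the $$i$$-th frustum is $$A_i = \pi\bigl(f(x_{i-1})+f(x_i)\bigr)\sqrt{\Delta x_i^2+\bigl(f(x_i)-f(x_{i-1})\bigr)^2} = \pi\bigl(f(x_{i-1})+f(x_i)\bigr)\sqrt{1+f'(c_i)^2}\,\Delta x_i.$$ Comparing with the genuine Riemann sum of $$g(x)=2\pi f(x)\sqrt{1+f'(x)^2}$$ taken at the tags $$c_i$$, $$\Bigl|\sum_i A_i-\sum_i g(c_i)\,\Delta x_i\Bigr| \le \pi\sum_i\bigl|f(x_{i-1})+f(x_i)-2f(c_i)\bigr|\sqrt{1+f'(c_i)^2}\,\Delta x_i \le 2\pi M\,\omega_f(\delta)\,(b-a),$$ where $$M$$ bounds $$\sqrt{1+f'^2}$$ on $$[a,b]$$ and $$\omega_f$$ is the modulus of continuity of $$f$$. As $$\delta\to 0$$ the right-hand side tends to zero (this is exactly the "higher-order" term that the infinitesimal arguments drop, now with an explicit bound), while the Riemann sums converge because $$g$$ is continuous, so $$S=\lim_{\delta\to 0}\sum_i A_i = 2\pi\int_a^b f(x)\sqrt{1+f'(x)^2}\,dx.$$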
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.955528974533081, "perplexity": 175.5643843527521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541319511.97/warc/CC-MAIN-20191216093448-20191216121448-00468.warc.gz"}
https://www.studypug.com/us/en/math/trigonometry/find-the-exact-value-of-trigonometric-ratios
##### 2.4 Find the exact value of trigonometric ratios
Triangles, angles, sides and the hypotenuse: these are the basic parts of Trigonometry. In this chapter we will brush up on these parts once more. By now you are already familiar with the radian, the unit used to measure a standard angle; on a unit circle, the radian measure of an angle is equal to the length of its corresponding arc. In this chapter we will learn about the different types of angles, such as standard angles, reference angles and co-terminal angles. Standard angles, as you may recall from our previous chapter, are angles formed between a ray and the x axis: the x axis is referred to as the initial side and the ray is referred to as the terminal side. Take note that standard angles have their vertices at the center of a unit circle. We also discussed in that same chapter the reference angle associated with every standard angle, which is the acute angle formed by the terminal side of the standard angle and the x axis. The last kind of angle is the co-terminal angle, which, as the name suggests, is an angle that shares the same terminal side with another angle. We will get to learn more about these angles in chapters 8.1 to 8.3. In the next two parts of the chapter, we will learn about the general form of the different trigonometric functions. This will help us find the exact values of the trigonometric functions as well as use the ASTC rule in Trigonometry. In 8.6 to 8.8 we will review what we have learned before about the unit circle, such as the definition of the radian, the length of an arc, converting between angle measures, and the trigonometric ratios of angles in radians.
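For instance (a worked example of our own, not taken from the lesson text), here is how a reference angle and the ASTC rule give an exact value. Consider $$\sin 150^{\circ}$$: the terminal side of $$150^{\circ}$$ lies in quadrant II, so its reference angle is $$180^{\circ}-150^{\circ}=30^{\circ}$$. By the ASTC rule, sine is positive in quadrant II, so $$\sin 150^{\circ}=+\sin 30^{\circ}=\tfrac{1}{2}$$, while cosine is negative there, giving $$\cos 150^{\circ}=-\cos 30^{\circ}=-\tfrac{\sqrt{3}}{2}$$. In radians, the same angle is $$150^{\circ}\times\tfrac{\pi}{180^{\circ}}=\tfrac{5\pi}{6}$$.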
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8580590486526489, "perplexity": 168.36087037810543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542246.21/warc/CC-MAIN-20161202170902-00343-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.arxiv-vanity.com/papers/1102.1060/
# Polycyclic aromatic hydrocarbons in the dwarf galaxy IC 10

Wiebe D.S., Egorov O.V., Lozinskaya T.A.

###### Abstract

Infrared observations from the Spitzer Space Telescope archive are used to study the dust component of the interstellar medium in the IC 10 irregular galaxy. The dust distribution in the galaxy is compared to the distributions of Hα and [SII] emission, neutral hydrogen and CO clouds, and ionizing radiation sources. The distribution of polycyclic aromatic hydrocarbons (PAH) in the galaxy is shown to be highly non-uniform, with the mass fraction of these particles in the total dust mass reaching 4%. PAHs tend to avoid bright HII regions and correlate well with atomic and molecular gas. This pattern suggests that PAHs form in the dense interstellar gas. We propose that the significant decrease of the PAH abundance at low metallicity is observed not only globally (at the level of entire galaxies), but also locally (at least at the level of individual HII regions). We compare the distribution of the PAH mass fraction to the distribution of high-velocity features that we have detected earlier in wings of the Hα and [SII] lines over the entire available galaxy area. No conclusive evidence for shock destruction of PAHs in the IC 10 galaxy could be found.

Institute of Astronomy, Russian Academy of Sciences, ul. Pyatnitskaya 48, Moscow, 119017 Russia

Sternberg Astronomical Institute, Universitetskii pr. 13, Moscow, 119992 Russia

## 1 Introduction

Numerous infrared space observatories, most notably the Spitzer Space Telescope, as well as the MSX and ISO satellites, opened up a new era in studies of star formation in both nearby and distant galaxies. The so-called polycyclic aromatic hydrocarbons (PAH) [1] — macromolecules consisting of several tens or several hundred atoms, mostly carbon and hydrogen — are of special interest in relation to these observations. The absorption of an ultraviolet (UV) photon by such a molecule excites bending and vibrational modes, and, as a result, near-IR photons are emitted. PAH emission bands may account for a substantial fraction (up to several dozen percent) of the entire infrared luminosity of a galaxy [2].

Polycyclic aromatic hydrocarbons attract considerable interest for at least two reasons. First, their emission is related to the overall UV radiation field of a galaxy, making them a natural indicator of the star formation rate. Second, PAH molecules not only trace the state of the interstellar medium, but also play an important role in its physical and chemical evolution. The former aspect is interesting both for interpretation of available IR observations and for planning new near-IR space missions (JWST, SOFIA, SPICA, etc.). The latter aspect is of great importance for development of models for various objects ranging from protoplanetary disks to the interstellar medium of an entire galaxy.

Unfortunately, PAH formation and destruction mechanisms in the interstellar medium are still not understood. Such possible scenarios as the synthesis of PAHs in carbon-rich atmospheres of AGB and post-AGB stars or in dense molecular clouds, as well as the destruction of PAHs by shocks and UV radiation, are discussed extensively in the literature (see Sandstrom et al. [3] and references therein). The observed deficit of the emission of these macromolecules in metal-poor galaxies may be an important indication of the nature of the PAH evolutionary cycle. Note that, as shown by Draine et al.
[4], this deficit is related to a real lack of PAHs and not to a low efficiency of the excitation of their IR transitions. In galaxies with relatively high oxygen abundance the typical mass fraction of PAHs (the fraction of the total dust mass in particles consisting of at most one thousand atoms) is equal to about 4%, i.e., about the same as in the Milky Way. At lower metallicities the average decreases quite sharply, down to 1% and even lower. In order to clarify the cause of this transition and to identify the PAH formation and destruction mechanisms, as well as their relation to the physical parameters and the metallicity of a galaxy, Sandstrom et al. [3] analyzed in detail Spitzer observations of the dust component in the nearest irregular dwarf galaxy — the Small Magellanic Cloud (SMC). These authors found weak or no correlation between the PAH mass fraction and such SMC parameters as the location of carbon-rich asymptotic giant branch stars, supergiant HI shells and young supernova remnants, and the turbulent Mach number. They showed that the PAH mass fraction correlates with CO intensity and increases in regions of high dust and molecular gas surface density. Sandstrom et al. [3] concluded that the PAH mass fraction is high in regions of active star formation, but suppressed in bright HII regions.

The irregular dwarf galaxy IC 10 is analogous to the SMC in terms of a number of parameters. The average gas metallicity in IC 10 varies from 7.6 to 8.5 in different HII regions ([5, 6, 7] and references therein), i.e., it covers the very range where the transition from high to low PAH abundance occurs. The interstellar medium of this galaxy is characterized by a filamentary, multi-shell structure. In Hα and [SII] images IC 10 appears as a giant complex of multiple shells and supershells, arc- and ring-shaped features with sizes ranging from 50 to 800–1000 pc (see [8, 9, 10, 11, 12] and references therein). The HI distribution also shows numerous “holes”, supershells, and extended irregular features with rudiments of a spiral pattern [8].

The IC 10 galaxy is especially attractive for the analysis of the dust component because, unlike the SMC, it is a starburst galaxy. It is often classified as a BCD-type object because of its high Hα and IR luminosity [13]. The stellar population of IC 10 shows evidence of two star formation bursts. The first burst is at least 350 Myr old, while the second one occurred 4–10 Myr ago (see [14, 15, 16] and references therein). For the purpose of identifying the influence of shocks and/or UV radiation on dust, the anomalously large population of Wolf–Rayet (WR) stars in IC 10 is of special interest. IC 10 shows the highest surface density of WR stars among the known dwarf galaxies, comparable to the density of these stars in massive spiral galaxies [13, 15, 16, 17, 18, 19]. The high Hα and IR luminosity of IC 10, combined with the large number of WR stars, indicates that the last burst of star formation in this galaxy must have been short, but engulfed most of the galaxy. The anomalously high number of WR stars means that we are actually witnessing a short period immediately after the last episode of star formation.
The central, brightest region, associated with this last star formation episode, is located in the south-eastern part of the galaxy and includes the largest and densest HI cloud, a molecular cloud seen in CO lines, a conspicuous dust lane, and a complex of large emission nebulae, reaching 300–400 pc in size, with two shell nebulae HL111 and HL106 (according to the catalog of Hodge & Lee [20]), as well as young star clusters and about a dozen WR stars (see [8, 9, 10, 5, 7] and references therein). An H image of this central region of the galaxy is shown in Figure 1. According to Vacca et al. [16], the center of the last star formation episode is located near the object that was earlier classified as the WR star M24. (Hereafter letters R and M, followed by a number, refer to WR stars from the lists of Royer et al. [22] and Massey & Holmes [18], respectively). Vacca et al. [16] showed that M24 is actually a close group consisting of at least six blue stars, four of these stars being possible WR candidates. Lopez-Sanchez et al. [23] have conclusively identified two WR stars in this region. The HL111c nebula, surrounding M24, is one of the brightest HII regions in IC 10 and the brightest part of the HL111 shell. The neighborhood of M24 and the inner cavern of this shell host youngest (2–4 Myr old) star clusters in the galaxy [7, 14]. The shell nebula HL106 is located in the densest southern part of a complex, consisting of HI, CO, and dust clouds mentioned above. The ionizing radiation in this region must be generated by WR stars R2 and R10 and clusters 4-3 and 4-4 from the list of Hunter [14]. According to the above author, these clusters are a few times older than young clusters 4-1 and 4-2 in the HL111 region. From the south, adjacent to the HI and CO clouds and the dust lane is a unique object, the so-called synchrotron supershell [24]. Until recently, it was believed to have been formed as a result of multiple explosions of about a dozen supernovae [24, 25, 26, 27]. Lozinskaya & Moiseev [28] for the first time explained the formation of this synchrotron supershell by a hypernova explosion. The above features of IC 10 offer great possibilities for the study of the structure and physical characteristics of the dust component of a dwarf galaxy and the role that shocks play in its evolution. In this paper we analyze the connection between our data on IC 10, obtained earlier, and observations of this galaxy with the Spitzer telescope. In the following sections we describe the technique used to analyze these observations, present the obtained results and discuss them. In Conclusions we summarize the main findings of this work. ## 2 Observations ### 2.1 IR observations In this paper we use Spitzer archive observations of IC 10 obtained as a part of the program “A mid-IR Hubble atlas of galaxies” [29]. These data were downloaded from the Spitzer Heritage Archive. The MOPEX software was utilized to compose image mosaics and custom IDL procedures were used to analyze them. One of the most complicated issues in the analysis of such images is the choice of the background level. Sandstrom et al. [3] used a rather sophisticated procedure for this purpose, because the Small Magellanic Cloud occupies a large area on the sky. The angular size of IC 10 is small and one may therefore expect that background (mostly intragalactic and zodiacal) variations are small toward this galaxy. 
In this paper we set the background level in all the IR bands considered to the average brightness in areas located far from the star forming regions in IC 10. The adopted background values are listed in Table 1. Such a simple procedure for background estimation is acceptable for our purposes. It is interesting that background values estimated using SPOT (the Spitzer Planning Observations Tool), which are also shown in Table 1 (these are the values written in the FITS file headers), in some cases differ appreciably from the adopted values. This further supports the use of a background estimate extracted from the real data.

### 2.2 Optical and 21 cm line observations

To analyze possible effects of shocks on the dust component, we use results of Hα and [SII] observations made with the SCORPIO focal reducer and the scanning Fabry-Perot interferometer at the 6-m telescope of the Special Astrophysical Observatory of the Russian Academy of Sciences and described in detail by Lozinskaya et al. [12] and Egorov et al. [30].

To compare the dust component distribution with the large-scale structure and kinematics of HI in IC 10, we used 21-cm VLA data obtained by Wilcots and Miller [8]. Egorov et al. [30] reanalyzed the data cube of these observations, provided to us by the authors, in order to study the “local” structure and kinematics of HI in the neighborhood of the star forming complex and the brightest nebulae HL111 and HL106. We used the data with an angular resolution corresponding to a linear resolution of about 20 pc for the adopted distance of 800 kpc to the galaxy.

## 3 Results

The IR and Hα maps of the central region of IC 10 are shown in Figure 2. The interest in near-IR observations of galaxies is related to the fact that UV-excited PAH bands can be used as an indicator of the number of hot stars and hence as an indirect indicator of the star formation rate. Draine & Li [31] proposed to parameterize the UV radiation field of a galaxy as the sum of the “minimum” diffuse UV field (the lower cutoff of the starlight intensity distribution), filling up most of the galaxy’s volume, and a more intense UV field with a power-law distribution, which illuminates only a fraction of all the dust mass in the galaxy. The former quantity (the intensity of the diffuse field), expressed in units of the average UV radiation field in our Galaxy, characterizes the overall rate of star formation in the system studied, whereas the latter (the illuminated mass fraction) allows one to estimate the mass fraction of the galaxy involved in the ongoing star formation. Other parameters introduced are the 24-to-70 µm flux ratio, which characterizes the fraction of “hot” dust, and the dust luminosity fraction contributed by regions with high UV radiation intensity, i.e., the dust luminosity coming from photodissociation regions. Draine & Li [31] proposed a general algorithm for estimating the parameters of a galaxy from IR observations at 8 µm, 24 µm, 70 µm, and 160 µm (3.6 µm data are used to remove the starlight contribution). Unfortunately, this algorithm can be applied to IC 10 only partially, because 160 µm observations are not available for a substantial part of the galaxy. Results of the long-wavelength observations are shown in Figure 3. As is evident from the figure, the 160 µm data are mostly available for the outskirts of the galaxy and cover the star forming regions only partially (taking into account the angular resolution, which is equal to 40″ at 160 µm).
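For orientation, before turning to the fit results: the starlight-intensity distribution assumed in the Draine & Li dust models described above is usually written as a delta function plus a power law. The expression below is our own transcription of that standard form (symbols follow Draine & Li 2007, with $$U$$ the starlight intensity in units of the local interstellar radiation field and $$\gamma$$ the dust mass fraction exposed to the power-law field), so it should be checked against the original paper: $$\frac{dM_{\rm dust}}{dU} = (1-\gamma)\,M_{\rm dust}\,\delta(U-U_{\rm min}) + \gamma\,M_{\rm dust}\,\frac{\alpha-1}{U_{\rm min}^{1-\alpha}-U_{\rm max}^{1-\alpha}}\,U^{-\alpha}, \qquad U_{\rm min}\le U\le U_{\rm max},\ \alpha\neq 1.$$ In this notation, $$1-\gamma$$ of the dust mass sits in the diffuse field of intensity $$U_{\rm min}$$, while the fraction $$\gamma$$ samples the more intense radiation of star-forming regions.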
Nevertheless, we used the available data and the technique described by Draine & Li [31] to determine , , and , the mass fraction of dust contained in PAH (or, more precisely, in particles with less than 1000 carbon atoms). We then averaged the data of IR observations over the region of IC 10, covered by 160 m data, and inferred the following parameters for this region: , , and . A comparison of these values with results of Draine et al. [4] for 65 galaxies of different types shows that parameters of IC 10 differ appreciably from the typical values of the corresponding quantities. Comparable values are only found for two other Irr galaxies — NGC 2915 () and NGC 5408 (). Note that NGC 5408 was also shown to contain starburst regions [32]. The parameter of IC 10 is also close to that of NGC 5408, but is much lower than the corresponding value in Mrk 33, another irregular galaxy from the list of Draine & Li [31], where it exceeds 10%. (Draine & Li [31] also report high value for the Seyfert galaxy NGC 5195, however, the result for this object is more dependent on the adopted radiation field parameters than in the case of Mrk 33.) In the sample of Draine & Li [31] Mrk 33 is also the galaxy with the highest 24-to-70 m flux ratio () and the largest value of . The corresponding values for IC 10 are equal to about 0.7 and 23%, respectively. On the whole, as far as radiation-field parameters are concerned, IC 10 is quite a typical Irr starburst galaxy. The IC 10 galaxy has unusually high PAH mass fraction . Its value, inferred using the technique of Draine & Li [31], significantly exceeds the corresponding parameters for all the galaxies mentioned above (1.3% in Mrk 33, 2.4% in NGC 5195, 1.4% in NGC 2915, and 0.4% in NGC 5408). In the algorithm of Draine & Li [31] this parameter is inferred from the sole quantity—the ratio of the average 8 m flux to the sum of the 70 m and 160 m fluxes. Our estimate for this parameter is 0.19, which, according to Draine & Li [31], corresponds to . To check whether such an unusually high is obtained due to the lack of 160 m data, we performed a more detailed fit of the observed 5.8 m, 8 m, 24 m, and 70 m fluxes based on the models of Draine & Li [31] using local rather than average fluxes. This technique allowed us to obtain individual estimates for different regions of the galaxy. Our modeling showed that the final average and its distribution across the galaxy do depend appreciably on the choice of the passbands used in the fit. However, all the considered cases still yield a high 8 m flux-averaged PAH fraction ranging from 2.9% to 4.5%. Note that the distribution of in the galaxy is quite irregular. Along with regions of high there are vast areas, where is less than 1%. As we mentioned above, the particular features of the distribution of depend on the choice of passbands used in the fit of photometric data. Hereafter we use the 8 m to 24 m () flux ratio as the local indicator of the PAH fraction. A number of authors and, in particular, Sandstrom et al. [3], pointed out the possibility of using the above flux ratio for this purpose (note, however, that the correlation between and found by Sandstrom et al. [3] is rather weak). The halftones in Figure 4 (left panel) show the distribution of this flux ratio in the central star forming region of IC 10, and contours correspond to the distribution of H intensity. Lighter tones indicate low ratios and, correspondingly, low , whereas darker tones indicate higher . 
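Since the 8 µm to 24 µm flux ratio is used below as the local PAH indicator, here is a minimal sketch of how such a ratio map can be assembled from the background-subtracted mosaics. This is our illustration in Python with astropy, not the authors' MOPEX/IDL pipeline; the file names, background levels, and noise-estimation region are placeholders, and the two mosaics are assumed to be already registered on a common pixel grid and convolved to a common angular resolution.

```python
import numpy as np
from astropy.io import fits

# Hypothetical file names; alignment and PSF matching are assumed done.
F8_FILE, F24_FILE = "ic10_irac4_8um.fits", "ic10_mips_24um.fits"
BKG_8, BKG_24 = 5.2, 22.0          # placeholder background levels, MJy/sr

img8 = fits.getdata(F8_FILE).astype(float) - BKG_8
img24 = fits.getdata(F24_FILE).astype(float) - BKG_24

# Crude noise estimate from an (assumed) source-free corner of each mosaic;
# a real analysis would use proper uncertainty maps.
sig8 = np.nanstd(img8[:50, :50])
sig24 = np.nanstd(img24[:50, :50])

# Keep only pixels detected in both bands with positive 24 um flux.
good = (img8 > 3 * sig8) & (img24 > 3 * sig24) & (img24 > 0)

ratio = np.full(img8.shape, np.nan)
ratio[good] = img8[good] / img24[good]

# Write the F8/F24 map with the 8 um WCS header for later comparison
# with the H-alpha, HI and CO maps.
fits.writeto("ic10_f8_over_f24.fits", ratio, fits.getheader(F8_FILE),
             overwrite=True)
```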
A wide semi-ring near the HL111 and HL106 regions is immediately apparent, which can be traced by low ratio, weak H intensity, and the locations of WR stars (we connected them by lines to emphasize the location of the semi-ring). It might be supposed that the low PAH abundance in this region is caused by the destruction of these particles by the ultraviolet radiation of WR stars, however, further studies are needed for a more definite conclusion. The relation between the PAH abundance and star formation tracers is apparent not only in this semi-ring, but in the entire considered region. In the right panel of Figure 4 we show the correlation between the flux ratio and H intensity. It is evident from the figure that (and hence ) decreases with increasing H intensity. This may indicate that factors operating in the vicinity of the region of ongoing star formation, e.g., the UV radiation, have a destructive effect on PAH particles. The flux ratio approaches unity in regions with less intense H flux, and this corresponds to values of about 2–3% [3]. PAH particles may also be destroyed by shocks. Therefore, generally speaking, the above-mentioned low -ratio “semi-ring” located near HL111 and HL106 might have formed due to the destructive effect not only from the UV radiation of the WR stars located within it, but also from shocks produced by winds of these stars. The primary shock indicators are high-velocity gas motions. A detailed study of the ionized gas kinematics in IC 10 by Egorov et al. [30] indeed revealed weak high-velocity features in wings of H and [SII]Å lines in the inner cavern of the HL111 nebula and in other regions of the complex of violent star formation. In particular, such features were found in the vicinity of two WR stars located in the “semi-ring” mentioned above. We reanalyzed the results of observations of the galaxy in both lines made with the Fabry–Perot interferometer at the 6-m telescope of the Special Astrophyscial Observatory of the Russian Academy of Sciences in order to reveal possible anticorrelation between high-velocity gas motions and . We computed H and [SII]Å line profiles for several regions of high and low ratio. Weak high-velocity features at a level of about 2–6% of the peak intensity are found in wings of both lines, and this coincidence confirms the reality of corresponding motions. However, these high-velocity features show up both in regions with high and low ratios. To obtain more definitive results, we mapped the distributions of velocities and intensities of high-velocity features in blue and red wings of H line in the entire available field of the galaxy and in the central star forming region. The resulting maps indicate that high-velocity features in blue and red wings of the line show up in ranges from 50–60 to 100–110 km/s and 50 to 100 km/s, respectively, relative to the velocity of the line peak. In Figure 5 we compare the flux ratio to the intensities of high-velocity features in blue (left panel) and red (right panel) wings of H line. If the PAH abundance depends on the presence of shocks, one would expect the intensity of high-velocity features to anticorrelate with . No such anticorrelation can be seen in the figure, albeit a certain pattern does emerge: higher intensities of high-velocity features in both the blue and red wings tend to “avoid” regions with the highest ratios, although they are observed in nearby, slightly offset, locations. 
Nonetheless, the results reported here do not allow us to conclusively associate the destruction of PAH particles with shocks produced by stellar winds and/or supernova explosions. Arkhipova et al. [7] determined metallicities for a number of HII regions in IC 10. It is interesting to relate these metallicities to PAH content in order to see whether decreases with decreasing metallicity within the galaxy in the same way as it does when we compare different Irr galaxies. Figure 6 shows the ratios as a function of oxygen abundance for HII regions from the list of Arkhipova et al. [7]. In some cases two data points in this plot correspond to the same HII region. Metallicities of these HII regions inferred from long-slit and MPFS observations differ slightly, possibly due to different integration areas. We show only the data points with metallicity errors smaller than or equal to 0.05 dex. It is evident from Figure 6 that the ratio indeed decreases with decreasing oxygen abundance, although the turn-off value of is about rather than 8.0–8.1 as found in earlier works. This result indicates that the metallicity dependence of the PAH abundance shows up not only globally (at the level of entire galaxies), but also locally (at least at the level of individual HII regions). As we pointed out in the Introduction, the only important difference between the two nearest Irr galaxies IC 10 and SMC is that in IC 10 we observe the interstellar medium immediately after a violent burst of star formation that has encompassed most of the galaxy. It is therefore of interest to compare the data obtained in this work with results of a detailed study of the dust component in the SMC performed by Sandstrom et al. [3]. (Recall that we use the flux ratio to measure the PAH mass fraction and draw our conclusions based on this ratio.) IC 10, like the SMC, shows strong variations from one region to another with PAH avoiding bright HII regions. The lower spatial resolution prevents us from concluding that PAHs are located in the shells of bright nebulae, however, the large-scale map (Figure 4) shows clearly that the H brightness anticorrelates with the ratio. Sandstrom et al. [3] found weak or no correlation between and the location of HI supershells in the SMC. The IC 10 galaxy, on the contrary, shows well-correlated (nearly coincident) extended shell-like structures in maps of flux ratio and 21-cm HI emission (Figure 7). A correlation between the 8 m IR emission and the extended HI shell is also apparent in the middle panel of Figure 2. The large-scale correlation between 8 m brightness and extended arcs and HII and HI shells in IC 10 and in a number of other starburst Irr galaxies has been known since long (Hunter et al. [33]). The above authors attributed all the 8 m flux solely to PAH emission and concluded that correlates with brightness of giant shells and supershells. The correlation between the flux ratio and the 21-cm line emission leads us to the same conclusion in a somewhat more straightforward way. We believe this is a real correlation, as the stellar wind from numerous young star clusters and WR stars located inside extended shells shapes observed shell-like structure of both the gas and dust in IC 10. Another implication is that PAH molecules do not undergo significant destruction during the sweep-up of giant shells (see also [33]). 
The brightest extended CO cloud in the galaxy and the dust lane that coincides with it are located just to the south of the complex of ongoing star formation and are immediately adjacent to the HL106 nebula. Figure 8 shows the structure of this cloud according to data of Leroy et al. [10] superimposed on the distribution of flux ratio. (We obtained a composite map of the entire CO cloud by combining maps of its individual components: clouds B11a, B11b, B11c, and B11d in Figure 7 from [10].) The total gas column density (HI) toward the dense cloud, discussed here, amounts to cm [8]. According to CO emission observations, the column density of neutral and molecular hydrogen (H) in this direction is about cm [10, 34]. It follows from Figure 8 that three regions with highest ratios have exactly the same locations and sizes as the CO clouds B11a, B11c, and B11d from [10]. The fourth CO cloud—B11b—coincides with the bright shell nebula HL106. Egorov et al. [30] showed that the optical nebula HL106 is not located behind a dense cloud layer, but is partly embedded in it. Egorov et al. [30] also concluded that B11b is physically associated with the optical nebula. First, the radial velocity of the B11b cloud ( km/s [10]) coincides with the velocity of ionized gas in HL106 determined by Egorov et al. [30]. But most importantly, the brightest southern arc HL106 exactly outlines the boundaries of the B11b cloud. Such an ideal coincidence cannot be accidental and hints to a physical relation between the thin ionized shell and the molecular cloud B11b. The HL106 shell, which exactly bounds the B11b cloud, formed due to photodissociation of molecular gas at the boundary of this cloud as well as due to ionization by the UV radiation of WR stars R2 and R10 and clusters 4-3 and 4-4. Hunter [14] estimates these clusters to be about 20-30 Myr old. The presence of a bright ionizing nebula that surrounds B11b explains low flux ratio toward this cloud. We can thus conclude with certainty that highest ratios and, consequently, highest PAH fractions are indeed found toward dense CO clouds. The only exception is the region of the brightest nebula HL45. The evident drop of the flux ratio observed in this area may be due to destruction of PAHs by strong UV radiation. ## 4 Discussion and conclusions In the Introduction we have already emphasized the importance of understanding the evolution of PAHs and their relation to other galaxy components. In this work we compare results of infrared observations of IC 10 with other available observations in order to identify possible indications to the origin of PAH. One of the key PAH properties to be explained by their evolutionary model is the low content of these particles in metal-poor galaxies. Two hypotheses are mainly discussed in the literature — less efficient formation and more efficient destruction of PAHs in metal-poor systems. Galliano et al. [35] argue that the dependence of on the metallicity of a host galaxy can be naturally explained if we assume that PAH particles are synthesized in the atmospheres of long-lived AGB stars. In this case low metal and PAH abundances are due to the slower stellar evolution. However, if this assumption were true, the PAH fraction in the IC 10 galaxy would be, first, low, and second, uniformly distributed throughout the galaxy. We show the pattern to be exactly the opposite — (given by the flux ratio) varies appreciably across the galaxy and amounts almost to 4% in some areas. 
From the viewpoint of spatial localization, PAHs correlate with both dense-gas indicators studied (HI and CO clouds). The PAH mass fraction decreases only in the neighborhood of HII regions and WR stars, which is consistent with the hypothesis that these particles are destroyed by UV radiation and shocks (although we failed to find convincing evidence for PAH destruction by shocks). On the whole, the pattern observed in IC 10 is qualitatively consistent with the assumption that PAH particles form in situ in molecular clouds. In this case the currently high PAH fraction in IC 10 may be related to a recent burst of star formation, during which PAH particles formed in dense gas and did not have enough time to be destroyed anywhere except in the immediate neighborhood of the UV radiation sources. If this interpretation is correct, the metallicity dependence of the PAH fraction should show up until PAH particles begin to be destroyed by ultraviolet radiation, and it reflects the peculiarities of their formation rather than their subsequent evolution. We further plan to verify our conclusions by analyzing observational results on other dwarf galaxies.

## 5 Acknowledgments

This work was supported by the Russian Foundation for Basic Research (grants nos. 10-02-00091 and 10-02-00231) and the Russian Federal Agency on Science and Innovation (contract no. 02.740.11.0247). O.V. Egorov thanks the Dynasty Foundation of Noncommercial Programs for financial support. The authors are grateful to Suzanne Madden and Tara Parkin for useful discussions.

## References

• [1] Tielens A.G.G.M. Ann. Rev. Astron. Astroph. 46, 289 (2008)
• [2] Smith J.D.T., Draine B.T., Dale D.A., Moustakas J. et al. Astrophys. J. 656, 770 (2007)
• [3] Sandstrom K.M., Bolatto A.D., Draine B., Bot C., Stanimirovic S. Astrophys. J. 715, 701 (2010)
• [4] Draine B.T., Dale D.A., Bendo G. et al. Astrophys. J. 663, 866 (2007)
• [5] Lozinskaya T.A., Egorov O.V., Moiseev A.V., Bizyaev D.V. Astron. Lett. 35, 730 (2009)
• [6] Magrini L., Gonçalves D.R. Mon. Not. Roy. Astron. Soc. 398, 280 (2009)
• [7] Arkhipova V.P., Egorov O.V., Lozinskaya T.A., Moiseev A.V. Pis'ma Astron. Zh. 37, 83 (2011)
• [8] Wilcots E.M., Miller B.W. Astron. J. 116, 2363 (1998)
• [9] Gil de Paz A., Madore B.F., Pevunova O. Astrophys. J. Sup. Ser. 147, 29 (2003)
• [10] Leroy A., Bolatto A., Walter F., Blitz L. Astrophys. J. 643, 825 (2006)
• [11] Chyzy K.T., Knapik J., Bomans D.J., Klein U., Beck R., Soida M., Urbanik M. Astron. Astrophys. 405, 513 (2003)
• [12] Lozinskaya T.A., Moiseev A.V., Podorvanyuk N.Yu., Burenkov A.N. Astron. Lett. 34, 217 (2008)
• [13] Richer M.G., Bullejos A., Borissova J. et al. Astron. Astrophys. 370, 34 (2001)
• [14] Hunter D. Astrophys. J. 559, 225 (2001)
• [15] Massey P., Olsen K., Hodge P., Jacoby G., McNeill R., Smith R., Strong Sh. Astron. J. 133, 2393 (2007)
• [16] Vacca W.D., Sheehy C.D., Graham J.R. Astrophys. J. 662, 272 (2007)
• [17] Massey P., Armandroff T.E., Conti P.S. Astron. J. 103, 1159 (1992)
• [18] Massey P., Holmes S. Astrophys. J. 580, L35 (2002)
• [19] Crowther P.A., Drissen L., Abbott J.B., Royer P., Smartt S.J. Astron. Astrophys. 404, 483 (2003)
• [20] Hodge P., Lee M.G. Proc. Astron. Soc. Pacif. 102, 26 (1990)
• [21] Tikhonov N.A., Galazutdinova O.A. Astron. Lett. 35, 748 (2009)
• [22] Royer P., Smartt S.J., Manfroid J., Vreux J. Astron. Astrophys. 366, L1 (2001)
• [23] Lopez-Sanchez A.R., Mesa-Delgado A., Lopez-Martin L., Esteban C. Mon. Not. Roy. Astron. Soc., in press (arXiv:1010.1806)
• [24] Yang H., Skillman E.D. Astron. J. 106, 1448 (1993)
• [25] Bullejos A., Rosado M. Rev. Mex. Astron. Astrof. (Ser. de Conf.) 12, 254 (2002)
• [26] Rosado M., Valdez-Gutiérrez M., Bullejos A., Arias L., Georgiev L., Ambrocio-Cruz P., Borissova J., Kurtev R. ASP Conf. Ser. 282, 50 (2002)
• [27] Thurow J.C., Wilcots E.M. Astron. J. 129, 745 (2005)
• [28] Lozinskaya T.A., Moiseev A.V. Mon. Not. Roy. Astron. Soc. 381, L26 (2007)
• [29] Fazio G., Pahre M. Spitzer Proposal ID 69 (2004)
• [30] Egorov O.V., Lozinskaya T.A., Moiseev A.V. Astron. Rep. 87, 277 (2010)
• [31] Draine B.T., Li A. Astrophys. J. 657, 810 (2007)
• [32] Karachentsev I.D., Sharina M.E., Dolphin A.E., Grebel E.K., Geisler D., Guhathakurta P., Hodge P.W., Karachentseva V.E., Sarajedini A., Seitzer P. Astron. Astrophys. 385, 21 (2002)
• [33] Hunter D.H., Elmegreen B.G., Martin E. Astron. J. 132, 801 (2006)
• [34] Bolatto A.D., Jackson J.M., Wilson C.D., Moriarty-Schieven G. Astrophys. J. 532, 909 (2000)
• [35] Galliano F., Dwek E., Chanial P. Astrophys. J. 672, 214 (2008)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8925583958625793, "perplexity": 2350.0468001959543}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703505861.1/warc/CC-MAIN-20210116074510-20210116104510-00673.warc.gz"}
http://www.ams.org/mathscinet/msc/msc2010.html?t=55R50&btn=Current
55-XX Algebraic topology

55Rxx Fiber spaces and bundles [See also 18F15, 32Lxx, 46M20, 57R20, 57R22, 57R25]
55R05 Fiber spaces
55R10 Fiber bundles
55R12 Transfer
55R15 Classification
55R20 Spectral sequences and homology of fiber spaces [See also 55Txx]
55R25 Sphere bundles and vector bundles
55R35 Classifying spaces of groups and $H$-spaces
55R37 Maps between classifying spaces
55R40 Homology of classifying spaces, characteristic classes [See also 57Txx, 57R20]
55R45 Homology and homotopy of $B{\rm O}$ and $B{\rm U}$; Bott periodicity
55R50 Stable classes of vector space bundles, $K$-theory [See also 19Lxx] {For algebraic $K$-theory, see 18F25, 19-XX}
55R55 Fiberings with singularities
55R60 Microbundles and block bundles [See also 57N55, 57Q50]
55R65 Generalizations of fiber spaces and bundles
55R70 Fibrewise topology
55R80 Discriminantal varieties, configuration spaces
55R91 Equivariant fiber spaces and bundles [See also 19L47]
55R99 None of the above, but in this section
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8963502645492554, "perplexity": 24781.89550949481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657133455.98/warc/CC-MAIN-20140914011213-00121-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://www.emathematics.net/g5_ratios.php?def=find
Ratios and proportions

Equivalent ratios: find the missing number

Fill in the missing number to complete the proportion.

1 to ___ = 4 to 20

Write 4 to 20 as a fraction. Then write an equivalent fraction with 1 as the numerator.

$\frac{4}{20}=\frac{4\;\div\;4}{20\;\div\;4}=\frac{1}{5}$

So 1 to 5 = 4 to 20.

Fill in the missing number to complete the proportion.

5 to 7 = ___ to 70
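The same fill-in-the-blank procedure generalizes to any proportion with one unknown term. A small sketch, not part of the original exercise page; the function name and the use of Python's fractions module are my own choices:

```python
from fractions import Fraction

def solve_proportion(a, b, c, d):
    """Solve a : b = c : d, where exactly one argument is None (the blank)."""
    if b is None:
        return Fraction(a) * d / c   # a/b = c/d  =>  b = a*d/c
    if c is None:
        return Fraction(a) * d / b   # c = a*d/b
    if a is None:
        return Fraction(b) * c / d   # a = b*c/d
    if d is None:
        return Fraction(b) * c / a   # d = b*c/a
    raise ValueError("exactly one argument must be None")

# The two proportions from the exercise:
print(solve_proportion(1, None, 4, 20))   # -> 5,  so 1 to 5 = 4 to 20
print(solve_proportion(5, 7, None, 70))   # -> 50, so 5 to 7 = 50 to 70
```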
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9495341777801514, "perplexity": 4376.538657032178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203493.88/warc/CC-MAIN-20190324210143-20190324232143-00084.warc.gz"}
https://www.physicsforums.com/threads/logarithms-in-one-log.893610/
# Homework Help: Logarithms in one log

1. Nov 16, 2016

### rashida564

1. The problem statement, all variables and given/known data
Can we put 3log2(x) - 4log(y) + log2(5) into one logarithm? I tried it all the ways I know, but I can't find the solution.

2. Relevant equations
loga(b) = logx(b)/logx(a)
log(b*a) = log(b) + log(a)

3. The attempt at a solution
log2(5x^3) - log(y^4)
log2(5x^3) - log2(y^4)/log2(10)

2. Nov 16, 2016

### Staff: Mentor

There are also formulas for $\log_2 a - \log_2 b$ and $\log_2 a^c$ which you need here.

3. Nov 16, 2016

### rashida564

I don't know what I should do.

4. Nov 16, 2016

### Staff: Mentor

Well, you have log2(5x^3) - log(y^4), which I read as $\log_2 5x^3 - \log_{10} y^4 = \log_2 5x^3 - \frac{1}{\log_2 10}\log_2 y^4$. Now you can use $c \cdot \log_2 a = \log_2 a^c$ and $\log_2 a - \log_2 b = \log_2 \frac{a}{b}$ to write it all in a single $\log_2$ expression. (Of course with a constant $c=\log_2 10$.)

5. Nov 16, 2016

### rashida564

log2(5x^3/((log2y^4)^(1/log2(10))))
Then how can I write it as a single log? I see three logs.

6. Nov 16, 2016

### Staff: Mentor

You cannot get rid of the constant $\log_2 10$ if you are dealing with two different bases. Are you sure they are meant to be different? And you have one $\log_2$ too many in the application of the formulas.

7. Nov 16, 2016

### rashida564

Sorry for that.
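For completeness, the single-logarithm form the thread is driving at can be written out by following the mentor's hints. This summary is my own rather than a post from the thread, and it assumes the two bases really are meant to be different:

$$3\log_2 x - 4\log_{10} y + \log_2 5 = \log_2(5x^3) - \frac{4}{\log_2 10}\,\log_2 y = \log_2\!\left(\frac{5x^3}{y^{\,4\log_{10} 2}}\right),$$

using $c\log_2 a = \log_2 a^c$, $\log_2 a - \log_2 b = \log_2\frac{a}{b}$, and $\frac{1}{\log_2 10} = \log_{10} 2$.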
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523740410804749, "perplexity": 1985.1540045379154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591455.76/warc/CC-MAIN-20180720002543-20180720022543-00322.warc.gz"}
https://www.scribd.com/document/68191340/Statistics-Formulas
# statistics formulas

Below are listed the main formulas used in statistics:

a> Arithmetic mean formulas:

1> The mean for ungrouped (individual) data is $\bar{x} = \frac{\sum x}{N}$, where N is the number of observations.

2> The arithmetic mean for grouped (discrete) data is $\bar{x} = \frac{\sum fx}{\sum f}$.

3> The arithmetic mean for continuous data is calculated using:
Direct method: $\bar{x} = \frac{\sum fX}{\sum f}$, with X the class mid-point.
Deviation method: $\bar{x} = A + \frac{\sum fd}{\sum f}$.
Step deviation method: $\bar{x} = A + \frac{\sum fd'}{\sum f}\times i$,
where d = X − A, d' = (X − A)/i, A = assumed mean and i = height (width) of the class.

b> Median formulas:

1> Median for ungrouped data:
i> Median = the ((N+1)/2)'th observation if N is odd.
ii> Median = the average of the (N/2)'th and (N/2 + 1)'th observations if N is even.

2> Median for grouped data (continuous frequency distribution):
Median = $l + \frac{\frac{N}{2} - c.f.}{f}\times h$,
where l = lower limit of the median class, f = frequency of the median class, c.f. = cumulative frequency of the pre-median class, h = size of the median class, and N = total number of items.

c> Quartiles formulas:

1> First quartile:
For individual and grouped (discrete) data: $Q_1$ = the ((N+1)/4)'th observation in the data arranged in ascending or descending order.
For continuous data: $Q_1$ lies in the class containing the (N/4)'th item, and the exact first quartile is
$Q_1 = l + \frac{\frac{N}{4} - c.f.}{f}\times h$,
where l = lower limit of the first-quartile class, f = frequency of the first-quartile class, c.f. = cumulative frequency of the pre-first-quartile class, and h = size of the first-quartile class.

2> Third quartile:
For individual and grouped (discrete) data: $Q_3$ = the (3(N+1)/4)'th observation in the data arranged in ascending or descending order.
For continuous data: $Q_3$ lies in the class containing the (3N/4)'th item, and the exact third quartile is
$Q_3 = l + \frac{\frac{3N}{4} - c.f.}{f}\times h$,
with the symbols defined for the third-quartile class in the same way.

d> Mode formulas:

1> Mode = the value with the highest frequency, for ungrouped data and discrete frequency distributions.

2> For a continuous frequency distribution:
Mode = $l + \frac{f_1 - f_0}{2f_1 - f_0 - f_2}\times h$,
where l = lower limit of the modal class, $f_1$ = frequency of the modal class, $f_0$ = frequency of the class preceding the modal class, $f_2$ = frequency of the class succeeding the modal class, and h = class width.

3> If two or more values share the highest frequency, the mode is estimated from the empirical relation Mode = 3 Median − 2 Mean.

e> Deviation formulas:

1> Mean deviation:
Mean deviation from the mean: for ungrouped data $MD = \frac{\sum |x - \bar{x}|}{N}$; for grouped (continuous and discrete) data $MD = \frac{\sum f|x - \bar{x}|}{\sum f}$. The mean deviation from the median or the mode is obtained in the same way, with $\bar{x}$ replaced by the median or the mode.

2> Quartile deviation:
Inter-quartile range = $Q_3 - Q_1$.
Quartile deviation (semi-inter-quartile range) = $\frac{Q_3 - Q_1}{2}$.
Coefficient of quartile deviation = $\frac{Q_3 - Q_1}{Q_3 + Q_1}$.

3> Range:
Range = largest item − smallest item = L − S.
Coefficient of range = $\frac{L - S}{L + S}$.

4> Standard deviation:
For ungrouped data: $\sigma = \sqrt{\frac{\sum (x - \bar{x})^2}{N}}$ (where $\bar{x}$ is the actual mean), or $\sigma = \sqrt{\frac{\sum d^2}{N} - \left(\frac{\sum d}{N}\right)^2}$ with d = x − A (A is the assumed mean).
For grouped data: $\sigma = \sqrt{\frac{\sum f(x - \bar{x})^2}{\sum f}}$ (actual mean), or $\sigma = \sqrt{\frac{\sum fd^2}{\sum f} - \left(\frac{\sum fd}{\sum f}\right)^2}$ (assumed mean).
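The grouped-data median and mode formulas above are easy to sanity-check numerically. The sketch below uses an invented frequency table; it is only an illustration of the formulas, not part of the original formula sheet:

```python
# Median and mode for a grouped (continuous) frequency distribution,
# following the formulas listed above. The class table is made-up data.
classes = [(0, 10, 5), (10, 20, 8), (20, 30, 12), (30, 40, 9), (40, 50, 6)]  # (lower, upper, frequency)
N = sum(f for _, _, f in classes)

# Median: locate the class containing the (N/2)-th item, then
# median = l + (N/2 - c.f.) / f * h
cf = 0
for lo, up, f in classes:
    if cf + f >= N / 2:
        median = lo + (N / 2 - cf) / f * (up - lo)
        break
    cf += f

# Mode: the modal class has the highest frequency, then
# mode = l + (f1 - f0) / (2*f1 - f0 - f2) * h
k = max(range(len(classes)), key=lambda j: classes[j][2])
lo, up, f1 = classes[k]
f0 = classes[k - 1][2] if k > 0 else 0
f2 = classes[k + 1][2] if k + 1 < len(classes) else 0
mode = lo + (f1 - f0) / (2 * f1 - f0 - f2) * (up - lo)

print(f"N = {N}, median = {median:.2f}, mode = {mode:.2f}")
```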
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8808777928352356, "perplexity": 5424.434414896877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948530668.28/warc/CC-MAIN-20171213182224-20171213202224-00536.warc.gz"}
http://link.springer.com/article/10.1007/s00703-007-0276-1
Volume 99, Issue 1-2, pp 105-128. Date: 12 Nov 2007

# The impact of the PBL scheme and the vertical distribution of model layers on simulations of Alpine foehn

## Summary

This paper investigates the influence of the planetary boundary-layer (PBL) parameterization and the vertical distribution of model layers on simulations of an Alpine foehn case that was observed during the Mesoscale Alpine Programme (MAP) in autumn 1999. The study is based on the PSU/NCAR MM5 modelling system and combines five different PBL schemes with three model layer settings, which mainly differ in the height above ground of the lowest model level (z₁). Specifically, z₁ takes values of about 7 m, 22 m and 36 m, and the experiments with z₁ = 7 m are set up such that the second model level is located at z = 36 m. To assess if the different model setups have a systematic impact on the model performance, the simulation results are compared against wind lidar, radiosonde and surface measurements gathered along the Austrian Wipp Valley. Moreover, the dependence of the simulated wind and temperature fields at a given height (36 m above ground) on z₁ is examined for several different regions. Our validation results show that at least over the Wipp Valley, the dependence of the model skill on z₁ tends to be larger and more systematic than the impact of the PBL scheme. The agreement of the simulated wind field with observations tends to benefit from moving the lowest model layer closer to the ground, which appears to be related to the dependence of lee-side flow separation on z₁. However, the simulated 2 m-temperatures are closest to observations for the intermediate z₁ of 22 m. This is mainly related to the fact that the simulated low-level temperatures decrease systematically with decreasing z₁ for all PBL schemes, turning a positive bias at z₁ = 36 m into a negative bias at z₁ = 7 m. The systematic z₁-dependence is also observed for the temperatures at a fixed height of 36 m, indicating a deficiency in the self-consistency of the model results that is not related to a specific PBL formulation. Possible reasons for this deficiency are discussed in the paper. On the other hand, a systematic z₁-dependence of the 36-m wind speed is encountered only for one out of the five PBL schemes. This turns out to be related to an unrealistic profile of the vertical mixing coefficient.

Correspondence: Günther Zängl, Meteorologisches Institut der Universität München, 80333 München, Germany
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9008215069770813, "perplexity": 1419.861133784492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657133564.63/warc/CC-MAIN-20140914011213-00341-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://tex.stackexchange.com/questions/98811/switch-on-number-of-arguments-given-to-a-macro/98812
# Switch on number of arguments given to a macro

I want to define a macro that does different things depending on the number of (optional) arguments given to it. Is this possible? How?

\documentclass{standalone}
\usepackage{xparse}
\DeclareDocumentCommand{\MyCommand}{ g g g }{
  % if 1 parameter
  thing for 1 parameter #1
  % else if 2 parameters
  thing for 2 parameters #1 #2
  % else if 3 parameters
  thing for 3 parameters #1 #2 #3
}
\begin{document}
\MyCommand{one}
\MyCommand{one}{two}
\MyCommand{one}{two}{three}
\end{document}

- I assume you know this is not recommended, as while we provide the g argument in xparse it's non-standard LaTeX syntax. – Joseph Wright Feb 18 at 21:40
- right, yeah, it should probably be { o o o } I guess – flamingpenguin Feb 18 at 21:46

The short answer is yes. You can use \IfNoValueTF as a switch. It's probably better to go with o instead of g. I think the code is actually cleaner using \ExplSyntaxOn. Some might say easier to read (not all the distracting % to read), others might not. ;) With \ExplSyntaxOn you will have to use ~ for spacing.

\documentclass{article}
\usepackage{xparse}
\pagestyle{empty}
\ExplSyntaxOn
\NewDocumentCommand{\testing}{ ooo }
 {
  \IfNoValueTF {#1}
   { \texttt{Hello ~ World} }
   {
    \IfNoValueTF {#2}
     { \textit{#1} }
     {
      \IfNoValueTF {#3}
       { \textbf{#1}:\textsf{#2} }
       { \textbf{#3}:\textit{#2}:\textsf{#1} }
     }
   }
 }
\ExplSyntaxOff
\begin{document}
\testing[One][Two][Three]
\testing[One][Two]
\testing[One]
\testing
\end{document}

# A LaTeX3 Solution

\documentclass{article}
\usepackage{xparse}
\pagestyle{empty}
\ExplSyntaxOn
\int_new:N \g__my_options_count_int
\int_gset:Nn \g__my_options_count_int {3}
\NewDocumentCommand{\testing}{ ooo }
 {
  \IfNoValueT {#3}{ \int_gdecr:N \g__my_options_count_int }
  \IfNoValueT {#2}{ \int_gdecr:N \g__my_options_count_int }
  \IfNoValueT {#1}{ \int_gdecr:N \g__my_options_count_int }
  \int_case:nnn { \g__my_options_count_int }
   {
    { 3 }{ \textbf{#3}:\textit{#2}:\textsf{#1} }
    { 2 }{ \textbf{#1}:\textsf{#2} }
    { 1 }{ \textit{#1} }
   }
   { \texttt{Hello ~ World} }
  \int_gset:Nn \g__my_options_count_int {3}
 }
\ExplSyntaxOff
\begin{document}
\testing[One][Two][Three]
\testing[One][Two]
\testing[One]
\testing
\end{document}

# All the above produce:

- »not all the distracting % to read« - one can get used to them ;) – cgnieder Feb 18 at 22:02
- @cgnieder true. But with \ExplSyntaxOn you also don't have to worry about whether you forgot any %. – A.Ellett Feb 18 at 22:05
- If there's no first optional argument there's no second and third either. I'd go from the first to the third: try it and you'll see that the order is more natural. – egreg Feb 18 at 22:06
- @egreg So right. :( And it would be cleaner. – A.Ellett Feb 18 at 22:20
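Following egreg's comment, the same switch can be written so that the first optional argument is tested first. This variant is my own sketch rather than a post from the thread, and the text produced in each branch is an arbitrary placeholder:

```latex
\documentclass{article}
\usepackage{xparse}
% Test #1 first: with trailing optional arguments, a missing #1
% implies that #2 and #3 are missing as well.
\NewDocumentCommand{\MyCommand}{ o o o }{%
  \IfNoValueTF{#1}
    {no arguments given}% case: \MyCommand
    {\IfNoValueTF{#2}
      {one argument: \textit{#1}}% case: \MyCommand[a]
      {\IfNoValueTF{#3}
        {two arguments: \textbf{#1}, \textbf{#2}}% case: \MyCommand[a][b]
        {three arguments: \textbf{#1}, \textbf{#2}, \textbf{#3}}% case: \MyCommand[a][b][c]
      }%
    }%
}
\begin{document}
\MyCommand \par
\MyCommand[one] \par
\MyCommand[one][two] \par
\MyCommand[one][two][three]
\end{document}
```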
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8879685401916504, "perplexity": 9126.8149989595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163066095/warc/CC-MAIN-20131204131746-00004-ip-10-33-133-15.ec2.internal.warc.gz"}