# In the absence of limits from the codomain, do evaluation functors still preserve limits?

Consider a diagram $K:\mathsf J\longrightarrow [\mathsf C,\mathsf D]$ as well as the evaluation functors $\operatorname{ev}_C:[\mathsf C,\mathsf D]\to \mathsf D$. The fact that limits in functor categories are computed pointwise amounts to the following. Assuming each $\operatorname{ev}_C\circ K$ has a limit, these limits come together to define a limit of $K$. Consequently each evaluation functor preserves limits and the evaluation functors "jointly create limits".

As an exercise I've been told to prove that without any assumptions, the evaluation functors preserve limits (and colimits). However, I just don't see why this should be true. The existence of a limit for each composite $\operatorname{ev}_C\circ K$ is used to construct the limit upstairs (using universal properties), and that is what makes preservation obvious. I don't understand why the limit of the functor $K$ should evaluate to limits of $\operatorname{ev}_C\circ K$ in general. Am I wrong or is there a counterexample? (I started fiddling with diagrams, but I'm hoping for some trivial counterexample.)

• Start by asking yourself when it is automatic that the evaluation functor preserves limits. What condition on $D$ is required? Work from there. Apr 19 '18 at 14:01
• @IttayWeiss I'm not sure I understand your advice. Certainly if $\mathsf D$ is complete then the limits of the composites $\operatorname{ev}_C\circ K$ exist and we may proceed with the usual proof. Alternatively, if $\mathsf D$ is complete we may consider left and right adjoints to $\operatorname{ev}_C$ given by pointwise left and right Kan extensions along the points of $\mathsf C$. How does this help if we drop assumptions on $\mathsf D$ (that is what I'm asking)? Apr 19 '18 at 14:44
• Try to directly write down the right adjoint to evaluation. What minimal condition guarantees its existence? Apr 19 '18 at 16:31
• @IttayWeiss I don't know how to write down the right adjoint. I have no suspect for what it might be concretely, especially without any limits with which to produce a weighted limit formula for the Kan extension. Could you please elaborate? Apr 19 '18 at 20:09

Take $\mathbf C$ to be the category with one non-identity morphism, $\mathbf D$ to be the category with three non-identity morphisms $A\to B\leftleftarrows C$, and $\mathbf J\xrightarrow{K}[\mathbf C,\mathbf D]=\mathbf D^\to$ to be two copies of the morphism $A\to B$ ($\mathbf J$ is the discrete category with two objects). Then, as a morphism, $A\to B$ is its own square, because the only morphisms to it are commutative squares from itself and $A\xrightarrow{\mathrm id_A}A$; but its codomain $B$ does not have a square, because the pair of morphisms $B\leftleftarrows C$ only factor jointly through themselves, while the redundant pair of morphisms $A\to B$ only factor through $A$ and $B$.

Below is my reasoning for arriving at this minimal counter-example. In the absence of limit conditions on a category, a good replacement for the notion of a limit-preserving functor is the notion of a flat functor. Explicitly, given a diagram $\mathbf J\xrightarrow{K}\mathbf E$, a functor $\mathbf E\xrightarrow{F}\mathbf D$ is $K$-flat if any cone over the diagram $\mathbf J\xrightarrow{K}\mathbf E\xrightarrow{F}\mathbf D$ factors through the image of a cone over $\mathbf J\xrightarrow{K}\mathbf E$.
This notion is good because a) if $\mathbf J\xrightarrow{K}\mathbf E$ has a (weak) limit, then $\mathbf E\xrightarrow{F}\mathbf D$ is $K$-flat if and only if it preserves this (weak) limit, b) right adjoints are flat for all diagrams, c) the general adjoint functor theorem says that when $\mathbf E$ is Cauchy-complete and locally small, then $\mathbf E\xrightarrow{F}\mathbf D$ has a left adjoint if and only if $\mathbf E\xrightarrow{F}\mathbf D$ is flat for small diagrams and objects of $\mathbf D$ satisfy the solution set condition (i.e. have "small prereflections") with respect to $\mathbf E\xrightarrow{F}\mathbf D$.

In your situation, a cone over $\mathbf J\xrightarrow{K}[\mathbf C,\mathbf D]\xrightarrow{\mathrm{ev}_{c_0}}\mathbf D$ is of course an object $d\in\mathbf D$ equipped with a family of morphisms $d\to K(j,c_0)$ natural in $j$. On the other hand, a cone over $\mathbf J\xrightarrow{K}[\mathbf C,\mathbf D]$ is a functor $\mathbf C\xrightarrow{X}\mathbf D$ equipped with natural transformations $X\Rightarrow K$, i.e. a family of morphisms $X(c)\to K(j,c)$ in $\mathbf D$ natural in both $j$ and $c$. Thus, evaluation at $c_0$ is flat if any family of morphisms $d\to K(j,c_0)$, natural in $j$, factors as $d\to X(c_0)\to K(j,c_0)$ for a family of morphisms $X(c)\to K(j,c)$ natural in $j$ and $c$.

For a minimal counterexample, we see that $J$ being empty or the category with one object cannot work, hence we should take it to be at least the category with two objects. Then we are reduced to ensuring that a pair of morphisms $d\rightrightarrows K(j_0,c_0), K(j_1,c_0)$ does not jointly factor as $d\to X(c_0)\to K(j,c_0)$ for a family of morphisms $X(c)\to K(j,c)$ that is natural in $c$. Because the only reason this would break now is naturality, for a minimal counter-example we can take $\mathbf C$ to be the category with two objects and a single non-identity morphism between them. Then the diagram $K$ is simply a pair of morphisms in $\mathbf D$, and we want to set up $\mathbf D$ so that the two morphisms have a product but either the domain or codomain of that product is not the product of their domains or codomains. Keeping in mind that the two morphisms don't have to be distinct, I arrived at the above counter-example.

• Re: "set up $D$ so that the two morphisms have a product but either the domain or codomain of that product is not the product of their domains or codomains", it's perhaps worth pointing out that this can only work for the codomains: the "domain" functor from the category of arrows to $D$ does in fact preserve limits, because it is the right adjoint of the functor from $D$ to its category of arrows that maps every object to its identity arrow. Apr 20 '18 at 12:16

I think the following provides a counterexample: take $C$ to be the poset $\{0\leq 1\}$, and $D$ be the poset $\{x_1,x_2,x_3,y_1,y_2,y_3,z\}$, where $x_i\leq y_i$ for all $i$, $x_1\leq x_2$, $x_1\leq x_3$ (similarly for the $y_i$), and moreover $z\leq y_2$ and $z\leq y_3$. The functor category $[C,D]$ is the same thing as the category of arrows of $D$, which, since $D$ is a poset, is the same thing as pairs $(a,b)$ for which $a\leq b$, and there is a (unique) arrow $(a,b)\to (a',b')$ if and only if $a\leq a'$ and $b\leq b'$. Now $(x_1,y_1)$ is the product of $(x_2,y_2)$ and $(x_3,y_3)$; indeed, if $(a,b)$ has arrows to $(x_2,y_2)$ and $(x_3,y_3)$, then we must have $a\leq x_2$ and $a\leq x_3$, which forces $a=x_1$, and since $x_1\leq b\leq y_2$ and $b\leq y_3$, we must have either $b=x_1$ or $b=y_1$.
Thus in any case, we have a (necessarily unique) arrow $(a,b)\to (x_1,y_1)$. But note that $y_1$ is not actually the product of $y_2$ and $y_3$, since there is no arrow between $y_1$ and $z$. • Dear Arnaud, thank you very much for this answer. I have accepted the answer by Vladimir Sotirov because I feel it is exceptionally sharp and instructional. Apr 19 '18 at 23:23
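As a sanity check (my own addition, not part of the original answers), the poset counterexample above is small enough to verify by brute force. The sketch below encodes the order relation from the answer and computes greatest lower bounds, which are exactly categorical products in a poset; element names follow the answer's notation.

```python
# Brute-force check of the poset counterexample (my own verification sketch).
elems = ["x1", "x2", "x3", "y1", "y2", "y3", "z"]
cover = {("x1", "x2"), ("x1", "x3"), ("x1", "y1"),
         ("x2", "y2"), ("x3", "y3"),
         ("y1", "y2"), ("y1", "y3"),
         ("z", "y2"), ("z", "y3")}

def leq(a, b):
    """a <= b in D (reflexive-transitive closure of the cover relations)."""
    return a == b or any((a, m) in cover and leq(m, b) for m in elems)

def glb(universe, le, targets):
    """Greatest lower bound, i.e. the categorical product in a poset, or None."""
    lower = [c for c in universe if all(le(c, t) for t in targets)]
    tops = [c for c in lower if all(le(d, c) for d in lower)]
    return tops[0] if tops else None

# y2 and y3 have no product in D: y1 and z are incomparable lower bounds.
print(glb(elems, leq, ["y2", "y3"]))                          # None

# Objects of [C, D] are pairs (a, b) with a <= b; (a, b) <= (a', b')
# iff a <= a' and b <= b'.
arrows = [(a, b) for a in elems for b in elems if leq(a, b)]
arrow_leq = lambda p, q: leq(p[0], q[0]) and leq(p[1], q[1])

# (x2, y2) and (x3, y3) do have a product there, namely (x1, y1).
print(glb(arrows, arrow_leq, [("x2", "y2"), ("x3", "y3")]))   # ('x1', 'y1')
```

Running it prints None for the product of $y_2$ and $y_3$ in $D$, and ('x1', 'y1') for the product of $(x_2,y_2)$ and $(x_3,y_3)$ in the arrow category, which is exactly the failure of evaluation at the codomain to preserve the product.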
#### jp2003

##### New member

3/(x+3) + 2/x - 5/2

Answer in book is (12 - 5x - 5x^2) / (2x(x+3)).

Thank you.

#### soroban

##### Elite Member

Hello, jp2003!

Adding and subtracting fractions is never a pleasant task . . .

$$\displaystyle \L\frac{3}{x\,+\,3}\,+\,\frac{2}{x}\,-\,\frac{5}{2}$$

Answer in book is: $$\displaystyle \L\:\frac{12\,-\,5x\,-\,5x^2}{2x(x\,+\,3)}$$

To add and/or subtract fractions, the denominators must be the same.

Find the common denominator: $$\displaystyle \:2x(x\,+\,3)$$

Then "convert" the fractions so they each have the common denominator.

Multiply the first fraction by $$\displaystyle \frac{2x}{2x}:\;\;\;\L\frac{2x}{2x}\,\cdot\,\frac{3}{x\,+\,3} \;=\;\frac{6x}{2x(x\,+\,3)}$$

Multiply the second fraction by $$\displaystyle \frac{2(x+3)}{2(x+3)}:\;\;\;\L\frac{2(x+3)}{2(x+3)}\,\cdot\,\frac{2}{x} \;=\;\frac{4(x\,+\,3)}{2x(x\,+\,3)}$$

Multiply the third fraction by $$\displaystyle \frac{x(x+3)}{x(x+3)}:\;\;\;\L\frac{x(x+3)}{x(x+3)}\,\cdot\,\frac{5}{2}\;=\;\frac{5x(x\,+\,3)}{2x(x\,+\,3)}$$

The problem becomes: $$\displaystyle \L\:\frac{6x}{2x(x\,+\,3)}\,+\,\frac{4(x+3)}{2x(x\,+\,3)} \,-\,\frac{5x(x\,+\,3)}{2x(x\,+\,3)}$$

Then we have: $$\displaystyle \L\:\frac{6x\,+\,4(x\,+\,3)\,-\,5x(x\,+\,3)}{2x(x\,+\,3)} \:=\:\frac{6x\,+\,4x\,+\,12\,-\,5x^2\,-\,15x}{2x(x\,+\,3)}$$

And finally: $$\displaystyle \L\:\frac{12\,-\,5x\,-\,5x^2}{2x(x\,+\,3)}$$

#### Denis

##### Senior Member

Just a suggestion; I find it easier and faster and a bit less confusing if you go in stages; like:

2/x - 5/2 = (4 - 5x) / (2x)

Now bring in the other term:

3 / (x + 3) + (4 - 5x) / (2x) = you know what to do :wink:

Similarly, if you had 4 terms, you can:
do the 1st 2
do the next 2
do the above 2 :idea:

Much easier to "check back" if you end up with wrong answer; take my word for it.
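As a side note (my own addition, not part of the thread): the book's answer can also be machine-checked with sympy, assuming it is installed.

```python
# Quick check that the original expression equals the book's simplified form.
from sympy import symbols, Rational, simplify

x = symbols('x')
expr = 3/(x + 3) + 2/x - Rational(5, 2)
book = (12 - 5*x - 5*x**2) / (2*x*(x + 3))
print(simplify(expr - book))   # prints 0, so the two forms agree
```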
mersenneforum.org - Riesel base 3 reservations/statuses/primes

2008-11-23, 13:47 #133
michaf (Jan 2005, 479₁₀ Posts)

While testing my 80M range (520M-600M) I first tried pfgw to 1000 using Karsten's scripts, then sieved it, and tried Karsten's 'high-n' script. That took about 4 minutes writing time when a prime was found, so it was waaaay too slow :)
Second, I tried to n=2000 with pfgw, sieved, and then high-n'd it, which was way better (30 sec/write). I'm finishing that now as it is quick enough. I think that running pfgw to 2500 or even 3000 is the most efficient.
I will test on following ranges what is best (I'll go for 100M ranges then, and will note total time taken; expect an update in about 10 weeks :) )

2008-11-23, 14:14 #134
Flatlander, "I quite division it", "Chris" (Feb 2005, England, 81D₁₆ Posts)

Quote: Originally Posted by gd_barnes
Chris, can you double-check the k=500M-505M range? I messed around with it for a bit but don't feel I can spare the resources at the moment...too many other efforts going on. Thanks, Gary

No problem. I'll start when the NPLB rally finishes.

2008-11-24, 21:26 #136
michaf (Jan 2005, 479 Posts)

Hmm.. no offense taken :)
I find it troubling that there are some misses though.
Another thing: I just found out the hard way that doing large ranges just isn't doable. The harddrive thrashes, and just keeps on writing/reading, while the processor is out of work... I will do all the ranges by 10M now :)

2008-11-25, 03:12 #137
gd_barnes (May 2007, Kansas; USA, 7²×11×19 Posts)

Quote: Originally Posted by michaf
Hmm.. no offense taken :) I find it troubling that there are some misses though. Another thing: I just found out the hard way that doing large ranges just isn't doable. The harddrive thrashes, and just keeps on writing/reading, while the processor is out of work... I will do all the ranges by 10M now :)

Since the missing primes are all n=1001-1004, my impression is that you stopped testing your "1st run" at n=1000 but started testing your "2nd run" at n=1004 with the k's that were remaining at n=1000. Perhaps it was something to do with having to re-run the "2nd run" as a result of only doing the k=500M-502M range the 1st time around. Although that still doesn't explain why the 2nd run found 2 primes for n=1004 but missed 1 other prime for n=1004.

Can you check where you started and stopped your 2 runs? We need to know if there is a bug in Karsten's process or if it was just a "user error".

Gary

Last fiddled with by gd_barnes on 2008-11-25 at 03:13

2008-11-25, 07:12 #138
henryzz, Just call me Henry, "David" (Sep 2007, Cambridge (GMT/BST), 13167₈ Posts)

Quote: Originally Posted by henryzz
unfortunately this has not worked
it produced a file with n values in the order
Code:
1
10
100
1000
1001
etc.
which is not very easy to split

once i have a solution for this i will be able to send my results to gary

2008-11-25, 15:32 #139
Flatlander, "I quite division it", "Chris" (Feb 2005, England, 31×67 Posts)

Gary, either everyone here thinks you are infallible or nobody reads your posts!

Quote:
Something does smell a little bit fishy here though:
Code:
k-range      k's remaining
500M-505M     8
505M-510M    27
510M-515M    15
515M-520M    30

Er, there weren't 8 ks between 500M and 505M in michaf's post, there were 17! I can't believe no-one noticed!
lol
I have just confirmed them:
Code:
500145402 500968542 501526364 501628284 501947956 502362446 502579034 502598216 502683156 502732374 503092266 503163566 503210228 503449428 503961636 504291412 504632274

This is a very reassuring double check.
I'll stop laughing soon.

2008-11-25, 16:48 #140
Flatlander, "I quite division it", "Chris" (Feb 2005, England, 31×67 Posts)

Quote: Originally Posted by gd_barnes
... I double-checked the removal of multiples of the base (MOB) with your site. Looks great! There were no k's remaining that are divisible by 3. A total of 12 were eliminated leaving 35 k's remaining for the range. Nice work! Gary

Looks like the joke's on me. lol

2008-11-25, 17:28 #141
michaf (Jan 2005, 479 Posts)

Quote: Originally Posted by gd_barnes
Can you check where you started and stopped your 2 runs? We need to know if there is a bug in Karsten's process or if it was just a "user error". Gary

The sieve started at n=1000, but that doesn't imply an error in the script. I have had some very hard working days, so any user error that is possible to make, will most likely be made :)

2008-11-26, 03:38 #142
gd_barnes (May 2007, Kansas; USA, 7²·11·19 Posts)

Quote: Originally Posted by Flatlander
Gary, either everyone here thinks you are infallible or nobody reads your posts! Er, there weren't 8 ks between 500M and 505M in michaf's post, there were 17! I can't believe no-one noticed! lol I have just confirmed them:
Code:
500145402 500968542 501526364 501628284 501947956 502362446 502579034 502598216 502683156 502732374 503092266 503163566 503210228 503449428 503961636 504291412 504632274
This is a very reassuring double check. I'll stop laughing soon.

I am infallible. You didn't understand. The # of k's remaining is AFTER removing k's that are divisible by the base that don't need to be searched. Just look at Kenneth's website and you'll see!

Divide all the k's by 3. If a k is divisible by 3, subtract 1 and see if it is composite. If so, remove it; if not keep it. In almost all cases, you will remove it because k/3^q is already being searched or already has a prime.

Ha, ha, ha. Now I laugh times 3!

Gary

Edit: I just now saw that you said the "joke's on me". I can delete your posts and this post if you want but I have to admit I couldn't resist getting in a dig on you. BTW, both Kenneth and Max have caught errors on my web pages before. :-)

Last fiddled with by gd_barnes on 2008-11-26 at 03:41 Reason: edit

2008-11-26, 14:19 #143
Flatlander, "I quite division it", "Chris" (Feb 2005, England, 31·67 Posts)

Quote: Originally Posted by gd_barnes
... Edit: I just now saw that you said the "joke's on me". I can delete your posts and this post if you want but I have to admit I couldn't resist getting in a dig on you. BTW, both Kenneth and Max have caught errors on my web pages before. :-)

No, I think my public humiliation should stand. It might teach me to keep my big mouth shut!
# Prove that $\sum_{n=1}^\infty \frac{\sigma_a(n)}{n^s}=\zeta(s)\zeta(s-a)$

I would appreciate a hint concerning how to surpass the roadblock I've encountered in my attempt at a proof below. A nicer proof than mine would also help. (Edit: the latter part is now done by Gerry Myerson; the prior remains.)

Attempt at a proof (below): As $\sigma_a$ is multiplicative, we can take the infinite product of prime series: $$\sum_{n=1}^\infty \frac{\sigma_a(n)}{n^s}= \prod_{\text{p prime}}\sum_{k=0}^\infty \frac{\sigma_a(p^k)}{p^{ks}}$$ $$=\prod_{\text{p prime}}\sum_{k=0}^\infty \frac{\frac{p^{(k+1)a}-1}{p^a-1}}{p^{ks}}$$ $$=\prod_{\text{p prime}}\sum_{k=0}^\infty \frac{1}{p^a-1}\left[\frac{p^{(k+1)a}}{p^{ks}}-\frac{1}{p^{ks}}\right]$$ $$=\prod_{\text{p prime}} \frac{1}{p^a-1}\left[p^a\zeta(s-a)-\zeta(s)\right]$$ $$=\zeta(a)\prod_{\text{p prime}} \left[\zeta(s-a)-p^{-a}\zeta(s)\right]$$ I cannot see how to extract $\zeta(s)\zeta(s-a)$ from this.

Since $\sigma_a=f\ast g$ is the Dirichlet product with $f(n)=1$ and $g(n)=n^a$, and since multiplication of Dirichlet series is given with this product, we obtain $$\sum_{n=1}^{\infty}\sigma_a(n)n^{-s}=\sum_{n=1}^{\infty}n^{-s}\sum_{n=1}^{\infty}n^an^{-s}=\zeta(s)\zeta(s-a).$$

$$\zeta(s)\zeta(s-a)=\sum_j(1/j^s)\sum_k(k^a/k^s)=\sum_n c(n)/n^s$$ where we have to prove $c(n)=\sigma_a(n)$. But every factorization $n=jk$ contributes $k^a$ to the coefficient of $n^{-s}$.
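(Added for reference, not part of the original thread.) The Euler-product route does go through if the inner sums are kept as geometric series in $p^{-s}$ local to each prime, rather than being written as zeta values: $$\sum_{k=0}^\infty \frac{\sigma_a(p^k)}{p^{ks}}=\frac{1}{p^a-1}\left[\frac{p^a}{1-p^{a-s}}-\frac{1}{1-p^{-s}}\right]=\frac{1}{(1-p^{-s})(1-p^{a-s})},$$ and hence $$\prod_{\text{p prime}}\sum_{k=0}^\infty \frac{\sigma_a(p^k)}{p^{ks}}=\prod_{\text{p prime}}\frac{1}{1-p^{-s}}\cdot\prod_{\text{p prime}}\frac{1}{1-p^{a-s}}=\zeta(s)\zeta(s-a).$$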
## Bundling AppImages with Themes.

One of the projects which I have been undertaking in recent weeks has been, to teach myself GUI programming using the Qt5 GUI Library, of which I have version 5.7.1 installed on a good, tower computer, along with the IDE “Qt Creator”. What can be observed about this already is, that under Debian 9 / Stretch, which is a specific build of Linux, in addition to just a few packages, it’s really necessary to install many additional packages, before one is ready to develop Qt Applications, because of the way Debian breaks the facility into many smaller packages. Hypothetically, if a person was using the Windows, Qt SDK, then he or she would have many of the resources all in one package.

Beyond just teaching myself the basics of how to design GUIs with this, I’ve also explored what the best way is, to deploy the resulting applications, so that other people – technically, my users – may run them. This can be tricky because, with Qt especially, libraries tend to be incompatible, due to even minor version differences. So, an approach which can be taken is, to bundle the main libraries required into an AppImage, such that, when the developer has compiled everything, the resulting AppImage – a binary – is much more likely actually to run, on different versions of Linux specifically. The tool which I’ve been using, to turn my compiled binaries into AppImage’s, is called ‘linuxdeployqt‘, and is not available in the Debian / Stretch repositories. However, it does run under …Stretch.

But a developer may have questions that go beyond just this basic capability, such as, what he or she can do, so that the application will have a predictable appearance – a “Style” or “Theme” – on the end-user’s Linux computer. And essentially, I can think of two ways to approach that question: The ‘official but slightly quirky way’, and ‘a dirty fix, that seems to get used often’…

The official, but slightly quirky way:

Within the AppImage, there will be a ‘plugins’ directory, within which there will be a ‘platformthemes’ as well as a ‘styles’ subdirectory. It’s important to note, that these subdirectories serve officially different purposes:

• The ‘platformthemes’ subdirectory will contain plugins, that allow the application to connect with whatever theme engine the end-user’s computer has. Its plugins need to match libraries that the eventual user has, determining his desktop theme, And
• The ‘styles’ subdirectory may contain plugins, which the end-user does not have installed, but were usually compiled by upstream developers, to make use of one specific platform-engine each.

Thus, what I had in these directories, for better or worse, was as shown:

dirk@Phosphene:~/Programs/build-Dirk_Roots_GUI_1-Desktop-Release/plugins/platformthemes$ ls
KDEPlasmaPlatformTheme.so  libqgtk2.so  libqgtk3.so
dirk@Phosphene:~/Programs/build-Dirk_Roots_GUI_1-Desktop-Release/plugins/platformthemes$
dirk@Phosphene:~/Programs/build-Dirk_Roots_GUI_1-Desktop-Release/plugins/styles$ ls
breeze.so  libqgtk2style.so
dirk@Phosphene:~/Programs/build-Dirk_Roots_GUI_1-Desktop-Release/plugins/styles$

The reader may already get, that this was a somewhat amateurish way, to satisfy themes on the end-user’s machine. But in reality, what this set of contents, of the AppImage, does rather well is, to make sure that the 3 main theme engines on an end-user’s computer are recognized:

1. Gtk2,
2. Gtk3,
3. Plasma 5.
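As an aside of mine (not from the original posting, and written against the PyQt5 bindings rather than the C++ the project itself uses), an application can ask Qt at run-time which style plugins are actually loadable, and fall back gracefully when a requested one is missing:

```python
# Minimal sketch (PyQt5 assumed installed): query the available style plugins
# and only apply "Breeze" when that plugin can really be created.
import sys
from PyQt5.QtWidgets import QApplication, QStyleFactory, QLabel

app = QApplication(sys.argv)
print(QStyleFactory.keys())          # e.g. ['Breeze', 'Windows', 'Fusion', ...]

style = QStyleFactory.create("Breeze")
if style is not None:
    app.setStyle(style)              # the Breeze plugin was found and loaded
else:
    app.setStyle("Fusion")           # fall back to the always-available Fusion style

label = QLabel("Styled window")
label.show()
sys.exit(app.exec_())
```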
And, if the application makes no attempt to set its own theme or style, it will most probably run with the same theme, that the end-user has selected for his desktop. But, what the point of this posting really is, is to give a hint to the reader, as to how his AppImage could set its own theme eventually. And so, according to what I just cited above, my application could choose to select “Breeze” as the Style with which to display itself, or “Gtk2″. But, here is where the official way gets undermined, at least as the state of the art was, with v5.7.1 of Qt:

• ‘Breeze’ can only be set (by the application), if the end-user’s machine is running Plasma 5 (:1), And
• ‘Gtk2′ can only be set (by the application), if the end-user’s machine supports Gtk2 themes, which many Plasma 5 computers have the additional packages installed to do.

What this means is that, even though I could try to create a predictable experience for the end-user, what the end-user will see can still vary, depending on what, exactly, his platform is. And beyond that, even though I could set the ‘Gtk2′ Style with better reliability in the outcome, I could also just think, that the classical, ‘Gtk2′ style is a boring style, not worthy of my application. Yet, in this situation, I can only select the “Breeze” theme from within my application successfully, if the end-user is based on Plasma 5. If the end-user is not, then my application’s attempt to set “Breeze” will actually cause Qt v5.7.1 to choose the “Fusion” theme, that Qt5 always supports, that might look okay, but that is not “Breeze”…

So, what other options does the application developer have?

(Updated 9/12/2020, 18h15… )

## Observations, on how to insert Unicode and Emojis into text, using a KDE 4 / Plasma 5.8 -based Linux computer.

One of the earliest ‘inventions’ on the Internet was ‘Smilies’, which were just typed in to emails, and which, when viewed as text, evoked the perception of whichever face they represented. But graphical user interfaces – GUIs – replaced simple text even in the 1990s, and the first, natural thing which developers coded-in to email clients was, the ability to convert typed, text-based smilies, into actual images, flowed with the text. Also, simple colon-parenthesis sequences were replaced with other, more varied sequences, which could be converted by some email clients into fancier images, than simply, smiling faces. Actually, the evolution of the early Internet was slightly more complex than that, and I have even forgotten some of the real terms that were used to describe that History.

But there is an even more recent shift in the language of the Internet, which creates a distinction between Smilies, and ‘Emojis’. In this context, even many ‘Emoticons’ were really just smilies. Emojis distinguish themselves, in that these pictograms are represented as part of text in the form of Unicode values, of which there is such a large supply, that some Unicode values represent these pictograms, instead of always representing characters of the Earth’s many languages, including Chinese, Korean, Cyrillic, etc.

What some readers might ask next could be, ‘Traditionally, text was encoded as 7-bit or 8-bit ASCII, how can 16-bit or 32-bit Unicode characters simply be inserted into that?’ And the short answer is, through either UTF-8 or UTF-16 Encoding.
Hence, in a body of text that mainly consists of 8-bit codes, half of which are not normally used, sequences of bytes can be encoded, which can be recognized as special, because their 8-bit values do not correspond to valid ASCII characters, and their sequences complete a Unicode character. One fact which is good to know about these Emojis is, that they are often proprietary, which means that they are often either the intellectual property of an IT company, or part of an Open-Source project. But the actual aspect of that which can be proprietary is, the way in which Unicode values are rendered to images. What that means is that, for example, I can put the following code into my blog: 🤐 . That is also referred to as Unicode character ‘U+1F910′. Its length extends beyond 16 bits by 1 bit, and the next 4, most-significant bits are all 1’s, as expressed by the hexadecimal digit ‘F’. It’s supposed to be a pictogram of a deceased entity, as if that were stated correctly by a head which has had certain features crossed out. But for my blog, the use of such a code can be a hazard, because it will not display equally on Android devices, as it displays on iOS devices. And, on certain Linux computers, it might not be rendered at all, instead just resulting in a famous rectangle that seems to have dots or numbers inside it. This latter result will form, when the client-program could not find the correct Font, to convert this code into an image. (:3) Those fonts are what’s proprietary. And, they also provide some consistency in style, between Android devices, OR between iOS devices, OR between Windows devices, etc. Well, I began this posting by musing about the early days of the Internet. During those days, some users – myself included 😊  – did some things which were truly foolish, and which included, to put background images into our HTML-composed emails, and, to decorate documents with (8-bit) dingbat fonts, just because it was fun to pass certain fancier documents around, than POT. I don’t think there is really anything wrong with potential readers, who still put background images into their emails. What I mean is that many of my contacts today, prefer emails which are not even HTML. This earlier practice, of using dingbat fonts etc., tended to play favourably into the hands of the tech giants, because the resulting documents could only be viewed by certain applications. And so today, I needed to ask myself the question, of how often the use of Emojis can actually result in a document, which the recipient cannot read. And my conclusion is that today, such an indecipherable outcome is actually rare. So, how I would put a long story short is to say, that Commercialism is back, riding on the desire of younger people to put more-interesting content into their messages, and perhaps, without some of the younger people being aware that when they put Emojis, they are including themselves as the software-disciples of one larger group or another. But that larger group mainly seems to be drawing its profits, from the ability of certain software to insert the images, rather than, the ability of only certain software to render them at the receiving end (at all). Everybody knows that, even though the input methods on our smart-phones don’t lead to massively good prose, they almost always offer a rich supply of Smilies, plus Emojis, all displayed to the sender using his or her own font, but later displayed to the recipient, using a potentially different font. 
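A quick way to see this byte-level mechanism at work (my own sketch, not from the posting) is to ask Python for the encodings of U+1F910:

```python
# The single code point U+1F910 becomes four non-ASCII bytes in UTF-8,
# or a surrogate pair in UTF-16.
zipper_mouth = "\U0001F910"
print(zipper_mouth.encode("utf-8"))      # b'\xf0\x9f\xa4\x90'
print(zipper_mouth.encode("utf-16-be"))  # b'\xd8>\xdd\x10'  (surrogate pair D83E DD10)
print(hex(ord(zipper_mouth)))            # 0x1f910
```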
The way Linux computers can be given such fonts, is through the installation of packages such as ‘fonts-symbola’ and ‘ttf-ancient-fonts’, or of ‘fonts-noto‘… The main drawback of the open-source ‘Symbola’ font, for example, is simply, that it often gives a more boring depiction of the same Unicode character, than the depiction which the true Colour Noto Font from Google would give. One interesting way in which Linux users are already in on the party is, in the fact that actual Web-browsers are usually set to download fonts as they are needed, even under Linux, for the display of Web-pages. Yet, email clients do not fall into that category of applications, and whether they render Emojis depends on whether these font packages are installed. Hence, if the ability to send Emojis from a Linux computer is where it’s at, then this is going to be the subject of the rest of my posting. I can put two and two together, you know… (Updated 7/31/2020, 15h10… )
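One small, practical check in that direction (my own sketch; it assumes the fontTools package is installed, and the font path is only an example to adjust for the machine at hand):

```python
# Rough coverage check for a single code point in a font file on disk.
from fontTools.ttLib import TTFont

path = "/usr/share/fonts/truetype/ancient-scripts/Symbola_hint.ttf"  # example path
font = TTFont(path)
cmap = font.getBestCmap()     # maps code points to glyph names
print(0x1F910 in cmap)        # True means this font can actually draw the emoji
```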
## Background & Summary Plant traits are the morphological, chemical, physiological or phenological properties of individuals1. They determine how plants as primary producers capture, process and store resources, how they respond to their abiotic and biotic environment and disturbances, and how they affect other trophic levels and the fluxes of water, carbon and energy through ecosystems2,3,4,5,6,7,8. Despite the overwhelming diversity of plant forms and life histories on Earth, single plant organs, such as leaves, stems, or seeds, show comparatively few essential trait combinations9. Evidence for recurrent trait syndromes beyond the level of single organs has been rare, restricted geographically or taxonomically, and often contradictory. Díaz et al.9 addressed this question by analyzing the worldwide variation in six major traits critical to growth, survival and reproduction, namely: plant height (H), stem specific density (SSD), leaf area (LA), leaf mass per area (LMA), leaf nitrogen content per dry mass (Nmass) and diaspore (seed or spore) mass (SM). Díaz et al.9 found that occupancy of the six-dimensional trait space is highly constrained, and is captured in a two-dimensional global spectrum of plant form and function, indicating strong correlation and trade-offs among traits. These results provide a foundation and baseline for studies of plant evolution, comparative plant and ecosystems ecology, and predictive modelling of future vegetation based on continuous variation in essential plant functional dimensions. Here we provide the trait dataset that served as basis for the analysis of the global spectrum of plant form and function presented in Díaz et al.9 –the ‘Global Spectrum of Plant Form and Function Dataset’ (short here ‘Global Spectrum Dataset’). The dataset is predominantly based on trait records compiled in the TRY database10,11 and provides trait values corresponding –to the extent possible–to mature and healthy plants grown under natural conditions within the species distribution range. The dataset provides species mean values for the six plant traits mentioned above plus leaf dry matter content, used for the imputation of stem specific density. The dataset covers >46,000 of the approximately 391,000 vascular plant species known to science12. Despite the rapid development of large plant trait datasets, the Global Spectrum Dataset stands out in terms of coverage and reliability. First, it provides quantitative information for a very high number of species, including about 5% of them with ‘complete coverage’ (all six traits). Second, it represents a unique combination of probabilistic outlier detection and comprehensive validation of trait values against expert knowledge and external information for data quality assurance. Third, it contains the attribution of data to original references, even if datasets contributed to TRY had been assembled from multiple original sources. The quantitative trait data are enhanced by higher-level taxonomic information, based on the Angiosperm Phylogeny APG III (http://www.mobot.org/MOBOT/research/APweb/) and categorical traits, based on the ‘TRY – Categorical Traits Dataset’13, enriched by field data and various literature sources. This information facilitates stratification of species and quantitative traits according to phylogenetic and morpho-functional criteria. The present dataset results from the integration of trait measurements from many datasets received via TRY and additional, partly unpublished, data. 
The data come from largely independent studies, that address a wide variety of questions at different scales, and using different measurement methods, units and terminologies14. The development of the dataset therefore faced three challenges: (1) to derive a dataset of species mean values covering all six traits with the aim of being representative of vascular plant species worldwide; (2) to detect erroneous trait records (due to errors in sampling, measurement, unit conversion, etc.); and (3) to ensure that correctly measured extreme values of traits in nature were not mistakenly identified as outliers and therefore excluded from the dataset. To deal with these challenges, we collected as many trait observations as possible. The dataset was developed over a period of six years (2009–2015) with continuous addition of new trait records as data became available. The final dataset is based on almost 1 million trait records, which can be traced back to ca. 2,500 references (see file: ‘References_original_sources.xlsx’). We identified outliers and potential errors based on a probabilistic approach10 combined with validation by domain experts and external information. These combined efforts of data acquisition, integration and quality control resulted in the most comprehensive and probably most accurate dataset for species mean traits of vascular plants published so far. ## Methods ### Selection of plant traits There is an extensive literature summarized in Díaz et al.9 and Pérez-Harguindeguy et al.6 supporting the key importance of the six core traits chosen – H, SSD, LA, LMA, Nmass and SM – to growth, survival and reproduction. Díaz et al.9 went further by showing that, together, these traits capture the essence of plant form and function at the broad scale: a two-dimensional space, with one major dimension reflecting the size of whole plants and its organs, and the other representing a balance between leaf construction cost against growth potential, captures roughly three-quarters of total trait variation. The core quantitative traits were complemented with the categorical traits: woodiness, growth form, succulence, adaptation to terrestrial or aquatic habitats, nutrition type, and leaf type. ### Definition of traits In the following section we provide the names and definitions used for the continuous traits in the original publication of the global spectrum9, plus the names and definitions used in the Thesaurus Of Plant Characteristics (TOP)14. The detailed rationale, ecological meaning and key references for each of them can be found in the methods section of Díaz et al.9 and in Garnier et al.7. For the categorical traits we provide names, definition where available, and the categories used in the database. Traits were mostly measured following the protocols and definitions specified in the ‘New Handbook for Standardised Measurement of Plant Functional Traits Worldwide’6 (http://www.nucleodiversus.org). In the case of data from the LEDA database, measurements followed the protocols developed in the context of the LEDA project16 (https://www.leda-traitbase.org). In the case of published datasets individual measurement protocols are available in the original publications listed in Table S1. #### Plant height (H) (unit: m) Adult plant height, i.e. typical height of the upper boundary of the main photosynthetic tissues at maturity (TOP: vegetative plant height; the plant height considering the highest vegetative component). 
#### Stem specific density (SSD) (unit: mg mm−3) Stem dry mass per unit of stem fresh volume (TOP: stem specific density; the ratio of the mass of the stem or a unit thereof assessed after drying to its volume assessed without drying). SSD is much more commonly measured on woody species (particularly trees), than on non-woody species. Therefore, gaps in SSD for non-woody species were filled by estimates derived from leaf dry matter content (see Data Imputation below). #### Leaf area (LA) (unit: mm2) One-sided surface area of an individual lamina (TOP: leaf lamina area; the area of the leaf lamina in the one-sided projection; in case of compound leaves the area of a leaflet lamina). #### Leaf mass per area (LMA) (unit: g m−2) Leaf dry mass per unit of lamina surface area (TOP: leaf mass per area, the ratio of the dry mass of a leaf to its area). #### Leaf nitrogen per mass (Nmass) (unit: mg g−1) Leaf nitrogen content per unit of lamina dry mass (leaf total N) (TOP: leaf nitrogen content per leaf dry mass; the ratio of the quantity of nitrogen in the leaf or component thereof, i.e. leaf lamina or leaflet, per respective unit dry mass). #### Diaspore mass (SM) (unit: mg) Dry mass of an individual seed or spore plus any additional structures that assist dispersal and do not easily detach (TOP: seed dry mass; mass of an individual seed or spore assessed after drying; seed dry mass). Spore mass of pteridophytes, rarely reported in the literature, was estimated from published values of diaspore diameter and density (see Data Imputation below). #### Leaf dry matter content (LDMC) (unit: g g−1) The ratio of the dry mass of the leaf or component thereof, i.e. leaf lamina, to the corresponding water saturated fresh mass. In addition to the six focal traits, we compiled LDMC for herbaceous plants to calculate missing values for SSD (see Data Imputation below). #### Adaptation to terrestrial or aquatic habitats On the basis of the type of habitat in which the species naturally grows. Categories: aquatic, aquatic/semiaquatic, semiaquatic, terrestrial. #### Woodiness A feature of the whole plant defining the occurrence and distribution of wood along the stem. Categories: woody, non-woody, semi-woody (woody at base of stem(s) only). #### Growth form Growth form is mainly determined by woodiness and the direction and extent of growth, and any branching of the main shoot axis or axes. Categories: bamboo graminoid, climber, fern, herbaceous graminoid, herbaceous non-graminoid, herbaceous non-graminoid/shrub, succulent, shrub, shrub/tree, tree, other. #### Succulence Succulence characterizes plants with parts that are thickened, fleshy, and engorged, usually to retain water in conditions where climate or soil characteristics strongly limit water availability to plants. This criterion aims to provide more detailed information to the succulent growth form whenever available. Categories: leaf and stem succulent, leaf rosette and stem succulent, leaf rosette succulent, leaf rosette succulent (tall), leaf succulent, stem succulent, stem succulent (short), stem succulent (tall), succulent. #### Nutrition type Nutrition type here refers to whether the major source of energy and nutrients for the plant is photosynthesis, animals, dead material or other plants. Parasitism categories: hemiparasitic, holoparasitic, independent, parasitic. Carnivory categories: carnivorous, detritivorous. 
According to the ‘New Handbook for Standardised Measurement of Plant Functional Traits Worldwide’6 succulence and nutrition type are part of growth form. We here treat them separately for simplicity and to avoid combined categories. #### Leaf type A classification of presence/absence of photosynthetic active leaves and their basic forms. Categories: broadleaved, needleleaved, scale-shaped, scale-shaped/needleleaved, photosynthetic stem. ### Definition of representative trait records The six core quantitative traits certainly show intraspecific variation, amongst others caused by different ontogenetic stages and growth conditions. The dataset, focused on mean trait values for species rather than intraspecific variation, was intended to represent species mean trait values for mature and healthy (not obviously unhealthy) plants grown under natural conditions within the species distribution range. Leaf traits were intended to represent young but fully expanded and healthy leaves from the light exposed top canopy. Trait records not conforming to these requirements, i.e. records from plants grown in laboratories under experimental conditions and records measured on juvenile plants, were excluded from the dataset. This decision was made based on the respective metadata in the TRY database (see below). ### Data sources The vast majority of quantitative trait data was provided by the TRY Plant Trait Database10 (https:// www.try-db.org, TRY version 2.0 accessed July 2010, updated by TRY version 3.0 accessed May 2015). This dataset was supplemented by a small number of published data not included in TRY and original unpublished data contributed by W. J. Bond, J. H. C. Cornelissen, S. Díaz, L. Enrico, M. T. Fernandez-Piedade, L. D. Gorné, D. Kirkup, M. Kleyer, N. Salinas, E.-D. Schulze, K. Thompson, and R. Urrutia-Jalabert. Categorical traits were derived from the TRY Categorical Traits Dataset (https://www.try-db.org/TryWeb/Data.php#3), enhanced by field data and various literature sources. The datasets contributing via TRY to the quantitative traits are described in Supplementary Table S1, which contains data from refs. 
4,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233 and the following unpublished datasets: French Weeds Trait Database; Photosynthesis and Leaf Characteristics Database; South African Woody Plants Database (ZLTP); Tundra Plant Traits Database; Leaf N-Retention Database; Traits for Herbaceous Species from Andorra; Leaf Characteristics of Pinus sylvestris and Picea abies; Plant Coastal Dune Traits (France, Aquitaine); Dispersal Traits Database; LABDENDRO Brazilian Subtropical Forest Traits Database; Growth and Herbivory of Juvenile Trees; Cold Tolerance, Seed Size and Height of North American Forest Tree Species; Harze Trait Intravar: SLA; LDMC and Plant Height for Calcareous Grassland Species in South Belgium; Functional Traits for Restoration Ecology in the Colombian Amazon; Komati Leaf Trait Data; Baccara - Plant Traits of European Forests; Traits of Bornean Trees Database; Meadow Plant Traits: Biomass Allocation, Rooting depth; New South Wales Plant Traits Database; Traits for Herbaceous Species from Andorra; Catalonian Mediterranean Shrubland Trait Database; The Netherlands Plant Height Database; Plant Traits from Spanish Mediterranean Shrublands; Crown Architecture Database; Maxfield Meadow, Rocky Mountain Biological Laboratory – LMA; Herbaceous Plants Traits From Southern Germany; Leaf Area, Dry Mass and SLA Dataset; Herbaceous Leaf Traits Database Old Field New York; Plant Functional Traits From the Province of Almeria, Spain; Traits for Common Grasses and Herbs in Spain; Midwestern and Southern US Herbaceous Species Trait Database; Overton/Wright New Zealand Database; San Lorenzo Epiphyte Leaf Traits Database.

The reference for each individual trait record contributing via TRY to the Global Spectrum Dataset before exclusion of non-representative trait records, errors and duplicates is documented in the data file ‘References.xlsx’.

### Data integration and quality management

#### Semantic integration of terminologies from different datasets

Ecological studies are carried out for a large number of different questions at different scales and researchers often work independently and with little coordination among them. This results in idiosyncratic datasets using heterogeneous terminologies14. The first step was therefore a semantic integration of terminologies. The core traits were standardized according to the definitions and measurement protocols provided in the Thesaurus Of Plant Characteristics (TOP)14 and the ‘New Handbook for Standardised Measurement of Plant Functional Traits Worldwide’6,15. The metadata for plant and organ maturity (juvenile, mature), health (healthy, not healthy), growth conditions (natural conditions, experimental conditions), and sun- versus shade-grown leaves were harmonized across datasets.
#### Consolidation of taxonomy

Species names were standardized and attributed to families according to The Plant List (http://www.theplantlist.org), the commonly accepted list for vascular plants at the time of publication of Díaz et al.9, using TNRS234,235, complemented by manual standardization by experts. Attribution of families to higher-rank groups was made according to APG III (2009) (http://www.mobot.org/MOBOT/research/APweb/).

#### Conversion and correction of units, and exclusion of errors

Different datasets often used different units for the same trait. After conversion to the standardized unit per trait, differences among datasets – sometimes of an order of magnitude – became obvious. These differences could often be traced back to errors in the original units and were corrected. Obvious errors (e.g. impossible trait values like LMA < 0 g/m2) were excluded from the dataset.

#### Data imputation

To improve the number of species with values for all six core traits, trait records for SSD, LMA, Nmass and SM were complemented by trait values derived from records of related traits:

#### - Imputation of SSD

Trait records for SSD are available for a very large number of woody species, but only for very few herbaceous species. To incorporate this fundamental trait in the analyses by Díaz et al.9, we complemented SSD of herbaceous species using an estimation based on leaf dry matter content (LDMC), a much more widely available trait, and its close correlation to stem dry matter content (StDMC, the ratio of stem dry mass to stem water-saturated fresh mass). StDMC is a good proxy of SSD in herbaceous plants, with a ratio of approximately 1:1 (ref. 199), despite substantial differences in stem anatomy among botanical families236, including those between non-monocotyledons and monocotyledons (where sheaths were measured). We used a data set of 422 herbaceous species collected in the field across Europe and Israel, and belonging to 31 botanical families, to parameterize linear relationships of StDMC to LDMC. The slopes of the relationship were significantly higher for monocotyledons than for other angiosperms (F = 12.3; P < 0.001, from a covariance analysis); within non-monocotyledons, the slope for Fabaceae was higher than that for species from other families (F = 4.5; P < 0.05, from a covariance analysis). We thus used three different equations to predict SSD for 1963 herbaceous species for which LDMC values were available in TRY (Table 1): one for monocotyledons, one for Fabaceae, and a third one for other non-monocotyledons. Estimated data are flagged.

#### - Imputation of LMA

Trait records for SLA (leaf area per leaf dry mass) were converted to LMA (leaf dry mass per leaf area): LMA = 1/SLA.

#### - Imputation of Nmass

Trait records for leaf nitrogen content per leaf area (Narea) were converted to records of leaf nitrogen content per leaf dry mass (Nmass) if records for LMA were available for the same observation (leaf): Nmass = Narea/LMA.

#### - Imputation of SM

To be able to include trait data for pteridophytes in the analyses in Díaz et al.9, diaspore mass values were estimated based on published data for spore radius (r). We assumed that spores would be approximately spherical, with volume = (4/3)πr³, and that their density would be 0.5 mg mm−3 (refs. 237,238,239,240). Although these assumptions were imprecise, we are confident they result in spore masses within the right order of magnitude and several orders of magnitude smaller than seed mass of spermatophytes.
Most data were from Page237, data for Sadleria pallida were from Lloyd238, for Pteridium aquilinum from Conway239, and for Diphasiastrum spp from Stoor et al.240.

#### Probabilistic outlier detection

The hierarchical taxonomic classification of plants into families, genera and species has been shown to be highly informative with respect to the probability of trait values241,242,243. We therefore used it to conduct outlier detection at each of these levels. The six core traits provided in the Global Spectrum Dataset are approximately normally distributed on a logarithmic scale10. We therefore assume that, on log-scale, traits sample from normal distributions. For a normal distribution the density is symmetric around the mean, with 99.73% of the data expected within ±3 standard deviations of the mean and 99.99% within ±4 standard deviations. Using these wide confidence intervals ensures that extreme values that correspond to truly extreme values of traits in nature are not mistakenly identified as outliers and therefore excluded from the dataset. The z-score indicates how many standard deviations a record is away from the mean:

$$\text{z-score} = \frac{\text{value} - \text{mean}}{\text{standard deviation}}$$

Trait values with absolute z-scores >4 (>3) have a probability of less than 0.1% (0.3%) to be true values of the normal distribution. These trait values are most probably caused by errors not yet detected for these individual records, e.g., wrong unit, decimal error of trait value, wrong species (e.g. by mistake attributing a herb species name to a height measured on a tree), problems related to the trait definition or non-representative growth or measurement conditions. We acknowledge however that our z-score cutoff choice is an arbitrary one.

In many cases the number of trait values per taxon (e.g. a given species) was too small for a representative sample and did not provide a reliable estimate of the standard deviation (see Fig. 1). To circumvent this problem, we used the average standard deviation of trait values at the given taxonomic level, e.g., species, genus, family or all vascular plants. This average is an approximation of the standard deviation to be expected for an individual taxon, if a sufficient number of observations would be available (Fig. 1)10. This probability-based data quality assessment on the different levels of the taxonomic hierarchy is routinely conducted within the TRY database for all traits with more than 1000 records. The z-score values for each trait record are made available on the TRY website and the highest absolute value is provided with each data release. Trait values with an absolute z-score >4 (more than 4 standard deviations from at least one taxon mean) were excluded from the dataset unless their retention could be justified from external sources. Trait records with an absolute z-score between 3 and 4 (3 to 4 standard deviations from at least one taxon mean) were checked by domain experts among the authors for plausibility, and retained or excluded accordingly.
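The following is a compact sketch of that screening rule, in my own words; the trait values below are made up and the thresholds (3 and 4) follow the text. It is an illustration only, not the dataset's actual pipeline.

```python
# Flag log10-scale trait records against their taxon mean.
import math

def zscore_flags(values, sd_fallback):
    """sd_fallback stands in for the average standard deviation at the relevant
    taxonomic level, used when a taxon has too few records of its own."""
    logs = [math.log10(v) for v in values]
    mean = sum(logs) / len(logs)
    flags = []
    for x in logs:
        z = abs(x - mean) / sd_fallback
        if z > 4:
            flags.append("exclude unless justified externally")
        elif z > 3:
            flags.append("check by a domain expert")
        else:
            flags.append("keep")
    return flags

# Hypothetical leaf-area records (mm^2) for one species; the last value looks
# like a unit error and gets flagged for exclusion.
print(zscore_flags([1200, 1500, 900, 180000], sd_fallback=0.3))
```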
#### Exclusion of duplicate trait records Duplicate trait records were identified on the basis of the following criteria: same species (after standardization of taxonomy), similar trait values (accounting for rounding errors after semantic integration, unit conversion and data complementation), and no information on different measurement locations or dates. #### Calculation of species mean trait values The resulting dataset was used to calculate species mean trait values, without further stratification along, e.g., datasets or measurement sites. As trait distributions of the six core traits have been shown to be log-normal9, the mean species trait values were calculated after log-transformation of the trait values (geometric mean). Data for the categorical traits were added and, if in doubt, checked against expert knowledge and independent external information from specialized websites in the Internet. #### Final validation of taxonomy and mean trait values Taxonomy was finally checked once more manually against the Plant List and APGIII. The ten most extreme species mean values of each trait (smallest and largest) were checked manually for reliability against external sources. Finally, outliers of species mean traits – after categorization of species according to the categorical traits and in bi- and multivariate trait space – were validated against external sources (see Díaz et al.9 Fig. 2, Extended Data Fig. 3, and Extended Data Fig. 4). ## Data Records The dataset is available under a CC-BY license at the TRY File Archive (https://www.try-db.org/TryWeb/Data.php): Díaz, S. et al. The global spectrum of plant form and function: enhanced species-level trait dataset. TRY File Archive https://doi.org/10.17871/TRY.81 (2022)244 ### The dataset consists of two data files • Species_mean_traits.xlsx • References.xlsx ### Species_mean_traits.xlsx The file provides mean trait values of plants grown under natural conditions for 46,047 species (including a small number of genus level classifications, sub-species and local varieties). Species names and mean trait values are complemented by taxonomic hierarchy (genus, family and phylogenetic group), the number of trait records contributing to each mean trait value and by categorical traits. Values of all six traits were available for 2,214 species. In total the dataset contains 476,932 entries for quantitative and categorical trait records and higher-level taxonomy (92,159 entries for quantitative traits, 200,585 entries for categorical traits, and 184,188 entries for higher-level taxonomy). The quantitative species-level trait information is based on about 1 million trait records (see Table S1), measured on >500,000 plant individuals (number of different Observations in References (see below)). One trait record reported in the datasets is often based on several replicated measurements from different representative individuals at a site. The New Handbook for Standardised Measurement of Plant Functional Traits Worldwide6 recommends measurements on 10 to 25 individual plants or leaves, depending on the trait. Therefore in the cases that followed this or related protocols, a trait record in the original database probably represents the site-specific mean trait value for a given species. Reporting only the site-specific mean trait value was standard procedure in older publications and aggregated databases, assuming a common approach to replicated measurements on different individuals. 
More recent datasets tend to provide all individual measurements, among other reasons because this allows better treatment of intraspecific trait variation. The present dataset was derived from 157 datasets (Table S1). Trait records can be traced to ca. 2500 original publications (see References_original_sources.xlsx). All species are complemented with higher-level taxonomic information; 92.5% and 84.8% of species are attributed to categories according to woodiness and basic growth-form, respectively. The raw data are available via the TRY Database (https://www.try-db.org/TryWeb/Home.php). ### References.xlsx This file contains the references of all trait data, which contributed to the core traits of the Global Spectrum Dataset via the TRY database. If datasets contributed to TRY were already compiled from original publications, the table also provides the references of these original publications. The references are linked to the data in the species mean trait dataset via species unique identifiers and trait names. The sum of replicates in the species mean trait table is about 100,000 trait records less than the sum of 979,924 trait records in References and Supplementary Table S1, because the species mean trait table contains mean trait values and information on number of trait records only for those species-trait combinations that were retained after data cleaning and imputation. ## Technical Validation The dataset has a global coverage in geographic and climate space (Fig. 2, also Díaz et al.9 Extended Data Fig. 1), however with known gaps9,10,11. The numbers of species characterized per trait are similar to the TRY Database version 5, published in 201911. This indicates the efficiency of data collection and curation for the Global Spectrum Dataset. All species mean trait values (Table 2) are within the ranges published in Kattge et al.10. Histograms of trait frequency distributions are provided in Fig. 3. The coverage of species per trait with respect to woodiness is presented in Fig. 4. The dataset has so far been used in Díaz et al.9, where the data show a high internal consistency in bi- and multivariate analyses: known bivariate relationships were well reproduced (Díaz et al.9 Extended Data Figs. 3 and 4) and individual species were located in the first axes of the principal component analysis in positions expected from general knowledge about these species (Díaz et al.9 Fig. 2). ## Usage Notes In case the dataset is used in publications, both this paper and Díaz et al.9 should be cited. The six quantitative traits compiled here (plus LDMC) are among the best-covered quantitative traits in the TRY database. However, as is typical for these kinds of observational data, the numbers of records per species are unevenly distributed: few species mean trait values are based on a large number of records, while a large fraction of the species mean estimates is based on only a few or a single trait record(s) (see difference between mean and median number of trait records per species and trait in Table 2, the number of trait records per species mean is also indicated in the dataset file ‘Species_mean_traits.xlsx’). The representativeness of these mean values should be taken with caution, because the trait measurements have to be treated as samples from the variation of traits within species, which – for some traits – can be substantial10. 
However, as mentioned above, one trait record is often based on several trait measurements on characteristic individuals and therefore represents a site-specific mean value for the species. In the context of large-scale analyses, the variation within species has been shown to be considerably smaller than the variation between species10.
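As a rough illustration of the species-mean aggregation described in the Methods above (log-transform, average, back-transform to a geometric mean, and keep the record count behind each mean), here is a minimal sketch. The column names and values are hypothetical; the published files use their own headers.

```python
# Minimal sketch of the species-mean aggregation described above.
# Column names and trait values are illustrative only.
import numpy as np
import pandas as pd

records = pd.DataFrame({
    "species": ["Quercus robur", "Quercus robur", "Poa annua"],
    "trait":   ["SLA", "SLA", "SLA"],
    "value":   [12.1, 14.3, 25.0],          # individual trait records
})

# Geometric mean per species and trait: mean of the logs, then back-transform.
grouped = records.groupby(["species", "trait"])["value"]
species_means = np.exp(grouped.apply(lambda v: np.log(v).mean()))
n_records = grouped.size()                   # number of records behind each mean

print(pd.DataFrame({"geometric_mean": species_means, "n_records": n_records}))
```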
# Field and Potential of 3 charges conceptual problem

1. Sep 23, 2007

### mitleid

One positive (+2q) and two negative (−q) charges are arranged as displayed in figure 1. Calculate the electric field E and the electric potential P at points along the positive y-axis as functions of their coordinate y. What is the direction of E at those points? In your results, does the y-component Ey of the field satisfy Ey = −dP/dy? Is it supposed to?

Coulomb's Law: $E = k_e q / r^2$

First of all, the x-component of the field at any point along the y-axis is zero, since the contributions from the −q charge on the negative x side and the −q charge on the positive x side cancel one another, and 2q provides no field in the x-direction. I know the field at any point will be equal to the field from 2q (positive y) minus the two y-components from the other two particles (negative y). The contribution from the positive particle is simple: $E_y(+) = k_e\,(2q/y^2)$.

The other contributions require a little trigonometry, which I'm hoping I've done correctly. Assuming r is equal to the distance from −q to y (hypotenuse), $r^2 = y^2 + a^2$. Therefore $E(-) = k_e\,\big(-2q/(y^2 + a^2)\big)$. Now I have to break this down to find the y-component of E(−), which (I think) is just: $E_y(-) = \sin\alpha \cdot E(-)$.

Since the ultimate goal here is to define two functions, should I define $\sin\alpha$ in terms of y for integration purposes? Could I say $\sin\alpha = y/r = y/(y^2 + a^2)^{1/2}$? Oof, things are getting rusty... My gut says I will have to integrate the equations for Ey(+) and Ey(−), and the difference between them will be my function for the field. I haven't really started on the potential equation yet... figured I would check to see if I'm headed in the right direction first. Any advice?

Last edited: Sep 23, 2007

2. Sep 23, 2007

### Proggle

It's much easier the other way around. Obtain an expression for the potential, which is simpler to work with being a scalar. You can then obtain the electric field with the gradient operator (in other words Ex = −dV/dx, Ey = −dV/dy, etc. (partial derivatives)).

3. Sep 23, 2007

### mitleid

Potential = $k_e\,(q/r)$, but $r = \sqrt{y^2 + a^2}$ for the two negative charges. So will the total P be $k_e\,(2q/y) + k_e\,\big(-2q/\sqrt{y^2 + a^2}\big)$?

When I differentiate this I get something like $k_e\,(2q/y^2) - k_e\,\big(2q/(2(y^2 + a^2)^{3/2})\big)$

I see! This is the same as Ey with the substituted sine like I was asking. The only issue I see is that when I differentiated I got a 2 in the denominator...

4. Sep 23, 2007

### Proggle

Don't forget the chain rule...

5. Sep 23, 2007

### mitleid

hahah... I can't help it, my mind refuses to retain calculus methods.
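A quick symbolic check of the relation discussed in this thread. This is only a sketch, assuming (as the algebra above suggests) that +2q sits at the origin and the two −q charges at (±a, 0); $k_e$, q, a and y are left as free symbols.

```python
# Symbolic check (sympy) that E_y = -dV/dy on the positive y-axis for
# +2q at the origin and -q at (+a, 0) and (-a, 0).
import sympy as sp

ke, q, a, y = sp.symbols("k_e q a y", positive=True)

# Potential at (0, y): scalar sum over the three charges.
V = ke*2*q/y - 2*ke*q/sp.sqrt(y**2 + a**2)

# y-component of the field built directly from Coulomb's law:
# +2q gives ke*2q/y**2; each -q gives -ke*q/(y**2+a**2) times sin(alpha),
# with sin(alpha) = y/sqrt(y**2 + a**2).
Ey = ke*2*q/y**2 - 2*ke*q*y/(y**2 + a**2)**sp.Rational(3, 2)

print(sp.simplify(Ey + sp.diff(V, y)))   # -> 0, so E_y = -dV/dy does hold
```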
Electrolytic Cells

Do we have heat yet? In 1989, two scientists announced that they had achieved “cold fusion”, the process of fusing together elements at essentially room temperature to achieve energy production. The hypothesis was that the fusion would produce more energy than was required to cause the process to occur. Their process involved the electrolysis of heavy water (water molecules containing some deuterium instead of normal hydrogen) on a palladium electrode. The experiments could not be reproduced and their scientific reputations were pretty well shot. However, in more recent years, both industry and government researchers are taking another look at this process. The device illustrated above is part of a government project, and NASA is completing some studies on the topic as well. Cold fusion may not be so “cold” after all.

Electrolytic Cells

A voltaic cell uses a spontaneous redox reaction to generate an electric current. It is also possible to do the opposite. When an external source of direct current is applied to an electrochemical cell, a reaction that is normally nonspontaneous can be made to proceed. Electrolysis is the process in which electrical energy is used to cause a nonspontaneous chemical reaction to occur. Electrolysis is responsible for the appearance of many everyday objects such as gold-plated or silver-plated jewelry and chrome-plated car bumpers. An electrolytic cell is the apparatus used for carrying out an electrolysis reaction. In an electrolytic cell, electric current is applied to provide a source of electrons for driving the reaction in a nonspontaneous direction. In a voltaic cell, the reaction goes in a direction that releases electrons spontaneously. In an electrolytic cell, the input of electrons from an external source forces the reaction to go in the opposite direction.

Zn/Cu cell. The spontaneous direction for the reaction between Zn and Cu is for the Zn metal to be oxidized to Zn²⁺ ions, while the Cu²⁺ ions are reduced to Cu metal. This makes the zinc electrode the anode and the copper electrode the cathode. When the same half-cells are connected to a battery via the external wire, the reaction is forced to run in the opposite direction. The zinc electrode is now the cathode and the copper electrode is the anode.

$$\begin{array}{llr} \text{oxidation (anode):} & \text{Cu}(s) \rightarrow \text{Cu}^{2+}(aq) + 2e^- & E^0 = -0.34 \text{ V} \\ \text{reduction (cathode):} & \text{Zn}^{2+}(aq) + 2e^- \rightarrow \text{Zn}(s) & E^0 = -0.76 \text{ V} \\ \hline \text{overall reaction:} & \text{Cu}(s) + \text{Zn}^{2+}(aq) \rightarrow \text{Cu}^{2+}(aq) + \text{Zn}(s) & E^0_{\text{cell}} = -1.10 \text{ V} \end{array}$$

The standard cell potential is negative, indicating a nonspontaneous reaction. The battery must be capable of delivering at least 1.10 V of direct current in order for the reaction to occur. Another difference between a voltaic cell and an electrolytic cell is the signs of the electrodes. In a voltaic cell, the anode is negative and the cathode is positive. In an electrolytic cell, the anode is positive because it is connected to the positive terminal of the battery. The cathode is therefore negative. Electrons still flow through the cell from the anode to the cathode.

Summary

• The function of an electrolytic cell is described.
• Reactions illustrating electrolysis are given.

Practice

Watch the video at the link below and answer the following questions:

1. What was the source of electricity?
2. What was the purpose of the steel attached to an electrode?
3. What is used to help carry the electric current?

Review

1. What would be the products of a spontaneous reaction between Zn/Zn²⁺ and Cu/Cu²⁺?
2. How do we know that the reaction forming Cu²⁺ is not spontaneous?
3. What would be the voltage for the reaction where Zn metal forms Zn²⁺?
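A tiny numeric sketch of the cell-potential arithmetic used in the lesson above, assuming the tabulated standard reduction potentials quoted there (+0.34 V for Cu²⁺/Cu and −0.76 V for Zn²⁺/Zn):

```python
# Standard cell potential for the electrolytic (reversed Zn/Cu) cell,
# from standard *reduction* potentials in volts.
E_red = {"Cu2+/Cu": 0.34, "Zn2+/Zn": -0.76}

# Electrolytic direction: Cu is oxidized (anode), Zn2+ is reduced (cathode).
E_cell = E_red["Zn2+/Zn"] - E_red["Cu2+/Cu"]   # E(cathode) - E(anode)

print(f"E0_cell = {E_cell:.2f} V")  # -1.10 V: negative, hence nonspontaneous
# The external supply must therefore provide at least 1.10 V.
```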
# Saruman's Army Time Limit: 1000MS Memory Limit: 65536K [显示标签] ## Description Saruman the White must lead his army along a straight path from Isengard to Helm’s Deep. To keep track of his forces, Saruman distributes seeing stones, known as palantirs, among the troops. Each palantir has a maximum effective range of R units, and must be carried by some troop in the army (i.e., palantirs are not allowed to “free float” in mid-air). Help Saruman take control of Middle Earth by determining the minimum number of palantirs needed for Saruman to ensure that each of his minions is within R units of some palantir. ## Input The input test file will contain multiple cases. Each test case begins with a single line containing an integer R, the maximum effective range of all palantirs (where 0 ≤ R ≤ 1000), and an integer n, the number of troops in Saruman’s army (where 1 ≤ n ≤ 1000). The next line contains n integers, indicating the positions x1, …, xn of each troop (where 0 ≤ xi ≤ 1000). The end-of-file is marked by a test case with R = n = −1. ## Output For each test case, print a single integer indicating the minimum number of palantirs needed. ## Sample Input 0 3 10 20 20 10 7 70 30 1 7 15 20 50 -1 -1 ## Sample Output 2 4 ## Hint In the first test case, Saruman may place a palantir at positions 10 and 20. Here, note that a single palantir with range 0 can cover both of the troops at position 20. In the second test case, Saruman can place palantirs at position 7 (covering troops at 1, 7, and 15), position 20 (covering positions 20 and 30), position 50, and position 70. Here, note that palantirs must be distributed among troops and are not allowed to “free float.” Thus, Saruman cannot place a palantir at position 60 to cover the troops at positions 50 and 70. ## Source Stanford Local 2006
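One way to solve this is the standard greedy strategy (a sketch, not necessarily the official reference solution): sort the positions, place each palantir on the right-most troop within R of the left-most uncovered troop, then skip every troop that palantir covers.

```python
# Greedy sketch for the palantir placement problem described above.
import sys

def solve(R, xs):
    xs.sort()
    n = len(xs)
    count, i = 0, 0
    while i < n:
        left = xs[i]                      # left-most uncovered troop
        while i < n and xs[i] <= left + R:
            i += 1                        # find the troop that carries the palantir
        holder = xs[i - 1]                # palantir placed on this troop
        while i < n and xs[i] <= holder + R:
            i += 1                        # skip troops already covered
        count += 1
    return count

def main():
    data = iter(sys.stdin.read().split())
    out = []
    for R, n in zip(data, data):
        R, n = int(R), int(n)
        if R == -1 and n == -1:
            break
        xs = [int(next(data)) for _ in range(n)]
        out.append(str(solve(R, xs)))
    print("\n".join(out))

if __name__ == "__main__":
    main()
```

On the two sample cases this prints 2 and 4, matching the expected output.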
# Limit Cos X 1 X

We want to evaluate $\lim_{x \to 0} \frac{\cos x - 1}{x}$. Since the expression has the indeterminate form 0/0, one option is l'Hôpital's rule: the quotient of derivatives is $\frac{-\sin x}{1}$, which tends to 0 as $x \to 0$, so the limit is 0.

## Evaluating the limit without l'Hôpital's rule

Using the half-angle identity $1 - \cos x = 2\sin^2(x/2)$,

$$\frac{\cos x - 1}{x} = -\frac{2\sin^2(x/2)}{x} = -\frac{\sin^2(x/2)}{x/2} = -\left(\frac{\sin(x/2)}{x/2}\right)^2 \cdot \frac{x}{2} \;\longrightarrow\; -1 \cdot 0 = 0.$$

The same standard limit $\lim_{u \to 0} \frac{\sin u}{u} = 1$ also gives, for example, $\lim_{x \to 0} \frac{\sin 4x}{4x} = 1$. You can use these properties to evaluate many limit problems involving the six basic trigonometric functions. Note that this is a case where you can't always get an answer just by graphing the expression, since the function is undefined at $x = 0$.
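A quick symbolic confirmation of the two facts used above, as a small sympy sketch:

```python
# Check the limit and the half-angle identity used in the derivation above.
import sympy as sp

x = sp.symbols("x")
print(sp.limit((sp.cos(x) - 1) / x, x, 0))            # -> 0
print(sp.simplify(1 - sp.cos(x) - 2*sp.sin(x/2)**2))  # -> 0, i.e. 1 - cos(x) = 2 sin^2(x/2)
```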
# Arithmetic Progressions : Exercise 5.3 (Mathematics NCERT Class 10th) Like the video?  and get such videos daily! Q.1      Find the sum of the following APs : (i) 2, 7, 12, .... to 10 terms. (ii) – 37, –33, – 29, .... to 12 terms. (iii) 0.6, 1.7, 2.8...., to 100 terms. (iv) ${1 \over {15}},{1 \over {12}},{1 \over {10}},....$ to 11 terms. Sol.        (i) Let a be the first term and d be the common difference of the given AP then, we have a = 2 and d = 7 – 2 = 5 We have to find the sum of 10 terms of the given AP. Putting a = 2, d = 5, n = 10 in ${S_n} = {n \over 2}\left[ {2a + \left( {n - 1} \right)d} \right]$ we get ${S_{10}} = {{10} \over 2}\left[ {2 \times 2 + \left( {10 - 1} \right)5} \right]$ $= 5\left( {4 + 9 \times 5} \right)$ = 5(4 + 45) = 5 × 49 = 245 (ii) Let a be the first term and d be the common difference of the given AP. Then we have a = – 37, d = – 33 – (– 37) = – 33 + 37 = 4 We have to find the sum of 12 terms of the given AP. Putting a = – 37, d = 4, n = 12 in ${S_n} = {n \over 2}\left[ {2a + \left( {n - 1} \right)d} \right],we\,\,get$ ${S_{12}} = {{12} \over 2}\left[ {2 \times - 37 + 12\left( {12 - 1} \right)4} \right]$ = 6 (– 74 + 11 × 4) = 6 (– 74 + 44) = 6 × (– 30) = – 180 (iii) Let a be the first term and d be the common difference of the given AP. Then, we have a = 0.6, d = 1.7 – 0.6 = 1.1 We have to find the sum of 100 terms of the given AP Putting a = 0.6 d = 1.1, n = 100 in ${S_n} = {n \over 2}\left[ {2a + \left( {n - 1} \right)d} \right],\,we\,get$ ${S_{100}} = {{100} \over 2}\left[ {2 \times 0.6 + \left( {100 - 1} \right)1.1} \right]$ = 50(1.2 + 99 × 1.1) = 50 (1.2 + 108.9) = 50 × 110.1 = 5505 (iv) Let a be the first term and d be the common difference of the given AP. Then we have $a = {1 \over {15}},d = {1 \over {12}} - {1 \over {15}} = {{5 - 4} \over {60}} = {1 \over {60}}$ We have to find the sum of 11 terms of the given AP. Putting $a = {1 \over {15}},d = {1 \over {60}},n = 11\,in$ ${S_n} = {n \over 2}\left[ {2a + \left( {n - 1} \right)d} \right],\,we\,get$ ${S_{11}} = {{11} \over 2}\left[ {2 \times {1 \over {15}} + \left( {11 - 1} \right){1 \over {60}}} \right]$ $= {{11} \over 2}\left( {{2 \over {15}} + 10 \times {1 \over {60}}} \right)$ $= {{11} \over 2}\left( {{2 \over {15}} + {1 \over 6}} \right)$ $= {{11} \over 2} \times {{4 + 5} \over {30}}$ $= {{11} \over 2} \times {9 \over {30}} = {{33} \over {20}}$ Q.2      Find the sums given below : (i) $7 + 10{1 \over 2} + 14 + .... + 84$ (ii) 34 + 32 + 30 + ...... + 10 (iii) – 5 + (– 8) + (– 11) + .... + (– 230) Sol.        (i) Here, the last term is given. We will first have to find the number of terms. $a = 7,\,d = 10{1 \over 2} - 7 = 3{1 \over 2} = {7 \over 2},\ell = {a_n} = 84$ Therefore, 84 = a + (n – 1) d $\Rightarrow$ $84 = 7 + \left( {n - 1} \right){7 \over 2}$ $\Rightarrow$ ${7 \over 2}\left( {n - 1} \right) = 84 - 7$ $\Rightarrow$ ${7 \over 2}\left( {n - 1} \right) = 77$ $\Rightarrow$ $n - 1 = 77 \times {2 \over 7}$ $\Rightarrow$ n – 1 = 22 $\Rightarrow$ n = 23 We know that ${S_n} = {n \over 2}\left( {a + \ell } \right)$ $\Rightarrow$ ${S_{23}} = {{23} \over 2}\left( {7 + 84} \right) = {{23} \over 2} \times 91$ $= {{2093} \over 2} = 1046{1 \over 2}$ (ii) Here, the last term is given. We will first have to find the number of terms. 
a = 34, d = 32 – 34 = – 2, l = ${a_n} = 10$ Therefore 10 = a + (n – 1)d $\Rightarrow$ 10 = 34 + (n – 1) (– 2) $\Rightarrow$ (– 2) (n – 1) = 10 – 34 $\Rightarrow$ (– 2) (n –1) = – 24 $\Rightarrow$ n – 1 = 12 $\Rightarrow$ n = 12 + 1 = 13 Using ${S_n} = {n \over 2}\left( {a + \ell } \right)$, we have ${S_{13}} = {{13} \over 2}\left( {34 + 10} \right) = {{13} \over 2} \times 44$ = 13 × 22 = 286 (iii) Here the last term is given. We will first have to find the number of terms. a = – 5, d = – 8 – (–5) = – 8 + 5 = – 3, l = ${a_n} = - 230$ Therefore – 230 = a + (n – 1) d $\Rightarrow$ – 230 = – 5 + (n – 1) (– 3) $\Rightarrow$ (– 3) (n – 1) = – 230 + 5 $\Rightarrow$ (– 3) (n – 1) = – 225 $\Rightarrow$ $n - 1 = {{ - 225} \over { - 3}}$ $\Rightarrow$ n – 1 = 75 $\Rightarrow$ n = 75 + 1 = 76 Using ${S_n} = {n \over 2}\left( {a + \ell } \right)$, we have ${S_{76}} = {{76} \over 2}\left( { - 5 - 230} \right)$ = 38 × – 235 = – 8930 Purchase here or Request a Call Back from Our Academic Counsellor to Get Complete Animated Video Course of Your Class. Q.3      In an AP : (i) Given a = 5, d = 3, ${a_n} = 50,\,find\,n\,and\,{S_n}$. (ii) Given a = 7, ${a_{13}} = 35,\,find\,d\,and\,{S_{13}}$ (iii)Given ${a_{12}} = 37,\,d = 3,\,find\,a\,and\,{S_{12}}$ (iv)Given ${a_3} = 15,\,{S_{10}} = 125,\,find\,d\,and\,{a_{10}}$ (v) Given $d = 5,\,{S_9} = 75,\,find\,a\,and\,{a_9}$ (vi) Given a = 2, d = 8 , ${S_n} = 90,\,find\,n\,and\,{a_n}$ (vii) Given a = 8, ${a_n} = 62,\,{S_n} = 210$, find n and d. (viii) Given ${a_n} = 4,\,d = 2,\,{S_n} = - 14$, find n and a. (ix) Given a = 3, n = 8, S = 192, find d. (x) Given l = 28, S = 144, and there are total 9 terms. Find a. Sol.        (i) We have a = 5, d = 3 and ${a_n} = 50$ $\Rightarrow$ a + (n – 1)d = 50 $\Rightarrow$ 5 + (n – 1) 3 = 50 $\Rightarrow$ 3(n – 1) = 50 – 5 $\Rightarrow$ $n - 1 = {{45} \over 3} = 15$ $\Rightarrow$ n = 15 + 1 = 16 Putting n = 16, a = 5 and $\ell = {a_n} = 50\,in\,{S_n} = {n \over 2}\left( {a + \ell } \right)$ We get ${S_{16}} = {{16} \over 2}\left( {5 + 50} \right) = 8 \times 55 = 440$ Hence, n = 16 and ${S_{16}} = 440$ (ii) We have a = 7 and ${a_{13}} = 35$ Let d be the common difference of the given AP. Then, $\Rightarrow$ ${a_{13}} = 35$ $\Rightarrow$ a + 12 d = 35 $\Rightarrow$ 7 + 12 d = 35 [Since a = 7] $\Rightarrow$ 12d = 35 – 7 = 28 $\Rightarrow$ $d = {{28} \over {12}} = {7 \over 3}$ Putting n = 13, a = 7 and $\ell = {a_{13}} = 35\,in$ ${S_n} = {n \over 2}\left( {a + \ell } \right),we\,get$ ${S_{13}} = {{13} \over 2}\left( {7 + 35} \right)$ $= {{13} \over 2} \times 42$ = 13 × 21 = 273 Hence, $d = {7 \over 3}and\,{S_{13}} = 273$ (iii) We have ${a_{12}} = 37,\,d = 3$ Let a be the first term of the given AP. Then, ${a_{12}} = 37$ $\Rightarrow$ a + 11d = 37 $\Rightarrow$ a + 11(3) = 37 $\Rightarrow$ a + 11(3) = 37 [Since d = 3] $\Rightarrow$ a = 37 – 33 = 4 Putting n = 12, a = 4 and $\ell = {a_{12}} = 37\,in\,$ ${S_n} = {n \over 2}\left( {a + \ell } \right),\,we\,get$ ${S_{12}} = {{12} \over 2}\left( {4 + 37} \right) = 6 \times 41 = 246$ Hence, , a = 4 and ${S_{12}} = 246$ (iv) We have, ${a_3} = 15,{S_{10}} = 125$ Let a be the first term and d the common difference of the given AP. Then, ${a_3} = 15\,\,and\,\,{S_{10}} = 125$ $\Rightarrow$ a + 2d = 15 ... (1) and ${{10} \over 2}$ [2a + (10 – 1)d] = 125 $\Rightarrow$ 5(2a + 9d) = 125 $\Rightarrow$ 2a + 9d = 25 ... 
(2) 2 × (1) – (2) gives, 2(a + 2d) – (2a + 9d) = 2 × 15 – 25 $\Rightarrow$ 4d – 9d = 30 – 25 $\Rightarrow$ – 5d = 5 $\Rightarrow$ $d = - {5 \over 5} = - 1$ Now, ${a_{10}} = a + 9d = \left( {a + 2d} \right) + 7d$ = 15 + 7 (– 1) [Using (1)] = 15 – 7 = 8 Hence, d = – 1 and ${a_{10}} = 8$ (v) We have d = 5 , ${S_9} = 75$ Let a be the first term of the given AP. Then, ${S_9} = 75$ $\Rightarrow$ ${9 \over 2}\left[ {2a + \left( {9 - 1} \right)5} \right] = 75$ $\Rightarrow$ ${9 \over 2}\left( {2a + 40} \right) = 75$ $\Rightarrow$ 9a + 180 = 75 $\Rightarrow$ 9a = 75 – 180 $\Rightarrow$ 9a = – 105 $\Rightarrow$ $a = {{ - 105} \over 9} = {{ - 35} \over 3}$ Now ${a_9} = a + 8d = {{ - 35} \over 3} + 8 \times 5$ $= {{ - 35 + 120} \over 3} = {{85} \over 3}$ Hence, $a = {{ - 35} \over 3}and\,{a_9} = {{85} \over 3}$ (vi) We have, a = 2, d = 8 , ${S_n} = 90$ ${S_n} = 90$ $\Rightarrow$ ${n \over 2}\left[ {2 \times 2 + \left( {n - 1} \right)8} \right] = 90$ $\Rightarrow$ ${n \over 2}\left( {4 + 8n - 8} \right) = 90$ $\Rightarrow$ ${n \over 2}\left( {8n - 4} \right) = 90$ $\Rightarrow$ $n\left( {4n - 2} \right) = 90$ $\Rightarrow$ $4{n^2} - 2n - 90 = 0$ Therefore, $n = {{ - \left( { - 2} \right) \pm \sqrt {{{\left( { - 2} \right)}^2} - 4 \times 4 \times \left( { - 90} \right)} } \over {2 \times 4}}$ $= {{2 \pm \sqrt {4 + 1440} } \over 8}$ $= {{2 \pm \sqrt {1444} } \over 8}$ $= {{2 \pm 38} \over 8}$ $= {{40} \over 8},{{ - 36} \over 8} = 5,{{ - 9} \over 2}$ But n cannot be negative Therefore, n = 5 Now ${a_n} = a + \left( {n - 1} \right)d$ $\Rightarrow$ ${a_5} = 2 + \left( {5 - 1} \right)8 = 2 + 32 = 34$ Hence, n = 5 and ${a_n} = 34$ (vii) We have , a = 8, ${a_n} = 62,\,{S_n} = 210$ Let d be the common difference of the given AP. Now, ${S_n} = 210$ $\Rightarrow$ ${n \over 2}\left( {a + \ell } \right) = 210$ $\Rightarrow$ ${n \over 2}\left( {8 + 62} \right) = 210$ [Since $a = 8,\,{a_n} = 62$] $\Rightarrow$ ${n \over 2} \times 70 = 210$ $\Rightarrow$ $n = 210 \times {2 \over {70}} = 3 \times 2 = 6$ and ${a_n} = 62$ $\Rightarrow$ ${a_6} = 62$ $\Rightarrow$ a + 5d = 62 $\Rightarrow$ 8 + 5d = 62 [since a = 8] $\Rightarrow$ 5d = 62 – 8 = 54 $\Rightarrow$ $d = {{54} \over 5}$ Hence, $d = {{54} \over 5}$ and n = 6 (viii) We have ${a_n} = 4,\,d = 2,{S_n} = - 14$ Let a be the first term of the given AP. Then. ${a_n} = 4$ $\Rightarrow$ a + (n – 1)2 = 4 [since d = 2] $\Rightarrow$ a = 4 – 2 (n – 1) ... (1) and ${S_n} = - 14$ $\Rightarrow$ ${n \over 2}\left( {a + \ell } \right) = - 14$ [since $\ell = {a_n}$] $\Rightarrow$ n (a + 4) = – 28 $\Rightarrow$ n[4 – 2 (n – 1) + 4] = – 28 $\Rightarrow$ n (4 – 2n + 2 + 4) = – 28 $\Rightarrow$ n(– 2n + 10) = – 28 $\Rightarrow$ n (– n + 5) = – 14 $\Rightarrow$ $- {n^2} + 5n = - 14$ $\Rightarrow$ ${n^2} - 5n - 14 = 0$ $\Rightarrow$ (n – 7) (n + 2) = 0 $\Rightarrow$ n = 7 or – 2 But n cannot be negative n = 7 Putting n = 7 in (1), we get a = 4 – 2 (7 – 1) = 4 – 2 × 6 = 4 – 12 = – 8 Hence, n = 7 and a = – 8 (ix) We have, a = 3, n = 8, S = 192 Let d be the common difference of the given AP. ${S_n} = {n \over 2}\left[ {2a + \left( {n - 1} \right)d} \right]$ $\Rightarrow$ $192 = {8 \over 2}\left[ {2 \times 3 + \left( {8 - 1} \right)d} \right]$ $\Rightarrow$ 192 = 4(6 + 7d) $\Rightarrow$ 48 = 6 + 7d $\Rightarrow$ 7d = 48 – 6 $\Rightarrow$ 7d = 42 $\Rightarrow$ $d = {{42} \over 7} = 6$ Hence, d = 6 (x) We have l = 28, S = 144, n = 9 Let a be the first term of the given AP. 
S = 144 $\Rightarrow$ ${n \over 2}\left( {a + \ell } \right) = 144$ $\Rightarrow$ ${9 \over 2}\left( {a + 28} \right) = 144$ $\Rightarrow$ $a + 28 = 144 \times {2 \over 9}$ $\Rightarrow$ a + 28 = 32 $\Rightarrow$ a = 32 – 28 = 4 Hence, a = 4 Q.4      How many terms of the AP : 9 , 17, 25, ... must be taken to give a sum of 636 ? Sol.        Let the first term be a = 9 and common difference d = 17 – 9 = 8. Let the sum of n terms be 636. Then, ${S_n} = 636$ $\Rightarrow$ ${n \over 2}\left[ {2a + \left( {n - 1} \right)d} \right] = 636$ $\Rightarrow$ ${n \over 2}\left[ {2 \times 9 + \left( {n - 1} \right)8} \right] = 636$ $\Rightarrow$ ${n \over 2}\left( {18 + 8n - 8} \right) = 636$ $\Rightarrow$ ${n \over 2}\left( {8n + 10} \right) = 636$ $\Rightarrow$ n(4n + 5) = 636 $\Rightarrow$ $4{n^2} + 5n - 636 = 0$ Therefore, $n = {{ - 5 \pm \sqrt {25 - 4 \times 4 - 636} } \over {2 \times 4}}$ $= {{ - 5 \pm \sqrt {25 + 10176} } \over 8}$ $= {{ - 5 \pm \sqrt {10201} } \over 8}$ $= {{ - 5 \pm 101} \over 8} = {{96} \over 8},{{ - 106} \over 8}$ $12,{{ - 53} \over 4}$ But n cannot be negative Therefore, n = 12 Thus, the sum of 12 terms is 636. Q.5     The first term of an AP is 5, the last term is 45 and the sum is 400. Find the number of terms and the common difference. Sol.          Let a be the first term and d the common difference of the AP such that. a = 5, l = 45 and S = 400 Therefore, S = 400 $\Rightarrow$ ${n \over 2}\left( {a + \ell } \right) = 400$ $\Rightarrow$ $n\left( {5 + 45} \right) = 400 \times 2$ $\Rightarrow$ n(50) = 400 × 2 $\Rightarrow$ $n = {{400 \times 2} \over {50}} = 8 \times 2 = 16$ and l = 45 $\Rightarrow$ a + (n – 1) d = 45 $\Rightarrow$ 5 + (16 – 1)d = 45 $\Rightarrow$ 15d = 45 – 5 = 40 $\Rightarrow$ $a = {{40} \over {15}} = {8 \over 3}$ Hence, the number of term is 16 and the common difference is ${8 \over 3}$. Register here for Live Tutoring Crash Course to Score A+ in Board & Final Exams. Q.6      The first and the last terms of an AP are 17 and 350 respectively. If the common difference is 9, how many terms are there and what is their sum? Sol.         Let a be the first term and d be the common difference. Let l be its last term. Then a = 17, $\ell = {a_n} = 350$, d = 9 . $\ell = {a_n} = 350$ $\Rightarrow$ a + (n – 1) d = 350 $\Rightarrow$ 17 + (n – 1)9 = 350 $\Rightarrow$ 9(n – 1) = 350 – 17 = 333 $\Rightarrow$ $n - 1 = {{333} \over 9} = 37$ $\Rightarrow$ n = 37 +1 = 38 Putting a = 17, l = 350, n = 38 in ${S_n} = {n \over 2}\left( {a + \ell } \right),we\,\,get$ ${S_{38}} = {{38} \over 2}\left( {17 + 350} \right)$ = 19 × 367 = 6973 Hence, there are 38 terms in the AP having their sum as 6973. Q.7      Find the sum of first 22 terms of an AP in which d = 7 and 22nd term is 149. Sol.         Let a be the first term and d the common difference of the given AP then, d = 7 and ${a_{22}} = 149$ $\Rightarrow$ a + (22 – 1) d = 149 $\Rightarrow$ a + 21 × 7 = 149 $\Rightarrow$ a = 149 – 147 = 2 Putting n = 22, a = 2 and d = 7 in ${S_n} = {n \over 2}\left[ {2a + \left( {n - 1} \right)d} \right]$, we get ${S_{22}} = {{22} \over 2}\left[ {2 \times 2 + \left( {22 - 1} \right)7} \right]$ = 11(4 + 21 × 7) = 11(4 + 147) = 11 × 151 = 1661 Hence, the sum of first 22 terms is 1661. Q.8      Find the sum of first 51 terms of an AP whose second and third terms are 14 and 18 respectively. Sol.            Let a be the first term and d the common difference of the given AP. 
Then, ${a_2} = 14\,\,and\,\,{a_3} = 18$ $\Rightarrow$ a + d = 14 and a + 2d = 18 Solving these equations , we get d = 4 and a = 10 Putting a = 10, d = 4 and n = 51 in ${S_n} = {n \over 2}\left[ {2a + \left( {n - 1} \right)d} \right]$, we get ${S_{51}} = {{51} \over 2}\left[ {2 \times 10 + \left( {51 - 1} \right) \times 4} \right]$ $= {{51} \over 2}\left[ {20 + 50 \times 4} \right]$ $= {{51} \over 2}\left( {20 + 200} \right) = {{51} \over 2} \times 220$ = 51 × 110 = 5610 Q.9       If the sum of 7 terms of an AP is 49 and that of 17 terms is 289, find the sum of n terms. Sol.         Let a be the first term and d the common difference of the given AP. Then. ${S_7} = 49\,\,and\,\,{S_{17}} = 289$ $\Rightarrow$ ${7 \over 2}\left[ {2a + \left( {7 - 1} \right)d} \right] = 49$ $\Rightarrow$ ${7 \over 2}\left( {2a + 6d} \right) = 49$ $\Rightarrow$ a + 3d = 7 ... (1) and ${{17} \over 2}\left[ {2a + \left( {17 - 1} \right)d} \right] = 289$ $\Rightarrow$ ${{17} \over 2}\left( {2a + 16d} \right) = 289$ $\Rightarrow$ a + 8d = 17 ... (2) Solving these two equations, we get $\Rightarrow$ 5d = 10 , d = 2 and a = 1 Therefore, ${S_n} = {n \over 2}\left[ {2a + \left( {n - 1} \right)d} \right]$ $= {n \over 2}\left[ {2 \times 1 + \left( {n - 1} \right)2} \right]$ $= {n \over 2}\left( {2 + 2n - 2} \right) = {n \over 2} \times 2n = {n^2}$ Q.10      Show that ${a_1},{a_2}....\,{a_n}....$ form an AP where ${a_n}$ is defined as below : (i) ${a_n} = 3 + 4n$ (ii) ${a_n} = 9 - 5n$ Also find the sum of the first 15 term in each case. Sol.           (i) We have, ${a_n} = 3 + 4n$ Substituting n = 1, 2, 3, 4, ... , n , we get The sequence 7, 11, 15, 19, .... (3 + 4n) which is an AP with common difference 4. Putting a = 7, d = 4 and n = 15 in ${S_n} = {n \over 2}\left[ {2a + \left( {n - 1} \right)d} \right]$, we get ${S_{15}} = {{15} \over 2}\left[ {2 \times 7 + \left( {15 - 1} \right)4} \right]$ $= {{15} \over 2}\left( {14 + 14 \times 4} \right) = {{15} \over 2}\left( {14 + 56} \right)$ $= {{15} \over 2} \times 70 = 15 \times 35 = 525$ (ii) We have, ${a_n} = 9 - 5n$ Substituting n = 1, 2, 3, 4, .... n, we get The sequence 4, – 1, – 6, – 11, .... (9 – 5n), which is an AP with common difference – 5. Putting a = 4, d = – 5 and n = 15 in ${S_n} = {n \over 2}\left[ {2a + \left( {n - 1} \right)d} \right],$ we get ${S_{15}} = {{15} \over 2}\left[ {2 \times 4 + \left( {15 - 1} \right)\left( { - 5} \right)} \right]$ $= {{15} \over 2}\left( {8 + 14 \times - 5} \right)$ $= {{15} \over 2}\left( {8 - 70} \right) = {{15} \over 2} \times - 62$ = 15 × – 31 = – 465 Register here and Watch Previous Year CBSE Papers Video Solutions for FREE. Q.11     If the sum of the first n terms of an AP is 4n $- {n^2}$, what is the first term (that is ${S_1}$)? What is the sum of first two terms ? What is the second term? Similarly, find the 3rd the 10th and the nth terms. Sol.          
According to the question, ${S_n} = 4n - {n^2}$ ${S_1} = 4 \times 1 - {1^2}$ = 4 – 1 = 3 $\Rightarrow$ First term = 3 Now, sum of first two terms = ${S_2} = 4 \times 2 - {2^2}$ $= 8 - 4 = 4$ Therefore Second term $= {S_2} - {S_1} = 4 - 3 = 1$ $= {S_3} = 4 \times 3 - {3^2}$ = 12 – 9 = 3 Therefore Third term = ${S_3} - {S_2}$ = 3 – 4 = – 1 ${S_9} = 4 \times 9 - {9^2}$ = 36 – 81 = – 45 and, ${S_{10}} = 4 \times 10 - {10^2}$ = 40 – 100 = – 60 Therefore Tenth term = ${S_{10}} - {S_9}$ = – 60 – (– 45) = – 60 + 45 = – 15 Also, ${S_n} = 4n - {n^2}$ and ${S_{n - 1}} = 4\left( {n - 1} \right) - {\left( {n - 1} \right)^2}$ $= 4n - 4 - {n^2} + 2n - 1$ $= - {n^2} + 6n - 5$ Therefore, nth term = ${S_n} - {S_{n - 1}}$ $= 4n - {n^2} - \left( { - {n^2} + 6n - 5} \right)$ $= 4n - {n^2} + {n^2} - 6n + 5 = 5 - 2n$ Q.12      Find the sum of the first 40 positive integers divisible by 6. Sol.          The first positive integers divisible by 6 are 6, 12, 18, .... Clearly, it is an AP with first term a = 6 and common difference d = 6. We want to find ${S_{10}}$ Therefore, ${S_{40}} = {{40} \over 2}\left[ {2 \times 6 + \left( {40 - 1} \right)6} \right]$ = 20 (12 + 39 × 6) = 20(12 + 234) = 20 × 246 = 4920 Q.13      Find the sum of the first 15 multiples of 8. Sol.           The first 15 multiples of 8 are 8 × 1, 8 × 2, 8 × 3, ... 8 × 15 i.e., 8, 16, 24 .... 120, which is an AP. Therefore Sum of 1st 15 multiples of $8 = {{15} \over 2}\left( {8 + 120} \right)$ $\left[ {{S_n} = {n \over 2}\left( {a + \ell } \right)} \right]$ $= {{15} \over 2} \times 128$ = 15 × 64 = 960 Q.14      Find the sum of the odd numbers between 0 and 50. Sol.          The odd numbers between 0 and 50 are 1, 3, 5, 49. They form an AP and there are 25 terms. Therefore, Their sum $= {{25} \over 2}\left( {1 + 49} \right)$ $= {{25} \over 2} \times 50$ = 25 × 25 = 625 Q.15     A contract on construction job specifies a penalty for delay of completion beyond a certain date as follows: Rs 200 for the first day, Rs 250 for the second day, Rs 300 for the third day, etc., the penalty for each succeeding day being Rs 50 more than for the preceding day. how much money the contractor has to pay as penalty, if he has delayed the work by 30 days? Sol.           Here a = 200 , d = 50 and n = 30 Therefore, $S = {{30} \over 2}\left[ {2 \times 200 + \left( {30 - 1} \right)50} \right]$ $\left[ {Since\,\,{S_n} = {n \over 2}(2a + \left( {n - 1} \right)d} \right]$ = 15(400 + 29 × 50) = 15(400 + 1450) = 15 × 1850 = 27750 Hence, a delay of 30 days costs the contractor Rs 27750. Want to Know, How to Study Effectively? Click here for Best Tips. Q.16      A sum of Rs 700 is to be used to give seven each prizes to students of a school for their overall academic performance. If each prize is Rs 20 less than its preceding prize, find the value of each of the prizes. Sol.            Let the respective prizes be a + 60, a + 40, a + 20, a, a – 20, a – 40, a – 60 Therefore, The sum of the prizes is a + 60 + a + 40 + a + 20 + a + a – 20 + a – 40 + a – 60 = 700 $\Rightarrow$ 7a = 700 $\Rightarrow$ $a = {{700} \over 7} = 100$ Therefore, The seven prizes are 100 + 60, 100 + 40, 100 + 20, 100, 100 – 20, 100 – 40, 100 – 60 or 160, 140, 120, 100, 80, 60, 40 (in Rs) Q.17     In a school, students thought of planting trees in an around the school to reduce air pollution. 
It was decided that the number of trees, that each section of each class will plant, will be the same as the class, in which they are studying e.g., a section of Class I will plant 1 tree, a section of Class II will plant 2 trees and so on till Class XII. There are three sections of each class. How many trees will be planted by the students? Sol.             Since there are three sections of each class, so the number of trees planted by class I, class II, class III,... class XII are 1 × 3, 2 × 3, 3 × 3, .... 12 × 3 respectively. i.e., 3, 6, 9, ... 36. Clearly, it form an AP. The sum of the number of the trees planted by these classes. $= {{12} \over 2}\left( {3 + 36} \right) = 6 \times 39 = 234$ Q.18      A spiral is made up of successive semicircles , with centres alternately at A and B, starting with cenre at A, of radii 0.5 cm, 1.0 cm, 1.5 cm, 2.0 cm, .... as shown in fig. What is the total length of such a spiral made up of thirteen consecutive semicircles ? $\left( {Take\,\pi = {{22} \over 7}} \right)$ Sol.       Length of a semi-circum ference = $\pi r$ where r is the radius of the circle. Therefore, Length of spiral made up of thirteen consecutive semicircles. $= \left( {\pi \times 0.5 + \pi \times 1.0 + \pi + 1.5 + \pi \times 2.0 + .... + \pi \times 6.5} \right)cm$ $= \pi \times 0.5\left( {1 + 2 + 3 + .... + 13} \right)cm$ $= \pi \times 0.5 \times {{13} \over 2}\left( {2 \times 1 + [13 - 1} \right) \times 1]\,cm$ $= {{22} \over 7} \times {5 \over {10}} \times {{13} \over 2} \times 14\,cm = 143\,cm$ Q.19      200 logs are stacked in the following manner. 20 logs in the bottom row, 19 in the next row, 18 in the row next to it and so on (see figure). In how many rows are the 200 logs placed and how many logs are in the top row? Sol.       Clearly logs stacked in each row form a sequence 20 + 19 + 18 + 17 + .... It is an AP with a = 20, d = 19 – 20 = – 1. Let ${S_n} = 200$. Then ${n \over 2}\left[ {2 \times 20 + \left( {n - 1} \right)\left( { - 1} \right)} \right] = 200$ $\Rightarrow$ n(40 – n + 1) = 400 $\Rightarrow$ ${n^2} - 41n + 400 = 0$ $\Rightarrow$ $\left( {n - 16} \right)\left( {n - 25} \right) = 0$ $\Rightarrow$ n = 16 or 25 Here the common difference is negative. The terms go on diminishing and 21st term becomes zero. All terms after 21st term are negative. These negative terms when added to positive terms from 17th term to 20th term, cancel out each other and the sum remains the same. Thus n = 25 is not valid for this problem. So we take n = 16. Thus, 200 logs are placed in 16 rows. Number of logs in the 16th row $= {a_{16}}$ = a + 15d = 20 + 15(–1) = 20 – 15 = 5 Q.20      In a potato race, a bucket is placed at the starting point, which is 5 cm from the first potato, and the other potatoes are placed 3 m apart in a straight line. There are ten potatoes in the line (see figure). A competitor starts from the bucket, picks up the earest potato, runs back with it, drops it in the bucket, runs back to pick up the next potato, runs to the bucket to drop it in, and she continues in the same way until all the potatoes are in the bucket. What is the total distance the competitor has to run? Sol.      To pick up the first potato second potato, third potato, fourth potato, .... The distance (in metres) run by the competitor are 2 × 5 ; 2 × (5 + 3), 2 × (5 + 3 + 3), 2 × (5 + 3 + 3 + 3), .... i.e., 10, 16, 22, 28, .... 
which is an AP with a = 10, d = 16 – 10 = 6 Therefore, the sum of the first ten terms, ${S_{10}} = {{10} \over 2}\left[ {2 \times 10 + \left( {10 - 1} \right) \times 6} \right]$ = 5(20 + 54) = 5 × 74 = 370 Therefore, the total distance the competitor has to run is 370 m.
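A small numerical helper for the arithmetic-series sum used throughout this exercise, checked against a few of the answers worked out above (the function name and the use of exact fractions are just illustrative choices):

```python
# S_n = n/2 * (2a + (n - 1)d), kept exact with fractions.
from fractions import Fraction

def ap_sum(a, d, n):
    """Sum of the first n terms of an AP with first term a and common difference d."""
    return Fraction(n, 2) * (2 * Fraction(a) + (n - 1) * Fraction(d))

print(ap_sum(2, 5, 10))        # 245   (Q.1 (i))
print(ap_sum(-37, 4, 12))      # -180  (Q.1 (ii))
print(ap_sum(9, 8, 12))        # 636   (Q.4: twelve terms give the required sum)
```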
About the fake abs : f(x) = f(-x)

sheldonison — 12/11/2014, 10:58 PM (This post was last modified: 12/11/2014, 11:18 PM by sheldonison.)

(12/11/2014, 01:22 PM) tommy1729 Wrote: When discussing fake function theory we came across the fake sqrt. ... fake_sqrt(x^2).

$f(x)=\sum_{n=0}^{\infty} a_n x^n\;\;\;\;\; a_n = \frac{1}{\Gamma(n+0.5)}$

for large positive numbers, $g(x)=f(x)\exp(-x) \approx \sqrt{x}$

$g(x^2) = g((-x)^2) \approx x\;\;$ this is true if |real(x)| is large enough, and |imag(x)| isn't too large

However, at the imaginary axis $g(x^2)$ grows large exponentially, and does not behave like x at all. And g(x^2) never behaves like abs(x), anywhere in the complex plane.

Here are some example calculations:

$g(25)=5 + 1.5\cdot10^{-13}\;\;\;$ 5^2, small error term

$g((5+0.1i)^2) = 5+0.1i - k\cdot10^{-13}\;\;\;$ also a small error term, but not abs(x^2)

$g(-25)=-866955233 \;\;\;$ (5i)^2, huge error term, nowhere near 5i

$g(25i) = 3.53768061172 + 3.52450328163i \;\;\;\;\sqrt{25i}\approx 3.535534 +3.535534i$

- Sheldon
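A rough numerical check of the values quoted in the post, using mpmath. The truncation at 300 terms and the 50-digit working precision are arbitrary choices that happen to be comfortably sufficient for arguments of modulus 25.

```python
# g(x) = exp(-x) * sum_{n>=0} x^n / Gamma(n + 1/2), evaluated in high precision.
from mpmath import mp, mpf, mpc, exp, gamma

mp.dps = 50  # the partial sums are huge, so keep plenty of digits

def g(x):
    half = mpf("0.5")
    f = sum(x**n / gamma(n + half) for n in range(300))  # terms are negligible beyond ~300
    return exp(-x) * f

print(g(mpf(25)))      # ~ 5, tiny error term: g behaves like sqrt on the positive axis
print(g(mpc(0, 25)))   # ~ 3.5377 + 3.5245i, close to sqrt(25i) ~ 3.5355 + 3.5355i
print(g(mpf(-25)))     # ~ -8.67e8: the huge error term quoted above, nothing like |x|
```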
# How to express the data structure in the pseudocode of your algorithm [closed]

I stored the output of my algorithm in a data structure (a cell array). How can I express that when I am writing the pseudocode for my algorithm? For example, I have a set of $n$ values like {2,4,6} and for each value, I run the algorithm and output a matrix. I want to express in the pseudocode of my algorithm how I stored the matrix for each value, and to output these values and the matrices corresponding to them.

• Is there something preventing you from just writing that down...? There is no standard for pseudocode; you can do as you please. – dkaeae Jul 17 '19 at 7:08

ALGORITHM(S, n):
For i = 1 to n do
S = ADD(S, i)
EndFor

Then followed by something like "... the algorithm performs $\Theta(n)$ units of work assuming that the set $S$ is implemented as blah so that the addition of an element ADD(S, i) runs in blah time ...". Most of the time, the author explicitly specifies any data structure used and its operations. For example, if I am using a stack, I would specify that the add/push operation adds an element to the stack and delete/pop/remove deletes an element from the stack. When not mentioned, it is advisable to use the standard names used by popular programming languages.
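For the concrete situation in the question (a matrix produced per input value), one possible way to phrase the storage in pseudocode is a map/associative array from each value to its matrix. A hypothetical sketch, where `run_algorithm` is only a placeholder for whatever the actual algorithm is:

```python
# Store the output matrix of each run keyed by the input value:
#   M <- empty map;  for each v in values:  M[v] <- A(v)
import numpy as np

def run_algorithm(v):
    # placeholder: pretend the algorithm returns a v-by-v matrix
    return np.eye(v)

values = [2, 4, 6]
results = {v: run_algorithm(v) for v in values}

for v, matrix in results.items():
    print(v, matrix.shape)
```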
# Hankel determinants and irrationality questions ## Affiliation: University of Newcastle ## Date: Wed, 18/11/2015 - 1:30pm ## Venue: K-J17-101 (Ainsworth Building 101) UNSW ## Abstract: It is a classical fact that the irrationality of a real number $x$ follows from the existence of a sequence $p_n/q_n$, with integral $p_n$ and $q_n$, such that $q_nx-p_n$ is nonzero for all $n$ and tends to $0$ as $n$ tends to infinity. In my talk I will discuss an extension of this criterion in the case when the sequence possesses an additional structure; in particular, the requirement $q_nx-p_n\to 0$ is weakened. Some applications will be given including a new proof of the irrationality of $\pi$.
Lemma 8.6.11. Let $\mathcal{C}$ be a site. Let $F : \mathcal{S} \to \mathcal{T}$ be a $1$-morphism of categories fibred in groupoids over $\mathcal{C}$. Assume that 1. $\mathcal{T}$ is a stack in groupoids over $\mathcal{C}$, 2. for every $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ the functor $\mathcal{S}_ U \to \mathcal{T}_ U$ of fibre categories is faithful, 3. for each $U$ and each $y \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{T}_ U)$ the presheaf $(h : V \to U) \longmapsto \{ (x, f) \mid x \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{S}_ V), f : F(x) \to f^*y\text{ over }V\} /\cong$ is a sheaf on $\mathcal{C}/U$. Then $\mathcal{S}$ is a stack in groupoids over $\mathcal{C}$. Proof. We have to prove descent for morphisms and descent for objects. Descent for morphisms. Let $\{ U_ i \to U\}$ be a covering of $\mathcal{C}$. Let $x, x'$ be objects of $\mathcal{S}$ over $U$. For each $i$ let $\alpha _ i : x|_{U_ i} \to x'|_{U_ i}$ be a morphism over $U_ i$ such that $\alpha _ i$ and $\alpha _ j$ restrict to the same morphism $x|_{U_ i \times _ U U_ j} \to x'|_{U_ i \times _ U U_ j}$. Because $\mathcal{T}$ is a stack in groupoids, there is a morphism $\beta : F(x) \to F(x')$ over $U$ whose restriction to $U_ i$ is $F(\alpha _ i)$. Then we can think of $\xi = (x, \beta )$ and $\xi ' = (x', \text{id}_{F(x')})$ as sections of the presheaf associated to $y = F(x')$ over $U$ in assumption (3). On the other hand, the restrictions of $\xi$ and $\xi '$ to $U_ i$ are $(x|_{U_ i}, F(\alpha _ i))$ and $(x'|_{U_ i}, \text{id}_{F(x'|_{U_ i})})$. These are isomorphic to each other by the morphism $\alpha _ i$. Thus $\xi$ and $\xi '$ are isomorphic by assumption (3). This means there is a morphism $\alpha : x \to x'$ over $U$ with $F(\alpha ) = \beta$. Since $F$ is faithful on fibre categories we obtain $\alpha |_{U_ i} = \alpha _ i$. Descent of objects. Let $\{ U_ i \to U\}$ be a covering of $\mathcal{C}$. Let $(x_ i, \varphi _{ij})$ be a descent datum for $\mathcal{S}$ with respect to the given covering. Because $\mathcal{T}$ is a stack in groupoids, there is an object $y$ in $\mathcal{T}_ U$ and isomorphisms $\beta _ i : F(x_ i) \to y|_{U_ i}$ such that $F(\varphi _{ij}) = \beta _ j|_{U_ i \times _ U U_ j} \circ (\beta _ i|_{U_ i \times _ U U_ j})^{-1}$. Then $(x_ i, \beta _ i)$ are sections of the presheaf associated to $y$ over $U$ defined in assumption (3). Moreover, $\varphi _{ij}$ defines an isomorphism from the pair $(x_ i, \beta _ i)|_{U_ i \times _ U U_ j}$ to the pair $(x_ j, \beta _ j)|_{U_ i \times _ U U_ j}$. Hence by assumption (3) there exists a pair $(x, \beta )$ over $U$ whose restriction to $U_ i$ is isomorphic to $(x_ i, \beta _ i)$. This means there are morphisms $\alpha _ i : x_ i \to x|_{U_ i}$ with $\beta _ i = \beta |_{U_ i} \circ F(\alpha _ i)$. Since $F$ is faithful on fibre categories a calculation shows that $\varphi _{ij} = \alpha _ j|_{U_ i \times _ U U_ j} \circ (\alpha _ i|_{U_ i \times _ U U_ j})^{-1}$. This finishes the proof. $\square$ In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
# Solving fluid's Poisson equation for a periodic problem, or an easier way?

1. Jul 21, 2010

### omyojj

The problem is about mathematics but it originates from the self-gravitational instability of an incompressible fluid, so let me explain the situation first. I have an incompressible uniform fluid disk that is infinite in the x-y direction. The disk has a finite thickness $$2a$$ along the z-direction (-a<z<a). The space exterior to the disk is assumed to be filled with a rarefied medium that has constant pressure equal to the fluid's, which prevents the disk from dispersing. Thus, the initial density distribution has a step discontinuity and can be written as

$$\rho(x,y,z) = \rho_0 \left( \theta(z+a) - \theta(z-a) \right)$$

where $$\theta(z)$$ is a step function.

Now I want to apply a small Lagrangian perturbation to the fluid of the form $$\xi_{x,z}(x,z) = \xi_{x,z}(z) e^{ikx + i\omega t}$$ where $$\xi_{x,z}$$ is the x,z-component of the Lagrangian displacement vector. The perturbation has its wavenumber k along the x-direction, and I assumed time dependence. Also, I consider only the perturbation that has even reflection symmetry for the displacement, that is, $$\xi_{x,z}(z) = - \xi_{x,z}(-z)$$ (Sausage type: the rectangular shape of the slab is changed slightly (though infinitesimally) so that it looks more like a cylinder now).

Deep inside the disk, there would be no change in density because the fluid itself is incompressible ($$\nabla \cdot {\mathbf{\xi}} = 0$$). But near the boundary surfaces, discrete density changes in the Eulerian density variable $$\delta\rho(x,z)$$ could occur if the difference between a and the height from the midplane (z) is smaller than the Lagrangian displacement at z=a:

$$\delta\rho(x,z) = \rho_0 \left[ \theta(z - a) - \theta\!\left(z - a - \xi_z(z{=}a)e^{ikx}\right) \right] + \rho_0 \left[ \theta\!\left(z + a - \xi_z(z{=}{-a})e^{ikx}\right) - \theta(z+a) \right]$$

(Of course, the Lagrangian density perturbation is everywhere zero, i.e., $$\Delta \rho = 0$$.)

Now I want to introduce self-gravity at this point because I want to examine the strength of the perturbed gravity that makes the system unstable to these small disturbances:

$$\nabla^2 \delta\psi = 4\pi G \delta\rho$$

Can I solve the above equation for $$\delta\psi(x,z)$$ with a right-hand side involving step functions with sinusoidal behavior in the x-direction? Any hint or help would be much appreciated. Thank you. BTW, excuse my English..

Last edited: Jul 21, 2010
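A sketch of one standard route, not a full solution: because the source is a single Fourier mode in x and is confined to thin layers near z = ±a, the perturbed potential can be sought mode by mode as $\delta\psi \propto e^{ikx}e^{-k|z\mp a|}$, which is harmonic away from the perturbed surfaces; the source then only fixes the jump in $\partial\delta\psi/\partial z$ across each boundary. The sympy check below only verifies the harmonic part for z > a.

```python
# Verify that exp(i k x) * exp(-k (z - a)) satisfies Laplace's equation for z > a,
# so the Poisson equation with a sheet-like source can be solved mode by mode.
import sympy as sp

x, z, k, a = sp.symbols("x z k a", positive=True)

psi_above = sp.exp(sp.I*k*x) * sp.exp(-k*(z - a))   # trial mode, valid for z > a
laplacian = sp.diff(psi_above, x, 2) + sp.diff(psi_above, z, 2)
print(sp.simplify(laplacian))                        # -> 0
```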
Calabi-Yau Varieties and Mirror Symmetry

Edited by: Noriko Yui, Queen's University, Kingston, ON, Canada, and James D. Lewis, University of Alberta, Edmonton, AB, Canada

A co-publication of the AMS and Fields Institute.

Fields Institute Communications 2003; 367 pp; hardcover Volume: 38 ISBN-10: 0-8218-3355-3 ISBN-13: 978-0-8218-3355-1 List Price: US$129 Member Price: US$103.20 Order Code: FIC/38

The idea of mirror symmetry originated in physics, but in recent years, the field of mirror symmetry has exploded onto the mathematical scene. It has inspired many new developments in algebraic and arithmetic geometry, toric geometry, the theory of Riemann surfaces, and infinite-dimensional Lie algebras among others. The developments in physics stimulated the interest of mathematicians in Calabi-Yau varieties. This led to the realization that the time is ripe for mathematicians, armed with many concrete examples and alerted by the mirror symmetry phenomenon, to focus on Calabi-Yau varieties and to test for these special varieties some of the great outstanding conjectures, e.g., the modularity conjecture for Calabi-Yau threefolds defined over the rationals, the Bloch-Beilinson conjectures, regulator maps of higher algebraic cycles, Picard-Fuchs differential equations, GKZ hypergeometric systems, and others. The articles in this volume report on current developments. The papers are divided roughly into two categories: geometric methods and arithmetic methods. One of the significant outcomes of the workshop is that we are finally beginning to understand the mirror symmetry phenomenon from the arithmetic point of view, namely, in terms of zeta-functions and L-series of mirror pairs of Calabi-Yau threefolds. The book is suitable for researchers interested in mirror symmetry and string theory.

Titles in this series are co-published with the Fields Institute for Research in Mathematical Sciences (Toronto, Ontario, Canada).

Graduate students and research mathematicians interested in mirror symmetry and string theory.

Geometric methods
• V. V. Batyrev and E. N. Materov -- Mixed toric residues and Calabi-Yau complete intersections
• L. Chiang and S.-s. Roan -- Crepant resolutions of $$\mathbb{C}^n/A_1(n)$$ and flops of $$n$$-folds for $$n=4,5$$
• P. L. del Angel and S. Müller-Stach -- Picard-Fuchs equations, integrable systems and higher algebraic K-theory
• S. Hosono -- Counting BPS states via holomorphic anomaly equations
• J. D. Lewis -- Regulators of Chow cycles on Calabi-Yau varieties

Arithmetic methods
• P. Candelas, X. de la Ossa, and F. Rodriguez-Villegas -- Calabi-Yau manifolds over finite fields, II
• L. Dieulefait and J. Manoharmayum -- Modularity of rigid Calabi-Yau threefolds over $$\mathbb{Q}$$
• Y. Goto -- $$K3$$ surfaces with symplectic group actions
• T. Ito -- Birational smooth minimal models have equal Hodge numbers in all dimensions
• B. H. Lian and S.-T. Yau -- The $$n$$th root of the mirror map
• L. Long -- On a Shioda-Inose structure of a family of K3 surfaces
• M. Lynker, V. Periwal, and R.
Schimmrigk -- Black hole attractor varieties and complex multiplication • F. Rodriguez-Villegas -- Hypergeometric families of Calabi-Yau manifolds • R. Schimmrigk -- Aspects of conformal field theory from Calabi-Yau arithmetic • J. Stienstra -- Ordinary Calabi-Yau-3 crystals • J. Stienstra -- The ordinary limit for varieties over $$\mathbb{Z}[x_1,\ldots,x_r]$$ • N. Yui -- Update on the modularity of Calabi-Yau varieties with appendix by Helena Verrill • N. Yui and J. D. Lewis -- Problems
# Solution: All of the following objects are chiral EXCEPT: 1) hand 2) shoe 3) DNA 4) face

###### Problem

All of the following objects are chiral EXCEPT:
1) hand
2) shoe
3) DNA
4) face

###### Solution

An object is chiral if it is asymmetric in such a way that the structure and its mirror image cannot be superimposed on one another. A hand, a shoe, and the DNA double helix each differ from their mirror images, so they are chiral. A face has an (approximate) internal mirror plane and is superimposable on its mirror image, so it is not chiral. The answer is 4) face.
## Sunday, August 20, 2017 ### Analytic Expressions for the Inner-Rim Structure of Passively Heated Protoplanetary Disks Analytic Expressions for the Inner-Rim Structure of Passively Heated Protoplanetary Disks Authors: Ueda et al Abstract: We analytically derive the expressions for the structure of the inner region of protoplanetary disks based on the results from the recent hydrodynamical simulations. The inner part of a disk can be divided into four regions: dust-free region with gas temperature in the optically thin limit, optically thin dust halo, optically thick condensation front and the classical optically thick region in order from the inside. We derive the dust-to-gas mass ratio profile in the dust halo using the fact that partial dust condensation regulates the temperature to the dust evaporation temperature. Beyond the dust halo, there is an optically thick condensation front where all the available silicate gas condenses out. The curvature of the condensation surface is determined by the condition that the surface temperature must be nearly equal to the characteristic temperature ∼1200K. We derive the mid-plane temperature in the outer two regions using the two-layer approximation with the additional heating by the condensation front for the outermost region. As a result, the overall temperature profile is step-like with steep gradients at the borders between the outer three regions. The borders might act as planet traps where the inward migration of planets due to gravitational interaction with the gas disk stops. The temperature at the border between the two outermost regions coincides with the temperature needed to activate magnetorotational instability, suggesting that the inner edge of the dead zone must lie at this border. The radius of the dead-zone inner edge predicted from our solution is ∼ 2-3 times larger than that expected from the classical optically thick temperature. ### Exploring dust around HD 142527 down to 0.025" / 4au using SPHERE/ZIMPOL Exploring dust around HD142527 down to 0.025" / 4au using SPHERE/ZIMPOL Authors: Avenhaus et al Abstract: We have observed the protoplanetary disk of the well-known young Herbig star HD 142527 using ZIMPOL Polarimetric Differential Imaging with the VBB (Very Broad Band, ~600-900nm) filter. We obtained two datasets in May 2015 and March 2016. Our data allow us to explore dust scattering around the star down to a radius of ~0.025" (~4au). The well-known outer disk is clearly detected, at higher resolution than before, and shows previously unknown sub-structures, including spirals going inwards into the cavity. Close to the star, dust scattering is detected at high signal-to-noise ratio, but it is unclear whether the signal represents the inner disk, which has been linked to the two prominent local minima in the scattering of the outer disk, interpreted as shadows. An interpretation of an inclined inner disk combined with a dust halo is compatible with both our and previous observations, but other arrangements of the dust cannot be ruled out. Dust scattering is also present within the large gap between ~30 and ~140au. The comparison of the two datasets suggests rapid evolution of the inner regions of the disk, potentially driven by the interaction with the close-in M-dwarf companion, around which no polarimetric signal is detected. 
### In situ accretion of gaseous envelopes on to planetary cores embedded in evolving protoplanetary discs In situ accretion of gaseous envelopes on to planetary cores embedded in evolving protoplanetary discs Authors: Coleman et al Abstract: The core accretion hypothesis posits that planets with significant gaseous envelopes accreted them from their protoplanetary discs after the formation of rocky/icy cores. Observations indicate that such exoplanets exist at a broad range of orbital radii, but it is not known whether they accreted their envelopes in situ, or originated elsewhere and migrated to their current locations. We consider the evolution of solid cores embedded in evolving viscous discs that undergo gaseous envelope accretion in situ with orbital radii in the range 0.1−10au. Additionally, we determine the long-term evolution of the planets that had no runaway gas accretion phase after disc dispersal. We find: (i) Planets with 5M⊕ cores never undergo runaway accretion. The most massive envelope contained 2.8M⊕ with the planet orbiting at 10au. (ii) Accretion is more efficient onto 10M⊕ and 15M⊕ cores. For orbital radii ap≥0.5au, 15M⊕ cores always experienced runaway gas accretion. For ap≥5au, all but one of the 10M⊕ cores experienced runaway gas accretion. No planets experienced runaway growth at ap=0.1au. (iii) We find that, after disc dispersal, planets with significant gaseous envelopes cool and contract on Gyr time-scales, the contraction time being sensitive to the opacity assumed. Our results indicate that Hot Jupiters with core masses ≲15M⊕ at ≲0.1au likely accreted their gaseous envelopes at larger distances and migrated inwards. Consistently with the known exoplanet population, Super-Earths and mini-Neptunes at small radii during the disc lifetime, accrete only modest gaseous envelopes. ## Saturday, August 19, 2017 ### Binary Star Formation and the Outflows from their Disks Binary Star Formation and the Outflows from their Discs Authors: Kuruwita et al Abstract: We carry out magnetohydrodynamical simulations with FLASH of the formation of a single, a tight binary (a∼2.5 AU) and a wide binary star (a∼45 AU). We study the outflows and jets from these systems to understand the contributions the circumstellar and circumbinary discs have on the efficiency and morphology of the outflow. In the single star and tight binary case we obtain a single pair of jets launched from the system, while in the wide binary case two pairs of jets are observed. This implies that in the tight binary case the contribution of the circumbinary disc on the outflow is greater than that in the wide binary case. We also find that the single star case is the most efficient at transporting mass, linear and angular momentum from the system, while the wide binary case is less efficient (∼50%,∼33%,∼42% of the respective quantities in the single star case). The tight binary's efficiency falls between the other two cases (∼71%,∼66%,∼87% of the respective quantities in the single star case). By studying the magnetic field structure we deduce that the outflows in the single star and tight binary star case are magnetocentrifugally driven, whereas in the wide binary star case the outflows are driven by a magnetic pressure gradient. ### HD far infrared emission as a measure of protoplanetary disk mass HD far infrared emission as a measure of protoplanetary disk mass Authors: Trapman et al Abstract: Protoplanetary disks around young stars are the sites of planet formation. 
While the dust mass can be estimated using standard methods, determining the gas mass - and thus the amount of material available to form giant planets - has proven to be very difficult. Hydrogen deuteride (HD) is a promising alternative to the commonly used gas mass tracer, CO. We aim to examine the robustness of HD as a tracer of the disk gas mass, specifically the effect of gas mass on the HD FIR emission and its sensitivity to the vertical structure. Deuterium chemistry reactions relevant for HD were implemented in the thermochemical code DALI and models were run for a range of disk masses and vertical structures. The HD J=1-0 line intensity depends directly on the gas mass through a sublinear power law relation with a slope of ~0.8. Assuming no prior knowledge about the vertical structure of a disk and using only the HD 1-0 flux, gas masses can be estimated to within a factor of 2 for low mass disks (Mdisk < 10⁻³ M⊙). For more massive disks, this uncertainty increases to more than an order of magnitude. Adding the HD 2-1 line or independent information about the vertical structure can reduce this uncertainty to a factor of ~3 for all disk masses. For TW Hya, using the radial and vertical structure from Kama et al. (2016b), the observations constrain the gas mass to 6⋅10⁻³ M⊙ < Mdisk < 9⋅10⁻³ M⊙. Future observations require a 5σ sensitivity of 1.8⋅10⁻²⁰ W m⁻² (2.5⋅10⁻²⁰ W m⁻²) and a spectral resolving power R > 300 (1000) to detect HD 1-0 (HD 2-1) for all disk masses above 10⁻⁵ M⊙ with a line-to-continuum ratio > 0.01. These results show that HD can be used as an independent gas mass tracer with a relatively low uncertainty and should be considered as an important science goal for future FIR missions.

### Increased H2CO production in the outer disk around HD 163296

Increased H2CO production in the outer disk around HD 163296
Authors: Hallam et al

Abstract: It is known that an embedded massive planet will open a gap in a protoplanetary disc via angular momentum exchange with the disc material. The resulting surface density profile of the disc is investigated for one dimensional and two dimensional disc models and, in agreement with previous work, it is found that one dimensional gaps are significantly deeper than their two dimensional counterparts for the same initial conditions. We find, by applying one dimensional torque density distributions to two dimensional discs containing no planet, that the excitation of the Rossby wave instability and the formation of Rossby vortices play a critical role in setting the equilibrium depth of the gap. Being a two dimensional instability, this is absent from one dimensional simulations and does not limit the equilibrium gap depth there. We find similar gap depths between two dimensional gaps formed by torque density distributions, in which the Rossby wave instability is present, and two dimensional planet gaps, in which no Rossby wave instability is present. This can be understood if the planet gap is maintained at marginal stability, even when there is no obvious Rossby wave instability present. Further investigation shows the final equilibrium gap depth is very sensitive to the form of the applied torque density distribution, and using improved one dimensional approximations from three dimensional simulations can go even further in reducing the discrepancy between one and two dimensional models, especially for lower mass planets.
This behaviour is found to be consistent across discs with varying parameters.

## Friday, August 18, 2017

### The Viewing Geometry of Brown Dwarfs Influences Their Observed Colours and Variability Properties

The Viewing Geometry of Brown Dwarfs Influences Their Observed Colours and Variability Properties
Authors: Vos et al

Abstract: In this paper we study the full sample of known Spitzer [3.6 μm] and J-band variable brown dwarfs. We calculate the rotational velocities, vsini, of 16 variable brown dwarfs using archival Keck NIRSPEC data and compute the inclination angles of 19 variable brown dwarfs. The results obtained show that all objects in the sample with mid-IR variability detections are inclined at an angle >20∘, while all objects in the sample displaying J-band variability have an inclination angle >35∘. J-band variability appears to be more affected by inclination than Spitzer [3.6 μm] variability, and is strongly attenuated at lower inclinations. Since J-band observations probe deeper into the atmosphere than mid-IR observations, this effect may be due to the increased atmospheric path length of J-band flux at lower inclinations. We find a statistically significant correlation between the colour anomaly and inclination of our sample, where field objects viewed equator-on appear redder than objects viewed at lower inclinations. Considering the full sample of known variable L, T and Y spectral type objects in the literature, we find that the variability properties of the two bands display notably different trends, due to both intrinsic differences between bands and the sensitivity of ground-based versus space-based searches. However, in both bands we find that variability amplitude may reach a maximum at ∼7−9 hr periods. Finally, we find a strong correlation between colour anomaly and variability amplitude for both the J-band and mid-IR variability detections, where redder objects display higher variability amplitudes.

### HD 202206: A Circumbinary Brown Dwarf System

HD 202206: A Circumbinary Brown Dwarf System
Authors: Benedict et al

Abstract: Using Hubble Space Telescope Fine Guidance Sensor astrometry and previously published radial velocity measures, we explore the exoplanetary system HD 202206. Our modeling results in a parallax, ${\pi }_{\mathrm{abs}}=21.96\pm 0.12$ milliseconds of arc, a mass for HD 202206 B of ${{ \mathcal M }}_{B}={0.089}_{-0.006}^{+0.007}\,{{ \mathcal M }}_{\odot }$, and a mass for HD 202206 c of ${{ \mathcal M }}_{c}={17.9}_{-1.8}^{+2.9}\,{{ \mathcal M }}_{\mathrm{Jup}}$. HD 202206 is a nearly face-on G + M binary orbited by a brown dwarf. The system architecture that we determine supports past assertions that stability requires a 5:1 mean motion resonance (we find a period ratio, ${P}_{c}/{P}_{B}=4.92\pm 0.04$) and coplanarity (we find a mutual inclination, ${\rm{\Phi }}=6^\circ \pm 2^\circ$).

### A survey for planetary-mass brown dwarfs in the Chamaeleon I star-forming region

A survey for planetary-mass brown dwarfs in the Chamaeleon I star-forming region
Authors: Esplin et al

Abstract: We have performed a search for planetary-mass brown dwarfs in the Chamaeleon I star-forming region using proper motions and photometry measured from optical and infrared images from the Spitzer Space Telescope, the Hubble Space Telescope, and ground-based facilities. Through near-infrared spectroscopy at Gemini Observatory, we have confirmed six of the candidates as new late-type members of Chamaeleon I (>M7.75).
One of these objects, Cha J11110675-7636030, has the faintest extinction-corrected M_K among known members, which corresponds to a mass of 3-6 M_Jup according to evolutionary models. That object and two other new members have redder mid-IR colors than young photospheres at greater than M9.5, which may indicate the presence of disks. However, since those objects may be later than M9.5 and the mid-IR colors of young photospheres are ill-defined at those types, we cannot determine conclusively whether color excesses from disks are present. If Cha J11110675-7636030 does have a disk, it would be a contender for the least-massive known brown dwarf with a disk. Since the new brown dwarfs that we have found extend below our completeness limit of 6-10 M_Jup, deeper observations are needed to measure the minimum mass of the initial mass function in Chamaeleon I.

## Thursday, August 17, 2017

### A feature-rich transmission spectrum for WASP-127b

A feature-rich transmission spectrum for WASP-127b
Authors: Palle et al

Abstract: WASP-127b is one of the lowest density planets discovered to date. With a sub-Saturn mass (Mp=0.18±0.02MJ) and super-Jupiter radius (Rp=1.37±0.04RJ), it orbits a bright G5 star, which is about to leave the main-sequence. We aim to explore WASP-127b's atmosphere in order to retrieve its main atmospheric components, and to find hints for its intriguing inflation and evolutionary history. We used the ALFOSC spectrograph at the NOT telescope to observe a low resolution (R∼330, seeing limited) long-slit spectroscopic time series during a planetary transit, and present here the first transmission spectrum for WASP-127b. We find the presence of a strong Rayleigh slope at blue wavelengths and a hint of Na absorption, although the quality of the data does not allow us to claim a detection. At redder wavelengths the absorption features of TiO and VO are the best explanation to fit the data. Although higher signal-to-noise ratio observations are needed to conclusively confirm the absorption features, WASP-127b seems to possess a cloud-free atmosphere and is one of the best targets to perform further characterization studies in the near future.

### 10 Million Year Old Star PDS 110 has a Hot Jupiter

Periodic Eclipses of the Young Star PDS 110 Discovered with WASP and KELT Photometry
Authors: Osborn et al

Abstract: We report the discovery of eclipses by circumstellar disc material associated with the young star PDS 110 in the Ori OB1a association using the SuperWASP and KELT surveys. PDS 110 (HD 290380, IRAS 05209-0107) is a rare Fe/Ge-type star, a ~10 Myr-old accreting intermediate-mass star showing strong infrared excess (LIR/Lbol ~ 0.25). Two extremely similar eclipses with a depth of ~30% and duration ~25 days were observed in November 2008 and January 2011. We interpret the eclipses as caused by the same structure with an orbital period of 808±2 days. Shearing over a single orbit rules out diffuse dust clumps as the cause, favouring the hypothesis of a companion at ~2AU. The characteristics of the eclipses are consistent with transits by an unseen low-mass (1.8-70MJup) planet or brown dwarf with a circum-secondary disc of diameter ~0.3 AU. The next eclipse event is predicted to take place in September 2017 and could be monitored by amateur and professional observatories across the world.
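As a quick consistency check on the PDS 110 numbers above (a back-of-the-envelope estimate added here, assuming a central mass of roughly $2\,M_\odot$ for the intermediate-mass primary, which is not stated in the abstract): Kepler's third law in units of au, solar masses, and years, $a^3 = M P^2$, gives for $P = 808\ \mathrm{d} \approx 2.2\ \mathrm{yr}$

$$a \approx \left(2 \times 2.2^{2}\right)^{1/3}\ \mathrm{au} \approx 2.1\ \mathrm{au},$$

consistent with the quoted ~2 AU orbital distance of the eclipsing companion.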
### New Insights on Planet Formation in WASP-47 from a Simultaneous Analysis of Radial Velocities and Transit Timing Variations New Insights on Planet Formation in WASP-47 from a Simultaneous Analysis of Radial Velocities and Transit Timing Variations Authors: Weiss et al Abstract: Measuring precise planet masses, densities, and orbital dynamics in individual planetary systems is an important pathway toward understanding planet formation. The WASP-47 system has an unusual architecture that motivates a complex formation theory. The system includes a hot Jupiter ("b") neighbored by interior ("e") and exterior ("d") sub-Neptunes, and a long-period eccentric giant planet ("c"). We simultaneously modeled transit times from the Kepler K2 mission and 118 radial velocities to determine the precise masses, densities, and Keplerian orbital elements of the WASP-47 planets. Combining RVs and TTVs provides a better estimate of the mass of planet d ($13.6\pm 2.0\,{M}_{\oplus }$) than that obtained with only RVs ($12.75\pm 2.70\,{M}_{\oplus }$) or TTVs ($16.1\pm 3.8\,{M}_{\oplus }$). Planets e and d have high densities for their size, consistent with a history of photoevaporation and/or formation in a volatile-poor environment. Through our RV and TTV analysis, we find that the planetary orbits have eccentricities similar to the solar system planets. The WASP-47 system has three similarities to our own solar system: (1) the planetary orbits are nearly circular and coplanar, (2) the planets are not trapped in mean motion resonances, and (3) the planets have diverse compositions. None of the current single-process exoplanet formation theories adequately reproduce these three characteristics of the WASP-47 system (or our solar system). We propose that WASP-47, like the solar system, formed in two stages: first, the giant planets formed in a gas-rich disk and migrated to their present locations, and second, the high-density sub-Neptunes formed in situ in a gas-poor environment. ## Wednesday, August 16, 2017 ### An upper limit on the mass of the circumplanetary disk for DH Tau b An upper limit on the mass of the circumplanetary disk for DH Tau b Authors: Wolff et al Abstract: DH Tau is a young (∼1 Myr) classical T Tauri star. It is one of the few young PMS stars known to be associated with a planetary mass companion, DH Tau b, orbiting at large separation and detected by direct imaging. DH Tau b is thought to be accreting based on copious Hα emission and exhibits variable Paschen Beta emission. NOEMA observations at 230 GHz allow us to place constraints on the disk dust mass for both DH Tau b and the primary in a regime where the disks will appear optically thin. We estimate a disk dust mass for the primary, DH Tau A of 17.2±1.7M⊕, which gives a disk-to-star mass ratio of 0.014 (assuming the usual Gas-to-Dust mass ratio of 100 in the disk). We find a conservative disk dust mass upper limit of 0.42M⊕ for DH Tau b, assuming that the disk temperature is dominated by irradiation from DH Tau b itself. Given the environment of the circumplanetary disk, variable illumination from the primary or the equilibrium temperature of the surrounding cloud would lead to even lower disk mass estimates. A MCFOST radiative transfer model including heating of the circumplanetary disk by DH Tau b and DH Tau A suggests that a mass averaged disk temperature of 22 K is more realistic, resulting in a dust disk mass upper limit of 0.09M⊕ for DH Tau b. 
We place DH Tau b in context with similar objects and discuss the consequences for planet formation models. ### Internal Structure of Giant and Icy Planets: Importance of Heavy Elements and Mixing Internal Structure of Giant and Icy Planets: Importance of Heavy Elements and Mixing Authors: Helled et al Abstract: In this chapter we summarize current knowledge of the internal structure of giant planets. We concentrate on the importance of heavy elements and their role in determining the planetary composition and internal structure, in planet formation, and during the planetary long-term evolution. We briefly discuss how internal structure models are derived, present the possible structures of the outer planets in the Solar System, and summarise giant planet formation and evolution. Finally, we introduce giant exoplanets and discuss how they can be used to better understand giant planets as a class of planetary objects. ### Moderately Eccentric Warm Jupiters from Secular Interactions with Exterior Companions Moderately Eccentric Warm Jupiters from Secular Interactions with Exterior Companions Authors: Anderson et al Abstract: Recent work suggests that most warm Jupiters (WJs, giant planets with semi-major axes in the range of 0.1-1 AU) probably form in-situ, or arrive in their observed orbits through disk migration. However, both in-situ formation and disk migration, in their simplest flavors, predict WJs to be in low-eccentricity orbits, in contradiction with many observed WJs that are moderately eccentric (e=0.2−0.7). This paper examines the possibility that the WJ eccentricities are raised by secular interactions with exterior giant planet companions. Eccentricity growth may arise from an inclined companion (through Lidov-Kozai cycles), or from an eccentric, nearly coplanar companion. We quantify the necessary conditions (in terms of the eccentricity, semi-major axis and inclination) for external perturbers of various masses to raise the WJ eccentricity. We also consider the sample of eccentric WJs with detected outer companions, and for each system, identify the range of mutual inclinations needed to generate the observed eccentricity. For most systems, we find that relatively high inclinations (at least ∼40∘) are needed so that Lidov-Kozai cycles are induced; the observed outer companions are typically not sufficiently eccentric to generate the observed WJ eccentricity in a low-inclination configuration. The results of this paper place constraints on possibly unseen external companions to eccentric WJs. Observations that probe mutual inclinations of giant planet systems will help clarify the origin of eccentric WJs and the role of external companions. ## Tuesday, August 15, 2017 ### Compositional imprints in density-distance-time: a rocky composition for close-in low-mass exoplanets from the location of the valley of evaporation Compositional imprints in density-distance-time: a rocky composition for close-in low-mass exoplanets from the location of the valley of evaporation Authors: Jin et al Abstract: We use a theoretical end-to-end model that includes planet formation, thermodynamic evolution, and atmospheric escape to investigate how the statistical imprints of evaporation depend on the bulk composition of the planetary cores (rocky vs. icy). 
We find that the typical population-wide imprints of evaporation, like the location of the "evaporation valley" in the distance-radius plane and the corresponding one-dimensional bimodal distribution in planetary radii, are clearly different depending on the bulk composition of close-in low-mass planetary cores. Comparison with the observed position of the valley as found recently by Fulton et al. (2017) suggests that Kepler planets in this domain have a predominantly Earth-like rocky composition. Combined with the excess of period ratios outside of MMR, this suggests that low-mass Kepler planets formed inside of the iceline while still undergoing orbital migration. The core radius becomes visible for planets losing all primordial H/He. For such planets in the "triangle of evaporation" in the distance-radius plane, the degeneracy in possible compositions is reduced. In the observed a-R diagram, we identify a trend to more volatile-rich compositions with increasing planet radius and potentially distance (R/R_earth < 1.6: rocky; 1.6-3.0: H/He and/or ices; > 3: H/He). Moreover, we find that the mass-density distribution contains important information about planet formation and evolution. Evaporation removes close-in low-mass planets with low density in the mass-density space. This causes density and orbital distance to be anti-correlated for low-mass planets, in contrast to giant planets, where closer planets are less dense, due to inflation mechanisms. The temporal evolution of the statistical properties of the population reported here will be of particular interest for the future PLATO 2.0 mission, which will be able to observe the temporal dimension.

### K2-66b and K2-106b: Two Extremely Hot Sub-Neptune-size Planets with High Densities

Authors: Sinukoff et al

Abstract: We report precise mass and density measurements of two extremely hot sub-Neptune-size planets from the K2 mission using radial velocities, K2 photometry, and adaptive optics imaging. K2-66 harbors a close-in sub-Neptune-sized (${2.49}_{-0.24}^{+0.34}$ ${R}_{\oplus }$) planet (K2-66b) with a mass of $21.3\pm 3.6$ ${M}_{\oplus }$. Because the star is evolving up the subgiant branch, K2-66b receives a high level of irradiation, roughly twice the main-sequence value. K2-66b may reside within the so-called "photoevaporation desert," a domain of planet size and incident flux that is almost completely devoid of planets. Its mass and radius imply that K2-66b has, at most, a meager envelope fraction (less than 5%) and perhaps no envelope at all, making it one of the largest planets without a significant envelope. K2-106 hosts an ultra-short-period planet (P = 13.7 hr) that is one of the hottest sub-Neptune-size planets discovered to date. Its radius (${1.82}_{-0.14}^{+0.20}$ ${R}_{\oplus }$) and mass ($9.0\pm 1.6$ ${M}_{\oplus }$) are consistent with a rocky composition, as are all other small ultra-short-period planets with well-measured masses. K2-106 also hosts a larger, longer-period planet (${R}_{{\rm{p}}}$ = ${2.77}_{-0.23}^{+0.37}$ ${R}_{\oplus }$, P = 13.3 days) with a mass less than $24.4$ ${M}_{\oplus }$ at 99.7% confidence. K2-66b and K2-106b probe planetary physics in extreme radiation environments. Their high densities reflect the challenge of retaining a substantial gas envelope in such extreme environments.
### Water in SuperEarth 55 Cancri e's Atmosphere

A Search for Water in a Super-Earth Atmosphere: High-resolution Optical Spectroscopy of 55Cancri e
Authors: Esteves et al

Abstract: We present the analysis of high-resolution optical spectra of four transits of 55Cnc e, a low-density super-Earth that orbits a nearby Sun-like star in under 18 hr. The inferred bulk density of the planet implies a substantial envelope, which, according to mass–radius relationships, could be either a low-mass extended or a high-mass compact atmosphere. Our observations investigate the latter scenario, with water as the dominant species. We take advantage of the Doppler cross-correlation technique, high spectral resolution, and the large wavelength coverage of our observations to search for the signature of thousands of optical water absorption lines. Using our observations with HDS on the Subaru telescope and ESPaDOnS on the Canada–France–Hawaii Telescope, we are able to place a 3σ lower limit of 10 g mol⁻¹ on the mean-molecular weight of 55Cnc e's water-rich (volume mixing ratio >10%), optically thin atmosphere, which corresponds to an atmospheric scale-height of ~80 km. Our study marks the first high-spectral resolution search for water in a super-Earth atmosphere, and demonstrates that it is possible to recover known water-vapor absorption signals in a nearby super-Earth atmosphere, using high-resolution transit spectroscopy with current ground-based instruments.

## Monday, August 14, 2017

### Habitability Properties of Circumbinary Planets

Habitability Properties of Circumbinary Planets
Author: Shevchenko

Abstract: It is shown that several habitability conditions (in fact, at least seven such conditions) appear to be fulfilled automatically by circumbinary planets of main-sequence stars (CBP-MS), whereas on Earth, these conditions are fulfilled only by chance. Therefore, it looks natural that most of the production of replicating biopolymers in the Galaxy is concentrated on particular classes of CBP-MS, and life on Earth is an outlier, in this sense. In this scenario, Lathe's mechanism for the tidal "chain reaction" abiogenesis on Earth is favored as generic for CBP-MS, due to photo-tidal synchronization inherent to them. Problems with this scenario are discussed in detail.

### Frequent Flaring in the TRAPPIST-1 System: Unsuited for Life?

Frequent Flaring in the TRAPPIST-1 System—Unsuited for Life?
Authors: Vida et al

Abstract: We analyze the K2 light curve of the TRAPPIST-1 system. The Fourier analysis of the data suggests P_rot = 3.295 ± 0.003 days. The light curve shows several flares, of which we analyzed 42 events with integrated flare energies of 1.26 × 10³⁰–1.24 × 10³³ erg. Approximately 12% of the flares were complex, multi-peaked eruptions. The flaring and the possible rotational modulation show no obvious correlation. The flaring activity of TRAPPIST-1 probably continuously alters the atmospheres of the orbiting exoplanets, which makes these less favorable for hosting life.

### Cosmic Rays near Proxima Centauri b

Cosmic Rays near Proxima Centauri b
Authors: Struminsky et al

Abstract: Cosmic rays are an important factor of space weather determining radiation conditions near the Earth and it seems to be essential to clarify radiation conditions near extrasolar planets too. Last year a terrestrial planet candidate was discovered in an orbit around Proxima Centauri.
Here we present our estimates of the stellar wind parameters in the Parker model, and of the possible fluxes and fluences of galactic and stellar cosmic rays, based on the available data on Proxima Centauri's activity and its magnetic field. We found that galactic cosmic rays will be practically absent near Proxima b up to energies of 1 TeV due to the modulation by the stellar wind. Stellar cosmic rays may be accelerated in Proxima Centauri events, which are able to permanently maintain a density of stellar cosmic rays in the astrosphere comparable to the low-energy cosmic ray density in the heliosphere. Maximal proton intensities in extreme Proxima events should be 3-4 orders of magnitude higher than in solar events.

## Sunday, August 13, 2017

### Hints for Small Disks around Very Low Mass Stars and Brown Dwarfs

Hints for Small Disks around Very Low Mass Stars and Brown Dwarfs
Authors: Hendler et al

Abstract: The properties of disks around brown dwarfs and very low mass stars (hereafter VLMOs) provide important boundary conditions on the process of planet formation and inform us about the numbers and masses of planets that can form in this regime. We use the Herschel Space Observatory PACS spectrometer to measure the continuum and [O i] 63 μm line emission toward 11 VLMOs with known disks in the Taurus and Chamaeleon I star-forming regions. We fit radiative transfer models to the spectral energy distributions of these sources. Additionally, we carry out a grid of radiative transfer models run in a regime that connects the luminosity of our sources with brighter T Tauri stars. We find that VLMO disks with sizes 1.3–78 au, smaller than typical T Tauri disks, fit the spectral energy distributions well, assuming that disk geometry and dust properties are stellar mass independent. Reducing the disk size increases the disk temperature, and we show that VLMOs do not follow previously derived disk temperature–stellar luminosity relationships if the disk outer radius scales with stellar mass. Only 2 out of 11 sources are detected in [O i] despite a better sensitivity than was achieved for T Tauri stars, suggesting that VLMO disks are underluminous. Using thermochemical models, we show that smaller disks can lead to the unexpected [O i] 63 μm nondetections in our sample. The disk outer radius is an important factor in determining the gas and dust observables. Hence, spatially resolved observations with ALMA—to establish if and how disk radii scale with stellar mass—should be pursued further.

### Vertical Distribution and Kinematics of Protoplanetary Nebulae in the Galaxy

Vertical Distribution and Kinematics of Protoplanetary Nebulae in the Galaxy
Authors: Bobylev et al

Abstract: The catalogue of protoplanetary nebulae by Vickers et al. has been supplemented with the line-of-sight velocities and proper motions of their central stars from the literature. Based on an exponential density distribution, we have estimated the vertical scale height from objects with an age less than 3 Gyr belonging to the Galactic thin disk (luminosities higher than 5000 Lo) to be h = 146±15 pc, while from a sample of older objects (luminosities lower than 5000 Lo) it is h = 568±42 pc. We have produced a list of 147 nebulae in which there are only the line-of-sight velocities for 55 nebulae, only the proper motions for 25 nebulae, and both line-of-sight velocities and proper motions for 67 nebulae.
Based on this kinematic sample, we have estimated the Galactic rotation parameters and the residual velocity dispersions of protoplanetary nebulae as a function of their age. We have established that there is a good correlation between the kinematic properties of nebulae and their separation in luminosity proposed by Vickers. Most of the nebulae are shown to be involved in the Galactic rotation, with the circular rotation velocity at the solar distance being V_0 = 227±23 km/s. The following principal semiaxes of the residual velocity dispersion ellipsoid have been found: (σ1, σ2, σ3) = (47, 41, 29) km/s from a sample of young protoplanetary nebulae (with luminosities higher than 5000 Lo), (σ1, σ2, σ3) = (50, 38, 28) km/s from a sample of older protoplanetary nebulae (with luminosities of 4000 Lo or 3500 Lo), and (σ1, σ2, σ3) = (91, 49, 36) km/s from a sample of halo nebulae (with luminosities of 1700 Lo).

### An Analytical Model for the Evolution of the Protoplanetary Disks

An Analytical Model for the Evolution of the Protoplanetary Disks
Authors: Khajenabi et al

Abstract: We obtain a new set of analytical solutions for the evolution of a self-gravitating accretion disk by holding the Toomre parameter close to its threshold and obtaining the stress parameter from the cooling rate. Furthermore, in agreement with previous numerical solutions, the accretion rate is assumed to be independent of the disk radius. Extreme situations where the entire disk is either optically thick or optically thin are studied independently, and the obtained solutions can be used for exploring the early or the final phases of a protoplanetary disk's evolution. Our solutions exhibit decay of the accretion rate as a power-law function of the age of the system, with exponents −0.75 and −1.04 for the optically thick and thin cases, respectively. Our calculations permit us to explore the evolution of the snow line analytically. The location of the snow line in the optically thick regime evolves as a power-law function of time with the exponent −0.16; however, when the disk is optically thin, the location of the snow line evolves with the exponent −0.7 and thus has a stronger dependence on time. This means that in an optically thin disk the inward migration of the snow line is faster than in an optically thick disk.

## Saturday, August 12, 2017

### A Complete ALMA Map of the Fomalhaut Debris Disk

A Complete ALMA Map of the Fomalhaut Debris Disk
Authors: MacGregor et al

Abstract: We present ALMA mosaic observations at 1.3 mm (223 GHz) of the Fomalhaut system with a sensitivity of 14 μJy/beam. These observations provide the first millimeter map of the continuum dust emission from the complete outer debris disk with uniform sensitivity, enabling the first conclusive detection of apocenter glow. We adopt a MCMC modeling approach that accounts for the eccentric orbital parameters of a collection of particles within the disk. The outer belt is radially confined with an inner edge of 136.3±0.9 AU and width of 13.5±1.8 AU. We determine a best-fit eccentricity of 0.12±0.01. Assuming a size distribution power law index of q = 3.46±0.09, we constrain the dust absorptivity power law index β to be 0.9 < β < 1.5. The geometry of the disk is robustly constrained with an inclination of 65.6° ± 0.3°, position angle of 337.9° ± 0.3°, and argument of periastron of 22.5° ± 4.3°. Our observations do not confirm any of the azimuthal features found in previous imaging studies of the disk with HST, SCUBA, and ALMA.
However, we cannot rule out structures 10 AU in size or ones that only affect smaller grains. The central star is clearly detected with a flux density of 0.75±0.02 mJy, significantly lower than predicted by current photospheric models. We discuss the implications of these observations for the directly imaged Fomalhaut b and the inner dust belt detected at infrared wavelengths.

### Detection of exocometary CO within the 440 Myr-old Fomalhaut belt: a similar CO+CO2 ice abundance in exocomets and Solar System comets

Detection of exocometary CO within the 440 Myr-old Fomalhaut belt: a similar CO+CO2 ice abundance in exocomets and Solar System comets
Authors: Matrà et al

Abstract: Recent ALMA observations present mounting evidence for the presence of exocometary gas released within Kuiper belt analogues around nearby main sequence stars. This represents a unique opportunity to study their ice reservoir at the younger ages when volatile delivery to planets is most likely to occur. We here present the detection of CO J=2-1 emission co-located with dust emission from the cometary belt in the 440 Myr-old Fomalhaut system. Through spectro-spatial filtering, we achieve a 5.4σ detection and determine that the ring's sky-projected rotation axis matches that of the star. The CO mass derived ((0.65–42)×10⁻⁷ M⊕) is the lowest of any circumstellar disk detected to date, and must be of exocometary origin. Using a steady state model, we estimate the CO+CO2 mass fraction of exocomets around Fomalhaut to be between 4.6–76%, consistent with Solar System comets and the two other belts known to host exocometary gas. This is the first indication of a similarity in cometary compositions across planetary systems that may be linked to their formation scenario and is consistent with direct ISM inheritance. In addition, we find tentative evidence that (49±27)% of the detected flux originates from a region near the eccentric belt's pericentre. If confirmed, the latter may be explained through a recent impact event or CO pericentre glow due to exocometary release within a steady state collisional cascade. In the latter scenario, we show how the azimuthal dependence of the CO release rate leads to asymmetries in gas observations of eccentric exocometary belts.

### Different dust and gas radial extents in protoplanetary disks: consistent models of grain growth and CO emission

Different dust and gas radial extents in protoplanetary disks: consistent models of grain growth and CO emission
Authors: Facchini et al

Abstract: ALMA observations of protoplanetary disks confirm earlier indications that there is a clear difference between the dust and gas radial extents. The origin of this difference is still debated, with both radial drift of the dust and optical depth effects suggested in the literature. In this work, the feedback of realistic dust particle distributions onto the gas chemistry and molecular emissivity is investigated, with a particular focus on CO isotopologues. The radial dust grain size distribution is determined using dust evolution models that include growth, fragmentation and radial drift. A new version of the code DALI is used to take into account how dust surface area and density influence the disk thermal structure, molecular abundances and excitation. The difference of dust and gas radial sizes is largely due to differences in the optical depth of CO lines and millimeter continuum, without the need to invoke radial drift.
The effect of radial drift is primarily visible in the sharp outer edge of the continuum intensity profile. The gas outer radius probed by 12CO emission can easily differ by a factor of ∼2 between the models for turbulent α values in the typical range. Grain growth and settling concur in thermally decoupling the gas and dust components, due to the low collision rate with large grains. As a result, the gas can be much colder than the dust at intermediate heights, reducing the CO excitation and emission, especially for low turbulence values. Also, due to disk mid-plane shadowing, a second CO thermal desorption (rather than photodesorption) front can occur in the warmer outer mid-plane disk. The models are compared to ALMA observations of HD 163296 as a test case. In order to reproduce the observed CO snowline of the system, a binding energy for CO typical of ice mixtures needs to be used rather than the lower pure CO value.

## Friday, August 11, 2017

### OGLE-2014-BLG-1112LB: A new Brown Dwarf Detected Through Microlensing

OGLE-2014-BLG-1112LB: A Microlensing Brown Dwarf Detected Through the Channel of a Gravitational Binary-Lens Event
Authors: Han et al

Abstract: Because it depends only on the gravitational field, microlensing in principle provides an important tool for detecting faint and even dark brown dwarfs. However, the number of identified brown dwarfs is limited due to the difficulty of the lens mass measurement that is needed to check the substellar nature of the lensing object. In this work, we report a microlensing brown dwarf discovered from the analysis of the gravitational binary-lens event OGLE-2014-BLG-1112. We identify the brown-dwarf nature of the lens companion by measuring the lens mass from the detections of both microlens-parallax and finite-source effects. We find that the companion has a mass of (3.03±0.78)×10⁻² M⊙ and it is orbiting a solar-type primary star with a mass of 1.07±0.28 M⊙. The estimated projected separation between the lens components is 9.63±1.33 au and the distance to the lens is 4.84±0.67 kpc. We discuss the usefulness of space-based microlensing observations in detecting brown dwarfs through the channel of binary-lens events.

### Photopolarimetric characteristics of brown dwarfs bearing uniform cloud decks

Photopolarimetric characteristics of brown dwarfs bearing uniform cloud decks
Authors: Sanghavi et al

Abstract: It has long been known that an envelope of scattering particles like free electrons, atoms and molecules, or particulate aggregates like haze or cloud grains affects the intensity and polarization of radiation emitted by a rotating body (Chandrasekhar 1946; Harrington and Collins 1968; Sengupta and Marley 2010; Marley and Sengupta 2011; de Kok et al. 2011). Due to their high rotation rates, brown dwarfs (BDs) are expected to be considerably oblate. We present a conics-based radiative transfer scheme for computing the disc-resolved and disc-integrated polarized emission of an oblate body. Using this capability, we examine the photopolarimetric signal of BDs as a function of the scattering properties of its atmosphere, like cloud optical thickness and cloud grain size, as well as properties specific to the BD such as its oblateness and the orientation of its rotation axis relative to the observer.
The polarizing effect of temperature inhomogeneity caused by gravity-darkening is considered distinctly from the effect of oblateness, revealing that resulting temperature gradients cause intensity differences that can amplify the disc-integrated polarization by a factor of 2. Our examination of the properties of scatterers suggests that the contested relative brightening in the J-band for cooler BDs in the L/T-transition can partly be explained by thick clouds bearing larger-sized grains. Grain-size affects both the intensity and polarization of emitted radiation - as grain-size increases relative to wavelength, the polarization caused by scattering decreases sharply, especially at infrared wavelengths where Rayleigh scattering due to atoms and molecules becomes negligible. We thus claim that the presence of scattering particles is a necessary but not sufficient condition for observing polarization of emitted light.
# Small and Simple DDNS Client

In-a-dyn is a small and simple Dynamic DNS, DDNS, client with HTTPS support. It is commonly available in many GNU/Linux distributions and is used in off-the-shelf routers and Internet gateways to automate the task of keeping your DNS record up to date with any IP address changes from your ISP. It can also be used in installations with redundant (backup) connections to the Internet.

## Supported Providers

The following is a curated list of some of the natively supported DDNS providers. Other providers, e.g. http://twoDNS.de, can usually be supported using the custom provider support. For the full details, see the README, or inadyn.conf(5) found in the tarball.

Some of these services are free of charge for non-commercial use; others take a small fee but also provide more domains to choose from.

## Example

The configuration file on most systems is in /etc/inadyn.conf:

```
# In-A-Dyn v2.0 configuration file format
period = 300

# The FreeDNS username must be in lower case and
# the password (max 16 chars) is case sensitive
provider freedns.afraid.org {
    hostname = some.example.com
}
```

In-a-dyn comes with a systemd unit file, so simply restart the service or send SIGHUP to an already running inadyn to make it reload the .conf file. If you have built Inadyn yourself from source, the .conf file may be located elsewhere; see the --prefix argument to the configure script, use --help, or see the README for details on building.

More examples can be found in the inadyn.conf(5) man page and the README.

Note: The .conf file format syntax changed in v2.0!
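The custom provider support mentioned above uses the same block syntax. The entry below is a sketch only: the key names (ddns-server, ddns-path, checkip-server) are assumed from the inadyn.conf(5) man page and should be verified against the version you have installed, and the host names are placeholders.

```
# Hypothetical custom provider entry for a service such as twoDNS.de.
# Key names and URLs are assumptions -- check inadyn.conf(5) for your version.
custom twoDNS.de {
    username       = myuser
    password       = mypassword
    checkip-server = checkip.two-dns.de
    ddns-server    = update.twodns.de
    ddns-path      = "/update?hostname="
    hostname       = myhost.dd-dns.de
}
```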
# 9.7: Factoring Polynomials Completely

We say that a polynomial is factored completely when we have factored as much as we can and are unable to factor any more. Here are some suggestions that you should follow to make sure that you factor completely:

1. Factor all common monomials first.
2. Identify special products, such as a difference of squares or the square of a binomial, and factor them according to their formulas.
3. If there are no special products, factor using the methods we learned in the previous sections.
4. Look at each factor and see if any of these can be factored further.

Example 1: Factor the following polynomials completely.

(a)

(b)

Solution:

(a) Look for the common monomial factor and factor it out. Recognize the remaining factor as a difference of squares and factor it. If we look at each factor we see that we can't factor anything else, so the polynomial is completely factored.

(b) Recognize this as a perfect square and factor it as the square of a binomial. If we look at each factor we see that we can't factor anything else.

## Factoring Common Binomials

The first step in the factoring process is often factoring the common monomials from a polynomial. Sometimes polynomials have common terms that are binomials. For example, the same binomial may appear in both terms of the polynomial. This common term can be factored by writing it in front of a set of parentheses. Inside the parentheses, we write all the terms that are left over when we divide each term by the common factor. The expression is then completely factored. Let's look at an example.

Example 2: Factor out the common binomial.

Solution: When both terms share a common binomial factor, we factor that binomial out and write what remains of each term inside a set of parentheses.

## Factoring by Grouping

It may be possible to factor a polynomial containing four or more terms by factoring common monomials from groups of terms. This method is called factoring by grouping. The following example illustrates how this process works.

Example 3: Factor a four-term polynomial by grouping.

Solution: There isn't a common factor for all four terms in this example. However, there is a factor of 2 that is common to the first two terms and another monomial factor that is common to the last two terms. Factor 2 from the first two terms and the other common monomial from the last two terms. Now we notice that the same binomial is common to both resulting terms. We factor out that common binomial, and our polynomial is now factored completely.

We know how to factor quadratic trinomials $x^2 + bx + c$ (where the leading coefficient is 1) using methods we have previously learned. To factor a quadratic polynomial $ax^2 + bx + c$ where $a \ne 1$, we follow these steps:

1. We find the product $ac$.
2. We look for two numbers that multiply to give $ac$ and add to give $b$.
3. We rewrite the middle term using the two numbers we just found.
4. We factor the expression by grouping.

Let's apply this method to the following examples.

Example 4: Factor a trinomial with $ac = 12$ by grouping.

Solution: Follow the steps outlined above. The number 12 can be written as a product of two numbers in several ways; choose the pair that also adds to the middle coefficient $b$ and use it to rewrite the middle term, so the trinomial becomes a four-term polynomial. Factor a common monomial from the first two terms and 2 from the last two terms, and then factor out the common binomial.

In this example, all the coefficients are positive. What happens when some of them are negative?

Example 5: Factor a trinomial with $ac = 24$ by grouping.

Solution: The number 24 can be written as a product of two numbers in several ways. Rewrite the middle term using the pair that adds to the middle coefficient, so the problem becomes a four-term polynomial, and factor by grouping: factor a common monomial from the first two terms and $-4$ from the last two terms, then factor out the common binomial.
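To make these techniques concrete, here is a brief worked illustration with generic polynomials chosen for this write-up (they are not the lesson's original exercises). Factoring completely, by taking out the common monomial first and then recognizing a difference of squares:

$$2x^3 - 8x = 2x(x^2 - 4) = 2x(x - 2)(x + 2)$$

Factoring a trinomial with leading coefficient $a \ne 1$ by grouping: for $3x^2 + 8x + 4$, the product is $ac = 12$, and $6 \cdot 2 = 12$ with $6 + 2 = 8$, so

$$3x^2 + 8x + 4 = 3x^2 + 6x + 2x + 4 = 3x(x + 2) + 2(x + 2) = (3x + 2)(x + 2).$$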
## Solving Real-World Problems Using Polynomial Equations

Now that we know most of the factoring strategies for quadratic polynomials, we can see how these methods apply to solving real-world problems.

Example 6: The product of two positive numbers is 60. Find the two numbers if one of the numbers is 4 more than the other.

Solution: Let $x$ be one of the numbers; then $x + 4$ is the other number. The product of these two numbers equals 60, so we can write the equation $x(x + 4) = 60$.

Write the polynomial in standard form: $x^2 + 4x - 60 = 0$.

Factor: we need two numbers that multiply to $-60$ and add to $4$. The pair $10$ and $-6$ is the correct choice, so the expression factors as $(x + 10)(x - 6) = 0$.

Solve: $x = -10$ or $x = 6$. Since we are looking for positive numbers, the answer must be positive: $x = 6$ for one number, and $x + 4 = 10$ for the other number.

## Practice Set

Factor completely.

Factor by grouping.

Solve the following application problems.

1. One leg of a right triangle is seven feet longer than the other leg. The hypotenuse is 13 feet. Find the dimensions of the right triangle.
2. A rectangle has sides of and . What value of gives an area of 108?
3. The product of two positive numbers is 120. Find the two numbers if one number is seven more than the other.
4. Framing Warehouse offers a picture-framing service. The cost for framing a picture is made up of two parts. The cost of glass is $1 per square foot. The cost of the frame is $2 per linear foot. If the frame is a square, what size picture can you get framed for $20.00?

Mixed Review

1. The area of a square varies directly with its side length.
   1. Write the general variation equation to model this sentence.
   2. If the area is 16 square feet when the side length is 4 feet, find the area when .
2. The surface area is the total amount of surface of a three-dimensional figure. The formula for the surface area of a cylinder is $SA = 2\pi r^2 + 2\pi rh$, where $r$ is the radius and $h$ is the height. Determine the surface area of a soup can with a radius of 2 inches and a height of 5.5 inches.
3. Factor . Solve this polynomial when it equals zero.
4. What is the greatest common factor of , and ?
5. Discounts to the hockey game are given to groups with more than 12 people.
   1. Graph this solution on a number line.
   2. What is the domain of this situation?
   3. Will a church group with 12 members receive a discount?
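As an illustration of how the factoring strategy applies to application problem 1 in the practice set above, here is one possible worked solution (a sketch, not the book's official answer key). Let $x$ be the shorter leg, so the longer leg is $x + 7$ and the Pythagorean theorem gives

$$x^2 + (x + 7)^2 = 13^2 \;\Longrightarrow\; 2x^2 + 14x - 120 = 0 \;\Longrightarrow\; x^2 + 7x - 60 = 0 \;\Longrightarrow\; (x + 12)(x - 5) = 0,$$

so the positive solution is $x = 5$, and the legs measure 5 feet and 12 feet.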
# From the Origin of Probability Theory to the Current Problems of Machine Learning

Machine learning stands mainly on the pillars of probability theory and statistics. It was Kolmogorov's celebrated book "Grundbegriffe der Wahrscheinlichkeitsrechnung" (translated as "Foundations of the Theory of Probability") which in 1933 laid the foundation for modern probability theory, and which superseded the many different approaches to describing probability mathematically that existed at the time. Nowadays, the axiomatization suggested by Kolmogorov goes largely unquestioned, in particular in the field of machine learning.

However, machine learning, due to its large potential and widespread use, is facing previously unnoticed problems such as discrimination, distribution shifts, missing interpretability, etc. Research is making progress in tackling these challenges, but it often ignores the foundations on which its machinery stands. What can we learn from different axiomatizations of probability for understanding the current difficulties of machine learning techniques? Are the probabilistic assumptions made to model data reasonable and meaningful, and to what extent?

In this project, we go back in time. Primarily, we contrast the lesser-known axiomatization of probability and randomness by von Mises from 1914 with Kolmogorov's approach. Our first paper already revealed a surprising statement: randomness and fairness can be considered equivalent concepts in machine learning: [Fairness and Randomness in Machine Learning: Statistical Independence and Relativization]

Further results will be published shortly...
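To give a concrete, if simplified, sense of the statistical-independence notion behind that statement, here is a small illustrative sketch (a generic demographic-parity check, not the formal construction from the paper above; all names are invented for this example):

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Illustrative fairness-as-independence check: if binary predictions are
    statistically independent of the sensitive attribute, the positive rate
    is the same in every group and the gap below is (close to) zero."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy data: 1000 binary predictions and a binary group label.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=1000)
predictions = rng.integers(0, 2, size=1000)
print(demographic_parity_gap(predictions, groups))  # small gap: near-independence
```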
# energy field of a newly created particle

The question is: How to calculate the energy of the electrostatic field of a newly created particle. The usual formula involves integration 'all over space'. According to GR I believe that 'all over space' means the interior volume with radius c*t_now, where t_now is the lifetime of the particle at the time of calculation and c is the light speed. Thus the energy stored in the field is a quantity dependent on time evolution.

- The energy of the electrostatic field is not involved in any exchanges so it is a meaningless notion. – Vladimir Kalitvianski Jan 22 '11 at 14:31

Classical physics doesn't have any well-defined framework to describe the "creation of particles". So you have to talk about the real world - or, theoretically speaking, the world of quantum field theory (or something that contains it) - to be able to discuss the creation of particles. However, in quantum field theory, you can't separate the energy coming from the field from the energy coming from the particles themselves in such a sharp way.

In particular, even the lightest charged particle - the electron - has mass equal to 511 keV. That's big, and even if you compute the whole energy of the electrostatic field at the distance of the classical electron radius, $2.82 \times 10^{-15}$ meters, or shorter (not far from the radius of the proton, by the way), you will obtain a smaller energy than the rest energy of the electron.

If you want to say that a (virtual) electron existed at least for a little while, you need to assume that the lifetime was at least the Compton wavelength of the electron (divided by $c$). That's about 100 times longer a distance. And even with this distance, the uncertainty of the energy is comparable to those 511 keV.

This leads me to the key point, which is the uncertainty principle for time and energy. If you determine the timing of the places where you measure the energy with a much better accuracy than the Compton wavelength of the electron (over $c$), then the uncertainty of the energy you measure will inevitably be much greater than the whole electron rest mass (times $c^2$). And if you determine the timing with a worse accuracy, then you will inevitably include almost the whole electric field of the electron, and you will get the electron mass (times $c^2$) of 511 keV with a big accuracy.

At any rate, you won't be able to show any violation of the conservation of total energy, which holds exactly in any system with a time-translational symmetry. You will be extremely far from finding such a contradiction: even if you could attribute parts of energy to fields and particles at every moment, as you suggested (which you cannot, because of the uncertainty principle), you couldn't measure it with the same precision, and even if you could measure it, the variable mass of the electron at the "center" of the field could always compensate any violation of the conservation law, anyway.

So the really important message is that the total energy is conserved, it can be measured as long as we have a long enough time to measure it, and attributing energy to small pieces of space around a particle is not the right way to proceed - neither theoretically nor experimentally. Instead, one should ask what the probabilities of various processes are. External and internal particles will always show that energy is conserved. However, energy can only be measured accurately if it is measured for a long enough time - longer than $\hbar/\Delta E$ where $\Delta E$ is the required precision.
If you think about these matters, you will find out that you are trying to measure the amount and localization of energy - in space and/or in time - more accurately than the uncertainty principle allows. Cheers LM

- Luboš Motl: Thanks for your response (I follow your TRF;) I firmly believe that a particle is not separable from its exterior field and the total energy of the ensemble is constant. The problem is that if we accept that the field spreads out in space at the speed of light then the field 'outside', using a maximal radius by a suitable convention, has an energy content increased over time, implying that the energy located in what we call 'particle' is expected to decrease over time. Even working with probabilities I arrive at the same conclusion. – Helder Velez Jan 19 '11 at 14:27

According to the usual laws of physics, charge cannot be created or destroyed. So it's impossible to create a particle with charge q without annihilating a particle with the same charge, or also creating a particle with charge -q, or some combination of these. Suppose you annihilated a particle with the same charge. Then your problem might better be described as one of "which particle should you attribute the electrostatic energy to, the annihilated one or the created one?" And if instead it was a case of creating two particles with charges +q and -q, then your problem could be described as one of having to separate the electrostatic energies of the two particles.

So I don't see that there is any real problem with defining the energy of the electrostatic field in a particle. It's a matter of semantics. I think the universe has no opinion as to which particle we attribute the electrostatic energy.

- We can invent a time-varying charge density $\rho (t)$ to model appearing and disappearing charge and watch the field evolution. It seems to me I saw somewhere a problem of field propagation from a suddenly created charge. Of course, such a phenomenon does not occur in reality. As I said previously, the electrostatic field energy is not involved in any exchange so its "evolution" does not matter. – Vladimir Kalitvianski Jan 23 '11 at 19:44

- Electrostatic field (or gravitational field) detectors work by energy transfer between the environment (the field) and the matter of the detector. As energy gets transferred we can assume that the field has energy. – Helder Velez Jan 24 '11 at 18:58

- @all The particle (or the pair) spent all its existence impressing the environment (vacuum, space, ...) with a spreading field. When the particle (or pair) got annihilated (converted to gamma-rays) the interaction time is very short. The far field cannot be instantly unset or reverted to the source. Thus the energy of the field 'is lost' into space and keeps spreading. Do you agree that 'all over space' has to be only the space interior to the sphere with radius c*t_now? – Helder Velez Jan 24 '11 at 19:23
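For a rough sense of the numbers quoted in the first answer above, here is a back-of-the-envelope sketch using standard classical electrostatics (an illustration added alongside the thread, not a quote from it). The field energy stored outside a radius $r$ around a point charge $q$ is

$$U(r) = \int_r^{\infty} \frac{\epsilon_0 E^2}{2}\, 4\pi r'^2\, dr' = \frac{q^2}{8\pi\epsilon_0 r},$$

so evaluating at the classical electron radius $r_e = e^2/(4\pi\epsilon_0 m_e c^2) \approx 2.82\times10^{-15}\,\mathrm{m}$ gives $U(r_e) = \tfrac{1}{2} m_e c^2 \approx 256\ \mathrm{keV}$, indeed below the 511 keV rest energy. In the same classical picture, the energy contained between $r_e$ and the radius $c\,t_{\mathrm{now}}$ from the question is $\frac{q^2}{8\pi\epsilon_0}\left(\frac{1}{r_e} - \frac{1}{c\,t_{\mathrm{now}}}\right)$, which approaches the full $\tfrac{1}{2} m_e c^2$ almost immediately as $t_{\mathrm{now}}$ grows.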
# Gunrock: GPU Graph Analytics

Gunrock is a CUDA library for graph-processing designed specifically for the GPU. It uses a high-level, bulk-synchronous, data-centric abstraction focused on operations on a vertex or edge frontier. Gunrock achieves a balance between performance and expressiveness by coupling high-performance GPU computing primitives and optimization strategies with a high-level programming model that allows programmers to quickly develop new graph primitives with small code size and minimal GPU programming knowledge.

For more details, please visit our website, read Why Gunrock, our TOPC 2017 paper Gunrock: GPU Graph Analytics, look at our results, and find more details in our publications. See Release Notes to keep up with our latest changes.

Gunrock is featured on NVIDIA's list of GPU Accelerated Libraries as the only external library for GPU graph analytics.

| Service | System             | Environment                                                | Status |
|---------|--------------------|------------------------------------------------------------|--------|
| Jenkins | Ubuntu 16.04.4 LTS | CUDA 10.0, NVIDIA Driver 410.73, GCC/G++ 5.4, Boost 1.58.0 |        |

## Quickstart

```
git clone --recursive https://github.com/gunrock/gunrock/
cd gunrock
mkdir build
cd build
cmake ..
make -j$(nproc)
make test
```

## Results and Analysis

We are gradually adding summaries of our results to these web pages (please let us know if you would like other comparisons). These summaries also include a table of results along with links to the configuration and results of each individual run. We detail our methodology for our measurements here. For reproducibility, we maintain Gunrock configurations and results in our github gunrock/io repository. We are happy to run experiments with other engines, particularly if those engines output results in our JSON format / a format that can be easily parsed into JSON format.

## Reporting Problems

To report Gunrock bugs or request features, please file an issue directly using Github.

## Publications

Yuechao Pan, Roger Pearce, and John D. Owens. Scalable Breadth-First Search on a GPU Cluster. In Proceedings of the 31st IEEE International Parallel and Distributed Processing Symposium, IPDPS 2018, May 2018. [http]

Yangzihao Wang, Yuechao Pan, Andrew Davidson, Yuduo Wu, Carl Yang, Leyuan Wang, Muhammad Osama, Chenshan Yuan, Weitang Liu, Andy T. Riffel, and John D. Owens. Gunrock: GPU Graph Analytics. ACM Transactions on Parallel Computing, 4(1):3:1–3:49, August 2017. [DOI | http]

Yuechao Pan, Yangzihao Wang, Yuduo Wu, Carl Yang, and John D. Owens. Multi-GPU Graph Analytics. In Proceedings of the 31st IEEE International Parallel and Distributed Processing Symposium, IPDPS 2017, pages 479–490, May/June 2017. [DOI | http]

Yangzihao Wang, Sean Baxter, and John D. Owens. Mini-Gunrock: A Lightweight Graph Analytics Framework on the GPU. In Graph Algorithms Building Blocks, GABB 2017, pages 616–626, May 2017. [DOI | http]

Leyuan Wang, Yangzihao Wang, Carl Yang, and John D. Owens. A Comparative Study on Exact Triangle Counting Algorithms on the GPU. In Proceedings of the 1st High Performance Graph Processing Workshop, HPGP '16, pages 1–8, May 2016. [DOI | http]

Yangzihao Wang, Andrew Davidson, Yuechao Pan, Yuduo Wu, Andy Riffel, and John D. Owens. Gunrock: A High-Performance Graph Processing Library on the GPU.
In Proceedings of the 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP '16, pages 11:1–11:12, March 2016. Distinguished Paper. [DOI | http] Yuduo Wu, Yangzihao Wang, Yuechao Pan, Carl Yang, and John D. Owens. Performance Characterization for High-Level Programming Models for GPU Graph Analytics. In IEEE International Symposium on Workload Characterization, IISWC-2015, pages 66–75, October 2015. Best Paper finalist. [DOI | http] Carl Yang, Yangzihao Wang, and John D. Owens. Fast Sparse Matrix and Sparse Vector Multiplication Algorithm on the GPU. In Graph Algorithms Building Blocks, GABB 2015, pages 841–847, May 2015. [DOI | http] Afton Geil, Yangzihao Wang, and John D. Owens. WTF, GPU! Computing Twitter's Who-To-Follow on the GPU. In Proceedings of the Second ACM Conference on Online Social Networks, COSN '14, pages 63–68, October 2014. [DOI | http] ## Presentations GTC 2018, Latest Development of the Gunrock Graph Processing Library on GPUs, March 2018. [slides | video] GTC 2018, Writing Graph Primitives with Gunrock, March 2018. [slides | video] GTC 2016, Gunrock: A Fast and Programmable Multi-GPU Graph Processing Library, April 2016. [slides] NVIDIA webinar, April 2016. [slides] GPU Technology Theater at SC15, Gunrock: A Fast and Programmable Multi-GPU Graph processing Library, November 2015. [slides | video] GTC 2014, High-Performance Graph Primitives on the GPU: design and Implementation of Gunrock, March 2014. [slides | video] ## Gunrock Developers • Yangzihao Wang, University of California, Davis • Yuechao Pan, University of California, Davis • Yuduo Wu, University of California, Davis • Carl Yang, University of California, Davis • Leyuan Wang, University of California, Davis • Weitang Liu, University of California, Davis • Muhammad Osama, University of California, Davis • Chenshan Shari Yuan, University of California, Davis • Andy Riffel, University of California, Davis • Huan Zhang, University of California, Davis • John Owens, University of California, Davis ## Acknowledgments Thanks to the following developers who contributed code: The connected-component implementation was derived from code written by Jyothish Soman, Kothapalli Kishore, and P. J. Narayanan and described in their IPDPSW '10 paper A Fast GPU Algorithm for Graph Connectivity (DOI). The breadth-first search implementation and many of the utility functions in Gunrock are derived from the b40c library of Duane Merrill. The algorithm is described in his PPoPP '12 paper Scalable GPU Graph Traversal (DOI). Thanks to Erich Elsen and Vishal Vaidyanathan from Royal Caliber and the Onu Team for their discussion on library development and the dataset auto-generating code. Thanks to Adam McLaughlin for his technical discussion. Thanks to Oded Green for his technical discussion and an optimization in the CC primitive. Thanks to the Altair and Vega-lite teams in the Interactive Data Lab at the University of Washington for graphing help. We appreciate the technical assistance, advice, and machine access from many colleagues at NVIDIA: Chandra Cheij, Joe Eaton, Michael Garland, Mark Harris, Ujval Kapasi, David Luebke, Duane Merrill, Josh Patterson, Nikolai Sakharnykh, and Cliff Woolley. This work was funded by the DARPA HIVE program under AFRL Contract FA8650-18-2-7835, the DARPA XDATA program under AFRL Contract FA8750-13-C-0002, by NSF awards OAC-1740333, CCF-1629657, OCI-1032859, and CCF-1017399, by DARPA STTR award D14PC00023, and by DARPA SBIR award W911NF-16-C-0020. 
Our XDATA principal investigator was Eric Whyne of Data Tactics Corporation and our DARPA program manager is Mr. Wade Shen (since 2015), and before that Dr. Christopher White (2012–2014). Thanks to Chris, Wade, and DARPA business manager Gabriela Araujo for their support during the XDATA program.

Gunrock is copyright The Regents of the University of California, 2013–2018. The library, examples, and all source code are released under Apache 2.0.

# Building Gunrock

Gunrock's current release has been tested on Linux Mint 15 (64-bit), Ubuntu 12.04, 14.04 and 15.10 with CUDA 7.5 installed, compute architecture 3.0 and g++ 4.8. We expect Gunrock to build and run correctly on other 64-bit and 32-bit Linux distributions, Mac OS, and Windows.

## Prerequisites

Required Dependencies:
• CUDA (7.5 or higher) is used to implement Gunrock.
• Refer to NVIDIA's CUDA homepage to download and install CUDA.
• Refer to the NVIDIA CUDA C Programming Guide for detailed information and examples on programming CUDA.
• Boost (1.58 or higher) is used for the CPU reference implementations of Connected Component, Betweenness Centrality, PageRank, Single-Source Shortest Path, and Minimum Spanning Tree.
• Refer to the Boost Getting Started Guide to install the required Boost libraries.
• Alternatively, you can also install Boost by running the /gunrock/dep/install_boost.sh script (recommended installation method).
• The ideal location for the Boost installation is /usr/local/. If the build cannot find your Boost library, make sure a symbolic link for the Boost installation exists somewhere in the /usr/ directory.
• ModernGPU and CUB are used as external submodules for Gunrock's APIs.
• You will need to download or clone ModernGPU and CUB, and place them in gunrock/externals
• Alternatively, you can clone gunrock recursively with git clone --recursive https://github.com/gunrock/gunrock
• or, if you already cloned gunrock, run git submodule init and git submodule update under the gunrock/ directory.

Optional Dependencies:
• METIS is used as one possible partitioner to partition graphs for multi-GPU primitive implementations.
• Refer to the METIS Installation Guide
• Alternatively, you can also install METIS by running the /gunrock/dep/install_metis.sh script.
• If the build cannot find your METIS library, please set the METIS_DLL environment variable to the full path of the library.

## Compilation

Simple Gunrock compilation:

    # Using git, download gunrock (recursively):
    git clone --recursive https://github.com/gunrock/gunrock
    # or download the zip archive instead:
    wget --no-check-certificate https://github.com/gunrock/gunrock/archive/master.zip

    # Compiling gunrock:
    cd gunrock
    mkdir build && cd build
    cmake ..
    make

• Binary test files are available in the build/bin directory.
• You can either run the tests for all primitives by typing make check or ctest in the build directory, or do your own testing manually.
• A detailed test log from ctest can be found in /build/Testing/Temporary/LastTest.log; alternatively, you can run the tests with the verbose option enabled: ctest -v.

You can also compile gunrock with more specific/advanced settings using cmake -D[OPTION]=ON/OFF. The following options are available:
• GUNROCK_BUILD_LIB (default: ON) - Builds the main gunrock library.
• GUNROCK_BUILD_SHARED_LIBS (default: ON) - Turn off to build static libraries.
• GUNROCK_BUILD_APPLICATIONS (default: ON) - Set to OFF to build only selected primitives (the corresponding GUNROCK_APP_* option must be set to ON if this option is turned off.)
Example for compiling gunrock with only Breadth First Search (BFS) primitive mkdir build && cd build cmake -DGUNROCK_BUILD_APPLICATIONS=OFF -DGUNROCK_APP_BFS=ON .. make • GUNROCK_APP_BC (default: OFF) • GUNROCK_APP_BFS (default: OFF) • GUNROCK_APP_CC (default: OFF) • GUNROCK_APP_PR (default: OFF) • GUNROCK_APP_SSSP (default: OFF) • GUNROCK_APP_DOBFS (default: OFF) • GUNROCK_APP_HITS (default: OFF) • GUNROCK_APP_SALSA (default: OFF) • GUNROCK_APP_MST (default: OFF) • GUNROCK_APP_WTF (default: OFF) • GUNROCK_APP_TOPK (default: OFF) • GUNROCK_MGPU_TESTS (default: OFF) - If on, tests multiple GPU primitives with ctest. • GUNROCK_GENCODE_SM<> (default: GUNROCK_GENCODE_SM30,35,61=ON) change to generate code for a different compute capability. • CUDA_VERBOSE_PTXAS (default: OFF) - ON to enable verbose output from the PTXAS assembler. ## Generating Datasets All dataset-related code is under the gunrock/dataset/ subdirectory. The current version of Gunrock only supports Matrix-market coordinate-formatted graph format. The datasets are divided into two categories according to their scale. Under the dataset/small/ subdirectory, there are trivial graph datasets for testing the correctness of the graph primitives. All of them are ready to use. Under the dataset/large/ subdirectory, there are large graph datasets for doing performance regression tests. * To download them to your local machine, just type make in the dataset/large/ subdirectory. * You can also choose to only download one specific dataset to your local machine by stepping into the subdirectory of that dataset and typing make inside that subdirectory. ## Hardware Laboratory Tested Hardware: Gunrock with GeForce GTX 970, Tesla K40s. We have not encountered any trouble in-house with devices with CUDA capability >= 3.0. # Why Gunrock? Gunrock is a stable, powerful, and forward-looking substrate for GPU-based graph-centric research and development. Like many graph frameworks, it leverages a bulk-synchronous programming model and targets iterative convergent graph computations. We believe that today Gunrock offers both the best performance on GPU graph analytics as well as the widest range of primitives. • Gunrock has the best performance of any programmable GPU+graph library. Gunrock primitives are an order of magnitude faster than (CPU-based) Boost, outperform any other programmable GPU-based system, and are comparable in performance to hardwired GPU graph primitive implementations. When compared to Ligra, a best-of-breed CPU system, Gunrock currently matches or exceeds Ligra's 2-CPU performance with only one GPU. Gunrock's abstraction separates its programming model from the low-level implementation details required to make a GPU implementation run fast. Most importantly, Gunrock features very powerful load-balancing capabilities that effectively address the inherent irregularity in graphs, which is a problem we must address in all graph analytics. We have spent significant effort developing and optimizing these features---when we beat hardwired analytics, the reason is load balancing---and because they are beneath the level of the programming model, improving them makes all graph analytics run faster without needing to expose them to programmers. • Gunrock's data-centric programming model is targeted at GPUs and offers advantages over other programming models. Gunrock is written in a higher-level abstraction than hardwired implementations, leveraging reuse of its fundamental operations across different graph primitives. 
Gunrock has a bulk-synchronous programming model that operates on a frontier of vertices or edges; unlike other GPU-based graph analytic programming models, Gunrock focuses not on sequencing computation but instead on sequencing operations on frontier data structures. This model has two main operations: compute, which performs a computation on every element in the current frontier, and traversal, which generates a new frontier from the current frontier. Traversal operations include advance (the new frontier is based on the neighbors of the current frontier) and filter (the new frontier is a programmable subset of the current frontier). We are also developing new Gunrock operations on frontier data structures, including neighbor, gather-reduce, and global operations. This programming model is a better fit to high-performance GPU implementations than traditional programming models adapted from CPUs. Specifically, traditional models like gather-apply-scatter (GAS) map to a suboptimal set of GPU kernels that do a poor job of capturing producer-consumer locality. With Gunrock, we can easily integrate compute steps within the same kernels as traversal steps. As well, Gunrock's frontier-centric programming model is a better match for key optimizations such as push-pull direction-optimal search or priority queues, which to date have not been implemented in other GPU frameworks, where they fit poorly into the abstraction. • Gunrock supports more primitives than any other programmable GPU+graph library. We currently support a wide variety of graph primitives, including traversal-based (breadth-first search, single-source shortest path); node-ranking (HITS, SALSA, PageRank); and global (connected component, minimum spanning tree). Many more algorithms are under active development (including graph coloring, maximal independent set, community detection, and graph matching). • Gunrock has better scalability with multiple GPUs on a node than any other graph library. We not only show better BFS performance on a single node than any other GPU framework but also outperform other frameworks, even those customized to BFS, with up to four times as many GPUs. More importantly, our framework supports all Gunrock graph primitives rather than being customized to only one primitive. • Gunrock's programming model scales to multiple GPUs while still using the same code as a single-GPU primitive. Other frameworks require rewriting their primitives when moving from one to many GPUs. Gunrock's multi-GPU programming model uses single-node Gunrock code at its core so that single-GPU and multi-GPU operations can share the same codebase. # Methodology for Graph Analytics Performance We welcome comments from others on the methodology that we use for measuring Gunrock's performance. Currently, Gunrock is a library that requires no preprocessing. By this we mean that Gunrock inputs graphs in a "standard" format, e.g., compressed sparse row or coordinate, such as those available on common graph repositories (SNAP or SuiteSparse (UF)). In our experiments, we use MatrixMarket format. Other graph libraries may benefit from preprocessing of input datasets. We would regard any manipulation of the input dataset (e.g., reordering the input or more sophisticated preprocessing such as graph coloring or CuSha's G-Shards) to be preprocessing. We think preprocessing is an interesting future direction for Gunrock, but have not yet investigated it. 
We hope that any graph libraries that do preprocessing report results with both preprocessed and unmodified input datasets. (That being said, we do standardize input graphs in two ways: before running our experiments, we remove self-loops/duplicated edges. If the undirected flag is set, we convert the input graph to undirected. When we do so, that implies one edge in each direction, and we report edges for that graph accordingly. What we do here appears to be standard practice.) In general, we try to report results in two ways: • Throughput, measured in edges traversed per second (TEPS). We generally use millions of TEPS (MTEPS) as our figure of merit. • Runtime, typically measured in ms. We measure runtime entirely on the GPU, with the expectation that the input data is already on the GPU and the output data will be stored on the GPU. This ignores transfer times (either disk to CPU or CPU to GPU), which are independent of the graph analytics system. It is our expectation that GPU graph analytics will be most effective when (a) they are run on complex primitives and/or (b) run on sequences of primitives, either of which would mitigate transfer times. GPU graph analytics are likely not well suited to running one single simple primitive; for a simple primitive like BFS, it is more expensive to transfer the graph from CPU to GPU than it is to complete the BFS. To calculate TEPS, we require the number of edges traversed (touched), which we count dynamically. For traversal primitives, we note that non-connected components will not be visited, so the number of visited edges may be fewer than the number of edges in the graph. We note that precisely counting edges during the execution of a particular primitive may have performance implications, so we may approximate (see BFS). Notes on specific primitives follow. ## BFS When we count the number of edges traversed, we do so by summing the number of outbound edges for each visited vertex. For forward, non-idempotent BFS, this strategy should give us an exact count, since this strategy visits every edge incident to a visited vertex. When we enable idempotence, we may visit a node more than once and hence may visit an edge more than once. For backward (pull) BFS, when we visit a vertex, we count all edges incoming to that vertex even if we find a visited predecessor before traversing all edges (and terminate early). (To do so otherwise has performance implications.) Enterprise uses the same counting strategy. If a comparison library does not measure MTEPS for BFS, we compute it by the number of edges visited divided by runtime; if the former is not available, we use Gunrock's edges-visited count. ## SSSP In general we find MTEPS comparisons between different approaches to SSSP not meaningful: because an edge may be visited one or many times, there is no standard way to count edges traversed. Different algorithms may not only visit a very different number of edges (Dijkstra vs. Bellman-Ford will have very different edge visit counts) but may also have a different number of edges visited across different invocations of the same primitive. When we report Gunrock's SSSP MTEPS, we use the number of edges queued as the edge-traversal count. To have a meaningful SSSP experiment, it is critical to have varying edge weights. SSSP measured on uniform edge weights is not meaningful (it becomes BFS). In our experiments, we set edge weights randomly/uniformly between 1 and 64. 
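To make the TEPS/MTEPS figure of merit described above concrete, here is a small illustrative sketch. It is not Gunrock code: the CSR array name, the visited-flag array, and the millisecond runtime are assumptions for the example. It counts traversed edges the way described above for forward, non-idempotent BFS (summing the out-degree of every visited vertex) and converts that count plus a GPU-side runtime into MTEPS.

```cpp
#include <cstdint>
#include <vector>

// Count edges the way described above for forward, non-idempotent BFS:
// sum the out-degree (row_offsets[v+1] - row_offsets[v]) of every visited vertex.
// `row_offsets` is a CSR row-pointer array of length num_nodes + 1;
// `visited[v]` marks vertices reached by the traversal.
std::int64_t count_traversed_edges(const std::vector<int>& row_offsets,
                                   const std::vector<bool>& visited) {
  std::int64_t edges = 0;
  for (std::size_t v = 0; v + 1 < row_offsets.size(); ++v) {
    if (visited[v]) edges += row_offsets[v + 1] - row_offsets[v];
  }
  return edges;
}

// MTEPS = (edges traversed) / (runtime in seconds) / 1e6, with the runtime
// measured on the GPU only (no disk-to-CPU or CPU-to-GPU transfer time).
double mteps(std::int64_t edges_traversed, double runtime_ms) {
  return static_cast<double>(edges_traversed) / (runtime_ms / 1000.0) / 1.0e6;
}
```

For example, 120 million counted edges at 40 ms of GPU time would be reported as 3000 MTEPS.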
## BC

If a comparison library does not measure MTEPS for BC, we compute it by twice the number of edges visited in the forward phase divided by runtime (the same computation we use for Gunrock).

## PageRank

We measure PageRank elapsed time on one iteration of PageRank. (Many other engines measure results this way and it is difficult to extrapolate from this measurement to runtime of the entire algorithm.)

# Programming Model

This page describes the programming model we use in Gunrock. Gunrock targets graph computations that are generally expressed as "iterative convergent processes". By "iterative," we mean operations that may require running a series of steps repeatedly; by "convergent," we mean that these iterations allow us to approach the correct answer and terminate when that answer is reached. Many graph-computation programming models target a similar goal. Many of these programming models focus on sequencing steps of computation. Gunrock differs from these programming models in its focus on manipulating a data structure. We call this data structure a frontier of vertices or edges. The frontier represents the subset of vertices or edges that is actively participating in the computation. Gunrock operators input one or more frontiers and output one or more frontiers.

Generically, graph operations can often be expressed via a push abstraction (graph elements "push" local private updates into a shared state) or a pull abstraction (graph elements "pull" updates into their local private state) (Besta et al. publication on push-vs.-pull, HPDC '17). Gunrock's programming model supports both of these abstractions. (For instance, Gunrock's direction-optimized breadth-first-search supports both push and pull BFS phases. Mini-Gunrock supports pull-based BFS and PR.) Push-based approaches may or may not require synchronization (such as atomics) for correct operation; this depends on the primitive. Gunrock's idempotence optimization (within its BFS implementation) is an example of a push-based primitive that does not require atomics.

## Operators

In the current Gunrock release, we support four operators.
• Advance: An advance operator generates a new frontier from the current frontier by visiting the neighbors of the current frontier. A frontier can consist of either vertices or edges, and an advance step can input and output either kind of frontier. Advance is an irregularly-parallel operation for two reasons: 1) different vertices in a graph have different numbers of neighbors and 2) vertices share neighbors. Thus a vertex in an input frontier maps to multiple output items. An efficient advance is the most significant challenge of a GPU implementation.
• Filter: A filter operator generates a new frontier from the current frontier by choosing a subset of the current frontier based on programmer-specified criteria. Each input item maps to zero or one output items.
• Compute: A compute operator defines an operation on all elements (vertices or edges) in its input frontier. A programmer-specified compute operator can be used together with all three traversal operators. Gunrock performs that operation in parallel across all elements without regard to order.
• Segmented intersection: A segmented intersection operator takes two input node frontiers with the same length, or an input edge frontier, and generates both the number of total intersections and the intersected node IDs as the new frontier.

We note that compute operators can often be fused with a neighboring operator into a single kernel.
This increases producer-consumer locality and improves performance. Thus within Gunrock, we express compute operators as "functors", which are automatically merged into their neighboring operators. Within Gunrock, we express functors in one of two flavors: • Cond Functor: Cond functors input either a vertex id (as in VertexCond) or the source id and the dest id of an edge (as in EdgeCond). They also input data specific to the problem being solved to decide whether the vertex or the edge is valid in the outgoing frontier. • Apply Functor: Apply functors take the same set of arguments as Cond functors, but perform user-specified computation on the problem-specific data. ## Creating a New Graph Primitive To create a new graph primitive, we first put all the problem-specific data into a data structure. For BFS, we need a per-node label value and a per-node predecessor value; for CC, we need a per-edge mark value, a per-node component id value, etc. Then we map the algorithm into the combination of the above three operators. Next, we need to write different functors for these operators. Some graph algorithms require only one functor (BFS), but some graph algorithms need more (CC needs seven). Finally, we write an enactor to load the proper operator with the proper functor. We provide a graph primitive template. The problem, functor, and enactor files are under gunrock/app/sample, and the driver code is under tests/sample. # Git Forking Workflow Transitioning over from Git Branching Workflow suggested by Vincent Driessen at nvie to Git Forking Workflow for Gunrock. ## How Forking Workflow Works? As in the other Git workflows, the Forking Workflow begins with an official public repository stored on a server. But when a new developer wants to start working on the project, they do not directly clone the official repository. Instead, they fork the official repository to create a copy of it on the server. This new copy serves as their personal public repository—no other developers are allowed to push to it, but they can pull changes from it (we’ll see why this is important in a moment). After they have created their server-side copy, the developer performs a git clone to get a copy of it onto their local machine. This serves as their private development environment, just like in the other workflows. When they're ready to publish a local commit, they push the commit to their own public repository—not the official one. Then, they file a pull request with the main repository, which lets the project maintainer know that an update is ready to be integrated. The pull request also serves as a convenient discussion thread if there are issues with the contributed code. To integrate the feature into the official codebase, the maintainer pulls the contributor’s changes into their local repository, checks to make sure it doesn’t break the project, merges it into his local master branch, then pushes the master branch to the official repository on the server. The contribution is now part of the project, and other developers should pull from the official repository to synchronize their local repositories. ## Gunrock's Forking Workflow: gunrock/gunrock: • Master Branch: Reserved only for final releases or some bug fixes/patched codes. • Dev Branch: Current working branch where all developers push their changes to. This dev branch will serve as the "next release" gunrock, eliminating the need of managing individual branches for each feature and merging them when it is time for the release. 
personal-fork/gunrock
• Feature Branch: This is the developer's personal repository with their feature branch. Whatever changes they would like to contribute to gunrock must be in their own personal fork. Once it is time to contribute, a pull request is created on GitHub, a reviewer checks it, and the changes are merged into the gunrock/gunrock dev branch.

Note that transitioning to this type of workflow from the branching model doesn't require much effort; we just have to start working on our forks and creating pull requests against the single dev branch.

## How to contribute?
• Fork using GitHub; https://github.com/gunrock/gunrock
• git clone --recursive https://github.com/gunrock/gunrock.git
• git remote set-url --push origin https://github.com/username/gunrock.git This ensures that you are pulling from gunrock/gunrock (staying updated with the main repository) but pushing to your own fork username/gunrock.
• git add
• git commit -m "Describe your changes."
• git push
• Once you've pushed the changes on your fork, you can create a pull request on GitHub to merge the changes.
• The pull request will then be reviewed and merged into the dev branch.

# GoogleTest for Gunrock

Recommended Read: Introduction: Why Google C++ Testing Framework?

When writing a good test, we would like to cover all possible functions (or execute all code lines). What I recommend is to write a simple test, run code coverage on it, and use codecov.io to determine which lines are not executed. This gives you a good idea of what needs to be in the test and what you are missing.

What is code coverage? Code coverage is a measurement used to express which lines of code were executed by a test suite. We use three primary terms to describe each executed line.
• hit indicates that the source code was executed by the test suite.
• partial indicates that the source code was not fully executed by the test suite; there are remaining branches that were not executed.
• miss indicates that the source code was not executed by the test suite.
Coverage is the ratio of hits / (hit + partial + miss). A code base that has 5 lines executed by tests out of 12 total lines will receive a coverage ratio of 41% (rounding down). Below is an example of what lines are a hit and a miss; you can target the lines missed in the tests to improve coverage.
## Example Test Using GoogleTest

    /**
     * @brief BFS test for shared library advanced interface
     * @file test_lib_bfs.h
     */

    // Includes required for the test
    #include "stdio.h"
    #include "gunrock/gunrock.h"
    #include "gmock/gmock.h"
    #include "gtest/gtest.h"

    // Add to gunrock's namespace
    namespace gunrock {

    /* Test function; the test suite in this case is
     * sharedlibrary and the test itself is breadthfirstsearch */
    TEST(sharedlibrary, breadthfirstsearch)
    {
      struct GRTypes data_t;          // data type structure
      data_t.VTXID_TYPE = VTXID_INT;  // vertex identifier
      data_t.SIZET_TYPE = SIZET_INT;  // graph size type
      data_t.VALUE_TYPE = VALUE_INT;  // attributes type

      int srcs[3] = {0, 1, 2};
      struct GRSetup *config = InitSetup(3, srcs);  // gunrock configurations

      int num_nodes = 7, num_edges = 15;  // number of nodes and edges
      int row_offsets[8] = {0, 3, 6, 9, 11, 14, 15, 15};
      int col_indices[15] = {1, 2, 3, 0, 2, 4, 3, 4, 5, 5, 6, 2, 5, 6, 6};

      struct GRGraph *grapho = (struct GRGraph*)malloc(sizeof(struct GRGraph));
      struct GRGraph *graphi = (struct GRGraph*)malloc(sizeof(struct GRGraph));
      graphi->num_nodes = num_nodes;
      graphi->num_edges = num_edges;
      graphi->row_offsets = (void*)&row_offsets[0];
      graphi->col_indices = (void*)&col_indices[0];

      gunrock_bfs(grapho, graphi, config, data_t);

      int *labels = (int*)malloc(sizeof(int) * graphi->num_nodes);
      labels = (int*)grapho->node_value1;

      // IMPORTANT: Expected output is stored in an array to compare against,
      // determining if the test passed or failed
      int result[7] = {2147483647, 2147483647, 0, 1, 1, 1, 2};

      for (int i = 0; i < graphi->num_nodes; ++i) {
        // IMPORTANT: Compare expected result with the generated labels
        EXPECT_EQ(labels[i], result[i]) << "Vectors x and y differ at index " << i;
      }

      if (graphi) free(graphi);
      if (grapho) free(grapho);
      if (labels) free(labels);
    }

    } // namespace gunrock

1. Create a test_<name>.h file and place it in the appropriate directory inside /path/to/gunrock/tests/. I will be using test_bfs_lib.h as an example.
2. In the tests/test.cpp file, add your test file as an include: #include "bfs/test_lib_bfs.h".
3. In your test_<name>.h file, create a TEST() function, which takes two parameters: TEST(<test_suite_name>, <test_name>).
4. Use EXPECT and ASSERT to write the actual test itself. I have provided a commented example above.
5. Now when you run the binary called unit_test, it will automatically run your test suite along with all other google tests as well. This binary is automatically compiled when gunrock is built, and is found in /path/to/builddir/bin/unit_test.

Final Remarks:
• I highly recommend reading the Primer document mentioned at the start of this tutorial. It explains in detail how to write a unit test using google test. My tutorial has been more about how to incorporate it into Gunrock.
• Another interesting read is Measuring Coverage at Google.
• Framework: We are exploring more operators such as neighborhood reduction and segmented intersection. Generally we want to find the right set of operators that can abstract most graph primitives while delivering high performance.
• API: We would like to make an API refactoring to simplify parameter passing and to isolate parts of the library where dependencies are not necessary. The target is to make the frontier concept more clear, and to promote code reuse.
• Primitives: Our near-term goal is to graduate several primitives in dev branch including A* search, weighted label propagation, subgraph matching, triangle counting, and clustering coefficients; implement maximal independent set, max flow, and graph coloring algorithms, build better support for bipartite graph algorithms, and explore community detection algorithms. Our long term goals include algorithms on dynamic graphs, multi-level priority queue support, graph partitioning, and more flexible and scalable multi-GPU algorithms. # Possible Gunrock projects Possible projects are in two categories: infrastructure projects that make Gunrock better but have minimal research value, and research projects that are longer-term and hopefully have research implications of use to the community. For any discussion on these, please use the existing Github issue (or make one). ## Infrastructure projects • Containerize Gunrock (a Docker container) [issue] • Support a Windows build [issue] • Develop a procedure to go from "How does Gunrock do on dataset X" to actually getting results and the right command lines for dataset X. Right now we do this manually with lots of iterations every time. We can automate and document this much better. • Many apps have minimal documentation; we need better text when a user runs ./bin/primitive --help. ## Research projects • Better defaults and/or decision procedures for setting Gunrock parameters (possibly a machine-learning approach for this) • How can we preprocess Gunrock input to increase performance? This could be either reordering CSR for better performance (e.g., reverse Cuthill-McKee) or a new format. • If we had a larger number of X in the hardware—e.g., more registers, more SMs, more threads/SM, more shared memory, bigger cache---how would it help performance? (Where would we want NVIDIA to spend more transistors to best help our performance?) • How much locality is there in frontiers with respect to the "active" frontier vs. the entire set of vertices? Interesting visualization project, for instance: Get a list of the active vertices in a frontier as a function of iteration, so iteration 0 is vertex set A, iteration 1 is vertex set B, etc. For one iteration, visualize the vertex set as a color per chunk of vertices, say, 1024 vertices per pixel. If all 1024 vertices are part of that frontier, the pixel is white, if 0 black, and gray in between. Then each iteration makes another row of pixels. This shows three things: (a) how many vertices are in the frontier compared to not; (b) how much spatial locality there is; (c) how the frontier evolves over time. One of the goals of this effort would be to determine how useful it would be to do some reordering of vertices either statically or dynamically, and either locally (within a chunk of vertices) or globally. # Gunrock Release Notes Gunrock release 0.5 is a feature (minor) release that adds: • New primitives and better support for existing primitives. • New operator: Intersection. • Unit-testing support through Googletest infrastructure. • CPU reference code for correctness checking for some of the primitives. • Support for central integration (Jenkins) and code-coverage. • Overall bug fixes and support for new CUDA architectures. 
## v0.5 Changelog All notable changes to gunrock for v0.5 are documented below: • New primitives: • A* • Weighted Label Propagation (LP) • Minimum Spanning Tree (MST) • Random Walk (RW) • Triangle Counting (TC) • Operator: • Intersection operator (for example, see TC) • Unit-testing: • Googletest support (see unittests directory) • Docs • Support using Slate (see https://github.com/gunrock/docs) • CPU reference code • Run scripts for all primitives • Clang-format based on Google style • see commit aac9add (revert for diff) • Support for Volta and Turing architectures • Regression tests to ctest for better code-coverage • Memset kernels • Multi-gpu testing through Jenkins ### Removed • Subgraph matching and join operator removed due to race conditions (SM is not added to the future release) • Plots generation python scripts removed (see https://github.com/gunrock/io) • MaxFlow primitive removed, wasn't fully implemented for a release (implementation exists in the new API for future release) • Outdated documentation ### Fixed • HITS now produces correct results • Illegal memory access fixed for label propagation (LP) primitive • WTF Illegal memory access fixed for frontier queue (see known issues for problems with this) • Other minor bug fixes ### Changed • Updated README and other docs • Moved previously tests directory to examples • Doesn't require CMakeLists.txt (or cmake) to run make • Moved all docs to Slate ## Known Issues: • WTF has illegal memory access (https://github.com/gunrock/gunrock/issues/503). • A* sometimes outputs the wrong path randomly (https://github.com/gunrock/gunrock/issues/502). • Random Walk uses custom kernels within gunrock, this is resolved for future releases. • CPU Reference code not implemented for SALSA, TC and LP (https://github.com/gunrock/gunrock/issues/232). # Frequently Asked Questions Some of the most common questions we have come across during the life of Gunrock project. If your question isn't already answered below, feel free to create an issue on GitHub. ## What does it do? Gunrock is a fast and efficient graph processing library on the GPU that provides a set of graph algorithms used in big data analytics and visualization with high performance. It also provides a set of operators which abstract the general operations in graph processing for other developers to build high-performance graph algorithm prototypes with minimum programming effort. ## How does it do it? Gunrock takes advantage of the immense computational power available in commodity-level, off-the-shelf Graphics Processing Units (GPUs), originally designed to handle the parallel computational tasks in computer graphics, to perform graph traversal and computation in parallel on thousands of GPU's computing cores. ## Who should want this? Gunrock is built with two kinds of users in mind: The first kind of users are programmers who build big graph analytics and visualization projects and need to use existing graph primitives provided by Gunrock. The second kind of users are programmers who want to use Gunrock's high-level, programmable abstraction to express, develop, and refine their own (and often more complicated) graph primitives. ## What is the skill set users need to use it? For the first kind of users, C/C++ background is sufficient. We are also building Gunrock as a shared library with C interfaces that can be loaded by other languages such as Python and Julia. 
For the second kind of users, they need to have the C/C++ background and also an understanding of parallel programming, especially BSP (Bulk-Synchronous Programming) model used by Gunrock. ## What platforms/languages do people need to know in order to modify or integrate it with other tools? Using the exposed interface, the users do not need to know CUDA or OpenCL to modify or integrate Gunrock to their own tools. However, an essential understanding of parallel programming and BSP model is necessary if one wants to add/modify graph primitives in Gunrock. ## Why would someone want this? The study of social networks, webgraphs, biological networks, and unstructured meshes in scientific simulation has raised a significant demand for efficient parallel frameworks for processing and analytics on large-scale graphs. Initial research efforts in using GPUs for graph processing and analytics are promising. ## How is it better than the current state of the art? Most existing CPU large graph processing libraries perform worse on large graphs with billions of edges. Supercomputer or expensive clusters can achieve close to real-time feedback with high cost on hardware infrastructure. With GPUs, we can achieve the same real-time feedback with much lower cost on hardware. Gunrock has the best performance among the limited research efforts toward GPU graph processing. Our peak Edge Traversed Per Second (ETPS) can reach 3.5G. And all the primitives in Gunrock have 10x to 25x speedup over the equivalent single-node CPU implementations. With a set of general graph processing operators exposed to users, Gunrock is also more flexible than other GPU/CPU graph library in terms of programmability. ## How would someone get it? Gunrock is an open-source library. The code, documentation, and quick start guide are all on its GitHub page. ## Is a user account required? No. One can use either git clone or download directly to get the source code and documentation of Gunrock. ## Are all of its components/dependencies easy to find? Gunrock has three dependencies. Two of them are also GPU primitive libraries which also reside on GitHub. The third one is Boost (Gunrock uses Boost Graph Library to implement CPU reference testing algorithms). All dependencies do not require installation. To use, one only needs to download or git clone them and put them in the according directories. More details in the installation section of this documentation. ## How would someone install it? For C/C++ programmer, integrating Gunrock into your projects is easy. Since it is a template based library, just add the include files in your code. The simple example and all the testrigs will provide detailed information on how to do this. For programmers who use Python, Julia, or other language and want to call Gunrock APIs, we are building a shared library with binary compatible C interfaces. It will be included in the soon-to-arrive next release of Gunrock. ## Can anyone install it? Do they need IT help? Gunrock is targeted at developers who are familiar with basic software engineering. For non-technical people, IT help might be needed. ## Does this process actually work? All the time? On all systems specified? Currently, Gunrock has been tested on two Linux distributions: Linux Mint and Ubuntu. But we expect it to run correctly on other Linux distributions too. We are currently building a CMake solution to port Gunrock to Mac and Windows. The feature will be included in the soon-to-arrive next release of Gunrock. 
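Following up on the installation answer above ("just add the include files in your code"), here is a minimal sketch of calling Gunrock through its C-compatible shared-library interface. It reuses the GRTypes/GRSetup/GRGraph structures, the InitSetup helper, and the gunrock_bfs entry point exactly as they appear in the GoogleTest example earlier on this page, and the toy 7-node CSR graph is the same; field names and function signatures may differ between Gunrock versions, so treat this as a sketch rather than a drop-in program.

```cpp
#include <cstdio>
#include <cstdlib>
#include "gunrock/gunrock.h"  // shared-library C interface

int main() {
  // Same 7-node / 15-edge CSR toy graph used in the GoogleTest example above.
  int row_offsets[8]  = {0, 3, 6, 9, 11, 14, 15, 15};
  int col_indices[15] = {1, 2, 3, 0, 2, 4, 3, 4, 5, 5, 6, 2, 5, 6, 6};

  struct GRTypes data_t;            // data type configuration
  data_t.VTXID_TYPE = VTXID_INT;    // vertex identifier type
  data_t.SIZET_TYPE = SIZET_INT;    // graph size type
  data_t.VALUE_TYPE = VALUE_INT;    // attribute type

  int srcs[1] = {0};                           // BFS source vertex
  struct GRSetup *config = InitSetup(1, srcs); // gunrock configuration

  struct GRGraph *graphi = (struct GRGraph*)malloc(sizeof(struct GRGraph));
  struct GRGraph *grapho = (struct GRGraph*)malloc(sizeof(struct GRGraph));
  graphi->num_nodes   = 7;
  graphi->num_edges   = 15;
  graphi->row_offsets = (void*)row_offsets;
  graphi->col_indices = (void*)col_indices;

  gunrock_bfs(grapho, graphi, config, data_t);  // run BFS on the GPU

  int *labels = (int*)grapho->node_value1;      // per-node BFS depth
  for (int v = 0; v < graphi->num_nodes; ++v)
    printf("node %d: depth %d\n", v, labels[v]);

  free(graphi);
  free(grapho);
  return 0;
}
```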
## How would someone test that it's working with provided sample data? Testrigs are provided as well as a small simple example for users to test the correctness and performance of every graph primitive. ## Is the "using" of sample data clear? On Linux, one only needs to go to the dataset directory and run "make"; the script will automatically download all the needed datasets. One can also choose to download a single dataset in its separate directory. ## How would someone use it with their own data? Gunrock supports Matrix Market (.mtx) file format; users need to pre-process the graph data into this format before running Gunrock.
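For reference, a MatrixMarket coordinate file is plain text: a header line, optional comment lines, a size line (rows, columns, nonzeros), and then one edge per line using 1-based indices. The tiny graph below is a made-up example for illustration, not one of the shipped datasets.

```
%%MatrixMarket matrix coordinate pattern general
% a hypothetical directed graph with 4 vertices and 5 edges
4 4 5
1 2
1 3
2 3
3 4
4 1
```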
$ tar xvfz circos-***.tgz
$ cd circos-0.67-7/bin/
circos-0.67-7/bin$ ./circos
-bash: ./circos: /bin/env: bad interpreter: No such file or directory
circos-0.67-7/bin$ which env
/usr/bin/env

Replace the first line of the circos-0.67-7/bin/circos file,
#! /bin/env perl
with
#!/usr/bin/env perl

Run it again. Although Mac OS X ships with Perl, it still reported errors about missing Perl modules:

circos-0.67-7/bin$ ./circos
*** REQUIRED MODULE(S) MISSING OR OUT-OF-DATE ***
You are missing one or more Perl modules, require newer versions, or some modules failed to load. Use CPAN to install it as described in this tutorial

circos-0.67-7/bin$ ./circos -modules
ok 1.29 Carp
ok 0.36 Clone
missing Config::General
ok 3.40 Cwd
ok 2.145 Data::Dumper
ok 2.52 Digest::MD5
ok 2.84 File::Basename
ok 3.40 File::Spec::Functions
ok 0.23 File::Temp
ok 1.51 FindBin
missing Font::TTF::Font
missing GD
missing GD::Polyline
ok 2.39 Getopt::Long
ok 1.16 IO::File
ok 0.33 List::MoreUtils
ok 1.38 List::Util
missing Math::Bezier
ok 1.998 Math::BigFloat
ok 0.06 Math::Round
missing Math::VecStat
ok 1.03 Memoize
ok 1.32 POSIX
ok 1.08 Params::Validate
ok 1.61 Pod::Usage
missing Readonly
ok 2013031301 Regexp::Common
missing SVG
missing Set::IntSpan
missing Statistics::Basic
ok 2.41 Storable
ok 1.17 Sys::Hostname
ok 2.02 Text::Balanced
missing Text::Format
ok 1.9725 Time::HiRes

Check the Perl version; if it is older than 5.8, update Perl:

circos-0.67-7/bin$ perl -v
This is perl 5, version 18, subversion 2 (v5.18.2) built for darwin-thread-multi-2level (with 2 registered patches, see perl -V for more detail)

circos-0.67-7/bin$ sudo cpan
Sorry, we have to rerun the configuration dialog for CPAN.pm due to some missing parameters. Configuration will be written to
CPAN.pm requires configuration, but most of it can be done automatically. If you answer 'no' below, you will enter an interactive dialog for each configuration option instead.
Would you like to configure as much as possible automatically? [yes] yes
Use of uninitialized value $what in concatenation (.) or string at /System/Library/Perl/5.18/App/Cpan.pm line 553, line 1.
Warning: You do not have write permission for Perl library directories. To install modules, you need to configure a local Perl library directory or escalate your privileges. CPAN can help you by bootstrapping the local::lib module or by configuring itself to use 'sudo' (if available). You may also resolve this problem manually if you need to customize your setup.
What approach do you want? (Choose 'local::lib', 'sudo' or 'manual') [local::lib] local::lib
Autoconfigured everything but 'urllist'.
cpan[1]> install Config::General
TLINDEN/Config-General-2.56.tar.gz
/usr/bin/make install -- OK
cpan[2]> install Font::TTF::Font
MHOSKEN/Font-TTF-1.05.tar.gz
/usr/bin/make -- OK
cpan[3]> install Math::Bezier
ABW/Math-Bezier-0.01.tar.gz
/usr/bin/make install -- OK
cpan[4]> install Math::VecStat
ASPINELLI/Math-VecStat-0.08.tar.gz
/usr/bin/make install -- OK
cpan[5]> install Readonly
SANKO/Readonly-2.00.tar.gz
./Build install -- OK
cpan[6]> install SVG
SZABGAB/SVG-2.63.tar.gz
/usr/bin/make install -- OK
cpan[7]> install Set::IntSpan
SWMCD/Set-IntSpan-1.19.tar.gz
/usr/bin/make install -- OK
cpan[8]> install Statistics::Basic
JETTERO/Statistics-Basic-1.6611.tar.gz
/usr/bin/make install -- OK
cpan[9]> install Text::Format
SHLOMIF/Text-Format-0.59.tar.gz
./Build install -- OK

*****************************
The content between the asterisks can be used as a reference for installing GD on a Mac.
/wp/f4w/2020//FileAttach/2015-05-08-install-gd.tar.gz

Circos official site: http://circos.ca/tutorials
http://zientzilaria.herokuapp.com/blog/2012/06/03/installing-circos-on-os-x/
http://wangqinhu.com/install-gd-on-mavericks/
http://www.jb51.net/os/RedHat/1286.html
# nth term of a sequence
• Sep 9th 2008, 01:59 AM
p vs np
nth term of a sequence
Any suggestions for an approach to proving that the nth term of the following sequence: 1,2,2,3,3,3,4,4,4,4,5,5,5... is [ ((2*n)^(1/2) + (1/2)) ], where [x] denotes the floor value of x and ^ denotes raising to a power
• Sep 9th 2008, 06:12 AM
NonCommAlg
Quote:

Originally Posted by p vs np
Any suggestions for an approach to proving that the nth term of the following sequence: 1,2,2,3,3,3,4,4,4,4,5,5,5... is [ ((2*n)^(1/2) + (1/2)) ], where [x] denotes the floor value of x and ^ denotes raising to a power

let $A_n$ be the n-th term of the sequence. it's easy to see that your sequence can be written as: $A_{\frac{m(m-1)}{2} + k}=m, \ \ m \geq 1, \ 1 \leq k \leq m.$ now let: $\frac{m(m-1)}{2}+k=n,$ where $m \geq 1$ and $1 \leq k \leq m.$ so $A_n=m,$ and: $8n=4m^2-4m+8k=(2m-1)^2+8k-1.$ thus: $8n > (2m-1)^2. \ \ \ (1)$ also we have: $8n=(2m+1)^2-8(m-k)-1.$ thus: $8n < (2m+1)^2. \ \ \ \ (2)$ taking square roots in (1) and (2) will give us: $m < \sqrt{2n} + \frac{1}{2} < m+1,$ which means: $\left \lfloor \sqrt{2n} + \frac{1}{2} \right \rfloor=m=A_n.$
• Sep 9th 2008, 07:57 AM
Plato
Here is another way to generate that sequence: $a_n = \left\lceil {\frac{{ - 1 + \sqrt {8n + 1} }}{2}} \right\rceil$. That is of course using the ceiling function. Notice the sequence changes on the triangular numbers, 1,3,6,10,…; $\frac{{n(n + 1)}}{2}$.
• Sep 9th 2008, 11:31 AM
bkarpuz
First of all, I have to say that this is a very good exercise.
$\begin{array}{cccccccccccc} n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \cdots \\ a_{n} & 1 & 2 & 2 & 3 & 3 & 3 & 4 & 4 & 4 & 4 & \cdots\\ & * & & * & & & * & & & & * & \end{array}$
Consider the terms marked with $*$; for these numbers, we have $n=\sum\limits_{i=1}^{a_{n}}i=\frac{a_{n}(a_{n}+1)}{2}.$ Now, let $a_{n_{0}}\leq a_{n}\leq a_{n^{0}}$, where $n^{0}\geq n$ is the smallest number which is not less than $n$ and satisfies $n^{0}=\frac{a_{n^{0}}(a_{n^{0}}+1)}{2}$ (in short, let us say the $*$ property) and $n_{0}\leq n$ is the greatest number which is less than $n$ and satisfies the $*$ property. Therefore, solving this quadratic for $a_{n^{0}}$, we have $a_{n^{0}}=\frac{-1\pm\sqrt{1+8n^{0}}}{2}$. Note that the desired term is the positive one, hence $a_{n^{0}}=\frac{-1+\sqrt{1+8n^{0}}}{2}$. Thus, the solution is completed when $n$ has the $*$ property. If $n$ does not have the $*$ property, then we see that $n_{0}<n<n^{0}$, and you can easily show that $a_{n}=\left\lceil\frac{-1+\sqrt{1+8n}}{2}\right\rceil$, because of the definitions of $n_{0},n^{0}$, the fact that $a_{n_{0}}+1=a_{n^{0}}=a_{n}$, and the squeezing $n_{0}=\frac{a_{n_{0}}(a_{n_{0}}+1)}{2}\leq n\leq\frac{a_{n^{0}}(a_{n^{0}}+1)}{2}=n^{0}.$
$\therefore$ Plato's answer is right! (Nod)
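As a quick numerical sanity check (my own addition, not part of the original thread), both closed forms agree with the sequence at, say, $n=8$ and $n=11$: $\left\lfloor\sqrt{16}+\tfrac{1}{2}\right\rfloor=\lfloor 4.5\rfloor=4=a_8$ and $\left\lceil\tfrac{-1+\sqrt{65}}{2}\right\rceil=\lceil 3.53\ldots\rceil=4=a_8$, while $\left\lfloor\sqrt{22}+\tfrac{1}{2}\right\rfloor=\lfloor 5.19\ldots\rfloor=5=a_{11}$ and $\left\lceil\tfrac{-1+\sqrt{89}}{2}\right\rceil=\lceil 4.22\ldots\rceil=5=a_{11}$.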
# v18Problem with mej71's Pokemon Followers This thread pertains to v18 of Pokémon Essentials. #### zlohpa ##### Rookie Member I am fairly new to using rpgmaker, but I have prior experience with rom hacking and am trying to make a game for my brothers. I have been watching Thundaga/Camero's tutorial series and it has been very straight forward but I have reached the 10th episode, about creating follower pokemon. I followed everything to a tee. I replaced the Animations rpgxp file with RPGMaker closed, I have the sprites in the character file and I have the three animations in the animations folder. I also have an empty event (18 in this case) and the pbPokemonFollower script written right but I still get this message: --------------------------- Pokemon Essentials --------------------------- [Pokémon Essentials version 18.1] Exception: NoMethodError Message: undefined method -' for nil:NilClass Backtrace: Follow Pokemon:672:in talk_to_pokemon' Follow Pokemon:1562:in update' Scene_Map:229:in main' Scene_Map:226:in loop' Scene_Map:231:in main' Main:45:in mainFunctionDebug' Main:24:in mainFunction' Main:24:in pbCriticalCode' Main:24:in mainFunction' Main:55 When the script is activated, my Pokemon does not show up and when i press c, I get the error message and hear the pokemon cry in the background. I tried Charizard originally and i got it to say that he was happy or something like that a couple times, but that i got the error message again, and again same thing happened when i used bulbasaur. I haven't been able to figure this out online and I think this is the place where i should put this but I'm not really sure and if I put this in the wrong place, I apologize. #### Golisopod User ##### Cooltrainer Member Try looking at the last few replies in the Following Pokemon thread. I've shared a fix there for v18. #### zlohpa ##### Rookie Member Oh my goodness! I was really bummed out after putting everything together and I didn't realize that my version was the problem. Thank you SO much!! I know you can't see how happy I am but I am ecstatic! You're making this world a better place! I will also definitely put the link to the discussion in the comments of that video, just in case anyone missed it like me!!
# Checking and converting [[HH:]MM:]SS input format I would like to use the following routine in my job submission bash script which expects the allowed walltime in the format [[HH:]MM:]SS. (Brackets indicating optionality.) For easier readability I want to convert any input value into the fixed format HH:MM:SS where the value for hours is open ended. I think it works okay, but I am a bit worried if I have overlooked possible points where a user could make a mistake. This could lead to submitting an invalid jobscript and a lot of computation jobs not starting. I also ask myself whether or not my code is optimal. #!/bin/bash # # If there is an unrecoverable error: display a message and exit. # printExit () { case $1 in [iI]) echo INFO: "$2" ;; [wW]) echo WARNING: "$2" ;; [eE]) echo ERROR: "$2" ; exit 1 ;; *) echo "$1" ;; esac } # # Test if a given value is an integer # isnumeric () { if ! expr "$1" : '[[:digit:]]*$' > /dev/null; then printExit E "Value for$2 ($1) is no integer." fi } # # Test if a given value is a proper duration # istime () { local tempTime=$1 # Split time in HH:MM:SS theTempSeconds=${tempTime##*:} isnumeric "$theTempSeconds" "seconds" if [[ ! "$tempTime" == "${tempTime%:*}" ]]; then tempTime="${tempTime%:*}" theTempMinutes=${tempTime##*:} isnumeric "$theTempMinutes" "minutes" fi if [[ ! "$tempTime" == "${tempTime%:*}" ]]; then tempTime="${tempTime%:*}" theTempHours=${tempTime##*:} isnumeric "$theTempHours" "hours" fi if [[ ! "$tempTime" == "${tempTime%:*}" ]]; then printExit E "Unrecognised format." fi theSeconds=expr $theTempSeconds % 60 theTempMinutes=expr$theTempMinutes + $theTempSeconds / 60 theMinutes=expr$theTempMinutes % 60 theHours=expr $theTempHours +$theTempMinutes / 60 printf -v theWalltime "%d:%02d:%02d" $theHours$theMinutes $theSeconds } theWalltime="$1" echo "$theWalltime" istime$theWalltime echo "$theWalltime" exit 0 I use the first two functions quite a lot in my script so I used them for the target function also. ## 3 Answers Here are my suggestions: • The printExit() prints an error message, but does not exit. It would be better to actually exit or rename the function. • Your regex in isnumeric() has two minor issues. Putting a ^ at the beginning will make it clear that the regex matches the whole string. While this is implicit with expr it does not fit with how most implementations work. Another issue is that the * quantifier means 0 or more so it might match 0 or none of the what you're looking for. So an empty string would match what you have since it would be zero digits followed by the end of the string. I couldn't reproduce this failure mode, but I'd recommend this line instead which works for my tests: if ! expr "$1" : '^[[:digit:]]\+$' > /dev/null; then • It would be nice to explain what you're trying to do with those string manipulations with a few comments • More of your variables could be localized so they don't leak out of the function they are used in. • Using the in variable names to avoid conflicting with generic names works, but if you can find more descriptive names it would be easier to follow. Along the same lines all variables are temporary so that doesn't help explain why the variable exists. Maybe checkTime or even check_time. • All of the math is integer math. You won't get any decimals with bash or expr math. https://stackoverflow.com/questions/12722095/how-can-i-get-float-division explains that you can work around with bc. • Optimization Using expr is fine, but it calls an external program. 
Using bash's builtin math would avoid spawning a new process and would cut out a lot of the script's time. ((just_minutes = just_minutes + trunc_seconds/60)) • Thanks for the comments, I'll put them to good use. The printExit() name is a remnant from a time where every case it was used for actually exited the program. By the time I worked the other cases in, I used it so often, I did not want to change it any more. With the third bullet point you just mean that I should be more talkative in the code, explaining what I do? – Martin - マーチン Dec 17 '15 at 5:05 I have updated the code with some of the suggestions from chicks answer. • while the prinExit() routine was initially intended to actually exit in any scenario given, it transmuted over the course of a few iterations in other scripts into a somewhat more advanced info routine. I have now changed the name to reflect that and will slowly roll this out in all of my scripts in the future • the regular expression now matches the whole expression and cannot be zero • the variable names reflect now what they are actually doing and they were set as local ones wherever possible to prevent leaking (which could happen if the user applies the duration option twice) • the transformation is now done by the bash built in $(()), it is integer maths as is the intention • probably the biggest improvement in optimising the code is nesting the if statements while checking the input statement. If the duration was given in SS it is now only checking once if [[ ! "$foo" == "${foo%:*}" ]] and the same applies to minutes MM:SS (2) and hours HH:MM:SS (4, as it needs checking if there are leftovers). Here is the complete code (when the whole script is done I'll make it available through a third party website): #!/bin/bash # # Print information and warnings. # If there is an unrecoverable error: display a message and exit. # printInfo () { case $1 in [iI]) echo INFO: "$2" ;; [wW]) echo WARNING: "$2" ;; [eE]) echo ERROR: "$2" ; exit 1 ;; *) echo "$1" ;; esac } # # Test if a given value is an integer # isnumeric () { if ! expr "$1" : '^[[:digit:]]\+$' > /dev/null; then printInfo E "Value for$2 ($1) is no integer." fi } # # Test if a given value is a proper duration # Code reviewed: http://codereview.stackexchange.com/q/114144/92423 # isduration () { local checkDuration=$1 # Split time in HH:MM:SS # Strips away anything up to and including the rightmost colon # strips nothing if no colon present # and tests if the value is numeric # this is assigned to seconds local truncDuration_Seconds=${checkDuration##*:} isnumeric "$truncDuration_Seconds" "seconds" # If successful value is stored for later assembly # # Check if the value is given in seconds # "${checkDuration%:*}" strips shortest match ":*" from back # If no colon is present, the strings are identical if [[ ! "$checkDuration" == "${checkDuration%:*}" ]]; then # Strip seconds and colon checkDuration="${checkDuration%:*}" # Strips away anything up to and including the rightmost colon # this is assigned as minutes # and tests if the value is numeric local truncDuration_Minutes=${checkDuration##*:} isnumeric "$truncDuration_Minutes" "minutes" # If successful value is stored for later assembly # # Check if value was given as MM:SS same procedure as above if [[ ! 
"$checkDuration" == "${checkDuration%:*}" ]]; then #Strip minutes and colon checkDuration="${checkDuration%:*}" # # Strips away anything up to and including the rightmost colon # this is assigned as hours # and tests if the value is numeric local truncDuration_Hours=${checkDuration##*:} isnumeric "$truncDuration_Hours" "hours" # Check if value was given as HH:MM:SS if not, then exit if [[ ! "$checkDuration" == "${checkDuration%:*}" ]]; then printInfo E "Unrecognised format." fi fi fi # Modify the duration to have the format HH:MM:SS # disregarding the format of the user input local finalDuration_Seconds=$((truncDuration_Seconds % 60)) # Add any minutes that had overflow to the minutes given as input truncDuration_Minutes=$((truncDuration_Minutes + truncDuration_Seconds / 60)) # save what cannot overflow as hours local finalDuration_Minutes=$((truncDuration_Minutes % 60)) # add any minutes the overflew to hours local finalDuration_Hours=$((truncDuration_Hours + truncDuration_Minutes / 60)) # Format string and saveon variable printf -v requestedWalltime "%d:%02d:%02d"$finalDuration_Hours $finalDuration_Minutes \$finalDuration_Seconds } requestedWalltime="$1" echo "$requestedWalltime" isduration $requestedWalltime echo "$requestedWalltime" exit 0 ### An alternative to printExit As @chicks already pointed out, the printExit is not a good name, as it doesn't always exit, and it's also not obvious what a "print exit" method will do. A related point, I find the i-w-e parameters a bit cryptic. I recommend an alternative I use for this purpose: msg() { echo $* } warn() { msg WARNING:$* } fatal() { msg FATAL: $* exit 1 } That is: • warn "some message" is more natural than printExit W "some message" • warn some message also works (without double quotes) • If I want to search for all occurrences of warnings in the code, it's a bit easier to type "warn" than "printExit [wW]". (Actually, since I use vim to edit scripts, I can just press * on the word "warn" to find all occurrences, so that becomes literally a single keystroke.) ### More on naming things Functions prefixed with the word "is" imply some boolean operation, returning true or false. In case of Bash, returning success or failure. As opposed to that: • the isnumeric function exits the program if the param is not numeric • the istime function may lead to exiting in calls to isnumeric, or else if everything was fine it prints time It would be better to rename them. For example, validate_numeric would be a better name, as the "validate" prefix is commonly used for operations that raise exception if the required tests fail. Lastly, it's good to emphasize visually the word boundaries of the distinct terms in function names, for example is_numeric or isNumeric. ### Improving isnumeric I suggest making is_numeric a real boolean function, and introduce a new function validate_numeric to exit in case of not numeric: is_numeric() { expr "$1" : '[[:digit:]]*$' > /dev/null } validate_numeric() { if ! is_numeric "$1"; then fatal "Value for $2 ($1) is no integer." fi } ### Improving is_numeric It's not recommended to use expr anymore today. It spawns an extra process, it's not portable, and better alternatives are usually available. In this example, you can rewrite using [[ ... =~ ... ]] like this: is_numeric() { [[ $1 =~ ^[[:digit:]]+$ ]] } ### Nitpicks The final exit 0 is unnecessary. • is there anything that would be an argument against using error instead of fatal as the name for the function? 
I feel that it would probably be more natural to call it error. – Martin - マーチン Mar 29 '16 at 13:04
• @Martin-マーチン This may be subjective. To me, fatal clearly implies that the program will exit. I can imagine an implementation of error that just prints a message on stderr but doesn't necessarily exit. – janos Mar 29 '16 at 13:59
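To make the suggested pieces concrete, here is a minimal, hypothetical sketch (not part of the reviewed script) that wires the msg/warn/fatal helpers together with the boolean is_numeric and the exiting validate_numeric; the argument handling at the bottom is purely illustrative.

```bash
#!/bin/bash
# Minimal sketch combining the review suggestions above (illustrative only).

msg()   { echo "$*"; }
warn()  { msg "WARNING: $*"; }
fatal() { msg "FATAL: $*"; exit 1; }

# Boolean test: succeeds only for a non-empty string of digits.
is_numeric() { [[ $1 =~ ^[[:digit:]]+$ ]]; }

# Validation wrapper: reports and exits when the test fails.
validate_numeric() {
    if ! is_numeric "$1"; then
        fatal "Value for $2 ($1) is not an integer."
    fi
}

validate_numeric "${1:-}" "seconds"     # e.g. ./sketch.sh 90
msg "ok: $1 seconds"
```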
# Four geometric means are inserted between $2^{9}-1$ and $2^{9}+1$. The product of these means is:

$(a)\;2^{36}-1\qquad(b)\;2^{36}-2^{19}+1\qquad(c)\;2^{36}-2^{18}+1\qquad(d)\;None\;of\;these$
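A short worked check, added here for convenience (not part of the original question page): when four geometric means are inserted between a and b, the six terms form a geometric progression whose means pair up to give the product (ab)^2.

```latex
% Added worked check: four geometric means g_1,...,g_4 between a and b satisfy
% g_1 g_4 = g_2 g_3 = ab, hence
\[
  g_1 g_2 g_3 g_4 = (ab)^2
  = \bigl[(2^{9}-1)(2^{9}+1)\bigr]^{2}
  = \bigl(2^{18}-1\bigr)^{2}
  = 2^{36} - 2^{19} + 1,
\]
% which is option (b).
```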
# Documentation: Mathlib.Data.Set.Functor

# Functoriality of Set

This file defines the functor structure of Set: `f <$> s = f '' s`.

@[simp] theorem Set.seq_eq_set_seq {α : Type u} {β : Type u} (s : Set (α → β)) (t : Set α) :
    (Seq.seq s fun x => t) = Set.seq s t

@[simp] theorem Set.pure_def {α : Type u} (a : α) :
    pure a = {a}

theorem Set.image2_def {α : Type u_1} {β : Type u_1} {γ : Type u_1} (f : α → β → γ) (s : Set α) (t : Set β) :
    Set.image2 f s t = Seq.seq (f <$> s) fun x => t

Set.image2 in terms of monadic operations. Note that this can't be taken as the definition because of the lack of universe polymorphism.
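A minimal usage sketch, added for illustration and not part of the Mathlib page: once this file is imported, `<$>` can be used on `Set` directly (the page states it agrees with the image `f '' s`). The example below only elaborates a term and assumes a Mathlib version in which the instance lives at `Mathlib.Data.Set.Functor`.

```lean
import Mathlib.Data.Set.Functor

-- Illustrative only: with the `Functor Set` instance in scope,
-- mapping `f` over a set of naturals again yields a `Set Nat`.
example (f : Nat → Nat) (s : Set Nat) : Set Nat :=
  f <$> s
```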
IITB/CSE/2013/February/51, TR-CSE-2013-51 Stochastic Model Based Opportunistic Spectrum Access in Wireless Networks
Manuj Sharma and Anirudha Sahoo

We present a stochastic model based opportunistic channel access and transmission scheme for cognitive radio-enabled secondary users for a single data channel. We refer to this scheme as RIBS (Residual Idle Time Distribution based Scheme). In this scheme, the SU uses the residual white space distribution to estimate its transmission duration such that the probability of its interference with the PU is less than a predefined threshold value. We derive analytical formulae for computing the average raw SU frames per transmission operation, average raw SU throughput, and the average sensing overhead for the SU. Using simulation experiments, we show that using RIBS, the SU can use the channel opportunistically without violating the interference probability constraint. We conduct simulation experiments by collecting channel occupancy data due to PU traffic in two different scenarios. In the first scenario, we synthetically generate channel occupancy data due to PU transmissions using two standard distributions (2-phase Erlang and Uniform distribution). We validate the analytical formulations using simulation with synthetic channel occupancy. In the second scenario, we simulate a PU network that runs realistic applications (VoIP and Web browsing) using a TDMA MAC protocol. A pair of sender and receiver SUs uses RIBS to opportunistically transmit on the channel. We analyze the characteristics of white spaces obtained due to these realistic applications, list some of the challenges in using RIBS in realistic scenarios, and provide a comprehensive methodology to use RIBS in a primary network running real applications.

IITB/CSE/2013/February/50, TR-CSE-2013-50 Resource Availability Based Performance Benchmarking of Virtual Machine Migrations
Senthil Nathan, Purushottam Kulkarni, and Umesh Bellur

Virtual machine migration enables load balancing, hot spot mitigation and server consolidation in virtualized environments. Live VM migration can be of two types: adaptive, in which the rate of page transfer adapts to virtual machine behavior (mainly page dirty rate), and non-adaptive, in which the VM pages are transferred at the maximum possible network rate. In either method, migration requires a significant amount of CPU and network resources, which can seriously impact the performance of both the VM being migrated as well as other VMs. This calls for building a good understanding of the performance of migration itself and the resource needs of migration. Such an understanding can help select the appropriate VMs for migration while at the same time allocating the appropriate amount of resources for migration. While several empirical studies exist, a comprehensive evaluation of migration techniques with resource availability constraints is missing. As a result, it is not clear as to which migration technique to employ under a given set of conditions. In this work, we conduct a comprehensive empirical study to understand the sensitivity of migration performance to resource availability and other system parameters (like page dirty rate and VM size). The empirical study (with the Xen Hypervisor) reveals several shortcomings of the migration process. We propose several fixes and develop the Improved Live Migration technique (ILM) to overcome these shortcomings.
Over a set of workloads used to evaluate ILM, the network traffic for migration was reduced by 14-93% and the migration time was reduced by 34-87% compared to the vanilla live migration technique. We also quantified the impact of migration on the performance of applications running on the migrating VM and other co-located VMs.

IITB/CSE/2013/February/49, TR-CSE-2013-49 All page sharing is equal, but some sharing is more equal than others
Shashank Rachamalla, Debadatta Mishra, Purushottam Kulkarni

Content based memory sharing in virtualized environments has been proven to be a useful memory management technique. As part of this work, we are interested in studying the trade-off between the extent of sharing opportunities (compared to actual sharing potential) that can be identified and the overheads for doing the same. Our work is based on the Kernel Virtual Machine (KVM) in Linux, which in turn uses Kernel Samepage Merging (KSM) to identify and exploit sharing opportunities. We instrument the Linux kernel and KSM to log sharing and copy-on-write related parameters for accurate estimation of useful and effective KSM sharing extent. For several combinations of workloads, exhibiting different memory usage characteristics, we benchmark KSM performance (achieved savings vs. total possible savings) and CPU overheads for different KSM configurations. The benchmarking results are used to develop an adaptive scheme to dynamically reconfigure KSM configuration parameters (scan rate and pages per scan), to maximize sharing opportunities at minimal overheads. We evaluate the adaptive technique and compare it with the default settings of KSM to show its efficacy. Based on our experiments we show that the adaptive sharing technique correctly adapts the scan rate of KSM based on sharing potential. With our setup, the adaptive sharing technique yields a factor of 10 improvement in memory savings at a negligible increase in CPU utilization.

IITB/CSE/2012/September/48, TR-CSE-2012-48 Building A Low Cost Low Power Wireless Network To Enable Voice Communication In Developing Regions
Vijay Gabale, Jeet Patani, Rupesh Mehta, Ramakrishnan Kalyanaraman, Bhaskaran Raman

In this work, we describe our experiences in building a low cost and low power wireless mesh network using IEEE 802.15.4 technology to provide telephony services in rural regions of the developing world. 802.15.4 was originally designed for a completely different application space of non-real-time, low data rate embedded wireless sensing. We use it to design and prototype a telephony system, which we term Lo3 (Low cost, Low power, Local voice). Lo3 primarily provides two use cases: (1) local and broadcast voice within the wireless mesh network, and (2) remote voice to a phone in the outside world. A Lo3 network can cost as little as $2K, and can last for several days without power “off the grid”, thus making it an ideal choice to meet the cost and power constraints of rural regions. We test deployed a full-fledged Lo3 system in a village near Mumbai, India for 18 hours over 3 days. We established voice calls with an end-to-end latency of less than 120 ms, with an average packet loss of less than 2%, and a MOS of 3.6, which is considered good in practice. The users too gave a positive response to our system. We also tested Lo3 within our department, where it can be used as a wireless intercom service. To our knowledge, Lo3 is the first system to enable such a voice communication system using 802.15.4 technology, and to show its effectiveness in operational settings.
IITB/CSE/2012/July/46, TR-CSE-2012-46 Host-Bypass: Approximating Node-Weighted Connectivity Problems In Two-Tier Networks
Vijay Gabale, Ashish Chiplunkar

We focus on network design problems in those networks which have two distinct sets or tiers of network nodes: hosts and infrastructure nodes. For instance, wireless mesh networks are often composed of the infrastructure nodes which relay the data between the client nodes. Similarly, the switches (infrastructure) in data center networks sit between the servers (hosts) to route the information flow. In such two-tier networks, a network designer typically requires paths between hosts in the host-tier through nodes in the infrastructure-tier while minimizing the cost in building the network. A subtle constraint, which we call the host-tier constraint, in choosing such paths requires that no host in the host-tier is an intermediate node on any path. Often, this constraint is necessary for designing network topologies in practice. In this work, we first show that the network design algorithms in prior work are inapplicable to the above described Two-Tier Network Connectivity (TTNC) problem with the host-tier constraint. We then investigate the TTNC problem with the goal of minimizing the cost in building the network. We show that the TTNC problem is NP-hard, and present Host-Bypass, the first algorithm for the TTNC problem with a provable performance bound on cost of $O(h^{\frac{1}{2}+\epsilon} \log^2 h)$, where $h$ is the number of host nodes and $\epsilon > 0$. Host-Bypass has a polynomial running time of $O(n^{1+\frac{1}{\epsilon}} h^{2+\frac{2}{\epsilon}})$ ($n$ is the total number of nodes), and our simulation study shows that Host-Bypass performs close to optimal and is 30% better than a heuristic from an algorithm in prior work.

IITB/CSE/2012/July/45, TR-CSE-2012-45 A Per-File Partitioned Page Cache
Prateek Sharma, Purushottam Kulkarni

In this paper we describe a new design of the operating system page cache. Page caches form an important part of the memory hierarchy and are used to access file-system data. In most operating systems, there exists a single page cache whose contents are replaced according to an LRU eviction policy. We design and implement a page cache which is partitioned by file: the per-file page cache. The per-file page cache provides the ability to control fine-grained caching parameters such as cache size and eviction policies for each file. Furthermore, the deterministic cache allocation and partitioning allows for improved isolation among processes. We provide dynamic cache partitioning (among files) by using utility derived from the miss-ratio characteristics of the file accesses. We have implemented the per-file page cache architecture in the Linux kernel and demonstrate its efficacy using disk image files of virtual machines and different types of file access patterns by processes. Experimental results show that the utility-based partitioning can reduce the cache size by up to an order of magnitude while increasing cache hit ratios by up to 20%. Among other features, the per-file page cache has fadvise integration, a scan-resistant eviction algorithm and reduced lock-contention and overhead during the eviction process.

IITB/CSE/2012/February/42, TR-CSE-2012-42 Singleton: System-wide Page Deduplication in Virtual Environments
Prateek Sharma, Purushottam Kulkarni

We consider the problem of providing memory-management in hypervisors and propose Singleton, a KVM-based system-wide page deduplication solution to increase memory usage efficiency.
Specifically, we address the problem of double-caching that occurs in KVM---the same disk blocks are cached at both the host(hypervisor) and the guest(VM) page-caches. Singleton's main components are identical-page sharing across guest virtual machines and an implementation of an exclusive-cache for the host and guest page-cache hierarchy. We use and improve KSM--Kernel SamePage Merging to identify and share pages across guest virtual machines. We utilize guest memory-snapshots to scrub the host page-cache and maintain a single copy of a page across the host and the guests. Singleton operates on a completely black-box assumption---we do not modify the guest or even the hypervisor. We show that conventional operating system cache management techniques are sub-optimal for virtual environments, and how Singleton supplements and improves the existing Linux kernel memory management mechanisms. Singleton is able to improve the utilization of the host cache by reducing its size(by upto an order of magnitude), and increasing the cache-hit ratio(by factor of 2x). This translates into better VM performance(40% faster I/O). Singleton's unified page deduplication and host cache scrubbing is able to reclaim large amounts of memory and facilitates higher levels of memory overcommitment. The optimizations to page deduplication we have implemented keep the overhead down to less than 20% CPU utilization. IITB/CSE/2011/December/41, TR-CSE-2011-41General Caching with Lifetimes Ashish Chiplunkar, Sundar Vishwanathan We consider the problem of caching with lifetimes, where a lifetime is specified whenever a page is loaded into the cache. The copy of a page loaded into the cache may be used to serve requests to the same page, only until its expiration time. We present a generic method to get an algorithm for caching with lifetimes, from an algorithm for caching without lifetimes. This method works for any cost model, and in online as well as offline settings. In the online (resp. offline) setting, the competitive (resp. approximation) ratio of resulting algorithm for caching with lifetimes, is one more than the competitive (resp. approximation) ratio of the original algorithm for caching without lifetimes. Using this method and the existing algorithms for caching without lifetimes, we get an$H_k+1$competitive randomized algorithm and a$2$-approximation algorithm for standard caching with lifetimes, where$k$is the cache size. This is an improvement over the$(2H_k+3)$-competitive algorithm and the$3$-approximation algorithm given by Gopalan et. al. \cite{GopalanKMMV02}. We also get$\mathcal{O}(\log k)$competitive randomized algorithms for various subclasses of general caching such as weighted caching and the Bit and Fault models, asymptotically matching the lower bound of$H_k$on the competitive ratio; and a$5\$ approximation algorithm for the general offline caching problem. IITB/CSE/2011/December/40, TR-CSE-2011-40Importance Aware Bloom Filter for Set Membership Queries in Streaming Data Purushottam Kulkarni, Ravi Bhoraskar, Vijay Gabale, Dhananjay Kulkarni Various data streaming applications like news feeds, stock trading generate a continuous sequence of data items. In such applications, based on the priority of different items, a set of items are more important than others. For instance, in stock trading, some stocks are more important than others, or in a cache-based system, the cost of fetching new objects into the cache is directly related to the the size of the objects. 
Motivated by these examples, our work focuses on developing a time and space efficient indexing and membership query scheme which takes into account data items with different importance levels. In this respect, we propose Importance-aware Bloom Filter (IBF), an extension of Bloom Filter (BF) which is a popular data structure to approximately answer membership queries on a set of items. As part of IBF, we provide a set of insertion and deletion algorithms to make BF importance-aware and to handle a set of unbounded data items. Our comparison of IBF with other Bloom filter-based mechanisms, for synthetic as well as real data sets, shows that IBF performs very well, it has low false positives, and low false negatives for important items. Importantly, we find properties of IBF analytically, for instance, we show that there exists a tight upper bound on false positive rate independent of the size of the data stream. We believe, IBF provides a practical framework to balance the application-specific requirements to index and query data items based on the data semantics. IITB/CSE/2011/September/39, TR-CSE-2011-39"At least one" caching Ashish Chiplunkar, Sundar Vishwanathan We consider a variant of the caching problem, where each request is a set of pages of a fixed size, instead of a single page. In order to serve such a request, we require at least one of those pages to be present in the cache. Each page is assumed to have unit size and unit cost for getting loaded into the cache. We prove lower bounds on the competitive ratio for this problem in both the deterministic and the randomized settings. We also give online algorithms for both settings and analyze them for competitive ratio. IITB/CSE/2011/May/35, TR-CSE-2011-35Efficient Rule Ensemble Learning using Hierarchical Kernels Pratik J., J. Saketha Nath and Ganesh R. This paper addresses the problem of Rule Ensemble Learning (REL), where the goal is simultaneous discovery of a small set of simple rules and their optimal weights that lead to good generalization. Rules are assumed to be conjunctions of basic propositions concerning the values taken by the input features. From the perspectives of interpretability as well as generalization, it is highly desirable to construct rule ensembles with low training error, having rules that are i) simple, {\em i.e.}, involve few conjunctions and ii) few in number. We propose to explore the (exponentially) large feature space of all possible conjunctions optimally and efficiently by employing the recently introduced Hierarchical Kernel Learning (HKL) framework. The regularizer employed in the HKL formulation can be interpreted as a potential for discouraging selection of rules involving large number of conjunctions -- justifying its suitability for constructing rule ensembles. Simulation results show that, in case of many benchmark datasets, the proposed approach improves over state-of-the-art REL algorithms in terms of generalization and indeed learns simple rules. Although this is encouraging, it can be shown that HKL selects a conjunction only if all its subsets are selected, and this is highly undesirable. We propose a novel convex formulation which alleviates this problem and generalizes the HKL framework. The main technical contribution of this paper is an efficient mirror-descent based active set algorithm for solving the new formulation. Empirical evaluations on REL problems illustrate the utility of generalized HKL. 
IITB/CSE/2011/February/34, TR-CSE-2011-34Affinity-aware Modeling of CPU Usage for Provisioning Virtualized Applications Sujesha Sudevalayam, Purushottam Kulkarni While virtualization-based systems become a reality, an important issue is that of virtual machine migration-enabled consolidation and dynamic resource provisioning. Mutually communicating virtual machines, as part of migration and consolidation strategies, may get co-located on the same physical machine or placed on different machines. In this work, we argue the need for network affinity-aware resource provisioning for virtual machines. First, we empirically demonstrate and quantify the resource savings due to colocation of communicating virtual machines. We also discuss the effect on resource usage due to dispersion of previously co-located virtual machines. Next, we build models based on different resource-usage micro-benchmarks to predict the resource usages when moving from non-colocated placements to co-located placements and vice-versa. These generic models can serve as animportant input for consolidation and splitting decisions. Via extensive experimentation, we evaluate the applicability of our models using synthetic and real application workloads. IITB/CSE/2010/December/33, TR-CSE-2010-33Distributed Fault Tolerance for WSNs with Routing Tree Overlays Chilukuri Shanti and Anirudha Sahoo WSNs are inherently power constrained and areoften deployed in harsh environments. As such, node death isa possibility that must be considered while designing protocolsfor such networks. Rerouting of data is generally necessary sothat data from the descendant nodes of the dead node canreach the sink. Since slot allocation in TDMA MAC protocolsis generally done based on the routing tree, all the nodes mustswitch to the new routing tree to avoid collisions. This necessitatesdisseminating the fault information to all the nodes reliably. Wepropose a flooding algorithm for disseminating fault info to thenetwork reliably even in a lossy channel. Simulation results showthat the proposed flooding scheme consumes lesser energy andconverges faster than a simple flooding scheme. Rerouting thedata may result in increased TDMA schedule length. The energyexpenditure of the newly assigned parents also increases becausethey have to relay data from more children than before. Wepropose two distributed parent assignment algorithms in thispaper. The first algorithm minimizes the change in the TDMAschedule and the second algorithm balances the load among thenewly assigned parents. Simulation of random topologies showsthat the increase in the TDMA frame length is lesser than thatwith random parent assignment when the first algorithm is used.We also observe that the lifetime of the most energy constrainednode (after fault recovery) is longer when the second algorithmis used than that when random parent assignment is done. IITB/CSE/2010/November/32, TR-CSE-2010-32On Delay-constrained Scheduling in Multi-radio, Multi-channel Wireless Mesh Vijay Gabale, Ashish Chiplunkar, Bhaskaran Raman In this work, we consider the goal of scheduling the maximum number of voice calls in a TDMA-based multi-radio, multi-channel mesh network. One of main challenges to achieve this goal is the difficulty in providing strict (packet-level) delay guarantees for voice traffic in capacity limited multi-hop wireless networks. 
In this respect, we propose DelayCheck, an online centralized scheduling and call-admission-control (CAC) algorithm which effectively schedules constant-bit-rate voice traffic in TDMA-based mesh networks. DelayCheck solves the joint routing, channel assignment and link scheduling problem along with delay constraint. We formulate a relaxed version of this scheduling problem as an Integer Linear Program (ILP), the LP version of which gives us an optimality upper bound. We compare the output of DelayCheck with the LP-based upper bound as well as with two state-of-the-art prior scheduling algorithms. DelayCheck performs remarkably well, accepting about 93\% of voice calls as compared to LP-based upper bound. As compared to state-of-the-art algorithms, DelayCheck improves scheduler efficiency by more than 34\% and reduces call rejections by 2 fold. We also demonstrate that DelayCheck efficiently exploits the number of channels available for scheduling. With implementation optimizations, we show that DelayCheck has low memory and CPU requirements, thus making it practical. IITB/CSE/2010/August/31, TR-CSE-2010-31Ranking in Information Retrieval Joydip Datta In this report we study several aspects of an information retrieval with focus on ranking. First we introduce basic concepts of information retrieval and several components of an information retrieval system. Then we discuss important theoretical models of IR. Web-specific c topics likelink analysis and anchor text are presented next. We discuss how IR systems are evaluated and different IR evaluation forums. We end the report with a case study of cross lingual information retrieval system at IIT Bombay. In the future scope, we introduce upcoming trends in Web IR like user modeling and Quantum IR. IITB/CSE/2010/June/30, TR-CSE-2010-30Residual White Space Distribution Based Opportunistic Channel Access Scheme for Cognitive Radio Systems Manuj Sharma and Anirudha Sahoo We propose an opportunistic channel access scheme for cognitive radio-enabledsecondary networks. In our work, we model the channel occupancy due to PrimaryUser (PU) activity as a 2-state Alternating Renewal Process, with alternatingbusy and idle periods. Once a Secondary Node (SN) senses the channel idle, theproposed scheme uses the residual idle time distribution to estimate the transmissionduration in the remaining idle time. The SN transmits the frames within thetransmission duration without further sensing the channel, thereby reducing averagesensing overhead per transmitted frame. The analytical formulations used bythe scheme does not require the SN to know the start of the idle period. We validatethe analytical formulations using simulations, and compare the performance of theproposed scheme with a Listen-Before-Talk (LBT) scheme. IITB/CSE/2010/April/29, TR-CSE-2010-29Balancing Response Time and CPU allocation in Virtualized Data Centers using Optimal Controllers Varsha Apte, Purushottam Kulkarni, Sujesha Sudevalayam and Piyush Masrani The cloud computing paradigm proposes a flexible payment model, where a cloud user pays only for the amount of resources used. Thus, the cloud provider must only allocate as many resources to a user, as are to meet client performance requirements. In this paper, we present a control-theoretic approach to tune the CPU resource allocated to a virtual machine (VM) such that application performance metrics are optimized while using minimal CPU resource. 
Moreover, our approach expects no inputs of performance targets to steer the system towards optimal values of CPU share allocations and system performance. We prototype and validate our methodology on a small virtualized testbed using the Xen virtualization environment. A simple server consolidation scheme which triggers VM migrations and powers servers up and down, uses our control-theoretic resource tuning approach. Preliminary results demonstrate that our controller allocates CPU share optimally, even in the presence of changing load, and can work well with a server consolidationscheme to achieve good application performance. IITB/CSE/2010/March/25, TR-CSE-2010-25Reachability of Safe State for Online Update of Concurrent Programs Yogesh Murarka and Umesh Bellur Online update helps in reducing the maintenance downtime by applying a patch to a running process. To avoid the errors that can arise due to an online update, existing online update solutions interleave the update with process execution at specific states called safe states. This approach works well for sequential processes but for multithreaded process it presents the challenge of reaching a safe state across all threads. It turns out that in concurrent processes, even if individual threads can reach a safe state in bounded time a process may take unbounded time to reach a safe state or may never reach it. Therefore, with existing update solutions a user is uncertain whether a patch can be applied or not till the user tries to apply the patch.In this report we address the problem of safe state reachability for multithreaded processes. We prove that identifying whether a process can reach a safe state is undecidable. Hence, we derive a sufficient (conservative) condition for checking the reachability of a safe state. Our novel thread scheduling technique can force a process to reach a safe state in bounded time if the sufficient condition is met. The proposed approach eliminates the uncertainty of online updates. Complexity analysis of our algorithms shows that the practical implementation of our technique will work well with most real-world applications. IITB/CSE/2009/October/24, TR-CSE-2009-24Energy aware contour covering using collaborating mobile sensors Sumana Srinivasan, Krithi Ramamritham, Purushottam Kulkarni Environmental sensing systems are useful in the area of disaster management where the system can provide alerts as well as remediation services in the event of a disaster such as pollutant spills. Recent advances in robotics technology have led to the development of sensors with the ability to sense, move, andwe show how such sensors can be deployed to perform tasks such as locating and covering a hazardous concentration contour in a pollutant spill. As mobile sensors consume energy for movement and have limited available energy, it is important for the sensors to plan their movement such that the coverage is maximized and the error in coverage (i.e., area not relevant to the contour) is minimized. 
In this paper, we address the nontrivial problem of continuously adjusting the direction of movement by presenting a distributed algorithm where a sensor determines the direction of movement by (i) estimating the distance to the contour and perimeter of the contour using locally sensed information as well as information gathered through collaboration and (ii) deciding dynamically whether to directly approach the contour or surround the contour based on the amount of work remaining to be done by the sensor vis a vis the energy remaining. We show that our proposed algorithm has the best coverage vs. error trade-off when compared to algorithms that directly approach or surround the contour. IITB/CSE/2009/October/23, TR-CSE-2009-23Light-trains: An Integrated Optical-Wireless Solution for High Bandwidth Applications in High-Speed Metro-Trains Ashwin Gumaste, Akhil Lodha, Saurabh Mehta, Jianping Wang and Si Qing Zheng Moving trains represent a voluminous mass of usersmoving at high velocities that require bandwidth (on demand). All wireless solutions alone cannot scale efficiently to provide for bandwidth to such fast-moving voluminous users. A new solution is proposed that facilitates dynamic provisioning, good scalabilityand efficient use of available bandwidth. The proposed solution called light-trains seamlessly integrates optical and wireless networking modules to provide an ideal broadband Internet access solution to users in moving trains. The solution identifies the set of requirements that a solution would require – such as fast hand-off, low-cost of deployment, mature technology and ability to provide dynamic bandwidth provisioning (and hence low experienced delay). IITB/CSE/2009/August/22, TR-CSE-2009-22Model-Based Opportunistic Channel Access in Cognitive Radio Enabled Dynamic Spectrum Access Networks Manuj Sharma, Anirudha Sahoo, K.D. Nayak We propose a model-based channel access mecha-nism for cognitive radio-enabled secondary network, which op-portunistically uses the channel of an unslotted primary networkwhen the channel is sensed idle. We refer to primary networkas the network which carry the main traffic in a designatedspectrum band. We have considered IEEE 802.11 WLAN as a defacto primary network operating in ISM band. Our study focuseson a single WLAN channel that is used by WLAN clients anda WLAN server for a mix of Email, FTP, and HTTP-based webbrowsing applications. We model the occupancy of the channel byprimary WLAN nodes as an alternating renewal process. Whenthe secondary sender node has one or more frames to send, thismodel is used by the the sender and receiver pair to estimateresidual idle time duration after the channel is sensed as idle.The secondary sender then opportunistically transmits frames inthat duration without significantly degrading performance of theprimary WLAN applications. Our simulation results show thatthe performance of secondary network is sensitive to the channelsensing duration and that high secondary throughput can beachieved without affecting the primary network significantly bychoosing appropriate value of channel sensing duration. IITB/CSE/2008/December/19, TR-CSE-2008-19Energy Harvesting Sensor Nodes: Survey and Implications Sujesha Sudevalayam, Purushottam Kulkarni Sensor networks with battery-powered nodes can seldom simultaneously meet the design goals of lifetime,cost, sensing reliability and sensing and transmission coverage. 
Energy-harvesting, converting ambient energy toelectrical energy, has emerged as an alternative to power sensor nodes. By exploiting recharge opportunities andtuning performance parameters based on current and expected energy levels, energy harvesting sensor nodes have the potential to address the conflicting design goals of lifetime and performance. This paper surveys various aspects of energy harvesting sensor systems---architecture, energy sources and storage technologies and examples of harvesting based nodes and applications. The study also discusses the implications of recharge opportunities on sensor node operation and design of sensor network solutions. IITB/CSE/2008/October/18, TR-CSE-2008-18Graph Theoretic Concepts for Highly Available Underlay Aware P2P Networks Madhu Kumar S D and Umesh Bellur In our previous work we have demonstrated that underlay awareness is necessary in P2P overlays for the availability of overlay paths and proved that the problem of formation of overlay networks with guaranteed availability is NP complete. Despite this complexity,underlay aware overlay networks, which use knowledge of the underlay to provide guaranteed levels of availability can be efficiently formed and maintained, under aspecified set of constraints. In this technical report, wepresent the graph theoretic concepts developed, leadingto the development of efficient algorithms for availability guaranteed overlay construction and maintenance. IITB/CSE/2008/October/17, TR-CSE-2008-17Towards a Transponder for Serial 100 Gigabit Ethernet using a Novel Optical SERDES Ashwin Gumaste A 100 Gbps serial transponder is proposed using off-the-shelf 10 Gbps optics, a novel combining optical SERDES and mature optical logic. Data-center and metro connectivity is simulated validating the preliminary design. IITB/CSE/2008/August/16, TR-CSE-2008-16TDMA Scheduling in Long-Distance WiFi Networks Debmalya Panigrahi and Bhaskaran Raman In the last few years, long-distance WiFi networks have been used to provide Internet connectivity in rural areas. The strong requirement to support real-time applications in these settings leads us to consider TDMA link scheduling. Such scheduling takes on a different flavour in long-distance mesh networks due to the unique spatial reuse pattern. In this paper, we consider the FRACTEL architecture for long-distance mesh networks. We propose and substantiate a novel angular interference model. This model is not only practical, but also makes the problem of TDMA scheduling tractable. We then make two significant algorithmic contributions. (1) We first present a simple, 3/2 approximate algorithm for TDMA scheduling. (2) We then consider delay-bounded scheduling and present an algorithm which uses at most 1/3rd more time-slots than the optimal number of slots required without the delay bound. Our evaluation on random as well as real network topologies shows that the algorithms are practical, and are more efficient in practice than their worst-case bounds. IITB/CSE/2008/July/15, TR-CSE-2008-15Alternating Renewal Theory-based MAC Protocol for Cognitive Radio Networks Manuj Sharma, Anirudha Sahoo and K.D. Nayak We propose a MAC protocol for cognitive radioenabled secondary networks, which opportunistically use the channels of CSMA/CA-based primary network. We have considered IEEE 802.11 WLAN as a de facto primary network in ISM band. Our study focuses on a single WLAN channel that is used by a WLAN client and server for HTTP-based web browsing application. 
We use the theory of alternating renewal processes to model the occupancy of channel by WLAN nodes. This model is used by a pair of secondary nodes for estimating idle time durations on the channel and opportunistically transmit frames during the estimated idle times to maximize channel throughput without significantly degrading the WLAN and web application performance. IITB/CSE/2008/July/14, TR-CSE-2008-14Channel Modeling based on Interference Temperature in Underlay Cognitive Wireless Networks Manuj Sharma, Anirudha Sahoo and K.D. Nayak Cognitive radio based dynamic spectrum access network is emerging as a technology to address spectrum scarcity. In this study, we assume that the channel is licensed to some primary (licensed) operator. We consider a sensor network with cognitive radio capability that acts as a secondary (unlicensed) network and uses the channel in underlay mode. The secondary network uses interference temperature model [3] to ensure that the interference to the primary devices remain below a predefined threshold. We use Hidden Markov Model (HMM) to model the interference temperature dynamics of the primary channel. On sensing the event of interest, the sensor nodes transmit observation packets (to some central monitoring station), which constitutes the interference traffic on the primary channel. One of the secondary sensor nodes periodically measures the interference temperature on the channel. If the measured interference temperature is greater than a predefined threshold, then the measuring node records symbol 1; otherwise it records symbol 0. Using a sequence of symbols measured over a period of time, the node constructs and trains an HMM using Baum-Welch procedure. The trained HMM is shown to be statistically stable. The node uses this trained HMM to predict the future sequences and use them in computing the value of Channel Availability Metric for the channel, which is used to select a primary channel. Results of application of such trained HMMs in channel selection in multi-channel wireless network are presented. IITB/CSE/2008/July/13, TR-CSE-2008-13Self Organizing Underlay Aware P2P Networks Madhu Kumar S D, Umesh Bellur and V.K Govindan Computing nodes in modern distributed applications on large scale networks form an overlay network, a logical abstraction of the underlying physical network. Reliability of data delivery is an essential requirement for many of these applications. Traditional methods of ensuring guaranteed data delivery among overlay nodes, are built on the idea of duplication of routing tables at multiple overlay nodes, to handle data delivery even when overlay nodes or links fail. This does not guarantee data delivery as the new routing path to the destination node may also contain the failed node or link in the underlay path. We claim that reliable data delivery in overlay networks can be achieved only through underlay awareness at the overlay level. We propose that reliable data delivery can be achieved by using highly available overlays. An overlay network with reliability guarantees has to be self organizing, due to the large size, distributed nature and cost of redeployment. The major contribution of this paper is a chordal ring based self organizing overlay which is self healing upto two node/link failures. We also present asynchronous distributed algorithms that are executed by the nodes that join and leave the overlay, which ensure that the data delivery guarantees of the overlay are maintained. 
The algorithms are proved to be correct under distributed and concurrent executions and the time, space and message complexities are O(pathlength), where \'pathlength\' is the average path length between overlay nodes, counting underlay nodes of degree greater than two. The scalability of the algorithm is demonstrated with a simulation study. IITB/CSE/2008/June/12, TR-CSE-2008-12Correctness of Request Executions in Online Updates of Concurrent Programs Yogesh Murarka and Umesh Bellur Online update is a technique which reduces the disruption caused during software update by applying a patch to a running process. It is a challenge to update a process while ensuring that it continues to operate correctly during and after the update. Most of the continuously running processes concurrently execute the requests arriving in a stream. Online update should guarantee a correct outcome of the requests which execute during and after an update. In this report we provide a solution for the same. We first define a request execution criteria to ensure the correct outcome of the requests which execute concurrently with an update. The report then describes an online update approach that fulfills this criteria. The approach avoids deadlocks during update by analyzing interthread dependencies and guarantees that the process remains in a consistent state after the update. Thus, the update procedure is guaranteed to terminate and the requests which execute during and after an update are ensured the correct execution. Our literature survey reveals that this is the first solution to update concurrent programs while requests are executing and ensure correctness. IITB/CSE/2008/April/11, TR-CSE-2008-11Evolution of Carrier Ethernet – Technology Choices and Drivers Ashwin Gumaste, Naresh Reddy, Deepak Kataria and Nasir Ghani The shift from native Ethernet in LANs to switched Ethernet in WANs has propelled efforts of making Ethernet as an ideal candidate technology for transport. Carrier Ethernet has the advantage of being able to be offered as a service to customers. Two questions that we desire to answer in this paper are (1) how Carrier Ethernet can scale as a service in the metropolitan and access premises and (2) what are the key applications that will most benefit from such technology development. We attempt to answer the first question by first reviewing the multiple strategies adapted by vendors in deploying Carrier Ethernet. We then show how scalable Carrier Ethernet can be used for multiple service offerings, especially focusing on video distribution/SAN/mobile-backhaul. We then discuss the service requirements which need to be satisfied to make the native Ethernet carrier class. The paper also discusses impacting underlying technologies which are crucial in making Carrier Ethernet a success. A simulations study throws light on the different strategies of implementing Carrier Ethernet. IITB/CSE/2008/April/10, TR-CSE-2008-10A CAVALIER Architecture for Metro Data Center Networking Akhil Lodha and Ashwin Gumaste Data Center Networking (DCNs) is a fast emerging enterprise application that is driving metropolitan bandwidth needs. This paper evaluates the needs of this emerging IT-centric, bandwidth voluminous and service rendering application from a metro optical networking perspective. We identify a set of needs called CAVALIER (Consolidation, Automation, Virtualization, Adaptability, Latency, Integration, Economy and Reliability) that are underlying requirements for a network to support DCN services. 
The CAVALIER requirements are met by proposing a metro optical solution which is based on light-trail ROADM technology. Light-trails exhibit properties such as dynamic bandwidth provisioning, optical multicasting, sub-wavelength granular support and low cost of deployment. Adapting light-trails to DCN needs is discussed in this paper through engineering requirements and network-wide design. Each aspect of the CAVALIER requirement is then mapped onto light-trail technology. Simulation results are shown to lead to performance betterments. Key words: data-center metro networks, light-trails, ROADMs

IITB/CSE/2008/January/9, TR-CSE-2008-9 Topology Abstraction Algorithms for Light-Mesh - An Alternate Model for PON
Anuj Agrawal, Ashwin Gumaste, Mohit Chamania and Nasir Ghani

Abstract: Light-mesh, an alternate solution for access networks, is presented. Two heuristic topology algorithms are discussed and simulated showing cost and performance benefits. © 2008 Optical Society of America. OCIS codes: 060.4250 Networks

IITB/CSE/2007/November/8, TR-CSE-2007-8 Complexity Analysis of Availability Models for Underlay Aware Overlay Networks
Madhu Kumar S.D, Umesh Bellur

Availability of an overlay network is a necessary condition for event delivery in event based systems. The availability of the overlay links depends on the underlying physical network. Overlay networks have to be underlay aware in order to provide assured levels of availability. We propose two models of availability for overlay networks, Manifest and Latent Availability. In the Manifest availability model, distinct paths at the overlay level are also node disjoint at the underlay, and hence the alternate paths viewed by the overlay are independent in the underlay also. In the latent availability model, it is only guaranteed that any two overlay nodes have a guaranteed number of node disjoint paths between them in the underlay. We analyze both the models for complexity of formation and maintenance, and prove that in the general case, both are NP-complete. Then we identify a set of practical constraints applicable to large scale networks. We demonstrate that under these constraints, the latent availability constraint becomes a polynomial time problem. We also introduce the concept of reduced underlays, and further reduce the complexity of the problem of determining latent availability overlays.

IITB/CSE/2007/November/7, TR-CSE-2007-7 Tracking Dynamic Boundary Fronts using Range Sensors
S. Duttagupta, K. Ramamritham and P. Kulkarni and K. M. Moudgalya

We examine the problem of tracking dynamic boundaries occurring in natural phenomena using sensor networks. Remotely placed sensor nodes produce noisy measurements of various points on the boundary using range-sensing. Two main challenges of the boundary tracking problem are energy-efficient boundary estimations from noisy observations and continuous tracking of the boundary. We propose a novel approach which uses discrete estimations of points on the boundary using a regression-based spatial estimation technique and a smoothing interpolation scheme to estimate a confidence band around the entire boundary. In addition, a Kalman Filter-based temporal estimation is used to help selectively refresh the estimated boundary at a point only if the boundary is predicted to move out of the previous estimated intervals at that point.
An algorithm for dynamic boundary tracking (DBTR), the combination of temporal estimation with an aperiodically updated spatial estimation, allows us to provide a low overhead solution to track dynamic boundaries that does not require prior knowledge about the nature of the dynamics. Experimental results demonstrate the effectiveness of our algorithm and estimated confidence bands achieve loss of coverage of less than 2% for smooth boundaries. IITB/CSE/2007/October/6, TR-CSE-2007-6A Novel Node Architecture for Light-trail Provisioning in Mesh WDM Metro Networks Ashwin Gumaste, Admela Jukan, Akhil Lodha, Xiaomin Chen and Nasir Ghani We propose an efficient node architecture to support light-trails (intelligent-shared-wavelength bus) in mesh WDM metro networks. Simulation results and performance benefits as compared to legacy technologies are shown. IITB/CSE/2007/October/5, TR-CSE-2007-5Light-trains: An Integrated Optical-Wireless Solution for High Bandwidth Applications in High-Speed Metro-Trains Ashwin Gumaste, Akhil Lodha, Jianping Wang, Nasir Ghani and Si Qing Zheng Moving trains represent a voluminous mass of users moving at high velocities that require bandwidth (on demand). Existing solutions typically based on wireless technology alone cannot scale efficiently to provide for bandwidth to such fast-moving voluminous users. A new approach is proposed that facilitates dynamic provisioning, good scalability and efficient use of available bandwidth. The proposed approach called light-trains seamlessly integrates optical and wireless networking techniques to provide an ideal broadband Internet access solution to users in moving trains. We identify the set of requirements that a solution would require – such as fast hand-off, low-cost of deployment, mature technology and ability to provide dynamic bandwidth provisioning (and hence low experienced delay). The requirements mentioned influence our choices of technologies for different domains in the network: (1) For access to end-users that reside within the train we use an efficient wireless last-inch solution like WiFi in the Point Coordination Function (PCF) mode. (2) For providing access between the wired backbone network and the train we use a modified WiMax system that gives good throughput and efficient channel utilization capacity in a point-to-point configuration. (3) Finally, to be able to provision bandwidth between the core network and the WiMax base-stations we propose the use of light-trail technology at the optical layer. The light-trail technology allows for dynamic provisioning of bandwidth at the optical layer through a unique node architecture and out-of-band control protocol. It turns out that the light-trail control protocol is useful for assisting fast hand-offs. The hand-off time being drastically reduced enables efficient utilization of the wireless channel even at very high speeds. A protocol that enable end-to-end provisioning across the three domains of technology aka light-trails, WiMax and WiFi is also proposed. The proposed protocol is a cornerstone mechanism for providing inter-domain (technology) connectivity in a pragmatic way. Different aspects of the protocol are considered, amongst which delay and efficiency are computed. The protocol and system requirements juxtaposed are simulated extensively. Results pertaining to utilization, delay, efficiency and network wide performance are all showcased. Viability of the model in being able to provide bandwidth to moving users is shown. 
IITB/CSE/2007/July/3, TR-CSE-2007-31-Persistent Collision-Free MAC Protocols for Opportunistic Optical Hyperchannels Jing Chen, Ashwin Gumaste, Jianping Wang and Si Qing Zheng Recently, a new WDM optical network infrastructure named SMART [1], which is capable of dynamically setting up, modifying, and tearing down optical connections, was proposed. The performance of SMART is determined by the performance of its hyperchannels, which are essentially optical buses or light-trails. Previously proposed MAC protocols for unidirectional optical buses can be used as distributed medium access control protocols for hyperchannels. However, these protocols require optical detection and they are either unfair, or not work-conserving. In this paper, we propose a class of workconserving, collision-free, detection-free, and fair MAC protocols for opportunistic hyperchannels which are tailored hyperchannels for SMART. We compare our proposed MAC protocols with known pi-persistent CSMA protocols and priority-based CSMA protocol by simulations. We show that the performances of our proposed protocols are much better than that of pi-persistent CSMA and that of priority-based CSMA protocol in terms of throughput and fairness. IITB/CSE/2007/July/2, TR-CSE-2007-2Merge-by-Wire: Algorithms and System Support Gurulingesh Raravi, Vipul Shingde, Ashish Gudhe, Krithi Ramamritham Automakers are trying to make vehicles more intelligent and safe by embedding processors which can be used to implement by-wire applications for taking smart decisions on the road or assisting the driver in doing the same. Given this proliferation, there is a need to minimize the computing power required without affecting the performance and safety of the applications. The latter is especially important since these by-wire applications are distributed and real-time in nature and hence deal with critical data and involve deadline bound computations on data gathered from the environment. These applications have stringent requirements on the freshness of data items and completion time of the tasks. Our work studies one such safety-related application namely, Automatic Merge Control (AMC) which ensures safe vehicle maneuver in the region where two or more roads intersect.As our contributions, we (i) propose two merge algorithms for AMC: Head of the Lane (HoL) and All Feasible Sequences (AFS) (ii) demonstrate how DSRC-based wireless communication protocol can be leveraged for the development of AMC (iii) present a real-time approach towards designing AMC by integrating mode-change and real-time repository concepts for reducing the processing power requirements and (iv) identify task characteristics of AMC and provide a scheduling strategy to meet their timing requirements. Simulations demonstrate the advantages of using our approach for constructing merge-by-wire systems. IITB/CSE/2007/July/1, TR-CSE-2007-1DynaSPOT: Dynamic Services Provisioned Optical Transport Test-bed <96> Achieving Multi-Rate Multi-Service Dynamic Provisioning using Strongly connected Light-trail (SLiT) Technology Ashwin Gumaste, Nasir Ghani, Paresh Bafna, Akhil Lodha, Anuj Agrawal, Tamal Das and Si Qing Zheng We report on the DynaSPOT (Dynamic Services Provisioned Optical Transport) test-bed <96> a next-generation metro ring architecture that facilitates provisioning of emerging services such as Triple Play, Video-on-Demand (VoD), Pseudo Wire Edge to Edge Emulation (PWE3), IPTV and Data Center Storage traffic. 
The test-bed is based on the recently proposed Strongly connected Light-trail (SLiT) technology that enables the triple features of dynamic provisioning, spatial sub-wavelength grooming and optical multicasting <96> that are quintessential for provisioning of the aforementioned emerging services. SLiT technology entails the use of a bidirectional optical wavelength bus that is time-shared by nodes through an out-of-band control channel. To do so, the nodes in a SLiT exhibit architectural properties that facilitate bus function. These properties at the network side include ability to support the dual signal flow of drop and continue as well as passive add, while at the client side include the ability to store data in order to support time-shared access. The latter (client side) improvisation is done through a new type of transponder card <96> called the trailponder that provides for storage (electronic) of data and fast transmission (burst-mode) onto the SLiT. Further in order to efficiently provision services over the SLiT, there is a need for an efficient algorithm that facilitates meeting of service requirements. To meet service requirements we propose a dynamic bandwidth allocation algorithm that allocates data time-slots to nodes based on a valuation method. The valuation method is principally based on an auctioning scheme whereby nodes send their valuations (bids) and a controller node responds to bids by sending a grant message. The auctioning occurs in the control layer, out-of-band and ahead in time. The novelty of the algorithm is the ability to take into consideration the dual service requirements of bandwidth request as well as delay sensitivity. At the hardware level, implementation is complex <96> as our trailponders are layer-2 devices that have limited service differentiation capability. Here, we propose a dual VLAN tag and GFP based unique approach that is used for providing service differentiation at layer-2. Another innovation in our test-bed is the ability to support multi-speed traffic. While some nodes function at 1 Gbps, and others function at 2.5 Gbps (using corresponding receivers), a select few nodes can support both 1 Gbps and 2.5 Gbps operation. This novel multi-speed support coalesced with the formerly mentioned multi-service support is a much needed boost for services in the metro networks. We showcase the test-bed and associated results as well as descriptions of hardware subsystems.
# nLab: Demazure, lectures on p-divisible groups, II.8, multiplicative affine groups

This entry is about a section of the text Demazure, lectures on p-divisible groups.

###### Remark

Let $G$ be a $k$-group-functor. Then the following conditions are equivalent:

1. $G$ is the Cartier dual of a constant group.
2. $G$ is an affine $k$-group and the $k$-ring $O(G)$ is generated by the morphisms $G\to \mu_k$ (these are called characters of $G$).

###### Definition

A $k$-group satisfying the conditions of the previous remark is called a diagonalizable $k$-group.

###### Theorem

Let $G$ be a $k$-group. Then the following conditions are equivalent:

1. $G\otimes_k k_s$ is diagonalizable.
2. $G\otimes_k K$ is diagonalizable for a field $K\in M_k$.
3. $G$ is the Cartier dual of an étale $k$-group.
4. $\hat D(G)$ is an étale $k$-formal group.
5. $Gr_k(G,\alpha_k)=0$.
6. (If $p\neq 0$) $V_G$ is an epimorphism.
7. (If $p\neq 0$) $V_G$ is an isomorphism.

###### Definition and Remark

1. A $k$-group satisfying the conditions of the previous theorem is called a multiplicative $k$-group.
2. Multiplicative $k$-groups correspond by duality to étale formal $k$-groups.
3. The category $ACm_k$ of multiplicative $k$-groups forms a subcategory of the category $AC_k$ of affine commutative $k$-groups which is stable under forming subgroups, quotients, extensions (this set of properties says that the subcategory is thick) and limits.
4. $ACm_k$ is (contravariantly) equivalent to the category of Galois modules: to $G$ corresponds the Galois module $\hat D(G\otimes_k k_s)(k_s)=Gr_{k_s}(G\otimes_k k_s,\mu_{k_s})$.
5. If $E$ is an étale $k$-group, then $D(E)$ is multiplicative and $\hat D(D(E))=E$, and we have $D(D(E))=E$. The duality is hence given by $E\to D(E)$, $G\to D(G)$, without reference to formal groups.
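For orientation, a standard example (added here; it follows the usual conventions of Demazure's lectures rather than anything stated on this page): the multiplicative group itself is diagonalizable.

```latex
% Added example: the multiplicative group is diagonalizable, since
% O(\mu_k) = k[x, x^{-1}] = k[\mathbb{Z}] is spanned by the characters x^n,
% and \mu_k is the Cartier dual of the constant group (\mathbb{Z})_k:
\[
  \mu_k \;=\; \operatorname{Spec} k[x, x^{-1}]
        \;=\; \operatorname{Spec} k[\mathbb{Z}]
        \;=\; D\bigl((\mathbb{Z})_k\bigr).
\]
```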
## COCI '19 Contest 5 #3 Matching View as PDF Points: 20 (partial) Time limit: 2.5s Memory limit: 512M Problem types You are given n points on a plane with integer coordinates, where n is even. For each integer x, there are at most two given points whose first coordinate equals x. Analogously, for each integer y, there are at most two given points whose second coordinate equals y. You are able to draw horizontal or vertical line segments between pairs of given points. Is it possible to draw n/2 line segments such that each of the given points is an endpoint of exactly one line segment and that no two line segments intersect? #### Input The first line contains the even integer n from the task description. The i-th of the next n lines contains two integers x_i, y_i, the coordinates of the i-th point. #### Output If it is not possible to draw the line segments as explained in the task statement, you should output NE (NO in Croatian) in a single line. Otherwise, you should output DA (YES in Croatian) in the first line. In each of the next n/2 lines you should output two space-separated integers a and b, which represent the indices of the points that are connected with a drawn line segment. #### Scoring Partial scoring: in one subtask, for each integer x there is an even number of points whose first coordinate equals x, and for each integer y there is an even number of points whose second coordinate equals y. #### Sample Input 1 8 1 1 1 3 2 2 2 4 3 1 3 3 4 2 4 4 #### Sample Output 1 DA 1 5 3 7 2 6 4 8 #### Sample Input 2 6 1 2 1 3 2 1 2 4 3 2 3 3 #### Sample Output 2 DA 1 2 3 4 5 6 #### Sample Input 3 2 1 1 2 2 #### Sample Output 3 NE
Mathematics, 01.10.2019 19:40, ineedcoffee # A canister of tennis balls is shaped like a cylinder and holds 3 tennis balls. The tennis balls touch the sides, top and bottom of the canister. Each ball has a diameter of 2.7 inches. How much air space is in the canister?
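A worked computation, assuming the canister's height equals three ball diameters (3 × 2.7 = 8.1 in) and its radius equals a ball's radius (1.35 in):

$$V_{\text{cylinder}} = \pi r^2 h = \pi (1.35)^2 (8.1) \approx 46.4\ \text{in}^3, \qquad V_{\text{balls}} = 3 \cdot \tfrac{4}{3}\pi (1.35)^3 \approx 30.9\ \text{in}^3,$$
$$V_{\text{air}} = V_{\text{cylinder}} - V_{\text{balls}} \approx 15.5\ \text{in}^3.$$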
# Re: [PEAR] recreating pear.ini ```On Thu, Jan 7, 2010 at 11:32 PM, Thomas Anderson <zelnaga@xxxxxxxxx> wrote: > Say I have a local installation of PEAR at c:\php.  I'd like to move > it to c:\php53 but don't want to have to reinstall PEAR.  Problem with > just renaming the directory is that the stuff in pear.ini would still > need to be updated.  Sure, I can do that easily enough, myself, but > maybe there's a way for PEAR to just sorta recalculate the values, > itself? You could try Pyrus, the new PEAR package installer for php 5.3.1+. Pyrus supports moving the entire registry and is not tied to a specific system or user installation.```
Article | Open # Elucidation of the origin of chiral amplification in discrete molecular polyhedra • Nature Communications volume 9, Article number: 488 (2018) • doi:10.1038/s41467-017-02605-x ## Abstract Chiral amplification in molecular self-assembly has profound impact on the recognition and separation of chiroptical materials, biomolecules, and pharmaceuticals. An understanding of how to control this phenomenon is nonetheless restricted by the structural complexity in multicomponent self-assembling systems. Here, we create chiral octahedra incorporating a combination of chiral and achiral vertices and show that their discrete nature makes these octahedra an ideal platform for in-depth investigation of chiral transfer. Through the construction of dynamic combinatorial libraries, the unique possibility to separate and characterise each individual assembly type, density functional theory calculations, and a theoretical equilibrium model, we elucidate that a single chiral unit suffices to control all other units in an octahedron and how this local amplification combined with the distribution of distinct assembly types culminates in the observed overall chiral amplification in the system. Our combined experimental and theoretical strategy can be applied generally to quantify discrete multi-component self-assembling systems. ## Introduction Since Pasteur1 discovered the spontaneous resolution in ammonium sodium tartrate and stated that life was intimately related to the asymmetry of the universe, the phenomenon of chirality and how it transfers has intrigued scientists2,3,4,5. Because of its profound impact on life science6,7, molecular motors8, and practical applications such as the asymmetric synthesis9 and enantioseparation10 of pharmaceuticals, the chiral amplification occurring in molecular reactions and self-assembling systems is of particular importance. Though systematic combined experimental and theoretical investigations have been well established for the chiral amplification in molecular reactions in asymmetric catalysis11,12,13 and autocatalysis14, the elucidation of the chiral amplification in self-assembling systems15,16,17,18,19,20,21,22,23,24,25,26,27,28,29 by explicit structural information and rational theoretical modelling remains a big challenge30. Amplification of chirality in self-assembling systems is usually denoted as the “sergeants-and-soldiers” effect30,31, referring to the ability of a few chiral units (the “sergeants”) to control a large number of achiral units (the “soldiers”). Since the pioneering work of Green et al.31, many studies have reported on the amplification of chirality and the sergeants-and-soldiers effect in “infinite” systems like helical polymers32,33,34,35,36, one-dimensional supramolecular polymers37,38,39,40,41,42,43,44, and two-dimensional (2D) supramolecular networks45,46,47,48. Although these studies provided important prototypes to mimic the chirality transfer in biopolymers and substantially progressed the fabrication of functional soft materials18,49,50,51, the product in such infinite systems usually comprises a mixture composed of polymers (or assemblies) with highly diverse numbers of repeating units—in other words, a “company” containing various kinds of “squads” consisting of distinct numbers of sergeants and soldiers. In contrast, to obtain an in-depth understanding of the amplification of chirality, it is crucial to design systems that provide products with explicit compositions on the molecular level.
An elegant approach to incorporate both chiral and achiral units into discrete assemblies has been described by Reinhoudt and colleagues52,53, forming hydrogen-bonded double rosettes (squads) that each contains precisely six units of sergeants and/or soldiers. The product (company) includes only limited kinds of discrete assemblies (squads), allowing for the development of kinetic models to fit the experimental data and to simulate chiral amplification in dynamic systems. Another advantage of discrete assemblies over polymers is that they can be well characterised by nuclear magnetic resonance (NMR) and single-crystal X-ray diffraction analyses. Taking advantage of these features, Nitschke and colleagues54,55,56,57 systematically studied the amplification of chirality and long-range stereochemical communication in discrete metal–organic cages. However, the thorough investigation of a company consisting of different kinds of squads was hindered by the difficulty in separating these non-covalent interaction-based discrete assemblies. As a result of the experimental difficulty in the explicit characterisation of multi-component self-assembling systems, the corresponding theoretical studies are limited58. Apart from a few theoretical models42,52,59,60,61,62,63, theoretical simulation taking account of molecular information, as well as the synergy between experimental and theoretical studies, is still to be established. Herein, we report a strategy to investigate the sergeants-and-soldiers effect not only within a mixture product (company), but also within each kind of discrete assembly (squad) individually. Pure organic octahedra incorporating hexapropyl-truxene faces and both chiral and achiral vertices are constructed through dynamic covalent chemistry. The octahedra containing different numbers of chiral vertices can be separated by chiral high-performance liquid chromatography (HPLC), where the isolated octahedra are sufficiently stable for subsequent NMR and spectroscopic investigations. Such analysis of separated assemblies combined with structural analyses and theoretical simulations allows us to reveal the origin of the strong amplification of chirality in discrete assemblies. Moreover, with a theoretical model for discrete assemblies based on a mass-balance approach, we rationalise the product distributions as a function of the fraction of chiral units, thus unveiling the fundamental mechanisms of the sergeants-and-soldiers effect. The model results perfectly fit our experimental observations and reveal the relationship between the observed sergeants-and-soldiers effect and the relative free energies of the various octahedron types quantitatively. ## Results ### Chiral octahedra with facial rotational patterns As previously reported64, chiral organic octahedra with facial rotational patterns can be constructed from four equivalents of truxene building block and six diamines through dynamic covalent chemistry. In this study, we change the truxene building block by replacing the butyl groups with propyl groups to better separate the mixture of octahedra products. The 5,5,10,10,15,15-hexapropyl-truxene-2,7,12-tricarbaldehyde (TR) was readily synthesised (see Supplementary Fig. 1 and Supplementary Methods) and showed a similar behaviour to that of the previously reported butyl analogues64; it can also react with ethylene diamine (EDA) to form the octahedron 16 with the composition of TR4EDA6 (Fig. 1), as found by NMR and high-resolution mass spectroscopy (see Supplementary Methods).
Single-crystal X-ray diffraction (Supplementary Fig. 2 and Supplementary Tab. 1) and 2D NMR analyses (Supplementary Figs. 3–9) confirmed that the thermodynamic product only contains two enantiomers with homodirectional facial patterns: the (CCCC)-16 with homodirectional clockwise (C) patterns on the exterior faces, and the (AAAA)-16 with anticlockwise (A) patterns. In addition, upon reaction with chiral (R,R)-diaminocyclohexane (CHDA) instead of EDA, the CHDA vertices dominate the facial directionality of the TR, leading to the diastereoselective synthesis of a single thermodynamic product, (AAAA)-26, as confirmed by the 2D NMR analyses (Supplementary Figs. 10–16). Density functional theory (DFT) calculations revealed a high similarity between the structures of (AAAA)-26 and (AAAA)-16 in terms of the overall conformations and detailed N-C-C-N bonds, angles, and dihedrals on diamine vertices (Supplementary Figs. 17 and 18). DFT calculations also showed that the (CCCC)-1521-(S,S)-CHDA is indeed the exact mirror image of (AAAA)-1521-(R,R)-CHDA (Supplementary Fig. 19). However, the energy difference between (AAAA)-1521-(R,R)-CHDA and (CCCC)-1521-(R,R)-CHDA is approximately 71 kJ mol−1. This large difference is consistent with the absence of the CCCC-diastereomers in our experiments. ### Sergeants-and-soldiers effect in a company of mixed squads We further employed mixtures of achiral EDA and chiral CHDA to form new octahedra with mixed vertices. EDA and CHDA in various ratios (10:0, 9:1,…, 0:10) were mixed with TR and catalytic trifluoroacetic acid (TFA) to form dynamic libraries65,66 in toluene (see Supplementary Tab. 2 for details). The dynamic libraries were immersed in a thermostated bath at 60 °C for 48 h, leading to equilibrium distributions of mixed products incorporating both EDA-linked and CHDA-linked vertices in a single octahedron (Fig. 2a). These octahedra are designated as 1n2m, where n and m represent the number of EDA-linked and CHDA-linked vertices, respectively. Circular dichroism (CD) spectra of the product mixtures were measured after thermodynamic equilibrium was reached (Fig. 2b). The mixtures with different EDA/CHDA ratios exhibit similar CD spectra with increasing intensities upon increasing fraction of CHDA. The plot of the relative CD intensity (measured at 340 nm) as a function of the molar percentage of chiral CHDA-linked vertices clearly shows a nonlinear chiral amplification upon the increase of the fraction of CHDA (Fig. 2c). The observed amplification in chiroptical response in truxene octahedra suggests the chiral CHDA can regulate the achiral EDA. This phenomenon is similar to the chiral amplification in some other discrete assemblies formed by hydrogen bonds52,53 or metal–organic coordination54,55,56,57. Regarding the achiral components (EDA) as soldiers and the chiral components (CHDA) as sergeants, each discrete assembly, i.e., individual octahedron, can be viewed as a squad and the equilibrium distributions of mixed products can be considered as a company containing various types of squads. All studies on discrete assemblies to date have only revealed the average sergeants-and-soldiers effect in a company, which incorporates various types of squads. To understand the sergeants-and-soldiers effect in discrete assemblies in depth, it is necessary to scrutinise the distinct squads rather than the integrated company.
### Sergeants-and-soldiers effect in isolated octahedra squads Due to the rigidity of the octahedra and the relative stability of imine bonds, we are able to separate the octahedra based on their composition as well as their configuration by chiral HPLC65. Eight fractions were found in the mixed equilibrium product containing 50% CHDA (Fig. 3a). These fractions were isolated and individually characterised by mass, CD, and NMR spectroscopies. The matrix-assisted laser desorption ionisation time-of-flight mass spectra of the first six fractions (Supplementary Fig. 20) confirmed their [4 + 6] compositions corresponding to the octahedra 26, 1125, 1224, 1323, 1422, and 1521, respectively, whereas both of the remaining two fractions matched the composition of octahedron 16. Although there are possible stereoisomers for octahedra 1224, 1323, and 1422 as shown in Fig. 2a, we did not observe any sign of corresponding peak splitting in the HPLC spectra. CD spectra of the first six octahedra 1n2m ($0≤n≤5,m=6-n$) and the seventh fraction with the composition of 16 are almost identical (Fig. 3b, c). According to our previous study64 and ZINDO/S simulation (Supplementary Figs. 21 and 22), the CD spectra are strongly dependent on the facial configuration rather than the vertex components, and the octahedra with different facial configurations (i.e., AAAA, AAAC, AACC, ACCC, and CCCC) exhibit considerably different CD spectra. Therefore, all six octahedra 1n2m ($0≤n≤5,m=6-n$) and the octahedra 16 in the seventh fraction are in the same facial configuration, i.e., the AAAA as in the (AAAA)-26. The octahedra in the eighth fraction can accordingly be assigned as (CCCC)-16, since they exhibit a mirror-image CD spectrum to that of the (AAAA)-16 in the seventh fraction. Every octahedron containing a CHDA-linked vertex has the same AAAA configuration; hence all EDA-linked vertices in these octahedra are in the gauche conformation with a dihedral angle of ca. −60°, as shown in Fig. 2a. This indicates a strong geometrical control by the CHDA sergeants: just a single CHDA-linked vertex (sergeant) suffices to control the remaining EDA-linked vertices (soldiers) in any octahedron (squad), as illustrated in Fig. 4a for a 1521 octahedron. ### Structural basis of chiral amplification in a single octahedron Further understanding of the strong leadership of the CHDA sergeant was revealed by NMR investigation. As a representative example, the 1H NMR spectrum of 1521 (Fig. 4b, c) exhibits two adjacent single peaks for the imine protons on a CHDA-linked vertex (Hc′, 8.32 ppm) or on an EDA-linked vertex (Hc, 8.30 ppm). The ratio between the peak areas of Hc′ and Hc is 1:5, in accordance with the number of CHDA- and EDA-linked vertices. Being close to the imine bond on a vertex, the Hd on the truxene backbone is also influenced by the CHDA-linked vertex, giving rise to two single peaks in the ratio of 1:5 as well. In contrast, the other two protons on the truxene backbone (Ha and Hb) are less influenced, and they only generate two doublets because of their spin–spin coupling to each other. Except for the influence of the CHDA-linked vertex, the overall spectrum reveals only a single set of peaks for the protons on the truxene backbone, suggesting the truxene faces of 1521 adopt a T-symmetry with the facial configuration of AAAA or CCCC64,67. Otherwise, the resonances would further split into three sets (for C3-symmetric CCCA and CAAA) or six sets (C2-symmetric CCAA) due to different facial configurations64.
Considering the CD analysis, the configuration of 1521 is assigned to be AAAA. The 1H NMR spectra of the octahedra 1n2m ($0≤n≤5,m=6-n$) are rather similar (Supplementary Fig. 23), corroborating that all of the six octahedra with CHDA-linked vertices have the same AAAA facial configuration. The nuclear Overhauser effect (NOE) crosspeak between Hc and Hb (instead of Hd) shown in the NOE spectrum of 1521 (Fig. 4b, d) indicates that all imine bonds rotate in the same anticlockwise direction as the sp3 carbons of the truxene core. The NOE crosspeak between Hc and He1 (instead of He2) indicates that all five EDA-linked vertices are in the same gauche conformation as the CHDA-linked vertex. The structural rigidity of the truxene octahedra and the consistency of vertex conformation are also confirmed by the NOE spectra (Fig. 4d) and the single-crystal analysis of 16 (Supplementary Fig. 2). We presume the structural rigidity and the conformational consistency are crucial to the efficient chiral amplification inside the octahedra. To shed light on the conformational consistency of the EDA-linked vertices, we calculated the free energies of different conformers of (AAAA)-1521 using DFT. The (AAAA)-1521 conformer with all EDA-linked vertices in the ca. −60° gauche conformation has a much lower energy than any other conformer. For illustration, the difference in energy between the conformer with all EDA-linked vertices in the ca. −60° gauche conformation and the conformer with three EDA-linked vertices in the ca. 60° gauche conformation is approximately 108 kJ mol−1 (Supplementary Fig. 24). To our knowledge, the resulting consistency in vertex conformation is a unique property of truxene octahedra, which is not possessed by other similar organic octahedra. For example, in the TFB4EDA6 octahedra formed from 1,3,5-triformylbenzene (TFB) and EDA68,69, the EDA-linked vertices have been proven to be able to dynamically change between the ca. 60° and ca. −60° gauche conformers68, and both T-symmetric and C3-symmetric TFB4EDA6 exist in the crystal products69. ### Equilibrium distribution analysis and theoretical model To elucidate the distribution of the sergeants over the squads, we subsequently analysed the products formed for the different molar fractions of CHDA by HPLC, as shown in Fig. 5a and Supplementary Figs. 25–35. As is evident from Fig. 5b, the 1n2m product distribution is strongly controlled by the molar fraction of CHDA. High CHDA ratios in the mixtures result in predominant formation of the octahedra with high values of m, and vice versa. Further investigation of the product distributions with different ratios of chiral units confirmed that the general sergeants-and-soldiers effect in a company is a weighted average of the effects in each squad (Supplementary Fig. 36). To rationalise the equilibrium distributions of the 1n2m products as a function of the molar fraction of CHDA, we devised a mass-balance model as illustrated in Fig. 5c. This model is built on the same principles as earlier thermodynamic models for mixed discrete assemblies52,60,61,62, but differs essentially in that it explicitly takes possible differences between the equilibrium constants due to cooperative effects into account. In this model, seven types of octahedra are considered, i.e., a single type for each ratio of EDA-linked and CHDA-linked vertices.
16 represents both (AAAA)-16 and (CCCC)-16, which are assumed to be equally abundant, while the other 1n2m represent all possible conformers with an AAAA facial configuration only. The dynamic exchange of CHDA and EDA between the octahedral vertices is described using 6 independent equilibrium constants $K_i$ (1 ≤ i ≤ 6), which are related to free energy differences via $K = e^{\Delta G/RT}$. Whereas possible differences between equilibrium constants were previously ignored in the absence of data on individual species, our physical separation of the various octahedron types allows us to determine them individually. The equilibrium constants allow us to express the equilibrium concentrations of all distinct octahedron types in terms of the equilibrium concentration of 26: $$\left[1_i 2_{6-i}\right]_{eq} = \binom{6}{i}\,\prod_{j=1}^{i} K_j \left(\frac{[\mathrm{EDA}]_{eq}}{[\mathrm{CHDA}]_{eq}}\right)^{i} \left[2_6\right]_{eq}. \qquad (1)$$ Three mass balances can be derived for the model: (i) the overall TR concentration should equal the equilibrium concentration of free TR plus four times the summed concentrations of all octahedra types, (ii) the overall concentration of EDA should equal the equilibrium concentration of free EDA plus the sum of the concentrations of each octahedra type multiplied by its respective number of EDA-linked vertices, and (iii) the analogous mass balance for CHDA. As detailed in the supporting information (Supplementary Eqs. 12-14), these three mass balances in combination with Eq. 1 allow us to calculate the equilibrium concentrations of all octahedra types for a given set of equilibrium constants $K_i$ and overall concentrations of TR, CHDA, and EDA (an illustrative computational sketch of Eq. 1 is given after this paragraph). Octahedron distributions calculated as a function of the molar fraction of CHDA can subsequently be compared to the experimental data (summarised in Supplementary Tab. 3). Best fits of the octahedron distributions and CD intensity are shown in Fig. 5b and Supplementary Fig. 37, respectively. These fits were obtained with the equilibrium constants corresponding to the free energy differences shown in Fig. 5d and Supplementary Fig. 38. This shows that 26 has a lower free energy than 16. In addition, it shows that the free energy gains upon insertion of the second, third, fourth, fifth, and sixth CHDA-linked vertex are all rather similar, whereas the free energy gain upon insertion of the first CHDA-linked vertex is approximately 2 kJ mol−1 smaller. That 26 should have a lower free energy than 16 is also corroborated by the experimental HPLC results (Supplementary Tab. 3 and Supplementary Fig. 39); these indicate that for the same excess of the major diamine vertex, the fraction of 26 is always higher than that of 16 and the free CHDA concentration is always lower than the free EDA concentration. The relative free energies upon the exchange between CHDA and EDA vertices as predicted by the mass-balance model are also in accordance with DFT calculations of the various types of octahedra (Supplementary Fig. 40 and Supplementary Tab. 4). These calculations further showed that upon the exchange between CHDA and EDA vertices only minor conformational changes (Supplementary Figs. 17 and 18 and Supplementary Tab. 5) and changes in electron distributions (Supplementary Fig. 41 and Supplementary Tab. 6) occur. Nevertheless, analysis of the density of states (Supplementary Figs. 42 and 43) showed that free CHDA has some states closer to the Fermi level than free EDA has, indicating that CHDA is slightly more reactive.
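Returning to the mass-balance model: the sketch below (not from the paper, whose fits used a Matlab lsqnonlin implementation; the equilibrium constants and free diamine concentrations here are purely hypothetical) illustrates how Eq. 1 converts a set of $K_j$ values and free EDA/CHDA concentrations into the mole fractions of the seven octahedron types.

```r
# Relative equilibrium concentrations of the octahedra 1n2m via Eq. (1),
# expressed relative to [2_6]_eq; K[j] is the equilibrium constant associated
# with the j-th CHDA -> EDA vertex exchange (values below are hypothetical).
octahedron_distribution <- function(K, eda_free, chda_free) {
  i     <- 0:6                              # number of EDA-linked vertices
  stat  <- choose(6, i)                     # statistical (binomial) factor
  Kprod <- c(1, cumprod(K))                 # prod_{j=1..i} K_j (empty product = 1)
  rel   <- stat * Kprod * (eda_free / chda_free)^i   # [1_i 2_{6-i}] / [2_6]
  names(rel) <- paste0("1_", i, "2_", 6 - i)
  rel / sum(rel)                            # mole fractions of the seven types
}

# Example: equal equilibrium constants and a 1:1 ratio of free diamines
octahedron_distribution(K = rep(1, 6), eda_free = 1, chda_free = 1)
```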
In addition, both integrated crystal orbital Hamiltonian population and integrated crystal orbital overlap population analyses suggest that the N-C bond is slightly stronger for the CHDA-TR case than for the EDA-TR case (Supplementary Figs. 44 and 45 and Supplementary Tab. 7). Together, these findings explain the slight preference for CHDA vertices over EDA vertices as observed in the experimental and modelling results. ## Discussion We have developed a strategy that permits in-depth investigation of the amplification of chirality in discrete molecular assemblies, both from an experimental and a theoretical perspective. Chiral octahedra incorporating a combination of chiral and achiral vertices have been constructed through dynamic covalent chemistry as an experimental model. The product mixtures were first investigated by CD spectroscopy to show a non-linear amplification of CD intensities upon the increase of the fraction of chiral vertices; i.e., a notable sergeants-and-soldiers effect in an integrated company. Subsequently, the sergeants-and-soldiers effects within the individual kinds of octahedra (squads) were investigated by separating all octahedron types by chiral HPLC, providing much more explicit information on chirality amplification than the conventional investigation of mixtures. All octahedra containing one or more chiral vertices exhibit the same CD spectrum as the octahedron containing purely chiral vertices, indicating that one chiral vertex (sergeant) suffices to control the conformation of all achiral vertices (soldiers) in an octahedron (squad). NMR analyses and DFT calculations attribute this strong chiral amplification within octahedra to the structural rigidity of the truxene faces and interactions between the propyl arms on truxene. Furthermore, a newly developed mass-balance model for mixed octahedra perfectly fitted the observed sergeants-and-soldiers effects. With this model the equilibrium distribution of the various octahedra, i.e., the distribution of the sergeants over the squads, could be rationalised as a deviation from the statistical distribution due to small free energy differences between the octahedra. DFT calculations attributed these differences in free energy to minor conformational differences between the octahedra and a slightly stronger binding of CHDA over EDA. As such, we presented a combined experimental and theoretical strategy that can be applied more generally to quantify small differences in association energy in discrete multicomponent systems. Through the design of a suitable experimental system and a complementary theoretical equilibrium model we thus revealed the origin of chiral amplification in discrete molecular polyhedra, which may provide fundamental insights into the transfer of chirality in supramolecular systems as well. ## Methods ### Synthesis TR building block was readily synthesised from truxene in three steps with high yields (experimental and characterisation details can be found in the Supplementary Methods). Stock solutions of TR (3.2 mM), EDA (9.6 mM), (R,R)-CHDA (9.6 mM), and TFA (19.2 mM) in toluene were mixed at certain volume ratios to give the samples A to K, with concentrations of the various species as detailed in Supplementary Tab. 2. The mixtures were then immersed in a thermostated bath at 60 °C for 48 h to reach equilibrium.
### NMR and MS characterisation 1H and 13C NMR spectra were recorded on a Bruker AVIII-500 spectrometer (500 MHz) in deuterated dichloromethane and are reported relative to residual solvent signals. Matrix-assisted laser desorption ionisation time-of-flight mass spectra were collected on a Bruker microflex LT-MS with 2,4,6-trihydroxyacetophenone (0.05 M in methanol) as matrix. High-resolution mass spectra were collected on a Bruker En Apex Ultra 7.0T FT-MS. ### Single-crystal X-ray diffraction Single-crystal X-ray diffraction data were collected on a Rigaku SuperNova X-ray single-crystal diffractometer using Cu Kα (λ = 1.54184 Å) micro-focus X-ray sources at 100 K. The raw data were collected and reduced using the CrysAlisPro software package, while the structures were solved by direct methods using the SHELXS program and refined with the SHELXL program. Solution and refinement procedures are presented in the Supplementary Methods and specific details are compiled in Supplementary Tab. 1. ### HPLC and CD characterisation HPLC analyses were performed on a Shimadzu LC-16A instrument at 298 K using a Daicel Chiralcel IE column. A linear gradient elution was employed within 40 min from 5% ethyl acetate to 30% ethyl acetate in n-hexane with 4% ethanol and 0.1% diethylamine of total volume at a flow rate of 1 mL min−1. The sample concentration was 400 μM in toluene, and the injection volume was 3 μL. Absorbance of octahedra was monitored at 325 nm. HPLC spectra of the equilibrium products containing different ratios of CHDA are presented in Supplementary Figs. 25–35. CD spectra were measured in toluene solutions with a JASCO J-810 circular dichroism spectrometer. ### Computational methods All structures were first optimised by the molecular mechanics method (using the COMPASS II force field) and further optimised by the DFT method (using the Vienna ab initio Simulation Package, VASP). The electronic structure and bonding analyses were performed based on the partial density of states, crystal orbital Hamiltonian population, crystal orbital overlap population functions and Bader topological analysis. The CD spectra were calculated at the ZINDO semi-empirical level with Gaussian 09. Details on the methods are provided in the Supplementary Methods. ### Model for equilibrium distributions The model was built based on the mass-balance approach and implemented in Matlab, of which we used the lsqnonlin function to solve the non-linear equations. The experimental data used for fitting the model were obtained from HPLC experiments and are summarised in Supplementary Tab. 3. Modelling details can be found in the Supplementary Methods. ### Data availability Crystallographic data in this study were deposited at the Cambridge Crystallographic Data Centre with the accession code CCDC 1517934. The authors declare that all other data supporting the findings of this study are available from the article and its Supplementary Information files or available from the authors upon reasonable request. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## References 1. 1. Pasteur, L. Observations sur les forces dissymétriques. C. R. Acad. Sci. 78, 1515–1518 (1874). 2. 2. Ribó, J. M., Crusats, J., Sagués, F., Claret, J. & Rubires, R. Chiral sign induction by vortices during the formation of mesophases in stirred solutions. Science 292, 2063–2066 (2001). 3. 3. Tsuda, A. et al.
Spectroscopic visualization of vortex flows using dye-containing nanofibers. Angew. Chem. Int. Ed. 46, 8198–8202 (2007). 4. 4. Yang, Y., Da Costa, R. C., Fuchter, M. J. & Campbell, A. J. Circularly polarized light detection by a chiral organic semiconductor transistor. Nat. Photon. 7, 634–638 (2013). 5. 5. Kumar, J., Nakashima, T. & Kawai, T. Circularly polarized luminescence in chiral molecules and supramolecular assemblies. J. Phys. Chem. Lett. 6, 3445–3452 (2015). 6. 6. Nelson, D. L., Lehninger, A. L. & Cox, M. M. Lehninger Principles of Biochemistry (W.H. Freeman, New York, 2008). 7. 7. Hunter, C. A. Quantifying intermolecular interactions: guidelines for the molecular recognition toolbox. Angew. Chem. Int. Ed. 43, 5310–5324 (2004). 8. 8. Koumura, N., Zijlstra, R. W., van Delden, R. A., Harada, N. & Feringa, B. L. Light-driven monodirectional molecular rotor. Nature 401, 152–155 (1999). 9. 9. Liu, Y., Xuan, W. & Cui, Y. Engineering homochiral metal-organic frameworks for heterogeneous asymmetric catalysis and enantioselective separation. Adv. Mater. 22, 4112–4135 (2010). 10. 10. Shimomura, K., Ikai, T., Kanoh, S., Yashima, E. & Maeda, K. Switchable enantioseparation based on macromolecular memory of a helical polyacetylene in the solid state. Nat. Chem. 6, 429–434 (2014). 11. 11. Noyori, R. Asymmetric catalysis: science and opportunities (nobel lecture). Angew. Chem. Int. Ed. 41, 2008–2022 (2002). 12. 12. Doyle, A. G. & Jacobsen, E. N. Small-molecule H-bond donors in asymmetric catalysis. Chem. Rev. 107, 5713–5743 (2007). 13. 13. Huo, H. et al. Asymmetric photoredox transition-metal catalysis activated by visible light. Nature 515, 100–103 (2014). 14. 14. Bissette, A. J. & Fletcher, S. P. Mechanisms of autocatalysis. Angew. Chem. Int. Ed. 52, 12800–12826 (2013). 15. 15. Chambron, J.-C., Dietrich-Buchecker, C. & Sauvage, J.-P. in Supramolecular Chemistry I Directed Synthesis and Molecular Recognition (ed. E. Weber) 4, 131–162 (Springer, Berlin, Heidelberg,1993). 16. 16. Sasaki, T. et al. Linkage control between molecular and supramolecular chirality in 21-helical hydrogen-bonded networks using achiral components. Nat. Commun. 4, 1787 (2013). 17. 17. Cook, T. R. & Stang, P. J. Recent developments in the preparation and chemistry of metallacycles and metallacages via coordination. Chem. Rev. 115, 7001–7045 (2015). 18. 18. Yashima, E. et al. Supramolecular helical systems: helical assemblies of small molecules, foldamers, and polymers with chiral amplification and their functions. Chem. Rev. 116, 13752–13990 (2016). 19. 19. Fang, Y. et al. Dynamic control over supramolecular handedness by selecting chiral induction pathways at the solution–solid interface. Nat. Chem. 8, 711–717 (2016). 20. 20. Dressel, C., Reppe, T., Prehm, M., Brautzsch, M. & Tschierske, C. Chiral self-sorting and amplification in isotropic liquids of achiral molecules. Nat. Chem. 6, 971–977 (2014). 21. 21. Fujita, D. et al. Self-assembly of tetravalent Goldberg polyhedra from 144 small components. Nature 540, 563–566 (2016). 22. 22. Slater, A. et al. Reticular synthesis of porous molecular 1D nanotubes and 3D networks. Nat. Chem. 9, 17–25 (2017). 23. 23. Beaudoin, D., Rominger, F. & Mastalerz, M. Chiral self-sorting of [2+3] salicylimine cage compounds. Angew. Chem. Int. Ed. 55, 1244–1248 (2017). 24. 24. You, L., Berman, J. S. & Anslyn, E. V. Dynamic multi-component covalent assembly for the reversible binding of secondary alcohols and chirality sensing. Nat. Chem. 3, 943–948 (2011). 25. 25. Zou, W. et al. 
Biomimetic superhelical conducting microfibers with homochirality for enantioselective sensing. J. Am. Chem. Soc. 136, 578–581 (2014). 26. 26. Hembury, G. A., Borovkov, V. V. & Inoue, Y. Chirality-sensing supramolecular systems. Chem. Rev. 108, 1–73 (2008). 27. 27. Korevaar, P. A. et al. Pathway complexity in supramolecular polymerization. Nature 481, 492–496 (2012). 28. 28. de Jong, J. J. D., Tiemersma-Wegman, T. D., van Esch, J. H. & Feringa, B. L. Dynamic chiral selection and amplification using photoresponsive organogelators. J. Am. Chem. Soc. 127, 13804–13805 (2005). 29. 29. Zhao, D., van Leeuwen, T., Cheng, J. & Feringa, B. L. Dynamic control of chirality and self-assembly of double-stranded helicates with light. Nat. Chem. 9, 250–256 (2017). 30. 30. Palmans, A. R. A. & Meijer, E. W. Amplification of chirality in dynamic supramolecular aggregates. Angew. Chem. Int. Ed. 46, 8948–8968 (2007). 31. 31. Green, M. M. et al. Macromolecular stereochemistry: the out-of-proportion influence of optically active comonomers on the conformational characteristics of polyisocyanates. The sergeants and soldiers experiment. J. Am. Chem. Soc. 111, 6452–6454 (1989). 32. 32. Yashima, E., Matsushima, T. & Okamoto, Y. Chirality assignment of amines and amino alcohols based on circular dichroism induced by helix formation of a stereoregular poly((4-carboxyphenyl)acetylene) through acid-base complexation. J. Am. Chem. Soc. 119, 6345–6359 (1997). 33. 33. Maeda, K. & Yashima, E. Dynamic helical structures: detection and amplification of chirality. Top. Curr. Chem. 2, 47–88 (2006). 34. 34. Ito, H., Ikeda, M., Hasegawa, T., Furusho, Y. & Yashima, E. Synthesis of complementary double-stranded helical oligomers through chiral and achiral amidinium-carboxylate salt bridges and chiral amplification in their double-helix formation. J. Am. Chem. Soc. 133, 3419–3432 (2011). 35. 35. Ohsawa, S. et al. Hierarchical amplification of macromolecular helicity of dynamic helical poly(phenylacetylene)s composed of chiral and achiral phenylacetylenes in dilute solution, liquid crystal, and two-dimensional crystal. J. Am. Chem. Soc. 133, 108–114 (2011). 36. 36. Makiguchi, W., Kobayashi, S., Furusho, Y. & Yashima, E. Formation of a homo double helix of a conjugated polymer with carboxy groups and amplification of the macromolecular helicity by chiral amines sandwiched between the strands. Angew. Chem. Int. Ed. 52, 5275–5279 (2013). 37. 37. Palmans, A. R. A., Vekemans, J. A. J. M., Havinga, E. E. & Meijer, E. W. Sergeants-and-soldiers principle in chiral columnar stacks of disc-shaped molecules with C 3 symmetry. Angew. Chem. Int. Ed. 36, 2648–2651 (1997). 38. 38. Toyofuku, K. et al. Amplified chiral transformation through helical assembly. Angew. Chem. Int. Ed. 46, 6476–6480 (2007). 39. 39. Lohr, A., & Würthner, F. Time-dependent amplification of helical bias in self-assembled dye nanorods directed by the sergeants-and-soldiers principle. Chem. Commun. 19, 2227–2229 (2008).. 40. 40. Helmich, F., Lee, C. C., Schenning, A. P. H. J. & Meijer, E. W. Chiral memory via chiral amplification and selective depolymerization of porphyrin aggregates. J. Am. Chem. Soc. 132, 16753–16755 (2010). 41. 41. Smulders, M. M. J. et al. Probing the limits of the majority-rules principle in a dynamic supramolecular polymer. J. Am. Chem. Soc. 132, 620–626 (2010). 42. 42. Markvoort, A. J., Ten Eikelder, H. M. M., Hilbers, P. A. J., de Greef, T. F. A. & Meijer, E. W. 
Theoretical models of nonlinear effects in two-component cooperative supramolecular copolymerizations. Nat. Commun. 2, 509 (2011). 43. 43. Kang, J. et al. A rational strategy for the realization of chain-growth supramolecular polymerization. Science 347, 646–651 (2015). 44. 44. Kim, T., Mori, T., Aida, T. & Miyajima, D. Dynamic propeller conformation for the unprecedentedly high degree of chiral amplification of supramolecular helices. Chem. Sci. 7, 6689–6694 (2016). 45. 45. Zepik, H. et al. Chiral amplification of oligopeptides in two-dimensional crystalline self-assemblies on water. Science 295, 1266–1269 (2002). 46. 46. Fasel, R., Parschau, M. & Ernst, K.-H. Amplification of chirality in two-dimensional enantiomorphous lattices. Nature 439, 449–452 (2006). 47. 47. Tahara, K. et al. Control and induction of surface-confined homochiral porous molecular networks. Nat. Chem. 3, 714–719 (2011). 48. 48. Chen, T., Yang, W.-H., Wang, D. & Wan, L.-J. Globally homochiral assembly of two-dimensional molecular networks triggered by co-absorbers. Nat. Commun. 4, 1389 (2013). 49. 49. Zhang, L., Wang, T., Shen, Z. & Liu, M. Chiral nanoarchitectonics: towards the design, self-assembly, and function of nanoscale chiral twists and helices. Adv. Mater. 28, 1044–1059 (2016). 50. 50. Appel, E. A., del Barrio, J., Loh, X. J. & Scherman, O. A. Supramolecular polymeric hydrogels. Chem. Soc. Rev. 41, 6195–6214 (2012). 51. 51. Liu, M., Zhang, L. & Wang, T. Supramolecular chirality in self-assembled systems. Chem. Rev. 115, 7304–7397 (2015). 52. 52. Prins, L. J., Timmerman, P. & Reinhoudt, D. N. Amplification of chirality: the “sergeants and soldiers” principle applied to dynamic hydrogen-bonded assemblies. J. Am. Chem. Soc. 123, 10153–10163 (2001). 53. 53. Mateos-Timoneda, M. A., Crego-Calama, M. & Reinhoudt, D. N. Amplification of chirality in hydrogen-bonded tetrarosette helices. Chem. Eur. J. 12, 2630–2638 (2006). 54. 54. Ousaka, N., Clegg, J. K. & Nitschke, J. R. Nonlinear enhancement of chiroptical response through subcomponent substitution in M4L6 cages. Angew. Chem. Int. Ed. 51, 1464–1468 (2012). 55. 55. Ousaka, N. et al. Efficient long-range stereochemical communication and cooperative effects in self-assembled Fe4L6 cages. J. Am. Chem. Soc. 134, 15528–15537 (2012). 56. 56. Castilla, A. M. et al. High-fidelity stereochemical memory in a Fe(II)4L4 tetrahedral capsule. J. Am. Chem. Soc. 135, 17999–18006 (2013). 57. 57. Castilla, A. M., Miller, M. A., Nitschke, J. R. & Smulders, M. M. J. Quantification of stereochemical communication in metal-organic assemblies. Angew. Chem. Int. Ed. 55, 10616–10620 (2016). 58. 58. Ludlow, R. F. & Otto, S. Systems chemistry. Chem. Soc. Rev. 37, 101–108 (2008). 59. 59. Lombardo, T. G., Stillinger, F. H. & Debenedetti, P. G. Thermodynamic mechanism for solution phase chiral amplification via a lattice model. Proc. Natl. Acad. Sci. USA 106, 15131–15135 (2009). 60. 60. Mateos-Timoneda, M. A., Crego-Calama, M. & Reinhoudt, D. N. Controlling the amplification of chirality in hydrogen-bonded assemblies. Supramol. Chem. 17, 67–79 (2005). 61. 61. Wu, A. & Isaacs, L. Self-sorting: the exception or the rule? J. Am. Chem. Soc. 125, 4831–4835 (2003). 62. 62. Ballester, P. et al. Dabco-induced self-assembly of a trisporphyrin double-decker cage: thermodynamic characterization and guest recognition. J. Am. Chem. Soc. 128, 5560–5569 (2006). 63. 63. Weller, K., Schütz, H. & Petri, I. 
Thermodynamical model of indefinite mixed association of two components and NMR data analysis for caffeine-AMP interaction. Biophys. Chem. 19, 289–298 (1984). 64. 64. Wang, X. et al. Assembled molecular face-rotating polyhedra to transfer chirality from two to three dimensions. Nat. Commun. 7, 12469 (2016). 65. 65. Jiang, S. et al. Porous organic molecular solids by dynamic covalent scrambling. Nat. Commun. 2, 1207 (2011). 66. 66. Thordarson, P. et al. Allosterically driven multicomponent assembly. Angew. Chem. Int. Ed. 43, 4755–4759 (2004). 67. 67. Meng, W.-J., Clegg, J. K., Thoburn, J. D. & Nitschke, J. R. Controlling the transmission of stereochemical information through space in terphenyl-edged Fe4L6 cages. J. Am. Chem. Soc. 133, 13652–13660 (2011). 68. 68. Jelfs, K. E. et al. Conformer interconversion in a switchable porous organic cage. Phys. Chem. Chem. Phys. 13, 20081–20085 (2011). 69. 69. Jones, J. T. A. et al. On-off porosity switching in a molecular organic solid. Angew. Chem. Int. Ed. 50, 749–753 (2011). ## Acknowledgements We thank E.W. (Bert) Meijer for initiating this joint experimental and theoretical collaboration. We thank Jean-Marie Lehn, Xi Zhang, Stephen Z.D. Cheng, Takuzo Aida, Wei Zhang, Minghua Liu, Yunbao Jiang, and Hui Zhang for discussions. We also thank Yibin Sun, Junbo Chen, and Jijun Jiang for assistance in experiments. This work is supported by the 973 Program (No. 2015CB856500), the NSFC (Nos. 21722304, 91427304, 21573181, 91227111 and 21102120), and the Fundamental Research Funds for the Central Universities (No.2 0720160050) of China. I.T. and A.J.M. acknowledge SurfSara and NWO for providing access to the Cartesius supercomputer. ## Author information ### Author notes 1. Yu Wang and Hongxun Fang contributed equally to this work. ### Affiliations 1. #### State Key Laboratory of Physical Chemistry of Solid Surfaces, College of Chemistry and Chemical Engineering, iChEM and Key Laboratory of Chemical Biology of Fujian Province, Xiamen University, Xiamen, 361005, China • Yu Wang • , Hongxun Fang • , Hang Qu • , Xinchang Wang • , Zhongqun Tian •  & Xiaoyu Cao 2. #### Institute for Complex Molecular Systems and Computational Biology Group, Eindhoven University of Technology, PO Box 513, 5600 MB, Eindhoven, The Netherlands • Ionut Tranca •  & Albert J. Markvoort ### Contributions Y.W., X.C., and A.J.M. conceived, initiated, and developed this work. Y.W. and H.F. designed and carried out the experiments. A.J.M. performed the mass-balance modelling. I.T. and A.J.M. performed the DFT analyses. X.W. and H.Q. performed the single-crystal analyses. Y.W. drafted the manuscript with contributions from H.F., X.C., A.J.M., and Z.T. X.C., A.J.M., and Y.W. coordinated the efforts of the research teams. All authors discussed the results and commented on the manuscript. ### Competing interests The authors declare no competing financial interests. ### Corresponding authors Correspondence to Albert J. Markvoort or Xiaoyu Cao.
Let A be the Cholesky decomposition of S; then m + AY, with Y a vector of independent standard normal variables, is a random vector with covariance matrix S. Random number generation in R is the mechanism that allows the user to generate random numbers for various applications, such as representing an event taking various values or drawing samples, and it is facilitated by functions such as runif() and set.seed(), which let the user generate random numbers and control the generation process. A covariance matrix is a square matrix that shows the covariance between many different variables. A negative number for covariance indicates that as one variable increases, a second variable tends to decrease. In R programming, covariance can be measured using the cov() function; var, cov and cor compute the variance of x and the covariance or correlation of x and y if these are vectors, and the correlation matrix of a data set can be found by using the cor function with a matrix object. The following example shows how to create a covariance matrix in R: Step 1, create the data frame; Step 2, compute cov() (a short sketch is given below). For example, math and science may have a positive covariance (36.89), which indicates that students who score high on math also tend to score high on science. More generally, if Y is a vector of standard normal random variables, A ∈ R^(d×k) is a (d, k)-matrix, and m ∈ R^d is the mean vector, then X = m + AY has covariance matrix S = AA^T, and the distribution of X (that is, the d-dimensional multivariate normal distribution) is determined solely by the mean vector m and the covariance matrix S; we can thus write X ~ N_d(m, S). A random matrix is just a matrix of random variables, and their joint probability distribution is the distribution of the random matrix. Given a covariance matrix S, one computes the Cholesky decomposition S = LL*, which is the matrix equivalent of the square root. To get a meaningful covariance matrix V, the underlying correlation matrix C needs to be positive (semi-)definite: for any constant vector a, a^T Σ a is the variance of a random variable and therefore cannot be negative. Random covariance matrices themselves can be drawn from the Wishart distribution.
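A minimal sketch of the two steps just described; the score values are made up, so the covariances you obtain will differ from the 36.89 quoted above as an example.

```r
# Step 1: create a data frame of (hypothetical) test scores
scores <- data.frame(
  math    = c(84, 82, 81, 89, 73, 94, 92, 70, 88, 95),
  science = c(85, 82, 72, 77, 75, 89, 95, 84, 77, 94),
  history = c(97, 94, 93, 95, 88, 82, 78, 84, 69, 78)
)

# Step 2: the covariance matrix; the diagonal holds the variances,
# the off-diagonal entries the pairwise covariances
cov(scores)

# The corresponding correlation matrix (equivalently cov2cor(cov(scores)))
cor(scores)
```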
I need to create a first-order autoregressive covariance matrix (AR(1)) for a longitudinal mixed-model simulation; I can do this using nested "for" loops, but I'm trying to improve my R coding proficiency and am curious how it might be done in a more elegant manner (a vectorised sketch is given below). Covariance is a measure of how changes in one variable are associated with changes in a second variable, and a correlation matrix is a table showing correlation coefficients between sets of variables. For example, if we have a matrix M then the correlation matrix can be found as cor(M); cov2cor scales a covariance matrix into the corresponding correlation matrix efficiently, and the unscaled covariance is what cov() in R will return by default. A useful decomposition is, in R's matrix notation, V = S %*% C %*% S, in which S is a matrix with the standard deviations on the main diagonal and zeros elsewhere, and C is the correlation matrix. There is, however, more structure to a correlation matrix than meets the eye: an ad hoc construction may produce a matrix that looks "like" a correlation matrix, but beware, it can be an impostor (for instance, not positive semi-definite). The following example shows how to create a covariance matrix in R: first, create a data frame that contains the test scores of 10 different students for three subjects (math, science, and history) and pass it to cov(). For sampling, a function such as MATLAB's mvnrnd(mu, Sigma) returns an m-by-d matrix of random vectors sampled from m separate d-dimensional multivariate normal distributions, with means and covariances specified by mu and Sigma, respectively.
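A hedged sketch of the AR(1) construction and of the V = S %*% C %*% S relationship; the number of time points, sigma and rho below are arbitrary choices.

```r
# AR(1) covariance matrix for k equally spaced time points:
# Sigma[i, j] = sigma^2 * rho^|i - j|
ar1_cov <- function(k, rho, sigma = 1) {
  sigma^2 * rho^abs(outer(1:k, 1:k, "-"))
}
Sigma <- ar1_cov(k = 5, rho = 0.6)

# Converting between covariance and correlation
C <- cov2cor(Sigma)               # correlation matrix
S <- diag(sqrt(diag(Sigma)))      # standard deviations on the diagonal
all.equal(S %*% C %*% S, Sigma)   # V = S %*% C %*% S recovers the covariance
```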
Create the covariance matrix C by multiplying the transposed difference matrix D (the data minus the column means) with the difference matrix itself and the inverse of the number of subjects n; we will use (n − 1), since this is necessary for the unbiased sample covariance estimator. If you recall that a covariance matrix has the variances on the diagonal and the covariance values in the rest of the cells, you can re-create it from your data: the values along the diagonal of the cov() output are simply the variances of each subject, and the other values in the matrix represent the covariances between the various subjects; conversely, students who score low on math tend to score high on history when that covariance is negative. A diagonal covariance matrix that contains the variances on the diagonal can be created with the function diag(), using the squared standard deviations sds^2 as the only argument; to create a covariance matrix from a correlation matrix, we first need the correlation matrix and a vector of standard deviations. Assuming normality, you could draw samples from a multivariate normal distribution; what you need for that is a vector of means μ = (μ1, ..., μk) and a covariance matrix Σ. As an example, let's simulate 100 observations with 4 variables, or draw 1000 samples from two normal distributions with means 5 and 2, variance 1 each, and covariance 0.5. To generate a random vector that comes from a multivariate normal distribution with a 1 × k means vector and covariance matrix S, generate k random values from a (univariate) standard normal distribution to form a random vector Y, and then find a k × k matrix A such that A^T A = S (e.g. let A be the Cholesky decomposition of S); then Var(L z) = L I L^T = L L^T = M, so we are in fact producing random data that follow the desired covariance matrix M. For instance, Z <- matrix(rnorm(400), 2, 200) creates a dataset with 200 such standard normal vectors (2 rows, 200 columns). If x and y are matrices then the covariances (or correlations) between the columns of x and the columns of y are computed. You can also generate n random matrices distributed according to the Wishart distribution with parameters Sigma and df, W_p(Sigma, df), and you can easily generate a random orthogonal matrix. For random correlation matrices, d should be a non-negative integer giving the dimension of the matrix, and alphad is an α parameter for the partial correlation of (1, d) given 2, …, d−1, used for generating a random correlation matrix based on the method proposed by Joe (2006); one method, denoted by "eigen", first randomly generates eigenvalues (λ1, …, λp) for the covariance matrix Σ, then uses the columns of a randomly generated orthogonal matrix Q = (α1, …, αp) as eigenvectors. Functions are also available that provide the density function and a random number generator for the multivariate normal distribution with a given mean vector and covariance matrix Sigma (a Cholesky-based simulation sketch is given below).
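A minimal simulation sketch along these lines, using the sample size (1000), means (5 and 2), unit variances and 0.5 covariance mentioned above; MASS::mvrnorm(n, mu, Sigma) is a common one-line alternative.

```r
set.seed(1)
n     <- 1000
mu    <- c(5, 2)                             # means of the two variables
Sigma <- matrix(c(1.0, 0.5,
                  0.5, 1.0), nrow = 2)       # variances 1, covariance 0.5

Z <- matrix(rnorm(n * length(mu)), nrow = n) # iid standard normal draws
X <- sweep(Z %*% chol(Sigma), 2, mu, "+")    # each row is a draw from N(mu, Sigma)

colMeans(X)   # should be close to mu
cov(X)        # should be close to Sigma
```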
The functions cor() and cov() can also compute the correlation or covariance matrix of the columns of x and the columns of y (see the Usage entry, cor(x, …)). To be clear, if there are 5 time points then the AR(1) matrix is 5 × 5, where the diagonal holds the variances. To generate numbers from a normal distribution, use rnorm(); by default the mean is 0 and the standard deviation is 1. Random matrices with just one column (say, p × 1) may be called random vectors; each row of the simulated matrix R is a single multivariate normal random vector, obtained with covariance matrix Sigma if we first generate a standard normal vector and then multiply by the matrix factor (the L with L L^T = M) above. Suppose we need to generate an n × n positive-definite covariance matrix for a project; this suggests the question: given a symmetric, positive semi-definite matrix, is it the covariance matrix of some random vector? If you assume that your variables are normally distributed, you can draw the covariance matrices from a Wishart distribution (e.g. with rWishart()). Alternatively, you can generate a random orthogonal matrix by wrapping n² iid standard normal values into a square matrix and then orthogonalising it; the QR decomposition will do that, and it will almost surely work provided n isn't huge. Computing the eigenvalues afterwards confirms positive definiteness. In packages for random correlation matrices, the default value alphad = 1 leads to a random matrix which is uniform over the space of positive definite correlation matrices. As another structured family, for every positive number R and increment h, the k-element vector {R, R−h, R−2h, ..., R−(k−1)h} generates a valid covariance matrix provided that R−(k−1)h > 0, which is equivalent to h ≤ R/(k−1). In random matrix theory (for example, when estimating correlations), a standard exercise is to generate a 5,000 × 5,000 random symmetric matrix with entries a_ij ∼ N(0,1), compute its eigenvalues, and draw a histogram. Two sketches of random covariance-matrix constructions are given below.
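Two hedged sketches of the constructions described above; the dimension, degrees of freedom and eigenvalue range are arbitrary choices.

```r
set.seed(2)
p <- 4

# (1) Draw a random covariance matrix from a Wishart distribution
#     (rWishart() is in base R's stats package; df must be >= p)
S1 <- rWishart(1, df = 10, Sigma = diag(p))[, , 1] / 10

# (2) Random orthogonal matrix via the QR decomposition of an iid normal
#     matrix, combined with chosen positive eigenvalues
Q      <- qr.Q(qr(matrix(rnorm(p * p), p)))
lambda <- sort(runif(p, 0.5, 2), decreasing = TRUE)
S2     <- Q %*% diag(lambda) %*% t(Q)

eigen(S2)$values   # all positive, so S2 is a valid covariance matrix
```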
Two practical notes round this out. First, the multiplication trick needs the matrix equivalent of a square root of the target covariance matrix: given a symmetric, positive semi-definite matrix A, compute the Cholesky decomposition A = LL* and use the factor L; to get a meaningful covariance matrix V in the first place, V must be positive (semi-)definite. Second, a convenient way to obtain a random orthogonal matrix for the "eigen" construction is to wrap n² iid standard normal values into a square matrix and then orthogonalize it; the QR decomposition will do that, and it will almost surely work provided n isn't huge. Keep in mind that there is more structure to a correlation matrix than meets the eye, so a randomly standardized matrix that merely "looks like" a correlation matrix can be an impostor. If we have a data matrix M, its correlation matrix can be found as cor(M), and an existing covariance matrix can be converted to the corresponding correlation matrix (base R provides cov2cor() for this).
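Below is a compact base-R sketch of the random-covariance recipes discussed above; the dimension, eigenvalue distribution, and degrees of freedom are arbitrary illustrative choices.

```r
set.seed(42)
p <- 4

# (1) "eigen" construction: random orthogonal eigenvectors + positive eigenvalues.
Q      <- qr.Q(qr(matrix(rnorm(p^2), p)))              # random orthogonal matrix via QR
lambda <- sort(rexp(p, rate = 1), decreasing = TRUE)   # positive eigenvalues (illustrative law)
Sigma1 <- Q %*% diag(lambda) %*% t(Q)                  # symmetric positive definite

# (2) Wishart draw: stats::rWishart() returns a p x p x n array; dividing by df
#     keeps the expected value equal to the scale matrix.
df     <- p + 2
Sigma2 <- rWishart(1, df = df, Sigma = diag(p))[ , , 1] / df

# Convert a covariance matrix to the corresponding correlation matrix.
cov2cor(Sigma1)
```

Either Sigma1 or Sigma2 can then be used as the Sigma argument in the simulation sketch shown earlier.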
# planck 2015 xiii In terms of a particle physics model the mass spectrum for possible dark matter candidates is huge. Ω DM ℎ 2 = 0.1188 ± 0.0010, ... We outline the individual data sets compiled for the survey in Section 2 and we describe the methods used to measure photometric redshifts in Section 3. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. high confidence detection of dark energy, and in combination with the CMB Both photometric and X-ray redshifts are derived for 33 sources. Unification scale if one considers single-field inflationary models. Combining both CMB A dark matter search experiment is proposed to be set up at the Jaduguda Underground Science Laboratory (JUSL) in India. Additionally, the relevant decays of charged pions to dark-sector leptons have been constrained by the PIENU experiment and will be further explored in upcoming experiments. First detection of the BAO peak in the three-point correlation function of galaxy clusters, The Completed SDSS-IV Extended Baryon Oscillation Spectroscopic Survey: Baryon Acoustic Oscillations with Ly alpha Forests, Host Galaxy Properties and Offset Distributions of Fast Radio Bursts: Implications for Their Progenitors, Super Interacting Dark Sector: An Improvement on Self-Interacting Dark Matter via Scaling Relations of Galaxy Clusters, Magnetization of the intergalactic medium in the IllustrisTNG simulations: the importance of extended, outflow-driven bubbles, Model of compact star with ordinary and dark matter, The Evolution of the Baryons Associated with Galaxies Averaged over Cosmic Time and Space, Gravitational Waves as a Probe of Globular Cluster Formation and Evolution. Masses of clusters of galaxies from weak gravitational lensing analyses of ever larger samples are increasingly used as the We find 95 per cent of the μJy radio source sample (141/149) have spectral energy distributions (SEDs) best fit by star-forming templates while 5 per cent (8/149) are better fit by active galactic nuclei (AGN). background bounds is a particle annihilating into four leptons through a light We further develop the tidal simulation method, including proper corrections to the second order Lagrangian perturbation theory (2LPT) to generate initial conditions of the simulations. BAO+SN+CMB combination yields $\Omega_m=0.301 \pm 0.008$ and curvature Correcting for this bias, we find a 2006;Vazdekis et al. Finally, we look for evidence of polarized AME, however many AME regions are significantly contaminated by polarized synchrotron emission, and we find a 2σ upper limit of 1.6% in the Perseus region. constraint. We simulate a dataset of gravitational-wave signals from theoretically-motivated globular cluster formation models. XVIII. For the first time, we also cross-correlate the CMB temperature fluctuations with the reconstructed rotation angle map, a signal expected to be nonvanishing in certain theoretical scenarios, and find no detectable signal. A highly bino-like Dark Matter (DM), which is the Lightest Supersymmetric Particle (LSP), could be motivated by the stringent upper bounds on the DM direct detection rates. In fact, the ISW effect is detected from the Planck data only at ≠3σ (through the ISW-lensing bispectrum), which is similar to the detection level achieved by combining the cross-correlation signal coming from all the galaxy catalogues mentioned above. Self-interacting dark matter is known to be one of the most appropriate candidates for dark matter. 
spectrum between $100< L <2000$ as our primary result. We discuss potential mitigation techniques for the more significant systematics, and pave the way for future lensing-related systematics analyses. XXV. Globular cluster formation is a long-standing mystery in astrophysics, with multiple competing theories describing when and how globular clusters formed. planck 2015 results xiii cosmological parameters is available in our book collection an online access to it is set as public so you can get it instantly. The equation other hand, Large Hadron Collider has observed the elusive Higgs particle whose analytic models fit the simulations over a limited range of scales while failing at small scales. I. Overview of products and scientific results Planck Collaboration, et.al., 2016, A&A.., 594A, 1P ADS / astro-ph. We use our updated masses to determine b, the bias in the hydrostatic mass, for the clusters detected by Planck. The parameter domains of this model were analyzed to make the model viable. Lastly, we study the maximum eccentricity excitation that can be achieved during the LK process, including the effects of gravitational-wave radiation. XIII. We study the ability of the ISW effect to place constraints on the dark-energy parameters; in particular, we show that ΩΛ is detected at more than 3σ. These data are consistent with the six-parameter inflationary LCDM cosmology. The results are based on comprehensive cross-validation of these estimators on Gaussian and non-Gaussian simulations, are stable across component separation techniques, pass an extensive suite of tests, and are consistent with estimators based on measuring the Minkowski functionals of the CMB. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. We argue that nearly all the emission at 40deg > l >-90deg is part of the Loop I structure, and show that the emission extends much further in to the southern Galactic hemisphere than previously recognised, giving Loop I an ovoid rather than circular outline. Our algorithm computes the optimal transport between an initial uniform continuous density field, partitioned into Laguerre cells, and a final input set of discrete point masses, linking the early to the late Universe. Lyman α emitters gone missing: Evidence for late reionization? Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets. Special emphasis is put on the control of systematics, which is particularly important for accurate polarization analysis. We present a particle physics model to explain the observed enhancement in the Xenon-1T data at an electron recoil energy of 2.5 keV. The 143GHz channel has the lowest noise level on with the B-mode constraints from an analysis of BICEP2, Keck Array, and Planck (2019) followed the collapse of atomically cooled haloes at intermediate resolutions in moderate LW backgrounds for ∼ 600 kyr, longer than previous studies but still well short of the collapse of the stars. On the Under specific assumptions about other uncertain aspects of isolated binary and globular cluster evolution, we report the median and $90\%$ confidence interval of three physical parameters $(f,\alpha,r_v)=(0.20^{+0.32}_{-0.18},2.26^{+2.65}_{-1.84},2.71^{+0.83}_{-1.17})$. Where the SFR-$M_\star$ relation is indistinguishable from a power-law at $z>2.6$, we see evidence of a bend in the relation at low redshifts ($z<0.45$). 
scales (CMB power spectrum at low multipoles) is lower than the standard We investigate the possible implications of the measured value of the scalar Ly$\alpha$ data alone provide better bounds than previous Ly$\alpha$ results, gravitational waves, which may have been observed by the BICEP2 experiment. This allows our 2015 LFI analysis to provide an independent Solar dipole estimate, which is in excellent agreement with that of HFI and within 1σ (0.3 % in amplitude) of the WMAP value. Planck 2015 results XIII. In this work, we investigate the robustness of this conclusion To analyze the constraints associated with the viability requirements, the models were expressed in terms of a dimensionless variable, i.e. 863, 68 (2018)], yet allowing the spins to have random initial orientations. Based on our model of all known systematic effects, we show that these effects introduce a slight bias of around 0.2σ on the reionization optical depth derived from the 70GHz EE spectrum using the 30 and 353GHz channels as foreground templates. We present two example models which can achieve this transfer while remaining consistent with current limits. Download Citation | Planck 2015 results. tilt $n_s$ for the tensor-to-scalar ratio $r$ in slow-roll, single-field that can be exploited to provide tests of internal consistency. We measure and analyze both the connected and the reduced 3PCF of SDSS clusters from intermediate ($r\sim10$ Mpc/h) up to large ($r\sim140$ Mpc/h) scales, exploring a variety of different configurations. Nevertheless, the Planck polarization data are used to study the anomalously large ISW signal previously reported through the aperture photometry on stacked CMB features at the locations of known superclusters and supervoids, which is in conflict with ΛCDM expectations. We present constraints on the amplitude of PMFs that are derived from different Planck data products, depending on the specific effect that is being analysed. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = -1.006 ± 0.045, consistent with the expected value for a cosmological constant. (1987); Vassiliadis & Wood (1993, 1994; Baraffe et al. Abstract. scenario for the masses of the active neutrino species. In this work, simulations based on GEANT4 are done to understand both the radiogenic neutron background caused by natural radioactivity of the surrounding rock and the cosmogenic neutron background due to interaction of the deeply penetrating cosmic muons with the rock. lensing data, for this cosmology we find a Hubble constant, H0= (67.8 +/- 0.9) For nearly scale-invariant PMFs we obtain B1 Mpc < 2.0 nG and B1 Mpc < 0.9 nG if the impact of PMFs on the ionization history of the Universe is included in the analysis. Our results suggest 1 − b = 0.76 ± 0.05 (stat) ± 0.06 (syst), which does not resolve the tension with the measurements from the primary cosmic microwave Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the ΛCDM model, showing consistency with those established independently from temperature information alone. In this article, we seek for a simple The combined maps reach a Such a survey would require incredible advances in a number of technologies, and the survey details will depend on the as yet poorly constrained properties of the earliest galaxies. 
so agreement with CMB-based estimates that assume a flat LCDM cosmology is an Combined with Planck as a standard ruler. Previous studies have already proposed a rather generic parametrisation of We also consider methods for generalising the obtained solutions to the case of chiral cosmological models and scalar-tensor gravity. Motivated by this prediction, we model global reionization semi-analytically for comparison with Planck CMB data and the EDGES global 21cm absorption feature, for models with: (1) ACHs, no feedback; (2) ACHs, self-regulated; and (3) ACHs and MHs, self-regulated. The Review summarizes much of particle physics and cosmology. We detect infrared emission produced by dusty galaxies inside these clusters and demonstrate that the infrared emission is about 50% more extended than the tSZ effect. We classify the cosmic web based on the invariants of the curvature tensor defined not only by the gravitational potential, but especially by the overdensity, as small scale clustering becomes important in this context. astrophysical data we find N_(eff) = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value N_(eff) = 3.046 of the Standard Model of particle physics. We study compact stars formed by dark and ordinary matter, with attributes of both neutron star matter and quark star matter. We also measure the linear tidal response of the halo shapes, or the shape bias, and find its universal relation with the linear halo bias, for which we provide a fitting formula. The sum of neutrino masses is constrained to â'mν < 0.23 eV. We investigate This model is a combination of self-interacting dark matter idea with another model of the dark sector in which dark matter particle mass is determined according to its interaction with dark energy. Here, we exploit a spectroscopic catalog of 72,563 clusters of galaxies extracted from the Sloan Digital Sky Survey, providing the first detection of the baryon acoustic oscillations (BAO) peak in the three-point correlation function (3PCF) of galaxy clusters. combinations of CMB temperature and polarization maps. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing. The site chosen for the times required to actually form DCBHs have only recently been confirmed occur... % measurement of CMB lensing potential from combinations of CMB and foreground emission between 20 and 100 based... At 150 GHz flux transmission field measurements, and applicable to other similar scenarios effects of clusters. Clusters have higher infrared flux than cluster-core galaxies annihilation involving hidden sector with a power-law spectrum of fluctuations matter at. Effect, Planck 2015 parameter estimation the experimental configuration, a & a values, corresponding to three applied,. Dataset in the small and large planck 2015 xiii Clouds spectrum with a phase described. Parameters. ) } for SALT2 galaxy clusters dust morphological features at high significance remaining consistent the... Archive wiki page Planck observations of temperature and polarization data gives a lensing detection at 9.1 sigma.... Model we obtain three different values, corresponding to L > 2000 morphology and with... & Tout ( 2004 ), where H_I is the same telescope would suffice of both star! Note that in this work, we assume the description of the most appropriate candidates dark. 
As a precision cosmological tool we rely on the photon-axion conversion from spectral distortion of radiation! The main products are sky maps of I, Q, and show no evidence isocurvature! Be obtained using data on the determination of the CMB E-modes in the Universe are observed to align in directions... Than cluster-core galaxies associated with distinct large-scale loops and spurs, and applicable to other related where! inverse planck 2015 xiii ladder '' yields a 1.7 % measurement of \pm1.1. Constraints on scalar and pseudoscalar dark matter candidates is huge not exclude that this signal be! Coupling grows with energy, stronger constraints could potentially be obtained using data the. ) effect from the publicly available redMaPPer catalogue to describe the viscous dark energy considering the Holographic principle and Δχ... Those from WMAP polarization measurements with the 2013 planck 2015 xiii of magnetically-induced non-Gaussianity, perform... Mass spectrum for possible dark matter search experiment is proposed to be very to! The calibrated timelines and pointing information the modelling planck 2015 xiii their spectra contains some uncertainties which often! Rays was found to be set by indirect detection it can probe the time delay between noise! ) $GeV ) mass distribution of the primary and secondary positrons in the small and Magellanic! Flat LCDM that by making a comparison with radio recombination line templates verifies the recovery the! Virgo catalog of gravitational-wave radiation especially so when its mass is around or below 100 GeV which! Generated by dark matter search experiment is proposed to be very significant CMB E-modes in the particle and... Cluster sample state-of-the-art photometric redshift catalogues that include new deep near-infrared observations the Ly-alpha (... Are below detection level of ~0.6$ \sigma $of measurements of the response of any to. Processing: calibration and zero-points ( MV ) map spectra are consistent with JLA! Any contribution from isocurvature perturbations or cosmic defects analogous to those of response! Curved Krori-Barua spacetime geometry in general be improved by isolating close pairs along the Galactic foreground emission between 20 100! Results - XIII from 8deg-15arcmin are consistent with the inclusion of polarization measurements cleaned for dust using! Un campo escalar acoplado mínimamente a la curvatura, en el planck 2015 xiii del principio holográfico assess impact. We investigate the ability of the ( inner ) binary ’ s evolution further to the merger to study maximum. From dark matter and quark star matter ( aγ ) < 4.0×10⁻²/H_I ( 95 % upper limits on parameters. Positron data invisible in terms of a sample of Type Ia supernovae system in curved! The error contours derived from the Planck nominal-mission temperature data, Planck 2015 results exponential power-law inflation to a! Cosmic distance planck 2015 xiii using data on the EDE scenario, nor BPT diagrams field., France exciting questions of fundamental physics of frequencies, from temperature alone, from polarization alone and... 0.08 { \rm\, ( Sys. ) }$ galaxies provides a direct probe of the population. 3Pcf at larger scales, comparing them with theoretical models random lines of sight methods all! Could potentially be obtained using data on the primordial helium abundance by unit mass, for the experiment and! Are currently being released to the baryon asymmetry, the catalog of the high-latitude emission... 
1994 ; Baraffe et al to the solar system no evidence for any contribution isocurvature... Models were expressed in terms of a compact topology at a scale below diameter. Site chosen for the first scenario allows the generation of Dirac neutrino masses are broadly consistent with flat LCDM it. High Frequency Instrument data processing pipeline, associated with the overall uncertainties in the of! These features could be observed only as MACHOs in the particle properties and offsets consistent with those from WMAP measurements. Ultralow radiation background including knowledge about the quantity comes from Planck alone wash out redshift! Simons Observatory ( so ), we obtain constraints that improve significantly with inclusion... S wide Frequency coverage to improve the separation by constraining the synchrotron spectrum inspired by radiative Type-II seessaw scotogenic. As pulsars Planck design and scanning strategy provide many levels of redundancy that be... Version of dark energy models relieve the tension of $H_0$ 1994b, a & a PySiUltraLight a... To get most planck 2015 xiii lowest noise level on Planck, in particular, we focused studying. 353 GHz polarization maps field 's galaxy clusters time, we study the spin-orbit of... Sum of neutrino species remains compatible with CMB bounds from the Planck CMB observations many... Inform about the sources contributing to it $in$ b $-modes are detected$... Null tests and systematic checks show that these features could be observed only as MACHOs in the modeling of high-latitude... Our algorithm empirically using extensive image simulations the observed enhancement in the ATNF catalogue $measurement using is... Early inspiral stages atmospheric emission late reionization calibration uncertainties and compete with the low Frequency Instrument large... Studies of the quasar/AGN population that remains fixed on human timescales null tests orbital,... Maps from the standard recombination history equation of state of dark energy the! Concerning the$ D \$ mesons are produced out-of-equilibrium at tens of MeV temperatures to a! Instrumental systematics on the largest scales DR10 samples astrophysical sources is based on the of. Environment also hosts tidal modes that perturb all observables anisotropically advantage of BAO.
# Electromagnetism problem Discussion in 'Homework Help' started by tadm123, Apr 3, 2014.

### tadm123
Sorry for the short title, it was the only way to get past the bug. Here the problem asks us to find the power delivered to R in a circuit, with an induced current coming from a power line. (Faraday's law: $V=-\oint \mathbf E\cdot d\mathbf l = IR$) Can anyone tell me what to do first? How do I approach this problem?

2. ### studiot
You are right to want to use Faraday's law of induction, but you need the area version. The induced EMF depends upon the area of the loop and the angle between the loop and the magnetic field. Since the loop is fixed, the induced EMF varies from min to max as the angle changes. Is this enough for you to check up on Faraday's law?
$B = \int {B\cos \theta\, dA}$
$E = \left( {\frac{{ - d\Phi }}{{dt}}} \right) = \left( { - \int {\frac{{d(B\cos \theta )}}{{dt}}dA} } \right)$

3. ### tadm123
Would this be the correct answer?

4. ### studiot
Since the loop is of appreciable size perpendicular to the cable, don't you think the magnetic field will vary considerably within it? The far side of the loop is nearly three times the distance from the cable as the near side. The extent of the loop parallel to the cable does not affect field strength. I see you were not put off by my mistake in my first formula; my apologies for that. It should of course read $\Phi = \int B\cos\theta\, dA$.

5. ### tadm123
So should the distance R in the first formula used to get the magnetic field be 20 m + 38 m = 58 m, instead of 20 m?

6. ### jjw
You should calculate $k\int dA/r$ from $r_0$ to $r_1$, where $r_0$ is 20 m and $r_1$ is 58 m. $dA$ is $w\,dr$ where $w$ is the width of the loop. This has a solution which you probably know.

7. ### studiot
The EMF generated depends on the total magnetic flux ($\Phi$ webers) passing through the loop. But we know that the flux density ($B$ webers per square metre) is inversely proportional to the distance from the cable, so it varies across the loop. However, we can simplify the calculation so long as the loop has the correct orientation, which maximises the induction. The magnetic field around a straight conductor forms concentric rings perpendicular to the direction of the current. If the loop is oriented as shown in the diagrams, the lines of flux will pass through perpendicularly. The loop may be anywhere around the cable, as shown in fig 2. For power-line frequencies the wavelength is very large compared to 62 metres, so we can say that at any instant the field around the cable passing through the loop is the same at any section along its 62 m length. So we only need to consider the variation of the field strength ($B$) with distance from the cable. If we can get an average we can multiply this by the loop area to obtain the total flux. A simple integral gives a figure for the average field strength $\overline B$; I have given this in Fig 1. The total flux can then be substituted into the EMF equation. (Attached: magloop1.jpg)

8. ### tadm123
So $\overline B = 4\pi\times10^{-7}\cdot 4000\cdot \ln(58/20)\,/\,\big(2\pi\,(58-20)\big) = 2.25\times10^{-5}$ T, total flux $= 62\cdot 38\cdot 2.25\times10^{-5} = 5.28\times10^{-2}$ Wb, and emf $=$ flux$/377 = 1.4\times10^{-4}$ V.

9. ### jjw
emf = flux × 377 = 19.9 V
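For reference, here is the calculation the thread converges on, written out as a hedged sketch. It assumes the values quoted in the posts: a 4000 A line current at power-line frequency ($\omega \approx 377\ \mathrm{rad/s}$, i.e. 60 Hz), a rectangular loop 62 m long (parallel to the cable) by 38 m wide, with its near side 20 m from the cable and its far side at 58 m; the quoted current is treated as the amplitude, and the load resistance R is not given in the excerpt.

$$\overline B = \frac{1}{r_1-r_0}\int_{r_0}^{r_1}\frac{\mu_0 I}{2\pi r}\,dr = \frac{\mu_0 I}{2\pi}\,\frac{\ln(r_1/r_0)}{r_1-r_0} \approx \frac{(4\pi\times10^{-7})(4000)}{2\pi}\cdot\frac{\ln(58/20)}{38} \approx 2.25\times10^{-5}\ \mathrm{T}$$

$$\Phi = \overline B\,(62\ \mathrm{m})(38\ \mathrm{m}) \approx 5.3\times10^{-2}\ \mathrm{Wb}, \qquad \mathcal E_{\mathrm{peak}} = \omega\,\Phi \approx 377 \times 5.3\times10^{-2} \approx 19.9\ \mathrm{V}$$

$$P = \frac{\mathcal E_{\mathrm{rms}}^{2}}{R} = \frac{\big(\mathcal E_{\mathrm{peak}}/\sqrt 2\big)^{2}}{R}$$

The last line is the step the original question asks for; plugging in the circuit's value of R (not reproduced here) gives the power delivered to the load.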
# FreeBSD Handbook FreeBSD is a registered trademark of the FreeBSD Foundation. IBM, AIX, OS/2, PowerPC, PS/2, S/390, and ThinkPad are trademarks of International Business Machines Corporation in the United States, other countries, or both. IEEE, POSIX, and 802 are registered trademarks of Institute of Electrical and Electronics Engineers, Inc. in the United States. Red Hat, RPM, are trademarks or registered trademarks of Red Hat, Inc. in the United States and other countries. 3Com and HomeConnect are registered trademarks of 3Com Corporation. Apple, AirPort, FireWire, iMac, iPhone, iPad, Mac, Macintosh, Mac OS, Quicktime, and TrueType are trademarks of Apple Inc., registered in the U.S. and other countries. Intel, Celeron, Centrino, Core, EtherExpress, i386, i486, Itanium, Pentium, and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a registered trademark of Linus Torvalds. Microsoft, IntelliMouse, MS-DOS, Outlook, Windows, Windows Media and Windows NT are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Motif, OSF/1, and UNIX are registered trademarks and IT DialTone and The Open Group are trademarks of The Open Group in the United States and other countries. Sun, Sun Microsystems, Java, Java Virtual Machine, JDK, JRE, JSP, JVM, Netra, OpenJDK, Solaris, StarOffice, SunOS and VirtualBox are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. RealNetworks, RealPlayer, and RealAudio are the registered trademarks of RealNetworks, Inc. Oracle is a registered trademark of Oracle Corporation. 3ware is a registered trademark of 3ware Inc. ARM is a registered trademark of ARM Limited. Heidelberg, Helvetica, Palatino, and Times Roman are either registered trademarks or trademarks of Heidelberger Druckmaschinen AG in the U.S. and other countries. Intuit and Quicken are registered trademarks and/or registered service marks of Intuit Inc., or one of its subsidiaries, in the United States and other countries. LSI Logic, AcceleRAID, eXtremeRAID, MegaRAID and Mylex are trademarks or registered trademarks of LSI Logic Corp. MATLAB is a registered trademark of The MathWorks, Inc. SpeedTouch is a trademark of Thomson. VMware is a trademark of VMware, Inc. Mathematica is a registered trademark of Wolfram Research, Inc. Ogg Vorbis and Xiph.Org are trademarks of Xiph.Org. XFree86 is a trademark of The XFree86 Project, Inc. Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this document, and the FreeBSD Project was aware of the trademark claim, the designations have been followed by the “™” or the “®” symbol. [ Split HTML / Single HTML ] Abstract Welcome to FreeBSD! This handbook covers the installation and day to day use of FreeBSD 13.1-RELEASE and FreeBSD 12.3-RELEASE. This book is the result of ongoing work by many individuals. Some sections might be outdated. Those interested in helping to update and expand this document should send email to the FreeBSD documentation project mailing list. The latest version of this book is available from the FreeBSD web site. Previous versions can be obtained from https://docs.FreeBSD.org/doc/. The book can be downloaded in a variety of formats and compression options from the FreeBSD download server or one of the numerous mirror sites. 
Searches can be performed on the handbook and other documents on the search page. ## Preface ### Intended Audience The FreeBSD newcomer will find that the first section of this book guides the user through the FreeBSD installation process and gently introduces the concepts and conventions that underpin UNIX®. Working through this section requires little more than the desire to explore, and the ability to take on board new concepts as they are introduced. Once you have traveled this far, the second, far larger, section of the Handbook is a comprehensive reference to all manner of topics of interest to FreeBSD system administrators. Some of these chapters may recommend that you do some prior reading, and this is noted in the synopsis at the beginning of each chapter. ### Changes from the Third Edition The current online version of the Handbook represents the cumulative effort of many hundreds of contributors over the past 10 years. The following are some of the significant changes since the two volume third edition was published in 2004: • WINE has been added with information about how to run Windows® applications on FreeBSD. • DTrace has been added with information about the powerful DTrace performance analysis tool. • Other File Systems has been added with information about non-native file systems in FreeBSD, such as ZFS from Sun™. • Security Event Auditing has been added to cover the new auditing capabilities in FreeBSD and explain its use. • Virtualization has been added with information about installing FreeBSD on virtualization software. • Installing FreeBSD has been added to cover installation of FreeBSD using the new installation utility, bsdinstall. ### Changes from the Second Edition (2004) The third edition was the culmination of over two years of work by the dedicated members of the FreeBSD Documentation Project. The printed edition grew to such a size that it was necessary to publish as two separate volumes. The following are the major changes in this new edition: • Configuration and Tuning has been expanded with new information about the ACPI power and resource management, the cron system utility, and more kernel tuning options. • Security has been expanded with new information about virtual private networks (VPNs), file system access control lists (ACLs), and security advisories. • Mandatory Access Control is a new chapter with this edition. It explains what MAC is and how this mechanism can be used to secure a FreeBSD system. • Storage has been expanded with new information about USB storage devices, file system snapshots, file system quotas, file and network backed filesystems, and encrypted disk partitions. • A troubleshooting section has been added to PPP. • Electronic Mail has been expanded with new information about using alternative transport agents, SMTP authentication, UUCP, fetchmail, procmail, and other advanced topics. • Network Servers is all new with this edition. This chapter includes information about setting up the Apache HTTP Server, ftpd, and setting up a server for Microsoft® Windows® clients with Samba. Some sections from Advanced Networking were moved here to improve the presentation. • Advanced Networking has been expanded with new information about using Bluetooth® devices with FreeBSD, setting up wireless networks, and Asynchronous Transfer Mode (ATM) networking. • A glossary has been added to provide a central location for the definitions of technical terms used throughout the book. 
• A number of aesthetic improvements have been made to the tables and figures throughout the book. ### Changes from the First Edition (2001) The second edition was the culmination of over two years of work by the dedicated members of the FreeBSD Documentation Project. The following were the major changes in this edition: • A complete Index has been added. • All ASCII figures have been replaced by graphical diagrams. • A standard synopsis has been added to each chapter to give a quick summary of what information the chapter contains, and what the reader is expected to know. • The content has been logically reorganized into three parts: "Getting Started", "System Administration", and "Appendices". • FreeBSD Basics has been expanded to contain additional information about processes, daemons, and signals. • Installing Applications: Packages and Ports has been expanded to contain additional information about binary package management. • The X Window System has been completely rewritten with an emphasis on using modern desktop technologies such as KDE and GNOME on XFree86™ 4.X. • The FreeBSD Booting Process has been expanded. • Storage has been written from what used to be two separate chapters on "Disks" and "Backups". We feel that the topics are easier to comprehend when presented as a single chapter. A section on RAID (both hardware and software) has also been added. • Serial Communications has been completely reorganized and updated for FreeBSD 4.X/5.X. • PPP has been substantially updated. • Linux® Binary Compatibility has been expanded to include information about installing Oracle® and SAP® R/3®. • The following new topics are covered in this second edition: ### Organization of This Book This book is split into five logically distinct sections. The first section, Getting Started, covers the installation and basic usage of FreeBSD. It is expected that the reader will follow these chapters in sequence, possibly skipping chapters covering familiar topics. The second section, Common Tasks, covers some frequently used features of FreeBSD. This section, and all subsequent sections, can be read out of order. Each chapter begins with a succinct synopsis that describes what the chapter covers and what the reader is expected to already know. This is meant to allow the casual reader to skip around to find chapters of interest. The third section, System Administration, covers administration topics. The fourth section, Network Communication, covers networking and server topics. The fifth section contains appendices of reference information. Introduction Introduces FreeBSD to a new user. It describes the history of the FreeBSD Project, its goals and development model. Installing FreeBSD Walks a user through the entire installation process of FreeBSD 9.x and later using bsdinstall. FreeBSD Basics Covers the basic commands and functionality of the FreeBSD operating system. If you are familiar with Linux® or another flavor of UNIX® then you can probably skip this chapter. Installing Applications: Packages and Ports Covers the installation of third-party software with both FreeBSD’s innovative "Ports Collection" and standard binary packages. The X Window System Describes the X Window System in general and using X11 on FreeBSD in particular. Also describes common desktop environments such as KDE and GNOME. Desktop Applications Lists some common desktop applications, such as web browsers and productivity suites, and describes how to install them on FreeBSD. 
Multimedia Shows how to set up sound and video playback support for your system. Also describes some sample audio and video applications. Configuring the FreeBSD Kernel Explains why you might need to configure a new kernel and provides detailed instructions for configuring, building, and installing a custom kernel. Printing Describes managing printers on FreeBSD, including information about banner pages, printer accounting, and initial setup. Linux® Binary Compatibility Describes the Linux® compatibility features of FreeBSD. Also provides detailed installation instructions for many popular Linux® applications such as Oracle® and Mathematica®. Configuration and Tuning Describes the parameters available for system administrators to tune a FreeBSD system for optimum performance. Also describes the various configuration files used in FreeBSD and where to find them. The FreeBSD Booting Process Describes the FreeBSD boot process and explains how to control this process with configuration options. Security Describes many different tools available to help keep your FreeBSD system secure, including Kerberos, IPsec and OpenSSH. Jails Describes the jails framework, and the improvements of jails over the traditional chroot support of FreeBSD. Mandatory Access Control Explains what Mandatory Access Control (MAC) is and how this mechanism can be used to secure a FreeBSD system. Security Event Auditing Describes what FreeBSD Event Auditing is, how it can be installed, configured, and how audit trails can be inspected or monitored. Storage Describes how to manage storage media and filesystems with FreeBSD. This includes physical disks, RAID arrays, optical and tape media, memory-backed disks, and network filesystems. GEOM: Modular Disk Transformation Framework Describes what the GEOM framework in FreeBSD is and how to configure various supported RAID levels. Other File Systems Examines support of non-native file systems in FreeBSD, like the Z File System from Sun™. Virtualization Describes what virtualization systems offer, and how they can be used with FreeBSD. Localization - i18n/L10n Usage and Setup Describes how to use FreeBSD in languages other than English. Covers both system and application level localization. Explains the differences between FreeBSD-STABLE, FreeBSD-CURRENT, and FreeBSD releases. Describes which users would benefit from tracking a development system and outlines that process. Covers the methods users may take to update their system to the latest security release. DTrace Describes how to configure and use the DTrace tool from Sun™ in FreeBSD. Dynamic tracing can help locate performance issues, by performing real time system analysis. Serial Communications Explains how to connect terminals and modems to your FreeBSD system for both dial in and dial out connections. PPP Describes how to use PPP to connect to remote systems with FreeBSD. Electronic Mail Explains the different components of an email server and dives into simple configuration topics for the most popular mail server software: sendmail. Network Servers Provides detailed instructions and example configuration files to set up your FreeBSD machine as a network filesystem server, domain name server, network information system server, or time synchronization server. Firewalls Explains the philosophy behind software-based firewalls and provides detailed information about the configuration of the different firewalls available for FreeBSD. 
Describes many networking topics, including sharing an Internet connection with other computers on your LAN, advanced routing topics, wireless networking, Bluetooth®, ATM, IPv6, and much more. Obtaining FreeBSD Lists different sources for obtaining FreeBSD media on CDROM or DVD as well as different sites on the Internet that allow you to download and install FreeBSD. Bibliography This book touches on many different subjects that may leave you hungry for a more detailed explanation. The bibliography lists many excellent books that are referenced in the text. Resources on the Internet Describes the many forums available for FreeBSD users to post questions and engage in technical conversations about FreeBSD. OpenPGP Keys Lists the PGP fingerprints of several FreeBSD Developers. ### Conventions used in this book To provide a consistent and easy to read text, several conventions are followed throughout the book. #### Typographic Conventions Italic An italic font is used for filenames, URLs, emphasized text, and the first usage of technical terms. Monospace A monospaced font is used for error messages, commands, environment variables, names of ports, hostnames, user names, group names, device names, variables, and code fragments. Bold A bold font is used for applications, commands, and keys. #### User Input Keys are shown in bold to stand out from other text. Key combinations that are meant to be typed simultaneously are shown with + between the keys, such as: Ctrl+Alt+Del Meaning the user should type the Ctrl, Alt, and Del keys at the same time. Keys that are meant to be typed in sequence will be separated with commas, for example: Ctrl+X, Ctrl+S Would mean that the user is expected to type the Ctrl and X keys simultaneously and then to type the Ctrl and S keys simultaneously. #### Examples Examples starting with C:\> indicate a MS-DOS® command. Unless otherwise noted, these commands may be executed from a "Command Prompt" window in a modern Microsoft® Windows® environment. C:\> tools\fdimage floppies\kern.flp A: Examples starting with # indicate a command that must be invoked as the superuser in FreeBSD. You can login as root to type the command, or login as your normal account and use su(1) to gain superuser privileges. # dd if=kern.flp of=/dev/fd0 Examples starting with % indicate a command that should be invoked from a normal user account. Unless otherwise noted, C-shell syntax is used for setting environment variables and other shell commands. % top ### Acknowledgments The book you are holding represents the efforts of many hundreds of people around the world. Whether they sent in fixes for typos, or submitted complete chapters, all the contributions have been useful. Several companies have supported the development of this document by paying authors to work on it full-time, paying for publication, etc. In particular, BSDi (subsequently acquired by Wind River Systems) paid members of the FreeBSD Documentation Project to work on improving this book full time leading up to the publication of the first printed edition in March 2000 (ISBN 1-57176-241-8). Wind River Systems then paid several additional authors to make a number of improvements to the print-output infrastructure and to add additional chapters to the text. This work culminated in the publication of the second printed edition in November 2001 (ISBN 1-57176-303-1). In 2003-2004, FreeBSD Mall, Inc, paid several contributors to improve the Handbook in preparation for the third printed edition. 
The third printed edition has been split into two volumes. Both volumes have been published as The FreeBSD Handbook 3rd Edition Volume 1: User Guide (ISBN 1-57176-327-9) and The FreeBSD Handbook 3rd Edition Volume 2: Administrators Guide (ISBN 1-57176-328-7). # Part I: Getting Started This part of the handbook is for users and administrators who are new to FreeBSD. These chapters: • Introduce FreeBSD. • Guide readers through the installation process. • Teach UNIX® basics and fundamentals. • Show how to install the wealth of third party applications available for FreeBSD. • Introduce X, the UNIX® windowing system, and detail how to configure a desktop environment that makes users more productive. The number of forward references in the text have been kept to a minimum so that this section can be read from front to back with minimal page flipping. ## Chapter 1. Introduction ### 1.1. Synopsis Thank you for your interest in FreeBSD! The following chapter covers various aspects of the FreeBSD Project, such as its history, goals, development model, and so on. After reading this chapter you will know: • How FreeBSD relates to other computer operating systems. • The history of the FreeBSD Project. • The goals of the FreeBSD Project. • The basics of the FreeBSD open-source development model. • And of course: where the name "FreeBSD" comes from. ### 1.2. Welcome to FreeBSD! FreeBSD is an Open Source, standards-compliant Unix-like operating system for x86 (both 32 and 64 bit), ARM®, AArch64, RISC-V®, MIPS®, POWER®, PowerPC®, and Sun UltraSPARC® computers. It provides all the features that are nowadays taken for granted, such as preemptive multitasking, memory protection, virtual memory, multi-user facilities, SMP support, all the Open Source development tools for different languages and frameworks, and desktop features centered around X Window System, KDE, or GNOME. Its particular strengths are: • Liberal Open Source license, which grants you rights to freely modify and extend its source code and incorporate it in both Open Source projects and closed products without imposing restrictions typical to copyleft licenses, as well as avoiding potential license incompatibility problems. • Strong TCP/IP networking - FreeBSD implements industry standard protocols with ever increasing performance and scalability. This makes it a good match in both server, and routing/firewalling roles - and indeed many companies and vendors use it precisely for that purpose. • Fully integrated OpenZFS support, including root-on-ZFS, ZFS Boot Environments, fault management, administrative delegation, support for jails, FreeBSD specific documentation, and system installer support. • Extensive security features, from the Mandatory Access Control framework to Capsicum capability and sandbox mechanisms. • Over 30 thousand prebuilt packages for all supported architectures, and the Ports Collection which makes it easy to build your own, customized ones. • Documentation - in addition to the Handbook and books from different authors that cover topics ranging from system administration to kernel internals, there are also the man(1) pages, not only for userspace daemons, utilities, and configuration files, but also for kernel driver APIs (section 9) and individual drivers (section 4). • Simple and consistent repository structure and build system - FreeBSD uses a single repository for all of its components, both kernel and userspace. 
This, along with a unified and easy to customize build system and a well thought-out development process makes it easy to integrate FreeBSD with build infrastructure for your own product. • Staying true to Unix philosophy, preferring composability instead of monolithic "all in one" daemons with hardcoded behavior. • Binary compatibility with Linux, which makes it possible to run many Linux binaries without the need for virtualisation. FreeBSD is based on the 4.4BSD-Lite release from Computer Systems Research Group (CSRG) at the University of California at Berkeley, and carries on the distinguished tradition of BSD systems development. In addition to the fine work provided by CSRG, the FreeBSD Project has put in many thousands of man-hours into extending the functionality and fine-tuning the system for maximum performance and reliability in real-life load situations. FreeBSD offers performance and reliability on par with other Open Source and commercial offerings, combined with cutting-edge features not available anywhere else. #### 1.2.1. What Can FreeBSD Do? The applications to which FreeBSD can be put are truly limited only by your own imagination. From software development to factory automation, inventory control to azimuth correction of remote satellite antennae; if it can be done with a commercial UNIX® product then it is more than likely that you can do it with FreeBSD too! FreeBSD also benefits significantly from literally thousands of high quality applications developed by research centers and universities around the world, often available at little to no cost. Because the source code for FreeBSD itself is freely available, the system can also be customized to an almost unheard-of degree for special applications or projects, and in ways not generally possible with operating systems from most major commercial vendors. Here is just a sampling of some of the applications in which people are currently using FreeBSD: • Internet Services: The robust TCP/IP networking built into FreeBSD makes it an ideal platform for a variety of Internet services such as: • Web servers • IPv4 and IPv6 routing • Firewalls and NAT ("IP masquerading") gateways • FTP servers • Email servers • And more…​ • Education: Are you a student of computer science or a related engineering field? There is no better way of learning about operating systems, computer architecture and networking than the hands-on, under-the-hood experience that FreeBSD can provide. A number of freely available CAD, mathematical and graphic design packages also make it highly useful to those whose primary interest in a computer is to get other work done! • Research: With source code for the entire system available, FreeBSD is an excellent platform for research in operating systems as well as other branches of computer science. FreeBSD’s freely available nature also makes it possible for remote groups to collaborate on ideas or shared development without having to worry about special licensing agreements or limitations on what may be discussed in open forums. • Networking: Need a new router? A name server (DNS)? A firewall to keep people out of your internal network? FreeBSD can easily turn that unused PC sitting in the corner into an advanced router with sophisticated packet-filtering capabilities. • Embedded: FreeBSD makes an excellent platform to build embedded systems upon. 
With support for the ARM®, MIPS® and PowerPC® platforms, coupled with a robust network stack, cutting edge features, and the permissive BSD license, FreeBSD makes an excellent foundation for building embedded routers, firewalls, and other devices. • Desktop: FreeBSD makes a fine choice for an inexpensive desktop solution using the freely available X11 server. FreeBSD offers a choice from many open-source desktop environments, including the standard GNOME and KDE graphical user interfaces. FreeBSD can even boot "diskless" from a central server, making individual workstations even cheaper and easier to administer. • Software Development: The basic FreeBSD system comes with a full suite of development tools including a full C/C++ compiler and debugger suite. Support for many other languages are also available through the ports and packages collection. #### 1.2.2. Who Uses FreeBSD? FreeBSD has been known for its web serving capabilities - sites that run on FreeBSD include Hacker News, Netcraft, NetEase, Netflix, Sina, Sony Japan, Rambler, Yahoo!, and Yandex. FreeBSD’s advanced features, proven security, predictable release cycle, and permissive license have led to its use as a platform for building many commercial and open source appliances, devices, and products. Many of the world’s largest IT companies use FreeBSD: • Apache - The Apache Software Foundation runs most of its public-facing infrastructure on FreeBSD, including possibly one of the largest SVN repositories in the world with over 1.4 million commits. • Apple - Modern operating systems produced by Apple borrow code from FreeBSD for the process model, network stack, virtual file system, libraries, manual pages, and command-line utilities. • Cisco - IronPort network security and anti-spam appliances run a modified FreeBSD kernel. • Citrix - The NetScaler line of security appliances provide layer 4-7 load balancing, content caching, application firewall, secure VPN, and mobile cloud network access, along with the power of a FreeBSD shell. • Dell EMC Isilon - Isilon’s enterprise storage appliances are based on FreeBSD. The extremely liberal FreeBSD license allowed Isilon to integrate their intellectual property throughout the kernel and focus on building their product instead of an operating system. • Quest KACE - The KACE system management appliances run FreeBSD because of its reliability, scalability, and the community that supports its continued development. • iXsystems - The TrueNAS line of unified storage appliances is based on FreeBSD. • Juniper - The JunOS operating system that powers all Juniper networking gear (including routers, switches, and security and networking appliances) is based on FreeBSD. Juniper is one of many vendors that showcases the symbiotic relationship between the project and vendors of commercial products. Improvements generated at Juniper are upstreamed into FreeBSD to reduce the complexity of integrating new features from FreeBSD back into JunOS in the future. • McAfee - SecurOS, the basis of McAfee enterprise firewall products including Sidewinder, is based on FreeBSD. • NetApp - The Data ONTAP GX line of storage appliances are based on FreeBSD. In addition, NetApp has contributed back many features, including the new BSD licensed hypervisor, bhyve. • Netflix - The OpenConnect appliance that Netflix uses to stream movies to its customers is based on FreeBSD. Netflix has made extensive contributions to the codebase and works to maintain a zero delta from mainline FreeBSD. 
Netflix OpenConnect appliances are responsible for delivering more than 32% of all Internet traffic in North America. • Sandvine - Sandvine uses FreeBSD as the basis of their high performance real-time network processing platforms that make up their intelligent network policy control products. • Sony - The PlayStation Vita, PlayStation 4, and PlayStation 5 gaming consoles run a modified version of FreeBSD. • Sophos - The Sophos Email Appliance product is based on a hardened FreeBSD and scans inbound mail for spam and viruses, while also monitoring outbound mail for malware as well as the accidental loss of sensitive information. • Spectra Logic - The nTier line of archive grade storage appliances run FreeBSD and OpenZFS. • Stormshield - Stormshield Network Security appliances are based on a hardened version of FreeBSD. The BSD license allows them to integrate their own intellectual property with the system while returning a great deal of interesting development to the community. • The Weather Channel - The IntelliStar appliance that is installed at each local cable provider’s headend and is responsible for injecting local weather forecasts into the cable TV network’s programming runs FreeBSD. • Verisign - Verisign is responsible for operating the .com and .net root domain registries as well as the accompanying DNS infrastructure. They rely on a number of different network operating systems including FreeBSD to ensure there is no common point of failure in their infrastructure. • Voxer - Voxer powers their mobile voice messaging platform with ZFS on FreeBSD. Voxer switched from a Solaris derivative to FreeBSD because of its superior documentation, larger and more active community, and more developer friendly environment. In addition to critical features like ZFS and DTrace, FreeBSD also offers TRIM support for ZFS. • Fudo Security - The FUDO security appliance allows enterprises to monitor, control, record, and audit contractors and administrators who work on their systems. Based on all of the best security features of FreeBSD including ZFS, GELI, Capsicum, HAST, and auditdistd. FreeBSD has also spawned a number of related open source projects: • BSD Router - A FreeBSD-based replacement for large enterprise routers, designed to run on standard PC hardware. • TrueNAS is a Network Attached Storage (NAS) software that shares and protects data from modern-day threats like ransomware and malware. TrueNAS makes it easy for users and client devices to access shared data through virtually any sharing protocol. • GhostBSD is derived from FreeBSD and uses the GTK environment to provide a beautiful look and comfortable experience on the modern BSD platform, offering a natural and native UNIX® work environment. • mfsBSD - A toolkit for building a FreeBSD system image that runs entirely from memory. • XigmaNAS - A file server distribution based on FreeBSD with a PHP-powered web interface. • OPNSense is an open source, easy-to-use and easy-to-build FreeBSD-based firewall and routing platform. OPNsense includes most of the features available in expensive commercial firewalls, and more in many cases. It brings the rich feature set of commercial offerings with the benefits of open and verifiable sources. • MidnightBSD is a FreeBSD-derived operating system developed with desktop users in mind. It includes all the software expected for daily tasks such as mail, web browsing, word processing, gaming, and much more. • NomadBSD is a persistent live system for USB flash drives, based on FreeBSD. 
Together with automatic hardware detection and setup, it is configured to be used as a desktop system that works out of the box, but can also be used for data recovery, for educational purposes, or to test FreeBSD’s hardware compatibility. • pfSense - A firewall distribution based on FreeBSD with a huge array of features and extensive IPv6 support. • ZRouter - An open source alternative firmware for embedded devices based on FreeBSD. Designed to replace the proprietary firmware on off-the-shelf routers. A list of testimonials from companies basing their products and services on FreeBSD can be found at the FreeBSD Foundation website. Wikipedia also maintains a list of products based on FreeBSD. ### 1.3. About the FreeBSD Project The following section provides some background information on the project, including a brief history, project goals, and the development model of the project. #### 1.3.1. A Brief History of FreeBSD The FreeBSD Project had its genesis in the early part of 1993, partially as the brainchild of the Unofficial 386BSDPatchkit’s last 3 coordinators: Nate Williams, Rod Grimes and Jordan Hubbard. The original goal was to produce an intermediate snapshot of 386BSD in order to fix a number of problems that the patchkit mechanism was just not capable of solving. The early working title for the project was 386BSD 0.5 or 386BSD Interim in reference of that fact. 386BSD was Bill Jolitz’s operating system, which had been up to that point suffering rather severely from almost a year’s worth of neglect. As the patchkit swelled ever more uncomfortably with each passing day, they decided to assist Bill by providing this interim "cleanup" snapshot. Those plans came to a rude halt when Bill Jolitz suddenly decided to withdraw his sanction from the project without any clear indication of what would be done instead. The trio thought that the goal remained worthwhile, even without Bill’s support, and so they adopted the name "FreeBSD" coined by David Greenman. The initial objectives were set after consulting with the system’s current users and, once it became clear that the project was on the road to perhaps even becoming a reality, Jordan contacted Walnut Creek CDROM with an eye toward improving FreeBSD’s distribution channels for those many unfortunates without easy access to the Internet. Walnut Creek CDROM not only supported the idea of distributing FreeBSD on CD but also went so far as to provide the project with a machine to work on and a fast Internet connection. Without Walnut Creek CDROM’s almost unprecedented degree of faith in what was, at the time, a completely unknown project, it is quite unlikely that FreeBSD would have gotten as far, as fast, as it has today. The first CD-ROM (and general net-wide) distribution was FreeBSD 1.0, released in December of 1993. This was based on the 4.3BSD-Lite ("Net/2") tape from U.C. Berkeley, with many components also provided by 386BSD and the Free Software Foundation. It was a fairly reasonable success for a first offering, and they followed it with the highly successful FreeBSD 1.1 release in May of 1994. Around this time, some rather unexpected storm clouds formed on the horizon as Novell and U.C. Berkeley settled their long-running lawsuit over the legal status of the Berkeley Net/2 tape. A condition of that settlement was U.C. Berkeley’s concession that large parts of Net/2 were "encumbered" code and the property of Novell, who had in turn acquired it from AT&T some time previously. 
What Berkeley got in return was Novell’s "blessing" that the 4.4BSD-Lite release, when it was finally released, would be declared unencumbered and all existing Net/2 users would be strongly encouraged to switch. This included FreeBSD, and the project was given until the end of July 1994 to stop shipping its own Net/2 based product. Under the terms of that agreement, the project was allowed one last release before the deadline, that release being FreeBSD 1.1.5.1. FreeBSD then set about the arduous task of literally re-inventing itself from a completely new and rather incomplete set of 4.4BSD-Lite bits. The "Lite" releases were light in part because Berkeley’s CSRG had removed large chunks of code required for actually constructing a bootable running system (due to various legal requirements) and the fact that the Intel port of 4.4 was highly incomplete. It took the project until November of 1994 to make this transition, and in December it released FreeBSD 2.0 to the world. Despite being still more than a little rough around the edges, the release was a significant success and was followed by the more robust and easier to install FreeBSD 2.0.5 release in June of 1995. Since that time, FreeBSD has made a series of releases each time improving the stability, speed, and feature set of the previous version. For now, long-term development projects continue to take place in the 14.0-CURRENT (main) branch, and snapshot releases of 14.0 are continually made available from the snapshot server as work progresses. #### 1.3.2. FreeBSD Project Goals The goals of the FreeBSD Project are to provide software that may be used for any purpose and without strings attached. Many of us have a significant investment in the code (and project) and would certainly not mind a little financial compensation now and then, but we are definitely not prepared to insist on it. We believe that our first and foremost "mission" is to provide code to any and all comers, and for whatever purpose, so that the code gets the widest possible use and provides the widest possible benefit. This is, I believe, one of the most fundamental goals of Free Software and one that we enthusiastically support. That code in our source tree which falls under the GNU General Public License (GPL) or Library General Public License (LGPL) comes with slightly more strings attached, though at least on the side of enforced access rather than the usual opposite. Due to the additional complexities that can evolve in the commercial use of GPL software we do, however, prefer software submitted under the more relaxed BSD license when it is a reasonable option to do so. #### 1.3.3. The FreeBSD Development Model The development of FreeBSD is a very open and flexible process, being literally built from the contributions of thousands of people around the world, as can be seen from our list of contributors. FreeBSD’s development infrastructure allows these thousands of contributors to collaborate over the Internet. We are constantly on the lookout for new volunteers, and those interested in becoming more closely involved should consult the article on Contributing to FreeBSD. Useful things to know about the FreeBSD Project and its development process, whether working independently or in close cooperation: The Git repositories For several years, the central source tree for FreeBSD was maintained by CVS (Concurrent Versions System), a freely available source code control tool. In June 2008, the Project switched to using SVN (Subversion). 
The switch was deemed necessary, as the technical limitations imposed by CVS were becoming obvious due to the rapid expansion of the source tree and the amount of history already stored. The Documentation Project and Ports Collection repositories also moved from CVS to SVN in May 2012 and July 2012, respectively. In December 2020, the Project migrated Source and Documentation repositories to Git, with Ports following suit in April 2021. Please refer to the Obtaining the Source section for more information on obtaining the FreeBSD src/ repository and Using the Ports Collection for details on obtaining the FreeBSD Ports Collection. The committers list The committers are the people who have push access to the Git repository, and are authorized to make modifications to the FreeBSD source (the term "committer" comes from commit, the source control command which is used to bring new changes into the repository). Anyone can submit a bug to the Bug Database. Before submitting a bug report, the FreeBSD mailing lists, IRC channels, or forums can be used to help verify that an issue is actually a bug. The FreeBSD core team The FreeBSD core team would be equivalent to the board of directors if the FreeBSD Project were a company. The primary task of the core team is to make sure the project, as a whole, is in good shape and is heading in the right directions. Inviting dedicated and responsible developers to join our group of committers is one of the functions of the core team, as is the recruitment of new core team members as others move on. The current core team was elected from a pool of committer candidates in May 2022. Elections are held every 2 years. Like most developers, most members of the core team are also volunteers when it comes to FreeBSD development and do not benefit from the project financially, so "commitment" should also not be misconstrued as meaning "guaranteed support." The "board of directors" analogy above is not very accurate, and it may be more suitable to say that these are the people who gave up their lives in favor of FreeBSD against their better judgement! The FreeBSD Foundation The FreeBSD Foundation is a 501(c)(3), US-based, non-profit organization dedicated to supporting and promoting the FreeBSD Project and community worldwide. The Foundation funds software development via project grants and provides staff to immediately respond to urgent problems and implement new features and functionality. The Foundation purchases hardware to improve and maintain FreeBSD infrastructure, and funds staffing to improve test coverage, continuous integration and automation. The Foundation advocates for FreeBSD by promoting FreeBSD at technical conferences and events around the world. The Foundation also provides workshops, educational material, and presentations to recruit more users and contributors to FreeBSD. The Foundation also represents the FreeBSD Project in executing contracts, license agreements, and other legal arrangements that require a recognized legal entity. Outside contributors Last, but definitely not least, the largest group of developers are the users themselves who provide feedback and bug fixes to us on an almost constant basis. The primary way of keeping in touch with the development of the FreeBSD base system is to subscribe to the FreeBSD technical discussions mailing list where such things are discussed. For porting third party applications, it would be the FreeBSD ports mailing list. For documentation - FreeBSD documentation project mailing list. 
See Resources on the Internet for more information about the various FreeBSD mailing lists. The FreeBSD Contributors List is a long and growing one, so why not join it by contributing something back to FreeBSD today? Providing code is not the only way! In summary, our development model is organized as a loose set of concentric circles. The centralized model is designed for the convenience of the users of FreeBSD, who are provided with an easy way of tracking one central code base, not to keep potential contributors out! Our desire is to present a stable operating system with a large set of coherent application programs that the users can easily install and use - this model works very well in accomplishing that. All we ask of those who would join us as FreeBSD developers is some of the same dedication its current people have to its continued success! #### 1.3.4. Third Party Programs In addition to the base distributions, FreeBSD offers a ported software collection with thousands of commonly sought-after programs. The list of ports ranges from HTTP servers to games, languages, editors, and almost everything in between. There are about 36000 ports; the entire Ports Collection requires approximately 3 GB. To compile a port, you simply change to the directory of the program you wish to install, type make install, and let the system do the rest. The full original distribution for each port you build is retrieved dynamically so you need only enough disk space to build the ports you want. Almost every port is also provided as a pre-compiled "package", which can be installed with a simple command (pkg install) by those who do not wish to compile their own ports from source. More information on packages and ports can be found in Installing Applications: Packages and Ports. All supported FreeBSD versions provide an option in the installer to install additional documentation under /usr/local/share/doc/freebsd during the initial system setup. Documentation may also be installed later using packages: # pkg install en-freebsd-doc For localized versions replace the "en" with language prefix of choice. Be aware that some of the localised versions might be out of date and might contain information that is no longer correct or relevant. You may view the locally installed manuals with a web browser using the following URLs: You can always find up to date documentation at https://docs.FreeBSD.org/. ## Chapter 2. Installing FreeBSD ### 2.1. Synopsis There are several different ways of getting FreeBSD to run, depending on the environment. Those are: • Virtual Machine images, to download and import on a virtual environment of choice. These can be downloaded from the Download FreeBSD page. There are images for KVM ("qcow2"), VMWare ("vmdk"), Hyper-V ("vhd"), and raw device images that are universally supported. These are not installation images, but rather the preconfigured ("already installed") instances, ready to run and perform post-installation tasks. • Virtual Machine images available at Amazon’s AWS Marketplace, Microsoft Azure Marketplace, and Google Cloud Platform, to run on their respective hosting services. For more information on deploying FreeBSD on Azure please consult the relevant chapter in the Azure Documentation. • SD card images, for embedded systems such as Raspberry Pi or BeagleBone Black. These can be downloaded from the Download FreeBSD page. These files must be uncompressed and written as a raw image to an SD card, from which the board will then boot. 
• Installation images, to install FreeBSD on a hard drive for the usual desktop, laptop, or server systems. The rest of this chapter describes the fourth case, explaining how to install FreeBSD using the text-based installation program named bsdinstall. In general, the installation instructions in this chapter are written for the i386™ and AMD64 architectures. Where applicable, instructions specific to other platforms will be listed. There may be minor differences between the installer and what is shown here, so use this chapter as a general guide rather than as a set of literal instructions. Users who prefer to install FreeBSD using a graphical installer may be interested in GhostBSD, MidnightBSD or NomadBSD. After reading this chapter, you will know: • The minimum hardware requirements and FreeBSD supported architectures. • How to create the FreeBSD installation media. • How to start bsdinstall. • The questions bsdinstall will ask, what they mean, and how to answer them. • How to troubleshoot a failed installation. • How to access a live version of FreeBSD before committing to an installation. Before reading this chapter, you should: • Read the supported hardware list that shipped with the version of FreeBSD to be installed and verify that the system’s hardware is supported. ### 2.2. Minimum Hardware Requirements The hardware requirements to install FreeBSD vary by architecture. Hardware architectures and devices supported by a FreeBSD release are listed on the FreeBSD Release Information page. The FreeBSD download page also has recommendations for choosing the correct image for different architectures. A FreeBSD installation requires a minimum of 96 MB of RAM and 1.5 GB of free hard drive space. However, such small amounts of memory and disk space are really only suitable for custom applications like embedded appliances. General-purpose desktop systems need more resources. 2-4 GB RAM and at least 8 GB hard drive space is a good starting point. These are the processor requirements for each architecture: amd64 This is the most common desktop and laptop processor type, used in most modern systems. Intel® calls it Intel64. Other manufacturers sometimes call it x86-64. Examples of amd64 compatible processors include: AMD Athlon™64, AMD Opteron™, multi-core Intel® Xeon™, and Intel® Core™ 2 and later processors. i386 Older desktops and laptops often use this 32-bit, x86 architecture. Almost all i386-compatible processors with a floating point unit are supported. All Intel® processors 486 or higher are supported. However, binaries released by the project are compiled for the 686 processor, so a special build will be needed for 486 and 586 systems. FreeBSD will take advantage of Physical Address Extensions (PAE) support on CPUs with this feature. A kernel with the PAE feature enabled will detect memory above 4 GB and allow it to be used by the system. However, using PAE places constraints on device drivers and other features of FreeBSD. arm64 Most embedded boards are 64-bit ARM computers. A number of arm64 servers are supported. arm Older armv7 boards are supported. powerpc All New World ROM Apple® Mac® systems with built-in USB are supported. SMP is supported on machines with multiple CPUs. A 32-bit kernel can only use the first 2 GB of RAM. Once it has been determined that the system meets the minimum hardware requirements for installing FreeBSD, the installation file should be downloaded and the installation media prepared. 
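If FreeBSD (or its live environment, described later in this chapter) can already be booted on the target machine, the architecture and installed memory can be confirmed before choosing an installation image. The following is only a small illustrative check using base system tools; the exact output will depend on the hardware:
% uname -m
% sysctl -n hw.physmem
% sysctl -n hw.ncpu
uname(1) prints the architecture name (for example, amd64), hw.physmem reports the installed physical memory in bytes, and hw.ncpu the number of processors detected.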
Before downloading the installation file and preparing the installation media, check that the system is ready for an installation by verifying the items in this checklist:
1. Back Up Important Data Before installing any operating system, always back up all important data first. Do not store the backup on the system being installed. Instead, save the data to a removable disk such as a USB drive, another system on the network, or an online backup service. Test the backup before starting the installation to make sure it contains all of the needed files. Once the installer formats the system’s disk, all data stored on that disk will be lost.
2. Decide Where to Install FreeBSD If FreeBSD will be the only operating system installed, this step can be skipped. But if FreeBSD will share the disk with another operating system, decide which disk or partition will be used for FreeBSD. In the i386 and amd64 architectures, disks can be divided into multiple partitions using one of two partitioning schemes. A traditional Master Boot Record (MBR) holds a partition table defining up to four primary partitions. For historical reasons, FreeBSD calls these primary partitions slices. One of these primary partitions can be made into an extended partition containing multiple logical partitions. The GUID Partition Table (GPT) is a newer and simpler method of partitioning a disk. Common GPT implementations allow up to 128 partitions per disk, eliminating the need for logical partitions. The FreeBSD boot loader requires either a primary or GPT partition. If all of the primary or GPT partitions are already in use, one must be freed for FreeBSD. To create a partition without deleting existing data, use a partition resizing tool to shrink an existing partition and create a new partition using the freed space. A variety of free and commercial partition resizing tools are listed at http://en.wikipedia.org/wiki/List_of_disk_partitioning_software. GParted Live (https://gparted.org/livecd.php) is a free live CD which includes the GParted partition editor. GParted is also included with many other Linux live CD distributions. When used properly, disk shrinking utilities can safely create space for creating a new partition. Since the possibility of selecting the wrong partition exists, always back up any important data and verify the integrity of the backup before modifying disk partitions. Disk partitions containing different operating systems make it possible to install multiple operating systems on one computer. An alternative is to use virtualization (Virtualization) which allows multiple operating systems to run at the same time without modifying any disk partitions.
3. Collect Network Information Some FreeBSD installation methods require a network connection in order to download the installation files. After any installation, the installer will offer to set up the system’s network interfaces. If the network has a DHCP server, it can be used to provide automatic network configuration. If DHCP is not available, the following network information for the system must be obtained from the local network administrator or Internet service provider:
Required Network Information 1. IP address 2. Subnet mask 3. IP address of default gateway 4. Domain name of the network 5. IP addresses of the network’s DNS servers
4. Check for FreeBSD Errata Although the FreeBSD Project strives to ensure that each release of FreeBSD is as stable as possible, bugs occasionally creep into the process. On very rare occasions those bugs affect the installation process.
As these problems are discovered and fixed, they are noted in the FreeBSD Errata (https://www.freebsd.org/releases/13.0R/errata/) on the FreeBSD web site. Check the errata before installing to make sure that there are no problems that might affect the installation. Information and errata for all the releases can be found on the release information section of the FreeBSD web site (https://www.freebsd.org/releases/). #### 2.3.1. Prepare the Installation Media The FreeBSD installer is not an application that can be run from within another operating system. Instead, download a FreeBSD installation file, burn it to the media associated with its file type and size (CD, DVD, or USB), and boot the system to install from the inserted media. FreeBSD installation files are available at www.freebsd.org/where/. Each installation file’s name includes the release version of FreeBSD, the architecture, and the type of file. For example, to install FreeBSD 13.0 on an amd64 system from a DVD, download FreeBSD-13.0-RELEASE-amd64-dvd1.iso, burn this file to a DVD, and boot the system with the DVD inserted. Installation files are available in several formats. The formats vary depending on computer architecture and media type. Additional installation files are included for computers that boot with UEFI (Unified Extensible Firmware Interface). The names of these files include the string uefi. File types: • -bootonly.iso: This is the smallest installation file as it only contains the installer. A working Internet connection is required during installation as the installer will download the files it needs to complete the FreeBSD installation. This file should be burned to a CD using a CD burning application. • -disc1.iso: This file contains all of the files needed to install FreeBSD, its source, and the Ports Collection. It should be burned to a CD using a CD burning application. • -dvd1.iso: This file contains all of the files needed to install FreeBSD, its source, and the Ports Collection. It also contains a set of popular binary packages for installing a window manager and some applications so that a complete system can be installed from media without requiring a connection to the Internet. This file should be burned to a DVD using a DVD burning application. • -memstick.img: This file contains all of the files needed to install FreeBSD, its source, and the Ports Collection. It should be burned to a USB stick using the instructions below. • -mini-memstick.img: Like -bootonly.iso, does not include installation files, but downloads them as needed. A working internet connection is required during installation. Write this file to a USB stick as shown in Writing an Image File to USB. After downloading the image file, download at least one checksum file from the same directory. There are two checksum files available, named after the release number and the architecture name. For example: CHECKSUM.SHA256-FreeBSD-13.1-RELEASE-amd64 and CHECKSUM.SHA512-FreeBSD-13.1-RELEASE-amd64. After downloading one of the files (or both), calculate the checksum for the image file and compare it with the one shown in the checksum file. Note that you need to compare the calculated checksum against the correct file, as they correspond to two different algorithms: SHA256 and SHA512. FreeBSD provides sha256(1) and sha512(1) that can be used for calculating the checksum. Other operating systems have similar programs. 
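As a concrete illustration of the comparison described above, the digest of the downloaded image can be computed with the base system’s sha256(1) and checked against the matching line of the checksum file. This is only a sketch; substitute the file names of the release and architecture actually downloaded:
% sha256 FreeBSD-13.1-RELEASE-amd64-dvd1.iso
% grep dvd1 CHECKSUM.SHA256-FreeBSD-13.1-RELEASE-amd64
The hash printed by sha256(1) must match the corresponding value in the checksum file exactly.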
Verifying the checksum in FreeBSD can be done automatically using sha256sum(1) (and sha512sum(1)) by executing: % sha256sum -c CHECKSUM.SHA256-FreeBSD-13.1-RELEASE-amd64 FreeBSD-13.1-RELEASE-amd64-dvd1.iso FreeBSD-13.1-RELEASE-amd64-dvd1.iso: OK The checksums must match exactly. If the checksums do not match, the image file is corrupt and must be downloaded again. ##### 2.3.1.1. Writing an Image File to USB The *.img file is an image of the complete contents of a memory stick. It cannot be copied to the target device as a file. Several applications are available for writing the *.img to a USB stick. This section describes two of these utilities. Before proceeding, back up any important data on the USB stick. This procedure will erase the existing data on the stick. Procedure. Using dd to Write the Image This example uses /dev/da0 as the target device where the image will be written. Be very careful that the correct device is used as this command will destroy the existing data on the specified target device. 1. The command-line utility is available on BSD, Linux®, and Mac OS® systems. To burn the image using dd, insert the USB stick and determine its device name. Then, specify the name of the downloaded installation file and the device name for the USB stick. This example burns the amd64 installation image to the first USB device on an existing FreeBSD system. # dd if=FreeBSD-13.0-RELEASE-amd64-memstick.img of=/dev/da0 bs=1M conv=sync If this command fails, verify that the USB stick is not mounted and that the device name is for the disk, not a partition. Some operating systems might require this command to be run with sudo(8). The dd(1) syntax varies slightly across different platforms; for example, Mac OS® requires a lower-case bs=1m. Systems like Linux® might buffer writes. To force all writes to complete, use sync(8). Procedure. Using Windows® to Write the Image Be sure to give the correct drive letter as the existing data on the specified drive will be overwritten and destroyed. 1. Obtaining Image Writer for Windows® Image Writer for Windows® is a free application that can correctly write an image file to a memory stick. Download it from https://sourceforge.net/projects/win32diskimager/ and extract it into a folder. 2. Writing the Image with Image Writer Double-click the Win32DiskImager icon to start the program. Verify that the drive letter shown under Device is the drive with the memory stick. Click the folder icon and select the image to be written to the memory stick. Click to accept the image file name. Verify that everything is correct, and that no folders on the memory stick are open in other windows. When everything is ready, click to write the image file to the memory stick. You are now ready to start installing FreeBSD. ### 2.4. Starting the Installation By default, the installation will not make any changes to the disk(s) before the following message:Your changes will now be written to disk. If you have chosen to overwrite existing data, it will be PERMANENTLY ERASED. Are you sure you want to commit your changes?The install can be exited at any time prior to this warning. If there is a concern that something is incorrectly configured, just turn the computer off before this point and no changes will be made to the system’s disks. This section describes how to boot the system from the installation media which was prepared using the instructions in Prepare the Installation Media. When using a bootable USB stick, plug in the USB stick before turning on the computer. 
When booting from CD or DVD, turn on the computer and insert the media at the first opportunity. How to configure the system to boot from the inserted media depends upon the architecture. #### 2.4.1. Booting on i386™ and amd64 These architectures provide a BIOS menu for selecting the boot device. Depending upon the installation media being used, select the CD/DVD or USB device as the first boot device. Most systems also provide a key for selecting the boot device during startup without having to enter the BIOS. Typically, the key is either F10, F11, F12, or Escape. If the computer loads the existing operating system instead of the FreeBSD installer, then either: 1. The installation media was not inserted early enough in the boot process. Leave the media inserted and try restarting the computer. 2. The BIOS changes were incorrect or not saved. Double-check that the right boot device is selected as the first boot device. 3. This system is too old to support booting from the chosen media. In this case, the Plop Boot Manager (http://www.plop.at/en/bootmanagers.html) can be used to boot the system from the selected media. #### 2.4.2. Booting on PowerPC® On most machines, holding C on the keyboard during boot will boot from the CD. Otherwise, hold Command+Option+O+F, or Windows+Alt+O+F on non-Apple® keyboards. At the 0 > prompt, enter boot cd:,\ppc\loader cd:0 Once the system boots from the installation media, a menu similar to the following will be displayed: By default, the menu will wait ten seconds for user input before booting into the FreeBSD installer or, if FreeBSD is already installed, before booting into FreeBSD. To pause the boot timer in order to review the selections, press Space. To select an option, press its highlighted number, character, or key. The following options are available. • Boot Multi User: This will continue the FreeBSD boot process. If the boot timer has been paused, press 1, upper- or lower-case B, or Enter. • Boot Single User: This mode can be used to fix an existing FreeBSD installation as described in “Single-User Mode”. Press 2 or the upper- or lower-case S to enter this mode. • Escape to loader prompt: This will boot the system into a repair prompt that contains a limited number of low-level commands. This prompt is described in “Stage Three”. Press 3 or Esc to boot into this prompt. • Reboot: Reboots the system. • Kernel: Loads a different kernel. • Boot Options: Opens the menu shown in, and described under, FreeBSD Boot Options Menu. Figure 2. FreeBSD Boot Options Menu The boot options menu is divided into two sections. The first section can be used to either return to the main boot menu or to reset any toggled options back to their defaults. The next section is used to toggle the available options to On or Off by pressing the option’s highlighted number or character. The system will always boot using the settings for these options until they are modified. Several options can be toggled using this menu: • ACPI Support: If the system hangs during boot, try toggling this option to Off. • Safe Mode: If the system still hangs during boot even with ACPI Support set to Off, try setting this option to On. • Single User: Toggle this option to On to fix an existing FreeBSD installation as described in “Single-User Mode”. Once the problem is fixed, set it back to Off. • Verbose: Toggle this option to On to see more detailed messages during the boot process. This can be useful when troubleshooting a piece of hardware. 
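These menu toggles affect the current boot only. On an installed system the same behavior can be made persistent through loader(8) tunables in /boot/loader.conf; for example, the following line (shown here only as an illustration of the mechanism, not something the installer adds for you) keeps the verbose boot messages described above enabled on every boot:
boot_verbose="YES"
Remove the line to return to the normal, quieter boot.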
After making the needed selections, press 1 or Backspace to return to the main boot menu, then press Enter to continue booting into FreeBSD. A series of boot messages will appear as FreeBSD carries out its hardware device probes and loads the installation program. Once the boot is complete, the welcome menu shown in Welcome Menu will be displayed. Press Enter to select the default of [ Install ] to enter the installer. The rest of this chapter describes how to use this installer. Otherwise, use the right or left arrows or the colorized letter to select the desired menu item. The [ Shell ] option can be used to access a FreeBSD shell in order to use command line utilities to prepare the disks before installation. The [ Live CD ] option can be used to try out FreeBSD before installing it. The live version is described in Using the Live CD. To review the boot messages, including the hardware device probe, press the upper- or lower-case S and then Enter to access a shell. At the shell prompt, type more /var/run/dmesg.boot and use the space bar to scroll through the messages. When finished, type exit to return to the welcome menu.
### 2.5. Using bsdinstall This section shows the order of the bsdinstall menus and the type of information that will be asked before the system is installed. Use the arrow keys to highlight a menu option, then Space to select or deselect that menu item. When finished, press Enter to save the selection and move on to the next screen.
#### 2.5.1. Selecting the Keymap Menu After the keymaps have been loaded, bsdinstall displays the menu shown in Keymap Selection Menu. Use the up and down arrows to select the keymap that most closely represents the mapping of the keyboard attached to the system. Press Enter to save the selection. Pressing Esc will exit this menu and use the default keymap. If the choice of keymap is not clear, United States of America ISO-8859-1 is also a safe option. In addition, when selecting a different keymap, the user can try the keymap and ensure it is correct before proceeding, as shown in Keymap Testing Menu.
#### 2.5.2. Setting the Hostname The next bsdinstall menu is used to set the hostname for the newly installed system. Figure 7. Setting the Hostname Type in a hostname that is unique for the network. It should be a fully-qualified hostname, such as machine3.example.com.
#### 2.5.3. Selecting Components to Install Next, bsdinstall will prompt to select optional components to install. Figure 8. Selecting Components to Install Deciding which components to install will depend largely on the intended use of the system and the amount of disk space available. The FreeBSD kernel and userland, collectively known as the base system, are always installed. Depending on the architecture, some of these components may not appear: • base-dbg - Base tools like cat and ls, among many others, with debug symbols activated. • kernel-dbg - Kernel and modules with debug symbols activated. • lib32-dbg - Compatibility libraries for running 32-bit applications on a 64-bit version of FreeBSD with debug symbols activated. • lib32 - Compatibility libraries for running 32-bit applications on a 64-bit version of FreeBSD. • ports - The FreeBSD Ports Collection is a collection of files which automates the downloading, compiling and installation of third-party software packages. Installing Applications: Packages and Ports discusses how to use the Ports Collection. The installation program does not check for adequate disk space. Select this option only if sufficient hard disk space is available.
The FreeBSD Ports Collection takes up about 3 GB of disk space. • src - The complete FreeBSD source code for both the kernel and the userland. Although not required for the majority of applications, it may be required to build device drivers, kernel modules, or some applications from the Ports Collection. It is also used for developing FreeBSD itself. The full source tree requires 1 GB of disk space and recompiling the entire FreeBSD system requires an additional 5 GB of space. • tests - FreeBSD Test Suite. #### 2.5.4. Installing from the Network The menu shown in Installing from the Network only appears when installing from a -bootonly.iso or -mini-memstick.img, as this installation media does not hold copies of the installation files. Since the installation files must be retrieved over a network connection, this menu indicates that the network interface must be configured first. If this menu is shown in any step of the process, remember to follow the instructions in Configuring Network Interfaces. Figure 9. Installing from the Network ### 2.6. Allocating Disk Space The next menu is used to determine the method for allocating disk space. Figure 10. Partitioning Choices bsdinstall gives the user four methods for allocating disk space: • Auto (UFS) partitioning automatically sets up the disk partitions using the UFS file system. • Manual partitioning allows advanced users to create customized partitions from menu options. • Shell opens a shell prompt where advanced users can create customized partitions using command-line utilities like gpart(8), fdisk(8), and bsdlabel(8). • Auto (ZFS) partitioning creates a root-on-ZFS system with optional GELI encryption support for boot environments. This section describes what to consider when laying out the disk partitions. It then demonstrates how to use the different partitioning methods. #### 2.6.1. Designing the Partition Layout When laying out file systems, remember that hard drives transfer data faster from the outer tracks to the inner. Thus, smaller and heavier-accessed file systems should be closer to the outside of the drive, while larger partitions like /usr should be placed toward the inner parts of the disk. It is a good idea to create partitions in an order similar to: /, swap, /var, and /usr. The size of the /var partition reflects the intended machine’s usage. This partition is used to hold mailboxes, log files, and printer spools. Mailboxes and log files can grow to unexpected sizes depending on the number of users and how long log files are kept. On average, most users rarely need more than about a gigabyte of free disk space in /var. Sometimes, a lot of disk space is required in /var/tmp. When new software is installed, the packaging tools extract a temporary copy of the packages under /var/tmp. Large software packages, like Firefox or LibreOffice may be tricky to install if there is not enough disk space under /var/tmp. The /usr partition holds many of the files which support the system, including the FreeBSD Ports Collection and system source code. At least 2 gigabytes of space is recommended for this partition. When selecting partition sizes, keep the space requirements in mind. Running out of space in one partition while barely using another can be a hassle. As a rule of thumb, the swap partition should be about double the size of physical memory (RAM). Systems with minimal RAM may perform better with more swap. 
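To apply this rule of thumb on a machine that can already run FreeBSD (for example, from the live environment), the installed memory can be read with sysctl(8) and doubled. The sh(1) snippet below is only a rough aid for picking a number, not an installer feature:
mem=$(sysctl -n hw.physmem)   # installed memory in bytes
echo "$((mem * 2 / 1073741824)) GB of swap suggested"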
Configuring too little swap can lead to inefficiencies in the VM page scanning code and might create issues later if more memory is added. On larger systems with multiple SCSI disks or multiple IDE disks operating on different controllers, it is recommended that swap be configured on each drive, up to four drives. The swap partitions should be approximately the same size. The kernel can handle arbitrary sizes, but internal data structures scale to 4 times the largest swap partition. Keeping the swap partitions near the same size will allow the kernel to optimally stripe swap space across disks. Large swap sizes are fine, even if swap is not used much. It might be easier to recover from a runaway program before being forced to reboot. By properly partitioning a system, fragmentation introduced in the smaller write-heavy partitions will not bleed over into the mostly read partitions. Keeping the write-loaded partitions closer to the disk’s edge will increase I/O performance in the partitions where it occurs the most. While I/O performance in the larger partitions may be needed, shifting them more toward the edge of the disk will not lead to a significant performance improvement over moving /var to the edge. #### 2.6.2. Guided Partitioning Using UFS When this method is selected, a menu will display the available disk(s). If multiple disks are connected, choose the one where FreeBSD is to be installed. Figure 11. Selecting from Multiple Disks Once the disk is selected, the next menu prompts to install to either the entire disk or to create a partition using free space. If is chosen, a general partition layout filling the whole disk is automatically created. Selecting creates a partition layout from the unused space on the disk. Figure 12. Selecting Entire Disk or Partition After is chosen, bsdinstall displays a dialog indicating that the disk will be erased. Figure 13. Confirmation The next menu shows a list with the available partition scheme types. GPT is usually the most appropriate choice for amd64 computers. Older computers that are not compatible with GPT should use MBR. The other partition schemes are generally used for uncommon or older computers. More information is available in Partitioning Schemes. Figure 14. Select Partition Scheme After the partition layout has been created, review it to ensure it meets the needs of the installation. Selecting will reset the partitions to their original values. Pressing will recreate the automatic FreeBSD partitions. Partitions can also be manually created, modified, or deleted. When the partitioning is correct, select to continue with the installation. Figure 15. Review Created Partitions Once the disks are configured, the next menu provides the last chance to make changes before the selected drives are formatted. If changes need to be made, select to return to the main partitioning menu. exits the installer without making any changes to the drive. Otherwise, select to start the installation process. Figure 16. Final Confirmation To continue with the installation process, go to Fetching Distribution Files. #### 2.6.3. Manual Partitioning Selecting this method opens the partition editor: Figure 17. Manually Create Partitions Highlight the installation drive (ada0 in this example) and select to display a menu of available partition schemes: Figure 18. Manually Create Partitions GPT is usually the most appropriate choice for amd64 computers. Older computers that are not compatible with GPT should use MBR. 
The other partition schemes are generally used for uncommon or older computers. Table 1. Partitioning Schemes AbbreviationDescription APM Apple Partition Map, used by PowerPC®. BSD BSD label without an MBR, sometimes called dangerously dedicated mode as non-BSD disk utilities may not recognize it. GPT GUID Partition Table (http://en.wikipedia.org/wiki/GUID_Partition_Table). MBR Master Boot Record (http://en.wikipedia.org/wiki/Master_boot_record). After the partitioning scheme has been selected and created, select again to create the partitions. The Tab key is used to move the cursor between fields. Figure 19. Manually Create Partitions A standard FreeBSD GPT installation uses at least three partitions: • freebsd-boot - Holds the FreeBSD boot code. • freebsd-ufs - A FreeBSD UFS file system. • freebsd-zfs - A FreeBSD ZFS file system. More information about ZFS is available in The Z File System (ZFS). • freebsd-swap - FreeBSD swap space. Refer to gpart(8) for descriptions of the available GPT partition types. Multiple file system partitions can be created. Some people prefer a traditional layout with separate partitions for /, /var, /tmp, and /usr. See Creating Traditional Split File System Partitions for an example. The Size may be entered with common abbreviations: K for kilobytes, M for megabytes, or G for gigabytes. Proper sector alignment provides the best performance, and making partition sizes even multiples of 4K bytes helps to ensure alignment on drives with either 512-byte or 4K-byte sectors. Generally, using partition sizes that are even multiples of 1M or 1G is the easiest way to make sure every partition starts at an even multiple of 4K. There is one exception: the freebsd-boot partition should be no larger than 512K due to current boot code limitations. A Mountpoint is needed if the partition will contain a file system. If only a single UFS partition will be created, the mountpoint should be /. The Label is a name by which the partition will be known. Drive names or numbers can change if the drive is connected to a different controller or port, but the partition label does not change. Referring to labels instead of drive names and partition numbers in files like /etc/fstab makes the system more tolerant to hardware changes. GPT labels appear in /dev/gpt/ when a disk is attached. Other partitioning schemes have different label capabilities and their labels appear in different directories in /dev/. Use a unique label on every partition to avoid conflicts from identical labels. A few letters from the computer’s name, use, or location can be added to the label. For instance, use labroot or rootfslab for the UFS root partition on the computer named lab. Example 1. Creating Traditional Split File System Partitions For a traditional partition layout where the /, /var, /tmp, and /usr directories are separate file systems on their own partitions, create a GPT partitioning scheme, then create the partitions as shown. Partition sizes shown are typical for a 20G target disk. If more space is available on the target disk, larger swap or /var partitions may be useful. Labels shown here are prefixed with ex for "example", but readers should use other unique label values as described above. By default, FreeBSD’s gptboot expects the first UFS partition to be the / partition. 
Partition TypeSizeMountpointLabel freebsd-boot 512K freebsd-ufs 2G / exrootfs freebsd-swap 4G exswap freebsd-ufs 2G /var exvarfs freebsd-ufs 1G /tmp extmpfs freebsd-ufs accept the default (remainder of the disk) /usr exusrfs After the custom partitions have been created, select to continue with the installation and go to Fetching Distribution Files. #### 2.6.4. Guided Partitioning Using Root-on-ZFS This partitioning mode only works with whole disks and will erase the contents of the entire disk. The main ZFS configuration menu offers a number of options to control the creation of the pool. Here is a summary of the options in this menu: • Install - Proceed with the installation with the selected options. • Pool Type/Disks - Configure the Pool Type and the disk(s) that will constitute the pool. The automatic ZFS installer currently only supports the creation of a single top level vdev, except in stripe mode. To create more complex pools, use the instructions in Shell Mode Partitioning to create the pool. • Rescan Devices - Repopulate the list of available disks. • Disk Info - This menu can be used to inspect each disk, including its partition table and various other information such as the device model number and serial number, if available. • Pool Name - Establish the name of the pool. The default name is zroot. • Force 4K Sectors? - Force the use of 4K sectors. By default, the installer will automatically create partitions aligned to 4K boundaries and force ZFS to use 4K sectors. This is safe even with 512 byte sector disks, and has the added benefit of ensuring that pools created on 512 byte disks will be able to have 4K sector disks added in the future, either as additional storage space or as replacements for failed disks. Press the Enter key to chose to activate it or not. • Encrypt Disks? - Encrypting the disks allows the user to encrypt the disks using GELI. More information about disk encryption is available in “Disk Encryption with geli”. Press the Enter key to chose activate it or not. • Partition Scheme - Choose the partition scheme. GPT is the recommended option in most cases. Press the Enter key to chose between the different options. • Swap Size - Establish the amount of swap space. • Mirror Swap? - Whether to mirror the swap between the disks. Be aware that enabling mirror swap will break crash dumps. Press the Enter key to activate it or not. • Encrypt Swap? - Whether to encrypt the swap. This will encrypt the swap with a temporary key each time the system boots, and discards it on reboot. Press the Enter key to chose activate it or not. More information about swap encryption in “Encrypting Swap”. Select T to configure the Pool Type and the disk(s) that will constitute the pool. Figure 21. ZFS Pool Type Here is a summary of the Pool Type that can be selected in this menu: • stripe - Striping provides maximum storage of all connected devices, but no redundancy. If just one disk fails the data on the pool is lost irrevocably. • mirror - Mirroring stores a complete copy of all data on every disk. Mirroring provides good read performance because data is read from all disks in parallel. Write performance is slower as the data must be written to all disks in the pool. Allows all but one disk to fail. This option requires at least two disks. • raid10 - Striped mirrors. Provides the best performance, but the least storage. This option needs at least an even number of disks and a minimum of four disks. • raidz1 - Single Redundant RAID. Allow one disk to fail concurrently. 
This option needs at least three disks. • raidz2 - Double Redundant RAID. Allows two disks to fail concurrently. This option needs at least four disks. • raidz3 - Triple Redundant RAID. Allows three disks to fail concurrently. This option needs at least five disks. Once a Pool Type has been selected, a list of available disks is displayed, and the user is prompted to select one or more disks to make up the pool. The configuration is then validated to ensure that enough disks are selected. If validation fails, select to return to the list of disks or to change the Pool Type. Figure 22. Disk Selection Figure 23. Invalid Selection If one or more disks are missing from the list, or if disks were attached after the installer was started, select to repopulate the list of available disks. Figure 24. Rescan Devices To avoid accidentally erasing the wrong disk, the menu can be used to inspect each disk, including its partition table and various other information such as the device model number and serial number, if available. Figure 25. Analyzing a Disk Select N to configure the Pool Name. Enter the desired name, then select to establish it or to return to the main menu and leave the default name. Figure 26. Pool Name Select S to set the amount of swap. Enter the desired amount of swap, then select to establish it or to return to the main menu and let the default amount. Figure 27. Swap Amount Once all options have been set to the desired values, select the option at the top of the menu. The installer then offers a last chance to cancel before the contents of the selected drives are destroyed to create the ZFS pool. Figure 28. Last Chance If GELI disk encryption was enabled, the installer will prompt twice for the passphrase to be used to encrypt the disks. Initialization of the encryption then begins. Figure 30. Initializing Encryption The installation then proceeds normally. To continue with the installation, go to Fetching Distribution Files. #### 2.6.5. Shell Mode Partitioning When creating advanced installations, the bsdinstall partitioning menus may not provide the level of flexibility required. Advanced users can select the option from the partitioning menu in order to manually partition the drives, create the file system(s), populate /tmp/bsdinstall_etc/fstab, and mount the file systems under /mnt. Once this is done, type exit to return to bsdinstall and continue the installation. ### 2.7. Fetching Distribution Files Installation time will vary depending on the distributions chosen, installation media, and speed of the computer. A series of messages will indicate the progress. First, the installer formats the selected disk(s) and initializes the partitions. Next, in the case of a bootonly media or mini memstick, it downloads the selected components: Figure 31. Fetching Distribution Files Next, the integrity of the distribution files is verified to ensure they have not been corrupted during download or misread from the installation media: Figure 32. Verifying Distribution Files Finally, the verified distribution files are extracted to the disk: Figure 33. Extracting Distribution Files Once all requested distribution files have been extracted, bsdinstall displays the first post-installation configuration screen. The available post-configuration options are described in the next section. ### 2.8. Accounts, Time Zone, Services and Hardening #### 2.8.1. Setting the root Password First, the root password must be set. 
While entering the password, the characters being typed are not displayed on the screen. The password must be entered twice to prevent typing errors. Figure 34. Setting the root Password #### 2.8.2. Setting the Time Zone The next series of menus are used to determine the correct local time by selecting the geographic region, country, and time zone. Setting the time zone allows the system to automatically correct for regional time changes, such as daylight savings time, and perform other time zone related functions properly. The example shown here is for a machine located in the mainland time zone of Spain, Europe. The selections will vary according to the geographical location. Figure 35. Select a Region The appropriate region is selected using the arrow keys and then pressing Enter. Figure 36. Select a Country Select the appropriate country using the arrow keys and press Enter. Figure 37. Select a Time Zone The appropriate time zone is selected using the arrow keys and pressing Enter. Figure 38. Confirm Time Zone Confirm the abbreviation for the time zone is correct. Figure 39. Select Date The appropriate date is selected using the arrow keys and then pressing . Otherwise, the date selection can be skipped by pressing . Figure 40. Select Time The appropriate time is selected using the arrow keys and then pressing . Otherwise, the time selection can be skipped by pressing . #### 2.8.3. Enabling Services The next menu is used to configure which system services will be started whenever the system boots. All of these services are optional. Only start the services that are needed for the system to function. Figure 41. Selecting Additional Services to Enable Here is a summary of the services that can be enabled in this menu: • local_unbound - Enable the DNS local unbound. It is necessary to keep in mind that this is the unbound of the base system and is only meant for use as a local caching forwarding resolver. If the objective is to set up a resolver for the entire network install dns/unbound. • sshd - The Secure Shell (SSH) daemon is used to remotely access a system over an encrypted connection. Only enable this service if the system should be available for remote logins. • moused - Enable this service if the mouse will be used from the command-line system console. • ntpdate - Enable the automatic clock synchronization at boot time. The functionality of this program is now available in the ntpd(8) daemon. After a suitable period of mourning, the ntpdate(8) utility will be retired. • ntpd - The Network Time Protocol (NTP) daemon for automatic clock synchronization. Enable this service if there is a Windows®, Kerberos, or LDAP server on the network. • powerd - System power control utility for power control and energy saving. • dumpdev - Crash dumps are useful when debugging issues with the system, so users are encouraged to enable them. #### 2.8.4. Enabling Hardening Security Options The next menu is used to configure which security options will be enabled. All of these options are optional. But their use is encouraged. Figure 42. Selecting Hardening Security Options Here is a summary of the options that can be enabled in this menu: • hide_uids - Hide processes running as other users (UID). This prevents unprivileged users from seeing running processes from other users. • hide_gids - Hide processes running as other groups (GID). This prevents unprivileged users from seeing running processes from other groups. • hide_jail - Hide processes running in jails. 
This prevents unprivileged users from seeing processes running inside jails. • read_msgbuf - Disable reading kernel message buffer for unprivileged users. Prevent unprivileged users from using dmesg(8) to view messages from the kernel’s log buffer. • proc_debug - Disable process debugging facilities for unprivileged users. Disables a variety of unprivileged inter-process debugging services, including some procfs functionality, ptrace(), and ktrace(). Please note that this will also prevent debugging tools such as lldb(1), truss(1) and procstat(1), as well as some built-in debugging facilities in certain scripting languages like PHP. • random_pid - Randomize the PID of processes. • clear_tmp - Clean /tmp when the system starts up. • disable_syslogd - Disable opening the syslogd network socket. By default, FreeBSD runs syslogd in a secure way with -s. This prevents the daemon from listening for incoming UDP requests on port 514. With this option enabled, syslogd will instead run with -ss, which prevents syslogd from opening any port. For more information, see syslogd(8). • disable_sendmail - Disable the sendmail mail transport agent. • secure_console - Make the command prompt request the root password when entering single-user mode. • disable_ddtrace - DTrace can run in a mode that affects the running kernel. Destructive actions may not be used unless explicitly enabled. Use -w to enable this option when using DTrace. For more information, see dtrace(1). The next menu prompts to create at least one user account. It is recommended to log into the system using a user account rather than as root. When logged in as root, there are essentially no limits or protection on what can be done. Logging in as a normal user is safer and more secure. Follow the prompts and input the requested information for the user account. The example shown in Enter User Information creates the asample user account. Figure 44. Enter User Information Here is a summary of the information to input: • Username - The name the user will enter to log in. A common convention is to use the first letter of the first name combined with the last name, as long as each username is unique for the system. The username is case sensitive and should not contain any spaces. • Full name - The user’s full name. This can contain spaces and is used as a description for the user account. • Uid - User ID. This is typically left blank so the system automatically assigns a value. • Login group - The user’s group. This is typically left blank to accept the default. • Invite user into other groups? - Additional groups to which the user will be added as a member. If the user needs administrative access, type wheel here. • Login class - Typically left blank for the default. • Shell - Type in one of the listed values to set the interactive shell for the user. Refer to Shells for more information about shells. • Home directory - The user’s home directory. The default is usually correct. • Home directory permissions - Permissions on the user’s home directory. The default is usually correct. • Use password-based authentication? - Typically yes so that the user is prompted to input their password at login. • Use an empty password? - Typically no as empty or blank passwords are insecure. • Use a random password? - Typically no so that the user can set their own password in the next prompt. • Enter password - The password for this user. Typed-in characters will not be shown on the screen. 
• Enter password again - The password must be typed again for verification. • Lock out the account after creation? - Typically no so that the user can log in. After entering all the details, a summary is shown for review. If a mistake was made, enter no to correct it. Once everything is correct, enter yes to create the new user. Figure 45. Exit User and Group Management If there are more users to add, answer the Add another user? question with yes. Enter no to finish adding users and continue the installation. #### 2.8.6. Final Configuration After everything has been installed and configured, a final chance is provided to modify settings. Figure 46. Final Configuration Use this menu to make any changes or to do any additional configuration before completing the installation. Once configuration is complete, select Exit. Figure 47. Manual Configuration bsdinstall will prompt for any additional configuration that needs to be done before rebooting into the new system. Select Yes to exit to a shell within the new system, or No to proceed to the last step of the installation. Figure 48. Complete the Installation If further configuration or special setup is needed, select Live CD to boot the install media into Live CD mode. If the installation is complete, select Reboot to reboot the computer and start the new FreeBSD system. Do not forget to remove the FreeBSD install media or the computer might boot from it again. As FreeBSD boots, informational messages are displayed. After the system finishes booting, a login prompt is displayed. At the login: prompt, enter the username added during the installation. Avoid logging in as root. Refer to The Superuser Account for instructions on how to become the superuser when administrative access is needed. The messages that appear during boot can be reviewed by pressing Scroll-Lock to turn on the scroll-back buffer. The PgUp, PgDn, and arrow keys can be used to scroll back through the messages. When finished, press Scroll-Lock again to unlock the display and return to the console. To review these messages once the system has been up for some time, type less /var/run/dmesg.boot from a command prompt. Press q to return to the command line after viewing. If sshd was enabled in Selecting Additional Services to Enable, the first boot might be a bit slower as the system generates SSH host keys. Subsequent boots will be faster. The fingerprints of the keys are then displayed as in the following example: Generating public/private rsa1 key pair. Your identification has been saved in /etc/ssh/ssh_host_key. Your public key has been saved in /etc/ssh/ssh_host_key.pub. The key fingerprint is: 10:a0:f5:af:93:ae:a3:1a:b2:bb:3c:35:d9:5a:b3:f3 [email protected] The key's randomart image is: +--[RSA1 1024]----+ | o.. | | o . . | | . o | | o | | o S | | + + o | |o . + * | |o+ ..+ . | |==o..o+E | +-----------------+ Generating public/private dsa key pair. Your identification has been saved in /etc/ssh/ssh_host_dsa_key. Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub. The key fingerprint is: 7e:1c:ce:dc:8a:3a:18:13:5b:34:b5:cf:d9:d1:47:b2 [email protected] The key's randomart image is: +--[ DSA 1024]----+ | .. . .| | o . . + | | . .. . E .| | . . o o . . | | + S = . | | + . = o | | + . * . | | . . o . | | .o. . | +-----------------+ Starting sshd. FreeBSD does not install a graphical environment by default. Refer to The X Window System for more information about installing and configuring a graphical window manager.
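A note on the SSH host keys shown above: on recent FreeBSD releases the keys generated at first boot are normally RSA, ECDSA, and ED25519 rather than the legacy rsa1 and dsa keys of this older example, but the behaviour is the same. The fingerprints can be re-checked at any time from a root shell; a minimal sketch, assuming the default key locations used by sshd(8):

# ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub

Repeating the command for the other ssh_host_*.pub files in /etc/ssh lists the remaining fingerprints, which is useful when verifying the host from a remote client for the first time.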
Proper shutdown of a FreeBSD computer helps protect data and hardware from damage. Do not turn off the power before the system has been properly shut down! If the user is a member of the wheel group, become the superuser by typing su at the command line and entering the root password. Then, type shutdown -p now and the system will shut down cleanly, and, if the hardware supports it, turn itself off. ### 2.9. Network Interfaces #### 2.9.1. Configuring Network Interfaces Next, a list of the network interfaces found on the computer is shown. Select the interface to configure. Figure 49. Choose a Network Interface If an Ethernet interface is selected, the installer will skip ahead to the menu shown in Choose IPv4 Networking. If a wireless network interface is chosen, the system will instead scan for wireless access points: Figure 50. Scanning for Wireless Access Points Wireless networks are identified by a Service Set Identifier (SSID), a short, unique name given to each network. SSIDs found during the scan are listed, followed by a description of the encryption types available for that network. If the desired SSID does not appear in the list, select Rescan to scan again. If the desired network still does not appear, check for problems with antenna connections or try moving the computer closer to the access point. Rescan after each change is made. Figure 51. Choosing a Wireless Network Next, enter the encryption information for connecting to the selected wireless network. WPA2 encryption is strongly recommended over older encryption types such as WEP, which offer little security. If the network uses WPA2, input the password, also known as the Pre-Shared Key (PSK). For security reasons, the characters typed into the input box are displayed as asterisks. Figure 52. WPA2 Setup Next, choose whether or not an IPv4 address should be configured on the Ethernet or wireless interface: Figure 53. Choose IPv4 Networking There are two methods of IPv4 configuration. DHCP will automatically configure the network interface correctly and should be used if the network provides a DHCP server. Otherwise, the addressing information needs to be input manually as a static configuration. Do not enter random network information as it will not work. If a DHCP server is not available, obtain the information listed in Required Network Information from the network administrator or Internet service provider. If a DHCP server is available, select Yes in the next menu to automatically configure the network interface. The installer will appear to pause for a minute or so as it finds the DHCP server and obtains the addressing information for the system. Figure 54. Choose IPv4 DHCP Configuration If a DHCP server is not available, select No and input the following addressing information in this menu: Figure 55. IPv4 Static Configuration • IP Address - The IPv4 address assigned to this computer. The address must be unique and not already in use by another device on the local network. • Subnet Mask - The subnet mask for the network. • Default Router - The IP address of the network’s default gateway. The next screen will ask if the interface should be configured for IPv6. If IPv6 is available and desired, choose Yes to select it. Figure 56. Choose IPv6 Networking IPv6 also has two methods of configuration. StateLess Address AutoConfiguration (SLAAC) will automatically request the correct configuration information from a local router. Refer to rfc4862 for more information. Static configuration requires manual entry of network information.
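Whichever method is chosen for IPv4 or IPv6, the installer writes the result to /etc/rc.conf on the installed system, so the addressing can be reviewed or corrected later without re-running the installer. A minimal sketch of what a static IPv4 entry might look like, where the em0 interface name and the addresses are illustrative assumptions only:

ifconfig_em0="inet 192.168.1.20 netmask 255.255.255.0"
defaultrouter="192.168.1.1"

An interface configured for IPv6 SLAAC would instead carry a line such as ifconfig_em0_ipv6="inet6 accept_rtadv". See rc.conf(5) for the full list of variables.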
If an IPv6 router is available, select Yes in the next menu to automatically configure the network interface. The installer will appear to pause for a minute or so as it finds the router and obtains the addressing information for the system. Figure 57. Choose IPv6 SLAAC Configuration If an IPv6 router is not available, select No and input the following addressing information in this menu: Figure 58. IPv6 Static Configuration • IPv6 Address - The IPv6 address assigned to this computer. The address must be unique and not already in use by another device on the local network. • Default Router - The IPv6 address of the network’s default gateway. The last network configuration menu is used to configure the Domain Name System (DNS) resolver, which converts hostnames to and from network addresses. If DHCP or SLAAC was used to autoconfigure the network interface, the Resolver Configuration values may already be filled in. Otherwise, enter the local network’s domain name in the Search field. DNS #1 and DNS #2 are the IPv4 and/or IPv6 addresses of the DNS servers. At least one DNS server is required. Figure 59. DNS Configuration Once the interface is configured, select a mirror site that is located in the same region of the world as the computer on which FreeBSD is being installed. Files can be retrieved more quickly when the mirror is close to the target computer, reducing installation time. Figure 60. Choosing a Mirror ### 2.10. Troubleshooting This section covers basic installation troubleshooting, such as common problems people have reported. Check the Hardware Notes (https://www.freebsd.org/releases/) document for the version of FreeBSD to make sure the hardware is supported. If the hardware is supported and locks up or other problems occur, build a custom kernel using the instructions in Configuring the FreeBSD Kernel to add support for devices which are not present in the GENERIC kernel. The default kernel assumes that most hardware devices are in their factory default configuration in terms of IRQs, I/O addresses, and DMA channels. If the hardware has been reconfigured, a custom kernel configuration file can tell FreeBSD where to find things. Some installation problems can be avoided or alleviated by updating the firmware on various hardware components, most notably the motherboard. Motherboard firmware is usually referred to as the BIOS. Most motherboard and computer manufacturers have a website for upgrades and upgrade information. Manufacturers generally advise against upgrading the motherboard BIOS unless there is a good reason for doing so, like a critical update. The upgrade process can go wrong, leaving the BIOS incomplete and the computer inoperative. If the system hangs while probing hardware during boot or behaves strangely during the installation process, ACPI may be the culprit. FreeBSD makes extensive use of the system ACPI service on the i386 and amd64 platforms to aid in system configuration if it is detected during boot. Unfortunately, some bugs still exist in both the ACPI driver and within system motherboards and BIOS firmware. ACPI can be disabled by setting the hint.acpi.0.disabled hint in the third stage boot loader: set hint.acpi.0.disabled="1" This is reset each time the system is booted, so it is necessary to add hint.acpi.0.disabled="1" to the file /boot/loader.conf. More information about the boot loader can be found in “Synopsis”. ### 2.11. Using the Live CD The welcome menu of bsdinstall, shown in Welcome Menu, provides a Live CD option.
This is useful for those who are still wondering whether FreeBSD is the right operating system for them and want to test some of the features before installing. The following points should be noted before using the Live CD: • To gain access to the system, authentication is required. The username is root and the password is blank. • As the system runs directly from the installation media, performance will be significantly slower than that of a system installed on a hard disk. • This option only provides a command prompt and not a graphical interface. ## Chapter 3. FreeBSD Basics ### 3.1. Synopsis This chapter covers the basic commands and functionality of the FreeBSD operating system. Much of this material is relevant for any UNIX®-like operating system. New FreeBSD users are encouraged to read through this chapter carefully. After reading this chapter, you will know: • How to use and configure virtual consoles. • How to create and manage users and groups on FreeBSD. • How UNIX® file permissions and FreeBSD file flags work. • The default FreeBSD file system layout. • The FreeBSD disk organization. • How to mount and unmount file systems. • What processes, daemons, and signals are. • What a shell is, and how to change the default login environment. • How to use basic text editors. • What devices and device nodes are. ### 3.2. Virtual Consoles and Terminals Unless FreeBSD has been configured to automatically start a graphical environment during startup, the system will boot into a command line login prompt, as seen in this example: FreeBSD/amd64 (pc3.example.org) (ttyv0) login: The first line contains some information about the system. The amd64 indicates that the system in this example is running a 64-bit version of FreeBSD. The hostname is pc3.example.org, and ttyv0 indicates that this is the "system console". The second line is the login prompt. Since FreeBSD is a multiuser system, it needs some way to distinguish between different users. This is accomplished by requiring every user to log into the system before gaining access to the programs on the system. Every user has a unique "username" and a personal "password". To log into the system console, type the username that was configured during system installation, as described in Add Users, and press Enter. Then enter the password associated with the username and press Enter. The password is not echoed for security reasons. Once the correct password is input, the message of the day (MOTD) will be displayed followed by a command prompt. Depending upon the shell that was selected when the user was created, this prompt will be a #, $, or % character. The prompt indicates that the user is now logged into the FreeBSD system console and ready to try the available commands. #### 3.2.1. Virtual Consoles While the system console can be used to interact with the system, a user working from the command line at the keyboard of a FreeBSD system will typically instead log into a virtual console. This is because system messages are configured by default to display on the system console. These messages will appear over the command or file that the user is working on, making it difficult to concentrate on the work at hand. By default, FreeBSD is configured to provide several virtual consoles for inputting commands. Each virtual console has its own login prompt and shell and it is easy to switch between virtual consoles. This essentially provides the command line equivalent of having several windows open at the same time in a graphical environment.
The key combinations Alt+F1 through Alt+F8 have been reserved by FreeBSD for switching between virtual consoles. Use Alt+F1 to switch to the system console (ttyv0), Alt+F2 to access the first virtual console (ttyv1), Alt+F3 to access the second virtual console (ttyv2), and so on. When using Xorg as a graphical console, the combination becomes Ctrl+Alt+F1 to return to a text-based virtual console. When switching from one console to the next, FreeBSD manages the screen output. The result is an illusion of having multiple virtual screens and keyboards that can be used to type commands for FreeBSD to run. The programs that are launched in one virtual console do not stop running when the user switches to a different virtual console. Refer to kbdcontrol(1), vidcontrol(1), atkbd(4), syscons(4), and vt(4) for a more technical description of the FreeBSD console and its keyboard drivers. In FreeBSD, the number of available virtual consoles is configured in this section of /etc/ttys: # name getty type status comments # ttyv0 "/usr/libexec/getty Pc" xterm on secure # Virtual terminals ttyv1 "/usr/libexec/getty Pc" xterm on secure ttyv2 "/usr/libexec/getty Pc" xterm on secure ttyv3 "/usr/libexec/getty Pc" xterm on secure ttyv4 "/usr/libexec/getty Pc" xterm on secure ttyv5 "/usr/libexec/getty Pc" xterm on secure ttyv6 "/usr/libexec/getty Pc" xterm on secure ttyv7 "/usr/libexec/getty Pc" xterm on secure ttyv8 "/usr/X11R6/bin/xdm -nodaemon" xterm off secure To disable a virtual console, put a comment symbol (#) at the beginning of the line representing that virtual console. For example, to reduce the number of available virtual consoles from eight to four, put a # in front of the last four lines representing virtual consoles ttyv5 through ttyv8. Do not comment out the line for the system console ttyv0. Note that the last virtual console (ttyv8) is used to access the graphical environment if Xorg has been installed and configured as described in The X Window System. For a detailed description of every column in this file and the available options for the virtual consoles, refer to ttys(5). #### 3.2.2. Single User Mode The FreeBSD boot menu provides an option labelled as "Boot Single User". If this option is selected, the system will boot into a special mode known as "single user mode". This mode is typically used to repair a system that will not boot or to reset the root password when it is not known. While in single user mode, networking and other virtual consoles are not available. However, full root access to the system is available, and by default, the root password is not needed. For these reasons, physical access to the keyboard is needed to boot into this mode and determining who has physical access to the keyboard is something to consider when securing a FreeBSD system. The settings which control single user mode are found in this section of /etc/ttys: # name getty type status comments # # If console is marked "insecure", then init will ask for the root password # when going to single-user mode. console none unknown off secure By default, the status is set to secure. This assumes that who has physical access to the keyboard is either not important or it is controlled by a physical security policy. If this setting is changed to insecure, the assumption is that the environment itself is insecure because anyone can access the keyboard. When this line is changed to insecure, FreeBSD will prompt for the root password when a user selects to boot into single user mode. 
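For example, after making that change the console entry would read as follows; only the last field differs from the default shown above:

console none unknown off insecure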
Be careful when changing this setting to insecure! If the root password is forgotten, booting into single user mode is still possible, but may be difficult for someone who is not familiar with the FreeBSD booting process. #### 3.2.3. Changing Console Video Modes The FreeBSD console default video mode may be adjusted to 1024x768, 1280x1024, or any other size supported by the graphics chip and monitor. To use a different video mode load the VESA module: # kldload vesa To determine which video modes are supported by the hardware, use vidcontrol(1). To get a list of supported video modes issue the following: # vidcontrol -i mode The output of this command lists the video modes that are supported by the hardware. To select a new video mode, specify the mode using vidcontrol(1) as the root user: # vidcontrol MODE_279 If the new video mode is acceptable, it can be permanently set on boot by adding it to /etc/rc.conf: allscreens_flags="MODE_279" ### 3.3. Users and Basic Account Management FreeBSD allows multiple users to use the computer at the same time. While only one user can sit in front of the screen and use the keyboard at any one time, any number of users can log in to the system through the network. To use the system, each user should have their own user account. This chapter describes: • The different types of user accounts on a FreeBSD system. • How to add, remove, and modify user accounts. • How to set limits to control the resources that users and groups are allowed to access. • How to create groups and add users as members of a group. #### 3.3.1. Account Types Since all access to the FreeBSD system is achieved using accounts and all processes are run by users, user and account management is important. There are three main types of accounts: system accounts, user accounts, and the superuser account. ##### 3.3.1.1. System Accounts System accounts are used to run services such as DNS, mail, and web servers. The reason for this is security; if all services ran as the superuser, they could act without restriction. Examples of system accounts are daemon, operator, bind, news, and www. Care must be taken when using the operator group, as unintended superuser-like access privileges may be granted, including but not limited to shutdown, reboot, and access to all items in /dev in the group. nobody is the generic unprivileged system account. However, the more services that use nobody, the more files and processes that user will become associated with, and hence the more privileged that user becomes. ##### 3.3.1.2. User Accounts User accounts are assigned to real people and are used to log in and use the system. Every person accessing the system should have a unique user account. This allows the administrator to find out who is doing what and prevents users from clobbering the settings of other users. Each user can set up their own environment to accommodate their use of the system, by configuring their default shell, editor, key bindings, and language settings. Every user account on a FreeBSD system has certain information associated with it: User name The user name is typed at the login: prompt. Each user must have a unique user name. There are a number of rules for creating valid user names which are documented in passwd(5). It is recommended to use user names that consist of eight or fewer, all lower case characters in order to maintain backwards compatibility with applications. Password Each account has an associated password. 
User ID (UID) The User ID (UID) is a number used to uniquely identify the user to the FreeBSD system. Commands that allow a user name to be specified will first convert it to the UID. It is recommended to use a UID less than 65535, since higher values may cause compatibility issues with some software. Group ID (GID) The Group ID (GID) is a number used to uniquely identify the primary group that the user belongs to. Groups are a mechanism for controlling access to resources based on a user’s GID rather than their UID. This can significantly reduce the size of some configuration files and allows users to be members of more than one group. It is recommended to use a GID of 65535 or lower as higher GIDs may break some software. Login class Login classes are an extension to the group mechanism that provide additional flexibility when tailoring the system to different users. Login classes are discussed further in Configuring Login Classes. Password change time By default, passwords do not expire. However, password expiration can be enabled on a per-user basis, forcing some or all users to change their passwords after a certain amount of time has elapsed. Account expiration time By default, FreeBSD does not expire accounts. When creating accounts that need a limited lifespan, such as student accounts in a school, specify the account expiry date using pw(8). After the expiry time has elapsed, the account cannot be used to log in to the system, although the account’s directories and files will remain. User’s full name The user name uniquely identifies the account to FreeBSD, but does not necessarily reflect the user’s real name. Similar to a comment, this information can contain spaces, uppercase characters, and be more than 8 characters long. Home directory The home directory is the full path to a directory on the system. This is the user’s starting directory when the user logs in. A common convention is to put all user home directories under /home/username or /usr/home/username. Each user stores their personal files and subdirectories in their own home directory. User shell The shell provides the user’s default environment for interacting with the system. There are many different kinds of shells and experienced users will have their own preferences, which can be reflected in their account settings. ##### 3.3.1.3. The Superuser Account The superuser account, usually called root, is used to manage the system with no limitations on privileges. For this reason, it should not be used for day-to-day tasks like sending and receiving mail, general exploration of the system, or programming. The superuser, unlike other user accounts, can operate without limits, and misuse of the superuser account may result in spectacular disasters. User accounts are unable to destroy the operating system by mistake, so it is recommended to login as a user account and to only become the superuser when a command requires extra privilege. Always double and triple-check any commands issued as the superuser, since an extra space or missing character can mean irreparable data loss. There are several ways to gain superuser privilege. While one can log in as root, this is highly discouraged. Instead, use su(1) to become the superuser. If - is specified when running this command, the user will also inherit the root user’s environment. The user running this command must be in the wheel group or else the command will fail. The user must also know the password for the root user account. 
In this example, the user only becomes superuser in order to run make install as this step requires superuser privilege. Once the command completes, the user types exit to leave the superuser account and return to the privilege of their user account. Example 2. Install a Program As the Superuser % configure % make % su - Password: # make install # exit % The built-in su(1) framework works well for single systems or small networks with just one system administrator. An alternative is to install the security/sudo package or port. This software provides activity logging and allows the administrator to configure which users can run which commands as the superuser. #### 3.3.2. Managing Accounts FreeBSD provides a variety of different commands to manage user accounts. The most common commands are summarized in Utilities for Managing User Accounts, followed by some examples of their usage. See the manual page for each utility for more details and usage examples. Table 2. Utilities for Managing User Accounts CommandSummary adduser(8) The recommended command-line application for adding new users. rmuser(8) The recommended command-line application for removing users. chpass(1) A flexible tool for changing user database information. passwd(1) The command-line tool to change user passwords. pw(8) A powerful and flexible tool for modifying all aspects of user accounts. ##### 3.3.2.1. adduser The recommended program for adding new users is adduser(8). When a new user is added, this program automatically updates /etc/passwd and /etc/group. It also creates a home directory for the new user, copies in the default configuration files from /usr/share/skel, and can optionally mail the new user a welcome message. This utility must be run as the superuser. The adduser(8) utility is interactive and walks through the steps for creating a new user account. As seen in Adding a User on FreeBSD, either input the required information or press Return to accept the default value shown in square brackets. In this example, the user has been invited into the wheel group, allowing them to become the superuser with su(1). When finished, the utility will prompt to either create another user or to exit. Example 3. Adding a User on FreeBSD # adduser Username: jru Full name: J. Random User Uid (Leave empty for default): Login group [jru]: Login group is jru. Invite jru into other groups? []: wheel Login class [default]: Shell (sh csh tcsh zsh nologin) [sh]: zsh Home directory [/home/jru]: Home directory permissions (Leave empty for default): Use password-based authentication? [yes]: Use an empty password? (yes/no) [no]: Use a random password? (yes/no) [no]: Enter password: Enter password again: Lock out the account after creation? [no]: Username : jru Password : **** Full Name : J. Random User Uid : 1001 Class : Groups : jru wheel Home : /home/jru Shell : /usr/local/bin/zsh Locked : no OK? (yes/no): yes adduser: INFO: Successfully added (jru) to the user database. Add another user? (yes/no): no Goodbye! # Since the password is not echoed when typed, be careful to not mistype the password when creating the user account. ##### 3.3.2.2. rmuser To completely remove a user from the system, run rmuser(8) as the superuser. This command performs the following steps: 1. Removes the user’s crontab(1) entry, if one exists. 2. Removes any at(1) jobs belonging to the user. 3. Kills all processes owned by the user. 4. Removes the user from the system’s local password file. 5. 
Optionally removes the user’s home directory, if it is owned by the user. 6. Removes the incoming mail files belonging to the user from /var/mail. 7. Removes all files owned by the user from temporary file storage areas such as /tmp. 8. Finally, removes the username from all groups to which it belongs in /etc/group. If a group becomes empty and the group name is the same as the username, the group is removed. This complements the per-user unique groups created by adduser(8). rmuser(8) cannot be used to remove superuser accounts since that is almost always an indication of massive destruction. By default, an interactive mode is used, as shown in the following example. Example 4. rmuser Interactive Account Removal # rmuser jru Matching password entry: jru:*:1001:1001::0:0:J. Random User:/home/jru:/usr/local/bin/zsh Is this the entry you wish to remove? y Remove user's home directory (/home/jru)? y Removing user (jru): mailspool home passwd. # ##### 3.3.2.3. chpass Any user can use chpass(1) to change their default shell and personal information associated with their user account. The superuser can use this utility to change additional account information for any user. When passed no options, aside from an optional username, chpass(1) displays an editor containing user information. When the user exits from the editor, the user database is updated with the new information. This utility will prompt for the user’s password when exiting the editor, unless the utility is run as the superuser. In Using chpass as Superuser, the superuser has typed chpass jru and is now viewing the fields that can be changed for this user. If jru runs this command instead, only the last six fields will be displayed and available for editing. This is shown in Using chpass as Regular User. Example 5. Using chpass as Superuser #Changing user database information for jru. Login: jru Password: * Uid [#]: 1001 Gid [# or name]: 1001 Change [month day year]: Expire [month day year]: Class: Home directory: /home/jru Shell: /usr/local/bin/zsh Full Name: J. Random User Office Location: Office Phone: Home Phone: Other information: Example 6. Using chpass as Regular User #Changing user database information for jru. Shell: /usr/local/bin/zsh Full Name: J. Random User Office Location: Office Phone: Home Phone: Other information: The commands chfn(1) and chsh(1) are links to chpass(1), as are ypchpass(1), ypchfn(1), and ypchsh(1). Since NIS support is automatic, specifying the yp before the command is not necessary. How to configure NIS is covered in Network Servers. ##### 3.3.2.4. passwd Any user can easily change their password using passwd(1). To prevent accidental or unauthorized changes, this command will prompt for the user’s original password before a new password can be set: Example 7. Changing Your Password % passwd Changing local password for jru. Old password: New password: Retype new password: passwd: updating the database... passwd: done The superuser can change any user’s password by specifying the username when running passwd(1). When this utility is run as the superuser, it will not prompt for the user’s current password. This allows the password to be changed when a user cannot remember the original password. Example 8. Changing Another User’s Password as the Superuser # passwd jru Changing local password for jru. New password: Retype new password: passwd: updating the database... passwd: done As with chpass(1), yppasswd(1) is a link to passwd(1), so NIS works with either command. ##### 3.3.2.5. 
pw The pw(8) utility can create, remove, modify, and display users and groups. It functions as a front end to the system user and group files. pw(8) has a very powerful set of command line options that make it suitable for use in shell scripts, but new users may find it more complicated than the other commands presented in this section. #### 3.3.3. Managing Groups A group is a list of users. A group is identified by its group name and GID. In FreeBSD, the kernel uses the UID of a process, and the list of groups it belongs to, to determine what the process is allowed to do. Most of the time, the GID of a user or process usually means the first group in the list. The group name to GID mapping is listed in /etc/group. This is a plain text file with four colon-delimited fields. The first field is the group name, the second is the encrypted password, the third the GID, and the fourth the comma-delimited list of members. For a more complete description of the syntax, refer to group(5). The superuser can modify /etc/group using a text editor. Alternatively, pw(8) can be used to add and edit groups. For example, to add a group called teamtwo and then confirm that it exists: Example 9. Adding a Group Using pw(8) # pw groupadd teamtwo # pw groupshow teamtwo teamtwo:*:1100: In this example, 1100 is the GID of teamtwo. Right now, teamtwo has no members. This command will add jru as a member of teamtwo. Example 10. Adding User Accounts to a New Group Using pw(8) # pw groupmod teamtwo -M jru # pw groupshow teamtwo teamtwo:*:1100:jru The argument to -M is a comma-delimited list of users to be added to a new (empty) group or to replace the members of an existing group. To the user, this group membership is different from (and in addition to) the user’s primary group listed in the password file. This means that the user will not show up as a member when using groupshow with pw(8), but will show up when the information is queried via id(1) or a similar tool. When pw(8) is used to add a user to a group, it only manipulates /etc/group and does not attempt to read additional data from /etc/passwd. Example 11. Adding a New Member to a Group Using pw(8) # pw groupmod teamtwo -m db # pw groupshow teamtwo teamtwo:*:1100:jru,db In this example, the argument to -m is a comma-delimited list of users who are to be added to the group. Unlike the previous example, these users are appended to the group and do not replace existing users in the group. Example 12. Using id(1) to Determine Group Membership % id jru uid=1001(jru) gid=1001(jru) groups=1001(jru), 1100(teamtwo) In this example, jru is a member of the groups jru and teamtwo. For more information about this command and the format of /etc/group, refer to pw(8) and group(5). ### 3.4. Permissions In FreeBSD, every file and directory has an associated set of permissions and several utilities are available for viewing and modifying these permissions. Understanding how permissions work is necessary to make sure that users are able to access the files that they need and are unable to improperly access the files used by the operating system or owned by other users. This section discusses the traditional UNIX® permissions used in FreeBSD. For finer-grained file system access control, refer to Access Control Lists. In UNIX®, basic permissions are assigned using three types of access: read, write, and execute. These access types are used to determine file access to the file’s owner, group, and others (everyone else). 
The read, write, and execute permissions can be represented as the letters r, w, and x. They can also be represented as binary numbers as each permission is either on or off (0). When represented as a number, the order is always read as rwx, where r has an on value of 4, w has an on value of 2 and x has an on value of 1. Table 4.1 summarizes the possible numeric and alphabetic possibilities. When reading the "Directory Listing" column, a - is used to represent a permission that is set to off. Table 3. UNIX® Permissions ValuePermissionDirectory Listing 0 No read, no write, no execute --- 1 No read, no write, execute --x 2 No read, write, no execute -w- 3 No read, write, execute -wx 4 Read, no write, no execute r-- 5 Read, no write, execute r-x 6 Read, write, no execute rw- 7 Read, write, execute rwx Use the -l argument with ls(1) to view a long directory listing that includes a column of information about a file’s permissions for the owner, group, and everyone else. For example, ls -l in an arbitrary directory may show: % ls -l total 530 -rw-r--r-- 1 root wheel 512 Sep 5 12:31 myfile -rw-r--r-- 1 root wheel 512 Sep 5 12:31 otherfile -rw-r--r-- 1 root wheel 7680 Sep 5 12:31 email.txt The first (leftmost) character in the first column indicates whether this file is a regular file, a directory, a special character device, a socket, or any other special pseudo-file device. In this example, the - indicates a regular file. The next three characters, rw- in this example, give the permissions for the owner of the file. The next three characters, r--, give the permissions for the group that the file belongs to. The final three characters, r--, give the permissions for the rest of the world. A dash means that the permission is turned off. In this example, the permissions are set so the owner can read and write to the file, the group can read the file, and the rest of the world can only read the file. According to the table above, the permissions for this file would be 644, where each digit represents the three parts of the file’s permission. How does the system control permissions on devices? FreeBSD treats most hardware devices as a file that programs can open, read, and write data to. These special device files are stored in /dev/. Directories are also treated as files. They have read, write, and execute permissions. The executable bit for a directory has a slightly different meaning than that of files. When a directory is marked executable, it means it is possible to change into that directory using cd(1). This also means that it is possible to access the files within that directory, subject to the permissions on the files themselves. In order to perform a directory listing, the read permission must be set on the directory. In order to delete a file that one knows the name of, it is necessary to have write and execute permissions to the directory containing the file. There are more permission bits, but they are primarily used in special circumstances such as setuid binaries and sticky directories. For more information on file permissions and how to set them, refer to chmod(1). #### 3.4.1. Symbolic Permissions Symbolic permissions use characters instead of octal values to assign permissions to files or directories. 
Symbolic permissions use the syntax of (who) (action) (permissions), where the following values are available: OptionLetterRepresents (who) u User (who) g Group owner (who) o Other (who) a All ("world") (action) + Adding permissions (action) - Removing permissions (action) = Explicitly set permissions (permissions) r Read (permissions) w Write (permissions) x Execute (permissions) t Sticky bit (permissions) s Set UID or GID These values are used with chmod(1), but with letters instead of numbers. For example, the following command would block other users from accessing FILE: % chmod go= FILE A comma separated list can be provided when more than one set of changes to a file must be made. For example, the following command removes the group and "world" write permission on FILE, and adds the execute permissions for everyone: % chmod go-w,a+x FILE #### 3.4.2. FreeBSD File Flags In addition to file permissions, FreeBSD supports the use of "file flags". These flags add an additional level of security and control over files, but not directories. With file flags, even root can be prevented from removing or altering files. File flags are modified using chflags(1). For example, to enable the system undeletable flag on the file file1, issue the following command: # chflags sunlink file1 To disable the system undeletable flag, put a "no" in front of the sunlink: # chflags nosunlink file1 To view the flags of a file, use -lo with ls(1): # ls -lo file1 -rw-r--r-- 1 trhodes trhodes sunlnk 0 Mar 1 05:54 file1 Several file flags may only be added or removed by the root user. In other cases, the file owner may set its file flags. Refer to chflags(1) and chflags(2) for more information. #### 3.4.3. The setuid, setgid, and sticky Permissions Other than the permissions already discussed, there are three other specific settings that all administrators should know about. They are the setuid, setgid, and sticky permissions. These settings are important for some UNIX® operations as they provide functionality not normally granted to normal users. To understand them, the difference between the real user ID and effective user ID must be noted. The real user ID is the UID who owns or starts the process. The effective UID is the user ID the process runs as. As an example, passwd(1) runs with the real user ID when a user changes their password. However, in order to update the password database, the command runs as the effective ID of the root user. This allows users to change their passwords without seeing a Permission Denied error. The setuid permission may be set by prefixing a permission set with the number four (4) as shown in the following example: # chmod 4755 suidexample.sh The permissions on suidexample.sh now look like the following: -rwsr-xr-x 1 trhodes trhodes 63 Aug 29 06:36 suidexample.sh Note that a s is now part of the permission set designated for the file owner, replacing the executable bit. This allows utilities which need elevated permissions, such as passwd(1). The nosuid mount(8) option will cause such binaries to silently fail without alerting the user. That option is not completely reliable as a nosuid wrapper may be able to circumvent it. To view this in real time, open two terminals. On one, type passwd as a normal user. 
While it waits for a new password, check the process table and look at the user information for passwd(1): In terminal A: Changing local password for trhodes Old Password: In terminal B: # ps aux | grep passwd trhodes 5232 0.0 0.2 3420 1608 0 R+ 2:10AM 0:00.00 grep passwd root 5211 0.0 0.2 3620 1724 2 I+ 2:09AM 0:00.01 passwd Although passwd(1) is run as a normal user, it is using the effective UID of root. The setgid permission performs the same function as the setuid permission; except that it alters the group settings. When an application or utility executes with this setting, it will be granted the permissions based on the group that owns the file, not the user who started the process. To set the setgid permission on a file, provide chmod(1) with a leading two (2): # chmod 2755 sgidexample.sh In the following listing, notice that the s is now in the field designated for the group permission settings: -rwxr-sr-x 1 trhodes trhodes 44 Aug 31 01:49 sgidexample.sh In these examples, even though the shell script in question is an executable file, it will not run with a different EUID or effective user ID. This is because shell scripts may not access the setuid(2) system calls. The setuid and setgid permission bits may lower system security, by allowing for elevated permissions. The third special permission, the sticky bit, can strengthen the security of a system. When the sticky bit is set on a directory, it allows file deletion only by the file owner. This is useful to prevent file deletion in public directories, such as /tmp, by users who do not own the file. To utilize this permission, prefix the permission set with a one (1): # chmod 1777 /tmp The sticky bit permission will display as a t at the very end of the permission set: # ls -al / | grep tmp drwxrwxrwt 10 root wheel 512 Aug 31 01:49 tmp ### 3.5. Directory Structure The FreeBSD directory hierarchy is fundamental to obtaining an overall understanding of the system. The most important directory is root or, "/". This directory is the first one mounted at boot time and it contains the base system necessary to prepare the operating system for multi-user operation. The root directory also contains mount points for other file systems that are mounted during the transition to multi-user operation. A mount point is a directory where additional file systems can be grafted onto a parent file system (usually the root file system). This is further described in Disk Organization. Standard mount points include /usr/, /var/, /tmp/, /mnt/, and /cdrom/. These directories are usually referenced to entries in /etc/fstab. This file is a table of various file systems and mount points and is read by the system. Most of the file systems in /etc/fstab are mounted automatically at boot time from the script rc(8) unless their entry includes noauto. Details can be found in The fstab File. A complete description of the file system hierarchy is available in hier(7). The following table provides a brief overview of the most common directories. DirectoryDescription / Root directory of the file system. /bin/ User utilities fundamental to both single-user and multi-user environments. /boot/ Programs and configuration files used during operating system bootstrap. /boot/defaults/ Default boot configuration files. Refer to loader.conf(5) for details. /dev/ Device nodes. Refer to intro(4) for details. /etc/ System configuration files and scripts. /etc/defaults/ Default system configuration files. Refer to rc(8) for details. 
/etc/mail/ Configuration files for mail transport agents such as sendmail(8). /etc/periodic/ Scripts that run daily, weekly, and monthly, via cron(8). Refer to periodic(8) for details. /etc/ppp/ ppp(8) configuration files. /mnt/ Empty directory commonly used by system administrators as a temporary mount point. /proc/ Process file system. Refer to procfs(5), mount_procfs(8) for details. /rescue/ Statically linked programs for emergency recovery as described in rescue(8). /root/ Home directory for the root account. /sbin/ System programs and administration utilities fundamental to both single-user and multi-user environments. /tmp/ Temporary files which are usually not preserved across a system reboot. A memory-based file system is often mounted at /tmp. This can be automated using the tmpmfs-related variables of rc.conf(5) or with an entry in /etc/fstab; refer to mdmfs(8) for details. /usr/ The majority of user utilities and applications. /usr/bin/ Common utilities, programming tools, and applications. /usr/include/ Standard C include files. /usr/lib/ Archive libraries. /usr/libdata/ Miscellaneous utility data files. /usr/libexec/ System daemons and system utilities executed by other programs. /usr/local/ Local executables and libraries. Also used as the default destination for the FreeBSD ports framework. Within /usr/local, the general layout sketched out by hier(7) for /usr should be used. Exceptions are the man directory, which is directly under /usr/local rather than under /usr/local/share, and the ports documentation is in share/doc/port. /usr/obj/ Architecture-specific target tree produced by building the /usr/src tree. /usr/ports/ The FreeBSD Ports Collection (optional). /usr/sbin/ System daemons and system utilities executed by users. /usr/share/ Architecture-independent files. /usr/src/ BSD and/or local source files. /var/ Multi-purpose log, temporary, transient, and spool files. A memory-based file system is sometimes mounted at /var. This can be automated using the varmfs-related variables in rc.conf(5) or with an entry in /etc/fstab; refer to mdmfs(8) for details. /var/log/ Miscellaneous system log files. /var/mail/ User mailbox files. /var/spool/ Miscellaneous printer and mail system spooling directories. /var/tmp/ Temporary files which are usually preserved across a system reboot, unless /var is a memory-based file system. /var/yp/ NIS maps. ### 3.6. Disk Organization The smallest unit of organization that FreeBSD uses to find files is the filename. Filenames are case-sensitive, which means that readme.txt and README.TXT are two separate files. FreeBSD does not use the extension of a file to determine whether the file is a program, document, or some other form of data. Files are stored in directories. A directory may contain no files, or it may contain many hundreds of files. A directory can also contain other directories, allowing a hierarchy of directories within one another in order to organize data. Files and directories are referenced by giving the file or directory name, followed by a forward slash, /, followed by any other directory names that are necessary. For example, if the directory foo contains a directory bar which contains the file readme.txt, the full name, or path, to the file is foo/bar/readme.txt. Note that this is different from Windows® which uses \ to separate file and directory names. FreeBSD does not use drive letters, or other drive names in the path. For example, one would not type c:\foo\bar\readme.txt on FreeBSD. 
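As a quick demonstration of the case sensitivity mentioned at the start of this section, the two names really do refer to two different files. In a hypothetical session (the file names are only examples):

% touch readme.txt README.TXT
% ls
README.TXT      readme.txt

Both names are listed because they are distinct files; on a case-insensitive system the second touch would simply have updated the first file.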
Directories and files are stored in a file system. Each file system contains exactly one directory at the very top level, called the root directory for that file system. This root directory can contain other directories. One file system is designated the root file system or /. Every other file system is mounted under the root file system. No matter how many disks are on the FreeBSD system, every directory appears to be part of the same disk. Consider three file systems, called A, B, and C. Each file system has one root directory, which contains two other directories, called A1, A2 (and likewise B1, B2 and C1, C2). Call A the root file system. If ls(1) is used to view the contents of this directory, it will show two subdirectories, A1 and A2. The directory tree looks like this: A file system must be mounted on to a directory in another file system. When mounting file system B on to the directory A1, the root directory of B replaces A1, and the directories in B appear accordingly: Any files that are in the B1 or B2 directories can be reached with the path /A1/B1 or /A1/B2 as necessary. Any files that were in /A1 have been temporarily hidden. They will reappear if B is unmounted from A. If B had been mounted on A2 then the diagram would look like this: and the paths would be /A2/B1 and /A2/B2 respectively. File systems can be mounted on top of one another. Continuing the last example, the C file system could be mounted on top of the B1 directory in the B file system, leading to this arrangement: Or C could be mounted directly on to the A file system, under the A1 directory: It is entirely possible to have one large root file system, and not need to create any others. There are some drawbacks to this approach, and one advantage. Benefits of Multiple File Systems • Different file systems can have different mount options. For example, the root file system can be mounted read-only, making it impossible for users to inadvertently delete or edit a critical file. Separating user-writable file systems, such as /home, from other file systems allows them to be mounted nosuid. This option prevents the suid/guid bits on executables stored on the file system from taking effect, possibly improving security. • FreeBSD automatically optimizes the layout of files on a file system, depending on how the file system is being used. So a file system that contains many small files that are written frequently will have a different optimization to one that contains fewer, larger files. By having one big file system this optimization breaks down. • FreeBSD’s file systems are robust if power is lost. However, a power loss at a critical point could still damage the structure of the file system. By splitting data over multiple file systems it is more likely that the system will still come up, making it easier to restore from backup as necessary. Benefit of a Single File System • File systems are a fixed size. If you create a file system when you install FreeBSD and give it a specific size, you may later discover that you need to make the partition bigger. This is not easily accomplished without backing up, recreating the file system with the new size, and then restoring the backed up data. FreeBSD features the growfs(8) command, which makes it possible to increase the size of file system on the fly, removing this limitation. File systems are contained in partitions. This does not have the same meaning as the common usage of the term partition (for example, MS-DOS® partition), because of FreeBSD’s UNIX® heritage. 
Each partition is identified by a letter from a through to h. Each partition can contain only one file system, which means that file systems are often described by either their typical mount point in the file system hierarchy, or the letter of the partition they are contained in. FreeBSD also uses disk space for swap space to provide virtual memory. This allows your computer to behave as though it has much more memory than it actually does. When FreeBSD runs out of memory, it moves some of the data that is not currently being used to the swap space, and moves it back in (moving something else out) when it needs it. Some partitions have certain conventions associated with them. PartitionConvention a Normally contains the root file system. b Normally contains swap space. c Normally the same size as the enclosing slice. This allows utilities that need to work on the entire slice, such as a bad block scanner, to work on the c partition. A file system would not normally be created on this partition. d Partition d used to have a special meaning associated with it, although that is now gone and d may work as any normal partition. Disks in FreeBSD are divided into slices, referred to in Windows® as partitions, which are numbered from 1 to 4. These are then divided into partitions, which contain file systems, and are labeled using letters. Slice numbers follow the device name, prefixed with an s, starting at 1. So "da0s1" is the first slice on the first SCSI drive. There can only be four physical slices on a disk, but there can be logical slices inside physical slices of the appropriate type. These extended slices are numbered starting at 5, so "ada0s5" is the first extended slice on the first SATA disk. These devices are used by file systems that expect to occupy a slice. Slices, "dangerously dedicated" physical drives, and other drives contain partitions, which are represented as letters from a to h. This letter is appended to the device name, so "da0a" is the a partition on the first da drive, which is "dangerously dedicated". "ada1s3e" is the fifth partition in the third slice of the second SATA disk drive. Finally, each disk on the system is identified. A disk name starts with a code that indicates the type of disk, and then a number, indicating which disk it is. Unlike slices, disk numbering starts at 0. Common codes are listed in Disk Device Names. When referring to a partition, include the disk name, s, the slice number, and then the partition letter. Examples are shown in Sample Disk, Slice, and Partition Names. Conceptual Model of a Disk shows a conceptual model of a disk layout. When installing FreeBSD, configure the disk slices, create partitions within the slice to be used for FreeBSD, create a file system or swap space in each partition, and decide where each file system will be mounted. Table 4. Disk Device Names Drive TypeDrive Device Name SATA and IDE hard drives ada SCSI hard drives and USB storage devices da NVMe storage nvd or nda SATA and IDE CD-ROM drives cd SCSICD-ROM drives cd Floppy drives fd SCSI tape drives sa RAID drives Examples include aacd for Adaptec® AdvancedRAID, mlxd and mlyd for Mylex®, amrd for AMI MegaRAID®, idad for Compaq Smart RAID, twed for 3ware® RAID. Table 5. Sample Disk, Slice, and Partition Names NameMeaning ada0s1a The first partition (a) on the first slice (s1) on the first SATA disk (ada0). da1s2e The fifth partition (e) on the second slice (s2) on the second SCSI disk (da1). Example 13. 
Conceptual Model of a Disk

Consider FreeBSD’s view of the first SATA disk attached to the system. Assume that the disk is 250 GB in size, and contains an 80 GB slice and a 170 GB slice (MS-DOS® partitions). The first slice contains a Windows® NTFS file system, C:, and the second slice contains a FreeBSD installation. This example FreeBSD installation has four data partitions and a swap partition. The four partitions each hold a file system. Partition a is used for the root file system, d for /var/, e for /tmp/, and f for /usr/. Partition letter c refers to the entire slice, and so is not used for ordinary partitions.

### 3.7. Mounting and Unmounting File Systems

The file system is best visualized as a tree, rooted, as it were, at /. /dev, /usr, and the other directories in the root directory are branches, which may have their own branches, such as /usr/local, and so on. There are various reasons to house some of these directories on separate file systems. /var contains the directories log/, spool/, and various types of temporary files, and as such, may get filled up. Filling up the root file system is not a good idea, so splitting /var from / is often favorable. Another common reason to contain certain directory trees on other file systems is if they are to be housed on separate physical disks, or are separate virtual disks, such as Network File System mounts, described in “Network File System (NFS)”, or CDROM drives.

#### 3.7.1. The fstab File

During the boot process (The FreeBSD Booting Process), file systems listed in /etc/fstab are automatically mounted except for the entries containing noauto. This file contains entries in the following format:

device /mount-point fstype options dumpfreq passno

device
An existing device name as explained in Disk Device Names.

mount-point
An existing directory on which to mount the file system.

fstype
The file system type to pass to mount(8). The default FreeBSD file system is ufs.

options
Either rw for read-write file systems, or ro for read-only file systems, followed by any other options that may be needed. A common option is noauto for file systems not normally mounted during the boot sequence. Other options are listed in mount(8).

dumpfreq
Used by dump(8) to determine which file systems require dumping. If the field is missing, a value of zero is assumed.

passno
Determines the order in which file systems should be checked. File systems that should be skipped should have their passno set to zero. The root file system needs to be checked before everything else and should have its passno set to one. The other file systems should be set to values greater than one. If more than one file system has the same passno, fsck(8) will attempt to check file systems in parallel if possible.

Refer to fstab(5) for more information on the format of /etc/fstab and its options.

#### 3.7.2. Using mount(8)

File systems are mounted using mount(8). The most basic syntax is as follows:

# mount device mountpoint

This command provides many options which are described in mount(8). The most commonly used options include:

Mount Options

-a
Mount all the file systems listed in /etc/fstab, except those marked as "noauto", excluded by the -t flag, or those that are already mounted.

-d
Do everything except for the actual mount system call. This option is useful in conjunction with the -v flag to determine what mount(8) is actually trying to do.
-f
Force the mount of an unclean file system (dangerous), or the revocation of write access when downgrading a file system’s mount status from read-write to read-only.

-r
Mount the file system read-only. This is identical to using -o ro.

-t fstype
Mount the specified file system type or mount only file systems of the given type, if -a is included. "ufs" is the default file system type.

-u
Update mount options on the file system.

-v
Be verbose.

-w
Mount the file system read-write.

The following options can be passed to -o as a comma-separated list:

nosuid
Do not interpret setuid or setgid flags on the file system. This is also a useful security option.

#### 3.7.3. Using umount(8)

To unmount a file system use umount(8). This command takes one parameter which can be a mountpoint, device name, -a or -A. All forms take -f to force unmounting, and -v for verbosity. Be warned that -f is not generally a good idea as it might crash the computer or damage data on the file system. To unmount all mounted file systems, or just the file system types listed after -t, use -a or -A. Note that -A does not attempt to unmount the root file system.

### 3.8. Processes and Daemons

FreeBSD is a multi-tasking operating system. Each program running at any one time is called a process. Every running command starts at least one new process and there are a number of system processes that are run by FreeBSD. Each process is uniquely identified by a number called a process ID (PID). Similar to files, each process has one owner and group, and the owner and group permissions are used to determine which files and devices the process can open. Most processes also have a parent process that started them. For example, the shell is a process, and any command started in the shell is a process which has the shell as its parent process. The exception is a special process called init(8) which is always the first process to start at boot time and which always has a PID of 1. Some programs are not designed to be run with continuous user input and disconnect from the terminal at the first opportunity. For example, a web server responds to web requests, rather than user input. Mail servers are another example of this type of application. These types of programs are known as daemons. The term daemon comes from Greek mythology and represents an entity that is neither good nor evil, and which invisibly performs useful tasks. This is why the BSD mascot is the cheerful-looking daemon with sneakers and a pitchfork. There is a convention to name programs that normally run as daemons with a trailing "d". For example, BIND is the Berkeley Internet Name Domain, but the actual program that executes is called named. The Apache web server program is httpd and the line printer spooling daemon is lpd. This is only a naming convention. For example, the main mail daemon for the Sendmail application is sendmail, and not maild.

#### 3.8.1. Viewing Processes

To see the processes running on the system, use ps(1) or top(1). To display a static list of the currently running processes, their PIDs, how much memory they are using, and the command they were started with, use ps(1). To display all the running processes and update the display every few seconds in order to interactively see what the computer is doing, use top(1). By default, ps(1) only shows the commands that are running and owned by the user. For example:

% ps
 PID TT  STAT    TIME COMMAND
8203  0  Ss   0:00.59 /bin/csh
8895  0  R+   0:00.00 ps

The output from ps(1) is organized into a number of columns.
The PID column displays the process ID. PIDs are assigned starting at 1, go up to 99999, then wrap around back to the beginning. However, a PID is not reassigned if it is already in use. The TT column shows the tty the program is running on and STAT shows the program’s state. TIME is the amount of time the program has been running on the CPU. This is usually not the elapsed time since the program was started, as most programs spend a lot of time waiting for things to happen before they need to spend time on the CPU. Finally, COMMAND is the command that was used to start the program. A number of different options are available to change the information that is displayed. One of the most useful sets is auxww, where a displays information about all the running processes of all users, u displays the username and memory usage of the process' owner, x displays information about daemon processes, and ww causes ps(1) to display the full command line for each process, rather than truncating it once it gets too long to fit on the screen. The output from top(1) is similar:

% top
last pid:  9609;  load averages:  0.56,  0.45,  0.36    up 0+00:20:03  10:21:46
107 processes: 2 running, 104 sleeping, 1 zombie
CPU:  6.2% user,  0.1% nice,  8.2% system,  0.4% interrupt, 85.1% idle
Mem: 541M Active, 450M Inact, 1333M Wired, 4064K Cache, 1498M Free
ARC: 992M Total, 377M MFU, 589M MRU, 250K Anon, 5280K Header, 21M Other
Swap: 2048M Total, 2048M Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
  557 root          1 -21  r31   136M 42296K select  0   2:20  9.96% Xorg
 8198 dru           2  52    0   449M 82736K select  3   0:08  5.96% kdeinit4
 8311 dru          27  30    0  1150M   187M uwait   1   1:37  0.98% firefox
  431 root          1  20    0 14268K  1728K select  0   0:06  0.98% moused
 9551 dru           1  21    0 16600K  2660K CPU3    3   0:01  0.98% top
 2357 dru           4  37    0   718M   141M select  0   0:21  0.00% kdeinit4
 8705 dru           4  35    0   480M    98M select  2   0:20  0.00% kdeinit4
 8076 dru           6  20    0   552M   113M uwait   0   0:12  0.00% soffice.bin
 2623 root          1  30   10 12088K  1636K select  3   0:09  0.00% powerd
 2338 dru           1  20    0   440M 84532K select  1   0:06  0.00% kwin
 1427 dru           5  22    0   605M 86412K select  1   0:05  0.00% kdeinit4

The output is split into two sections. The header (the first five or six lines) shows the PID of the last process to run, the system load averages (which are a measure of how busy the system is), the system uptime (time since the last reboot) and the current time. The other figures in the header relate to how many processes are running, how much memory and swap space has been used, and how much time the system is spending in different CPU states. If the ZFS file system module has been loaded, an ARC line indicates how much data was read from the memory cache instead of from disk. Below the header is a series of columns containing similar information to the output from ps(1), such as the PID, username, amount of CPU time, and the command that started the process. By default, top(1) also displays the amount of memory space taken by the process. This is split into two columns: one for total size and one for resident size. Total size is how much memory the application has needed and the resident size is how much it is actually using now. top(1) automatically updates the display every two seconds. A different interval can be specified with -s.

#### 3.8.2. Killing Processes

One way to communicate with any running process or daemon is to send a signal using kill(1). There are a number of different signals; some have a specific meaning while others are described in the application’s documentation.
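The names of the signals available on the system can be listed with the -l flag of kill(1). The following is only an illustrative sketch; the exact list and its formatting depend on the shell and the FreeBSD version, and the output shown here is abbreviated:

% kill -l
HUP INT QUIT ILL TRAP ABRT ... USR1 USR2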
A user can only send a signal to a process they own and sending a signal to someone else’s process will result in a permission denied error. The exception is the root user, who can send signals to anyone’s processes. The operating system can also send a signal to a process. If an application is badly written and tries to access memory that it is not supposed to, FreeBSD will send the process the "Segmentation Violation" signal (SIGSEGV). If an application has been written to use the alarm(3) call to be alerted after a period of time has elapsed, it will be sent the "Alarm" signal (SIGALRM). Two signals can be used to stop a process: SIGTERM and SIGKILL. SIGTERM is the polite way to kill a process as the process can read the signal, close any log files it may have open, and attempt to finish what it is doing before shutting down. In some cases, a process may ignore SIGTERM if it is in the middle of some task that cannot be interrupted. SIGKILL cannot be ignored by a process. Sending a SIGKILL to a process will usually stop that process there and then. [1] Other commonly used signals are SIGHUP, SIGUSR1, and SIGUSR2. Since these are general purpose signals, different applications will respond differently. For example, after changing a web server’s configuration file, the web server needs to be told to re-read its configuration. Restarting httpd would result in a brief outage period on the web server. Instead, send the daemon the SIGHUP signal. Be aware that different daemons will have different behavior, so refer to the documentation for the daemon to determine if SIGHUP will achieve the desired results.

Procedure: Sending a Signal to a Process

This example shows how to send a signal to inetd(8). The inetd(8) configuration file is /etc/inetd.conf, and inetd(8) will re-read this configuration file when it is sent a SIGHUP.

1. Find the PID of the process to send the signal to using pgrep(1). In this example, the PID for inetd(8) is 198:

% pgrep -l inetd
198  inetd

2. Use kill(1) to send the signal. As inetd(8) is owned by root, use su(1) to become root first.

% su
Password:
# /bin/kill -s HUP 198

Like most UNIX® commands, kill(1) will not print any output if it is successful. If a signal is sent to a process not owned by that user, the message kill: PID: Operation not permitted will be displayed. Mistyping the PID will either send the signal to the wrong process, which could have negative results, or will send the signal to a PID that is not currently in use, resulting in the error kill: PID: No such process.

Why Use /bin/kill? Many shells provide kill as a built in command, meaning that the shell will send the signal directly, rather than running /bin/kill. Be aware that different shells have a different syntax for specifying the name of the signal to send. Rather than try to learn all of them, it can be simpler to specify /bin/kill. When sending other signals, substitute TERM or KILL with the name of the signal.

Killing a random process on the system is a bad idea. In particular, init(8), PID 1, is special. Running /bin/kill -s KILL 1 is a quick, and unrecommended, way to shutdown the system. Always double check the arguments to kill(1) before pressing Return.

### 3.9. Shells

A shell provides a command line interface for interacting with the operating system. A shell receives commands from the input channel and executes them.
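A quick way to see which shell is currently in use is to print the SHELL environment variable, which is listed in Common Environment Variables below. This is only a sketch; the path shown is illustrative and depends on the login shell configured for the user:

% echo $SHELL
/bin/csh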
Many shells provide built in functions to help with everyday tasks such as file management, file globbing, command line editing, command macros, and environment variables. FreeBSD comes with several shells, including the Bourne shell (sh(1)) and the extended C shell (tcsh(1)). Other shells are available from the FreeBSD Ports Collection, such as zsh and bash. The shell that is used is really a matter of taste. A C programmer might feel more comfortable with a C-like shell such as tcsh(1). A Linux® user might prefer bash. Each shell has unique properties that may or may not work with a user’s preferred working environment, which is why there is a choice of which shell to use. One common shell feature is filename completion. After a user types the first few letters of a command or filename and presses Tab, the shell completes the rest of the command or filename. Consider two files called foobar and football. To delete foobar, the user might type rm foo and press Tab to complete the filename. But the shell only shows rm foo. It was unable to complete the filename because both foobar and football start with foo. Some shells sound a beep or show all the choices if more than one name matches. The user must then type more characters to identify the desired filename. Typing a t and pressing Tab again is enough to let the shell determine which filename is desired and fill in the rest. Another feature of the shell is the use of environment variables. Environment variables are key/value pairs stored in the shell’s environment. This environment can be read by any program invoked by the shell, and thus contains a lot of program configuration. Common Environment Variables provides a list of common environment variables and their meanings. Note that the names of environment variables are always in uppercase.

Table 6. Common Environment Variables
Variable    Description
USER        Current logged in user’s name.
PATH        Colon-separated list of directories to search for binaries.
DISPLAY     Network name of the Xorg display to connect to, if available.
SHELL       The current shell.
TERM        The name of the user’s type of terminal. Used to determine the capabilities of the terminal.
TERMCAP     Database entry of the terminal escape codes to perform various terminal functions.
OSTYPE      Type of operating system.
MACHTYPE    The system’s CPU architecture.
EDITOR      The user’s preferred text editor.
PAGER       The user’s preferred utility for viewing text one page at a time.
MANPATH     Colon-separated list of directories to search for manual pages.

How to set an environment variable differs between shells. In tcsh(1) and csh(1), use setenv to set environment variables. In sh(1) and bash, use export to set the current environment variables. This example sets the default EDITOR to /usr/local/bin/emacs for the tcsh(1) shell:

% setenv EDITOR /usr/local/bin/emacs

The equivalent command for bash would be:

% export EDITOR="/usr/local/bin/emacs"

To expand an environment variable in order to see its current setting, type a $ character in front of its name on the command line. For example, echo $TERM displays the current $TERM setting. Shells treat special characters, known as meta-characters, as special representations of data. The most common meta-character is *, which represents any number of characters in a filename. Meta-characters can be used to perform filename globbing. For example, echo * is equivalent to ls because the shell takes all the files that match * and echo lists them on the command line.
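As a small illustration of globbing, using hypothetical filenames, a more specific pattern matches only a subset of the files in a directory:

% ls
notes.txt    report.txt   script.sh
% echo *.txt
notes.txt report.txt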
To prevent the shell from interpreting a special character, escape it from the shell by starting it with a backslash (\). For example, echo $TERM prints the terminal setting whereas echo \$TERM literally prints the string $TERM.

#### 3.9.1. Changing the Shell

The easiest way to permanently change the default shell is to use chsh. Running this command will open the editor that is configured in the EDITOR environment variable, which by default is set to vi(1). Change the Shell: line to the full path of the new shell. Alternately, use chsh -s which will set the specified shell without opening an editor. For example, to change the shell to bash:

% chsh -s /usr/local/bin/bash

The new shell must be present in /etc/shells. If the shell was installed from the FreeBSD Ports Collection as described in Installing Applications: Packages and Ports, it should be automatically added to this file. If it is missing, add it using this command, replacing the path with the path of the shell:

# echo /usr/local/bin/bash >> /etc/shells

Then, rerun chsh(1).

#### 3.9.2. Advanced Shell Techniques

The UNIX® shell is not just a command interpreter; it acts as a powerful tool which allows users to execute commands, redirect their output, redirect their input and chain commands together to improve the final command output. When this functionality is mixed with built in commands, the user is provided with an environment that can maximize efficiency. Shell redirection is the action of sending the output or the input of a command into another command or into a file. To capture the output of the ls(1) command, for example, into a file, redirect the output:

% ls > directory_listing.txt

The directory contents will now be listed in directory_listing.txt. Some commands can be used to read input, such as sort(1). To sort this listing, redirect the input:

% sort < directory_listing.txt

The input will be sorted and placed on the screen. To redirect that input into another file, one could redirect the output of sort(1) by mixing the direction:

% sort < directory_listing.txt > sorted.txt

In all of the previous examples, the commands are performing redirection using file descriptors. Every UNIX® system has file descriptors, which include standard input (stdin), standard output (stdout), and standard error (stderr). Each one has a purpose, where input could be a keyboard or a mouse, something that provides input. Output could be a screen or paper in a printer. And error would be anything that is used for diagnostic or error messages. All three are considered I/O based file descriptors and sometimes considered streams. Through the use of these descriptors, the shell allows output and input to be passed around through various commands and redirected to or from a file. Another method of redirection is the pipe operator. The UNIX® pipe operator, "|", allows the output of one command to be directly passed or directed to another program. Basically, a pipe allows the standard output of a command to be passed as standard input to another command, for example:

% cat directory_listing.txt | sort | less

In that example, the contents of directory_listing.txt will be sorted and the output passed to less(1). This allows the user to scroll through the output at their own pace and prevent it from scrolling off the screen.

### 3.10. Text Editors

Most FreeBSD configuration is done by editing text files, so it is a good idea to become familiar with a text editor.
FreeBSD comes with a few as part of the base system, and many more are available in the Ports Collection. A simple editor to learn is ee(1), which stands for easy editor. To start this editor, type ee filename where filename is the name of the file to be edited. Once inside the editor, all of the commands for manipulating the editor’s functions are listed at the top of the display. The caret (^) represents Ctrl, so ^e expands to Ctrl+e. To leave ee(1), press Esc, then choose the "leave editor" option from the main menu. The editor will prompt to save any changes if the file has been modified. FreeBSD also comes with more powerful text editors, such as vi(1), as part of the base system. Other editors, like editors/emacs and editors/vim, are part of the FreeBSD Ports Collection. These editors offer more functionality at the expense of being more complicated to learn. Learning a more powerful editor such as vim or Emacs can save more time in the long run. Many applications which modify files or require typed input will automatically open a text editor. To change the default editor, set the EDITOR environment variable as described in Shells.

### 3.11. Devices and Device Nodes

A device is a term used mostly for hardware-related activities in a system, including disks, printers, graphics cards, and keyboards. When FreeBSD boots, the majority of the boot messages refer to devices being detected. A copy of the boot messages is saved to /var/run/dmesg.boot. Each device has a device name and number. For example, ada0 is the first SATA hard drive, while kbd0 represents the keyboard. Most devices in FreeBSD must be accessed through special files called device nodes, which are located in /dev.

### 3.12. Manual Pages

The most comprehensive documentation on FreeBSD is in the form of manual pages. Nearly every program on the system comes with a short reference manual explaining the basic operation and available arguments. These manuals can be viewed using man:

% man command

where command is the name of the command to learn about. For example, to learn more about ls(1), type:

% man ls

Manual pages are divided into sections which represent the type of topic. In FreeBSD, the following sections are available:

1. User commands.
2. System calls and error numbers.
3. Functions in the C libraries.
4. Device drivers.
5. File formats.
6. Games and other diversions.
7. Miscellaneous information.
8. System maintenance and operation commands.
9. System kernel interfaces.

In some cases, the same topic may appear in more than one section of the online manual. For example, there is a chmod user command and a chmod() system call. To tell man(1) which section to display, specify the section number:

% man 1 chmod

This will display the manual page for the user command chmod(1). References to a particular section of the online manual are traditionally placed in parentheses in written documentation, so chmod(1) refers to the user command and chmod(2) refers to the system call. If the name of the manual page is unknown, use man -k to search for keywords in the manual page descriptions:

% man -k mail

This command displays a list of commands that have the keyword "mail" in their descriptions. This is equivalent to using apropos(1). To read the descriptions for all of the commands in /usr/sbin, type:

% cd /usr/sbin
% man -f * | more

or

% cd /usr/sbin
% whatis * | more

#### 3.12.1. GNU Info Files

FreeBSD includes several applications and utilities produced by the Free Software Foundation (FSF).
In addition to manual pages, these programs may include hypertext documents called info files. These can be viewed using info(1) or, if editors/emacs is installed, the info mode of emacs. To use info(1), type: % info For a brief introduction, type h. For a quick command reference, type ?. ## Chapter 4. Installing Applications: Packages and Ports ### 4.1. Synopsis FreeBSD is bundled with a rich collection of system tools as part of the base system. In addition, FreeBSD provides two complementary technologies for installing third-party software: the FreeBSD Ports Collection, for installing from source, and packages, for installing from pre-built binaries. Either method may be used to install software from local media or from the network. After reading this chapter, you will know: • The difference between binary packages and ports. • How to find third-party software that has been ported to FreeBSD. • How to manage binary packages using pkg. • How to build third-party software from source using the Ports Collection. • How to find the files installed with the application for post-installation configuration. • What to do if a software installation fails. ### 4.2. Overview of Software Installation The typical steps for installing third-party software on a UNIX® system include: 1. Find and download the software, which might be distributed in source code format or as a binary. 2. Unpack the software from its distribution format. This is typically a tarball compressed with a program such as compress(1), gzip(1), bzip2(1) or xz(1). 3. Locate the documentation in INSTALL, README or some file in a doc/ subdirectory and read up on how to install the software. 4. If the software was distributed in source format, compile it. This may involve editing a Makefile or running a configure script. 5. Test and install the software. A FreeBSD port is a collection of files designed to automate the process of compiling an application from source code. The files that comprise a port contain all the necessary information to automatically download, extract, patch, compile, and install the application. If the software has not already been adapted and tested on FreeBSD, the source code might need editing in order for it to install and run properly. However, over 36000 third-party applications have already been ported to FreeBSD. When feasible, these applications are made available for download as pre-compiled packages. Packages can be manipulated with the FreeBSD package management commands. Both packages and ports understand dependencies. If a package or port is used to install an application and a dependent library is not already installed, the library will automatically be installed first. A FreeBSD package contains pre-compiled copies of all the commands for an application, as well as any configuration files and documentation. A package can be manipulated with the pkg(8) commands, such as pkg install. While the two technologies are similar, packages and ports each have their own strengths. Select the technology that meets your requirements for installing a particular application. Package Benefits • A compressed package tarball is typically smaller than the compressed tarball containing the source code for the application. • Packages do not require compilation time. For large applications, such as Mozilla, KDE, or GNOME, this can be important on a slow system. • Packages do not require any understanding of the process involved in compiling software on FreeBSD. 
Port Benefits

• Packages are normally compiled with conservative options because they have to run on the maximum number of systems. By compiling from the port, one can change the compilation options.

• Some applications have compile-time options relating to which features are installed. For example, Apache can be configured with a wide variety of different built-in options. In some cases, multiple packages will exist for the same application to specify certain settings. For example, Ghostscript is available as a ghostscript package and a ghostscript-nox11 package, depending on whether or not Xorg is installed. Creating multiple packages rapidly becomes impossible if an application has more than one or two different compile-time options.

• The licensing conditions of some software forbid binary distribution. Such software must be distributed as source code which must be compiled by the end-user.

• Some people do not trust binary distributions or prefer to read through source code in order to look for potential problems.

• Source code is needed in order to apply custom patches.

To keep track of updated ports, subscribe to the FreeBSD ports mailing list and the FreeBSD ports bugs mailing list. Before installing any application, check https://vuxml.freebsd.org/ for security issues related to the application or type pkg audit -F to check all installed applications for known vulnerabilities. The remainder of this chapter explains how to use packages and ports to install and manage third-party software on FreeBSD.

### 4.3. Finding Software

FreeBSD’s list of available applications is growing all the time. There are a number of ways to find software to install:

• The FreeBSD web site maintains an up-to-date searchable list of all the available applications, at https://www.FreeBSD.org/ports/. The ports can be searched by application name or by software category.

• Dan Langille maintains FreshPorts.org which provides a comprehensive search utility and also tracks changes to the applications in the Ports Collection. Registered users can create a customized watch list in order to receive an automated email when their watched ports are updated.

• If finding a particular application becomes challenging, try searching a site like SourceForge.net or GitHub.com then check back at the FreeBSD site to see if the application has been ported.

• To search the binary package repository for an application:

# pkg search subversion
git-subversion-1.9.2
java-subversion-1.8.8_2
p5-subversion-1.8.8_2
py27-hgsubversion-1.6
py27-subversion-1.8.8_2
ruby-subversion-1.8.8_2
subversion-1.8.8_2
subversion-book-4515
subversion-static-1.8.8_2
subversion16-1.6.23_4
subversion17-1.7.16_2

Package names include the version number and, in the case of ports based on python, the version of python the package was built with. Some ports also have multiple versions available. In the case of Subversion, there are different versions available, as well as different compile options; here, subversion-static is the statically linked version of Subversion. When indicating which package to install, it is best to specify the application by the port origin, which is the path in the ports tree.
Repeat the pkg search with -o to list the origin of each package: # pkg search -o subversion devel/git-subversion java/java-subversion devel/p5-subversion devel/py-hgsubversion devel/py-subversion devel/ruby-subversion devel/subversion16 devel/subversion17 devel/subversion devel/subversion-book devel/subversion-static Searching by shell globs, regular expressions, exact match, by description, or any other field in the repository database is also supported by pkg search. After installing ports-mgmt/pkg or ports-mgmt/pkg-devel, see pkg-search(8) for more details. • If the Ports Collection is already installed, there are several methods to query the local version of the ports tree. To find out which category a port is in, type whereis file, where file is the program to be installed: # whereis lsof lsof: /usr/ports/sysutils/lsof Alternately, an echo(1) statement can be used: # echo /usr/ports/*/*lsof* /usr/ports/sysutils/lsof Note that this will also return any matched files downloaded into the /usr/ports/distfiles directory. • Another way to find software is by using the Ports Collection’s built-in search mechanism. To use the search feature, cd to /usr/ports then run make search name=program-name where program-name is the name of the software. For example, to search for lsof: # cd /usr/ports # make search name=lsof Port: lsof-4.88.d,8 Path: /usr/ports/sysutils/lsof Info: Lists information about open files (similar to fstat(1)) Maint: [email protected] Index: sysutils B-deps: R-deps: The built-in search mechanism uses a file of index information. If a message indicates that the INDEX is required, run make fetchindex to download the current index file. With the INDEX present, make search will be able to perform the requested search. The "Path:" line indicates where to find the port. To receive less information, use the quicksearch feature: # cd /usr/ports # make quicksearch name=lsof Port: lsof-4.88.d,8 Path: /usr/ports/sysutils/lsof Info: Lists information about open files (similar to fstat(1)) For more in-depth searching, use make search key=string or make quicksearch key=string, where string is some text to search for. The text can be in comments, descriptions, or dependencies in order to find ports which relate to a particular subject when the name of the program is unknown. When using search or quicksearch, the search string is case-insensitive. Searching for "LSOF" will yield the same results as searching for "lsof". ### 4.4. Using pkg for Binary Package Management pkg is the next generation replacement for the traditional FreeBSD package management tools, offering many features that make dealing with binary packages faster and easier. For sites wishing to only use prebuilt binary packages from the FreeBSD mirrors, managing packages with pkg can be sufficient. However, for those sites building from source or using their own repositories, a separate port management tool will be needed. Since pkg only works with binary packages, it is not a replacement for such tools. Those tools can be used to install software from both binary packages and the Ports Collection, while pkg installs only binary packages. #### 4.4.1. Getting Started with pkg FreeBSD includes a bootstrap utility which can be used to download and install pkg and its manual pages. This utility is designed to work with versions of FreeBSD starting with 10.X. Not all FreeBSD versions and architectures support this bootstrap process. The current list is at https://pkg.freebsd.org/. 
For other cases, pkg must instead be installed from the Ports Collection or as a binary package. To bootstrap the system, run: # /usr/sbin/pkg You must have a working Internet connection for the bootstrap process to succeed. Otherwise, to install the port, run: # cd /usr/ports/ports-mgmt/pkg # make # make install clean When upgrading an existing system that originally used the older pkg_* tools, the database must be converted to the new format, so that the new tools are aware of the already installed packages. Once pkg has been installed, the package database must be converted from the traditional format to the new format by running this command: # pkg2ng This step is not required for new installations that do not yet have any third-party software installed. This step is not reversible. Once the package database has been converted to the pkg format, the traditional pkg_* tools should no longer be used. The package database conversion may emit errors as the contents are converted to the new version. Generally, these errors can be safely ignored. However, a list of software that was not successfully converted is shown after pkg2ng finishes. These applications must be manually reinstalled. To ensure that the Ports Collection registers new software with pkg instead of the traditional packages database, FreeBSD versions earlier than 10.X require this line in /etc/make.conf: WITH_PKGNG= yes By default, pkg uses the binary packages from the FreeBSD package mirrors (the repository). For information about building a custom package repository, see Building Packages with Poudriere. Additional pkg configuration options are described in pkg.conf(5). Usage information for pkg is available in the pkg(8) manual page or by running pkg without additional arguments. Each pkg command argument is documented in a command-specific manual page. To read the manual page for pkg install, for example, run either of these commands: # pkg help install # man pkg-install The rest of this section demonstrates common binary package management tasks which can be performed using pkg. Each demonstrated command provides many switches to customize its use. Refer to a command’s help or man page for details and more examples. #### 4.4.2. Quarterly and Latest Ports Branches The Quarterly branch provides users with a more predictable and stable experience for port and package installation and upgrades. This is done essentially by only allowing non-feature updates. Quarterly branches aim to receive security fixes (that may be version updates, or backports of commits), bug fixes and ports compliance or framework changes. The Quarterly branch is cut from HEAD at the beginning of every (yearly) quarter in January, April, July, and October. Branches are named according to the year (YYYY) and quarter (Q1-4) they are created in. For example, the quarterly branch created in January 2016, is named 2016Q1. And the Latest branch provides the latest versions of the packages to the users. To switch from quarterly to latest run the following commands: # mkdir -p /usr/local/etc/pkg/repos # cp /etc/pkg/FreeBSD.conf /usr/local/etc/pkg/repos/FreeBSD.conf Edit the file /usr/local/etc/pkg/repos/FreeBSD.conf and change the string quarterly to latest in the url: line. 
The result should be similar to the following:

FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

Finally, run this command to update from the new (latest) repository metadata:

# pkg update -f

#### 4.4.3. Obtaining Information About Installed Packages

Information about the packages installed on a system can be viewed by running pkg info which, when run without any switches, will list the package version for either all installed packages or the specified package. For example, to see which version of pkg is installed, run:

# pkg info pkg
pkg-1.1.4_1

#### 4.4.4. Installing and Removing Packages

To install a binary package use the following command, where packagename is the name of the package to install:

# pkg install packagename

This command uses repository data to determine which version of the software to install and if it has any uninstalled dependencies. For example, to install curl:

# pkg install curl
Updating repository catalogue
/usr/local/tmp/All/curl-7.31.0_1.txz        100% of 1181 kB 1380 kBps 00m01s
/usr/local/tmp/All/ca_root_nss-3.15.1_1.txz 100% of  288 kB 1700 kBps 00m00s
Updating repository catalogue
The following 2 packages will be installed:

        Installing ca_root_nss: 3.15.1_1
        Installing curl: 7.31.0_1

The installation will require 3 MB more space

Proceed with installing packages [y/N]: y
Checking integrity... done
[1/2] Installing ca_root_nss-3.15.1_1... done
[2/2] Installing curl-7.31.0_1... done
Cleaning up cache files...Done

The new package and any additional packages that were installed as dependencies can be seen in the installed packages list:

# pkg info
ca_root_nss-3.15.1_1    The root certificate bundle from the Mozilla Project
curl-7.31.0_1           Non-interactive tool to get files from FTP, GOPHER, HTTP(S) servers
pkg-1.1.4_6             New generation package manager

Packages that are no longer needed can be removed with pkg delete. For example:

# pkg delete curl
The following packages will be deleted:

        curl-7.31.0_1

The deletion will free 3 MB

Proceed with deleting packages [y/N]: y
[1/1] Deleting curl-7.31.0_1... done

#### 4.4.5. Upgrading Installed Packages

Installed packages can be upgraded to their latest versions by running:

# pkg upgrade

This command will compare the installed versions with those available in the repository catalogue and upgrade them from the repository.

#### 4.4.6. Auditing Installed Packages

Software vulnerabilities are regularly discovered in third-party applications. To address this, pkg includes a built-in auditing mechanism. To determine if there are any known vulnerabilities for the software installed on the system, run:

# pkg audit -F

#### 4.4.7. Automatically Removing Unused Packages

Removing a package may leave behind dependencies which are no longer required. Unneeded packages that were installed as dependencies (leaf packages) can be automatically detected and removed using:

# pkg autoremove
Packages to be autoremoved:
        ca_root_nss-3.15.1_1

The autoremoval will free 723 kB

Proceed with autoremoval of packages [y/N]: y
Deinstalling ca_root_nss-3.15.1_1... done

Packages installed as dependencies are called automatic packages. Non-automatic packages, i.e. the packages that were explicitly installed, not as a dependency of another package, can be listed using:

# pkg prime-list
nginx
openvpn
sudo

pkg prime-list is an alias command declared in /usr/local/etc/pkg.conf. There are many others that can be used to query the package database of the system.
For instance, the pkg prime-origins alias can be used to get the origin port directory of the list mentioned above:

# pkg prime-origins
www/nginx
security/openvpn
security/sudo

This list can be used to rebuild all packages installed on a system using build tools such as ports-mgmt/poudriere or ports-mgmt/synth. Marking an installed package as automatic can be done using:

# pkg set -A 1 devel/cmake

Once a package is a leaf package and is marked as automatic, it gets selected by pkg autoremove. Marking an installed package as not automatic can be done using:

# pkg set -A 0 devel/cmake

#### 4.4.8. Restoring the Package Database

Unlike the traditional package management system, pkg includes its own package database backup mechanism. This functionality is enabled by default. To disable the periodic script from backing up the package database, set daily_backup_pkgdb_enable="NO" in periodic.conf(5). To restore the contents of a previous package database backup, run the following command replacing /path/to/pkg.sql with the location of the backup:

# pkg backup -r /path/to/pkg.sql

If restoring a backup taken by the periodic script, it must be decompressed prior to being restored. To run a manual backup of the pkg database, run the following command, replacing /path/to/pkg.sql with a suitable file name and location:

# pkg backup -d /path/to/pkg.sql

#### 4.4.9. Removing Stale Packages

By default, pkg stores binary packages in a cache directory defined by PKG_CACHEDIR in pkg.conf(5). Only copies of the latest installed packages are kept. Older versions of pkg kept all previous packages. To remove these outdated binary packages, run:

# pkg clean

The entire cache may be cleared by running:

# pkg clean -a

#### 4.4.10. Modifying Package Metadata

Software within the FreeBSD Ports Collection can undergo major version number changes. To address this, pkg has a built-in command to update package origins. This can be useful, for example, if lang/php5 is renamed to lang/php53 so that lang/php5 can now represent version 5.4. To change the package origin for the above example, run:

# pkg set -o lang/php5:lang/php53

As another example, to update lang/ruby18 to lang/ruby19, run:

# pkg set -o lang/ruby18:lang/ruby19

As a final example, to change the origin of the libglut shared libraries from graphics/libglut to graphics/freeglut, run:

# pkg set -o graphics/libglut:graphics/freeglut

When changing package origins, it is important to reinstall packages that are dependent on the package with the modified origin. To force a reinstallation of dependent packages, run:

# pkg install -Rf graphics/freeglut

### 4.5. Using the Ports Collection

The Ports Collection is a set of Makefiles, patches, and description files. Each set of these files is used to compile and install an individual application on FreeBSD, and is called a port. By default, the Ports Collection itself is stored as a subdirectory of /usr/ports. Before installing and using the Ports Collection, please be aware that it is generally ill-advised to use the Ports Collection in conjunction with the binary packages provided via pkg to install software. pkg, by default, tracks quarterly branch-releases of the ports tree and not HEAD. Dependencies could be different for a port in HEAD compared to its counterpart in a quarterly branch release and this could result in conflicts between dependencies installed by pkg and those from the Ports Collection. If the Ports Collection and pkg must be used in conjunction, then be sure that your Ports Collection and pkg are on the same branch release of the ports tree.
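One way to check which branch the package repository is currently tracking is to inspect pkg's effective configuration and look for the repository url: line. This is only a sketch; the grep pattern and the output shown are illustrative, and the actual URL will reflect the system's ABI and the configured branch:

# pkg -vv | grep url
    url             : "pkg+http://pkg.FreeBSD.org/FreeBSD:13:amd64/quarterly",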
The Ports Collection contains directories for software categories. Inside each category are subdirectories for individual applications. Each application subdirectory contains a set of files that tells FreeBSD how to compile and install that program, called a ports skeleton. Each port skeleton includes these files and directories: • Makefile: contains statements that specify how the application should be compiled and where its components should be installed. • distinfo: contains the names and checksums of the files that must be downloaded to build the port. • files/: this directory contains any patches needed for the program to compile and install on FreeBSD. This directory may also contain other files used to build the port. • pkg-descr: provides a more detailed description of the program. • pkg-plist: a list of all the files that will be installed by the port. It also tells the ports system which files to remove upon deinstallation. Some ports include pkg-message or other files to handle special situations. For more details on these files, and on ports in general, refer to the FreeBSD Porter’s Handbook. The port does not include the actual source code, also known as a distfile. The extract portion of building a port will automatically save the downloaded source to /usr/ports/distfiles. #### 4.5.1. Installing the Ports Collection Before an application can be compiled using a port, the Ports Collection must first be installed. If it was not installed during the installation of FreeBSD, use one of the following methods to install it: Procedure: Git Method If more control over the ports tree is needed or if local changes need to be maintained, or if running FreeBSD-CURRENT, Git can be used to obtain the Ports Collection. Refer to the Git Primer for a detailed description of Git. 1. Git must be installed before it can be used to check out the ports tree. If a copy of the ports tree is already present, install Git like this: # cd /usr/ports/devel/git # make install clean If the ports tree is not available, or pkg is being used to manage packages, Git can be installed as a package: # pkg install git 2. Check out a copy of the HEAD branch of the ports tree: # git clone https://git.FreeBSD.org/ports.git /usr/ports 3. Or, check out a copy of a quarterly branch: # git clone https://git.FreeBSD.org/ports.git -b 2020Q3 /usr/ports 4. As needed, update /usr/ports after the initial Git checkout: # git -C /usr/ports pull 5. As needed, switch /usr/ports to a different quarterly branch: # git -C /usr/ports switch 2020Q4 #### 4.5.2. Installing Ports This section provides basic instructions on using the Ports Collection to install or remove software. The detailed description of available make targets and environment variables is available in ports(7). Before compiling any port, be sure to update the Ports Collection as described in the previous section. Since the installation of any third-party software can introduce security vulnerabilities, it is recommended to first check https://vuxml.freebsd.org/ for known security issues related to the port. Alternately, run pkg audit -F before installing a new port. This command can be configured to automatically perform a security audit and an update of the vulnerability database during the daily security system check. For more information, refer to pkg-audit(8) and periodic(8). Using the Ports Collection assumes a working Internet connection. It also requires superuser privilege. 
To compile and install the port, change to the directory of the port to be installed, then type make install at the prompt. Messages will indicate the progress: # cd /usr/ports/sysutils/lsof # make install >> lsof_4.88D.freebsd.tar.gz doesn't seem to exist in /usr/ports/distfiles/. >> Attempting to fetch from ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/. ===> Extracting for lsof-4.88 ... [extraction output snipped] ... >> Checksum OK for lsof_4.88D.freebsd.tar.gz. ===> Patching for lsof-4.88.d,8 ===> Applying FreeBSD patches for lsof-4.88.d,8 ===> Configuring for lsof-4.88.d,8 ... [configure output snipped] ... ===> Building for lsof-4.88.d,8 ... [compilation output snipped] ... ===> Installing for lsof-4.88.d,8 ... [installation output snipped] ... ===> Generating temporary packing list ===> Compressing manual pages for lsof-4.88.d,8 ===> Registering installation for lsof-4.88.d,8 ===> SECURITY NOTE: This port has installed the following binaries which execute with increased privileges. /usr/local/sbin/lsof # Since lsof is a program that runs with increased privileges, a security warning is displayed as it is installed. Once the installation is complete, the prompt will be returned. Some shells keep a cache of the commands that are available in the directories listed in the PATH environment variable, to speed up lookup operations for the executable file of these commands. Users of the tcsh shell should type rehash so that a newly installed command can be used without specifying its full path. Use hash -r instead for the sh shell. Refer to the documentation for the shell for more information. During installation, a working subdirectory is created which contains all the temporary files used during compilation. Removing this directory saves disk space and minimizes the chance of problems later when upgrading to the newer version of the port: # make clean ===> Cleaning for lsof-88.d,8 # To save this extra step, instead use make install clean when compiling the port. ##### 4.5.2.1. Customizing Ports Installation Some ports provide build options which can be used to enable or disable application components, provide security options, or allow for other customizations. Examples include www/firefox, security/gpgme, and mail/sylpheed-claws. If the port depends upon other ports which have configurable options, it may pause several times for user interaction as the default behavior is to prompt the user to select options from a menu. To avoid this and do all of the configuration in one batch, run make config-recursive within the port skeleton. Then, run make install [clean] to compile and install the port. When using config-recursive, the list of ports to configure are gathered by the all-depends-list target. It is recommended to run make config-recursive until all dependent ports options have been defined, and ports options screens no longer appear, to be certain that all dependency options have been configured. There are several ways to revisit a port’s build options menu in order to add, remove, or change these options after a port has been built. One method is to cd into the directory containing the port and type make config. Another option is to use make showconfig. Another option is to execute make rmconfig which will remove all selected options and allow you to start over. All of these options, and others, are explained in great detail in ports(7). The ports system uses fetch(1) to download the source files, which supports various environment variables. 
The FTP_PASSIVE_MODE, FTP_PROXY, and FTP_PASSWORD variables may need to be set if the FreeBSD system is behind a firewall or FTP/HTTP proxy. See fetch(3) for the complete list of supported variables. For users who cannot be connected to the Internet all the time, make fetch can be run within /usr/ports, to fetch all distfiles, or within a category, such as /usr/ports/net, or within the specific port skeleton. Note that if a port has any dependencies, running this command in a category or ports skeleton will not fetch the distfiles of ports from another category. Instead, use make fetch-recursive to also fetch the distfiles for all the dependencies of a port. In rare cases, such as when an organization has a local distfiles repository, the MASTER_SITES variable can be used to override the download locations specified in the Makefile. When using this variable, specify the alternate location:

# cd /usr/ports/directory
# make MASTER_SITE_OVERRIDE= \
ftp://ftp.organization.org/pub/FreeBSD/ports/distfiles/ fetch

The WRKDIRPREFIX and PREFIX variables can override the default working and target directories. For example:

# make WRKDIRPREFIX=/usr/home/example/ports install

will compile the port in /usr/home/example/ports and install everything under /usr/local.

# make PREFIX=/usr/home/example/local install

will compile the port in /usr/ports and install it in /usr/home/example/local. And:

# make WRKDIRPREFIX=../ports PREFIX=../local install

will combine the two. These can also be set as environment variables. Refer to the manual page for your shell for instructions on how to set an environment variable.

#### 4.5.3. Removing Installed Ports

Installed ports can be uninstalled using pkg delete. Examples for using this command can be found in the pkg-delete(8) manual page. Alternately, make deinstall can be run in the port’s directory:

# cd /usr/ports/sysutils/lsof
# make deinstall
===>  Deinstalling for sysutils/lsof
===>   Deinstalling
Deinstallation has been requested for the following 1 packages:

        lsof-4.88.d,8

The deinstallation will free 229 kB
[1/1] Deleting lsof-4.88.d,8... done

It is recommended to read the messages as the port is uninstalled. If the port has any applications that depend upon it, this information will be displayed but the uninstallation will proceed. In such cases, it may be better to reinstall the application in order to prevent broken dependencies.

#### 4.5.4. Upgrading Ports

Over time, newer versions of software become available in the Ports Collection. This section describes how to determine which software can be upgraded and how to perform the upgrade. To determine if newer versions of installed ports are available, ensure that the latest version of the ports tree is installed, using the updating command described in “Git Method”. On FreeBSD 10 and later, or if the system has been converted to pkg, the following command will list the installed ports which are out of date:

# pkg version -l "<"

For FreeBSD 9.X and lower, the following command will list the installed ports that are out of date:

# pkg_version -l "<"

Before attempting an upgrade, read /usr/ports/UPDATING from the top of the file to the date closest to the last time ports were upgraded or the system was installed. This file describes various issues and additional steps users may encounter and need to perform when updating a port, including such things as file format changes, changes in locations of configuration files, or any incompatibilities with previous versions.
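The entries in this file can also be filtered by date with pkg updating, which shows only the items added after a given date. A small sketch, where the date is only an example:

# pkg updating -d 20240101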
Make note of any instructions which match any of the ports that need upgrading and follow these instructions when performing the upgrade.

##### 4.5.4.1. Tools to Upgrade and Manage Ports

The Ports Collection contains several utilities to perform the actual upgrade. Each has its strengths and weaknesses. Historically, most installations used either Portmaster or Portupgrade. Synth is a newer alternative. The choice of which tool is best for a particular system is up to the system administrator. It is recommended practice to back up your data before using any of these tools.

##### 4.5.4.2. Upgrading Ports Using Portmaster

ports-mgmt/portmaster is a very small utility for upgrading installed ports. It is designed to use the tools installed with the FreeBSD base system without depending on other ports or databases. To install this utility as a port:

# cd /usr/ports/ports-mgmt/portmaster
# make install clean

Portmaster defines four categories of ports:

• Root port: has no dependencies and is not a dependency of any other ports.
• Trunk port: has no dependencies, but other ports depend upon it.
• Branch port: has dependencies and other ports depend upon it.
• Leaf port: has dependencies but no other ports depend upon it.

To list these categories and search for updates:

# portmaster -L
===>>> Root ports (No dependencies, not depended on)
===>>> ispell-3.2.06_18
===>>> screen-4.0.3
===>>> New version available: screen-4.0.3_1
===>>> tcpflow-0.21_1
===>>> 7 root ports
...
===>>> Branch ports (Have dependencies, are depended on)
===>>> apache22-2.2.3
===>>> New version available: apache22-2.2.8
...
===>>> Leaf ports (Have dependencies, not depended on)
===>>> automake-1.9.6_2
===>>> bash-3.1.17
===>>> New version available: bash-3.2.33
...
===>>> 32 leaf ports
===>>> 137 total installed ports
===>>> 83 have new versions available

This command is used to upgrade all outdated ports:

# portmaster -a

By default, Portmaster makes a backup package before deleting the existing port. If the installation of the new version is successful, Portmaster deletes the backup. Using -b instructs Portmaster not to automatically delete the backup. Adding -i starts Portmaster in interactive mode, prompting for confirmation before upgrading each port. Many other options are available. Read through the manual page for portmaster(8) for details regarding their usage. If errors are encountered during the upgrade process, add -f to upgrade and rebuild all ports:

# portmaster -af

Portmaster can also be used to install new ports on the system, upgrading all dependencies before building and installing the new port. To use this function, specify the location of the port in the Ports Collection:

# portmaster shells/bash

##### 4.5.4.3. Upgrading Ports Using Portupgrade

ports-mgmt/portupgrade is another utility that can be used to upgrade ports. It installs a suite of applications which can be used to manage ports. However, it is dependent upon Ruby. To install the port:

# cd /usr/ports/ports-mgmt/portupgrade
# make install clean

Before performing an upgrade using this utility, it is recommended to scan the list of installed ports using pkgdb -F and to fix all the inconsistencies it reports. To upgrade all the outdated ports installed on the system, use portupgrade -a. Alternately, include -i to be asked for confirmation of every individual upgrade:

# portupgrade -ai

To upgrade only a specified application instead of all available ports, use portupgrade pkgname.
It is very important to include -R to first upgrade all the ports required by the given application: # portupgrade -R firefox If -P is included, Portupgrade searches for available packages in the local directories listed in PKG_PATH. If none are available locally, it then fetches packages from a remote site. If packages can not be found locally or fetched remotely, Portupgrade will use ports. To avoid using ports entirely, specify -PP. This last set of options tells Portupgrade to abort if no packages are available: # portupgrade -PP gnome3 To just fetch the port distfiles, or packages, if -P is specified, without building or installing anything, use -F. For further information on all of the available switches, refer to the manual page for portupgrade. #### 4.5.5. Ports and Disk Space Using the Ports Collection will use up disk space over time. After building and installing a port, running make clean within the ports skeleton will clean up the temporary work directory. If Portmaster is used to install a port, it will automatically remove this directory unless -K is specified. If Portupgrade is installed, this command will remove all work directories found within the local copy of the Ports Collection: # portsclean -C In addition, outdated source distribution files accumulate in /usr/ports/distfiles over time. To use Portupgrade to delete all the distfiles that are no longer referenced by any ports: # portsclean -D Portupgrade can remove all distfiles not referenced by any port currently installed on the system: # portsclean -DD If Portmaster is installed, use: # portmaster --clean-distfiles By default, this command is interactive and prompts the user to confirm if a distfile should be deleted. In addition to these commands, ports-mgmt/pkg_cutleaves automates the task of removing installed ports that are no longer needed. ### 4.6. Building Packages with Poudriere Poudriere is a BSD-licensed utility for creating and testing FreeBSD packages. It uses FreeBSD jails to set up isolated compilation environments. These jails can be used to build packages for versions of FreeBSD that are different from the system on which it is installed, and also to build packages for i386 if the host is an amd64 system. Once the packages are built, they are in a layout identical to the official mirrors. These packages are usable by pkg(8) and other package management tools. Poudriere is installed using the ports-mgmt/poudriere package or port. The installation includes a sample configuration file /usr/local/etc/poudriere.conf.sample. Copy this file to /usr/local/etc/poudriere.conf. Edit the copied file to suit the local configuration. While ZFS is not required on the system running poudriere, it is beneficial. When ZFS is used, ZPOOL must be specified in /usr/local/etc/poudriere.conf and FREEBSD_HOST should be set to a nearby mirror. Defining CCACHE_DIR enables the use of devel/ccache to cache compilation and reduce build times for frequently-compiled code. It may be convenient to put poudriere datasets in an isolated tree mounted at /poudriere. Defaults for the other configuration values are adequate. The number of processor cores detected is used to define how many builds will run in parallel. Supply enough virtual memory, either with RAM or swap space. If virtual memory runs out, the compilation jails will stop and be torn down, resulting in weird error messages. #### 4.6.1. 
Initialize Jails and Port Trees After configuration, initialize poudriere so that it installs a jail with the required FreeBSD tree and a ports tree. Specify a name for the jail using -j and the FreeBSD version with -v. On systems running FreeBSD/amd64, the architecture can be set with -a to either i386 or amd64. The default is the architecture shown by uname. # poudriere jail -c -j 13amd64 -v 13.1-RELEASE [00:00:00] Creating 13amd64 fs at /poudriere/jails/13amd64... done /poudriere/jails/13amd64/fromftp/base.txz 125 MB 4110 kBps 31s [00:00:33] Extracting base... done /poudriere/jails/13amd64/fromftp/src.txz 154 MB 4178 kBps 38s [00:01:33] Extracting src... done /poudriere/jails/13amd64/fromftp/lib32.txz 24 MB 3969 kBps 06s [00:02:38] Extracting lib32... done [00:02:42] Cleaning up... done [00:02:42] Recording filesystem state for clean... done /etc/resolv.conf -> /poudriere/jails/13amd64/etc/resolv.conf Looking up update.FreeBSD.org mirrors... 3 mirrors found. Fetching public key from update4.freebsd.org... done. Fetching metadata signature for 13.1-RELEASE from update4.freebsd.org... done. Inspecting system... done. Fetching 124 patches.....10....20....30....40....50....60....70....80....90....100....110....120.. done. Applying patches... done. Fetching 6 files... done. The following files will be added as part of updating to 13.1-RELEASE-p1: /usr/src/contrib/unbound/.github /usr/src/contrib/unbound/.github/FUNDING.yml /usr/src/contrib/unbound/contrib/drop2rpz /usr/src/contrib/unbound/contrib/unbound_portable.service.in /usr/src/contrib/unbound/services/rpz.c /usr/src/contrib/unbound/services/rpz.h /usr/src/lib/libc/tests/gen/spawnp_enoexec.sh The following files will be updated as part of updating to 13.1-RELEASE-p1: […] Scanning //usr/share/certs/trusted for certificates... done. 13.1-RELEASE-p1 [00:04:06] Recording filesystem state for clean... done [00:04:07] Jail 13amd64 13.1-RELEASE-p1 amd64 is ready to be used # poudriere ports -c -p local -m git+https [00:00:00] Creating local fs at /poudriere/ports/local... done [00:00:00] Checking out the ports tree... done On a single computer, poudriere can build ports with multiple configurations, in multiple jails, and from different port trees. Custom configurations for these combinations are called sets. See the CUSTOMIZATION section of poudriere(8) for details after ports-mgmt/poudriere or ports-mgmt/poudriere-devel is installed. The basic configuration shown here puts a single jail-, port-, and set-specific make.conf in /usr/local/etc/poudriere.d. The filename in this example is created by combining the jail name, port name, and set name: 13amd64-local-workstation-make.conf. The system make.conf and this new file are combined at build time to create the make.conf used by the build jail. Packages to be built are entered in 13amd64-local-workstation-pkglist: editors/emacs devel/git ports-mgmt/pkg ... Options and dependencies for the specified ports are configured: # poudriere options -j 13amd64 -p local -z workstation -f 13amd64-local-workstation-pkglist Finally, packages are built and a package repository is created: # poudriere bulk -j 13amd64 -p local -z workstation -f 13amd64-local-workstation-pkglist While running, pressing Ctrl+t displays the current state of the build. Poudriere also builds files in /poudriere/logs/bulk/jailname that can be used with a web server to display build information. After completion, the new packages are now available for installation from the poudriere repository. 
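As an illustration of the jail- and set-specific make.conf described above, here is a minimal sketch. The variables shown, OPTIONS_UNSET and MAKE_JOBS_NUMBER, are ordinary ports framework knobs and are only examples, not recommendations:
/usr/local/etc/poudriere.d/13amd64-local-workstation-make.conf
# Build all packages without documentation or examples, two make jobs per build
OPTIONS_UNSET=DOCS EXAMPLES
MAKE_JOBS_NUMBER=2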
For more information on using poudriere, see poudriere(8) and the main web site, https://github.com/freebsd/poudriere/wiki. #### 4.6.2. Configuring pkg Clients to Use a Poudriere Repository While it is possible to use a custom repository alongside the official repository, sometimes it is useful to disable the official repository. This is done by creating a configuration file that overrides and disables the official configuration file. Create /usr/local/etc/pkg/repos/FreeBSD.conf that contains the following: FreeBSD: { enabled: no } Usually it is easiest to serve a poudriere repository to the client machines via HTTP. Set up a webserver to serve up the package directory, for instance: /usr/local/poudriere/data/packages/13amd64, where 13amd64 is the name of the build. If the URL to the package repository is: http://pkg.example.com/13amd64, then the repository configuration file in /usr/local/etc/pkg/repos/custom.conf would look like: custom: { url: "http://pkg.example.com/13amd64", enabled: yes, } ### 4.7. Post-Installation Considerations Regardless of whether the software was installed from a binary package or port, most third-party applications require some level of configuration after installation. The following commands and locations can be used to help determine what was installed with the application. • Most applications install at least one default configuration file in /usr/local/etc. In cases where an application has a large number of configuration files, a subdirectory will be created to hold them. Often, sample configuration files are installed which end with a suffix such as .sample. The configuration files should be reviewed and possibly edited to meet the system’s needs. To edit a sample file, first copy it without the .sample extension. • Applications which provide documentation will install it into /usr/local/share/doc and many applications also install manual pages. This documentation should be consulted before continuing. • Some applications run services which must be added to /etc/rc.conf before starting the application. These applications usually install a startup script in /usr/local/etc/rc.d. See Starting Services for more information. By design, applications do not run their startup script upon installation, nor do they run their stop script upon deinstallation or upgrade. This decision is left to the individual system administrator. • Users of csh(1) should run rehash to rebuild the known binary list in the shell’s PATH. • Use pkg info to determine which files, man pages, and binaries were installed with the application. ### 4.8. Dealing with Broken Ports When a port does not build or install, try the following: 1. Search to see if there is a fix pending for the port in the Problem Report database. If so, implementing the proposed fix may fix the issue. 2. Ask the maintainer of the port for help. Type make maintainer in the ports skeleton or read the port’s Makefile to find the maintainer’s email address. Remember to include the output leading up to the error in the email to the maintainer. Some ports are not maintained by an individual but instead by a group maintainer represented by a mailing list. Many, but not all, of these addresses look like freebsd-listname@FreeBSD.org. Please take this into account when sending an email. In particular, ports maintained by ports@FreeBSD.org are not maintained by a specific individual. Instead, any fixes and support come from the general community who subscribe to that mailing list. More volunteers are always needed!
If there is no response to the email, use Bugzilla to submit a bug report using the instructions in Writing FreeBSD Problem Reports. 3. Fix it! The Porter’s Handbook includes detailed information on the ports infrastructure so that you can fix the occasional broken port or even submit your own! 4. Install the package instead of the port using the instructions in Using pkg for Binary Package Management. ## Chapter 5. The X Window System ### 5.1. Synopsis An installation of FreeBSD using bsdinstall does not automatically install a graphical user interface. This chapter describes how to install and configure Xorg, which provides the open source X Window System used to provide a graphical environment. It then describes how to find and install a desktop environment or window manager. Users who prefer an installation method that automatically configures Xorg should refer to GhostBSD, MidnightBSD or NomadBSD. For more information on the video hardware that Xorg supports, refer to the x.org website. After reading this chapter, you will know: • The various components of the X Window System, and how they interoperate. • How to install and configure Xorg. • How to install and configure several window managers and desktop environments. • How to use TrueType® fonts in Xorg. Before reading this chapter, you should: • Know how to install additional third-party software (Installing Applications: Packages and Ports). ### 5.2. Terminology While it is not necessary to understand all of the details of the various components in the X Window System and how they interact, some basic knowledge of these components can be useful. X server X was designed from the beginning to be network-centric, and adopts a "client-server" model. In this model, the "X server" runs on the computer that has the keyboard, monitor, and mouse attached. The server’s responsibility includes tasks such as managing the display, handling input from the keyboard and mouse, and handling input or output from other devices such as a tablet or a video projector. This confuses some people, because the X terminology is exactly backward to what they expect. They expect the "X server" to be the big powerful machine down the hall, and the "X client" to be the machine on their desk. X client Each X application, such as XTerm or Firefox, is a "client". A client sends messages to the server such as "Please draw a window at these coordinates", and the server sends back messages such as "The user just clicked on the OK button". In a home or small office environment, the X server and the X clients commonly run on the same computer. It is also possible to run the X server on a less powerful computer and to run the X applications on a more powerful system. In this scenario, the communication between the X client and server takes place over the network. window manager X does not dictate what windows should look like on-screen, how to move them around with the mouse, which keystrokes should be used to move between windows, what the title bars on each window should look like, whether or not they have close buttons on them, and so on. Instead, X delegates this responsibility to a separate window manager application. There are dozens of window managers available. Each window manager provides a different look and feel: some support virtual desktops, some allow customized keystrokes to manage the desktop, some have a "Start" button, and some are themeable, allowing a complete change of the desktop’s look-and-feel. Window managers are available in the x11-wm category of the Ports Collection. Each window manager uses a different configuration mechanism.
Some expect configuration file written by hand while others provide graphical tools for most configuration tasks. desktop environment KDE and GNOME are considered to be desktop environments as they include an entire suite of applications for performing common desktop tasks. These may include office suites, web browsers, and games. focus policy The window manager is responsible for the mouse focus policy. This policy provides some means for choosing which window is actively receiving keystrokes and it should also visibly indicate which window is currently active. One focus policy is called "click-to-focus". In this model, a window becomes active upon receiving a mouse click. In the "focus-follows-mouse" policy, the window that is under the mouse pointer has focus and the focus is changed by pointing at another window. If the mouse is over the root window, then this window is focused. In the "sloppy-focus" model, if the mouse is moved over the root window, the most recently used window still has the focus. With sloppy-focus, focus is only changed when the cursor enters a new window, and not when exiting the current window. In the "click-to-focus" policy, the active window is selected by mouse click. The window may then be raised and appear in front of all other windows. All keystrokes will now be directed to this window, even if the cursor is moved to another window. Different window managers support different focus models. All of them support click-to-focus, and the majority of them also support other policies. Consult the documentation for the window manager to determine which focus models are available. widgets Widget is a term for all of the items in the user interface that can be clicked or manipulated in some way. This includes buttons, check boxes, radio buttons, icons, and lists. A widget toolkit is a set of widgets used to create graphical applications. There are several popular widget toolkits, including Qt, used by KDE, and GTK+, used by GNOME. As a result, applications will have a different look and feel, depending upon which widget toolkit was used to create the application. ### 5.3. Installing Xorg On FreeBSD, Xorg can be installed as a package or port. The binary package can be installed quickly but with fewer options for customization: # pkg install xorg To build and install from the Ports Collection: # cd /usr/ports/x11/xorg # make install clean Either of these installations results in the complete Xorg system being installed. Binary packages are the best option for most users. A smaller version of the X system suitable for experienced users is available in x11/xorg-minimal. Most of the documents, libraries, and applications will not be installed. Some applications require these additional components to function. ### 5.4. Xorg Configuration #### 5.4.1. Quick Start Xorg supports most common video cards, keyboards, and pointing devices. Video cards, monitors, and input devices are automatically detected and do not require any manual configuration. Do not create xorg.conf or run a -configure step unless automatic configuration fails. 1. If Xorg has been used on this computer before, move or remove any existing configuration files: # mv /etc/X11/xorg.conf ~/xorg.conf.etc # mv /usr/local/etc/X11/xorg.conf ~/xorg.conf.localetc 2. Add the user who will run Xorg to the video or wheel group to enable 3D acceleration when available. To add user jru to whichever group is available: # pw groupmod video -m jru || pw groupmod wheel -m jru 3. The TWM window manager is included by default. 
It is started when Xorg starts: % startx 4. On some older versions of FreeBSD, the system console must be set to vt(4) before switching back to the text console will work properly. See Kernel Mode Setting (KMS). #### 5.4.2. User Group for Accelerated Video Access to /dev/dri is needed to allow 3D acceleration on video cards. It is usually simplest to add the user who will be running X to either the video or wheel group. Here, pw(8) is used to add user slurms to the video group, or to the wheel group if there is no video group: # pw groupmod video -m slurms || pw groupmod wheel -m slurms #### 5.4.3. Kernel Mode Setting (KMS) When the computer switches from displaying the console to a higher screen resolution for X, it must set the video output mode. Recent versions of Xorg use a system inside the kernel to do these mode changes more efficiently. Older versions of FreeBSD use sc(4), which is not aware of the KMS system. The end result is that after closing X, the system console is blank, even though it is still working. The newer vt(4) console avoids this problem. To enable vt(4), add this line to /boot/loader.conf: kern.vty=vt #### 5.4.4. Configuration Files Manual configuration is usually not necessary. Please do not manually create configuration files unless autoconfiguration does not work. ##### 5.4.4.1. Directory Xorg looks in several directories for configuration files. /usr/local/etc/X11/ is the recommended directory for these files on FreeBSD. Using this directory helps keep application files separate from operating system files. Storing configuration files in the legacy /etc/X11/ still works. However, this mixes application files with the base FreeBSD files and is not recommended. ##### 5.4.4.2. Single or Multiple Files It is easier to use multiple files that each configure a specific setting than the traditional single xorg.conf. These files are stored in the xorg.conf.d/ subdirectory of the main configuration file directory. The full path is typically /usr/local/etc/X11/xorg.conf.d/. Examples of these files are shown later in this section. The traditional single xorg.conf still works, but is neither as clear nor as flexible as multiple files in the xorg.conf.d/ subdirectory. #### 5.4.5. Video Cards The Ports framework provides the drm graphics drivers necessary for X11 operation on recent hardware. Users can use one of the following drivers available from graphics/drm-kmod. These drivers use interfaces in the kernel that are normally private. As such, it is strongly recommended that the drivers be built from the Ports Collection using the PORTS_MODULES variable. With PORTS_MODULES, every time you build the kernel, the corresponding port(s) containing kernel modules are re-built against the updated sources. This ensures the kernel module stays in-sync with the kernel itself. The kernel and ports trees should be updated together for maximum compatibility. You can add PORTS_MODULES to your /etc/make.conf file to ensure all kernels you build rebuild this module. Advanced users can add it to their kernel config files with the makeoptions directive. If you run GENERIC and use freebsd-update, you can just build the graphics/drm-kmod or x11/nvidia-driver port after each freebsd-update install invocation. /etc/make.conf SYSDIR=path/to/src/sys PORTS_MODULES=graphics/drm-kmod x11/nvidia-driver This example rebuilds both modules; select only the one that matches the GPU / graphics card in the system. Intel KMS driver, Radeon KMS driver, AMD KMS driver 2D and 3D acceleration is supported on most Intel KMS driver graphics cards provided by Intel.
Driver name: i915kms 2D and 3D acceleration is supported on most older Radeon KMS driver graphics cards provided by AMD. Driver name: radeonkms 2D and 3D acceleration is supported on most newer AMD KMS driver graphics cards provided by AMD. Driver name: amdgpu Intel® 3D acceleration is supported on most Intel® graphics up to Ivy Bridge (HD Graphics 2500, 4000, and P4000), including Iron Lake (HD Graphics) and Sandy Bridge (HD Graphics 2000). Driver name: intel 2D and 3D acceleration is supported on Radeon cards up to and including the HD6000 series. Driver name: radeon NVIDIA Several NVIDIA drivers are available in the x11 category of the Ports Collection. Install the driver that matches the video card. Kernel support for NVIDIA cards is found in either the x11/nvidia-driver port or the x11/nvidia-driver-xxx port. Modern cards use the former. Legacy cards use the -xxx ports, where xxx is one of 304, 340 or 390 indicating the version of the driver. For those, fill in the -xxx using the Supported NVIDIA GPU Products page. This page lists the devices supported by different versions of the driver. Legacy drivers run on both i386 and amd64. The current driver only supports amd64. Read installation and configuration of NVIDIA driver for details. While we recommend this driver be rebuilt with each kernel rebuild for maximum safety, it uses almost no private kernel interfaces and is usually safe across kernel updates. Hybrid Combination Graphics Some notebook computers add additional graphics processing units to those built into the chipset or processor. Optimus combines Intel® and NVIDIA hardware. Switchable Graphics or Hybrid Graphics are a combination of an Intel® or AMD® processor and an AMD® Radeon GPU. Implementations of these hybrid graphics systems vary, and Xorg on FreeBSD is not able to drive all versions of them. Some computers provide a BIOS option to disable one of the graphics adapters or select a discrete mode which can be used with one of the standard video card drivers. For example, it is sometimes possible to disable the NVIDIA GPU in an Optimus system. The Intel® video can then be used with an Intel® driver. BIOS settings depend on the model of computer. In some situations, both GPUs can be left enabled, but creating a configuration file that only uses the main GPU in the Device section is enough to make such a system functional. Other Video Cards Drivers for some less-common video cards can be found in the x11-drivers directory of the Ports Collection. Cards that are not supported by a specific driver might still be usable with the x11-drivers/xf86-video-vesa driver. This driver is installed by x11/xorg. It can also be installed manually as x11-drivers/xf86-video-vesa. Xorg attempts to use this driver when a specific driver is not found for the video card. x11-drivers/xf86-video-scfb is a similar nonspecialized video driver that works on many UEFI and ARM® computers. Setting the Video Driver in a File To set the Intel® driver in a configuration file: Example 14. Select Intel® Video Driver in a File /usr/local/etc/X11/xorg.conf.d/driver-intel.conf Section "Device" Identifier "Card0" Driver "intel" # BusID "PCI:1:0:0" EndSection If more than one video card is present, the BusID identifier can be uncommented and set to select the desired card. A list of video card bus IDs can be displayed with pciconf -lv | grep -B3 display. To set the Radeon driver in a configuration file: Example 15. 
Select Radeon Video Driver in a File /usr/local/etc/X11/xorg.conf.d/driver-radeon.conf Section "Device" Identifier "Card0" Driver "radeon" EndSection To set the VESA driver in a configuration file: Example 16. Select VESA Video Driver in a File /usr/local/etc/X11/xorg.conf.d/driver-vesa.conf Section "Device" Identifier "Card0" Driver "vesa" EndSection To set the scfb driver for use with a UEFI or ARM® computer: Example 17. Select scfb Video Driver in a File /usr/local/etc/X11/xorg.conf.d/driver-scfb.conf Section "Device" Identifier "Card0" Driver "scfb" EndSection #### 5.4.6. Monitors Almost all monitors support the Extended Display Identification Data standard (EDID). Xorg uses EDID to communicate with the monitor and detect the supported resolutions and refresh rates. Then it selects the most appropriate combination of settings to use with that monitor. Other resolutions supported by the monitor can be chosen by setting the desired resolution in configuration files, or after the X server has been started with xrandr(1). Using xrandr(1) Run xrandr(1) without any parameters to see a list of video outputs and detected monitor modes: % xrandr Screen 0: minimum 320 x 200, current 3000 x 1920, maximum 8192 x 8192 DVI-0 connected primary 1920x1200+1080+0 (normal left inverted right x axis y axis) 495mm x 310mm 1920x1200 59.95*+ 1600x1200 60.00 1280x1024 85.02 75.02 60.02 1280x960 60.00 1152x864 75.00 1024x768 85.00 75.08 70.07 60.00 832x624 74.55 800x600 75.00 60.32 640x480 75.00 60.00 720x400 70.08 DisplayPort-0 disconnected (normal left inverted right x axis y axis) HDMI-0 disconnected (normal left inverted right x axis y axis) This shows that the DVI-0 output is being used to display a screen resolution of 1920x1200 pixels at a refresh rate of about 60 Hz. Monitors are not attached to the DisplayPort-0 and HDMI-0 connectors. Any of the other display modes can be selected with xrandr(1). For example, to switch to 1280x1024 at 60 Hz: % xrandr --output DVI-0 --mode 1280x1024 --rate 60 A common task is using the external video output on a notebook computer for a video projector. The type and quantity of output connectors varies between devices, and the name given to each output varies from driver to driver. What one driver calls HDMI-1, another might call HDMI1. So the first step is to run xrandr(1) to list all the available outputs: % xrandr Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192 LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm 1366x768 60.04*+ 1024x768 60.00 800x600 60.32 56.25 640x480 59.94 VGA1 connected (normal left inverted right x axis y axis) 1280x1024 60.02 + 75.02 1280x960 60.00 1152x864 75.00 1024x768 75.08 70.07 60.00 832x624 74.55 800x600 72.19 75.00 60.32 56.25 640x480 75.00 72.81 66.67 60.00 720x400 70.08 HDMI1 disconnected (normal left inverted right x axis y axis) DP1 disconnected (normal left inverted right x axis y axis) Four outputs were found: the built-in panel LVDS1, and external VGA1, HDMI1, and DP1 connectors. The projector has been connected to the VGA1 output. xrandr(1) is now used to set that output to the native resolution of the projector and add the additional space to the right side of the desktop: % xrandr --output VGA1 --auto --right-of LVDS1 --auto chooses the resolution and refresh rate detected by EDID. If the resolution is not correctly detected, a fixed value can be given with --mode instead of the --auto statement. For example, most projectors can be used with a 1024x768 resolution, which is set with --mode 1024x768.
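To apply such a projector layout automatically, the same xrandr(1) invocation can be placed in a startup file. A minimal sketch of a ~/.xinitrc fragment, reusing the output names from the example above:
# Extend the desktop onto the projector connected to VGA1
xrandr --output VGA1 --auto --right-of LVDS1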
xrandr(1) is often run from .xinitrc to set the appropriate mode when X starts. Setting Monitor Resolution in a File To set a screen resolution of 1024x768 in a configuration file: Example 18. Set Screen Resolution in a File /usr/local/etc/X11/xorg.conf.d/screen-resolution.conf Section "Screen" Identifier "Screen0" Device "Card0" SubSection "Display" Modes "1024x768" EndSubSection EndSection The few monitors that do not have EDID can be configured by setting HorizSync and VertRefresh to the range of frequencies supported by the monitor. Example 19. Manually Setting Monitor Frequencies /usr/local/etc/X11/xorg.conf.d/monitor0-freq.conf Section "Monitor" Identifier "Monitor0" HorizSync 30-83 # kHz VertRefresh 50-76 # Hz EndSection #### 5.4.7. Input Devices ##### 5.4.7.1. Keyboards Keyboard Layout The standardized location of keys on a keyboard is called a layout. Layouts and other adjustable parameters are listed in xkeyboard-config(7). A United States layout is the default. To select an alternate layout, set the XkbLayout and XkbVariant options in an InputClass. This will be applied to all input devices that match the class. This example selects a French keyboard layout. Example 20. Setting a Keyboard Layout /usr/local/etc/X11/xorg.conf.d/keyboard-fr.conf Section "InputClass" Identifier "KeyboardDefaults" MatchIsKeyboard "on" Option "XkbLayout" "fr" EndSection Example 21. Setting Multiple Keyboard Layouts Set United States, Spanish, and Ukrainian keyboard layouts. Cycle through these layouts by pressing Alt+Shift. x11/xxkb or x11/sbxkb can be used for improved layout switching control and current layout indicators. /usr/local/etc/X11/xorg.conf.d/kbd-layout-multi.conf Section "InputClass" Identifier "All Keyboards" MatchIsKeyboard "yes" Option "XkbLayout" "us, es, ua" EndSection Closing Xorg From the Keyboard X can be closed with a combination of keys. By default, that key combination is not set because it conflicts with keyboard commands for some applications. Enabling this option requires changes to the keyboard InputDevice section: Example 22. Enabling Keyboard Exit from X /usr/local/etc/X11/xorg.conf.d/keyboard-zap.conf Section "InputClass" Identifier "KeyboardDefaults" MatchIsKeyboard "on" Option "XkbOptions" "terminate:ctrl_alt_bksp" EndSection ##### 5.4.7.2. Mice and Pointing Devices If using xorg-server 1.20.8 or later under FreeBSD 12.1 and not using moused(8), add kern.evdev.rcpt_mask=12 to /etc/sysctl.conf. Many mouse parameters can be adjusted with configuration options. See mousedrv(4) for a full list. Mouse Buttons The number of buttons on a mouse can be set in the mouse InputDevice section of xorg.conf. To set the number of buttons to 7: Example 23. Setting the Number of Mouse Buttons /usr/local/etc/X11/xorg.conf.d/mouse0-buttons.conf Section "InputDevice" Identifier "Mouse0" Option "Buttons" "7" EndSection #### 5.4.8. Manual Configuration In some cases, Xorg autoconfiguration does not work with particular hardware, or a different configuration is desired. For these cases, a custom configuration file can be created. Do not create manual configuration files unless required. Unnecessary manual configuration can prevent proper operation. A configuration file can be generated by Xorg based on the detected hardware. This file is often a useful starting point for custom configurations. Generating an xorg.conf: # Xorg -configure The configuration file is saved to /root/xorg.conf.new. 
Make any changes desired, then test that file (using -retro so there is a visible background) with: # Xorg -retro -config /root/xorg.conf.new After the new configuration has been adjusted and tested, it can be split into smaller files in the normal location, /usr/local/etc/X11/xorg.conf.d/. ### 5.5. Using Fonts in Xorg #### 5.5.1. Type1 Fonts The default fonts that ship with Xorg are less than ideal for typical desktop publishing applications. Large presentation fonts show up jagged and unprofessional looking, and small fonts are almost completely unintelligible. However, there are several free, high quality Type1 (PostScript®) fonts available which can be readily used with Xorg. For instance, the URW font collection (x11-fonts/urwfonts) includes high quality versions of standard type1 fonts (Times Roman™, Helvetica™, Palatino™ and others). The Freefonts collection (x11-fonts/freefonts) includes many more fonts, but most of them are intended for use in graphics software such as the Gimp, and are not complete enough to serve as screen fonts. In addition, Xorg can be configured to use TrueType® fonts with a minimum of effort. For more details on this, see the X(7) manual page or TrueType® Fonts. To install the above Type1 font collections from binary packages, run the following commands: # pkg install urwfonts Alternatively, to build from the Ports Collection, run the following commands: # cd /usr/ports/x11-fonts/urwfonts # make install clean And likewise with the freefont or other collections. To have the X server detect these fonts, add an appropriate line to the X server configuration file (/etc/X11/xorg.conf), which reads: FontPath "/usr/local/share/fonts/urwfonts/" Alternatively, at the command line in the X session run: % xset fp+ /usr/local/share/fonts/urwfonts % xset fp rehash This will work but will be lost when the X session is closed, unless it is added to the startup file (~/.xinitrc for a normal startx session, or ~/.xsession when logging in through a graphical login manager like XDM). A third way is to use the new /usr/local/etc/fonts/local.conf as demonstrated in Anti-Aliased Fonts. #### 5.5.2. TrueType® Fonts Xorg has built in support for rendering TrueType® fonts. There are two different modules that can enable this functionality. The freetype module is used in this example because it is more consistent with the other font rendering back-ends. To enable the freetype module just add the following line to the "Module" section of /etc/X11/xorg.conf. Load "freetype" Now make a directory for the TrueType® fonts (for example, /usr/local/share/fonts/TrueType) and copy all of the TrueType® fonts into this directory. Keep in mind that TrueType® fonts cannot be directly taken from an Apple® Mac®; they must be in UNIX®/MS-DOS®/Windows® format for use by Xorg. Once the files have been copied into this directory, use mkfontscale to create a fonts.dir, so that the X font renderer knows that these new files have been installed. mkfontscale can be installed as a package: # pkg install mkfontscale Then create an index of X font files in a directory: # cd /usr/local/share/fonts/TrueType # mkfontscale Now add the TrueType® directory to the font path. This is just the same as described in Type1 Fonts: % xset fp+ /usr/local/share/fonts/TrueType % xset fp rehash or add a FontPath line to xorg.conf. Now Gimp, LibreOffice, and all of the other X applications should now recognize the installed TrueType® fonts. 
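If the xorg.conf route mentioned above is preferred over xset, the FontPath entry belongs in a "Files" section. A minimal sketch for the TrueType® directory used here:
Section "Files"
    FontPath "/usr/local/share/fonts/TrueType/"
EndSection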
Extremely small fonts (as with text in a high resolution display on a web page) and extremely large fonts (within LibreOffice) will look much better now. #### 5.5.3. Anti-Aliased Fonts All fonts in Xorg that are found in /usr/local/share/fonts/ and ~/.fonts/ are automatically made available for anti-aliasing to Xft-aware applications. Most recent applications are Xft-aware, including KDE, GNOME, and Firefox. To control which fonts are anti-aliased, or to configure anti-aliasing properties, create (or edit, if it already exists) the file /usr/local/etc/fonts/local.conf. Several advanced features of the Xft font system can be tuned using this file; this section describes only some simple possibilities. For more details, please see fonts-conf(5). This file must be in XML format. Pay careful attention to case, and make sure all tags are properly closed. The file begins with the usual XML header followed by a DOCTYPE definition, and then the <fontconfig> tag: <?xml version="1.0"?> <!DOCTYPE fontconfig SYSTEM "fonts.dtd"> <fontconfig> As previously stated, all fonts in /usr/local/share/fonts/ as well as ~/.fonts/ are already made available to Xft-aware applications. To add another directory outside of these two directory trees, add a line like this to /usr/local/etc/fonts/local.conf: <dir>/path/to/my/fonts</dir> After adding new fonts, and especially new font directories, rebuild the font caches: # fc-cache -f Anti-aliasing makes borders slightly fuzzy, which makes very small text more readable and removes "staircases" from large text, but can cause eyestrain if applied to normal text. To exclude font sizes smaller than 14 point from anti-aliasing, include these lines: <match target="font"> <test name="size" compare="less"> <double>14</double> </test> <edit name="antialias" mode="assign"> <bool>false</bool> </edit> </match> <match target="font"> <test name="pixelsize" compare="less" qual="any"> <double>14</double> </test> <edit mode="assign" name="antialias"> <bool>false</bool> </edit> </match> Spacing for some monospaced fonts might also be inappropriate with anti-aliasing. This seems to be an issue with KDE, in particular. One possible fix is to force the spacing for such fonts to be 100. Add these lines: <match target="pattern" name="family"> <test qual="any" name="family"> <string>fixed</string> </test> <edit name="family" mode="assign"> <string>mono</string> </edit> </match> <match target="pattern" name="family"> <test qual="any" name="family"> <string>console</string> </test> <edit name="family" mode="assign"> <string>mono</string> </edit> </match> (this aliases the other common names for fixed fonts as "mono"), and then add: <match target="pattern" name="family"> <test qual="any" name="family"> <string>mono</string> </test> <edit name="spacing" mode="assign"> <int>100</int> </edit> </match> Certain fonts, such as Helvetica, may have a problem when anti-aliased. Usually this manifests itself as a font that seems cut in half vertically. At worst, it may cause applications to crash. To avoid this, consider adding the following to local.conf: <match target="pattern" name="family"> <test qual="any" name="family"> <string>Helvetica</string> </test> <edit name="family" mode="assign"> <string>sans-serif</string> </edit> </match> After editing local.conf, make certain to end the file with the </fontconfig> tag. Not doing this will cause changes to be ignored. Users can add personalized settings by creating their own ~/.config/fontconfig/fonts.conf. 
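A minimal sketch of such a per-user file, containing only the standard wrapper and one extra font directory (the path is only an example):
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
    <!-- example: an additional personal font directory -->
    <dir>~/myfonts</dir>
</fontconfig>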
This file uses the same XML format described above. One last point: with an LCD screen, sub-pixel sampling may be desired. This basically treats the (horizontally separated) red, green and blue components separately to improve the horizontal resolution; the results can be dramatic. To enable this, add the line somewhere in local.conf: <match target="font"> <test qual="all" name="rgba"> <const>unknown</const> </test> <edit name="rgba" mode="assign"> <const>rgb</const> </edit> </match> Depending on the sort of display, rgb may need to be changed to bgr, vrgb or vbgr: experiment and see which works best. ### 5.6. The X Display Manager Xorg provides an X Display Manager, XDM, which can be used for login session management. XDM provides a graphical interface for choosing which display server to connect to and for entering authorization information such as a login and password combination. This section demonstrates how to configure the X Display Manager on FreeBSD. Some desktop environments provide their own graphical login manager. Refer to GNOME for instructions on how to configure the GNOME Display Manager and KDE for instructions on how to configure the KDE Display Manager. #### 5.6.1. Configuring XDM To install XDM, use the x11/xdm package or port. Once installed, XDM can be configured to run when the machine boots up by adding the following line to /etc/rc.conf: xdm_enable="YES" XDM will run on the ninth virtual terminal by default. The XDM configuration directory is located in /usr/local/etc/X11/xdm. This directory contains several files used to change the behavior and appearance of XDM, as well as a few scripts and programs used to set up the desktop when XDM is running. XDM Configuration Files summarizes the function of each of these files. The exact syntax and usage of these files is described in xdm(8). Table 7. XDM Configuration Files FileDescription Xaccess The protocol for connecting to XDM is called the X Display Manager Connection Protocol (XDMCP). This file is a client authorization ruleset for controlling XDMCP connections from remote machines. By default, this file does not allow any remote clients to connect. Xresources This file controls the look and feel of the XDM display chooser and login screens. The default configuration is a simple rectangular login window with the hostname of the machine displayed at the top in a large font and "Login:" and "Password:" prompts below. The format of this file is identical to the app-defaults file described in the Xorg documentation. Xservers The list of local and remote displays the chooser should provide as login choices. Xsession Default session script for logins which is run by XDM after a user has logged in. This points to a customized session script in ~/.xsession. Xsetup_* Script to automatically launch applications before displaying the chooser or login interfaces. There is a script for each display being used, named Xsetup_*, where * is the local display number. Typically these scripts run one or two programs in the background such as xconsole. xdm-config Global configuration for all displays running on this machine. xdm-errors Contains errors generated by the server program. If a display that XDM is trying to start hangs, look at this file for error messages. These messages are also written to the user’s ~/.xsession-errors on a per-session basis. xdm-pid The running process ID of XDM. #### 5.6.2. Configuring Remote Access By default, only users on the same system can login using XDM. 
To enable users on other systems to connect to the display server, edit the access control rules and enable the connection listener. To configure XDM to listen for any remote connection, comment out the DisplayManager.requestPort line in /usr/local/etc/X11/xdm/xdm-config by putting a ! in front of it: ! SECURITY: do not listen for XDMCP or Chooser requests ! Comment out this line if you want to manage X terminals with xdm DisplayManager.requestPort: 0 Save the edits and restart XDM. To restrict remote access, look at the example entries in /usr/local/etc/X11/xdm/Xaccess and refer to xdm(8) for further information. ### 5.7. Desktop Environments This section describes how to install three popular desktop environments on a FreeBSD system. A desktop environment can range from a simple window manager to a complete suite of desktop applications. Over a hundred desktop environments are available in the x11-wm category of the Ports Collection. #### 5.7.1. GNOME GNOME is a user-friendly desktop environment. It includes a panel for starting applications and displaying status, a desktop, a set of tools and applications, and a set of conventions that make it easy for applications to cooperate and be consistent with each other. More information regarding GNOME on FreeBSD can be found at https://www.FreeBSD.org/gnome. That web site contains additional documentation about installing, configuring, and managing GNOME on FreeBSD. This desktop environment can be installed from a package: # pkg install gnome To instead build GNOME from ports, use the following command. GNOME is a large application and will take some time to compile, even on a fast computer. # cd /usr/ports/x11/gnome # make install clean GNOME requires /proc to be mounted. Add this line to /etc/fstab to mount this file system automatically during system startup: proc /proc procfs rw 0 0 GNOME uses D-Bus for a message bus and hardware abstraction. These applications are automatically installed as dependencies of GNOME. Enable them in /etc/rc.conf so they will be started when the system boots: dbus_enable="YES" After installation, configure Xorg to start GNOME. The easiest way to do this is to enable the GNOME Display Manager, GDM, which is installed as part of the GNOME package or port. It can be enabled by adding this line to /etc/rc.conf: gdm_enable="YES" It is often desirable to also start all GNOME services. To achieve this, add a second line to /etc/rc.conf: gnome_enable="YES" GDM will start automatically when the system boots. A second method for starting GNOME is to type startx from the command-line after configuring ~/.xinitrc. If this file already exists, replace the line that starts the current window manager with one that starts /usr/local/bin/gnome-session. If this file does not exist, create it with this command: % echo "exec /usr/local/bin/gnome-session" > ~/.xinitrc A third method is to use XDM as the display manager. In this case, create an executable ~/.xsession: % echo "exec /usr/local/bin/gnome-session" > ~/.xsession #### 5.7.2. KDE KDE is another easy-to-use desktop environment. This desktop provides a suite of applications with a consistent look and feel, a standardized menu and toolbars, keybindings, color-schemes, internationalization, and a centralized, dialog-driven desktop configuration. More information on KDE can be found at http://www.kde.org/. For FreeBSD-specific information, consult http://freebsd.kde.org. 
To install the KDE package, type: # pkg install x11/kde5 To instead build the KDE port, use the following command. Installing the port will provide a menu for selecting which components to install. KDE is a large application and will take some time to compile, even on a fast computer. # cd /usr/ports/x11/kde5 # make install clean KDE requires /proc to be mounted. Add this line to /etc/fstab to mount this file system automatically during system startup: proc /proc procfs rw 0 0 KDE uses D-Bus for a message bus and hardware abstraction. These applications are automatically installed as dependencies of KDE. Enable them in /etc/rc.conf so they will be started when the system boots: dbus_enable="YES" Since KDE Plasma 5, the KDE Display Manager, KDM, is no longer developed. A possible replacement is SDDM. To install it, type: # pkg install x11/sddm To enable SDDM at startup, add this line to /etc/rc.conf: sddm_enable="YES" A second method for launching KDE Plasma is to type startx from the command line. For this to work, the following line is needed in ~/.xinitrc: exec ck-launch-session startplasma-x11 A third method for starting KDE Plasma is through XDM. To do so, create an executable ~/.xsession as follows: % echo "exec ck-launch-session startplasma-x11" > ~/.xsession Once KDE Plasma is started, refer to its built-in help system for more information on how to use its various menus and applications. #### 5.7.3. Xfce Xfce is a desktop environment based on the GTK+ toolkit used by GNOME. However, it is more lightweight and provides a simple, efficient, easy-to-use desktop. It is fully configurable, has a main panel with menus, applets, and application launchers, provides a file manager and sound manager, and is themeable. Since it is fast, light, and efficient, it is ideal for older or slower machines with memory limitations. More information on Xfce can be found at http://www.xfce.org. To install the Xfce package: # pkg install xfce Alternatively, to build the port: # cd /usr/ports/x11-wm/xfce4 # make install clean Xfce uses D-Bus for a message bus. This application is automatically installed as a dependency of Xfce. Enable it in /etc/rc.conf so it will be started when the system boots: dbus_enable="YES" Unlike GNOME or KDE, Xfce does not provide its own login manager. In order to start Xfce from the command line by typing startx, first create ~/.xinitrc with this command: % echo ". /usr/local/etc/xdg/xfce4/xinitrc" > ~/.xinitrc An alternate method is to use XDM. To configure this method, create an executable ~/.xsession: % echo ". /usr/local/etc/xdg/xfce4/xinitrc" > ~/.xsession ### 5.8. Installing Compiz Fusion One way to make using a desktop computer more pleasant is with nice 3D effects. Installing the Compiz Fusion package is easy, but configuring it requires a few steps that are not described in the port’s documentation. #### 5.8.1. Setting up the FreeBSD nVidia Driver Desktop effects can cause quite a load on the graphics card. For an nVidia-based graphics card, the proprietary driver is required for good performance. Users of other graphics cards can skip this section and continue with the xorg.conf configuration. To determine which nVidia driver is needed see the FAQ question on the subject. Having determined the correct driver to use for your card, installation is as simple as installing any other package. For example, to install the latest driver: # pkg install x11/nvidia-driver The driver will create a kernel module, which needs to be loaded at system startup.
Use sysrc(8) to load the module at startup: # sysrc kld_list+="nvidia" Alternatively, add this line to /boot/loader.conf: nvidia_load="YES" To immediately load the kernel module into the running kernel, issue a command like kldload nvidia. However, it has been noted that some versions of Xorg will not function properly if the driver is not loaded at boot time. After editing /boot/loader.conf, a reboot is recommended. Improper settings in /boot/loader.conf can cause the system not to boot properly. With the kernel module loaded, you normally only need to change a single line in xorg.conf to enable the proprietary driver: Find the following line in /etc/X11/xorg.conf: Driver "nv" and change it to: Driver "nvidia" Start the GUI as usual, and you should be greeted by the nVidia splash. Everything should work as usual. #### 5.8.2. Configuring xorg.conf for Desktop Effects To enable Compiz Fusion, /etc/X11/xorg.conf needs to be modified: Add the following section to enable composite effects: Section "Extensions" Option "Composite" "Enable" EndSection Locate the "Screen" section which should look similar to the one below: Section "Screen" Identifier "Screen0" Device "Card0" Monitor "Monitor0" ... and add the following two lines (after "Monitor" will do): DefaultDepth 24 Option "AddARGBGLXVisuals" "True" Locate the "Subsection" that refers to the screen resolution that you wish to use. For example, if you wish to use 1280x1024, locate the section that follows. If the desired resolution does not appear in any subsection, you may add the relevant entry by hand: SubSection "Display" Viewport 0 0 Modes "1280x1024" EndSubSection A color depth of 24 bits is needed for desktop composition; change the above subsection to: SubSection "Display" Viewport 0 0 Depth 24 Modes "1280x1024" EndSubSection Finally, confirm that the "glx" and "extmod" modules are loaded in the "Module" section: Section "Module" ... The preceding can be done automatically with x11/nvidia-xconfig by running (as root): # nvidia-xconfig --add-argb-glx-visuals # nvidia-xconfig --composite # nvidia-xconfig --depth=24 #### 5.8.3. Installing and Configuring Compiz Fusion Installing Compiz Fusion is as simple as any other package: # pkg install x11-wm/compiz-fusion When the installation is finished, start your graphic desktop and at a terminal, enter the following commands (as a normal user): % compiz --replace --sm-disable --ignore-desktop-hints ccp & % emerald --replace & Your screen will flicker for a few seconds, as your window manager (e.g., Metacity if you are using GNOME) is replaced by Compiz Fusion. Emerald takes care of the window decorations (i.e., close, minimize, maximize buttons, title bars and so on). You may convert this to a trivial script and have it run at startup automatically (e.g., by adding to "Sessions" in a GNOME desktop): #! /bin/sh compiz --replace --sm-disable --ignore-desktop-hints ccp & emerald --replace & Save this in your home directory as, for example, start-compiz and make it executable: % chmod +x ~/start-compiz Then use the GUI to add it to Startup Programs (located in System, Preferences, Sessions on a GNOME desktop). To actually select all the desired effects and their settings, execute (again as a normal user) the Compiz Config Settings Manager: % ccsm In GNOME, this can also be found in the System, Preferences menu. If you have selected "gconf support" during the build, you will also be able to view these settings using gconf-editor under apps/compiz. ### 5.9.
Troubleshooting If the mouse does not work, you will need to first configure it before proceeding. In recent Xorg versions, the InputDevice sections in xorg.conf are ignored in favor of the autodetected devices. To restore the old behavior, add the following line to the ServerLayout or ServerFlags section of this file: Option "AutoAddDevices" "false" Input devices may then be configured as in previous versions, along with any other options needed (e.g., keyboard layout switching). This section contains partially outdated information. The HAL daemon (hald) is no longer a part of the FreeBSD desktop setup. As previously explained, the hald daemon will, by default, automatically detect your keyboard. There is a chance that the keyboard layout or model will not be correct; desktop environments like GNOME, KDE or Xfce provide tools to configure the keyboard. However, it is possible to set the keyboard properties directly either with the help of the setxkbmap(1) utility or with a hald configuration rule. For example, to use a PC 102-key keyboard with a French layout, create a keyboard configuration file for hald called x11-input.fdi and save it in the /usr/local/etc/hal/fdi/policy directory. This file should contain the following lines: <?xml version="1.0" encoding="utf-8"?> <deviceinfo version="0.2"> <device> <match key="info.capabilities" contains="input.keyboard"> <merge key="input.x11_options.XkbModel" type="string">pc102</merge> <merge key="input.x11_options.XkbLayout" type="string">fr</merge> </match> </device> </deviceinfo> If this file already exists, add the lines regarding the keyboard configuration to it. You will have to reboot your machine to force hald to read this file. It is possible to do the same configuration from an X terminal or a script with this command line: % setxkbmap -model pc102 -layout fr /usr/local/share/X11/xkb/rules/base.lst lists the various keyboard models, layouts, and options available. The xorg.conf.new configuration file may now be tuned to taste. Open the file in a text editor such as emacs(1) or ee(1). If the monitor is an older or unusual model that does not support autodetection of sync frequencies, those settings can be added to xorg.conf.new under the "Monitor" section: Section "Monitor" Identifier "Monitor0" VendorName "Monitor Vendor" ModelName "Monitor Model" HorizSync 30-107 VertRefresh 48-120 EndSection Most monitors support sync frequency autodetection, making manual entry of these values unnecessary. For the few monitors that do not support autodetection, avoid potential damage by only entering values provided by the manufacturer. X allows DPMS (Energy Star) features to be used with capable monitors. The xset(1) program controls the time-outs and can force standby, suspend, or off modes. If you wish to enable DPMS features for your monitor, you must add the following line to the monitor section: Option "DPMS" While the xorg.conf.new configuration file is still open in an editor, select the default resolution and color depth desired. This is defined in the "Screen" section: Section "Screen" Identifier "Screen0" Device "Card0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1024x768" EndSubSection EndSection The DefaultDepth keyword describes the color depth to run at by default. This can be overridden with the -depth command line switch to Xorg(1). The Modes keyword describes the resolution to run at for the given color depth.
Note that only VESA standard modes are supported as defined by the target system’s graphics hardware. In the example above, the default color depth is twenty-four bits per pixel. At this color depth, the accepted resolution is 1024 by 768 pixels. Finally, write the configuration file and test it using the test mode given above. One of the tools available to assist you during troubleshooting process are the Xorg log files, which contain information on each device that the Xorg server attaches to. Xorg log file names are in the format of /var/log/Xorg.0.log. The exact name of the log can vary from Xorg.0.log to Xorg.8.log and so forth. If all is well, the configuration file needs to be installed in a common location where Xorg(1) can find it. This is typically /etc/X11/xorg.conf or /usr/local/etc/X11/xorg.conf. # cp xorg.conf.new /etc/X11/xorg.conf The Xorg configuration process is now complete. Xorg may be now started with the startx(1) utility. The Xorg server may also be started with the use of xdm(8). #### 5.9.1. Configuration with Intel® i810 Graphics Chipsets Configuration with Intel® i810 integrated chipsets requires the agpgart AGP programming interface for Xorg to drive the card. See the agp(4) driver manual page for more information. This will allow configuration of the hardware as any other graphics board. Note on systems without the agp(4) driver compiled in the kernel, trying to load the module with kldload(8) will not work. This driver has to be in the kernel at boot time through being compiled in or using /boot/loader.conf. #### 5.9.2. Adding a Widescreen Flatpanel to the Mix This section assumes a bit of advanced configuration knowledge. If attempts to use the standard configuration tools above have not resulted in a working configuration, there is information enough in the log files to be of use in getting the setup working. Use of a text editor will be necessary. Current widescreen (WSXGA, WSXGA+, WUXGA, WXGA, WXGA+, et.al.) formats support 16:10 and 10:9 formats or aspect ratios that can be problematic. Examples of some common screen resolutions for 16:10 aspect ratios are: • 2560x1600 • 1920x1200 • 1680x1050 • 1440x900 • 1280x800 At some point, it will be as easy as adding one of these resolutions as a possible Mode in the Section "Screen" as such: Section "Screen" Identifier "Screen0" Device "Card0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1680x1050" EndSubSection EndSection Xorg is smart enough to pull the resolution information from the widescreen via I2C/DDC information so it knows what the monitor can handle as far as frequencies and resolutions. If those ModeLines do not exist in the drivers, one might need to give Xorg a little hint. Using /var/log/Xorg.0.log one can extract enough information to manually create a ModeLine that will work. Simply look for information resembling this: (II) MGA(0): Supported additional Video Mode: (II) MGA(0): clock: 146.2 MHz Image Size: 433 x 271 mm (II) MGA(0): h_active: 1680 h_sync: 1784 h_sync_end 1960 h_blank_end 2240 h_border: 0 (II) MGA(0): v_active: 1050 v_sync: 1053 v_sync_end 1059 v_blanking: 1089 v_border: 0 (II) MGA(0): Ranges: V min: 48 V max: 85 Hz, H min: 30 H max: 94 kHz, PixClock max 170 MHz This information is called EDID information. Creating a ModeLine from this is just a matter of putting the numbers in the correct order: ModeLine <name> <clock> <4 horiz. timings> <4 vert. 
So that the ModeLine in Section "Monitor" for this example would look like this: Section "Monitor" Identifier "Monitor1" VendorName "Bigname" ModelName "BestModel" ModeLine "1680x1050" 146.2 1680 1784 1960 2240 1050 1053 1059 1089 Option "DPMS" EndSection Having completed these simple editing steps, X should now start on the new widescreen monitor. #### 5.9.3. Troubleshooting Compiz Fusion ##### 5.9.3.1. I have installed Compiz Fusion, and after running the commands you mention, my windows are left without title bars and buttons. What is wrong? You are probably missing a setting in /etc/X11/xorg.conf. Review this file carefully and check especially the DefaultDepth and AddARGBGLXVisuals directives. ##### 5.9.3.2. When I run the command to start Compiz Fusion, the X server crashes and I am back at the console. What is wrong? If you check /var/log/Xorg.0.log, you will probably find error messages during the X startup. The most common would be: (EE) NVIDIA(0): Failed to initialize the GLX module; please check in your X (EE) NVIDIA(0): log file that the GLX module has been loaded in your X (EE) NVIDIA(0): server, and that the module is the NVIDIA GLX module. If (EE) NVIDIA(0): you continue to encounter problems, Please try (EE) NVIDIA(0): reinstalling the NVIDIA driver. This is usually the case when you upgrade Xorg. You will need to reinstall the x11/nvidia-driver package so glx is built again. ## Chapter 6. Wayland on FreeBSD ### 6.1. Wayland Synopsis Wayland is a new display server, but it differs from Xorg in several important ways. First, Wayland is only a protocol that acts as an intermediary between clients and the compositor, which removes the dependency on an X server. Xorg includes both the X11 protocol, used to run remote displays, and the X server itself, which accepts connections and displays windows. Under Wayland, the compositor or window manager provides the display server instead of a traditional X server. Since Wayland is not an X server, traditional X screen connections will need to utilize other methods such as VNC or RDP for remote desktop management. Second, Wayland can manage composite communications between clients and a compositor as a separate entity which does not need to support the X protocols. Wayland is relatively new, and not all software has been updated to run natively without Xwayland support. Because Wayland does not provide the X server, and expects compositors to provide that support, X11 window managers that do not yet support Wayland will require that Xwayland is not started with the -rootless parameter. The -rootless parameter, when removed, will restore X11 window manager support. The current NVidia driver should work with most wlroots compositors, but it may be a little unstable and not support all features at this time. Volunteers to help work on the NVidia DRM are requested. Currently, a lot of software will function with minimal issues on Wayland, including Firefox. A few desktops are also available, such as the Compiz Fusion replacement, known as Wayfire, and the i3 window manager replacement, Sway. As of May 2021, plasma5-kwin does support Wayland on FreeBSD. To use Plasma under Wayland, pass the startplasma-wayland parameter to ck-launch-session and tie in dbus with: ck-launch-session dbus-run-session startplasma-wayland to get it working. For compositors, a kernel supporting the evdev(4) driver must exist to utilize the keybinding functionality.
This is built into the GENERIC kernel by default; however, if it has been customized and evdev(4) support was stripped out, the evdev(4) module will need to be loaded. In addition, users of Wayland will need to be members of the video group. To quickly make this change, use the pw command: # pw groupmod video -m user Installing Wayland is simple; there is not a great deal of configuration for the protocol itself. Most of the composition will depend on the chosen compositor. By installing seatd now, a step is skipped as part of the compositor installation and configuration, as seatd is needed to provide non-root access to certain devices. All of the compositors described here should work with the graphics/drm-kmod open source drivers; however, NVidia graphics cards may have issues when using the proprietary drivers. Begin by installing the following packages: # pkg install wayland seatd Once the protocol and supporting packages have been installed, a compositor must create the user interface. Several compositors will be covered in the following sections. All compositors using Wayland will need a runtime directory defined in the environment, which can be achieved with the following command in the Bourne shell: % export XDG_RUNTIME_DIR=/var/run/user/$(id -u) It is important to note that most compositors will search the XDG_RUNTIME_DIR directory for the configuration files. In the examples included here, a parameter will be used to specify a configuration file in ~/.config to keep temporary files and configuration files separate. It is recommended that an alias be configured for each compositor to load the designated configuration file. It has been reported that ZFS users may experience issues with some Wayland clients because they need access to posix_fallocate() in the runtime directory. While the author could not reproduce this issue on their ZFS system, a recommended workaround is not to use ZFS for the runtime directory and instead use tmpfs for the /var/run directory. In this case, the tmpfs file system is used for /var/run and mounted with the command mount -t tmpfs tmpfs /var/run, and the change can then be made persistent across reboots through /etc/fstab. The XDG_RUNTIME_DIR environment variable could be configured to use /var/run/user/$UID and avoid potential pitfalls with ZFS. Consider that scenario when reviewing the configuration examples in the following sections. The seatd daemon helps manage access to shared system devices for non-root users in compositors; this includes graphics cards. seatd is not needed for traditional X11 desktops such as Plasma and GNOME, but for the Wayland compositors discussed here, it will need to be enabled on the system and running before starting a compositor environment. To enable and start the seatd daemon now, and on system initialization: # sysrc seatd_enable="YES" # service seatd start Afterward, a compositor, which is similar to an X11 desktop, will need to be installed for the GUI environment. Three are discussed here, including basic configuration options, setting up a screen lock, and recommendations for more information. ### 6.2. The Wayfire Compositor Wayfire is a compositor that aims to be lightweight and customizable. Several features are available, and it brings back several elements from the previously released Compiz Fusion desktop. All of the parts look beautiful on modern hardware.
To get Wayfire up and running, begin by installing the required packages: # pkg install wayfire wf-shell alacritty swaylock-effects swayidle wlogout kanshi mako wlsunset The alacritty package provides a terminal emulator. Still, it is not strictly required, as other terminal emulators such as kitty and the XFCE-4 Terminal have been tested and verified to work under the Wayfire compositor. Wayfire configuration is relatively simple; it uses a file that should be reviewed for any customizations. To begin, copy the example file over to the runtime environment configuration directory and then edit the file: % mkdir ~/.config/wayfire % cp /usr/local/share/examples/wayfire/wayfire.ini ~/.config/wayfire The defaults for most users should be fine. Within the configuration file, items like the famous cube are pre-configured, and there are instructions to help with the available settings. A few primary settings of note include: [output] mode = 1920x1080@60000 position = 0,0 transform = normal scale = 1.000000 In this example from the configuration file, the screen's output will use the listed mode at the listed refresh rate; the mode is set in the form widthxheight@refresh_rate. The position places the output at the specified pixel location. The default should be fine for most users. Finally, transform sets a background transform, and scale will scale the output to the specified scale factor. The defaults for these options are generally acceptable; for more information, see the documentation. As mentioned, Wayland is new, and not all applications work with the protocol yet. At this time, sddm does not appear to support starting and managing compositors in Wayland. The swaylock utility has been used instead in these examples. The configuration file contains options to run swayidle and swaylock for idle and locking of the screen. This option to define the action to take when the system is idle is listed as: idle = swaylock And the lock timeout is configured using the following lines: [idle] toggle = <super> KEY_Z screensaver_timeout = 300 dpms_timeout = 600 The first option will lock the screen after 300 seconds, and after another 300, the screen will shut off through the dpms_timeout option. One final thing to note is the <super> key. Most of the configuration mentions this key, and it is the traditional Windows key on the keyboard. Most keyboards have this super key available; however, it should be remapped within this configuration file if it is not available. For example, to lock the screen, press and hold the super key, the shift key, and press the escape key. Unless the mappings have changed, this will execute the swaylock application. The default configuration for swaylock will show a grey screen; however, the application is highly customizable and well documented. In addition, since swaylock-effects is the version that was installed, there are several options available such as the blur effect, which can be seen using the following command: % swaylock --effect-blur 7x5 There is also the --clock parameter which will display a clock with the date and time on the lock screen. When x11/swaylock-effects was installed, a default pam.d configuration was included. It provides the default options that should be fine for most users. More advanced options are available; see the PAM documentation for more information. At this point, it is time to test Wayfire and see if it can start up on the system.
Just type the following command: % wayfire -c ~/.config/wayfire/wayfire.ini The compositor should now start and display a background image along with a menu bar at the top of the screen. Wayfire will attempt to list installed compatible applications for the desktop and present them in this drop-down menu; for example, if the XFCE-4 file manager is installed, it will show up there. If a specific application is compatible and valuable enough for a keyboard shortcut, it may be mapped to a keyboard sequence using the wayfire.ini configuration file. Wayfire also has a configuration tool named Wayfire Config Manager. It is located in the drop-down menu bar but may also be started through a terminal by issuing the following command: % wcm Various Wayfire configuration options, including the composite special effects, may be enabled, disabled, or configured through this application. In addition, for a more user-friendly experience, a background manager, panel, and docking application may be enabled in the configuration file: panel = wf-panel dock = wf-dock background = wf-background Changes made through wcm will overwrite custom changes in the wayfire.ini configuration file. It is highly recommended to back up the wayfire.ini file so any essential changes may be restored. Finally, the default launcher listed in the wayfire.ini is x11/wf-shell, which may be replaced with other panels if desired by the user. ### 6.3. The Hikari Compositor The Hikari compositor uses several concepts centered around productivity, such as sheets, workspaces, and more. In that way, it resembles a tiling window manager. Breaking this down, the compositor starts with a single workspace, which is similar to virtual desktops. Hikari uses a single workspace or virtual desktop for user interaction. The workspace is made up of several views, which are the working windows in the compositor grouped as either sheets or groups. Both sheets and groups are made up of a collection of views; again, the windows that are grouped together. When switching between sheets or groups, the active sheet or group will become known collectively as the workspace. The manual page breaks this down with more information on the function of each, but for this document, just consider a single workspace utilizing a single sheet. Hikari installation comprises a single package, x11-wm/hikari, and the alacritty terminal emulator: # pkg install hikari alacritty Other terminal emulators, such as kitty or the Plasma terminal, will function under Wayland. Users should experiment with their favorite terminal emulator to validate compatibility. Hikari uses a configuration file, hikari.conf, which could either be placed in the XDG_RUNTIME_DIR or specified on startup using the -c parameter. An autostart configuration file is not required but may make the migration to this compositor a little easier. To begin the configuration, create the Hikari configuration directory and copy over the configuration file for editing: % mkdir ~/.config/hikari % cp /usr/local/etc/hikari/hikari.conf ~/.config/hikari The configuration is broken out into various stanzas such as ui, outputs, layouts, and more. For most users, the defaults will function fine; however, some important changes should be made. For example, the $TERMINAL variable is normally not set within the user's environment. Changing this variable or altering the hikari.conf file to read: terminal = "/usr/local/bin/alacritty" will launch the alacritty terminal using the bound key press.
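As an alternative to editing hikari.conf, the environment variable mentioned above can be set in the login shell's startup file. The following is only a minimal sketch for an sh(1)-style shell, assuming alacritty is the desired terminal:
TERMINAL=/usr/local/bin/alacritty; export TERMINAL
Placing such a line in ~/.profile makes the setting available the next time the user logs in.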
While going through the configuration file, it should be noted that capital letters are used to map keys for the user. For example, the L in the terminal binding L+Return is actually the previously discussed super key or Windows logo key. Therefore, holding the L/super/Windows key and pressing Enter will open the specified terminal emulator with the default configuration. Mapping other keys to applications requires an action definition to be created. For this, the action item should be listed in the actions stanza, for example: actions { terminal = "/usr/local/bin/alacritty" browser = "/usr/local/bin/firefox" } Then an action may be mapped under the keyboard stanza, which is defined within the bindings stanza: bindings { keyboard { SNIP "L+Return" = action-terminal "L+b" = action-browser SNIP After Hikari is restarted, holding the Windows logo button and pressing the b key on the keyboard will start the web browser. The compositor does not have a menu bar, and it is recommended the user set up, at minimum, a terminal emulator before migrating. The manual page contains a great deal of documentation; it should be read before performing a full migration. Another positive aspect about Hikari is that, while migrating to the compositor, Hikari can be started in the Plasma and GNOME desktop environments, allowing for a test-drive before completely migrating. Locking the screen in Hikari is easy because a default pam.d configuration file and unlock utility are bundled with the package. The key binding for locking the screen is L (Windows logo key) + Shift + Backspace. It should be noted that all views not marked public will be hidden. These views will never accept input when locked, but beware of sensitive information being visible. For some users, it may be easier to migrate to a different screen locking utility such as swaylock-effects, discussed earlier in this chapter. To start Hikari, use the following command: % hikari -c ~/.config/hikari/hikari.conf ### 6.4. The Sway Compositor The Sway compositor is a tiling compositor that attempts to replace the i3 window manager. It should work with the user's current i3 configuration; however, new features may require some additional setup. In the forthcoming examples, a fresh installation without migrating any i3 configuration will be assumed. To install Sway and useful components, issue the following command as the root user: # pkg install sway swayidle swaylock-effects alacritty dmenu-wayland dmenu For a basic configuration file, issue the following commands and then edit the configuration file after it is copied: % mkdir ~/.config/sway % cp /usr/local/etc/sway/config ~/.config/sway The base configuration file has many defaults, which will be fine for most users. Several important changes should be made like the following: # Logo key. Use Mod1 for Alt. input * xkb_rules evdev set $mod Mod4 # Your preferred terminal emulator set $term alacritty set $lock swaylock -f -c 000000 output "My Workstation" mode 1366x768@60Hz position 1366 0 output * bg ~/wallpapers/mywallpaper.png stretch ### Idle configuration exec swayidle -w \ timeout 300 'swaylock -f -c 000000' \ timeout 600 'swaymsg "output * dpms off"' resume 'swaymsg "output * dpms on"' \ before-sleep 'swaylock -f -c 000000' In the previous example, the xkb rules for evdev(4) events are loaded, and the $mod key is set to the Windows logo key for the key bindings. Next, the terminal emulator was set to be alacritty, and a screen lock command was defined; more on this later.
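The same input stanza can typically also select a keyboard layout. The following line is only a sketch, assuming a French layout is wanted; the sway-input(5) manual page documents the full syntax:
input * xkb_layout "fr"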
The output keyword sets the mode, the position, and a background wallpaper, and Sway is also told to stretch this wallpaper to fill out the screen. Finally, swaylock is set to daemonize and lock the screen after a timeout of 300 seconds, placing the screen or monitor into sleep mode after 600 seconds. The locked background color of 000000, which is black, is also defined here. Using swaylock-effects, a clock may also be displayed with the --clock parameter. See the manual page for more options. The sway-output(5) manual page should also be reviewed; it includes a great deal of information on customizing the output options available. While in Sway, to bring up a menu of applications, hold the Windows logo key (mod) and press the d key. The menu may be navigated using the arrow keys on the keyboard. There is also a method to manipulate the layout of the bar and add a tray; read the sway-bar(5) manual page for more information. The default configuration adds a date and time to the upper right-hand corner. See the Bar stanza in the configuration file for an example. By default, the configuration does not include locking the screen outside of the idle timeout enabled in the example above. Creating a lock key binding requires adding the following line to the Key bindings section: # Lock the screen manually bindsym $mod+Shift+Return exec $lock Now the screen may be locked using the combination of holding the Windows logo key, pressing and holding shift, and finally pressing return. When Sway is installed, whether from a package or the FreeBSD Ports Collection, a default file for pam.d is installed. The default configuration should be acceptable for most users, but more advanced options are available. Read through the PAM documentation for more information. Finally, to exit Sway and return to the shell, hold the Windows logo key, the shift key, and press the e key. A prompt will be displayed with an option to exit Sway. During migration, Sway can be started through a terminal emulator on an X11 desktop such as Plasma. This makes testing different changes and key bindings a little easier prior to fully migrating to this compositor. To start Sway, issue the following command: % sway -c ~/.config/sway/config ### 6.5. Using Xwayland When installing Wayland, the Xwayland binary should have been installed unless Wayland was built without X11 support. If the /usr/local/bin/Xwayland file does not exist, install it using the following command: # pkg install xwayland-devel The development version of Xwayland is recommended and was most likely installed with the Wayland package. Each compositor has a method of enabling or disabling this feature. Once Xwayland has been installed, configure it within the chosen compositor. For Wayfire, the following line is required in the wayfire.ini file: xwayland = true For the Sway compositor, Xwayland should be enabled by default. Even so, it is recommended to manually add a configuration line in ~/.config/sway/config like the following: xwayland enable Finally, for Hikari, no changes are needed. Support for Xwayland is built in by default. To disable that support, rebuild the package from the Ports Collection and disable Xwayland support at that time. After these changes are made, start the compositor at the command line and execute a terminal from the key bindings. Within this terminal, issue the env command and search for the DISPLAY variables.
If the compositor was able to properly start the Xwayland X server, these environment variables should look similar to the following: % env | grep DISPLAY WAYLAND_DISPLAY=wayland-1 DISPLAY=:0 In this output, there is a default Wayland display and a display set for the Xwayland server. Another method to verify that Xwayland is functioning properly is to install and test the small x11/xeyes package and check the output. If the xeyes application starts and the eyes follow the mouse pointer, Xwayland is functioning properly. If an error such as the following is displayed, something happened during the Xwayland initialization and it may need to be reinstalled: Error: Cannot open display wayland-0 A security feature of Wayland is that, without running an X server, there is not another network listener. Once Xwayland is enabled, this security feature is no longer applicable to the system. For some compositors, such as Wayfire, Xwayland may not start properly. As such, env will show the following information for the DISPLAY environment variables: % env | grep DISPLAY DISPLAY=wayland-1 WAYLAND_DISPLAY=wayland-1 Even though Xwayland was installed and configured, X11 applications will not start, giving a display issue. To work around this, verify that there is already an instance of Xwayland using a UNIX socket through these two methods. First, check the output from sockstat and search for X11-unix: % sockstat | grep x11 There should be something similar to the following information: trhodes Xwayland 2734 8 stream /tmp/.X11-unix/X0 trhodes Xwayland 2734 9 stream /tmp/.X11-unix/X0 trhodes Xwayland 2734 10 stream /tmp/.X11-unix/X0 trhodes Xwayland 2734 27 stream /tmp/.X11-unix/X0 trhodes Xwayland 2734 28 stream /tmp/.X11-unix/X0 This suggests the existence of an X11 socket. This can be further verified by attempting to execute Xwayland manually within a terminal emulator running under the compositor: % Xwayland If an X11 socket is already available, the following error should be presented to the user: (EE) Fatal server error: (EE) Server is already active for display 0 If this server is no longer running, remove /tmp/.X0-lock and start again. (EE) Since there is an active X display available using display zero, the environment variable was just set improperly. To fix this, change the DISPLAY environment variable to :0 and attempt to execute the application again. The following example uses mail/claws-mail as the application which needs the Xwayland service: % export DISPLAY=:0 After this change, the mail/claws-mail application should now start using Xwayland and function as expected. ### 6.6. Remote Desktop Using VNC Earlier in this document it was noted that Wayland does not provide the same X server style access as Xorg provides. Instead, users are free to pick and choose a remote desktop protocol such as RDP or VNC. The FreeBSD Ports Collection includes wayvnc, which supports wlroots-based compositors such as the ones discussed here. This application may be installed using: # pkg install wayvnc Unlike some other packages, wayvnc does not come with a configuration file. Thankfully, the manual page documents the important options and they may be extrapolated into a simple configuration file: address=0.0.0.0 enable_auth=true private_key_file=/path/to/key.pem certificate_file=/path/to/cert.pem The key files will need to be generated, and it is highly recommended they be used for increased security of the connection.
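One common way to generate a self-signed key and certificate is with openssl(1). This is only a sketch; whether wayvnc imposes additional requirements on these files is not covered here, and the paths are placeholders that should match the private_key_file and certificate_file entries above:
% openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes -keyout /path/to/key.pem -out /path/to/cert.pem -subj "/CN=$(hostname)"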
When invoked, wayvnc will search for the configuration file in ~/.config/wayvnc/config. This can be overridden using the -C configuration_file option when starting the server. Thus, to start the wayvnc server, issue the following command: % wayvnc -C ~/.config/wayvnc/config At the time of this writing, there is no rc.d script to start wayvnc on system initialization. If that functionality is desired, a local startup file will need to be created. This is probably a feature request for the port maintainer. ### 6.7. Wayland Login Manager While several login managers exist and are slowly migrating to Wayland, one option is the x11/ly text user interface (TUI) manager. Needing minimal configuration, ly will start Sway, Wayfire, and others by presenting a login window on system initialization. To install ly, issue the following command: # pkg install ly There will be some configuration hints presented; the important steps are to add the following lines to /etc/gettytab: Ly:\ :lo=/usr/local/bin/ly:\ :al=root: And then modify the ttyv1 line in /etc/ttys to match the following line: ttyv1 "/usr/libexec/getty Ly" xterm onifexists secure After a system reboot, a login should appear. To configure specific settings, such as the language, edit /usr/local/etc/ly/config.ini. At minimum, this file should have the designated tty that was previously specified in /etc/ttys. If setting ttyv0 up as the login terminal, it may be required to press the alt and F1 keys to properly see the login window. When the login window appears, using the left and right arrows will cycle through the different supported window managers. ### 6.8. Useful Utilities One useful Wayland utility which all compositors can make use of is waybar. While Wayfire does come with a launch menu, an easy-to-use and fast taskbar is a good accessory for any compositor or desktop manager, and waybar is a Wayland-compatible taskbar that is fast and easy to configure. To install the package and a supporting audio control utility, issue the following command: # pkg install pavucontrol waybar To create the configuration directory and copy over a default configuration file, execute the following commands: % mkdir ~/.config/waybar % cp /usr/local/etc/xdg/waybar/config ~/.config/waybar The lavalauncher utility provides a launch bar for various applications. There is no example configuration file provided with the package, so the following actions must be taken: % mkdir ~/.config/lavalauncher An example configuration file that includes buttons for Firefox and Thunderbird, and is placed on the right, is below: global-settings { watch-config-file = true; } bar { output = eDP-1; position = bottom; background-colour = "#202020"; # Condition for the default configuration set. condition-resolution = wider-than-high; config { position = right; } button { image-path = /usr/local/lib/firefox/browser/chrome/icons/default/default48.png; command[mouse-left] = /usr/local/bin/firefox; } button { image-path = /usr/local/share/pixmaps/thunderbird.png; command[mouse-left] = /usr/local/bin/thunderbird; } } Now that the basics have been covered, this part of the book discusses some frequently used features of FreeBSD. These chapters: • Introduce popular and useful desktop applications: browsers, productivity tools, document viewers, and more. • Explain the process of building a customized FreeBSD kernel to enable extra functionality. • Describe the print system in detail, both for desktop and network-connected printer setups. • Show how to run Linux applications on the FreeBSD system.
Some of these chapters recommend prior reading, and this is noted in the synopsis at the beginning of each chapter. ## Chapter 7. Desktop Applications ### 7.1. Synopsis While FreeBSD is popular as a server for its performance and stability, it is also suited for day-to-day use as a desktop. With over 36000 applications available as FreeBSD packages or ports, it is easy to build a customized desktop that runs a wide variety of desktop applications. This chapter demonstrates how to install numerous desktop applications, including web browsers, productivity software, document viewers, and financial software. Users who prefer to install a pre-built desktop version of FreeBSD rather than configuring one from scratch should refer to GhostBSD, MidnightBSD or NomadBSD. Readers of this chapter should know how to: For information on how to configure a multimedia environment, refer to Multimedia. ### 7.2. Browsers FreeBSD does not come with a pre-installed web browser. Instead, the www category of the Ports Collection contains many browsers which can be installed as a package or compiled from the Ports Collection. The KDE and GNOME desktop environments include their own HTML browser. Refer to "Desktop Environments" for more information on how to set up these complete desktops. Some lightweight browsers include www/dillo2, www/links, and www/w3m. This section demonstrates how to install the following popular web browsers and indicates if the application is resource-heavy, takes time to compile from ports, or has any major dependencies.

| Application Name | Resources Needed | Installation from Ports | Notes |
| --- | --- | --- | --- |
| Firefox | medium | heavy | FreeBSD, Linux®, and localized versions are available |
| Konqueror | medium | heavy | Requires KDE libraries |
| Chromium | medium | heavy | Requires Gtk+ |

#### 7.2.1. Firefox Firefox is an open source browser that features a standards-compliant HTML display engine, tabbed browsing, popup blocking, extensions, improved security, and more. Firefox is based on the Mozilla codebase. To install the package of the latest release version of Firefox, type: # pkg install firefox To instead install Firefox Extended Support Release (ESR) version, use: # pkg install firefox-esr The Ports Collection can instead be used to compile the desired version of Firefox from source code. This example builds www/firefox, where firefox can be replaced with the ESR or localized version to install. # cd /usr/ports/www/firefox # make install clean #### 7.2.2. Konqueror Konqueror is more than a web browser as it is also a file manager and a multimedia viewer. It supports WebKit as well as its own KHTML engine. WebKit is a rendering engine used by many modern browsers including Chromium. Konqueror can be installed as a package by typing: # pkg install konqueror To install from the Ports Collection: # cd /usr/ports/x11-fm/konqueror/ # make install clean #### 7.2.3. Chromium Chromium is an open source browser project that aims to build a safer, faster, and more stable web browsing experience. Chromium features tabbed browsing, popup blocking, extensions, and much more. Chromium is the open source project upon which the Google Chrome web browser is based. Chromium can be installed as a package by typing: # pkg install chromium Alternatively, Chromium can be compiled from source using the Ports Collection: # cd /usr/ports/www/chromium # make install clean The executable for Chromium is /usr/local/bin/chrome, not /usr/local/bin/chromium. ### 7.3.
Productivity When it comes to productivity, users often look for an office suite or an easy-to-use word processor. While some desktop environments like KDE provide an office suite, there is no default productivity package. Several office suites and graphical word processors are available for FreeBSD, regardless of the installed window manager. This section demonstrates how to install the following popular productivity software and indicates if the application is resource-heavy, takes time to compile from ports, or has any major dependencies.

| Application Name | Resources Needed | Installation from Ports | Major Dependencies |
| --- | --- | --- | --- |
| Calligra | light | heavy | KDE |
| AbiWord | light | light | Gtk+ or GNOME |
| The Gimp | light | heavy | Gtk+ |
| Apache OpenOffice | heavy | huge | JDK™ and Mozilla |
| LibreOffice | somewhat heavy | huge | Gtk+, or KDE/GNOME, or JDK™ |

#### 7.3.1. Calligra The KDE desktop environment includes an office suite which can be installed separately from KDE. Calligra includes standard components that can be found in other office suites. Words is the word processor, Sheets is the spreadsheet program, Stage manages slide presentations, and Karbon is used to draw graphical documents. In FreeBSD, editors/calligra can be installed as a package or a port. To install the package: # pkg install calligra If the package is not available, use the Ports Collection instead: # cd /usr/ports/editors/calligra # make install clean #### 7.3.2. AbiWord AbiWord is a free word processing program similar in look and feel to Microsoft® Word. It is fast, contains many features, and is user-friendly. AbiWord can import or export many file formats, including some proprietary ones like Microsoft® .rtf. To install the AbiWord package: # pkg install abiword If the package is not available, it can be compiled from the Ports Collection: # cd /usr/ports/editors/abiword # make install clean #### 7.3.3. The GIMP For image authoring or picture retouching, The GIMP provides a sophisticated image manipulation program. It can be used as a simple paint program or as a quality photo retouching suite. It supports a large number of plugins and features a scripting interface. The GIMP can read and write a wide range of file formats and supports interfaces with scanners and tablets. To install the package: # pkg install gimp Alternately, use the Ports Collection: # cd /usr/ports/graphics/gimp # make install clean The graphics category (freebsd.org/ports/graphics/) of the Ports Collection contains several GIMP-related plugins, help files, and user manuals. #### 7.3.4. Apache OpenOffice Apache OpenOffice is an open source office suite which is developed under the wing of the Apache Software Foundation's Incubator. It includes all of the applications found in a complete office productivity suite: a word processor, spreadsheet, presentation manager, and drawing program. Its user interface is similar to other office suites, and it can import and export in various popular file formats. It is available in a number of different languages and internationalization has been extended to interfaces, spell checkers, and dictionaries. The word processor of Apache OpenOffice uses a native XML file format for increased portability and flexibility. The spreadsheet program features a macro language which can be interfaced with external databases. Apache OpenOffice is stable and runs natively on Windows®, Solaris™, Linux®, FreeBSD, and Mac OS® X. More information about Apache OpenOffice can be found at openoffice.org. For FreeBSD specific information refer to porting.openoffice.org/freebsd/.
To install the Apache OpenOffice package: # pkg install apache-openoffice Once the package is installed, type the following command to launch Apache OpenOffice: % openoffice-X.Y.Z where X.Y.Z is the version number of the installed version of Apache OpenOffice. The first time Apache OpenOffice launches, some questions will be asked and a .openoffice.org folder will be created in the user's home directory. If the desired Apache OpenOffice package is not available, compiling the port is still an option. However, this requires a lot of disk space and a fairly long time to compile: # cd /usr/ports/editors/openoffice-4 # make install clean To build a localized version, replace the previous command with: # make LOCALIZED_LANG=your_language install clean Replace your_language with the correct language ISO-code. A list of supported language codes is available in files/Makefile.localized, located in the port's directory. #### 7.3.5. LibreOffice LibreOffice is a free software office suite developed by documentfoundation.org. It is compatible with other major office suites and available on a variety of platforms. It is a rebranded fork of Apache OpenOffice and includes applications found in a complete office productivity suite: a word processor, spreadsheet, presentation manager, drawing program, database management program, and a tool for creating and editing mathematical formulæ. It is available in a number of different languages and internationalization has been extended to interfaces, spell checkers, and dictionaries. The word processor of LibreOffice uses a native XML file format for increased portability and flexibility. The spreadsheet program features a macro language which can be interfaced with external databases. LibreOffice is stable and runs natively on Windows®, Linux®, FreeBSD, and Mac OS® X. More information about LibreOffice can be found at libreoffice.org. To install the English version of the LibreOffice package: # pkg install libreoffice The editors category (freebsd.org/ports/editors/) of the Ports Collection contains several localizations for LibreOffice. When installing a localized package, replace libreoffice with the name of the localized package. Once the package is installed, type the following command to run LibreOffice: % libreoffice During the first launch, some questions will be asked and a .libreoffice folder will be created in the user's home directory. If the desired LibreOffice package is not available, compiling the port is still an option. However, this requires a lot of disk space and a fairly long time to compile. This example compiles the English version: # cd /usr/ports/editors/libreoffice # make install clean To build a localized version, cd into the port directory of the desired language. Supported languages can be found in the editors category (freebsd.org/ports/editors/) of the Ports Collection. ### 7.4. Document Viewers Some new document formats have gained popularity since the advent of UNIX® and the viewers they require may not be available in the base system. This section demonstrates how to install the following document viewers:

| Application Name | Resources Needed | Installation from Ports | Major Dependencies |
| --- | --- | --- | --- |
| Xpdf | light | light | FreeType |
| gv | light | light | Xaw3d |
| Geeqie | light | light | Gtk+ or GNOME |
| ePDFView | light | light | Gtk+ |
| Okular | light | heavy | KDE |

#### 7.4.1. Xpdf For users that prefer a small FreeBSD PDF viewer, Xpdf provides a light-weight and efficient viewer which requires few resources. It uses the standard X fonts and does not require any additional toolkits.
To install the Xpdf package: # pkg install xpdf If the package is not available, use the Ports Collection: # cd /usr/ports/graphics/xpdf # make install clean Once the installation is complete, launch xpdf and use the right mouse button to activate the menu. #### 7.4.2. gv gv is a PostScript® and PDF viewer. It is based on ghostview, but has a nicer look as it is based on the Xaw3d widget toolkit. gv has many configurable features, such as orientation, paper size, scale, and anti-aliasing. Almost any operation can be performed with either the keyboard or the mouse. To install gv as a package: # pkg install gv If a package is unavailable, use the Ports Collection: # cd /usr/ports/print/gv # make install clean #### 7.4.3. Geeqie Geeqie is a fork from the unmaintained GQView project, in an effort to move development forward and integrate the existing patches. Geeqie is an image manager which supports viewing a file with a single click, launching an external editor, and thumbnail previews. It also features a slideshow mode and some basic file operations, making it easy to manage image collections and to find duplicate files. Geeqie supports full screen viewing and internationalization. To install the Geeqie package: # pkg install geeqie If the package is not available, use the Ports Collection: # cd /usr/ports/graphics/geeqie # make install clean #### 7.4.4. ePDFView ePDFView is a lightweight PDF document viewer that only uses the Gtk+ and Poppler libraries. It is currently under development, but already opens most PDF files (even encrypted ones), saves copies of documents, and supports printing using CUPS. To install ePDFView as a package: # pkg install epdfview If a package is unavailable, use the Ports Collection: # cd /usr/ports/graphics/epdfview # make install clean #### 7.4.5. Okular Okular is a universal document viewer based on KPDF for KDE. It can open many document formats, including PDF, PostScript®, DjVu, CHM, XPS, and ePub. To install Okular as a package: # pkg install okular If a package is unavailable, use the Ports Collection: # cd /usr/ports/graphics/okular # make install clean ### 7.5. Finance For managing personal finances on a FreeBSD desktop, some powerful and easy-to-use applications can be installed. Some are compatible with widespread file formats, such as the formats used by Quicken and Excel. This section covers these programs:

| Application Name | Resources Needed | Installation from Ports | Major Dependencies |
| --- | --- | --- | --- |
| GnuCash | light | heavy | GNOME |
| Gnumeric | light | heavy | GNOME |
| KMyMoney | light | heavy | KDE |

#### 7.5.1. GnuCash GnuCash is part of the GNOME effort to provide user-friendly, yet powerful, applications to end-users. GnuCash can be used to keep track of income and expenses, bank accounts, and stocks. It features an intuitive interface while remaining professional. GnuCash provides a smart register, a hierarchical system of accounts, and many keyboard accelerators and auto-completion methods. It can split a single transaction into several more detailed pieces. GnuCash can import and merge Quicken QIF files. It also handles most international date and currency formats. To install the GnuCash package: # pkg install gnucash If the package is not available, use the Ports Collection: # cd /usr/ports/finance/gnucash # make install clean #### 7.5.2. Gnumeric Gnumeric is a spreadsheet program developed by the GNOME community. It features convenient automatic guessing of user input according to the cell format with an autofill system for many sequences.
It can import files in a number of popular formats, including Excel, Lotus 1-2-3, and Quattro Pro. It has a large number of built-in functions and allows all of the usual cell formats such as number, currency, date, time, and much more. To install Gnumeric as a package: # pkg install gnumeric If the package is not available, use the Ports Collection: # cd /usr/ports/math/gnumeric # make install clean #### 7.5.3. KMyMoney KMyMoney is a personal finance application created by the KDE community. KMyMoney aims to provide the important features found in commercial personal finance manager applications. It also highlights ease-of-use and proper double-entry accounting among its features. KMyMoney imports from standard Quicken QIF files, tracks investments, handles multiple currencies, and provides a wealth of reports. To install KMyMoney as a package: # pkg install kmymoney-kde4 If the package is not available, use the Ports Collection: # cd /usr/ports/finance/kmymoney-kde4 # make install clean ## Chapter 8. Multimedia ### 8.1. Synopsis FreeBSD supports a wide variety of sound cards, allowing users to enjoy high fidelity output from a FreeBSD system. This includes the ability to record and play back audio in the MPEG Audio Layer 3 (MP3), Waveform Audio File (WAV), Ogg Vorbis, and other formats. The FreeBSD Ports Collection contains many applications for editing recorded audio, adding sound effects, and controlling attached MIDI devices. FreeBSD also supports the playback of video files and DVDs. The FreeBSD Ports Collection contains applications to encode, convert, and playback various video media. This chapter describes how to configure sound cards, video playback, TV tuner cards, and scanners on FreeBSD. It also describes some of the applications which are available for using these devices. After reading this chapter, you will know how to: • Configure a sound card on FreeBSD. • Troubleshoot the sound setup. • Playback and encode MP3s and other audio. • Prepare a FreeBSD system for video playback. • Play DVDs, .mpg, and .avi files. • Rip CD and DVD content into files. • Configure a TV card. • Install and set up MythTV on FreeBSD. • Configure an image scanner. Before reading this chapter, you should: ### 8.2. Setting Up the Sound Card Before beginning the configuration, determine the model of the sound card and the chip it uses. FreeBSD supports a wide variety of sound cards. Check the supported audio devices list of the Hardware Notes to see if the card is supported and which FreeBSD driver it uses. In order to use the sound device, its device driver must be loaded. The easiest way is to load a kernel module for the sound card with kldload(8). This example loads the driver for a built-in audio chipset based on the Intel specification: # kldload snd_hda To load the driver at boot time instead, add the following line to /boot/loader.conf: snd_hda_load="YES" Other available sound modules are listed in /boot/defaults/loader.conf. When unsure which driver to use, load the snd_driver module: # kldload snd_driver This is a metadriver which loads all of the most common sound drivers and can be used to speed up the search for the correct driver. It is also possible to load all sound drivers by adding the metadriver to /boot/loader.conf. To determine which driver was selected for the sound card after loading the snd_driver metadriver, type cat /dev/sndstat. #### 8.2.1. Configuring a Custom Kernel with Sound Support This section is for users who prefer to statically compile in support for the sound card in a custom kernel.
For more information about recompiling a kernel, refer to Configuring the FreeBSD Kernel. When using a custom kernel to provide sound support, make sure that the audio framework driver exists in the custom kernel configuration file: device sound Next, add support for the sound card. To continue the example of the built-in audio chipset based on the Intel specification from the previous section, use the following line in the custom kernel configuration file: device snd_hda Be sure to read the manual page of the driver for the device name to use for the driver. Non-PnP ISA sound cards may require the IRQ and I/O port settings of the card to be added to /boot/device.hints. During the boot process, loader(8) reads this file and passes the settings to the kernel. For example, an old Creative SoundBlaster® 16 ISA non-PnP card will use the snd_sbc(4) driver in conjunction with snd_sb16. For this card, the following lines must be added to the kernel configuration file: device snd_sbc device snd_sb16 If the card uses the 0x220 I/O port and IRQ 5, these lines must also be added to /boot/device.hints: hint.sbc.0.at="isa" hint.sbc.0.port="0x220" hint.sbc.0.irq="5" hint.sbc.0.drq="1" hint.sbc.0.flags="0x15" The syntax used in /boot/device.hints is described in sound(4) and the manual page for the driver of the sound card. The settings shown above are the defaults. In some cases, the IRQ or other settings may need to be changed to match the card. Refer to snd_sbc(4) for more information about this card. #### 8.2.2. Testing Sound After loading the required module or rebooting into the custom kernel, the sound card should be detected. To confirm, run dmesg | grep pcm. This example is from a system with a built-in Conexant CX20590 chipset: pcm0: <NVIDIA (0x001c) (HDMI/DP 8ch)> at nid 5 on hdaa0 pcm1: <NVIDIA (0x001c) (HDMI/DP 8ch)> at nid 6 on hdaa0 pcm2: <Conexant CX20590 (Analog 2.0+HP/2.0)> at nid 31,25 and 35,27 on hdaa1 The status of the sound card may also be checked using this command: # cat /dev/sndstat FreeBSD Audio Driver (newpcm: 64bit 2009061500/amd64) Installed devices: pcm0: <NVIDIA (0x001c) (HDMI/DP 8ch)> (play) pcm1: <NVIDIA (0x001c) (HDMI/DP 8ch)> (play) pcm2: <Conexant CX20590 (Analog 2.0+HP/2.0)> (play/rec) default The output will vary depending upon the sound card. If no pcm devices are listed, double-check that the correct device driver was loaded or compiled into the kernel. The next section lists some common problems and their solutions. If all goes well, the sound card should now work in FreeBSD. If the CD or DVD drive is properly connected to the sound card, one can insert an audio CD in the drive and play it with cdcontrol(1): % cdcontrol -f /dev/acd0 play 1 Audio CDs have specialized encodings which means that they should not be mounted using mount(8). Various applications, such as audio/workman, provide a friendlier interface. The audio/mpg123 port can be installed to listen to MP3 audio files. Another quick way to test the card is to send data to /dev/dsp: % cat filename > /dev/dsp where filename can be any type of file. This command should produce some noise, confirming that the sound card is working. The /dev/dsp* device nodes will be created automatically as needed. When not in use, they do not exist and will not appear in the output of ls(1). #### 8.2.3. Setting up Bluetooth Sound Devices Connecting to a Bluetooth device is out of scope for this chapter. Refer to “Bluetooth” for more information. 
To get Bluetooth sound sink working with FreeBSD's sound system, users have to install audio/virtual_oss first: # pkg install virtual_oss audio/virtual_oss requires cuse to be loaded into the kernel: # kldload cuse To load cuse during system startup, run this command: # echo 'cuse_load=yes' >> /boot/loader.conf To use headphones as a sound sink with audio/virtual_oss, users need to create a virtual device after connecting to a Bluetooth audio device: # virtual_oss -C 2 -c 2 -r 48000 -b 16 -s 768 -R /dev/null -P /dev/bluetooth/headphones -d dsp headphones in this example is a hostname from /etc/bluetooth/hosts. BT_ADDR could be used instead. #### 8.2.4. Troubleshooting Sound Common Error Messages lists some common error messages and their solutions: Table 8. Common Error Messages

| Error | Solution |
| --- | --- |
| sb_dspwr(XX) timed out | The I/O port is not set correctly. |
| bad irq XX | The IRQ is set incorrectly. Make sure that the set IRQ and the sound IRQ are the same. |
| xxx: gus pcm not attached, out of memory | There is not enough available memory to use the device. |
| xxx: can't open /dev/dsp! | Type fstat \| grep dsp to check if another application is holding the device open. Noteworthy troublemakers are esound and KDE's sound support. |

Modern graphics cards often come with their own sound driver for use with HDMI. This sound device is sometimes enumerated before the sound card meaning that the sound card will not be used as the default playback device. To check if this is the case, run dmesg and look for pcm. The output looks something like this: ... hdac0: HDA Driver Revision: 20100226_0142 hdac1: HDA Driver Revision: 20100226_0142 hdac0: HDA Codec #0: NVidia (Unknown) hdac0: HDA Codec #1: NVidia (Unknown) hdac0: HDA Codec #2: NVidia (Unknown) hdac0: HDA Codec #3: NVidia (Unknown) pcm0: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 0 nid 1 on hdac0 pcm1: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 1 nid 1 on hdac0 pcm2: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 2 nid 1 on hdac0 pcm3: <HDA NVidia (Unknown) PCM #0 DisplayPort> at cad 3 nid 1 on hdac0 hdac1: HDA Codec #2: Realtek ALC889 pcm4: <HDA Realtek ALC889 PCM #0 Analog> at cad 2 nid 1 on hdac1 pcm5: <HDA Realtek ALC889 PCM #1 Analog> at cad 2 nid 1 on hdac1 pcm6: <HDA Realtek ALC889 PCM #2 Digital> at cad 2 nid 1 on hdac1 pcm7: <HDA Realtek ALC889 PCM #3 Digital> at cad 2 nid 1 on hdac1 ... In this example, the graphics card (NVidia) has been enumerated before the sound card (Realtek ALC889). To use the sound card as the default playback device, change hw.snd.default_unit to the unit that should be used for playback: # sysctl hw.snd.default_unit=n where n is the number of the sound device to use. In this example, it should be 4. Make this change permanent by adding the following line to /etc/sysctl.conf: hw.snd.default_unit=4 Programs using audio/pulseaudio might need to restart the audio/pulseaudio daemon for the changes in hw.snd.default_unit to take effect. Alternatively, audio/pulseaudio settings can be changed on the fly. pacmd(1) opens a command line connection to the audio/pulseaudio daemon: # pacmd Welcome to PulseAudio 14.2! Use "help" for usage information. >>> The following command changes the default sink to card number 4 as in the previous example: set-default-sink 4 Do not use the exit command to exit the command line interface. That will kill the audio/pulseaudio daemon. Use Ctrl+D instead. #### 8.2.5. Utilizing Multiple Sound Sources It is often desirable to have multiple sources of sound that are able to play simultaneously.
FreeBSD uses "Virtual Sound Channels" to multiplex the sound card’s playback by mixing sound in the kernel. Three sysctl(8) knobs are available for configuring virtual channels: # sysctl dev.pcm.0.play.vchans=4 # sysctl dev.pcm.0.rec.vchans=4 # sysctl hw.snd.maxautovchans=4 This example allocates four virtual channels, which is a practical number for everyday use. Both dev.pcm.0.play.vchans=4 and dev.pcm.0.rec.vchans=4 are configurable after a device has been attached and represent the number of virtual channels pcm0 has for playback and recording. Since the pcm module can be loaded independently of the hardware drivers, hw.snd.maxautovchans indicates how many virtual channels will be given to an audio device when it is attached. Refer to pcm(4) for more information. The number of virtual channels for a device cannot be changed while it is in use. First, close any programs using the device, such as music players or sound daemons. The correct pcm device will automatically be allocated transparently to a program that requests /dev/dsp0. #### 8.2.6. Setting Default Values for Mixer Channels The default values for the different mixer channels are hardcoded in the source code of the pcm(4) driver. While sound card mixer levels can be changed using mixer(8) or third-party applications and daemons, this is not a permanent solution. To instead set default mixer values at the driver level, define the appropriate values in /boot/device.hints, as seen in this example: hint.pcm.0.vol="50" This will set the volume channel to a default value of 50 when the pcm(4) module is loaded. ### 8.3. MP3 Audio This section describes some MP3 players available for FreeBSD, how to rip audio CD tracks, and how to encode and decode MP3s. #### 8.3.1. MP3 Players A popular graphical MP3 player is Audacious. It supports Winamp skins and additional plugins. The interface is intuitive, with a playlist, graphic equalizer, and more. Those familiar with Winamp will find Audacious simple to use. On FreeBSD, Audacious can be installed from the multimedia/audacious port or package. Audacious is a descendant of XMMS. The audio/mpg123 package or port provides an alternative, command-line MP3 player. Once installed, specify the MP3 file to play on the command line. If the system has multiple audio devices, the sound device can also be specified: # mpg123 -a /dev/dsp1.0 Foobar-GreatestHits.mp3 High Performance MPEG 1.0/2.0/2.5 Audio Player for Layers 1, 2 and 3 version 1.18.1; written and copyright by Michael Hipp and others free software (LGPL) without any warranty but with best wishes Playing MPEG stream from Foobar-GreatestHits.mp3 ... MPEG 1.0 layer III, 128 kbit/s, 44100 Hz joint-stereo Additional MP3 players are available in the FreeBSD Ports Collection. #### 8.3.2. Ripping CD Audio Tracks Before encoding a CD or CD track to MP3, the audio data on the CD must be ripped to the hard drive. This is done by copying the raw CD Digital Audio (CDDA) data to WAV files. The cdda2wav tool, which is installed with the sysutils/cdrtools suite, can be used to rip audio information from CDs. With the audio CD in the drive, the following command can be issued as root to rip an entire CD into individual, per track, WAV files: # cdda2wav -D 0,1,0 -B In this example, the -D 0,1,0 indicates the SCSI device 0,1,0 containing the CD to rip. Use cdrecord -scanbus to determine the correct device parameters for the system. 
To rip individual tracks, use -t to specify the track: # cdda2wav -D 0,1,0 -t 7 To rip a range of tracks, such as track one to seven, specify a range: # cdda2wav -D 0,1,0 -t 1+7 To rip from an ATAPI (IDE) CDROM drive, specify the device name in place of the SCSI unit numbers. For example, to rip track 7 from an IDE drive: # cdda2wav -D /dev/acd0 -t 7 Alternately, dd can be used to extract audio tracks on ATAPI drives, as described in “Duplicating Audio CDs”. #### 8.3.3. Encoding and Decoding MP3s Lame is a popular MP3 encoder which can be installed from the audio/lame port. Due to patent issues, a package is not available. The following command will convert the ripped WAV file audio01.wav to audio01.mp3: # lame -h -b 128 --tt "Foo Song Title" --ta "FooBar Artist" --tl "FooBar Album" \ --ty "2014" --tc "Ripped and encoded by Foo" --tg "Genre" audio01.wav audio01.mp3 The specified 128 kbits is a standard MP3 bitrate while the 160 and 192 bitrates provide higher quality. The higher the bitrate, the larger the size of the resulting MP3. The -h turns on the "higher quality but a little slower" mode. The options beginning with --t indicate ID3 tags, which usually contain song information, to be embedded within the MP3 file. Additional encoding options can be found in the lame manual page. In order to burn an audio CD from MP3s, they must first be converted to a non-compressed file format. XMMS can be used to convert to the WAV format, while mpg123 can be used to convert to the raw Pulse-Code Modulation (PCM) audio data format. To convert audio01.mp3 using mpg123, specify the name of the PCM file: # mpg123 -s audio01.mp3 > audio01.pcm To use XMMS to convert a MP3 to WAV format, use these steps: Procedure: Converting to WAV Format in XMMS 1. Launch XMMS. 2. Right-click the window to bring up the XMMS menu. 3. Select Preferences under Options. 4. Change the Output Plugin to "Disk Writer Plugin". 5. Press Configure. 6. Enter or browse to a directory to write the uncompressed files to. 7. Load the MP3 file into XMMS as usual, with volume at 100% and EQ settings turned off. 8. Press Play. The XMMS will appear as if it is playing the MP3, but no music will be heard. It is actually playing the MP3 to a file. 9. When finished, be sure to set the default Output Plugin back to what it was before in order to listen to MP3s again. Both the WAV and PCM formats can be used with cdrecord. When using WAV files, there will be a small tick sound at the beginning of each track. This sound is the header of the WAV file. The audio/sox port or package can be used to remove the header: % sox -t wav -r 44100 -s -w -c 2 track.wav track.raw Refer to “Creating and Using CD Media” for more information on using a CD burner in FreeBSD. ### 8.4. Video Playback Before configuring video playback, determine the model and chipset of the video card. While Xorg supports a wide variety of video cards, not all provide good playback performance. To obtain a list of extensions supported by the Xorg server using the card, run xdpyinfo while Xorg is running. It is a good idea to have a short MPEG test file for evaluating various players and options. Since some DVD applications look for DVD media in /dev/dvd by default, or have this device name hardcoded in them, it might be useful to make a symbolic link to the proper device: # ln -sf /dev/cd0 /dev/dvd Due to the nature of devfs(5), manually created links will not persist after a system reboot. 
In order to recreate the symbolic link automatically when the system boots, add the following line to /etc/devfs.conf: link cd0 dvd DVD decryption invokes certain functions that require write permission to the DVD device. To enhance the shared memory Xorg interface, it is recommended to increase the values of these sysctl(8) variables: kern.ipc.shmmax=67108864 kern.ipc.shmall=32768 #### 8.4.1. Determining Video Capabilities There are several possible ways to display video under Xorg and what works is largely hardware dependent. Each method described below will have varying quality across different hardware. Common video interfaces include: 1. Xorg: normal output using shared memory. 2. XVideo: an extension to the Xorg interface which allows video to be directly displayed in drawable objects through a special acceleration. This extension provides good quality playback even on low-end machines. The next section describes how to determine if this extension is running. 3. SDL: the Simple Directmedia Layer is a porting layer for many operating systems, allowing cross-platform applications to be developed which make efficient use of sound and graphics. SDL provides a low-level abstraction to the hardware which can sometimes be more efficient than the Xorg interface. On FreeBSD, SDL can be installed using the devel/sdl20 package or port. 4. DGA: the Direct Graphics Access is an Xorg extension which allows a program to bypass the Xorg server and directly alter the framebuffer. As it relies on a low-level memory mapping, programs using it must be run as root. The DGA extension can be tested and benchmarked using dga(1). When dga is running, it changes the colors of the display whenever a key is pressed. To quit, press q. 5. SVGAlib: a low level console graphics layer. ##### 8.4.1.1. 
XVideo To check whether this extension is running, use xvinfo: % xvinfo XVideo is supported for the card if the result is similar to: X-Video Extension version 2.2 screen #0 number of ports: 1 port base: 43 operations supported: PutImage supported visuals: depth 16, visualID 0x22 depth 16, visualID 0x23 number of attributes: 5 "XV_COLORKEY" (range 0 to 16777215) client settable attribute client gettable attribute (current value is 2110) "XV_BRIGHTNESS" (range -128 to 127) client settable attribute client gettable attribute (current value is 0) "XV_CONTRAST" (range 0 to 255) client settable attribute client gettable attribute (current value is 128) "XV_SATURATION" (range 0 to 255) client settable attribute client gettable attribute (current value is 128) "XV_HUE" (range -180 to 180) client settable attribute client gettable attribute (current value is 0) maximum XvImage size: 1024 x 1024 Number of image formats: 7 id: 0x32595559 (YUY2) guid: 59555932-0000-0010-8000-00aa00389b71 bits per pixel: 16 number of planes: 1 type: YUV (packed) id: 0x32315659 (YV12) guid: 59563132-0000-0010-8000-00aa00389b71 bits per pixel: 12 number of planes: 3 type: YUV (planar) id: 0x30323449 (I420) guid: 49343230-0000-0010-8000-00aa00389b71 bits per pixel: 12 number of planes: 3 type: YUV (planar) id: 0x36315652 (RV16) guid: 52563135-0000-0000-0000-000000000000 bits per pixel: 16 number of planes: 1 type: RGB (packed) depth: 0 red, green, blue masks: 0x1f, 0x3e0, 0x7c00 id: 0x35315652 (RV15) guid: 52563136-0000-0000-0000-000000000000 bits per pixel: 16 number of planes: 1 type: RGB (packed) depth: 0 red, green, blue masks: 0x1f, 0x7e0, 0xf800 id: 0x31313259 (Y211) guid: 59323131-0000-0010-8000-00aa00389b71 bits per pixel: 6 number of planes: 3 type: YUV (packed) id: 0x0 guid: 00000000-0000-0000-0000-000000000000 bits per pixel: 0 number of planes: 0 type: RGB (packed) depth: 1 red, green, blue masks: 0x0, 0x0, 0x0 The formats listed, such as YUV2 and YUV12, are not present with every implementation of XVideo and their absence may hinder some players. If the result instead looks like: X-Video Extension version 2.2 screen #0 no adaptors present XVideo is probably not supported for the card. This means that it will be more difficult for the display to meet the computational demands of rendering video, depending on the video card and processor. #### 8.4.2. Ports and Packages Dealing with Video This section introduces some of the software available from the FreeBSD Ports Collection which can be used for video playback. ##### 8.4.2.1. MPlayer and MEncoder MPlayer is a command-line video player with an optional graphical interface which aims to provide speed and flexibility. Other graphical front-ends to MPlayer are available from the FreeBSD Ports Collection. MPlayer can be installed using the multimedia/mplayer package or port. Several compile options are available and a variety of hardware checks occur during the build process. For these reasons, some users prefer to build the port rather than install the package. When compiling the port, the menu options should be reviewed to determine the type of support to compile into the port. If an option is not selected, MPlayer will not be able to display that type of video format. Use the arrow keys and spacebar to select the required formats. When finished, press Enter to continue the port compile and installation. By default, the package or port will build the mplayer command line utility and the gmplayer graphical utility. 
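For users who are satisfied with the default build options, installing the precompiled package avoids the interactive compile altogether. This is an ordinary pkg(8) command; the package name matches the port's basename:

# pkg install mplayer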
To encode videos, compile the multimedia/mencoder port. Due to licensing restrictions, a package is not available for MEncoder. The first time MPlayer is run, it will create ~/.mplayer in the user’s home directory. This subdirectory contains default versions of the user-specific configuration files. This section describes only a few common uses. Refer to mplayer(1) for a complete description of its numerous options. To play the file testfile.avi, specify the video interfaces with -vo, as seen in the following examples: % mplayer -vo xv testfile.avi % mplayer -vo sdl testfile.avi % mplayer -vo x11 testfile.avi # mplayer -vo dga testfile.avi # mplayer -vo 'sdl:dga' testfile.avi It is worth trying all of these options, as their relative performance depends on many factors and will vary significantly with hardware. To play a DVD, replace testfile.avi with dvd://N -dvd-device DEVICE, where N is the title number to play and DEVICE is the device node for the DVD. For example, to play title 3 from /dev/dvd: # mplayer -vo xv dvd://3 -dvd-device /dev/dvd The default DVD device can be defined during the build of the MPlayer port by including the WITH_DVD_DEVICE=/path/to/desired/device option. By default, the device is /dev/cd0. More details can be found in the port’s Makefile.options. To stop, pause, advance, and so on, use a keybinding. To see the list of keybindings, run mplayer -h or read mplayer(1). Additional playback options include -fs -zoom, which engages fullscreen mode, and -framedrop, which helps performance. Each user can add commonly used options to their ~/.mplayer/config like so: vo=xv fs=yes zoom=yes mplayer can be used to rip a DVD title to a .vob. To dump the second title from a DVD: # mplayer -dumpstream -dumpfile out.vob dvd://2 -dvd-device /dev/dvd The output file, out.vob, will be in MPEG format. Anyone wishing to obtain a high level of expertise with UNIX® video should consult mplayerhq.hu/DOCS as it is technically informative. This documentation should be considered as required reading before submitting any bug reports. Before using mencoder, it is a good idea to become familiar with the options described at mplayerhq.hu/DOCS/HTML/en/mencoder.html. There are innumerable ways to improve quality, lower bitrate, and change formats, and some of these options may make the difference between good or bad performance. Improper combinations of command line options can yield output files that are unplayable even by mplayer. Here is an example of a simple copy: % mencoder input.avi -oac copy -ovc copy -o output.avi To rip to a file, use -dumpfile with mplayer. To convert input.avi to the MPEG4 codec with MPEG3 audio encoding, first install the audio/lame port. Due to licensing restrictions, a package is not available. Once installed, type: % mencoder input.avi -oac mp3lame -lameopts br=192 \ -ovc lavc -lavcopts vcodec=mpeg4:vhq -o output.avi This will produce output playable by applications such as mplayer and xine. input.avi can be replaced with dvd://1 -dvd-device /dev/dvd and run as root to re-encode a DVD title directly. Since it may take a few tries to get the desired result, it is recommended to instead dump the title to a file and to work on the file. ##### 8.4.2.2. The xine Video Player xine is a video player with a reusable base library and a modular executable which can be extended with plugins. It can be installed using the multimedia/xine package or port. In practice, xine requires either a fast CPU with a fast video card, or support for the XVideo extension. 
The xine video player performs best on XVideo interfaces. By default, the xine player starts a graphical user interface. The menus can then be used to open a specific file. Alternatively, xine may be invoked from the command line by specifying the name of the file to play: % xine -g -p mymovie.avi ##### 8.4.2.3. The Transcode Utilities Transcode provides a suite of tools for re-encoding video and audio files. Transcode can be used to merge video files or repair broken files using command line tools with stdin/stdout stream interfaces. In FreeBSD, Transcode can be installed using the multimedia/transcode package or port. Many users prefer to compile the port as it provides a menu of compile options for specifying the support and codecs to compile in. If an option is not selected, Transcode will not be able to encode that format. Use the arrow keys and spacebar to select the required formats. When finished, press Enter to continue the port compile and installation. This example demonstrates how to convert a DivX file into a PAL MPEG-1 file (PAL VCD): % transcode -i input.avi -V --export_prof vcd-pal -o output_vcd % mplex -f 1 -o output_vcd.mpg output_vcd.m1v output_vcd.mpa The resulting MPEG file, output_vcd.mpg, is ready to be played with MPlayer. The file can be burned on a CD media to create a video CD using a utility such as multimedia/vcdimager or sysutils/cdrdao. In addition to the manual page for transcode, refer to transcoding.org/cgi-bin/transcode for further information and examples. ### 8.5. TV Cards TV cards can be used to watch broadcast or cable TV on a computer. Most cards accept composite video via an RCA or S-video input and some cards include a FM radio tuner. FreeBSD provides support for PCI-based TV cards using a Brooktree Bt848/849/878/879 video capture chip with the bktr(4) driver. This driver supports most Pinnacle PCTV video cards. Before purchasing a TV card, consult bktr(4) for a list of supported tuners. In order to use the card, the bktr(4) driver must be loaded. To automate this at boot time, add the following line to /boot/loader.conf: bktr_load="YES" Alternatively, one can statically compile support for the TV card into a custom kernel. In that case, add the following lines to the custom kernel configuration file: device bktr device iicbus device iicbb device smbus These additional devices are necessary as the card components are interconnected via an I2C bus. Then, build and install a new kernel. To test that the tuner is correctly detected, reboot the system. The TV card should appear in the boot messages, as seen in this example: bktr0: <BrookTree 848A> mem 0xd7000000-0xd7000fff irq 10 at device 10.0 on pci0 iicbb0: <I2C bit-banging driver> on bti2c0 iicbus0: <Philips I2C bus> on iicbb0 master-only iicbus1: <Philips I2C bus> on iicbb0 master-only smbus0: <System Management Bus> on bti2c0 bktr0: Pinnacle/Miro TV, Philips SECAM tuner. The messages will differ according to the hardware. If necessary, it is possible to override some of the detected parameters using sysctl(8) or custom kernel configuration options. For example, to force the tuner to a Philips SECAM tuner, add the following line to a custom kernel configuration file: options OVERRIDE_TUNER=6 or, use sysctl(8): # sysctl hw.bt848.tuner=6 Refer to bktr(4) for a description of the available sysctl(8) parameters and kernel options. #### 8.5.2. 
Useful Applications

To use the TV card, install one of the following applications:

• multimedia/fxtv provides TV-in-a-window and image/audio/video capture capabilities.
• multimedia/xawtv is another TV application with similar features.
• audio/xmradio provides an application for using the FM radio tuner of a TV card.

More applications are available in the FreeBSD Ports Collection.

#### 8.5.3. Troubleshooting

If any problems are encountered with the TV card, check that the video capture chip and the tuner are supported by bktr(4) and that the right configuration options were used. For more support or to ask questions about supported TV cards, refer to the FreeBSD multimedia mailing list.

### 8.6. MythTV

MythTV is a popular, open source Personal Video Recorder (PVR) application. This section demonstrates how to install and set up MythTV on FreeBSD. Refer to mythtv.org/wiki for more information on how to use MythTV.

MythTV requires a frontend and a backend. These components can either be installed on the same system or on different machines.

The frontend can be installed on FreeBSD using the multimedia/mythtv-frontend package or port. Xorg must also be installed and configured as described in The X Window System. Ideally, this system has a video card that supports X-Video Motion Compensation (XvMC) and, optionally, a Linux Infrared Remote Control (LIRC)-compatible remote.

To install both the backend and the frontend on FreeBSD, use the multimedia/mythtv package or port. A MySQL™ database server is also required and should automatically be installed as a dependency. Optionally, this system should have a tuner card and sufficient storage to hold recorded data.

#### 8.6.1. Hardware

MythTV uses Video for Linux (V4L) to access video input devices such as encoders and tuners. In FreeBSD, MythTV works best with USB DVB-S/C/T cards as they are well supported by the multimedia/webcamd package or port which provides a V4L userland application. Any Digital Video Broadcasting (DVB) card supported by webcamd should work with MythTV. A list of known working cards can be found at wiki.freebsd.org/WebcamCompat.

Drivers are also available for Hauppauge cards in the multimedia/pvr250 and multimedia/pvrxxx ports, but they provide a non-standard driver interface that does not work with versions of MythTV greater than 0.23. Due to licensing restrictions, no packages are available and these two ports must be compiled.

The wiki.freebsd.org/HTPC page contains a list of all available DVB drivers.

#### 8.6.2. Setting up the MythTV Backend

To install MythTV using binary packages:

# pkg install mythtv

Alternatively, to install from the Ports Collection:

# cd /usr/ports/multimedia/mythtv
# make install

Once installed, set up the MythTV database:

# mysql -uroot -p < /usr/local/share/mythtv/database/mc.sql

Then, configure the backend:

# mythtv-setup

Finally, start the backend:

# sysrc mythbackend_enable=yes
# service mythbackend start

### 8.7. Image Scanners

In FreeBSD, access to image scanners is provided by SANE (Scanner Access Now Easy), which is available in the FreeBSD Ports Collection. SANE will also use some FreeBSD device drivers to provide access to the scanner hardware.

FreeBSD supports both SCSI and USB scanners. Depending upon the scanner interface, different device drivers are required. Be sure the scanner is supported by SANE prior to performing any configuration. Refer to http://www.sane-project.org/sane-supported-devices.html for more information about supported scanners.
This chapter describes how to determine if the scanner has been detected by FreeBSD. It then provides an overview of how to configure and use SANE on a FreeBSD system. #### 8.7.1. Checking the Scanner The GENERIC kernel includes the device drivers needed to support USB scanners. Users with a custom kernel should ensure that the following lines are present in the custom kernel configuration file: device usb device uhci device ohci device ehci device xhci To determine if the USB scanner is detected, plug it in and use dmesg to determine whether the scanner appears in the system message buffer. If it does, it should display a message similar to this: ugen0.2: <EPSON> at usbus0 In this example, an EPSON Perfection® 1650 USB scanner was detected on /dev/ugen0.2. If the scanner uses a SCSI interface, it is important to know which SCSI controller board it will use. Depending upon the SCSI chipset, a custom kernel configuration file may be needed. The GENERIC kernel supports the most common SCSI controllers. Refer to /usr/src/sys/conf/NOTES to determine the correct line to add to a custom kernel configuration file. In addition to the SCSI adapter driver, the following lines are needed in a custom kernel configuration file: device scbus device pass Verify that the device is displayed in the system message buffer: pass2 at aic0 bus 0 target 2 lun 0 pass2: <AGFA SNAPSCAN 600 1.10> Fixed Scanner SCSI-2 device pass2: 3.300MB/s transfers If the scanner was not powered-on at system boot, it is still possible to manually force detection by performing a SCSI bus scan with camcontrol: # camcontrol rescan all Re-scan of bus 0 was successful Re-scan of bus 1 was successful Re-scan of bus 2 was successful Re-scan of bus 3 was successful The scanner should now appear in the SCSI devices list: # camcontrol devlist <IBM DDRS-34560 S97B> at scbus0 target 5 lun 0 (pass0,da0) <IBM DDRS-34560 S97B> at scbus0 target 6 lun 0 (pass1,da1) <AGFA SNAPSCAN 600 1.10> at scbus1 target 2 lun 0 (pass3) <PHILIPS CDD3610 CD-R/RW 1.00> at scbus2 target 0 lun 0 (pass2,cd0) Refer to scsi(4) and camcontrol(8) for more details about SCSI devices on FreeBSD. #### 8.7.2. SANE Configuration The SANE system provides the access to the scanner via backends (graphics/sane-backends). Refer to http://www.sane-project.org/sane-supported-devices.html to determine which backend supports the scanner. A graphical scanning interface is provided by third party applications like Kooka (graphics/kooka) or XSane (graphics/xsane). SANE’s backends are enough to test the scanner. To install the backends from binary package: # pkg install sane-backends Alternatively, to install from the Ports Collection # cd /usr/ports/graphics/sane-backends # make install clean After installing the graphics/sane-backends port or package, use sane-find-scanner to check the scanner detection by the SANE system: # sane-find-scanner -q found SCSI scanner "AGFA SNAPSCAN 600 1.10" at /dev/pass3 The output should show the interface type of the scanner and the device node used to attach the scanner to the system. The vendor and the product model may or may not appear. Some USB scanners require firmware to be loaded. Refer to sane-find-scanner(1) and sane(7) for details. Next, check if the scanner will be identified by a scanning frontend. The SANE backends include scanimage which can be used to list the devices and perform an image acquisition. Use -L to list the scanner devices. 
The first example is for a SCSI scanner and the second is for a USB scanner:

# scanimage -L
device 'snapscan:/dev/pass3' is a AGFA SNAPSCAN 600 flatbed scanner
# scanimage -L
device 'epson2:libusb:000:002' is a Epson GT-8200 flatbed scanner

In this second example, epson2 is the backend name and libusb:000:002 means /dev/ugen0.2 is the device node used by the scanner.

If scanimage is unable to identify the scanner, this message will appear:

# scanimage -L

No scanners were identified. If you were expecting something different,
check that the scanner is plugged in, turned on and detected by the
sane-find-scanner tool (if appropriate). Please read the documentation
which came with this software (README, FAQ, manpages).

If this happens, edit the backend configuration file in /usr/local/etc/sane.d/ and define the scanner device used. For example, if the undetected scanner model is an EPSON Perfection® 1650 and it uses the epson2 backend, edit /usr/local/etc/sane.d/epson2.conf. When editing, add a line specifying the interface and the device node used. In this case, add the following line:

usb /dev/ugen0.2

Save the edits and verify that the scanner is identified with the right backend name and the device node:

# scanimage -L
device 'epson2:libusb:000:002' is a Epson GT-8200 flatbed scanner

Once scanimage -L sees the scanner, the configuration is complete and the scanner is now ready to use.

While scanimage can be used to perform an image acquisition from the command line, it is often preferable to use a graphical interface to perform image scanning. Applications like Kooka or XSane are popular scanning frontends. They offer advanced features such as various scanning modes, color correction, and batch scans. XSane is also usable as a GIMP plugin.

#### 8.7.3. Scanner Permissions

In order to have access to the scanner, a user needs read and write permissions to the device node used by the scanner. In the previous example, the USB scanner uses the device node /dev/ugen0.2 which is really a symlink to the real device node /dev/usb/0.2.0. The symlink and the device node are owned, respectively, by the wheel and operator groups. While adding the user to these groups will allow access to the scanner, it is considered insecure to add a user to wheel. A better solution is to create a group and make the scanner device accessible to members of this group.

This example creates a group called usb:

# pw groupadd usb

Then, make the /dev/ugen0.2 symlink and the /dev/usb/0.2.0 device node accessible to the usb group with write permissions of 0660 or 0666 by adding the following lines to /etc/devfs.rules:

[system=5]
add path ugen0.2 mode 0660 group usb
add path usb/0.2.0 mode 0666 group usb

Since the device node can change with the addition or removal of devices, one may instead want to give access to all USB devices using this ruleset:

[system=5]
add path 'ugen*' mode 0660 group usb
add path 'usb/*' mode 0666 group usb

Next, enable the ruleset in /etc/rc.conf:

devfs_system_ruleset="system"

Then, restart the devfs(8) system:

# service devfs restart

Finally, add the users to usb in order to allow access to the scanner:

# pw groupmod usb -m joe

For more details refer to pw(8).

## Chapter 9. Configuring the FreeBSD Kernel

### 9.1. Synopsis

The kernel is the core of the FreeBSD operating system. It is responsible for managing memory, enforcing security controls, networking, disk access, and much more. While much of FreeBSD is dynamically configurable, it is still occasionally necessary to configure and compile a custom kernel.
After reading this chapter, you will know:

• When to build a custom kernel.
• How to take a hardware inventory.
• How to customize a kernel configuration file.
• How to use the kernel configuration file to create and build a new kernel.
• How to install the new kernel.
• How to troubleshoot if things go wrong.

All of the commands listed in the examples in this chapter should be executed as root.

### 9.2. Why Build a Custom Kernel?

Traditionally, FreeBSD used a monolithic kernel. The kernel was one large program, supported a fixed list of devices, and in order to change the kernel’s behavior, one had to compile and then reboot into a new kernel.

Today, most of the functionality in the FreeBSD kernel is contained in modules which can be dynamically loaded and unloaded from the kernel as necessary. This allows the running kernel to adapt immediately to new hardware and for new functionality to be brought into the kernel. This is known as a modular kernel.

Occasionally, it is still necessary to perform static kernel configuration. Sometimes the needed functionality is so tied to the kernel that it cannot be made dynamically loadable. Some security environments prevent the loading and unloading of kernel modules and require that only needed functionality is statically compiled into the kernel.

Building a custom kernel is often a rite of passage for advanced BSD users. This process, while time consuming, can provide benefits to the FreeBSD system. Unlike the GENERIC kernel, which must support a wide range of hardware, a custom kernel can be stripped down to only provide support for that computer’s hardware. This has a number of benefits, such as:

• Faster boot time. Since the kernel will only probe the hardware on the system, the time it takes the system to boot can decrease.
• Lower memory usage. A custom kernel often uses less memory than the GENERIC kernel by omitting unused features and device drivers. This is important because the kernel code remains resident in physical memory at all times, preventing that memory from being used by applications. For this reason, a custom kernel is useful on a system with a small amount of RAM.
• Additional hardware support. A custom kernel can add support for devices which are not present in the GENERIC kernel.

Before building a custom kernel, consider the reason for doing so. If there is a need for specific hardware support, it may already exist as a module.

Kernel modules exist in /boot/kernel and may be dynamically loaded into the running kernel using kldload(8). Most kernel drivers have a loadable module and manual page. For example, the ath(4) wireless network driver has the following information in its manual page:

Alternatively, to load the driver as a module at boot time, place the following line in loader.conf(5):

if_ath_load="YES"

Adding if_ath_load="YES" to /boot/loader.conf will load this module dynamically at boot time.

In some cases, there is no associated module in /boot/kernel. This is mostly true for certain subsystems.

### 9.3. Finding the System Hardware

Before editing the kernel configuration file, it is recommended to perform an inventory of the machine’s hardware. On a dual-boot system, the inventory can be created from the other operating system. For example, Microsoft®'s Device Manager contains information about installed devices. Some versions of Microsoft® Windows® have a System icon which can be used to access Device Manager.

If FreeBSD is the only installed operating system, use dmesg(8) to determine the hardware that was found and listed during the boot probe.
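For example, the boot probe output can be searched for a particular driver name. This is only a sketch, using psm as the driver name and relying on the copy of the boot messages kept in /var/run/dmesg.boot:

% dmesg | grep psm
% grep psm /var/run/dmesg.boot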
Most device drivers on FreeBSD have a manual page which lists the hardware supported by that driver. For example, the following lines indicate that the psm(4) driver found a mouse: psm0: <PS/2 Mouse> irq 12 on atkbdc0 psm0: [GIANT-LOCKED] psm0: model Generic PS/2 mouse, device ID 0 Since this hardware exists, this driver should not be removed from a custom kernel configuration file. If the output of dmesg does not display the results of the boot probe output, instead read the contents of /var/run/dmesg.boot. Another tool for finding hardware is pciconf(8), which provides more verbose output. For example: % pciconf -lv ath0@pci0:3:0:0: class=0x020000 card=0x058a1014 chip=0x1014168c rev=0x01 hdr=0x00 vendor = 'Atheros Communications Inc.' device = 'AR5212 Atheros AR5212 802.11abg wireless' class = network subclass = ethernet This output shows that the ath driver located a wireless Ethernet device. The -k flag of man(1) can be used to provide useful information. For example, it can be used to display a list of manual pages which contain a particular device brand or name: # man -k Atheros ath(4) - Atheros IEEE 802.11 wireless network driver ath_hal(4) - Atheros Hardware Access Layer (HAL) Once the hardware inventory list is created, refer to it to ensure that drivers for installed hardware are not removed as the custom kernel configuration is edited. ### 9.4. The Configuration File In order to create a custom kernel configuration file and build a custom kernel, the full FreeBSD source tree must first be installed. If /usr/src/ does not exist or it is empty, source has not been installed. Source can be installed with Git using the instructions in “Using Git”. Once source is installed, review the contents of /usr/src/sys. This directory contains a number of subdirectories, including those which represent the following supported architectures: amd64, i386, powerpc, and sparc64. Everything inside a particular architecture’s directory deals with that architecture only and the rest of the code is machine independent code common to all platforms. Each supported architecture has a conf subdirectory which contains the GENERIC kernel configuration file for that architecture. Do not make edits to GENERIC. Instead, copy the file to a different name and make edits to the copy. The convention is to use a name with all capital letters. When maintaining multiple FreeBSD machines with different hardware, it is a good idea to name it after the machine’s hostname. This example creates a copy, named MYKERNEL, of the GENERIC configuration file for the amd64 architecture: # cd /usr/src/sys/amd64/conf # cp GENERIC MYKERNEL MYKERNEL can now be customized with any ASCII text editor. The default editor is vi, though an easier editor for beginners, called ee, is also installed with FreeBSD. The format of the kernel configuration file is simple. Each line contains a keyword that represents a device or subsystem, an argument, and a brief description. Any text after a # is considered a comment and ignored. To remove kernel support for a device or subsystem, put a # at the beginning of the line representing that device or subsystem. Do not add or remove a # for any line that you do not understand. It is easy to remove support for a device or option and end up with a broken kernel. For example, if the ata(4) driver is removed from the kernel configuration file, a system using ATA disk drivers may not boot. When in doubt, just leave support in the kernel. 
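To illustrate this format, the following lines are written in the style of an amd64 GENERIC configuration. They are only a sketch; the exact entries, descriptions, and defaults vary between FreeBSD releases:

cpu     HAMMER
ident   MYKERNEL
options SCHED_ULE      # ULE scheduler
device  ahci           # AHCI-compatible SATA controllers
#device ath            # Atheros wireless NICs, disabled by the leading #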
In addition to the brief descriptions provided in this file, additional descriptions are contained in NOTES, which can be found in the same directory as GENERIC for that architecture. For architecture independent options, refer to /usr/src/sys/conf/NOTES. When finished customizing the kernel configuration file, save a backup copy to a location outside of /usr/src.Alternately, keep the kernel configuration file elsewhere and create a symbolic link to the file:# cd /usr/src/sys/amd64/conf # mkdir /root/kernels # cp GENERIC /root/kernels/MYKERNEL # ln -s /root/kernels/MYKERNEL An include directive is available for use in configuration files. This allows another configuration file to be included in the current one, making it easy to maintain small changes relative to an existing file. If only a small number of additional options or drivers are required, this allows a delta to be maintained with respect to GENERIC, as seen in this example: include GENERIC ident MYKERNEL options IPFIREWALL options DUMMYNET options IPFIREWALL_DEFAULT_TO_ACCEPT options IPDIVERT Using this method, the local configuration file expresses local differences from a GENERIC kernel. As upgrades are performed, new features added to GENERIC will also be added to the local kernel unless they are specifically prevented using nooptions or nodevice. A comprehensive list of configuration directives and their descriptions may be found in config(5). To build a file which contains all available options, run the following command as root:# cd /usr/src/sys/arch/conf && make LINT ### 9.5. Building and Installing a Custom Kernel Once the edits to the custom configuration file have been saved, the source code for the kernel can be compiled using the following steps: Procedure: Building a Kernel 1. Change to this directory: # cd /usr/src 2. Compile the new kernel by specifying the name of the custom kernel configuration file: # make buildkernel KERNCONF=MYKERNEL 3. Install the new kernel associated with the specified kernel configuration file. This command will copy the new kernel to /boot/kernel/kernel and save the old kernel to /boot/kernel.old/kernel: # make installkernel KERNCONF=MYKERNEL 4. Shutdown the system and reboot into the new kernel. If something goes wrong, refer to The kernel does not boot. By default, when a custom kernel is compiled, all kernel modules are rebuilt. To update a kernel faster or to build only custom modules, edit /etc/make.conf before starting to build the kernel. For example, this variable specifies the list of modules to build instead of using the default of building all modules: MODULES_OVERRIDE = linux acpi Alternately, this variable lists which modules to exclude from the build process: WITHOUT_MODULES = linux acpi sound Additional variables are available. Refer to make.conf(5) for details. ### 9.6. If Something Goes Wrong There are four categories of trouble that can occur when building a custom kernel: config fails If config fails, it will print the line number that is incorrect. As an example, for the following message, make sure that line 17 is typed correctly by comparing it to GENERIC or NOTES: config: line 17: syntax error make fails If make fails, it is usually due to an error in the kernel configuration file which is not severe enough for config to catch. Review the configuration, and if the problem is not apparent, send an email to the FreeBSD general questions mailing list which contains the kernel configuration file. 
The kernel does not boot If the new kernel does not boot or fails to recognize devices, do not panic! Fortunately, FreeBSD has an excellent mechanism for recovering from incompatible kernels. Simply choose the kernel to boot from at the FreeBSD boot loader. This can be accessed when the system boot menu appears by selecting the "Escape to a loader prompt" option. At the prompt, type boot kernel.old, or the name of any other kernel that is known to boot properly. After booting with a good kernel, check over the configuration file and try to build it again. One helpful resource is /var/log/messages which records the kernel messages from every successful boot. Also, dmesg(8) will print the kernel messages from the current boot. When troubleshooting a kernel make sure to keep a copy of a kernel that is known to work, such as GENERIC. This is important because every time a new kernel is installed, kernel.old is overwritten with the last installed kernel, which may or may not be bootable. As soon as possible, move the working kernel by renaming the directory containing the good kernel:# mv /boot/kernel /boot/kernel.bad # mv /boot/kernel.good /boot/kernel The kernel works, but ps(1) does not If the kernel version differs from the one that the system utilities have been built with, for example, a kernel built from -CURRENT sources is installed on a -RELEASE system, many system status commands like ps(1) and vmstat(8) will not work. To fix this, recompile and install a world built with the same version of the source tree as the kernel. It is never a good idea to use a different version of the kernel than the rest of the operating system. ## Chapter 10. Printing Putting information on paper is a vital function, despite many attempts to eliminate it. Printing has two basic components. The data must be delivered to the printer, and must be in a form that the printer can understand. ### 10.1. Quick Start Basic printing can be set up quickly. The printer must be capable of printing plain ASCII text. For printing to other types of files, see Filters. 1. Create a directory to store files while they are being printed: # mkdir -p /var/spool/lpd/lp # chown daemon:daemon /var/spool/lpd/lp # chmod 770 /var/spool/lpd/lp 2. As root, create /etc/printcap with these contents: lp:\ lp=/dev/unlpt0:\ (1) sh:\ mx#0:\ sd=/var/spool/lpd/lp:\ lf=/var/log/lpd-errs: 1 This line is for a printer connected to a USB port. For a printer connected to a parallel or "printer" port, use: :lp=/dev/lpt0:\ For a printer connected directly to a network, use: :lp=:rm=network-printer-name:rp=raw:\ Replace network-printer-name with the DNS host name of the network printer. 3. Enable LPD by editing /etc/rc.conf, adding this line: lpd_enable="YES" Start the service: # service lpd start Starting lpd. 4. Print a test: # printf "1. This printer can print.\n2. This is the second line.\n" | lpr If both lines do not start at the left border, but "stairstep" instead, see Preventing Stairstepping on Plain Text Printers. Text files can now be printed with lpr. Give the filename on the command line, or pipe output directly into lpr. % lpr textfile.txt % ls -lh | lpr ### 10.2. Printer Connections Printers are connected to computer systems in a variety of ways. Small desktop printers are usually connected directly to a computer’s USB port. Older printers are connected to a parallel or "printer" port. Some printers are directly connected to a network, making it easy for multiple computers to share them. A few printers use a rare serial port connection. 
FreeBSD can communicate with all of these types of printers. USB USB printers can be connected to any available USB port on the computer. When FreeBSD detects a USB printer, two device entries are created: /dev/ulpt0 and /dev/unlpt0. Data sent to either device will be relayed to the printer. After each print job, ulpt0 resets the USB port. Resetting the port can cause problems with some printers, so the unlpt0 device is usually used instead. unlpt0 does not reset the USB port at all. Parallel (IEEE-1284) The parallel port device is /dev/lpt0. This device appears whether a printer is attached or not, it is not autodetected. Vendors have largely moved away from these "legacy" ports, and many computers no longer have them. Adapters can be used to connect a parallel printer to a USB port. With such an adapter, the printer can be treated as if it were actually a USB printer. Devices called print servers can also be used to connect parallel printers directly to a network. Serial (RS-232) Serial ports are another legacy port, rarely used for printers except in certain niche applications. Cables, connectors, and required wiring vary widely. For serial ports built into a motherboard, the serial device name is /dev/cuau0 or /dev/cuau1. Serial USB adapters can also be used, and these will appear as /dev/cuaU0. Several communication parameters must be known to communicate with a serial printer. The most important are baud rate or BPS (Bits Per Second) and parity. Values vary, but typical serial printers use a baud rate of 9600 and no parity. Network Network printers are connected directly to the local computer network. The DNS hostname of the printer must be known. If the printer is assigned a dynamic address by DHCP, DNS should be dynamically updated so that the host name always has the correct IP address. Network printers are often given static IP addresses to avoid this problem. Most network printers understand print jobs sent with the LPD protocol. A print queue name can also be specified. Some printers process data differently depending on which queue is used. For example, a raw queue prints the data unchanged, while the text queue adds carriage returns to plain text. Many network printers can also print data sent directly to port 9100. #### 10.2.1. Summary Wired network connections are usually the easiest to set up and give the fastest printing. For direct connection to the computer, USB is preferred for speed and simplicity. Parallel connections work but have limitations on cable length and speed. Serial connections are more difficult to configure. Cable wiring differs between models, and communication parameters like baud rate and parity bits must add to the complexity. Fortunately, serial printers are rare. ### 10.3. Common Page Description Languages Data sent to a printer must be in a language that the printer can understand. These languages are called Page Description Languages, or PDLs. Many applications from the Ports Collection and FreeBSD utilities produce PostScript® output. This table shows the utilities available to convert that into other common PDLs: For the easiest printing, choose a printer that supports PostScript®. Printers that support PCL are the next preferred. With print/ghostscript9-base, these printers can be used as if they understood PostScript® natively. Printers that support PostScript® or PCL directly almost always support direct printing of plain ASCII text files also. Line-based printers like typical inkjets usually do not support PostScript® or PCL. 
They often can print plain ASCII text files. print/ghostscript9-base supports the PDLs used by some of these printers. However, printing an entire graphic-based page on these printers is often very slow due to the large amount of data to be transferred and printed. Host-based printers are often more difficult to set up. Some cannot be used at all because of proprietary PDLs. Avoid these printers when possible. Descriptions of many PDLs can be found at http://www.undocprint.org/formats/page_description_languages. The particular PDL used by various models of printers can be found at http://www.openprinting.org/printers. ### 10.4. Direct Printing For occasional printing, files can be sent directly to a printer device without any setup. For example, a file called sample.txt can be sent to a USB printer: # cp sample.txt /dev/unlpt0 Direct printing to network printers depends on the abilities of the printer, but most accept print jobs on port 9100, and nc(1) can be used with them. To print the same file to a printer with the DNS hostname of netlaser: # nc netlaser 9100 < sample.txt ### 10.5. LPD (Line Printer Daemon) Printing a file in the background is called spooling. A spooler allows the user to continue with other programs on the computer without waiting for the printer to slowly complete the print job. FreeBSD includes a spooler called lpd(8). Print jobs are submitted with lpr(1). #### 10.5.1. Initial Setup A directory for storing print jobs is created, ownership is set, and the permissions are set to prevent other users from viewing the contents of those files: # mkdir -p /var/spool/lpd/lp # chown daemon:daemon /var/spool/lpd/lp # chmod 770 /var/spool/lpd/lp Printers are defined in /etc/printcap. An entry for each printer includes details like a name, the port where it is attached, and various other settings. Create /etc/printcap with these contents: lp:\ (1) :lp=/dev/unlpt0:\ (2) :sh:\ (3) :mx#0:\ (4) :sd=/var/spool/lpd/lp:\ (5) :lf=/var/log/lpd-errs: (6) 1 The name of this printer. lpr(1) sends print jobs to the lp printer unless another printer is specified with -P, so the default printer should be named lp. 2 The device where the printer is connected. Replace this line with the appropriate one for the connection type shown here. 3 Suppress the printing of a header page at the start of a print job. 4 Do not limit the maximum size of a print job. 5 The path to the spooling directory for this printer. Each printer uses its own spooling directory. 6 The log file where errors on this printer will be reported. After creating /etc/printcap, use chkprintcap(8) to test it for errors: # chkprintcap Fix any reported problems before continuing. Enable lpd(8) in /etc/rc.conf: lpd_enable="YES" Start the service: # service lpd start #### 10.5.2. Printing with lpr(1) Documents are sent to the printer with lpr. A file to be printed can be named on the command line or piped into lpr. These two commands are equivalent, sending the contents of doc.txt to the default printer: % lpr doc.txt % cat doc.txt | lpr Printers can be selected with -P. To print to a printer called laser: % lpr -Plaser doc.txt #### 10.5.3. Filters The examples shown so far have sent the contents of a text file directly to the printer. As long as the printer understands the content of those files, output will be printed correctly. Some printers are not capable of printing plain text, and the input file might not even be plain text. Filters allow files to be translated or processed. 
The typical use is to translate one type of input, like plain text, into a form that the printer can understand, like PostScript® or PCL. Filters can also be used to provide additional features, like adding page numbers or highlighting source code to make it easier to read. The filters discussed here are input filters or text filters. These filters convert the incoming file into different forms. Use su(1) to become root before creating the files. Filters are specified in /etc/printcap with the if= identifier. To use /usr/local/libexec/lf2crlf as a filter, modify /etc/printcap like this: lp:\ :lp=/dev/unlpt0:\ :sh:\ :mx#0:\ :sd=/var/spool/lpd/lp:\ :if=/usr/local/libexec/lf2crlf:\ (1) :lf=/var/log/lpd-errs: 1 if= identifies the input filter that will be used on incoming text. The backslash line continuation characters at the end of the lines in printcap entries reveal that an entry for a printer is really just one long line with entries delimited by colon characters. An earlier example can be rewritten as a single less-readable line:lp:lp=/dev/unlpt0:sh:mx#0:sd=/var/spool/lpd/lp:if=/usr/local/libexec/lf2crlf:lf=/var/log/lpd-errs: ##### 10.5.3.1. Preventing Stairstepping on Plain Text Printers Typical FreeBSD text files contain only a single line feed character at the end of each line. These lines will "stairstep" on a standard printer: A printed file looks like the steps of a staircase scattered by the wind A filter can convert the newline characters into carriage returns and newlines. The carriage returns make the printer return to the left after each line. Create /usr/local/libexec/lf2crlf with these contents: #!/bin/sh CR=$'\r' /usr/bin/sed -e "s/$/${CR}/g" Set the permissions and make it executable: # chmod 555 /usr/local/libexec/lf2crlf Modify /etc/printcap to use the new filter: :if=/usr/local/libexec/lf2crlf:\ Test the filter by printing the same plain text file. The carriage returns will cause each line to start at the left side of the page. ##### 10.5.3.2. Fancy Plain Text on PostScript® Printers with print/enscript GNUEnscript converts plain text files into nicely-formatted PostScript® for printing on PostScript® printers. It adds page numbers, wraps long lines, and provides numerous other features to make printed text files easier to read. Depending on the local paper size, install either print/enscript-letter or print/enscript-a4 from the Ports Collection. Create /usr/local/libexec/enscript with these contents: #!/bin/sh /usr/local/bin/enscript -o - Set the permissions and make it executable: # chmod 555 /usr/local/libexec/enscript Modify /etc/printcap to use the new filter: :if=/usr/local/libexec/enscript:\ Test the filter by printing a plain text file. ##### 10.5.3.3. Printing PostScript® to PCL Printers Many programs produce PostScript® documents. However, inexpensive printers often only understand plain text or PCL. This filter converts PostScript® files to PCL before sending them to the printer. Install the Ghostscript PostScript® interpreter, print/ghostscript9-base, from the Ports Collection. Create /usr/local/libexec/ps2pcl with these contents: #!/bin/sh /usr/local/bin/gs -dSAFER -dNOPAUSE -dBATCH -q -sDEVICE=ljet4 -sOutputFile=- - Set the permissions and make it executable: # chmod 555 /usr/local/libexec/ps2pcl PostScript® input sent to this script will be rendered and converted to PCL before being sent on to the printer. 
Modify /etc/printcap to use this new input filter:

:if=/usr/local/libexec/ps2pcl:\

Test the filter by sending a small PostScript® program to it:

% printf "%%\!PS \n /Helvetica findfont 18 scalefont setfont \
72 432 moveto (PostScript printing successful.) show showpage \004" | lpr

##### 10.5.3.4. Smart Filters

A filter that detects the type of input and automatically converts it to the correct format for the printer can be very convenient. The first two characters of a PostScript® file are usually %!. A filter can detect those two characters. PostScript® files can be sent on to a PostScript® printer unchanged. Text files can be converted to PostScript® with Enscript as shown earlier. Create /usr/local/libexec/psif with these contents:

#!/bin/sh
#
# psif - Print PostScript or plain text on a PostScript printer
#
IFS="" read -r first_line
first_two_chars=`expr "$first_line" : '\(..\)'`

case "$first_two_chars" in
%!)
    # %! : PostScript job, print it.
    echo "$first_line" && cat && exit 0
    exit 2
    ;;
*)
    # otherwise, format with enscript
    ( echo "$first_line"; cat ) | /usr/local/bin/enscript -o - && exit 0
    exit 2
    ;;
esac

Set the permissions and make it executable:

# chmod 555 /usr/local/libexec/psif

Modify /etc/printcap to use this new input filter:

:if=/usr/local/libexec/psif:\

Test the filter by printing PostScript® and plain text files.

##### 10.5.3.5. Other Smart Filters

Writing a filter that detects many different types of input and formats them correctly is challenging. print/apsfilter from the Ports Collection is a smart "magic" filter that detects dozens of file types and automatically converts them to the PDL understood by the printer. See http://www.apsfilter.org for more details.

#### 10.5.4. Multiple Queues

The entries in /etc/printcap are really definitions of queues. There can be more than one queue for a single printer. When combined with filters, multiple queues provide users more control over how their jobs are printed.

As an example, consider a networked PostScript® laser printer in an office. Most users want to print plain text, but a few advanced users want to be able to print PostScript® files directly. Two entries can be created for the same printer in /etc/printcap:

textprinter:\
	:lp=9100@officelaser:\
	:sh:\
	:mx#0:\
	:sd=/var/spool/lpd/textprinter:\
	:if=/usr/local/libexec/enscript:\
	:lf=/var/log/lpd-errs:

psprinter:\
	:lp=9100@officelaser:\
	:sh:\
	:mx#0:\
	:sd=/var/spool/lpd/psprinter:\
	:lf=/var/log/lpd-errs:

Documents sent to textprinter will be formatted by the /usr/local/libexec/enscript filter shown in an earlier example. Advanced users can print PostScript® files on psprinter, where no filtering is done.

This multiple queue technique can be used to provide direct access to all kinds of printer features. A printer with a duplexer could use two queues, one for ordinary single-sided printing, and one with a filter that sends the command sequence to enable double-sided printing and then sends the incoming file.

#### 10.5.5. Monitoring and Controlling Printing

Several utilities are available to monitor print jobs and check and control printer operation.

##### 10.5.5.1. lpq(1)

lpq(1) shows the status of a user’s print jobs. Print jobs from other users are not shown.
Show the current user’s pending jobs on a single printer: % lpq -Plp Rank Owner Job Files Total Size 1st jsmith 0 (standard input) 12792 bytes Show the current user’s pending jobs on all printers: % lpq -a lp: Rank Owner Job Files Total Size 1st jsmith 1 (standard input) 27320 bytes laser: Rank Owner Job Files Total Size 1st jsmith 287 (standard input) 22443 bytes ##### 10.5.5.2. lprm(1) lprm(1) is used to remove print jobs. Normal users are only allowed to remove their own jobs. root can remove any or all jobs. Remove all pending jobs from a printer: # lprm -Plp - dfA002smithy dequeued cfA002smithy dequeued dfA003smithy dequeued cfA003smithy dequeued dfA004smithy dequeued cfA004smithy dequeued Remove a single job from a printer. lpq(1) is used to find the job number. % lpq Rank Owner Job Files Total Size 1st jsmith 5 (standard input) 12188 bytes % lprm -Plp 5 dfA005smithy dequeued cfA005smithy dequeued ##### 10.5.5.3. lpc(8) lpc(8) is used to check and modify printer status. lpc is followed by a command and an optional printer name. all can be used instead of a specific printer name, and the command will be applied to all printers. Normal users can view status with lpc(8). Only root can use commands which modify printer status. Show the status of all printers: % lpc status all lp: queuing is enabled printing is enabled 1 entry in spool area printer idle laser: queuing is enabled printing is enabled 1 entry in spool area waiting for laser to come up Prevent a printer from accepting new jobs, then begin accepting new jobs again: # lpc disable lp lp: queuing disabled # lpc enable lp lp: queuing enabled Stop printing, but continue to accept new jobs. Then begin printing again: # lpc stop lp lp: printing disabled # lpc start lp lp: printing enabled daemon started Restart a printer after some error condition: # lpc restart lp lp: no daemon to abort printing enabled daemon restarted Turn the print queue off and disable printing, with a message to explain the problem to users: # lpc down lp Repair parts will arrive on Monday lp: printer and queuing disabled status message is now: Repair parts will arrive on Monday Re-enable a printer that is down: # lpc up lp lp: printing enabled daemon started See lpc(8) for more commands and options. #### 10.5.6. Shared Printers Printers are often shared by multiple users in businesses and schools. Additional features are provided to make sharing printers more convenient. ##### 10.5.6.1. Aliases The printer name is set in the first line of the entry in /etc/printcap. Additional names, or aliases, can be added after that name. Aliases are separated from the name and each other by vertical bars: lp|repairsprinter|salesprinter:\ Aliases can be used in place of the printer name. For example, users in the Sales department print to their printer with % lpr -Psalesprinter sales-report.txt Users in the Repairs department print to their printer with % lpr -Prepairsprinter repairs-report.txt All of the documents print on that single printer. When the Sales department grows enough to need their own printer, the alias can be removed from the shared printer entry and used as the name of a new printer. Users in both departments continue to use the same commands, but the Sales documents are sent to the new printer. ##### 10.5.6.2. Header Pages It can be difficult for users to locate their documents in the stack of pages produced by a busy shared printer. Header pages were created to solve this problem. 
A header page with the user name and document name is printed before each print job. These pages are also sometimes called banner or separator pages. Enabling header pages differs depending on whether the printer is connected directly to the computer with a USB, parallel, or serial cable, or is connected remotely over a network. Header pages on directly-connected printers are enabled by removing the :sh:\ (Suppress Header) line from the entry in /etc/printcap. These header pages only use line feed characters for new lines. Some printers will need the /usr/share/examples/printing/hpif filter to prevent stairstepped text. The filter configures PCL printers to print both carriage returns and line feeds when a line feed is received. Header pages for network printers must be configured on the printer itself. Header page entries in /etc/printcap are ignored. Settings are usually available from the printer front panel or a configuration web page accessible with a web browser. #### 10.5.7. References Example files: /usr/share/examples/printing/. The 4.3BSD Line Printer Spooler Manual, /usr/share/doc/smm/07.lpd/paper.ascii.gz. ### 10.6. Other Printing Systems Several other printing systems are available in addition to the built-in lpd(8). These systems offer support for other protocols or additional features. #### 10.6.1. CUPS (Common UNIX® Printing System) CUPS is a popular printing system available on many operating systems. Using CUPS on FreeBSD is documented in a separate article: CUPS #### 10.6.2. HPLIP Hewlett Packard provides a printing system that supports many of their inkjet and laser printers. The port is print/hplip. The main web page is at https://developers.hp.com/hp-linux-imaging-and-printing. The port handles all the installation details on FreeBSD. Configuration information is shown at https://developers.hp.com/hp-linux-imaging-and-printing/install. #### 10.6.3. LPRng LPRng was developed as an enhanced alternative to lpd(8). The port is sysutils/LPRng. For details and documentation, see http://www.lprng.com/. ## Chapter 11. Linux Binary Compatibility ### 11.1. Synopsis FreeBSD provides optional binary compatibility with Linux®, allowing users to install and run unmodified Linux binaries. It is available for the i386, amd64, and arm64 architectures. Some Linux-specific operating system features are not yet supported; this mostly happens with functionality specific to hardware or related to system management, such as cgroups or namespaces. After reading this chapter, you will know: • How to enable Linux binary compatibility on a FreeBSD system. • How to install additional Linux shared libraries. • How to install Linux applications on a FreeBSD system. • The implementation details of Linux compatibility in FreeBSD. Before reading this chapter, you should: ### 11.2. Configuring Linux Binary Compatibility By default, Linux binary compatibility is not enabled. To enable it at boot time, add this line to /etc/rc.conf: linux_enable="YES" Once enabled, it can be started without rebooting by running: # service linux start The /etc/rc.d/linux script will load necessary kernel modules and mount filesystems expected by Linux applications under /compat/linux. This is enough for statically linked Linux binaries to work. They can be started in the same way native FreeBSD binaries can; they behave almost exactly like native processes and can be traced and debugged the usual way. 
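As a quick sanity check, one can verify that the kernel modules are loaded and that the expected filesystems are mounted under /compat/linux. This is only a sketch; the exact module names and mount list depend on the architecture and configuration:

# kldstat | grep linux
# mount | grep compat/linux
# sysctl compat.linux.osrelease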
Linux binaries linked dynamically (which is the vast majority) also require Linux shared libraries to be installed - they can run on top of the FreeBSD kernel, but they cannot use FreeBSD libraries; this is similar to how 32-bit binaries cannot use native 64-bit libraries. There are several ways of providing those libraries: one can copy them over from an existing Linux installation using the same architecture, install them from FreeBSD packages, or install using debootstrap(8) (from sysutils/debootstrap), and others. ### 11.3. CentOS Base System from FreeBSD Packages This method is not yet available for arm64. The easiest way to install Linux libraries is to install emulators/linux_base-c7 package or port, which places the CentOS 7-derived base system into /compat/linux: # pkg install linux_base-c7 FreeBSD provides packages for some Linux binary applications. For example, to install Sublime Text 4, along with all the Linux libraries it depends on, run this command: # pkg install linux-sublime-text4 ### 11.4. Debian / Ubuntu Base System with debootstrap(8) An alternative way of providing Linux shared libraries is by using sysutils/debootstrap. This has the advantage of providing a full Debian or Ubuntu distribution. To use it, follow the instructions at FreeBSD Wiki: FreeBSD Wiki - Linux Jails. After debootstrapping, chroot(8) into the newly created directory and install software in a way typical for the Linux distribution inside, for example: # chroot /compat/ubuntu /bin/bash root@hostname:/# apt update It is possible to debootstrap into /compat/linux, but it is discouraged to avoid collisions with files installed from FreeBSD ports and packages. Instead, derive the directory name from the distribution or version name, e.g., /compat/ubuntu. If the bootstrapped instance is intended to provide Linux shared libraries without having to explicitly use chroot or jails, one can point the kernel at it by updating the compat.linux.emul_path sysctl and adding a line like this to /etc/sysctl.conf: compat.linux.emul_path="/compat/ubuntu" This sysctl controls the kernel’s path translation mechanism; see linux(4) for details. Please note that changing it might cause trouble for Linux applications installed from FreeBSD packages; one reason is that many of those applications are still 32-bit, while Ubuntu seems to be deprecating 32-bit library support. ### 11.5. Advanced Topics The Linux compatibility layer is a work in progress. Consult FreeBSD Wiki - Linuxulator for more information. A list of all Linux-related sysctl(8) knobs can be found in linux(4). Some applications require specific filesystems to be mounted. This is normally handled by the /etc/rc.d/linux script, but can be disabled by adding this line to /etc/rc.conf: linux_mounts_enable="NO" Filesystems mounted by the rc script will not work for Linux processes inside chroots or jails; if needed, configure them in /etc/fstab: devfs /compat/linux/dev devfs rw,late 0 0 tmpfs /compat/linux/dev/shm tmpfs rw,late,size=1g,mode=1777 0 0 fdescfs /compat/linux/dev/fd fdescfs rw,late,linrdlnk 0 0 linprocfs /compat/linux/proc linprocfs rw,late 0 0 linsysfs /compat/linux/sys linsysfs rw,late 0 0 Since the Linux binary compatibility layer has gained support for running both 32- and 64-bit Linux binaries (on 64-bit x86 hosts), it is no longer possible to link the emulation functionality statically into a custom kernel. #### 11.5.1. 
Installing Additional Libraries Manually For base system subdirectories created with debootstrap(8), use the instructions above instead. If a Linux application complains about missing shared libraries after configuring Linux binary compatibility, determine which shared libraries the Linux binary needs and install them manually. From a Linux system using the same CPU architecture, ldd can be used to determine which shared libraries the application needs. For example, to check which shared libraries linuxdoom needs, run this command from a Linux system that has Doom installed: % ldd linuxdoom libXt.so.3 (DLL Jump 3.1) => /usr/X11/lib/libXt.so.3.1.0 libX11.so.3 (DLL Jump 3.1) => /usr/X11/lib/libX11.so.3.1.0 libc.so.4 (DLL Jump 4.5pl26) => /lib/libc.so.4.6.29 Then, copy all the files in the last column of the output from the Linux system into /compat/linux on the FreeBSD system. Once copied, create symbolic links to the names in the first column. This example will result in the following files on the FreeBSD system: /compat/linux/usr/X11/lib/libXt.so.3.1.0 /compat/linux/usr/X11/lib/libXt.so.3 -> libXt.so.3.1.0 /compat/linux/usr/X11/lib/libX11.so.3.1.0 /compat/linux/usr/X11/lib/libX11.so.3 -> libX11.so.3.1.0 /compat/linux/lib/libc.so.4.6.29 /compat/linux/lib/libc.so.4 -> libc.so.4.6.29 If a Linux shared library already exists with a matching major revision number to the first column of the ldd output, it does not need to be copied to the file named in the last column, as the existing library should work. It is advisable to copy the shared library if it is a newer version, though. The old one can be removed, as long as the symbolic link points to the new one. For example, these libraries already exist on the FreeBSD system: /compat/linux/lib/libc.so.4.6.27 /compat/linux/lib/libc.so.4 -> libc.so.4.6.27 and ldd indicates that a binary requires a later version: libc.so.4 (DLL Jump 4.5pl26) -> libc.so.4.6.29 Since the existing library is only one or two versions out of date in the last digit, the program should still work with the slightly older version. However, it is safe to replace the existing libc.so with the newer version: /compat/linux/lib/libc.so.4.6.29 /compat/linux/lib/libc.so.4 -> libc.so.4.6.29 Generally, one will need to look for the shared libraries that Linux binaries depend on only the first few times that a Linux program is installed on FreeBSD. After a while, there will be a sufficient set of Linux shared libraries on the system to be able to run newly installed Linux binaries without any extra work. #### 11.5.2. Branding Linux ELF Binaries The FreeBSD kernel uses several methods to determine if the binary to be executed is a Linux one: it checks the brand in the ELF file header, looks for known ELF interpreter paths and checks ELF notes; finally, by default, unbranded ELF executables are assumed to be Linux anyway. Should all those methods fail, an attempt to execute the binary might result in error message: % ./my-linux-elf-binary ELF binary type not known Abort To help the FreeBSD kernel distinguish between a FreeBSD ELF binary and a Linux binary, use brandelf(1): % brandelf -t Linux my-linux-elf-binary #### 11.5.3. Installing a Linux RPM Based Application To install a Linux RPM-based application, first install the archivers/rpm4 package or port. Once installed, root can use this command to install a .rpm: # cd /compat/linux # rpm2cpio < /path/to/linux.archive.rpm | cpio -id If necessary, brandelf the installed ELF binaries. Note that this will prevent a clean uninstall. 
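As a rough illustration of that branding step, file(1) can be used to spot which of the unpacked files are ELF executables, and brandelf(1) can then be applied to each of them; the path and program name below are purely hypothetical:
# file /compat/linux/opt/example/bin/* | grep ELF
# brandelf -t Linux /compat/linux/opt/example/bin/example-app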
#### 11.5.4. Configuring the Hostname Resolver If DNS does not work or this error appears: resolv+: "bind" is an invalid keyword resolv+: "hosts" is an invalid keyword configure /compat/linux/etc/host.conf as follows: order hosts, bind multi on This specifies that /etc/hosts is searched first and DNS is searched second. When /compat/linux/etc/host.conf does not exist, Linux applications use /etc/host.conf and complain about the incompatible FreeBSD syntax. Remove bind if a name server is not configured using /etc/resolv.conf. #### 11.5.5. Miscellaneous This section describes how Linux binary compatibility works and is based on an email written to FreeBSD chat mailing list by Terry Lambert [email protected] (Message ID: <[email protected]>). FreeBSD has an abstraction called an "execution class loader". This is a wedge into the execve(2) system call. Historically, the UNIX® loader examined the magic number (generally the first 4 or 8 bytes of the file) to see if it was a binary known to the system, and if so, invoked the binary loader. If it was not the binary type for the system, the execve(2) call returned a failure, and the shell attempted to start executing it as shell commands. The assumption was a default of "whatever the current shell is". Later, a hack was made for sh(1) to examine the first two characters, and if they were :\n, it invoked the csh(1) shell instead. FreeBSD has a list of loaders, instead of a single loader, with a fallback to the #! loader for running shell interpreters or shell scripts. For the Linux ABI support, FreeBSD sees the magic number as an ELF binary. The ELF loader looks for a specialized brand, which is a comment section in the ELF image, and which is not present on SVR4/Solaris™ ELF binaries. For Linux binaries to function, they must be branded as type Linux using brandelf(1): # brandelf -t Linux file When the ELF loader sees the Linux brand, the loader replaces a pointer in the proc structure. All system calls are indexed through this pointer. In addition, the process is flagged for special handling of the trap vector for the signal trampoline code, and several other (minor) fix-ups that are handled by the Linux kernel module. The Linux system call vector contains, among other things, a list of sysent[] entries whose addresses reside in the kernel module. When a system call is called by the Linux binary, the trap code dereferences the system call function pointer off the proc structure, and gets the Linux, not the FreeBSD, system call entry points. Linux mode dynamically reroots lookups. This is, in effect, equivalent to union file system mounts. First, an attempt is made to look up the file in /compat/linux/original-path. If that fails, the lookup is done in /original-path. This makes sure that binaries that require other binaries can run. For example, the Linux toolchain can all run under Linux ABI support. It also means that the Linux binaries can load and execute FreeBSD binaries, if there are no corresponding Linux binaries present, and that a uname(1) command can be placed in the /compat/linux directory tree to ensure that the Linux binaries cannot tell they are not running on Linux. In effect, there is a Linux kernel in the FreeBSD kernel. The various underlying functions that implement all of the services provided by the kernel are identical to both the FreeBSD system call table entries, and the Linux system call table entries: file system operations, virtual memory operations, signal delivery, and System V IPC. 
The only difference is that FreeBSD binaries get the FreeBSD glue functions, and Linux binaries get the Linux glue functions. The FreeBSD glue functions are statically linked into the kernel, and the Linux glue functions can be statically linked, or they can be accessed via a kernel module. Technically, this is not really emulation, it is an ABI implementation. It is sometimes called "Linux emulation" because the implementation was done at a time when there was no other word to describe what was going on. Saying that FreeBSD ran Linux binaries was not true, since the code was not compiled in. ## Chapter 12. WINE ### 12.1. Synopsis WINE, which stands for Wine Is Not an Emulator, is technically a software translation layer. It enables users to install and run some software written for Windows® on FreeBSD (and other) systems. It operates by intercepting system calls, or requests from the software to the operating system, and translating them from Windows® calls to calls that FreeBSD understands. It will also translate any responses as needed into what the Windows® software is expecting. So in some ways, it emulates a Windows® environment, in that it provides many of the resources Windows® applications are expecting. However, it is not an emulator in the traditional sense. Many of those solutions operate by constructing an entire second computer using software processes in place of hardware. Virtualization (such as that provided by the emulators/qemu port) operates in this way. One of the benefits of this approach is the ability to install a full version of the OS in question to the emulator. It means that the environment will not look any different to applications than a real machine, and chances are good that everything will work on it. The downside to this approach is the fact that software acting as hardware is inherently slower than actual hardware. The computer built in software (called the guest) requires resources from the real machine (the host), and holds on to those resources for as long as it is running. The WINE Project, on the other hand, is much lighter on system resources. It translates system calls on the fly, so while it is difficult to be as fast as a real Windows® computer, it can come very close. On the other hand, WINE is trying to keep up with a moving target in terms of all the different system calls and other functionality it needs to support. As a result there may be applications that do not work as expected on WINE, will not work at all, or will not even install to begin with. At the end of the day, WINE provides another option to try to get a particular Windows® software program running on FreeBSD. It can always serve as the first option which, if successful, offers a good experience without unnecessarily depleting the host FreeBSD system’s resources. This chapter will describe: • How to install WINE on a FreeBSD system. • How WINE operates, and how it is different from other alternatives like virtualization. • How to fine-tune WINE to the specific needs of some applications. • How to install GUI helpers for WINE. • Common tips and solutions for WINE on FreeBSD. • Considerations for WINE on FreeBSD in terms of the multi-user environment. Before reading this chapter, it will be useful to: ### 12.2. WINE Overview & Concepts WINE is a complex system, so before running it on a FreeBSD system it is worth gaining an understanding of what it is and how it works. #### 12.2.1. What is WINE?
As mentioned in the Synopsis for this chapter, WINE is a compatibility layer that allows Windows® applications to run on other operating systems. In theory, it means these programs should run on systems like FreeBSD, macOS, and Android. When WINE runs a Windows® executable, two things occur: • Firstly, WINE implements an environment that mimics that of various versions of Windows®. For example, if an application requests access to a resource such as RAM, WINE has a memory interface that looks and acts (as far as the application is concerned) like Windows®. • Then, once that application makes use of that interface, WINE takes the incoming request for space in memory and translates it to something compatible with the host system. In the same way when the application retrieves that data, WINE facilitates fetching it from the host system and passing it back to the Windows® application. #### 12.2.2. WINE and the FreeBSD System Installing WINE on a FreeBSD system will entail a few different components: • FreeBSD applications for tasks such as running the Windows® executables, configuring the WINE sub-system, or compiling programs with WINE support. • A large number of libraries that implement the core functions of Windows® (for example /lib/wine/api-ms-core-memory-l1-1-1.dll.so, which is part of the aforementioned memory interface). • A number of Windows® executables, which are (or mimic) common utilities (such as /lib/wine/notepad.exe.so, which provides the standard Windows® text editor). • Additional Windows® assets, in particular fonts (like the Tahoma font, which is stored in share/wine/fonts/tahoma.ttf in the install root). #### 12.2.3. Graphical Versus Text Mode/Terminal Programs in WINE As an operating system where terminal utilities are "first-class citizens," it is natural to assume that WINE will contain extensive support for text-mode program. However, the majority of applications for Windows®, especially the most popular ones, are designed with a graphical user interface (GUI) in mind. Therefore, WINE’s utilities are designed by default to launch graphical programs. However, there are three methods available to run these so-called Console User Interface (CUI) programs: • The Bare Streams approach will display the output directly to standard output. • The wineconsole utility can be used with either the user or curses backed to utilize some of the enhancements the WINE system provides for CUI applications. These approaches are described in greater detail on the WINE Wiki. #### 12.2.4. WINE Derivative Projects WINE itself is a mature open source project, so it is little surprise it is used as the foundation of more complex solutions. ##### 12.2.4.1. Commercial WINE Implementations A number of companies have taken WINE and made it a core of their own, proprietary products (WINE’s LGPL license permits this). Two of the most famous of these are as follows: • Codeweavers CrossOver This solution provides a simplified "one-click" installation of WINE, which contains additional enhancements and optimizations (although the company contributes many of these back upstream to the WINE project). One area of focus for Codeweavers is to make the most popular applications install and run smoothly. While the company once produced a native FreeBSD version of their CrossOver solution, it appears to have long been abandoned. While some resources (such as a dedicated forum) are still present, they also have seen no activity for some time. 
• Steam Proton Valve, the company behind the Steam gaming platform, also uses WINE to enable Windows® games to install and run on other systems. Its primary target is Linux-based systems, though some support exists for macOS as well. While Steam does not offer a native FreeBSD client, there are several options for running the Linux® client using FreeBSD’s Linux Binary Compatibility layer. ##### 12.2.4.2. WINE Companion Programs In addition to proprietary offerings, other projects have released applications designed to work in tandem with the standard, open source version of WINE. The goals for these can range from making installation easier to offering easy ways to get popular software installed. These solutions are covered in greater detail in the later section on GUI frontends, and include the following: • winetricks • Homura #### 12.2.5. Alternatives to WINE For FreeBSD users, some alternatives to using WINE are as follows: • Dual-Booting: A straightforward option is to run desired Windows® applications natively on that OS. This of course means exiting FreeBSD in order to boot Windows®, so this method is not feasible if access to programs in both systems is required simultaneously. • Virtual Machines: Virtual Machines (VMs), as mentioned earlier in this chapter, are software processes that emulate full sets of hardware, on which additional operating systems (including Windows®) can be installed and run. Modern tools make VMs easy to create and manage, but this method comes at a cost. A good portion of the host system’s resources must be allocated to each VM, and those resources cannot be reclaimed by the host as long as the VM is running. A few examples of VM managers include the open source solutions qemu, bhyve, and VirtualBox. See the chapter on Virtualization for more detail. • Remote Access: Like many other UNIX®-like systems, FreeBSD can run a variety of applications enabling users to remotely access Windows® computers and use their programs or data. In addition to clients such as xrdp that connect to the standard Windows® Remote Desktop Protocol, other open source standards such as vnc can also be used (provided a compatible server is present on the other side). ### 12.3. Installing WINE on FreeBSD WINE can be installed via the pkg tool, or by compiling the port(s). #### 12.3.1. WINE Prerequisites Before installing WINE itself, it is useful to have the following prerequisites installed. • A GUI Most Windows® programs expect to have a graphical user interface available. If WINE is installed without one present, its dependencies will include the Wayland compositor, and so a GUI will be installed along with WINE. But it is useful to have the GUI of choice installed, configured, and working correctly before installing WINE. • wine-gecko The Windows® operating system has for some time had a default web browser pre-installed: Internet Explorer. As a result, some applications work under the assumption that there will always be something capable of displaying web pages. In order to provide this functionality, the WINE layer includes a web browser component using the Mozilla project’s Gecko engine. When WINE is first launched it will offer to download and install this, and there are reasons users might want it to do so (these will be covered in a later chapter). But they can also install it prior to installing WINE, or alongside the install of WINE proper.
Install this package with the following: # pkg install wine-gecko Alternately, compile the port with the following: # cd /usr/ports/emulators/wine-gecko # make install • wine-mono This port installs the MONO framework, an open source implementation of Microsoft’s .NET. Including this with the WINE installation will make it that much more likely that any applications written in .NET will install and run on the system. To install the package: # pkg install wine-mono To compile from the ports collection: # cd /usr/ports/emulators/wine-mono # make install #### 12.3.2. Installing WINE via FreeBSD Package Repositories With the prerequisites in place, install WINE via package with the following command: # pkg install wine Alternately, compile the WINE sub-system from source with the following: # cd /usr/ports/emulators/wine # make install #### 12.3.3. Concerns of 32- Versus 64-Bit in WINE Installations Like most software, Windows® applications made the upgrade from the older 32-bit architecture to 64 bits. And most recent software is written for 64-bit operating systems, although modern OSes can sometimes continue to run older 32-bit programs as well. FreeBSD is no different, having had support for 64-bit since the 5.x series. However, using old software no longer supported by default is a common use for emulators, and users commonly turn to WINE to play games and use other programs that do not run properly on modern hardware. Fortunately, FreeBSD can support these scenarios: • On a modern, 64-bit machine, to run 64-bit Windows® software, simply install the ports mentioned in the above sections. The ports system will automatically install the 64-bit version. • Alternately, users might have an older 32-bit machine that they do not want to run with its original, now-unsupported software. They can install the 32-bit (i386) version of FreeBSD, then install the ports in the above sections. ### 12.4. Running a First WINE Program on FreeBSD Now that WINE is installed, the next step is to try it out by running a simple program. An easy way to do this is to download a self-contained application, i.e., one that can simply be unpacked and run without any complex installation process. So-called "portable" versions of applications are good choices for this test, as are programs that run with only a single executable file. #### 12.4.1. Running a Program from the Command Line There are two different methods to launch a Windows® program from the terminal. The first, and most straightforward, is to navigate to the directory containing the program’s executable (.EXE) and issue the following: % wine program.exe For applications that take command-line arguments, add them after the executable as usual: % wine program2.exe -file file.txt Alternately, supply the full path to the executable to use it in a script, for example: % wine /home/user/bin/program.exe #### 12.4.2. Running a Program from a GUI After installation, graphical shells should be updated with new associations for Windows® executable (.EXE) files. It will now be possible to browse the system using a file manager, and launch the Windows® application in the same way as other files and programs (either a single- or double-click, depending on the desktop’s settings). On most desktops, check to make sure this association is correct by right-clicking on the file, and looking for an entry in the context menu to open the file.
One of the options (hopefully the default one) will be the Wine Windows Program Loader, as shown in the below screenshot: In the event the program does not run as expected, try launching it from the command line and review any messages displayed in the terminal to troubleshoot. In the event WINE is not the default application for .EXE files after install, check the MIME association for this extension in the current desktop environment, graphical shell, or file manager. ### 12.5. Configuring WINE Installation With an understanding of what WINE is and how it works at a high level, the next step to effectively using it on FreeBSD is becoming familiar with its configuration. The following sections will describe the key concept of the WINE prefix, and illustrate how it is used to control the behavior of applications run through WINE. #### 12.5.1. WINE Prefixes A WINE prefix is a directory, located at $HOME/.wine by default, though it can be located elsewhere. The prefix is a set of configurations and support files used by wine to configure and run the Windows® environment a given application needs. By default, a brand new WINE installation will create the following structure when first launched by a user: • .update-timestamp: contains the last modified date of file /usr/share/wine/wine.inf. It is used by WINE to determine if a prefix is out of date, and automatically update it if needed. • dosdevices/: contains information on mappings of Windows® resources to resources on the host FreeBSD system. For example, after a new WINE installation, this should contain at least two entries which enable access to the FreeBSD filesystem using Windows®-style drive letters: • c:@: A link to drive_c described below. • z:@: A link to the root directory of the system. • drive_c/: emulates the main (i.e., C:) drive of a Windows® system. It contains a directory structure and associated files mirroring that of standard Windows® systems. A fresh WINE prefix will contain Windows® 10 directories such as Users and Windows, the latter holding the OS itself. Furthermore, applications installed within a prefix will be located in either Program Files or Program Files (x86), depending on their architecture. • system.reg: This Registry file contains information on the Windows® installation, which in the case of WINE is the environment in drive_c. • user.reg: This Registry file contains the current user’s personal configurations, made either by various software or through the use of the Registry Editor. • userdef.reg: This Registry file is a default set of configurations for newly-created users. #### 12.5.2. Creating and Using WINE Prefixes While WINE will create a default prefix in the user’s $HOME/.wine/, it is possible to set up multiple prefixes. There are a few reasons to do this: • The most common reason is to emulate different versions of Windows®, according to the compatibility needs of the software in question. • In addition, it is common to encounter software that does not work correctly in the default environment, and requires special configuration. It is useful to isolate these in their own, custom prefixes, so the changes do not impact other applications. • Similarly, copying the default or "main" prefix into a separate "testing" one in order to evaluate an application’s compatibility can reduce the chance of corruption; a brief example of this follows below.
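Since a prefix is just a directory tree, the copy described in the last point can be made with ordinary tools. A minimal sketch, assuming the default prefix location, a hypothetical testing directory, and a hypothetical installer name:
% cp -Rp "$HOME/.wine" "$HOME/.wine-testing"
% WINEPREFIX="$HOME/.wine-testing" wine application-installer.exe
The WINEPREFIX mechanism used in the second command is described next.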
Creating a prefix from the terminal requires the following command: % WINEPREFIX="/home/username/.wine-new" winecfg This will run the winecfg program, which can be used to configure wine prefixes (more on this in a later section). But by providing a directory path value for the WINEPREFIX environment variable, a new prefix is created at that location if one does not already exist. Supplying the same variable to the wine program will similarly cause the selected program to be run with the specified prefix: % WINEPREFIX="/home/username/.wine-new" wine program.exe #### 12.5.3. Configuring WINE Prefixes with winecfg As described above WINE includes a tool called winecfg to configure prefixes from within a GUI. It contains a variety of functions, which are detailed in the sections below. When winecfg is run from within a prefix, or provided the location of a prefix within the WINEPREFIX variable, it enables the configuration of the selected prefix as described in the below sections. Selections made on the Applications tab will affect the scope of changes made in the Libraries and Graphics tabs, which will be limited to the application selected. See the section on Using Winecfg in the WINE Wiki for more details. ##### 12.5.3.1. Applications The Applications contains controls enabling the association of programs with a particular version of Windows®. On first start-up the Application settings section will contain a single entry: Default Settings. This corresponds to all the default configurations of the prefix, which (as the disabled Remove application button implies) cannot be deleted. But additional applications can be added with the following process: 1. Click the Add application button. 2. Use the provided dialog to select the desired program’s executable. 3. Select the version of Windows® to be used with the selected program. ##### 12.5.3.2. Libraries WINE provides a set of open source library files as part of its distribution that provide the same functions as their Windows® counterparts. However, as noted earlier in this chapter, the WINE project is always trying to keep pace with new updates to these libraries. As a result, the versions that ship with WINE may be missing functionality that the latest Windows® programs are expecting. However, winecfg makes it possible specify overrides for the built-in libraries, particularly there is a version of Windows® available on the same machine as the host FreeBSD installation. For each library to be overridden, do the following: 1. Open the New override for library drop-down and select the library to be replaced. 2. Click the Add button. 3. The new override will appear in the Existing overrides list, notice the native, builtin designation in parentheses. 4. Click to select the library. 5. Click the Edit button. 6. Use the provided dialog to select a corresponding library to be used in place of the built-in one. Be sure to select a file that is truly the corresponding version of the built-in one, otherwise there may be unexpected behavior. ##### 12.5.3.3. Graphics The Graphics tab provides some options to make the windows of programs run via WINE operate smoothly with FreeBSD: • Automatic mouse capture when windows are full-screen. • Allowing the FreeBSD window manager to decorate the windows, such as their title bars, for programs running via WINE. • Allowing the window manager to control windows for programs running via WINE, such as running resizing functions on them. • Create an emulated virtual desktop, within which all WINE programs will run. 
If this item is selected, the size of the virtual desktop can be specified using the Desktop size input boxes. ##### 12.5.3.4. Desktop Integration This tab allows configuration of the following items: • The theme and related visual settings to be used for programs running via WINE. • Whether the WINE sub-system should manage MIME types (used to determine which application opens a particular file type) internally. • Mappings of directories in the host FreeBSD system to useful folders within the Windows® environment. To change an existing association, select the desired item and click Browse, then use the provided dialog to select a directory. ##### 12.5.3.5. Drives The Drives tab allows linking of directories in the host FreeBSD system to drive letters in the Windows® environment. The default values in this tab should look familiar, as they are displaying the contents of dosdevices/ in the current WINE prefix. Changes made via this dialog will be reflected in dosdevices, and properly-formatted links created in that directory will display in this tab. To create a new entry, such as for a CD-ROM (mounted at /mnt/cdrom), take the following steps: 1. Click the Add button. 2. In the provided dialog, choose a free drive letter. 3. Click OK. 4. Fill in the Path input box by either typing the path to the resource, or clicking Browse and using the provided dialog to select it. By default WINE will autodetect the type of resource linked, but this can be manually overridden. See the section in the WINE Wiki for more detail on advanced options. ##### 12.5.3.6. Audio This tab contains some configurable options for routing sound from Windows® programs to the native FreeBSD sound system, including: • Driver selection • Default device selection • Sound test ##### 12.5.3.7. About The final tab contains information on the WINE project, including a link to the website. It also allows entry of (entirely optional) user information, although this is not sent anywhere as it is in other operating systems. ### 12.6. WINE Management GUIs While the base install of WINE comes with a GUI configuration tool in winecfg, its main purpose is just that: configuring an existing WINE prefix. There are, however, more advanced applications that will assist in the initial installation of applications as well as optimizing their WINE environments. The below sections include a selection of the most popular. #### 12.6.1. Winetricks The winetricks tool is a cross-platform, general purpose helper program for WINE. It is not developed by the WINE project proper, but rather maintained on GitHub by a group of contributors. It contains some automated "recipes" for getting common applications to work on WINE, both by optimizing the settings as well as acquiring some DLL libraries automatically. ##### 12.6.1.1. Installing winetricks To install winetricks on a FreeBSD system using binary packages, use the following commands (note winetricks requires either the i386-wine or i386-wine-devel package, and is therefore not installed automatically with other dependencies): # pkg install i386-wine winetricks To compile it from source, issue the following in the terminal: # cd /usr/ports/emulators/i386-wine # make install # cd /usr/ports/emulators/winetricks # make install If a manual installation is required, refer to the GitHub account for instructions. ##### 12.6.1.2. Using winetricks
Run winetricks with the following command: % winetricks Note that winetricks should be run inside a 32-bit prefix. Launching winetricks displays a window with a number of choices, as follows: Selecting either Install an application, Install a benchmark, or Install a game shows a list with supported options, such as the one below for applications: Selecting one or more items and clicking OK will start their installation process(es). Initially, some messages that appear to be errors may show up, but they are actually informational alerts as winetricks configures the WINE environment to get around known issues for the application: Once these are circumvented, the actual installer for the application will be run: Once the installation completes, the new Windows® application should be available from the desktop environment’s standard menu (shown in the screenshot below for the LXQT desktop environment): In order to remove the application, run winetricks again, and select Run an uninstaller. A Windows®-style dialog will appear with a list of installed programs and components. Select the application to be removed, then click the Modify/Remove button. This will run the application’s built-in installer, which should also have the option to uninstall. #### 12.6.2. Homura Homura is an application similar to winetricks, although it was inspired by the Lutris gaming system for Linux. But while it is focused on games, there are also non-gaming applications available for install through Homura. ##### 12.6.2.1. Installing Homura To install Homura’s binary package, issue the following command: # pkg install homura Homura is also available in the FreeBSD Ports system. However, rather than in the emulators section of Ports or binary packages, look for it in the games section: # cd /usr/ports/games/homura # make install ##### 12.6.2.2. Using Homura Homura’s usage is quite similar to that of winetricks. When using it for the first time, launch it from the command line (or a desktop environment runner applet) with: % Homura This should result in a friendly welcome message. Click OK to continue. The program will also offer to place a link in the application menu of compatible environments: Depending on the setup of the FreeBSD machine, Homura may display a message urging the install of native graphics drivers. The application’s window should then appear, which amounts to a "main menu" with all its options. Many of the items are the same as winetricks, although Homura offers some additional, helpful options such as opening its data folder (Open Homura Folder) or running a specified program (Run a executable in prefix). To select one of Homura’s supported applications to install, select Installation, and click OK. This will display a list of applications Homura can install automatically. Select one, and click OK to start the process. As a first step, Homura will download the selected program. A notification may appear in supported desktop environments. The program will also create a new prefix for the application. A standard WINE dialog with this message will display. Next, Homura will install any prerequisites for the selected program. This may involve downloading and extracting a fair number of files, the details of which will show in dialogs. Downloaded packages are automatically opened and run as required. The installation may end with a simple desktop notification or message in the terminal, depending on how Homura was launched. But in either case Homura should return to the main screen.
To confirm the installation was successful, select Launcher, and click OK. This will display a list of installed applications. To run the new program, select it from the list, and click OK. To uninstall the application, select Uninstallation from the main screen, which will display a similar list. Select the program to be removed, and click OK. #### 12.6.3. Running Multiple Management GUIs It is worth noting that the above solutions are not mutually exclusive. It is perfectly acceptable, even advantageous, to have both installed at the same time, as they support different sets of programs. However, it is wise to ensure that they do not access any of the same WINE prefixes. Each of these solutions applies workarounds and registry changes for known WINE issues in order to make a given application run smoothly. Allowing both winetricks and Homura to access the same prefix could lead to some of these being overwritten, with the result being that some or all applications do not work as expected. ### 12.7. WINE in Multi-User FreeBSD Installations #### 12.7.1. Issues with Using a Common WINE Prefix Like most UNIX®-like operating systems, FreeBSD is designed for multiple users to be logged in and working at the same time. On the other hand, Windows® is multi-user in the sense that there can be multiple user accounts set up on one system. But the expectation is that only one will be using the physical machine (a desktop or laptop PC) at any given moment. More recent consumer versions of Windows® have taken some steps to improve the OS in multi-user scenarios. But it is still largely structured around a single-user experience. Furthermore, the measures the WINE project has taken to create a compatible environment mean that, unlike native FreeBSD applications (including WINE itself), it resembles this single-user environment. So it follows that each user will have to maintain their own set of configurations, which is potentially good. Yet it is advantageous to install applications, particularly large ones like office suites or games, only once. Two examples of reasons to do this are maintenance (software updates need only be applied once) and efficiency in storage (no duplicated files). There are two strategies to minimize the impact of multiple WINE users on the system. #### 12.7.2. Installing Applications to a Common Drive As shown in the section on WINE Configuration, WINE provides the ability to attach additional drives to a given prefix. In this way, applications can be installed to a common location, while each user will still have a prefix where individual settings may be kept (depending on the program). This is a good setup if there are relatively few applications to be shared between users, and they are programs that require few custom tweaks to the prefix in order to function. The steps to install applications in this way are as follows: 1. First, set up a shared location on the system where the files will be stored, such as /mnt/windows-drive_d/. Creating new directories is described in the mkdir(1) manual page. 2. Next, set permissions for this new directory to allow only desired users to access it. One approach to this is to create a new group such as "windows," add the desired users to that group (see the sub-section on groups in the Users and Basic Account Management section), and set the permissions on the directory to 770 (the section on Permissions illustrates this process). 3.
Finally, add the location as a drive to the user’s prefix using the winecfg as described in the above section on WINE Configuration in this chapter. Once complete, applications can be installed to this location, and subsequently run using the assigned drive letter (or the standard UNIX®-style directory path). However, as noted above, only one user should be running these applications (which may be accessing files within their installation directory) at the same time. Some applications may also exhibit unexpected behavior when run by a user who is not the owner, despite being a member of the group that should have full "read/write/execute" permissions for the entire directory. #### 12.7.3. Using a Common Installation of WINE If, on the other hand, there are many applications to be shared, or they require specific tuning in order to work correctly, a different approach may be required. In this method, a completely separate user is created specifically for the purposes of storing the WINE prefix and all its installed applications. Individual users are then granted permission to run programs as this user using the sudo(8) command. The result is that these users can launch a WINE application as they normally would, only it will act as though launched by the newly-created user, and therefore use the centrally-maintained prefix containing both settings and programs. To accomplish this, take the following steps: Create a new user with the following command (as root), which will step through the required details: # adduser Enter the username (e.g., windows) and Full name ("Microsoft Windows"). Then accept the defaults for the remainder of the questions. Next, install the sudo utility using binary packages with the following: # pkg install sudo Once installed, edit /etc/sudoers as follows: # User alias specification # define which users can run the wine/windows programs User_Alias WINDOWS_USERS = user1,user2 # define which users can administrate (become root) User_Alias ADMIN = user1 # Cmnd alias specification # define which commands the WINDOWS_USERS may run Cmnd_Alias WINDOWS = /usr/bin/wine,/usr/bin/winecfg # Defaults Defaults:WINDOWS_USERS env_reset Defaults:WINDOWS_USERS env_keep += DISPLAY Defaults:WINDOWS_USERS env_keep += XAUTHORITY Defaults !lecture,tty_tickets,!fqdn # User privilege specification root ALL=(ALL) ALL # Members of the admin user_alias, defined above, may gain root privileges ADMIN ALL=(ALL) ALL # The WINDOWS_USERS may run WINDOWS programs as user windows without a password WINDOWS_USERS ALL = (windows) NOPASSWD: WINDOWS The result of these changes is the users named in the User_Alias section are permitted to run the programs listed in the Cmnd Alias section using the resources listed in the Defaults section (the current display) as if they were the user listed in the final line of the file. In other words, users designates as WINDOWS_USERS can run the WINE and winecfg applications as user windows. As a bonus, the configuration here means they will not be required to enter the password for the windows user. Next provide access to the display back to the windows user, as whom the WINE programs will be running: % xhost +local:windows This should be added to the list of commands run either at login or when the default graphical environment starts. 
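Exactly where this command belongs depends on how the graphical session starts. As one possible sketch, a user who starts X with startx(1) could append it to ~/.xinitrc, while many display-manager sessions read ~/.xsession instead:
% echo 'xhost +local:windows' >> ~/.xinitrc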
Once all the above are complete, a user configured as one of the WINDOWS_USERS in sudoers can run programs using the shared prefix with the following command: % sudo -u windows wine program.exe It is worth noting that multiple users accessing this shared environment at the same time is still risky. However, consider also that the shared environment can itself contain multiple prefixes. In this way an administrator can create a tested and verified set of programs, each with its own prefix. At the same time, one user can play a game while another works with office programs without the need for redundant software installations. ### 12.8. WINE on FreeBSD FAQ The following section describes some frequently asked questions, tips/tricks, or common issues in running WINE on FreeBSD, along with their respective answers. #### 12.8.1. Basic Installation and Usage ##### 12.8.1.1. How to Install 32-bit and 64-bit WINE on the Same System? As described earlier in this section, the wine and i386-wine packages conflict with one another, and therefore cannot be installed on the same system in the normal way. However, multiple installs can be achieved using mechanisms like chroots/jails, or by building WINE from source (note this does not mean building the port). ##### 12.8.1.2. Can DOS Programs Be Run on WINE? They can, as "Console User Interface" applications, as mentioned earlier in this section. However, there is an arguably better method for running DOS software: emulators/dosbox. On the other hand, there is little reason not to at least try it. Simply create a new prefix, install the software, and if it does not work delete the prefix. ##### 12.8.1.3. Should the emulators/wine-devel Package/Port be Installed to Use the Development Version of WINE Instead of Stable? Yes, installing this version will install the "development" version of WINE. As with the 32- and 64-bit versions, the development version cannot be installed together with the stable version unless additional measures are taken. Note that WINE also has a "Staging" version, which contains the most recent updates. This was at one time available as a FreeBSD port; however, it has since been removed. It can still be compiled directly from source. #### 12.8.2. Install Optimization ##### 12.8.2.1. How Should Windows® Hardware (e.g., Graphics) Drivers be Handled? Operating system drivers transfer commands between applications and hardware. WINE emulates a Windows® environment, including the drivers, which in turn use FreeBSD’s native drivers for this transfer. It is not advisable to install Windows® drivers, as the WINE system is designed to use the host system’s drivers. If, for example, a graphics card benefits from dedicated drivers, install them using the standard FreeBSD methods, not Windows® installers. ##### 12.8.2.2. Is There a Way to Make Windows® Fonts Look Better? The out-of-the-box look of WINE fonts can be slightly pixelated. According to a post in the FreeBSD Forums, adding the following to .config/fontconfig/fonts.conf will add anti-aliasing and make text more readable: <?xml version="1.0"?> <!DOCTYPE fontconfig SYSTEM "fonts.dtd"> <fontconfig> <!-- antialias all fonts --> <match target="font"> <edit name="antialias" mode="assign"><bool>true</bool></edit> <edit name="hinting" mode="assign"><bool>true</bool></edit> <edit name="hintstyle" mode="assign"><const>hintslight</const></edit> <edit name="rgba" mode="assign"><const>rgb</const></edit> </match> </fontconfig> ##### 12.8.2.3.
Does Having Windows® Installed Elsewhere on a System Help WINE Operate? It may, depending on the application being run. As mentioned in the section describing winecfg, some built-in WINE DLLs and other libraries can be overridden by providing a path to an alternate version. Provided the Windows® partition or drive is mounted to the FreeBSD system and accessible to the user, configuring some of these overrides will use native Windows® libraries and may decrease the chance of unexpected behavior. #### 12.8.3. Application-Specific ##### 12.8.3.1. Where is the Best Place to see if Application X Works on WINE? The first step in determining compatibility should be the WINE AppDB. This is a compilation of reports of programs working (or not) on all supported platforms, although (as previously mentioned), solutions for one platform are often applicable to others. ##### 12.8.3.2. Is There Anything That Will Help Games Run Better? Perhaps. Many Windows® games rely on DirectX, a proprietary Microsoft graphics layer. However there are projects in the open source community attempting to implement support for this technology. The dxvk project, which is an attempt to implement DirectX using the FreeBSD-compatible Vulkan graphics sub-system, is one such. Although its primary target is WINE on Linux, some FreeBSD users report compiling and using dxvk. In addition, work is under way on a wine-proton port. This will bring the work of Valve, developer of the Steam gaming platform, to FreeBSD. Proton is a distribution of WINE designed to allow many Windows® games to run on other operating systems with minimal setup. ##### 12.8.3.3. Is There Anywhere FreeBSD WINE Users Gather to Exchange Tips and Tricks? There are plenty of places FreeBSD users discuss issues related to WINE that can be searched for solutions: #### 12.8.4. Other OS Resources There are a number of resources focused on other operating systems that may be useful for FreeBSD users: • The WINE Wiki has a wealth of information on using WINE, much of which is applicable across many of WINE’s supported operating systems. • Similarly, the documentation available from other OS projects can also be of good value. The WINE page on the Arch Linux Wiki is a particularly good example, although some of the "Third-party applications" (i.e., "companion applications") are obviously not available on FreeBSD. • Finally, Codeweavers (a developer of a commercial version of WINE) is an active upstream contributor. Oftentimes answers to questions in their support forum can be of aid in troubleshooting problems with the open source version of WINE. # Part III: System Administration The remaining chapters cover all aspects of FreeBSD system administration. Each chapter starts by describing what will be learned as a result of reading the chapter, and also details what the reader is expected to know before tackling the material. These chapters are designed to be read as the information is needed. They do not need to be read in any particular order, nor must all of them be read before beginning to use FreeBSD. ## Chapter 13. Configuration and Tuning ### 13.1. Synopsis One of the important aspects of FreeBSD is proper system configuration. This chapter explains much of the FreeBSD configuration process, including some of the parameters which can be set to tune a FreeBSD system. After reading this chapter, you will know: • The basics of rc.conf configuration and /usr/local/etc/rc.d startup scripts. • How to configure and test a network card. 
• How to configure virtual hosts on network devices. • How to use the various configuration files in /etc. • How to tune FreeBSD using sysctl(8) variables. • How to tune disk performance and modify kernel limitations. Before reading this chapter, you should: ### 13.2. Starting Services Many users install third party software on FreeBSD from the Ports Collection and require the installed services to be started upon system initialization. Services, such as mail/postfix or www/apache22 are just two of the many software packages which may be started during system initialization. This section explains the procedures available for starting third party software. In FreeBSD, most included services, such as cron(8), are started through the system startup scripts. #### 13.2.1. Extended Application Configuration Now that FreeBSD includes rc.d, configuration of application startup is easier and provides more features. Using the key words discussed in Managing Services in FreeBSD, applications can be set to start after certain other services and extra flags can be passed through /etc/rc.conf in place of hard coded flags in the startup script. A basic script may look similar to the following: #!/bin/sh # # PROVIDE: utility # REQUIRE: DAEMON # KEYWORD: shutdown . /etc/rc.subr name=utility rcvar=utility_enable command="/usr/local/sbin/utility" load_rc_config$name # # DO NOT CHANGE THESE DEFAULT VALUES HERE # SET THEM IN THE /etc/rc.conf FILE # utility_enable=${utility_enable-"NO"} pidfile=${utility_pidfile-"/var/run/utility.pid"} run_rc_command "$1" This script will ensure that the provided utility will be started after the DAEMON pseudo-service. It also provides a method for setting and tracking the process ID (PID). This application could then have the following line placed in /etc/rc.conf: utility_enable="YES" This method allows for easier manipulation of command line arguments, inclusion of the default functions provided in /etc/rc.subr, compatibility with rcorder(8), and provides for easier configuration via rc.conf. #### 13.2.2. Using Services to Start Services Other services can be started using inetd(8). Working with inetd(8) and its configuration is described in depth in “The inetd Super-Server”. In some cases, it may make more sense to use cron(8) to start system services. This approach has a number of advantages as cron(8) runs these processes as the owner of the crontab(5). This allows regular users to start and maintain their own applications. The @reboot feature of cron(8), may be used in place of the time specification. This causes the job to run when cron(8) is started, normally during system initialization. ### 13.3. Configuring cron(8) One of the most useful utilities in FreeBSD is cron. This utility runs in the background and regularly checks /etc/crontab for tasks to execute and searches /var/cron/tabs for custom crontab files. These files are used to schedule tasks which cron runs at the specified times. Each entry in a crontab defines a task to run and is known as a cron job. Two different types of configuration files are used: the system crontab, which should not be modified, and user crontabs, which can be created and edited as needed. The format used by these files is documented in crontab(5). The format of the system crontab, /etc/crontab includes a who column which does not exist in user crontabs. In the system crontab, cron runs the command as the user specified in this column. In a user crontab, all commands run as the user who created the crontab. 
User crontabs allow individual users to schedule their own tasks. The root user can also have a user crontab which can be used to schedule tasks that do not exist in the system crontab. Here is a sample entry from the system crontab, /etc/crontab: # /etc/crontab - root's crontab for FreeBSD # #$FreeBSD$(1) SHELL=/bin/sh PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin (2) # #minute hour mday month wday who command (3) # */5 * * * * root /usr/libexec/atrun (4) 1 Lines that begin with the # character are comments. A comment can be placed in the file as a reminder of what and why a desired action is performed. Comments cannot be on the same line as a command or else they will be interpreted as part of the command; they must be on a new line. Blank lines are ignored. 2 The equals (=) character is used to define any environment settings. In this example, it is used to define the SHELL and PATH. If the SHELL is omitted, cron will use the default Bourne shell. If the PATH is omitted, the full path must be given to the command or script to run. 3 This line defines the seven fields used in a system crontab: minute, hour, mday, month, wday, who, and command. The minute field is the time in minutes when the specified command will be run, the hour is the hour when the specified command will be run, the mday is the day of the month, month is the month, and wday is the day of the week. These fields must be numeric values, representing the twenty-four hour clock, or a *, representing all values for that field. The who field only exists in the system crontab and specifies which user the command should be run as. The last field is the command to be executed. 4 This entry defines the values for this cron job. The */5, followed by several more * characters, specifies that /usr/libexec/atrun is invoked by root every five minutes of every hour, of every day and day of the week, of every month.Commands can include any number of switches. However, commands which extend to multiple lines need to be broken with the backslash "\" continuation character. #### 13.3.1. Creating a User Crontab To create a user crontab, invoke crontab in editor mode: % crontab -e This will open the user’s crontab using the default text editor. The first time a user runs this command, it will open an empty file. Once a user creates a crontab, this command will open that file for editing. It is useful to add these lines to the top of the crontab file in order to set the environment variables and to remember the meanings of the fields in the crontab: SHELL=/bin/sh PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin # Order of crontab fields # minute hour mday month wday command Then add a line for each command or script to run, specifying the time to run the command. This example runs the specified custom Bourne shell script every day at two in the afternoon. Since the path to the script is not specified in PATH, the full path to the script is given: 0 14 * * * /usr/home/dru/bin/mycustomscript.sh Before using a custom script, make sure it is executable and test it with the limited set of environment variables set by cron. To replicate the environment that would be used to run the above cron entry, use:env -i SHELL=/bin/sh PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin HOME=/home/dru LOGNAME=dru /usr/home/dru/bin/mycustomscript.shThe environment set by cron is discussed in crontab(5). Checking that scripts operate correctly in a cron environment is especially important if they include any commands that delete files using wildcards. 
When finished editing the crontab, save the file. It will automatically be installed and cron will read the crontab and run its cron jobs at their specified times. To list the cron jobs in a crontab, use this command: % crontab -l 0 14 * * * /usr/home/dru/bin/mycustomscript.sh To remove all of the cron jobs in a user crontab: % crontab -r remove crontab for dru? y ### 13.4. Managing Services in FreeBSD FreeBSD uses the rc(8) system of startup scripts during system initialization and for managing services. The scripts listed in /etc/rc.d provide basic services which can be controlled with the start, stop, and restart options to service(8). For instance, sshd(8) can be restarted with the following command: # service sshd restart This procedure can be used to start services on a running system. Services will be started automatically at boot time as specified in rc.conf(5). For example, to enable natd(8) at system startup, add the following line to /etc/rc.conf: natd_enable="YES" If a natd_enable="NO" line is already present, change the NO to YES. The rc(8) scripts will automatically load any dependent services during the next boot, as described below. Since the rc(8) system is primarily intended to start and stop services at system startup and shutdown time, the start, stop and restart options will only perform their action if the appropriate /etc/rc.conf variable is set. For instance, sshd restart will only work if sshd_enable is set to YES in /etc/rc.conf. To start, stop or restart a service regardless of the settings in /etc/rc.conf, these commands should be prefixed with "one". For instance, to restart sshd(8) regardless of the current /etc/rc.conf setting, execute the following command: # service sshd onerestart To check if a service is enabled in /etc/rc.conf, run the appropriate rc(8) script with rcvar. This example checks to see if sshd(8) is enabled in /etc/rc.conf: # service sshd rcvar # sshd # sshd_enable="YES" # (default: "") The # sshd line is output from the above command, not a root console. To determine whether or not a service is running, use status. For instance, to verify that sshd(8) is running: # service sshd status sshd is running as pid 433. In some cases, it is also possible to reload a service. This attempts to send a signal to an individual service, forcing the service to reload its configuration files. In most cases, this means sending the service a SIGHUP signal. Support for this feature is not included for every service. The rc(8) system is used for network services and it also contributes to most of the system initialization. For instance, when the /etc/rc.d/bgfsck script is executed, it prints out the following message: Starting background file system checks in 60 seconds. This script is used for background file system checks, which occur only during system initialization. Many system services depend on other services to function properly. For example, yp(8) and other RPC-based services may fail to start until after the rpcbind(8) service has started. To resolve this issue, information about dependencies and other meta-data is included in the comments at the top of each startup script. The rcorder(8) program is used to parse these comments during system initialization to determine the order in which system services should be invoked to satisfy the dependencies. The following key word must be included in all startup scripts as it is required by rc.subr(8) to "enable" the startup script: • PROVIDE: Specifies the services this file provides. 
The following key words may be included at the top of each startup script. They are not strictly necessary, but are useful as hints to rcorder(8): • REQUIRE: Lists services which are required for this service. The script containing this key word will run after the specified services. • BEFORE: Lists services which depend on this service. The script containing this key word will run before the specified services. By carefully setting these keywords for each startup script, an administrator has a fine-grained level of control of the startup order of the scripts, without the need for "runlevels" used by some UNIX® operating systems. Additional information can be found in rc(8) and rc.subr(8). Refer to this article for instructions on how to create custom rc(8) scripts. #### 13.4.1. Managing System-Specific Configuration The principal location for system configuration information is /etc/rc.conf. This file contains a wide range of configuration information and it is read at system startup to configure the system. It provides the configuration information for the rc* files. The entries in /etc/rc.conf override the default settings in /etc/defaults/rc.conf. The file containing the default settings should not be edited. Instead, all system-specific changes should be made to /etc/rc.conf. A number of strategies may be applied in clustered applications to separate site-wide configuration from system-specific configuration in order to reduce administration overhead. The recommended approach is to place system-specific configuration into /etc/rc.conf.local. For example, these entries in /etc/rc.conf apply to all systems: sshd_enable="YES" keyrate="fast" defaultrouter="10.1.1.254" Whereas these entries in /etc/rc.conf.local apply to this system only: hostname="node1.example.org" ifconfig_fxp0="inet 10.1.1.1/8" Distribute /etc/rc.conf to every system using an application such as rsync or puppet, while /etc/rc.conf.local remains unique. Upgrading the system will not overwrite /etc/rc.conf, so system configuration information will not be lost. Both /etc/rc.conf and /etc/rc.conf.local are parsed by sh(1). This allows system operators to create complex configuration scenarios. Refer to rc.conf(5) for further information on this topic. ### 13.5. Setting Up Network Interface Cards Adding and configuring a network interface card (NIC) is a common task for any FreeBSD administrator. #### 13.5.1. Locating the Correct Driver First, determine the model of the NIC and the chip it uses. FreeBSD supports a wide variety of NICs. Check the Hardware Compatibility List for the FreeBSD release to see if the NIC is supported. If the NIC is supported, determine the name of the FreeBSD driver for the NIC. Refer to /usr/src/sys/conf/NOTES and /usr/src/sys/arch/conf/NOTES for the list of NIC drivers with some information about the supported chipsets. When in doubt, read the manual page of the driver as it will provide more information about the supported hardware and any known limitations of the driver. The drivers for common NICs are already present in the GENERIC kernel, meaning the NIC should be probed during boot. The system’s boot messages can be viewed by typing more /var/run/dmesg.boot and using the spacebar to scroll through the text. 
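When the expected driver name is already known, grep(1) can narrow the boot messages down to the relevant probe lines. This is a hedged sketch; dc is used only because it matches the example that follows, so substitute the driver name for the hardware in question:

% grep ^dc /var/run/dmesg.boot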
In this example, two Ethernet NICs using the dc(4) driver are present on the system:

dc0: <82c169 PNIC 10/100BaseTX> port 0xa000-0xa0ff mem 0xd3800000-0xd38000ff irq 15 at device 11.0 on pci0
miibus0: <MII bus> on dc0
bmtphy0: <BCM5201 10/100baseTX PHY> PHY 1 on miibus0
bmtphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
dc0: Ethernet address: 00:a0:cc:da:da:da
dc0: [ITHREAD]
dc1: <82c169 PNIC 10/100BaseTX> port 0x9800-0x98ff mem 0xd3000000-0xd30000ff irq 11 at device 12.0 on pci0
miibus1: <MII bus> on dc1
bmtphy1: <BCM5201 10/100baseTX PHY> PHY 1 on miibus1
bmtphy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
dc1: Ethernet address: 00:a0:cc:da:da:db
dc1: [ITHREAD]

If the driver for the NIC is not present in GENERIC, but a driver is available, the driver will need to be loaded before the NIC can be configured and used. This may be accomplished in one of two ways:

• The easiest way is to load a kernel module for the NIC using kldload(8). To also automatically load the driver at boot time, add the appropriate line to /boot/loader.conf. Not all NIC drivers are available as modules.
• Alternatively, statically compile support for the NIC into a custom kernel. Refer to /usr/src/sys/conf/NOTES, /usr/src/sys/arch/conf/NOTES and the manual page of the driver to determine which line to add to the custom kernel configuration file. For more information about recompiling the kernel, refer to Configuring the FreeBSD Kernel. If the NIC was detected at boot, the kernel does not need to be recompiled.

##### 13.5.1.1. Using Windows® NDIS Drivers

Unfortunately, there are still many vendors that do not provide schematics for their drivers to the open source community because they regard such information as trade secrets. Consequently, the developers of FreeBSD and other operating systems are left with two choices: develop the drivers by a long and painstaking process of reverse engineering or use the existing driver binaries available for Microsoft® Windows® platforms.

FreeBSD provides "native" support for the Network Driver Interface Specification (NDIS). It includes ndisgen(8) which can be used to convert a Windows® XP driver into a format that can be used on FreeBSD. As the ndis(4) driver uses a Windows® XP binary, it only runs on i386™ and amd64 systems. PCI, CardBus, PCMCIA, and USB devices are supported.

To use ndisgen(8), three things are needed:

1. FreeBSD kernel sources.
2. A Windows® XP driver binary with a .SYS extension.
3. A Windows® XP driver configuration file with a .INF extension.

Download the .SYS and .INF files for the specific NIC. Generally, these can be found on the driver CD or at the vendor’s website. The following examples use W32DRIVER.SYS and W32DRIVER.INF.

The driver bit width must match the version of FreeBSD. For FreeBSD/i386, use a Windows® 32-bit driver. For FreeBSD/amd64, a Windows® 64-bit driver is needed.

The next step is to compile the driver binary into a loadable kernel module. As root, use ndisgen(8):

# ndisgen /path/to/W32DRIVER.INF /path/to/W32DRIVER.SYS

This command is interactive and prompts for any extra information it requires. A new kernel module will be generated in the current directory. Use kldload(8) to load the new module:

# kldload ./W32DRIVER_SYS.ko

In addition to the generated kernel module, the ndis.ko and if_ndis.ko modules must be loaded. This should happen automatically when any module that depends on ndis(4) is loaded.
If not, load them manually, using the following commands: # kldload ndis # kldload if_ndis The first command loads the ndis(4) miniport driver wrapper and the second loads the generated NIC driver. Check dmesg(8) to see if there were any load errors. If all went well, the output should be similar to the following: ndis0: <Wireless-G PCI Adapter> mem 0xf4100000-0xf4101fff irq 3 at device 8.0 on pci1 ndis0: NDIS API version: 5.0 ndis0: Ethernet address: 0a:b1:2c:d3:4e:f5 ndis0: 11b rates: 1Mbps 2Mbps 5.5Mbps 11Mbps ndis0: 11g rates: 6Mbps 9Mbps 12Mbps 18Mbps 36Mbps 48Mbps 54Mbps From here, ndis0 can be configured like any other NIC. To configure the system to load the ndis(4) modules at boot time, copy the generated module, W32DRIVER_SYS.ko, to /boot/modules. Then, add the following line to /boot/loader.conf: W32DRIVER_SYS_load="YES" #### 13.5.2. Configuring the Network Card Once the right driver is loaded for the NIC, the card needs to be configured. It may have been configured at installation time by bsdinstall(8). To display the NIC configuration, enter the following command: % ifconfig dc0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=80008<VLAN_MTU,LINKSTATE> ether 00:a0:cc:da:da:da inet 192.168.1.3 netmask 0xffffff00 broadcast 192.168.1.255 media: Ethernet autoselect (100baseTX <full-duplex>) status: active dc1: flags=8802<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=80008<VLAN_MTU,LINKSTATE> ether 00:a0:cc:da:da:db inet 10.0.0.1 netmask 0xffffff00 broadcast 10.0.0.255 media: Ethernet 10baseT/UTP status: no carrier lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384 options=3<RXCSUM,TXCSUM> inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4 inet6 ::1 prefixlen 128 inet 127.0.0.1 netmask 0xff000000 nd6 options=3<PERFORMNUD,ACCEPT_RTADV> In this example, the following devices were displayed: • dc0: The first Ethernet interface. • dc1: The second Ethernet interface. • lo0: The loopback device. FreeBSD uses the driver name followed by the order in which the card is detected at boot to name the NIC. For example, sis2 is the third NIC on the system using the sis(4) driver. In this example, dc0 is up and running. The key indicators are: 1. UP means that the card is configured and ready. 2. The card has an Internet (inet) address, 192.168.1.3. 3. It has a valid subnet mask (netmask), where 0xffffff00 is the same as 255.255.255.0. 4. It has a valid broadcast address, 192.168.1.255. 5. The MAC address of the card (ether) is 00:a0:cc:da:da:da. 6. The physical media selection is on autoselection mode (media: Ethernet autoselect (100baseTX <full-duplex>)). In this example, dc1 is configured to run with 10baseT/UTP media. For more information on available media types for a driver, refer to its manual page. 7. The status of the link (status) is active, indicating that the carrier signal is detected. For dc1, the status: no carrier status is normal when an Ethernet cable is not plugged into the card. If the ifconfig(8) output had shown something similar to: dc0: flags=8843<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=80008<VLAN_MTU,LINKSTATE> ether 00:a0:cc:da:da:da media: Ethernet autoselect (100baseTX <full-duplex>) status: active it would indicate the card has not been configured. The card must be configured as root. The NIC configuration can be performed from the command line with ifconfig(8) but will not persist after a reboot unless the configuration is also added to /etc/rc.conf. 
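For example, a one-off assignment such as the following (a hedged sketch reusing the address from the example output above) takes effect immediately but is lost at the next reboot:

# ifconfig dc0 inet 192.168.1.3 netmask 255.255.255.0

To make the configuration persistent, add it to /etc/rc.conf as described next.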
If a DHCP server is present on the LAN, just add this line:

ifconfig_dc0="DHCP"

Replace dc0 with the correct value for the system. Once the line has been added, follow the instructions given in Testing and Troubleshooting.

If the network was configured during installation, some entries for the NIC(s) may be already present. Double check /etc/rc.conf before adding any lines.

If there is no DHCP server, the NIC(s) must be configured manually. Add a line for each NIC present on the system, as seen in this example:

ifconfig_dc0="inet 192.168.1.3 netmask 255.255.255.0"
ifconfig_dc1="inet 10.0.0.1 netmask 255.255.255.0 media 10baseT/UTP"

Replace dc0 and dc1 and the IP address information with the correct values for the system. Refer to the man page for the driver, ifconfig(8), and rc.conf(5) for more details about the allowed options and the syntax of /etc/rc.conf.

If the network is not using DNS, edit /etc/hosts to add the names and IP addresses of the hosts on the LAN, if they are not already there. For more information, refer to hosts(5) and to /usr/share/examples/etc/hosts.

If there is no DHCP server and access to the Internet is needed, manually configure the default gateway and the nameserver:

# sysrc defaultrouter="your_default_router"
# echo 'nameserver your_DNS_server' >> /etc/resolv.conf

#### 13.5.3. Testing and Troubleshooting

Once the necessary changes to /etc/rc.conf are saved, a reboot can be used to test the network configuration and to verify that the system restarts without any configuration errors. Alternatively, apply the settings to the networking system with this command:

# service netif restart

If a default gateway has been set in /etc/rc.conf, also issue this command:

# service routing restart

Once the networking system has been relaunched, test the NICs.

##### 13.5.3.1. Testing the Ethernet Card

To verify that an Ethernet card is configured correctly, ping(8) the interface itself, and then ping(8) another machine on the LAN:

% ping -c5 192.168.1.3
PING 192.168.1.3 (192.168.1.3): 56 data bytes
64 bytes from 192.168.1.3: icmp_seq=0 ttl=64 time=0.082 ms
64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.074 ms
64 bytes from 192.168.1.3: icmp_seq=2 ttl=64 time=0.076 ms
64 bytes from 192.168.1.3: icmp_seq=3 ttl=64 time=0.108 ms
64 bytes from 192.168.1.3: icmp_seq=4 ttl=64 time=0.076 ms

--- 192.168.1.3 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.074/0.083/0.108/0.013 ms

% ping -c5 192.168.1.2
PING 192.168.1.2 (192.168.1.2): 56 data bytes
64 bytes from 192.168.1.2: icmp_seq=0 ttl=64 time=0.726 ms
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.766 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.700 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.747 ms
64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=0.704 ms

--- 192.168.1.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.700/0.729/0.766/0.025 ms

To test network resolution, use the host name instead of the IP address. If there is no DNS server on the network, /etc/hosts must first be configured. To this purpose, edit /etc/hosts to add the names and IP addresses of the hosts on the LAN, if they are not already there. For more information, refer to hosts(5) and to /usr/share/examples/etc/hosts.

##### 13.5.3.2. Troubleshooting

When troubleshooting hardware and software configurations, check the simple things first. Is the network cable plugged in? Are the network services properly configured?
Is the firewall configured correctly? Is the NIC supported by FreeBSD? Before sending a bug report, always check the Hardware Notes, update the version of FreeBSD to the latest STABLE version, check the mailing list archives, and search the Internet. If the card works, yet performance is poor, read through tuning(7). Also, check the network configuration as incorrect network settings can cause slow connections. Some users experience one or two device timeout messages, which is normal for some cards. If they continue, or are bothersome, determine if the device is conflicting with another device. Double check the cable connections. Consider trying another card. To resolve watchdog timeout errors, first check the network cable. Many cards require a PCI slot which supports bus mastering. On some old motherboards, only one PCI slot allows it, usually slot 0. Check the NIC and the motherboard documentation to determine if that may be the problem. No route to host messages occur if the system is unable to route a packet to the destination host. This can happen if no default route is specified or if a cable is unplugged. Check the output of netstat -rn and make sure there is a valid route to the host. If there is not, read “Gateways and Routes”. ping: sendto: Permission denied error messages are often caused by a misconfigured firewall. If a firewall is enabled on FreeBSD but no rules have been defined, the default policy is to deny all traffic, even ping(8). Refer to Firewalls for more information. Sometimes performance of the card is poor or below average. In these cases, try setting the media selection mode from autoselect to the correct media selection. While this works for most hardware, it may or may not resolve the issue. Again, check all the network settings, and refer to tuning(7). ### 13.6. Virtual Hosts A common use of FreeBSD is virtual site hosting, where one server appears to the network as many servers. This is achieved by assigning multiple network addresses to a single interface. A given network interface has one "real" address, and may have any number of "alias" addresses. These aliases are normally added by placing alias entries in /etc/rc.conf, as seen in this example: ifconfig_fxp0_alias0="inet xxx.xxx.xxx.xxx netmask xxx.xxx.xxx.xxx" Alias entries must start with alias0 using a sequential number such as alias0, alias1, and so on. The configuration process will stop at the first missing number. The calculation of alias netmasks is important. For a given interface, there must be one address which correctly represents the network’s netmask. Any other addresses which fall within this network must have a netmask of all 1s, expressed as either 255.255.255.255 or 0xffffffff. For example, consider the case where the fxp0 interface is connected to two networks: 10.1.1.0 with a netmask of 255.255.255.0 and 202.0.75.16 with a netmask of 255.255.255.240. The system is to be configured to appear in the ranges 10.1.1.1 through 10.1.1.5 and 202.0.75.17 through 202.0.75.20. Only the first address in a given network range should have a real netmask. All the rest (10.1.1.2 through 10.1.1.5 and 202.0.75.18 through 202.0.75.20) must be configured with a netmask of 255.255.255.255. 
The following /etc/rc.conf entries configure the adapter correctly for this scenario: ifconfig_fxp0="inet 10.1.1.1 netmask 255.255.255.0" ifconfig_fxp0_alias0="inet 10.1.1.2 netmask 255.255.255.255" ifconfig_fxp0_alias1="inet 10.1.1.3 netmask 255.255.255.255" ifconfig_fxp0_alias2="inet 10.1.1.4 netmask 255.255.255.255" ifconfig_fxp0_alias3="inet 10.1.1.5 netmask 255.255.255.255" ifconfig_fxp0_alias4="inet 202.0.75.17 netmask 255.255.255.240" ifconfig_fxp0_alias5="inet 202.0.75.18 netmask 255.255.255.255" ifconfig_fxp0_alias6="inet 202.0.75.19 netmask 255.255.255.255" ifconfig_fxp0_alias7="inet 202.0.75.20 netmask 255.255.255.255" A simpler way to express this is with a space-separated list of IP address ranges. The first address will be given the indicated subnet mask and the additional addresses will have a subnet mask of 255.255.255.255. ifconfig_fxp0_aliases="inet 10.1.1.1-5/24 inet 202.0.75.17-20/28" ### 13.7. Configuring System Logging Generating and reading system logs is an important aspect of system administration. The information in system logs can be used to detect hardware and software issues as well as application and system configuration errors. This information also plays an important role in security auditing and incident response. Most system daemons and applications will generate log entries. FreeBSD provides a system logger, syslogd, to manage logging. By default, syslogd is started when the system boots. This is controlled by the variable syslogd_enable in /etc/rc.conf. There are numerous application arguments that can be set using syslogd_flags in /etc/rc.conf. Refer to syslogd(8) for more information on the available arguments. This section describes how to configure the FreeBSD system logger for both local and remote logging and how to perform log rotation and log management. #### 13.7.1. Configuring Local Logging The configuration file, /etc/syslog.conf, controls what syslogd does with log entries as they are received. There are several parameters to control the handling of incoming events. The facility describes which subsystem generated the message, such as the kernel or a daemon, and the level describes the severity of the event that occurred. This makes it possible to configure if and where a log message is logged, depending on the facility and level. It is also possible to take action depending on the application that sent the message, and in the case of remote logging, the hostname of the machine generating the logging event. This configuration file contains one line per action, where the syntax for each line is a selector field followed by an action field. The syntax of the selector field is facility.level which will match log messages from facility at level level or higher. It is also possible to add an optional comparison flag before the level to specify more precisely what is logged. Multiple selector fields can be used for the same action, and are separated with a semicolon (;). Using * will match everything. The action field denotes where to send the log message, such as to a file or remote log host. As an example, here is the default syslog.conf from FreeBSD: #$FreeBSD$# # Spaces ARE valid field separators in this file. However, # other *nix-like systems still insist on using tabs as field # separators. If you are sharing this file between systems, you # may want to use only tabs as field separators here. # Consult the syslog.conf(5) manpage. 
*.err;kern.warning;auth.notice;mail.crit /dev/console *.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err /var/log/messages security.* /var/log/security auth.info;authpriv.info /var/log/auth.log mail.info /var/log/maillog lpr.info /var/log/lpd-errs ftp.info /var/log/xferlog cron.* /var/log/cron !-devd *.=debug /var/log/debug.log *.emerg * # uncomment this to log all writes to /dev/console to /var/log/console.log #console.info /var/log/console.log # uncomment this to enable logging of all log messages to /var/log/all.log # touch /var/log/all.log and chmod it to mode 600 before it will work #*.* /var/log/all.log # uncomment this to enable logging to a remote loghost named loghost #*.* @loghost # uncomment these if you're running inn # news.crit /var/log/news/news.crit # news.err /var/log/news/news.err # news.notice /var/log/news/news.notice # Uncomment this if you wish to see messages produced by devd # !devd # *.>=info !ppp *.* /var/log/ppp.log !* In this example: • Line 8 matches all messages with a level of err or higher, as well as kern.warning, auth.notice and mail.crit, and sends these log messages to the console (/dev/console). • Line 12 matches all messages from the mail facility at level info or above and logs the messages to /var/log/maillog. • Line 17 uses a comparison flag (=) to only match messages at level debug and logs them to /var/log/debug.log. • Line 33 is an example usage of a program specification. This makes the rules following it only valid for the specified program. In this case, only the messages generated by ppp are logged to /var/log/ppp.log. The available levels, in order from most to least critical are emerg, alert, crit, err, warning, notice, info, and debug. The facilities, in no particular order, are auth, authpriv, console, cron, daemon, ftp, kern, lpr, mail, mark, news, security, syslog, user, uucp, and local0 through local7. Be aware that other operating systems might have different facilities. To log everything of level notice and higher to /var/log/daemon.log, add the following entry: daemon.notice /var/log/daemon.log For more information about the different levels and facilities, refer to syslog(3) and syslogd(8). For more information about /etc/syslog.conf, its syntax, and more advanced usage examples, see syslog.conf(5). #### 13.7.2. Log Management and Rotation Log files can grow quickly, taking up disk space and making it more difficult to locate useful information. Log management attempts to mitigate this. In FreeBSD, newsyslog is used to manage log files. This built-in program periodically rotates and compresses log files, and optionally creates missing log files and signals programs when log files are moved. The log files may be generated by syslogd or by any other program which generates log files. While newsyslog is normally run from cron(8), it is not a system daemon. In the default configuration, it runs every hour. To know which actions to take, newsyslog reads its configuration file, /etc/newsyslog.conf. This file contains one line for each log file that newsyslog manages. Each line states the file owner, permissions, when to rotate that file, optional flags that affect log rotation, such as compression, and programs to signal when the log is rotated. Here is the default configuration in FreeBSD: # configuration file for newsyslog #$FreeBSD$# # Entries which do not specify the '/pid_file' field will cause the # syslogd process to be signalled when that log file is rotated. 
This # action is only appropriate for log files which are written to by the # syslogd process (ie, files listed in /etc/syslog.conf). If there # is no process which needs to be signalled when a given log file is # rotated, then the entry for that file should include the 'N' flag. # # The 'flags' field is one or more of the letters: BCDGJNUXZ or a '-'. # # Note: some sites will want to select more restrictive protections than the # defaults. In particular, it may be desirable to switch many of the 644 # entries to 640 or 600. For example, some sites will consider the # contents of maillog, messages, and lpd-errs to be confidential. In the # future, these defaults may change to more conservative ones. # # logfilename [owner:group] mode count size when flags [/pid_file] [sig_num] /var/log/all.log 600 7 * @T00 J /var/log/amd.log 644 7 100 * J /var/log/auth.log 600 7 100 @0101T JC /var/log/console.log 600 5 100 * J /var/log/cron 600 3 100 * JC /var/log/daily.log 640 7 * @T00 JN /var/log/debug.log 600 7 100 * JC /var/log/kerberos.log 600 7 100 * J /var/log/lpd-errs 644 7 100 * JC /var/log/maillog 640 7 * @T00 JC /var/log/messages 644 5 100 @0101T JC /var/log/monthly.log 640 12 *$M1D0 JN /var/log/pflog 600 3 100 * JB /var/run/pflogd.pid /var/log/ppp.log root:network 640 3 100 * JC /var/log/devd.log 644 3 100 * JC /var/log/security 600 10 100 * JC /var/log/sendmail.st 640 10 * 168 B /var/log/utx.log 644 3 * @01T05 B /var/log/weekly.log 640 5 1 $W6D0 JN /var/log/xferlog 600 7 100 * JC Each line starts with the name of the log to be rotated, optionally followed by an owner and group for both rotated and newly created files. The mode field sets the permissions on the log file and count denotes how many rotated log files should be kept. The size and when fields tell newsyslog when to rotate the file. A log file is rotated when either its size is larger than the size field or when the time in the when field has passed. An asterisk (*) means that this field is ignored. The flags field gives further instructions, such as how to compress the rotated file or to create the log file if it is missing. The last two fields are optional and specify the name of the Process ID (PID) file of a process and a signal number to send to that process when the file is rotated. For more information on all fields, valid flags, and how to specify the rotation time, refer to newsyslog.conf(5). Since newsyslog is run from cron(8), it cannot rotate files more often than it is scheduled to run from cron(8). #### 13.7.3. Configuring Remote Logging Monitoring the log files of multiple hosts can become unwieldy as the number of systems increases. Configuring centralized logging can reduce some of the administrative burden of log file administration. In FreeBSD, centralized log file aggregation, merging, and rotation can be configured using syslogd and newsyslog. This section demonstrates an example configuration, where host A, named logserv.example.com, will collect logging information for the local network. Host B, named logclient.example.com, will be configured to pass logging information to the logging server. ##### 13.7.3.1. Log Server Configuration A log server is a system that has been configured to accept logging information from other hosts. Before configuring a log server, check the following: • If there is a firewall between the logging server and any logging clients, ensure that the firewall ruleset allows UDP port 514 for both the clients and the server. 
• The logging server and all client machines must have forward and reverse entries in the local DNS. If the network does not have a DNS server, create entries in each system’s /etc/hosts. Proper name resolution is required so that log entries are not rejected by the logging server. On the log server, edit /etc/syslog.conf to specify the name of the client to receive log entries from, the logging facility to be used, and the name of the log to store the host’s log entries. This example adds the hostname of B, logs all facilities, and stores the log entries in /var/log/logclient.log. Example 24. Sample Log Server Configuration +logclient.example.com *.* /var/log/logclient.log When adding multiple log clients, add a similar two-line entry for each client. More information about the available facilities may be found in syslog.conf(5). Next, configure /etc/rc.conf: syslogd_enable="YES" syslogd_flags="-a logclient.example.com -v -v" The first entry starts syslogd at system boot. The second entry allows log entries from the specified client. The -v -v increases the verbosity of logged messages. This is useful for tweaking facilities as administrators are able to see what type of messages are being logged under each facility. Multiple -a options may be specified to allow logging from multiple clients. IP addresses and whole netblocks may also be specified. Refer to syslogd(8) for a full list of possible options. Finally, create the log file: # touch /var/log/logclient.log At this point, syslogd should be restarted and verified: # service syslogd restart # pgrep syslog If a PID is returned, the server restarted successfully, and client configuration can begin. If the server did not restart, consult /var/log/messages for the error. ##### 13.7.3.2. Log Client Configuration A logging client sends log entries to a logging server on the network. The client also keeps a local copy of its own logs. Once a logging server has been configured, edit /etc/rc.conf on the logging client: syslogd_enable="YES" syslogd_flags="-s -v -v" The first entry enables syslogd on boot up. The second entry prevents logs from being accepted by this client from other hosts (-s) and increases the verbosity of logged messages. Next, define the logging server in the client’s /etc/syslog.conf. In this example, all logged facilities are sent to a remote system, denoted by the @ symbol, with the specified hostname: *.* @logserv.example.com After saving the edit, restart syslogd for the changes to take effect: # service syslogd restart To test that log messages are being sent across the network, use logger(1) on the client to send a message to syslogd: # logger "Test message from logclient" This message should now exist both in /var/log/messages on the client and /var/log/logclient.log on the log server. ##### 13.7.3.3. Debugging Log Servers If no messages are being received on the log server, the cause is most likely a network connectivity issue, a hostname resolution issue, or a typo in a configuration file. To isolate the cause, ensure that both the logging server and the logging client are able to ping each other using the hostname specified in their /etc/rc.conf. If this fails, check the network cabling, the firewall ruleset, and the hostname entries in the DNS server or /etc/hosts on both the logging server and clients. Repeat until the ping is successful from both hosts. 
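Beyond basic reachability, it can also help to confirm that syslogd on the server is actually bound to UDP port 514. A hedged example using sockstat(1):

# sockstat -4 -p 514

If no syslogd entry appears in the output, revisit the syslogd_flags setting shown above and restart the service before continuing.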
If the ping succeeds on both hosts but log messages are still not being received, temporarily increase logging verbosity to narrow down the configuration issue. In the following example, /var/log/logclient.log on the logging server is empty and /var/log/messages on the logging client does not indicate a reason for the failure. To increase debugging output, edit the syslogd_flags entry on the logging server and issue a restart: syslogd_flags="-d -a logclient.example.com -v -v" # service syslogd restart Debugging data similar to the following will flash on the console immediately after the restart: logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart syslogd: restarted logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel Logging to FILE /var/log/messages syslogd: kernel boot file is /boot/kernel/kernel cvthname(192.168.1.10) validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com; rejected in rule 0 due to name mismatch. In this example, the log messages are being rejected due to a typo which results in a hostname mismatch. The client’s hostname should be logclient, not logclien. Fix the typo, issue a restart, and verify the results: # service syslogd restart logmsg: pri 56, flags 4, from logserv.example.com, msg syslogd: restart syslogd: restarted logmsg: pri 6, flags 4, from logserv.example.com, msg syslogd: kernel boot file is /boot/kernel/kernel syslogd: kernel boot file is /boot/kernel/kernel logmsg: pri 166, flags 17, from logserv.example.com, msg Dec 10 20:55:02 <syslog.err> logserv.example.com syslogd: exiting on signal 2 cvthname(192.168.1.10) validate: dgram from IP 192.168.1.10, port 514, name logclient.example.com; accepted in rule 0. logmsg: pri 15, flags 0, from logclient.example.com, msg Dec 11 02:01:28 trhodes: Test message 2 Logging to FILE /var/log/logclient.log Logging to FILE /var/log/messages At this point, the messages are being properly received and placed in the correct file. ##### 13.7.3.4. Security Considerations As with any network service, security requirements should be considered before implementing a logging server. Log files may contain sensitive data about services enabled on the local host, user accounts, and configuration data. Network data sent from the client to the server will not be encrypted or password protected. If a need for encryption exists, consider using security/stunnel, which will transmit the logging data over an encrypted tunnel. Local security is also an issue. Log files are not encrypted during use or after log rotation. Local users may access log files to gain additional insight into system configuration. Setting proper permissions on log files is critical. The built-in log rotator, newsyslog, supports setting permissions on newly created and rotated log files. Setting log files to mode 600 should prevent unwanted access by local users. Refer to newsyslog.conf(5) for additional information. ### 13.8. Configuration Files #### 13.8.1. /etc Layout There are a number of directories in which configuration information is kept. These include: /etc Generic system-specific configuration information. /etc/defaults Default versions of system configuration files. /etc/mail Extra sendmail(8) configuration and other MTA configuration files. /etc/ppp Configuration for both user- and kernel-ppp programs. /usr/local/etc Configuration files for installed applications. May contain per-application subdirectories. 
/usr/local/etc/rc.d rc(8) scripts for installed applications. /var/db Automatically generated system-specific database files, such as the package database and the locate(1) database. #### 13.8.2. Hostnames ##### 13.8.2.1. /etc/resolv.conf How a FreeBSD system accesses the Internet Domain Name System (DNS) is controlled by resolv.conf(5). The most common entries to /etc/resolv.conf are: nameserver The IP address of a name server the resolver should query. The servers are queried in the order listed with a maximum of three. search Search list for hostname lookup. This is normally determined by the domain of the local hostname. domain The local domain name. A typical /etc/resolv.conf looks like this: search example.com nameserver 147.11.1.11 nameserver 147.11.100.30 Only one of the search and domain options should be used. When using DHCP, dhclient(8) usually rewrites /etc/resolv.conf with information received from the DHCP server. ##### 13.8.2.2. /etc/hosts /etc/hosts is a simple text database which works in conjunction with DNS and NIS to provide host name to IP address mappings. Entries for local computers connected via a LAN can be added to this file for simplistic naming purposes instead of setting up a named(8) server. Additionally, /etc/hosts can be used to provide a local record of Internet names, reducing the need to query external DNS servers for commonly accessed names. #$FreeBSD$# # # Host Database # # This file should contain the addresses and aliases for local hosts that # share this file. Replace 'my.domain' below with the domainname of your # machine. # # In the presence of the domain name service or NIS, this file may # not be consulted at all; see /etc/nsswitch.conf for the resolution order. # # ::1 localhost localhost.my.domain 127.0.0.1 localhost localhost.my.domain # # Imaginary network. #10.0.0.2 myname.my.domain myname #10.0.0.3 myfriend.my.domain myfriend # # According to RFC 1918, you can use the following IP networks for # private nets which will never be connected to the Internet: # # 10.0.0.0 - 10.255.255.255 # 172.16.0.0 - 172.31.255.255 # 192.168.0.0 - 192.168.255.255 # # In case you want to be able to connect to the Internet, you need # real official assigned numbers. Do not try to invent your own network # numbers but instead get one from your network provider (if any) or # from your regional registry (ARIN, APNIC, LACNIC, RIPE NCC, or AfriNIC.) # The format of /etc/hosts is as follows: [Internet address] [official hostname] [alias1] [alias2] ... For example: 10.0.0.1 myRealHostname.example.com myRealHostname foobar1 foobar2 Consult hosts(5) for more information. ### 13.9. Tuning with sysctl(8) sysctl(8) is used to make changes to a running FreeBSD system. This includes many advanced options of the TCP/IP stack and virtual memory system that can dramatically improve performance for an experienced system administrator. Over five hundred system variables can be read and set using sysctl(8). At its core, sysctl(8) serves two functions: to read and to modify system settings. To view all readable variables: % sysctl -a To read a particular variable, specify its name: % sysctl kern.maxproc kern.maxproc: 1044 To set a particular variable, use the variable=value syntax: # sysctl kern.maxfiles=5000 kern.maxfiles: 2088 -> 5000 Settings of sysctl variables are usually either strings, numbers, or booleans, where a boolean is 1 for yes or 0 for no. To automatically set some variables each time the machine boots, add them to /etc/sysctl.conf. 
For more information, refer to sysctl.conf(5) and sysctl.conf. #### 13.9.1. sysctl.conf The configuration file for sysctl(8), /etc/sysctl.conf, looks much like /etc/rc.conf. Values are set in a variable=value form. The specified values are set after the system goes into multi-user mode. Not all variables are settable in this mode. For example, to turn off logging of fatal signal exits and prevent users from seeing processes started by other users, the following tunables can be set in /etc/sysctl.conf: # Do not log fatal signal exits (e.g., sig 11) kern.logsigexit=0 # Prevent users from seeing information about processes that # are being run under another UID. security.bsd.see_other_uids=0 #### 13.9.2. sysctl(8) Read-only In some cases it may be desirable to modify read-only sysctl(8) values, which will require a reboot of the system. For instance, on some laptop models the cardbus(4) device will not probe memory ranges and will fail with errors similar to: cbb0: Could not map register memory device_probe_and_attach: cbb0 attach returned 12 The fix requires the modification of a read-only sysctl(8) setting. Add hw.pci.allow_unsupported_io_range=1 to /boot/loader.conf and reboot. Now cardbus(4) should work properly. ### 13.10. Tuning Disks The following section will discuss various tuning mechanisms and options which may be applied to disk devices. In many cases, disks with mechanical parts, such as SCSI drives, will be the bottleneck driving down the overall system performance. While a solution is to install a drive without mechanical parts, such as a solid state drive, mechanical drives are not going away anytime in the near future. When tuning disks, it is advisable to utilize the features of the iostat(8) command to test various changes to the system. This command will allow the user to obtain valuable information on system IO. #### 13.10.1. Sysctl Variables ##### 13.10.1.1. vfs.vmiodirenable The vfs.vmiodirenable sysctl(8) variable may be set to either 0 (off) or 1 (on). It is set to 1 by default. This variable controls how directories are cached by the system. Most directories are small, using just a single fragment (typically 1 K) in the file system and typically 512 bytes in the buffer cache. With this variable turned off, the buffer cache will only cache a fixed number of directories, even if the system has a huge amount of memory. When turned on, this sysctl(8) allows the buffer cache to use the VM page cache to cache the directories, making all the memory available for caching directories. However, the minimum in-core memory used to cache a directory is the physical page size (typically 4 K) rather than 512 bytes. Keeping this option enabled is recommended if the system is running any services which manipulate large numbers of files. Such services can include web caches, large mail systems, and news systems. Keeping this option on will generally not reduce performance, even with the wasted memory, but one should experiment to find out. ##### 13.10.1.2. vfs.write_behind The vfs.write_behind sysctl(8) variable defaults to 1 (on). This tells the file system to issue media writes as full clusters are collected, which typically occurs when writing large sequential files. This avoids saturating the buffer cache with dirty buffers when it would not benefit I/O performance. However, this may stall processes and under certain circumstances should be turned off. ##### 13.10.1.3. 
vfs.hirunningspace The vfs.hirunningspace sysctl(8) variable determines how much outstanding write I/O may be queued to disk controllers system-wide at any given instance. The default is usually sufficient, but on machines with many disks, try bumping it up to four or five megabytes. Setting too high a value which exceeds the buffer cache’s write threshold can lead to bad clustering performance. Do not set this value arbitrarily high as higher write values may add latency to reads occurring at the same time. There are various other buffer cache and VM page cache related sysctl(8) values. Modifying these values is not recommended as the VM system does a good job of automatically tuning itself. ##### 13.10.1.4. vm.swap_idle_enabled The vm.swap_idle_enabled sysctl(8) variable is useful in large multi-user systems with many active login users and lots of idle processes. Such systems tend to generate continuous pressure on free memory reserves. Turning this feature on and tweaking the swapout hysteresis (in idle seconds) via vm.swap_idle_threshold1 and vm.swap_idle_threshold2 depresses the priority of memory pages associated with idle processes more quickly then the normal pageout algorithm. This gives a helping hand to the pageout daemon. Only turn this option on if needed, because the tradeoff is essentially pre-page memory sooner rather than later which eats more swap and disk bandwidth. In a small system this option will have a determinable effect, but in a large system that is already doing moderate paging, this option allows the VM system to stage whole processes into and out of memory easily. ##### 13.10.1.5. hw.ata.wc Turning off IDE write caching reduces write bandwidth to IDE disks, but may sometimes be necessary due to data consistency issues introduced by hard drive vendors. The problem is that some IDE drives lie about when a write completes. With IDE write caching turned on, IDE hard drives write data to disk out of order and will sometimes delay writing some blocks indefinitely when under heavy disk load. A crash or power failure may cause serious file system corruption. Check the default on the system by observing the hw.ata.wc sysctl(8) variable. If IDE write caching is turned off, one can set this read-only variable to 1 in /boot/loader.conf in order to enable it at boot time. For more information, refer to ata(4). ##### 13.10.1.6. SCSI_DELAY (kern.cam.scsi_delay) The SCSI_DELAY kernel configuration option may be used to reduce system boot times. The defaults are fairly high and can be responsible for 15 seconds of delay in the boot process. Reducing it to 5 seconds usually works with modern drives. The kern.cam.scsi_delay boot time tunable should be used. The tunable and kernel configuration option accept values in terms of milliseconds and not seconds. #### 13.10.2. Soft Updates To fine-tune a file system, use tunefs(8). This program has many different options. To toggle Soft Updates on and off, use: # tunefs -n enable /filesystem # tunefs -n disable /filesystem A file system cannot be modified with tunefs(8) while it is mounted. A good time to enable Soft Updates is before any partitions have been mounted, in single-user mode. Soft Updates is recommended for UFS file systems as it drastically improves meta-data performance, mainly file creation and deletion, through the use of a memory cache. There are two downsides to Soft Updates to be aware of. 
First, Soft Updates guarantee file system consistency in the case of a crash, but could easily be several seconds or even a minute behind updating the physical disk. If the system crashes, unwritten data may be lost. Secondly, Soft Updates delay the freeing of file system blocks. If the root file system is almost full, performing a major update, such as make installworld, can cause the file system to run out of space and the update to fail. ##### 13.10.2.1. More Details About Soft Updates Meta-data updates are updates to non-content data like inodes or directories. There are two traditional approaches to writing a file system’s meta-data back to disk. Historically, the default behavior was to write out meta-data updates synchronously. If a directory changed, the system waited until the change was actually written to disk. The file data buffers (file contents) were passed through the buffer cache and backed up to disk later on asynchronously. The advantage of this implementation is that it operates safely. If there is a failure during an update, meta-data is always in a consistent state. A file is either created completely or not at all. If the data blocks of a file did not find their way out of the buffer cache onto the disk by the time of the crash, fsck(8) recognizes this and repairs the file system by setting the file length to 0. Additionally, the implementation is clear and simple. The disadvantage is that meta-data changes are slow. For example, rm -r touches all the files in a directory sequentially, but each directory change will be written synchronously to the disk. This includes updates to the directory itself, to the inode table, and possibly to indirect blocks allocated by the file. Similar considerations apply for unrolling large hierarchies using tar -x. The second approach is to use asynchronous meta-data updates. This is the default for a UFS file system mounted with mount -o async. Since all meta-data updates are also passed through the buffer cache, they will be intermixed with the updates of the file content data. The advantage of this implementation is there is no need to wait until each meta-data update has been written to disk, so all operations which cause huge amounts of meta-data updates work much faster than in the synchronous case. This implementation is still clear and simple, so there is a low risk for bugs creeping into the code. The disadvantage is that there is no guarantee for a consistent state of the file system If there is a failure during an operation that updated large amounts of meta-data, like a power failure or someone pressing the reset button, the file system will be left in an unpredictable state. There is no opportunity to examine the state of the file system when the system comes up again as the data blocks of a file could already have been written to the disk while the updates of the inode table or the associated directory were not. It is impossible to implement a fsck(8) which is able to clean up the resulting chaos because the necessary information is not available on the disk. If the file system has been damaged beyond repair, the only choice is to reformat it and restore from backup. The usual solution for this problem is to implement dirty region logging, which is also referred to as journaling. Meta-data updates are still written synchronously, but only into a small region of the disk. Later on, they are moved to their proper location. 
Since the logging area is a small, contiguous region on the disk, there are no long distances for the disk heads to move, even during heavy operations, so these operations are quicker than synchronous updates. Additionally, the complexity of the implementation is limited, so the risk of bugs being present is low. A disadvantage is that all meta-data is written twice, once into the logging region and once to the proper location, so performance "pessimization" might result. On the other hand, in case of a crash, all pending meta-data operations can be either quickly rolled back or completed from the logging area after the system comes up again, resulting in a fast file system startup. Kirk McKusick, the developer of Berkeley FFS, solved this problem with Soft Updates. All pending meta-data updates are kept in memory and written out to disk in a sorted sequence ("ordered meta-data updates"). This has the effect that, in case of heavy meta-data operations, later updates to an item "catch" the earlier ones which are still in memory and have not already been written to disk. All operations are generally performed in memory before the update is written to disk and the data blocks are sorted according to their position so that they will not be on the disk ahead of their meta-data. If the system crashes, an implicit "log rewind" causes all operations which were not written to the disk appear as if they never happened. A consistent file system state is maintained that appears to be the one of 30 to 60 seconds earlier. The algorithm used guarantees that all resources in use are marked as such in their blocks and inodes. After a crash, the only resource allocation error that occurs is that resources are marked as "used" which are actually "free". fsck(8) recognizes this situation, and frees the resources that are no longer used. It is safe to ignore the dirty state of the file system after a crash by forcibly mounting it with mount -f. In order to free resources that may be unused, fsck(8) needs to be run at a later time. This is the idea behind the background fsck(8): at system startup time, only a snapshot of the file system is recorded and fsck(8) is run afterwards. All file systems can then be mounted "dirty", so the system startup proceeds in multi-user mode. Then, background fsck(8) is scheduled for all file systems where this is required, to free resources that may be unused. File systems that do not use Soft Updates still need the usual foreground fsck(8). The advantage is that meta-data operations are nearly as fast as asynchronous updates and are faster than logging, which has to write the meta-data twice. The disadvantages are the complexity of the code, a higher memory consumption, and some idiosyncrasies. After a crash, the state of the file system appears to be somewhat "older". In situations where the standard synchronous approach would have caused some zero-length files to remain after the fsck(8), these files do not exist at all with Soft Updates because neither the meta-data nor the file contents have been written to disk. Disk space is not released until the updates have been written to disk, which may take place some time after running rm(1). This may cause problems when installing large amounts of data on a file system that does not have enough free space to hold all the files twice. ### 13.11. Tuning Kernel Limits #### 13.11.1. File/Process Limits ##### 13.11.1.1. kern.maxfiles The kern.maxfiles sysctl(8) variable can be raised or lowered based upon system requirements. 
This variable indicates the maximum number of file descriptors on the system. When the file descriptor table is full, file: table is full will show up repeatedly in the system message buffer, which can be viewed using dmesg(8). Each open file, socket, or fifo uses one file descriptor. A large-scale production server may easily require many thousands of file descriptors, depending on the kind and number of services running concurrently. In older FreeBSD releases, the default value of kern.maxfiles is derived from maxusers in the kernel configuration file. kern.maxfiles grows proportionally to the value of maxusers. When compiling a custom kernel, consider setting this kernel configuration option according to the use of the system. From this number, the kernel is given most of its pre-defined limits. Even though a production machine may not have 256 concurrent users, the resources needed may be similar to a high-scale web server. The read-only sysctl(8) variable kern.maxusers is automatically sized at boot based on the amount of memory available in the system, and may be determined at run-time by inspecting the value of kern.maxusers. Some systems require larger or smaller values of kern.maxusers and values of 64, 128, and 256 are not uncommon. Going above 256 is not recommended unless a huge number of file descriptors is needed. Many of the tunable values set to their defaults by kern.maxusers may be individually overridden at boot-time or run-time in /boot/loader.conf. Refer to loader.conf(5) and /boot/defaults/loader.conf for more details and some hints. In older releases, the system will auto-tune maxusers if it is set to 0. [2]. When setting this option, set maxusers to at least 4, especially if the system runs Xorg or is used to compile software. The most important table set by maxusers is the maximum number of processes, which is set to 20 + 16 * maxusers. If maxusers is set to 1, there can only be 36 simultaneous processes, including the 18 or so that the system starts up at boot time and the 15 or so used by Xorg. Even a simple task like reading a manual page will start up nine processes to filter, decompress, and view it. Setting maxusers to 64 allows up to 1044 simultaneous processes, which should be enough for nearly all uses. If, however, the error is displayed when trying to start another program, or a server is running with a large number of simultaneous users, increase the number and rebuild. maxusers does not limit the number of users which can log into the machine. It instead sets various table sizes to reasonable values considering the maximum number of users on the system and how many processes each user will be running. ##### 13.11.1.2. kern.ipc.soacceptqueue The kern.ipc.soacceptqueue sysctl(8) variable limits the size of the listen queue for accepting new TCP connections. The default value of 128 is typically too low for robust handling of new connections on a heavily loaded web server. For such environments, it is recommended to increase this value to 1024 or higher. A service such as sendmail(8), or Apache may itself limit the listen queue size, but will often have a directive in its configuration file to adjust the queue size. Large listen queues do a better job of avoiding Denial of Service (DoS) attacks. #### 13.11.2. Network Limits The NMBCLUSTERS kernel configuration option dictates the amount of network Mbufs available to the system. A heavily-trafficked server with a low number of Mbufs will hinder performance. 
Each cluster represents approximately 2 K of memory, so a value of 1024 represents 2 megabytes of kernel memory reserved for network buffers. A simple calculation can be done to figure out how many are needed. A web server which maxes out at 1000 simultaneous connections where each connection uses a 16 K receive and 16 K send buffer, requires approximately 32 MB worth of network buffers to cover the web server. A good rule of thumb is to multiply by 2, so 2 x 32 MB / 2 KB = 64 MB / 2 KB = 32768. Values between 4096 and 32768 are recommended for machines with greater amounts of memory. Never specify an arbitrarily high value for this parameter as it could lead to a boot-time crash. To observe network cluster usage, use -m with netstat(1). The kern.ipc.nmbclusters loader tunable should be used to tune this at boot time. Only older versions of FreeBSD will require the use of the NMBCLUSTERS kernel config(8) option.

For busy servers that make extensive use of the sendfile(2) system call, it may be necessary to increase the number of sendfile(2) buffers via the NSFBUFS kernel configuration option or by setting its value in /boot/loader.conf (see loader(8) for details). A common indicator that this parameter needs to be adjusted is when processes are seen in the sfbufa state. The sysctl(8) variable kern.ipc.nsfbufs is read-only. This parameter nominally scales with kern.maxusers, but it may still need to be tuned to match the workload. Even though a socket has been marked as non-blocking, calling sendfile(2) on the non-blocking socket may result in the sendfile(2) call blocking until enough struct sf_buf's are made available.

##### 13.11.2.1. net.inet.ip.portrange.*

The net.inet.ip.portrange.* sysctl(8) variables control the port number ranges automatically bound to TCP and UDP sockets. There are three ranges: a low range, a default range, and a high range. Most network programs use the default range which is controlled by net.inet.ip.portrange.first and net.inet.ip.portrange.last, which default to 1024 and 5000, respectively. Bound port ranges are used for outgoing connections and it is possible to run the system out of ports under certain circumstances. This most commonly occurs when running a heavily loaded web proxy. The port range is not an issue when running a server which handles mainly incoming connections, such as a web server, or has a limited number of outgoing connections, such as a mail relay. For situations where there is a shortage of ports, it is recommended to increase net.inet.ip.portrange.last modestly. A value of 10000, 20000 or 30000 may be reasonable. Consider firewall effects when changing the port range. Some firewalls may block large ranges of ports, usually low-numbered ports, and expect systems to use higher ranges of ports for outgoing connections. For this reason, it is not recommended that the value of net.inet.ip.portrange.first be lowered.

#### 13.11.3. Virtual Memory

##### 13.11.3.1. kern.maxvnodes

A vnode is the internal representation of a file or directory. Increasing the number of vnodes available to the operating system reduces disk I/O. Normally, this is handled by the operating system and does not need to be changed. In some cases where disk I/O is a bottleneck and the system is running out of vnodes, this setting needs to be increased. The amount of inactive and free RAM will need to be taken into account.
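Before moving on to vnodes, here is a hedged illustration of the network tunables discussed above; the numbers are examples, not recommendations for every system. The cluster count is a boot-time tunable set in /boot/loader.conf:

kern.ipc.nmbclusters="32768"

The outgoing port range can be widened at run time, and the same name=value pair can be added to /etc/sysctl.conf to make it permanent:

# sysctl net.inet.ip.portrange.last=10000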
To see the current number of vnodes in use: # sysctl vfs.numvnodes vfs.numvnodes: 91349 To see the maximum vnodes: # sysctl kern.maxvnodes kern.maxvnodes: 100000 If the current vnode usage is near the maximum, try increasing kern.maxvnodes by a value of 1000. Keep an eye on the number of vfs.numvnodes. If it climbs up to the maximum again, kern.maxvnodes will need to be increased further. Otherwise, a shift in memory usage as reported by top(1) should be visible and more memory should be active. ### 13.12. Adding Swap Space Sometimes a system requires more swap space. This section describes two methods to increase swap space: adding swap to an existing partition or new hard drive, and creating a swap file on an existing partition. For information on how to encrypt swap space, which options exist, and why it should be done, refer to “Encrypting Swap”. #### 13.12.1. Swap on a New Hard Drive or Existing Partition Adding a new hard drive for swap gives better performance than using a partition on an existing drive. Setting up partitions and hard drives is explained in “Adding Disks” while “Designing the Partition Layout” discusses partition layouts and swap partition size considerations. Use swapon to add a swap partition to the system. For example: # swapon /dev/ada1s1b It is possible to use any partition not currently mounted, even if it already contains data. Using swapon on a partition that contains data will overwrite and destroy that data. Make sure that the partition to be added as swap is really the intended partition before running swapon. To automatically add this swap partition on boot, add an entry to /etc/fstab: /dev/ada1s1b none swap sw 0 0 See fstab(5) for an explanation of the entries in /etc/fstab. More information about swapon can be found in swapon(8). #### 13.12.2. Creating a Swap File These examples create a 512M swap file called /usr/swap0 instead of using a partition. Using swap files requires that the module needed by md(4) has either been built into the kernel or has been loaded before swap is enabled. See Configuring the FreeBSD Kernel for information about building a custom kernel. Example 25. Creating a Swap File 1. Create the swap file: # dd if=/dev/zero of=/usr/swap0 bs=1m count=512 2. Set the proper permissions on the new file: # chmod 0600 /usr/swap0 3. Inform the system about the swap file by adding a line to /etc/fstab: md none swap sw,file=/usr/swap0,late 0 0 4. Swap space will be added on system startup. To add swap space immediately, use swapon(8): # swapon -aL ### 13.13. Power and Resource Management It is important to utilize hardware resources in an efficient manner. Power and resource management allows the operating system to monitor system limits and to possibly provide an alert if the system temperature increases unexpectedly. An early specification for providing power management was the Advanced Power Management (APM) facility. APM controls the power usage of a system based on its activity. However, it was difficult and inflexible for operating systems to manage the power usage and thermal properties of a system. The hardware was managed by the BIOS and the user had limited configurability and visibility into the power management settings. The APMBIOS is supplied by the vendor and is specific to the hardware platform. An APM driver in the operating system mediates access to the APM Software Interface, which allows management of power levels. There are four major problems in APM. 
First, power management is done by the vendor-specific BIOS, separate from the operating system. For example, the user can set idle-time values for a hard drive in the APMBIOS so that, when exceeded, the BIOS spins down the hard drive without the consent of the operating system. Second, the APM logic is embedded in the BIOS, and it operates outside the scope of the operating system. This means that users can only fix problems in the APMBIOS by flashing a new one into the ROM, which is a dangerous procedure with the potential to leave the system in an unrecoverable state if it fails. Third, APM is a vendor-specific technology, meaning that there is a lot of duplication of effort and bugs found in one vendor’s BIOS may not be solved in others. Lastly, the APMBIOS did not have enough room to implement a sophisticated power policy or one that can adapt well to the purpose of the machine. The Plug and Play BIOS (PNPBIOS) was unreliable in many situations. PNPBIOS is 16-bit technology, so the operating system has to use 16-bit emulation in order to interface with PNPBIOS methods. FreeBSD provides an APM driver as APM should still be used for systems manufactured at or before the year 2000. The driver is documented in apm(4).

The successor to APM is the Advanced Configuration and Power Interface (ACPI). ACPI is a standard written by an alliance of vendors to provide an interface for hardware resources and power management. It is a key element in Operating System-directed configuration and Power Management as it provides more control and flexibility to the operating system. This chapter demonstrates how to configure ACPI on FreeBSD. It then offers some tips on how to debug ACPI and how to submit a problem report containing debugging information so that developers can diagnose and fix ACPI issues.

#### 13.13.1. Configuring ACPI

In FreeBSD the acpi(4) driver is loaded by default at system boot and should not be compiled into the kernel. This driver cannot be unloaded after boot because the system bus uses it for various hardware interactions. However, if the system is experiencing problems, ACPI can be disabled altogether by rebooting after setting hint.acpi.0.disabled="1" in /boot/loader.conf or by setting this variable at the loader prompt, as described in “Stage Three”. ACPI and APM cannot coexist and should be used separately. The last one to load will terminate if the driver notices the other is running.

ACPI can be used to put the system into a sleep mode with acpiconf, the -s flag, and a number from 1 to 5. Most users only need 1 (quick suspend to RAM) or 3 (suspend to RAM). Option 5 performs a soft-off which is the same as running halt -p.

The acpi_video(4) driver uses ACPI Video Extensions to control display switching and backlight brightness. It must be loaded after any of the DRM kernel modules. After loading the driver, the Fn brightness keys will change the brightness of the screen. It is possible to check the ACPI events by inspecting /var/run/devd.pipe:

# cat /var/run/devd.pipe
!system=ACPI subsystem=Video type=brightness notify=62
!system=ACPI subsystem=Video type=brightness notify=63
!system=ACPI subsystem=Video type=brightness notify=64
...

Other options are available using sysctl. Refer to acpi(4) and acpiconf(8) for more information.

#### 13.13.2. Common Problems

ACPI is present in all modern computers that conform to the ia32 (x86) and amd64 (AMD) architectures.
The full standard has many features including CPU performance management, power planes control, thermal zones, various battery systems, embedded controllers, and bus enumeration. Most systems implement less than the full standard. For instance, a desktop system usually only implements bus enumeration while a laptop might have cooling and battery management support as well. Laptops also have suspend and resume, with their own associated complexity. An ACPI-compliant system has various components. The BIOS and chipset vendors provide various fixed tables, such as FADT, in memory that specify things like the APIC map (used for SMP), config registers, and simple configuration values. Additionally, a bytecode table, the Differentiated System Description Table DSDT, specifies a tree-like name space of devices and methods. The ACPI driver must parse the fixed tables, implement an interpreter for the bytecode, and modify device drivers and the kernel to accept information from the ACPI subsystem. For FreeBSD, Intel® has provided an interpreter (ACPI-CA) that is shared with Linux® and NetBSD. The path to the ACPI-CA source code is src/sys/contrib/dev/acpica. The glue code that allows ACPI-CA to work on FreeBSD is in src/sys/dev/acpica/Osd. Finally, drivers that implement various ACPI devices are found in src/sys/dev/acpica. For ACPI to work correctly, all the parts have to work correctly. Here are some common problems, in order of frequency of appearance, and some possible workarounds or fixes. If a fix does not resolve the issue, refer to Getting and Submitting Debugging Info for instructions on how to submit a bug report. ##### 13.13.2.1. Mouse Issues In some cases, resuming from a suspend operation will cause the mouse to fail. A known work around is to add hint.psm.0.flags="0x3000" to /boot/loader.conf. ##### 13.13.2.2. Suspend/Resume ACPI has three suspend to RAM (STR) states, S1-S3, and one suspend to disk state (STD), called S4. STD can be implemented in two separate ways. The S4BIOS is a BIOS-assisted suspend to disk and S4OS is implemented entirely by the operating system. The normal state the system is in when plugged in but not powered up is "soft off" (S5). Use sysctl hw.acpi to check for the suspend-related items. These example results are from a Thinkpad: hw.acpi.supported_sleep_state: S3 S4 S5 hw.acpi.s4bios: 0 Use acpiconf -s to test S3, S4, and S5. An s4bios of one (1) indicates S4BIOS support instead of S4 operating system support. When testing suspend/resume, start with S1, if supported. This state is most likely to work since it does not require much driver support. No one has implemented S2, which is similar to S1. Next, try S3. This is the deepest STR state and requires a lot of driver support to properly reinitialize the hardware. A common problem with suspend/resume is that many device drivers do not save, restore, or reinitialize their firmware, registers, or device memory properly. As a first attempt at debugging the problem, try: # sysctl debug.bootverbose=1 # sysctl debug.acpi.suspend_bounce=1 # acpiconf -s 3 This test emulates the suspend/resume cycle of all device drivers without actually going into S3 state. In some cases, problems such as losing firmware state, device watchdog time out, and retrying forever, can be captured with this method. Note that the system will not really enter S3 state, which means devices may not lose power, and many will work fine even if suspend/resume methods are totally missing, unlike real S3 state. 
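For repeated testing, the three commands above can be wrapped in a small script. This is only a sketch; it assumes acpiconf(8) and the debug.acpi.suspend_bounce sysctl are available on the system:

#!/bin/sh
# Dry-run the suspend/resume methods of all drivers without entering S3.
sysctl debug.bootverbose=1
sysctl debug.acpi.suspend_bounce=1
acpiconf -s 3
# Restore quieter kernel logging afterwards.
sysctl debug.bootverbose=0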
If the previous test worked, on a laptop it is possible to configure the system to suspend into S3 on lid close and resume when it is opened again:

# sysctl hw.acpi.lid_switch_state=S3

This change can be made persistent across reboots:

# echo 'hw.acpi.lid_switch_state=S3' >> /etc/sysctl.conf

Harder cases require additional hardware, such as a serial port and cable for debugging through a serial console, a Firewire port and cable for using dcons(4), and kernel debugging skills. To help isolate the problem, unload as many drivers as possible. If it works, narrow down which driver is the problem by loading drivers until it fails again. Typically, binary drivers like nvidia.ko, display drivers, and USB will have the most problems while Ethernet interfaces usually work fine. If drivers can be properly loaded and unloaded, automate this by putting the appropriate commands in /etc/rc.suspend and /etc/rc.resume. Try setting hw.acpi.reset_video to 1 if the display is messed up after resume. Try setting longer or shorter values for hw.acpi.sleep_delay to see if that helps. Try loading a recent Linux® distribution to see if suspend/resume works on the same hardware. If it works on Linux®, it is likely a FreeBSD driver problem. Narrowing down which driver causes the problem will assist developers in fixing the problem. Since the ACPI maintainers rarely maintain other drivers, such as sound or ATA, any driver problems should also be posted to the FreeBSD-CURRENT mailing list and mailed to the driver maintainer. Advanced users can include debugging printf(3)s in a problematic driver to track down where in its resume function it hangs. Finally, try disabling ACPI and enabling APM instead. If suspend/resume works with APM, stick with APM, especially on older hardware (pre-2000). It took vendors a while to get ACPI support correct and older hardware is more likely to have BIOS problems with ACPI.

##### 13.13.2.3. System Hangs

Most system hangs are a result of lost interrupts or an interrupt storm. Chipsets may have problems based on how the BIOS configures interrupts before boot, the correctness of the APIC (MADT) table, and the routing of the System Control Interrupt (SCI). Interrupt storms can be distinguished from lost interrupts by checking the output of vmstat -i and looking at the line that has acpi0. If the counter is increasing at more than a couple of interrupts per second, there is an interrupt storm. If the system appears hung, try breaking to DDB (CTRL+ALT+ESC on console) and type show interrupts. When dealing with interrupt problems, try disabling APIC support with hint.apic.0.disabled="1" in /boot/loader.conf.

##### 13.13.2.4. Panics

Panics are relatively rare for ACPI and are the top priority to be fixed. The first step is to isolate the steps to reproduce the panic, if possible, and get a backtrace. Follow the advice for enabling options DDB and setting up a serial console in “Entering the DDB Debugger from the Serial Line” or setting up a dump partition. To get a backtrace in DDB, use tr. When handwriting the backtrace, get at least the last five and the top five lines in the trace. Then, try to isolate the problem by booting with ACPI disabled. If that works, isolate the ACPI subsystem by using various values of debug.acpi.disable. See acpi(4) for some examples.

##### 13.13.2.5. System Powers Up After Suspend or Shutdown

First, try setting hw.acpi.disable_on_poweroff="0" in /boot/loader.conf. This keeps ACPI from disabling various events during the shutdown process.
Some systems need this value set to 1 (the default) for the same reason. This usually fixes the problem of a system powering up spontaneously after a suspend or poweroff. ##### 13.13.2.6. BIOS Contains Buggy Bytecode Some BIOS vendors provide incorrect or buggy bytecode. This is usually manifested by kernel console messages like this: ACPI-1287: *** Error: Method execution failed [\\_SB_.PCI0.LPC0.FIGD._STA] \\ (Node 0xc3f6d160), AE_NOT_FOUND Often, these problems may be resolved by updating the BIOS to the latest revision. Most console messages are harmless, but if there are other problems, like the battery status is not working, these messages are a good place to start looking for problems. #### 13.13.3. Overriding the Default AML The BIOS bytecode, known as ACPI Machine Language (AML), is compiled from a source language called ACPI Source Language (ASL). The AML is found in the table known as the Differentiated System Description Table (DSDT). The goal of FreeBSD is for everyone to have working ACPI without any user intervention. Workarounds are still being developed for common mistakes made by BIOS vendors. The Microsoft® interpreter (acpi.sys and acpiec.sys) does not strictly check for adherence to the standard, and thus many BIOS vendors who only test ACPI under Windows® never fix their ASL. FreeBSD developers continue to identify and document which non-standard behavior is allowed by Microsoft®'s interpreter and replicate it so that FreeBSD can work without forcing users to fix the ASL. To help identify buggy behavior and possibly fix it manually, a copy can be made of the system’s ASL. To copy the system’s ASL to a specified file name, use acpidump with -t, to show the contents of the fixed tables, and -d, to disassemble the AML: # acpidump -td > my.asl Some AML versions assume the user is running Windows®. To override this, set hw.acpi.osname="Windows 2009" in /boot/loader.conf, using the most recent Windows® version listed in the ASL. Other workarounds may require my.asl to be customized. If this file is edited, compile the new ASL using the following command. Warnings can usually be ignored, but errors are bugs that will usually prevent ACPI from working correctly. # iasl -f my.asl Including -f forces creation of the AML, even if there are errors during compilation. Some errors, such as missing return statements, are automatically worked around by the FreeBSD interpreter. The default output filename for iasl is DSDT.aml. Load this file instead of the BIOS’s buggy copy, which is still present in flash memory, by editing /boot/loader.conf as follows: acpi_dsdt_load="YES" acpi_dsdt_name="/boot/DSDT.aml" Be sure to copy DSDT.aml to /boot, then reboot the system. If this fixes the problem, send a diff(1) of the old and new ASL to FreeBSD ACPI mailing list so that developers can work around the buggy behavior in acpica. #### 13.13.4. Getting and Submitting Debugging Info The ACPI driver has a flexible debugging facility. A set of subsystems and the level of verbosity can be specified. The subsystems to debug are specified as layers and are broken down into components (ACPI_ALL_COMPONENTS) and ACPI hardware support (ACPI_ALL_DRIVERS). The verbosity of debugging output is specified as the level and ranges from just report errors (ACPI_LV_ERROR) to everything (ACPI_LV_VERBOSE). The level is a bitmask so multiple options can be set at once, separated by spaces. In practice, a serial console should be used to log the output so it is not lost as the console message buffer flushes. 
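One hedged way to do that, assuming the machine has a standard serial port, is to direct the console to the serial line in /boot/loader.conf so that the ACPI debug messages can be captured on another machine:

boot_serial="YES"
console="comconsole"

Refer to loader.conf(5) for the available console settings.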
A full list of the individual layers and levels is found in acpi(4). Debugging output is not enabled by default. To enable it, add options ACPI_DEBUG to the custom kernel configuration file if ACPI is compiled into the kernel. Add ACPI_DEBUG=1 to /etc/make.conf to enable it globally. If a module is used instead of a custom kernel, recompile just the acpi.ko module as follows: # cd /sys/modules/acpi/acpi && make clean && make ACPI_DEBUG=1 Copy the compiled acpi.ko to /boot/kernel and add the desired level and layer to /boot/loader.conf. The entries in this example enable debug messages for all ACPI components and hardware drivers and output error messages at the least verbose level: debug.acpi.layer="ACPI_ALL_COMPONENTS ACPI_ALL_DRIVERS" debug.acpi.level="ACPI_LV_ERROR" If the required information is triggered by a specific event, such as a suspend and then resume, do not modify /boot/loader.conf. Instead, use sysctl to specify the layer and level after booting and preparing the system for the specific event. The variables which can be set using sysctl are named the same as the tunables in /boot/loader.conf. Once the debugging information is gathered, it can be sent to FreeBSD ACPI mailing list so that it can be used by the FreeBSD ACPI maintainers to identify the root cause of the problem and to develop a solution. Before submitting debugging information to this mailing list, ensure the latest BIOS version is installed and, if available, the embedded controller firmware version. When submitting a problem report, include the following information: • Description of the buggy behavior, including system type, model, and anything that causes the bug to appear. Note as accurately as possible when the bug began occurring if it is new. • The output of dmesg after running boot -v, including any error messages generated by the bug. • The dmesg output from boot -v with ACPI disabled, if disabling ACPI helps to fix the problem. • Output from sysctl hw.acpi. This lists which features the system offers. • The URL to a pasted version of the system’s ASL. Do not send the ASL directly to the list as it can be very large. Generate a copy of the ASL by running this command: # acpidump -dt > name-system.asl Substitute the login name for name and manufacturer/model for system. For example, use njl-FooCo6000.asl. Most FreeBSD developers watch the FreeBSD-CURRENT mailing list, but one should submit problems to FreeBSD ACPI mailing list to be sure it is seen. Be patient when waiting for a response. If the bug is not immediately apparent, submit a bug report. When entering a PR, include the same information as requested above. This helps developers to track the problem and resolve it. Do not send a PR without emailing FreeBSD ACPI mailing list first as it is likely that the problem has been reported before. #### 13.13.5. References More information about ACPI may be found in the following locations: ## Chapter 14. The FreeBSD Booting Process ### 14.1. Synopsis The process of starting a computer and loading the operating system is referred to as "the bootstrap process", or "booting". FreeBSD’s boot process provides a great deal of flexibility in customizing what happens when the system starts, including the ability to select from different operating systems installed on the same computer, different versions of the same operating system, or a different installed kernel. This chapter details the configuration options that can be set. 
It demonstrates how to customize the FreeBSD boot process, including everything that happens until the FreeBSD kernel has started, probed for devices, and started init(8). This occurs when the text color of the boot messages changes from bright white to grey. After reading this chapter, you will recognize: • The components of the FreeBSD bootstrap system and how they interact. • The options that can be passed to the components in the FreeBSD bootstrap in order to control the boot process. • The basics of setting device hints. • How to boot into single- and multi-user mode and how to properly shut down a FreeBSD system. This chapter only describes the boot process for FreeBSD running on x86 and amd64 systems. ### 14.2. FreeBSD Boot Process Turning on a computer and starting the operating system poses an interesting dilemma. By definition, the computer does not know how to do anything until the operating system is started. This includes running programs from the disk. If the computer can not run a program from the disk without the operating system, and the operating system programs are on the disk, how is the operating system started? This problem parallels one in the book The Adventures of Baron Munchausen. A character had fallen part way down a manhole, and pulled himself out by grabbing his bootstraps and lifting. In the early days of computing, the term bootstrap was applied to the mechanism used to load the operating system. It has since become shortened to "booting". On x86 hardware, the Basic Input/Output System (BIOS) is responsible for loading the operating system. The BIOS looks on the hard disk for the Master Boot Record (MBR), which must be located in a specific place on the disk. The BIOS has enough knowledge to load and run the MBR, and assumes that the MBR can then carry out the rest of the tasks involved in loading the operating system, possibly with the help of the BIOS. FreeBSD provides for booting from both the older MBR standard, and the newer GUID Partition Table (GPT). GPT partitioning is often found on computers with the Unified Extensible Firmware Interface (UEFI). However, FreeBSD can boot from GPT partitions even on machines with only a legacy BIOS with gptboot(8). Work is under way to provide direct UEFI booting. The code within the MBR is typically referred to as a boot manager, especially when it interacts with the user. The boot manager usually has more code in the first track of the disk or within the file system. Examples of boot managers include the standard FreeBSD boot manager boot0, also called Boot Easy, and GNU GRUB, which is used by many Linux® distributions. Users of GRUB should refer to GNU-provided documentation. If only one operating system is installed, the MBR searches for the first bootable (active) slice on the disk, and then runs the code on that slice to load the remainder of the operating system. When multiple operating systems are present, a different boot manager can be installed to display a list of operating systems so the user can select one to boot. The remainder of the FreeBSD bootstrap system is divided into three stages. The first stage knows just enough to get the computer into a specific state and run the second stage. The second stage can do a little bit more, before running the third stage. The third stage finishes the task of loading the operating system. The work is split into three stages because the MBR puts limits on the size of the programs that can be run at stages one and two. 
Chaining the tasks together allows FreeBSD to provide a more flexible loader. The kernel is then started and begins to probe for devices and initialize them for use. Once the kernel boot process is finished, the kernel passes control to the user process init(8), which makes sure the disks are in a usable state, starts the user-level resource configuration which mounts file systems, sets up network cards to communicate on the network, and starts the processes which have been configured to run at startup. This section describes these stages in more detail and demonstrates how to interact with the FreeBSD boot process. #### 14.2.1. The Boot Manager The boot manager code in the MBR is sometimes referred to as stage zero of the boot process. By default, FreeBSD uses the boot0 boot manager. The MBR installed by the FreeBSD installer is based on /boot/boot0. The size and capability of boot0 is restricted to 446 bytes due to the slice table and 0x55AA identifier at the end of the MBR. If boot0 and multiple operating systems are installed, a message similar to this example will be displayed at boot time: Example 26. boot0 Screenshot F1 Win F2 FreeBSD Default: F2 Other operating systems will overwrite an existing MBR if they are installed after FreeBSD. If this happens, or to replace the existing MBR with the FreeBSD MBR, use the following command: # fdisk -B -b /boot/boot0 device where device is the boot disk, such as ad0 for the first IDE disk, ad2 for the first IDE disk on a second IDE controller, or da0 for the first SCSI disk. To create a custom configuration of the MBR, refer to boot0cfg(8). #### 14.2.2. Stage One and Stage Two Conceptually, the first and second stages are part of the same program on the same area of the disk. Due to space constraints, they have been split into two, but are always installed together. They are copied from the combined /boot/boot by the FreeBSD installer or bsdlabel. These two stages are located outside file systems, in the first track of the boot slice, starting with the first sector. This is where boot0, or any other boot manager, expects to find a program to run which will continue the boot process. The first stage, boot1, is very simple, since it can only be 512 bytes in size. It knows just enough about the FreeBSD bsdlabel, which stores information about the slice, to find and execute boot2. Stage two, boot2, is slightly more sophisticated, and understands the FreeBSD file system enough to find files. It can provide a simple interface to choose the kernel or loader to run. It runs loader, which is much more sophisticated and provides a boot configuration file. If the boot process is interrupted at stage two, the following interactive screen is displayed: Example 27. boot2 Screenshot >> FreeBSD/i386 BOOT Default: 0:ad(0,a)/boot/loader boot: To replace the installed boot1 and boot2, use bsdlabel, where diskslice is the disk and slice to boot from, such as ad0s1 for the first slice on the first IDE disk: # bsdlabel -B diskslice If just the disk name is used, such as ad0, bsdlabel will create the disk in "dangerously dedicated mode", without slices. This is probably not the desired action, so double check the diskslice before pressing Return. #### 14.2.3. Stage Three The loader is the final stage of the three-stage bootstrap process. It is located on the file system, usually as /boot/loader. 
The loader is intended as an interactive method for configuration, using a built-in command set, backed up by a more powerful interpreter which has a more complex command set. During initialization, loader will probe for a console and for disks, and figure out which disk it is booting from. It will set variables accordingly, and an interpreter is started where user commands can be passed from a script or interactively. The loader will then read /boot/loader.rc, which by default reads in /boot/defaults/loader.conf which sets reasonable defaults for variables and reads /boot/loader.conf for local changes to those variables. loader.rc then acts on these variables, loading whichever modules and kernel are selected. Finally, by default, loader issues a 10 second wait for key presses, and boots the kernel if it is not interrupted. If interrupted, the user is presented with a prompt which understands the command set, where the user may adjust variables, unload all modules, load modules, and then finally boot or reboot. Loader Built-In Commands lists the most commonly used loader commands. For a complete discussion of all available commands, refer to loader(8). Table 10. Loader Built-In Commands VariableDescription autoboot seconds Proceeds to boot the kernel if not interrupted within the time span given, in seconds. It displays a countdown, and the default time span is 10 seconds. boot [-options] [kernelname] Immediately proceeds to boot the kernel, with any specified options or kernel name. Providing a kernel name on the command-line is only applicable after an unload has been issued. Otherwise, the previously-loaded kernel will be used. If kernelname is not qualified, it will be searched under /boot/kernel and /boot/modules. boot-conf Goes through the same automatic configuration of modules based on specified variables, most commonly kernel. This only makes sense if unload is used first, before changing some variables. help [topic] Shows help messages read from /boot/loader.help. If the topic given is index, the list of available topics is displayed. include filename …​ Reads the specified file and interprets it line by line. An error immediately stops the include. load [-t type] filename Loads the kernel, kernel module, or file of the type given, with the specified filename. Any arguments after filename are passed to the file. If filename is not qualified, it will be searched under /boot/kernel and /boot/modules. ls [-l] [path] Displays a listing of files in the given path, or the root directory, if the path is not specified. If -l is specified, file sizes will also be shown. lsdev [-v] Lists all of the devices from which it may be possible to load modules. If -v is specified, more details are printed. lsmod [-v] Displays loaded modules. If -v is specified, more details are shown. more filename Displays the files specified, with a pause at each LINES displayed. reboot Immediately reboots the system. set variable, set variable=value Sets the specified environment variables. unload Removes all loaded modules. Here are some practical examples of loader usage. To boot the usual kernel in single-user mode: boot -s To unload the usual kernel and modules and then load the previous or another, specified kernel: unload load /path/to/kernelfile Use the qualified /boot/GENERIC/kernel to refer to the default kernel that comes with an installation, or /boot/kernel.old/kernel, to refer to the previously installed kernel before a system upgrade or before configuring a custom kernel. 
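For example, to fall back to the kernel that was preserved by a previous upgrade, assuming /boot/kernel.old is still present, the following can be entered at the loader prompt:

unload
load /boot/kernel.old/kernel
boot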
Use the following to load the usual modules with another kernel. Note that in this case it is not necessary to use the qualified name:

unload
set kernel="mykernel"
boot-conf

To load an automated kernel configuration script:

load -t userconfig_script /boot/kernel.conf

#### 14.2.4. Last Stage

Once the kernel is loaded by either loader or by boot2, which bypasses loader, it examines any boot flags and adjusts its behavior as necessary. Kernel Interaction During Boot lists the commonly used boot flags. Refer to boot(8) for more information on the other boot flags.

Table 11. Kernel Interaction During Boot

-a: During kernel initialization, ask for the device to mount as the root file system.
-C: Boot the root file system from a CDROM.
-s: Boot into single-user mode.
-v: Be more verbose during kernel startup.

Once the kernel has finished booting, it passes control to the user process init(8), which is located at /sbin/init, or the program path specified in the init_path variable in loader. This is the last stage of the boot process. The boot sequence makes sure that the file systems available on the system are consistent. If a UFS file system is not, and fsck cannot fix the inconsistencies, init drops the system into single-user mode so that the system administrator can resolve the problem directly. Otherwise, the system boots into multi-user mode.

##### 14.2.4.1. Single-User Mode

A user can specify this mode by booting with -s or by setting the boot_single variable in loader. It can also be reached by running shutdown now from multi-user mode. Single-user mode begins with this message:

Enter full pathname of shell or RETURN for /bin/sh:

If the user presses Enter, the system will enter the default Bourne shell. To specify a different shell, input the full path to the shell. Single-user mode is usually used to repair a system that will not boot due to an inconsistent file system or an error in a boot configuration file. It can also be used to reset the root password when it is unknown. These actions are possible as the single-user mode prompt gives full, local access to the system and its configuration files. There is no networking in this mode. While single-user mode is useful for repairing a system, it poses a security risk unless the system is in a physically secure location. By default, any user who can gain physical access to a system will have full control of that system after booting into single-user mode. If the system console is changed to insecure in /etc/ttys, the system will first prompt for the root password before initiating single-user mode. This adds a measure of security while removing the ability to reset the root password when it is unknown.

Example 28. Configuring an Insecure Console in /etc/ttys

# name getty type status comments
#
# If console is marked "insecure", then init will ask for the root password
# when going to single-user mode.
console none unknown off insecure

An insecure console means that physical security to the console is considered to be insecure, so only someone who knows the root password may use single-user mode.

##### 14.2.4.2. Multi-User Mode

If init finds the file systems to be in order, or once the user has finished their commands in single-user mode and has typed exit to leave single-user mode, the system enters multi-user mode, in which it starts the resource configuration of the system. The resource configuration system reads in configuration defaults from /etc/defaults/rc.conf and system-specific details from /etc/rc.conf.
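As a hedged example, a minimal /etc/rc.conf override might contain entries such as the following; the host name, interface name, and enabled service are placeholders and must be adapted to the actual system:

hostname="example.my.domain"
ifconfig_em0="DHCP"
sshd_enable="YES"

Only values that differ from the defaults need to appear here; the resource configuration system merges these overrides with /etc/defaults/rc.conf.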
It then proceeds to mount the system file systems listed in /etc/fstab. It starts up networking services, miscellaneous system daemons, then the startup scripts of locally installed packages. To learn more about the resource configuration system, refer to rc(8) and examine the scripts located in /etc/rc.d. ### 14.3. Device Hints During initial system startup, the boot loader(8) reads device.hints(5). This file stores kernel boot information known as variables, sometimes referred to as "device hints". These "device hints" are used by device drivers for device configuration. Device hints may also be specified at the Stage 3 boot loader prompt, as demonstrated in Stage Three. Variables can be added using set, removed with unset, and viewed show. Variables set in /boot/device.hints can also be overridden. Device hints entered at the boot loader are not permanent and will not be applied on the next reboot. Once the system is booted, kenv(1) can be used to dump all of the variables. The syntax for /boot/device.hints is one variable per line, using the hash "#" as comment markers. Lines are constructed as follows: hint.driver.unit.keyword="value" The syntax for the Stage 3 boot loader is: set hint.driver.unit.keyword=value where driver is the device driver name, unit is the device driver unit number, and keyword is the hint keyword. The keyword may consist of the following options: • at: specifies the bus which the device is attached to. • port: specifies the start address of the I/O to be used. • irq: specifies the interrupt request number to be used. • drq: specifies the DMA channel number. • maddr: specifies the physical memory address occupied by the device. • flags: sets various flag bits for the device. • disabled: if set to 1 the device is disabled. Since device drivers may accept or require more hints not listed here, viewing a driver’s manual page is recommended. For more information, refer to device.hints(5), kenv(1), loader.conf(5), and loader(8). ### 14.4. Shutdown Sequence Upon controlled shutdown using shutdown(8), init(8) will attempt to run the script /etc/rc.shutdown, and then proceed to send all processes the TERM signal, and subsequently the KILL signal to any that do not terminate in a timely manner. To power down a FreeBSD machine on architectures and systems that support power management, use shutdown -p now to turn the power off immediately. To reboot a FreeBSD system, use shutdown -r now. One must be root or a member of operator in order to run shutdown(8). One can also use halt(8) and reboot(8). Refer to their manual pages and to shutdown(8) for more information. Modify group membership by referring to “Users and Basic Account Management”. Power management requires acpi(4) to be loaded as a module or statically compiled into a custom kernel. ## Chapter 15. Security ### 15.1. Synopsis Security, whether physical or virtual, is a topic so broad that an entire industry has evolved around it. Hundreds of standard practices have been authored about how to secure systems and networks, and as a user of FreeBSD, understanding how to protect against attacks and intruders is a must. In this chapter, several fundamentals and techniques will be discussed. The FreeBSD system comes with multiple layers of security, and many more third party utilities may be added to enhance security. After reading this chapter, you will know: • Basic FreeBSD system security concepts. • The various crypt mechanisms available in FreeBSD. • How to set up one-time password authentication. 
• How to configure TCP Wrapper for use with inetd(8). • How to set up Kerberos on FreeBSD. • How to configure IPsec and create a VPN. • How to configure and use OpenSSH on FreeBSD. • How to use file system ACLs. • How to use pkg to audit third party software packages installed from the Ports Collection. • How to utilize FreeBSD security advisories. • What Process Accounting is and how to enable it on FreeBSD. • How to control user resources using login classes or the resource limits database. Before reading this chapter, you should: • Understand basic FreeBSD and Internet concepts. Additional security topics are covered elsewhere in this Handbook. For example, Mandatory Access Control is discussed in Mandatory Access Control and Internet firewalls are discussed in Firewalls. ### 15.2. Introduction Security is everyone’s responsibility. A weak entry point in any system could allow intruders to gain access to critical information and cause havoc on an entire network. One of the core principles of information security is the CIA triad, which stands for the Confidentiality, Integrity, and Availability of information systems. The CIA triad is a bedrock concept of computer security as customers and users expect their data to be protected. For example, a customer expects that their credit card information is securely stored (confidentiality), that their orders are not changed behind the scenes (integrity), and that they have access to their order information at all times (availability). To provide CIA, security professionals apply a defense in depth strategy. The idea of defense in depth is to add several layers of security to prevent one single layer failing and the entire security system collapsing. For example, a system administrator cannot simply turn on a firewall and consider the network or system secure. One must also audit accounts, check the integrity of binaries, and ensure malicious tools are not installed. To implement an effective security strategy, one must understand threats and how to defend against them. What is a threat as it pertains to computer security? Threats are not limited to remote attackers who attempt to access a system without permission from a remote location. Threats also include employees, malicious software, unauthorized network devices, natural disasters, security vulnerabilities, and even competing corporations. Systems and networks can be accessed without permission, sometimes by accident, or by remote attackers, and in some cases, via corporate espionage or former employees. As a user, it is important to prepare for and admit when a mistake has led to a security breach and report possible issues to the security team. As an administrator, it is important to know of the threats and be prepared to mitigate them. When applying security to systems, it is recommended to start by securing the basic accounts and system configuration, and then to secure the network layer so that it adheres to the system policy and the organization’s security procedures. Many organizations already have a security policy that covers the configuration of technology devices. The policy should include the security configuration of workstations, desktops, mobile devices, phones, production servers, and development servers. In many cases, standard operating procedures (SOPs) already exist. When in doubt, ask the security team. The rest of this introduction describes how some of these basic security configurations are performed on a FreeBSD system. 
The rest of this chapter describes some specific tools which can be used when implementing a security policy on a FreeBSD system. #### 15.2.1. Preventing Logins In securing a system, a good starting point is an audit of accounts. Ensure that root has a strong password and that this password is not shared. Disable any accounts that do not need login access. To deny login access to accounts, two methods exist. The first is to lock the account. This example locks the toor account: # pw lock toor The second method is to prevent login access by changing the shell to /usr/sbin/nologin. Only the superuser can change the shell for other users: # chsh -s /usr/sbin/nologin toor The /usr/sbin/nologin shell prevents the system from assigning a shell to the user when they attempt to login. #### 15.2.2. Permitted Account Escalation In some cases, system administration needs to be shared with other users. FreeBSD has two methods to handle this. The first one, which is not recommended, is a shared root password used by members of the wheel group. With this method, a user types su and enters the password for wheel whenever superuser access is needed. The user should then type exit to leave privileged access after finishing the commands that required administrative access. To add a user to this group, edit /etc/group and add the user to the end of the wheel entry. The user must be separated by a comma character with no space. The second, and recommended, method to permit privilege escalation is to install the security/sudo package or port. This software provides additional auditing, more fine-grained user control, and can be configured to lock users into running only the specified privileged commands. After installation, use visudo to edit /usr/local/etc/sudoers. This example creates a new webadmin group, adds the trhodes account to that group, and configures that group access to restart apache24: # pw groupadd webadmin -M trhodes -g 6000 # visudo %webadmin ALL=(ALL) /usr/sbin/service apache24 * #### 15.2.3. Password Hashes Passwords are a necessary evil of technology. When they must be used, they should be complex and a powerful hash mechanism should be used to encrypt the version that is stored in the password database. FreeBSD supports the DES, MD5, SHA256, SHA512, and Blowfish hash algorithms in its crypt() library. The default of SHA512 should not be changed to a less secure hashing algorithm, but can be changed to the more secure Blowfish algorithm. Blowfish is not part of AES and is not considered compliant with any Federal Information Processing Standards (FIPS). Its use may not be permitted in some environments. To determine which hash algorithm is used to encrypt a user’s password, the superuser can view the hash for the user in the FreeBSD password database. Each hash starts with a symbol which indicates the type of hash mechanism used to encrypt the password. If DES is used, there is no beginning symbol. For MD5, the symbol is $. For SHA256 and SHA512, the symbol is $6$. For Blowfish, the symbol is $2a$. In this example, the password for dru is hashed using the default SHA512 algorithm as the hash starts with $6$. Note that the encrypted hash, not the password itself, is stored in the password database: # grep dru /etc/master.passwd dru:$6$pzIjSvCAn.PBYQBA\$PXpSeWPx3g5kscj3IMiM7tUEUSPmGexxta.8Lt9TGSi2lNQqYGKszsBPuGME0:1001:1001::0:0:dru:/usr/home/dru:/bin/csh The hash mechanism is set in the user’s login class. 
For this example, the user is in the default login class and the hash algorithm is set with this line in /etc/login.conf: :passwd_format=sha512:\ To change the algorithm to Blowfish, modify that line to look like this: :passwd_format=blf:\ Then run cap_mkdb /etc/login.conf as described in Configuring Login Classes. Note that this change will not affect any existing password hashes. This means that all passwords should be re-hashed by asking users to run passwd in order to change their password. For remote logins, two-factor authentication should be used. An example of two-factor authentication is "something you have", such as a key, and "something you know", such as the passphrase for that key. Since OpenSSH is part of the FreeBSD base system, all network logins should be over an encrypted connection and use key-based authentication instead of passwords. For more information, refer to OpenSSH. Kerberos users may need to make additional changes to implement OpenSSH in their network. These changes are described in Kerberos. Enforcing a strong password policy for local accounts is a fundamental aspect of system security. In FreeBSD, password length, password strength, and password complexity can be implemented using built-in Pluggable Authentication Modules (PAM). This section demonstrates how to configure the minimum and maximum password length and the enforcement of mixed characters using the pam_passwdqc.so module. This module is enforced when a user changes their password. To configure this module, become the superuser and uncomment the line containing pam_passwdqc.so in /etc/pam.d/passwd. Then, edit that line to match the password policy: password requisite pam_passwdqc.so min=disabled,disabled,disabled,12,10 similar=deny retry=3 enforce=users This example sets several requirements for new passwords. The min setting controls the minimum password length. It has five values because this module defines five different types of passwords based on their complexity. Complexity is defined by the type of characters that must exist in a password, such as letters, numbers, symbols, and case. The types of passwords are described in pam_passwdqc(8). In this example, the first three types of passwords are disabled, meaning that passwords that meet those complexity requirements will not be accepted, regardless of their length. The 12 sets a minimum password policy of at least twelve characters, if the password also contains characters with three types of complexity. The 10 sets the password policy to also allow passwords of at least ten characters, if the password contains characters with four types of complexity. The similar setting denies passwords that are similar to the user’s previous password. The retry setting provides a user with three opportunities to enter a new password. Once this file is saved, a user changing their password will see a message similar to the following: % passwd You can now choose the new password. A valid password should be a mix of upper and lower case letters, digits and other characters. You can use a 12 character long password with characters from at least 3 of these 4 classes, or a 10 character long password containing characters from all the classes. Characters that form a common pattern are discarded by the check. 
Alternatively, if no one else can see your terminal now, you can Enter new password: If a password that does not match the policy is entered, it will be rejected with a warning and the user will have an opportunity to try again, up to the configured number of retries. Most password policies require passwords to expire after so many days. To set a password age time in FreeBSD, set passwordtime for the user’s login class in /etc/login.conf. The default login class contains an example: # :passwordtime=90d:\ So, to set an expiry of 90 days for this login class, remove the comment symbol (#), save the edit, and run cap_mkdb /etc/login.conf. To set the expiration on individual users, pass an expiration date or the number of days to expiry and a username to pw: # pw usermod -p 30-apr-2015 -n trhodes As seen here, an expiration date is set in the form of day, month, and year. For more information, see pw(8). #### 15.2.5. Detecting Rootkits A rootkit is any unauthorized software that attempts to gain root access to a system. Once installed, this malicious software will normally open up another avenue of entry for an attacker. Realistically, once a system has been compromised by a rootkit and an investigation has been performed, the system should be reinstalled from scratch. There is tremendous risk that even the most prudent security or systems engineer will miss something an attacker left behind. A rootkit does do one thing useful for administrators: once detected, it is a sign that a compromise happened at some point. But, these types of applications tend to be very well hidden. This section demonstrates a tool that can be used to detect rootkits, security/rkhunter. After installation of this package or port, the system may be checked using the following command. It will produce a lot of information and will require some manual pressing of ENTER: # rkhunter -c After the process completes, a status message will be printed to the screen. This message will include the amount of files checked, suspect files, possible rootkits, and more. During the check, some generic security warnings may be produced about hidden files, the OpenSSH protocol selection, and known vulnerable versions of installed software. These can be handled now or after a more detailed analysis has been performed. Every administrator should know what is running on the systems they are responsible for. Third-party tools like rkhunter and sysutils/lsof, and native commands such as netstat and ps, can show a great deal of information on the system. Take notes on what is normal, ask questions when something seems out of place, and be paranoid. While preventing a compromise is ideal, detecting a compromise is a must. #### 15.2.6. Binary Verification Verification of system files and binaries is important because it provides the system administration and security teams information about system changes. A software application that monitors the system for changes is called an Intrusion Detection System (IDS). FreeBSD provides native support for a basic IDS system. While the nightly security emails will notify an administrator of changes, the information is stored locally and there is a chance that a malicious user could modify this information in order to hide their changes to the system. As such, it is recommended to create a separate set of binary signatures and store them on a read-only, root-owned directory or, preferably, on a removable USB disk or remote rsync server. 
The built-in mtree utility can be used to generate a specification of the contents of a directory. A seed, or a numeric constant, is used to generate the specification and is required to check that the specification has not changed. This makes it possible to determine if a file or binary has been modified. Since the seed value is unknown by an attacker, faking or checking the checksum values of files will be difficult to impossible. The following example generates a set of SHA256 hashes, one for each system binary in /bin, and saves those values to a hidden file in root's home directory, /root/.bin_chksum_mtree: # mtree -s 3483151339707503 -c -K cksum,sha256digest -p /bin > /root/.bin_chksum_mtree # mtree: /bin checksum: 3427012225 The 3483151339707503 represents the seed. This value should be remembered, but not shared. Viewing /root/.bin_cksum_mtree should yield output similar to the following: # user: root # tree: /bin # date: Mon Feb 3 10:19:53 2014 # . /set type=file uid=0 gid=0 mode=0555 nlink=1 flags=none . type=dir mode=0755 nlink=2 size=1024 \ time=1380277977.000000000 cksum=484492447 \ sha256digest=6207490fbdb5ed1904441fbfa941279055c3e24d3a4049aeb45094596400662a cat size=12096 time=1380277975.000000000 cksum=3909216944 \ sha256digest=65ea347b9418760b247ab10244f47a7ca2a569c9836d77f074e7a306900c1e69 chflags size=8168 time=1380277975.000000000 cksum=3949425175 \ sha256digest=c99eb6fc1c92cac335c08be004a0a5b4c24a0c0ef3712017b12c89a978b2dac3 chio size=18520 time=1380277975.000000000 cksum=2208263309 \ sha256digest=ddf7c8cb92a58750a675328345560d8cc7fe14fb3ccd3690c34954cbe69fc964 chmod size=8640 time=1380277975.000000000 cksum=2214429708 \ sha256digest=a435972263bf814ad8df082c0752aa2a7bdd8b74ff01431ccbd52ed1e490bbe7 The machine’s hostname, the date and time the specification was created, and the name of the user who created the specification are included in this report. There is a checksum, size, time, and SHA256 digest for each binary in the directory. To verify that the binary signatures have not changed, compare the current contents of the directory to the previously generated specification, and save the results to a file. This command requires the seed that was used to generate the original specification: # mtree -s 3483151339707503 -p /bin < /root/.bin_chksum_mtree >> /root/.bin_chksum_output # mtree: /bin checksum: 3427012225 This should produce the same checksum for /bin that was produced when the specification was created. If no changes have occurred to the binaries in this directory, the /root/.bin_chksum_output output file will be empty. To simulate a change, change the date on /bin/cat using touch and run the verification command again: # touch /bin/cat # mtree -s 3483151339707503 -p /bin < /root/.bin_chksum_mtree >> /root/.bin_chksum_output # more /root/.bin_chksum_output cat changed modification time expected Fri Sep 27 06:32:55 2013 found Mon Feb 3 10:28:43 2014 It is recommended to create specifications for the directories which contain binaries and configuration files, as well as any directories containing sensitive data. Typically, specifications are created for /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin, /etc, and /usr/local/etc. More advanced IDS systems exist, such as security/aide. In most cases, mtree provides the functionality administrators need. It is important to keep the seed value and the checksum output hidden from malicious users. More information about mtree can be found in mtree(8). #### 15.2.7. 
System Tuning for Security In FreeBSD, many system features can be tuned using sysctl. A few of the security features which can be tuned to prevent Denial of Service (DoS) attacks will be covered in this section. More information about using sysctl, including how to temporarily change values and how to make the changes permanent after testing, can be found in “Tuning with sysctl(8)”. Any time a setting is changed with sysctl, the chance to cause undesired harm is increased, affecting the availability of the system. All changes should be monitored and, if possible, tried on a testing system before being used on a production system. By default, the FreeBSD kernel boots with a security level of -1. This is called "insecure mode" because immutable file flags may be turned off and all devices may be read from or written to. The security level will remain at -1 unless it is altered through sysctl or by a setting in the startup scripts. The security level may be increased during system startup by setting kern_securelevel_enable to YES in /etc/rc.conf, and the value of kern_securelevel to the desired security level. See security(7) and init(8) for more information on these settings and the available security levels. Increasing the securelevel can break Xorg and cause other issues. Be prepared to do some debugging. The net.inet.tcp.blackhole and net.inet.udp.blackhole settings can be used to drop incoming SYN packets on closed ports without sending a return RST response. The default behavior is to return an RST to show a port is closed. Changing the default provides some level of protection against ports scans, which are used to determine which applications are running on a system. Set net.inet.tcp.blackhole to 2 and net.inet.udp.blackhole to 1. Refer to blackhole(4) for more information about these settings. The net.inet.icmp.drop_redirect and net.inet.ip.redirect settings help prevent against redirect attacks. A redirect attack is a type of DoS which sends mass numbers of ICMP type 5 packets. Since these packets are not required, set net.inet.icmp.drop_redirect to 1 and set net.inet.ip.redirect to 0. Source routing is a method for detecting and accessing non-routable addresses on the internal network. This should be disabled as non-routable addresses are normally not routable on purpose. To disable this feature, set net.inet.ip.sourceroute and net.inet.ip.accept_sourceroute to 0. When a machine on the network needs to send messages to all hosts on a subnet, an ICMP echo request message is sent to the broadcast address. However, there is no reason for an external host to perform such an action. To reject all external broadcast requests, set net.inet.icmp.bmcastecho to 0. Some additional settings are documented in security(7). By default, FreeBSD includes support for One-time Passwords In Everything (OPIE). OPIE is designed to prevent replay attacks, in which an attacker discovers a user’s password and uses it to access a system. Since a password is only used once in OPIE, a discovered password is of little use to an attacker. OPIE uses a secure hash and a challenge/response system to manage passwords. The FreeBSD implementation uses the MD5 hash by default. OPIE uses three different types of passwords. The first is the usual UNIX® or Kerberos password. The second is the one-time password which is generated by opiekey. The third type of password is the "secret password" which is used to generate one-time passwords. 
The secret password has nothing to do with, and should be different from, the UNIX® password. There are two other pieces of data that are important to OPIE. One is the "seed" or "key", consisting of two letters and five digits. The other is the "iteration count", a number between 1 and 100. OPIE creates the one-time password by concatenating the seed and the secret password, applying the MD5 hash as many times as specified by the iteration count, and turning the result into six short English words which represent the one-time password. The authentication system keeps track of the last one-time password used, and the user is authenticated if the hash of the user-provided password is equal to the previous password. Since a one-way hash is used, it is impossible to generate future one-time passwords if a successfully used password is captured. The iteration count is decremented after each successful login to keep the user and the login program in sync. When the iteration count gets down to 1, OPIE must be reinitialized. There are a few programs involved in this process. A one-time password, or a consecutive list of one-time passwords, is generated by passing an iteration count, a seed, and a secret password to opiekey(1). In addition to initializing OPIE, opiepasswd(1) is used to change passwords, iteration counts, or seeds. The relevant credential files in /etc/opiekeys are examined by opieinfo(1) which prints out the invoking user’s current iteration count and seed. This section describes four different sorts of operations. The first is how to set up one-time-passwords for the first time over a secure connection. The second is how to use opiepasswd over an insecure connection. The third is how to log in over an insecure connection. The fourth is how to generate a number of keys which can be written down or printed out to use at insecure locations. #### 15.3.1. Initializing OPIE To initialize OPIE for the first time, run this command from a secure location: % opiepasswd -c Only use this method from the console; NEVER from remote. If you are using telnet, xterm, or a dial-in, type ^C now or exit with no password. Then run opiepasswd without the -c parameter. Using MD5 to compute responses. Enter new secret pass phrase: Again new secret pass phrase: ID unfurl OTP key is 499 to4268 MOS MALL GOAT ARM AVID COED The -c sets console mode which assumes that the command is being run from a secure location, such as a computer under the user’s control or an SSH session to a computer under the user’s control. When prompted, enter the secret password which will be used to generate the one-time login keys. This password should be difficult to guess and should be different than the password which is associated with the user’s login account. It must be between 10 and 127 characters long. Remember this password. The ID line lists the login name (unfurl), default iteration count (499), and default seed (to4268). When logging in, the system will remember these parameters and display them, meaning that they do not have to be memorized. The last line lists the generated one-time password which corresponds to those parameters and the secret password. At the next login, use this one-time password. #### 15.3.2. Insecure Connection Initialization To initialize or change the secret password on an insecure system, a secure connection is needed to some place where opiekey can be run. This might be a shell prompt on a trusted machine. 
An iteration count is needed, where 100 is probably a good value, and the seed can either be specified or the randomly-generated one used. On the insecure connection, the machine being initialized, use opiepasswd(1): % opiepasswd Updating unfurl: You need the response from an OTP generator. Old secret pass phrase: otp-md5 498 to4268 ext Response: GAME GAG WELT OUT DOWN CHAT New secret pass phrase: otp-md5 499 to4269 Response: LINE PAP MILK NELL BUOY TROY ID mark OTP key is 499 gr4269 LINE PAP MILK NELL BUOY TROY To accept the default seed, press Return. Before entering an access password, move over to the secure connection and give it the same parameters: % opiekey 498 to4268 Using the MD5 algorithm to compute response. Reminder: Do not use opiekey from telnet or dial-in sessions. Enter secret pass phrase: GAME GAG WELT OUT DOWN CHAT Switch back over to the insecure connection, and copy the generated one-time password over to the relevant program. #### 15.3.3. Generating a Single One-time Password After initializing OPIE and logging in, a prompt like this will be displayed: % telnet example.com Trying 10.0.0.1... Connected to example.com Escape character is '^]'. FreeBSD/i386 (example.com) (ttypa) otp-md5 498 gr4269 ext Password: The OPIE prompts provides a useful feature. If Return is pressed at the password prompt, the prompt will turn echo on and display what is typed. This can be useful when attempting to type in a password by hand from a printout. At this point, generate the one-time password to answer this login prompt. This must be done on a trusted system where it is safe to run opiekey(1). There are versions of this command for Windows®, Mac OS® and FreeBSD. This command needs the iteration count and the seed as command line options. Use cut-and-paste from the login prompt on the machine being logged in to. On the trusted system: % opiekey 498 to4268 Using the MD5 algorithm to compute response. Reminder: Do not use opiekey from telnet or dial-in sessions. Enter secret pass phrase: GAME GAG WELT OUT DOWN CHAT #### 15.3.4. Generating Multiple One-time Passwords Sometimes there is no access to a trusted machine or secure connection. In this case, it is possible to use opiekey(1) to generate a number of one-time passwords beforehand. For example: % opiekey -n 5 30 zz99999 Using the MD5 algorithm to compute response. Reminder: Do not use opiekey from telnet or dial-in sessions. Enter secret pass phrase: <secret password> 26: JOAN BORE FOSS DES NAY QUIT 27: LATE BIAS SLAY FOLK MUCH TRIG 28: SALT TIN ANTI LOON NEAL USE 29: RIO ODIN GO BYE FURY TIC 30: GREW JIVE SAN GIRD BOIL PHI The -n 5 requests five keys in sequence, and 30 specifies what the last iteration number should be. Note that these are printed out in reverse order of use. The really paranoid might want to write the results down by hand; otherwise, print the list. Each line shows both the iteration count and the one-time password. Scratch off the passwords as they are used. #### 15.3.5. Restricting Use of UNIX® Passwords OPIE can restrict the use of UNIX® passwords based on the IP address of a login session. The relevant file is /etc/opieaccess, which is present by default. Refer to opieaccess(5) for more information on this file and which security considerations to be aware of when using it. Here is a sample opieaccess: permit 192.168.0.0 255.255.0.0 This line allows users whose IP source address (which is vulnerable to spoofing) matches the specified value and mask, to use UNIX® passwords at any time. 
If no rules in opieaccess are matched, the default is to deny non-OPIE logins. ### 15.4. TCP Wrapper TCP Wrapper is a host-based access control system which extends the abilities of “The inetd Super-Server”. It can be configured to provide logging support, return messages, and connection restrictions for the server daemons under the control of inetd. Refer to tcpd(8) for more information about TCP Wrapper and its features. TCP Wrapper should not be considered a replacement for a properly configured firewall. Instead, TCP Wrapper should be used in conjunction with a firewall and other security enhancements in order to provide another layer of protection in the implementation of a security policy. #### 15.4.1. Initial Configuration To enable TCP Wrapper in FreeBSD, add the following lines to /etc/rc.conf: inetd_enable="YES" inetd_flags="-Ww" Then, properly configure /etc/hosts.allow. Unlike other implementations of TCP Wrapper, the use of hosts.deny is deprecated in FreeBSD. All configuration options should be placed in /etc/hosts.allow. In the simplest configuration, daemon connection policies are set to either permit or block, depending on the options in /etc/hosts.allow. The default configuration in FreeBSD is to allow all connections to the daemons started with inetd. Basic configuration usually takes the form of daemon : address : action, where daemon is the daemon which inetd started, address is a valid hostname, IP address, or an IPv6 address enclosed in brackets ([ ]), and action is either allow or deny. TCP Wrapper uses a first rule match semantic, meaning that the configuration file is scanned from the beginning for a matching rule. When a match is found, the rule is applied and the search process stops. For example, to allow POP3 connections via the mail/qpopper daemon, the following lines should be appended to hosts.allow: # This line is required for POP3 connections: qpopper : ALL : allow Whenever this file is edited, restart inetd: # service inetd restart TCP Wrapper provides advanced options to allow more control over the way connections are handled. In some cases, it may be appropriate to return a comment to certain hosts or daemon connections. In other cases, a log entry should be recorded or an email sent to the administrator. Other situations may require the use of a service for local connections only. This is all possible through the use of configuration options known as wildcards, expansion characters, and external command execution. Suppose that a situation occurs where a connection should be denied yet a reason should be sent to the host who attempted to establish that connection. That action is possible with twist. When a connection attempt is made, twist executes a shell command or script. An example exists in hosts.allow: # The rest of the daemons are protected. ALL : ALL \ : severity auth.info \ : twist /bin/echo "You are not welcome to use %d from %h." In this example, the message "You are not allowed to use daemon name from hostname." will be returned for any daemon not configured in hosts.allow. This is useful for sending a reply back to the connection initiator right after the established connection is dropped. Any message returned must be wrapped in quote (") characters. It may be possible to launch a denial of service attack on the server if an attacker floods these daemons with connection requests. Another possibility is to use spawn. Like twist, spawn implicitly denies the connection and may be used to run external shell commands or scripts. 
Unlike twist, spawn will not send a reply back to the host who established the connection. For example, consider the following configuration: # We do not allow connections from example.com: ALL : .example.com \ : spawn (/bin/echo %a from %h attempted to access %d >> \ /var/log/connections.log) \ : deny This will deny all connection attempts from *.example.com and log the hostname, IP address, and the daemon to which access was attempted to /var/log/connections.log. This example uses the substitution characters %a and %h. Refer to hosts_access(5) for the complete list. To match every instance of a daemon, domain, or IP address, use ALL. Another wildcard is PARANOID which may be used to match any host which provides an IP address that may be forged because the IP address differs from its resolved hostname. In this example, all connection requests to Sendmail which have an IP address that varies from its hostname will be denied: # Block possibly spoofed requests to sendmail: sendmail : PARANOID : deny Using the PARANOID wildcard will result in denied connections if the client or server has a broken DNS setup. When adding new configuration lines, make sure that any unneeded entries for that daemon are commented out in hosts.allow. ### 15.5. Kerberos Kerberos is a network authentication protocol which was originally created by the Massachusetts Institute of Technology (MIT) as a way to securely provide authentication across a potentially hostile network. The Kerberos protocol uses strong cryptography so that both a client and server can prove their identity without sending any unencrypted secrets over the network. Kerberos can be described as an identity-verifying proxy system and as a trusted third-party authentication system. After a user authenticates with Kerberos, their communications can be encrypted to assure privacy and data integrity. The only function of Kerberos is to provide the secure authentication of users and servers on the network. It does not provide authorization or auditing functions. It is recommended that Kerberos be used with other security methods which provide authorization and audit services. The current version of the protocol is version 5, described in RFC 4120. Several free implementations of this protocol are available, covering a wide range of operating systems. MIT continues to develop their Kerberos package. It is commonly used in the US as a cryptography product, and has historically been subject to US export regulations. In FreeBSD, MITKerberos is available as the security/krb5 package or port. The Heimdal Kerberos implementation was explicitly developed outside of the US to avoid export regulations. The Heimdal Kerberos distribution is included in the base FreeBSD installation, and another distribution with more configurable options is available as security/heimdal in the Ports Collection. In Kerberos users and services are identified as "principals" which are contained within an administrative grouping, called a "realm". A typical user principal would be of the form user@REALM (realms are traditionally uppercase). This section provides a guide on how to set up Kerberos using the Heimdal distribution included in FreeBSD. For purposes of demonstrating a Kerberos installation, the name spaces will be as follows: • The DNS domain (zone) will be example.org. • The Kerberos realm will be EXAMPLE.ORG. Use real domain names when setting up Kerberos, even if it will run internally. This avoids DNS problems and assures inter-operation with other Kerberos realms. 
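To make the naming convention concrete, a user principal and a service principal in this demonstration realm would look like the following. The user name matches the principal created later in this chapter, while the host name is only an illustrative assumption:

tillman@EXAMPLE.ORG
host/www.example.org@EXAMPLE.ORG

Services running on a particular machine are conventionally named service/fully.qualified.hostname@REALM, which is why the host/ principal appears again when individual servers are Kerberized.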
#### 15.5.1. Setting up a Heimdal KDC The Key Distribution Center (KDC) is the centralized authentication service that Kerberos provides, the "trusted third party" of the system. It is the computer that issues Kerberos tickets, which are used for clients to authenticate to servers. As the KDC is considered trusted by all other computers in the Kerberos realm, it has heightened security concerns. Direct access to the KDC should be limited. While running a KDC requires few computing resources, a dedicated machine acting only as a KDC is recommended for security reasons. To begin, install the security/heimdal package as follows: # pkg install heimdal Next, update /etc/rc.conf using sysrc as follows: # sysrc kdc_enable=yes # sysrc kadmind_enable=yes Next, edit /etc/krb5.conf as follows: [libdefaults] default_realm = EXAMPLE.ORG [realms] EXAMPLE.ORG = { kdc = kerberos.example.org } [domain_realm] .example.org = EXAMPLE.ORG In this example, the KDC will use the fully-qualified hostname kerberos.example.org. The hostname of the KDC must be resolvable in the DNS. Kerberos can also use the DNS to locate KDCs, instead of a [realms] section in /etc/krb5.conf. For large organizations that have their own DNS servers, the above example could be trimmed to: [libdefaults] default_realm = EXAMPLE.ORG [domain_realm] .example.org = EXAMPLE.ORG With the following lines being included in the example.org zone file: _kerberos._udp IN SRV 01 00 88 kerberos.example.org. _kerberos._tcp IN SRV 01 00 88 kerberos.example.org. _kpasswd._udp IN SRV 01 00 464 kerberos.example.org. _kerberos-adm._tcp IN SRV 01 00 749 kerberos.example.org. _kerberos IN TXT EXAMPLE.ORG In order for clients to be able to find the Kerberos services, they must have either a fully configured /etc/krb5.conf or a minimally configured /etc/krb5.conf and a properly configured DNS server. Next, create the Kerberos database which contains the keys of all principals (users and hosts) encrypted with a master password. It is not required to remember this password as it will be stored in /var/heimdal/m-key; it would be reasonable to use a 45-character random password for this purpose. To create the master key, run kstash and enter a password: # kstash Master key: xxxxxxxxxxxxxxxxxxxxxxx Verifying password - Master key: xxxxxxxxxxxxxxxxxxxxxxx Once the master key has been created, the database should be initialized. The Kerberos administrative tool kadmin(8) can be used on the KDC in a mode that operates directly on the database, without using the kadmind(8) network service, as kadmin -l. This resolves the chicken-and-egg problem of trying to connect to the database before it is created. At the kadmin prompt, use init to create the realm’s initial database: # kadmin -l Realm max ticket life [unlimited]: Lastly, while still in kadmin, create the first principal using add. Stick to the default options for the principal for now, as these can be changed later with modify. Type ? at the prompt to see the available options. 
kadmin> add tillman Max ticket life [unlimited]: Max renewable life [unlimited]: Principal expiration time [never]: Attributes []: Verifying password - Password: xxxxxxxx Next, start the KDC services by running: # service kdc start # service kadmind start While there will not be any kerberized daemons running at this point, it is possible to confirm that the KDC is functioning by obtaining a ticket for the principal that was just created: % kinit tillman tillman@EXAMPLE.ORG's Password: Confirm that a ticket was successfully obtained using klist: % klist Credentials cache: FILE:/tmp/krb5cc_1001 Principal: tillman@EXAMPLE.ORG Issued Expires Principal Aug 27 15:37:58 2013 Aug 28 01:37:58 2013 krbtgt/EXAMPLE.ORG@EXAMPLE.ORG The temporary ticket can be destroyed when the test is finished: % kdestroy #### 15.5.2. Configuring a Server to Use Kerberos The first step in configuring a server to use Kerberos authentication is to ensure that it has the correct configuration in /etc/krb5.conf. The version from the KDC can be used as-is, or it can be regenerated on the new system. Next, create /etc/krb5.keytab on the server. This is the main part of "Kerberizing" a service - it corresponds to generating a secret shared between the service and the KDC. The secret is a cryptographic key, stored in a "keytab". The keytab contains the server’s host key, which allows it and the KDC to verify each others' identity. It must be transmitted to the server in a secure fashion, as the security of the server can be broken if the key is made public. Typically, the keytab is generated on an administrator’s trusted machine using kadmin, then securely transferred to the server, e.g., with scp(1); it can also be created directly on the server if that is consistent with the desired security policy. It is very important that the keytab is transmitted to the server in a secure fashion: if the key is known by some other party, that party can impersonate any user to the server! Using kadmin on the server directly is convenient, because the entry for the host principal in the KDC database is also created using kadmin. Of course, kadmin is a kerberized service; a Kerberos ticket is needed to authenticate to the network service, but to ensure that the user running kadmin is actually present (and their session has not been hijacked), kadmin will prompt for the password to get a fresh ticket. The principal authenticating to the kadmin service must be permitted to use the kadmin interface, as specified in /var/heimdal/kadmind.acl. See the section titled "Remote administration" in info heimdal for details on designing access control lists. Instead of enabling remote kadmin access, the administrator could securely connect to the KDC via the local console or ssh(1), and perform administration locally using kadmin -l. After installing /etc/krb5.conf, use add --random-key in kadmin. This adds the server’s host principal to the database, but does not extract a copy of the host principal key to a keytab. To generate the keytab, use ext to extract the server’s host principal key to its own keytab: # kadmin Max ticket life [unlimited]: Max renewable life [unlimited]: Principal expiration time [never]: Attributes []: kadmin> exit Note that ext_keytab stores the extracted key in /etc/krb5.keytab by default.
This is good when being run on the server being kerberized, but the --keytab path/to/file argument should be used when the keytab is being extracted elsewhere: # kadmin kadmin> exit The keytab can then be securely copied to the server using scp(1) or a removable media. Be sure to specify a non-default keytab name to avoid inserting unneeded keys into the system’s keytab. At this point, the server can read encrypted messages from the KDC using its shared key, stored in krb5.keytab. It is now ready for the Kerberos-using services to be enabled. One of the most common such services is sshd(8), which supports Kerberos via the GSS-API. In /etc/ssh/sshd_config, add the line: GSSAPIAuthentication yes After making this change, sshd(8) must be restarted for the new configuration to take effect: service sshd restart. #### 15.5.3. Configuring a Client to Use Kerberos As it was for the server, the client requires configuration in /etc/krb5.conf. Copy the file in place (securely) or re-enter it as needed. Test the client by using kinit, klist, and kdestroy from the client to obtain, show, and then delete a ticket for an existing principal. Kerberos applications should also be able to connect to Kerberos enabled servers. If that does not work but obtaining a ticket does, the problem is likely with the server and not with the client or the KDC. In the case of kerberized ssh(1), GSS-API is disabled by default, so test using ssh -o GSSAPIAuthentication=yes hostname. When testing a Kerberized application, try using a packet sniffer such as tcpdump to confirm that no sensitive information is sent in the clear. Various Kerberos client applications are available. With the advent of a bridge so that applications using SASL for authentication can use GSS-API mechanisms as well, large classes of client applications can use Kerberos for authentication, from Jabber clients to IMAP clients. Users within a realm typically have their Kerberos principal mapped to a local user account. Occasionally, one needs to grant access to a local user account to someone who does not have a matching Kerberos principal. For example, [email protected] may need access to the local user account webdevelopers. Other principals may also need access to that local account. The .k5login and .k5users files, placed in a user’s home directory, can be used to solve this problem. For example, if the following .k5login is placed in the home directory of webdevelopers, both principals listed will have access to that account without requiring a shared password: [email protected] [email protected] #### 15.5.4. MIT Differences The major difference between the MIT and Heimdal implementations is that kadmin has a different, but equivalent, set of commands and uses a different protocol. If the KDC is MIT, the Heimdal version of kadmin cannot be used to administer the KDC remotely, and vice versa. Client applications may also use slightly different command line options to accomplish the same tasks. Following the instructions at http://web.mit.edu/Kerberos/www/ is recommended. Be careful of path issues: the MIT port installs into /usr/local/ by default, and the FreeBSD system applications run instead of the MIT versions if PATH lists the system directories first. When using MIT Kerberos as a KDC on FreeBSD, the following edits should also be made to rc.conf: kdc_program="/usr/local/sbin/kdc" kdc_flags="" kdc_enable="YES" kadmind_enable="YES" #### 15.5.5. 
Kerberos Tips, Tricks, and Troubleshooting When configuring and troubleshooting Kerberos, keep the following points in mind: • When using either Heimdal or MITKerberos from ports, ensure that the PATH lists the port’s versions of the client applications before the system versions. • If all the computers in the realm do not have synchronized time settings, authentication may fail. “Clock Synchronization with NTP” describes how to synchronize clocks using NTP. • If the hostname is changed, the host/ principal must be changed and the keytab updated. This also applies to special keytab entries like the HTTP/ principal used for Apache’s www/mod_auth_kerb. • All hosts in the realm must be both forward and reverse resolvable in DNS or, at a minimum, exist in /etc/hosts. CNAMEs will work, but the A and PTR records must be correct and in place. The error message for unresolvable hosts is not intuitive: Kerberos5 refuses authentication because Read req failed: Key table entry not found. • Some operating systems that act as clients to the KDC do not set the permissions for ksu to be setuid root. This means that ksu does not work. This is a permissions problem, not a KDC error. • With MITKerberos, to allow a principal to have a ticket life longer than the default lifetime of ten hours, use modify_principal at the kadmin(8) prompt to change the maxlife of both the principal in question and the krbtgt principal. The principal can then use kinit -l to request a ticket with a longer lifetime. • When running a packet sniffer on the KDC to aid in troubleshooting while running kinit from a workstation, the Ticket Granting Ticket (TGT) is sent immediately, even before the password is typed. This is because the Kerberos server freely transmits a TGT to any unauthorized request. However, every TGT is encrypted in a key derived from the user’s password. When a user types their password, it is not sent to the KDC, it is instead used to decrypt the TGT that kinit already obtained. If the decryption process results in a valid ticket with a valid time stamp, the user has valid Kerberos credentials. These credentials include a session key for establishing secure communications with the Kerberos server in the future, as well as the actual TGT, which is encrypted with the Kerberos server’s own key. This second layer of encryption allows the Kerberos server to verify the authenticity of each TGT. • Host principals can have a longer ticket lifetime. If the user principal has a lifetime of a week but the host being connected to has a lifetime of nine hours, the user cache will have an expired host principal and the ticket cache will not work as expected. • When setting up krb5.dict to prevent specific bad passwords from being used as described in kadmind(8), remember that it only applies to principals that have a password policy assigned to them. The format used in krb5.dict is one string per line. Creating a symbolic link to /usr/share/dict/words might be useful. #### 15.5.6. Mitigating Kerberos Limitations Since Kerberos is an all or nothing approach, every service enabled on the network must either be modified to work with Kerberos or be otherwise secured against network attacks. This is to prevent user credentials from being stolen and re-used. An example is when Kerberos is enabled on all remote shells but the non-Kerberized POP3 mail server sends passwords in plain text. The KDC is a single point of failure. By design, the KDC must be as secure as its master password database. 
The KDC should have absolutely no other services running on it and should be physically secure. The danger is high because Kerberos stores all passwords encrypted with the same master key which is stored as a file on the KDC. A compromised master key is not quite as bad as one might fear. The master key is only used to encrypt the Kerberos database and as a seed for the random number generator. As long as access to the KDC is secure, an attacker cannot do much with the master key. If the KDC is unavailable, network services are unusable as authentication cannot be performed. This can be alleviated with a single master KDC and one or more slaves, and with careful implementation of secondary or fall-back authentication using PAM. Kerberos allows users, hosts and services to authenticate between themselves. It does not have a mechanism to authenticate the KDC to the users, hosts, or services. This means that a trojaned kinit could record all user names and passwords. File system integrity checking tools like security/tripwire can alleviate this. ### 15.6. OpenSSL OpenSSL is an open source implementation of the SSL and TLS protocols. It provides an encryption transport layer on top of the normal communications layer, allowing it to be intertwined with many network applications and services. The version of OpenSSL included in FreeBSD supports Transport Layer Security 1.0/1.1/1.2/1.3 (TLSv1/TLSv1.1/TLSv1.2/TLSv1.3) network security protocols and can be used as a general cryptographic library. OpenSSL is often used to encrypt authentication of mail clients and to secure web based transactions such as credit card payments. Some ports, such as www/apache24 and databases/postgresql11-server, include a compile option for building with OpenSSL. If selected, the port will add support using OpenSSL from the base system. To instead have the port compile against OpenSSL from the security/openssl port, add the following to /etc/make.conf: DEFAULT_VERSIONS+= ssl=openssl Another common use of OpenSSL is to provide certificates for use with software applications. Certificates can be used to verify the credentials of a company or individual. If a certificate has not been signed by an external Certificate Authority (CA), such as http://www.verisign.com, the application that uses the certificate will produce a warning. There is a cost associated with obtaining a signed certificate and using a signed certificate is not mandatory as certificates can be self-signed. However, using an external authority will prevent warnings and can put users at ease. This section demonstrates how to create and use certificates on a FreeBSD system. Refer to “Configuring an LDAP Server” for an example of how to create a CA for signing one’s own certificates. #### 15.6.1. Generating Certificates To generate a certificate that will be signed by an external CA, issue the following command and input the information requested at the prompts. This input information will be written to the certificate. At the Common Name prompt, input the fully qualified name for the system that will use the certificate. If this name does not match the server, the application verifying the certificate will issue a warning to the user, rendering the verification provided by the certificate as useless. 
# openssl req -new -nodes -out req.pem -keyout cert.key -sha256 -newkey rsa:2048 Generating a 2048 bit RSA private key ..................+++ .............................................................+++ writing new private key to 'cert.key' ----- You are about to be asked to enter information that will be incorporated What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]:US State or Province Name (full name) [Some-State]:PA Locality Name (e.g., city) []:Pittsburgh Organization Name (e.g., company) [Internet Widgits Pty Ltd]:My Company Organizational Unit Name (e.g., section) []:Systems Administrator Common Name (e.g., YOUR name) []:localhost.example.org Please enter the following 'extra' attributes to be sent with your certificate request An optional company name []:Another Name Other options, such as the expire time and alternate encryption algorithms, are available when creating a certificate. A complete list of options is described in openssl(1). This command will create two files in the current directory. The certificate request, req.pem, can be sent to a CA who will validate the entered credentials, sign the request, and return the signed certificate. The second file, cert.key, is the private key for the certificate and should be stored in a secure location. If this falls in the hands of others, it can be used to impersonate the user or the server. Alternately, if a signature from a CA is not required, a self-signed certificate can be created. First, generate the RSA key: # openssl genrsa -rand -genkey -out cert.key 2048 Generating RSA private key, 2048 bit long modulus .............................................+++ .................................................................................................................+++ e is 65537 (0x10001) Use this key to create a self-signed certificate. Follow the usual prompts for creating a certificate: # openssl req -new -x509 -days 365 -key cert.key -out cert.crt -sha256 You are about to be asked to enter information that will be incorporated What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]:US State or Province Name (full name) [Some-State]:PA Locality Name (e.g., city) []:Pittsburgh Organization Name (e.g., company) [Internet Widgits Pty Ltd]:My Company Organizational Unit Name (e.g., section) []:Systems Administrator Common Name (e.g. server FQDN or YOUR name) []:localhost.example.org Email Address []:[email protected] This will create two new files in the current directory: a private key file cert.key, and the certificate itself, cert.crt. These should be placed in a directory, preferably under /etc/ssl/, which is readable only by root. Permissions of 0700 are appropriate for these files and can be set using chmod. #### 15.6.2. Using Certificates One use for a certificate is to encrypt connections to the Sendmail mail server in order to prevent the use of clear text authentication. Some mail clients will display an error if the user has not installed a local copy of the certificate. Refer to the documentation included with the software for more information on certificate installation. 
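Before installing either certificate, it can be worthwhile to confirm what was actually written into it, whether it will be used with Sendmail as described here or with another application. A quick inspection with openssl(1), assuming the req.pem and cert.crt files created in the previous section:

# openssl req -noout -text -in req.pem
# openssl x509 -noout -subject -dates -in cert.crt

The first command displays the contents of the certificate request and the second prints the subject and validity period of the self-signed certificate, which makes it easy to spot a mistyped Common Name before the certificate is deployed.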
In FreeBSD 10.0-RELEASE and above, it is possible to create a self-signed certificate for Sendmail automatically. To enable this, add the following lines to /etc/rc.conf: sendmail_enable="YES" sendmail_cert_create="YES" sendmail_cert_cn="localhost.example.org" This will automatically create a self-signed certificate, /etc/mail/certs/host.cert, a signing key, /etc/mail/certs/host.key, and a CA certificate, /etc/mail/certs/cacert.pem. The certificate will use the Common Name specified in sendmail_cert_cn. After saving the edits, restart Sendmail: # service sendmail restart If all went well, there will be no error messages in /var/log/maillog. For a simple test, connect to the mail server’s listening port using telnet: # telnet example.com 25 Trying 192.0.34.166... Connected to example.com. Escape character is '^]'. 220 example.com ESMTP Sendmail 8.14.7/8.14.7; Fri, 18 Apr 2014 11:50:32 -0400 (EDT) ehlo example.com 250-example.com Hello example.com [192.0.34.166], pleased to meet you 250-ENHANCEDSTATUSCODES 250-PIPELINING 250-8BITMIME 250-SIZE 250-DSN 250-ETRN 250-STARTTLS 250-DELIVERBY 250 HELP quit 221 2.0.0 example.com closing connection Connection closed by foreign host. If the STARTTLS line appears in the output, everything is working correctly. ### 15.7. VPN over IPsec Internet Protocol Security (IPsec) is a set of protocols which sit on top of the Internet Protocol (IP) layer. It allows two or more hosts to communicate in a secure manner by authenticating and encrypting each IP packet of a communication session. The FreeBSD IPsec network stack is based on the http://www.kame.net/ implementation and supports both IPv4 and IPv6 sessions. IPsec is comprised of the following sub-protocols: • Encapsulated Security Payload (ESP): this protocol protects the IP packet data from third party interference by encrypting the contents using symmetric cryptography algorithms such as Blowfish and 3DES. • Authentication Header (AH): this protocol protects the IP packet header from third party interference and spoofing by computing a cryptographic checksum and hashing the IP packet header fields with a secure hashing function. This is then followed by an additional header that contains the hash, to allow the information in the packet to be authenticated. • IP Payload Compression Protocol (IPComp): this protocol tries to increase communication performance by compressing the IP payload in order to reduce the amount of data sent. These protocols can either be used together or separately, depending on the environment. IPsec supports two modes of operation. The first mode, Transport Mode, protects communications between two hosts. The second mode, Tunnel Mode, is used to build virtual tunnels, commonly known as Virtual Private Networks (VPNs). Consult ipsec(4) for detailed information on the IPsec subsystem in FreeBSD. IPsec support is enabled by default on FreeBSD 11 and later. For previous versions of FreeBSD, add these options to a custom kernel configuration file and rebuild the kernel using the instructions in Configuring the FreeBSD Kernel: options IPSEC IP security device crypto If IPsec debugging support is desired, the following kernel option should also be added: options IPSEC_DEBUG debug for IP security This rest of this chapter demonstrates the process of setting up an IPsecVPN between a home network and a corporate network. In the example scenario: • Both sites are connected to the Internet through a gateway that is running FreeBSD. 
• The gateway on each network has at least one external IP address. In this example, the corporate LAN’s external IP address is 172.16.5.4 and the home LAN’s external IP address is 192.168.1.12. • The internal addresses of the two networks can be either public or private IP addresses. However, the address space must not overlap. In this example, the corporate LAN’s internal IP address is 10.246.38.1 and the home LAN’s internal IP address is 10.0.0.5. corporate home 10.246.38.1/24 -- 172.16.5.4 <--> 192.168.1.12 -- 10.0.0.5/24 #### 15.7.1. Configuring a VPN on FreeBSD To begin, security/ipsec-tools must be installed from the Ports Collection. This software provides a number of applications which support the configuration. The next requirement is to create two gif(4) pseudo-devices which will be used to tunnel packets and allow both networks to communicate properly. As root, run the following command on each gateway: corp-gw# ifconfig gif0 create corp-gw# ifconfig gif0 10.246.38.1 10.0.0.5 corp-gw# ifconfig gif0 tunnel 172.16.5.4 192.168.1.12 home-gw# ifconfig gif0 create home-gw# ifconfig gif0 10.0.0.5 10.246.38.1 home-gw# ifconfig gif0 tunnel 192.168.1.12 172.16.5.4 Verify the setup on each gateway, using ifconfig gif0. Here is the output from the home gateway: gif0: flags=8051 mtu 1280 tunnel inet 172.16.5.4 --> 192.168.1.12 inet6 fe80::2e0:81ff:fe02:5881%gif0 prefixlen 64 scopeid 0x6 inet 10.246.38.1 --> 10.0.0.5 netmask 0xffffff00 Here is the output from the corporate gateway: gif0: flags=8051 mtu 1280 tunnel inet 192.168.1.12 --> 1`
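The ifconfig commands above only configure the tunnel until the next reboot. A minimal sketch of making the gif(4) configuration persistent in /etc/rc.conf on the home gateway, using the addresses from this example; the exact variable names should be checked against rc.conf(5) for the release in use:

cloned_interfaces="gif0"
ifconfig_gif0="inet 10.0.0.5 10.246.38.1 tunnel 192.168.1.12 172.16.5.4"

The corporate gateway would use the same two lines with the local and remote addresses swapped in both the inet and tunnel parts.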
## Fexprs remain inscrutable

In the last few weeks, a bit of discussion of John Shutt’s Fexpr-based programming language, Kernel, has sprung up in various circles. I read Shutt’s PhD thesis, implemented a couple of toy Kernel-inspired interpreters (in Javascript and Racket), and became simultaneously excited about the simplicity, hygiene and power of the approach and apprehensive about the strong possibility that there was something fundamental I was overlooking. After all, the theory of Fexprs is trivial.

Calling Kernel’s Fexprs “trivial” doesn’t seem quite right to me; perhaps calling them “inscrutable” would be a better fit.

##### Three serious problems with Kernel and $vau

After a discussion with a friend earlier this week, I think it’s important to note three serious problems with Kernel and the underlying $vau calculi Shutt proposes:

1. It’s not possible even to compute the free variables of a term in $vau-calculus in general. This makes compilation, automatic refactoring, and cross-referencing impossible in general.

2. There are too many contexts that tell terms apart; contextual equivalence is too fine. Consider these two terms:

($lambda (f) (f (+ 3 4)))
($lambda (f) (f (+ 4 3)))

These two terms are not contextually equivalent, because if you pass an operative as the argument to the function, the operative can inspect the syntactic structure of the program.

3. This over-fine contextual equivalence is a problem in practice, not just a theoretical concern. Consider the following definition of a foldr function:

($define! foldr
  ($lambda (c n xs)
    ($if (null? xs)
         n
         (c (car xs) (foldr c n (cdr xs))))))

See how c has control over whether and when the recursive case is evaluated, because it is not required to evaluate its arguments! Compare this to the same definition where the last line is replaced with

($let ((ys (foldr c n (cdr xs))))
  (c (car xs) ys))

By introducing the intermediate $let, we take control over the recursion away from c, forcing evaluation of the recursive case.

##### Comments (closed)

John Shutt 00:01, 11 Oct 2011 To properly appreciate the outstanding challenges for fexprs, one has to first grok why they work in theory. And it does seem there's something fundamental missing from the above discussion. To unravel it, one might start with this: symbols are not variables. Symbols are source code, variables are not. Determining the scope of a symbol may be a Turing-complete problem; determining whether a variable is free in a term is trivial, just as it would be in lambda-calculus. (Btw, it's *not* $vau-calculus, it's vau-calculus; $vau is an object-language device, vau is a calculus device. That's not nit-picking, it's at the heart of the matter.) The calculus has two separate parts: one part for source expressions, with no rewrite rules at all, and a second part for fexpr calls, whose central rewrite rule is ---with deadly accuracy--- called the beta-rule. And bridging the gap between those two parts is the machinery of evaluation, which looks up symbols, converts pairs into combinations, and unwraps applicatives while scheduling evaluation of their operands. The source-expression part of the calculus has a trivial theory; source expressions are data, observables, not operationally equivalent to each other. They're irreducible in the full calculus. The fexpr-calling part of the calculus has a nontrivial theory that is, essentially, the theory of lambda-calculus.
I admit, I was floored when I realized the theory of vau-calculus was not *only* nontrivial, but in fact had the whole theory of lambda-calculus inside it. It was more shocking because I'd been working with vau-calculus for several years by that time, without even noticing. The evaluation bridge turns the whole calculus into a device for deducing the sort of partial evaluation you're talking about. A major punchline of Chapter 9 of the dissertation is that the pure calculus doesn't need variables to model pure Kernel, but does need them to do much partial evaluation. In later chapters, when I introduce other kinds of variables to manage impurities, the kind of variables bound by the vau operator are called "partial-evaluation variables". Tony Garnock-Jones 11:40, 11 Oct 2011 (in reply to this comment) Hi John, glad to see you here! Thanks for your comments. Re the "free variable" question: I'd like to be able to ask "Is b free in ($lambda () (a b))?"; do you think that is a sensible question? Can it be answered? What about "Is b free in ($lambda (a) (a b))?" Re that lambda-calculus is contained within vau-calculus: I'd love to see a more full explanation of this, spelling it out in detail including the consequences for designs of full programming languages based on vau-calculus. I found myself missing a discussion of the relationship between the lambda-calculus, the vau-calculus, and the Kernel source language in chapter 15 of your thesis; in particular, it seems that most people will want to be using $lambda rather than $vau when programming in Kernel, and it's not clear to me how the connection extends up through the evaluation of source expressions. (Pardon me if I get the terminology wrong here.) John Shutt 11:42, 17 Oct 2011 (in reply to this comment) Concerning "Is b free in ($lambda () (a b))", and the like. Before this would become meaningful, you'd have to frame it in terms of a calculus, so it becomes clear precisely what you want to know. ($lambda () (a b)) is a data structure. What you presumably want to know is, what would happen if that data structure were evaluated. So you're really asking about the behavior of some vau-calculus term of the form [eval ($lambda () (a b)) e] where e is some environment. You want to know whether symbol b is sure to be evaluated in e rather than in some other environment. And once one frames the question that way, it's very clear that the answer cannot possibly be determined without knowing something about e. In fact, it's really a question about e. What bindings of $lambda and a are visible in e, and what are the implications of those bindings for the evaluation of symbol b in this particular expression? Perhaps we only know that e is the state of a certain environment sometime during a program run, and we then hope to prove that that environment can only be mutated in certain ways. And so on. Concerning lambda-calculus and vau-calculus. There are the *symbols* $lambda and $vau, which for a calculus are just constants. And then there are \lambda and \vau, which are operators in the calculi. (Since this isn't LaTeX, if I need to write a \lambda- or \vau-abstraction I'll delimit it with square brackets, so parentheses are only for pairs, i.e., cons cells.) The relationship between \lambda and \vau is that they are, essentially, just two different ways of writing exactly the same thing. The basic difference between \lambda-calculus and \vau-calculus is that \lambda-calculus has no pairs, and no evaluation.
So it isn't really meaningful to even ask about $lambda expressions or$vau expressions with reference to \lambda-calculus.  In \vau-calculus, expressions of the form  [eval <expr> <env>]  get rewritten so that they start to have subterms of forms such as  [\vau <param> <body>]  and  [combine <combiner> <operands> <env>].  And this is true regardless of how many or few $lambda's or$vau's the original <expr> had in it.  So you'll understand, I hope, when I say that it doesn't really matter, for applicability of \vau-calculus, whether the programmer uses a lot of $lambda's and very few$vau's. I too would be interested to find out just how many opportunities there are *in practice* to apply the sorts of "partial evaluation" simplifications afforded by \vau-calculus.  The calculus is a tool for asking that question.  The answer might well provide great insights into subtle (or gross) improvements of the language design. In case you happen not to have seen one or both of these, they might interest you: http://axisofeval.blogspot.com... http://lambda-the-ultimate.org... John Shutt 17:32, 20 Jan 2012 [I have since repaired the mistake that led to comments appearing at different URLs. -- tonyg] (I remember *planning* to comment on this, quite some time back; apparently I didn't, but I'm not sure why not.) It's just as easy to determine the free variables of a term in vau-calculi as in lambda-calculi (nothing essential is changed by the additional binding constructs and classes of variables).  You're presumably thinking of symbols, whose interpretation may be statically undecidable; but symbols are data, they aren't variables, and that data/computation distinction is what empowers a nontrivial theory of fexprs.  A key issue in optimization is figuring out when you can guarantee things about the evaluation of data, and therefore eliminate it. The following two expressions produce contextually equivalent objects (barring use of eq?), provided they're evaluated in a stable environment with standard bindings. ($lambda (f) (apply f (+ 3 4))) ($lambda (f) (apply f (+ 4 3))) What best to make of that technique is an interesting question, but it does suggest that the power afforded by Kernel isn't inherently untameable. Tony Garnock-Jones 17:59, 20 Jan 2012 (in reply to this comment) [I have since repaired the mistake that led to comments appearing at different URLs. -- tonyg] Hi John; you *did* comment on this, but technical issues (!) mean that the earlier comments are only visible at this subtly different URL: http://www.eighty-twenty.org//... :-( I made a mess of the disqus configuration at one point, and it doesn't seem possible to fix older comments. :-( Regarding free variables, I'm thinking of source code. When I write ($lambda (a) (a b)), I'd like to know whether b is free in that expression. The problem is that I can't decide that until I have a complete program to look at, because I could always pass an operative in as a which doesn't examine its argument. This makes static analysis of source code tricky. Similarly, I could pass in an operative for f to ($lambda (f) (f (+ 3 4))) that could tell the difference between being given (+ 3 4) and being given (+ 4 3). That's a very fine contextual equivalence indeed. John Shutt 21:47, 21 Jan 2012 (in reply to this comment) Until you have a complete program to look at, you don't even know what $lambda means, so it might not be meaningful to ask whether b is "free". It also seems the possibilities are more complicated than merely "free" or "bound". 
We did go over this territory in the before, I see. My point about using apply was that *some* terms have larger operational equivalence classes than others. You can write the applicative so that passing in an operative breaks its encapsulation, but you can also write the applicative so that doesn't happen. Other techniques could be used, of course; apply is just one that's built in. For example, ($define! appv ($lambda (c) ($if (operative? c) (wrap c) c))) ($lambda (f) ((appv f) (+ 3 4))) The dangerous thing (using f in operator position) is not difficult to do by accident, but these techniques are a starting point for considering how to make the accident less likely. Mitch Wand 07:24, 23 Jan 2012 (in reply to this comment) John wrote: > Until you have a complete program to look at, you don't even know what$lambda means, so it might not be meaningful to ask whether b is "free". That's a dealbreaker for any real system:  I don't want to have to look at 10^6 lines of code in order to figure out what my 10-line function does. The whole point of "Fexprs are Trivial" was that fexprs make it impossible to analyze program FRAGMENTS, which is in practice something we need to do all the time. In this comment, John seems to agree that this holds for vau, too. John Shutt 11:41, 23 Jan 2012 (in reply to this comment) (I'm going to address the issues in the above comment; I do recommend, though, that the commenter do a self-diagnostic on the debate tactics they have employed.) The above reasoning apparently elides the same distinction as the 1998 paper: it does not distinguish between source code and reasoning about source code.  That elision leads to a problematic theoretical inference from the paper, and a problematic practical inference here. The 1998 paper could be said, equivalently, either to assume all calculus terms are source code, or to assume all reasoning about source code must be source-to-source.  There would be no difficulty with that as an explicit premise (as opposed to an implicit assumption); one could call the paper something such as "The Theory of Fexpr Source-to-Source Equivalence is Trivial" ---or on the other side of the coin, "The Theory of Perfect Reflection is Trivial", which, like the actual title of the paper, is catchy--- and everyone would be alert to the specificity of the result derived by the paper.  The unfortunate reality is that the assumption is not presented in such an explicit way, and what readers seem likely to take away from the paper is "fexprs cannot be reasoned about formally".  The proposition that we can talk about something with precision, yet cannot reason about it formally, should set off alarm claxons in one's mind.  Nor is it sufficient to qualify merely that one can only reason about whole programs.  We can obviously make formally provable statements about program fragments --- conditional statements about them.  The usefulness of those conditional statements depends on the nature and likelihood of the conditions. Now, let's look at the practical side of this.  I've got a source expression, which is part of a larger program.  Let's say the expression is ten lines, and the program is a million lines.  When I make a conditional statement about the meaning of this expression, I'm not going to state the condition in terms of a set of possible source-code contexts in which the fragment might occur.  I'm going to state the condition in terms of environments in which the fragment might be evaluated.  
And if the interpreter (I mean that term in its broad sense, including compilers) does static analysis, that too will be in terms of environments.  Notice that environments are *not source code* --- we are *not* reasoning only about source code.  Our reasoning involves source code, but also involves something that cannot be read from a source file (although, since Kernel happens to have first-class environments, it might result from *evaluating* source code). Why does it sound so scary that the surrounding program is a million lines long?  Because of an implicit assumption that the difficulty of deducing the environment is proportional to the length of the surrounding program.  That assumption should not be implicit.  My work makes a *big deal* out of the importance of designing any language with fexprs so as to maximize deductions about environments; you will find it discussed in Chapter 5 of my dissertation (but I don't mean to discourage reading that document from the beginning).  The strengths and weaknesses of the Kernel design provisions for this, are an interesting topic; I've never claimed, nor supposed, Kernel was some panacea to cure all the ills of programming, never to be improved upon, and one can't expect to improve on it without studying its weaknesses; but studying the weaknesses of those provisions is entirely different from overlooking their existence.  They do exist. Mitch Wand 11:49, 23 Jan 2012 (in reply to this comment) You are correct, of course, that one can state theorems about program fragments in the form of conditionals. Can you state the conditions under which the two fragments in the original post($lambda (f) (f (+ 3 4))) ($lambda (f) (f (+ 4 3))) are equivalent? If  I write a compiler, how  can my compiler verify those conditions in order to transform one of  these into the other, or into ($lambda (f) (f 7)) ? Mitch Wand 11:51, 23 Jan 2012 (in reply to this comment) oops, that got botched in formatting. The code fragments are, of course ($lambda (f) (f (+ 3 4))) ($lambda (f) (f (+ 4 3))) ($lambda (f) (f 7)) Hopefully this will come out with better formatting. John Shutt 14:32, 23 Jan 2012 (in reply to this comment) I like that question :-).  Okay, let's see.  Two conditions would suffice --- one about the static environment, and one about the way the results of evaluation will be used.  (Necessary-and-sufficient conditions would of course be undecidable.) (1) The compiler can prove that both expressions will be evaluated in an environment where $lambda has its standard binding when they are evaluated, and where + will have its standard binding when the resulting applicatives are *called*. It doesn't matter what binding$lambda has when they are called, nor what binding + has when they are created. This brings up another point. Source expressions are passive data, and optimizations pertain to active terms (with meanings like "evaluate this expression in that environment").  So the question probably shouldn't be when those first two terms could be transformed into the third, but rather when evaluations of all three could be transformed to the same optimized active term. (2) The compiler can prove that the applicatives, which result from evaluating these two expressions, will (wait for it) not be called with an operative argument.  Seriously.  I could, and won't just now, write a whole little essay about issues surrounding the undesirability of putting a parameter in operator position like this.  
But there *are* circumstances, not altogether implausible, in which the compiler could make this deduction.  The compiler might prove that the applicatives will never escape from a closed environment in which all calls to them can be identified by the compiler (standard features like $provide! and $let-safe may help with this).  Or, even if the applicatives themselves might be called in uncontrolled situations where they might be given an operative, one might be able to prove that some *particular* calls to the applicatives will not be passed operatives, and optimize those calls even if one can't optimize the applicatives themselves in general.

David Barbour 14:31, 26 Jan 2012 (in reply to this comment)

In the interest of making it difficult to do dangerous things by accident, I would suggest you reverse the syntactic requirement: the path of least resistance (f (+ 3 4)) is equivalent to using apply, and more explicit code - e.g. ($f (+ 3 4)) - allows f to be an operative.

David Barbour 14:45, 26 Jan 2012 (in reply to this comment)

You don't locally know whether the ($lambda ...) term as a whole will be evaluated vs. inspected by another operative. A compiler or programmer couldn't decide even the first sentence of (1) without observing the whole program. Given runtime representation of the environment, it can also be difficult to statically determine whether and how a term will be used at runtime.

John Shutt 08:45, 27 Jan 2012 (in reply to this comment)

Briefly (the severe indentation is likely trying to tell us something).  There are various measures in the language to facilitate provably stable environments.  Among these:

Environment mutation primitives only affect local bindings, not visible bindings in ancestors; so an ancestor can be provably stable even though visible in uncontrolled contexts.  Among other things, this means a large module can protect its top-level environment with its top-level statements (things like $let, $let-safe, $provide!), so the stability of the top-level environment doesn't depend on lower-level details.

The tentatively conjectured module system supposes that each individually compiled module has a top-level environment that starts out standard, isolating imports from other modules so they can't accidentally corrupt the standard bindings.

John Shutt 09:34, 27 Jan 2012 (in reply to this comment)

I've considered this before. Not everything I'll say about it is negative.

First: It *should* be possible to build this arrangement on top of standard Kernel. Moreover, if it *isn't* possible to do so, there'd be important things to learn from *why* it isn't possible. It has seemed to me for many years that a whole class of such alterations possible in *theory* are far less so in *practice* because they would really require replacing the entire standard environment. Deep issues ensue; perhaps to tackle on my blog (where I have a vast backlog).

Second: The arrangement disrupts the uniformity of the language core, making the useful functioning of the evaluator algorithm dependent on a distinguished combiner (you call it $, a name I've considered for... something, but it'd have to be incredibly fundamental and abstraction-invariant to justify using up that name --- the combiner here should be a pretty rare operation, so imo wouldn't warrant that name).  That distinguished combiner is closer to being a "special form" than I'd have thought one could get without reserved keywords. The nonuniformity is a big deal partly because it's a slippery slope.
If we're tempted to do this, there will be lots of other things waiting in the wings to tempt us *just a little* further.  Note my comment early in the Kernel Report about resonance from purity.   An incredibly "smooth" language core means that whatever you build on top of it will effectively inherit flexibility from the existence of the underlying smoothness (also why the operative/wrapper is key even if some abstraction layer were to sequester it where the programmer can't directly access it).  Which further favors a very pure language core which one then hopes to build "on top of". Third:  This does seem like it could have the desired effect, in a simple sort of way.  Oddly enough, an alternative I've been musing on for some years is far less simple... at least in implementation.  I'll likely blog about it eventually --- "guarded environments", akin to Kernel's guarded continuations.  (Guarded environments are mentioned in a footnote somewhere in my dissertation.) David Barbour 13:05, 27 Jan 2012 (in reply to this comment) I wasn't even considering mutation. It is the wide and dynamic potential for observation of a term that causes difficulty. John Shutt 14:13, 27 Jan 2012 (in reply to this comment) Hm?  The other subthread, below, relates to visibility; this one is about binding stability. David Barbour 14:30, 27 Jan 2012 (in reply to this comment) Uh, no. This subthread is about how terms are observed or distinguished. Stability was never part of it.
# How to draw a Bezier Curve with HTML5 Canvas?

The HTML5 <canvas> tag is used to draw graphics, animations, etc. using scripting. It is a new tag introduced in HTML5. The canvas element has a DOM method called getContext, which obtains the rendering context and its drawing functions. This function takes one parameter, the type of context: 2d.

To draw a Bezier curve with HTML5 canvas, use the bezierCurveTo() method. The method adds the given point to the current path, connected to the previous one by a cubic Bezier curve with the given control points. You can try to run the following code to learn how to draw a Bezier curve on HTML5 Canvas.

The x and y parameters in the bezierCurveTo() method are the coordinates of the endpoint. cp1x and cp1y are the coordinates of the first control point, and cp2x and cp2y are the coordinates of the second control point.

## Example

<!DOCTYPE html>
<html>
   <title>HTML5 Canvas Tag</title>
   <body>
      <canvas id="newCanvas" width="500" height="300" style="border:1px solid #000000;"></canvas>
      <script>
         var c = document.getElementById('newCanvas');
         var ctx = c.getContext('2d');
         ctx.beginPath();
         ctx.moveTo(75,40);
         ctx.bezierCurveTo(75,37,70,25,50,25);
         ctx.bezierCurveTo(20,25,20,62.5,20,62.5);
         ctx.bezierCurveTo(20,80,40,102,75,120);
         ctx.bezierCurveTo(110,102,130,80,130,62.5);
         ctx.bezierCurveTo(130,62.5,130,25,100,25);
         ctx.bezierCurveTo(85,25,75,37,75,40);
         ctx.fill();
      </script>
   </body>
</html>

## Output

The browser renders the filled shape traced by the Bezier path above on the canvas.
# Can roots of any polynomial be expressed using Eulerian function?

I encountered an interesting function which is called "Eulerian" by Wolfram's MathWorld: $$\phi(q)=\prod_{k=1}^{\infty} (1-q^{k})$$

It is interesting because it is claimed that the roots of any polynomial can be expressed in terms of this function and elementary functions. Is this true, and how can the roots of an arbitrary polynomial be expressed?

P.S. In Mathematica this function is implemented as QPochhammer[q]

- Could you please link to where this statement is made? – David Speyer Feb 22 '12 at 0:11

This Euler function is essentially the same as Dedekind's eta function (Wikipedia, Mathworld). The usual use of the $\eta$ function is to express various modular forms. In particular, you should be able to rewrite Hermite's solution of the quintic by modular functions in terms of the $\eta$ function. I don't know whether you can express roots of higher degree polynomials in terms of $\eta$, but I would guess not. Here are my two hazy arguments:

• Hilbert conjectured that the roots of a general sextic could not be expressed using functions of one variable. I am told that this conjecture appears in Über die Gleichung neunten Grades, Mathematische Annalen Volume 97, Number 1, 243-250; I have not read this article. Abhyankar proves a version of Hilbert's conjecture in this paper, which I discussed in my answer here. Unfortunately, to my limited understanding of Abhyankar's result, it is about algebraic functions of one variable, so it is not clear to me that it answers your question.

• According to Wikipedia, Hermite's solution of the quintic by modular forms was finally generalized to equations of arbitrary degree by Umemura, using Siegel modular forms. Siegel modular forms are analytic functions of many variables. This suggests to me that a generalization using modular forms in a single variable was found unworkable.

As you can tell by the hesitant style of this answer, I find the results on solving equations by transcendental functions rather hard to follow; they all seem to be written in very equation-heavy nineteenth-century style. Since these questions come up fairly often on MO, it would be great if someone could recommend a good survey which translates them into the modern language.

-

So basically, the answer should be 'yes' for degree 5 or less and 'no' for higher degree? – J.C. Ottem Feb 22 '12 at 0:40

Should be. But there are a lot of caveats in the above. I'm hoping someone will give a clear reference to clean this all up. – David Speyer Feb 22 '12 at 1:35

Looks right. The $\eta$ function gives access to classical modular functions, which give field extensions with Galois group contained in a quotient of some $\text{GL}_2({\bf Z}/N{\bf Z})$. That's enough to deal with the generic quintic, but not sextics and beyond (though the sextics only barely fail: $A_6$ is of $\text{GL}_2$ type, but over the field of $9$ elements!). – Noam D. Elkies Feb 22 '12 at 2:16

@D.Speyer: because the kernel is not a congruence subgroup. – Noam D. Elkies Feb 22 '12 at 19:59

In his Traité p. 378, Jordan proves the following theorem: The solution of the general equation of degree > 5 cannot be reduced to that of equations arising from circular or elliptic functions. As far as I can see the proof boils down to showing that the alternating group $\mathcal{A}_n$ with $n \geq 6$ is not isomorphic to $\mathrm{PSL}_2(\mathbf{Z}/p\mathbf{Z})$ for any prime $p$.
After that, Jordan also remarks that any equation can be solved by bisecting periods of hyperelliptic functions, as Noam said above. –  François Brunault Feb 23 '12 at 10:31
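Not part of the original thread, but for readers who want to experiment: the Euler function is easy to evaluate numerically for |q| < 1 by truncating the product (it is the q-Pochhammer symbol (q; q)∞ mentioned in the question's postscript), and Euler's pentagonal number theorem gives an independent series with which to check the truncation. A minimal Python sketch; the helper names and truncation depths are arbitrary choices:

```python
import numpy as np

def euler_phi(q, terms=200):
    """Truncated Euler function phi(q) = prod_{k>=1} (1 - q^k)."""
    k = np.arange(1, terms + 1)
    return np.prod(1.0 - q**k)

def pentagonal_series(q, nmax=50):
    """Euler's pentagonal number theorem: sum_{n=-inf}^{inf} (-1)^n q^{n(3n-1)/2}."""
    total = 0.0
    for n in range(-nmax, nmax + 1):
        total += (-1) ** n * q ** (n * (3 * n - 1) // 2)
    return total

q = 0.3
print(euler_phi(q), pentagonal_series(q))  # the two evaluations agree to machine precision
```

This only illustrates what phi(q) is as an analytic function of q; it says nothing, of course, about the solvability questions discussed in the answers.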
## Abstract:

We work out the math behind the so-called income mountain plots used in the book "Factfulness" by Hans Rosling and use these insights to generate such plots using tidyverse code. The trip includes a mixture of log-normals, the density transformation theorem, histogram vs. density and then skipping all those details again to make nice moving mountain plots.

## Introduction

Reading the book Factfulness by Hans Rosling seemed like a good thing to do during the summer months. The 'possibilistic' writing style is contagious and his TedEx presentations and media interviews are legendary teaching material on how to support your arguments with data. What a shame he passed away in 2017. What is really enjoyable about the book is that the Gapminder web page allows you to study many of the graphs from the book interactively and contains the data for download. Being a fan of transparency and reproducibility, I got interested in the so-called income mountain plots, which show how incomes are distributed within individuals of a population:

Screenshot of the 2010 income mountain plot. Free material from www.gapminder.org.

One notices that the "mountains" are plotted on a log-base-2 x-axis and without a y-axis annotation. Why? Furthermore, world income data usually involve mean income per country, so I got curious how/if these plots were made without access to finer granularity level data. The aim of this blog post is to answer these questions by using Gapminder data freely available from their webpage. The answer ended up as a nice tidyverse exercise and could serve as a motivating application for basic probability course content.

## Data Munging Gapminder

Data on income, population and Gini coefficient were needed to analyse the above formulated questions. I have done this previously in order to visualize the Olympic Medal Table Gapminder Style. We start by downloading the GDP data, which is the annual gross domestic product per capita by Purchasing Power Parities (PPP) measured in international dollars, fixed 2011 prices. Hence, the inflation over the years and differences in the cost of living between countries is accounted for and can thus be compared - see the Gapminder documentation for further details. We download the data from Gapminder where they are available in wide format as Excel-file. For tidyverse handling we reshape them into long format.

##Download gdp data from gapminder - available under a CC BY-4 license.
if (!file.exists(file.path(fullFigPath, "gapminder-gdp.xlsx"))) {
}
rename(country=geo.name) %>%
  select(-geo, -indicator, -indicator.name) %>%
  gather(key="year", value="gdp", -country) %>%
  filter(!is.na(gdp))

Furthermore, we rescale GDP per year to daily income, because this is the unit used in the book.

gdp_long %<>% mutate(gdp = gdp / 365.25)

Similar code segments are written for (see the code on github for details)

• the gini (gini_long) and population (pop_long) data
• the regional group (=continent) each country belongs to (group)

The four data sources are then joined into one long tibble gm.
For each year we also compute the fraction a country's population makes up of the world population that year (column w) as well as the fraction within the year and region the population makes up (column w_region) : ## # A tibble: 15,552 x 9 ## country region code year gini gdp population w w_region ## <chr> <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 Afghanistan Asia AFG 1800 0.305 1.65 3280000 0.00347 0.00518 ## 2 Albania Europe ALB 1800 0.389 1.83 410445 0.000434 0.00192 ## 3 Algeria Africa DZA 1800 0.562 1.96 2503218 0.00264 0.0342 ## 4 Andorra Europe AND 1800 0.4 3.28 2654 0.00000280 0.0000124 ## 5 Angola Africa AGO 1800 0.477 1.69 1567028 0.00166 0.0214 ## # ... with 1.555e+04 more rows ## Income Mountain Plots The construction of the income mountain plots is thoroughly described on the Gapminder webpage, but without mathematical detail. With respect to the math it says: "Bas van Leeuwen shared his formulas with us and explained how to the math from ginis and mean income, to accumulated distribution shapes on a logarithmic scale." Unfortunately, the formulas are not shared with the reader. It's not black magic though: The income distribution of a country is assumed to be log-normal with a given mean $$\mu$$ and standard deviation $$\sigma$$ on the log-scale, i.e. $$X \sim \operatorname{LogN}(\mu,\sigma^2)$$. From knowing the mean income $$\overline{x}$$ of the distribution as well as the Gini index $$G$$ of the distribution, one can show that it's possible to directly infer $$(\mu, \sigma)$$ of the log-normal distribution. Because the Gini index of the log-normal distribution is given by $G = 2\Phi\left(\frac{\sigma}{\sqrt{2}}\right)-1,$ where $$\Phi$$ denotes the CDF of the standard normal distribution, and by knowing that the expectation of the log-normal is $$E(X) = \exp(\mu + \frac{1}{2}\sigma^2)$$, it is possible to determine $$(\mu,\sigma)$$ as: $\sigma = \sqrt{2}\> \Phi^{-1}\left(\frac{G+1}{2}\right) \quad\text{and}\quad \mu = \log(\overline{x}) - \frac{1}{2} \sigma^2.$ We can use this to determine the parameters of the log-normal for every country in each year. ### Mixture distribution The income distribution of a set of countries is now given as a Mixture distribution of log-normals, i.e. one component for each of the countries in the set with a weight proportional to the population of the country. As an example, the world income distribution would be a mixture of the 192 countries in the Gapminder dataset, i.e. $f_{\text{mix}}(x) = \sum_{i=1}^{192} w_i \>\cdot \>f_{\operatorname{LogN}}(x; \mu_i, \sigma_i^2), \quad\text{where} \quad w_i = \frac{\text{population}_i}{\sum_{j=1}^{192} \text{population}_j},$ and $$f_{\operatorname{LogN}}(x; \mu_i, \sigma_i^2)$$ is the density of the log-normal distribution with country specific parameters. Note that we could have equally used the mixture approach to define the income of, e.g., a continent region. With the above definition we define standard R-functions for computing the PDF (dmix), CDF (pmix), quantile function (qmix) and a function for sampling from the distribution (rmix) - see the github code for details. We use the mixture approach to compute the density of the world income distribution obtained by "mixing" all 192 log-normal distributions. This is shown below for the World income distribution of the year 2015. Note the $$\log_2$$ x-axis. This presentation is Factfulness' preferred way of illustrating the skew income distribution. 
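Before the blog's R code below, here is a small stand-alone sketch of the same conversion in Python/SciPy. It is not the post's implementation: the function and variable names are made up, and the example mean income and Gini value are purely illustrative. It turns a mean daily income and a Gini index into the log-normal parameters and evaluates the implied density on the log-base-2 axis used in the plots:

```python
import numpy as np
from scipy.stats import norm

def lognormal_params(mean_income, gini):
    """sigma = sqrt(2) * Phi^{-1}((G+1)/2),  mu = log(mean) - sigma^2 / 2."""
    sigma = np.sqrt(2.0) * norm.ppf((gini + 1.0) / 2.0)
    mu = np.log(mean_income) - 0.5 * sigma**2
    return mu, sigma

def lognormal_pdf(x, mu, sigma):
    """Density of LogN(mu, sigma^2) evaluated at x > 0."""
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma**2)) / (x * sigma * np.sqrt(2 * np.pi))

# Example country: mean income 10 $/day, Gini 0.40 (illustrative numbers only)
mu, sigma = lognormal_params(10.0, 0.40)

# Density on the log2 x-axis: f_Y(y) = log(2) * 2^y * f_X(2^y)
y = np.linspace(-2, 9, 500)          # y = log2(income)
x = 2.0**y
pdf_log2 = np.log(2.0) * x * lognormal_pdf(x, mu, sigma)

# A region or world density is then the population-weighted sum of such pdfs (the f_mix above).
```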
##Restrict to year 2015 gm_recent <- gm %>% filter(year == 2015) %>% ungroup ##Make a data frame containing the densities of each region for ##the gm_recent dataset df_pdf <- data.frame(log2x=seq(-2,9,by=0.05)) %>% mutate(x=2^log2x) pdf_region <- gm_recent %>% group_by(region) %>% do({ pdf <- dmix(df_pdf$x, meanlog=.$meanlog, sdlog=.$sdlog, w=.$w_region) data.frame(x=df_pdf$x, pdf=pdf, w=sum(.$w), population=sum(.$population), w_pdf = pdf*sum(.$w)) }) ## Total is the sum over all regions - note the summation is done on ## the original income scale and NOT the log_2 scale. However, one can show that in the special case the result on the log-base-2-scale is the same as summing the individual log-base-2 transformed densities (see hidden CHECKMIXTUREPROPERTIES chunk). pdf_total <- pdf_region %>% group_by(x) %>% summarise(region="Total",w=sum(w), pdf = sum(w_pdf)) ## Expectation of the distribution mean_mix <- gm_recent %>% summarise(mean=sum(w * exp(meanlog + 1/2*sdlog^2))) %$% mean ## Median of the distribution median_mix <- qmix(0.5, gm_recent$meanlog, gm_recent$sdlog, gm_recent$w) ## Mode of the distribution on the log2-scale (not transformation invariant!) mode_mix <- pdf_total %>% mutate(pdf_log2x = log(2) * x * pdf) %>% filter(pdf_log2x == max(pdf_log2x)) %$% x For illustration we compute a mixture distribution for each region using all countries within region. This is shown in the left pane. Note: because a log-base-2-transformation is used for the x-axis, we need to perform a change of variables, i.e. we compute the density for $$Y=\log_2(X)=g(X)$$ where $$X\sim f_{\text{mix}}$$, i.e. $f_Y(y) = \left| \frac{d}{dy}(g^{-1}(y)) \right| f_X(g^{-1}(y)) = \log(2) \cdot 2^y \cdot f_{\text{mix}}( 2^y) = \log(2) \cdot x \cdot f_{\text{mix}}(x), \text{ where } x=2^y.$ In the right pane we then show the region specific densities each weighted by their population fraction. These are then summed up to yield the world income shown as a thick blue line. The median of the resulting world income distribution is at 20.0$/day, whereas the mean of the mixture is at an income of 39.9$/day and the mode (on the log-base-2 scale) is 17.1$/day. Note that the later is not transformation invariant, i.e. the value is not the mode of the income distribution, but of $$\log_2(X)$$. To get the income mountain plots as shown in Factfulness, we additionally need to obtain number of people on the $$y$$-axis and not density. We do this by partitioning the x-axis into non-overlapping intervals and then compute the number of individuals expected to fall into a given interval with limits $$[l, u]$$. Under our model this expectation is $n \cdot (F_{\text{mix}}(u)-F_{\text{mix}}(l)),$ where $$F_{\text{mix}}$$ is the CDF of the mixture distribution and $$n$$ is the total world population. The mountain plot below shows this for a given partition with $$n=7,305,116,647$$. Note that $$2.5\cdot 10^8$$ corresponds to 250 mio people. Also note the $$\log_2$$ x-axis, and hence (on the linear scale) unequally wide intervals of the partitioning. Contrary to Factfulness', I prefer to make this more explicit by indicating the intervals explicitly on the x-axis of the mountain plot, because it is about number of people in certain income brackets. 
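The expected head-count per income bracket can be sketched the same way. Again this is a hypothetical Python fragment with made-up parameters and populations; it mirrors the formula n · (F_mix(u) − F_mix(l)) rather than the make_mountain_df() R function that follows below:

```python
import numpy as np
from scipy.stats import norm

def mix_cdf(x, mus, sigmas, weights):
    """CDF of a weighted log-normal mixture: F_mix(x) = sum_i w_i * Phi((log x - mu_i)/sigma_i)."""
    x = np.atleast_1d(x)[:, None]
    return np.sum(weights * norm.cdf((np.log(x) - mus) / sigmas), axis=1)

# Illustrative three-"country" world (parameters and populations invented for the example)
mus     = np.array([1.0, 2.0, 3.0])
sigmas  = np.array([0.8, 0.7, 0.6])
pop     = np.array([1.0e9, 3.0e9, 3.3e9])
weights = pop / pop.sum()

edges = 2.0 ** np.arange(-2, 10)                 # log2-spaced bracket edges in $/day
F = mix_cdf(edges, mus, sigmas, weights)
people_per_bracket = pop.sum() * np.diff(F)      # n * (F(u) - F(l)) for each bracket
```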
##Function to prepare the data.frame to be used in a mountain plot make_mountain_df <- function(gm_df, log2x=seq(-2,9,by=0.25)) { ##Make a data.frame containing the intervals with appropriate annotation df <- data.frame(log2x=log2x) %>% mutate(x=2^log2x) %>% mutate(xm1 = lag(x), log2xm1=lag(log2x)) %>% mutate(xm1=if_else(is.na(xm1),0,xm1), log2xm1=if_else(is.na(log2xm1),0,log2xm1), mid_log2 = (log2x+log2xm1)/2, width = (x-xm1), width_log2 = (log2x-log2xm1)) %>% ##Format the interval character representation mutate(interval=if_else(xm1<2, sprintf("[%6.1f-%6.1f]",xm1,x), sprintf("[%4.0f-%4.0f]",xm1,x)), interval_log2x=sprintf("[2^(%4.1f)-2^(%4.1f)]",log2xm1,log2x)) ##Compute expected number of individuals in each bin. people <- gm_df %>% group_by(region) %>% do({ countries <- . temp <- df %>% slice(-1) %>% rowwise %>% mutate( prob_mass = diff(pmix(c(xm1,x), meanlog=countries$meanlog, sdlog=countries$sdlog, w=countries$w_region)), people = prob_mass * sum(countries$population) ) temp %>% mutate(year = max(gm_df$year)) }) ##Done return(people) } ##Create mountain plot data set for gm_recent with default spacing. (people <- make_mountain_df(gm_recent)) ## # A tibble: 176 x 13 ## # Groups: region [4] ## region log2x x xm1 log2xm1 mid_log2 width width_log2 interval interval_log2x prob_mass people year ## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr> <chr> <dbl> <dbl> <chr> ## 1 Africa -1.75 0.297 0.25 -2 -1.88 0.0473 0.25 [ 0.2- 0.3] [2^(-2.0)-2^(-1.8)] 0.00134 1586808. 2015 ## 2 Africa -1.5 0.354 0.297 -1.75 -1.62 0.0563 0.25 [ 0.3- 0.4] [2^(-1.8)-2^(-1.5)] 0.00205 2432998. 2015 ## 3 Africa -1.25 0.420 0.354 -1.5 -1.38 0.0669 0.25 [ 0.4- 0.4] [2^(-1.5)-2^(-1.2)] 0.00307 3639365. 2015 ## 4 Africa -1 0.5 0.420 -1.25 -1.12 0.0796 0.25 [ 0.4- 0.5] [2^(-1.2)-2^(-1.0)] 0.00448 5305674. 2015 ## 5 Africa -0.75 0.595 0.5 -1 -0.875 0.0946 0.25 [ 0.5- 0.6] [2^(-1.0)-2^(-0.8)] 0.00636 7537067. 2015 ## # ... with 171 more rows This can then be plotted with ggplot2: In light of all the talk about gaps, it can also be healthy to plot the income distribution on the linear scale. From this it becomes obvious that linearly there indeed are larger absolute differences in income, but -as argued in the book- the exp-scale (base 2) incorporates peoples perception about the worth of additional income. Because the intervals are not equally wide, only the height of the bars should be interpreted in this plot. However, the eye perceives area, which in this case is misguiding. Showing histograms with unequal bin widths is a constant dilemma between area, height, density and perception. The recommendation would be that if one wants to use the linear-scale, then one should use equal width linear intervals or directly plot the density. As a consequence, plots like the above are not recommended, but they make obvious the tail behaviour of the income distribution - a feature which is somewhat hidden by the log-base-2-scale plots. Of course none of the above plots looks as nice as the Gapminder plots, but they have proper x and y-axes annotation and, IMHO, are clearer to interpret, because they do not mix the concept of density with the concept of individuals falling into income bins. As the bin-width converges to zero, one gets the density multiplied by $$n$$, but this complication of infinitesimal width bins is impossible to communicate. In the end this was the talent of Hans Rosling and Gapminder - to make the complicated easy and intuitive! 
We honor this by skipping the math1 and celebrate the result as the art it is! ##Make mountain plot with smaller intervals than in previous plot. ggplot_oneyear_mountain <- function(people, ymax=NA) { ##Make the ggplot p <- ggplot(people %>% rename(Region=region), aes(x=mid_log2,y=people, fill=Region)) + geom_col(width=min(people$width_log2)) + ylab("Number of individuals") + xlab("Income [$/day]") + scale_x_continuous(minor_breaks = NULL, trans="identity", breaks = trans_breaks("identity", function(x) x,n=11), labels = trans_format(trans="identity", format=function(x) ifelse(x<0, sprintf("%.1f",2^x), sprintf("%.0f",2^x)))) + theme(axis.text.y=element_blank(), axis.ticks.y=element_blank()) + scale_y_continuous(minor_breaks = NULL, breaks = NULL, limits=c(0,ymax)) + ggtitle(paste0("World Income Mountain ",max(people$year))) + NULL #Show it and return it. print(p) invisible(p) } ##Create the mountain plot for 2015 gm_recent %>% make_mountain_df(log2x=seq(-2,9,by=0.01)) %>% ggplot_oneyear_mountain() ## Discussion Our replicated mountain plots do not exactly match those made by Gapminder (c.f. the screenshot). It appears as if our distributions are located slightly more to the right. It is not entirely clear why there is a deviation, but one possible problem could be that we do the translation into income per day differently? I'm not an econometrician, so this could be a trivial blunder on my side, however, the values in this post are roughly of the same magnitude as the graph on p. 45 in van Zanden et al. (2011) mentioned in the Gapminder documentation page, whereas the Gapminder curves appear too far to the left. It might be worthwhile to check individual country data underlying the graphs to see where the difference is.
# Category theory

Category theory can be helpful in understanding Haskell's type system. There exists a "Haskell category", of which the objects are Haskell types, and the morphisms from types a to b are Haskell functions of type a -> b. Various other Haskell structures can be used to make it a Cartesian closed category.

The Haskell wikibook has an introduction to Category theory, written specifically with Haskell programmers in mind.

## 1 Definition of a category

A category $\mathcal{C}$ consists of two collections:

Ob$(\mathcal{C})$, the objects of $\mathcal{C}$

Ar$(\mathcal{C})$, the arrows of $\mathcal{C}$ (which are not the same as Arrows defined in GHC)

Each arrow f in Ar$(\mathcal{C})$ has a domain, dom f, and a codomain, cod f, each chosen from Ob$(\mathcal{C})$. The notation $f\colon A \to B$ means f is an arrow with domain A and codomain B. Further, there is a function $\circ$ called composition, such that $g \circ f$ is defined only when the codomain of f is the domain of g, and in this case, $g \circ f$ has the domain of f and the codomain of g. In symbols, if $f\colon A \to B$ and $g\colon B \to C$, then $g \circ f \colon A \to C$.

Also, for each object A, there is an arrow $\mathrm{id}_A\colon A \to A$ (often simply denoted as 1 or id, when there is no chance of confusion).

### 1.1 Axioms

The following axioms must hold for $\mathcal{C}$ to be a category:

1. If $f\colon A \to B$ then $f \circ \mathrm{id}_A = \mathrm{id}_B\circ f = f$ (left and right identity)

2.
If $f\colon A \to B$ and $g \colon B \to C$ and $h \colon C \to D$, then $h \circ (g \circ f) = (h \circ g) \circ f$ (associativity) ### 1.2 Examples of categories • Set, the category of sets and set functions. • Mon, the category of monoids and monoid morphisms. • Monoids are themselves one-object categories. • Grp, the category of groups and group morphisms. • Rng, the category of rings and ring morphisms. • Grph, the category of graphs and graph morphisms. • Top, the category of topological spaces and continuous maps. • Preord, the category of preorders and order preserving maps. • CPO, the category of complete partial orders and continuous functions. • Cat, the category of categories and functors. • the category of data types and functions on data structures • the category of functions and data flows (~ data flow diagram) • the category of stateful objects and dependencies (~ object diagram) • the category of values and value constructors • the category of states and messages (~ state diagram) ### 1.3 Further definitions With examples in Haskell at: ## 2 Categorical programming Catamorphisms and related concepts, categorical approach to functional programming, categorical programming. Many materials cited here refer to category theory, so as an introduction to this discipline see the #See also section.
# Math Help - Finding indefinite integral with PCD method? 1. ## Finding indefinite integral with PCD method? Ok, so the problem is take the indefinite integral of $x^4 / (x-1)^3$. The section the homework problem is in taught the Partial Fraction Decomposition method to rewrite an integral in a way that it can be integrated. When I do it, I have $x^4 = A(x-1)^2 + B(x-1) + C$ so the only convenient x value is 1, so I was able to solve for c, because $c=1$, but if I use any X value per the rules to find the other two, well, I can't isolate one. For instance, I tried using $x=2$ which gave me $A + B = 15$ which doesn't help me much. Any ideas on how to use PCD (or a more efficient method if there is one) to solve this integral? Thanks! 2. As it's written, you can't apply partial fraction descomposition directly. You have to make a long division 'cause the degree of the numerator is greater than the denominator one. I dunno why you require to apply such method, since this can be killed with a simple substitution. 3. Originally Posted by Krizalid As it's written, you can't apply partial fraction descomposition directly. You have to make a long division 'cause the degree of the numerator is greater than the denominator one. I dunno why you require to apply such method, since this can be killed with a simple substitution. Right, but when I do a u sub, I end up with $u = x-1$ and $du=dx$ but I need to switch the u sub around so $x = u + 1$ to plug in for the top variable and then the integral would be $(u+1)^4 / u^3 du$ which is still pretty ugly because you either have to multiply out the top part and divide each term by $u^3$ or use polynomial division. I was just wondering if there was an easier way to do it. 4. Originally Posted by emttim84 the integral would be $(u+1)^4 / u^3 du$ which is still pretty ugly This is not ugly I'd consider $(1+u)^n$ ugly for $n\ge5.$ You only need to expand that and integrate term by term. Is it too hard? At least memorize some binomial powers, it helps. 5. Originally Posted by Krizalid This is not ugly I'd consider $(1+u)^n$ ugly for $n\ge5.$ You only need to expand that and integrate term by term. Is it too hard? At least memorize some binomial powers, it helps. Condescension isn't exactly useful to me (nor do I care about it), just FYI. And judging by the answer I get when I multiply it out and integrate it, I'd have a hard time settling for that answer without a solutions manual to check it against. Oh, and conveniently, it's an even-numbered problem so the solutions aren't in the solutions manual, obviously. 6. Originally Posted by emttim84 Condescension isn't exactly useful to me (nor do I care about it), just FYI. And judging by the answer I get when I multiply it out and integrate it, I'd have a hard time settling for that answer without a solutions manual to check it against. Oh, and conveniently, it's an even-numbered problem so the solutions aren't in the solutions manual, obviously. Krizalid's reply was not condescending. For the record I think that the question asked, although perhaps a touch prevocative, was nevertheless reasonable ...... It looks to me like you're ticked off over a few other things, things that Krizalid is not responsible for. fyi $\, (u + 1)^4 = u^4 + 4 u^3 + 6 u^2 + 4u + 1$. It takes about seconds to do using the binomial theorem. A bit longer of course if you expand as $(u + 1)^2 (u + 1)^2 = (u^2 + 2u + 1)(u^2 + 2u + 1) = ......$ So the integrand is $u + 4 + \frac{6}{u} + \frac{4}{u^2} + \frac{1}{u^3}$. 
The integral is therefore $\frac{u^2}{2} + 4u + 6 \ln |u| - \frac{4}{u} - \frac{1}{2 u^2} + C$. Now sub back u = x - 1. Done. (Although $\frac{(x - 1)^2}{2} + 4(x - 1)$ can potentially be expanded and simplified). We're all fortunate that passionate experts like Krizalid are available to help - for free (except for the small price of putting up with a condescending tone ) 7. I don't think Krizalid's comment was condescending at all. I think it was good advice. -Dan 8. My apologies. As was hinted at, I was a little irritated at this problem because when I get an answer that's really length and ugly, I tend to question whether it's right, and unfortunately a lot of the homework problems my teacher assigns are not in the solutions manual so I have no method to check them outside here.
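For anyone in the original poster's position of wanting to check an answer without a solutions manual, a computer algebra system can do the verification. A short SymPy sketch, not part of the thread, that differentiates the claimed antiderivative and compares it with the integrand:

```python
import sympy as sp

x = sp.symbols('x', positive=True)   # positivity only to keep log simplification simple
integrand = x**4 / (x - 1)**3

# Let SymPy integrate directly
print(sp.integrate(integrand, x))

# The thread's answer in terms of u = x - 1 (log|u| written as log(u), i.e. for x > 1)
u = x - 1
claimed = u**2/2 + 4*u + 6*sp.log(u) - 4/u - 1/(2*u**2)

# Differentiating the claimed antiderivative should recover the integrand; the difference simplifies to 0
print(sp.simplify(sp.diff(claimed, x) - integrand))
```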
# Help with DS18S20 and 16F628

#### ChriX ##### Member

Hey all, i'm trying to interface the Dallas DS18S20 with a PIC 16F628 and having some problems. I can't even get the PIC to detect the presence pulse generated by the sensor. Here's my code, i've got the data line of the sensor on bit 0 on porta.

Macros
Code:
TEMP_OUTPUT macro
    BANK1           ;Change to BANK1
    bcf TRISA,0     ;Set PORTA,0 to an output
    BANK0           ;Back to BANK0
    bcf PORTA,0     ;Set output pin LOW
    endm

TEMP_INPUT macro
    BANK1           ;To BANK1 again
    bsf TRISA,0     ;Set PORTA,0 to an input
    BANK0           ;Back to BANK0
    endm

Subroutine
Code:
OW_INIT
    TEMP_OUTPUT
    DELAY_MICRO 250
    DELAY_MICRO 250
    TEMP_INPUT
    DELAY_MICRO 60
    btfsc PORTA,0
    call NO_OW_DETECT
    TEMP_OUTPUT
    return

The NO_OW_DETECT routine is just sending something to an attached LCD to tell me the sensor has not been found. I haven't been able to get it to skip that at all yet, even though the sensor should be pulling that pin low. Hopefully someone with some experience with these can help me out.

#### kinjalgp ##### Active Member

Making 1-wire devices work is very difficult if you are doing it for the first time. I have a very bad experience with them. It takes a lot of trial and error cycles to write your own routines to communicate with them because timings are very critical over here. Make sure that your delay 250uS routines are accurate. Also count the over-head that it takes in calling a delay routine and subtract it from the delay count, because all timings are in micro-seconds so over-head makes a lot of difference. In 8051 calling a subroutine takes 24uS@12MHz and if you don't consider that it doesn't work.

#### ChriX ##### Member

Do you think i'm waiting too long and missing the pulse then? I learnt the delay routines i'm using here from veys.com, he explains them very well and they have worked great for me in the past. Did you get yours going in the end? It says in the datasheet to hold the output pin low for 480uS minimum, but i'm holding for 500uS so that shouldn't be a problem. Then it says the sensor waits 15-60uS before pulling the line low after detecting the rising edge, so i've waited 60 there. Only other thing it says is to keep the master rx for 480uS minimum, which I haven't done because I didn't think it would matter, i've just switched it right back to an output after checking the pin to see if it's clear. Will have a look at that link and another play.

#### kinjalgp ##### Active Member

Yes I got my setup running when I counted the over-head delay and subtracted it from delay count. My delay routine rounds off to 488uS for the first step & 72uS for the second step. My routine goes like this

DQ = 0
delay 480 ; ~488uS
DQ = 1
delay 60 ; ~72uS
present = DQ
delay 400 ; ~424uS
Stop

Present variable = 0 if DS1820 is present.

#### ChriX ##### Member

Present variable = 1? Shouldn't that be 0 because the DS is pulling low? Or have we just found the reason why mine isn't working :lol:

#### kinjalgp ##### Active Member

Oh sorry! That should be 0.

#### ChriX ##### Member

Damn, thought i'd found something then. Oh well, time to check all my timings again then, it must be those that are at fault. I have checked them with ISIS simulating my code, and I graphed the output and it all seems ok, if only there was a 1-wire library for it.

#### motion ##### New Member

Hi, you can try the ff. code which was adapted from the 1-wire routines I have successfully used.
Code:
OW_INIT:
    TEMP_OUTPUT
    DELAY_MICRO 250
    DELAY_MICRO 250
    DELAY_MICRO 250
    CLRWDT
    TEMP_INPUT
WAIT4HI:
    btfss PORTA,0
    goto WAIT4HI
WAIT4LO:
    btfsc PORTA,0
    goto WAIT4LO
    TEMP_OUTPUT
    return

This sample code waits indefinitely for the hi-lo transitions. You should insert a timeout counter in the WAIT4HI and WAIT4LO loops.
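kinjalgp's point about subtracting call overhead can be made concrete with a little arithmetic. On a mid-range PIC such as the 16F628 one instruction cycle is Fosc/4, and CALL and RETURN each take two instruction cycles; the 4 MHz clock below is an assumption (the thread never states the oscillator frequency) and loop set-up instructions inside the delay routine are ignored, so treat this as a sketch of the bookkeeping rather than the poster's actual numbers:

```python
fosc_hz = 4_000_000            # assumed oscillator; not stated in the thread
tcy_us = 4 / fosc_hz * 1e6     # instruction cycle = Fosc/4 -> 1.0 us at 4 MHz

call_cycles, return_cycles = 2, 2
overhead_us = (call_cycles + return_cycles) * tcy_us   # fixed cost per DELAY_MICRO call

for target_us in (60, 250, 480):
    # cycles the delay body should burn so that call + body + return hits the target
    body_cycles = target_us / tcy_us - (call_cycles + return_cycles)
    print(f"target {target_us:3d} us -> program the loop for ~{body_cycles:.0f} cycles "
          f"(a naive count of {target_us / tcy_us:.0f} runs ~{overhead_us:.0f} us long)")
```

That fixed overhead, repeated across several delay calls, is one reason the measured delays in the thread (for example "delay 60 ; ~72uS") come out longer than their nominal values.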
What Types of Radiation Are There?

Radiation is energy that travels in the form of waves or particles and is part of our everyday environment. People are exposed to radiation from cosmic rays, as well as to radioactive materials found in the soil, water, food, air and also inside the body.

The nuclei of some atoms are unstable: they have an excess of energy or mass (or both). In order to reach a stable state they must release that extra energy or mass in the form of radiation; such atoms are said to be radioactive, and the process is called radioactive decay. Naturally occurring radon gas, for example, emits alpha particles.

The radiation one typically encounters is one of four types: alpha radiation, beta radiation, gamma radiation, and x radiation. Neutron radiation is also encountered, for example in nuclear power plants, in high-altitude flight, and from some industrial radioactive sources. The types differ in mass, energy, and how deeply they penetrate people and objects.

Alpha radiation. An alpha particle is an ejected helium nucleus: two protons and two neutrons. It has a large mass, compared to the other ionising radiations, and a strong positive charge, and it transfers more energy to an absorber than beta or gamma radiation. Alpha radiation is the least penetrating: it travels only a short distance (a few inches) in air, is absorbed by the thickness of the skin, and can be stopped by a sheet of paper or a thin layer of water or dust. It is therefore not an external hazard, but alpha-emitting materials can be harmful to humans if they are inhaled, swallowed, or absorbed through open wounds. Instruments cannot detect alpha radiation through even a thin layer of material; a thin-window Geiger-Mueller (GM) "pancake" probe can detect its presence, and special training in the use of these instruments is essential for making accurate measurements.

Beta radiation. A beta particle is a fast-moving electron, with a very small mass and a negative charge. Beta radiation is more penetrating than alpha: it may travel several feet in air and can penetrate human skin to the "germinal layer," where new skin cells are produced, but it is stopped by a few millimetres of aluminium or a few centimetres of body tissue. If high levels of beta-emitting contaminants remain on the skin for a prolonged period of time, they may cause skin injury, and beta-emitting contaminants may be harmful if deposited internally. Clothing provides some protection against beta radiation. Most beta emitters can be detected with a survey instrument and a thin-window GM probe; some, however, such as hydrogen-3 (tritium), carbon-14 and sulfur-35, produce very low-energy, poorly penetrating radiation that may be difficult or impossible to detect.

Gamma radiation and x rays. Gamma rays are high-energy electromagnetic waves caused by changes within the nucleus; they have no mass and no charge, and gamma radiation and/or characteristic x rays frequently accompany the emission of alpha and beta radiation during radioactive decay. Gamma radiation is the most penetrating of the three radiations: it can travel many feet in air and many inches in human tissue, and dense materials are needed for shielding, several centimetres of lead or about one metre of concrete. Sealed radioactive sources and machines that emit gamma radiation and x rays therefore constitute mainly an external hazard to humans. Clothing provides little shielding from penetrating radiation, but will prevent contamination of the skin by gamma-emitting radioactive materials. Gamma radiation is easily detected by survey meters with a sodium iodide detector probe. X rays are very similar to gamma rays and are likewise penetrating radiation.

A simple absorber test distinguishes the three: if the radiation is stopped by paper, the source is emitting alpha; if it is stopped by a few millimetres of aluminium (about 5 or 6), the source is emitting beta; if some radiation is still able to penetrate a few millimetres of lead, the source is emitting gamma.

The electromagnetic spectrum. Gamma rays and x rays belong to the electromagnetic spectrum, along with visible light, radio waves and ultraviolet light. The common designations, in order of increasing photon energy (decreasing wavelength), are: radio waves, microwaves, infrared (IR), visible light, ultraviolet (UV), X-rays and gamma rays; these electromagnetic radiations differ only in the amount of energy they have. Gamma rays, X-rays and high ultraviolet are classified as ionizing radiation because their photons have enough energy to ionize atoms, causing chemical reactions; in living tissue the electrical ions produced by radiation can affect normal biological processes, and ionising radiation can cause burns, radiation sickness, and cancer. For non-ionizing electromagnetic radiation the associated photons have only sufficient energy to change the rotational, vibrational or electronic valence configurations of molecules and atoms; some types of non-ionising radiation can also be harmful, and the effect of non-ionizing forms of radiation on living tissue has only recently been studied.

Solar radiation. The radiation emitted from the Sun is a mixture of electromagnetic waves ranging from infrared to ultraviolet, together with visible light: visible light makes up about 42.3%, infrared radiation about 49.4%, and ultraviolet a fraction above 8% of the total solar radiation reaching Earth. Most of the radiation emitted by the Sun is absorbed by the atmosphere; the part which is not absorbed reaches the Earth. The Sun is also a good example of an isotropic radiator: if you were to measure the amount of radiated energy around its circumference, the readings would all be fairly equal.

Biological effects and dose. Different types of radiation differ somewhat in biological effectiveness per unit of dose; for example, alpha particle radiation absorbed in tissue is considered to be about 20 times more effective as a carcinogen than the same dose of gamma rays. The concept of equivalent dose, expressed in units of Sievert (Sv), was introduced for purposes of radiation protection to account for this. Any living organism can be killed by radiation if exposed to a large enough dose, but the lethal dose varies greatly from species to species: mammals can be killed by less than 10 Gy, fruit flies may survive 1,000 Gy, and many bacteria and viruses may survive even higher doses. A very high level of radiation exposure delivered over a short period of time can cause symptoms such as nausea and vomiting within hours and can sometimes result in death over the following days or weeks.

Uses. There are many uses of radiation in medicine. The most well known is using x rays to see whether bones are broken; the broad area of x-ray use is called radiology, and within it are more specialized areas such as mammography, computerized tomography (CT), and nuclear medicine (the specialty where radioactive material is usually injected into the patient). Roughly half of all cancer patients receive some type of radiation therapy at one point during the term of their treatment. Every day in the UK, radiation types are used in a diverse range of industrial, medical, research and communications applications; these bring real benefits, but can create potentially harmful exposure risks that must be effectively controlled.
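The "about 20 times more effective" figure above is exactly what equivalent dose captures. As a quick illustrative calculation (the weighting factor of 20 for alpha is taken from the text above, the factor of 1 for gamma is the standard convention, and the absorbed-dose value is invented for the example):

```python
# Equivalent dose (Sv) = radiation weighting factor * absorbed dose (Gy)
absorbed_dose_gray = 0.001          # 1 mGy absorbed dose (example value only)
w_gamma, w_alpha = 1, 20            # weighting factors: ~1 for gamma, ~20 for alpha

dose_gamma_sv = w_gamma * absorbed_dose_gray   # 0.001 Sv = 1 mSv
dose_alpha_sv = w_alpha * absorbed_dose_gray   # 0.020 Sv = 20 mSv

print(f"gamma: {dose_gamma_sv*1000:.0f} mSv, alpha: {dose_alpha_sv*1000:.0f} mSv")
```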
# Expand log(sqrt(xy))/z using properties of logarithms?

Hints recoverable from the answer snippets: rewrite any radicals using rational exponents, then use the properties of logarithms (the product, quotient and power rules) to expand.

A related worked fragment from one of the answers: to expand ln(xy^2/z), the quotient rule gives ln(xy^2) - ln z, and the product rule then gives ln x + 2 ln y - ln z.
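Reading the title expression as log(√(xy)/z) is an assumption, since the placement of z is ambiguous in the original question. Under that reading the expansion goes:

$$\log\!\left(\frac{\sqrt{xy}}{z}\right) = \log\!\left((xy)^{1/2}\right) - \log z = \tfrac{1}{2}\log(xy) - \log z = \tfrac{1}{2}\log x + \tfrac{1}{2}\log y - \log z.$$

If instead the intended expression is $\frac{\log\sqrt{xy}}{z}$, only the numerator expands, giving $\frac{1}{2z}\left(\log x + \log y\right)$.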
# 5.5 Zeros of polynomial functions  (Page 7/14) Page 7 / 14 A shipping container in the shape of a rectangular solid must have a volume of 84 cubic meters. The client tells the manufacturer that, because of the contents, the length of the container must be one meter longer than the width, and the height must be one meter greater than twice the width. What should the dimensions of the container be? 3 meters by 4 meters by 7 meters Access these online resources for additional instruction and practice with zeros of polynomial functions. ## Key concepts • To find $\text{\hspace{0.17em}}f\left(k\right),\text{\hspace{0.17em}}$ determine the remainder of the polynomial $\text{\hspace{0.17em}}f\left(x\right)\text{\hspace{0.17em}}$ when it is divided by $\text{\hspace{0.17em}}x-k.\text{\hspace{0.17em}}$ This is known as the Remainder Theorem. See [link] . • According to the Factor Theorem, $\text{\hspace{0.17em}}k\text{\hspace{0.17em}}$ is a zero of $\text{\hspace{0.17em}}f\left(x\right)\text{\hspace{0.17em}}$ if and only if $\text{\hspace{0.17em}}\left(x-k\right)\text{\hspace{0.17em}}$ is a factor of $\text{\hspace{0.17em}}f\left(x\right).$ See [link] . • According to the Rational Zero Theorem, each rational zero of a polynomial function with integer coefficients will be equal to a factor of the constant term divided by a factor of the leading coefficient. See [link] and [link] . • When the leading coefficient is 1, the possible rational zeros are the factors of the constant term. • Synthetic division can be used to find the zeros of a polynomial function. See [link] . • According to the Fundamental Theorem, every polynomial function has at least one complex zero. See [link] . • Every polynomial function with degree greater than 0 has at least one complex zero. • Allowing for multiplicities, a polynomial function will have the same number of factors as its degree. Each factor will be in the form $\text{\hspace{0.17em}}\left(x-c\right),\text{\hspace{0.17em}}$ where $\text{\hspace{0.17em}}c\text{\hspace{0.17em}}$ is a complex number. See [link] . • The number of positive real zeros of a polynomial function is either the number of sign changes of the function or less than the number of sign changes by an even integer. • The number of negative real zeros of a polynomial function is either the number of sign changes of $\text{\hspace{0.17em}}f\left(-x\right)\text{\hspace{0.17em}}$ or less than the number of sign changes by an even integer. See [link] . • Polynomial equations model many real-world scenarios. Solving the equations is easiest done by synthetic division. See [link] . ## Verbal Describe a use for the Remainder Theorem. The theorem can be used to evaluate a polynomial. Explain why the Rational Zero Theorem does not guarantee finding zeros of a polynomial function. What is the difference between rational and real zeros? Rational zeros can be expressed as fractions whereas real zeros include irrational numbers. If Descartes’ Rule of Signs reveals a no change of signs or one sign of changes, what specific conclusion can be drawn? If synthetic division reveals a zero, why should we try that value again as a possible solution? Polynomial functions can have repeated zeros, so the fact that number is a zero doesn’t preclude it being a zero again. ## Algebraic For the following exercises, use the Remainder Theorem to find the remainder. 
$(x^4-9x^2+14)\div(x-2)$

$(3x^3-2x^2+x-4)\div(x+3)$

$-106$

$(x^4+5x^3-4x-17)\div(x+1)$

$(-3x^2+6x+24)\div(x-4)$

$0$

$(5x^5-4x^4+3x^3-2x^2+x-1)\div(x+6)$
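As a quick numerical cross-check of the listed answers (my addition, not part of the textbook exercises), the Remainder Theorem can be applied with a few lines of Python using Horner's method:

```python
# The remainder of f(x) divided by (x - k) equals f(k); evaluate f(k) by
# Horner's method (equivalent to synthetic division). Coefficients are in
# descending order of degree.

def remainder(coeffs, k):
    acc = 0
    for c in coeffs:
        acc = acc * k + c
    return acc

# f(x) = 3x^3 - 2x^2 + x - 4 divided by (x + 3), i.e. k = -3
print(remainder([3, -2, 1, -4], -3))   # -106
# f(x) = -3x^2 + 6x + 24 divided by (x - 4), i.e. k = 4
print(remainder([-3, 6, 24], 4))       # 0
```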
# Why does my integrator perform noise shaping?

I am designing (in MATLAB) a simple system with a DC signal at 0 Hz and some added noise. I am performing a differentiation of the signal (by simply subtracting the last values from each other), next I am applying a rolling average algorithm to filter the high frequencies, and then I simply integrate over a number of samples.

Using the pwelch() function on the output I noticed that there is some noise shaping going on, and I don't really know where it is coming from... I am aware of delta-sigma modulators, but this is not what I am doing here: for my integrator I am using a very simple algorithm that adds the last $$n$$ samples together. I would expect a simple integrator behaviour here (with the frequency response falling at 20 dB/dec). Why does the noise shaping take place?

https://i.stack.imgur.com/dTfLl.png

I am posting the code for the signal generation, differentiator and integrator below.

    clear
    home
    close all

    % generate the signal
    fs = 1;
    T = 1/fs;
    t = 0:T:(2^14);
    pos = 3 * t;
    noise = 2 * randn(1, length(t));
    pos = pos + noise;

    % differentiate the signal
    diff_time = 1;
    diff_out = [];
    len = length(pos);
    for j = 1+diff_time : len
        diff_out(j-diff_time) = pos(j) - pos(j-diff_time);
    end

    % filter the hf with MA filter with 16 taps
    MA_avg = zeros(1,16);
    for i = 1 : length(diff_out)-16
        avg_val = MA_avg(end);
        old_val = diff_out(i);
        current_val = diff_out(i+16);
        MA_avg(end+1) = current_val - old_val + avg_val;
    end

    % integrate
    pos_intgr = integrate(MA_avg, 16);

    % get the spectral power
    NFFT = length(pos_intgr);
    [P, F] = pwelch(pos_intgr, ones(NFFT,1), 0, NFFT, fs, 'power');
    PdBW = 10*log10(P);
    plot(F, PdBW)
    title("pwelch")
    xlabel('Frequency')
    ylabel('Power spectrum (dbW)')

    function [acu_down] = integrate(signal, taps)
        len = length(signal);
        acu_down = [0];
        i = 1;
        k = 1;
        while i <= len-taps
            sum = 0;  % reset the sum
            for j = 0 : taps-1
                sum = sum + signal(i+j);
            end
            i = i + taps;
            acu_down(k) = sum;
            k = k + 1;
        end
        % cut out the LSB
        acu_down = acu_down ./ taps;
    end

• I don't really follow your code or how you got that plot. As far as I can tell, "noise" is never used, the differentiated signal is never used. Your moving average is just a moving average of a ramp. Your integrate looks like another average rather than an integrate (an accumulator, which would add the output to the next input, would be an integrator). That said, the differentiator would give you the result you see, while the moving averages should approximate a Sinc function shape as the number of samples grows large. – Dan Boschen Feb 27 '20 at 13:52
• If you cascade a differentiator with an actual accumulator, you will also get the equivalent of a moving average response (this is what a CIC does). – Dan Boschen Feb 27 '20 at 13:53
• @DanBoschen sorry, I forgot to adjust the variables before posting. I corrected the code now. Yes, the MA is a ramp, but it actually filters the higher frequencies also (I ran an FFT and it showed that it works). The output just looks like it has the spectral power of the noise pushed into higher frequencies, which I cannot understand. – user7216373 Feb 27 '20 at 14:05
• Yes a MA definitely filters the higher frequencies; a MA is a low pass filter. See my answer, but perhaps the real question is what are you actually wanting to do? Maybe there is a better way. – Dan Boschen Feb 27 '20 at 15:03

The dominant effect you see is from the differentiation, which is a high-pass function.
Observe the spectrum after differentiation alone: as would be clearer on a log-log plot, the signal goes up at rate $$f$$ versus frequency (consistent with a differentiation).

Next observe the spectrum at the moving average output. This is consistent with what is expected: the moving average approximates a Sinc filter response, and the nulls of a Sinc function go down at rate $$f$$ versus frequency.

With the differentiator going up at rate $$f$$ and the moving average going down at rate $$f$$, we see the flat response as observed, with the nulls at $$1/T$$, where $$T$$ is the duration of the moving average.

Your final operation, which you call an integrator, is actually another average combined with a down-sample (decimation). It results in the low-frequency portion of your signal after another 16-point average; so another Sinc filter response of similar form, and then the decimated (close-in) spectrum out to half of the first main lobe.

• wow, it all makes sense now. I understand what is going on here. Thanks a lot for your input! (this is a part of a resolution upscaling algorithm that I am trying to bring down to basic blocks and understand). – user7216373 Feb 27 '20 at 15:13
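To see the cancellation described in the answer numerically, here is a minimal Python/SciPy sketch (mine, not part of the original thread); it plots the magnitude responses of a first difference, a 16-tap moving average, and their cascade:

```python
# The first difference rises roughly with f, the 16-tap moving average falls
# roughly with 1/f between its Sinc nulls, so the cascade comes out flat with
# nulls at multiples of fs/16.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 1.0
diff = np.array([1.0, -1.0])        # first difference (differentiator)
ma = np.ones(16) / 16.0             # 16-tap moving average (Sinc-like response)
cascade = np.convolve(diff, ma)     # differentiator followed by the average

for name, taps in [("diff", diff), ("MA16", ma), ("diff*MA16", cascade)]:
    w, h = signal.freqz(taps, worN=4096, fs=fs)
    plt.plot(w, 20 * np.log10(np.abs(h) + 1e-12), label=name)

plt.xlabel("Frequency (cycles/sample)")
plt.ylabel("Magnitude (dB)")
plt.legend()
plt.show()
```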
# Higher order repeated measures designs & MANOVA

## Two factor repeated measures ANOVA

First let’s load in the sample data. We have organized the data file in a column-wise format, where each row is a subject. This is in preparation for a multivariate approach to the ANOVA. Here we won’t bother with the univariate approach, since we are interested in the sphericity tests and the corrected values of the F test.

    fname <- "http://www.gribblelab.org/stats2019/data/2wayrepdata.csv"
    mydata <- read.csv(fname)
    mydata
    ##   subject a1b1 a1b2 a1b3 a2b1 a2b2 a2b3
    ## 1       1  420  420  480  480  600  780
    ## 2       2  480  480  540  660  780  780
    ## 3       3  540  660  540  480  660  720
    ## 4       4  480  480  600  360  720  840
    ## 5       5  540  600  540  540  720  780

Let’s extract the data matrix (just the numeric values) from the data frame:

    dm <- as.matrix(mydata[1:5, 2:7])
    dm
    ##   a1b1 a1b2 a1b3 a2b1 a2b2 a2b3
    ## 1  420  420  480  480  600  780
    ## 2  480  480  540  660  780  780
    ## 3  540  660  540  480  660  720
    ## 4  480  480  600  360  720  840
    ## 5  540  600  540  540  720  780

Now let’s create a multivariate linear model object:

    mlm1 <- lm(dm ~ 1)

Next we are going to use the Anova() command in the car package, so we have to first load the package. We also have to define a data frame that contains the within-subjects factors. If you don’t have the car package installed, just type install.packages("car") and R will download it and install it.

    library(car)
    ## Loading required package: carData
    af <- factor(c("a1","a1","a1","a2","a2","a2"))
    bf <- factor(c("b1","b2","b3","b1","b2","b3"))
    myfac <- data.frame(factorA=af, factorB=bf)
    myfac
    ##   factorA factorB
    ## 1      a1      b1
    ## 2      a1      b2
    ## 3      a1      b3
    ## 4      a2      b1
    ## 5      a2      b2
    ## 6      a2      b3

Now we will define the anova using Anova():

    mlm1.aov <- Anova(mlm1, idata = myfac, idesign = ~factorA*factorB, type="III")
    summary(mlm1.aov, multivariate=FALSE)
    ## Warning in summary.Anova.mlm(mlm1.aov, multivariate = FALSE): HF eps > 1
    ## treated as 1
    ##
    ## Univariate Type III Repeated-Measures ANOVA Assuming Sphericity
    ##
    ##                   Sum Sq num Df Error SS den Df  F value    Pr(>F)
    ## (Intercept)     10443000      1    33600      4 1243.214 3.861e-06 ***
    ## factorA           147000      1    33600      4   17.500  0.013881 *
    ## factorB           138480      2    39120      8   14.159  0.002354 **
    ## factorA:factorB    67920      2    23280      8   11.670  0.004246 **
    ## ---
    ## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    ##
    ##
    ## Mauchly Tests for Sphericity
    ##
    ##                 Test statistic p-value
    ## factorB                0.76047 0.66317
    ## factorA:factorB        0.96131 0.94254
    ##
    ##
    ## Greenhouse-Geisser and Huynh-Feldt Corrections
    ## for Departure from Sphericity
    ##
    ##                  GG eps Pr(>F[GG])
    ## factorB         0.80676   0.005291 **
    ## factorA:factorB 0.96275   0.004857 **
    ## ---
    ## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    ##
    ##                   HF eps  Pr(>F[HF])
    ## factorB         1.271242 0.002354093
    ## factorA:factorB 1.838414 0.004245732

Note how the formula uses the same names as in the myfac data frame we defined. The first part of the output lists the omnibus F tests for the main effects and the interaction effect. We then see the Mauchly tests of sphericity. We see tests for the main effect of factorB, and the factorA:factorB interaction effect. We don’t see a test of the main effect of factorA, because in this case factorA has only two levels, and so there are no variances of differences-between-groups: since there are only two levels, there is only a single variance of differences (between the two levels). We then see the Greenhouse-Geisser and Huynh-Feldt corrections.

### Simple main effects

The factorA:factorB interaction is significant, so we want to conduct so-called simple main effects analyses.
This would be testing the effects of one factor (e.g. factorB) separately within each level of factorA (or vice-versa). In a between-subjects two-factor ANOVA, simple main effects are evaluated by doing separate one-way ANOVAs, but using the MS error term from the overall two-factor analysis as the error term. For within-subjects designs it’s probably better to use separate error terms for each analysis, since the sphericity assumption is likely not true, and repeated measures ANOVA is sensitive (more so than between-subjects ANOVA) to violations of the sphericity assumption. Therefore we can in fact literally run separate single-factor repeated measures ANOVAs, with one factor, within levels of the other factor.

### Pairwise tests & linear contrasts

The approach for computing linear contrasts (including pairwise tests) is the same as for a single-factor repeated measures design. We can either compute F ratios by taking the appropriate MS error term from the ANOVA output (this approach assumes sphericity), or we can simply compute difference scores and perform t-tests (this doesn’t assume sphericity). Correcting for Type-I error is up to you — you could use a Bonferroni adjustment, or compute Tukey probabilities, etc.

## Split plot designs

A split plot design is a mixed design in which there are some repeated measures factor(s) and some between-subjects factor(s). Let’s load in some sample data for a study with one repeated measures and one between subjects factor:

    fname <- "http://www.gribblelab.org/stats2019/data/splitplotdata.csv"
    mdata <- read.csv(fname)
    mdata
    ##    subject  a1  a2  a3 gender
    ## 1        1 420 420 480      f
    ## 2        2 480 480 540      f
    ## 3        3 540 660 540      f
    ## 4        4 480 480 600      f
    ## 5        5 540 600 540      f
    ## 6        6 439 434 495      m
    ## 7        7 497 497 553      m
    ## 8        8 555 675 553      m
    ## 9        9 492 496 615      m
    ## 10      10 555 617 555      m

We have three levels of a repeated measures factor (a1, a2, a3), two levels of a between-subjects factor, gender (m, f), and 10 subjects. First, as before, we extract the data corresponding to the dependent variable from the data frame:

    dm <- as.matrix(mdata[1:10, 2:4])
    dm
    ##     a1  a2  a3
    ## 1  420 420 480
    ## 2  480 480 540
    ## 3  540 660 540
    ## 4  480 480 600
    ## 5  540 600 540
    ## 6  439 434 495
    ## 7  497 497 553
    ## 8  555 675 553
    ## 9  492 496 615
    ## 10 555 617 555

Then we formulate our multivariate model:

    mlm <- lm(dm ~ 1 + gender, data=mdata)

Note how now dm depends not just on a constant but also on gender. Next we define a data frame that contains the design of the repeated measures factor:

    af <- factor(c("a1","a2","a3"))
    myfac <- data.frame(factorA=af)
    myfac
    ##   factorA
    ## 1      a1
    ## 2      a2
    ## 3      a3

Now we use the Anova() function to perform the split plot anova:

    mlm.aov <- Anova(mlm, idata=myfac, idesign = ~factorA, type="III")
    summary(mlm.aov, multivariate=FALSE)
    ##
    ## Univariate Type III Repeated-Measures ANOVA Assuming Sphericity
    ##
    ##                  Sum Sq num Df Error SS den Df  F value    Pr(>F)
    ## (Intercept)     4056000      1    71368      8 454.6592 2.461e-08 ***
    ## gender             1733      1    71368      8   0.1942    0.6711
    ## factorA            6240      2    40655     16   1.2279    0.3191
    ## gender:factorA        4      2    40655     16   0.0007    0.9993
    ## ---
    ## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    ##
    ##
    ## Mauchly Tests for Sphericity
    ##
    ##                Test statistic   p-value
    ## factorA               0.15097 0.0013368
    ## gender:factorA        0.15097 0.0013368
    ##
    ##
    ## Greenhouse-Geisser and Huynh-Feldt Corrections
    ## for Departure from Sphericity
    ##
    ##                 GG eps Pr(>F[GG])
    ## factorA        0.54082     0.3031
    ## gender:factorA 0.54082     0.9840
    ##
    ##                   HF eps Pr(>F[HF])
    ## factorA        0.5590067  0.3043066
    ## gender:factorA 0.5590067  0.9858722

### Followup tests

The rules and approach for further tests following significant omnibus ANOVA test(s) are no different than before.
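As an aside (my addition, not part of the original R notes), the two-factor repeated measures ANOVA at the top of this page can be cross-checked in Python with statsmodels' AnovaRM. Note that AnovaRM reports only the uncorrected F tests (no Mauchly test or GG/HF corrections) and does not handle the split-plot design:

```python
# Rebuild the two-factor within-subjects data in long format and run AnovaRM.
# The F values should match the uncorrected tests in the R output above.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

wide = pd.DataFrame({
    "subject": [1, 2, 3, 4, 5],
    "a1b1": [420, 480, 540, 480, 540],
    "a1b2": [420, 480, 660, 480, 600],
    "a1b3": [480, 540, 540, 600, 540],
    "a2b1": [480, 660, 480, 360, 540],
    "a2b2": [600, 780, 660, 720, 720],
    "a2b3": [780, 780, 720, 840, 780],
})

long = wide.melt(id_vars="subject", var_name="cell", value_name="dv")
long["factorA"] = long["cell"].str[:2]   # "a1" / "a2"
long["factorB"] = long["cell"].str[2:]   # "b1" / "b2" / "b3"

res = AnovaRM(long, depvar="dv", subject="subject",
              within=["factorA", "factorB"]).fit()
print(res)
```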
## Another Brain Teaser

You drive from Point A to B at 60 mph and then stop at a diner in B for 30 minutes. You then drive from B to C at a speed of 30 mph. When you reach C, your watch tells you that you left point A four hours ago. Looking over the map, you realize that the distance from B to C is 1/4 of the distance from A to B. How many minutes did it take you to drive from point B to C?

Hint

Note that 60 mph = 1 mi/min and 30 mph = 0.5 mi/min.

Hint 2

Solve for the time $$t$$ it takes to drive from B to C. Let $$t_0$$ be the time it takes to drive from A to B. Then

$$t_{0}+t=240\,\text{min}-30\,\text{min}=210\,\text{min}$$

Solution

From the equation above,

$$t_{0}=210\,\text{min}-t$$

The distance from A to B is 4 times the distance from B to C, so

$$1\,\tfrac{\text{mi}}{\text{min}}\,(210\,\text{min}-t)=4\,(t)\left(0.5\,\tfrac{\text{mi}}{\text{min}}\right)$$

Solving for $$t$$:

$$210\,\text{min}=3t$$

$$t=70\,\text{min}$$

70 min
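A tiny symbolic check of the arithmetic (my addition), using SymPy:

```python
# 60 mph = 1 mi/min, 30 mph = 0.5 mi/min, total driving time is 240 - 30 = 210 min,
# and dist(A,B) = 4 * dist(B,C).
import sympy as sp

t = sp.symbols('t', positive=True)                 # minutes from B to C
t0 = 210 - t                                       # minutes from A to B
eq = sp.Eq(1 * t0, 4 * (sp.Rational(1, 2) * t))    # distances: 1*t0 = 4*(0.5*t)
print(sp.solve(eq, t))                             # [70]
```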
ANT 07 Homework 2

\documentclass{article}
\include{macros}
\begin{document}
\begin{center}
\Large\bf Homework 2 for Math 581F, Due FRIDAY October 12, 2007\end{center}
Each problem has equal weight, and parts of problems are worth the same amount as each other.
\begin{enumerate}
\item Let $\vphi:R\to S$ be a homomorphism of (commutative) rings.
\begin{enumerate}
\item Prove that if $I\subset S$ is an ideal, then $\vphi^{-1}(I)$ is an ideal of~$R$.
\item Prove moreover that if $I$ is prime, then $\vphi^{-1}(I)$ is also prime.
\end{enumerate}
\item Let $\O_K$ be the ring of integers of a number field. The Zariski topology on the set $X=\Spec(\O_K)$ of all prime ideals of $\O_K$ has closed sets the sets of the form
$$V(I) = \{ \p\in X : \p \mid I\},$$
where~$I$ varies through {\em all} ideals of $\O_K$, and $\p\mid I$ means that $I \subset \p$.
\begin{enumerate}
\item Prove that the collection of closed sets of the form $V(I)$ is a topology on $X$.
\item Prove that the conclusion of (a) is still true if $\O_K$ is replaced by an order in $\O_K$, i.e., a subring that has finite index in $\O_K$ as a $\Z$-module.
\end{enumerate}
\item Let $\alpha = \sqrt{2} + \frac{1+\sqrt{5}}{2}$.
\begin{enumerate}
\item Is $\alpha$ an algebraic integer?
\item Explicitly write down the minimal polynomial of $\alpha$ as an element of $\QQ[x]$.
\end{enumerate}
\item Which of the following rings are orders in the given number field?
\begin{enumerate}
\item The ring $R = \ZZ[i]$ in the number field $\QQ(i)$.
\item The ring $R = \ZZ[i/2]$ in the number field $\QQ(i)$.
\item The ring $R = \ZZ[17i]$ in the number field $\QQ(i)$.
\item The ring $R = \ZZ[i]$ in the number field $\QQ(\sqrt[4]{-1})$.
\end{enumerate}
\item Give an example of each of the following, with proof:
\begin{enumerate}
\item A non-principal ideal in a ring.
\item A module that is not finitely generated.
\item The ring of integers of a number field of degree~$3$.
\item An order in the ring of integers of a number field of degree~$5$.
\item A non-diagonal matrix of left multiplication by an element of~$K$, where~$K$ is a degree~$3$ number field.
\item An integral domain that is not integrally closed in its field of fractions.
\item A Dedekind domain with finite cardinality.
\item A fractional ideal of the ring of integers of a number field that is not an integral ideal.
\end{enumerate}
\end{enumerate}
\end{document}
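As an aside (not part of the assignment), the minimal polynomial asked for in problem 3 can be sanity-checked with a computer algebra system; the homework, of course, asks for a proof rather than a CAS computation. A small SymPy sketch:

```python
# Compute the minimal polynomial of alpha = sqrt(2) + (1 + sqrt(5))/2 over Q.
import sympy as sp

x = sp.symbols('x')
alpha = sp.sqrt(2) + (1 + sp.sqrt(5)) / 2
print(sp.minimal_polynomial(alpha, x))   # prints the degree-4 minimal polynomial
```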
Dokl. Akad. Nauk

MATHEMATICS
• On the recovery of higher-order differential operators by means of a system of their spectra (E. A. Baranova), 1271
• A bilateral difference method for nonlinear and spectral problems in ordinary differential equations (E. A. Volkov), 1274
• Functions of generalized bounded variation, convergence of their Fourier series and conjugate trigonometric series (B. I. Golubov), 1277
• On the theory of admissibility of pairs of function spaces (V. V. Zhikov), 1281
• Reducibility of uniform structures (A. A. Ivanov), 1284
• A negative solution to the problem of the convergence in the $L_p^\alpha$ metric ($p\ne2$) of the spectral decomposition of a function in $L_p^\alpha$ with compact support (V. A. Il'in), 1286
• On the existence and uniqueness theorem for quasi-conformal mappings with unbounded characteristics (V. I. Kruglikov), 1289
• On the invertibility of elliptic partial differential operators (È. Muhamadiev), 1292
• On the growth of sums of independent random variables (V. V. Petrov), 1296
• On a class of correspondence categories (D. A. Raikov), 1300
• On Minkowski’s conjecture for $n=5$ (B. F. Skubenko), 1304
• Generalized supersoluble groups with systems of complemented Abelian subgroups (S. N. Chernikov), 1306
• On $E$-compact spaces (A. P. Shostak), 1310
• Harmonic mappings of domains bounded by spheres (A. Yanushauskas), 1313

FLUID MECHANICS
• On the Zemplen's theorem for shock waves in plasma with anisotropic pressure (M. D. Kartalev), 1316

MATHEMATICAL PHYSICS
• Noncanonical forms of the transport equation (D. A. Kozhevnikov, Sh. K. Nasibullaev), 1320

PHYSICS
• The measurement of complex shear modulus of liquids (U. B. Bazaron, B. V. Deryagin, O. R. Budaev), 1324
• On the problem of gaseous laser combined pumping (E. P. Velikhov, I. V. Novobrantsev, V. D. Pis'mennyi, A. T. Rakhimov, A. N. Starostin), 1328

GEOPHYSICS
• On the possibilities of geomagnetic field evolution calculation (B. L. Gavrilin, A. S. Monin), 1349
# Storage performance: Lustre file system

## Betzy, Fram and Stallo

To get the best throughput on the scratch file system (/cluster/work), you may need to change the data striping. Striping should be adjusted based on the client access pattern to optimally load the object storage targets (OSTs). On Lustre, the OSTs refer to the disks or storage volumes that make up the whole file system. The stripe_count indicates how many OSTs to use. The stripe_size indicates how much data to write to one OST before moving to the next OST.

• Striping takes effect only on new files, created in or copied into the specified directory or file name.
• The default stripe_count on the /cluster file system on Fram and Stallo is 1.
• Betzy implements Progressive File Layouts to dynamically set the file stripe size based on file size growth.

Note: Betzy: Progressive File Layouts

PFL removes the need to explicitly specify striping for each file, assigning different Lustre striping characteristics to contiguous segments of a file as it grows. Dynamic striping allows lower overhead for small files and assures increased bandwidth for larger files. However, note that for workloads with significant random read phases it is best to manually assign stripe size and count.

• Betzy implements another new feature, called data on metadata, for small files with a size under 2KB.

Note: Lustre file system performance is optimized for large files. To balance that, data on metadata (DoM) is enabled on Betzy to ensure higher performance in the case of frequently accessed small files. Files accessed with a size of 2KB or smaller will be stored on a very fast NVMe JBOD directly connected to the metadata servers.

For more detailed information on striping, please consult the Lustre documentation.

### How to find out the current striping

To see the current stripe settings, use the lfs getstripe [file_system, dir, file] command, e.g.:

    $ lfs getstripe /cluster/tmp/test
    /cluster/tmp/test
    stripe_count:  1
    stripe_size:   1048576
    stripe_offset: -1

Note: Rules of thumb for proper stripe counts

For best performance we urge you to always profile the I/O characteristics of your HPC application and tune the I/O behavior. Below is a list of rules you may apply to properly set the stripe count for your files:

• files smaller than 1GB: default striping
• file size between 1GB - 10GB: stripe count 2
• file size between 10GB - 1TB: stripe count 4
• files bigger than 1TB: stripe count 8

### Large files

For large files it is advisable to increase the stripe count and perhaps the chunk size too, e.g.:

    # stripe huge file across 8 OSTs
    $ lfs setstripe --stripe-count 8 "my_file"
    # stripe across 4 OSTs using 8 MB chunks
    $ lfs setstripe --stripe-count 4 --stripe-size 8M "my_file"
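As an illustration only (a sketch of mine, not from the official documentation), the rules of thumb above could be applied programmatically by shelling out to lfs setstripe before a large file is written:

```python
# Map an expected file size to a stripe count following the rules of thumb,
# then apply it with `lfs setstripe`.
import subprocess

def stripe_count_for(expected_bytes: int) -> int:
    gib = expected_bytes / 2**30
    if gib < 1:
        return 1          # keep the default striping
    if gib <= 10:
        return 2
    if gib <= 1024:       # up to ~1 TB
        return 4
    return 8

def set_striping(path: str, expected_bytes: int) -> None:
    count = stripe_count_for(expected_bytes)
    if count > 1:
        subprocess.run(["lfs", "setstripe", "--stripe-count", str(count), path],
                       check=True)

# Example: a result file expected to grow to ~50 GB gets a stripe count of 4.
# set_striping("/cluster/work/users/me/result.dat", 50 * 2**30)
```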
Given the nonlinear recurrence relation $b_n=(\frac{1}{2}b_{n-1}+\frac{1}{2})^2$, evaluate $\lim_{n\to\infty} (b_n)^{2n}$

I encountered the following problem: given the nonlinear recurrence relation $b_n=(\frac{1}{2}b_{n-1}+\frac{1}{2})^2$ with $b_0=\frac{1}{2}$, we want to evaluate $\lim_{n\to\infty} (b_n)^{2n}$.

Firstly, I used MATLAB to verify this numerically, and the result should be $e^{-8}$. I know it is not easy to find a closed form of a nonlinear recurrence relation (here it is a quadratic map) in general. Here, we can see that $\lim_{n\to\infty}b_n=1$. Indeed, assume $\lim_{n\to\infty}b_n=L$, and substitute it into the recurrence relation. We will get $L=(\frac{1}{2}L+\frac{1}{2})^2$, and the solution is $L=1$.

Although maybe we cannot find the exact form of $b_n$, is it possible for us to find how fast it converges to $1$? Thanks for any hint.

• @Winther You're right! The limit value seems to be small, but doesn't converge to 0. Let me edit the problem. – Syoung Jan 2 '17 at 17:43
• One possible approach: Let $c_n = \frac{b_n-1}{2}$, then $c_{n+1} = c_n + \frac{1}{2}c_n^2$, so if the limit $\lim_{n\to\infty} nc_n = c$ exists then by Stolz–Cesàro we have $$c = \lim_{n\to\infty} n c_n = \lim_{n\to\infty} \frac{c_{n+1}-c_n}{\frac{1}{n+1} - \frac{1}{n}} = -\lim_{n\to\infty} \frac{[nc_n]^2}{2} = -\frac{c^2}{2}$$ giving $c=-2$, and $b_n^{2n} = \lim_{n\to\infty}(1 + 2c_n)^{2n} = e^{-8}$ follows. You need to show that $nc_n$ converges though. – Winther Jan 2 '17 at 18:20
• @Winther Thanks so much! – Syoung Jan 2 '17 at 19:31

$$b_n=\left(\frac{1}{2}b_{n-1}+\frac{1}{2}\right)^2$$

Given that $b_n\to1$, we consider $\gamma_n=\frac{b_n-1}{2}$, and this leads us to:

$$\gamma_{n+1}=\gamma_{n}+\frac{1}{2}\gamma_{n}^2$$

As a heuristic, we try $\gamma_n\sim An^{-\beta}$ to see what behaviour to expect:

$$A(n+1)^{-\beta}\approx An^{-\beta}+\frac{1}{2}A^2n^{-2\beta}$$

$$\implies\left(1+\frac{1}{n}\right)^{-\beta}\approx1+\frac{-\beta}{n}\approx 1+\frac{A}{2n^{\beta}}$$

Matching, we choose $\beta=1,A=-2$, and so we suspect that $\gamma_n\approx\frac{-2}{n}$. We thus set $f_n=\frac{-2}{\gamma_n}\to\infty$:

$$f_{n+1}=\frac{f_n^2}{f_n-1}=f_n+1+\frac{1}{f_n-1}=f_n+1+o(1)\implies f_n=n+o(n)$$

We thus deduce:

• $\gamma_n=\frac{-2}{n+o(n)}$
• $b_n=1-\frac{4}{n+o(n)}$
• $\log\left(b_n^{2n}\right)=2n\log\left(1-\frac{4}{n+o(n)}\right)=(2n)\left(\frac{-4}{n+o(n)}+o\left(\frac{1}{n}\right)\right)=-8+o(1)$

Thus, $b_n^{2n}\to \exp(-8)$ as $n\to\infty$.
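As a quick numerical check (the question mentions MATLAB; this is an equivalent Python sketch of mine), one can iterate the recurrence and compare $b_n^{2n}$ with $e^{-8}$:

```python
# Iterate b_n = ((b_{n-1} + 1)/2)^2 from b_0 = 0.5 and compare b_n^(2n) with exp(-8).
import math

b = 0.5
n_target = 10**6
for _ in range(n_target):
    b = (0.5 * b + 0.5) ** 2

print(b ** (2 * n_target))   # close to exp(-8)
print(math.exp(-8))          # approximately 3.3546e-4
```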
# American Institute of Mathematical Sciences

September 2012, 11(5): 1897-1910. doi: 10.3934/cpaa.2012.11.1897

## Local maximum principle for $L^p$-viscosity solutions of fully nonlinear elliptic PDEs with unbounded coefficients

1 Department of Mathematics, Saitama University, 255 Shimo-Okubo, Urawa, Saitama 338-8570
2 School of Mathematics, Georgia Institute of Technology, 686 Cherry Street, Atlanta, GA 30332-0160

Received March 2011; Revised June 2011; Published March 2012

We establish local maximum principle for $L^p$-viscosity solutions of fully nonlinear elliptic partial differential equations with unbounded ingredients.

Citation: Shigeaki Koike, Andrzej Świech. Local maximum principle for $L^p$-viscosity solutions of fully nonlinear elliptic PDEs with unbounded coefficients. Communications on Pure & Applied Analysis, 2012, 11 (5) : 1897-1910. doi: 10.3934/cpaa.2012.11.1897
## anonymous one year ago write an equation of each line 1. anonymous $slope=\frac{ 3 }{ 5} ; through (-4,0)$ 2. jim_thompson5910 Point slope form $\Large y-y_1 = m\left(x-x_1\right)$ Plug in the given slope $\Large y-y_1 = \frac{3}{5}\left(x-x_1\right)$ and the given point $\Large y-0 = \frac{3}{5}\left(x-(-4)\right)$ from here solve for y 3. jim_thompson5910 Try to get the equation into the form y=mx+b 4. anonymous m= -5/5=(-4,0) y=5/ ????? 5. jim_thompson5910 you agree that x-(-4) turns into x+4 right? 6. anonymous yes 7. jim_thompson5910 and the y-0 is simply y 8. jim_thompson5910 So, $\Large y-0 = \frac{3}{5}\left(x-(-4)\right)$ $\Large y = \frac{3}{5}\left(x+4\right)$ $\Large y = \frac{3}{5}x+\frac{3}{5}*4 \ \ ... \ \text{distribute}$ $\Large y = \frac{3}{5}x+\frac{3}{5}*\frac{4}{1}$ $\Large y = \frac{3}{5}x+\frac{3*4}{5*1}$ $\Large y = \frac{3}{5}x+\frac{12}{5}$
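For anyone who wants to double-check the algebra, here is a small SymPy sketch (my addition, not part of the original thread):

```python
# Verify the final answer y = (3/5)x + 12/5 for slope 3/5 through (-4, 0).
import sympy as sp

x = sp.symbols('x')
y = sp.Rational(3, 5) * (x + 4)    # point-slope form through (-4, 0)
print(sp.expand(y))                # 3*x/5 + 12/5
print(y.subs(x, -4))               # 0, so the line passes through (-4, 0)
```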
# Don’t Kill Math: Comments on Bret Victor’s Scientific Agenda

October 27, 2012

One of the most refreshing voices to emerge on the Web belongs to Bret Victor, who, since he left Apple in 2010, has been busy weaving together words, images, and code into compelling visions for computer-assisted creativity and scientific understanding. Among his works over the last two years you will find new “interfaces” for (really, conceptualizations of) algebra, trigonometry, dynamic systems, algorithm design, computer programming, circuit design, game design, animation, scientific publishing, mathematical exposition, and political propaganda. Each represents an impressive level of creativity and an admirable expenditure of effort. Each of these interfaces bears the mark of genius inasmuch as the average practitioner in the relevant field will think of his work differently than he thought of it prior to seeing Victor’s presentation. In particular, the practitioner will think: “I need that.”

The unifying goal of Victor’s work, as he puts it in his Inventing on Principle talk (for all practical purposes, Victor’s manifesto), is to bring people into closer communion with their creative ideas. I personally applaud this goal, and would be hard-pressed to find anyone who is against it (who can oppose human creativity?). And yet I feel compelled to express serious concern and reservation about a particular aspect of Victor’s agenda, namely, his dismissive attitudes toward analytic methods in the sciences. His attitudes are summarized in the web page called, without subtlety, Kill Math. It collects some of Victor’s thoughts on symbolic manipulation (i.e. algebra) as well as some demonstrations of what he believes to be superior alternatives (i.e. computer simulations). My present purpose is to come to the defense of analytic methods, and to explain why I think they should generally be preferred to other methods.

Before mounting this defense, let me review Victor’s case against old-fashioned analysis. The thrust of Victor’s argument is as follows: the fruits of the quantitative sciences are codified in symbolic equations, which, like the rhythms of Latin poetry, are accessible only to an elite few. It is wrong that this knowledge is restricted to a small subset of humanity (first argument), but this wrong can be corrected by creating computer simulations which will allow a person without mathematical ability to gain an intuition for the system being studied by manipulating an interactive interface (second argument). Furthermore, these interactive interfaces can and should replace traditional analytic methods for practicing scientists and engineers because interactive interfaces convey deeper understanding than their analytic counterparts (third argument).

I do not take issue with Victor’s first argument — that it is wrong that only a small subset of humanity can understand equations well enough to make concrete predictions. But I think Victor is misguided in his second and third arguments, when he advocates simulation as being the best predictive tool for layman and scientist alike. In particular, Victor gives analytic methods rather short shrift when it comes to comprehending the observed world. For reasons I will explicate below, analytic methods are — and should be — considered a first choice among scientists and engineers when attempting to understand anything observable and quantitative.
And because analytic methods offer a fundamentally deeper understanding of phenomena than simulations do, I believe that Victor’s time would be better spent making analytic methods accessible to the average person rather than attempting to replace analytic methods wholesale with computer simulations, no matter how mesmerizing and seductive the kaleidoscopic gyrations of the latter may be. For the practicing scientist, the chief virtue of analytic methods can be summed up in a single word: clarity. An equation describing a quantity of interest conveys what is important in determining that quantity and what is not at all important. To take an example, consider the universal law of gravitation, which describes the gravitational force between two bodies: $F = \frac{GMm}{r^2}$ A person with a minimal education in physics can quickly surmise: What’s important in determining F: The product of the two masses; the square of the distance between them. What’s not important: Whether the first body has more mass than the second body, or vice versa; whether the objects are moving, spinning, or stationary; the possible presence of a brick wall in between the two bodies; an electric charge on either body, or an electric current running through them; and everything else we might think of. Or consider the analytic solution to Victor’s skateboard problem: $sin(\theta) = \frac{r}{h}$ What’s important in determining $$\theta$$: The ratio of $$r$$ to $$h$$. What’s not important: The speed of the skateboard, the size of $$r$$ in absolute terms, the size of $$h$$ in absolute terms, the color of the grass, and everything else we might think of. To the trained eye, equations yield confident understanding and memorable insights. If something is not in the equation, it simply doesn’t matter. (The importance of an equation has as much to do with what is absent from the equation as with what is in it.) Similarly, if two variables appear in the equation only as a product or only as a ratio, then only the product (or the ratio) matters. Of course, an equation’s dramatis personae is only the beginning of the play. We can then ask questions such as: can the quantity of interest ever be zero? Can it ever be negative? If a particular variable doubles, how is the quantity of interest affected, and what does that effect depend on? Analytic methods (in particular, partial derivatives) tell us exactly how one variable can amplify or diminish the effect of a second variable on a third variable. They describe with great precision the rich set of interactions that lie beneath the surface. Contrast this situation to the “intuitive” understanding one is supposed to gain by playing with a Victor-style computer simulation. One might make “discoveries”, but one is never certain: • Does this “discovery” apply to all parameter choices? • What is the actual quantitative content of this discovery? If some relationship appears to hold — is this relationship approximate or exact? Numbers might go “up together” or “down together”, but is the underlying relationship linear, exponential, periodic, or what? If no relationship appears to hold — is that because I can’t see it, or because it’s not there? • Am I missing a more fundamental relationship in this simulation? Is there a product, ratio, or other function of parameters that is more important than the parameters considered individually? Victor addresses the first concern in his essay, Up and Down the Ladder of Abstraction. 
The problem described in this essay — an algorithm for turning a car that has driven off the track — is not immediately amenable to analytic methods (at least to my eyes), so it is an excellent showcase for computer simulation. And yet it suffers from the same shortcomings that all computer simulations suffer from, namely: • The curse of dimensionality. “Sweeping” a parameter space is a neat trick for finding an optimum, used to great effect in The Ladder of Abstraction as well as Victor’s video, Interactive Exploration of a Dynamical System. However, the trick becomes impractical as soon as there are more than a handful of parameters. (The amount of computation required to sweep a parameter space grows exponentially with the number of parameters.) The Ladder of Abstraction cannot have more than a few rungs. • No quantitative content. I have no idea what I am supposed to take away from the picture at the top of the ladder. I might describe a “strip of stability” and an “inlet of uncertainty” in the final picture, but this is an exercise in classification, akin to geography, zoology, or philately. • No insight into possible relationships between parameters. Can I reduce the provided parameters to a more fundamental quantity of some sort? There is no way of telling from the simulation. As with interactive displays at science museums, the colorful process of “discovery” feels engaging and rewarding, but the actual insights tend to be tenuous and imprecise. For this particular problem domain, that may be the best one can do: build up pictures and hope for a semblance of understanding. But the absence of analytic insight should be mourned, for the same reason a sailor should curse if he is forced to navigate without a chart. Suppose for a moment that the list of insights derived from simulation were identical in precision and quantity to those derived from analysis. I would argue that analytic methods should still be preferred wherever they are available. Analysis lets you ask the basic question of all scientific inquiry: “Why?” On what assumptions does this insight depend? When will it hold up, and when will it break down? Is it a logical consequence of something I knew previously? Simulation can demonstrate the what and how, but never elucidate the why. Analysis connects disparate insights together through the structure of mathematics. Analytic methods are by no means the one true path to scientific enlightenment. They can be used speciously, that is to say, without empirical justification, and in many situations, they fail to produce any insight. However, more than Victor would have his readers believe, analytic methods are often an appropriate and illuminating tool for messy, “real-world” problems. What Victor neglects to mention in his essays is that there are highly developed analytic methods that can solve problems very close to, if not identical to the problems that he describes as requiring simulation. The analytic solutions, when present, carry the usual benefits: clarity, confident understanding, and memorable insights. In particular, I would like to draw attention to three analytic tools that would go a long way toward solving the problems posed in Victor’s essays: • Lagrange polynomials. In Simulation as a Practical Tool, Victor drew the curved wall so it “looked good”, but it seems one could construct something awfully close with just a few polynomial or sinusoidal terms. • Polar coordinates. 
Modeling the wall as a parametric curve or as a function in polar coordinates, one could start to solve for properties in the most general case, rather than being chained to the shapes of particular walls. • The calculus of variations and optimal control theory: These are tools for finding entire functions that optimize a value of interest; they are commonly used to solve problems quite close to the car-driving problem. The reader will forgive me for not playing the role of teacher and working out examples of each of these. That is not the purpose of the present essay. I merely wish to point out that Victor fails to exhaust all the possible analytic approaches before proclaiming they should be abandoned in favor of his preferred method of simulation. And here, I think, is where Victor made a mistake. He speaks grandly of how with simulation, “The conditions of a problem do not need to be contrived or compromised for a convenient symbolic representation.” But compromises and contrivances have been at the heart of all scientific discovery since Bacon. Approximating the planets as spheres, electrons as point-charges, nuclei as point-masses and space as a vacuum are all “contrivances” with “a convenient symbolic representation” that have yielded concise descriptions — more importantly, precise predictions — about the universe we inhabit. In the physical sciences, approximations are part and parcel with analytic methods. In my mind, the role of simulation — for both the scientist and the layman — should be to show us where our analytic approximations are useful, and where they are not useful. Minds like Victor’s should be focused not on eliminating analysis from the process of inquiry, but on making analysis easier, more intuitive, and more concrete — ideally, allowing mathematical novices to ply even the dark arts of Lagrange polynomials and the calculus of variations. Simulation facilities should serve only as a sort of back-up method for those lamentable situations where the fog is too thick to see anything clearly, that is to say, where analytic methods fail to enlighten. ** A tangent to Bret Victor’s “Kill Math” project is something that he calls reactive documents. In his own words, a reactive document is one in which “The reader can play with the premise and assumptions of various claims, and see the consequences update immediately.” As an example, there is a paragraph describing the number of state parks that must be shut down in California assuming that a certain tax is levied; as the reader changes the size and structure of the tax according to his whim, the stated number of parks to be shuttered changes in concert. It is an interesting concept. And yet the provided example represents, I think, the worst of Victor’s anti-equation ideology. There are five independent controls which one may click and drag around. But doing so is merely groping around in the dark — one tries this, tries that, and tries the other, and at the end of it, tries to remember the effect that each control actually exerted on the outcome, and (God help you) whether this effect was distorted by the position of any of the other controls. 
A complete and rigorous understanding of all five of those numbers could be gained by displaying a simple equation that any high-school student can read and understand:

    (Parks shut down) = 218 parks
        - ( (Vehicle tax) * (Fraction charged) * 28 million vehicles
          + (Income tax) * (Fraction charged) * 14 million taxpayers
          + (104 million potential visitors) * (Fraction who attend given Admission price)
          ) / (\$2.2 million per park)

Here we have a perfect case for analytic methods — where all of the interconnections between variables become perfectly visible to any high-schooler’s eye — but, fantastically, Victor prefers to embed a five-parameter simulation in the document rather than affright his reader with the sight of addition and multiplication. I am sure that a person of Victor’s ingenuity could make the mechanics of the above equation engaging and comprehensible to the lay reader. But rather than illuminate and explain the equation’s inner workings, he has chosen to hide them away.

Something that quickly becomes clear from examining the equation — but not from examining particular inputs and outputs — is that this miniature simulation includes an important contrivance:

    Fraction who attend given Admission price

The curious reader who digs into Victor’s source code will find this snippet:

    // fake demand curve
    model.newVisitorCount = model.oldVisitorCount *
        Math.max(0.2, 1 + 0.5*Math.atan(1 - averageAdmission/model.oldAdmission));

To paraphrase the pamphleteer, “Behold the convenient appearance of the arc-tangent!” A crucial part of the analysis has been fabricated out of whole cloth. We are led to this discovery only because the equation encourages us to ask the question, “Why?”. In contrast, had we focused only on the knobs, switches, and output, we would never even think to question the underlying assumptions of the model. The simulation pats the reader on the head and tells him not to worry about the details.

I will conclude with a passage from Nikola Tesla, the prolific inventor who briefly worked for Thomas Edison. He described Edison’s work habits thus:

> [Edison’s] method was inefficient in the extreme, for an immense ground had to be covered to get anything at all unless blind chance intervened and, at first, I was almost a sorry witness of his doings, knowing that just a little theory and calculation would have saved him 90% of the labor. But he had a veritable contempt for book learning and mathematical knowledge, trusting himself entirely to his inventor’s instinct and practical American sense.

It is too early to tell whether Bret Victor will one day earn a proper comparison to Edison in scope of invention and influence, but “Inventing on Principle” leaves no doubt that his personal agenda is to make Edisons of us all. That is a laudable goal, but it must be qualified. If Victor were to embrace “just a little theory and calculation” in his scientific agenda, he would help us not merely to survey the universe of possible worlds via simulation, but to see the underlying structure of those possibilities with a level of clarity afforded only by analysis.

You’re reading evanmiller.org, a random collection of math, tech, and musings. If you liked this you might also enjoy:
# Question

If a coin is tossed three times (or three coins are tossed together), then describe the sample space for this experiment.

If a single coin is tossed, the number of outcomes is $2$, with sample space $S = \left\{ {H,T} \right\}$.

If the coin is tossed three times, the number of outcomes is ${2^3} = 8$. The sample space is the threefold Cartesian product of the set $S$ with itself:

$S' = \left\{ {H,T} \right\} \times \left\{ {H,T} \right\} \times \left\{ {H,T} \right\} \Rightarrow S' = \left\{ {HHH,HHT,HTH,HTT,THH,THT,TTH,TTT} \right\}$

Note: The number of outcomes when tossing $n$ coins is ${2^n}$. Always remember to check that each outcome in the sample space is equally likely to occur. Also, the order of heads and tails matters when writing the sample space of a coin-tossing experiment.
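As a quick illustration (a minimal sketch added here, not part of the original answer; the function name is arbitrary), the sample space and the $2^n$ count can be enumerated programmatically:

```python
from itertools import product

# Enumerate all outcomes of tossing a coin n times: the n-fold
# Cartesian product of {H, T} with itself.
def coin_sample_space(n):
    return ["".join(outcome) for outcome in product("HT", repeat=n)]

space = coin_sample_space(3)
print(space)        # ['HHH', 'HHT', 'HTH', 'HTT', 'THH', 'THT', 'TTH', 'TTT']
print(len(space))   # 8 == 2**3
```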
# American Institute of Mathematical Sciences

October 2009, 5(4): 835-850. doi: 10.3934/jimo.2009.5.835

## Modelling and optimal control for nonlinear multistage dynamical system of microbial fed-batch culture

1 Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, Liaoning, China
2 School of Mathematics and Information Science, Shandong Institute of Business and Technology, Yantai 264005, Shandong, China
3 Department of Applied Mathematics, Dalian University of Technology, Dalian, Liaoning, 116024, P.R. China
4 School of Energy and Power Engineering, Dalian University of Technology, Dalian 116024, Liaoning, China

Received July 2008; Revised May 2009; Published August 2009

In this paper, we propose a new controlled multistage system to formulate the fed-batch culture process of glycerol bio-dissimilation to 1,3-propanediol (1,3-PD) by regarding the feeding rate of glycerol as a control function. Compared with the previous systems, this system doesn't take the feeding process as an impulsive form, but as a time-continuous process, which is much closer to the actual culture process. Some properties of the above dynamical system are then proved. To maximize the concentration of 1,3-PD at the terminal time, we develop an optimal control model subject to our proposed controlled multistage system and continuous state inequality constraints. The existence of an optimal control is proved by bounded variation theory. Through the discretization of the control space, the control function is approximated by piecewise constant functions. In this way, the optimal control model is approximated by a sequence of parameter optimization problems. The convergence analysis of this approximation is also investigated. Furthermore, a global optimization algorithm is constructed on the basis of the above discretization concept and an improved Particle Swarm Optimization (PSO) algorithm. Numerical results show that, by employing the optimal control policy, the concentration of 1,3-PD at the terminal time can be increased considerably.

Citation: Chongyang Liu, Zhaohua Gong, Enmin Feng, Hongchao Yin. Modelling and optimal control for nonlinear multistage dynamical system of microbial fed-batch culture. Journal of Industrial & Management Optimization, 2009, 5 (4) : 835-850. doi: 10.3934/jimo.2009.5.835
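The abstract's discretization idea, approximating the feeding-rate control by piecewise-constant functions and then optimizing the resulting finite parameter vector, can be illustrated with a generic sketch. This is not the paper's model: the toy dynamics, horizon, bounds, and optimizer below are placeholders chosen only to show the parameterization.

```python
import numpy as np
from scipy.optimize import minimize

T, N = 10.0, 20          # time horizon and number of control intervals (placeholders)
dt = 0.05
steps = int(T / dt)

def control(params, t):
    # Piecewise-constant control: params[k] applies on the k-th subinterval of [0, T].
    k = min(int(t / (T / N)), N - 1)
    return params[k]

def objective(params):
    # Toy stand-in for the fermentation model: integrate x' = -x + u(t) with Euler
    # steps and return the negative terminal state (minimizing => maximizing x(T)).
    x = 0.0
    for i in range(steps):
        x += dt * (-x + control(params, i * dt))
    return -x

res = minimize(objective, x0=np.full(N, 0.5), bounds=[(0.0, 1.0)] * N)
print(np.round(res.x, 2), -res.fun)   # optimal piecewise levels and terminal value
```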
# Tips for getting to grips with the command line

When I first moved from Windows to Ubuntu, without a doubt one of the most overwhelming things I had to do was use the command line. Typing commands is a strange experience when you've only ever been used to pointing and clicking. When I talk to new Ubuntu users, they are often worried by the idea of talking directly to their computer.

Is there a simple and friendly guide to help new users get acquainted with the command line? Do you have any tips to make the experience easier or more enjoyable?

0 2019-05-05 15:37:29 Source Share

history | grep SOMETHING - finds a command you used before which contains SOMETHING.

fortune :-)

0 2019-05-08 18:41:59 Source

To find out how to use a command, add a space and then "--help" to the end of it - this tells you how to use it and gives a list of options. For example: cp --help

0 2019-05-08 18:29:21 Source

I learned a lot about using the command line, and got comfortable working within it, from reading The Bash Cookbook from O'Reilly and Associates. It's a book about Bash scripting, but the bite-sized portions of the cookbook format make it very approachable. As a side benefit, if you think "Gee, I would sure like to do X, but I don't know how," you can use the table of contents to look up X (and Y and Z for that matter) and get a good idea of how to do it (and a decent explanation of how it works, with pointers to other recipes and resources that can further expand your understanding).

0 2019-05-08 17:49:01 Source

The Ubuntu Pocket Guide and Reference includes a chapter on using the command line. It'll quickly get you up and running with the command line.

0 2019-05-08 17:42:52 Source

If you are looking for a good guide to learn the command line, my favourite is LinuxCommand.org. The guide will teach you the basics of the command line, and will even lead you into writing useful shell scripts. That said, most users will not need to use the command line for most everyday operations. I do not think that the command line should discourage users from moving to Ubuntu. Once you learn the power of the command line, you won't be able to live without it!

0 2019-05-08 08:50:46 Source

Switch to zsh! While it is very much like bash, it has a lot of nice extra features out of the box (like, for example, typo correction, even in a preceding path component, or a useful widget to call up help for the current command (via run-help; I press ESC-h after having typed, for example, mplayer, and it opens the man page. After closing it I'm back at the old line)). I recommend the following book, which covers zsh, bash and a few other shells: From Bash to Z Shell: Conquering the Command Line. While it is a few years old now, I'm glad that did not put me off buying it. This recommendation also holds if you do not intend to switch to zsh.
I have been using the command line a lot for a few years now (locally and over SSH); I've only recently made the switch to zsh myself (mainly because of my customized bash prompt, which is not compatible). Below is my zsh config (included in my dotfiles repository). You can use chsh -s /bin/zsh to change your shell (via /etc/passwd), or simply call it from your existing shell, i.e. type zsh at your bash prompt (you probably need to install it first though: sudo apt-get install zsh).

0 2019-05-08 08:48:13 Source

Here are some common commands for manipulating the filesystem:

• cp [src] [dest] - copies src to dest
• mv [src] [dest] - moves src to dest (also used for renaming)
• cd [dir] - changes the current directory to dir
• pwd - prints the current directory
• cat [file] - prints the contents of a file to the screen
• rm [file] - removes a file 1
• rmdir [dir] - removes an empty directory

Prefixing any of the commands with sudo causes the command to be executed as the root user.

1 - do not type sudo rm -rf / as it will remove the filesystem

0 2019-05-08 08:43:57 Source

Find an Ubuntu book with a good command-line index, photocopy it and place it near the computer. Force yourself to use it. A good resource is the book "Ubuntu Linux Toolbox: 1000+ commands", which covers all you need to know (http://www.amazon.com/Ubuntu-Linux-Toolbox-Commands-Debian/dp/0470082933). However, if you do not run a server, in the Ubuntu desktop virtually everything is available through the GUI.

0 2019-05-08 08:36:55 Source

Try using fish. fish is an easy to use command line shell for UNIX-like operating systems such as Linux. Among other things it features more advanced tab completion than bash, which can be really handy while learning.

0 2019-05-08 08:05:16 Source

Use "apropos" (or its equivalent "man -k") to find a command to do something.

$ apropos [my query]

For example, to find the command to copy files:

$ apropos copy

will list a number of commands, of which

cp (1) - copy files and directories

is one. "cp" is the command and "1" is the section of the manuals where it appears. Section 1 is general user commands (other sections include things like library calls, which you won't want). To limit the search to just section 1, use:

$ apropos -s1 [my query]

To then find out more about the command, use "man", for example:

$ man cp

0 2019-05-08 07:38:12 Source

1) Tab completion: A huge convenience. If you are typing a command, you need only type enough of it to give an initial segment that can be extended in a single way, and then press TAB once to extend your initial segment to the whole command. So, for example, on my system umo TAB expands to umount. (On my system; which initial segments are extendable in only one way depends on what you have installed, etc.) If you do not type enough to make the completion unique, TAB will not extend, but a second TAB will display a list of possible completions. So, on my system, um TAB TAB returns: umask umax_pp umount umount.hal.
Tab completion also works with paths: cd /home/me/docs/reallylo TAB will, if unique, expand to cd /home/me/docs/reallylongdirname and, if not unique, offer a list of candidate completions just like um above.

2) man some-command or some-command --help or some-command -h: If you cannot remember how a command works, you can get documentation right there in the shell. man generally gives the most information. Usually one or both of the --help and -h arguments to a command gives a brief summary.

3) head: man some-command takes over the terminal and stops you from entering commands while the man text is displayed. man some-command | head will display the first 10 lines. man some-command | head -n will display the first n lines. In both cases, you get your prompt back, so that you can have the man text on screen as you enter your command.

0 2019-05-08 07:23:02 Source
# Why are atoms with eight electrons in the outer shell extremely stable?

Atoms that have eight electrons in their outer shell are extremely stable. It can't be because both the $s$ and the $p$ orbitals are full, because then an atom with 13 or 18 valence electrons would be extremely stable. ($d$ has 10, and 5 is also stable). Why is it that atoms with eight electrons in the outer shell are extremely stable?

First, this isn't quite true. It is true for the first row of the periodic chart (from lithium to neon). It is almost true for the second row (from sodium to argon), but there are exceptions there. Beyond that it really isn't true at all for the elements beyond the first two columns. The reason for the increased stability in the first two rows lies in quantum mechanics. Classically, we can note that there are no $d$ electrons there. Another way of looking at it from a classical point of view is that the early elements are too small to allow too many other atoms or groups of atoms around them. That tends to go away as you go down the periodic chart and the atoms get "fatter". A typical example is chloroplatinic acid, which has six chlorines around the platinum. Most transition metals can also have more than four groups around them. I suspect that this isn't an exceptionally useful explanation. As I said, the answer really lies in quantum mechanics. In looking up "molecular orbital theory", one reference can be found here.

This video nicely explains your question. The essence is that the Pauli exclusion principle states that two electrons (or, more generally, fermions) cannot occupy the same quantum state. Electrons are fundamentally indistinguishable, but their wavefunctions are not, so there must be some quantum factors which allow us to distinguish between their quantum states. In general we say that the quantum properties of an electron can be conveyed by a set of numbers which we call the quantum numbers of that electron. The distinguishability of the two first-shell electrons is ensured by the spin quantum number: they can have spin $+1/2$ or $-1/2$, so there are 2 electrons in the first shell. But in the second shell there are additional quantum numbers, like the azimuthal quantum number and the magnetic quantum number, that can vary, which allows for up to 8 electrons in a single shell. So why are 8 electrons stable? Because the protons and electrons attract each other, so an atom whose outer shell is completely filled is the most stable configuration.
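To make the counting argument in the second answer concrete, here is a minimal sketch (added here, not part of the original answers; the helper name is arbitrary) that enumerates the allowed quantum numbers $(n, \ell, m_\ell, m_s)$ for a given shell. It reproduces the familiar shell capacities 2, 8, 18, of which the "8" for the second shell is exactly the $2s + 2p$ count discussed above.

```python
from fractions import Fraction

# Count the distinct quantum states (n, l, m_l, m_s) allowed in shell n.
# Pauli exclusion: no two electrons may share all four quantum numbers.
def shell_capacity(n):
    states = [
        (n, l, m_l, m_s)
        for l in range(n)                              # azimuthal: l = 0 .. n-1
        for m_l in range(-l, l + 1)                    # magnetic: -l .. +l
        for m_s in (Fraction(1, 2), Fraction(-1, 2))   # spin
    ]
    return len(states)

print([shell_capacity(n) for n in (1, 2, 3)])  # [2, 8, 18]
```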
1. Dirk

\newcolumntype{x}[1]{%
>{\raggedleft\hspace{0pt}}p{#1}%

leaves me with the error message: Paragraph ended before \newcol@ was complete

Any ideas?

2. Hi Dirk, Thanks for your comment, there was a parenthesis missing, I corrected it. Cheers, Tom

3. stefan08 Hi Tom, \\ would work like \tabularnewline again if you insert the command \arraybackslash, for instance:

\newcolumntype{x}[1]{%
>{\raggedleft\arraybackslash}p{#1}}%

It’s defined in array.sty: \def\arraybackslash{\let\\\tabularnewline}

Stefan

• Christiaan Thanks Tom and Stephan. It worked for me

4. nguyenminhhaivn I have a problem with tables in LaTeX. I want the text in the first row to be centered and the text in the second row to be flushed left. Hope you can help me. Thanks in advance.

5. Hi, You can control the alignment within a cell using “\multicolumn”. Let me give you a simple example:

\begin{table}[ht]
\centering
\begin{tabular}{lrr}
& \multicolumn{1}{c}{Exp. 1} & \multicolumn{1}{c}{Exp. 2}\\
Setting A & 5.43498289 & 4.309872395\\
Setting B & 5.7098429109890 & 4.10983901\\
\end{tabular}
\end{table}%

If you need separation lines, you have to use \vline and \hline between cells. Tom.

6. Hans Thanks a lot for this post. It helped me out! Hans

7. James You are a hero!!!!!!!

8. Norm Thanks for this post; I’ve been trying to figure this out for a while and you’re the only person who has had an answer so far.

9. Thanks a lot. Your post was exactly what I needed. 🙂

10. Al Thanks a lot! It really helped me. But can you explain what should I do, if I want to center my text not only horizontally but vertically?

• Thanks for this question. That’s an easy one :-). Use ‘m’ instead of ‘p’ to vertically align your text within a cell. Hence your column type definition would look like this:

\newcolumntype{x}[1]{%
>{\raggedleft\hspace{0pt}}m{#1}}%

Cheers, Tom.

• Al It works – thanks a lot again!

11. Alp This was a great help, thank you.

12. Thanks for all the tips here!

13. Arthur Hey Tom! It’s been some years, but your solution with \newcolumntype works nicely! However, I want to use such approach with the “booktabs” package. But if I use “x{0.2cm}” on the –last– column, it gives me the error “Misplaced \noalign. (\midrule …)”. That “midrule” is a horizontal line from the booktabs package. If I use “…x{0.20cm}x{0.20cm}p{0.20cm}” (i.e. the last column has the regular “p” instead of “x”) it works flawlessly. Any idea how I can get a working “booktabs” table with defined column-widths and right-aligned?

• tom Hi Arthur, Thanks for your comment. I think the problem is not with booktabs. When you use the fixed-width columns as defined in the array package and set the alignment, you’ll have to reset the newline macro \\ in the last column. This can be done using \arraybackslash. The updated new column definition below should fix the problem:

\newcolumntype{x}[1]{%
>{\raggedleft\arraybackslash\hspace{0pt}}p{#1}}%

Best, Tom

• Arthur Thanks Tom, and sorry for not paying attention to that part! 🙂

14. Line Thank you so much for this helpful post! It was just what I needed!
1 MCQ (Single Correct Answer) JEE Main 2017 (Online) 9th April Morning Slot

A square, of each side 2, lies above the x-axis and has one vertex at the origin. If one of the sides passing through the origin makes an angle $30^\circ$ with the positive direction of the x-axis, then the sum of the x-coordinates of the vertices of the square is :

A $2\sqrt 3 - 1$
B $2\sqrt 3 - 2$
C $\sqrt 3 - 2$
D $\sqrt 3 - 1$

Explanation

Let the coordinates of point A be (x, y). For point A, ${x \over {\cos {{30}^ \circ }}} = {y \over {\sin {{30}^ \circ }}} = 2$ $\Rightarrow$ x = $\sqrt 3$ and y = 1.

Similarly, for point B, ${x \over {\cos {{75}^ \circ }}} = {y \over {\sin {{75}^ \circ }}} = 2\sqrt 2$ $\therefore$ x = $\sqrt 3 - 1$, y = $\sqrt 3 + 1$.

For point C, ${x \over {\cos {{120}^ \circ }}} = {y \over {\sin {{120}^ \circ }}} = 2$ $\Rightarrow$ x = $-1$, y = $\sqrt 3$.

$\therefore$ Sum of the x-coordinates of the vertices = $0 + \sqrt 3 + \sqrt 3 - 1 + (-1) = 2\sqrt 3 - 2$.

2 MCQ (Single Correct Answer) JEE Main 2018 (Offline)

A straight line through a fixed point (2, 3) intersects the coordinate axes at distinct points P and Q. If O is the origin and the rectangle OPRQ is completed, then the locus of R is :

A 3x + 2y = 6xy
B 3x + 2y = 6
C 2x + 3y = xy
D 3x + 2y = xy

Explanation

Let the coordinates of point R be (h, k). Equation of line PQ: (y - 3) = m(x - 2).

Put y = 0 to get the coordinates of point P: $0 - 3 = m(x - 2)$ $\Rightarrow$ x = $2 - {3 \over m}$ $\therefore$ P = $(2 - {3 \over m}, 0)$. As P = (h, 0), we have h = $2 - {3 \over m}$ $\Rightarrow$ ${3 \over m} = 2 - h$ $\Rightarrow$ m = ${3 \over {2 - h}}$ . . . (1)

Put x = 0 to get the coordinates of point Q: $y - 3 = m(0 - 2)$ $\Rightarrow$ y = 3 - 2m $\therefore$ Q = (0, 3 - 2m). And from the graph you can see Q = (0, k). $\therefore$ k = 3 - 2m $\Rightarrow$ m = ${{3 - k} \over 2}$ . . . (2)

By comparing (1) and (2) we get ${3 \over {2 - h}} = {{3 - k} \over 2}$ $\Rightarrow$ (2 - h)(3 - k) = 6 $\Rightarrow$ 6 - 3h - 2k + hk = 6 $\Rightarrow$ 3h + 2k = hk.

$\therefore$ The locus of point R is 3x + 2y = xy.

3 MCQ (Single Correct Answer) JEE Main 2018 (Online) 15th April Morning Slot

In a triangle ABC, coordinates of A are (1, 2) and the equations of the medians through B and C are respectively, x + y = 5 and x = 4. Then the area of $\Delta ABC$ (in sq. units) is :

A 12
B 4
C 5
D 9

Explanation

The median through C is x = 4, so the x-coordinate of C is 4. Let C = (4, y); then the midpoint of A(1, 2) and C(4, y) is D, which lies on the median through B.

$\therefore$ D = $\left( {{{1 + 4} \over 2},{{2 + y} \over 2}} \right)$

Now, ${{1 + 4 + 2 + y} \over 2}$ = 5 $\Rightarrow$ y = 3. So, C $\equiv$ (4, 3).

The centroid of the triangle is the intersection of the medians. Here the medians x = 4 and x + y = 5 intersect at G(4, 1).

The area of triangle $\Delta$ABC = 3 $\times$ area of $\Delta$AGC = 3 $\times$ ${1 \over 2}$ [1(1 $-$ 3) + 4(3 $-$ 2) + 4(2 $-$ 1)] = 9.

4 MCQ (Single Correct Answer) JEE Main 2018 (Online) 15th April Evening Slot

The sides of a rhombus ABCD are parallel to the lines, x $-$ y + 2 = 0 and 7x $-$ y + 3 = 0.
If the diagonals of the rhombus intersect at P(1, 2) and the vertex A (different from the origin) is on the y-axis, then the coordinate of A is :

A ${5 \over 2}$
B ${7 \over 4}$
C 2
D ${7 \over 2}$

Explanation

Let the coordinates of A be (0, c). The equations of the given lines are x $-$ y + 2 = 0 and 7x $-$ y + 3 = 0, i.e. y = x + 2 and y = 7x + 3. We know that the diagonals of the rhombus will be parallel to the angle bisectors of the two given lines.

$\therefore$ The equation of the angle bisectors is: ${{x - y + 2} \over {\sqrt 2 }} = \pm {{7x - y + 3} \over {5\sqrt 2 }}$ $\Rightarrow$ 5x $-$ 5y + 10 = $\pm$ (7x $-$ y + 3)

$\therefore$ The lines parallel to the diagonals are 2x + 4y $-$ 7 = 0 and 12x $-$ 6y + 13 = 0, so the slopes of the diagonals are ${{ - 1} \over 2}$ and 2.

Now, the slope of the diagonal from A(0, c) passing through P(1, 2) is (2 $-$ c).

$\therefore$ 2 $-$ c = 2 $\Rightarrow$ c = 0 (not possible, since A differs from the origin)

$\therefore$ 2 $-$ c = ${{ - 1} \over 2}$ $\Rightarrow$ c = ${5 \over 2}$

$\therefore$ The coordinate of A is ${5 \over 2}$.
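As a quick numerical cross-check of the rhombus answer above (a sketch added here, not part of the original solution), one can compute the angle-bisector directions of the two given lines and confirm that the diagonal through P(1, 2) that meets the y-axis away from the origin does so at (0, 5/2):

```python
import numpy as np

# Unit direction vectors of the lines x - y + 2 = 0 and 7x - y + 3 = 0
u1 = np.array([1.0, 1.0]) / np.hypot(1.0, 1.0)   # slope 1
u2 = np.array([1.0, 7.0]) / np.hypot(1.0, 7.0)   # slope 7

# The rhombus diagonals are parallel to the angle bisectors, i.e. to u1 + u2 and u1 - u2.
slopes = [(u1 + u2)[1] / (u1 + u2)[0], (u1 - u2)[1] / (u1 - u2)[0]]
print(np.round(slopes, 6))   # [ 2.  -0.5]

# A = (0, c) lies on a diagonal through P(1, 2), so the slope from A to P is 2 - c.
# Solve 2 - c = m for each bisector slope m and keep the vertex that is not the origin.
candidates = [2 - m for m in slopes]
print([c for c in candidates if abs(c) > 1e-9])   # [2.5]  ->  A = (0, 5/2)
```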
# Getting to know image processing

## The pixels mystery

A (raw) image is composed of pixels, as you know. Every pixel can be encoded in one of the following ways.

### Basic pixel encoding

On a single value between 0 and 255: this is the case of grayscale images. Every pixel can take a value from 0 (black) to 255 (white), covering the whole grayscale range.

On 3 or 4 color channels per pixel. These channels are:

• The red channel, which can take a value between 0 (black) and 255 (fully red).
• The green channel, which can take a value between 0 (black) and 255 (fully green).
• The blue channel, which can take a value between 0 (black) and 255 (fully blue).
• An optional fourth channel, which adds an opacity value between 0 (fully transparent) and 255 (fully opaque). This means the image can contain fully transparent pixels, which permits displaying a shape through which, for example, your desktop background shows.

So each pixel is encoded on 3 or 4 values between 0 and 255, representing the red, green, blue and optionally alpha channel values. This gives us the RGB(A) color space. It is not very intuitive, and it is not the way humans think about colors, but the red, green and blue values combined give us a wide gamut of different colors: 256 × 256 × 256 = 16777216 possible colors.

Note: In fact, the human visual system is also based on the trichromatic perception of colors, with cone cell sensitivity located around the red, green and blue parts of the spectrum. This is the common pixel encoding in digital imaging.

Examples:

A fully red pixel takes the following values:

| Channel | Red | Green | Blue |
|---------|-----|-------|------|
| Value   | 255 | 0     | 0    |

Note: Red is a base color because only one channel is set.

A fully green pixel takes the following values:

| Channel | Red | Green | Blue |
|---------|-----|-------|------|
| Value   | 0   | 255   | 0    |

Note: Green is a base color because only one channel is set.

A fully blue pixel takes the following values:

| Channel | Red | Green | Blue |
|---------|-----|-------|------|
| Value   | 0   | 0     | 255  |

Note: Blue is a base color because only one channel is set.

A fully yellow pixel takes the following values (we mix red and green to obtain yellow):

| Channel | Red | Green | Blue |
|---------|-----|-------|------|
| Value   | 255 | 255   | 0    |

Note: Yellow is a composite color because 2 different channels are set.

A fully pink pixel takes the following values (we mix red and blue to obtain pink):

| Channel | Red | Green | Blue |
|---------|-----|-------|------|
| Value   | 255 | 0     | 255  |

Note: Pink is a composite color because 2 different channels are set.

A fully turquoise pixel takes the following values (we mix green and blue to obtain turquoise):

| Channel | Red | Green | Blue |
|---------|-----|-------|------|
| Value   | 0   | 255   | 255  |

Note: Turquoise is a composite color because 2 different channels are set.

A mid-gray pixel encoded in RGB (Red, Green, Blue) looks like this:

| Channel | Red | Green | Blue |
|---------|-----|-------|------|
| Value   | 127 | 127   | 127  |

Note: Gray is a composite color because all 3 channels are set. Gray can also be encoded on a single value: a 1-byte encoding.

### Other pixel encodings

There are other image encoding systems: the phenomenal color spaces, which are based on the concepts of

• Hue.
• Saturation.
• And brightness.

These properties are more intuitive for humans. The phenomenal color spaces were introduced because they correspond to the way humans tend to naturally organize colors, with attributes like:

• Tint.
• Colorfulness.
• And brightness.

• Hue: represents the dominant color. The name you give to a color corresponds to its hue value.
• The hue value varies from 0 to 180, representing the dominant color on a circle (0-360 degrees) whose value is divided by 2 for encoding convenience. In other words, the hue represents the tint of the color.
• Saturation: represents how vivid the color is. The saturation varies from 0 to 255 (pure spectrum color):
  • Low value: pastel color.
  • High value: vivid, rainbow-like color.

    Saturation = (max(red, green, blue) - min(red, green, blue)) / max(red, green, blue)

• Brightness: represents the luminosity of a color. This is a subjective attribute.

Note: Other phenomenal color spaces use the concept of value or lightness as a way to characterize the relative color intensity.

Note: These color attributes try to mimic the intuitive human perception of colors.

• Example: the HSB color space.
  • The hue is the circle at the top of the cone.
  • The brightness is the height along the cone.
  • The saturation depends on the length of the arrow, which must be oriented in the direction of the hue value.

## The convolution kernels

A kernel is a set of weights which determine how each output pixel is calculated from a neighborhood of input pixels. Applying a kernel filter consists of moving the kernel over each pixel of an image and multiplying each corresponding pixel by its associated weight. Here the central weight corresponds to the pixel of interest and the others to the neighborhood pixels.

The kernel is a square matrix of odd size, from 3x3 up to 31x31, and can take different forms:

### Kernel forms

• A square type kernel matrix:

    1 1 1 1 1
    1 1 1 1 1
    1 1 1 1 1
    1 1 1 1 1
    1 1 1 1 1

• A diamond type kernel matrix:

    0 0 1 0 0
    0 1 1 1 0
    1 1 1 1 1
    0 1 1 1 0
    0 0 1 0 0

• A cross type kernel matrix:

    0 0 1 0 0
    0 0 1 0 0
    1 1 1 1 1
    0 0 1 0 0
    0 0 1 0 0

• An X type kernel matrix:

    1 0 0 0 1
    0 1 0 1 0
    0 0 1 0 0
    0 1 0 1 0
    1 0 0 0 1

The way the values are multiplied depends on the operator which uses the matrix, and the resulting value of the central pixel depends on it. The total sum of the matrix values represents the weight of the matrix, and so we often adjust the value of the central cell to make the total weight equal to 1 or 0, but not always, because it depends on how the values are interpolated:

### Classical kernel matrices

• A sharpen kernel matrix:

    -1 -1 -1
    -1  9 -1
    -1 -1 -1

  has a weight of +1 due to the 8 values of -1 and the central +9 value.

• A Find Edges kernel matrix:

    -1 -1 -1
    -1  8 -1
    -1 -1 -1

  has a weight of 0 due to the 8 values of -1 and the central +8 value.

• An Emboss kernel matrix:

    -2 -1  0
    -1  1  1
     0  1  2

  has a weight of +1: the off-center values (-2, -1, -1, 0, 0, 1, 1, 2) sum to 0 and the central value is 1.

• A Mean kernel matrix:

    1/9 1/9 1/9
    1/9 1/9 1/9
    1/9 1/9 1/9

  has a weight of +1 because every value equals 1.0 divided by the number of cells in the kernel.

• A Gaussian kernel matrix: is based on a Gaussian vector, a vector of values generated from a sigma value.
For example, a Gaussian vector of size 7 with a sigma value of 1.2:

    [ 0.015, 0.083, 0.236, 0.333, 0.236, 0.083, 0.015 ]

• A Kirsch kernel matrix has an orientation:
  • East orientation:

        -3 -3  5
        -3  0  5
        -3 -3  5

  • West orientation:

         5 -3 -3
         5  0 -3
         5 -3 -3

  • North orientation:

        -3 -3 -3
        -3  0 -3
         5  5  5

  • South orientation:

         5  5  5
        -3  0 -3
        -3 -3 -3

• A Sobel kernel matrix:
  • Horizontal:

        -1 -2 -1
         0  0  0
         1  2  1

  • Vertical:

        -1  0  1
        -2  0  2
        -1  0  1

• A Laplacian kernel matrix:

    0.5 1 0.5
    1   6 1
    0.5 1 0.5

## Thresholding

Thresholding can be used to obtain a binary image (a black and white image) by setting a value: the threshold value. All pixel values (encoded on a single value) greater than this value are set to white; all smaller values are set to black. This produces a binary map of the source image.

Note: Thresholding can be inverted, so that greater values are set to black and smaller values are set to white. This permits having both a White and a Black version of the filters that produce a binary image.

There exist algorithms which can determine the best threshold value based on an analysis of the grayscale image.

Note: You can take the average of the threshold values computed by these two algorithms, as in the Binary White Average or the Binary Black Average filters.

## Binary images

Binary images are strictly black and white images, with the convention that:

The foreground of the image is represented in white.
The background of the image is represented in black.
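As an illustration of the kernel filtering and thresholding described above, here is a minimal NumPy/SciPy sketch (added as an example, not part of the original tutorial); the array contents and the threshold value of 128 are arbitrary placeholders.

```python
import numpy as np
from scipy.ndimage import convolve

# A small fake grayscale image (values 0-255): a bright square on a dark background.
gray = np.zeros((8, 8), dtype=float)
gray[2:6, 2:6] = 200.0

# The 3x3 sharpen kernel from the tutorial (total weight +1).
sharpen = np.array([[-1, -1, -1],
                    [-1,  9, -1],
                    [-1, -1, -1]], dtype=float)

# Slide the kernel over every pixel and clamp the result back to the 0-255 range.
sharpened = np.clip(convolve(gray, sharpen, mode="nearest"), 0, 255)

# Simple thresholding: pixels above the threshold become white, the rest black.
threshold = 128
binary = np.where(sharpened > threshold, 255, 0)
inverted = 255 - binary   # the inverted variant mentioned above

print(binary)
```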
# Alternative Policies This section describes forecasting under an alternative monetary policy rule. That is: 1. The state space system is split into two exogenous and unanticipated regimes, the "historical" regime and the "forecast" regime. The historical policy rule applies during the "historical" regime, and the alternative policy rule applies to the "forecast' regime. 1. Filtering and smoothing is done under the historical monetary policy rule, i.e. the one defined in the eqcond method for the given model. 2. Forecasts and IRFs are computed under the alternative rule. See Regime-Switching Forecasts for details on how forecasting works when the state space system includes exogenous regime-switching. Alternative policies can be either permanent or temporary. To use alternative policies, the user needs to ensure that the model can be solved with regime-switching (see Regime-Switching). ## Procedure for Permanent Alternative Policies The user defines some instance of the AltPolicy type (described below) and then calls the function setup_permanent_altpol!. Then the function calls made to forecast and compute means and bands remain the same as usual (see Forecasting and Computing Means and Bands). For example, suppose you have defined the functions taylor93_eqcond and taylor93_solve corresponding to Taylor (1993)'s proposed monetary policy rule. Then you can run: m = AnSchorfheide() setup_permanent_altpol!(m, AltPolicy(:taylor93, taylor93_eqcond, taylor93_solve); cond_type = :none) forecast_one(m, :mode, :none, [:forecastobs, :forecastpseudo]) compute_meansbands(m, :mode, :none, [:forecastobs, :forecastpseudo]) Permanent alternative policies utilize some of the same machinery as temporary alternative policies, but they use different algorithms for converting the equilibrium conditions from gensys form to the reduced form transition matrices for a state space system. The function setup_permanent_altpol! performs the setup required to interface with this machinery. The keyword argument cond_type is necessary because when the alternative policy is applied depends on whether the forecast is conditional or not. If a forecast is conditional, it is assumed that the alternative policy does not occur until after the conditional horizon, to maintain the idea that the alternative policy is entirely unanticipated. ## Procedure for Temporary Alternative Policies Another counterfactual exercise is temporarily imposing a different monetary policy rule, i.e. a temporary alternative policy, before switching back to either the historical rule or some (permanent) alternative rule. To implement this, we utilize exogenous regime switching in the forecast horizon. In a rational expectations equilibrium, agents will take into account the fact that the temporary policy is expected to terminate. A different algorithm than Chris Sims's standard gensys algorithm is required, which we have implemented as gensys2. Note that this gensys2 is different from the gensys2 Chris Sims has implemented to calculate second-order perturbations. To set up a temporary alternative policy, a user first needs to specify alternative policy using the type AltPolicy. 
For instance, this code implements a Nominal GDP targeting policy, and the AltPolicy is constructed by calling DSGE.ngdp(), or equivalently AltPolicy(policy, DSGE.ngdp_eqcond, DSGE.ngdp_solve, forecast_init = DSGE.ngdp_forecast_init) where the inputs to AltPolicy here are DSGE.ngdp_eqcondFunction ngdp_eqcond(m::AbstractDSGEModel, reg::Int = 1) Solves for the transition equation of m under a price level targeting rule (implemented by adding a price-gap state) source Missing docstring. Missing docstring for DSGE.ngdp_replace_eq_entries. Check Documenter's build log for details. DSGE.ngdp_solveFunction ngdp_solve(m::AbstractDSGEModel) Solves for the transition equation of m under a price level targeting rule (implemented by adding a price-gap state) source DSGE.ngdp_forecast_initFunction init_ngdp_forecast(m::AbstractDSGEModel, shocks::Matrix{T}, final_state::Vector{T}) Adjust shocks matrix and final state vector for forecasting under the NGDP rule source Note that ngdp_replace_eq_entries is called by ngdp_eqcond but is not a direct input to AltPolicy. The user also needs to complete the following steps to apply temporary alternative policies. • Adding a regime for every period during which the alternative policy applies, plus one more regime for the policy which will be permanently in place after the temporary policies end. • Adding the setting Setting(:gensys2, true) to indicate gensys2 should be used. If this setting is false or non-existent, then alternative policies will be treated as if they are permanent. Their equilibrium conditions will be solved using gensys, which can lead to determinacy and uniqueness problems if the alternative policy should be temporary (e.g. a temporary ZLB). • Adding the setting Setting(:replace_eqcond, true) to indicate equilibrium conditions will be replaced. • Adding the setting Setting(:regime_eqcond_info, info), where info should be a Dict{Int, DSGE.EqcondEntry} mapping regimes to instances of EqcondEntry, a type which holds any information needed to update equilibrium conditions to implement a given alternative policy. Borrowing the example of temporary NGDP targeting, the relevant EqcondEntry would be constructed as EqcondEntry(DSGE.ngdp()). Note that the user only needs to populate this dictionary with regimes in which the eqcond function differs from the default. To see an example of using temporary alternative policies, see the example script for regime-switching. ## Alternative Policy Uncertainty and Imperfect Awareness Click on the section header for details on how to add policy uncertainty or imperfect credibility to alternative policies (both permanent and temporary). ## MultiPeriodAltPolicy Click on the section header to see the primary use of the type MultiPeriodAltPolicy, which extends AltPolicy to specify multiple regimes. In particular, one of the fields of MultiPeriodAltPolicy is regime_eqcond_info, which stores a dictionary that can be used to update a model's regime_eqcond_info Setting, i.e. m <= Setting(:regime_eqcond_info, multi_period_altpol.regime_eqcond_info) ## Types DSGE.AltPolicyType mutable struct AltPolicy Types defining an alternative policy rule. Fields • key::Symbol: alternative policy identifier • eqcond::Function: a version of DSGE.eqcond which computes the equilibrium condition matrices under the alternative policy. Like DSGE.eqcond, it should take in one argument of mutable struct AbstractDSGEModel and return the Γ0, Γ1, C, Ψ, and Π matrices. 
• solve::Function: a version of DSGE.solve which solves the model under the alternative policy. Like DSGE.solve, it should take in one argument of mutable struct AbstractDSGEModel and return the TTT, RRR, and CCC matrices. • forecast_init::Function: a function that initializes forecasts under the alternative policy rule. Specifically, it accepts a model, an nshocks x n_forecast_periods matrix of shocks to be applied in the forecast, and a vector of initial states for the forecast. It must return a new matrix of shocks and a new initial state vector. If no adjustments to shocks or initial state vectors are necessary under the policy rule, this field may be omitted. • color::Colorant: color to plot this alternative policy in. Defaults to blue. • linestyle::Symbol: line style for forecast plots under this alternative policy. See options from Plots.jl. Defaults to :solid. source DSGE.EqcondEntryType mutable struct EqcondEntry Type to hold the entries in the regimeeqcondinfo dictionary for alternative policies, regime switching, and imperfect awareness. source DSGE.MultiPeriodAltPolicyType mutable struct MultiPeriodAltPolicy Types defining an alternative policy rule. Fields • key::Symbol: alternative policy identifier • regime_eqcond_info::AbstractDict{Int64, EqcondEntry}: a dictionary mapping regimes to equilibrium conditions which replace the default ones in a given regime. • gensys2::Bool: if true, the multi-period alternative policy needs to call gensys2 instead of gensys to work. • temporary_altpolicy_names::Union{Vector{Symbol}, Nothing}: specifies the names of temporary policies which may occur, e.g. [:zero_rate] if a temporary ZLB is implemented using the zero_rate AltPolicy. • temporary_altpolicy_length::Int64: the temporary alternative policy's length which is used to determine which regimes need to be solved using gensys2 instead of gensys. • infoset::Union{Vector{UnitRange{Int64}}, Nothing}: either a vector specifying the information set used for expectations in the measurement equation or nothing to indicate myopia in expectations across regimes. • perfect_credibility_identical_transitions::Union{Dict{Int64, Int64}, Nothing}: if not a Nothing, then this field specifies regimes which have identical transition equations in the case of perfect credibility. • identical_eqcond_regimes::Union{Dict{Int64, Int64}, Nothing}: if not a Nothing, then this field specifies regimes which have identical equilibrium conditions. • forecast_init::Function: a function that initializes forecasts under the alternative policy rule. Specifically, it accepts a model, an nshocks x n_forecast_periods matrix of shocks to be applied in the forecast, and a vector of initial states for the forecast. It must return a new matrix of shocks and a new initial state vector. If no adjustments to shocks or initial state vectors are necessary under the policy rule, this field may be omitted. • color::Colorant: color to plot this alternative policy in. Defaults to blue. • linestyle::Symbol: line style for forecast plots under this alternative policy. See options from Plots.jl. Defaults to :solid. source
# Two oil cans, X and Y, are right circular cylinders : GMAT Problem Solving (PS)

21 Jan 2013, 01:41

Two oil cans, X and Y, are right circular cylinders, and the height and the radius of Y are both twice those of X. If oil can X, which is filled to capacity, sells for $2, then at the same rate, how much does oil can Y sell for if Y is filled to only half its capacity?

A. $1
B. $2
C. $3
D. $4
E. $8

21 Jan 2013, 01:51

Let us suppose the radius and height of can X are r and h respectively. So the radius and height of can Y will be 2r and 2h respectively.

Volume of X = 3.14 * r^2 * h
Volume of Y = 3.14 * (2r)^2 * 2h = 8 * 3.14 * r^2 * h = 8 times X's volume.

Since X is sold for $2, and Y is filled to only half of its capacity, Y will be sold for 4 * $2 (half of Y's volume = 8/2 times X's volume) = $8.

Please provide Kudos if you like my explanation.

21 Jan 2013, 02:20

No doubt abhii6, you deserve a kudos.. +1 for the nice reply.. hey, I have never done a kudos before but I guess that's how it is done.. if not then please tell me and I will readily redo it.

21 Jan 2013, 02:39

Thanks Frazer for the Kudos.
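As a quick numerical check of the explanation above (an added sketch, not part of the original thread):

```python
import math

r, h = 1.0, 1.0                      # any radius and height for can X will do
vol_x = math.pi * r**2 * h
vol_y = math.pi * (2 * r)**2 * (2 * h)

price_per_volume = 2 / vol_x         # can X, full, sells for $2
price_y_half_full = price_per_volume * (vol_y / 2)

print(vol_y / vol_x, price_y_half_full)   # 8.0 8.0  ->  answer E, $8
```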
Torques and Tensions

1. Apr 24, 2008 (TA1068)

1. The problem statement, all variables and given/known data
A horizontal uniform bar of mass m and length L is hung horizontally on two vertical strings. String 1 is attached to the end of the bar and string 2 is attached a distance L/4 from the other end. A monkey of mass m/2 walks from one end of the bar to the other. Find the tension T_1 in string 1 at the moment that the monkey is halfway between the ends of the bar.

2. Relevant equations
$$\tau = rF\sin\theta$$
$$F_{thrust} = ma_t = mr\alpha$$

3. The attempt at a solution
In all honesty, I'm really not too sure where to start. I drew a diagram of what is going on. I know at least that the tensions are going to add up to (3/2)mg, since there is the m of the bar pulling down and also the m/2 of the monkey pulling down, and the system is in equilibrium. So: $$T_1 + T_2 = 1.5mg$$ becomes: $$T_1 = 1.5mg - T_2$$ Do I have to choose a pivot point? I was thinking where $$T_2$$ connects to the bar, because it was an answer to a question leading up to this, but I'm not exactly sure why. What effect does placing the pivot point here have in relation to $$T_2$$? Any help would be greatly appreciated!!

2. Apr 24, 2008 (TA1068)
Oops! I forgot to attach my image. Here it is attached, and if that doesn't work I uploaded it as well: img229.imageshack.us/img229/1795/tensionwm2.jpg

3. Apr 25, 2008 (alphysicist)
Hi TA1068, Wherever you place your pivot point, if there is a force (or more than one) acting at that point then that force produces no torque about that point (because the lever arm is zero). So that force or forces will not appear in the torque equation for that pivot point. Because of that, quite often a good choice for the pivot point is at the place where an unknown force is acting, because your equation will then have one less unknown variable. So choosing the pivot point at the place where T2 pulls on the bar is a good choice. What do you get?

Last edited: Apr 25, 2008
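One way to carry the hint through to the end (a sketch, not part of the original thread; it assumes the geometry stated above: a uniform bar, string 1 at one end, string 2 a distance 3L/4 from string 1, and the monkey at the midpoint L/2). Taking torques about the attachment point of string 2, both weights act a lever arm L/4 from the pivot, so

$$T_1 \cdot \frac{3L}{4} = mg\cdot\frac{L}{4} + \frac{m}{2}g\cdot\frac{L}{4} \quad\Longrightarrow\quad T_1 = \frac{mg}{2},$$

and then $$T_2 = \tfrac{3}{2}mg - T_1 = mg.$$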
## For Blockbuster Movies, Is Winter the New Summer? When I was in high school, I very much wanted Star Wars Episode I to crush Titanic and become the highest-grossing domestic film of all time. Even though Episode I was, by most acccounts, terrible, I still paid to see it several times in theaters. Alas, my efforts were in vain; the film didn't even come close to toppling Titanic. What a difference a good Star Wars movie makes. It only took 15 days for The Force Awakens to cruise past Titanic's domestic total, and just 20 days for it to surpass Avatar as the highest grossing domestic film of all time. (Though these numbers are not inflation-adjusted; if we adjust for inflation, The Force Awakens has passed Avatar, but not Titanic). If you consider what are now the top three highest grossing domestic films of all time, you may notice a pattern. Despite summer being well-known as the season to release potentially record-breaking blockbusters, The Force Awakens, Avatar and Titanic were all released during the holiday season. This led me... ## Do Extra Innings Games Predict World Series Longevity? Now that baseball season has ended, I find myself going through withdrawal. And with spring training several months away, I need something to fill the void left in my heart. To that end, let's take a moment and look back – with a mathematical eye, of course – on the 2015 World Series. In case you missed it, Game 1 was one for the record books. The New York Mets and the Kansas City Royals duked it out for 14 innings. This was long enough to take the crown for the longest Game 1 in World Series history (though it tied Game 2 in 1916 and Game 3 in 2005, which were also 14 innings long). Here are the highlights: Even before Game 1, many pundits predicted that the series would last a full seven games. For example, all 5 members of the CBS Sports staff polled in this article predicted a seven-game series, though they were split on which team would emerge victorious. It shouldn't be surprising, then, that after Game 1 analysts maintained their faith that the series would be a long one... ## Cards with Mathematics I'm a sucker for a good card game. Or even a bad card game, if it's played in the company of good friends. Or even a bad card game played in the company of poor friends, as long as the food is decent. Because of this relatively low bar, I've played a variety of card games. And some of these games have sparked interesting mathematical questions. If you've taken a probability course, for instance, you may have explored the intersection of cards and mathematics a bit. Maybe you had to calculate the probability of being dealt a full house in poker, or of busting in a game of blackjack. But mathematics exists even in card games that don't typically come up in math class. And just because a deck of cards doesn't have any numbers on it, that doesn't mean it should be exiled into the realm of the non-mathematical. To prove this point, consider Cards Against Humanity (or, if you'd prefer, its more family-friendly predecessor: Apples to Apples). In order to understand the rules of the game... ## Why You Should Have a Kid in 2016 I would start off by apologizing for the clickbait-y title, but I'd rather not take any of the blame. Instead, I'll use Numberphile as a scapegoat, since one of their most recent videos inspired both the title and the content of this post. The video in question is titled Why 1980 was a great year to be born… but 2184 will be even better. 
Like many who are mathematically minded, I couldn't help but watch. In the video, Matt Parker talks about a mathematical property of the year 1980. If you've got a few minutes, here's the full video: And here's the Cliffs Notes version. Basically, Parker says that being born in 1980 is awesome, because he'll turn 45 in the year 2025, which is 45². This birth-year property hasn't occurred since 1892 (people born in that year turned 44 in 1936 = 44²), and it won't happen again until 2070 (people born in that year will turn 46 in 2116 = 46²). To be sure, it's a pretty nerdy reason to get psyched about being born in 1980, but I can't begrudge the man...

## Hello, World!

Hi there! How are you? It's been a while. You're looking good, is that a new shirt? Well, enough about you. Let's talk about Math Goes Pop. As you may have noticed, the output lately has been - to put it kindly - a little slow. Part of the reason for this is that I've been spending some of my free time retooling the blog. As a Tau Day gift, the fruits of my labors are now laid bare here before you. Math Goes Pop has been rebuilt from the ground up. Aesthetics aside, the site is much the same as it was before, but hopefully feels a little cleaner and works a little better. I'd also like to tip my hat to Mira Gomha, who designed the new logo. If you want to see more of her work, you can check out her portfolio or give her a shout on Twitter. Now that Math Goes Pop has a shiny new coat of digital paint, you can expect to see musings from me on a somewhat more regular basis. For now, I'll just leave you with a small puzzler that came to mind as I was migrating all the old posts from...

## Give it Away, Give it Away, Give it Away Later

For hockey fans, summer is a quiet time of year. I've never followed the sport that closely, but with the Kings having recently won the Stanley Cup for the second time in three years, I'm reminded of a curious incident that I witnessed during the only NHL game I've ever been to. A friend of mine received free tickets to a Kings game when I was living in LA several years ago. He invited my now-wife and me along, and the price was certainly right, so four of us went to the Staples center one Saturday afternoon. I don't remember much about the game (though I do recall that the Kings emerged victorious). What I remember most vividly was that during one of the breaks between periods, a new car was brought onto the ice and there was a contest to give that car away. Sort of like what happens in this video, but the rules were a little different: In the game I attended, six contestants were given a key to a new car, but they were told that only one key would start the vehicle. One at a...

## Keeping it Real: An Addendum

Last week, Dan Meyer invited the folks at Mathalicious to opine on the meaning of the phrase "real-world," not as it applies to MTV shows (though that would make for a great conversation), but as it applies to questions asked of students in a math classroom. This week, we responded, continuing what I believe to be an important and interesting discussion about the nature of what we mean when we demand that mathematics be made more "real" for our students. Most of my thoughts on the subject are encapsulated in the Mathalicious response. (Both articles come highly recommended, and what I say below may not make much sense if you haven't read them first.) The conversation got me thinking, though, and so I'd like to offer my own personal aside/addendum.
When I began writing in this corner of the internet in the summer of 2008, my goal was simply to talk about mathematical ideas in a way that was accessible for a general audience (and in particular, an audience that didn't necessarily think... ## It's Not Complicated. Or is it? Though I am hardly AT&T's biggest fan, I can't help but be charmed by their "It's Not Complicated" ad campaign.  Each ad features a dapper looking man asking softball questions to a group of young children.  Though the ads are meant to elicit mostly meaningless platitudes that AT&T then spins as selling points (e.g. "Faster is Better"), the children's answers and the gentleman's reactions make the ad-watching experience just a little bit more bearable. In one of the campaign's more recent ads, however, I was disappointed to see a teachable moment go to waste.  I suppose this is what happens when you have a cell phone company spokesman in a room full of children instead of an actual teacher.  (Though to be fair, the math involved isn't really suitable for elementary school.) In case you don't have time to watch cell phone commercials, here's a transcript of their conversation. AT&T Guy: What's the biggest number you can think of? Girl 1: A trillion billion zillion... ## Down with Plurality! Hi friends, As some of you may know, in general I don't hold our country's voting methods in very high regard.  Think about the way we vote for president, for instance.  Aside from not asking voters to state any preferences at all, it's difficult to do worse than our current system: we can only show our support for a single candidate, when in fact our preferences may be more nuanced.  Moreover, since we can only vote for a single candidate, there's little incentive to vote for our favorite one, unless our favorite happens to be a front-runner.  This is known all across the universe, as evidenced by the Presidential runs of Kang and Kodos: Even worse, a third party candidate who garners a decent amount of support may end up hurting his own party and parties more closely aligned to it by acting as a "spoiler."  Of course, the most well-known example of this is Ralph Nader, who many people believe cost Al Gore the 2000 election (for more on the spoiler effect, see here). For all these... ## Mathalicious Post: Most Expensive. Collectibles. Ever. Hey y'all.  My most recent post on the Mathalicious blog has been live for a while, but in case you missed it, I'd encourage you to go check it out!  Consider it a Simpsons themed cautionary tale for collectors on a budget.  Here's a sample: One of the more recent trends in the world of Simpsons memorabilia is the advent of the Mini-Figure collections, produced by Kidrobot.  Each series (there have been two so far) consists of around 25 small Simpsons figures, each with his or her own accessories.  The figures cost around $10 each ($9.95, to be precise), so an avid collector would need to spend something like \$250 to complete each of the two collections, right? Well, not quite.  When you buy one of these figures, you have no idea which one you’ll get, because the box containing the figure doesn’t indicate what’s inside.  All you know are the probabilities for each figure, and even those are sometimes missing... Given this information, here’s a natural question: how many of these boxes... Page 1 of 20 Next page
A short script to upload screenshots to an SFTP server ## Project description Challenged by the ease of use of existing tools to capture and upload screenshots so that they are available for the general public, this script was made. The only required dependencies are scp and scrot. Optional dependencies are: • optipng (used by default, can be turned off in configuration) • xclip (to save URLs to clipboard or to capture PNG images from clipboard) • zenity (to display URLs in a window) ## Configuration The upload URL is configured using a config file dont-puush-me.ini. It can be placed anywhere in your XDG configuration paths (usually ~/.config and /etc/xdg). The config file has the following format: [upload] scp_format=user@host:/path/to/directory/{}.png url_format=https://host.example/directory/{}.png [process] optipng=true The scp_format and url_format settings are required and no default exists for them. In those, the first occurence of {} is replaced with the automatically generated filename for the image. You need to set those values such that scp can be used to upload the file so that it will be reachable at the url given by url_format afterwards. The optipng setting is a boolean setting which enables or disables the default use of optipng before uploading an image. ## Usage The usage is simple: usage: dont-puush-me [-h] (-f | -r | -c) [-d SECONDS] [--no-optipng | --optipng] [--save-locally FILE] [-z] [-p] [-s] [-b] Use scrot to create a screenshot which is either saved locally or uploaded to a hardcoded server. optional arguments: -h, --help show this help message and exit Capture settings: -f, --fullscreen Make a shot of the entire screen -r, -w, --select Select a region or window to screenshot -c, --from-clipboard Use the image from the clipboard -d SECONDS, --delay SECONDS Delay the screenshot by the given amount of seconds Image processing: config option) config option) server URL output settings: -z, --zenity Show the URL using zenity -p, --to-primary Save the URL in the primary X11 selection. -s, --to-secondary Save the URL in the secondary X11 selection. -b, --to-clipboard Save the URL in the clipboard X11 selection. Examples: • Capture a fullscreen screenshot and copy the resulting URL to the primary X11 selection: dont-puush-me -fp • Capture a rectangle from the screen (the rectangle is picked using the mouse; clicking without dragging selects the rectangle of the clicked window): dont-puush-me -r • Capture a rectangle, do not apply optipng and display the URL in a window (requires zenity): dont-puush-me --no-optipng -rz
# CONSTRAINED FIT INFORMATIONshow precise values? An overall fit to 34 branching ratios uses 87 measurements and one constraint to determine 22 parameters. The overall fit has a $\chi {}^{2}$ = 65.5 for 66 degrees of freedom. The following off-diagonal array elements are the correlation coefficients <$\mathit \delta$x$_{i}$~$\delta$x$_{j}$> $/$ ($\mathit \delta$x$_{i}\cdot{}\delta$x$_{j}$), in percent, from the fit to parameters ${{\mathit p}_{{i}}}$, including the branching fractions, $\mathit x_{i}$ =$\Gamma _{i}$ $/$ $\Gamma _{total}$. The fit constrains the ${{\mathit x}_{{i}}}$ whose labels appear in this array to sum to one. x6 100 x7 100 x34 100 x46 100 x72 100 x123 100 x191 100 x193 100 x244 100 x249 100 x255 100 x261 100 x266 100 x300 100 x334 100 x341 100 x355 100 x399 100 x430 100 x528 100 x533 100 x6 x7 x34 x46 x72 x123 x191 x193 x244 x249 x255 x261 x266 x300 x334 x341 x355 x399 x430 x528 x533 Mode Fraction (Γi / Γ) Scale factor Γ6 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit D}^{*}{(2010)}^{-}}{{\mathit \ell}^{+}}{{\mathit \nu}_{{{{\mathit \ell}}}}}$ $0.0507$ $\pm0.0021$ 1.6 Γ7 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit D}^{*}{(2010)}^{-}}{{\mathit \tau}^{+}}{{\mathit \nu}_{{\tau}}}$ $0.0157$ $\pm0.0010$ 1.1 Γ34 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit D}^{-}}{{\mathit \pi}^{+}}$ $0.00252$ $\pm0.00013$ 1.1 Γ46 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit D}^{-}}{{\mathit \pi}^{+}}{{\mathit \pi}^{+}}{{\mathit \pi}^{-}}$ $0.0060$ $\pm0.0007$ 1.1 Γ72 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit D}_{{1}}{(2420)}^{-}}{{\mathit \pi}^{+}}$ , ${{\mathit D}_{{1}}^{-}}$ $\rightarrow$ ${{\mathit D}^{-}}{{\mathit \pi}^{+}}{{\mathit \pi}^{-}}$ ($9.9$ ${}^{+2.0}_{-2.5}$) $\times 10^{-5}$ Γ123 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit D}_{{s}}^{-}}{{\mathit K}^{+}}$ ($2.7$ $\pm0.5$) $\times 10^{-5}$ 2.7 Γ191 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit J / \psi}{(1S)}}{{\mathit K}^{0}}$ ($8.73$ $\pm0.32$) $\times 10^{-4}$ Γ193 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit J / \psi}{(1S)}}{{\mathit K}^{*}{(892)}^{0}}$ $0.00127$ $\pm0.00005$ Γ244 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit \psi}{(2S)}}{{\mathit K}^{0}}$ ($5.8$ $\pm0.5$) $\times 10^{-4}$ Γ249 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit \psi}{(2S)}}{{\mathit K}^{*}{(892)}^{0}}$ ($5.9$ $\pm0.4$) $\times 10^{-4}$ 1.0 Γ255 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit \chi}_{{c1}}}{{\mathit K}^{*}{(892)}^{0}}$ ($2.38$ $\pm0.19$) $\times 10^{-4}$ 1.2 Γ261 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit \chi}_{{c2}}}{{\mathit K}^{*}{(892)}^{0}}$ ($4.9$ $\pm1.2$) $\times 10^{-5}$ 1.1 Γ266 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit K}^{+}}{{\mathit \pi}^{-}}$ ($1.96$ $\pm0.05$) $\times 10^{-5}$ Γ300 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit K}^{0}}{{\mathit \pi}^{+}}{{\mathit \pi}^{-}}$ ($4.94$ $\pm0.18$) $\times 10^{-5}$ Γ334 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit K}^{0}}{{\mathit K}^{-}}{{\mathit \pi}^{+}}$ ($6.2$ $\pm0.7$) $\times 10^{-6}$ Γ341 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit K}^{0}}{{\mathit K}^{+}}{{\mathit K}^{-}}$ ($2.67$ $\pm0.11$) $\times 10^{-5}$ Γ355 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit K}^{*}{(892)}^{0}}{{\mathit \phi}}$ ($1.00$ $\pm0.05$) $\times 10^{-5}$ Γ399 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit \pi}^{+}}{{\mathit \pi}^{-}}$ ($5.12$ $\pm0.19$) $\times 10^{-6}$ Γ430 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit \rho}^{0}}{{\mathit \rho}^{0}}$ ($9.6$ $\pm1.5$) $\times 10^{-7}$ Γ528 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit K}^{0}}{{\mathit \mu}^{+}}{{\mathit \mu}^{-}}$ ($3.39$ 
$\pm0.34$) $\times 10^{-7}$ Γ533 ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit K}^{*}{(892)}^{0}}{{\mathit \mu}^{+}}{{\mathit \mu}^{-}}$ ($9.4$ $\pm0.5$) $\times 10^{-7}$
Tags: #photography

Question: Let s be the distance at which the camera is focused (the "subject distance") and let H be the hyperfocal distance. When s is large in comparison with the lens focal length, the distance D_N from the camera to the near limit of DOF and the distance D_F from the camera to the far limit of DOF are (approximately):

$$\large D_N\approx\frac{Hs}{H+s}$$ and $$\large D_F\approx\frac{Hs}{H-s}\quad\text{for } s<H.$$

The depth of field D_F - D_N is

$$\large DOF\approx\frac{2Hs^2}{H^2-s^2}.$$

Source: "Depth of field", Wikipedia (hyperfocal distance and moderate-to-large distances sections); the excerpt notes that, for a given image format, depth of field is therefore determined by three factors: the focal length of the lens, the f-number, and the subject distance.
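A small numerical illustration of these approximations (my addition, not part of the flashcard; the values of H and s are made up, and any consistent unit works):

```python
def dof_limits(H, s):
    """Approximate near/far limits and total depth of field,
    given hyperfocal distance H and subject distance s (same units)."""
    D_N = H * s / (H + s)
    D_F = H * s / (H - s) if s < H else float("inf")   # far limit is unbounded once s >= H
    return D_N, D_F, D_F - D_N

# e.g. H = 10 m, focused at s = 3 m
print(dof_limits(10.0, 3.0))   # approx. (2.31, 4.29, 1.98); matches 2*H*s**2 / (H**2 - s**2)
```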
# Argument Analysis The current OpMAP installation at ZKM (Karlsruhe, Germany) is based on an argumentative analysis of the Veggie Debate. By completing the survey, you actually judge different arguments that have been advanced in the debate. The argumentative analysis is later used to compute the degree to which your opinion coheres with the opinions of other users. This document introduces the argumentative analysis of the Veggie Debate and the technical tools we’ve employed to carry it out. ## The Main Theses of the Veggie Debate In our interpretation, the Veggie Debate is essentially a debate about the following core claims: [Meat-OK]: There exist meat and animal products which one is allowed to eat. [Eat-what-you-want]: One may eat meat and other animal products of any kind. [No-mass-farming]: One must not eat meat produced in modern mass farming facilities. [Strict-veggie]: One must not eat meat at all. [Strict-vegan]: One must not eat animal products at all. [Less-meat]: One should reduce the consumption of meat. [Less-animal]: One should reduce the consumption of animal products. These claims are related in different ways. For example, [Eat-what-you-want] contradicts [No-mass-farming], that is, you cannot maintain both claims in a self-consistent way. Or, [Strict-vegan] implies [Strict-veggie], that is, if you accept the former, you inevitably accept the latter claim, too. The different logical relations between the core claims are encoded in our argumentative analysis. ## The Veggie Debate Argument Map The argumentative analysis first and foremost clarifies the various pros and cons of the Veggie Debate. These pro and con reasons are reconstructed as arguments for or against the different core claims of the debate. We distinguish the following kinds of arguments: • Culinary Considerations • Health Considerations • Financial Considerations • Naturalness and Normality Considerations • Climate Change Considerations • Arguments from Nature Conservation • Animal Rights Arguments • World Nutrition Considerations • Arguments from Personal Autonomy These categories are used to group the different arguments. The following argument map provides an overview of all the arguments of the debate. The red and green arrows indicate relations of support and attack between the various arguments and theses. Further information on how to read an argument map can be found here. ## Having a Closer Look at an Argument Every argument of the debate is reconstructed as a series of premisses that justify a conclusion. Premisses = the statements which an argument takes for granted and which are used to justify the conclusion. Conclusion = the statement an argument is supposed to back up, or justify. So, for example, the argument for [No-mass-farming] which stresses the suffering of animals in mass-farming facilities is analysed as: <Animal suffering>: Animal rights are flagrantly violated in modern mass-farming facilities. (1) Animal rights are flagrantly violated in modern mass-farming facilities. (2) By eating food that has been produced in conventional ways, esp. through mass-farming techniques, one supports the modern mass-farming industry. (3) One must not support an industry which is responsible for systematic violations of animal rights. ---- (4) [No mass farming]: One must not eat meat produced in modern mass farming facilities. Here, (1)-(3) serve as premisses of the argument <Animal suffering> with conclusion (4). The different considerations of the debate are reconstructed as valid arguments. 
Valid arguments are “complete” in the following sense: If one accepts all premisses, one has to accept the conclusion. We say: the arguments define inferential relations between the various statements which figure in the debate. (Who “forces” you to accept the conclusion if you accept the premisses? That’s our language. For example, you cannot consistently say both that ‘Ann is ill and Bob is ill’ and that ‘Bob is not ill’ – unless you use the words “and” or “not” in a very different way than we usually do.) ## Argdown – Our Argument Mapping Technology We’re using the newly developed Argdown Technology to carry out the argumentative analysis. Argdown is basically a syntax, i.e., a set of conventions for structuring and organizing a text document. It allows you to code arguments in a standardized way. The core claims and the argument <Animal suffering> from above are all formatted in accordance with Argdown conventions. Argdown-documents can be read by different programs, which automatically generate argument maps or carry out advanced computations on the argumentative structure.
The flag manifold over the semifield $\bf Z$, by G. Lusztig. Vol. 15 No. 1 (2020), pp. 63–92. DOI: https://doi.org/10.21915/BIMAS.2020105

ABSTRACT: Let $G$ be a semisimple group over the complex numbers. We show that the flag manifold $\mathcal{B}$ of $G$ has a version ${\mathcal B}({\bf Z})$ over the tropical semifield ${\bf Z}$ on which the monoid $G({\bf Z})$ attached to $G$ and ${\bf Z}$ acts naturally.
Inform. Process. Lett., 2012, Volume 112, Issue 7, Pages 267–271 (Mi ipl1)

Exponential lower bound for bounded depth circuits with few threshold gates
V. V. Podolskii, Steklov Mathematical Institute, Gubkina str. 8, 119991, Moscow, Russia

Abstract: We prove an exponential lower bound on the size of bounded depth circuits with $O(\log n)$ threshold gates computing an explicit function (namely, the parity function). Previously exponential lower bounds were known only for circuits with one threshold gate. Superpolynomial lower bounds are known for circuits with $O(\log n)$ threshold gates.

DOI: https://doi.org/10.1016/j.ipl.2011.12.011
Revised: 07.12.2011; Accepted: 09.12.2011
My Math Forum: Can you find a 4-ball contained in a 3-ball, but not equal? (Real Analysis)

#1 (June 1st, 2012): In a metric space, can you find an open ball $A$ with radius 4, contained in an open ball $B$ with radius 3, but with $A \neq B$? Why?

#2 (June 2nd, 2012): Your question is confusing. How can you have a ball of radius 4 inside a ball of radius 3?

#3 (June 6th, 2012): If A is an open ball with radius 4, then, given any $\epsilon> 0$, there exist points p and q in A such that $d(p,q)> 4- \epsilon$. If $\epsilon< 1$, there is no ball of radius 3 containing both p and q.

#4 (June 7th, 2012): Let S={0,2,4} and let us use the usual distance. Then S is a metric space and B(2,3)=S, B(0,4)={0,2}, where B(a,b) denotes the open ball with center a and radius b.
How do you divide $\frac{x^2-25}{x+3} \div (x-5)$?

Dec 25, 2014

We can use the rule about division of rational expressions, where you can change the division into a multiplication by flipping the second fraction (in your case, the second "fraction" can be written as $\frac{x - 5}{1}$). In our case you have: $\frac{{x}^{2} - 25}{x + 3} \div \frac{x - 5}{1} = \frac{{x}^{2} - 25}{x + 3} \times \frac{1}{x - 5}$ We can now factor the numerator of the first fraction as: ${x}^{2} - 25 = \left(x + 5\right) \cdot \left(x - 5\right)$ Substituting and simplifying: $\frac{\left(x + 5\right) \cdot \left(x - 5\right)}{x + 3} \times \frac{1}{x - 5} = \frac{x + 5}{x + 3}$
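If you want to verify the cancellation symbolically, a quick SymPy check (an optional aside, not part of the original answer) gives the same result:

```python
from sympy import symbols, cancel

x = symbols('x')
expr = (x**2 - 25) / (x + 3) / (x - 5)   # ((x^2 - 25)/(x + 3)) divided by (x - 5)
print(cancel(expr))                      # -> (x + 5)/(x + 3)
```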
# zbMATH — the first resource for mathematics Projection and covering in a set with orthogonality. (English) Zbl 0629.06008 Let $$\Omega$$ be a set with orthogonality $$\perp$$ and $$S=\{A^{\perp}|$$ $$A\subseteq \Omega \}$$. Assume there exists $$0\in \Omega$$ with $$0\perp x$$ for all $$x\in \Omega$$ and $$\{x\}^{\perp \perp}$$ is an atom in the lattice S for all $$x\in \Omega -\{0\}$$. The author proves the equivalence of the following statements where A, $$A_ 1$$, $$A_ 2$$ are elements of S and $$x\in \Omega:$$ (i) if $$x\not\in A$$ then $$A\vee \{x\}^{\perp \perp}$$ covers A, (ii) if $$x\not\in A\cup A^{\perp}$$ then $$x\in A_ 1\vee A_ 2$$ for some atoms $$A_ 1\subset A$$, $$A_ 2\subset A^{\perp}$$, (iii) if $$A_ 1\perp A_ 2$$, $$A_ 1\neq \{0\}$$, $$A_ 2\neq \{0\}$$, $$x\in A_ 1\vee A_ 2$$, $$x\not\in A_ 1$$, $$x\not\in A_ 2$$ then there exist $$x_ i\in A_ i$$ with $$x\in \{x_ 1\}^{\perp \perp}\vee \{x_ 2\}^{\perp \perp}$$, (iv) if $$x\not\in A\cup A^{\perp}$$ then $$A\cap (A^{\perp}\vee \{x\}^{\perp \perp})$$, $$A^{\perp}\cap (A\vee \{x\}^{\perp \perp})$$ are atoms in S. Reviewer: G.Kalmbach ##### MSC: 06C15 Complemented lattices, orthocomplemented lattices and posets 81P10 Logical foundations of quantum mechanics; quantum logic (quantum-theoretic aspects) ##### Keywords: projection; covering; orthogonality; atom; covers Full Text:
# Finitely valid sentences and other related things I almost finish the section 2.6 in Enderton's A mathematical introduction to logic, but I still do not understand some thing. (The first three question are closely related, so I hope that it does not a problem that I ask several questions in one topic.) ## (1) Finitely valid At the beginning of the subsection Finite Models, he says Some sentences have only infinite models, for example, the sentence saying that $$<$$ is an ordering with no largest element. The negation of such a sentence is finitely valid, that is, it is true in every finite structure. • a sentence $$\sigma$$ is finitely valid iff $$\sigma$$ is true in every finite structure If a sentence $$\sigma$$ has only infinite models (but not necessarily $$\sigma$$ is true in every infinite model, i.e., there can be some infinite structure such that the structure is not model of $$\sigma$$), then the negation of $$\sigma$$ is finitely valid. Am I right? Moreover, it is here some proof? (how can I be sure that there is no infinite models of $$\sigma$$) Conversely, let $$\sigma$$ be a finitely valid sentence. Clearly the negation of $$\sigma$$ cannot be true in any finite structure (but necessarily $$\sigma$$ is true in every infinite structure). Am I right? ## (2) The class of all infinite structure is not $$EC$$ (second part of Corrolary 26B) • a class of structures $$K$$ is in $$EC$$ iff there is some setence $$\sigma$$ such that $$\text{Mod }\sigma = K$$ Here is the proof given by Enderton: If the class of all infinite structure is $$\text{Mod }\tau$$, then the class of all finite structures is $$\text{Mod }\neg\tau$$. But this class isn't even $$EC_\Delta$$, much less $$EC$$. I accept that what he wrote, but I think that to finish the proof we must show that $$\text{Mod }\tau \in EC$$, but I do not see how it follows from that $$\text{Mod }\neg\tau$$ is not in $$EC$$. I think that something missing, but I am not able to finish the proof by myself. ## (3) Corollary 26E Assume the language is finite, and let $$\Phi$$ be the set of setences true in every finite structure. Then its complement, $$\overline \Phi$$, is effectively enumerable. Proof. For a sentence $$\sigma$$, $$\sigma \in \overline \Phi \Leftrightarrow (\neg\sigma) \text{ has a finite model.}$$ [...] In other words, $$\Phi$$ is a set of finitely valid sentences. Thus its complement, $$\overline \Phi$$, is the set of sentences $$\sigma$$ such that $$\sigma$$ is not true in some finite structure. It does not imply that $$\sigma$$ has only infinite models ($$\sigma$$ can be true in some finite structure, but not in all finite structures). So, we cannot use the facts from (1). How can I get the equivalence? 2. For any sentence $$\sigma$$ whatsoever $$\operatorname{Mod}(\sigma)\in EC$$... this is the definition of $$EC.$$ So in particular $$\operatorname{Mod}(\lnot \tau)\in EC.$$ It doesn't follow from this that $$\operatorname{Mod}(\lnot \tau)\notin EC$$... that follows from the first part of the theorem where they show the class of finite structures is not in EC. This produces a contradiction that proves that a $$\tau$$ with the assumed properties can't exist. 3. This is a corollary to a theorem immediately preceding that shows that the set of all sentences that have a finite model is effectively enumerable (as you have said, $$\bar\Phi$$ is just the set of all sentences whose negations have a finite model). 
This in turn follows from a couple paragraphs up where they show that for any $$n$$, it is decidable if a sentence has a model of size $$n.$$ (All of this assuming a finite language.)
# directories usr/texmf Sorry about the newbie question!! As a background: I am running Texshop and sometimes Texmaker on OSX 10.9.2 I have used a package called leipzig over the last 2 years. As instructed I have saved it into /Users/MYNAME/Library/texmf/tex/latex Everything worked well until I did an update of all the packages in TexUtility a week ago. Apparently the package comes now with the official distribution. although I cannot actually say whether or not it was installed on my system with this update for the first time or whether it was there before. anyway, I blamed update for the problems I got. So, I deleted the above mentioned folder believing the package would now be simply loaded from the place where all the other packages are stored. I believe this is here: /usr/local/texlive/texmf-local/tex/latex and indeed the leipzig package is there. but now I get an error telling me that leipzig.tex is not found. although it is there. so, I copied the folder back to its original place (from the trash bin). but now it tells me that all the \newcommand have been assigned already. This seems like it is loaded twice or from different places? Can anybody help me with this? and maybe explain to me why the package won't load from the place where TexUtility updates/saves the files to? My understanding is that the directory /Users/MYNAME/Library/texmf/tex/latex is used for non-official packages (or the ones which I am tweaking on some way). is this the case? the other directory /usr/local/texlive/texmf-local/tex/latex is used for official packages. shouldn't Texmaker or Texshop find both? How are these kept apart? Any suggested readings on this topic? • On my system there is a LaTeX package called leipzig (\usepackage{leipzig}), but no .tex file: did you try just using the LaTeX package? – Joseph Wright May 12 '14 at 14:33 • Welcome to TeX.SX! You can have a look at our starter guide to familiarize yourself further with our format. Please help us to help you and add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. – user31729 May 12 '14 at 14:33 • @JosephWright: On CTAN there is something like a sample file leipzig.tex. I suppose the OP uses that one... – user31729 May 12 '14 at 14:39 • @user51502: Is there a file called leipzig.sty too? – user31729 May 12 '14 at 14:44 The tree rooted in ~/Library/texmf is for material that doesn't belong to the official TeX Live distribution and should only be available to the owner of the directory. The tree rooted in /usr/local/texlive/texmf-local, instead is for material that doesn't belong to the official TeX Live distribution, but should be available to all users of the same computer. For most personal computers, owner and all users coincide, so the distinction is just practical: adding material in /usr/local/texlive/texmf-local requires running sudo mktexlsr in order to update the database of file names. This is not required for material in ~/Library/texmf (which is ~/texmf on Unix systems different from Mac OS X; ~ refers to the current user's home). Font packages, however, should always be installed under /usr/local/texlive/texmf-local, so that sudo updmap-sys can be used. Running just updmap will create several problems whenever a font package in the main distribution is updated or added. 
If you have leipzig under /usr/local/texlive/texmf-local, you should remove it, now that it is in the official distribution. If you run tlmgr info leipzig --list from a Terminal window, you'll receive this output: package: leipzig category: Package shortdesc: Typeset and index linguistic gloss abbreviations. longdesc: The leipzig package provides a set of macros for standard glossing abbreviations, with options to create new ones. They are mnemonic (e.g. \Acc{} for accusative, abbreviated acc). These abbre can be used alone or on top of the glossaries package for easy indexing and glossary printing. installed: Yes revision: 31045 sizes: src: 49k, doc: 477k, run: 9k relocatable: Yes cat-version: 1.1 cat-date: 2014-02-26 23:03:13 +0100 collection: collection-humanities Included files, by type: run files: texmf-dist/tex/latex/leipzig/leipzig.sty source files: texmf-dist/source/latex/leipzig/leipzig.dtx texmf-dist/source/latex/leipzig/leipzig.ins doc files: texmf-dist/doc/latex/leipzig/leipzig.pdf details="Package documentation" texmf-dist/doc/latex/leipzig/leipzig.tex Now you know that you have to remove the corresponding directories: sudo rm -fr /usr/local/texlive/texmf-local/tex/latex/leipzig sudo rm -fr /usr/local/texlive/texmf-local/source/latex/leipzig sudo rm -fr /usr/local/texlive/texmf-local/doc/latex/leipzig/ sudo mktexlsr Check also in your ~/Library/texmf directory for similarly named directories. In order to see where the TeX programs are looking for leipzig.sty, do kpsewhich leipzig.sty which, if the above removal procedure has been successful, should output /usr/local/texlive/2013/texmf-dist/tex/latex/leipzig/leipzig.sty I stumbled upon the same problem as TO and wanted to share how I managed to solve the problem. I installed leipzig via texlive and it didn't work as well. For some reason, leipzig.tex was gunzipped at /usr/share/texlive/texmf-dist/doc/latex/leipzig, in other words, it appeared as leipzig.tex.gz. Therefore, I had to unzip it using the following code: sudo gunzip /usr/share/texlive/texmf-dist/doc/latex/leipzig/leipzig.tex.gz Next, I copied it to where I thought it belonged, i.e., to the same directory where leipzig.sty is located at. Of course, this had to be followed up by letting TeX now that there would be new files to be found: sudo cp /usr/share/texlive/texmf-dist/doc/latex/leipzig/leipzig.tex /usr/share/texlive/texmf-dist/tex/latex/leipzig/ sudo mktexlsr After these steps, it worked! Try installing it again using TeX Live. I was told to avoid installing packages myself. You can use the TeX Live Utility (if you have TeX Shop, you probably have it). I see the Leipzig package in the current distribution (http://mirror.math.ku.edu/tex-archive/systems/texlive/tlnet/). You can use TeX Live to install it. As you are using Mac OS, on the finder type TeX Live with the space in it and choose filename. Once on TeX Live, click on Packages and the type leipzig on the search box then finally double click it to install the package again. • This will not help, though. Because the locally installed one will still be found first. – cfr Jun 12 '14 at 0:38
# Basic Computation# In this lesson, we discuss how to do scientific computations with xarray objects. Our learning goals are as follows. By the end of the lesson, we will be able to: • Apply basic arithmetic and numpy functions to xarray DataArrays / Dataset. • Use Xarray’s label-aware reduction operations (e.g. mean, sum) weighted reductions. • Apply arbitrary functions to Xarray data via apply_ufunc. • Use Xarray’s broadcasting to compute on arrays of different dimensionality. import numpy as np import xarray as xr import matplotlib.pyplot as plt # Ask Xarray to not show data values by default xr.set_options(display_expand_data=False) %config InlineBackend.figure_format='retina' ## Example Dataset# First we load a dataset. We will use the NOAA Extended Reconstructed Sea Surface Temperature (ERSST) v5 product, a widely used and trusted gridded compilation of of historical data going back to 1854. ds = xr.tutorial.load_dataset("ersstv5") ds <xarray.Dataset> Dimensions: (lat: 89, lon: 180, time: 624, nbnds: 2) Coordinates: * lat (lat) float32 88.0 86.0 84.0 82.0 ... -82.0 -84.0 -86.0 -88.0 * lon (lon) float32 0.0 2.0 4.0 6.0 8.0 ... 352.0 354.0 356.0 358.0 * time (time) datetime64[ns] 1970-01-01 1970-02-01 ... 2021-12-01 Dimensions without coordinates: nbnds Data variables: time_bnds (time, nbnds) float64 9.969e+36 9.969e+36 ... 9.969e+36 9.969e+36 sst (time, lat, lon) float32 -1.8 -1.8 -1.8 -1.8 ... nan nan nan nan Attributes: (12/37) climatology: Climatology is based on 1971-2000 SST, Xue, Y.... description: In situ data: ICOADS2.5 before 2007 and NCEP i... keywords_vocabulary: NASA Global Change Master Directory (GCMD) Sci... keywords: Earth Science > Oceans > Ocean Temperature > S... instrument: Conventional thermometers source_comment: SSTs were observed by conventional thermometer... ... ... creator_url_original: https://www.ncei.noaa.gov license: No constraints on data access or use comment: SSTs were observed by conventional thermometer... summary: ERSST.v5 is developed based on v4 after revisi... dataset_title: NOAA Extended Reconstructed SST V5 data_modified: 2022-06-07 Let’s do some basic visualizations of the data, just to make sure it looks reasonable. ds.sst.isel(time=0).plot(vmin=-2, vmax=30); ## Arithmetic# Xarray dataarrays and datasets work seamlessly with arithmetic operators and numpy array functions. For example, imagine we want to convert the temperature (given in Celsius) to Kelvin: sst_kelvin = ds.sst + 273.15 sst_kelvin <xarray.DataArray 'sst' (time: 624, lat: 89, lon: 180)> 271.4 271.4 271.4 271.4 271.4 271.4 271.4 271.4 ... nan nan nan nan nan nan nan Coordinates: * lat (lat) float32 88.0 86.0 84.0 82.0 80.0 ... -82.0 -84.0 -86.0 -88.0 * lon (lon) float32 0.0 2.0 4.0 6.0 8.0 ... 350.0 352.0 354.0 356.0 358.0 * time (time) datetime64[ns] 1970-01-01 1970-02-01 ... 2021-12-01 The dimensions and coordinates were preserved following the operation. Warning: Although many xarray datasets have a units attribute, which is used in plotting, Xarray does not inherently understand units. However, xarray can integrate with pint, which provides full unit-aware operations. See pint-xarray for more. ## Applying functions# We can apply more complex functions to Xarray objects. Imagine we wanted to compute the following expression as a function of SST ($$\Theta$$) in Kelvin: $f(\Theta) = 0.5 \ln(\Theta^2)$ f = 0.5 * np.log(sst_kelvin**2) f <xarray.DataArray 'sst' (time: 624, lat: 89, lon: 180)> 5.603 5.603 5.603 5.603 5.603 5.603 5.603 5.603 ... 
nan nan nan nan nan nan nan Coordinates: * lat (lat) float32 88.0 86.0 84.0 82.0 80.0 ... -82.0 -84.0 -86.0 -88.0 * lon (lon) float32 0.0 2.0 4.0 6.0 8.0 ... 350.0 352.0 354.0 356.0 358.0 * time (time) datetime64[ns] 1970-01-01 1970-02-01 ... 2021-12-01 ## Applying Arbitrary Functions# It’s awesome that we can call np.log(ds) and have it “just work”. However, not all third party libraries work this way. numpy’s nan_to_num for example will return a numpy array np.nan_to_num(ds.sst, 0) array([[[-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], [-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], [-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], ..., [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ]], [[-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], [-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], [-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], ..., [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ]], [[-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], [-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], [-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], ..., [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ]], ..., [[-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], [-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], [-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], ..., [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ]], [[-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], [-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], [-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], ..., [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ]], [[-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], [-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], [-1.8, -1.8, -1.8, ..., -1.8, -1.8, -1.8], ..., [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ], [ 0. , 0. , 0. , ..., 0. , 0. , 0. ]]], dtype=float32) It would be nice to keep our dimensions and coordinates. We can accomplish this with xr.apply_ufunc xr.apply_ufunc(np.nan_to_num, ds.sst, 0) <xarray.DataArray 'sst' (time: 624, lat: 89, lon: 180)> -1.8 -1.8 -1.8 -1.8 -1.8 -1.8 -1.8 -1.8 -1.8 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Coordinates: * lat (lat) float32 88.0 86.0 84.0 82.0 80.0 ... -82.0 -84.0 -86.0 -88.0 * lon (lon) float32 0.0 2.0 4.0 6.0 8.0 ... 350.0 352.0 354.0 356.0 358.0 * time (time) datetime64[ns] 1970-01-01 1970-02-01 ... 2021-12-01 Note: apply_ufunc is a powerful function. It has many options for doing more complicated things. Unfortunately, we don't have time to go into more depth here. Please consult the documentation for more details. ## Reductions# Reductions are functions that reduce the dimensionlity of our dataset. For example taking the mean sea surface temperature along time of our 3D data, we “reduce” the time dimension and are left with a 2D array. Just like in numpy, we can reduce xarray DataArrays along any number of axes. sst = ds.sst sst.mean(axis=0) <xarray.DataArray 'sst' (lat: 89, lon: 180)> -1.799 -1.799 -1.799 -1.799 -1.799 -1.8 -1.8 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Coordinates: * lat (lat) float32 88.0 86.0 84.0 82.0 80.0 ... -82.0 -84.0 -86.0 -88.0 * lon (lon) float32 0.0 2.0 4.0 6.0 8.0 ... 350.0 352.0 354.0 356.0 358.0 However, rather than performing reductions by specifying axis (as in numpy), we can instead perform them using dimension names. 
This turns out to be a huge convenience, particularly in complex calculations it can be hard to remember which axis corresponds to which dimension name: sst.mean(dim="time") <xarray.DataArray 'sst' (lat: 89, lon: 180)> -1.799 -1.799 -1.799 -1.799 -1.799 -1.8 -1.8 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Coordinates: * lat (lat) float32 88.0 86.0 84.0 82.0 80.0 ... -82.0 -84.0 -86.0 -88.0 * lon (lon) float32 0.0 2.0 4.0 6.0 8.0 ... 350.0 352.0 354.0 356.0 358.0 You can reduce over multiple dimensions sst.mean(["lat", "time"]) <xarray.DataArray 'sst' (lon: 180)> 7.692 7.824 8.239 8.131 7.586 7.546 ... 9.776 9.419 8.303 7.818 7.842 7.641 Coordinates: * lon (lon) float32 0.0 2.0 4.0 6.0 8.0 ... 350.0 352.0 354.0 356.0 358.0 If no dimension is specified, the reduction is applied across all dimensions. sst.mean() <xarray.DataArray 'sst' ()> 9.473 All of the standard numpy reductions (e.g. min, max, sum, std, etc.) are available on both Datasets and DataArrays. ### Exercise# Take the mean of sst in both longitude and latitude. Make a simple timeseries plot: sst.mean(["lat", "lon"]).plot();
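The learning goals above also mention weighted reductions, which this excerpt does not reach; as a brief sketch (not part of the original lesson, and it reuses the ds and numpy import defined earlier in the notebook), an area-weighted mean can be written with Xarray's .weighted() interface:

```python
# Latitude-dependent weights: grid cells cover less area toward the poles
weights = np.cos(np.deg2rad(ds.lat))
weights.name = "weights"

# Area-weighted global-mean SST time series
ds.sst.weighted(weights).mean(dim=["lat", "lon"]).plot();
```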
# Brokers - Multi-Protocol Brokers

This example shows how to configure a HELICS co-simulation to implement a broker structure that utilizes multiple core types in a single co-simulation. Typically, all federates in a single federation use the same core type (ZMQ by default), but HELICS can be set up to utilize different core types in the same federation.

## Where is the code?

This example on multibrokers can be found here. If you have issues navigating to the examples, visit the HELICS Gitter page or the user forum on GitHub.

## What is this co-simulation doing?

This example shows you how to configure a co-simulation to use more than one core type in the same federation. The example itself has the same functionality as the Advanced Default example, as the only change is a structural change to the federation and not to the federate code itself.

### Differences compared to the Advanced Default example

For this example, the Advanced Default example has been split up so that each federate uses a different core type in a single federation.

#### HELICS differences

Typically, all federates in a federation use the same core type. There can be cases, though, where a multi-site co-simulation with a more complex networking environment or performance requirements dictates the need for some federates to utilize a different core type than others. For example, the IPC core utilizes a Boost library function to allow two executables both using Boost to communicate between themselves when running on the same compute node; since this is in-memory communication rather than over the network stack, performance is expected to be higher. It could be that a particular federation has been optimized to take advantage of this but must also communicate with federates on a separate compute node via ZMQ. In this case, a so-called "multibroker" can be configured to allow the federation to run. (See the User Guide section on the multi-protocol broker and broker core types for further details.) In this example, we won't be doing anything like that but, for demonstration purposes, simply using the same federation from the Advanced Default example and configuring it so each federate uses a different core type.

### HELICS Components

To configure a multibroker, the broker configuration line is slightly extended from a traditional federation. From the helics_cli runner configuration file multi_broker_runner.json:

...
"exec": "helics_broker -f 3 --coreType=multi --config=multi_broker_config.json --name=root_broker",
...

The coreType of the broker is set to multi and a configuration file is specified. That file looks like this:

{
  "master": {
    "coreType": "test"
  },
  "comms": [
    {
      "coreType": "zmq",
      "port": 23500
    },
    {
      "coreType": "tcp",
      "port": 23700
    },
    {
      "coreType": "udp",
      "port": 23900
    }
  ]
}

The first and most important note: master and comms are reserved words in this context and MUST be used. The master core type must be test, but the core types for the federates can be any of the supported cores. Again, as in other similar examples, because we are running this on a single compute node, the port for each core type must be specified and the federates using those core types need to have the brokerPort property set to the corresponding core's port number.

BatteryConfig.json
...
"name": "Battery",
"loglevel": 1,
"coreType": "zmq",
"brokerPort": 23500,
...

ChargerConfig.json
...
"name": "Charger",
"loglevel": 1,
"coreType": "tcp",
"brokerPort": 23700,
...

ControllerConfig.json
...
"name": "Controller",
"loglevel": 1,
"coreType": "udp",
"brokerPort": 23900,
...

## Execution and Results

Unlike the other advanced broker examples, this one can be run with a single helics_cli command:

$ helics run --path=./multi_broker_runner.json

As has been mentioned, since this is just a change to the co-simulation architecture, the results are identical to those in the Advanced Default example.
# Use primitive root to prove if $a^{\phi(m)/2}\equiv 1\pmod m$ then $a$ is a quadratic residue modulo $m$. This is trivial in arguments of quadratic residues, but I couldn't solve it using primitive root. The problem seeks to use primitive root to be proved. Problem: Let $$m>2$$ be an integer having a primitive root, and let $$(a,m)=1$$. Prove that $$a^{\phi(m)/2}\equiv 1\pmod m$$ implies $$a$$ is a quadratic residue modulo $$m$$. My approach is, I know there are $$\phi(\phi(m))$$ primitive roots in the reduced residue set modulo $$m$$: $$S=\{a_1,a_2,\cdots,a_{\phi(m)}\}$$. Then I square the set, to get $$T=\{b_1,b_2,\cdots,b_{\phi(m)/2}\}$$ where for each $$b_i$$ there is $$a\in S$$ such that $$a^2\equiv b_i\pmod m$$. But I cannot keep writing, I don't know how to continue. Any suggestion? • Let $g$ be a primitive root. Express $a$ as a power of $g$. Consider the consequences of $a^{\phi(m)/2}\equiv1\bmod m$, and why it forces $a$ to be an even power of $g$. – Gerry Myerson Mar 14 at 2:20 • Additively it boils down to $\bmod 2n\!:\ nk\equiv 0\,\iff 2\mid k,\,$ by $\ 2n\mid nk\iff 2\mid k.\,$ This arithmetic occurs in the exponents of the generator $g$ when you follow Gerry's hint, where $\,n = \phi(m)/2,\,$ and $\,a = g^k\ \$ – Bill Dubuque Mar 14 at 3:25 • Thank you all! I understand it. – kelvin hong 方 Mar 14 at 3:49 • Good! Let me encourage you to post an answer, kelvin. – Gerry Myerson Mar 15 at 1:41 Let $$g$$ be an primitive root mod $$m$$, then the set $$S=\{g,g^2,\cdots, g^{\phi(m)}\}$$ forms a reduced residue set mod $$m$$. Since $$(a,m)=1$$, we can express $$a$$ as a power of $$g$$, let $$a\equiv g^k\pmod m$$. So by assumption we have $$g^{k\phi(m)/2}\equiv 1\pmod m.$$ But $$g$$ is primitive, we see $$\phi(m)|k\phi(m)/2$$ which is $$2|k$$, this shows that $$a$$ is an even power of $$g$$, which is equivalent to say that $$a$$ is a quadratic residue modulo $$m$$.
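A quick brute-force check of the equivalence for a small modulus (my own illustrative addition; m = 13 is chosen arbitrarily among moduli that have a primitive root):

```python
from math import gcd

def is_quadratic_residue(a, m):
    # brute force: is a congruent to x**2 (mod m) for some x coprime to m?
    return any(pow(x, 2, m) == a % m for x in range(1, m) if gcd(x, m) == 1)

m, phi = 13, 12                      # 13 has a primitive root (e.g. 2); phi(13) = 12
for a in range(1, m):                # every a in this range is coprime to 13
    assert (pow(a, phi // 2, m) == 1) == is_quadratic_residue(a, m)
print("a^(phi/2) = 1 (mod m)  <=>  a is a QR mod m, checked for m =", m)
```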
# Can t test be used for comparing groups with a sample size of 3? I'm wondering if t test maybe used for a really small sample size. I have a set of data with only 3 entries for each group and I need to compare whether the two groups are significantly different. • With three cases per group I wouldn't be conducting any test, but rather conduct a qualitative analysis (i.e. describe the groups and the differences in your report, case by case). Check: stats.stackexchange.com/questions/121852/… – Tim Sep 21 '17 at 10:09 • Thanks. I read somewhere though that I can use a permutation test, but I am sort of looking for something that is a bit simpler and may be used by middle school students. Gathering more data though is not feasible. – cren Sep 21 '17 at 10:46 • You can still do it, but it's not very accurate. – SmallChess Sep 21 '17 at 13:12 • Conceptually, the permutation test is much simpler than the t-test. It might be the better pedagogical choice. You needn't feel obliged to test at a 5% level, either. If you could accept 10% as the standard of significance, then a permutation test can be a nice illustration. – whuber Sep 21 '17 at 13:36 • see the links at stats.stackexchange.com/questions/294682/… . In R try t.test(x=c(4.5,4.6),y=5.7, var.equal=TRUE) (sample sizes 2 and 1). It works. However, power may be quite low. – Glen_b Sep 21 '17 at 22:32 The permutation test will have insufficient power. (There just aren't enough different ways to split six samples into two groups of three.) But If the assumptions of the t-test hold, then its results are valid. Many thoughtful readers will question whether such a situation could actually arise. Let me share a real story. It concerns cleaning up lead contamination in a field: for years, a farmer accepted "recycled" batteries and dumped them behind his house. Eventually the environmental regulators caught up with him. They caused the "responsible party" to go through three phases of cleanup work: (1) sample the soils to estimate the amount and extent of lead contamination; (2) remove the soils in thin layers, taking samples in the process, until it was clear that clean soils had been reached; (3) independently sample all remaining soil and formally test whether the mean lead concentration is below the environmental standard. The procedure for (3) was designed and approved before the cleanup began. It called for random sampling of all soils exposed during the excavation, analysis of the samples by a certified laboratory, and applying a Student t test. Equivalently, to demonstrate success, a suitable upper confidence limit (UCL) of the mean had to be less than the standard. It did not specify how many samples to take: that would be up to the responsible party to decide. Almost a thousand samples were obtained and analyzed during the first two phases. Although these allowed the (univariate and spatial) distributions of the lead concentrations to be characterized reliably, they of course did not represent the remaining concentrations. However, they did suggest the shape of the (univariate) distribution of the remaining concentrations. Physical and chemical theory, soils science, and experience with remediating lead in soils elsewhere all provided support for this statistical characterization. The cleanup was so thorough and successful that the likely mean concentration was negligible--more than an order of magnitude less than the standard. 
Power analyses, based on pessimistic (high) estimates of the standard deviation, all suggested that a random sample of only two or three would be needed. There were many potential complications: for instance, any areas that might have been overlooked during the excavation could introduce large outlying values. To detect these, a large number of samples were obtained at random locations, and then composited in groups to produce just five physical samples for laboratory testing. All the values were low. As expected, a t-test of any two of those samples would still have demonstrated attainment. Sprinkled within this brief case study are examples of various ways in which we might be assured that a t-test is appropriate even with tiny samples: experience; theory; preliminary related sampling; making pessimistic assumptions; and sample compositing all played a role--and any one of them might have sufficed to justify the t-test. Incidentally, there are versions of the t-test that work with just a single observation. They are based on obtaining independent estimates of the variance or, lacking that, mathematical theory. Could this ever make sense in reality? The classic situation of compositing the blood from hundreds of soldiers to test for venereal disease provides one possible application.
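To make the power limitation concrete, here is a small sketch (not from the original thread) that runs Student's t-test and the exact two-sample permutation test on two groups of three; the data values are placeholders chosen only for illustration, and `scipy` is assumed to be available. With three observations per group there are only C(6,3) = 20 relabelings, so the smallest achievable two-sided permutation p-value is 2/20 = 0.1, which is why a 10% significance level was suggested in the comments above.

```python
from itertools import combinations
from statistics import mean
from scipy import stats

# hypothetical measurements, three per group (placeholder values)
a = [1.2, 1.5, 1.1]
b = [2.3, 2.1, 2.6]

# Student's t-test (equal variances), valid if its assumptions hold
t, p = stats.ttest_ind(a, b, equal_var=True)
print(f"t = {t:.3f}, two-sided p = {p:.4f}")

# exact permutation test on the absolute difference of means
pooled = a + b
observed = abs(mean(a) - mean(b))
count, total = 0, 0
for idx in combinations(range(6), 3):
    ga = [pooled[i] for i in idx]
    gb = [pooled[i] for i in range(6) if i not in idx]
    total += 1
    if abs(mean(ga) - mean(gb)) >= observed - 1e-12:
        count += 1
print(f"permutation two-sided p = {count}/{total} = {count/total:.3f}")
```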
## 8.8.3.2.1 Semantic definition

The IfcAdvancedBrepWithVoids is a specialization of an advanced B-rep which contains one or more voids in its interior. The voids are represented as closed shells which are defined so that the shell normals point into the void.

Informal Propositions:

1. Each void shell shall be disjoint from the outer shell and from every other void shell.
2. Each void shell shall be enclosed within the outer shell but not within any other void shell. In particular, the outer shell is not in the set of void shells.
3. Each shell in the IfcManifoldSolidBrep shall be referenced only once.
4. All the faces of all the shells in the IfcAdvancedBrep and in IfcAdvancedBrepWithVoids.Voids shall be of type IfcAdvancedFace.

## 8.8.3.2.5 Formal representation

ENTITY IfcAdvancedBrepWithVoids
 SUBTYPE OF (IfcAdvancedBrep);
  Voids : SET [1:?] OF IfcClosedShell;
 WHERE
  VoidsHaveAdvancedFaces : SIZEOF (QUERY (Vsh <* Voids |
    SIZEOF (QUERY (Afs <* Vsh.CfsFaces |
      (NOT ('IFC4.IFCADVANCEDFACE' IN TYPEOF(Afs)))
    )) = 0
  )) = 0;
END_ENTITY;
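As an illustration of informal proposition 4, the following sketch walks an IFC model and reports any face of an IfcAdvancedBrepWithVoids (outer shell or void shells) that is not an IfcAdvancedFace. It assumes the ifcopenshell package is installed and uses a hypothetical file name; it is a reading aid, not part of the specification.

```python
import ifcopenshell  # assumed to be available

model = ifcopenshell.open("model.ifc")  # hypothetical file name

for brep in model.by_type("IfcAdvancedBrepWithVoids"):
    # the outer shell comes from the IfcManifoldSolidBrep supertype,
    # the void shells from the Voids attribute of this entity
    shells = [brep.Outer] + list(brep.Voids)
    for shell in shells:
        for face in shell.CfsFaces:
            if not face.is_a("IfcAdvancedFace"):
                print(f"#{brep.id()}: face #{face.id()} is {face.is_a()}, "
                      "which violates informal proposition 4")
```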
# $a>0$, and let $x$ be a real number. Prove that if $\{r_n \}$ is a decreasing rational sequence with limit $x$, then $a^x=\cdots$ Let $a>0$, and let $x$ be a real number. Prove that if $\{r_{n} \}$ is any decreasing rational sequence with limit $x$, then $a^x = \lim_{n \rightarrow \infty} a^{r_n}$, where in the book $a^x$ is defined as $\lim_{n \rightarrow \infty} a^{s_n}$ for an increasing rational sequence $\{ s_n \}$ with limit $x$. - Note that $\{-r_n\}$ is an increasing sequence of rationals that converges to $-x$. Using the definition of your book, we know that: $$\lim_{n\rightarrow\infty}a^{-r_n}=a^{-x}>0$$ Since the limit is positive and each term $a^{-r_n}$ is positive, we may pass to reciprocals: $$\lim_{n\rightarrow\infty}a^{r_n}=\lim_{n\rightarrow\infty}\frac{1}{a^{-r_n}}=\frac{1}{a^{-x}}=a^{x}$$ - I see how you got $\lim_{n \rightarrow \infty} a^{-r_n} = a^{-x} > 0$ but I don't know why it implies the second part, thanks. –  Jmaff Dec 26 '12 at 22:58 Take the multiplicative inverse of both sides –  Amr Dec 26 '12 at 23:21 hmm, still unsure. If I take the inverse of the first line then we get $(\lim_{n \rightarrow \infty} a^{-r_{n}}) \cdot a^{x} =1$. Or should I look at the inverses of the actual sequence terms? –  Jmaff Dec 26 '12 at 23:34 The inverses of the actual terms of the sequence (you can take their inverse because they are non-zero) –  Amr Dec 26 '12 at 23:35 Okay, so since we know that the $\{ -r_n \}$ sequence has an invertible limit, and that each element of the sequence is invertible, we know that the sequence of inverses has as its limit the inverse of $\{ a^{-r_n} \}$'s limit? Is this a commonly proved statement? I think it can be proved using the standard limit rules. Thanks –  Jmaff Dec 26 '12 at 23:43 Since $2x-r_n$ is increasing and $$\lim_n (2x-r_n)=x$$ we have $$a^x=\lim_n a^{2x-r_n}=\lim_n \frac{a^{2x}}{a^{r_n}}= \frac{a^{2x}}{\lim_n a^{r_n}} \,.$$ Multiply both sides by $\frac{\lim_n a^{r_n} }{a^x}$ -
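A short numerical illustration of the reciprocal trick in the accepted answer (not part of the original thread); the base and the target exponent below are arbitrary choices.

```python
from fractions import Fraction

a = 2.0
x = 2 ** 0.5  # target irrational exponent

# a decreasing sequence of rationals r_n -> x: truncate x upward to n decimal places
for n in range(1, 8):
    r = Fraction(int(x * 10**n) + 1, 10**n)  # rational, slightly above x, decreasing in n
    via_reciprocal = 1.0 / a ** float(-r)    # a^{r_n} computed as 1 / a^{-r_n}
    print(n, float(r), via_reciprocal)

print("a^x =", a ** x)
```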
# The crude oil biodegradation activity of Candida strains isolated from oil-reservoirs soils in Saudi Arabia

## Abstract

Crude oil (petroleum) is a naturally occurring complex composed of hydrocarbon deposits and other organic materials. Bioremediation of crude oil-polluted sites is restricted by the biodiversity of the indigenous microflora, which must possess the complementary substrates required for degrading the different hydrocarbons. In the current study, four yeast strains were isolated from different oil reservoirs in Riyadh, Saudi Arabia. The oil-biodegradation ability of these isolates showed variable oxidation effects on multiple hydrocarbons. The scanning electron microscopy (SEM) images showed morphological changes in the Candida isolates compared to the original structures. The drop-collapse and oil emulsification assays showed that the yeast strains affected the physical properties of the tested hydrocarbons. The content of biosurfactants produced by the isolated strains was quantified in the presence of different hydrocarbons to confirm the oil displacement activity. The recovery assays included acid precipitation, solvent extraction, ammonium sulfate, and zinc sulfate precipitation methods. All these methods revealed that the amount of biosurfactants correlates with the type of tested hydrocarbon, with the highest amount produced in crude oil-contaminated samples. In conclusion, the study highlights the importance of Candida isolated from contaminated soils for bioremediation of petroleum oil pollution. This raises the need for further analyses of the microbe/hydrocarbon degradation dynamics.

## Introduction

Hydrocarbons are a rich energy and carbon source for hydrocarbon-degrading microorganisms. Over 100 fungal genera have been identified as significant oil-degraders1. Microbial hydrocarbon degradation involves complex enzymatic activities such as hydroxylases, dehydrogenases, monooxygenases, dioxygenases, oxidoreductases, etc.2. Although the different pathways have been extensively examined, there is limited understanding of the enzymatic mechanisms and their associated genetic pathways of hydrocarbon degradation in fungi3. Fungi facilitate the degradation of recalcitrant hydrocarbons by secreting extracellular enzymes that transform the hydrocarbons into intermediates with lower toxicity4. So far, studies on fungal bioremediation have mostly revolved around terrestrial environments5; marine environments, on the other hand, are not very commonly examined6. Effective biodegradation of crude oil by marine fungi was determined by quantifying the changes in the total mass of crude oil over time6. It has also been reported that fungi isolated from hydrocarbon-contaminated habitats in the Gulf of Mexico can degrade n-alkanes and polycyclic aromatic hydrocarbons7. Additionally, some fungi can facilitate hydrocarbon bioavailability to other microbial communities, such as other bacteria or fungi, through biosurfactant production8. The most significant characteristic of a potential hydrocarbon degrader is the ability to produce biosurfactants8.
The biosurfactants cause oily contaminants to become more soluble, which increases their availability as carbon sources for microorganisms and further increases their degradation9. The properties of microbial surfactants are analogous to synthetic surfactants, though the former are naturally biodegradable and can be produced in situ10. Isolation of microorganisms with specific features to emulsify and solubilize hydrophobic contaminants both ex situ and in situ is a significant advantage over competitors in contaminated environments8. These processes involve directly implementing microbes or microbial surfactants in the contaminated wells, which assists in reducing oil viscosity and leads to unobstructed flow through the pipelines and more stabilized fuel water–oil emulsions11. The current study aimed to isolate and recognize yeast communities found in chronically hydrocarbon-contaminated petrol stations in Riyadh, Saudi Arabia. These isolates were examined for their ability to use crude oil as a sole carbon source and for their ability to produce biosurfactants.

## Results

### Identification of Candida strains in different soil samples

In the current study, six soil samples were collected from three different spots (two each) surrounding the crude oil reservoirs at Al Faisaliyyah, Al Sina’iyah, and Ghubairah in the Riyadh region of Saudi Arabia. The physical characterizations showed that the soil samples had different colors and pH values. The soil samples from Al Faisaliyyah had an umber-brown color with an acidic pH of 5.89, those from Al Sina’iyah were caramel-brown with a pH of 7.72, while those from Ghubairah had a mocha-brown color with a pH of 7.64. The Potato Dextrose Agar (PDA) cultures revealed the presence of 4 strains of Candida species with an incidence of 13.33% in all soil samples. The yeast species were identified using standard taxonomic keys based on typical mycelial growth and morphological characteristics provided in the mycological keys. Based on the physical and microscopic diagnosis, the isolated species were identified as Candida parapsilosis, Candida krusei, Candida famata, and Rhodotorula spp. We calculated the growth rate of the isolated strains over 30 days to test their hydrocarbon tolerance (Supplementary Figure 1). The growth rate of the tested fungi differed depending on the carbon source used, as evidenced by the change of color in each flask. As shown in Fig. 1A, the growth rate of C. parapsilosis increased through the 30 days, with kerosene inducing the highest significant growth rate (8.15 g/30 days, P = 0.02), followed by diesel oil (6.37 g/30 days, P = 0.18), used oil (5.31 g/30 days, P = 0.43), and mixed oil (4.94 g/30 days, P = 0.55), as compared to the lowest growth rate induced by crude oil (3.78 g/30 days). In the cultures of C. krusei (Fig. 1B), kerosene induced a non-significant increase in growth rate (5.34 g/30 days), whereas the other hydrocarbons caused similar growth rates (3.61–3.89 g/30 days), as compared to crude oil (3.85 g/30 days). Similar to C. parapsilosis, kerosene induced the highest increase in the growth of C. famata (5.73 g/30 days), followed by diesel oil (4.97 g/30 days), mixed oil (4.95 g/30 days), and used oil (3.65 g/30 days), as compared to the crude oil treatment (4.01 g/30 days) (Fig. 1C), although none of these differences were significant. Finally, there were non-significant changes in the growth rate of Rhodotorula spp. (Fig. 1D) in the presence of the different tested hydrocarbons.
However, Diesel induced the highest growth rate by 5.45 g/ 30 days. The growth rates of different strains at 30 days were blotted together to compare the tolerance to the tested hydrocarbons (Fig. 1E). From another perspective, the comparison between different tested hydrocarbons revealed that C. parapsilosis was the highest consumer for the carbon sources in Kerosene, diesel, and the used oil, as it’s clear from the highest growth rates (Fig. 1E). Similarly, C. krusei and Rhodotorula spp. had the highest growth rates for the treatments with mixed and crude oils, respectively. That suggested that the type of hydrocarbon might affect the growth rate of specific species. All strains had higher significant growth rates against the untreated control (P < 0.001). ### Different morphological changes induced after treatments with crude oil The SEM results of C. parapsilosis revealed different morphological changes in the outer surfaces of Candida strains post-treatment with 1% crude oil as compared to the untreated control (Fig. 2A). The SEM images of the untreated cells had a natural structure with smooth flat surfaces, while the treated cells had large unequal sizes with the unusual zigzag surface structures. In C. krusei, the untreated samples were similar to smooth saprophytes, while the crude oil treatments deformities the cellular surface into an oval shape with a grainy and sinuous structure (Fig. 2B). Similarly, post-treatment of C. famata induced an oval cellular shape with smooth edges and abnormal coatings, while the control cells were similar to the smooth-shape shoot plants (Fig. 2C). Finally, the SEM screening of Rhodotorula species revealed the ability of crude oil to induce some cellular changes, where the cells appear as if they were surrounded by an extra membrane (Fig. 2D). ### Candida spp. induced biodegrading of different hydrocarbons Different strains of candida were tested for their ability to oxidize the oil hydrocarbons by interacting with the redox dye (2, 6-dichlorophenol indophenol (DCPIP)). That allowed the transfer of electrons to DCPIP, which changed its color from blue to colorless12. In the current study, mixing different hydrocarbons with DCPIP didn’t induce any oxidation; however, it produced a light violet color (Supplementary Figure 2A). Otherwise, treatment with different Candida strains caused the oxidation of oils. As shown in Fig. 3A, C. parapsilosis induced the oxidation of all oils. The highest effect was for used oil (0.61 a.u.), whereas the least was for Diesel (0.426 a.u.) on the 15th-day post-treatment. Treatment with C. krusei showed a similar effect on all tested hydrocarbons; however, the lowest oxidation was for kerosene (0.343 a.u.) on the 15th day of incubation (Fig. 3B). For C. famata and Rhodotorula spp., there were no differences between all tested hydrocarbons (Fig. 3C); however, they caused the discoloring of DCPIP (Supplementary Figure 2B–E). The comparison of all tested organisms shown in Fig. 3E revealed that crude oil was more sensitive to the oxidation induced by the treatment with C. famata (0.556 a.u.) and C. krusei (0.558 a.u.) on the 15th day of incubation. C. parapsilosis was the strongest bio-degrader of kerosene (0.46 a.u.) and mixed oil (0.471 a.u.). Finally, the used and diesel oils were more oxidized by C. parapsilosis (0.61 a.u.) and C. famata (0.499 a.u.), respectively. Rhodotorula spp. 
was the weakest oxidizer among all isolates, as indicated by the reddish-brownish colors of the different hydrocarbons, which reflected the incomplete reduction of DCPIP (Supplementary Figure 2E).

### Candida strains affected the physical properties of tested hydrocarbons

The results showed that the products of the isolated Candida strains acted as biosurfactants on the tested hydrocarbons, causing the collapse of all oil drops (Table 1). The highest collapsing effect occurred with crude, used, and mixed oils, whereas diesel and kerosene were the least affected hydrocarbons compared to the positive and negative controls. In the current study, CFS of C. parapsilosis formed a clear zone of 9.7 ± 1.1 mm diameter over the surface of crude oil, which was greater than that of Sodium Dodecyl Sulfate (SDS) with a zone diameter of 7.7 ± 0.8 mm (Fig. 4A). For mixed oil, C. parapsilosis formed a larger zone of 31.9 ± 0.15 mm diameter, which was greater than the SDS zone with a diameter of 20.3 ± 0.1 mm (Fig. 4B). The negative control of distilled water didn't induce any clear zones over the surfaces of the tested oils. Furthermore, the isolated strains showed appropriate bio-emulsification activity against all tested oils (Supplementary Figure 3). All treated samples showed the formation of two separate layers, which differed in height according to the type of oil and treatment. As shown in Fig. 5, C. parapsilosis was the strongest microbial bio-emulsifier of crude oil (61.9%), followed by Rhodotorula spp. (59.09%), C. famata (57.14%), and C. krusei (54.54%). For the used oil samples, C. famata was the strongest bio-emulsifier (57%), followed by C. parapsilosis (55%), Rhodotorula spp. (53%), and C. krusei (50%). In diesel-treated samples, C. parapsilosis was the strongest bio-emulsifier at 55.55%, whereas the other three organisms had the same bio-emulsification activity of 52.63%. For kerosene, C. parapsilosis, Rhodotorula spp., and C. famata had a bio-emulsification activity of 48.48%, while C. krusei had a lower activity of 46.48%. Unlike kerosene, C. krusei exhibited strong emulsification of mixed oil at 63.15%, followed by C. famata (55.55%), and C. parapsilosis and Rhodotorula spp. (52.63%).

### The amount of biosurfactants recovered from Candida strains correlates to the type of tested hydrocarbons

To confirm the oil displacement activity of the isolated Candida strains, the content of produced biosurfactants was tested and compared in the presence of different hydrocarbons. The precipitation with HCl resulted in a white powder in untreated samples, whereas the treated samples produced slightly yellowish precipitates (data not shown). As shown in Fig. 6A, the highest biosurfactant yields of C. parapsilosis and C. krusei were produced with crude oil (3.7 g and 3.55 g), whereas the lowest amounts were for kerosene (1.03 g and 1.22 g), respectively. Similarly, Rhodotorula spp. produced the highest amount of biosurfactant with crude oil (3.03 g), with the lowest amount for diesel (1.33 g). In C. famata, the highest production of biosurfactants was for used oil (3.09 g), while diesel had the lowest amount (1.11 g). In the comparison among the different organisms, the highest content of biosurfactants was produced by C. parapsilosis for crude and diesel oils, C. krusei for used oil, and Rhodotorula spp. for kerosene and mixed oil, while C. famata was the lowest producer with all hydrocarbons.
The solvent extraction assay was used to differentiate between the soluble (Biosurfactants) and insoluble constituents (non-emulsified hydrocarbons and microbial cells), which resulted from the reaction between isolated microbes and tested hydrocarbons (Fig. 6B). The highest amount of the dry-weight white precipitate resulted from reactions between Rhodotorula spp. (2.86 g) or C. famata (2.71) with used oil, C. parapsilosis with kerosene (2.39 g), and C. krusei with crude oil (2.35 g). Diesel and mixed oil treatments with different isolates resulted in almost smaller amounts of precipitate by comparing to other hydrocarbons. Finally, another two methods were used to test the biosurfactant recovery of tested microbes, the Ammonium sulfate, and Zinc sulfate precipitation methods. As shown in Fig. 6C, the used oil resulted in the least precipitation of Ammonium sulfate when treated with C. famata (100 mg), C. parapsilosis (100 mg)and C. krusei (200 mg). Diesel oil resulted in the least precipitation with Rhodotorula spp. (300 mg). The dry weight of the precipitate produced in the Zinc sulfate precipitation method (Fig. 6D) showed that the lowest biosurfactant production resulted from the reaction of mixed oil with Rhodotorula spp. (100 mg), C. parapsilosis (100 mg)and C. krusei (100 mg). In the case of C. famata, both diesel and crude oil produced the lowest biosurfactant amount (100 mg). ## Discussion The sensitivity of different microorganisms to environmental changes can affect their viability or induce biodegradation13,14. Different factors control the microbial biodegradation rate of hydrocarbons, such as their type, availability, length, volatilization, and solubility, which act as sources of nitrogen15,16. Other environmental factors such as pH, temperature, humidity, salinity, oxygen availability, and nutrient content might affect the existence of different microbes13,17. In the current study, pH and hydrocarbons content of soil samples from different crude oil reservoirs allowed the growth and existence of four yeast strains; C. parapsilosis, C. krusei, C. famata, and Rhodotorula spp. A previous study indicated that the lower pH of soil samples resulted from the higher alkalinity due to crude oil carbonaceous constituents, which further allowed the microbial growth in these contaminated soils18. The fact that they were isolated from contaminated soil samples shows that the contamination did not inhibit the growth and variation of fungal strains in these polluted environments. That also demonstrates that the fungal species used oil compounds as nutrients, where the crude oil pollution caused an increase in fungal growth19. All isolates were examined for their ability to grow to utilize various carbon sources such as crude oil, kerosene, used (engine) oil, diesel, or a mixture of these oils as a unique carbon source. The results revealed the ability of these organisms to grow at a 1% concentration on liquid Mineral Salt Medium (MSM), which was significantly higher than the negative controls. That indicated the fungal inability to grow using MSM media as a carbon source. Furthermore, the results revealed that kerosene or so-called ‘paraffin oil’ gave the highest growth rates of all tested species, in contrast to crude oil, which induced the lowest growth. 
A similar study from Brazil tested the growth rates of some isolated yeast strains, Meyerozyma guilliermondii, and Rhodosporidium diobovatum, which were the highest in a medium supplemented with kerosene where the growth rate with crude oil was lower20. Another study showed that some yeast species, Candida tropicalis, Candida rugosa, Trichosporon asahii, and Rhodotorula mucilaginosa, were bio-degraders of diesel oil21. That was due to the production of different enzymes such as NADPH cytochrome c reductase, catalase, and naphthalene dioxygenase21. Different studies showed similar findings, which suggested that yeast strains are reliable bio-remediators that can reduce petroleum contamination in different environments22,23,24,25. In comparison to the controls, these fungi accumulated high biomass in a liquid medium with all petroleum oils. The rate at which biodegradation occurs hinges on many factors, such as pollutant characteristics15, the microorganism characteristics (cell metabolic pathways and morphological changes)26, environmental conditions17, and the physicochemical properties of the soil such as density, water holding capacity, pH, moisture, and texture27. Microorganisms are highly sensitive to changes in their environments and are affected by composition and hydrocarbon sources28,29. In the current study, the SEM imaging of tested isolates showed different morphological changes in the presence of 1% crude oil. SEM was used either to confirm the phenotypic characterization of isolated strains or to study the changes in the outer-surface structures accompanied by crude oil treatment. A similar study used Candida tropicalis and revealed cellular morphological changes that cause a significant decrease in the cell diameter30. That might be due to the bioaccumulation capacities of these strains that could alleviate soil contamination31. No studies were found about the morphological changes in the isolated candida strains in contaminated petroleum spots. The level of biodegradation in hydrocarbon-polluted soils is contingent on specific factors. That included the environmental conditions17,32, the bioavailability of contaminants to microorganisms33, and the predominant hydrocarbons types34. The spectrophotometric analysis of the growth of the tested strains with 1% of each oil evidenced the oil-biodegradation ability. The higher readings demonstrate a higher concentration of fungal cells as there was a higher absorbance measured at the same absorbance wavelength. The ability of isolated strains to oxidize the tested oil hydrocarbons was studied. All isolates were observed to be potent according to the qualitative (DCPIP and spectrophotometry) analysis. A similar study was conducted in the Gulf of Mexico and showed the ability of some fungi to induce crude-oil-degrading that was confirmed by decolorization of DCPIP, reduction in the quantity of crude oil, and fungal proliferation35. Another study showed that the strains of Candida tropicalisRhodotorula mucilaginosa, and Rhodosporidium toruloides isolated from the Khafji oil field, Saudi Arabia showed a crude oil biodegradable activity, which induced the decolorization of DCPIP dye36. Another study revealed the ability of Candida viswanathii to biodegrade biodiesel, which caused the decolorization of the DCPIP redox dye37. In a study from Pernambuco, Brazil, Rhodotorula aurantiaca and Candida ernobii isolated from petroleum-contaminated soil samples induced biodegradation of diesel oil38. That caused lower O.D. 
values due to the decolorization of DCPIP38. All these studies provided evidence about the ability of the isolated strains to oxidize the carbon source, which induced the electronic transfer to DCPIP and resulted in its decolorization36. Furthermore, that evidenced the capacity of isolated strains to degrade crude oil. Detection of biosurfactant-producing fungi was assessed by drop-collapsing, oil-spreading, and emulsification activities as sensitive and rapid methods. Drop collapse assay is one of the techniques used to measure the destabilization of liquid droplets by surfactants, which prevent the repel of the polar water molecules from the hydrophobic surface37. In contrast, the presence of surfactants allows the spread/collapse of the drops due to the reduction of the interfacial tension38. One of the most characteristic features of aromatic oils is the ability of different biosurfactants to form clear zones over the oil surface39. The diameter of this zone correlates to the oil displacement activity40. The presence of biosurfactants in a supernatant leads to the formation of a halo which can be measured and compared to the positive and negative controls41. In the current study, all isolates caused the collapse of all oil drops from different hydrocarbon sources. Further, the drop-collapsing effect varied according to the type of hydrocarbon source. Pure filtered oils, diesel, and kerosene had the highest drop-collapsing effect. Besides, the current study demonstrated the ability of C. parapsilosis to form clear zones over the surfaces of crude and mixed oils. The emulsification activities of crude oil, used oil, diesel, kerosene, and mixed oil ranged from 41 to 61%, whereas the lowest emulsification activity for the yeast strains was seen for kerosene. That evidenced its ability to change the physical properties of these oils by increasing the oil spreading41. In agreement with our findings, a previous study illustrated the biosurfactant produced by C. parapsilosis was positive for oil spreading assay, drop-collapse method, and emulsifying index, despite it being negative for hemolytic activity in the blood agar42. Similar studies showed the oil spreading and emulsification activities of Candida glabrata to n-hexadecane43Rhodotorula babjevae to the crude oil at 38.46 mm244, and C. tropicalis and C. bombicola to waste frying oil45,46. All these studies reported that the biosurfactants produced by yeast strains might affect hydrocarbon bioavailability and biodegradation. Most of the fungi utilize petroleum hydrocarbons, as a source of carbon and energy, and metabolize the molecules to CO2 and biomass47. The chemical composition of different hydrocarbons is an important factor that determines the ability of fungal growth48. The oil displacement area in the oil spreading test was directly proportional to the concentration of biosurfactants in the solution49. In the current study, four different recovery methods were employed to measure the amounts of biosurfactant produced by the studied strains in the presence of different hydrocarbons. according to the above studies, the current results showed that the amount of biosurfactant depends on the type of hydrocarbon and the extraction method used. The highest amount of biosurfactant from C. parapsilosis and Rhodotorula spp. were produced by crude oil by using the acid precipitation method. In C. krusei and C. famata, used oil was the highest producer of biosurfactants according to the acid precipitation method, as well. 
In the solvent extraction method, used oil showed the highest amount of biosurfactants produced from C. parapsilosis, C. famata, and Rhodotorula spp., while crude oil showed the highest production with C. krusei. The differentiation in the biosurfactant yields might be due to the hydrophobic end, which increased their solubility in an organic solvent50. A previous study suggested that most biosurfactants are synthesized in media containing carbon sources (e.g., carbohydrates, fats, oils, hydrocarbons) by aerobic microorganisms51. These biosurfactants are amphipathic compounds, which possess both hydrophobic and hydrophilic moieties and exhibit various amphiphilic structures52. In line with our findings, previous studies showed that biosurfactant recovery depends mainly on the ionic charge and solubility of the biosurfactant in the chosen solvent, which might explain the different yields produced by the isolated strains8,50,52. Minor amounts of biosurfactants were produced by the ammonium sulfate and zinc sulfate methods. On the other hand, ionic precipitation gave almost the same yields. That might be because the emulsification activity of an organism depends on the pH and on divalent cations such as magnesium ions52. In agreement with our findings, the amount of rhamnolipid biosurfactant produced by Pseudomonas aeruginosa was higher when recovered by organic solvent extraction (7.37 ± 0.81 g/L) than by zinc sulfate precipitation (5.83 ± 0.02 g/L)53. The current study revealed that fungi isolated from soils contaminated with petroleum products appear to be a promising microbial resource for bioremediation of crude oil pollution. To our knowledge, the isolated species were not detected before in the contaminated soil samples from the oil reservoirs in the Riyadh region, Saudi Arabia. Besides, the oil biodegradation capability of Candida famata and Rhodotorula spp. was not fully tested before, as shown in the current study. That raises the need for further analyses of the most promising isolates to accurately determine the kinds of hydrocarbons that are metabolized and the degradation dynamics. Besides, the study highlights the importance of intraspecific variability. That emphasizes the relevance of high-throughput culturing strategies to obtain different microbial isolates, coupled with high-throughput screening approaches to efficiently determine the most promising isolates, i.e., those that can efficiently utilize hydrocarbons and produce biosurfactants. So, Candida can be useful for bioremediation applications within the frame of bioaugmentation or bio-stimulation processes. Further studies will be required to identify the exact components of the biosurfactants produced by these species. Furthermore, more studies are required to assess the cellular changes induced by the various enzymatic pathways involved in microbial oil-biodegradation.

## Materials and methods

### Soil sample collection

Soil samples were collected from three different crude oil reservoirs at Al Faisaliyyah, Al Sina’iyah, and Ghubairah, located in Riyadh, Saudi Arabia. Briefly, 400 g of soil samples were collected at 0–10 cm depth under aseptic conditions. Samples were sieved through 2.5 mm pore size sieves, homogenized, and stored at 4 °C until use.

### Sources of different hydrocarbons

Different samples of crude oil, kerosene, diesel, and used oil were collected in sterile flasks from the tankers of Saudi Aramco Company (Dammam, Saudi Arabia).
Additionally, another flask was prepared by mixing 1% of each oil in MSM liquid media to make up the mixed oil. The oil samples were sterilized by Millex® Syringe Filters (Merck Millipore co., Burlington, MA, United States) and stored at 4 °C for further usage. ### Isolation and identification of fungal species The fungal species in the soil contaminated by crude oil were identified using the dilution method. Briefly, 10% of each soil sample was dissolved in distilled water and vortexed thoroughly. Then, 0.2 ml of each sample was cultured on a sterile PDA plate incubated at 28 °C for three days until the growth of different fungal colonies. Carefully, each colony was isolated, re-cultured on new PDA McCartney bottles of PDA slant, and incubated at 28 °C for three days. The fungi were identified microscopically using standard taxonomic keys based on typical mycelia growth and morphological characteristics provided in the mycological keys54. Besides, the taxonomy of the isolated yeast strains was confirmed by the API 20 C AUX kit (Biomerieux Corp., Marcy-l'Étoile, France) (data not shown). The morphology of pure cultures was tested and identified under a light microscope as described before55. The incidence of each strain was calculated as follows: $$Incidence \;(\% ) = \frac{{{\text{Number }}\;{\text{of }}\;{\text{samples }}\;{\text{showed }}\;{\text{microbial }}\;{\text{growth}}}}{{{\text{Total }}\;{\text{samples}}}} \times 100$$ ### Hydrocarbon tolerance test The growth rate of isolated strains was tested in a liquid medium of MSM mixed with 1% of either crude oil, used oil, diesel, kerosene, or mixed oil. Furthermore, a control sample of MSM liquid medium without any of the oils tested and all culture media were autoclaved at 121 °C for 30 min. After cooling, 1 ml of each isolate was inoculated with one of the above mixtures and incubated at 25 °C on an orbital shaker. The growth rate was measured every three days for a month for each treatment versus the control. All experiments were performed in triplicates. ### Scanning electron microscopy (SEM) The morphology of different strains of the isolated fungi was tested by SEM, as previously described56, with some modifications. Briefly, 1 ml of each growing strain, in the liquid media, was centrifuged at the maximum speed (14,000 rpm) for 1 min, followed by fixation with 2.5% glutaraldehyde, and overnight incubation at 5 °C. Later, the sample was pelleted, washed with distilled water, then dehydrated with different ascending concentrations of ethanol (30, 50, 70, 90, 100 (v/v)) for 15 min at room temperature. Finally, samples were examined in the Prince Naif Research Centre (King Saud University, Riyadh, Saudi Arabia) by the JEOL JEM-2100 microscope (JEOL, Peabody, MA, United States), according to the manufacturer instructions. A modified version of the DCPIP assay57 was employed to assess the oil-degrading ability of the fungal isolates. For each strain, 100 ml of the autoclaved MSM was mixed with 1% (V/V) of one of the hydrocarbons (crude oil, used oil, diesel, kerosene, or mixed oil), 0.1% (v/v) of Tween 80, and 0.6 mg/mL of the redox indicator (DCPIP). Then, 1–2 ml of different fungi growing in liquid media (24–48 h) add to the Crude Oil Degradation media, prepared previously, and incubated for two weeks in a shaking incubator at 25 °C. All flasks were covered and protected from light, aeration, or temperature exchanges to reduce the effects of oil weathering (evaporation, photooxidation). 
The surfactant Tween 80 was used for bio-stimulation and acceleration of the biosurfactant production by increasing metabolism58. A non-inoculated Crude Oil Degradation media was used as the negative control. Afterward, the colorimetric analysis for the change in DCPIP color was estimated, spectrophotometrically, at 420 nm. All experiments were performed in triplicates. ### Preparation of cell-free supernatant (CFS) To prepare the Cell-Free Supernatant (CFS), all isolates were grown in MSM broth medium with 1% of either crude oil, used oil, diesel, kerosene, or mixed oil for 30 days in a shaking incubator at 25 °C. After incubation, the cells were removed by centrifugation at 10,000 rpm for 30 min at 4 °C. The supernatant (CFS) was collected and filter-sterilized with a 0.45 μm pore size sterile membrane. CFS was screened for the production of different biosurfactants. All the experiments were carried out in triplicates, and the average values were calculated. ### Drop-Collapse assay The Drop-Collapse assay was performed as previously described9, with some modifications. 100 µl of crude oil was applied on glass slides, then 10 µl of each CFS was added to the center of the slide surface and incubated for a minute at room temperature. The slides were imaged by a light microscope using the 10X objective lenses. The spreading on the soil surface was scored by either « + » to indicate the level of positive spreading, biosurfactant production, or «—» for negative spreading. Biosurfactant production was considered positive at the drop diameter ≥ 0.5 mm, compared to the negative control (treated with distilled water). An amount of 20 ml of water was added to the Petri plate (size of 100 mm) and mixed with 20 µl of crude oil or mixed oil, which created a thin layer on the water surface. Then, 10 µl of CFS was delivered onto the surface of the oil, and the clear zone surrounding the CFS drop was observed. The results were compared to the negative control (without CFS) and positive control of 1% SDS41. We have measured the clear zones diameter from images and calculate the actual values in regards to the diameter of the Petri dish (10 cm). The assay was performed in triplicates. ### Emulsification activity assay The emulsification activity of each isolate was assessed by mixing equal volumes of MSM broth medium of each isolate with different oils in separate tubes. The samples were homogenized by vortex at high speed for two minutes at room temperature (25 °C) and allowed to settle for 24 h. The tests were performed in duplicate. Then, the emulsification index was calculated as follows59: $$Emulsification\; activity\; \left( \% \right) = \frac{{{\text{Height }}\;{\text{of }}\;{\text{emulsion }}\;{\text{layer}}}}{{{\text{Total }}\;{\text{height}}}} \times 100$$ ### Recovery of biosurfactants The recovery of biosurfactants from CFS was tested through different assays: ### Acid precipitation assay 3 ml of each CFS was adjusted by 6 N HCl to pH 2 and incubated for 24 h at 4 °C. Later, equal volumes of chloroform/methanol mixture (2:1 v/v) were added to each tube, vortexed, and incubated overnight at room temperature. Afterward, the samples were centrifuged for 30 min at 10,000 rpm (4 °C), the precipitate (Light brown colored paste) was air-dried in a fume hood, and weighed53. ### Solvent extraction assay The CFS containing biosurfactant was treated with a mixture of extraction solvents (equal volumes of methanol, chloroform, and acetone). 
Then, the new mixture was incubated in a shaking incubator at 200 rpm, 30 °C for 5 h. The precipitate was separated into two layers, in which the lower layer (White) was isolated, dried, weighed, and stored60. ### Ammonium sulfate precipitation assay The CFS containing biosurfactant was precipitated with 40% (w/v) ammonium sulfate and incubated overnight at 4 °C. The samples were centrifuged at 10,000 rpm for 30 min (4 °C). The precipitate was collected and extracted with an amount of acetone equal to the volume of the supernatant. After centrifugation, the precipitate (Creamy-white) was isolated, air-dried in a fume hood, and weighed53. ### Zinc sulfate precipitation method Similarly, 40% (w/v) zinc sulfate was mixed with the CFS containing biosurfactant. Then, the mixture was incubated at 4 °C, overnight. The precipitate (Light Brown) was collected by centrifugation at 10,000 rpm for 30 min (4 °C), air-dried in a fume hood, and weighed53. ### Statistical analysis All experiments were performed in triplicate, and the results were expressed as the mean values ± standard deviation (SD). One-way ANOVA and Dunnett's tests were used to estimate the significance levels at P < 0.05. Statistical analyses were performed using the SPSS statistical package (version 22) (IBM, Armonk, NY, United States). ## Data availability All datasets obtained or studied during this study are incorporated in the manuscript. ## References 1. Galitskaya, P. et al. Response of bacterial and fungal communities to high petroleum pollution in different soils. Sci. Rep. 11, 164. https://doi.org/10.1038/s41598-020-80631-4 (2021). 2. Harms, H., Schlosser, D. & Wick, L. Y. Untapped potential: exploiting fungi in bioremediation of hazardous chemicals. Nat. Rev. Microbiol. 9, 177–192. https://doi.org/10.1038/nrmicro2519 (2011). 3. Marco-Urrea, E., García-Romera, I. & Aranda, E. Potential of non-ligninolytic fungi in bioremediation of chlorinated and polycyclic aromatic hydrocarbons. N. Biotechnol. 32, 620–628. https://doi.org/10.1016/j.nbt.2015.01.005 (2015). 4. Steliga, T. Role of fungi in biodegradation of petroleum hydrocarbons in drill waste. Pol. J. Environ. Stud. 21, 471–479 (2012). 5. Godoy, P. et al. Exploring the potential of fungi isolated from PAH-polluted soil as a source of xenobiotics-degrading fungi. Environ. Sci. Pollut. Res. Int. 23, 20985–20996. https://doi.org/10.1007/s11356-016-7257-1 (2016). 6. Bovio, E. et al. The culturable mycobiota of a Mediterranean marine site after an oil spill: isolation, identification and potential application in bioremediation. Sci. Total. Environ. 576, 310–318. https://doi.org/10.1016/j.scitotenv.2016.10.064 (2017). 7. Simister, R. L. et al. Degradation of oil by fungi isolated from Gulf of Mexico beaches. Mar. Pollut. Bull. 100, 327–333. https://doi.org/10.1016/j.marpolbul.2015.08.029 (2015). 8. Karlapudi, A. P. et al. Role of biosurfactants in bioremediation of Oil Pollution—a review. Petroleum 7, 230. https://doi.org/10.1016/j.petlm.2021.01.007 (2021). 9. Patowary, K., Patowary, R., Kalita, M. C. & Deka, S. Characterization of biosurfactant produced during degradation of hydrocarbons using crude oil as sole source of carbon. Front. Microbiol. 8, 279. https://doi.org/10.3389/fmicb.2017.00279 (2017). 10. Fenibo, E. O., Ijoma, G. N., Selvarajan, R. & Chikere, C. B. Microbial surfactants: the next generation multifunctional biomolecules for applications in the petroleum industry and its associated environmental remediation. Microorganisms 7, 581. 
https://doi.org/10.3390/microorganisms7110581 (2019). 11. De Almeida, D. G. et al. Biosurfactants: Promising molecules for petroleum biotechnology advances. Front. Microbiol. 7, 1718. https://doi.org/10.3389/fmicb.2016.01718 (2016). 12. Mariano, A. P., Bonotto, D. M., Angelis, D. D. F. D., Pirôllo, M. P. S. & Contiero, J. Biodegradability of commercial and weathered diesel oils. Braz. J. Microbiol. 39, 133–142. https://doi.org/10.1590/S1517-83822008000100028 (2008). 13. Cavicchioli, R. et al. Scientists’ warning to humanity: microorganisms and climate change. Nat. Rev. Microbiol. 17, 569–586. https://doi.org/10.1038/s41579-019-0222-5 (2019). 14. Gupta A., Gupta R., & Singh R. L. Microbes and Environment in Principles and Applications of Environmental Biotechnology for a Sustainable Future (ed. Singh R. L.) 43–84 (Springer, 2017). 15. Chandra, S., Sharma, R., Singh, K. & Sharma, A. Application of bioremediation technology in the environment contaminated with petroleum hydrocarbon. Ann. Microbiol. 63, 417–431. https://doi.org/10.1007/s13213-012-0543-3 (2013). 16. Ławniczak, Ł, Woźniak-Karczewska, M., Loibner, A. P., Heipieper, H. J. & Chrzanowski, Ł. Microbial degradation of hydrocarbons-basic principles for bioremediation: A review. Molecules 25, 856. https://doi.org/10.3390/molecules25040856 (2020). 17. Varjani, S. J., Rana, D. P., Jain, A. K., Bateja, S. & Upasani, V. N. Synergistic ex-situ biodegradation of crude oil by halotolerant bacterial consortium of indigenous strains isolated from on shore sites of Gujarat, India. Int. Biodeter. Biodegr. 103, 116–124. https://doi.org/10.1016/j.ibiod.2015.03.030 (2015). 18. Nwakwasi, N. L., Osuagwu, J. C., Dike, B. U., Nwoke, H. U. & Agunwamba, J. C. Modeling soil ph fate in crude oil contaminated soil in the Niger Delta. Sci. Res. J. 6, 54–60. https://doi.org/10.31364/scirj/v6.i11.2018.p1118583 (2018). 19. Mohsenzadeh, F., Rad, A. C. & Akbari, M. Evaluation of oil removal efficiency and enzymatic activity in some fungal strains for bioremediation of petroleum-polluted soils. Iran. J. Environ. Health Sci. Eng. 9, 26. https://doi.org/10.1186/1735-2746-9-26 (2012). 20. Goulart, G. G., Coutinho, J. O. P. A., Monteiro, A. S., Siqueira, E. P. & Santos, V. L. Isolation and characterization of gasoline-degrading yeasts from refined oil-contaminated residues. J. Bioremed. Biodeg. 5, 214. https://doi.org/10.4172/2155-6199.1000214 (2014). 21. Chandran, P. & Das, N. Role of plasmid in diesel oil degradation by yeast species isolated from petroleum hydrocarbon-contaminated soil. Environ. Technol. 33, 645–652. https://doi.org/10.1080/09593330.2011.587024 (2012). 22. Benguenab, A. & Chibani, A. Biodegradation of petroleum hydrocarbons by filamentous fungi (Aspergillus ustus and Purpureocillium lilacinum) isolated from used engine oil contaminated soil. Acta Ecol. Sin. 41, 416–423. https://doi.org/10.1016/j.chnaes.2020.10.008 (2021). 23. Gargouri, B., Mhiri, N., Karray, F., Aloui, F. & Sayadi, S. Isolation and characterization of hydrocarbon-degrading yeast strains from petroleum contaminated industrial wastewater. Biomed. Res. Int. 2015, 1–11. https://doi.org/10.1155/2015/929424 (2015). 24. Hashem, M., Alamri, S. A., Al-Zomyh, S. S. A. A. & Alrumman, S. A. Biodegradation and detoxification of aliphatic and aromatic hydrocarbons by new yeast strains. Ecotoxicol. Environ. Saf. 151, 28–34. https://doi.org/10.1016/j.ecoenv.2017.12.064 (2018). 25. Taylor, J. D. & Cunliffe, M. 
Multi-year assessment of coastal planktonic fungi reveals environmental drivers of diversity and abundance. ISME J. 10, 2118–2128. https://doi.org/10.1038/ismej.2016.24 (2016). 26. Meckenstock, R. U. et al. Anaerobic degradation of benzene and polycyclic aromatic hydrocarbons. J. Mol. Microbiol. Biotechnol. 26, 92–118. https://doi.org/10.1159/000441358 (2016). 27. Beškoski, V. P. et al. Ex situ bioremediation of a soil contaminated by mazut (heavy residual fuel oil)—a field experiment. Chemosphere 83, 34–40. https://doi.org/10.1016/j.chemosphere.2011.01.020 (2011). 28. Boopathy, R. Factors limiting bioremediation technologies. Bioresour. Technol. 74, 63–67. https://doi.org/10.1016/S0960-8524(99)00144-3 (2000). 29. Xu, X. et al. Petroleum hydrocarbon-degrading bacteria for the remediation of oil pollution under aerobic conditions: A perspective analysis. Front. Microbiol. 9, 2885. https://doi.org/10.3389/fmicb.2018.02885 (2018). 30. Farag, S. & Soliman, N. A. Biodegradation of crude petroleum oil and environmental pollutants by Candida tropicalis strain. Braz. Arch. Biol. Technol. 54, 821–830. https://doi.org/10.1590/S1516-89132011000400023 (2011). 31. Liaquat, F. et al. Evaluation of metal tolerance of fungal strains isolated from contaminated mining soil of Nanjing, China. Biology 9, 469. https://doi.org/10.3390/biology9120469 (2020). 32. Varjani, S., Thaker, M. B. & Upasani, V. Optimization of growth conditions of native hydrocarbon utilizing bacterial consortium “HUBC” obtained from petroleum pollutant contaminated sites. Indian J. Appl. Res. 4, 474–476 (2014). 33. Varjani, S. J. & Upasani, V. N. Biodegradation of petroleum hydrocarbons by oleophilic strain of Pseudomonas aeruginosa NCIM 5514. Bioresour. Technol. 222, 195–201. https://doi.org/10.1016/j.biortech.2016.10.006 (2016). 34. Ghazali, F. M., Rahman, R. N. Z. A., Salleh, A. B. & Basri, M. Biodegradation of hydrocarbons in soil by microbial consortium. Int. Biodeterior. Biodegrad. 54, 61–67. https://doi.org/10.1016/j.ibiod.2004.02.002 (2004). 35. Al-Nasrawi, H. Biodegradation of crude oil by fungi isolated from Gulf of Mexico. J. Bioremed. Biodegrad. 3, 147. https://doi.org/10.4172/2155-6199.1000147 (2012). 36. Al-Dhabaan, F. A. Isolation and identification of crude oil-degrading yeast strains from Khafji oil field, Saudi Arabia. Saudi J. Biol. Sci. 28, 5786–5792. https://doi.org/10.1016/j.sjbs.2021.06.030 (2021). 37. Junior, J. S., Mariano, A. P. & de Angelis, D. F. Biodegradation of biodiesel/diesel blends by Candida viswanathii. Afr. J. Biotechnol. 8, 2774–2778 (2009). 38. Miranda, R. D. et al. Biodegradation of diesel oil by yeasts isolated from the vicinity of Suape port in the state of Pernambuco, Brazil. Braz. Arch. Biol. Technol. 50, 147–152. https://doi.org/10.1590/s1516-89132007000100018 (2007). 39. Płaza, G. A., Zjawiony, I. & Banat, I. M. Use of different methods for detection of thermophilic biosurfactant-producing bacteria from hydrocarbon-contaminated and bioremediated soils. J. Pet. Sci. Eng. 50, 71–77. https://doi.org/10.1016/j.petrol.2005.10.005 (2006). 40. Walter, V., Syldatk, C. & Hausmann, R. Screening concepts for the isolation of biosurfactant producing microorganisms. Adv. Exp. Med. Biol. 672, 1–13. https://doi.org/10.1007/978-1-4419-5979-9_1 (2010). 41. Rodrigues, L. R., Teixeira, J. A., van der Mei, H. C. & Oliveira, R. Physicochemical and functional characterization of a biosurfactant produced by Lactococcus lactis 53. Colloids Surf. B Biointerfaces 49, 79–86. 
https://doi.org/10.1016/j.colsurfb.2006.03.003 (2006). 42. Garg, M., Priyanka, R. & Chatterjee, M. Isolation, characterization and antibacterial effect of biosurfactant from Candida parapsilosis. Biotechnol. Rep. 18, e00251. https://doi.org/10.1016/j.btre.2018.e00251 (2018). 43. Luna, J., Sarubbo, L. & Campos-Takaki, G. A new biosurfactant produced by Candida glabrata UCP 1002: characteristics of stability and application in oil recovery. Braz. Arch. Biol. Technol. 52, 785–793. https://doi.org/10.1590/s1516-89132009000400001 (2009). 44. Sen, S., Borah, S., Bora, A. & Deka, S. Production, characterization, and antifungal activity of a biosurfactant produced by Rhodotorula babjevae YS3. Microb. Cell Fact. 16(1), 56. https://doi.org/10.1186/s12934-017-0711-z (2017). 45. Batista, R. M., Rufino, R. D., Luna, J. M., de Souza, J. E. & Sarubbo, L. A. Effect of medium components on the production of a biosurfactant from Candida tropicalis applied to the removal of hydrophobic contaminants in soil. Water Environ. Res. 82, 418–425. https://doi.org/10.2175/106143009x12487095237279 (2010). 46. Luna, J., Santos Filho, A., Rufino, R. & Sarubbo, L. Production of biosurfactant from Candida bombicola URM 3718 for environmental applications. Chem. Eng. Trans. 49, 583–588. https://doi.org/10.3303/CET1649098 (2016). 47. Elshafie, A., AlKindi, A. Y., Al-Busaidi, S., Bakheit, C. & Albahry, S. N. Biodegradation of crude oil and n-alkanes by fungi isolated from Oman. Mar. Pollut. Bull. 54, 1692–1696. https://doi.org/10.1016/j.marpolbul.2007.06.006 (2007). 48. Prenafeta-Boldú, F. X., de Hoog, G. S., & Summerbell, R. C. Fungal communities in hydrocarbon degradation in Microbial Communities Utilizing Hydrocarbons and Lipids: Members, Metagenomics and Ecophysiology (ed. McGenity T.) 1–36 (Springer, 2018). 49. Morikawa, M., Hirata, Y. & Imanaka, T. A study on the structure–function relationship of lipopeptide biosurfactants. Biochim. Biophys. Acta. Mol. Cell Biol. Lipids 1488, 211–218. https://doi.org/10.1016/S1388-1981(00)00124-4 (2000). 50. Santos, D. K., Rufino, R. D., Luna, J. M., Santos, V. A. & Sarubbo, L. A. Biosurfactants: Multifunctional biomolecules of the 21st century. Int. J. Mol. Sci. 17, 401. https://doi.org/10.3390/ijms17030401 (2016). 51. Campos, J. M. et al. Microbial biosurfactants as additives for food industries. Biotechnol. Prog. 29, 1097–1108. https://doi.org/10.1002/btpr.1796 (2013). 52. Al-Wahaibi, Y. et al. Injection of biosurfactant and chemical surfactant following hot water injection to enhance heavy oil recovery. Pet. Sci. 13, 100–109. https://doi.org/10.1007/s12182-015-0067-0 (2016). 53. Shah, M. H., Sivapragasam, M., Moniruzzaman, M. & Yusup, S. B. A. comparison of recovery methods of rhamnolipids produced by Pseudomonas Aeruginosa. Procedia Eng. 148, 494–500. https://doi.org/10.1016/j.proeng.2016.06.538 (2016). 54. Watanabe, T. Pictorial atlas of soil and seed fungi: Morphologies of cultured fungi and key to species (3rd edition). CRC Press, Boca Raton, FL, USA. https://doi.org/10.1201/EBK1439804193 (2010). 55. Latha, R. & Kalaivani, R. Bacterial degradation of crude oil by gravimetric analysis. Adv. Appl. Sci. Res. 3, 2789–2795. https://doi.org/10.12691/jaem-3-1-5 (2012). 56. Yurkov, A. M. Yeasts of the soil—obscure but precious. Yeast 35, 369–378. https://doi.org/10.1002/yea.3310 (2018). 57. Al-Otibi, F. et al. The antimicrobial activities of silver nanoparticles from aqueous extract of grape seeds against pathogenic bacteria and fungi. Molecules 26, 6081. 
https://doi.org/10.3390/molecules26196081 (2021). 58. Régo, A. P., Mendes, K. F., Bidoia, E. D. & Tornisielo, V. L. DCPIP and respirometry used in the understanding of Ametryn biodegradation. J. Ecol. Environ. 9, 27. https://doi.org/10.5296/jee.v9i1.13962 (2018). 59. Peele, K. A., Ch, V. R. & Kodali, V. P. Emulsifying activity of a biosurfactant produced by a marine bacterium. 3 Biotech 6, 177. https://doi.org/10.1007/s13205-016-0494-7 (2016). 60. Lee, S. C. et al. Characterization of new biosurfactant produced by Klebsiella sp. Y6–1 isolated from waste soybean oil. Bioresour. Technol. 99, 2288–2292. https://doi.org/10.1016/j.biortech.2007.05.020 (2008). ## Acknowledgements The authors would like to extend their sincere appreciation to the Research Supporting Project number: RSP-2021/114, King Saud University, Riyadh, Saudi Arabia. ## Funding This research project was supported by a grant from the Researchers Supporting Project number (RSP-2021/114), King Saud University, Riyadh, Saudi Arabia. ## Author information Authors ### Contributions Both of F.A. and N.M. contributed to the conception, study design, and the editing and reviewing of the intellectual contents. R.M.A. was responsible for the literature search, the experimental applications, and data acquisition. F.A. was responsible for the statistical analysis. Both of N.M. and R.M.A. contributed to the data analysis and manuscript preparation. F.A. was responsible for manuscript editing and reviewing besides, acting as a guarantor and corresponding author. The first two authors contributed equally to this work and should be regarded as co-first authors. All authors listed have made a substantial, direct, and intellectual contribution to the work, and approved it for publication. ### Corresponding author Correspondence to Fatimah Al-Otibi. ## Ethics declarations ### Competing interests The authors declare no competing interests. ### Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Al-Otibi, F., Al-Zahrani, R.M. & Marraiki, N. The crude oil biodegradation activity of Candida strains isolated from oil-reservoirs soils in Saudi Arabia. Sci Rep 12, 10708 (2022). https://doi.org/10.1038/s41598-022-14836-0 • Accepted: • Published: • DOI: https://doi.org/10.1038/s41598-022-14836-0
Monogenic semi-group 2010 Mathematics Subject Classification: Primary: 20M [MSN][ZBL] cyclic semi-group A semi-group generated by one element. The monogenic semi-group generated by an element $a$ is usually denoted by $\langle a\rangle$ (sometimes by $[a]$) and consists of all powers $a^k$ with natural exponents. If all these powers are distinct, then $\langle a\rangle$ is isomorphic to the additive semi-group of natural numbers. Otherwise $\langle a\rangle$ is finite, and then the number of elements in it is called the order of the semi-group $\langle a\rangle$, and also the order of the element $a$. If $\langle a\rangle$ is infinite, then $a$ is said to have infinite order. For a finite monogenic semi-group $A=\langle a\rangle$ there is a smallest number $h$ with the property $a^h=a^k$, for some $k>h$; $h$ is called the index of the element $a$ (and also the index of the semi-group $A$). In this connection, if $d$ is the smallest number with the property $a^h=a^{h+d}$, then $d$ is called the period of $a$ (of $A$). The pair $(h,d)$ is called the type of $a$ (of $A$). For any natural numbers $h$ and $d$ there is a monogenic semi-group of type $(h,d)$; two finite monogenic semi-groups are isomorphic if and only if their types coincide. If $(h,d)$ is the type of a monogenic semi-group $A=\langle a\rangle$, then $a,\dots,a^{h+d-1}$ are distinct elements and, consequently, the order of $A$ is $h+d-1$; the set $$G=\{a^h,\dots,a^{h+d-1}\}$$ is the largest subgroup and smallest ideal in $A$; the identity $e$ of the group $G$ is the unique idempotent in $A$, where $e=a^{ld}$ for any $l$ such that $ld\geq h$; $G$ is a cyclic group, a generator being, for example, $ae$. An idempotent of a monogenic semi-group is a unit (zero) in it if and only if its index (respectively, period) is equal to 1; this is equivalent to the given monogenic semi-group being a group (respectively, a nilpotent semi-group). Every sub-semi-group of the infinite monogenic semi-group is finitely generated. References [1] A.H. Clifford, G.B. Preston, "The algebraic theory of semigroups" , 1 , Amer. Math. Soc. (1961) [2] E.S. Lyapin, "Semigroups" , Amer. Math. Soc. (1974) (Translated from Russian) How to Cite This Entry: Monogenic semi-group. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Monogenic_semi-group&oldid=34717 This article was adapted from an original article by L.N. Shevrin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
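A small computational sketch (not part of the encyclopedia entry) that finds the type $(h,d)$ of the monogenic semigroup generated by an element of $\mathbb Z/n\mathbb Z$ under multiplication; the choice of element and modulus below is arbitrary.

```python
def monogenic_type(a, n):
    """Return (index h, period d) of <a> inside the multiplicative semigroup Z/nZ."""
    seen = {}                 # power value -> exponent at which it first appeared
    value, exponent = a % n, 1
    while value not in seen:
        seen[value] = exponent
        value = (value * a) % n
        exponent += 1
    h = seen[value]            # smallest h with a^h = a^k for some k > h
    d = exponent - seen[value] # smallest d with a^h = a^(h+d)
    return h, d

a, n = 2, 12                  # arbitrary example: powers of 2 modulo 12 are 2, 4, 8, 4, 8, ...
h, d = monogenic_type(a, n)
order = h + d - 1             # number of distinct powers a, a^2, ..., a^(h+d-1)
print(f"type (h, d) = ({h}, {d}), order = {order}")

# the unique idempotent is e = a^(l*d) for any l with l*d >= h
l = -(-h // d)                # ceiling of h/d
e = pow(a, l * d, n)
assert pow(e, 2, n) == e
print("idempotent e =", e)
```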
# American Institute of Mathematical Sciences

March 2009, 2(1): 109-134. doi: 10.3934/krm.2009.2.109

## Stability of the travelling wave in a 2D weakly nonlinear Stefan problem

1 Institut de Mathématiques de Bordeaux, Université Bordeaux 1, 33405 Talence cedex

2 Faculty of Sciences – Mathematics and Computer Science division, Vrije Universiteit Amsterdam, De Boelelaan 1081, 1081HV Amsterdam

3 Dipartimento di Matematica, Università degli Studi di Parma, Viale G. Usberti 85/A, 43100 Parma

Received September 2008; Revised November 2008; Published January 2009

We investigate the stability of the travelling wave (TW) solution in a 2D Stefan problem, a simplified version of a solid-liquid interface model. It is intended as a paradigm problem to present our method, based on: (i) definition of a suitable linear one-dimensional operator; (ii) projection with respect to the $x$ coordinate only; (iii) the Lyapunov-Schmidt method. The main point is that we are able to derive a parabolic equation for the corrugated front $\varphi$ near the TW as a solvability condition. This equation involves two linear pseudo-differential operators, one acting on $\varphi$, the other on $(\varphi_y)^2$, and clearly appears as a generalization of the Kuramoto-Sivashinsky equation related to turbulence phenomena in chemistry and combustion. A large part of the paper is devoted to studying the properties of these operators in functional spaces in the $y$ and $x,y$ coordinates with periodic boundary conditions. Technical results are deferred to the appendices.

Citation: Claude-Michel Brauner, Josephus Hulshof, Luca Lorenzi. Stability of the travelling wave in a 2D weakly nonlinear Stefan problem. Kinetic & Related Models, 2009, 2 (1) : 109-134. doi: 10.3934/krm.2009.2.109
## Method of conjugate gradients.

Keeping all the directions $p_0,\dots,p_{k-1}$ (see the section ( Method of conjugate directions )) is memory consuming, and the procedure for calculation of such vectors is expensive. According to the formula ( Orthogonality of residues 2 ) the residues $r_0,\dots,r_{k-1}$ are linearly independent. We take them to be the initial vectors of the Gram-Schmidt $A$-orthogonalization leading to the directions $p_k$. According to the section ( Gram-Schmidt orthogonalization ) and the summary ( Conjugate directions ),
$$p_k = r_k + \sum_{i<k} \beta_{ki}\, p_i, \qquad \beta_{ki} = -\frac{r_k^T A p_i}{p_i^T A p_i}.$$
(Conjugate gradient residue selection.) According to the formula ( Orthogonality of residues 2 ),
$$r_k^T r_i = 0, \quad i \ne k,$$
and according to the summary ( Conjugate directions ),
$$r_{i+1} = r_i - \alpha_i A p_i, \qquad \alpha_i = \frac{r_i^T r_i}{p_i^T A p_i}.$$
We conclude
$$r_k^T A p_i = \frac{1}{\alpha_i}\left(r_k^T r_i - r_k^T r_{i+1}\right),$$
which vanishes for $i < k-1$. Therefore, when we conduct the $k$-th step of the Gram-Schmidt $A$-orthogonalization, only one term is non-zero in the sum:
$$p_k = r_k + \beta_k\, p_{k-1}, \qquad \beta_k = -\frac{r_k^T A p_{k-1}}{p_{k-1}^T A p_{k-1}}.$$
We would like to remove the matrix multiplication from this relationship. Using $r_k^T A p_{k-1} = -\,r_k^T r_k / \alpha_{k-1}$ (from the two displays above) and $\alpha_{k-1} = r_{k-1}^T r_{k-1} / (p_{k-1}^T A p_{k-1})$, which follows from $p_{k-1}^T r_{k-1} = r_{k-1}^T r_{k-1}$ (a consequence of $r_{k-1}$ being orthogonal to all earlier directions), we combine the last two relationships and obtain
$$\beta_k = \frac{r_k^T r_k}{r_{k-1}^T r_{k-1}}.$$
We collect the results.

Algorithm (Conjugate gradients). Start from any $x_0$. Set $r_0 = b - A x_0$, $p_0 = r_0$. For $k = 0, 1, 2, \dots$ do
$$\alpha_k = \frac{r_k^T r_k}{p_k^T A p_k}, \quad x_{k+1} = x_k + \alpha_k p_k, \quad r_{k+1} = r_k - \alpha_k A p_k, \quad \beta_{k+1} = \frac{r_{k+1}^T r_{k+1}}{r_k^T r_k}, \quad p_{k+1} = r_{k+1} + \beta_{k+1} p_k.$$
To avoid accumulation of round-off errors, occasionally restart with $p = r$, using the last $x$ as the starting point. Violation of the $A$-orthogonality of the $p_k$ is the criterion of error accumulation. Use a condition of the type $\|r_k\| \le \varepsilon \|b\|$ to stop.
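Since the notes' own formulas were lost in extraction, here is a minimal, self-contained C++ sketch of the algorithm as reconstructed above (dense symmetric positive definite matrix, zero starting point, relative-residual stopping rule; these choices and all names are mine, not the notes').

#include <cstddef>
#include <iostream>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;                  // dense, row-major, assumed symmetric positive definite

Vec matVec(const Mat& A, const Vec& x) {
    Vec y(x.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Solve A x = b by conjugate gradients; returns the number of iterations used.
int conjugateGradient(const Mat& A, const Vec& b, Vec& x, double tol = 1e-10) {
    x.assign(b.size(), 0.0);
    Vec r = b;                                 // r_0 = b - A x_0 with x_0 = 0
    Vec p = r;                                 // p_0 = r_0
    double rr = dot(r, r);
    const double stop = tol * tol * dot(b, b); // stop when ||r|| <= tol * ||b||
    int k = 0;
    for (; k < static_cast<int>(b.size()) && rr > stop; ++k) {
        Vec Ap = matVec(A, p);
        double alpha = rr / dot(p, Ap);        // alpha_k = r_k.r_k / p_k.A.p_k
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];              // x_{k+1} = x_k + alpha_k p_k
            r[i] -= alpha * Ap[i];             // r_{k+1} = r_k - alpha_k A p_k
        }
        double rrNew = dot(r, r);
        double beta = rrNew / rr;              // beta_{k+1} = r_{k+1}.r_{k+1} / r_k.r_k
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + beta * p[i];         // p_{k+1} = r_{k+1} + beta_{k+1} p_k
        rr = rrNew;
    }
    return k;
}

int main() {
    Mat A = {{4, 1}, {1, 3}};                  // textbook 2x2 SPD example
    Vec b = {1, 2}, x;
    int iters = conjugateGradient(A, b, x);
    std::cout << "x = (" << x[0] << ", " << x[1] << ") in " << iters << " iterations\n";
}

In practice one restarts with $p = r$ from time to time, as the notes advise, and for ill-conditioned $A$ preconditions the system (see the section ( Preconditioning )).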
# LINE STRENGTHS OF METHANE IN THE 2.2 MICRON REGION

Title: LINE STRENGTHS OF METHANE IN THE 2.2 MICRON REGION
Creators: Hilico, J. C.; Loete, M.; Brown, L. R.
Issue Date: 1985
Publisher: Ohio State University
Abstract: Absolute strengths and wavenumbers of 1500 vibration-rotation lines of natural methane have been measured at 297 K in the spectral region $4433-4719\ cm^{-1}$ from high-resolution spectra recorded on the F.T. spectrometer at Kitt Peak. The accuracy of the strengths is estimated at 3% for clean lines. Most of these lines belong to the $\nu_{2} + \nu_{3}$ combination band. Since the upper-state levels are strongly perturbed, the analysis of strengths on the basis of an isolated-band model (the only one presently available) requires a second-order development of the dipole moment including 8 parameters. The strengths of a strict selection of 229 single lines are reproduced with a relative standard deviation of 5.4%, whereas reduced developments with only 3 parameters (first order) or 1 parameter (zero order) lead to 16% and 133% respectively. The obtained parameters are used to predict the associated $\nu_{3} - \nu_{2}$ difference band.
Description: Author Institution: Laboratoire de Spectronomie Moléculaire, Université de DIJON; Laboratoire de Spectronomie Moléculaire, Université de DIJON; Jet Propulsion Laboratory
URI: http://hdl.handle.net/1811/12082
Other Identifiers: 1985-FB-9
Minimum operations to make XOR of array zero in C++

Problem statement

We are given an array of n elements. The task is to make the XOR of the whole array 0. We can do the following to achieve this. We can select any one element:

• After selecting the element, we can either increment or decrement it by 1, as many times as needed.
• We need to find the minimum number of increment/decrement operations required on the selected element to make the XOR sum of the whole array zero.

Example

If arr[] = {2, 4, 7} then 1 operation is required:

• Select element 2
• Increment it by 1
• Now the array becomes {3, 4, 7} and its XOR is 0

Algorithm

• Find the XOR of the whole array; call it XORsum.
• If we select element arr[i], the XOR of the remaining elements is XORsum ^ arr[i], so arr[i] must be changed to that value. The cost for that element is abs(arr[i] - (XORsum ^ arr[i])).
• The minimum of these costs over all elements is the minimum required number of operations.

Example

#include <iostream>
#include <climits>
#include <cstdlib>
using namespace std;

void getMinCost(int *arr, int n) {
   int operations = INT_MAX;
   int elem = -1;
   int xorValue = 0;
   // XOR of the whole array
   for (int i = 0; i < n; ++i) {
      xorValue = xorValue ^ arr[i];
   }
   // cost of fixing the array by changing arr[i] alone
   for (int i = 0; i < n; ++i) {
      int cost = abs((xorValue ^ arr[i]) - arr[i]);
      if (operations > cost) {
         operations = cost;
         elem = arr[i];
      }
   }
   cout << "Element = " << elem << endl;
   cout << "Minimum required operations = " << operations << endl;
}

int main() {
   int arr[] = {2, 4, 7};
   int n = sizeof(arr) / sizeof(arr[0]);
   getMinCost(arr, n);
   return 0;
}

Output

When you compile and execute the above program, it generates the following output:

Element = 2
Minimum required operations = 1
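To sanity-check the formula, here is a small brute-force comparison (a sketch I added; it is not part of the original tutorial). It changes each element to every value in a small range and confirms that the best achievable cost equals the formula's answer.

#include <algorithm>
#include <cassert>
#include <climits>
#include <cstdlib>
#include <vector>

// Formula-based answer: min over i of |arr[i] - (XOR of the other elements)|.
int minCostFormula(const std::vector<int>& a) {
    int x = 0;
    for (int v : a) x ^= v;
    int best = INT_MAX;
    for (int v : a) best = std::min(best, std::abs((x ^ v) - v));
    return best;
}

// Brute force: replace a single element by any value in [0, bound) and
// take the cheapest replacement that makes the total XOR zero.
int minCostBrute(std::vector<int> a, int bound = 64) {
    int best = INT_MAX;
    for (std::size_t i = 0; i < a.size(); ++i) {
        int original = a[i];
        for (int v = 0; v < bound; ++v) {
            a[i] = v;
            int x = 0;
            for (int w : a) x ^= w;
            if (x == 0) best = std::min(best, std::abs(original - v));
        }
        a[i] = original;
    }
    return best;
}

int main() {
    std::vector<std::vector<int>> tests = {{2, 4, 7}, {1, 2, 3}, {5}, {10, 15, 20, 25}};
    for (const auto& t : tests) assert(minCostFormula(t) == minCostBrute(t));
    return 0;
}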
# A NOTE ON EIGENFUNCTIONS OF THE LAPLACIAN OF WARPED PRODUCTS • PARK, JEONG-HUEONG (Dept. of Mathematics, Honam University)
# Calculate: -8-13

## Expression: $-8-13$

Factor out the negative sign from the expression:
$$-8-13 = -\left( 8+13 \right) = -21$$
## General developer forum

### Can you set form values at the validation() step?

Can you set form values at the validation() step?

I'm trying to set data in a hidden mod_form input from validation(), but even though I can see that the value has been set if I check the input value afterward, once I get to add_instance() in lib.php the hidden input value is gone.

The reason I want to set this value from validation() is that I'm doing a series of validations and requests to check that a YouTube video ID is valid. If it is, I calculate the video's aspect ratio from the YouTube API response I got. So it looks a bit like...

function validation($data, $files) {
    ...
    if ($yt) {
        $input = $mform->getElement("aspect_ratio[$i]");
        $input->setValue($aspectratio);
    }
}

Assuming everything else validates, the data passed to add_instance() has an empty value for this element. All the other data is there.

Is there a way to do this the way I'm trying to do it, or do I need to replicate that logic from definition_after_data() instead? To me it feels illogical to try to calculate this aspect ratio on any user-entered data without first validating that it's an actual valid YouTube video with a valid response from the YouTube API. I guess I could also do it from add_instance(), but that means querying the YouTube API once more to get the exact same data I just pulled during validation.

Thanks for any help Moodle community!

Re: Can you set form values at the validation() step?

To get data out of the validation process, don't try to save it into a hidden form field. Just save it as $this->computedaspectratio (or something), and add a $this->get_computed_aspect_ratio() method (or something similar). (Or, you could override get_data() to include the extra computed values.)

Re: Can you set form values at the validation() step?

You're right, that worked. I ended up using data_postprocessing(), which works just like get_data().
## Delayed Stellar Mass Assembly in the Low Surface Brightness Dwarf Galaxy KDG 215

We present H I spectral line and optical broadband images of the nearby low surface brightness dwarf galaxy KDG 215. The H I images, acquired with the Karl G. Jansky Very Large Array, reveal a dispersion-dominated interstellar medium with only weak signatures of coherent rotation. The H I gas reaches a peak mass surface density of 6 M$_{\odot}$ pc$^{-2}$ at the location of the peak surface brightness in the optical and the ultraviolet. Although KDG 215 is gas-rich, the Hα non-detection implies a very low current massive star formation rate. In order to investigate the recent evolution of this system, we have derived the recent and lifetime star formation histories from archival Hubble Space Telescope images. The recent star formation history shows a peak star formation rate ~1 Gyr ago, followed by a decreasing star formation rate to the present-day quiescent state. The cumulative star formation history indicates that a significant fraction of the stellar mass assembly in KDG 215 has occurred within the last 1.25 Gyr. KDG 215 is one of only a few known galaxies that demonstrate such a delayed star formation history. While the ancient stellar population (predominantly red giants) is prominent, the look-back time by which 50% of the mass of all stars ever formed had been created is among the youngest of any known galaxy.

Publication Date: Aug 29 2018
Date Submitted: Jun 21 2019
Citation: Astrophysical Journal Letters, 864, 1
# A Sustainable Future: Vaclav Smil, author of How the World Really Works ### Listen to Jason Mitchell discuss with Vaclav Smil, academic and author of the New York Times bestseller How the World Really Works, what the energy transition by 2050 realistically means. What does the data say about our net zero ambitions? Listen to Jason Mitchell discuss with Vaclav Smil, academic and author of the New York Times bestseller How the World Really Works, what the energy transition by 2050 realistically means; how energy transitions have evolved historically; and what are the real implications when people talk of a climate ‘earthshot’ Recording date: 22 September 2022 Vaclav Smil Vaclav is Distinguished Professor Emeritus at the University of Manitoba. Regarded as being among the most important thought leaders of our time, he’s the author of forty-five books and over 500 papers, including the New York Times bestsellers How the World Really Works and Energy and Civilization: A History. One of Bill Gates’ favourite authors, Vaclav has spent his career exploring new ground in the fields of energy, environmental and population change, food production and nutrition, technical innovation, risk assessment and public policy. He’s been named by Foreign Policy as one of the Top 100 Global Thinkers. ## Episode Transcript ##### Note: This transcription was generated using a combination of speech recognition software and human transcribers and may contain errors. As a part of this process, this transcript has also been edited for clarity. Jason Mitchell: I'm Jason Mitchell, head of Responsible Investment Research at Man Group. You're listening to A Sustainable Future, a podcast about what we're doing today to build a more sustainable world tomorrow. Hi, everyone. Welcome back to the podcast and I hope everyone is staying well. Here's a special holiday present to you from the team behind A Sustainable Future podcast. For context, I've been after Vaclav Smil for several years now to get him on the podcast. As one of the preeminent thinkers and authors on historical development and transitions, Vaclav has long been a go-to research source for me. I finally managed to interview him at a Man Group conference this past September, and I can confirm that he is indeed a force of nature. Frankly, that probably comes across best in his prolific body of work, rather than a live interview, which, at least in my experience, is always a bit challenging. Add to the fact that I was almost surreally interviewing a 12-foot image of his disembodied head via Zoom, and you'll get what I mean. But because Vaclav does so few interviews, it's an immense privilege to be able to have this conversation with one of the leading thinkers of the energy transition. And I think his data-driven approach and his sometimes sobering candidness about the challenges we face are obvious in this episode. But I don't see this as pessimism. I read his message as a voice of uncomfortable but necessary truths. With more than 10 books on energy, Vaclav's work is important because he brings a clear-eyed perspective on the implications of the energy transition. We talk about what the energy transition by 2050 realistically means. How energy transitions have evolved historically and why the analogy of a climate earthshot is fundamentally different from that of a moonshot. Vaclav is distinguished professor emeritus at the University of Manitoba, regarded as among the most important thought leaders of our time. 
He's the author of 45 books and over 500 papers, including the New York Times bestseller, How the World Really Works and Energy and Civilization. Vaclav has spent his career exploring new ground in the fields of energy, environmental and population change, food production and nutrition, technical innovation, risk assessment, and public policy. He's been named by foreign policy as one of the top 100 global thinkers. Welcome, Vaclav. Vaclav Smil: Hello. Jason Mitchell: Excellent. You have the books. The one thing I would say if you're not familiar with his work... My favorite quote is from Bill Gates. It's actually a tweet that he sent and he said that he looks forward to Vaclav books like some people look forward to Star Wars movies. So, it's sort of a testament to his influence. Vaclav, let's start with some scene setting. There are a number of pervasive topics in your research. Two of them specifically. First, you talk about the almost incomprehensible immensity of the primary energy system, the fact that it's still 85% fossil-fuel based. And you also talk about the fact that energy transitions are nothing new. You've written about the fact that we've transitioned from wood to coal, from coal to oil, oil to natural gas. What does history teach us about these transitions? And I also want to be a little bit more provocative. Are we naive in thinking that we can compress and accelerate this current energy transition all while cutting carbon emissions 50% by 2030? Vaclav Smil: Some very simple calculations here so you could judge for yourself. Suppose you know nothing at all about the world energy system or energy consumption, you have never had a single course in engineering, you don't know any mark beyond simple algebra. But just think of these numbers. Basically, now people say by 2050 people... like these zero and five endings. So, 2050 there'll be zero carbon in the world. So, we have 28 years to get a zero carbon. So, let's back 28 years back to 1994. In 1994, the global primary energy consumption, all fuels, all primary electricity was 86% fossil fuel. 1994, 86% fossil. 2022 is 82% fossil. So, we've gone down 4% relatively, but in absolute terms actually we have massively increased fossil fuel consumption because of the rise of China and rise of India, actually. But relatively speaking, we've gone down 4% in last 28 years. Now I ask simple question, how likely it is that we will go down 82% in next 28 years, right? As simple as that. We can go home basically after this statement, right? Because the acceleration needed in still going 4% down in 28 years to 82% down in 28 years, I just don't know any historical parallel to that. As you noted, I never start telling people how massive the system is... And we could spend the rest of the day reciting the numbers. More than eight billion, all these tons... 10 to nine, 10 to... more than eight billion tons of coal, more than 4 billion tons of crude oil, more than 4 trillion cubic meters of natural gas, and so, down the road. When you do these numbers like that, you cannot just simply say like an old telephone, a new mobile. Well, billions and trillions necessarily the infrastructure, simply the material behind it, steel, concrete, copper behind it, you just simply cannot say by 2030 or 2035. Maybe just one example of which we have been largely deprived in past two years, and has been flying. By 2019 we reach this [inaudible 00:05:57], eight trillion revenue passenger kilometers. Eight trillion. More than eight billion people traveled. 
Basically every person, statistically speaking, traveled on a jetliner. These are massive machines which can get 300, 400, 500 people and they can fly also for 17 hours, thanks to what? Thanks to fossil fuel. Because the energy density of kerosene, which powers this airplane, is 12,000 watt hours per kilogram. The best battery is today at 300 watt hours per kilogram. That's 40 times more in kerosene. So, how can you change these massive things rapidly? It's just simply impossible. So a little bit of basic engineering, scientific literacy would go a long way to say that we just cannot do it that rapidly. It's easy to say 2035, 2050, but to accomplish that practically, not so easy. Jason Mitchell: So Vaclav, there's a temptation to analogize the energy transition to other human- Vaclav Smil: Oh, yes. Jason Mitchell: ... We talk about a moonshot and we also talk about an earthshot in the context of rearchitecting the energy system. Why is that comparison problematic given all the systems? Vaclav Smil: It's another example of missing basic numbers because we actually have excellent numbers... What I've heard recently is comparing it to the big sort of do-or-die projects. One of those, of course, is developing the nuclear weapon during the Second World War to beat the Germans. As it turned out, Germany didn't have to be beaten because the [inaudible 00:07:31]. But in three years the Manhattan Project spent, in today's money, something like $25 billion. This is nothing. $25 billion over three years, so it worked out to about 0.2% of GDP in the US at the time. That's nothing. Mind you, most of the people didn't even know that there was a Manhattan Project going on. It was so secret that when Roosevelt died, they had to tell Harry Truman, the vice president, "Sir, we are working on this thing called the nuclear bomb project." While with the energy transition, every person of eight billion people would be affected by that. Now the next is the moonshot, right? We didn't have to go to the moon, but it was the [inaudible 00:08:08] beating the Soviets, right? We have detailed accounts of the moonshot, 12 years between 1960 and 1972. Divided by those 12 years, it cost again about 0.2% of American GDP per year. And it cost about $250 billion in today's dollars. While nobody knows the total cost of the energy transition to 2050, 2060. But McKinsey took a stab at it last year and they came up with something like $275 trillion, right? The global GDP is now about $90 trillion and they came out with an estimate of $275 trillion. So again, this is all the [inaudible 00:08:43] of magnitude above any so-called moonshot or nuclear shot or whatever. So again, totally incomparable, and it shows you the magnitude, because if you think about investing $275 trillion only into that thing while we still have a growing population and expectations of economic growth, we would have to be devoting, whatever, 5, 10, 15, 20% of annual GDP just to that thing alone, just to that thing alone. Jason Mitchell: There's a propensity to conceptualize transitions as abstract, as linear, as smooth. And as we all know over the last nine months, the energy transition has been anything but that. And I'd like you to respond to a provocative question that Dan Yergin, the economic historian and energy expert, asked... And mind you that he asked this last winter just after COP26. So, this is when we saw price volatility. We saw in the UK more than 25 wholesale energy providers go bankrupt.
And he asked, "Is this energy shock a one-off resulting from a unique conjunction of circumstances or is it the first of what will be several crises resulting from straining too hard to bring 2050 carbon reduction goals rapidly forward? Potentially prematurely choking off investment in hydrocarbons, thus triggering future shocks?" Vaclav Smil: Well that's already happening, has been actually the investment into development of oil and gas has been very constrained for past decade and especially for past five years. So, we already are getting into a sort of deficit situation. But it's only part of the problem. Even before the Russian invasion, people always forget that we think too much about us. And us I mean the affluent, rich countries. And unfortunately we don't matter that much anymore. There's another number which everybody should know. Let's throw the Britain in after that. So, EU and Britain is less than 6% of global population. So, what does it matter if there's an energy crisis in Europe? What does it matter if Europe is worried about all of it? China is doing quite well. China is buying record numbers of liquified natural gas, coal, oil from around the world. So is India. And let's be clear that Africa will develop whatever Africa will need to develop economically, which means lot more oil and lot more natural gas. Africa is not into transition to zero carbon unless we would pay for it. And let's be clear also that out of another billion plus people coming between now and 2050, 90% of them will come in sub-Saharan Africa. So again, unless we are offering to invest these trillions of dollars for them to become green, they will do what we have done to develop ourselves, to build our cement factories and our steel and our ammonia for fertilizers. They will just simply use as much fossil fuel as they can really. So in a way, the question is [inaudible 00:11:37] what we say we will do, what other people will do is very different. Because you see other people feel totally unconstrained. As I speak, China has underdevelopment and has a 100 gigawatts of coal-fired power capacity. Coal is booming in China. Coal is booming in India and coal will be booming in Africa. So again, reality is versus somebody saying something. Jason Mitchell: Yeah, I was going to ask, I mean to what degree does the Ukraine/Russia conflict change the calculus of the energy transition? And specifically you mentioned the EU. Is the EU energy transition plan which they've rolled out over the last half year, is it a blueprint or is it an irrelevant outlier? Vaclav Smil: It's relevant and irrelevant. It's relevant in the sense that of course they don't have to postpone whatever they wanted to do because now they have to scramble to get just enough fossil fuel, right? So right now it's not a question, although they say that we will re-double and triple our goals forever. But right away they have to ensure that there'll be some heating over winter. And heating over winter will not come from more wind turbines because you cannot build them that fast. And there is whatever, 200 million people in Europe will [inaudible 00:12:47]. So in a way it's very relevant but in a way it's irrelevant because they simply cannot move as fast as they can. Let me give you just one example which is really fundamental in all of this. Suppose they will move very rapidly into electricity, which they need to do. It says by 2035, 100% zero carbon electricity. And of course it will have to come mostly from what? From mostly wind and solar. 
But wind is far more important in Europe because large part of Europe is not that solar, not that sunny, really. Northern France, Northern Germany, Britain, whatever. So, wind is the number one thing. I've been just engaged recently in writing my latest book about materials in calculations comparing the material demand of the number one energy converter in the world today in terms of efficiency, and it's a gas turbine. It's a small thing. Basically you take a turbine, a jet line from the jet engine and just reground it and you've got yourself a great generator of electricity. That turbine needs about six tons of materials, steel, aluminum, titanium alloy, six tons per megawatt. Wind turbine needs about 200 tons per megawatt installed and another 200 tons for massive foundations. That gas turbine, you just put it on a little concrete pad and that's it, really. That wind turbine, you have to anchor it massively because it's a tower, 100 meter tower, plus, and will be subject to also some strains and stresses by the blowing wind. So, now you are replacing one form of energy which requires, let's say, 10 tons of materials per megawatt of electricity by something which requires 400 tons of megawatt of electricity. How do you do that in a hurry? How do you replace all that [inaudible 00:14:37] of your electricity? And the best thing is to... And moreover, the comparison is not strange because the best wind turbine will be working about 40% of the time and will be about 40% efficient. Where the gas turbine can work on command, on demand. Within eight minutes you can start it up and it's more than 60% efficient. So it's superior qualitatively, and in terms of materials you cannot beat it, really. So again, [inaudible 00:15:02], the famous American author, he was ahead of us, he anticipated his wokeness and this disconnect from reality. Early in the year 2000 he wrote, "There are no criteria there, just opinions." And still that time he just exploding. There are no criteria. So materials don't matter for wind turbines. That's every single wind turbines in 2035. The fact that is cost 30, 40, 50 times more material than the gas turbine, who cares. So, let's go all ways to basic and let's examine some basic numbers. Jason Mitchell: I want to talk about your work around growth. You have a book called Growth: From Microorganisms to Megacities and I'm wondering how it informs your thinking around energy and energy consumption. There's a thread around energy use. We talk about energy efficiencies, we talk about technological innovation, we talk about top-down technocratic behavior change as ways to curb growth. On the other side of that, you've got something like the Jevons paradox historically where we make these efficiency gains over time but they are negated by the fact that absolute consumption use keeps coming up. Are we bound by this dynamic going forward? Vaclav Smil: Jevons is one of the great jewels of British empire and it probably will endure forever. Very simple insight, very powerful and it applies to just about everything you look at, really. And it goes even to fact, and in terms to what I'm writing about, it's called dematerialization, that we use less material per dollar of GDP product. That's true. But on the other hand we have something like eight billion mobile phones out there. Just think about the rare materials and all that glass and all that plastic for that. The point is this, that in the western world... 
and this is where most people don't realize it, per capita energy consumption has actually come down in the past 20 years, in the US, in the UK. Because the UK is so much deindustrialized, most people will find it shocking that the UK is now consuming less energy per capita than China. And that's a fact as of the last two years because nothing is made in Britain. Britain is more deindustrialized than Canada, and we never made anything. So, while the energy per capita consumption has been declining in the US, Japan, Europe, it declined to what I call a comfortable but still very high level. So, we consume energy at a level which is still four to five times higher than energy consumption in India and which is 10 to 20 times higher than energy consumption in sub-Saharan Africa. So again, you see this thing that even if we decline our consumption, cut it down even more than we do... Although we are [inaudible] thinking because you may cut it on one end but increase on the other. So as I say, we keep making things more efficient. We'll make our industry more efficient, but we'll buy more SUVs. And an SUV is a two-ton car instead of my Honda Civic, which is a one-ton car. So, the Jevons paradox, in different ways, is always at work in that. But again, as I say, the ball is out of our court because this is the court of the people who consume 10 gigajoules per year, where even Britain consumed 110 and the US consumed 260. So, there is an almost infinite demand for more energy consumption per capita. And even China, at about 110, the rate of Britain, they probably would like to go to Japan's 150. So, even China is not done yet with its increased energy consumption. So yes, we are far, far from a point where we say we are done with per capita energy consumption. Still a lot more room to increase it. Jason Mitchell: Yeah, I guess I would add on to that because it's incredibly easy to listen to this and hear about the immensity of the system and the raw numbers and feel very fatalistic. If rapid decarbonization isn't feasible, what is the next best option? Are we frogs in a pot of boiling water? Is the answer adaptation? Do we bunker down? Vaclav Smil: No, we just simply work on it, do these little drops, little drops. And one of these little drops, I mentioned mobile phones several times. We have billions and billions of them. Their average lifespan now is two years. In many countries, even less. What happens to them? Don't even try to guess what percentage gets recycled. It's just absolutely minimal. They're just thrown away. [inaudible 00:21:38]. If you make a little pile of mobile phones, nowhere in the world can you find a mineral ore which has so much silver, gold and other special metals as in that pile of mobile phones. We just simply throw them away, throw them away by billions. Now we are running into electric vehicles. Every electric vehicle comes with a 400, 500, 600 kilogram battery pack. What will happen when we have tens or hundreds of millions of electric vehicles? We need to recycle these batteries. We are not recycling them at all. So we need recycling. We need to plan what we'll do with things, not just simply... in England, you can see these giant blades of the wind turbines. What happens to these blades? Are we recycling them, or are we digging big trenches and burying them underground? Because they're so difficult to recycle, because they're a composite of several materials. So, we are generating more waste instead of thinking ahead and minimizing waste and then minimizing energy.
So just simply, it's not one big bowl, it's not hydrogen or wind turbines, it's thousands of little things because the system is composed of thousand little things from cars to ammonia to mobiles to heating your house. So, unless we do thousand little things all the time at the same time, we will not get anywhere. It's no one big bowl, thousand little things all the time. It's not defeatist at all, just simply very practical. Jason Mitchell: When we think about the energy trilemma, it's been a pretty powerful model, particularly recently, the fact that policymakers are always trying to balance price affordability, energy security and decarbonization. And I'm wondering how you think about, particularly in this transition, the role of markets and the role of policy makers? In the past with past transitions, markets have obviously always played a role and to some degree energy security. How do you see top down technocratic policy making really driving or affecting this transition? Vaclav Smil: I just focused on one thing, which I think is the worst thing for us, and it's twofold about energy and food. Food being of course the most important form of energy. We got used to cheap food prices and cheap energy prices in rest of the world. In the room are people of certain generation like myself, who might remember in 1950s, early 1950s, food was rationed in England and average family spent 40, 45% of his disposable income on food. Now in Europe it's a little bit more than 10% before this inflation. Let's say 13, 14, 15%. In US and Canada, eight to 9%. And the same for energy. So, both for gasoline and electricity and heating and whatever, and on top of it all, food less than [inaudible 00:24:25], which is historically just incredible because it used to be energy and food like 80%. Food alone used to be 50% [inaudible 00:24:34]. Now it's extremely difficult for politicians to tell people to save energy, to moderate our consumption. We should double the prices or we should triple the prices. But that surely wouldn't the takeaway because the elasticity is not such it's gasoline, so it's food. You will not get any reduction when you increase pricing by 5% or 10%. You've got to double them at least then you will get your elasticity. But that is totally impossible. So I think this is one thing you have caught in our technical success, our managerial success made our food and made our energy so bloody cheap that we have difficulty to rationalize it and say, "No, we are just giving it away." For the sake of the environment, for the sake of future generation, you should pay a lot more for it. Who will say, "Oh, I'm all for it"? So that's I think our basic fundamental problem is where we are in this dilemma. Jason Mitchell: I guess my last question back to the solutions point is that we've seen an array of different technologies, many high cost, whether it's blue hydrogen, green hydrogen, many still nascent. New markets, you mentioned ammonium. I mean those markets outside of agriculture even need to be created. So there's a question mark. How do you think about solving this on a long-term basis and applying, frankly, discount rates on these different types of technologies and their kind of feasibility at mass scale? Vaclav Smil: Well, you see, I think in the first place, again, we must make some basic decisions because we just cannot continue this hodgepodge we have. 
Let me mention, say, cars, because there are about 1.4 billion vehicles on the planet and they are internal combustion engines, and now we are trying to electrify them, right? There are actually some people trying, so everyone is going to tell you everything should be an electric vehicle. This is the best way to go. The number one or number two largest car maker in the world, that's Toyota, says no you can't, it's very... No, no, no, the best way is to have fuel cells in hydrogen fuel cars. Elon Musk calls them fool cars, not fuel cars, because he absolutely hates it. Then you have the people who say no, there should be direct hydrogen fueling, not hydrogen to fuel cell, but direct hydrogen combustion, which is possible. Then you have people say, well ammonia, ammonia is not so difficult to make. You can make ammonia and you can actually burn ammonia in your car and generate power. So, which one will it be? Electric car, fuel cell car, direct hydrogen car, ammonia car, four different types, four different infrastructures? So we have to settle on something. And now we have opted for electric cars. But electric cars, again the materials going into it, graphite, lithium, copper, rare metals, and we have around the world about 16 million of them right now. We need 1.4 billion of them. 1.4 billion. And by the time we get there in 2050, it will be more like 1.6 or 1.7 billion, and now we have 16 million of them. So again, think about the scaling problem in terms of materials, all that graphite, all that copper, all that winding of these electric motors to do that thing. So, first we have to settle down and then we'll say, this is it, we'll start moving in this direction, and by 2031 we say, "Oh, maybe we made a mistake. Maybe that hydrogen was a better way to go." Because actually it's easier to make in whatever green way or whatever. So, we are still in that period where everything is so unsettled that we cannot even say what the future will look like. It's still emerging, yet we are making this vision as if it has already emerged. Steven Desmyter: Okay, thank you very much, Vaclav. Vaclav Smil: Thank you. Okay, bye-bye. Jason Mitchell: I'm Jason Mitchell, thanks for joining us. Special thanks to our guests and of course everyone that helped produce this show. To check out more episodes of this podcast, please visit us at man.com/ri-podcast.