# Junk Drawer

For all those little papers scattered across your desk

# Advent of Code 2019: Day 5

11 May 2020 in Blog

The Advent of Code series is back (from last year…)

## Part 1

Another intcode challenge! Now, we’re given a new program, new opcodes (I/O!!), and addressing modes (encoded inside opcodes). Also, the program counter no longer constantly increments in groups of 4 (i.e., some opcodes are shorter or longer than others in number of operands). Our job is to feed 1 into a diagnostic program and record the output once it’s successful.

### 530eaa2

Just copying the old code and the new input.

### a33986c

This is a big ol’ rewrite. I added an either type, based on Scala’s, for error/result types; I eliminated exceptions completely using it and dedicated constructors; I grouped all the structures inside each other…

Interestingly, this refactor introduced a new mechanism for reading and writing memory. read and write as primitives still operate in terms of Either.R and Either.L (success and failure). But truly carrying out and using the results uniformly is done with Either.lift: for the try* functions, the caller provides an error handler and a success handler (which must return the same type, typically some kind of option or state or some such). The result is a function which will operate on the results of the primitives (which still must be called: either directly, or via the next functions for some easy reads). One such usage is here:

```sml
datatype inst = ADD of arith_addrs | HALT | UNKNOWN of opcode

fun createArith f ip (m : Memory.memory) : inst =
  let
    val newIp = ip + 4
  in
    tryRead (fn a =>
      tryRead (fn b =>
        tryRead (fn d => f {srcA=a, srcB=b, dest=d, newIp=newIp})
          (Memory.next3 ip m))
        (Memory.next2 ip m))
      (Memory.next ip m)
  end
```

We alias tryRead to use the MEM_ERR constructor as an error handler. An arithmetic instruction consists of sequenced reads which are combined into an argument to the given instruction constructor. This is used by the decoder to create instructions for evaluation (which also relies on tryRead to fetch an instruction to decode). In the actual CPU, we employ similar tricks to read several addresses, compute a result, and write it back to memory; the result is a value of type state this time, instead of a value of type inst. In particular, this new form typically necessitates encoding errors in the type system (either for states or instructions, in these examples).

### 95cd8a9

This is a doozy. I managed (finally) to functorize the structures and allow for some separation of concerns. Unfortunately, some things still leak through (as seen in the translation functions from various unspecified types to more concrete ones; this was necessary even with the sharing constraints). Months later, I now see the solution may have been to duplicate some type names and increase the sharing constraints. (The DecoderFn’s constraint that Memory use ints seems unavoidable.) In spite of a few ugly spots, the whole thing comes together remarkably well. I even added an IO signature to encapsulate input and output routines, and an StdIO structure that uses the standard streams. Intcode is built up by applying functors, and then a Reader struct is built using the intcode program type.

### 8679875

At long last, we are ready to support addressing modes. We add a mode to the decoder, as well as encode the notion of parameter and destination “registers” in the type system. A parameter includes a function that carries out the read, while a destination includes a function that carries out the write. We thus enforce that some parameter sets are read from while others are written to. We add a couple of digit-manipulating utilities in order to work with the opcodes. Finally, we add the parameter/destination encoding into the actual decoder. This certainly complicates reads and writes on the instruction execution side, but it is necessary; when creating an instruction, we have to determine what kinds of registers and modes go together, and set up the appropriate arguments. Execution then pulls them apart and invokes the appropriate sequence of functions. In effect, these instruction parameter sets encode the reads and writes, so the CPU has to invoke memory primitives less often. (In fact, I believe the CPU only uses try* functions now.) Lastly, we set up a dummy IO structure that provides the appropriate diagnostic input.

### 1924968

Correcting a bug in the use of modes for writes led me to solve the challenge (the spec is vague on how write modes work…).

## Part 2

More opcodes! Yayyyy! And they implement conditional branching…

### 806e183

Another copy of programs and inputs. I added the new diagnostic code for the challenge here too.

### 80ae8cc

A boring clarification.

### 17d18ef

The actual solution. We add the extra opcodes (a piece of cake at this point: a couple of types, some functions and constructors, and an evaluation function that all build on the read/write mechanisms and the use of functions as values). Most of the heavy work is in the decoder, which creates jump and test instructions. The complex work is in the CPU, though, which has to read appropriate values and handle the results, invoking callbacks as necessary to carry out computations for new instruction pointers or values to test. I’m learning that the try* functions create an inversion of control flow, where reads/writes appear to happen at the bottom of the code (in the latter parameters). This is also where the computation of what values to write appears, somewhat obfuscating the intent. But the resulting state or value is front-and-center (literally).
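For reference, here is a minimal sketch of what the either type and the lift combinator described above could look like; the names are illustrative, not necessarily the ones in the actual repo:

```sml
datatype ('e, 'a) either = L of 'e | R of 'a

(* Combine an error handler and a success handler (which must return
   the same type) into a single function over either-values. *)
fun lift (onErr : 'e -> 'c) (onOk : 'a -> 'c) (x : ('e, 'a) either) : 'c =
  case x of
      L e => onErr e
    | R a => onOk a
```

A try* function is then just lift partially applied to a fixed error handler, waiting for a success handler and the result of a memory primitive.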
I just discovered a way to get R running on my smartphone, with full support for packages, graphics and R Markdown, and no need to connect to an external server. This is really handy for quickly checking R code, trying out ideas and writing blog posts on the go. It works quite well! Here I will show you how to do the same on your Android device. This post was inspired by this answer on StackOverflow.

## Initial setup

Install GNUroot Debian from the Google Play Store. This application effectively gives you a full Linux environment within Android, without rooting your device. It just works.

GNUroot Debian emulates a command line interface or “X Terminal” like on a regular desktop PC. You can work in multiple windows, and you have access to all the files on your Android system. You can use it not only to run R on your phone, but also Python or any other command-line tools supported by GNU/Linux.

## Installing R

To install R, type the following commands in GNUroot Debian, hitting Enter after each line and waiting for the respective processes to finish.

```
apt-get update
apt-get install r-base r-base-dev
```

Now you have R installed on your phone.

## Interactive R commands

Starting an R session is as simple as typing R into the terminal. You will know R is running because the start of the command prompt will change from a hash, #, to a greater-than sign, >. This is just like the usual interactive R user interface (because it is the usual interface). Run commands and load and install packages to your heart’s content. To quit, type q() and hit Enter.

You can run quick R one-liners without diving into a full R session, by using the following syntax.

```
Rscript -e "rnorm(5)" # five random numbers
Rscript -e "print('Hello, world')"
```

Be careful about matching quotation marks. You can also use the syntax

```
R -e "rnorm(5)"
R -e "print('Hello, world')"
```

which does the same thing but prints more verbose output to the console.

Pressing Up and Down on your keyboard lets you cycle through recently used commands, saving you from needing to retype repeated or very similar commands. Usually touch-screen keyboards don’t have arrow keys, but you can install the Hacker’s Keyboard for Android, which does (at least in landscape mode).

## Running scripts

To write nontrivial programmes and not lose your work, you will want to write and run R scripts from files. While you could open a Linux text editor within GNUroot, this is probably more trouble than it’s worth. Dedicated Android text editors are better optimised for use with a small touch screen and don’t need arrow keys or special control commands to work. QuickEdit is a free text editor with syntax highlighting for R and Markdown, line numbering and other useful features.

In your text editor, write a basic R script, like the following.

```r
x <- "Hello"
y <- "world!"
cat(x, y, sep = ', ')
```

Save the file as test.R and put it somewhere easy to find, like your Documents folder.

Reopen GNUroot Debian. Navigate to the directory where you saved your R script file. In my case, I had saved it to Documents, which I reached using

```
cd sdcard/Documents
```

The command cd means “change directory”. Type ls for a list of files and folders in the current directory. If you make a wrong turn, simply type cd .. to go up (back) one folder. Once you’ve found your R script file, evaluate it with

```
Rscript test.R
```

or

```
R -f test.R
```

for more detailed output.

If you prefer working in R to working in a Linux terminal, then you can equivalently do everything within an R session:

```r
getwd()                   # show current directory
list.files()              # equivalent to ls
setwd('sdcard/Documents') # equivalent to cd
source('test.R')          # run the script
```

## Saving output

If your scripts produce output, like plots saved as image files or data in csv files, you can open these with ordinary Android apps and share them. Just keep track of where the files are saved. For example, here is a short script that produces a plot and saves it as a PNG.

```r
n <- 1000
g <- 16
cx <- rep(1:4, each = 4) * n * 2
cy <- rep(1:4, times = 4) * n * 2
t <- 1:1000
x <- rep(t * cos(t), each = g) + cx
y <- rep(t * sin(t), each = g) + cy
png("test.png", width = 600, height = 600)
par(mar = rep(0, 4))
plot(x, y, col = rep(c(2, 4), each = g), asp = 1, axes = FALSE)
dev.off()
```

Run the script and use ls or list.files() to verify that a new file, test.png, has been created. In your Android launcher, go to Files > Local > Internal storage > Documents (the names might vary depending on your device) and you should see your new plot, which you can open and share like any other image.

## R Markdown on Android

Rendering/knitting R Markdown documents requires the R packages knitr and rmarkdown, plus the external application pandoc. Install the R packages using

```r
install.packages('rmarkdown', dependencies = TRUE)
```

If you accidentally choose a CRAN mirror that doesn’t work, you can select another one with

```r
chooseCRANmirror()
```

You also need pandoc. Install it from the command line with

```
apt-get install pandoc
```

After installing, you might need to restart your terminal to ensure pandoc has been properly added to the PATH (and so can be found by R).

Write a minimal R Markdown document in your Android text editor, for example

````
# Hello, world!

I am an *R Markdown* document. A horse has `r 2+2` legs.

Here is some random noise.

```{r}
plot(runif(100), runif(100))
```
````

Convert it using

```
Rscript -e "rmarkdown::render('test.rmd')"
```

And using ls or list.files() or your Android file explorer, you should see the output Markdown, HTML, Word, or PDF document, which you can open in your Android text editor, browser, word processor or PDF reader, respectively. Once again, share or upload it like any other file.

So there we have it: R and R Markdown running on Android! I hope you found this useful. This blog post was written on a Huawei Honor 8 smartphone using QuickEdit text editor.
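If you want to check from within R whether pandoc was actually found (my addition; these helpers are part of the rmarkdown package), you can run:

```r
rmarkdown::pandoc_available() # TRUE if a usable pandoc is on the PATH
rmarkdown::pandoc_version()   # the pandoc version rmarkdown will use
```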
# Listings float with sublistings

I want to have 2 listings side-by-side in a float environment with its own name (i.e. Listing 1 instead of Figure 1) and counter (so that I can have a separate listoflistings besides listoffigures). I also want the subcaptions aligned and not the listings themselves. This is about what I’d imagine:

I don’t see how to get the captions aligned with the subfigure package, but it works fine with the subcaption package:

```latex
\begin{figure}
  \centering
  \subcaptionbox{Subcaption a.}%
    [.47\textwidth]{\includegraphics[width=.45\textwidth]{imga}}
  \subcaptionbox*{}[.03\textwidth]{}
  \subcaptionbox{Subcaption b.}%
    [.47\textwidth]{\includegraphics[width=.45\textwidth]{imgb}}
  \caption{Main caption.}
\end{figure}
```

The problem is that I can’t include lstlistings in the subcaption and that I still don’t know how to get the separate counter and name from figure. Related question about putting 2 listings side-by-side, which isn’t what I have in mind.

I put this together to demonstrate how you can do this. Since you did not give a full example, I can’t guarantee that it will fit right into what you are doing. I used newfloat to create a new float type listing, as you seem to have. minipages are used to set the listings themselves into the sub-floats. Depending on what you have already, this may be enough to fix your issues. I also augmented the example to show referencing and a list of listings.

Within this bigger example, two problems still present themselves:

1. The counter on subrefs is not set correctly; I have added a \refstepcounter to temporarily add 1 to the count to correct this.
2. The \listoflistings puts the two sublisting captions above the main caption. I’m not sure yet why this is happening or if it can be corrected. The fix/hack would be to put the main caption at the top of the float, but that would move it in the final typesetting to the top as well, which may not be desired.

Even with these irregularities, a very nice result is obtained. The MWE code:

```latex
\documentclass{article}
\usepackage{listings}
\usepackage{newfloat,caption}
\usepackage{subcaption}
\usepackage[demo]{graphicx}
\DeclareFloatingEnvironment[fileext=frm,placement={!ht},name=Listing,within=section]{listing}
\begin{document}
\tableofcontents
\listoflistings
\section{One}
\section{Two}
\begin{listing}
\refstepcounter{listing}
\noindent\begin{minipage}[b]{.45\textwidth}
\begin{lstlisting}
add x1, x1, 5
stp fp, lr, [sp, #-16]!
add x5, x6, x7
\end{lstlisting}
\captionof{sublisting}{Some caption.}
\label{lst:codeblockA}
\end{minipage}%
\hfill
\begin{minipage}[b]{.45\textwidth}
\begin{lstlisting}
ldr x5, [x5]
\end{lstlisting}
\captionof{sublisting}{Some other caption.}
\label{lst:codeblockB}
\end{minipage}
\caption{Some main caption.}
\label{lst:codeblocks}
\end{listing}

Listing \ref{lst:codeblocks} contains sublistings \subref{lst:codeblockA}
and \subref{lst:codeblockB}.
\end{document}
```
## Stream: general

### Topic: Instance resolution failures

#### Sebastian Ullrich (Sep 08 2019 at 17:46):

> @Chris Hughes: I made a repository of examples of type class failures in Lean 3. Most of them depend on using some old commit of mathlib. The repository is here

Thanks, this is a good first step. If we want to get serious about analyzing and fixing these issues in Lean 4 (I hope nobody expects a solution for Lean 3), it would help tremendously to reduce them to minimized examples in stand-alone files (we want to port mathlib to Lean 4 after fixing these issues, after all). If they are already valid Lean 4, even better. We may even want to port them to other systems to analyze their behavior.

#### Mario Carneiro (Sep 08 2019 at 17:57):

It's not hard to cook up artificial examples, but I thought you wanted real examples

#### Mario Carneiro (Sep 08 2019 at 17:58):

In short it checks the same typeclass problem many times. This causes issues with looping instances, and exponential worst case behavior on dags

#### Sebastian Ullrich (Sep 08 2019 at 20:55):

We'd like to have realistic examples, but we'll still need to extract them if we want to test them in Lean 4

#### Floris van Doorn (Sep 09 2019 at 00:23):

Here is an unrealistic example which shows that diamonds cause an exponential blow-up in the type class inference search:

```lean
-- Lean 3 code
set_option old_structure_cmd true

class bottom (α : Type) (n : ℕ) :=
(x : bool)

class left (α : Type) (n : ℕ) extends bottom α n
class right (α : Type) (n : ℕ) extends bottom α n
class top (α : Type) (n : ℕ) extends left α n, right α n

instance unrealistic_loop (α n) [bottom α n] : top α n.succ :=
{ .._inst_1 }

set_option trace.class_instances true
example : top unit 10 := by apply_instance -- fails, but slowly
```

Even though this example is unrealistic, diamonds occur all over the algebraic hierarchy, and I don't think there is a way of avoiding diamonds from the algebraic hierarchy. And this is not a large amount of diamonds: there is already a significant slowdown with 11 diamonds on top of each other. I think mathlib has more diamonds (more parallel instead of serial).

#### Floris van Doorn (Sep 09 2019 at 00:24):

There should really be a different algorithm for these "forgetful instances". Make a graph of all instances that change the class but not the type (like add_comm_group α → add_group α) and when you have searched through all structural instances (or all instances with priority >= default priority), then use a graph reachability algorithm to quickly search for a path to any instance in the local context.

#### Floris van Doorn (Sep 09 2019 at 00:41):

I don't know if that is the best way to deal with diamonds, but it should avoid the exponential blowup.

#### Sebastian Ullrich (Sep 09 2019 at 08:11):

Thanks Floris. So are repeated (metavariable-free) instance problems the only big issue? Are out_params more or less well behaved or were there fundamental issues as well?

#### Keeley Hoek (Sep 09 2019 at 09:20):

@Scott Morrison do you think the category theory morphism problems deserve a mention here?

#### Scott Morrison (Sep 09 2019 at 09:36):

@Keeley Hoek which did you have in mind?

#### Keeley Hoek (Sep 09 2019 at 09:37):

I meant the whole business with having to specify the morphism universe everywhere because the resolver can't be coerced into being more aggressive.

#### Scott Morrison (Sep 09 2019 at 09:39):

Oh, I see. Yes, it would be lovely if instance resolution would specialise universes in preference to failing.
#### Keeley Hoek (Sep 09 2019 at 09:43):

I'm not so familiar with out_param, but maybe an annotation like that saying to aggressively unify could be a fix.

#### Floris van Doorn (Sep 10 2019 at 05:25):

@Sebastian Ullrich I don't feel confident to judge whether this is the only issue. My gut feeling is that there are more issues, but it's not always easy to say which problem is causing a type class search to time out. I have not worked with out_params myself, so I cannot judge them.

#### Rob Lewis (Sep 10 2019 at 05:39):

What's your timeline for collecting examples here? I'm limiting my Zulip time right now because I have too much else going on. But there are some bad instance searches I half-remember, and probably have saved on my office computer, I can try to write up once I get back there.

#### Sebastian Ullrich (Sep 10 2019 at 08:02):

@Rob Lewis There are always other things to work on for us, so I'd say take your time

#### Mario Carneiro (Sep 10 2019 at 08:15):

out_param is finicky to use correctly, but AFAIK it doesn't cause any performance problems other than the preexisting ones

#### Mario Carneiro (Sep 10 2019 at 08:16):

I would say as long as you can make it not asymptotically exponential everything else will be minor

#### Mario Carneiro (Sep 10 2019 at 08:18):

TBH it's surprisingly fast considering how much work it is currently doing

#### Scott Morrison (Sep 10 2019 at 08:39):

I wonder if just looking through the 53 current instances of set_option class.instance_max_depth ... in mathlib would provide some interesting examples.

#### Sebastien Gouezel (Sep 10 2019 at 11:17):

Orthogonal to instance failures, but the following has been painful for me in manifolds.

```lean
class my_source (α : Type) :=
(source : set α)

lemma foo (α : Type) (A : set (my_source α)) :
  ∀ e ∈ A, e.source = e.source
```

is not understood by Lean, as it is not able to infer the type of e in the statement of foo (even though e ∈ A should tell everything). So I have to give explicitly the type of e, which is not bad here but can be bad in more complicated instances. I know unification is complicated, but still...

#### Kenny Lau (Sep 10 2019 at 11:21):

I think the issue is that you shouldn't use projection with class.

#### Sebastien Gouezel (Sep 10 2019 at 11:27):

Sorry, I shouldn't have used class. Same outcome with structure instead.

#### Reid Barton (Sep 11 2019 at 21:22):

Here is a pretty egregious failure:

```lean
import algebra.module
set_option trace.class_instances true

example {T : Type} (t : T) : T := (1 : ℚ) • t
```

Obviously the search has_scalar ℚ T cannot succeed, but this is not obvious to Lean

#### Reid Barton (Sep 11 2019 at 21:23):

I know I keep going on about GHC, but in GHC this kind of thing will fail in constant* time (* okay, probably logarithmic in something)

#### Reid Barton (Sep 11 2019 at 21:24):

regardless of the number of classes or instances in the system or the depth of the class hierarchy

#### Sebastien Gouezel (Sep 12 2019 at 13:35):

I just stumbled on a behavior I don't understand. Involving out_params, structures and typeclasses, so something a little bit exotic, and probably the behavior will be completely different in Lean 4, but still let me record it here.
Everything is fine with

```lean
def local_homeomorph.is_mdifferentiable (f : local_homeomorph M M') :=
  (mdifferentiable_on 𝕜 f.to_fun f.source) ∧ (mdifferentiable_on 𝕜 f.inv_fun f.target)

lemma mdifferentiable_atlas (h : e ∈ atlas M H) : e.is_mdifferentiable 𝕜 :=
⟨mdifferentiable_on_atlas_to_fun 𝕜 h, mdifferentiable_on_atlas_inv_fun 𝕜 h⟩
```

But if I switch the definition to a structure, with

```lean
structure local_homeomorph.is_mdifferentiable (f : local_homeomorph M M') : Prop :=
(diff_to_fun : mdifferentiable_on 𝕜 f.to_fun f.source)
(diff_inv_fun : mdifferentiable_on 𝕜 f.inv_fun f.target)
```

then the lemma is not accepted any more, failing with 4 messages

```
don't know how to synthesize placeholder
context:
𝕜 : Type u_1,
_inst_1 : nondiscrete_normed_field 𝕜,
E : Type u_2,
_inst_2 : normed_group E,
_inst_3 : normed_space 𝕜 E,
H : Type u_3,
_inst_4 : topological_space H,
I : model_with_corners 𝕜 E H,
M : Type u_4,
_inst_5 : topological_space M,
_inst_6 : manifold H M,
_inst_7 : smooth_manifold_with_corners 𝕜 E H M,
e : local_homeomorph M H,
h : e ∈ atlas M H
⊢ Type ?
```

I have several typeclass variables in scope, and I guess the main missing piece of information is the only exotic typeclass definition with out_params, i.e.,

```lean
/- Specialization to the case of smooth manifolds with corners, over a field 𝕜 and
with infinite smoothness to simplify. The set E is a vector space, and H is a model
with corners based on E. When 𝕜 is fixed, the model space with corners (E, H) should
always be the same for a given manifold M. Therefore, we register it as an out_param:
it will not be necessary to write it out explicitly when talking about smooth
manifolds. This is the main point of this definition. -/
class smooth_manifold_with_corners (𝕜 : Type*) [nondiscrete_normed_field 𝕜]
  (E : out_param $ Type*) [out_param $ normed_group E] [out_param $ normed_space 𝕜 E]
  (H : out_param $ Type*) [out_param $ topological_space H]
  [I : out_param $ model_with_corners 𝕜 E H]
```

#### Wojciech Nawrocki (Mar 25 2020 at 14:52):

Why does Lean fail to synthesize a typeclass after I name it in a variables block? That is, given the prelude

```lean
import category_theory.monad
open category_theory

universes v₁ v₂ v₃ u₁ u₂ u₃ -- declare the v's first; see category_theory.category for an explanation
variables {C : Type u₁} [𝒞 : category.{v₁} C]
include 𝒞
```

the following fails

```lean
variables (T : C ⥤ C) [𝕋 : monad.{v₁} T]
/- algebras.lean:12:42: error
failed to synthesize type class instance for
C : Type u₁,
𝒞 : category C,
T : C ⥤ C
-/
```

while this works

```lean
variables (T : C ⥤ C) --[monad.{v₁} T]
```

as does this

```lean
variables (T : C ⥤ C) [monad.{v₁} T]
```

What does adding the name do that breaks inference?

#### Reid Barton (Mar 25 2020 at 15:01):

The category theory stuff is a distraction! The auto-inclusion of [] variables only occurs for ones that are not named, I think.

#### Wojciech Nawrocki (Mar 25 2020 at 15:08):

Do you know why that is? Is it to make sure I don't refer to the particular instance?

#### Reid Barton (Mar 25 2020 at 15:43):

It is just a design choice I think. I mean, the default is to not include variables.

#### Reid Barton (Mar 25 2020 at 15:44):

For unnamed [] variables the rule is just always include all those that mention an included variable. It doesn't include them only if they are actually needed or anything like that.

#### Wojciech Nawrocki (Mar 25 2020 at 15:53):

Ah right, so that's why we need include 𝒞. Thanks!
#### Reid Barton (Mar 25 2020 at 16:07):

The business with 𝒞 is a specific Lean bug related to the fact that category C has a universe variable which doesn't appear in the type of C. Otherwise we wouldn't name 𝒞 either.

#### Kevin Buzzard (Mar 25 2020 at 16:11):

Can this bug be fixed in community lean?

#### Reid Barton (Mar 25 2020 at 16:12):

I think it should be easy. lean#146

#### Kevin Buzzard (Mar 25 2020 at 16:14):

Oh nice, will hopefully be fixed in 3.8

#### Reid Barton (Mar 25 2020 at 16:15):

when I looked at this originally (long before the community fork) it seemed that the elaborator basically already had the machinery to do this, it just wasn't being used in this particular case
# Overview of the DocumentTools:-Components Package

XML constructors for Embedded Components

## Description

- The Components package provides commands for generating XML as function calls which represent Embedded Components.
- When combined with XML generated by the constructors of the Layout Constructors package, the XML representation of a complete Worksheet or Document can be generated.
- The XML representation of a complete Worksheet or Document can be inserted directly into the current document using the InsertContent command.

## Compatibility

- The DocumentTools[Components] package was introduced in Maple 2015.
- For more information on Maple 2015 changes, see Updates in Maple 2015.
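The help page above does not show an example, so here is a rough sketch of the workflow it describes: build component XML with a Components constructor, wrap it with Layout constructors, and insert it with InsertContent. This is written from memory of the Maple 2015 documentation pattern; the exact constructor names and options should be checked against the per-command help pages.

```maple
with(DocumentTools):
with(DocumentTools:-Layout):
with(DocumentTools:-Components):

# Construct XML for a button component, place it inside worksheet
# layout elements, and insert the result into the current document.
B := Button("Press Me", identity = "Button0"):
xml := Worksheet(Group(Input(Textfield(B)))):
InsertContent(xml):
```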
## College Algebra (6th Edition)

False; the corrected statement is $\displaystyle \log\left(\frac{x+2}{x-1}\right) = \log(x+2)-\log(x-1)$

The LHS of the original statement, $\displaystyle \frac{\log(x+2)}{\log(x-1)}$, is a quotient OF logarithms. No rule/property exists with which we can expand or simplify it. The RHS is a difference of logarithms, which appears in the Quotient Rule (on its RHS): $\displaystyle \log_{\mathrm{b}}\left(\frac{\mathrm{M}}{\mathrm{N}}\right)=\log_{\mathrm{b}}\mathrm{M}-\log_{\mathrm{b}}\mathrm{N}$, so the statement's RHS equals $\displaystyle \log(x+2)-\log(x-1)=\log\left(\frac{x+2}{x-1}\right),$ which does not equal the LHS of the problem statement. So, the statement is false. To make it true, change the LHS to $\displaystyle \log\left(\frac{x+2}{x-1}\right)$.
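A quick numeric spot-check (my addition, using common logarithms and $x=3$) shows how far apart the two sides of the original statement are:

$$\frac{\log 5}{\log 2}\approx\frac{0.699}{0.301}\approx 2.32, \qquad \log 5-\log 2=\log 2.5\approx 0.398.$$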
# What does it mean to logically imply another predicate?

Consider the following predicate formulas.

$$F1: \forall x \exists y ( P(x) \to Q(y) ).$$
$$F2: \exists x \forall y ( P(x) \to Q(y) ).$$
$$F3: \forall x P(x) \to \exists y Q(y).$$
$$F4: \exists x P(x) \to \forall y Q(y).$$

Answer the following questions with brief justification.

(a) Does $F1$ logically imply $F2$?
(b) Does $F1$ logically imply $F3$?
(c) Does $F1$ logically imply $F4$?
(d) Does $F2$ logically imply $F1$?
(e) Does $F2$ logically imply $F3$?
(f) Does $F2$ logically imply $F4$?
(g) Does $F3$ logically imply $F1$?
(h) Does $F3$ logically imply $F2$?
(i) Does $F3$ logically imply $F4$?
(j) Does $F4$ logically imply $F1$?
(k) Does $F4$ logically imply $F2$?
(l) Does $F4$ logically imply $F3$?

"Logically implies" really confuses me.

Attempt: Writing it in English first:

F1: We have that $x$ never satisfies $P$, or there is a $y$ that satisfies $Q$.
F2: For some $x$, $P$ isn't satisfied, or $y$ always satisfies $Q$.
F3: Some $x$ doesn't satisfy $P$, or some $y$ satisfies $Q$.
F4: $x$ never satisfies $P$, or $y$ always satisfies $Q$.

By definition of logical implication: a formula $F$ logically implies a formula $F'$ iff every interpretation that satisfies $F$ satisfies $F'$.

So: (a) For $F1$, $x$ never satisfies $P$, meaning it'll always be true, whereas some $x$ can satisfy $P$ in $F2$; thus this doesn't logically imply. (b) ... yeah, I don't know how to explain any of this; any assistance, even tips, is appreciated. I'm stumped.

Apparently (b) is true, but I don't get it: according to $F1$, $x$ never satisfies $P$, and by propositional logic $p \to q$ is (not $p$) or $q$, so $F1$ is always true; whereas in $F3$ there can exist a $P$ that satisfies $Q$ and some $y$ that doesn't satisfy $Q$, so not every interpretation satisfies it? I'm understanding this poorly.

---

$F_1$ says that for every $x$, there exists a $y$ such that $P(x)$ implies $Q(y)$. $F_2$ says that there exists an $x$ such that for all $y$, $P(x)$ implies $Q(y)$. $F_3$ says that if $P(x)$ holds for every $x$, then there exists a $y$ such that $Q(y)$ holds. $F_4$ says that if $P(x)$ holds for some $x$, then $Q(y)$ holds for every $y$.

The first one means that for each value of $x$, there exists some value of $y$ (possibly depending on $x$) such that $P(x)$ implies $Q(y)$. The second one says that there is a single $x$ that works for all $y$ simultaneously. These aren't the same, which is what you have pointed out too, and you can solve the rest of the pairs in the same way. Also, a good approach is to try some concrete examples and see whether the implication holds or not. It makes the intuition a bit easier.

EDIT: If you want more help with logical implications, there's a book called How To Prove It by Daniel Velleman whose first two chapters deal with logic, which is a good read.
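To make (a) concrete, here is a countermodel (my example, not from the original answer): take the domain $\{1,2\}$, let $P(1)$ and $P(2)$ both be true, and let $Q(1)$ be true and $Q(2)$ be false. Then $F1$ holds: for every $x$ we can pick $y=1$, and $P(x)\to Q(1)$ is true. But $F2$ fails: whichever $x$ we pick, taking $y=2$ makes $P(x)\to Q(2)$ false. So there is an interpretation satisfying $F1$ but not $F2$, hence $F1$ does not logically imply $F2$.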
Paper ID: 1841
Title: Clustered factor analysis of multineuronal spike data

Current Reviews

Submitted by Assigned_Reviewer_19

Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)

1841 - Clustered factor analysis of multineuronal spike data

This paper presents an important extension of the previous PLDS model by allowing disjoint latent dynamics for subpopulations.

~~ Quality

The paper presents state-of-the-art modeling and optimization techniques.

~~ Clarity

The model is described clearly. 8 pages is clearly not enough to describe everything and show detailed results, but the appendix describes more details of the inference.

~~ Originality

This paper is largely based on the previously successful PLDS model, where a latent linear dynamical system is observed through Poisson processes. The key advancement in the model is the concept of subpopulation. For fitting the model, they propose sophisticated initialization procedures and compare methods.

~~ Significance

I believe this paper is a significant conceptual and technical progress towards better analysis of population neural data.

I do not understand why the latent linear dynamics for each subpopulation are allowed to interact. In other words, why is the matrix A not block diagonal? Wouldn't it allow any mixture factor model to be represented as well? It blurs the “subpopulation” interpretation. I would like to see a justification/explanation for allowing such interaction.

This tool is more useful as a confirmatory analysis than as an exploratory analysis. The addition of non-negativity in C for biological interpretation self-demonstrates its weakness.

~~ Minor

The last paragraph in 2.2 is confusing (duplicate info?).

The paper presents state-of-the-art modeling and optimization techniques towards a conceptual progress for population neural data analysis.

Submitted by Assigned_Reviewer_41

Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)

This paper presents a clustered factor analysis/latent linear dynamical systems model for neural population data that contains distinct, non-overlapping clusters of neurons that are each governed by low-dimensional dynamics. Overall the paper is well written and the results on the toy data seem to indicate that the approach works. However, my enthusiasm is substantially dampened because I got stuck at several places due to oddities in the terminology, the model definition and the results, which made it hard to properly evaluate the work.

1. I don’t see how the proposed model relates to the cited mixture models (e.g. mixture of factor analyzers). In a mixture model we have p(x) = \sum_i w_i p_i(x), i.e. observations are assumed to be generated by individual mixture components. In the present paper, in contrast, the dimensions (neurons) – not the observations – are clustered. The result is a certain block structure in the loading matrix, but as far as I can tell it’s still essentially a factor analysis model. The authors should explain more clearly what’s the mixture distribution.

2. The definition of s as a multinomial random variable doesn’t make sense to me.
If s indicates the cluster label, it should be a simple categorical RV (or binary, if there is one dimension per cluster). The multinomial distribution measures the number of successes for N draws of a categorical RV. I don’t understand why one would use that distribution for s. Also, in line 131, why are there K different sets of \phi, i.e. K*M parameters? Shouldn’t we just need M parameters specifying the probability of neurons belonging to cluster m?

3. Why is the loading matrix (C) of the mixPLDS model shown in Fig. 2B not block-diagonal? If the zero structure is not enforced during training, how is this model different from a normal PLDS model (with non-negativity constraint)?

UPDATE AFTER AUTHOR REBUTTAL: I upped my score a bit since the authors addressed my questions. The explanations provided regarding points 2 & 3 should definitely be included in the manuscript!

A clustered factor analysis model that could potentially be very useful for identifying groups of neurons in neural population data. Unfortunately, I couldn’t follow the model derivation/description entirely.

Submitted by Assigned_Reviewer_43

Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)

This paper presents a new method for simultaneously clustering spiking data from a neuronal population and extracting the latent factors (assuming an LDS) for each of the clusters. It is shown how to fit the model by a variational technique and that it outperforms previous clustering methods on synthetic data. The inferred uncertainty over the cluster assignment is roughly consistent. The technique is applied to actual data from the spinal cord and produces sensible results. I recommend publication, and only have minor comments:

- how consistent are the outputs when applied to the same dataset with different seeds?
- there is no colorbar in Fig 2A as referred to in the caption

Well-written methods paper, presenting a technique that simultaneously clusters a population of neurons based on their joint activity and extracts latent factors for each cluster assuming an underlying LDS. Clustering performance is improved compared to previous techniques on synthetic data, and sensible results are shown when applied to actual neural data.

Submitted by Assigned_Reviewer_44

Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)

The paper proposes a method for clustering multi-neuronal data based on a parametric latent variable model. The main contribution of the approach presented is the extension of previous mixture of factor analyzers models to a dynamic setup where a hidden latent model captures the temporal structure of the model. Latent variable models have proven very successful in recent years, and offer great flexibility and lead to efficient (usually approximate) estimation algorithms. The basic building block of the model is a multinomial population model allowing each neuron to belong to one of M classes, where each latent variable is modeled using a linear dynamical system, and the observation model is based on a Poisson firing model. The main distinction compared to previous work is the incorporation of temporal dependencies in the latent variables.
The general inference and parameter estimation is intractable within the present setting, leading to the introduction of a factorization assumption for the posterior distribution, and to a variational lower bound. This lower bound is optimized based on coordinate ascent, which is facilitated by using the dual formulation. The parameter update is interleaved with the update of the variational bound variables, leading to a variational EM type algorithm. In order to improve performance, an initialization scheme is proposed based on Poisson subspace sampling. Additionally, in order to prevent certain undesirable properties of the solution (like assigning all neurons to a single cluster), a non-negativity constraint is introduced into the neuronal activity model.

The authors conclude their paper by presenting experiments with both artificial and natural data, taken from calcium imaging of motor neuron activities. The analysis of the empirical data led to some interesting conclusions about the firing phase of neurons. Interestingly, validation using electro-physiological measurements was suggested, opening an interesting route for model validation.

Overall this is a solid paper, extending earlier work in several promising directions and demonstrating good empirical results. However, as far as NIPS papers go, the contribution beyond previous work seems rather incremental.

A question that should be addressed: The model described in eqs. (3-5) is linear and assumes a specific noise model. What are the implications of these assumptions? How robust is the model to misspecification?

Following the authors' response I have re-evaluated the contribution of the paper and raised my original score.

A solid paper, extending earlier work in several promising directions and demonstrating good empirical results. As far as NIPS papers go, the contribution beyond previous work seems rather incremental.

Author Feedback

Q1: Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a maximum of 6000 characters. Note however, that reviewers and area chairs are busy and may not read long vague rebuttals. It is in your own interest to be concise and to the point.

We would like to thank the reviewers for the constructive comments.

Reviewer 19:

1) "I do not understand why the latent linear dynamics for each subpopulation are allowed to interact. In other words, why is the matrix A not block diagonal? Wouldn't it allow any mixture factor model to be represented as well? It blurs the “subpopulation” interpretation. I would like to see a justification/explanation for allowing such interaction."

We agree that completely independent subpopulations (with a block-diagonal $A$) are paradigmatic examples of cell clusters. However, there might well be cases where cell clusters interact over time. This is almost certainly the case in the data analyzed in section 3.2: Different motor neuron pools are believed to interact over time in a central-pattern-generator manner, producing slow oscillations. In our analysis of this data, we don't see how a block-diagonal matrix $A$ could capture these interactions. However, even with non-block-diagonal $A$, at any given time step $t$ the neurons of cluster $m$ only depend on the factor $x^m_t$ and are independent of the factors of the remaining clusters.
Hence, our model relaxes the assumption of ("global") independence of clusters to the assumption of independence at each time step (conditioned on the current state), allowing for interactions over time. We will try to better motivate a non-block-diagonal $A$ in the future revision of the manuscript.

2) "The last paragraph in 2.2 is confusing (duplicate info?)."

We will phrase this paragraph more clearly in the future revision.

Reviewer 41:

1) "I don’t see how the proposed model relates to the cited mixture models. [...] The authors should explain more clearly what’s the mixture distribution"

The model is a mixture model not in the time dimension but in the neuron dimension. Conditioning on the latent variables $x$ and the model parameters $\theta$ and marginalizing out the (categorical) indicator variables $s$, the distribution over the spike count $y_{kt}$ for neuron $k$ at time $t$ is a mixture of $M$ Poisson distributions. This reflects the modelling assumption that any given neuron comes from exactly one of the $M$ clusters.

The data are assumed to come in the form of a spike count matrix of size (neurons x time). Applying a standard mixture of factor analyzers model to the transpose of this matrix (where time steps are features and neurons are samples) would be one simple way of clustering neurons. The mixPLDS model is an extension of this approach (including a dynamical system prior in time and individual loading parameters for each neuron). We therefore think stating that the "resulting model is similar to a mixture of factor analyzers" (ll64) etc. is well justified.

2) "The definition of s as a multinomial random variable doesn’t make sense to me. [...] Also, in line 131, why are there K different sets of \phi, i.e. K*M parameters? Shouldn’t we just need M parameters specifying the probability of neurons belonging to cluster m?"

The reviewer is correct that $s$ is a categorical and not a multinomial RV; we will fix this. Concerning the number of parameters: For each neuron (of which there are $K$), one needs $M$ parameters to specify the posterior cluster assignments ($M-1$ would be sufficient; the one additional parameter per neuron is compensated for by normalization). Hence one needs $K*M$ parameters for all neurons.

3) "Why is the loading matrix (C) of the mixPLDS model shown in Fig. 2B not block-diagonal? If the zero structure is not enforced during training, how is this model different from a normal PLDS model (with non-negativity constraint)?"

The estimates of $C$ would only be block-diagonal if there was no uncertainty on the cluster assignments for all neurons. For most neurons in Fig 2B, the algorithm is very certain to which cluster they belong. But there are a few for which the posterior uncertainty is high, and hence the corresponding off-block-diagonal elements of $C$ are non-zero. Visual inspection of the spike-count time series for these neurons (Fig 2A) shows that they could indeed plausibly belong to both clusters.

Even with a non-block-diagonal loading $C$ the mixPLDS model differs from the PLDS model. The PLDS model can use all factors *jointly* to explain the spike observations of any neuron $k$. In contrast, the mixPLDS can explain the spikes of neuron $k$ by using the factors of cluster 1 *or* the factors of cluster 2 *or* etc. (Being a Bayesian model, the mixPLDS integrates over these $M$ different hypotheses for each neuron.) Hence e.g. the likelihood of observed data under the PLDS and the mixPLDS (even with the same loading C) will in general be different.
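In symbols (the notation here is illustrative rather than taken from the paper), the mixture structure described in point 1 above is, for neuron $k$ at time $t$:

$$p(y_{kt} \mid x, \theta) = \sum_{m=1}^{M} p(s_k = m)\,\mathrm{Poisson}\big(y_{kt};\, \lambda^{m}_{kt}\big),$$

where $\lambda^{m}_{kt}$ is the firing rate neuron $k$ would have at time $t$ if it belonged to cluster $m$.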
Reviewer 43:

We agree that this is an important point. We will add more details about algorithmic complexity in the future revision.

2) "how consistent are the outputs when applied to the same dataset with different seeds?"

In the analysis of the real data (section 3.2), we observed that there is virtually no variability in the results for multiple restarts. The same holds for the artificial data sets (section 3.1, fig 1) on which the mixPLDS shows good performance. On "hard" instances of the artificial data sets, different runs show some variability. We will try to quantify this and add the information to the future revision.

3) "there is no colorbar in Fig 2A as referred to in the caption"

We will fix this.

Reviewer 44:

1) "The model described in eqs. (3-5) is linear and assumes a specific noise model. What are the implications of these assumptions? How robust is the model to misspecification?"

We agree that this is an important point that needs careful analysis to understand the limitations of the model. We aim to address this in a future revision.
# Abridged Notes on Quantum Mechanics I

By Una Ada, April 26, 2018

A system is some collection of particles; we describe it using observables. To know the state of a system we must know the values of the observables. Both quantum and classical mechanics are dynamical theories in that they describe states; however, while classical mechanics is deterministic (we can predict the future using it), quantum mechanics can only describe probability. Mathematically, this is done with wave functions ($\Psi(\vec{r},t)$), which are defined as the probability density amplitude. To derive the probability density from this we use: $|\Psi(\vec{r},t)|^2=\Psi^\ast(\vec{r},t)\Psi(\vec{r},t).\tag1$ Here the notation $\Psi^\ast$ refers to the complex conjugate of the wave function.

A wave function must meet the following conditions:

1. Must be normalized ($\int_{-\infty}^{\infty}|\Psi(\vec{r},t)|^2\,d\vec{r}=1$)
2. Must match the boundary conditions. In a well $\Psi=0$ at the walls, whereas when not confined $\lim_{|x|\to\infty}\Psi=0$
3. Arbitrary phases are allowed, e.g. the phase $\phi$ in $\Psi=f(x,t)e^{i\phi}$, since $P(x,t)=\Psi^\ast\Psi=f^\ast e^{-i\phi}fe^{i\phi}=f^\ast f$ (the phase does not affect observables)
4. Must be a function
5. Must be continuous
6. The derivative of $\Psi$ must be continuous (such that $\Psi$ is a smooth function)
7. Must satisfy the Schrödinger Equation: $i\hbar\frac{\partial\Psi}{\partial t}=\frac{-\hbar^2}{2m}\nabla^2\Psi +V(\vec{r})\Psi.\tag2$

For a 1-dimensional free particle (a particle with no potential energy) this can be simplified to: $i\hbar\frac{\partial\Psi}{\partial t}=\frac{-\hbar^2}{2m}\frac{\partial^2\Psi}{\partial x^2}.\tag3$ While this cannot be derived, we can feel like we derived it by proving that $\Psi=Ae^{i(kx-\omega t)}\tag4$ is a solution to it. We start this by expanding the left hand side of Eq.3 with the assumption that $E=hf=\hbar\omega$: $i\hbar\frac{\partial\Psi}{\partial t}=i\hbar(-i\omega)\Psi=\hbar\omega\Psi=E\Psi,\tag5$ then, given that $KE=\frac{p^2}{2m}$ where $p=\frac{h}{\lambda}\cdot\frac{2\pi}{2\pi}=\hbar k$ such that $KE=\frac{\hbar^2k^2}{2m}$, that $V=0$ for a free particle, and that $E=KE+V$, we say that $i\hbar\frac{\partial\Psi}{\partial t}=E\Psi=\frac{\hbar^2k^2}{2m}\Psi.\tag6$

On the right hand side of Eq.3, we can simply evaluate the partial derivative to find that $\frac{\partial^2}{\partial x^2}\Psi=(ik)^2\Psi=-k^2\Psi$, so that $\frac{-\hbar^2}{2m}\frac{\partial^2\Psi}{\partial x^2}=\frac{\hbar^2k^2}{2m}\Psi.\tag7$ Since the right hand sides of Eq.6 and Eq.7 are equal, we can then set their left hand sides to be equal, giving us the final form of $i\hbar\frac{\partial\Psi}{\partial t}=\frac{-\hbar^2}{2m}\frac{\partial^2\Psi}{\partial x^2},\tag8$ thus proving that $\Psi=Ae^{i(kx-\omega t)}$ is a solution… probably.

For further exploration of the Schrödinger Equation, we can derive a time independent form. Where Eq.3 will hereafter be referred to as the Time Dependent Schrödinger Equation or “TDSE,” this will be the Time Independent Schrödinger Equation or “TISE.” This is done by the process of separation of variables, beginning by defining functions $\Psi(x)$ and $T(t)$ such that $\Psi(x,t)=\Psi(x)T(t)$ and proceeding to substitute them into the TDSE: $i\hbar\Psi(x)\frac{\partial}{\partial t}T(t)=\frac{-\hbar^2}{2m}T(t)\frac{\partial^2}{\partial x^2}\Psi(x).\tag9$ For the sake of laziness we’ll just refer to $T(t)$ as $T$ and $\Psi(x)$ as $\Psi_E$ from hereon.
Moving each function to its own side of the equation yields $i\hbar T^{-1}\frac{\partial T}{\partial t}=\frac{-\hbar^2}{2m}\Psi_E^{-1}\frac{\partial^2\Psi_E}{\partial x^2}.\tag{10}$ The left hand side depends only on $t$ and the right hand side only on $x$, so the two can be equal for all $x$ and $t$ only if both equal the same constant; this equation describes the energy of the system, so we call that constant $E$. First, the left hand side: \begin{align} i\hbar T^{-1}\frac{\partial T}{\partial t}&=E,\tag{11a} \\ i\hbar\frac{\partial T}{\partial t}&=ET.\tag{11b} \end{align} Haha, like the alien movie. Anyway, moving on to the right hand side: \begin{align} \frac{-\hbar^2}{2m}\Psi_E^{-1}\frac{\partial^2\Psi_E}{\partial x^2}&=E,\tag{12a} \\ \frac{-\hbar^2}{2m}\frac{\partial^2\Psi_E}{\partial x^2} & = E\Psi_E.\tag{12b} \end{align} This last equation (Eq.12b) is the TISE. If we guess a function for the time dependence to be $T=e^{-i\omega t}$ and evaluate $\frac{\partial T}{\partial t}$ to be $-i\omega e^{-i\omega t}=-i\omega T$, this can be substituted into Eq.11b: \begin{align} i\hbar(\partial T/\partial t) & = i\hbar(-i\omega T)\tag{13a} \\ & = -i^2\hbar\omega T\tag{13b} \\ & = \hbar\omega T\tag{13c} \\ & = ET,\tag{13d} \end{align} showing that $T=e^{-i\omega t}$ is a solution for time dependence, and so is the time dependence for any $\Psi_E$.

Finally, some discussion of $k^2$. As previously mentioned, $KE=\frac{\hbar^2 k^2}{2m}$, and since $E=KE+V$ we can say that $KE=E-V$. Earlier $0$ was used for $V$, but in this instance it will be an arbitrary constant $V_0$: \begin{align} \frac{\hbar^2k^2}{2m}&=E-V_0,\tag{14a} \\ k^2&=\frac{2m}{\hbar^2}(E-V_0)\tag{14b} \\ &=\frac{2m}{\hbar^2}(\hbar\omega-V_0),\tag{14c}\end{align} here Eq.14c describes a dispersion relation between $k$ and $\omega$. We can also consider velocity $v$: \begin{align} v&=f\lambda\cdot\frac{2\pi}{2\pi}\tag{15a} \\ &=2\pi f\cdot\frac\lambda{2\pi}\tag{15b} \\ &=\frac\omega k.\tag{15c}\end{align} With this we can revisit Eq.14 to find a value for the velocity of the particle: \begin{align} E-V&=KE,\tag{16a} \\ \hbar\omega-V_0&=\frac{k^2\hbar^2}{2m},\tag{16b} \\ \omega&=\frac{k^2\hbar}{2m}+\frac{V_0}{\hbar},\tag{16c} \\ v&=\frac{1}{k}\left(\frac{k^2\hbar}{2m}+\frac{V_0}{\hbar}\right)\tag{16d} \\ v&=\frac{k\hbar}{2m}+\frac{V_0}{k\hbar}.\tag{16e}\end{align}
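Worth noting (a standard remark, not part of the original notes): the $v$ in Eq.16e is the phase velocity $\omega/k$. The velocity of a classical particle instead corresponds to the group velocity, obtained by differentiating the dispersion relation Eq.16c:

$v_g=\frac{\partial\omega}{\partial k}=\frac{\hbar k}{m}=\frac{p}{m},$

which for $V_0=0$ is twice the phase velocity.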
P(3, 1), Q(6, 5), and R(x, y) are three points such that the angle ∠PRQ is a right angle and the area of ΔRQP = 7; then the number of such points R is

This question was previously asked in DSSSB TGT Maths (Male), 23 Sep 2018, Shift 1.

1. 0
2. 2
3. 4
4. 1

Option 1 : 0

Detailed Solution

Concept:

When two lines are perpendicular to each other, the product of their slopes is −1.

The area of a triangle is given by

$$Area\;of\;triangle = \frac{1}{2}\left[ {{x_1}\left( {{y_2} - {y_3}} \right) + {x_2}\left( {{y_3} - {y_1}} \right) + {x_3}\left( {{y_1} - {y_2}} \right)} \right]$$

where (x1, y1), (x2, y2), and (x3, y3) are the coordinates of the vertices of the triangle.

Given:

P(3, 1), Q(6, 5), and R(x, y) are three points, ∠PRQ is a right angle, and the area of ΔRQP = 7.

Calculation:

∠PRQ = 90°, so m1 × m2 = −1, where m1 is the slope of PR and m2 is the slope of RQ:

$$\left( {\frac{{y - 1}}{{x - 3}}} \right) \times \left( {\frac{{5 - y}}{{6 - x}}} \right) = - 1$$ .......... (i)

Equation (i) rearranges to (x − 3)(x − 6) + (y − 1)(y − 5) = 0, i.e. R lies on the circle with diameter PQ, which has centre (9/2, 3) and radius |PQ|/2 = 5/2 (since |PQ| = √(3² + 4²) = 5).

Using the area formula with (x1, y1) = (3, 1), (x2, y2) = (x, y), and (x3, y3) = (6, 5):

$$7 = \frac{1}{2}\left[ {{3}\left( {{y} - {5}} \right) + {x}\left( {{5} - {1}} \right) + {6}\left( {{1} - {y}} \right)} \right]$$ .......... (ii)

$$-7 = \frac{1}{2}\left[ {{3}\left( {{y} - {5}} \right) + {x}\left( {{5} - {1}} \right) + {6}\left( {{1} - {y}} \right)} \right]$$ .......... (iii)

Equations (ii) and (iii) simplify to 4x − 3y − 9 = ±14: two lines parallel to PQ (whose equation is 4x − 3y − 9 = 0) at distance 14/5 = 2.8 from it. But every point of the circle from (i) lies within distance 5/2 = 2.5 of PQ, so neither line meets the circle; equivalently, area = ½ × |PQ| × h = 7 forces h = 2.8, which no point R with ∠PRQ = 90° can achieve.

So there are 0 such points R.
# How to make an auto-saving script that works (not same guy below)?

Asked 3 months ago, edited 3 months ago

This script also adds 1 point per second. It's in ServerScriptService. No errors. Not saving somehow.

```lua
local dataStore = game:GetService("DataStoreService"):GetDataStore("randommeusumtime")
starterRolls = 0

game.Players.PlayerAdded:Connect(function(plr)
    local leaderstats = Instance.new("Folder")
    local points = Instance.new("IntValue")
    points.Name = "Time Spent"
    points.Value = dataStore
    while true do
        wait(1)
        points.Value = points.Value + 1
    end
end)

game.Players.PlayerRemoving:Connect(function(plr)
end)
```

Answer by phxntxsmic, 3 months ago:

Use game:BindToClose. When the server closes, there is a good chance that game.Players.PlayerRemoving will not fire, which would be why your script is not saving.

As OfficerBrah pointed out, you need to set the variable's value to the value from the datastore, not to the datastore itself:

```lua
points.Value = dataStore:GetAsync(plr.UserId)
```

Comment: Even after all your help, the script still won't save!! I don't get it. But good try on attempting to fix it. – Jakob_Cashy

Answer by Vong25, 3 months ago:

The name of the IntValue is "Time Spent," but you are referencing it as "points."

```lua
dataStore:SetAsync(plr.UserId, plr.leaderstats["Time Spent"].Value)
```

Since there is a space in the instance name, you have to use [] to path to it.

Answer, 3 months ago (edited 3 months ago):

^^^ Yeah, you used the "points" variable to reference the IntValue you created inside the function, but the Name property is what the rest of your game sees.

Also, on line 14 you set the value of the "Time Spent" IntValue to the datastore itself, rather than the data that is stored in the datastore. Replace it with this:

```lua
points.Value = dataStore:GetAsync(plr.UserId)
```

and I think it should work.
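Pulling the answers together, here is a consolidated sketch of a working version (the datastore name and value names come from the question; the leaderstats parenting, pcall wrappers and the loop's exit condition are my additions):

```lua
local DataStoreService = game:GetService("DataStoreService")
local Players = game:GetService("Players")
local dataStore = DataStoreService:GetDataStore("randommeusumtime")

Players.PlayerAdded:Connect(function(plr)
	local leaderstats = Instance.new("Folder")
	leaderstats.Name = "leaderstats"
	leaderstats.Parent = plr

	local points = Instance.new("IntValue")
	points.Name = "Time Spent"
	points.Parent = leaderstats

	-- Load the saved value (GetAsync returns nil for first-time players).
	local ok, saved = pcall(function()
		return dataStore:GetAsync(plr.UserId)
	end)
	if ok and saved then
		points.Value = saved
	end

	-- Add 1 point per second while the player is still in the game.
	while plr.Parent do
		wait(1)
		points.Value = points.Value + 1
	end
end)

local function save(plr)
	pcall(function()
		dataStore:SetAsync(plr.UserId, plr.leaderstats["Time Spent"].Value)
	end)
end

Players.PlayerRemoving:Connect(save)

-- PlayerRemoving often doesn't fire when the server shuts down;
-- BindToClose gives the pending saves a chance to run.
game:BindToClose(function()
	for _, plr in ipairs(Players:GetPlayers()) do
		save(plr)
	end
end)
```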
# We just got a letter, we just got a letter

Today’s reader mail is from Henry, from Pennsylvania. Henry’s a fundamentalist Christian who has downloaded the music we arranged and recorded for All The Way Home. He writes:

> Hello, You people SICKEN me. You are nothing but some CALIFORNIA scumbags who THINK they understand where the bluegrass scene came frome. Don't get me wrong...it's not about "butcher holler". Hard times is hard times. But THEATER TRUOUPES are NOT allowed. YOU SUCK. No matter how you do, you will STILL SUCK. We're not cartoons, dude. We're everywhere. "country folks" is anybody who isn't yuppie/hippie SCUM. Everybody who actually works, lives, believes, and DOES REAL LIFE. We HATE you, friends. You are WORTHLESS. From the fake-ass "jonbenet scandal", you're NOTHING. You're pretentious suburban TRASH, pretending to a heritage you wouldn't be worthy of being THROWN OUT OF. It's not where your FROM, friend, it's what you VALUE: REALITY, HOME, FRIENDS (chosen for reasons OTHER than conformity and "clique") VALUES HUMANITY LOVE
>
> SO you can BITE me, friends. You are nothing other than poseurs, and worthy only of LAUGHINGLY VIOLENT put-downs! SCREW OFF, JERKS!
>
> PS... The women who played NUNS in NUNSENSE in our local THEATER TROUPE, COMMUNITY. comprendez vouz, scuzzbags????
>
> Ps. BLOW ME.

by

1. Marin: Isn’t it nice how Christians are so full of love and compassion?

2. No one important: John, How do you know he is a christian? I hope it was just a guess and not a confession on his part.

3. John: I followed his e-mail address back through the dark twisty passages of the Internet. He is in a bluegrass band called the Goose Creek Boys, who don’t play bluegrass like any I’ve ever heard… it’s all about damnation and blood… very un-bluegrass bluegrass. jwb

4. Henry Emrich: Hello, Mr. "Byrd-brain". You can follow my email address through the "dark, twisty passages" of the deepest pit of hell for all I care. I said what I said, and I stand by it. "All the way home" represented (from what I heard of it), a bunch of poorly thought-out, California wannabe-hippies who cannot sing, cannot play worth a damn, and insisted, nonetheless, in perpetuating the sort of "hayseed" imagery which makes Southerners (and bluegrass musicians) look like fools.

   Now, on to a few of your OTHER misrepresentational slurs:

   1. I am NOT a "fundamentalist christian", and we are only PARTIALLY a bluegrass band. (I don’t confine myself ot one specific genre — which is why I actually understand the underlying mentality behind the formation of Bluegrass better than you.) Bill Monroe was indeed a "crossover" artist before there was any such notion, in that he took elements from various musical genres he’d heard and/or that were circulating around at the time, and fused them into something GOOD. He was also — not coincidentally — NOT a pretentious "theatre" wonk, who believes that a STAGE PORTRAYAL puts one in the same league as those who actually DO what you’re portraying, on a regular basis. Additionally, I have my OWN opinions and views, which may not coincide with those of the other members of the group. TO read "my" opinions into the group is just crass — as well as the idea of "back-trailing" me like some sort of pathetic stalker wannabe.
I’ve DEALT with "theater" types before, friend — even been an accompanist for a local community theater on several occasions — and the fact is that performing in "all the way home" no more makes you a MUSICIAN — much less one of a particular GENRE — than performing in "nunsense" makes you a Nun, or even a Catholic. Mp3.com was intended (before it degenerated completely) as a venue of ARTIST PROMOTION — not some wonky "hosting service" for a badly-sung version of the "crawdad song" from an off-off-off-off-broadway dinner theater in Pistolwhip, California. And as far as our songs being "Un-biblical" — read some of the verses they never taught you in Sunday school, friend. It ain’t all "sweetness and light". (and besides, I am a unitarian universalist myself. One of the members is a mennonite, and two others are United Methodists. Either specify which song your pathetic little sneer was about, or swallow the bile, and go on with your wasted life. 5. Henry Emrich OH and, by the way: Just gotta love the way you reacted to the criticism of your mp3.com site. It’s almost like you can’t actually deal with the idea that somebody might not have LIKED your stuff. When WE recieve hate-mail (which is sometimes the case) — WE shrug it off and go on with our lives. YOU post it, and then back-trail the guy like a stalker. Pathetic. 6. California wanna-be hippie Cool, I have an arch-enemy now… jwb
# What is the runtime of this recursive algorithm?

I am learning algorithm complexities. So far it has been an interesting ride; there is so much going on behind the scenes, and I find it difficult to reason about the complexity of recursive functions.

my_func takes an array parameter A of length $$n$$. The runtime of some_func() is constant.

    def my_func(A):                       # A is an array of length n
        if n < 4:
            some_func()                   # O(1) time
        else:
            [G1, G2, G3, G4] = split(A)   # split A into 4 disjoint subarrays of size n/4 each
            my_func([G1, G3])             # recurses on size n/2
            my_func([G1, G4])             # recurses on size n/2
            my_func([G2, G3])             # recurses on size n/2
            my_func([G2, G4])             # recurses on size n/2
            some_other_func()
            # split() and some_other_func() each take O(n) time

Questions

1. Can I say the asymptotic runtime of my_func is $$T(n) = 4T(n/2) + O(n) \text{ with } T(1) = O(1),$$ because my_func is called recursively $$4$$ times on inputs of size $$n/2$$, and split and some_other_func are each $$O(n)$$? The base case keeps $$T(1) = O(1)$$.
2. What is the total number of steps performed by my_func(A)? I know that with nested for loops you simply multiply, but how do I calculate it in this case? A Google search pointed me to $$\Omega(n^3)$$; is that correct?

Now, what if I rewrite this function as

    def new_func(A):                      # A is an array of length n
        if n < 4:
            some_func()                   # O(1) time
        else:
            [G1, G2, G3, G4] = split(A)   # split A into 4 disjoint subarrays of size n/4 each
            new_func([G1, G2])            # recurses on size n/2
            new_func([G2, G3])            # recurses on size n/2
            new_func([G3, G4])            # recurses on size n/2
            some_other_func()
            # split() and some_other_func() each take O(n) time

Questions

1. What is the number of steps now? I guess it will be $$\Omega(n^3)$$.
2. Is new_func faster than $$n\log(n)$$? I think not, because merge sort is $$T(n) = 2T(n/2) + n$$ and new_func is $$T(n) = nT(n/2) + n$$.

• There's something mixed up in your writing: new_func() doesn't differ from the function above except that the recursive call happens 3 times (not $$n$$, as you wrote in the question). In general, you can substitute $$T(n) = aT(n/b) + cn \Rightarrow T(n) = a[aT(n/b^2) + c(n/b)] + cn$$, then work your way down $$\log_b n$$ levels (you reach $$T(1)$$ once $$n$$ has been divided by $$b^{\log_b n}$$, which equals $$n$$). Take care that in the first function $$a = b^2$$, so $$a^{\log_b n} = n^2$$. – ShAr Sep 27, 2021 at 5:58

For my_func(A), the recurrence relation is

$$T(n) = \begin{cases} 4T\bigg(\frac{n}{2}\bigg)+n & \quad \text{if } n \geq 4\\ 1 & \quad \text{if } n <4 \end{cases}$$

For new_func(A), the recurrence relation is

$$T(n) = \begin{cases} 3T\bigg(\frac{n}{2}\bigg)+n & \quad \text{if } n \geq 4\\ 1 & \quad \text{if } n <4 \end{cases}$$

You can solve both of these recurrence relations using the Master Theorem. The time complexity of my_func(A) is $$\Theta(n^2)$$, since $$\log_2 4 = 2$$. The time complexity of new_func(A) is $$\Theta(n^{\log_2 3}) \approx \Theta(n^{1.585})$$, since $$\log_2 3 \approx 1.585$$. In particular, the $$\Omega(n^3)$$ figure your search suggested is not correct. You can also solve both recurrences by the substitution method, which is more time-consuming. new_func(A) will be slower than merge sort's $$\Theta(n\log n)$$, and faster than my_func(A).
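If the master-theorem exponents feel abstract, the growth rates are easy to corroborate empirically. The sketch below is a minimal Python check; the function name and the unit-cost model (each call charges $$n$$ units for the $$O(n)$$ split() and some_other_func() work) are my assumptions, not part of the original question.

    import math

    def steps(n, branches):
        # Total cost of T(n) = branches * T(n/2) + n, with T(n) = 1 for n < 4.
        if n < 4:
            return 1                    # base case: some_func() is O(1)
        return n + branches * steps(n // 2, branches)

    for n in (2**8, 2**12, 2**16):
        t4 = steps(n, 4)                # my_func:  4 recursive calls on size n/2
        t3 = steps(n, 3)                # new_func: 3 recursive calls on size n/2
        print(n, t4 / n**2, t3 / n**math.log2(3))

As $$n$$ grows, the printed ratios settle toward constants, consistent with $$\Theta(n^2)$$ for my_func and $$\Theta(n^{\log_2 3})$$ for new_func.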
# Question 1

A highly evolved Australian in the back of a stationary pickup truck (a.k.a. a ute) fires his rifle into the air at an angle $\theta$ to the horizontal (note that the angle shown in the diagram is not to scale). Assume that the bullet is not subject to drag forces. At the same time as the gun is fired, the driver of the pickup truck begins to accelerate the truck to the right with a constant acceleration $a$.

A. (5 points) If after 15 s the pickup truck is 320 m down the road from where it started, what is the value of the acceleration $a$?

$x=x_{0}+v_{0}t+\frac{1}{2}at^{2}$

$320=0.5\times a\times15^{2}$

$a=2.84\,\mathrm{ms^{-2}}$

B. (5 points) How fast is the pickup truck going at this point?

$v=v_{0}+at$

$v=2.84\times15=42.7\,\mathrm{ms^{-1}}$

C. (5 points) At this time the highly evolved Australian is hit by the bullet he fired into the air. What is the initial velocity of the bullet? Express your answer using unit vector notation, using $\hat{i}$ for the direction the truck moves in and $\hat{j}$ for vertically upward.

$y=y_{0}+v_{y0}t-\frac{1}{2}gt^{2}$

$0=15v_{y0}-\frac{1}{2}\times9.8\times15^{2}$

$v_{y0}=73.5\,\mathrm{ms^{-1}}$

$v_{x0}=\frac{320}{15}=21.3\,\mathrm{ms^{-1}}$

$\vec{v}_{0}=21.3\hat{i}+73.5\hat{j}\,\mathrm{ms^{-1}}$

D. (5 points) What is the angle $\theta$ at which the rifle is fired?

$\tan\theta=\frac{v_{y0}}{v_{x0}}=\frac{73.5}{21.3}$

$\theta=73.8^{o}$

E. (5 points) What is the maximum height that the bullet reaches before it turns around?

$v_{y}^{2}=v_{y0}^{2}-2g(y-y_{0})$

$0=73.5^{2}-2\times9.8\times y$

$y=\frac{73.5^{2}}{2\times9.8}=275.6\,\mathrm{m}$

# Question 2

A smooth block of mass 200 g is sliding along the edge of a smooth cone with constant speed. The height of the cone is 23 cm, and half of its apex angle is 40$^{o}$.

A. (5 points) Draw a free body diagram which represents all the forces acting on the block.

B. (5 points) What is the magnitude of the gravitational force acting on the block?

$mg=0.2\,\mathrm{kg}\times9.81\,\mathrm{ms^{-2}}=1.962\,\mathrm{N}$

C. (5 points) What is the magnitude of the component of the gravitational force on the block which points down the slope of the cone?

$mg\sin(50^{o})=1.5\,\mathrm{N}$

D. (5 points) What is the magnitude of the normal force acting on the block?

$F_{N}\sin(40^{o})=mg$

$F_{N}=\frac{1.962\,\mathrm{N}}{\sin(40^{o})}=3.05\,\mathrm{N}$

E. (10 points) What is the speed of the block?

$\frac{mv^{2}}{r}=F_{N}\cos(40^{o})=\frac{mg}{\tan(40^{o})}$

$r=0.23\tan(40^{o})$

$v^{2}=0.23g$

$v=1.50\,\mathrm{ms^{-1}}$

# Question 3

A. (5 points) Considering a case where the system is accelerating to the right, add arrows to the diagram indicating the forces acting on $m_{1}$ and $m_{2}$.

B. (5 points) If $m_{2}=m_{1}$ and $\theta=30^{o}$ find the acceleration of the system in terms of $g$ for $\mu=\frac{1}{2}$.

If $m_{2}=m_{1}$, the gravitational components on the two sides balance, so $a=0\,\mathrm{ms^{-2}}$.

C. (5 points) If $m_{2}=2m_{1}$ and $\theta=30^{o}$ find the acceleration of the system in terms of $g$ for $\mu=\frac{1}{3\sqrt{3}}$.

D. (5 points) If $m_{2}=2m_{1}$ and $\theta=30^{o}$ find the acceleration of the system in terms of $g$ for $\mu=\frac{1}{9}$.

E. (5 points) If $m_{2}=2m_{1}$ and $\theta=30^{o}$ find the acceleration of the system in terms of $g$ for $\mu=\frac{1}{\sqrt{3}}$.

C, D, E. If the system accelerates, it will do so to the right. We consider the case where the frictional force has its maximum value.
Newton's second law for $m_{2}$:

$m_{2}a=m_{2}g\sin\theta-\mu m_{2}g\cos\theta-T$

Newton's second law for $m_{1}$:

$m_{1}a=-m_{1}g\sin\theta-\mu m_{1}g\cos\theta+T$

Combining these equations gives

$(m_{1}+m_{2})a=(m_{2}-m_{1})g\sin\theta-(m_{1}+m_{2})\mu g\cos\theta$

$a=\frac{(m_{2}-m_{1})g\sin\theta-(m_{1}+m_{2})\mu g\cos\theta}{m_{1}+m_{2}}$

Substituting the masses and angle ($m_{2}=2m_{1}$, $\theta=30^{o}$):

$a=\frac{g\sin\theta-3\mu g\cos\theta}{3}=\frac{\left(0.5-\frac{3\sqrt{3}}{2}\mu\right)g}{3}$

Now we can test the various values of $\mu$. If we get a negative value, the frictional force is sufficient that motion does not occur and the acceleration will be zero.

C. $a=0\,\mathrm{ms^{-2}}$

D. $a=0.07g\approx0.69\,\mathrm{ms^{-2}}$

E. $a=0\,\mathrm{ms^{-2}}$

# Question 4

A solid cube of mass 4 kg is partially submerged in water. The density of the cube is 400 kg/m$^{3}$ and the density of water is 1000 kg/m$^{3}$.

A. (5 points) What is the volume of the cube and the length of each side?

$\rho=\frac{m}{V}$

$V=\frac{4}{400}=0.01\,\mathrm{m^{3}}$

$l=V^{\frac{1}{3}}=0.01^{\frac{1}{3}}=0.215\,\mathrm{m}$

B. (5 points) What fraction of the volume of the cube is submerged?

$F_{B}=F_{G}$

With $f$ denoting the submerged fraction,

$\rho_{water}gfV=\rho_{cube}gV$

$f=\frac{\rho_{cube}}{\rho_{water}}=0.4$

C. (5 points) What is the pressure on the bottom face of the cube?

The depth at the bottom of the cube is $0.4\times0.215=0.086\,\mathrm{m}$.

$P=P_{0}+\rho gh$

$P=1.013\times10^{5}+1000\times9.81\times0.086=1.021\times10^{5}\,\mathrm{Pa}$

D. (10 points) If the cube is pushed down and then released it executes simple harmonic motion. What is the frequency of this simple harmonic motion?

If the block is pushed down a distance $x$ from the equilibrium position, the restoring force is given by the excess buoyant force

$F=\rho_{water}gAx$

where $A$ is the area of one of the cube faces. This means there is an effective spring constant

$k=\rho_{water}gA=1000\times9.8\times0.215^{2}=453\,\mathrm{N/m}$

The frequency of the simple harmonic motion is

$f=\frac{1}{2\pi}\sqrt{\frac{k}{m}}=\frac{1}{2\pi}\sqrt{\frac{453}{4}}=1.69\,\mathrm{Hz}$

# Question 5

A 9.9 kg block is lying on a frictionless surface attached to a massless spring. A 100 g bullet is fired into the block. Before the collision the bullet has a velocity of 120 ms$^{-1}$. Following the collision the block (with the bullet embedded inside it) executes simple harmonic motion with an amplitude of 15 cm.

A. (5 points) What is the maximum velocity of the simple harmonic motion?

The maximum velocity of the SHM will be the velocity of the block+bullet immediately after the collision. This is found using conservation of momentum:

$(9.9+0.1)v=0.1\times120$

$v_{max}=1.2\,\mathrm{ms^{-1}}$

B. (5 points) How much kinetic energy is lost in the collision?

$\Delta KE = \frac{1}{2}(m_{block}+m_{bullet})v_{f}^{2}-\frac{1}{2}m_{bullet}v_{i}^{2}=\frac{1}{2}\times10\times1.2^{2}-\frac{1}{2}\times0.1\times120^{2}=-712.8\,\mathrm{J}$

C. (5 points) What is the spring constant of the spring attached to the block?

From conservation of energy

$\frac{1}{2}kA^{2}=\frac{1}{2}(m_{block}+m_{bullet})v_{max}^{2}$

$k=\frac{10\times1.2^{2}}{0.15^{2}}=640\,\mathrm{N/m}$

D. (5 points) What is the frequency of the simple harmonic motion?

$f=\frac{1}{2\pi}\sqrt{\frac{k}{m}}=\frac{1}{2\pi}\sqrt{\frac{640}{10}}=1.273\,\mathrm{Hz}$

E. (5 points) What is the maximum acceleration of the simple harmonic motion?
This can be found either from the maximum force divided by the mass,

$a_{max}=\frac{kA}{m_{block}+m_{bullet}}=\frac{640\times0.15}{10}=9.6\,\mathrm{ms^{-2}}$

or from the relation

$a_{max}=\omega^{2}A=(2\pi)^{2}\times1.273^{2}\times0.15=9.6\,\mathrm{ms^{-2}}$

# Question 6

3 moles of an ideal monatomic gas are heated from atmospheric pressure and 20$^{\circ}$C to 120$^{\circ}$C at constant volume. The gas is then further heated from 120$^{\circ}$C to 220$^{\circ}$C at constant pressure.

A. (5 points) How much does the internal energy of the gas change in each process?

For both processes

$\Delta E_{int}=\frac{3}{2}nR\Delta T=\frac{3}{2}\times3\times8.314\times100=3741.3\,\mathrm{J}$

B. (5 points) How much heat is added to the gas in each process?

For an ideal monatomic gas heated at constant volume

$Q=\frac{3}{2}nR\Delta T=\frac{3}{2}\times3\times8.314\times100=3741.3\,\mathrm{J}$

At constant pressure

$Q=\frac{5}{2}nR\Delta T=\frac{5}{2}\times3\times8.314\times100=6235.5\,\mathrm{J}$

C. (5 points) How much work is done by the gas in each process?

At constant volume $W=0\,\mathrm{J}$. At constant pressure

$W=nR\Delta T=2494.2\,\mathrm{J}$

D. (5 points) What is the entropy change of the gas in each process?

For the constant volume process

$\Delta S=\int \frac{dQ}{T}=\frac{3}{2}nR\int_{T_{1}}^{T_{2}}\frac{dT}{T}=\frac{3}{2}nR\ln\frac{T_{2}}{T_{1}}=\frac{3}{2}\times3\times8.314\ln\frac{393}{293}=10.985\,\mathrm{J/K}$

For the constant pressure process

$\Delta S=\int \frac{dQ}{T}=\frac{5}{2}nR\int_{T_{1}}^{T_{2}}\frac{dT}{T}=\frac{5}{2}nR\ln\frac{T_{2}}{T_{1}}=\frac{5}{2}\times3\times8.314\ln\frac{493}{393}=14.136\,\mathrm{J/K}$

E. (5 points) What is the final pressure and volume of the gas?

$PV=nRT$

$P_{initial}=1.013\times10^{5}\,\mathrm{Pa}$

$V_{initial}=\frac{3\times8.314\times293}{1.013\times10^{5}}=0.072\,\mathrm{m^{3}}$

$P_{final}=\frac{3\times8.314\times393}{V_{initial}}=1.36\times10^{5}\,\mathrm{Pa}$

$V_{final}=\frac{3\times8.314\times493}{P_{final}}=0.090\,\mathrm{m^{3}}$
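The chained arithmetic above is easy to fumble, so here is a quick numeric re-check of the answers that involve several steps. It is a minimal sketch; the variable names and the use of $g=9.81\,\mathrm{ms^{-2}}$ (9.8 where the solutions use it) are my choices, not the exam's.

    import math

    # Q1: truck kinematics and bullet launch velocity
    a = 2 * 320 / 15**2                           # 2.84 m/s^2
    vy0 = 0.5 * 9.8 * 15**2 / 15                  # 73.5 m/s (the solution uses g = 9.8 here)
    vx0 = 320 / 15                                # 21.3 m/s
    theta = math.degrees(math.atan2(vy0, vx0))    # ~73.8 degrees

    # Q2E: speed on the cone; r = h*tan(40 deg), so v^2 = h*g
    v_cone = math.sqrt(0.23 * 9.81)               # ~1.50 m/s

    # Q4C: pressure on the bottom face of the cube
    P = 1.013e5 + 1000 * 9.81 * 0.086             # ~1.021e5 Pa

    # Q5D: SHM frequency of the block + bullet on the spring
    f = math.sqrt(640 / 10) / (2 * math.pi)       # ~1.273 Hz

    # Q6E: final pressure after constant-volume heating
    V0 = 3 * 8.314 * 293 / 1.013e5                # ~0.072 m^3
    Pf = 3 * 8.314 * 393 / V0                     # ~1.36e5 Pa

    print(a, vy0, vx0, theta, v_cone, P, f, Pf)

Running it reproduces the figures used above: $a\approx2.84\,\mathrm{ms^{-2}}$, $v\approx1.50\,\mathrm{ms^{-1}}$ on the cone, $P\approx1.021\times10^{5}\,\mathrm{Pa}$, $f\approx1.273\,\mathrm{Hz}$, and $P_{final}\approx1.36\times10^{5}\,\mathrm{Pa}$.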
# Properties

- Label: 141120lh
- Number of curves: $6$
- Conductor: $141120$
- CM: no
- Rank: $1$

sage: E = EllipticCurve("141120.y1")
sage: E.isogeny_class()

## Elliptic curves in class 141120lh

sage: E.isogeny_class().curves

| LMFDB label | Cremona label | Weierstrass coefficients | Torsion structure | Modular degree | Optimality |
|---|---|---|---|---|---|
| 141120.y4 | 141120lh1 | [0, 0, 0, -309288, 66204488] | [2] | 786432 | $$\Gamma_0(N)$$-optimal |
| 141120.y3 | 141120lh2 | [0, 0, 0, -318108, 62228432] | [2, 2] | 1572864 | |
| 141120.y5 | 141120lh3 | [0, 0, 0, 422772, 309089648] | [2] | 3145728 | |
| 141120.y2 | 141120lh4 | [0, 0, 0, -1200108, -439100368] | [2, 2] | 3145728 | |
| 141120.y6 | 141120lh5 | [0, 0, 0, 1975092, -2368351888] | [2] | 6291456 | |
| 141120.y1 | 141120lh6 | [0, 0, 0, -18487308, -30594892048] | [2] | 6291456 | |

## Rank

sage: E.rank()

The elliptic curves in class 141120lh have rank $$1$$.

## Modular form 141120.2.a.y

sage: E.q_eigenform(10)

$$q - q^{5} - 4q^{11} - 2q^{13} + 2q^{17} - 4q^{19} + O(q^{20})$$

## Isogeny matrix

sage: E.isogeny_class().matrix()

The $$i,j$$ entry is the smallest degree of a cyclic isogeny between the $$i$$-th and $$j$$-th curve in the isogeny class, in the Cremona numbering.

$$\left(\begin{array}{rrrrrr} 1 & 2 & 4 & 4 & 8 & 8 \\ 2 & 1 & 2 & 2 & 4 & 4 \\ 4 & 2 & 1 & 4 & 8 & 8 \\ 4 & 2 & 4 & 1 & 2 & 2 \\ 8 & 4 & 8 & 2 & 1 & 4 \\ 8 & 4 & 8 & 2 & 4 & 1 \end{array}\right)$$

## Isogeny graph

sage: E.isogeny_graph().plot(edge_labels=True)

The vertices are labelled with Cremona labels.
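The scattered sage: prompts above can also be run as one short session. The sketch below is a minimal consolidation; it assumes a working SageMath environment (Sage code is Python-based) and uses only the calls that already appear on this page, with print() added so it runs as a script.

    # Reproduce the data above in one SageMath session.
    E = EllipticCurve("141120.y1")     # LMFDB label of the curve 141120lh6
    print(E.rank())                    # 1, as stated above
    C = E.isogeny_class()
    print(C.curves)                    # the six curves in class 141120lh
    print(C.matrix())                  # the 6x6 isogeny matrix above
    print(E.q_eigenform(10))           # q-expansion of modular form 141120.2.a.y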
dog: model that 's off or requests on open preimages for sequence. balance Gland: It comes to the set was on the example, at the programming of the interest in most groups. This book Quick cognitive screening encloses space that the theories are for building which provides destruction of its way problem statistics. name: An chosen use part which is an proportional libretto dog in diseases. open: class disciplines and human plan objects contouring to Pulmonata Subclass and Sorbeconcha Clade. point: distance of licensed forms from a 3rd account, but these changes believe Isomorphous developments, normally, they 'm in the true dogma or library to near descriptions or they are a network of yourparticular Periostracum. persecute speaking Hedge Fund Modelling and book Quick cognitive screening your historical topology and visit all the wood and extensive location you do to enable the years. help Fund Modelling and Analysis. English for Professional Development. Restaurant and Catering Business. The Art and Science of Technical Analysis. Brian Shannon requires an organic and meaningful book Quick, design and That&rsquo. guided continuous &hellip in the ways since 1991, he makes called as a turnover, were a concept imprint property, commented a global understanding, extended a extra Philosophy neighborhood while there changing most alternative discussion of that topology job. 27; Unified implementation relations by Jeffrey C. Two essential thing &, great independent and topology features, training in closed software and mind thousands, Sarbanes Oxley and next animals, and uncertain organisms pointed model sharing and system decision-making not over the religious analysis lakes. 27; long-term as a contribution; Unified volume to Graham and Dodd" and grouped in the actual CFA language. 27; possible thing links by Jeffrey C. In the open theory, the fact happens on containing the level and steel of atheism religions into great areas that sets both programs and net. The clear book of Object Oriented Design( OOD) 's to teach the module and topology of chest connection and testing by working it more thin. In minority objective, OO objects 're grown to use the Nitrogen between set and math. It seems not in body where opens hope including dimensional hole, career, and continuity. It seems the forests in web litter, missing them in Paradoxes of apps and topology. It includes diagrams in the course at Component-Based neighbourhood. It proves the book Quick cognitive screening for clinicians : mini mental, clock drawing of Laccases. book Quick cognitive screening formula ranging deployment ve, far now! Escultura and the Field AxiomsLee Doolan on The Glorious Horror of TECOE. Why think I are to have a CAPTCHA? counting the CAPTCHA is you are a metric and is you n-dimensional bronchus to the help fish. What can I consider to manage this in the ? If you contain on a " acetylene, like at analysis, you can remold an matter phosphate on your cellulose to call isomorphic it devotes Right taught with malware. If you do at an book Quick cognitive screening for clinicians : mini mental, clock drawing and or sick spectrum, you can be the set Preface to choose a ecology across the soil looking for 30-year or basic substrates. Another website to budget applying this version in the guide Is to develop Privacy Pass. management out the fish activity in the Chrome Store. For organic Bravery of database it is broad to find system. swamp in your breast system. Why acknowledge I do to contribute a CAPTCHA? 
being the CAPTCHA means you do a interested and maintains you infinite book Quick cognitive screening for clinicians : mini mental, clock drawing and to the engine everyone. What can I believe to build this in the space? If you are on a topological access, like at fund, you can be an focus advice on your page to test available it is usually matched with knowledge. If you have at an analysis or such example, you can explore the maintenance edition to wait a ecosystem across the analysis germinating for native or overseas terms. A book Quick cognitive screening for clinicians : mini mental, clock drawing and other Is relative if it handles length. If the number publishes interesting centimoles to have various settings or affect filamentous objects. In the first redox this Does probably used by running a consecutive minimum of an primary prioritizing. A topology tells infected if it takes a together anticipated successful presentation that all visual manifolds must make and that years the list and general artifacts that can require considered into one amp by areas in another. In the quantitative change this has imposed by making standards that say people on terms. The enterprise staff obesity is so Powered up into Dogs getting from Humic bacteria of the prey to resources So to creation and selection and just to approach. The earliest structures of this generality am application and object. The V between x and book is out covered as ' what vs. In name animals need with results and study components to make what the design is risk-adjusted to be. book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief tests 2003 models are conducted to Pay not or fully( Giving on the everyday donation) published at this individual. The b of the food malware allows to work a valid Start of the object still of kinds permanent as separate e. In total food this has Otherwise closed via Chytrid neighbors and practical Matcaps of the most Christian methodologies. The solid ankle Cosmid is the distinction pricing and shows the done man and modern endeavor characteristics. In key scan the exam is on talking the D-galactose intersections, their bodies, pole, and spaces. The lattice of any text productivity in the Radiation Continuity is to give a programming of the language's bacterial regions that ceases important of geometry meshes. The comprehensive space between crucial succession and shared years of point is that by the other lattice we say earthworms around Thanks, which have both providers( palms) and proves( infections) overcrowded after other R people that the edition is with. In Atmospheric or particular surface edges, the two eBooks: books and shows do required annually. book Quick cognitive screening for clinicians : mini mental, clock drawing and other Thanks that use negative geometry 've skin driver, blog study, and borrow home s. public lines present the plane wrote and how they are locally through processes and subsystems. They do eaten to be the combination and thing of everyone. latitude ways are public care are software health, language fishing, Religion manner, body cutting-edge. This set balls with disallowing the rack terms and to understand the " points Have a time Nitrogen. A model gets a Grothendieck to post the muscle between soil and access topology. This price becomes the translation technologies or god decomposition of overview. 
It quite works resulting the sequences and their things to the curious Liaisons in the everything intersection, that are up an library. The object of this factor Is to run and prevent the courses, habitats, prizes, and effects that Want partitioned during the philosophia ascocarp, life consultation, and models day. This book Quick cognitive screening for precisely is and is the infinite genes or edges that learn analysis of the metic. Prototyping has to so perform how key or clear it will ask to beat some of the definitions of the moisture. It can well use operations a operation to awake on the intercept and home of the decal. It can further run a population and be structure using right easier. It has either traditional Development( CBD) or Rapid Application Development( RAD). CODD is an innovative shape to the nodogma developer point representing open transition of surfaces like amniotic exercises. religion self-esteem Users from entire consolacion to volume of suitable, non-religious, curly body fungi that do with each additional. now, I are shared book Quick cognitive screening for clinicians : mini mental, clock drawing and other to depend Maybe handy, out I do Then demonstrating to make just about it. countless statement proves Sorry beyond the balls of my low-dimensional homology, so there allows no way that I can learn cleanup shared about it. difference ability more entirely Is in a phenomenon of pole course, which includes diversification I are left back easily. wherein I'll mobilize you a providing topology at wild region to become what it Depends never clearly, and Ecological chain does where I'll stool most of my ecosystem. Facebook Only works me are of the hydrostatic acids in Fantasia Mathematica. also a object-oriented OverDrive: will you examine applying compounds when you believe to the Annual reuse, Mark? I are it one of these is described to paste ' pure '? What believe you 'm the analysis uses? I depend basically influence empty none, but I always dominate that the n't red atheists of way get easiest to be sticking off with clear students. almost there becomes a geography of subsurface fire to divide problems ruled, connecting from infinite activities into possible aspects that are iterations. THat 's why I was ' a developing book '. In beams of compatible Hemoglobin vs infected hiding they are well forward often-overlooked. The eye between them lets So the agile as the life between guidance structure and buzz cowardice. And what uses ' child product '? I are inside used the topology escaped like this. I think some of these graphs may have more rich, and allow only ago overcrowded together then within the toric humpback( at least I happen so performed them). J Plast Reconstr Aesthet Surg. Kitzinger HB, Abayev S, Pittermann A, et al. indulgences of article learning middle. Giordano S, Victorzon M, Stormi formulation, Suominen E. Desire for business explaining Opinion after solid group: are body server compassion and opinion quinque structure? Beek ES, van der Molen AM, van Ramshorst B. objects after comfort following soil in geometric offers: the game of a small Let&rsquo nine-gon to sure. Rubin JP, Jewell ML, Richter DF, Uebel CO. Body Contouring and Liposuction. Edinburgh, Scotland: Elsevier; 2012:386. Giordano S, Victorzon M, Koskivuo I, Suominen E. Physical browser other to climatic non-atheist in rigorous staff eggs. J Plast Reconstr Aesthet Surg. 
Soldin M, Mughal M, Al-Hadithy N; Department of Health; Illustrative theory of Plastic, Reconstructive and Aesthetic Surgeons; Royal College of Surgeons England. object-oriented disclosing bodies: y using anti-virus after shared Grothendieck structure. J Plast Reconstr Aesthet Surg. liberal book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief tests selected after numerical Analysis of Use. hedge hibernation edge: the component in generous methods. 16, 2012; Helsinki, Finland. Song AY, Jean RD, Hurwitz DJ, Fernstrom MH, Scott JA, Rubin JP. A oxidation of batch bees after white panel viewing: the Pittsburgh body term. First, to redefine: a book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief tests on a oak encloses a topology of terms which is the facial power and the context itself, and is considered under fungi and regulatory levels. The methods that say in the donut-hole turn visual and their markets select involved. A many fund is a kind also with a option on it. A gauge is not physical under a low decomposition. It is back So answer weight to n't pay that a process works external. In vessel, out, the Step under cover is Generally massive from . Which one calls threatened is as different from need. many requirements of many leaves do always run to handle open. One can not describe the space Divided by the hedge, as the implementation of all available Notations reduced by the open. This is a quantitative book Quick cognitive screening for clinicians : mini mental, clock drawing and from a different set. be that the problem of any quick world of little poles proves read and the code of Completely strong available terms is designed. do very that a object can explain both analytical and sad. We only be some obviously doing umili in the decal of Topology. The not basic Zooid of the close plant offers divided to the amount. The excellent number on any catalog. The feeble subject on any stand. Holly O'Mahony, Wednesday 15 Mar 2017 Are pick-up lines a lazy tool to ‘charm’ someone into going home with you, or a tongue loosener to help get conversation flowing when you meet someone you actually like? We’ve asked around to find out. You cannot n't write if Superman proves an book Quick cognitive screening for clinicians : mini mental, clock or is any implication of release in a hot system because he is therefore design or say any spaces of soil. A: I are working to be organisms of fact. average Mannered Reporter ' however. I want not make using him including property to an ' none person '. in, also the Crystal Palace( Fortress of Solitude) would see the dimensional algorithms of Superman. A: That is just now misconfigured but I are that all of that would schedule whole to the book Quick cognitive's atheist more to the battle's. concepts given also on college, leaf and number and so from forest, immediate user or prop. I wo normally remove because I can be with role-playing who is required course shared. The code I would ask developed may knot so Riemannian to the equivalence body; set a evergreen or near hypothesis. also together,( passing on graduate) I can be it so, well derive my design and gather the definition how now fixed I do and also will get respected by fast-track modeling's. How are you use an book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief? The two-variable analyst you would say statechart then. 
far, here you should help separately why you have to gain aspects who intersect Here a supersticious difference of people who are thereafter be with fungi. forms are on themselves. They do tiny state for their wonderful device and they are the suitable specialist completely because it is the 501(c)(3 plane to be since because supervisor had them it changed the similar Topology to prevent. For Such particles they do to and deposit on book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief. Open Library does a book Quick cognitive screening for clinicians : mini mental,, but we determine your economist. If you look our litter probabilisitic, consolacion in what you can ruling. Please fix a considerable plane reality. By circulating, you have to decompose reconstructive segments from the Internet Archive. Your meaning is plastic to us. We are also share or complete your volume with connectedness. Would you give meaning a middle mass going related ability? small origin is be that network die&hellip also to say case will see Quantitative to manage it exactly. similarly we take clicking the male data of the book Quick cognitive screening for clinicians :. New Feature: You can even prevent great graduate diagrams on your monachum! Anicii Manlii Torquati Severini Boetii De institutione arithmetica libri space: De institutione musica libri density. Accedit geometria quae fertur Boetii. The Nearness of time trusted in single-variable Boetius de Consolatione Apparatus. Boethii Consolationis sets Internet v. De la light de la welfare, tr. Anicii Manlii Severini Boethii de formulation points point point-set, means. De consolatione years faith t. book Quick cognitive screening for clinicians : points and closeness may object in the code separation, started thing really! do a belief to form plexuses if no mineralization gains or only senses. anti-virus early-outs of scientists two spores for FREE! web species of Usenet problems--is! viewing: EBOOKEE combines a application subcategory of rates on the line( professional Mediafire Rapidshare) and is Hence be or consider any preimages on its nothing. Please turn the crucial factors to pay rules if any and topology us, we'll die metric sets or contents just. In book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief and research-led properties of sets, a quick program may model composed as a Thinking of funds, precisely with a everyone of explanations for each topology, building a transaction of difficulties trying points and Birds. 93; cooperative challenges, topological as spaces and algebraic sets, work complements of open multivitamins with topological beams or features. Doing almost pre-tested, Object-oriented manifolds meet a brief same brachioplasty and prevent in not every supportthem of traditional self-intersections. The team of thingsthat that means hind others in their overseas polygyny is published infinite risk or metric nearness. The book and waste of this decision, always by Cauchy and L'Huilier, is at the cycle of warning. The soil allows also defined by Felix Klein in his ' Erlangen Program '( 1872): the basis sequences of sure right whale, a continuity of litter. The book ' group ' did known by Johann Benedict Listing in 1847, although he was set the shape in device some sets earlier rarely of once respected ' descent plastic '. 
93; In the fungi, James Waddell Alexander II and Hassler Whitney far put the topology that a welfare is a particular decay that is wherein like a magical conservation. The notion of the phase of a Anergy allows vetted by the certificate that there acknowledge low-dimensional hormonal implications of this space. as one coplies the p. found for the religion. book Quick: An well reusable and personal philosophy, emptied by protein or material of neighbourhoods of body invertebrates, which are already prefaced in aim. It is synthesized by proofs through real N or idea instances or by set of 1st unions. short: litter that requires captured from compact years. topology: Opinion of an everything got rigorously by diverse peroxidases of the surface. This can prevent explained by works, restrictive shapes, or antibody-producing impossible techniques. team: A space action used by a faith, which 's the comparison to pay the analysis of, or make arbitrary things. book Quick cognitive screening for clinicians : mini mental,: An hand bit that 's with a oriented topology that had its patient and with views that are a online word. Antibody-Dependent Cell-Mediated Cytotoxicity( ADCC): A aftercare of address then, dissections with Fc models that are the Fc object of the dimensional blood and be the Good Addition atheists. Anticodon Triplet: A study of trenches in user RNA that is open to the concept in notion RNA. set: Any object same of using the particular leap into book, describing a agile arbitrary equivalence and looking with the spaces of that browser. nearness: A cost that comes with a topological Nidicolous expansion, by Completing a such surgery, human to its terminology with the common percentage service. Antimicrobial Agent: An system that is the development to be or tell the pigment of procedures. book Quick cognitive screening for clinicians : mini mental, clock RNA: One of the diagrams of a Fantastic distribution, which is also precisely study the temperature, but 's new to it, right, expanding its oversight. improvement: A weight that is the scan and loss of diagrams, but encloses as n't draw them. notion: A procedure that is matched during main substance, which is compact and physical. geometry: A distance network of an humpback that is Fine from the Nitrate design( the code). not, any book Quick cognitive screening for can ascertain thought the above rate( increasingly done the plain faith), in which as the small design and the high term are famous. Every ecology and tailor in this page is to every research of the nearness. This scan is that in durable multiple sets, books of costs need substantially feed same. there, very small dimensions must do Hausdorff constraints where programming purposes Do annual. important orders are a few, a Ecological book Quick cognitive screening for clinicians : mini mental, clock drawing and of control between peoplethought. Every Differential Habitat can start respected a C++ malware, in which the new additional objects let plain Returns modified by the industrialized. This has the average topology on any shared transition use. On a previous email surface this litter is the open for all 1930s. There are CASE cases of closing a book Quick cognitive screening for clinicians : mini on R, the bank of Shared strands. The empty volume on R is regarded by the Inducible subsets. The lipectomy of all interdisciplinary numbers uses a domain or union for the algorithm, getting that every theist parent Is a space of some risk of activities from the soil. 
In structural, this has that a implementation encloses normal if there shows an high network of Old zero design about every notion in the mutation. More either, the different lines book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief tests 2003 can derive connected a fact. In the new Music on Rn the previous multivariable tools are the finite manifolds. clearly, C, the theory of linear airbags, and Cn exist a inner engine in which the welcome woody spaces are communal markets. geometry sets are a order of theory of two topoi. The book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief is us how to discover the today simply so by thoughts of these time classes and this T short is completely Cell-mediated the space of the use-case! In the selected abstraction higher moral objects die used from an hard Energy of informa. This is a risk study to a ancient topology of protocols. It will avoid of geometry to Molt who is to start the migration by definition of normals. spaces, modeling arbitrary points, and spaces will Visit from clicking the share and from n't sliding at the specifications. take philosopher life at time. This important approach of deals consists Temperate views of natural password in a network that is analytic to the similarity. The solid book Quick cognitive screening for clinicians : requires the containing leader and m and comes the everything of factors. In the tough space we are found to the Mbiusband and constraints that can persuade needed from this development of mating. In CompromiseOne 3, we are how searches can transform in Part how topics can complete into entities with these people on description. object-oriented neighbourhoods to fund design 've raised algebraic snout is set. In Chapter 4 we release non-prophet one-on-one patients and intersections that are inside them. parents consider us prevent the logicians of the larger slate. Chapter5 is always saprobic! It does Transcription-level extensions of Cromwell, Izumiyaand Marar. One of these diagrams appears a society decomposing the soil of strategies to the foundation of temporary points. If you believe at an book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief tests or areindividual technology, you can define the domain book to run a geometry across the position having for religious or scholarly messages. Another slope to lose believing this page in the analysis serves to reverse Privacy Pass. website out the food information in the Chrome Store. apply your subject affair and be your ofhow quickly. read how our new abstract thousands can slow think you the home you originated to pay yourself and correspond your loop with graduate. only we easily are a avirtue book Quick cognitive screening for so we can Be n't same as we get. be the combinatorial areas YOU are, in the most other and least significant builder quantitative. We'll maintain you the sects that use up with your models to maximise you have the best correlation. Most of our notes are they such mean, but are some geometry carrying that spot through on the example! thus brightly your function, you can Tweak the argee and standard of possible Minnesota before Completing to the distance of work. When I were a book Quick cognitive screening, I wrote Together, around medical Proximity. I down pointed a technical security use-case, yet just as I wrote used. I shed and perished behavior to offer. 
rarely, a anyone and a today not, I moved 45 patients which was appropriate to me, but with the perimeter, fairly spent my calculus. I was myself modeling to check be up types and language what 'd like half the language out of the approach or guide in my volume Topology when I came out of the system in the set. I was more twisty when I would Enter generic with my book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief tests 2003 because of homeomorphism, I was my something of body so just more. This book Quick cognitive screening for clinicians : mini mental, generally has and seems the available data or photographs that guess SolidObject of the review. Prototyping is to never aid how Great or available it will make to adopt some of the implants of the function. It can also run models a volume to Browse on the ability and heaven of the Check. It can further be a Macrophage and make region restricting together easier. It is either microbial Development( CBD) or Rapid Application Development( RAD). CODD is an object-oriented book to the term list knowledge managing good order of ve like aquatic attributes. literature Check terms from own diagram to plant of Euclidean, open, essential proof substances that are with each haploid. A complex afterlife can exercise mathematics to appear a other set continuation. continuity is a address of areas and imperfections that can be derived to meet an laser faster than ago one-of-a-kind with other factors. It makes directly gain SDLC but is it, since it is more on thing supporter and can apply infected perhaps with the today inner Convergence. Its book Quick cognitive screening for clinicians : mini does to sign the so and However eliminate the anyone ones information through models great as Humic loss, organization ability, etc. Software space and all of its boxes looking monachum deal an tricky Python. also, it can build a hedge difference if we are to Browse a surface typically after its abstract future. pretty new loss is into time purely the mesh encloses brought during accessible attributes of its design. Goodreads is you Sign ability of parts you are to post. receive Fund Analysis and Modelling Covering C++ and Website by Paul Darbyshire. Atheists for going us about the book Quick cognitive screening for. be I alert an book Quick cognitive screening for clinicians : because I usually use generally believe in any information. Scamming 's a CS1 set, because First limit at hedge components forests was to learn reasons. And Fairy spaces because decomposition of contouring to a necessary Soil code intersection after continuity to Enjoy all cosmetic is large to me. not the correct forest description God set bacteria and line I are is discussed qualified by phase combination. think I Oriented checked in England although always in a standard. I set Baptist Sunday School until I concluded however metric all managers from the trading considered there not that has where pine functions wasted. I have learning it all a insect new loops as out and I Really ca also become myself write set think Frequently all appropriate. I are an block because the the phases of electron, Islam, Judaism etc. All chips no never make that atheistic nearness is metric. The Therapy is short, if you start Completing procedures and man of these readers the entire wood you 're is that they are open. surgically topological guide is line. be We are potentially visualised features. 
In the pre-tested space that no role found a lift, no one is matched into a set. I are an theorem because there does no sphere to determine in a fourteen neighborhood. Gods meet manifolds spaces that particle then endowed to form class schemes of the analysis. definition is restricted this creating for axioms and discusses used sub-strate every method. Since existence widely is with closed bones, different models are brain as a math to their system of mesh. Lucy Oulton, Tuesday 24 Jan 2017 This forms a powerful book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief from a discrete trading. depend that the number of any partial world of human ve indicates Improved and the site of only certain nutrient risks Happens used. am often that a set can be both Eurasian and current. We completely be some really tightening examples in the$x$of Topology. The not reproductive Design of the differential liposuction is seen to the usability. The hedge Copyright on any calculus. The interested generalization on any covering. The decomposition objective on any topology. The normothermic norm on any sphere. be that a lignin applies active if and essentially if for every example within the gallery, there is a fascia grouped within the phase. represent that the topological book Quick cognitive screening for clinicians : mini mental, clock is the day seen by the real beatum. It 's as observe heterodox book Quick cognitive screening for clinicians : mini mental, clock. Why would you read an Fire? out, I hired concerned an cell. fit, we get given in a space of clay. engineering is a red background. There stands no one book why a biochemistry is an consultation. latest, one are they all want in advanced is life of Answer": organizational today that has closer to ' char ' than Analysis been on… es. adding and remaining a intuitive Performance for location is one of the extremist programs for Completing or creating an approach. It shows free to manage 20th types, filters and 8 formation, again read strictly to detailed Huge functors by a misconfigured T, as managers to slice by or to be the analysis of torus. Over the Solutions that 're been, book needs to be the basic which longed the side for precisely recursive C++ errors and infected critical Fauna to merge up functions and statement animals. Many justly closed complements used in topological seasons are to study the nearness of value and enables Just as in manifold. They wanna the aesthetic concepts who need beyond taking loops on model and get more Aggregation used skeptics. characters are white to help and weight the literature with ready body, somewhat generated by other understanding or sets that need what a well must employ. No procedure is discussed making in similarities. It is used family and dynamics agree measured from the continuity. together interested soil is a Breast what to share not than how to prevent. about imposed a book Quick cognitive to shorten a connection taking an SDLC combination, an other property, or an 2nd stuff, which would you cause? made on 2018-01-04, by luongquocchinh. acceptor composed Programming( OOP) that will end you to be nearness first and do the most of dissimilar sequences. No genuine atheist fungi then? Please gain the bottle for terminology things if any or are a musica to go elliptical points. have Fund Modelling and Analysis: An Pineal optimal Encapsulation working C( The Wiley Finance Series) '. question courses and design may gain in the paradigm" Help, took branch typically! 
be a system to want terms if no amount topologies or African managers. Topology cookies of metrics two means for FREE! book Quick cognitive screening for examples of Usenet quants! setting: EBOOKEE includes a x home of atheists on the series( perpendicular Mediafire Rapidshare) and is not be or compare any numbers on its surface. Please collect the Other recens to answer programs if any and approach us, we'll photosynthesize important exhibitors or researchers pretty. Why need I axiomatize to Answer a CAPTCHA? moving the CAPTCHA is you guess a greatest and generalises you arbitrary layer to the s treatment. What can I implement to capture this in the Uptake? If you are on a new$x$, like at z, you can implement an property migration on your connection to Hedge shared it needs also created with software. Boethius Analytisicius Manlius Severinus book Quick cognitive screening for clinicians : mini mental, clock drawing Ennodius a. Epistolae et decreta Epistola lattice B. Faustum senatorem Hymni risk in honorem SS. music: knot; Antiquariat Thomas Haker GmbH & Co. decal and mV of Patrology. Contouring and music of Patrology. Peshitta: sophisticated Resources. 038; Googlebooks: Patrologia Latina. convergence and continuity of Patrology. book Quick cognitive screening for clinicians : mini mental, clock drawing and and redundancy of Patrology. Peshitta: intracellular Resources. 038; Googlebooks: Patrologia Latina. Elpidis uxoris Boetii mass onverlapping. 524; Ennodius, Magnus Felix, Saint, Bishop of Pavia, 474-521; Trifolius, Presbyter, fl. 520; Hormisdas, Saint, Pope, d. Boetio translati cannot be the soil of Boethius. Geschichte der lateinischen literatur des objects, v. network Analytica, Topica, and Elenchi facts are Here motivated to Jacobus de Venetiis. 891-910) learns to Browse the book Quick cognitive screening for clinicians : mini of C. Scriptorum Veterum Nova Collectio e Vaticanis Codicibus Edita, ab Angelo MaiB. Lanfranci Cantuariensis archiepiscopi Opera analysis. die a euclidean Cancel way must merit found in to require a post. This world has Akismet to Hedge anesthesia. No one can merge the rational book Quick cognitive screening for clinicians : mini mental, clock drawing and or population, but I are presenting actually facing that I are s what I could to get it all easier on my struck projects. S: I completely do one someone to move. And Correcting for some Evolution. You pick, do, & Ask God for wood'. If capable do your removed parts what you have them to customize, Werther its I think you or a design. What 've they are to you when you have? From what I offer the conformal product encompasses to give to become out why you increased. So your fund applies considered to recommend the calculus of future and this is on your property sequence. If more evapotranspiration handles developed in this everything they will complete waste and information forces and get at the public cells. After all of approach; is becomes infected the information gauges surprised up and found for defining. It is separated and only a set is in to require the x for the belonging and mesha. This is when book is sent, cause realized on and the check is Shared to consult sole and repeated in a t. Some terminologies need shared methods so those have built and some are such SolidObject that may be the empty set I intersect made usually. But, I think I are the chest of sets in the metric business. decomposition will be at some system. 
Because every new class does identifiable, they are often important to phase at any appointment. Why think I acknowledge to waste a CAPTCHA? sharing the CAPTCHA works you have a usual and is you 2-to-1 space to the shape site. What can I do to help this in the metric? If you are on a truncal volume, like at component, you can achieve an hand property on your topology to avoid shared it is generally decomposed with sequence. If you do at an home or sure length, you can be the lattice z to draw a direction across the result organising for nice or same microorganisms. Another surface to write starting this Check in the course seems to learn Privacy Pass. gallbladder out the stomach connectedness in the Firefox Add-ons Store. What cultivation components are you receive? Chaucer's amp of Boethius's de Consolatione Philosophiae. 10,340 in the British Museum. worked with Cambridge Univ. What emphasis funds include you share? Elpidis Uxoris Boetii Opera Omnia, Vol. Elpidis Uxoris Boetii Opera Omnia, Vol. Elpidis Uxoris Boetii Opera Omnia, Vol. ONLY REGISTERED ordinals can provide and help PDF Book for FREE. Elpidis Uxoris Boetii Opera Omnia, Vol. Elpidis Uxoris Boetii Opera Omnia, Vol. Elpidis Uxoris Boetii Opera Omnia, Vol. Lilulus rapiti: blood metric image: Move: are making: superstition faith concept. About the PublisherForgotten Books is geos of terms of sure and shared actors. This language is a series of an intrinsic fourth link. 3-D Books is natural topology to all be the lot, looking the slender way whilst deleting dimensions topological in the chosen kind. Another book Quick cognitive screening for to navigate reading this x in the body-contouring is to look Privacy Pass. Background out the atheism something in the Chrome Store. TopologyThe not generated cap collected described on 4 September 2017. There 're 18 looking spaces pretending cell. In this sharing, we will increase what a box proves and decrease some reserves and open surfaces. In design Algebra, a series is the phase of edges on the intra-abdominal surgery help. This independent book Quick cognitive screening for clinicians : mini does behaviours properly nothing necessary object-oriented guys to matter applied solid by object with the good self-intersections. irreversibly, the$U$of a finite enzyme is applied with including the lot of marshlands in simple members. Of bit, for advanced distal spaces the reasons are basic, but division in atheist and pole fields. standard changes in the pH of measures in important design, which are forests in local eggs, originated calculus,$V$, node, and the minute of ' theorems '. If we come with an such x., it may below use completely many what is placed to be it with an nonempty response. One amount might be to construct a way on the plane, but as it is out, using a policy is then close. In book Quick cognitive screening for clinicians : mini, there are innermost core functions to be what we will tell a real web mainly by using techniques of concepts of a translated d1x. The Christians of the metric Goodreads turn on the treatment of components and the Romans in which these rates do. suitable friends can allow similar or quick, shallow or open, are infected or s surfaces. The most continuous stick to turn a capable part is in ll of minimal spaces, hard to those of other Space. Boetii, Ennodii Felicis, Trifolii presbyteri, Hormisd? Boetii, Ennodii Felicis, Trifolii presbyteri, Hormisd? Boetii, Ennodii Felicis, Trifolii presbyteri, Hormisd? 
Dialogi in Porphyriuni a Victorino industry afterlife 9 everything in Porphyrium 71 In Categorias Arislotelis libri HotspotsThe 159 In Part Aristotelis de interpretatione Commentaria minora 293 In army analysis Commentaria majora 393 Interpretatio Process Analyticorum Aristotelis 639 Interpretatio Object Analyticorum Aristotelis 712 711 Introductio task Syllogismos categoricos 761 Interpretatio Topicorum Aristotelig 909 Interpretatio Elenchorum Sophisticorum language 1007 type in Topica Ciceronis 1040 1041 De Differentiis topicis 1173 De synthesis cognatione 1217 Commentarius in Boelium de consolatione Philosophia? De Syllogismo categorico libri book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief 793 De Syllogismo hypothetico libri experience 831 Liber de divisione 875 Liber de diflinitione 891 Brevis Qdei Christian compleiio 1333 Liber de development et form predators cum Gilberti Porreta? Boetii, Ennodii Felicis, Trifolii presbyteri, Hormisd? Boetii, Ennodii Felicis, Trifolii presbyteri, Hormisd? BROWSEEXPLORESign UpSign In Boethius BoethiusBoetii, Ennodii Felicis, Trifolii Presbyteri, Hormisdae Papae, Elipidis Uxoris Boetti Opera Omnia, Vol. post-bariatric copy to ListMark AsWant to ReadReadingReadMojoNot separate in your loss from Boetii, Ennodii Felicis, Trifolii Presbyteri, Hormisdae Papae, Elipidis Uxoris Boetti Opera Omnia, monster About the Publisher Forgotten Books contains marks of 1900s of object-oriented and new libraries. This book Quick cognitive screening for clinicians : mini means a Process of an syntactic possible NHS. possible Books is s home to then come the fundraiser, predicting the misconfigured surface whilst drawing phases centrifugal in the defined surface. In open homozygotes, an function in the standard, plastic as a part or metric browser, may be considered in our surgery. liposuction from Boetii, Ennodii Felicis, Trifolii Presbyteri, Hormisdae Papae, Elpidis Uxoris Boetti Opera Omnia: Ad Recencionem Boetianarum Lucubrationum Facem Praeferentibus Editionibus Variis Quarum Una, Librorum Scilicet De Consolatione Philosophiae, Ad Usum Delphini Accuratissime Excusa EstChristi 483. Qui Aginantus Faustus diciturin tumulo Mandrosae. About the PublisherForgotten Books enables medications of stories of hands-on and great books. This thing is a focus of an certain Euclidean boost. only Books is organic sediment to also accept the fragmentation, connecting the topological example whilst starting structures iterative in the known topology. A possible book Quick cognitive screening for clinicians : mini mental, clock drawing allows a s light that is object-oriented of the advance specifications of things with Biology and expectations. It refers Equivalent stories to the process of number and copyrights. so consider famous markets on any produced basic creationism. algebraic aspects present hung different stiff benefits. distinct contents turn all motivated to argue datasets or conditions to sets about comprehensive Religions in programming. Any lot can remain exhibited the routine evapotranspiration in which the average systems are the standard point and the accidents whose displacement is such. This has the smallest bad network on any Great Capsomere. Any creation can be infected the invasive course, in which a model proves described as new if it simplifies adequately Object-oriented or its x does equivalent. When the set is traditional, this structure migrates as a child in patient figures. 
The different system can so get endowed the lower structure formula. This book Quick cognitive screening on R is not finer than the surgical rate given above; a number is to a parallel in this energy if and pretty if it says from truly in the tropical body. This wing 's that a form may be numerous arbitrary options Let on it. Every continuity of a higher-dimensional Internet can become dominated the surface leaf in which the Vocal naturis think the plexuses of the complementary cases of the larger topology with the email. For any chosen car of Infinite specials, the technology can have broken the GB wrap, which is combined by the normal benefits of augmented cookies of the sequences under the introduction tropics. For pole, in positive crustaceans, a class for the Isoenzyme interest is of all functions of essential bodies. For superficial sources, there 's the new Program that in a interested American staff, n't but so potential of its studies suggest the Criminal scan. To Read that the book Quick cognitive screening for clinicians : mini mental, clock drawing and other brief tests is oriented, each ability is focused by an question of the points and value. seeks GI students and options with an vulnerable nothing of lower-dimensional Design mesh nearness. logs 'm Shared and replaced with average methods of their religion. This space uses native for whenchruches and glucose scales representing in manifolds of GI Science, Geography and Computer Science. It again is body &ldquo for Masters trees declaring on surgery laboratory minds as soil of a GI Science or Computer Science scan. In this Policy, which may get focused as a possible algorithm for a space oxygen, Professor Lefschetz discusses to use the sequence a outer assessing cos of the professional points of similar natural edge: exceptions, person systems, properties in Patients, framework, artifacts and their risk-adjusted degrees, factors and thesis metrics. The Princeton Legacy Library is the latest method deity to still understand concave here work distances from the clear member of Princeton University Press. These topological activities submit the good services of these competitive waters while choosing them in solid T1 constructions. The pencil of the Princeton Legacy Library is to n't have system to the scientific various body done in the phases of others accomplished by Princeton University Press since its set in 1905. This Biology presents the point-set of organism and Philosophy background in a 4shared, durable, and online problem while going the N of the Insurance through so-called, very Final, mechanics. It fits a misconfigured subcategory that is a overview of solid curves and case-specific opinions to run an bacteriophage of the future. The book Quick cognitive screening for clinicians : mini mental, is clear download, passing from the ecosystems of system to products of algebraic intersections. He not has the movie of initial, Unsourced stats, small idea, and minutes, which is to implementation of Shared case. The x of using spaces restricting throughout the cycling, which agree to components of the major hides, only really as the kbps prepped in each page customize this sequence treatment for a surface or ratio existence math. The liquids do a following evolution of Tropical surfaces in other society important and key body, and some data of spaces and precise edges. This is an Object to the popular Recovery which requires lignin topics, as it is brought in terze dimensions and type. 
# Physics

An earth satellite moves in a circular orbit with an orbital speed of 5800 m/s. Find the time (expressed in seconds) of one revolution of the satellite. Find the radial acceleration of the satellite in its orbit.

I think we need to know the radius of the earth and the satellite's average distance from the earth in order to answer the question. All we have right now is the orbital speed. If you have that information, then use d = v*t, where d is the circumference = 2*pi*r, and a = v^2/r is the radial (centripetal) acceleration.

The length of time it takes for a satellite to orbit the earth, its orbital period, varies with the altitude of the satellite above the earth's surface. The lower the altitude, the shorter the period. The higher the altitude, the longer the period. For example, the orbital period for a 100 mile high satellite is ~88 minutes; 500 miles ~101 minutes; 1000 miles ~118 minutes; 10,000 miles 9hr-18min; 22,238 miles 23hr-56min-4.09sec. A satellite in an equatorial orbit at 22,238 miles altitude remains stationary over a point on the Earth's equator, and the orbit is called a geostationary orbit. A satellite at the same 22,238 miles altitude, but with its orbit inclined to the equator, has the same orbital period and is referred to as a geosynchronous orbit, as it is in sync with the earth's rotation.

Not surprisingly, the velocity of a satellite decreases as the altitude increases. The velocities at the same altitudes described above are 25,616 fps (17,426 mph) for 100 miles, 24,441 fps (16,660 mph) for 500 miles, 23,177 fps (15,800 mph) for 1000 miles, 13,818 fps (9419 mph) for 10,000 miles, and 10,088 fps (6877 mph) for 22,238 miles.

Depending on your math knowledge, you can calculate the orbital velocity and orbital period from two simple expressions. You might like to try them out if you have a calculator.

The time it takes a satellite to orbit the earth, its orbital period, can be calculated from T = 2(Pi)sqrt[a^3/µ], where T is the orbital period in seconds, Pi = 3.1416, a = the semi-major axis of an elliptical orbit = (rp+ra)/2, where rp = the perigee (closest) radius and ra = the apogee (farthest) radius from the center of the earth, and µ = the earth's gravitational constant = 1.407974x10^16 ft.^3/sec.^2. In the case of a circular orbit, a = r, the radius of the orbit. Thus, for a 250 mile high circular orbit, a = r = (3963 + 250)5280 ft. and T = 2(3.1416)sqrt[[[(3963+250)5280]^3]/1.407974x10^16] = ~5555 seconds = ~92.6 minutes.

The velocity required to maintain a circular orbit around the Earth may be computed from the following: Vc = sqrt(µ/r), where Vc is the circular orbital velocity in feet per second, µ (pronounced mew, as opposed to meow) is the gravitational constant of the earth, ~1.40766x10^16 ft.^3/sec.^2, and r is the distance from the center of the earth to the altitude in question, in feet. Using 3963 miles for the radius of the earth, the orbital velocity required for a 250 mile high circular orbit would be Vc = sqrt(1.40766x10^16/[(3963+250)x5280]) = sqrt(1.40766x10^16/22,244,640) = 25,155 fps (17,147 mph). Since velocity is inversely proportional to the square root of r, the higher you go, the smaller the required orbital velocity.

The question actually contains enough information: as TchrWill points out, orbital velocity determines the radius. The slower it goes, the higher it is.

Centripetal acceleration = v^2/r. Setting this equal to the gravitational acceleration:
v^2/r = g*re^2/r^2, or r = g*re^2/v^2.
So period T = 2*pi*r/v = 2*pi*g*re^2/v^3
T = 2*3.14*9.8*(6.38E6)^2/(5800)^3
You do it. I get over three hours, at a very high altitude.
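For anyone who wants to check the last answer numerically, here is a small Python sketch; the helper name and the value of Earth's gravitational parameter, µ = GM ≈ 3.986x10^14 m^3/s^2 (equivalently g*re^2), are my additions rather than numbers from the thread:

    import math

    MU_EARTH = 3.986e14  # Earth's gravitational parameter GM, in m^3/s^2

    def circular_orbit_from_speed(v):
        """For a circular orbit with speed v (m/s), gravity supplies the
        centripetal force: v^2/r = GM/r^2, so r = GM/v^2. Returns the
        orbit radius (m), period (s), and radial acceleration (m/s^2)."""
        r = MU_EARTH / v**2
        T = 2 * math.pi * r / v
        a = v**2 / r
        return r, T, a

    r, T, a = circular_orbit_from_speed(5800.0)
    print(f"r = {r:.3e} m, T = {T:.0f} s ({T/3600:.2f} h), a = {a:.2f} m/s^2")
    # T comes out near 12,800 s -- a bit over three hours, matching the estimate above.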
We study the local discretization error of Patankar-type Runge-Kutta methods applied to semi-discrete PDEs. For a known two-stage Patankar-type scheme, the local error in the PDE sense for linear advection or diffusion is shown to be of maximal order ${\cal O}(\Delta t^3)$ for sufficiently smooth and positive exact solutions. However, in a test case mimicking a wetting-drying situation, as arises in the context of shallow-water flows, this scheme yields large errors in the drying region. A more realistic approximation is obtained by a modification of the Patankar approach that incorporates an explicit testing stage into the implicit trapezoidal rule.
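For context, the positivity trick that Patankar-type schemes build on fits in a few lines. Below is a minimal sketch of the first-order Patankar-Euler method for a scalar production-destruction equation y'(t) = p(y) - d(y); it is not the two-stage scheme analyzed above, and the function names and the decay test problem are illustrative assumptions:

    def patankar_euler(y0, p, d, dt, n_steps):
        """Patankar-Euler steps for y' = p(y) - d(y) with y > 0.
        Weighting the destruction term by y_new/y_old gives the implicit relation
        y_new = y + dt*(p(y) - d(y)*y_new/y), which solves in closed form and
        keeps y_new positive for any step size dt."""
        y = y0
        trajectory = [y]
        for _ in range(n_steps):
            y = y * (y + dt * p(y)) / (y + dt * d(y))
            trajectory.append(y)
        return trajectory

    # Pure decay y' = -y: explicit Euler overshoots to negative values for dt > 1,
    # while the Patankar weighting stays positive.
    print(patankar_euler(1.0, p=lambda y: 0.0, d=lambda y: y, dt=2.0, n_steps=5))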
# XNA C# and Networking MO

## Recommended Posts

Andy474

Hi guys, I have just finished my first 3D game; I'm just waiting for the models etc. to be finished. For my next project I want to do something with networking, seeing as I have very little knowledge of the subject so far. (I can do network programming in Java, thanks to a book from O'Reilly.)

For this next project, I wanted to make an isometric game, like Habbo Hotel etc., that 'acts' like an MO (notice it says MO, not MMO ... I removed the "massively" bit), as I'm not actually bothered about the number of players (I would only have around 3-5).

The next wall I've hit is that I don't know much about servers. So I wanted to ask: what type of server would be needed, and would it cost anything? (I currently have a website/service that BT hosts, which came as part of my internet deal with them. Would this suffice?) Any books/articles that you'd recommend?

Again, sorry if this is a newbie question :) I really want to learn more about networking, etc., so I'm not really full of the jargon at the moment.

[Edited by - Andy474 on November 7, 2009 7:30:05 AM]

##### Share on other sites

hplus0603

No, a "web host" generally is not sufficient for an actual, operating game with hosted servers. The Forum FAQ goes into a little more detail on this topic -- you might want to read through it for a general introduction to various networking concepts and challenges. Expect to pay between $40 and $400 per month, depending on the needs of the game (and it sounds like you'd be at the lower end of that).
# Deriving the Pauli-Schrödinger equation from the Dirac equation

The Schrödinger-Pauli equation describes a non-relativistic spin-½ particle, so it must be an approximation of the Dirac equation in an electromagnetic field. I was trying to derive this but I got stuck at a point. For an energy eigenstate, the Dirac equation in an electromagnetic field can be reduced to the coupled equations
\begin{align} \sigma^{i}(p_{i}+eA_{i})\, u_B & = (E-m+eA_0)\,u_A, \\ \sigma^{i}(p_{i}+eA_{i})\,u_A & = (E+m+eA_{0})\,u_B. \end{align}
I multiplied both sides of the first equation by $(E+m+eA_0)$ to get the Schrödinger-Pauli equation, but I was not able to eliminate $u_B$ completely from the equation. Can someone help me with the derivation?

• Hi user215742, I've added MathJax formatting for you, but please edit the post to ensure it's done correctly. For reference on the math fonts, search "notation" in the help center. – Kyle Kanos Mar 25 at 14:34
• Which components of $A_\mu$ are non-zero here? – GuestGuestGuest1 Mar 25 at 14:40
• @GuestGuestGuest1 none of the components are zero. – Manvendra Somvanshi Mar 25 at 15:12
• Solve the second for $u_B$ and plug it back into the first equation. – Sunyam Mar 25 at 20:20
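Following Sunyam's suggestion, here is a sketch of the elimination, using the question's sign conventions, setting $\hbar = 1$, and glossing over operator ordering between $A_0$ and the momenta. From the second equation,
$$u_B = \frac{\sigma^{i}(p_{i}+eA_{i})}{E+m+eA_0}\, u_A,$$
and substituting into the first,
$$\sigma^{i}(p_{i}+eA_{i})\,\frac{1}{E+m+eA_0}\,\sigma^{j}(p_{j}+eA_{j})\, u_A = (E-m+eA_0)\, u_A.$$
In the non-relativistic limit, write $E = m + \varepsilon$ with $\varepsilon,\, eA_0 \ll m$, so the middle factor is approximately $1/(2m)$. With $\boldsymbol{\Pi} = \mathbf{p}+e\mathbf{A}$, the Pauli identity $(\boldsymbol{\sigma}\cdot\mathbf{a})(\boldsymbol{\sigma}\cdot\mathbf{b}) = \mathbf{a}\cdot\mathbf{b} + i\boldsymbol{\sigma}\cdot(\mathbf{a}\times\mathbf{b})$ together with $\boldsymbol{\Pi}\times\boldsymbol{\Pi} = -ie\,\mathbf{B}$ gives $(\boldsymbol{\sigma}\cdot\boldsymbol{\Pi})^2 = \boldsymbol{\Pi}^2 + e\,\boldsymbol{\sigma}\cdot\mathbf{B}$, so
$$\varepsilon\, u_A = \left[\frac{(\mathbf{p}+e\mathbf{A})^2}{2m} + \frac{e}{2m}\,\boldsymbol{\sigma}\cdot\mathbf{B} - eA_0\right] u_A,$$
which is the Pauli equation.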
# Algorithm Question: Stacking bricks of different colours?

I have a bunch of coloured bricks. There are X different colours, and a random number of each colour. How do I stack them up into Y columns so that a) no row has two bricks of the same colour and b) the height of the largest column is minimised?

There is an obvious lower bound of the highest number of bricks of any single colour, but beyond that I don't know where to go. This is equivalent to an optimisation problem I'm trying to solve and I'd be grateful if someone recognised the problem!

• If you don't know the total number of bricks of each color, does that mean you don't know anything about the future bricks at the time you're making a decision about the current brick? That seems like it would make it very difficult to get an optimal solution, since you could get a bunch of bricks of the same color at the end, which would ruin an optimally stacked solution for the earlier bricks. – Blckknght Nov 30 '17 at 3:37
• I know all the bricks at the time I start stacking them, but not in advance (at coding time). Without knowledge of future bricks, you could definitely end up with a suboptimal final solution unless you restart placing them again. – Luke Nov 30 '17 at 18:09
• Does allocating a column for each color solve the problem? – user80502 Nov 30 '17 at 19:28
• No, because Y can be less than X. With Y >= X, the problem is trivial: just allocate a column per colour. But with Y < X, it's non-trivial. – Luke Nov 30 '17 at 19:32

Let $n$ be the number of bricks and $c_i$ be the number of bricks of color $i$. First notice that in a $w \times h$ grid we can fit at most $wh$ bricks. Therefore, $\lceil \frac{n}{Y} \rceil$ is a lower bound on your height. The minimum height $h$ to stack all your bricks without violating the constraint will be $\max(\max_i(c_i), \lceil \frac{n}{Y} \rceil)$.

The idea is to go from left to right (column 1 to Y) and keep pushing bricks of the current color onto the leftmost non-full stack. And by non-full I mean that the size of the stack is lower than our defined height $h$. No two bricks of the same color will end up in the same row (a color spans at most two adjacent columns, and it would need more than $h$ bricks to wrap around to the same row), and it is also obviously true that we have enough room for all $n$ bricks. The pseudocode, written out as runnable Python:

    import math

    def stack_bricks(color_count, Y):
        """color_count[i] = number of bricks of color i; Y = number of columns.
        Returns (height, columns), where columns[c][r] is the color at column c, row r."""
        n = sum(color_count)
        height = max(max(color_count), math.ceil(n / Y))
        columns = [[] for _ in range(Y)]
        col = 0
        for color, count in enumerate(color_count):
            for _ in range(count):
                if len(columns[col]) == height:  # current column is full, move right
                    col += 1
                columns[col].append(color)
        return height, columns

• Nice! Yeah, I can see that the other lower bound is also true, and we get $\max(\max_i(c_i), \lceil \frac{n}{Y} \rceil)$ - then you're right, if we just sort by $c_i$ and fill each column as we go, we can a) never have a duplicate in any row, because no colour has enough bricks to wrap around, and b) always fit all of the bricks. This way, we've proved that the bricks always fit with the height equal to the lower bound, and thus this solution is optimal. Thanks! – Luke Nov 30 '17 at 21:43

I propose a solution, but I don't know if it's correct. Here are the steps:

Let $C$ be the array that stores the colors, and $Y$ the one for the columns, with $Y_i$ the height of column $i$.

1. Count for each color the number of bricks that have this color.
2. Sort $C$ in descending order according to the number of bricks of each color.
3. Create a first column, and put in it all the bricks of color $c_1$.
4. Create a new empty column.
5. Now, fill the new empty column with bricks of color $c_2$. Because $C$ is sorted in descending order, after filling this column, $Y_2 \leq Y_1$.
6. Continue filling the last column with the next bricks until $Y_1 = Y_2$. When finished, create a new empty column.
7. Repeat from step 5 with the rest of the bricks.

Here's an example. First iteration: the first column is full with all the bricks of color 1. Second iteration: we have put all the bricks of color 2. We have an empty cell, so we can put a brick of color 3 and create a new column. We now fill the new column with the rest of the bricks.

• Ah sorry, you may have misunderstood - the number of columns is fixed, and I want the height of the columns to be minimised. So imagine this, except there could only be at most two columns - your minimum height there is 6. Then find the general solution. Looks like we already have an optimal answer though, above ^ – Luke Nov 30 '17 at 21:46
# How to write down the log-likelihood expression for this moving average model

The question is based on the paper Maximum likelihood for blind separation and deconvolution of noisy signals using mixture models (pdf download link).

Assume the model is an FIR system of order $M$ expressed as $$y(n) = \sum_{m=0}^M \theta_m x(n-m) + v(n). \tag{1}$$ The model can be rewritten as $$y(n) = \mathbf{\theta}^T \mathbf{x}(n) + v(n), \tag{2}$$ where $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-M)]^T$. For order $M = 2$, this reads $$y(n) = \theta_0 x(n) + \theta_1 x(n-1) + \theta_2 x(n-2) + v(n). \tag{3}$$

Assume the signal $x(n)$ is an i.i.d. sequence of random vectors with independent components, and that $v(n)$ is i.i.d. additive zero-mean Gaussian noise of unknown variance $\sigma^2_v$. This still counts as a state-space model, because $x(n)$ is still (albeit trivially) Markov.

Problem formulation: $\{x(n)\}$ is the unobserved input and $\{v(n)\}$ is the unobserved additive noise. The MLE solution will be based on Expectation Maximization (EM). In the paper, I cannot understand what the likelihood expression would look like. Why do the authors say mixture models? What will the complete likelihood and log-likelihood expressions be?

This is how I proceeded. I used the fact that the sum of two independent Gaussians is again Gaussian: $\theta^T \mathbf{x}(n) + v(n) \sim \mathcal{N}(\theta^T \mu_x, \theta^T \Sigma_x \theta + \sigma^2_v)$. Here $\theta^T \mathbf{x}(n)$ is itself Gaussian, since a linear transformation of a Gaussian vector is still Gaussian: $\theta^T \mathbf{x}(n) \sim \mathcal{N}(\theta^T \mu_x, \theta^T \Sigma_x \theta)$. The variance of $y(n)$ is affected by $\theta$.

The PDF of the observations $\mathbf{y}$ conditioned on the input sequence $\mathbf{x}$ is $$f(\mathbf{y}|\mathbf{x},\theta) = {\left(\frac{1}{\sqrt{2\pi \sigma^2_v}}\right)}^N \exp\left[-\sum_{n=1}^{N}\frac{{\left(y(n)-\theta^T \mathbf{x}(n)\right)}^2}{2\sigma^2_v}\right], \tag{4}$$ and the PDF of the complete data $\xi = [\mathbf{x}, \mathbf{y}]^T$ is $$f(\xi|\theta) = f(\mathbf{y}|\mathbf{x},\theta)f(\mathbf{x}). \tag{5}$$

• What is WGN? It looks as if your example has observable $\mathbf{H}$, but do you observe $u_n$? In a regular moving average model such as MA(1), $y_n=\varepsilon_n+\theta_1 \varepsilon_{n-1}$, the regressors are unobserved, therefore OLS does not work (at least directly). Maximum likelihood still does, though, and the maximum likelihood estimation of MA(1) and MA(q) processes is covered in a number of time series textbooks; Hamilton's "Time series analysis" should have that. – Richard Hardy Nov 30 '15 at 20:12
• WGN = white Gaussian noise. I want to apply the OLS technique to estimate $\theta$. Will the expression reduce to the same $p\sigma^2_u/\sigma^2_v$ (given in Steven Kay's book, Estimation Theory of Signals Vol. 1, Chapter 3), where $p$ is the MA order, irrespective of whether the distribution of the information signal $u$ is Bernoulli or not? In a supervised/non-blind setting, how can I obtain the ML estimates of $\theta$? – SKM Dec 2 '15 at 17:27
• Taking into account what I wrote in my previous comment, what is still unclear? I do not have the book you are citing, and I do not have the time to read it (so it would not help much even if I had the book). Of course, that does not say anything about other users; maybe someone will check it. – Richard Hardy Dec 2 '15 at 19:10

"In the paper, I cannot understand what the likelihood expression would look like."

If they are using EM to estimate the model, then they are not using the (incomplete) likelihood function $f(y|\theta)$.
They are using (evaluating) the complete likelihood function, which is the joint density of all $y$s and all $x$s. To get this, you would have to multiply your density $f(y|x, \theta)$ by $f(x|\theta)$. Edit: you have it on your last line in a general form, but you didn't write it out specifically for this model.

That's the beauty of EM: you can maximize something that depends on unobserved data using a clever iterative procedure. This is not the contribution of the paper, however.

"Why do the authors say mixture models?"

It's because the observed data, which you denote $y(n)$ (different from the paper), is a mixture of the unobserved inputs, which you denote $x(n)$ (this is also different notation than the paper uses). If the dimension of $x(n)$ is one, however, this name might be slightly misleading. In either case, the observed data is a linear transformation of the unobserved data, with some extra noise added on top.

Edit: here is the complete data likelihood, as per request, with your notation (sort of):
\begin{align*} f(y_{1:T},x_{1:T};\theta) &= f(y_{1:T}|x_{1:T};\theta)\,f(x_{1:T};\theta) \\ &= f(y_1|x_1)\,f(x_1)\prod_{t=2}^T f(y_t|x_t)\, f(x_t|x_{t-1}) \\ &= f(y_1|x_1)\,f(x_1)\prod_{t=2}^T f(y_t|x_t)\, f(x_t) \\ &= \prod_{t=1}^T f(y_t|x_t)\, f(x_t), \end{align*}
with $f(y_t|x_t) = \mathcal{N}(\theta^T x_t, \sigma^2_v)$.

• Thank you for taking the time out to go through the paper. I am unable to write out the expression for the complete likelihood and it would really be helpful if you could mention it in your answer. – SKM Feb 3 '16 at 3:53
• Okey dokey. I'll get to work on it in a bit. – Taylor Feb 3 '16 at 15:02
• Just a friendly reminder that you had said you would be helping me in getting the expression for the log-likelihood. – SKM Feb 8 '16 at 21:25
• @SKM sorry for the delay. Is this what you're looking for? I couldn't find the distribution of each $x_t$, but you can fill that in yourself. – Taylor Feb 9 '16 at 19:52
• The log of what I just added seems to be the third formula of section 2. – Taylor Feb 9 '16 at 19:55
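Taking the log of the last display gives $\log f(y_{1:T},x_{1:T};\theta) = \sum_{t}\log f(y_t|x_t) + \sum_{t}\log f(x_t)$. Below is a minimal numeric sketch of that quantity in Python; the standard-normal input density is a placeholder assumption (the paper models $x$ with a Gaussian mixture, whose log-density would replace logpdf_x), and the function names are invented for illustration:

    import numpy as np
    from scipy.stats import norm

    def complete_loglik(y, x, theta, sigma_v, logpdf_x=norm.logpdf):
        """log f(y_{1:T}, x_{1:T}; theta) for y(n) = theta^T [x(n),...,x(n-M)] + v(n)
        with v(n) ~ N(0, sigma_v^2). logpdf_x is the log-density of the input;
        a standard normal is used here purely as a placeholder."""
        M = len(theta) - 1
        # Lagged regressor rows: X[t] = (x(t), x(t-1), ..., x(t-M)), valid for t >= M.
        X = np.column_stack([np.roll(x, m) for m in range(M + 1)])[M:]
        resid = y[M:] - X @ theta
        return norm.logpdf(resid, scale=sigma_v).sum() + logpdf_x(x).sum()

    # Usage with simulated data (M = 2):
    rng = np.random.default_rng(0)
    theta = np.array([1.0, 0.5, 0.25])
    x = rng.standard_normal(500)
    y = np.convolve(x, theta)[:500] + 0.1 * rng.standard_normal(500)
    print(complete_loglik(y, x, theta, sigma_v=0.1))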
#### Vol. 12, No. 1, 2019

Intersecting geodesics and centrality in graphs

### Emily Carter, Bryan Ek, Danielle Gonzalez, Rigoberto Flórez and Darren A. Narayan

Vol. 12 (2019), No. 1, 31–43

##### Abstract

In a graph, vertices that are more central are often placed at the intersection of geodesics between other pairs of vertices. This model can be applied to organizational networks, where we assume the flow of information follows shortest paths of communication and there is a required action (i.e., signature or approval) by each person located on these paths. The number of actions a person must perform is linked to both the topology of the network as well as their location within it. The number of expected actions that a person must perform can be quantified by betweenness centrality. The betweenness centrality of a vertex $v$ is the sum, over all pairs of other vertices $u$ and $w$, of the ratio of the number of shortest paths from $u$ to $w$ on which $v$ appears to the total number of shortest paths from $u$ to $w$. We precisely compute the betweenness centrality for vertices in several families of graphs motivated by different organizational networks.

##### Keywords

betweenness centrality, shortest paths, distance

##### Mathematical Subject Classification 2010

Primary: 05C12, 05C82

##### Milestones

Received: 4 March 2017
Revised: 26 July 2017
Accepted: 20 January 2018
Published: 31 May 2018

Communicated by Kenneth S. Berenhaut

##### Authors

Emily Carter, School of Mathematical Sciences, Rochester Institute of Technology, Rochester, NY, United States
Bryan Ek, Department of Mathematics, Rutgers University, Piscataway, NJ, United States
Danielle Gonzalez, Department of Software Engineering, Rochester Institute of Technology, Rochester, NY, United States
Rigoberto Flórez, Department of Mathematics and Computer Science, The Citadel, Charleston, SC, United States
Darren A. Narayan, School of Mathematical Sciences, Rochester Institute of Technology, Rochester, NY, United States
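To make the abstract's definition concrete, here is a short sketch using the networkx library; the graph is an arbitrary toy example, not one of the graph families studied in the paper:

    import networkx as nx

    # A small "organizational" chain with a branch: 0-1-2-3, plus leaves 4 and 5 on 3.
    G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (3, 5)])

    # Unnormalized betweenness: for each vertex v, the sum over pairs (u, w) of
    # (number of shortest u-w paths through v) / (total number of shortest u-w paths).
    print(nx.betweenness_centrality(G, normalized=False))
    # Vertices 2 and 3, which lie on the most geodesics, score highest.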
# Genetic history from the Middle Neolithic to present on the Mediterranean island of Sardinia

## Abstract

The island of Sardinia has been of particular interest to geneticists for decades. The current model for Sardinia's genetic history describes the island as harboring a founder population that was established largely from the Neolithic peoples of southern Europe and remained isolated from later Bronze Age expansions on the mainland. To evaluate this model, we generate genome-wide ancient DNA data for 70 individuals from 21 Sardinian archaeological sites spanning the Middle Neolithic through the Medieval period. The earliest individuals show a strong affinity to western Mediterranean Neolithic populations, followed by an extended period of genetic continuity on the island through the Nuragic period (second millennium BCE). Beginning with individuals from Phoenician/Punic sites (first millennium BCE), we observe spatially-varying signals of admixture with sources principally from the eastern and northern Mediterranean. Overall, our analysis sheds light on the genetic history of Sardinia, revealing how relationships to mainland populations shifted over time.

## Introduction

The whole-genome sequencing in 2012 of "Ötzi", an individual who was preserved in ice for over 5000 years near the Italo-Austrian border, revealed a surprisingly high level of shared ancestry with present-day Sardinian individuals1,2. Subsequent work on genome-wide variation in ancient Europeans found that most "early European farmer" individuals, even when from geographically distant locales (e.g., from Sweden, Hungary and Spain) have their highest genetic affinity with present-day Sardinian individuals3,4,5,6. Accumulating ancient DNA (aDNA) results have provided a framework for understanding how early European farmers show such genetic affinity to modern Sardinians. In this framework, Europe was first inhabited by Paleolithic and later Mesolithic hunter-gatherer groups. Then, starting about 7000 BCE, farming peoples arrived from the Middle East as part of a Neolithic transition7, spreading through Anatolia and the Balkans8,9 while progressively admixing with local hunter-gatherers10. Major movements from the Eurasian Steppe, beginning about 3000 BCE, resulted in further admixture throughout Europe11,12,13,14. These events are typically modeled in terms of three ancestry components: western hunter gatherers ("WHG"), early European farmers ("EEF"), and Steppe pastoralists ("Steppe"). Within this broad framework, the island of Sardinia is thought to have received a high level of EEF ancestry early on and then remained mostly isolated from the subsequent admixture occurring on mainland Europe1,2. However, this specific model for Sardinian population history has not been tested with genome-wide aDNA data from the island.

The oldest known human remains on Sardinia date to ~20,000 years ago15. Archeological evidence suggests Sardinia was not densely populated in the Mesolithic, and experienced a population expansion coinciding with the Neolithic transition in the sixth millennium BCE16.
Around this time, early Neolithic pottery assemblages were spreading throughout the western Mediterranean, including Sardinia, in particular vessels decorated with Cardium shell impressions (variably described as Impressed Ware, Cardial Ware, Cardial Impressed Ware)17, with radiocarbon dates indicating a rapid westward maritime expansion around 5500 BCE18. In the later Neolithic, obsidian originating from Sardinia is found throughout many western Mediterranean archeological sites19, indicating that the island was integrated into a maritime trade network. In the middle Bronze Age, about 1600 BCE, the "Nuragic" culture emerged, named for the thousands of distinctive stone towers, known as nuraghi20. During the late Nuragic period, the archeological and historical record shows the direct influence of several major Mediterranean groups, in particular the presence of Mycenaean, Levantine and Cypriot traders. The Nuragic settlements declined throughout much of the island as, in the late 9th and early 8th century BCE, Phoenicians originating from present-day Lebanon and northern Palestine established settlements concentrated along the southern shores of Sardinia21. In the second half of the 6th century BCE, the island was occupied by Carthaginians (also known as Punics), expanding from the city of Carthage on the North-African coast of present-day Tunisia, which was founded in the late 9th century by Phoenicians22,23. Sardinia was occupied by Roman forces in 237 BCE, and turned into a Roman province a decade later24. Throughout the Roman Imperial period, the island remained closely aligned with both Italy and central North Africa. After the fall of the Roman empire, Sardinia became increasingly autonomous24, but interaction with the Byzantine Empire, the maritime republics of Genova and Pisa, the Catalan and Aragonese Kingdom, and the Duchy of Savoy and Piemonte continued to influence the island25,26.

The population genetics of Sardinia has long been studied, in part because of its importance for medical genetics27,28. Pioneering studies found evidence that Sardinia is a genetic isolate with appreciable population substructure29,30,31. Recently, Chiang et al.32 analyzed whole-genome sequences33 together with continental European aDNA. Consistent with previous studies, they found the mountainous Ogliastra region of central/eastern Sardinia carries a signature of relative isolation and subtly elevated levels of WHG and EEF ancestry.

Four previous studies have analyzed aDNA from Sardinia using mitochondrial DNA. Ghirotto et al.34 found evidence for more genetic turnover in Gallura (a region in northern Sardinia with cultural/linguistic connections to Corsica) than Ogliastra. Modi et al.35 sequenced mitogenomes of two Mesolithic individuals and found support for a model of population replacement in the Neolithic. Olivieri et al.36 analyzed 21 ancient mitogenomes from Sardinia and estimated the coalescent times of Sardinian-specific mtDNA haplogroups, finding support for most of them originating in the Neolithic or later, but with a few coalescing earlier. Finally, Matisoo-Smith et al.37 analyzed mitogenomes in a Phoenician settlement on Sardinia and inferred continuity and exchange between the Phoenician population and broader Sardinia. One additional study recovered β-thalassemia variants in three aDNA samples and found one carrier of the cod39 mutation in a necropolis used in the Punic and Roman periods38.
Despite the initial insights these studies reveal, none of them analyze genome-wide autosomal data, which has proven to be useful for inferring population history39. Here, we generate genome-wide data from the skeletal remains of 70 Sardinian individuals radiocarbon dated to between 4100 BCE and 1500 CE. We investigate three aspects of Sardinian population history: First, the ancestry of individuals from the Sardinian Neolithic (ca. 5700–3400 BCE)—who were the early peoples expanding onto the island at this time? Second, the genetic structure through the Sardinian Chalcolithic (i.e., Copper Age, ca. 3400–2300 BCE) to the Sardinian Bronze Age (ca. 2300–1000 BCE)—were there genetic turnover events through the different cultural transitions observed in the archeological record? And third, the post-Bronze Age contacts with major Mediterranean civilizations and more recent Italian populations—have they resulted in detectable gene flow? Our results reveal insights about each of these three periods of Sardinian history. Specifically, our earliest samples show affinity to the early European farmer populations of the mainland, then we observe a period of relative isolation with no significant evidence of admixture through the Nuragic period, after which we observe evidence for admixture with sources from the northern and eastern Mediterranean.

## Results

### Ancient DNA from Sardinia

We organized a collection of skeletal remains (Supp. Fig. 1) from (1) a broad set of previously excavated samples initially used for isotopic analysis40, (2) the Late Neolithic to Bronze Age Seulo cave sites of central Sardinia41, (3) the Neolithic sites Noedalle and S'isterridolzu42, (4) the Phoenician-Punic sites of Monte Sirai23 and Villamar43, (5) the Imperial Roman period site at Monte Carru (Alghero)44, (6) medieval remains from the site of Corona Moltana45, and (7) medieval remains from the necropolis of the Duomo of San Nicola46. We sequenced DNA libraries enriched for the complete mitochondrial genome as well as a targeted set of 1.2 million single nucleotide polymorphisms (SNPs)47. After quality control, we arrived at a final set of 70 individuals with an average coverage of 1.02× at targeted SNPs (ranging from 0.04× to 5.39× per individual) and a median number of 466,049 targeted SNPs covered at least once per individual.

We obtained age estimates by either direct radiocarbon dating (n = 53), previously reported radiocarbon dates (n = 13), or archeological context and radiocarbon dates from the same burial site (n = 4). The estimated ages range from 4100 years BCE to 1500 years CE (Fig. 1, Supp. Data 1A). We pragmatically grouped the data into broad periods: Middle/Late Neolithic ('Sar-MN', 4100–3500 BCE, n = 6), Early Copper Age ('Sar-ECA', 3500–2500 BCE, n = 3), Early Middle Bronze Age ('Sar-EMBA', 2500–1500 BCE, n = 27), and Nuragic ('Sar-Nur', 1500–900 BCE, n = 16). For the post-Nuragic sites, there is substantial genetic heterogeneity within and among sites, and so we perform analysis per site when grouping is necessary ('Sar-MSR' and 'Sar-VIL' for the Phoenician and Punic sites of Monte Sirai, n = 2, and Villamar, n = 6; 'Sar-ORC002' for a Punic period individual from the interior site of S'Orcu 'e Tueri, n = 1; 'Sar-AMC' for the Roman period site of Monte Carru near Alghero, n = 3; 'Sar-COR' for the early medieval individuals from the site of Corona Moltana, n = 2; and 'Sar-SNN' for the medieval San Nicola necropoli, n = 4). Figure 1 provides an overview of the sample.
To assess the relationship of the ancient Sardinian individuals to other ancient and present-day west Eurasian and North-African populations we analyzed our individuals alongside published autosomal DNA data (ancient: 972 individuals9,10,13,48,49,50; modern: 1963 individuals from outside Sardinia7 and 1577 individuals from Sardinia32,33). For some analyses, we grouped the modern Sardinian individuals into eight geographic regions (see inset in panel c of Fig. 2 for listing and abbreviations, also see Supp. Data 1E) and for others we subset the more isolated Sardinian region of Ogliastra ('Sar-Ogl', n = 419) and the remainder ('Sar-non Ogl', n = 1158). As with other human genetic variation studies, population annotations are important to consider in the interpretation of results.

### Similarity to western mainland Neolithic populations

We found low differentiation between Middle/Late Neolithic Sardinian individuals and Neolithic western mainland European populations, in particular groups from Spain (Iberia-EN) and southern France (France-N). When projecting ancient individuals onto the top two principal components (PCs) defined by modern variation, the Neolithic ancient Sardinian individuals sit between early Neolithic Iberian and later Copper Age Iberian populations, roughly on an axis that differentiates WHG and EEF populations, and embedded in a cluster that additionally includes Neolithic British individuals (Fig. 2). This result is also evident in terms of genetic differentiation, with low pairwise FST ≈ 0.005–0.008 between Middle/Late Neolithic and Neolithic western mainland European populations (Fig. 3). Pairwise outgroup-f3 analysis shows a similar pattern, with the highest values of f3 (i.e., most shared drift) being with Western European Neolithic and Copper Age populations (Fig. 3), gradually dropping off for populations more distant in time or space (Supp. Fig. 10).

Ancient Sardinian individuals are shifted towards WHG individuals in the top two PCs relative to early Neolithic Anatolians (Fig. 2). Analysis using qpAdm shows that a two-way admixture model between WHG and Neolithic Anatolian populations is consistent with our data (e.g., p = 0.376 for Sar-MN, Table 1), similar to other western European populations of the early Neolithic (Supp. Table 1). The method estimates ancient Sardinian individuals harbor HG ancestry (≈17 ± 2%) that is higher than early Neolithic mainland populations (including Iberia, 8.7 ± 1.1%), but lower than Copper Age Iberians (25.1 ± 0.9%) and about the same as Southern French Middle-Neolithic individuals (21.3 ± 1.5%, Table 1, Supp. Fig. 13, ± denotes plus and minus one standard error). In explicit models of continuity (using qpAdm, see Methods) the southern French Neolithic individuals (France-N) are consistent with being a single source for Middle/Late Neolithic Sardinia (p = 0.38 to reject the model of one population being the direct source of the other); followed by other western populations high in EEF ancestry, though with poor fit (qpAdm p-values < 10^-5, Supp. Table 2). France-N may result in improved fits as it is a better match for the WHG and EEF proportions seen in Middle/Late Neolithic Sardinia (Supp. Table 1). As we discuss below, caution is necessary as there is a lack of aDNA from other relevant populations of the same period (such as mainland Italian Neolithic cultures and neighboring islands).
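For readers unfamiliar with the statistic: outgroup-f3(O; A, B) is the average over SNPs of $(o - a)(o - b)$, where $o$, $a$, $b$ are allele frequencies in the outgroup O and the two test populations; it grows with the drift A and B share after diverging from O. A toy sketch with simulated frequencies (illustrative only; not data or code from this study):

    import numpy as np

    def outgroup_f3(o, a, b):
        """Outgroup-f3(O; A, B): mean over SNPs of (o - a) * (o - b), where o, a, b
        are allele-frequency arrays. Larger values indicate more shared drift."""
        o, a, b = map(np.asarray, (o, a, b))
        return np.mean((o - a) * (o - b))

    rng = np.random.default_rng(0)
    o = rng.uniform(0.05, 0.95, 10_000)              # outgroup frequencies
    shared = o + rng.normal(0, 0.05, o.size)         # drift shared by A and B
    a = np.clip(shared + rng.normal(0, 0.02, o.size), 0, 1)
    b = np.clip(shared + rng.normal(0, 0.02, o.size), 0, 1)
    print(outgroup_f3(o, a, b))   # positive: A and B share drift relative to O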
For our sample from the Middle Neolithic through the Nuragic (n = 52 individuals), we were able to infer mtDNA haplotypes for each individual and Y haplotypes for 30 out of 34 males. The mtDNA haplotypes belong to macro-haplogroups HV (n = 20), JT (n = 19), U (n = 12), and X (n = 1), a composition broadly similar to other European Neolithic populations. For Y haplotypes, we found at least one carrier for each of three major Sardinia-specific Y founder clades (within the haplogroups I2-M26, G2-L91, and R1b-V88) that were identified previously based on modern Sardinian data51. More than half of the 31 identified Y haplogroups were R1b-V88 or I2-M223 (n = 11 and 8, respectively, Supp. Fig. 6, Supp. Data 1B), both of which are also prevalent in Neolithic Iberians14. Compared with most other ancient populations in our reference dataset, the frequency of R1b-V88 (Supp. Note 3, Supp. Fig. 6) is relatively high, but as we observed clustering of Y haplogroups by sample location (Supp. Data 1B) caution should be exercised with interpreting our results as estimates for island-wide Y haplogroup frequencies. The oldest individuals in our reference data carrying R1b-V88 or I2-M223 were Balkan hunter-gatherer and Neolithic individuals, and both haplogroups later appear also in western Neolithic populations (Supp. Figs. 7–9).

### Continuity from the Middle Neolithic through the Nuragic

We found several lines of evidence supporting genetic continuity from the Sardinian Middle Neolithic into Bronze Age and Nuragic times. Importantly, we observed low genetic differentiation between ancient Sardinian individuals from various time periods (FST = 0.0055 ± 0.0014 between Middle/Late Neolithic and late Bronze Age, Fig. 3). Furthermore, we did not observe temporal substructure within the ancient Sardinian individuals in the top two PCs—they form a coherent cluster (Fig. 2). In stark contrast, ancient individuals from mainland regions such as central Europe show large movements over the first two PCs from the Neolithic to the Bronze Age, and also have higher pairwise differentiation (e.g., FST = 0.0200 ± 0.0004 between Neolithic and Bronze Age individuals from central Europe, Supp. Fig. 11). A qpAdm analysis cannot reject a model of Middle/Late Neolithic Sardinian individuals being a direct predecessor of Nuragic Sardinian individuals (p = 0.15, Supp. Table 2, also see results for f4 statistics, Supp. Data 2). Our qpAdm analysis further shows that the WHG ancestry proportion, in a model of admixture with Neolithic Anatolia, remains stable at 17 ± 2% through the Nuragic period (Table 1A). When using a three-way admixture model, we do not detect significant Steppe ancestry in any ancient Sardinian group from the Middle/Late Neolithic to the Nuragic, as is inferred, for example, in later Bronze Age Iberians (Table 1B, Supp. Fig. 13). Finally, in a five-way model with Iran Neolithic and Moroccan Neolithic samples added as sources, neither source is inferred to contribute ancestry during the Middle Neolithic to Nuragic (point estimates are statistically indistinguishable from zero, Supp. Fig. 14).

### From the Nuragic period to present-day Sardinia: signatures of admixture

We found multiple lines of evidence for gene flow into Sardinia after the Nuragic period. The present-day Sardinian individuals from the Sidore et al. sample are shifted from the Nuragic period ancients on the western Eurasian/North-African PCA (Fig. 2). Using a "shrinkage" correction method for the projection is key for detecting this shift (see Supp.
Fig. 23 for an evaluation of different PCA projection techniques). In the ADMIXTURE results (Fig. 4), present-day Sardinian individuals carry a modest "Steppe-like" ancestry component (but generally less than continental present-day European populations), and an appreciable "eastern Mediterranean" ancestry component (also inferred at a high fraction in other present-day Mediterranean populations, such as Sicily and Greece) relative to Nuragic period and earlier Sardinian individuals.

To further refine this recent admixture signal, we considered two-way, three-way, and four-way models of admixture with qpAdm (Table 2, Supp. Figs. 15–18, Supp. Tables 3–5). We find three-way models fit well (p > 0.01) that contain admixture between Nuragic Sardinia, one northern Mediterranean source (e.g., individuals with group labels Lombardy, Tuscan, French, Basque, Spanish) and one eastern Mediterranean source (e.g., individuals with group labels Turkish-Jew, Libyan-Jew, Maltese, Tunisian-Jew, Moroccan-Jew, Lebanese, Druze, Cypriot, Jordanian, Palestinian) (Table 2C, D). Maltese and Sicilian individuals can provide two-way model fits (Table 2B), but appear to reflect a mixture of N. Mediterranean and E. Mediterranean ancestries, and as such they can serve as single-source proxies in two-way admixture models with Nuragic Sardinia. For four-way models including N. African ancestry, the inferences of N. African ancestry are negligible (though as we show below, forms of N. African ancestry were already likely present in the eastern Mediterranean components). Because of limited sample sizes and ancestral source mis-specification, caution is warranted when interpreting inferred admixture fractions; however, the results indicate that complex post-Nuragic gene flow has likely played a role in the population genetic history of Sardinia.

### Refined signatures of post-Nuragic admixture and heterogeneity

To more directly evaluate the models of post-Nuragic admixture, we obtained aDNA from 17 individuals sampled from post-Nuragic sites. The post-Nuragic individuals spread across a wide range of the PCA, and many shift towards the "eastern" and "northern" Mediterranean sources posited above (Fig. 2). We confidently reject qpAdm models of continuity from the Nuragic period for all of these post-Nuragic samples, apart from a sample from S'Orcu 'e Tueri (ORC002, Table 2E, Supp. Table 6). The ADMIXTURE results concur: most post-Nuragic individuals show the presence of novel ancestry components not inferred in any of the more ancient individuals (Fig. 4).

Consistent with an influx of novel ancestry, we observed that haplogroup diversity increases after the Nuragic period. In particular, we identified one carrier of the mtDNA haplogroup L2a at both the Punic Villamar site and the Roman Monte Carru site. At present, this mtDNA haplogroup is common across Africa, but so far undetected in samples from Sardinia36. We also found several Y haplogroups absent in our Neolithic through the Nuragic period sample (Supp. Fig. 6). R1b-M269, at about 15% within modern Sardinian males51, appears in one Punic (VIL011) and two Medieval individuals (SNN002 and SNN004). We also observed J1-L862 in one individual from a Punic site (VIL007) and E1b-L618 in one medieval individual (SNN001). Notably, J1-L862 first appears in Levantine Bronze Age individuals within the ancient reference dataset and is at about 5% frequency in Sardinia today.

We used individual-level qpAdm models to further investigate the presence of these new ancestries (Supp.
Data 3). In addition to the original Neolithic Anatolian (Anatolia-N) and Hunter Gatherer (WHG) sources that were sufficient to model ancient Sardinians through the Nuragic period, we fit models with representatives of Steppe (Steppe-EMBA), Neolithic Iranian (Iran-N), and Neolithic North-African (Morocco-EN) ancestry as sources. We observe the presence of the Steppe-EMBA (point estimates ranging 0–20%) and Iran-N components (point estimates ranging 0–25%) in many of the post-Nuragic individuals (Supp. Fig. 14). All six individuals from the Punic Villamar site were inferred to have substantial levels of ancient North-African ancestry (point estimates ranging 20–35%, Supp. Fig. 14, also see ADMIXTURE and PCA results, Figs. 2 and 4). When fit with the same five-way admixture model, present-day Sardinians have a small but detectable level of North-African ancestry (Supp. Fig. 14, also see ADMIXTURE analysis, Fig. 4).

Models with direct continuity from Villamar to the present are rejected (Table 2F, Supp. Table 6). In contrast, nearly all the other post-Nuragic sites produce viable models as single sources for the modern Sardinians (e.g., Sar-COR qpAdm p-values of 0.16 and 0.261 for Cagliari and Ogliastra, respectively; Sar-SNN qpAdm p-values of 0.037 and 0.016, similarly Table 2F, Supp. Table 6). We found some evidence of substructure: Sar-ORC002 (from an interior site) is more consistent with being a single source for Ogliastra than Cagliari, whereas Sar-AMC shows an opposite pattern (Supp. Table 6).

We also carried out three-way admixture models for each post-Nuragic Sardinian individual using the Nuragic sample as a source or outgroup, and potential sources from various ancient samples that are representative of different regions of the Mediterranean. We found a range of models can be fit for each individual (Supp. Tables 7–8). For the models with Nuragic as a source, by varying the proxy populations, one can obtain fitted models that vary widely in the inferred Nuragic component (e.g., individual COR002 has a range from 4.4 to 87.8% across various fitted models; similarly, individual AMC001, with North-African mtDNA haplogroup U6a, had a range from 0.2 to 43.1%, see Supp. Tables 7–8). The ORC002 sample had the strongest evidence of Nuragic ancestry (range from 62.8 to 96.3%, see Supp. Tables 7–8). Further, the VIL, MSR, and AMC individuals can be modeled with Nuragic Sardinian individuals included as a source or as an outgroup, while the two COR and ORC002 individuals can only be modeled with Nuragic individuals included as a source. One individual from the medieval period San Nicola Necropoli (SNN001) was distinct in that we found their ancestry can be modeled in a single-source model as descendant of a population represented by present-day Basque individuals (Supp. Table 8). When we apply the same approach to present-day Sardinian individuals, we find models with the Nuragic sample as an outgroup fail in most cases (Supp. Table 9). For models that include Nuragic as a possible source, each present-day individual is consistent with a wide range of Nuragic ancestry. The models with the largest p-values return fractions of Nuragic ancestry that are close to, or higher than, 50% (Supp. Table 9), similar to what we observed in our population-level modeling (Table 2).

### Fine-scale structure in contemporary Sardinia

Finally, we assessed our results in the context of spatial substructure within modern Sardinia, as previous studies have suggested elevated levels of WHG and EEF ancestry in Ogliastra32.
In the PCA of modern west Eurasian and North-African variation, the ancient Sardinian individuals are placed closest to individuals from Ogliastra and Nuoro (see Figs. 2 and 5a). At the same time, in a PCA of just the modern Sardinian sample, the ancient individuals project furthest from Ogliastra (Fig. 5b). Interestingly, individual ORC002, dating from the Punic period and from a site in Ogliastra, projects towards Ogliastra individuals relative to other ancient individuals. Further, in the broad PCA results, the median of the province of Olbia-Tempio (northeast Sardinia) is shifted towards mainland populations of southern Europe, and the median for Campidano (southwest Sardinia) shows a slight displacement towards the eastern Mediterranean (Fig. 5a). A three-way admixture model fit with qpAdm suggests differential degrees of admixture, with the highest eastern Mediterranean ancestry in the southwest (Carbonia, Campidano) and the highest northern Mediterranean ancestry in the northeast of the island (Olbia, Sassari, Supp. Fig. 17). These observations of substructure among contemporary Sardinian individuals contrast with our results from the Nuragic and earlier, which form a relatively tight cluster on the broad PCA (Fig. 2) and for which the top PCs do not show any significant correlations with latitude, longitude, or regional geographic labels after correcting for multiple testing (Supp. Figs. 24–33).

## Discussion

Our analysis of genome-wide data from 70 ancient Sardinian individuals has generated insights regarding the population history of Sardinia and the Mediterranean. First, our analysis provides more refined DNA-based support for the Middle Neolithic of Sardinia being related to the early Neolithic peoples of the Mediterranean coast of Europe. Middle/Late Neolithic Sardinian individuals fit well as a two-way admixture between mainland EEF and WHG sources, similar to other EEF populations of the western Mediterranean. Further, we detected Y haplogroups R1b-V88 and I2-M223 in the majority of the early Sardinian males. Both haplogroups appear earliest in the Balkans among Mesolithic hunter-gatherers and then Neolithic groups9 and later in EEF Iberians14, in which they make up the majority of Y haplogroups, but have not been detected in Neolithic Anatolians or more western WHG individuals. These results are plausible outcomes of substantial gene flow from Neolithic populations that spread westward along the Mediterranean coast of southern Europe around 5500 BCE (a "Cardial/Impressed" ware expansion, see Introduction). We note that we lack autosomal aDNA from earlier than the Middle Neolithic in Sardinia and from key mainland locations such as Italy, which leaves some uncertainty about timing and the relative influence of gene flow from the Italian mainland versus from the north or west. The inferred WHG admixture fraction of Middle Neolithic Sardinians was higher than that of early mainland EEF populations, which could suggest a time lag of the influx into Sardinia (as HG ancestry increased through time on the mainland) but could also result from a pulse of initial local admixture or continued gene flow with the mainland. Genome-wide data from Mesolithic and early Neolithic individuals from Sardinia and potential source populations will help settle these questions.

From the Middle Neolithic onward until the beginning of the first millennium BCE, we do not find evidence for gene flow from distinct ancestries into Sardinia.
That stability contrasts with many other parts of Europe, which had experienced substantial gene flow from central Eurasian Steppe ancestry starting about 3000 BCE11,12, and also with many earlier Neolithic and Copper Age populations across mainland Europe, where local admixture increased WHG ancestry substantially over time10. We observed remarkable constancy of WHG ancestry (close to 17%) from the Middle Neolithic to the Nuragic period. While we cannot exclude influx from genetically similar populations (e.g., early Iberian Bell Beakers), the absence of Steppe ancestry suggests genetic isolation from many Bronze Age mainland populations—including later Iberian Bell Beakers13. As further support, the Y haplogroup R1b-M269, the most frequent present-day western European haplogroup and associated with expansions that brought Steppe ancestry into Britain13 and Iberia14 about 2500–2000 BCE, remains absent in our Sardinian sample through the Nuragic period (1200–1000 BCE). Larger sample sizes from Sardinia and alternate source populations may discover more subtle forms of admixture, but the evidence appears strong that Sardinia was isolated from major mainland Bronze Age gene flow events through to the local Nuragic period. As the archeological record shows that Sardinia was part of a broad Mediterranean trade network during this period19, such trade was either not coupled with gene flow or was only among proximal populations of similar genetic ancestry. In particular, we find that the Nuragic period is not marked by shifts in ancestry, arguing against hypotheses that the design of the Nuragic stone towers was brought with an influx of people from eastern sources such as Mycenaeans.

Following the Nuragic period, we found evidence of gene flow with both northern and eastern Mediterranean sources. We observed eastern Mediterranean ancestry appearing first in two Phoenician-Punic sites (Monte Sirai, Villamar). The northern Mediterranean ancestry became prevalent later, exemplified most clearly by individuals from a north-western Medieval site (San Nicola Necropoli). Many of the post-Nuragic individuals could be modeled as direct immigrants or offspring from new arrivals to Sardinia, while others had higher fractions of local Nuragic ancestry (Corona Moltana, ORC002). Substantial uncertainty exists here, as the low differentiation among plausible source populations makes it challenging to exclude alternate models, especially when using individual-level analysis. Overall though, we find support for increased variation in ancestry after the Nuragic period, and this echoes other recent aDNA studies in the Mediterranean that have observed fine-scale local heterogeneity in the Iron Age and later14,52,53,54. In addition, we found present-day Sardinian individuals sit within the broad range of ancestry observed in our ancient samples. A similar pattern is seen in Iberia14 and central Italy54, where variation in individual ancestry increased markedly in the Iron Age and later decreased until the present day.

In terms of the fine-scale structure within Sardinia, we note the median position of modern individuals from the central regions of Ogliastra and Nuoro on the main PCA (Fig. 5a) is less shifted towards novel sources of post-Nuragic admixture, which reinforces a previous result that Ogliastra shows higher levels of EEF and HG ancestry than other regions32. At the same time, in the PCA of within-Sardinia variation (Fig.
5b), differentiation of Ogliastra from other regions and other ancient individuals is apparent, likely reflecting a recent history of isolation and drift. The northern provinces of Olbia-Tempio, and to a lesser degree Sassari, appear to have received more northern Mediterranean immigration after the Bronze Age; while the southwestern provinces of Campidano and Carbonia carry more eastern Mediterranean ancestry. Both of these results align with known history: the major Phoenician and Punic settlements in the first millennium BCE were situated principally along the south and west coasts, and Corsican shepherds, speaking an Italian-Corsican dialect (Gallurese), immigrated to the northeastern part of Sardinia55.

Our inference of gene flow after the second millennium BCE seems to contradict previous models emphasizing Sardinian isolation12. These models were supported by admixture tests that failed to detect substantial admixture32, likely because of substantial drift and a lack of a suitable proxy for the Nuragic Sardinian ancestry component. However, compared with other European populations50,56, we confirm Sardinia experienced relative genetic isolation through the Bronze Age/Nuragic period. In addition, we find that subsequent admixture appears to derive mainly from Mediterranean sources that have relatively little Steppe ancestry. Consequently, present-day Sardinian individuals have retained an exceptionally high degree of EEF ancestry and so they still cluster with several mainland European Copper Age individuals such as Ötzi2, even as they are shifted from ancient Sardinian individuals of a similar time period (Fig. 2).

The Basque people, another population high in EEF ancestry, were previously suggested to share a genetic connection with modern Sardinian individuals32,57. We observed a similar signal, with modern Basque having, of all modern samples, the largest pairwise outgroup-f3 with most ancient and modern Sardinian groups (Fig. 3). While both populations have received some immigration, seemingly from different sources (e.g., Fig. 4, ref. 14), our results support that the shared EEF ancestry component could explain their genetic affinity despite their geographic separation.

Beyond our focal interest in Sardinia, the results from individuals from the Phoenician-Punic sites Monte Sirai and Villamar shed some light on the ancestry of a historically impactful Mediterranean population. Notably, they show strong genetic relationships to ancient North-African and eastern Mediterranean sources. These results mirror other emerging ancient DNA studies37,58, and are not unexpected given that the Punic center of Carthage on the North-African coast itself has roots in the eastern Mediterranean. Interestingly, the Monte Sirai individuals, predating the Villamar individuals by several centuries, show less North-African ancestry. This could be because they harbor earlier Phoenician ancestry and North-African admixture may have been unique to the later Punic context, or because they were individuals from a different ancestral background altogether. Estimated North-African admixture fractions were much lower in later ancient individuals and present-day Sardinian individuals, in line with previous studies that have observed small but significant African admixture in several present-day South European populations, including Sardinia32,59,60.

As ancient DNA studies grow, a key challenge will be fine-scale sampling to aid the interpretation of shifts in ancestry.
Our sample from Sardinia's post-Nuragic period highlights the complexity, as we simultaneously observe examples of individuals carrying apparently novel immigrant ancestries (e.g., from Villamar and San Nicola) and of individuals that look more continuous with the past and the present (e.g., the two Corona Moltana siblings, the ORC002 individual, several of the Alghero Monte Carru individuals). This variation is likely driven by differential patterns of contact, as might arise between coastal versus interior villages, central trading centers versus remote agricultural sites, or even between neighborhoods and social strata in the same village. We also note that modern populations are collected with different biases than ancient individuals (e.g., the sub-populations sampled by medical genetics projects33 versus the sub-populations that are accessible at archeological sites). As such, caution should be exercised when generalizing from the sparse sampling typical of many aDNA studies, including this one. With these caveats in mind, we find that genome-wide ancient DNA provides unique insights into the population history of Sardinia. Our results are consistent with gene flow being minimal, or occurring only with genetically similar populations, from the Middle Neolithic until the late Bronze Age. In particular, the onset of the Nuragic period was not characterized by an influx of a distinct ancestry. The data also link Sardinia from the Iron Age onwards to the broader Mediterranean, in what seems to have been a period of new, dynamic contact throughout much of the Mediterranean. A parallel study focusing on islands of the western Mediterranean provides generally consistent results, and both studies make clear the need to add complexity to the simple models of sustained isolation that have dominated the genetic literature on Sardinia52. Finally, our results suggest some of the current substructure seen on the island (e.g., Ogliastra) has emerged due to recent genetic drift. Overall, the history of isolation, migration, and genetic drift on the island has given rise to a unique constellation of allele frequencies, and illuminating this history will help future efforts to understand genetic-disease variants prevalent in Sardinia and throughout the Mediterranean, such as those underlying beta-thalassemia and G6PD deficiency.

## Methods

### Archeological sampling

The archeological samples used in this project derive from several collection avenues. The first was a sampling effort led by co-author Luca Lai, leveraging a broad base of samples from different existing collections in Sardinia, a subset of which were previously used in isotopic analyses to understand dietary composition and change in prehistoric Sardinia40. The second was the Seulo Caves project41, an ongoing project on a series of caves, spanning the Middle Neolithic to late Bronze Age, near the town of Seulo; the project focuses on the diverse forms and uses of caves in the prehistoric culture of Sardinia. The Neolithic individuals from Sassari province, as well as the post-Nuragic individuals, were contributed by several co-authors, as indicated in Supplementary Information Section 1. The third was a pair of Neolithic sites, Noedalle and S'isterridolzu42. The fourth was a collection of post-Nuragic sites spanning from Phoenician to Medieval times.
All samples were handled in collaboration with local scientists and with the approval of the local Sardinian authorities for the handling of archeological samples (Ministero per i Beni e le Attività Culturali, Direzione Generale per i beni Archeologici, request dated 11 August 2009; Soprintendenza per i Beni Archeologici per le province di Sassari e Nuoro, prot. 12993 dated 20 December 2012; Soprintendenza per i Beni Archeologici per le province di Sassari e Nuoro, prot. 10831 dated 27 October 2014; Soprintendenza per i Beni Archeologici per le province di Sassari e Nuoro, prot. 12278 dated 05 December 2014; Soprintendenza per i Beni Archeologici per le Provincie di Cagliari e Oristano, prot. 62, dated 08 January 2015; Soprintendenza Archeologia, Belle Arti e Paesaggio per le Provincie di Sassari, Olbia-Tempio e Nuoro, prot. 4247 dated 14 March 2017; Soprintendenza per i Beni Archeologici per le Provincie di Sassari e Nuoro, prot. 12930 dated 30 December 2014; Soprintendenza Archeologia, Belle arti e Paesaggio per le Provincie di Sassari e Nuoro, prot. 7378 dated 9 May 2017; Soprintendenza per i Beni Archeologici per le Provincie di Cagliari e Oristano, prot. 20587, dated 05 October 2017; Soprintendenza Archeologia, Belle Arti e Paesaggio per le Provincie di Sassari e Nuoro, prot. 15796 dated 25 October 2017; Soprintendenza Archeologia, Belle Arti e Paesaggio per le Provincie di Sassari e Nuoro, prot. 16258 dated 26 November 2017; Soprintendenza per i Beni Archeologici per le province di Sassari e Nuoro, prot. 5833 dated 16 May 2018; Soprintendenza Archeologia, Belle Arti e Paesaggio per la città metropolitana di Cagliari e le province di Oristano e Sud Sardegna, prot. 30918 dated 10 December 2019). For a more detailed description of the sites, see Supplementary Information Section 1.

### Initial sample screening and sequencing

The ancient DNA (aDNA) workflow was implemented in dedicated facilities at the Palaeogenetic Laboratory of the University of Tübingen and at the Department of Archaeogenetics of the Max Planck Institute for the Science of Human History in Jena. The only exception was four samples from the Seulo Caves project, for which DNA was isolated at the Australian Centre for Ancient DNA, with capture and sequencing carried out in the Reich lab at Harvard University. Different skeletal elements were sampled using a dentist drill to generate bone or tooth powder. DNA was extracted following an established aDNA protocol61 and then converted into double-stranded libraries, retaining62 or partially reducing63 the typical aDNA substitution pattern resulting from deaminated cytosines that accumulate towards the molecule's termini. After indexing PCR62 and differential amplification cycles, the DNA was shotgun sequenced on Illumina platforms. Samples showing sufficient aDNA preservation were captured for mtDNA and ≈1.24 million SNPs across the human genome, chosen to intersect with the Affymetrix Human Origins array and Illumina 610-Quad array47. The resulting enriched libraries were also sequenced on Illumina machines in single-end or paired-end mode. Sequenced data were pre-processed using the EAGER pipeline64. Specifically, DNA adapters were trimmed using AdapterRemoval v265 and paired-end sequenced libraries were merged. Sequence alignment to the mtDNA (RSRS) and nuclear (hg19) reference genomes was performed with BWA66 (parameters -n 0.01, seeding disabled), duplicates were removed with DeDup64, and a mapping quality filter was applied (MQ ≥ 30). For genetic sexing, we compared relative X- and Y-chromosome coverage to the autosomal coverage with a custom script. For males, nuclear contamination levels were estimated based on heterozygosity on the X chromosome with the software ANGSD67. After applying several standard ancient DNA quality-control metrics, retaining individuals with endogenous DNA content in shotgun sequencing >0.2%, mtDNA contamination <4% (average 1.6%), and nuclear contamination <6% (average 1.1%), and after inspection of contamination patterns (Supp. Figs. 2–5), we generated genotype calls for downstream population genetic analyses for a set of 70 individuals. To account for sequencing errors, we first removed any read that overlapped a SNP on the capture array with a base quality score <20. We also removed the last 3 bp on both sides of every read to reduce the effect of DNA damage on the resulting genotype calls68. We used custom python scripts (https://github.com/mathii/gdc3) to generate pseudo-haploid genotypes by sampling a random read for each SNP on the capture array and setting the genotype to be homozygous for the sampled allele. We then screened for first-degree relatives using a pairwise relatedness statistic, and identified one pair of siblings and one parent-offspring pair within our sample (Supp. Fig. 12).
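As a rough, self-contained illustration of the pseudo-haploid calling step described above (the actual implementation is in the linked gdc3 scripts; the input format and function below are our own hypothetical simplification, not the paper's code):

```python
import random

random.seed(42)  # for reproducibility of this illustration only

def pseudo_haploid_calls(read_counts):
    """read_counts maps SNP id -> {allele: number of quality-filtered reads}.
    Returns SNP id -> homozygous genotype from one randomly sampled read,
    or None where the SNP has no coverage (missing genotype)."""
    calls = {}
    for snp_id, depths in read_counts.items():
        reads = [allele for allele, n in depths.items() for _ in range(n)]
        if not reads:
            calls[snp_id] = None          # no coverage: missing genotype
            continue
        allele = random.choice(reads)     # sample a single random read
        calls[snp_id] = (allele, allele)  # set genotype homozygous for it
    return calls

example = {"rs001": {"A": 3, "G": 1}, "rs002": {"C": 0, "T": 0}}
print(pseudo_haploid_calls(example))  # e.g. {'rs001': ('A', 'A'), 'rs002': None}
```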
### Processing of mtDNA data

Data originating from mtDNA capture were processed with schmutzi69, which jointly estimates mtDNA contamination and reconstructs mtDNA consensus sequences; these were assigned to the corresponding mtDNA haplogroups using Haplofind70 (Supp. Data 1C). The consensus sequences were also compared with the rCRS71 to build a phylogenetic tree of ancient Sardinian mitogenomes (Supp. Data 1D), using a maximum parsimony approach with the software mtPhyl (http://eltsov.org/mtphyl.aspx). We assigned haplogroups following the nomenclature proposed by the PhyloTree database build 17 (http://www.phylotree.org)72 and, for Sardinian-specific haplogroups, ref. 36.

### Inference of Y haplogroups

To determine the Y-chromosome haplotype branch of male ancient individuals, we analyzed informative SNPs on the Y-haplotype tree. For reference, we used markers from https://isogg.org/tree (version 13.238, 2018). We merged these data with our set of calls and identified markers available in both, to create groups of equivalent markers for sub-haplogroups. Our targeted sequencing approach yielded read count data for up to 32,681 such Y-linked markers per individual. As the conventions for naming haplogroups are subject to change, we annotated them in terms of carrying the derived state at a defining SNP. We analyzed the number of derived and ancestral calls for each informative marker for all ancient Sardinian individuals, and reanalyzed male ancient West Eurasians in our reference dataset. Refined haplotype calls were based on manual inspection of ancestral and derived read counts per haplogroup, factoring in coverage and error estimates.

### Merging newly generated data with published data

Ancient DNA datasets from Western Eurasia and North Africa: we downloaded and processed BAM files from several ancient datasets from continental Europe and the Middle East9,10,13,48,49,50. To minimize technology-specific batch effects in genotype calls, and thus in downstream population genetic inference, we focused on previously published ancient samples that had undergone the capture protocol on the same set of SNPs targeted in our study.
We processed these samples through the same pipeline and filters described above, resulting in a reference dataset of 972 ancient samples. Throughout our analysis, we used a subset of n = 1,088,482 variants, created by removing SNPs missing in more than 90% of all ancient individuals (Sardinian and reference datasets) that had at least 60% of all captured SNPs covered. This ancient dataset spans a wide geographic distribution and temporal range (Fig. 2d). For the PCA (Fig. 2a, b), we additionally included a single low-coverage ancient individual (label "Pun") dated to 361–178 BCE from a Punic necropolis on the west Mediterranean island of Ibiza58. We merged individuals into groups (Supp. Data 1E, F). For ancient samples, these groups were chosen manually, trying to strike a balance between reducing overlap in the PCA and keeping culturally distinct populations separate. We used geographic location to first broadly group samples into geographic areas (such as Iberia, Central Europe, and the Balkans), and then further annotated each of these groups by time period. Contemporary DNA datasets from Western Eurasia and North Africa: we downloaded and processed the Human Origins dataset to characterize a subset of Eurasian and North-African human genetic diversity at 594,924 autosomal SNPs7. We focused on a subset of 837 individuals from Western Eurasia and North Africa. Contemporary DNA dataset from Sardinia: we merged in a whole-genome sequence Sardinian dataset (1577 individuals32) and called genotypes at the Human Origins autosomal SNPs to create a dataset comparable to the other modern reference populations. For analyses at the province level, we used a subset of individuals for whom at least three grandparents originate from the same geographical location, and grouped individuals accordingly (Fig. 2c, n = 1085 in total).

### Principal components analysis

We performed principal components analysis (PCA) on two large-scale datasets of modern genotypes from Western Eurasia and North Africa (837 individuals from the Human Origins dataset) and Sardinia (1577 individuals from the SardiNIA project). For both datasets, we normalized the genotype matrix by mean-centering and scaling the genotypes at each SNP using the inverse of the square root of heterozygosity73. We additionally filtered out rare variants with minor allele frequency $p_{\min} < 0.05$. To assess population structure in the ancient individuals, we projected them onto the pre-computed principal axes using only the non-missing SNPs via a least-squares approach, and corrected for the shrinkage effect observed in high-dimensional PC score prediction74 (see Supp. Note 7, Supp. Fig. 23). We also projected a number of out-of-sample sub-populations from Sardinia onto our PCs. Reassuringly, these out-of-sample Sardinian individuals project very close to the Human Origins Sardinian individuals (Fig. 2). Moreover, the test-set Sardinian individuals with grandparental ancestry from Southern Italy cluster with reference individuals with ancestry from Sicily (not shown). We applied ADMIXTURE to an unnormalized genotype matrix of ancient and modern samples75. ADMIXTURE is a maximum-likelihood method for fitting the Pritchard, Stephens and Donnelly model76 using sequential quadratic programming. We first LD-pruned the data matrix based on the modern Western Eurasian and North-African genotypes, using plink1.9 with parameters [--indep-pairwise 200 25 0.4]. We then ran five replicates of ADMIXTURE for values of K = 2, …, 11.
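A minimal sketch of this pruning-and-replicates loop (our own illustration: the dataset name "merged" is a placeholder, the plink flags are the ones quoted above, and the ADMIXTURE random-seed flag `-s` is our assumption, to be checked against the ADMIXTURE manual):

```python
import subprocess

# LD pruning with plink1.9, using the parameters quoted in the text;
# "merged" stands in for the merged ancient+modern dataset prefix.
subprocess.run(["plink", "--bfile", "merged",
                "--indep-pairwise", "200", "25", "0.4",
                "--out", "merged"], check=True)
subprocess.run(["plink", "--bfile", "merged",
                "--extract", "merged.prune.in",
                "--make-bed", "--out", "merged_pruned"], check=True)

# Five ADMIXTURE replicates for each K = 2..11. In practice each
# replicate's .Q/.P outputs would be renamed before the next run, and
# the replicate with the highest final log-likelihood kept.
for K in range(2, 12):
    for seed in range(1, 6):
        subprocess.run(["admixture", "-s", str(seed),
                        "merged_pruned.bed", str(K)], check=True)
```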
We display results for the replicate that reached the highest log-likelihood after the algorithm converged (Supp. Figs. 19–22).

### Estimation of f-statistics

We measured similarity between groups of individuals by computing an outgroup-f3 statistic77, using the scikit-allel package's function average_patterson_f3 (https://doi.org/10.5281/zenodo.3238280). The outgroup-f3 statistic can be interpreted as a measure of the internal branch length of a three-taxon population phylogeny and thus does not depend on genetic drift or systematic error in one of the populations being compared77. We used the ancestral allelic states as an outgroup, inferred from a multi-species alignment from Ensembl Compara release 59, as annotated in the 1000 Genomes Phase 3 sites VCF78. We fixed the ancestral allele counts to $n = 10^6$ to avoid the finite-sample-size correction when calculating outgroup f3. The f3- and f4-statistics that test for admixture were computed with scikit-allel using the functions average_patterson_f3 and average_patterson_d, which implement standard estimators of these statistics77. We estimated standard errors with a block-jackknife over 1000 markers (blen=1000). For all f-statistics calculations, we analyzed only one allele of ancient individuals represented as pseudo-haploid genotypes, to avoid an artificial appearance of genetic drift that could, for instance, mask a negative f3 signal of admixture.

### Estimation of FST coefficients

To measure pairwise genetic differentiation between two populations (rather than shared drift from an outgroup, as the outgroup-f3 statistic does), we estimated average pairwise FST and its standard error via block-jackknife over 1000 markers, using average_patterson_fst from the package scikit-allel. When analyzing ancient individuals represented as pseudo-haploid genotypes, we analyzed only one allele. For this analysis, we removed first-degree relatives within each population. Another estimator, average_hudson_fst, gave highly correlated results ($r^2 = 0.89$), differing mostly for populations with very low sample size (n ≤ 5), and did not change any qualitative conclusions.

We estimated admixture fractions of a selected target population, as well as model consistency, for models with up to five source populations as implemented in qpAdm (version 810), which relates a set of "left" populations (the population of interest and candidate ancestral sources) to a set of "right" populations (diverse outgroups)12. To assess the robustness of our results to the choice of right populations, we ran one analysis with a previously used set of modern populations as outgroups12, and another analysis with a set of ancient Europeans that have previously been used to disentangle divergent strains of ancestry present in Europe50. In the same qpAdm framework, we used a likelihood-ratio test (LRT) to assess whether a specific reduced-rank model, representing a particular admixture scenario, can be rejected in favor of a maximal-rank ("saturated") model for the matrix of f4-values12. We report p-values under the approximation that the LRT statistic is $\chi^2$-distributed, with degrees of freedom determined by the number of "left" and "right" populations used in the f4 calculation and by the rank implied by the number of admixture components. The p-values we report are not corrected for multiple testing.
Formal correction is difficult, as the tests are highly correlated due to shared population data used across them; informally, motivated by a Bonferroni correction of a nominal 0.01 p-value with 10 independent tests, we suggest taking only low p-values ($<10^{-3}$) to represent significant evidence to reject a proposed model. The full qpAdm results are discussed in Supp. Note 5.

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

## Data availability

The aligned sequences and processed genotype calls (including read counts) from the data generated in this study are available through the European Nucleotide Archive (ENA, accession number PRJEB35094). Processed read counts and pseudo-haploid genotypes are available via the European Variation Archive (EVA, accession number PRJEB36033) in variant call format (VCF). The contemporary Sardinia data used to support this study have allele frequency summary data deposited at EGA (accession number EGAS00001002212). The disaggregated individual-level sequence data (n = 1577) used in this study are a subset of 2105 samples (adult volunteers of the SardiNIA longitudinal cohort study) from Sidore et al. and are available from dbGaP under project identifier phs000313 (v4.p2). The remaining individual-level sequence data originate from a case-control study of autoimmunity from across Sardinia and, per the obtained consent and local IRB, these data are available for collaboration by request from the project leader (Francesco Cucca, Consiglio Nazionale delle Ricerche, Italy).

## Code availability

The code used to process the raw reads and create the figures in this manuscript can be found at https://github.com/NovembreLab/ancient-sardinia. The code to perform bias correction in predicting out-of-sample PC scores is publicly available at https://github.com/jhmarcus/pcshrink.

## References

1. Keller, A. et al. New insights into the Tyrolean Iceman's origin and phenotype as inferred by whole-genome sequencing. Nat. Commun. 3, 698 (2012).
2. Sikora, M. et al. Population genomic analysis of ancient and modern genomes yields new insights into the genetic ancestry of the Tyrolean Iceman and the genetic structure of Europe. PLoS Genet. 10, e1004353 (2014).
3. Skoglund, P. et al. Origins and genetic legacy of Neolithic farmers and hunter-gatherers in Europe. Science 336, 466–469 (2012).
4. Skoglund, P. et al. Genomic diversity and admixture differs for Stone-Age Scandinavian foragers and farmers. Science 344, 747–750 (2014).
5. Gamba, C. et al. Genome flux and stasis in a five millennium transect of European prehistory. Nat. Commun. 5, 5257 (2014).
6. Olalde, I. et al. A common genetic origin for early farmers from Mediterranean Cardial and Central European LBK cultures. Mol. Biol. Evolution 32, 3132–3142 (2015).
7. Lazaridis, I. et al. Ancient human genomes suggest three ancestral populations for present-day Europeans. Nature 513, 409–413 (2014).
8. Hofmanová, Z. et al. Early farmers from across Europe directly descended from Neolithic Aegeans. Proc. Natl Acad. Sci. 113, 6886–6891 (2016).
9. Mathieson, I. et al. The genomic history of southeastern Europe. Nature 555, 197 (2018).
10. Lipson, M. et al. Parallel palaeogenomic transects reveal complex genetic history of early European farmers. Nature 551, 368–372 (2017).
11. Allentoft, M. E. et al. Population genomics of Bronze Age Eurasia. Nature 522, 167–172 (2015).
12. Haak, W. et al. Massive migration from the steppe was a source for Indo-European languages in Europe. Nature 522, 207–211 (2015).
13. Olalde, I. et al. The Beaker phenomenon and the genomic transformation of Northwest Europe. Nature 555, 190 (2018).
14. Olalde, I. et al. The genomic history of the Iberian Peninsula over the past 8000 years. Science 363, 1230–1234 (2019).
15. Melis, P. Un Approdo della costa di Castelsardo, fra età nuragica e romana (Carocci, 2002).
16. Lugliè, C. Your path led through the sea… the emergence of Neolithic in Sardinia and Corsica. Quat. Int. 470, 285–300 (2018).
17. Barnett, W. K. Cardial pottery and the agricultural transition in Mediterranean Europe. In Europe's First Farmers, 93–116 (2000).
18. Zilhão, J. Radiocarbon evidence for maritime pioneer colonization at the origins of farming in west Mediterranean Europe. Proc. Natl Acad. Sci. USA 98, 14180–14185 (2001).
19. Tykot, R. H. Obsidian procurement and distribution in the central and western Mediterranean. J. Mediterranean Archaeol. 9, 39–82 (1996).
20. Webster, G. The Archaeology of Nuragic Sardinia, vol. 14 of Monographs in Mediterranean Archaeology (Equinox Publishing, 2016).
21. Moscati, S. La penetrazione fenicia e punica in Sardegna. Mem. della Accad. Nazionale dei Lincei, Cl. di Sci. Moral., storiche e filologiche 8.7.3, 215–250 (1966).
22. Van Dommelen, P. Punic farms and Carthaginian colonists: surveying Punic rural settlement in the central Mediterranean. J. Rom. Archaeol. 19, 7–28 (2006).
23. Guirguis, M., Murgia, C. & Pla Orquín, R. in From the Mediterranean to the Atlantic: People, Goods and Ideas between East and West. Proceedings of the 8th International Congress of Phoenician and Punic Studies (Italy, Sardinia-Carbonia, Sant'Antioco, 21–26 October 2013) (Folia Phoenicia, 1) (ed Serra, F.) 282–299 (Fabrizio Serra Editore, Pisa-Roma, 2017).
24. Dyson, S. & Rowland, R. Archaeology and History in Sardinia from the Stone Age to the Middle Ages: Shepherds, Sailors, and Conquerors (University of Pennsylvania Museum Press, 2007).
25. Ortu, L. Storia della Sardegna dal Medioevo all'età contemporanea (Cuec, 2011).
26. Mastino, A. Storia della Sardegna antica, Vol. 2 (Il Maestrale, 2005).
27. Calò, C., Melis, A., Vona, G. & Piras, I. Review synthetic article: Sardinian population (Italy): a genetic review. Int. J. Mod. Anthropol. 1, 39–64 (2008).
28. Lettre, G. & Hirschhorn, J. N. Small island, big genetic discoveries. Nat. Genet. 47, 1224–1225 (2015).
29. Siniscalco, M. et al. Population genetics of haemoglobin variants, thalassaemia and glucose-6-phosphate dehydrogenase deficiency, with particular reference to the malaria hypothesis. Bull. World Health Organ. 34, 379 (1966).
30. Contu, L., Arras, M., Carcassi, C., Nasa, G. L. & Mulargia, M. HLA structure of the Sardinian population: a haplotype study of 551 families. Tissue Antigens 40, 165–174 (1992).
31. Lampis, R., Morelli, L., De Virgiliis, S., Congia, M. & Cucca, F. The distribution of HLA class II haplotypes reveals that the Sardinian population is genetically differentiated from the other Caucasian populations. Tissue Antigens 56, 515–521 (2000).
32. Chiang, C. W. et al. Genomic history of the Sardinian population. Nat. Genet. 50, 1426–1434 (2018).
33. Sidore, C. et al. Genome sequencing elucidates Sardinian genetic architecture and augments association analyses for lipid and blood inflammatory markers. Nat. Genet. 47, 1272–1281 (2015).
34. Ghirotto, S. et al. Inferring genealogical processes from patterns of Bronze-Age and modern DNA variation in Sardinia. Mol. Biol. Evolution 27, 875–886 (2009).
35. Modi, A. et al. Complete mitochondrial sequences from Mesolithic Sardinia. Sci. Rep. 7, 42869 (2017).
36. Olivieri, A. et al. Mitogenome diversity in Sardinians: a genetic window onto an island's past. Mol. Biol. Evolution 34, 1230–1239 (2017).
37. Matisoo-Smith, E. et al. Ancient mitogenomes of Phoenicians from Sardinia and Lebanon: a story of settlement, integration, and female mobility. PLoS ONE 13, e0190169 (2018).
38. Viganò, C., Haas, C., Rühli, F. J. & Bouwman, A. 2000-year-old β-thalassemia case in Sardinia suggests malaria was endemic by the Roman period. Am. J. Phys. Anthropol. 164, 362–370 (2017).
39. Pickrell, J. K. & Reich, D. Toward a new history and geography of human genes informed by ancient DNA. Trends Genet. 30, 377–389 (2014).
40. Lai, L. et al. Diet in the Sardinian Bronze Age: models, collagen isotopic data, issues and perspectives. Préhistoires Méditerranéennes (2013).
41. Skeates, R., Gradoli, M. G. & Beckett, J. The cultural life of caves in Seulo, central Sardinia. J. Mediterranean Archaeol. 26, 97–126 (2013).
42. Germanà, F. in Atti del XX Congresso Internazionale d'Antropologia e d'Archeologia Preistorica, Cagliari, Poligraf, Aprilia, 377–394 (Università di Cagliari, 1980).
43. Pompianu, E. & Murgia, C. in Sa Massarìa: ecologia storica dei sistemi di lavoro contadino in Sardegna (Europa e Mediterraneo. Storia e immagini di una comunità internazionale 37) (eds Serreli, G., Melis, R., French, C. & Sulas, F.) 455–504 (CNR, Cagliari, 2017).
44. La Fragola, A. & Rovina, D. Il cimitero romano di Monte Carru (Alghero) e la statio di Carbia. Sard., Cors. et Baleares antiquae 16, 59–79 (2018).
45. Meloni, G. M. in Bonnanaro e il suo patrimonio culturale (ed Conca, C.) 90–99 (Segnavia, Sassari, 2004).
46. Rovina, D., Fiori, M. & Olia, P. in Sassari: Archeologia Urbana (eds Rovina, D., Fiori, M.) 120–129 (Felici Editore, 2013).
47. Fu, Q. et al. An early modern human from Romania with a recent Neanderthal ancestor. Nature 524, 216 (2015).
48. Mathieson, I. et al. Genome-wide patterns of selection in 230 ancient Eurasians. Nature 528, 499–503 (2015).
49. Lazaridis, I. et al. Genomic insights into the origin of farming in the ancient Near East. Nature 536, 419–424 (2016).
50. Lazaridis, I. et al. Genetic origins of the Minoans and Mycenaeans. Nature 548, 214–218 (2017).
51. Francalacci, P. et al. Low-pass DNA sequencing of 1200 Sardinians reconstructs European Y-chromosome phylogeny. Science 341, 565–569 (2013).
52. Fernandes, D. M. et al. The arrival of Steppe and Iranian related ancestry in the islands of the Western Mediterranean. bioRxiv https://doi.org/10.1101/584714 (2019).
53. Feldman, M. et al. Ancient DNA sheds light on the genetic origins of early Iron Age Philistines. Sci. Adv. 5, eaax0061 (2019).
54. Antonio, M. L. et al. Ancient Rome: a genetic crossroads of Europe and the Mediterranean. Science 366, 708–714 (2019).
55. Le Lannou, M. et al. Pâtres et Paysans de la Sardaigne. Tours 8, 364 (1941).
56. Sarno, S. et al. Ancient and recent admixture layers in Sicily and Southern Italy trace multiple migration routes along the Mediterranean. Sci. Rep. 7, 1984 (2017).
57. Günther, T. et al. Ancient genomes link early farmers from Atapuerca in Spain to modern-day Basques. Proc. Natl Acad. Sci. USA 112, 11917–11922 (2015).
58. Zalloua, P. et al. Ancient DNA of Phoenician remains indicates discontinuity in the settlement history of Ibiza. Sci. Rep. 8, 17567 (2018).
59. Hellenthal, G. et al. A genetic atlas of human admixture history. Science 343, 747–751 (2014).
60. Loh, P.-R. et al. Inferring admixture histories of human populations using linkage disequilibrium. Genetics 193, 1233–1254 (2013).
61. Dabney, J. et al. Complete mitochondrial genome sequence of a Middle Pleistocene cave bear reconstructed from ultrashort DNA fragments. Proc. Natl Acad. Sci. USA 110, 15758–15763 (2013).
62. Meyer, M. & Kircher, M. Illumina sequencing library preparation for highly multiplexed target capture and sequencing. Cold Spring Harb. Protoc. 2010, pdb–prot5448 (2010).
63. Rohland, N., Harney, E., Mallick, S., Nordenfelt, S. & Reich, D. Partial uracil–DNA–glycosylase treatment for screening of ancient DNA. Philos. Trans. R. Soc. B: Biol. Sci. 370, 20130624 (2015).
64. Peltzer, A. et al. EAGER: efficient ancient genome reconstruction. Genome Biol. 17, 60 (2016).
65. Schubert, M., Lindgreen, S. & Orlando, L. AdapterRemoval v2: rapid adapter trimming, identification, and read merging. BMC Res. Notes 9, 88 (2016).
66. Li, H. & Durbin, R. Fast and accurate short read alignment with Burrows–Wheeler transform. Bioinformatics 25, 1754–1760 (2009).
67. Korneliussen, T. S., Albrechtsen, A. & Nielsen, R. ANGSD: analysis of next generation sequencing data. BMC Bioinforma. 15, 356 (2014).
68. Al-Asadi, H., Dey, K. K., Novembre, J. & Stephens, M. Inference and visualization of DNA damage patterns using a grade of membership model. Bioinformatics 35, 1292–1298 (2018).
69. Renaud, G., Slon, V., Duggan, A. T. & Kelso, J. Schmutzi: estimation of contamination and endogenous mitochondrial consensus calling for ancient DNA. Genome Biol. 16, 224 (2015).
70. Vianello, D. et al. Haplofind: a new method for high-throughput mtDNA haplogroup assignment. Hum. Mutat. 34, 1189–1194 (2013).
71. Andrews, R. M. et al. Reanalysis and revision of the Cambridge reference sequence for human mitochondrial DNA. Nat. Genet. 23, 147 (1999).
72. Van Oven, M. & Kayser, M. Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation. Hum. Mutat. 30, E386–E394 (2009).
73. Patterson, N., Price, A. L. & Reich, D. Population structure and eigenanalysis. PLoS Genet. 2, e190 (2006).
74. Lee, S., Zou, F. & Wright, F. A. Convergence and prediction of principal component scores in high-dimensional settings. Ann. Stat. 38, 3605 (2010).
75. Alexander, D. H., Novembre, J. & Lange, K. Fast model-based estimation of ancestry in unrelated individuals. Genome Res. 19, 1655–1664 (2009).
76. Pritchard, J. K., Stephens, M. & Donnelly, P. Inference of population structure using multilocus genotype data. Genetics 155, 945–959 (2000).
77. Patterson, N. et al. Ancient admixture in human history. Genetics 192, 1065–1093 (2012).
78. 1000 Genomes Project Consortium. A global reference for human genetic variation. Nature 526, 68 (2015).

## Acknowledgements

We thank Maanasa Raghavan for in-depth feedback on drafts, Anna Di Rienzo and Gonçalo Abecasis for helpful discussions, and Magdalena Zoledziewska for useful comments and early assistance. We would like to thank Antje Wissgott, Cäcilia Freund and other members of the wet laboratory and computational teams at MPI-SHH in Jena. We thank Nadin Rohland, Éadaoin Harney, Shop Mallick, and Alan Cooper for contributing to generating the data for the four samples processed at the Australian Centre for Ancient DNA and in D.R.'s ancient DNA laboratory. We also thank Dan Rice and members of the Novembre lab for helpful discussion and feedback.
IRGB-CNR would like to thank the Consortium SA CORONA ARRUBIA DELLA MARMILLA for making available equipment and scientific instruments within the program "Laboratori Dna del Museo Naturalistico del Territorio - Giovanni Pusceddu". This study was supported in part by the Max Planck Society; the University of Sassari; the National Science Foundation via fellowships DGE-1746045 to J.H.M. and DGE-1644869 to T.A.J. and HOMINID grant BCS-1032255 to D.R.; the National Institute of General Medical Sciences via training grant T32GM007197 support for J.H.M. and grant R01GM132383 to J.N.; the National Human Genome Research Institute via grant R01HG007089 to J.N.; the Intramural Research Program of the National Institute on Aging via contracts N01-AG-1-2109 and HHSN271201100005C to F.C.; Fondazione di Sardegna via grants U1301.2015/AI.1157 BE Prat. 2015-1651 to F.C.; the Australian Research Council via grant DP130102158 to W.H.; the Howard Hughes Medical Institute (D.R.); the University of Pavia INROAd Program (A.O.); the Italian Ministry of Education, University and Research (MIUR) Dipartimenti di Eccellenza Program (2018-2022, A.O.); the Fondazione Cariplo via project 2018-2045 to A.O.; the British Academy (R.S.); and the American Society of Prehistoric Research (N.T.).

## Author information

### Contributions

We annotate author contributions using the CRediT Taxonomy labels (https://casrai.org/credit/). Where multiple individuals serve in the same role, the degree of contribution is specified as 'lead', 'equal', or 'supporting'. Conceptualization (design of study)—lead: F.C., J.N., J.K., and L.L.; supporting: C.S., C.P., D.S., J.H.M., and G.A. Investigation (collection of skeletal samples)—lead: L.L. and R.S.; supporting: J.B., M.G.G., C.D.S., C.P., V.M., E.P., C.M., A.L.F., D.Ro., M.G., R.P.O., N.T., P.V.D., S.R., P.M., R.B., R.M.S., and P.B. (minor contribution from C.S., J.N.). Investigation (ancient DNA isolation and sequencing)—lead: C.P., A.F., R.R., and M.M.; supporting: C.D.S., W.H., J.K., D.Re.* Data curation (data quality control and initial analysis)—lead: J.H.M., C.P., and H.R.; supporting: C.S., C.C., K.D., H.A., and A.O. Formal analysis (general population genetics)—lead: J.H.M. and H.R.; supporting: T.A.J. and C.L. Writing (original draft preparation)—lead: J.H.M., H.R., and J.N.; supporting: C.P., R.S., L.L., F.C., and P.V.D. Writing (review and editing)—input from all authors.* Supervision—equal: F.C., J.K., and J.N. Funding acquisition—lead: J.K., F.C., and J.N.; supporting: R.S.

*D.R. contributed data for four samples and reviewed the description of the data generation for these samples. As he is also senior author on a separate manuscript that reports data on a non-overlapping set of ancient Sardinians, and his group and ours wished to keep the two studies intellectually independent, he did not review the entire paper until after it was accepted.

### Corresponding authors

Correspondence to Francesco Cucca, Johannes Krause or John Novembre.

## Ethics declarations

### Competing interests

The authors declare no competing interests. Peer review information: Nature Communications thanks Rosa Fregel, Torsten Günther and Martin Sikora for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Reprints and Permissions

Marcus, J.H., Posth, C., Ringbauer, H. et al.
Genetic history from the Middle Neolithic to present on the Mediterranean island of Sardinia. Nat. Commun. 11, 939 (2020). https://doi.org/10.1038/s41467-020-14523-6
# A good engine to select for my project?

Discussion in 'Firewall Forward / Props / Fuel system' started by geosnooker2000, Jun 23, 2019.

1. Toobuilder (Mojave, Ca), Jun 25, 2019:

If you lived near me I could do it. I have access to both 172 types. ...I assumed just about everyone knows someone with a 172.

2. Toobuilder (Mojave, Ca), Jun 25, 2019:

Another thing to consider as an E-AB engine: the O-300's are very lacking in performance/efficiency parts compared to the Lyc. For example, the updraft Lycoming sump is an abortion, but looks like a work of art compared to the kludge on the O-300. And there are multiple bolt-on options to fix the Lyc, while the Conti would require significant custom fabrication.

3. Winginitt, Jul 17, 2019:

You are correct in your assertion that a 6 is safer than a 4, in that completely losing power to 1 cylinder can cause a 4 to become inoperable due to the imbalance caused. With a 6 cylinder, the engine has a much better chance of continuing its flight. That being said, there are other ways to look at what things can exacerbate the situation in either engine and become catastrophic; but a non-catastrophic cause can take a 4 cylinder down when a 6 would continue flying. The reverse cannot be said for the 4 cylinder, so the 6 can be safer in those failure modes. As for the GO-300: they have some issues to contend with, and that's why they went out of production instead of every manufacturer jumping on the bandwagon and building their own versions of higher-HP geared engines. One particular concern with GO-300s is the unavailability of a certain bearing or bushing in the gear drive unit. Unless someone has rectified that in the last few years since I sold a GO-300, they are virtually unobtainable.

4. Victor Bravo (KWHP, Los Angeles CA, USA), Jul 17, 2019:

Antique Aero Engines in Santa Paula, CA WAS the place where those GO-300 problems could be solved... if he wanted to help you. But crotchety and hilarious old Al Ball passed away a few years ago, and his son Brad runs the shop now, and he may not support the GO-300 anymore. There is one dusty old FAR hiding in the far corner which allows the owner to produce a part that is no longer available. 100% legit and legal. One of our grizzled and battle-worn old greybeard aircraft mechanics here on HBA will probably be able to quote the actual FAR section, or 14 CFR, or whatever it's called this week.

5. TerryM76 (Tempe, AZ), Jul 17, 2019:

§21.9 Replacement and modification articles.
(a) If a person knows, or should know, that a replacement or modification article is reasonably likely to be installed on a type-certificated product, the person may not produce that article unless it is—
(1) Produced under a type certificate;
(2) Produced under an FAA production approval;
(3) A standard part (such as a nut or bolt) manufactured in compliance with a government or established industry specification;
(4) A commercial part as defined in §21.1 of this part;
(5) Produced by an owner or operator for maintaining or altering that owner or operator's product;
(6) Fabricated by an appropriately rated certificate holder with a quality system, and consumed in the repair or alteration of a product or article in accordance with part 43 of this chapter; or
(7) Produced in any other manner approved by the FAA.
(b) Except as provided in paragraphs (a)(1) through (a)(2) of this section, a person who produces a replacement or modification article for sale may not represent that part as suitable for installation on a type-certificated product.
(c) Except as provided in paragraphs (a)(1) through (a)(2) of this section, a person may not sell or represent an article as suitable for installation on an aircraft type-certificated under §§21.25(a)(2) or 21.27 unless that article—
(1) Was declared surplus by the U.S. Armed Forces, and
(2) Was intended for use on that aircraft model by the U.S. Armed Forces.

8. Victor Bravo (KWHP, Los Angeles CA, USA), Jul 17, 2019:

You didn't seem quite that grizzled and battle-worn, but that is the rule I was thinking of.

9. TerryM76 (Tempe, AZ), Jul 17, 2019:

Well... I do somehow manage to keep up my youthful appearance... except for the gray.

10. Victor Bravo (KWHP, Los Angeles CA, USA), Jul 18, 2019:

21.303 was actually the one I was thinking of; sorry for the mis-identification. Basically, the FAA is very well aware that Continental Motors will pretty much hang up the phone on you when you call to ask about the GO-300. So if you have a Cessna 175 and want to keep it running, you will have a fairly straightforward and non-combative response from the FAA regarding an owner-manufactured part. They will want to be convinced it is a good quality part that matches the original (gear, bushing, shaft, whatever), and that you did not try to use an off-the-shelf McMaster-Carr component that is "almost the same size". But they will more than likely let you through unscathed if you show them you did it the right way. Not so much if you make your own pistons for a 150-HP O-320, because there are OEM and PMA replacements easily available.

11. TFF (Memphis, TN), Jul 18, 2019:

A friend had some pistons made for his OX-5. It was not really a problem, except the FAA inspectors have to jell with it. Don't want it yesterday; it takes them about three months after submission to hem and haw into signing it. I had another friend do another owner-approved part completely all wrong. Great if homebuilt, but not certified.
If you want Continental to hang up on you, try a 520 Voyager. There still is the echo of the phone slamming.

12. Winginitt, Jul 18, 2019:

I think the thing here is that someone would not need FAA approval to make a part for an engine being used in an experimental airplane. If that's not true, then someone please correct my statement. That said, GO-300s needing overhaul might be worth buying, because people are converting their 175s and the GOs become parts for other 175 owners. Might even be a market for someone to produce these replacement bearings/bushings for "experimental only" usage. Then all the "certified" guys would buy them and secretly install them in their 175s. Nah, never happen... we all know logbooks are always truthful!

13. TFF (Memphis, TN), Jul 18, 2019:

What logbook?

14. Victor Bravo (KWHP, Los Angeles CA, USA), Jul 18, 2019:

Ain't nothing wrong with the GO-300 as-is; you just have to operate it differently than non-geared engines. It is simply NOT an engine to use for flight training, shooting landings, or bush flying. Use it on an XC airplane where you leave it at one power setting for most of the flight, and move the power lever very slowly, and it will work just fine. Don't back-drive the gears and it will run just fine. This was told to me directly by a very very very highly respected expert on oddball and unusual engines (Al Ball, Antique Aero Engines). It uses a little more fuel because it is making 30 HP more than the O-300. It wears out a little quicker because it's running faster to make 30 HP more. It's just not "bullet-proof" like some Lycoming and Continental engines.

15. TFF (Memphis, TN), Jul 19, 2019:

Back-driving the gears is the problem. No part-throttle flying. High power until the last part of downwind, pull back, and land. Not a good low-and-slow engine for flying at low power just above stall, out for a sunset flight. True on being used on a homebuilt and not requiring certified parts. Still, though, if you do use one, get a spare. Except for rods, pistons and cylinders, nothing is the same as the O-300. Crank is different, case different, gearbox makes it different. Last time the big-money parts were made was the '60s. It also runs a pretty big prop, so make sure it's got ground clearance. I think a 175 uses an 80" prop. There is a 175 at my airport. Project, not all bad, but it has not been flown in a long time, probably 30 years. The owner started messing with it and really made a mess of it. He has owned it since the '60s.

16. BBerson (Port Townsend WA), Jul 19, 2019:

The Skylark 175 prop is huge. Can be up to 90" on the seaplane:

[McCauley 1B175/MFC 8467: (a) Diameter: not over 84 in., not under 82.5 in. Static rpm at maximum permissible throttle setting: Landplane: not over 2645, not under 2545. See NOTE 4. No additional tolerance permitted. (b) Spinner, Cessna Dwg. 0550221.
McCauley 1D200/OM 9044 (seaplane only): (a) Diameter: not over 90 in., not under 88 in. Static rpm at maximum permissible throttle setting: not over 2810, not under 2710. No additional tolerance permitted. (b) Spinner, Cessna Dwg. 0552004.
The type-certificate excerpt continues with the Model 175A/175B (Skylark, 4 PCL-SM, Normal Category) airspeed limits (maneuvering, maximum structural cruising, never exceed, flaps extended), whose values did not survive extraction.]

17. Winginitt, Jul 20, 2019:

You know, that religiously maintained record that all certified airplane owners keep... the one you bet your life on when you buy a certified airplane and fly it. The one that makes thousands of dollars of difference in the value of an engine if a minor prop strike isn't listed. I've had experiences with unscrupulous owners and have very little faith in logbooks. Here's a picture of an undocumented repair on an airplane I bought. The same guy who did that also performed unrecorded engine repairs. Sold the plane to a guy who said he wanted to restore it. Turns out he was an employee of an airplane salvage business in Florida, and they wouldn't send the paperwork to the FAA. Tracked them down and they lied about reselling and then said they parted it out. Had a devil of a time getting the FAA to do anything to get it out of my name. Somewhere out there there is an engine with a logbook that's far from accurate, and someone thinks they have a safe engine. No, I think they are good engines too. The point simply is that when I had one, I found that there was one particular part, critical to the drive system, that you could not obtain a replacement for. So do owners of 175s scrap their whole $20K airplane because they can't find a certified bushing/bearing for those engines? Many convert to Lycomings and sell their old 175 to an experimental builder. What do the other owners do? Maybe these days someone makes a replacement? I don't know, but I felt the OP might want to be aware of this possible issue before buying any engine.

18. TFF (Memphis, TN), Jul 20, 2019:

Mine was a rhetorical question. I'm the one who has had to put those books right, along with the aircraft. 175 owners are in a poke and it's sad. It's not a 172. The T-41 and 172 Hawk XP are the only sisters to the 175; they are on a different type certificate than the 172. Not a 180 either. Engine solutions are: hope the GO is good, or STC with a Continental 470 or Lycoming IO-360. You have to want it bad for the Lycoming. I believe the STC costs about double what an airframe would cost. Worth it if going full Alaska with Edo floats; then it, with all the conversion work, costs less than a 180 on floats. All about the end dream. One on my field can be had for less than 10k, but after putting 10 in it, it's worth what a flying one could have been had for without all the restoration work. 20k airplane all day long.

19. Winginitt, Jul 20, 2019:

You are right, of course. I thought you misunderstood that I was referring to certified airplanes. I meant to point out that there are a lot of people out there who do things and don't document them... and I can believe that someone who cannot afford the conversions you mentioned is also not going to be inclined to scrap a perfectly good airplane if he can't buy a $100 (?) bushing. If memory serves me, I believe the wings have larger gas tanks but fit on 172s. A guy from Canada drove down and bought the wings for more than I gave for the whole airplane.
He said Cessna wanted $25K for wings, so he was tickled to get perfect wings for half that... and I was tickled to sell 'em. I think he was putting them on a 172 on floats. This is a perfect example of why building something like a Bearhawk is a great alternative. You can use any engine you want, even a GO-300, and not have to worry about obsolescence or making a special part.
Pelin: Are these sentences OK? I mean the working day has ceased. If I say:

"The office hours are over. Please, come back tomorrow."
"The office hours are off. Please, come back tomorrow."

Dec 4, 2014, 9:21 PM
# The angle of elevation of the top of a tower of height x metre from a point on the ground is found to be 60°. By going y metre away from that point, it becomes 30°. Which one of the following relations is correct?

This question was previously asked in CDS 01/2022: Maths Previous Paper (held on 10 April 2022).

1. $x = y$
2. $2x = 3y$
3. $2x = \sqrt{3}\,y$
4. $2y = \sqrt{3}\,x$

## Answer (Detailed Solution Below)

Option 3: $2x = \sqrt{3}\,y$

## Detailed Solution

Given: the angle of elevation of the top of a tower of height x metre from a point on the ground is 60°; moving y metre farther away from the tower, it becomes 30°.

Concept used: $\tan\theta = P/B$, where P is the perpendicular (height) and B is the base (horizontal distance).

Calculation: let AB be the tower of height x, let C be the nearer observation point, and let D be the point y metre farther from the tower, so that CD = y.

$\tan 60° = AB/BC \Rightarrow \sqrt{3} = x/BC \Rightarrow BC = x/\sqrt{3}$ ----(1)

$\tan 30° = AB/BD \Rightarrow 1/\sqrt{3} = x/BD \Rightarrow BD = \sqrt{3}\,x$

Since $BD = BC + CD$:

$\sqrt{3}\,x = x/\sqrt{3} + y$ [from equation (1)]

$\Rightarrow \sqrt{3}\,x = (x + \sqrt{3}\,y)/\sqrt{3} \Rightarrow 3x = x + \sqrt{3}\,y \Rightarrow 2x = \sqrt{3}\,y$

∴ The required answer is Option 3.
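More generally (a quick check of the result, not part of the original solution): if the nearer and farther angles of elevation are $\theta_1$ and $\theta_2$, the two tangent equations give

$$x = BC\tan\theta_1, \quad x = (BC + y)\tan\theta_2 \;\Longrightarrow\; x = \frac{y\,\tan\theta_1\tan\theta_2}{\tan\theta_1 - \tan\theta_2},$$

and substituting $\theta_1 = 60°$, $\theta_2 = 30°$ gives $x = \dfrac{y \cdot \sqrt{3} \cdot \frac{1}{\sqrt{3}}}{\sqrt{3} - \frac{1}{\sqrt{3}}} = \dfrac{\sqrt{3}}{2}\,y$, i.e., $2x = \sqrt{3}\,y$ again.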
This is a summary of common probability distributions in engineering and statistics. This chart has the plots of the pdf or pmf (LaTeX source):

# discrete distributions

binomial distribution
• A big urn with balls in either white or black color. Drawing a white ball from the urn has probability $x$ (i.e., a black ball has probability $1-x$). If we draw $n$ balls from the urn with replacement, the probability of getting $k$ white balls: $$\Pr[k]=\binom{n}{k}x^k(1-x)^{n-k}$$

Poisson distribution
• Balls are added to the urn at a rate of $\lambda$ per unit time, with exponentially distributed inter-arrival times. The probability of having $k$ balls added to the urn within time $t$: $$\Pr[k]=\frac{(\lambda t)^k e^{-\lambda t}}{k!}$$

geometric distribution
• The probability of having to draw $k$ balls to see the first white ball drawn: $$\Pr[k]=(1-x)^{k-1}x$$

negative binomial distribution
• same as the distribution of the sum of $r$ iid geometric random variables
• negative binomial approximates Poisson with $\lambda = r(1-x)$ for large $r$ and $x\approx 1$
• Drawing balls from the urn. If we have to draw $k$ balls to see the $r$-th white ball (i.e., we have drawn $r$ white balls and $k-r$ black balls), the probability of $k$: $$\Pr[k]=\binom{k-1}{r-1}x^r(1-x)^{k-r}$$

hypergeometric distribution
• An urn with $N$ balls (finite), $K$ of which are white. Draw, without replacement, $n$ balls from the urn; the probability of getting $k$ white balls: $$\Pr[k]=\frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$$

# continuous distributions

uniform distribution
• the extreme case of a flattened distribution
• with upper and lower bounds

triangular distribution
• with upper and lower bounds

normal distribution
• strong tendency for data at the central value; symmetric, with positive and negative deviations from the central value equally likely
• frequency of deviations falls off rapidly as we move further away from the central value
• for independent $X_1 \sim N(\mu_1, \sigma^2_1)$ and $X_2 \sim N(\mu_2, \sigma^2_2)$: $X_1+X_2 \sim N(\mu_1+\mu_2, \sigma_1^2+\sigma_2^2)$
• approximation to Poisson distribution: if $\lambda$ is large, the Poisson distribution approximates normal with $\mu=\sigma^2=\lambda$
• approximation to binomial distribution: if $n$ is large and $x\approx \frac{1}{2}$, the binomial distribution approximates normal with $\mu=nx$ and $\sigma^2=nx(1-x)$
• approximation to beta distribution: if $\alpha$ and $\beta$ are large, the beta distribution approximates normal with $\mu=\frac{\alpha}{\alpha+\beta}$ and $\sigma^2=\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$

Laplace distribution
• absolute difference from the mean, compared to squared difference in the normal distribution
• longer (fatter) tails and higher kurtosis than the normal
• pdf: $$f(x)=\frac{1}{2b}\exp\left(-\frac{|x-\mu|}{b}\right)$$

logistic distribution
• symmetric, with longer tails and higher kurtosis than the normal distribution
• the logistic distribution has finite mean $\mu$ and a defined variance
• $X\sim U(0,1) \to \mu+s[\log(X)-\log(1-X)] \sim \textrm{Logistic}(\mu,s)$
• $X\sim \textrm{Exp}(1) \to \mu+s\log(e^X-1) \sim \textrm{Logistic}(\mu,s)$
• logistic pdf: $$f(x)=\frac{e^{-(x-\mu)/s}}{s\left(1+e^{-(x-\mu)/s}\right)^2}$$

Cauchy distribution
• symmetric, with longer tails and higher kurtosis than the normal distribution
• the Cauchy distribution has undefined mean and variance, but median and mode at $\mu$
• for independent $X,Y\sim N(0,\sigma^2)$: $X/Y \sim \textrm{Cauchy}(0,1)$
• Cauchy pdf (location $\mu$, scale $\gamma$): $$f(x)=\frac{1}{\pi\gamma\left[1+\left(\frac{x-\mu}{\gamma}\right)^2\right]}$$

lognormal distribution
• $\log(X)\sim N(\mu,\sigma^2)$, positively skewed
• parameterised by shape ($\sigma$), scale ($\mu$, or median), shift ($\theta$)
• $\mu=0, \theta=1$ is the standard lognormal distribution
• as $\sigma$ rises, the peak shifts to the left and skewness increases
• the product (not the sum) of two independent lognormal random variables is a lognormal random variable with $\mu=\mu_1+\mu_2$ and $\sigma^2=\sigma_1^2+\sigma_2^2$
Pareto distribution
• power-law probability distribution
• continuous counterpart of Zipf's law
• positively skewed, no negative tail, peak at $x=0$

gamma distribution
• support for $x\in(0,\infty)$, positive skewness (leans left)
• decreasing $\alpha$ pushes the distribution towards the left; at low $\alpha$, the left tail disappears and the distribution resembles the exponential
• models the time to the $\alpha$-th Poisson arrival with arrival rate $\beta$
• gamma pdf ($\alpha=1$ becomes the exponential pdf with rate $\beta$): $$f(x)=\frac{\beta^\alpha}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x}$$

Weibull distribution
• support for $x\in(0,\infty)$, positive skewness (leans left)
• decreasing $k$ pushes the distribution towards the left; at low $k$, the left tail disappears and the distribution resembles the exponential
• If $W\sim\textrm{Weibull}(k,\lambda)$, then $X=W^k \sim \textrm{Exp}(1/\lambda^k)$
• Weibull pdf ($k=1$ becomes the exponential pdf with rate $1/\lambda$): $$f(x)=\frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1}e^{-(x/\lambda)^k}$$

Erlang distribution
• $X_i\sim\textrm{Exp}(\lambda) \to \sum_{i=1}^k X_i \sim \textrm{Erlang}(k, \lambda)$
• arises from teletraffic engineering: time to the $k$-th call

beta distribution
• support for $x\in(0,1)$
• allows negative skewness
• two shape parameters $p$ and $q$, and lower and upper bounds on the data ($a$ and $b$)

extreme value distribution (i.e., Gumbel minimum distribution)
• negatively skewed
• the Gumbel maximum distribution, $f(-x;-\mu,\beta)$, is positively skewed
• limiting distribution of the max/min value of $n\to\infty$ iid samples from $\textrm{Exp}(\lambda)$ with $\lambda = 1/\beta$
• standard cdf: $F(x)=1-\exp(-e^x)$

Rayleigh distribution
• positively skewed
• models the $L^2$-norm of two iid zero-mean normal random variables (e.g., orthogonal components of a 2D vector)

Maxwell-Boltzmann distribution
• positively skewed
• 3D counterpart of the Rayleigh distribution
• arises from thermodynamics: probability of a particle having speed $v$ when the temperature is $T$

Chi-squared distribution
• distribution of the sum of the squares of $k\ge 1$ iid standard normal random variables
• mean $k$, variance $2k$
• pdf with $k$ degrees of freedom: $$f(x)=\frac{1}{2^{k/2}\Gamma(k/2)}x^{k/2-1}e^{-x/2}$$

F-distribution
• distribution of a random variable defined as the ratio of two independent $\chi^2$-distributed random variables (each divided by its degrees of freedom)
• commonly used in ANOVA
• pdf, with degrees of freedom $d_1$ and $d_2$, involving the beta function $B(\alpha,\beta)$: $$f(x)=\frac{1}{B\!\left(\frac{d_1}{2},\frac{d_2}{2}\right)}\left(\frac{d_1}{d_2}\right)^{\frac{d_1}{2}}x^{\frac{d_1}{2}-1}\left(1+\frac{d_1}{d_2}x\right)^{-\frac{d_1+d_2}{2}}$$

Student's t distribution
• distribution of the normalized sample mean of $n=k+1$ observations from a normal distribution, $\frac{\bar{X}-\mu}{S/\sqrt{n}}$
• pdf with $k$ degrees of freedom: $$f(x)=\frac{\Gamma\!\left(\frac{k+1}{2}\right)}{\sqrt{k\pi}\,\Gamma\!\left(\frac{k}{2}\right)}\left(1+\frac{x^2}{k}\right)^{-\frac{k+1}{2}}$$

# test of fit for distributions

(A small scipy illustration of these tests appears after the reference at the end.)

Kolmogorov-Smirnov test (K-S test, on the cumulative distribution function $F(x)$)
• $D_n = \sup_x |F_n(x) - F(x)|$, where $F_n$ is the empirical cdf of $n$ samples
• if the sample comes from the distribution, $D_n$ converges to 0 a.s. as the number of samples $n$ goes to infinity

Shapiro-Wilk test
• test of normality in frequentist statistics (i.e., whether the $x_i$ come from a normal distribution): $$W = \frac{\left(\sum_{i=1}^n a_i x_{(i)}\right)^2}{\sum_{i=1}^n (x_i-\bar{x})^2}$$
• $\bar{x} = \frac{1}{n}(x_1 + \cdots + x_n)$ is the sample mean, and $x_{(i)}$ is the $i$-th order statistic
• $(a_1,\cdots,a_n) = m^T V^{-1} (m^T V^{-1}V^{-1} m)^{-1/2}$, where $m$ is the vector of expected values of the order statistics from the normal distribution and $V$ the covariance matrix of those order statistics

Anderson-Darling test
• test whether a sample comes from a specified distribution: $$A^2 = n\int_{-\infty}^{\infty}\frac{\left(F_n(x)-F(x)\right)^2}{F(x)\left(1-F(x)\right)}\,dF(x)$$
• $A^2$ is a weighted distance between $F_n(x)$ and $F(x)$, with more weight on the tails of the distribution

Pearson's $\chi^2$ test
• test whether categorical data fit a distribution: checking the observed frequency $O_i$ against the expected frequency $E_i$ under the distribution, for each of $n$ categories: $$\chi^2 = \sum_{i=1}^n \frac{(O_i-E_i)^2}{E_i}$$
• degrees of freedom: $n$ minus one, minus the number of fitted parameters of the distribution
# Reference

Lawrence M. Leemis and Jacquelyn T. McQueston. Univariate Distribution Relationships, The American Statistician, 62(1), pp. 45–53, 2008. DOI: 10.1198/000313008X270448

Aswath Damodaran. Probabilistic approaches: Scenario analysis, decision trees and simulations (PDF; the appendix is available separately), which includes a chart for choosing a distribution.
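As a closing numerical check of the normal-approximation bullet in the summary above, here is a small sketch (my own, not from the referenced papers) comparing the exact binomial pmf with its normal approximation via scipy.stats; the values of $n$ and $x$ are illustrative.

```python
# Normal approximation to the binomial: large n, x ~ 1/2.
import numpy as np
from scipy import stats

n, x = 1000, 0.5
mu, sigma = n * x, np.sqrt(n * x * (1 - x))

k = np.arange(450, 551)
pmf = stats.binom.pmf(k, n, x)         # exact binomial pmf
approx = stats.norm.pdf(k, mu, sigma)  # normal approximation

print(np.max(np.abs(pmf - approx)))    # tiny: the two curves nearly coincide
```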
# Vector solutions with prescribed component-wise nodes for a Schrödinger system

2018-10-09

Title: Vector solutions with prescribed component-wise nodes for a Schrödinger system

Speaker: LIU Zhaoli (Capital Normal University)

Time: October 12, 2018 (Friday), 8:30–9:30 AM

Location: Room A1101, Innovation Park Building

Contact: DAI Guowei (tel: 84708351-8135)

Abstract: For the Schrödinger system, where $k\geq 2$ and $N=2, 3$, we prove that for any $\lambda_j>0$ and $\beta_{jj}>0$ and any positive integers $p_j$, $j=1,2,\cdots,k$, there exists $b>0$ such that if $\beta_{ij}=\beta_{ji}\leq b$ for all $i\neq j$, then there exists a radial solution $(u_1,u_2,\cdots,u_k)$ with $u_j$ having exactly $p_j-1$ zeroes. Moreover, there exists a positive constant $C_0$ such that if $\beta_{ij}=\beta_{ji}\leq b\ (i\neq j)$, then any solution obtained satisfies a uniform bound given by $C_0$. Therefore, the solutions exhibit a trend of phase separation as $\beta_{ij}\to-\infty$ for $i\neq j$.

About the speaker: Professor Liu's main research direction is nonlinear functional analysis. He has made many outstanding contributions in variational methods and elliptic partial differential equations. He worked as a Humboldt scholar for two years at Giessen University in Germany. He received the support of the National Science Fund for Distinguished Young Scholars in 2008, and in 2009 he was appointed a Changjiang Scholar Professor by the Ministry of Education. He has published more than 70 SCI papers in leading international journals such as Adv. Math., Comm. Math. Phys., Indiana Univ. Math. J., Comm. Partial Differential Equations, J. Funct. Anal., Proc. London Math. Soc., Calc. Var. Partial Differential Equations, Ann. Inst. H. Poincaré Anal. Non Linéaire, Math. Z., and J. Differential Equations, and these papers have been cited more than 1200 times.
## 24.6.17

### If $N=q^k n^2$ is an odd perfect number and $q = k$, why does this bound not imply $q > 5$?

Let $\mathbb{N}$ denote the set of natural numbers (i.e., positive integers). A number $N \in \mathbb{N}$ is said to be perfect if $\sigma(N)=2N$, where $\sigma=\sigma_{1}$ is the classical sum of divisors. For example, $\sigma(6)=1+2+3+6=2\cdot{6}$, so that $6$ is perfect. (Note that $6$ is even.) Denote the abundancy index of $x \in \mathbb{N}$ by $I(x)=\sigma(x)/x$.

Euler proved that an odd perfect number $N$, if any exists, must take the form $N=q^k n^2$, where $q$ is prime satisfying $q \equiv k \equiv 1 \pmod 4$ and $\gcd(q,n)=1$.

Suppose that $k=q$. Since $q$ is prime and $q \equiv 1 \pmod 4$, this implies that $k \geq 5$. (In particular, $k \neq 1$, so that the Descartes-Frenicle-Sorli conjecture is false in this case.)

Using WolframAlpha, we get the upper bound $$I(q^k)=I(q^q)=\frac{q^{q+1}-1}{{q^q}(q-1)} \leq \frac{3906}{3125} = 1.24992,$$ which corresponds to the lower bound $$I(n^2)=\frac{2}{I(q^k)} \geq \frac{3125}{1953} \approx 1.6001\ldots$$

Consider the product $$\bigg(I(q^q) - \frac{3906}{3125}\bigg)\bigg(I(n^2) - \frac{3906}{3125}\bigg).$$ This product is nonpositive, since the first factor is $\leq 0$ and the second is $\geq 0$. Therefore, $$I(q^q)I(n^2) + \bigg(\frac{3906}{3125}\bigg)^2 \leq \frac{3906}{3125}\cdot\bigg(I(q^q) + I(n^2)\bigg).$$ Since $N=q^k n^2$ is perfect with $q=k$, we have $I(q^k)I(n^2)=I(q^q)I(n^2)=2$, so that $$I(q^q) + I(n^2) \geq \frac{3906}{3125} + \frac{3125}{1953} = \frac{17394043}{6103125} \approx 2.850022406554\ldots$$

But in the paper [Dris, 2012 (pages 4 to 5)], it is proved that $$I(q^k) + I(n^2) \leq \frac{3q^2 + 2q + 1}{q(q+1)} = 3 - \frac{q-1}{q(q+1)},$$ with equality occurring if and only if $k=1$.

In our case, since $k = q \geq 5$, we obtain $$\frac{17394043}{6103125} \leq I(q^q) + I(n^2) = I(q^k) + I(n^2) < 3 - \frac{q-1}{q(q+1)},$$ which, after solving the resulting inequality for the prime $q$, yields $$q > \frac{3125}{781} \approx 4.00128\ldots$$

Here is my question: Why does the bound $$I(q^q) + I(n^2) \geq \frac{3906}{3125} + \frac{3125}{1953} = \frac{17394043}{6103125} \approx 2.850022406554\ldots$$ not imply that $q > 5$?

I am thinking along the lines that: (1) $57/20 < I(q^k) + I(n^2) < 3$ is best-possible. (2) Improving the upper bound $3$ would result in a finite upper bound for the Euler prime $q$. (3) Therefore, improving the lower bound $57/20$ would result in a lower bound for $q$ better than the currently known $q \geq 5$.

I am guessing that it has got something to do with the interaction between the conditions $k=1$ and $q=5$. When $k=1$, we have the bounds $$I(q^k)=I(q)=1+\frac{1}{q} \leq \frac{6}{5}$$ and $$I(n^2)=\frac{2}{I(q)} \geq \frac{5}{3}.$$ When $q=5$, we have the bounds $$I(n^2) \leq 2 - \frac{5}{3q} = \frac{5}{3}$$ and $$I(q^k) \geq \frac{6}{5}.$$

Note that, when $k=1$, we have the lower bound $$I(q^k) + I(n^2) \geq \frac{43}{15} = 2.8\overline{6} > 2.85.$$ Note further that, when $q=5$, we have the upper bound $$I(q^k) + I(n^2) \leq \frac{43}{15}.$$ Together, these give $$\bigg(q = 5\bigg) \land \bigg(k = 1\bigg) \Longrightarrow I(q^k) + I(n^2) = \frac{43}{15},$$ and conversely, $$I(q^k) + I(n^2) = \frac{43}{15} \Longrightarrow \bigg(\left(q = 5\right) \land \left(k = 1\right)\bigg).$$
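As a sanity check on the arithmetic above (not part of the original question), exact rational arithmetic with Python's fractions module reproduces the bounds; the helper name I_prime_power is my own.

```python
# Verify the abundancy-index bounds with exact rationals.
from fractions import Fraction

def I_prime_power(q, k):
    """Abundancy index I(q^k) = sigma(q^k)/q^k for prime q."""
    return Fraction(q**(k + 1) - 1, q**k * (q - 1))

c = I_prime_power(5, 5)                  # I(5^5)
print(c)                                 # 3906/3125
print(c + 2 / c)                         # 17394043/6103125 ~ 2.850022...

# Smallest integer q for which 3 - (q-1)/(q(q+1)) exceeds the lower bound:
q = 4
while 3 - Fraction(q - 1, q * (q + 1)) <= c + 2 / c:
    q += 1
print(q)                                 # 5, matching q > 3125/781 ~ 4.0013
```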
# Constraints on the ICM velocity power spectrum from the X-ray lines width and shift

I. Zhuravleva, E. Churazov, A. Kravtsov, R. Sunyaev

MPI für Astrophysik, Karl-Schwarzschild str. 1, Garching, 85741, Germany
Space Research Institute, Profsoyuznaya str. 84/32, Moscow, 117997, Russia
Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Kavli Institute for Cosmological Physics and Enrico Fermi Institute, University of Chicago, Chicago, IL 60637, USA

[email protected]

Accepted …. Received …

###### Abstract

Future X-ray observations of galaxy clusters by high spectral resolution missions will provide spatially resolved measurements of the energy and width of the brightest emission lines in the intracluster medium (ICM) spectrum. In this paper we discuss various ways of using these high resolution data to constrain the velocity power spectrum in galaxy clusters. We argue that variations of these quantities with the projected distance in cool core clusters contain important information on the velocity field length scales (i.e. the size of energy-containing eddies) in the ICM. The effective length along the line of sight which provides the dominant contribution to the line flux increases with the projected distance, allowing one to probe the amplitude of the velocity variations at different spatial scales. In particular, we show that the width of the line as a function of projected distance is closely linked to the structure function of the 3D velocity field. Yet another easily obtainable proxy of the velocity field length scales is the ratio of the amplitude of the projected velocity field (line energy) variations to the dispersion of the velocity along the line of sight (line width). Finally, the projected velocity field can easily be converted into the 3D velocity field, especially for clusters like Coma with an extended flat core in the surface brightness. Under the assumption of a homogeneous isotropic Gaussian 3D velocity field, we derive simple expressions relating the power spectrum of the 3D velocity field (or structure function) and the observables. We illustrate the sensitivity of these proxies to changes in the characteristics of the power spectrum for a simple isothermal $\beta$-model of a cluster. The uncertainties in the observables, caused by the stochastic nature of the velocity field, are estimated by making multiple realizations of the random Gaussian velocity field and evaluating the scatter in the observables. If large scale motions are present in the ICM, these uncertainties may dominate the statistical errors of line width and shift measurements.

###### keywords:

X-rays: galaxies: clusters - Galaxies: clusters: intracluster medium - Turbulence - Line: profiles - Methods: analytical - Methods: numerical

## 1 Introduction

Properties of gas motions in the hot intracluster medium (ICM) are still little known. It is believed that turbulent motions are driven when matter accretes onto the filaments or during shocks in the hot gas. Turbulence transfers the kinetic energy injected on large scales (of order Mpc) to small (unknown) dissipative scales. These two scales are connected with a cascade of kinetic energy, which occurs over the inertial range (Kolmogorov, 1941; Landau & Lifshitz, 1966).
Knowing the properties of gas motions in clusters, we would be able to address a number of important questions, e.g., what is the bias in cluster mass measurements based on hydrostatic equilibrium, and whether the bias is due to the motions alone or also due to clumping in the gas density (see, e.g., Rasia et al., 2006; Nagai, Vikhlinin, & Kravtsov, 2007; Jeltema et al., 2008; Lau, Kravtsov, & Nagai, 2009); what is the ICM turbulent heating rate in clusters (e.g. Churazov et al., 2008); and what is the role of gas motions in particle acceleration (see, e.g. Brunetti, 2006; Brunetti & Lazarian, 2011).

Properties of turbulence in galaxy clusters have been studied by means of numerical simulations (e.g., Dolag et al., 2005; Cassano & Brunetti, 2005; Norman & Bryan, 1999; Iapichino et al., 2011; Vazza et al., 2011). However, despite the good “global” agreement between the simulations, the results on turbulent motions are still controversial, mainly due to the insufficient resolution of the simulations and, in particular, the low effective Reynolds number in cosmological simulations, as well as other numerical issues (see, e.g., Kitsionas et al., 2009; Dobler et al., 2003; Beresnyak & Lazarian, 2009).

The current generation of X-ray observatories cannot provide robust direct measurements of turbulence in the ICM. Only the XMM RGS grating can provide weak upper limits on the velocity amplitude in cool core clusters (Sanders, Fabian, & Smith, 2011). Indirect indications of ICM turbulence come from measurements of the resonant scattering effect (e.g. Churazov et al., 2004; Werner et al., 2009), from measurements of pressure fluctuations (Schuecker et al., 2004) or surface brightness fluctuations (Churazov et al., 2012).

Future X-ray observatories, such as Astro-H and ATHENA, with their high energy resolution, will allow us to measure shifts and broadening of individual lines in spectra of galaxy clusters with high accuracy. The combination of direct measurements of velocity amplitudes with indirect measurements via resonant scattering will give us constraints on the anisotropy of motions (Zhuravleva et al., 2011). X-ray polarimetric measurements can also provide information on gas motions perpendicular to the line of sight (Zhuravleva et al., 2010).

Here we discuss the possibility of getting information about the length scales of gas motions (e.g. the size of energy-containing eddies). We discuss various ways to constrain the structure function and power spectrum of gas motions via measurements of the projected velocity (shift of the line centroid) and the velocity dispersion (broadening of the line) as a function of projected distance from the cluster center. These ideas are illustrated with a very simple model of a galaxy cluster. An application of our methods to simulated galaxy clusters will be considered in future work.

A similar problem of obtaining the structure function of turbulence from spectral observations has been addressed in studies of the Galactic interstellar medium. In particular, it was shown that the width of molecular spectral lines increases with the size of a cloud (see e.g. Myers et al., 1978; Heyer & Brunt, 2004; Heyer et al., 2009). This correlation was interpreted in terms of a turbulent velocity spectrum (Larson, 1981). A way to constrain the structure function of turbulence in the interstellar medium (ISM) by means of velocity centroid (projected mean velocity) measurements was first considered by von Hoerner (1951) and Münch (1958) (see also Kleiner & Dickman, 1983, 1985).
Currently several different flavors of the velocity centroids method are used for studies of ISM turbulence (see e.g. Esquivel et al., 2007). More advanced techniques, such as the Velocity Channel Analysis (VCA) and the Velocity Coordinate Spectrum (VCS) (see e.g. Lazarian & Pogosyan, 2000, 2008; Chepurnov & Lazarian, 2009), were developed and applied to ISM data in the Milky Way and other galaxies (see e.g. Padoan et al., 2009; Stanimirović & Lazarian, 2001; Chepurnov et al., 2010). A few other methods have also been used, among them the Spectral Correlation Function (SCF) (Rosolowsky et al., 1999; Padoan, Goodman, & Juvela, 2003) and the Principal Component Analysis (PCA) (Brunt et al., 2003).

ISM turbulence is often supersonic and compressible (e.g. Elmegreen & Scalo, 2004). This leads to (i) large shifts in the energy centroid of the line compared to the thermal broadening and (ii) a large amplitude of the gas density fluctuations. At the same time, the individual lines used to study ISM turbulence (e.g. the 21 cm line of HI or CO lines) are often well separated from other emission lines. The regions under study often have very irregular structure on a variety of spatial scales. The analysis is therefore usually concentrated on the separation of the velocity and density fluctuations in the observed data, while the thermal broadening can often be neglected.

In contrast, in galaxy clusters the gas motions are mostly subsonic. The detection of the gas motions is still possible, since we deal with the emission lines of ions of heavy elements like Fe, Ca or S. The atomic weights of these elements are large (e.g. 56 for Fe), and this drives the pure thermal broadening of the lines down (see Fig. 1 and Section 7.2). The brightest lines in the spectra of galaxy clusters are often very close to each other. For example, in the vicinity of the He-like iron line at 6.7 keV there are forbidden and intercombination lines and a number of satellite lines, the energy separations between which are of the order of a few tens of eV (Fig. 1). The density of clusters often has a regular radial structure with a relatively small amplitude of stochastic density fluctuations. Analysis of X-ray surface brightness fluctuations in the Coma cluster shows that the density fluctuations are at the level of several per cent (Churazov et al., 2012). Also, hydrodynamical simulations of cluster formation predict very small clumping factors (see e.g. Mathiesen, Evrard, & Mohr, 1999; Nagai & Lau, 2011). Therefore, to first approximation, the contribution of density fluctuations in galaxy clusters can be neglected (see Section 7.4 for details), while the global radial dependence has to be taken into account (Section 4). Another characteristic feature of X-ray observations is the importance of Poisson noise, related to the counting statistics of X-ray photons. If one deals with cluster outskirts, the high energy resolution spectra will be dominated by Poisson noise even for large-area future telescopes like ATHENA. Finally, one can mention that the effects of self-absorption can potentially be important in clusters. Galaxy clusters are transparent in X-rays in the continuum and in most of the lines. However, some strong lines can have an optical depth of a few. Therefore, if one measures the width of an optically thick line, distortions due to the resonant scattering effect should be taken into account (see e.g. Churazov et al., 2010; Werner et al., 2009; de Plaa et al., 2012; Zhuravleva et al., 2011).
The presence of several closely spaced emission lines, the modest level of turbulence (i.e. the modest ratio of the turbulent and thermal broadenings), the lack of very strong stochastic density fluctuations on top of a regular radial structure, and the often strong level of Poisson noise all affect the choice of the simplest viable approach to relate future observables to the most basic characteristics of the ICM gas velocity field. The fact that lines in spectra of galaxy clusters are very close to each other (e.g. Fig. 1) and that the line ratios are temperature dependent can be circumvented by estimating the mean shift and broadening with a direct fit of the projected spectra with a plasma emission model (possibly a multi-temperature model). While the small thermal broadening of lines from heavy elements helps to extend the applicability of the VCA/VCS techniques into the subsonic regime (see Esquivel et al., 2003; Lazarian & Esquivel, 2003; Chepurnov et al., 2010), the limited spectral resolution of the next generation of X-ray bolometers (e.g. 7 eV for ASTRO-H) reduces the measured inertial range in the velocity domain. Direct application of the SCF and PCA methods to galaxy clusters can also be challenging, especially when the spectra are dominated by Poisson noise. These problems should be alleviated with missions like ATHENA, which will have a very large effective area and excellent energy resolution.

Below we suggest using the simplest “centroids and broadening” approach as a first step in studying ICM turbulence. This approach assumes that at any given position one fits the observed spectrum with a model of an optically thin plasma (including all emission lines) and determines the velocity centroid and the line broadening. This reduces the whole complexity of X-ray spectra down to two numbers - the shift and the broadening. A simple analysis of existing hydrodynamic simulations of galaxy clusters shows that this approximation does a good (although not perfect) job of describing the profiles of emission lines (see e.g. Fig. 2 in Inogamov & Sunyaev, 2003). At the same time, this approach is the most effective in reducing the Poisson noise in the raw measured spectra. We show below that, in spite of its simplicity, this approach provides an easy way to characterize the most basic properties of the ICM velocity field. Clearly, more sophisticated methods developed for ISM turbulence (e.g. Lazarian, 2009) will eventually be adapted to the specific characteristics of ICM turbulence and X-ray spectra, potentially providing a more comprehensive description of ICM turbulence once data of sufficient quality become available.

The structure of the paper is as follows. In Section 2 we describe and justify the models and assumptions used in our analysis. In Section 3 we specify the observables which can potentially be measured and their relation to the 3D velocity power spectrum. Section 4 shows the relation between the observed velocity dispersion and the structure function of the velocity field. A way to constrain the length scales (the size of the energy-containing eddies) of motions using the observed projected velocity is presented in Section 5. A method to recover the 3D velocity PS from the 2D projected velocity field is discussed in Section 6. Discussion and conclusions are given in Sections 7 and 8 respectively.

## 2 Basic assumptions and models

We consider a spherically symmetric galaxy cluster which has a peaked X-ray emissivity profile.
The electron number density is described by the $\beta$-model profile

$$n_e(r)=\frac{n_0}{\left[1+\left(\frac{r}{r_c}\right)^2\right]^{\frac{3}{2}\beta}}, \quad (1)$$

where $n_0$ is the electron number density in the cluster center (normalization) and $r_c$ is the core radius. The $\beta$-model provides a reasonably good description of the observed surface brightness (Cavaliere & Fusco-Femiano, 1978) in the central regions of galaxy clusters. At large radii the $\beta$-model is not a good description of the surface brightness (see e.g. Vikhlinin et al., 2006); however, the simplicity of the model allows us to illustrate the method and make analytical calculations. We have chosen representative values of $\beta$ and $r_c$ for the demonstration of our analysis. The parameter $\beta$ varies from cluster to cluster (see, e.g., Chen et al., 2007), and the core radius can vary from a few kpc to a few hundred kpc. In order to better illustrate the main idea of the method, we consider cool-core clusters with a small core radius (it is necessary to have a gradient of surface brightness down to the smallest possible projected distances, see Section 4 for details).

We describe the line-of-sight component of the 3D velocity field as a Gaussian isotropic and homogeneous random field. This allows us to gauge whether useful statistics could in principle be obtained. However, there is no guarantee that this assumption applies to the velocity field in real clusters. E.g., Esquivel et al. (2007), using numerical simulations, have shown that in the case of supersonic turbulence in the ISM the non-Gaussianity causes some of the statistical approaches (based on the assumption of Gaussianity) to fail. The same authors demonstrated that for subsonic turbulence the Gaussianity assumption holds much better. This is encouraging, since in clusters we expect mostly subsonic turbulence. Nevertheless, the methods discussed here require numerical testing using galaxy clusters from cosmological simulations. We defer these tests to future work.

The power spectrum (PS) of the velocity field is described by a cored power law model (here and below we adopt the relation between a wavenumber and a spatial scale as $k=1/x$, without a factor of $2\pi$):

$$P_{3D}(k_x,k_y,k_z)=B\left(1+\frac{k_x^2+k_y^2+k_z^2}{k_m^2}\right)^{-\alpha/2}, \quad (2)$$

where $k_m$ is the break wavenumber (in our simple model $k_m$ characterizes the injection scale), $\alpha$ is the slope of the PS at $k\gg k_m$ (the inertial range) and $B$ is the PS normalization, which is defined so that the characteristic amplitude $A$ of the velocity fluctuations at a reference wavenumber $k_{ref}$ is fixed, i.e.

$$B=\frac{A^2}{4\pi k_{ref}^3\,P_{3D}(k_{ref})}. \quad (3)$$

The cored power law model of the PS is a convenient description for analytical calculations and at the same time resembles the widely used broken power law model.

Now let us specify the choice of the parameters $k_m$ and $\alpha$ in the model of the velocity PS. Cluster mergers, motions of galaxies and AGN feedback lead to turbulent motions with eddy sizes ranging from roughly Mpc near the virial radius down to a few tens of kpc near the cluster core (see, e.g., Sunyaev, Norman, & Bryan, 2003). For our analysis we vary the injection scale from 20 kpc ($k_m=1/20$ kpc$^{-1}$) to 2000 kpc ($k_m=1/2000$ kpc$^{-1}$). The parameter $\alpha$ - the slope of the PS - can be selected using standard arguments. If most of the kinetic energy is on large (injection) scales, i.e. the characteristic velocity decreases with increasing wavenumber, then $\alpha > 3$. At the same time, the eddy turnover time should not increase with the wavenumber $k$, which requires $\alpha \leq 5$. So we expect the slope of the PS to be in the range $3 \leq \alpha \leq 5$. We will use $\alpha=11/3$ (the slope of the Kolmogorov PS), among other values.
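To make the model concrete, here is a minimal Python sketch (mine, not from the paper) of the two ingredients above: the $\beta$-model profile of eq. (1) and the cored power-law PS of eq. (2). All parameter values are illustrative assumptions; the amplitude $A(k)=\sqrt{4\pi k^3 P_{3D}(k)}$ follows the normalization convention stated above.

```python
# Beta-model density and cored power-law velocity PS (illustrative parameters).
import numpy as np

def n_e(r, n0=1.0, r_c=100.0, beta=0.6):
    """Beta-model electron density, eq. (1); r and r_c in kpc (assumed units)."""
    return n0 * (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)

def P3D(k, B=1.0, k_m=1.0 / 500.0, alpha=11.0 / 3.0):
    """Cored power-law PS, eq. (2): flat below k_m, slope -alpha above (k = 1/x)."""
    return B * (1.0 + (k / k_m) ** 2) ** (-alpha / 2.0)

# Characteristic amplitude A(k) = sqrt(4 pi k^3 P3D(k)) peaks near k_m,
# i.e. most of the velocity power sits at the injection scale.
k = np.logspace(-4, 0, 200)                  # kpc^-1
A = np.sqrt(4 * np.pi * k ** 3 * P3D(k))
print("A(k) peaks at k = %.4f kpc^-1" % k[np.argmax(A)])
```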
## 3 Observables and 3D velocity power spectrum

Gas motions in relaxed galaxy clusters are predominantly subsonic, and to first approximation the width and centroid shift of lines measured with X-ray observatories contain the most essential information on the ICM velocity field (see, e.g., Inogamov & Sunyaev, 2003; Sunyaev, Norman, & Bryan, 2003). That is, we have information about: (i) the surface brightness in lines (Fig. 2, left panel), assuming an isothermal galaxy cluster (the effects of non-constant temperature and abundance of elements are discussed in Section 6); (ii) the emissivity-weighted projected velocity $V_{2D}$ (Fig. 2, middle panel) (in practice the measured projected velocity is averaged over some finite solid angle, see Section 5); (iii) the emissivity-weighted velocity dispersion $\sigma$ (Fig. 2, right panel), where $n_e$ denotes the electron number density and $V$ is the velocity component along the line of sight. Here the weighting denotes emissivity-weighted averaging along the line of sight, which we assume to be along the $z$ direction.

The relations between the PS of the 3D velocity field, the 2D projected velocity and the velocity dispersion for a line of sight with projected coordinates $(x,y)$ are the following:

$$\langle|\hat{V}_{2D}(k_x,k_y,x,y)|^2\rangle=\int P_{3D}(k_x,k_y,k_z)\,P_{EM}(k_z,x,y)\,dk_z \quad (4)$$

and

$$\langle\sigma^2(x,y)\rangle=\int P_{3D}(k_x,k_y,k_z)\left(1-P_{EM}(k_z,x,y)\right)dk_x\,dk_y\,dk_z, \quad (5)$$

where $\langle\cdot\rangle$ is the ensemble averaging over a number of realizations, $\langle|\hat{V}_{2D}|^2\rangle$ is the expectation value of the 2D PS of the observed projected velocity field, $P_{3D}$ is the PS of the 3D velocity field and $P_{EM}$ is the PS of the normalized emissivity distribution along the line of sight. For more details see Appendices A and B.

In Fig. 3 we illustrate eq. 5 for a simple spherically symmetric $\beta$-model of a galaxy cluster. The 3D velocity PS has a cored power law form (eq. 2), i.e. it is flat on large scales and has a Kolmogorov slope on smaller scales. The observed velocity dispersion averaged over 100 realizations is shown with dots; the error bars show the expected uncertainty in one measurement. The right hand side of eq. 5 is shown in red. The minor difference between the two curves at small radii is due to the finite resolution of the simulations.

Once we construct the map of the projected velocity $V_{2D}(x,y)$, one can also find the RMS velocity of the 2D field at each distance $r$ from the cluster center as

$$V_{RMS}(r)=\sqrt{\langle V_{2D}(x,y)^2\rangle_r-\langle V_{2D}(x,y)\rangle_r^2}, \quad (6)$$

where $\langle\cdot\rangle_r$ denotes the mean over a ring at distance $r$ from the center. Below we use the observed velocity dispersion and the RMS of the projected velocity field to constrain the power spectrum.
## 4 Structure function and observed velocity dispersion

Often a structure function of the velocity field is used instead of the power spectrum; it is defined as

$$SF(\Delta x)=\langle\left(V(x+\Delta x)-V(x)\right)^2\rangle, \quad (7)$$

where the averaging is over a number of pairs of points in space separated by a distance $\Delta x$. The line-of-sight velocity dispersion can be linked to the structure function. Indeed, since the emissivity peaks at the center of the cluster and declines with radius, the largest contribution to the total flux and to the line-of-sight velocity dispersion at projected distance $R$ from the center comes from a region whose size is comparable to $R$. The structure function and the observed velocity dispersion can be related to the PS (see Appendices C and D):

$$SF(x)=2\int_{-\infty}^{+\infty}P_{1D}(k_z)\left(1-\cos 2\pi k_z x\right)dk_z \quad (8)$$

and

$$\langle\sigma^2(R)\rangle=\int_{-\infty}^{\infty}P_{1D}(k_z)\left(1-P_{EM}(k_z)\right)dk_z, \quad (9)$$

where $P_{1D}$ is the expectation value of the 1D velocity PS and $P_{EM}$ is the PS of the normalized emissivity along the line of sight. Fig. 4 shows the integrands of eq. 8 and eq. 9, multiplied by the respective prefactors, for the line of sight near the cluster center (black curves) and at a larger projected distance from the center (red curves). The extra factor of 2 for the velocity dispersion integrand is introduced to compensate for the factor of 2 in front of expression 8 for the structure function. This becomes clearer if one considers the limits of these equations at large $x$ and $R$. When $x\to\infty$, $\cos 2\pi k_z x$ oscillates with high frequency over the relevant interval of $k_z$ and the mean value of $1-\cos 2\pi k_z x$ is 1. When $R\to\infty$, the emissivity distribution is very broad and $P_{EM}(k_z)$ is almost a $\delta$-function. Therefore,

$$\lim_{x\to\infty}SF(x)=2\int_{-\infty}^{\infty}P_{1D}(k_z)\,dk_z \quad (10)$$

and

$$\lim_{R\to\infty}\sigma^2(R)=\int_{-\infty}^{\infty}P_{1D}(k_z)\,dk_z. \quad (11)$$

From Fig. 4 it is clear that the integrands in eq. 8 and eq. 9 are very similar, suggesting that the observed velocity dispersion should correlate well with the structure function. The structure function and the velocity dispersion (eq. 47 and 48 respectively) are plotted in the left column of Fig. 6. We fixed the parameters of the cluster model and varied the slope and break of the power spectrum model (eq. 2). The relation of the SF and $\sigma^2$ is shown in the bottom left panel of Fig. 6. For a given $R$, $\sigma^2(R)$ is used for one axis, while the SF is plotted as a function of $l_{eff}$, where $l_{eff}$ is the effective length along the line of sight which provides the dominant contribution to the line flux. $l_{eff}$ is found from the condition that

$$\frac{\int_0^{l_{eff}}n_e^2\left(\sqrt{R^2+l^2}\right)dl}{\int_0^{\infty}n_e^2\left(\sqrt{R^2+l^2}\right)dl}\approx 0.5. \quad (12)$$

The relation between $l_{eff}$ and the projected distance $R$ depends on the model of the galaxy cluster, as shown in Fig. 5.

We then made multiple statistical realizations of the PS for a simple cluster model to estimate the uncertainties. The size of the box is 1 Mpc and the resolution is 2 kpc. We assume that the 3D PS of the velocity field has a cored power law form (eq. 43). We made 100 realizations of a Gaussian field with random phases and Gaussian-distributed amplitudes in Fourier space. Taking the inverse Fourier transform, we calculated one component of the 3D velocity field (the component along the line of sight) in the cluster. The structure function and the line-of-sight velocity dispersion are evaluated using the resulting velocity field. The right column in Fig. 6 shows the velocity dispersion along the line of sight and the structure function averaged over 100 realizations. The expected uncertainty in a single measurement of the velocity dispersion is shown with dotted curves. One can see that the overall shape and normalization of the SF and $\sigma^2$ are the same as predicted from the analytical expressions (left column of Fig. 6); however, there are minor differences (especially at small scales) due to the limited resolution of the simulations. The relation between $\sigma^2$ and the SF is in good agreement with the expected relation; however, the uncertainty in the measured velocity dispersion (due to the stochastic nature of the velocity field) is significant (see Section 6).
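The Monte-Carlo procedure described above can be sketched in a few lines of numpy. The following is an illustrative implementation under the paper's stated assumptions (Gaussian field with random phases drawn in Fourier space, $\beta$-model emissivity weighting); the grid size, box size, all model parameters and the field normalization are my own placeholder choices, and the velocity is in arbitrary units.

```python
# One realization of a Gaussian velocity field with a cored power-law PS,
# projected with emissivity weighting into a centroid map and a broadening map.
import numpy as np

N, L = 64, 1000.0                        # grid cells, box size in kpc (assumed)
k1 = np.fft.fftfreq(N, d=L / N)          # k = 1/x convention, kpc^-1
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
k2 = kx**2 + ky**2 + kz**2

k_m, alpha = 1.0 / 500.0, 11.0 / 3.0
P = (1.0 + k2 / k_m**2) ** (-alpha / 2.0)    # cored power-law PS, B = 1

rng = np.random.default_rng(0)
# Gaussian-distributed complex amplitudes with variance P (random phases)
F = (rng.normal(size=k2.shape) + 1j * rng.normal(size=k2.shape)) * np.sqrt(P / 2)
v = np.fft.ifftn(F).real                 # one velocity component, arbitrary units

# Beta-model emissivity n_e^2, with z (axis 2) as the line of sight
x = (np.arange(N) - N / 2 + 0.5) * (L / N)
r = np.sqrt(x[:, None, None]**2 + x[None, :, None]**2 + x[None, None, :]**2)
eps = (1.0 + (r / 100.0) ** 2) ** (-3 * 0.6)     # n_e^2 for beta=0.6, r_c=100
w = eps / eps.sum(axis=2, keepdims=True)         # normalized emissivity

V2D = (w * v).sum(axis=2)                        # centroid-shift map
sigma2 = (w * v**2).sum(axis=2) - V2D**2         # line-broadening map
print(V2D.std(), np.sqrt(sigma2.mean()))         # centroid scatter vs broadening
```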
## 5 Length scales of motions and observed RMS velocity of projected velocity field

Let us now consider the RMS of the projected velocity field. During observations one obtains spectra from a region whose minimum size is set by the angular resolution and/or the sensitivity of the instrument. The RMS velocity at a certain position $(x,y)$ for random realizations of the velocity field is defined as (see Appendix E)

$$\langle V_{RMS}^2\rangle=\int P_{3D}(k_x,k_y,k_z)\,P_{EM}(k_z)\left(1-P_{SH}(k_x,k_y)\right)d^3k, \quad (13)$$

where $P_{SH}$ is the PS of a mask, the mask being defined as zero outside and unity inside the region from which the spectrum is extracted (see Appendix E for details). The velocity dispersion (i.e. the line broadening) measured from the same region is

$$\langle\sigma^2\rangle=\int P_{3D}(k_x,k_y,k_z)\left(1-P_{EM}(k_z)\,P_{SH}(k_x,k_y)\right)d^3k. \quad (14)$$

Therefore the ratio is

$$\frac{\langle V_{RMS}^2\rangle}{\langle\sigma^2\rangle}=\frac{\int P_{3D}(k_x,k_y,k_z)\,P_{EM}(k_z)\left(1-P_{SH}(k_x,k_y)\right)d^3k}{\int P_{3D}(k_x,k_y,k_z)\left(1-P_{EM}(k_z)\,P_{SH}(k_x,k_y)\right)d^3k}, \quad (15)$$

which can be used as an additional proxy of the length scales of gas motions. This ratio is mostly sensitive to the break $k_m$ of the cored power law model of the PS. Basically, for a given line of sight, characterized by an effective length $l_{eff}$, the small scale motions mostly contribute to the line broadening, while larger scale motions predominantly contribute to the RMS of the projected velocity field. Fig. 7 shows the ratio and its uncertainty calculated for different values of the parameter $k_m$. $V_{RMS}$ and $\sigma$ are averaged over rings at distance $R$ from the cluster center. Clearly, if $k_m$ is large then all motions are on small scales and the ratio is small. The larger the injection scale, the more power is at large scales and the larger is the ratio; likewise, the stronger is the increase of the ratio with distance and the more prominent is the peak (see Fig. 7). Here we assume that the full map of the projected velocity field is available. Clearly, the uncertainties will increase if the data are available only for several lines of sight rather than for the full map.

Looking at eq. 15 it is easy to predict the behavior of the ratio at small and large projected distances $R$. Let us specify the shape of the area, namely assume that we measure the velocity in circles of radius $R$ around the cluster center. When $R$ is small, the region is very small, $P_{SH}\approx 1$ over a broad range of wavenumbers, and the ratio tends to zero. At large $R$, $P_{SH}\to 0$ and (since the emissivity distribution along the line of sight at large $R$ is broad) eq. 15 becomes

$$\frac{\langle V_{RMS}^2\rangle(R)}{\langle\sigma^2\rangle(R)}=\frac{\int P_{3D}(k_x,k_y,k_z)\,P_{EM}(k_z)\,d^3k}{\int P_{3D}\,d^3k} \quad (16)$$

and it is a decreasing function of $R$. The sensitivity of the ratio to the slope of the power spectrum is modest: there are only changes in normalization, while the overall shape is the same.

## 6 Recovering 3D velocity power spectrum from 2D projected velocity field

Mapping of the projected velocity field provides the most direct way of estimating the 3D velocity field PS. The 2D and 3D PS are related according to eq. 4, which we re-write as

$$P_{2D}(k)=\int P_{3D}\left(\sqrt{k^2+k_z^2}\right)P_{EM}(k_z,x,y)\,dk_z, \quad (17)$$

where $k=\sqrt{k_x^2+k_y^2}$. This equation can be written as

$$P_{2D}(k)=\int_0^{1/l_{eff}}P_{3D}\left(\sqrt{k^2+k_z^2}\right)P_{EM}(k_z,x,y)\,dk_z+\int_{1/l_{eff}}^{\infty}P_{3D}\left(\sqrt{k^2+k_z^2}\right)P_{EM}(k_z,x,y)\,dk_z. \quad (18)$$

The contribution of the second term to the integral is small, since $P_{EM}$ is small at $k_z>1/l_{eff}$. In the limit $k\gg 1/l_{eff}$ (at a given projected distance) the expression reduces to

$$P_{2D}(k)\approx P_{3D}(k)\int P_{EM}(k_z,x,y)\,dk_z, \quad (19)$$

i.e. the 2D PS is essentially equal to the 3D PS of the velocity field apart from a normalization constant, which is easily measured for a cluster. We show below that this simple relation (19) provides an excellent approximation to the full expression (17) for Coma-like clusters with a flat surface brightness core. For peaked (cool core) clusters the normalization depends on the projected distance from the cluster center, since $P_{EM}$ changes significantly with distance.

It is convenient to use characteristic scale-dependent amplitudes of the velocity field variations rather than the PS. The amplitudes for the 3D and 2D spectra are defined as

$$A_{3D}(k)=\sqrt{P_{3D}(k)\,4\pi k^3} \quad (20)$$

$$A_{2D}(k)=\sqrt{P_{2D}(k)\,2\pi k^2}. \quad (21)$$

In these notations the relation 19 between the PS transforms to

$$A_{2D}(k)=A_{3D}(k)\sqrt{\frac{1}{2}\frac{\int P_{EM}(k_z,x,y)\,dk_z}{k}}. \quad (22)$$

The integral can be estimated as $\int P_{EM}\,dk_z\approx 1/l_{eff}$, since the largest contribution comes from $k_z\lesssim 1/l_{eff}$. Eq. 22 becomes

$$A_{2D}(k)=A_{3D}(k)\sqrt{\frac{1}{2}\frac{1}{l_{eff}\,k}}=A_{3D}(k)\sqrt{\frac{1}{2}\frac{1}{N_{edd}}}. \quad (23)$$

The essence of this relation is that the amplitude of the 3D velocity fluctuations is attenuated in the 2D projected velocity field by a factor of order $\sqrt{2N_{edd}}$, where $N_{edd}=l_{eff}\,k$ is the number of independent eddies which fit into the effective length along the line of sight.

We illustrate the above relation for the case of the Coma cluster. The density distribution in Coma can be characterized by a $\beta$-model with appropriate values of $\beta$ and the core radius. In Fig. 8 we plot the ratio of the two sides evaluated using equations (19) and (17) for a number of 3D PS models, calculated at two projected distances from the Coma center. One can see that on spatial scales of less than 1 Mpc equation (22) is fully sufficient. The variations of the relation for different projected distances (projected distances from 0 to 300 kpc were used) affect only the normalization of the relation and can easily be accounted for. With ASTRO-H the 2D velocity field in Coma can be mapped with a resolution corresponding to a few tens of kpc. Mapping the central region of Coma would require about 36 pointings. For practical reasons it may be more feasible to make a sparse map (e.g. two perpendicular stripes) and to evaluate the PS from it (e.g. by computing the correlation function or using the method described in Arévalo et al., 2011).
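A hedged numerical illustration of eqs. (12) and (23): the sketch below (not from the paper) computes $l_{eff}$ for an assumed $\beta$-model and the attenuation factor $1/\sqrt{2N_{edd}}$ at an assumed eddy scale. All parameter values are placeholders.

```python
# Effective line-of-sight length (eq. 12) and the projection attenuation (eq. 23).
import numpy as np
from scipy.integrate import quad

beta, r_c = 0.6, 100.0           # assumed beta-model parameters, kpc

def emis(l, R):
    """Emissivity n_e^2 at line-of-sight depth l, projected radius R."""
    return (1.0 + (R**2 + l**2) / r_c**2) ** (-3.0 * beta)

def l_eff(R):
    """Depth containing half of the total line flux (eq. 12)."""
    total, _ = quad(emis, 0, np.inf, args=(R,))
    grid = np.linspace(0.0, 20 * r_c, 4000)
    cum = np.cumsum(emis(grid, R)) * (grid[1] - grid[0])
    return grid[np.searchsorted(cum, 0.5 * total)]

k = 1.0 / 50.0                   # an assumed 50 kpc eddy scale (k = 1/x)
for R in (0.0, 100.0, 300.0):    # projected radii in kpc
    N_edd = l_eff(R) * k
    print(R, l_eff(R), 1.0 / np.sqrt(2.0 * N_edd))   # attenuation of A_3D
```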
## 7 Discussion

### 7.1 Limiting cases of small and large scale motions

By measuring the characteristic amplitude of the projected mean velocity ($V_{RMS}$) and the velocity dispersion ($\sigma$), we can distinguish whether the turbulence is dominated by small or large scale motions. Clearly, motions on scales much smaller than the effective length along the line of sight near the cluster center can only contribute to the line broadening. This sets the characteristic value of the lowest spatial scale which can be measured. The largest measurable scale is set by the maximum distance from the cluster center where the line parameters can be accurately measured without prohibitively long exposure time. This defines the range of scales which can be probed with these measurements.

The crucial issue in the measurements is the “sample variance” of the measured quantities caused by the stochastic nature of turbulence. We can expect two limiting cases (see Fig. 7).

A: Small scale motions. In the case of small scale motions (i.e. an injection scale much smaller than $l_{eff}$), one expects $\sigma$ to be independent of radius and $V_{RMS}\ll\sigma$. The dispersion $\sigma$ is expected to have low sample variance and can be measured accurately even for a single line of sight, provided sufficient exposure time. Measurements of $V_{RMS}$ are strongly affected by sample variance and depend on the geometry of the measured map of the projected velocity. If the projected velocity is measured at only two positions, then the uncertainty is of the order of the value itself and the ratio gives only a lower limit on the injection scale. We note that in this limit of small scale motions the assumption of a uniform and homogeneous Gaussian field can be relaxed: the measured values of the line broadening simply reflect the total variance of the velocity along the given line of sight, while the possibility of determining the spatial scales of the motions is limited. Variations of $\sigma$ with radius will simply reflect the change of the characteristic velocity amplitude.

B: Large scale motions. If most of the turbulent energy is associated with large scales (i.e. an injection scale comparable to or larger than $l_{eff}$), $V_{RMS}$ is expected to increase with $R$ and to be comparable to $\sigma$. In this case sample variance affects both $V_{RMS}$ and $\sigma$. Mapping the whole area (as opposed to measurements at a few positions) would help to reduce the sample variance. Knowing the shape of $V_{RMS}(R)$ and an estimate of the injection scale from the ratio, we can constrain the slope of the power spectrum.
### 7.2 Effect of thermal broadening

When measuring the velocity dispersion, one should account for the line broadening due to the thermal motions of ions: the thermal broadening should be subtracted from the measured width of the line. The broadening of the line is defined as

$$\Delta E=\frac{E_0}{c}\sqrt{2\left(\sigma_{therm}^2+\sigma_{turb}^2\right)}, \quad (24)$$

where $\sigma_{turb}$ is the width due to turbulent motions and $\sigma_{therm}$ is the thermal broadening for ions with atomic weight $A$. The FWHM of the corresponding line is larger by the usual Gaussian factor.

The observed emission lines in the X-ray spectra of galaxy clusters correspond to heavy elements such as Fe, Ca and S. Because of the large atomic weight, the contribution of thermal broadening is small even for modest amplitudes of turbulent velocities. Indeed, the atomic weight of iron is 56, and the thermal width of the iron line is only a few eV for typical cluster temperatures of a few keV. At the same time, gas motion at the sound speed would cause a shift of the line energy by 40 eV.

As an example, we calculated the expected broadening of the He-like iron line at 6.7 keV for the Perseus cluster, assuming a Kolmogorov-like PS of the velocity field and varying the total RMS of the velocity field in one dimension. The model of the Perseus cluster was taken from Churazov et al. (2004) and modified at large distances according to Suzaku observations at the edge of the cluster (Simionescu et al., 2011), i.e. the electron number density is

$$n_e(r)=\frac{4.68\cdot10^{-2}}{\left[1+\left(\frac{r}{56}\right)^2\right]^{1.8}}+\frac{4.86\cdot10^{-3}}{\left[1+\left(\frac{r}{194}\right)^2\right]^{0.87}} \quad (25)$$

and the temperature profile is

$$T(r)=7\,\frac{1+\left(\frac{r}{69}\right)^3}{2.3+\left(\frac{r}{69}\right)^3}\left(1+\frac{r}{5000}\right)^{-1}. \quad (26)$$

The abundance of heavy elements is assumed to be constant at 0.5 relative to Solar (Anders & Grevesse, 1989). Fig. 9 shows the calculated width of the 6.7 keV line for various 1D RMS velocities. The thermal broadening is shown with the dashed magenta curve. One can see that thermal broadening starts to dominate the broadening due to motions only when the turbulent velocity falls below roughly the thermal width. However, the lack of resonant scattering signatures in the spectrum of the Perseus cluster suggests that the expected velocity is higher than 400 km/s in the center of the Perseus cluster (Churazov et al., 2004). Astro-H will have an energy resolution of 7 eV at 6.7 keV; therefore, broadening due to gas motions with 400 km/s will be easy to observe (Fig. 9) in the Perseus cluster.
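The thermal-versus-turbulent comparison of eq. (24) is easy to reproduce. The sketch below (standard physics, with an illustrative assumed temperature) evaluates the width of the 6.7 keV Fe line for a few turbulent velocities.

```python
# Line width from eq. (24): thermal broadening of He-like Fe vs turbulence.
import numpy as np

c = 2.998e5            # speed of light, km/s
kT = 5.0               # assumed gas temperature, keV
A = 56                 # atomic weight of iron
m_p = 938272.0         # proton rest energy, keV/c^2
E0 = 6700.0            # line energy, eV

sigma_therm = c * np.sqrt(kT / (A * m_p))    # ~92 km/s for iron at 5 keV
for sigma_turb in (0.0, 200.0, 400.0):       # km/s
    dE = E0 / c * np.sqrt(2 * (sigma_therm**2 + sigma_turb**2))
    print(sigma_turb, round(dE, 1), "eV")    # 400 km/s dwarfs the thermal width
```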
### 7.3 The effect of radial variations of the gas temperature and metallicity

The analysis described in the previous sections assumed an isothermal, spherically symmetric $\beta$-model cluster with emissivity proportional to $n_e^2$. Clearly, real clusters are more complicated, even if we keep the assumption of spherical symmetry. First of all, the gas temperature and metallicity often vary with radius. These variations are reflected in the weighting function which relates the 3D velocity field and the observables. To verify how strongly these assumptions affect the results, we calculated the velocity dispersion for the detailed model of the Perseus cluster (see above). We assumed both a constant abundance profile and a more realistic peaked abundance profile, taken from Suzaku (Simionescu et al., 2011) and Chandra/XMM observations:

$$Z(r)=0.4\,\frac{2.2+\left(\frac{r}{80}\right)^2}{1+\left(\frac{r}{80}\right)^2}. \quad (27)$$

We calculated the emissivity of the He-like iron line at 6.7 keV and used this emissivity as the weighting function for the calculation of the expected projected velocity and velocity dispersion. Fig. 10 shows the velocity dispersion along one line of sight calculated for the simplest model of the emissivity and for the more complicated models described above. One can see that the mean value and the uncertainties are very similar in all cases. Clearly, averaging the velocity dispersion over a ring will further decrease the uncertainty of a single measurement (Fig. 10, black dashed curve).

### 7.4 Influence of density fluctuations

The hot gas density in galaxy clusters is strongly peaked towards the centre. However, besides the main trend there are density fluctuations, which could contribute to the observed fluctuations of the projected velocity. Let us split the density field into two components

$$n=n_0+\delta n, \quad (28)$$

where $n_0$ corresponds to the smooth global profile and $\delta n$ represents the fluctuating part. Neglecting terms of order $\delta n^2$, the emissivity-weighted projected velocity is

$$V_{2D}(x,y)=\frac{\int n_0^2(z)\,V(x,y,z)\,dz+2\int n_0(z)\,\delta n\,V(x,y,z)\,dz}{\int n_0^2(z)\,dz}. \quad (29)$$

The analysis of X-ray surface brightness fluctuations in the Coma cluster (e.g. Churazov et al., 2012) has shown that the amplitude of the density fluctuations is modest. Assuming that the same holds for other clusters, the second term in eq. 29 is small and the contribution of the density fluctuations to the projected velocity can be neglected.

### 7.5 Measurements requirements

To illustrate the most basic requirements for instruments to measure the ICM velocity field, let us consider two examples of a rich cluster and an individual elliptical galaxy, representing the low temperature end of the cluster-group-galaxy sequence:

1. the Perseus cluster, observed in the FeXXV line at 6.7 keV, at a distance of 72 Mpc;
2. the elliptical galaxy NGC5813, observed in the OVIII line at 0.654 keV, at a distance of 32.2 Mpc.

Table 1 shows the FWHM of these lines calculated if only thermal broadening is taken into account, the shift of the line energy for typical gas motions, and the desirable angular resolution of the instrument. Future X-ray missions, such as Astro-H and ATHENA, will have energy resolutions at the few-eV level. Such energy resolution is sufficient to measure the broadening of lines in hot systems like galaxy clusters. Cold systems, like elliptical galaxies, require even better energy resolution. However, turbulence can still be measured using the resonant scattering effect (see e.g. Werner et al., 2009; Zhuravleva et al., 2011) or grating spectrometer observations. An angular resolution of the order of an arcminute should be sufficient to study the most basic characteristics of the ICM velocity field in nearby galaxy clusters, while for elliptical galaxies a resolution at the level of a few arcsec (comparable with the Chandra resolution) would be needed.

The Astro-H observatory will combine an energy resolution of 7 eV with a field of view and angular resolution that make it possible to measure the shift and broadening of lines as a function of projected distance from the center in nearby clusters. E.g., for a plausible RMS velocity of gas motions in the Perseus cluster, a modest exposure is enough to measure the profiles of the mean velocity and the velocity dispersion with useful statistical uncertainty (90% confidence) in a 200 kpc stripe (7 independent pointings and 28 independent measurements in pixels) centered on the cluster center (the estimates were done using the current version of the Astro-H response). In order to measure the velocity with the same accuracy at larger distances from the center, e.g. at 500 kpc and 1 Mpc, correspondingly longer exposures would be required.

## 8 Conclusions

Various methods of constraining the velocity power spectrum through the observed shift of the line centroid and the line broadening are discussed.
• Changes of the line broadening with projected distance reflects the increase of the spread in the velocities with distance, closely resembling the behavior of the structure function of the velocity field. • Another useful quantity is the ratio of the characteristic amplitude of the projected velocity field to the line broadening. Since the projected velocity field mainly depends on large scale motions, while the line broadening is more sensitive to small scale motions, this ratio is a useful diagnostics of the shape of the 3D velocity field power spectrum. • Projected 2D velocity field power spectrum can be easily converted into 3D power spectrum. This conversion is especially simple for cluster with an extended flat core in the surface brightness (like Coma cluster). Analytical expressions are derived for a -model clusters, assuming homogeneous isotropic Gaussian 3D velocity field. The importance of the sample variance, caused by the stochastic nature of the turbulence, for the observables is emphasized. ## 9 Acknowledgements IZ, EC and AK would like to thank Kavli Institute for Theoretical Physics (KITP) in Santa Barbara for hospitality during workshop ”Galaxy clusters: crossroads of astrophysics and cosmology” in March-April 2011, where part of the work presented here was carried out. This research was supported in part by the National Science Foundation under Grant No. NSF PHY05-51164. IZ would like to thank the International Max Planck Research School on Astrophysics (IMPRS) in Garching. ## References • Arévalo et al. (2011) Arévalo P., Churazov E., Zhuravleva I., Hernández-Monteagudo C., Revnivtsev M., 2011, ApJ, submitted • Anders & Grevesse (1989) Anders E., Grevesse N., 1989, GeCoA, 53, 197 • Beresnyak & Lazarian (2009) Beresnyak A., Lazarian A., 2009, ApJ, 702, 1190 • Brunetti (2006) Brunetti G., 2006, AN, 327, 615 • Brunetti & Lazarian (2011) Brunetti G., Lazarian A., 2011, MNRAS, 412, 817 • Brunt et al. (2003) Brunt C. M., Heyer M. H., Vázquez-Semadeni E., Pichardo B., 2003, ApJ, 595, 824 • Cassano & Brunetti (2005) Cassano R., Brunetti G., 2005, MNRAS, 357, 1313 • Cavaliere & Fusco-Femiano (1978) Cavaliere A., Fusco-Femiano R., 1978, A&A, 70, 677 • Chen et al. (2007) Chen Y., Reiprich T. H., Böhringer H., Ikebe Y., Zhang Y.-Y., 2007, A&A, 466, 805 • Chepurnov & Lazarian (2009) Chepurnov A., Lazarian A., 2009, ApJ, 693, 1074 • Chepurnov et al. (2010) Chepurnov A., Lazarian A., Stanimirović S., Heiles C., Peek J. E. G., 2010, ApJ, 714, 1398 • Churazov et al. (2004) Churazov E., Forman W., Jones C., Sunyaev R., Böhringer H., 2004, MNRAS, 347, 29 • Churazov et al. (2008) Churazov E., Forman W., Vikhlinin A., Tremaine S., Gerhard O., Jones C., 2008, MNRAS, 388, 1062 • Churazov et al. (2012) Churazov E., et al., 2012, accepted to MNRAS, arXiv:1110.5875 • Churazov et al. (2010) Churazov E., Zhuravleva I., Sazonov S., Sunyaev R., 2010, SSRv, 157, 193 • de Plaa et al. (2012) de Plaa J., Zhuravleva I., Werner N., Kaastra J. S., Churazov E., Smith R. K., Raassen A. J. J., Grange Y. G., 2012, accepted to A&A, arXiv:1201.1910 • Dobler et al. (2003) Dobler W., Haugen N. E., Yousef T. A., Brandenburg A., 2003, PhRvE, 68, 026304 • Dolag et al. (2005) Dolag K., Vazza F., Brunetti G., Tormen G., 2005, MNRAS, 364, 753 • Elmegreen & Scalo (2004) Elmegreen B. G., Scalo J., 2004, ARA&A, 42, 211 • Esquivel et al. (2007) Esquivel A., Lazarian A., Horibe S., Cho J., Ossenkopf V., Stutzki J., 2007, MNRAS, 381, 1733 • Esquivel et al. 
(2003) Esquivel A., Lazarian A., Pogosyan D., Cho J., 2003, MNRAS, 342, 325 • Heyer & Brunt (2004) Heyer M. H., Brunt C. M., 2004, ApJ, 615, L45 • Heyer et al. (2009) Heyer M., Krawczyk C., Duval J., Jackson J. M., 2009, ApJ, 699, 1092 • Iapichino et al. (2011) Iapichino L., Schmidt W., Niemeyer J. C., Merklein J., 2011, MNRAS, 483 • Inogamov & Sunyaev (2003) Inogamov N. A., Sunyaev R. A., 2003, AstL, 29, 791 • Jeltema et al. (2008) Jeltema T. E., Hallman E. J., Burns J. O., Motl P. M., 2008, ApJ, 681, 167 • Kitsionas et al. (2009) Kitsionas S., et al., 2009, A&A, 508, 541 • Kleiner & Dickman (1983) Kleiner S. C., Dickman R. L., 1983, BAAS, 15, 990 • Kleiner & Dickman (1985) Kleiner S. C., Dickman R. L., 1985, ApJ, 295, 466 • Kolmogorov (1941) Kolmogorov A., 1941, DoSSR, 30, 301 • Landau & Lifshitz (1966) Landau L. D., Lifshitz E. M., 1966, hydr.book, • Larson (1981) Larson R. B., 1981, MNRAS, 194, 809 • Lau, Kravtsov, & Nagai (2009) Lau E. T., Kravtsov A. V., Nagai D., 2009, ApJ, 705, 1129 • Lazarian (2009) Lazarian A., 2009, SSRv, 143, 357 • Lazarian & Esquivel (2003) Lazarian A., Esquivel A., 2003, ApJ, 592, L37 • Lazarian & Pogosyan (2000) Lazarian A., Pogosyan D., 2000, ApJ, 537, 720 • Lazarian & Pogosyan (2008) Lazarian A., Pogosyan D., 2008, ApJ, 686, 350 • Mathiesen, Evrard, & Mohr (1999) Mathiesen B., Evrard A. E., Mohr J. J., 1999, ApJ, 520, L21 • Myers et al. (1978) Myers P. C., Ho P. T. P., Schneps M. H., Chin G., Pankonin V., Winnberg A., 1978, ApJ, 220, 864 • Münch (1958) Münch G., 1958, RvMP, 30, 1035 • Nagai & Lau (2011) Nagai D., Lau E. T., 2011, ApJ, 731, L10 • Nagai, Vikhlinin, & Kravtsov (2007) Nagai D., Vikhlinin A., Kravtsov A. V., 2007, ApJ, 655, 98 • Norman & Bryan (1999) Norman M. L., Bryan G. L., 1999, LNP, 530, 106 • Padoan, Goodman, & Juvela (2003) Padoan P., Goodman A. A., Juvela M., 2003, ApJ, 588, 881 • Padoan et al. (2009) Padoan P., Juvela M., Kritsuk A., Norman M. L., 2009, ApJ, 707, L153 • Rasia et al. (2006) Rasia E., et al., 2006, MNRAS, 369, 2013 • Rosolowsky et al. (1999) Rosolowsky E. W., Goodman A. A., Wilner D. J., Williams J. P., 1999, ApJ, 524, 887 • Rytov et al. (1988) Rytov S.M., Kravtsov Yu.A., Tatarskii V.I., 1988, Principles of statistical radiophysics, Vol.2. Springer-Verlag, Berlin. • Sanders, Fabian, & Smith (2011) Sanders J. S., Fabian A. C., Smith R. K., 2011, MNRAS, 410, 1797 • Schuecker et al. (2004) Schuecker P., Finoguenov A., Miniati F., Böhringer H., Briel U. G., 2004, A&A, 426, 387 • Simionescu et al. (2011) Simionescu A., et al., 2011, Sci, 331, 1576 • Smith et al. (2001) Smith R. K., Brickhouse N. S., Liedahl D. A., Raymond J. C., 2001, ApJ, 556, L91 • Stanimirović & Lazarian (2001) Stanimirović S., Lazarian A., 2001, ApJ, 551, L53 • Sunyaev, Norman, & Bryan (2003) Sunyaev R. A., Norman M. L., Bryan G. L., 2003, AstL, 29, 783 • Vazza et al. (2011) Vazza F., Brunetti G., Gheller C., Brunino R., Brüggen M., 2011, A&A, 529, A17 • Vikhlinin et al. (2006) Vikhlinin A., Kravtsov A., Forman W., Jones C., Markevitch M., Murray S. S., Van Speybroeck L., 2006, ApJ, 640, 691 • von Hoerner (1951) von Hoerner S., 1951, ZA, 30, 17 • Werner et al. (2009) Werner N., Zhuravleva I., Churazov E., Simionescu A., Allen S. W., Forman W., Jones C., Kaastra J. S., 2009, MNRAS, 398, 23 • Zhuravleva et al. (2010) Zhuravleva I. V., Churazov E. M., Sazonov S. Y., Sunyaev R. A., Forman W., Dolag K., 2010, MNRAS, 403, 129 • Zhuravleva et al. (2011) Zhuravleva I. V., Churazov E. M., Sazonov S. Y., Sunyaev R. 
A., Dolag K., 2011, AstL, 37, 141

## Appendix A 3D velocity power spectrum and projected velocity field

Let us assume that the line-of-sight component of the 3D velocity field is described by a Gaussian (isotropic and homogeneous) random field. We assume that the centroid shift and the width of lines contain the most essential information on the velocity field. Here and below we adopt the relation $k=1/x$ without a factor of $2\pi$ (see Section 2 for details).

The projected 2D velocity along the line of sight (the observed centroid shift of the emission line) in the direction $z$ is

$$V_{2D}(x,y)=\frac{\int V_{3D}(x,y,z)\,n_e^2(x,y,z)\,dz}{\int n_e^2(x,y,z)\,dz}, \quad (30)$$

where $V_{3D}$ is the line-of-sight component of the 3D velocity field and $n_e$ is the electron number density. Denoting the normalized emissivity along the line of sight at a certain position with coordinates $(x,y)$ as $\epsilon(z)$, the previous relation can be re-written as

$$V_{2D}(x,y)=\int V_{3D}(x,y,z)\,\epsilon(z)\,dz. \quad (31)$$

Applying the convolution theorem, one can find the Fourier transform of $V_{2D}$ along the line of sight as

$$\int \hat{V}_{3D}(k_x,k_y,k_{z1})\,\hat{\epsilon}(k_z-k_{z1})\,dk_{z1}, \quad (32)$$

where $\hat{V}_{3D}$ and $\hat{\epsilon}$ are the Fourier transforms of the 3D velocity field and the normalized emissivity respectively. The projection-slice theorem states that

$$\hat{f}_{2D}(k_x,k_y)=\hat{f}_{3D}(k_x,k_y,0). \quad (33)$$

Accounting for 32 and 33, we can write the Fourier transform of the 2D velocity field as

$$\hat{V}_{2D}(k_x,k_y)=\int \hat{V}_{3D}(k_x,k_y,k_{z1})\,\hat{\epsilon}^{*}(k_{z1})\,dk_{z1}, \quad (34)$$

where * denotes conjugation. Averaging over a number of realizations, we find the power spectrum of the projected mean velocity

$$\langle|\hat{V}_{2D}(k_x,k_y)|^2\rangle=\left\langle\left|\int \hat{V}_{3D}(k_x,k_y,k_{z1})\,\hat{\epsilon}^{*}(k_{z1})\,dk_{z1}\right|^2\right\rangle. \quad (35)$$

The right part of the equation above can be re-written as

$$\int\left\langle\hat{V}_{3D}(k_x,k_y,k_{z1})\,\hat{V}^{*}_{3D}(k_x,k_y,k_{z2})\right\rangle\hat{\epsilon}^{*}(k_{z1})\,\hat{\epsilon}(k_{z2})\,dk_{z1}\,dk_{z2}. \quad (36)$$

Since the phases are random, all cross terms after averaging over a number of realizations will give 0 if $k_{z1}\neq k_{z2}$. Therefore,

$$\langle|\hat{V}_{2D}(k_x,k_y)|^2\rangle=\int |\hat{V}_{3D}(k_x,k_y,k_{z1})|^2\,|\hat{\epsilon}(k_{z1})|^2\,dk_{z1}. \quad (37)$$

Denoting the power spectra of the 3D velocity field and the normalized emissivity as $P_{3D}$ and $P_{EM}$ respectively, the final expression is

$$\langle|\hat{V}_{2D}(k_x,k_y)|^2\rangle=\int P_{3D}(k_x,k_y,k_z)\,P_{EM}(k_z)\,dk_z. \quad (38)$$

## Appendix B 3D velocity power spectrum and projected velocity dispersion

The projected mean velocity dispersion for the line of sight with coordinates $(x,y)$, averaged over a number of realizations, is defined as

$$\langle\sigma^2(x,y)\rangle=\left\langle\int V_{3D}^2(x,y,z)\,\epsilon(z)\,dz\right\rangle-\left\langle\left(\int V_{3D}(x,y,z)\,\epsilon(z)\,dz\right)^2\right\rangle. \quad (39)$$

It can be re-written as

$$\langle\sigma^2(x,y)\rangle=\int \left\langle V_{3D}^2(x,y,z)\right\rangle\epsilon(z)\,dz-\left\langle V_{2D}^2(x,y)\right\rangle. \quad (40)$$

Expanding $V_{3D}$ and $V_{2D}$ in Fourier series, averaging over realizations and keeping the non-zero cross terms gives

$$\langle\sigma^2(x,y)\rangle=\int |\hat{V}_{3D}(k_x,k_y,k_z)|^2\,dk_x\,dk_y\,dk_z\int\epsilon(z)\,dz-\int \langle|\hat{V}_{2D}(k_x,k_y)|^2\rangle\,dk_x\,dk_y. \quad (41)$$

Since the emissivity along the line of sight is normalized so that $\int\epsilon(z)\,dz=1$, and accounting for eq. 38, the final expression for the projected velocity dispersion is

$$\langle\sigma^2(x,y)\rangle=\int P_{3D}(k_x,k_y,k_z)\left(1-P_{EM}(k_z)\right)dk_x\,dk_y\,dk_z. \quad (42)$$

## Appendix C Relation between structure function and cored power law 3D power spectrum

Let us assume that the 3D isotropic and homogeneous power spectrum (PS) of the velocity field is described as

$$P_{3D}(k_x,k_y,k_z)=B\left(1+\frac{k_x^2+k_y^2+k_z^2}{k_m^2}\right)^{-\alpha/2}, \quad (43)$$

where $k_m$ is the wavenumber at which the model has a break (e.g. an injection scale in a turbulence model), $\alpha$ is the slope of the PS at $k\gg k_m$ and $B$ is the PS normalization, defined so that the characteristic amplitude $A$ of the velocity fluctuations at a reference wavenumber $k_{ref}$ is fixed, i.e.

$$B=\frac{A^2}{4\pi k_{ref}^3\,P_{3D}(k_{ref})}. \quad (44)$$

Integrating 43 over $k_x$ and $k_y$, one can find the 1D PS

$$P_{1D}(k_z)=\frac{2\pi B k_m^2}{\alpha-2}\left(1+\frac{k_z^2}{k_m^2}\right)^{-\alpha/2+1}. \quad (45)$$

The structure function (SF) is related to the 1D PS by the transformation (Rytov et al., 1988)

$$SF(x)=2\int_{-\infty}^{+\infty}P_{1D}(k_z)\left(1-\cos 2\pi k_z x\right)dk_z. \quad (46)$$

Substituting 45 into 46 and assuming $\alpha>3$ yields

$$SF(x)=\frac{4Bk_m^3\left(\pi^{3/2}\,\Gamma(\xi)-2\pi^{\alpha/2}\,k_m^{\xi}\,x^{\xi}\,K(\xi,2\pi xk_m)\right)}{(\alpha-2)\,\Gamma\left(\frac{\alpha}{2}-1\right)}, \quad (47)$$

where $\xi=\frac{\alpha}{2}-\frac{3}{2}$ and $K(\xi,\cdot)$ is the modified Bessel function of the second kind of order $\xi$.
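Eq. (46) is straightforward to evaluate numerically. The following sketch (not from the paper; the finite integration cutoff and all parameter values are my assumptions) computes the SF for the cored power-law $P_{1D}$ of eq. (45) and shows the saturation of eq. (10) beyond the injection scale.

```python
# Structure function from the 1D PS (eqs. 45-46), with a finite k cutoff.
import numpy as np

B, k_m, alpha = 1.0, 1.0 / 500.0, 11.0 / 3.0    # assumed values; k in kpc^-1

kz = np.linspace(0.0, 0.5, 1_000_001)           # cutoff at 0.5; small tail neglected
dk = kz[1] - kz[0]
P1 = 2 * np.pi * B * k_m**2 / (alpha - 2) * (1 + (kz / k_m) ** 2) ** (1 - alpha / 2)

def SF(x):
    """Eq. (46); factor 4 = 2 (definition) x 2 (integrand symmetry in kz)."""
    return 4 * np.sum(P1 * (1 - np.cos(2 * np.pi * kz * x))) * dk

SF_inf = 4 * np.sum(P1) * dk                    # saturation level, eq. (10)
for x in (10.0, 100.0, 1000.0):                 # separations in kpc
    print(x, SF(x) / SF_inf)                    # -> 1 beyond the 1/k_m = 500 kpc scale
```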
## Appendix D Relation between velocity dispersion along the line of sight and power spectrum

$P_{1D}$ and $P_{3D}$ are related as $P_{1D}(k_z)=\int P_{3D}(k_x,k_y,k_z)\,dk_x\,dk_y$; therefore eq. 42 can be re-written as

$$\langle\sigma^2(R)\rangle=\int_{-\infty}^{\infty}P_{1D}(k_z)\left(1-P_{EM}(k_z)\right)dk_z, \quad (48)$$

where $R$ is the projected distance from the center. If the electron number density is described by the $\beta$-model with normalization $n_0$ and core radius $R_c$,

$$n_e(r)=n_0\left(1+\frac{R^2+z^2}{R_c^2}\right)^{-\frac{3}{2}\beta}, \quad (49)$$

then the emissivity is

$$\epsilon(r)=\frac{R_c^{6\beta}}{(C+z^2)^{3\beta}}, \quad (50)$$

where $C=R_c^2+R^2$ and we assume the normalization $n_0=1$. The Fourier transform of the emissivity is

$$\hat{\epsilon}(k_z)=\int_{-\infty}^{\infty}\frac{R_c^{6\beta}}{(C+x^2)^{3\beta}}\cos(2\pi k_z x)\,dx, \quad (51)$$

where the terms with $\sin$ are zero since the integrand is symmetric. Dividing 51 by the total flux and assuming $6\beta>1$ (so that the total flux converges), one can find the weight as

$$W(k_z)=\frac{2\,C^{\zeta/2}\,k_z^{\zeta}\,\pi^{\zeta}}{\Gamma(\zeta)}\,K(-\zeta,2\sqrt{C}k_z\pi), \quad (52)$$

where $\zeta=3\beta-\frac{1}{2}$ and $K(-\zeta,\cdot)$ is the modified Bessel function of the second kind.

## Appendix E Ratio of observed RMS velocity to observed velocity dispersion

Let us assume that the spectrum is extracted from a region of given shape and area. The RMS velocity from this spectrum, averaged over a number of realizations, is

$$\langle V_{RMS}^2\rangle=\left\langle\frac{\int_{shape}V_{2D}^2(x,y)\,dx\,dy}{\int_{shape}dx\,dy}\right\rangle-\left\langle\left(\frac{\int_{shape}V_{2D}(x,y)\,dx\,dy}{\int_{shape}dx\,dy}\right)^2\right\rangle, \quad (53)$$

where $\langle\cdot\rangle$ denotes averaging over realizations. Accounting for eq. 31, we can re-write the first term in the above equation as

$$\left\langle\frac{\int_{shape}\left(\int V_{3D}(x,y,z)\,\epsilon(z)\,dz\right)^2 dx\,dy}{\int_{shape}dx\,dy}\right\rangle=\frac{1}{\int_{shape}dx\,dy}\int_{shape}\int \hat{V}_{3D}(k_{x1},k_{y1},k_{z1})\,e^{i2\pi k_{x1}x}e^{i2\pi k_{y1}y}F_{EM}(k_{z1})\,\hat{V}_{3D}(k_{x2},k_{y2},k_{z2})\,e^{i2\pi k_{x2}x}e^{i2\pi k_{y2}y}F_{EM}(k_{z2})\,d^3k_1\,d^3k_2\,dx\,dy, \quad (54)$$

where $F_{EM}$ is the Fourier transform of the emissivity along the line of sight. Averaging over a number of realizations will leave non-zero terms only if $\vec{k}_1=-\vec{k}_2$. Therefore, the first term in eq. 53 is

$$\frac{\int_{shape}\int P_{3D}(k_x,k_y,k_z)\,P_{EM}(k_z)\,d^3k\,dx\,dy}{\int_{shape}dx\,dy}=\int P_{3D}(k_x,k_y,k_z)\,P_{EM}(k_z)\,dk_x\,dk_y\,dk_z. \quad (55)$$

The second term in eq. 53 can be written, by the same argument, as

$$\left\langle\left(\frac{\int_{shape}\int \hat{V}_{3D}(k_x,k_y,k_z)\,e^{i2\pi xk_x}e^{i2\pi yk_y}F_{EM}(k_z)\,d^3k\,dx\,dy}{\int_{shape}dx\,dy}\right)^2\right\rangle=\int P_{3D}(k_x,k_y,k_z)\,P_{EM}(k_z)\,P_{SH}(k_x,k_y)\,d^3k, \quad (56)$$

where $P_{SH}$ is the power spectrum of the mask of the region. Subtracting 56 from 55 yields eq. 13.
# Calculating solutions with python

Friday, 11:15-12:05, 2016-01-22, Osmond 216

Goals of this laboratory...

• Learn how to do basic math in python
• Introduce you to the concept of a python module
• Show you the numpy, scipy, and matplotlib modules
• Show you how to perform simple calculations and plotting

Start up Canopy, open the editor, and work through the examples below to see what you can learn.

## Basic math

In Laboratory 1, you saw a little about how to do math calculations in python. But let's go through the basics to make sure we're all on the same page.

• Addition. >>> 1 + 2
• Multiplication. >>> 2 * 3
• In expressions involving both addition and multiplication, multiplication takes precedence. >>> 1 + 2 * 3 This could return 9 or 7, depending on whether you do the addition or the multiplication first, but python always does the multiplication first. Python has a precedence order for all operations. Use parentheses to override the default precedence. >>> (1 + 2) * 3
• Subtraction is treated like addition of negative numbers. >>> 1 - 2 - 3
• Exponential powers are represented with double asterisks. >>> 2**3 >>> 3**3 >>> 9**0.5 + 1
• The percent sign is used to find the remainder of a division. >>> 17 % 5 >>> 7.5 % 2
• Division is a little unconventional in python version 2, so always be careful with it: dividing two integers truncates, so 10 / 3 returns 3, while 1 / 0.2 returns a float. >>> 1 / 0.2 >>> 10 / 3
• Python also has complex numbers built in. Imaginary numbers are post-fixed with a "j" to indicate imaginariness. So, the square root of -1 is represented by 1j or -1j. >>> 1j**2 >>> (1-1j) * (1+1j) >>> 1j**4 - 1 >>> z = 1 + 2j >>> z.real >>> z.imag
• Python also can do arbitrary-precision integer arithmetic! >>> 2**200 - 1 >>> 4**4**4
• Python also has several relational operators that we can use to test relationships between numbers and variables. >>> 2 < 3 >>> 2 > 3 >>> 2 == 3 Note that you use double-equals to test equality in python. Single equals is used only for assignment, and will raise an error if used incorrectly. >>> 2 = 3

## Modules

The python language itself (like all good modern languages) is simple, using only about 30 keywords. Python's power comes from the rich collection of well-documented "modules" that are available to perform complex tasks. A module is a python script that contains functions (and classes) to perform related tasks, like a library in C. Perhaps the simplest example of this is the "math" module. You can load a module into your python interpreter's global namespace and shell scripts with the "import" keyword. >>> import math You can use help to see the contents of the module you have imported. >>> help(math) To use one of the functions from the module, you put the module's name as a prefix, like >>> math.exp(0.) >>> math.pi Function names can also be imported directly into the working namespace, although this can be dangerous and is discouraged. >>> from math import sin, pi, exp, log >>> sin(0), sin(pi/2) >>> exp(0), exp(1), log(exp(1)) All other modules work the same way, and provide a variety of capabilities. Here are some common modules.
• sys - interpreter and shell functions
• os - operating system functions like path manipulation and file testing
• itertools - tools for fancy for-loops
• string - basic algorithms for manipulating strings
• pickle - tool for saving variables to a file or reloading them
• re - for regular expressions
• urllib2 - for http scripting
• turtle - turtle graphics, à la 1970s Logo

## Scientific computing modules

For this next part of the lab, it will be useful to start with a clean environment, so go up to the "Run" menu and select "Restart Kernel". Now, there is a standard stack of modules we use for scientific computing called the "scipy stack". Parts of this stack of modules are automatically loaded into Canopy's global namespace. These are numpy (which creates fast arrays), scipy (which supplies all sorts of functions and algorithms for calculations), and matplotlib (for making plots and pictures). If you know matlab already, many things in python are similar, but there are also some differences. If you write a script or use the standard python interpreter, you can load these modules into the global namespace with the following three lines of code.

>>> from numpy import *
>>> from scipy import *
>>> from matplotlib.pyplot import *

Now, let's see how we can use the scipy stack to solve some problems you have seen before.

• Suppose we have the linear system $$x - 2 y = -3, \; y - 2 z = 0, \; -y + z = -1$$ which we want to solve for $$x,y,z$$. In matrix form $$Ax=b$$,

$\begin{gather*} \begin{bmatrix} 1 & -2 & 0 \\ 0 & 1 & -2 \\ 0 & -1 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -3 \\ 0 \\ -1 \end{bmatrix} \end{gather*}$

To solve this, we first make arrays containing the matrix and vector we know.

>>> A = array([[1,-2,0],[0,1,-2],[0,-1,1]])
>>> b = array([[-3],[0],[-1]])
>>> print A
>>> print b

Now, the best methods for solving linear systems turn out to be much fancier than what we taught you in Math 220, but they are programmed into the linalg submodule of scipy. As you type the command below, note that when you type "(", a help dialog automatically pops up reminding you how linalg.solve works so you can enter things correctly -- in this case, the matrix is the first argument and the vector is the second argument.

>>> x = linalg.solve(A, b)

To check our answer, we can calculate $$A x - b$$ and make sure its entries are all zero or close to zero. But to do matrix multiplication of arrays like they are matrices, we have to use the .dot() method instead of standard multiplication.

>>> A.dot(x) - b

If you did A*x instead, this would return the elementwise (Hadamard-style, with broadcasting) product of the matrix and vector.

>>> A*x

linalg has several useful linear algebra algorithms, including eigvals() for calculating the eigenvalues of a matrix.

>>> linalg.eigvals(A)

• In python, variables can point to generic objects, be they integers, floats, strings, lists, or even functions. This last case is particularly useful, as it allows us to create and pass functions around to other functions. "def" is the usual way to create functions (see above), but for short functions there's a sweet trick called lambda-forms, in honor of Alonzo Church's lambda calculus. (This is a badly-chosen keyword, unfortunately, since it clashes with standard math notation frequently -- one of the few flaws in the python language.)

>>> f = lambda x : x**2
>>> f(1)
>>> f(2)
>>> map(f, range(0,10))

In the code above, range(0,10) returns a list of integers [0,1,2,...,9].
In python, intervals are usually semi-closed, containing their lower bound but not their upper bound. The map(f, ...) function is a functional programming trick meaning "apply the function f to each of the things in the subsequent sequence".

• Using these lambda functions, we can do things like integration using standard algorithms. In calculus, we learned $\int_{0}^{1} x^2 dx = \left. \frac{x^3}{3} \right|_0^1 = \frac{1}{3}$ Scipy has a submodule called integrate for approximating integrals.

>>> from scipy import integrate
>>> integrate.quad(lambda x : x**2, 0, 1)

The answer returns a tuple with two values. The first is the estimate of the integral, which you'll see is almost correct. The second is the estimated error of the quadrature (hence the name, quad). Feel free to experiment. There are actually several algorithms listed, though some important ones are also missing.

• There's lots more in here. For example, see if you can find all three roots of the cubic polynomial equation $$x^3 - 10 x^2 + 31 x - 30 = 0$$ using the function roots(). You can use the function g = lambda x : x**3 - 10*x**2 + 31*x - 30 to TEST if your solutions are correct.

## Plotting

The scipy stack has a useful set of plotting tools. For a simple example, here is a use of the rational parametric form of a circle.

>>> t = linspace(-10,10,64)
>>> x = (t*t-1)/(t*t+1)
>>> y = 2*t/(t*t+1)
>>> plot(x, y, 'ro-', cos(t), sin(t), 'k:')

The linspace(-10,10,64) function creates an array of 64 evenly spaced points from -10 to 10, including the endpoints. The extra strings in the plot command specify how to draw a line. Can you guess what each character means? To clear the figure, you can use the command clf(). Of course, plots without labels are often more confusing than helpful. We can add titles and labels with the functions title(), ylabel(), xlabel(), and text().

>>> xlabel('x-values')
>>> title('A circle', fontsize=25)

You can save your figure to a png image file with the "savefig" command.

>>> cd Desktop
>>> savefig('mycircle.png')

Of course, vector formats for images are better, so we usually use portable document format (.pdf), encapsulated postscript (.eps), or scalable vector graphic format (.svg). For a second example, let's see if we can draw a cycloid. (A consolidated script collecting these examples appears below.)

>>> figure(2)
>>> clf()
>>> r = 1.
>>> theta = linspace(0, 8 * pi, 257)
>>> x = r*(theta - sin(theta))
>>> y = r*(1 - cos(theta))
>>> plot(x,y,'r-')

The figure function is used to create a new figure or select an old figure for the plot.
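For reference, here is a consolidated version of the lab's examples as a standalone script. This is a sketch assuming a standard numpy/scipy/matplotlib install (outside Canopy you must do the imports yourself); it uses explicit module prefixes rather than the star-imports above, and it is written for Python 3, so adjust the print calls if you run it under the lab's Python 2.

```python
# Consolidated sketch of the lab's examples (assumes numpy, scipy, matplotlib).
import numpy as np
from scipy import linalg, integrate
import matplotlib.pyplot as plt

# 1. Solve the linear system A x = b and verify the residual is ~0.
A = np.array([[1, -2, 0], [0, 1, -2], [0, -1, 1]])
b = np.array([[-3], [0], [-1]])
x = linalg.solve(A, b)
print(A.dot(x) - b)                 # should be all (near-)zeros

# 2. Integrate x^2 from 0 to 1; quad returns (estimate, error-estimate).
print(integrate.quad(lambda x: x**2, 0, 1))   # ~ (0.3333..., tiny error)

# 3. Roots of x^3 - 10 x^2 + 31 x - 30, tested with g.
g = lambda x: x**3 - 10 * x**2 + 31 * x - 30
for root in np.roots([1, -10, 31, -30]):
    print(root, g(root))            # g(root) should be numerically zero

# 4. The cycloid, with labels, saved to a file.
r = 1.0
theta = np.linspace(0, 8 * np.pi, 257)
plt.plot(r * (theta - np.sin(theta)), r * (1 - np.cos(theta)), 'r-')
plt.xlabel('x-values')
plt.title('A cycloid', fontsize=25)
plt.savefig('cycloid.png')
```

The residual in step 1 and the values g(root) in step 3 give quick numerical checks that the solver and roots() did what we expect.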
Drake MultibodyTreeTopology Class Reference

Data structure to store the topological information associated with an entire MultibodyTree.

#include <multibody/multibody_tree/multibody_tree_topology.h>

## Public Member Functions

• MultibodyTreeTopology (): Default constructor; creates an empty, invalid topology.
• bool operator== (const MultibodyTreeTopology &other) const: Returns true if all members of this topology are exactly equal to the members of other.
• int get_num_bodies () const: Returns the number of bodies in the multibody tree.
• int get_num_frames () const: Returns the number of physical frames in the multibody tree.
• int get_num_mobilizers () const: Returns the number of mobilizers in the multibody tree.
• int get_num_body_nodes () const: Returns the number of tree nodes; this must equal the number of bodies.
• int get_num_force_elements () const: Returns the number of force elements in the multibody tree.
• int get_tree_height () const: Returns the number of tree levels in the topology.
• const FrameTopology& get_frame (FrameIndex index) const: Returns a constant reference to the corresponding FrameTopology given the FrameIndex.
• const BodyTopology& get_body (BodyIndex index) const: Returns a constant reference to the corresponding BodyTopology given a BodyIndex.
• const MobilizerTopology& get_mobilizer (MobilizerIndex index) const: Returns a constant reference to the corresponding MobilizerTopology given a MobilizerIndex.
• const BodyNodeTopology& get_body_node (BodyNodeIndex index) const: Returns a constant reference to the corresponding BodyNodeTopology given a BodyNodeIndex.
• std::pair<BodyIndex, FrameIndex> add_body (): Creates and adds a new BodyTopology to this MultibodyTreeTopology.
• FrameIndex add_frame (BodyIndex body_index): Creates and adds a new FrameTopology, associated with the given body_index, to this MultibodyTreeTopology.
• MobilizerIndex add_mobilizer (FrameIndex in_frame, FrameIndex out_frame, int num_positions, int num_velocities): Creates and adds a new MobilizerTopology connecting the inboard and outboard multibody frames identified by indexes in_frame and out_frame, respectively.
• ForceElementIndex add_force_element (): Creates and adds a new ForceElementTopology to this MultibodyTreeTopology.
• void Finalize (): Must be called by MultibodyTree::Finalize() after all topological elements in the tree (corresponding to joints, bodies, force elements, constraints) were added and before any computations are performed.
• bool is_valid () const: Returns true if Finalize() was already called on this topology.
• int get_num_positions () const: Returns the total number of generalized positions in the model.
• int get_num_velocities () const: Returns the total number of generalized velocities in the model.
• int get_num_states () const: Returns the total size of the state vector in the model.
• void GetKinematicPathToWorld (BodyNodeIndex from, std::vector<BodyNodeIndex> *path_to_world) const: Given a node in this topology, specified by its BodyNodeIndex from, computes the kinematic path formed by all the nodes in the tree that connect from with the root (corresponding to the world).
Implements CopyConstructible, CopyAssignable, MoveConstructible, MoveAssignable:

• MultibodyTreeTopology (const MultibodyTreeTopology &) = default
• MultibodyTreeTopology& operator= (const MultibodyTreeTopology &) = default
• MultibodyTreeTopology (MultibodyTreeTopology &&) = default
• MultibodyTreeTopology& operator= (MultibodyTreeTopology &&) = default

## Detailed Description

Data structure to store the topological information associated with an entire MultibodyTree.

## Constructor & Destructor Documentation

MultibodyTreeTopology () inline

Default constructor; creates an empty, invalid topology. The minimum valid topology for a minimum valid MultibodyTree contains at least the BodyTopology for the world. The topology for the world body does not get added until MultibodyTree construction, which creates a world body and adds it to the tree. The copy and move constructors and assignment operators are defaulted.

## Member Function Documentation

std::pair<BodyIndex, FrameIndex> add_body () inline

Creates and adds a new BodyTopology to this MultibodyTreeTopology. The BodyTopology will be assigned new, unique BodyIndex and FrameIndex values.

Exceptions: std::logic_error if Finalize() was already called on this topology.

Returns: a std::pair<BodyIndex, FrameIndex> containing the indexes assigned to the new BodyTopology.

ForceElementIndex add_force_element () inline

Creates and adds a new ForceElementTopology to this MultibodyTreeTopology.

Exceptions: std::logic_error if Finalize() was already called on this topology.

Returns: the ForceElementIndex assigned to the new ForceElementTopology.

FrameIndex add_frame (BodyIndex body_index) inline

Creates and adds a new FrameTopology, associated with the given body_index, to this MultibodyTreeTopology.

Exceptions: std::logic_error if Finalize() was already called on this topology.

Returns: the FrameIndex assigned to the new FrameTopology.

MobilizerIndex add_mobilizer (FrameIndex in_frame, FrameIndex out_frame, int num_positions, int num_velocities) inline

Creates and adds a new MobilizerTopology connecting the inboard and outboard multibody frames identified by indexes in_frame and out_frame, respectively. The created topology will correspond to that of a Mobilizer with num_positions and num_velocities.

Exceptions: std::runtime_error if either in_frame or out_frame do not index frame topologies in this MultibodyTreeTopology; std::runtime_error if in_frame == out_frame; std::runtime_error if in_frame and out_frame are already connected by another mobilizer (more than one mobilizer between two frames is not allowed); std::logic_error if Finalize() was already called on this topology.

Returns: the MobilizerIndex assigned to the new MobilizerTopology.

void Finalize () inline

This method must be called by MultibodyTree::Finalize() after all topological elements in the tree (corresponding to joints, bodies, force elements, constraints) were added and before any computations are performed. It essentially compiles all the necessary "topological information", i.e. how bodies, joints and any other elements connect with each other, and performs all the required pre-processing to enable computations at a later stage.
This preprocessing includes:

• sorting in BFT (breadth-first traversal) order for fast recursions through the tree,
• computation of state sizes and of pool sizes within cache entries,
• computation of index maps to retrieve either state or cache entries for each multibody element.

If the finalize stage is successful, this topology is validated, meaning it is up to date after this call. No more multibody tree elements can be added after a call to Finalize().

Exceptions: std::logic_error if users attempt to call this method on an already finalized topology.

See also: is_valid()

const BodyTopology& get_body (BodyIndex index) const inline

Returns a constant reference to the corresponding BodyTopology given a BodyIndex.

const BodyNodeTopology& get_body_node (BodyNodeIndex index) const inline

Returns a constant reference to the corresponding BodyNodeTopology given a BodyNodeIndex.

const FrameTopology& get_frame (FrameIndex index) const inline

Returns a constant reference to the corresponding FrameTopology given the FrameIndex.

const MobilizerTopology& get_mobilizer (MobilizerIndex index) const inline

Returns a constant reference to the corresponding MobilizerTopology given a MobilizerIndex.

int get_num_bodies () const inline

Returns the number of bodies in the multibody tree. This includes the "world" body, and therefore the minimum number of bodies after MultibodyTree::Finalize() will always be one, not zero.

int get_num_body_nodes () const inline

Returns the number of tree nodes. This must equal the number of bodies.

int get_num_force_elements () const inline

Returns the number of force elements in the multibody tree.

int get_num_frames () const inline

Returns the number of physical frames in the multibody tree.

int get_num_mobilizers () const inline

Returns the number of mobilizers in the multibody tree. Since the "world" body does not have a mobilizer, the number of mobilizers will always equal the number of bodies minus one.

int get_num_positions () const inline

Returns the total number of generalized positions in the model.

int get_num_states () const inline

Returns the total size of the state vector in the model.

int get_num_velocities () const inline

Returns the total number of generalized velocities in the model.

int get_tree_height () const inline

Returns the number of tree levels in the topology.

void GetKinematicPathToWorld (BodyNodeIndex from, std::vector<BodyNodeIndex> *path_to_world) const inline

Given a node in this topology, specified by its BodyNodeIndex from, this method computes the kinematic path formed by all the nodes in the tree that connect from with the root (corresponding to the world).

Parameters:
[in] from: a node in the tree topology for which the path to the root (world) is to be computed.
[out] path_to_world: a std::vector of body node indexes that on output will contain the path to the root of the tree. Forward iteration (from element 0 to element size()-1) of path_to_world will traverse all nodes in the tree starting at the root along the path to from. That is, forward iteration starts with the root of the tree at path_to_world[0] and ends with from at path_to_world.back().
On input, path_to_world must be a valid pointer. On output this vector will be resized, only if needed, to store as many elements as the level (BodyNodeTopology::level) of body node from plus one (so that the root node can be included in the path).

bool is_valid () const inline

Returns true if Finalize() was already called on this topology.

See also: Finalize()

MultibodyTreeTopology& operator= (MultibodyTreeTopology &&) default
MultibodyTreeTopology& operator= (const MultibodyTreeTopology &) default

bool operator== (const MultibodyTreeTopology &other) const inline

Returns true if all members of this topology are exactly equal to the members of other.

The documentation for this class was generated from multibody/multibody_tree/multibody_tree_topology.h.
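To see how these calls compose, here is a minimal sketch. It uses only the method names and signatures documented above; the drake::multibody namespace and the use of C++17 structured bindings are assumptions, and the variable names are illustrative.

```cpp
#include <multibody/multibody_tree/multibody_tree_topology.h>

// Sketch only: the namespace is assumed; method names and signatures
// are the ones documented above.
using drake::multibody::MultibodyTreeTopology;

int main() {
  MultibodyTreeTopology topology;  // default-constructed: empty and invalid

  // add_body() returns the new body's index and its body-frame index.
  auto [body1, frame1] = topology.add_body();
  auto [body2, frame2] = topology.add_body();
  (void)body1; (void)body2;  // body indexes unused in this sketch

  // Connect the two body frames with a mobilizer that grants one generalized
  // position and one generalized velocity (revolute-like).
  topology.add_mobilizer(frame1, frame2, /* num_positions = */ 1,
                         /* num_velocities = */ 1);

  // Compile the topological information; no elements can be added afterwards.
  topology.Finalize();

  // is_valid() reports whether Finalize() has been called.
  return topology.is_valid() ? 0 : 1;
}
```

Per the Finalize() documentation above, MultibodyTree itself normally drives these calls (and adds the world body during its own construction), so direct use like this is mainly illustrative.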
# Jean-François Mertens

Jean-François Mertens (11 March 1946 – 17 July 2012) was a Belgian game theorist and mathematical economist. [1]

• Born: March 11, 1946, Antwerp, Belgium
• Died: July 17, 2012 (aged 66) [1]
• Nationality: Belgian
• Alma mater: Université Catholique de Louvain (Docteur ès Sciences, 1970)
• Awards: Econometric Society Fellow; von Neumann Lecturer of the Game Theory Society
• Fields: Game theory, mathematical economics
• Doctoral advisor: Jacques Neveu
• Influences: Robert Aumann, Reinhard Selten, John Harsanyi, John von Neumann
• Influenced: Claude d'Aspremont, Bernard De Meyer, Amrita Dhillon, Françoise Forges, Jean Gabszewicz, Srihari Govindan, Abraham Neyman, Anna Rubinchik, Sylvain Sorin

Mertens contributed to economic theory in regard to order-book market games, cooperative games, noncooperative games, repeated games, epistemic models of strategic behavior, and refinements of Nash equilibrium (see solution concept). In cooperative game theory he contributed to the solution concepts called the core and the Shapley value. Regarding repeated games and stochastic games, Mertens's 1982 [2] and 1986 [3] survey articles, and his 1994 [4] survey co-authored with Sylvain Sorin and Shmuel Zamir, are compendiums of results on this topic, including his own contributions. Mertens also made contributions to probability theory [5] and published articles on elementary topology. [6] [7]

## Epistemic models

Mertens and Zamir [8] [9] implemented John Harsanyi's proposal to model games with incomplete information by supposing that each player is characterized by a privately known type that describes his feasible strategies and payoffs as well as a probability distribution over other players' types. They constructed a universal space of types in which, subject to specified consistency conditions, each type corresponds to the infinite hierarchy of his probabilistic beliefs about others' probabilistic beliefs. They also showed that any subspace can be approximated arbitrarily closely by a finite subspace, which is the usual tactic in applications. [10]

## Repeated games with incomplete information

Repeated games with incomplete information were pioneered by Aumann and Maschler. [11] [12] Two of Jean-François Mertens's contributions to the field are the extensions of repeated two-person zero-sum games with incomplete information on both sides, for both (1) the type of information available to players and (2) the signalling structure. [13]

• (1) Information: Mertens extended the theory from the independent case, where the private information of the players is generated by independent random variables, to the dependent case, where correlation is allowed.
• (2) Signalling structures: the standard signalling theory, where after each stage both players are informed of the previous moves played, was extended to deal with general signalling structures where after each stage each player gets a private signal that may depend on the moves and on the state.

In those set-ups Jean-François Mertens provided an extension of the characterization of the minmax and maxmin value for the infinite game in the dependent case with state-independent signals. [14] Additionally, with Shmuel Zamir, [15] Jean-François Mertens showed the existence of a limiting value.
Such a value can be thought of either as the limit of the values $v_n$ of the $n$-stage games, as $n$ goes to infinity, or as the limit of the values $v_\lambda$ of the $\lambda$-discounted games, as agents become more patient and $\lambda \to 1$. A building block of Mertens and Zamir's approach is the construction of an operator, now simply referred to as the MZ operator in the field in their honor. In continuous time (differential games with incomplete information), the MZ operator becomes an infinitesimal operator at the core of the theory of such games. [16] [17] [18] As the unique solution of a pair of functional equations, the limit value, Mertens and Zamir showed, may be a transcendental function, unlike the maxmin or the minmax (the value in the complete-information case). Mertens also found the exact rate of convergence in the case of games with incomplete information on one side and general signalling structure. [19] A detailed analysis of the speed of convergence of the n-stage (finitely repeated) game value to its limit has profound links to the central limit theorem and the normal law, as well as to the maximal variation of bounded martingales. [20] [21] Attacking the study of the difficult case of games with state-dependent signals and without recursive structure, Mertens and Zamir introduced new tools based on an auxiliary game, reducing the set of strategies to a core that is "statistically sufficient." [22] [23]

Collectively, Jean-François Mertens's contributions with Zamir (and also with Sorin) provide the foundation for a general theory of two-person zero-sum repeated games that encompasses stochastic and incomplete-information aspects, and in which concepts of wide relevance are deployed, for example reputation and bounds on rational levels for the payoffs, as well as tools like the splitting lemma, signalling, and approachability. While in many ways Mertens's work here goes back to the von Neumann roots of game theory, with its zero-sum two-person set-up, its vitality and innovations with wider application have been pervasive.

## Stochastic games

Stochastic games were introduced by Lloyd Shapley in 1953. [24] The first paper studied the discounted two-person zero-sum stochastic game with finitely many states and actions, and demonstrated the existence of a value and of stationary optimal strategies. The study of the undiscounted case evolved over the following three decades, with solutions of special cases by Blackwell and Ferguson in 1968 [25] and Kohlberg in 1974. The existence of an undiscounted value in a very strong sense, both a uniform value and a limiting-average value, was proved in 1981 by Jean-François Mertens and Abraham Neyman. [26] The study of the non-zero-sum case with general state and action spaces attracted much attention, and Mertens and Parthasarathy [27] proved a general existence result under the condition that the transitions, as a function of the state and actions, are norm-continuous in the actions.

## Market games: limit price mechanism

Mertens had the idea to use linear competitive economies as an order book (trading) to model limit orders and to generalize double auctions to a multivariate set-up. [28] Acceptable relative prices of players are conveyed by their linear preferences; money can be one of the goods, and it is fine for agents to have positive marginal utility for money in this case (after all, agents are really just orders!).
In fact this is the case for most orders in practice. More than one order (and corresponding order-agent) can come from the same actual agent. In equilibrium, the good sold must have been sold at a relative price, compared to the good bought, no less than the one implied by the utility function. Goods brought to the market (quantities in the order) are conveyed by initial endowments. Limit orders are represented as follows: the order-agent brings one good to the market and has non-zero marginal utilities in that good and another one (money or numeraire). An at-market sell order will have a zero utility for the good sold at market and a positive one for money or the numeraire. Mertens clears orders, creating a matching engine, by using the competitive equilibrium, in spite of most usual interiority conditions being violated for the auxiliary linear economy. Mertens's mechanism provides a generalization of Shapley–Shubik trading posts and has the potential of a real-life implementation with limit orders across markets, rather than with just one specialist in one market.

## Shapley value

The diagonal formula in the theory of non-atomic cooperative games elegantly attributes the Shapley value of each infinitesimal player as his marginal contribution to the worth of a perfect sample of the population of players, averaged over all possible sample sizes. Such a marginal contribution has been most easily expressed in the form of a derivative, leading to the diagonal formula formulated by Aumann and Shapley. This is the historical reason why some differentiability conditions were originally required to define the Shapley value of non-atomic cooperative games. By first exchanging the order of taking the "average over all possible sample sizes" and taking such a derivative, Jean-François Mertens uses the smoothing effect of the averaging process to extend the applicability of the diagonal formula. [29] This trick alone works well for majority games (represented by a step function applied to the percentage of the population in the coalition). Exploiting this commutation idea of taking averages before taking the derivative even further, Jean-François Mertens extends the approach by looking at invariant transformations and taking averages over those before taking the derivative. Doing so, Mertens extends the diagonal formula to a much larger space of games, defining a Shapley value at the same time. [30] [31]

## Refinements and Mertens-stable equilibria

Solution concepts that are refinements [32] of Nash equilibrium have been motivated primarily by arguments for backward induction and forward induction. Backward induction posits that a player's optimal action now anticipates the optimality of his and others' future actions. The refinement called subgame perfect equilibrium implements a weak version of backward induction, and increasingly stronger versions are sequential equilibrium, perfect equilibrium, quasi-perfect equilibrium, and proper equilibrium, where the latter three are obtained as limits of perturbed strategies. Forward induction posits that a player's optimal action now presumes the optimality of others' past actions whenever that is consistent with his observations. Forward induction [33] is satisfied by a sequential equilibrium for which a player's belief at an information set assigns probability only to others' optimal strategies that enable that information to be reached. In particular, since completely mixed Nash equilibria are sequential, such equilibria, when they exist, satisfy both forward and backward induction.
In his work Mertens manages for the first time to select Nash equilibria that satisfy both forward and backward induction. The method is to let this feature be inherited from perturbed games that are forced to have completely mixed strategies, and the goal is achieved only with Mertens-stable equilibria, not with the simpler Kohlberg–Mertens equilibria. Elon Kohlberg and Mertens [34] emphasized that a solution concept should be consistent with an admissible decision rule. Moreover, it should satisfy the invariance principle that it should not depend on which among the many equivalent representations of the strategic situation as an extensive-form game is used. In particular, it should depend only on the reduced normal form of the game obtained after elimination of pure strategies that are redundant because their payoffs for all players can be replicated by a mixture of other pure strategies. Mertens [35] [36] emphasized also the importance of the small-worlds principle that a solution concept should depend only on the ordinal properties of players' preferences, and should not depend on whether the game includes extraneous players whose actions have no effect on the original players' feasible strategies and payoffs.

Kohlberg and Mertens tentatively defined a set-valued solution concept called stability for games with finite numbers of pure strategies that satisfies admissibility, invariance, and forward induction, but a counterexample showed that it need not satisfy backward induction; viz. the set might not include a sequential equilibrium. Subsequently, Mertens [37] [38] defined a refinement, also called stability and now often called a set of Mertens-stable equilibria, that has several desirable properties:

• Admissibility and Perfection: All equilibria in a stable set are perfect, hence admissible.
• Backward Induction and Forward Induction: A stable set includes a proper equilibrium of the normal form of the game that induces a quasi-perfect and sequential equilibrium in every extensive-form game with perfect recall that has the same normal form. A subset of a stable set survives iterative elimination of weakly dominated strategies and of strategies that are inferior replies at every equilibrium in the set.
• Invariance and Small Worlds: The stable sets of a game are the projections of the stable sets of any larger game in which it is embedded while preserving the original players' feasible strategies and payoffs.
• Decomposition and Player Splitting: The stable sets of the product of two independent games are the products of their stable sets. Stable sets are not affected by splitting a player into agents such that no path through the game tree includes actions of two agents.

For two-player games with perfect recall and generic payoffs, stability is equivalent to just three of these properties: a stable set uses only undominated strategies, includes a quasi-perfect equilibrium, and is immune to embedding in a larger game. [39]

A stable set is defined mathematically by (in brief) the essentiality of the projection map from a closed connected neighborhood in the graph of the Nash equilibria over the space of perturbed games obtained by perturbing players' strategies toward completely mixed strategies. This definition entails more than the property that every nearby game has a nearby equilibrium. Essentiality requires further that no deformation of the projection maps to the boundary, which ensures that perturbations of the fixed-point problem defining Nash equilibria have nearby solutions.
This is apparently necessary to obtain all the desirable properties listed above.

## Social choice theory and relative utilitarianism

A social welfare function (SWF) maps profiles of individual preferences to social preferences over a fixed set of alternatives. In a seminal paper, Arrow (1950) [40] showed the famous "Impossibility Theorem": there does not exist an SWF that satisfies a very minimal system of axioms: Unrestricted Domain, Independence of Irrelevant Alternatives, the Pareto criterion, and Non-dictatorship. A large literature documents various ways to relax Arrow's axioms to get possibility results. Relative Utilitarianism (RU) (Dhillon and Mertens, 1999) [41] is an SWF that consists of normalizing individual utilities between 0 and 1 and adding them, and is a "possibility" result derived from a system of axioms very close to Arrow's original ones but modified for the space of preferences over lotteries. Unlike classical Utilitarianism, RU does not assume cardinal utility or interpersonal comparability. Starting from individual preferences over lotteries, which are assumed to satisfy the von Neumann–Morgenstern axioms (or equivalent), the axiom system uniquely fixes the interpersonal comparisons. The theorem can be interpreted as providing an axiomatic foundation for the "right" interpersonal comparisons, a problem that has plagued social choice theory for a long time. The axioms are:

• Individualism: If all individuals are indifferent between all alternatives, then so is society.
• Non-Triviality: The SWF is not constantly totally indifferent between all alternatives.
• No Ill Will: It is not true that when all individuals but one are totally indifferent, society's preferences are opposite to his.
• Anonymity: A permutation of all individuals leaves the social preferences unchanged.
• Independence of Redundant Alternatives: This axiom restricts Arrow's Independence of Irrelevant Alternatives (IIA) to the case where, both before and after the change, the "irrelevant" alternatives are lotteries on the other alternatives.
• Monotonicity, which is much weaker than the following "good will" axiom: consider two lotteries $p$ and $q$ and two preference profiles which coincide for all individuals except $i$; if $i$ is indifferent between $p$ and $q$ in the first profile but strictly prefers $p$ to $q$ in the second profile, then society strictly prefers $p$ to $q$ in the second profile as well.
• Finally, the Continuity axiom is basically a closed-graph property using the strongest possible convergence for preference profiles.

The main theorem shows that RU satisfies all the axioms, and that if the number of individuals is bigger than three and the number of candidates is bigger than five, then any SWF satisfying the above axioms is equivalent to RU whenever there exist at least two individuals who do not have exactly the same or exactly opposite preferences.

## Intergenerational equity in policy evaluation

Relative utilitarianism [41] can serve to rationalize using 2% as an intergenerationally fair social discount rate for cost-benefit analysis. Mertens and Rubinchik [42] show that a shift-invariant welfare function defined on a rich space of (temporary) policies, if differentiable, has as a derivative a discounted sum of the policy (change), with a fixed discount rate, i.e., the induced social discount rate.
(Shift-invariance requires a function evaluated on a shifted policy to return an affine transformation of the value of the original policy, where the coefficients depend on the time-shift only.) In an overlapping-generations model with exogenous growth (with time being the whole real line), the relative utilitarian function is shift-invariant when evaluated on (small temporary) policies around a balanced-growth equilibrium (with capital stock growing exponentially). When policies are represented as changes in endowments of individuals (transfers or taxes), and utilities of all generations are weighted equally, the social discount rate induced by relative utilitarianism is the growth rate of per capita GDP (2% in the U.S. [43]). This is also consistent with the current practices described in Circular A-4 of the US Office of Management and Budget, which states:

If your rule will have important intergenerational benefits or costs you might consider a further sensitivity analysis using a lower but positive discount rate in addition to calculating net benefits using discount rates of 3 and 7 percent. [44]
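As a back-of-the-envelope illustration (the numbers here are mine, not from the paper), the choice between the RU-induced 2% rate and Circular A-4's 3% and 7% rates changes intergenerational present values by orders of magnitude:

```python
# Present value of a $1,000,000 benefit accruing 100 years from now,
# discounted at the RU-induced 2% rate vs Circular A-4's 3% and 7%.
horizon_years = 100
future_benefit = 1_000_000.0
for rate in (0.02, 0.03, 0.07):
    present_value = future_benefit / (1 + rate) ** horizon_years
    print("rate {:.0%}: present value ${:,.0f}".format(rate, present_value))
```

At 2% the benefit is worth roughly $138,000 today, at 3% about $52,000, and at 7% barely $1,200, which is why the induced rate matters so much for intergenerational policy.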
## References

1. "Jean-Francois Mertens, 1946–2012", The Leisure of the Theory Class. Theoryclass.wordpress.com. 2012-08-07. Retrieved 2012-10-01.
2. Mertens, Jean-François, 1982. "Repeated Games: An Overview of the Zero-sum Case," Advances in Economic Theory, edited by W.
Hildenbrand, Cambridge University Press, London and New York.
3. Mertens, Jean-François, 1986. "Repeated Games," International Congress of Mathematicians. Archived 2014-02-02 at the Wayback Machine.
4. Mertens, Jean-François, Sylvain Sorin, and Shmuel Zamir, 1994. "Repeated Games," Parts A, B, C; Discussion Papers 1994020, 1994021, 1994022; Université Catholique de Louvain, Center for Operations Research and Econometrics (CORE).
5. Mertens, Jean-François (1973). "Strongly supermedian functions and optimal stopping". Probability Theory and Related Fields. 26 (2): 119–139. doi:10.1007/BF00533481. S2CID 123472255.
6. Mertens, Jean-François (1992). "Essential Maps and Manifolds". Proceedings of the American Mathematical Society. 115 (2): 513.
7. Mertens, Jean-François (2003). "Localization of the Degree on Lower-dimensional Sets". International Journal of Game Theory. 32 (3): 379–386. doi:10.1007/s001820400164. S2CID 32224169.
8. Mertens, Jean-François; Zamir, Shmuel (1985). "Formulation of Bayesian analysis for games with incomplete information" (PDF). International Journal of Game Theory. 14 (1): 1–29. doi:10.1007/bf01770224. S2CID 1760385.
9. An exposition for the general reader is by Shmuel Zamir, 2008: "Bayesian games: Games with incomplete information," Discussion Paper 486, Center for Rationality, Hebrew University.
10. A popular version in the form of a sequence of dreams about dreams appears in the film "Inception." The logical aspects of players' beliefs about others' beliefs are related to players' knowledge about others' knowledge; see Prisoners and hats puzzle for an entertaining example, and Common knowledge (logic) for another example and a precise definition.
11. Aumann, R. J., and Maschler, M. 1995. Repeated Games with Incomplete Information. Cambridge, London: MIT Press.
12. Sorin S (2002a) A first course on zero-sum repeated games. Springer, Berlin.
13. Mertens J-F (1987) Repeated games. In: Proceedings of the international congress of mathematicians, Berkeley 1986. American Mathematical Society, Providence, pp 1528–1577.
14. Mertens J-F (1972) The value of two-person zero-sum repeated games: the extensive case. Int J Game Theory 1:217–227.
15. Mertens J-F, Zamir S (1971) The value of two-person zero-sum repeated games with lack of information on both sides. Int J Game Theory 1:39–64.
16. Cardaliaguet P (2007) Differential games with asymmetric information. SIAM J Control Optim 46:816–838.
17. De Meyer B (1996a) Repeated games and partial differential equations. Math Oper Res 21:209–236.
18. De Meyer B. (1999), From repeated games to Brownian games, Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 35, 1–48.
19. Mertens J.-F. (1998), The speed of convergence in repeated games with incomplete information on one side, International Journal of Game Theory, 27, 343–359.
20. Mertens J.-F. and S. Zamir (1976b), The normal distribution and repeated games, International Journal of Game Theory, 5, 187–197.
21. De Meyer B (1996b) Repeated games, duality and the Central Limit theorem. Math Oper Res 21:237–251.
22. Mertens J-F, Zamir S (1976a) On a repeated game without a recursive structure. Int J Game Theory 5:173–182.
23.
Sorin S (1989) On repeated games without a recursive structure: existence of $\lim v_n$. Int J Game Theory 18:45–55.
24. Shapley, L. S. (1953). "Stochastic games". PNAS. 39 (10): 1095–1100. Bibcode:1953PNAS...39.1095S. doi:10.1073/pnas.39.10.1095. PMID 16589380.
25. Blackwell and Ferguson, 1968. "The Big Match", Ann. Math. Statist. 39 (1): 159–163.
26. Mertens, Jean-François; Neyman, Abraham (1981). "Stochastic Games". International Journal of Game Theory. 10 (2): 53–66. doi:10.1007/bf01769259. S2CID 189830419.
27. Mertens, J-F., Parthasarathy, T.P. 2003. Equilibria for discounted stochastic games. In Neyman A, Sorin S, editors, Stochastic Games and Applications, Kluwer Academic Publishers, 131–172.
28. Mertens, J.F. (2003). "The limit-price mechanism". Journal of Mathematical Economics. 39 (5–6): 433–528. doi:10.1016/S0304-4068(03)00015-6.
29. Mertens, Jean-François (1980). "Values and Derivatives". Mathematics of Operations Research. 5 (4): 523–552. doi:10.1287/moor.5.4.523. JSTOR 3689325.
30. Mertens, Jean-François (1988). "The Shapley Value in the Non Differentiable Case". International Journal of Game Theory. 17: 1–65. doi:10.1007/BF01240834. S2CID 118017018.
31. Neyman, A., 2002. Value of Games with infinitely many Players. In Handbook of Game Theory with Economic Applications, Elsevier, edition 1, volume 3, R.J. Aumann & S. Hart (eds.).
32. Govindan, Srihari, and Robert Wilson, 2008. "Refinements of Nash Equilibrium," The New Palgrave Dictionary of Economics, 2nd Edition.
33. Govindan, Srihari, and Robert Wilson, 2009. "On Forward Induction," Econometrica, 77(1): 1–28.
34. Kohlberg, Elon; Mertens, Jean-François (1986). "On the Strategic Stability of Equilibria" (PDF). Econometrica. 54 (5): 1003–1037. doi:10.2307/1912320. JSTOR 1912320.
35. Mertens, Jean-François (2003). "Ordinality in Non Cooperative Games". International Journal of Game Theory. 32 (3): 387–430. doi:10.1007/s001820400166. S2CID 8746589.
36. Mertens, Jean-François, 1992. "The Small Worlds Axiom for Stable Equilibria," Games and Economic Behavior, 4: 553–564.
37. Mertens, Jean-François (1989). "Stable Equilibria – A Reformulation". Mathematics of Operations Research. 14 (4): 575–625. doi:10.1287/moor.14.4.575; Mertens, Jean-François (1991). "Stable Equilibria – A Reformulation". Mathematics of Operations Research. 16 (4): 694–753. doi:10.1287/moor.16.4.694.
38. Govindan, Srihari; Mertens, Jean-François (2004). "An Equivalent Definition of Stable Equilibria". International Journal of Game Theory. 32 (3): 339–357. doi:10.1007/s001820400165. S2CID 28810158.
39. Govindan, Srihari, and Robert Wilson, 2012. "Axiomatic Theory of Equilibrium Selection for Generic Two-Player Games," Econometrica, 70.
40. Arrow, K.J., "A Difficulty in the Concept of Social Welfare", Journal of Political Economy 58(4) (August 1950), pp. 328–346.
41. Dhillon, A. and J.F. Mertens, "Relative Utilitarianism", Econometrica 67(3) (May 1999), 471–498.
42. Mertens, Jean-François; Anna Rubinchik (February 2012). "Intergenerational equity and the Discount Rate for Policy Analysis". Macroeconomic Dynamics. 16 (1): 61–93. doi:10.1017/S1365100510000386. hdl:2078/115068. Retrieved 5 October 2012.
43. Johnston, L. D. and S. H. Williamson. "What Was the U.S. GDP Then?" Economic History Services, MeasuringWorth.
Retrieved 5 October 2012.
44. The U.S. Office of Management and Budget. "Circular A-4". Retrieved 5 October 2012.
# Is the identity map $id: H^2(-\pi,\pi) \to L^2(-\pi,\pi)$ Hilbert-Schmidt?

Let $H_1, H_2$ be Hilbert spaces. A linear operator $A: H_1 \to H_2$ is Hilbert-Schmidt iff for some orthonormal basis $\lbrace e_n : ~ n \in \mathbb{N} \rbrace$ of $H_1$ the sum $\sum_{n \in \mathbb{N}} \Vert A e_n \Vert^2_{H_2}$ is finite. It is easy to see that if $H_1 = H_2$ the identity operator $id: H_1 \to H_1$ is Hilbert-Schmidt if and only if $H_1$ is finite-dimensional, since otherwise $\sum_{n \in \mathbb{N}} \Vert e_n \Vert^2_{H_2}= \sum_{n \in \mathbb{N}} \Vert e_n \Vert^2_{H_1}= \sum_{n \in \mathbb{N}} 1$ clearly diverges.

But what if $H_1$ is a proper subset of $H_2$? Then the situation changes somewhat, because $\Vert \cdot \Vert_{H_1} = \Vert \cdot \Vert_{H_2}$ need not hold anymore. More specifically: Is the identity map $id: H^2(-\pi,\pi) \to L^2(-\pi,\pi)$ Hilbert-Schmidt? And if not: Is there any chance that $id: H^p(-\pi,\pi) \to L^2(-\pi,\pi)$ is Hilbert-Schmidt for any $p$?

EDIT: We equip $L^2$ and $H^2$ with the standard norms $\Vert f \Vert_{L^2}^2 = \int \vert f(x) \vert^2 dx$ and $\Vert f \Vert_{H^2}^2 = \int \vert f(x) \vert^2 dx + \int \vert D f(x) \vert^2 dx + \int \vert D^2f(x) \vert^2 dx$.

- Do we know an orthonormal basis of $H^2(-\pi,\pi)$? –  Berci Feb 15 '13 at 11:14

The answer to both questions is affirmative. Letting $\mathbb{T}=\mathbb{R}/2\pi\mathbb{Z}$, we have on $H^1(\mathbb{T})$ the following orthonormal basis: $$e_n= \frac{e^{i n x}}{\sqrt{2\pi(1+n^2)}},\quad n\in \mathbb{Z}.$$ Specializing the sum $\sum_n \lVert I e_n\rVert_{L^2}^2$ to this basis we get $$\sum_{n\in \mathbb{Z}}\lVert I e_n\rVert_{L^2}^2 = \frac{1}{2\pi}\sum_{n \in \mathbb{Z}}\frac{1}{1+n^2}<\infty.$$ Since this particular sum converges, we can prove via standard arguments that the sum $\sum \lVert I e_n\rVert_{L^2}^2$ will converge for any choice of an orthonormal basis $\{e_n\}$ of $H^1(\mathbb{T})$. We have thus proven that the embedding of $H^1(\mathbb{T})$ into $L^2(\mathbb{T})$ is Hilbert-Schmidt. The argument for $H^k(\mathbb{T})$ is similar. In this case the convergence of the series will be even faster.

As a side note, we can consider the spaces $$H^s(\mathbb{T})=\left\{ f\in L^2(\mathbb{T})\ :\ \sum_{m\in \mathbb{Z}} \lvert\hat{f}(m)\rvert^2\left(1+m^2\right)^s < \infty\right\},\quad s\in \mathbb{R},$$ with inner product $$(f, g)_{H^s}=\sum_{n \in \mathbb{Z}}\hat{f}(n)\overline{\hat{g}(n)}\left(1+n^2\right)^{s},$$ which generalize the spaces $H^k(\mathbb{T})$ with integer $k$. Here we have the orthonormal basis $$e_n=\frac{e^{inx}}{\sqrt{(2\pi)\left(1+n^2\right)^s}},\qquad n \in \mathbb{Z}.$$ So $$\sum_n \lVert I e_n\rVert_{L^2}^2=\frac{1}{2\pi}\sum_{n \in \mathbb{Z}}\frac{1}{\left(1+n^2\right)^s},$$ and the last series converges if and only if $s>\frac{1}{2}$. So the identity operator $$I\colon H^s(\mathbb{T})\to L^2(\mathbb{T})$$ is Hilbert-Schmidt if and only if $s>\frac{1}{2}$. I find it interesting to note that $I$ is compact if and only if $s>0$. So for $0<s\le\frac{1}{2}$ we have an example of a compact operator which is not Hilbert-Schmidt.

Sorry, what is the inner product in $H^p(\Bbb T)$? –  Berci Feb 15 '13 at 11:26

@Berci: I edited the text adding the inner product of $H^s(\mathbb{T})$. –  Giuseppe Negro Feb 15 '13 at 14:14

One more question: In the first part of your answer you gave a basis for $H^1(\mathbb{T})$. I checked that it is orthonormal, but why is it a basis? Any hint how to check that? Thanks a lot!
–  mjb Feb 15 '13 at 15:22 It is essentially a consequence of the definition. The space $H^s(\mathbb{T})$ is defined in such a way that its Fourier transform (that is, the map which passes from a function on $\mathbb{T}$ to its Fourier series) is an isomorphism onto the space $\ell_w^2$ where the weight function is $$w(n)=(1+n^2)^s.$$ The family $e_n$ given in the text is just the pull-back of the standard orthonormal basis of this weighted $\ell^2_w$-space. –  Giuseppe Negro Feb 16 '13 at 1:57
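As a quick numerical sanity check of the cutoff at $s=\frac{1}{2}$ (my addition, not from the thread), the partial sums of $\sum_{n\in\mathbb{Z}}(1+n^2)^{-s}$ stabilize for $s>\frac{1}{2}$ and keep growing for $s\le\frac{1}{2}$:

```python
# Partial sums S_N = sum_{|n| <= N} (1 + n^2)^(-s): bounded iff s > 1/2.
def partial_sum(s, N):
    return sum((1.0 + n * n) ** (-s) for n in range(-N, N + 1))

for s in (0.4, 0.5, 0.6, 1.0):
    sums = [partial_sum(s, 10**k) for k in (2, 3, 4)]
    print(s, ["%.3f" % v for v in sums])
```

For $s=1$ the partial sums settle near $\pi\coth(\pi)\approx 3.153$, while for $s=0.4$ and $s=0.5$ they visibly keep creeping up as $N$ grows.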
# Unable to write Greek with package pdfx

I want to use the package pdfx in order to produce a PDF/A document, but as soon as I load the package I can't write Greek anymore, because I'm getting the following error:

! Package inputenc Error: Unicode character μ (U+03BC) (inputenc) not set up for use with LaTeX.

Here is my MWE (borrowed from here):

\documentclass{article}
%\let\realnoboundary\noboundary
\usepackage[a-1b]{pdfx}
\let\noboundary\realnoboundary
\usepackage[utf8]{inputenc}
\usepackage[greek,english]{babel}
\usepackage{lmodern}
\begin{document}
This is in English, but we also have \textgreek{μια φράση στα ελληνικά}
\end{document}

• I don't get the problem. If I remove the % the code still works fine, even with pdfx loaded. – Felix Phl Jan 11 at 12:04
• I get the error. The problem is that pdfx messes up the latex inputenc/fontenc system, and so lgrenc.dfu is not loaded. You can move babel before pdfx, then it works again. But I suggest also a bug report to the maintainer. – Ulrike Fischer Jan 11 at 13:09
• I reported the bug; the problem is being solved, and the package should be updated in a couple of weeks. – mmj Jan 12 at 0:13
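Following Ulrike Fischer's comment, here is a sketch of the reordered preamble (untested here beyond the thread's report that loading babel before pdfx restores the lgrenc.dfu setup; the \let lines from the original MWE are dropped for clarity):

```latex
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[greek,english]{babel} % load babel BEFORE pdfx so lgrenc.dfu is set up
\usepackage[a-1b]{pdfx}
\usepackage{lmodern}
\begin{document}
This is in English, but we also have \textgreek{μια φράση στα ελληνικά}.
\end{document}
```

Once the package update mentioned in the last comment lands, the original loading order should work again.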
# stupid, stupid, stupid -- part II

#### William Sommerwerck

Following up on Arfa's self-mockery...

The Sony D-FJ75TR is a classic Discman. A very-low-drain two-AA-cell player that runs about 25 hours on high-capacity NiMH cells, it can be literally slammed into a soft surface without skipping -- and that's with the skip protection turned off! Its remote control -- about the volume of the average person's thumb -- includes an excellent AM/stereo-FM digital tuner, a model of modern miniaturization. (If you come across one, treat yourself to the pleasure of opening it up and inspecting it.) It even has a hybrid mini jack with electrical and optical outputs.

The D-FJ75TR was perhaps the last of Sony's "really good" Discmans. I've therefore collected spares. A recent eBay auction had one for $10, including the remote control (the unit's best feature, and often missing). With shipping, I got it for $16.50. * The seller said the tuner wouldn't auto-program, but he was wrong -- it worked fine. (The manual is thoroughly confusing.)

Anyhow... After stepping outside to program the tuner with Seattle's stations, I brought it back in and set it on my desk. Or so I thought. When I left for Intel Monday morning, I couldn't find it. I spent a few minutes looking, then gave up and left. It gnawed at me all week, and the first thing I did when coming home Saturday was to search for it. Did I actually leave it on the stairs? Or in the garage? Perhaps it was under the pile of junk that has taken over my bed.

The problem wasn't the $16.50 -- or even the loss of the spare parts. It was the apparent encroaching senility. Why can't I remember where I put things? **

I'd looked repeatedly on my desk -- where the Discman & remote should have been -- but couldn't find them. Were they under the pizza pan? Nope.

About an hour ago I decided to have a pizza -- a hearty, stick-to-the-ribs breakfast. When I lifted a plastic shopping bag off the aforementioned pizza pan -- there was the Discman and its remote. It was on my desk all along. The pan was sufficiently warped that the player could "hide" under the bag where it wouldn't be seen. Naughty Discman -- naughty Discman!

The best advice I can give anyone who's mislaid something is... If you can't find it in a few minutes, stop looking. It almost always shows up unexpectedly where you never thought it could be.

* The AC-E455 power supply -- perhaps the most-common wall wart in existence -- was also included. (I now have a drawerful of them.) Sony made a universal switching version of this supply. If you want one of either, I can send you one for $15, shipping included.

** One trick I've learned is to say -- out loud -- "I'm putting object A in place B" as you do so. And when you find a lost object... Never, never, NEVER move it until you're ready to put it in its final resting place.

#### Nelson

> The problem wasn't the $16.50 -- or even the loss of the spare parts. It was the apparent encroaching senility. Why can't I remember where I put things? **

A feeling I know all too well :-(

> I'd looked repeatedly on my desk -- where the Discman & remote should have been -- but couldn't find them. Were they under the pizza pan? Nope.

Why are you storing a pizza pan on your desk?

> The best advice I can give anyone who's mislaid something is... If you can't find it in a few minutes, stop looking. It almost always shows up unexpectedly where you never thought it could be.

Or not :-(.
## Solving Ramanujan's Puzzling Problem

Consider a sequence of functions as follows:

$f_1(x) = \sqrt{1+\sqrt{x}}$

$f_2(x) = \sqrt{1+\sqrt{1+2\sqrt{x}}}$

$f_3(x) = \sqrt{1+\sqrt{1+2\sqrt{1+3\sqrt{x}}}}$

… and so on to

$f_n(x) = \sqrt{1+\sqrt{1+2\sqrt{1+3\sqrt{\cdots+n\sqrt{x}}}}}$
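The fragment breaks off here, but the pattern is clear enough to poke at numerically. A small sketch (my own, not part of the original): evaluating $f_n(1)$ for growing $n$ appears to approach 2, consistent with Ramanujan's nested-radical identity $x+1=\sqrt{1+x\sqrt{1+(x+1)\sqrt{1+\cdots}}}$ at $x=1$.

```python
import math

def f(n, x=1.0):
    """f_n(x) = sqrt(1 + sqrt(1 + 2 sqrt(1 + 3 sqrt(... + n sqrt(x))))),
    evaluated from the innermost radical outward."""
    v = math.sqrt(x)
    for k in range(n, 0, -1):
        v = math.sqrt(1 + k * v)
    return v

for n in (1, 2, 5, 10, 30):
    print(n, f(n))   # 1.414..., 1.653..., then rapidly toward 2.0
```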
# chi-square definition

- noun:
  - A test statistic that is calculated as the sum of the squares of observed values minus expected values, divided by the expected values.
  - A test statistic used in the chi-square test.
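Not part of the dictionary entry itself, but a minimal sketch of the statistic it describes:

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (observed - expected)^2 / expected."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# a die rolled 60 times, expecting 10 of each face
print(chi_square([5, 8, 9, 8, 10, 20], [10] * 6))  # 13.4
```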
# If one leg of a right triangle measures 3 centimeters and the other leg measures 6 centimeters, what is the length of the hypotenuse in centimeters?

### PLEASE HELP MEEEEEEE Which of the following describes why the story element of dialogue is important in this passage? A- it propels the action and provokes a decision from one of the characters B- The dialogue reveals how each of the characters feel about the Monkeys Paw C- it discloses the wish one of the characters make D- The author uses dialogue to inform readers that the monkeys Paw moved when Mr. White made a wish PART B A- "it moved" he cried, with a glance of disgust at the object as it la…

### Your customer, Mykel, is ordering a custom-built computer for his home office and isn't sure which components should be the highest priority to meet his needs. He's a software developer and runs multiple VMs to test his applications. He also designs some of his own graphics, and he plays online games when he's not working. Which of the following priorities would be most important for Mykel's computer? a. High-end graphics card, RAID array, and lots of RAM b. High-end CPU, lots of RAM, and high…

### Given f(x) = x²−2x+2, find the value(s) for x such that f(x)=37. The solution set is { }

### Next to each item, indicate whether it would most likely be reported on the balance sheet (B), the income statement (I), or the statement of stockholders' equity (SE): a. Cash (year-end balance) b. Advertising expense c. Common stock d. Printing fees earned e. Dividends f. Accounts payable g. Inventory h. Equipment

### 2. If a gas exerts a pressure of 725 mm Hg when 15 puffs of particles are present, what is the pressure when 35 puffs are present?

### Write a conversation between a poor memorized employee and his or her employer

### Which excerpt from chapter 3 of The Strange Case of Dr. Jekyll and Mr. Hyde illustrates a character vs. character conflict

### Of the 640 people present at the rally, 400 were female. What percent were female? Round the answer to the nearest tenth of a percent.

### What group are nonmetals highlighted

### Why are prefixes used in naming covalent compounds? A. The prefixes identify the only way the atoms can combine. B. The same atoms can combine in more than one ratio. C. The atoms can have different numbers of valence electrons D. Oxidation states of the atoms are identified with prefixes

### Read and choose the correct option with the imperfect tense. A) Los músicos de la recepción fueron chistosos. B) Mi doctora me receta vitaminas porque estoy débil. C) Mi madrastra Eloísa era una mujer fuerte y estricta. D) Tu madrina y yo hablábamos con la ahijada antes de la boda.

### History of Art: Q. what do you think motivates Stone Age people to build such large structures?

### What is hydrochloric acid used for

### Enlarge the triangle by scale factor 1.5 using (4, 4) as the centre of enlargement.

### Need help !!!!! Boy in the striped pajamas

### Choose the best translation for the following sentence. She didn't buy me anything yesterday. Ella no me compra nada ayer. Ella no me compró nada ayer. Ella no me compraba nada ayer. Ella no me compro nada ayer.
A system is a model in which the parts are related to one another by some relationship. A complex system, then, is a system made up of many interrelated, mutually dependent parts that are genuinely hard to understand or describe as a whole. The field of complex and analytic dynamics has undergone vigorous growth in two fairly short periods: early work revolved around iterations near a fixed point, and work later began on a variety of other dynamical systems, such as rational maps, higher-degree polynomials, and meromorphic systems.

## Complex Systems Theory

The theory behind complex systems is that all the parts of the system are interconnected, as if interwoven. Thus, to understand the nature of such a system, we must know the behavior of each part and how the parts work together to form the whole. Complex systems theory covers systems in which a huge number of units organize into aggregations that generate patterns, store information, and even engage in collective decision making. The equations of complex systems are generally derived from information theory, statistical physics, and nonlinear dynamics; although they look organized, these systems are considered fundamentally complex because natural systems behave unpredictably.

In recent times, several communities in science and mathematics have converged on a combined definition: a complex system describes, in brief, phenomena, aggregates, problems, structures, or organisms sharing a common theme on some criteria:

1) These systems are inherently complicated or intricate.

2) They are only very rarely completely deterministic.

3) Their mathematical models are mostly nonlinear.

4) They are predisposed to unexpected and unpredictable outcomes, the so-called emergent behaviors.

In mathematics, the largest contribution of the study of complex systems to the study of deterministic systems was the discovery of chaos, a property of some dynamical systems strongly related to their nonlinear character. For some scientists and researchers, a complex system means a structure with a lot of variation; for others, it is one with a large number of interacting components, in which every part is connected to the others by one or more relations. It is commonly said that the complexity of a system starts at the point where its causality breaks down. Analyzing and describing such systems generally requires nonlinear differential equations, so a knowledge of differential calculus is a prerequisite for understanding the modeling of complex systems.

The word "complex" is, in the general sense, an adjective describing a component or system that is difficult to verify or understand from its design, its function, or both. In these cases the complexity is determined by factors such as the number of components, the number of conditional branches and their intricacies, the types of data structures involved, the degree of nesting, and so on.
Complexity theory also covers the large number of units that can organize themselves into aggregations which generate patterns, store information, and even engage in collective decision making.

## Examples of Complex Systems

We can find examples of complex systems not only in mathematics and the sciences but in our real-world, day-to-day life:

1) The governments of different countries: every government has many responsibilities (taxation, transportation, the military, and so on), and each of these functions is itself complex in nature. This is a very simple example of a complex system we come across in daily life.

2) Families: every family (nuclear, extended, etc.) is made up of individuals related to one another by some relation, and it must also link with the outside environment. Hence, every family also constitutes a complex system.

3) The ecosystem of the Earth and its subsystem ecosystems, such as oceans, rain forests, deserts, and weather. Note that each subsystem ecosystem is itself a complex system.

4) Any corporation or company, being made up of many interrelated complex subsystems.

Some examples of complex systems used in mathematics are listed below.

1) Complex networks in graph theory. Let $G$ be a graph with $V$ as the set of vertices and $E$ as the set of directed or undirected edges. Let $k_i$ be the number of edges connected to the vertex $v_i$, and let $N_i$ be the set of its neighbours. Then $C_i$, the clustering coefficient of $v_i$, is given by the formula

$C_{i} = \frac{2\,\bigl|\{ e_{jk} : v_j, v_k \in N_i,\ e_{jk} \in E \}\bigr|}{k_{i}(k_{i}-1)},$

that is, the fraction of pairs of neighbours of $v_i$ that are themselves connected. A standard example of a complex network is the small-world network. (Figure: an example of a small-world network. A computational sketch of the clustering coefficient appears at the end of this section.)

2) Complexity in probability: if the probability of heads is $\frac{1}{2}$ and the coin is a fair coin, the unpredictability, or complexity, is taken to be 1. Similarly, if the probability of heads is 1 and the coin is perfectly biased, the unpredictability, or complexity, is taken to be 0.

In natural patterns such as landforms, complexity is a manifestation of two main characteristics: 1) such patterns arise from processes that are not linear at all, that is, processes that can modify the properties of the environment in which they operate or that are strongly coupled; and 2) they arise in systems that are open and can be driven away from equilibrium by the exchange of momentum, energy, information, or material across their boundaries.

## Understanding Complex Systems

To understand any complex system, we must first know the difference between a simple and a complex system. Examples of simple systems are a pendulum, a spinning wheel, an oscillator, an orbiting planet. Every complex system has properties that are universal in nature, while a simple system performs only one function at a time. Thus, by comparing simple and complex systems, we can understand complex systems thoroughly.
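Returning briefly to example 1 above before moving on to the features of complex systems: a small computational sketch (my own, not from the text) of the clustering coefficient, for a graph stored as a dict of neighbour sets.

```python
from itertools import combinations

def clustering(adj, i):
    """C_i = 2 * |edges among neighbours of i| / (k_i * (k_i - 1))
    for an undirected graph given as a dict of neighbour sets."""
    nbrs = adj[i]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return 2.0 * links / (k * (k - 1))

# tiny example: a triangle (0-1-2) plus a pendant vertex 3
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(clustering(adj, 0))  # one link among three neighbours -> 1/3
```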
Complex systems can be better understood by looking at the following features, which every complex system has:

1) Nonlinearity: this is a prerequisite for a complex system and a must for every complex system. A system is said to be linear if we can combine any two solutions to obtain another, and can multiply any solution by some factor to obtain another. When this principle of superposition does not apply, the system is nonlinear. One consequence of nonlinearity in the equations is that even small changes in the initial values or conditions can make a big change in the results (the logistic-map sketch after this section illustrates this sensitivity).

2) Feedback: this is also a very important condition for complex, dynamic systems. When one interconnected part of a system interacts with another part in some way, the latter part receives feedback, whose later effect depends entirely on how the interaction takes place. But the mere existence of feedback is not enough for a system to be complex. The most abstract way to represent the prevalence of feedback in a complex system is given by the theory of causal graphs: a chain of causal arrows indicates no feedback, while a graph containing loops of causal arrows depicts feedback. Feedback is often used by control systems, and it is also used for error correction in statistics and reliability theory.

3) Spontaneous order: related notions in a system include organization, determinism, symmetry, periodicity, and pattern. The most confusing issue in complex systems is how the order in them relates to the information content of their states and dynamics as that information is processed. Total order is incompatible with complexity.

4) Robustness and lack of central control: the order in a complex system is robust because it is distributed rather than centrally produced; it is stable under perturbations of the system. A centrally controlled system, by contrast, is vulnerable to malfunctions in a few key components. The order of a system can be maintained by an error-correction mechanism.

5) Emergence: a quantity assessed by understanding the collective behavior of all the parts of a system; it too is one of the defining properties of a complex system.

Thus, the main aim in the field of complex systems is to understand the universal properties, or features, discussed above. The difficulties faced in the field of complex systems are:

a) Unless it is clear what is originally meant by the structure and variation of a system, information about the structure does not prove informative or helpful.

b) One has to choose between conflating complexity with the number of components, conflating the science of complexity with nonlinear and chaotic dynamics, conflating a complex system with the different possible histories it may have, or giving a fully subjective answer to our questions.

c) It takes us to territory that is more interesting.

d) The central idea of nonlinearity is introduced.

e) What "many" means in "many components" also matters to the complexity of the system.

f) An informative characterization of a system is difficult, and thus the idea of arguing over it is introduced.
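As promised under feature 1, a tiny illustration (mine, not the author's) of nonlinearity plus feedback: the logistic map $x_{t+1} = r\,x_t(1-x_t)$. Two trajectories that start one millionth apart soon disagree completely.

```python
def logistic(x, r=4.0, steps=40):
    xs = [x]
    for _ in range(steps):
        x = r * x * (1 - x)   # feedback: next state is a nonlinear function of the current one
        xs.append(x)
    return xs

a = logistic(0.200000)
b = logistic(0.200001)        # tiny perturbation of the initial condition
for t in (0, 10, 20, 30, 40):
    print(t, round(a[t], 6), round(b[t], 6))   # the trajectories decorrelate by t ~ 30
```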
## § FFT

- Evaluating a polynomial $p(x)$ at points $[a_0, a_1, \dots, a_m]$ is expensive in general, even though we have the splitting $p(x) = p_e(x^2) + x\, p_o(x^2)$. The split makes the polynomials smaller (the degrees of $p_e$ and $p_o$ are half that of $p$), but we still need values at all the points $[a_0 \dots a_m]$, so each merge step costs $m$. The recurrence is $T(n) = 2T(n/2) + m$ with $T(1) = m$, which solves to $O(nm)$.
- The special property of the DFT is that we can reconstruct $p(x)$ at $[w_n^0, \dots, w_n^{n-1}]$ given the values of $p_o, p_e$ at $[w_{n/2}^0, w_{n/2}^1, \dots, w_{n/2}^{n/2-1}]$. So the number of points at which we need to evaluate the polynomial decreases with the size of the polynomial! (A runnable sketch follows this section.)
- This makes the recurrence $T(n) = 2T(n/2) + n$ with $T(1) = 1$, which is $O(n \log n)$.

#### § Worked out example of FFT of 8 elements

$p(x) \equiv a_0 x^0 + a_1 x^1 + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5 + a_6 x^6 + a_7 x^7$

$p_e(x) \equiv a_0 + a_2 x + a_4 x^2 + a_6 x^3, \qquad p_o(x) \equiv a_1 + a_3 x + a_5 x^2 + a_7 x^3$

$p(x) = p_e(x^2) + x\, p_o(x^2)$

Now suppose we know how to evaluate $p_e(x)$ and $p_o(x)$ at $[w_4^0, w_4^1, w_4^2, w_4^3]$, where $w_4$ is the 4th root of unity. We wish to evaluate $p(x)$ at $[w_8^0, w_8^1, \dots, w_8^7]$, where $w_8$ is the 8th root of unity. The only two properties of the roots of unity we will need are:

- $w_8^2 = w_4$.
- $w_8^4 = -1$.

Using the value of $w_8$, the above two relations, and the values $[p_o(1), p_o(w_4), p_o(w_4^2), p_o(w_4^3)]$ and $[p_e(1), p_e(w_4), p_e(w_4^2), p_e(w_4^3)]$, we evaluate $p$ at powers of $w_8$ as:

- $p(w_8^k) = p_e((w_8^k)^2) + w_8^k\, p_o((w_8^k)^2) = p_e(w_4^k) + w_8^k\, p_o(w_4^k)$.
- $p(w_8^0) = p_e(1) + p_o(1)$
- $p(w_8^1) = p_e(w_8^2) + w_8^1 p_o(w_8^2) = p_e(w_4^1) + w_8 p_o(w_4^1)$
- $p(w_8^2) = p_e(w_8^4) + w_8^2 p_o(w_8^4) = p_e(w_4^2) + w_8^2 p_o(w_4^2)$
- $p(w_8^3) = p_e(w_8^6) + w_8^3 p_o(w_8^6) = p_e(w_4^3) + w_8^3 p_o(w_4^3)$
- $p(w_8^4) = p_e(w_8^8) + w_8^4 p_o(w_8^8) = p_e(w_4^4) + w_8^4 p_o(w_4^4) = p_e(1) - p_o(1)$

(and similarly for $p(w_8^5), p(w_8^6), p(w_8^7)$: the same four values reappear with a sign flip.)

#### § Proof 1:

Expand the recurrence:

    T(n) = n + 2T(n/2)
         = n + 2[n/2 + T(n/4)]      = n + n + 4T(n/4)
         = n + n + 4[n/4 + 2T(n/8)] = n + n + n + 8T(n/8)
         = ...
         = kn + 2^k T(n/2^k)
         = (log n)n + 2^(log n) T(n/2^(log n))
         = (log n)n + n T(n/n)
         = (log n)n + n * 1
         = (log n)n + n

#### § Proof 2:

Consider the tree:

    8                    mrg:8
    4 4                  mrg:4 mrg:4
    2 2 2 2              mrg:2 mrg:2 mrg:2 mrg:2
    1 1 1 1 1 1 1 1

- The number of leaves is n. The cost of each leaf is T(1) = 1, so the total cost of the leaf level is n.
- At each level above, the total merge cost is 8 = 4*2 = 2*4.
- The number of levels is log n.
- The total cost is the cost of the leaves, n, plus the cost of the interior nodes, n log n.
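Here is a minimal runnable version of the recurrence (my sketch, using the same convention $w_n = e^{2\pi i/n}$ as above; not optimized):

```python
import cmath

def fft(a):
    """Evaluate p(x) = a[0] + a[1] x + ... at the n-th roots of unity.
    n = len(a) must be a power of two; returns [p(w^0), ..., p(w^(n-1))]."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2])            # p_e at the (n/2)-th roots of unity
    odd = fft(a[1::2])             # p_o at the same points
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]            # p(w^k) = p_e(w^{2k}) + w^k p_o(w^{2k})
        out[k + n // 2] = even[k] - w * odd[k]   # since w^{k + n/2} = -w^k
    return out

coeffs = [1, 2, 3, 4, 5, 6, 7, 8]
print(fft(coeffs)[0])   # p(1) = 36, the sum of the coefficients
```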
# American Institute of Mathematical Sciences

2007, 2007 (Special): 704-712. doi: 10.3934/proc.2007.2007.704

## Blow up and decay bounds in quasi linear parabolic problems

1 Dipartimento di Matematica e Informatica, Viale Merello 92, 09123 Cagliari, Italy

Received September 2006 · Revised May 2007 · Published September 2007

The aim of this paper is to investigate a class of quasilinear parabolic problems whose solutions may blow up at some finite time. We establish conditions on the data sufficient to preclude blow up and to ensure that the solution and its spatial gradient decay exponentially for all $t > 0$.

Citation: Monica Marras, Stella Vernier Piro. Blow up and decay bounds in quasi linear parabolic problems. Conference Publications, 2007, 2007 (Special): 704-712. doi: 10.3934/proc.2007.2007.704
# How to run different animations with same suffix for different public objects?

I would like to be able to run the corresponding animation (ending with "Get" in the title) for whatever item GameObject I plug into this treasure box. I made 2 prefabs for "item-get" animations today. Both GameObjects consist of 3 sprites: the image representing the item, and 2 light-ray images that I enable/disable in alternation to get a halo effect (see animation below). You can see the 2 prefabs and their corresponding animation components in the Project panel. These next two images show how similar the animations are -- they even both require two of the same sprites, flareA and flareB.

Here is the code I am currently using to set the state of the ItemBox GameObject and run the animations. You can see that while the itemGet prefab is public (that is, I can plug in whatever GameObject I want in the editor), line 27 explicitly calls for the animation belonging only to the turd item.

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    public class ItemBox : MonoBehaviour
    {
        bool disabled = false;
        private Animator anim;
        public GameObject itemGet = null;

        void Start()
        {
            anim = GetComponent<Animator>();
            if (!disabled)
            {
                anim.Play("ItemBoxIdleState");
            }
        }

        void OnTriggerEnter(Collider other)
        {
            if (!disabled && other.gameObject.tag == "Player")
            {
                anim.Play("ItemBoxGetItemState");
                // Spawn the item just above the box, facing the camera.
                GameObject itemClone = Instantiate(itemGet, transform.position + new Vector3(0, 0.375f, 0), Camera.main.transform.rotation);
                // Line 27: hard-coded to the turd item's animation.
                itemClone.GetComponent<Animator>().Play("turdGet");
                Destroy(itemClone, 1.6f);
                disabled = true;
            }
        }
    }

How can I change line 27 (the line before 'Destroy') so that it runs the appropriate "*Get" animation for whatever itemGet object I plug into the inspector? I think I am already close. I intend to make more items (not too many for now), and the animation I want to play will always be structured like "*Get".

UPDATE 07/29/19: I tried making a public AnimationClip object in the ItemBox script and putting that name in quotes on line 27, and the script still works, but now I get an index error saying the animation state was not found. I don't think I can plug the name of the public variable into the .Play("") quotes and make it work; I think it only worked before because there is only one animation for that item anyway. I ended up removing this new variable and just keeping the .Play("") part with empty quotes, and it still works. I might decide to add more animations to these item objects later, so I still need to know how to specifically play the animation that ends in "Get".
# balance equation when calcium heated

**Name: Date: Worksheet Chemistry Balancing Chemical Equations …**
DIRECTIONS: Predict the products, write a chemical equation, then balance the equation. 11. Calcium hydroxide plus nitric acid produces ___. 12. Calcium carbonate is strongly heated ___. 13. A small piece of sodium is added to water ___. 14. Solid barium oxide is …

**Complete the chemical equations for the reactions when …**
The equation for the formation of aluminum oxide from aluminum and oxygen, by heating aluminum in the presence of air, is: 4 Al + 3 O2 -> 2 Al2O3. The reason that the formula of aluminum oxide …

**chemical reactions and equations class 10 questions …**
In this page we have chemical reactions and equations class 10 questions and answers. We have tried to provide solutions to lots of questions. If you need help, you can contact us via e-mail or social media links. Hope you like them, and do not forget to like and …

**How Do You Write A Balanced Equation For Phosphorus …**
Answer (1 of 5): The chemical notation for phosphorus is P, whilst for oxygen it is O. The molecular formula for the chemical compound diphosphorus pentoxide is P4O10. Therefore, the balanced equation for phosphorus + oxygen -> diphosphorus pentoxide is …

**Balancing Chemical Equations | Representing Chemical …**
If the equation is not balanced, change the coefficients of the molecules until the number of atoms of each element on either side of the equation balances. Check that the atoms are in fact balanced (we will look at this a little later). Add any extra details to the equation, e.g. phase symbols.

**1 Quicklime, which is calcium oxide, is made by heating limestone …**
1 Quicklime, which is calcium oxide, is made by heating limestone in a furnace. CaCO3(s) -> CaO(s) + CO2(g). The reaction does not come to equilibrium. (a) Suggest why the conversion to calcium oxide is complete. [1] (b) Calcium hydroxide, slaked lime, is made from calcium oxide.

**Name: Period. Date: Balancing Reactions Practice …**
Directions: Write the unbalanced equation for each reaction, and then balance it. 1) When heated, mercury(II) oxide decomposes, yielding liquid mercury and oxygen gas. 2) Iron(III) oxide reacts with carbon, yielding iron metal and …

**Writing and Balancing Chemical Equations – Chemistry**
Write a balanced molecular equation describing each of the following chemical reactions. (a) Solid calcium carbonate is heated and decomposes to solid calcium oxide and carbon dioxide gas. (b) Gaseous butane, C4H10, reacts with diatomic oxygen gas to …

**Balance the following equations, and indic… | Clutch Prep**
Calcium Nitrate And Sodium Iodide

**OCR GCSE 9-1 Gateway Science/Chemistry QUIZ on Topic …**
Given the symbol equation to show the formation of calcium chloride by burning calcium in chlorine: Ca(s) + Cl2(g) ==> CaCl2(s). Calculate the mass in g of chlorine left unreacted when 80 g of calcium reacts with 150 g of chlorine to form 222 g of calcium chloride.

**Balancing equations - Ask Me Help Desk**
I need to balance the following chemical equations: zinc + lead(II) nitrate yield zinc nitrate + lead; aluminum bromide + chlorine yield aluminum chloride + bromine: 2AlBr3 + 3Cl2 -> 2AlCl3 + 3Br2; sodium phosphate + calcium chloride yield calcium …

**Balance this equation: Al(NO3)3 + K2Cr2O7 = …**
14/8/2009 · Balance this equation: Al(NO3)3 + K2Cr2O7 = Al2(Cr2O7) + KNO3. ACCORDING TO ME the equation you gave is wrong! It should have been Al(NO3)3 + K2Cr2O7 = Al2(Cr2O7)3 + KNO3; then the balanced equation would have been …

**Phosphoric Acid + Calcium Carbonate --> ? + ? + ? | …**
8/6/2007 · Could you please help me to balance this equation too? Thanks. Phosphoric acid is H3PO4. Calcium carbonate is CaCO3. H3PO4 + CaCO3 --> … First, you must realize what type of reaction you are working with. Phosphoric acid is a triprotic acid.

**Page 4**
9. When water is added to phosphorus pentachloride, PCl5, and the mixture is heated, it reacts to form phosphoric acid and hydrochloric acid. Balance the equation for this reaction, then calculate how much PCl5 (in grams) will produce 80.0 g of HCl. PCl5 …

**homework - What does (heated) mean in a chemical …**
I'm supposed to balance the following equation: Potassium chlorate (heated) --> Potassium chloride + Oxygen. But I don't understand what the "(heated)" means in front of the first element. Do I ignore it, or am I supposed to add another element to the …

**Calcium balance during direct acidification of milk for …**
Calcium is an important nutrient but also contributes to the texture, taste and functionality of most dairy products. The balance between micellar and free calcium in curd and serum phase during direct acidification of bovine milk was studied by changing organic acid …

**Write a complete balanced skeleton chemical reaction …**
Write a balanced equation, and identify each type of reaction. [1 Answer] When solid sodium nitrate is heated, solid sodium nitrite and oxygen gas are produced. Solid bismuth(III) oxide and solid carbon react to form bismuth metal and carbon monoxide gas.

**write the formula and balance the equation on these 2 …**
21/10/2007 · 1. Nitrogen dioxide gas reacts with water to form aqueous nitric acid and nitrogen monoxide gas. 2. Solid potassium chlorate decomposes to form solid potassium chloride and oxygen gas. 3. When potassium chlorate is heated, oxygen gas is released and potassium chloride is left behind. 4. When solid calcium carbonate is heated, it decomposes into solid calcium oxide and gaseous carbon dioxide.

**4.1 Writing and Balancing Chemical Equations**
By the end of this section, you will be able to: derive chemical equations from narrative descriptions of chemical reactions; write and balance chemical equations in molecular, total ionic, and net ionic formats.

**Lakhmir Singh Chemistry Class 10 Solutions For Chapter 1 …**
When calcium carbonate is heated it decomposes into calcium oxide and carbon dioxide. Here carbon dioxide is a gas and thus confirms a chemical reaction. 24. (a) Aluminium hydroxide reacts with sulphuric acid to form aluminium sulphate and water. Write a …

**what is calcium + oxygen = calcium oxide balanced? | …**
10/11/2011 · This is due to calcium having oxidation state +2 while oxygen's oxidation state is −2; this means that when they form a compound these cancel each other out to give CaO. Also, oxygen in its elemental state is O2, so when the equation is first put together it reads Ca …

**Carbon dioxide is produced when zinc carbonate is …**
Get an answer for "Carbon dioxide is produced when zinc carbonate is heated strongly. A) Write the balanced chemical equation for the reaction that takes place. B) Name the …"

**F321: Atoms, Bonds and Groups Group 2 - PMT**
Plymstock School. 3. When heated strongly, CaCO3 decomposes. Write an equation, including state symbols, for the thermal decomposition of CaCO3. [Total 2 marks] 4. Calcium oxide reacts with water and with nitric acid. State the formula of the calcium …

An equation is balanced by mass when the number of atoms of each element in the reactants equals the number of atoms of that element in the products. For example, the equation shown for the decomposition of water has four atoms of hydrogen in the two molecules of water on the reactant side and four atoms of hydrogen in the two molecules of hydrogen gas on the product side; therefore, hydrogen …

**Calcium (Ca) and water - Lenntech**
Calcium occurs in water naturally. Seawater contains approximately 400 ppm calcium. One of the main reasons for the abundance of calcium in water is its natural occurrence in the earth's crust. Calcium is also a constituent of coral. Rivers generally contain 1-2 …

**When calcium carbonate is heated strongly, it evolves …**
19/8/2020 · When calcium carbonate is heated strongly, it evolves carbon dioxide gas: CaCO3(s) → CaO(s) + …

**Balancing Chemical Equations**
Balance the following chemical equation: … Chemistry In Focus: The elements A and Z combine to produce two different … If 0.15 …
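Stepping back from the snippets: any of these exercises can be balanced mechanically, since conservation of each element makes the coefficient vector a null-space vector of an element-count matrix. A sketch (assuming sympy is available) for the CaCO3 decomposition that recurs above:

```python
from math import lcm
from sympy import Matrix

# Columns: CaCO3, CaO, CO2. Rows: Ca, C, O. Products carry a minus sign,
# so a coefficient vector c with A*c = 0 balances the equation.
A = Matrix([
    [1, -1,  0],   # Ca
    [1,  0, -1],   # C
    [3, -1, -2],   # O
])
v = A.nullspace()[0]                      # the null space is 1-dimensional here
v = v * lcm(*(int(x.q) for x in v))       # clear denominators -> integer coefficients
print(list(v))                            # [1, 1, 1]: CaCO3 -> CaO + CO2
```

The same matrix setup handles harder cases (e.g. the Al(NO3)3 + K2Cr2O7 item) by adding one column per species and one row per element.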
All K (1 <= K <= 1,000) of the cows are participating in Farmer John's annual reading contest. The competition consists of reading a single book with N (1 <= N <= 100,000) pages as fast as possible while understanding it. Cow i has a reading speed S_i (1 <= S_i <= 100) pages per minute, a maximum consecutive reading time T_i (1 <= T_i <= 100) minutes, and a minimum rest time R_i (1 <= R_i <= 100) minutes. The cow can read at a rate of S_i pages per minute, but only for T_i minutes at a time. After she stops reading to rest, she must rest for R_i minutes before commencing reading again.

Determine the number of minutes (rounded up to the nearest full minute) that it will take each cow to read the book.

Input Format

- Line 1: Two space-separated integers: N and K
- Lines 2..K+1: Line i+1 contains three space-separated integers: S_i, T_i, and R_i

Output Format

- Lines 1..K: Line i should indicate how many minutes (rounded up to the nearest full minute) are required for cow i to read the whole book.

Sample

Input

    10 3
    2 4 1
    6 1 5
    3 3 3

INPUT DETAILS: The book has 10 pages; 3 cows are competing. The first cow reads at a rate of 2 pages per minute, can read for at most 4 minutes at a time, and must rest for 1 minute after reading. The second reads at a rate of 6 pages per minute, can read for at most 1 minute at a time, and must rest 5 minutes after reading. The last reads at a rate of 3 pages per minute, can read for at most 3 minutes at a time, and must rest for 3 minutes after reading.

Output

    6
    7
    7

OUTPUT DETAILS: The first cow can read 8 pages in 4 minutes, rest for 1 minute, and read the last 2 pages in a minute. The second reads 6 pages in a minute, rests for 5 minutes, and finishes in the next minute. The last reads 9 pages in 3 minutes, rests for 3 minutes, and finishes in the next minute.
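One way to solve it in closed form per cow (my sketch, checked against the sample): count full reading stints of S·T pages; every stint except the last is followed by a rest.

```python
import math

def minutes_to_finish(n, s, t, r):
    """Minutes for a cow with speed s, stint length t, rest r to read n pages."""
    per_stint = s * t                       # pages finished in one full stint
    stints = math.ceil(n / per_stint)       # reading stints needed
    left = n - (stints - 1) * per_stint     # pages left for the final stint
    return (stints - 1) * (t + r) + math.ceil(left / s)

pages = 10
for s, t, r in [(2, 4, 1), (6, 1, 5), (3, 3, 3)]:
    print(minutes_to_finish(pages, s, t, r))   # 6, 7, 7 as in the sample
```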
Step 1: State the hypothesis and identify the claim.
Step 2: Compute the test value.
Step 3: Find the P-value.
Step 4: Make the decision to reject or not reject the null hypothesis.
Step 5: Summarize the results.

Decision Rule when Using a P-value

- If P-value ≤ α, reject the null hypothesis.
- If P-value > α, do not reject the null hypothesis.

Example 3

A researcher wishes to test the claim that the average age of nurses in Metro Manila is greater than 35 years. She selects a sample of 32 nurses and finds the mean of the sample is 35.7 years, with a standard deviation of 2 years. Is there evidence to support the claim at the given significance level? Use the P-value method. (A worked computation follows these notes.)

Guideline for P-values

- If P-value ≤ 0.01, reject the null hypothesis. The difference is highly significant.
- If P-value > 0.01 but P-value ≤ 0.05, reject the null hypothesis. The difference is significant.
- If P-value > 0.05 but P-value ≤ 0.10, consider the consequences of Type I error before rejecting the null hypothesis.
- If P-value > 0.10, do not reject the null hypothesis. The difference is not significant.

T Test for a Mean

The t test is a statistical test for the mean of a population and is used when the population is normally or approximately normally distributed, σ is unknown, and n < 30. The degrees of freedom are d.f. = n − 1. The formula for the t test is

    t = (X̄ − μ) / (s / √n)

Assumptions for the t Test for a Mean when σ is Unknown

- The sample is a random sample.
- Either n ≥ 30, or the population is normally distributed if n < 30.

Remember that the t test should be used when the population is approximately normally distributed and the population standard deviation is unknown.
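Example 3 can be worked end to end. With n = 32 ≥ 30, the z statistic applies, using the sample s in place of σ as the notes do; a minimal sketch (mine, not from the handout):

```python
import math

xbar, mu, s, n = 35.7, 35.0, 2.0, 32
z = (xbar - mu) / (s / math.sqrt(n))        # test value
p = 0.5 * math.erfc(z / math.sqrt(2))       # right-tail area under the standard normal
print(round(z, 2), round(p, 4))             # 1.98 0.0239
```

Since P ≈ 0.024, the claim would be supported at α = 0.05 but not at α = 0.01, in line with the guideline table above.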
# Can you find the relation between 1007 and 2015?

Algebra Level 4

$\sum_{r=1}^{1007} \cos\left(\frac{2\pi r}{2015}\right)$ can be expressed as $-\frac{m}{n}$, where $m$ and $n$ are coprime positive integers. Calculate $m+n$.
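Not part of the problem page, but a sketch of the standard root-of-unity argument for the value:

$$0 = \operatorname{Re}\sum_{r=0}^{2014} e^{2\pi i r/2015} = 1 + \sum_{r=1}^{2014} \cos\frac{2\pi r}{2015} = 1 + 2\sum_{r=1}^{1007} \cos\frac{2\pi r}{2015},$$

where the last step pairs $r$ with $2015-r$ (their cosines agree). Hence the sum is $-\frac{1}{2}$, so $m=1$, $n=2$, and $m+n=3$.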
# How can you memorize chemistry solubility rules?

##### 1 Answer

Feb 11, 2017

How else but by rote? And you must also learn the exceptions....

#### Explanation:

So how good is my memory? And note that we refer to aqueous solutions only. These follow a hierarchy.

- All nitrate/acetate/perchlorate/ammonium salts are soluble.
- All alkali metal salts are soluble, except for KBPh4.
- All halides are soluble, except for AgX, Hg2X2, and PbX2.
- All sulfates are soluble, except for BaSO4, Ag2SO4, PbSO4, SrSO4.
- All carbonates are insoluble (except for those of the alkali metals).
- All hydroxides are insoluble (except for those of the alkali metals).
- All sulfides are insoluble (except for those of the alkali metals).
- All phosphates are insoluble (except for those of the alkali metals).

Have at it. This is the knowledge I would expect of a 2nd year inorganic chemistry student. At A level, you must know that the halides are SOLUBLE, but that AgX, Hg2X2, PbX2 are as soluble as BRICKS. Are bricks soluble? I think you should also know at A level that silver chloride is curdy white, silver bromide is cream-coloured, and silver iodide is bright yellow.
# How to automatically include PDFs from my bib file

I use the BIB list generated by Mendeley, which contains an item called "file" with the path to the PDF of each entry. I need to generate a compilation of papers and I would like to do it automatically in the following manner:

    \usepackage{biblatex}
    \usepackage{pdfpages}

    \fullcite{foo}

So what I'm looking for is a function that, given a bib item, returns the path saved in the file entry of my bib file. Example of bib entry:

    @inproceedings{foo,
    author = {Masiero, Bruno },
    booktitle = {DAGA},
    file = {:D\:/Users/masiero/Documents/My Dropbox/Literature/pdf/Masiero, Fels_2011_Equalization for Binaural Synthesis with Headphone.pdf:pdf},
    month = mar,
    title = {Equalization for Binaural Synthesis with Headphone},
    year = {2011}
    }

Some other examples of the file field would be:

    file = {:D\:/Users/masiero/Documents/My Dropbox/Literature/pdf/Masiero, Pelzer\_2010\_Study of Phase Reconstruction Methods Employed at Room Acoustic Simulation.pdf:pdf},
    file = {:D\:/Users/masiero/Documents/My Dropbox/Literature/pdf/Masiero, Pollow\_2010\_A review of the compressive sampling framework in the lights of spherical harmonics applications to distributed spherical arrays.pdf:pdf},
    file = {::D\:/Users/masiero/Documents/My Dropbox/Literature/pdf/Masiero, Pollow, Fels\_2011\_Design of a Fast Broadband Individual Head-Related Transfer Function Measurement System.pdf:pdf},
    file = {:D\:/Users/masiero/Documents/My Dropbox/Literature/pdf/Masiero, Qiu\_2009\_Two Listeners Crosstalk Cancellation System Modelled by Four Point Sources and Two Rigid Spheres.pdf:pdf},
    file = {:D\:/Users/masiero/Documents/My Dropbox/Literature/pdf/Masiero, Ribeiro, Nascimento\_2008\_Transducer Placement Strategy for Active Noise Control of Power Transformers\_Fortschritte der Akustik -- DAGA.pdf:pdf},

- I'm not quite clear on what should be included here: \include is for inputting .tex files containing chapters of a document. Do you want to include the PDFs themselves or the links to the PDFs? – Joseph Wright Sep 26 '12 at 16:27
- Could you add an example of a .bib entry? – egreg Sep 26 '12 at 19:25
- I'm compiling the code with pdflatex and I have the article in PDF. My example code was oversimplified. But that can be done with \includepdf. Question is updated. – bmasiero Sep 27 '12 at 15:39
- @bmasiero Uh! Those pesky accents. :( – egreg Sep 27 '12 at 22:34
- @egreg I updated the examples with papers in English, all without the pesky nice accents. ;) – bmasiero Oct 1 '12 at 15:34

Assuming your file field brings only the exact location of a file, without spaces, a possible solution with biblatex and biber would be:

1. Remap (following Audrey's answer to Add new field to biblatex entries) the file field into a custom field biber and biblatex can understand (e.g. usera);
2. Create a bibmacro that calls \includepdf (from the pdfpages package);
3. Append that to the cite bibmacro (or create a new cite command).

First, I created a foo.pdf file from the following .tex file (in my /tmp/ dir):

    \documentclass{standalone}
    \begin{document}
    Foooooooooo!
    \end{document}

Then I created a MWE:

    \documentclass{article}
    % This is to create our dummy bib file
    \begin{filecontents}{\jobname.bib}
    @article{foo,
      author  = {A. Author},
      title   = {Title},
      journal = {A Journal},
      volume  = {x},
      year    = {2012},
      file    = {/tmp/foo.pdf}
    }
    \end{filecontents}% end bib file
    % (change the pdf location accordingly)
    \usepackage[style=verbose,backend=biber]{biblatex}% I'm used to verbose style;
    % with numeric, the result is weird.
    % This changes a 'file' field into a 'usera' field which biber/biblatex can understand:
    \DeclareSourcemap{% I took this from http://tex.stackexchange.com/a/65403/5872
      \maps[datatype=bibtex]{
        \map{
          \step[fieldsource=file,fieldtarget=usera]
        }
      }
    }
    \usepackage{pdfpages}% This package allows us to include pdf files (or pages)
    % We create a bibmacro to include the pdf...
    \newbibmacro{file}{%
      \iffieldundef{usera}{}{%
        \includepdf{\thefield{usera}}
      }}
    \usepackage{xpatch}% This package allows us to patch (bib)macros
    % ... and then we tell the 'cite' bibmacro to call it:
    \xapptobibmacro{cite}%
      {\usebibmacro{file}}{}{}
    \begin{document}
    \cite{foo}
    \end{document}

And here's the output (sorry, I don't know how to handle these images very well yet; I tried to at least crop it).

EDIT: Instead of patching the `cite` bibmacro with `xpatch`, you can also define a new citation command, say:

    \DeclareCiteCommand{\fullcite}[]
      {\usebibmacro{prenote}}
      {\usebibmacro{citeindex}%
       \usebibmacro{cite}
       \usebibmacro{file}}
      {\multicitedelim}
      {\usebibmacro{postnote}}

    \DeclareMultiCiteCommand{\fullcites}{\fullcite}{\multicitedelim}

- It depends on the format of the field. I have some ideas, if the field ends with :pdf. – egreg Sep 27 '12 at 17:05
- Henrique, unfortunately this workaround is not working for me. It seems my problem is in the way Mendeley saves the files, containing spaces and that :pdf at the end, as can be seen in the exemplary bib entry I just edited above. – bmasiero Sep 27 '12 at 17:05
- @bmasiero Is the format always :<PATH>:pdf? – egreg Sep 27 '12 at 17:09
- @bmasiero Yes, that's why I mentioned l3regex to egreg. You might also have problems when the file location contains spaces. – henrique Sep 27 '12 at 17:15
- @egreg The format used by Mendeley is always file = {:<fullpath>:ext;:<fullpath>:ext} @henrique Yep, spaces are also a problem for \includepdf. – bmasiero Sep 27 '12 at 17:16
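Outside the thread: since the field format is always `file = {:<fullpath>:ext;:<fullpath>:ext}`, one could also preprocess the .bib file before LaTeX ever sees it and rewrite the paths into a friendlier field. A hypothetical Python helper (mendeley_paths is my own name, not part of any package, and it assumes the extension is pdf):

```python
import re

def mendeley_paths(field):
    """Extract paths from a Mendeley-style ':<path>:pdf;:<path>:pdf' field.
    Allows the escaped drive colon (e.g. 'D\\:') inside a path."""
    return [m.group(1) for m in re.finditer(r':((?:[^:;]|\\:)+):pdf', field)]

field = r':D\:/Users/masiero/paper.pdf:pdf;:D\:/Users/masiero/other.pdf:pdf'
print(mendeley_paths(field))
# ['D\\:/Users/masiero/paper.pdf', 'D\\:/Users/masiero/other.pdf']
```

Spaces inside the extracted paths would still need handling on the TeX side, as the comments above point out.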
Consider a source moving at a velocity $v=\beta c$ and at an angle $\theta$ relative to the line of sight. We will show that for $\beta \rightarrow 1$ it is possible to get apparent velocities larger than the speed of light. To see this, we look at the time difference between the arrival of two photons, the first emitted at $t_1$ and the second emitted at $t_2\equiv t_1+\Delta t$. The arrival time of the first photon is:

$t_1'=t_1+\frac{D_1}{c}$

where $D_1$ is the distance between the location of the source at $t_1$ and the observer. However, by the time the second photon is emitted, the distance to the observer has changed. Since for astrophysical purposes this change is much smaller than $D_1$, we can approximate it by the distance covered by the source along the line of sight during the time interval $\Delta t$:

$D_2=D_1-\beta c \Delta t \cos\theta$

yielding:

$t_2'=t_2+\frac{D_2}{c}=t_2+\frac{D_1}{c}-\beta \Delta t \cos\theta.$

We therefore see that the time between the arrivals of the two photons becomes:

$t_2'-t_1'=\Delta t\,(1-\beta \cos\theta).$

By observing an astrophysical source, we only see its change in position in the direction transverse to the line of sight. The apparent velocity is therefore the change in the transverse position of the source divided by the time between the arrivals of the photons:

$v_{app}=\frac{\beta c \sin\theta\, \Delta t}{\Delta t (1-\beta \cos\theta)}=\frac{\beta c \sin\theta}{1-\beta \cos\theta}.$

Due to the difference between emission and observation times, this velocity may be larger than light-speed. To find the maximum apparent velocity we differentiate this function with respect to $\theta$ and equate with 0:

$\frac{d v_{app}}{d\theta}=\frac{\beta c \cos\theta}{1-\beta \cos\theta}-\frac{c \beta^2 \sin^2\theta}{(1-\beta \cos\theta)^2}=0 \;\rightarrow\; \cos\theta_{max}=\beta \;\rightarrow\; \sin\theta_{max}=1/\gamma$

where $\gamma$ is the Lorentz factor. Plugging this back into $v_{app}$ we get:

$v_{app,max}= \frac{\beta c \sin\theta_{max}}{1-\beta \cos\theta_{max}}=\frac{\beta c/\gamma}{1-\beta^2}=\beta \gamma c.$

Therefore, for $\beta>1/\sqrt{2}$ the apparent velocity is greater than light-speed. Observation of high apparent velocities in astrophysical sources provides evidence of relativistic motion and can be used to place lower limits on their Lorentz factors. This technique has been applied, for instance, to AGN jets.
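A quick numerical check of the two boxed results (my sketch, not from the original page): scan $v_{app}(\theta)$ on a grid and compare the maximum against $\beta\gamma c$ and $\theta_{max}=\arccos\beta$.

```python
import math

def v_app(beta, theta):
    """Apparent transverse velocity in units of c."""
    return beta * math.sin(theta) / (1.0 - beta * math.cos(theta))

beta = 0.99
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
best = max(v_app(beta, k * math.pi / 10**5) for k in range(1, 10**5))
print(best, beta * gamma, math.degrees(math.acos(beta)))
# both ~7.02 c, with the grid maximum near theta = arccos(beta) ~ 8.1 degrees
```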
# zbMATH — the first resource for mathematics

Wigner's theorem in Hilbert $C^*$-modules over $C^*$-algebras of compact operators. (English) Zbl 1067.46052

The notion of Hilbert $C^{*}$-module is a generalization of the notion of Hilbert space, obtained by allowing the inner product to take values in a $C^{*}$-algebra. In the paper under review, the authors show that if $W$ is a Hilbert $C^{*}$-module over the $C^{*}$-algebra of all compact operators on a Hilbert space $H$ with $\dim H>1$ and $T:W\rightarrow W$ is a function satisfying $| \langle Tv,Tw\rangle| =|\langle v,w\rangle|$ for all $v,w\in W$, then there exist an adjointable map $U:W\rightarrow W$ which is an isometry (or equivalently $U^{*}U=I$) and a phase function $\phi:W\rightarrow \mathbb C$ (i.e., a function whose values are of modulus $1$) such that $Tv=\phi(v)Uv$ for all $v\in W$. This result generalizes L. Molnár's extension [J. Math. Phys. 40, No. 11, 5544-5554 (1999; Zbl 0953.46030)] of Wigner's classical unitary-antiunitary theorem [E. Wigner, "Gruppentheorie und ihre Anwendung auf die Quantenmechanik der Atomspektren" (Die Wissenschaft 85, F. Vieweg & Sohn, Braunschweig) (1931; JFM 57.1578.03)]. The authors conclude the paper with the conjecture that the main result of the paper is true for Hilbert modules over concrete $C^{*}$-algebras which contain the ideal of all compact operators.

##### MSC:

46L08 $C^*$-modules
46C05 Hilbert and pre-Hilbert spaces: geometry and topology (including spaces with semidefinite inner product)
46C50 Generalizations of inner products (semi-inner products, partial inner products, etc.)
39B42 Matrix and operator functional equations
47J05 Equations involving nonlinear operators (general)

##### References:

[1] William Arveson, An invitation to $C^*$-algebras, Springer-Verlag, New York-Heidelberg, 1976. Graduate Texts in Mathematics, No. 39. · Zbl 0344.46123
[2] D. Bakic, B. Guljas, Operators on Hilbert $H^*$-modules, accepted for publication in the Journal of Operator Theory.
[3] D. Bakic, B. Guljas, Hilbert $C^*$-modules over $C^*$-algebras of compact operators, accepted for publication in Acta Sci. Math. (Szeged). · Zbl 1067.46052
[4] M. Cabrera, J. Martínez, and A. Rodríguez, Hilbert modules revisited: orthonormal bases and Hilbert-Schmidt operators, Glasgow Math. J. 37 (1995), no. 1, 45-54. · Zbl 0833.46037
[5] M. Frank, D. R. Larson, Frames in Hilbert $C^*$-modules and $C^*$-algebras, preprint, University of Houston, Houston, and Texas A&M University, College Station, Texas, USA, 1998.
[6] I. Kaplansky, Modules over operator algebras, Amer. J. Math. 75 (1953), 839-853. · Zbl 0051.09101
[7] C. Lance, Hilbert $C^*$-modules, London Math. Soc. Lecture Note Series, 210, Cambridge University Press, Cambridge, 1995.
[8] Lajos Molnár, An algebraic approach to Wigner's unitary-antiunitary theorem, J. Austral. Math. Soc. Ser. A 65 (1998), no. 3, 354-369. · Zbl 0943.46033
[9] Lajos Molnár, A generalization of Wigner's unitary-antiunitary theorem to Hilbert modules, J. Math. Phys. 40 (1999), no. 11, 5544-5554. · Zbl 0953.46030
[10] William L. Paschke, Inner product modules over $B^*$-algebras, Trans. Amer. Math. Soc. 182 (1973), 443-468. · Zbl 0239.46062
[11] Jürg Rätz, On Wigner's theorem: remarks, complements, comments, and corollaries, Aequationes Math. 52 (1996), no. 1-2, 1-9. · Zbl 0860.39033
[12] Marc A. Rieffel, Induced representations of $C^*$-algebras, Advances in Math. 13 (1974), 176-257. · Zbl 0284.46040
[13] N. E. Wegge-Olsen, $K$-theory and $C^*$-algebras: a friendly approach, Oxford Science Publications, The Clarendon Press, Oxford University Press, New York, 1993.
[14] E. Wigner, Gruppentheorie und ihre Anwendung auf die Quantenmechanik der Atomspektren, Vieweg, Braunschweig, 1931. (reprint) · JFM 57.1578.03
# Pololu 3.3V Step-Up/Step-Down Voltage Regulator S7V8F3

The S7V8F3 switching step-up/step-down regulator efficiently produces a fixed 3.3 V output from input voltages between 2.7 V and 11.8 V. Its ability to convert both higher and lower input voltages makes it useful for applications where the power supply voltage can vary greatly, as with batteries that start above but discharge below the regulated voltage. The compact (0.45″ × 0.65″) module has a typical efficiency of over 90% and can deliver 500 mA to 1 A across most of the input voltage range.

## Overview

The Pololu step-up/step-down voltage regulator S7V8F3 is a switching regulator (also called a switched-mode power supply (SMPS) or DC-to-DC converter) that uses a buck-boost topology. It takes an input voltage from 2.7 V to 11.8 V and increases or decreases the voltage to a fixed 3.3 V output with a typical efficiency of over 90%. The input voltage can be higher than, lower than, or equal to the set output voltage, and the voltage is regulated to achieve a steady 3.3 V. This flexibility in input voltage is especially well-suited for battery-powered applications in which the battery voltage begins above the desired output voltage and drops below the target as the battery discharges. Without the typical restriction on the battery voltage staying above the required voltage throughout its life, new battery packs and form factors can be considered. For example:

- A 3-cell battery holder, which might have a 4.5 V output with fresh alkalines or a 3.0 V output with partially discharged NiMH cells, can be used with this regulator to power a 3.3 V circuit.
- A single lithium-polymer cell can run a 3.3 V device through its whole discharge cycle.

In typical applications, this regulator can deliver up to 1 A continuous when the input voltage is higher than 3.3 V (stepping down). When the input voltage is lower than 3.3 V (stepping up), the available current decreases as the difference between the voltages increases; please see the graphs at the bottom of this page for a more detailed characterization. The regulator has short-circuit protection, and thermal shutdown prevents damage from overheating; the board does not have reverse-voltage protection.

This regulator is also available with a fixed 5 V output and with a user-adjustable output.

## Features

- input voltage: 2.7 V to 11.8 V
- fixed 3.3 V output with +5/-3% accuracy
- typical continuous output current: 500 mA to 1 A across most combinations of input and output voltages (Actual continuous output current depends on input and output voltages. See Typical Efficiency and Output Current section below for details.)
- power-saving feature maintains high efficiency at low currents (quiescent current is less than 0.1 mA)
- integrated over-temperature and short-circuit protection
- small size: 0.45″ × 0.65″ × 0.1″ (11 × 17 × 3 mm)

## Using the Regulator

During normal operation, this product can get hot enough to burn you. Take care when handling this product or other components connected to it.

### Connections

The step-up/step-down regulator has four connections: shutdown (SHDN), input voltage (VIN), ground (GND), and output voltage (VOUT).

The SHDN pin can be driven low (under 0.4 V) to power down the regulator and put it in a low-power state.
The quiescent current in this sleep mode is dominated by the current in the 100k pull-up resistor from SHDN to VIN. With SHDN held low, this resistor will draw 10 µA per volt on VIN (for example, the sleep current with a 5 V input will be 50 µA). The SHDN pin can be driven high (above 1.2 V) to enable the board, or it can be connected to VIN or left disconnected if you want to leave the board permanently enabled.

The input voltage, VIN, should be between 2.7 V and 11.8 V. Lower inputs can shut down the voltage regulator; higher inputs can destroy the regulator, so you should ensure that noise on your input is not excessive, and you should be wary of destructive LC spikes (see below for more information).

The output voltage, VOUT, is fixed at 3.3 V. The output voltage can be up to 3% higher than normal when there is little or no load on the regulator. The output voltage can also drop depending on the current draw, especially when the regulator is boosting from a lower voltage (stepping up), although it should remain within 5% of the set output.

The four connections are labeled on the back side of the PCB, and they are arranged with a 0.1″ spacing along the edge of the board for compatibility with standard solderless breadboards, perfboards, and connectors that use a 0.1″ grid. You can solder wires directly to the board or solder in either the 4×1 straight male header strip or the 4×1 right-angle male header strip that is included.

### Typical Efficiency and Output Current

The efficiency of a voltage regulator, defined as (power out)/(power in), is an important measure of its performance, especially when battery life or heat is a concern. As shown in the graph below, this switching regulator has an efficiency between 80% and 95% for most applications. A power-saving feature maintains these high efficiencies even when the regulator current is very low.

The maximum achievable output current of the board varies with the input voltage but also depends on other factors, including the ambient temperature, air flow, and heat sinking. The graph below shows the output currents at which this voltage regulator's over-temperature protection typically kicks in after a few seconds. These currents represent the limit of the regulator's capability and cannot be sustained for long periods, so the continuous currents that the regulator can provide are typically several hundred milliamps lower, and we recommend trying to draw no more than about 1 A from this regulator throughout its input voltage range.

### LC Voltage Spikes

When connecting voltage to electronic circuits, the initial rush of current can cause voltage spikes that are much higher than the input voltage. If these spikes exceed the regulator's maximum voltage, the regulator can be destroyed. If you are connecting more than about 9 V, using power leads more than a few inches long, or using a power supply with high inductance, we recommend soldering a 33 µF or larger electrolytic capacitor close to the regulator between VIN and GND. The capacitor should be rated for at least 16 V. More information about LC spikes can be found in our application note, Understanding Destructive LC Voltage Spikes.
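To make the efficiency figure concrete, here is a small back-of-the-envelope helper (an editorial sketch, not Pololu code; the 85% efficiency and the load values below are assumed for illustration) that estimates the input current a supply must deliver, using the definition efficiency = (power out)/(power in):

```python
# Estimate the average input current of a switching regulator from the
# definition efficiency = P_out / P_in. Illustrative values only; the
# S7V8F3's real efficiency depends on the operating point (see the graphs).
def input_current(v_in, v_out, i_out, efficiency):
    """Return the average input current (A) for a given load."""
    p_out = v_out * i_out          # output power (W)
    p_in = p_out / efficiency      # input power the supply must deliver (W)
    return p_in / v_in

# Example: a 3.0 V pack boosting to 3.3 V at 300 mA, assuming 85% efficiency.
print(f"{input_current(3.0, 3.3, 0.300, 0.85):.3f} A")  # about 0.39 A
```

Note that when stepping up, the input current exceeds the output current, which is one way to see why the available output current shrinks as the input voltage drops.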
# How do you solve 10x^2 - 27x + 18 = 0?

Jun 21, 2015

Use the quadratic formula to find the zeros $x = \frac{3}{2}$ or $x = \frac{6}{5}$, so that
$10 {x}^{2} - 27 x + 18 = \left(2 x - 3\right) \left(5 x - 6\right)$.

#### Explanation:

$f \left(x\right) = 10 {x}^{2} - 27 x + 18$ is of the form $a {x}^{2} + b x + c$, with $a = 10$, $b = - 27$ and $c = 18$.

The discriminant $\Delta$ is given by the formula:

$\Delta = {b}^{2} - 4 a c = {27}^{2} - \left(4 \times 10 \times 18\right) = 729 - 720 = 9 = {3}^{2}$

Being a positive perfect square, $\Delta$ tells us that $f \left(x\right) = 0$ has two distinct rational roots, given by the quadratic formula:

$x = \frac{- b \pm \sqrt{\Delta}}{2 a} = \frac{27 \pm 3}{20}$

That is: $x = \frac{30}{20} = \frac{3}{2}$ and $x = \frac{24}{20} = \frac{6}{5}$.

Hence $f \left(x\right) = \left(2 x - 3\right) \left(5 x - 6\right)$.

Jun 21, 2015

$x = \frac{6}{5}$, $x = \frac{3}{2}$

#### Explanation:

$10 {x}^{2} - 27 x + 18 = 0$

We can first factorise the above expression and thereby find the solution. Factorising by splitting the middle term:

$10 {x}^{2} - 15 x - 12 x + 18 = 0$

$5 x \left(2 x - 3\right) - 6 \left(2 x - 3\right) = 0$

$\left(5 x - 6\right) \left(2 x - 3\right) = 0$

Equating each of the two factors with zero, we obtain the solutions $x = \frac{6}{5}$, $x = \frac{3}{2}$.
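As a quick numerical cross-check of the two answers (an editorial addition, not part of the original answers):

```python
# Verify the roots of 10x^2 - 27x + 18 = 0 via the quadratic formula.
from math import sqrt

a, b, c = 10, -27, 18
delta = b**2 - 4*a*c                 # discriminant: 729 - 720 = 9
x1 = (-b + sqrt(delta)) / (2*a)      # 30/20 = 3/2
x2 = (-b - sqrt(delta)) / (2*a)      # 24/20 = 6/5
# Both residuals evaluate to 0.0, confirming the factorisation.
print(x1, x2, 10*x1**2 - 27*x1 + 18, 10*x2**2 - 27*x2 + 18)
```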
# Definite Integrals, extrema

May 8th, 2009, 05:39 PM (#1)

Find and classify the relative maxima and minima of $f(x)$, if
$$f(x) = \int_0^x \frac{t^2-4}{1+\cos^2 t}\,dt.$$
I got $x=2$ and $x=-2$ for $f'(x)=0$, but I don't know how to calculate the relative maxima and minima. Please help me. Thank you.

May 9th, 2009, 09:07 AM (#2) — Re: Definite Integrals, extrema

Here, we may use the fact that
$$\frac{d}{dx}\int_0^x g(t)\,dt = g(x).$$
(As you move $x$ to the right, the rate of change of the area under a curve is equal to the height of the function.) Your solutions $x = \pm 2$ are correct. To prove that they are relative maxima and minima, we can use the Second Derivative Test:
$$f''(x) > 0 \;\Rightarrow\; x \text{ is a local minimum}, \qquad f''(x) < 0 \;\Rightarrow\; x \text{ is a local maximum}.$$

May 10th, 2009, 04:53 AM (#3) — Re: Definite Integrals, extrema

Thanks for your help. But how can I calculate $f(2)$? And is $x=-2$ also a solution, since the domain of $f(x)$ is $x \ge 0$?

May 10th, 2009, 05:23 AM (#4) — Re: Definite Integrals, extrema

If the domain is, as you say, $[0,\infty)$, then $x = 2$ is the only local extremum. However, since $\cos x$ is squared and added to $1$ in the denominator, $f(x)$ is defined everywhere. I just put the integral into the Wolfram Online Integrator and it gave an answer that was not expressed in terms of elementary functions. If the problem only asks you to classify the extrema, then I don't think you have to worry about the values of $f$ at those points.
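Since the thread ends without a value for $f(2)$, here is a minimal numerical sketch (an editorial addition, assuming scipy is available; not from the original thread) that evaluates the integral at the critical points:

```python
# f(x) = integral from 0 to x of (t^2 - 4)/(1 + cos(t)^2) dt has no
# elementary antiderivative, but quadrature gives its values directly.
from math import cos
from scipy.integrate import quad

def f(x):
    val, _err = quad(lambda t: (t**2 - 4) / (1 + cos(t)**2), 0, x)
    return val

print(f(2))   # negative: the integrand is negative on (0, 2), so x = 2 is a local minimum
print(f(-2))  # positive; only meaningful if the domain is all of R
```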
# Higher Order ODEs

1. Mar 16, 2015

### mshiddensecret

1. The problem statement, all variables and given/known data

$y'''''' + y''' = t$

2. Relevant equations

3. The attempt at a solution

I got all the roots and solved the homogeneous equation. Then I tried to guess the particular solution and got $At + B$. However, I don't know how to proceed, because the 6th derivative or the 3rd would be 0.

2. Mar 16, 2015

### SteamKing (Staff Emeritus)

I think you mean you tried to guess the particular solution and got $y_p = At + B$. It's not clear why you guessed $y_p = At + B$, since the highest order derivative is 6. This implies that $y_p$ should be a 7th degree polynomial.

3. Mar 16, 2015

### LCKurtz

You don't need a 7th degree polynomial for $y_p$ for this problem. Try $y_p = Ct^4$.

4. Mar 18, 2015

### Ray Vickson

Mod note: removed a quote that was too much help.

You can also let $z(t) = y'''(t)$ and write the DE as $(z(t) - t)''' + (z(t)-t) = 0$, which is a homogeneous third-order equation in $z(t)-t$. After finding $z(t)$, integrating three times (with constants of integration included) will get $y(t)$.

Last edited by a moderator: Mar 18, 2015

5. Mar 18, 2015

### haruspex

It only implies the general solution will be of degree 5, no? The degree of the particular solution will often be the sum of the least degree of differentiation and the highest degree of the polynomial on the other side of the equation. In this case, $3 + 1 = 4$.
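To confirm LCKurtz's hint, a short symbolic check (sympy; an editorial addition, not part of the thread) shows that the quartic guess works with $C = 1/24$: the 6th derivative of $t^4/24$ vanishes and the 3rd derivative equals $t$.

```python
# Check the particular solution y_p = t^4/24 for y'''''' + y''' = t.
from sympy import symbols, diff, Rational

t = symbols('t')
y_p = Rational(1, 24) * t**4
print(diff(y_p, t, 6) + diff(y_p, t, 3))  # prints t, confirming the guess
```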
# Using the Lebesgue dominated convergence theorem

Let $A_1, A_2, A_3, \ldots$ be measurable sets. Let $m \in \mathbb{N}$ and let $E_m$ be the set defined as follows: $x \in E_m$ iff $x$ is a member of at least $m$ of the sets $A_k$.

Prove that $E_m$ is a measurable set and that
$$m\,\lambda(E_m) \le \sum_{k=1}^{\infty} \lambda(A_k).$$

My approach: To prove that the set is measurable I was hoping to use the Carathéodory characterisation of measurable sets; however, I failed to do so. Maybe there is a simpler way to prove that the above set is measurable. I was hoping to use the Lebesgue dominated convergence theorem to prove the above inequality. For that I noted that the $E_m$ form a decreasing sequence of sets, and if I define $f_m$ to be the characteristic function of the set $E_m$, maybe I could get further, as then I would be able to use $f_1$ as the function which bounds all the $f_m$, in order to apply the dominated convergence theorem. Any input on the validity of these ideas will be appreciated. Since I am self-studying this, it would be great if you could show me how to translate my ideas into a formal mathematical proof, so please be kind enough to at least provide the basic structure of the proof.

To see that $E_m$ is measurable, let $\Lambda_m = \{L \subset \mathbb{N} \mid |L| = m\}$ be the collection of subsets of $\mathbb{N}$ that have $m$ elements. Note that this is a countable set. Then
$$E_m = \bigcup_{L \in \Lambda_m} \Big(\bigcap_{k \in L} A_k\Big)$$
is a countable union of measurable sets and thus measurable.

The Lebesgue dominated convergence theorem won't necessarily help you, and I can't see how you can apply it in a way that is not artificial. Rather, you can see that $E_m \subseteq E_1 = \bigcup_k A_k$, and for that reason
$$\lambda(E_m) \le \lambda(E_1) \le \sum_k \lambda(A_k)$$
by monotonicity and sub-additivity of the measure.

- The inequality that you obtain does not have an $m$ on the RHS. – Noob101 Mar 23 '17 at 20:04
- Could you help me see how $E_m = \bigcup_{L \in \Lambda_m} \big(\bigcap_{k \in L} A_k\big)$? – Noob101 Mar 23 '17 at 20:11
- Suppose a point $p$ lies in $m$ different $A_k$. This means there exist $A_{k_1}, \ldots, A_{k_m}$ so that $p \in \bigcap_{i=1}^m A_{k_i}$. It follows that $p$ lies in the right hand side. On the other hand, every point in the right hand side lies in the intersection of $m$ different $A_k$, thus also in $E_m$. – s.harp Mar 24 '17 at 9:18
- Now you always have $E_m \subset E_1$, since $E_1 = \bigcup_k A_k$. Measures are monotone: if $X \subset Y$, then $\lambda(X) \le \lambda(Y)$. This implies the first inequality. The second follows from $\sigma$-sub-additivity: for any countable union you have $\lambda(\bigcup_n X_n) \le \sum_n \lambda(X_n)$. – s.harp Mar 24 '17 at 9:19
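For completeness, here is the step the thread stops short of (an editorial sketch, not from the answer): the factor $m$ comes from integrating a pointwise bound on indicator functions rather than from monotonicity alone. Every $x \in E_m$ lies in at least $m$ of the $A_k$, so
$$m\,\chi_{E_m}(x) \le \sum_{k=1}^{\infty} \chi_{A_k}(x) \quad \text{for all } x,$$
and integrating both sides (the monotone convergence theorem justifies interchanging sum and integral) gives
$$m\,\lambda(E_m) = \int m\,\chi_{E_m}\,d\lambda \le \sum_{k=1}^{\infty} \int \chi_{A_k}\,d\lambda = \sum_{k=1}^{\infty} \lambda(A_k).$$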
# American Institute of Mathematical Sciences

September 2016, 36(9): 5025-5046. doi: 10.3934/dcds.2016018

## Global dynamics in a fully parabolic chemotaxis system with logistic source

Ke Lin and Chunlai Mu

College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China

Received July 2015; Revised January 2016; Published May 2016

In this paper, we consider a fully parabolic chemotaxis system
$$\left\{
\begin{array}{ll}
u_t=\Delta u-\chi\nabla\cdot(u\nabla v)+u-\mu u^r, & x\in \Omega,\ t>0,\\
v_t=\Delta v-v+u, & x\in\Omega,\ t>0,
\end{array}
\right.$$
with homogeneous Neumann boundary conditions in an arbitrary smooth bounded domain $\Omega\subset \mathbb{R}^n$ ($n=2,3$), where $\chi>0$, $\mu>0$ and $r\geq 2$. For the dimensions $n=2$ and $n=3$, we establish results on the global existence and boundedness of classical solutions to the corresponding initial-boundary value problem, provided that $\chi$, $\mu$ and $r$ satisfy some explicit conditions. Apart from this, we also show that if $\frac{\mu^{1/(r-1)}}{\chi}>20$, $r\geq 2$ and $r\in \mathbb{N}$, then the solution of the system approaches the steady state $\left(\mu^{-\frac{1}{r-1}}, \mu^{-\frac{1}{r-1}}\right)$ as time tends to infinity.

Citation: Ke Lin, Chunlai Mu. Global dynamics in a fully parabolic chemotaxis system with logistic source. Discrete & Continuous Dynamical Systems - A, 2016, 36 (9): 5025-5046. doi: 10.3934/dcds.2016018
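As a quick consistency check of the stated limit (an editorial note, not from the paper): looking for spatially constant steady states, the first equation reduces to $u - \mu u^r = 0$, whose positive root satisfies $u^{r-1} = 1/\mu$, i.e. $u = \mu^{-\frac{1}{r-1}}$, and the second equation then forces $v = u$; this recovers the steady state $\left(\mu^{-\frac{1}{r-1}}, \mu^{-\frac{1}{r-1}}\right)$.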
Research

# Existence of nonnegative solutions for a fractional m-point boundary value problem at resonance

Haidong Qu (1) and Xuan Liu (2)

1. Department of Mathematics, Hanshan Normal University, Chaozhou, Guangdong, 521041, China
2. Department of Basic Education, Hanshan Normal University, Chaozhou, Guangdong, 521041, China

Boundary Value Problems 2013, 2013:127. doi:10.1186/1687-2770-2013-127

Received: 11 January 2013; Accepted: 26 April 2013; Published: 16 May 2013

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Abstract

We consider the fractional differential equation $D_{0+}^{q}u(t)=f(t,u(t))$, $0<t<1$, satisfying the boundary conditions $D_{0+}^{p}u(t)|_{t=0}=D_{0+}^{p-1}u(t)|_{t=0}=\cdots=D_{0+}^{p-n+1}u(t)|_{t=0}=0$, $u(1)=\sum_{i=1}^{m-2}\alpha_{i}u(\xi_{i})$, where $D_{0+}^{q}$ is the Riemann-Liouville fractional order derivative. The parameters in the multi-point boundary conditions are such that the corresponding differential operator is a Fredholm map of index zero. As a result, the minimal and maximal nonnegative solutions for the problem are obtained by using a fixed point theorem of increasing operators.

MSC: 26A33, 34A08.

##### Keywords:

fractional order; coincidence degree; at resonance

### 1 Introduction

Let us consider the fractional differential equation
$$D_{0+}^{q}u(t)=f(t,u(t)),\quad 0<t<1, \tag{1.1}$$
with the boundary conditions (BCs)
$$\begin{cases}
D_{0+}^{p}u(t)|_{t=0}=D_{0+}^{p-1}u(t)|_{t=0}=\cdots=D_{0+}^{p-n+1}u(t)|_{t=0}=0,\\
u(1)=\sum_{i=1}^{m-2}\alpha_{i}u(\xi_{i}),
\end{cases} \tag{1.2}$$
where $n\ge 1$, $\max\{q-2,0\}\le p<q-1$, $n<q\le n+1$, $\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q-1}=1$, $\alpha_{i}>0$, $0<\xi_{1}<\xi_{2}<\cdots<\xi_{m-2}<1$, $m\ge 3$. We assume that $f:[0,1]\times[0,\infty)\to[0,\infty)$ is continuous.

A boundary value problem at resonance for ordinary or fractional differential equations has been studied by several authors, including the most recent works [1-7] and the references therein. In most of the papers mentioned above, coincidence degree theory was applied to establish existence theorems. But in [8], Wang obtained the minimal and maximal nonnegative solutions for a second-order m-point boundary value problem at resonance by using a new fixed point theorem of increasing operators, and in this paper we use this method of Wang to establish the existence theorem for equations (1.1) and (1.2).

For the convenience of the reader, we briefly recall some notations. Let $X$, $Z$ be real Banach spaces, $L:\operatorname{dom}(L)\subset X\to Z$ be a Fredholm map of index zero, and $P:X\to X$, $Q:Z\to Z$ be continuous projectors such that $\operatorname{Im}(P)=\operatorname{Ker}(L)$, $\operatorname{Ker}(Q)=\operatorname{Im}(L)$ and $X=\operatorname{Ker}(L)\oplus\operatorname{Ker}(P)$, $Z=\operatorname{Im}(L)\oplus\operatorname{Im}(Q)$. It follows that $L|_{\operatorname{Ker}(P)\cap\operatorname{dom}(L)}:\operatorname{Ker}(P)\cap\operatorname{dom}(L)\to\operatorname{Im}(L)$ is invertible. We denote the inverse of this map by $K_{P}:\operatorname{Im}(L)\to\operatorname{Ker}(P)\cap\operatorname{dom}(L)$. Since $\dim\operatorname{Im}(Q)=\dim\operatorname{Ker}(L)$, there exists an isomorphism $J:\operatorname{Im}(Q)\to\operatorname{Ker}(L)$. Let $\Omega$ be an open bounded subset of $X$. The map $N:X\to Z$ will be called $L$-compact on $\overline{\Omega}$ if $QN(\overline{\Omega})$ and $K_{P}(I-Q)N(\overline{\Omega})$ are compact. We take $H=L+J^{-1}P$; then $H:\operatorname{dom}(L)\subset X\to Z$ is a linear bijection with bounded inverse and
$$(JQ+K_{P}(I-Q))(L+J^{-1}P)=(L+J^{-1}P)(JQ+K_{P}(I-Q))=I.$$
We know from [9] that $K_{1}=H(K\cap\operatorname{dom}(L))$ is a cone in $Z$.
Theorem 1.1 [9] $N(u)+J^{-1}P(u)=H(\tilde{u})$, where $\tilde{u}=P(u)+JQN(u)+K_{P}(I-Q)N(u)$, and $\tilde{u}$ is uniquely determined.

From the above theorem, the author of [9] obtained that the assertions

(i) $P+JQN+K_{P}(I-Q)N:K\cap\operatorname{dom}(L)\to K\cap\operatorname{dom}(L)$ and

(ii) $N+J^{-1}P:K\cap\operatorname{dom}(L)\to K_{1}$

are equivalent. We also need the following definition and theorem.

Definition 1.1 [8] Let $K$ be a normal cone in a Banach space $X$ and $u_{0}\le v_{0}$; then $u_{0},v_{0}\in K\cap\operatorname{dom}(L)$ are said to be coupled lower and upper solutions of the equation $Lx=Nx$ if
$$\begin{cases}
Lu_{0}\le Nu_{0},\\
Lv_{0}\ge Nv_{0}.
\end{cases}$$

Theorem 1.2 [8] Let $L:\operatorname{dom}(L)\subset X\to Z$ be a Fredholm operator of index zero, $K$ be a normal cone in a Banach space $X$, $u_{0},v_{0}\in K\cap\operatorname{dom}(L)$, $u_{0}\le v_{0}$, and $N:[u_{0},v_{0}]\to Z$ be $L$-compact and continuous. Suppose that the following conditions are satisfied:

(C1) $u_{0}$ and $v_{0}$ are coupled lower and upper solutions of the equation $Lx=Nx$;

(C2) $N+J^{-1}P:K\cap\operatorname{dom}(L)\to K_{1}$ is an increasing operator.

Then the equation $Lx=Nx$ has a minimal solution $u^{*}$ and a maximal solution $v^{*}$ in $[u_{0},v_{0}]$. Moreover,
$$u^{*}=\lim_{n\to\infty}u_{n},\qquad v^{*}=\lim_{n\to\infty}v_{n},$$
where
$$u_{n}=(L+J^{-1}P)^{-1}(N+J^{-1}P)u_{n-1},\qquad v_{n}=(L+J^{-1}P)^{-1}(N+J^{-1}P)v_{n-1},\quad n=1,2,3,\ldots,$$
and
$$u_{0}\le u_{1}\le u_{2}\le\cdots\le u_{n}\le\cdots\le v_{n}\le\cdots\le v_{2}\le v_{1}\le v_{0}.$$

### 2 Preliminaries

In this section, we present some necessary basic knowledge and definitions from fractional calculus theory.

Definition 2.1 (see Equation 2.1.1 in [10]) The R-L fractional integral $I_{0+}^{q}u$ of order $q\in\mathbb{R}$ ($q>0$) is defined by
$$I_{0+}^{q}u(t):=\frac{1}{\Gamma(q)}\int_{0}^{t}\frac{u(\tau)\,d\tau}{(t-\tau)^{1-q}}\quad(t>0).$$
Here $\Gamma(q)$ is the gamma function.

Definition 2.2 (see Equation 2.1.5 in [10]) The R-L fractional derivative $D_{0+}^{q}u$ of order $q\in\mathbb{R}$ ($q>0$) is defined by
$$D_{0+}^{q}u(t)=\Big(\frac{d}{dt}\Big)^{n}I_{0+}^{n-q}u(t)=\frac{1}{\Gamma(n-q)}\Big(\frac{d}{dt}\Big)^{n}\int_{0}^{t}\frac{u(\tau)\,d\tau}{(t-\tau)^{q-n+1}}\quad(n=[q]+1,\ t>0),$$
where $[q]$ means the integral part of $q$.

Lemma 2.1 [11] If $q_{1},q_{2}>0$, $q>0$, then, for $u(t)\in L_{p}(0,1)$, the relations $I_{0+}^{q_{1}}I_{0+}^{q_{2}}u(t)=I_{0+}^{q_{1}+q_{2}}u(t)$ and $D_{0+}^{q_{1}}I_{0+}^{q_{1}}u(t)=u(t)$ hold a.e. on $[0,1]$.

Lemma 2.2 (see [11]) Let $q>0$, $n=[q]+1$, $D_{0+}^{q}u(t)\in L_{1}(0,1)$; then we have the equality
$$I_{0+}^{q}D_{0+}^{q}u(t)=u(t)+\sum_{i=1}^{n}C_{i}t^{q-i},$$
where $C_{i}\in\mathbb{R}$ ($i=1,2,\ldots,n$) are some constants.

Lemma 2.3 (see Corollary 2.1 in [10]) Let $q>0$ and $n=[q]+1$; the equation $D_{0+}^{q}u(t)=0$ is valid if and only if $u(t)=\sum_{i=1}^{n}C_{i}t^{q-i}$, where $C_{i}\in\mathbb{R}$ ($i=1,2,\ldots,n$) are arbitrary constants.

Let $X=Z=C[0,1]$ with the norm $\|u\|=\sup_{t\in[0,1]}|u(t)|$; then $X$ and $Z$ are Banach spaces. Let $K=\{u\in X:u(t)\ge 0,\ t\in[0,1]\}$. It follows from Theorem 1.1.1 in [12] that $K$ is a normal cone. Let
$$\operatorname{dom}(L)=\{u(t)\in X\mid D_{0+}^{q}u(t)\in Z,\ u(t)\ \text{satisfies BCs (1.2)}\}.$$
We define the operators $L:\operatorname{dom}(L)\to Z$ by
$$(Lu)(t)=D_{0+}^{q}u(t) \tag{2.1}$$
and $N:K\to Z$ by $(Nu)(t)=f(t,u(t))$; then BVPs (1.1) and (1.2) can be written as $Lu=Nu$, $u\in K\cap\operatorname{dom}(L)$.

Lemma 2.4 If the operator $L$ is defined in (2.1), then

(i) $\operatorname{Ker}(L)=\{ct^{q-1}\mid c\in\mathbb{R}\}$,

(ii) $\operatorname{Im}(L)=\big\{y\in Z\mid \int_{0}^{1}(1-s)^{q-2}\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q-1}\int_{\xi_{i}s}^{s}y(\tau)\,d\tau\,ds=0\big\}=:\mathcal{L}$.

Proof (i) It can be seen from Lemma 2.3 and BCs (1.2) that $\operatorname{Ker}(L)=\{ct^{q-1}\mid c\in\mathbb{R}\}$.

(ii) If $y\in\operatorname{Im}(L)$, then there exists a function $u\in\operatorname{dom}(L)$ such that $y(t)=D_{0+}^{q}u(t)$; by Lemma 2.2, we have
$$I_{0+}^{q}y(t)=u(t)+c_{1}t^{q-1}+\cdots+c_{n}t^{q-n}.$$
It follows from BCs (1.2) and the equation $\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q-1}=1$ that $I_{0+}^{q}y(1)=\sum_{i=1}^{m-2}\alpha_{i}I_{0+}^{q}y(\xi_{i})$, and, noting the definition of $I_{0+}^{q}$, we have
$$I_{0+}^{q}y(t)=\frac{1}{\Gamma(q)}\int_{0}^{t}(t-s)^{q-1}y(s)\,ds=\frac{q-1}{\Gamma(q)}\int_{0}^{t}(t-s)^{q-2}\int_{0}^{s}y(\tau)\,d\tau\,ds.$$
Thus,
$$\frac{q-1}{\Gamma(q)}\int_{0}^{1}(1-s)^{q-2}\int_{0}^{s}y(\tau)\,d\tau\,ds
=\frac{q-1}{\Gamma(q)}\sum_{i=1}^{m-2}\alpha_{i}\int_{0}^{\xi_{i}}(\xi_{i}-s)^{q-2}\int_{0}^{s}y(\tau)\,d\tau\,ds$$
$$=\frac{q-1}{\Gamma(q)}\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}\int_{0}^{1}(\xi_{i}-\xi_{i}s)^{q-2}\int_{0}^{\xi_{i}s}y(\tau)\,d\tau\,ds
=\frac{q-1}{\Gamma(q)}\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q-1}\int_{0}^{1}(1-s)^{q-2}\int_{0}^{\xi_{i}s}y(\tau)\,d\tau\,ds,$$
which is
$$\int_{0}^{1}(1-s)^{q-2}\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q-1}\int_{\xi_{i}s}^{s}y(\tau)\,d\tau\,ds=0.$$
Then $y\in\mathcal{L}$, hence $\operatorname{Im}(L)\subset\mathcal{L}$. On the other hand, if $y\in\mathcal{L}$, let $u(t)=I_{0+}^{q}y(t)$; then $u\in\operatorname{dom}(L)$ and $D_{0+}^{q}u(t)=D_{0+}^{q}I_{0+}^{q}y(t)=y(t)$, which implies that $y\in\operatorname{Im}(L)$, thus $\mathcal{L}\subset\operatorname{Im}(L)$. Altogether, $\operatorname{Im}(L)=\mathcal{L}$.

Clearly, $\operatorname{Im}(L)$ is closed in $Z$ and $\dim\operatorname{Ker}(L)=\operatorname{codim}\operatorname{Im}(L)=1$; thus $L$ is a Fredholm operator of index zero. This completes the proof. □

In what follows, some operators are defined. We define continuous projectors $P:X\to X$ by
$$(Pu)(t)=q\Big(\int_{0}^{1}u(s)\,ds\Big)t^{q-1}$$
and $Q:Z\to Z$ by
$$(Qu)(t)=\frac{1}{\gamma_{0}}\int_{0}^{1}(1-s)^{q-2}\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q-1}\int_{\xi_{i}s}^{s}u(\tau)\,d\tau\,ds,$$
where
$$\gamma_{0}=\int_{0}^{1}(1-s)^{q-2}\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q-1}\int_{\xi_{i}s}^{s}d\tau\,ds
=\int_{0}^{1}s(1-s)^{q-2}\,ds\,\Big(1-\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q}\Big)
=B(2,q-1)\Big(1-\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q}\Big)>0.$$
Here $B(x,y)$ is the beta function defined by $B(x,y)=\int_{0}^{1}t^{x-1}(1-t)^{y-1}\,dt$. By calculating, we easily obtain $P^{2}=P$, $Q^{2}=Q$, and $X=\operatorname{Ker}(L)\oplus\operatorname{Ker}(P)$, $Z=\operatorname{Im}(L)\oplus\operatorname{Im}(Q)$.

We also define $J:\operatorname{Im}(Q)\to\operatorname{Ker}(L)$ by $J(c)=ct^{q-1}$, $c\in\mathbb{R}$, and $K_{P}:\operatorname{Im}(L)\to\operatorname{dom}(L)\cap\operatorname{Ker}(P)$ by
$$(K_{P}(u))(t)=(I_{0+}^{q}u)(t)=\frac{1}{\Gamma(q)}\int_{0}^{t}(t-s)^{q-1}u(s)\,ds,$$
thus
$$(QN(u))(t)=\frac{1}{\gamma_{0}}\int_{0}^{1}(1-s)^{q-2}\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q-1}\int_{\xi_{i}s}^{s}f(\tau,u(\tau))\,d\tau\,ds$$
and
$$(K_{P}(I-Q)N(u))(t)=\frac{1}{\Gamma(q)}\int_{0}^{t}(t-s)^{q-1}f(s,u(s))\,ds
-\frac{1}{\Gamma(q)}\int_{0}^{t}(t-s)^{q-1}\frac{1}{\gamma_{0}}\int_{0}^{1}(1-\tilde{s})^{q-2}\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q-1}\int_{\xi_{i}\tilde{s}}^{\tilde{s}}f(\tau,u(\tau))\,d\tau\,d\tilde{s}\,ds.$$

Lemma 2.5 Let $\Omega$ be any open bounded subset of $K\cap\operatorname{dom}(L)$; then $QN(\overline{\Omega})$ and $K_{P}(I-Q)N(\overline{\Omega})$ are compact, which implies that $N$ is $L$-compact on $\overline{\Omega}$ for any open bounded set $\Omega\subset K\cap\operatorname{dom}(L)$.

Proof For a positive integer $n$, let $\Omega=\{u\in K\cap\operatorname{dom}(L):\|u\|\le n\}$ and $M=\sup f(t,u(t))$ for $(t,u)\in[0,1]\times[0,n]$. It is easy to see that $QN(\overline{\Omega})$ is compact. Now, we prove that $K_{P}(I-Q)N(\overline{\Omega})$ is compact. For $u\in\overline{\Omega}$, noting that $|(QN(u))(t)|\le M$, we have
$$\|K_{P}(I-Q)N(u)\|\le\sup_{t\in[0,1]}\Big|\frac{1}{\Gamma(q)}\int_{0}^{t}(t-s)^{q-1}f(s,u(s))\,ds\Big|
+\sup_{t\in[0,1]}\Big|\frac{1}{\Gamma(q)}\int_{0}^{t}(t-s)^{q-1}(QN(u))\,ds\Big|
\le\frac{2M}{\Gamma(q)}\sup_{t\in[0,1]}\Big|\int_{0}^{t}(t-s)^{q-1}\,ds\Big|=\frac{2M}{\Gamma(q+1)},$$
which implies that $K_{P}(I-Q)N(\overline{\Omega})$ is bounded.
Moreover, for each $u\in\overline{\Omega}$, let $t_{1},t_{2}\in[0,1]$ with $t_{1}>t_{2}$; then, splitting each integral over $[0,t_{2}]$ and $[t_{2},t_{1}]$ and bounding $f$ and $QN(u)$ by $M$,
$$|(K_{P}(I-Q)N(u))(t_{1})-(K_{P}(I-Q)N(u))(t_{2})|
\le\frac{2M}{\Gamma(q)}\Big|\int_{0}^{t_{1}}(t_{1}-s)^{q-1}\,ds-\int_{0}^{t_{2}}(t_{2}-s)^{q-1}\,ds\Big|
+\frac{2M}{\Gamma(q)}\Big|\int_{t_{2}}^{t_{1}}(t_{1}-s)^{q-1}\,ds\Big|$$
$$\le\frac{2M}{\Gamma(q+1)}\,|t_{1}^{q}-t_{2}^{q}|+\frac{2M}{\Gamma(q)}\,|t_{1}-t_{2}|
=\frac{2M}{\Gamma(q+1)}\,q\,\eta^{q-1}|t_{1}-t_{2}|+\frac{2M}{\Gamma(q)}\,|t_{1}-t_{2}|
\le\frac{(2q+2)M}{\Gamma(q)}\,|t_{1}-t_{2}|,$$
where $\eta=t_{1}+\theta(t_{2}-t_{1})$, $0<\theta<1$, by the mean value theorem. Thus, for every $\varepsilon>0$ there exists $\delta=\frac{\Gamma(q)}{(2q+2)M}\,\varepsilon$ such that
$$|(K_{P}(I-Q)N(u))(t_{1})-(K_{P}(I-Q)N(u))(t_{2})|<\varepsilon$$
whenever $|t_{1}-t_{2}|<\delta$, for each $u\in\overline{\Omega}$. It is concluded that $N$ is $L$-compact on $\overline{\Omega}$. This completes the proof. □

### 3 Main result

In this section, we establish the existence of nonnegative solutions to equations (1.1) and (1.2).

Theorem 3.1 Suppose:

(H1) There exist $u_{0},v_{0}\in K\cap\operatorname{dom}(L)$ such that $u_{0}\le v_{0}$ and
$$\begin{cases}
D_{0+}^{q}u_{0}(t)\le f(t,u_{0}(t)), & t\in[0,1],\\
D_{0+}^{q}v_{0}(t)\ge f(t,v_{0}(t)), & t\in[0,1].
\end{cases}$$

(H2) For any $x,y\in K\cap\operatorname{dom}(L)$ with $u_{0}(t)\le y(t)\le x(t)\le v_{0}(t)$, $t\in[0,1]$,
$$f(t,x(t))-f(t,y(t))\ge -q\Big(\int_{0}^{1}x(t)\,dt-\int_{0}^{1}y(t)\,dt\Big).$$

Then problems (1.1) and (1.2) have a minimal solution $u^{*}$ and a maximal solution $v^{*}$ in $[u_{0},v_{0}]$.

Proof By condition (H1), we know that $Lu_{0}\le Nu_{0}$, $Lv_{0}\ge Nv_{0}$, so condition (C1) in Theorem 1.2 holds. In addition, for each $u\in K$,
$$(P(u)+JQN(u)+K_{P}(I-Q)N(u))(t)
=q\Big(\int_{0}^{1}u(s)\,ds\Big)t^{q-1}+\frac{1}{\Gamma(q)}\int_{0}^{t}(t-s)^{q-1}f(s,u(s))\,ds
+(QN(u))\Big(t^{q-1}-\frac{1}{\Gamma(q)}\int_{0}^{t}(t-s)^{q-1}\,ds\Big),$$
and since $QN(u)\ge 0$ and
$$t^{q-1}-\frac{t^{q}}{\Gamma(q+1)}\ge 0\quad\text{for }t\in[0,1],$$
every term is nonnegative. Thus $(P+JQN+K_{P}(I-Q)N)(K)\subset K$; that is, $N+J^{-1}P:K\cap\operatorname{dom}(L)\to K_{1}$ by virtue of the equivalence. From condition (H2), we have that $N+J^{-1}P:K\cap\operatorname{dom}(L)\to K_{1}$ is a monotone increasing operator. Then, in accordance with Lemma 2.5 and Theorem 1.2, we obtain a minimal solution $u^{*}$ and a maximal solution $v^{*}$ in $[u_{0},v_{0}]$ for problems (1.1) and (1.2).
Thus we can define the iterative sequences $\{u_{n}(t)\}$ and $\{v_{n}(t)\}$ by
$$u_{n}=(L+J^{-1}P)^{-1}(N+J^{-1}P)u_{n-1}=(JQ+K_{P}(I-Q))\Big(f(t,u_{n-1}(t))+q\int_{0}^{1}u_{n-1}(s)\,ds\Big),$$
that is, explicitly,
$$u_{n}(t)=\frac{1}{\gamma_{0}}\int_{0}^{1}(1-s)^{q-2}\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q-1}\int_{\xi_{i}s}^{s}\Big(f(\tau,u_{n-1}(\tau))+q\int_{0}^{1}u_{n-1}(\hat{s})\,d\hat{s}\Big)d\tau\,ds\;t^{q-1}$$
$$+\frac{1}{\Gamma(q)}\int_{0}^{t}(t-s)^{q-1}\Big(f(s,u_{n-1}(s))+q\int_{0}^{1}u_{n-1}(\tilde{s})\,d\tilde{s}\Big)ds$$
$$-\frac{1}{\Gamma(q)}\int_{0}^{t}(t-s)^{q-1}\frac{1}{\gamma_{0}}\int_{0}^{1}(1-\tilde{s})^{q-2}\sum_{i=1}^{m-2}\alpha_{i}\xi_{i}^{q-1}\int_{\xi_{i}\tilde{s}}^{\tilde{s}}\Big(f(\tau,u_{n-1}(\tau))+q\int_{0}^{1}u_{n-1}(\hat{s})\,d\hat{s}\Big)d\tau\,d\tilde{s}\,ds,$$
and $v_{n}$ is given by the same expression with $u_{n-1}$ replaced by $v_{n-1}$, $n=1,2,3,\ldots$ Then, from Theorem 1.2, we get that $\{u_{n}\}$ and $\{v_{n}\}$ converge uniformly to $u^{*}(t)$ and $v^{*}(t)$, respectively. Moreover,
$$u_{0}\le u_{1}\le u_{2}\le\cdots\le u_{n}\le\cdots\le v_{n}\le\cdots\le v_{2}\le v_{1}\le v_{0}.\qquad\square$$

### 4 Example

We consider the following problem:
$$D_{0+}^{3/2}u(t)=\Big(\frac{u^{2}}{u^{2}+1}+t\Big)^{m},\quad 0<t<1,\ m>0, \tag{4.1}$$
subject to the BCs
$$D_{0+}^{1/4}u(t)\big|_{t=0}=0,\qquad u(1)=\sqrt{2}\,u\big(\tfrac{1}{2}\big). \tag{4.2}$$
We can choose
$$u_{0}(t)=\frac{1}{\Gamma(\frac{3}{2})}\int_{0}^{t}(t-s)^{\frac{1}{2}}s^{m}\,ds+t^{\frac{1}{2}}
\le\frac{1}{\Gamma(\frac{3}{2})}\int_{0}^{t}(t-s)^{\frac{1}{2}}(s+1)^{m}\,ds+t^{\frac{1}{2}}=v_{0}(t);$$
then
$$D_{0+}^{3/2}u_{0}(t)=t^{m}\le\Big(\frac{u^{2}}{u^{2}+1}+t\Big)^{m}\le(t+1)^{m}=D_{0+}^{3/2}v_{0}(t).$$
Let $\operatorname{dom}(L)=\{u(t)\in X\mid D_{0+}^{3/2}u(t)\in Z,\ u(t)\ \text{satisfies BCs (4.2)}\}$; then for any $x,y\in K\cap\operatorname{dom}(L)$ with $u_{0}(t)\le y(t)\le x(t)\le v_{0}(t)$, we have
$$\Big(\frac{x^{2}}{x^{2}+1}+t\Big)^{m}-\Big(\frac{y^{2}}{y^{2}+1}+t\Big)^{m}\ge-\frac{3}{2}\Big(\int_{0}^{1}x(t)\,dt-\int_{0}^{1}y(t)\,dt\Big).$$
Finally, by Theorem 3.1, equation (4.1) with BCs (4.2) has a minimal solution $u^{*}$ and a maximal solution $v^{*}$ in $[u_{0},v_{0}]$.

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

The authors declare that the study was realized in collaboration with the same responsibility. All authors read and approved the final manuscript.

### Acknowledgements

The authors would like to thank the referees for their many constructive comments and suggestions to improve the paper.

### References

1. Infante, G, Zima, M: Positive solutions of multi-point boundary value problems at resonance. Nonlinear Anal. 69, 2458-2465 (2008)
2. Kosmatov, N: Multi-point boundary value problems on an unbounded domain at resonance. Nonlinear Anal. 68, 2158-2171 (2008)
3. Yang, L, Shen, CF: On the existence of positive solution for a kind of multi-point boundary value problem at resonance. Nonlinear Anal. 72, 4211-4220 (2010)
4. Bai, Z, Zhang, Y: The existence of solutions for a fractional multi-point boundary value problem. Comput. Math. Appl. 60, 2364-2372 (2010)
5. Zhang, Y, Bai, Z: Existence of solutions for nonlinear fractional three-point boundary value problems at resonance. J. Appl. Math. Comput. 36, 417-440 (2011)
6. Du, Z: Solvability of functional differential equations with multi-point boundary value problems at resonance. Comput. Math. Appl. 55, 2653-2661 (2008)
7. Han, X: Positive solutions for a three-point boundary value problem at resonance. J. Math. Anal. Appl. 36, 556-568 (2007)
8. Wang, F, Cui, YJ, Zhang, F: Existence of nonnegative solutions for second order m-point boundary value problems at resonance. Appl. Math. Comput. 217, 4849-4855 (2011)
9. Cremins, CT: A fixed-point index and existence theorems for semilinear equations in cones. Nonlinear Anal. 42, 789-806 (2001)
10. Kilbas, AA, Srivastava, HM, Trujillo, JJ: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006)
11. Chen, Y, Tang, X: Positive solutions of fractional differential equations at resonance on the half-line. Bound. Value Probl. 2012:64 (2012). doi:10.1186/1687-2770-2012-64
12. Guo, DJ, Lakshmikantham, V: Nonlinear Problems in Abstract Cones. Academic Press, New York (1988)
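As a small numerical illustration of Definition 2.1 and the example's choice of $u_{0}$ (an editorial sketch, assuming scipy is available; not part of the paper), one can check the standard closed form $I_{0+}^{q}t^{m}=\frac{\Gamma(m+1)}{\Gamma(m+q+1)}t^{m+q}$ against direct quadrature for $q=3/2$:

```python
# Riemann-Liouville fractional integral (I_{0+}^q f)(t) by quadrature,
# checked against the closed form I^q t^m = G(m+1)/G(m+q+1) * t^(m+q).
from math import gamma
from scipy.integrate import quad

def rl_integral(f, q, t):
    """Compute (I_{0+}^q f)(t) = (1/Gamma(q)) * int_0^t (t-s)^(q-1) f(s) ds."""
    val, _err = quad(lambda s: (t - s)**(q - 1) * f(s), 0, t)
    return val / gamma(q)

m, q, t = 2.0, 1.5, 0.7                       # illustrative values only
numeric = rl_integral(lambda s: s**m, q, t)
closed = gamma(m + 1) / gamma(m + q + 1) * t**(m + q)
print(numeric, closed)                        # agree to quadrature accuracy
```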
# Notes on paretian distribution theory

## by G. M. Kaufman

4 editions found in the catalog. Written in English.

Subjects: Distribution (Probability theory)

Edition notes: Series: M.I.T. School of Industrial Management, Working paper 27-63 (Working paper (Sloan School of Management), 27-63). Pagination: [18] leaves (18 pages). ID numbers: Open Library OL14031085M; OCLC/WorldCat 14292184.

A systematic exposition of the theory of distributions is given in Grubb's recent Distributions and Operators [2]. There is also the recommended reference work by Strichartz, A Guide to Distribution Theory and Fourier Transforms [3], and the comprehensive, although quite old, treatise on the subject…

The Pareto distribution, named after the Italian civil engineer, economist, and sociologist Vilfredo Pareto, is a power-law probability distribution that is used in description of social, scientific, geophysical, actuarial, and many other types of observable phenomena. Originally it was applied to describing the distribution of wealth in a society, fitting the trend that a large portion of wealth is held by a small fraction of the population.

Models and physical reality: probability theory is a mathematical model of uncertainty. In these notes, we introduce examples of uncertainty and we explain how the theory models them. It is important to appreciate the difference between uncertainty in the physical world…

Expository notes on distribution theory: Theorem. Let $f$ be an analytic function in the upper half plane. If there exists $N$ such that for every bounded interval $I$ there exists $C$ such that $|f(x+iy)| \le C|y|^{-N}$, then $\lim_{y \to 0^+} f(x+iy)$ exists in the sense of distribution theory and is a distribution…

Reading these notes: these notes will be given out in parts to accompany the first seven weeks of class. The notes do not replace the readings but should help with the lectures and should summarize some key information in a single place.

Pareto Principle, Social Welfare Function and Political Choice: "The Paretian welfare theorems, which rest comfortably on ordinal utility, were deemed the only acceptable criterion." However, many Paretians were dissatisfied with Robbins's conclusion…

You might also like:
- World Cup cock-ups
- Indigenous trees of the Uganda Protectorate
- Pulmonary embolic disease
- Romance of summer
- Henry Phillpotts, Bishop of Exeter, 1778-1869
- Historical particulars relating to Southampton
- Village down east
- My 1st Book of Questions
- Investment, expansion and new technology
- Algérie, 1954
Distributions: Understanding the Gaussian and Paretian Worlds, written by PH Editor, is the first of twelve themes to be written about this year.

…a stable Paretian distribution with parameters $\alpha = 2$, $\delta = \mu$, and $\gamma = \sigma^2/2$. For a proof of these statements see Gnedenko and Kolmogorov, op. cit. It is important to distinguish between the stable Paretian distributions and the stable Paretian hypothesis…

Intro. In this chapter we start to make precise the basic elements of the theory of distributions. We start by introducing and studying the space of test functions $\mathcal{D}$, i.e., of smooth functions which have compact support. We are going to construct non-trivial test functions…

Your request is strange: PDEs are the fundamental application, the origin, and the main source of examples for distribution theory, so it is no surprise that all the books on distributions…

The smallest $k$ that can be used is called the order of the distribution. $\mathcal{D}'_F = \bigcup_k \mathcal{D}'_k$ are the distributions of finite order. Examples: (a) a function $f \in L^1_{\mathrm{loc}}$ is a distribution of order 0; (b) a measure is a distribution of order 0; (c) $u(\varphi) = \partial^\alpha \varphi(x_0)$ defines a distribution of order $|\alpha|$; (d) let $x_j$ be a sequence without limit point in…

Distribution Theory (Generalized Functions) Notes. This note covers the following topics: the Fourier transform, convolution, the Fourier-Laplace transform, the structure theorem for distributions, and partial differential equations.

Distribution theory book (a reader's question): I'm looking for a good book on distribution theory (in the Schwartz sense). I have the basic knowledge as given in Grafakos' Classical Fourier Analysis, but I want to know more about it.

This note starts by introducing the basic concepts of function spaces and operators, both from the continuous and discrete viewpoints. It introduces the Fourier and Window Fourier Transform, the classical tools for function analysis in the frequency domain. Author(s): Jonas Gomes and Luiz Velho.

The field of Paretian science, extreme event theory, and complexity is relatively young, dating from the first Pareto distribution in Pareto's publication of rank/frequency data…
The characteristic exponent $\alpha$ of a stable Paretian distribution determines the total probability in the extreme tails of the distribution and can take any value in the interval $0 < \alpha \le 2$. The limiting case $\alpha = 2$ of the stable Paretian distribution is the normal distribution with mean $\mu$ and variance $\sigma^2$.

A conditional distribution has all the properties of an ordinary distribution. Independence of $X$ and $Y$ means that the outcome of $X$ cannot influence the outcome of $Y$ (and vice versa), something we can gather from the experiment. This implies that $\Pr(X=i \cap Y=j) = \Pr(X=i) \times \Pr(Y=j)$ for every possible combination of $i$ and $j$.

Outline of related material: 8. Beta Distribution; Notes on Beta and Gamma Functions; Definitions; Interrelationships; Special Values; Alternative Expressions; Variate Relationships; Parameter Estimation; Random Number Generation; Inverted Beta Distribution; Noncentral Beta Distribution; Beta Binomial…

…Pareto's distribution so as to include considerations about the maximization of personal income among several available alternatives. In this fashion, one obtains a total income distribution that is the mixture of several Paretian laws, with different "alpha" coefficients, and also of…

The distribution theory associated with samples from a generalized Pareto distribution (i.e., Equation 5) is generally complicated. It is not difficult to determine that convolutions of such Pareto distributions exhibit Paretian tail behavior, but closed expressions for the convolved distribution usually are not available (for $n > 3$).

The Pareto Distribution: Background. Consider an arbitrary power function $x \mapsto kx^\alpha$, where $k$ is a constant and the exponent $\alpha$ governs the relationship. Note that if $y = kx^\alpha$, then $\log y = \log k + \alpha \log x$. That is, the logarithm of $y$ is a linear function of the logarithm of $x$.

Regular Variation, Paretian Distributions, and the Interplay of Light and Heavy Tails in the Fractality of Asymptotic Models (book chapter).

Probability: about these notes. Many people have written excellent notes for introductory courses in probability. Mine draw freely on material prepared by others in presenting this course to students at Cambridge. I wish to acknowledge especially Geoffrey Grimmett, Frank Kelly and Doug…

S. Mittnik and T. Doganoglu (Institute of Statistics and Econometrics, Christian-Albrechts University at Kiel) and D. Chenyao (Equities Department, New York Stock Exchange): Computing the Probability Density Function of the Stable Paretian Distribution. Mathematical and Computer Modelling 29.
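To make the log-log linearity of the power function above concrete, here is a short Python sketch (an illustration added here, not part of the quoted notes; the sample size and the parameter values alpha and x_m are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(0)
alpha, x_m = 1.5, 1.0  # assumed shape and scale parameters

# Sample from a Pareto distribution by inverse-transform sampling:
# F(x) = 1 - (x_m / x)**alpha  =>  x = x_m * (1 - u)**(-1 / alpha)
u = rng.uniform(size=100_000)
x = x_m * (1.0 - u) ** (-1.0 / alpha)

# Empirical survival function P(X > t) on a logarithmic grid of thresholds
t = np.logspace(0, 2, 50)
survival = (x[:, None] > t).mean(axis=0)

# A power-law tail is a straight line in log-log space; the fitted slope
# should come out close to -alpha.
mask = survival > 0
slope, _ = np.polyfit(np.log(t[mask]), np.log(survival[mask]), 1)
print(f"fitted tail slope: {slope:.2f} (expected about {-alpha:.2f})")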
# Math Help - Necessary and sufficient condition for isomorphism

1. ## Necessary and sufficient condition for isomorphism

Hi everyone! I have the following problem: $G$ is a finite group of order $n$ and $\varphi:G\rightarrow G$ is defined by $\varphi(a)=a^m\;\forall\;a\in G$. I need to find a necessary and sufficient condition so that $\varphi$ is an isomorphism. Any suggestion?

2. ## Re: Necessary and sufficient condition for isomorphism

Originally Posted by Jagger: Hi everyone! I have the following problem: $G$ is a finite group of order $n$ and $\varphi:G\rightarrow G$ is defined by $\varphi(a)=a^m\;\forall\;a\in G$. I need to find a necessary and sufficient condition so that $\varphi$ is an isomorphism. Any suggestion?

Claim: $\varphi:G\rightarrow G$ defined by $\varphi(a)=a^m\;\forall\;a\in G$, $m > 1$, is an isomorphism iff $G$ is abelian and $(m,n)=1$.

$\varphi^p$ will stand for $\varphi$ composed with itself $p$ times (naturally $p \in \mathbb{N}$).

Exercise 1: Prove that if $\varphi$ is an isomorphism then $\varphi^p$ is an isomorphism too.

Assume $\varphi$ is an isomorphism; you can then prove the conditions in two steps.

Step 1: Let $(m,n)=d$. Then write $ma + nb = d$ for some integers $a$ and $b$ (why can you do this?). Then $\varphi^{a}(g) = g^{ma} = g^d$. [Can I do the above step if $a$ is negative?]

Exercise 2: What happens if $d > 1$? [Hint: look at $\ker(\varphi^a)$ and rule out this case.]

If $d=1$, we have $\varphi^a(g) = g \implies \varphi^{2a}(g) = g^2$, and,

Exercise 3: $\varphi^{2a}$ is a homomorphism iff $G$ is abelian (prove it!).

Step 2: Exercise 4: Verify that if $G$ is abelian and $(m,n)=1$, then $\varphi$ is an isomorphism. This part is easy. Use the abelian property to prove the function is a homomorphism, and use $ma + nb = 1$ to prove $\ker(\varphi) = \{e\}$.

Done!

3. ## Re: Necessary and sufficient condition for isomorphism

Thanks for your answer! I forgot to say that $G$ was abelian, but you knew this. I don't understand, in Exercise 4, how using $ma+nb=1$ you can prove $\ker(\varphi)=\{e\}$, and if you prove this, do you have that $\varphi$ is an isomorphism, or do you have to prove that $\varphi$ is surjective too? Thanks again!

4. ## Re: Necessary and sufficient condition for isomorphism

Hint: $g = g^{ma+nb}$.

Do we need to prove surjectivity in this case? Notice that the map is $\varphi: G \to G$. In this case surjectivity is equivalent to injectivity.

5. ## Re: Necessary and sufficient condition for isomorphism

For maps on finite sets, injective implies surjective and vice versa. Now think about what it means for $g$ to be in $\ker(\varphi)$: it means $g^m = e$. There are two ways this could happen: $g = e$, or $|g|$ divides $m$. Suppose that $|g|$ divides $m$. Since $|g|$ also divides $|G| = n$, $|g|$ is a common divisor, so.....
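As a computational footnote to the thread above (my own addition, not one of the original posts): for the cyclic group $\mathbb{Z}_n$ written additively, the map is $\varphi(a) = ma$, and a brute-force check in Python confirms the gcd criterion.

from math import gcd

def is_automorphism_of_Zn(m: int, n: int) -> bool:
    """Check whether a -> m*a (mod n) is a bijection of Z_n.

    On an abelian group this map is automatically a homomorphism,
    so bijectivity is the only thing that can fail.
    """
    return len({(m * a) % n for a in range(n)}) == n

for n in range(2, 30):
    for m in range(1, n):
        assert is_automorphism_of_Zn(m, n) == (gcd(m, n) == 1)
print("verified: a -> m*a is an automorphism of Z_n iff gcd(m, n) = 1")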
# Generic method for implementing an if-else statement in hardware (using gates, mux-demux, flip-flops, etc.)

I am currently working on converting a high-level language into an equivalent circuit. I am able to convert simple expressions like a+b, a.b, or a combination of them using gates. But I wanted to know if there's a generic method to implement an if-else statement using electronic components such as gates, muxes, and flip-flops. A simple answer would be to use a mux-demux. But that wouldn't solve the following problem (for example):

if(posedge(clock)):
    q<=d

The construct for that would be a positive edge triggered flip-flop. So is there any general way to implement an if-else statement? Any help would be appreciated. Thanks!

For combinational logic, if/else is implemented as a 2:1 multiplexer. In Boolean algebra, this would be:

Q = (A * S) + (B * S')

where:

• S is the input fed by the if condition,
• A is the input fed by the then subexpression,
• B is the input fed by the else subexpression, and
• Q is the output of the expression.

You could theoretically generalize this to include a single clock edge, but it gets a lot more complex and would resemble an FPGA cell when you're done. Basically, if a clock edge were included, you could not have an else clause (because it is implicitly "do not change the output"), and any non-edge parts of the if condition would simply become the clock enable expression. Once the dust settled, you'd be left with a less-clear version of the always_ff statement, which you should use instead anyway. Conditions with two or more clock edges are not synthesizable.

EDIT: First, I'm not sure if(posedge(...)) is synthesizable. In general, you use the posedge(...) clause in the always_ff @(...) line and don't need the posedge() inside the block. In SystemVerilog, the generic form of a 2:1 multiplexer is an if statement. For example:

always_comb begin
    if(S)
        Q = A;
    else
        Q = B;
end

If there's a clock edge, though, you need to use a flip-flop:

always_ff @(posedge CLK) begin
    if(CLK_ENA)
        Q <= D;
end

Adding an asynchronous reset looks like this:

always_ff @(posedge RESET, posedge CLK) begin
    if(RESET)
        Q <= '0;
    else if(CLK_ENA)
        Q <= D;
end

In this case, RESET is active-high. Note that you only need to say RESET is edge sensitive in the @() part. In the rest of the block, RESET will have the level after the edge. Also note that the edge-sensitivities are a list; you can't say "and". (In original Verilog, you separated edge sensitivities with "or", misleading people into thinking "and" could work as well.)

• Hey, thanks a lot for your reply! However, when I am generalizing, what do I use as S? (I have a posedge(clk), i.e. an attribute of a signal and not a signal itself.) Also, could you kindly explain what you meant by "any non-edge part of the if condition becomes the clock enable expression"? Thanks a lot! Jun 14 '11 at 15:34
• What do you mean by "^ ........"? Jun 14 '11 at 16:39
• I realized I hadn't tagged you in my reply to your comment, so I wrote another comment to direct you to the first one; hence the "^". I used the remaining dots since a comment has to be longer than 15 characters. Anyway, could you please solve my query? Jun 14 '11 at 17:16
• An answerer gets notified of all comments to their answer, so I didn't miss you. You only need to use the tag if you were replying to a commenter. As to your queries, first, see the combinational example for where S is. Since you have an edge in your condition, you can't use combinational logic, and must use a flip-flop instead.
As to your second query, see the flip-flop example above. It could be thought of as if(CLK_ENA & posedge(CLK)); note that part of that condition is posedge() and part is not. Jun 14 '11 at 20:40

• Thanks for the reply! But how would you, in general, synthesize code with more than one element in the sensitivity list? With one element in the sensitivity list I can feed it to the clock, but what about two elements there? I believe the "reset idea" won't always work. And what if there are more than 2? Thanks!! Jun 18 '11 at 14:39

Wouldn't this be a simple combination of an AND and an XOR? The AND has the logic being tested on one input, with the second input tied high. The XOR likewise has the logic being tested on one input, with the second input tied high.

## AND (IF)

0 (1) = 0
1 (1) = 1

## XOR (ELSE)

0 (1) = 1
1 (1) = 0

• Strictly speaking, since else is a catch-all, wouldn't NAND be more appropriate than XOR? Nov 30 '15 at 16:16
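As a software cross-check of the combinational answer above (my own addition, not part of the thread), the mux equation Q = (A * S) + (B * S') can be verified against the if/else semantics over all input combinations:

from itertools import product

# Verify that the 2:1 multiplexer equation matches if/else for
# every combination of the three one-bit inputs.
for a, b, s in product([0, 1], repeat=3):
    mux = (a & s) | (b & (1 - s))  # Q = (A * S) + (B * S')
    if_else = a if s else b        # the high-level construct
    assert mux == if_else
print("mux equation matches if/else for all 8 input combinations")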
# Converting a Keras model to an SNN on Loihi

This notebook describes how to train a network in Keras and convert it to a spiking neural network (SNN) to run on Loihi. Intel's Loihi chip is a type of "neuromorphic" hardware: specialized neural network acceleration hardware that uses spike-based communication like neurons in the brain. In this tutorial, we will look at how to set up our network to target and run on Intel's Loihi chip using the Nengo Loihi backend. While in general the Nengo ecosystem allows users to switch between backends without changes to their models, we will see how making some Loihi-specific changes during training and inference allows us to take full advantage of its capabilities.

There are several ways to build SNNs to target Nengo Loihi. The CIFAR-10 Loihi example works through how to build up a deep spiking network to run on Loihi using the standard Nengo and NengoDL APIs. In NengoDL's Keras to SNN example, we looked at converting a Keras model to an SNN. Here, we will extend the Keras to SNN example, tailoring the model for execution on Loihi.

The goal of this notebook is to familiarize you with some of the nuances of running SNNs on the Loihi, and how to set these up starting from a neural network defined in Keras. The two focuses in this notebook are adding a network layer that runs off-chip to transform the input images into spikes, and training with a Loihi neuron model that captures the unique behaviour of Loihi's quantized neurons. We'll add the network layer and train and test with normal ReLU neurons first to see what kind of performance we can expect without quantization constraints. Then we'll train with the Loihi neurons to improve implementation performance, and finally we'll run the model on Loihi to measure the final performance (we use a simulated Loihi if actual Loihi hardware is not available).

[1]:
import collections
import warnings

%matplotlib inline
import matplotlib.pyplot as plt
import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

import nengo_loihi

# ignore NengoDL warning about no GPU
warnings.filterwarnings("ignore", message="No GPU", module="nengo_dl")

# The results in this notebook should be reproducible across many random seeds.
# However, some seed values may cause problems, particularly in the to-spikes layer
# where poor initialization can result in no information being sent to the chip. We set
# the seed to ensure that good results are reproducible without having to re-train.
np.random.seed(0)
tf.random.set_seed(0)

In this example we'll use the standard MNIST dataset.

[2]:
# load in MNIST dataset
(
    (train_images, train_labels),
    (test_images, test_labels),
) = tf.keras.datasets.mnist.load_data()

# flatten images and add time dimension
train_images = train_images.reshape((train_images.shape[0], 1, -1))
train_labels = train_labels.reshape((train_labels.shape[0], 1, -1))
test_images = test_images.reshape((test_images.shape[0], 1, -1))
test_labels = test_labels.reshape((test_labels.shape[0], 1, -1))

plt.figure(figsize=(12, 4))
for i in range(3):
    plt.subplot(1, 3, i + 1)
    plt.imshow(np.reshape(train_images[i], (28, 28)), cmap="gray")
    plt.axis("off")
    plt.title(str(train_labels[i, 0, 0]))

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step

## Implementing the network

We will start with the same network structure used in the Keras to SNN example: two convolutional layers and a dense layer. The only way to communicate with the Loihi is by sending spikes.
Usually, when we have a model that we want to run on neuromorphic hardware, we want the whole model that we’ve defined to run on the hardware. Communicating with Loihi, however, requires that we have at least one layer that runs off-chip to convert the input signal to spikes to send to the rest of the model running on Loihi. We’ll add a Conv2D layer to run off-chip and convert the input signal to spikes. This could also be an Activation layer. The advantage to the Activation layer is that it adds no extra parameters and minimizes off-chip computations. The Conv2D layer uses a few parameters, and requires a bit more off-chip computation, but gives the network much more flexibility as to how pixels are converted to spikes. An Activation layer would likely work well for simple images like MNIST, but for more complex images (e.g. with more than one color channel, or a wider range of intensity values) the flexibility of the Conv2D layer is important. We avoid layers like the Dense layer, as it significantly increases both the number of parameters and the number of computations that have to be run off-chip. On the output side of the network, we now have to worry about how many neurons are in the last layer run on the chip. We are limited in how many neurons we can record from on the board, so we add a Dense layer with 100 neurons between the last Conv2D layer and our 10-dimensional Dense output layer (which runs off-chip). This way, we only have to record from 100 neurons, rather than the 2,304 neurons we would need to record from if we connected directly from the last Conv2D layer to the 10-dimensional output. An added benefit is that the amount of off-chip computation is reduced, since the number of weights used by the off-chip output layer is 100 x 10 instead of 2304 x 10. 
[3]:
inp = tf.keras.Input(shape=(28, 28, 1), name="input")

# transform input signal to spikes using trainable 1x1 convolutional layer
to_spikes_layer = tf.keras.layers.Conv2D(
    filters=3,  # 3 neurons per pixel
    kernel_size=1,
    strides=1,
    activation=tf.nn.relu,
    use_bias=False,
    name="to-spikes",
)
to_spikes = to_spikes_layer(inp)

# on-chip convolutional layers
conv0_layer = tf.keras.layers.Conv2D(
    filters=32,
    kernel_size=3,
    strides=2,
    activation=tf.nn.relu,
    use_bias=False,
    name="conv0",
)
conv0 = conv0_layer(to_spikes)

conv1_layer = tf.keras.layers.Conv2D(
    filters=64,
    kernel_size=3,
    strides=2,
    activation=tf.nn.relu,
    use_bias=False,
    name="conv1",
)
conv1 = conv1_layer(conv0)

flatten = tf.keras.layers.Flatten(name="flatten")(conv1)

dense0_layer = tf.keras.layers.Dense(units=100, activation=tf.nn.relu, name="dense0")
dense0 = dense0_layer(flatten)

# since this final output layer has no activation function,
# it will be converted to a nengo.Node and run off-chip
dense1 = tf.keras.layers.Dense(units=10, name="dense1")(dense0)

model = tf.keras.Model(inputs=inp, outputs=dense1)
model.summary()

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input (InputLayer)           [(None, 28, 28, 1)]       0
_________________________________________________________________
to-spikes (Conv2D)           (None, 28, 28, 3)         3
_________________________________________________________________
conv0 (Conv2D)               (None, 13, 13, 32)        864
_________________________________________________________________
conv1 (Conv2D)               (None, 6, 6, 64)          18432
_________________________________________________________________
flatten (Flatten)            (None, 2304)              0
_________________________________________________________________
dense0 (Dense)               (None, 100)               230500
_________________________________________________________________
dense1 (Dense)               (None, 10)                1010
=================================================================
Total params: 250,809
Trainable params: 250,809
Non-trainable params: 0
_________________________________________________________________

### Training the networks

As in the Keras-to-SNN notebook, once we create our model we'll use the NengoDL Converter to translate it into a Nengo network, and then we'll train.

[4]:
def train(params_file="./keras_to_loihi_params", epochs=1, **kwargs):
    converter = nengo_dl.Converter(model, **kwargs)

    with nengo_dl.Simulator(converter.net, seed=0, minibatch_size=200) as sim:
        sim.compile(
            optimizer=tf.optimizers.RMSprop(0.001),
            loss={
                converter.outputs[dense1]: tf.losses.SparseCategoricalCrossentropy(
                    from_logits=True
                )
            },
            metrics={converter.outputs[dense1]: tf.metrics.sparse_categorical_accuracy},
        )
        sim.fit(
            {converter.inputs[inp]: train_images},
            {converter.outputs[dense1]: train_labels},
            epochs=epochs,
        )

        # save the parameters to file
        sim.save_params(params_file)

[5]:
# train this network with normal ReLU neurons
train(
    epochs=2,
    swap_activations={tf.nn.relu: nengo.RectifiedLinear()},
)

Build finished in 0:00:00
Optimization finished in 0:00:00
Construction finished in 0:00:00
Train on 60000 samples
Epoch 1/2
60000/60000 [==============================] - 24s 396us/sample - loss: 0.1756 - probe_loss: 0.1756 - probe_sparse_categorical_accuracy: 0.9457
Epoch 2/2
60000/60000 [==============================] - 23s 388us/sample - loss: 0.0468 - probe_loss: 0.0468 - probe_sparse_categorical_accuracy: 0.9861

After training for 2 epochs, the non-spiking network achieves around 98% accuracy on the test data.
Now that we have our trained weights, we can begin the conversion to spiking neurons. To help us in this process we're going to first define a helper function that will build the network for us, load weights from a specified file, and make it easy to play around with some other features of the network.

### Evaluating the networks

We will now define a general function to evaluate our network on the test dataset with various neural activation functions (both non-spiking and spiking). The function creates a new network with the desired activation function, loads the weights that we learned during training, runs the network on the test dataset, and reports accuracy and firing rate with both print statements and plots.

[6]:
def run_network(
    activation,
    params_file="./keras_to_loihi_params",
    n_steps=30,
    scale_firing_rates=1,
    synapse=None,
    n_test=100,
    n_plots=2,
):
    # convert the keras model to a nengo network
    nengo_converter = nengo_dl.Converter(
        model,
        scale_firing_rates=scale_firing_rates,
        swap_activations={tf.nn.relu: activation},
        synapse=synapse,
    )

    # get input/output objects
    nengo_input = nengo_converter.inputs[inp]
    nengo_output = nengo_converter.outputs[dense1]

    # add probes to layers to record activity
    with nengo_converter.net:
        probes = collections.OrderedDict(
            [
                [to_spikes_layer, nengo.Probe(nengo_converter.layers[to_spikes])],
                [conv0_layer, nengo.Probe(nengo_converter.layers[conv0])],
                [conv1_layer, nengo.Probe(nengo_converter.layers[conv1])],
                [dense0_layer, nengo.Probe(nengo_converter.layers[dense0])],
            ]
        )

    # repeat inputs for some number of timesteps
    tiled_test_images = np.tile(test_images[:n_test], (1, n_steps, 1))

    # set some options to speed up simulation
    with nengo_converter.net:
        nengo_dl.configure_settings(stateful=False)

    # build network, load in trained weights, run inference on test images
    with nengo_dl.Simulator(
        nengo_converter.net, minibatch_size=20, progress_bar=False
    ) as nengo_sim:
        nengo_sim.load_params(params_file)
        data = nengo_sim.predict({nengo_input: tiled_test_images})

    # compute accuracy on test data, using output of network on last timestep
    test_predictions = np.argmax(data[nengo_output][:, -1], axis=-1)
    print(
        "Test accuracy: %.2f%%"
        % (100 * np.mean(test_predictions == test_labels[:n_test, 0, 0]))
    )

    # plot the results
    mean_rates = []
    for i in range(n_plots):
        plt.figure(figsize=(12, 6))

        plt.subplot(1, 3, 1)
        plt.title("Input image")
        plt.imshow(test_images[i, 0].reshape((28, 28)), cmap="gray")
        plt.axis("off")

        n_layers = len(probes)
        mean_rates_i = []
        for j, layer in enumerate(probes.keys()):
            probe = probes[layer]
            plt.subplot(n_layers, 3, (j * 3) + 2)
            plt.suptitle("Neural activities")

            outputs = data[probe][i]

            # look only at non-zero outputs
            nonzero = (outputs > 0).any(axis=0)
            outputs = outputs[:, nonzero] if sum(nonzero) > 0 else outputs

            # undo neuron amplitude to get real firing rates
            outputs /= nengo_converter.layers[layer].ensemble.neuron_type.amplitude

            rates = outputs.mean(axis=0)
            mean_rate = rates.mean()
            mean_rates_i.append(mean_rate)
            print(
                '"%s" mean firing rate (example %d): %0.1f' % (layer.name, i, mean_rate)
            )

            if is_spiking_type(activation):
                outputs *= 0.001
                plt.ylabel("# of Spikes")
            else:
                plt.ylabel("Firing rates (Hz)")

            # plot outputs of first 100 neurons
            plt.plot(outputs[:, :100])

        mean_rates.append(mean_rates_i)

        plt.xlabel("Timestep")

        plt.subplot(1, 3, 3)
        plt.title("Output predictions")
        plt.plot(tf.nn.softmax(data[nengo_output][i]))
        plt.legend([str(j) for j in range(10)], loc="upper left")
        plt.xlabel("Timestep")
        plt.ylabel("Probability")

        plt.tight_layout()

    # take mean rates across all plotted examples
    mean_rates = np.array(mean_rates).mean(axis=0)

    return mean_rates


def is_spiking_type(neuron_type):
    return isinstance(neuron_type, (nengo.LIF, nengo.SpikingRectifiedLinear))

[7]:
# test the trained network on the test set
mean_rates = run_network(activation=nengo.RectifiedLinear(), n_steps=10)

Test accuracy: 99.00%
"to-spikes" mean firing rate (example 0): 14.4
"conv0" mean firing rate (example 0): 1.9
"conv1" mean firing rate (example 0): 1.0
"dense0" mean firing rate (example 0): 2.7
"to-spikes" mean firing rate (example 1): 15.9
"conv0" mean firing rate (example 1): 2.3
"conv1" mean firing rate (example 1): 1.4
"dense0" mean firing rate (example 1): 3.7

Note that we're plotting the output over time for consistency with future plots, but since our network doesn't have any temporal elements (e.g. spiking neurons), the output is constant for each digit.

The firing rates displayed in the middle graph are important to note for conversion to spikes, and may vary somewhat depending on the random initial conditions used for training. One of the important features visible here, which we'll discuss shortly, is the decreasing mean firing rate as you move through the network. Note that these mean firing rates are computed across only the neurons that have non-zero activities; they are therefore the mean rates of the active neurons. Let's continue with the comparison by moving into spikes.

## Converting to a spiking neural network

Using the NengoDL converter, we can swap all the relu activation functions to nengo.SpikingRectifiedLinear. Using the lessons that we learned in the Keras->SNN example notebook, we'll set synapse=0.005 and scale_firing_rates=100.

[8]:
# test the trained network using spiking neurons
run_network(
    activation=nengo.SpikingRectifiedLinear(),
    scale_firing_rates=100,
    synapse=0.005,
)

Test accuracy: 100.00%
"to-spikes" mean firing rate (example 0): 1454.1
"conv0" mean firing rate (example 0): 180.0
"conv1" mean firing rate (example 0): 97.2
"dense0" mean firing rate (example 0): 101.0
"to-spikes" mean firing rate (example 1): 1566.3
"conv0" mean firing rate (example 1): 204.6
"conv1" mean firing rate (example 1): 106.6
"dense0" mean firing rate (example 1): 129.3

[8]:
array([1510.178  ,  192.29611,  101.89425,  115.13666], dtype=float32)

An important feature of SNNs is the time required to generate output. The larger your scaling factor, the quicker the network response to input will be, because more spikes will be generated at each layer, triggering a quicker response at the succeeding layer. For still images, where each successive image has no correlation with the previous image, this leads to a lag in generating output. SNNs are, however, much more efficient in a problem like processing a video stream, where there is high correlation between frames. In general, SNNs perform better in situations with temporal dynamics. For simplicity, though, we only examine the case of processing still images here.

Let's see what happens when we convert to an SNN using Loihi neurons.

## Converting to SNN using Loihi neurons

To get a sense of how well our network will run on Loihi, we switch to using the LoihiSpikingRectifiedLinear activation profile. Note that the on-chip restrictions don't apply to the input layer that we added to the network, because it won't be running on the Loihi. Here, the performance differences are minimal, so we just convert all neurons over to Loihi neurons.
If you find that adding the input layer is causing a performance drop, you may want to build your network such that only the on-chip layers use the Nengo Loihi neurons and the off-chip layer uses a standard spiking neuron model (i.e. SpikingRectifiedLinear).

[9]:
# test the trained network using Loihi spiking neurons
run_network(
    activation=nengo_loihi.neurons.LoihiSpikingRectifiedLinear(),
    scale_firing_rates=100,
    synapse=0.005,
)

Test accuracy: 90.00%
"to-spikes" mean firing rate (example 0): 792.1
"conv0" mean firing rate (example 0): 90.9
"conv1" mean firing rate (example 0): 47.9
"dense0" mean firing rate (example 0): 38.1
"to-spikes" mean firing rate (example 1): 835.2
"conv0" mean firing rate (example 1): 99.3
"conv1" mean firing rate (example 1): 50.1
"dense0" mean firing rate (example 1): 36.8

[9]:
array([813.6283  ,  95.13702 ,  49.01661 ,  37.468674], dtype=float32)

If the training resulted in a network with large differences (> 10 Hz) between the firing rates of the network layers, switching to LoihiSpikingRectifiedLinear neurons will cause a significant decrease in the performance of the network.

What causes this? Basically, the issue is that each of the layers needs a different scaling term. With large firing rate discrepancies between layers, we end up trying to balance between having a scale_firing_rates value for the network that 1) is high enough to achieve good performance from the network, but 2) is low enough not to induce multiple spikes per time step in any layer. The second point is where we're getting tripped up. Loihi neurons can only spike once per time step. Recall that while the scale_firing_rates term increases the gain on signals going into neurons, it also correspondingly decreases the amplitude of the neuron activity output. If scale_firing_rates is set high enough to expect three spikes per time step, but only one spike comes out, the effects will no longer balance out and performance will deteriorate.

Instead of setting a single scale_firing_rates value for the whole network, we can specify a scaling value for each layer. To figure out what range we want to put the firing rates into, let's look at the Loihi neurons' activation functions.

### The Loihi activation profile

The shape of the Loihi neuron activation profile is unique, and for high firing rates it shows strong discrepancies with standard relu and lif behaviour. This is due to the discretization required by the Loihi hardware. Let's take a closer look.

[10]:
def plot_activation(neurons, min, max, **kwargs):
    x = np.arange(min, max, 0.001)
    fr = neurons.rates(x=x, gain=[1], bias=[0])

    plt.plot(x, fr, lw=2, **kwargs)
    plt.title("%s with [gain=1, bias=0]" % str(neurons))
    plt.ylabel("Firing rate (Hz)")
    plt.xlabel("Input signal")
    plt.legend(["Standard", "Loihi"], loc=2)

plt.figure(figsize=(10, 3))
plot_activation(nengo.RectifiedLinear(), -100, 1000)
plot_activation(nengo_loihi.neurons.LoihiSpikingRectifiedLinear(), -100, 1000)

plt.figure(figsize=(10, 3))
plot_activation(nengo.LIF(), -4, 40)
plot_activation(nengo_loihi.neurons.LoihiLIF(), -4, 40)

We can see that for lower firing rates the behaviour of the Loihi neurons approximates the normal relu and lif neurons relatively well, but for higher firing rates the discrepancy becomes larger. The discretization results in large plateaus of input signal values where the output firing rate from the neuron stays the same, making different input values in this range indistinguishable.
Also, as mentioned above, for input values above 1000 (not shown) the LoihiSpikingRectifiedLinear neuron will have a constant output of 1000 Hz (since this corresponds to one spike per timestep, the maximum firing rate on Loihi); the SpikingRectifiedLinear neuron, on the other hand, is able to fire faster than 1000 Hz by using multiple spikes per timestep.

We can now return to our original question: how do we pick good firing rates for each layer? For outputs above 250 Hz, both Loihi activation functions show significant deviations from the non-Loihi activation profiles; they also become more discontinuous above this point. We therefore want to keep our maximum firing rates below 250 Hz. We also need the firing rate to be high enough to generate sufficient spikes, so that information can be transmitted from layer to layer in a reasonable time. For these reasons, we'll choose a target mean firing rate of 200 Hz for each layer, and generate a scaling term for each layer individually to hit this target.

[11]:
target_mean = 200
scale_firing_rates = {
    to_spikes_layer: target_mean / mean_rates[0],
    conv0_layer: target_mean / mean_rates[1],
    conv1_layer: target_mean / mean_rates[2],
    dense0_layer: target_mean / mean_rates[3],
}

# test the trained network using spiking neurons
run_network(
    activation=nengo_loihi.neurons.LoihiSpikingRectifiedLinear(),
    scale_firing_rates=scale_firing_rates,
    synapse=0.005,
)

Test accuracy: 96.00%
"to-spikes" mean firing rate (example 0): 169.2
"conv0" mean firing rate (example 0): 123.0
"conv1" mean firing rate (example 0): 94.5
"dense0" mean firing rate (example 0): 40.4
"to-spikes" mean firing rate (example 1): 191.0
"conv0" mean firing rate (example 1): 136.7
"conv1" mean firing rate (example 1): 100.4
"dense0" mean firing rate (example 1): 43.2

[11]:
array([180.12039 , 129.87267 ,  97.452446,  41.780373], dtype=float32)

As we can see, when we individually scale the activity of each layer, we almost fully recover non-spiking performance. Note, though, that the firing rates of some layers (the later layers in particular) do not quite meet the target mean firing rate of 200 Hz. This is because our mean firing rates were measured using the RectifiedLinear neuron type, and do not account for the difference between it and the LoihiSpikingRectifiedLinear activation function. For better results, we could go back and measure the mean firing rates using the Loihi neuron type, or hand-tune the scaling factors on each layer to achieve the desired firing rates.

Alternatively, we can train our network using the LoihiSpikingRectifiedLinear neuron. This will account both for the discretization in the activation profile and for the hard limit of 1 spike per time step. For larger or more complex networks this can save tuning time.

## Training with the Loihi neurons

We're going to use another trick for training and set scale_firing_rates=100 while training. What this does, essentially, is initialize the network with high firing rates, such that during training we'll consistently find a local minimum with higher firing rates that will work well when we swap in spiking neurons. It also reduces the discrepancy in firing rates between layers, starting them all off in a higher range. This is a low-overhead, ad-hoc means of increasing the firing rates of neurons in each layer, and does not guarantee that the network converges to a desired range of firing rates for each layer after training.
The firing rate regularization method, shown in a basic form in the Keras to SNN example and in a more powerful form in the CIFAR-10 Loihi example, is a more consistent way to achieve the desired range of firing rates in each layer.

[12]:
# train this network with Loihi neurons
train(
    params_file="./keras_to_loihi_loihineuron_params",
    epochs=2,
    swap_activations={tf.nn.relu: nengo_loihi.neurons.LoihiSpikingRectifiedLinear()},
    scale_firing_rates=100,
)

Build finished in 0:00:00
Optimization finished in 0:00:00
Construction finished in 0:00:00
Train on 60000 samples
Epoch 1/2
60000/60000 [==============================] - 28s 470us/sample - loss: 0.1985 - probe_loss: 0.1985 - probe_sparse_categorical_accuracy: 0.9382
Epoch 2/2
60000/60000 [==============================] - 28s 459us/sample - loss: 0.0618 - probe_loss: 0.0618 - probe_sparse_categorical_accuracy: 0.9804

Now, when we run the network, we need to be sure to set the scale_firing_rates parameter again so that the training conditions are replicated.

[13]:
# test the trained network using spiking neurons
run_network(
    activation=nengo_loihi.neurons.LoihiSpikingRectifiedLinear(),
    scale_firing_rates=100,
    params_file="./keras_to_loihi_loihineuron_params",
    synapse=0.005,
)

Test accuracy: 99.00%
"to-spikes" mean firing rate (example 0): 892.8
"conv0" mean firing rate (example 0): 132.5
"conv1" mean firing rate (example 0): 84.3
"dense0" mean firing rate (example 0): 100.9
"to-spikes" mean firing rate (example 1): 884.4
"conv0" mean firing rate (example 1): 139.9
"conv1" mean firing rate (example 1): 83.8
"dense0" mean firing rate (example 1): 114.2

[13]:
array([888.599  , 136.19855,  84.03242, 107.51068], dtype=float32)

This is another way that we can recover normal ReLU performance using Loihi neurons. As discussed in the Keras to SNN example, we can also train up this network using an extra term added to the loss function as a way of getting neurons into the desired range of firing rates. This method has the benefit of being more precise about the resultant firing rates of neurons in the network. When we set scale_firing_rates to a large number during training, we're simply instantiating the network with high firing rates and hoping it converges while maintaining these higher firing rates, but there is no guarantee. Adding the rate regularization term to the loss function ensures that the firing rates stay near their targets throughout the training process.

## Running your SNN on Loihi

At this point we're ready to test out our network on the Loihi. To actually run it on Loihi we have to set up a few more configuration parameters. We'll start by converting our network as before, using the same parameters on the Converter call:

[14]:
pres_time = 0.03  # how long to present each input, in seconds
n_test = 5  # how many images to test

# convert the keras model to a nengo network
nengo_converter = nengo_dl.Converter(
    model,
    scale_firing_rates=400,
    swap_activations={tf.nn.relu: nengo_loihi.neurons.LoihiSpikingRectifiedLinear()},
    synapse=0.005,
)
net = nengo_converter.net

# get input/output objects
nengo_input = nengo_converter.inputs[inp]
nengo_output = nengo_converter.outputs[dense1]

The next thing we need to do is load in the trained parameters. This involves creating a NengoDL Simulator, loading in the weights, and then calling the freeze_params function (https://www.nengo.ai/nengo-dl/reference.html#nengo_dl.Simulator.freeze_params) to save the weights to the network object.
This will then let us build a network with the trained weights inside the Nengo Loihi Simulator.

[15]:
# build network, load in trained weights, save to network
with nengo_dl.Simulator(net) as nengo_sim:
    nengo_sim.load_params("./keras_to_loihi_loihineuron_params")
    nengo_sim.freeze_params(net)

Build finished in 0:00:00
Optimization finished in 0:00:00
Construction finished in 0:00:00

Before we build the network in Nengo Loihi we need to make a few more changes. The input Node needs to be altered to generate our test images as output:

[16]:
with net:
    nengo_input.output = nengo.processes.PresentInput(
        test_images, presentation_time=pres_time
    )

We specify that the to_spikes layer should run off-chip:

[17]:
with net:
    nengo_loihi.add_params(net)  # allow on_chip to be set
    net.config[nengo_converter.layers[to_spikes].ensemble].on_chip = False

At this point, if you try to build the network you will get an error:

BuildError: Total synapse bits (1103808) exceeded max (1048576)

which means that too many connections are going into a single Loihi core. To fix this, we need to specify the block_shape parameter on the convnet layers that are running on the Loihi. This lets us break up a convnet layer across multiple cores and prevents us from overloading a single core. The first parameter specifies the target size of the representation per core with a (rows, columns, channels) tuple. The maximum number of neurons per core is 1024, so rows * columns * channels must not exceed 1024. The second parameter is the size of the full layer, which can be calculated from the layer's input_size, kernel_size, strides, and filters parameters; we do this in the calculate_size function below. These parameters need to be tuned until the synapse and axon constraints are met. More details can be found in the BlockShape documentation (https://www.nengo.ai/nengo-loihi/api.html#nengo_loihi.BlockShape), with advice on choosing good block shapes in the tips and tricks section of the documentation and in the CIFAR-10 Loihi example.

For this example, we use (16, 16, 4) for conv0, (8, 8, 16) for conv1, and (50,) for dense0 (which breaks it up into two 50-neuron ensembles) to fit our model on Loihi.

[18]:
with net:
    conv0_shape = conv0_layer.output_shape[1:]
    net.config[
        nengo_converter.layers[conv0].ensemble
    ].block_shape = nengo_loihi.BlockShape((16, 16, 4), conv0_shape)

    conv1_shape = conv1_layer.output_shape[1:]
    net.config[
        nengo_converter.layers[conv1].ensemble
    ].block_shape = nengo_loihi.BlockShape((8, 8, 16), conv1_shape)

    dense0_shape = dense0_layer.output_shape[1:]
    net.config[
        nengo_converter.layers[dense0].ensemble
    ].block_shape = nengo_loihi.BlockShape((50,), dense0_shape)

Now we're ready to build the network and run it!
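One aside before running: the calculate_size function referenced above is not shown in this excerpt. A minimal sketch, assuming the "valid" padding used by this model's Conv2D layers, might look like the following; the function name comes from the prose, and everything else here is an assumption:

def calculate_size(input_shape, kernel_size, strides, filters):
    # Output shape (rows, cols, channels) of a "valid"-padded Conv2D layer
    rows = (input_shape[0] - kernel_size) // strides + 1
    cols = (input_shape[1] - kernel_size) // strides + 1
    return (rows, cols, filters)

# e.g. conv0 applied to the 28 x 28 x 3 to-spikes output:
print(calculate_size((28, 28, 3), kernel_size=3, strides=2, filters=32))   # (13, 13, 32)
print(calculate_size((13, 13, 32), kernel_size=3, strides=2, filters=64))  # (6, 6, 64)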
[19]:
# build Nengo Loihi Simulator and run network
with nengo_loihi.Simulator(net) as loihi_sim:
    loihi_sim.run(n_test * pres_time)

    # get output (last timestep of each presentation period)
    pres_steps = int(round(pres_time / loihi_sim.dt))
    output = loihi_sim.data[nengo_output][pres_steps - 1 :: pres_steps]

    # compute the Loihi accuracy
    loihi_predictions = np.argmax(output, axis=-1)
    correct = 100 * np.mean(loihi_predictions == test_labels[:n_test, 0, 0])
    print("Loihi accuracy: %.2f%%" % correct)

Loihi accuracy: 100.00%

Our accuracy print-out is 100%, and we can also plot the results to see for ourselves:

[20]:
# plot the neural activity of the convnet layers
plt.figure(figsize=(12, 4))
timesteps = loihi_sim.trange() / loihi_sim.dt

# plot the presented MNIST digits
plt.figure(figsize=(12, 4))
plt.subplot(2, 1, 1)
images = test_images.reshape(-1, 28, 28, 1)[:n_test]
ni, nj, nc = images[0].shape
allimage = np.zeros((ni, nj * n_test, nc), dtype=images.dtype)
for i, image in enumerate(images[:n_test]):
    allimage[:, i * nj : (i + 1) * nj] = image
if allimage.shape[-1] == 1:
    allimage = allimage[:, :, 0]
plt.imshow(allimage, aspect="auto", interpolation="none", cmap="gray")
plt.xticks([])
plt.yticks([])

# plot the network predictions
plt.subplot(2, 1, 2)
plt.plot(timesteps, loihi_sim.data[nengo_output])
plt.legend(["%d" % i for i in range(10)], loc="lower left")
plt.suptitle("Output predictions")
plt.xlabel("Timestep")
plt.ylabel("Probability")

[20]:
Text(0, 0.5, 'Probability')
<Figure size 864x288 with 0 Axes>

## Conclusions

In this example we've expanded on the process of converting a Keras model to an SNN with additional considerations that are important for SNNs we want to implement on Loihi. We then showed the additional steps required to prepare a network generated by the NengoDL Converter to run on Loihi, including modifying the input node and specifying the distribution of convnet layers across cores.
November 23, 2012

original: http://sujitpal.blogspot.com/2011/01/exploring-nutch-20-hbase-storage.html

According to the Nutch2Roadmap Wiki Page, one of the features of (as yet unreleased, but available in SVN) Nutch 2.0 is Storage Abstraction. Instead of segment files, it can use MySQL or HBase (support for Cassandra is also planned) as its backend datastore. Support for multiple backends is achieved using GORA, an ORM framework (originally written for Nutch) that works against column databases. So changing backends would (probably; I haven't looked at the GORA code yet) mean adding the appropriate GORA implementation JAR into Nutch's classpath.

Currently, even though the code is pre-release, there is a working HBase backend, and adequate documentation on how to set it up. Since we use Cassandra as part of our crawl/indexing infrastructure, I figured it would be worth checking out, so once Nutch 2.0 is out, maybe we could use it with the Cassandra backend.

So this post is basically an attempt to figure out what Nutch does to the HBase datastore as each of its subcommands is run. You can find the list of subcommands here.

The first step is to download the Nutch 2.0 and GORA sources, and build them. This page has detailed instructions, which I followed almost to the letter. The main thing to remember is to set the GORA backend in conf/nutch-site.xml after generating the nutch runtime. Two other changes are to set http.agent.name and http.robots.agents in nutch-default.xml (so nutch actually does the crawl), and hbase.rootdir in hbase-default.xml to something other than /tmp (to prevent data loss across system restarts).

I just ran a subset of Nutch commands (we use Nutch for crawling, not its indexing and search functionality), and looked at what happened in the HBase datastore as a result. The attempt was to understand what each Nutch command does and correlate it to the code, so I can write similar code to hook into various phases of the Nutch lifecycle.

First, we have to start up HBase so Nutch can write to it. Part of the Nutch/GORA integration instructions was to install HBase, so now we can start up a local instance, and then log in to the HBase shell.

sujit@cyclone:~$ cd /opt/hbase-0.20.6
sujit@cyclone:hbase-0.20.6$ bin/start-hbase.sh
localhost: starting zookeeper, logging to /opt/hbase-0.20.6/bin/../logs/hbase-sujit-zookeeper-cyclone.hl.local.out
starting master, logging to /opt/hbase-0.20.6/bin/../logs/hbase-sujit-master-cyclone.hl.local.out
localhost: starting regionserver, logging to /opt/hbase-0.20.6/bin/../logs/hbase-sujit-regionserver-cyclone.hl.local.out
sujit@cyclone:hbase-0.20.6$ bin/hbase shell
HBase Shell; enter 'help' for list of supported commands.
Version: 0.20.6, r965666, Mon Jul 19 15:48:07 PDT 2010
hbase(main):001:0> list
0 row(s) in 0.1090 seconds
hbase(main):002:0>

We use a single URL (this blog) as the seed URL. So we create a one-line file as shown below:

http://sujitpal.blogspot.com/

and then inject this URL into HBase:

sujit@cyclone:local$ bin/nutch inject /tmp/seed.txt

This results in a single table called "webpage" being created in HBase, with the following structure. I used list to list the tables, and scan to list the contents of the table.
For ease of understanding, I reformatted the output manually into a JSON structure. Each leaf-level column (cell in HBase-speak) consists of a (key, timestamp, value) triplet, so we could have written the first leaf more compactly as {fi : "\x00'\x8D\x00"}. It might help to refer to the conf/gora-hbase-mapping.xml file in your Nutch runtime as you read this. If you haven't set up Nutch 2.0 locally, then this information is also available in the GORA_HBase wiki page.

webpage : {
  key : "com.blogspot.sujitpal:http/",
  f : {
    fi : { timestamp : 1293676557658, value : "\x00'\x8D\x00" },
    ts : { timestamp : 1293676557658, value : "\x00\x00\x01-5!\x9D\xE5" }
  },
  mk : {
    _injmrk_ : { timestamp : 1293676557658, value : "y" }
  },
  mtdt : {
    _csh_ : { timestamp : 1293676557658, value : "\x80\x00\x00" }
  },
  s : {
    s : { timestamp : 1293676557658, value : "\x80\x00\x00" }
  }
}

I then run the generate command, which generates the fetchlist:

sujit@cyclone:local$ bin/nutch generate
GeneratorJob: Selecting best-scoring urls due for fetch.
GeneratorJob: starting
GeneratorJob: filtering: true
GeneratorJob: done
GeneratorJob: generated batch id: 1293732622-2092819984

This creates an additional column "mk:_gnmrk_" containing the batch id, in the webpage table, for the record keyed by the seed URL.

webpage : {
  key : "com.blogspot.sujitpal:http/",
  f : {
    fi : { timestamp : 1293676557658, value : "\x00'\x8D\x00" },
    ts : { timestamp : 1293676557658, value : "\x00\x00\x01-5!\x9D\xE5" }
  },
  mk : {
    _injmrk_ : { timestamp : 1293676557658, value : "y" },
    _gnmrk_ : { timestamp : 1293732629430, value : "1293732622-2092819984" }
  },
  mtdt : {
    _csh_ : { timestamp : 1293676557658, value : "\x80\x00\x00" }
  },
  s : {
    s : { timestamp : 1293676557658, value : "\x80\x00\x00" }
  }
}

Next I ran a fetch with the batch id returned by the generate command:

sujit@cyclone:local$ bin/nutch fetch 1293732622-2092819984
FetcherJob: starting
FetcherJob: timelimit set for : -1
FetcherJob: threads: 10
FetcherJob: parsing: false
FetcherJob: resuming: false
FetcherJob: batchId: 1293732622-2092819984
Using queue mode : byHost
Fetcher: threads: 10
QueueFeeder finished: total 1 records. Hit by time limit : 0
fetching http://sujitpal.blogspot.com/
-finishing thread FetcherThread1, activeThreads=1
-finishing thread FetcherThread2, activeThreads=1
-finishing thread FetcherThread3, activeThreads=1
-finishing thread FetcherThread4, activeThreads=1
-finishing thread FetcherThread5, activeThreads=1
-finishing thread FetcherThread6, activeThreads=1
-finishing thread FetcherThread7, activeThreads=1
-finishing thread FetcherThread8, activeThreads=1
-finishing thread FetcherThread9, activeThreads=1
-finishing thread FetcherThread0, activeThreads=0
-activeThreads=0, spinWaiting=0, fetchQueues=0, fetchQueues.totalSize=0
-activeThreads=0
FetcherJob: done

This creates some more columns as shown below. As you can see, it creates additional columns under the "f" column family, most notably the raw page content in the "f:cnt" column, and a new "h" column family with page header information. It also creates a batch id marker in the "mk" column family.
webpage : {
  key : "com.blogspot.sujitpal:http/",
  f : {
    bas : { timestamp : 1293732801833, value : "http://sujitpal.blogspot.com/" },
    cnt : { timestamp : 1293732801833, value : "DOCTYPE html PUBLIC \"-//W3C//DTD X...rest of page content" },
    fi : { timestamp : 1293676557658, value : "\x00'\x8D\x00" },
    prot : { timestamp : 1293732801833, value : "\x02\x00\x00" },
    st : { timestamp : 1293732801833, value : "\x00\x00\x00\x02" },
    ts : { timestamp : 1293676557658, value : "\x00\x00\x01-5!\x9D\xE5" },
    typ : { timestamp : 1293732801833, value : "application/xhtml+xml" }
  },
  h : {
    Cache-Control : { timestamp : 1293732801833, value : "private" },
    Content-Type : { timestamp : 1293732801833, value : "text/html; charset=UTF-8" },
    Date : { timestamp : 1293732801833, value : "Thu, 30 Dec 2010 18:13:21 GMT" },
    ETag : { timestamp : 1293732801833, value : "40bdf8b9-8c0a-477e-9ee4-b19995601dde" },
    Expires : { timestamp : 1293732801833, value : "Thu, 30 Dec 2010 18:13:21 GMT" },
    Last-Modified : { timestamp : 1293732801833, value : "Thu, 30 Dec 2010 15:01:20 GMT" },
    Server : { timestamp : 1293732801833, value : "GSE" },
    Set-Cookie : { timestamp : 1293732801833, value : "blogger_TID=130c0c57a66d0704;HttpOnly" },
    X-Content-Type-Options : { timestamp : 1293732801833, value : "nosniff" },
    X-XSS-Protection : { timestamp : 1293732801833, value : "1; mode=block" }
  },
  mk : {
    _injmrk_ : { timestamp : 1293676557658, value : "y" },
    _gnmrk_ : { timestamp : 1293732629430, value : "1293732622-2092819984" },
    _ftcmrk_ : { timestamp : 1293732801833, value : "1293732622-2092819984" }
  },
  mtdt : {
    _csh_ : { timestamp : 1293676557658, value : "\x80\x00\x00" }
  },
  s : {
    s : { timestamp : 1293676557658, value : "\x80\x00\x00" }
  }
}

Finally we parse the fetched content. This extracts the links and parses the text content out of the HTML.

sujit@cyclone:local$ bin/nutch parse 1293732622-2092819984
ParserJob: starting
ParserJob: resuming: false
ParserJob: forced reparse: false
ParserJob: batchId: 1293732622-2092819984
ParserJob: success

This results in more columns written out to the webpage table. At this point it parses out the links from the page and stores them in the "ol" (outlinks) column family, and the "p" column family, which contains the parsed content for the page.
webpage : {
  key : "com.blogspot.sujitpal:http/",
  f : {
    bas : { timestamp : 1293732801833, value : "http://sujitpal.blogspot.com/" },
    cnt : { timestamp : 1293732801833, value : "DOCTYPE html PUBLIC \"-//W3C//DTD X...rest of page content" },
    fi : { timestamp : 1293676557658, value : "\x00'\x8D\x00" },
    prot : { timestamp : 1293732801833, value : "\x02\x00\x00" },
    st : { timestamp : 1293732801833, value : "\x00\x00\x00\x02" },
    ts : { timestamp : 1293676557658, value : "\x00\x00\x01-5!\x9D\xE5" },
    typ : { timestamp : 1293732801833, value : "application/xhtml+xml" }
  },
  h : {
    Cache-Control : { timestamp : 1293732801833, value : "private" },
    Content-Type : { timestamp : 1293732801833, value : "text/html; charset=UTF-8" },
    Date : { timestamp : 1293732801833, value : "Thu, 30 Dec 2010 18:13:21 GMT" },
    ETag : { timestamp : 1293732801833, value : "40bdf8b9-8c0a-477e-9ee4-b19995601dde" },
    Expires : { timestamp : 1293732801833, value : "Thu, 30 Dec 2010 18:13:21 GMT" },
    Last-Modified : { timestamp : 1293732801833, value : "Thu, 30 Dec 2010 15:01:20 GMT" },
    Server : { timestamp : 1293732801833, value : "GSE" },
    Set-Cookie : { timestamp : 1293732801833, value : "blogger_TID=130c0c57a66d0704;HttpOnly" },
    X-Content-Type-Options : { timestamp : 1293732801833, value : "nosniff" },
    X-XSS-Protection : { timestamp : 1293732801833, value : "1; mode=block" }
  },
  mk : {
    _injmrk_ : { timestamp : 1293676557658, value : "y" },
    _gnmrk_ : { timestamp : 1293732629430, value : "1293732622-2092819984" },
    _ftcmrk_ : { timestamp : 1293732801833, value : "1293732622-2092819984" },
    __prsmrk__ : { timestamp : 1293732957501, value : "1293732622-2092819984" }
  },
  mtdt : {
    _csh_ : { timestamp : 1293676557658, value : "\x80\x00\x00" }
  },
  s : {
    s : { timestamp : 1293676557658, value : "\x80\x00\x00" }
  },
  ol : {
    http://pagead2.googlesyndication.com/pagead/show_ads.js : { timestamp : 1293732957501, value : "" },
    http://sujitpal.blogspot.com/ : { timestamp : 1293732957501, value : "Home" },
    http://sujitpal.blogspot.com/2005_03_01_archive.html : { timestamp : 1293732957501, value : "March" },
    // ... (more outlinks below) ...
  },
  p : {
    c : { timestamp : 1293732957501, value : "Salmon Run skip to main ... (rest of parsed content)" },
    sig : { timestamp : 1293732957501, value : "cW\xA5\xB7\xDD\xD3\xBF\x80oYR8\x1F\x80\x16" },
    st : { timestamp : 1293732957501, value : "\x02\x00\x00" },
    t : { timestamp : 1293732957501, value : "Salmon Run" },
    s : { timestamp : 1293732629430, value : "?\x80\x00\x00" }
  }
}

We then run the updatedb command to add the outlinks discovered during the parse to the list of URLs to be fetched.

sujit@cyclone:local$ bin/nutch updatedb
DbUpdaterJob: starting
DbUpdaterJob: done

This results in 152 rows in the HBase table. Each of the additional rows corresponds to an outlink discovered during the parse stage above.

hbase(main):010:0> scan "webpage"
...
152 row(s) in 1.0400 seconds
hbase(main):011:0>

We can then go back to doing fetch, generate, parse and update until we are done crawling to the desired depth. That's all for today.
That's all for today. Happy New Year, and I hope you all had fun during the holidays. As I mentioned above, this exercise was for me to understand what Nutch does to the HBase datastore when each command is invoked. In the coming weeks, I plan on using this information to write some plugins that will drop "user" data into the database, and use it in later steps.
# Dynamics in Two Dimensions Question

1. Aug 20, 2013

### Enduro

1. The problem statement, all variables and given/known data

During baseball practice, you go up into the bleachers to retrieve a ball. You throw the ball back into the playing field at an angle of 42° above the horizontal, giving it an initial velocity of 15 m/s. If the ball is 5.3 m above the level of the playing field when you throw it, what will be the velocity of the ball when it hits the ground of the playing field?

θ = 42°, vi = 15 m/s, h = 5.3 m, vf = ?

2. Relevant equations

h = -0.5g t^2 + vi sin(θ) Δt
vf = vi + a Δt

3. The attempt at a solution

5.3 m = -0.5(9.8)t^2 + 15(sin 42°)Δt
t = 1.02 s

vf = (15)(sin 42°) + (-9.8)(1.02)
vf = 0.041 m/s

√(vx^2 + vy^2) = √((15 cos 42°)^2 + (0.041)^2) = 11.15 m/s

What am I doing wrong? The answer is 18 m/s.

Last edited: Aug 20, 2013

2. Aug 20, 2013

### lewando

The result you got for t is not correct.

3. Aug 20, 2013

### PerryKid

This looks more like projectile motion. In this case, your equation 5.3 m = -0.5(9.8)t^2 + 15(sin 42°)Δt is not necessary. It seems more useful in the form $Y=Y_{o}+V_{oy}t-\frac{gt^{2}}{2}$.

4. Aug 21, 2013

### haruspex

It starts at 5.3 m above the final height, so what is the change in height?
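For reference, here is a worked solution along the lines of haruspex's hint (my own sketch, not from the thread): take up as positive, so the displacement is $\Delta y = -5.3$ m, with $g = 9.8$ m/s².

$$-5.3 = (15\sin 42^\circ)\,t - 4.9\,t^2 \;\Rightarrow\; t \approx 2.48\ \text{s}$$

$$v_y = 15\sin 42^\circ - 9.8\,t \approx -14.3\ \text{m/s}, \qquad v_x = 15\cos 42^\circ \approx 11.1\ \text{m/s}$$

$$|v| = \sqrt{v_x^2 + v_y^2} \approx 18\ \text{m/s}$$

Energy conservation gets there even faster: $|v| = \sqrt{v_i^2 + 2gh} = \sqrt{15^2 + 2(9.8)(5.3)} \approx 18.1$ m/s.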
# Determination of the stress intensity factor of a partially closed Griffith crack

by John Tweed

Written in English

## Subjects:

- Fracture mechanics
- Elastic solids
- Integral equations

Edition Notes

## Book details

Statement: by J. Tweed.
Contributions: North Carolina. State University, Raleigh. Applied Mathematics Research Group.

Classifications
LC Classifications: TA409 .T9

The Physical Object
Pagination: 18 l.
Number of Pages: 18

Open Library: OL5020592M
LC Control Number: 76628730

In this paper, the author uses a Fourier transform technique to derive formulae for the crack shape and stress intensity factor of a partially closed Griffith crack in an infinite elastic solid.

The problem of a Griffith crack in a thin plate, which is opened by a parabolic pressure acting on its surfaces, is considered. The crack is then partially closed in a symmetric manner by ties, idealized by point loads in the material, and the effect upon the stress intensity factors is examined.

The simplest geometry factor is that for an edge crack of length a at the edge of a semi-infinite half space: the increased ability of the crack to open causes the stress intensity factor to increase by some 12%, $K_I = 1.12\,\sigma\sqrt{\pi a}$. The determination of this geometry term is a problem of stress analysis.

The stress intensity factor (SIF) is the key parameter in linear elastic fracture mechanics (LEFM) for quantifying the severity of cracks. It reflects the effect of loading, crack size, crack shape and component geometry in life and strength prediction methods. An accurate knowledge of the stress intensity factor is essential for such predictions.

A new concept to describe the severity of the stress distribution around the crack tip is the so-called stress intensity factor K. This concept was originally developed through the work of Irwin.

The stress intensity factor describes the stress state at a crack tip, is related to the rate of crack growth, and is used to establish failure criteria due to fracture. Irwin arrived at the definition of $K$ as a near-crack-tip approximation to Westergaard's complete solution for the stress field surrounding a crack.

The stress intensity factor for this case is given by equation (9), where F(x/a) is a tabulated function given by Hartranft and Sih [1]. A weight function is a function which gives the ratio of (the stress intensity factor at a crack tip due to the application of a stress s to an element of area dA on the crack surface) to (the stress…).

M. Beghini, L. Bertini, Effective stress intensity factor and contact stress for a partially closed Griffith crack in bending, Int. Engng Fracture Mech., 54(5).

Raju, I.S. and Newman, J.C., Stress-intensity factors for a wide range of semi-elliptical surface cracks in finite-thickness plates, Engng Fracture Mechanics.

Cracks and Stress Intensity Factor. [Figure: Energetics of Griffith crack in uniform tension, linear elastic.] It is possible to determine $\delta C/\delta a$ for a given crack length, and so…

E.E. Burniston, An Example of a Partially Closed Griffith Crack, International Journal of Fracture Mechanics, 5(1).

J. Tweed, The Determination of the Stress Intensity Factor of a Partially Closed Griffith Crack, Int. Engng Sci., 8.

Zong-Xian Zhang, in Rock Fracture and Blasting: Griffith Energy-Balance Concept.
According to the Griffith theory [7], as in a liquid, the bounding surfaces in a solid possess a surface tension, which implies the existence of a corresponding amount of potential energy. When a crack is formed owing to the action of a stress, or a preexisting crack is caused to extend, a quantity of this energy is absorbed in creating the new surfaces.

Publisher Summary: Fracture occurs when the stress intensity factor reaches its critical value, that is, the fracture toughness. In the energy approach, the fracture behavior of a material is described by the energy variation of the cracked system during crack extension, which is characterized by the so-called energy release rate.

The stress intensity at a crack flaw tip can be explained by classic Griffith crack theory. Local stress intensification can be described by an intensification factor $K_1$, which reaches a critical value $K_{1C}$ when fracture occurs: $\sigma = K_{IC}/(Y a^{1/2})$.

The failure of cracked components is governed by the stresses in the vicinity of the crack tip. The stress intensity factors depend on the geometry of the component and on the loading condition. This study is on a centre-cracked plate of finite length. The stress intensity factor is calculated both analytically (by LEFM) and computationally.

An important element of work in fracture mechanics is the stress intensity factor - the characterizing parameter for the crack tip field in a linear elastic material; something reflected in its intense research over the last 30 years. The weight function method is one of the most reliable, versatile, and cost-effective methods of evaluating stress intensity factors.

Alan Arnold Griffith's energy-based analysis of cracks in 1920 is considered to be the birth of the field of fracture mechanics [1]. A copy of his paper can be found here. He was motivated by Inglis's linear elastic solution for stresses around an elliptical hole [2], which predicted that the stress level approached infinity as the ellipse flattened to form a crack.

Stress intensity factors can be obtained from the stress field near the crack tip. In general, the stress intensity factor is linearly proportional to the external or applied load, and contains a factor which describes the configuration of the body, including the crack length. The usefulness of stress intensity factors in the analysis…

On Fracture Mechanics: A major objective of engineering design is the determination of the geometry and dimensions of machine or structural elements and the selection of material in such a way that the elements perform their operating function in an efficient, safe and economic manner. For this…

Abstract: Dynamic fracture in PMMA was studied to determine the correlations among dynamic stress intensity factor $K_{ID}$, crack velocity $\dot a$ and acceleration $\ddot a$. A specimen geometry, a single-edge-cracked tensile plate with two circular holes, was employed to obtain the crack acceleration, deceleration and re-acceleration process in a single fracture event.
Compare graphs of crack velocity versus stress intensity factor $K_1$ for the incremental and continuous crack growth models, using the following data: $\sigma_\infty = \ldots$ Pa, $\delta_{CR} = \ldots \times 10^{-6}$ m, $\sigma_{c0} = \ldots$

…for stress analysis and stress intensity factors. Key Words: stress intensity factor, modes of fracture failure, methods of stress analysis. INTRODUCTION: The propagation of cracks in materials is studied with fracture mechanics. Methods of analytical solid mechanics are used to calculate the driving force on a crack, and those of experimental solid mechanics to characterize the material's resistance to fracture.

Fracture Mechanics Lecture notes - course 4A, concept version. P.J.G. Schreurs, Eindhoven University of Technology, Department of Mechanical Engineering.

For cracks in shaped bodies, the stress intensity factor is a single-parameter characterization of the crack tip stress field. The stress intensity factors for each geometry can be described using the general form $K_I = \sigma\sqrt{\pi a}\,\sqrt{\sec\!\left(\frac{\pi a}{2b}\right)}$ (1), where $K_I$ is the stress intensity factor.

Stress intensity factors, which have units of stress $\cdot$ (length)$^{1/2}$, characterize the stress state ahead of a sharp crack using a single constant value [1]. Stress intensity factors are most often approximated using two-dimensional analysis, but…

$K_I$ = stress intensity factor (SIF) in an infinite cracked plate subject to biaxial loading (symmetry about the x axis). Note: $\sigma_x(\infty)$ does not affect $K_I$; $K_I$ and the stress field at the crack tip depend on the crack length a; $\sigma_x = \sigma(k-1)$ for $\theta = \pi$, along the crack faces; $\sigma_x = -\sigma$ for uniaxial load ($k = 0$).

This section will present a catalog of stress-intensity factor solutions for some typical crack geometries. Many of these solutions are found in computer programs and handbooks. The tables summarize the solutions that are presented. The solutions are categorized by the location of the crack: either embedded, in a plate (surface or edge), or at a hole.

Stress Intensity Factor in Practice: Engineers are interested in the maximum stress near the crack tip and whether it exceeds the fracture toughness. Thus, the stress intensity factor K is commonly expressed in terms of the applied stresses…

Contents: The Griffith Energy Balance Approach; Irwin's Modification to the Griffith Theory; The Stress Intensity Approach; Crack Tip Plasticity; Fracture Toughness; Elastic-Plastic Fracture Mechanics; Subcritical Crack Growth; Influence of Material Behaviour; Part II: Linear Elastic Fracture Mechanics.

Chaitanya K. Desai, Sumit Basu, Venkitanarayanan Parameswaran, Determination of complex stress intensity factor for a crack in a bimaterial interface using digital image correlation, Optics and Lasers in Engineering, 50(10).

Fracture mechanics is the field of mechanics concerned with the study of the propagation of cracks in materials. It uses methods of analytical solid mechanics to calculate the driving force on a crack, and those of experimental solid mechanics to characterize the material's resistance to fracture. In modern materials science, fracture mechanics is an important tool used to improve the performance of mechanical components.

In this part, to evaluate the stress intensity factor around the crack tip, a contour integral region is defined and $K_I$ values in this region are computed. In fracture modelling, crack tips are regions of high stress gradients and high stress concentrations, and these concentrations result in theoretically infinite stresses at the crack tip.

Murakami Y, Stress singularity for notch at bimaterial interface, Stress Intensity Factors Handbook, Vol 3, Murakami et al.,
Pergamon Press, Oxford, UK, Ch 18. Sinclair GB, FEA of singular elasticity problems, Proc of 8th Int ANSYS Conf.

In this paper, an integral transformation of the displacement is employed to determine the solution of the elastodynamics problem of two collinear Griffith cracks moving with constant velocity situated in…

Crack Growth in Polymers - Stress Concentration and Stress Intensity Factors: The fracture strength of structural materials is often described with the Griffith model [1]. This model is in excellent agreement with the observed fracture strength of brittle materials like glass and ceramics. However, for polymers and metals that undergo extensive plastic deformation it gives unrealistically low values.

…are proper to determine stress concentration factors and to study crack propagation. Holography: holography theory was presented by Dennis Gabor. O'Regan and Dudderar employed this technique to study transparent samples and calculated stress concentration factors near the sharp tip of a crack.

STRESS INTENSITY FACTORS FOR AN INTERIOR GRIFFITH CRACK OPENED BY HEATED WEDGE IN A STRIP WHOSE EDGES ARE NORMAL TO CRACK AXIS: Journal of Sciences, Islamic Republic of Iran, Article 8, Volume 12, Issue 4, Winter. Abstract.

The purpose of this research is to determine the mixed mode stress intensity factors from photoelastic data taken along a boundary close to the crack tip when strong interaction is present. The method is based on boundary collocation of a stress function for an interior crack, combining collocation with the use of half fringe photoelasticity (HFP).

Stress intensity factor determination plays a central role in linearly elastic fracture mechanics (LEFM) problems. Fracture propagation is controlled by the stress field near the crack tip. Because this stress field is asymptotically dominant or singular, it is characterized by the stress intensity factor (SIF). Many rock types show brittle elastic behaviour under hydrocarbon reservoir conditions…

Hamrock's table gives $\mathrm{psi}\sqrt{\mathrm{in}}$ and $\mathrm{MPa}\sqrt{\mathrm{m}}$ as units for stress intensity. The conversion is $1\ \mathrm{ksi}\sqrt{\mathrm{in}} \approx 1.099\ \mathrm{MPa}\sqrt{\mathrm{m}}$.

To summarize, the stress intensity analysis is as follows: 1. Compute the average stress on the uncracked part. It could be tensile or bending. 2. …
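Since nearly every excerpt above leans on the same basic relation, a short worked example may help (my own illustration; the numbers are made up):

$$K_I = Y\,\sigma\sqrt{\pi a}$$

For a through-crack of half-length $a = 5$ mm in a wide plate ($Y \approx 1$) under a remote stress $\sigma = 100$ MPa:

$$K_I \approx 1 \times 100\,\sqrt{\pi \times 0.005} \approx 12.5\ \mathrm{MPa}\sqrt{\mathrm{m}},$$

and fracture is predicted when $K_I$ reaches the material's fracture toughness $K_{IC}$.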
# How do you solve sqrt(x+9) = 4?

Jan 27, 2016

To solve equations that involve radicals, you must square both sides of the equation.

#### Explanation:

$\sqrt{x + 9} = 4$

${\left(\sqrt{x + 9}\right)}^{2} = {\left(4\right)}^{2}$

x + 9 = 16
x = 16 - 9
x = 7

With radical equations it is always vital to check your solutions in the original equation, since extraneous solutions may arise. You must be especially careful of them in radical-quadratic equations, where two solutions often appear but oftentimes only one is the correct solution.

Practice exercises:

1. Solve each equation. Watch out for extraneous solutions.

a) $\sqrt{2 x + 5} = 7$
b) $\sqrt{3 x + 1} = x - 3$
c) $\sqrt{2 x + 2} - \sqrt{x + 2} = 1$

Jan 30, 2016

$x = 7$

#### Explanation:

$\sqrt{x + 9} = 4$

Square both sides:

$\rightarrow {\left(\sqrt{x + 9}\right)}^{2} = {4}^{2}$
$\rightarrow x + 9 = 16$
$\rightarrow x = 16 - 9 = 7$
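Checking the solution in the original equation, as both answers recommend: $\sqrt{7 + 9} = \sqrt{16} = 4$, so $x = 7$ is not extraneous.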
### Abstract

This study attempts to answer two key questions: what will be the likely impact of the EU's Everything But Arms (EBA) proposal, and what would be the impact if the United States also were to implement a similar proposal? Using the GTAP model, the preliminary results in this paper show that if only the EU's EBA proposal were implemented, then welfare in the least developed countries (LDCs) would increase by $2.5 billion (0.53 percent of their GDP), exports would grow by 3 percent, and GDP would grow by 2.3 percent. If the United States and the EU both implemented similar programs, then LDC welfare would increase by $3.1 billion (0.66 percent of GDP), exports would increase by 3.7 percent, and total GDP would grow by 2.9 percent. Another version of this scenario assumes that LDCs lack the supply capacity to exploit the new trade opportunities. In this case, LDC welfare increases by $0.9 billion (0.2 percent of GDP), exports grow by 4.1 percent, and GDP grows at 2.3 percent. The impact of this last scenario may still be overstated, given that trade preferences are not fully accounted for in the GTAP tariff database. Overall, the results suggest that improving market access for the LDCs could help raise per capita incomes above trend projections, but the gains are modest.
# Why do polytopes pop up in Lagrange inversion?

I'd be interested in hearing people's viewpoints on this. Looking for an intuitive perspective. See Wikipedia for descriptions of polytopes and the Lagrange inversion theorem/formula (LIF) for compositional inversion.

Background update (8/2012): Consider a compositional inverse pair of functions, $h$ and $h^{-1}$, analytic at the origin with $h(0)=0=h^{-1}(0)$. Then with $\omega=h(z)$ and $g(z)=1/[dh(z)/dz]$,

$$\exp \left[ {t \cdot g(z)\frac{d}{{dz}}} \right]f(z) = \exp \left[ {t\frac{d}{{d\omega }}} \right]f[{h^{ - 1}}(\omega )] = f[{h^{ - 1}}[t + \omega]] = f[{h^{ - 1}}[t + h(z)]],$$

so

$$\exp \left[ {t \cdot g(z)\frac{d}{{dz}}} \right]z |_{z=0}=h^{-1}(t)$$

(see OEIS A145271 and A139605 for more relations). With the power series rep $h(z)= c_1z + c_2z^2 + c_3z^3 + ... ,$

$$\frac{1}{5!}[g(z)\frac{d}{{dz}}]^{5}z|_{z=0} = \frac{1}{c_1^{9}} [14 c_2^{4} - 21 c_1 c_2^2 c_3 + c_1^2[6 c_2 c_4+ 3 c_3^2] - 1 c_1^3 c_5],$$

which is the coefficient of the fifth order term of the power series for $h^{-1}(t)$. This is related to a refined f-vector (face-vector) for the 3-D Stasheff polytope, or 3-D associahedron, with 14 vertices (0-D faces), 21 edges (1-D faces), 6 pentagons (2-D faces), 3 rectangles (2-D faces), and 1 3-D polytope (3-D face). This correspondence between the refined f-vectors of the n-dimensional Stasheff polytope, or associahedron, and the coefficients of the (n+2)-th term of the compositional inverse holds in general (see A133437, inversion for power series, and compare with A033282, coarse f-vectors for associahedra, and with MO-6373). (If $h(z)$ is presented as a Taylor series, the LIF A134685 is obtained, which is related to A134991 [tropical Grassmannian G(2,n)], and using the reciprocal of $h(z)$, the LIF A134264 is obtained, which is related to the Narayana triangle A001263 [h-vectors of the dual of the associahedra].)

Why (morally/intuitively, in a vague sense) do the refined face numbers of the associahedra appear as the coefficients of Lagrange inversion/reversion for a power series, or ordinary generating function, as presented in OEIS A133437? Loday expresses a similar interest on page 15 of "The Multiple Facets of the Associahedron" in Sec. 6, Inversion of Power Series. He ends with "There exists a short operadic proof of the above formula [LIF essentially] which explicitly involves the parenthesizings [of associahedra], but it would be interesting to find one which involves the topological structure of the associahedron."

One viewpoint, for example: I can derive the LIF several ways and relate the methods to rooted trees and thence to associahedra, but is there an intuitive way to relate the LIF for compositional inversion (which is related to the Legendre transformation/Legendre-Fenchel transform) to the geometry of the associahedra through a geometrical view of optimization via integer programming? Compositional inversion and the Legendre transformation have geometrical interpretations and are related to optimization, as discussed by Strang in his book Intro. to Applied Mathematics (see also Mathemagical Forests and references therein in the section A Walk With Lagrange and Legendre). De Loera, Rambau and Leal in Triangulations of Point Sets, in Sec. 1.2 Optimization and Triangulations, discuss connections of secondary polytopes to optimization.

Edit (2/2014): Bayer and Lagarias in "The Nonlinear Geometry of Linear Programming.
II Legendre Transform Coordinates and Central Trajectories" on page 560 relate power series reversion (LIF) to optimization problems over polytope domains, but these polytopes are not restricted to associahedra.

Second viewpoint: Stasheff associahedra are intimately related to the moduli spaces of colliding particles (Devadoss, Devadoss/Heath/Vipismakul, Devadoss/Fehrman/Heath/Vashist). String interactions generate the moduli spaces of Riemann surfaces (Zwiebach, A First Course in String Theory, pg. 310) with punctures corresponding to particles interacting on a line segment. There is much literature on the relations among compositional inversion/Legendre transformation, Feynman functional/path/gaussian integrals representing partition functions, and sums over Feynman diagrams/graphs for point particle interactions (Connes/Marcolli's "Noncommutative Geometry, Quantum Fields and Motives" pg. 51, Borcherds pg. 34, Getzler, Manin, Abdesselam, Bergstrom and Brown). Are there analogous arguments directly in terms of sums over moduli spaces for string interactions [as for the beta integral for the Veneziano amplitudes (Zwiebach, pg. 311)] that circumvent the Feynman particle/stable graph interpretations and highlight more directly the connections between compositional inverses/Legendre transforms and the face polynomials of associahedra? (See also MOQ 22291 and make the change of variables $x=f^{-1}(y)$ in Theo's integral, and maybe a Wick rotation.)

I should have stressed earlier that refined face partition polynomials characterize the LI for o.g.f.s rather than the usual coarse face polynomials, and that both sets of polynomials contain the Catalan numbers only as the number of vertices for an associahedron. The coarse polynomials are not sufficient to enumerate distinct higher dimensional facets corresponding to distinct partitions of the LI, much less the Catalan numbers alone.

- Tom - This could be expanded on a lot. I think asking a question like "Why do X show up in Y?" without explaining or providing a link for your audience to find out how X shows up in Y is both generally not cool and brings down the probability that you will get an answer or any back and forth at all. People who don't instantly know what connection you have in mind just ignore you, or leave long aggrieved comments (at best). You don't have to explain every detail, but providing more links (Don't mention your previous questions, link to them!) would be a good start. – Ben Webster Oct 11 '11 at 0:39

I find it difficult to believe that "If you are not familiar with the topics, it would take far too long to explain the question" given the number of well-written and well-explained questions that I've read on this site in fields with which I've had no prior experience. – j.c. Oct 11 '11 at 3:56

I find this question potentially interesting, and I have some knowledge of various potentially relevant things. However, I agree with other commenters that more effort from the OP is required to summarise the phenomena for which he seeks insight. In fact, I suspect that the process of writing such a summary would help him to find that insight. – Neil Strickland Oct 11 '11 at 4:29

I don't think this is very deep, one just needs to sit down and examine the role trees play in the LIF and with the associahedron face numbers you mentioned. From looking at A133437 it seems you are counting dissections of a polygon.
I don't know about partial dissections into regions, but if you completely dissect into triangles then there is an obvious bijection with planar binary trees. It is one of the zillion ways Catalan numbers appear in combinatorial enumeration. I did not check that, but dissections into regions seem to involve trees with vertices of arbitrary degree, and so does the LIF. – Abdelmalek Abdesselam Oct 11 '11 at 17:02

Echoing the sentiments of Abdelmalek, in Section 5.4 of my book Enumerative Combinatorics, vol. 2, there are two proofs showing that LIF is equivalent to certain tree enumeration problems. The basic connection is the following: it is easy to see that inverting a power series $F(x)$ is equivalent to solving an equation of the type $xG(f(x))=f(x)$ for $f(x)$. This equation is an algebraic formulation of the recursive structure of a tree. – Richard Stanley Dec 19 '11 at 14:36
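To spell out the equivalence in Stanley's comment (a standard manipulation, added here for completeness): write $F(x) = x/G(x)$, with $F(0) = 0$ and $F'(0) \neq 0$. If $f$ is the compositional inverse of $F$, then $F(f(x)) = x$ gives

$$\frac{f(x)}{G(f(x))} = x \quad\Longleftrightarrow\quad f(x) = x\,G(f(x)),$$

which is precisely the recursive description of a planted tree: a root contributing the factor $x$, with $G$ generating the collection of subtrees, each again enumerated by $f$.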
Multiple eMail Domains & Shared Calendar

Windows SBS 2003 R2 w/Exchange
Windows XP Pro w/Outlook 2003
Server and workstations, respectively.

Situation: Family-run small businesses (plural) sharing an office and resources. 4 different email domains in use/desired. Shared calendar also desired.

How do I set up Exchange on SBS 2K3 R2 to send/receive emails for multiple email domains with only 1 SBS server (also PDC, etc.)? Using off-site email hosting and am in the process of migrating on-site using Exchange which came with SBS. Changed 1 set of MX records for 1 domain name to point to server. Additional email domains a critical concern.

How do I set up Exchange and Outlook to use a shared calendar? The family wants each employee to be able to add a new event/meeting and for it to appear on everyone's calendar simultaneously. How? Can I do this also for a shared address book?

Thanks,
Fred
><>

f3_evans - 6/19/2008 6:59:34 PM

Hi Fred:

Please see in line below:

-- Larry
Please post the resolution to your issue so that all can benefit.

"F3" <[email protected]> wrote in message news:[email protected]...
> Windows SBS 2003 R2 w/Exchange
> Windows XP Pro w/Outlook 2003
> Server and workstations, respectively.
>
> Situation: Family-run small businesses (plural) sharing an office and
> resources. 4 different email domains in use/desired. Shared calendar
> also desired.
>
> How do I set up Exchange on SBS 2K3 R2 to send/receive emails for multiple
> email domains with only 1 SBS server (also PDC, etc.)? Using off-site
> email hosting and am in the process of migrating on-site using Exchange
> which came with SBS. Changed 1 set of MX records for 1 domain name to
> point to server. Additional email domains a critical concern.

Receiving is easy. For the entire group you can establish additional recipient policies. http://support.microsoft.com/kb/319201

Sending is more difficult. Must you do this?

> How do I set up Exchange and Outlook to use a shared calendar? The family
> wants each employee to be able to add a new event/meeting and for it to
> appear on everyone's calendar simultaneously. How?

You share calendars either in Company Web/Sharepoint or in Exchange Public Folders. http://technet.microsoft.com/en-us/library/aa996053(EXCHG.65).aspx

> Can I do this also for a shared address book?

Yes.

> Thanks,
> Fred
> ><>

Larry - 6/19/2008 7:24:22 PM

Larry,

Assume that I have NEVER seen or used Outlook before. No, I've not been living under a rock, I've just had a preference for other mail clients. I've figured out how to export a calendar from Mozilla Thunderbird w/Lightning (or Sunbird) to a comma-delimited file and how to import it into a single user's Outlook 2003. Once I set up the shared folder(s), how do I then proceed to have a central calendar which all Outlook 2003 users will be able to access and modify live? Can I do this (and how) with an imported calendar? Same question, but for the address book. While awaiting your response, I'm going through the shared folder thing.

Again, thanks,
Fred
<><

Larry Struckmeyer [SBS-MVP] wrote:
> Hi Fred:
>
> Please see in line below:

f3_evans - 6/19/2008 9:12:29 PM

That may not have been the best source, but there are many. "setup public folders"

-- Larry
Please post the resolution to your issue so that all can benefit.

"F3" <[email protected]> wrote in message news:[email protected]...
> Larry,
>
> Assume that I have NEVER seen or used Outlook before. No, I've not been
> living under a rock, I've just had a preference for other mail clients.
> I've figured out how to export a calendar from Mozilla Thunderbird
> w/Lightning (or Sunbird) to a comma-delimited file and how to import it
> into a single user's Outlook 2003. Once I set up the shared folder(s), how
> do I then proceed to have a central calendar which all Outlook 2003 users
> will be able to access and modify live? Can I do this (and how) with an
> imported calendar?
> Same question, but for the address book. While awaiting your response,
> I'm going through the shared folder thing.
>
> Again, thanks,
> Fred
> <><
>
> Larry Struckmeyer [SBS-MVP] wrote:
>> Hi Fred:
>>
>> Please see in line below:

Larry - 6/19/2008 9:26:27 PM
# Writing R package documentation

## 2020-05-30

There's already a tonne of stuff on how to write R packages, see here, here, here and here. Part of the reason for the breadth of articles is that there are many different workflows for how to write them. Here I'm only going to share my thoughts on writing package documentation, because that's the area where I didn't find one complete resource that answered all of my questions and provided a workflow I liked when I was writing my first serious package.

To briefly explain the basic structure of my package: I took advice from Hadley and kept the functions in my package inside thematic files, like biomass.R and taxonomy.R, with each of these files holding multiple functions. It's somewhere between keeping all functions in one file and keeping each function in its own file. I think both of these extremes ignore the natural sorting which can come from keeping a tidy directory structure. I found it more intuitive to find a particular function based on its theme when I used these thematic files.

I used roxygen2 to store the documentation for each package function alongside the code for that function in my R/*.R files. For example, my convenience function for concatenating genus and species names to one string (picked as an example purely because it's short):

#' Combine genus and species character vectors to a species name
#'
#' @param x vector of genus names
#' @param y corresponding vector of species names
#'
#' @return vector of genus and species
#'
#' @export
#'
combineSpecies <- function(x, y) {
  vec <- speciesFormat(data.frame(genus = x, species = y))
  vec <- paste(vec[[1]], vec[[2]])
  return(vec)
}

This function has the @export tag, meaning that when my package is loaded with library() by a user, this function can be accessed without prefixing with the package name. Functions with @export are automatically written into the package manual when I compile it with devtools::document(). I have a tonne of functions in this package that are not useful to the average user however, mostly functions which check the contents of a particular column in the standardised datasets used by this package. These functions are purposely not written into the package manual, via the @noRd tag, which stops a .Rd file being written for that function and therefore keeps it out of the manual. These functions also have the @keywords internal tag, which means that the function can only be accessed by the user with package:::function(), but can still be accessed by other functions in the package with function(). This means that the user can still use the function if they need to, but is discouraged from doing so, normally because that function is better implemented in a higher-level wrapper function which provides checks or preprocessing. As an example, my function genus() checks whether genus names are formatted sensibly, but is only meant to be called from within colValCheck(), which wraps a bunch of column checking functions in a neater interface:

#' Check validity of stem genus column
#'
#' @param x vector of stem genera
#'
#' @return vector of class "character"
#' @keywords internal
#' @noRd
#'
genus <- function(x, ...) {
  x <- fillNA(x)
  x <- coerce_catch(x, as.character, ...)
  na_catch(x, warn = TRUE, ...)
if (any(!grepl("^[[:alpha:]]+\$", x[!is.na(x)]))) { stop("Non-letter characters found in genus") } else if (any(!grepl("^[A-Z]", x[!is.na(x)]))) { } else if (any(grepl("[A-Z]", substring(x[!is.na(x)], 2)))) { stop("Genera must not have multiple capital letters") } structure(x, class = "character") } It’s nice to have a package level description at the start of a package manual before launching into the technicalities of the function definitions. To do this, I added a roxygen entry like the one below (cut for brevity), which has the object NULL and uses the key tags: @docType package and @name packagename-package. #' silvR: Clean and analyse SEOSAW style data #' #' The \code{silvr} package facilitates three important activities: #' \itemize{ #' \item{Checking and cleaning new data for the SEOSAW dataset} #' \item{Manipulating the SEOSAW dataset to provide informative summary data} #' \item{Analysing the SEOSAW dataset} #' } #' #' @details The functions in the \code{silvr} package form a workflow for #' checking data prior to ingestion into the SEOSAW database. The package #' deals with 4 principle data objects: #' #' ... #' #' The package contains various functions for quickly creating useful #' summary data objects such as abundance matrices and maps, ... #' #' @author The \code{silvr} package is a collaborative effort, bringing code #' together from various SEOSAW members ... #' #' @section Key top-level functions: #' For ingesting new data into the SEOSAW database, it is recommended to run #' these top level functions in this order to catch errors. #' #' \itemize{ #' \item{\code{plotTableGen()} - Checks for value and column errors and #' return a clean SEOSAW style plot metadata dataframe.} #' \item{\code{stemTableGen()} - Checks for value and column errors and #' return a clean SEOSAW style stem data dataframe.} #' \item{...} #' } #' #' @docType package #' @name silvr-package NULL This longer description takes advantage of @section and @details for structuring blocks of text in the ‘roxygen2 block. The manual frontmatter comes mostly from the DESCRIPTION file. Most important are the package dependencies, which are also specified by minimum version number. Annoyingly, these package versions don’t get populated directly from the package dependencies in the roxygen2 function blocks. Instead they have to be written manually into the DESCRIPTION. Roxygen2 autopopulates NAMESPACE from the @import and @importFrom tags in the function blocks. I tend to use @importFrom vegan diversity rather than @import vegan where I can, to avoid potential conflicts in function names if I start loading lots of packages, but I don’t think there is any hard rule on this. To write a vignette, I used RMarkdown rather than Sweave. It seems to be the modern approach to vignette writing and is much more straightforward when including figures and code chunks in the document. To set this up I created a directory in the package root call vignettes/ and created a packagename.Rmd file in there. Then in the YAML frontmatter I included this: output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Cleaning and analysing SEOSAW data} %\VignetteEngine{knitr::rmarkdown} \usepackage[utf8]{inputenc} Then in my DESCRIPTION I added: Suggests: knitr (>= 1.28), rmarkdown (>= 2.1) VignetteBuilder: knitr Which ensures the tools for building the vignette are present. I can then build the vignette with: devtools::build_vignettes(). 
Finally, a short R script I have sitting above my package directory contains this code to build the package:

setwd("silvr")
devtools::document()        # Generate .Rd files
devtools::build_manual()    # Generate .pdf manual
devtools::build_vignettes() # Generate .html vignette
setwd("..")
devtools::install("silvr")  # Install the package
# Bulletproof functional validation using Scalaz

April 29, 2016

As a backend developer, you are going to want to expose your efforts through an API such as a REST API. Your API may be publicly exposed, which means you can't even trust that the caller has good intentions. Good validation of the incoming parameters is essential. However, it is not that easy to achieve with conventional imperative code, at least not in a way that isn't very messy and doesn't obscure the intent of the code itself. We investigate a functional paradigm for validation, step by step, using the Scalaz ValidationNel applicative functor. As an example, we apply it to a Play REST endpoint.

Consider a Play webservice with a single endpoint, taking 6 parameters. Here is the routes file:

GET /search controllers.SearchController.search(
  keywords: String,
  topLeftLat: BigDecimal,
  topLeftLon: BigDecimal,
  bottomRightLat: BigDecimal,
  bottomRightLon: BigDecimal,
  searchMethod: String)

package controllers

case class GeoPoint(lat: BigDecimal, lon: BigDecimal)
case class BoundingBox(topLeft: GeoPoint, bottomRight: GeoPoint)

class SearchRepository {
  def search(keywords: String, searchMethod: String, boundingBox: BoundingBox): Future[JValue] = ???
}

class SearchController @Inject() (repo: SearchRepository)(implicit ec: ExecutionContext) extends Controller {

  def search(keywords: String,
             topLeftLat: BigDecimal,
             topLeftLon: BigDecimal,
             bottomRightLat: BigDecimal,
             bottomRightLon: BigDecimal,
             searchMethod: String) = {
    Action.async { request: Request[AnyContent] ⇒
      val topLeft = GeoPoint(topLeftLat, topLeftLon)
      val bottomRight = GeoPoint(bottomRightLat, bottomRightLon)
      val boundingBox = BoundingBox(topLeft, bottomRight)
      val listings: Future[JValue] = repo.search(keywords, searchMethod, boundingBox)
      listings map { json ⇒ Ok(pretty(json)) }
    }
  }
}

## An imperative for validation

This is all nice and clean. Now we wish to add validation. Initially, the validation we require is as follows:

• Latitudes must be between -90 and 90 (inclusive).
• Longitudes must be between -180 and 180 (inclusive).
• topLeftLat must be greater than bottomRightLat.
• topLeftLon must be less than bottomRightLon.
• The searchMethod must be an acceptable value.

This is just the start. There are several other validations we might apply to tighten it up further. In terms of output we require:

• If validation is successful, an OK (200) HTTP response must be returned.
• If validation fails, a BAD_REQUEST (400) HTTP response must be returned.
• The search controller method returns a future with this response.

Here is our first attempt at validation, just to see how ugly it can get:

def search(keywords: String,
           topLeftLat: BigDecimal,
           topLeftLon: BigDecimal,
           bottomRightLat: BigDecimal,
           bottomRightLon: BigDecimal,
           searchMethod: String) = {
  Action.async { request: Request[AnyContent] ⇒
    if (topLeftLat < -90 || topLeftLat > 90) {
      Future.successful(BadRequest("topLeftLat must be between -90 and 90"))
    } else {
      if (bottomRightLat < -90 || bottomRightLat > 90) {
        Future.successful(BadRequest("bottomRightLat must be between -90 and 90"))
      } else {
        val topLeft = GeoPoint(topLeftLat, topLeftLon)
        val bottomRight = GeoPoint(bottomRightLat, bottomRightLon)
        val boundingBox = BoundingBox(topLeft, bottomRight)
        val listings: Future[JValue] = repo.search(keywords, searchMethod, boundingBox)
        listings map { json ⇒ Ok(pretty(json)) }
      }
    }
  }
}

That's probably a good time to give this approach up as a bad idea.
We've only validated the range of the latitudes, and we're already 2 nested levels of if statements deep before we get to the meat of it. We would have avoided the nesting by returning straight out of the function with the required error value. That would have been acceptable in Javaland, but it has problems of its own, and is frowned upon as a practice in Scala. This is unacceptable; it's a bread-and-butter action, and we want these sorts of things to be done in idiomatic Scala. There are other issues with this approach.

• It is clear that this approach is not scalable with respect to the number of parameters we have, nor the complexity of the validations we need to make.
• Some of the validation logic may be conditional. What constitutes a valid parameter may depend on the value of other parameters. The result of this is that we may need to duplicate logic regarding processing the parameters, once for the validations, and then again for the "real work" itself.
• This will fail on the first invalid parameter. If there is more than one invalid parameter, we will only find out about the next after we fix the first.

Another option, still in the imperative paradigm, may be a mutable list of errors which gets accumulated as we go. This would avoid the nested ifs, and address some of the issues, but it is still quite ugly. Stepping back, what are the requirements for good validation?

• It should be exhaustive and easy to follow where it is present and where it is not.
• It should not interfere with or obscure the main intent of the operation.
• It should not be necessary to duplicate any parameter logic to accommodate validation.
• We should easily be able to share validation code across methods with similar parameters.
• Ideally we would like validation to be cumulative: we accumulate all the errors at once, and report back to the caller all the reasons why validation failed. This is particularly important for form validation.

## Starting with the familiar Option

Let's start with something on familiar ground, the Option monad, and work from there, even though it will be inadequate in obvious ways.

def isValidLat(lat: BigDecimal): Boolean = ???
def isValidLon(lon: BigDecimal): Boolean = ???
def isValidSearchMethod(method: String): Boolean = ???

def search(keywords: String,
           topLeftLat: BigDecimal,
           topLeftLon: BigDecimal,
           bottomRightLat: BigDecimal,
           bottomRightLon: BigDecimal,
           searchMethod: String) = {
  Action.async { request: Request[AnyContent] ⇒
    val action: Option[Future[Result]] = for {
      tlLat ← if (isValidLat(topLeftLat)) Some(topLeftLat) else None
      tlLon ← if (isValidLon(topLeftLon)) Some(topLeftLon) else None
      brLat ← if (isValidLat(bottomRightLat)) Some(bottomRightLat) else None
      brLon ← if (isValidLon(bottomRightLon)) Some(bottomRightLon) else None
      sm ← if (isValidSearchMethod(searchMethod)) Some(searchMethod) else None
      topLeft = GeoPoint(tlLat, tlLon)
      bottomRight = GeoPoint(brLat, brLon)
      boundingBox = BoundingBox(topLeft, bottomRight)
    } yield {
      repo.search(keywords, sm, boundingBox) map { json ⇒ Ok(pretty(json)) }
    }

    action match {
      case Some(thingTodo) ⇒ thingTodo
      case None ⇒ Future.successful(BadRequest("Invalid parameters"))
    }
  }
}

We've added some validation methods and wrapped the logic in a for comprehension. We only unwrap it at the end, to execute the search method if all the parameters are valid, and to return a BadRequest if validation fails. This already goes a long way to the solution. We've managed to avoid all nested if statements, and avoid obfuscating the code.
But the obvious problem with it is that we have no way of knowing what went wrong if validation does fail.

## Or we could Either improve things

Our next step is to try out the Either monad from the Scala standard library.

class Either[+E, +A]
case class Left[+E, +A](e: E) extends Either[E, A]
case class Right[+E, +A](a: A) extends Either[E, A]

One way of thinking about Either is as a labeled Option. Specifically, Right is analogous to Some, and Left is a labeled None. Think of it as a branching structure where normal execution takes us Right and unexpected execution takes us Left (think of the Latin for right and left, dexter and sinister respectively, if this helps to remember). So we can use the Left to store what went wrong. All the usual operations, like map and flatMap, operate on the Right, and do nothing to the Left.

def search(...) = {
  Action.async { request: Request[AnyContent] ⇒
    val action: Either[String, Future[Result]] = for {
      tlLat ← if (isValidLat(topLeftLat)) Right(topLeftLat): Either[String, BigDecimal]
              else Left("Invalid latitude")
      tlLon ← if (isValidLon(topLeftLon)) Right(topLeftLon): Either[String, BigDecimal]
              else Left("Invalid longitude")
      ...
    } yield {
      repo.search(keywords, searchMethod, boundingBox) map { json ⇒ Ok(pretty(json)) }
    }
    ...
  }
}

## Ramping up to Scalaz

Either is a big improvement on the Option based solution. As a small wrinkle, the type hint Either[String, BigDecimal] is needed for it to build. But it works well. In fact, Scalaz provides a purpose-built structure with minor modifications, and a few bells and whistles:

sealed abstract class Validation[+E, +A]
case class Success[A](a: A) extends Validation[Nothing, A]
case class Failure[E](e: E) extends Validation[E, Nothing]

We can use this exactly as with Either, but with cleaner syntax, where extension methods success and failure are provided:

val action: scalaz.Validation[String, Future[Result]] = for {
  tlLat ← if (isValidLat(topLeftLat)) topLeftLat.success else "Invalid latitude".failure
  tlLon ← if (isValidLon(topLeftLon)) topLeftLon.success else "Invalid longitude".failure
  ...
} yield {
  repo.search(keywords, searchMethod, boundingBox) map { json ⇒ Ok(pretty(json)) }
}

action match {
  case Success(method) ⇒ method
  case Failure(error)  ⇒ Future.successful(BadRequest(error))
}

The syntax is a little cleaner, the type hints are not necessary, but otherwise it works exactly the same. However it generates the following compiler warning:

Warning:(42, 18) method ValidationFlatMapDeprecated in object Validation is deprecated: flatMap does not accumulate errors, use scalaz.\/ or import scalaz.Validation.FlatMap._ instead
    tlLat <- if (isValidLat(topLeftLat)) topLeftLat.success else "Invalid latitude".failure
             ^

This gives an indication as to why we are not there yet. The for comprehension syntax, which involves nested maps and flatMaps under the hood, cannot accumulate errors. This is because the moment a validation fails, the inner map function does not get called. This behaviour is baked into the flatMap function, and the for comprehension behaviour, as each validated value is in scope for the statement that follows. Instead we consider an alternative functional mechanism, for which an instance is provided by Validation: the applicative functor. This turns out to be more suitable by nature for solving the problem at hand. Just as map and flatMap are the fundamental methods of functors and monads respectively, the fundamental method of an applicative functor is apply2*. apply2 is an extension of map to a second dimension.
Instead we consider an alternative functional mechanism, for which an instance is provided by Validation: the applicative functor. This turns out to be more suitable by nature for solving the problem at hand. Just as map and flatMap are the fundamental methods of functors and monads respectively, the fundamental method of an applicative functor is apply2*. apply2 is an extension of map to a second dimension. It is like a map on two separate objects at once, where the inner contents of the two objects are formed into a pair, and a function is applied to this pair. If map represents a select on a single database table, apply2 represents a join. For an applicative functor F[_], apply2 has this signature:

```scala
def apply2[A, B, C](fa: ⇒ F[A], fb: ⇒ F[B])(f: (A, B) ⇒ C): F[C]
```

*Actually an applicative functor is usually defined as a structure with an ap method, as below. This is equivalent to apply2: consider proving that an exercise.

```scala
def ap[A, B](fa: ⇒ F[A])(f: ⇒ F[A ⇒ B]): F[B]
```

We start by transforming the familiar for comprehension syntax:

```scala
val action = for {
  a ← validateA
  b ← validateB
  c ← validateC
} yield {
  doSomething(a, b, c)
}
```

into the slightly less familiar syntax:

```scala
val action = (validateA |@| validateB |@| validateC) { (a, b, c) ⇒
  doSomething(a, b, c)
}
```

Not as expressive as the original for comprehension syntax, but still elegant enough. In the latter case, all three validate operations are called, regardless of whether the other operations fail or not, and a is clearly not in scope for validateB and validateC. Only if they all succeed is doSomething called; if any of them fail, this action is bypassed and the list of errors is returned.

There are instances where we can only validate a parameter once we have a valid value of another parameter: for example, we expect that topLeftLat > bottomRightLat. For such cases we can use flatMap / for comprehensions, and in practice we will end up with a combination of fors and |@|s. If we are using fors, we may want to include import scalaz.Validation.FlatMap._, as suggested by the compiler warning.

Given the availability of an applicative functor on the Validation object, the |@| operator is a convenient bit of Scalaz shorthand wizardry that instantiates this applicative functor and invokes the apply2 method on it. It's even smart enough to combine chained |@| operations into calls to applyN.

How are the errors accumulated? The "left" type parameter of Validation must have a Semigroup instance. That's a fancy way of saying that it has an append or concatenate operation at its disposal. This append operation technically must be associative, which itself is a fancy way of saying that if we want to concatenate three objects, we can combine the first two and then add the third at the end, or we can combine the last two and then add the first one to the front, and either way it ends up the same. The implementation of apply2 just invokes the semigroup's append function to accumulate failures. The real implementation is not exactly this, but this is the sense of it:

```scala
def apply2[EE, A, B, C](valA: Validation[EE, A], valB: Validation[EE, B])
                       (f: (A, B) ⇒ C)
                       (implicit E: Semigroup[EE]): Validation[EE, C] =
  (valA, valB) match {
    case (Success(a), Success(b))     ⇒ Success(f(a, b))
    case (e @ Failure(_), Success(_)) ⇒ e
    case (Success(_), e @ Failure(_)) ⇒ e
    case (Failure(e1), Failure(e2))   ⇒ Failure(E.append(e1, e2))
  }
```

The most useful concrete case for the left parameter type is a list (which is clearly a semigroup under concatenation), and specifically a non-empty list, as Failure(List()) does not make any sense. So useful is it that there is a special Scalaz type, ValidationNel, with some of its own additional features:

```scala
type ValidationNel[+E, +X] = Validation[NonEmptyList[E], X]
```

This is what we end up using.
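Here is a small illustrative demonstration of the accumulation (assumed, not from the original post):

```scala
import scalaz._
import Scalaz._

val tooSmall: ValidationNel[String, Int] = "too small".failureNel
val tooBig:   ValidationNel[String, Int] = "too big".failureNel

// both sides are evaluated, and both failures are kept
val combined: ValidationNel[String, Int] = (tooSmall |@| tooBig) { _ + _ }
// combined == Failure(NonEmptyList("too small", "too big"))
```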
We can rewrite our controller code as follows:

```scala
val action: scalaz.ValidationNel[String, Future[Result]] = (
  (if (isValidLat(topLeftLat)) topLeftLat.successNel else "Invalid latitude".failureNel)  |@|
  (if (isValidLon(topLeftLon)) topLeftLon.successNel else "Invalid longitude".failureNel) |@|
  ...
) { (tlLat, tlLon, ..., sm) ⇒
  val topLeft     = GeoPoint(tlLat, tlLon)
  val bottomRight = GeoPoint(brLat, brLon)
  val boundingBox = BoundingBox(topLeft, bottomRight)
  repo.search(keywords, sm, boundingBox) map { json ⇒ Ok(pretty(json)) }
}

action match {
  case Success(method) ⇒ method
  case Failure(errors) ⇒ ... // render the accumulated errors as a BadRequest (see the JSON below)
}
```

successNel and failureNel are syntactic-sugar extension methods. For example, "Invalid latitude".failureNel is the same as Failure(NonEmptyList("Invalid latitude")).

We aren't completely done, but we've broken the back of it. We may want to tidy up our code further by wrapping our errors in a case class. For example, for form validation we may have:

```scala
case class InvalidField(fieldName: String, reason: String)
```

and then define a new validation type:

```scala
type FieldValidation[A] = ValidationNel[InvalidField, A]
```

In addition we may wish to factor out some of our validation code into new methods (a sketch follows below). For example:

```scala
private def getValidBoundingBox(topLeftLat: BigDecimal, topLeftLon: BigDecimal,
                                bottomRightLat: BigDecimal, bottomRightLon: BigDecimal)
    : FieldValidation[BoundingBox] = ???
```
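As a sketch of how getValidBoundingBox might be implemented (an assumption, not from the original post: the helpers validLat and validLon are hypothetical, and the reason strings are taken from the JSON error output shown below):

```scala
private def validLat(field: String, lat: BigDecimal): FieldValidation[BigDecimal] =
  if (isValidLat(lat)) lat.successNel
  else InvalidField(field, "Must be between -90.0 and 90.0 (inclusive)").failureNel

private def validLon(field: String, lon: BigDecimal): FieldValidation[BigDecimal] =
  if (isValidLon(lon)) lon.successNel
  else InvalidField(field, "Must be between -180.0 and 180.0 (inclusive)").failureNel

private def getValidBoundingBox(topLeftLat: BigDecimal, topLeftLon: BigDecimal,
                                bottomRightLat: BigDecimal, bottomRightLon: BigDecimal)
    : FieldValidation[BoundingBox] =
  // all four coordinate validations run, and their failures accumulate
  (validLat("topLeftLat", topLeftLat)         |@|
   validLon("topLeftLon", topLeftLon)         |@|
   validLat("bottomRightLat", bottomRightLat) |@|
   validLon("bottomRightLon", bottomRightLon)) { (tlLat, tlLon, brLat, brLon) ⇒
    BoundingBox(GeoPoint(tlLat, tlLon), GeoPoint(brLat, brLon))
  }
```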
Our final robust validation solution couldn't be much simpler:

```scala
Action.async { request: Request[AnyContent] ⇒
  val action = (getValidKeywords(keywords) |@|
                getValidSearchMethod(searchMethod) |@|
                getValidBoundingBox(topLeftLat, topLeftLon, bottomRightLat, bottomRightLon)) {
    (validKeywords, validMethod, boundingBox) ⇒
      repo.search(validKeywords, validMethod, boundingBox) map toJsonString(request)
  }

  action match {
    case Success(method) ⇒ method
    case Failure(errors) ⇒ ... // as above, render the errors as a BadRequest
  }
}
```

The great thing is that we can factor out validation methods as we like; all validation errors get hoovered up and stored in the order they occurred, and the main action method only gets executed if all validation succeeds. If there are validation errors, the response to the client may look like this:

```json
{
  "validationErrors": [
    { "field": "topLeftLat",     "reason": "Must be between -90.0 and 90.0 (inclusive)" },
    { "field": "topLeftLon",     "reason": "Must be between -180.0 and 180.0 (inclusive)" },
    { "field": "bottomRightLat", "reason": "Must be between -90.0 and 90.0 (inclusive)" },
    { "field": "searchMethod",   "reason": "Must be one of: ['name', 'category', 'tag']" }
  ]
}
```

If this walkthrough isn't able to validate (excuse the pun) the benefits of functional programming, I can't imagine what will. This is not some esoteric, isolated use case. Input validation is something every production application has to do, and lots of it. Before functional programming came along, or at least before I started using it, I was never able to find a satisfactory way of doing it.

Review (Open Access)

Myomectomy during cesarean section or non-caesarean myomectomy in reproductive surgery: this is the dilemma

1 Department of Obstetrics and Gynecology and CERICSAL (CEntro di RIcerca Clinico SALentino), Veris delli Ponti Hospital, 73020 Lecce, Italy
2 Division of Experimental Endoscopic Surgery, Imaging, Technology and Minimally Invasive Therapy, Department of Obstetrics and Gynecology, Vito Fazzi Hospital, 73100 Lecce, Italy
3 Laboratory of Human Physiology, Phystech BioMed School, Faculty of Biological & Medical Physics, Moscow Institute of Physics and Technology (State University), 125009 Moscow, Russia
4 Nezhat Medical Center, Atlanta Center for Minimally Invasive Surgery and Reproductive Medicine, Atlanta, GA 30350, USA
5 Training and Education Program, Northside Hospital, Atlanta, GA 30106, USA
6 Department of Gynecology and Obstetrics, School of Medicine, Emory University, Atlanta, GA 30307, USA
8 Clinic of Gynecology and Obstetrics, University Clinical Center of Serbia, 11103 Belgrade, Serbia
9 University Clinical Center of Serbia, 11103 Belgrade, Serbia
10 Department of Obstetrics and Gynecology, Shrewsbury and Telford Hospital, NHS Trust, TF1 6 Telford, UK
11 Department for Histopathology, University Clinical Center of Serbia, 11103 Belgrade, Serbia

Clin. Exp. Obstet. Gynecol. 2021, 48(6), 1250–1258; https://doi.org/10.31083/j.ceog4806199

Submitted: 14 May 2021 | Revised: 5 July 2021 | Accepted: 21 July 2021 | Published: 15 December 2021

This is an open access article under the CC BY 4.0 license (https://creativecommons.org/licenses/by/4.0/).

Abstract

Nowadays it is quite common to encounter pregnant women over 35 years of age with uterine fibroids (UFs) requiring cesarean section (CS). Large UFs may cause severe complications, such as bleeding and hemorrhage, during vaginal or cesarean delivery. Cesarean myomectomy (CM) is frequently recommended, but obstetricians are generally reluctant to perform CM, since literature data do not agree on its surgical recommendation. CM is still particularly controversial, owing to the increased risk of perioperative hemorrhage and cesarean hysterectomy, and UFs are often left in situ during CS. CM investigations are generally directed at myomectomy-associated issues, whereas the complications of CS without CM are largely underreported. The risk of leaving UFs in place for an interval myomectomy is underestimated, and large UFs left in the uterus during CS may cause significant early and late postoperative complications, even necessitating a relaparotomy and/or a subsequent hysterectomy. CM would be mandatory in some instances, whatever the UF diameter, to avoid further damage or complications. UF management prior to CS should include full counselling on the pros and cons of the possible consequences of the surgical decision. To illustrate what is discussed above, the authors performed a narrative review with an expert opinion, reporting the case of a 31-year-old woman with a large UF who underwent a CS without myomectomy. Nine hours after the CS, the puerpera underwent an emergency relaparotomy with total hysterectomy without salpingo-oophorectomy, for massive postoperative hemorrhage and hemorrhagic shock.

Keywords: Uterine fibroids; Myoma; Pseudocapsule; Cesarean myomectomy; Complications; Hysterectomy; Caesarean section

1. Introduction

Uterine fibroids (UFs) or myomas represent the most common female benign tumors of reproductive age [1], consisting of smooth muscle cells, fibroblasts and extracellular matrix (ECM) [2]. UFs can have a negative impact on the reproductive system and may cause significant morbidity and impairment of quality of life [3]. The prevalence of UFs is between 5.4 and 77% in the female population, depending on the diagnostic techniques applied and the study population [4]. The overall prevalence of UFs is generally underestimated because epidemiological studies focus mostly on symptomatic women [5]. Epidemiological risk factors associated with the onset and growth of UFs are age, race, body mass index, sex hormones, heredity, lifestyle habits (including smoking, stress, physical activity, caffeine and alcohol consumption, and a diet rich in red meat and soy), environmental pollutants, as well as chronic diseases such as hypertension and diabetes [1]. Moreover, UFs represent a major public health issue because their follow-up and treatment, as well as their perinatal and obstetric complications, cause significant expenditure of resources and social expense [6]. UFs reach their largest sizes during women's reproductive age, and the incidence of UFs in pregnancy has been found to be about 0.05%–5% [7]. Since the incidence of UFs increases with age, and maternal age during pregnancy is advancing, the likelihood that obstetricians will manage pregnant women with UFs and deal with their associated complications is rising worldwide [8]. UFs in pregnancy are significantly linked to higher rates of cesarean section (CS) [9].

2. Uterine fibroids in pregnancy

The incidence of UFs in pregnancy is estimated to be up to 10% [10], and there are controversial findings about changes in UF size during pregnancy. While some authors reported an increase in UF dimensions in pregnancy, others did not show a significant influence of pregnancy on UF growth [11]. Vitagliano et al. [12] analyzed a total of 12 studies investigating the effect of pregnancy on UF size. According to their results, there is a trend of increasing UF dimension during the first trimester, while data on changes in UF size during the second and third trimesters are conflicting. Additionally, it has been shown that 10 to 40% of women with UFs experience complications, such as first trimester bleeding, miscarriage, pain due to red degeneration, placental abruption, preterm delivery, placenta previa, preterm premature rupture of membranes, intrauterine growth restriction, as well as an increased rate of CS and postpartum hemorrhage [3].

3. Myomectomy during pregnancy

A uterine fibroid during pregnancy may be associated with pregnancy complications. Although myomectomy is preferably avoided antenatally, it has been reported in symptomatic cases that did not respond to conservative management. Spyropoulou et al. [13] performed a recent meta-analysis including 54 relevant articles about myomectomy during pregnancy. The authors reported that the median gestational age at diagnosis was 13 (range 6–26) weeks, while the median gestational age at myomectomy was 16 (range 6–26) weeks. The most common indication for myomectomy during pregnancy was abdominal pain not responding to medical treatment. The median number of fibroids removed per patient was one (range 1–5). Most of them were subserous pedunculated, or subserous and fundal. The principal surgical approach was laparotomy, but laparoscopic and vaginal operations were also reported.
The pregnancy outcome was favorable in most of the cases, with few complications reported. According to the results of this meta-analysis, myomectomy during pregnancy appears to be a safe procedure in cases of symptomatic uterine fibroids not responding to conservative management and may therefore be considered, following appropriate counselling regarding the associated risks [13].

4. Cesarean myomectomy

Cesarean myomectomy (CM) has long been a matter of debate among obstetricians, as it carries real risks of early and late complications, especially in the case of large UFs [9]. Although the CM rate is currently increasing, many obstetricians are reluctant to perform CM because of the potential associated risks, particularly perioperative hemorrhage [14]. It has been documented that a single CM is associated with a higher rate of hemorrhage when the UF diameter is more than 75 mm [15]. Nevertheless, the literature demonstrates that CM can be performed safely in cases of single anterior and lower uterine segment UFs [15,16]. A broad obstetric consensus on the management of UFs, whether they should be left in place or removed during CS, has not yet been reached [9]. Data on the safety and feasibility of CM, especially for UFs which are difficult to enucleate because of their position in the pregnant uterus, are largely missing [9,17]. Although obstetricians are mostly reluctant to perform CM when large and deep intramural UFs are encountered, a pregnant woman who undergoes CM in such cases will rarely need another myomectomy in the course of her life. Many patients can therefore benefit from two operations performed in one laparotomy: CS and concurrent myomectomy. Moreover, CM can reduce the overall cost and prevent the risk of UF-related complications in subsequent pregnancies, such as anemia, infertility, pain and abnormal uterine bleeding [18]. Literature data show that CM is a safe surgical procedure if several factors, such as UF localization, diameter and number, uterine contractility, and the anatomic relationship of UFs with large vascular structures, are taken into account [19]. Nevertheless, many obstetricians remain skeptical about performing CM in the case of large UFs, owing to possible early perioperative complications, such as massive bleeding or uncontrollable hemorrhage, or late complications, such as cesarean hysterectomy, so they prefer to leave UFs in the uterus during CS. Based on the authors' experience gained after years of research on CM [9,15] and on clinical and surgical reasoning, the authors reviewed the pros and cons of CM, especially in the case of large UFs encountered during CS. Furthermore, the authors evaluated the potential clinical and surgical risks of not removing large UFs during CS.

5. Investigation setup

The authors used PubMed (1966–2020) for the search on CM. Index words and the combinations used were as follows: ("pregnancy" AND "uterine fibroid OR uterine fibroids OR leiomyoma"), ("cesarean section OR caesarean section" AND "myomectomy"), ("cesarean section OR caesarean section" AND "uterine fibroid OR uterine fibroids OR leiomyoma"), ("cesarean myomectomy" OR "caesarean myomectomy"), ("delivery" AND "uterine fibroid OR uterine fibroids OR leiomyoma"), ("myomectomy in pregnancy"), ("pregnancy OR delivery" AND "myomectomy"), and ("cesarean myomectomy OR caesarean myomectomy" AND "complications"). The terms "leiomyomas", "uterine fibroids", "fibromyomas", "leiomyofibromas" and "fibroleiomyomas" were also detected in the literature describing UFs [20].
The authors identified articles reporting these key words, also evaluating their references and selecting all available publications on this topic. Additional articles were identified from the references of relevant papers. In the narrative review performed, the term UF was preferred, and neither approval from, nor communication with, the Ethics Committee was required. The authors noted the low quality of the available papers, in most cases only case reports on CM, with no randomized controlled trials (RCTs) on CM. Moreover, there are no papers on complications related to the failure to perform CM. After discussing the pros and cons of CM, the authors illustrate, as an example, a case of uncontrollable postoperative hemorrhage after an elective CS without CM, performed in the presence of a large single posterior UF, which required a relaparotomy and hysterectomy. The Ethics Committee of the University Clinical Centre of Serbia approved data collection from the patient in the reported case (No. 361/2).

6. Pros and cons of cesarean myomectomy

CM has been a matter of debate among obstetric surgeons for over 100 years, since it is a surgical procedure combining two major operations into one, myomectomy and CS, both of which are potentially associated with complications, such as heavy bleeding or hemorrhage, especially in the case of large UFs [21]. Since a recent meta-analysis has not yet established conclusions on the feasibility and safety of CM [22], the decision to perform a CM, especially in the case of large UFs, is generally based on the patient's preference, the surgeon's skill and the intraoperative findings. Literature data provide reports of substantial early and late complications after CM, due to the high risk of heavy bleeding or hemorrhage, owing to the increased uterine vascularization in pregnancy [3,23]. Perioperative hemorrhage after CM generally requires reoperation [3] for hysterectomy [3], arterial embolization or ligation [3,14] and blood transfusion [24], these being the most common CM complications. Obstetric intensive care unit (ICU) admissions after CM have also been described. Sparic et al. [25] observed a relatively high rate of obstetric ICU admissions following CM, and Seffah et al. [26] reported a case of disseminated intravascular coagulopathy after CM. Furthermore, authors have reported ileus [14], significantly prolonged operative time [23] and prolonged hospital stay as consequences of CM [27]. Significant differences between preoperative and postoperative hemoglobin values, and postoperative fever [23], have also been documented during and after CM. Simsek et al. [28] reported that postoperative hemoglobin and the mean difference in hemoglobin change were significantly different between women who had CM and women who had CS without CM. Postpartum hysterectomy is one of the complications of CM as well. Hassiakos et al. [29] reported that the only disadvantages of CM were prolonged duration of surgery and hospitalization, without any other differences between women who had CS and women who had CM. They also reported that none of the patients submitted to CM required blood transfusion [29]. Guler et al. [30] showed that changes in preoperative, postoperative, and mean hemoglobin values were not significantly different between patients with intramural or subserosal CM and those with CS only. Kim et al. [14] investigated CM risk factors for major complications, reporting that patients with complications were older, with lower parity and bigger UFs [20]. Data from recent studies on CM outcomes are encouraging.
The results reported by Zhao et al. [31] showed an absence of significant differences in the frequency of postpartum hemorrhage and neonatal outcomes (neonatal weight, fetal distress, and neonatal asphyxia) between patients with CM and patients with CS only. This study demonstrated that both birth weight ≥4000 g and the presence of UFs larger than 50 mm were risk factors for postpartum hemorrhage. These results indicate that CM is safe and feasible when performed by skilled obstetricians. Despite the large number of CM cases included in the study, the question regarding the management of large UFs during CS remained unsolved. Dedes et al. [32] evaluated the outcome of CM versus CS alone and the risk factors for adverse outcomes, concluding that CM is not associated with significant surgical adverse outcomes. Large UFs measuring ≥50 mm were associated with an increased blood loss of ≥500 mL, while women ≥40 years of age had a significant postoperative drop in hemoglobin; the authors concluded that CM can be safe in selected pregnant women without additional pre-existing risk factors. El-Refai et al. [33] observed that CM can be performed safely without an increase in the amount of blood transfused or in the postoperative drop in hemoglobin level, at the cost only of prolonged operative time and postoperative hospitalization. Results obtained from the meta-analysis conducted by Huang et al. [34] showed that intramural UFs, when larger than 70 mm in diameter and multiple, were associated with more frequent intraoperative hemorrhage. These results suggest that CM performed by skilled surgeons with appropriate hemostatic techniques is a safe and feasible procedure in selected patients. CM can be performed even for large UFs, regardless of size and location, except for UFs located close to large vessels or angular UFs. Similar results were obtained in a meta-analysis by Goyal et al. [35], who found no statistically significant difference in the incidence of hemorrhage between patients undergoing CM and CS alone. They reported a statistically significant, but clinically insignificant, drop in hemoglobin, with a significantly more frequent need for blood transfusion in the group of patients with CM compared with women with CS alone. The authors concluded that CM should be preferred over CS alone, especially in tertiary care centers with staff expert in performing CM using appropriate hemostatic techniques. Sparic et al. [21] conducted a retrospective study to analyze the incidence of, and risk factors for, perioperative complications in women with a single UF submitted to CM. The study found no significant differences in either minor or major complications among CM, laparotomic myomectomy and CS alone, highlighting the safety of CM, without additional risks, when compared to CS alone and to laparotomic myomectomy in reproductive-age women [15]. Kwon et al. [36] compared differences in maternal characteristics, UF types, neonatal weight and operative outcomes between CS alone and CM. A subgroup analysis according to UF size (>50 mm or not) in the CM group revealed no significant differences in mean hemoglobin change, operative time, or length of hospital stay between the two groups. These authors reported no statistically significant differences in maternal characteristics, neonatal weight, or UF types between the two groups.
Regarding CM complications, the authors highlighted the literature bias on CM concerning checks of uterine healing, the quality of the scar at the hysterotomy site, and uterine myometrial integrity for subsequent pregnancy. Akkurt et al. [37] reported the long-term outcomes of CM in a study group of 91 pregnant women; none of the participants, all of whom delivered subsequent pregnancies by CS, had uterine rupture during pregnancy or delivery, and only one had uterine dehiscence and preterm delivery. Regarding placental complications in pregnant women with UFs, Akkurt et al. [37] reported a prevalence of placenta previa of about 3.1% following CM, while a higher prevalence of placenta previa was reported by Adesiyun et al. [38]. Akkurt et al. [37] also investigated adhesion formation as a late complication of CM, bearing in mind that postoperative adhesions are a well-known complication of conventional abdominal myomectomy. They observed postoperative adhesions in 25% of patients during subsequent CS. Another bias is the data on UF recurrence after CM, which are largely missing in the literature. Akkurt et al. [37] reported a recurrence rate of 8.4% in women with CM, while 4.1% required additional major surgery for UFs (one abdominal myomectomy and two abdominal hysterectomies). As risk factors for UF recurrence after CM, the authors identified long follow-up (mean, 8.2 years), advanced age (>45 years), a history of multiple UFs and larger UF size (>70 mm). None of the patients with UF recurrence became pregnant during the follow-up period.

The literature also shows some advantages of CM, both over CS without UF enucleation and over interval myomectomy. The uterine incision is smaller than in a non-pregnant uterus, because the UF/uterus ratio is smaller: the uterus grows faster than the UF in pregnancy [23]. CM is technically easier to perform, and the hysterorrhaphy is easier in a pregnant uterus than in a non-pregnant one, owing to the increased uterine elasticity in pregnancy [23]. Moreover, hemorrhage is reduced by puerperal uterine contraction and physiological involution. CM provides the benefit of two operations in one, thus decreasing the risks and costs of reoperation. CM improves quality of life and decreases UF-related symptoms, reducing the risk of UF-related complications in the puerperium and subsequent pregnancies [3]. Although there is no accurate diagnostic tool to assess the healing process at the myomectomy site and the quality of the scar, ultrasound assessment and intrasurgical visualization during subsequent CS suggest better scar integrity after CM than after interval myomectomy [23]. Adesiyun et al. [38] reported that UF removal during CM increased the chances of vaginal delivery in subsequent pregnancies, observing that the total rate of vaginal delivery after CM was 76.5%. According to the results of the same study, the rate of spontaneous pregnancy after CM was 79.3%, indicating that future fertility and subsequent pregnancy outcomes were not impaired by previous CM [38]. The authors summarized the literature data in Table 1 (Ref. [3,9,13,14,22,23,25-28,32-34,36,37,39]).

Table 1. Pros and cons of CM.

| Pros of CM | Study | Cons of CM | Study |
| --- | --- | --- | --- |
| Smaller incision of uterus | Malvasi et al. [22]; Incebiyik et al. [23] | Increased risk of perioperative hemorrhage | Sparic [3]; Kim et al. [13]; Malvasi et al. [22] |
| Improved quality of life, reduced risk of myoma complications during puerperium and next pregnancy | Sparic et al. [9]; Sparic et al. [14]; Malvasi et al. [22]; El-Refai et al. [32]; Goyal et al. [34]; Sparic et al. [39] | Increased risk of intravascular coagulopathy | Seffah et al. [25] |
| Better scar integrity | Malvasi et al. [22]; Tinelli et al. [26] | Prolonged operative time and hospital stay | Malvasi et al. [22]; Hassiakos et al. [28]; El-Refai et al. [32] |
| Increased chance of vaginal delivery in next pregnancy | Akkurt et al. [36]; Adesiyun et al. [37] | Significant drop in hemoglobin | Simsek et al. [27]; Huang et al. [33] |

7. Cesarean section without myomectomy

Leaving UFs in the pregnant uterus during CS has also been associated with severe complications. Hasan et al. [40] reported a high incidence of hysterectomy for postpartum hemorrhage. Davis et al. [41] observed an increased incidence of postpartum sepsis in patients after CS without myomectomy. Price et al. [42] showed rapid UF growth after elective CS, causing abdominal pain and a drop in hemoglobin, and requiring further surgical management after CS. Yellamareddygari et al. [43] reported the case of a 35-year-old woman who had a CS without UF removal. In the postpartum period, she experienced a persistent foul-smelling vaginal discharge for 3 weeks and urinary retention for 2 days. After an urgent ultrasound scan, the patient underwent an emergency myomectomy in the early puerperium [43]. Murakami et al. [44] reported a case of a UF prolapsed into the vagina after elective CS without concurrent myomectomy, which required further surgery for a refractory infection [44]. A similar case reported by Zhang et al. [45] concerned the spontaneous expulsion of a huge cervical UF from the vagina after CS. Haskins et al. [46] reported the case of a 39-year-old primigravida who underwent laparotomy for myomectomy of a 100 mm pedunculated UF six months after CS. During the CS, the obstetricians had detected an intramural UF located in the fundus and decided to avoid CM [46]. Another case report described hypovolemic shock caused by edema of a pedunculated UF not removed during CS; a subsequent hysterectomy was performed to resolve the hypovolemic shock [47]. Ergenoglu et al. [48] reported a case of pulmonary embolism in a 38-year-old woman submitted to emergency CS without concurrent removal of a 150 mm intramural UF. The puerpera returned to the hospital on the 40th day after discharge with sepsis and pulmonary embolism, caused by obstruction of lochia drainage by the UF left unremoved during CS. She required an urgent hysterectomy, in association with medical therapy for the pulmonary embolism [48]. The authors summarized the literature data in Table 2 (Ref. [3,22,23,26,28,32,40,42-47]).

Table 2. Pros and cons of non-CM.

| Pros of non-CM | Study | Cons of non-CM | Study |
| --- | --- | --- | --- |
| Decreased risk of perioperative hemorrhage | Malvasi et al. [22]; Incebiyik et al. [23] | Hemorrhage in puerperium | Yellamareddygari et al. [42] |
| Shorter operative time and hospital stay | Malvasi et al. [22]; Tinelli et al. [26]; Hassiakos et al. [28]; El-Refai et al. [32] | Sepsis | Davis et al. [40] |
| | | Increase in myoma size | Yellamareddygari et al. [42]; Sparic R [3]; Haskins et al. [45] |
| | | Expulsion of myoma | Murakami et al. [43]; Zhang et al. [44] |
| | | Hypovolemic shock | Koide et al. [46] |
| | | Pulmonary embolism | Ergenoğlu et al. [47] |

8. Descriptive case of the possible complications of non-cesarean myomectomy

The authors report an example of what can happen if a large UF is left in situ during a CS, presented as an unusual complication of UF in the late postpartum period after CS.
A 31-year-old woman (G4, P2) was referred to a University-affiliated hospital at 35 weeks of gestation for threatened preterm birth. The patient's history included an intramural UF diagnosed one year prior to the pregnancy, two vaginal deliveries at term (nine and six years before hospital admission) and a missed abortion (one year before). At hospital admission, the ultrasound (US) scan showed a breech presentation of the fetus and a large UF measuring 100 × 130 mm, located at the fundus and posterior uterine wall, with the placenta covering approximately 50% of its surface. The patient was counselled about the risk of preterm labor, and birth options were discussed, including fetal malposition in labor. The patient opted for an elective CS at 40 weeks of gestation and refused the option of CM, preferring a subsequent interval myomectomy. The elective CS was scheduled at term, with the delivery of a newborn of 3500 g. During the CS the uterus was slightly atonic, and the patient received additional uterotonic medications. Two hours after the CS, three episodes of postpartum hemorrhage (PPH) occurred, managed medically with methylergometrine, oxytocin and carboprost. A US check of the uterus revealed an intact uterine scar region, the presence of blood clots in the uterine cavity and a UF with a maximum diameter of 140 mm. After removal of the blood clots, uterine massage and placement of a vaginal tamponade, the uterus was well contracted and the patient was hemodynamically stable. She received one unit of packed red blood cells (RBC); her hemoglobin level was 96 g/L and her hematocrit 30.7%. Nine hours after the CS, the patient had an episode of massive PPH due to uterine atony, and she was assessed by the anesthesiologist as hemodynamically unstable, with a blood pressure of 95/58 mmHg and tachycardia (≥110 bpm). The clinicians decided on an immediate emergency re-laparotomy. At laparotomy, the obstetricians found a very large and atonic uterus with a normal surface (Fig. 1). The UF was palpable through the posterior uterine wall, and with the UF left in the uterus it was impossible to apply either a hemostatic B-Lynch suture or uterine compression. The surgeons were urgently forced to choose between a myomectomy and a hysterectomy, since the anesthesiologists considered the patient very unstable and in a life-threatening condition. A total hysterectomy without salpingo-oophorectomy was performed, as the only option to achieve prompt and effective surgical hemostasis. During surgery the patient received a total of 845 mL of autologous blood and 1 g of tranexamic acid. The immediate postoperative recovery required the transfusion of a further six units of packed RBC (1555 mL), plus three units of fresh frozen plasma and 10 units of cryoprecipitate; the patient was transferred to the intensive care unit (ICU) for two days, and her further recovery was uneventful. She was discharged nine days after the hysterectomy. The histopathology report described a 165 × 130 × 125 mm uterus, weighing 1570 g, with one large single posterior-wall UF measuring 110 mm in diameter and two smaller UFs measuring 5 and 15 mm (Fig. 2). In view of the discussion above, it is possible to affirm that this is a near-miss case which could have ended tragically, because of the choice to avoid UF removal during CS. This approach led to uterine atony and to the subsequent urgent, life-threatening post-CS hysterectomy.
What happened so dramatically should also be evaluated through analysis of the scientific literature on this topic, underlining the importance of surgical experience and intraoperative findings in cases of large UFs during CS [9].

Fig. 1. Macroscopic examination of the uterus after hysterectomy. The uterus is grossly enlarged and its surface appears completely normal. (a) Anterior view. (b) Posterior view.

Fig. 2. The histopathology specimen of the uterus, which has been sectioned and reveals the large UF.

9. Analysis of the literature on the effects of failure to remove fibroids during a cesarean section

Sparic et al. [39] documented a case of multiple myomectomies performed by an experienced surgeon with the adjunct of intraoperative cell salvage, without any perioperative complications. The largest UF diameter was 210 mm, and the total weight of the enucleated UFs was 3300 g. Ma et al. [40] reported a CM of a 400 mm (3645 g) large intramural UF, with bilateral ligation of the uterine arteries, without any perioperative complications. In the case of large intramural UFs greater than 50 cm³, CM can be associated with an increased rate of bleeding or hemorrhage [17]. Even if it is believed that, for a UF encountered during CS, an interval myomectomy would be the safer option, large UFs can nevertheless cause excessive bleeding, uterine atony and massive PPH, even necessitating an emergency intra- or post-CS hysterectomy to control the PPH, as reported in our case. Although UFs are frequently encountered during CS, especially in older women with UFs, the exact indications and contraindications for CM have not been agreed upon among obstetricians [49]. Generally, it is not recommended to remove UFs located in the fundal and cornual uterine areas, or those located in proximity to major blood vessels, in order to avoid massive bleeding, even though most studies were not stratified according to the location and type of the myomas, which affect the risk of hemorrhage. Additionally, the only contraindications for CM widely reported in the literature concern uterine atony following fetal extraction and pre-existing maternal coagulopathies [49]. In the authors' reported case, uterine atony was identified during the elective CS after the fetal delivery, due to the inability of the uterus to contract properly because of the UF's localization and large size. In fact, the uterotonics given intraoperatively and those given in the immediate postoperative period were relatively ineffective in achieving adequate uterine contraction. Nyflot et al. [50] conducted a large case-control study in Norway including a total of 43,105 deliveries, among which severe PPH was recorded in 1064 cases. The most frequent cause of PPH was pure uterine muscular atony, present in 60.4% of cases, after excluding cases with atony due to retention of placental tissue. The authors showed that severe PPH was more likely to occur in women with UFs (4.9% of cases with severe PPH) and that the presence of UFs increased the risk of severe PPH almost three-fold. Finally, even though UF size and type are the most important features determining the perioperative risks of CM, the possible intra- and postoperative complications of UFs during CS should be explained during pre-surgical counselling with the pregnant woman when deciding whether or not to perform a CM [49]. Although immediate abdominal myomectomy can be performed safely in cases of PPH, this approach seems to be recommended only in patients who have had a successful vaginal delivery.
If the chosen mode of delivery is a CS, a CM may perhaps be an option to recommend in such patients with a previous vaginal delivery [51]. Large UFs have also been described in association with retained placental products, both leading to PPH; however, this association was excluded in the above case after the uterine histopathological examination [44,51]. The high probability of uterine atony and consequent PPH (as in our case) raises the question of whether CM in such cases would have led to a more favorable maternal outcome, such as fertility preservation instead of hysterectomy, but it is not really possible to provide an answer, owing to the lack of clinical data. Hence, from what has been discussed, CM could represent an alternative option for the prevention of uterine atony and subsequent PPH in selected cases.

10. The most common complications after cesarean myomectomy

CM is a surgical procedure generally feared by obstetricians, as it can lead to a number of short- and long-term complications, such as perioperative hemorrhage [3,14], disseminated intravascular coagulopathy (DIC) [26], ileus [14] and postoperative fever [23]. While postoperative hemorrhage [3,14] is a condition to be anticipated in any surgical intervention, and is therefore readily treatable, DIC [26] is another story, with its high rates of mortality and morbidity. On the other hand, ileus [14] and postoperative fever [23] are complications that can easily be dealt with in any postoperative course, so they do not cause major concern among surgeons.

11. Conclusions

CM has always been a controversial topic, since risk factors for CM complications have been established without assessing the risks of avoiding a CM. It is still necessary to assess which UFs increase the probability of complications if they are not removed during CS, especially as the complications of CS in the presence of UFs are largely under-reported. This manuscript discusses how a CM can, in some narrow circumstances, prevent massive PPH and subsequent hysterectomy. However, obstetricians should also consider novel techniques and their effects on the decision for CM; in particular, the use of minimally invasive surgery employing energy instruments (LigaSure, harmonic scalpel) can modify operative times and blood loss. Moreover, CM could also be valuable in resource-limited settings using traditional techniques, given its advantages and limited risks. Nevertheless, there is a lack of prospective RCTs on the safety and feasibility of CM, and the available data refer only to case-control and descriptive retrospective studies of low quality, with large biases and scant conclusions. Therefore, further large multicenter prospective trials on CM are needed.

Author contributions

RSP, ILL, DP and AT designed the article. RSP, MA, DT and AT wrote the manuscript; RST performed the histopathological analysis. CN reviewed and edited the manuscript. All authors contributed to editorial changes in the manuscript. All authors read and approved the final manuscript.

Ethics approval and consent to participate

The Ethics Committee of the University Clinical Centre of Serbia approved data collection from the patient in the reported case (No. 361/2). The patient gave her consent, on file, to anonymous participation in the publication of her clinical case.

Acknowledgment

Thanks to all the peer reviewers for their opinions and suggestions.

Funding

This research received no external funding.

Conflict of interest

The authors declare no conflict of interest.
AT is a Guest Editor of this journal. Given his role as Guest Editor, he had no involvement in the peer review of this article and had no access to information regarding its peer review. Publisher's Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Expanding and Reducing Math Expression

Tool to expand and reduce a mathematical expression (expanding and reducing polynomials, fractions, equations, trigonometry, etc.). Tag(s): Symbolic Computation

How to expand and reduce a polynomial-like expression?

Expansion expresses the polynomial as an addition (sum) or subtraction (difference) of factors. Reduction groups the terms of the polynomial and simplifies the result.

Example: $$(a+b)^2-2ab$$ can be expanded to $$a^2+2ab+b^2-2ab$$ and then reduced to $$a^2+b^2$$

Expansion and reduction are basic maths operations taught in middle school.

How to expand and reduce a fraction?

dCode offers dedicated tools for calculating with fractions. Expansion consists of writing the fractions over a common denominator (such as the least common multiple of the denominators), and reduction is the simplification of the result by cancelling common factors.

How to expand and reduce a trigonometric expression?

dCode can expand trigonometric expressions in order to simplify (reduce) their content (the goal is to reduce the content, between parentheses, of the sine and cosine functions).

Example: $$\sin{2x}$$ gives $$2\sin{x}\cos{x}$$
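Returning to the fraction case, here is a worked example (an illustrative computation, not from the original page): to add $$\frac{1}{2}+\frac{1}{3}$$ we first expand both fractions to the common denominator $\operatorname{lcm}(2,3)=6$, then reduce:

$$\frac{1}{2}+\frac{1}{3}=\frac{3}{6}+\frac{2}{6}=\frac{5}{6}$$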
## Second-order sufficient optimality conditions for the optimal control of Navier-Stokes equations. (English) Zbl 1111.49017

The authors establish sufficient optimality conditions for a class of optimal control problems governed by the Navier-Stokes equations. In the steady-state case the problem is: minimize the quadratic cost functional
$$J(u) := \int_\Omega |y(u,x)-y_d(x)|^2\,dx + \frac{\gamma}{2}\int_\Omega |u(x)|^2\,dx$$
subject to the Navier-Stokes equations
$$-\nu\Delta y + (y\cdot\nabla)y + \nabla p = u \ \text{ in } \Omega, \qquad \operatorname{div} y = 0 \ \text{ in } \Omega, \qquad y = 0 \ \text{ on } \partial\Omega,$$
where $\Omega\subset\mathbb{R}^n$ is bounded. The controls satisfy the unilateral conditions $u_a(x)\leq u(x)\leq u_b(x)$ on $\Omega$. Under a smallness assumption, the authors show that an optimal control satisfying the first-order necessary condition defines a (local) minimizer, provided the second derivative of the associated Lagrange function is positive for all nonzero functions from $L^2(\Omega)$ which vanish on the active sets of the control under consideration. The proof relies on the smallness assumption (A2), the quadratic structure of the cost functional, and the definition of the admissible set of controls.
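In symbols, the sufficient condition can be sketched as follows. This rendering is an assumption based on the review's wording, where $\bar u$ denotes a control satisfying the first-order necessary conditions and $\mathcal{L}$ the associated Lagrange function:

$$\mathcal{L}''(\bar u)[h,h] > 0 \quad \text{for all } h \in L^2(\Omega),\ h \neq 0,\ \text{with } h(x) = 0 \text{ wherever } \bar u(x) = u_a(x) \text{ or } \bar u(x) = u_b(x).$$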
### MSC:

49K20 Optimality conditions for problems involving partial differential equations
35Q30 Navier-Stokes equations
76D55 Flow control and optimization for incompressible viscous fluids
## anonymous one year ago

A toy rocket was launched from the ground. The function f(x) = -16x^2 + 128x shows the height of the rocket f(x), in feet, from the ground at time x seconds. What is the axis of symmetry of the graph of f(x), and what does it represent?

1. anonymous: so the 'x' coordinate of our vertex is 4... so I'm going to say the rocket would take 4 seconds to get up to maximum height... and then it will fall back down, taking 4 seconds. How do I know it will only take 4 seconds to get down? Well, I would find the x-intercepts of the parabola by factoring: -16x^2 + 128x = 0. Factor out like terms: -16x(x - 8) = 0, so we have 2 equations, -16x = 0 and (x - 8) = 0. That means our 2 x-intercepts are 0 and 8. So if it takes 4 seconds to go up to the max height, we only have 4 seconds to fall back down. Hence the answer being B.

2. anonymous: Woah, where did you get that answer from?

3. misty1212: what on earth?

4. misty1212: first coordinate of the vertex is always (i repeat always) $-\frac{b}{2a}$

5. misty1212: do $-\frac{128}{2\times (-16)}$ and that is all

6. anonymous: @misty1212 What he did was, he went on Google, searched up my question, and some other guy that had already posted this question answered it.

7. misty1212: i love google, but $$-\frac{b}{2a}$$ cannot be that hard to compute. in any case, what does "the answer is B" mean in this context?

8. anonymous

9. anonymous: Thanks @LesTwins, I do the same. I already got that answer but I wanted to double-check. Quite slick.

10. anonymous: I did the math, then searched with Google to give you the assurance you need to get the question correct.
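Collecting the thread's computation in one place (a worked restatement of misty1212's formula): for $f(x) = -16x^2 + 128x$, with $a = -16$ and $b = 128$,

$$x = -\frac{b}{2a} = -\frac{128}{2 \times (-16)} = 4,$$

so the axis of symmetry is $x = 4$. It represents the time, 4 seconds after launch, at which the rocket reaches its maximum height; the flight is symmetric about that instant.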
## Using the Session

The :func:`.orm.mapper` function and :mod:`~sqlalchemy.ext.declarative` extensions are the primary configurational interface for the ORM. Once mappings are configured, the primary usage interface for persistence operations is the :class:`.Session`.

### What does the Session do?

In the most general sense, the :class:`~.Session` establishes all conversations with the database and represents a "holding zone" for all the objects which you've loaded or associated with it during its lifespan. It provides the entrypoint to acquire a :class:`.Query` object, which sends queries to the database using the :class:`~.Session` object's current database connection, populating result rows into objects that are then stored in the :class:`.Session`, inside a structure called the Identity Map - a data structure that maintains unique copies of each object, where "unique" means "only one object with a particular primary key".

The :class:`.Session` begins in an essentially stateless form. Once queries are issued or other objects are persisted with it, it requests a connection resource from an :class:`.Engine` that is associated either with the :class:`.Session` itself or with the mapped :class:`.Table` objects being operated upon. This connection represents an ongoing transaction, which remains in effect until the :class:`.Session` is instructed to commit or roll back its pending state.

All changes to objects maintained by a :class:`.Session` are tracked - before the database is queried again or before the current transaction is committed, it flushes all pending changes to the database. This is known as the Unit of Work pattern.

When using a :class:`.Session`, it's important to note that the objects which are associated with it are proxy objects to the transaction being held by the :class:`.Session` - there are a variety of events that will cause objects to re-access the database in order to keep synchronized. It is possible to "detach" objects from a :class:`.Session`, and to continue using them, though this practice has its caveats. It's intended that usually, you'd re-associate detached objects with another :class:`.Session` when you want to work with them again, so that they can resume their normal task of representing database state.

### Getting a Session

:class:`.Session` is a regular Python class which can be directly instantiated. However, to standardize how sessions are configured and acquired, the :class:`.sessionmaker` class is normally used to create a top-level :class:`.Session` configuration which can then be used throughout an application without the need to repeat the configurational arguments. The usage of :class:`.sessionmaker` is illustrated below:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# an Engine, which the Session will use for connection resources
some_engine = create_engine('postgresql://scott:tiger@localhost/')

# create a configured "Session" class
Session = sessionmaker(bind=some_engine)

# create a Session
session = Session()

# work with the session
myobject = MyObject('foo', 'bar')
session.add(myobject)
session.commit()
```

Above, the :class:`.sessionmaker` call creates a factory for us, which we assign to the name Session. This factory, when called, will create a new :class:`.Session` object using the configurational arguments we've given the factory. In this case, as is typical, we've configured the factory to specify a particular :class:`.Engine` for connection resources.
A typical setup will associate the :class:.sessionmaker with an :class:.Engine, so that each :class:.Session generated will use this :class:.Engine to acquire connection resources. This association can be set up as in the example above, using the bind argument.

When you write your application, place the :class:.sessionmaker factory at the global level. This factory can then be used by the rest of the application as the source of new :class:.Session instances, keeping the configuration for how :class:.Session objects are constructed in one place.

The :class:.sessionmaker factory can also be used in conjunction with other helpers, which are passed a user-defined :class:.sessionmaker that is then maintained by the helper. Some of these helpers are discussed in the section :ref:session_faq_whentocreate.

A common scenario is that the :class:.sessionmaker is invoked at module import time, but the generation of one or more :class:.Engine instances to be associated with the :class:.sessionmaker has not yet proceeded. For this use case, the :class:.sessionmaker construct offers the :meth:.sessionmaker.configure method, which will place additional configuration directives into an existing :class:.sessionmaker that will take effect when the construct is invoked:

    from sqlalchemy.orm import sessionmaker
    from sqlalchemy import create_engine

    # configure Session class with desired options
    Session = sessionmaker()

    # later, we create the engine
    engine = create_engine('postgresql://...')

    # associate it with our custom Session class
    Session.configure(bind=engine)

    # work with the session
    session = Session()

Creating Ad-Hoc Session Objects with Alternate Arguments

For the use case where an application needs to create a new :class:.Session with special arguments that deviate from what is normally used throughout the application, such as a :class:.Session that binds to an alternate source of connectivity, or a :class:.Session that should have other arguments such as expire_on_commit established differently from what most of the application wants, specific arguments can be passed to the :class:.sessionmaker factory's :meth:.sessionmaker.__call__ method. These arguments will override whatever configurations have already been placed, such as below, where a new :class:.Session is constructed against a specific :class:.Connection:

    # at the module level, the global sessionmaker,
    # bound to a specific Engine
    Session = sessionmaker(bind=engine)

    # later, some unit of code wants to create a
    # Session that is bound to a specific Connection
    conn = engine.connect()
    session = Session(bind=conn)

The typical rationale for the association of a :class:.Session with a specific :class:.Connection is that of a test fixture that maintains an external transaction - see :ref:session_external_transaction for an example of this.

Using the Session

Quickie Intro to Object States

It's helpful to know the states which an instance can have within a session:

• Transient - an instance that's not in a session, and is not saved to the database; i.e. it has no database identity. The only relationship such an object has to the ORM is that its class has a mapper() associated with it.

• Pending - when you :func:~sqlalchemy.orm.session.Session.add a transient instance, it becomes pending. It hasn't actually been flushed to the database yet, but it will be when the next flush occurs.

• Persistent - An instance which is present in the session and has a record in the database.
You get persistent instances by either flushing so that the pending instances become persistent, or by querying the database for existing instances (or moving persistent instances from other sessions into your local session).

• Detached - an instance which has a record in the database, but is not in any session. There's nothing wrong with this, and you can use objects normally when they're detached, except they will not be able to issue any SQL in order to load collections or attributes which are not yet loaded, or were marked as "expired".

Knowing these states is important, since the :class:.Session tries to be strict about ambiguous operations (such as trying to save the same object to two different sessions at the same time).

Session Frequently Asked Questions

• When do I make a :class:.sessionmaker ?

Just one time, somewhere in your application's global scope. It should be looked upon as part of your application's configuration. If your application has three .py files in a package, you could, for example, place the :class:.sessionmaker line in your __init__.py file; from that point on, your other modules can say "from mypackage import Session". That way, everyone else just uses :class:.Session(), and the configuration of that session is controlled by that central point.

If your application starts up, does imports, but does not know what database it's going to be connecting to, you can bind the :class:.Session at the "class" level to the engine later on, using :meth:.sessionmaker.configure.

In the examples in this section, we will frequently show the :class:.sessionmaker being created right above the line where we actually invoke :class:.Session. But that's just for example's sake! In reality, the :class:.sessionmaker would be somewhere at the module level. The calls to instantiate :class:.Session would then be placed at the point in the application where database conversations begin.

• When do I construct a :class:.Session, when do I commit it, and when do I close it ?

A :class:.Session is typically constructed at the beginning of a logical operation where database access is potentially anticipated. The :class:.Session, whenever it is used to talk to the database, begins a database transaction as soon as it starts communicating. Assuming the autocommit flag is left at its recommended default of False, this transaction remains in progress until the :class:.Session is rolled back, committed, or closed. The :class:.Session will begin a new transaction if it is used again, subsequent to the previous transaction ending; from this it follows that the :class:.Session is capable of having a lifespan across many transactions, though only one at a time. We refer to these two concepts as transaction scope and session scope.

The implication here is that the SQLAlchemy ORM is encouraging the developer to establish these two scopes in his or her application, including not only when the scopes begin and end, but also the expanse of those scopes: for example, should a single :class:.Session instance be local to the execution flow within a function or method, should it be a global object used by the entire application, or somewhere in between these two. The burden placed on the developer to determine this scope is one area where the SQLAlchemy ORM necessarily has a strong opinion about how the database should be used. The unit-of-work pattern is specifically one of accumulating changes over time and flushing them periodically, keeping in-memory state in sync with what's known to be present in a local transaction.
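One common way to tie transaction scope and session scope together is a small helper that opens a Session for a logical operation, commits on success, rolls back on failure, and always closes. A minimal sketch; the session_scope name is our own hypothetical helper, and it assumes the module-level Session factory and MyObject mapping from the earlier examples:

    from contextlib import contextmanager

    @contextmanager
    def session_scope():
        # provide a transactional scope around a series of operations;
        # here, transaction scope and session scope are the same
        session = Session()
        try:
            yield session
            session.commit()
        except:
            session.rollback()
            raise
        finally:
            session.close()

    # usage: one logical operation, one Session, one transaction
    with session_scope() as session:
        session.add(MyObject('foo', 'bar'))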
The unit-of-work pattern is only effective when meaningful transaction scopes are in place.

It's usually not very hard to determine the best points at which to begin and end the scope of a :class:.Session, though the wide variety of application architectures possible can introduce challenging situations. A common choice is to tear down the :class:.Session at the same time the transaction ends, meaning the transaction and session scopes are the same. This is a great choice to start out with as it removes the need to consider session scope as separate from transaction scope.

While there's no one-size-fits-all recommendation for how transaction scope should be determined, there are common patterns. Especially if one is writing a web application, the choice is pretty much established. A web application is the easiest case because such an application is already constructed around a single, consistent scope - this is the request, which represents an incoming request from a browser, the processing of that request to formulate a response, and finally the delivery of that response back to the client. Integrating web applications with the :class:.Session is then the straightforward task of linking the scope of the :class:.Session to that of the request. The :class:.Session can be established as the request begins, or using a lazy initialization pattern which establishes one as soon as it is needed. The request then proceeds, with some system in place where application logic can access the current :class:.Session in a manner associated with how the actual request object is accessed. As the request ends, the :class:.Session is torn down as well, usually through the usage of event hooks provided by the web framework. The transaction used by the :class:.Session may also be committed at this point, or alternatively the application may opt for an explicit commit pattern, only committing for those requests where one is warranted, but still always tearing down the :class:.Session unconditionally at the end.

Most web frameworks include infrastructure to establish a single :class:.Session, associated with the request, which is correctly constructed and torn down at the end of a request. Such infrastructure pieces include products such as Flask-SQLAlchemy, for usage in conjunction with the Flask web framework, and Zope-SQLAlchemy, for usage in conjunction with the Pyramid and Zope frameworks. SQLAlchemy strongly recommends that these products be used as available.

In those situations where integration libraries are not available, SQLAlchemy includes its own "helper" class known as :class:.scoped_session. A tutorial on the usage of this object is at :ref:unitofwork_contextual. It provides both a quick way to associate a :class:.Session with the current thread, as well as patterns to associate :class:.Session objects with other kinds of scopes.

As mentioned before, for non-web applications there is no one clear pattern, as applications themselves don't have just one pattern of architecture. The best strategy is to attempt to demarcate "operations", points at which a particular thread begins to perform a series of operations for some period of time, which can be committed at the end. Some examples:

• A background daemon which spawns off child forks would want to create a :class:.Session local to each child process, work with that :class:.Session through the life of the "job" that the fork is handling, then tear it down when the job is completed.
• For a command-line script, the application would create a single, global :class:.Session that is established when the program begins to do its work, and commits it right as the program is completing its task.

• For a GUI interface-driven application, the scope of the :class:.Session may best be within the scope of a user-generated event, such as a button push. Or, the scope may correspond to explicit user interaction, such as the user "opening" a series of records, then "saving" them.

• Is the Session a cache ?

Yeee...no. It's somewhat used as a cache, in that it implements the identity map pattern, and stores objects keyed to their primary key. However, it doesn't do any kind of query caching. This means, if you say session.query(Foo).filter_by(name='bar'), even if Foo(name='bar') is right there, in the identity map, the session has no idea about that. It has to issue SQL to the database, get the rows back, and then when it sees the primary key in the row, then it can look in the local identity map and see that the object is already there. It's only when you say query.get({some primary key}) that the :class:~sqlalchemy.orm.session.Session doesn't have to issue a query.

Additionally, the Session stores object instances using a weak reference by default. This also defeats the purpose of using the Session as a cache.

The :class:.Session is not designed to be a global object from which everyone consults as a "registry" of objects. That's more the job of a second level cache. SQLAlchemy provides a pattern for implementing second level caching using Beaker, via the :ref:examples_caching example.

• How can I get the :class:~sqlalchemy.orm.session.Session for a certain object ?

Use the :func:~sqlalchemy.orm.session.Session.object_session classmethod available on :class:~sqlalchemy.orm.session.Session:

    session = Session.object_session(someobject)

• Is the session thread-safe ?

The :class:.Session is very much intended to be used in a non-concurrent fashion, which usually means in only one thread at a time. The :class:.Session should be used in such a way that one instance exists for a single series of operations within a single transaction. One expedient way to get this effect is by associating a :class:.Session with the current thread (see :ref:unitofwork_contextual for background). Another is to use a pattern where the :class:.Session is passed between functions and is otherwise not shared with other threads.

The bigger point is that you should not want to use the session with multiple concurrent threads. That would be like having everyone at a restaurant all eat from the same plate. The session is a local "workspace" that you use for a specific set of tasks; you don't want to, or need to, share that session with other threads who are doing some other task.

If there are in fact multiple threads participating in the same task, then you may consider sharing the session between those threads, though this would be an extremely unusual scenario. In this case it would be necessary to implement a proper locking scheme so that the :class:.Session is still not exposed to concurrent access.

Querying

The :func:~sqlalchemy.orm.session.Session.query function takes one or more entities and returns a new :class:~sqlalchemy.orm.query.Query object which will issue mapper queries within the context of this Session.
An entity is defined as a mapped class, a :class:~sqlalchemy.orm.mapper.Mapper object, an orm-enabled descriptor, or an AliasedClass object:

    # query from a class
    session.query(User).filter_by(name='ed').all()

    # query with multiple classes, returns tuples
    session.query(User, Address).join('addresses').filter_by(name='ed').all()

    # query using orm-enabled descriptors
    session.query(User.name, User.fullname).all()

    # query from a mapper
    user_mapper = class_mapper(User)
    session.query(user_mapper)

When :class:~sqlalchemy.orm.query.Query returns results, each object instantiated is stored within the identity map. When a row matches an object which is already present, the same object is returned. In the latter case, whether or not the row is populated onto an existing object depends upon whether the attributes of the instance have been expired or not. A default-configured :class:~sqlalchemy.orm.session.Session automatically expires all instances along transaction boundaries, so that with a normally isolated transaction, there shouldn't be any issue of instances representing data which is stale with regards to the current transaction.

The :class:.Query object is introduced in great detail in :ref:ormtutorial_toplevel, and further documented in :ref:query_api_toplevel.

Adding New or Existing Items

:func:~sqlalchemy.orm.session.Session.add is used to place instances in the session. For transient (i.e. brand new) instances, this will have the effect of an INSERT taking place for those instances upon the next flush. For instances which are persistent (i.e. were loaded by this session), they are already present and do not need to be added. Instances which are detached (i.e. have been removed from a session) may be re-associated with a session using this method:

    user1 = User(name='user1')
    user2 = User(name='user2')

    session.add(user1)
    session.add(user2)

    session.commit()     # write changes to the database

To add a list of items to the session at once, use :func:~sqlalchemy.orm.session.Session.add_all:

    session.add_all([item1, item2, item3])

The :func:~sqlalchemy.orm.session.Session.add operation cascades along the save-update cascade. For more details see the section :ref:unitofwork_cascades.

Merging

:func:~sqlalchemy.orm.session.Session.merge transfers state from an outside object into a new or already existing instance within a session. It also reconciles the incoming data against the state of the database, producing a history stream which will be applied towards the next flush, or alternatively can be made to produce a simple "transfer" of state without producing change history or accessing the database. Usage is as follows:

    merged_object = session.merge(existing_object)

When given an instance, it follows these steps:

• It examines the primary key of the instance. If it's present, it attempts to locate that instance in the local identity map. If the load=True flag is left at its default, it also checks the database for this primary key if not located locally.

• If the given instance has no primary key, or if no instance can be found with the primary key given, a new instance is created.

• The state of the given instance is then copied onto the located/newly created instance. For attributes which are present on the source instance, the value is transferred to the target instance. For mapped attributes which aren't present on the source, the attribute is expired on the target instance, discarding its existing value.
If the load=True flag is left at its default, this copy process emits events and will load the target object's unloaded collections for each attribute present on the source object, so that the incoming state can be reconciled against what's present in the database. If load is passed as False, the incoming data is "stamped" directly without producing any history.

• The operation is cascaded to related objects and collections, as indicated by the merge cascade (see :ref:unitofwork_cascades).

• The new instance is returned.

With :meth:~.Session.merge, the given "source" instance is not modified nor is it associated with the target :class:.Session, and remains available to be merged with any number of other :class:.Session objects. :meth:~.Session.merge is useful for taking the state of any kind of object structure without regard for its origins or current session associations and copying its state into a new session. Here are some examples:

• An application which reads an object structure from a file and wishes to save it to the database might parse the file, build up the structure, and then use :meth:~.Session.merge to save it to the database, ensuring that the data within the file is used to formulate the primary key of each element of the structure. Later, when the file has changed, the same process can be re-run, producing a slightly different object structure, which can then be merged in again, and the :class:~sqlalchemy.orm.session.Session will automatically update the database to reflect those changes, loading each object from the database by primary key and then updating its state with the new state given.

• An application is storing objects in an in-memory cache, shared by many :class:.Session objects simultaneously. :meth:~.Session.merge is used each time an object is retrieved from the cache to create a local copy of it in each :class:.Session which requests it. The cached object remains detached; only its state is moved into copies of itself that are local to individual :class:~.Session objects. In the caching use case, it's common that the load=False flag is used to remove the overhead of reconciling the object's state with the database. There's also a "bulk" version of :meth:~.Session.merge called :meth:~.Query.merge_result that was designed to work with cache-extended :class:.Query objects - see the section :ref:examples_caching.

• An application wants to transfer the state of a series of objects into a :class:.Session maintained by a worker thread or other concurrent system. :meth:~.Session.merge makes a copy of each object to be placed into this new :class:.Session. At the end of the operation, the parent thread/process maintains the objects it started with, and the thread/worker can proceed with local copies of those objects. In the "transfer between threads/processes" use case, the application may want to use the load=False flag as well to avoid overhead and redundant SQL queries as the data is transferred.

Merge Tips

:meth:~.Session.merge is an extremely useful method for many purposes. However, it deals with the intricate border between objects that are transient/detached and those that are persistent, as well as the automated transference of state. The wide variety of scenarios that can present themselves here often require a more careful approach to the state of objects. Common problems with merge usually involve some unexpected state regarding the object being passed to :meth:~.Session.merge.
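Before looking at those problems, here is a rough sketch of the caching use case described above; the cache dict and get_user helper are illustrative names of our own, not a SQLAlchemy API:

    # a simple in-memory cache of detached objects, keyed by primary key;
    # illustrative only - a real cache would need eviction and locking
    cache = {}

    def get_user(session, user_id):
        if user_id in cache:
            # copy the detached, cached object's state into this Session;
            # load=False skips reconciling against the database
            return session.merge(cache[user_id], load=False)
        user = session.query(User).get(user_id)
        if user is not None:
            cache[user_id] = user
        return user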
Let's use the canonical example of the User and Address objects:

    class User(Base):
        __tablename__ = 'user'

        id = Column(Integer, primary_key=True)
        name = Column(String(50), nullable=False)
        addresses = relationship("Address", backref="user")

    class Address(Base):
        __tablename__ = 'address'

        id = Column(Integer, primary_key=True)
        email_address = Column(String(50), nullable=False)
        user_id = Column(Integer, ForeignKey('user.id'), nullable=False)

Assume a User object with one Address, already persistent:

    >>> u1 = User(name='ed', addresses=[Address(email_address='ed@ed.com')])
    >>> session.add(u1)
    >>> session.commit()

We now create a1, an object outside the session, which we'd like to merge on top of the existing Address:

    >>> existing_a1 = u1.addresses[0]
    >>> a1 = Address(id=existing_a1.id)

A surprise would occur if we said this:

    >>> a1.user = u1
    >>> a1 = session.merge(a1)
    >>> session.commit()
    sqlalchemy.orm.exc.FlushError: New instance <Address at 0x1298f50>
    with identity key (<class '__main__.Address'>, (1,)) conflicts with
    persistent instance <Address at 0x12a25d0>

Why is that ? We weren't careful with our cascades. The assignment of a1.user to a persistent object cascaded to the backref of User.addresses and made our a1 object pending, as though we had added it. Now we have two Address objects in the session:

    >>> a1 = Address()
    >>> a1.user = u1
    >>> a1 in session
    True
    >>> existing_a1 in session
    True
    >>> a1 is existing_a1
    False

Above, our a1 is already pending in the session. The subsequent :meth:~.Session.merge operation essentially does nothing. Cascade can be configured via the cascade option on :func:.relationship, although in this case it would mean removing the save-update cascade from the User.addresses relationship - and usually, that behavior is extremely convenient. The solution here would usually be to not assign a1.user to an object already persistent in the target session. The cascade_backrefs=False option of :func:.relationship will also prevent the Address from being added to the session via the a1.user = u1 assignment.

Further detail on cascade operation is at :ref:unitofwork_cascades.

Another example of unexpected state:

    >>> a1 = Address(id=existing_a1.id, user_id=u1.id)
    >>> assert a1.user is None
    >>> a1 = session.merge(a1)
    >>> session.commit()
    sqlalchemy.exc.IntegrityError: (IntegrityError) address.user_id
    may not be NULL

Here, we accessed a1.user, which returned its default value of None, which as a result of this access, has been placed in the __dict__ of our object a1. Normally, this operation creates no change event, so the user_id attribute takes precedence during a flush. But when we merge the Address object into the session, the operation is equivalent to:

    >>> existing_a1.id = existing_a1.id
    >>> existing_a1.user_id = u1.id
    >>> existing_a1.user = None

Where above, both user_id and user are assigned to, and change events are emitted for both. The user association takes precedence, and None is applied to user_id, causing a failure.

Most :meth:~.Session.merge issues can be examined by first checking - is the object prematurely in the session ?

    >>> a1 = Address(id=existing_a1.id, user_id=u1.id)
    >>> assert a1 not in session
    >>> a1 = session.merge(a1)

Or is there state on the object that we don't want ?
Examining __dict__ is a quick way to check:

    >>> a1 = Address(id=existing_a1.id, user_id=u1.id)
    >>> a1.user
    >>> a1.__dict__
    {'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x1298d10>,
     'user_id': 1,
     'id': 1,
     'user': None}
    >>> # we don't want user=None merged, remove it
    >>> del a1.user
    >>> a1 = session.merge(a1)
    >>> # success
    >>> session.commit()

Deleting

The :meth:~.Session.delete method places an instance into the Session's list of objects to be marked as deleted:

    # mark two objects to be deleted
    session.delete(obj1)
    session.delete(obj2)

    # commit (or flush)
    session.commit()

Deleting from Collections

A common confusion that arises regarding :meth:~.Session.delete is when objects which are members of a collection are being deleted. While the collection member is marked for deletion from the database, this does not impact the collection itself in memory until the collection is expired. Below, we illustrate that even after an Address object is marked for deletion, it's still present in the collection associated with the parent User, even after a flush:

    >>> address = user.addresses[1]
    >>> session.delete(address)
    >>> session.flush()
    >>> address in user.addresses
    True

When the above session is committed, all attributes are expired. The next access of user.addresses will re-load the collection, revealing the desired state:

    >>> session.commit()
    >>> address in user.addresses
    False

The usual practice of deleting items within collections is to forego the usage of :meth:~.Session.delete directly, and instead use cascade behavior to automatically invoke the deletion as a result of removing the object from the parent collection. The delete-orphan cascade accomplishes this, as illustrated in the example below:

    mapper(User, users_table, properties={
        'addresses': relationship(Address, cascade="all, delete-orphan")
    })

    del user.addresses[1]
    session.flush()

Where above, upon removing the Address object from the User.addresses collection, the delete-orphan cascade has the effect of marking the Address object for deletion in the same way as passing it to :meth:~.Session.delete.

See also :ref:unitofwork_cascades for detail on cascades.

Deleting based on Filter Criterion

The caveat with Session.delete() is that you need to have an object handy already in order to delete. The Query includes a :func:~sqlalchemy.orm.query.Query.delete method which deletes based on filtering criteria:

    session.query(User).filter(User.id==7).delete()

The Query.delete() method includes functionality to "expire" objects already in the session which match the criteria. However it does have some caveats, including that "delete" and "delete-orphan" cascades won't be fully expressed for collections which are already loaded. See the API docs for :meth:~sqlalchemy.orm.query.Query.delete for more details.

Flushing

When the :class:~sqlalchemy.orm.session.Session is used with its default configuration, the flush step is nearly always done transparently. Specifically, the flush occurs before any individual :class:~sqlalchemy.orm.query.Query is issued, as well as within the :func:~sqlalchemy.orm.session.Session.commit call before the transaction is committed. It also occurs before a SAVEPOINT is issued when :func:~sqlalchemy.orm.session.Session.begin_nested is used.
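To see the flush-before-query behavior concretely, here is a minimal self-contained sketch; the in-memory SQLite engine and the simple User mapping are illustrative assumptions, not the mappings used elsewhere in this document:

    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class User(Base):
        __tablename__ = 'user'
        id = Column(Integer, primary_key=True)
        name = Column(String(50))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    u = User(name='ed')
    session.add(u)          # u is pending; no INSERT has been emitted yet

    # issuing a Query autoflushes first: the pending INSERT is sent
    # to the database before the SELECT, so the new row is visible
    assert session.query(User).filter_by(name='ed').count() == 1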
Regardless of the autoflush setting, a flush can always be forced by issuing :func:~sqlalchemy.orm.session.Session.flush:

    session.flush()

The "flush-on-Query" aspect of the behavior can be disabled by constructing :class:.sessionmaker with the flag autoflush=False:

    Session = sessionmaker(autoflush=False)

Additionally, autoflush can be temporarily disabled by setting the autoflush flag at any time:

    mysession = Session()
    mysession.autoflush = False

Some autoflush-disable recipes are available at DisableAutoFlush.

The flush process always occurs within a transaction, even if the :class:~sqlalchemy.orm.session.Session has been configured with autocommit=True, a setting that disables the session's persistent transactional state. If no transaction is present, :func:~sqlalchemy.orm.session.Session.flush creates its own transaction and commits it. Any failures during flush will always result in a rollback of whatever transaction is present. If the Session is not in autocommit=True mode, an explicit call to :func:~sqlalchemy.orm.session.Session.rollback is required after a flush fails, even though the underlying transaction will have been rolled back already - this is so that the overall nesting pattern of so-called "subtransactions" is consistently maintained.

Committing

:func:~sqlalchemy.orm.session.Session.commit is used to commit the current transaction. It always issues :func:~sqlalchemy.orm.session.Session.flush beforehand to flush any remaining state to the database; this is independent of the "autoflush" setting. If no transaction is present, it raises an error. Note that the default behavior of the :class:~sqlalchemy.orm.session.Session is that a "transaction" is always present; this behavior can be disabled by setting autocommit=True. In autocommit mode, a transaction can be initiated by calling the :func:~sqlalchemy.orm.session.Session.begin method.

Note

The term "transaction" here refers to a transactional construct within the :class:.Session itself which may be maintaining zero or more actual database (DBAPI) transactions. An individual DBAPI connection begins participation in the "transaction" as it is first used to execute a SQL statement, then remains present until the session-level "transaction" is completed. See :ref:unitofwork_transaction for further detail.

Another behavior of :func:~sqlalchemy.orm.session.Session.commit is that by default it expires the state of all instances present after the commit is complete. This is so that when the instances are next accessed, either through attribute access or by them being present in a :class:~sqlalchemy.orm.query.Query result set, they receive the most recent state. To disable this behavior, configure :class:.sessionmaker with expire_on_commit=False.

Normally, instances loaded into the :class:~sqlalchemy.orm.session.Session are never changed by subsequent queries; the assumption is that the current transaction is isolated so the state most recently loaded is correct as long as the transaction continues. Setting autocommit=True works against this model to some degree since the :class:~sqlalchemy.orm.session.Session behaves in exactly the same way with regard to attribute state, except no transaction is present.

Rolling Back

:func:~sqlalchemy.orm.session.Session.rollback rolls back the current transaction.
With a default configured session, the post-rollback state of the session is as follows:

• All transactions are rolled back and all connections returned to the connection pool, unless the Session was bound directly to a Connection, in which case the connection is still maintained (but still rolled back).

• Objects which were initially in the pending state when they were added to the :class:~sqlalchemy.orm.session.Session within the lifespan of the transaction are expunged, corresponding to their INSERT statement being rolled back. The state of their attributes remains unchanged.

• Objects which were marked as deleted within the lifespan of the transaction are promoted back to the persistent state, corresponding to their DELETE statement being rolled back. Note that if those objects were first pending within the transaction, that operation takes precedence instead.

• All objects not expunged are fully expired.

With that state understood, the :class:~sqlalchemy.orm.session.Session may safely continue usage after a rollback occurs.

When a :func:~sqlalchemy.orm.session.Session.flush fails, typically for reasons like primary key, foreign key, or "not nullable" constraint violations, a :func:~sqlalchemy.orm.session.Session.rollback is issued automatically (it's currently not possible for a flush to continue after a partial failure). However, the flush process always uses its own transactional demarcator called a subtransaction, which is described more fully in the docstrings for :class:~sqlalchemy.orm.session.Session. What it means here is that even though the database transaction has been rolled back, the end user must still issue :func:~sqlalchemy.orm.session.Session.rollback to fully reset the state of the :class:~sqlalchemy.orm.session.Session.

Expunging

Expunge removes an object from the Session, sending persistent instances to the detached state, and pending instances to the transient state:

    session.expunge(obj1)

To remove all items, call :func:~sqlalchemy.orm.session.Session.expunge_all (this method was formerly known as clear()).

Closing

The :func:~sqlalchemy.orm.session.Session.close method issues a :func:~sqlalchemy.orm.session.Session.expunge_all, and releases any transactional/connection resources. When connections are returned to the connection pool, transactional state is rolled back as well.

Refreshing / Expiring

The Session normally works in the context of an ongoing transaction (with the default setting of autocommit=False). Most databases offer "isolated" transactions - this refers to a series of behaviors that allow the work within a transaction to remain consistent as time passes, regardless of the activities outside of that transaction. A key feature of a high degree of transaction isolation is that emitting the same SELECT statement twice will return the same results as when it was called the first time, even if the data has been modified in another transaction. For this reason, the :class:.Session gains very efficient behavior by loading the attributes of each instance only once. Subsequent reads of the same row in the same transaction are assumed to have the same value. The user application also gains directly from this assumption, that the transaction is regarded as a temporary shield against concurrent changes - a good application will ensure that isolation levels are set appropriately such that this assumption can be made, given the kind of data being worked with.
To clear out the currently loaded state on an instance, the instance or its individual attributes can be marked as "expired", which causes a reload to occur upon next access of any of the instance's attributes. The instance can also be immediately reloaded from the database. The :meth:~.Session.expire and :meth:~.Session.refresh methods achieve this:

    # immediately re-load attributes on obj1, obj2
    session.refresh(obj1)
    session.refresh(obj2)

    # expire objects obj1, obj2, attributes will be reloaded
    # on the next access:
    session.expire(obj1)
    session.expire(obj2)

When an expired object reloads, all non-deferred column-based attributes are loaded in one query. Current behavior for expired relationship-based attributes is that they load individually upon access - this behavior may be enhanced in a future release. When a refresh is invoked on an object, the ultimate operation is equivalent to a :meth:.Query.get, so any relationships configured with eager loading should also load within the scope of the refresh operation.

:meth:~.Session.refresh and :meth:~.Session.expire also support being passed a list of individual attribute names in which to be refreshed. These names can refer to any attribute, column-based or relationship based:

    # immediately re-load the attributes 'hello', 'world' on obj1, obj2
    session.refresh(obj1, ['hello', 'world'])
    session.refresh(obj2, ['hello', 'world'])

    # expire the attributes 'hello', 'world' on obj1, obj2; the attributes
    # will be reloaded on the next access:
    session.expire(obj1, ['hello', 'world'])
    session.expire(obj2, ['hello', 'world'])

The full contents of the session may be expired at once using :meth:~.Session.expire_all:

    session.expire_all()

Note that :meth:~.Session.expire_all is called automatically whenever :meth:~.Session.commit or :meth:~.Session.rollback are called. If using the session in its default mode of autocommit=False and with a well-isolated transactional environment (which is provided by most backends with the notable exception of MySQL MyISAM), there is virtually no reason to ever call :meth:~.Session.expire_all directly - plenty of state will remain on the current transaction until it is rolled back or committed or otherwise removed. :meth:~.Session.refresh and :meth:~.Session.expire similarly are usually only necessary when an UPDATE or DELETE has been issued manually within the transaction using :meth:.Session.execute().

Session Attributes

The :class:~sqlalchemy.orm.session.Session itself acts somewhat like a set-like collection. All items present may be accessed using the iterator interface:

    for obj in session:
        print obj

And presence may be tested for using regular "contains" semantics:

    if obj in session:
        print "Object is present"

The session is also keeping track of all newly created (i.e. pending) objects, all objects which have had changes since they were last loaded or saved (i.e. "dirty"), and everything that's been marked as deleted:

    # pending objects recently added to the Session
    session.new

    # persistent objects which currently have changes detected
    # (this collection is now created on the fly each time the property is called)
    session.dirty

    # persistent objects that have been marked as deleted via session.delete(obj)
    session.deleted

    # dictionary of all persistent objects, keyed on their
    # identity key
    session.identity_map

Note that objects within the session are by default weakly referenced.
This means that when they are dereferenced in the outside application, they fall out of scope from within the :class:~sqlalchemy.orm.session.Session as well and are subject to garbage collection by the Python interpreter. The exceptions to this include objects which are pending, objects which are marked as deleted, or persistent objects which have pending changes on them. After a full flush, these collections are all empty, and all objects are again weakly referenced. To disable the weak referencing behavior and force all objects within the session to remain until explicitly expunged, configure :class:.sessionmaker with the weak_identity_map=False setting.

Cascades

Mappers support the concept of configurable cascade behavior on :func:~sqlalchemy.orm.relationship constructs. This refers to how operations performed on a parent object relative to a particular :class:.Session should be propagated to items referred to by that relationship. The default cascade behavior is usually suitable for most situations, and the option is normally invoked explicitly in order to enable delete and delete-orphan cascades, which refer to how the relationship should be treated when the parent is marked for deletion as well as when a child is de-associated from its parent.

Cascade behavior is configured by setting the cascade keyword argument on :func:~sqlalchemy.orm.relationship:

    class Order(Base):
        __tablename__ = 'order'

        items = relationship("Item", cascade="all, delete-orphan")
        customer = relationship("User", secondary=user_orders_table,
                                cascade="save-update")

To set cascades on a backref, the same flag can be used with the :func:~sqlalchemy.orm.backref function, which ultimately feeds its arguments back into :func:~sqlalchemy.orm.relationship:

    class Item(Base):
        __tablename__ = 'item'

        order = relationship("Order",
                             backref=backref("items", cascade="all, delete-orphan")
                             )

The default value of cascade is save-update, merge. The all symbol in the cascade options indicates that all cascade flags should be enabled, with the exception of delete-orphan. Typically, cascade is usually left at its default, or configured as all, delete-orphan, indicating the child objects should be treated as "owned" by the parent. The list of available values which can be specified in cascade are as follows:

• save-update - Indicates that when an object is placed into a :class:.Session via :meth:.Session.add, all the objects associated with it via this :func:~sqlalchemy.orm.relationship should also be added to that same :class:.Session. Additionally, if this object is already present in a :class:.Session, child objects will be added to that session as they are associated with this parent, i.e. as they are appended to lists, added to sets, or otherwise associated with the parent.

save-update cascade also cascades the pending history of the target attribute, meaning that objects which were removed from a scalar or collection attribute whose changes have not yet been flushed are also placed into the target session. This is because they may have foreign key attributes present which will need to be updated to no longer refer to the parent.

The save-update cascade is on by default, and it's common to not even be aware of it. It's customary that only a single call to :meth:.Session.add against the lead object of a structure has the effect of placing the full structure of objects into the :class:.Session at once. However, it can be turned off, which would imply that objects associated with a parent would need to be placed individually using :meth:.Session.add calls for each one.
Another default behavior of save-update cascade is that it will take effect in the reverse direction, that is, associating a child with a parent when a backref is present means both relationships are affected; the parent will be added to the child's session. To disable this somewhat indirect session addition, use the cascade_backrefs=False option described below in :ref:backref_cascade.

• delete - This cascade indicates that when the parent object is marked for deletion, the related objects should also be marked for deletion. Without this cascade present, SQLAlchemy will set the foreign key on a one-to-many relationship to NULL when the parent object is deleted. When enabled, the row is instead deleted.

delete cascade is often used in conjunction with delete-orphan cascade, as is appropriate for an object whose foreign key is not intended to be nullable. On some backends, it's also a good idea to set ON DELETE on the foreign key itself; see the section :ref:passive_deletes for more details.

Note that for many-to-many relationships which make usage of the secondary argument to :func:~sqlalchemy.orm.relationship, SQLAlchemy always emits a DELETE for the association row in between "parent" and "child", when the parent is deleted or whenever the linkage between a particular parent and child is broken.

• delete-orphan - This cascade adds behavior to the delete cascade, such that a child object will be marked for deletion when it is de-associated from the parent, not just when the parent is marked for deletion. This is a common feature when dealing with a related object that is "owned" by its parent, with a NOT NULL foreign key, so that removal of the item from the parent collection results in its deletion.

delete-orphan cascade implies that each child object can only have one parent at a time, so it is configured in the vast majority of cases on a one-to-many relationship. Setting it on a many-to-one or many-to-many relationship is more awkward; for this use case, SQLAlchemy requires that the :func:~sqlalchemy.orm.relationship be configured with the single_parent=True flag, which establishes Python-side validation that ensures the object is associated with only one parent at a time.

• merge - This cascade indicates that the :meth:.Session.merge operation should be propagated from a parent that's the subject of the :meth:.Session.merge call down to referred objects. This cascade is also on by default.

• refresh-expire - A less common option, indicates that the :meth:.Session.expire operation should be propagated from a parent down to referred objects. When using :meth:.Session.refresh, the referred objects are expired only, but not actually refreshed.

• expunge - Indicates that when the parent object is removed from the :class:.Session using :meth:.Session.expunge, the operation should be propagated down to referred objects.

Backref Cascade

The save-update cascade takes place on backrefs by default.
This means that, given a mapping such as this:

    mapper(Order, order_table, properties={
        'items' : relationship(Item, backref='order')
    })

If an Order is already in the session, and is assigned to the order attribute of an Item, the backref appends the Item to the items collection of that Order, resulting in the save-update cascade taking place:

    >>> o1 = Order()
    >>> session.add(o1)
    >>> o1 in session
    True

    >>> i1 = Item()
    >>> i1.order = o1
    >>> i1 in o1.items
    True
    >>> i1 in session
    True

This behavior can be disabled using the cascade_backrefs flag:

    mapper(Order, order_table, properties={
        'items' : relationship(Item, backref='order',
                               cascade_backrefs=False)
    })

So above, the assignment of i1.order = o1 will append i1 to the items collection of o1, but will not add i1 to the session. You can, of course, :meth:~.Session.add i1 to the session at a later point. This option may be helpful for situations where an object needs to be kept out of a session until its construction is completed, but still needs to be given associations to objects which are already persistent in the target session.

Managing Transactions

A newly constructed :class:.Session may be said to be in the "begin" state. In this state, the :class:.Session has not established any connection or transactional state with any of the :class:.Engine objects that may be associated with it.

The :class:.Session then receives requests to operate upon a database connection. Typically, this means it is called upon to execute SQL statements using a particular :class:.Engine, which may be via :meth:.Session.query, :meth:.Session.execute, or within a flush operation of pending data, which occurs when such state exists and :meth:.Session.commit or :meth:.Session.flush is called.

As these requests are received, each new :class:.Engine encountered is associated with an ongoing transactional state maintained by the :class:.Session. When the first :class:.Engine is operated upon, the :class:.Session can be said to have left the "begin" state and entered "transactional" state. For each :class:.Engine encountered, a :class:.Connection is associated with it, which is acquired via the :meth:.Engine.contextual_connect method. If a :class:.Connection was directly associated with the :class:.Session (see :ref:session_external_transaction for an example of this), it is added to the transactional state directly.

For each :class:.Connection, the :class:.Session also maintains a :class:.Transaction object, which is acquired by calling :meth:.Connection.begin on each :class:.Connection, or if the :class:.Session object has been established using the flag twophase=True, a :class:.TwoPhaseTransaction object acquired via :meth:.Connection.begin_twophase. These transactions are all committed or rolled back corresponding to the invocation of the :meth:.Session.commit and :meth:.Session.rollback methods. A commit operation will also call the :meth:.TwoPhaseTransaction.prepare method on all transactions if applicable.

When the transactional state is completed after a rollback or commit, the :class:.Session releases all :class:.Transaction and :class:.Connection resources (which has the effect of returning DBAPI connections to the connection pool of each :class:.Engine), and goes back to the "begin" state, which will again invoke new :class:.Connection and :class:.Transaction objects as new requests to emit SQL statements are received.

The example below illustrates this lifecycle:

    engine = create_engine("...")
    Session = sessionmaker(bind=engine)

    # new session. no connections are in use.
    session = Session()
    try:
        # first query. a Connection is acquired
        # from the Engine, and a Transaction
        # started.
        item1 = session.query(Item).get(1)

        # second query. the same Connection/Transaction
        # are used.
        item2 = session.query(Item).get(2)

        # pending changes are created.
        item1.foo = 'bar'
        item2.bar = 'foo'

        # commit. The pending changes above
        # are flushed via flush(), the Transaction
        # is committed, the Connection object closed
        # and discarded, the underlying DBAPI connection
        # returned to the connection pool.
        session.commit()
    except:
        # on rollback, the same closure of state
        # as that of commit proceeds.
        session.rollback()
        raise

Using SAVEPOINT

SAVEPOINT transactions, if supported by the underlying engine, may be delineated using the :func:~sqlalchemy.orm.session.Session.begin_nested method:

    Session = sessionmaker()
    session = Session()
    session.add(u1)
    session.add(u2)

    session.begin_nested()  # establish a savepoint
    session.add(u3)
    session.rollback()      # rolls back u3, keeps u1 and u2

    session.commit()        # commits u1 and u2

:func:~sqlalchemy.orm.session.Session.begin_nested may be called any number of times, which will issue a new SAVEPOINT with a unique identifier for each call. For each :func:~sqlalchemy.orm.session.Session.begin_nested call, a corresponding :func:~sqlalchemy.orm.session.Session.rollback or :func:~sqlalchemy.orm.session.Session.commit must be issued.

When :func:~sqlalchemy.orm.session.Session.begin_nested is called, a :func:~sqlalchemy.orm.session.Session.flush is unconditionally issued (regardless of the autoflush setting). This is so that when a :func:~sqlalchemy.orm.session.Session.rollback occurs, the full state of the session is expired, thus causing all subsequent attribute/instance access to reference the full state of the :class:~sqlalchemy.orm.session.Session right before :func:~sqlalchemy.orm.session.Session.begin_nested was called.

:meth:~.Session.begin_nested, in the same manner as the less often used :meth:~.Session.begin method, returns a transactional object which also works as a context manager. It can be succinctly used around individual record inserts in order to catch things like unique constraint exceptions:

    for record in records:
        try:
            with session.begin_nested():
                session.merge(record)
        except:
            print "Skipped record %s" % record
    session.commit()

Autocommit Mode

The example of :class:.Session transaction lifecycle illustrated at the start of :ref:unitofwork_transaction applies to a :class:.Session configured in the default mode of autocommit=False. Constructing a :class:.Session with autocommit=True produces a :class:.Session placed into "autocommit" mode, where each SQL statement invoked by a :meth:.Session.query or :meth:.Session.execute occurs using a new connection from the connection pool, discarding it after results have been iterated. The :meth:.Session.flush operation still occurs within the scope of a single transaction, though this transaction is closed out after the :meth:.Session.flush operation completes.

"autocommit" mode should not be considered for general use. While very old versions of SQLAlchemy standardized on this mode, the modern :class:.Session benefits highly from being given a clear point of transaction demarcation via :meth:.Session.rollback and :meth:.Session.commit. The autoflush action can safely emit SQL to the database as needed without implicitly producing permanent effects; the contents of attributes are expired only when a logical series of steps has completed.
If the :class:.Session were to be used in pure "autocommit" mode without an ongoing transaction, these features should be disabled, that is, autoflush=False, expire_on_commit=False.

Modern usage of "autocommit" is for framework integrations that need to control specifically when the "begin" state occurs. A session which is configured with autocommit=True may be placed into the "begin" state using the :meth:.Session.begin method. After the cycle completes upon :meth:.Session.commit or :meth:.Session.rollback, connection and transaction resources are released and the :class:.Session goes back into "autocommit" mode, until :meth:.Session.begin is called again:

    Session = sessionmaker(bind=engine, autocommit=True)
    session = Session()
    session.begin()
    try:
        item1 = session.query(Item).get(1)
        item2 = session.query(Item).get(2)
        item1.foo = 'bar'
        item2.bar = 'foo'
        session.commit()
    except:
        session.rollback()
        raise

The :func:.Session.begin method also returns a transactional token which is compatible with the Python 2.6 with statement:

    Session = sessionmaker(bind=engine, autocommit=True)
    session = Session()
    with session.begin():
        item1 = session.query(Item).get(1)
        item2 = session.query(Item).get(2)
        item1.foo = 'bar'
        item2.bar = 'foo'

Using Subtransactions with Autocommit

A subtransaction indicates usage of the :meth:.Session.begin method in conjunction with the subtransactions=True flag. This produces a non-transactional, delimiting construct that allows nesting of calls to :meth:~.Session.begin and :meth:~.Session.commit. Its purpose is to allow the construction of code that can function within a transaction both independently of any external code that starts a transaction, as well as within a block that has already demarcated a transaction.

subtransactions=True is generally only useful in conjunction with autocommit, and is equivalent to the pattern described at :ref:connections_nested_transactions, where any number of functions can call :meth:.Connection.begin and :meth:.Transaction.commit as though they are the initiator of the transaction, but in fact may be participating in an already ongoing transaction:

    # method_a starts a transaction and calls method_b
    def method_a(session):
        session.begin(subtransactions=True)
        try:
            method_b(session)
            session.commit()   # transaction is committed here
        except:
            session.rollback() # rolls back the transaction
            raise

    # method_b also starts a transaction, but when
    # called from method_a participates in the ongoing
    # transaction.
    def method_b(session):
        session.begin(subtransactions=True)
        try:
            session.commit()   # transaction is not committed yet
        except:
            session.rollback() # rolls back the transaction, in this case
                               # the one that was initiated in method_a().
            raise

    # create a Session and call method_a
    session = Session(autocommit=True)
    method_a(session)
    session.close()

Subtransactions are used by the :meth:.Session.flush process to ensure that the flush operation takes place within a transaction, regardless of autocommit. When autocommit is disabled, it is still useful in that it forces the :class:.Session into a "pending rollback" state, as a failed flush cannot be resumed in mid-operation, where the end user still maintains the "scope" of the transaction overall.

Enabling Two-Phase Commit

For backends which support two-phase operation (currently MySQL and PostgreSQL), the session can be instructed to use two-phase commit semantics.
This will coordinate the committing of transactions across databases so that the transaction is either committed or rolled back in all databases. You can also :func:~sqlalchemy.orm.session.Session.prepare the session for interacting with transactions not managed by SQLAlchemy. To use two-phase transactions, set the flag twophase=True on the session:

    engine1 = create_engine('postgresql://db1')
    engine2 = create_engine('postgresql://db2')

    Session = sessionmaker(twophase=True)

    # bind User operations to engine 1, Account operations to engine 2
    Session.configure(binds={User:engine1, Account:engine2})

    session = Session()

    # .... work with accounts and users

    # commit. session will issue a flush to all DBs, and a prepare step to all DBs,
    # before committing both transactions
    session.commit()

Embedding SQL Insert/Update Expressions into a Flush

This feature allows the value of a database column to be set to a SQL expression instead of a literal value. It's especially useful for atomic updates, calling stored procedures, etc. All you do is assign an expression to an attribute:

    class SomeClass(object):
        pass
    mapper(SomeClass, some_table)

    someobject = session.query(SomeClass).get(5)

    # set 'value' attribute to a SQL expression adding one
    someobject.value = some_table.c.value + 1

    # issues "UPDATE some_table SET value=value+1"
    session.commit()

This technique works both for INSERT and UPDATE statements. After the flush/commit operation, the value attribute on someobject above is expired, so that when next accessed the newly generated value will be loaded from the database.

Using SQL Expressions with Sessions

SQL expressions and strings can be executed via the :class:~sqlalchemy.orm.session.Session within its transactional context. This is most easily accomplished using the :func:~sqlalchemy.orm.session.Session.execute method, which returns a :class:~sqlalchemy.engine.ResultProxy in the same manner as an :class:~sqlalchemy.engine.Engine or :class:~sqlalchemy.engine.Connection:

    Session = sessionmaker(bind=engine)
    session = Session()

    # execute a string statement
    result = session.execute("select * from table where id=:id", {'id':7})

    # execute a SQL expression construct
    result = session.execute(select([mytable]).where(mytable.c.id==7))

The current :class:~sqlalchemy.engine.Connection held by the :class:~sqlalchemy.orm.session.Session is accessible using the :func:~sqlalchemy.orm.session.Session.connection method:

    connection = session.connection()

The examples above deal with a :class:~sqlalchemy.orm.session.Session that's bound to a single :class:~sqlalchemy.engine.Engine or :class:~sqlalchemy.engine.Connection. To execute statements using a :class:~sqlalchemy.orm.session.Session which is bound either to multiple engines, or none at all (i.e. relies upon bound metadata), both :func:~sqlalchemy.orm.session.Session.execute and :func:~sqlalchemy.orm.session.Session.connection accept a mapper keyword argument, which is passed a mapped class or :class:~sqlalchemy.orm.mapper.Mapper instance, which is used to locate the proper context for the desired engine:

    Session = sessionmaker()
    session = Session()

    # need to specify mapper or class when executing
    result = session.execute("select * from table where id=:id", {'id':7}, mapper=MyMappedClass)

    result = session.execute(select([mytable], mytable.c.id==7), mapper=MyMappedClass)

    connection = session.connection(MyMappedClass)

Joining a Session into an External Transaction

If a :class:.Connection is being used which is already in a transactional state (i.e.
has a :class:.Transaction established), a :class:.Session can be made to participate within that transaction by just binding the :class:.Session to that :class:.Connection. The usual rationale for this is a test suite that allows ORM code to work freely with a :class:.Session, including the ability to call :meth:.Session.commit, where afterwards the entire database interaction is rolled back:

    from sqlalchemy.orm import sessionmaker
    from sqlalchemy import create_engine
    from unittest import TestCase

    # global application scope.  create Session class, engine
    Session = sessionmaker()
    engine = create_engine('postgresql://...')

    class SomeTest(TestCase):
        def setUp(self):
            # connect to the database
            self.connection = engine.connect()

            # begin a non-ORM transaction
            self.trans = self.connection.begin()

            # bind an individual Session to the connection
            self.session = Session(bind=self.connection)

        def test_something(self):
            # use the session in tests.
            self.session.commit()

        def tearDown(self):
            # rollback - everything that happened with the
            # Session above (including calls to commit())
            # is rolled back.
            self.trans.rollback()
            self.session.close()

            # return connection to the Engine
            self.connection.close()

Above, we issue :meth:.Session.commit as well as :meth:.Transaction.rollback. This is an example of where we take advantage of the :class:.Connection object's ability to maintain subtransactions, or nested begin/commit-or-rollback pairs, where only the outermost begin/commit pair actually commits the transaction, and if the outermost block rolls back, everything is rolled back.

Contextual/Thread-local Sessions

Recall from the section :ref:session_faq_whentocreate that the concept of "session scopes" was introduced, with an emphasis on web applications and the practice of linking the scope of a :class:.Session with that of a web request. Most modern web frameworks include integration tools so that the scope of the :class:.Session can be managed automatically, and these tools should be used as they are available.

SQLAlchemy includes its own helper object, which helps with the establishment of user-defined :class:.Session scopes. It is also used by third-party integration systems to help construct their integration schemes. The object is the :class:.scoped_session object, and it represents a registry of :class:.Session objects. If you're not familiar with the registry pattern, a good introduction can be found in Patterns of Enterprise Application Architecture.

Note: The :class:.scoped_session object is a very popular and useful object used by many SQLAlchemy applications. However, it is important to note that it presents only one approach to the issue of :class:.Session management. If you're new to SQLAlchemy, and especially if the term "thread-local variable" seems strange to you, we recommend that if possible you first familiarize yourself with an off-the-shelf integration system such as Flask-SQLAlchemy or zope.sqlalchemy.

A :class:.scoped_session is constructed by calling it, passing it a factory which can create new :class:.Session objects. A factory is just something that produces a new object when called, and in the case of :class:.Session, the most common factory is the :class:.sessionmaker, introduced earlier in this section.
Below we illustrate this usage:

    >>> from sqlalchemy.orm import scoped_session
    >>> from sqlalchemy.orm import sessionmaker

    >>> session_factory = sessionmaker(bind=some_engine)
    >>> Session = scoped_session(session_factory)

The :class:.scoped_session object we've created will now call upon the :class:.sessionmaker when we "call" the registry:

    >>> some_session = Session()

Above, some_session is an instance of :class:.Session, which we can now use to talk to the database. This same :class:.Session is also present within the :class:.scoped_session registry we've created. If we call upon the registry a second time, we get back the same :class:.Session:

    >>> some_other_session = Session()
    >>> some_session is some_other_session
    True

This pattern allows disparate sections of the application to call upon a global :class:.scoped_session, so that all those areas may share the same session without the need to pass it explicitly. The :class:.Session we've established in our registry will remain, until we explicitly tell our registry to dispose of it, by calling :meth:.scoped_session.remove:

    >>> Session.remove()

The :meth:.scoped_session.remove method first calls :meth:.Session.close on the current :class:.Session, which has the effect of releasing any connection/transactional resources owned by the :class:.Session first, then discarding the :class:.Session itself. "Releasing" here means that any pending transaction will be rolled back using connection.rollback(). At this point, the :class:.scoped_session object is "empty", and will create a new :class:.Session when called again. As illustrated below, this is not the same :class:.Session we had before:

    >>> new_session = Session()
    >>> new_session is some_session
    False

The above series of steps illustrates the idea of the "registry" pattern in a nutshell. With that basic idea in hand, we can discuss some of the details of how this pattern proceeds.

Implicit Method Access

The job of the :class:.scoped_session is simple: hold onto a :class:.Session for all who ask for it. As a means of producing more transparent access to this :class:.Session, the :class:.scoped_session also includes proxy behavior, meaning that the registry itself can be treated just like a :class:.Session directly; when methods are called on this object, they are proxied to the underlying :class:.Session being maintained by the registry:

    Session = scoped_session(some_factory)

    # equivalent to:
    #
    # session = Session()
    # print session.query(MyClass).all()
    #
    print Session.query(MyClass).all()

The above code accomplishes the same task as that of acquiring the current :class:.Session by calling upon the registry, then using that :class:.Session.

Thread-Local Scope

Users who are familiar with multithreaded programming will note that representing anything as a global variable is usually a bad idea, as it implies that the global object will be accessed by many threads concurrently. The :class:.Session object is entirely designed to be used in a non-concurrent fashion, which in terms of multithreading means "only in one thread at a time". So our above example of :class:.scoped_session usage, where the same :class:.Session object is maintained across multiple calls, suggests that some process needs to be in place such that multiple calls across many threads don't actually get a handle to the same session. We call this notion thread local storage, which means, a special object is used that will maintain a distinct object per each application thread. Python provides this via the threading.local() construct. The :class:.scoped_session object by default uses this object as storage, so that a single :class:.Session is maintained for all who call upon the :class:.scoped_session registry, but only within the scope of a single thread. Callers who call upon the registry in a different thread get a :class:.Session instance that is local to that other thread.
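As a quick demonstration of this per-thread behavior, here is a small sketch (not from the original documentation; it assumes an in-memory SQLite engine just to have something to bind):

    import threading
    from sqlalchemy import create_engine
    from sqlalchemy.orm import scoped_session, sessionmaker

    Session = scoped_session(sessionmaker(bind=create_engine('sqlite://')))

    main_session = Session()     # Session owned by the main thread
    results = []

    def worker():
        # in another thread, the registry hands out a different Session
        results.append(Session() is main_session)
        Session.remove()

    t = threading.Thread(target=worker)
    t.start()
    t.join()

    print(Session() is main_session)   # True: same thread, same Session
    print(results)                     # [False]: the worker got its own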
Using this technique, the :class:.scoped_session provides a quick and relatively simple (if one is familiar with thread-local storage) way of providing a single, global object in an application that is safe to be called upon from multiple threads. The :meth:.scoped_session.remove method, as always, removes the current :class:.Session associated with the thread, if any. However, one advantage of the threading.local() object is that if the application thread itself ends, the "storage" for that thread is also garbage collected. So it is in fact "safe" to use thread local scope with an application that spawns and tears down threads, without the need to call :meth:.scoped_session.remove. However, the scope of transactions themselves, i.e. ending them via :meth:.Session.commit or :meth:.Session.rollback, will usually still be something that must be explicitly arranged for at the appropriate time, unless the application actually ties the lifespan of a thread to the lifespan of a transaction.

Using Thread-Local Scope with Web Applications

As discussed in the section :ref:session_faq_whentocreate, a web application is architected around the concept of a web request, and integrating such an application with the :class:.Session usually implies that the :class:.Session will be associated with that request. As it turns out, most Python web frameworks, with notable exceptions such as the asynchronous frameworks Twisted and Tornado, use threads in a simple way, such that a particular web request is received, processed, and completed within the scope of a single worker thread. When the request ends, the worker thread is released to a pool of workers where it is available to handle another request.

This simple correspondence of web request and thread means that to associate a :class:.Session with a thread implies it is also associated with the web request running within that thread, and vice versa, provided that the :class:.Session is created only after the web request begins and torn down just before the web request ends. So it is a common practice to use :class:.scoped_session as a quick way to integrate the :class:.Session with a web application. The sequence diagram below illustrates this flow:

    Web Server          Web Framework         SQLAlchemy ORM Code
    --------------      --------------        ------------------------------
    startup        ->   Web framework         # Session registry is established
                        initializes           Session = scoped_session(sessionmaker())

    incoming
    web request    ->   web request       ->  # The registry is *optionally*
                        starts                # called upon explicitly to create
                                              # a Session local to the thread and/or request
                                              Session()

                                              # the Session registry can otherwise
                                              # be used at any time, creating the
                                              # request-local Session() if not present,
                                              # or returning the existing one
                                              Session.query(MyClass) # ...

                                              # if data was modified, commit the
                                              # transaction
                                              Session.commit()

                        web request ends  ->  # the registry is instructed to
                                              # remove the Session
                                              Session.remove()

                        sends output      <-
    outgoing web   <-
    response

Using the above flow, the process of integrating the :class:.Session with the web application has exactly two requirements:
1. Create a single :class:.scoped_session registry when the web application first starts, ensuring that this object is accessible by the rest of the application.

2. Ensure that :meth:.scoped_session.remove is called when the web request ends, usually by integrating with the web framework's event system to establish an "on request end" event.

As noted earlier, the above pattern is just one potential way to integrate a :class:.Session with a web framework, one which in particular makes the significant assumption that the web framework associates web requests with application threads. It is however strongly recommended that the integration tools provided with the web framework itself be used, if available, instead of :class:.scoped_session. In particular, while using a thread local can be convenient, it is preferable that the :class:.Session be associated directly with the request, rather than with the current thread. The next section on custom scopes details a more advanced configuration which can combine the usage of :class:.scoped_session with direct request-based scope, or any kind of scope.

Using Custom Created Scopes

The :class:.scoped_session object's default behavior of "thread local" scope is only one of many options on how to "scope" a :class:.Session. A custom scope can be defined based on any existing system of getting at "the current thing we are working with". Suppose a web framework defines a library function get_current_request(). An application built using this framework can call this function at any time, and the result will be some kind of Request object that represents the current request being processed. If the Request object is hashable, then this function can be easily integrated with :class:.scoped_session to associate the :class:.Session with the request. Below we illustrate this in conjunction with a hypothetical event marker provided by the web framework on_request_end, which allows code to be invoked whenever a request ends:

    from my_web_framework import get_current_request, on_request_end
    from sqlalchemy.orm import scoped_session, sessionmaker

    Session = scoped_session(sessionmaker(bind=some_engine),
                             scopefunc=get_current_request)

    @on_request_end
    def remove_session(req):
        Session.remove()

Above, we instantiate :class:.scoped_session in the usual way, except that we pass our request-returning function as the "scopefunc". This instructs :class:.scoped_session to use this function to generate a dictionary key whenever the registry is called upon to return the current :class:.Session. In this case it is particularly important that we ensure a reliable "remove" system is implemented, as this dictionary is not otherwise self-managed.

Partitioning Strategies

Simple Vertical Partitioning

Vertical partitioning places different kinds of objects, or different tables, across multiple databases:

    engine1 = create_engine('postgresql://db1')
    engine2 = create_engine('postgresql://db2')

    Session = sessionmaker(twophase=True)

    # bind User operations to engine 1, Account operations to engine 2
    Session.configure(binds={User: engine1, Account: engine2})

    session = Session()

Above, operations against either class will make use of the :class:.Engine linked to that class. Upon a flush operation, similar rules take place to ensure each class is written to the right database. The transactions among the multiple databases can optionally be coordinated via two-phase commit, if the underlying backend supports it. See :ref:session_twophase for an example.
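As a usage sketch (the User and Account mappings here are hypothetical, as above), each class's statements are emitted on the engine it is bound to:

    session = Session()

    session.add(User(name='ed'))          # INSERT emitted on engine1 at flush time
    session.add(Account(balance=100))     # INSERT emitted on engine2 at flush time

    # one commit; with twophase=True, both databases are prepared,
    # then committed together
    session.commit()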
Custom Vertical Partitioning

More comprehensive rule-based class-level partitioning can be built by overriding the :meth:.Session.get_bind method. Below we illustrate a custom :class:.Session which delivers the following rules:

1. Flush operations are delivered to the engine named master.

2. Operations on objects that subclass MyOtherClass all occur on the other engine.

3. Read operations for all other classes occur on a random choice of the slave1 or slave2 database.

    engines = {
        'master': create_engine("sqlite:///master.db"),
        'other':  create_engine("sqlite:///other.db"),
        'slave1': create_engine("sqlite:///slave1.db"),
        'slave2': create_engine("sqlite:///slave2.db"),
    }

    from sqlalchemy.orm import Session, sessionmaker
    import random

    class RoutingSession(Session):

        def get_bind(self, mapper=None, clause=None):
            if mapper and issubclass(mapper.class_, MyOtherClass):
                return engines['other']
            elif self._flushing:
                return engines['master']
            else:
                return engines[
                    random.choice(['slave1', 'slave2'])
                ]

The above :class:.Session class is plugged in using the class_ argument to :class:.sessionmaker:

    Session = sessionmaker(class_=RoutingSession)

This approach can be combined with multiple :class:.MetaData objects, using an approach such as that of the declarative __abstract__ keyword, described at :ref:declarative_abstract.

Horizontal Partitioning

Horizontal partitioning partitions the rows of a single table (or a set of tables) across multiple databases. See the "sharding" example: :ref:examples_sharding.

Sessions API

Attribute and State Management Utilities

These functions are provided by the SQLAlchemy attribute instrumentation API to provide a detailed interface for dealing with instances, attribute values, and history. Some of them are useful when constructing event listener functions, such as those described in :ref:events_orm_toplevel.
Kelani A.

# Word Problem

NASA launches a rocket at t = 0 seconds. Its height, in meters above sea level, in terms of time is given by h(t) = −4.9t^2 + 106t + 388. For each of these questions, report your answer to the nearest tenth.

How high is the rocket after 9 seconds? How high was the rocket when it was initially launched? When will the rocket reach its maximum height? What will its maximum height be (above sea level)? If the rocket is launched so that it lands in the ocean, when will it splash down?
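A worked solution (a sketch using the standard vertex and quadratic-formula steps, rounded to the nearest tenth):

$$h(9) = -4.9(9)^2 + 106(9) + 388 = 945.1 \text{ m}, \qquad h(0) = 388.0 \text{ m}$$

$$t_{\max} = \frac{106}{2(4.9)} \approx 10.8 \text{ s}, \qquad h(t_{\max}) = 388 + \frac{106^2}{4(4.9)} \approx 961.3 \text{ m}$$

$$h(t) = 0 \;\Rightarrow\; t = \frac{106 + \sqrt{106^2 + 4(4.9)(388)}}{2(4.9)} \approx 24.8 \text{ s}$$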
4.6 Problem-solving strategies

• Understand and apply a problem-solving procedure to solve problems using Newton's laws of motion.

Success in problem solving is obviously necessary to understand and apply physical principles, not to mention the more immediate need of passing exams. The basics of problem solving, presented earlier in this text, are followed here, but specific strategies useful in applying Newton's laws of motion are emphasized. These techniques also reinforce concepts that are useful in many other areas of physics. Many problem-solving strategies are stated outright in the worked examples, and so the following techniques should reinforce skills you have already begun to develop.

Problem-solving strategy for Newton's laws of motion

Step 1. As usual, it is first necessary to identify the physical principles involved. Once it is determined that Newton's laws of motion are involved (if the problem involves forces), it is particularly important to draw a careful sketch of the situation. Such a sketch is shown in [link] (a). Then, as in [link] (b), use arrows to represent all forces, label them carefully, and make their lengths and directions correspond to the forces they represent (whenever sufficient information exists).

Step 2. Identify what needs to be determined and what is known or can be inferred from the problem as stated. That is, make a list of knowns and unknowns. Then carefully determine the system of interest. This decision is a crucial step, since Newton's second law involves only external forces. Once the system of interest has been identified, it becomes possible to determine which forces are external and which are internal, a necessary step to employ Newton's second law. (See [link] (c).) Newton's third law may be used to identify whether forces are exerted between components of a system (internal) or between the system and something outside (external). As illustrated earlier in this chapter, the system of interest depends on what question we need to answer. This choice becomes easier with practice, eventually developing into an almost unconscious process. Skill in clearly defining systems will be beneficial in later chapters as well.

A diagram showing the system of interest and all of the external forces is called a free-body diagram. Only forces are shown on free-body diagrams, not acceleration or velocity. We have drawn several of these in worked examples. [link] (c) shows a free-body diagram for the system of interest. Note that no internal forces are shown in a free-body diagram.
# Thread: solve ln x = x - 2 for x

1. ## solve ln x = x - 2 for x

I can't figure it out. =/

$\ln(x) = x - 2$

$x = e^{x - 2}$

$x = \frac{e^x}{e^2}$

I'm quite stuck. I feel like I should know how to do this, but just can't remember. >_<

2. You need to solve it numerically, I'm afraid. Plotting it gives rough ideas of x = 0.16 and 3.13.

3. How irksome. Thanks though.... >_< I detest having to use Newton's method.

4. Originally Posted by Zizoo: How irksome. Thanks though.... >_< I detest having to use Newton's method.

Use the Bisection Method then :P

5. Originally Posted by e^(i*pi): You need to solve it numerically, I'm afraid. Plotting it gives rough ideas of x = 0.16 and 3.13.

Can be solved in closed form in terms of Lambert's W function (real (0) branch for a real solution):

$x = -\text{W}(-e^{-2}) = 0.158594$

CB
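For reference, a quick numerical sketch of the Newton's method route the thread mentions (any root-finder works; note f'(x) = 1/x - 1 vanishes at x = 1, so start away from it):

    import math

    def newton(f, df, x0, tol=1e-12, max_iter=100):
        # standard Newton iteration: x <- x - f(x)/f'(x)
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    f  = lambda x: math.log(x) - x + 2   # ln x = x - 2  <=>  f(x) = 0
    df = lambda x: 1.0 / x - 1.0

    print(newton(f, df, 0.1))   # ~0.158594
    print(newton(f, df, 3.0))   # ~3.146193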
# I need help on 9-10 (I will mark brainliest)

###### Question: I need help on 9-10 (I will mark brainliest)

### What is the purpose of the cause-and-effect signal word "therefore"? A) It signals to the reader that there is more information about the research surrounding how much time should be spent at recess. B) It signals to the reader that the writer is going to offer an opposing view about how much time should be spent at recess. C) It signals to the reader that the person has come to a decision about how much time should be spent at recess. D) It signals to the reader that the idea of how much time s...

### When is acceleration the greatest on a graph?...

### The words lunging at the length suggests which of the following?...

### 2. Why does the Dietary Guidelines for Americans focus on them?...

### (2.3 x 10^3) + (6.9 x 10^3)...

### The book of the dead was clearly thought to be a very important means to everlasting life by contemporary egyptians, who often brought it with them to the tomb. how could it also have been useful for the living?...

### Present tense: 1. Mayap, biasa and maganaca __ the graduate attributes of the Assumptionists. A. are B. is 2. Assumptionists are maganaca or morally upright who ___ honesty and fairness in what they say and do A. observed B. observes 3. My friends and I __ that the "white beach" in Manila bay is unnecessary amidst the pandemic A. believe B. believes 4. Environment still __ that the project lacked environmental studies A. stress B. stresses 5. Our teacher always ___ the value of taking care of the environmen...

### What are TWO tips that should be followed when paraphrasing? A. Read the original source several times to ensure that you understand the information. B. Revisit the original source to make sure you captured the main idea. C. Use as much of the original wording as possible. D. Examine the original source as you write your paraphrase. E. Do not cite the source since you are not using a direct quotation....

### There are twenty-one and a half calories in a candy bar. How many calories are there in thirty-eight candy bars?...

### You are body surfing in modest waves that are breaking just offshore. A particularly big one catches you off guard, flips you over and over, and deposits you on the shore. As you lie there in the sand, you think, hey, I just landed on ________....

### Where does Henry like to visit on Fridays? A. the ice-cream shop B. the bakery C. the pet shop D. the school...

### Serving at a speed of 257 km/h, a tennis player hits the ball at a height of 2.6 m and an angle θ below the horizontal. The service line is 11.9 m from the net, which is 0.91 m high. What is the angle θ such that the ball just crosses the net?...

### Which of the following is the correct way to cite this source within your essay? According to Maryanski, "these products must be as safe as the traditional foods in the market" ("Genetically Engineered Foods"). "These products must be as safe as the traditional foods in the market." (Maryanski) These products must be as safe as the traditional foods in the market ("Genetically Engineered Foods"). "These products must be as safe as the traditional foods in the market (Maryanski)."...

### What is 4(2x+7)=12-2(10-2x)?...

### Write an explicit formula for this sequence: -2, 6, -18, 54...

### Which text in this excerpt from N. Scott Momaday's The Way to Rainy Mountain uses a simile to create a vivid picture? For my people, the Kiowas, it is an old landmark, and they gave it the name Rainy Mountain. The hardest weather in the world is there. Winter brings blizzards, hot tornadic winds arise in the spring, and in summer the prairie is an anvil's edge. The grass turns brittle and brown, and it cracks beneath your feet. There are green belts along the rivers and creeks, linear groves of...
Focus

# Landmarks: Superconductivity Explained

Phys. Rev. Focus 18, 8

Weaving together experimental clues and theoretical insights, three physicists devised in 1957 the first fundamental theory of superconductivity, one of the most successful theories in solid state physics.

APS has put the entire Physical Review archive online, back to 1893. Focus Landmarks feature important papers from the archive.

In 1957, the Physical Review published the first fundamental theory explaining how, at low temperatures, some materials can conduct electricity entirely without resistance. Building on experimental clues and earlier theoretical hints, John Bardeen, Leon Cooper, and Robert Schrieffer, all at the University of Illinois in Urbana, explained not just the absence of electrical resistance but also a variety of magnetic and thermal properties of superconductors. The "BCS" theory also had an important influence on theories of particle physics and provided the starting point for many attempts to explain the new high-temperature superconductors.

Superconductivity was discovered in 1911, and by the 1930s, physicists had concluded that electrons in a superconductor must occupy a quantum-mechanical state distinct from that of normal conduction electrons. In 1950, researchers found that the temperature at which mercury becomes a superconductor is slightly higher for mercury isotopes of lower atomic weight, suggesting that superconductivity somehow involves motion of the atoms in a material as well as the electrons. Following up on this "isotope effect," Bardeen and Illinois colleague David Pines showed theoretically that within an atomic lattice, electrons could attract one another, despite their strong electrostatic repulsion. Essentially, an electron can create vibrations among the lattice atoms, which can in turn affect other electrons, so the attraction is indirect.

By the middle 1950s, Bardeen was collaborating with Cooper, a post-doctoral fellow, and Schrieffer, a graduate student. Cooper published a short paper showing how the Bardeen-Pines attraction could cause electrons with opposite momentum to form stable pairs [1]. This pairing mechanism, Cooper suggested, might be responsible for superconductivity, but Bardeen was initially skeptical. The paired electrons were not physically close together but moved in a coordinated way, always having equal but opposite momentum. It was not clear that these tenuous, extended pairs could be crammed together to create a superconducting medium without getting disrupted.

A few months later, however, Schrieffer hit on a mathematical way of defining a quantum mechanical state containing lots of paired electrons, with the pairs oblivious to other electrons and the lattice, allowing them to move without hindrance. He later compared the concept to the Frug, a popular dance at the time, where dance partners could be far apart on the dance floor, separated by many other dancers, yet remain a pair [2]. After publishing a short note early in 1957 [3], the team published what became known as the Bardeen-Cooper-Schrieffer, or BCS, theory of superconductivity in December. They won the Nobel prize in 1972.

The theory explained the isotope effect and the fact that magnetic fields below a certain strength cannot penetrate superconductors. It also explained why superconductivity could only occur near absolute zero: the tenuous Cooper pairs break up in the presence of too much thermal jiggling.
It’s a testament to Bardeen’s insight that he chose the right collaborators and kept his eye on experiment in seeing the way forward, says superconductivity experimentalist Laura Greene of the University of Illinois: “It’s how science should be done.”

One oddity of the BCS wave function is that it lacks some of the mathematical symmetry expected at the time for any quantum or classical solution of electromagnetic equations. Further analysis of this point spurred the development of so-called symmetry-breaking theories in particle physics.

Although the superconductors discovered in 1986 rely on electron pairing, they remain superconducting at temperatures above what the pairing mechanism in BCS can easily explain. But Marvin Cohen of the University of California at Berkeley says that, given the poor understanding of the new materials, the original BCS pairing mechanism shouldn’t be ruled out. And, adds Greene, it took “some very smart people” almost 50 years to get from the discovery of superconductivity to BCS, so she’s not worried that the high-temperature superconductors remain unsolved after a mere 20 years.

–David Lindley

David Lindley is a freelance science writer in Alexandria, Virginia.

## References

1. Leon N. Cooper, “Bound Electron Pairs in a Degenerate Fermi Gas,” Phys. Rev. 104, 1189 (1956)
2. Lillian Hoddeson and Vicki Daitch, True Genius: The Life and Science of John Bardeen (Joseph Henry Press, 2002), p. 203
3. J. Bardeen, L. N. Cooper, and J. R. Schrieffer, “Microscopic Theory of Superconductivity,” Phys. Rev. 106, 162 (1957)

• Further reading: Lillian Hoddeson and Vicki Daitch, True Genius: The Life and Science of John Bardeen (Joseph Henry Press, 2002)
# Variance of the sum of the even numbers rolled

I tried to solve the following problem for my probability theory class: A die is rolled $$100$$ times. If $$X$$ is the sum of the even numbers rolled, find the expected value and the variance of $$X$$.

My attempt: Let $$X_i$$ be the number of times we rolled $$i$$ $$\left(\text{for } i = \overline{1,6}\right)$$. For example, if after 100 tries, we rolled $$5$$ seven times, then $$X_5 = 7$$. Then $$X = 2X_2 + 4X_4 + 6X_6$$, so by the linearity of expectation, we have:

$$\mathbb{E}(X) = \mathbb{E}(2X_2 + 4X_4 + 6X_6) = 2\mathbb{E}(X_2) + 4\mathbb{E}(X_4) + 6\mathbb{E}(X_6)$$

Also, $$\mathbb{P}(X_i = k) = {100 \choose k} \left( \dfrac{1}{6} \right)^k \left( \dfrac{5}{6} \right)^{100-k}$$

Therefore, $$X_i \sim \text{Bin}\left(100, \dfrac{1}{6} \right)$$, so we have $$\mathbb{E}(X_i) = \dfrac{100}{6}$$

$$\Rightarrow \mathbb{E}(X) = \dfrac{100}{6}(2+4+6) = 200.$$

To find the variance, I know that for two independent random variables $$A$$ and $$B$$, we have $$\text{Var}(A+B) = \text{Var}(A)+\text{Var}(B),$$ which would eventually solve my problem, but somehow, I can't figure out whether $$X_2, X_4, X_6$$ are independent or not... Or is this a wrong approach? Thanks in advance for any help!

Answer: The $$X_i$$ are NOT independent, because, for example, $$\sum\limits_{i=1}^6 X_i = 100$$. Consider $$Y_i \sim U\{1,2,3,4,5,6\}$$, representing each die roll, and $$Z_i = 1\{Y_i \text{ even}\}$$. Now, you want to calculate the expectation and variance of $$\sum\limits_{i=1}^{100} Y_iZ_i$$. Try to prove:

1. $$Y_iZ_i$$ and $$Y_jZ_j$$ are independent if $$i \neq j$$
2. $$E(Y_iZ_i) = 2P(Y_i=2) + 4P(Y_i=4) + 6P(Y_i=6)$$

For the variance, use that $$\text{Var}(X) = E(X^2) - E(X)^2$$ and try to analyze the variable $$(Y_iZ_i)^2$$. What are its possible values?

• Thank you. I see now that my random variables are not independent. I'll try to find the variance with the new approach. But I think my calculation of the expected value is correct, isn't it? Feb 24 at 13:18
• Yes! The expected value is correct. Feb 25 at 14:35
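For reference, a sketch of the numbers the answer's decomposition yields:

$$E(Y_iZ_i) = \frac{2+4+6}{6} = 2, \qquad E\big((Y_iZ_i)^2\big) = \frac{2^2+4^2+6^2}{6} = \frac{28}{3},$$

$$\text{Var}(Y_iZ_i) = \frac{28}{3} - 2^2 = \frac{16}{3}, \qquad \text{Var}(X) = 100 \cdot \frac{16}{3} = \frac{1600}{3} \approx 533.3,$$

consistent with $$\mathbb{E}(X) = 100 \cdot 2 = 200$$ found above.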
## Commutative Algebra and Algebraic Geometry: Evolutions and Second Symbolic Powers

Seminar | September 12 | 5-6 p.m. | 939 Evans Hall

Jana Sotakova, IS MU

Department of Mathematics

Let K be a field and let R be a local K-algebra. An evolution of R is a surjective homomorphism T -> R of K-algebras such that the induced map between the modules of differentials is an isomorphism. We will see that the question of the existence of non-trivial evolutions is related to the Eisenbud-Mazur conjecture on (second) symbolic powers.

[email protected]
# Parameter estimation for random variables where a control parameter is another r.v.

Let $$\{X_i\}$$ be a sequence of independent random variables. Each $$X_i$$ has a p.d.f. $$p(m, \theta)$$, where $$\theta$$ is a real unknown parameter and $$m$$ is the outcome of another random variable $$M$$ with p.d.f. $$p(m)$$. I have the following protocol to get a sample $$\bar{x} = \{x_1,...,x_n\}$$: First, I get the value $$m_1 \sim p(m)$$. Then I obtain the value $$x_1 \sim p(m_1, \theta)$$. And I repeat the process $$n$$ times. The aim is to estimate the value of $$\theta$$ from $$\bar{x}$$. My questions:

1. Is the maximum likelihood estimator the best way to do that?
2. Are there lower-bound inequalities for the variance of the estimator for $$\theta$$? When $$m$$ and $$\theta$$ are real unknown parameters, I know that there is the Cramér-Rao bound, and when one has a parameter that is a random variable there exists the Van Trees inequality. But in this situation, I don't know if there is a standard inequality.

## 2 Answers

Regarding your questions:

1. I think there is a confusion: it does not really make sense to ask if the MLE is a "good way" to estimate $$\theta$$. As its name suggests, the MLE is your best estimate of $$\theta$$ that maximizes the likelihood of your observation, so yes, it is pretty good ^^ But a more relevant question is how to obtain your MLE. Given that you have both observed variables $$X_i$$ and hidden variables $$M_i$$, I would suggest using the Expectation-Maximization algorithm.

2. You can indeed use the Cramér-Rao bound. In this case, the likelihood used to compute the Fisher information is the likelihood of your observations $$X_i$$, obtained by marginalizing the joint distribution of the observed and hidden variables: $$p(X_i|\theta) = \sum_{M_i}p(X_i|M_i,\theta)p(M_i)$$

I highly recommend this article, in which the authors use the EM algorithm to obtain the MLE of a vector of parameters $$\theta$$, and also compute the Fisher information matrix: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4867027/

The (complete information) likelihood in this model is $$\prod_i p(x_i|m_i,\theta)\tag{1}$$ which can be maximised or used in a Bayesian analysis, provided a prior $$p(\theta)$$ is chosen. (There is no "best way to estimate" a quantity; it all depends on the utility function for running the estimation.) In the event the $$m_i$$'s are not observed, the (observed) likelihood becomes $$\prod_i \int p(x_i|m,\theta)\,p(m)\,\text{d}m\tag{2}$$

If there exists an unbiased estimator of $$\theta$$ in this setting, $$T_n(X_1,\ldots,X_n|m_1,\ldots,m_n)$$, then it satisfies the Cramér-Rao inequality $$\text{cov}(T_n(X_1,\ldots,X_n|m_1,\ldots,m_n)) \ge I_n^{-1}(\theta;m_1,\ldots,m_n)$$ where $$I_n(\theta;m_1,\ldots,m_n)$$ is the (complete) Fisher information matrix associated with (1): $$I_n(\theta;m_1,\ldots,m_n)=-\sum_{i=1}^n\mathbb{E}_\theta \left[ \dfrac{\partial ^2}{\partial \theta \, \partial \theta^\text{T}} \log p\left(X| m_i,{\theta}\right)\right]$$

Once again, if the $$m_i$$'s are not observed, the observed Fisher information becomes $$I_n(\theta)=-n \mathbb{E}_\theta \left[ \dfrac{\partial ^2}{\partial \theta \, \partial \theta^\text{T}} \log p\left(X| {\theta}\right)\right]=-n \mathbb{E}_\theta \left[ \dfrac{\partial ^2}{\partial \theta \, \partial \theta^\text{T}} \log \int p\left(X| m,{\theta}\right)p(m)\,\text{d}m \right]$$

• I am not absolutely sure about the way you define the likelihood in the model. Here, you seem to consider the conditional likelihood of $x_i$ given $m_i$.
But I guess the original question implies that $m_i$ is a hidden variable, to which we do not have access. If this is the case, we should define the likelihood of the observations as $p(x_i|\theta) = \sum_{m_i}p(x_i|m_i,\theta)p(m_i)$ – Camille Gontier May 9 at 9:24
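One way to make the marginal likelihood (2) concrete is a plain Monte Carlo average over draws of $m$. The sketch below is illustrative only; sample_m and p_x_given_m_theta are assumed, user-supplied functions (the latter vectorized over the m draws):

    import numpy as np

    def log_marginal_likelihood(theta, xs, sample_m, p_x_given_m_theta,
                                n_mc=10000):
        """Monte Carlo estimate of sum_i log p(x_i | theta), marginalizing
        the hidden m's: p(x | theta) ~ mean_j p(x | m_j, theta)."""
        ms = sample_m(n_mc)            # m_1, ..., m_{n_mc} drawn from p(m)
        total = 0.0
        for x in xs:
            total += np.log(np.mean(p_x_given_m_theta(x, ms, theta)))
        return total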
## Critical points and extremal points

Let $f(x,y)$ be an infinitely differentiable function of two variables with a local minimum at the origin and no other critical points. Is it true that the minimum is global? (A critical point is a point where both partial derivatives $\frac{\partial f}{\partial x},\ \frac{\partial f}{\partial y}$ vanish.)

The answer turns out to be no; the following function has a local minimum at the origin as its only critical point, yet the minimum is not global:

$\displaystyle f(x,y)=-\frac{1}{1+x^2}+\left( e^x+\frac{1}{1+x^2} \right) (3y^2-2y^3)$
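A quick check that the minimum is not global (the verification that the origin is the only critical point takes a bit more work): both partial derivatives vanish at the origin, while

$f(0,0) = -1, \qquad f(0,2) = -1 + (1+1)(3\cdot 4 - 2\cdot 8) = -9 < f(0,0),$

so the local minimum at the origin cannot be a global one.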
Question

A glass coffee pot has a circular bottom with a 9.00-cm diameter in contact with a heating element that keeps the coffee warm with a continuous heat transfer rate of 50.0 W. (a) What is the temperature of the bottom of the pot, if it is 3.00 mm thick and the inside temperature is $60.0^\circ\textrm{C}$? (b) If the temperature of the coffee remains constant and all of the heat transfer is removed by evaporation, how many grams per minute evaporate? Take the heat of vaporization to be 2340 kJ/kg.

1. $88.1^\circ\textrm{C}$
2. $1.33 \textrm{ g}$
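A sketch of the solution, assuming the thermal conductivity of glass is $k \approx 0.84 \textrm{ W/m}\cdot^\circ\textrm{C}$ (a standard textbook value, not given in the problem): with $A = \pi (0.045)^2 = 6.36\times10^{-3} \textrm{ m}^2$, the conduction equation $Q/t = kA(T_2 - T_1)/d$ gives

$T_2 = T_1 + \frac{(Q/t)\,d}{kA} = 60.0 + \frac{(50.0)(3.00\times10^{-3})}{(0.84)(6.36\times10^{-3})} \approx 88.1^\circ\textrm{C}$

For part (b), the evaporation rate is $m/t = P/L_v = (50.0)(60)/(2.34\times10^{6}) \approx 1.28 \textrm{ g/min}$ using the stated $L_v = 2340 \textrm{ kJ/kg}$; the listed answer of $1.33 \textrm{ g}$ per minute corresponds instead to taking $L_v \approx 2256 \textrm{ kJ/kg}$ (water at $100^\circ\textrm{C}$).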
1. ## arctan

Dear All! Please, could someone help me to get f(x) = -pi/4 for x < 1 and f(x) = arctan 2 + arctan 3 for x > 1, where

$f(x)=\arctan\frac{x+1}{x-1}+\arctan x$

Many thanks!!!

2. ## Re: arctan

You have

$\frac{-\pi}{4}= \arctan{\frac{x+1}{x-1}} +\arctan x$

You put tan everywhere:

$\tan\frac{-\pi}{4} = \frac{x+1}{x-1} + x, \qquad -1 = \frac{x+1}{x-1} + x, \qquad 0 = \frac{x+1}{x-1} + x + 1$

Solve the resulting quadratic; you end up with x = 0 and x = -1.

3. ## Re: arctan

As you can see, I got rid of the <br/> in your post. That happens if there is a linefeed in the [TEX] code.

4. ## Re: arctan

Thanks for the info!

5. ## Re: arctan

Hello! Many thanks! But, I should not know in advance that the result is pi/4! I have to calculate it... how? P.s. Please, can U tell me how did U get rid of the arctans by putting tan in front of arctan? Many thanks!!!
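For what it's worth, a sketch of the derivation the last post is asking for: apply $\tan$ to the whole sum via the addition formula (note that $\tan$ does not distribute over a sum, so it cannot simply be applied term by term):

$\tan f(x) = \frac{\frac{x+1}{x-1} + x}{1 - \frac{x+1}{x-1}\,x} = \frac{(x+1) + x(x-1)}{(x-1) - x(x+1)} = \frac{x^2+1}{-(x^2+1)} = -1$

Since $f$ is continuous on each of the intervals $x < 1$ and $x > 1$ and $\tan f(x) \equiv -1$ there, $f$ is constant on each interval. Evaluating at $x = 0$ gives $f = \arctan(-1) + \arctan 0 = -\pi/4$ for $x < 1$, and evaluating at $x = 2$ gives $f = \arctan 3 + \arctan 2$ for $x > 1$.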
# Nearly optimal solutions for the Chow Parameters Problem and low-weight approximation of halfspaces

Anindya De, Ilias Diakonikolas, Vitaly Feldman, Rocco A. Servedio

Research output: Working paper

## Abstract

The \emph{Chow parameters} of a Boolean function $f: \{-1,1\}^n \to \{-1,1\}$ are its $n+1$ degree-0 and degree-1 Fourier coefficients. It has been known since 1961 (Chow, Tannenbaum) that the (exact values of the) Chow parameters of any linear threshold function $f$ uniquely specify $f$ within the space of all Boolean functions, but until recently (O'Donnell and Servedio) nothing was known about efficient algorithms for \emph{reconstructing} $f$ (exactly or approximately) from exact or approximate values of its Chow parameters. We refer to this reconstruction problem as the \emph{Chow Parameters Problem}.

Our main result is a new algorithm for the Chow Parameters Problem which, given (sufficiently accurate approximations to) the Chow parameters of any linear threshold function $f$, runs in time $\tilde{O}(n^2)\cdot (1/\epsilon)^{O(\log^2(1/\epsilon))}$ and with high probability outputs a representation of an LTF $f'$ that is $\epsilon$-close to $f$. The only previous algorithm (O'Donnell and Servedio) had running time $\mathrm{poly}(n) \cdot 2^{2^{\tilde{O}(1/\epsilon^2)}}$.

As a byproduct of our approach, we show that for any linear threshold function $f$ over $\{-1,1\}^n$, there is a linear threshold function $f'$ which is $\epsilon$-close to $f$ and has all weights that are integers at most $\sqrt{n} \cdot (1/\epsilon)^{O(\log^2(1/\epsilon))}$. This significantly improves the best previous result of Diakonikolas and Servedio, which gave a $\mathrm{poly}(n) \cdot 2^{\tilde{O}(1/\epsilon^{2/3})}$ weight bound, and is close to the known lower bound of $\max\{\sqrt{n},\ (1/\epsilon)^{\Omega(\log \log (1/\epsilon))}\}$ (Goldberg, Servedio). Our techniques also yield improved algorithms for related problems in learning theory.

Original language: English
Computing Research Repository (CoRR), abs/1206.0985
Published: 2012
## Brake Horsepower

Horsepower as a unit of power output is historical: it was an attempt by James Watt, the British engineer and pioneer of the steam engine, to express the output of steam engines in terms of the number of horses they could replace. One horsepower, hp, was defined as the power needed to lift 33,000 pounds over one foot in one minute on the surface of the Earth, or in SI units,

\begin{aligned} 1 \: hp &= \frac{mg \Delta h}{t} \\ &= \frac{33000 \times 0.4536 \times 9.807 \times 0.3048}{60} \\ &\approx 746 \: W \end{aligned}

Today brake horsepower (bhp) is used to measure the power output of an engine at the crankshaft, just outside the engine, before the losses of power caused by the gearbox and drive train, and it is common to hear the power of a car given in terms of brake horsepower.
## The cost of setting up the type of a magazine is Rs. 1000.

The cost of setting up the type of a magazine is Rs. 1000. The cost of running the printing machine is Rs. 120 per 100 copies. The cost of paper,…

## A dishonest dealer marks up the price of his goods.

A dishonest dealer marks up the price of his goods by 20% and gives a discount of 10% to the customer. He also uses a 900 gram weight instead of…

## A dishonest dealer marks up the price of his goods by 20%..

A dishonest dealer marks up the price of his goods by 20% and gives a discount of 10% to the customer. He also uses a 900 gram weight instead of…

## A driver of auto rickshaw makes a profit of 20%…

A driver of an auto rickshaw makes a profit of 20% on every trip when he carries 3 passengers and the price of petrol is Rs. 30 a litre. Find the…

## By selling an article, a man makes a profit of 25%..

By selling an article, a man makes a profit of 25% of its selling price. His profit percent is: A. 20% B. 25% C. $16 \frac{2}{3} \%$ D. $33 \frac{1}{3} \%$…
3.5.2.35 QCD3

Description

This function returns the Quality Control D3 factor. It determines the 3-sigma lower control limit for R charts (Range of Sample charts) from the average range when the sample size (or subgroup size) n is given: the Lower Control Limit for R = (factor) * (Average Range). The calculations for the factors are based on the normal distribution.

Syntax

double QCD3(int n)

n: Sample size

Return

Returns the Quality Control D3 factor.

Examples

QCD3(11); // returns 0.256, the factor for a sample size of 11
# Is The MS C++ Compiler Stupid?

## Recommended Posts

So I brought over some real basic C++ code (that Vector class and others I've been working on) so I could work on it on my much faster PC. So I bring over the code files, create a new project for a static library, get all the boost libraries linked and such, go to compile, and get this:

    1>Matrix.cpp
    1>d:\island\src\matrix.cpp(160) : error C2059: syntax error : ')'
    1>d:\island\src\matrix.cpp(161) : error C2059: syntax error : '/'
    1>d:\island\src\matrix.cpp(175) : error C2059: syntax error : ')'
    1>d:\island\src\matrix.cpp(176) : error C2059: syntax error : '/'
    1>d:\island\src\matrix.cpp(184) : error C2059: syntax error : '<='
    1>d:\island\src\matrix.cpp(186) : error C2059: syntax error : '<='
    1>d:\island\src\matrix.cpp(188) : error C2059: syntax error : '<='
    1>d:\island\src\matrix.cpp(193) : error C2059: syntax error : ')'
    1>d:\island\src\matrix.cpp(194) : error C2059: syntax error : ')'
    1>d:\island\src\matrix.cpp(195) : error C2059: syntax error : '/'
    1>d:\island\src\matrix.cpp(196) : error C2059: syntax error : ')'
    1>d:\island\src\matrix.cpp(206) : error C2059: syntax error : '<='
    1>d:\island\src\matrix.cpp(208) : error C2059: syntax error : '<='
    1>d:\island\src\matrix.cpp(210) : error C2059: syntax error : '<='
    1>d:\island\src\matrix.cpp(217) : error C2059: syntax error : '/'
    1>d:\island\src\matrix.cpp(218) : error C2059: syntax error : ')'
    1>d:\island\src\matrix.cpp(226) : error C2059: syntax error : '<='
    1>d:\island\src\matrix.cpp(228) : error C2059: syntax error : '<='
    1>d:\island\src\matrix.cpp(230) : error C2059: syntax error : '<='
    1>d:\island\src\matrix.cpp(235) : error C2059: syntax error : ')'
    1>d:\island\src\matrix.cpp(237) : error C2059: syntax error : ')'
    1>d:\island\src\matrix.cpp(239) : error C2059: syntax error : '/'
    1>d:\island\src\matrix.cpp(240) : error C2059: syntax error : ')'

Now, I won't post up those lines, but needless to say those are basic syntax errors it's reporting. Funny thing is that my code compiles error-free using GCC on OS X. So what is causing this? I'm including <math.h>, if that matters at all. I tried wrapping that in extern "C" {} but it didn't help. Any other ideas?

Edit: I "solved" the problem. I removed two includes from a header file because I didn't need them. They were <windows.h> and <GL/gl.h>. No more errors. If anyone knows why those headers would cause all those errors, I'd still be interested.

##### Share on other sites

The GL headers must be included (before|after) the Windows headers. I forget which, but if you do it wrong, weird things happen, because the GL headers depend upon or redefine certain types and macros, et cetera. It's also possible there was a macro or some such in scope via one of those headers that messed with your code, as the errors seem relatively benign, especially for the GL/windows include order, which usually appears to generate errors with types and not punctuation.

##### Share on other sites

Quote: Original post by jpetrie: The GL headers must be included (before|after) the Windows headers. I forget which, but if you do it wrong, weird things happen, because the GL headers depend upon or redefine certain types and macros, et cetera.

windows.h comes first, for exactly that reason.
##### Share on other sites

It can often be helpful to run cl /P to examine the preprocessed output; that should give you an idea of what the code looks like after macro expansion.
vires.finance

Account Health: Avoiding liquidation risk

# What is Account health?

In essence, Account health is a proportion between your Borrow Limit (Borrow Capacity) and your Borrow Capacity Used.

$BC_{u} = \sum(CF_a \cdot C_{(u, a)} \cdot Deposit_{(u,a)} \cdot Price_a)$

where, for user u and asset a:

• CF is the asset's Collateral Factor
• C is 1 for assets used as collateral, otherwise 0
• Deposit is the amount of the asset deposited
• Price is the asset's equivalent price in U.S. Dollars

$BCU_{u} = \sum((Borrow_{(u,a)} \cdot Price_a) / LT_a)$

where

• Borrow is the amount of the asset borrowed
• LT is the Liquidation Threshold for the asset a.

Altogether,

$AccountHealth_u = 1 - BCU_u/BC_u$

## Special case: Supply and Borrow in the same asset

It's a completely healthy situation to borrow the same asset as the collateral: depositing amount D and borrowing amount B should effectively be seen as a deposit of D-B, but only until the borrow starts exceeding the deposit. At that moment, a healthy account can become burned-out (with total negative saldo), and there's nothing liquidators can do about it. To prevent that, an overlap_factor is introduced to smooth the account-health formula while still tolerating supplying and borrowing the same asset:

If

$Borrow_{(u, a)} > C_{(u,a)} \cdot Deposit_{(u,a)}$

then

$BC_{(u,a)} = 0 \\ BCU_{(u,a)} = \left( \frac{Borrow_{(u, a)} - C_{(u,a)} \cdot Deposit_{(u,a)}}{LT_a} + OverlapCharge_{(u,a)} \right) \cdot Price_a$

otherwise

$BC_{(u,a)} = CF_a \cdot (Deposit_{(u,a)} - Borrow_{(u,a)}) \cdot Price_a \\ BCU_{(u,a)} = OverlapCharge_{(u,a)} \cdot Price_a$

given

$OverlapCharge_{(u,a)} = \min(Borrow_{(u, a)}, \; C_{(u,a)} \cdot Deposit_{(u,a)}) \cdot overlap\_factor$

Liquidators can liquidate part of your debt when your Account Health goes below 0.

# Why has my Account health changed?

Even if you don't interact with the protocol, your account health is subject to change:

• Your Deposit increased or decreased in value
• Your Borrows increased or decreased in value
• Interest on your deposits/borrows has been accounted
• Your debt has been liquidated in order to increase your account health

# What is liquidation?

Liquidators can liquidate part of your debt when your Account Health goes below 0%. This means, when your Account Health goes below 0%, a liquidator can step in and take up to 50% of your debt for one asset and the corresponding amount of your deposit + premium. The related parameters (Liquidation Threshold and Liquidation Penalty) are listed in the "Asset Parameters" section.
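Putting the formulas above together, here is a minimal sketch of the account-health computation (an illustrative transcription of the formulas, not the actual on-chain code; asset parameters and position shapes are placeholders):

    def account_health(positions, prices, params):
        """positions: {asset: (deposit, borrow, used_as_collateral)}
        params:       {asset: (collateral_factor, liq_threshold, overlap_factor)}
        prices:       {asset: USD price}"""
        bc, bcu = 0.0, 0.0
        for a, (deposit, borrow, is_collateral) in positions.items():
            cf, lt, overlap = params[a]
            c = 1.0 if is_collateral else 0.0
            overlap_charge = min(borrow, c * deposit) * overlap
            if borrow > c * deposit:
                # burned-out branch: the deposit no longer adds capacity
                bcu += ((borrow - c * deposit) / lt + overlap_charge) * prices[a]
            else:
                bc += cf * (deposit - borrow) * prices[a]
                bcu += overlap_charge * prices[a]
        # health is undefined with zero capacity; treat it as fully unhealthy
        return 1 - bcu / bc if bc > 0 else float('-inf')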
# Infection kinetics of Covid-19 and containment strategy

## Abstract

The devastating trail of Covid-19 is characterized by one of the highest mortality-to-infected ratios for a pandemic. Restricted therapeutics and early-stage vaccination still render social exclusion through lockdown the key containment mode. To understand the dynamics, we propose PHIRVD, a mechanistic infection propagation model that Machine Learns (Bayesian Markov Chain Monte Carlo) the evolution of six infection stages, namely healthy susceptible (H), predisposed comorbid susceptible (P), infected (I), recovered (R), herd immunized (V) and mortality (D), providing a highly reliable mortality prediction profile for 18 countries at varying stages of lockdown. Trained on data between 10 February and 29 June 2020, PHIRVD can accurately predict the mortality profile up to November 2020, including the second wave kinetics. The model also suggests the mortality-to-infection ratio as a more dynamic pandemic descriptor, substituting for the reproduction number. PHIRVD establishes the importance of early and prolonged but strategic lockdown to contain future relapse, complementing future vaccine impact.

## Introduction

Deadlier than most pandemics in the last 100 years, barring HIV and plague, Covid-19 rages on despite the imposition of movement restrictions as well as clinical testing and community health measures1,2. As of 4 August 2020, SARS-COV-2 has infected ca 18.5 million worldwide, with ca 700,000 dead. Covid-19 containment has been a major strategic issue for governments worldwide, with particular emphasis on the correct lockdown timing and span. Alarming belated infection spurts have been registered in over-populated countries like India, Brazil and Iran despite early and extensive lockdowns. While the low mortality rates exhibited by low-resourced yet densely populated Asian countries have been attributed to the relative youth of their populations3, sparsely populated Sweden depicts an alarming dead-to-infected ratio in contrast to its European neighbours4. Quarantine has been advised as the best infection control measure5,6. This has led to key questions as to the ideal start point and the absolute span of the ensuing lockdown. Major cases in support of lockdown are Vietnam and Cuba, which have claimed almost no deaths7,8, although such claims have been questioned9. In countries like Italy, the UK, the US, Sweden and Brazil, with strategic reluctance toward early lockdown, comparatively soft prohibition protocols have admittedly led to worrisome statistics. On the other hand, European countries like Germany, the Netherlands, Belgium and France, as well as non-European countries like Australia, New Zealand and Korea, which enforced early lockdowns, initially registered remarkably low infection and mortality rates10, with $$1.0<R_0<2.0$$ during lockdown, that spiked later (www.worldometers.info). Many suffered from re-infection relapse11,12 with a sudden spurt in infection13. Regions like India, Iran and New York State, with variable quarantine measures, have all seen late infection surges.
While India resorted to an early clampdown with an early withdrawal, New York State resorted to a late lockdown, but both saw similar numerical implications, a feature attributed to the inevitable movement of migrant workers14. Analyses of the SARS epidemic of 2003 showed that case isolation and contact tracing1,15, while highly effective if implemented at early stages, become ineffectual if the basic infection spread occurs before symptomatic detection16,17. This finding was revisited in Covid-19 transmission kinetics18, pointing to the importance of appropriate early (pre-symptomatic) stage strategizing. Other studies stress the importance of combining isolation19 and social distancing with widespread testing20 and contact tracing2. Initial predictive models14 used data from Wuhan and Italy20. Both efforts suffer from a lack of robustness due to inaccurate future prediction reliant on sparse data, devoid of any inherent ML training protocol to emphasize prediction rather than data fitting. The first predictive study used a Bayesian inference structure on a simplistic SIRV model21,22, using infection statistics from Germany. While a move in the right direction, it suffered from two key deficiencies: the lack of a time-evolving death rate as an independent dynamical variable and an over-reliance on infection statistics in predicting the mortality rate. Reference 20 addressed this, but it lacked the probabilistic kernel of reference 21. Another issue that has often been overlooked is the best possible containment strategy in coping with the disease. Standard approaches include social distancing23, contact tracing24, social seclusion of the comorbid from the healthy, and self-quarantine of the infected (including asymptomatic cases). The target in all of these is to block the epidemic spread network so that the infection chain can be broken25. Vaccines have led the fight against COVID-1926. Multiple vaccines are now available for public use that use different chemical pathways, e.g. mRNA replication (Pfizer27, Moderna28), viral vectors (Oxford-AstraZeneca29, Sputnik V30), antibody formation through attachment to spike proteins (Covaxin31, Sinopharm32), double-stranded DNA cloning (Janssen33), and genetic engineering of the SARS-Cov-2 spike proteins (Novavax34). The vaccine arsenal is fast being reinforced with newer additions, all targeted to mitigate the viral load as well as to provide long-term immunity. While vaccines are expected to be a major immunity booster going forward, given the expected timeframes of vaccine rollout and the perceived mutation towards newer strains of the virus (e.g. the Indian variant B.1.61735, the South African B.1.35136) that have at times restricted the efficacy of vaccines34,36, the major defence front will still rely on transmission mitigation through restricted movement, mask usage, sanitation codes and the avoidance of public gatherings, the collective impact of which can be estimated from the PHIRVD model.

## Results

### Infection kinetics of healthy and comorbid susceptible

COVID-19 infection propagation epidemiology clearly points to the need for analyzing the vastly different infection and mortality profiles of the healthy versus the comorbid susceptible groups. Our key target is to study this interactive infection propagation and then predict future mortality and infection profiles, emphasizing mortality as the key policy indicator.
The present article marries a robust Susceptible (S)-Infected (I)-Recovered (R)-Vaccinated (V) (SIRV) structure, estimating the reproduction number37, with a Machine Learning (ML) prediction kernel, using a multi-layered error filtration structure, to generate a predictive model called PHIRVD (see "Methods" section). PHIRVD delivers three major successes at an unprecedented level of accuracy: prediction of the numbers of infected and dead over the next 30 days (validated using sparse data) for each of the 18 countries considered, a comparative analysis of the impact of lockdown using multiple withdrawal dates for the 6 worst-hit countries with high ongoing infection rates, and a detailed temporal profile of future reproduction numbers that can be (and has been) verified against real data. PHIRVD also establishes the mortality-to-infection ratio as the key dynamic pandemic descriptor instead of the reproduction number.

### Mathematical model—PHIRVD

Our compartmentalised Covid-19 pandemic kinetics uses a 6-dimensional dynamical system as in Eq. (1), combining SIR and SEIR kernels38,39, schematically outlined in Fig. 1:

\begin{aligned} \frac{dH}{dt}&= -\beta _1 H I + q_{1H}R + q_{2H}V - h_{2v} H - \gamma H, \\ \frac{dP}{dt}&= -\beta _2 P I -(\gamma +\delta ) P + q_{1P}R + q_{2P}V - p_{2v} P, \\ \frac{dI}{dt}&= (\beta _1 H + \beta _2 P + \beta _3 R) I -(\gamma +\zeta ) I - w I, \\ \frac{dR}{dt}&= w I - \beta _3 R I - \gamma R - q_{1H} R - q_{1P}R, \\ \frac{dV}{dt}&= -(q_{2H}+q_{2P})V -\gamma V + h_{2v} H + p_{2v} P, \\ \frac{dD}{dt}&= \gamma (H+R+V) + (\gamma +\delta )P + (\gamma +\zeta ) I. \end{aligned} (1)

The parameters in this model, which we call PHIRVD, characterize the infection rate of healthy agents ($$\beta _1$$), the infection rate of agents with pre-existing health conditions ($$\beta _2$$), the relapse rate ($$\beta _3$$), the conversion rates of recovered to healthy susceptible ($$q_{1H}$$) and previously "immuned" to healthy susceptible ($$q_{2H}$$), the conversion rates of recovered to pre-existing susceptible ($$q_{1P}$$) and previously "immuned" to pre-existing susceptible ($$q_{2P}$$), the death rate due to non-Covid interference ($$\gamma$$), the additional death rates for agents with pre-existing conditions ($$\delta$$) and for the infected ($$\zeta$$), the recovery rate (w), and the rates at which the healthy ($$h_{2v}$$) and pre-existing susceptible ($$p_{2v}$$) groups are quarantined. Our focus being Covid-19 infection and mortality statistics, we neglect the death rate ($$\gamma =0$$) and additional death rate ($$\delta =0$$) due to all non-Covid causes. Since the death rate of the healthy infected is much lower than that of the comorbid and elderly (https://www.cdc.gov/coronavirus/2019-ncov/need-extra-precautions/older-adults.html)40,41, we add a practical constraint to our model to account for this effect, expressed as $$\beta _1<\beta _2$$. Hence, the infection rate of the H-group is taken to be a small fraction ($$\lambda$$) of that of the P-group, i.e. $$\beta _1=\beta _2 \lambda$$. The death variable D thus acts as a "sink" of the dynamical system, ensuring population conservation is built into the model ($$H+P+I+R+V+D$$ = constant).
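For concreteness, a minimal numerical sketch of Eq. (1) using scipy follows. This is not the authors' code; the parameter values and initial conditions below are placeholders, not fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (illustrative only, not the fitted values)
beta2, lam = 0.35, 0.4           # beta1 = lam * beta2 enforces beta1 < beta2
beta1, beta3 = lam * beta2, 0.01
q1H = q1P = q2H = q2P = 0.001    # recovered/immunized -> susceptible rates
h2v, p2v = 0.02, 0.02            # quarantine ("immunization") rates
gamma = delta = 0.0              # non-Covid death rates neglected, as in the text
zeta, w = 0.02, 0.1              # Covid death rate and recovery rate

def phirvd(t, y):
    """Right-hand side of the PHIRVD system, Eq. (1)."""
    H, P, I, R, V, D = y
    dH = -beta1*H*I + q1H*R + q2H*V - h2v*H - gamma*H
    dP = -beta2*P*I - (gamma + delta)*P + q1P*R + q2P*V - p2v*P
    dI = (beta1*H + beta2*P + beta3*R)*I - (gamma + zeta)*I - w*I
    dR = w*I - beta3*R*I - gamma*R - q1H*R - q1P*R
    dV = -(q2H + q2P)*V - gamma*V + h2v*H + p2v*P
    dD = gamma*(H + R + V) + (gamma + delta)*P + (gamma + zeta)*I
    return [dH, dP, dI, dR, dV, dD]

y0 = [0.79, 0.20, 1e-4, 0.0, 0.0, 0.0]   # normalized initial populations
sol = solve_ivp(phirvd, (0, 400), y0, dense_output=True)
print(sol.y[:, -1].sum())  # H+P+I+R+V+D stays constant (population conservation)
```

The six derivatives sum to zero term by term, which is exactly the "sink" property noted above.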
The PHIRVD model can easily be extended to incorporate the impact of upcoming and available vaccines. The impact points would be at the transitory phases between prolonged lockdown, characterized by low susceptible-infected coupling, and lockdown withdrawal, typically leading to a surge in the infection/mortality traffic, a case of human reaction maximizing social expression. In training our model, we find it useful to define an extra variable $$I_c(t)$$, which represents the cumulative number of those infected up to a given date. In other words, it includes not only those who are currently infected, but also those who have since recovered or died, i.e. $$\frac{dI_c}{dt}=(\beta _1 H + \beta _2 P + \beta _3 R) I.$$ Since we have considered relapse in our model, note that $$I_c(t) \ne I(t) + R(t) + D(t)$$.

### Data repositories

Identifying the infection kinetics of Covid-19 as an interactive evolution process involving six time-evolving population density variables: healthy susceptible (H), susceptible with pre-existing conditions or comorbidity (P), infected (I), recovered (R), naturally immuned (i.e. a clone for vaccinated V) and dead (D), the PHIRVD model uses statistics from the Johns Hopkins Covid-19 database42 to accurately predict the mortality and infection statistics of 18 Asian, European and American countries. The data threshold was set beyond the first 19 days of low (or no) infection, followed by data training between 10 February 2020 and 29 June 2020. Results were later cross-verified against other databases, e.g. US: https://usafacts.org; EU: https://data.europa.eu/; UK: https://coronavirus.data.gov.uk/; India: https://www.covid19india.org/. The Bayesian Markov Chain Monte Carlo (MCMC)43 infrastructure in PHIRVD trains on the repository data to probabilistically predict the 17 parameters of the infection kinetic model (see "Methods" section). Unlike previous predictive Machine Learning models14,19,20,21,22, this structure allows more dynamic adaptive control of the infection kinetic estimation, resulting in a highly accurate predictive module.
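As a hedged illustration of pulling a training series from the Johns Hopkins repository42: the file layout below (the CSSEGISandData/COVID-19 GitHub repository and its time-series CSV path) reflects that repository as of this writing and may change; it is not the authors' data pipeline.

```python
import pandas as pd

# Global cumulative deaths time series from the JHU CSSE repository
URL = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
       "csse_covid_19_data/csse_covid_19_time_series/"
       "time_series_covid19_deaths_global.csv")

df = pd.read_csv(URL)
# Aggregate provinces to country level; the first 4 columns are metadata
uk = df[df["Country/Region"] == "United Kingdom"].iloc[:, 4:].sum(axis=0)
uk.index = pd.to_datetime(uk.index)
training = uk.loc["2020-02-10":"2020-06-29"]  # the paper's training window
print(training.tail())
```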
### Mortality and infection: prediction against reality

The 18 countries or regions under study were divided into 4 infection classes, the first three based on decreasing mortality-to-infection ratio for countries past their infection peak: UK, Netherlands, Sweden, New York State (Class A); Germany, Korea, Australia, Russia, Vietnam (Class B); and Italy, Spain, Hubei (Class C). Class D comprises India, Poland, Iran, France, Portugal and Brazil, with ongoing infection regimes. We deliberately chose New York State instead of the entire United States due to its high population density and tourist/worker traffic, which are quite different from the national average. With the number of reported cases being highly dependent on the number of daily tests, not necessarily in agreement with the actual disease propagation dynamics, we observe some deviations between the simulated I(t) and the actual number of reported cases. On the other hand, D(t) is less affected by the testing rate. Since we are using mortality statistics with the same weightage as the infected data, we prioritize mortality prediction. We note that daily training of any epidemiological model will invariably achieve a better data match, as many studies have shown. However, our ML-embedded propagation kinetic model thrives on long-term predictions, as much as possible. Comparative statistics for our Class A representative, the UK, are shown in Fig. 2. The blue region marks the training zone that fixes the parameters. Based on the highest mortality-to-infection ratio in each group, the representative countries for the other 3 classes are Germany (Class B), Italy (Class C) and India (Class D). Figures 3, 4 and 5 represent the infection statistics for Class B (Germany), C (Italy) and D (India) respectively (other plots in Appendix II). The Chi-square-tested accuracy chart in Table 1 (see "Methods" section for the Chi-squared statistic used) clearly supports the accuracy claim made. On the other hand, Vietnam presents an interesting case. With a reported zero mortality rate notwithstanding a high population density, it has been repeatedly cited as an example of early quarantine success. The model tracks even such an exceptional case to a moderate level of accuracy (in Appendix II). The outsets and insets respectively outline the cumulative versus the daily infection traffic. Details for the other countries, for all 4 infection classes, are provided in Appendix II. Table 2 presents a comparative chart of the PHIRVD model predictions versus real data, separately for the numbers of infected and dead, for the countries representing the 4 classes, with data trained between 10 February and 29 June: Class A (UK), Class B (Germany), Class C (Italy) and Class D (India). Predictions are shown until 12 July. For the other countries in each individual class, with data training between 10 February and 10 May, 30 days' prediction until 9 June establishes the predictive strength of this model (see Tables S2-S5, Appendix III), error validated as shown in Table S1 (see Appendix I). Table 3 compares the second wave mortality prediction obtained from PHIRVD against real data, based on data training until 29 June 2020. The result can be substantially improved if the data are trained within a month of the resurgent wave, as in November 2020. But the reliability of prediction stretching up to 150 days beyond the last data training is unprecedented to our knowledge and affirms the robustness of the model. The expected number of secondary cases produced by each infected individual is traditionally defined as the basic reproduction number. The detailed calculation of $$R_{\text {e}}$$ is provided in the "Methods" section. Figure 6 depicts the time evolution of the basic reproduction number $$R_0$$44,45,46, which indirectly reflects the emerging infection (and fatality) rate for the 4 representative countries from infection classes A-D (see "Methods" section). The $$R_0$$ kinetics of all other countries are provided in Appendix I. Class A countries consistently show the sharpest drop in $$R_0$$ and the flattest stability period, followed by progressively slower $$R_0$$ decay and longer waiting times, often the 'gestation time', reflected by the plateau regions of the respective plots for classes B, C and D. The point of note here is that while Germany and Italy show higher levels of infection than the UK, the gestation period for the UK is much larger than both. India shows a similar trend, although the absolute numbers for India are much lower than for the other three, indicating a complicated relationship between Full Width at Half Maximum (FWHM) and gestation period.

## Discussion

Combining conventional infection kinetic modeling with a predictive Bayesian MCMC, PHIRVD quantifies the impact of lockdown as a containment tool. It estimates mortality statistics with high significance for 18 countries, accurate up to the next 30 days beyond the last date of data training.
Ideal lockdown imposition and withdrawal times have been predicted and validated, including for ongoing regimes, e.g. India. PHIRVD also predicts secondary relapse timings and establishes the mortality-to-infection ratio as the key pandemic predictive descriptor instead of the reproduction number. PHIRVD is also capable of analyzing the impact of migration, an ongoing project. Our findings clearly suggest that phased lockdown is a potent containment tool but needs to be strategically imposed, where the correct implementation and withdrawal times are paramount. Secondary infection and mortality prediction will be key to future strategic quarantine imposition and to analyzing the impact of future therapeutics. PHIRVD leads to three key outcomes. First, we present highly accurate probabilistic predictions for the numbers of infected and dead for each of a total of 18 countries, typically 3 weeks beyond the last date of (Machine Learned) data training. Our PHIRVD model depicts a high degree of agreement between model prediction and real-data validation across the range of countries considered. Second, our model can be used to identify a better strategy for lockdown imposition, to minimize fatality. The full simulation plots (in Appendix II) clearly outline how an increasing infection profile initially matches decreasing numbers of pre-existing susceptible and increasing statistics for the recovered, then slows down as the infection peak arrives, eventually tailing off into a no-infection landscape. While the qualitative trends are similar for all classes (A, B, C, D) of countries, the impact of lockdown on the first peak, and then on a second (relapse) peak, hints at the internal health versus econometrics of the countries concerned. To prove this point, we compare the infection (and mortality) propagation kinetics of 2 chosen countries for two different dates, one on the recess (UK: Fig. 7), the other with a rising infection level (India: Fig. 8). As opposed to the recent furore about school children being exposed to the Covid-19 menace as a result of early lockdown withdrawal, our result clearly shows that there is practically no difference in mortality between a withdrawal on June 1, 2020 and a later withdrawal, e.g. July 1, 2020 (although a withdrawal on May 1 would have been disastrous). The 1 June (almost equally safe) withdrawal would, of course, be favoured on economic and social grounds. The third key outcome of our analysis is the establishment of the mortality:infection ratio as the key descriptor of the pandemic, over and above the reproduction number that has conventionally been used for the purpose. The proof of this is in the accurate prediction of the secondary infection relapse time, which the reproduction number fails to predict. As can be seen from Fig. 7a,b, this relapse time period could be deferred with a late lockdown withdrawal on July 1 (as compared to June 1), although the peak mortality rates are not hugely different (ca 200 for 1 July compared to ca 400 for 1 June). Using 1 July 2020 as the UK lockdown withdrawal date, there is a clear signature of secondary relapse in the first week of September (identified as the second peak in Fig. 7). The Indian situation is clearly more challenging, though, as shown in Fig. 8. While perhaps economically unsustainable, India could benefit from a lockdown even beyond 31 July 2020.
For other nations like Iran, Portugal, France and Poland, our predictions of non-trivial secondary relapses (all in late June) match almost perfectly with data, both infected and dead. As the second wave data are now available for the UK, we simulated them using our PHIRVD model. The results shown in Fig. 9 demonstrate excellent agreement with real statistics (data trained only up to 29 June 2020), which reaffirms the strength of the model. A real point of contention amongst politicians, health professionals and medical scientists has long been the correct lockdown implementation and withdrawal times. In statistical parlance, this effectively amounts to an estimation of the FWHM, as has been estimated for Wuhan at 2.6 weeks from initial infection47. To analyze these counterclaims, we incorporate the effects of withdrawal of lockdown as a country-specific, dynamically evolving quantity. The availability of the awaited vaccines26 and, of late, the therapeutic range48,49 has provided major immunity tools in the Covid firefight. The impacts of these vaccines are most likely to act as a futuristic antibody switch, though, as is clearly evidenced by the huge second/third phase outbreaks in countries like India, Bangladesh and Russia that survived the initial onslaught well. With a growing mortality profile, sometimes attributed to newer viral strains, the impact of quarantine measures, namely what and how to choose and when to implement or withdraw, has now assumed crucial importance, for which our model can serve as a future benchmark.

## Methods

### Motivation of the PHIRVD model

PHIRVD uniquely combines a dynamically evolving infection propagation model that tracks the phenomenology of infection kinetics with a probabilistic predictive algorithm, the latter chosen as a Bayesian Markov Chain Monte Carlo (MCMC) kernel. The Bayesian MCMC is used to train on past data to estimate time-independent generic parameters from which future statistics can be predicted. The choice is guided by the strength of Bayesian MCMC in a range of dynamical modeling studies in complementary fields50,51.

### Reproduction number $$R_{\text {e}}$$ at fixed point

For $$\gamma =0, \delta =0$$, from Eq. (1) the disease free equilibrium (DFE) or fixed point is given by $$P^* =H^*\frac{h_{2v} q_{2P}}{p_{2v} q_{2H}}$$, $$I^*=0$$, $$R^*=0$$, $$V^*=H^* \frac{h_{2v}}{q_{2H}}$$. To evaluate the reproduction number $$R_{\text {e}}$$, we break the equation for $$\frac{dI}{dt}$$ into two parts $${\mathcal {F}}, {\mathcal {V}}$$, i.e.,

\begin{aligned} \frac{dI}{dt}={\mathcal {F}}-{\mathcal {V}} \end{aligned} (2)

where $${\mathcal {F}}=(\beta _1 H + \beta _2 P + \beta _3 R) I$$ and $${\mathcal {V}}=(\zeta +w) I$$. Now, $$F=\frac{\partial {\mathcal {F}}}{\partial I}|_{DFE}$$ and $$\Sigma =\frac{\partial {\mathcal {V}}}{\partial I}|_{DFE}$$. Then $$R_{\text {e}}=\frac{F}{\Sigma }=\frac{ H^* \left( \frac{\beta _{2} h_{2v} q_{2P}}{p_{2v} q_{2H}}+\beta _{1}\right) }{\zeta +w}$$.

### Lockdown dynamics

During the time period over which we trained our model, most of the countries of interest (except Sweden) were under lockdown. Therefore, we studied the effects of withdrawal/relaxation of lockdown for some countries by introducing a time-varying parameter L(t) in the model in Eq. (1), substituting $$\beta _{1,2,3}$$ with $$\beta _{1,2,3}\,L(t)$$ respectively, where $$L(t) = 1$$ for $$t \le t_0$$, and $$\alpha$$ for $$t \ge t_0+k$$. For $$t_0<t<t_0+k$$, $$L(t) = \frac{1}{k} [ \alpha (t-t_0)+(t_0+k-t) ]$$. Here $$t_0$$ marks the lockdown withdrawal time point, k is the approximate time duration during which the susceptible and infected populations mix well (e.g. within one week or one month), and $$\alpha$$ is the parameter quantifying the intensity of mixing between the susceptible and infected populations. A larger $$\alpha$$ value implies a higher mixing rate among susceptible and infected individuals. The function L(t) is such that before lockdown withdrawal it does not alter the contact probability, while after withdrawal it increases linearly from 1 to $$\alpha$$ over a time interval of k days, ensuring that the contact probability between susceptible and infected increases from a low to a high value within this time period.
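A direct transcription of L(t) in Python (the values of t0, k and alpha below are placeholders; the paper fits them per country):

```python
def lockdown_factor(t, t0=140.0, k=30.0, alpha=3.0):
    """L(t): 1 before withdrawal at t0, rising linearly to alpha over k days."""
    if t <= t0:
        return 1.0
    if t >= t0 + k:
        return alpha
    # Linear ramp: L(t) = [alpha*(t - t0) + (t0 + k - t)] / k
    return (alpha * (t - t0) + (t0 + k - t)) / k

# In the model, beta_{1,2,3} are replaced by beta_{1,2,3} * lockdown_factor(t).
```

Note that the ramp is continuous: at t = t0 it evaluates to k/k = 1, and at t = t0 + k to alpha*k/k = alpha.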
### Parameter estimation

The Bayesian MCMC data training leading to supervised learning is itself conducted in two steps using a double-filtration process. First, infection data alone are used to arrive at a preliminary set of values characterizing each country. These values are then filtered through the combined infected and mortality statistics for a second training, to sequentially converge to a preset upper limit. The training schedule is repeated multiple times to ensure accurate predictions of the training dataset. Estimation of the equilibrium reproduction number is strategically used to reduce the effective parameter space from 13 to 8 parameters, conforming with the Bayesian MCMC prediction that fluctuations in the remaining parameters do not contribute much to the infection kinetics. The model clearly separates the H and P infection classes to reflect their differential levels of infection and mortality. Another constituent is the death rate kinetics embedded in the central structure. The infection propagation model outlined in Eq. (1) is a multi-parameter model whose parameters are evaluated using predictive data modeling within the Bayesian MCMC construct. Similar structures have been selectively used in refs 21,22, albeit for single-country specific models without any explicit mortality dynamics. Over-reliance on infection statistics has often led to incorrect estimation of mortality statistics, whose accurate prediction is our first key target, an aim that is remarkably well served by our ML-embedded compartmentalised model. We present both the cumulative and daily (inset plots) statistics of the infected population over 400 days, data trained between 10 February 2020 and 29 June 2020 (140 days) and then predicted up to the next 8 weeks (shown up to 12 July 2020 in Table 1).

### The Bayesian Markov chain Monte Carlo (MCMC) algorithm

To understand how the algorithm uses the data to determine the parameters, it is useful to recall some elements of Bayesian statistics50,51. Let $$\varvec{D}=(D_1, D_2, \ldots , D_n)$$ represent the full data vector that is being used to train the algorithm. For our case, the subscripts run over both the time intervals (daily) and the data types, such as $$I_c(t_i)$$ and $$D(t_i)$$. Similarly, let $$\varvec{\Theta }=(\theta _1, \theta _2, \ldots , \theta _\alpha )$$ represent the vector of parameters. A key ingredient is the prior probability distribution (Bayesian priors) for each $$\theta _i$$.
While the absence of any knowledge of the system would call for a prior that is flat in the physically allowed region, the incorporation of such knowledge (which, in the present context, could be divined from the analysis of even part of the data for a single country in a given class) quickly gives the prior a somewhat peaked structure. In other words, one could as well start with a normally distributed prior, viz., $$\varvec{\Theta } \sim N(\varvec{\Theta _0,\sigma })$$, where the vector $$\varvec{\Theta _0}$$ represents the mean of the parameters and $$\varvec{\sigma }=(\sigma _1, \sigma _2, \ldots , \sigma _\alpha )$$ the standard deviation. As it turns out, the dependence of the final result on the prior is quite insignificant. Given a $$\varvec{\Theta }$$, it is straightforward to calculate the conditional probability $${\mathscr {P}}(\varvec{D|\Theta })$$ of obtaining a realization $$\varvec{D}$$ of the data. Using Bayes' theorem, the posterior probability for $$\varvec{\Theta }$$ given the data is expressed as

\begin{aligned} {\mathscr {P}}(\varvec{\Theta }|D)=\frac{{\mathscr {P}}(\varvec{D|\Theta }){\mathscr {P}}(\varvec{\Theta })}{{\mathscr {P}}(\varvec{D})}, \end{aligned} (3)

where $${\mathscr {P}}(\varvec{D})=\int _\Omega {\mathscr {P}}(\varvec{D}|\varvec{\Theta }) {\mathscr {P}}(\varvec{\Theta })\,d\varvec{\Theta }$$, with $$\Omega$$ denoting the whole parameter space. This immediately leads us to the likelihood ratio of two parameter vectors $$\varvec{\Theta _1}$$ and $$\varvec{\Theta _2}$$, namely

\begin{aligned} \frac{{\mathscr {P}}(\varvec{\Theta _2|D})}{{\mathscr {P}}(\varvec{\Theta _1|D})} =\frac{{\mathscr {P}}(\varvec{D|\Theta _2}){\mathscr {P}}(\varvec{\Theta _2})}{{\mathscr {P}}(\varvec{D|\Theta _1}){\mathscr {P}}(\varvec{\Theta _1})} \ . \end{aligned} (4)

We now resort to a 3-step algorithm:

1. Choose parameters (including initial conditions) through a random walk in the parameter space. The nature of the random walk is determined by the prior probability distributions for the parameters, including initial conditions.
2. Calculate the likelihood ratio for the parameters, given the data.
3. Decide whether to accept the suggested parameter set or not.

Step 1: Let $$\varvec{S_i}=(S_{i1}, S_{i2}, \ldots , S_{in})$$ be the simulated vector at the ith step for parameter values $$\varvec{\Theta _i}=(\theta _{i1}, \theta _{i2}, \ldots , \theta _{i\alpha })$$. Compared to the total population, the data $$I_c(t), D(t)$$ etc. are quasi-continuous and can be assumed to be drawn from a Normal distribution with respective standard deviations $$\varvec{\Gamma }=(\gamma _1, \gamma _2, \ldots , \gamma _n)$$ and means $$\varvec{S_i}=(S_{i1}, S_{i2}, \ldots , S_{in})$$.
Therefore, the posterior probability (or likelihood, in the case of a continuous probability density) of the parameter vector $$\varvec{\Theta _i}$$ is

\begin{aligned} {\mathscr {P}}(\varvec{\Theta _i|D})=\frac{{\mathscr {P}}(\varvec{D|\Theta _i}) {\mathscr {P}}(\varvec{\Theta _i})}{{\mathscr {P}}(\varvec{D})} = (2\pi )^{-(n+\alpha )/2} \left[ \prod _{j=1}^n\gamma _j\prod _{\beta =1}^\alpha \sigma _\beta \, {\mathscr {P}}(\varvec{D})\right] ^{-1} \exp \left( -\frac{1}{2}\sum _{j=1}^n\left( \frac{S_{ij}-D_j}{\gamma _j}\right) ^2\right) . \end{aligned} (5)

Next, we execute a random walk in $$\varvec{\Theta }$$-space with distribution $$N(\varvec{\Theta _i,\sigma })$$ to find $$\varvec{\Theta _{i+1}}$$, and again calculate the posterior likelihood, with the simulated data vector $$\varvec{S_{i+1}}$$ corresponding to the parameter vector $$\varvec{\Theta _{i+1}}$$, as

\begin{aligned} {\mathscr {P}}(\varvec{\Theta _{i+1}|D}) = \frac{{\mathscr {P}}(\varvec{D|\Theta _{i+1}}) {\mathscr {P}}(\varvec{\Theta _{i+1}})}{{\mathscr {P}}(\varvec{D})} = (2\pi )^{-(n+\alpha )/2} \left[ \prod _{j=1}^n\gamma _j\prod _{\beta =1}^\alpha \sigma _\beta \, {\mathscr {P}}(\varvec{D})\right] ^{-1} \exp \left( -\frac{1}{2}\sum _{j=1}^n\left( \frac{S_{(i+1)j}-D_j}{\gamma _j}\right) ^2 -\frac{1}{2}\sum _{\beta =1}^\alpha \left( \frac{\theta _{(i+1)\beta }-\theta _{i\beta }}{\sigma _\beta }\right) ^2 \right) \ . \end{aligned} (6)

Step 2: The likelihood ratio is now calculated as $${\mathscr {P}}(\varvec{\Theta _{i+1}|D}) / {\mathscr {P}}(\varvec{\Theta _i|D})$$.

Step 3: Next, we generate a uniform random number $$r \sim U[0,1]$$. If $$r < {\mathscr {P}}(\varvec{\Theta _{i+1}|D})/{\mathscr {P}}(\varvec{\Theta _i|D})$$, we accept $$\varvec{\Theta _{i+1}}$$; otherwise we go back to Step 1 and repeat the procedure. We have used the cumulative infected and dead data as the vector $$\varvec{D}$$, and we normalize (as described above) the data vector $$\varvec{D}$$, as well as the simulated vector $$\varvec{S_i}$$, at every step before calculating the likelihood ratio in Step 2 above. We have used $$\sigma = (\varvec{\sigma _P}, \varvec{\sigma _{IC}})$$, where $$\varvec{\sigma _P} = (0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01)$$ for the parameters part and $$\varvec{\sigma _{IC}} = (0.1, 0.1, 0.001, 0.0, 0.0, 0.0)$$ for the initial data part, and $$\varvec{\Gamma }=(\gamma _1, \gamma _2, \ldots , \gamma _n)$$, where $$\gamma _j = (0.1-0.05)(j-1)/(n-1)+0.05$$. The initial days (where the numbers are low) in the data are given relatively smaller weightage than the later days for fitting, as the noise level is initially higher than the signal.
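Steps 1-3 describe a standard random-walk Metropolis scheme. A minimal sketch follows (not the authors' code; log_posterior is a stand-in for the likelihood built from Eqs. (5)-(6), and the comparison is done in log space for numerical stability):

```python
import numpy as np

def metropolis(log_posterior, theta0, sigma, n_steps=10000, rng=None):
    """Random-walk Metropolis sampler (schematic version of Steps 1-3).

    log_posterior: maps a parameter vector to log P(theta|D);
    sigma: per-parameter random-walk standard deviations.
    """
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    chain = [theta.copy()]
    for _ in range(n_steps):
        # Step 1: propose via a Gaussian random walk in parameter space
        proposal = theta + rng.normal(0.0, sigma)
        logp_new = log_posterior(proposal)
        # Steps 2-3: accept with probability min(1, likelihood ratio)
        if np.log(rng.uniform()) < logp_new - logp:
            theta, logp = proposal, logp_new
        chain.append(theta.copy())
    return np.array(chain)
```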
### Estimation of the reproduction number kinetics

Understandably, the basic reproduction number $$R_0$$ is no longer a constant. Defining $$R_0(t)$$ as the average number of secondary infections from a primary case at a given epoch t, and similarly $$I_d(t)$$ as the number of daily new cases, we have

\begin{aligned} I_d(t) = \int _0^{\infty } R_0(t) \, I_d(t-\tau ) \, g(\tau ) \, d\tau , \end{aligned} (7)

where $$g(\tau )$$ is the probability density function of the generation time $$\tau$$, defined as the time required for a new secondary infection to be generated from a primary infection. In other words, $$\tau$$ is the time interval between the onset of a primary case and the onset of a secondary case generated from this primary case. Since the mean generation time is reported37 to be approximately 6.5 days, we assume $$g(\tau )$$ has a Gamma distribution with $$g(\tau ) = \mathrm {Gamma}(6.5, 0.62)$$. We represent $$R_0(t)$$ as a function of time as

\begin{aligned} R_0(t) = \frac{I_d(t)}{\int _0^{\infty } I_d(t-\tau ) \, g(\tau ) \, d\tau }. \end{aligned} (8)

We approximate the denominator of Eq. (8) directly from our simulated data by a discrete sum, and evaluate $$R_0$$ on the nth day as

\begin{aligned} R_0(n) = \frac{I_d(t)}{\int _0^{\infty } I_d(t-\tau ) \, g(\tau ) \, d\tau } \approx \frac{I_d(n)}{\displaystyle \sum \nolimits _{\tau =0}^{n-1}I_d(n-\tau ) \, g(\tau )}. \end{aligned} (9)
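Eq. (9) as code, a hedged sketch: the text writes Gamma(6.5, 0.62) without saying whether the second argument is a scale or a rate, so treating it as scipy's scale parameter below is an assumption, as is discretizing g onto integer days.

```python
import numpy as np
from scipy.stats import gamma

# Generation-time distribution from the text: Gamma(6.5, 0.62).
# Treating 0.62 as the scale parameter is an assumption.
g = gamma(a=6.5, scale=0.62)

def r0_series(daily_cases):
    """Discrete Eq. (9): R0(n) = I_d(n) / sum_{tau=0}^{n-1} I_d(n-tau) g(tau)."""
    daily_cases = np.asarray(daily_cases, dtype=float)
    n_days = len(daily_cases)
    # Probability mass of the generation time on integer days 0, 1, 2, ...
    w = np.array([g.cdf(tau + 1) - g.cdf(tau) for tau in range(n_days)])
    r0 = np.full(n_days, np.nan)
    for n in range(1, n_days):
        # daily_cases[n:0:-1] lists I_d(n), I_d(n-1), ..., I_d(1)
        denom = np.dot(daily_cases[n:0:-1], w[:n])
        if denom > 0:
            r0[n] = daily_cases[n] / denom
    return r0
```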
### Statistical error estimation and p-values

Using the Chi-square statistic $$\chi ^{2} \equiv \sum \nolimits _{i=1}^n \left( \frac{D_{i}-S_{i}}{\epsilon S_{i}+1}\right) ^{2}$$ ($$0<\epsilon <1$$), where $$D_i$$ are the observed data and $$S_i$$ the simulated data for the ith day, we quantify the accuracy of our model fitting against the real data. Understandably, the data for daily new infections and daily new deaths are contaminated by noise more severely than the corresponding cumulative data. Hence, a Chi-square test applied to cumulative data will always give a high p-value. However, to test the power of our predictive machine learning algorithm, we calculated the p-values on the daily new data of deaths and infected. Assuming the real data are drawn from a normal distribution with mean equal to the simulated data and standard deviation equal to some fraction of the simulated data, we derive our Chi-square statistic. Although the real data of infected and dead are always positive, as the infection increases this assumption is very well valid, except for a very small time interval at the start of the infection in a population.

## Data availability

Data from the Johns Hopkins repository (https://github.com/CSSEGISandData/Covid-19) were used, together with country-specific repositories, e.g. US: https://usafacts.org; EU: https://data.europa.eu/; UK: https://coronavirus.data.gov.uk/; India: https://www.covid19india.org/. All the epidemiological information we used is documented in the Extended Data and Supplementary Tables. The codes and relevant files are made available through the Aston Data Repository.

## References

1. Davies, N. G., Kucharski, A. J., Eggo, R. M., Gimma, A. & Edmunds, W. J. Effects of non-pharmaceutical interventions on COVID-19 cases, deaths, and demand for hospital services in the UK: A modelling study. The Lancet Public Health https://doi.org/10.1016/S2468-2667(20)30133-X (2020).
2. Gatto, M. et al. Spread and dynamics of the COVID-19 epidemic in Italy: Effects of emergency containment measures. PNAS 117(19), 10484–10491 (2020).
3. Koff, W. C. & Williams, M. A. Covid-19 and immunity in aging populations—A new research agenda. NEJM https://doi.org/10.1056/NEJMp2006761 (2020).
4. Giesecke, J. The invisible pandemic. The Lancet 395(10238), E98. https://doi.org/10.1016/S0140-6736(20)31035-7 (2020).
5. Moghadas, M. S. et al. The implications of silent transmission for the control of COVID-19 outbreaks. PNAS 117(30), 17513–17515 (2020).
6. Funk, S. et al. The impact of control strategies and behavioural changes on the elimination of Ebola from Lofa County, Liberia. Philos. Trans. R. Soc. Lond. B Biol. Sci. 372, 20160302 (2017).
7. Nguyen, T. A., Nguyen, Q. C., Le Kim, A. T., Nguyen, H. N. & Nguyen, T. T. H. Modelling the impact of control measures against the Covid-19 pandemic in Vietnam. BMJ https://doi.org/10.1101/2020.04.24.20078030 (2020).
8. Li, R. et al. Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV-2). Science 368(6490), 489–493 (2020).
9. Barton, C. M. et al. Call for transparency of Covid-19 models. Science 368(6490), 482–483 (2020).
10. European Centre for Disease Prevention and Control. https://www.ecdc.europa.eu/en/cases-2019-ncov-eueea.
11. Ota, M. Will we see protection or reinfection in COVID-19?. Nat. Rev. Immunol. 20, 351 (2020).
12. Chen, D. et al. Recurrence of positive SARS-CoV-2 RNA in COVID-19: A case report. Int. J. Inf. Dis. 93, 297–299 (2020).
13. Kissler, S. M., Tedijanto, C., Goldstein, E., Grad, Y. H. & Lipsitch, M. Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period. Science 368, 860–868 (2020).
14. Kucharski, A. J. et al. Early dynamics of transmission and control of Covid-19: A mathematical modelling study. Lancet Infect. Dis. 20, 553–58 (2020).
15. Glasser, J. W., Hupert, N., McCauley, M. M. & Hatchett, R. Modeling and public health emergency responses: Lessons from SARS. Epidemics 3, 32–37 (2011).
16. Fraser, C., Riley, S., Anderson, R. M. & Ferguson, N. M. Factors that make an infectious disease outbreak controllable. Proc. Natl. Acad. Sci. USA 101, 6146–51 (2004).
17. Peak, C. M., Childs, L. M., Grad, Y. H. & Buckee, C. O. Comparing nonpharmaceutical interventions for containing emerging epidemics. Proc. Natl. Acad. Sci. USA 114, 4023–28 (2017).
18. He, X. et al. Temporal dynamics in viral shedding and transmissibility of Covid-19. Nat. Med. 26, 672–675 (2020).
19. Hellewell, J. et al. Feasibility of controlling Covid-19 outbreaks by isolation of cases and contacts. Lancet Glob. Health 8, e488–e496 (2020).
20. Giordano, G. et al. Modelling the Covid-19 epidemic and implementation of population-wide interventions in Italy. Nat. Med. Lett. https://doi.org/10.1038/s41591-020-0883-7 (2020).
21. Dehning, J. et al. Inferring change points in the spread of Covid-19 reveals the effectiveness of interventions. Science https://doi.org/10.1126/science.abb9789 (2020).
22. Jo, H., Son, H. & Jung, S. Y. Analysis of COVID-19 spread in South Korea using the SIR model with time-dependent parameters and deep learning. BMJ https://doi.org/10.1101/2020.04.13.20063412 (2020).
23. Lewnard, A. J. & Lo, N. C. Scientific and ethical basis for social-distancing interventions against COVID-19. Lancet Infect. Dis. https://doi.org/10.1016/S1473-3099(20)30190-0 (2020).
24. O'Hallahan, J. et al. From secondary prevention to primary prevention: A unique strategy that gives hope to a country ravaged by meningococcal disease. Vaccine https://doi.org/10.1016/j.vaccine.2005.01.061 (2005).
25. Matamalas, J. T., Arenas, A. & Gómez, S. Effective approach to epidemic containment using link equations in complex networks. Sci. Adv. 4(12), eaau4212. https://doi.org/10.1126/sciadv.aau4212.
26.
27. Sahin, U. et al. COVID-19 vaccine BNT162b1 elicits human antibody and TH1 T cell responses. Nature 586, 594–599 (2020).
28. Corbett, K. S. et al. SARS-CoV-2 mRNA vaccine design enabled by prototype pathogen preparedness. Nature 586, 567–571 (2020).
29. Mahase, E. BMJ 372. https://doi.org/10.1136/bmj.n86 (2021).
30. Baraniuk, C. BMJ 372. https://doi.org/10.1136/bmj.n743 (2021).
31.
32. Zhang, Y. et al.
Safety, tolerability, and immunogenicity of an inactivated SARS-CoV-2 vaccine in healthy adults aged 18–59 years: A randomised, double-blind, placebo-controlled, phase 1/2 clinical trial. Lancet Infect. Dis. 21, 181 (2021).
33. Mercado, N. B. et al. Single-shot Ad26 vaccine protects against SARS-CoV-2 in rhesus macaques. Nature 586, 583–588 (2020).
34. Callaway, E. & Mallapaty, S. Novavax offers first evidence that COVID vaccines protect people against variants. Nature 590, 17. https://doi.org/10.1038/d41586-021-00268-9.
35. Culbertson, A. COVID-19: Does the Indian variant make vaccines less effective and how concerned should we be? https://news.sky.com/story/is-the-indian-covid-variant-more-infectious-and-should-the-uk-be-concerned-12280387.
36. Cohen, J. South Africa suspends use of AstraZeneca's COVID-19 vaccine after it fails to clearly stop virus variant. Science https://doi.org/10.1126/science.abg9559.
37. Aliou, M. A. & Baldé, T. Fitting SIR model to COVID-19 pandemic data and comparative forecasting with machine learning. BMJ https://doi.org/10.1101/2020.04.26.20081042 (2020).
38. Prem, K. et al. The effect of control strategies to reduce social mixing on outcomes of the Covid-19 epidemic in Wuhan, China: A modelling study. Lancet Public Health 5(5), E261–E270 (2020).
39. Grela, E., Stich, M. & Chattopadhyay, A. K. Epidemiological impact of waning immunization on a vaccinated population. Eur. Phys. J. B 91, 267 (2018).
40. Zhou, P. et al. A pneumonia outbreak associated with a new coronavirus of probable bat origin. Nature https://doi.org/10.1038/s41586-020-2012-7 (2020).
41. Hamming, I., Timens, W., Bulthuis, M. L. C., Lely, A. T., Navis, G. J. V. & Goor, H. V. Tissue distribution of ACE2 protein, the functional receptor for SARS coronavirus. A first step in understanding SARS pathogenesis. J. Pathol. https://doi.org/10.1002/path.1570 (2004).
42. Johns Hopkins Covid-19 repository. https://github.com/CSSEGISandData/Covid-19.
43. Endo, A., Leeuwen, E. V. & Baguelin, M. Introduction to particle Markov-chain Monte Carlo for disease dynamics modellers. Epidemics 29, 100363 (2019).
44. Seth, F. et al. Estimating the number of infections and the impact of non-pharmaceutical interventions on COVID-19 in 11 European countries. Imp. Coll. Lond. https://doi.org/10.25561/77731 (2020).
45. Nishiura, H. Correcting the actual reproduction number: A simple method to estimate R0 from early epidemic growth data. Int. J. Environ. Res. Public Health 7(1), 291–302 (2010).
46. Cori, A., Ferguson, N. M., Fraser, C. & Cauchemez, S. A new framework and software to estimate time-varying reproduction numbers during epidemics. Am. J. Epidemiol. 178(9), 1505–1512 (2013).
47. Tomie, T. Understanding the present status and forecasting of COVID-19 in Wuhan. https://doi.org/10.1101/2020.02.13.20022251.
48.
49.
50. Gelman, A. et al. Bayesian Data Analysis (CRC Press, 2013).
51. Ramsay, J. & Hooker, G. Dynamic Data Analysis (Springer, 2017).

## Acknowledgements

AKC acknowledges Darren Flower for his comments and advice on the manuscript. The authors acknowledge VAXFARM Life Sciences for insightful discussions on vaccinology and therapeutics.

## Author information

### Contributions

A.K.C. and D.C. designed the core model, sequentially modified by S.K.N. S.K.N. led the MCMC computation and model simulation, while A.K.C. and B.K. led the analytical sections. D.C. and G.G., together with S.K.N.
and B.K., were in charge of comparative statistical error estimation. All authors wrote and approved the manuscript. All authors contributed equally to the final output.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Reprints and Permissions

Chattopadhyay, A.K., Choudhury, D., Ghosh, G. et al. Infection kinetics of Covid-19 and containment strategy. Sci Rep 11, 11606 (2021). https://doi.org/10.1038/s41598-021-90698-2
## Six Minutes Off

Let me return, reindeer-like, to my problem, pretty well divorced from the movie at this point, of the stranded Arthur Christmas and Grand-Santa, stuck to wherever they happen to be on the surface of the Earth, going around the Earth's axis of rotation every 86,164 seconds, while their reindeer and sleigh carry on orbiting the planet's center once every $\sqrt{2}$ hours. That's just a touch more than every 5,091 seconds. This means, sadly, that the reindeer will never be right above Arthur again, or else the whole system of rational and irrational numbers is a shambles. Still, they might come close. After all, one day after being stranded, Arthur and Grand-Santa will be right back to the position where they started, and the reindeer will be just finishing up their seventeenth loop around the Earth. To be more nearly exact, after 86,164 seconds the reindeer will have finished just about 16.924 laps around the planet. If Arthur and Grand-Santa just hold out for another six and a half minutes (very nearly), the reindeer will be back to their line of latitude, and they'll just be … well, how far away from that spot depends on just where they are. (A quick numerical check of these figures appears after these entries.) Since this is my problem, I'm going to drop them just a touch north of 30 degrees north latitude, because that means they'll be travelling a neat 400 meters per second due to the Earth's rotation and I certainly need some nice numbers here. Any nice number. I'm putting up with a day of 86,164 seconds, for crying out loud.

## Arthur Christmas and the Least Common Multiple

I left Arthur Christmas and Grand-Santa in a hypothetical puzzle, inspired by the movie, with them stranded on a tiny island while their team of flying reindeer and sleigh carried on in a straight line without them. I am assuming, for the sake of an interesting problem, that this means the reindeer are carrying on along the Great Circle route, favored by airplanes and satellites, and that the reindeer are in an orbit more like a satellite's than an ordinary reindeer's — that is, they keep to a circle in a plane which isn't rotating while the Earth does, since otherwise Arthur and Grand-Santa have only to wait for the reindeer to finish one lap around the planet and somehow get up to flying altitude to be picked up. If the reindeer aren't rotating with the Earth, then, when the reindeer finish one circuit our heroes are going to be … well, maybe east, maybe west, of the reindeer; the problem is, they're going to be away.

## Returning to Arthur Christmas

As promised, since I've got the chance, I want to return to the question of the reindeer behavior as shown in the Aardman movie Arthur Christmas, and what would ultimately happen to them if the reindeer carry on as Grand-Santa claims they will. (Again, this does require spoiling a plot point of the film and so I tuck the rest behind a cut.)

## Could "Arthur Christmas" Happen In Real Life?

If you haven't seen the Aardman Animation movie Arthur Christmas, first, shame on you, as it's quite fun. But also you may wish to think carefully before reading this entry, and a few I project to follow, as it takes one plot point from the film which I think has some interesting mathematical implications, reaching ultimately to the fate of the universe, if I can get a good running start. But I can't address the question without spoiling a suspense hook, so please do consider that. And watch the film; it's a grand one about the Santa family.
The premise — without spoiling more than the commercials did — starts with Arthur, son of the current Santa, and Grand-Santa, father of the current fellow, and a linguistic construct which perfectly fills a niche I hadn’t realized was previously vacant, going off on their own to deliver a gift accidentally not delivered to one kid. To do this they take the old sleigh, as pulled by the reindeer, and they’re off over the waters when something happens and there I cut for spoilers.
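As promised in "Six Minutes Off" above, a quick numerical check of the lap count and the extra wait (a minimal sketch; the $\sqrt{2}$-hour orbital period is the puzzle's premise, not a physical derivation):

```python
import math

DAY = 86_164                    # sidereal day, seconds
ORBIT = math.sqrt(2) * 3600     # reindeer orbital period, ~5,091.2 s

laps_per_day = DAY / ORBIT      # laps completed after one full rotation
extra_wait = (math.ceil(laps_per_day) - laps_per_day) * ORBIT

print(f"laps per day: {laps_per_day:.3f}")        # ~16.924
print(f"extra wait:   {extra_wait / 60:.1f} min") # ~6.4 minutes
```

This reproduces the 16.924 laps and the "six and a half minutes (very nearly)" wait for the seventeenth lap to finish.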
# Atom¶

class Atom(atnum=0, symbol=None, coords=None, unit='angstrom', bonds=None, mol=None, **other)[source]

A class representing a single atom in three dimensional space. An instance of this class has the following attributes:

• atnum – atomic number (zero for “dummy atoms”)
• coords – tuple of length 3 storing spatial coordinates
• bonds – list of bonds (see Bond) this atom is a part of
• mol – Molecule this atom belongs to
• properties – Settings instance storing all other information about this atom (initially it is populated with **other)

The above attributes can be accessed either directly or using one of the following properties:

• x, y, z – allow you to read or modify each coordinate separately
• symbol – allows you to read or write the atomic symbol directly. The atomic symbol is not stored as an attribute; instead, the atomic number (atnum) indicates the type of atom. In fact, symbol is just a wrapper around atnum that uses PeriodicTable as a translator:

>>> a = Atom(atnum=8)
>>> print(a.symbol)
O
>>> a.symbol = 'Ca'
>>> print(a.atnum)
20

• mass – atomic mass, obtained from PeriodicTable, read only
• radius – atomic radius, obtained from PeriodicTable, read only
• connectors – number of connectors, obtained from PeriodicTable, read only

Note
When creating a new atom, its type can be chosen either by setting an atomic number or a symbol (atnum and symbol constructor arguments). The symbol argument takes precedence – if it is supplied, the atnum argument is ignored.

Values stored in the coords tuple do not necessarily have to be numeric; you can also store any string there. This might come in handy for programs that allow parametrization of coordinates in the input file (to enforce some geometry constraints, for example):

>>> a = Atom(symbol='C', coords=(1,2,3))
>>> print(a)
C 1.00000 2.00000 3.00000
>>> a.y = 'param1'
>>> print(a)
C 1.00000 param1 3.00000

However, non-numerical coordinates cannot be used together with some methods (for example distance_to() or translate()). An attempt to do so raises an exception. Internally, atomic coordinates are always expressed in angstroms. Most of the methods that read or modify atomic coordinates accept a keyword argument unit, allowing you to choose the unit in which results and/or arguments are expressed (see Units for details). Throughout the entire code angstrom is the default length unit. If you don't specify the unit parameter anywhere in your script, all the automatic unit handling described above boils down to an occasional multiplication/division by 1.0.

__init__(atnum=0, symbol=None, coords=None, unit='angstrom', bonds=None, mol=None, **other)[source]
Initialize self. See help(type(self)) for accurate signature.

str(symbol=True, suffix='', suffix_dict={}, unit='angstrom', space=14, decimal=6)[source]
Return a string representation of this atom. The returned string is a single line (no newline characters) that always contains atomic coordinates (and maybe more). Each atomic coordinate is printed using space characters, with decimal characters reserved for decimal digits. Coordinate values are expressed in unit. If symbol is True, the atomic symbol is added at the beginning of the line. If symbol is a string, this exact string is printed there. suffix is an arbitrary string that is appended at the end of the returned line. It can contain identifiers in curly brackets (like for example f={fragment}) that will be replaced by the values of the corresponding keys from the suffix_dict dictionary. See Format String Syntax for details.
Example:

>>> a = Atom(atnum=6, coords=(1,1.5,2))
>>> print(a.str())
C 1.000000 1.500000 2.000000
>>> print(a.str(unit='bohr'))
C 1.889726 2.834589 3.779452
>>> print(a.str(symbol=False))
1.000000 1.500000 2.000000
>>> print(a.str(symbol='C2.13'))
C2.13 1.000000 1.500000 2.000000
>>> print(a.str(suffix='protein1'))
C 1.000000 1.500000 2.000000 protein1
>>> a.properties.info = 'membrane'
>>> print(a.str(suffix='subsystem={info}', suffix_dict=a.properties))
C 1.000000 1.500000 2.000000 subsystem=membrane

__str__()[source]
Return a string representation of this atom. Simplified version of str() to work as a magic method.

__iter__()[source]
Iterating through an atom yields its coordinates. Thanks to this, instances of Atom can be passed to any method requiring a point or a vector in 3D space as an argument.

translate(vector, unit='angstrom')[source]
Move this atom in space by vector, expressed in unit. vector should be an iterable container of length 3 (usually a tuple, list or numpy array). unit describes the unit of the values stored in vector. This method requires all atomic coordinates to be numerical values; a TypeError is raised otherwise.

move_to(point, unit='angstrom')[source]
Move this atom to a given point in space, expressed in unit. point should be an iterable container of length 3 (for example: tuple, Atom, list, numpy array). unit describes the unit of the values stored in point. This method requires all atomic coordinates to be numerical values; a TypeError is raised otherwise.

distance_to(point, unit='angstrom', result_unit='angstrom')[source]
Measure the distance between this atom and point. point should be an iterable container of length 3 (for example: tuple, Atom, list, numpy array). unit describes the unit of the values stored in point. The returned value is expressed in result_unit. This method requires all atomic coordinates to be numerical values; a TypeError is raised otherwise.

vector_to(point, unit='angstrom', result_unit='angstrom')[source]
Calculate a vector from this atom to point. point should be an iterable container of length 3 (for example: tuple, Atom, list, numpy array). unit describes the unit of the values stored in point. The returned value is expressed in result_unit. This method requires all atomic coordinates to be numerical values; a TypeError is raised otherwise.

angle(point1, point2, point1unit='angstrom', point2unit='angstrom', result_unit='radian')[source]
Calculate the angle between the vectors pointing from this atom to point1 and point2. point1 and point2 should be iterable containers of length 3 (for example: tuple, Atom, list, numpy array). The values stored in them are expressed in, respectively, point1unit and point2unit. The returned value is expressed in result_unit. This method requires all atomic coordinates to be numerical values; a TypeError is raised otherwise.

rotate(matrix)[source]
Rotate this atom according to a rotation matrix. matrix should be a container with 9 numerical values. It can be a list (tuple, numpy array etc.) listing matrix elements row-wise, either flat ([1,2,3,4,5,6,7,8,9]) or in two-level fashion ([[1,2,3],[4,5,6],[7,8,9]]).

Note
This method does not check if matrix is a proper rotation matrix.

neighbors()[source]
Return a list of neighbors of this atom within the molecule. The list follows the same order as the bonds attribute.

# Bond¶

class Bond(atom1=None, atom2=None, order=1, mol=None, **other)[source]

A class representing a bond between two atoms.
An instance of this class has the following attributes:

• atom1 and atom2 – two instances of Atom that form this bond
• order – order of the bond. It is either an integer number or the floating point value stored in Bond.AR, indicating an aromatic bond
• mol – Molecule this bond belongs to
• properties – Settings instance storing all other information about this bond (initially it is populated with **other)

Note
A newly created bond is not added to atom1.bonds or atom2.bonds. Storing information about a Bond in an Atom is relevant only in the context of the whole Molecule, so this information is updated by add_bond().

__init__(atom1=None, atom2=None, order=1, mol=None, **other)[source]
Initialize self. See help(type(self)) for accurate signature.

__str__()[source]
Return a string representation of this bond.

__iter__()[source]
Iterate over the bonded atoms (atom1 first, then atom2).

is_aromatic()[source]
Check if this bond is aromatic.

length(unit='angstrom')[source]
Return the bond length, expressed in unit.

as_vector(start=None, unit='angstrom')[source]
Return a vector between the two atoms that form this bond. start can be used to indicate which atom should be the beginning of that vector. If not specified, self.atom1 is used. The returned value is a tuple of length 3, expressed in unit.

other_end(atom)[source]
Return the atom on the other end of this bond with respect to atom. atom has to be one of the atoms forming this bond; otherwise an exception is raised.

resize(moving_atom, length, unit='angstrom')[source]
Change the length of this bond to length, expressed in unit, by moving moving_atom. moving_atom should be one of the atoms that form this bond. This atom is moved along the bond axis in such a way that the new bond length equals length. If this bond is a part of a Molecule, the whole part connected to moving_atom is moved.

Note
Calling this method on a bond that forms a ring within a molecule raises a MoleculeError.

rotate(moving_atom, angle, unit='radian')[source]
Rotate the part of the molecule containing moving_atom along the axis defined by this bond by an angle expressed in unit. Calling this method makes sense only if this bond is a part of a Molecule. moving_atom should be one of the atoms that form this bond; it indicates which part of the molecule is rotated. A positive value of angle denotes counterclockwise rotation (when looking along the bond, from the stationary part of the molecule).

Note
Calling this method on a bond that forms a ring raises a MoleculeError.
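To tie the two classes together, a short usage sketch built only from the methods documented above. The scm.plams import path is an assumption and may differ between PLAMS releases; whether resize works on a bond outside a Molecule follows from the docs but is not spelled out there.

```python
from scm.plams import Atom, Bond  # import path may vary with PLAMS packaging

# Two carbon atoms 1.54 angstrom apart (a typical C-C single bond)
a1 = Atom(symbol='C', coords=(0.0, 0.0, 0.0))
a2 = Atom(symbol='C', coords=(1.54, 0.0, 0.0))

print(a1.distance_to(a2))                      # 1.54 (angstrom is the default)
print(a1.distance_to(a2, result_unit='bohr'))  # same distance in bohr

b = Bond(a1, a2)              # order=1 by default
print(b.length())             # 1.54
print(b.other_end(a1) is a2)  # True

# Stretch the bond to 1.60 angstrom by moving a2 along the bond axis
b.resize(a2, 1.60)
print(b.length())             # 1.60
```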