# A jar contains only red, yellow, and orange marbles. If there are 3 re

16 Oct 2010, 05:28

A jar contains only red, yellow, and orange marbles. If there are 3 red, 5 yellow, and 4 orange marbles, and 3 marbles are chosen from the jar at random without replacing any of them, what is the probability that 2 yellow, 1 red, and no orange marbles will be chosen?

A. 1/60
B. 1/45
C. 2/45
D. 3/22
E. 5/22

Re: A jar contains only red, yellow, and orange marbles. If there are 3 re [#permalink] 16 Oct 2010, 06:22

In total there are 3 different orderings in which the marbles can be drawn (Y = yellow, R = red): YYR, RYY, YRY. Calculate the probability of each ordering and sum them to get the answer.

YYR = $$\frac{5}{12}*\frac{4}{11}*\frac{3}{10} = \frac{5*4*3}{12*11*10}$$

RYY = $$\frac{3}{12}*\frac{5}{11}*\frac{4}{10} = \frac{3*5*4}{12*11*10}$$

YRY = $$\frac{5}{12}*\frac{3}{11}*\frac{4}{10} = \frac{5*3*4}{12*11*10}$$

I intentionally don't multiply the fractions out, because they are easier to reduce in this form. Summing them up:

$$\frac{3*(5*4*3)}{12*11*10} = \frac{3}{22}$$

Answer: D
Re: A jar contains only red, yellow, and orange marbles. If there are 3 re [#permalink] 28 Sep 2014, 01:44

5C2*3C1 = 10*3 = 30
12C3 = 220
30/220 = 3/22

Re: A jar contains only red, yellow, and orange marbles. If there are 3 re [#permalink] 29 Nov 2014, 21:29

For me, combinations work better.
Ways of selecting 2 yellow marbles out of 5: 5C2
Ways of selecting 1 red marble out of 3: 3C1
Total number of ways of selecting 3 marbles out of 12: 12C3
Probability: 5C2*3C1/12C3 = 3/22

Re: A jar contains only red, yellow, and orange marbles. If there are 3 re [#permalink] 10 Jan 2015, 04:32

I started by writing down the three probabilities, without calculating them: P(YYR), P(YRY), P(RYY). I calculated the first one and got 1/22. Looking at the answer choices at this point, I saw answer D: 3/22. That helped me realise that the probability is the same for each of the three possible orderings, so the answer should be (1/22)*3, which indeed is 3/22.
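A quick way to sanity-check the two approaches above is a few lines of Python. This is a sketch I'm adding for illustration, not part of the original thread; it computes the counting answer with math.comb and confirms it by brute-force enumeration of all 220 possible draws.

```python
from fractions import Fraction
from itertools import combinations
from math import comb

# Counting argument: choose 2 of the 5 yellows and 1 of the 3 reds,
# out of all ways to choose 3 of the 12 marbles.
p_counting = Fraction(comb(5, 2) * comb(3, 1), comb(12, 3))

# Brute force: enumerate every 3-marble draw and count the favourable ones.
marbles = ["R"] * 3 + ["Y"] * 5 + ["O"] * 4
draws = list(combinations(range(12), 3))
favourable = sum(
    1
    for draw in draws
    if sorted(marbles[i] for i in draw) == ["R", "Y", "Y"]
)
p_brute = Fraction(favourable, len(draws))

print(p_counting, p_brute)  # both print 3/22
```

Both values come out as 3/22, matching answer D.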
# Correspondence between countable ordinals and monotone functions

I have a curiosity: is it true that for every countable ordinal $\lambda$ there is a monotone function $F : \mathcal{P}(\mathbb{N}) \to \mathcal{P}(\mathbb{N})$ whose closure ordinal is $\lambda$? I've been trying to prove this on my own but no luck. I'm starting to think it might be false.

Here $\mathcal{P}(X)$ means the power set of the set $X$; "monotone" and "closure ordinal" have their usual definitions.

- The power set is a common concept taught at the undergraduate level, but what is a "closure ordinal"? – Henning Makholm Mar 21 '13 at 0:09
- @Henning: See the first two paragraphs of Section 2 of this paper for the definitions that I think Danx is using. – Brian M. Scott Mar 21 '13 at 0:25
- @Brian: Thanks, that was clearer than what I could google up immediately. – Henning Makholm Mar 21 '13 at 0:29

If "closure ordinal" means what Brian M. Scott suggested in his comment, you should be able to take $\Gamma:\mathcal P(\lambda)\to\mathcal P(\lambda)$ defined by
$$\Gamma(A)=\begin{cases} A\cup\{\min(\lambda\setminus A)\} & \text{when }A\ne \lambda \\ A & \text{when } A =\lambda \end{cases}$$
Then, in the notation of Brian's reference, $\Gamma^\alpha = \alpha$ for all $\alpha\le\lambda$, and the closure ordinal of $\Gamma$ is exactly $\lambda$. Since $\lambda$ is assumed countable, you can conjugate $\Gamma$ with the assumed bijection between $\lambda$ and $\mathbb N$ to get an $F:\mathcal P(\mathbb N)\to\mathcal P(\mathbb N)$ with the same structure as $\Gamma$, and therefore the same closure ordinal.
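The transfinite construction above obviously can't be run on a machine, but its successor steps can, which may help with intuition for why the closure ordinal of $\Gamma$ is exactly $\lambda$. The sketch below is my own finite toy, not part of the answer: it restricts $\Gamma$ to subsets of a finite ordinal $n = \{0,\dots,n-1\}$ and counts how many applications are needed to reach the least fixed point starting from $\varnothing$; the count is exactly $n$, mirroring $\Gamma^\alpha = \alpha$. The substance of the original question is, of course, what happens at limit stages, which a finite example cannot exhibit.

```python
# Finite analogue of the answer's Gamma, acting on subsets of {0, ..., n-1}:
# Gamma(A) = A ∪ {least missing element}, or A itself once A is everything.
def gamma(A, n):
    missing = sorted(set(range(n)) - A)
    return A | {missing[0]} if missing else A

def closure_ordinal(n):
    """Number of iterations needed for gamma to reach its fixed point from the empty set."""
    A, steps = set(), 0
    while gamma(A, n) != A:
        A = gamma(A, n)
        steps += 1
    return steps

print([closure_ordinal(n) for n in range(6)])  # [0, 1, 2, 3, 4, 5]
```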
# HN Theater

The best talks and videos of Hacker News.

### Hacker News Comments on Bret Victor The Future of Programming

HN Theater has aggregated all Hacker News stories and comments that mention Joey Reid's video "Bret Victor The Future of Programming".

"The most dangerous thought you can have as a creative person is to think you know what you're doing." Presented at Dropbox's DBX conference on July 9, 2013. All of the slides are available at: http://worrydream.com/dbx/ For his recent DBX Conference talk, Victor took attendees back to the year 1973, donning the uniform of an IBM systems engineer of the times, delivering his presentation on an overhead projector. The '60s and early '70s were a fertile time for CS ideas, reminds Victor, but even more importantly, it was a time of unfettered thinking, unconstrained by programming dogma, authority, and tradition. 'The most dangerous thought that you can have as a creative person is to think that you know what you're doing,' explains Victor. 'Because once you think you know what you're doing you stop looking around for other ways of doing things and you stop being able to see other ways of doing things. You become blind.' He concludes, 'I think you have to say: "We don't know what programming is. We don't know what computing is. We don't even know what a computer is." And once you truly understand that, and once you truly believe that, then you're free, and you can think anything.'"

#### Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.

But there was one earlier presentation I cannot find, where the guy was showing live debugging of a video game. Not sure if it was TED or one of the conferences ... Edit: This is it: Bred Victor: https://youtu.be/EGqwXt90ZqA?t=1006

mkl: *Bret Victor: http://worrydream.com/ Many past threads on HN: https://hn.algolia.com/?q=worrydream

I think you might be ok on compute but bottleneck on bandwidth. Who knows though. Fun question. If you like exploring these kinds of ideas you might enjoy https://youtu.be/8pTEmbeENF4

Jul 15, 2021 · melling on Pharo 9

"Pharo is a … and immediate feedback." The key thing that we should provide more often. Bret Victor has been discussing this for over a decade. Can't find the demo that I'm thinking of, but here's an introduction to Bret: https://youtu.be/ef2jpjTEB5U https://youtu.be/8pTEmbeENF4 His ideas go beyond "immediate feedback" …

Jul 14, 2021 · 1 points, 0 comments · submitted by funkaster

The problem is that we stop pursuing answers on this topic, and thus stop making progress. It's basically like what Bret Victor described in 'The Future of Programming'. [0] There were a lot of language zealots at the end of the last century, especially evangelizing Object-Oriented Programming. Nowadays everybody can easily counter those arguments with 'No Silver Bullet' without further thinking; that is arrogance in the disguise of humility. There is still a huge amount of accidental complexity to deal with in most tech stacks. Most businesses would die fast and leave nothing behind anyway, while the progress of the industry accumulates and benefits the whole field. Java looks slightly better for creating software at scale than C. C looks slightly better than FORTRAN. FORTRAN looks slightly better than machine code. Say there's a language that looks like Haskell but has tooling and an ecosystem as good as Java's; I believe it would also be slightly better than Java.
Bret Victor has an inspiring talk on this theme: https://www.youtube.com/watch?v=8pTEmbeENF4 Bret Victor's, "The Future of Programming" is illuminating. He walks through what "programming" means and how the concept of "programming" has shifted in little evolutionary leaps. "There can be a lot of resistance to new ways of working that require you to unlearn what you've already learned and think in new ways. ... And there can even be outright hostility." Programming Baduk used to involve expert systems. Now convolutional neural networks (CNNs) can hoist a computer to superhuman performance, even doing so without pre-programmed rules (see MuZero). We no longer "program" computers to play Go, chess, shogi, or even Atari games. Some people have difficulty keeping code structures in their mind's eye. Here's a conceptual development environment for navigating code visually, ending with a dual text editor: Is that programming? What's the difference between _typing_ instructions into a computer to place a graphical user interface widget on a screen and _telling_ the computer you'd like to put a toroid on the screen? Computers can use CNNs to fill in knowledge gaps. Even though the computer wasn't told the colour, size, shading, material, or location of the toroid, it can still show us the ring. Is telling a holodeck that you'd like to replay a scene from a novel a form of programming? Each evolutionary step in programming has given us more powerful ways to express ideas in ever terser forms. Few people code in binary anymore. Did Picard need to tell the computer where to put every chair, table, glass, and machine gun? What is programming? Lots of cynic-cynics in here :) I'll stand up for the author. The lists in the article are not great, but I still agree with the sentiment. I recommend everyone watch Bret Victor's classic "The Future of Programming" https://www.youtube.com/watch?v=8pTEmbeENF4 Yes, we've had a trillion dollars invested in "How to run database servers at scale". And, we've had some incremental improvements to the C++ish language ecosystem. We've effectively replaced Perl with Python. That's nice. Deep Learning has been a major invention. Probably the most revolutionary I can think of in the past couple decades. But, what do I do in Visual Studio 2019 that is fundamentally different than what I was doing in Borland Turbo Pascal back on my old 286? C++20 is more powerful. Intellisense is nice. Edit-and-continue worked 20 years ago and most programmers still don't use it. If you are super awesome, you might use a reversible debugger. That's still fringe science these days. There is glacial momentum in the programming community. A lot of "grep and loose, flat ASCII files were good enough for my grandpappy. I can't accept anything different" And, so we don't have code-as-database. A lot of "I grew up learning how to parse curly bracket blocks. So, I can't accept anything different". So, so many languages try to look like C and are mostly flavors of the same procedural-OO-bit-of-functional paradigm. A lot of "GDB is terrible, don't even try" so many programmers are in reverse-stockholm system where they have convinced themselves debuggers are unnecessary and debugging is just fundamentally slow and painful. So, we don't have in-process information flow visualization. And, so on. I agree. I also agree with the rough time delineation. Starting with the dotcom bubble, the industry was flooded with people. So we should have seen amazing progress in every direction. 
Most of those programmers were non-geeks interested in making an easy buck instead of geeks, into computers and happily shocked that we could make a living at it. And many of the desirable jobs turned out to be making people to click on things to drive revenue. Who can blame any of those people? They were just chasing the incentives presented to them. Check out the Unison programming language (https://www.unisonweb.org/). The codebase exists as a database instead of raw text. It has the clever ideas of having code be content, and be immutable. From these 2 properties, most aspects of programming, version control, and deployment can be re-thought. I've been following its development for a few years, I can't wait for it to blossom more! What problem does this solves? > Edit-and-continue worked 20 years ago and most programmers still don't use it. If you are super awesome, you might use a reversible debugger. That's still fringe science these days. Or use a debugger at all. Or write their code in a way that's easy to debug. > "grep and loose, flat ASCII files were good enough for my grandpappy. I can't accept anything different" Just try sneaking an unicode character in a non-trivial project somewhere. > So, so many languages try to look like C and are mostly flavors of the same procedural-OO-bit-of-functional paradigm C did something right. It's still readable and simple enough it doesn't take too long to learn (memory management is the hardest thing about it). > There is glacial momentum in the programming community. A lot of "grep and loose, flat ASCII files were good enough for my grandpappy. I can't accept anything different" And, so we don't have code-as-database. A lot of "I grew up learning how to parse curly bracket blocks. So, I can't accept anything different". So, so many languages try to look like C and are mostly flavors of the same procedural-OO-bit-of-functional paradigm. We actually have tried several times to build programming languages that break out of the textual programming style that we use. Visual programming languages exist, and there's a massive list of them on Wikipedia. However, they don't appear to actually be superior to regular old textual programming languages. “Superior” is meaningless out of context. There are domains where visual programming prevails, like shader design in computer graphics. Visual programming is a spectrum: on one end you’re trading the raw power of textual languages for a visual abstraction, and on the other end you just have GUI apps. UI design and prototyping programs like Sketch are arguably visual programming environments, and you’d have a hard time convincing me that working in text would be more efficient. >We actually have tried several times to build programming languages that break out of the textual programming style that we use. Visual programming languages exist, and there's a massive list of them on Wikipedia. However, they don't appear to actually be superior to regular old textual programming languages. A lot of the time I spent doing the Advent of Code last month was wishing I could just highlight a chunk of text and tell the computer "this is an X"... instead of typing out all the syntactic sugar. Now, there is nothing that this approach could do that you can't do typing things out... except for the impedance mismatch between all that text, and my brain wanting to get something done. If you look at it terms of expressiveness, there is nothing to gain... 
but if you consider speed, ease of use, and avoiding errors, there might be an order of magnitude improvement in productivity. Yet... in the end, it could always translate the visual markup back to syntactic sugar. Low code environments as shipped today are actually quite impressive, and I'm saying that as a very long term skeptic about that field. This time around they're here to stay. I've come to the opinion that "graphical" vs "non-graphical" is a red herring. I don't think it actually matters much when it comes to mainstream adoption. Is Excel graphical? I mean, partly, and partly not, but it's the closest we've gotten to a "programming language for the masses". Next up would probably be Visual Basic, which isn't graphical at all. Bash is arguably in the vicinity too, and again, not graphical. Here's my theory (train of thought here); the key traits of a successful mainstream programming solution are: 1) A simple conceptual model. Syntax errors are a barrier but a small one, and one that fades with just a little bit of practice. You can also overlay a graphical environment on a text-based language fairly easily. The real barrier, IMO, is concepts. Even today's "more accessible" languages require you to learn not only variables and statements and procedures, but functions with arguments and return values, call stacks, objects and classes and arrays (oh my!). And that's just to get in the door. To be productive you then have to learn APIs, and tooling, and frameworks, and patterns, etc. Excel has variables and functions (sort of), but that's all you really need to get going. Bash builds on the basic concepts of files, and text piping from one thing to another. 2) Ease of interfacing with things people care about: making GUIs, making HTTP requests, working with files, etc. Regular people's programs aren't about domain modeling or doing complex computations. They're mostly about hooking up IO between different things, sending commands to a person's digital life. Bash and Visual Basic were fantastic at this. It's trickier today because most of what people care about exists in the form of services that have (or lack) web APIs, but it's not insurmountable. I think iOS Shortcuts is actually an extremely compelling low-key exploration of this space. They're off to a slow start for a number of reasons, but I think they're on exactly the right track. You're missing that the programming environment needs to be scalable to 50 or even 500 collaborators. Arguably bash and excel struggle at scaling non trivial problems to 15 collaborators. A surprising number of programming environments do even worse, notably visual or pseudo-visual ones, but even some textual ones. I don't think it needs to, since none of the above do, but that would definitely help at least in the enterprise Bret Victor has captured this really well. Dynamicland is the best form of 'AR' right now. Apple is taking a note via App Clips and the physical version of that - whatever they call it. Sep 01, 2020 · 3 points, 0 comments · submitted by rbanffy Aug 01, 2020 · 12 points, 2 comments · submitted by r2b2 sxp This needs either a (2013) or (1973) tag in the title. It's a good video and worth watching like many other Bret Victor videos. He probably wasn’t born in 1973. PeerJ has actually quite a few publications, one of them is CS ;) https://peerj.com/computer-science/ the dropdown on the top left allows you to switch between them. I think MathJax is certainly a step in the right direction, they even support rendering to MathML. 
But I agree that there is a certain lack there in terms of full semantic representations. MathJax is more accessible than TeX but it's still describing visual layout, instead of semantic meaning. Pushing HTML to arxiv is also a step into the right direction. I think the most important thing we can do is not be complacent with the state of the art. We need to go back to an age of computing where we didn't think we had it all figured out. We need to experiment, and not be afraid to take a step back in some aspects, like layout and kerning, in exchange for other advances like semantic representations and knowledge representation. I think bred victor has a great talk on this: https://www.youtube.com/watch?v=8pTEmbeENF4 I think we need to experiment with things like observablehq.com or nextjournal.com or the many other that are coming into existence. re: PeerJ: I missed that, nice! re: semantics vs visual layout of math... Wikipedia says OpenMath is a thing, but... that only solves half the problem. Once you have a format that encodes what you want, someone has to actually it. Like, if some writes x^{-1} and f^{-1}, it's hard for a computer to figure out that the first one means "the number you get when you divide 1 by x", whereas the second one means "the function you get when you compute the inverse of f". And if the author can't be bothered to slow down and say which is which, then the reader will have to guess. re: kerning: TeX's advantage here is not fundamental, I think. Just need a good font, as far as I know. (Actually that's not far; I know almost nothing here.) re: layout: CSS is finally getting good at this from what I hear. re: talk: looks familiar; maybe I should re-watch it. >> They continuously run every time you make any change to them. >Which is very much unlike what a program does. You're saying "Program that run continuously every time you make any change are very much unlike what a program does?" That doesn't make any sense to me at all, can you please try to rephrase it? Speaking of program that run continuously, have you ever seen Bret Victor's talks "The Future of Programming" and "Inventing on Principle", or heard of Doug Engelbart's work? The Future of Programming Inventing on Principle HN discussion: https://news.ycombinator.com/item?id=16315328 "I'm totally confident that in 40 years we won't be writing code in text files. We've been shown the way [by Doug Engelbart NLS, Grail, Smalltalk, and Plato]." -Bret Victor Do you still maintain that "Excel sheets in their widely used form are not instructions or behaviour", despite the examples and citation I gave you? If so, I'm pretty sure we're not talking about the same Microsoft Excel, or even using the same Wikipedia. Your definition is arbitrarily gerrymandered because you're trying to drag the editor into the definition of the language, while I'm talking about the representation and structure of the language itself, which defines the language, not the tools you use to edit it, which don't define the language. 
I'll repeat what I already wrote, defining how you can distinguish a non-visual text programming language like C++ from a visual programming language like a spreadsheet or Max/MSP by the number of dimensions and structure of its syntax: >But the actual structure and syntax of a C++ program that you edit in VI is simply a one-dimensional stream of characters, not a two-dimensional grid of interconnected objects, values, graphical attributes, and formulas, with relative and absolute two-dimensional references, like a spreadsheet. Text programming languages are one-dimensional streams of characters. Visual programming languages are two-dimensional and graph structured instead of sequential (or possibly 3d, but that makes them much harder to use and visualize). The fact that you can serialize the graph representation of a visual programming language into a one-dimensional array of bytes to save it to a file does not make it a text programming language. The fact that you can edit the one-dimensional stream of characters that represents a textual programming language in a visual editor does not make it a visual programming language. Microsoft Visual Studio doesn't magically transform C++ into a visual programming language. PSIBER is an interactive visual user interface to a graphical PostScript programming environment that I wrote years after the textual PostScript language was designed at Adobe and defined in the Red Book, but it didn't magically retroactively transform PostScript into a visual language, it just implemented a visual graphical user interface to the textual PostScript programming language, much like Visual Studio implements a visual interface to C++, which remains a one-dimensional textual language. And the fact that PostScript is a graphical language that can draw on the screen or paper doesn't necessarily make it a visual programming language. https://medium.com/@donhopkins/the-shape-of-psiber-space-oct... It's all about the representation and syntax of the language itself, not what you use it for, or how you edit it. Do you have a better definition, that doesn't misclassify C++ or PostScript or Excel or Max/MSP? lmm > You're saying "Program that run continuously every time you make any change are very much unlike what a program does?" That doesn't make any sense to me at all, can you please try to rephrase it? Running continuously every time you make any change is very much unlike what a program does. Programming is characteristically about controlling the sequencing of instructions/behaviour, and someone editing a spreadsheet in the conventional (non-macro) way is not doing that. > Do you still maintain that "Excel sheets in their widely used form are not instructions or behaviour", despite the examples and citation I gave you? If so, I'm pretty sure we're not talking about the same Microsoft Excel, or even using the same Wikipedia. This is thoroughly dishonest of you. You edited those points and examples into your comment, there was no mention of macros or "programming by demonstration" at the point when I hit reply. To respond to those added arguments now: I suspect those features are substantially less popular than Ruby. Your own source states that Microsoft themselves discourage the use of the things you're talking about. Excel is popular and it may be possible to write programs in it, but writing programs in it is not popular and the popular uses of Excel are not programs. 
Magic: The Gathering is extremely popular and famously Turing-complete, but it would be a mistake to see that as evidence for the viability of a card-based programming paradigm. > Your definition is arbitrarily gerrymandered because you're trying to drag the editor into the definition of the language, while I'm talking about the representation and structure of the language itself, which defines the language, not the tools you use to edit it, which don't define the language. Anything "visual" is necessarily going to be about how the human interacts with the language, because vision is something that humans have and computers don't (unless you're talking about a language for implementing computer vision or something). > I'll repeat what I already wrote, defining how you can distinguish a non-visual text programming language like C++ from a visual programming language like a spreadsheet or Max/MSP by the number of dimensions and structure of its syntax: But you can't objectively define whether a given syntactic construct is higher-dimensional or not. Plenty of languages have constructs that describe two- or more-dimensional spaces - e.g. object inheritance graphs, effect systems. Whether we consider these languages to be visual or not always comes down to how programmers typically interact with them. > PSIBER is an interactive visual user interface to a graphical PostScript programming environment that I wrote years after the textual PostScript language was designed at Adobe and defined in the Red Book, but it didn't magically retroactively transform PostScript into a visual language There's nothing magical about new tools changing what kind of language a given language is. Lisp was a theoretical language for reasoning about computation until someone implemented an interpreter for it and turned it into a programming language. > Lisp was a theoretical language for reasoning about computation until someone implemented an interpreter for it and turned it into a programming language. Lisp was designed and developed as a real programming language. That it was a theoretical language first is wrong. Related thread: Ask HN: What's the best book on the early history of the Internet and/or Web? https://news.ycombinator.com/item?id=19556208 My previous reco: Not a book, but a great video via Steve Blank: https://www.youtube.com/watch?v=ZTC_RxWN_xo Also Bret Victor The Future of Programming, which is misleading as above is performance piece where title slide reads 1973 https://www.youtube.com/watch?v=8pTEmbeENF4 Feb 03, 2020 · 3 points, 0 comments · submitted by szx > I see Bret Victor more as a historian where he finds old ideas and re-introduces them to people who haven’t seen them before. Just yesterday I revisited his Future of Programming talk. Splendid! Nov 20, 2018 · 2 points, 0 comments · submitted by fmoronzirfas Oct 29, 2018 · 2 points, 0 comments · submitted by gyre007 'The future of Programming' by Bret Victor (https://www.youtube.com/watch?v=8pTEmbeENF4). Seems a pun about OP title but it really is related with his/her question. Take a look and you will be amazed of how good (or revolutionary by our standards) some old technologies were. > How would that better environment look? Come on, what kind of question is that? If I knew how to improve it I wouldn't be chatting here with you, I would be doing something about it. 
Also, you should probably watch Bret Victor's videos, especially "The Future of Programming", if only to realize that we have been improving the programming environment since the days of punch cards, and are still in the process of doing so. Also, pretty much anything Bret Victor has done. Good read. I feel the title should be "Teaching Programming Paradigms and Beyond" since the text assumes familiarity with complete CS landscape and comments on success of several teaching methods. I would recommend (also misleadingly titled) talk The Future of Programming by Bret Victor [1] which goes over some groundbreaking paradigms that have since become mostly forgotten. It's in a Handbook of Computing Education, so everything in the book is about "teaching". It would therefore have been especially odd to put that in the title. Jun 05, 2018 · 1 points, 0 comments · submitted by aziis98 May 23, 2018 · 5 points, 1 comments · submitted by nmat Bret Victor is _so incredibly inspiring_. So much amazing work: Learnable Programming: http://worrydream.com/#!/LearnableProgramming Inventing On Principle: http://worrydream.com/#!/InventingOnPrinciple The Future Of Programming: http://worrydream.com/#!/TheFutureOfProgramming Edit: formatting Yes, healthy skepticism is good, but just because code (i.e. text files) is often the most _powerful_ or _flexible_ tool, doesn't mean it's always the best tool. We (programmers) are notoriously bad at advancing the tools in our field. For a brief history of this, watch "The Future of Programming" talk by Bret Victor: That guy gets an a+ for presentation but I couldn't find much to agree with him on. He talks about code being linear lines of text as though that's a bad thing. We've pretty much been stuck with this as state of the art in our writing systems for thousands of years, what would be your reaction if I suggested everyone should watch videos instead of read books? It's a flexible and easy way to represent a program that no other tool has come close to. > We (programmers) are notoriously bad at advancing the tools in our field. We've been trying to automate ourselves out of jobs for the entirety of the history of the industry yet programmers are in more demand than ever. Everyone wants to work on interesting problems and creating inner platforms is far more interesting than writing boring business logic. Yet for all our efforts we've barely progressed since the 70's, why do you think that is? > What would be your reaction if I suggested everyone should watch videos instead of read books? Videos are just another useful tool for learning; they don't obviate the need for books, but they're better at conveying some ideas/information than books alone. Just like videos and books aren't mutually-exclusive tools for learning, graphical tools and textfiles aren't mutually-exclusive tools for building programs. >We (programmers) are notoriously bad at advancing the tools in our field. I think the tools of our trade have advanced tremendously. Visual Studio for example is an amazing experience for C# programmers, one that most languages don't have. And this is in text tools. Programmers know that text is the most powerful and flexible, which is why we advance those tools that help in working with text. GUI tools are good for people who only want to do something every once in a while. Something they don't need to repeat. Where there's a simple recipe for it. And, yes, programmers don't do that much to advance these, because they have no use for them themselves. 
Bret Victor - The Future of Programming (imagined from perspective of 1970's) This talk is obscenely underrated. There is not nearly as much tech-focused performance art in our industry. Underrated by who? (“Whom”?) It always shows up in these lists, and rightly so. > Makes you wonder what pioneers back in the 60s and 70s could have accomplished with modern hardware. One of the best Bret Victor talks is about that. "The Future of Programming" > And it kills me to think that Smalltalk and Visual Basic had a built-in GUI editor and layout manager, unlike the web. Anyone of us doing desktop/mobile development on .NET, Java, Android, iOS, Qt development can still enjoy such goodies. With web, one day it might catch 90's RAD tooling. I used the 1990s RAD tooling, and in many regards, the results were mediocre at best. Changing a window size could kill your form, to say nothing of font size. The live aspect was great, though. But you would expect 20 years to be enough time to improve RAD tooling in areas it wasn't so great at for modern devices. Instead, at least as far as the web is concerned, it's mostly been missing. 90's RAD tooling also supported layout managers, devs just have to actually use them. Also RAD tooling is about the whole stack, not just dragging stuff into forms. One of the reasons it has never taken off is that GUI editors and layout managers that have come out for the web (and there have been a few), have never quite gotten the code right. They would produce a page that looked like the designer... but with terribly written HTML/CSS. So web designers and developers prefer to make their own markup. I seldom recognize pieces of HTML/CSS literature when looking at the developer tools panel, or source code from well known frameworks. The deeper problem is that the tech industry is slow to get rid of HTML, even though HTML was created to exchange documents, and nowadays we mostly use it as a GUI for network software. See "The Problem With HTML": It used to be conventional wisdom that using HTML as a layout system is wrong, you know. Because it wasn't meant to be one, and text content was supposed to be independent of the medium on which it is viewed. Well, at the time I had my doubts that it was going to work, but it was (and still is) a nice idea. lmm HTML isn't the problem - declarative markup is a great way of doing GUI layout, non-web GUI frameworks tend to come up with alternatives that look similar. The problem is CSS, which is a fractal of bad design, broken at every level, from selectors to the box model. That I agree with. Android, iOS, XAML and QML are quite nice to work with. CSS works fine for Text markup. The problem is you get two models one where the Browser picks where stuff goes depending on the browser and local settings, another where the designer makes that choice. You can't have both things exist at the same time, on top of that most designers don't know what they are doing. Mar 15, 2018 · 1 points, 0 comments · submitted by swyx Programming was in its infancy 50 years ago, but in reality we don't really know what its development arc is. We could be in the toddler stage right now, or we could still be in infancy when compared to future developments in the field. I believe we are much closer to the latter. There were things being done in the 60s that we still haven't really integrated into our trade. 
[0] We joke about having to program with hardware switches and punch cards, yet here we are still typing carefully crafted cryptic commands that tell the computer exactly what it is supposed to do, and storing them in linear text files which we have to mentally map to program states. I think there will always be a place for this kind of programming, just as people still use Assembly today, but it's a bit premature to say, "Well, this is it, or nearly so!" I recall reading that one of the giants of early computing, Von Neumann perhaps, never understood the benefit of Assembly and thought that it was a waste of the computer's time to compile to machine code rather than have a human write the machine code directly. We are working inside a problem domain that we barely understand. I find it hard to believe that we will have the glorious sci-fi future that many of us imagine will come out of advancements in technology without also developing corresponding advancements in how we describe and create and interact with it. One potential example that I am looking forward to learning more about is Luna, which features a visual development environment that is isomorphic to its code. [1] The implicit goal of programming language and tool development is "How do I make it easier to accurately map 'the thing I want done' into a functioning system?" And our tools are getting better all the time, opening up new avenues of interest and possibility. This is a great time to be a programmer, and I think it's only going to get better, and become more accessible. > in reality we don't really know what its development arc is That's true, but we know we're past the rapid advancement portion of the arc. Look at the most widely used languages today, the top ten, regardless of methodology[1][2][3], are dominated by languages that are ~20 to 30+ years old. > advancements in how we describe and create and interact with it As a funny, but accurate, CommitStrip[4] pointed out, you'll need to create a specification, and we already have a term for a project specification that is comprehensive and precise enough to generate a program...it's called code. > One potential example that I am looking forward to learning more about is Luna That was discussed on HN recently[5] and some people were pointing out it appeared to have made little to no progress since the previous time it was submitted and others mentioned various short-comings of these types of visual programming languages in general. Time will tell, we'll see what happens /shrug I'm in love with this, I have nothing constructive to add but you should be really proud of this work. Reminded me of this video: https://www.youtube.com/watch?v=8pTEmbeENF4 Think you're on to something that this talk points out very well. That video reminds me of a comic I sketched out years ago. It's about this rag tag armed group of hackers who specialize in intelligence espionage in the near future. There's a payload specialist, security specialist, data structure specialist, etc. Much of the drama unfolds in VR space where the hackers can be seen frantically querying/manipulating data structures directly by hand whilst evading detection. It's supposed to be educational as well, explaining CS topics through stories. Think the movie hackers plus GitS, but with attention to accurate portrayal of CS knowledge. It's basically my vision of what the future could be like. Thanks! 
That's funny, before starting at a programming job a few years ago my soon-to-be-employer (at a tiny YC startup) said to me, "you can be like our own little Bret Victor!" I should re-watch that one though since I only have a vague recollection of it. I enjoyed his "Inventing on Principle" talk quite a bit. Extremely relevant, particularly his remarks: https://youtu.be/8pTEmbeENF4?t=1174 (19:34 if t= doesn't work). May 23, 2017 · 2 points, 0 comments · submitted by feargswalsh92 May 18, 2017 · comboy on Kotlin Is Better Oh Delphi.. every time I fight with CSS and think about how easy making GUI apps used to be almost 20(sic!) years ago, I feel like something went wrong. I had to do maintenance on an old winforms app recently, it's insane how simple it is to develop with, how quickly it starts up and how quickly it show users the data they want. I signed myself up as the project maintainer. And even that is an incredibly bloated technology compared to delphi. I've been writing web apps for 20ish years, and also still can't see productivity catching up to what we were doing with Delphi and other desktop uis 25 years earlier... not even with react and all these other webpack/babel heavy things. Web development... Haven't looked at Delphi since last century, but it seems it's still alive, and can produce iOS and Android (and all the desktops) programs. No idea how well though... It's rubbish. Worst IDE I've ever worked with, sometimes I consider just using Notepad. No day without crashes, random errors, intelliSense not working, debugger suddenly not showing variable values, code navigation not working... Embarcadero is just milking companies that need Delphi for legacy code. Borland management went wrong. 20 years... For those who want to see how Delphi GUI design is: https://www.youtube.com/watch?v=BRMo5JSA9rw bitL There's always Lazarus... yep. yep. yep. So did Delphi handle dynamically resizing and positioning layouts? My impression is all the old highly productive gui languages (Delphi, Visual Basic, ...) used absolute positioning. Personally I'd rather have a more complex gui framework (css, swing, wpf) that handles positioning than to be forever cursed tweaking pixel width, height, x, y values. bitL Yes, if you set anchors on each component you wanted to be resizable (more specifically, all four sides could have had an independent anchor). Same with winforms. I'm not sure if it was added at some point or always there, if it was always there I wish I knew about it a lot sooner. >... than to be forever cursed tweaking pixel width, height, x, y values. Inevitably, this is what CSS work devolves to, though.[0] [0] Pixel twiddling It has anchor layout, flow layout, table layout, etc... I have typically found it easier to do the layout I wanted in delphi than CSS. The only annoying thing is that the visual designer has no undo, which you really miss when doing exploratory designs. CSS 2 really is terrible. There's a few things in CSS 3 that make it passable, but overall I consider it a failed layout system which needs more workarounds than it provides solutions. This reminds me a lot of Bret Victor's "The Future of Programming". The improved process is not in how it can emulate the way you've done things for decades with file-based tools. It's in the conceptual nature of programming, as well-explained by Bret Victor: https://youtu.be/8pTEmbeENF4 - Software development using files and folders is absolutely antediluvian. 
Smalltalk does everything in a universe of objects; source code is organized spatially rather than in long reams residing in text files. - Live coding and debugging done right is an enormous and unparalleled productivity booster. - Persisting execution state is extremely convenient for maintaining continuity, and hence the high velocity of development. Governments and enterprises have long used Smalltalk to write massive applications in large team environments using Smalltalk's own tooling for collaboration and version control. There's no question that historically Smalltalk has not played well with the outside (file-based) world, and this has been a major point of contention. If you insist on using Git and similar tools with Smalltalk, then yes, this is problematic. The point is, if you view software development from only one perspective, you deny any possibility of improving the process in other ways that can lead to dramatically improved productivity, accelerated development, and lowered cognitive stress. Sorry, I am at work right now and don't have time to watch videos. Can you tell me more about "Smalltalk's own tooling for collaboration and version control"? Are you referring to Monticello? I am not insisting on git, but Monticello seems pretty limited in term of collaboration. I see commit, diff, checkout, and remote pull/push. Specifically, let's imagine this scenario: we have team of tens of programmers working on a project. A new team member joins and accidentally breaks the code in non-obvious way. He pushes the code to main repository. Next time, everyone else checks out the latest version of the code and starts having weird problems. If you had 20 people on team, and they each wasted 2 hours because the code was broken, well, you just wasted a week of programmer time. How do you prevent it? In file-based word, the answer is tests and CI. What is the smalltalk way? And please do not say "It's in the conceptual nature of programming" -- if the scenario makes no sense in the smalltalk world (maybe you are not supposed to have 20 people working on the same project?) please say this. A few important points: 1. Breaking code is nothing specific to a language. The usual weapon also in Smalltalk is to monitor if something breaks - for instance by using CI. Continuous integration only makes real sense when you have tests. One should remember that "test first", "unit testing" and "Extreme programming" (XP) like many other things had their roots in Smalltalk. Because in dynamic languages testing using code was and is part of the culture (ranging from lively verifying with workspace expressions and inspectors up to fully written tests). The first unit testing framework "SUnit" was written by Kent Beck for Smalltalk (SUnit), later he ported it to Java with Erich Gamma on a flight to OOPSLA OO conference. Java helped to push the popularity afterwards. Meanwhile also static language enthusiasts have understood that it is better to rely on tests than type checking compilers and they now hurry up to follow One last thing you should try: try to query your system how many test methods were written. When you solved this easily with a Smalltalk expression retry this in Java ;) 2. Commercial Smalltalks which are often used in big projects provide solutions which are repository based like the famous ENVY (written by OTI, was in VisualAge for Smalltalk from IBM, now VAST) or Store (VisualWorks). For more details try the commercial evaluation versions or read [1] or [2]. 
A screenshot of Envy can be seen in [3]. I worked with ENVY and it is really good - but mostly only for internal work/teams. If I remember correctly ENVY once was also available for VisualWorks (VW) ... but later got replaced Cincom developed Store for VW as a replacement which is also nice as it allows to work in an occasionally-connected mode, so work offline and push packages/versions later to a central team repo. In the open source world there are different solutions (including Monticello which is available for nearly all Smalltalk derivates) or newer solutions like FileTree or Iceberg allowing to work with Git. The workflow depends on the tool and your requirements. 3. Often it makes sense to automatically build and regular distribute a fresh daily developer images to the members of your team. This helps in later merging code. For instance Kapital (a big financial project from JP Morgan) works that way and I've seen that model very often. See [4] Again nothing special to Smalltalk. In more file based languages it also makes sense to stay close to the main line and merge as well as resynchronize with the team. In Pharo for instance we have the PharoLauncher that allows you to download any (fresh or old) image built provided by the open source community. 3. Versioning can be done on many levels. Simplest level is the image itself. Smalltalk not only has an VM and image concept - but also the concept of a changes file. If you evaluate a code expression, create or modify a class or method in the system this gets logged there. It prevents you from loosing code and it is easy to restore quickly for instance an earlier method versions/editions that one has implemented. Most Smalltalks now also work with packages and you can define package dependencies as well as declaring versions that fit together to provide a project, goodie or app (for instance with a Configuration class in Monticello) While in file based languages this is often done in an XML file (Maven for instance) or a JSON file in Smalltalk this is usually expressed with objects and classes again. This also makes it more flexible as you can very easily do queries on it or use refactoring tools to even restructure or reconfigure. 4. Usage of shared code repositories is very common also in Smalltalk. While you now can also use GitHub, GitLab, Gogs and others with Iceberg and friends in Smalltalk there are also repository systems implemented in Smalltalk itself like - SqueakSource (http://source.squeak.org, http://squeaksource.com) - SqueakSource3 (http://ss3.gemtalksystems.com) - SmalltalkHub (http://smalltalkhub.com) 5. Beside repositories where code and goodies are hosted one often finds registries Pharo for instance has http://catalog.pharo.org which is accessible also directly from the image. 5. If you work in a team you can also use a custom update stream. This is how for instance open source projects like Pharo and Squeak are managed. So anyone can hit an "update" button to get the latest changes. In Pharo http://updates.pharo.org is used and you can have a look at UpdateStreamer class to see how easy that works over the web or how to customize it for own needs. 7. If one requires not only collaboration for the development team (coding) but would like to collaborate also with other projects members on other artefacts (Excel, project plans, documents, ...) then one should have a look at tools like this http://www.3dicc.com which is implemented in - guess what: SMALLTALK. This list could be endless ... 
the first few points should only give a glimpse on what is there and available. Pharo Smalltalk, in particular, when it comes to VCS, it's very similar to git actually. It uses source code files, it distributes them via zip files, it works locally instead of centralized, it supports merges, etc. Pharo works well also with usual VCS because it can export code into source code files. The image plays no role in VCS whatsoever because VCS is about code, not data, and image is mostly about live data and less about live code. So any tool will and does work with Pharo outside the image. Problem arises with a majority of people that prefer to stay in the image; in that case you gain more control because you have more Pharo code to play with, but you lose a lot of power because we are a small community not able to compete with behemoth projects like git. Another interesting thing which Pharo does emphasize is remote debugging: though not a Pharo monopoly by a long shot, we do have several libraries that can achieve this, and because the image format retains live state and live code execution, it's easy to resolve issues. Besides the image format, the Fuel format has the advantage of storing only a fraction of the image. You can email this or share it via git or Dropbox. Like an image, a Fuel file is a binary file and, like the image, it can store live state and live code execution. This way, you can isolate live bugs and share them with your team, each one in its own Fuel file. STON is also another alternative format which feels familiar for those that have worked with JSON. So you see, you get the best of both worlds. You have the fantastic Smalltalk image, and you have ways to deal with the file-based world. Bret Victor is amazing. I saw his video about the future of programming[0] and have been following him since then. Always liked Bret Victor's talks - they are quite popular amongst HN crowd I think. Bret Victor The Future of Programming: https://www.youtube.com/watch?v=8pTEmbeENF4 Another good example is the NeXT vs Sun duel, regarding the RAD tooling of NeXTStep vs the traditional UNIX development (1991). Aug 10, 2016 · 1 points, 0 comments · submitted by tylermauthe Apple has Swift Playgrounds as someone described. Visual Studio has edit-and-continue, interactive REPL, visualizers, some sort of backtracking and now from Xamarin Interactive Workbooks. Then you have the whole INotebook trend which started out with Python and nowadays supports multiple languages. However all these tools are actually catching up with many of the features that Smalltalk-80, Interlip-D, Mesa/Cedar, Lisp Machines, Oberon already had. This is what Bret Victor jokes about in another presentation of him, where he pretends we are in the 70's making predictions how the world of computers will look like in the 21st century. Those experiences are not provided in smalltalk and Lisp machine, they are just a bit more interactive than others. In fact, playgrounds are inspired by inventing on principle and come closest to realizing just one of the demos, while the designs were obviously not around in the early 80s to guide anything. You can definitely do "something" in smalltalk, lisp machine, but the experience was always hazily defined, and anyways, is quite niche. 
So as you resort to in your post, people tend to list technical features other than describe experiences, which, even after many of Bret's essays, still need further development before they are realized in production systems (we still don't really know what we want to do, especially what will scale beyond say playgrounds). Experience first, features that can realize it second. I immediately thought of Bret Victor's wonderful talk on, 'The Future of Programming' Jul 22, 2015 · 1 points, 0 comments · submitted by eddd Jul 17, 2014 · 3 points, 2 comments · submitted by thoughtsimple The conclusion is profound in my opinion. The rest is just a clever way of making the point. From the description of the video: For his recent DBX Conference talk, Victor took attendees back to the year 1973, donning the uniform of an IBM systems engineer of the times, delivering his presentation on an overhead projector. The '60s and early '70s were a fertile time for CS ideas, reminds Victor, but even more importantly, it was a time of unfettered thinking, unconstrained by programming dogma, authority, and tradition. 'The most dangerous thought that you can have as a creative person is to think that you know what you're doing,' explains Victor. 'Because once you think you know what you're doing you stop looking around for other ways of doing things and you stop being able to see other ways of doing things. You become blind.' He concludes, 'I think you have to say: "We don't know what programming is. We don't know what computing is. We don't even know what a computer is." And once you truly understand that, and once you truly believe that, then you're free, and you can think anything.'" Mar 31, 2014 · 2 points, 0 comments · submitted by cygnus Mar 30, 2014 · 4 points, 0 comments · submitted by stesch Feb 16, 2014 · thangalin on Stack Overflow is down While writing ConTeXt code (similar to LaTeX), I will reference the StackExchange network: % @see http://tex.stackexchange.com/a/128858/2148 Brett Victor asks, "How do you get communication started between uncorrelated sentient beings?" to introduce the concept of automatic service discovery using a common language.[1] Alan Kay had a similar idea: that objects should refer to other objects not by their memory space inside a single machine but by their URI.[2] When programmers copy/paste StackOverflow snippets, in a way they are actually closer to realizing Alan Kay's vision of meta-programming than those who subscribe to the "tyranny of a single implementation" -- or "writing" code as some would mock, expressing a narrow view of what they think "programming" a computer must entail. The StackExchange network provides a feature-rich interface to document source code snippets that perform a specific task. What's missing is a formal, structured description of these snippets and a mechanism to provide semantic interoperability that leads to a universal prototyping language for deep messaging interchange.[3] How else are we going to go from Minecraft[4] to Holodeck[5]? Reactive Programming[1] (not FRP): Look up ThingLab[2][3]. Done in 1978 on Smalltalk by Alan Borning. Alan Kay typically points to Sutherland's Sketchpad (1963)[4] as inventing objects, computer graphics and constraint programming. I have to admit I don't understand the hype over FRP. I mean it's great that you can now do reactive programing in FP as well, but it's not like this hasn't been around for ages. 
Anyhow, what Alan does is not co-opting, it is pointing out all the great work that has been forgotten and then reinvented, usually badly, in the hope that someone will finally do a better job than what went before. See also Bret Victor's talk "The Future of Programming"[5]. Bret works for Alan now. Others have pointed out VPRI[6]. Open Source programming languages that came out of there include OMeta (OO pattern matching)[7], Nile (dataflow for graphics)[8], Maru (metacircular S-Expr. eval)[9], KScript (FRP)[10], etc. In terms of publishing papers: he's 73 for pete's sake. He doesn't have to publish papers, or do anything he doesn't absolutely want to. But in fact he doesn't just rest on his awards (Turing...) or patents or having had a hand in creating just about every aspect of what we now consider computing. He's still going strong. So yes, there is a peanut gallery. You just may be confused as to who is sitting in it and who is on stage changing the world. Dec 02, 2013 · 1 points, 0 comments · submitted by dhaneshnm Aug 12, 2013 · 3 points, 0 comments · submitted by micampe Aug 10, 2013 · 4 points, 0 comments · submitted by ColinWright Jul 31, 2013 · slacka on The Future of Programming Here you go:
# How to have actual values in Matplotlib Pie Chart displayed?

To have actual or any custom values in a Matplotlib pie chart displayed, we can take the following steps −

• Set the figure size and adjust the padding between and around the subplots.
• Make lists of labels, fractions and explode positions, and get the sum of the fractions so percentages can be converted back into actual values.
• Make a pie chart using labels, fracs and explode with autopct=lambda p: <calculation for the value to display>.
• To display the figure, use the show() method.

## Example

import matplotlib.pyplot as plt

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

labels = ('Read', 'Eat', 'Sleep', 'Repeat')
fracs = [5, 3, 4, 1]
total = sum(fracs)
explode = (0, 0.05, 0, 0)

# autopct receives the percentage p; p * total / 100 converts it back into the actual value.
plt.pie(fracs, explode=explode, labels=labels,
        autopct=lambda p: '{:.0f}'.format(p * total / 100))
plt.show()
# Magnetic field generated by current in semicircular loop at a point on axis

1. Mar 13, 2010

### SOMEBODYCOOL

1. The problem statement, all variables and given/known data

Determine the magnetic field strength and direction at a point 'z' on the axis of the centre of a semi-circular current loop of radius R.

2. Relevant equations

Biot-Savart formula

$$d\vec{B}=\frac{\mu_{0}Id\vec{r}\times\hat{e}}{4\pi|\vec{R}-\vec{r}|^{2}}$$

e being the unit vector from r to R

3. The attempt at a solution

A much simpler problem is a full current loop, because one component of the magnetic field cancels out. For this problem, you'd have to deal with the half-circle arc and the straight line base separately. I was also wondering whether it's easier to calculate the z and x components of B separately as well... One component is straightforward enough... I just really don't understand where to start.

Last edited: Mar 13, 2010

2. Mar 14, 2010

### gabbagabbahey

This should be a pretty straightforward application of the Biot-Savart Law. Start by finding expressions for $\textbf{r}$, the position vector for a general point on the semi-circular arc, and $\textbf{R}$, the position vector for a general point on the $z$-axis....what do you get for those?...What does that make $\hat{\mathbf{e}}$? What is $d\textbf{r}$ for a semi-circular arc? To make things easier, you will want to use cylindrical coordinates.

3. Mar 14, 2010

### SOMEBODYCOOL

So, the parametric representation of a point on the semi-circle would be (0, bcos(t), bsin(t)) where b is the radius of the semi-circle. The vector R is just [d, 0, 0] where d is the distance of the point along the axis, and then e is the unit vector along R-r. But what's dr? And where does the switch to cylindrical coordinates come in?

4. Mar 14, 2010

### SOMEBODYCOOL

I think I got it. Thanks

5. Mar 14, 2010

### gabbagabbahey

If you'd like to post your result, we'll be able to check it for you.
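For reference, here is a sketch of the setup the hints above point toward, using the same assumptions as the poster's parametrisation: the semicircle of radius b lies in the yz-plane with its diameter along the y-axis, the field point sits at (d, 0, 0) on the axis, and t runs from 0 to π (these labels are mine; the thread itself stops short of the integral).

$$\vec{r}(t)=(0,\,b\cos t,\,b\sin t),\qquad d\vec{r}=(0,\,-b\sin t,\,b\cos t)\,dt,\qquad \vec{R}-\vec{r}=(d,\,-b\cos t,\,-b\sin t),\qquad |\vec{R}-\vec{r}|=\sqrt{d^{2}+b^{2}}$$

$$d\vec{r}\times(\vec{R}-\vec{r})=\left(b^{2},\;bd\cos t,\;bd\sin t\right)dt$$

so that, for the arc alone,

$$\vec{B}_{\text{arc}}=\frac{\mu_{0}I}{4\pi\left(d^{2}+b^{2}\right)^{3/2}}\int_{0}^{\pi}\left(b^{2},\;bd\cos t,\;bd\sin t\right)dt=\frac{\mu_{0}I}{4\pi\left(d^{2}+b^{2}\right)^{3/2}}\left(\pi b^{2},\;0,\;2bd\right).$$

The straight segment along the diameter still has to be treated with the same Biot-Savart integral and added before comparing against any textbook answer.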
Negative result: See p. 377 in Chapter 15 of Matousek's book, which can be found here. In short, if you want the image of the $k$ points to be between the surface of a convex body $K$ and the surface of $DK$ for some $D>1$, you need the operator to have rank at least $k^{f(D)}$ for some function $f$.

Positive result on a related problem: In Johnson, William B.; Lindenstrauss, Joram; Schechtman, Gideon, "On Lipschitz embedding of finite metric spaces in low-dimensional normed spaces", Geometrical Aspects of Functional Analysis (1985/86), 177–184, Lecture Notes in Math., 1267, Springer, Berlin, 1987, it is proved that for some constant $C$, if you have $k$ points on the surface of a symmetric convex body $K$, then you can put the points isometrically into a suitable $\ell_\infty^m$ in such a way that a random projection of rank of order $k^{1/D}$ will place the points between the surface of the symmetric convex body $CK$ and the surface of $CDK$; see the paper for a precise statement. I don't think symmetry plays much of a role here. We were interested in the embedding of points into a Banach space and so did not think about general convex bodies.

The embedding theorem we proved was later made obsolete by Matousek when he proved that any metric space of size $k$ embeds into $\ell_\infty^{n}$ with distortion $D$ with $n$ about $Dk^{1/(2D)} \log k$ (see p. 404 at the above given link).
AIB Currency Losses: John Rusnak's Role and the Fraud

John Rusnak's Role

In July 1993, Allfirst hired Mr. Rusnak. The hiring process was led by the head of treasury funds management for Allfirst, Mr. Ray. Mr. Rusnak had extensive experience in currency trading. Mr. Rusnak promoted himself as a trader who used options to engage in a form of arbitrage, attempting to take advantage of price discrepancies between currency options and currency forwards. Allfirst had until then engaged in "directional" spot and forward trading, simple bets that particular currencies would rise or fall. Messrs. Cronin and Ray were intrigued by Mr. Rusnak's style of trading, as he claimed it would diversify the revenue stream arising from simple directional trading. Mr. Rusnak initially reported to a trading manager; in 1999, when that trading manager left, he began reporting directly to Mr. Ray. Mr. Ray's knowledge of foreign exchange was limited. Allfirst's treasurer, despite his own extensive currency-trading experience, nevertheless relied heavily upon the treasury funds manager to supervise Mr. Rusnak. Mr. Ray, however, did not devote significant attention to Mr. Rusnak's proprietary trading. The treasury funds manager (Ray) was highly protective of Mr. Rusnak; he often strongly defended Mr. Rusnak in inquiries by the back office and risk assessment personnel. Mr. Rusnak's annual bonus was directly related to his net trading profits. In effect, Mr. Rusnak received a bonus equal to 30 percent of any net trading profits he generated in excess of five times his salary. Mr. Rusnak was regarded by some fellow employees as strong and confident. In the market, Mr. Rusnak was perceived as an active trader and a profitable client for the brokers. Many brokerage firms wanted to cover Mr. Rusnak. The brokers and traders heavily entertained Mr. Rusnak, with meals, hotel stays, golf trips, Super Bowl tickets and other travel. He apparently liked to be wined and dined, and the brokers obliged.

Mr. Rusnak's Fraud

Trading Strategy: Mr. Rusnak told everyone that he engaged in an arbitrage between foreign exchange options and the spot and forward markets to make consistent profits. In fact, however, much of Mr. Rusnak's trading was linear, directional trading. These were simple bets that the market would move in a particular direction. The majority of his real positions were simple currency forwards. He also bought some foreign exchange options with "high deltas" (options that were "deep in the money" and had large premiums). He traded in "exotic" options, although the trading in these products was infrequent.

The Bogus Options: Mr. Rusnak sustained substantial losses at some point in or about 1997, and it was around that time that his fraudulent activities may have begun. Using currency forwards, Mr. Rusnak apparently bet wrongly on the movement of the Japanese yen— he bought a great deal of yen for future delivery, only to see the value of the yen, and thus his forward positions, decline. To hide his losses and the size of his positions, he created fictitious options. These fictitious options also tended to give the appearance that his real positions were hedged. Through a clever technique, Mr. Rusnak was able to get the bogus options onto Allfirst's books. Typically, he would simultaneously enter two bogus trades into Allfirst's trading system.
The two options would involve the same currency, and the same strike price, and they would offset each other from a cash standpoint: the first would involve the receipt of a large premium; and the second would involve the payment of an identical premium; accordingly, there would be no net cash in or out. There was one significant difference in the terms of the offsetting options: the option involving the receipt of a premium would expire on the same day it was purportedly written, but the other option would expire weeks later. Mr. Rusnak’s bogus options were designed to exploit weaknesses in the control environment around him. Allfirst prepared no reports listing the expiring one-day options. And so no one at Allfirst paid any attention to them. In part, this was due to the fact that the system being used by Allfirst for options did not automatically alert supervisors if such options were not exercised. At the same time, Mr. Rusnak took advantage of an even bigger hole in the control environment: a failure in the back-office consistently to obtain transaction confirmations. Initially, Mr. Rusnak created bogus broker confirmations to validate his deals, but, with occasional exceptions, he stopped doing that in September 1998. Mr. Rusnak instead apparently managed to persuade an individual in the back office not to seek to confirm the purported pairs of options. There was no need for confirmations, he apparently argued, because there was no net transfer of cash. Perhaps this practice suited the convenience of the back-office staffer; the bogus options were purportedly with the Tokyo or Singapore branches of major international financial institutions, and to have made confirming telephone calls would have required the employee to work in the middle of the night. The upshot is that Mr. Rusnak’s scheme empowered him to create, at will, assets on Allfirst’s books— false assets— without ever having to pay for them. At the end of a day when he entered the pair of bogus options, the liability represented by the one day bogus option would not appear on Allfirst’s books. What was left— and what did appear on the books— was the purported unexpired deep-in-the-money option for which Allfirst had supposedly paid a large premium. The Allfirst balance sheet would reflect that the bank was holding a valuable asset— one that concealed the losses in Mr. Rusnak’s directional spot and forward trades. Mr. Rusnak would effectively keep the seemingly valuable but nonexistent asset on his book by repeatedly rolling it over into new bogus options as the original ones purportedly came due. The Prime Brokerage Accounts: In his real trading, Mr. Rusnak  continued to lose money in spot and forward transactions, and as he did so, he wrote more and more of the bogus options to cover up his losses. (There were a few months in late 1999 when he apparently made some money back and reduced his bogus options positions, but that was short-lived.) From 1999 on, the majority of the real, money-losing trading activity was conducted pursuant to “net settlement” agreements that Mr. Rusnak established with various financial institutions, including Bank of America and later Citibank (when his principal trading contact at another institution moved there). 
The net settlement arrangements with Bank of America and Citibank subsequently evolved into "prime brokerage accounts." Under the prime brokerage agreements, spot foreign exchange transactions between Allfirst and its counterparties were settled with the broker and "rolled" into a forward transaction. At the end of each day, all spot foreign exchange trades were swapped into a forward foreign exchange trade between the prime broker and Allfirst. These forward trades were cash settled in dollars at a fixed date each month. No settlement or cash collateral moved on these accounts on any other date. Prime brokers make money by receiving an agreed-upon fee for settlement of foreign exchange spot transactions ($8 to $10 per million settled) and also typically charge full bid-offer pricing on the forward transaction rolls. These accounts enabled Mr. Rusnak to increase significantly the size and scope of his real trading. Prime brokerage accounts are commonly used by hedge funds and other active traders, and— except for the monthly settlements and lack of collateral requirements (most prime brokerage arrangements call for daily mark-to-market collateral)— the terms of Allfirst's arrangements with its prime brokers were not particularly unusual. Such accounts are, however, unusual for banks and are not used by AIB's foreign exchange traders. Nonetheless, Mr. Rusnak managed to convince his supervisors, including the Allfirst treasurer, that the accounts made sense for Allfirst because they would eliminate the need for extensive back office operations. The investigation revealed bogus deals in Allfirst's records of prime brokerage activity. Such deals were input by Mr. Rusnak in the DEVON system (the system used to record prime brokerage account trades) and later reversed prior to the monthly settlement.

Mr. Rusnak's usage of Allfirst's balance sheet: Through his use of the prime brokerage accounts, Mr. Rusnak's trading activity grew. So did his losses and the bogus option positions. And so grew his use of Allfirst's balance sheet. At some point in 2000, the Allfirst treasurer directed that trading income should reflect a charge for the cost of balance-sheet usage. In 2001, Mr. Rusnak's balance sheet usage drew the attention of the finance department, auditors and others (including Mr. Ray, who noted that Mr. Rusnak's earnings were inadequate to justify his use of the balance sheet).

The deep-in-the-money options: Mr. Rusnak's solution, beginning in February 2001, was to sell real yearlong, deep-in-the-money options. These options enabled Mr. Rusnak to fund his losses and keep trading. He sold five such options for a total of $300 million. These options thus raised a large amount of cash that was used to help fund the monthly settlement of Mr. Rusnak's foreign exchange forward transactions. The options also allowed Mr. Rusnak to augment his core directional position. These real, deep-in-the-money options were essentially synthetic loans made to Allfirst by the counterparties, which included Citibank and Bank of America.

More bogus options: The real options Mr. Rusnak sold were liabilities of Allfirst. And they were recorded as liabilities on Allfirst's books— initially. But to disguise his losses and his new method of funding them, Mr. Rusnak needed to get them off the books.
To do that, he turned again to bogus options— deals that were purportedly transacted with the counterparties to the original deep-in-the-money options, and that gave the impression that the original options had been repurchased. The result, of course, was that Allfirst was saddled with massive, unrecorded liabilities. Manipulation of the Value at Risk calculation: Mr. Rusnak manipulated the principal measure used by Allfirst and AIB to monitor his trading: Value at Risk (VaR). One way in which Mr. Rusnak manipulated the VaR figures was through the bogus options he created: those options, as noted, appeared to hedge his real positions, and so they reduced the VaR. But Mr. Rusnak had another technique to manipulate VaR: false figures for so-called “holdover” transactions. Such transactions created the illusion of reducing Mr. Rusnak’s open currency position. Mr. Rusnak achieved this scheme by directly manipulating the inputs into the calculation of the VaR that were used by an employee in Allfirst’s risk-control group. Thus, while that employee was supposed to independently check the VaR, she relied on a spreadsheet that obtained information from Mr. Rusnak’s personal computer and that included figures for so-called “holdover” transactions—transactions entered into after a certain hour toward the end of each day. But these transactions were not real and, indeed, unlike the bogus options, were not even entered on to the bank’s trading software. Mr. Rusnak also engaged in a practice of entering false foreign exchange forward transactions in DEVON and reversing them before the next settlement date.
Thread: Circumference of a circle View Single Post ## Circumference of a circle What's the circumference of a circle of radius r, on a sphere of radius R?
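No answer appears in this excerpt, so the following is just a short sketch of my own, under the usual reading that r is the geodesic radius measured along the surface of the sphere (if r were instead a straight chord radius, the answer would change). A circle of geodesic radius r on a sphere of radius R subtends the angle $\theta = r/R$ at the sphere's centre, so its Euclidean radius is $\rho = R\sin(r/R)$ and its circumference is

$$C = 2\pi R \sin\!\left(\frac{r}{R}\right),$$

which correctly reduces to the flat-plane result $2\pi r$ when $r \ll R$ (or as $R \to \infty$).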
# Q: The deepest point known in any of the earth's oceans

The deepest point known in any of the earth's oceans is in the Marianas Trench, 10.92 km deep. (a) Assuming water is incompressible, what is the pressure at this depth? Use the density of seawater. (b) The actual pressure is 1.16 × 10^8 Pa; your calculated value will be less because the density actually varies with depth. Using the compressibility of water and the actual pressure, find the density of the water at the bottom of the Marianas Trench. What is the percent change in the density of the water?

Given data: Depth h = 10.92 km

For Part (a)

Assumption: water is incompressible.

Concept used: Since water is incompressible, its density remains constant throughout the depth.

Calculations: As we know, the pressure-depth relation is $$P=P_0+\rho g h$$, where P is the pressure at the depth h and $$P_0$$ is the atmospheric pressure.

Taking $$P_0=1.013*10^5\,Pa$$ and the density of sea water as $$\rho=1030\,kg/m^3$$, we have

$$P=1.013*10^5\,Pa+1030*9.81*10920=1.1*10^8\,Pa$$

So, the pressure at this depth = $$1.1*10^8\,Pa$$

For Part (b)

Assumption: water is compressible
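The worked solution above stops just after stating the assumption for part (b). Here is a minimal numerical sketch of both parts; the bulk modulus of seawater (about 2.2 × 10^9 Pa) is my own assumed value, since it is not quoted in the excerpt, while the actual bottom pressure 1.16 × 10^8 Pa comes from the problem statement.

# Part (a): pressure at depth, treating seawater as incompressible
rho0 = 1030.0        # surface density of seawater, kg/m^3
g = 9.81             # m/s^2
h = 10920.0          # depth of the Marianas Trench, m
p0 = 1.013e5         # atmospheric pressure, Pa
p_incompressible = p0 + rho0 * g * h
print(f"(a) P = {p_incompressible:.2e} Pa")    # about 1.1e8 Pa, matching the solution above

# Part (b): density at the bottom from the compressibility of water,
# using delta_rho / rho0 = delta_P / B with an assumed bulk modulus B
B = 2.2e9            # assumed bulk modulus of water, Pa
p_actual = 1.16e8    # actual pressure from the problem statement, Pa
delta_rho = rho0 * (p_actual - p0) / B
rho_bottom = rho0 + delta_rho
print(f"(b) rho_bottom = {rho_bottom:.0f} kg/m^3 ({100 * delta_rho / rho0:.1f} % increase)")

With these assumptions the density at the bottom comes out around 1080 kg/m^3, an increase of roughly 5 percent, which is the kind of correction part (b) is asking for.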
# Step functions

## Step Functions

What is a step function? The main characteristic of a step function, and the reason why it truly looks like a staircase doodle, is that this function happens to be constant in intervals. These intervals do not have the same value, and we end up with a function which "jumps" from one value to the next (following its own conditions) in a certain pattern. Every time the function jumps to a new interval it has a new constant value, and we can observe the "steps" on its graphic representation as horizontal lines. Take a look at the graphic representation below for a typical step function, where it is plain to see how the name of the function came about.

Notice from the figure above that we can also define a step function as a piecewise function of constant steps, meaning that the function is broken down into a finite number of pieces, each piece having a constant value; in other words, each piece is a constant function.

When talking about step functions we cannot forget to talk about the Heaviside function, also known as the unit step function. This is a function that is defined to have a constant value of zero up to a certain point on t (the horizontal axis), at which it jumps to a value of 1. This particular value at which the function jumps (or switches from zero to one) is usually taken as the origin in the typical coordinate system representation, but it can be any value c on the horizontal axis. Consequently, the Heaviside step function is defined as:

$$u_c(t) = u(t-c) = H(t-c) = \begin{cases} 0, & t < c \\ 1, & t \geq c \end{cases} \qquad (1)$$

Where c represents the point on t at which the function goes from a value of 0 to a value of 1. So, if we want to write down the mathematical expression of the Heaviside function depicted in figure 3, we would write it as: u3(t). Notice how we only wrote the math expression using the first type of notation found in equation 1; this is because that notation happens to be the most commonly used and is also the one we will continue to use throughout our lesson. It is still important, though, that you know about the other notations and keep them in mind just in case you find them throughout your studies.

From its definition, we can understand why the Heaviside function is also called the "unit step" function. As can be observed, a Heaviside function can only have values of 0 and 1; in other words, the function is always equal to zero before arriving at a certain value t=c at which it "turns on" and jumps directly into having a value of 1, and so it jumps in a step size of one unit. Notice how we have used the phrase "turning on" to describe the unit step function jumping from a zero value to a unit value; this is a very common way to refer to the behavior of Heaviside step functions and, interestingly enough, it comes from their real-life usage and the reason why they were invented. Historically, the physicist, self-taught engineer and mathematician Oliver Heaviside invented these step functions in order to describe the behaviour of a current signal when you flip the switch of an electric circuit on, thus allowing you to calculate the magnitude of the current from zero when the circuit is off to a certain value when it is turned on. There is an important note to clarify here. Electric current does not magically jump from a zero value to a high value.
When we have a constant current through a circuit, we know that this constant value started off from zero when the circuit was off and then reached its final value after gradually increasing in time. The thing is that electric current travels at a very high speed, so it is impossible for us (in a simple everyday setting) to see this gradual increase of the current from zero to its final value, since it happens in a very short instant of time; we therefore take it as a "jump" from one value to the next and describe it accordingly in graphic representations.

## Heaviside function properties

Although the Heaviside function itself can only have the values of 0 or 1 as mentioned before, this does not mean we cannot obtain a graphic representation of a higher jump using Heaviside step functions. It is actually a very simple task to obtain a higher value, since you just need to multiply it by any constant value that you want to have as the jump size. In other words, if you have a step function written as 3u5(t), you have a step function which has a value of zero until it gets to t=5, at which point the function has a final value of 3. This can be seen in the figure below:

One of the greatest properties of Heaviside step functions is that they allow us to model certain scenarios (like the one described for current in an on/off circuit) and mathematically solve for important information about these scenarios. Such cases tend to require the use of differential equations, and so here we have yet another tool to solve them. Since this is a course on differential equations, the most important point of this lesson is to give an introduction to a function which will aid in the solution of certain differential equations; this tool will be used along with others seen before, such as the Laplace transform. This will serve to come up with important formulas to be used and to be prepared for the next lesson, in which you will be solving differential equations with step functions. We will talk a bit more about this in the last section of this lesson; meanwhile, for a review of the definition of the unit step function and a list of a few of its identities, we recommend the next Heaviside step function article.

## Heaviside step function examples

Let us take a look at an example in which you will have to write all of the necessary unit step functions in order to completely describe the graphic representation found in figure 5.

#### Example 1

Write the mathematical expression for the following graph in terms of the Heaviside step function:

Let's divide this into parts so we can see how the function behaves at each different value given. And so, we will write a separate expression for each of the following pieces: for t < 3, for t=3 to t=4, for t=4 to t=6 and for t>6.

• For t < 3: Notice that this is a regular Heaviside function in which c=0, multiplied by 2 in order to obtain the jump of 2 units in size. This fits the requirement of the function being zero for all negative values of t, and then having a value of 2 for values of t from 0 to 3. And so, the expression is: 2u0(t)=2.
• For t=3 to t=4: In this range we have to cancel the unit step function that we had before; that means we need a negative unit step function here, but this one will start to be applied at t=3 and will have to be multiplied by 2 again in order to cancel the value of the previous expression. Thus, our expression for this part of the function is: -2u3(t).
• For t=4 to t=6: If we weren't to add any function at t=4, the value of the function would remain at zero to infinity, since the second function cancelled the first one; but instead, we see in the graph that we have a diagonal line increasing one unit of step size for each unit of distance traveled on t. Thus, since the function is increasing at the same rate as t, we could simply multiply a new unit step function which starts at 4 by t and be done with it; this would produce a diagonal line following the same behavior. The problem is that just multiplying u4(t) by t would produce a line that comes out of the origin instead of t=4, and for that, we need to multiply the unit step function by (t-4) so the ramp starts at the same time as the unit step function is applied (which is at t=4). And so the expression is: (t-4)u4(t). Notice (t-4)u4(t) produces the values for y of: 0 (when t=4), 1 (when t=5) and 2 (when t=6), which is what the graph requires.
• For t > 6: For this last piece of the graph it should already be easy to see that we just need to cancel our last function in order to bring the value of the graph back to zero. For that we use a negative unit step function which starts at t=6, and which should again be multiplied by (t-4). And so, the expression that cancels our last one and completes the graph is: -(t-4)u6(t).

We add all of the four pieces of function we found to produce the expression that represents the whole graph shown in figure 5:

$$f(t) = 2u_0(t) - 2u_3(t) + (t-4)u_4(t) - (t-4)u_6(t)$$

Due to this particular example having multiple step function samples, we continue onto the next section to work on more complicated problems. If you would like to continue practicing how to write down Heaviside step functions, we recommend you visit these notes on step functions for more Heaviside function examples along with a little introduction. Notice these notes also introduce the topic for our next section: unit step function Laplace transforms and the overall use of the Laplace transform when associated with Heaviside functions.

# Laplace transform of Heaviside function

You have already had an introduction to the Laplace transform in recent past lessons; still, at this time we do recommend you give the topic a review if you think it appropriate or necessary. The lesson on calculating Laplace transforms is of special use for you to be prepared for this section. Let us continue with the Heaviside step function and how we will use it along with the Laplace transform.

The Laplace transform will help us find the value of y(t) for a function that will be represented using the unit step function. So far we have talked about step functions in which the value is a constant (just a jump from zero to a constant value, producing a straight horizontal line in a graph), but we can have any type of function start at any given point in time (which is what we mostly represent with t). What are we saying? Well, think of the graphic representation of a function: you can have any function, with any shape, but this special case comes from the fact that the function will be zero through time, until a certain point at which the signal will turn on, and then it will "jump" into this "any shape" function behavior. This is what we call a "shifted function", and this is because we can think of any regular function graphed, and then say "oh, but we want this function to start happening later", and we "shift" it to start at a later point in time (at t=c).
Since these shifted functions will be equal to zero until a certain point c at which they will "turn on", they can be represented as:

$$u_c(t)\,f(t-c)$$

The shifted function then is defined as:

$$u_c(t)\,f(t-c) = \begin{cases} 0, & t < c \\ f(t-c), & t \geq c \end{cases}$$

Now let's see what happens when we take the Laplace transform of a shifted function! First, remember that the mathematical definition of the Laplace transform is:

$$\mathcal{L}\{f(t)\} = \int_0^{\infty} e^{-st} f(t)\,dt$$

Therefore the Laplace transform for the shifted function is:

$$\mathcal{L}\{u_c(t)f(t-c)\} = \int_0^{\infty} e^{-st} u_c(t) f(t-c)\,dt = \int_c^{\infty} e^{-st} f(t-c)\,dt$$

Notice how the integral gets simplified: the unit step function has been dropped and the lower limit of integration has moved from 0 to c. The reason for this is that although the range of the whole Laplace transform integral is from 0 to infinity, before c (whatever value c has) the unit step function is equal to zero, and therefore the portion of the integral from 0 to c vanishes. After c, the unit step function has a value of 1, and thus we can just take it as a constant value of 1 multiplying the rest of the integral, whose range is now from c to infinity.

Continuing with the simplification of the Laplace transform of the shifted function, we set x = t - c, which means that t = x + c, and so the transformation looks as follows:

$$\int_c^{\infty} e^{-st} f(t-c)\,dt = \int_0^{\infty} e^{-s(x+c)} f(x)\,dx = e^{-cs} \int_0^{\infty} e^{-sx} f(x)\,dx = e^{-cs}\,\mathcal{L}\{f(t)\} \qquad (5)$$

By making the integral depend on one single variable rather than a t-c term, we have simplified this transformation so we could obtain a quickly manageable formula we can readily use with future problems and our already known table of Laplace transforms from past lessons. Having solved equation 5 makes it easier to obtain another important formula: the unit step function Laplace transform. Notice, not the transform for a shifted function, but the Laplace transform of the unit step function (Heaviside function) itself and alone:

$$\mathcal{L}\{u_c(t)\} = e^{-cs}\,\mathcal{L}\{1\} = \frac{e^{-cs}}{s} \qquad (6)$$

If you notice, equation 5 was useful while obtaining equation 6, because taking the Laplace transform of the Heaviside function by itself can be taken as having a shifted function in which the f(t-c) part equals 1, and so you end up with the Laplace transform of a unit step function times 1, which results in the simple and very useful formula found in equation 6. Now let us finish this lesson by working on some unit step function transformation examples.

#### Example 2

Find the Laplace transform of each of the following step functions:

• Applying the Laplace transform to the function and using equations 5 and 6 we obtain:
• Using equation 5, we set x=t-c (for this case x=t-5) and work through the transformation on the second term of the last equation:

Notice that in order to solve the last term, we used the method of comparison with the table of Laplace transforms. You can find such a table in the past lessons related to the Laplace transform.

• Now let us solve the third transformation (remember we set x = t-c, which in this case is x=t-7):
• Now we put everything together to form the final answer to the problem:

$$F(s) = \frac{6e^{-3s}}{s} - \frac{e^{-5s}}{s-3} + \frac{6e^{-7s}}{s^3}$$

#### Example 3

Find the Laplace transform for each of the following functions in g(t):

• Now we use equation 5 and set up x=t-c to solve the Laplace transform:
• We separate the two terms found on the right hand side of the equation, and solve for the first one (for this case x = t - π) using the trigonometric identity sin(a+b)=sin(a)cos(b)+cos(a)sin(b). Therefore:
• Now let us solve the second term, where x = t-4, and therefore t = x+4:
• Putting the whole result together:

$$G(s) = \frac{e^{-\pi s}}{s^2+1} + e^{-4s}\left(\frac{4}{s^3} + \frac{16}{s^2} + \frac{32}{s}\right)$$

And now we are ready for our next section where we will be solving differential equations with what we learned today.
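Before moving on, here is a quick SymPy check of equations 5 and 6. This snippet is not part of the original lesson; laplace_transform and Heaviside are the library's own functions, and the exact form of the printed output can vary between SymPy versions.

import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Equation 6 with c = 3: the transform of u_3(t) should be exp(-3s)/s
print(sp.laplace_transform(sp.Heaviside(t - 3), t, s, noconds=True))

# Equation 5 with f(t) = t^2 and c = 5: the transform of u_5(t)*(t-5)^2 should be 2*exp(-5s)/s^3
print(sp.laplace_transform(sp.Heaviside(t - 5) * (t - 5)**2, t, s, noconds=True))

Both outputs should agree with what the shifting formula predicts, which makes this a handy way to double-check hand-worked exercises like Examples 2 and 3.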
If you would like to see some extra notes on the Heaviside function and its relation with another function which we will study in later lessons, the Dirac delta function, visit the link provided.

### Step functions

#### Lessons

A Heaviside Step Function (also just called a "Step Function") is a function that has a value of 0 up to some constant c, and then at that constant switches to 1. The Heaviside Step Function is defined as:

$$u_c(t) = \begin{cases} 0, & t < c \\ 1, & t \geq c \end{cases}$$

The Laplace Transform of the Step Function:

$\mathcal{L}\{u_{c}(t)\, f(t - c)\} = e^{-sc}\,\mathcal{L}\{f(t)\}$

$\mathcal{L}\{u_{c}(t)\} = \frac{e^{-sc}}{s}$

These formulae might be necessary for shifting functions:

$\sin{(a + b)} = \sin(a)\cos(b) + \cos(a)\sin(b)$
$\cos{(a + b)} = \cos(a)\cos(b) - \sin(a)\sin(b)$
$(a + b)^{2} = a^{2} + 2ab + b^{2}$

• Introduction
a) What is the Heaviside Step Function?
b) What are some uses of the Heaviside Step Function and what is the Laplace Transform of a Heaviside Step Function?
• 1. Determining Heaviside Step Functions
Write the following graph in terms of a Heaviside Step Function
• 2. Determining the Laplace Transform of a Heaviside Step Function
Find the Laplace Transform of each of the following Step Functions:
a) $f(t) = 6u_{3}(t) - e^{3t - 15}u_{5}(t) + 3(t - 7)^{2}u_{7}(t)$
b) $g(t) = -\sin{(t)}u_{\pi}(t) + 2t^{2}u_{4}(t)$
• 3. Determining the Inverse Laplace Transform of a Heaviside Step Function
Find the inverse Laplace Transform of the following function: $F(s) = \frac{4e^{-3s}}{(s - 2)(s + 3)}$
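To connect Example 1 back to something executable, here is a small NumPy/Matplotlib sketch (my own, not part of the lesson) that plots the expression derived there, 2u0(t) - 2u3(t) + (t-4)u4(t) - (t-4)u6(t); the resulting picture should match the graph described as figure 5.

import numpy as np
import matplotlib.pyplot as plt

def u(t, c):
    # Heaviside step u_c(t): 0 for t < c, 1 for t >= c
    return np.where(t >= c, 1.0, 0.0)

t = np.linspace(-1, 8, 1000)
f = 2*u(t, 0) - 2*u(t, 3) + (t - 4)*u(t, 4) - (t - 4)*u(t, 6)

plt.plot(t, f)
plt.xlabel('t')
plt.ylabel('f(t)')
plt.title('Example 1 built from Heaviside step functions')
plt.show()

The curve sits at 2 on [0, 3), drops to 0 on [3, 4), ramps linearly from 0 up to 2 on [4, 6), and returns to 0 for t > 6, exactly the behaviour worked out piece by piece in Example 1.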
# Chapter 1 - Introduction: Matter and Measurement - Additional Exercises: 1.86g

This statement is false. Corrected Version: The number 0.033 has the same number of significant figures as 0.0033.

#### Work Step by Step

When considering significant figures, it is important to realize that non-zero digits are significant figures. However, zeroes that appear before all of the non-zero digits (leading zeroes) are not significant figures. This becomes apparent using scientific notation:

$0.033=3.3\times10^{-2}$

$0.0033=3.3\times10^{-3}$

Converting into scientific notation makes it clear that zeroes located in front of all of the non-zero digits are not significant. Therefore, each number contains two significant figures.
# Capacity analysis

(Redirected from Capability analysis)

Capacity analysis is a method that allows the examination and determination of parameters relating to different types of capacity and capabilities for general transmission and movement in a specific time unit [1]. Capacity analysis is used for the needs of:

## Capacity Analysis in Transport

Transport capacity is the maximum carrying capacity of a means of transport in a given period of time, in specific conditions defined by current technical and organisational factors. Technical factors determine the payload or load capacity of the means of transport, the speed of the vehicle or the type of vehicle transported [2]. Organisational factors depend on the speed of loading and unloading, the route of the means of transport and the degree of utilisation of the payload [3].

## Capacity meters

The main measure characterising passenger transport of each type is capacity, defined as the product of the number of passengers (train, bus) and the distance travelled by them. The unit of transport work is the passenger-kilometre [4]. However, the complexity of passenger traffic makes this measure a synthetic one, not reflecting the actual complexity of the movement of vehicles. Therefore, a number of factors must be taken into account when calculating the transport work for, say, a train, such as [5]:

• the distance travelled,
• wagon class,
• type of passenger traffic.

## Assumptions for capacity analysis

The concept of capacity in passenger traffic (e.g. rail) can be defined as the number of passengers that can be moved in a given period of time, taking into account the convenience of travel [6]. Determination of the seating capacity of passenger coaches may concern the whole railway network, or only the line or section to be analysed [7].

## Calculation of capacity

The following formula may be used to calculate the seating capacity [8]:

Zol = Np · nw · nm · Wp · $$\frac{ls}{lp}$$ [passengers/day]

whereby:

• Np – number of trains running on a given line,
• nw – number of wagons in one train,
• nm – number of seats in one car,
• Wp – coefficient of utilisation of wagon capacity,
• ls – average distance of a train running on a given line,
• lp – average travel distance of one passenger.

For example:

• Number of trains running on the line: Np = 36 trains/day
• Average number of 1st-class wagons in one train: nw = 1.19 wagons
• Average number of 2nd-class wagons in one train: nw = 4.64 wagons
• Number of seats in a 1st-class car: nm = 9 compartments × 6 seats = 54 seats
• Number of seats in a 2nd-class car: nm = 10 compartments × 8 seats = 80 seats
• Average train running distance on a given line: ls = 2764/18 = 153.56 km

The wagon capacity utilisation factor is the quotient of the number of passengers in the wagon and the number of seats in the wagon [9]. When determining the maximum capacity of rolling stock, it is assumed that all the seats in the train are occupied, and therefore the coefficient value is Wp = 1. However, when determining the level of capacity utilisation of rolling stock, it is necessary to determine the coefficient on the basis of the current train turnout and to calculate its average value [10]. Let the exemplary average travel distance of one passenger be lp = 63 km. On the basis of the above data, the maximum capacity of 1st and 2nd class cars and the total capacity for both classes of cars can be calculated.

• Zol1 = 36 · 4.64 · 80 · 1 · 153.56/63 = 32606 passengers 2nd class/day.
• Zol2 = 36 · 1.19 · 54 · 1 · 153.56/63 = 5644 passengers 1st class/day.
• Zol = Zol1 + Zol2 = 32606 + 5644 = 38250 passengers/day.

## Capacity analysis in determining congestion and bandwidth

Capacity analysis is also used for traffic measurements and for determining congestion and bandwidth. In a stochastic concept of highway bandwidth analysis, the capacity of an object on a highway is treated as a random variable, not a constant value. In this way, the stochastic approach provides new measures of traffic flow performance based on traffic reliability aspects [11]. A method for estimating bandwidth distribution functions from empirical data, based on statistical methods for the analysis of lifetime data, has been introduced. This method was developed for the analysis of motorway throughput. However, it has been shown that the stochastic approach is also applicable to intersections [12].

## Capacity Analysis in IT

Each computer is equipped with memory, i.e. electronic systems for storing data and programs. Memory is an ordered (numbered from 0) set of elementary memory cells of a specified length. RAM (Random Access Memory) is a memory with free access (access to any RAM cell is possible at any time). RAM is an internal, operating memory; data can be read from and written to it. It contains the data necessary to perform calculations and the results of these calculations. This memory is volatile, i.e. when the computer is shut down by any means, the information contained in the RAM is lost [13]. Data from internal memory can be protected against loss by storing them in external memory, e.g. on a hard drive. Particular types of RAM differ from each other in [14]:

• capacity, read/write speed, power consumption and the voltage needed to power them.

Bandwidth is the amount of data that can be transferred in a unit of time. The effective frequency is the actual data rate of the memory. To calculate the bandwidth, you multiply the effective frequency by the width of the data bus. Example for DDR-400 memory [15]:

Bandwidth = 400 MHz * 64 b = 400 MHz * 8 B = 3,200 MB/s

## Capacity Analysis in Management

Capacity in management is understood as efficiency, effectiveness and potential [16]. It is a result of undertaken actions, described by the relation of the achieved effects to the incurred outlays. It means the best effects of production, distribution, sales or promotion, achieved at the lowest costs, in areas such as economy, enterprise, process, finance, management, investment or motivation [17]. Capacity determines the functioning of an organisation and its development. It is an important tool for measuring management effectiveness. It covers the phenomena inside and outside the organization. It shows the speed of reaction to challenges that flow from the market, as well as the expectations of its participants. Capacity is also a measure of effectiveness and efficiency, understood as a measure of the extent to which the set goals are achieved [18]. Capacity is measured using partial indicators characterising the effectiveness of particular production factors, e.g. labour productivity or capital productivity, and synthetic indicators of the effectiveness of the entire enterprise, e.g. return on capital, assets or sales. Capacity can be identified ex post and ex ante [19]. When calculating ex ante capacity, the expected effects are estimated with the use of specific resources and time, while ex post capacity is determined by the results of specific actions [20].
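As a quick cross-check of the seating-capacity formula from the transport section above, here is a small Python sketch; the function and variable names are mine, and the inputs simply mirror the worked example.

def seating_capacity(n_trains, n_wagons, seats_per_wagon, utilisation, avg_train_km, avg_passenger_km):
    # Zol = Np * nw * nm * Wp * ls / lp   [passengers/day]
    return n_trains * n_wagons * seats_per_wagon * utilisation * avg_train_km / avg_passenger_km

ls = 2764 / 18      # average train running distance, about 153.56 km
lp = 63             # average travel distance of one passenger, km

second_class = seating_capacity(36, 4.64, 80, 1.0, ls, lp)   # roughly 32,600 passengers/day
first_class = seating_capacity(36, 1.19, 54, 1.0, ls, lp)    # roughly 5,600 passengers/day
print(round(second_class), round(first_class), round(second_class + first_class))

Any small differences from the article's rounded figures (32606, 5644 and 38250) come only from how aggressively the intermediate values are rounded.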
Author: Natalia Chowaniak ## Footnotes 1. M. O’Neill, G. Warren, 2016, p.1 2. M. O’Neill, G. Warren, 2016, p.2-4 3. M. O’Neill, G. Warren, 2016, p.2-4 4. R. Chambers 2018, p.13-16 5. R. Chambers 2018, p. 13-16 6. M. O’Neill, C. Schmidt, G. Warren 2016, pp. 18-21 7. M. O’Neill, G. Warren, 2016, p.5-6 8. M. Vangelisti, 2006, pp.44-50 9. KJ. Cremers, Martijn, A. Petajisto, 2009, pp. 3329-3365 10. KJ. Cremers, Martijn, A. Petajisto, 2009, pp. 3329-3365 11. M. Vangelisti, 2006, p. 50 12. M. Vangelisti, 2006, p. 50 13. M. O’Neill, C. Schmidt, G. Warren 2016, pp.18-21 14. M. O’Neill, C. Schmidt, G. Warren 2016, pp.18-21 15. M. O’Neill, C. Schmidt, G. Warren 2016, pp.18-21 16. M. O’Neill, G, Warren, 2016,p.4 17. M. O’Neill, G, Warren, 2016b,p.4 18. M. O’Neill, G, Warren, 2016,p.5 19. KJ M Cremers, A. Petajisto, 2009, p.6 20. KJ M Cremers, A. Petajisto, 2009, p.6 .
# Example 4.3: The Effect of Nonlinear Diffusion

This example shows the effect of nonlinear diffusion for the equation with Neumann boundary condition on an interval.

## Initial setup

xmin=-1.5; xmax=1.5; T=1.0;
N = 255;
h = (xmax-xmin)/N;
x=xmin+(0:N)*h;
u0 = -(abs(x)<=1.0).*sin(pi*x);
nstep=64;

## Diffusion functions

We will use two different diffusion functions: a linear function A(u)=u and a nonlinear function (implemented in the function 'Anonlin' used below). The latter function is constant on the interval [-1/4,1/4], and here the parabolic equation degenerates and becomes hyperbolic.

u = linspace(-1,1,101);
plot(u,iden(u), u, Anonlin(u)), legend('A(u)=u','A(u) nonlinear',4)

## Linear diffusion

For the linear diffusion, we compute the solution for two different choices of the parameter \epsilon that determines the balance between convective and diffusive forces

epsilon=0.1;
u=diffBurg('iden',u0,epsilon,h,1,nstep,'neumann');
u1=u(:,nstep+1);
epsilon=0.01;
u=diffBurg('iden',u0,epsilon,h,1,nstep,'neumann');
u2=u(:,nstep+1);

## Nonlinear diffusion

Similarly, compute the solution for two different sizes of the diffusion parameter \epsilon

epsilon=0.5;
u=diffBurg('Anonlin',u0,epsilon,h,1,nstep,'neumann');
u3=u(:,nstep+1);
epsilon=0.05;
u=diffBurg('Anonlin',u0,epsilon,h,1,nstep,'neumann');
u4=u(:,nstep+1);

## Discussion

subplot(1,2,1);
plot(x,u1,x,u2,'--');
xlabel('\it x'); ylabel('\it u');
legend('\epsilon=0.1','\epsilon=0.01');
axis([xmin xmax -0.75 0.75]); title('Linear diffusion');
subplot(1,2,2);
plot(x,u3,x,u4,'--');
xlabel('\it x'); ylabel('\it u');
legend('\epsilon=0.5','\epsilon=0.05');
axis([xmin xmax -0.75 0.75]); title('Nonlinear diffusion');

With the linear diffusion function, the solution is uniformly parabolic, and although the solution develops a sharp gradient in the vicinity of the origin for \epsilon=0.01, it remains smooth. For the nonlinear diffusion function, on the other hand, the solution degenerates with a hyperbolic region for u in [-1/4,1/4]. In the hyperbolic region, there are no viscous forces and the solution has developed a stationary shock at the origin. Notice also the nonsmooth transitions between the hyperbolic and the parabolic regions for \epsilon=0.05.
• # question_answer

If the principal stresses corresponding to a two-dimensional state of stress are $\sigma_1$ and $\sigma_2$, with $\sigma_1$ greater than $\sigma_2$ and both tensile, then which one of the following would be the correct criterion for failure by yielding, according to the maximum shear stress criterion?

A) $(\sigma_1+\sigma_2)/2=\pm \,\sigma_{yp}/2$
B) $\sigma_1/2=\pm \,\sigma_{yp}/2$
C) $\sigma_2/2=\pm \,\sigma_{yp}/2$
D) $\sigma_1=\pm \,\sigma_{yp}$

According to the maximum shear stress (Tresca) criterion, the third principal stress in a two-dimensional state of stress is zero, so with both $\sigma_1$ and $\sigma_2$ tensile the largest shear stress is $\tau_{\max}=\frac{\sigma_1-0}{2}=\frac{\sigma_1}{2}$. Yielding therefore occurs when $\frac{\sigma_1}{2}=\pm \frac{\sigma_{yp}}{2}$, i.e. $\sigma_1=\pm \,\sigma_{yp}$, which is option D.
# Conjugated Cycle Selection The valence bond model ("VB model") remains widely used in chemistry, and serves as the basis for most molecular representation schemes in cheminformatics. Useful though it may be, the VB model carries some important liabilities. One relates to electron delocalization of the type found in benzene and its analogs. Here, the VB model's exclusive focus on two-atom bonding leads to asymmetry artifacts. This in turn leads to problems in certain applications, most notably canonicalization. Ideally, it would be possible to correct these artifacts using minimal intervention. This article describes one approach. ## Delocalization-Induced Molecular Equality The VB model considers bonding to be a local phenomenon that occurs between two atoms only. This approximation works for many structures across many applications, but it does have one spectacular failure mode. Molecules such as benzene exhibit multi-atom bonding. The VB model can be used to represent benzene and its derivatives, but only by freezing electron density between two atoms at a time. The result is often referred to as a "resonance structure." In many contexts, this structural mis-specification can be ignored. For example, the exact molecular weight and total particle count of benzene and its derivatives can be computed from VB-based representations with high accuracy and precision. Other descriptors such as molecular formula can likewise be computed. But in some applications, the artifactual asymmetry caused by the VB model leads to unacceptable errors. Canonicalization is a case in point. Consider the canonicalization of 1,2-difluorobenzene (below). Two distinct, yet equivalent, VB structures can be constructed. Canonicalization of VB-based representations demands either the exclusive use of one representation, or a method to suppress asymmetry artifacts. A previous article associated the term "Delocalization-Induced Molecular Equality" (DIME) with this phenomenon. ## The Atom Selection Problem One technique for mitigating the effects of DIME is atom selection ("selection"). Selection is the process of identifying those atoms that collectively define a node-induced subgraph covering all atoms and bonds leading to DIME. For example, selecting all of the carbon atoms in 1,2-difluorobenzene would yield a subgraph that also included all carbon-carbon bonds. What happens after atom selection depends on the system. In Balsa (and, by extension, SMILES), a "delocalization subgraph" propagates into the syntax and semantics of string encodings. Ultimately these encodings eliminate asymmetry artifacts that the exclusive use of the VB model would otherwise impose. After selection, the carbon atoms in 1,2-difluorobenzene are all encoded with the same lower case character (c). Even so, atom selection has its own requirements. A canonicalization scheme would need some way of drawing the line between conjugated double bonds whose atoms must be selected, and conjugated double bonds whose atoms must not be selected. Moreover, selection is not exactly cheap. The next sections explain the cost of selection itself, but the non-negligible costs of de-selection should also be considered. De-selection will be required in several contexts, including translation into non-selectable serialization formats (e.g., Molfile) and 2D depiction. Deselection requires at the very least a maximum matching implementation. To minimize encoding and decoding costs, the set of selected atoms should be kept as small as possible. 
The smaller the selection set, the less work is required to decode it. Selecting atoms that don't participate in DIME adds cost to an already expensive set of operations. ## Conjugated Circuits Electron delocalization by itself poses no problem for the canonicalization of VB-based molecular representations. The reason is simple. Without a way to generate multiple equal representations, only one VB representation is possible. The trouble arises from the presence of multiple equivalent localized forms. And this can only occur when delocalization occurs along a cyclic path. Therefore, a minimal atom selection set can be obtained by adding only those atoms found along delocalized, cyclic paths. More specifically, we'd like to select all of the atoms that lie on a conjugated circuit. Milan Randić, who introduced this concept in 1976, defined conjugated circuits as "circuits of full conjugation contained in Kekulé forms of a molecule." He later offered the related definition: "Conjugated circuits are circuits within Kekulé valence structures in which there is regular alternation of C=C and C-C bonds." Randić used conjugated circuits to solve several theoretical problems, but they are also well-suited to atom selection. Before moving on to the application of conjugated circuits to atom selection, the difference between "circuit" and "cycle" is worth discussing. In graph theory, both cycles and circuits represent alternating sequences of nodes and edges starting and ending at the same node. But a circuit allows repetition of internal nodes, whereas a cycle does not. The butterfly graph, depicted below, illustrates this difference. The node sequence (0, 1, 2, 3, 1, 4, 0) represents a circuit. This sequence is not a cycle because internal node 1 occurs twice. However, the node sequence (0, 1, 4, 0) defines both a cycle and a circuit. Alternatively, consider the set of all cycles to be a subset of the set of all circuits. What disqualifies a circuit as a cycle is the presence of at least one repeated internal node. In cheminformatics this distinction is largely moot. The repeated node in a circuit must by definition have degree four or higher. In VB-representations, this implies an atomic valence of four or higher. Such atoms can only engage in delocalization through expanded valence (e.g., sulfur). Tetravalence is furthermore often associated with tetrahedral substituent geometry, which tends to block conjugation. In the context of cheminformatics representations based on the VB model, the terms "conjugated circuit" and "conjugated cycle" can therefore be used interchangeably. ## Conjugated Cycle Selection The definition of conjugated circuit can be combined with an understanding of exactly why electron delocalization poses a problem for canonicalization, yielding a simple procedure for minimal atom selection that I call "conjugated cycle selection": 1. Construct the set of all cycles (C) through exhaustive enumeration. Any correct algorithm will do, but the one developed by Hanser et al. is especially attractive for its simplicity of implementation and efficiency. 2. For each cycle c in C, test for conjugation. Various electronic factors might be considered here, including whether triple, dative, conformationally restricted, or zwitterionic bonds should be eligible. 3. If c is conjugated, select all of its unselected atoms. ## Examples A few examples help illustrate how conjugated cycle selection differs from what might be considered common practice. 
A naive algorithm might select all of the atoms in the acenaphthylene ring system. These atoms are, after all, contained in the same cycle system and connected to each other through conjugated bonds. However, this approach leads to over-selection in that selecting the two atoms across the five-membered ring bridge is unnecessary. This can be recognized by considering the set of conjugated cycles. Neither of the two bridging atoms is contained within a conjugated cycle. These atoms can therefore never participate in DIME. Asymmetry artifacts can be completely eliminated through the selection of only the ten atoms found in the naphthalene core structure.

Indole offers another example. One often finds SMILES in which all atoms in both the six- and five-membered rings are selected. In contrast, conjugated cycle selection leads to a subgraph containing only the atoms in the six-membered ring. Selection of other atoms is unnecessary from the perspective of suppressing DIME. If the atoms in the isolated double bond of indole don't need to be selected, then none of the atoms in pyrrole need be selected for the same reason. Fully-selected pyrrole and imidazole rings are nevertheless common even though such over-selection adds needlessly to the computational burden placed on readers.

Conjugated cycle selection is especially useful for cases in which the Hückel 4n + 2 rule might seem difficult to apply. Take, for example, pentalene. Although the 4n + 2 rule as taught in undergraduate organic chemistry courses does not apply to fused cycles, it often is used anyway. In the event, pentalene's 8 π electrons might discourage atom selection because of the 4n + 2 violation. Yet two equivalent VB structures are possible for substituted analogs, and DIME is clearly possible. Conjugated cycle selection alerts us to this fact by finding the eight-membered conjugated cycle containing all of the molecule's atoms.

It might seem as if conjugated cycle selection could fail for naphthalene in some cases. If so, recall that naphthalene contains three cycles: two of length six (six atoms), and one of length ten (ten atoms). Regardless of the exact VB-structure drawn, two of these cycles must be conjugated. Therefore, all of naphthalene's atoms will be selected if the set of all cycles is used, regardless of the specific VB representation. Naphthalene also illustrates why the set of all cycles must be used. Cycle basis sets such as the smallest set of smallest rings (SSSR) are not suitable because they omit some cycles. Those missing cycles may be conjugated, which would lead to under-selection.

## Previous Work

The idea of minimal atom selection was explored earlier by Mann and Thiel in Kekulé structure enumeration yields unique SMILES. The approach differs from the one described here, but shares the use of the set of all cycles.

## Conclusion

Atom selection offers a way to patch the failures of the VB model so that representations based on it can be canonicalized. However, this raises the new problem of efficient atom selection. Basing selection on the set of all conjugated cycles offers a straightforward, correct, and minimal solution.
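As a rough illustration of the three-step procedure described above, here is a small Python sketch. It is my own, not from the article, and far from production cheminformatics code: the molecule is encoded as a toy bond map, cycles are enumerated with a naive exhaustive DFS rather than the Hanser et al. algorithm, and "conjugated" is simplified to mean that single and double bonds strictly alternate around the cycle.

# Toy molecular graph: atoms are integers, bonds map frozenset({i, j}) -> bond order (1 or 2).
# Benzene drawn as one Kekulé structure: alternating single/double bonds around a six-ring.
bonds = {frozenset({i, (i + 1) % 6}): 2 if i % 2 == 0 else 1 for i in range(6)}
atoms = range(6)

def neighbours(a):
    return [next(iter(b - {a})) for b in bonds if a in b]

def all_cycles():
    # Exhaustive enumeration of simple cycles; fine for toy graphs only.
    cycles = set()
    def dfs(path, visited):
        for n in neighbours(path[-1]):
            if n == path[0] and len(path) >= 3:
                cycles.add(frozenset(path))
            elif n not in visited:
                dfs(path + [n], visited | {n})
    for a in atoms:
        dfs([a], {a})
    return cycles

def ring_order(cycle):
    # Walk the cycle to list its atoms in ring order.
    cycle = set(cycle)
    order, prev = [next(iter(cycle))], None
    while True:
        nxt = next(n for n in neighbours(order[-1]) if n in cycle and n != prev)
        if nxt == order[0]:
            return order
        prev = order[-1]
        order.append(nxt)

def is_conjugated(cycle):
    # Simplified conjugation test: bond orders strictly alternate around the ring.
    order = ring_order(cycle)
    ring_bonds = [bonds[frozenset({order[i], order[(i + 1) % len(order)]})] for i in range(len(order))]
    return all(ring_bonds[i] != ring_bonds[(i + 1) % len(ring_bonds)] for i in range(len(ring_bonds)))

selected = set()
for cycle in all_cycles():
    if is_conjugated(cycle):
        selected |= cycle
print(sorted(selected))   # benzene: all six atoms end up selected

For the benzene toy input every atom lands in the selection set, while an isolated double bond (which lies on no cycle at all) would select nothing, in the spirit of the indole and pyrrole discussion above.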
My Math Forum > Calculus

May 27th, 2012, 03:40 AM #1 Senior Member Joined: Apr 2012 Posts: 135 Thanks: 1

boundaries

How can I find the boundaries of a convergent sequence? What are the conditions?

May 27th, 2012, 12:04 PM #2 Global Moderator Joined: May 2007 Posts: 6,764 Thanks: 697

Re: boundaries

Are you asking about a series or a sequence? Is there a variable involved? Usually you would use a ratio test in some form, but you need to be more specific about what you have in mind. An example would help.

May 27th, 2012, 06:05 PM #3 Math Team Joined: Sep 2007 Posts: 2,409 Thanks: 6

Re: boundaries

As mathman said, a "sequence" or "series" does not have a "boundary". I think that you are talking about a "power series", that is, a series of the form $\sum a_nx^n$, which always has an "interval of convergence". The simplest way to find the interval of convergence of a power series is, again as mathman said, to use the ratio test. Look at $\left|\frac{a_{n+1}x^{n+1}}{a_nx^n}\right|= \left|\frac{a_{n+1}}{a_n}\right||x|$. The series converges where the limit of that, as n goes to infinity, is less than 1, and diverges where it is greater than 1. The "boundary" points, or endpoints, of the interval of convergence are therefore where $\left|\frac{a_{n+1}}{a_n}\right||x|$ tends to 1, i.e. at $x = \pm 1/L$ with $L=\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|$ (assuming $L \ne 0$); convergence at the endpoints themselves has to be checked separately.
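For a concrete illustration of the ratio test at work, take $a_n = \frac{1}{n}$, i.e. the power series $\sum_{n\ge 1} \frac{x^n}{n}$:

$$\left|\frac{a_{n+1}x^{n+1}}{a_n x^n}\right| = \frac{n}{n+1}\,|x| \;\longrightarrow\; |x| \quad \text{as } n \to \infty,$$

so the series converges for $|x| < 1$ and diverges for $|x| > 1$. The endpoints have to be checked separately: at $x = 1$ the series is the divergent harmonic series, while at $x = -1$ it converges by the alternating series test, so the interval of convergence is $[-1, 1)$.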
Tag Info

Accepted: Alternative to CBC mode encryption?
GCM is a very good alternative; it provides built-in message authentication, so encrypted messages cannot be manipulated by an attacker. The encryption itself is based on CTR mode, which is well ...

Cryptography based on #P-complete problems
We don't know how to base cryptography even on $\mathbf{NP}$-completeness, let alone $\#\mathbf{P}$-completeness. Moreover, there are known barriers to basing cryptography on $\mathbf{NP}$-completeness: ...

What is the difference between AES-CCM8 mode and AES-CCM mode?
8 is the tag length (in bytes). CCM is a family of AEAD (authenticated encryption with associated data) algorithms which is parametrized by: a block cipher algorithm (e.g. AES-128, AES-192, AES-256, ...

What is a ciphering key?
A cipher is defined for cryptography as "A cryptographic system using an algorithm that converts letters or sequences of bits into ciphertext." Now it seems that the origin of the term ...

What is the difference between AES-CCM8 mode and AES-CCM mode?
As defined in section 6.1 of RFC6655, AES-CCM8 differs in that the size of the authentication tag is 8 bytes (i.e. 64 bits) rather than 16 bytes (i.e. 128 bits) for AES-CCM. The NIST CCM specification ...
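As an illustration of the GCM recommendation above, here is a minimal sketch using the Python cryptography package's AESGCM class (the package is assumed to be installed; the key handling, nonce handling and message are purely illustrative):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative only: in practice the key would come from a KDF or a key store.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)           # 96-bit nonce; must never repeat for the same key
plaintext = b"attack at dawn"
aad = b"header-v1"               # associated data: authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, aad)   # ciphertext with 16-byte tag appended
recovered = aesgcm.decrypt(nonce, ciphertext, aad)   # raises InvalidTag if anything was altered
assert recovered == plaintext
```

Because the tag is verified during decryption, any modification of the ciphertext, nonce or associated data makes decrypt fail instead of silently returning corrupted plaintext, which is the property unauthenticated CBC on its own does not provide.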
# Natural number which can be expressed as sum of two perfect squares in two different ways?

Ramanujan's number is $1729$, the least natural number which can be expressed as the sum of two perfect cubes in two different ways. But can we find a number which can be expressed as the sum of two perfect squares in two different ways? One example I got is $50$, which is $49+1$ and $25+25$. But here the second pair contains the same numbers. Does anyone have other examples?

$$65 = 64 + 1 = 49 + 16$$ This will work for any number that's the product of two primes, each of which is congruent to $1$ mod $4$. For more than two ways, multiply more than two such primes.

Note that $a^2 + b^2 = c^2 + d^2$ is equivalent to $a^2 - c^2 = d^2 - b^2$, i.e. $(a-c)(a+c) = (d-b)(d+b)$. If we factor any odd number $m$ as $m = uv$, where $u$ and $v$ are both odd and $u < v$, we can write this as $m = (a-c)(a+c)$ where $a = (u+v)/2$ and $c = (v-u)/2$. So any odd number with more than one factorization of this type gives an example. Thus from $m = 15 = 1 \cdot 15 = 3 \cdot 5$, we get $8^2 - 7^2 = 4^2 - 1^2$, or $1^2 + 8^2 = 4^2 + 7^2$. From $m = 21 = 1 \cdot 21 = 3 \cdot 7$ we get $11^2 - 10^2 = 5^2 - 2^2$, or $2^2 + 11^2 = 5^2 + 10^2$.

• For another example, instead of $m$ odd, we can take $m$ as a multiple of $8$. Then $m$ can again be written as a product $m=uv$ where $u$ and $v$ have the same parity, in two ways. For $m=40$, for example, first $$m=40=4\cdot 10=(7-3)(7+3)=7^2-3^2$$ and secondly $$m=40=2\cdot 20=(11-9)(11+9)=11^2-9^2$$ and we get the example $$7^2+9^2=3^2+11^2.$$ – Jeppe Stig Nielsen Mar 10 '18 at 20:17

The following example easily generalizes: \begin{align} 5&=(2+i)(2-i)=4+1\\ 13&=(3+2i)(3-2i)=9+4\\ 5\cdot13&=((2+i)(3+2i))((2-i)(3-2i))=(4+7i)(4-7i)=16+49\\ &=((2+i)(3-2i))((2-i)(3+2i))=(8-i)(8+i)=64+1 \end{align}

Well, as much as I can think of, we have at least one class of examples in $$\boxed{125k^2=(11k)^2+(2k)^2=(10k)^2+(5k)^2} \,\,\,\,\,\,\,\,\, \text{for } \,\,\,\, k\in \mathbb{N}$$

There are many numbers that can be expressed as the sum of two squares in more than one way. For example, $$65=64+1 =49+16$$ $$85=81+4 =49+36$$ $$125=121+4 =100+25$$ $$130=121+9 =81+49$$ $$145=144+1 =64+81$$ $$170=169+1 =121+49$$ $$185=169+16 =121+64$$ and so on... You can also read this PDF for more details. Hope it helps.

• Why do you write the numbers, if there are so many good formulas? – individ Dec 22 '16 at 5:27
• Are all these numbers expressible as the sum of two squares in two different ways divisible by $5$? (Oh jeez, that question would not be great to ask in one breath.)..... Edit: Nope, because $5\nmid 629$. – Mr Pie Feb 25 '18 at 10:54

You can just multiply any such number by a constant and you will get another one. E.g. $65=8^2+1^2=7^2+4^2$. If you multiply $8$, $1$, $7$ and $4$ by a constant $k=2$ (or any other number), you will get another number which can be expressed as a sum of two squares in two different ways. Here the new number formed is $16^2+2^2=14^2+8^2=260$.

I too got one, but without the above proof: $629 = 23^2 + 10^2 = 25^2 + 2^2$.

• Put a dollar sign $\$$ at the beginning and end of your equation to form the following: $$629 = 23^2 + 10^2 = 25^2 + 2^2.$$ By the way, it is also equal to $9^3 - 10^2$ and just one off $14^3 - 46^2$ (fun fact).
– Mr Pie Feb 25 '18 at 10:56

In the same way we can also write $650$ as the sum of the squares of two prime numbers in two different ways, i.e. $650=11^2+23^2=17^2+19^2$, since $19^2-11^2=23^2-17^2$.

$2465$ can be expressed as the sum of two squares in four different ways: $8^2 + 49^2$, $16^2 + 47^2$, $23^2 + 44^2$ and $28^2 + 41^2$.

• Why the two downvotes? This answer seems fine... what am I missing?! – user1729 Mar 5 '18 at 14:45

The product of any two primes of the type $4k+1$ will do the trick. Take the product of any three primes of the type $4k+1$ and you have 4 different ways, etc. Basically (all easy to prove): a prime of the type $p=4k+1$ has a unique $a^2+b^2 = p$ solution (proven by Fermat). A prime of the type $p= 4k+3$ has NO solution. And for a product of two primes, say $p=a^2+b^2$ and $q= c^2+d^2$, we have $$pq = (ac+bd)^2+(ad-bc)^2 = (ac-bd)^2 +(ad+bc)^2 .$$

• Please use MathJax to format. – Saad Jan 12 at 3:15

The lowest integer that is the sum of two integer squares in two different ways is $50$, but that case involves one repeated number: $5^2 + 5^2 = 25 + 25 = 50 = 7^2 + 1^2$. The lowest integer that is the sum of two integer squares in two different ways with all different numbers is $65$. The lowest integer that is the sum of two integer squares in THREE different ways is $325$. The lowest number that is the sum of two integer squares in FOUR different ways is $1105$. Interestingly, the lowest integer that is the sum of two integer squares in SIX different ways is lower than the lowest integer that is the sum of two integer squares in FIVE different ways (you can work all these out for yourselves!).
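The specific values quoted in these answers ($50$, $65$, $85$, $325$, $1105$, and so on) are easy to verify by brute force. The following short Python sketch, written purely for illustration, lists every $n$ up to a limit together with its representations $a^2+b^2$ with $0 < a \le b$, whenever there are at least two:

```python
from collections import defaultdict

def two_square_representations(limit):
    """Map n -> list of pairs (a, b) with 0 < a <= b and a*a + b*b == n, for n <= limit."""
    reps = defaultdict(list)
    a = 1
    while 2 * a * a <= limit:
        b = a
        while a * a + b * b <= limit:
            reps[a * a + b * b].append((a, b))
            b += 1
        a += 1
    return reps

for n, pairs in sorted(two_square_representations(700).items()):
    if len(pairs) >= 2:
        print(n, pairs)
# First lines: 50 [(1, 7), (5, 5)], 65 [(1, 8), (4, 7)], 85 [(2, 9), (6, 7)], ...
```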
# Modified embedded atom method (MEAM) potential with user defined functions

This example demonstrates the use of the modified embedded atom method (MEAM) potential routine together with user defined functions. The potential form and parameters have been taken from [LenSadAlo00]. The example also illustrates the use of XML Inclusions. Note that the potential is merely evaluated for a couple of simple lattice structures and none of the parameters are fitted.

## Location

examples/potential_MEAM

## Input files

• main.xml: main input file (Silicon MEAM potential by Lenosky et al.; medium; Si)
• potential.xml: potential parameter set, included in the main input file via XML Inclusions (parameter values: -42.66967 4.5 -1.0 3.5 -3.61894 3.5 -13.95042 1.13462 0.73514 0.61652)
• structures.xml: input structures, included in the main input file via XML Inclusions (Si 3.6; Si 3.0; Si 2.6 2.1; Si 2.6 1.1; Si 5.; Si; Si 3. 1.6 0.375)

## Output (files)

• The final properties (as well as parameters) are written to standard output.

This is program version 0.1.6
Reading job file main.xml
-------------------------------------------------------
Parsing input file(s)
-------------------------------------------------------
-------------------------------------------------------
Computing structure properties
Structure 'A1 (fcc)':
  total-energy: -3.91431 eV
  atomic-energy: -3.91431 eV/atom
  total-volume: 16.0672 A^3
  atomic-volume: 16.0672 A^3/atom
  bulk-modulus: 8688.3 GPa
  lattice-parameter: 4.00559 A (relaxed) [0:]
Structure 'A2 (bcc)':
  total-energy: -3.8948 eV
  atomic-energy: -3.8948 eV/atom
  total-volume: 15.4127 A^3
  atomic-volume: 15.4127 A^3/atom
  bulk-modulus: 8729.77 GPa
  lattice-parameter: 3.13547 A (relaxed) [0:]
Structure 'A3 (hcp) - large c/a':
  total-energy: -7.89209 eV
  atomic-energy: -3.94605 eV/atom
  total-volume: 30.7255 A^3
  atomic-volume: 15.3627 A^3/atom
  bulk-modulus: 5891.52 GPa
  lattice-parameter: 2.58945 A (relaxed)
  ca-ratio: 2.04336 (relaxed)
Structure 'A3 (hcp) - small c/a':
  total-energy: -8.64158 eV
  atomic-energy: -4.32079 eV/atom
  total-volume: 27.8554 A^3
  atomic-volume: 13.9277 A^3/atom
  bulk-modulus: 5049.82 GPa
  lattice-parameter: 3.07713 A (relaxed)
  ca-ratio: 1.10393 (relaxed)
Structure 'A4 (diamond)':
  total-energy: -9.22485 eV
  atomic-energy: -4.61242 eV/atom
  total-volume: 40.0378 A^3
  atomic-volume: 20.0189 A^3/atom
  bulk-modulus: 110.009 GPa
  lattice-parameter: 5.43055 A (relaxed) [0:]
Structure 'hexagonal diamond':
  total-energy: -18.4867 eV
  atomic-energy: -4.62167 eV/atom
  total-volume: 80.0477 A^3
  atomic-volume: 20.0119 A^3/atom
  bulk-modulus: 112.553 GPa
  lattice-parameter: 3.84095 A (relaxed) [0:]
  ca-ratio: 1.63118 (relaxed)
  u-parameter: 0.375008 (relaxed)
-------------------------------------------------------
-------------------------------------------------------
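The bulk moduli and equilibrium lattice parameters in the output above are computed by the program itself. Purely as an illustration of the relation they rest on, $B = V\,\mathrm{d}^2E/\mathrm{d}V^2$ evaluated at the equilibrium volume, the following sketch fits a parabola to a hypothetical energy-volume curve with NumPy (the sample points are invented for the example and do not come from this calculation):

```python
import numpy as np

# Hypothetical E(V) samples near equilibrium (eV and A^3 per atom); illustrative only.
V = np.array([18.0, 19.0, 20.0, 21.0, 22.0])
E = np.array([-4.580, -4.605, -4.612, -4.606, -4.590])

c2, c1, c0 = np.polyfit(V, E, 2)   # E(V) ~ c2*V^2 + c1*V + c0
V0 = -c1 / (2.0 * c2)              # equilibrium volume, where dE/dV = 0
B = V0 * 2.0 * c2                  # B = V * d2E/dV2, in eV/A^3
EV_A3_TO_GPA = 160.21766           # 1 eV/A^3 is about 160.2 GPa
print(f"V0 = {V0:.2f} A^3/atom, B = {B * EV_A3_TO_GPA:.1f} GPa")
```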
Share # RD Sharma solutions for Class 10 Maths chapter 16 - Probability [Latest edition] Course Textbook page ## Chapter 16: Probability Ex. 16.1Ex. 16.4Ex. 16.2Others #### RD Sharma solutions for Class 10 Maths Chapter 16 Probability Exercise 16.1, 16.4, 16.2 [Pages 20 - 26] Ex. 16.1 | Q 1 | Page 20 The probability that it will rain tomorrow is 0.85. What is the probability that it will not rain tomorrow? Ex. 16.1 | Q 2.2 | Page 20 A die is thrown. Find the probability of getting 2 or 4 Ex. 16.1 | Q 2.3 | Page 20 A die is thrown. Find the probability of getting a multiple of 2 or 3 Ex. 16.1 | Q 2.4 | Page 20 A die is thrown. Find the probability of getting an even prime number Ex. 16.1 | Q 2.5 | Page 20 A die is thrown. Find the probability of getting a number greater than 5 Ex. 16.1 | Q 2.6 | Page 20 A die is thrown once. Find the probability of getting a number lying between 2 and 6; Ex. 16.1 | Q 3.1 | Page 20 Three different coins are tossed together. Find the probability of getting exactly two heads. Ex. 16.1 | Q 3.2 | Page 20 Three different coins are tossed together. Find the probability of getting at least two heads. Ex. 16.1 | Q 3.3 | Page 20 Three coins are tossed together. Find the probability of getting at least one head and one tail Ex. 16.1 | Q 3.4 | Page 20 Three coins are tossed together. Find the probability of getting no tails Ex. 16.1 | Q 4 | Page 20 A and B throw a pair of dice. If A throws 9, find B’s chance of throwing a higher number. Ex. 16.1 | Q 5 | Page 20 Two unbiased dice are thrown. Find the probability that the total of the numbers on the dice is greater than 10. Ex. 16.1 | Q 6.01 | Page 20 A card is drawn at random from a well-shuffled deck of playing cards. Find the probability that the card drawn is a black king. Ex. 16.1 | Q 6.02 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is either a black card or a king Ex. 16.1 | Q 6.03 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is black and a king Ex. 16.1 | Q 6.04 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is a jack, queen or a king Ex. 16.1 | Q 6.05 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is neither a heart nor a king Ex. 16.1 | Q 6.06 | Page 20 A card is drawn at random from a well-shuffled deck of playing cards. Find the probability that the card drawn is a card of spade or an ace. Ex. 16.1 | Q 6.07 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is neither an ace nor a king Ex. 16.1 | Q 6.08 | Page 20 A card is drawn at random from a well shuffled pack of 52 playing cards. Find the probability of getting neither a red card nor a queen. Ex. 16.1 | Q 6.09 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is other than an ace Ex. 16.1 | Q 6.1 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is a ten Ex. 16.1 | Q 6.11 | Page 20 One card is drawn from a well-shuffled deck of 52 cards. Find the probability of getting a spade. Ex. 16.1 | Q 6.12 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is a black card Ex. 16.1 | Q 6.13 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is the seven of clubs Ex. 
16.1 | Q 6.14 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is jack Ex. 16.1 | Q 6.15 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is the ace of spades Ex. 16.1 | Q 6.16 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is a queen Ex. 16.1 | Q 6.17 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is a heart Ex. 16.1 | Q 6.18 | Page 20 A card is drawn at random from a pack of 52 cards. Find the probability that card drawn is a red card Ex. 16.1 | Q 7 | Page 21 In a lottery of 50 tickets numbered 1 to 50, one ticket is drawn. Find the probability that the drawn ticket bears a prime number. Ex. 16.1 | Q 8 | Page 21 An urn contains 10 red and 8 white balls. One ball is drawn at random. Find the probability that the ball drawn is white. Ex. 16.1 | Q 9.2 | Page 21 A bag contains 3 red balls, 5 black balls and 4 white balls. A ball is drawn at random from the bag. What is the probability that the ball drawn is Red Ex. 16.1 | Q 9.3 | Page 21 A bag contains 3 red balls, 5 black balls and 4 white balls. A ball is drawn at random from the bag. What is the probability that the ball drawn is Black Ex. 16.1 | Q 9.4 | Page 21 A bag contains 3 red balls, 5 black balls and 4 white balls. A ball is drawn at random from the bag. What is the probability that the ball drawn is Not red Ex. 16.1 | Q 10 | Page 21 What is the probability that a number selected from the numbers 1, 2, 3, ..., 15 is a multiple of 4? Ex. 16.1 | Q 11 | Page 21 A bag contains 5 white and 7 red balls. One ball is drawn at random. What is the probability that ball drawn is not black? Ex. 16.1 | Q 12 | Page 21 A bag contains 6 red, 8 black and 4 white balls. A ball is drawn at random. What is the probability that ball drawn is not black? Ex. 16.1 | Q 13 | Page 21 Tickets numbered from 1 to 20 are mixed up and a ticket is drawn at random. What is the probability that the ticket drawn has a number which is a multiple of 3 or 7? Ex. 16.1 | Q 14 | Page 21 In a lottery there are 10 prizes and 25 blanks. What is the probability of getting a prize? Ex. 16.1 | Q 15 | Page 21 A bag contains 5 white and 7 red balls. One ball is drawn at random. What is the probability that ball drawn is white? Ex. 16.1 | Q 15 | Page 21 If the probability of winning a game is 0.3, what is the probability of losing it? Ex. 16.1 | Q 16.1 | Page 21 A bag contains 5 black, 7 red and 3 white balls. A ball is drawn from the bag at random. Find the probability that the ball drawn is Red Ex. 16.1 | Q 16.2 | Page 21 A bag contains 5 black, 7 red and 3 white balls. A ball is drawn from the bag at random. Find the probability that the ball drawn is black or white Ex. 16.1 | Q 16.3 | Page 21 A bag contains 5 black, 7 red and 3 white balls. A ball is drawn from the bag at random. Find the probability that the ball drawn is not black Ex. 16.1 | Q 16.3 | Page 21 Cards numbered 1 to 30 are put in a bag. A card is drawn at random from this bag. Find the probability that the number on the drawn card is not a perfect square number Ex. 16.1 | Q 17.1 | Page 21 A bag contains 4 red, 5 black and 6 white balls. A ball is drawn from the bag at random. Find the probability that the ball drawn is White Ex. 16.1 | Q 17.2 | Page 21 A bag contains 4 red, 5 black and 6 white balls. A ball is drawn from the bag at random. Find the probability that the ball drawn is Red Ex. 
16.1 | Q 17.3 | Page 21 A bag contains 4 red, 5 black and 6 white balls. A ball is drawn from the bag at random. Find the probability that the ball drawn is Not black Ex. 16.1 | Q 17.4 | Page 21 A bag contains 4 red, 5 black and 6 white balls. A ball is drawn from the bag at random. Find the probability that the ball drawn is Red or white Ex. 16.1 | Q 18.1 | Page 21 One card is drawn from a well shuffled deck of 52 cards. Find the probability of getting a king of red suit Ex. 16.1 | Q 18.2 | Page 21 One card is drawn from a well shuffled deck of 52 cards. Find the probability of getting a face card Ex. 16.1 | Q 18.3 | Page 21 One card is drawn from a well shuffled deck of 52 cards. Find the probability of getting a red face card Ex. 16.1 | Q 18.4 | Page 21 One card is drawn from a well shuffled deck of 52 cards. Find the probability of getting a queen of black suit Ex. 16.1 | Q 18.5 | Page 21 One card is drawn from a well shuffled deck of 52 cards. Find the probability of getting a jack of hearts Ex. 16.1 | Q 18.6 | Page 21 One card is drawn from a well shuffled deck of 52 cards. Find the probability of getting a spade Ex. 16.1 | Q 19.1 | Page 21 Five cards, the ten, jack, queen, king and ace of diamonds, are well-shuffled with their face downwards. One card is then picked up at random. What is the probability that the card is the queen? Ex. 16.1 | Q 19.2 | Page 21 Five cards—ten, jack, queen, king, and an ace of diamonds are shuffled face downwards.One card is picked at random If a king is drawn first and put aside, what is the probability that the second card picked up is the ace? Ex. 16.1 | Q 20.1 | Page 21 A bag contains 3 red balls and 5 black balls. A ball is drawn at random from the bag. What is the probability that the ball drawn is red? Ex. 16.1 | Q 20.2 | Page 21 A bag contains 3 red balls and 5 black balls. A ball is drawn at random from the bag. What is the probability that the ball drawn is Black Ex. 16.1 | Q 21.1 | Page 21 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability? that the sum of the two numbers that turn up is 8? Ex. 16.1 | Q 21.1 | Page 21 A game of chance consists of spinning an arrow which is equally likely to come to rest pointing to one of the number, 1, 2, 3, ..., 12 as shown in Fig. below. What is the probability that it will point to 10? Ex. 16.1 | Q 21.2 | Page 21 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability? of obtaining a total of 6? Ex. 16.1 | Q 21.2 | Page 21 A game of chance consists of spinning an arrow which is equally likely to come to rest pointing to one of the number, 1, 2, 3, ..., 12 as shown in Fig. below. What is the probability that it will point to an odd number? Ex. 16.1 | Q 21.3 | Page 21 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability of obtaining a total of 10? Ex. 16.1 | Q 21.3 | Page 21 A game of chance consists of spinning an arrow which is equally likely to come to rest pointing to one of the number, 1, 2, 3, ..., 12 as shown in Fig. below. What is the probability that it will point to a number which is multiple of 3? Ex. 16.1 | Q 21.4 | Page 21 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability? of obtaining the same number on both dice? Ex. 
16.1 | Q 21.4 | Page 21 A game of chance consists of spinning an arrow which is equally likely to come to rest pointing to one of the number, 1, 2, 3, ..., 12 as shown in Fig. below. What is the probability that it will point to an even number? Ex. 16.1 | Q 21.5 | Page 21 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability of obtaining a total more than 9? Ex. 16.1 | Q 21.6 | Page 21 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability that the sum of the two numbers appearing on the top of the dice is 13? Ex. 16.1 | Q 21.7 | Page 21 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability that the sum of the numbers appearing on the top of the dice is less than or equal to 12? Ex. 16.1 | Q 22.1 | Page 22 In a class, there are 18 girls and 16 boys. The class teacher wants to choose one pupil form class monitor. What she does, she writes the name of each pupil on a card and puts them into a basket and mixes thoroughly. A child is asked to pick one card from the basket. What is the probability that the name written on the card is the name of a girl? Ex. 16.1 | Q 22.2 | Page 22 In a class, there are 18 girls and 16 boys. The class teacher wants to choose one pupil for class monitor. What she does, she writes the name of each pupil on a card and puts them into a basket and mixes thoroughly. A child is asked to pick one card from the basket. What is the probability that the name written on the card is the name of a boy? Ex. 16.1 | Q 23 | Page 22 Why is tossing a coin considered to be a fair way of deciding which team should choose ends in a game of cricket? Ex. 16.1 | Q 24 | Page 22 What is the probability that a number selected at random from the number 1,2,2,3,3,3, 4, 4, 4, 4 will be their average? Ex. 16.1 | Q 25 | Page 22 There are 30 cards, of same size, in a bag on which numbers 1 to 30 are written. One card is taken out of the bag at random. Find the probability that the number on the selected card is not divisible by 3. Ex. 16.1 | Q 25.1 | Page 22 A bag contains cards which are numbered from 2 to 90. A card is drawn at random from the bag. Find the probability that it bears a two digit number Ex. 16.1 | Q 25.2 | Page 22 A bag contains cards which are numbered from 2 to 90. A card is drawn at random from the bag. Find the probability that it bears a number which is a perfect square Ex. 16.1 | Q 26.1 | Page 22 A bag contains 5 red, 8 white and 7 black balls. A ball is drawn at random from the bag. Find the probability that the drawn ball is red or white Ex. 16.1 | Q 26.2 | Page 22 A bag contains 5 red, 8 white and 7 black balls. A ball is drawn at random from the bag. Find the probability that the drawn ball is not black Ex. 16.1 | Q 26.3 | Page 22 A bag contains 5 red, 8 white and 7 black balls. A ball is drawn at random from the bag. Find the probability that the drawn ball is neither white nor black. Ex. 16.1 | Q 27 | Page 22 Find the probability that a number selected from the number 1 to 25 is not a prime number when each of the given numbers is equally likely to be selected. Ex. 16.1 | Q 27.1 | Page 22 Two customers are visiting a particular shop in the same week (Monday to Saturday). Each is equally likely to visit the shop on any one day as on another. What is the  robability that both will visit the shop on the same day? Ex. 
16.1 | Q 27.2 | Page 22 Two customers are visiting a particular shop in the same week (Monday to Saturday). Each is equally likely to visit the shop on any one day as on another. What is the probability that both will visit the shop on different days? Ex. 16.1 | Q 27.3 | Page 22 Two customers are visiting a particular shop in the same week (Monday to Saturday). Each is equally likely to visit the shop on any one day as on another. What is the probability that both will visit the shop on consecutive days? Ex. 16.1 | Q 28.1 | Page 22 A bag contains 8 red, 6 white and 4 black balls. A ball is drawn at random from the bag. Find the probability that the drawn ball is Red or white Ex. 16.1 | Q 28.2 | Page 22 A bag contains 8 red, 6 white and 4 black balls. A ball is drawn at random from the bag. Find the probability that the drawn ball is Not black Ex. 16.1 | Q 28.3 | Page 22 A bag contains 8 red, 6 white and 4 black balls. A ball is drawn at random from the bag. Find the probability that the drawn ball is Neither white nor black Ex. 16.1 | Q 29.1 | Page 22 Find the probability that a number selected at random from the numbers 1, 2, 3, ..., 35 is a Prime number Ex. 16.1 | Q 29.2 | Page 22 Find the probability that a number selected at random from the numbers 1, 2, 3, ..., 35 is a Multiple of 7 Ex. 16.1 | Q 29.3 | Page 22 Find the probability that a number selected at random from the numbers 1, 2, 3, ..., 35 is a Multiple of 3 or 5 Ex. 16.1 | Q 30.1 | Page 22 From a pack of 52 playing cards Jacks, queens, kings and aces of red colour are removed. From the remaining, a card is drawn at random. Find the probability that the card drawn is A black queen Ex. 16.1 | Q 30.2 | Page 22 From a pack of 52 playing cards Jacks, queens, kings and aces of red colour are removed. From the remaining, a card is drawn at random. Find the probability that the card drawn is A red card Ex. 16.1 | Q 30.3 | Page 22 From a pack of 52 playing cards Jacks, queens, kings and aces of red colour are removed. From the remaining, a card is drawn at random. Find the probability that the card drawn is A black jack Ex. 16.1 | Q 30.4 | Page 22 From a pack of 52 playing cards Jacks, queens, kings and aces of red colour are removed. From the remaining, a card is drawn at random. Find the probability that the card drawn is a picture card (Jacks, queens and kings are picture cards) Ex. 16.1 | Q 31 | Page 22 The faces of a red cube and a yellow cube are numbered from 1 to 6. Both cubes are rolled. What is the probability that the top face of each cube will have the same number? Ex. 16.1 | Q 31.1 | Page 22 A bag contains lemon flavoured candies only. Malini takes out one candy without looking into the bag. What is the probability that she takes out an orange flavoured candy? Ex. 16.1 | Q 31.2 | Page 22 A bag contains lemon flavoured candies only. Malini takes out one candy without looking into the bag. What is the probability that she takes out a lemon flavoured candy? Ex. 16.1 | Q 32 | Page 23 The probability of selecting a green marble at random from a jar that contains only green, white and yellow marbles is 1/4  The probability of selecting a white marble at random from  the same jar is 1/3  If this jar contains 10 yellow marbles. What is the total number of marbles in the jar? Ex. 16.1 | Q 32 | Page 23 It is given that m a group of 3 students, the probability of 2 students not having the same birthday is 0.992. What is the probability that the 2 students have the same birthday? Ex. 
16.1 | Q 33.1 | Page 23 A bag contains 3 red balls and 5 black balls. A ball is draw at random from the bag. What is the probability that the ball drawn is red? Ex. 16.1 | Q 33.2 | Page 23 A bag contains 3 red balls and 5 black balls. A ball is draw at random from the bag. What is the probability that the ball drawn is  not red? Ex. 16.1 | Q 34.1 | Page 23 A box contains 5 red marbels, 8 white marbles and 4 green marbles, One marble is taken out of the box at ramdom. What is the probability that the marble taken out will be  red? Ex. 16.1 | Q 34.2 | Page 23 A box contains 5 red marbels, 8 white marbles and 4 green marbles, One marble is taken out of the box at ramdom. What is the probability that the marble taken out will be white? Ex. 16.1 | Q 34.3 | Page 23 A box contains 5 red marbels, 8 white marbles and 4 green marbles, One marble is taken out of the box at ramdom. What is the probability that the marble taken out will be  not green? Ex. 16.1 | Q 35.1 | Page 23 A lot consists of 144 ball pens of which 20 are defective and the others are good. Nuri will buy a pen if it is good, but will not buy if it is defective. The shopkeeper draws one pen at random and gives it to her. What is the probability that She will buy it? Ex. 16.1 | Q 35.2 | Page 23 A lot consists of 144 ball pens of which 20 are defective and the others are good. Nuri will buy a pen if it is good, but will not buy if it is defective. The shopkeeper draws one pen at random and gives it to her. What is the probability that She will not buy it? Ex. 16.1 | Q 36 | Page 23 12 defective pens are accidently mixed with 132 good ones. It is not possible to just look at pen and tell whether or not it is defective. one pen is taken out at random from this lot. Determine the probability that the pen taken out is good one. Ex. 16.1 | Q 37.1 | Page 23 Five cards − the ten, jack, queen, king and ace of diamonds, are well-shuffled with their face downwards. One card is then picked up at random. What is the probability that the card is the queen? Ex. 16.1 | Q 37.2 | Page 23 Five cards − the ten, jack, queen, king and ace of diamonds, are well-shuffled with their face downwards. One card is then picked up at random. If the queen is drawn and put a side, what is the probability that the second card picked up is (a) an ace? (b) a queen? Ex. 16.1 | Q 38 | Page 23 Harpreet tosses two different coins simultaneously (say, one is of Re 1 and other of Rs 2). What is the probability that he gets at least one head? Ex. 16.1 | Q 39.1 | Page 23 Cards marked with numbers 13, 14, 15, ...., 60 are placed in a box and mixed thoroughly.One card is drawn at random from the box. Find the probability that number on the card drawn is divisible by 5 Ex. 16.1 | Q 39.2 | Page 23 Cards marked with numbers 13, 14, 15, ...., 60 are placed in a box and mixed thoroughly.One card is drawn at random from the box. Find the probability that number on the card drawn is a number is a perfect square Ex. 16.1 | Q 40.1 | Page 23 A bag contains tickets numbered 11, 12, 13,..., 30. A ticket is taken out from the bag at random. Find the probability that the number on the drawn ticket is a multiple of 7 Ex. 16.1 | Q 40.2 | Page 23 A bag contains tickets numbered 11, 12, 13,..., 30. A ticket is taken out from the bag at random. Find the probability that the number on the drawn ticket is greater than 15 and a multiple of 5. Ex. 16.1 | Q 41.1 | Page 23 Fill in blank: Probability of a sure event is........... Ex. 
16.1 | Q 41.2 | Page 23 Fill in blank: Probability of an impossible event is........... Ex. 16.1 | Q 41.3 | Page 23 Fill in blank: The probability of an event (other than sure and impossible event) lies between…… Ex. 16.1 | Q 41.4 | Page 23 Fill in blank: Every elementary event associated to a random experiment has........... probability. Ex. 16.1 | Q 41.5 | Page 23 Fill in blank: Probability of an event A + Probability of event ‘not A’ ........... Ex. 16.1 | Q 41.6 | Page 23 Fill in blank: Sum of the probabilities of each outcome m an experiment is .......... Ex. 16.1 | Q 42.1 | Page 23 Examine the following statement and comment: If two coins are tossed at the same time, there are 3 possible outcomes—two heads, two tails, or one of each. Therefore, for each outcome, the probability of occurrence is 1/3 Ex. 16.1 | Q 42.2 | Page 23 Examine the following statement and comment: If a die is thrown once, there are two possible outcomes—an odd number or an even number. Therefore, the probability of obtaining an odd number is 1 /2 and the probability of obtaining an even number is 1/2 . Ex. 16.1 | Q 43.1 | Page 23 A box contains 90 discs which are numbered from 1 to 90. If one disc is drawn at random from the box, find the probability that it bears a two-digit number Ex. 16.1 | Q 43.1 | Page 23 A box contains loo red cards, 200 yellow cards and 50 blue cards. If a card is drawn at random from the box, then find the probability that it will be a blue card Ex. 16.1 | Q 43.2 | Page 23 A box contains 90 discs which are numbered from 1 to 90. If one disc is drawn at random from the box, find the probability that it bears a perfect square number Ex. 16.1 | Q 43.2 | Page 23 A box contains loo red cards, 200 yellow cards and 50 blue cards. If a card is drawn at random from the box, then find the probability that it will be not a yellow card Ex. 16.1 | Q 43.3 | Page 23 A box contains 90 discs which are numbered from 1 to 90. If one disc is drawn at random from the box, find the probability that it bears a number divisible by 5. Ex. 16.1 | Q 43.3 | Page 23 A box contains loo red cards, 200 yellow cards and 50 blue cards. If a card is drawn at random from the box, then find the probability that it will be neither yellow nor a blue card. Ex. 16.1 | Q 44 | Page 24 A box contains cards numbered 3, 5, 7, 9, ..., 35, 37. A card is drawn at random form the box. Find the probability that the number on the drawn card is a prime number. Ex. 16.1 | Q 45.1 | Page 24 A group consists of 12 persons, of which 3 are extremely patient, other 6 are extremely honest and rest are extremely kind. A person form the group is selected at random. Assuming that each person is equally likely to be selected, find the probability of selecting a person who is extremely patient Ex. 16.1 | Q 45.2 | Page 24 A group consists of 12 persons, of which 3 are extremely patient, other 6 are extremely honest and rest are extremely kind. A person form the group is selected at random. Assuming that each person is equally likely to be selected, find the probability of selecting a person who is  extremely kind or honest. Which of the above you prefer more. Ex. 16.1 | Q 46.1 | Page 24 Cards numbered 1 to 30 are put in a bag. A card is drawn at random from this bag. Find the probability that the number on the drawn card is not divisible by 3 Ex. 16.1 | Q 46.2 | Page 24 Cards numbered 1 to 30 are put in a bag. A card is drawn at random from this bag. Find the probability that the number on the drawn card is a prime number greater than 7 Ex. 
16.1 | Q 47.1 | Page 24 A piggy bank contains hundred 50 paise coins, fifity ₹1 coins, twenty ₹2 coins and ten ₹5 coins. If it is equally likely that one of the coins will fall out when the bank is turned upside down, find the probability that the coin which fell will be a 50 paise coin Ex. 16.1 | Q 47.2 | Page 24 A piggy bank contains hundred 50 paise coins, fifity ₹1 coins, twenty ₹2 coins and ten ₹5 coins. If it is equally likely that one of the coins will fall out when the bank is turned upside down, find the probability that the coin which fell  will be of value more than ₹1 Ex. 16.1 | Q 47.3 | Page 24 A piggy bank contains hundred 50 paise coins, fifity ₹1 coins, twenty ₹2 coins and ten ₹5 coins. If it is equally likely that one of the coins will fall out when the bank is turned upside down, find the probability that the coin which fell will be of value less than ₹5 Ex. 16.1 | Q 47.4 | Page 24 A piggy bank contains hundred 50 paise coins, fifity ₹1 coins, twenty ₹2 coins and ten ₹5 coins. If it is equally likely that one of the coins will fall out when the bank is turned upside down, find the probability that the coin which fell  will be a ₹1 or ₹2 coin Ex. 16.1 | Q 48.1 | Page 24 A bag contains cards numbered from 1 to 49. A card is drawn from the bag at random, after mixing the card thoroughly. Find the probability that the number on the drawn card is an odd number Ex. 16.1 | Q 48.2 | Page 24 A bag contains cards numbered from 1 to 49. A card is drawn from the bag at random, after mixing the card thoroughly. Find the probability that the number on the drawn card is  a multiple of 5 Ex. 16.1 | Q 48.3 | Page 24 A bag contains cards numbered from 1 to 49. A card is drawn from the bag at random, after mixing the card thoroughly. Find the probability that the number on the drawn card is  a perfect square Ex. 16.1 | Q 48.4 | Page 24 A bag contains cards numbered from 1 to 49. A card is drawn from the bag at random, after mixing the card thoroughly. Find the probability that the number on the drawn card is an even prime number Ex. 16.1 | Q 49.1 | Page 24 A box contains 20 cards numbered from 1 to 20. A card is drawn at random from the box. Find the probability that the number on the drawn card is divisible by 2 or 3 Ex. 16.1 | Q 49.2 | Page 24 A box contains 20 cards numbered from 1 to 20. A card is drawn at random from the box. Find the probability that the number on the drawn card is a prime number Ex. 16.1 | Q 50.01 | Page 24 In a simultaneous throw of a pair of dice, find the probability of getting 8 as the sum Ex. 16.1 | Q 50.02 | Page 24 In a simultaneous throw of a pair of dice, find the probability of getting a doublet Ex. 16.1 | Q 50.03 | Page 24 In a simultaneous throw of a pair of dice, find the probability of getting a doublet of prime numbers Ex. 16.1 | Q 50.04 | Page 24 In a simultaneous throw of a pair of dice, find the probability of getting a doublet of odd numbers Ex. 16.1 | Q 50.05 | Page 24 In a simultaneous throw of a pair of dice, find the probability of getting a sum greater than 9 Ex. 16.1 | Q 50.06 | Page 24 In a simultaneous throw of a pair of dice, find the probability of getting an even number on first Ex. 16.1 | Q 50.07 | Page 24 In a simultaneous throw of a pair of dice, find the probability of getting an even number on one and a multiple of 3 on the other Ex. 16.1 | Q 50.08 | Page 24 In a simultaneous throw of a pair of dice, find the probability of getting neither 9 nor 1 1 as the sum of the numbers on the faces Ex. 
16.1 | Q 50.09 | Page 24 In a simultaneous throw of a pair of dice, find the probability of getting a sum less than 6 Ex. 16.1 | Q 50.1 | Page 24 Two different dice are thrown together. Find the probability that the numbers obtained have a sum less than 7 Ex. 16.1 | Q 50.11 | Page 24 In a simultaneous throw of a pair of dice, find the probability of getting a sum more than 7 Ex. 16.1 | Q 50.12 | Page 24 In a simultaneous throw of a pair of dice, find the probability of getting 1 at least once Ex. 16.1 | Q 50.13 | Page 24 In a simultaneous throw of a pair of dice, find the probability of getting a number other than 5 on any dice. Ex. 16.1 | Q 51 | Page 24 What is the probability that an ordinary year has 53 Sundays? Ex. 16.1 | Q 52 | Page 24 What is the probability that a leap year has 53 Sundays and 53 Mondays? Ex. 16.1 | Q 53.01 | Page 24 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability? that the sum of the two numbers that turn up is 8? Ex. 16.1 | Q 53.02 | Page 24 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability? of obtaining a total of 6? Ex. 16.1 | Q 53.03 | Page 24 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability of obtaining a total of 10? Ex. 16.1 | Q 53.04 | Page 24 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability? of obtaining the same number on both dice? Ex. 16.1 | Q 53.05 | Page 24 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability of obtaining a total more than 9? Ex. 16.1 | Q 53.06 | Page 24 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability that the sum of the two numbers appearing on the top of the dice is 13? Ex. 16.1 | Q 53.07 | Page 24 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability that the sum of the numbers appearing on the top of the dice is less than or equal to 12? Ex. 16.4 | Q 53.08 | Page 24 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability?  that the product of numbers appearing on the top of the dice is less than 9. Ex. 16.1 | Q 53.09 | Page 24 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability? that the difference of the numbers appearing on the top of two dice is 2. Ex. 16.1 | Q 53.1 | Page 24 A black die and a white die are thrown at the same time. Write all the possible outcomes. What is the probability? that the numbers obtained have a product less then 16. Ex. 16.1 | Q 54.1 | Page 25 A bag contains cards which are numbered from 2 to 90. A card is drawn at random from the bag. Find the probability that it bears a two digit number Ex. 16.1 | Q 54.2 | Page 25 A bag contains cards which are numbered from 2 to 90. A card is drawn at random from the bag. Find the probability that it bears a number which is a perfect square Ex. 16.1 | Q 55 | Page 25 The faces of a red cube and a yellow cube are numbered from 1 to 6. Both cubes are rolled. What is the probability that the top face of each cube will have the same number? Ex. 
16.1 | Q 56 | Page 25 The probability of selecting a green marble at random from a jar that contains only green, white and yellow marbles is 1/4  The probability of selecting a white marble at random from  the same jar is 1/3  If this jar contains 10 yellow marbles. What is the total number of marbles in the jar? Ex. 16.1 | Q 57 | Page 25 1) A lot of 20 bulbs contain 4 defective ones. One bulb is drawn at random from the lot. What is the probability that this bulb is defective? 2) Suppose the bulb drawn in (1) is not defective and is not replaced. Now one bulb is drawn at random from the rest. What is the probability that this bulb is not defective? Ex. 16.1 | Q 57 | Page 25 A number is selected at random from first 50 natural numbers. Find the probability that it is a multiple of 3 and 4. Ex. 16.1 | Q 58.1 | Page 25 A box contains 90 discs which are numbered from 1 to 90. If one disc is drawn at random from the box, find the probability that it bears a two-digit number Ex. 16.1 | Q 58.2 | Page 25 A box contains 90 discs which are numbered from 1 to 90. If one disc is drawn at random from the box, find the probability that it bears a perfect square number Ex. 16.1 | Q 58.3 | Page 25 A box contains 90 discs which are numbered from 1 to 90. If one disc is drawn at random from the box, find the probability that it bears a number divisible by 5. Ex. 16.1 | Q 59 | Page 25 Two dice, one blue and one grey, are thrown at the same time. (i) Write down all the possible outcomes and complete the following table: Event :‘Sum on 2 dice’ 2 3 4 5 6 7 8 9 10 11 12 Probability 1/36 5/36 1/36 Ex. 16.2 | Q 60 | Page 25 A bag contains 6 red balls and some blue balls. If the probability of drawing a blue ball the bag is twice that of a red ball, find the number of blue balls in the bag. Ex. 16.1 | Q 61.1 | Page 25 The king, queen and jack of clubs are removed from a deck of 52 playing cards and the remaining cards are shuffled. A card is drawn from the remaining cards. Find the probability of getting a card of heart Ex. 16.1 | Q 61.2 | Page 25 The king, queen and jack of clubs are removed from a deck of 52 playing cards and the remaining cards are shuffled. A card is drawn from the remaining cards. Find the probability of getting a card of queen Ex. 16.1 | Q 61.3 | Page 25 The king, queen and jack of clubs are removed from a deck of 52 playing cards and the remaining cards are shuffled. A card is drawn from the remaining cards. Find the probability of getting a card of clubs. Ex. 16.1 | Q 61.4 | Page 25 The king, queen and jack of clubs are removed form a deck of 52 playing cards and the remaining cards are shuffled. A card is drawn from the remaining cards. Find the probability of getting a card of a face card Ex. 16.1 | Q 61.5 | Page 25 The king, queen and jack of clubs are removed form a deck of 52 playing cards and the remaining cards are shuffled. A card is drawn from the remaining cards. Find the probability of getting a card of  a queen of diamond. Ex. 16.1 | Q 62.1 | Page 25 Two dice are thrown simultaneously. What is the probability that 5 will not come up on either of them? Ex. 16.1 | Q 62.2 | Page 25 Two dice are thrown simultaneously. What is the probability that 5 will come up on at least one? Ex. 16.1 | Q 62.3 | Page 25 Two dice are thrown simultaneously. What is the probability that 5 wifi come up at both dice? Ex. 16.1 | Q 63 | Page 26 A number is selected at random from first 50 natural numbers. Find the probability that it is a multiple of 3 and 4. Ex. 
16.1 | Q 64.1 | Page 26 A dice is rolled twice. Find the probability that  5 will not come up either time Ex. 16.1 | Q 64.2 | Page 26 A dice is rolled twice. Find the probability that 5 will come up exactly one time Ex. 16.1 | Q 65.1 | Page 26 All the black face cards are removed from a pack of 52 cards. The remaining cards are well shuffled and then a card is drawn at random. Find the probability of getting face card Ex. 16.1 | Q 65.2 | Page 26 All the black face cards are removed from a pack of 52 cards. The remaining cards are well shuffled and then a card is drawn at random. Find the probability of getting red card Ex. 16.1 | Q 65.3 | Page 26 All the black face cards are removed from a pack of 52 cards. The remaining cards are well shuffled and then a card is drawn at random. Find the probability of getting black card Ex. 16.1 | Q 65.4 | Page 26 All the black face cards are removed from a pack of 52 cards. The remaining cards are well shuffled and then a card is drawn at random. Find the probability of getting king Ex. 16.1 | Q 66.1 | Page 26 Cards numbered from 11 to 60 are kept in a box. If a card is drawn at random from the box, find the probability that the number on the drawn card is  an odd number Ex. 16.1 | Q 66.2 | Page 26 Cards numbered from 11 to 60 are kept in a box. If a card is drawn at random from the box, find the probability that the number on the drawn card is a perfect square number Ex. 16.1 | Q 66.3 | Page 26 Cards numbered from 11 to 60 are kept in a box. If a card is drawn at random from the box, find the probability that the number on the drawn card is divisible by 5 Ex. 16.1 | Q 66.4 | Page 26 Cards numbered from 11 to 60 are kept in a box. If a card is drawn at random from the box, find the probability that the number on the drawn card isa prime number less than 20 Ex. 16.1 | Q 67.1 | Page 26 All kings and queens are removed from a pack of 52 cards. The remaining cards are well shuffled and then a card is randomly drawn from it. Find the probability that this card is a red face card Ex. 16.1 | Q 67.2 | Page 26 All kings and queens are removed from a pack of 52 cards. The remaining cards are well shuffled and then a card is randomly drawn from it. Find the probability that this card is  a black card. Ex. 16.1 | Q 68.1 | Page 26 All jacks, queens and kings are removed from a pack of 52 cards. The remaining cards are well-shuffled and then a card is randomly drawn from it. Find the probability that this card is a black face card Ex. 16.1 | Q 68.2 | Page 26 All jacks, queens and kings are removed from a pack of 52 cards. The remaining cards are well-shuffled and then a card is randomly drawn from it. Find the probability that this card is  a red card. Ex. 16.1 | Q 69.1 | Page 26 Red queens and black jacks are removed from a pack of 52 playing cards. A card is drawn at random from the remaining cards, after reshuffling them. Find the probability that the card drawn is a king Ex. 16.1 | Q 69.2 | Page 26 Red queens and black jacks are removed from a pack of 52 playing cards. A card is drawn at random from the remaining cards, after reshuffling them. Find the probability that the card drawn is of red colour Ex. 16.1 | Q 69.3 | Page 26 Red queens and black jacks are removed from a pack of 52 playing cards. A card is drawn at random from the remaining cards, after reshuffling them. Find the probability that the card drawn is a face card Ex. 16.1 | Q 69.4 | Page 26 Red queens and black jacks are removed from a pack of 52 playing cards. 
A card is drawn at random from the remaining cards, after reshuffling them. Find the probability that the card drawn is a queen Ex. 16.1 | Q 70 | Page 26 All red face cards are removed from a pack of playing cards. The remaining cards are well shuffled and then a card is drawn at random from them. Find the probability that the drawn card is  a red card Ex. 16.1 | Q 70.1 | Page 26 In a bag there are 44 identical cards with figure of circle or square on them. There are 24 circles, of which 9 are blue and rest are green and 20 squares of which 11 are blue and rest are green.  One card is drawn from the bag at random. Find the probability that it has the figure of square Ex. 16.1 | Q 70.2 | Page 26 In a bag there are 44 identical cards with figure of circle or square on them. There are 24 circles, of which 9 are blue and rest are green and 20 squares of which 11 are blue and rest are green.  One card is drawn from the bag at random. Find the probability that it has the figure of Ex. 16.1 | Q 70.2 | Page 26 In a bag there are 44 identical cards with figure of circle or square on them. There are 24 circles, of which 9 are blue and rest are green and 20 squares of which 11 are blue and rest are green.  One card is drawn from the bag at random. Find the probability that it has the figure of  green colour Ex. 16.1 | Q 70.2 | Page 26 In a bag there are 44 identical cards with figure of circle or square on them. There are 24 circles, of which 9 are blue and rest are green and 20 squares of which 11 are blue and rest are green.  One card is drawn from the bag at random. Find the probability that it has the figure of green colour Ex. 16.1 | Q 70.3 | Page 26 In a bag there are 44 identical cards with figure of circle or square on them. There are 24 circles, of which 9 are blue and rest are green and 20 squares of which 11 are blue and rest are green.  One card is drawn from the bag at random. Find the probability that it has the figure of blue circle and Ex. 16.1 | Q 70.3 | Page 26 In a bag there are 44 identical cards with figure of circle or square on them. There are 24 circles, of which 9 are blue and rest are green and 20 squares of which 11 are blue and rest are green.  One card is drawn from the bag at random. Find the probability that it has the figure of green square. Ex. 16.1 | Q 70.4 | Page 26 In a bag there are 44 identical cards with figure of circle or square on them. There are 24 circles, of which 9 are blue and rest are green and 20 squares of which 11 are blue and rest are green.  One card is drawn from the bag at random. Find the probability that it has the figure of green Ex. 16.1 | Q 71.1 | Page 26 All red face cards are removed from a pack of playing cards. The remaining cards are well shuffled and then a card is drawn at random from them. Find the probability that the drawn card is  a red card Ex. 16.1 | Q 71.2 | Page 26 All red face cards are removed from a pack of playing cards. The remaining cards are well shuffled and then a card is drawn at random from them. Find the probability that the drawn card is a face card and Ex. 16.1 | Q 71.3 | Page 26 All red face cards are removed from a pack of playing cards. The remaining cards are well shuffled and then a card is drawn at random from them. Find the probability that the drawn card is  a card of clubs. Ex. 16.1 | Q 72.1 | Page 26 Two customers are visiting a particular shop in the same week (Monday to Saturday). Each is equally likely to visit the shop on any one day as on another. 
What is the  robability that both will visit the shop on the same day? Ex. 16.1 | Q 72.2 | Page 26 Two customers are visiting a particular shop in the same week (Monday to Saturday). Each is equally likely to visit the shop on any one day as on another. What is the probability that both will visit the shop on different days? Ex. 16.1 | Q 72.3 | Page 26 Two customers are visiting a particular shop in the same week (Monday to Saturday). Each is equally likely to visit the shop on any one day as on another. What is the probability that both will visit the shop on consecutive days? #### RD Sharma solutions for Class 10 Maths Chapter 16 Probability Exercise 16.2 [Page 33] Ex. 16.2 | Q 1 | Page 33 Suppose you drop a tie at random on the rectangular region shown in the given figure. What is the probability that it will land inside the circle with diameter 1 m? Ex. 16.2 | Q 2 | Page 33 In the accompanying diagram a fair spinner is placed at the center O of the circle. Diameter AOB and radius OC divide the circle into three regions labelled X, Y and Z. If ∠BOC = 45°. What is the probability that the spinner will land in the region X?(See fig) Ex. 16.2 | Q 3 | Page 33 A target shown in Fig. below consists of three concentric circles of radii, 3, 7 and 9 cm respectively. A dart is thrown and lands on the target. What is the probability that the dart will land on the shaded region? Ex. 16.2 | Q 4 | Page 33 In below Fig., points A, B, C and D are the centers of four circles that each have a radius of length one unit. If a point is selected at random from the interior of square ABCD. What is the probability that the point will be chosen from the shaded region? Ex. 16.2 | Q 5 | Page 33 In the Fig. below, JKLM is a square with sides of length 6 units. Points A and B are the mid- points of sides KL and LM respectively. If a point is selected at random from the interior of the square. What is the probability that the point will be chosen from the interior of ΔJAB? Ex. 16.2 | Q 6 | Page 33 In the given figure, a square dart board is shown. The length of a side of the larger square is 1.5 times the length of a side of the smaller square. If a dart is thrown and lands on the larger square. What is the probability that it will land in the interior of the smaller square? #### RD Sharma solutions for Class 10 Maths Chapter 16 Probability [Pages 34 - 35] Q 1 | Page 34 Cards each marked with one of the numbers 4, 5, 6, ..., 20 are placed in a box and mixed thoroughly. One card is drawn at random from the box. What is the probability of getting an even number? Q 2 | Page 34 One card is drawn from a well shuffled deck of 52 playing cards. What is the probability of getting a non-face card? Q 3 | Page 34 A bag contains 5 red, 8 green and 7 white balls, One ball is drawn at random from the bag. What is the probability of getting a white ball or a green ball? Q 4 | Page 34 A die is thrown once. What is the probability of getting a prime number? Q 5 | Page 34 A die thrown once. What is the probability of getting a number lying between  2 and 6? Q 6 | Page 34 A die is thrown once. What is the probability of getting an odd number? Q 7 | Page 35 If $\bar{E}$ denote the complement or negation of an even E, what is the value of P(E) + P($\bar{E}$) ? Q 8 | Page 35 One card is drawn at random from a well shuffled deck of 52 cards. What is the probability of getting an ace? Q 9 | Page 35 Two coins are tossed simultaneously. What is the probability of getting at least one head? 
Q 10 | Page 35 Tickets numbered 1 to 20 are mixed up and then a ticket is drawn at random. What is the probability that the ticket drawn bears a number which is a multiple of 3? Q 11 | Page 35 From a well shuffled pack of cards, a card is drawn at random. Find the probability of getting a black queen. Q 12 | Page 35 A die is thrown once. Find the probability of getting a number less than 3. Q 13 | Page 35 Two coins are tossed simultaneously. Find the probability of getting exactly one head. Q 14 | Page 35 A die is thrown once. What is the probability of getting a number greater than 4? Q 15 | Page 35 What is the probability that a number selected at random from the numbers 3, 4, 5, ....9 is a multiple of 4? Q 15 | Page 35 What is the probability that a number selected at random from the numbers 3, 4, 5, ....9 is a multiple of 4? Q 16 | Page 35 A letter of English alphabet is chosen at random. Determine the probability that the chosen letter is a consonant. Q 17 | Page 35 A bag contains 3 red and 5 black balls. A ball is drawn at random from the bag. What is the probability that the ball drawn is not red. Q 18 | Page 35 A number is chosen at random from the number –3, –2, –1, 0, 1, 2, 3. What will be the probability that square of this number is less then or equal to 1? #### RD Sharma solutions for Class 10 Maths Chapter 16 Probability [Pages 35 - 39] Q 1 | Page 35 If a digit is chosen at random from the digit 1, 2, 3, 4, 5, 6, 7, 8, 9, then the probability that it is odd, is • $\frac{4}{9}$ • $\frac{5}{9}$ • $\frac{1}{9}$ • $\frac{2}{3}$ Q 2 | Page 35 In Q. No. 1, The probability that the digit is even is • $\frac{4}{9}$ • $\frac{5}{9}$ • $\frac{1}{9}$ • $\frac{2}{3}$ Q 3 | Page 36 In the probability that the digit is a multiple of 3 is • $\frac{1}{3}$ • $\frac{2}{3}$ • $\frac{1}{9}$ • $\frac{2}{9}$ Q 4 | Page 36 If three coins are tossed simultaneously, then the probability of getting at least two heads, is • $\frac{1}{4}$ • $\frac{3}{8}$ • $\frac{1}{2}$ • $\frac{1}{4}$ Q 5 | Page 36 In a single throw of a die, the probability of getting a multiple of 3 is • $\frac{1}{2}$ • $\frac{1}{3}$ • $\frac{1}{6}$ • $\frac{2}{3}$ Q 6 | Page 36 The probability of guessing the correct answer to a certain test questions is$\frac{x}{12}$ If the probability of not  guessing the correct answer to this question is$\frac{ 2}{3}$ then x = • 2 •  3 •  4 • 6 Q 7 | Page 36 A bag contains three green marbles, four blue marbles, and two orange marbles, If a marble is picked at random, then the probability that it is not an orange marble is • $\frac{1}{4}$ • $\frac{1}{3}$ • $\frac{4}{9}$ • $\frac{7}{9}$ Q 8 | Page 36 A number is selected at random from the numbers 3, 5, 5, 7, 7, 7, 9, 9, 9, 9 The probability that the selected number is their average is • $\frac{1}{10}$ • $\frac{3}{10}$ • $\frac{7}{10}$ • $\frac{9}{10}$ Q 9 | Page 36 The probability of throwing a number greater than 2 with a fair dice is • $\frac{3}{5}$ • $\frac{2}{5}$ • $\frac{2}{3}$ • $\frac{1}{3}$ Q 10 | Page 36 A card is accidently dropped from a pack of 52 playing cards. The probability that it is an ace is • $\frac{1}{4}$ • $\frac{1}{13}$ • $\frac{1}{52}$ • $\frac{12}{13}$ Q 11 | Page 36 A number is selected from numbers 1 to 25. The probability that it is prime is • $\frac{2}{3}$ • $\frac{1}{6}$ • $\frac{1}{3}$ • $\frac{5}{6}$ Q 12 | Page 36 Which of the following cannot be the probability of an event? • $\frac{2}{3}$ • $- 1 . 5$ • $15 %$ • $0 . 
7$ Q 13 | Page 36 If P(E) = 0.05, then P(not E) = • −0.05 • 0.5 • 0.9 •  0.95 Q 14 | Page 36 Which of the following cannot be the probability of occurence of an event? •  0.2 •  0.4 • 0.8 • 1.6 Q 15 | Page 36 The probability of a certain event is • 0 • 1 • 1/2 •  no existent Q 16 | Page 37 The probability of an impossible event is • 0 •  1 • 1/2 •  non-existent Q 17 | Page 37 Aarushi sold 100 lottery tickets in which 5 tickets carry prizes. If Priya purchased a ticket, what is the probability of Priya winning a prize? • $\frac{19}{20}$ • $\frac{1}{25}$ • $\frac{1}{20}$ • $\frac{17}{20}$ Q 18 | Page 37 A number is selected from first 50 natural numbers. What is the probability that it is a multiple of 3 or 5? • $\frac{13}{25}$ • $\frac{21}{50}$ • $\frac{12}{25}$ • $\frac{23}{50}$ Q 19 | Page 37 A month is selected at random in a year. The probability that it is March or October, is • $\frac{1}{12}$ • $\frac{1}{6}$ • $\frac{3}{4}$ •  None of these Q 20 | Page 37 From the letters of the word ''MOBILE",  a letter is selected. The probability that the letter is a vowel, is • $\frac{1}{3}$ • $\frac{3}{7}$ • $\frac{1}{6}$ • $\frac{1}{2}$ Q 21 | Page 37 A die is thrown once. The probability of getting a prime number is • $\frac{2}{3}$ • $\frac{1}{3}$ • $\frac{1}{2}$ • $\frac{1}{6}$ Q 22 | Page 37 The probability of getting an even number, when a die is thrown once is • $\frac{1}{2}$ • $\frac{1}{3}$ • $\frac{1}{6}$ • $\frac{5}{6}$ Q 23 | Page 37 A box contains 90 discs, numbered from 1 to 90. If one disc is drawn at random from the box, the probability that it bears a prime number less than 23, is • $\frac{7}{90}$ • $\frac{10}{90}$ • $\frac{4}{45}$ • $\frac{9}{89}$ Q 24 | Page 37 The probability that a number selected at random from the numbers 1, 2, 3, ..., 15 is a multiple of 4, is • $\frac{4}{15}$ • $\frac{2}{15}$ • $\frac{1}{5}$ • $\frac{1}{3}$ Q 25 | Page 37 Two different coins are tossed simultaneously. The probability of getting at least one head is • $\frac{1}{4}$ • $\frac{1}{8}$ • $\frac{3}{4}$ • $\frac{7}{8}$ Q 26 | Page 37 If two different dice are rolled together, the probability of getting an even number on both dice is • $\frac{1}{36}$ • $\frac{1}{2}$ • $\frac{1}{6}$ • $\frac{1}{4}$ Q 27 | Page 37 A number is selected at random from the numbers 1 to 30. The probability that it is a prime number is • $\frac{2}{3}$ • $\frac{1}{6}$ • $\frac{1}{3}$ • $\frac{11}{30}$ Q 28 | Page 38 A card is drawn at random from a pack of 52 cards. The probability that the drawn card is not an ace is • $\frac{1}{13}$ • $\frac{9}{13}$ • $\frac{4}{13}$ • $\frac{12}{13}$ Q 29 | Page 38 A number x is chosen at random from the numbers −3, −2, −1, 0, 1, 2, 3 the probability that | x | < 2 is • $\frac{5}{7}$ • $\frac{2}{7}$ • $\frac{3}{7}$ • $\frac{1}{7}$ Q 30 | Page 38 If a number x is chosen from the numbers 1, 2, 3, and a number y is selected from the numbers 1, 4, 9. Then, P(xy < 9) • $\frac{7}{9}$ • $\frac{5}{9}$ • $\frac{2}{3}$ • $\frac{1}{9}$ Q 31 | Page 38 The probability that a non-leap year has 53 sundays, is • $\frac{2}{7}$ • $\frac{5}{7}$ • $\frac{6}{7}$ • $\frac{1}{7}$ Q 32 | Page 38 In a single throw of a pair of dice, the probability of getting the sum a perfect square is • $\frac{1}{18}$ • $\frac{7}{36}$ • $\frac{1}{6}$ • $\frac{2}{9}$ Q 33 | Page 38 What is the probability that a non-leap year has 53 Sundays? • $\frac{6}{7}$ • $\frac{1}{7}$ • $\frac{5}{7}$ • None of these Q 34 | Page 38 Two numbers 'a' and 'b' are selected successively without replacement in that order from the integers 1 to 10. 
The probability that$\frac{a}{b}$ is an integer, is • $\frac{17}{45}$ • $\frac{1}{5}$ • $\frac{17}{90}$ • $\frac{8}{45}$ Q 35 | Page 38 Two dice are rolled simultaneously. The probability that they show different faces is • $\frac{2}{3}$ • $\frac{1}{6}$ • $\frac{1}{3}$ • $\frac{5}{6}$ Q 36 | Page 38 What is the probability that a leap year has 52 Mondays? • $\frac{2}{7}$ • $\frac{4}{7}$ • $\frac{5}{7}$ • $\frac{6}{7}$ Q 37 | Page 38 If a two digit number is chosen at random, then the probability that the number chosen is a multiple of 3, is • $\frac{3}{10}$ • $\frac{29}{100}$ • $\frac{1}{3}$ • $\frac{7}{25}$ Q 38 | Page 38 Two dice are thrown together. The probability of getting the same number on both dice is • $\frac{1}{2}$ • $\frac{1}{3}$ • $\frac{1}{6}$ • $\frac{1}{12}$ Q 39 | Page 39 In a family of 3 children, the probability of having at least one boy is • $\frac{7}{8}$ • $\frac{1}{8}$ • $\frac{5}{8}$ • $\frac{3}{4}$ Q 40 | Page 39 A bag contains cards numbered from 1 to 25. A card is drawn at random from the bag. The probability that the number on this card is divisible by both 2 and 3 is • $\frac{1}{5}$ • $\frac{3}{25}$ • $\frac{4}{25}$ • $\frac{2}{25}$ ## Chapter 16: Probability Ex. 16.1Ex. 16.4Ex. 16.2Others ## RD Sharma solutions for Class 10 Maths chapter 16 - Probability RD Sharma solutions for Class 10 Maths chapter 16 (Probability) include all questions with solution and detail explanation. This will clear students doubts about any question and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear your confusions, if any. Shaalaa.com has the CBSE Class 10 Maths solutions in a manner that help students grasp basic concepts better and faster. Further, we at Shaalaa.com provide such solutions so that students can prepare for written exams. RD Sharma textbook solutions can be a core help for self-study and acts as a perfect self-help guidance for students. Concepts covered in Class 10 Maths chapter 16 Probability are Sample Space, Concept Or Properties of Probability, Simple Problems on Single Events, Introduction to Probability, Probability - A Theoretical Approach, Probability Examples and Solutions, Probability Examples and Solutions, Introduction to Probability, Probability - A Theoretical Approach, Type of Event - Elementry, Type of Event - Complementry, Type of Event - Exclusive, Type of Event - Exhaustive, Equally Likely Outcomes, Probability of an Event, Concept Or Properties of Probability, Addition Theorem, Random Experiments, Sample Space. Using RD Sharma Class 10 solutions Probability exercise by students are an easy way to prepare for the exams, as they involve solutions arranged chapter-wise also page wise. The questions involved in RD Sharma Solutions are important questions that can be asked in the final exam. Maximum students of CBSE Class 10 prefer RD Sharma Textbook Solutions to score more in exam. Get the free view of chapter 16 Probability Class 10 extra questions for Class 10 Maths and can use Shaalaa.com to keep it handy for your exam preparation
Thread: Equality of two sets (using Boolean algebra) 1. Equality of two sets (using Boolean algebra) Hello. I recently started my course, so this is pretty trivial. Prove: $\displaystyle $f\left( {\bigcup\limits_{i \in \ell } {{A_i}} } \right) = \bigcup\limits_{i \in \ell } {f\left( {{A_i}} \right)}$$ Well, analogous examples demonstrated in my university begin with $\displaystyle $\forall y \in f\left( {\bigcup\limits_{i \in \ell } {{A_i}} } \right) \Rightarrow \left( {\exists x:f\left( x \right) = y} \right) \wedge \left( {x \in \bigcup\limits_{i \in \ell } {{A_i}} } \right)$$ The problem is that I am not sure how to represent this highly abstract union of sets since $\displaystyle $x \in \bigcup\limits_{i \in \ell } {{A_i}} \Rightarrow x \in {A_i} \wedge i \in \ell$$ I guess wouldn't make enough sense (I could tell the same about intersection of those very same sets). So what is missing on my mind? Thanks for help. 2. Originally Posted by Pranas The problem is that I am not sure how to represent this highly abstract union of sets since $\displaystyle $x \in \bigcup\limits_{i \in \ell } {{A_i}} \Rightarrow x \in {A_i} \wedge i \in \ell$$ I guess wouldn't make enough sense (I could tell the same about intersection of those very same sets This is one standard way: $\left( {\exists j \in \ell } \right)\left[ {x \in A_j } \right]$ 3. Originally Posted by Plato This is one standard way: $\left( {\exists j \in \ell } \right)\left[ {x \in A_j } \right]$ I believe it's all I needed. Thanks again And an intersection would be $\left( {\forall j \in \ell } \right)\left[ {x \in A_j } \right]$ right?
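For completeness, the argument can be finished along these lines (a sketch using Plato's notation; both inclusions are needed). If $\displaystyle y \in f\left( {\bigcup\limits_{i \in \ell } {A_i} } \right)$, then $y = f(x)$ for some $\displaystyle x \in \bigcup\limits_{i \in \ell } A_i$, i.e. $\left( {\exists j \in \ell } \right)\left[ {x \in A_j } \right]$; hence $\displaystyle y = f(x) \in f\left( A_j \right) \subseteq \bigcup\limits_{i \in \ell } f\left( A_i \right)$. Conversely, if $\displaystyle y \in \bigcup\limits_{i \in \ell } f\left( A_i \right)$, then $\left( {\exists j \in \ell } \right)\left[ {y \in f\left( A_j \right) } \right]$, so $y = f(x)$ for some $\displaystyle x \in A_j \subseteq \bigcup\limits_{i \in \ell } A_i$, and therefore $\displaystyle y \in f\left( {\bigcup\limits_{i \in \ell } {A_i} } \right)$. And yes, the corresponding membership statement for the intersection is $\left( {\forall j \in \ell } \right)\left[ {x \in A_j } \right]$; note, however, that for intersections only $\displaystyle f\left( {\bigcap\limits_{i \in \ell } {A_i} } \right) \subseteq \bigcap\limits_{i \in \ell } f\left( A_i \right)$ holds in general, with equality when $f$ is injective.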
Chapter 11: Delta Functions ### The Dirac Delta Function The Dirac delta function $\delta(x)$ is not really a “function”. It is a mathematical entity called a distribution which is well defined only when it appears under an integral sign. It has the following defining properties: $$\delta(x) = \cases{0, \qquad &if x\not= 0\cr \infty, \qquad &if x=0\cr}$$ $$\int_b^c \delta(x)\, dx = 1 \qquad\qquad b<0<c$$ $$x\,\delta(x) \equiv 0$$ It may be easiest to think of the delta function as the limit of a sequence of steps, each of which is higher and narrower than the previous step, such that the area under the step is always one; see Figure 1. Figure 1: The function $\delta(x)$ can be approximated by a series of steps that get progressively thinner and higher in such a way that the area under the curve is always equal to one. The properties of the delta function allow us to compute \begin{eqnarray} \Int_{-\infty}^{\infty} f(x)\,\delta(x) \,dx &=& \Int_{-\infty}^{\infty} f(0)\,\delta(x) \,dx \\ &=& f(0) \Int_{-\infty}^{\infty} \delta(x) \,dx \nonumber\\ &=& f(0) \nonumber \end{eqnarray} We can shift the “spike” in the delta function as usual, obtaining $\delta(x-a)$. This shifted delta function satisfies $$\Int_{-\infty}^{\infty} f(x)\,\delta(x-a) \,dx = f(a)$$ Thus, the Dirac delta function can be used to pick out the value of a function at any desired point. We can relate the delta function to the step function in the following way. Consider the function $g(x)$ given by the integral $$g(x)=\Int_{-\infty}^x \delta(u-a)\,du$$ Notice the variable $x$ in the upper limit of the integral. The value of this function $g(x)$ is $0$ if we stop integrating before we reach the peak of the delta function, i.e. for $x<a$. If we integrate through the peak, the value of the integral is $1$, i.e. for $x>a$. Thus, we have argued that the value of the integral, thought of as a function of $x$, is just the step function $$\Theta(x-a) = \Int_{-\infty}^x \delta(u-a)\,du$$ (Recall that we don't really care about the choice of $\Theta(0)$, so we don't need to worry about the value of this function if we stop integrating exactly at $x=a$.) If the step function is the integral of the delta function, then the delta function must be the derivative of the step function. $$\frac{d}{dx}\Theta(x-a) = \delta(x-a) \label{fdelta}$$ You should be able to persuade yourself that this statement is reasonable geometrically if you think of the derivative of a function as representing its slope.
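Two quick checks of these statements may help. First, the sifting property in action:
$$\int_{-\infty}^{\infty} (x^2+1)\,\delta(x-2)\,dx = 2^2 + 1 = 5 .$$
Second, the derivative relation is consistent with the sifting property: for a function $f(x)$ that vanishes as $x\to\pm\infty$, integrating by parts gives
$$\int_{-\infty}^{\infty} f(x)\,\frac{d}{dx}\Theta(x-a)\,dx = \Big[f(x)\,\Theta(x-a)\Big]_{-\infty}^{\infty} - \int_{a}^{\infty} f'(x)\,dx = 0 + f(a) ,$$
which is exactly what $\delta(x-a)$ under the integral would produce.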
# Documentation of the SUNFLUIDH numerical simulation code

## Examples of data set
The user finds here some examples illustrating different configurations related to the namelist "Velocity_Initialization". Data that keep their default values and are not explicitly required are generally omitted for the sake of clarity.

#### Uniform velocity field
The velocity is oriented along the J-direction only. Its value is $1.5$. The other velocity components are null.
&Velocity_Initialization I_Velocity_Reference_Value = 0.0 , J_Velocity_Reference_Value = 1.5 , K_Velocity_Reference_Value = 0.0 , Initial_Field_Option_For_Velocity_I = 0 , Initial_Field_Option_For_Velocity_J = 0 , Initial_Field_Option_For_Velocity_K = 0, White_Noise_Magnitude_For_Velocity_I= 0.0 , White_Noise_Magnitude_For_Velocity_J= 0.0 , White_Noise_Magnitude_For_Velocity_K= 0.0 /
Taking the default values of the namelist into account, it can simply be written as:
&Velocity_Initialization J_Velocity_Reference_Value = 1.5 /

#### Parabolic velocity profile
The velocity is oriented along the J-direction and its mean value over the cross section of the domain is $1.5$. The profile depends on the I-direction. The other velocity components are null.
&Velocity_Initialization J_Velocity_Reference_Value = 1.5 , Initial_Field_Option_For_Velocity_J = 1 /

#### "Spreading" velocity field from an inlet
Relevant when just one inlet is present. The normal inflow is oriented along the I-direction. The other velocity components are null.
&Velocity_Initialization Initial_Field_Option_For_Velocity_I = 3 /
The inflow velocity profile is spread out over the domain in the I-direction. The mean value of the velocity component is not required (but this value can still be set just to keep it in mind).

#### Parabolic velocity field with superimposed white noise
The velocity is oriented along the J-direction only. Its mean value is $1.5$. The parabolic profile depends on the I-direction.
&Velocity_Initialization I_Velocity_Reference_Value = 0.0 , J_Velocity_Reference_Value = 1.5 , K_Velocity_Reference_Value = 0.0 , Initial_Field_Option_For_Velocity_I = 0 , Initial_Field_Option_For_Velocity_J = 1 , Initial_Field_Option_For_Velocity_K = 0, White_Noise_Magnitude_For_Velocity_I= 0.05 , White_Noise_Magnitude_For_Velocity_J= 0.1 , White_Noise_Magnitude_For_Velocity_K= 0.02 /
A white noise is superimposed on each velocity component such that:
• The random fluctuations of the I-velocity component are 5% of the mean value of the J-velocity component (because the mean value of the I-velocity component is null).
• The random fluctuations of the J-velocity component are 10% of the local value given by the parabolic profile.
• The random fluctuations of the K-velocity component are 2% of the mean value of the J-velocity component (because the mean value of the K-velocity component is null).
Keep in mind that the magnitude of the fluctuations is defined with respect to the local value of the velocity component only if that local value is non-zero. Otherwise the magnitude is based on the reference value given by I_Velocity_Reference_Value, J_Velocity_Reference_Value or K_Velocity_Reference_Value.
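As a further illustration of that rule (a hypothetical data set, not taken from the reference cases above, and assuming the same default options as in the first example): a uniform I-directed flow of $2.0$ with white noise on the I-component only.
&Velocity_Initialization I_Velocity_Reference_Value = 2.0 , White_Noise_Magnitude_For_Velocity_I = 0.1 /
Here the initial I-velocity field is uniform and non-zero, so the random fluctuations are 10% of the local value $2.0$ everywhere; the J and K components stay null and receive no noise.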
### Top Boundary Age of the Vermiculated Beds in the Shaling Area of the Poyang Lake

Zhi-yong HAN, Xu-sheng LI, Ying-yong CHEN, Shuang-wen YI, Hua-yu LU, Da-yuan YANG

School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing, Jiangsu 210093, China

Received: 2010-11-08; Revised: 2011-01-04; Online: 2012-01-20; Published: 2012-01-20

About the first author: HAN Zhi-yong (1968-), male, from Dantu, Jiangsu, associate professor, mainly engaged in geomorphology and environmental change. E-mail: [email protected]

Funding: Supported by the National Natural Science Foundation of China (Grants 40771023, 40930103, 40971004).

Abstract: Sand hills distributed along the shores of the Poyang Lake are composed of alternating eolian sand layers and silt layers. The Shaling sand hill, about 2 km in width and 5 km in length, is located in Liaohua Town of Xingzi County. On its north and west margins, three sections (SS1, SS2 and SS3) were investigated. Vermiculated mottles in this hill can be divided into three types. Vermiculated mottles of type Ⅰ are thick and dense and only occur in the lower part (layer 1) of section SS1. Vermiculated mottles of type Ⅱ are relatively thin and sparsely scattered and occur in the beds (layer 2) above type Ⅰ. Vermiculated mottles of type Ⅲ are indistinctly shaped and occur in the upper part (layer 5) of section SS1 and the upper part (layer 2) of section SS2. A sand bed (layer 4) separates the vermiculated mottles of type Ⅲ from those of type Ⅰ and type Ⅱ in section SS1. Vermiculated mottles of type Ⅲ cover a sand bed (layer 1) in section SS2. Section SS3 comprises only one sand bed (layer 1) and develops no vermiculated mottles. Vermiculated mottles of types Ⅰ and Ⅱ are mature vermiculated mottles, whereas those of type Ⅲ are immature. OSL samples were collected from each sand bed. Quartz grains of 125-250 μm were separated from the sand samples following the standard pretreatment sequence. The equivalent dose was measured using the Single Aliquot Regeneration protocol in the Laboratory of Surface Process of Nanjing University. The annual dose was determined from the contents of U, Th and K measured by the neutron activation method at the China Institute of Atomic Energy. In the OSL dating of the sand samples, no evidence indicates that the samples show saturated OSL signals or that the annual dose of the samples was affected by weathering, so the OSL ages can be interpreted as deposition ages. The sand layers of sections SS1 and SS2 were deposited about 80 ka and 71 ka ago, respectively, and the sand layer of section SS3 has accumulated since 29 ka. From these OSL ages, the age of the layers with mature vermiculated mottles is older than 80 ka, and the age of the layers with immature vermiculated mottles ranges from 71 ka to 29 ka. The top boundary age of the vermiculated beds can be inferred to be 80-29 ka if the layers with immature vermiculated mottles are taken as the vermiculated beds. The top boundary age of the vermiculated beds is older than 80 ka if the layers with mature vermiculated mottles are taken as the vermiculated beds. Both inferred top boundary ages are much younger than 400 ka, which was previously reported as the formation age of the vermiculated beds. However, this conclusion is drawn from three OSL ages in the Shaling area and should be validated by further studies.

P597
# Fraction Division Online Quiz Following quiz provides multiplication Choice Questions (MCQs) related to Fraction Division. You will have to read all the given answers and click over the correct answer. If you are not sure about the answer then you can check the answer using Show Answer button. You can use Next Quiz button to check new set of questions in the quiz. Q 1 - Divide $\frac{5}{7}$ ÷ $\frac{9}{7}$ ### Explanation Step 1: Rewriting division as a multiplication operation $\frac{5}{7}$ ÷ $\frac{9}{7}$ = $\frac{5}{7}$ × $\frac{7}{9}$ = $\frac{(5×7)}{(7×9)}$ = $\frac{35}{63}$ Step 2: $\frac{5}{7}$ ÷ $\frac{9}{7}$ = $\frac{35}{63}$ Step 3: Reducing to lowest terms $\frac{35}{63}$ = $\frac{5}{9}$ Q 2 - Divide $\frac{4}{9}$ ÷ $\frac{6}{15}$ ### Explanation Step 1: Rewriting division as a multiplication operation $\frac{4}{9}$ ÷ $\frac{6}{15}$ = $\frac{4}{9}$ × $\frac{15}{6}$ = $\frac{(4×15)}{(9×6)}$ = $\frac{60}{54}$ Step 2: $\frac{4}{9}$ ÷ $\frac{6}{15}$ = $\frac{60}{54}$ Step 3: Reducing to lowest terms $\frac{60}{54}$ = $\frac{10}{9}$ Q 3 - Divide $\frac{5}{7}$ ÷ $\frac{5}{9}$ ### Explanation Step 1: Rewriting division as a multiplication operation $\frac{5}{7}$ ÷ $\frac{5}{9}$ = $\frac{5}{7}$ × $\frac{9}{5}$ = $\frac{(5×9)}{(7×5)}$ = $\frac{45}{35}$ Step 2: $\frac{5}{7}$ ÷ $\frac{5}{9}$ = $\frac{45}{35}$ Step 3: Reducing to lowest terms $\frac{45}{35}$ = $\frac{9}{7}$ Q 4 - Divide $\frac{3}{5}$ ÷ $\frac{8}{10}$ ### Explanation Step 1: Rewriting division as a multiplication operation $\frac{3}{5}$ ÷ $\frac{8}{10}$ = $\frac{3}{5}$ × $\frac{10}{8}$ = $\frac{(3×10)}{(5×8)}$ = $\frac{30}{40}$ Step 2: $\frac{3}{5}$ ÷ $\frac{8}{10}$ $\frac{30}{40}$ Step 3: Reducing to lowest terms $\frac{30}{40}$ = $\frac{3}{4}$ Q 5 - Divide $\frac{5}{9}$ ÷ $\frac{7}{9}$ ### Explanation Step 1: Rewriting division as a multiplication operation $\frac{5}{9}$ ÷ $\frac{7}{9}$ = $\frac{5}{9}$ × $\frac{9}{7}$ = $\frac{(5×9)}{(9×7)}$ = $\frac{45}{63}$ Step 2: $\frac{5}{9}$ ÷ $\frac{7}{9}$ = $\frac{45}{63}$ Step 3: Reducing to lowest terms $\frac{45}{63}$ = $\frac{5}{7}$ Q 6 - Divide $\frac{4}{7}$ ÷ $\frac{5}{14}$ ### Explanation Step 1: Rewriting division as a multiplication operation $\frac{4}{7}$ ÷ $\frac{5}{14}$ = $\frac{4}{7}$ × $\frac{14}{5}$ = $\frac{(4×14)}{(7×5)}$ = $\frac{56}{35}$ Step 2: $\frac{4}{7}$ ÷ $\frac{5}{14}$ = $\frac{56}{35}$ Step 3: Reducing to lowest terms $\frac{56}{35}$ = $\frac{8}{5}$ Q 7 - Divide $\frac{4}{6}$ ÷ $\frac{7}{12}$ ### Explanation Step 1: Rewriting division as a multiplication operation $\frac{4}{6}$ ÷ $\frac{7}{12}$ = $\frac{4}{6}$ × $\frac{12}{7}$ = $\frac{(4×12)}{(6×7)}$ = $\frac{48}{42}$ Step 2: $\frac{4}{6}$ ÷ $\frac{7}{12}$ = $\frac{48}{42}$ Step 3: Reducing to lowest terms $\frac{48}{42}$ = $\frac{8}{7}$ Q 8 - Divide $\frac{3}{5}$ ÷ $\frac{7}{15}$ ### Explanation Step 1: Rewriting division as a multiplication operation $\frac{3}{5}$ ÷ $\frac{7}{15}$ = $\frac{3}{5}$ × $\frac{15}{7}$ = $\frac{(3×15)}{(5×7)}$ = $\frac{45}{35}$ Step 2: $\frac{3}{5}$ ÷ $\frac{7}{15}$ = $\frac{45}{35}$ Step 3: Reducing to lowest terms $\frac{45}{35}$ = $\frac{9}{7}$ Q 9 - Divide $\frac{5}{7}$ ÷ $\frac{8}{7}$ ### Explanation Step 1: $\frac{5}{7}$ ÷ $\frac{8}{7}$ Rewriting division as a multiplication operation $\frac{5}{7}$ ÷ $\frac{8}{7}$ = $\frac{5}{7}$ × $\frac{7}{8}$ = $\frac{(5×7)}{(7×8)}$ = $\frac{35}{56}$ Step 2: $\frac{5}{7}$ ÷ $\frac{8}{7}$ = $\frac{35}{56}$ Step 3: Reducing to lowest terms $\frac{35}{56}$ = $\frac{5}{8}$ Q 10 - Divide $\frac{5}{9}$ ÷ $\frac{7}{12}$ ### Explanation 
Step 1: Rewriting division as a multiplication operation $\frac{5}{9}$ ÷ $\frac{7}{12}$ = $\frac{5}{9}$ × $\frac{12}{7}$ = $\frac{(5×12)}{(9×7)}$ = $\frac{60}{63}$ Step 2: $\frac{5}{9}$ ÷ $\frac{7}{12}$ = $\frac{60}{63}$ Step 3: Reducing to lowest terms $\frac{60}{63}$ = $\frac{20}{21}$
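Every item above applies the same invert-and-multiply rule, which can be stated once in general (for nonzero denominators):
$$\frac{a}{b} \div \frac{c}{d} = \frac{a}{b} \times \frac{d}{c} = \frac{ad}{bc}$$
For instance, Q 10 is $\frac{5}{9} \div \frac{7}{12} = \frac{5 \times 12}{9 \times 7} = \frac{60}{63} = \frac{20}{21}$, exactly as in the worked steps.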
# Phase transition Phase transition This diagram shows the nomenclature for the different phase transitions. A phase transition is the transformation of a thermodynamic system from one phase or state of matter to another. A phase of a thermodynamic system and the states of matter have uniform physical properties. During a phase transition of a given medium certain properties of the medium change, often discontinuously, as a result of some external condition, such as temperature, pressure, and others. For example, a liquid may become gas upon heating to the boiling point, resulting in an abrupt change in volume. The measurement of the external conditions at which the transformation occurs is termed the phase transition point. Phase transitions are common occurrences observed in nature and many engineering techniques exploit certain types of phase transition. The term is most commonly used to describe transitions between solid, liquid and gaseous states of matter, in rare cases including plasma. ## Types of phase transition • The transitions between the solid, liquid, and gaseous phases of a single component, due to the effects To From Solid Liquid Gas Plasma Solid Solid-solid transformation Melting/fusion Sublimation N/A Liquid Freezing N/A Boiling/evaporation N/A Gas Deposition Condensation N/A Ionization Plasma N/A N/A Recombination/deionization N/A A typical phase diagram. The dotted line gives the anomalous behavior of water. A small piece of rapidly melting argon ice simultaneously shows the transitions from solid to liquid to gas. • A eutectic transformation, in which a two component single phase liquid is cooled and transforms into two solid phases. The same process, but beginning with a solid instead of a liquid is called a eutectoid transformation. • A peritectic transformation, in which a two component single phase solid is heated and transforms into a solid phase and a liquid phase. • A spinodal decomposition, in which a single phase is cooled and separates into two different compositions of that same phase. • Transition to a mesophase between solid and liquid, such as one of the "liquid crystal" phases. • The transition between the ferromagnetic and paramagnetic phases of magnetic materials at the Curie point. • The transition between differently ordered, commensurate or incommensurate, magnetic structures, such as in cerium antimonide. • The martensitic transformation which occurs as one of the many phase transformations in carbon steel and stands as a model for displacive phase transformations. • Changes in the crystallographic structure such as between ferrite and austenite of iron. • Order-disorder transitions such as in alpha-titanium aluminides. • The emergence of superconductivity in certain metals when cooled below a critical temperature. • The transition between different molecular structures (polymorphs or allotropes), especially of solids, such as between an amorphous structure and a crystal structure or between two different crystal structures. • Quantum condensation of bosonic fluids, such as Bose-Einstein condensation and the superfluid transition in liquid helium. • The breaking of symmetries in the laws of physics during the early history of the universe as its temperature cooled. Phase transitions occur when the thermodynamic free energy of a system is non-analytic for some choice of thermodynamic variables (cf. phases). 
This condition generally stems from the interactions of a large number of particles in a system, and does not appear in systems that are too small. At the phase transition point (for instance, boiling point) the two phases of a substance, liquid and vapor, have identical free energies and therefore are equally likely to exist. Below the boiling point, the liquid is the more stable state of the two, whereas above the gaseous form is preferred. It is sometimes possible to change the state of a system diabatically (as opposed to adiabatically) in such a way that it can be brought past a phase transition point without undergoing a phase transition. The resulting state is metastable, i.e. not theoretically stable, but quasistable. This occurs in superheating, supercooling and supersaturation. ## Classifications ### Ehrenfest classification Paul Ehrenfest classified phase transitions based on the behavior of the thermodynamic free energy as a function of other thermodynamic variables. Under this scheme, phase transitions were labeled by the lowest derivative of the free energy that is discontinuous at the transition. First-order phase transitions exhibit a discontinuity in the first derivative of the free energy with respect to some thermodynamic variable.[1] The various solid/liquid/gas transitions are classified as first-order transitions because they involve a discontinuous change in density, which is the first derivative of the free energy with respect to chemical potential. Second-order phase transitions are continuous in the first derivative (the order parameter, which is the first derivative of the free energy with respect to the external field, is continuous across the transition) but exhibit discontinuity in a second derivative of the free energy.[1] These include the ferromagnetic phase transition in materials such as iron, where the magnetization, which is the first derivative of the free energy with the applied magnetic field strength, increases continuously from zero as the temperature is lowered below the Curie temperature. The magnetic susceptibility, the second derivative of the free energy with the field, changes discontinuously. Under the Ehrenfest classification scheme, there could in principle be third, fourth, and higher-order phase transitions. Though useful, Ehrenfest's classification has been found to be an inaccurate method of classifying phase transitions, for it does not take into account the case where a derivative of free energy diverges (which is only possible in the thermodynamic limit). For instance, in the ferromagnetic transition, the heat capacity diverges to infinity. ### Modern classifications In the modern classification scheme, phase transitions are divided into two broad categories, named similarly to the Ehrenfest classes: First-order phase transitions are those that involve a latent heat. During such a transition, a system either absorbs or releases a fixed (and typically large) amount of energy. During this process, the temperature of the system will stay constant as heat is added: the system is in a "mixed-phase regime" in which some parts of the system have completed the transition and others have not. Familiar examples are the melting of ice or the boiling of water (the water does not instantly turn into vapor, but forms a turbulent mixture of liquid water and vapor bubbles). Second-order phase transitions are also called continuous phase transitions. 
They are characterized by a divergent susceptibility, an infinite correlation length, and a power-law decay of correlations near criticality. Examples of second-order phase transitions are the ferromagnetic transition, superconductor and the superfluid transition. Lev Landau gave a phenomenological theory of second order phase transitions. Several transitions are known as the infinite-order phase transitions. They are continuous but break no symmetries. The most famous example is the Kosterlitz–Thouless transition in the two-dimensional XY model. Many quantum phase transitions in two-dimensional electron gases belong to this class. The liquid-glass transition is observed in many polymers and other liquids that can be supercooled far below the melting point of the crystalline phase. This is atypical in several respects. It is not a transition between thermodynamic ground states: it is widely believed that the true ground state is always crystalline. Glass is a quenched disorder state, and its entropy, density, and so on, depend on the thermal history. Therefore, the glass transition is primarily a dynamic phenomenon: on cooling a liquid, internal degrees of freedom successively fall out of equilibrium. However, there is a longstanding debate whether there is an underlying second-order phase transition in the hypothetical limit of infinitely long relaxation times. ## Characteristic properties ### Critical points In any system containing liquid and gaseous phases, there exists a special combination of pressure and temperature, known as the critical point, at which the transition between liquid and gas becomes a second-order transition. Near the critical point, the fluid is sufficiently hot and compressed that the distinction between the liquid and gaseous phases is almost non-existent. This is associated with the phenomenon of critical opalescence, a milky appearance of the liquid due to density fluctuations at all possible wavelengths (including those of visible light). ### Order parameters The order parameter is normally a quantity which is 0 in one phase (usually above the critical point), and non-zero in the other. It characterises the onset of order at the phase transition. The order parameter susceptibility will usually diverge approaching the critical point. For a ferromagnetic system undergoing a phase transition, the order parameter is the net magnetization. For liquid/gas transitions, the order parameter is related to the density. When symmetry is broken, one needs to introduce one or more extra variables to describe the state of the system. For example, in the ferromagnetic phase, one must provide the net magnetization, whose direction was spontaneously chosen when the system cooled below the Curie point. Such variables are examples of order parameters. An order parameter is a measure of the degree of order in a system; the extreme values are 0 for total disorder and 1 for complete order.[2] For example, an order parameter can indicate the degree of order in a liquid crystal. However, note that order parameters can also be defined for non-symmetry-breaking transitions. Some phase transitions, such as superconducting and ferromagnetic, can have order parameters for more than one degree of freedom. In such phases, the order parameter may take the form of a complex number, a vector, or even a tensor, the magnitude of which goes to zero at the phase transition. There also exist dual descriptions of phase transitions in terms of disorder parameters. 
These indicate the presence of line-like excitations such as vortex- or defect[disambiguation needed ] lines. ### Relevance in cosmology Symmetry-breaking phase transitions play an important role in cosmology. It has been speculated that, in the hot early universe, the vacuum (i.e. the various quantum fields that fill space) possessed a large number of symmetries. As the universe expanded and cooled, the vacuum underwent a series of symmetry-breaking phase transitions. For example, the electroweak transition broke the SU(2)×U(1) symmetry of the electroweak field into the U(1) symmetry of the present-day electromagnetic field. This transition is important to understanding the asymmetry between the amount of matter and antimatter in the present-day universe (see electroweak baryogenesis.) Progressive phase transitions in an expanding universe are implicated in the development of order in the universe, as is illustrated by the work of Eric Chaisson[3] and David Layzer.[4] See also Relational order theories. ### Critical exponents and universality classes Continuous phase transitions are easier to study than first-order transitions due to the absence of latent heat, and they have been discovered to have many interesting properties. The phenomena associated with continuous phase transitions are called critical phenomena, due to their association with critical points. It turns out that continuous phase transitions can be characterized by parameters known as critical exponents. The most important one is perhaps the exponent describing the divergence of the thermal correlation length by approaching the transition. For instance, let us examine the behavior of the heat capacity near such a transition. We vary the temperature T of the system while keeping all the other thermodynamic variables fixed, and find that the transition occurs at some critical temperature Tc. When T is near Tc, the heat capacity C typically has a power law behavior: $C \propto |T_c - T|^{-\alpha}.$ A similar behavior, but with the exponent ν instead of α, applies for the correlation length. The exponent ν is positive. This is different with α. Its actual value depends on the type of phase transition we are considering. For -1 < α < 0, the heat capacity has a "kink" at the transition temperature. This is the behavior of liquid helium at the lambda transition from a normal state to the superfluid state, for which experiments have found α = -0.013±0.003. At least one experiment was performed in the zero-gravity conditions of an orbiting satellite to minimize pressure differences in the sample.[5] This experimental value of α agrees with theoretical predictions based on variational perturbation theory.[6] For 0 < α < 1, the heat capacity diverges at the transition temperature (though, since α < 1, the enthalpy stays finite). An example of such behavior is the 3-dimensional ferromagnetic phase transition. In the three-dimensional Ising model for uniaxial magnets, detailed theoretical studies have yielded the exponent α ∼ +0.110. Some model systems do not obey a power-law behavior. For example, mean field theory predicts a finite discontinuity of the heat capacity at the transition temperature, and the two-dimensional Ising model has a logarithmic divergence. However, these systems are limiting cases and an exception to the rule. Real phase transitions exhibit power-law behavior. 
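To make the sign of α concrete, keep only the singular part $|T_c - T|^{-\alpha}$ of the heat capacity quoted above (a short numerical illustration, ignoring the regular background term). For the 3-dimensional Ising value α ≈ +0.110, halving the distance to $T_c$ multiplies this factor by $2^{0.110} \approx 1.08$, and shrinking $|T - T_c|$ by a factor of $10^3$ multiplies it by $10^{0.33} \approx 2.1$: a slow but genuine divergence. For the helium value α ≈ −0.013, the same factor is $|T_c - T|^{0.013}$, which tends to zero at $T_c$, so the measured heat capacity stays finite and only shows the cusp ("kink") described above.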
Several other critical exponents - β, γ, δ, ν, and η - are defined, examining the power law behavior of a measurable physical quantity near the phase transition. Exponents are related by scaling relations such as β = γ / (δ − 1), ν = γ / (2 − η). It can be shown that there are only two independent exponents, e.g. ν and η. It is a remarkable fact that phase transitions arising in different systems often possess the same set of critical exponents. This phenomenon is known as universality. For example, the critical exponents at the liquid-gas critical point have been found to be independent of the chemical composition of the fluid. More amazingly, but understandable from above, they are an exact match for the critical exponents of the ferromagnetic phase transition in uniaxial magnets. Such systems are said to be in the same universality class. Universality is a prediction of the renormalization group theory of phase transitions, which states that the thermodynamic properties of a system near a phase transition depend only on a small number of features, such as dimensionality and symmetry, and are insensitive to the underlying microscopic properties of the system. Again, the divergency of the correlation length is the essential point. ### Critical slowing down and other phenomena There are also other critical phenoma; e.g., besides static functions there is also critical dynamics. As a consequence, at a phase transition one may observe critical slowing down or speeding up. The large static universality classes of a continuous phase transition split into smaller dynamic universality classes. In addition to the critical exponents, there are also universal relations for certain static or dynamic functions of the magnetic fields and temperature differences from the critical value. ### Percolation Theory Another phenomenon which shows phase transitions and critical exponents is percolation. The simplest example is perhaps percolation in a two dimensional square lattice. Sites are randomly occupied with probability p. For small values of p the occupied sites form only small clusters. At a certain threshold pc a giant cluster is formed and we have a second order phase transition.[7] The behavior of P near pc is, P~(p-pc)β, where β is a critical exponent. ## References 1. ^ a b Blundell, Stephen J.; Katherine M. Blundell (2008). Concepts in Thermal Physics. Oxford University Press. ISBN 978-0198567707. 2. ^ A. D. McNaught and A. Wilkinson, ed. "Compendium of Chemical Terminology (commonly called The Gold Book)". IUPAC. ISBN 0-86542-684-8. Retrieved 2007-10-23. 3. ^ Chaisson, “Cosmic Evolution”, Harvard, 2001 4. ^ David Layzer, Cosmogenesis, The Development of Order in the Universe", Oxford Univ. Press, 1991 5. ^ Arxiv.org 6. ^ Prola.aps.org 7. ^ Armin Bunde and Shlomo Havlin (1996). Fractals and Disordered Systems. Springer. • Anderson, P.W., Basic Notions of Condensed Matter Physics, Perseus Publishing (1997). • Goldenfeld, N., Lectures on Phase Transitions and the Renormalization Group, Perseus Publishing (1992). • Krieger, Martin H., Constitutions of matter : mathematically modelling the most everyday of physical phenomena, University of Chicago Press, 1996. Contains a detailed pedagogical discussion of Onsager's solution of the 2-D Ising Model. • Landau, L.D. and Lifshitz, E.M., Statistical Physics Part 1, vol. 5 of Course of Theoretical Physics, Pergamon, 3rd Ed. (1994). 
• Kleinert, H., Critical Properties of φ4-Theories, World Scientific (Singapore, 2001); Paperback ISBN 981-02-4659-5 (readable online here). • Kleinert, H. and Verena Schulte-Frohlinde, Gauge Fields in Condensed Matter, Vol. I, "Superfluid and Vortex lines; Disorder Fields, Phase Transitions,", pp. 1–742, World Scientific (Singapore, 1989); Paperback ISBN 9971-5-0210-0 (readable online physik.fu-berlin.de) • Mussardo G., "Statistical Field Theory. An Introduction to Exactly Solved Models of Statistical Physics", Oxford University Press, 2010. • Schroeder, Manfred R., Fractals, chaos, power laws : minutes from an infinite paradise, New York: W.H. Freeman, 1991. Very well-written book in "semi-popular" style—not a textbook—aimed at an audience with some training in mathematics and the physical sciences. Explains what scaling in phase transitions is all about, among other things. • Yeomans J. M., Statistical Mechanics of Phase Transitions, Oxford University Press, 1992. • H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena (Oxford University Press, Oxford and New York 1971).
## PREP 2015 Question Authoring - Archived
### Re: Using 'if' in a Solution
by Daniele Arcara - Number of replies: 0
Thank you, Davide. My solution did have some math in it, and I had gotten around it by defining $solution_text1, $solution_math1, $solution_text2, and so on… This is a lot simpler, though!
Given a signal $$x[n] = [1, 2, 3, 4, 5]$$, how many transfer functions can be found with Padé approximation which have a causal impulse response and start with those five samples? How many of them are stable?

I found this question and I'm not sure if I understood it right. I'd say none of them is stable, since Padé can't guarantee stable solutions. The first five samples will be perfectly recreated in the impulse response. Therefore $$h[n]$$ for $$n = 0,\dots,4$$ looks like $$x[n]$$. But what about how many? Are there multiple solutions? Is this question badly formulated?

A further question is then: Find the Padé model with exactly one pole. One pole? Isn't that an all-pole model? Or is there some kind of rule like always the same amount of poles and zeros? I can't solve it with the Padé equations.

• what's p and q? their sum p+q+1 = 5, but what's the particular pair? Then given the order p, you find the coefficients a[k] (and b[k]) and look for possible transfer functions... Feb 6 '19 at 19:53
• p and q are not given in the first question. I assumed it's known from the fact that 5 samples need to be perfectly correct. For the second question: So I know q from 5-1-p? That would be 3 zeros and 1 pole? Doesn't this add extra lags? I thought p and q should only differ by one to be stable and causal. Feb 6 '19 at 19:57

The following might help. Given the data of $$5$$ samples, $$x[n] = [1, 2, 3, 4, 5]$$, you will have possible choices for the orders $$(p,q)$$ of $$a[k]$$ and $$b[k]$$, such as $$\{(4,0),(3,1),(2,2),(1,3),(0,4)\}$$. You can solve for each of the possibilities and check whether they yield stable systems or not, by looking at the roots of $$a[k]$$... afaik they should all be causal, as this is the Padé modeling assumption (unless the equations are modified to handle otherwise). Note that for certain data sets (including this one) some tail coefficients might turn out to be $$0$$ and the actual order can be less than that indicated by $$p$$ or $$q$$. Also note that a model with one pole is not an all-pole model. An all-pole model requires that $$q=0$$, and does not depend on $$p$$.

• In this context of exact matching for modeling, (4,0) is an all-pole Padé model with 4 poles and no zeros. Feb 6 '19 at 20:27
• All-pole means that there are no zeros. Since for Padé modeling $p+q+1 = N$ (N = number of data samples), all-pole means $q=0$ and $p = N-1$... Feb 6 '19 at 20:28
• it's 1 pole (p = 1) and then N-p-1 = q = 5 - 1 - 1 = 3 zeros... Feb 6 '19 at 20:37
• No, probably you misunderstood. Let me show an example. $$H_1(z) = \frac{ 1 + 0.4 z^{-1} + 0.79 z^{-2} + 0.26 z^{-3} }{ 1 - 0.79z^{-1} }$$ is a causal system as the numerator powers of $z$ are all non-positive; however the following $$H_2(z) = \frac{ 1 + 0.4 z^{1} + 0.79 z^{2} + 0.26 z^{-3} }{ 1 - 0.79z^{-1} }$$ is non-causal as the numerator has positive powers of $z$. Feb 6 '19 at 22:01
• Oh^^ That easy. Obviously you can see it on the highest "z" power. I knew that negative powers are causal, but thought it might depend on the number of poles and zeros too. Silly me. Feb 6 '19 at 22:09
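For what it's worth, here is the one-pole case worked out explicitly, under what I take to be the usual exact-matching (Padé) equations with a monic denominator; the indexing convention may differ slightly from your course notes. With $$p = 1$$, $$q = 3$$ and $$x[n] = [1,2,3,4,5]$$, write
$$H(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3}}{1 + a_1 z^{-1}} .$$
The single denominator equation (exact matching at $$n = q+1 = 4$$) is $$x[4] + a_1 x[3] = 0 \Rightarrow a_1 = -\tfrac{5}{4}$$, and the numerator follows from $$b_n = x[n] + a_1 x[n-1]$$ for $$n = 0,\dots,3$$ (with $$x[-1] = 0$$): $$b_0 = 1,\ b_1 = \tfrac{3}{4},\ b_2 = \tfrac{1}{2},\ b_3 = \tfrac{1}{4}$$. A quick check via the recursion $$h[n] = \tfrac{5}{4}\,h[n-1] + b_n$$ reproduces $$1, 2, 3, 4, 5$$ exactly and then keeps growing; the single pole sits at $$z = \tfrac{5}{4}$$, outside the unit circle, so this causal one-pole model is unstable. (For comparison, the $$(4,0)$$ all-pole choice collapses to $$H(z) = 1/(1 - z^{-1})^2$$ because the last two denominator coefficients come out zero; its double pole on the unit circle is not stable either.)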
Evaluate (1/128)/2
Multiply the numerator by the reciprocal of the denominator: (1/128) · (1/2).
Multiply 1/128 by 1/2: the denominators multiply to give 1/(128·2) = 1/256.
The result can be shown in multiple forms.
Exact Form: 1/256
Decimal Form: 0.00390625
# Why does kinetic energy below molecule bond energy break bonds? I was thinking about why bullet can break molecular bonds when it hits solid target despite the fact that the kinetic energy of the individual atoms that make up the bullet is less that the molecule bond energy threshold of the target. For example, lets say we fire 300 m/s graphite bullet at teflon target. Carbon fluorine bond is 5.5 eV and carbon atom moving at 300 m/s velocity has 0.005 eV kinetic energy yet the graphite bullet is able to break the teflon carbon fluorine bonds. How exactly do many carbon atoms with low kinetic energy join their energies together and break those bonds? Is this a phonon thing? Do they create together phonon wave with higher than 5.5 eV energy? • Why do you think that it is the kinetic energy that breaks the bonds? You can fracture a solid without giving it any kinetic energy by straining it. – lnmaurer Jul 17 at 4:12 • What other energy can it be? The only energy bullet has is kinetic energy. Yes yes I know mass equals energy and if it isnt liquid helium cooled bullet then it has thermal energy but for all practical purposes, bullets do stuff becose of kinetic energy. – wav scientist Jul 18 at 0:50 • The bullet-target system can have energy other than kinetic energy. I have posted an answer below. – lnmaurer Jul 18 at 4:55 The bullet-target system only has kinetic energy --- until the bullet collides with the target. Consider a system that is easier to visualize: instead of a bullet we have a baseball, and we throw it at a flexible rubber sheet. As the baseball collides with the sheet, the baseball causes the sheet to stretch. In the process, the baseball slows down. In other words, the collision converts kinetic energy in the baseball to strain energy in the sheet. If the strain in the sheet becomes to great, the sheet fractures (rips), and the strain energy is dissipated. The same process happens when a bullet hits a target. In the collision, the kinetic energy of the bullet is converted to strain energy in the target (and in the bullet, for that matter). If the strain becomes large enough, molecular (and intermolecular) bonds break and the target fractures. In other words, the kinetic energy doesn't directly cause bonds to break; strain from the collision causes the bonds to break. • Thanks for clarification. That strain, its mediated by phonons, correct? – wav scientist Jul 19 at 17:14 • I'm not sure what that question means. If something is strained, then its constituent atoms are, on average, further apart. There is then a restoring force from the bonds connecting the atoms. Chemical bonds are complicated, but I wouldn't say that phonons have much to do with it. – lnmaurer Jul 21 at 17:41 This is getting into chemistry but anyways, the question is based on physics. You are thinking of intramolecular bond energy, which is about the bond between atoms of a molecule. Breaking these would mean breaking a compound into its elements. For example, breaking the bond you talked about, for water ($$H_2O$$), would result in separate hydrogen and oxygen gas, instead of splitting up water. What you need to consider is intermolecular bonds - the bonds between different molecules. This is way weaker. This is what you break when splitting something like a block of wood or boiling some water away.
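A rough energy budget (with an assumed bullet mass, only to set the scale) makes the strain-energy answer above quantitative. A graphite bullet of mass $$m = 1\,\text{g}$$ at $$v = 300\,\text{m/s}$$ carries $$E = \tfrac{1}{2}mv^2 = \tfrac{1}{2}\,(10^{-3}\,\text{kg})(300\,\text{m/s})^2 = 45\,\text{J} \approx 2.8\times 10^{20}\,\text{eV},$$ which at $$5.5\,\text{eV}$$ per C-F bond is enough, in principle, for roughly $$5\times 10^{19}$$ broken bonds. The $$0.005\,\text{eV}$$ carried by any single atom is therefore not the relevant number: the collision first converts the bullet's macroscopic kinetic energy into elastic strain energy stored over a large volume of the target (and of the bullet), and bonds fail wherever the local strain exceeds what they can sustain, not where some individual atom happens to arrive with $$5.5\,\text{eV}$$.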
# The $221$ groups of order $|G| = 400$ A book of John Conway suggests there are 221 groups of order $$|G| = 400$$. How do I go about finding these. Commutative groups with $$ab = ba$$ can be listed very easily: • $$\mathbb{Z}/400\mathbb{Z}$$ • $$\mathbb{Z}/200\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$$ • $$\mathbb{Z}/25\mathbb{Z} \oplus \mathbb{Z}/16\mathbb{Z}$$ • ... There are 207 super-solvable groups of order $$|G| = 400$$ How do we list some of them? $$1 \leq H_0 \leq H_1 \leq H_2 \leq \dots \leq H_n = G$$ here $$H_i \vartriangleleft G$$ and $$H_{i+1}/H_i$$ is cyclic. This could be a great way to explain the difference between nilpotent and solvable groups. Some discussion here however I will put in the tag which includes (for example) matrix representation or permutation representations. There are 28 nilpotent groups of order $$|G| = 400$$. I haven't used GAP the question would be how does the computer program find such objects? Here's some of what it found: • $$G = (C_5 \ltimes Q_8 ) \times D_{10}$$ • $$G = (C_5 \ltimes C_5) \ltimes (C_4 \times C_4)$$ • $$G = C_2 \times ((C_5 \times C_5) \ltimes C_8)$$ These names or descriptions leave it upon us to say what these symmetries actually look like. • Example, $$D_{10}$$ is the dihedral group or the symmetry group of a 10-gon • Also $$D_{10} = C_{10} \ltimes C_2$$ , see also [1] . • $$C_n \simeq \mathbb{Z}/n\mathbb{Z}$$ is the cyclic group . • Pithy but partly-accurate answer: gap-system.org/Manuals/pkg/SmallGrp/doc/chap1.html . Though in all seriousness, the references there are probably going to be your best hope; since $|G|=p^4q^2$ it shouldn't be too terrible to analyze the cases. Apr 19 at 23:25 • Even with computer help, just going through the list of results. Apr 19 at 23:38 • Any particular reason you're interested in $|G|=400$ particularly? $|G|=100$ has few enough candidates to be relatively straightforwardly enumerable, and $|G|=200$ has a quarter as many groups as 400 does, while still having non-trivial members in most of the categories (e.g. nilpotent but not abelian, solvable but not supersolvable, etc.) Apr 19 at 23:52 Using the SmallGrp package for GAP and the function StructureDescription you can get some insight into the structure of the groups of order 400. See the documentation for how to interpret these strings, e.g. a colon : denotes a semidirect product. 
G := AllSmallGroups(400);; List(G, g -> StructureDescription(g)); [ "C25 : C16", "C400", "C25 : C16", "C25 : Q16", "C8 x D50", "C25 : (C8 : C2)", "C25 : QD16", "D400", "C2 x (C25 : C8)", "C25 : (C8 : C2)", "C4 x (C25 : C4)", "C25 : (C4 : C4)", "C25 : (C4 : C4)", "C25 : ((C4 x C2) : C2)", "C25 : QD16", "C25 : D16", "C25 : Q16", "C25 : QD16", "C25 : ((C4 x C2) : C2)", "C100 x C4", "C25 x ((C4 x C2) : C2)", "C25 x (C4 : C4)", "C200 x C2", "C25 x (C8 : C2)", "C25 x D16", "C25 x QD16", "C25 x Q16", "C25 : (C8 x C2)", "C25 : (C8 : C2)", "C4 x (C25 : C4)", "C25 : (C4 : C4)", "C2 x (C25 : C8)", "C25 : (C8 : C2)", "C25 : ((C4 x C2) : C2)", "C2 x (C25 : Q8)", "C2 x C4 x D50", "C2 x D200", "C25 : ((C4 x C2) : C2)", "D8 x D50", "C25 : ((C4 x C2) : C2)", "Q8 x D50", "C25 : ((C4 x C2) : C2)", "C2 x C2 x (C25 : C4)", "C2 x (C25 : D8)", "C100 x C2 x C2", "C50 x D8", "C50 x Q8", "C25 x ((C4 x C2) : C2)", "C5 x (C5 : C16)", "(C5 x C5) : C16", "C80 x C5", "(C2 x C2 x C2 x C2) : C25", "C2 x C2 x (C25 : C4)", "C2 x C2 x C2 x D50", "C50 x C2 x C2 x C2", "C5 x (C5 : C16)", "(C5 x C5) : C16", "(C5 x C5) : C16", "(C5 x C5) : C16", "(C5 : C8) x D10", "(C5 x C5) : (C8 x C2)", "(C5 x C5) : (C8 : C2)", "(C5 x C5) : (C8 : C2)", "(C5 x C5) : D16", "(C5 x C5) : D16", "(C5 x C5) : QD16", "(C5 x C5) : QD16", "(C5 x C5) : QD16", "(C5 x C5) : Q16", "(C5 x C5) : Q16", "(C5 : C4) x (C5 : C4)", "(C5 x C5) : ((C4 x C2) : C2)", "(C5 x C5) : ((C4 x C2) : C2)", "(C5 x C5) : (C4 : C4)", "(C5 x C5) : (C4 : C4)", "C40 x D10", "C5 x (C40 : C2)", "C5 x (C40 : C2)", "C5 x D80", "C5 x (C5 : Q16)", "C10 x (C5 : C8)", "C5 x ((C5 : C8) : C2)", "C20 x (C5 : C4)", "C5 x ((C5 : C4) : C4)", "C5 x (C20 : C4)", "C5 x ((C20 x C2) : C2)", "C5 x ((C5 x D8) : C2)", "C5 x ((C5 : Q8) : C2)", "C5 x ((C5 x Q8) : C2)", "C5 x (C5 : Q16)", "C5 x ((C10 x C2) : C4)", "C8 x ((C5 x C5) : C2)", "(C5 x C5) : (C8 : C2)", "(C5 x C5) : QD16", "(C5 x C5) : D16", "(C5 x C5) : Q16", "C2 x ((C5 x C5) : C8)", "(C5 x C5) : (C8 : C2)", "C4 x ((C5 x C5) : C4)", "(C5 x C5) : (C4 : C4)", "(C5 x C5) : (C4 : C4)", "(C5 x C5) : ((C4 x C2) : C2)", "(C5 x C5) : D16", "(C5 x C5) : QD16", "(C5 x C5) : QD16", "(C5 x C5) : Q16", "(C5 x C5) : ((C4 x C2) : C2)", "C20 x C20", "C5 x C5 x ((C4 x C2) : C2)", "C5 x C5 x (C4 : C4)", "C40 x C10", "C5 x C5 x (C8 : C2)", "C5 x C5 x D16", "C5 x C5 x QD16", "C5 x C5 x Q16", "(C5 x C5) : C16", "(C5 : C4) x (C5 : C4)", "(C5 x C5) : ((C4 x C2) : C2)", "(C5 x C5) : (C4 : C4)", "D10 x (C5 : C8)", "(C5 x C5) : (C8 x C2)", "(C5 x C5) : (C8 : C2)", "(C5 x C5) : (C8 : C2)", "(C5 x C5) : (C4 x C4)", "(C5 x C5) : ((C4 x C2) : C2)", "(C5 x C5) : (C4 : C4)", "(C5 x C5) : (C8 x C2)", "(C5 x C5) : (C8 : C2)", "(C5 x C5) : ((C4 x C2) : C2)", "(C5 x C5) : (C4 : C4)", "(C5 x C5) : D16", "(C5 x C5) : QD16", "(C5 x C5) : Q16", "(C5 x C5) : (C4 : C4)", "C5 x (C5 : (C8 x C2))", "C5 x ((C5 : C8) : C2)", "C20 x (C5 : C4)", "C5 x (C20 : C4)", "C10 x (C5 : C8)", "C5 x ((C5 : C8) : C2)", "C5 x ((C10 x C2) : C4)", "(C5 x C5) : (C8 x C2)", "(C5 x C5) : (C8 : C2)", "C4 x ((C5 x C5) : C4)", "(C5 x C5) : (C4 : C4)", "C2 x ((C5 x C5) : C8)", "(C5 x C5) : (C8 : C2)", "(C5 x C5) : ((C4 x C2) : C2)", "(C5 x C5) : (C8 x C2)", "(C5 x C5) : (C8 : C2)", "C4 x ((C5 x C5) : C4)", "(C5 x C5) : (C4 : C4)", "C2 x ((C5 x C5) : C8)", "(C5 x C5) : (C8 : C2)", "(C5 x C5) : ((C4 x C2) : C2)", "(C5 x C5) : (C8 x C2)", "(C5 x C5) : (C8 : C2)", "C4 x ((C5 x C5) : C4)", "(C5 x C5) : (C4 : C4)", "C2 x ((C5 x C5) : C8)", "(C5 x C5) : (C8 : C2)", "(C5 x C5) : ((C4 x C2) : C2)", "(C5 : Q8) x D10", "(C5 x 
C5) : ((C4 x C2) : C2)", "(C5 x C5) : ((C4 x C2) : C2)", "(C5 x C5) : (C2 x Q8)", "(C5 x C5) : ((C4 x C2) : C2)", "(C5 x C5) : ((C4 x C2) : C2)", "C4 x D10 x D10", "D40 x D10", "(C5 x C5) : (C2 x D8)", "C2 x ((C5 : C4) x D10)", "(C5 x C5) : ((C4 x C2) : C2)", "(C5 x C5) : ((C4 x C2) : C2)", "C2 x ((C5 x C5) : (C4 x C2))", "C2 x ((C5 x C5) : D8)", "C2 x ((C5 x C5) : D8)", "C2 x ((C5 x C5) : Q8)", "((C10 x C2) : C2) x D10", "(C5 x C5) : (C2 x D8)", "C10 x (C5 : Q8)", "C2 x C20 x D10", "C10 x D40", "C5 x ((C20 x C2) : C2)", "C5 x D8 x D10", "C5 x ((C4 x D10) : C2)", "C5 x Q8 x D10", "C5 x ((C4 x D10) : C2)", "C2 x C10 x (C5 : C4)", "C10 x ((C10 x C2) : C2)", "C2 x ((C5 x C5) : Q8)", "C2 x C4 x ((C5 x C5) : C2)", "C2 x ((C5 x C5) : D8)", "(C5 x C5) : ((C4 x C2) : C2)", "D8 x ((C5 x C5) : C2)", "(C5 x C5) : ((C4 x C2) : C2)", "Q8 x ((C5 x C5) : C2)", "(C5 x C5) : ((C4 x C2) : C2)", "C2 x C2 x ((C5 x C5) : C4)", "C2 x ((C5 x C5) : D8)", "C20 x C10 x C2", "C5 x C10 x D8", "C5 x C10 x Q8", "C5 x C5 x ((C4 x C2) : C2)", "(C5 : C4) x (C5 : C4)", "(C5 x C5) : (C8 : C2)", "(C5 x C5) : ((C4 x C2) : C2)", "C2 x ((C5 x C5) : C8)", "C2 x (D10 x (C5 : C4))", "C2 x ((C5 x C5) : (C4 x C2))", "C2 x ((C5 x C5) : D8)", "C2 x ((C5 x C5) : Q8)", "C5 x ((C2 x C2 x C2 x C2) : C5)", "C2 x C10 x (C5 : C4)", "C2 x C2 x ((C5 x C5) : C4)", "C2 x C2 x ((C5 x C5) : C4)", "C2 x C2 x ((C5 x C5) : C4)", "C2 x C2 x D10 x D10", "C2 x C2 x C10 x D10", "C2 x C2 x C2 x ((C5 x C5) : C2)", "C10 x C10 x C2 x C2" ] • the best we can do is type in a computer and look around? Apr 20 at 14:33 • No but I hoped the results could help you with the intuition required to come up with the theory. Unfortunately I don’t feel well qualified to do the theory part. Apr 20 at 14:45 • it's a great starting point... Apr 20 at 19:44 • If you're interested in the best we can do, your question is much, much too broad for math.stackexchange. Apr 20 at 20:24 • Just to add that SmallGrp is a database, while to construct them almost "from scratch" you can use the GrpConst package. After LoadPackage("grpconst");, enter l:=ConstructAllGroups(400);Length(l); - took only 3 seconds to get the result. Apr 20 at 20:52
[Estimated Reading Time: 3 minutes] We now know a little more about this duget thing, and have seen how to create a package. But a package cannot be consumed ‘in situ’ – it must be made available via a feed. Which brings us to the PUSH command. NOTE: Don’t worry, I have my priorities straight. This post was written before Liev arrived. 🙂 ## The PUSH Command To make a package available for consumption by other projects, it must be pushed to a feed. The feeds available are identified in a duget configuration file named duget.config. Actually, multiple duget.config files may be involved. The first is the ‘global’ configuration, which is not strictly global but actually specific to the current user account. Here’s mine: { packagesFolder: "packages", feeds: [ { name: "alpha", folder: "\\\\nuc\\duget\\alpha" }, { name: "release", folder: "\\\\nuc\\duget\\feed" }, { name: "duget.org", url: "https://api.duget.org/v1/index.json" } ], disabledFeeds: [“duget.org”] } duget will load this configuration first and then look ’up’ the folder hierarchy from the working location, looking for additional duget.config files. If found these are then applied, furthest first. This allows “local” configuration changes to be based on the “inherited” configuration to that point. So for example I might introduce an additional feed to be used by projects in a certain folder, but projects in another folder won’t be aware of that additional feed (unless they are in a sub-folder). There are other variations that can be accomplished with local configs which I won’t go into here. But if you’re familiar with nuget configuration, you will find this all very familiar. 😉 The packagesFolder property identifies a folder to act as the package cache. ### The Package Cache This is the first place that duget looks for packages when resolving dependencies. Obviously when duget encounters a dependency on a package for the first time (either a new package id or a new version of a known package) it will not be in the cache and so duget will attempt to fetch that package from one of the configured feeds. If successful, the package is stashed in the cache for future reference. The feeds property then identifies the feeds that duget will use to try to find those packages that aren’t in the cache. In the example above you can see that I have multiple feeds configured. The first two are the only two that are actually of any use currently. The duget.org http feed is non-functional – duget ignores (or rejects) all http feeds for now. In this case I have simply disabled the feed. Local configuration files can enable/disable “inherited” feeds in the configuration as well as introducing new feeds. The two file system feeds configured identify two folders that are shared by my build machine, each representing two feeds – an alpha feed or “pre-release channel” and a regular feed. A similar configuration exists on that build machine. So when a successful build produces an updated package, my dev machines can immediately consume both pre-release and release packages directly from the build machine. I can of course also push new packages from my dev machine as well, as long as I have the necessary permissions to the file share in question (this is the extent of any security in duget currently). Pushing a new or updated package is the same whether from a build machine and uses the push command. 
Again, to push all packages from the current folder to the default push feed you would simply:

duget push

With the configuration above this would fail, since there is no default push feed configured, so one must be specified on the command line, e.g.:

duget push --feed:alpha

If there are multiple package files in the folder, specific packages may be selected for pushing by specifying them as arguments to the command (just the package id is needed):

duget push deltics.smoketest --feed:alpha

By default, the push command deletes any pushed package files from the original folder. This can be prevented by adding a --noDelete switch.

Thus far we've seen how duget can be used to create packages and make them available for use via one or more feeds. Tomorrow we'll look at how projects consume packages from those feeds with the restore and update commands.
GMAT Changed on April 16th - Read about the latest changes here It is currently 26 May 2018, 18:20 GMAT Club Daily Prep Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History Events & Promotions Events & Promotions in June Open Detailed Calendar What is the remainder when (1!)^3+ (2!)^3 + (3!)^3 +.....(11 Author Message TAGS: Hide Tags Manager Joined: 19 Aug 2009 Posts: 78 What is the remainder when (1!)^3+ (2!)^3 + (3!)^3 +.....(11 [#permalink] Show Tags 04 Nov 2009, 01:07 5 KUDOS 20 This post was BOOKMARKED 00:00 Difficulty: 35% (medium) Question Stats: 76% (08:06) correct 24% (01:54) wrong based on 296 sessions HideShow timer Statistics What is the remainder when (1!)^3+ (2!)^3 + (3!)^3 +.....(1152!)^3 is divided by 1152? 1. 125 2. 225 3. 325 Intern Affiliations: CA - India Joined: 27 Oct 2009 Posts: 44 Location: India Schools: ISB - Hyderabad, NSU - Singapore Show Tags 04 Nov 2009, 04:19 4 KUDOS The ans has to be 225. all the terms in the sequence after (3!)^3 are divisible by 1152 and hence remainder is 0. Upto (3!)^3, sum of all numbers, i.e. 1+8+216 = 225 which is the remainder!! Senior Manager Joined: 22 Dec 2009 Posts: 325 Show Tags 31 Jan 2010, 06:01 1 This post was BOOKMARKED Is there a specific approach to tackle this? _________________ Cheers! JT........... If u like my post..... payback in Kudos!! |For CR refer Powerscore CR Bible|For SC refer Manhattan SC Guide| ~~Better Burn Out... Than Fade Away~~ Math Expert Joined: 02 Sep 2009 Posts: 45455 Show Tags 31 Jan 2010, 07:17 16 KUDOS Expert's post 5 This post was BOOKMARKED jeeteshsingh wrote: Is there a specific approach to tackle this? No specific approach. We have the sum of many numbers: $$(1!)^3+ (2!)^3 + (3!)^3 +...+(1152!)^3$$ and want to determine the remainder when this sum is divided by 1152. First we should do the prime factorization of 1152: $$1152=2^7*3^2$$. Consider the third and fourth terms: $$(3!)^3=2^3*3^3$$ not divisible by 1152; $$(4!)^3=2^9*3^3=2^2*3*(2^7*3^2)=12*1152$$ divisible by 1152, and all the other terms after will be divisible by 1152. We'll get $$\{(1!)^3+ (2!)^3 + (3!)^3\} +\{(4!)^3+...+(1152!)^3\}=225+1152k$$ and this sum divided by 1152 will result remainder of 225. _________________ Manager Joined: 10 Feb 2010 Posts: 164 Show Tags 12 Feb 2010, 18:45 Nice Explanation! Manager Status: I will not stop until i realise my goal which is my dream too Joined: 25 Feb 2010 Posts: 209 Schools: Johnson '15 Show Tags 14 Jul 2011, 03:13 Bunuel....+1 to you... _________________ Regards, Harsha Note: Give me kudos if my approach is right , else help me understand where i am missing.. I want to bell the GMAT Cat Satyameva Jayate - Truth alone triumphs Manager Joined: 14 Apr 2011 Posts: 174 Show Tags 16 Jul 2011, 03:13 good question. Thanks Bunuel for sharing the approach for these problem! _________________ Looking for Kudos Intern Joined: 23 May 2012 Posts: 30 Re: What is the remainder when (1!)^3+ (2!)^3 + (3!)^3 +.....(11 [#permalink] Show Tags 19 Oct 2012, 04:21 Did .. the same thing as Bunuel.. But took 8 minutes.. In actual exam.. I would have guessed and moved on What is the source of this problem? 
Manager Joined: 14 Nov 2011 Posts: 132 Location: United States Concentration: General Management, Entrepreneurship GPA: 3.61 WE: Consulting (Manufacturing) Show Tags 18 Jul 2013, 06:20 Bunuel wrote: jeeteshsingh wrote: Is there a specific approach to tackle this? No specific approach. We have the sum of many numbers: $$(1!)^3+ (2!)^3 + (3!)^3 +...+(1152!)^3$$ and want to determine the remainder when this sum is divided by 1152. First we should do the prime factorization of 1152: $$1152=2^7*3^2$$. Consider the third and fourth terms: $$(3!)^3=2^3*3^3$$ not divisible by 1152; $$(4!)^3=2^9*3^3=2^2*3*(2^7*3^2)=12*1152$$ divisible by 1152, and all the other terms after will be divisible by 1152. We'll get $$\{(1!)^3+ (2!)^3 + (3!)^3\} +\{(4!)^3+...+(1152!)^3\}=225+1152k$$ and this sum divided by 1152 will result remainder of 225. Hi Bunnel, To get the remainder, we dont have to reduce the fraction right? That is we cant do - 225/1152 = 25/ 128 and get remainder 25? Math Expert Joined: 02 Sep 2009 Posts: 45455 Show Tags 21 Jul 2013, 02:24 cumulonimbus wrote: Bunuel wrote: jeeteshsingh wrote: Is there a specific approach to tackle this? No specific approach. We have the sum of many numbers: $$(1!)^3+ (2!)^3 + (3!)^3 +...+(1152!)^3$$ and want to determine the remainder when this sum is divided by 1152. First we should do the prime factorization of 1152: $$1152=2^7*3^2$$. Consider the third and fourth terms: $$(3!)^3=2^3*3^3$$ not divisible by 1152; $$(4!)^3=2^9*3^3=2^2*3*(2^7*3^2)=12*1152$$ divisible by 1152, and all the other terms after will be divisible by 1152. We'll get $$\{(1!)^3+ (2!)^3 + (3!)^3\} +\{(4!)^3+...+(1152!)^3\}=225+1152k$$ and this sum divided by 1152 will result remainder of 225. Hi Bunnel, To get the remainder, we dont have to reduce the fraction right? That is we cant do - 225/1152 = 25/ 128 and get remainder 25? Yes, 225 divided by 1152 yields the remainder of 225. The same way as 2 divided by 4 yields the remainder of 2, not 1 (1:2). _________________ Director Joined: 23 Jan 2013 Posts: 596 Schools: Cambridge'16 Re: What is the remainder when (1!)^3+ (2!)^3 + (3!)^3 +.....(11 [#permalink] Show Tags 12 Aug 2015, 12:22 starting from 4!^3=24^3 all numbers are divisible by 1152, i.e. remainder equal to 0 Only 1!^3, 2!^3 and 3!^3 are not divisible by 1152 and have remainder equal to 1,8 and 216, respectively. If sum numbers we can sum their remanders to find total remainder, which is 216+8+1+0=225 B Manager Joined: 06 Jun 2013 Posts: 175 Location: India Concentration: Finance, Economics Schools: Tuck GMAT 1: 640 Q49 V30 GPA: 3.6 WE: Engineering (Computer Software) Re: What is the remainder when (1!)^3+ (2!)^3 + (3!)^3 +.....(11 [#permalink] Show Tags 01 Dec 2017, 06:06 factorize 1152 into 2^7*3^2 see carefully all are cubic factorial we need to look beyonf (4!)^3 as from here onwards remainder is zero so consider cubic factorial of 1 , 2 and 3 and sum will give 225. and this number on division by 1152 gives 225. Re: What is the remainder when (1!)^3+ (2!)^3 + (3!)^3 +.....(11   [#permalink] 01 Dec 2017, 06:06 Display posts from previous: Sort by
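For anyone who wants to sanity-check the arithmetic, here is a quick brute-force computation (a Python sketch, my own addition, not from the thread):

```python
from math import factorial

# Brute force: sum (n!)^3 for n = 1..1152, then reduce mod 1152.
# pow(base, 3, 1152) keeps the intermediate numbers manageable.
total = sum(pow(factorial(n), 3, 1152) for n in range(1, 1153)) % 1152
print(total)  # 225

# Only the first three terms matter, since (4!)^3 = 13824 = 12 * 1152:
print((1**3 + 2**3 + 6**3) % 1152)  # 225
```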
# Unsubscribe Confirmation Page Shows Too Much Information

When users click to unsubscribe from our newsletter, they are taken to a confirmation page in CiviCRM that shows them all of the groups the email was sent to. Is there any way to change this so that these groups are not visible to them?

Write an extension and use hook_civicrm_buildForm( $formName, &$form ), where you can change the values shown on that specific page.
4.4k views A group of $15$ routers is interconnected in a centralized complete binary tree with a router at each tree node. Router $i$ communicates with router $j$ by sending a message to the root of the tree. The root then sends the message back down to router $j$. The mean number of hops per message, assuming all possible router pairs are equally likely is 1. $3$ 2. $4.26$ 3. $4.53$ 4. $5.26$ edited | 4.4k views 0 Just find the expectation for each level of tree OPTION is C. Here, we have to count average hops per message. Steps: 1) Message goes up from sender to root 2) Message comes down from root to destination 1) Average hops message goes to root - $\dfrac{(3\times 8)+(2\times 4)+(1\times 2)+(0\times 1) }{15}=2.267$ Here  $3\times 8$ represents $3$ hops & $8$ routers for Bottommost level & So on.. 2) Similarly average hops when message comes down - $\dfrac{(3\times 8)+(2\times 4)+(1\times 2)+(0\times 1)}{15}$   {Same as above} So, Total Hops $= 2\times 2.267 =4.53$ (Answer) by Boss (15.6k points) edited +2 0 Ur welcome. +4 It seems that Source and Destination can be same, by your answer. Isn't it? That results in a very small offset in your answer, from Arjun's. :p 0 @arjun sir @ manoj,@shresta,@kapil why it cant be like this: 1 leaf node can send message to other 7 leafnode and another leaf node can send to other 7 and so on so it will be 8*8*3(considering leaf to root) 0 yes that is you r counting how a leaf can send a message.rt? And he is calculating how many ways a message can be send. See the last solution , it is what u r telling rt? 0 Excellent!! 0 beautifully explained ... 0 Are source and destination nodes too calculated as hops? For a leaf node, source to root will have 3 hops and root to the destination will have 2 hops. But you are calculating root to the destination as 3 hops. Why are you counting destination as hop in the second step viz., root node to destination calculation? Explanation: Consider Complete tree in Figure If H wants to communicate with router at level 3 then it first sends packet to node A, then A forward packet to the router at Level 3; total 6 hops are required if A wants to communicate with any level 3 node. Similarly, 5 hops are required if H wants to communicate with any level 2 node , 4 hops are required if H wants to communicate with any level 1 node and 3 hops are required if H wants to communicate with any level 0 node . Hops required if H wants to communicates with all other nodes = (8-1)*6 + 4*5+2*4 +1*3 = 73 If all 8 level 3 nodes communicates with all other nodes then hops required=73*8=584 Similarly, Hops required if D wants to communicates with all other nodes = 8*5 + (4-1)*4+2*3 +1*2 = 60 If all 4 level 2 nodes communicates with all other nodes then hops required=60*4=240 Hops required if B wants to communicates with all other nodes = 8*4 + 4*3+(2-1)*2 +1*1 = 47 If all 4 level 2 nodes communicates with all other nodes then hops required=47*2=94 Hops required if A wants to communicates with all other nodes = 8*3 + 4*2+2*1 = 34 Total hops required when all nodes communicate with all other nodes=584+240+94+34= 952 Total number of message is 2 * 15C2 =2 * (15*14/2)=2*105=210 Here 2 is multiplied with 15C2 because in communication between A and B, A sends message to B and B sends message to A. The mean number of hops per message= 952/210= 4.53 by (305 points) 0 in your solution assumption is that a router can't send a message to itself. anyway, it's a self understood thing that i is not equal to j. great explanation! 0 Nice Explanation !! 
0 Great explanation ..thanks. 0 @Iceberg Nice explanation !! 0 i also initially thought the same way but did some calculation mistake. very nice answer. thanks 0 Great Explanation!! The path length differs for nodes from each level. For a node in level $4,$ we have maximum no. of hops as follows, Level Max. no. of hops 1 3 (3-2-1) 2 3+1 = 4 (3-2-1-2) 3 3 + 2 = 5 (3-2-1-2-3) 4 3 + 3 = 6 (3-2-1-2-3-4) So, mean no. of hops for a node in level $4$ $= \dfrac{1.3 + 2.4 + 4.5 + 7.6}{14} =\dfrac{73}{14}$, as we have $1, 2, 4$ and $8$ nodes respectively in levels $1, 2, 3$ and $4$ and we discard the source one in level $4.$ Similarly, from a level $3$ node we get mean no. of hops, $= \dfrac{1.2 + 2.3 + 3.4+ 8.5}{14} = \dfrac{60}{14}$ From level $2,$ we get mean no. of hops $= \dfrac{1.1 +1.2 + 4.3 + 8.4}{14} = \dfrac{47}{14}$ And from level $1,$ we get, mean no. of hops $= \dfrac {0 + 2.1 + 4.2 + 8.3}{14} = \dfrac{34}{14}$. So, now we need to find the overall mean no. of hops which will be $= \dfrac{\text{Sum of mean no. of hops for each node}}{\text{No. of nodes}}$ $= \dfrac{ \dfrac{73}{14} \times 8 + \dfrac{60}{14} \times 4 + \dfrac{47}{14} \times 2 + \dfrac{34}{14} \times 1}{15}$ $= \dfrac{68}{15}$ $= 4.53$ by Veteran (425k points) edited 0 @Arjun  , Can you give me reference to more problems of this kind ? This is very lengthy problem of averaging ! 0 @Akash , check my answer.. 0 how to count the number of hops ?. 0 See now.. 0 Thanks Arjun Sir.... 0 not able to understand. plss sm 1 help me with representation of tree, how everything is being calculated? +1 @Arjun Why did you taken destination in hop count? https://en.wikipedia.org/wiki/Hop_(networking) diagram here says you shouldn't Put n = 4 in the formula given below: by Boss (33.8k points) 0 How you decide the n=4 ? I am not getting plz tell .. +2 15 nodes are there in a complete binary tree. So, $n=4$ as $2^n-1$ is the no. of nodes. 0 what book or e-book is this ? +3 +1 vote by Loyal (7.8k points) +1 vote And Now comes my answer , hope atleast someone will find it helpful I am still so not know  why to include destination in hop count. as wiki says we should not https://en.wikipedia.org/wiki/Hop_(networking) but solving this question by including destination Level No of node 1 1 2 2 3 4 4 8 path from level path to level no of such path length of path calculation 4 4 8C2 = 28 (path from any of level 4 node to level 4 node) 6 4 3 8*4 5 4 2 8*2 4 4 1 8*1 3 3 3 4C2=6 4 3 2 4*2=8 3 3 1 4 2 2 2 1 2 2 1 2 1 1 1 1 0 Now multiply 'no of such path' with respective 'length of path' and divide by total of 'length of path' ANS is C by Active (4.9k points) explanation by Amitabh Tiwari:- You have 8 leaves. If a leaf wants to communicate to other 7 leaves ...each such communication would need 7 hops. So 7*7 hops for leaf to leaf communication. Now each of these leaves can communicate with the 4 nodes in the level above them in 5 hops. So 4*5. Each of these leaves can communicate with 2 nodes who are the children of root in 4 hops. So 2*4. Each of these leaves could also communicate with root in 3 hops. So on total for a single leaf average number of hops to communicate with all other nodes in tree is : (7*7 + 4*5 +2*4 + 3)/14 There are 8 such leaves: So multiply the above expression by 8 to get average number of hops for communication of leaves with all other nodes in tree. Now the way we did it for leaves. Repeat the same procedure for nodes at level 2,level 1 and root. by Active (4.8k points)
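As a quick sanity check on the 4.53 figure, here is a short brute-force computation (a Python sketch, not from the original thread), assuming every message travels source → root → destination and that source and destination are distinct:

```python
from itertools import permutations

# Depths of the 15 routers in a complete binary tree (root at depth 0):
# 1 node at depth 0, 2 at depth 1, 4 at depth 2, 8 at depth 3.
depths = [0] * 1 + [1] * 2 + [2] * 4 + [3] * 8

# Every message goes up to the root and back down, so the hop count for
# an ordered pair (i, j) with i != j is depth(i) + depth(j).
hops = [depths[i] + depths[j] for i, j in permutations(range(15), 2)]

print(sum(hops) / len(hops))  # 4.533..., i.e. 952 / 210
```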
Difference Quotient

In contrast to linear functions, other function types do not have a constant slope. To calculate the average slope between two points $P_1(x_0|f(x_0))$ and $P_2(x|f(x))$ one uses the difference quotient: $\frac{f(x)-f(x_0)}{x-x_0}$

Remember: The difference quotient is the average slope between two points. It is the slope of the secant going through the points $P_1(x_0|f(x_0))$ and $P_2(x|f(x))$.

Tip: Basically, one just draws a line (the secant) through the points and calculates its slope like that of a linear function: $m=\frac{\Delta y}{\Delta x}=\frac{f(x)-f(x_0)}{x-x_0}$

Example: Determine the difference quotient of the function $f(x)=x^2$ between the points $P_1(2|f(2))$ and $P_2(5|f(5))$:

$m=\frac{f(x)-f(x_0)}{x-x_0}=\frac{f(5)-f(2)}{5-2}=\frac{5^2-2^2}{3}=\frac{25-4}{3}=\frac{21}{3}=7$
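As a small illustration (not part of the original lesson), the same calculation can be scripted; the function below is a direct transcription of the formula above:

```python
def difference_quotient(f, x0, x):
    """Average slope of f between x0 and x (the slope of the secant)."""
    return (f(x) - f(x0)) / (x - x0)

# The worked example: f(x) = x^2 between x0 = 2 and x = 5.
print(difference_quotient(lambda x: x ** 2, 2, 5))  # 7.0
```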
Our aim is to prove Howson's theorem for free groups following Stallings' article, Topology of finite graphs; this note is more or less a sequel to How Stallings proved finitely-generated free groups are subgroup separable. The construction introduced here, the fiber product, will also be useful for the generalization of Stallings' ideas to special cube complexes (due to Wise). Another interesting proof, based on regular languages, can be found in Meier's book entitled Groups, Graphs and Trees.

Theorem: (Howson) Let $F$ be a free group of finite rank and $H,K$ be two finitely-generated subgroups. Then $H \cap K$ is also finitely-generated.

Let $\alpha : X \to Z$ and $\beta : Y \to Z$ be two immersions (i.e. locally injective cellular maps) between graphs. We define the fiber product $X \otimes_Z Y$ as the graph whose vertices are $\{ (x,y) \in X \times Y \mid \alpha(x)= \beta(y) \}$ and whose edges are the pairs $(e,f)$ of an edge $e$ of $X$ and an edge $f$ of $Y$ satisfying $\alpha(e)=\beta(f)$; such a pair joins the vertex formed by the initial points of $e$ and $f$ to the vertex formed by their terminal points. Notice that there are two obvious projections $p : X \otimes_Z Y \to X$, $q : X \otimes_Z Y \to Y$, and a natural map $\gamma = \alpha \circ p = \beta \circ q : X \otimes_Z Y \to Z$, so that the corresponding square commutes.

Claim 1: $\gamma : X \otimes_Z Y \to Z$ is an immersion.

Proof. Let $(x_1,y_1)$ be a vertex of $X \otimes_Z Y$ and let $e=(e_X,e_Y)$, $f=(f_X,f_Y)$ be two edges starting from $(x_1,y_1)$ with $\gamma(e)=\gamma(f)$. Then $\alpha(e_X)=\gamma(e)=\gamma(f)=\alpha(f_X)$; since $e_X$ and $f_X$ both start at $x_1$ and $\alpha$ is an immersion, $e_X=f_X$, and in the same way $e_Y=f_Y$, hence $e=f$. So $\gamma$ is locally injective. $\square$

Claim 2: $\gamma_* \pi_1( X \otimes_Z Y) = \alpha_* \pi_1(X) \cap \beta_* \pi_1(Y)$.

First, we need the following easy lemma:

Lemma: Let $\Gamma$ be a graph and $c_1,c_2$ be two reduced (i.e. without round-trip) loops. If $c_1$ and $c_2$ represent the same element of $\pi_1(\Gamma)$, then $c_1=c_2$.

Sketch of proof. The result is clear if $\Gamma$ is a bouquet of circles, using the usual normal form for free groups. In fact, it is the only case to consider, since the quotient of $\Gamma$ by a maximal subtree is a bouquet of circles. $\square$

Proof of claim 2. Because $\alpha \circ p = \gamma = \beta \circ q$, the inclusion $\gamma_* \pi_1( X \otimes_Z Y) \subset \alpha_* \pi_1(X) \cap \beta_* \pi_1(Y)$ is clear. Conversely, let $c_0 \in \alpha_* \pi_1(X) \cap \beta_* \pi_1(Y)$; that is, there exist $c_X \in \pi_1(X)$ and $c_Y \in \pi_1(Y)$ such that $\alpha(c_X)=c_0=\beta(c_Y)$ in $\pi_1(Z)$. Of course, $c_X$ and $c_Y$ may be chosen reduced, and because $\alpha$ and $\beta$ are immersions, the images $\alpha(c_X)$ and $\beta(c_Y)$ are reduced as well. We deduce from our previous lemma that $\alpha(c_X)=\beta(c_Y)$ as loops. Therefore, by construction of $X \otimes_Z Y$, there exists a loop $c$ in $X \otimes_Z Y$, lifting $c_X$ and $c_Y$ simultaneously, such that $\gamma(c)=c_0$, hence the inclusion $\alpha_* \pi_1(X) \cap \beta_* \pi_1(Y) \subset \gamma_* \pi_1 (X \otimes_Z Y)$. $\square$

Proof of Howson's theorem. As in the previous note, we see $F$ as the fundamental group of a finite bouquet of circles $B$, and we choose immersions of finite graphs $\alpha : X \to B$ and $\beta : Y \to B$ such that $\alpha_* \pi_1(X)=H$ and $\beta_* \pi_1(Y)= K$.
Then the fiber product $\gamma : X \otimes_B Y \to B$ gives an immersion such that $\gamma_* \pi_1(X \otimes_B Y) = H \cap K$; because $X \otimes_B Y$ is a finite graph (since $X$ and $Y$ are themselves finite), we deduce that $H \cap K$ is finitely generated. $\square$
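To make the construction concrete, here is a small computational sketch (my own addition, not from the note; the graph encoding is an assumption) that builds the fiber product of two graphs immersed over a bouquet of circles, with each edge recorded as a (source, label, target) triple whose label names a petal of the bouquet:

```python
def fiber_product(edges_x, edges_y):
    """Fiber product of two labelled graphs immersed over a bouquet of circles.

    Each graph is given as a list of directed edges (source, label, target),
    where the label names the petal of the bouquet that the edge maps to.
    Since the bouquet has a single vertex, every pair of vertices is a vertex
    of the product, and there is one edge for each pair of edges sharing a label.
    """
    verts_x = {v for (u, _, w) in edges_x for v in (u, w)}
    verts_y = {v for (u, _, w) in edges_y for v in (u, w)}
    vertices = {(x, y) for x in verts_x for y in verts_y}
    edges = [
        ((u1, u2), lab1, (v1, v2))
        for (u1, lab1, v1) in edges_x
        for (u2, lab2, v2) in edges_y
        if lab1 == lab2
    ]
    return vertices, edges


# Toy example over a bouquet with two petals "a" and "b".
X = [(0, "a", 1), (1, "b", 0)]                    # a circle reading the word ab
Y = [(0, "a", 0), (0, "b", 1), (1, "a", 1)]
verts, edges = fiber_product(X, Y)
print(len(verts), len(edges))                     # 4 vertices, 3 edges
```

Finitely many vertices and edges in X and Y force the same for the product, which is exactly the finiteness used in the proof above.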
# Black-Scholes under stochastic interest rates I'm trying to implement the Black-Scholes formula to price a call option under stochastic interest rates. Following the book of McLeish (2005), the formula is given by (assuming interest rates are nonrandom, i.e. known): $E[exp\{-\int_0^Tr_t dt\}(S_T-k)^+]$ =$E[(S_0 exp\{N(-0.5\sigma^2T,\sigma^2T)\}-exp\{-\int_0^Tr_tdt\}K)^+]$ =$BS(S_0,k,\bar{r},T,\sigma)$ where $\bar{r}=\frac{1}{T}\int_0^Tr_tdt$ is the average interest rate over the life of the option . If interest rates are random, "we could still use the Black-Scholes formula by first conditioning on the interest rates, so that $E[e^{-\bar{r}T}(S_T-K)^+|r_s, 0<s<T]= BS(S_0,K,\bar{r},T,\sigma)$ and then computing the unconditional expected value of this by simulating values of $\bar{r}$ and averaging". I'm not sure how can I calculate $\bar{r}$ given a simulated sample paths. • is this homework or an assignment? – Matt Jun 11 '15 at 4:01 • In the case of stochastic interest rate, you need the correlation between the equity price and the interest rate, and it will be model dependent. What is the interest rate model in mind? Hull-White? – Gordon Jun 11 '15 at 12:42 • Interest rate model is Hull-White. Why the correlation is model dependent? I estimated it from market data and I used that as the correlation between the Brownian Motions driving stock and interest rates. – Egodym Jun 11 '15 at 14:01 • Model dependent refers to the interest rate model, not for correlation. – Gordon Jun 11 '15 at 15:40 • Ok, but once I have simulated 1000 sample paths with the Hull-White model, how can I calculate r bar? – Egodym Jun 11 '15 at 16:19 We assume that the short interest rate $r_t$ follows the Hull-White model, that is, the short rate $r$ and the stock price $S$ satisfies a system of SDEs of the form \begin{align*} dr_t &= (\theta_t -a\, r_t)dt + \sigma_0 dW_t^1,\\ dS_t &= S_t\Big[r_t dt + \sigma \Big(\rho dW_t^1 + \sqrt{1-\rho^2} dW_t^2\Big)\Big], \end{align*} where $a$, $\sigma_0$, $\sigma$, and $\rho$ are constants, and $\{W_t^1, t\ge 0\}$ and $\{W_t^2, t\ge 0\}$ are two independent standard Brownian motions. Note that, \begin{align*} &\ E\bigg(\exp\Big(-\int_0^T r_t dt \Big) (S_T-K)^+\bigg) \\ =& \ E\bigg(e^{-\bar{r}T} \Big(S_0e^{\bar{r}T -\frac{1}{2}\sigma^2 T - \sigma \big(\rho W_T^1 + \sqrt{1-\rho^2}W_T^2\big)} -K\Big)^+ \bigg)\\ =& \ E\Bigg(E\bigg(e^{-\bar{r}T} \Big[S_0e^{\bar{r}T -\frac{1}{2}\sigma^2 T + \sigma \big(\rho W_T^1 + \sqrt{1-\rho^2}W_T^2\big)} -K\Big]^+ \Bigg\vert r_s, 0<s \leq T\bigg)\Bigg)\\ =& \ E\Big(F(S_0,K,\bar{r},T,\sigma, W_T^1) \Big\vert r_s, 0<s \leq T\Big), \end{align*} for a certain function $F$. Note the random variable $W_T^1$ in the formula. If $\rho=0$, that is, $S$ and $r$ are independent, then \begin{align*} &\ E\bigg(\exp\Big(-\int_0^T r_t dt \Big) (S_T-K)^+\bigg) \\ =& \ E\Bigg(E\bigg(e^{-\bar{r}T} \Big(S_0e^{\bar{r}T -\frac{1}{2}\sigma^2 T + \sigma W_T^2} -K\Big)^+ \bigg\vert r_s, 0<s \leq T\bigg)\Bigg)\\ =&\ E\Big(BS(S_0,K,\bar{r},T,\sigma) \Big\vert r_s, 0<s \leq T \Big). \end{align*} That is, the formula provided in the question holds if the stock price and the interest rate are independent. In this case, $\bar{r}$ can be approximated by a Riemann sum. EDIT Here, we provide an analytical valuation formula for the above vanilla European option. 
From this question, the zero-coupon bond price is given by \begin{align*} P(t, T) &= E\left(e^{-\int_t^T r_s ds} \Big\vert \mathcal{F}_t \right)\\ &=\exp\left(-B(t, T) r_t - \int_t^T \theta(s) B(s, T) ds + \frac{1}{2}\int_t^T \sigma_0^2 B(s, T)^2 ds\right), \end{align*} where \begin{align*} B(t, T) = \frac{1}{a}\Big(1-e^{-a(T-t)} \Big). \end{align*} Then \begin{align*} d\ln P(t, T) &=-e^{-a(T-t)}r_tdt -B(t, T)dr_t + \theta(t)B(t, T)dt - \frac{1}{2} \sigma_0^2 B(t, T)^2 dt\\ &=\left(r_t-\frac{1}{2} \sigma_0^2 B(t, T)^2\right) dt - \sigma_0 B(t, T)dW_t,\tag{1} \end{align*} or \begin{align*} \frac{dP(t, T)}{P(t, T)} = r_t dt - \sigma_0 B(t, T)dW_t. \end{align*} Let $Q$ denote the risk-neutral measure and $Q^T$ denote the $T$-forward measure. Moreover, let $B_t = e^{\int_0^t r_s ds}$ be the money market account value. From $(1)$, \begin{align*} \frac{dQ^{T}}{dQ}\Bigg|_t &= \frac{P(t, T)B_0}{P(0, T)B_t}\ \ (\text{with } B_0=1) \\ &=\exp\left(-\frac{1}{2}\int_0^t \sigma_0^2 B(s, T)^2 ds - \int_0^t \sigma_0 B(s, T) dW_s\right). \end{align*} Then by the Girsanov theorem, under $Q^T$, the process $\{(\widehat{W}_t^1, \widehat{W}_t^2), t \ge 0 \}$, where \begin{align*} \widehat{W}_t^1 &= W_t^1 + \int_0^t \sigma_0 B(s, T) ds,\\ \widehat{W}_t^2 &= W_t^2, \end{align*} is a standard two-dimensional Brownian motion. Moreover, under $Q^T$, \begin{align*} \frac{dP(t, T)}{P(t, T)} &= r_t dt - \sigma_0 B(t, T)dW_t^1 \\ &=\big(r_t +\sigma_0^2 B(t, T)^2\big)dt - \sigma_0 B(t, T)d\widehat{W}_t^1 \\ \frac{dS_t}{S_t} &= r_t dt + \sigma \Big(\rho dW_t^1 + \sqrt{1-\rho^2} dW_t^2\Big) \\ &=\big(r_t- \rho\sigma_0\sigma B(t, T)\big) dt + \sigma \Big(\rho d\widehat{W}_t^1 + \sqrt{1-\rho^2} d\widehat{W}_t^2\Big).\tag{2} \end{align*} Note that, the forward price $F(t, T)$ has the form \begin{align*} F(t, T) &= E_{Q^T}(S_T \mid \mathcal{F}_t)\\ &=\frac{S_t}{P(t, T)}. \end{align*} which is a martingale under the $T$-forward measure $Q^T$ and satisfies an SDE of the form \begin{align*} dF(t, T) &= \frac{dS_t}{P(t, T)} -\frac{S_t}{P(t, T)^2}dP(t, T) \\ &\qquad - \frac{d\langle S_t, P(t, T)\rangle}{P(t, T)^2} + \frac{S_t}{P(t, T)^3}d\langle P(t, T), P(t, T)\rangle\\ &= F(t, T)\left[\sigma \Big(\rho d\widehat{W}_t^1 + \sqrt{1-\rho^2} d\widehat{W}_t^2\Big) + \sigma_0 B(t, T)d\widehat{W}_t^1 \right]\\ &= F(t, T) \left[ \big(\sigma\rho + \sigma_0 B(t, T)\big) d\widehat{W}_t^1 + \sigma \sqrt{1-\rho^2} d\widehat{W}_t^2 \right]. \end{align*} Let $\hat{\sigma}$ be a quantity defined by \begin{align*} T\hat{\sigma}^2 &= \int_0^T\Big[\big(\sigma\rho + \sigma_0 B(s, T)\big)^2 + \sigma^2\big(1-\rho^2\big) \Big] ds\\ &=\int_0^T\Big[\sigma^2 + 2\rho\sigma\sigma_0 B(s, T) + \sigma_0^2 B^2(s, T)\Big] ds\\ &=\sigma^2T + \frac{2\rho\sigma\sigma_0}{a}\Big[T-\frac{1}{a}\big(1-e^{-aT}\big)\Big] + \frac{\sigma_0^2}{a^2}\Big[T+\frac{1}{2a}\big(1-e^{-2aT} \big) - \frac{2}{a}\big(1-e^{-aT} \big) \Big]\\ &=\sigma^2T + \frac{2\rho\sigma\sigma_0}{a}\Big[T-\frac{1}{a}\big(1-e^{-aT}\big)\Big] + \frac{\sigma_0^2}{a^2}\Big[T-\frac{1}{2a}e^{-2aT}+\frac{2}{a}e^{-aT} -\frac{3}{2a} \Big]. \end{align*} Then \begin{align*} F(T, T) = F(0, T)\exp\left(-\frac{1}{2}\hat{\sigma}^2T + \hat{\sigma}\sqrt{T} Z \right), \end{align*} where $Z$ is a standard normal random variable. 
Consequently, \begin{align*} E_Q\left(\frac{(S_T-K)^+}{B_T}\right) &= E_Q\left(\frac{(F(T, T)-K)^+}{B_T}\right)\\ &=E_{Q^T}\left(\frac{(F(T, T)-K)^+}{B_T} \frac{dQ}{dQ^T}\bigg|_T \right)\\ &=P(0, T)E_{Q^T}\left((F(T, T)-K)^+\right)\\ &=P(0, T)\big[F(0, T)N(d_1) - KN(d_2) \big], \end{align*} where $d_1 = \frac{\ln F(0, T)/K + \frac{1}{2}\hat{\sigma}^2 T}{\hat{\sigma} \sqrt{T}}$ and $d_2 = d_1 - \hat{\sigma} \sqrt{T}$. • I believe there is a typo on the SDE for the forward, the denominator of the 4th term should be cubic not quadratic. – Daneel Olivaw May 3 '18 at 17:52 • Thanks @DaneelOlivaw. Do you mean $a^2$ should be $a^3$? – Gordon May 3 '18 at 18:07 • I mean the bond price (in the denominator) in the $dP(t,T)^2$ term of the Taylor expansion of the forward price. – Daneel Olivaw May 3 '18 at 19:08 • Thanks @DaneelOlivaw. That is indeed a typo. – Gordon May 3 '18 at 20:08 As Gordon explained very clearly, if you assume your IR model is normal, you have closed form formulas. The important thing here is that the Forward with maturity T is lognormal under the $T$-forward measure. Why is that? Why do we care? As soon as you have stochastic interest rates, you should basically forget about the risk neutral measure and think in terms of forward measures instead. The change of measure formula is: $$V_t = \mathbb{E}^{\mathbb{Q}^{RN}}_t[e^{-\int_t^T r_u\,du} V_T] = Z_{t,T}\mathbb{E}^{\mathbb{Q}^T}_t[V_T]$$ where $$Z_{t,T} = \mathbb{E}^{\mathbb{Q}^{RN}}_t[e^{-\int_t^T r_u\,du} ]$$ is the ZCB price, i.e. the value of receiving 1 unit of currency at time $T$, as seen from time $t$ (I write $\mathbb{E}_t$ for the conditional expectation wrt the filtration representing the information available at time $t$). The ZCB price is typically known/implied from liquid rates instruments at time $t$. So the above formula factors out the stochasticity of interest rates. For non-path-dependent products, this means that we can forget about the risk-neutral measure altogether. The only thing that matters is the distribution of the terminal cash-flow $V_T$ under the $T$-forward measure $\mathbb{Q}^T$ associated with the numeraire $Z_{t,T}$. Most people without a rates background feel uncomfortable with this measure at first. Why introduce this fictional measure when we have the risk neutral one? Well, first, the so-called risk-neutral measure is just as fictional. It is purely a mathematical construct whose existence is derived, under some strong assumptions, from the only measure that matters: the historical measure $\mathbb{P}$. Moreover, this is how the market participants actually think! Indeed, in option markets, participants quote implied volatilities. If $C_t(T,K)$ is the value of a call with maturity $T$ and strike $K$ at time $t$, the corresponding BS implied volatility is $$C_t(T,K) = Z_{t,T}BS\left(t,F_{t,T};T,K;\Sigma_{BS}\right)$$ where $$BS(t,F;T,K;\sigma) = FN\left( -\frac{\log(K/F)}{\sigma\sqrt{T-t}} + \frac{1}{2}\sigma\sqrt{T-t} \right) - KN\left(-\frac{\log(K/F)}{\sigma\sqrt{T-t}} - \frac{1}{2}\sigma\sqrt{T-t} \right)$$ In order to agree on the current price, participants need to agree on the vol and on $Z_{t,T}$. But, in practice, market participants do not need to agree on the fair price. What is required is for each counterparty to estimate that the trade is beneficial to them. If you have a better estimate of $Z_{t,T}$ then you can arbitrage the other counterparty. 
This is exactly what happened after the 2008 crisis when some were still using USD Libor rates as "risk-free" discount rates when others were discounting at OIS rates (the interest rate on collateral). Writing $F_{t,T} = S_t/Z_{t,T}$, the implied volatility can be seen as a function $\Sigma_{BS}(t,S,Z;T,K)$ where the variables after the semi-colon are fixed (they refer to the maturity and strike in the option contract) while those before that will evolve stochastically with $t$. The dependency wrt to the strike is the well-known volatility smile. The dependency wrt to the spot $S$ is known as the volatility backbone. The dependency wrt to $t$ is essentially what people call Theta (or at least its volatility component). The dependency wrt $Z$ corresponds to the IR risk. This risk is negligible in short dated options but not in long-dated ones. In order to define option price we should follow Black Scholes construction to construct riskless portfolio at t then to state that instantaneous rate of return of this portfolio equal risk free rate r ( t ) where r is a random on [ t , t + dt ] interval. We actually then arrive at the problem which could not be embedded in BS pricing world.
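To illustrate the conditioning argument numerically in the independent case ($\rho = 0$), here is a rough Monte Carlo sketch (my own addition, not from the thread; the Hull-White parameters, a flat $\theta$, and the Euler discretisation are illustrative assumptions): simulate short-rate paths, form $\bar{r}$ as a time average along each path via a Riemann sum, evaluate the Black-Scholes formula at that $\bar{r}$, and average over paths.

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s0, k, r_bar, t, sigma):
    """Black-Scholes call price with a constant rate r_bar."""
    d1 = (log(s0 / k) + (r_bar + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s0 * norm_cdf(d1) - k * exp(-r_bar * t) * norm_cdf(d2)

# Illustrative parameters (assumptions, not taken from the question).
s0, k, t, sigma = 100.0, 100.0, 1.0, 0.2
r0, a, sigma_r, theta = 0.03, 0.5, 0.01, 0.015   # dr = (theta - a r) dt + sigma_r dW
n_paths, n_steps = 5_000, 200
dt = t / n_steps

rng = np.random.default_rng(0)
prices = np.empty(n_paths)
for i in range(n_paths):
    r = np.empty(n_steps + 1)
    r[0] = r0
    dw = rng.normal(0.0, sqrt(dt), n_steps)
    for j in range(n_steps):                      # Euler scheme for Hull-White
        r[j + 1] = r[j] + (theta - a * r[j]) * dt + sigma_r * dw[j]
    r_bar = r[:-1].mean()                         # left-endpoint Riemann sum of (1/T) * integral of r dt
    prices[i] = bs_call(s0, k, r_bar, t, sigma)

print(prices.mean())   # estimate of E[ BS(S0, K, r_bar, T, sigma) ]
```

This only prices correctly when the stock and the short rate are independent; with $\rho \neq 0$ the conditional expectation also depends on the rate path through $W_T^1$, as derived above.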
International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive. Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2014-06-26
21:17 [Pub][ePrint] In Financial Cryptography 2013, Bringer, Chabanne and Patey proposed two biometric authentication schemes between a prover and a verifier where the verifier has biometric data of the users in plain form. The protocols are based on secure computation of Hamming distance in the two-party setting. Their first scheme uses Oblivious Transfer (OT) and provides security in the semi-honest model. The other scheme uses Committed Oblivious Transfer (COT) and is claimed to provide full security in the malicious case. In this paper, we show that their protocol against malicious adversaries is not actually secure. We propose a generic attack where the Hamming distance can be minimized without knowledge of the real input of the user. Namely, any attacker can impersonate any legitimate user without prior knowledge. We propose an enhanced version of their protocol where this attack is eliminated. We provide a simulation based proof of the security of our modified protocol. In addition, for efficiency concerns, the modified version also utilizes Verifiable Oblivious Transfer (VOT) instead of COT. The use of VOT does not reduce the security of the protocol but improves the efficiency significantly.

2014-06-25
21:12 [PhD][New] Name: J. C. Migliore
21:12 [PhD][Update] Name: Elisa Gorla. Topic: Lifting properties from the general hyperplane section of a projective scheme. Category: (no category)

2014-06-23
15:17 [Pub][ePrint] We consider the notion of a non-interactive key exchange (NIKE). A NIKE scheme allows a party $A$ to compute a common shared key with another party $B$ from $B$'s public key and $A$'s secret key alone. This computation requires no interaction between $A$ and $B$, a feature which distinguishes NIKE from regular (i.e., interactive) key exchange not only quantitatively, but also qualitatively. Our first contribution is a formalization of NIKE protocols as ideal functionalities in the Universal Composability (UC) framework. As we will argue, existing NIKE definitions (all of which are game-based) do not support a modular analysis either of NIKE schemes themselves, or of the use of NIKE schemes. We provide a simple and natural UC-based NIKE definition that allows for a modular analysis both of NIKE schemes and their use in larger protocols. We proceed to investigate the properties of our new definition, and in particular its relation to existing game-based NIKE definitions. We find that (a) game-based NIKE security is equivalent to UC-based NIKE security against static corruptions, and (b) UC-NIKE security against adaptive corruptions cannot be achieved without additional assumptions (but can be achieved in the random oracle model). Our results suggest that our UC-based NIKE definition is a useful and simple abstraction of non-interactive key exchange.
# What's the best way to convert vectors to one-row data frames for use with unnest? I seem to frequently find myself wanting to unnest list-columns that contain vectors, because they should really be their own columns. Often if we use lapply or map to iterate we end up with a function that returns a vector, such as with quantile below. We could imagine wanting to iterate over many different vectors of distributions with different parameters and getting quantiles. However, in order to use unnest to get multiple columns out, we need a one-row data frame. The most "obvious" way of doing it with tidyverse functions that I could see was enframe and then spread, since enframe is supposed to be the standard function for creating a tibble from a vector. However, spread is not fast and calling it for every row can quickly become undesirable. Here I benchmarked a few different alternatives that I could think of, mostly running through matrix. I'm not the best at profiling and am not too sure why the saving of one names<- call gets such a boost, but all of these options are much, much faster than the seemingly "neat" method using enframe. The question is: Am I missing some other method that would be faster? The discussion part is: Should this operation be made easier, or approached in some other manner? set.seed(1) named_vec <- quantile(rnorm(1000), c(0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95)) named_vec #> 5% 10% 25% 50% 75% 90% #> -1.72695999 -1.33933368 -0.69737322 -0.03532423 0.68842795 1.32402975 #> 95% #> 1.74398317 library(tidyverse) bench::mark( as_tibble(matrix(named_vec, nrow = 1, dimnames = list(NULL, names(named_vec)))), data.frame(matrix(named_vec, nrow = 1)) %>% names<-(names(named_vec)), as.data.frame(matrix(named_vec, nrow = 1)) %>% names<-(names(named_vec)), as.data.frame(matrix(named_vec, nrow = 1, dimnames = list(NULL, names(named_vec)))) ) #> # A tibble: 5 x 10 #> expression min mean median max itr/sec mem_alloc n_gc #> <chr> <bch:tm> <bch:tm> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> #> 1 enframe(n… 1.36ms 1.56ms 1.52ms 2.15ms 640. 634KB 10 #> 2 as_tibble… 262.67µs 300.5µs 287.38µs 663.3µs 3328. 0B 13 #> 3 data.fram… 138.9µs 166.06µs 162.3µs 402.71µs 6022. 280B 11 #> 4 as.data.f… 73.77µs 86.93µs 84.42µs 313.09µs 11503. 280B 14 #> 5 as.data.f… 16.22µs 19.55µs 18.92µs 120.47µs 51151. 0B 8 #> # … with 2 more variables: n_itr <int>, total_time <bch:tm> Created on 2019-04-25 by the reprex package (v0.2.1) Welcome to the community! I don't know the answer to your question, but I'd like to add another way as.data.frame(t(named_vec)) (which seems most obvious to me) to do this so that people who know the answer will also consider this option. As you can see, it's certainly not the fastest (but IMHO, "neat" and "intuitive"), but close enough to be considered as an alternative. 
set.seed(1) named_vec <- quantile(x = rnorm(1000), probs = c(0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95)) library(tidyverse) bench::mark(Enframe = enframe(x = named_vec) %>% spread(key = name, value = value), Tibble = as_tibble(x = matrix(data = named_vec, nrow = 1, dimnames = list(NULL, names(x = named_vec)))), DataFrame = data.frame(matrix(data = named_vec, nrow = 1)) %>% names<-(names(x = named_vec)), AsDataFrameNames = as.data.frame(matrix(data = named_vec, nrow = 1)) %>% names<-(names(x = named_vec)), AsDataFrameDimnames = as.data.frame(matrix(data = named_vec, nrow = 1, dimnames = list(NULL, names(x = named_vec)))), AsDataFrameTranspose = as.data.frame(t(x = named_vec))) #> # A tibble: 6 x 10 #> expression min mean median max #> <chr> <bch:tm> <bch:tm> <bch:tm> <bch:tm> #> 1 Enframe 1.43ms 1.51ms 1.5ms 5.49ms #> 2 Tibble 252.05<U+00B5>s 291.99<U+00B5>s 285.7<U+00B5>s 8.32ms #> 3 DataFrame 119.48<U+00B5>s 134.28<U+00B5>s 130.9<U+00B5>s 4.32ms #> 4 AsDataFra… 61.78<U+00B5>s 69.17<U+00B5>s 67.3<U+00B5>s 4.09ms #> 5 AsDataFra… 14.58<U+00B5>s 16.75<U+00B5>s 16.4<U+00B5>s 60.23<U+00B5>s #> 6 AsDataFra… 14.64<U+00B5>s 17.38<U+00B5>s 16<U+00B5>s 4.04ms #> # … with 5 more variables: itr/sec <dbl>, mem_alloc <bch:byt>, #> # n_gc <dbl>, n_itr <int>, total_time <bch:tm> (I generated this using RStudio cloud, and apparently it can't recognise \mu from <U+00B5>) 2 Likes I'm not sure on speed, but the development version of tidyr has functions unnest_longer()/unnest_wider() that appear to do what you're after. From the NEWS: New unnest_longer() and unnest_wider() make it easier to unnest list-columns of vectors into either rows or columns (#418) Given a list column of vectors like you described, you could use unnest_wider() to unnest them into a wide format. quants = tibble(x = list(named_vec, named_vec) ) unnest_wider(quants, x) # A tibble: 2 x 7 5% 10% 25% 50% 75% 90% 95% <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> 1 -1.73 -1.34 -0.697 -0.0353 0.688 1.32 1.74 2 -1.73 -1.34 -0.697 -0.0353 0.688 1.32 1.74 1 Like I'm not familiar with purrr, but vectorize and mapply can combine the looping and combining steps: samples <- replicate(5, runif(1000), simplify = FALSE) mydata <- data.frame(s = I(samples)) mydata # s # 1 0.414275.... # 2 0.156483.... # 3 0.992391.... # 4 0.432951.... # 5 0.302785.... quants <- vapply( X = mydata[["s"]], FUN = quantile, FUN.VALUE = numeric(7), probs = c(0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95) ) quants # [,1] [,2] [,3] [,4] [,5] # 5% 0.04082008 0.04507124 0.05049143 0.04776495 0.04590253 # 10% 0.09333178 0.10545829 0.10458423 0.09645673 0.08284631 # 25% 0.23145141 0.24900141 0.25052316 0.24397599 0.22676617 # 50% 0.50762321 0.52104472 0.49992605 0.51110439 0.48699883 # 75% 0.75176868 0.76793539 0.75177977 0.76025279 0.75084742 # 90% 0.91539669 0.90923986 0.90348624 0.90909336 0.89700135 # 95% 0.96388973 0.95451164 0.95004418 0.95091292 0.94022864 mydata[rownames(quants)] <- as.data.frame(t(quants)) mydata # s 5% 10% 25% 50% 75% 90% 95% # 1 0.414275.... 0.04082008 0.09333178 0.2314514 0.5076232 0.7517687 0.9153967 0.9638897 # 2 0.156483.... 0.04507124 0.10545829 0.2490014 0.5210447 0.7679354 0.9092399 0.9545116 # 3 0.992391.... 0.05049143 0.10458423 0.2505232 0.4999260 0.7517798 0.9034862 0.9500442 # 4 0.432951.... 0.04776495 0.09645673 0.2439760 0.5111044 0.7602528 0.9090934 0.9509129 # 5 0.302785.... 0.04590253 0.08284631 0.2267662 0.4869988 0.7508474 0.8970013 0.9402286 This topic was automatically closed 21 days after the last reply. New replies are no longer allowed. 
# What happens in an empty microwave oven?

What if we don't put any food in a microwave oven? I.e. nothing to absorb the microwaves? Would the standing microwave modes in the 3D cavity be reinforced? Would there be too much energy in the microwave?

• Except for some small amount of absorption in and leakage through the walls, almost all of it will get reflected back into the magnetron, eventually blowing it up unless the output is protected by an isolator, in which case the latter absorbs the reflected energy. – hyportnex Feb 1 '17 at 23:01

• I edited my answer and @hyportnex nailed it with the effects in his comments, but I cannot find a more rigorous treatment of the standing wave question – user140606 Feb 2 '17 at 0:21

From Quora: This is what happens when you run it without a load. At first, everything is fine because the cavity, the door, the tray (or shelf) absorb the excess energy. Things will start to get hot, but it's not critical yet. After a few minutes, something will start to get really hot, and the excess energy? It starts reflecting back into the magnetron (the device that provides the microwave energy) and that starts to heat up too. If it is designed well enough, it will trip a thermal fuse before anything actually breaks. The thermal fuse can be resettable, but usually not. You will need to replace this fuse if the oven no longer works. If the oven is designed poorly, you'll experience what is called "thermal runaway". At this point, some random location in the oven will begin to superheat. And by superheat, I mean hot enough to liquefy porcelain. This will continue until something finally gives out. It could be a breaker tripping, a standard fuse blowing, or in the worst case, the insulation on the wiring or the plastic components might literally burst into flames, consuming everything flammable until the oven finally loses power.

Don't do this. Some ovens should not be operated when empty. Refer to the instruction manual for your oven. I would not ignore this instruction, issued by the FDA. Also you would be wasting money, and I would guess there is some kind of limiter built in as hyportnex says, BUT it's not worth the risk.

But you could do this experiment instead, and eat the results.

Image source: Measure the speed of light using chocolate

Use a bar of chocolate to check that the speed of light is 300,000 km/s, rather than let all that energy go to waste. Apologies if you know this already. Measure the distance between the melted spots after they have formed, and then double it to get the wavelength of the microwave radiation. The wave frequency is around 2.45 gigahertz.

Velocity of light = wavelength x frequency

The distance between each melted spot should be around 6 cm. 6 x 2 x 2450000000 = 29400000000 cm/s, pretty close to the speed of light.
– WetSavannaAnimal Feb 2 '17 at 0:11 • @WetSavannaAnimalakaRodVance I did this many years ago with a layer of marshmallows instead of chocolate... maybe for the good of science you should try both and see which works better ;) – Rococo Feb 17 '17 at 2:22 Re: chocolate experiment and microwaves. All the microwave ovens I have ever come across, dismantled or repaired have always included a "stirrer" which is in the waveguide path between the magnetron and the oven cavity. Usually it is in the form of a metal rotating shape like a fan. This chops up the otherwise nicely formed e-m microwaves into a jumble and hence there can be virtually no standing waves inside the oven. The reason for this is simple. To ensure that your chocolate is NOT melted/vaporized in little spots but heated evenly. Pretty well all the owners of microwave ovens I know prefer evenly heated food as opposed to generally still frozen food with black burnt blobs in it. • Does this include microwaves with turntables? – user1583209 Mar 12 '17 at 21:01 • Yes, never seen one without a stirrer. Have come across one with a faulty stirrer which did in fact result in a frozen sausage with a couple of charcoalised holes bored in it. Quite amusing. In this case the stirrer was driven by a rubber belt rather like that found in the old reel-to-reel 1/4" audio tape recorders. Belt had broken. – BetterBuildings Mar 12 '17 at 22:12 • Then what is the point of the turntable? – user1583209 Mar 13 '17 at 8:42
## High-reduction Cycloidal Actuator for Robotics

This page documents my efforts to create a fully 3D-printable, high-reduction cycloidal drive. The initial parts of this article describe the construction of a 24:1 drive built as a semester project. The latter parts discuss the modifications of this original design to increase its torque output and make it manufacturable.

Precise robotic actuation demands the use of gear reductions for the amplification of torque and accurate translation of control input to mechanical output. For this application, a variety of methods exist, including offset spur or helical gears, worm drives, planetary gears, and strain wave (harmonic) drives. Though widespread, many of these designs suffer from backlash, larger form factors, or complexity, which diminish their value for use in robotic applications. This actuator design uses a cycloidal drive mechanism to overcome some of these issues.

Cycloidal drives are known for high reduction ratios and compact forms that minimize backlash. Drives consist of cycloidal disks with equation-driven profiles that allow for precise meshing within the mechanism. Cycloidal curves are generated parametrically by following a fixed point on a circle as it rotates about a profile without slipping. Various methods like add-ins, extensions, and equations can be used to generate these profiles. For this project, Otvinta was initially used to extract parametric equations for the drive discs.

Cycloidal drives consist of single or multiple drive discs mounted about an eccentric shaft or bearing. The perimeter of the gearbox commonly contains pins that are fixed, causing the drive gears to exhibit net rotation about the eccentric shaft. Subsequently, output rollers are used to translate the slower rotation of the drive disc linearly based on tangent relations with cutouts in the drive discs. The reduction ratio is a function of the number of pins in the gearbox housing (P) and the lobes on the drive discs (N), with R being the reduction ratio: $R = \frac{N}{P - N}$. With one more pin than lobes, a disc with N lobes therefore gives an N:1 reduction. To illustrate this, here is a proof-of-concept design for a 7:1 reduction gearbox:
Alignment between the drive disks would be calculated such that fixed pins can extend through their geometries, thus constraining their rotation and only allowing the outer pin casing to rotate due to tangential meshing. Additionally, an offset cam would be designed to allow the cycloidal gears to eccentrically oscillate. This shaft would need to withstand rotational forces, interlock with the motor shaft, and remain relatively thin so as to avoid the need for expensive or large bearings.

The first reduction ratio that I attempted to model was 24:1. To achieve this, I used the Otvinta parameters: D, the ring diameter (80 mm); d, the pin diameter (2.5 mm); N, the number of pins (25); and e, the eccentricity offset of the eccentric cam (1 mm). These created a set of parametric equations for an epitrochoid curve which was then inset to create the desired profile. Later on, I realized that these equations were cumbersome and inhibitory for higher gear ratios, but I'll postpone that discussion for now until I get into the post-semester improvements below. For now though, the equations were used to generate the gears and surrounding output profile shown here, with detailed descriptions available in the mentioned semester report:

After this, the components were assembled and a motion study performed to assess the reduction of the mechanism. In order to further reduce vibrations and increase contact area, 3 of the above cycloidal gears were used in the design. 2 of these are identical, with one thicker gear (with the same profile) in the center to further distribute loading. Coupled with the surrounding output gear, the resulting design also eliminated backlash!

At this point, I had to research and experiment a fair amount to get the gears to properly mesh. In order for the 180-degree phase offset to occur, alignment depends on whether the number of lobes on the driving gears is odd or even. Designing for this alignment is crucial in order to generate the proper mating conditions to study motion and, of course, make the gearbox actually work. The following pictures show the differences in alignment and design between such gear types.

Another key consideration is the placement of the internal fixed pins, which act as the mechanical constraint for the gears. These pins must be tangential to the inner surfaces of the gears in order to prevent their rotation. To ensure this, the radius of the circle that contains the fixed pins must be equal between the cycloidal gear itself and the housing to which the pins are connected. The only distinction is that the holes along the circumference of that radius on the gear must be large enough in diameter to have the fixed pins themselves be tangent to their surfaces.

In addition to the FEA done in the semester report, a final motion study was performed to determine whether or not the desired 24:1 reduction was achieved. It generated the following angular velocity versus time plots:

While the complexity of the gearbox geometries as well as the limitations of my computer yielded some inconsistencies in angular acceleration, comparing the overall average speeds gives an acceptable result. After outputting the data for each plot in discrete intervals and averaging magnitudes, I showed that the input averaged ~600 deg/s while the output averaged ~25 deg/s: an approximate 24:1 reduction. At this point in the project, the semester had concluded and I finished the first phase of the project by FDM printing the gears. They ended up working great!
This was very exciting but also the beginning of a ‘phase 2’ which was motivated by their application in a parallel project: my autonomous rover. With this new purpose in mind, I identified several improvements that would need to be made in order for their use to make sense in place of commercially available motors like these. Here are the goals that I sought to achieve in the second part of this project: • Decrease vibration: As shown in the above videos, the large size of the lobes and poor tolerance lead to a fair amount of slop, vibration, and noise at middle-high RPMs (The drive is quite noisy if you un-mute the video) • Upgrade the case and mounting fixtures to be manufacturable: This was mainly to shift the original focus of the mount (being for a multi-DOF arm) to be mounted to the rover’s chassis • Make the output gear power a drive shaft: Instead of rotating an arm, the output would have to translate rotation to a parallel axle • Increase the reduction: Using 3s or 4s LiPo batteries as a power source, the actuator failed to reduce speed enough to maintain torque output at low speeds. Such a range of operation would be crucial to make the rover operate efficiently for long periods of time The latter of these was definitely easier said than done as it required an entirely new set of equations and methods to build the cycloidal profiles. For this, I defined the following parameters: Rr as the radius of the fixed pins, R as the radius of the circle on which the output pins are situated, N as the number of output pins, and E as the eccentricity or offset of the camshaft. While the radius of the arc on which the fixed pins are located is crucial, its exact value can be arbitrary as long as it is shared between the gears and where the fixed pins are mounted to. As a note of clarification, the fixed pins are the pins that extend through the internal parts of the cycloidal gears and restrain their rotation while the output pins are the lobes in the output gear that mesh with the cycloidal profiles. The parametric equations generate curves based on these variables (might require editing depending on CAD software): $x(t)=(R*cos(t))-(R_r*cos(t+atan(sin((1-N)*t)/((R/(E*N))-cos((1-N)*t)))))-(E*cos(N*t))$ (R*cos(t))-(R_r*cos(t+atan(sin((1-N)*t)/((R/(E*N))-cos((1-N)*t)))))-(E*cos(N*t)) $y(t)=(-R*sin(t))+(R_r*sin(t+atan(sin((1-N)*t)/((R/(E*N))-cos((1-N)*t)))))+(E*sin(N*t))$ (-R*sin(t))+(R_r*sin(t+atan(sin((1-N)*t)/((R/(E*N))-cos((1-N)*t)))))+(E*sin(N*t)) Before these equations could be applied to the output gear however, its geometry would have to be modified to power a parallel drive shaft instead of an arm joint. I chose to do this with double helical (herringbone) gears. Here, I modified the output gear with a fully-equation driven, involute spur gear according to module, number of teeth, and pressure angle. I also added a helix angle parameter which drove the angle between mirrored helical gear profiles. With this modification, power could be transmitted laterally between drive shafts while minimizing inward forces between the gears in this direction. Using the 24:1 reduction design, I performed a motion study on the experimental helical gears to confirm meshing: After confirming the validity of the helical gears, I applied the new equations to the cycloid gears to give a new actuator with a reduction ratio of 44:1. In making these improvements, I faced a bunch of challenges. 
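Before moving on to the challenges, here is a minimal NumPy sketch (not the author's code) that makes the profile equations above concrete by sampling the disc profile. The parameter values are placeholders of the same order as the 44:1 design described further down, and `np.arctan2` is used as a numerically robust stand-in for `atan` of the quotient (the two agree here because R/(E*N) > 1).

```python
import numpy as np

# Placeholder parameters (same roles as in the equations above):
# R  = radius of the circle on which the output pins sit
# Rr = radius of the output pins, E = eccentricity, N = number of output pins
R, Rr, E, N = 40.0, 2.5, 0.75, 45

t = np.linspace(0.0, 2.0 * np.pi, 2000)
# atan term of the equations; arctan2(sin, R/(E*N) - cos) equals atan of the
# quotient whenever R/(E*N) > 1, which holds for these values
psi = np.arctan2(np.sin((1 - N) * t), R / (E * N) - np.cos((1 - N) * t))

x = R * np.cos(t) - Rr * np.cos(t + psi) - E * np.cos(N * t)
y = -R * np.sin(t) + Rr * np.sin(t + psi) + E * np.sin(N * t)

# (x, y) now traces the closed cycloidal profile; it can be exported as a
# polyline or fitted spline for the CAD sketch.
print(len(x), "points, e.g.", (round(float(x[0]), 3), round(float(y[0]), 3)))
```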
Aside from convoluted motion studies causing my machine to crash, I found that a few of SolidWorks’ built-in features, such as the surface offset tool, break with complex sketches. For example, experiments with equations to add reductions to the cycloid gears usually succeeded in generating curves but could either not mirror, trim, or offset sketches/surfaces. After some experimentation using Desmos, I found that curves pushing the limits of self-intersection (but not actually intersecting) were the most problematic. So, I used trial and error by varying the number of output pins, their diameter, and eccentric offset until I achieved the maximum reduction that I could find. Referencing the above equations, the parameters that gave a functional 44:1 drive were Rr = 2.5 mm, R = 40 mm, N =45, and E = 0.75 mm with a 27.92 mm fixed pin mounting radius. Further improvements involved designing a slim ball bearing into the walls of the gearbox to reduce friction between the helical output gears and its surfaces. For the balls, I used 6 mm smoothed airsoft BB’s due to their low cost and because I already had a ton of them. These worked surprisingly well and removed enough friction to justify not choosing more expensive nylon or stainless steel bearings. For these and the tangential interfaces of the gears, I chose a silicon-based lubricant to avoid any degradation to printed components as a result of petroleum-based compounds. Lastly, the final version of the gearbox utilized and improved motor housing with a CNC’d motor plate to avoid plastic melt and facilitate air cooling. Overall, this project was incredibly fun and successful as a means of using my old brushless motors for something besides RC aircraft. The outcome of this project was the construction of 4 cycloidal actuators to drive my autonomous rover project. In the future, I plan on applying this actuator to different motors for use on camera-panning mechanisms and robot arms with the aid of brushless motor controllers. ### 8/5/20 Update: Improvements for the Autonomous Rover The initial success of the cycloidal drive was followed by the construction of 3 more fully-functioning actuators for testing on my autonomous rover project. I am currently collecting quality control and test data on these drives to improve their design. An example of a test-drive revision is a new composite camshaft which uses 3D printed cams around a steel, key stock core to prevent deformations. Work is being done to implement this design and other improvements to avoid failures of the actuators during high-torque maneuvers that skid steering robots perform. I will elaborate on improvements in the future but for now, here is a test of the improved actuator! With a stronger camshaft and better alignment, it’s able to reach much higher RPMs while maintaining stability and torque.
# All Questions 137 views ### Estimate Beta of CAPM from Implied Volatility? In the CAPM theory Beta of asset $i$ are estimated in this way: $\beta_i = \frac{\sigma_{im}}{\sigma^2_m}$ where $\sigma_{im} = \rho_{im} \sigma_i \sigma_m$ But all these data are historical data. ... 48 views ### Impulse response function interpretation I would need a quick help with Impulse response function interpretation which I have done after Vector autoregression model in stata. I need to understand how to interpret IRF graph or table values ... 73 views ### Do FRN's *always* trade on par on reset days, regardless if the issuer's credit quality has changed? I keep reading that floating rate notes trade on par on coupon reset days. Is this always true, regardless of changes in the issuer's credit quality since the FRN was issued? It seems probably ... 46 views ### simple game - fair value Suppose a person A has the following game: there are 2 red balls, 2 green balls and 1 white ball in a bag you take 1 ball (don't put it again in the bag) and then a second ball if you take the white ... 82 views ### What is the yield when a floating-rate note is issued above/below par? I am new in this area so all help is much appreciated! Let's say a 3-year floating rate note pays a coupon of LIBOR+100 bps, and is issued at a premium with price = 100.5. I understand that this ... 72 views ### pricing with implied volatility surface I am a newbee in Quantive finance. supposing I calibrate a smoothing implied volatility surface with cubic spline now. A minute later I want to price K=100,t=1 option, can I just find the point on ... 55 views 46 views ### volatility skew for lognormal model is flat? Does anyone know why the volatility skew for lognormal model, such as BK, should be a flat line, meaning that implied black volatility for options will be same for those with different strike prices? ... 156 views ### Black-Scholes PDE: what is the form of the boundary conditions I'm working on the Black-Scholes equation, but I'm pretty new to financial modeling. Right now, I am trying to understand the Black-Scholes PDE. I understand that the Black-Scholes equation is given ... 65 views ### Why financial instistution for instance banks lowered down their interest rate during QE? When QE is carried out, the Federal Reserve prints money and buy government bonds in an effort to pour extra money into the economy. This causes financial institutions for instance banks to lowered ... 24 views Suppose I have a bond with unknown bid-ask spread, and a portfolio, containing it and also other bonds, all with known bid-ask spreads. How can the unknown spread be inferred? I assume there should ... 157 views ### Why Central Bank carry out Qe when they can directly force banks to lower down the interest rate? To boost the economy, the central bank can do it either by lowering down the interest rate nor carry out QE. But QE objective is to lowered the interest rate also so banks can give out more loan. This ... 61 views ### How to estimate the price of a European call when the underlying is not tradable? Assume you have a vanilla call on an underlying $S$ with strike price $K$ and expiry at time $T$. Let's say that $S$ follows a GBM with volatility $\sigma$. In general, one would use the Black-... 54 views ### Fourier transform covariance estimator I am estimating realized variance and covariance by the estimator described in this paper, and relying on Fourier Transform. Now, as my data is one day of data in ultra high frequency, so that the ... 
35 views ### Liquidity effect in case MS decrease What is the result if the liquidity effect is grater than other effects in case of decreased money supply? I got this question on the exam, In case of an increase in the money supply by the central ... 101 views ### FIX latency and clock syncronization We are trying to see latency from our server to different LPs . For that we are checking sendingtime value (from them) and current clock in our server. What we saw is difference of +-20ms between ... 71 views ### MSRV estimation in R What are the R packages that let you estimate Multi Scale Realized Volatility (MSRV)? So far I've only found highfrequency (which comes with Realized Kernel as well), but from what I understand it ... 113 views ### Anomaly or feature from Quantmod in R regarding getFX - currency data I am using R to analyse stock data, using the quantmod package to get all sorts of data, but here specifically FX data using the function ... 134 views 11 views ### CDS Premium table Interploation for the Arrear case If CDS spreads are given for say year end 1,2,3,4,5 .That means these premium payments are made in arrears. In that case we need to apply interpolation tools. But for which particular points do we ... 33 views ### Constructing Dedicated Risk Premia Strategies I am trying to figure out the "best" way to construct investment strategies which are focused on capturing specific risk premia individually. From my understanding the traditional approach to capture ... 45 views ### Result linked to Black-Scholes evaluation Why does this $$Se^{-D(T-t)}e^{-d_1^2/2} - Ee^{-r(T-t)}e^{-d_2^2/2}$$ equal to $0$? (Where $E$ is a strike) 102 views ### How many PHD level quant are there in US market? [closed] How many PHD (economics+finance) level quants are work here in US market? 53 views ### What does martingale look like? I'm doing a simulation of a CRR model and I'm trying to find parameters in order for the successive $S_t$s (stock prices) to be martingale. I'm assuming that if I'd create a function (and picked the ... 50 views ### Proper way to calculate the realized indiviual stock sharpe ratio From the textbook, sharpe ratio is (return-riskfree rate)/risk However I wonder if I can use (return-index return)/risk, where the index acts as the benchmark, to calculate the sharpe ratio? I am ... 71 views ### How would I exploit arbitrage if risk-neutral pricing doesn't hold? (Option Pricing) We are just learning about binomial option pricing, and how the up-factor and the down-factor must match the risk-neutral price. p * u + (1 - p) * d = continuous risk free rate compounded CRR ... 261 views ### Computing Pooled IRR from the IRRs of parts Suppose I have two cash flows: CF1: -10001001001100 CF2: -20020301 I can compute now: IRR(CF1) = 10% IRR(CF2) =-55% IRR(CF1+CF2) = 4.46% Is there a way to compute (or at least get a fair ... 229 views ### How to price an option allowing to change a call into a put? A recruiter asked me this question: Suppose you have the following contract: a call option with maturity T = 2 years the possibility to change this call into a put at t = 1 year What is the price ... 22 views ### Price of call (calibration) I need to understand how we got this : $\forall i \in I$ $C^{*}_{0}(T_i,K_i)=e^{-rT_i}E[(S_{T_{i}}-K_i)^+|S_0]=e^{-rT_i+X_{T_{i}}}E[(S_{T_{i}}-K_i)^+]$ at How we pass from conditional expecation to ...
# aliquot ## < a quantity that can be divided into another a whole number of time /> I am re-reading Melzak's Companion to Concrete Mathematics, and there's a section dedicated to $\pi$ (pp. 164–169). There are various formulas to approximate $\pi$ to a given precision, the first being probably the fraction 22/7, from Archimedes. This is only correct to three decimal places, so a better fractional approximation is 355/113 = 3.1415929, which is easy to memorize and probably the one we are first taught in (French) college (in addition to the mnemonic trick — “que j'aime a faire apprendre ce nombre utile aux sages”). If you're familiar with the bc program, you will recall that it relies on the arc tangent. This follows from Leibniz's approximation, which starts with $\arctan x = x - x^3/3 + x^5/5 + \dots$, which yields $\pi/4 = 1 - 1/3 + 1/5 + \dots$ for $x = 1$. From this, there exists a variety of arctan-type formulas for $\pi$, e.g. $\pi/4 = \arctan\tfrac{1}{2} + \arctan\tfrac{1}{3}$ or $\pi/4 = 5\arctan\tfrac{1}{7} + 2\arctan\tfrac{3}{79}$.1 Let's try it in a Fish shell:  echo "scale=11; 4*a(1)" | bc -l 3.14159265356 Of note, the scale parameter is quite important when a high precision is required. Another well-known formula, at least for $\LaTeX$ aficionado, is the following continued fraction, due to Brouncker: $$\frac{4}{\pi} = 1 + \frac{1^2}{2 + \frac{3^2}{2 + \frac{5^2}{2 + \dots}}}$$ Of course, many more approximations are available. Although Melzak notes that no hyperexponentially fast procedure2 appears to be known for computing $\pi$, there does exist efficient algorithms to compute $\pi$ to n exact figures. A short snippet of Python code is available in the case of the Chudnovsky algorithm, which remains the most efficient algorithm at the time of this writing. Scheme code is available on Programming Praxis. Other iterative algorithms, like Borwein's algorithm, are also simple to implement in languages that offer support for large integers. Note that this is only if you are interested in computing $\pi$ to a large number of decimal places since most PLs will provide you with built-in constants for $\pi$ or $\pi/2$. E.g., in C (using clang on macOS) $\pi$ is stored as a constant in math.h: #define M_PI 3.14159265358979323846264338327950288. This file is actually located under the command-line tools directory, that can be located using, e.g., echo "#include <math.h>" | gcc -v -x c -. Racket provides a double-precision flonum for $\pi$, but fractional approximations are used in various place the math library (e.g., 14488038916154245685/4611686018427387904 = 3.141592653589793). Melzak provides two additional formulas to compute $\pi$ and $e^{-\pi}$, based on the theory of elliptic functions which is far beyond the scope of this short post. Those formulas are: \begin{align} e^{-\pi} &= b + 2b^5 + 15b^9 +150b^{13} + 1707b^{17} + \dots \newline \pi &= \log\frac{1}{b} - 2b^4 - 13b^8 - \frac{368}{3}b^{12} - \frac{2701}{2}b^{16} + \dots, \end{align} with $b = \tfrac{1}{2}\frac{\sqrt[\leftroot{-1}\uproot{2}\scriptstyle 4]{2}-1}{\sqrt[\leftroot{-1}\uproot{2}\scriptstyle 4]{2}+1} = 0.0432136168629448960219378\dots$ It should be noted that the very first term, $\log\tfrac{1}{b}$, already gives $\pi$ correctly to five decimal places. 
Using Mathematica, I got the following result:

In[1]:= b = 1/2*(Power[2, (4)^-1] - 1)/(Power[2, (4)^-1] + 1)
In[2]:= N[b, 24]
Out[2]= 0.0432136168629448960219378
In[3]:= N[Log[1/b], 24]
Out[3]= 3.14159962823802109942254
In[4]:= N[Pi, 24]
Out[4]= 3.14159265358979323846264

As a side note, Mathematica also allows us to compute any terms of the continued fraction of $\pi$:3

In[35]:= ContinuedFraction[Pi, 20]
Out[35]= {3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1, 1, 2, 2, 2, 2}
In[36]:= FromContinuedFraction[%]
Out[36]= 14885392687/4738167652

1. Such formulas rely on the following identity, attributed to Dodgson: if $qr = 1+p^2$, then $\arctan\tfrac{1}{p} = \arctan\tfrac{1}{p+q} + \arctan\tfrac{1}{p+r}$. This identity was also proposed by Euler, who further demonstrated that $\arctan\tfrac{m}{n} = \frac{mn}{m^2+n^2}\left[ 1 + \frac{2}{3}\frac{m^2}{m^2+n^2} + \frac{2\cdot 4}{3\cdot 5}\left(\frac{m^2}{m^2+n^2}\right)^2 + \dots \right]$.
2. A numerical procedure is said to be exponentially fast if for large $n$, $E_n\sim c^n$ for some $c$, $0 < c < 1$. A hyperexponentially fast procedure is one for which $E_n\sim c^{n^{\alpha}}$ for some $c$, $0 < c < 1$ and $\alpha > 1$.
3. To compute the continued fraction of a number $x$, use the recurrence $a_0 = x$ and $a_n = (a_{n-1} - \lfloor a_{n-1} \rfloor)^{-1}$; the $n$th term will be $\lfloor a_n \rfloor$. Considering the precision of 20 digits illustrated above, we have: $n=0$, $a_n = 3.1415926535897932385$ and $d_n = 3$; $n=1$, $a_n = 0.1415926535897932385^{-1}=7.06251330593104577$ and $d_n = 7$; $n=2$, $a_n = 0.06251330593104577^{-1} = 15.9966$ and $d_n=15$; etc.
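Footnote 3's recurrence is also easy to check outside Mathematica; a minimal Python sketch (double precision only, so just the leading terms are reliable):

```python
from math import floor, pi

def contfrac(x, n_terms):
    """Continued-fraction terms via a_0 = x, a_n = 1/(a_{n-1} - floor(a_{n-1}))."""
    digits, a = [], x
    for _ in range(n_terms):
        d = floor(a)
        digits.append(d)
        frac = a - d
        if frac == 0:
            break
        a = 1.0 / frac
    return digits

# Doubles carry ~16 significant digits, so only the leading terms match
# Mathematica's exact output {3, 7, 15, 1, 292, 1, 1, 1, 2, 1, ...}.
print(contfrac(pi, 10))
```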
# Choosing $\lambda$ to yield sparse solution I'm supposed implementing certain optimization algorithms (ISTA, FISTA) to minimize: $$\frac12 ||Ax-(Ax_0+z)||_2^2 + \lambda ||x||_1.$$ $A$ is a matrix, $x$ is a vector, $z$ is some noise filled with random data from a certain distribution. I get that. $\lambda$ is supposed to be chosen so as to "yield a sparse solution". What does that mean exactly? Isn't the L1 norm itself supposed to help find the sparsest solution? How do I pick my lambda to improve on that? What factors influence what the best $\lambda$ should be? It's not hard to show (Hint: show that the dual problem is a projection of $$\frac{1}{\lambda}(Ax_0 + z)$$ onto the polyhedron $$\mathcal P := \{\theta \text{ s.t }\|A^T\theta\|_\infty \le 1\}$$, and then use the KKT conditons ...) that if $$\lambda \ge \lambda_{\text{max}} := \|A^T(Ax_0 + z)\|_\infty$$, then the solution of your problem is the only zero vector. As you decrease $$\lambda$$ from $$\lambda_{\text{max}}$$ to $$0^+$$ (i.e as you descend the Lasso path), roughly speaking, more and more indices enter the support of a solution (i.e solutions become less sparse). You want to check this soft-hand document on the subject. Jump to section 3. Now, which particular value of $$\lambda$$ is "good" depends on your application and what you're trying to do. This problem is known as model selection, and spans an entire sub-field in statistical learning. A simple and principled way to go about it is via cross-validation: you form a finite grid $$\mathcal G$$ of $$\lambda$$ values say, $$10^{-k}\lambda_{\text{max}}$$ for $$k = 0, 1, \ldots, N$$ (with $$N \sim 10$$, say). This gives your $$N$$ models to compare. Then you use something like $$K$$-fold cross-validation (say, with $$K = 8$$) to measure the relative performance of these models, and select the best based on these scores. When computing / fitting these $$N$$ Models (we say you're computing a regularization path), you'd start from the largest to the smallest value in $$\mathcal G$$, and each time you're warm-start the problem of fitting the current model, with the solution of the previous one. Amongst the many ways of computing the regularization path, the LARS-lasso algorithm is particularly cool. Coordinate descent is also an awesome method. There are cutting-edge implementations of these methods (including the cross-val thing) out there. For example, checkout scikit-learn or R's glmnet package. N.B.: $$\|a\|_\infty := \max_{u \text{ s.t }\|u\|_1 \le 1}a^Tu = \max_{j}|a_j| = \lim_{p \rightarrow \infty}\|a\|_p$$. • Thank you! So bigger $\lambda$ gives the best solution, but I'm guessing it becomes highly inefficient to calculate? So in practice one would use as close to $\lambda_{\text{max}}$ as is computationally reasonable. Apr 10, 2016 at 3:04 • I've extended my answer to give you more practical details: warm-starting, cross-validation, etc. Apr 10, 2016 at 13:55
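To make the grid-plus-cross-validation recipe concrete, here is a minimal scikit-learn sketch (the data, dimensions, and grid are invented for illustration; note that scikit-learn scales the quadratic term by 1/(2n), so its `alpha` corresponds to $\lambda/n$ in the objective above):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 500
A = rng.standard_normal((n, p))
x0 = np.zeros(p)
x0[:10] = 3.0                               # sparse ground truth
b = A @ x0 + 0.5 * rng.standard_normal(n)   # observations A x0 + z

# Smallest lambda for which the all-zero vector solves (1/2)||Ax-b||^2 + lam*||x||_1
lam_max = np.max(np.abs(A.T @ b))

# scikit-learn's Lasso minimizes (1/(2n))||b - Ax||^2 + alpha*||x||_1, so alpha = lam/n
grid = lam_max * np.logspace(0, -3, 10) / n

model = LassoCV(alphas=grid, cv=8, fit_intercept=False).fit(A, b)
print("chosen alpha:", model.alpha_)
print("nonzeros in the solution:", np.count_nonzero(model.coef_))
```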
# zbMATH — the first resource for mathematics General solutions of relativistic wave equations. (English) Zbl 1027.83003 The starting point of the paper is a criticism of the plane-wave solutions of the relativistic wave equations. As an improvement, the author presents these solutions in the form of series of hyperspherical functions. His theory is based on the isomorphism $$SL(2,\mathbb{C})\sim$$ complex $$SU(2)$$, and a generalization of the Gel’fand-Yaglom formalism. The technique of separating the variables is applicable since hyperspherical functions are defined on a two-dimensional complex sphere. Consequently, several fields are described in terms of functions on the Lorentz group. In the last sections of the paper it is shown how the Dirac, Weyl and Maxwell equations can be considered particular cases of a general relativistically invariant system. ##### MSC: 83A05 Special relativity 81Q70 Differential geometric methods, including holonomy, Berry and Hannay phases, Aharonov-Bohm effect, etc. in quantum theory 83C47 Methods of quantum field theory in general relativity and gravitational theory Full Text:
# Lie group and the frame bundle

$T(\mathbb{O}M)=H(\mathbb{O}M)\oplus V(\mathbb{O}M)$. Here is my problem: I have equations involving $X_i f(\sigma)$ on the Lie group, where the $X_i$ are a basis for the Lie algebra and $f \in C^2(G)$. I want to be able to map these across to the frame bundle such that I get $V_i f(s)$, where the $V_i$ are the canonical vertical vector fields and $f \in C^2(\mathbb{O}M)$.
Is there a general method for solving this type of recurrence? Edit: Here is the original problem; it is possible that my recurrence for the stationary distribution $\pi$ is incorrect. Consider a single server queue where customers arrive according to a Poisson process with intensity $\lambda$ and request i.i.d. $\mathsf{Exp}(\mu)$ service times. The server is subject to failures and repairs. The lifetime of a working server is an $\mathsf{Exp}(\theta)$ random variable, while the repair time is an $\mathsf{Exp}(\alpha)$ random variable. Successive lifetimes and repair times are independent, and are independent of the number of customers in the queue. When the server fails, all the customers in the queue are forced to leave, and while the server is under repair no new customers are allowed to join. Edit: I have revised the recurrence. In a problem on queueing theory I've derived the following recurrence: \begin{align} \pi_1 &=\left(\frac{\lambda+\theta}\mu\right)\pi_0 - \frac{\alpha\theta}{\mu(\alpha+\theta)}\\ \pi_{n+1} &= \left(1+\frac{\lambda+\theta}\mu\right)\pi_n - \frac\lambda\mu\pi_{n-1},\ n\geqslant1. \end{align} where $\lambda$, $\mu$, $\theta$, and $\alpha$ are positive constants and $$\sum_{i=0}^\infty \pi_i = \frac\alpha{\alpha+\theta}.$$ After a lot of tedious algebra, I found that $$\scriptsize\pi_n = \left(\frac{\alpha \theta \left(2 \theta (\lambda +\mu )+(\lambda -\mu )^2\right) \left(\theta +\lambda +\mu+ \sqrt{\theta ^2+2 \theta (\lambda +\mu )+(\lambda -\mu )^2}\right)^n}{(\alpha +\theta ) (2 \mu )^n \sqrt{\theta ^2+2 \theta (\lambda +\mu )+(\lambda -\mu )^2}}\right)(1+\pi_0)$$ for $n\geqslant 1$. To save space, let $$\mathcal C:=\sqrt{\theta ^2+2 \theta (\lambda +\mu )+(\lambda -\mu )^2}.$$ Summing over $n$ and solving for $\pi_0$, I found $$\pi_0 =\frac{\alpha \mu \left(2 \theta (\lambda +\mu )+(\lambda -\mu )^2\right) \left(\lambda -\mu-\theta-\mathcal C \right)}{2 \theta (\alpha +\theta ) \mathcal C},$$ and so $$\pi_n=\left(\frac{ \left(2 \theta (\lambda +\mu )+(\lambda -\mu )^2\right) \left(\lambda -\mu-\theta-\mathcal C \right)+2 \theta (\alpha +\theta ) \mathcal C }{2(\alpha +\theta )^2\mathcal C^2\left(\alpha^2\mu \left(2 \theta (\lambda +\mu )+(\lambda -\mu )^2\right)\right)^{-1}} \right)\left(\frac{\lambda+\mu+\theta+\mathcal C }{2\mu}\right)^n.$$ If you see any errors let me know... I'm also wondering what conditions on $\lambda,\mu,\theta$, and $\alpha$ are necessary for $\sum_{i=0}^\infty \pi_i$ to converge. For context, this is a $M/M/1$ queue with arrival rate $\lambda$, service rate $\mu$, but with an added state $D$ with transitions of rate $\theta$ from each state $n$ to $D$ and a transition of rate $\alpha$ from $D$ to $0$. • Pushing all $\pi_k$ terms to the LHS, you've got a denumerably-infinite system of linear equations. Perhaps it'd be useful to view it as $A\Pi = B$ where $A,\Pi,B$ are an infinite matrix and infinite column vectors respectively. – Semiclassical May 29 '16 at 1:16 • The typo was actually on my part---mea culpa! But the additional context is helpful. – Semiclassical May 29 '16 at 4:27 • I note that you've changed the summation to have upper limit $i=n$ instead of $i=n-1$. If this was intentional, then the answers both I and @FelixMarin provided are no longer quite right (Mine, for instance, will have $\frac{\theta}{\mu}x P(x)$ instead of $\frac{\theta}{\lambda}x P(x)$.) – Semiclassical May 29 '16 at 14:17 • Yes, that is because we defined $\varphi_n:=\sum_{i=0}^n \pi_i$. 
– Math1000 May 29 '16 at 14:19 • Can you explain how you derived the recurrence? It seems that in modelling the problem using a Markov chain, we would need to have a special state for when the server is out of repair, distinct from the ordinary state with an empty queue. How is this reflected in your $\pi_n$ notation? I am assuming $\pi_n$ is supposed to represent the probability of being in state $n$, under the equilibrium distribution. – Brent Kerby May 29 '16 at 14:21 Hint: By defining $\phi_n = \sum_{i=0}^n \pi_i$, we may express $\pi_{n+1}$ and $\phi_{n+1}$ as linear combinations of $\pi_n$ and $\phi_n$ plus constants, giving a first-order linear matrix difference equation. EDIT: The equation we end up with is of the form $$x_{(n+1)} = Ax_{(n)}+b$$ where the vectors $x_{(n)}=\begin{pmatrix}\pi_n \\ \phi_n\end{pmatrix}\in\mathbb R^2$ are to be solved for, and $A$ is a known $2\times 2$ matrix and $b\in\mathbb R^2$ a known vector. If $A$ can be diagonalized, then, writing $A=SDS^{-1}$ and setting $y_{(n)}=S^{-1}x_{(n)}$, the system becomes $$y_{(n+1)} = Dy_{(n)}+S^{-1}b$$ In this case, the system decomposes into univariate difference equations, which can be solved separately, and then one uses $x_{(n)} = Sy_{(n)}$ to solve for the original unknowns. • This is an impressively simple approach. Well done! Though, should the definition for $\phi_n$ start at $i=1$ or $i=0$? – Semiclassical May 29 '16 at 4:41 • Good catch! I just edited it to fix that. – Brent Kerby May 29 '16 at 4:56 • @BrentKerby I'm not familiar with multivariate recurrences. Could you elaborate on how to solve for $\pi_n,\varphi_n$? – Math1000 May 29 '16 at 6:41 • Actually, I'm a little surprised that the summation ends at $i=n$ rather than $i=n-1$. In the former case, $\phi_n$ depends on $\pi_n$ whereas in the latter you can write $\phi_n=\phi_{n-1}+\pi_n$. – Semiclassical May 29 '16 at 14:33 • @Math1000, I've added an explanation. I may have caused some confusion by calling it a multivariate difference equation, a term which is used to describe a discrete analogue of a PDE; I should have called it a matrix difference equation. – Brent Kerby May 29 '16 at 14:33 With $\verts{z} < 1$: \begin{align} \sum_{n = 0}^{\infty}\pi_{n + 1}z^{n} & = a\sum_{n = 0}^{\infty}\pi_nz^{n} + b\sum_{n = 0}^{\infty}z^{n}\sum_{i=0}^{n - 1}\pi_i - c\sum_{n = 0}^{\infty}z^{n} \\[3mm]\imp {1 \over z}\pars{\sum_{n = 0}^{\infty}\pi_{n}z^{n} - \pi_{0}} & = a\sum_{n = 0}^{\infty}\pi_nz^{n} + b\sum_{i = 0}^{\infty}\pi_{i}\sum_{n = 1 + i}^{\infty}z^{n} - {c \over 1 - z} \\[3mm]\imp \pars{{1 \over z} - a}\sum_{n = 0}^{\infty}\pi_{n}z^{n} & = {\pi_{0} \over z} + b\sum_{i = 0}^{\infty}\pi_{i}{z^{i + 1} \over 1 - z} - {c \over 1 - z} \\[3mm]\imp \pars{{1 \over z} - a - b\,{z \over 1 - z}}\sum_{n = 0}^{\infty}\pi_{n}z^{n} & = {\pi_{0} \over z} - {c \over 1 - z} \end{align} Then, $$\sum_{i = 0}^{\infty}\pi_{i}z^{i} = {\pars{\pi_{0} + c}z - \pi_{0} \over \pars{b - a}z^{2} + \pars{a + 1}z - 1}$$ In order to get the set $\braces{\pi_{i}}$, expand th right hand side in powers of $z$. Maybe, some other conditions on the magnitud of $z$ will be required along the way. Note: The present form of the answer reflects an earlier version of the problem. I plan to modify it to reflect the changes, hopefully sooner rather than later... Let $P(x)=\sum_{n=0}^\infty \pi_n x^n$. 
The first few lines of this recurrence are \begin{align} \pi_1+\frac{1}{\mu}\left(\frac{\alpha\theta}{\alpha+\theta}\right) &= \left(\frac{\lambda+\mu\theta}{\mu}\right)\pi_0,\\ \pi_2+\frac{1}{\mu}\left(\frac{\alpha\theta}{\alpha+\theta}\right) &= \left(\frac{\lambda+\mu\theta}{\mu}\right)\pi_1+\frac{\theta}{\mu}\pi_0,\\ \pi_3+\frac{1}{\mu}\left(\frac{\alpha\theta}{\alpha+\theta}\right) &= \left(\frac{\lambda+\mu\theta}{\mu}\right)\pi_2+\frac{\theta}{\mu}\pi_1+\frac{\theta}{\mu}\pi_0,\\ \end{align} and so on. Multiplying each line by powers of $x$ (starting with $x^1$) and summing yields $$P(x)-\pi_0 +\frac{1}{\mu}\left(\frac{\alpha\theta}{\alpha+\theta}\right)(x+x^2+x^3+\cdots) = \left(\frac{\lambda+\mu\theta}{\mu}\right)xP(x)+\frac{\theta}{\lambda}\left(x^2+x^3+\cdots\right)P(x).$$ We clean this up by multiplying both sides by $(1-x)$, yielding $$(1-x)(P(x)-\pi_0)+\frac{1}{\mu}\left(\frac{\alpha\theta}{\alpha+\theta}\right)x=\left(\frac{\lambda+\mu\theta}{\mu}\right)x(1-x)P(x)+\frac{\theta}{\mu}x^2 P(x).$$ As a check, for $x=1$ this implies $$\frac{1}{\mu}\left(\frac{\alpha\theta}{\alpha+\theta}\right)=\frac{\theta}{\mu} P(1)\implies P(1)=\sum_{n=0}^\infty \pi_n =\frac{\alpha}{\alpha+\theta}$$ which is the desired normalization condition. What remains is to solve for $P(x)$ and then expand to obtain the coefficients $\{\pi_n\}$, a task I leave to the interested reader. • I find that $$P(x) = \frac{\pi _0 \mu (\alpha +\theta )-x \left(\alpha \left(\theta +\pi _0 \mu \right)+\pi _0 \theta \mu \right)}{(\alpha +\theta ) \left(\mu +x^2 (\theta (\mu -1)+\lambda )-x (\theta \mu +\lambda +\mu )\right)}.$$ Needless to say, $\texttt{SeriesCoefficient}$ in Mathematica yields a truly horrifying result :( – Math1000 May 29 '16 at 7:03 • What I find more bothersome than the horrible-looking result is that it seems underdetermined, since the higher coefficients all depend on $\pi_0$. Is there a condition that's missing? @Math1000 – Semiclassical May 29 '16 at 13:48 • Here $\pi$ is a probability distribution, so $\pi_D + \sum_{i=0}^\infty \pi_i=1$. So it suffices to solve for $\pi_n$ in terms of $\pi_0$. – Math1000 May 29 '16 at 13:52
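As a purely numerical sanity check of the recurrence from the question for $n\geqslant 1$, $\pi_{n+1} = (1+\frac{\lambda+\theta}{\mu})\pi_n - \frac{\lambda}{\mu}\pi_{n-1}$, the sketch below (with arbitrary sample parameters, not values from the problem) compares direct iteration with the characteristic-root form $A r_1^n + B r_2^n$; exactly one root exceeds 1, so summability of $\sum\pi_n$ forces that root's coefficient to vanish, which is what pins down $\pi_0$.

```python
import numpy as np

lam, mu, theta = 2.0, 5.0, 0.7      # sample parameters, chosen arbitrarily

c = 1.0 + (lam + theta) / mu        # pi_{n+1} = c*pi_n - d*pi_{n-1}, n >= 1
d = lam / mu
r1, r2 = np.roots([1.0, -c, d])     # roots of r^2 - c r + d = 0

pi0, pi1 = 0.3, 0.2                 # arbitrary starting values
seq = [pi0, pi1]
for _ in range(20):
    seq.append(c * seq[-1] - d * seq[-2])

# Closed form: pi_n = A r1^n + B r2^n, with A, B fixed by pi_0 and pi_1
A, B = np.linalg.solve([[1.0, 1.0], [r1, r2]], [pi0, pi1])
closed = [A * r1**n + B * r2**n for n in range(len(seq))]

print(np.allclose(seq, closed))     # True: iteration matches the closed form
print(sorted([r1, r2]))             # one root < 1 and one root > 1
```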
# Add lines above and below title [duplicate] Currently working on a document of type article. I'd like to add two horizontal lines above and below the title. I tried the following: \documentclass{article} \begin{document} \line(1,0){250} \title{A Title} \line(1,0){250} \author{FirstName LastName} \maketitle \clearpage \end{document} This does not give the desired effect, and splits the title page into multiple pages. Could you advise? • @Johannes_B My question was posted in 2014, whereas the indicated question posted in 2016. I guess it makes sense to mark that question as duplicate??? – Melanie A Apr 2 '18 at 7:23 • We often close older questions as duplicates of newer questions. – Johannes_B Apr 2 '18 at 7:24 • @Johannes_B Why mark a question with a good answer as duplicate of a very generic one which does not show how to add lines? – user36296 Apr 2 '18 at 10:53 Try adding the line inside \titles argument: \title{\line(1,0){250}\\A Title\\\line(1,0){250}} Code: \documentclass{article} \begin{document} \title{\line(1,0){250}\\A Title\\\line(1,0){250}} \author{FirstName LastName} \maketitle \clearpage \end{document} Same with titling package: \documentclass{article} \usepackage{titling} \begin{document} \pretitle{% \begin{center}\LARGE \rule{3in}{0.4pt}\par } \posttitle{\par\rule{3in}{0.4pt}\end{center}\vskip 0.5em} \title{A Title} \author{FirstName LastName} \maketitle \clearpage \end{document} Well you can custom easily your title page with the titlepage environment. Belong your MWE we can get: Here is the code: \documentclass{article} \begin{document} \begin{titlepage} \setlength{\parindent}{0pt} \setlength{\parskip}{0pt} \vspace*{\stretch{1}} \rule{\linewidth}{1pt} \begin{flushright} \Huge A Title \\[14pt] First Name Last Name \end{flushright} \rule{\linewidth}{2pt} \vspace*{\stretch{2}} \end{titlepage} \end{document} Notice the way of build the lines and their thickness. You don't need more. You can build manually a title page with this environment. If you do that will be useful to use commands such as \vspace, \vspace*, \vfill, between others.
## Some History of Special Relativity

The speed of light was the central question that gave rise to the theory of relativity. The speed of light is very large compared to the speeds we experience. We have no physical intuition about speeds approaching c. We can however measure the speed of light with a rotating mirror. In what frame is the speed of light c? Newton's laws are independent of which inertial frame is chosen. Is the speed of light going to break this symmetry of physics?

Maxwell's equations predict the speed of light from some basic measurements of how fields are produced from charges and currents. Since currents are just moving charges, they also essentially predict how the fields transform as we transform from one inertial reference frame to another. These transformations were problematic. There was a simple way out. There could be one frame in which the medium on which EM waves propagate is at rest. The equations of EM were consistent if the speed of light is constant in one fixed frame. Physicists thought EM waves must propagate in some medium. Physicists postulated the "ether" (aether). They thought space is filled with the "ether" in which EM waves propagate at a fixed speed. Ether gave one fixed frame for EM. But experiments, particularly Michelson-Morley, disagreed. And we would lose the symmetry found in Newton's laws: "any inertial frame".

The ether theory was testable. We should see some velocity of the ether. We should see a seasonal variation. Michelson and Morley set up an experiment sensitive even to the motion of the earth.

Albert Abraham Michelson (1852-1931) was a German-born U.S. physicist (at Caltech) who established the speed of light as a fundamental constant. He received the 1907 Nobel Prize for Physics. In 1878 Michelson began work on the passion of his life, the measurement of the speed of light. His attempt to measure the effect of the earth's velocity through the supposed ether laid the basis for the theory of relativity. He was the first American scientist to win the Nobel Prize.

Edward Williams Morley (1838-1923) was an American chemist whose reputation as a skilled experimenter attracted the attention of Michelson. In 1887 the pair performed what has come to be known as the Michelson-Morley experiment to measure the motion of the earth through the ether.

The figure below shows the Michelson interferometer on a block of granite. A beam of light, split and reflected from two mirrors, will interfere. The experiment is on a granite block floating in mercury to greatly reduce vibration and allow easy rotation. One can slowly rotate the apparatus and measure the interference change. Michelson and Morley found no change as they rotated. The speed of light is the same even though the earth is moving.

Oliver Heaviside (1850-1925) was a telegrapher, but deafness forced him to retire and devote himself to investigations of electricity. He became an eccentric recluse, befriended by FitzGerald and (by correspondence) by Hertz. In 1892 he introduced the operational calculus (Laplace transforms) to study transient currents in networks and theoretical aspects of problems in electrical transmission. In 1902, after wireless telegraphy proved effective over long distances, Heaviside theorized that a conducting layer of the atmosphere existed that allows radio waves to follow the Earth's curvature. He invented vector analysis and wrote Maxwell's equations as we know them today. He showed how EM fields transformed to new inertial frames.

Hendrik Antoon Lorentz (1853-1928), a professor of physics at the University of Leiden, sought to explain the origin of light by the oscillations of charged particles inside atoms. Under this assumption, a strong magnetic field would affect the wavelength. The observation of this effect by his pupil, Zeeman, won the 1902 Nobel Prize for the pair. However, the Lorentz theory could not explain the results of the Michelson-Morley experiment. Influenced by the proposal of FitzGerald, Lorentz arrived at the (approximate) formulas known as the Lorentz transformations to describe the relation of mass, length and time for a moving body. (Poincare did this more accurately but referred to it as the Lorentz transformation.) These equations form the basis for Einstein's special theory of relativity.

George Francis FitzGerald (1851-1901), a professor at Trinity College, Dublin, was the first to suggest that an oscillating electric current would produce radio waves, laying the basis for wireless telegraphy. In 1892 FitzGerald suggested that the results of the Michelson-Morley experiment could be explained by the contraction of a body along its direction of motion.

Einstein's "On the Electrodynamics of Moving Bodies" introduced Special Relativity. Einstein had read Lorentz's book and worked for a few years on the problem. He did not believe there should be one fixed frame. He had a breakthrough, which he called "The Step", in 1905 when he published his paper.

Albert Einstein (1879-1955) grew up in Munich where his father and his uncle had a small electrical plant and engineering works. Einstein's special theory of relativity, first printed in 1905 with the title "On the Electrodynamics of Moving Bodies", had its beginnings in an essay Einstein wrote at age sixteen. The special theory is often regarded as the capstone of classical electrodynamic theory. Einstein did not get a Nobel prize for Special Relativity. He got one for contributions to theoretical physics including the photoelectric effect. The committee did not think Special Relativity had been proved correct until the 1940s.

Einstein wanted the speed of light to be the same in every frame. This would work for the E&M equations and the way the fields must transform. It would agree with experiment. Einstein did consider experiment, but maybe not Michelson-Morley. But velocity addition didn't make sense to anyone. How could an observer in an inertial frame moving at v measure light to move at the same speed as we do in our frame at rest? In what he called "The Step", Einstein realized that by discarding the concept of a universal time, the speed of light could be the same in every frame. In going from one inertial frame to another, both x and t transform. The time is different in different inertial frames of reference. He derived the previously stated Lorentz transformation from the requirement that the speed of light is the same in every inertial frame.

Jim Branson 2012-10-21
Intermediate Algebra 2e

# Practice Test

In the following exercises, factor completely.

445. $80a^2+120a^3$
446. $5m(m-1)+3(m-1)$
447. $x^2+13x+36$
448. $p^2+pq-12q^2$
449. $xy-8y+7x-56$
450. $40r^2+810$
451. $9s^2-12s+4$
452. $6x^2-11x-10$
453. $3x^2-75y^2$
454. $6u^2+3u-18$
455. $x^3+125$
456. $32x^5y^2-162xy^2$
457. $6x^4-19x^2+15$
458. $3x^3-36x^2+108x$

In the following exercises, solve.

459. $5a^2+26a=24$
460. The product of two consecutive integers is 156. Find the integers.
461. The area of a rectangular place mat is 168 square inches. Its length is two inches longer than the width. Find the length and width of the placemat.
462. Jing is going to throw a ball from the balcony of her condo. When she throws the ball from 80 feet above the ground, the function $h(t)=-16t^2+64t+80$ models the height, h, of the ball above the ground as a function of time, t. Find: the zeros of this function, which tell us when the ball will hit the ground; the time(s) the ball will be 128 feet above the ground; the height the ball will be at $t=4$ seconds.
463. For the function $f(x)=x^2-7x+5$, find when $f(x)=-7$. Use this information to find two points that lie on the graph of the function.
464. For the function $f(x)=25x^2-81$, find: the zeros of the function; the x-intercepts of the graph of the function; the y-intercept of the graph of the function.
# Question #f233e

Apr 5, 2017

Subtract: $\frac{2}{3} - \frac{1}{2} = \frac{1}{6}$

#### Explanation:

To find out how much is left, subtract 1/2 from 2/3: $\frac{2}{3} - \frac{1}{2}$. To subtract, there needs to be a common denominator. The LCM of 3 and 2 is 6, so change both fractions to a denominator of 6: $\frac{2}{3} \times \frac{2}{2} - \frac{1}{2} \times \frac{3}{3}$. This gives $\frac{4}{6} - \frac{3}{6} = \frac{1}{6}$.
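For what it's worth, the same common-denominator steps can be checked with Python's `fractions` module:

```python
from fractions import Fraction

# 2/3 = 4/6 and 1/2 = 3/6, as in the explanation above
print(Fraction(2, 3) == Fraction(4, 6), Fraction(1, 2) == Fraction(3, 6))  # True True
print(Fraction(2, 3) - Fraction(1, 2))  # 1/6
```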
# Solving Laplace Equation having boundary conditions

#### Vinod

Hello, Please watch this video [video]https://youtu.be/_cPU-nf9owk[/video] and tell me whether

$A)\; C_{n,m}=\dfrac{16V_0}{\pi^2 mn\cosh\Big(\sqrt{(\frac{n\pi}{a})^2+(\frac{m\pi}{a})^2}\Big)}$

or

$B)\; C_{n,m}=\dfrac{16V_0}{\pi^2 mn\cosh\Big(\sqrt{(\frac{n\pi}{a})^2+(\frac{m\pi}{a})^2}\;\frac{a}{2}\Big)}$

Which is correct, A) or B)?

#### Vinod

You are kidding, right?

Hello, Notice the argument of $\cosh$. The math professor in the video put $\beta= \sqrt{(\frac{n\pi}{a})^2+(\frac{m\pi}{a})^2}\,y$ where $y=\frac{\pm a}{2}$. But I think the math professor in the video forgot to put $y$ in the argument of $\cosh$ while calculating $C_{n,m}$. Also, the math professor didn't comment on $V=0$ at the centre of the cube. Is $E=0$ there?
### Home > INT2 > Chapter 7 > Lesson 7.2.2 > Problem7-90 7-90. Sketch a graph that represents each situation. Describe the features of the graph completely. 1. Rafael walks from his home to the store one mile away in $15$ minutes. He shops for $30$ minutes. Then he waits for a bus outside the store and takes the bus home. Represent his distance from home over time. 2. Sujata works at a gym $15$ hours a week. She makes $\9.00$/hr, plus a $\10$ bonus for each new gym membership she sells. Represent her weekly earnings based on the number of gym memberships she sells. Your graph should be linear. If you are only graphing her weekly earnings, should the points be connected? 3. Tom has $\500$ in a savings account. For balances under $\1000$, the bank does not pay interest. For balances over $\1000$, the bank pays $3\%$ annual interest. On the last day of each month, Tom deposits $\150$. Represent his account balance over time.
# Why is the cross-entropy always more than the entropy? I understand intuitively why cross-entropy is always bigger. However, could someone show that mathematically? Let's say you have two distributions $$p$$ and $$q$$. Cross entropy is: $$H(p,q)=-\sum_x{p(x)\log{q(x)}}$$. First, you'll manipulate it to obtain the very well known form: $$H(p,q)=H(p)+D_{KL}(p||q)$$, where $$D_{KL}(p||q)$$ is called the KL distance. $$H(p,q)=-\sum_x{p(x)\log{\left(\frac{q(x)p(x)}{p(x)}\right)}}=-\sum_xp(x)\log{\left(\frac{q(x)}{p(x)}\right)}-\sum_x{p(x)\log{p(x)}}=D_{KL}(p||q)+H(p)$$ Then, it only remains to prove that $$D_{KL}(p,q)\geq 0$$, which can be done in various ways. The page I shared uses $$\log(x)\leq x-1$$: $$D_{KL}(p||q)\geq \sum_x{p(x)\left(1-\frac{q(x)}{p(x)} \right)}=\sum_x{p(x)}-\sum_x{q(x)}=1-\sum_x{q(x)}\geq 0$$ From the beginning, we assume that $$x$$ is in the support set of $$p(x)$$, i.e. $$p(x)$$ is non-zero. In the wikipedia entry, it says $$\sum_x{q(x)}=1$$, but I disagree with it, since support set of $$q(x)$$ can be different. More intuitively with logical deduction: i) P(x) is the "real scenario", how things really happens, and Q(x) is the estimation ii) P(x) (-Log(Q(x))* is the "loss / punishment" function, since if indeed when, as example, probability P(x) to occur to so high and Q(x) is very low, then as P and Q are always between 0 and 1, that loss / punishment function will be VERY HIGH (due to log of a very small number) So a) By definition: H(P) = Sum_over_x (P(x)*Log(P(x)) is the Entropy, and P is the "Ground True" (or known to have minimal errors if the ground true cannot be properly measured) b) then it follows, for ALL other estimated distribution/probabilities Qi, H(P,Qi): Sum_over_x (P(x)*Log(Qi(x)) is always on average less accurate, and hence greater than H(P) Hence H(P,Q) >= H(P) is really by definition of the H(P). Note: Compare error A: P(x1) is 0.1, and Q(x1) is 0.5, log(0.5) = -0.69314718056, P(x1)* -Log (Q(x1)) = 0.0693..... error B: P(x2) is 0.5 vs Q(x2) is 0.1, log(0.1) = -2.302585093 P(x2)* -Log (Q(x2)) = 1.151..... In machine learning use case, P(xi) * -Log (Q(xi)) will penalize more those predict a lot lower probability when indeed it is a high probability case, whereas for a lower probability P(xi) such as 0.1 in error A above, $$H(p,q) \geq H(p) \xrightarrow[]{} \sum_{x}^{}-p_{x}\log(q_{x}) \geq \sum_{x}^{}-p_{x}\log(p_{x}) \\ \xrightarrow[]{-\log(x)=\log(\frac{1}{x})} \sum_{x}^{}p_{x}\log(\frac{1}{q_{x}}) \geq \sum_{x}^{}p_{x}\log(\frac{1}{p_{x}}) \\ \xrightarrow[]{} \sum_{x}^{}p_{x}\log(\frac{1}{p_{x}}) - \sum_{x}^{}p_{x}\log(\frac{1}{q_{x}}) \leq 0 \\ \xrightarrow[]{} \sum_{x}^{}p_{x}[\log(\frac{1}{p_{x}})- \log(\frac{1}{q_{x}})] \leq 0 \\ \xrightarrow[]{\log(x) - \log(y) = \log(\frac{x}{y})} \sum_{x}^{}p_{x}\log(\frac{q_{x}}{p_{x}}) \leq 0 \\ \xrightarrow[\sum_{x}^{}p_{x} = 1, \text{So It Is Like \color{red}\alpha In Concave Definition}]{{\log \text{is a Concave Function}}} \log[\sum_{x_{p_{x}\neq 0}}^{}p_{x}(\frac{q_{x}}{p_{x}})] \leq 0 \\ \xrightarrow[]{} \log[\sum_{x_{p_{x}\neq 0}}^{}\not{p_{x}}\frac{q_{x}}{\not{p_{x}}}] \leq 0 \xrightarrow[]{} \log(\sum_{x_{p_{x}\neq 0}}^{}q_{x}) \leq 0 \\ \xrightarrow[]{} \log(\sum_{x_{p_{x}\neq 0}}^{}q_{x}) \leq \log(\sum_{x}^{}q_{x}) = \log(1) = 0 \\ \xrightarrow[]{} \text{Proved}$$
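A quick numerical check of the decomposition $H(p,q)=H(p)+D_{KL}(p||q)$ and of the resulting inequality, for a pair of made-up distributions:

```python
import numpy as np

p = np.array([0.1, 0.4, 0.5])          # "true" distribution (example values)
q = np.array([0.8, 0.1, 0.1])          # model / estimate (example values)

H_p  = -np.sum(p * np.log(p))          # entropy H(p)
H_pq = -np.sum(p * np.log(q))          # cross-entropy H(p, q)
KL   =  np.sum(p * np.log(p / q))      # KL divergence D_KL(p || q)

print(round(H_pq, 4), "=", round(H_p + KL, 4))   # H(p,q) = H(p) + KL
print(H_pq >= H_p, KL >= 0)                      # both True
```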
# Auto/cross correlation of a cosine in the context of an adaptive beamformer I am currently reading chapter 13 of 'Adaptive Signal Processing' by Widrow and Stearns, so if anyone happens to have a copy of the book to hand it could be helpful. I am reading the chapter on "Introduction to Adaptive Arrays and Adaptive Beamforming" and am struggling understanding the auto-correlation/cross correlation between the primary and reference signals at a delay of one (as shown in the diagram below) and in the more general case (where the signal is arriving at an angle), where the general cross correlation vector is described by: $$\mathbf{P} = \begin{bmatrix} \phi_{dx}(0) \\ \phi_{dx}(1) \end{bmatrix} = \begin{bmatrix} E [C \cos k\omega_0 \cdot C \cos (k + \delta_0)\omega_0] \\ E [C \cos k\omega_0 \cdot C \sin (k + \delta_0)\omega_0] \end{bmatrix}$$ where $k$ is the sample number, $C$ is a constant amplitude, $\delta_0$ is the time delay of arrival between the primary and reference sensor and $\omega_0$ is the angular frequency. The book assumes that the refence signal to the array is a cosine of the form: $$\text{reference signal} = C \cos [(k + \delta_0) \omega_0].$$ (The above equation is 13.3 in the book). My question is, I don't understand why, for $\phi_{dx} (1)$ the 'lag' part of the equation becomes a sine rather than a cosine with a shift 1 (sample)? It's as if there is a shift of $-\pi/2$, hence the sine, but I can't see where this is from? Why for a lag of 1 does this cosine seemingly change to a sine? My understanding of cross (or auto) correlation is that for $\phi_{dx} (1)$ you compare one signal with another with a shift of 1 sample (hence a lag of 1). I understand that a sine is the quadrature of a cosine and understand why these equal zero (as the book describes; "$\phi_{dx} (1)$, is zero because it represents the correlation of (13.3) with its quadrature (sine) component"). My question is why is a sine introduced for a lag of 1? Diagram for completeness: Many thanks • I've read these pages in Widrow, there is no direct explanations why 1 sample shift is equal to $-\pi/2$.. It is so if sample rate is $2\omega_0$ but it isn't told to us. But I think it isn't too important for this case. In the scheme above you have only one tap and it exactly shifts the signal for $\pi/2$, so maybe author means "lag of 1" is this tap. It's only my suggestion. – Serj Jan 12 '15 at 20:35 The $90$ degree shift introduces the sine component to the cross-correlation vector $P$ at lag 1 ($\phi_{dx}(1)$). The block should be considered to be a 'high-level' description, and does not mean that a shift of one sample is equivalent to a $90$ degree phase shift, or anything like that. Rather it should be thought of as "shift by as many samples as necessary for a phase shift of $90$ degrees".
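For intuition, the two entries of $\mathbf{P}$ can be estimated numerically. The sketch below (with made-up values of $C$, $\omega_0$ and $\delta_0$) averages over many samples $k$ and recovers $\tfrac{C^2}{2}\cos\delta_0\omega_0$ for the in-phase reference and $\tfrac{C^2}{2}\sin\delta_0\omega_0$ for the quadrature (90-degree shifted) tap, which is exactly where the sine comes from:

```python
import numpy as np

C, w0, delta0 = 1.0, 0.3, 2.5     # amplitude, angular frequency, delay (example values)
k = np.arange(200_000)            # sample index

d = C * np.cos(k * w0)                       # primary-style signal
x_cos = C * np.cos((k + delta0) * w0)        # reference, in-phase tap
x_sin = C * np.sin((k + delta0) * w0)        # reference after the 90-degree shift

print(np.mean(d * x_cos), 0.5 * C**2 * np.cos(delta0 * w0))   # ~ phi_dx(0)
print(np.mean(d * x_sin), 0.5 * C**2 * np.sin(delta0 * w0))   # ~ phi_dx(1)
```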
# How hard is it to find the first layer of this basic $\mathbb{Z}_p$-extension? Let $$p$$ be a prime number and $$\zeta_{p^n}$$ be a primitive $$p^n$$-th root of unity. We know that there is a unique subfield $$\mathbb{Q}_1$$ of $$\mathbb{Q}(\zeta_{p^2})$$ such that $$[\mathbb{Q}_1:\mathbb{Q}]=p$$ (the first layer of the cyclotomic $$\mathbb{Z}_p$$-extension of $$\mathbb{Q}$$). Here are some basic things I know about $$\mathbb{Q}_1$$: 1. Since $$[\mathbb{Q}(\zeta_{p^2}):\mathbb{Q}]=p(p-1)$$ and Gal$$(\mathbb{Q}(\zeta_{p^2})/\mathbb{Q})$$ is cyclic we know that $$\mathbb{Q}_1$$ is contained in the maximal real subfield $$\mathbb{Q}(\zeta_{p^2})^+$$ of $$\mathbb{Q}(\zeta_{p^2})$$. 2. Since $$p$$ is prime, we have that $$\mathbb{Q}_1$$ contains no other subfields. 3. We know that $$p$$ is totally ramified in $$\mathbb{Q}_1$$. If $$k$$ is an imaginary quadratic field such that the discriminant $$m$$ of $$k$$ is co-prime to $$p$$, then the first layer $$k_1$$ of the cyclotomic $$\mathbb{Z}_p$$-extension of $$k$$ is the compositum $$k\mathbb{Q}_1$$ (this is also true for $$k_n$$ and $$k\mathbb{Q}_n$$). Let $$\lambda = \lambda_p$$ be Iwasawas lambda invariant for the cyclotomic $$\mathbb{Z}_p$$-extension $$k \subseteq k_1 \subseteq k_2 \dots k_{\infty}$$, and $$A(k_n)$$ be the $$p$$-part of the class group of $$k_n$$. In this paper, Sands has shown that Iwasawa's Theorem usually kicks in at an early sage for Imaginary quadratic fields. In particular, if $$\lambda < p-1$$, then $$|A(k_1)| = |A(k)|p^{\lambda}$$. So, it seems to me if we know enough about $$k_1$$, we may have a shot at knowing about $$\lambda$$ (provided we know about $$A(k)$$ and $$A(k_1)$$, which is another question altogether). But from the above, I feel that knowing about $$\mathbb{Q}_1$$ in general might be worthwhile since again $$k_1 = k\mathbb{Q}_1$$. After trying to work out a few examples, it seems pretty difficult in general to figure out the first what the first layer $$\mathbb{Q}_1$$ is. Some questions I have: 2. Is there anything in the literature that may help with this? 3. Are there any other obvious properties about $$\mathbb{Q}_1$$ that I've overlooked? Any help is appreciated. • One way to generate this field explicitly is to use the polynomial whose roots are $\sum_{n \in c} \zeta_{p^2}^n$ where $c$ ranges over the $p$ cosets of ${(\mathbb{Z}/p^2\mathbb{Z})^\times}^p$ in $(\mathbb{Z}/p^2\mathbb{Z})^\times$. For $p=3,5,7,11$ this gives $x^3-x+1$ (for $\zeta_9^n + \zeta_9^{-n}$), $x^5 - 10x^3 - 5x^2 + 10x - 1$, $x^7 - 7x^6 + 49x^4 - 98x^2 - 49x + 7$, and $x^{11}-11x^{10}+363x^8-1089x^7-1089x^6+6413x^5+242x^4-11616x^3-2178x^2+6534x+2673$. Mar 13, 2021 at 4:40 • @NoamD.Elkies Does this come from Kummer theory? Or am I totally off? Mar 13, 2021 at 4:56 • typo: $x^3-3x+1$ Mar 13, 2021 at 10:32 • By the way, one can also compute anticyclotomic ones : numdam.org/item/CM_1976__32_2_157_0/?source=CM_1975__30_3_259_0 by Carroll and Kisilevsky and higher layers for $p=3$ in arxiv.org/abs/1806.10473. Though that was not asked. Mar 13, 2021 at 11:07 • @ChrisWuthrich Thanks for pointing me towards the paper. Mar 13, 2021 at 15:04 Let $$p$$ be a prime number and let $$\mathbb{Q}_{n}$$ be the $$n$$th layer of the cyclotomic $$\mathbb{Z}_{p}$$-extension of $$\mathbb{Q}$$. Then $$A(\mathbb{Q}_{n})$$, the $$p$$-part of the class group of $$\mathbb{Q}_{n}$$, is trivial for all $$n$$. (In particular, the $$\lambda$$ and $$\mu$$ invariants of the cyclotomic $$\mathbb{Z}_p$$-extension of $$\mathbb{Q}$$ are both zero.) 
So my feeling is that explicit knowledge of $$\mathbb{Q}_{1}$$ doesn't really help with the problem you're interested in. Of course, I'd be interested to know what's going on if this intuition turns out to be incorrect. • My real aim was to find the initial layer of an imaginary quadratic field $\mathbb{Q}(\sqrt{-m})$, so I wanted to find $\mathbb{Q}_1$ and then just adjoin $\sqrt{-m}$. Mar 13, 2021 at 15:07 • Okay, but I'm not clear in what sense you want to "find" $\mathbb{Q}_{1}$. Something along the lines of Noam's comment, or in some other sense? Mar 13, 2021 at 17:04 • I wanted to get a better picture of what $\mathbb{Q}_1$ is like, so Noam's comment is more or less what I was looking for. Mar 13, 2021 at 17:28
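Noam Elkies' construction in the comments is easy to sanity-check numerically for $p=3$; a small sketch (floating point, so the coefficients are only recovered up to rounding):

```python
import numpy as np

# First layer of the cyclotomic Z_3-extension: periods zeta_9^n + zeta_9^{-n}.
# The cosets of the cubes {1, 8} in (Z/9Z)^* are {1,8}, {2,7}, {4,5}.
zeta = np.exp(2j * np.pi / 9)
periods = [zeta**1 + zeta**8, zeta**2 + zeta**7, zeta**4 + zeta**5]

coeffs = np.poly(periods)          # monic polynomial with these roots
print(np.round(coeffs.real, 6))    # approximately [1, 0, -3, 1], i.e. x^3 - 3x + 1
```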
# Simplifying a logarithm

1. Apr 25, 2014

### Rectifier

Hey there! I am getting two completely different equations when I try to simplify one. What am I doing wrong?

1. $$y=ln(2x) \ \Leftrightarrow \ y=ln(2) + ln(x) \ \Leftrightarrow \ e^y=e^{ln(2)}+e^{ln(x)} \ \Leftrightarrow \ e^y=2+x$$
2. $$y=ln(2x) \ \Leftrightarrow \ e^y=e^{ln(2x)} \ \Leftrightarrow \ e^y=2x$$

I am sorry if it's something completely obvious. It's pretty late here so my brain doesn't function properly :)

2. Apr 25, 2014

### Staff: Mentor

It should be $e^{ln(2) + ln(x)} = e^{ln(2)} \cdot e^{ln(x)}$ in your third step.

3. Apr 25, 2014

### Rectifier

Oh! Thank you Mark!
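A one-line numerical illustration of the point made in post #2 (exponentials turn sums into products):

```python
import math

x = 3.7  # any positive test value
print(math.exp(math.log(2) + math.log(x)), 2 * x, 2 + x)  # e^(ln2 + lnx) equals 2x, not 2 + x
```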
## Fundamentals of Physics Extended (10th Edition) $2.2\times10^{-18}\ m$ We know that the time taken by the electron to travel a distance d is $t=6.72\times 10^{-10}\ s$. During this time, the electron falls a vertical distance. So, using kinematics formula; $y = ut+\frac{1}{2}at^2$ $y=0+\frac{1}{2}gt^2$ $y=0.5\times9.8\times(6.72\times 10^{-10}\ s)^2$ $y=2.2\times10^{-18}\ m$
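The arithmetic is easy to reproduce:

```python
g, t = 9.8, 6.72e-10      # m/s^2 and s, as given above
print(0.5 * g * t**2)     # ~2.2e-18 m
```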
FACTOID # 29: 73.3% of America's gross operating surplus in motion picture and sound recording industries comes from California. Home Encyclopedia Statistics States A-Z Flags Maps FAQ About WHAT'S NEW SEARCH ALL FACTS & STATISTICS    Advanced view Search encyclopedia, statistics and forums: (* = Graphable) Encyclopedia > Hilbert's paradox of the Grand Hotel Hilbert's paradox of the Grand Hotel was a mathematical paradox about infinity presented by German mathematician David Hilbert (18621943): Robert Boyles self-flowing flask fills itself in this diagram, but perpetual motion machines cannot exist. ... The infinity symbol ∞ in several typefaces The word infinity comes from the Latin infinitas or unboundedness. ... David Hilbert (January 23, 1862, Wehlau, East Prussia – February 14, 1943, Göttingen, Germany) was a German mathematician, recognized as one of the most influential mathematicians of the 19th and early 20th centuries. ... 1862 was a common year starting on Wednesday (see link for calendar). ... 1943 (MCMXLIII) was a common year starting on Friday (the link is to a full 1943 calendar). ... In a hotel with a finite number of rooms, it is clear that once it is full, no more guests can be accommodated. Now, imagine a hotel with an infinite number of rooms. One might assume that the same problem will arise when all the rooms are occupied. However, in an infinite hotel, the situations "every room is occupied" and "no more guests can be accommodated" do not turn out to be equivalent. There is a way to solve the problem: if you move the guest occupying room 1 to room 2, the guest occupying room 2 to room 3, etc., you can fit the newcomer into room 1. Unlike a finite hotel, in an infinite hotel, being "full" in the sense that every room contains a person is not the same as being "full" in the sense that there is no space for another person. Note that a movement of an infinite number of guests would constitute a supertask. Infinity is a word carrying a number of different meanings in mathematics, philosophy, theology and everyday life. ... In philosophy, a supertask is a task occurring within a finite interval of time involving infinitely many steps (subtasks). ... It would seem to be possible to make place for a countably infinite number of new clients: just move the person occupying room 1 to room 2, occupying room 2 to room 4, occupying room 3 to room 6, etc., and all the odd-numbered new rooms will be free for the new guests. However, this is where the paradox lies. Even in the previous statement, if an infinite number of people fill the odd numbered rooms, then what amount is added to the infinity that was already there? Can one double an infinite number? Also, for example, say the infinite number of new guests do come and fill all of the odd-numbered rooms, and then the infinite number of guests in the even-numbered rooms leaves. An infinite number has just been 'subtracted' from an infinite number, yet an infinite number of people remain. This is where Hilbert's Hotel is paradoxical. In mathematics the term countable set is used to describe the size of a set, e. ... If a coutably infinite number of coaches arrive, each with an countably infinite number of passengers, you can even deal with that: first empty the odd numbered rooms as above, then put the first coach's load in rooms 3n for n = 1, 2, 3, ..., the second coach's load in rooms 5n for n = 1, 2, ... and so on; for coach number i we use the rooms pn where p is the i+1-st prime number. 
You can also solve the problem by looking at the license plate numbers on the coaches and the seat numbers for the passengers (if the seats are not numbered, number them). Regard the hotel as coach #0. Interleave the digits of the coach numbers and the seat numbers to get the room numbers for the guests. The guest in room number 1729 moves to room 1070209. The passenger on seat 8234 of coach 56719 goes to room 5068721394 of the hotel.

Some find this state of affairs profoundly counterintuitive. The properties of infinite 'collections of things' are quite different from those of ordinary 'collections of things'. In an ordinary hotel, the number of odd-numbered rooms is obviously smaller than the total number of rooms. However, in Hilbert's aptly named Grand Hotel, the 'number' of odd-numbered rooms is as 'large' as the total 'number' of rooms. In mathematical terms, this would be expressed as follows: the cardinality of the subset containing the odd-numbered rooms is the same as the cardinality of the set of all rooms. In fact, infinite sets are characterized as sets that have proper subsets of the same cardinality. For countable sets this cardinality is called $\aleph_0$ (aleph-null).

An even stranger story regarding this hotel shows that mathematical induction only works in one direction. No cigars may be brought into the hotel. Yet each of the guests (all rooms had guests at the time) got a cigar while in the hotel. How is this? The guest in Room 1 got a cigar from the guest in Room 2. The guest in Room 2 had previously received two cigars from the guest in Room 3. The guest in Room 3 had previously received three cigars from the guest in Room 4, and so on. Each guest kept one cigar and passed the remainder to the guest in the next-lower-numbered room.

## The cosmological argument

A number of defenders of the cosmological argument for the existence of God, such as William Lane Craig, have attempted to use Hilbert's hotel as an argument for the physical impossibility of the existence of an actual infinity. Their argument is that, although there is nothing mathematically impossible about the existence of the hotel (or any other infinite object), intuitively (they claim) we know that no such hotel could ever actually exist in reality, and that this intuition is a specific case of the broader intuition that no actual infinite could exist.
They argue that a temporal sequence receding infinitely into the past would constitute such an actual infinite.

However, the paradox of Hilbert's hotel involves not just an actual infinite, but also supertasks; it is unclear whether this claimed intuition is really the physical impossibility of an actual infinite, or merely the physical impossibility of a supertask. A causal chain receding infinitely into the past need not involve any supertasks. See Thomas Aquinas' Summa Theologiae for details about infinite regressions and the existence of God.

## In fiction

The novel White Light, by mathematician/science fiction writer Rudy Rucker, includes a hotel based on Hilbert's paradox.

Stephen Baxter's science fiction novel Transcendent has a brief discussion on the nature of infinity, with an explanation based on the paradox, modified to use starship troopers rather than hotels.

Geoffrey A. Landis' Nebula Award-winning short story "Ripples in the Dirac Sea" uses the Hilbert hotel as an explanation of why an infinitely-full Dirac sea can nevertheless still accept particles.

In Peter Høeg's novel Smilla's Sense of Snow, the titular heroine reflects that it is admirable for the hotel's manager and guests to go to all that trouble so that the latecomer can have his own room and some privacy.

The booklet The Cat in Numberland by mathematician/philosopher Ivar Ekeland presents Hilbert's paradox as a tale for children, in the tradition of Lewis Carroll. It is illustrated by John O'Brien.

## See also

Pigeonhole principle
# zbMATH — the first resource for mathematics

The sparse Laplacian shrinkage estimator for high-dimensional regression. (English) Zbl 1227.62049

Summary: We propose a new penalized method for variable selection and estimation that explicitly incorporates the correlation patterns among predictors. This method is based on a combination of the minimax concave penalty and Laplacian quadratic associated with a graph as the penalty function. We call it the sparse Laplacian shrinkage (SLS) method. The SLS uses the minimax concave penalty for encouraging sparsity and Laplacian quadratic penalty for promoting smoothness among coefficients associated with the correlated predictors. The SLS has a generalized grouping property with respect to the graph represented by the Laplacian quadratic. We show that the SLS possesses an oracle property in the sense that it is selection consistent and equal to the oracle Laplacian shrinkage estimator with high probability. This result holds in sparse, high-dimensional settings with $$p \gg n$$ under reasonable conditions. We derive a coordinate descent algorithm for computing the SLS estimates. Simulation studies are conducted to evaluate the performance of the SLS method and a real data example is used to illustrate its application.

##### MSC:
62J07 Ridge regression; shrinkage estimators (Lasso)
62J05 Linear regression; mixed models
62H12 Estimation in multivariate analysis
65C60 Computational problems in statistics (MSC2010)

##### Software:
PDCO; OSCAR; glasso; sparsenet
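To give a concrete feel for the kind of objective the summary describes, here is a small illustrative sketch in Python/NumPy. It is not the authors' implementation; the MCP parametrization, the unnormalized Laplacian, and all numerical values below are assumptions chosen for illustration only.

```python
import numpy as np

def mcp(beta, lam, gamma):
    """Minimax concave penalty (a common parametrization), applied elementwise."""
    a = np.abs(beta)
    return np.where(a <= gamma * lam,
                    lam * a - a ** 2 / (2 * gamma),   # concave region
                    gamma * lam ** 2 / 2)             # flat beyond the threshold

def sls_style_objective(beta, X, y, L, lam1, gamma, lam2):
    """Least squares + MCP sparsity penalty + Laplacian quadratic smoothness penalty.
    L is a p x p graph Laplacian encoding the correlation graph of the predictors."""
    n = X.shape[0]
    fit = np.sum((y - X @ beta) ** 2) / (2 * n)
    return fit + np.sum(mcp(beta, lam1, gamma)) + lam2 * beta @ L @ beta / 2

# Tiny made-up example: 4 predictors on a chain graph, first two truly active.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X @ np.array([1.0, 1.0, 0.0, 0.0]) + 0.1 * rng.normal(size=50)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)  # adjacency
L = np.diag(A.sum(axis=1)) - A                                                  # unnormalized Laplacian
print(sls_style_objective(np.array([1.0, 1.0, 0.0, 0.0]), X, y, L,
                          lam1=0.1, gamma=3.0, lam2=0.05))
```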
Review question

# If a disc is missing two smaller discs, where's its centre of gravity?

Ref: R6670

## Solution

$AB$ and $CD$ are two perpendicular diameters of a uniform circular metal disc of radius $\quantity{12}{in.}$ and centre $O$. Two circular holes of radii $\quantity{4}{in.}$ and $\quantity{2}{in.}$ are cut out of the disc, their centres being $\quantity{6}{in.}$ from $O$ along $OA$ and $\quantity{10}{in.}$ from $O$ along $OC$ respectively. Find

1. the distances of the centre of gravity, $G$, of the remaining portion of the disc from the diameters $CD$ and $AB$;

We begin by sketching this on coordinate axes. We have chosen to have $O$ at the origin, with $AB$ lying along the $y$-axis, with $A$ at $(0,-12)$ and $C$ at $(-12,0)$. We could equally well have drawn it with $A$ at $(0,12)$ and/or $C$ at $(12,0)$, or we could have swapped $A$ and $C$ around and had $AB$ on the $x$-axis and $CD$ on the $y$-axis. It makes no difference, as the axes are for our convenience only. We could also have placed the origin somewhere else, or not have had $AB$ lying along or parallel to an axis, but that would have made things harder, while our aim is to make things easier.

We therefore have a disc centred at $(0,0)$ of radius $12$ missing a disc centred at $(0,-6)$ of radius $4$, and also missing another disc centred at $(-10,0)$ of radius $2$.

We know how to calculate the centre of gravity (centre of mass) of a collection of objects by taking moments. We also know that the centre of gravity of a disc is at its centre, by symmetry. If we complete the reduced disc to a complete disc by replacing the missing small discs, we can calculate the centre of gravity of the complete disc by thinking

complete disc = reduced disc + small disc 1 + small disc 2.

So let's suppose that the centre of gravity of the reduced disc is at $(X,Y)$, and that the lamina has density equal to $1$ per unit area. Then the mass of the complete disc is $12^2\times\pi=144\pi$, the smaller discs have masses $4^2\times\pi=16\pi$ and $2^2\times\pi=4\pi$, so the reduced disc has mass $144\pi-16\pi-4\pi=124\pi$.

Taking moments about $OA$ now gives $144\pi\times 0= 124\pi\times X+16\pi\times0+4\pi\times(-10) \implies X = \dfrac{10}{31}.$

Similarly, taking moments about $OC$ gives $144\pi\times 0= 124\pi\times Y+16\pi\times(-6)+4\pi\times0 \implies Y = \dfrac{24}{31}.$

Thus $G$ is the point $\left(\dfrac{10}{31},\dfrac{24}{31}\right)$, and these are the required distances from $AB$ and $CD$ respectively (in inches).

2. the angle that $OA$ will make with the vertical if this remaining portion is hung on a smooth pivot at $O$.

When a uniform lamina is hung from a smooth pivot, the centre of mass will always lie directly below the pivot. The centre of gravity $G$ here is labelled, and the straight line from $O$ through $G$ is the downwards vertical. The angle that $OA$ makes with the (upwards) vertical is therefore $\theta$.

Using trigonometry on the coordinates of $G$, we can find $\theta$ as $\theta= \tan^{-1} \frac{X}{Y} = \tan^{-1} \frac{10}{24} = 22.6^\circ \quad \text{to 1 d.p.}$
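A quick numerical check of the moments calculation (my own sketch; the coordinates match the axes chosen in the solution):

```python
from fractions import Fraction
from math import atan2, degrees

# (area/pi, x, y) for the full disc and the two removed discs (uniform lamina).
full  = (Fraction(144), Fraction(0),   Fraction(0))
hole1 = (Fraction(16),  Fraction(0),   Fraction(-6))   # 4 in. hole on OA
hole2 = (Fraction(4),   Fraction(-10), Fraction(0))    # 2 in. hole on OC

area = full[0] - hole1[0] - hole2[0]
X = (full[0] * full[1] - hole1[0] * hole1[1] - hole2[0] * hole2[1]) / area
Y = (full[0] * full[2] - hole1[0] * hole1[2] - hole2[0] * hole2[2]) / area
print(X, Y)                                  # 10/31 24/31
print(degrees(atan2(float(X), float(Y))))    # about 22.6 degrees
```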
# Algorithmic Pirogov-Sinai Theory

Time: 2019. 04. 11. 16:15
Place: H306
Speaker: Tyler Helmuth (Bristol)

What does a random independent set look like? This is an important problem at the intersection of probability theory, statistical mechanics, and theoretical computer science. I will introduce this problem, also known as the hard-core model, and explain various ways in which the question can be answered. In particular, I will describe a recent algorithm for producing approximate samples of high-density independent sets on lattices. This is the first known algorithm in the high-density regime, where standard algorithms, like MCMC using Glauber dynamics, are known to fail. Based on joint work with Will Perkins and Guus Regts.
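For reference, here is a minimal sketch of the single-site Glauber dynamics mentioned above, run for the hard-core model on a small grid (my own illustrative code, not the algorithm from the talk; the fugacity and grid size are arbitrary choices):

```python
import random

def grid_neighbors(n):
    """Adjacency lists for an n x n grid graph, vertices indexed 0..n*n-1."""
    nbrs = {v: [] for v in range(n * n)}
    for r in range(n):
        for c in range(n):
            v = r * n + c
            if c + 1 < n: nbrs[v].append(v + 1); nbrs[v + 1].append(v)
            if r + 1 < n: nbrs[v].append(v + n); nbrs[v + n].append(v)
    return nbrs

def glauber_hard_core(n=10, lam=1.0, steps=100_000, seed=0):
    """Each step: pick a vertex; occupy it with probability lam/(1+lam)
    if no neighbour is occupied, otherwise leave it unoccupied."""
    random.seed(seed)
    nbrs = grid_neighbors(n)
    occupied = set()
    for _ in range(steps):
        v = random.randrange(n * n)
        if random.random() < lam / (1 + lam) and not any(u in occupied for u in nbrs[v]):
            occupied.add(v)
        else:
            occupied.discard(v)
    return occupied

sample = glauber_hard_core()
print(len(sample), "occupied sites out of", 10 * 10)
```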
# The String of 7

Consider a string of $$n$$ $$7$$'s, $$7777\cdots77,$$ into which $$+$$ signs are inserted to produce an arithmetic expression. For example, $$7+77+777+7+7=875$$ could be obtained from eight $$7$$'s in this way. For how many values of $$n$$ is it possible to insert $$+$$ signs so that the resulting expression has value 7000?
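One way to explore the problem computationally (my own sketch, not part of the problem statement): a summand of four or more 7's already exceeds 7000, so every valid expression is built from blocks 7, 77 and 777; using a, b and c of them respectively requires 7a + 77b + 777c = 7000, i.e. a + 11b + 111c = 1000, and the string then has n = a + 2b + 3c sevens.

```python
# Enumerate the attainable string lengths n for which 7000 can be formed.
lengths = set()
for c in range(1000 // 111 + 1):
    for b in range((1000 - 111 * c) // 11 + 1):
        a = 1000 - 111 * c - 11 * b     # non-negative by the loop bounds
        lengths.add(a + 2 * b + 3 * c)  # number of 7's used
print(len(lengths))   # number of valid values of n
```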
Homework Help: Derivative of function

1. Jan 16, 2009

1. The problem statement, all variables and given/known data
Compute the derivative of the following function.

2. Relevant equations
f:[-1,1] → [-pie/2, pie/2] given by f(x) = sin^-1 (x)

3. The attempt at a solution
I know that f'(x) = 1/[sqrt(1-x^2)]. I'm not sure how to include the intervals of pie given; not sure what they want me to do.

2. Jan 16, 2009

Staff: Mentor
Knowing what the derivative is doesn't do you much good if you have to compute it. Do you know about implicit differentiation? If so, letting y = f(x), you have y = sin^-1(x). Solve this equation for x, and then calculate dy/dx. When you do this, you should get dy/dx = 1/cos(sin^-1(x)), which you can simplify further. That's where the interval [-pi/2, pi/2] comes into play.
BTW, the name of the Greek letter $\pi$ is pi, not pie.
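Spelling out the step being hinted at (a sketch of the standard argument, added for completeness rather than quoted from the thread): from $y = \sin^{-1}(x)$ we get $x = \sin y$, so $\frac{dx}{dy} = \cos y$ and hence

$$\frac{dy}{dx} = \frac{1}{\cos y} = \frac{1}{\cos(\sin^{-1} x)}.$$

Because $y$ lies in $[-\pi/2, \pi/2]$, we have $\cos y \ge 0$, so $\cos y = +\sqrt{1-\sin^2 y} = \sqrt{1-x^2}$, giving $f'(x) = \frac{1}{\sqrt{1-x^2}}$; the stated interval is exactly what justifies taking the positive square root.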
# Question Paper Solutions - Geometry 2013 - 2014 - S.S.C - 10th Maharashtra State Board (MSBSHSE)

Subject: Geometry
Year: 2013 - 2014 (March)
Marks: 40

[5] 1 | Solve any five sub-questions:

[1] 1.1 In the following figure RP:PK = 3:2, then find the value of A(ΔTRP):A(ΔTPK). Chapter: [1] Similarity; Concept: Properties of Ratios of Areas of Two Triangles

[1] 1.2 If two circles with radii 8 cm and 3 cm, respectively, touch internally, then find the distance between their centres. Chapter: [2] Circle; Concept: Touching Circles

[1] 1.3 If the angle θ = -60°, find the value of sin θ. Chapter: [5] Trigonometry; Concept: Trigonometric Ratios of Complementary Angles

[1] 1.4 Find the slope of the line passing through the points A(2,3) and B(4,7). Chapter: [2.07] Co-ordinate Geometry; Concept: Slope of a Line

[1] 1.5 The radius of a circle is 7 cm. Find the circumference of the circle. Chapter: [6] Mensuration; Concept: Problems Based on Areas and Perimeter or Circumference of Circle, Sector and Segment of a Circle

[1] 1.6 If the sides of a triangle are 6 cm, 8 cm and 10 cm, respectively, then determine whether the triangle is a right-angled triangle or not. Chapter: [1] Similarity; Concept: Pythagoras Theorem

[8] 2 | Solve any four sub-questions:

[2] 2.1 In the figure given below, ray PT is the bisector of ∠QPR. If PQ = 5.6 cm, QT = 4 cm and TR = 5 cm, find the value of x. Chapter: [3.01] Triangles; Concept: Similarity of Triangles

[2] 2.2 In the following figure, Q is the centre of the circle. PM and PN are tangents to the circle. If ∠MPN = 40°, find ∠MQN. Chapter: [2] Circle; Concept: Number of Tangents from a Point on a Circle

[2] 2.3 Write the equation 2x - 3y - 4 = 0 in the slope-intercept form. Hence, write the slope and y-intercept of the line. Chapter: [3] Co-ordinate Geometry; Concept: Intercepts Made by a Line

[2] 2.4 If cos θ = 1/√2, where θ is an acute angle, then find the value of sin θ. Chapter: [5] Trigonometry; Concept: Trigonometric Ratios of Complementary Angles

[2] 2.5 If (4,-3) is a point on the line AB and the slope of the line is (-2), write the equation of the line AB. Chapter: [3] Co-ordinate Geometry; Concept: General Equation of a Line

[2] 2.6 Draw a tangent at any point 'P' on the circle of radius 3.5 cm and centre O. Chapter: [4] Geometric Constructions; Concept: Construction of Tangent to the Circle from the Point on the Circle

[9] 3 | Solve any three sub-questions:

[3] 3.1 In a triangle ABC, line l || side BC and line l intersects sides AB and AC in points P and Q, respectively. Prove that: AP/BP = AQ/QC. Chapter: [3.01] Triangles; Concept: Similarity of Triangles

[3] 3.2 In the figure, ΔABC is an isosceles triangle with perimeter 44 cm. The base BC is of length 12 cm. Side AB and side AC are congruent. A circle touches the three sides as shown in the figure below. Find the length of the tangent segment from A to the circle. Chapter: [6] Mensuration; Concept: Problems Based on Areas and Perimeter or Circumference of Circle, Sector and Segment of a Circle

[3] 3.3 Draw tangents to the circle with centre 'C' and radius 3.6 cm, from a point B at a distance of 7.2 cm from the centre of the circle. Chapter: [3.03] Constructions; Concept: Construction of Tangents to a Circle

[3] 3.4 Prove that: sec²θ + cosec²θ = sec²θ × cosec²θ. Chapter: [5] Trigonometry; Concept: Trigonometric Identities

[3] 3.5 Write the equation of each of the following lines:
1. The x-axis and the y-axis.
2. The line passing through the origin and the point (-3, 5).
3. The line passing through the point (-3, 4) and parallel to the X-axis.
Chapter: [3] Co-ordinate Geometry; Concept: General Equation of a Line

[8] 4 | Solve any two sub-questions:

[4] 4.1 From the top of a lighthouse, an observer looks at a ship and finds the angle of depression to be 60°. If the height of the lighthouse is 90 metres, then find how far the ship is from the lighthouse. (√3 = 1.73) Chapter: [4.03] Heights and Distances; Concept: Heights and Distances

[4] 4.2 Prove that "the opposite angles of a cyclic quadrilateral are supplementary". Chapter: [3.04] Circles; Concept: Cyclic Properties

[4] 4.3 The sum of the length, breadth and height of a cuboid is 38 cm and the length of its diagonal is 22 cm. Find the total surface area of the cuboid. Chapter: [7.02] Surface Areas and Volumes; Concept: Surface Area of a Combination of Solids

[10] 5 | Solve any two sub-questions:

[5] 5.1 In triangle ABC, ∠C = 90°. Let BC = a, CA = b, AB = c and let 'p' be the length of the perpendicular from 'C' on AB. Prove that:
1. cp = ab
2. 1/p² = 1/a² + 1/b²
Chapter: [1] Similarity; Concept: Pythagoras Theorem

[5] 5.2 Construct the circumcircle and incircle of an equilateral triangle ABC with side 6 cm and centre O. Find the ratio of the radii of the circumcircle and incircle. Chapter: [4] Geometric Constructions; Concept: Division of a Line Segment

[5] 5.3 There are three stair-steps as shown in the figure below. Each stair-step has width 25 cm, height 12 cm and length 50 cm. How many bricks have been used in it, if each brick is 12.5 cm x 6.25 cm x 4 cm? Chapter: [4.03] Heights and Distances; Concept: Heights and Distances
# Multiple PDFs with page group included in a single page warning

I updated TeX Live to the Ubuntu Quantal version (2012.20120611-4) and I suddenly got this warning:

PDF inclusion: multiple pdfs with page group included in a single page

This is a minimal example for which I get the warning:

\documentclass{book}
\usepackage{graphicx}
\begin{document}
\includegraphics{image1}
\includegraphics{image2}
\end{document}

Both images have been produced by the export PDF feature of Inkscape and contain simple line drawings (no fancy stuff). I have been looking on the Internet, but only found others with this problem and did not find any solutions:

• In the LaTeX user group they did not seem to understand/recognise the problem, and told the OP to go to the MiKTeX groups, but it is not a MiKTeX-specific problem as it is also happening with TeX Live and other distributions.
• At gmane.comp.tex.pdftex they were looking into the use (and versions) of MS Office products. Also not the cause, as I am not using MS Office to produce PDFs.

During my search I found the pdfTeX code (pdftoepdf.cc) that emits this warning; maybe it is of some help in understanding what is happening?

if (pdfpagegroupval == 0) {
    // another pdf with page group was included earlier on the same page;
    // copy the Group entry as is
    pdftex_warn("PDF inclusion: multiple pdfs with page group included in a single page");
    pdf_newline();
    pdf_puts("/Group ");
    copyObject(&dictObj);
} else {
    // write Group dict as a separate object, since the Page dict also refers to it
    pageDict->lookup((char *) "Group", &dictObj);
    if (!dictObj->isDict())
        pdftex_fail("PDF inclusion: /Group dict missing");
    writeSepGroup = true;
    initDictFromDict(groupDict, page->getGroup());
    pdf_printf("/Group %d 0 R\n", pdfpagegroupval);
}

Does anyone have an idea what is happening, whether it is serious and how I could get rid of these warnings?

- "I have been looking on the Internet, but only found others with this problem." Some links could be useful. – lockstep Oct 11 '12 at 8:40
- @lockstep sorry, I forgot. I have updated my question showing two of these discussions going nowhere. – Veger Oct 11 '12 at 9:07

The problem is also reported in a German forum, mrunix.de. It might be a bug in the TeX distribution (pdftex). The problem happens only when you include multiple PDF pages, created in a specific manner (e.g. by MS Office products), in a single page.

Solution: convert the PDF files into PS and then back to PDF using Ghostscript, and the warning will go away (pdf2ps -> ps2pdf). This conversion probably removes the "page group" information from the PDF files. (Caveat: this re-renders your PDF and some text might not be selectable or searchable any more.)

Editing the colorspace of the PDF files with Ghostscript also resolves the issue (if there are not multiple pages in the PDF file):

gs -o image.pdf -sDEVICE=pdfwrite -dColorConversionStrategy=/sRGB -dProcessColorModel=/DeviceRGB image.pdf

CMYK conversion if RGB does not work for you:

gs -o image.pdf -sDEVICE=pdfwrite -dColorConversionStrategy=/CMYK -dProcessColorModel=/DeviceCMYK image.pdf

P.S. Some programs generate "page group"s in PDF files, for example when you impose different images/objects in Illustrator or Inkscape. It seems that pdftex is unable to handle multiple page groups in a single output page. The reason might be that each page group specifies a different color space or transparency space.
- Minor disadvantage of pdf2ps -> ps2pdf: the text in my PDF figures is not text anymore (i.e. it is not selectable), so I need to go and find another solution to remove the 'page groups'. – Veger Oct 17 '12 at 13:07
- Your ghostscript conversion method works better! Thanks again. – Veger Oct 18 '12 at 6:58
- Your gs command entirely screwed up my PDF images. – Dominique May 2 '13 at 20:20
- @Dominique, what happened to your images? The command works for me. You may consider converting to another colorspace if RGB does not work for you. – Aydin May 7 '13 at 10:38
- Converting to .ps will rasterise anything with transparency, so not an option for many .pdfs. – Chris H Apr 2 at 15:01

PDF has a feature called "Page Groups" (PDF Reference, section 11.4.7). These describe transparency effects between top-level objects on one page.

When pdfTeX (or LuaTeX or XeTeX) includes a page from a PDF, it converts all pages into "Form XObjects" (section 8.10.1). pdfTeX also converts the Page Groups into /Group entries of the XObjects. The problem now is that Adobe products also need a /Group entry (whose content should not matter) in the /Page object which contains these XObjects to correctly render transparency (this is just needed to select the right rendering engine; the transparency information for the included pages should be taken from these included pages). pdfTeX will either use the first /Group it encounters when including PDFs or synthesize one when including PNGs with transparency.

The warning is triggered when multiple Page Groups are encountered on one page (since the engine will then use the first one encountered and this may not be the "correct" one) and can probably be ignored. Of course this should be described somewhere in the pdfTeX documentation...

- Thanks for your extensive explanation, it clarifies a lot! – Veger Oct 17 '12 at 13:08

Instead of using pdf2ps or ghostscript (which both introduced some errors in my pdfs, e.g. missing lines), I just removed the Group information from the PDF with

sed -i 's/\/Group.*R//g' file.pdf

I don't know if this could introduce any problems, but in my case, with only single-paged libreoffice-exported images, it works well.

- Welcome to TeX.SX! – Claudio Fiandrino Jan 16 at 15:08
- Thank you! I edited the regex to include the 'R', which happens to appear on those useless page groups, but not in real groups - at least in libreoffice PDFs. If someone would look further into this, I'm sure a method can be found which securely deletes only the page group information in all PDFs. – user44252 Jan 17 at 8:00
- I have the same problem but only /Group >> followed by a short section, so this regex cannot help. – math Mar 24 at 13:01
- Sorry, this is dangerous advice. It would be nice if you could just delete text from a PDF file, but you can't -- there are internal pointers that get corrupted, because they depend on the exact location of the various elements in the file. So this answer is best deleted. – Silvio Levy 2 days ago

Additional info / workaround for MS Office users: I have been using pdfLaTeX with PDFs generated from Visio for years. I just reinstalled my PC and then I got the warning - but only for NEWLY saved PDFs, not for the old ones. Therefore I looked for PDF options in Visio: if you tell Visio to generate PDF/A compatible PDFs, the warning will disappear.

Another tip, for Adobe Illustrator users: select "Acrobat 4 (PDF 1.3)" Compatibility when saving the PDF. Then the warning will disappear.
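As a rough diagnostic before choosing one of the conversion workarounds above, you can check which of your figure PDFs actually carry a page group. The sketch below is my own and simply scans the raw bytes for the /Group key; note that it can miss groups stored inside compressed object streams (PDF 1.5+), so treat a negative result with some caution.

```python
import glob

# Report which figure PDFs contain a "/Group" key (uncompressed objects only).
for path in sorted(glob.glob("*.pdf")):
    with open(path, "rb") as f:
        data = f.read()
    hits = data.count(b"/Group")
    print(f"{path}: {hits} occurrence(s) of /Group")
```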
Following the pdf2ps -> ps2pdf advice from a previous answer, this is how I solved the issue. I browsed to the folder where my PDF images are located, used find to get all filenames, performed both transformations and deleted the temporary PS file. In short:

for f in $(find . -type f -name "*.pdf"); do echo $f; pdf2ps $f ${f%.*}.ps; ps2pdf ${f%.*}.ps $f; rm -f ${f%.*}.ps; done

Here is what I found: somewhere at the beginning, the PDFs contain a group which defines the contents of the page. For a Ghostscript-generated PDF it looks like this:

5 0 obj
<</Type/Page/MediaBox [0 0 172.8 201.6]
/Rotate 0/Parent 3 0 R
/Group 4 0 R
/Resources<</ProcSet[/PDF /Text]
/ExtGState 104 0 R
/XObject 105 0 R
/Font 106 0 R
>>
/Contents 6 0 R
>>

The line /Group 4 0 R refers to a group object which looks like this:

4 0 obj
<</Type/Group
/S/Transparency
/CS/DeviceRGB>>endobj

This object defines the page transparency and color space. To get rid of the pdftex warning it is enough to remove the reference to this group, i.e. remove the line /Group 4 0 R with an editor that won't hurt the binary data (like vi), or use sed:

sed -i".bak" -e "/^\/Group 4 0 R$/d" "filename.pdf"

For PDFs generated by matplotlib's backend_pdf the group won't be in a separate object. The main object looks like this:

10 0 obj
<<
/Group << /CS /DeviceRGB /S /Transparency /Type /Group >>
/Parent 2 0 R
/MediaBox [ 0 0 172.8 201.6 ]
/Resources 8 0 R
/Type /Page
/Contents 9 0 R
>>
endobj

Here /Group << /CS /DeviceRGB /S /Transparency /Type /Group >> has to be removed.

P.S.: The Ghostscript method unfortunately did not work for me; the error still appears. Only when I forced PDF version 1.3 in Ghostscript would it go away, but that rasterizes the image.

- But it's not an error but a warning. Did you read my answer? – Martin Schröder Jan 17 at 15:39
- Welcome to TeX.SX! You can have a look at our starter guide to familiarize yourself further with our format. – Martin Schröder Jan 17 at 15:40
- Yes, I read that, but by the time you have about 70 warnings it is simply annoying and I couldn't see real warnings and errors anymore. Of course this can only be a workaround until a proper solution, probably in pdftex, is found. – Velocipede Berserker Jan 17 at 15:43
- And thanks for the downvote, any explanations? :-/ – Velocipede Berserker Jan 17 at 15:58
- Your answer does not answer the question - only the last part, and since the warning is harmless, it's not needed and will probably do more harm than good. – Martin Schröder Jan 17 at 19:05

I had the same problem when using SVG files. To solve my problem and to automate the process under the Microsoft Windows OS I did the following:

1) Add the full folder paths of the Inkscape and Ghostscript command line executables to the environment variable 'PATH';
2) You may need to restart the computer, in order to update the environment variable 'PATH' with your changes;
3) Edit a batch file with the following content and name it 'svg2pdf.bat'.
Save it in the main folder of your TeXnicCenter project:

@echo off
SET FileName=%1
::Convert forward to backward slash
SET FileName=%FileName:/=\%
::generate pdf image from svg
inkscape -z -D --file=%FileName%.svg --export-pdf=%FileName%.pdf
::remove the multiple page problem
gswin32c -sDEVICE=pdfwrite -o %FileName%_aux.pdf %FileName%.pdf
::save the resultant pdf file
move /y %FileName%_aux.pdf %FileName%.pdf

4) In the preamble of the tex file, define the LaTeX commands:

\newcommand{\executeiffilenewer}[3]{%
\ifnum\pdfstrcmp{\pdffilemoddate{#1}}%
{\pdffilemoddate{#2}}>0%
{\immediate\write18{#3}}%
\fi%
}
\newcommand{\includesvgpdf}[2][\columnwidth]{%
\includegraphics[#1]{#2.pdf}%
}

This new LaTeX command \includesvgpdf accepts 2 arguments: the first one must be "width=ToDefine", where "ToDefine" will be the width, and the second argument must be the SVG file name without the extension. Instead of

\includegraphics[width=0.5\textwidth]{SVG_FILE_NAME}

use

\includesvgpdf[width=0.5\textwidth]{SVG_FILE_NAME}

Hoping it also works for you.

- Welcome to TeX.SX! – Heiko Oberdiek Jan 31 at 14:59
- For me the gswin32c -sDEVICE=pdfwrite -o %FileName%_aux.pdf %FileName%.pdf won't remove the warning. – math Mar 24 at 13:12

Unfortunately the issue is not always as harmless as stated in previous comments. I have a file (about 50 pages) with about 70 little sketches (typically 15 mm x 8 mm, all of them images.pdf), up to 8 sketches on one page. When the warning "multiple pdfs with page group included in a single page" appeared, 5 pages came out completely black in the output PDF file, with Acrobat Reader returning an error message. The remaining 45 visible pages were displayed correctly, including the sketches. After shifting some sketches to other places, other pages turned black, with no clear relation between particular images and blackened pages discernible. After I opened all the images with GSView and applied File > Convert > pdfwrite with Properties > Compatibility/Level 1.2, both the warnings and the black pages disappeared, and everything is fine now.

Remark: The problem turned up when I started with a new TeX Live installation under Windows 8.1 on a new computer. There was no problem with the self-same images.pdf under an older TeX Live installation on my old Windows XP computer. When I viewed the damaged PDF files with Acrobat Reader on the old computer, they were damaged as well. When I viewed the PDF output file from the old computer with Acrobat Reader on the new computer, everything was fine. Thus the new TeX Live obviously handles images.pdf differently from the old one.

I had the same problem when using graphs exported from OriginLab 9.0 and compiling with PDFTeXify (WinEdt 8.1 + MiKTeX). However, after changing the export options under PDF/Image Settings/PDF Options/Fonts/Outline Mode to Adobe Type 3, I do not receive any more warnings. So, thanks guys for clarifying this issue.

I had the same issue with exporting PDFs from Inkscape. The workaround was to export to PS and then convert PS to PDF, e.g. with Adobe Distiller or other tools.

- This is rather a comment than an answer. – Christian Hupfer Jul 3 at 8:33
Crashes from .exe file (VSC++ 2005)

So my program runs fine from the IDE when I click the "start debugging" button in both debug and release modes (I know it's silly to debug the release version). However, when I build the project in release mode, then copy the .exe from the release folder to the project folder (so it gets all the relevant resource files) and run it, I get a crash soon after startup. The crash actually doesn't even finish its startup routine and enter the main game loop. I have tried to debug it, but I can't even look at the assembly. The only info I have gotten about the crash is that it exits with the code 0xc000000d at address 0x00000000004510e0 (I bet that address is useless to help diagnose the problem). My guess is that this is happening from some uninitialized variable or something similar, but my question is: how do I track it down? Or is this some other problem?

If you turn your warning level up, the compiler should be smart enough to catch many of your potential uninitialized variables. Also, you could look at the stack to see which function you crashed in.

You can turn on debug info in the release build so that you get some info when it crashes. It may step weirdly as things get rearranged by the optimizer.

You should be able to attach a debugger to the executable after it's crashed. You'll probably just have to tell it where the symbols (.pdb's) and source code are located, and then it will show you in the code where it's crashing.

Thanks for the quick and useful answers. It was this chunk here from some code almost a year old... wonder why it wasn't crashing things before:

strncpy_s(stFileName, sizeof(char)*(strlen(stFileName)-1), stFile, _TRUNCATE);

So I changed stFileName to a std::string, like it should have always been, and everything is just peachy now. Thanks again. -vs

You can also have your program generate a crash dump file when it crashes. Afterwards, you can take this file and load it into the debugger.

Quote: Original post by Red Ant
You can also have your program generate a crash dump file when it crashes. Afterwards, you can take this file and load it into the debugger.

Ah, that looks really useful actually. Thanks.
# Cone Beam Computed Tomography for detection of Bisphosphonate-related osteonecrosis of the jaw: Comparison of quantitative and qualitative image parameters

Koral, E M. Cone Beam Computed Tomography for detection of Bisphosphonate-related osteonecrosis of the jaw: Comparison of quantitative and qualitative image parameters. 2013, University of Zurich, Faculty of Medicine.
# Smallest known counterexamples to Hedetniemi's conjecture

In 2019, Shitov exhibited a counterexample (Ann. Math. 190(2) (2019) pp. 663-667) to Hedetniemi's conjecture,

$$\chi(G \times H)=\min(\chi(G),\chi(H))$$

where $$\chi(G)$$ is the chromatic number of the undirected finite graph $$G$$. Shitov's counterexample is estimated to have $$|V(G)|\approx4^{100}$$ and $$|V(H)|\approx4^{10000}$$.

Has there been some effort or progress to reduce the size of the counterexample?

## 1 Answer

Yes, Xuding Zhu did this in Relatively small counterexamples to Hedetniemi's conjecture (J. Comb. Theory B 146 (2021) pp. 141-150, doi:10.1016/j.jctb.2020.09.005, arXiv:2004.09028), where the sizes of the graphs are $$3403$$ and $$10501$$.

Marcin Wrochna has a preprint, Smaller counterexamples to Hedetniemi's conjecture, arXiv:2012.13558, that brings the sizes down to $$4686$$ and $$30$$ (as well as the chromatic number down to $$5$$).
## ROCK, PAPER, SNIFFERS by Jaimie Vernon

Posted in music, Opinion, Review on November 2, 2020 by segarini

We were promised a paperless society by the technology gurus of the 1980s. It's been over 30 years since IBM launched the concept of the desktop computer. It was going to revolutionize personal communication – even before the advent of the internet – and they were right. But that vision was gratuitously optimistic. I worked for the company that built the wiring systems for these beasts…back when they were the size of a gas furnace and ran on steam power and 47″ floppy discs containing 64k of memory. We were contracted to build about 150 wiring systems a week for their machines. I went to head office in Don Mills where they had motorized robotic pool tables shuttling CPUs and 70 lb. Scare-o-Vision cathode driven monitors through a warehouse larger than Cape Canaveral. They were moving 10 of these units at a time…and over the course of a year they were selling fewer than 50,000 of these.

## JAIMIE VERNON – IT DICES, IT SLICES, YOU CAN WEAR IT AS A PARTY HAT

Posted in Opinion, Review on April 30, 2016 by segarini

The world of Boomer pop culture continues to die off in epidemic proportions. It's now officially becoming depressing. Aside from the shocking – and still reverberating – passing of Prince we've lost Canadian actor/singer Don Francks, actor James Carroll, beloved 'The Flying Nun' matriarch Madelaine Sherwood (also Canadian) and now the master of the original infomercial Philip Kives – the King of K-Tel – at the age of 87.

## JAIMIE VERNON – ROCK, PAPER, SNIFFERS

Posted in Opinion on March 22, 2014 by segarini
# User talk:Brion VIBBER

I have declared talk page bankruptcy! All old messages deleted. :) --brion (talk) 21:00, 16 June 2009 (UTC)

## How to get in touch with me

Feel free to leave me messages here about articles and such on Wikipedia, but if you need to prod me about a tech issue I might not see your message here. You'll get a speedier response if you track me down elsewhere...

• Most software issues should get added to our bug tracker. Even if you grab ahold of me another way, it's going to be easier for me to keep track of it if it's in here!
• IRC chat - you'll often find me in #mediawiki or #wikimedia-tech during work hours
• direct e-mail... if you're really sure :)

## Images

Hi Brion VIBBER, I wanna be an uploader. What should I do to become one? -- ♫Greatorangepumpkin♫ T 09:59, 7 June 2009 (UTC)

Wait until your account is autoconfirmed. --brion (talk) 21:00, 16 June 2009 (UTC)

## Random observation

Is it just me, or do server problems occur in bundles? Seems like we've had server trouble every day since MJ's death. Ten Pound Hammer, his otters and a clue-bat • (Many otters • One bat • One hammer) 02:11, 3 July 2009 (UTC)

• My random observation is that you, sir, are now a double talk page bankrupt! ;> Just FYI I asked Lar to take care of that "uploader" userrights issue I queried you about a while back [1]. He suggested I drop you a courtesy note. best regards, –xenotalk 02:13, 4 July 2009 (UTC)
• Just to follow up, I removed the permission (Heather hasn't been editing since November 2008), but if you disagree with that feel free to ask me to put it back or do so yourself. ++Lar: t/c 00:50, 5 July 2009 (UTC)

It sure does seem like that sometimes. :) --brion (talk) 18:20, 6 July 2009 (UTC)

## Russian Spelling for Your Name

Dear Brion, As far as I remember it was you yourself who spelled your name as "Брион Вибур". However, it looks like "Брайон Виббер" would be more correct. Could you please advise? Thanks! Dr Bug (Vladimir V. Medeyko) 19:54, 3 July 2009 (UTC)

## ref is too imprecise

I tried to use the ref element to integrate footnotes. However, there seems to be no way to indicate whether the footnote refers to the sentence in front of the ref element or to the whole paragraph in front of it. For example:

This is a first sentence. This is a second sentence.<ref>Miller (2009), p. 7.</ref>

Instead, it should be as follows:

This is a first sentence. <ref text="Miller (2009), p. 7.">This is a second sentence.</ref>

Now it is clear that Miller is the basis for the second sentence only. Is there any possibility to indicate that? If not, it should be programmed, because it opens new possibilities, e.g. mouse-over effects when hovering the mouse pointer over a sentence. Then a pop-up box could show "Miller (2009), p. 7". Furthermore, it is much more helpful for future editors to know if sentence 1 already has a reference and can be relied on. Bugzilla didn't help to find a solution for this problem so far. Can you help to find a solution? 78.53.37.113 (talk) 10:28, 5 July 2009 (UTC)

As the top of the talk page says, you probably want to file a bug in Wikimedia's bug tracker: https://bugzilla.wikimedia.org --MZMcBride (talk) 03:14, 6 July 2009 (UTC)

I already did that, but the topic is not discussed there anymore, although it is a very necessary bug. 78.53.43.218 (talk) 21:07, 6 July 2009 (UTC)

## Substituting surname

Any idea how you can {{subst:}} a surname in the page title to DEFAULTSORT-sort the categories?
It's just that I have a large batch of German politicians to transwiki and I want to do it more quickly. So basically, when you create the page it automatically places e.g. Fritz Baier as Baier, Fritz in the categories. If not, I gather there is a bot that can default-sort the categories by surname and fix it afterwards? I'd imagine it is something like {{subst:PAGENAME}} but with a little programming to read the last word of the title and place it first. It would save a great deal of time anyway. Dr. Blofeld White cat 12:35, 12 July 2009 (UTC)

Brion -- Dr. Blofeld asked me this question, and I suggested that it could be done, except that it appears that the string functions are disabled. Is that the case, and if so, why? My guess is that they would suck up too much CPU power on Wikimedia's end... Thanks, Mikaey, Devil's advocate 01:23, 30 July 2009 (UTC)

## Quick bar

Hi. I'm left-handed and I've noticed that only Cologne Blue and Classic (or one other) have the option to change the quickbar from left to right, and fixed or movable. Given that I mostly use Modern and occasionally Monobook, would it be possible to add the quickbar option for Modern so the navigation bar can be moved to the right-hand side? I think you should give people this option for all skins, given that some people are left-handed and might find it more natural to have it on the other side. I also think there should be a shrinkable option on the nav bars, like we have with big templates, so we could conveniently shrink the task bar while reading; at least having the option to do this would ease usability, I think. Also, for the task bar on Modern, does it have to be so wide? There is at least a centimetre gap; it could easily be trimmed by 1-2 centimetres. Dr. Blofeld White cat 13:41, 19 July 2009 (UTC)

I'll pass it on to the usability team working on the skin updates, thanks for the feedback! Note we're considering actually using sidebars on both sides for some extra navigation tools, but that's still a ways away. --brion (talk) 18:17, 19 July 2009 (UTC)

## Renames

So, you don't mind if we try a quarter-million rename, then, since the limit is not invoked? Just triple checking. -- Avi (talk) 00:02, 28 July 2009 (UTC)

test test... test test...

Welcome and thank you for experimenting with Wikipedia. Your test on the page User talk:Brion VIBBER worked, and it has been reverted or removed. Please take a look at the welcome page to learn more about contributing to this encyclopedia. If you would like to experiment further, please use the sandbox instead. Thank you.

## Lossless image compression and website optimization

Have a look at Yahoo's Smush.it. It can losslessly compress images (that are greater than 1 kb)... it can save you guys quite a few bytes in bandwidth. Looking at the MediaWiki SVN branch, I can see that most images can be losslessly compressed. Is there any reason you haven't compressed these images... or do you simply need some help? ^.^ Smallman12q (talk) 18:57, 2 September 2009 (UTC)

1.13 is pretty old; we've been more aggressively recompressing things in more recent work. --brion (talk) 20:38, 4 September 2009 (UTC)

## File copyright problem with File:Fourth_test_file_investigating_bug_-_do_not_delete.png

Thank you for uploading File:Fourth_test_file_investigating_bug_-_do_not_delete.png. However, it currently is missing information on its copyright status. Wikipedia takes copyright very seriously. It may be deleted soon, unless we can determine the license and the source of the file.
If you know this information, then you can add a copyright tag to the image description page. If you have uploaded other files, consider checking that you have specified their license and tagged them, too. You can find a list of files you have uploaded by following this link. If you have any questions, please feel free to ask them at the media copyright questions page. Thanks again for your cooperation. Chris G Bot (talk) 00:19, 29 September 2009 (UTC)

## Pie for you!

Have a Pie! You are hereby awarded ONE PIE for all you've done for Wikipedia and the WMF. Best of luck in your future endeavors! ArakunemTalk 14:07, 30 September 2009 (UTC)

Nom nom nom -- thanks! --brion (talk) 17:09, 30 September 2009 (UTC)

## Lossless image compression/conversion

Hi, I was wondering if there are any plans to compress the images in the MediaWiki trunk? Some images can be losslessly compressed by 20%, such as document.png to File:Document.png. Smallman12q (talk) —Preceding undated comment added 15:52, 3 October 2009 (UTC).

## Impact on fr.wp of renaming files on Commons

Resolved

Good evening. It seems that on the French-language Wikipedia we have the same problem that the English-language Wikipedia had. For example: But it is curious to see that [4] works. Thanks for your help. Regards, Otourly (talk) 18:38, 6 October 2009 (UTC)

Moreover, [5] has become inconsistent. Otourly (talk) 18:42, 6 October 2009 (UTC)

Yeah, the Commons file redirects seem to be working on English sites (Wikipedia, Wikinews, etc) but not in other languages (checked French and German). Might be something incorrectly using the local namespace name in queries or cache lookups... --brion (talk) 19:21, 6 October 2009 (UTC)

Sticking this in bugzilla:21026 to track -- hopefully an easy fix. --brion (talk) 19:27, 6 October 2009 (UTC)

...and fix is deployed. Thanks for the bug report! --brion (talk) 20:29, 6 October 2009 (UTC)

and thank you for [6] Otourly (talk) 20:42, 6 October 2009 (UTC)

## Vandalism bug

If you can't see the vandalism on Maya civilization then please log out, delete cookies, and use a Google search to access the article. Also see the talk page of the article. How did this vandalism go unnoticed for such a long time? 117.98.79.156 (talk) 15:29, 23 November 2009 (UTC)

## Sequence of files (bug test)

There is a sequence of file redirects with (do not delete) in the file name, which date from September. Is it now OK to delete these redirects? This, that, and the other (talk) 06:19, 26 November 2009 (UTC)

## Merry Christmas!

December21st2012Freak Happy Holidays! 00:15, 24 December 2009 (UTC)

## Bureaucrat discussion for Juliancolton RfB

A bureaucrat discussion has been opened in order to determine the consensus in this request for adminship. Please come participate. ···日本穣? · 投稿 · Talk to Nihonjoe 02:01, 2 January 2010 (UTC)

## Categories

Hi, Brion, and Happy New Year (and Decade) to you! I have brought this to Tim Starling's attention, too. Lately, whenever a new edit is made to any article that has categories, the category box at the bottom of the page comes into contact (and sometimes conflict) with either a template or some other nearby text (see The Beatles for example). This has been happening often since around early-mid December, maybe something to do with a change in the article's parameters. So far, as far as I know, nothing has come of this situation. Could you please have a look into this, and possibly resolve this problem once and for all? Thanks Brion!
Best, --Discographer (talk) 13:53, 3 January 2010 (UTC)

## SF Meetup #11

In the area? You're invited to San Francisco Meetup # 11
Date: Saturday, February 6th, 2010
Time: 15:00 (3PM)
Place: WMFoundation offices
prev: Meetup 10 - next: Meetup 12
This is posted to the groups by request. Please sign up on the Invite list for future announcements. Thanks. --ShakataGaNai ^_^ 23:45, 4 February 2010 (UTC)

Dear administrator, one of the users of the Persian wiki has insulted me on my English talk page (in Persian). How can I ask for protection of my user page, talk page and all subpages against that IP address? I have some valuable photos on my pages and I don't want to let him/her damage them. Regards Pournick (talk) 00:55, 16 February 2010 (UTC)

## Add a "public interest" clause to Oversight

A proposal to add a "public interest" clause to Wikipedia:Oversight has started at Wikipedia_talk:Oversight#Proposal_for_new_.27public_interest.27_clause. 10:40, 17 February 2010 (UTC)

## Bug 20246

Hi. Bug 20246 ("Install Extension:Transliterator on fr and en.wiktionary") has been sitting around since August, and you posted a comment on it saying "Assigning to myself for review." quite a long time ago. Is this going to get done any time soon? Are you even still doing work on bugs since you stepped down from being WMF CTO? If not, where does this go from here? Thanks in advance. --Yair rand (talk) 00:44, 9 April 2010 (UTC)

## Major flaw in New Features

Hi Brion, I've had to go back to Take Me Back from New Features because New Features does not accept the wiki "dot" which separates titles when editing templates. Please try it yourself, and you will find I'm (unfortunately) correct. Best, --Discographer (talk) 22:03, 13 May 2010 (UTC)

## Usability

Not sure how quickly you would see this, so just FYI. Thanks  7  00:52, 14 May 2010 (UTC)

## Saluton!

Bona vin tagon! :D - UtherSRG (talk) 14:47, 1 June 2010 (UTC)

Dankegon! :) --brion (talk) 16:36, 1 June 2010 (UTC)

## Quoted on, um, BRION

Hey Brion. Just a quick note to say that I've chopped up your recent comment on phoebe's blog to make a nice little item (well, I think so anyway) for next week's BRION report in the Signpost. Inevitably, this meant shortening areas slightly, and I'd be grateful if you might check to make sure I haven't misrepresented your thoughts or if you have any suggestions about their presentation. Thanks, - Jarry1250 [Humorous? Discuss.] 18:54, 1 July 2010 (UTC)

## Login unification - odd snag with Japanese WP

Brion: Sorry to bother you about this (please redirect me if appropriate), but my login unification was unsuccessful for reasons I don't understand, and the Help page on this suggests contacting a bureaucrat or steward. The holdout is the Japanese Wikipedia (I'm confident that I never logged in there), where there's a userpage that looks like it was copied from an old version of my English Wikipedia userpage. My English WP password doesn't work there. When I click the "send me a new password" button, I don't get anything in my email inbox. Any suggestions on how I can finally unify my login worldwide (thus removing a nagging reminder in my Preferences)? -- Scray (talk) 05:29, 4 July 2010 (UTC)

## I herd u can

Hi. I heard you are the former "Chief Technical Officer of Wikimedia" and you are good at wikicode, so I am asking you a question that has been [http://www.mediawiki.org/wiki/Thread:Project:Support_desk/Transcluding_onlyinclude_and_includeonly_tags unanswered on mediawiki.org] for quite a long time. I hope you know the answer.
To make it easier to see, i am posting the problem here too: short question Is there a way to let a template include <includeonly><onlyinclude>{{{date}}}</onlyinclude></includeonly> when transcluded? longer explaination for question I'll try to explain it with this table: Page name: Template(in this case: "template:update") Page between(in this case: "update page") Final page(in this case: "item page") use: add a notice to "update page" show information about the update, and when transcluded show the date of it only Only have the text between the onlyinclude tags included This is because at the RuneScape Wiki i am trying to automate the update date. Because there is always a link to the update page on the item page, and the update page has the date on it, i want the update page to have an additional note with the date entered(between onlyinclude tags) and i don't want it to appear on the update page itself(between includeonly tags). All update pages have the template:Update on it, with the parameter {{{date}}} so I want template:update to add <includeonly><onlyinclude>{{{date}}}</onlyinclude></includeonly> to the update page so that when the item page has {{#time:j F Y|{{Update:(updatename)}}}} on it, it shows the date in "dd month yy" automatically without needing to add that yourself. I hope you can help me. Joeytje50 (talk) 17:56, 8 December 2010 (UTC) ## Plans? I've read that you are a lead designer. I wonder whether there is anywhere we can see your design ideas for the Wikipedia, and possibly contribute to them? I didn't see anything about that on your user page (I aim to promote a sticky note system).-Tesseract2(talk) 19:50, 11 March 2011 (UTC) ## Unable to delete particular revision, blocked by interwiki shortcut Hi there. Your Commons page points here; though you might be interested in this weird glitch. Kind regards. Rehman 10:17, 29 March 2011 (UTC) ## hi! how come there's no article about you? you have been in the news [7]. 89.216.196.129 (talk) 13:32, 30 March 2011 (UTC) ## Problems with my Wikipedia name Hello. :) I cannot edit the page User:ℜepress/vector.css. I think the problem is my name. Quote: "Wikipedia does not have a user page with this exact name." Can you change my name to "Repress"? – ℜepress (talk) 14:12, 19 May 2011 (UTC) ## The Great American Wiknic Hi there! In the past, you've expressed an interest in local meetups of Wikipedians. Well, here's your chance! On Saturday, June 25, we'll be joining Wikipedians in cities all over the country for the first annual Great American Wiknic -- the picnic that anyone can edit! We'll meet up at a park in SF -- hopefully in the sun -- all other details are still in deliberation! If this sounds fun, please add your name to the list: Wikipedia:Meetup/San Francisco/Wiknic and add that page to your watchlist. (And of course, feel free to edit that page with your ideas, questions, etc.) I look forward to wiknicking with you! -Pete (talk) 00:28, 25 May 2011 (UTC) ## Your opinion would be appreciated As a member of WikiProject Countries, I'm seeking your opinion on a possible issue identified at List of sovereign states. If you have some spare moments, please contribute a comment at the Discussion of criteria. Best regards, Nightw 04:52, 20 June 2011 (UTC) ## A kitten for you! A funny kitten coming from JanPaul123 and demonstrate the WikiLove functionality (plus, you deserve a kitten). Hashar (talk) 20:29, 1 July 2011 (UTC) Yay kittens! 
--brion (talk) 20:41, 1 July 2011 (UTC) Feel free to clean it up or put it somewhere else :-) Hashar (talk) 06:18, 5 July 2011 (UTC) ## Orphaned non-free image File:Fifa world cup org.jpg Thanks for uploading File:Fifa world cup org.jpg. The image description page currently specifies that the image is non-free and may only be used on Wikipedia under a claim of fair use. However, the image is currently orphaned, meaning that it is not used in any articles on Wikipedia. If the image was previously in an article, please go to the article and see why it was removed. You may add it back if you think that that will be useful. However, please note that images for which a replacement could be created are not acceptable for use on Wikipedia (see our policy for non-free media). • I am a bot, and will therefore not be able to answer your questions. • I will remove the request for deletion if the file is used in an article once again. • If you receive this notice after the image is deleted, and you want to restore the image, click here to file an un-delete request. • To opt out of these bot messages, add {{bots|deny=DASHBot}} to your talk page. Thank you. DASHBot (talk) 06:07, 3 July 2011 (UTC) ## Reasons To Delete J1c3d (Y-DNA) 1. Articles that cannot possibly be attributed to reliable sources, including neologisms, original theories and conclusions, and articles that are themselves hoaxes (but not articles describing notable hoaxes) 2. Articles for which thorough attempts to find reliable sources to verify them have failed 3. Categories representing overcategorization JohnLloydScharf (talk) 02:31, 2 August 2011 (UTC) The hold on editing has been taken off without explanation, to my knowledge, as of this moment, without justification. JohnLloydScharf (talk) 00:42, 3 August 2011 (UTC) The one who took this off the edit hold did so without reading the talk page. JohnLloydScharf (talk) 01:17, 3 August 2011 (UTC) I refer to the article for J1c3d Y-DNA haplogroup as is indicated in the very first section of my User talk page. JohnLloydScharf (talk) 02:17, 3 August 2011 (UTC) ## User:Stinky New User Just saw this come across the user creation log and wanted to confirm that it is indeed you. Cheers! TNXMan 16:01, 4 August 2011 (UTC) Sure is! See bugzilla:30226 for the bug I found while creating it. ;) --brion (talk) 23:49, 4 August 2011 (UTC) ## Re: External link icon bug Hi Brion, I hope you don't mind the reply here. That would be the classic monobook skin. Hope this helps. :)  -- WikHead (talk) 18:50, 7 October 2011 (UTC) • Perhaps I should add, that when I view the raw HTML source of a page, I see no <img src=, or anything else that would indicate that an image is supposed to be loading into the blank space.  -- WikHead (talk) 19:11, 7 October 2011 (UTC) • Ok... you actually shouldn't see an <img> or anything as it's added in the CSS stylesheets -- so that's normal. :) I'll see if we can narrow down the bug checks on Monobook skin, thanks! --brion (talk) 20:59, 7 October 2011 (UTC) Great, thanks for the reply! If the new plan is to remove the icon, I really don't mind that so much. What's weird to me however, is seeing the (space) tail-padding before the punctuation. If the padding was removed, I'd be just as happy. :)  -- WikHead (talk) 21:09, 7 October 2011 (UTC) Hi. Can you upload this 1592 dictionary PDF (alternatively) to Commons:File:Hieronymus Megiser - Dictionarium quatuor linguarum.pdf? The problem is it's 125M size. 
--Sporti (talk) 09:26, 8 November 2011 (UTC) I can't -- I need to ask server people to do those kinds of uploads for me too. It's not a fun situation, and hopefully we'll get the chunked upload system going soon so we can all do these ourselves... --brion (talk) 19:36, 8 November 2011 (UTC) So does soon mean I should wait or file a bug report?--Sporti (talk) 16:12, 12 November 2011 (UTC) File a bug. --brion (talk) 19:50, 14 November 2011 (UTC) ## Request for uninvolved admin to review page deletion discussion Hello, I was wondering if you could lend a hand. I am currently in a discussion about a page deletion for the article Big Brother Australia 2012 and as an uninvolved administrator, I was wondering if you were able to review the discussion? Thank you. Bbmaniac (talk) 10:42, 18 November 2011 (UTC) I killed some of the ancient content on your user page. It could use a bit of sprucing, though. :-) --MZMcBride (talk) 21:08, 28 November 2011 (UTC) Thanks... I gotta spend more editing time on this wiki. :D --brion (talk) 21:11, 28 November 2011 (UTC) ## Thank you for your recent work with portals !!! Thanks so much for your help with portal coding, especially {{Related portals2}}. Much appreciated. ;) Cheers, — Cirt (talk) 06:03, 12 December 2011 (UTC) I agree. However, can you please fix {{Related portals2}} so that the contents float in the center of the section they are in instead of left aligning? I can't seem to figure out how to get it to do that. something somewhere in the code is preventing everything I try. Thanks! ···日本穣? · 投稿 · Talk to Nihonjoe · Join WP Japan! 08:27, 13 January 2012 (UTC) Ideally that would be nice! Not sure offhand how though; I think I tried something basic like setting text-align: center.... possibly using inline-block instead of floats would resolve this. --brion (talk) 18:36, 13 January 2012 (UTC) ## proper semantic constructs I'm sure you're busy. If you or one of the people you work with has time, we could use some input on improving the generated markup in things like navboxes. This is mostly about the new class=hlist in common.css (and bits of common.js). Things like {{flatlist}} and {{plainlist}} (and class=plainlist), too. And see Horizontal lists have got class in the Signpost. In a nutshell, I'm looking at this as something that will end-up folded into MediaWiki itself and made available to all projects and external wikis. Part of this may involve (wishful) tweaks to wiki-text syntax ;) Alarbus (talk) 01:40, 17 December 2011 (UTC) ## meta:Press kit/Wikimedia People/Officers/Brion Vibber You are invited to join the discussion at meta:Talk:Press kit/Wikimedia People/Officers/Brion Vibber#Incomplete. -- Trevj (talk) 16:17, 10 February 2012 (UTC) ## MSU Interview Dear Brion VIBBER, My name is Jonathan Obar user:Jaobar, I'm a professor in the College of Communication Arts and Sciences at Michigan State University and a Teaching Fellow with the Wikimedia Foundation's Education Program. This semester I've been running a little experiment at MSU, a class where we teach students about becoming Wikipedia administrators. Not a lot is known about your community, and our students (who are fascinated by wiki-culture by the way!) want to learn how you do what you do, and why you do it. A while back I proposed this idea (the class) to the communityHERE, where it was met mainly with positive feedback. Anyhow, I'd like my students to speak with a few administrators to get a sense of admin experiences, training, motivations, likes, dislikes, etc. 
We were wondering if you'd be interested in speaking with one of our students. So a few things about the interviews: • Interviews will last between 15 and 30 minutes. • Interviews can be conducted over skype (preferred), IRC or email. (You choose the form of communication based upon your comfort level, time, etc.) • All interviews will be completely anonymous, meaning that you (real name and/or pseudonym) will never be identified in any of our materials, unless you give the interviewer permission to do so. • All interviews will be completely voluntary. You are under no obligation to say yes to an interview, and can say no and stop or leave the interview at any time. • The entire interview process is being overseen by MSU's institutional review board (ethics review). This means that all questions have been approved by the university and all students have been trained how to conduct interviews ethically and properly. Bottom line is that we really need your help, and would really appreciate the opportunity to speak with you. If interested, please send me an email at [email protected] (to maintain anonymity) and I will add your name to my offline contact list. If you feel comfortable doing so, you can post your nameHERE instead. If you have questions or concerns at any time, feel free to email me at [email protected]. I will be more than happy to speak with you. Thanks in advance for your help. We have a lot to learn from you. Sincerely, Jonathan Obar --Jaobar (talk)23:23, 17 April 2012 (UTC) ## San Francisco Women's History Month Edit-a-Thon San Francisco Women's History Month Edit-a-Thon! Who should come? You should. Really. The San Francisco Women's History Month Edit-a-Thon will be held on Saturday, March 17, 2012 at the the Wikimedia Foundation offices in San Francisco! Participate in editing subjects about women's history and beyond! Workshops will also be hosted. New and experienced editors of any gender are welcome! We look forward to seeing you there! ## MathJax review I'm not sure you are checking for replies to your posts, so: Wikipedia_talk:WikiProject_Mathematics#mathJax_progress. Nageh (talk) 14:44, 14 March 2012 (UTC) ## A barnstar for you! The Original Barnstar Hey Brion! Thanks for your years of dedication and hard work in improving and making Wikipedia as best as ever! :) TheGeneralUser (talk) 23:15, 30 April 2012 (UTC) Thanks! :D --brion (talk) 23:28, 30 April 2012 (UTC) ## You're invited: San Francisco WikiWomen's Edit-a-Thon 2! San Francisco WikiWomen's Edit-a-Thon 2! You are invited! The San Francisco WikiWomen's Edit-a-Thon 2 will be held on Saturday, June 16, 2012 at the Wikimedia Foundation offices in San Francisco. Wikipedians of all experience levels are welcome to join us! This event will be specifically geared around encouraging women to learn how to edit and contribute to Wikipedia. Workshops on copy-editing, article creation, and sourcing will be hosted. Bring a friend! Come one, come all! EdwardsBot (talk) 23:26, 22 May 2012 (UTC) · Unsubscribe ## San Francisco Wiknic 2012 San Francisco Wiknic at Golden Gate Park You are invited to the second Great American Wikinic taking place in Golden Gate Park, in San Francisco, on Saturday, June 23, 2012. We're still looking for input on planning activities, and thematic overtones. List your add yourself to the attendees list, and edit the picnic as you like. 18:35, 21 May 2012 (UTC) If you would not like to receive future messages about meetups, please remove your name from Wikipedia:Meetup/San Francisco/Invite. 
## Today is the Day Dankon Brion, Sanon! ;) --FoeNyx (talk) 05:32, 1 June 2012 (UTC) ### My toast for you Happy Brion Vibber Day Hurray! it is the time for celebration... Mi deziras al vi feliĉan Brion Vibber Tago. Here is my toast for you...(Well.. I had an early dinner!) I hope you have a great day today... Yours Kindly VanischenuTM 11:01, 1 June 2012 (UTC) awwww thanks :) ## testing testing just a test — Preceding unsigned comment added by 84.131.73.17 (talk) 09:24, 2 June 2012 (UTC) ## how can i get account user:goosy I'm goosy@zhwiki, In this wiki(en), I forgot my password, and no email. I can't attached accounts to Unified login. can you help me? thank you. --182.134.64.249 (talk) 01:22, 17 June 2012 (UTC) ## Proposed deletion of Starfighter The article Starfighter has been proposed for deletion because of the following concern: article on a word made up by some editor, unsourced after several years While all contributions to Wikipedia are appreciated, content or articles may be deleted for any of several reasons. You may prevent the proposed deletion by removing the {{proposed deletion/dated}} notice, but please explain why in your edit summary or on the article's talk page. Please consider improving the article to address the issues raised. Removing {{proposed deletion/dated}} will stop the proposed deletion process, but other deletion processes exist. In particular, the speedy deletion process can result in deletion without discussion, and articles for deletion allows discussion to reach consensus for deletion. I've seconded it. -- Trevj (talk) 11:22, 13 July 2012 (UTC) ## Wikipedia:Editor review/TheGeneralUser (2) Your review is required and will be greatly appreciated :) Hi Brion VIBBER ! I have started my second editor review at Wikipedia:Editor review/TheGeneralUser (2). I will be greatly delighted, thankful and valued to have your review for me regarding my editing and possible candidate for Adminship. As you are a experienced and long term Wikipedian so i have asked for your kind review. Take your time to review my editing and give the best review that you can :). Feel free to ask me any questions you would like to on the review page itself. It will be a great honor to have you review me for which I will truly feel appreciated and helpful! I always work to improve Wikipedia and make it a more better place to be for Everyone :). Regards and Happy Editing! TheGeneralUser (talk) 19:05, 4 September 2012 (UTC) ## You're invited! - Wiki Loves Monuments - San Francisco Events Palace of Fine Arts in San Francisco Hi! As part of Wiki Loves Monuments, we're organizing two photo events in the San Francisco Bay Area and one in Yosemite National Park. We hope you can come out and participate! Feel free to contact User:Almonroth with questions or concerns. There are three events planned: We look forward to seeing you there! You are receiving this message because you signed up on the SF Bay Area event listing, or have attended an event in the Bay Area. To remove yourself, please go here. EdwardsBot (talk) 00:41, 7 September 2012 (UTC) ## Invitation to join the Ten Year Society Dear Brion, I'd like to extend a cordial invitation to you to join the Ten Year Society, an informal group for editors who've been participating in the Wikipedia project for ten years or more. I'm currently inviting people whose names that I recognize from around the project, please feel free to invite anyone else you like. Best regards, -- Hex [t/c] 11:23, 27 September 2012 (UTC). ## You're invited! 
Ada Lovelace Day San Francisco October 16 - Ada Lovelace Day Celebration - You are invited! Come celebrate Ada Lovelace Day at the Wikimedia Foundation offices in San Francisco on October 16! This event, hosted by the Ada Initiative, the Mozilla Foundation, and the Wikimedia Foundation. It'll be a meet up style event, though you are welcome to bring a laptop and edit about women in STEM if you wish. Come mix, mingle and celebrate the legacy of the world's first computer programmer. The event is October 16, 5:00 pm - 8:00 pm, everyone is welcome! You must RSVP here - see you there! SarahStierch (talk) 19:53, 13 October 2012 (UTC) ## Wikipedia skins Hi Brion, I learn there are plans to throw out some of the old skins. Please, please please invent some new ones. I actually think editors should have the option to design their own skin to their modifications in their preferences but if not at least introduce a number of inspiring new skins which are graphically more impressive than the current ones. Presentation and graphics I think is very important and can radically alter how the website looks and is perceived. The current ones I find bland and uninspiring. Any chance you could do something in regards to this?♦ Dr. ☠ Blofeld 18:16, 29 October 2012 (UTC) Most of the new design work is focused on mobile -- we can iterate quickly there without breaking things -- but we plan to expand that work back to the desktop as well. Keep watching. :) --brion (talk) 01:17, 6 November 2012 (UTC) ## Deletion of a block log entry Hey, I was hoping you could answer a question. Is it possible to delete a block log entry of an editor? For some background, look at this discussion. I then did some poking around on my own and found this brief discussion from 2008, which is what led me to you. Thanks.--Bbb23 (talk) 23:12, 29 October 2012 (UTC) Generally there's no reason to remove log entries, though it is physically possible; see Wikipedia:Revision deletion#Log redaction. --brion (talk) 01:16, 6 November 2012 (UTC) Yes, I read that. It's too bad, though, because in cases of pure mistake (like this one), it would be nice not to have the block on the log, even though the unblock entry with the reason is also there. I'm just sympathetic to the user who would have liked it removed. Thanks for coming over and responding.--Bbb23 (talk) 01:23, 6 November 2012 (UTC) I'm doing research for Wikiquote, if anyone knows of interesting or pithy quotes about q:MediaWiki, please let me know at q:Talk:MediaWiki, it would be most appreciated! Thank you, — Cirt (talk) 17:25, 5 November 2012 (UTC) ## From VPR: TOC labels Just had a thought. Sometimes you want to quickly identify the status of something in a process - say, the completion amount of a particular day at AFD. So you go to that page's AFD log and there's a great big table of contents. Short of scrolling down the page there's no way to know what the status is of each item in that TOC. My suggestion: a magic word that allows the appending of text to the display of a heading in a TOC. Something like this: == First thing {{TOCLABEL:closed}}== Text == Second thing {{TOCLABEL:open}}== Text == Third thing {{TOCLABEL:reopened}}== Text which would render something like: ## Contents (hide) Wouldn't that be kind of handy? There would likely be all sorts of uses. The technique might need to be restricted to use outside article space only, though, I think. .:YellowPegasus:. 
(talkcontribs) 20:00, 23 November 2012 (UTC) ## Edit-a-thon tomorrow (Saturday) in Oakland Hi, I hope you will be joining us tomorrow afternoon at the Edit-a-thon at Tech Liminal, in Oakland. We'll be working on articles relating to women and democracy (and anything else that interests you). It's sponsored by the California League of Women Voters, Tech Liminal, and me. If this is the first you are hearing of this event, my apologies for the last-minute notice! I announced it on the San Francisco email list and by a banner on your watchlist, but I neglected to look at the San Francisco invitation list until this evening. If you can't make it this time, I hope to see you at a similar event soon! -Pete (talk) 04:41, 15 December 2012 (UTC) ## Wikimedia Foundation employee salaries You are invited to comment here. Nirvana2013 (talk) 09:24, 17 December 2012 (UTC) ## Crat statement draft Hi Following the drama at BN, I'm trying to come up with a statement all Crats could agree to. Please take a look, below. I am quite content to do this onwiki -we have always worked transparently, except where secrecy is essential (ie RTV). I think we should be able to wordsmith a statement acceptable to all, and I think it's an important thing to do. 1. In my opinion, this issue has come about through an unfortunate proliferation of documentation: policy, guideline, how-to etc 2. I am not convinced that there is community consensus on all of the points encapsulated in those various pages 3. I am unhappy at what may be described as some or all of: inconsistencies, inaccuracies or lack of clarity in that documentation 4. I do not believe that any of the issues we have faced have been caused by Crats trying to widen their powers 5. I would like to see the issues clarified, based on consensus, and for the documentation to be updated accordingly 6. I'd like to thank Griot-de for generously withdrawing the rename request Signed [crat sig] Lmk what you think. Many thanks, --Dweller (talk) 10:39, 7 January 2013 (UTC) ## Mail awaits you. It may take a few minutes from the time the email is sent for it to show up in your inbox. You can remove this notice at any time by removing the {{You've got mail}} or {{YGM}} template. --ukexpat (talk) 14:45, 30 March 2013 (UTC) ## MfD nomination of Wikipedia:WikiProject California/List of California-related topics Wikipedia:WikiProject California/List of California-related topics, a page you substantially contributed to, has been nominated for deletion. Your opinions on the matter are welcome; please participate in the discussion by adding your comments at Wikipedia:Miscellany for deletion/Wikipedia:WikiProject California/List of California-related topics and please be sure to sign your comments with four tildes (~~~~). You are free to edit the content of Wikipedia:WikiProject California/List of California-related topics during the discussion but should not remove the miscellany for deletion template from the top of the page; such a removal will not end the deletion discussion. Thank you. Mercurywoodrose (talk) 04:01, 1 April 2013 (UTC) ## Women's history editathon Hey - Sorry for the late notice, but since you have yourself tagged as living in the Bay Area, I thought you might appreciate notification that we’re having an event Saturday! It’ll be held at Hoyt Hall, an all-women's house of the Berkeley Student Cooperative from 3 to 6 pm tomorrow. The main event page is here. 
Anyone is welcome to show up, but we’re expecting a significant number of people to come who have literally never edited Wikipedia before. If you’re an experienced Wikipedian who would be able to provide useful help to some of the newbies, your presence would be especially appreciated (and it might be a good idea for you to show up at 2 or 2:30 instead of three. Thanks, Kevin Gorman (talk) 02:00, 6 April 2013 (UTC) I’m AWB’ing this message to all Wikipedians who have tagged themselves in the bay area. I’m sorry if the message isn’t of interest to you; feel free to delete it. I’ll be unlikely to send future messages in a similar way, but if really don’t want to receive future messages of this sort, please let me know. ## Template:Related portals2 Hey, I love this template but I would like to alter the images. Where does this pull the images from? I could simply change that location in the portals I am using. -- 06:28, 29 April 2013 (UTC) It pulls them via Template:Related portal item which pulls them via Template:Portal image which checks for filenames in templates of the form Template:Portal/Images/Film etc. You can change those little templates to change which image shows up for the given category. --brion (talk) 13:17, 29 April 2013 (UTC) thank you very much! -- 15:25, 29 April 2013 (UTC) ## Wiknic 2013 Wiknic 2013 Sunday, June 23rd · 12:34pm · Lake Merritt, Oakland Theme: Hyperlocal list-making Lake Merritt Wild Duck Refuge (Oakland, CA) This year's 2013 SF Wiknik will be held at Lake Merritt, next to Children's Fairyland in Oakland. This event will be co-attended by people from the hyperlocal Oakland Wiki. May crosspollination of ideas and merriment abound! ### Location and Directions • Location: The grassy area due south of Children's Fairyland (here) (Oakland Wiki) • Nearest BART: 19th Street • Nearest bus lines: NL/12/72 • Street parking abounds EdwardsBot (talk) 04:47, 3 May 2013 (UTC) ## Forced user renames coming soon for SUL Hi, sorry for writing in English. I'm writing to ask you, as a bureaucrat of this wiki, to translate and review the notification that will be sent to all users, also on this wiki, who will be forced to change their user name on May 27 and will probably need your help with renames. You may also want to help with the pages m:Rename practices and m:Global rename policy. Thank you, Nemo 13:05, 3 May 2013 (UTC) ## You're invited... to two upcoming Bay Area events: • Maker Faire 2013, Sat/Sun May 18-19, San Mateo -- there will have a booth about Wikimedia, and we need volunteers to talk to the public and ideas for the booth -- see the wiki page to sign up! • Edit-a-Thon 5, Sat May 25, 10-2pm, WMF offices in San Francisco -- this will be a casual edit-a-thon open to both experienced and new editors alike! Please sign up if on the wiki page if you can make it so we know how much food to get. I hope you can join us at one or both! -- phoebe / (talk to me) 18:51, 12 May 2013 (UTC) ## Already this time of the year ! Saluton, Brion ! Happy day ! Gratulon ! Dankon ! -- FoeNyx (talk) 14:52, 1 June 2013 (UTC) Dankon :D --brion (talk) 16:30, 5 June 2013 (UTC) ## Wanted by PETA You have just been added to PETA's watchlist for killing too many server kitties. — Preceding unsigned comment added by 114.6.22.11 (talk) 05:22, 16 July 2013 (UTC) ## A barnstar for you! The Admin's Barnstar Great job sir! Crab rangoons (talk) 19:08, 8 August 2013 (UTC) ## "Deletion means deletion" Hi! Is this comment from 2007 still accurate? 
Or would it be correct to say that the content is just hidden? Are there copies of deleted content on Wikimedia Dumps? (I'm asking because of a new discussion on Portuguese Wikipedia, about the terminology used in our deletion policy to describe what happens to "deleted pages") Helder 22:29, 25 August 2013 (UTC) In principle this remains true (that the archives of deleted pages in the database are not meant as permanent storage and could be cleared out from time to time), but in practice we haven't cleared out the deletion archives in a long time as people expect things to remain undeletable. Data dumps don't include the deleted pages -- but of course if you grab an old dump from between the creation and the deletion of a page, you'd find the at-the-time-not-deleted item in there. Note that since '07 we've also introduced "revision deletion" or hiding, which allows striking out of particular parts of a revision's metadata (content, author username, comment). Again, the data remains in the database on the site, but isn't included in data dumps. So, while in theory just about all deleted stuff could be recovered from Wikimedia Foundation's servers if required (some things get really-really-totally-perma-deleted-for-reals like child porn...) if Wikimedia shut down and Wikipedia had to be reinstated from public data dumps, those deleted items would not be available to the 'new' site. --brion (talk) 14:02, 28 August 2013 (UTC) ## Wikimedia Code Review Access Hi! I was told to go and contribute to the new repositories on Code Review. Since the only reason I was on that repository in the first place was to work on the Metro app, I would like to be able to access the Metro project on Code Review. Thank you very much! APerson (talk!) 03:41, 6 December 2013 (UTC) ## You're invited: Art & Feminism Edit-a-thon Art & Feminism Edit-a-Thon - You are invited! Hi Brion VIBBER! The first Art and Feminism Edit-a-thon will be held on Saturday, February 1, 2014 in San Francisco. Any editors interested in the intersection of feminism and art are welcome. Wikipedians of all experience levels are invited! Experienced editors will be on hand to help new editors. Bring a friend and a laptop! Come one, come all! Learn more here! SarahStierch (talk) 08:50, 21 December 2013 (UTC) ## Request for a complete export of one article and related pages. Hello Brion: I have been politely asked to "read the archives" before editing on a contentious article and realize this is going to take a while and require lots of notes. :) I decided to setup a personal Wikimedia-on-a-stick so I can work locally (much faster, I live in a rural area and have a somewhat slower connection) but the history limit on export is an issue since there are a lot more than 1000 edits. I saw you were able to do this before here and was hoping you would be willing to do it again maybe? The page is Morgellons and I would like to have the whole set of article + article history/diffs + talk + talk history/diffs + talk archives (11) + talk archives history/diffs. Is that possible? Thanks in advance either way. {{imaginary-smiley|pleading-with-big-sad-puppy-dog-eyes}} :) F6697 FORMERLY 66.97.209.215 TALK 04:24, 4 January 2014 (UTC) ## service oriented architecture Greetings $editor, I see from your$signpostProfileArticle that you have an interest in service-oriented architectures!  
:-)   I'm trying to help a group of COI-encumbered PhDs re-write the SORCER article (NPOV), which is a service-oriented architecture based on the Jini/JVM substrate, used by the USAF (plus French and rumour has it Chinese and Russians and possibly Brits/Aussies) to speed up the automated mechanical-design-and-analysis of radical new aerospace structures. Interesting stuff, but could use another set of eyeballs to help translate from the jargon-debt they've built up during classified and quasi-governmental projects involving this software, since the previous millenium. Do you have any interest in helping, and are you available? And if so, how strong is your wikiImmuneSystem when it comes to surviving TLDR? p.s. I'd also like to give you an earful about WP:FLOW and VizEd and the general push for wiki2.0 sorts of things. But I promise to refrain, if you'd prefer not to hear it.  :-)   Hope this helps, and thanks for improving wikipedia. Please leave me a talkback / if you respon' / cannot have no watchlist / as just an anon'. Danke. 74.192.84.101 (talk) 17:19, 26 January 2014 (UTC) ## MfD nomination of Portal:Literature/Mobile redesign attempt Portal:Literature/Mobile redesign attempt, a page you substantially contributed to, has been nominated for deletion. Your opinions on the matter are welcome; please participate in the discussion by adding your comments at Wikipedia:Miscellany for deletion/Portal:Literature/Mobile redesign attempt and please be sure to sign your comments with four tildes (~~~~). You are free to edit the content of Portal:Literature/Mobile redesign attempt during the discussion but should not remove the miscellany for deletion template from the top of the page; such a removal will not end the deletion discussion. Thank you. Sven Manguard Wha? 20:02, 3 February 2014 (UTC) ## A barnstar for you! The Original Barnstar For collaboration on a technical book explaining the software that runs Wikipedia. Bides time (talk) 18:11, 20 February 2014 (UTC) ## You're invited! WikiWomen's Edit-a-thon at the University of California, Berkeley Saturday, April 5 - WikiWomen's Edit-a-thon at the University of California, Berkeley - You are invited! The University of California, Berkeley's Berkeley Center for New Media is hosting our first edit-a-thon, facilitated by WikiWoman Sarah Stierch, on April 5! This event, focused on engaging women to contribute to Wikipedia, will feature a brief Wikipedia policy and tips overview, followed by a fast-paced energetic edit-a-thon. Everyone is welcome to attend. The event is April 5, from 1-5 PM, at the Berkeley Center for New Media Commons at Moffitt Library. You must RSVP here - see you there! SarahStierch (talk) 23:13, 13 March 2014 (UTC) ## June is already here ! Saluton, Brion ! Happy day ! Gratulon ! Dankon ! -- FoeNyx (talk) 09:03, 1 June 2014 (UTC) ## Wikimania discussion Hello Brion, I have something I would like to discuss with you about using apps to contribute multimedia content to Wikipedia. Could we please meet up and have a brief discussion today at Wikimania? Please feel free to call or text me on my cell phone: +44 7792906335 or email: yu.wan05alumni.imperial.ac.uk Thank you! Regards, Francis --Computor (talk) 08:42, 10 August 2014 (UTC) ## An important message about renaming users Dear Brion VIBBER, I am cross-posting this message to many places to make sure everyone who is a Wikimedia Foundation project bureaucrat receives a copy. 
If you are a bureaucrat on more than one wiki, you will receive this message on each wiki where you are a bureaucrat. As you may have seen, work to perform the Wikimedia cluster-wide single-user login finalisation (SUL finalisation) is taking place. This may potentially effect your work as a local bureaucrat, so please read this message carefully. Why is this happening? As currently stated at the global rename policy, a global account is a name linked to a single user across all Wikimedia wikis, with local accounts unified into a global collection. Previously, the only way to rename a unified user was to individually rename every local account. This was an extremely difficult and time-consuming task, both for stewards and for the users who had to initiate discussions with local bureaucrats (who perform local renames to date) on every wiki with available bureaucrats. The process took a very long time, since it's difficult to coordinate crosswiki renames among the projects and bureaucrats involved in individual projects. The SUL finalisation will be taking place in stages, and one of the first stages will be to turn off Special:RenameUser locally. This needs to be done as soon as possible, on advice and input from Stewards and engineers for the project, so that no more accounts that are unified globally are broken by a local rename to usurp the global account name. Once this is done, the process of global name unification can begin. The date that has been chosen to turn off local renaming and shift over to entirely global renaming is 15 September 2014, or three weeks time from now. In place of local renames is a new tool, hosted on Meta, that allows for global renames on all wikis where the name is not registered will be deployed. Your help is greatly needed during this process and going forward in the future if, as a bureaucrat, renaming users is something that you do or have an interest in participating in. The Wikimedia Stewards have set up, and are in charge of, a new community usergroup on Meta in order to share knowledge and work together on renaming accounts globally, called Global renamers. Stewards are in the process of creating documentation to help global renamers to get used to and learn more about global accounts and tools and Meta in general as well as the application format. As transparency is a valuable thing in our movement, the Stewards would like to have at least a brief public application period. If you are an experienced renamer as a local bureaucrat, the process of becoming a part of this group could take as little as 24 hours to complete. You, as a bureaucrat, should be able to apply for the global renamer right on Meta by the requests for global permissions page on 1 September, a week from now. In the meantime please update your local page where users request renames to reflect this move to global renaming, and if there is a rename request and the user has edited more than one wiki with the name, please send them to the request page for a global rename. Stewards greatly appreciate the trust local communities have in you and want to make this transition as easy as possible so that the two groups can start working together to ensure everyone has a unique login identity across Wikimedia projects. Completing this project will allow for long-desired universal tools like a global watchlist, global notifications and many, many more features to make work easier. 
If you have any questions, comments or concerns about the SUL finalisation, read over the Help:Unified login page on Meta and leave a note on the talk page there, or on the talk page for global renamers. You can also contact me on my talk page on meta if you would like. I'm working as a bridge between Wikimedia Foundation Engineering and Product Development, Wikimedia Stewards, and you to assure that SUL finalisation goes as smoothly as possible; this is a community-driven process and I encourage you to work with the Stewards for our communities. Thank you for your time. -- Keegan (WMF) talk 18:24, 25 August 2014 (UTC) --This message was sent using MassMessage. Was there an error? Report it! ## You're invited! Litquake Edit-a-thon in San Francisco You are invited!Litquake Edit-a-thon in San Francisco → Saturday, October 11, 2014, from 1-5 PM The Edit-a-thon will occur in parallel with Litquake, the San Francisco Bay Area's annual literature festival. Writers from all over the Bay Area and the world will be in town during the nine day festival, so the timing is just right for us to meet, create and improve articles about literature and writers. All levels of Wikipedia editing experience are welcome. This event will include new editor training. Please bring your laptop. The venue: Wikimedia Foundation offices (149 New Montgomery Street, 6th Floor San Francisco, CA 94105) – Google Maps view You must RSVP here — see you there! --Rosiestep (talk) 03:44, 26 September 2014 (UTC) ## Block quotes on mobile I'm having trouble working out what to do with block quotes that are formatted to be narrower than usual. These can look good on desktop/laptop, because they avoid long lines of text and give us more white space. But on mobile, they're sometimes reduced to one or two letters per line. For example, see the block quote toward the end of the Prisons in England subsection in the Background section of Marshalsea on mobile. Would it be possible to have the mobile version reformat these, so that we don't have to remove or widen them? SlimVirgin (talk) 17:44, 16 October 2014 (UTC) ## Recent changes app Hello Brion, I'm an editor from the Catalan Wikipedia and I write small programs as a hobby (Python bots and some PHP and JavaScript). Recently I had the idea of making a Recent-changes-patrolling app where you could easily mark as patrolled or revert the recent changes. I had the idea of putting each unpatrolled edit in a "card" that you could swipe left to revert and right to mark as patrolled (something like Tinder or Swipable-Cards). The problem is I don't know where to start. I've never programmed an app before (but I would like to learn how to do it) and I don't know if is there a way to go multi-platform easily. I'm asking you this because you and User talk:Yuvipanda are the main contributors of the Wikipedia app. Gerardduenas (talk) 20:09, 21 December 2014 (UTC) ## PEGJS Saluton! Mi vidis, ke vi scipovas pegjs. Mi havas tiun problemon: tzwd!34!346 sdfw!212!54 rews!325!321 wdr!345!32 Mi volas tiun parsi, ke en la listo la tokenoj ne estu disigitaj en karakteroj, sed en vorto kaj tu entjeroj. Jen mi havas tiun: list = ((spaceNL* bareword spaceNL*)* spaceNL*) token = value:bareword separator line:integer separator offset:integer spaceNL*{ return value, line, offset; } integer "integer" = digits:$[0-9]+ { return parseInt(digits, 10); } bareword = cs:barewordChar+ barewordChar = '\\' chr:barewordMeta { return chr } / !barewordMeta chr:. 
{ return chr } barewordMeta = [$"';&<>\n()\[*?| ] space = " " / "\t" spaceNL = space / "\n" separator = "!" ` Kiel povas mi helpi tiun? Szalakóta (talk) 18:57, 7 January 2015 (UTC) ## SF edit-a-thons on March 7 and 8 ArtAndFeminism (3/7) and International Women's Day (3/8)! Dear fellow Wikipedian, In celebration of WikiWomen's History Month, the SF Bay Area Wikipedia community has two events in early March -- please consider attending! First, we have an ArtAndFeminism edit-a-thon, which will take place at the Kadist Art Foundation from 12 noon to 6pm on Saturday, March 7. We'll be one of many sites worldwide participating in this edit-a-thon on March 7th. So join us as we help improve Wikipedia's coverage of women artists and their works! Second, we will be celebrating International Women's Day with the International Women's Day edit-a-thon on Sunday, March 8 from 1pm to 5pm at the Wikimedia Foundation. Our editing focus will be on women, of course! I hope to see you there! Rosiestep (talk) - via MediaWiki message delivery (talk) 18:06, 20 February 2015 (UTC) ## Samuel Clemens listed at Redirects for discussion An editor has asked for a discussion to address the redirect Samuel Clemens. Since you had some involvement with the Samuel Clemens redirect, you might want to participate in the redirect discussion if you have not already done so. Mr. Guye (talk) 03:14, 1 April 2015 (UTC) ## Once more! Saluton, Brion! It's already this time of the year: your day! Gratulon & Dankon ! -- FoeNyx (talk) 21:16, 1 June 2015 (UTC) ## Workshopping bureaucrat activity requirements (Message to all bureaucrats) There is an ongoing discussion about implementing some kind of standards for administrative and bureaucrat activity levels; and activity requirements for bureaucrats have been explored several times in the past. I've prepared a draft addition to Wikipedia:Bureaucrats that would require at least one bureaucratic action every five years to retain the bureaucrat permission. In the past, I've been hesitant of such proposals but I believe that if the bureaucrat group as a whole is seen to be actively engaged, the community may be more willing to grant additional tasks to the position. Please let me know your thoughts. I'm not sure if this actually applies to any of us, but if you have not acted as a bureaucrat in over five years, you might consider requesting removal of the permission or otherwise signalling that you intend to return to bureaucrat activity. –xenotalk 14:22, 30 June 2015 (UTC) ## FYI: bureaucrat discussion opened Message to most bureaucrats A bureaucrat chat has been opened by Maxim at Wikipedia:Requests for adminship/Rich Farmbrough 2/Bureaucrat discussion. Wikipedia:Bureaucrat discussion suggests notifying bureaucrats on their talk page as well as BN, hence this courtesy note. –xenotalk 16:44, 5 July 2015 (UTC) ## Community & Bureaucrat based desysoping proposal A discussion is taking place regarding a proposal to create a community and bureaucrat based desysoping committee. The proposal would modify the position of bureaucrat. Your input is encouraged. Please see Wikipedia:Administrators/RfC for BARC - a community desysoping process. Thank you, --Hammersoft (talk) 19:55, 28 July 2015 (UTC) ## Implementation of Wikipedia:Bureaucrats#Bureaucrat activity requirements Following a community discussion ending August 2015, consensus was reached to remove the bureaucrat permissions of users who have not participated in bureaucrat activity for three years. 
To assist with the implementation of this requirement, please see Wikipedia:Bureaucrat activity. Modeled after Wikipedia:Inactive administrators and similar to that process, the log page will be created on 1 September 2015. Bureaucrats who have not met the activity requirements as of that date will be notified by email (where possible) and on their talk page to advise of the pending removal. If the notified user does not return to bureaucrat activity and the permissions are removed, they will need to request reinstatement at WP:RFB. Removal of access is procedural only, and not intended to reflect negatively upon the affected user in any way. Please let me know if you have any questions or concerns. –xenotalk MediaWiki message delivery (talk) 22:20, 17 August 2015 (UTC) ## ArbCom elections are now open! Hi, You appear to be eligible to vote in the current Arbitration Committee election. The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to enact binding solutions for disputes between editors, primarily related to serious behavioural issues that the community has been unable to resolve. This includes the ability to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail. If you wish to participate, you are welcome to review the candidates' statements and submit your choices on the voting page. For the Election committee, MediaWiki message delivery (talk) 08:51, 23 November 2015 (UTC) ## Notification of pending suspension of bureaucrat permissions due to not meeting bureaucrat activity requirements Following a community discussion ending August 2015, consensus was reached to remove the bureaucrat permissions of users who have not participated in bureaucrat activity for three years. As a result of this discussion, your bureaucrat permissions may be removed if you do not return to bureaucrat activity within the next month. If you do not return to bureaucrat activity and the permissions are removed, you will need to request reinstatement at RFB. This removal of access is procedural only, and not intended to reflect negatively upon you in any way. We wish you the best in future endeavors, and thank you for your past bureaucrat efforts. –xenotalk 21:05, 30 November 2015 (UTC) ## Notification of imminent suspension of bureaucrat permissions due to not meeting bureaucrat activity requirements Following a community discussion ending August 2015, consensus was reached to remove the bureaucrat permissions of users who have not participated in bureaucrat activity for three years. As a result of this discussion, your bureaucrat permissions may be removed if you do not return to bureaucrat activity within the next few days. If you do not return to bureaucrat activity and the permissions are removed, you will need to request reinstatement at RFB. This removal of access is procedural only, and not intended to reflect negatively upon you in any way. We wish you the best in future endeavors, and thank you for your past bureaucrat efforts. –xenotalk 14:43, 26 December 2015 (UTC) ## Suspension of bureaucrat permissions due to not meeting bureaucrat activity requirements Following a community discussion ending August 2015, consensus was reached to remove the bureaucrat permissions of users who have not participated in bureaucrat activity for three years. 
As a result of this discussion, your bureaucrat permissions have been removed by a Steward. If you wish to request reinstatement, you may do so at WP:RFB. This removal of access is procedural only, and not intended to reflect negatively upon you in any way. We wish you the best in future endeavors, and thank you for your past bureaucrat efforts. –xenotalk 06:14, 31 December 2015 (UTC)
Let f(x) be a quartic polynomial with integer coefficients and four integer roots. Suppose the constant term of f(x) is 6. (a) Is it possible for x=3 to be a root of f(x)? (b) Is it possible for x=3 to be a double root of f(x)? Apr 30, 2019 #1 $$\text{All the roots are integers, so we have}\\ f(x) = (x-i_1)(x-i_2)(x-i_3)(x-i_4)\\ \text{The constant term is }c_0 = i_1 i_2 i_3 i_4 = 6\\ \text{3 can be a root, since for example }3\cdot 2\cdot 1\cdot 1 = 6\\ \text{On the other hand, 3 cannot be a double root: that would make the product }i_1 i_2 i_3 i_4\text{ a multiple of }3\cdot 3 = 9,\\ \text{and no integer multiple of 9 equals 6}$$ May 1, 2019 #2 Thank you, this response is short and to the point while still helping me understand the problem. May 2, 2019
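To make part (a) fully concrete, here is one explicit witness (just one possible choice of the remaining two roots, not the only one): $$f(x) = (x-3)(x-2)(x-1)^2 = x^4 - 7x^3 + 17x^2 - 17x + 6,$$ which has integer coefficients, the four integer roots 3, 2, 1, 1, and constant term 6. For part (b), if 3 were a double root, the constant term would equal the leading coefficient times the product of the four roots and so would be a multiple of 9; since 9 does not divide 6, no such polynomial exists.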
Homework Help: Intensity of sound 1. Mar 4, 2010 semc A firework is detonated many meters above the ground. At a distance of 400 m from the explosion, the acoustic pressure reaches a maximum of 10 N/m². Assume the speed of sound is constant at 343 m/s, the ground absorbs all sound falling on it, and the air absorbs sound energy at a rate of 7 dB/km. What is the sound level at 4 km from the explosion? I have calculated the sound level at a distance of 4 km from the explosion without the absorption. However, why do we have to subtract (7*3.6) dB from that answer instead of (7*4) dB? 2. Mar 8, 2010 aim1732 When you say you calculated the sound level without the absorption, you are ignoring the fact that the pressure given to you at 400 m has already undergone some loss over that first stretch. Subtracting the loss over the full 4 km would therefore count the first 400 m twice; only the remaining 3.6 km of propagation still needs to be corrected for, which is why the factor is 7*3.6 rather than 7*4.
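For anyone who wants to see the numbers worked through, here is a rough sketch of the standard approach (not part of the original thread). It assumes an air density of about $$\rho \approx 1.20\ \text{kg/m}^3$$ and the usual reference intensity $$I_0 = 10^{-12}\ \text{W/m}^2$$, neither of which is stated in the problem. From the pressure amplitude at 400 m, $$I_{400} = \frac{\Delta p_{\max}^2}{2\rho v} = \frac{(10\ \text{N/m}^2)^2}{2(1.20)(343)} \approx 0.12\ \text{W/m}^2, \qquad \beta_{400} = 10\log_{10}\frac{I_{400}}{I_0} \approx 111\ \text{dB}.$$ Spherical spreading from 400 m to 4 km costs $$20\log_{10}(4000/400) = 20\ \text{dB}$$, and atmospheric absorption over the remaining 3.6 km costs another $$7 \times 3.6 = 25.2\ \text{dB}$$, giving a sound level of roughly $$111 - 20 - 25.2 \approx 66\ \text{dB}$$ at 4 km.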
### On the Depth of Oblivious Parallel RAM T-H. Hubert Chan, Kai-Min Chung, and Elaine Shi ##### Abstract Oblivious Parallel RAM (OPRAM), first proposed by Boyle, Chung, and Pass, is the natural parallel extension of Oblivious RAM (ORAM). OPRAM provides a powerful cryptographic building block for hiding the access patterns of programs to sensitive data, while preserving the parallelism inherent in the original program. All prior OPRAM schemes adopt a single metric of "simulation overhead" that characterizes the blowup in parallel runtime, assuming that oblivious simulation is constrained to using the same number of CPUs as the original PRAM. In this paper, we ask whether oblivious simulation of PRAM programs can be further sped up if the OPRAM is allowed to have more CPUs than the original PRAM. We thus initiate a study to understand the true depth of OPRAM schemes (i.e., when the OPRAM may have access to an unbounded number of CPUs). On the upper bound front, we construct a new OPRAM scheme that gains a logarithmic factor in depth without incurring extra blowup in total work in comparison with the state-of-the-art OPRAM scheme. On the lower bound side, we demonstrate fundamental limits on the depth of any OPRAM scheme, even when the OPRAM is allowed to have an unbounded number of CPUs and to blow up total work arbitrarily. We further show that our upper bound result is optimal in depth for a reasonably large parameter regime that is of particular interest in practice. Publication info Published elsewhere. MAJOR revision. ASIACRYPT 2017 Keywords oblivious parallel RAM Contact author(s) tszhubert @ gmail com History Short URL https://ia.cr/2017/861 CC BY BibTeX @misc{cryptoeprint:2017/861, author = {T-H. Hubert Chan and Kai-Min Chung and Elaine Shi}, title = {On the Depth of Oblivious Parallel RAM}, howpublished = {Cryptology ePrint Archive, Paper 2017/861}, year = {2017}, note = {\url{https://eprint.iacr.org/2017/861}}, url = {https://eprint.iacr.org/2017/861} }
In the book" The Quantum Theory of Radiation", Heitler derived the transverse self-energy of the electron(Chapter III, Section18, Eq.(23)) $$\frac{{{e^2}}}{{\pi m}}\int_{\text{0}}^\infty {kdk}$$ which is the energy of the electron under the action of the vacuum fluctuation of the radiation...
# zbMATH — the first resource for mathematics An $$L_2$$-quotient algorithm for finitely presented groups. (English) Zbl 1253.20033 Summary: The paper develops algorithmic methods to enumerate all normal subgroups of a finitely presented group such that the factor groups are either isomorphic to $$\mathrm{PSL}(2,p^n)$$ or to $$\mathrm{PGL}(2,p^n)$$. The case of two generators is treated in detail. A range of examples starting with the free group on two generators and ending with groups having only finitely many normal subgroups of this type is discussed. ##### MSC: 20F05 Generators, relations, and presentations of groups 20-04 Software, source code, etc. for problems pertaining to group theory 68W30 Symbolic computation and algebraic computation ##### Software: JanetOre; Magma; Janet
# What is the minimum value of |x-4| + |x+3| + |x-5| ?

**chetan2u (Math Expert):**

What is the minimum value of |x-4| + |x+3| + |x-5| ?

A. -3
B. 3
C. 5
D. 7
E. 8

**chetan2u (Math Expert):**

Good explanation, Engr2012. The question tests our understanding of a property of the modulus.

CONCEPT: when we have two moduli, their sum is at its minimum between the two critical points and increases on either side of them. The two extremities here are -3 and 5, so |x+3| + |x-5| remains constant at 3 + 5 = 8 throughout the range from -3 to 5. So we have to make |x-4| as small as possible to minimise |x-4| + |x+3| + |x-5|; x-4 is 0 at x = 4. Our minimum value therefore occurs at x = 4 and equals 8.

E

**Engr2012 (CEO):**

The ranges this question needs to be looked at in are:

1. x < -3
2. -3 ≤ x < 4
3. 4 ≤ x < 5
4. x ≥ 5

Taking each one in turn:

1. x < -3: the given expression becomes f(x) = |x-4| + |x+3| + |x-5| = -(x-4) - (x+3) - (x-5) = -3x + 6. The slope of f(x) = -3x + 6 is negative, so the function keeps decreasing as x increases and has no minimum inside this range.
2. -3 ≤ x < 4: the expression takes the form f(x) = 12 - x, again a negative slope, so the expression keeps decreasing across this range (from 15 at x = -3 down towards 8). Move on.
3. 4 ≤ x < 5: the expression becomes f(x) = x + 4, a positive slope, so the minimum value is at x = 4, giving 4 + 4 = 8.
4. x ≥ 5: the expression becomes f(x) = 3x - 6, a positive slope, so the minimum in this range is at x = 5, giving f(x) = 9.

Hence E, 8, is the correct answer.

Method 2: |x-a| is the distance of x from a. Thus f(x) = |x-4| + |x+3| + |x-5| = distance of x from 4 + distance of x from -3 + distance of x from 5. The ranges remain the same as above, but now assume some values in the given ranges (ignore the first range, as it should be pretty obvious that it will not give a minimum value).

Ranges 2 and 3 (-3 ≤ x < 5): take x = -2 and you get f(x) = 14; move to x = 0 and you get f(x) = 12; the values are decreasing, good. Now take x = 3, you get f(x) = 9; for x = 4, you get f(x) = 8; when you take x = 5, f(x) = 9 and starts to increase again. Thus 8, answer E.

This method depends on the options being integers and on the values moving consistently from decreasing to increasing. If the options had been decimals, or the behaviour had varied inconsistently, this would not be a recommended course of action.

Hope this helps. FYI, the graph of the given function (a piecewise-linear curve with its minimum at x = 4) was attached to the original post.

**Vyshak (Moderator):**

A cannot be the answer, as all three terms are inside a modulus and hence the result is non-negative.

|x-4| ≥ 0, minimum at x = 4; |x+3| ≥ 0, minimum at x = -3; |x-5| ≥ 0, minimum at x = 5.

x = -3 --> 7 + 0 + 8 = 15. Also, any more negative value will push the combined value of |x-4| + |x-5| above 9.
x = 4 --> 0 + 7 + 1 = 8
x = 5 --> 1 + 8 + 0 = 9
x = 3 --> 1 + 6 + 2 = 9

So the minimum value of the expression occurs at x = 4 and the resulting value is 8.

**debbiem (Manager):**

We can start here by defining the critical-point limits where sign changes may occur.

1. x < -3: x-4, x+3 and x-5 are all negative; opening the moduli gives -x+4 - x-3 - x+5 = -3x + 6, which grows without bound as x becomes more negative.
2. -3 ≤ x < 0: x-4 negative, x+3 positive, x-5 negative; opening the moduli gives 12 - x, which decreases across this range (from 15 at x = -3 towards 12).
3. 0 ≤ x < 4: the same signs give 12 - x again, still decreasing (towards 8 as x approaches 4).
4. 4 ≤ x < 5: x-4 positive, x+3 positive, x-5 negative; this gives x + 4, and the minimum possible value in this range is 8.
5. x ≥ 5: all three terms positive; this gives 3x - 6, which only grows.

Hence the minimum possible value of the function overall is 8, option E.

I understand I shouldn't have put 0 in as an extra limit, but personally I'm kind of scared of zero; it sometimes leads to disasters when ignored. Also, do let me know, Engr2012 and chetan2u, how much time a person should take solving this. I took 4:56 minutes.

**Engr2012 (CEO):**

Three-modulus questions are rare on the GMAT (I didn't see any across my 3 attempts), and if they do come up, try to remember what chetan2u mentioned above: the minimum value will lie between the two extreme critical points. Your detailed method is correct, though, and should have taken you 2-3 minutes.

In this case, x = 0 should not be taken into account as it is not a critical point. If the expression had contained a |x| term, then yes, x = 0 would have become a critical point.

Additionally, for absolute-value questions, try to plot a rough graph of what the function should look like; it will help you home in on the range you need to worry about. As the graph in my post shows, the only range you needed to worry about here was 4 ≤ x < 5. This can reduce the time taken to solve the question to less than 2 minutes.

Hope this helps.

**debbiem (Manager):**

Thanks! Will definitely try graphs.

**Manager (joined 03 Oct 2013):**

I tried using the extreme points and a couple of integers around them to see the trend. The three points I used are x = -3, 4 and 5. With x = -3 the value is 15, with x = 4 the value is 8, and with x = 5 the value is 9. Because of the |x+3| term, any value greater than 5 will lead to a sum greater than 8. I think the minimum value is 8.

**Intern (joined 21 Jan 2017):**

I tried the same way! One question: where did you get x = 3 from? (x = -3, 4 or 5 sounds good, though.) Why did you try x = 3?

**Intern (joined 01 Oct 2017):**

The question can be solved by plotting the origins of the three moduli on the number line. Note that |x-4| actually means the distance from the point 4, so 4 is an origin; similarly, |x+3| denotes the distance from -3. So we need to find a point whose sum of distances from 4, -3 and 5 is minimum. Plot -3, 4 and 5 on the number line and try to figure out which point gives the minimum sum; with an odd number of origins it will always be the middle one. So the minimum lies at x = 4. Put x = 4 in the expression and you get 8 as the answer.

**Director (joined 20 Feb 2015):**

The value of a modulus is at its minimum when it equals 0.
When x = 4: 0 + 7 + 1 = 8.
When x = -3: 7 + 0 + 8 = 15.
When x = 5: 1 + 8 + 0 = 9.
So the minimum is 8.

**brains (Intern):**

I was just wondering whether it is only in this case that |x+3| and |x-5| remain constant in sum for any value of x (of course I tried only integer values), or whether it is general that between two extremities the sum of the moduli is always constant for any value of x in between. Of course, you did say that the value is minimum between the critical points.

**chetan2u (Math Expert):**

Yes, it holds for all values, even fractions. The reason: whatever gets added to one distance is subtracted from the other.

|x+3| + |x-5|: with x = 0, 3 + 5 = 8; with x = 1, |4| + |-4| = 8; with x = 1/2, |3 + 1/2| + |1/2 - 5| = 7/2 + 9/2 = 8.
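For anyone who wants to double-check the algebra above numerically, here is a quick brute-force evaluation (my own addition, not from the thread) that agrees with the answer:

```python
# Brute-force check that the minimum of |x-4| + |x+3| + |x-5| is 8, attained at x = 4.
import numpy as np

x = np.linspace(-10, 10, 200001)                      # fine grid including x = 4 exactly
f = np.abs(x - 4) + np.abs(x + 3) + np.abs(x - 5)
print(f.min(), x[np.argmin(f)])                       # prints 8.0 4.0
```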
> **Puzzle.** Can we change the category $\mathcal{C}$ to a category $\mathcal{C}^\prime$ so that functors $F : \mathcal{C}^\prime \to \mathbf{Set}$ are just databases of the sort Keith drew, with the kind of symmetry that his table has? I think we need to add the further constraint that $\textrm{FriendOf} \circ \textrm{FriendOf} = 1_{\textrm{People}}$.
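As a concrete illustration of what that extra equation buys (my own sketch; the names below are made up and are not Keith's actual table): a **Set**-valued functor satisfying it is just a table whose FriendOf column is an involution.

```python
# Illustrative sketch only: a Set-valued "database" sends the object People to a set
# and the arrow FriendOf to a function on that set. The added equation
# FriendOf . FriendOf = 1_People forces that function to be an involution,
# which is exactly the symmetry the friendship table exhibits.
people = {"Alice", "Bob", "Carol", "Dave"}
friend_of = {"Alice": "Bob", "Bob": "Alice", "Carol": "Dave", "Dave": "Carol"}

def is_valid_functor(elements, f):
    """Check that f composed with f is the identity on the chosen set."""
    return all(f[f[x]] == x for x in elements)

print(is_valid_functor(people, friend_of))   # True: this table respects the equation
```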
## Solution: Time & Position Transformations

Exercise: Derive from X′ = ṼX the standard relations between times and positions:

$$t' = \gamma\left(t - \frac{\mathbf{v}\cdot\mathbf{x}}{c^2}\right), \qquad \mathbf{x}' = \gamma\left(\mathbf{x} - \mathbf{v}t\right)$$

Keeping in mind that we are considering motion in one dimension (our x-axis), the bivector part of the product vanishes, $\mathbf{v}\wedge\mathbf{x} = 0$, so $\mathbf{v}\mathbf{x}$ reduces to the scalar $\mathbf{v}\cdot\mathbf{x}$. Simply multiply out the terms and collect scalar and vector parts separately:

$$X' = ct' + \mathbf{x}' = \tilde{V}X = \gamma\left(1 - \mathbf{v}/c\right)\left(ct + \mathbf{x}\right)$$

$$ct' = \gamma\left(ct - \frac{\mathbf{v}\cdot\mathbf{x}}{c}\right) \qquad\text{and}\qquad \mathbf{x}' = \gamma\left(\mathbf{x} - \mathbf{v}t\right)$$

Similarly,

$$X = ct + \mathbf{x} = VX' = \gamma\left(1 + \mathbf{v}/c\right)\left(ct' + \mathbf{x}'\right)$$

$$ct = \gamma\left(ct' + \frac{\mathbf{v}\cdot\mathbf{x}'}{c}\right) \qquad\text{and}\qquad \mathbf{x} = \gamma\left(\mathbf{x}' + \mathbf{v}t'\right)$$

We can check these against previous spacetime maps and conclusions, such as on the Solution: Lorentz Contraction Formula page. For example, if $\mathbf{x}' = 0$, we are on the time axis of the primed system, and both sets of formulas correctly give $\mathbf{x} = \mathbf{v}t$ and $ct = \gamma\,ct'$.

In the case $\mathbf{x}' = 0$:

1st set: $ct' = \gamma\left(ct - \mathbf{v}\cdot\mathbf{x}/c\right)$ and $\mathbf{x}' = \gamma\left(\mathbf{x} - \mathbf{v}t\right)$ give $\mathbf{x} = \mathbf{v}t$ and $ct' = \gamma\left(ct - \mathbf{v}\cdot\mathbf{v}t/c\right) = ct/\gamma$.

2nd set: $ct = \gamma\left(ct' + \mathbf{v}\cdot\mathbf{x}'/c\right)$ and $\mathbf{x} = \gamma\left(\mathbf{x}' + \mathbf{v}t'\right)$ give $ct = \gamma\,ct'$ and $\mathbf{x} = \gamma\,\mathbf{v}t' = \mathbf{v}t$.
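Not part of the original solution, but a small numerical check of the same algebra can be done by modelling a 1-D paravector as a (scalar, vector) pair with the product rule used above; everything below (values, variable names) is my own illustration.

```python
# Sketch: for vectors along one axis, (a0 + a1 e)(b0 + b1 e) = (a0 b0 + a1 b1) + (a0 b1 + a1 b0) e,
# since e*e = 1. Applying Vbar = gamma(1 - v/c) to X = ct + x should reproduce the boxed results.
import math

def paravector_mul(a, b):
    """Multiply 1-D paravectors a = (a0, a1) and b = (b0, b1)."""
    return (a[0] * b[0] + a[1] * b[1], a[0] * b[1] + a[1] * b[0])

c, v = 3.0e8, 0.6 * 3.0e8
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

t, x = 2.0, 1.0e8                         # an arbitrary event
X = (c * t, x)                            # X    = ct + x
Vbar = (gamma, -gamma * v / c)            # Vbar = gamma(1 - v/c)

ct_p, x_p = paravector_mul(Vbar, X)       # X' = Vbar X

assert math.isclose(ct_p / c, gamma * (t - v * x / c**2))   # t' = gamma(t - v.x/c^2)
assert math.isclose(x_p, gamma * (x - v * t))               # x' = gamma(x - v t)
print(ct_p / c, x_p)
```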
# Investigating the Length Scale of Bailer-Jones+18 distances

Recent work by Bailer-Jones et al. 2018 (hereafter BJ+18) appropriately inferred distances to stars in the Gaia DR2 sample using a Bayesian analysis, applying a distance prior and using the mode of the posterior distribution this generates as their distance estimate. The prior uses a length scale $L > 0$ to describe the exponentially decreasing space density of targets. The justification for this prior can be found in Bailer-Jones 2015 (hereafter BJ15). This is in opposition to the more 'traditional' case of dividing 1 by the parallax to obtain distance, which does not hold true for targets with large parallax errors or negative parallaxes, which are nonetheless valid astrometric solutions.

The work in BJ+18 uses Galaxia models subdivided into cells across the sky to fit for a value of $L$, before fitting a spherical harmonic model to obtain the length scale $L(l,b)$ as a function of galactic latitude and longitude.

This blog aims to check the validity of these length scales by using TRILEGAL simulations to investigate how the length scale varies depending on stellar type, and how this affects the inferred distances to these targets. I'm not going to draw any lofty conclusions about which length scale is appropriate--- I just aim to initiate a discussion on how we go about using parallaxes and any catalogued distances!

You can find me on: Twitter | Github | ojhall94 -at- gmail -dot- com

If you want to skip how I fit for different values of $L$ and go straight to the plots, click here. If you're just interested in my conclusions, click here to skip to the bottom.

In [1]:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
sns.set_palette('colorblind',10)
sns.set_context('notebook')
matplotlib.rc('xtick', labelsize=15)
matplotlib.rc('ytick', labelsize=15)

import pandas as pd
from astropy.table import Table
from tqdm import tqdm

import omnitool  # You can find this repository on my Github!
from omnitool.literature_values import *

import sys
rerun = True

/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. from ._conv import register_converters as _register_converters

## Let's have a quick look at how the BJ+18 prior length scale L changes as a function of RA and Dec in Kepler

The catalogue below was compiled by Megan Bedell and can be found here!

In [2]:
data = Table.read('/home/oliver/PhD/Gaia_Project/data/KepxDR2/kepler_dr2_1arcsec.fits', format='fits')
kdf = data.to_pandas()
kdf.rename(columns={'kepid':'KICID'}, inplace=True)
print(len(kdf))

195830

Let's also have a look at how these values change (if at all) for an exclusively RC sample, as identified in Yu+18. Given the nature of BJ+18's calculation of L, we don't expect any differences.

In [3]:
sfile = '/home/oliver/PhD/Gaia_Project/data/KepxDR2/MRCxyu18_wdupes_BC.csv'
yu18 = pd.read_csv(sfile)  # Yu+18 red-clump sample crossmatched with DR2
print(len(yu18))

7725

In [4]:
sns.distplot(kdf.r_length_prior, label='Full Kepler LC sample (200k stars)')
sns.distplot(yu18.r_length_prior, label='RC stars from the Yu sample (7k stars)')
plt.legend(fontsize=15)
plt.xlabel('Length Prior (BailerJones)', fontsize=15)
plt.show()

Thus we can confirm that in the catalogue published by BJ+18, there is no evident separate treatment of the distance prior for Red Clump stars.
This is expected, as BJ+18 calculate $L(l,b)$ as a function of galactic position, but its good to check anyway before proceeding to the next step. In [5]: fig, ax = plt.subplots() c = ax.scatter(kdf.ra, kdf.dec,s=0.1,c=kdf.r_length_prior,vmin=kdf.r_length_prior.min(), vmax=kdf.r_length_prior.max()) ax.set_title('Entire Kepler sample: '+str(len(kdf))+' stars',fontsize=15) ax.set_xlabel('Ra',fontsize=15) ax.set_ylabel('Dec',fontsize=15) fig.colorbar(c,label='Length Prior (BailerJones)') fig.tight_layout() plt.show() In this blog, we won't worry about how L changes with galactic position, just how it differs for stellar types. We'll thus be using a isotropic $L$. In a pefect world, we'd have a full skymap fit for L using Galaxia for every stellar type. ## Now lets investigate which values of L are appropriate for different stellar types using TRILEGAL¶ In [6]: tdf = pd.read_csv('/home/oliver/PhD/Gaia_Project/data/TRILEGAL_sim/k1.6b_K15b30_0910_new.all.out.txt',sep='\s+') tdf['Ak'] = omnitool.literature_values.Av_coeffs['Ks'].values[0]*tdf.Av tdf['MK'] = tdf.Ks - tdf['m-M0'] - tdf.Ak tdf['dist'] = 10.0**(tdf['m-M0'] / 5.0 + 1.0) In [7]: m_ks = tdf['Ks'].values mu = tdf['m-M0'].values Av = tdf['Av'].values M = tdf['Mact'].values labels =tdf['stage'].values Zish = tdf['[M/H]'].values logT = tdf['logTe'].values logL = tdf['logL'].values fig, ax = plt.subplots(figsize=(8,8)) label = ['Pre-Main Sequence', 'Main Sequence', 'Subgiant Branch', 'Red Giant Branch', 'Core Helium Burning',\ 'Secondary Clump [?]', 'Vertical Strip [?]', 'Asymptotic Giant Branch','[?]'] for i in range(int(np.nanmax(labels))+1): ax.scatter(logT[labels==i],logL[labels==i],s=5,label=label[i]) ax.legend(loc='best',fancybox=True,fontsize=15) ax.invert_xaxis() ax.set_xlabel(r"$log_{10}(T_{eff})$",fontsize=15) ax.set_ylabel(r'$log_{10}(L)$',fontsize=15) ax.set_title(r"HR Diagram for a TRILEGAL dataset of the $\textit{Kepler}$ field",fontsize=15) ax.grid() ax.set_axisbelow(True) plt.show(fig) Just to clarify, there are a couple of stellar classifications in there that I am uncertain of, and given the label '[?]'. However due to to their position in the HR Diagram I have taken the liberty of classifying them as giant stars. Not also that despite the colour similarities, the stars at the high end of the giant branch do not belong to the subgiant population. We'll plot below the distribution of TRILEGAL distances for different stellar groups, namely 'Giants' (everything RGB and later) and 'Dwarfs' (main sequence stars). Subgiants and pre-MS stars have been excluded. Post-giant stars are not included in the simulation. In [8]: giantmask = tdf.stage >= 3. In [9]: fig = plt.figure(figsize=(10,10)) sns.distplot(tdf.dist, label='all') plt.ylabel('Normalised Counts',fontsize=15) plt.xlabel('TRILEGAL Distance (pc)',fontsize=15) plt.xlim(0, 12000) plt.legend(fontsize=15) plt.show() I have applied a limit on the plot at $12 kpc$ for clarity. Clearly, the distribution of distances is different for different stellar types, as we'd expect from their differing luminosity functions. #### We'll use a simple PyStan model to fit the BJ15 distance prior to the data and find a length scale L appropriate for these stellar groups.¶ This exponentially decreasing space density distance prior, as found in Bailer-Jones15, described further in Bailer-Jones+18, and applied in Pystan in Hawkins+17, goes as ### $P(r | L) = \frac{1}{2L^3}r^2e^{-r/L}$,¶ for $r > 0$ and 0 everywhere else, where L is a length scale. 
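A side note of my own, not in the original post: this prior is (up to normalisation) exactly a Gamma distribution with shape 3 and scale $L$, which makes its mode ($2L$) and mean ($3L$) immediate and gives a quick way to sanity-check the fits below; a minimal sketch, assuming scipy is available:

```python
# Sketch (my addition): P(r|L) = r^2 exp(-r/L) / (2 L^3) is the Gamma(shape=3, scale=L)
# density, so its mode is 2L and its mean is 3L.
from scipy import stats
import numpy as np

L = 1000.0                                    # parsecs, arbitrary test value
r = np.linspace(1.0, 10000.0, 5)
manual = r**2 * np.exp(-r / L) / (2.0 * L**3)
gamma_pdf = stats.gamma.pdf(r, a=3, scale=L)
assert np.allclose(manual, gamma_pdf)         # identical densities
print(stats.gamma(a=3, scale=L).mean())       # 3L = 3000 pc
```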
I have chosen to apply a uninformative uniform prior on $L$ ranging from $0.1pc$ to $4kpc$. In [10]: import pystan lmodel = ''' functions { real bailerjones_lpdf(real r, real L){ return log((1/(2*L^3)) * (r*r) * exp(-r/L)); } } data { int<lower=0> N; real r[N]; } parameters { real<lower=.1, upper=4000.> L; } model { for (n in 1:N){ r[n] ~ bailerjones(L); } } ''' if rerun: sm = pystan.StanModel(model_code=lmodel, model_name='lmodel') else: pass INFO:pystan:COMPILING THE C++ CODE FOR MODEL lmodel_fd8651f5e23d60506623f70ecc9faf8a NOW. ##### Giant stars¶ In [11]: if rerun: dat = {'N':len(d), 'r' : d} fit = sm.sampling(data=dat, iter=1000, chains=2) /usr/local/lib/python2.7/dist-packages/pystan/misc.py:399: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. elif np.issubdtype(np.asarray(v).dtype, float): In [12]: if rerun: L_giant = np.median(fit['L']) fit.plot() plt.show() print(fit) else: L_giant = 1154.42 Inference for Stan model: lmodel_fd8651f5e23d60506623f70ecc9faf8a. 2 chains, each with iter=1000; warmup=500; thin=1; post-warmup draws per chain=500, total post-warmup draws=1000. mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat L 1154.3 0.14 2.65 1148.9 1152.6 1154.4 1156.1 1159.9 369.0 1.0 lp__ -6.5e5 0.06 0.85 -6.5e5 -6.5e5 -6.5e5 -6.5e5 -6.5e5 190.0 1.01 Samples were drawn using NUTS at Fri May 25 12:45:47 2018. For each parameter, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat=1). ##### Dwarf stars¶ In [13]: if rerun: dat = {'N':len(d), 'r' : d} fit = sm.sampling(data=dat, iter=1000, chains=2) In [14]: if rerun: L_dwarf = np.median(fit['L']) fit.plot() plt.show() print(fit) else: L_dwarf = 452.13 Inference for Stan model: lmodel_fd8651f5e23d60506623f70ecc9faf8a. 2 chains, each with iter=1000; warmup=500; thin=1; post-warmup draws per chain=500, total post-warmup draws=1000. mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat L 452.12 0.03 0.58 451.01 451.73 452.14 452.51 453.22 369.0 1.01 lp__ -1.6e6 0.03 0.71 -1.6e6 -1.6e6 -1.6e6 -1.6e6 -1.6e6 422.0 1.0 Samples were drawn using NUTS at Fri May 25 12:49:02 2018. For each parameter, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat=1). ##### Full sample¶ In [15]: if rerun: d = tdf.dist.values dat = {'N':len(d), 'r' : d} fit = sm.sampling(data=dat, iter=1000, chains=2) In [16]: if rerun: L_all = np.median(fit['L']) fit.plot() plt.show() print(fit) else: L_all = 638.7 Inference for Stan model: lmodel_fd8651f5e23d60506623f70ecc9faf8a. 2 chains, each with iter=1000; warmup=500; thin=1; post-warmup draws per chain=500, total post-warmup draws=1000. mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat L 638.68 0.03 0.66 637.39 638.22 638.66 639.09 640.01 448.0 1.0 lp__ -2.6e6 0.03 0.66 -2.6e6 -2.6e6 -2.6e6 -2.6e6 -2.6e6 480.0 1.0 Samples were drawn using NUTS at Fri May 25 12:54:02 2018. For each parameter, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat=1). 
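One cheap cross-check I would add here (not part of the original notebook): because the prior density is a Gamma(3, $L$) distribution, the maximum-likelihood length scale for a set of distances has the closed form mean(r)/3, so the Stan medians above should land very close to these numbers. This assumes, as in the text, that the dwarfs are the main-sequence stars (stage 1) and the giants are stage $\geq$ 3.

```python
# My addition: closed-form MLE cross-check of the Stan fits. For r_i drawn from a
# Gamma(shape=3, scale=L) density, the log-likelihood is maximised at L_hat = mean(r_i)/3.
print('giants:', tdf.dist[tdf.stage >= 3].mean() / 3.)
print('dwarfs:', tdf.dist[tdf.stage == 1].mean() / 3.)
print('all:   ', tdf.dist.mean() / 3.)
```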
#### Now lets look at the results!¶ In [17]: def bjl(r, L): return (1/(2*L**3)) * (r*r) * np.exp(-r/L); In [18]: fig = plt.figure(figsize=(10,10)) sns.distplot(tdf.dist, label='all') plt.plot(np.sort(tdf.dist), bjl(np.sort(tdf.dist), L_all),label='L all: '+str(np.round(L_all,2))) plt.ylabel('Normalised Counts',fontsize=15) plt.xlabel('TRILEGAL Distance (pc)',fontsize=15) plt.legend(fontsize=15) plt.xlim(0., 12000) plt.show() In the Figure it appears that the distribution matches well to dwarf stars and to giant stars separately, but struggles to find a clean fit for the full sample. The value of L also differs by close to a factor of 3 between the two stellar groups, and by a factor of 2 from the value found from a fit to the full set of data. This is important, as the work by BJ+18 describes the calculation of L as being done for patches of sky, but not for different stellar groups. ## Lets investigate how this changes the posterior distributions that distance estimates are drawn from¶ If we assume parallax values to be distributed normally as $N(\varpi | 1/r, \sigma_{\varpi})$, then given the prior on distance and Bayes equation, we can find the ($\textbf{unnormalised!}$) posterior over the distance to be (as seen in BJ+18): ### $P^*(r | \varpi, \sigma_\varpi, L) = r^2 \exp\bigg[-\frac{r}{L} - \frac{1}{2\sigma^2_\varpi} \bigg(\varpi - \frac{1}{r}\bigg)^2\bigg]$¶ for $r > 0$, and 0 everywhere else. Right now we only care about the mode of the posterior and not its power, so we'll be omitting the normalisation for the rest of this blog. Note that I have made some changes from the version given in BJ+18; L is no longer a function of galactic position, and I have omitted the global parallax zeropoint $\varpi_{zp}$ as I am working with simulated data. I should probably note that TRILEGAL provides distance modulus and not parallax, so my synthetic parallaxes will be generated as $1/r$, and uncertainties will be inserted at the parallax level. The use of $1/r$ doesn't matter for what we're doing, as we want to see the $\textbf{quantitative}$ difference a change in $L$ and $\sigma_\varpi$ make to the esitmated distance, and we will assume the uncertainties on the simulated distances to be practically zero. In [19]: def postprob_un(r, L, oo, oo_err): return r**2 * np.exp(-(r/L) - (1/(2*oo_err**2))*(oo - (1./r))**2) In [20]: r = np.linspace(0., 10000, 100000) tdf['oo'] = 1/tdf.dist tdf['oo_err'] = .1*tdf.oo In [21]: def plot_postprob_un(r, L_dwarf, L_all, L_giant, oo, oo_err): fig = plt.figure() plt.plot(r,postprob_un(r, L_dwarf, oo, oo_err),label='L Dwarf: '+str(np.round(L_dwarf,2))) plt.plot(r,postprob_un(r, L_all, oo, oo_err),label='L All: '+str(np.round(L_all,2))) plt.plot(r,postprob_un(r, L_giant, oo, oo_err),label='L Giant: '+str(np.round(L_giant,2))) plt.plot(label='Test') plt.title(r'Unnormalised(!) 
posterior distributions for a given $\varpi$ and $\sigma_\varpi$.',fontsize=15) plt.xlabel('Distance (pc)',fontsize=15) plt.ylabel('Arbitrary units',fontsize=15) plt.yticks([]) fig.tight_layout() plt.legend(fontsize=15) print('Parallax: '+str(np.round(1000*oo,3))+' mas') print('Error: '+str(np.round(1000*oo_err,3))+' mas == '+str(oo_err*100/oo)+'%') plot_postprob_un(r, L_dwarf, L_all, L_giant, tdf.oo[0], tdf.oo_err[0]*5) /usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:2: RuntimeWarning: divide by zero encountered in divide Parallax: 1.096 mas Error: 0.548 mas == 50.0% As expected, the shape of the posterior is very different to a Gaussian distribution for uncertain targets, and thus the $1/r$ transformation does not hold for these objects. In [22]: plot_postprob_un(r, L_dwarf, L_all, L_giant, tdf.oo[0], tdf.oo_err[0]) /usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:2: RuntimeWarning: divide by zero encountered in divide Parallax: 1.096 mas Error: 0.11 mas == 10.0% As you can see, if we decrease the uncertainty, the result becomes a lot more constrained, the modes of the posteriors for different L are closer, and the posteriors tend towards a normal distribution in the limit that $\sigma_\varpi \rightarrow 0.$ As stated in BJ+18: While the posterior[s] [above] [are] the complete description of the distance to the source, we often want to use a single point estimate along with some measure of the uncertainty. [...] As the point estimator $r_{est}$, we prefer here the mode, $r_{mode}$. This is found analytically by solving a cubic equation. This cubic equation can be found by setting $P^*(r | \varpi, \sigma_\varpi, L)/dr = 0$, which gives ### $\frac{r^3}{L} - 2r^2 + \frac{\varpi}{\sigma_\varpi^2}r - \frac{1}{\sigma_\varpi^2} = 0$¶ We evaluate this using $\texttt{numpy.roots}$ in this blog, and employ the following evaluation criteria given in BJ15: Inspection of the roots leads to the following strategy for assigning the distance estimator [...] from the modes: • If there is one real root, it is a maximum: select this as the mode. • If there are three real roots, there are two maxima: • If $\varpi \geq 0$, select the smallest root as the mode. • If $\varpi < 0$, select the mode with r > 0 (there is only one). Note that the latter is not relevant to this test, as we do not have any negative parallax in the sample. I've nonetheless included the criterion in the code in case anybody wants to apply it elsewhere. We will be calculating $r_{mode}$ for each star in the TRILEGAL sample using each of the values of L, and for a range of parallax uncertainties. 
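Before looping over the full sample, here is a quick sanity check of my own (not from the original post) that the positive real root of this cubic really does coincide with a brute-force grid maximisation of the unnormalised posterior defined earlier; the test parallax and uncertainty below are arbitrary values chosen for illustration.

```python
# My addition: check that the stationary point of P*(r | w, s, L) obtained from the cubic
# r^3/L - 2 r^2 + (w/s^2) r - 1/s^2 = 0 matches a brute-force grid argmax.
L_test = 638.7                      # pc, roughly the full-sample length scale found above
w, s = 1.0e-3, 2.0e-4               # parallax of 1 mas (i.e. 1000 pc) with a 20% uncertainty

roots = np.roots([1.0 / L_test, -2.0, w / s**2, -1.0 / s**2])
real_roots = np.real(roots[np.isreal(roots)])
cands = real_roots[real_roots > 0]
r_mode = cands[np.argmax(postprob_un(cands, L_test, w, s))]   # best stationary point

grid = np.linspace(1., 20000., 200001)
r_grid = grid[np.argmax(postprob_un(grid, L_test, w, s))]

print(r_mode, r_grid)               # the two estimates should agree to within the grid spacing
```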
In [23]: def get_roots(L, oo, oo_err): p = np.array([1./L, -2, oo/oo_err**2., -1./oo_err**2.]) fullroots = np.roots(p) roots = fullroots[np.isreal(fullroots)] #Make sure we take only the true values if len(roots) == 1: return float(roots[0]) if len(roots) == 3: if oo >= 0.: return float(np.min(roots)) if oo < 0.: return float(roots[roots > 0][0]) else: print('You shouldnt be here, printing roots below for diagnostic:') print(roots) def get_modes(L, oo, oo_err): return np.array([get_roots(L, o, err) for o, err in zip(oo, oo_err)]) In [24]: idf = pd.DataFrame() Ls = {'all':L_all, 'dwarfs':L_dwarf, 'giants':L_giant} types = ['all','dwarfs','giants'] errange = np.arange(.05,.55,.05) for ltype in types: for err in tqdm(errange): label='r_'+ltype+'_'+str(np.round(err,2)) idf[label] = get_modes(Ls[ltype], tdf.oo, tdf.oo*err) idf['oo'] = tdf.oo idf['oo_err'] = tdf.oo_err idf['r_true'] = tdf.dist idf.sort_values('r_true', inplace=True) 0%| | 0/10 [00:00<?, ?it/s]/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:7: ComplexWarning: Casting complex values to real discards the imaginary part import sys 100%|██████████| 10/10 [03:06<00:00, 18.62s/it] 100%|██████████| 10/10 [03:06<00:00, 18.66s/it] 100%|██████████| 10/10 [03:06<00:00, 18.65s/it] Out[24]: r_all_0.05 r_all_0.1 r_all_0.15 r_all_0.2 r_all_0.25 r_all_0.3 r_all_0.35 r_all_0.4 r_all_0.45 r_all_0.5 ... r_giants_0.2 r_giants_0.25 r_giants_0.3 r_giants_0.35 r_giants_0.4 r_giants_0.45 r_giants_0.5 oo oo_err r_true 195791 11.538996 11.718551 12.044779 12.571975 13.422322 14.939718 19.350815 1240.729387 1248.590449 1254.152411 ... 12.577836 13.435288 14.973856 19.654243 2272.565509 2280.273743 2285.755343 0.087096 0.008710 11.481536 138419 16.678583 16.936988 17.406198 18.163610 19.382586 21.546020 27.593002 1223.935147 1235.531098 1243.691182 ... 18.175829 19.409547 21.616494 28.158390 2256.151045 2267.413139 2275.399956 0.060256 0.006026 16.595869 288988 17.464566 17.734968 18.225918 19.018288 20.293095 22.553763 28.837344 1221.338056 1233.519145 1242.083577 ... 19.031681 20.322637 22.630896 29.447384 2253.626147 2265.438619 2273.812141 0.057544 0.005754 17.378008 288989 17.464566 17.734968 18.225918 19.018288 20.293095 22.553763 28.837344 1221.338056 1233.519145 1242.083577 ... 19.031681 20.322637 22.630896 29.447384 2253.626147 2265.438619 2273.812141 0.057544 0.005754 17.378008 100114 17.464566 17.734968 18.225918 19.018288 20.293095 22.553763 28.837344 1221.338056 1233.519145 1242.083577 ... 
19.031681 20.322637 22.630896 29.447384 2253.626147 2265.438619 2273.812141 0.057544 0.005754 17.378008 5 rows × 33 columns ## Now lets make some illustrative plots!¶ In [25]: bisector = np.linspace(idf['r_true'].min(), idf['r_true'].max(), 10) ### First, lets see how the estimated mode distance compares to the 'true' TRILEGAL distance for each length scale, for various fractional uncertainties¶ In [26]: fig, ax = plt.subplots(2,2,figsize=(12,12)) for err in errange: ax[1,0].loglog(idf['r_true'],idf['r_all_'+str(np.round(err,2))]) ax[1,1].plot(idf['r_true'][0:1],idf['r_true'][0:1],label='Uncertainties: '+str(np.round(err,2)*100)+'\%') ax[0,0].plot(bisector,bisector,linestyle='-.',c='k') ax[0,0].axvline(2*L_dwarf,linestyle='--',alpha=.5) ax[0,0].axhline(2*L_dwarf,linestyle='--',alpha=.5) ax[0,0].set_xlabel('TRILEGAL distance (pc)',fontsize=15) ax[0,0].set_ylabel(r'$r_{\rm dwarfs}$ (pc) for various fractional errors',fontsize=20) ax[0,0].set_title('Dwarves only: '+str(len(idf['r_true'][dwarfmask]))+' stars | L: '+str(np.round(L_dwarf,2)), fontsize=20) ax[0,0].grid() ax[0,1].plot(bisector,bisector,linestyle='-.',c='k') ax[0,1].axvline(2*L_giant,linestyle='--',alpha=.5) ax[0,1].axhline(2*L_giant,linestyle='--',alpha=.5) ax[0,1].set_xlabel('TRILEGAL distance (pc)',fontsize=15) ax[0,1].set_ylabel(r'$r_{\rm giants}$ (pc) for various fractional errors',fontsize=20) ax[0,1].set_title('Giants only: '+str(len(idf['r_true'][giantmask]))+' stars | L: '+str(np.round(L_giant,2)), fontsize=20) ax[0,1].grid() ax[1,0].plot(bisector,bisector,linestyle='-.',c='k') ax[1,0].axvline(2*L_all,linestyle='--',alpha=.5) ax[1,0].axhline(2*L_all,linestyle='--',alpha=.5) ax[1,0].set_xlabel('TRILEGAL distance (pc)',fontsize=15) ax[1,0].set_ylabel(r'$r_{\rm all}$ (pc) for various fractional errors',fontsize=20) ax[1,0].set_title('All stars: '+str(len(idf['r_true']))+' stars | L: '+str(np.round(L_all,2)), fontsize=20) ax[1,0].grid() ax[1,1].plot(bisector,bisector,linestyle='-.',c='k',label='Bisector') ax[1,1].axvline(2*L_dwarf,linestyle='--',alpha=.5, label=r'$2L$') ax[1,1].set_yticks([]) ax[1,1].set_xticks([]) ax[1,1].set_xlim(-5,-4) ax[1,1].set_ylim(-5,-4) ax[1,1].legend(fancybox=True,loc='center',fontsize=20) ax[1,1].spines['bottom'].set_edgecolor('white') ax[1,1].spines['top'].set_edgecolor('white') ax[1,1].spines['left'].set_edgecolor('white') ax[1,1].spines['right'].set_edgecolor('white') fig.suptitle('Distance estimated using BJ+18 method vs TRILEGAL distance',fontsize=20) fig.tight_layout(rect=[0, 0.03, 1, 0.95]) plt.show() There are a number of important observations we can make about these plots, given the prior we apply on the distance: • The distances match the bisector relatively well for all values of $L$ for low uncertainties. This is expected-- at low uncertainties the prior will play a relatively small role in the posterior distribution and thus the choice of $L$ will have less impact. • There is a turning point for the residuals across the bisector which lies at a distance of $2L$ for each $L$. This value is the location of the mode of the distance prior. This again is as we expect; at $r < 2L$, targets with larger uncertainties will be biased towards a higher estimate of distance. For targets at $r > 2L$, they will be biased towards a lower estimate of distance, which is reflected in the shape of the residuals. • At low distances with large fractional errors the stars start behaving particularly strangely. This again, is expected. 
Small $r$ means high $\varpi$, meaning quantatively large $\sigma_\varpi$ at high fractional uncertainties. When fractional uncertainties are this high, the prior dominates the distance estimate. Since these are high parallax stars, the true distance is low, wheras the prior forces the estimate towards a value of $r = 2L$. This is especially visible for the closest stars, which clearly have distance estimates that lie at the $2L$ position. We'll compare $r_{dwarfs}$ and $r_{giants}$ to the 'true' TRILEGAL distance $r_{true} respectively, and will plot the fractional difference in radius, ie: ###$\Delta_{true, dwarfs} = \frac{r_{dwarfs} - r_{true}}{r_{true}}$¶ For the sake of clarity, I will not be including any of the data for fractional uncertainties$>35 \%$, as we now know that the distance estimates for these stars inflate wildly at small distances. In [27]: idf['r_true'] = tdf['dist'] for err in tqdm(errange): idf['d_true_dwarfs_'+str(np.round(err,2))] = (idf['r_dwarfs_'+str(np.round(err,2))] - idf['r_true'])/idf['r_true'] idf['d_true_giants_'+str(np.round(err,2))] = (idf['r_giants_'+str(np.round(err,2))] - idf['r_true'])/idf['r_true'] idf['d_true_all_'+str(np.round(err,2))] = (idf['r_all_'+str(np.round(err,2))] - idf['r_true'])/idf['r_true'] 100%|██████████| 10/10 [00:00<00:00, 127.13it/s] In [28]: fig, ax = plt.subplots(2,2,figsize=(12,12)) for err in errange[0:-3]: ax[0,0].semilogx(idf['r_true'][dwarfmask],idf['d_true_dwarfs_'+str(np.round(err,2))][dwarfmask]) ax[0,1].semilogx(idf['r_true'][giantmask],idf['d_true_giants_'+str(np.round(err,2))][giantmask]) ax[1,0].semilogx(idf['r_true'],idf['d_true_all_'+str(np.round(err,2))]) ax[1,1].plot(idf['r_true'][0:1],idf['r_true'][0:1],label='Uncertainties: '+str(np.round(err,2)*100)+'\%') ax[0,0].axvline(2*L_dwarf,linestyle='--',alpha=.5) ax[0,0].set_xlabel('TRILEGAL distance (pc)',fontsize=15) ax[0,0].set_ylabel(r'$\Delta_{\rm true, dwarfs}$(pc) at various fractional errors',fontsize=20) ax[0,0].set_title('Dwarves only: '+str(len(idf['r_true'][dwarfmask]))+' stars | L: '+str(np.round(L_dwarf,2)), fontsize=20) ax[0,0].grid() ax[0,1].axvline(2*L_giant,linestyle='--',alpha=.5) ax[0,1].set_xlabel('TRILEGAL distance (pc)',fontsize=15) ax[0,1].set_ylabel(r'$\Delta_{\rm true, giants}$(pc) at various fractional errors',fontsize=20) ax[0,1].set_title('Giants only: '+str(len(idf['r_true'][giantmask]))+' stars | L: '+str(np.round(L_giant,2)), fontsize=20) ax[0,1].grid() ax[1,0].axvline(2*L_all,linestyle='--',alpha=.5) ax[1,0].set_xlabel('TRILEGAL distance (pc)',fontsize=15) ax[1,0].set_ylabel(r'$\Delta_{\rm true, all}$(pc) at various fractional errors',fontsize=20) ax[1,0].set_title('All stars: '+str(len(idf['r_true']))+' stars | L: '+str(np.round(L_all,2)), fontsize=20) ax[1,0].grid() ax[1,1].axvline(2*L_dwarf,linestyle='--',alpha=.5, label=r'$2L\$') ax[1,1].set_yticks([]) ax[1,1].set_xticks([]) ax[1,1].set_xlim(-5,-4) ax[1,1].set_ylim(-5,-4) ax[1,1].legend(fancybox=True,loc='center',fontsize=20) ax[1,1].spines['bottom'].set_edgecolor('white') ax[1,1].spines['top'].set_edgecolor('white') ax[1,1].spines['left'].set_edgecolor('white') ax[1,1].spines['right'].set_edgecolor('white') fig.suptitle('Fractional difference in distance at different L vs TRILEGAL distances',fontsize=20) fig.tight_layout(rect=[0, 0.03, 1, 0.95]) plt.show()
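To put rough numbers on these plots, here is a compact summary table I would add (not in the original notebook): the median fractional offset for each length scale and each fractional parallax uncertainty, built from the columns computed above.

```python
# My addition: median fractional offset of the mode-distance estimate from the TRILEGAL
# distance, for each choice of length scale L and each uncertainty level plotted above.
summary = pd.DataFrame(
    {str(np.round(err, 2)): [idf['d_true_' + ltype + '_' + str(np.round(err, 2))].median()
                             for ltype in types]
     for err in errange[:-3]},
    index=types)
print(summary)
```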
# Maximum Matchings and RNA Secondary Structures solved by 1108 March 22, 2013, 7:15 p.m. by Rosalind Team ## Breaking the Bonds In “Perfect Matchings and RNA Secondary Structures”, we considered a problem that required us to assume that every possible nucleotide is involved in base pairing to induce an RNA secondary structure. Yet the only way this could occur is if the frequency of adenine in our RNA strand is equal to the frequency of uracil and if the same holds for guanine and cytosine. We will therefore begin to explore ways of counting secondary structures in which this condition is not required. A more general combinatorial problem will ask instead for the total number of secondary structures of a strand having a maximum possible number of base pairs. ## Problem Figure 1. The bonding graph of s = UAGCGUGAUCAC (left) has a perfect matching of basepair edges, but this is not the case for t = CAGCGUGAUCAC (right), in which one symbol has been replaced. Figure 2. A maximum matching (highlighted in red) is shown in each of the three graphs above. You can verify that no other matching can contain more edges. (Courtesy: Miym, Wikimedia Commons User) Figure 3. A red maximum matching of basepair edges in the bonding graph for t = CAGCGUGAUCAC. The graph theoretical analogue of the quandary stated in the introduction above is that if we have an RNA string $s$ that does not have the same number of occurrences of 'C' as 'G' and the same number of occurrences of 'A' as 'U', then the bonding graph of $s$ cannot possibly possess a perfect matching among its basepair edges. For example, see Figure 1; in fact, most bonding graphs will not contain a perfect matching. In light of this fact, we define a maximum matching in a graph as a matching containing as many edges as possible. See Figure 2 for three maximum matchings in graphs. A maximum matching of basepair edges will correspond to a way of forming as many base pairs as possible in an RNA string, as shown in Figure 3. Given: An RNA string $s$ of length at most 100. Return: The total possible number of maximum matchings of basepair edges in the bonding graph of $s$. ## Sample Dataset >Rosalind_92 AUGCUUC ## Sample Output 6
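Not part of the problem statement, but for readers who want a starting point: because every A can bond with every U and every G with every C, a maximum matching uses min(#A, #U) A–U edges and min(#G, #C) G–C edges, so the count is a product of two falling factorials. A minimal sketch of one way to compute it:

```python
# Sketch of one way to count maximum matchings of basepair edges.
# With a adenines and u uracils, a maximum matching uses min(a, u) A-U edges, and there
# are max(a, u)! / (max(a, u) - min(a, u))! ways to choose them; likewise for G and C.
from math import factorial

def falling_factorial(n, k):
    return factorial(n) // factorial(n - k)

def max_matchings(rna):
    a, u = rna.count('A'), rna.count('U')
    g, c = rna.count('G'), rna.count('C')
    return (falling_factorial(max(a, u), min(a, u)) *
            falling_factorial(max(g, c), min(g, c)))

print(max_matchings("AUGCUUC"))   # 6, matching the sample output
```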
# Razib KhanOne-stop-shopping for all of my content ## July 31, 2017 Filed under: Open Thread — Razib Khan @ 1:25 am Read a bit of The Unholy Consult. People who say George R R Martin’s work is too dark? They need to really read a bit of R. Scott Bakker, and Martin will seem to like someone who sees the world through rose-colored glasses. I’m thinking of reading The Witchwood Crown later because I might need a pick-me-up after The Unholy Consult. I’ve also had The Wise Man’s Fear in my Kindle stack for over five years now, but I plan on reading it when Patrick Rothfuss finishes the series with book 3. Speaking of fantasy, there is a lot of commentary on Game of Thrones. Always. Some of it is quite dumb. For instance, Game of Thrones and race: who are the non-white characters and where are they from in the books and show? To make a sound argument you actually need to know something about the books. The writer does not. For example, “The Targaryen monarchs, who ruled Westeros for hundreds of years but, thanks to their thing for incest, never really bred all that much with the locals.” This is false. Daenerys is only 1/8th Valyrian (at most). Half her recent ancestry is from a First Men house, the Blackwoods (though it surely has much Andal blood too). About 3/8th of her recent ancestry is Dornish, so a mix of Andal, First Men, and Rhoynish. Second, George R. R. Martin published the first book in the series in 1996. It was on his mind for years before that. Obviously if he was writing these books today he’d tune them so they were more in sync with the cultural politics of the contemporary Left (since that is where his own personal sympathies lie). But it isn’t as if he can go back and rewrite the major characters and add some diversity which some of his fans might now want. The 1990s were a different time. I recall back on some message boards that Renly’s sexual orientation was an issue for some readers. Martin was arguably ahead of the times on that score. There are fantasy works where the central characters are nonwhite. Both Judith Tarr in Avaryan series and Ursula K. Le Guin’s Earthsea series have been around for a while. And both these worlds have the added benefit of not being standard Tolkienesque medieval settings. Inside Facebook’s Rapid Growth in Austin. Their presence is felt. Kimura & Crow: Infinite alleles. Really great piece on the working relationship between Motoo Kimura and James F. Crow. About 11 years I emailed Crow 10 questions on a lark. He responded in less than a day. Also, Kimura and Crow’s An Introduction to Population Genetics Theory is worth getting (it’s cheap). Divorce and Occupation. No surprise that there’s a correlation between income and divorce rate (negative). But some professions are outliers. Bartender and nurse anesthetists are above the trend line (more divorce than their income predicts). Clerics and actuaries are well below it. Postdoctoral positions in human population genomics, nutrigenomics, & association studies at Cornell in Alon Keinan’s lab. The TakingHayekSeriously Twitter account has been passing along pieces and posts around the controversy surrounding Nancy Maclean’s Democracy in Chains: The Deep History of the Radical Right’s Stealth Plan for America. The book is ridiculous. So ridiculous that Vox published a piece Even the intellectual left is drawn to conspiracy theories about the right. Resist them. I’ve been very loosely associated with libertarians because of my political sympathies for a long time. 
Years ago I actually visited The Center for Study of Public Choice where James Buchanan had his office because my friend Garett Jones had his office there. There’s no conspiracy here, or secret cabals under the radar. Libertarians are by and large a nerdy group of radicals fixated on stuff like the nonaggression principle. Just like you see on the internet. Kooky. Yes. But a cabal? Have you met libertarians? They don’t have the aptitude for that sort of coordination (Radicals for Capitalism is really the book to understand libertarianism, in particular because Buchanan and public choice theory have a minor role at best to play in libertarianism). But that doesn’t matter. Democracy in Chains will validate the suspicions and beliefs of many people. And it’s a footnoted academic work. Unless it’s obvious fraud it’s going to be a success in influencing people. Remember that Arming America won the Bancroft Prize for outstanding work of American history in 2000. Arming America was likely a work of fraud in large part. But its thesis, that America’s gun culture did not date to the colonial era, was congenial to the political ideology of many historians. Therefore even though it did not pass the smell test they gave the book rave reviews. I’d be surprised if  Democracy in Chains is a work of fraud. The author just doesn’t know what she’s talking about, but she is telling a story her audience wants to hear, with some academic credibility to boot (and so far historians have supposedly supported her). The population genomics of archaeological transition in west Iberia: Investigation of ancient substructure using imputation and haplotype-based methods. I think these dynamics are going to be relatively common. I must say, I don’t recommend All Things Made New: The Reformation and Its Legacy. The title suggests a broader work than it is. Far too much space is given to the English Reformation. Just thought I’d mention that. Reading some of Big Gods: How Religion Transformed Cooperation and Conflict. Broadly agree with the thesis I think…but wondering about the replication of some of the experiments cited. Also in my stack, The Red Flag: A History of Communism. Andrew Sullivan notices in this week’s column that Islam seems to now be untouchable on the Left. This is going to too far, but liberals who express anti-Islamic sentiments are getting rather rare, and though privately many on the Left have serious issues with Islam (I know, because they tell me privately) they’re careful not to say it out loud lest they be attacked as racist. My own view is that there are 1.6 billion Muslims, so it makes sense for the Left to align with them. Isn’t world domination worth a hijab? Don’t blame the Empire. Alex Tabarrok takes some deserved shots at Shashi Tharoor’s An Era of Darkness: The British Empire in India. The anti-colonialism tick often gets out of control among Indians, to the point where all evils are heaped upon the British. This is a major aspect of post-colonialism, which “erases” all identity-forming events before the arrival of Europeans. Twitter lost 2 million users in the U.S. last quarter. Shit’s getting real Jack. If you use RSS, subscribe to my feed! I also have a mailing list, where I’ve sent out exactly one email so far. But if Twitter goes down…. Can 23andMe Tell Us If Jews Are A Race — And Is That A Good Thing? The author interviews scientists who know the science, but he still manages to garble and confuse everything. 
First, Ashkenazi Jews descend from a endogamous community which flourished in Central Europe probably no earlier than ~1000 AD. That is why a Ashkenazi Jewish cluster emerges naturally out of the population genetic data; there’s a real coherent demographic history being reflected. Whether that’s a “race” or not I’ll leave to the reader. Second the story states that “Sephardic Jews are not considered a distinct population by either company, or by researchers — their genetic make-up is not sufficiently different from surrounding North African, Iberian and Greek populations.” This very misleading. To a great extent Sephardic Jews are rather distinct from the surrounding populations. There is some evidence of shared ancestry in Moroccan Jews with Moroccan Berbers (I know because I’ve looked at a lot of this data), but it’s a small proportion. Similar things can be said about most Sephardic communities. But, they are not nearly as coherent a genetic cluster as Ashkenazi Jews. There has been some gene flow and assimilation with many local Jewish populations (e.g., the Syrian Sephardic Jews absorbed a local Levantine Jewish community, which had its own liturgy until the 19th century). Neanderthal-Derived Genetic Variation Shapes Modern Human Cranium and Brain. Many people skeptical of the robustness of this result. ## July 30, 2017 ### The culture of reasoning: the Ummah shall not agree upon error Because I watch Screen Junkies‘ “Honest Trailers” I get recommendations like the above from Looper, The Real Reason Why Valerian Flopped At The Box Office. Of course no one knows the ‘real reason’ Valerian flopped, aside from “it didn’t seem like a good movie.” The reality is that Valerian and the City of a Thousand Planets based on a French comic book and cast a 31 year old actor who looks like a haunted 15 year old. That’s all there is to say definitively. All the various failure points are overdetermined. But the video above gives you a lot of “reasons” if you want them in a list format in a British accent. All in the service of infotainment. Hugo Mercier and Dan Sperber’s The Enigma of Reason offers up an explanation for you have things like “top 10 reasons” for pop culture artifacts of an ephemeral nature (for a preview, The Function of Reason at Edge). I’ve mentioned this book a few times. I finished while in the Persian Gulf (I’ll blog that at some point soon), and have been ruminating on its implications, and whether to mention it further. The issue I’m having is that I am very familiar with Sperber’s work, and those who he has influenced, and research domains complementary to his. Even if I didn’t know all the details of the argument in The Enigma of Reason, in the broad sketches I knew where they were going, and frankly I could anticipate it. I suppose somewhat ironically I managed to infer and reason ahead of the narrative since I had so many axioms from earlier publications. The Enigma of Reason comes out of a particular tradition in cognitive anthropology. What Dan Sperber terms the “naturalistic paradigm” in anthropology. This is in contrast to the more interpretative framework that you are probably familiar with in the United States. No one would deny that the naturalistic paradigm has scientific aspirations. That is, it draws from natural science (in particular cognitive anthropology as well as the field of cultural evolution), and conceives of itself as the study of natural phenomenon. 
Scott Atran’s In Gods We Trust: The Evolutionary Landscape of Religion comes out of this tradition, and some of the experimental literature in The Enigma of Reason seem very familiar from the earlier book (as well as Pascal Boyer’s Religion Explained). This is due to the fact that Atran goes to great lengths to show the ultimate nature of religion does not have to do with rational inferences as we understand them. That is, theological is a superstructure overlain atop a complex phenomenon which is not about philosophical reflection at all. Of course the flip side can be true as well. When I was a teenager and younger adult I explored the literature on the existence of God a bit, from old classics like Thomas Aquinas’ arguments in Summa Theologica, to more recent and contrasting proofs of Norman Malcolm and Richard Swinburne (Michael Martin’s Atheism: A Philosophical Justification was actually a good sourcebook for high level arguments to theism). When I read In Gods We Trust I realized that my earlier explorations were primarily intellectual justifications, and had little relationship why most people around me believed in God. And yet how did I become an atheist? For me this is a flashbulb memory. I was eight years old, in the public library. I was thumbing through the science books in the children’s section (particular, books on biology and medicine). The third row from the front of the stacks. And all of a sudden I had the insight that there wasn’t a necessary reason for the existence of God. It all happened over the course of a minute or so. Mind you, I was never really religious in any deep sense. Something I’ve confirmed when talking to religious friends about their beliefs and how it impacts them. Though I nominally adhered to my parents’ religion when I was a small child, I was fascinated much more by science, and that really engaged most of my thoughts and guided my actions (contrastingly, going to the mosque was one of the most horribly boring things I recall doing as a small child). My point here is that many of our beliefs are arrived at in an intuitive manner, and we find reasons to justify those beliefs. One of the core insights you’ll get from The Enigma of Reason is that rationalization isn’t that big of a misfire or abuse of our capacities. It’s probably just a natural outcome for what and how we use reason in our natural ecology. Mercier and Sperber contrast their “interactionist” model of what reason is for with an “intellectualist: model. The intellecutalist model is rather straightforward. It is one where individual reasoning capacities exist so that one may make correct inferences about the world around us, often using methods that mimic those in abstract elucidated systems such as formal logic or Bayesian reasoning. When reasoning doesn’t work right, it’s because people aren’t using it for it’s right reasons. It can be entirely solitary because the tools don’t rely on social input or opinion. The interactionist model holds that reasoning exists because it is a method of persuasion within social contexts. It is important here to note that the authors do not believe that reasoning is simply a tool for winning debates. That is, increasing your status in a social game. Rather, their overall thesis seems to be in alignment with the idea that cognition of reasoning properly understood is a social process. In this vein they offer evidence of how juries may be superior to judges, and the general examples you find in the “wisdom of the crowds” literature. 
Overall the authors make a strong case for the importance of diversity of good-faith viewpoints, because they believe that the truth on the whole tends to win out in dialogic formats (that is, if there is a truth; they are rather unclear and muddy about normative disagreements and how those can be resolved). The major issues tend to crop up when reasoning is used outside of its proper context.

One of the examples from the literature in The Enigma of Reason, which you are surely familiar with, is a psychological experiment with two conditions, where the researchers vary the conditions and note wide differences in behavior. In particular, the experiment where psychologists put subjects into a room where someone out of view is screaming for help. When they are alone, they quite often go to see what is wrong immediately. In contrast, when there is a confederate of the psychologists in the room who ignores the screaming, people also tend to ignore the screaming.

The researchers know the cause of the change in behavior. It's the introduction of the confederate and that person's behavior. But the subjects when interviewed give a wide range of plausible and possible answers. In other words, they are rationalizing their behavior when called to justify it in some way. This is not entirely unexpected; we all know that people are very good at coming up with answers to explain their behavior (often in the best light possible). But that doesn't mean they truly understand their internal reasons, which seem to be more about intuition.

But much of The Enigma of Reason also recounts how bad people are at coming up with coherent and well thought out rationalizations. That is, their "reasons" tend to be ad hoc and weak. We're not very good at formal logic or even simple syllogistic reasoning. The explanation for this seems to be two-fold.

First, reason is itself an intuitive process. For the past few weeks we've had an intern at the office. I've given them a project using Python…a language they barely know. One of the things that is immediately obvious when going through pitfalls is that a lot of the debugging process relies on intuition one accrues over time, through trial and error. When someone is learning a programming language they don't have this intuition, so bugs can be extremely difficult to overcome, since they don't have a good sense of the likely distribution of errors they'd introduce into the system (or, to be concrete, a novice programmer might not even recognize that there's an unclosed loop, when that is one of the most obvious errors to anyone with experience; see the sketch below).

Second, reason is an iterative process which operates optimally in a social context. While The Enigma of Reason reviews all the data which suggests that humans are poor at formal logic and lazy in relation to the production of reasons, the authors also assert that we are skeptical of alternative models. This rings true. I recall an evangelical Protestant friend who once told me how ridiculous the idea of Hindu divine incarnations was. He was less than pleased when I simply switched his logic to a Christian context.

But Mercier and Sperber suggest that these two features, loose positive production of reasons and tighter negative skepticism of those reasons, come together in a social context to converge upon important truths which might increase our reproductive fitness. The framework above is fundamentally predicated on methodological individualism, focusing on natural selection at that level.
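To make the debugging example above concrete, here is a minimal hypothetical sketch of the kind of beginner mistake I have in mind, reading an "unclosed loop" as a loop whose exit condition is never updated. The function and the bug are invented for illustration; this is not code from the intern's actual project.

```python
# Hypothetical example of a classic novice bug, invented for illustration.
# The loop counter is never advanced, so the condition never changes and
# the function hangs instead of returning.
def count_vowels(word):
    total = 0
    i = 0
    while i < len(word):
        if word[i] in "aeiou":
            total += 1
        # Bug: a missing `i += 1` here leaves `i` stuck at 0 forever.
    return total

# An experienced programmer's eye jumps straight to the loop counter;
# a novice just sees a frozen terminal and has no prior sense of where
# the problem is likely to be.
```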
The encephalization of humans over the past two million years was driven by increased social complexity, and this social complexity was enabled by the powerful ability of individual humans to reason and relate. In some ways The Enigma of Reason co-opts some of the same arguments presented by Robin Dunbar over ten years ago in Grooming, Gossip, and the Evolution of Language, except putting the emphasis on persuasion and reasoning.

At this point we need to address the elephant in the room: some humans seem extremely good at reasoning in a classical sense. I'm talking about individuals such as Blaise Pascal, Carl Friedrich Gauss, and John von Neumann. Early on in The Enigma of Reason the authors point out the power of reason by alluding to Eratosthenes's calculation of the circumference of the earth, which was only off by one percent. Myself, I would have mentioned Archimedes, who I suspect was a genius on the same level as the ones mentioned above.

Mercier and Sperber state near the end of the book that math in particular is special and a powerful way to reason. We all know this. In math the axioms are clear, and agreed upon. And one can inspect the chain of propositions in a very transparent manner. Mathematics has guard-rails for any human who attempts to engage in reasoning. By reducing the scope for unforced errors, math is the ideal avenue for solitary individual reasoning. But it is exceptional.

Second, though it is not discussed in The Enigma of Reason, there does seem to be variation in general and domain specific intelligence within the human population. People who flourish in mathematics usually have high general intelligence, but they also often exhibit a tendency to be able to engage in high levels of visual-spatial conceptualization. On the whole the more intelligent you are the better you are able to reason. But that does not mean that those with high intelligence are immune from the traps of motivated reasoning or faulty logic. Mercier and Sperber give many examples. Here are two. Linus Pauling was indisputably brilliant, but by the end of his life he was consistently pushing Vitamin C quackery (in part through a very selective interpretation of the scientific literature).* They also point out that much of Isaac Newton's prodigious intellectual output turns out to have been focused on alchemy and esoteric exegesis which is totally impenetrable. Newton undoubtedly had a first class mind, but if the domain it was applied to was garbage, then the output was also garbage.

A final issue, implicit in the emergence of genius, is that it can only manifest in a particular social context. Complex societies with some economic surplus and specialization are necessary for cognitive or creative genius to truly shine. In a hunter-gatherer egalitarian society having general skills to subsist on the Malthusian margin is more critical than being an exceptional mind.**

Overall, the take-homes are:

• Reasoning exists to persuade in a group context through dialogue, not individual ratiocination.
• Reasoning can give rise to storytelling when prompted, even if the reasons have no relationship to the underlying causality.
• Motivated reasoning emerges because we are not skeptical of the reasons we proffer, but highly skeptical of reasons which refute our own.
• The "wisdom of the crowds" is not just a curious phenomenon, but one of the primary reasons that humans have become more socially complex and our brains have grown larger.
Ultimately, if you want to argue someone out of their beliefs…well, good luck with that. But you should read The Enigma of Reason to understand the best strategies (many of them are common sense, and I've come to them independently simply through 15 years of having to engage with people of diverse viewpoints).

* R. A. Fisher, who was one of the pioneers of both evolutionary genetics and statistics, famously did not believe there was a connection between smoking and cancer. He himself smoked a pipe regularly.

** From what we know about Blaise Pascal and Isaac Newton, their personalities were such that they'd probably be killed or expelled from a hunter-gatherer band.

## July 29, 2017

### The passing on to better things…faster and faster

Filed under: iPhone,Technology — Razib Khan @ 2:15 am

As many of you know, Apple is doing away with the iPod Shuffle. One curious thing is that I've noticed several people buying these devices in the last week through my Amazon referrals. At $50 the price point isn't high, but it does seem a bit much for an obsolete technology. Which made me reflect on how quickly technologies become obsolete now.

As the few people who read this blog and know me in real life are aware, between 2007 and 2014 I went everywhere with a Shuffle. I always had a backup Shuffle. This is not because I'm an audiophile. I'm not. I listened to podcasts. Arguably the emergence of smartphones made the Shuffle redundant, but I found that the Shuffle was more portable than a smartphone. Ultimately what made me dump the Shuffle is that I went full d-bag and started doing the bluetooth thing. All of a sudden it didn't matter where the phone was. I still have a Shuffle, but it's in a drawer somewhere. Perhaps I have a backup too. I don't recall.

I probably stuck with the Shuffle longer than most. As an old(er) person I'm reflecting now on how fast "ubiquitous" technologies are getting obsolete. Faster and faster. As a child of the 1980s I remember VCRs as part and parcel of our technological furniture. By the early 2000s VCRs were in decline, with DVD rentals surpassing VHS in 2003. Cassettes were eclipsed by CDs in the early 1990s after a two-decade reign, but CDs really didn't master the space for more than ten years (at least in the USA). DVDs had a similarly short "moment."

How much more can change though? Some of the transition occurred because smartphones, in particular the iPhone, swallowed up whole sectors (audio and photography). Other changes are due to the utilization of high speed internet for video. We got rid of our television in 2004, and for a while there I felt "out of the loop" on a lot of water cooler conversation. But now television has come to me, as binge watching on Netflix has become common. What will change next?

### Generation X

Filed under: Beastie Boys — Razib Khan @ 1:03 am

## July 28, 2017

### The Indo-Aryan question nearing resolution

Filed under: Genetics,science — Razib Khan @ 5:50 pm

India Today published my review of the current state of the genetics and genomics of the Indian subcontinent, and what it can tell us about the ethnogenesis of South Asians generally. In the piece I tried to be very circumspect and stick to what we know with a high, if not perfect, degree of certainty. Here I will add some comments where I reduce the threshold of certainty somewhat. That is, I'm going to include here my beliefs where I think I'm right, but where in some details I wouldn't be surprised if I was wrong.
First, the title is Aryan wars: Controversy over new study claiming they came from the west 4,000 years ago. Writers don't get to choose titles, and this is not one I would have chosen. But I am not in a position to care or know what draws clicks. Let's note that this "controversy" is restricted mostly to India. Outside of India it's not controversial, but a matter of the science, because people don't have any political or social investment in the topic. It reminds me of debates about genetics and intelligence in the West, where emotions get overwrought and lies fly wildly with abandon.*

Second, there is a reference in the figures to an "Out of India" (OIT) model. That is, the Aryans migrated out of India, and implicitly the Indo-European languages derive from South Asia. I don't think this theory has any support at all. That is, I think it is rather clear that proto-Indo-European probably emerged neither in Europe proper, nor in South Asia, but in the Inner Eurasian spaces between. But for an Indian audience ignoring OIT would seem a peculiar lacuna, so there was a reference added to the figure on that account (I pushed back against this, but do not make ultimate decisions on figures).

But I do think it was plausible up until 2009's Reconstructing Indian Population History to suggest that most modern South Asian ancestry dates to the Pleistocene. In this framework the Indo-Europeanization of the subcontinent was primarily a cultural process, where small groups of Central Asians imposed their language on the native population. What the genome-wide work has shown is that South Asians are the product of a large-scale mixing process between a population very distant from West Eurasians ("Ancestral South Indians", ASI) and a population which was indistinguishable from other West Eurasians ("Ancestral North Indians", ANI). Since ANI is indistinguishable from West Eurasians I hold it is clearly a West Eurasian population in provenance. Those who reject this position from a scientific perspective believe that there could have been some sort of continuous zone of "ANI-like" habitation from northwestern South Asia up into northern Inner Eurasia (and perhaps toward West Asia as well) dating from the late Pleistocene. I do not believe this is plausible, and I will tell you that prominent researchers to whom I have brought up this idea are somewhat incredulous.**

Third, there are major unresolved issues genetically in relation to the dates and the total number of mixing populations. I am quite confident saying around half of the total South Asian genomic ancestry today derives from populations who were living outside of South Asia on the Holocene-Pleistocene boundary 11,700 years ago. Much of that ancestry probably flourished between the Caucasus and Zagros mountains. The remainder came from somewhere in the vast swath of territory between the Baltic and Siberia (perhaps further south, toward the Pamirs?). But I am not confident of the relative balance of contributions to the ANI. It does seem that the northern component, which is derived in part from the southern component, is much more prominent in upper castes and northwestern populations. In contrast the southern component is found throughout the subcontinent.

In Genomic insights into the origin of farming in the Near East there is analysis of South Asia in the supplements. The author concludes that ANI cannot be modeled as a single population (Zack Ajmal and I were saying this in 2010).
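As a rough illustration of what a two-source breakdown of ANI looks like when you tabulate and plot it, here is a minimal sketch. The group labels and proportions below are invented placeholders, not the estimates from the paper or from the plots discussed next, and this is not the method used in the supplements.

```python
import matplotlib.pyplot as plt

# Invented placeholder ancestry fractions, for illustration only;
# NOT estimates from the paper or from any real analysis.
groups = {
    "Upper caste (north)": {"Iran_N": 0.32, "Yamna": 0.28},
    "Dalit (south)":       {"Iran_N": 0.38, "Yamna": 0.10},
    "Northwest frontier":  {"Iran_N": 0.30, "Yamna": 0.32},
}

names = list(groups)
iran_n = [groups[g]["Iran_N"] for g in names]
yamna = [groups[g]["Yamna"] for g in names]

# Scatter of the two West Eurasian components, one point per group.
fig, ax = plt.subplots()
ax.scatter(iran_n, yamna)
for name, x, y in zip(names, iran_n, yamna):
    ax.annotate(name, (x, y))
ax.set_xlabel("Iran_N-related ancestry fraction")
ax.set_ylabel("Yamna-related ancestry fraction")
fig.savefig("iran_n_vs_yamna.png")

# The companion plot is just the ratio of the two components per group.
for name in names:
    print(name, round(groups[name]["Yamna"] / groups[name]["Iran_N"], 2))
```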
The top hits for the sources of ANI tend to be the genomic sample from the Zagros, in western Iran (before subsequent admixture with Levantine farmers), and a population similar to the Yamna culture of the steppe. The issue seems to be that later steppe populations which harbor a fair amount of "Early European Farmer" ancestry (e.g., LBK in Central Europe), likely due to back migration, aren't good model fits.

Below are two plots, one showing a scatter of South Asian groups with their Iran_N (a sample from ~10,000 years ago) vs. Yamna (from ~5,000 years ago), and another with the ratios. DO NOT TAKE THE PROPORTIONS LITERALLY. My intuition is that these models are overestimating the proportion of steppe ancestry, but my confidence in my intuition is low.

There are two groups enriched for Iran_N ancestry:

1. Lower caste groups, especially from South India.
2. Populations in southern Pakistan.

The reasons differ. If you have done genetic analysis of the Pakistani populations it seems quite obvious that unlike other groups in South Asia, Pakistani groups facing the Arabian Sea across from Oman have genuine Near Eastern ancestry. This affinity declines rather rapidly as you go north in Pakistan. Notice though one South Indian group: Jews from Cochin. This population clearly has recent Near Eastern ancestry. The Kharia are an Austro-Asiatic Munda group. For whatever reason Austro-Asiatic groups seem to consistently have very little steppe ancestry. The Mala are Dalits from South India. The further up you go on the modal Iran_N-Yamna cline, you see the populations are either upper caste or from the far northwest of the subcontinent.

The conclusion I derive from this is that first there was an early migration of West Eurasian populations consisting of Iranian farmers. This group mixed with the ASI element. The Indo-Aryans, who probably correlate with the Yamna-like component, arrived later as an overlay (and nearly half of their ancestry was derived from Iranian farmers). Then many South Asian populations have modifications on this base model of compound ANI + ASI; Munda and Bengalis have later East Asian ancestry, while populations on the Arabian Sea have Near Eastern ancestry.

Fourth, the story in India Today leans heavily on the Y chromosomal haplogroup R1a1a. It is true we are Lords of the Steppe and destined to drive our enemies before us. But it is not the primary story. And yet Y chromosomal phylogenies are easy for the public to understand. But they only make sense in light of the above framework. R1a1a is found in South Indian tribal populations. It seems likely that Indo-Aryan paternal lineages were highly invasive across the subcontinent, just as they were in Europe. In many cases they likely extended far beyond domains where Indo-European acculturation occurred.

I'm probably wrong on some of the details. But I suspect the final story will not be so different from this.

Finally, I will mention the cultural element here. There is a fair amount of discussion of the form "so you are saying the ancestors of Indians are Europeans?" or "does this mean Hinduism is not Indian?" The piece was about genetics and demography, not my opinions about culture. So I will say this:

1. The "West" as an entity is no older than Classical Greece, ~500 BC.
My own personal position, strongly held, is that the West should indicate cultures and societies which descend from the European societies which adhered to the Western Church around ~1000 AD (some nations, like Lithuania, became absorbed into this cultural complex hundreds of years later). So Russia is not the West. And Merovingian Francia is not the West.

2. Indian civilization of what we term the Hindu variety coalesced in the period between 500 BC and 500 AD, from before the Mauryas up to the Guptas. Obviously the period before 1000 BC was important in setting the groundwork, but I do not believe it was Indian as we'd understand it in anything but the geographical sense, nor was it Hindu in any way we'd recognize today (similarly, Shang dynasty China was not China as we'd understand it, which came into being after 500 BC).

These positions mean that I think nationalist passions are in the "not even wrong" category. Indian Hindu civilization is indigenous by definition, since it was synthesized in situ on the edge of historical perception and attestation (for the record, I think Adi Shankara was critical in the completion of a crystallized self-conception of Hindu religio-philosophical thought, but its origins predate him). Similarly, Indian civilization was not seeded by white Europeans because white Europeans were only coming into being in Europe when the Indus Valley civilization was collapsing.

That is all (for now).

Addendum: The first tranche of ancient DNA should be out in a few months. Also, there is another paper on Indian genetics in the works from the usual suspects. There won't be anything totally surprising (or so I've been told).

* By lies, I mean the contention that intelligence is an "invalid" instrument in relation to predictiveness, or, if it is valid, that it is not genetically heritable. People routinely lie about these facts in discussion or spread lies because there are socially preferred positions which they conform to. Similarly, many questions about Indian history seem to hinge on widely promoted lies.

** This model needs to also confront the massive mixing of the last 4,000 years. If it is true then it is ASI which is most likely intrusive, because it is not credible that these two populations were in close proximity for tens of thousands of years without exchanging genes.

### The Indo-Aryan migration to the Indian subcontinent

Filed under: India Genetics,Indian Genetics — Razib Khan @ 7:45 am

The piece is up at India Today. The headline and title are of course optimized for clicks. I would, for example, say that the Indo-Aryans came from the west, not the West. In the course of writing this it has become clear that many people have very specific commitments on this issue. I think it is clear I do not.

Genetic inference methods have wide intervals of confidence around particular dates. So I'll leave it to those with more archaeological knowledge to argue over specific dates. But it strikes me that the dates point to a likelihood that much of the expansion and diversification of Indo-Aryans may precede their expansion into the Gangetic plain ~1500 BCE, the date preferred by many scholars.

Apparently we shouldn't have to wait too long for ancient DNA from Rakhigarhi (months, not years). But I doubt that will settle anything, as opposed to being preliminary and setting off new debates.
## July 26, 2017

### 18,000 years BC (the film)

Filed under: Dog Evolution,Human Evolution,Paleolithic — Razib Khan @ 5:42 pm

Alpha, set 20,000 years ago in Europe, was apparently originally titled "Solutrean." The change is probably for the best. It will come out next spring. I really hope that this movie is good and does well. It isn't often that you have something which takes place during the Last Glacial Maximum.

The plot seems to reflect what you might read in Pat Shipman's The Invaders, but it's about 20,000 years too late for her model to work. One of the major criticisms of the idea that dogs and modern humans operated as a team is that it seems way too early. But of late there have been suggestions that the date is earlier than we'd previously thought in relation to when dogs as we understand them arose: Ancient European dog genomes reveal continuity since the Early Neolithic. Here's the relevant section: "By calibrating the mutation rate using our oldest dog, we narrow the timing of dog domestication to 20,000–40,000 years ago." Please note though that the divergence of the dog lineage from the ancestors of modern wolves is a distinct question and process from domestication as we understand it. Though it seems likely these events didn't occur too far apart in time.

### The future will be genetically engineered

Filed under: Genetics,Genomics — Razib Khan @ 4:04 pm

If the film Rise of the Planet of the Apes had come out a few years later I believe there would have been mention of CRISPR. Sometimes science leads to technology, and other times technology aids in science. On occasion the two are one and the same.

The plot I made above shows that in the first five years of the second decade of the 21st century CRISPR went from being an obscure aspect of bacterial genetics to ubiquitous. Friends who had been utilizing "advanced" genetic engineering methods such as TALENs and zinc fingers switched overnight to a CRISPR/Cas9 framework. As I've said before, the 2010s are the decade when "reading" the genome becomes normal. We really don't know what the CRISPR/Cas9 technology is capable of. It's early years yet.

With that, First Human Embryos Edited in U.S. Technically they're single-celled zygotes. The science itself is not astounding. Rather, it is that the human Rubicon has been passed in the United States. As indicated in the article there has been some jealousy about what the Chinese have been able to do because of a different cultural and regulatory framework.

There are those calling for a moratorium on this work (on humans). I'm neither in favor nor opposed. Rather, my question is simple: if CRISPR/Cas9 makes genetic engineering cheap, easy, and effective, how exactly are we going to enforce a world-wide moratorium? A Butlerian Jihad?

Note: I know that people are freaking out about humans + genetic engineering. But most geneticists I know are more excited about the prospects of non-human work, since human clinical trials are going to be way in the future. Over 20 years since Dolly it's notable to me that no human has been cloned from adult somatic cells yet.

## July 25, 2017

### On the precipice of the Kali Yuga

Filed under: China,History — Razib Khan @ 2:04 am

The idea of decline is an old one. See The Idea of Decline in Western History for a culturally delimited view. But whether it is Pandora opening her box or Eve biting the apple, the concept of an idyllic past and the ripeness of imminent decline seems baked into the cake of human cultural cognition.
It was always better in the good old days. Of course there is the flip side of those who presume that the Eternal City will continue as it always was unto the end of time. Meanwhile, cornucopian optimists of our modern era, such as Steve Pinker, are the historical aberration. But they are influential in our age.

Tanner Greer has a profoundly pessimistic post up, Everything is Worse in China, which is getting some attention (as I've stated before, Tanner's blog in general is worth a read). Rod Dreher has two follow-up posts in response. First, A: Confucius, Basically, which is somewhat an answer to Tanner. And then an email from Tanner himself. It is here that he suggests to Rod's readers Xunzi: The Complete Text. That is all for the good (for a broader view, A Short History of Chinese Philosophy).

Readers can probably read between the lines that I have been gripped somewhat by Sinophilia of late. I am rather pessimistic about the state of American culture and the prospects for the American republic as we have known it. I don't see any of the major political factions offering up a solution for the impending immiseration of the middle class. So I look to the east. Much of the history of the world has been a history of Asia, and it seems we are going to go back in that direction.

If we are pessimistic about China, to a great extent we are pessimistic about the world. Perhaps then we need to abandon the idol of the nation-state, or in China's case the nation-civilization. Rod Dreher has the Benedict Option for orthodox Christians.* But we need to think bigger. Men and women of civilized inclinations may need to band together, and form secret societies shielded from the avarice of the institutional engines which channel human passions toward inexorable ends. We need a strategy for living as civilized people in an anarchic world, an archipelago of oligarchy in the sea of barbarism. Sooner, rather than later. History comes at you fast.

* I mean here Trinitarian Christians of a traditionalist bent, not Eastern or Oriental Orthodox Christians.

### Ancient Europeans: isolated, always on the edge of extinction

Filed under: Europe,Human Genetics,Scandinavia — Razib Khan @ 12:19 am

A few years ago I suggested to the paleoanthropologist Chris Stringer that the first modern humans who arrived in Europe did not contribute appreciable ancestry to modern populations in the continent (appreciable as in 1% or more of the genome).* It seems I may have been right according to results from a 2016 paper, The genetic history of Ice Age Europe. The very oldest European ancient genome samples "failed to contribute appreciably to the current European gene pool."

Why did I make this claim? Two reasons:

1) 40,000 years is a long time, and there was already substantial evidence of major population turnovers across northern Eurasia by this point. You go far enough into the future and it's not likely that a local population leaves any descendants. So just work that logic backward.

2) There was already evidence of low population sizes and high isolation levels between groups in Pleistocene and Mesolithic/Neolithic Europe. This would again argue in favor of a high likelihood of local extinctions given enough time.

This does not apply only to modern humans, descendants of southern, likely African, populations. Neanderthals themselves show evidence of high homogeneity, and expansions through bottlenecks over the ~600,000 years of their flourishing.
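To give a sense of why long stretches of time make local lineage extinction the default expectation, here is a toy simulation, a simple Galton-Watson branching process with Poisson offspring numbers. The parameters are arbitrary round numbers chosen for illustration, not empirical estimates of any Pleistocene population.

```python
import numpy as np

rng = np.random.default_rng(2017)

def lineage_survives(generations=1600, mean_offspring=1.0, cap=1000):
    """Toy Galton-Watson process: does a single founding lineage leave
    any descendants after `generations` generations? A mean of 1.0
    corresponds to a demographically stable population on average."""
    lineages = 1
    for _ in range(generations):
        if lineages == 0:
            return False
        # Each lineage leaves a Poisson number of descendants; cap the
        # bookkeeping once the lineage is clearly well established.
        lineages = int(rng.poisson(mean_offspring, size=min(lineages, cap)).sum())
    return lineages > 0

# ~40,000 years at ~25 years per generation is roughly 1,600 generations.
trials = 500
survived = sum(lineage_survives() for _ in range(trials))
print(f"{survived}/{trials} founding lineages left any descendants")
```

Run repeatedly, almost every founding lineage dies out, which is the intuition behind "work that logic backward" above.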
The reason that these dynamics characterized modern humans and earlier hominins in northern Eurasia is what ecologists would term an abiotic factor: the Ice Age. Obviously humans could make a go of it on the margins of the tundra (the Neanderthals seem less adept at penetrating the very coldest of terrain in comparison to their modern human successors; they likely frequented the wooded fringes, see The Humans Who Went Extinct). We have evidence of several million years of continuous habitation by our lineage. But many of the ancient genomes from these areas, whether they be Denisovan, Neanderthal, or Mesolithic European hunter-gatherer, show indications of being characterized by very low effective population sizes. Things only change with the arrival of farming and agro-pastoralism.

For two obvious reasons we happen to have many ancient European genomes. First, many of the researchers are located in Europe, and the continent has a well developed archaeological profession which can provide well preserved samples with provenance and dates. And second, Europe is cool enough that degradation rates are going to be lower than if the climate was warmer.

But if Europe, as part of northern Eurasia, is subject to peculiar and exceptional demographic dynamics, we need to be cautious about generalizing from it when we make inferences about human population genetic history. Remember that ancient Middle Eastern farmers already show evidence of having notably larger effective population sizes than European hunter-gatherers.

Two new preprints confirm the long term population dynamics typical of European hunter-gatherers, Assessing the relationship of ancient and modern populations and Genomics of Mesolithic Scandinavia reveal colonization routes and high-latitude adaptation.

The first preprint is rather methods heavy, and seems more of a pathfinder toward new ways to extract more analytic juice from ancient DNA results. Those who have worked with population genomic data are probably not surprised at the emphasis on collecting numbers of individuals as opposed to single genome quality. That is, for the questions population geneticists are interested in, "two samples sequenced to 0.5x coverage provide better resolution than a single sample sequenced to 2x coverage."

I encourage readers (and "peer reviewers") to dig into the appendix of Assessing the relationship of ancient and modern populations. I won't pretend I have (yet). Rather, I want to highlight an interesting empirical finding when the method was applied to extant ancient genomic samples: "we found that no ancient samples represent direct ancestors of modern Europeans."

This is not surprising. The 'hunter-gatherer' resurgence of the Middle Neolithic notwithstanding, Northern Europe was subject to two major population replacements, while Southern Europe was subject to one, but of a substantial nature. Recall that the Bell Beaker paper found that the "spread of the Beaker Complex to Britain was mediated by migration from the continent that replaced >90% of Britain's Neolithic gene pool within a few hundred years." This means that less than 10% of modern Britons' ancestry is a combination of hunter-gatherers and Neolithic farmers.
And yet if you look at various forms of model-based admixture analyses it seems as if modern Europeans have substantial dollops of hunter-gatherer ancestry (and hunter-gatherer U5 mtDNA and the Y chromosomal lineages I1 and I2, associated with Pleistocene Europeans, are found at ~10% frequency in modern Europe in the aggregate; though I suspect this is a floor). What gives?

Let's look at the second preprint, which is more focused on new empirical results from ancient Scandinavian genomes, Genomics of Mesolithic Scandinavia reveal colonization routes and high-latitude adaptation. From early on in the preprint:

> Based on SF12's high-coverage and high-quality genome, we estimate the number of single nucleotide polymorphisms (SNPs) hitherto unknown (that are not recorded in dbSNP (v142)) to be c. 10,600. This is almost twice the number of unique variants (c. 6,000) per Finnish individual (Supplementary Information 3) and close to the median per European individual in the 1000 Genomes Project (23) (c. 11,400, Supplementary Information 3). At least 17% of these SNPs that are not found in modern-day individuals, were in fact common among the Mesolithic Scandinavians (seen in the low coverage data conditional on the observation in SF12), suggesting that a substantial fraction of human variation has been lost in the past 9,000 years (Supplementary Information 3). In other words, the SHGs (as well as WHGs and EHGs) have no direct descendants, or a population that show direct continuity with the Mesolithic populations (Supplementary Information 6) (13–17). Thus, many genetic variants found in Mesolithic individuals have not been carried over to modern-day groups.

The gist of the paper in terms of archaeology and demographic history is that Scandinavian hunter-gatherers were a compound population. One component of their ancestry is what we term "Western hunter-gatherers" (WHG), who descended from the late Pleistocene Villabruna cluster (see paper mentioned earlier). Samples from Belgium, Switzerland, and Spain all belong to this cluster. The second element is "Eastern hunter-gatherers" (EHG). These samples derive from the Karelia region, to the east of modern Finland, bound by the White Sea to the north. EHG populations exhibit affinities to both WHG and Siberian populations who contributed ancestry to Amerindians, the "Ancestral North Eurasians" (ANE). There is a question at this point whether EHG are the product of a pulse admixture between an ANE and WHG population, or whether there was a long existent ANE-WHG east-west cline which the EHG were situated upon. That is neither here nor there (the Tartu group has a paper addressing this, leaning toward isolation-by-distance, from what I recall).

Explicitly testing models against the genetic data, the authors conclude that there was a migration of EHG populations with a specific archaeological culture around the northern fringe of Scandinavia, down the Norwegian coast. Conversely, a WHG population presumably migrated up from the south and somewhat to the east (from the Norwegian perspective). And yet the distinctiveness of the very high quality genome, as inferred from the unique SNPs it harbors, suggests to them that very little of the ancestry of modern Scandinavians (and Finns to be sure) derives from these ancient populations. Very little does not mean none.
There is a lot of functional analysis in the paper and supplements which I will not discuss in this post, and one aspect is that it seems some adaptive alleles for high latitudes might persist down to the present in Nordic populations as a gift from these ancient forebears. This is no surprise; not all regions of the genome are created equal (a more extreme case is the Denisovan-derived high-altitude adaptation haplotype in modern Tibetans). Nevertheless, there was a great disruption.

First, the arrival of farmers whose ultimate origins were in Anatolia, ~6,000 years ago, to the southern third of Scandinavia introduced a new element which came in force (agriculture spread over the south in a few centuries). A bit over a thousand years later the Corded Ware people, who were likely Indo-European speakers, arrived. These Indo-European speakers brought with them a substantial proportion of ancestry related to the hunter-gatherers because they descended in large part from the EHG (and later accrued more European hunter-gatherer ancestry from both the early farmers and likely some residual hunter-gatherer populations who switched to agro-pastoralism**).

For several years I've had discussions with researchers whose daily bread & butter are the ancient DNA data sets of Europe. I've gotten some impressions implicitly, and also from things they've said directly. It strikes me that the Bantu expansion may not be a bad analogy in regard to the expansion of farming in Europe (and later agro-pastoralism). Though the expanding farmers initially mixed with hunter-gatherers on the frontier, once they got a head of steam they likely replaced small hunter-gatherer groups in totality, except in areas like Scandinavia and along the maritime fringe where ecological conditions were such that hunter-gatherers were at an advantage (War Before Civilization seems to describe a massive farmer vs. coastal forager war on the North Sea).

But this is not the end of the story for Norden. At SMBE I saw some ancient genome analysis from Finland on a poster. Combined with ancient genomic analysis from the Baltic, along with deeper analysis of modern Finnish mtDNA, it seems likely that the expansion of Finno-Samic languages occurred on the order of ~2,000 years ago. After the initial expansion of Corded Ware agro-pastoralists.

The Sami in particular seem to have followed the same path along the northern fringe of Scandinavia that the EHG blazed. Though they herd reindeer, they were also Europe's last indigenous hunter-gatherers. Genetically they exhibit the same minority eastern affinities in their ancestry that the Finns do, though to a greater extent. But their mtDNA harbors some distinctive lineages, which might be evidence of absorption of an ancient Scandinavian substrate.

I'll leave it to someone else to explain how and why the Finns and Sami came to occupy the areas where they currently dominate (note that historically the Sami were present much further south in Norway and Sweden than they are today). But note that in Latvia and Lithuania the N1c Y chromosomal lineage is very common, despite no language shift, indicating that there was a great deal of reciprocal mixing on the Baltic.

Overall the story is of both population and cultural turnover. This should not surprise when one considers that northern Eurasia is on the frontier of the human range. And perhaps it should temper the inferences we make about other areas of the world.
* You may notice that this threshold is lower than the Neanderthal admixture proportions in the non-African genome. Why is this old admixture still detectable while modern human lineages go extinct? Because it seems to have occurred when non-African humans had a very small effective population size, and the admixture was mixed in thoroughly. Because of the even genomic distribution this ancestry has not been lost in any of the daughter populations.

** Haplogroup I1, which descends from European late Pleistocene populations, exhibits a star phylogeny of similar time depth to R1b and R1a.

## July 23, 2017

### Open Thread, 07/23/2017

Filed under: Open Thread — Razib Khan @ 4:49 pm

Finished The Enigma of Reason. The basic thesis, that reasoning is a way to convince people after you've already come to a conclusion, that is, rationalization, was already one I shared. That makes sense since one of the coauthors, Dan Sperber, has been influential in the "naturalistic" school of anthropology. If you've read books like In Gods We Trust, The Enigma of Reason goes fast. But it is important to note that the cognitive anthropology perspective is useful in things besides religion. I'm thinking in particular of politics.

I haven't been blogging much since I was abroad on a business trip. Specifically, to the Persian Gulf. I'll say more later, though I am going to be vague on geography since I'd rather not mix these two streams of my life (also, to be clear, this is not related to my day job).

One Family, Many Revolutions: From Black Panthers, to Silicon Valley, to Trump. I had known of this connection before, between Ben Horowitz, the Silicon Valley VC guy, and David Horowitz, the right-wing provocateur. The elder Horowitz's contention that one needs to play dirty to get anywhere is a position that I believe has more support today than it did ten years ago. The culture has come to him.

Don't Believe in God? Maybe You'll Try U.F.O.s. No surprise.

43 Senators Want to Make It a Federal Crime to Boycott Israeli Settlements. Here are the sponsors. I've never felt so sympathetic toward BDS….

My piece in India Today on South Asian genetics is hitting the printing press this week.

## July 17, 2017

### Castes are not just of mind

Filed under: Caste,Human Genetics,India — Razib Khan @ 8:31 pm

Before Nicholas Dirks was a controversial chancellor of UC Berkeley, he was a well regarded historian of South Asia. He wrote Castes of Mind: Colonialism and the Making of Modern India. I read it, along with other books on the topic, in the middle 2000s. Here is the Amazon summary from Library Journal:

> Is India's caste system the remnant of ancient India's social practices or the result of the historical relationship between India and British colonial rule? Dirks (history and anthropology, Columbia Univ.) elects to support the latter view. Adhering to the school of Orientalist thought promulgated by Edward Said and Bernard Cohn, Dirks argues that British colonial control of India for 200 years pivoted on its manipulation of the caste system. He hypothesizes that caste was used to organize India's diverse social groups for the benefit of British control. His thesis embraces substantial and powerfully argued evidence. It suffers, however, from its restricted focus to mainly southern India and its near polemic and obsessive assertions. Authors with differing views on India's ethnology suffer near-peremptory dismissal. Nevertheless, this groundbreaking work of interpretation demands a careful scholarly reading and response.

The condensation is too reductive.
Dirks does not assert that caste structures (and jati) date to the British period, but the thrust of the book clearly leaves the impression that this particular identity's formative shape on the modern landscape derives from the colonial experience. The British did not invent caste, but its modern relevance seems to date to the British period.

This is in keeping with a mode of thought flourishing today under the rubric of postcolonialism, with roots back to Edward Said's Orientalism. Because Said was a scholar of literature, his historical analysis suffered from a lack of deep knowledge. A cursory reading of Orientalism picks up all sorts of errors of fact. But compared to his heirs Said was actually a paragon of analytical rigor. I say this after reading some contemporary postcolonial works, and going back and re-reading Orientalism. Not to put too fine a point on it, postcolonialism is more a rhetorical posture, one which aims to destroy what it perceives as Western hegemonic culture. In the process it transforms the modern West into the causal root of almost all social and cultural phenomena, especially those that are not egalitarian. Anyone with a casual grasp of world history can see this, which basically means very few can, since so few actually care about details of fact.

Castes of Mind is an interesting book, and a denser piece of scholarship than Orientalism. Its perspective is clear, and though it is not without qualification, many people read it to mean that caste was socially constructed by the British. This seems false. It has become quite evident that even the classical varna categories seem to correlate with genome-wide patterns of relatedness. And the Indian jatis have been endogamous for on the order of two thousand years. From The New York Times, In South Asian Social Castes, a Living Lab for Genetic Disease:

> The Vysya may have other medical predispositions that have yet to be characterized — as may hundreds of other subpopulations across South Asia, according to a study published in Nature Genetics on Monday. The researchers suspect that many such medical conditions are related to how these groups have stayed genetically separate while living side by side for thousands of years.

This is not really a new finding. It was clear in 2009's Reconstructing Indian Population History. It's more clear now in The promise of disease gene discovery in South Asia. Unfortunately, though, science is not known in any depth by the general public. The ascendancy of social constructionism is such that a garbled and debased view that "caste was invented by the British" will continue to be the "smart" and fashionable view among many elites.

## July 16, 2017

### Open Thread, 07/16/2017

Filed under: Open Thread — Razib Khan @ 2:07 pm

I know that Game of Thrones is premiering tonight, but just wanted to remind readers that R. Scott Bakker's The Unholy Consult will be out in a week. The author, R. Scott Bakker, has a blog, Three Pound Brain. He has some strange ideas…many of which I can't make heads or tails of. But that's OK, I enjoy his fiction, I don't worship his philosophy.

I'm traveling, so not much time to comment. But let me say that I'm sad to see that Maryam Mirzakhani has died.

If you want to get a sense of the historical background of the framework within which I write much of this blog, you might find Will Provine's The Origins of Theoretical Population Genetics of interest.

Tucker Carlson Goes to War Against the Neocons.
I know that most people on the Left don't like Tucker Carlson now because of his recent political postures, but back in the 2000s he was known as a quite heterodox (read: not partisan and boring) commentator. And I have to say that it is nice for someone to say what many of us, including former supporters of the Iraq invasion, think now and then when we recall the period before 2011.

## July 14, 2017

### The past was not PG

Filed under: Bible,Culture,Game of Thrones,Mythology — Razib Khan @ 9:34 am

The Week has published a screed against the low moral quality of Game of Thrones, Game of Thrones is bad — and bad for you. Obviously there is something to this insofar as one can see a coarsening of entertainment, or at least a decline in the stylized aspects of the depiction of reality. But one of my initial reactions is that much of the narrative that we value from the past was not particularly PG.

If you read The Harlot by the Side of the Road: Forbidden Tales of the Bible you see that the "Good Book", in fact the only book read front to back by many after the Reformation in Protestant Europe, has some quite unsavory tales. The story of Judah and Tamar in particular is hard to digest from a modern Western perspective because many of the elements are understated and workaday.

Greek mythology is no better obviously. From Zeus raping Leda, to Achilles throwing a fit because his sex-slave was taken away, to the tradition of Agamemnon sacrificing Iphigenia. In some cases the shocking aspect of ancient stories is because moderns have different values. Slavery and concubinage were taken for granted during the period that the Hebrew Bible and Classical mythology crystallized into the forms which came down to us. In other cases I presume that it was unlikely that small children were going to ever read the original stories themselves, so sexual elements that might confuse were probably omitted in some oral tellings.

This is not to say that Game of Thrones is a modern masterpiece. But some of the disquieting, and frankly perverse, aspects of the narrative are only shocking if your standard is the relatively antiseptic literary fiction which one finds between the Regency and the cultural revolution of the 1960s. That is the aberration in human history, while gritty genre fiction is much closer to primal human storytelling.

## July 13, 2017

### When white people were "ethnic"

Filed under: Religion — Razib Khan @ 3:22 pm

In the period between 2005 and 2010 I spent a fair amount of time reading about American history. And one aspect which interested me was the nature of the assimilation of white Americans of non-Protestant background, in particular Roman Catholics and Jews. This was triggered by reading The Impossibility of Religious Freedom, where the author argues that the modern American conception of church-state separation is difficult to understand in practice unless religion is defined as something similar to low church American Protestantism.

Though the American founding was famously eclectic and tolerant, as befitted a republic designed by men with elite Enlightenment sensibilities, it was culturally without a doubt Protestant in heritage, if not belief. The American Revolutionary Zeitgeist was steeped in British-influenced anti-Catholicism. In keeping with the same sort of Protestant populism which inspired the Gordon riots, a broad swath of American colonial opinion was critical of the Quebec Act for giving French-speaking Catholics a modicum of religious liberty and equality before the law.
Despite this historical context the relationship between the Roman Catholic population and the American republic in the early years was relatively amicable. Most of the priests were French Canadians, and the Catholic population was highly assimilated and integrated. The great change occurred with the arrival of large numbers of Roman Catholic Irish, as well as an Irish American clerical ascendancy which drew upon a revival in the Church in Ireland.

John T. McGreevy's Catholicism and American Freedom is probably the best history of the religion in the United States that I read during that period. Not because it's comprehensive, it's not. Rather, because it focuses on the tension between the Church and the American republic and society, and how it resolved itself, and how that resolution unravelled.

Periodically people in the media make allusions to the ability of the American republic and culture to assimilate Catholics and Jews, and how that might apply to Muslims today. The discussion really frustrates me because there is almost never an acknowledgement that Roman Catholics experienced various degrees of low-grade persecution during periods of the 19th century. The Ursuline Convent riots are just the most sensational incident, and the Know Nothing movement turned into a political party. The expansion of public schooling in parts of this country was tied to anti-Catholicism.

But the Catholics did not take this passively. The emergence of a whole counter-culture, and parochial schools, suggested that they were ready to fight back to maintain their identity. The powerful Irish clerics who served as de facto leaders of the Roman Catholic faithful seem to have wanted to establish a modus vivendi with the American government which recognized the Church's corporate role in society. By and large American elites and culture rejected this attempt to import a European-style model to the New World.

By the late 19th century a movement began in the American Roman Catholic Church which became labeled the Americanist heresy. Despite its official condemnation I would argue that "Americanism" eventually became the de facto ideology of most American Roman Catholics. As Catholics conceded and assimilated toward American liberal and democratic norms in their everyday life, the hostility from the general public declined, and by the middle of the 20th century Will Herberg's Protestant, Catholic, Jew articulated a vision of religious harmony among white Americans.

It should be rather obvious from the above that I believe this religious harmony was achieved in large part through concessions that American Catholics made to the folkways of the United States. You see the same dynamic in Jonathan Sarna's American Judaism. Second, in Catholicism and American Freedom McGreevy lays out the great unravelling of the Catholic hierarchy's understanding with American society which occurred in the 1960s, as social liberalism went far beyond what even the most progressive Roman Catholic intellectuals were ready to countenance. And in this cultural revolution Catholics were shocked to find that their Jewish allies made common cause with mainline Protestants and post-Protestants.

The reason I am writing this is that the American landscape today is different in deep ways from that of the 19th and early 20th century.
The lessons of Catholic and Jewish assimilation to a Protestant understanding of religion were learned through bitter conflict, and through the rejection of a corporatist accommodation between the American government and religious minorities, such as was achieved in several European countries. The modern ideas of religious pluralism are fundamentally different from the explicit understanding of Protestant supremacy which ruled the day a century ago, and which only slowly faded with the assimilation of non-Protestants.

## July 11, 2017

### 23andMe ancestry only is $49.99 for Prime Day

Filed under: 23andMe,D.T.C. Personal Genomics,Personal genomics — Razib Khan @ 11:10 am

23andMe has gone below $50 for "Prime Day"! For those of us who bought kits (albeit more fully featured ones) at $399 or even more this is pretty incredible. But from what I'm given to understand these sorts of SNP-chips are now possible to purchase from Illumina for well less than $50, so this isn't charity. At minimum it's a way to get a raw genotype you can bank later.

## July 10, 2017

### The sons of Ham and Shem

Filed under: Afro-Asiatic,History — Razib Khan @ 1:33 am

Recently I had the pleasure of having lunch with David Reich, and he asked me about my opinions in relation to the Afro-Asiatic languages. I thought it was a strange question, in that I get asked about that in the comments of this weblog too. Why would I have any particular insight? I gave him what I thought was the likely answer: Afro-Asiatic languages probably emerged from the western Levant.

The ancient textual evidence indicates that to the north and east of Mesopotamia the languages were not Semitic. Though Akkadian, a Semitic language, was present at the dawn of civilization, Sumerian was the dominant language culturally in the land between the two rivers, and it was not Semitic. As Lazaridis et al. did not detect noticeable Sub-Saharan African ancestry in Natufians, or later Near Easterners, I have become skeptical of any Sub-Saharan African origin for Afro-Asiatic.

But after the earlier post I made a few mental connections, and so I'll put something up which pushes forward my confidence on a few issues. These arguments lean predominantly on Y chromosomes. I understand that this sort of phylogeography has been shown to be not too powerful in the past, but in the scaffold of the ancient DNA framework it can resolve some issues.

About a decade ago a study of Adolf Hitler's paternal lineage (through male relatives) indicated that his haplogroup was E1b1b. Reports that Hitler was non-European, based on the fact that this is a very common lineage in non-Europeans, as well as Jews, were incorrect, but it does turn out that Hitler's paternal lineage is not associated with the Indo-European migrations. That is, unlike me, Adolf Hitler does not descend from the All-father, but rather from one of the men who were conquered and assimilated by the steppe pastoralists.

But E1b1b is an interesting lineage. First, it is very common in much of Africa, especially the north. Second, it is common among the Natufian people according to Lazaridis et al. In contrast the Neolithic Iranian farmers seem to have harbored haplogroup J. Today the Near East is a mix of the two, which makes sense in light of the fact that reciprocal gene flow has occurred in the last 6,000 years.

Looking at E1b1b frequencies you notice a few things. The highest frequencies with large N's are found among speakers of Cushitic and Berber languages. Haplogroup J has a different distribution, being skewed more to West Asia.
In Ethiopia E1b1b is more common, but J is far more prevalent among the Semitic Amhara than the Cushitic Oromo. Though it is subtle, autosomal DNA makes it clear that the Semitic-speaking populations in Ethiopia-Somalia have more Eurasian ancestry than the Cushitic ones. I believe this is evidence of the multiple migration pattern discerned earlier.

If you go further south in East Africa and compare E1b1b and J you see a skew in the ratio. E1b1b declines in frequency, but J basically disappears. Among the Masai, who have a clear minor West Eurasian ancestral component, albeit far less than Ethiopians, 50% carry E1b1b. Among the Sandawe, who speak a language isolate with clicks, but exhibit Cushitic genetic affinities, 34% carry E1b1b. Among their Hadza hunter-gatherer neighbors, 15% do so. Among many Khoisan groups the frequency of E1b1b is 10%. Most of these groups exhibit no J haplogroup.

This aligns easily with what Skoglund was reporting earlier: the first pastoralists had no "eastern farmer," but did have "western farmer." The Natufians were E1b1b. The wider reach of E1b1b in Africa in comparison to J is likely due to the fact that the admixed pastoralists were pushing into relatively virgin territories. Later Eurasian backflow events, which brought Semitic languages, encountered a much more densely populated Africa.

The hypothesis I present is that after the descendants of the Natufians made the transition to farming, some immediately pushed into areas of Africa suitable for farming and/or pastoralism. They quickly diversified into the various Berber and Cushitic languages. The adoption of Nilo-Saharan languages, and later Khoisan ones, was simply the process of successive and serial admixture into local populations as these paternal lineages introduced their lifestyle. In the Near East many distinct Semitic languages persisted across the Fertile Crescent, and for whatever reason the various non-Semitic languages faded and Semitic ones flourished.

### The great Bantu expansion was massive

Filed under: History,Human Genetics,Punt — Razib Khan @ 12:01 am

Lots of stuff at SMBE of interest to me. I went to the Evolution meeting last year, and it was a little thin on genetics for me. And I go to ASHG pretty much every year, but there's a lot of medical stuff that is not to my taste. SMBE was really pretty much my style.

In any case one of the more interesting talks was given by Pontus Skoglund (soon of the Crick Institute). He had several novel African genomes to talk about, in particular from Malawi hunter-gatherers (I believe dated to 3,000 years before the present), and one from a pre-Bantu pastoralist.

At one point Skoglund presented a plot showing what looked like an isolation by distance dynamic between the ancient Ethiopian Mota genome and a modern day Khoisan sample, with the Malawi population about two-thirds of the way toward the Khoisan from the Ethiopian sample. Some of my friends from a non-human genetics background were at the talk and were getting quite excited at this point, because there is a general feeling that the Reich lab emphasizes the stylized pulse admixture model a bit too much. Rather than expansions of proto-Ethiopian-like populations and proto-Khoisan-like populations, they interpreted this as evidence of a continuum or cline across East Africa. I'm not sure if this is the right interpretation of the plot presented, but it's a reasonable one. Malawi is considerably to the north of modern Khoisan populations. This is not surprising.
From what I have read, Khoisan archaeological remains seem to be found as far north as Zimbabwe, while others have long suggested a presence as far afield as Kenya. Perhaps more curiously: the Malawi hunter-gatherers exhibit no evidence of having contributed genes to the modern Bantu residents of Malawi. Surprising at first, but not really. If you look at a PCA plot of Bantu genetic variation, it really only starts showing evidence of a local substrate (Khoisan) in South Africa. From Cameroon to Mozambique it looks like the Bantu simply overwhelmed local populations, so tightly are they clustered. Though it is true that African populations harbor a lot of diversity, that diversity is not necessarily partitioned between populations. The Bantu expansion is why.

Of more interest from the perspective of non-African history is the Tanzanian pastoralist. This individual is about 38% West Eurasian, and that ancestry has the strongest affinities with Levantine Neolithic farmers. Specifically, the PPN (Pre-Pottery Neolithic), which dates to between 8500 and 5500 BCE. More precisely, this individual was exclusively "western farmer" in the Lazaridis et al. formulation. Skoglund also told me that the Cushitic (and presumably Semitic) peoples to the north and east had some "eastern farmer." I immediately thought back to Hodgson et al.'s Early Back-to-Africa Migration into the Horn of Africa, which suggested multiple layers. Finally, in 2012 Pagani et al. suggested that admixture on the Ethiopian plateau occurred on the order of ~3,000 years ago. Bringing all of this together suggests two things to me:

1. The migration back from Eurasia occurred multiple times, with an early wave arriving well before the Copper/Bronze Age east-west and west-east gene flow in the Near East (also, there was backflow to West Africa, but that's a different post….).
2. The migration was patchy; the Mota sample dates to 4,500 years ago and lacks any Eurasian ancestry, despite the likelihood that the first Eurasian backflow was already occurring.

Skoglund will soon have the preprint out.

## July 9, 2017

### Our civilization's Ottoman years

Filed under: Culture,International Affairs,international relations — Razib Khan @ 9:29 pm

Some right-wing intellectuals are wont to say that multicultural and multiracial empires do not last. This is not true. Historically there are plenty which lasted for quite a long time. Rome, Byzantium, and the Ottomans, to name just a few of the longest. But, though they were diverse polities, modern liberal democratic sensibilities would have been offended by them. That is because these empires were ordered and centered around a hegemonic culture, with other cultures accepted and tolerated on the condition of submission and subordination. The Ottoman example is the most stark because it was formally explicit under the millet system by the end of its history, though it evolved naturally out of Islamic conceptions of the roles of dhimmis under Muslim hegemony. For 500 years the Ottomans ruled a multicultural empire. Yes, it decayed and collapsed, but 500 years is a good run. I bring up the Ottoman example because I was having a discussion with a friend of mine, an academic, and he brought up the idea that the seeming immiseration of the middle to lower classes in developed societies will lead to redistributive economic policies. Both of us agree that immiseration seems on the horizon, and that no contemporary political movement has a good response.
But I pointed out that traditionally redistributive socialism seems most successful in relatively homogeneous societies, and the United States is not that. American society is diverse. Descriptively multicultural. There is another, more likely, outcome. Eleven years ago Amartya Sen wrote a piece for The New Republic which could never get published in the journal today, The Uses and Abuses of Multiculturalism. In it he looked dimly upon the emergence of plural monoculturalism. Today plural monoculturalism is the dominant ideal of the identity-politics Left, with cultural appropriation in vogue, and separatism reminiscent of the 1970s starting to come back into fashion. Against plural monoculturalism he contrasted genuine multiculturalism. I think a better word for it is cosmopolitanism.

The Ottoman ruling elite was Sunni Muslim, but it was cosmopolitan. The Sultan himself often had a Christian mother, while during the apex of the empire the shock troops were janissary forces drawn from the dhimmi peoples of the Balkans. This was a common feature of the Islamic, and before them the Byzantine and Roman, empires. The ruling elites exhibited a common ethos, but their origins were variegated. Many of the Byzantine emperors were not from ethnic Greek Chalcedonian Christian backgrounds (before the loss of the Anatolian territories many were of Armenian, and therefore non-Chalcedonian, origin). But the culture they assimilated to, and promoted as the core identity of the empire, was Greek-speaking and Chalcedonian, with a self-conscious connection to ancient Rome. I can give similar examples from South Asia or China. Diverse peoples can be bound together in a sociopolitical order, but it is invariably one of domination, subordination, and specialization.

But subordinate peoples had their own hierarchies, and these hierarchies interacted with the Ottoman Sultan in an almost feudal fashion. Toleration of the folkways of these subordinate populations was a given, so long as they paid their taxes and were sufficiently submissive. The leaders of the subordinate populations had their own power, albeit under the penumbra of the ruling class, which espoused the hegemonic ethos.

How does any of this apply to today? Perhaps this time it's different, but it seems implausible to me that our multicultural future is going to involve equality between the different peoples. Rather, there will be accommodation and understandings. Much of the population will be subject to immiseration: subsistence, but not flourishing. They may have some universal basic income, but they will lack the dignity of work. Identity, religious and otherwise, will become the necessary opiate of the people. The people will have their tribunes, who represent their interests, and give them the illusion or semi-reality of a modicum of agency. The tribunes, who will represent classical ethno-cultural blocs recognizable to us today, will deal with a supra-national global patriciate. Like the Ottoman elite it will not necessarily be ethnically homogeneous. There will be aspects of meritocracy to it, but it will be narrow, delimited, and see itself self-consciously above and beyond local identities and concerns. The patriciate itself may be divided. But their common dynamic will be that they will be supra-national, mobile, and economically liberated as opposed to dependent.

Of course democracy will continue. Augustus claimed he revived the Roman Republic. The tiny city-state of Constantinople in the 15th century claimed it was the Roman Empire. And so on.
Outward forms and niceties may be maintained, but the death of the nation-state at the hands of identity politics and late-stage capitalism will usher in the era of oligarchic multinationalism. I could be wrong. I hope I am.

### Open Thread, 07/09/2017

Filed under: Open Thread — Razib Khan @ 8:29 pm

I'm a sucker for the aesthetics of Norden. Why? I wonder if part of it is that the fringe of Northern Europe is a science-fictional setting. The long dark nights during the cold winter, and the twilight during midsummer. The sun may be bright, but it never gets too high in the sky. The 13th Warrior wasn't the best movie, but it was evocative. One of the problems with the film depiction of the Lord of the Rings trilogy is that New Zealand seems too bright and airy (and also not decayed enough).

Because of the SMBE meeting I haven't made much progress on The Enigma of Reason. Much of it so far has been reviewing the literature in cognitive psychology and reasoning which I'm familiar with (system 1 vs. system 2, the Wason reasoning task, etc.). Though it is leading me up to the main thesis.

I remember years ago Matthew Yglesias mentioned he was going to do a bit more reading of books, as opposed to news, to differentiate himself from other pundits. Today he admitted he wasn't going to make a show of having an informed opinion about the Frankfurt School. I suggested he take time out to read The Dialectical Imagination: A History of the Frankfurt School and the Institute of Social Research, 1923-1950. The modern campus Red Guards don't know anything about Adorno, Marcuse, or Horkheimer. But the outlines of the contemporary project of cultural revolution and exaltation of the marginalized are all there. Rather than being the origin of modern radical movements, I suspect that the Frankfurt School simply provides a useful toolkit and framework for their project. I do know some politically moderate scientists who read The Dialectical Imagination and came to see campus politics in a totally different, and more intelligible, light.

Joe Pickrell's new company, Gencove: Improving ancestry estimates in South Asia.

I said on Twitter that the "easiest way to make housing affordable for non-rich is to build more houses for the rich so they won't buy houses built for non-rich." What do I mean? It's all about supply. The well-off will always be first in line for any supply of housing. If you allow for copious development, vertically and horizontally, then the rich can purchase the luxury condos and mansions that they crave, while the middle class and lower class can buy up the more normal housing stock.

Bangladeshi students test into elite schools. This story is about the entrance examinations for the elite public high schools of New York City. In 2010 the average Bangladeshi family in New York City had a household income of $37,000. I believe in the near future the entrance exams will not be the only criterion for gaining admission. The reality is that Asian American students lack "leadership" and are not "well rounded," and all the Asian American applicants "look the same."

Racism Is Everywhere, So Why Not Move South? This article is written in the context of black Americans. But the insights are general. Houston has a cost of living that's at the national average. It's the fourth largest city in the United States, and there is a lot of good phở because of the large Vietnamese community.

Patrick Wyman, who sometimes comments on this blog, has a great Fall of Rome podcast.

The Sad, Sexist Past of Bengali Cuisine.
Really, upper-caste Bengali Hindu cuisine.

Utilities fighting against rooftop solar are only hastening their own doom. Not surprised. I have been following Ramez Naam's commentary on this for years. He's been on this.

Islamophobes are attacking me because I'm their worst nightmare, by Linda Sarsour. I thought Hillary would win the election. But I told a long-time reader of this weblog who is a Democratic operative that BLM activists getting in Bernie Sanders' face did not bode well for the direction of the party. Linda Sarsour as the face of progressivism is a massive boon for the Right and Republicans. Sarsour has left a trail of obnoxious and offensive comments on Twitter. So have many people. For me personally the biggest issue is her possible solidarity with Rasmea Odeh. The PFLP is the literal definition of a terrorist organization (though a Marxist, not an Islamic, one). But the reality is that her enemies on the Right know that she and her compatriots in the "woke" movement would never exhibit charity toward their political opponents, so they are attempting to destroy her because they know she would do the same to them. That's where we are in American politics today. You destroy your enemies, or they destroy you. Let's have fun until the last battle though!

A combined analysis of genetically correlated traits identifies 107 loci associated with intelligence. I guess I'll start paying attention when they can explain ~25% of outgroup sample variance. They're already further than the 7% in this preprint, though that will take a little longer to publish.
Question (Siddhant Yadav): Can anybody explain the basics of componendo and dividendo? What are the basic rules to be learnt?

Answer (Vipul Bawa): The basic rule is that

$$\text{if } \frac{a}{b} = \frac{c}{d}, \text{ then } \frac{a+b}{a-b} = \frac{c+d}{c-d}.$$

Derivation:

$$\frac{a}{b}=\frac{c}{d}$$

Add 1 to both sides: $$\frac{a+b}{b}=\frac{c+d}{d} \quad (1)$$

Subtract 1 from both sides: $$\frac{a-b}{b}=\frac{c-d}{d} \quad (2)$$

Divide (1) by (2): $$\frac{a+b}{a-b}=\frac{c+d}{c-d}$$

Answer (Yogesh Garg): Another thing you should keep in mind: if

$$\frac{N_1}{D_1}=\frac{N_2}{D_2},$$

then you can apply the operation $$\frac{N-D}{N+D}$$ to both sides. Why? It helps in complex questions and saves us from making errors in the questions where we tend to get confused. Of course, if you are a pro then this answer is not for you.

Answer (Apeksha Singhal): If $$\frac{a}{b}=\frac{c}{d}$$ then $$\frac{a+b}{a-b}=\frac{c+d}{c-d}$$. This is the rule called componendo and dividendo, i.e., adding and subtracting.
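A quick worked example of where the rule saves algebra (this example is an addition for illustration and is not from the original thread): suppose

$$\frac{\sqrt{x+4}+\sqrt{x-4}}{\sqrt{x+4}-\sqrt{x-4}} = \frac{3}{1}.$$

Applying componendo and dividendo to both sides,

$$\frac{(\sqrt{x+4}+\sqrt{x-4})+(\sqrt{x+4}-\sqrt{x-4})}{(\sqrt{x+4}+\sqrt{x-4})-(\sqrt{x+4}-\sqrt{x-4})} = \frac{3+1}{3-1}
\;\;\Longrightarrow\;\;
\frac{\sqrt{x+4}}{\sqrt{x-4}} = 2,$$

so $$x+4 = 4(x-4)$$ and $$x = 20/3$$, with no need to clear the nested radicals by hand.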
# Ratios

## Word Problem 2

Problem 2: The annual incomes of A and B are in the ratio of 4:3 and their annual expenses are in the rat... »

## Word Problem 1

Problem 1: There are two types of tea; one is worth Rs 30.20 per kg and the other is worth Rs 20.30 per... »
## Dust flux, Vostok ice core Two dimensional phase space reconstruction of dust flux from the Vostok core over the period 186-4 ka using the time derivative method. Dust flux on the x-axis, rate of change is on the y-axis. From Gipp (2001). ## Tuesday, June 14, 2011 ### Information theoretic approaches to characterizing complex systems, part 1: complexity of reconstructed epsilon machines Introduction In earlier posts, I opined that the behaviour of various climate subsystems showed greater complexity in the Late Pleistocene than in the early Pleistocene. This opinion is shaped by observations of the behaviour of these varying subsystems (Himalayan monsoon strength, global ice volume, and oceanographic conditions) inferred from proxy records up to about 2 million years in length. Any such argument would be strengthened by a number. So the challenge I will address over the next few posts in this series will be--how to characterize the complexity of the output of a dynamic series by a single number. After all, in order to compare complexity between two periods, it would be helpful to have a single parameter to compare. The principal information theoretic concept we shall use is Shannon's (1949) measurement of entropy. The trick is deciding how to apply this parameter. Entropy of the epsilon machine for the ice volume proxy Let's look at epsilon machine reconstructions of the ice volume proxy. Three separate first-order epsilon machines describe portions of the Early Pleistocene variations in the ice volume proxy. A1 represents a minimum ice state, A2 is roughly what goes for an interglacial at present, and A3 is the maximum ice state of the early Pleistocene, which would pass for a minor glacial event in the late Pleistocene. The Mid-Pleistocene epsilon machine looks more complex. The Late Pleistocene epsilon machine reconstruction for the ice volume looks to be more complex than any of the Early Pleistocene reconstructions. But is it? How can we tell? The approach we will try here is to characterize the complexity by the entropy of all of the state transitions. Entropy is expressed as -Σp(i)log p(i) for all values of p(i).* Entropy is considered to be a measure of the "information" in a stream of data. This expression is normally applied in systems where Σp(i) = 1, a condition not met in the figures above. The probabilities of each pathway leading from each of the predictive states adds up to 1; so that the total of all "probabilities" adds up to the number of predictive states in the epsilon machine (two or three in the Early Quaternary, four in the mid Quaternary, six in the late Quaternary. Do we add up the probabilities as they appear? Should we divide all probabilities by the total number of predictive states so we end up with Σ p(i) = 1? Should we weight the various probabilities to reflect the relative importance of an individual predictive state? Let's see what happens. First off, consider a system based not on the proxy data, but on a model. Say, the Late Quaternary global ice volume model of Paillard (1998). Quite provocative given the current state of the economy! Actually, the I stands for interglacial regime, the M for mild glacial regime, and the F for full glacial regime. The system bumps along from state to state, but there are no probabilities listed as there is only one possible successor state from each predictive state. Is the above system more complex, less complex, or the same as one with a single state--say, "I". 
From a dynamical-systems perspective, which would use topological arguments, both systems would be equally complex (or equally simple, in this case), as there is no choice of successor state from any predictive state. From an information perspective, it is not at all clear that the two systems are the same.

I M F I M F I M F I M F I M F I M F . . .

I I I I I I I I I I I I I I I I I I I I I I . . .

It depends on whether you allow yourself to 'group' the Is, Ms, and Fs into words, which repeat. From a geological perspective, there is a difference in complexity between the two systems: having three separate predictive states is different than having a single repeated predictive state. However, if we calculate the entropy [-Σp(i)log p(i)] for all the states in both systems, we come up with a value of zero. This is because the probability of each transition (I → M, M → F, F → I, or I → I in the second example) is 1.

If, on the other hand, we establish the probability of each of the transitions (I → M, M → F, and F → I) as 1/3, then the entropy is 1.585, as compared to zero for the system with a single predictive state. The implied complexity is 3x greater for the I M F system as compared to I (2^1.585 ≈ 3). Seems reasonable.

Now let's consider the epsilon machine construction for the Early Quaternary ice volume proxy. There are two possible ways to recalculate the probabilities of each transition for α1, for instance: we could divide each of the probabilities by the number of predictive states (remembering that the probability on the unlabelled A1 → A3 arrow is 1), or we could multiply the probability of each transition by the probability of the originating predictive state.

In the interval from 1870-1700 ka, we find p(A1) = 0.2, p(A2) = 0.4, p(A3) = 0.4. By method 1, the entropy for α1 is 2.13. By method 2, the entropy for α1 is 2.17. Not too different. By both methods, the entropy for both α2 and α3 is 1.

For the Mid Pleistocene, the entropy for α4 (by method 1) is 3.23. We observe p(A1) = 0.19, p(A2) = 0.125, p(A3) = 0.31, p(A4) = 0.375, so by method 2 the entropy for α4 is 3.21. Again, not much different from method 1.

For the Late Pleistocene, the entropy of α5 (by method 1) is 3.42. We observe p(A1) = 0.027, p(A2) = 0.243, p(A3) = 0.243, p(A4) = 0.297, p(A5) = 0.162, p(A6) = 0.027. By method 2, the entropy of α5 is 2.85, which is considerably lower than by method 1. I think this is because observations of A1 and A6 are rare, as these predictive states are only observed once each during the Late Pleistocene.

Entropy of epsilon machines for paleomonsoon strength proxy

Now consider the reconstructed epsilon machines for the paleomonsoon strength proxy. We shall only use method 2 in calculating entropy. There are three predictive states in the Early Quaternary, dominated by M1 and M2. Given p(M1) = 0.50, p(M2) = 0.41, p(M3) = 0.09, the entropy of μ1 is 1.83. In the Late Quaternary, there are six predictive states, with observed probabilities as follows: p(M1) = 0.4, p(M2) = 0.2, p(M3) = 0.17, p(M4) = 0.1, p(M5) = 0.1, p(M6) = 0.03. The entropy of μ2 is 3.67.

Conclusions

Method 1 is the easier calculation, but method 2 is the better one. Still, method 1 can be used as long as the distribution of predictive states is not too far from even.
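To make the two weighting schemes concrete, here is a small Python sketch of the entropy calculation. The transition probabilities in the toy machine below are placeholders for illustration, not the values read off the figures in this post.

```python
import math

def transition_entropy(transitions, state_probs=None):
    """Entropy -sum(w * log2(w)) over all weighted transition probabilities.

    transitions: dict mapping origin state -> {successor: probability},
                 where the probabilities out of each origin sum to 1.
    state_probs: optional dict of origin-state weights (method 2); if None,
                 each probability is divided by the number of origin states
                 (method 1).
    """
    n_states = len(transitions)
    h = 0.0
    for origin, successors in transitions.items():
        for p in successors.values():
            w = p * state_probs[origin] if state_probs else p / n_states
            if w > 0:
                h -= w * math.log2(w)
    return h

# Toy three-state machine (placeholder numbers).
toy = {"A1": {"A3": 1.0},
       "A2": {"A1": 0.5, "A3": 0.5},
       "A3": {"A2": 0.75, "A1": 0.25}}

print(transition_entropy(toy))                                     # method 1
print(transition_entropy(toy, {"A1": 0.2, "A2": 0.4, "A3": 0.4}))  # method 2
```

With a function like this it is easy to check how sensitive the result is to rarely occupied states, which is exactly the issue noted above for the Late Pleistocene machine.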
In summary:

| Time | Entropy (ice volume) | Entropy (paleomonsoon) |
|---|---|---|
| Late Pleistocene | 2.85 | 3.67 |
| Mid-Pleistocene | 3.21 | 1.83 |
| Early Pleistocene | 1-2.1 | 1.83 |

By this test, the behaviour of the climate system has been more complex in the Late Pleistocene than it was in the Early Pleistocene. In our next installment, we look at how we can characterize the complexity of the probability density calculation for each window shown here, to give us a nice smooth graph of the complexity of the climate system through time.

References

Crutchfield, J. P., 1994. The calculi of emergence: Computation, dynamics, and induction. Physica D 75: 11-54.

Gipp, M. R., 2001. Interpretation of climate dynamics from phase space portraits: Is the climate system strange or just different? Paleoceanography, 16: 335-351.

Kukla, G., Z. S. An, J. L. Melice, J. Gavin, and J. L. Xiao, 1990. Magnetic susceptibility record of Chinese loess. Trans. R. Soc. Edinburgh Earth Sci., 81: 263-288.

Paillard, D., 2001. Glacial cycles: Toward a new paradigm. Reviews of Geophysics, 3: 325-346.

Shackleton, N. J., A. Berger, and W. R. Peltier, 1990. An alternative astronomical calibration of the Lower Pleistocene timescale based on ODP site 677. Trans. R. Soc. Edinburgh Earth Sci., 81: 251-261.

Shannon, C., 1949. Communication theory of secrecy systems. Bell System Technical Journal 28 (4): 656-715.

* We calculate all logarithms in a base of 2, in accordance with the nerds who came up with this concept.
# Some uniqueness results in tensor tomography

Speaker: Mr. Rohit Kumar Mishra, TIFR-CAM, Bangalore

Jul 26, 2017, from 02:00 PM to 03:00 PM, LH 006

Abstract: We consider a generalization of geodesic ray transforms, called integral moment transforms, of symmetric $$m$$-tensor fields in Riemannian and Euclidean geometries. The $$q^{\mathrm{th}}$$ integral moment transform of a symmetric $$m$$-tensor field $$f=f_{i_{1}\cdots i_{m}}\, dx^{i_{1}}\cdots dx^{i_{m}}$$ on a Riemannian manifold $$(M,g)$$ is defined as follows:

$$I^{q}f(x,\xi) = \int_{\mathbb{R}} t^{q}\,\langle f(\gamma_{x,\xi}(t)),\, \dot{\gamma}_{x,\xi}^{m}(t)\rangle_{g}\, dt = \int_{\mathbb{R}} t^{q}\, f_{i_1\dots i_m}(\gamma_{x,\xi}(t))\,\dot{\gamma}_{x,\xi}^{i_1}(t)\cdots \dot{\gamma}_{x,\xi}^{i_m}(t)\, dt,$$

where $$\gamma_{x,\xi}(t)$$ is the geodesic starting from $$x$$ in the direction $$\xi$$. The special case $$q=0$$ in the above definition is called the longitudinal geodesic ray transform. We are interested in the question of recovery of the symmetric $$m$$-tensor field $$f$$ from the knowledge of its integral moments. We first consider the Euclidean setting and show that a vector field in $$\mathbb{R}^{n}$$ can be uniquely recovered, with an explicit inversion formula, from the knowledge of the first two integral moments restricted to lines passing through a fixed curve. Next we consider a restricted longitudinal ray transform of symmetric $$m$$-tensor fields and show that this transform can be inverted microlocally, recovering a component of the field $$f$$ modulo a known error term and smoothing terms. Finally, we consider the integral moment transforms in a Riemannian manifold setting and prove a Helgason-type support theorem given the first $$m+1$$ integral moments of a symmetric $$m$$-tensor field $$f$$.
New South Wales Higher School Certificate Mathematics Extension 2 (Online since January 1, 2001) Theme song - Spem in alium Ext. 2 Practice papers: 200 papers ; 225 papers New January 22, 2015 Beta build 14C106a for OS X 10.10.2 was released today to public beta testers. January 8, 2015 Beta build 14C94b for OS X 10.10.2 has just been released to public beta testers. January 5, 2015 Version 7.3 of the Australian Curriculum was released today: December 20, 2014 Beta build 14C81h for OS X 10.10.2 has just been released to public beta testers. December 13, 2014 National curriculum for Years 11 and 12 is now shelved. Here is The Australian article: http://4unitmaths.com/nc-y11-12shelved.pdf December 8, 2014 November 18, 2014 Apple released OS X Yosemite 10.10.1 update for macs today. With airplay mirroring, we can have Extend Desktop which lets you use full screen apps on one screen whilst working on something different on another. This is very useful for classrooms where for example a class is working on some questions on an interactive whiteboard connected to an apple tv, and at the same time, the teacher could be recording a roll online without interrupting the student activity. Here is my yosemite page with more details on it: http://users.tpg.com.au/nanahcub/yosemite.html If on the other hand you have a pc, you could get Airparrot instead which does a similar thing - but the major difference is apple's airplay mirroring runs off the processor, whereas airparrot is a less efficient software-based version of airplay mirroring. October 28, 2014 Terry Lee has published his solutions to the 2014 Extension 1 and 2 HSC papers at his website http://hsccoaching.com/documents/35.html Itute.com also have General 2 solutions at http://www.itute.com/wp-content/uploads/2014-nsw-bos-mathematics-general-2-solutions.pdf October 12, 2014 October 8, 2014 Version 7.2 of the Australian Curriculum was released today. August 13, 2014 The 2014 Fields medals have been announced today. The recipients are Artur Avila, Manjul Bhargava, Martin Hairer, Maryam Mirzakhani. Maryam Mirzakhani is also the first woman to ever receive it. August 11, 2014 Proposed changes to the NSW Calculus-based courses BOSTES has announced proposed changes to the NSW Calculus-based courses today and are calling for feedback via the online survey ( https://www.surveymonkey.com/s/bostesstage6 ) (closing date Sept. 21) or via written submission to the Board Inspector, Peter Osland (email: [email protected] ) Here are the proposed changes in content for the calculus-based courses on page 22 of http://www.boardofstudies.nsw.edu.au/australian-curriculum/pdf_doc/senior-secondary-evaluation-2014-08.pdf Preliminary Mathematics 2 Unit Approximately six topics focusing on areas of Mathematics such as real numbers, algebra, functions, graphs, geometry, trigonometry, differential calculus, sequences and series, and descriptive statistics. A number of modelling topics focusing on applications of Mathematics from other topics in the Preliminary course and utilising techniques from other topics in the course and earlier courses, such as applications involving real functions and applications of series to finance. HSC Mathematics 2 Unit Approximately six topics focusing on areas of Mathematics such as differential calculus, integral calculus, probability, trigonometry, exponential and logarithmic functions, descriptive statistics, and random variables. 
A number of modelling topics focusing on applications of Mathematics from other topics in the HSC course, and utilising techniques from other topics in the course and earlier courses, such as applications involving probability and finance, and applications to the natural environment.

Preliminary Mathematics Extension 1
Approximately six topics focusing on areas of Mathematics such as circle geometry, further algebra, polynomials, functions, graphs, trigonometry, series, elementary difference equations, random variables, and the normal distribution.

HSC Mathematics Extension 1
Approximately six topics focusing on areas of Mathematics such as mathematical induction, binomial theorem, methods and applications of integration, further trigonometry, inverse functions and the inverse trigonometric functions, and further applications of calculus.

Mathematics Extension 2
Approximately eight topics focusing on areas of Mathematics such as further inequalities, complex numbers, polynomials, functions, graphs, vectors, integration techniques, volumes, modelling with functions and derivatives, mechanics, difference equations, and statistical inference.

July 18, 2014 Note there is a mistake in the video. The woman at the start says he is in year 12. He isn't. He is in year 11. The mistake is repeated in the caption.

July 17, 2014

July 12, 2014 55th IMO Results. An Australian (Alexander Gunning) is one of only 3 out of 560 contestants to get a perfect score in the 55th International Mathematical Olympiad held a few days ago. It is the first time ever that an Australian has got a perfect score! Results are out today. The People's Republic of China had overall country rank 1 with 5 gold medals and 1 silver medal (which the USA also got), but its total was 201 points (the USA got 193 and country rank 2). There were 3 perfect scorers, one of whom was from Australia (individually ranked equal 1st out of 560): Alexander Gunning (Australia), Jijang Gao (People's Republic of China), Po-Sheng Wu (Taiwan). Medal cutoffs: Gold=29, Silver=22, Bronze=16. Results for the Australian team: rank=11 out of 101, total=156 out of 252:
Alexander Gunning: 7+7+7+7+7+7=42=Gold Medal (and perfect score!) (rank=1)
Seyoon Ragavan: 7+2+0+7+1+0=17=Bronze Medal (rank=256)
Mel Shu: 7+7+0+7+2+0=23=Silver Medal (rank=109)
Yang Song: 7+3+0+7+3+0=20=Bronze Medal (rank=200)
Praveen Wilerathna: 7+7+0+7+7+0=28=Silver Medal (rank=50)
Damon Zhong: 7+5+0+7+7+0=26=Silver Medal (rank=83)

July 9, 2014

June 23, 2014 Terry Tao has won one of the $3m Breakthrough Prizes. That's worth more than a Nobel Prize!

February 28, 2014

January 10, 2014 Today the government announced a review of the national curriculum.

Teaching Resources
4 unit Syllabus (from boardofstudies server)
2 and 3 unit Syllabus (from boardofstudies server)
Link Between 1995 and 2010 HSC Exams Leads To Generalised Wallis Product (preprint) - another version appeared in MANSW's Reflections, Vol. 36, No. 4, 2011, pp. 22-23
How NOT to find the surface area of revolution, by Derek Buchanan
Yet another proof of the irrationality of e
DON'T BAN YOUTUBE!

2^57,885,161 − 1 was discovered on January 25, 2013 by Curtis Cooper to be the largest known prime. It has 17,425,170 digits, which you can get here. More info on this discovery is at http://www.mersenne.org. Also, on December 25, 2011 the largest known twin primes were found by Timothy D. Winslow. They are 3756801695685 × 2^666669 ± 1, both of which have 200,700 digits. They are at http://4unitmaths.com/tp1.pdf and http://4unitmaths.com/tp2.pdf.
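As a quick sanity check on the digit count quoted above, the number of decimal digits of 2^p − 1 can be computed directly. This snippet is an addition for illustration and is not part of the original site:

```python
import math

p = 57885161
# 2**p - 1 has the same number of decimal digits as 2**p (since 2**p is not a
# power of 10), and the digit count of 2**p is floor(p * log10(2)) + 1.
digits = math.floor(p * math.log10(2)) + 1
print(digits)  # 17425170, matching the figure quoted for Curtis Cooper's prime

# The exact count is also feasible with Python's big integers, though slower:
# digits_exact = len(str(2**p - 1))
```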
Online video on Fermat's Last Theorem: msri Wiles' online lecture
2004hsc8bsol.pdf Alternative solution to 2003 HSC Q3(a)(iv)
Have your pi and e it too.
The General Conic and Dandelin Spheres
The Cubic Formula
The Quartic Formula
Proof of the Fundamental Theorem of Algebra
University Mathematics
The Putnam Competition
Harvard University's notes
Proof of the Taniyama-Shimura-Weil Conjecture
Beal Prize for $1,000,000 for proving (or disproving) the Beal Conjecture, i.e., that the only solutions to the equation $$A^x + B^y = C^z$$, when $$A$$, $$B$$, $$C$$ are positive integers and $$x$$, $$y$$ and $$z$$ are positive integers greater than 2, are those in which $$A$$, $$B$$ and $$C$$ have a common factor.
Online LaTeX editors
Too many philistines are using Word. They should stop being philistines and start using LaTeX.
For web browsers (nothing needs to be installed): Verbosus ; ScribTeX
iPad app: TeX Touch. Files created in this app can be compiled via the TeX Cloud.
Forums
Other websites
Fields medallists
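The Beal Prize equation mentioned above invites a quick numerical experiment. The following brute-force sketch (an illustrative addition with arbitrary search bounds, unrelated to the prize conditions) finds small solutions of A^x + B^y = C^z with all exponents at least 3 and confirms that each one shares a common factor, as the conjecture predicts:

```python
from math import gcd

LIMIT = 10**12
perfect_powers = {}                      # value -> (base, exponent), exponent >= 3
for c in range(2, 200):
    for z in range(3, 8):
        v = c ** z
        if v > LIMIT:
            break
        perfect_powers[v] = (c, z)

hits = []
for a in range(2, 200):
    for x in range(3, 8):
        ax = a ** x
        if ax > LIMIT:
            break
        for b in range(a, 200):
            for y in range(3, 8):
                s = ax + b ** y
                if s > LIMIT:
                    break
                if s in perfect_powers:
                    c, z = perfect_powers[s]
                    hits.append((a, x, b, y, c, z, gcd(gcd(a, b), c)))

# Every hit has a non-trivial common factor, e.g. 2^3 + 2^3 = 2^4 (factor 2)
# and 3^3 + 6^3 = 3^5 (factor 3).
for h in hits[:10]:
    print(h)
```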
# Seasonal Data with GAMMs I'm interested in modelling a time series of temperature data across several years. The data are on the level of hourly observations, so I have variables for year, month, day, and time. I found a great example of doing this by Gavin Simpson (found here). The blog only considers correlation within year, where as I have to deal with correlation within year and within day. How can I best account for this correlation with gamm? Gavin uses the following code modar2 <- gamm(apparentTemperature ~ s(month, bs = "cc", k = 12) + s(time, k = 20),data = timetemp, correlation = corARMA(form = ~ 1|year, p = 2),control = ctrl) Where should I pass variables to account for correlation within day? For reference, here is a sample of my data: tibble::tribble( ~created_at, ~time, ~month, ~year, ~apparentTemperature, "2014-01-03 09:30:28", 9.5, 1, 2014, -17.87, "2014-01-03 10:13:43", 10.2166666666667, 1, 2014, -17.87, "2014-01-03 12:19:32", 12.3166666666667, 1, 2014, -16.14, "2014-01-03 12:44:04", 12.7333333333333, 1, 2014, -20.24, "2014-01-03 13:09:38", 13.15, 1, 2014, -20.24, "2014-01-03 13:39:00", 13.65, 1, 2014, -20.44 ) Depends how you want nest the autocorrelation, within days? modar2 <- gamm(apparentTemperature ~ s(year) + s(month, bs = "cc", k = 12) + s(time, k = 20), data = timetemp, correlation = corARMA(form = ~ 1|day, p = 2), control = ctrl) would have smooth long term trend, smooth seasonal effect, smooth time of day effect, with autocorrelation nested within days (for which you'd need to create a new variable day which generates the day of year from the date time variable. If you have a lot of data, you really don't want to use form = ~ obs_seq for the correlation structure, where obs_seq is a sequence 1, 2, ..., number of observations, as that will create a massive covariance matrix that lme() will need to invert at each iteration. Having fitted such a model to high frequency data, it took gamm() a week to converge on powerful multicore workstation. The reason I nested the correlation within year in that example was partly for this reason; that's a long monthly record and fitting a full ARMA function across all timepoints is not quick. • Hmm, my data is nearly 30,000 observations, so I guess that is far too large. I could summarize my data by day (e.g. mean apparent temperature for April 18 2018) which reduces my data to nearly 1,500 observations. Then I could just use your approach from the blog, yes? I'd have Year, month, and day of month, smoothing year and day with a seasonal effect of month. – Demetri Pananos Apr 20 '18 at 2:10 • Why do you want to have the autocorrelation operate at beyond the daily level? Do you have evidence of longer scale autocorrelation? The main issue is that you're likely to run out of RAM unless you have a lot of it available. – Gavin Simpson Apr 20 '18 at 2:26 • Start with the simpler model (nest the AR within day-of-year within year), use an AR(1) initially (corAR1()) rather than an ARMA as that is much more efficient. Then if that fits, look at the normalized residuals to see if you still have remaining autocorrelation. You could also fit without the AR and check that model's residuals. If you go that route, see bam() in mgcv. – Gavin Simpson Apr 20 '18 at 3:09
# Mixture models and the EM algorithm

## 2 Introduction and examples

One of the most widely used mixtures of distributions (or mixture models, i.e. MM) is the Gaussian mixture model (GMM). When a random vector of real values $$\X$$ is drawn from a GMM, its probability density is: $p(\X| \M,\C,\pis) = \sum_{k=1}^K \pi_k \normal(\X|\M_k,\C_k),$ with $$\pis = (\pi_k)_{1}^K$$ and $$K$$ the number of Gaussians or clusters. The set of parameters for this model is $$\pa=(\pi_k, \M_k, \C_k)_{k=1}^K$$. The generative story of this model is as follows: 1. First pick a cluster (or a Gaussian) $$k \sim Cat(\pis)$$ 2. Generate $$\X \sim \normal(\X|\M_k,\C_k)$$ MMs are mainly used for clustering or probability density estimation.

### 2.1 Clustering

In a clustering task, each Gaussian is associated with a cluster, and the learning step aims at estimating the hidden assignment of each data point to a cluster. This can be interpreted as an extension of the K-means algorithm, with soft assignments and the possibility to better control the shape of each cluster with the covariance matrix.

### 2.2 Estimation for mixtures of distributions

For a probabilistic classifier, each class is modeled by a distribution (the likelihood term). However, following the Gaussian example, a single Gaussian can be a poor model for the data of one class. Inside a class, the data can exhibit different modes associated with the intra-class variability of the data. In this case a MM can be a better choice: for instance, by increasing the number of Gaussians we allow the model an improved expressivity, along with more parameters to be estimated.

## 3 The EM Algorithm in general

A mixture model is a generative model that relies on latent variables $$\Z$$ to explain the observed data $$\X$$. These latent variables $$\Z$$ represent the assignment of each observation to a component of the mixture. To learn the parameters of the model we can maximize the log-likelihood of the parameters on the training data: \begin{align*} \log(p(\X|\pa)) &= \log(\sum_{\Z} p(\X,\Z|\pa) ) \end{align*} Each training point $$\x$$ is associated with a latent vector $$\z$$ of dimension $$K$$; $$\z$$ encodes the assignment of $$\x$$ to the clusters. $$\Z$$ is therefore a set of binary random variables: • $$\z = (z_k)_{k=1}^K$$ • $$z_k=1$$ if $$\x$$ belongs to cluster $$k$$, and $$0$$ otherwise. To say it differently, $p(\X , \Z = \z | \pa) = \pi_k \normal(\X|\M_k,\C_k),$ if $$z_k = 1$$ and $$z_{k'} = 0,\ \forall k'\neq k$$. If $$\Z$$ could be observed, this would become a simple, easy-to-solve classification problem. $$(\X, \Z)$$ is often denoted the complete data set, while $$(\X)$$ is the incomplete one. In other words, $$P(\X,\Z|\pa)$$ is easy to optimize, while $$P(\X|\pa)$$ requires marginalizing the latent variable, introducing a log-sum without a closed-form solution. The Expectation-Maximization (EM) algorithm is a solution to the optimization problem of finding $$\pa$$ to maximize $$\log(p(\X|\pa))$$. While in practice $$\Z$$ is unknown, for a given set of parameters we can compute $$P(\Z|\X,\pa)$$ and also the expected value of $$\Z|\X,\pa$$. This is the E(xpectation) step. In a second step, learning the classification task can be carried out: maximizing over $$\pa$$ knowing $$\Z$$ (or its expected value). This is the M(aximization) step.
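Before moving to the variational view, here is a minimal Python/NumPy sketch of the generative story from section 2 (an illustrative addition; the parameter values are arbitrary): pick a component from $$Cat(\pis)$$, then draw from the corresponding Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary 1-D GMM parameters: mixing weights, means, standard deviations.
pi = np.array([0.5, 0.3, 0.2])
mu = np.array([-2.0, 0.0, 3.0])
sigma = np.array([0.5, 1.0, 0.8])

def sample_gmm(n):
    """Draw n points: z ~ Cat(pi), then x | z ~ N(mu[z], sigma[z]^2)."""
    z = rng.choice(len(pi), size=n, p=pi)   # latent cluster assignments
    x = rng.normal(mu[z], sigma[z])         # observed data
    return x, z

x, z = sample_gmm(1000)
```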
## 4 Variational view of EM

The quantity of interest can be rewritten as follows by introducing a distribution $$q(\Z)$$ defined over the latent variables: $\log(P(\X|\pa))= \sum_{\Z}q(\Z) \log(\frac{P(\X,\Z|\pa)}{q(\Z)}) - \sum_{\Z}q(\Z) \log(\frac{P(\Z|\X,\pa)}{q(\Z)})$ Two terms appear. The second one (including its minus sign) is the Kullback-Leibler divergence between $$q(\Z)$$ and $$P(\Z|\X,\pa)$$, while the first one is for the moment denoted by $$\lb$$. \begin{align*} \log(P(\X|\pa)) &= \lb + \dkl(q(\Z)||P(\Z|\X,\pa))\\ \lb &= \sum_{\Z}q(\Z) \log(\frac{P(\X,\Z|\pa)}{q(\Z)})\\ \dkl(q(\Z)||P(\Z|\X,\pa)) &= \sum_{\Z}q(\Z) \log(\frac{q(\Z)}{P(\Z|\X,\pa)}) \end{align*} Recall that the Kullback-Leibler divergence satisfies $$\dkl(q||p)\ge 0$$, with equality if, and only if, $$q(\Z) = p(\Z|\X,\pa)$$. Therefore $$\log(P(\X|\pa)) \ge \lb$$, which means that $$\lb$$ is a lower bound on $$\log(P(\X|\pa))$$, the quantity we want to maximize. The goal of the EM algorithm is therefore to maximize $$\lb$$ in order to indirectly maximize $$P(\X|\pa)$$.

### 4.1 Remark

This formulation does not solve the tractability issue, since in the $$\dkl$$ term we still have a dependence between $$P(\Z|\X,\pa)$$ and $$P(\X|\pa)$$, i.e.: $$P(\Z|\X,\pa) = \frac{P(\X,\Z|\pa)}{P(\X|\pa)}$$. So both $$P(\Z|\X,\pa)$$ and $$P(\X|\pa)$$ are intractable. The probabilistic model specifies the joint distribution $$P(\X, \Z)$$, and the goal is to find an approximation for the posterior distribution $$P(\Z|\X,\pa)$$ as well as for the model evidence $$P(\X|\pa)$$.

### 4.2 E step

Suppose that the current value of the parameter vector is $$\paold$$. In the E step, the lower bound $$\lb$$ is maximized with respect to $$q(\Z)$$ while holding $$\pa$$ fixed at $$\paold$$. \begin{align*} \lb &= - \dkl(q(\Z)||P(\Z|\X,\paold)) + \log p(\X|\paold) \\ &= - \dkl(q(\Z)||P(\Z|\X,\paold)) + cte \end{align*} • the solution is $$q(\Z) = P(\Z|\X,\paold)$$, and • the Kullback-Leibler divergence then vanishes. Since the KL divergence is zero, we have $$\lb=\log P(\X|\paold)$$. In fact, the E-step consists of computing the posterior distribution over $$\Z$$ with the parameters fixed at $$\paold$$. Then you just set, theoretically, $$q(\Z) = P(\Z|\X,\paold)$$.

### 4.3 M step

The distribution $$q(\Z)$$ is now fixed and the lower bound $$\lb$$ is maximized with respect to $$\pa$$. The maximization process yields a new value of the parameters $$\panew$$ and increases the lower bound (except if we are already at a maximum). Note that $$q(\Z)=P(\Z|\X,\paold)$$ acts as a constant in the maximization process: \begin{align} \lb &= \sum_{\Z} P(\Z|\X,\paold) \log(\frac{P(\X,\Z|\pa)}{P(\Z|\X,\paold)}) \\ &= \sum_{\Z} P(\Z|\X,\paold) \log(P(\X,\Z|\pa)) \\ &- \sum_{\Z} P(\Z|\X,\paold) \log(P(\Z|\X,\paold)) \end{align} The second term is a positive constant, since it depends only on $$\paold$$; it is the entropy of the posterior distribution, $$H(P(\Z|\X,\paold))$$. The first term, which we want to maximize, is the expectation under the posterior $$P(\Z|\X,\paold)$$ of the log-likelihood of the complete data. In practice this means that we optimize a classifier of $$\X$$ into $$\Z$$, with the supervision of $$P(\Z|\X,\paold)$$, which provides the pseudo-assignments. Since the distribution $$q$$ is fixed to $$P(\Z|\X,\paold)$$, $$q(\Z) \neq P(\Z|\X,\panew)$$ and the KL divergence term is now nonzero. The increase in the log-likelihood function is therefore greater than the increase in the lower bound.

### 4.4 A summary of EM

You can see the EM algorithm as pushing up two quantities in turn.
• With the parameters fixed at $$\paold$$, push $$\lb$$ up until it sticks to $$\log P(\X|\paold)$$ by setting $$q(\Z)=P(\Z|\X,\paold)$$.
• Then recompute the parameters to get $$\panew$$ with $$q(\Z)=P(\Z|\X,\paold)$$ held fixed. The criterion is to maximize $$\lb$$ w.r.t. $$\pa$$.

In fact you are moving in the parameter space in order to push $$\lb$$ up, but by doing this you also push the likelihood even further: $$\log P(\X|\panew) \ge \lb$$, because $$P(\Z|\X,\pa)$$ has changed to $$P(\Z|\X,\panew)$$ while $$q(\Z)$$ has not. For a graphical illustration, you can look at Christopher Bishop's book Pattern Recognition and Machine Learning. You can also read the great paper by Radford Neal and Geoffrey Hinton on the link between Variational Bayes and EM. Note that for implementation, $$q(\Z)$$ does not really exist and is not a quantity of interest. It acts more like a temporary variable used to play this two-step game.

## 5 GMM

Application of the EM algorithm to the GMM.

### 5.1 E step

Given the current set of parameters $$\pa=\paold$$, we set $$q(\Z)=P(\Z|\X,\paold)$$. This implies that for each training example $$\X$$, we compute the posterior distribution of its associated latent variable $$\Z$$: \begin{align*} q(\Z)&=P(\Z|\X,\paold)\\ &=\frac{P(\X,\Z|\paold)}{\sum_z P(\X,\Z|\paold)}\\ &= \frac{\pi_k \normal(\X|\M_k,\C_k)}{\sum_{j=1}^K \pi_j \normal(\X|\M_j,\C_j)} \end{align*} $$q(\Z)$$ can be considered as the soft assignment of $$\X$$ to the different clusters, or its responsibility.

### 5.2 M step

The distribution $$q(\Z)$$ is now fixed and the lower bound $$\lb$$ is maximized with respect to $$\pa$$. We optimize a classifier of $$\X$$ into $$\Z$$, with the supervision of $$P(\Z|\X,\paold)$$, which provides the pseudo-assignments. • Each data point is involved in each cluster, but weighted by its responsibility. • $$\x_n$$ is associated with $$\z_n$$ • $$\x_n$$ belongs to cluster $$k$$ with a weight given by $$q(z_{nk})$$ \begin{align*} N_k &= \sum_n q(z_{nk}) \\ \M_k &= \frac{1}{N_k} \sum_n q(z_{nk}) \x_n \\ \C_k &= \frac{1}{N_k} \sum_n q(z_{nk}) (\x_n-\M_k) (\x_n-\M_k)^t\\ \pi_k &= \frac{N_k}{N} \end{align*}

## 7 Bernoulli Mixture model

With the naive assumption, a random vector $$\X$$ is assumed to be generated by a set of independent Bernoulli distributions, one for each component: $P(\X=\x|\M) = \prod_{i=1}^d \m_i^{x_i} (1-\m_i)^{1-x_i},$ where $$\M$$ is the vector of parameters, and $$\m_i$$ is the parameter of the Bernoulli distribution associated with component $$i$$ of $$\X$$. We can then compute the (statistical) mean and covariance of $$\X$$ under this naive assumption: \begin{align*} E[\X] &= \M\\ cov[\X] &= diag\{\M(1-\M)\}. \end{align*} Here $$diag$$ denotes a diagonal matrix and $$\M(1-\M)$$ is the vector gathering the diagonal values. Therefore, there is no correlation between the components of $$\X$$ (of course, it was designed like this). The variance of one component is thus $$\m_i(1-\m_i)$$. If we want to capture correlations between the variables, unlike with a single Bernoulli distribution, a solution is to assume a mixture of Bernoulli distributions: $P(\X=\x|\M,\pis) = \sum_{k=1}^K \pi_k P(\X=\x|\M_k),$ Under this distribution, the statistics are now: \begin{align*} E[\X] &= \sum_k \pi_k \M_k\\ cov[\X] &= \sum_k \pi_k (diag\{\M_k(1-\M_k)\} + \M_k\M_k^t) - E[\X]E[\X]^t \end{align*} The covariance is no longer diagonal, and the model can capture correlations between variables at the expense of increasing the number of parameters ($$K$$ times).
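Putting the E and M steps of section 5 together, here is a compact NumPy sketch of EM for a one-dimensional GMM (an illustrative addition, not code from the original notes; initialization and stopping are deliberately simplistic):

```python
import numpy as np

def em_gmm_1d(x, K=2, n_iter=100, seed=0):
    """Fit a 1-D Gaussian mixture by EM; returns (pi, mu, sigma)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    pi = np.full(K, 1.0 / K)
    mu = rng.choice(x, size=K, replace=False)   # crude initialization
    sigma = np.full(K, x.std())

    for _ in range(n_iter):
        # E step: responsibilities q(z_nk), shape (n, K)
        log_dens = (-0.5 * ((x[:, None] - mu) / sigma) ** 2
                    - np.log(sigma) - 0.5 * np.log(2 * np.pi) + np.log(pi))
        dens = np.exp(log_dens)
        resp = dens / dens.sum(axis=1, keepdims=True)

        # M step: weighted maximum-likelihood updates, as written above
        Nk = resp.sum(axis=0)
        pi = Nk / n
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)

    return pi, mu, sigma

# Example data with two clear modes (similar to the sampler sketched earlier).
x = np.concatenate([np.random.normal(-2, 0.5, 500), np.random.normal(3, 0.8, 300)])
print(em_gmm_1d(x, K=2))
```

The array resp plays the role of $$q(z_{nk})$$, and the M step is exactly the weighted update written in section 5.2.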
# Is it possible to get all possible sums with the same probability if I throw two unfair dice together? I throw 2 unfair dice, suppose that $$p_i$$ is the probability that the first die can give an $$i$$ if I throw it, for $$i =1,2,3,..6$$ and $$q_i$$ the probability that the second die can give an $$i$$. If I throw the dice together, is it possible to get all possible sums $$2,3,4,...12$$ with the same probability? Here's what I've tried so far, the probability that I get a $$2$$ if I throw both dice is $$p_1q_1$$, the probability that I get $$3$$ is $$p_1q_2+p_2q_1$$, and generally the probability that I get $$n$$ is $$\sum_{i+j=n} p_iq_j$$ where $$i=1,2,...6$$, $$j=1,2,...6$$. So now in order for all possible sums to appear with the same probability, it must be true that $$p_1q_1=p_1q_2+p_2q_1$$ $$p_1q_2+p_2q_1=p_1q_3+p_2q_2+p_3q_1$$ $$........$$ has a solution, this is where I am stuck I can't find a way to prove that the system above has a solution, can you help? • It's clear that the system will have the zero solution, I want to prove it has non-zero solution where $p,q \leq 1$. – Guin_go Apr 15 at 6:48 • So, you want the probability of a sum of $2$ to be the same as that of $3$ etc? So, they will all be $\frac{1}{11}$. You don't mean the same as with fair dice where these probabilities are not equal. However, that suggests a different interesting question: could you have dice which are unfair by themselves yet as a pair behave as a fair pair. – badjohn Apr 15 at 8:15 • @badjohn I am trying to see if 2 unfair dice are rolled together will be a fair pair. – Guin_go Apr 15 at 8:31 • In that case, remember that even with a fair pair, the probability of $2$ is not the same as the probability of $3$. – badjohn Apr 15 at 8:49 • @badjohn your "different interesting question" is indeed interesting. Perhaps you can ask it as a separate question. It is easy to see that if two unfair dice are unfair in exactly the same way then the pair can't mimic the sums of a standard fair pair, but it isn't so obvious for a pair of differently biased dice. I suspect that there are enough degrees of freedom to make it work. If that is the case, a minor variation of the question is if you can do so with the additional constraint that the probability of rolling a double is 1/6. – John Coleman Apr 16 at 16:08 This is a classical problem. Without changing the problem, we can let the digits on the dice be $$0, \ldots, 5$$ instead of $$1, \ldots, 6$$ to make our notation easier. Now we make two polynomials: $$P(x) = \sum_{i=0}^5 p_ix^i,\qquad Q(x) = \sum_{i=0}^5q_ix^i.$$ Now we can succinctly phrase your condition on $$p_i, q_i$$: it is satisfied if and only if $$P(x)Q(x) = \frac1{11}\sum_{i=0}^{10} x^i.$$ Let's multiply both sides by $$11 \times (x-1)$$, and you get $$11(x-1)P(x)Q(x) = x^{11} - 1.$$ The 11 zeroes of the polynomial on the right are the 11th roots of unity, which means those are also the zeroes of the polynomial on the left. The term $$(x-1)$$ takes care of one of the zeroes, and since $$P, Q$$ are both of degree 5, that means that they each have to have 5 of the other 10 zeroes. But now note: besides $$1$$, all of the 11th roots of unity are complex numbers, while $$P, Q$$ are real polynomials. If a complex number is the root of a real polynomial, then so is its complex conjugate. That means that $$P, Q$$ must each have an even number of complex zeroes, but we just showed that they also have to have 5 each. 
We have reached a contradiction: such $$P, Q$$, and thus such distributions $$p_i, q_i$$, do not exist. • Here's the same argument, but without using complex numbers: the LHS and the RHS have the same (real) zeros. Therefore, the zeros of P(x) and Q(x) are also the zeros of x^11 - 1. Since the graph of the RHS crosses the x-axis only at x=1, and since P(x), Q(x) are each odd-degree polynomials with at least one root, each is divisible by (x-1). Dividing the LHS and RHS by (x-1), we get that (x-1)^2 divides x^10 + x^9 + ... + 1, a contradiction. – Pinkwater Apr 15 at 17:47 • In the more general case where both dice have $n\ge 2$ sides (note that I still assume that both dice have the same number of sides), faces being numbered $0,1,2,\ldots,n-1$, the formula becomes$$(2n-1)(x-1)P(x)Q(x)=x^{2n-1}-1$$Both polynomials $P$ and $Q$ have degree $n-1$. Is there an argument if $n$ is odd, so $n-1$ is even? – Jeppe Stig Nielsen Apr 17 at 10:47 • @JeppeStigNielsen At the heart of that problem is the question whether the polynomial $$\frac{x^{2n+1}-1}{x-1}=x^{2n}+x^{2n-1}+\ldots+x^2+x+1,$$ is a product of two polynomials $P$ and $Q$ of degree $n$ with all positive coefficients and $P(1)=Q(1)=1$. In particular you need $\varphi(2n+1)\leq n$, so $n$ can't be very small. The smallest value $n=52$ doesn't allow such a factorization. It seems unlikely that such a factorization exists for any $n$. – Servaes Apr 17 at 15:39 • @Servaes Your condition $\phi(2n+1) \leq n$ suggests that you think that $P$ and $Q$ will have rational coefficients (since $\phi(2n+1)$ is the degree of the cyclotomic polynomial). But it is perfectly natural to consider this question with real coefficients, and then the odds seem much better. – David E Speyer Apr 18 at 1:27 • To make a small observation, all roots of $x^{2n}+\cdots +x+1$ are complex, so $P$ and $Q$ must have even degree, so Servaes's $n$ (which is half of Jeppe's $2n-1$) must be even. – David E Speyer Apr 18 at 1:37 Let's assume that this is possible. We can derive a contradiction from this assumption. The probability of rolling a total of $$2$$ must be $$1/11$$, and the probability of rolling a total of $$12$$ must also be $$1/11$$, so \begin{align} p_1 q_1 &= 1/11,\ \text{and}\\ p_6 q_6 &= 1/11. \end{align} The probability of rolling a total of $$7$$ must also be $$1/11$$. At the same time, the probability of rolling a total of $$7$$ is greater than or equal to $$p_1 q_6 + p_6 q_1$$. So, $$p_1 q_6 + p_6 q_1 \le 1/11.$$ The numbers $$p_1$$, $$p_6$$, $$q_1$$, and $$q_6$$ are all probabilities, so they can't be negative. Also, if any one of them were $$0$$, then either $$p_1 q_1$$ or $$p_6 q_6$$ would be $$0$$, which we already know is not the case. So, all four of these numbers are positive. This means that $$p_1 q_6$$ and $$p_6 q_1$$ are both positive, and so \begin{align} p_1 q_6 &< 1/11,\ \text{and}\\ p_6 q_1 &< 1/11. \end{align} If we combine these inequalities with the equations above, we find that \begin{align} p_1 q_6 &< p_1 q_1,\\ p_6 q_1 &< p_1 q_1,\\ p_1 q_6 &< p_6 q_6,\ \text{and}\\ p_6 q_1 &< p_6 q_6. \end{align} Since the numbers $$p_1$$, $$p_6$$, $$q_1$$ and $$q_6$$ are all positive, we may cancel them when they appear on both sides of one of these inequalities. Doing that, we conclude that \begin{align} q_6 &< q_1,\\ p_6 &< p_1,\\ p_1 &< p_6,\ \text{and}\\ q_1 &< q_6. \end{align} But this is impossible. • +1 I think this is the best answer, as it is the only one that generalizes readily to dice with other numbers of sides. 
– Yly Apr 16 at 6:56 • @Yly - the generating polynomial answers work equally well for all dice with an even number of sides. But it is true that the particular contradiction they find fails for dice with an odd number of sides. – Paul Sinclair Apr 16 at 14:00 • @MarkRansom But the question doesn't assume that they're fair dice. – Tanner Swett Apr 16 at 19:02 • @MarkRansom : Neither the title nor the Question (even checking prior versions) nor this answer assert any expectations. From where is this comparison/contrast with fair dice coming? – Eric Towers Apr 16 at 21:48 • @MarkRansom I'm trying to figure out what you mean when you say that "the question assumes something which already isn't true." I think you're talking about the asker's statement that "I can't find a way to prove that the system above has a solution," which, as you're pointing out, seems to assume (falsely) that the system does have a solution. Other than that statement, the question doesn't seem to make any false assumptions. – Tanner Swett Apr 17 at 0:19 The probability generating function of the dice are $$P(x) = \sum_{i=1}^6 p_i x^i$$ and $$Q(x) = \sum_{i=1}^6 q_i x^i$$. The probability generating function for their sum is $$R(x) = P(x)Q(x)$$. You want all possible sums $$2, \ldots, 12$$ to have the same probability, or equivalently you want $$R(x) = \frac{1}{11} x^2 (1+x+\cdots+x^{10})$$. Hence an equivalent way to state your problem is: can we factor $$1+x+\cdots+x^{10} = p(x)q(x)$$ where $$p, q \in \mathbb{R}_{\geq 0}[x]$$ have degree 5? We can factor $$1+x+\cdots+x^{10} = \frac{1-x^{11}}{1-x}$$ over the complex numbers as $$\prod_{k=1}^{10} (x-\exp(2\pi i k/11))$$. Grouping together complex conjugate pairs using \begin{align*} (x-\exp(2\pi i k/m))(x-\exp(-2\pi i k/m)) &= x^2 - (\exp(2\pi i k/m) + \exp(-2\pi i k/m))x + 1 \\ &= x^2 - 2\cos(2\pi k/m) x + 1 \end{align*} gives \begin{align*} 1 + x + \cdots + x^{10} = \prod_{k=1}^5 (x^2 - 2\cos(2\pi k/11)x + 1). \end{align*} But that says $$1 + x + \cdots + x^{10}$$ has no real factors of degree $$5$$ whatsoever, so there is no solution. Suppose you weaken the requirements and don't insist the dice both have values in $$\{1, \ldots, 6\}$$. The subsets of factors with non-negative coefficients are $$\{\}$$,$$\{3\}$$,$$\{4\}$$,$$\{5\}$$,$$\{2,4\}$$,$$\{2,5\}$$,$$\{3,4\}$$,$$\{3,5\}$$,$$\{4,5\}$$,$$\{2,3,4\}$$,$$\{2,3,5\}$$,$$\{2,4,5\}$$,$$\{3,4,5\}$$,$$\{2,3,4,5\}$$,$$\{1,2,3,4,5\}$$. Since $$1$$ is only in one of these, namely $$\{1, 2, 3, 4, 5\}$$, the only complementary pair is the trivial one, $$\{\}, \{1,2,3,4,5\}$$ and there are no interesting solutions to the weaker version either. • Not to overstate this, but this might be my favorite answer in the history of MSE. What a great piece of math. – Cade Reinberger Apr 15 at 14:57 • Cool indeed :-) Just curious, did you explore whether $\{3\},\{1,2,4,5\}$ is the only factorization that has nonnegative coefficients, or could there be others? – David Z Apr 15 at 21:04 • @DavidZ Sadly there was a typo in my code (I was missing the 2 in front of the $\cos$ term). There are actually no non-trivial solutions to the weaker problem with differing numbers of pips on the dice. The answer has been edited. – Joshua P. Swanson Apr 16 at 2:02 • Ah well, as far as I'm concerned, knowing that there are no solutions to that variant of the problem is just as interesting as finding one! – David Z Apr 16 at 3:48 • Not sure what you mean by the negation of "the dice both have values in $\{1,\dotsc,6\}$". 
If the dice faces can be anything, then the usual $\{1,2,3,4,5,6\}$ and $\{1,7,13,19,25,31\}$ works. – obscurans Apr 16 at 4:00

Let's take a simpler system: two coins $$\{C_1,C_2\}$$, each showing an outcome in $$\{1,2\}$$, with probabilities $$p_1,p_2$$ for $$C_1$$ and $$q_1,q_2$$ for $$C_2$$. Given condition: after the two coins are thrown, each possible sum in $$\{2,3,4\}$$ has the same probability. Question: Can you find some $$p_1,p_2,q_1,q_2$$ that satisfy the condition?

Here the given condition is $$p_1q_1 = p_1q_2 + p_2q_1 = p_2q_2 -(G)$$. But we also have the implicit conditions: $$p_1 + p_2 = 1-(eq1)$$, $$q_1+q_2 = 1-(eq2)$$. Consider the first and third expressions of the given condition (G). \begin{align*} p_1 q_1 &= p_2q_2\\ p_1 q_1 &= (1-p_1)(1-q_1)\\ p_1 q_1 &= 1-p_1-q_1+p_1q_1\\ p_1+q_1 &= 1\tag{eq3}\\ p_2+q_2 &= 1\tag{eq4}\\ \end{align*} Comparing the equations (eq1) and (eq3), and (eq1) and (eq4), we get $$p_2 = q_1$$ and $$p_1 = q_2$$. Finally, from the first two equalities of the given condition, we have: \begin{align*} p_1q_1 &= p_1q_2 + p_2q_1\\ p_1p_2 &= p_1p_1 + p_2p_2\\ -p_1p_2 &= p_1^2 + p_2^2 - 2p_1p_2\\ -p_1p_2 &= (p_1 - p_2)^2\\ \end{align*} Now we have a contradiction: the left side is at most zero and the right side is at least zero, and both can only be zero if $$p_1=p_2=0$$, which contradicts (eq1). Since we can't find a solution for this simpler system, with just 2 equalities, it is quite unlikely that a solution exists for the more complicated die system with its 10 equalities. Note: This is of course not a proof that no solution exists for the die system.
• If the possible outcomes for each coin are: $\{1, 2\}$ then aren't the possible sums $\{2, 3, 4 \}$? – badjohn Apr 15 at 8:10
• thanks @badjohn. I corrected it now. – Rahul Madhavan Apr 15 at 8:13

If you want to generate numbers 1 through 12 with uniform probability, it is possible by re-labeling the faces of fair dice. One die has the faces labeled 1 through 6. The other has its faces labeled 0, 6, 12, 18, 24, 30, so that every total from 1 through 36 is equally likely. If the total is not in the 1 through 12 range, roll again. This is an acceptance-rejection technique. Also, there is a theorem in probability theory that any desired probability can be constructed by a sequence of Bernoulli trials (coin flips). Simulate coin flips by having the faces of one die be 0, 0, 0, 1, 1, 1, another die 0, 0, 0, 2, 2, 2, a third die 0, 0, 0, 4, 4, 4, and a fourth die 0, 0, 0, 8, 8, 8. You will roll 0 through 15 with equal probability, and roll again if your number is not in the 1 through 12 range. Another acceptance-rejection method, with dice simulating coins. Forgive my "engineering" approach; I can't help it.
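The acceptance-rejection idea in the last answer is easy to test numerically. Below is a minimal Monte Carlo sketch in Python (the dice tuples and the helper name are my own, written to match the four-dice construction above, not code from the answer); each total from 1 to 12 should appear with frequency close to 1/12.

```python
import random
from collections import Counter

# Four relabeled dice that simulate independent fair coin flips worth 1, 2, 4 and 8.
DICE = [(0, 0, 0, 1, 1, 1),
        (0, 0, 0, 2, 2, 2),
        (0, 0, 0, 4, 4, 4),
        (0, 0, 0, 8, 8, 8)]

def roll_uniform_1_to_12(rng=random):
    """Acceptance-rejection: re-roll until the total lands in 1..12."""
    while True:
        total = sum(rng.choice(faces) for faces in DICE)  # uniform on 0..15
        if 1 <= total <= 12:
            return total

trials = 120_000
counts = Counter(roll_uniform_1_to_12() for _ in range(trials))
for total in range(1, 13):
    print(total, round(counts[total] / trials, 4))  # each close to 1/12 ≈ 0.0833
```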
# Solve the equation. 3x^2-5=16

Solve the equation. $3{x}^{2}-5=16$

Jaylynn Huffman

For this question, first add 5 to both sides of the equation to get: $3{x}^{2}=21$ Now, we isolate x by dividing both sides of the equation by 3 to get: ${x}^{2}=7$ At this point, we take the square root of both sides of the equation to get: $x=±\sqrt{7}$ NOTE: We need the $±$ symbol since squaring a positive or a negative number always gives a positive result.

Dawson Downs

$3{x}^{2}-5=16$
$3{x}^{2}=16+5$
$3{x}^{2}=21$
${x}^{2}=21/3$
${x}^{2}=7$
$x=±\sqrt{7}$
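Neither answer shows a check, so here is a quick verification with SymPy (my addition, not part of either answer); it returns the same two roots.

```python
from sympy import symbols, solve, sqrt

x = symbols('x')
roots = solve(3*x**2 - 5 - 16, x)   # solve 3x^2 - 5 = 16
print(roots)                         # [-sqrt(7), sqrt(7)]
assert set(roots) == {sqrt(7), -sqrt(7)}
```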
MobLab Guides

## Game Description

A jar contains only blue and red balls. Each student in a group guesses, in a randomly determined order, whether the jar contains mostly blue or mostly red balls. A student sees the guess of each player who has already guessed (public information) and the color of 1 randomly selected ball, which is subsequently returned to the jar (private information).

### Learning Objective 1: Information Cascades (Herd Behavior)

In a social learning environment, it is often rational for an individual to ignore her private information and follow the herd (i.e., imitate the choice of her predecessors).

### Learning Objective 2: Private vs. Public Information

While players will generally weigh private and public information appropriately, it is not uncommon for players to place too much weight on private information.

Note: We follow the convention in the literature and say that a herd occurs when a student makes the same choice as her immediate predecessor(s), and an information cascade occurs when a player rationally makes the same choice as her predecessor regardless of her private signal (i.e., regardless of her ball's color).

## Brief Instructions

Because the final student in a game does not make his choice until all others have done so, you want to limit group size. To alleviate student wait time while still allowing the development of information cascades, we suggest a Group Size of either 5 or 6.

The fraction of balls that are the majority color (% of Majority) is the primary parameter you can adjust. As discussed in the Equilibrium section, if players follow their own signal when indifferent, a rational information cascade occurs the first time two balls of the same color are drawn sequentially, and it occurs on the incorrect choice when they are the minority color. Thus, increasing % of Majority from the default of 60% increases the likelihood of a cascade while reducing the likelihood of an incorrect one.

We recommend a few repetitions in order to facilitate learning. With Periods > 1, a player maintains both his group and his order within that group across periods. By replaying a one-period game, you can ensure that students play in different roles.

Finally, you may choose to run this game outside of the classroom. Choose how long the game will be available to students in the Duration panel, and check All Play Only Robots to have each player grouped with automated players. We describe their strategies in the Robot Play section.

## Results

For each period, we present two graphs summarizing the prevalence of herding and cascades. You can switch between periods using the Go To: drop-down menu.

The first graph (Figure 1) shows both the likelihood of making a correct guess and the likelihood of making a choice that ignores private information, for each player role. Later players should both be more likely to guess correctly and more likely to play against their own signal.

The second graph shows the distribution of herd sizes. The size of a herd is the total number of consecutive players making the same choice. (For example, RRRR is counted as 4 even though it contains 2 3-player runs.) For each herd size, we report the fraction of all herds that are of a particular size.

## Equilibrium Prediction

#### Summary

If a student does not follow her private signal whenever it leaves her indifferent between her two choices, then an information cascade occurs immediately: all students make the same choice as the student choosing first.
If a student follows her private signal whenever indifferent, then an information cascade occurs after the same color ball is drawn twice in a row.

#### Notation:

We use $$\alpha$$ for the fraction of balls that are the majority color (% of Majority). We use upper-case letters for choices and lower-case letters for the ball drawn by a player. Thus $$R_2$$ means player 2 guessed that the jar contained mostly red balls, and $$b_2$$ means she drew a blue one. Finally, $$Pr_i[J=B|\cdot]$$ is player $$i$$'s conditional belief that the jar contains mostly blue balls.

#### Analysis Summary:

Each player uses Bayes' Rule to update her beliefs and guesses that the jar contains mostly a given color if the calculated likelihood that it contains mostly that color exceeds 50%.

Assume player 1 draws blue. As $$Pr_1[J=B|b_1]=\alpha$$, she chooses Blue. Assume player 2 also draws blue. By Bayes' Rule, $$Pr_2[J=B|B_1,b_2]=\frac{\alpha^2}{\alpha^2+(1-\alpha)(1-\alpha)}>\frac{1}{2},$$ so unsurprisingly player 2 guesses blue. Note that anytime the public belief starts at $$\frac{1}{2}$$ and two blues are sequentially drawn, this will be the posterior belief of the person drawing the second blue.

Things get more interesting if player 2 draws a different color than player 1, as $$Pr_2[J=B|B_1,r_2]=\frac{1}{2}$$. We consider two cases.

First, let us assume that whenever indifferent, a player joins the herd and chooses what her predecessor did (as opposed to following her private signal). Because such a player makes the same choice regardless of her private signal, the public belief remains unchanged at $$\alpha$$. In other words, a herd always starts with player 1! (This is an information cascade because after player 1's choice, no subsequent private signals affect choices.)

We now assume the other extreme: whenever indifferent, a player is a contrarian and makes the opposite choice of his predecessor (that is, he follows his private signal when indifferent). Under this assumption, we consider player 3's beliefs after each of player 2's potential guesses. First, if player 2 makes a different choice than player 1, we have $$Pr_3[J=B|B_1,R_2]=\frac{1}{2}$$. That is, we know that player 1 drew blue and player 2 drew red, and this information favors neither color being the majority. Second, if player 2 makes the same choice as player 1 (but would have made the opposite choice if he drew the other color), as above we have $$Pr_3[J=B|B_1,B_2]=\frac{\alpha^2}{\alpha^2+(1-\alpha)(1-\alpha)}$$. If player 3 now draws red, we have $$Pr_3[J=B|B_1,B_2,r_3]=\frac{(1-\alpha)\frac{\alpha^2}{\alpha^2+(1-\alpha)(1-\alpha)}}{(1-\alpha)\frac{\alpha^2}{\alpha^2+(1-\alpha)(1-\alpha)}+\alpha(1-\frac{\alpha^2}{\alpha^2+(1-\alpha)(1-\alpha)})}=\alpha>\frac{1}{2}.$$ Therefore, even assuming players follow their own signal when indifferent, an information cascade occurs after two consecutive players make the same choice.

#### Interpretation:

While information cascades are likely even if players refuse to join the herd when indifferent, comparing the above cases provides insight into the potential inefficiency of trying to make inferences from the choices, as opposed to the information, of others. When players follow the herd when indifferent, the indifferent player provides no information about his private signal, and the resulting herd is based on a single signal.
If players instead reverse the herd when indifferent, a player reveals her private information (delaying the onset of the herd), and any resulting herd will be based on stronger information (as two consecutive draws of the same color are needed). To quantify the value of contrarians, consider the equilibrium likelihood that player 5 chooses incorrectly. When the majority color makes up 60% of the jar, without contrarians player 5 chooses incorrectly 40% of the time, compared to approximately 32% with contrarians. With $$\alpha=80\%$$, the likelihood falls only to 20% without contrarians, but all the way to 8% with contrarians.

## Robot Play

MobLab robot players roughly follow equilibrium play, ignoring off-the-equilibrium-path moves. That is, if a robot's immediate two predecessors make the same choice, its guess is the same as its immediate predecessor. Otherwise, its guess equals the color of the ball it draws.
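The equilibrium numbers above are easy to check by simulation. Below is a minimal Monte Carlo sketch (Python; the function names and trial count are my own choices, not part of MobLab) under the "follow your own signal when indifferent" convention: a player ignores her own draw only once the revealed draws favor one color by at least two, which is when the cascade described above has started.

```python
import random

def last_player_wrong(alpha, n_players=5, rng=random):
    """One group: returns True if the last player guesses the minority color.
    Players follow their own draw (and thereby reveal it) until the revealed
    draws favor one color by two; from then on everyone joins the cascade."""
    net = 0            # (# majority-color draws) - (# minority-color draws) revealed so far
    guess_majority = False
    for _ in range(n_players):
        drew_majority = rng.random() < alpha
        if abs(net) >= 2:                 # cascade: own draw is ignored
            guess_majority = net > 0
        else:                             # follow (and reveal) own draw
            guess_majority = drew_majority
            net += 1 if drew_majority else -1
    return not guess_majority

def error_rate(alpha, trials=200_000):
    return sum(last_player_wrong(alpha) for _ in range(trials)) / trials

for alpha in (0.6, 0.8):
    # Without contrarians everyone copies player 1, so the error rate is 1 - alpha.
    print(alpha, 1 - alpha, round(error_rate(alpha), 3))
```

The middle column reproduces the 40% and 20% error rates without contrarians, and the simulated values with contrarians land near the 32% and 8% figures quoted above.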
# Recent history for hwlau 6 years ago posted a comment 7 years ago posted a comment Meaning of $\int \phi^\dagger \hat A \psi \:\mathrm dx$ 7 years ago posted a comment Do all massless particles (e.g. photon, graviton, gluon) necessarily have the same speed $c$? 7 years ago question answered What is phenomenological equation and phenomenological model? 7 years ago posted a comment Quantum dimension in topological entanglement entropy 7 years ago answer commented on Why is the ground state of the ferromagnetic tetrahedron threefold degenerate? 7 years ago answer commented on Why is the ground state of the ferromagnetic tetrahedron threefold degenerate? 7 years ago posted an answer Why is the ground state of the ferromagnetic tetrahedron threefold degenerate? 7 years ago answer commented on Why is the ground state of the ferromagnetic tetrahedron threefold degenerate? 7 years ago posted a comment Why is the ground state of the ferromagnetic tetrahedron threefold degenerate? 7 years ago answer commented on Why is the ground state of the ferromagnetic tetrahedron threefold degenerate? 7 years ago posted a comment Why is the ground state of the ferromagnetic tetrahedron threefold degenerate? 7 years ago posted a comment How large is the smallest object that can be detected at a given wavelength? 7 years ago received upvote on answer Double Slit Experiment: How do scientists ensure that there's only one photon? 7 years ago received upvote on answer Double Slit Experiment: How do scientists ensure that there's only one photon? 7 years ago received upvote on answer Double Slit Experiment: How do scientists ensure that there's only one photon? 7 years ago received upvote on answer Double Slit Experiment: How do scientists ensure that there's only one photon? 7 years ago received upvote on answer Double Slit Experiment: How do scientists ensure that there's only one photon? 7 years ago received upvote on answer Double Slit Experiment: How do scientists ensure that there's only one photon? 7 years ago received upvote on answer Double Slit Experiment: How do scientists ensure that there's only one photon?
Student[LinearAlgebra] - Maple Programming Help Home : Support : Online Help : Education : Student Package : Linear Algebra : Computation : Queries : Student/LinearAlgebra/IsDefinite Student[LinearAlgebra] IsDefinite test for positive or negative definite Matrices Calling Sequence IsDefinite(A, q) Parameters A - square Matrix q - (optional) equation of the form query = attribute where attribute is one of 'positive_definite', 'positive_semidefinite', 'negative_definite', or 'negative_semidefinite' Description • The IsDefinite(A, query = 'positive_definite') returns true if $A$ is a real symmetric or a complex Hermitian Matrix and all the eigenvalues are determined to be positive.  This command is equivalent to IsDefinite(A), that is, the default query is for positive definiteness. Similarly, for real symmetric or complex Hermitian Matrices, the following calling sequences return the indicated result. IsDefinite(A, query = 'positive_semidefinite') returns true if all the eigenvalues are determined to be non-negative. IsDefinite(A, query = 'negative_definite') returns true if all the eigenvalues are determined to be negative. IsDefinite(A, query = 'negative_semidefinite') returns true if all the eigenvalues are determined to be non-positive. If the eigenvalues are determined to be other than described in the cases above, a value of false is returned. If any of the conditions on the eigenvalues cannot be resolved, a boolean expression representing the condition which must be satisfied for the query to resolve to "true" is returned. • The definition of  positive definite is that, for all column Vectors $x$, ${x}^{*}.A.x>0$, where  ${x}^{*}$ is the Hermitian transpose of $x$. The definitions for positive semidefinite, negative definite, and negative semidefinite involve reversal of the inequality sign, or relaxation from a strict inequality. • For real non-symmetric (complex non-Hermitian) Matrices, definiteness is established by considering the symmetric (Hermitian) part of $A$, that is,  $\frac{1}{2}\left(A+{A}^{+}\right)$ ($\frac{1}{2}\left(A+{A}^{*}\right)$). Examples > $\mathrm{with}\left(\mathrm{Student}[\mathrm{LinearAlgebra}]\right):$ > $A≔\mathrm{DiagonalMatrix}\left(\left[-5,0,-1\right]\right)$ ${A}{≔}\left[\begin{array}{rrr}{-}{5}& {0}& {0}\\ {0}& {0}& {0}\\ {0}& {0}& {-}{1}\end{array}\right]$ (1) > $\mathrm{IsDefinite}\left(A\right)$ ${\mathrm{false}}$ (2) > $\mathrm{IsDefinite}\left(A,'\mathrm{query}'='\mathrm{positive_semidefinite}'\right)$ ${\mathrm{false}}$ (3) > $\mathrm{IsDefinite}\left(A,'\mathrm{query}'='\mathrm{negative_semidefinite}'\right)$ ${\mathrm{true}}$ (4) > $B≔⟨⟨1,8,3⟩|⟨-4,5,2⟩|⟨6,1,0⟩⟩$ ${B}{≔}\left[\begin{array}{rrr}{1}& {-}{4}& {6}\\ {8}& {5}& {1}\\ {3}& {2}& {0}\end{array}\right]$ (5) > $\mathrm{IsDefinite}\left(B\right)$ ${\mathrm{false}}$ (6) > $C≔⟨⟨1,2+I⟩|⟨2-I,5⟩⟩$ ${C}{≔}\left[\begin{array}{cc}{1}& {2}{-}{I}\\ {2}{+}{I}& {5}\end{array}\right]$ (7) > $\mathrm{IsDefinite}\left(C\right)$ ${\mathrm{false}}$ (8) > $\mathrm{IsDefinite}\left(C,'\mathrm{query}'='\mathrm{positive_semidefinite}'\right)$ ${\mathrm{true}}$ (9) > $\mathrm{IsDefinite}\left(C,'\mathrm{query}'='\mathrm{negative_semidefinite}'\right)$ ${\mathrm{false}}$ (10)
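These queries are easy to mirror numerically outside Maple. The following is a rough NumPy sketch (my own helper, not part of the Student[LinearAlgebra] package) of the same test: take the Hermitian part of the Matrix and inspect its eigenvalues. Unlike IsDefinite it works in floating point, so a small tolerance replaces the exact conditions.

```python
import numpy as np

def is_definite(A, query="positive_definite", tol=1e-12):
    """Eigenvalue test applied to the Hermitian part (A + A^*)/2, as described above."""
    A = np.asarray(A, dtype=complex)
    H = (A + A.conj().T) / 2             # Hermitian part; equals A when A is already Hermitian
    eigs = np.linalg.eigvalsh(H)         # real eigenvalues, in ascending order
    checks = {
        "positive_definite":     np.all(eigs >  tol),
        "positive_semidefinite": np.all(eigs >= -tol),
        "negative_definite":     np.all(eigs < -tol),
        "negative_semidefinite": np.all(eigs <=  tol),
    }
    return bool(checks[query])

A = np.diag([-5, 0, -1])
print(is_definite(A))                            # False, matching IsDefinite(A)
print(is_definite(A, "negative_semidefinite"))   # True
C = np.array([[1, 2 - 1j], [2 + 1j, 5]])
print(is_definite(C, "positive_semidefinite"))   # True: eigenvalues are 0 and 6
```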
# Testing RED-I with a sample REDCap Project

## Purpose

The "vagrant" folder was created with the goal of making testing of the RED-I software as easy as possible. It contains the Vagrantfile which allows you to start a virtual machine capable of running the REDCap software, which means that during virtual machine creation the Apache and MySQL software is installed without any user intervention.

There are a few important things to note before proceeding with running RED-I to import data into a sample REDCap project:

• You have to install the vagrant and virtual box software
• You have to obtain the closed-source REDCap software from http://project-redcap.org/
• You have to obtain a Makefile.ini file in order to be able to execute tasks from the Makefile

## Steps

### 1. Install vagrant and virtual box

On a linux machine run:

• sudo apt-get install vagrant
• sudo apt-get install virtualbox

On a mac machine: For more details about the Vagrant software you can go to the why-vagrant page.

### 2. Configure the VM

As mentioned above, you have to obtain a copy of the REDCap software from http://project-redcap.org/ and save it as a "redcap.zip" file in the "config-example/vagrant-data" folder. This ensures that in the later steps the bootstrap.sh script can extract the files to the virtual machine path "/var/www/redcap". Now execute the following commands to complete the configuration:

cd ./vagrant   # must be in the redi/vagrant/ directory
make copy_config_example
make copy_redcap_code
make copy_project_data
make show_config

### 3. Start the VM

To use the vagrant VM you will need to install Vagrant and Virtual Box. With these packages installed, follow this procedure to use a VM template:

cd ./vagrant   # must be in the redi/vagrant/ directory
vagrant up

Vagrant will instantiate and provision the new VM. The REDCap web application should be accessible in the browser at http://localhost:8998/redcap/. If port 8998 is already in use, vagrant will choose a different port automatically. Read the log of "vagrant up" and note the port to be used.

### 4. Verify the VM is running

Verify that the virtual machine is working properly by accessing it using:

vagrant ssh

### 5. Import Enrollment Data using RED-I

Import the sample subject list into REDCap by executing:

make rc_enrollment

Note: This step is necessary because, in order to associate data with subjects, the list of subjects needs to exist in the REDCap database.

### 6. Import Electronic Health Records using RED-I

Import the sample electronic health records into REDCap by executing:

make rc_post

Verify that the output of this command ends with:

You can review the summary report by opening: report.html in your browser

If this step succeeded, you have verified that RED-I can be used to save time by automating EHR data imports into REDCap. Congratulations! You can now add your own REDCap project and start using RED-I to move data. Please refer to the Add new REDCap Project and API Key document for help.
## Precalculus (6th Edition) $\frac{\pi}{3}$, $\frac{5\pi}{3}$ $2\sec x+1=\sec x+3$ $\sec x+1=3$ $\sec x=2$ $\cos x=\frac{1}{2}$ Referring to the unit circle (and remembering that cosine is positive in Quadrants I and IV), we see that the only solutions in $[0, 2\pi)$ are $\frac{\pi}{3}$ and $\frac{5\pi}{3}$.
# Maximum value of f(x,y)

If $$f(x,y) = \sqrt{x^2+(y-1)^2} + \sqrt{(x-3)^2 + (y-4)^2} - \sqrt{x^2+y^2} - \sqrt{(x-1)^2+y^2}$$ for $$x,y\in \mathbb{R}$$, and the maximum value of $$f(x,y)$$ is $$a+\sqrt{b}$$, where $$a,b$$ are positive integers, then find $$a+b$$.
# Where is the acid in DNA/RNA?

It is well known that the A in both DNA and RNA stands for acid, but where is the acid in the chemical formula of the compound, and under which acid-base theory (Arrhenius, Brønsted-Lowry, or Lewis) is it classified as one?

Just to better explain my question: when I say DNA I am essentially thinking about nucleotides linked together. Am I missing something in the structure that contains my answer, or is this the way to go?

## 2 Answers

The phosphate ester groups which connect the nucleotides contain one acidic proton at their OH group, and two at the end of each strand. As you can see here, here and in the picture below (with the groups marked turquoise), most of these groups are deprotonated depending on pH. They act as Brønsted-Lowry acids.

> Just to better explain my question, when I say DNA I am essentially thinking about nucleotides linked together, am I missing something in the structure that contains my answer or is this the way to go?

Yes, you are. *NA consist of nucleotides linked by phosphodiester fragments. This leaves one hydroxyl of the phosphate fragment intact, so it can act as an acid. Look closely: there is a $\ce{P-O-}$ fragment, which means that there is a cation somewhere nearby. So, *NA are Brønsted acids, usually occurring as salts.
Poverty, mental health and rhesus monkeys

Today I came across a very interesting paper titled "Values encoded in orbitofrontal cortex are causally related to economic choices" by Ballesta, Shi, Conen and Padoa-Schioppa. I haven't read and analyzed the paper fully, partly because of the many statistical tools that I will have to learn to assess it carefully. However, I did manage to read some important bits, and it set me thinking about how it directly applies to so many of us in our daily lives.

In this paper, the researchers claim that our subjective values of things are hard-coded in our orbitofrontal cortices. That is just a fancy way of saying that if we like burgers more than fries, this information is stored in a part of the brain that lies directly above our eyes. Hence, every time you're offered a choice between burgers and fries, that part of your brain implores you to choose burgers.

Experiment

The following experiment was done on rhesus monkeys. They were given a choice between 2 drops of grape juice and 6 drops of peppermint tea. Depending upon their subjective preferences (as coded in their orbitofrontal cortices), they would prefer one or the other. For example, let us assume that a monkey named Tim would mostly choose the 2 drops of grape juice over the peppermint tea.

How exactly is Tim offered these choices? Tim is first shown a picture of 2 drops of grape juice. Then, after a 1-second delay, he is shown a picture of 6 drops of peppermint tea. He is then asked to choose between the two images. He consistently chooses the grape juice. However, suppose a current of 100 $\mu A$ is passed through his orbitofrontal cortex every time he is shown the grape juice image. The passage of this current causes Tim to start choosing peppermint tea slightly more often than before. Note that Tim does not develop a clear preference for peppermint tea as such; it is just that his choices become more randomized. Why does this happen? The authors of the paper think that the current interferes with the working of the orbitofrontal cortex, which was initially asking him to consistently choose the grape juice.

Depression and poverty

If you've been depressed before, you can surely empathize with this. Let us suppose that in a happy and stable state of mind, you'd always prefer the color red over blue. Given the choice between two t-shirts of those colors, you'd consistently choose the red one. However, once that depression hits, you don't really care anymore. You'd start choosing the blue one more often than before, because they're all the same, and it doesn't matter. Our brain just refuses to do the computation that leads us to conclude that we prefer red over blue (yes, even choices are a result of mental computation). And the absence of computation leads to random choice.

What about poverty? Imagine that poverty causes a small current to run through your orbitofrontal cortex. This causes your brain's computational capacities to plummet, leading you to make arbitrary choices, or perhaps choices that are dictated by short-term thinking (obviously, short-term thinking requires less computation than long-term thinking). Say at the end of a hard day's labor, you have $100 in your pocket. If you save $50 every day for a year, you will have saved enough money to accumulate interest, and perhaps to help you tide over bad times. But c'mon. You're incapable of that computation. Your orbitofrontal cortex is screaming at you to save some money at least.
You've been through terrible times, and if you'd saved in the past, you'd have been so much better off. You know that you should save money. You've definitely been burned enough times to know that. But you cannot hear your brain screaming over the current. You'd rather go to the bar right now and drink it all up. Tomorrow is another day.

What about self-destructive behaviour? What if the lack of will power is basically a lack of computational capabilities? This is perhaps getting into speculative territory. But these are burning questions that can be answered using experimental evidence similar to that described above.

The analysis above differs from the paper in one significant aspect: in the case of depression and poverty, there's a current that is always running through the orbitofrontal cortex. Hence, our brain, which is now incapable of computing and hence of making a good choice, makes a random choice. However, in the experiment, the current is running through Tim's brain only when the first choice is being shown. In effect, the current interferes with Tim's ability to register or analyze the first choice properly. Hence, he starts choosing the second choice, which he can at least perceive properly (even though he may not like the second choice per se). If current through the orbitofrontal cortex does indeed decrease computational capabilities, then this current is causing Tim to be unable to carry out the computation needed to register the first choice, leading him to go for the second choice, which he has registered better.

It would be interesting to see an experiment in which Tim has a clear preference for choice A, and a current is running through his brain when both choices are presented. Will this randomize his choices between A and B? That would then provide supporting evidence for my brain-current theory of depression and poverty.

References

1. "Values encoded in orbitofrontal cortex are causally related to economic choices" by Ballesta, Shi, Conen and Padoa-Schioppa